STAY IN THE KNOW - PEACE OF MIND FOR YOUR MACHINED PRODUCTS
Manufacturing doesn't have to be unpredictable anymore
VALUES
We don’t pretend to be perfect, but we make an honest effort to be the type of people we’d want to work with personally. In early 2008, we sat down and came up with a detailed list of exactly what being a “teammate” means to us.
Please click here to see the set of standards we apply to being a “teammate”. It’s not just a document; it’s a set of qualities that we refer to often, from the start of our Monday meetings, to our hiring, to our teammate reviews and counseling sessions.
Our turnover rate is uncommonly low, partly because we discuss our values early in the hiring process and work to hire new teammates who share them. Our folks tend to stay with us for a long time, keep us productive and pleasant to work with, and help give Bowden a positive, family atmosphere. Of course, anybody can say that, but we work to actually show our values by the way we do business.
1. Field of the Invention
The present invention is concerned with a two-web, cylinder printing press especially adapted for make ready and proofing of gravure cylinders or rollers, and for printing of high quality samples of finished products therefrom without the necessity of employing full-scale production equipment. More particularly, it is concerned with such a press which preferably includes a pair of shiftable impression rollers disposed proximal to the gravure roller, with means for shifting of the impression rollers alternately into a web-defining relationship with the gravure roller.
2. Description of the Prior Art
During high quality gravure printing operations, it is generally necessary to "make ready" and proof individual gravure cylinders or rollers which are received from an intaglio engraver. Such proofing and make ready generally involves printing with the roller and making necessary adjustments of color strengths, color hue, half tone solid printing relationships and other parameters, until the printer is satisfied that the roller in question meets established gravure printing standards. As can be appreciated, in some instances considerable effort must be expended in these preliminary operations for a given gravure roller, particularly where exacting printing criteria for the finished product have to be met.
In full-scale production runs using gravure equipment, normally a series of gravure printing stations (e.g., four or five) is placed in an in-line relationship, and a web to be printed is passed in serial order through the respective gravure stations. In full-scale production equipment of this type where extensive footages of finished product are contemplated, make ready and proofing of the respective gravure rollers does not represent a significant economic drawback. That is to say, the printer can expend whatever quantity of web is required for insuring that all of the gravure rollers are properly operating to give desired printing results, inasmuch as the extent of the final production justifies the effort and expense. Thus, in full-scale production work, gravure roller make ready and proofing is accepted as a necessary preliminary to final printing.
In many cases however, the printer wishes to prepare samples of finished product of relatively small footages, e.g., one to five thousand lineal feet of printed web. In such cases the time and expense involved in gravure roller proofing becomes, proportionately speaking, very high. As will be readily understood, the time involved in make ready and proofing for a contemplated production run of one thousand lineal feet is the same as that required for a full-scale production run of many hundreds of thousands of feet. Accordingly, there is a real reluctance on the part of printers to interrupt full production schedules for the purpose of making small samples for salesmen or the like.
In response to the above problem, it has been known to employ a single station gravure press separate from full-scale production equipment for gravure roller make ready, proofing and printing of samples. Of course, such a scheme requires that each respective gravure roller be made ready and proofed, the required amount of finished printing therefrom accomplished, and the web rewound so that the operation can be repeated until completion. However, this procedure makes it extremely difficult to predict the number of lineal feet of finished printing for each color to run in order to achieve the final desired footage. For example, if during the make ready and proofing of the first gravure roller three thousand feet of web is expended as waste, and thereafter two thousand feet of web are printed as final product, there is no way of knowing whether the second or subsequent rollers can be made ready and proofed using the three thousand feet of web previously allotted in connection with the first roller. If more than three thousand feet is expended, a portion of the "finished" printing from the first roller is destroyed because the printing from the later roller is not up to standard. Thus, it has been the practice to err on the side of liberality in printing footages from the respective gravure rollers, in order to avoid not producing enough of the sample product; of course, this represents in many cases a significant waste of material.
It has also been suggested in the past to provide a completely separate make ready unit and proofing unit for gravure rollers. Such units have presented serious technical problems, however, primarily because the printing conditions established during the make ready and proofing phase are substantially different from those encountered during actual production. To give but one example, prior make ready and proofing units may operate at relatively slow speeds (e.g., one hundred fifty lineal feet per minute), whereas actual production runs are much faster, on the order of eight hundred feet per minute. Such a disparity can and often does represent a considerable difference in printing quality between the make ready and proofing phase, and full production. Hence, prior equipment of this type has not provided a real solution to the problems outlined above.
830 F.2d 770
56 USLW 2206, 1987-2 Trade Cases ¶ 67,728, 4 U.S.P.Q.2d 1568
FMC CORPORATION, Plaintiff-Respondent, v. GLOUSTER ENGINEERING CO., et al., Defendants, and M. Lehmacher & Sohn GmbH Maschinenfabrik (Lemo), Reifenhauser GmbH & Co. Maschinenfabrik, and Stiegler GmbH Maschinenfabrik, Defendants-Petitioners.
Nos. 87-8037, 87-8038.
United States Court of Appeals, Seventh Circuit.
Submitted July 1, 1987. Decided Oct. 1, 1987. Rehearings and Rehearings En Banc Denied Nov. 19, 1987. As Amended Dec. 21, 1987.
Before POSNER, EASTERBROOK, and MANION, Circuit Judges.
POSNER, Circuit Judge.
1
Three defendants in a suit pending in the federal district court in Massachusetts have asked us (in two applications) for permission under 28 U.S.C. Sec. 1292(b) to appeal from the district court's order refusing to dismiss the complaint as to them. The request raises a question of first impression: whether, when the panel on multidistrict litigation transfers a case for consolidated pretrial proceedings (see 28 U.S.C. Sec. 1407) and the district court to which the case is transferred makes an order and certifies it for appeal under section 1292(b), the court of appeals for the circuit in which the case was originally filed has jurisdiction to hear the appeal.
2
FMC Corporation filed this suit, an antitrust suit against several companies some of which are German, in the Northern District of Illinois. The German companies moved to dismiss the case against them on the ground that they did not transact business in the Northern District of Illinois. While the motion was pending, the panel on multidistrict litigation transferred FMC's lawsuit, for pretrial proceedings only, to the District of Massachusetts, there to be consolidated with the pretrial proceedings in a suit for patent infringement that FMC had brought against one of the domestic defendants in the antitrust suit. The district judge in Massachusetts denied the German defendants' motion to dismiss them from the antitrust case. He ruled that the Clayton Act's requirements for personal jurisdiction (Sec. 12, 15 U.S.C. Sec. 22) are satisfied if a defendant transacts business anywhere in the United States; it needn't be in the district where the suit is brought, as the defendants had argued. The judge certified his order denying the motion to dismiss for an immediate appeal under 28 U.S.C. Sec. 1292(b). The German defendants have asked us to accept the appeal. At our request the parties have briefed the question whether we have jurisdiction of the appeal.
3
Section 1292(b) provides that
When a district judge, in making in a civil action an order not otherwise appealable under this section, shall be of the opinion that such order involves a controlling question of law as to which there is substantial ground for difference of opinion and that an immediate appeal from the order may materially advance the ultimate termination of the litigation, he shall so state in writing in such order. The Court of Appeals which would have jurisdiction of an appeal of such action may thereupon, in its discretion, permit an appeal to be taken from such order, if application is made to it within ten days after the entry of the order.... [Emphasis added.]
4
Section 1294 provides so far as pertinent to this case that "appeals from reviewable decisions of the district and territorial courts shall be taken to the courts of appeals as follows: (1) From a district court of the United States to the court of appeals for the circuit embracing the district...."
5
The order denying the German defendants' motion to dismiss the case was made by the district court in Massachusetts; so section 1294, read in isolation, would require that any appeal from that order be taken to the First Circuit rather than to us. However, the italicized words in section 1292(b) point the other way. For after pretrial proceedings are over, the antitrust case, unless terminated at the pretrial stage, will be remanded to the Northern District of Illinois, see 28 U.S.C. Sec. 1407(a), and we will have jurisdiction over the "appeal of such [civil] action." Added in 1984 by section 412(a) of the Technical Amendments to the Federal Courts Improvement Act of 1982, Public Law 98-620, 98th Cong., 2d Sess., 98 Stat. 3362, the italicized language was intended to make clear that appeals under section 1292(b) in patent-infringement cases would go to the Federal Circuit, which has exclusive appellate jurisdiction in such cases, rather than, as section 1294 read literally would have required, to the court of appeals covering the district in which the case was pending. See H.R.Rep. No. 619, 98th Cong., 2d Sess. 4 (1984), U.S.Code Cong. & Admin.News 1984, at p. 5708.
6
Although the multidistrict statute does not say which court of appeals has jurisdiction over appeals from orders by the district court to which a case is transferred, most cases hold that it is the court of appeals covering the transferee court rather than the one covering the transferor court. See, e.g., Allegheny Airlines, Inc. v. LeMay, 448 F.2d 1341, 1344 (7th Cir.1971) (per curiam); Astarte Shipping v. Allied Steel & Export Service, 767 F.2d 86 (5th Cir.1985) (per curiam); but see Meat Price Investigators Ass'n v. Spencer Foods, 572 F.2d 163, 164 (8th Cir.1978). Cf. In re Corrugated Container Antitrust Litigation, 662 F.2d 875, 879-81 (D.C. Cir.1981). Indirect support for this conclusion comes from the statute's provision on venue for review by extraordinary writ of post-transfer orders issued by the multidistrict panel. See 28 U.S.C. Sec. 1407(e). However, none of the cases except LeMay involves section 1292(b), which since the 1984 amendment has contained language suggesting a different conclusion for appeals under that section; and LeMay was decided long before the amendment. The amendment is not dispositive. Although written in general terms, it was responding to a specific and distinguishable problem--the anomaly of a system where the Federal Circuit would exercise exclusive jurisdiction over all appeals in patent-infringement suits except appeals under section 1292(b). Nevertheless, Congress's choice of general language may authorize us to deal with a lesser anomaly.
7
Confining appellate jurisdiction to the court of appeals for the region where the transferee court is located makes a great deal of sense in every situation we can think of--except possibly an appeal under section 1292(b). The court of appeals for that region is more convenient to the parties and knows the district judges. And many of the issues that arise in pretrial proceedings (and it is only for the pretrial stage of litigation that a transfer under section 1407, that is, an involuntary transfer, is allowed) will involve the practices and procedures of the local district court. Moreover, since most litigation never gets beyond the pretrial stage and most pretrial orders are not appealable, there is relatively little likelihood that appellate jurisdiction will be divided between two circuits if the court of appeals for the transferee circuit has jurisdiction over appeals taken while the case is in the transferee district court. There is some likelihood, admittedly. The court of appeals for the transferee district might reverse a judgment of dismissal (which the transferee court can enter, see last sentence of section 1407(a)) and order the case tried; and any appeal from the judgment entered after trial would be heard by the court of appeals for the transferor court, assuming that the case had been (but, as we shall see, it might not have been) returned to that court for trial.
8
The situation when appeal is taken under section 1292(b) is rather special. The usual case in which permission to appeal under that statute is requested and likely to be granted is where an immediate appeal may head off a trial. The discretionary judgment that the court of appeals must make is whether to hear the appeal or let the trial go forward and decide the issue later, on appeal (if any is taken) from the final judgment. The court that will have jurisdiction over any appeal from a judgment entered after trial is in a better position to make a responsible choice between appeal now and appeal later than a court that will not hear an appeal later because if the case is tried it will lose appellate jurisdiction. And since appeals under 1292(b) are permitted only when they present controlling questions of law--as to which appellate review is plenary--the reputation of the district judge for care and skill in resolving factual disputes and making the many discretionary determinations confided to trial judges--a reputation better known to the court of appeals for the transferee circuit than to the court of appeals for the transferor circuit--is not an important factor in deciding the appeal.
9
These considerations are substantial, but we consider them outweighed by others:
10
1. In part because most cases wash out one way or the other before trial, in part because the parties often consent to trial in the transferee court (which will have become familiar with the case during the course of the pretrial proceedings), few cases transferred under section 1407 are ever retransferred to the transferor court; a study some years ago found that fewer than 5 percent were retransferred. See Weigel, The Judicial Panel on Multidistrict Litigation, Transferor Courts and Transferee Courts, 78 F.R.D. 575, 583 and n. 62 (1978). The Clerk of the Judicial Panel on Multidistrict Litigation tells us that the figure is higher today, but how much higher is unclear. And of those cases that are retransferred only a fraction--we suspect a small one, but have no figures--generate appeals in the transferor circuit. The argument that the court with ultimate appellate jurisdiction should decide whether to accept an interlocutory appeal thus appears to have little practical significance in the setting of section 1407.
11
2. If the transferee court enters an order that affects more than one of the consolidated cases, and the cases affected had been transferred from different circuits, it is unclear which transferor circuit would have jurisdiction of an appeal from the order.
12
3. The previous point illustrates, what is anyway plain, that a rule which gives the transferee circuit exclusive appellate jurisdiction over all orders issued by the transferee district court is simple to administer and free from uncertainty, and these are important advantages in a rule governing jurisdiction. The statutory exception for 1292(b) orders in patent cases is necessary to carry out Congress's wish to have all appeals in cases arising under the patent laws decided by a putatively expert body, the Court of Appeals for the Federal Circuit, and need not be interpreted to confer jurisdiction of 1292(b) appeals on courts of appeals for transferor circuits generally.
13
We conclude that we do not have jurisdiction, and the applications for permission to appeal are therefore DISMISSED.
14
The applicants have asked us in the alternative to exercise our power under 28 U.S.C. Sec. 1631 to transfer their applications to the court of appeals that does have jurisdiction. That we believe is the First Circuit. It is true that the Federal Circuit has exclusive jurisdiction of appeals in cases arising in whole or part under the patent laws, 28 U.S.C. Secs. 1295(a)(1), 1338, and that the case with which the pretrial proceedings in this suit have been consolidated in the District of Massachusetts is a patent case. However, the German defendants' motion to dismiss pertains only to the antitrust case, and not to the patent case, to which they are not even parties. The cases were consolidated for pretrial proceedings only; and if the antitrust case were to be tried separately, no appeal would lie to the Federal Circuit. Recall that the amendment to section 1292(b) made in 1984 was intended to give the Federal Circuit jurisdiction over appeals under section 1292(b) whenever it had ultimate jurisdiction over the action. It does not have ultimate jurisdiction over the antitrust case; the consolidation of the pretrial proceedings in that case with the pretrial proceedings in a patent case is adventitious.
15
It is possible that consolidation for pretrial purposes might be thought consolidation for purposes of appellate jurisdiction over orders made during pretrial proceedings. See Sandwiches, Inc. v. Wendy's Int'l, Inc., 822 F.2d 707 (7th Cir.1987); Huene v. United States, 743 F.2d 703 (9th Cir.1984). The First Circuit's view on this issue, as it happens, is to the contrary, In re Massachusetts Helicopter Airlines, Inc., 469 F.2d 439 (1st Cir.1972); the Federal Circuit's view is unknown. We do not want these applications for appeal to wander around the circuits like the Ancient Mariner. As we know that the First Circuit will assume jurisdiction of the applications (whether it will grant them is a separate question on which we express no view), transferring them to that circuit will assure that they have a home.
16
The applications for permission to appeal under section 1292(b) are ordered TRANSFERRED to the First Circuit.
Warming increases methylmercury production in an Arctic soil.
Rapid temperature rise in Arctic permafrost impacts not only the degradation of stored soil organic carbon (SOC) and climate feedback, but also the production and bioaccumulation of methylmercury (MeHg) toxin that can endanger humans, as well as wildlife in terrestrial and aquatic ecosystems. Currently little is known concerning the effects of rapid permafrost thaw on microbial methylation and how SOC degradation is coupled to MeHg biosynthesis. Here we describe the effects of warming on MeHg production in an Arctic soil during an 8-month anoxic incubation experiment. Net MeHg production increased >10 fold in both organic- and mineral-rich soil layers at warmer (8 °C) than colder (-2 °C) temperatures. The type and availability of labile SOC, such as reducing sugars and ethanol, were particularly important in fueling the rapid initial biosynthesis of MeHg. Freshly amended mercury was more readily methylated than preexisting mercury in the soil. Additionally, positive correlations between mercury methylation and methane and ferrous ion production indicate linkages between SOC degradation and MeHg production. These results show that climate warming and permafrost thaw could potentially enhance MeHg production by an order of magnitude, impacting Arctic terrestrial and aquatic ecosystems by increased exposure to mercury through bioaccumulation and biomagnification in the food web.
Craigslist ad poster needed (service section)
Looking for an experienced CRAIGSLIST POSTER to post initially 100 ads DAILY in 35 different cities in service section
We will:
• Provide you with all ad titles, description, and pictures
o Craigslist titles to post with each ad
o Specific locations, city, state and category for each ad
o We will send you a new schedule and ad content weekly
The ads need to be posted in 35 different cities.
Postings must be between 11am- 6pm Central Standard Time 7 days a week
You must send a daily report via email of all Craigslist ad links posted so that we can verify that the ads are active on Craigslist and that the ads are not ghosted, flagged, removed, or have otherwise become inactive. You will not be paid for ghosted, flagged, removed, or inactive ads.
You will not be paid each day we do not receive a Craigslist email daily report.
Ads must be Live on Craigslist for at least 24 hours.
You’re responsible for:
• Posting a minimum of 100 total ads per day
• Posting ads between 9 am and 1:00 pm US Central Time (i.e., Texas time)
• Providing any and all email or Craigslist accounts
• Providing any and all PVAs and IPs to post the ads
• Posting each ad in the Craigslist area specified by the link associated with each ad on the schedule in the spreadsheet.
12 freelancers have made an average bid of $38 for this job
We are keen to render our services with reference to your requirement.
Your work is as important to us as it is to you.
We understand that seamless communication with our clients is of prime importance for the succes…
Dear Hiring Manager,
I am very interested in your job offer & want to start work immediately. I am fluent in English writing & speaking. I also have an eye for detail & I am very efficient in my work, and I also have go…
Q:
How to dismiss modal view controller and refresh previous view controller
I have a simple QR code scanner (using AVFoundation). When a QR code is detected, it stops capturing and presents an information view controller over the scanner view controller (not full screen). But when I dismiss the information view controller I can't start capturing again (the appearance methods are not called). Any ideas how to fix it?
Controller A presenting controller B:
let sb = UIStoryboard(name: "customViewAlert", bundle: nil)
let vc = sb.instantiateInitialViewController()!
vc.modalTransitionStyle = .crossDissolve
vc.modalPresentationStyle = .overCurrentContext
present(
vc,
animated: true,
completion: nil
)
Controller A delegation:
extension ViewController: ModalHandler {
func modalDismissed() {
self.captureSession.startRunning()
}
}
Controller B dismiss:
@IBAction func closeButtonTap(_ sender: Any) {
delegate?.modalDismissed()
dismiss(
animated: true,
completion: nil
)
}
A:
You need to set the delegate before presenting. Note that `instantiateInitialViewController()` returns a plain `UIViewController`, so cast it to controller B's concrete class (the type that declares the `delegate` property; `InfoViewController` below is a placeholder for whatever your class is named):
let vc = sb.instantiateInitialViewController() as! InfoViewController
vc.delegate = self
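For reference, here is a minimal sketch of the delegate wiring the snippets above rely on. `ModalHandler` is the protocol named in the question; `InfoViewController` is a placeholder name for controller B's class:

```swift
import UIKit

// Protocol named in the question; declare it somewhere both controllers can see.
protocol ModalHandler: AnyObject {
    func modalDismissed()
}

// Controller B (placeholder class name for the information view controller).
class InfoViewController: UIViewController {
    // `weak` avoids a retain cycle between controller A and controller B.
    weak var delegate: ModalHandler?

    @IBAction func closeButtonTap(_ sender: Any) {
        dismiss(animated: true) { [weak self] in
            // Notify A only after the dismissal finishes, so the capture
            // session restarts once A is actually back on screen.
            self?.delegate?.modalDismissed()
        }
    }
}
```

Calling `modalDismissed()` from the dismissal completion handler (instead of before `dismiss`) guarantees controller A is frontmost when `startRunning()` runs; since `startRunning()` blocks, consider dispatching it to a background queue as well.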
May 16, 2018
ASHRAE MISSISSIPPI
ASHRAE, founded in 1894, is a global society advancing human well-being through sustainable technology for the built environment. The Society and its members focus on building systems, energy efficiency, indoor air quality, refrigeration and sustainability within the industry. Through research, standards writing, publishing and continuing education, ASHRAE shapes tomorrow’s built environment today.
This web page is Copyright 2016 by the Mississippi Chapter of ASHRAE. The chapter does not speak for Society. Reference to manufacturers, representatives, contractors, or consultants within this website is for information only, as the Chapter does not recommend or endorse a particular organization, especially over another similar one.
We start off 2016 with an encouraging update in the ongoing battle over on-premise sales for Georgia breweries. You’ll recall that, mostly via a loophole, brewers in Georgia were allowed to give beers to customers to take home from the brewery as a “souvenir” from a brewery tour. Depending on the souvenir the customer wanted, the price of the tour would vary. Everyone was excited about this new arrangement until the Department of Revenue sent out a Bulletin to suddenly and without warning end the practice. Here is the original story.
Since the bulletin and the ensuing uproar, House Speaker David Ralston has come out in opposition to it, and now, according to the Atlanta Journal-Constitution (which is reporting the hell out of this saga), Senate President David Shafer is urging the Department of Revenue to back off.
From AJC.com:
“To the extent that the regulations you have already promulgated require clarification, I believe that it would be more appropriate for you to do so through the formal rule-making process, with public notice of any proposed new regulations and opportunity for public comment and legislative oversight,” Shafer wrote in a Dec. 23 letter. “Accordingly, I would urge you to withdraw the Bulletin.”
The article goes on to note the significance of this letter considering that, in the past, Shafer has been aligned with the interests of the wholesalers. Head to AJC.com to read the full letter and inside info.
Clintonian recurrence
Politico’s Rachel Bade and Josh Gerstein take a look at the new Clinton email story first reported yesterday by Catherine Herridge. Bade and Gerstein extracted a statement from the Clinton campaign that has a familiar ring:
“This is the same interagency dispute that has been playing out for months, and it does not change the fact that these emails were not classified at the time they were sent or received,” said Clinton Campaign Spokesman Brian Fallon. “It is alarming that the intelligence community IG, working with Republicans in Congress, continues to selectively leak materials in order to resurface the same allegations and try to hurt Hillary Clinton’s presidential campaign. The Justice Department’s inquiry should be allowed to proceed without any further interference.”
Shades of the Vast Right-Wing Conspiracy (as the Daily Caller’s Chuck Ross notes). With the Clintons we have a special case of eternal recurrence. It’s the same thing over and over.
Ken Dilanian adds a telling detail in his NBC News report regarding the TOP SECRET/SAP information sworn to be present in some of the emails on Clinton’s server:
An intelligence official familiar with the matter told NBC News that the special access program in question was so sensitive that McCullough and some of his aides had to receive clearance to be read in on it before viewing the sworn declaration about the Clinton emails.
Dilanian notes that “Clinton’s campaign did not immediately respond to a request for comment.” Even if this detail belies Fallon’s statement to Bade and Gerstein, Clintonian recurrence suggests that the campaign would offer something like it here.
Symptoms such as drooling, excessive tearing from one eye, other vision problems, or paralysis may also occur. Besides a blistering rash and tingling, burning pain, shingles makes the skin extremely sensitive. The chickenpox virus is airborne and spreads easily through coughing or sneezing. A new study shows that the popular genital herpes medication acyclovir does not prevent transmission of the HIV virus, as previously suspected. Here we are in the 21st century, still dealing with an STD (sexually transmitted disease) that continues to plague us.
Yes, it can be painful and discomforting, but in most situations, with time, care, and treatment, it will go away. As with any viral infection in children, the key to a child's speedy recovery lies in early diagnosis and treatment. In the long run, these treatments carry minimal side effects and can be taken in the form of an oral medication or an ointment, or sometimes both. Blisters develop into ulcerated sores that usually heal once the herpes simplex virus has run its course. Use an anesthetic: there are many over-the-counter anesthetics you can use to numb the pain temporarily.
Therefore, it usually only affects one side of the body. How shingles typically occurs: shingles is a virus from chickenpox that lies inactive in the body until something like a weakened immune system or severe stress reactivates it. Exercise: inactivity can cause fatigue and allow stress to fester. The following people should not get the shingles vaccine:. This can also help soothe the pruritus, or itchiness, brought about by the disease process.
Widespread or disseminated herpes zoster can lead to infection of the lung, liver, pancreas, and brain. One of the newest antidepressants, Cymbalta, has been a real godsend for me. The usual adult dose is 800 milligrams 4 times a day for 5 days. Detachment of the retina can also occur, which is extremely serious, as can virus-induced necrosis. It is known that acyclovir can only relieve some of the pain.
1376) + (5 - d - 5)*(-4 + 4 + 1).
-139*d
Expand (-111*d - 6 - 106*d + 247*d)*(-5*d**3 + 0*d**3 + 3*d**3).
-60*d**4 + 12*d**3
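These exercises can be machine-checked with a few lines of plain Python by representing each polynomial as a list of coefficients (index = power of the variable). The helper below is illustrative, not part of the dataset; as a sanity check it reproduces the answer just above, where the two factors simplify to 30*d - 6 and -2*d**3:

```python
def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists [c0, c1, c2, ...]."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            out[i + j] += ca * cb
    return out

# (-111*d - 6 - 106*d + 247*d) simplifies to -6 + 30*d  -> [-6, 30]
# (-5*d**3 + 0*d**3 + 3*d**3)  simplifies to -2*d**3    -> [0, 0, 0, -2]
print(poly_mul([-6, 30], [0, 0, 0, -2]))  # [0, 0, 0, 12, -60], i.e. 12*d**3 - 60*d**4
```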
Expand (-m + 4*m - 2*m)*(-3*m + 3*m + m)*(0 + 11 + 10*m**2 - 12).
10*m**4 - m**2
Expand 340*h**2 + 524*h**2 - 1218*h**2 + h**2 + 0*h + 0*h + (-h + 4*h - 2*h)*(h - 4*h + 2*h) + 2*h**2 - 2*h + 2*h.
-352*h**2
Expand 13*z**3 + 2*z**3 + 0*z**3 + (-5*z + 5*z - z)*(4 - 4 + z**2) + z**3 - 2*z**3 + 3*z**3 + z**3 - 25 + 25.
17*z**3
Expand (-m**4 + 0*m**4 - 4*m**4)*(-372 + 372 - 17*m + m - 1 + 1 + (-2 + 3 - 2)*(2*m + 2 - 2) + 3 - m - 3) + (1 + m - 1)*(-3*m**4 + 2*m**4 - m**4).
93*m**5
Expand 4*d**5 + 5*d**5 - 6*d**5 + (0*d**5 - 2*d**5 + d**5)*(74 - 199 - 187 + 60).
255*d**5
Expand (2*q + 0*q - 5*q)*(-20*q + 10*q + 25*q + (-3 - 4 - 1)*(4*q + 2*q - 5*q)).
-21*q**2
Expand (849 - 377 + 313)*(-q + 0 + 0)*(1 + 0 + 0).
-785*q
Expand (q - 2*q + 2*q)*(-3*q**4 - 4*q**3 + 4*q**3) + 0*q**5 - 5*q**5 - q**5 + (4*q - 2*q - q)*(-4 + 4 + 2*q**4) + 4*q**5 - 8*q**5 - 7*q**5.
-18*q**5
Expand (-1 - 4 + 4)*(-3*v + 3*v - 3*v) + 777 - 72*v - 34*v - 775.
-103*v + 2
Expand (-225*i - 32 + 107*i + 115*i)*(2*i**2 - i**2 + i**2).
-6*i**3 - 64*i**2
Expand (3 - 1 - 1)*(31*g + 27*g - 40*g)*(-2 - 2 + 3)*(0*g**3 - 2*g**3 + 4*g**3)*(9*g - 2*g + 9*g).
-576*g**5
Expand (26 - 26 - 15*n**2)*(-5 + 5 - 5)*(-2 + 2 + n).
75*n**3
Expand (-4 + 2*x + 2 + 5)*(-1 - 1 + 0)*(32 - 32 + 6*x).
-24*x**2 - 36*x
Expand (1 - 3*j - 1)*(-3*j**2 + 2*j**2 - j**2) + 8274*j**2 + 1 + 16*j**3 - 8274*j**2.
22*j**3 + 1
Expand -4 + 4 - a**2 + (-2*a - 2 + 2)*(-1 + a + 1) - 73 + 35 + 38 + 798*a**2.
795*a**2
Expand -146 - r + 147 + 11*r**2 - 13*r**2 + (0*r + 3*r + 0*r)*(-2*r + r + 0*r).
-5*r**2 - r + 1
Expand (-3*y**4 - 3*y + 3*y)*(-10 + 3 + 25)*(1 + 2 + 0).
-162*y**4
Expand (-7 - 8*p + 7)*(3*p - p + 3*p) - 1 - p**2 + 1.
-41*p**2
Expand (-109*g**4 + 606 - 606)*(2*g - 4*g - 13*g).
1635*g**5
Expand (-2 - l + 2)*(-924*l - 29 + 23 + 5 + 1).
924*l**2
Expand (-119*s - 94*s - 83*s)*(1 - 7 + 4)*(-s + 2*s - 3*s).
-1184*s**2
Expand 0*n**2 + n**4 + 0*n**2 + (-4*n**2 + 9 - 9)*(-n - 2*n + 0*n)*(7*n - n - 2*n).
49*n**4
Expand -j**4 - 4*j + 4*j + j**4 + j**3 - j**3 - 3*j**3 + 3*j**3 - j**4 + (j**4 - 2*j**4 + 0*j**4)*(-3 - 3 + 4) + 18*j**4 + 318*j**3 + 322*j**3 - 606*j**3.
19*j**4 + 34*j**3
Expand 61 - 8*v**2 - 61 + (2*v + v - 5*v)*(-5*v + 12*v - 4*v).
-14*v**2
Expand (0*g + g - 4*g)*(1 - 2 - 1) + 0*g - 3*g + g + 6*g + 0*g - g.
9*g
Expand -2*g**3 - g**3 + 5*g**3 + (2*g**2 + 3*g**2 - 4*g**2)*(-4 + 4 + 2*g) + (-2*g**2 - 8*g**2 + 17*g**2)*(-5*g + 4*g + 3*g).
18*g**3
Expand ((0*h**3 - h**3 + 3*h**3)*(-h - h - h) + 15*h**4 + 26 - 26)*(4*h + 2*h - 4*h).
18*h**5
Expand 5*v**4 + 0*v**4 - 6*v**4 + (v**3 + v - v)*(3 - 3 + v) + 4*v**4 + 0*v**4 - 3*v**4 + 144*v**4 - 53*v**4 - 7*v**4.
85*v**4
Expand ((-5*l**2 - l**2 + 4*l**2)*(-3 + 1 + 3) + 4*l**2 - 3*l**2 + 0*l**2 + 6*l**2 - l**2 - l**2)*(13*l**3 + 3*l**3 - 10*l**3).
18*l**5
Expand (3*w**5 + 2*w**5 - 3*w**5)*(-20 - 65 + 37) + (2*w**2 - 3*w**2 + 2*w**2)*(-4*w**3 - 3*w**3 + 0*w**3).
-103*w**5
Expand (n + 3 - 3)*(2*n - 2*n + 3*n) - 16*n**2 + 19*n**2 + 3*n - n.
6*n**2 + 2*n
Expand (-4 + 0 + 2)*(-3*h - 20*h - h) + 4*h - 3*h - 3*h - 2*h + 9*h + 0*h.
53*h
Expand (-1 + 0 + 2)*(-157 + 157 - 100*i)*(-4 + 0*i**3 - 6*i - 2*i**3 + 6).
200*i**4 + 600*i**2 - 200*i
Expand (21*n**2 - 171*n + 84*n + 87*n)*(9*n**3 - 18*n**3 + 7*n**3).
-42*n**5
Expand (-10*i**2 + 88*i - 497*i**2 - 88*i)*(i**2 + 1 - 1).
-507*i**4
Expand (3 - 1 + 5*q - 2*q)*(-3 + 37 - 11).
69*q + 46
Expand (-5*n**3 + 0*n**3 + 3*n**3)*(-1 + 0 + 3 + (1 - 6 + 2)*(-1 + 6 - 1))*(-3 + 1 + 3).
20*n**3
Expand (-7*y - 4*y - 3*y)*(-3*y + 3*y + 3*y)*(-2*y + 3 - 3 + (-5 + 2 + 4)*(3 - 2*y - 3) + 0*y - 2*y + 7*y)*(3 - 3 - y**2).
42*y**5
Expand (-188*z**2 - 199*z**2 - 117*z**2 - 497*z**2)*(0*z + z**3 + 0*z).
-1001*z**5
Expand (8*k**2 + 34 - 34)*(-2*k - 9*k + 3*k).
-64*k**3
Expand (18*o + 45*o - 31*o)*(-17*o + 6*o - 8*o).
-608*o**2
Expand (5 - 4 - 1577*z + 1727*z)*(-z + 0 + 0).
-150*z**2 - z
Expand (-19*v**3 - 100*v**3 + 15*v**3)*(-v + 5*v - 2*v).
-208*v**4
Expand (13 - 13 + 33*p**2)*(-3*p + 4*p + 4*p).
165*p**3
Expand (-3*k**2 + 5*k**2 - 6*k**2)*(k + 0*k + 0*k - 7*k - 3 + 1 + 6 + (-1 + 1 - k)*(5 - 2 - 1)) + 0*k + 3*k**3 + 0*k.
35*k**3 - 16*k**2
Expand (0*h**2 - 2*h**2 + 0*h**2)*(1 + 1 - 4) + 4*h + 2*h**2 - 4*h - 17*h**2 - 6*h**2 - 2*h**2.
-19*h**2
Expand (4*t**2 - t**2 + 5*t**2)*(76*t + 26*t - 12*t).
720*t**3
Expand (-b + 3*b - 3*b)*((3*b**2 - b**2 + 0*b**2)*(0*b**2 + b**2 - 2*b**2) + 7*b**4 - 4*b**4 - 4*b**4).
3*b**5
Expand (470*c + 695*c - 733*c)*(2 + 1 - 5) + (0 - 1 + 2)*(-1 - c + 1) - 2*c + 5*c - 4*c.
-866*c
Expand (57 + 34 - 66)*(0 - 2 + 3)*(-2*d - 1 + 1 + (3 - 3 + d)*(-1 + 0 + 0))*(-d - 3*d + d).
225*d**2
Expand (-3*z + 4*z + 3*z)*(37*z - 6*z + 56*z)*(z**2 + z**2 - 4*z**2).
-696*z**4
Expand (182*k**2 + 17*k**4 - 182*k**2)*(6 - 5 - 2) + (4*k + 2*k - 4*k)*(3*k**3 - 2*k**3 + 0*k**3).
-15*k**4
Expand (-2 - 1 + 1)*(n - n - n**3) + 93*n + 56*n + n**3 - 16*n.
3*n**3 + 133*n
Expand (1 - 2 + 3)*(42 - 93 + 38)*(-d**4 - 2*d**4 + 0*d**4).
78*d**4
Expand -2*g**5 + 2*g**5 + 2*g**5 + (0*g**2 + g**2 - 2*g**2)*(g**3 - g**3 + g**3) + 287*g**4 + 27*g**5 + 61 - 59 - 30*g**5.
-2*g**5 + 287*g**4 + 2
Expand -4*y**4 - 29 + 29 + (2 - 5 + 2)*(-3*y**3 + y**4 + 3*y**3) - 14*y**4 - 18*y**4 - 20*y**4.
-57*y**4
Expand (176*g**3 + 228*g**3 - 333*g**3)*(-g + 6*g - 3*g).
142*g**4
Expand (-3 + 3 + 3*q**2)*(-3*q**2 + 2*q**2 + 2*q**2) - q**4 + 0*q**4 + 0*q**4 + (-2*q - 2*q**2 + 2*q)*(0*q - q**2 + 0*q) + 4*q + 6*q**4 - 2*q + 9*q**4.
19*q**4 + 2*q
Expand -6*i**4 + 37*i**4 + 27*i**4 + (-i**2 + i**2 - 6*i**2)*(0*i**2 + 2*i**2 + 0*i**2) - 2*i**2 + i**4 + 2*i**2.
47*i**4
Expand (-3 + 4 - 3)*(1 - 3 + 3)*((4*m - 2*m - 3*m)*(2*m**2 + m**2 - 5*m**2) - 5*m**3 + 3*m**3 + m**3 + 1212*m**2 - 325*m**3 - 1212*m**2).
648*m**3
Expand 1 + y**2 - 1 - 7 + 3*y**2 + 7 + 2*y**2 - 2*y + 2*y + (-y - 2 + 2)*(0*y - 2*y + 0*y) - y**2 + 0 + 0 - 3*y**2 + 2*y**2 + 3*y**2 + 7*y**2 + 5*y**2 + 5*y**2.
26*y**2
Expand -3 + 6 - 5 - n + (18 + 2 - 3)*(-n - 3*n + 2*n).
-35*n - 2
Expand (8 - 27 + 11)*(-5*j**4 + 7*j**2 - 7*j**2) - 5*j**4 + 2*j**4 + j**4.
38*j**4
Expand 3*k**3 + k**3 + 15*k**3 - 3 - k**3 + 3 + (2*k - 4*k + 4*k)*(-2*k**2 - 2*k**2 + 7*k**2).
24*k**3
Expand (4*q**3 - 4*q**3 - q**4)*(6*q + 3 - 3) + 2*q**4 - 2*q**4 - 2*q**5 + (-4*q + 2*q**2 + 4*q)*(-2*q**2 + 2*q**2 - q**3) + 13*q**3 + 42*q**5 - 13*q**3.
32*q**5
Expand 2*u**2 - 1403528*u**4 + 1403528*u**4 - 912 + 2*u**5 + (1 - 1 + 1)*(2*u**5 + 3*u**5 - 4*u**5) - u**2 - 2*u**5 + u**2.
u**5 + 2*u**2 - 912
Expand (1 - 6 + 4)*(16 + 0 + 14)*(1 - 2*t - 1).
60*t
Expand 2*u**2 - 3*u**2 + 2*u**2 + (-2*u + 14*u + 2*u)*(-32 + 32 - 3*u).
-41*u**2
Expand (2*f**4 - 2*f**4 + 2*f**4)*(-4 - 11 + 4 + (0 + 3 - 5)*(0 - 2 + 0)).
-14*f**4
Expand (-u + u - 3*u)*(-u + 4*u - 5*u)*(u + 10*u - 7*u)*(4*u - 5*u + 0*u).
-24*u**4
Expand (1 + l - 1 + (-3*l + 2*l - l)*(4 + 1 - 3))*(9*l**3 - 49*l**3 + 8*l**3).
96*l**4
Expand -3*s + 4*s + 0*s + (4 + 2 - 1)*(-3*s + 2*s + 2*s) + (3*s - s + 0*s)*(-1 - 1 + 0).
2*s
Expand (-z - 2*z + z)*(7 - 10*z - 7)*(3 + 0 + 0)*(2*z + 0*z - 3*z).
-60*z**3
Expand (-1075 + 253*b - 27*b + 1075)*(1 - 1 + 1).
226*b
Expand (3*d - d - d)*(5 - 1 - 2) + 30 - 30 + 13*d + (-1 - 4 + 3)*(-4*d + 4*d + 2*d) + 2*d + 3*d - 4*d.
12*d
Expand (-3*w**2 + w**2 - 2*w**2 + 5)*(-71*w + 71*w - 3*w**3).
12*w**5 - 15*w**3
Expand (2*g - 4*g + g)*(1 + 4 - 6)*(2*g + g - 5*g)*(4 - 1 - 2).
-2*g**2
Expand (2*p**2 - 2*p**2 + p**2)*((17 - 17 + 5*p)*(p - p + p) - 3*p**2 + 0*p**2 + 0*p**2 + p**2 - p**2 + p**2).
3*p**4
Expand (-2 + 2 + 535*m - 346*m)*(0 + 0 - 1).
-189*m
Expand f - 2*f - f + (1 - 2 - 2)*(1 + 0 - 4)*(f - 8*f + 10*f).
25*f
Expand (59 - 42 + 73)*(2*d + 0 + 0)*(4*d + 3 + 0*d - 1).
720*d**2 + 360*d
Expand -357*k + 135*k - 60*k + 3*k - 7*k + 2*k - 1 + 1 - k + (-4 + 1 + 4)*(3*k + 3*k - 4*k).
-283*k
Expand (681*p - 681*p - 26*p**3)*((-5*p + p + 3*p)*(-6 + 1 + 3) + 2*p + 1 - 1).
-104*p**4
Expand (3*l - l**3 - 3*l)*(16 + 2*l**2 - 6 - 3*l**2).
l**5 - 10*l**3
Expand (14*p - 3*p + 19*p)*(4*p**3 - 4*p**3 - 2*p**3) + ((-3*p**2 - 2*p**3 + 3*p**2)*(5 + 0 - 4) - 2*p**2 + p**3 + 2*p**2)*(p - 3*p + 5*p).
-63*p**4
Expand (s**2 + 0*s**2 - 3*s**2 + (-s + 2*s - 2*s)*(2*s - 3*s + 2*s) - 13*s**2 + 8*s**2 + 11 |
TREKS AND TRAILS
THE RAINFOREST AND VEGETATION
Bako contains an incredible variety of plant species and vegetation types, and this is one of the park's attractions. At Bako it is possible to see every type of vegetation found in Borneo. Twenty-five distinct types of vegetation form seven complete eco-systems – Beach Vegetation, Cliff Vegetation, Kerangas or Heath Forest, Mangrove Forest, Mixed Dipterocarp Forest, Padang or Grasslands Vegetation and Peat Swamp Forest. It is easy to explore these eco-systems via the jungle trails. The contrasts are so distinct that you do not have to be a scientist to notice the differences. Furthermore, most of the different vegetation types are found close to the park HQ at Telok Assam.
# Andi Chandler <[email protected]>, 2016. #zanata
# Andi Chandler <[email protected]>, 2017. #zanata
# Andi Chandler <[email protected]>, 2018. #zanata
# Andi Chandler <[email protected]>, 2020. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.messaging\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-04-15 22:07+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2020-04-16 12:41+0000\n"
"Last-Translator: Andi Chandler <[email protected]>\n"
"Language-Team: English (United Kingdom)\n"
"Language: en_GB\n"
"X-Generator: Zanata 4.3.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
msgid ""
"(see `bug 1800957 <https://bugs.launchpad.net/oslo.messaging/"
"+bug/1800957>`_)."
msgstr ""
"(see `bug 1800957 <https://bugs.launchpad.net/oslo.messaging/"
"+bug/1800957>`_)."
msgid "*conn_pool_min_size* (default 2)"
msgstr "*conn_pool_min_size* (default 2)"
msgid "*conn_pool_ttl* (defaul 1200)"
msgstr "*conn_pool_ttl* (default 1200)"
msgid "*retry* (default=-1)"
msgstr "*retry* (default=-1)"
msgid "*ssl_client_cert_file* (default='')"
msgstr "*ssl_client_cert_file* (default='')"
msgid "*ssl_client_key_file* (default='')"
msgstr "*ssl_client_key_file* (default='')"
msgid "*ssl_client_key_password* (default='')"
msgstr "*ssl_client_key_password* (default='')"
msgid "11.0.0"
msgstr "11.0.0"
msgid "12.0.0"
msgstr "12.0.0"
msgid "5.17.3"
msgstr "5.17.3"
msgid "5.20.0"
msgstr "5.20.0"
msgid "5.24.0"
msgstr "5.24.0"
msgid "5.24.2"
msgstr "5.24.2"
msgid "5.26.0"
msgstr "5.26.0"
msgid "5.27.0"
msgstr "5.27.0"
msgid "5.30.2"
msgstr "5.30.2"
msgid "5.30.8"
msgstr "5.30.8"
msgid "5.33.0"
msgstr "5.33.0"
msgid "5.34.1"
msgstr "5.34.1"
msgid "5.35.5"
msgstr "5.35.5"
msgid "5.6.0"
msgstr "5.6.0"
msgid "6.0.0"
msgstr "6.0.0"
msgid "6.2.0"
msgstr "6.2.0"
msgid "7.0.0"
msgstr "7.0.0"
msgid "8.0.0"
msgstr "8.0.0"
msgid "8.1.3"
msgstr "8.1.3"
msgid "9.0.0"
msgstr "9.0.0"
msgid "9.3.0"
msgstr "9.3.0"
msgid "9.5.0"
msgstr "9.5.0"
msgid ""
"A bug in the ``amqp`` python library can cause the connection to the "
"RabbitMQ broker to hang when using SSL/TLS. This results in frequent errors "
"such as this::"
msgstr ""
"A bug in the ``amqp`` python library can cause the connection to the "
"RabbitMQ broker to hang when using SSL/TLS. This results in frequent errors "
"such as this::"
msgid ""
"Add get_rpc_transport call to make the API clear for the separation of RPC "
"and Notification messaging backends."
msgstr ""
"Add get_rpc_transport call to make the API clear for the separation of RPC "
"and Notification messaging backends."
msgid "Bug Fixes"
msgstr "Bug Fixes"
msgid ""
"Change the default value of RPC dispatcher access_policy to "
"DefaultRPCAccessPolicy."
msgstr ""
"Change the default value of RPC dispatcher access_policy to "
"DefaultRPCAccessPolicy."
msgid "Configuration param 'retry' is added. Default is -1, indefinite"
msgstr "Configuration param 'retry' is added. Default is -1, indefinite"
msgid "Critical Issues"
msgstr "Critical Issues"
msgid "Current Series Release Notes"
msgstr "Current Series Release Notes"
msgid "Default ttl is 1200s. Next configuration params was added"
msgstr "Default TTL is 1200s. The following configuration params were added"
msgid ""
"Deprecate get_transport and use get_rpc_transport or "
"get_notification_transport to make the API usage clear for the separation of "
"RPC and Notification messaging backends."
msgstr ""
"Deprecate get_transport and use get_rpc_transport or "
"get_notification_transport to make the API usage clear for the separation of "
"RPC and Notification messaging backends."
msgid "Deprecation Notes"
msgstr "Deprecation Notes"
msgid "Idle connections in the pool will be expired and closed."
msgstr "Idle connections in the pool will be expired and closed."
msgid ""
"In combination with amqp<=2.4.0, ``oslo.messaging`` was unreliable when "
"configured with TLS (as is generally recommended). Users would see frequent "
"errors such as this::"
msgstr ""
"In combination with amqp<=2.4.0, ``oslo.messaging`` was unreliable when "
"configured with TLS (as is generally recommended). Users would see frequent "
"errors such as this::"
msgid ""
"It is recommended that deployments using SSL/TLS upgrade the amqp library to "
"v2.4.1 or later."
msgstr ""
"It is recommended that deployments using SSL/TLS upgrade the amqp library to "
"v2.4.1 or later."
msgid "Known Issues"
msgstr "Known Issues"
msgid "New Features"
msgstr "New Features"
msgid "Newton Series Release Notes"
msgstr "Newton Series Release Notes"
msgid "Next configuration params was added"
msgstr "The following configuration params were added"
msgid ""
"NoSuchMethod exception will not be logged for special non-existing methods "
"which names end with '_ignore_errors'. Such methods might be used as health "
"probes for openstack services."
msgstr ""
"NoSuchMethod exception will not be logged for special non-existing methods "
"which names end with '_ignore_errors'. Such methods might be used as health "
"probes for OpenStack services."
msgid "Ocata Series Release Notes"
msgstr "Ocata Series Release Notes"
msgid ""
"On rabbitmq, in the past, acknownlegement of messages was done within the "
"application callback thread/greenlet. This thread was blocked until the "
"message was ack. In newton, we rewrote the message acknownlegement to ensure "
"we haven't two threads writting the the socket at the same times. Now all "
"pendings ack are done by the main thread. They are no more reason to block "
"the application callback thread until the message is ack. Other driver "
"already release the application callback threads before the message is "
"acknownleged. This is also the case for rabbitmq, now."
msgstr ""
"On RabbitMQ, in the past, acknowledgement of messages was done within the "
"application callback thread/greenlet. This thread was blocked until the "
"message was acknowledged. In Newton, we rewrote the message acknowledgement "
"to ensure we do not have two threads writing to the socket at the same "
"time. Now all pending acknowledgements are done by the main thread. There "
"is no longer any reason to block the application callback thread until the "
"message is acknowledged. Other drivers already release the application "
"callback threads before the message is acknowledged. This is also the case "
"for RabbitMQ now."
msgid ""
"Operators must switch to setting the transport_url directive in the "
"[DEFAULT] section."
msgstr ""
"Operators must switch to setting the transport_url directive in the "
"[DEFAULT] section."
msgid "Other Notes"
msgstr "Other Notes"
msgid "Pike Series Release Notes"
msgstr "Pike Series Release Notes"
msgid "Prelude"
msgstr "Prelude"
msgid ""
"Projects using any of the optional drivers can use extras to pull in "
"dependencies for that driver."
msgstr ""
"Projects using any of the optional drivers can use extras to pull in "
"dependencies for that driver."
msgid ""
"Projects using the AMQP 1.0 driver may now depend on oslo.messaging[amqp1]. "
"Projects using the Kafka driver may now depend on oslo.messaging[kafka]"
msgstr ""
"Projects using the AMQP 1.0 driver may now depend on oslo.messaging[amqp1]. "
"Projects using the Kafka driver may now depend on oslo.messaging[kafka]"
msgid "Queens Series Release Notes"
msgstr "Queens Series Release Notes"
msgid ""
"RPC call monitoring is a new RPCClient feature. Call monitoring causes the "
"RPC server to periodically send keepalive messages back to the RPCClient "
"while the RPC call is being processed. This can be used for early detection "
"of a server failure without having to wait for the full call timeout to "
"expire."
msgstr ""
"RPC call monitoring is a new RPCClient feature. Call monitoring causes the "
"RPC server to periodically send keepalive messages back to the RPCClient "
"while the RPC call is being processed. This can be used for early detection "
"of a server failure without having to wait for the full call timeout to "
"expire."
msgid ""
"RPCClient now supports RPC call monitoring for detecting the loss of a "
"server during an RPC call."
msgstr ""
"RPCClient now supports RPC call monitoring for detecting the loss of a "
"server during an RPC call."
msgid "Remove deprecated configuration options from multiple drivers."
msgstr "Remove deprecated configuration options from multiple drivers."
msgid ""
"RequestContextSerializer was deprecated since 4.6, and it isn't used by any "
"other project, so we can remove it safely."
msgstr ""
"RequestContextSerializer was deprecated since 4.6, and it isn't used by any "
"other project, so we can remove it safely."
msgid "Retry support for oslo_messaging_notifications driver"
msgstr "Retry support for oslo_messaging_notifications driver"
msgid "Rocky Series Release Notes"
msgstr "Rocky Series Release Notes"
msgid "SSL support for oslo_messaging's kafka driver"
msgstr "SSL support for oslo_messaging's Kafka driver"
msgid "Stein Series Release Notes"
msgstr "Stein Series Release Notes"
msgid ""
"Such issues would typically lead to downstream service timeouts, with no "
"recourse available other than disabling TLS altogether (see `bug 1800957 "
"<https://bugs.launchpad.net/oslo.messaging/+bug/1800957>`_)."
msgstr ""
"Such issues would typically lead to downstream service timeouts, with no "
"recourse available other than disabling TLS altogether (see `bug 1800957 "
"<https://bugs.launchpad.net/oslo.messaging/+bug/1800957>`_)."
msgid ""
"Support for Python 2.7 has been dropped. The minimum version of Python now "
"supported is Python 3.6."
msgstr ""
"Support for Python 2.7 has been dropped. The minimum version of Python now "
"supported is Python 3.6."
msgid ""
"The AMQP driver has removed the configuration options of "
"allow_insecure_clients, username and password from the [oslo_messaging_amqp] "
"section."
msgstr ""
"The AMQP driver has removed the configuration options of "
"allow_insecure_clients, username and password from the [oslo_messaging_amqp] "
"section."
msgid ""
"The Kafa driver has removed the configuration options of kafka_default_host "
"and kafka_default_port from the [oslo_messaging_kafka] section."
msgstr ""
"The Kafka driver has removed the configuration options of kafka_default_host "
"and kafka_default_port from the [oslo_messaging_kafka] section."
msgid "The Pika-based driver for RabbitMQ has been removed."
msgstr "The Pika-based driver for RabbitMQ has been removed."
msgid ""
"The Rabbit driver has removed the configuration options of rabbit_host, "
"rabbit_port, rabbit_hosts, rabbit_userid, rabbit_password, "
"rabbit_virtual_host rabbit_max_retries and rabbit_durable_queues from the "
"[oslo_messaging_rabbit] section."
msgstr ""
"The Rabbit driver has removed the configuration options of rabbit_host, "
"rabbit_port, rabbit_hosts, rabbit_userid, rabbit_password, "
"rabbit_virtual_host rabbit_max_retries and rabbit_durable_queues from the "
"[oslo_messaging_rabbit] section."
msgid "The ZMQ-based driver for RPC communications has been removed"
msgstr "The ZMQ-based driver for RPC communications has been removed"
msgid ""
"The blocking executor has been deprecated for removal in Rocky and support "
"is now dropped in Ussuri. Its usage was never recommended for applications, "
"and it has no test coverage. Applications should choose the appropriate "
"threading model that maps to their usage instead."
msgstr ""
"The blocking executor has been deprecated for removal in Rocky and support "
"is now dropped in Ussuri. Its usage was never recommended for applications, "
"and it has no test coverage. Applications should choose the appropriate "
"threading model that maps to their usage instead."
msgid ""
"The blocking executor has been deprecated for removal in Rocky. Its usage "
"was never recommended for applications, and it has no test coverage. "
"Applications should choose the appropriate threading model that maps their "
"usage instead."
msgstr ""
"The blocking executor has been deprecated for removal in Rocky. Its usage "
"was never recommended for applications, and it has no test coverage. "
"Applications should choose the appropriate threading model that maps to their "
"usage instead."
msgid ""
"The driver support for the ZeroMQ messaging library is removed. Users of the "
"oslo.messaging RPC services must use the supported rabbit (\"rabbit://...\") "
"or amqp1 (\"amqp://...\" )drivers."
msgstr ""
"The driver support for the ZeroMQ messaging library is removed. Users of the "
"oslo.messaging RPC services must use the supported Rabbit (\"rabbit://...\") "
"or amqp1 (\"amqp://...\") drivers."
msgid ""
"The pika driver has been deprecated for removal in Rocky. This driver was "
"developed as a replacement for the default rabbit driver. However testing "
"has not shown any appreciable improvement over the default rabbit driver in "
"terms of performance and stability."
msgstr ""
"The Pika driver has been deprecated for removal in Rocky. This driver was "
"developed as a replacement for the default rabbit driver. However testing "
"has not shown any appreciable improvement over the default rabbit driver in "
"terms of performance and stability."
msgid ""
"The rabbitmq driver option ``DEFAULT/max_retries`` has been deprecated for "
"removal (at a later point in the future) as it did not make logical sense "
"for notifications and for RPC."
msgstr ""
"The RabbitMQ driver option ``DEFAULT/max_retries`` has been deprecated for "
"removal (at a later point in the future) as it did not make logical sense "
"for notifications and for RPC."
msgid "The rpc_backend option from the [DEFAULT] section has been removed."
msgstr "The rpc_backend option from the [DEFAULT] section has been removed."
msgid ""
"The underlying issue is fixed in amqp version 2.4.1, which is now the "
"minimum version that ``oslo.messaging`` requires."
msgstr ""
"The underlying issue is fixed in amqp version 2.4.1, which is now the "
"minimum version that ``oslo.messaging`` requires."
msgid ""
"This bug has been fixed in `v2.4.1 of amqp <https://github.com/celery/py-"
"amqp/commit/bf122a05a21a8cc5bca314b0979f32c8026fc66e>`_."
msgstr ""
"This bug has been fixed in `v2.4.1 of amqp <https://github.com/celery/py-"
"amqp/commit/bf122a05a21a8cc5bca314b0979f32c8026fc66e>`_."
msgid ""
"Threading issues with the kafka-python consumer client were identified and "
"documented. The driver has been updated to integrate the confluent-kafka "
"python library. The confluent-kafka client leverages the high performance "
"librdkafka C client and is safe for multiple thread use."
msgstr ""
"Threading issues with the kafka-python consumer client were identified and "
"documented. The driver has been updated to integrate the confluent-kafka "
"python library. The confluent-kafka client leverages the high performance "
"librdkafka C client and is safe for multiple thread use."
msgid "Train Series Release Notes"
msgstr "Train Series Release Notes"
msgid "Upgrade Notes"
msgstr "Upgrade Notes"
msgid ""
"Users of the Pika-based driver must change the prefix of all the "
"transport_url configuration options from \"pika://...\" to \"rabbit://...\" "
"to use the default kombu based RabbitMQ driver."
msgstr ""
"Users of the Pika-based driver must change the prefix of all the "
"transport_url configuration options from \"pika://...\" to \"rabbit://...\" "
"to use the default kombu based RabbitMQ driver."
msgid "Ussuri Series Release Notes"
msgstr "Ussuri Series Release Notes"
msgid ""
"With the change in the client library used, projects using the Kafka driver "
"should use extras oslo.messaging[kafka] to pull in dependencies for the "
"driver."
msgstr ""
"With the change in the client library used, projects using the Kafka driver "
"should use extras oslo.messaging[kafka] to pull in dependencies for the "
"driver."
msgid ""
"ZeroMQ support has been deprecated. The ZeroMQ driver ``zmq://`` has been "
"unmaintained for over a year and no longer functions properly. It is "
"recommended to use one of the maintained backends instead, such as RabbitMQ "
"or AMQP 1.0."
msgstr ""
"ZeroMQ support has been deprecated. The ZeroMQ driver ``zmq://`` has been "
"unmaintained for over a year and no longer functions properly. It is "
"recommended to use one of the maintained backends instead, such as RabbitMQ "
"or AMQP 1.0."
msgid "oslo.messaging Release Notes"
msgstr "oslo.messaging Release Notes"
|
The Definitive Top 10 Cheap Cans In Ireland Today
There’s a big, beautiful world of cans and beer out there, but as a student chances are you’re going to want to get the most you can for as little as possible.
We’re not talking about the sophisticated craft ales, or even a typical pint of Guinness. This is about the best, and cheapest cans in the country.
In order to keep things fair and affordable, we’ve set a price limit of €1.25 per can in order to be eligible for this list (i.e. you will be able to get 4 of the beauties you see on this list for a fiver or less).
We’ve no doubt that this list will be the source of endless debate and potential civil unrest, so before you get your pitchforks sharpened keep in mind that this is only opinion. We’ve put hundreds of hours of research into this to make sure it’s as informed an opinion as possible.
Also please note, this list does not contain cider. Only beer for today. Chances are you either like one or the other, so there’s no use in comparing them.
Whatever can makes you happy, that is the important thing. With that being said, let’s get started:
10) Galahad
We hate for there to be a last place on this list, but Aldi’s offering just doesn’t reach the lofty heights of the other contenders.
At a miniscule 75c per can, this 4% alcohol can wins points for value but very little else.
Taste-wise, there’s very little worth talking about here apart from its bitterness. The price means it’s very easy to drink enough to forget about it, but Galahad needs other attributes if it’s to get any higher than tenth place.
9) Tesco Premium Lager
What this lacks in originality name-wise, it makes up for with great value and a taste that won’t blind you.
The can is a slightly smaller offering at 440ml compared to the standard 500ml, but at a satisfying 4.8% for less than €1 a can, you won’t be complaining about its size.
The taste of this particular beer is surprisingly crisp at first, despite not being the smoothest offering on this list.
However, it loses serious points for a horrific aluminium aftertaste which makes the end of every sip a challenge in itself until you have enough of them in you.
A valiant effort, but unless you like the taste of tin from your tins, we can’t place this any higher.
8) Karpackie
This may be a controversial entry, being so low for some people, but it can’t be disputed that the taste just isn’t that great.
It’s a staple 4-for-€5 can in shops and off-licences around the country, but the bitter taste and off-putting flavour mean it has very few redeeming features, unless it’s ice cold.
We love the design of the can and it does offer you a little more for your cash at 5% a can, but when you can get other beers on this list for the same price or less, there’s no reason why you shouldn’t.
7) Excelsior
It’s LIDL’s turn to enter this competition, with an offering that is most definitely value for money.
Weighing in at a solid 4%, Excelsior offers a smoother alternative to Galahad while also earning extra points for resembling R2-D2 from Star Wars.
It won’t win any awards for taste, but it’s smooth enough to enjoy at a very affordable price.
6) Finkbrau
This might be an odd entry to some, seeing as visually LIDL’s Finkbrau looks more like a cough bottle than a beer.
However, there’s no denying Finkbrau has some serious pluses on its side. Ten 250ml bottles at 4.7% will only cost you a fiver, which makes it the best alcohol unit per euro entry on this list.
The taste won’t be for everyone, but we find that being in a bottle gives it a smoother, crisper taste than a lot of its aluminium brothers on this list.
Definitely worth giving a chance if you’ve yet to seek this out.
5) Hackenberg
Once seen in shops around the country, Hackenberg is a bit of a relic these days and not as widespread as it used to be.
If you do happen to seek it out though, you wouldn’t be disappointed as it epitomises all the qualities of a classic, cheap beer.
A little watery, but the taste isn’t terrible and it’s smooth enough to get through as quickly as you’d like.
Be sure to pick up the ‘Premium’ version though, with its lovely gold and white design, superior taste and alcohol content.
4) Carling
We’re getting into the heavy hitters here, with England’s offering very unlucky not to get a podium finish.
They say ‘It’s good, but it’s not quite Carling’. That might be a load of shite, but there’s no denying this is one of the smoothest beers on this list.
It lacks a bit on flavour, which rules it out of a top 3 finish, but you can’t help but praise a beer that is so crisp and available in eight-packs for less than a tenner in most shops and off-licences.
3) Dutch Gold
A time-tested classic, Dutch Gold holds a place in all of our hearts, and third place on this list.
There’s an old Irish seanfhocal that reads “If you’re not paying Dutch, you’re paying too much.” With the rise of supermarket value beers in recent years, you can indeed get cheaper beers today than this legend.
However, you could still argue that with most off-licences stocking this at €1 a can, you won’t get better value for your money than with good ol’ Dutch.
Like the top two on this list, it’s a smooth beer with decent flavour, but an aftertaste that is slightly too bitter for some people’s liking separates it from gold and silver.
Let's talk about testing
Test Automation – What is it Good for…
Much of the software development for websites is done using agile methods. Automation is usually necessary to keep up with the pace.
It’s not really a question of whether automation is useful. Only a few people would rather repeat the same tasks many times every day than let a computer do them. The question is whether the benefits are bigger than the costs. There will always be costs; automation is an investment. Businesses expect a return on their investment. There are several advantages:
Requiring less manual work to reach the same test coverage
Being able to use testing resources for higher value work improving the user experience
Happier users -> more users
How to get started?
One of the critical parts in building test automation and automation suites is to have the right people to do it. There are other factors too, which I’ll briefly go through in this post. Tools, planning and working methods are other important factors.
Test automation suite?
What is it?
A test automation suite can be many things. It can and will evolve during its lifetime. The term ‘test automation suite’, at least for me, conveys a message of something massive, something that took a long time to build, covering the whole site’s functionality. That’s one definition, but there is no reason not to use a simpler, more agile approach.
A suite can be as small as automating a specific part of a site, for example, the top navigation or a registration form. Anything which needs a lot of steps to cover with every build is a good candidate for automation. Can’t these things be tested manually then? Yes, but testers are humans too, you know. They will not be happy clicking the same 100 links or entering the 100 different email addresses required for good coverage. Does it matter that the tester is not happy about something? Well, that’s more of an HR question, but for me, it does matter. People need to be motivated to achieve the best results.
And the main reason for doing any testing is to achieve good quality. Saving the time spent manually testing forms and navigation will allow the tester to really do his/her job and concentrate on the more challenging test areas, for example, making sure the experience of your site’s users is as good as possible.
And to be honest, no one will test all the links manually with every build. It’s only a matter of time before there is a serious problem when something important is missed in testing.
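As a rough sketch of the kind of repetitive check described above, a data-driven run over a couple of hundred generated email addresses takes only a few lines. The validation rule and the generated addresses below are hypothetical stand-ins for a real registration form:

```python
import re

# Hypothetical validation rule standing in for the registration form under test.
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$")

def generate_email_cases(count=100):
    """Build (address, expected validity) pairs: one valid and one
    deliberately malformed address per iteration."""
    cases = []
    for i in range(count):
        cases.append((f"user{i}@example.com", True))
        cases.append((f"user{i}@@example..com", False))
    return cases

def run_suite(validate):
    """Feed every generated case to the validator; collect mismatches."""
    return [(addr, expected)
            for addr, expected in generate_email_cases()
            if bool(validate(addr)) != expected]

failures = run_suite(lambda addr: EMAIL_RE.match(addr))
print(f"{len(failures)} failures out of 200 cases")  # → 0 failures out of 200 cases
```

The loop is the same shape a data-driven Cucumber or Selenium suite uses against the real form: the tester writes the table of inputs once, and the machine does the hundred submissions on every build.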
Okay, okay, we need automation. What next?
Requirements for a successful automation project?
Here is a list of things that are important for a successful automation project.
Co-operation between the development and QA teams in terms of communication about upcoming changes
Time. It will take some time to implement the automation. It will still be worth it, in many cases. Just remember to allocate some time for it. It’s important to keep working on automation iteratively, building more coverage as the site and the development work progress. There will never be a two to four week period at the end of the project where you can just concentrate on building automation. Really. There isn’t.
Stability. If the site changes every day, it’s better not to start building an extensive suite. Again, start with the areas that are more stable. If not, you might face a chicken-and-egg problem: you need a stable site to build automation, but the site does not get stable when you don’t have it.
Iterative (agile) approach. Start from a single area in one browser. While doing this, you will see how well the automation works with your site. Once you are comfortable with the automation and how it works with your site, increase the coverage and expand to other browsers and platforms.
Tools. Use a tool set that’s easy to maintain. Use modern tools like Cucumber for writing the test cases and combine it with either Watir or Selenium. Whatever tool you use, make sure you can reuse your code. Cucumber is essentially a collection of navigation commands on your site combined with test cases that can be understood by other people, especially business analysts. They could even give you a hand in the test case creation by documenting the requirements in a format that can be used for test cases. Ideally, when using Cucumber, the requirements are the test cases.
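To make the reuse idea concrete, here is a toy sketch in Python of how Cucumber-style tools pair plain-language test lines with reusable step implementations. This is not Cucumber’s actual API; every name and pattern below is made up for illustration.

```python
import re

# A miniature step registry: plain-language lines are matched against
# regex patterns and dispatched to reusable step functions.
STEPS = []

def step(pattern):
    def register(func):
        STEPS.append((re.compile(pattern), func))
        return func
    return register

@step(r'I visit "(.+)"')
def visit(ctx, page):
    ctx["page"] = page  # in real automation this would drive a browser

@step(r'I should see "(.+)"')
def should_see(ctx, text):
    assert text in ctx["page"], f"{text!r} not found on {ctx['page']!r}"

def run_scenario(lines):
    """Execute a scenario written as plain-language lines."""
    ctx = {}
    for line in lines:
        for pattern, func in STEPS:
            m = pattern.fullmatch(line)
            if m:
                func(ctx, *m.groups())
                break
        else:
            raise ValueError(f"no step matches: {line}")
    return ctx

ctx = run_scenario([
    'I visit "registration form"',
    'I should see "registration"',
])
```

The point is the same as in the text: the steps are written once and reused across many scenarios, and the scenarios themselves stay readable to non-programmers.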
Test automation is a tool to increase your test coverage; do not make it the software development project. You will not be allowed to keep working on the automation for months and months. While it is important to get the automation done properly, there is a risk that too much effort will be allocated to it. Like in any other form of testing, there is a trade-off between the time spent and the results achieved. The 80/20 rule applies here too. Test automation is a tool to make your development project more successful in terms of quality and schedule. When you achieve that, you can be satisfied with the results.
The people
Can your average QA person do automation in a maintainable and reusable way? Most likely not. It requires a different skill set than manual testing. Or should I say, other skills in addition to the functional tester skills. A good solution is to have a team where some of the people are good at planning testing and test cases while others have the coding skills to maintain and build the test automation. If the team does not have people with coding skills, most people start to automate testing with the help of a tool that records test cases. This will in most cases lead to major difficulties in maintaining the tests in the future. It can also be difficult to customize recorded tests for data-driven testing.
Required competencies
Automation requires development skills. You don’t need to be a high-level C++ programmer to create automation scripts. However, knowing how to implement code in a managed way will be necessary to get the automation done. And it is not only about completing the initial set of automation tests. The software/website you’re testing will typically receive updates every day/week. Some of those changes are going to break your test automation. Even if all the functionality on the site still works, some elements could have been moved or renamed, causing the test automation not to find the element anymore. Elements on the page will need to be identified by their attributes (for example, link text) or by XPath expressions, which are powerful but rigid. This is really the part where a good understanding of HTML is required.
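The two locating strategies just mentioned can be sketched with the Python standard library, which supports a limited XPath subset. The markup here is an invented sample, not from any real site:

```python
import xml.etree.ElementTree as ET

# Invented sample markup for demonstration.
html = """
<div>
  <ul class="top-nav">
    <li><a href="/home">Home</a></li>
    <li><a href="/register">Register</a></li>
  </ul>
</div>
"""
root = ET.fromstring(html)

# Strategy 1: locate the link by its visible text ("link text").
register = [a for a in root.iter("a") if a.text == "Register"]

# Strategy 2: locate the navigation list by an XPath expression tied to
# an attribute value. Powerful but rigid: it breaks as soon as the
# class name changes.
nav = root.findall(".//ul[@class='top-nav']")

print(register[0].get("href"), len(nav))  # /register 1
```

Real automation tools (Selenium, Watir) offer the same two strategies; the maintenance trade-off is identical.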
Software development practices
Maintaining any software, test automation included, benefits from a good development process. One of the most important practices is using a version control system, for example Subversion or Git. It enables maintaining the current version of the automation and tracking the changes made to it, as well as branching your scripts for the current production system and the development version if there are major differences.
Development guidelines
Having a set of development guidelines to support the automation is needed to minimize unnecessary changes to the site. A big part of the technical implementation of test automation has to do with locating HTML elements on the page. It is good to establish naming conventions for the elements, and then stick to them. While it might be a good idea to fix the naming of an element that is obviously wrong, doing so can cause test cases to fail. A typical case could be renaming an element to something that’s clearer to other users:
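The markup example appears to have been lost from the original post; judging from the XPath expressions that follow, it was presumably an unordered list along these lines (a reconstruction, not the author’s exact snippet):

```html
<!-- about to be renamed from "top-nav" to the clearer "top-navigation" -->
<ul class="top-nav">
  <li><a href="/home">Home</a></li>
  <li><a href="/products">Products</a></li>
</ul>
```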
Renaming the above unordered list would require the following change to the XPath expression identifying the element:
//ul[@class='top-nav'] -> //ul[@class='top-navigation']
While this change does make sense, it can easily cause automation tests to fail. The fact that test cases sometimes fail is not really a problem; it’s a fact of life that needs to be managed by allocating time for maintenance work.
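The failure mode is easy to reproduce with the standard library’s XPath subset (markup and names invented for illustration):

```python
import xml.etree.ElementTree as ET

# Before and after the rename discussed above.
before = '<div><ul class="top-nav"><li>Home</li></ul></div>'
after = '<div><ul class="top-navigation"><li>Home</li></ul></div>'

OLD_XPATH = ".//ul[@class='top-nav']"
NEW_XPATH = ".//ul[@class='top-navigation']"

# The old locator silently matches nothing on the renamed page, so
# every test built on it fails until the maintenance work is done.
print(len(ET.fromstring(before).findall(OLD_XPATH)))  # 1 -> tests pass
print(len(ET.fromstring(after).findall(OLD_XPATH)))   # 0 -> tests break
print(len(ET.fromstring(after).findall(NEW_XPATH)))   # 1 -> fixed locator
```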
Allocating time for automation
The project manager creates a plan for his project. The plan includes time for QA after the development work is completed.
Project of 6 months:
2 months for requirements / user stories
3 months for development
1 month for QA
Now, what happens when development time is increased by 30%? That’s not at all unrealistic, is it? Even with this relatively low delay percentage, the whole time for QA is basically eaten up by the development work. And there is of course a commitment to the customer to deliver on the agreed date. An additional complication is that there are most likely areas that have not been tested before the QA cycle is supposed to end.
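The arithmetic, spelled out (a trivial sketch using the month figures from the plan above):

```python
# A 30% overrun on the 3-month development phase consumes almost all
# of the 1-month QA window in the 6-month plan.
requirements, development, qa = 2.0, 3.0, 1.0  # months
overrun = 0.30

slipped_dev = development * (1 + overrun)  # 3.9 months
qa_left = (requirements + development + qa) - (requirements + slipped_dev)

print(round(qa_left, 2))  # 0.1 -> about 3 days of QA remain
```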
The team will find defects on the last week of the project, things that should have been discovered 3 months earlier. Maybe there are areas that cannot be implemented at all with the current architecture.
This can, especially for a new team, lead to a situation where test automation is never taken into use in a proper way. There is not a lot of time, people are not familiar with the tools, and enough progress is not made. In such a situation, management can easily lose faith in the usefulness of automation in general. |
San Pedro de Tiquina Municipality
San Pedro de Tiquina Municipality is the second municipal section of the Manco Kapac Province in the La Paz Department in Bolivia. Its seat is San Pedro de Tiquina.
Subdivision
The municipality is divided into five cantons.
The people
The people are predominantly indigenous citizens of Aymaran descent.
Languages
The languages spoken in the San Pedro de Tiquina Municipality are mainly Aymara and Spanish.
See also
Strait of Tiquina
References
obd.descentralizacion.gov.bo
External links
Population data and map of San Pedro de Tiquina Municipality
Category:Municipalities of La Paz Department (Bolivia) |
Q:
How would you parenthetically cite an author that appears twice in a works cited page? MLA
How would you parenthetically cite an author that appears twice in a works cited page?
I would like to cite Wachs. Here is a piece of my works cited:
Wachs, Juan, Helman Stern, Yael Edan, Michael Gillam, Craig Feied, Mark Smith, and Jon Handler. Gestix: A Doctor-Computer Sterile Gesture Interface for Dynamic Environments. Tech. Web. 21 Mar. 2012.
Wachs, Juan, Yu-Ting Li, and Mithun Jacob. "Gestonurse." Gestonurse. Purdue University Industrial Engineering Lab., 2012. Web. 21 Mar. 2012.
Assuming that I would like to parenthetically cite the second source, how would I go about doing this to differentiate from the first source?
A:
According to the Purdue Online Writing Lab, when using the MLA format you would distinguish multiple works by the same author by including a shortened version of the title of the particular reference that you are citing. Thus your examples might appear as follows (where 'p.' indicates the page number/s, which should also be included):
(Wachs, Gestix p.)
and
(Wachs, Gestonurse p.)
|
Top Washington State Legal Marijuana Stores (October Total Sales)
New Vansterdam, based in Vancouver, continues to impress by leading the way with approximately $760,000 in total sales*. Just minutes from the state border, we assume that visitors from Oregon have an impact on total sales. This assumption is only confirmed by an observation from local marijuana blog Mrs. Nice Guy: ‘it seems as though there’s a decent amount of Oregon customers.’
We should note, the number of reported sales for New Vansterdam was obtained from the retail store and not the Washington State Liquor Control Board (WSLCB). According to New Vansterdam, the WSLCB did not report a correct number. If any other retail stores have the same issue, contact us so we can update the infographic accordingly. The information in this infographic was gathered from a document located here.
Here are the top 5 recreational marijuana shops in Washington State for total sales in the month of October:
Manic Conrad
Manic Conrad is editor of High Above Seattle. His past exploits include purposely wandering around aimlessly, before accidentally confronting his biggest fear and dream, standing face to face with a large bear. |
joefreid: There is a new MBA guy doing a rotation in Trading, his name is Edmund Gaither...have you met him? Very large, strong looking guy
joefreid: infectious laugh
joefreid: I went to Tuck with him
TXsscott5: no....do you know what trading group?
joefreid: I'll ask him when i can get him on the phone again. Monique's name was familiar...I think he just started in whatever group he is in
TXsscott5: hm....but he's on the gas floor?
joefreid: I think...but not sure, will find out
TXsscott5: ok...i can probably track him down....
TXsscott5: mo thinks i was a search dog in a former life
joefreid: haha
joefreid: are you one of those people that can find anything on the internet in less than 2 minutes?
TXsscott5: y
TXsscott5: :-)
TXsscott5: had a huge debate at work yesterday
TXsscott5: is there such a thing as too much water....for consumption purposes
TXsscott5: i was battling almost everyone claiming that the law of diminishing returns applies even to good, old-fashioned water
joefreid: personally? or from a macroeconomic sense (property rights in the West, blah, blah, blah
TXsscott5: personally, as in water in-take into the human body
TXsscott5: and there is such a thing....and it can cause a condition known as water intoxication...
TXsscott5: it's so gratifying to be right ;
TXsscott5: ;-)
joefreid: yes, I buy into the law of diminishing returns. I've heard of water intoxication and I agree with your assessment
joefreid: that doesn't mean you are right
joefreid: though
TXsscott5: why not?
joefreid: my bad...because you found info on Water Intoxification you are right. I though you were saying b/c i agreed with you, you were right
TXsscott5: oh...ha
TXsscott5: people were attacking me...saying, "no amount of water can be bad for you" etc. etc.
TXsscott5: and i replied, "at some point your body can't process the water...the system has to overload"
joefreid: too much of anything is bad for you....way to overpower them with your tremendous intellect
TXsscott5: ha
TXsscott5: thx
TXsscott5: i think i've scared them all know
TXsscott5: now
joefreid: Cow them into submission
TXsscott5: i always do....could be why i've heard the words, "you're intimidating" more than once in my lifetime
TXsscott5: oh well....i gotta be me
joefreid: not necessarily a bad thing....when used in moderation
joefreid: Law of Diminishing Returns again...it is everywhere
TXsscott5: ha
TXsscott5: so true
TXsscott5: seriously though....i may have issues w/ challenges and/or the gauntlet being thrown down....i hate being wrong, but i'd rather be wrong than not know the correct answer
joefreid: ...so as long as you can gain the knowledge you are ok?
joefreid: insatiable intellectual curiosity?
TXsscott5: oh, i'll be a little bitter for a while that i was wrong...but i get over it and then i have the knowledge from there on out...so i come out ahead
joefreid: ...I am the guy that will look out the airplane window and marvel how the "wing" provides lift. The physics of it all and who in the world came up with that?
TXsscott5: that one still gets me
TXsscott5: i mean, really....how cool is that
TXsscott5: this giant heap of metal....airborne
TXsscott5: i took the aviation ground course in college as an elective
joefreid: is it and extrapolation of Bernoulli's Principle
joefreid: ?
TXsscott5: ok...engineer talking to business major
TXsscott5: heard of the principle...but couldn't define it for you
TXsscott5: ;(
TXsscott5: :-(
joefreid: i realize the whole greater distance over the top of the wing creates a faster wind velocity above than below the wing and therefore causes a pressure differential (low above, high below and this "lifts" the wing. But it the newtons of lift is dependent upon the shape of the wing, the airflow over it, etc...
TXsscott5: ok.....i'm officially impressed....my dad could talk to you for HOURS
joefreid: That is why for the longest time, maybe even still today, the wings are the most difficult aspect of the plane to reproduce. Other countries would love for Boeing to outsource the construction of the wing so they could learn how to make it.
joefreid: I'm sure your father would shame me
TXsscott5: no....i doubt, but it would make for some interesting discussion |
Who's left
SFI staff & NFL Scout
04/24/2005
A look at the best prospects still available entering Day 2 of the 2005 NFL Draft. The 49ers don't select until the top of the fifth round after trading their fourth-round (No. 102 overall) and first sixth-round (No. 175) picks to Philadelphia on Saturday evening for the Eagles' late third-round selection (No. 94), which San Francisco used to select Oregon offensive lineman Adam Snyder. |
Bundesliga Season Preview 2009/10
Last year, apart from Aston Villa and Hull’s brief stays in the top four, a child could have predicted at the start of the season which teams would occupy the Champions League spots in May.
Are you sick of this predictability?
Then look no further than what has become the greatest footballing contest in any league in Europe – the German Bundesliga. If someone tells you that they knew Wolfsburg would win the Bundesliga last season and that newly-promoted Hoffenheim would be cruising well above the heavyweights of Bayern Munich and Werder Bremen last Christmas, then they’re lying. This season Bayern have spent big (€30 million) on former Stuttgart striker Mario Gomez, even though he hasn’t been playing well lately and resembles (for my money anyway) a poor man’s Luca Toni. Apart from that, the Bavarians have trawled the free transfer market, with the major signings being Ukrainian international midfielder Anatoliy Tymoschuk and Croatian striker Ivica Olic, formerly of Hamburg. After all the speculation that surrounded Franck Ribery leaving Bayern, the Frenchman ended up staying put. New boss Louis van Gaal did, however, oversee a bit of a clearout. After a huge campaign by FC Koln supporters, Lukas Podolski left Bayern to rejoin his hometown club for a fee of €10 million. Centre back Lucio left on a free transfer, Ze Roberto left for Hamburg, Tim Borowski rejoined Werder and Massimo Oddo returned to Milan after his loan spell ended. Werder Bremen will look this year to atone for their awful 10th placing last season, and the signing of “the next big thing” in Germany, Marko Marin, from Moenchengladbach for €9m should help them along the way. Marin will have big boots to fill, though, with Diego’s departure for Juventus (for €24m) leaving the Northerners with a creative gap in their side. Werder also lost Claudio Pizarro, who returned to Chelsea after his loan ended, Carlos Alberto, who has been loaned to Vasco da Gama, and veteran midfielder Frank Baumann, who retired recently. The returns of Tim Borowski and Boubacar Sanogo from Bayern and Hoffenheim, respectively, will also help, but it looks like a big ask for Werder to replicate their league win of 2007/08.
Champions VfL Wolfsburg will look to consolidate their success last season, and have the look of a team ready for another shot at the title. With no major departures, and the signings of players like Obafemi Martins and Karim Ziani, die Woelfe look like contenders again. Everyone’s favourite upstarts, TSG Hoffenheim, strengthened their young squad with the capture of Josip Simunic from Hertha Berlin for €7m and the highly rated Maicosuel from Botafogo for €5.4m. VfB Stuttgart will look to cope with the loss of Mario Gomez, and much of their success will hinge on how well new €10m signing Pavel Pogrebnyak adapts to the Bundesliga. Consistency will be the name of the game this season, as the second half of last season saw them mount a credible title challenge (after an abysmal first half to the season). Aliaksandr Hleb has also returned to the club on loan from Barcelona.
Although this is a season preview, I am not going to even attempt to claim I can predict how this season will finish. The Bundesliga thrives because it has that unpredictable dynamic. Give it a chance and you may well be hooked. |
It was more than a little depressing to see the first North Brooklyn Farms get clobbered by bulldozers last fall, even if everybody knew it was coming. But as of this weekend, the farm is back and better than ever with Sunday night dinner parties, a fireworks viewing, and a host of other community events extending through the tolerable months. But best of all is that North Brooklyn Farms, now the Farm on Kent, will be an accessible plot of nature for the neighborhood’s residents.
“We’ve gotten up on the roof [of Domino] a couple of times and when you look out over the neighborhood you really see there’s no green space,” co-founder Ryan Watson explained. “People are really hurting for it, and that’s why we really wanted to limit some of the vegetable growth and set aside space for this lawn.”
The farm, across the street from its original location at Havemeyer Park, is eerily quiet, which is surprising not only because the Farm on Kent is located almost directly under the JMZ, but because the crops are growing in the shadow of what’s left of the Domino sugar refinery, where bulldozers and steel claws are slowly making progress in turning the rubble into sparkly glass towers. “It’s kind of an oasis here,” Ryan said. “We have this space for the next three years, is what we’ve been told — it could be longer, but that’s what we’ve got going.”
Future site of the kitchen (Photo: Nicole Disser)
Two Trees, the developer behind the massive project that’s slated to continue for the next decade or so and will result in the construction of approximately 700 new residential units along the Williamsburg waterfront, has granted the farmers a temporary lease on the land. “This site was where the Domino sugar refinery was,” Ryan explained. “But this is the first time this property has been made accessible to the public in over 150 years. At one point, the majority of American sugar production came through this factory, so this site has a really big historical significance to the neighborhood and the country as well, considering the role of sugar in our history.”
(Photo: Nicole Disser)
North Brooklyn Farms has made an effort to use some of the old materials found at the site including cobblestones and pieces of blue stone. “Last time these cobblestones were laid they were being run over by horse and buggy,” Ryan said.
It’s hard to imagine, but Ryan told us that Kings County was once the second-largest producer of produce in the U.S. “It was number two behind Queens County,” Ryan explained. “There’s this funny irony in the fact that this was once agricultural land, became factory land, and is now reverting back to agricultural land and being made accessible to the public.”
But NBF is also looking toward the future in trying its best to accommodate the interests of all types of residents. There’s a blacktop bike course (known as Brooklyn Bike Park), picnic tables and a kitchen, as well as plans for a shady grove, a mushroom farm housed inside a shipping container, pick-your-own produce, and a lawn where people can simply hang out. Ryan has even designed what he calls “grass bowls,” prototyped at the previous site. “It’s like a really beautiful place to hang out with friends, drink a beer,” he said, plopping down inside one of the bowls.
probably the best grass bowl of the bunch (Photo: Nicole Disser)
“We run the space to be accessible to the public,” Ryan confirmed, explaining the dining area would be accessible to everyone outside of the events. Besides the Jimmy Carbone dinners, NBF will also run some to-be-determined barbecues. “They’re going to be more casual events,” he explained. “We’re also going to do seasonal parties. We have a Halloween party, we do a harvest carnival which is really, really fun. We had a square dance and a live string band last year. It was very much a country carnival vibe in Brooklyn.”
The space will also be used for yoga classes, meditation courses, and gardening workshops. “We want to give people who sit in front of a computer all day the opportunity to come out here, use their bodies, and get connected with the Earth,” Ryan said. “We want people to have a disconnect from the hecticness of the city. You can sit and watch the bridge at rush hour and see all the traffic and be removed from it, and watch the butterflies and the dragonflies just fly around.”
And of course, a wide variety of produce will be sprouting. Both volunteers and patrons will help pick the produce, which can be bought at the farm stand on Wednesdays and Saturdays. “We have kale, we have an Italian heirloom eggplant called Rosa Bianca, and a couple different varieties of cherry tomatoes over here, butter-head lettuce, cucumbers for pickling for our Sunday suppers, okra as well, and some arugula popping up here,” Ryan recounted.
(Photo: Nicole Disser)
Most of the plants, save for the annuals, are from the old park. “We took everything up to our friend’s place upstate over the wintertime,” Ryan explained. “We’re in the final stages of getting the trees back down here from Pennsylvania.”
And though the new site does share a lot of the same amenities as the previous location at Havemeyer Park, this space is much larger. “We grew thousands of pounds of produce last year, and this year we have ten times the space,” Ryan said. “And because our lease only went to September of last year, we didn’t have those extra months of growing.”
We’ve seen Two Trees’ renderings for a public park planned for the development, but we wondered if perhaps residents would prefer that the Farm on Kent become a permanent fixture? It’s not far-fetched to think that maybe Two Trees would consider keeping the farm after all.
“It depends. Two Trees, they’re very good at responding to community requests in a way I’ve never seen any other developer do, and one of the requests of the community is addressing the lack of green space,” Ryan answered. “It’s out of our control– everything in life is temporary and three years in New York City is a very long time– but we recognize that if we work really hard, that speaks for itself, and the memories that are created here, that’s all that we really care about.”
The Farm on Kent will be open to the public Tuesday through Friday, 11 am to 8 pm, and Saturday and Sunday from 11 am to 9 pm.
About B + B
Bedford + Bowery is where downtown Manhattan and north Brooklyn intersect. Produced by NYU’s Arthur L. Carter Journalism Institute in collaboration with New York magazine, B + B covers the East Village, Lower East Side, Williamsburg, Greenpoint, Bushwick, and beyond. Want to contribute? Send a tip? E-mail the editor. |
/*
* Copyright (C) 2017 Freie Universität Berlin
*
* This file is subject to the terms and conditions of the GNU Lesser
* General Public License v2.1. See the file LICENSE in the top level
* directory for more details.
*/
/**
* @ingroup net_emcute
* @{
*
* @file
* @brief emCute internals
*
* @author Hauke Petersen <[email protected]>
*/
#ifndef EMCUTE_INTERNAL_H
#define EMCUTE_INTERNAL_H
#ifdef __cplusplus
extern "C" {
#endif
/**
* @brief MQTT-SN message types
*/
enum {
ADVERTISE = 0x00, /**< advertise message */
SEARCHGW = 0x01, /**< search gateway message */
GWINFO = 0x02, /**< gateway info message */
CONNECT = 0x04, /**< connect message */
CONNACK = 0x05, /**< connection acknowledgment message */
WILLTOPICREQ = 0x06, /**< will topic request */
WILLTOPIC = 0x07, /**< will topic */
WILLMSGREQ = 0x08, /**< will message request */
WILLMSG = 0x09, /**< will message */
REGISTER = 0x0a, /**< topic registration request */
REGACK = 0x0b, /**< topic registration acknowledgment */
PUBLISH = 0x0c, /**< publish message */
PUBACK = 0x0d, /**< publish acknowledgment */
PUBCOMP = 0x0e, /**< publish complete (QoS 2) */
PUBREC = 0x0f, /**< publish received (QoS 2) */
PUBREL = 0x10, /**< publish release (QoS 2) */
SUBSCRIBE = 0x12, /**< subscribe message */
SUBACK = 0x13, /**< subscription acknowledgment */
UNSUBSCRIBE = 0x14, /**< unsubscribe message */
UNSUBACK = 0x15, /**< unsubscription acknowledgment */
PINGREQ = 0x16, /**< ping request */
PINGRESP = 0x17, /**< ping response */
DISCONNECT = 0x18, /**< disconnect message */
WILLTOPICUPD = 0x1a, /**< will topic update request */
WILLTOPICRESP = 0x1b, /**< will topic update response */
WILLMSGUPD = 0x1c, /**< will message update request */
WILLMSGRESP = 0x1d /**< will message update response */
};
/**
* @brief MQTT-SN return codes
*/
enum {
ACCEPT = 0x00, /**< all good */
REJ_CONG = 0x01, /**< reject, reason: congestions */
REJ_INVTID = 0x02, /**< reject, reason: invalid topic ID */
REJ_NOTSUP = 0x03 /**< reject, reason: operation not supported */
};
#ifdef __cplusplus
}
#endif
#endif /* EMCUTE_INTERNAL_H */
/** @} */
|
Prevalence of war-related sexual violence and other human rights abuses among internally displaced persons in Sierra Leone.
Sierra Leone's decade-long conflict has cost tens of thousands of lives, and all parties to the conflict have committed abuses. To assess the prevalence and impact of war-related sexual violence and other human rights abuses among internally displaced persons (IDPs) in Sierra Leone. A cross-sectional, randomized survey, using structured interviews and questionnaires, of internally displaced Sierra Leonean women living in 3 IDP camps and 1 town, conducted over a 4-week period in 2001. A total of 991 women provided information on 9166 household members. The mean (SE) age of the respondents was 34 (0.48) years (range, 14-80 years). The majority of the women sampled were poorly educated (mean [SE], 1.9 [0.11] years of formal education); 814 were Muslim (82%), and 622 were married (63%). Accounts of war-related sexual assault and other human rights abuses. Overall, 13% (1157) of household members reported incidents of war-related human rights abuses in the last 10 years, including abductions, beatings, killings, sexual assaults, and other abuses. Ninety-four (9%) of 991 respondents and 396 (8%) of 5001 female household members reported war-related sexual assaults. The lifetime prevalence of non-war-related sexual assault committed by family members, friends, or civilians among these respondents was also 9%, which increased to 17% with the addition of war-related sexual assaults (excluding 1% of participants who reported both war-related and non-war-related sexual assault). Eighty-seven percent of women believed that there should be legal protection for women's human rights. More than 60% of respondents believed a man has a right to beat his wife if she disobeys, and that it is a wife's duty/obligation to have sex with her husband even if she does not want to. Sexual violence committed by combatants in Sierra Leone was widespread and was perpetrated in the context of a high level of human rights abuses against the civilian population.
Analysis of oxygen binding by Xenopus laevis hemoglobin: implications for the Root effect.
We have measured oxygen binding properties for red cell suspensions and stripped hemolysate of Xenopus laevis (XL) in order to answer the question whether XL hemoglobin exhibits a Root effect. The present results show that under physiological conditions XL red cells do not exhibit all the criteria for a Root effect. Compared to human Hb A, stripped XL hemoglobin has a low oxygen affinity, a normal alkaline Bohr effect and a lower interaction with 2,3-diphosphoglycerate, its physiological allosteric effector in red cells. The values for the Hill coefficients are lower than those for human hemoglobin but XL Hb remains cooperative (n50 approximately 2), even at pH values below 6. Attempts to mimic some of the criteria for the Root effect were carried out in pure Hb A solution upon addition of potent allosteric effectors. This leads to low cooperativity and less than 100% oxygen saturation under room air at acidic pH. Under these conditions, mammalian or XL Hb have a maximum proton release at pH approximately 8 and an increased reverse Bohr effect. By contrast, a Root effect Hb exhibits a maximum proton release at neutral pH with the absence of reverse Bohr effect. Therefore the Root effect in fish Hb and the extreme stabilization of the T-state with effectors in mammalian Hb are not an identical phenomenon. Without crystallographic analyses of fish Hb exhibiting a Root effect, the molecular interpretation of this functional property still remains unexplained.
---
abstract: 'In an earthquake event, the combination of a strong mainshock and damaging aftershocks is often the cause of severe structural damages and/or high death tolls. The objective of this paper is to provide estimation for the probability of such extreme events where the mainshock and the largest aftershocks exceed certain thresholds. Two approaches are illustrated and compared – a parametric approach based on previously observed stochastic laws in earthquake data, and a non-parametric approach based on bivariate extreme value theory. We analyze the earthquake data from the North Anatolian Fault Zone (NAFZ) in Turkey during 1965–2018 and show that the two approaches provide unifying results.'
author:
- 'Juan-Juan Cai[^1]'
- 'Phyllis Wan[^2]'
- 'Gamze Ozel[^3]'
bibliography:
- 'earthquake.bib'
title:
- |
Parametric and non-parametric estimation for extreme earthquake events:\
the joint tail inference for mainshocks and aftershocks
- 'Parametric and non-parametric estimation of extreme earthquake event: the joint tail inference for mainshocks and aftershocks'
---
[*Keywords and phrases: bivariate extreme value theory; earthquake data; tail probability; mainshock; aftershock*]{}\
[ 62G32 (60G70; 86A17).]{}
Introduction {#Sec:Intro}
============
In a seismically active area, a strong earthquake, namely the [*mainshock*]{}, is often followed by subsequent damaging earthquakes, known as the [*aftershocks*]{}. These aftershocks may occur in large numbers and with magnitudes equivalent to powerful earthquakes on their own. For instance, in the 1999 İzmit earthquake, a magnitude 7.6 mainshock triggered hundreds of aftershocks with magnitudes greater than or equal to 4 in the first six days, cf. [@Polatetal2002]. In the 2008 Sichuan earthquake, a mainshock of magnitude 8.0 induced a series of aftershocks with magnitudes up to 6.0. The results are severe structural damage and loss of life, especially when the area has already been weakened by the mainshock. The İzmit earthquake killed over 17,000 people and left half a million homeless [@marza2004]. The Sichuan earthquake caused over 69,000 deaths and damages of over 150 billion US dollars [@cui2011].
The goal of this paper is to provide a statistical analysis for the joint event of an extreme mainshock and extreme aftershocks. Throughout the paper, we denote the magnitude of a mainshock with $X$ and that of the largest aftershock with $Y$. We estimate via two approaches the probability of $$\P(X>s, Y>t), \label{eq:pst}$$ for large values of $s$ and $t$. The first approach uses a parametric model based on a series of well-known stochastic laws that describe the empirical relationships of the aftershocks and the mainshock, which we briefly review in Section \[subsec:par\]. In the second approach, we apply bivariate extreme value theory to estimate the joint tail. Both methods are applied to the extreme earthquake events in the North Anatolian Fault Zone (NAFZ) in Turkey, the region where the 1999 İzmit earthquake occurred.
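As a point of reference for the non-parametric route, recall the standard first-order approximation from bivariate extreme value theory, stated here in our own notation as a reader's aid (the precise estimator used later may differ in detail). Assume continuous margins and write $\overline F_X$ and $\overline F_Y$ for the marginal survival functions. If the limit $$R(x,y) = \lim_{u\to\infty} u\,\P\left(1-F_X(X)\le x/u,\ 1-F_Y(Y)\le y/u\right)$$ exists, then, since $R$ is homogeneous of order one, for $s$ and $t$ large, $$\P(X>s,\ Y>t) \approx R\big(\overline F_X(s),\ \overline F_Y(t)\big),$$ so the joint tail probability in \[eq:pst\] can be estimated by combining an empirical estimate of $R$ with univariate tail estimates of $\overline F_X(s)$ and $\overline F_Y(t)$.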
The remainder of the paper is structured as follows. In Section \[Sec:Data\], we present the earthquake data in NAFZ and describe the relevant data processing. Section \[Sec:Method\] provides the parametric and non-parametric estimation procedures for the joint main-/after-shock distribution. The detailed data analysis and results are presented in Section \[Sec:Result\]. We conclude in Section \[Sec:Discuss\] with some discussions.
Data description {#Sec:Data}
================
We use the North Anatolian Fault Zone (NAFZ) as the area of investigation due to its long and extensive historical record of large earthquakes [@Ambraseys1970; @AmbraseysFinkel1987]. Extending from eastern Turkey to Greece, the 1,500-kilometer-long fault has sustained several cycle-like sequences of large-magnitude ($M>7$) earthquakes over the past centuries [@Steinetal1997], several resulting in high death tolls and severe economic losses. The most recent activities include the İzmit (Mw 7.6) and Düzce (Mw 7.1) earthquakes of 1999 [@Parsonsetal2000; @Reilingeretal2000].[^4] We obtain data from the Earthquake Department database of the Turkish Disaster and Emergency Management Authority (<https://deprem.afad.gov.tr/?lang=en>) and consider all earthquake records between 1965 and 2018 with magnitude 4 or higher, in the area of $39.00^\circ-42.00^\circ$ latitude and $26.00^\circ-40.00^\circ$ longitude. The left panel of Figure \[fig:shocks\] shows the time series of all earthquakes from 1965 onward.
We now label earthquake events by identifying the mainshocks and their corresponding aftershocks. Being interested in extreme events, we only consider earthquake events with significant mainshocks, $X\ge5$. We use the window algorithm proposed in [@gardner1974] as follows. For each shock with magnitude $X\ge5$, we scan the window within distance $L(X)$ and time $T(X)$. If a larger shock exists in this window, we move on to that shock and perform the same scan. If not, the shock is labelled as a mainshock and all shocks within the specified window are labelled as its aftershocks. Table \[tab:aftershock\] provides the values of $L(X)$ and $T(X)$. For example, for an earthquake of magnitude 6.0, any shock within the following $T=510$ days and within an $L=54$ km radius, with a magnitude less than 6, is considered to be its aftershock.
The right panel of Figure \[fig:shocks\] shows the labelled mainshocks in the time series. The algorithm identifies $n=180$ earthquake events with mainshocks $X\ge5$, among which 129 have aftershocks with magnitude greater than 4. Note that a few large earthquakes in the early years are not labelled as mainshocks; instead, based on earlier data, they are identified as aftershocks of mainshocks occurring before 1965.
--------- ------ --------
$X$ $L$ $T$
(km) (days)
5.0–5.4 40 155
5.5–5.9 47 290
6.0–6.4 54 510
6.5–6.9 61 790
7.0–7.4 70 915
7.5–7.9 81 960
8.0–8.4 94 985
--------- ------ --------
: Window specification $L(X)$, $T(X)$ for aftershock labelling from [@gardner1974].[]{data-label="tab:aftershock"}
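The labelling procedure above can be sketched in a few lines of Python. This is a simplified illustration, not the authors' code: it applies only the time window $T(X)$ from Table \[tab:aftershock\] (ignoring the spatial window $L(X)$), and it approximates the iterative "move to the larger shock" step by two containment checks.

```python
# Simplified sketch of the Gardner & Knopoff (1974) window algorithm,
# assuming events are (time_in_days, magnitude) pairs sorted by time.
# Only the time window T(X) from the table above is applied; the spatial
# window L(X) and the iterative rescanning are approximated here.

# Time windows T(X) in days, keyed by the lower bound of the magnitude bin.
T_WINDOW = {5.0: 155, 5.5: 290, 6.0: 510, 6.5: 790, 7.0: 915, 7.5: 960, 8.0: 985}

def time_window(mag):
    """Return T(X) for the magnitude bin containing mag (mag >= 5)."""
    return T_WINDOW[max(b for b in T_WINDOW if b <= mag)]

def label_mainshocks(events):
    """Return (time, magnitude, largest aftershock magnitude or None) for
    every shock with magnitude >= 5 that qualifies as a mainshock."""
    mainshocks = []
    for i, (t, m) in enumerate(events):
        if m < 5:
            continue
        w = time_window(m)
        # Not a mainshock if a larger shock occurs within its own window ...
        if any(t < t2 <= t + w and m2 > m for t2, m2 in events[i + 1:]):
            continue
        # ... or if it lies inside the window of an earlier, larger shock.
        if any(m2 > m and t2 < t <= t2 + time_window(m2)
               for t2, m2 in events[:i]):
            continue
        after = [m2 for t2, m2 in events if t < t2 <= t + w]
        mainshocks.append((t, m, max(after) if after else None))
    return mainshocks
```

For instance, a magnitude 5.8 shock occurring 10 days after a magnitude 7.6 shock is absorbed as an aftershock, while a magnitude 6.0 shock occurring 2000 days later starts a new event.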
[.49]{} ![Shocks and labelled mainshocks in the NAFZ during 1965–2018.[]{data-label="fig:shocks"}](ts_shocks2.pdf "fig:"){width="\linewidth"}
[.49]{} ![Shocks and labelled mainshocks in the NAFZ during 1965–2018.[]{data-label="fig:shocks"}](ts_shocks_lab2.pdf "fig:"){width="\linewidth"}
Methodology {#Sec:Method}
===========
Parametric approach {#subsec:par}
-------------------
It is agreed in the literature that the distribution of aftershocks in space, time and magnitude can be characterized by stochastic laws, see [@Utsu1970; @Utsu1971] and [@Utsu1972] for a summary with detailed empirical studies. In this section, we propose a simple parametric model for the joint magnitudes of the mainshock and the largest aftershock based on these relationships. This derivation is similar to that in [@Vere2006].
The following empirical regularities of aftershocks have been noted in the prior literature.
1. The frequency $g(t)$ of the aftershocks per time unit at time $t$ after the mainshock follows the modified Omori’s law: $$g(t) = \frac{K}{(t+c)^p},$$ where $K, c, p$ are constants [@Utsu1970].
2. The magnitude of the aftershocks follows Gutenberg-Richter’s law [@GutenbergRichter1944], that is, the number of aftershocks $N(m)$ with magnitude $m$ follows $$\label{eq:gr}
N(m) = 10^{a-bm},$$ where $a,b$ are constants.
3. The magnitude difference between a mainshock and its largest aftershock is approximately constant, independent of the mainshock magnitude, and typically between 1.1 and 1.2; this is known as Båth's law [@Bath1965].
Based on the above, [@Utsu1970] modelled the intensity rate of aftershocks with magnitude $m$ as $$\label{eq:as_intensity}
\lambda(t,m) = \frac{10^{a+b(m_0-m)}}{(t+c)^p}, \quad m \le m_0,$$ where $m_0$ is the mainshock magnitude and $a,b,c,p$ are constants. By definition, $m\le m_0$, so that aftershocks are always smaller than the mainshock. This model is used widely in the subsequent literature, cf. [@ReasenbergJones1989], and is the basis of the ETAS (epidemic-type aftershock sequence) simulation model, cf. [@Ogata1988]. It is common to model the occurrences of aftershocks as a Poisson point process.
On the other hand, the mainshocks can be considered as independent events and their magnitudes can also be modelled by the Gutenberg-Richter law in \[eq:gr\] [@Utsu1972]. In the following, we model the magnitude of the mainshocks $X$ by an exponential distribution with survival and density functions $$\label{eq:ms}
\P(X>x) = e^{-\alpha x}, \quad f_X(x) = \alpha e^{-\alpha x}.$$
### The model
Let $X_A$ denote the magnitude of an aftershock. Given the mainshock $X=m_0$, we assume that the aftershock sequence follows a non-homogeneous Poisson process with intensity function \[eq:as\_intensity\]. We derive the following.
- The total number of aftershocks $N$ is a Poisson random variable with mean $$\E[N|X=m_0] = \sum_{t=1}^\infty \int_0^{m_0} \lambda(t,u) du =: Ce^{\beta m_0} \left(1-e^{-\beta m_0}\right),$$ where $\beta = b \ln 10$ and $C = \frac{1}\beta10^a\sum_{t=1}^\infty \frac{1}{(t+c)^p}$.
- The conditional distribution of $X_A$ follows $$\P(X_A>m|X=m_0) = \frac{\sum_{t=1}^\infty \int_m^{m_0} \lambda(t,u) du}{\sum_{t=1}^\infty \int_0^{m_0} \lambda(t,u) du} = \frac{e^{-\beta m}-e^{-\beta m_0}}{1-e^{-\beta m_0}}, \quad 0 \le m \le m_0,$$ and is conditionally independent of $N$.
Observe that the largest aftershock $Y=\max_{1\leq i\leq N}X_{A_i}$. Therefore, by the conditional independence of $N$ and $X_A$, it follows that for $m\in [0, m_0]$, $$\begin{aligned}
\P(Y\le m|X=m_0) &=& \E\left[(1-P(X_A>m))^N|X=m_0\right] \\
&=& \exp\left\{-\P(X_A>m|X=m_0){\E[N|X=m_0]}\right\} \\
&=& \exp\left\{-\frac{e^{-\beta m}-e^{-\beta m_0}}{1-e^{-\beta m_0}} \cdot Ce^{\beta m_0} \left(1-e^{-\beta m_0}\right) \right\} \\
&=& \exp\left\{-C\left(e^{-\beta (m-m_0)}-1\right)\right\},
\end{aligned}$$ where the second equality follows from the probability generating function of the Poisson distribution: $\E[(1-p)^N]=\exp(-p\lambda)$ for $N\sim \text{Poi}(\lambda)$ and $p \in (0, 1)$. Let $Z := X - Y$; then $Z$ has survival function $$\label{eq:gompertz}
F(z) := P(Z\ge z) = \exp\left\{-C\left(e^{\beta z}-1\right)\right\}, \quad z\le m_0.$$ This means that $Z$ follows a Gompertz distribution, that is, $-Z$ follows a Gumbel distribution conditioned to be negative. If we impose the convention that $\{Z>m_0\}=\{X_A<0\}$ represents the event that no aftershock occurs, then we can model $Z$ as independent of $X$. Note that when $m_0$ is large, the probability of $\{Z>m_0\}$ is negligible.
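The derived Gompertz survival function can be checked by simulating the aftershock model directly. The sketch below is illustrative only (not the authors' code): `numpy` is assumed, and the parameter values in the usage example are chosen for speed rather than seismic realism.

```python
import numpy as np

def gompertz_sf(z, beta, C):
    """Derived survival function P(Z >= z) = exp{-C (e^{beta z} - 1)}."""
    return np.exp(-C * (np.exp(beta * z) - 1.0))

def simulate_Z(m0, beta, C, n_sims=20_000, seed=0):
    """Draw Z = m0 - Y by simulating the aftershock model directly:
    N ~ Poisson(C e^{beta m0}(1 - e^{-beta m0})) and, given N = n,
    Y is the maximum of n i.i.d. magnitudes with conditional law
    P(X_A > m | X = m0) = (e^{-beta m} - e^{-beta m0})/(1 - e^{-beta m0})."""
    rng = np.random.default_rng(seed)
    lam = C * np.exp(beta * m0) * (1.0 - np.exp(-beta * m0))  # E[N | X = m0]
    n = rng.poisson(lam, size=n_sims)
    z = np.full(n_sims, np.inf)        # convention: no aftershock => Z > m0
    pos = n > 0
    # The max of n i.i.d. U(0,1) is U**(1/n); push it through the inverse
    # CDF F_A(m) = (1 - e^{-beta m}) / (1 - e^{-beta m0}) to get Y directly.
    umax = rng.random(pos.sum()) ** (1.0 / n[pos])
    y = -np.log(1.0 - umax * (1.0 - np.exp(-beta * m0))) / beta
    z[pos] = m0 - y
    return z
```

The empirical frequency of $\{Z \ge z\}$ in such a simulation matches `gompertz_sf(z, beta, C)` up to Monte Carlo error, confirming the derivation.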
Combining \[eq:ms\] and \[eq:gompertz\] yields the joint model for $(X,Y)$ given by $$\P(X>x,Y>y) = \P(X>x,Z<X-y) = \int_x^\infty f_X(u) \int_0^{u-y} f_Z(z)\, dz\, du,$$ where $f_X$ is the density defined in \[eq:ms\] and $f_Z$ is the density corresponding to \[eq:gompertz\]. Given data, the parameters $(\alpha,\beta,C)$ can be estimated by maximum likelihood.
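Once $(\alpha,\beta,C)$ are given, the double integral above can be evaluated by one-dimensional numerical quadrature, since the inner integral is just the Gompertz CDF. The sketch below is illustrative (not the authors' code): `scipy` is assumed, and the truncation of the exponential margin used later in the data analysis is ignored here.

```python
import numpy as np
from scipy.integrate import quad

def joint_tail(x, y, alpha, beta, C):
    """P(X > x, Y > y) = int_x^inf f_X(u) * P(Z < u - y) du, with
    X ~ Exp(alpha) and Z = X - Y ~ Gompertz(beta, C), independent of X."""
    def cdf_Z(z):
        # P(Z < z) = 1 - exp{-C (e^{beta z} - 1)}, zero for z < 0
        return 0.0 if z < 0 else 1.0 - np.exp(-C * (np.exp(beta * z) - 1.0))
    integrand = lambda u: alpha * np.exp(-alpha * u) * cdf_Z(u - y)
    lo = max(x, y)                     # Y <= X restricts the region to u >= y
    val, _err = quad(integrand, lo, np.inf)
    return val
```

The estimate is bounded above by $\P(X>x)=e^{-\alpha x}$ and decreases in $y$, which provides a quick sanity check of the implementation.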
Bivariate extreme value approach
--------------------------------
Multivariate extreme value statistics has been shown to be a powerful tool for inference on multidimensional risk factors. Examples of applications can be found in [@dehaan1998; @ledford1997] and [@PoonRockingerTawn2004], among others. Recall that the goal is to estimate the probability $\P(X>s, Y>t)$. To this end, we assume that the joint distribution of $(X,Y)$ is in the max domain of attraction of a bivariate extreme value distribution, introduced in [@deHaanResnick1977]. This is a common condition in multivariate tail analysis and covers distributions with various types of copulas. Let $F_1$ and $F_2$ denote the marginal distribution functions of $X$ and $Y$, respectively. The assumption implies that for any $(x, y)\in [0, \infty]^2 \setminus \{(\infty, \infty)\}$, the following limit exists: $$\lim_{t\rightarrow 0}\frac{1}{t}\P(1-F_1(X)<tx, 1-F_2(Y)<ty)=:R(x,y). \label{eq: R}$$ The function $R$ characterizes the extremal dependence between $X$ and $Y$, and it can be expressed via other extremal dependence measures. For instance, it is linked to the stable tail dependence function $L$ and the Pickands dependence function $A$ by $$R(x,y)=x+y-L(x, y)=(x+y)\left(1-A\left(\frac{y}{x+y}\right)\right). \label{eq: RLA}$$ For a general review of multivariate extreme value theory, see for example Chapter 6 in [@deHaanFerreira2006] and Chapter 8 in [@Beirlantetal2004].
The limit relation in \[eq: R\] guarantees regularity in the right tail of the copula of $(X, Y)$, which enables bivariate extrapolation to a range far beyond the historical observations. Let $s$ and $t$ be sufficiently large and denote $p_1=\P(X>s)$ and $p_2=\P(Y>t)$. Then $$\begin{aligned}
\P(X>s, Y>t)&=&\P(1-F_1(X)<p_1, 1-F_2(Y)<p_2)\nonumber\\
&=&p_2\cdot \frac{1}{p_2}\P\left(1-F_1(X)<p_2\cdot \frac{p_1}{p_2}, 1-F_2(Y)<p_2\right) \nonumber\\
&\approx &p_2 R\left(\frac{p_1}{p_2},1\right). \label{eq:pst}\end{aligned}$$ The problem thus reduces to estimating $p_1$, $p_2$ and $R(x, 1)$. Due to the relation in \[eq: RLA\], the various methods for estimating $L$ or $A$ can be applied to estimate $R(x, 1)$; see for instance [@Caperaaetal1997; @Einmahletal2008; @Bucheretal2011; @Fougeresetal2015; @Beirlantetal2016] among many others. Because of particular features of earthquake data (magnitudes are rounded to one decimal place and censored below), we use a basic non-parametric estimator of $R(x, 1)$, which requires the fewest assumptions on the data and is the basis of more advanced estimation approaches. Let $n$ be the sample size and $k=k(n)$ be a sequence of integers such that $k\rightarrow\infty$ and $k/n\rightarrow 0$ as $n\rightarrow \infty$. Let $R_i^X$ and $R_i^Y$ denote the ranks of $X_i$ and $Y_i$ in their respective samples. The estimator of $R(x,1)$ is given by $$\begin{aligned}
\hat R(x, 1)=\frac{1}{k}\sum_{i=1}^n I(R_i^X>n+1/2-kx, R_i^Y>n+1/2-k). \label{eq: hatR}\end{aligned}$$
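A direct implementation of this rank-based estimator, together with the extrapolation step described above, might look as follows. This is an illustrative sketch assuming `numpy`; ties are broken deterministically here rather than at random as in the data analysis.

```python
import numpy as np

def R_hat(xs, ys, k, x):
    """Rank-based estimator of R(x, 1):
    (1/k) * sum_i 1{ R_i^X > n + 1/2 - k*x,  R_i^Y > n + 1/2 - k }."""
    n = len(xs)
    rx = np.argsort(np.argsort(xs)) + 1   # ranks 1..n
    ry = np.argsort(np.argsort(ys)) + 1   # (ties broken by input order here)
    ind = (rx > n + 0.5 - k * x) & (ry > n + 0.5 - k)
    return ind.sum() / k

def joint_tail_evt(p1, p2, xs, ys, k):
    """Extrapolation P(X > s, Y > t) ~ p2 * R(p1/p2, 1), where
    p1 = P(X > s) and p2 = P(Y > t) come from the fitted margins."""
    return p2 * R_hat(xs, ys, k, p1 / p2)
```

As a sanity check, for a comonotone sample ($Y_i = X_i$) the estimator returns $\hat R(x,1)=\min(x,1)$ exactly, the strongest possible tail dependence.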
As for estimating $p_1$ and $p_2$, we fit exponential distributions to both margins, a typical choice for modeling earthquake magnitudes justified by the Gutenberg-Richter law [@GutenbergRichter1944]. A natural alternative is to apply univariate extreme value theory to estimate these tail probabilities. Many studies have been devoted to the tail distribution or the endpoint of the earthquake magnitude distribution; see for instance [@Kijko2004] and [@Beirlantetal2018]. However, due to the small sample size and the rounding issue, we choose to fit parametric margins.
Results {#Sec:Result}
=======
From the data processed in Section \[Sec:Data\] we extract the time series of mainshock magnitudes $(x_i)$ with $x_i\ge5$. For the time series of the corresponding largest aftershocks, we only observe the values above 4, that is, we observe $\left(y_i \bf1_{\{y_i\ge4\}}\right)$. The two time series are plotted in Figure \[fig:ts\].
Parametric approach {#parametric-approach}
-------------------
The mainshock sequence $(x_i)$ is fitted with an exponential distribution truncated at 4.95, where we take the continuity correction into account. Since all observations are discrete with 0.1 increments, from now on, whenever we show the fit of a distribution or calculate a goodness-of-fit $p$-value, we jitter all observations by uniform noise on $(-0.05,0.05)$. The fit is shown in the left panel of Figure \[fig:fit\]; the Kolmogorov-Smirnov $p$-value is 0.83, indicating a good fit.
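The jitter-then-test procedure can be sketched as follows. This is illustrative only, assuming `scipy`; note that, as in the text, the reported $p$-value ignores the fact that the exponential parameter is estimated from the same data.

```python
import numpy as np
from scipy import stats

def jittered_ks_exponential(mags, loc=4.95, seed=0):
    """Jitter 0.1-rounded magnitudes by U(-0.05, 0.05), fit a shifted
    exponential by maximum likelihood, and run a Kolmogorov-Smirnov test.
    The p-value treats the fitted scale as known (as in the text)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(mags, dtype=float) + rng.uniform(-0.05, 0.05, size=len(mags))
    scale = float(np.mean(x - loc))     # ML estimate of 1/alpha
    return stats.kstest(x, "expon", args=(loc, scale))
```

Calling this on the mainshock magnitudes returns the KS statistic and $p$-value for the truncated exponential fit.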
Next we fit a Gompertz distribution to the differences $x_i-y_i$ by maximizing the following censored likelihood: $$L(\beta,C|x_i,y_i) = \prod_{y_i\ge4}f_Z(x_i-y_i;\beta,C) \prod_{y_i<4}(1-F_Z(x_i-4;\beta,C)),$$ where $1-F_Z$ is the survival function given in \[eq:gompertz\] and $f_Z$ is the corresponding density. To assess the goodness of fit, we first approximate the complete maximum-aftershock sequence by $\tilde{y}_i$ as follows. When $y_i\ge4$, set $\tilde{y}_i :=y_i$. When $y_i<4$, simulate $z_i$ from $F$ conditional on $z_i \ge x_i - 4$ and set $\tilde{y}_i:= x_i - z_i$. The histogram of the jittered $\tilde{y}_i$ is shown in the right panel of Figure \[fig:fit\] together with the fitted density. The $p$-value is 0.95.
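A sketch of this censored maximum likelihood fit (not the authors' code; `scipy` is assumed, and positivity of $\beta$ and $C$ is enforced by a log-parametrization):

```python
import numpy as np
from scipy.optimize import minimize

def fit_gompertz_censored(x, y_obs):
    """Censored ML fit of Z = X - Y ~ Gompertz(beta, C); y_obs[i] is the
    largest-aftershock magnitude when >= 4 and np.nan when censored below 4."""
    def sf(z, beta, C):                  # P(Z >= z) = 1 - F_Z(z)
        return np.exp(-C * (np.exp(beta * z) - 1.0))
    def pdf(z, beta, C):                 # f_Z(z) = C * beta * e^{beta z} * sf(z)
        return C * beta * np.exp(beta * z) * sf(z, beta, C)
    obs = ~np.isnan(y_obs)
    z_obs = x[obs] - y_obs[obs]          # fully observed differences
    z_cens = x[~obs] - 4.0               # censored: only know Z >= x_i - 4
    def nll(theta):
        beta, C = np.exp(theta)          # log-parametrization keeps both > 0
        return -(np.sum(np.log(pdf(z_obs, beta, C)))
                 + np.sum(np.log(sf(z_cens, beta, C))))
    res = minimize(nll, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
    return np.exp(res.x)                 # (beta_hat, C_hat)
```

On data simulated from a known Gompertz law with censoring below 4, the fit recovers the true $(\beta, C)$ up to sampling error.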
The scatterplot of the jittered $(x_i,\tilde{y}_i)$ is shown in Figure \[fig:fill\], with red points indicating the simulated values for the censored observations. The simulated values agree with the pattern of the observed pairs. We also note that Båth's law [@Bath1965], the empirical observation that the magnitude difference between the mainshock and the largest aftershock is roughly constant between 1.1 and 1.2, is well reproduced by the fitted model: the fitted mean of $X-Y$ is 1.1. The line $y=x-1.1$ is shown as the dotted line in Figure \[fig:fill\].
[.49]{} ![Time series of mainshocks (left) and largest aftershocks (right).[]{data-label="fig:ts"}](ts_ms2.pdf "fig:"){width="\linewidth"}
[.49]{} ![Time series of mainshocks (left) and largest aftershocks (right).[]{data-label="fig:ts"}](ts_mas2.pdf "fig:"){width="\linewidth"}
[.49]{} ![The histogram and fitted curve of: i) mainshocks with exponential distribution (left); ii) differences between mainshocks and largest aftershocks with Gompertz distribution (right).[]{data-label="fig:fit"}](fit_ms.pdf "fig:"){width="\linewidth"}
[.49]{} ![The histogram and fitted curve of: i) mainshocks with exponential distribution (left); ii) differences between mainshocks and largest aftershocks with Gompertz distribution (right).[]{data-label="fig:fit"}](fit_diff.pdf "fig:"){width="\linewidth"}
![Scatterplot of mainshocks vs. aftershocks. Black indicates observations; red indicates aftershocks censored below magnitude 4 and simulated from the fitted model. The solid line indicates $y=x$; all observations fall below it since $Y\le X$ by definition. The dotted line indicates $y=x-1.1$, as suggested by Båth's law.[]{data-label="fig:fill"}](fill_missing.pdf){width=".7\linewidth"}
Extreme value approach
----------------------
For the extreme value approach, we use the same estimate of the marginal distribution of $X$ as in the parametric approach. For the marginal distribution of $Y$, we fit an exponential distribution truncated at 4.55. The fitted density is shown in Figure \[fig:histY\]; the Kolmogorov-Smirnov $p$-value is 0.11. The fitted distributions are used to compute $p_1$ and $p_2$ in \[eq:pst\].
When estimating $R(x, 1)$, we note that there are ties in the data, as magnitudes are rounded to one decimal place. We handle this by randomly assigning ranks to the tied observations. The missing values of $Y$ (censored below 4) do not affect the estimator in \[eq: hatR\], provided that $k$ is smaller than $n_1$, the number of observed $Y_i$'s. Indeed, the ranks of the missing values are at most $n-n_1$, so the corresponding indicator in \[eq: hatR\] equals zero regardless of the precise value of $R_i^Y$.
The left panel of Figure \[Fig:Rk\] shows the estimates of $R(x,1)$ for three different values of $x$ and $k\in [10, 100]$. Note that $R(1,1)$ is a commonly used quantity to distinguish tail dependence ($R(1,1)>0$) from tail independence ($R(1,1)=0$). Roughly speaking, under tail dependence the extremes of $X$ and $Y$ tend to occur simultaneously, while under tail independence joint extremes rarely occur. The plot clearly suggests tail dependence between $X$ and $Y$, as the estimates of $R(1,1)$ are well above zero. Based on these three curves, we choose $k=40$.
With the choice of $k=40$, we obtain the non-parametric estimate of $R(x, 1)$ for $x\in[0.02, 5]$, plotted as the black curve in the right panel of Figure \[Fig:Rk\]. The wiggly behaviour of this estimator motivates us to consider a smoothing method. We adopt the method introduced in [@Kiriliouketal2018], which makes use of the beta copula. This smoothed estimator, denoted by $\hat R_b(x,1)$, respects the pointwise upper bound of the function, that is, $R(x, 1)\leq \min (x, 1)$, and it does not require a smoothing parameter such as a bandwidth. The resulting estimates are shown as the red curve in the right panel of Figure \[Fig:Rk\]. The two estimators are coherent with each other. $\hat R_b(x,1)$ is only used in obtaining the level curves in Figure \[fig:level\].
![The histogram of the largest aftershocks and fitted exponential density[]{data-label="fig:histY"}](histY.pdf){width="0.5\linewidth"}
Results and comparison
----------------------
We are now ready to estimate probabilities for the joint tail of $(X,Y)$. First, we estimate the tail probability[^5] defined in \[eq:pst\] for the ten largest earthquakes (mainshocks) in the NAFZ since 1965. As shown in the fifth and sixth columns of Table \[tab:prob\], the estimates from the two approaches are remarkably close to each other, which supports the plausibility of the results. We emphasize that the two approaches share only one assumption, namely the marginal distribution of $X$. The distribution of $Y$ and the dependence between $X$ and $Y$ are modelled separately.
Next we obtain the level curves of $(X, Y)$ for the sequence of tail probabilities $$(10^{-3},5\cdot10^{-4},10^{-4},5\cdot10^{-5},10^{-5},5\cdot10^{-6},10^{-6}),$$ as shown in Figure \[fig:level\]. Each point $(x, y)$ on a curve satisfies $\P(X>x,Y>y) = p$ for the given probability level $p$. Albeit based on different theories, the two approaches provide coinciding predictions. The two dotted lines in Figure \[fig:level\] correspond to $x=y$ and $x=y+1.1$. The nearly horizontal shape of the curves between these two lines indicates that the results respect Båth's law. This is particularly remarkable for the non-parametric approach, which does not impose any dependence structure on $(X, Y)$.
![Predicted level curve for $(X,Y)$ based on the parametric model (red dotted) and extreme value analysis (black solid), with existing observations.[]{data-label="fig:level"}](level.pdf){width=".8\linewidth"}
---- ------------ ----------- ------------ ------------- ---------------- ----------------
largest parametric non-parametric
date mainshock aftershock probability probability location
1 1999-08-17 7.6 5.8 0.00265 0.00257 İzmit
2 1970-03-28 7.2 5.6 0.00618 0.00580 Gediz
3 1999-11-12 7.1 5.2 0.00815 0.00805 Düzce
4 1967-07-22 6.8 5.4 0.01413 0.01356 Mudurnu
5 1992-03-13 6.6 5.9 0.01429 0.01464 Erzincan
6 2002-02-03 6.5 5.8 0.01785 0.01827 Afyon
7 1969-03-28 6.5 4.9 0.02927 0.02745 Alaşehir
8 1968-09-03 6.5 4.6 0.03092 0.03051 Bartin
9 1995-10-01 6.4 5.0 0.03437 0.03296 Dinar
10 2017-07-20 6.3 5.1 0.03938 0.03747 Mugla Province
---- ------------ ----------- ------------ ------------- ---------------- ----------------
: Tail probability estimation for the ten largest earthquakes in the NAFZ since 1965.[]{data-label="tab:prob"}
Discussion {#Sec:Discuss}
==========
In this paper we consider estimating the tail probability of an extreme earthquake event in which the mainshock magnitude $X$ and the largest aftershock magnitude $Y$ both exceed certain thresholds. We approach the problem from two directions. On one hand, based on well-known stochastic laws for aftershocks, we propose a joint parametric model for $(X,Y)$, estimate the model by (censored) maximum likelihood, and calculate the desired probabilities from the fitted model. On the other hand, we use non-parametric methods from bivariate extreme value analysis to extrapolate tail probabilities. We illustrate both methods using earthquake data from the North Anatolian Fault Zone (NAFZ) in Turkey from 1965 to 2018. The two approaches produce surprisingly consistent results.
This is an exploratory effort in applying multivariate extreme value analysis to seismology problems, and many extensions are possible. For example, the occurrences of earthquake events can be modelled in time, and return levels of extreme events can be estimated. Further information, such as the distance between shocks and other geological covariates, can be incorporated into the analysis to provide more accurate or customized results.
This paper serves as a confirmation that simple techniques from multivariate extreme value analysis, even with little expert knowledge behind the data, are able to provide useful information in the analysis of extreme events.
Acknowledgements {#acknowledgements .unnumbered}
================
The authors would like to thank Anna Kiriliouk for helpful discussion on a smoothed estimator of the stable tail dependence function.
[^1]: Department of Applied Mathematics, Delft University of Technology, Mekelweg 4 2628 CD Delft, the Netherlands; email: [email protected]
[^2]: Econometric Institute, Erasmus University Rotterdam, Burg. Oudlaan 50, 3062 PA Rotterdam, the Netherlands; email: [email protected]
[^3]: Department of Statistics, Hacettepe University, 06800 Ankara, Turkey; [email protected]
[^4]: Regarding the earthquake scales in our data: before 1977, all earthquakes were recorded on the body-wave magnitude scale (mb) or the surface-wave magnitude scale (Ms), depending on the depth of the earthquake. Following the development of the moment magnitude scale (Mw) by [@kanamori1977; @hanks1979], earthquakes with magnitude larger than 5.0 have been recorded on the Mw scale, whereas smaller earthquakes were still generally measured on the mb or Ms scale. From 2012 on, all earthquakes are recorded on the Mw scale.
[^5]: We remark that, as our data set consists only of mainshocks with magnitude $X\geq 5$, the probability in this section has to be interpreted as conditional on the occurrence of a significant mainshock with $X\ge5$.
The National Primary Drinking Water Regulations (NPDWR) are legally enforceable primary standards and treatment techniques that apply to public water systems. Primary standards and treatment techniques protect public health by limiting the levels of contaminants in drinking water.
Microorganisms

| Contaminant | MCLG1 (mg/L)2 | MCL or TT1 (mg/L)2 | Potential Health Effects from Long-Term Exposure Above the MCL (unless specified as short-term) | Sources of Contaminant in Drinking Water |
|---|---|---|---|---|
| Cryptosporidium | zero | TT3 | Gastrointestinal illness (such as diarrhea, vomiting, and cramps) | Human and animal fecal waste |
| Giardia lamblia | zero | TT3 | Gastrointestinal illness (such as diarrhea, vomiting, and cramps) | Human and animal fecal waste |
| Heterotrophic plate count (HPC) | n/a | TT3 | HPC has no health effects; it is an analytic method used to measure the variety of bacteria that are common in water. The lower the concentration of bacteria in drinking water, the better maintained the water system is. | HPC measures a range of bacteria that are naturally present in the environment |
| Legionella | zero | TT3 | Legionnaire's Disease, a type of pneumonia | Found naturally in water; multiplies in heating systems |
| Total Coliforms (including fecal coliform and E. coli) | zero | 5.0%4 | Not a health threat in itself; it is used to indicate whether other potentially harmful bacteria may be present5 | Coliforms are naturally present in the environment as well as in feces; fecal coliforms and E. coli only come from human and animal fecal waste |
| Turbidity | n/a | TT3 | Turbidity is a measure of the cloudiness of water, used to indicate water quality and filtration effectiveness (such as whether disease-causing organisms are present). Higher turbidity levels are often associated with higher levels of disease-causing microorganisms such as viruses, parasites, and some bacteria. These organisms can cause symptoms such as nausea, cramps, diarrhea, and associated headaches. | Soil runoff |
| Viruses (enteric) | zero | TT3 | Gastrointestinal illness (such as diarrhea, vomiting, and cramps) | Human and animal fecal waste |
Inorganic Chemicals

| Contaminant | MCLG1 (mg/L)2 | MCL or TT1 (mg/L)2 | Potential Health Effects from Long-Term Exposure Above the MCL (unless specified as short-term) | Sources of Contaminant in Drinking Water |
|---|---|---|---|---|
| Antimony | 0.006 | 0.006 | Increase in blood cholesterol; decrease in blood sugar | Discharge from petroleum refineries; fire retardants; ceramics; electronics; solder |
| Arsenic | 0 | 0.010 (as of 01/23/06) | Skin damage or problems with circulatory systems; may have increased risk of getting cancer | Erosion of natural deposits; runoff from orchards; runoff from glass and electronics production wastes |
| Asbestos (fiber > 10 micrometers) | 7 million fibers per liter (MFL) | 7 MFL | Increased risk of developing benign intestinal polyps | Decay of asbestos cement in water mains; erosion of natural deposits |
| Barium | 2 | 2 | Increase in blood pressure | Discharge of drilling wastes; discharge from metal refineries; erosion of natural deposits |
| Beryllium | 0.004 | 0.004 | Intestinal lesions | Discharge from metal refineries and coal-burning factories; discharge from electrical, aerospace, and defense industries |
| Cadmium | 0.005 | 0.005 | Kidney damage | Corrosion of galvanized pipes; erosion of natural deposits; discharge from metal refineries; runoff from waste batteries and paints |
| Chromium (total) | 0.1 | 0.1 | Allergic dermatitis | Discharge from steel and pulp mills; erosion of natural deposits |
| Copper | 1.3 | TT7; Action Level=1.3 | Short-term exposure: gastrointestinal distress. Long-term exposure: liver or kidney damage. People with Wilson's Disease should consult their personal doctor if the amount of copper in their water exceeds the action level | Corrosion of household plumbing systems; erosion of natural deposits |
| Cyanide (as free cyanide) | 0.2 | 0.2 | Nerve damage or thyroid problems | Discharge from steel/metal factories; discharge from plastic and fertilizer factories |
| Fluoride | 4.0 | 4.0 | Bone disease (pain and tenderness of the bones); children may get mottled teeth | Water additive which promotes strong teeth; erosion of natural deposits; discharge from fertilizer and aluminum factories |
| Lead | zero | TT7; Action Level=0.015 | Infants and children: delays in physical or mental development; children could show slight deficits in attention span and learning abilities. Adults: kidney problems; high blood pressure | Corrosion of household plumbing systems; erosion of natural deposits |
| Mercury (inorganic) | 0.002 | 0.002 | Kidney damage | Erosion of natural deposits; discharge from refineries and factories; runoff from landfills and croplands |
| Nitrate (measured as Nitrogen) | 10 | 10 | Infants below the age of six months who drink water containing nitrate in excess of the MCL could become seriously ill and, if untreated, may die. Symptoms include shortness of breath and blue-baby syndrome. | Runoff from fertilizer use; leaching from septic tanks, sewage; erosion of natural deposits |
| Nitrite (measured as Nitrogen) | 1 | 1 | Infants below the age of six months who drink water containing nitrite in excess of the MCL could become seriously ill and, if untreated, may die. Symptoms include shortness of breath and blue-baby syndrome. | Runoff from fertilizer use; leaching from septic tanks, sewage; erosion of natural deposits |
| Selenium | 0.05 | 0.05 | Hair or fingernail loss; numbness in fingers or toes; circulatory problems | Discharge from petroleum refineries; erosion of natural deposits; discharge from mines |
| Thallium | 0.0005 | 0.002 | Hair loss; changes in blood; kidney, intestine, or liver problems | Leaching from ore-processing sites; discharge from electronics, glass, and drug factories |
Organic Chemicals

| Contaminant | MCLG1 (mg/L)2 | MCL or TT1 (mg/L)2 | Potential Health Effects from Long-Term Exposure Above the MCL (unless specified as short-term) | Sources of Contaminant in Drinking Water |
|---|---|---|---|---|
| Acrylamide | zero | TT8 | Nervous system or blood problems; increased risk of cancer | Added to water during sewage/wastewater treatment |
| Alachlor | zero | 0.002 | Eye, liver, kidney or spleen problems; anemia; increased risk of cancer | Runoff from herbicide used on row crops |
| Atrazine | 0.003 | 0.003 | Cardiovascular system or reproductive problems | Runoff from herbicide used on row crops |
| Benzene | zero | 0.005 | Anemia; decrease in blood platelets; increased risk of cancer | Discharge from factories; leaching from gas storage tanks and landfills |
| Benzo(a)pyrene (PAHs) | zero | 0.0002 | Reproductive difficulties; increased risk of cancer | Leaching from linings of water storage tanks and distribution lines |
| Carbofuran | 0.04 | 0.04 | Problems with blood, nervous system, or reproductive system | Leaching of soil fumigant used on rice and alfalfa |
| Carbon tetrachloride | zero | 0.005 | Liver problems; increased risk of cancer | Discharge from chemical plants and other industrial activities |
| Chlordane | zero | 0.002 | Liver or nervous system problems; increased risk of cancer | Residue of banned termiticide |
| Chlorobenzene | 0.1 | 0.1 | Liver or kidney problems | Discharge from chemical and agricultural chemical factories |
| 2,4-D | 0.07 | 0.07 | Kidney, liver, or adrenal gland problems | Runoff from herbicide used on row crops |
| Dalapon | 0.2 | 0.2 | Minor kidney changes | Runoff from herbicide used on rights of way |
| 1,2-Dibromo-3-chloropropane (DBCP) | zero | 0.0002 | Reproductive difficulties; increased risk of cancer | Runoff/leaching from soil fumigant used on soybeans, cotton, pineapples, and orchards |
| o-Dichlorobenzene | 0.6 | 0.6 | Liver, kidney, or circulatory system problems | Discharge from industrial chemical factories |
| p-Dichlorobenzene | 0.075 | 0.075 | Anemia; liver, kidney or spleen damage; changes in blood | Discharge from industrial chemical factories |
| 1,2-Dichloroethane | zero | 0.005 | Increased risk of cancer | Discharge from industrial chemical factories |
| 1,1-Dichloroethylene | 0.007 | 0.007 | Liver problems | Discharge from industrial chemical factories |
| cis-1,2-Dichloroethylene | 0.07 | 0.07 | Liver problems | Discharge from industrial chemical factories |
| trans-1,2-Dichloroethylene | 0.1 | 0.1 | Liver problems | Discharge from industrial chemical factories |
| Dichloromethane | zero | 0.005 | Liver problems; increased risk of cancer | Discharge from drug and chemical factories |
| 1,2-Dichloropropane | zero | 0.005 | Increased risk of cancer | Discharge from industrial chemical factories |
| Di(2-ethylhexyl) adipate | 0.4 | 0.4 | Weight loss, liver problems, or possible reproductive difficulties | Discharge from chemical factories |
| Di(2-ethylhexyl) phthalate | zero | 0.006 | Reproductive difficulties; liver problems; increased risk of cancer | Discharge from rubber and chemical factories |
| Dinoseb | 0.007 | 0.007 | Reproductive difficulties | Runoff from herbicide used on soybeans and vegetables |
| Dioxin (2,3,7,8-TCDD) | zero | 0.00000003 | Reproductive difficulties; increased risk of cancer | Emissions from waste incineration and other combustion; discharge from chemical factories |
| Diquat | 0.02 | 0.02 | Cataracts | Runoff from herbicide use |
| Endothall | 0.1 | 0.1 | Stomach and intestinal problems | Runoff from herbicide use |
| Endrin | 0.002 | 0.002 | Liver problems | Residue of banned insecticide |
| Epichlorohydrin | zero | TT8 | Increased cancer risk, and over a long period of time, stomach problems | Discharge from industrial chemical factories; an impurity of some water treatment chemicals |
| Ethylbenzene | 0.7 | 0.7 | Liver or kidney problems | Discharge from petroleum refineries |
| Ethylene dibromide | zero | 0.00005 | Problems with liver, stomach, reproductive system, or kidneys; increased risk of cancer | Discharge from petroleum refineries |
| Glyphosate | 0.7 | 0.7 | Kidney problems; reproductive difficulties | Runoff from herbicide use |
| Heptachlor | zero | 0.0004 | Liver damage; increased risk of cancer | Residue of banned termiticide |
| Heptachlor epoxide | zero | 0.0002 | Liver damage; increased risk of cancer | Breakdown of heptachlor |
| Hexachlorobenzene | zero | 0.001 | Liver or kidney problems; reproductive difficulties; increased risk of cancer | Discharge from metal refineries and agricultural chemical factories |
| Hexachlorocyclopentadiene | 0.05 | 0.05 | Kidney or stomach problems | Discharge from chemical factories |
| Lindane | 0.0002 | 0.0002 | Liver or kidney problems | Runoff/leaching from insecticide used on cattle, lumber, gardens |
| Methoxychlor | 0.04 | 0.04 | Reproductive difficulties | Runoff/leaching from insecticide used on fruits, vegetables, alfalfa, livestock |
| Oxamyl (Vydate) | 0.2 | 0.2 | Slight nervous system effects | Runoff/leaching from insecticide used on apples, potatoes, and tomatoes |
| Polychlorinated biphenyls (PCBs) | zero | 0.0005 | Skin changes; thymus gland problems; immune deficiencies; reproductive or nervous system difficulties; increased risk of cancer | Runoff from landfills; discharge of waste chemicals |
| Pentachlorophenol | zero | 0.001 | Liver or kidney problems; increased cancer risk | Discharge from wood preserving factories |
| Picloram | 0.5 | 0.5 | Liver problems | Herbicide runoff |
| Simazine | 0.004 | 0.004 | Problems with blood | Herbicide runoff |
| Styrene | 0.1 | 0.1 | Liver, kidney, or circulatory system problems | Discharge from rubber and plastic factories; leaching from landfills |
| Tetrachloroethylene | zero | 0.005 | Liver problems; increased risk of cancer | Discharge from factories and dry cleaners |
| Toluene | 1 | 1 | Nervous system, kidney, or liver problems | Discharge from petroleum factories |
| Toxaphene | zero | 0.003 | Kidney, liver, or thyroid problems; increased risk of cancer | Runoff/leaching from insecticide used on cotton and cattle |
| 2,4,5-TP (Silvex) | 0.05 | 0.05 | Liver problems | Residue of banned herbicide |
| 1,2,4-Trichlorobenzene | 0.07 | 0.07 | Changes in adrenal glands | Discharge from textile finishing factories |
| 1,1,1-Trichloroethane | 0.20 | 0.2 | Liver, nervous system, or circulatory problems | Discharge from metal degreasing sites and other factories |
| 1,1,2-Trichloroethane | 0.003 | 0.005 | Liver, kidney, or immune system problems | Discharge from industrial chemical factories |
| Trichloroethylene | zero | 0.005 | Liver problems; increased risk of cancer | Discharge from metal degreasing sites and other factories |
| Vinyl chloride | zero | 0.002 | Increased risk of cancer | Leaching from PVC pipes; discharge from plastic factories |
| Xylenes (total) | 10 | 10 | Nervous system damage | Discharge from petroleum factories; discharge from chemical factories |
Radionuclides Quick Reference Guide: Radionuclides Rule Information and Summary
Contaminant | MCLG1 (mg/L)2 | MCL or TT1 (mg/L)2 | Potential Health Effects from Long-Term Exposure Above the MCL (unless specified as short-term) | Sources of Contaminant in Drinking Water
Alpha particles | zero | 15 picocuries per Liter (pCi/L) | Increased risk of cancer | Erosion of natural deposits of certain minerals that are radioactive and may emit a form of radiation known as alpha radiation
Beta particles and photon emitters | zero | 4 millirems per year | Increased risk of cancer | Decay of natural and man-made deposits of certain minerals that are radioactive and may emit forms of radiation known as photons and beta radiation
Radium 226 and Radium 228 (combined) | zero | 5 pCi/L | Increased risk of cancer | Erosion of natural deposits
Uranium | zero | 30 ug/L as of 12/08/03 | Increased risk of cancer, kidney toxicity | Erosion of natural deposits
Notes
1 Definitions:
Maximum Contaminant Level Goal (MCLG) - The level of a contaminant in drinking water below which there is no known or expected risk to health. MCLGs allow for a margin of safety and are non-enforceable public health goals.
Maximum Contaminant Level (MCL) - The highest level of a contaminant that is allowed in drinking water. MCLs are set as close to MCLGs as feasible using the best available treatment technology and taking cost into consideration. MCLs are enforceable standards.
Maximum Residual Disinfectant Level Goal (MRDLG) - The level of a drinking water disinfectant below which there is no known or expected risk to health. MRDLGs do not reflect the benefits of the use of disinfectants to control microbial contaminants.
Treatment Technique (TT) - A required process intended to reduce the level of a contaminant in drinking water.
Maximum Residual Disinfectant Level (MRDL) - The highest level of a disinfectant allowed in drinking water. There is convincing evidence that addition of a disinfectant is necessary for control of microbial contaminants.
2 Units are in milligrams per liter (mg/L) unless otherwise noted. Milligrams per liter are equivalent to parts per million (PPM).
3 EPA's surface water treatment rules require systems using surface water or ground water under the direct influence of surface water to disinfect their water, and to filter their water or meet criteria for avoiding filtration, so that the following contaminants are controlled at the following levels:
Cryptosporidium: Unfiltered systems are required to include Cryptosporidium in their existing watershed control provisions
Giardia lamblia: 99.9% removal/inactivation.
Viruses: 99.99% removal/inactivation.
Legionella: No limit, but EPA believes that if Giardia and viruses are removed/inactivated, according to the treatment techniques in the Surface Water Treatment Rule, Legionella will also be controlled.
Turbidity: For systems that use conventional or direct filtration, at no time can turbidity (cloudiness of water) go higher than 1 Nephelometric Turbidity Unit (NTU), and samples for turbidity must be less than or equal to 0.3 NTUs in at least 95 percent of the samples in any month. Systems that use filtration other than the conventional or direct filtration must follow state limits, which must include turbidity at no time exceeding 5 NTUs.
Heterotrophic Plate Count (HPC): No more than 500 bacterial colonies per milliliter.
Long Term 1 Enhanced Surface Water Treatment: Surface water systems or groundwater under the direct influence (GWUDI) systems serving fewer than 10,000 people must comply with the applicable Long Term 1 Enhanced Surface Water Treatment Rule provisions (such as turbidity standards, individual filter monitoring, Cryptosporidium removal requirements, updated watershed control requirements for unfiltered systems).
Long Term 2 Enhanced Surface Water Treatment Rule: This rule applies to all surface water systems or ground water systems under the direct influence of surface water. The rule targets additional Cryptosporidium treatment requirements for higher risk systems and includes provisions to reduce risks from uncovered finished water storage facilities and to ensure that the systems maintain microbial protection as they take steps to reduce the formation of disinfection byproducts.
Filter Backwash Recycling: This rule requires systems that recycle to return specific recycle flows through all processes of the system's existing conventional or direct filtration system or at an alternate location approved by the state.
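The turbidity limits in note 3 lend themselves to a simple numeric check. The sketch below is illustrative only (the function name and structure are my own, not part of any EPA tooling): it flags a month of readings from a conventional or direct filtration system that either exceeds 1 NTU at any time or falls short of the 95 percent rule for the 0.3 NTU ceiling.

```python
def turbidity_compliant(readings_ntu):
    """Monthly turbidity check for conventional/direct filtration systems:
    no sample may exceed 1 NTU, and at least 95% of samples must be at or
    below 0.3 NTU. Illustrative sketch only, not an official EPA tool."""
    if not readings_ntu:
        raise ValueError("need at least one sample")
    if max(readings_ntu) > 1.0:   # "at no time" ceiling
        return False
    within = sum(1 for r in readings_ntu if r <= 0.3)
    return within / len(readings_ntu) >= 0.95

# 19 of 20 samples at or below 0.3 NTU is exactly 95%: compliant
assert turbidity_compliant([0.2] * 19 + [0.5])
# a single sample above 1 NTU fails the month outright
assert not turbidity_compliant([0.1] * 19 + [1.2])
```

Systems using other filtration types would substitute their state limits (including the 5 NTU ceiling) for the thresholds above.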
4 No more than 5.0% of samples may be total coliform-positive (TC-positive) in a month. (For water systems that collect fewer than 40 routine samples per month, no more than one sample can be total coliform-positive per month.) Every sample that has total coliform must be analyzed for either fecal coliforms or E. coli; if two consecutive TC-positive samples occur, and one is also positive for E. coli or fecal coliforms, the system has an acute MCL violation.
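Note 4 is just arithmetic on monthly sample counts. A minimal sketch (the helper function and its name are illustrative, not regulatory code):

```python
def coliform_mcl_violation(routine_samples, tc_positive):
    """Monthly total-coliform MCL check (illustrative sketch, not EPA code).
    Systems collecting fewer than 40 routine samples a month are allowed at
    most one TC-positive sample; larger systems are held to the 5.0% rule."""
    if routine_samples < 40:
        return tc_positive > 1
    return tc_positive / routine_samples > 0.05

# a 30-sample system violates on its second positive sample
assert coliform_mcl_violation(30, 2)
assert not coliform_mcl_violation(30, 1)
# a 100-sample system: exactly 5.0% is not "more than 5.0%"
assert not coliform_mcl_violation(100, 5)
assert coliform_mcl_violation(100, 6)
```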
5 Fecal coliform and E. coli are bacteria whose presence indicates that the water may be contaminated with human or animal wastes. Disease-causing microbes (pathogens) in these wastes can cause diarrhea, cramps, nausea, headaches, or other symptoms. These pathogens may pose a special health risk for infants, young children, and people with severely compromised immune systems.
6 Although there is no collective MCLG for this contaminant group, there are individual MCLGs for some of the individual contaminants:
Trihalomethanes: bromodichloromethane (zero); bromoform (zero); dibromochloromethane (0.06 mg/L); chloroform (0.07 mg/L).
Haloacetic acids: dichloroacetic acid (zero); trichloroacetic acid (0.02 mg/L); monochloroacetic acid (0.07mg/L). Bromoacetic acid and dibromoacetic acid are regulated with this group but have no MCLGs.
7 Lead and copper are regulated by a treatment technique that requires systems to control the corrosiveness of their water. If more than 10% of tap water samples exceed the action level, water systems must take additional steps. For copper, the action level is 1.3 mg/L, and for lead it is 0.015 mg/L.
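The 10% trigger in note 7 can be sketched the same way (again an illustrative helper under my own naming, not an official implementation):

```python
def action_level_exceeded(samples_mg_l, action_level_mg_l):
    """True when more than 10% of tap samples exceed the action level.
    Illustrative reading of the note above, not regulatory code."""
    over = sum(1 for s in samples_mg_l if s > action_level_mg_l)
    return over / len(samples_mg_l) > 0.10

LEAD, COPPER = 0.015, 1.3  # action levels from the note, in mg/L

# lead: 2 of 10 samples over 0.015 mg/L triggers additional steps
assert action_level_exceeded([0.002] * 8 + [0.02, 0.03], LEAD)
# exactly 10% over is not "more than 10%"
assert not action_level_exceeded([0.002] * 9 + [0.02], LEAD)
```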
8 Each water system must certify, in writing, to the state (using third-party or manufacturer's certification) that when acrylamide and epichlorohydrin are used to treat water, the combination (or product) of dose and monomer level does not exceed the levels specified, as follows:
Acrylamide = 0.05% dosed at 1 mg/L (or equivalent)
Epichlorohydrin = 0.01% dosed at 20 mg/L (or equivalent)
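Note 8's "or equivalent" wording implies a dose-times-monomer product. The sketch below treats "or equivalent" as "the product of monomer percentage and dose may not exceed the reference combination"; that reading is an assumption of this sketch, not stated EPA policy.

```python
# Certification limits from the note: monomer percentage times polymer dose.
# Treating "or equivalent" as a product cap is an assumption of this sketch.
LIMITS = {
    "acrylamide": 0.05 * 1.0,        # 0.05% dosed at 1 mg/L
    "epichlorohydrin": 0.01 * 20.0,  # 0.01% dosed at 20 mg/L
}

def within_limit(chemical, monomer_pct, dose_mg_l):
    return monomer_pct * dose_mg_l <= LIMITS[chemical]

assert within_limit("acrylamide", 0.05, 1.0)    # the reference combination
assert within_limit("acrylamide", 0.025, 2.0)   # an "equivalent" combination
assert not within_limit("epichlorohydrin", 0.02, 20.0)
```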
|
Black money: Switzerland reveals names of two Indian account holders
New Delhi: Two Indian women figure among scores of foreign nationals with Swiss bank accounts, whose names have been made public by Switzerland in its official gazette for being probed in their respective countries.
Making public these names, the Swiss Federal Tax Administration (FTA) has asked the two Indians to file an appeal within 30 days before the Federal Administrative Court if they do not want their details to be shared with the Indian authorities under their 'mutual assistance' treaty on tax matters.
However, no further details -- other than their dates of birth -- were made public for the two "Indian nationals" – Sneh Lata Sawhney and Sangita Sawhney.
The Indian government has been pushing the Swiss authorities for a long time to share information on the suspected tax evaders, while Switzerland has shared some details in cases where India has been able to provide some independent evidence of suspected tax evasion by Indian clients of Swiss banks.
The notices for the two 'Indian nationals' and many others are dated May 12, while other such notices are dated May 19 and May 5.
Committing full support to India's fight against the black money menace, Switzerland last week had said its Parliament would soon consider changes in laws to look into the possibility of sharing information in cases being probed on the basis of stolen data of Swiss bank accounts. |
Canon UK ambassador and renowned wildlife and aviation photographer Andy Rouse has been posting images he took “using some new kit” on his Twitter and Instagram accounts, raising suspicions that a new Canon camera could be released in the near future.
A little gift for you all this weekend. Shot last night, it’s a very different image of a cuckoo using some new kit. No questions yet as I won’t answer!!! Just enjoy the pic. pic.twitter.com/YeBFPoJCCX (May 3, 2019)
Rouse, of course, gives no information away on the camera itself, except that it’s capable of shooting at a 30fps burst speed which, he says, he had to drop down to 5fps as he “was taking too many sharp shots”.
The possibilities
The Canon rumor mill has been rife with whispers of a new DSLR on the way. Talk of a 32MP Canon DSLR began only a few days ago, while we’ve been expecting to hear more about the Canon EOS 1D X Mark III since the end of last year.
Testing for the latter began in February, according to Canon Rumors, and Rouse was one of the first people to test drive the EOS 1D X Mark II, tempting us to put our money on the next iteration of the pro-level full-frame DSLR, especially since shooting at 30fps will likely be difficult for a 32MP APS-C system.
That kind of frame rate, however, is possible on a mirrorless camera. Back in April 2019, Canon confirmed it was working on another professional high-end full-frame mirrorless shooter to join the current EOS R line-up. This potential new EOS R series camera is said to feature in-body stabilization, something neither the EOS R nor the more affordable EOS RP offer.
Whether it’s a new DSLR or a mirrorless snapper remains to be seen, but Rouse does say he hasn’t “dumped anyone for anyone”, making us believe it’s definitely a Canon camera and not a new system from another manufacturer.
I haven’t dumped anyone for anyone and it wouldn’t be Sony anyway (May 3, 2019)
[Via Canon Rumors] |
Arnold Kopelson, the Oscar-winning producer of such films as “Platoon” and “The Fugitive,” died Monday at his home in Beverly Hills. He was 83.
Kopelson’s death was confirmed Monday by his wife and business partner of 42 years, Anne Kopelson.
Anne Kopelson said her husband was a consummate producer who dedicated himself wholeheartedly to every film he produced over his long career.
“He loved what he did,” Kopelson told Variety. “He loved dealing with people in making movies and he had a very, very big heart.”
Kopelson had a prolific career in the film business from the 1970s through the early 2000s. From 2007 until September, Kopelson served as a board member of CBS Corp. He became close friends with CBS controlling shareholder Sumner Redstone and was a strong supporter of former CBS chairman-CEO Leslie Moonves.
“Arnold was a man of exceptional talent whose legacy will long survive him. He also, of course, was a highly dedicated CBS board member for more than 10 years,” CBS said in a statement. “Our hearts go out to Anne and his family.”
Kopelson became wrapped up in the legal battle, now settled, between CBS and Shari Redstone earlier this year, when a recent video of Sumner Redstone taken by Kopelson was introduced in court to support CBS’ claim that Sumner Redstone was no longer capable of making his own decisions.
After attending New York University and earning a law degree, Kopelson started his career as lawyer focusing on entertainment clients before moving into film and television sales. With his future wife, Anne, he founded Inter-Ocean Film Sales in 1972 and became one of the first to specialize in funding independent films based on foreign pre-sales. He was a founding member of the American Film Marketing Assn., which launched the American Film Market. He was well regarded as having the rare combination of business acumen and a strong sense of creative material.
He moved into producing with indie films such as 1981’s “Porky’s,” one of the most profitable films ever.
Kopelson shepherded notable films of the 1980s and 1990s including Oliver Stone’s best picture winner “Platoon,” “Falling Down,” “The Fugitive” and “Se7en.”
A typically low-budget Kopelson affair, Oliver Stone’s passionate semiautobiographical morality tale about an Army platoon splintering between two warring commanding officers (Willem Dafoe and Tom Berenger) in the midst of the Vietnam War grossed nearly $140 million at the U.S. box office and swept the Oscars, winning best picture for Kopelson, best director for Stone, and two additional awards.
After he won the Oscar for “Platoon,” Kopelson used his clout to secure financing for 1989’s “Triumph of the Spirit,” a Holocaust drama about a boxer, played by Willem Dafoe, sent with his family to the Auschwitz concentration camp but still forced to compete for the Nazis. “Spirit” became the first movie shot entirely on the grounds of the Auschwitz camp in Poland.
“He worked hard to make people understand that movies can be meaningful,” Anne Kopelson said. “They can have a purpose and they can be entertaining. His body of work expressed that on many levels.”
Kopelson was proud of the long road he took to bring “The Fugitive” to the screen in 1993; after many stops and starts, with numerous screenwriters and stars attached along the way, the project went on to land an Oscar nomination for best picture.
He served for many years on the Executive Committee of the Producer’s Branch of the Academy of Motion Picture Arts and Sciences and was a member of the Board of Mentors of the Peter Stark Motion Picture Producer’s Program at the University of Southern California.
Survivors include his wife and business partner of 42 years, Anne Kopelson and three children, Peter, Evan and Stephanie.
Funeral services will be held on Wednesday, October 10 at Mt. Sinai Memorial Park, 5950 Forest Lawn Drive, Los Angeles, CA 90068 at 12:30 p.m. A Memorial will also take place at a later date.
Donations may be made to Cedars-Sinai. |
Nurse Kelly On Name
There have been many stunning, stacked nurses at SCORE‘s Boob Polyclinic over the years. Nurses built like brick houses with ample jugs that come into the examination room before their toes do.
Tigerr Benson, Terry Nova, Lana Ivans, Minka, Angelique, Dallas Dixon, Alanna Ackerman and other bra-busters have worn the nurse uniform and popped thermometers in their patients. The nurse fantasy never gets old. But a girl has got to wear the naughty, over-the-top nurse dress, not the standard uniform that covers everything. Kelly Christiansen is perfect!
SCORE editor Dave brought up the interesting fact that many of our models through the years were nurses before they became porno starlets (like Alyssa Lynn), or were nurses before and after porno (Renee Ross), or became some kind of nurse or health caregiver when they exited nude modeling.
It is just as well that fewer hot models become nurses, because America's health care system is messed up enough as it is. We don't need slackers pretending to be sick just to meet big-chested nurses.
Kelly needs a batter sample. The lady has her own extraction methods. Seeing her do her thing with this bozo delivers uncontrollable erections.
"My friends always say I am very down-to-earth and more like the girl next door," said adorable Kelly. She was a SCORE Model of the Year winner as voted by the fans. "Before I walked into the studio the very first time, I really, genuinely didn't think I could pose." Kelly later learned she could. We had great faith in her all along, and she made all of our fantasies come true. |
= m + -134. Put v, 4, 1 in decreasing order.
4, 1, v
Suppose w + 6 = 0, -2*x + 312*w = 316*w + 24. Let k(n) = n + 1. Let v be k(-5). Put x, -5, -2, v in decreasing order.
x, -2, v, -5
Let b = -0.37 + 2.87. Let h = b + -2.2. Sort h, -3, 1.
-3, h, 1
Let q = -40 + 51. Suppose 2*x - q = -1. Put -2, 1, x in decreasing order.
x, 1, -2
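Entries like the one above can be checked mechanically: evaluate the definitions, solve for the unknown, then sort. A quick Python verification of this entry (the variable names simply mirror the problem statement):

```python
# q = -40 + 51 = 11; from 2*x - q = -1, x = (q - 1) / 2 = 5;
# so -2, 1, x in decreasing order is x, 1, -2.
q = -40 + 51
x = (q - 1) / 2
order = sorted([-2, 1, x], reverse=True)
assert x == 5
assert order == [5, 1, -2]   # i.e. x, 1, -2
```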
Let j = -0.2 - -0.3. Let t = 442 + -295. Let y be t/(-28) - 3/((-6)/10). Put y, 1, j in decreasing order.
1, j, y
Let l = 0.0649 - -4.9351. Sort l, -1/48, 3/5 in increasing order.
-1/48, 3/5, l
Let q = -0.1 - -0.1. Let b be 76/133 + 11/(-7). Sort b, 0.3, q in descending order.
0.3, q, b
Suppose -2*l + 1 + 5 = 0. Put -2, 8, l in descending order.
8, l, -2
Let w = -0.8 + 0.6. Let i = -0.3 + -14.7. Let a = i - -13. Sort -1, w, a.
a, -1, w
Suppose 0 = 3*v - 0*v - 24. Suppose 0*c - 4*c = v. Let b = -2 + c. Put -2, b, 3 in decreasing order.
3, -2, b
Let a = -4 - -8. Suppose -3*b + b + a = 0. Let r = 8 + -4. Sort -1, r, b in decreasing order.
r, b, -1
Let j = -8 - -5. Suppose -2*b + 134 = 3*k, 0*k + k - 22 = 5*b. Let o = 47 - k. Put 4, o, j in descending order.
o, 4, j
Let b = -1646 - -1641. Suppose -5*j + 65 = 5*h, -3*j = h - 32 - 1. Put -2, j, 4, b in ascending order.
b, -2, 4, j
Let j = -70 - -43. Let d = -27.4 - j. Sort 5, d, -4 in decreasing order.
5, d, -4
Let c be (2/25)/((-48)/80). Sort 0.4, 4, c in descending order.
4, 0.4, c
Let x be ((-4)/26)/1 + (-84)/(-546). Let t be -2 + 1 - 2/(-2). Suppose 3*l - 7 + 19 = t. Sort l, 5, x.
l, x, 5
Suppose 20 - 4 = 4*d. Let p(q) = -q**2 - 7*q + 2. Let v be p(-7). Let n be (v/d)/(3/(-12)). Sort 2, -3, n.
-3, n, 2
Let z = -86 + 91. Suppose 44 = -z*v + 69. Sort -9, v, -3.
-9, -3, v
Let x = 28 - 31. Let f(p) = p**2 + 7*p + 4. Let d be f(-7). Sort d, 5, x in descending order.
5, d, x
Let i = 89.21 - 90. Let r = 0.25 + i. Let x = 0.24 + r. Sort x, -3/5, 3 in descending order.
3, x, -3/5
Let b(r) = 4*r**3 - 37*r**2 - 28*r - 16. Let w be b(10). Sort 2, w, -52, -1 in increasing order.
-52, -1, 2, w
Let d = 3.5 - 2. Let m = d - 5.5. Let v = -4 - m. Sort 2/7, -0.1, v.
-0.1, v, 2/7
Let p(m) = m**3 + m + 2. Let w be p(-1). Suppose w = u - 8*u + 105. Suppose -2*c + 6*c + 25 = 5*n, -4*c - 5*n - u = 0. Put -1/4, c, -0.1 in increasing order.
c, -1/4, -0.1
Suppose 3*f - 9 - 6 = -3*p, -2*f - 8 = 0. Sort 2, 0, p, -1 in descending order.
p, 2, 0, -1
Let q(u) = -4*u - 48. Let o be q(-12). Let z(i) = -i**3 - 4*i**2 + 5*i - 5. Let b be z(-5). Sort o, -1, b in decreasing order.
o, -1, b
Let m = -72 + 90. Suppose 2*p = -5*l + m, p - l - 14 = -3*p. Suppose -2*x - 3*x - 15 = 0. Sort p, -4/3, x in ascending order.
x, -4/3, p
Suppose m - 32 = -36. Sort 4, -1, -5, m.
-5, m, -1, 4
Suppose -4*a = -4*m, -2*a - 14 = -5*m - 4*a. Sort m, 12, -2, -3 in ascending order.
-3, -2, m, 12
Let x be (377/58 - (-1 + 5))*-2. Put x, 3, -3 in increasing order.
x, -3, 3
Let c = 1687 - 1686.6. Sort c, 0, -2/3, 0.3 in descending order.
c, 0.3, 0, -2/3
Let i be (-8)/13*2/(-8). Let z be ((-4)/1)/((-36)/27). Let y = 5047/17 + -297. Sort y, i, z.
y, i, z
Let b = 0.184 + 0.016. Sort -2, b, -15 in ascending order.
-15, -2, b
Let q be 10/12 - 70/60. Put -5, 0, 4, q in increasing order.
-5, q, 0, 4
Let p = 10 - 10.9. Let c = 8 + -7.6. Let t = p + c. Sort 2/3, -1/6, t in increasing order.
t, -1/6, 2/3
Let l = -34.64 - -0.64. Let b = l - -34. Put 13, b, -1/3 in descending order.
13, b, -1/3
Let m be 1 - (0/1 - 2). Suppose -m*f = -0 + 9. Let c(w) = 3*w**2 - 33*w + 26. Let q be c(10). Put 0, q, f in descending order.
0, f, q
Suppose -3 = s - 2*s. Suppose 7 = -v - 2*q, -3*v = -s*q + q - 19. Let j(o) = -2*o + 1. Let m be j(v). Put m, 2, 6 in decreasing order.
6, 2, m
Suppose -4*v = 854 + 3634. Let r be v/15 - (-1)/(-1). Let x = r + 76. Sort 3/4, x, 1/10.
1/10, x, 3/4
Suppose -2*b + 2*v = 24, 2*b + 5*v = b + 12. Let g be 1/(((-4)/6)/(-9 + 7)). Put -1, b, g in decreasing order.
g, -1, b
Let r(z) = z**2 - 7*z - 225. Let l be r(-12). Let u be ((-1)/2)/(2/(-16)). Put u, -5, l, 7 in decreasing order.
7, u, l, -5
Let i(a) = a**2 + a - 9. Let p(v) = -v**3 + v**2 + 2. Let o be p(2). Let m = o + -2. Let r be i(m). Put r, -2, -4 in decreasing order.
r, -2, -4
Let z = 119 - 114. Put z, 4, 131 in increasing order.
4, z, 131
Let y be (1 + -3 + 0)*(-14 - -12). Put 99, y, 0 in decreasing order.
99, y, 0
Let v = 20 - 10. Let p = -9.7 + v. Let q = 7.038 + -0.038. Put 1/6, p, q in ascending order.
1/6, p, q
Suppose -4*r = 4, 4*x - 2*r = x + 8. Suppose 5*c - x + 2 = 0. Put c, -2, 3, -1 in decreasing order.
3, c, -1, -2
Let a(m) = 8*m - 53. Let r be a(7). Put 7, -5, r in decreasing order.
7, r, -5
Let u = 20 + 0. Suppose -2*v = -6*v + u. Sort -2, -3, -4, v in decreasing order.
v, -2, -3, -4
Suppose 2*r - 15 = 3*g + 2*g, 4*r + 4*g = 16. Let u = 3277 - 3277. Put r, -4, u, 7 in increasing order.
-4, u, r, 7
Let m = 0.3 - 0.2. Let d = 65 + -45. Let j be (-30)/7 - d/(-70). Sort j, m, -0.4 in descending order.
m, -0.4, j
Let y = -2 - -4. Let m = -10 + y. Let o be ((-6)/9)/(m/60). Sort 4, -2, o in ascending order.
-2, 4, o
Suppose 0 = 23*u + 25*u - 43*u. Put u, 4, -4, 2 in descending order.
4, 2, u, -4
Let n(m) = 3*m + 54. Let r be n(-17). Let a = -4 + 9. Let q(k) = k**3 - 4*k**2 - k + 2. Let z be q(4). Put r, a, z in ascending order.
z, r, a
Suppose 13*c + 1069 = 1082. Suppose -90 = -3*w - 0*w. Suppose -i - 11 = -n - 4, 4*i - 3*n = -w. Sort -2, c, i.
i, -2, c
Let n = 12 - 7. Suppose y + 64 = 4*p, 5*y = -4*p + 49 - 9. Suppose -10*i + 15 = -p*i. Sort 0.1, i, n.
i, 0.1, n
Let q be 1/(2/(-10)*1). Let i = -35 + 40. Suppose -23 = -i*s + 5*l - 3, 0 = 2*s + 4*l - 2. Sort q, 1, s.
q, 1, s
Suppose -5*a = -2*p - 30, 5*p + 9 = 18*a - 22*a. Sort a, -3, 27 in ascending order.
-3, a, 27
Suppose -94 = -29*v + 51. Put -20, -1, v in descending order.
v, -1, -20
Let m = 0.612 - -0.048. Let s = 0.16 - m. Put 5, 4, s in descending order.
5, 4, s
Let a = -11195 + 11193. Suppose -f = f - 2. Let c be 3*(f/(-3) - 0). Sort a, 12, c.
a, c, 12
Let t = 50 + -148/3. Let b = -3578 - -3576. Sort 17, b, t in descending order.
17, t, b
Let l(o) = -o**3 - 4*o**2 - 5*o - 4. Let a be l(-3). Let g(b) = -b**2 + 9*b - 15. Let c be g(4). Put a, -7, -5, c in decreasing order.
c, a, -5, -7
Let f = -816 + 816.2. Sort -2/3, -5, 4/9, f in descending order.
4/9, f, -2/3, -5
Let k be 1/7 - (-1026)/(-126). Let f be 26/k - (-3)/12. Put 0, f, 2 in decreasing order.
2, 0, f
Let t = -28.4 + 28.38. Sort 2, t, 0.3, 4/5 in decreasing order.
2, 4/5, 0.3, t
Let t(i) = 7*i**2 - 9*i - 1. Let q(u) = 4*u**2 - 4*u. Let s(m) = 5*q(m) - 3*t(m). Let f be s(6). Sort -2, -3, f.
-3, -2, f
Let v = 0.02 + -0.1. Let d = v - 15.92. Let b = d - -15.5. Put b, 4, -3/2 in ascending order.
-3/2, b, 4
Let d = 160/3 + -53. Put 4, -0.4, 0.1, d in decreasing order.
4, d, 0.1, -0.4
Let l be (1*(-21)/7)/7. Put -3/2, 1/5, l, 0.4 in increasing order.
-3/2, l, 1/5, 0.4
Let t = -902 + 912. Sort -3, 7, t in decreasing order.
t, 7, -3
Let a(f) = f**2 - 3*f - 2. Let d be a(4). Let g(x) = -70*x + 275. Let k be g(4). Sort 0, d, k in decreasing order.
d, 0, k
Let b = 1.1 - 0.6. Let x = -1790 - -6829/4. Let t = 84 + x. Put 6, t, b in descending order.
6, t, b
Let s be (8/6*-3)/(-16 + 15). Put -4, s, 0, -2 in ascending order.
-4, -2, 0, s
Suppose z = -16 + 8. Suppose -3*u - 4*f - 33 = 0, -3*u - 2*u - 2*f - 41 = 0. Let n be (u - z)/(2/(-10)). Sort 6, n, -1.
n, -1, 6
Let o = 10 - 4. Suppose o*m - 4 = 2. Let z be -1*m*0/(-11). Sort z, 3, -2.
-2, z, 3
Let z = 2 - -1. Suppose 0 = z*j - 4*b - 29, 9 = j + 5*b - 7. Suppose h = -4*x - 0 + 11, -4*h + j = 5*x. Put -3, x, -1 in decreasing order.
x, -1, -3
Let c be 2*-2 - 4/((-60)/65). Suppose 5*b = 11 + 4. Sort c, -2/19, b, 1/8.
-2/19, 1/8, c, b
Let w = -1/422 - 209/844. Sort -2, 1.2, w, -1.
-2, -1, w, 1.2
Let r(s) = s**2 + 10*s + 22. Let z be r(-16). Put -5, 4, z in ascending order.
-5, 4, z
Let k = 3059 + -24473/8. Sort 1/3, 13, 0.1, k in descending order.
13, 1/3, 0.1, k
Suppose 16*d = 5*d + 242. Suppose -d*x + 16*x + 30 = 0. Sort 3, x, 0, -4.
-4, 0, 3, x
Let i = -61.64 - -61. Let v = i - -0.84. Sort -0.2, v, 5.
-0.2, v, 5
Suppose 11 = -3*s + 6*s - 2*k, 14 = 2*s - 3*k. Suppose 7 = a + s. Let v be (-4)/12 - 22/a. Sort -2/7, v, -5 in increasing order.
-5, v, -2/7
Let g(d) = d**3 - 6*d**2 + 15. Let a be g(7). Suppose 12 = |
List of people from Potomac, Maryland
Past and present residents of Potomac, Maryland include:
Atiku Abubakar, billionaire and vice president of Nigeria
Freddy Adu, professional soccer player for Philadelphia Union
Robert A. Altman, owner of ZeniMax Media; married to Lynda Carter
Sam Anas, ice hockey player for Iowa Wild
Surinder Arora, English hotelier
Mike Barrowman, Olympic Champion Swimmer
Howard Behrens, painter
Eric F. Billings, CEO of FBR Capital Markets Corporation
Wolf Blitzer, anchor and host of CNN's The Situation Room
Eric Brodkowitz, Israeli-American baseball pitcher for the Israel National Baseball Team
F. Lennox Campello, artist, art critic, writer and art dealer
Lynda Carter, television actress, best known for her roles of Diana Prince and the title character on Wonder Woman
Calbert Cheaney, NBA player
Michael Chertoff, former Secretary of Homeland Security
Kelen Coleman
Mike Cowan, professional caddy for Jim Furyk
Kamie Crawford, Miss Maryland Teen USA 2010, Miss Teen USA 2010
Donald Dell
Sherman Douglas
Margaret Durante, country music artist signed to Emrose Records
Patrick Ewing, NBA player
Kenneth Feld, owner and CEO of Feld Entertainment, producers of Ringling Bros. and Barnum & Bailey Circus
Ben Feldman
Raul Fernandez
Thomas Friedman, author
Phil Galfond, professional poker player
John Glenn, Senator and astronaut
Jeff Halpern (born 1976), NHL player, the first in league history to be raised in the American South
Beth Harbison, New York Times bestselling author
Ayman Hariri, Lebanese billionaire and son of Rafic Hariri
Leon Harris, anchor for WJLA-TV
Dwayne Haskins, football quarterback for the Washington Redskins
John Hendricks, founder and former chairman of Discovery Communications
Marillyn Hewson, chairman and CEO of Lockheed Martin
E. Howard Hunt, author, CIA Officer and Watergate figure
Juwan Howard
Frank Islam, philanthropist and founder of QSS Group
Nurul Islam, Bangladeshi ex-minister, politician, and economist
Antawn Jamison, NBA player
Yahya Jammeh, President of Gambia
Dhani Jones, NFL player
Eddie Jordan, former NBA coach
Joseph P. Kennedy, Ambassador to the United Kingdom, resided at Marwood Manor
Olaf Kolzig
Ted Koppel, former ABC News anchor
Ryan Kuehl, NFL player
Sachiko Kuno, patron of the arts and pharmaceutical tycoon, appeared on Forbes list of Wealthiest Self-Made Women
Paul Laudicina, Chairman and CEO of A.T. Kearney
Richard Kane, President and CEO of International Limousine Service
Sugar Ray Leonard, professional and Olympic champion boxer
Ted Leonsis, owner of the NHL's Washington Capitals, NBA's Washington Wizards, and WNBA's Washington Mystics
Ted Lerner, owner of Lerner Enterprises and MLB's Washington Nationals
Bruce Levenson, owner of NBA's Atlanta Hawks
Barry Levinson, Academy Award-winning director and screenwriter
Liza Levy, Jewish community activist
Chelsea Manning, convicted of violating the Espionage Act
J.W. Marriott, Jr., billionaire executive of Marriott International
Mac McGarry, host of the Washington and Charlottesville, Virginia, versions of It's Academic
Nana Meriwether, Miss Maryland USA 2012, Miss USA 2012 (by succession)
Serge Mombouli, Ambassador of Congo 2000-2010
Taylor Momsen, actress from CW TV series Gossip Girl
Alonzo Mourning, NBA player
Dikembe Mutombo, NBA player
Gheorghe Muresan, NBA player
Rachel Nichols, sports journalist, CNN anchor
Queen Noor of Jordan, Queen Consort of Jordan, widow of Hussein of Jordan
Teodoro Obiang Nguema Mbasogo, President of Equatorial Guinea
Farah Pahlavi, former Queen of Iran
Reza Pahlavi II, Crown Prince of Iran
Benedict Peters, Nigerian billionaire and CEO of Aiteo
Issa Rae, writer, actress, director, producer, author. Co-creator of Insecure.
Mitchell Rales, Chairman of the Danaher Corporation
Isabel Rayner, Division 1 Diver at the University of Maryland, Baltimore County
Rosa Rios, Treasurer of the United States
David Ritz, owner of Ritz Camera
Franklin Delano Roosevelt, United States President, occupied Marwood Mansion during the summer
Greg Rosenbaum, co-founder of The Carlyle Group
Pete Sampras, tennis player (moved to California at age 7)
Eunice Kennedy Shriver, sister of John, Robert, and Ted Kennedy; mother of Maria Shriver
Sargent Shriver, husband of Eunice Kennedy Shriver; founder of the Peace Corps; former Ambassador to France
Topper Shutt, Chief Meteorologist for WUSA-TV
Donnie Simpson, WPGC 95.5 radio personality; former BET VJ
Daniel Snyder, owner of the NFL's Washington Redskins; former Chairman of the Board of Six Flags
Sylvester Stallone, actor
Darren Star
Tim Sweeney, video game developer, founder of Epic Games
David Trone, businessman and U.S. Congressman
Mike Tyson
John Wall, NBA player for the Washington Wizards
Mark A. Weinberger, Global Chairman and CEO of EY
Robert Wexler, U.S. Congressman
Buck Williams, NBA player
Gary Williams, former head coach of University of Maryland's basketball team
Willie J. Williams, NFL player
|
Herb medicine Gan-fu-kang attenuates liver injury in a rat fibrotic model.
To verify therapeutic effects of Gan-fu-kang (GFK), a traditional Chinese medicine compound, in a rat model and to investigate the underlying mechanisms. Liver fibrosis was established by 12 weeks of carbon tetrachloride (CCl(4)) treatment (0.5 mg/kg, twice per week) followed by 8 weeks of "recovery" in rats. Rats randomly received GFK (31.25, 312.5 and 3125 mg/kg/day, p.o.) or vehicle from weeks 9 to 20, and were sacrificed at the end of week 20 for histological, biochemical, and molecular biological examinations. In a separate set of experiments, rats received 12 weeks of CCl(4) treatment, concomitant with GFK (312.5 mg/kg/day, p.o.) during the same period in some subjects, but were then sacrificed immediately. An additional group of rats receiving no CCl(4) treatment served as normal controls. (1) CCl(4) treatment resulted in severe liver damage and fibrosis. (2) In the main block of the 20-week study, GFK attenuated liver damage and fibrosis. (3) In the 12-week study, GFK produced prevention effect against hepatic injury. (4) GFK suppressed the expression of tissue inhibitor of metalloproteinase-1 (TIMP-1), type I collagen, platelet-derived growth factor-BB (PDGF-BB)/PDGF receptor-beta chains (PDGFRbeta) and mitogen-activated protein kinases (MAPKs)/activator protein-1 (AP-1) signal pathways. Taken together, these results indicated that GFK could attenuate liver injuries in both settings. Our findings also suggest that the AP-1 pathway is the likely molecular substrate for the observed GFK effects. |
IN THE COURT OF CRIMINAL APPEALS
OF TEXAS
NO. AP-76,950
EX PARTE SCOTTIE DWAYNE HADNOT, Applicant
ON APPLICATION FOR A WRIT OF HABEAS CORPUS
CAUSE NO. 47,179-A IN THE 30TH DISTRICT COURT
FROM WICHITA COUNTY
Per curiam.
OPINION
Pursuant to the provisions of Article 11.07 of the Texas Code of Criminal Procedure, the
clerk of the trial court transmitted to this Court this application for a writ of habeas corpus. Ex parte
Young, 418 S.W.2d 824, 826 (Tex. Crim. App. 1967). Applicant was convicted of one count of
possession of a controlled substance and one count of evading arrest and sentenced to forty years’
and eighteen months’ imprisonment, respectively. The Seventh Court of Appeals affirmed the
controlled substance conviction. Hadnot v. State, No. 07-10-00296-CR (Tex. App.—Amarillo Jun.
13, 2011) (unpublished).
Applicant contends that his appellate counsel rendered ineffective assistance because counsel
failed to timely notify Applicant of his right to pursue a pro se petition for discretionary review.
Appellate counsel filed an affidavit with the trial court. Based on that affidavit, the trial court
has entered findings of fact and conclusions of law that appellate counsel failed to notify Applicant
that he had a right to file a pro se petition for discretionary review. The trial court recommends that
relief be granted. Ex parte Wilson, 956 S.W.2d 25 (Tex. Crim. App. 1997).
We find, therefore, that Applicant is entitled to the opportunity to file an out-of-time petition
for discretionary review of the judgment of the Seventh Court of Appeals in Cause No. 07-10-
00296-CR that affirmed his conviction in Cause No. 47,179-A from the 30th District Court of
Wichita County. Applicant shall file his petition for discretionary review with this Court within 30
days of the date on which this Court’s mandate issues.
Delivered: January 16, 2013
Do not publish
|
Buses leaving Pristina each night are packed. | Photo by Una Hajdari
The number of Kosovo Albanians trying to illegally enter the European Union is on the rise, with the Hungarian Police reporting that 880 people from Kosovo were arrested on Sunday and a total of around 4,400 in the period from Friday to Sunday.
The numbers represent the biggest exodus of ethnic Albanians from Kosovo since the conflict in the late 1990s, with the Hungarian embassy in Pristina saying that the total number of Kosovo Albanians in the EU country could be higher than 60,000.
The illegal migrants, most of whom say they are leaving because of the lack of opportunities in Kosovo, can easily enter Serbia because of an agreement between the countries on travel with ID cards.
They then try to cross the border with Hungary at night, usually with the assistance of smugglers, travelling illegally because they still need visas for the EU.
“I left five months ago. I crossed the border illegally and then went to Budapest. From Budapest I took the train to Austria,” said a 24-year-old migrant who spoke to Balkan Insight on condition of anonymity.
He said he then stayed at a camp for asylum seekers in Austria but was subsequently deported. “I was in Traiskirchen for a little over a month, which is when they sent me back to Kosovo by plane,” he said.
After staying in Kosovo for a couple of months, he decided to try his luck again a week ago. He and two other friends successfully crossed the border with Hungary, reaching the town of Gyor, from where he spoke to Balkan Insight.
Kosovo president Atifete Jahjaga kicked off an awareness-raising campaign about the EU asylum-seeking issue on Friday, visiting the towns of Vushtrri and Ferizaj, which have been hit hard by the phenomenon. In Vushtrri, a local school has reported that it has lost 440 students to migration.
Jahjaga insisted that “departure is not the solution” and urged people to stay and work on improving the situation in the country.
The Ministry of Internal Affairs told Balkan Insight in a statement that “measures are being taken to stop the illegal migration” through awareness campaigns and the targeting of “the criminal groups that deal with the trafficking of migrants”.
It said that Kosovo Police were working with EU countries and EU rule of law institutions in a bid to stem the exodus.
Kosovo lawmakers also passed a bill last week aimed at dealing with the outflow of migrants towards the EU.
The bill calls on the government to show greater commitment to economic development, with particular emphasis on creating new jobs to prevent people leaving for European countries. It also envisages the creation of a “fund for the prevention of illegal emigration”.
Parliament further pledged to work on and meet conditions for visa liberalisation with the EU to minimise the number of people trying to get to western Europe illegally. |
About Me
It is one thing to live through a traumatic car accident, and another thing altogether to survive the troublesome aftermath. You might find yourself dealing with insurance companies, injuries, and missed work. However, you might be able to overcome your accident with the help of an attorney. I got into a terrible car accident several years ago, and I honestly don't think I would have recovered emotionally without the closure that my lawsuit brought. As you peruse the legal articles on my website, remember that your case is important, and that a lawyer can help you to right some wrongs.
Under chapter 13 bankruptcy, you are allowed to keep your assets, as long as you come up with a feasible and practical repayment plan that can be completed in anywhere from 3 to 5 years. There's usually little wiggle room with the repayment plan, and you won't be left with a lot of disposable income. Due to this reason, even minor changes to your financial situation can cause you to experience significant hardships in making payments to your repayment plan. In some situations, you can file a motion with the bankruptcy court in hopes of being able to modify your repayment plan after it has already been confirmed. Here are 3 situations when this is possible, and the type of evidence that you'll need.
Loss of Employment
If you get laid off, then there's a huge chance that you will no longer be able to make payments to your plan. In these situations, instead of modifying the terms or the conditions of the plan, the court might want to grant you some leniency and give you a short break in the form of a moratorium until you have things sorted out. The moratorium is typically limited to 90 days. During this time, you'll be expected to seek employment capable of providing you with a similar income. To make your case, you'll need to file your motion with a letter from your employer detailing the fact that you have just lost your job.
Pay Cut or a Reduction in Other Income
With money being so tight when dealing with a chapter 13 repayment plan, you might not be able to afford any pay cuts at work or any other type of reduction in any other lines of income. Even the smallest reduction might make it difficult to pay certain bills. With that said, a paystub documenting your reduction in income can be filed with your motion. Based on your new financial situation, different repayment terms may need to be negotiated with creditors. Naturally, the court will allow your creditors to file an appeal should they disagree with your new terms.
Illness or Disability
Falling ill or dealing with an accident can significantly impact your ability to pay. This is particularly true if you are no longer able to work as much. In these situations, medical reports and documents, along with testimony from a medical professional, can go a long way in your motion. The court will need to determine whether your situation is temporary or permanent. In the event that it is temporary, they will determine whether giving you a break or changing the terms of your repayment plan will be best. In the event that it is permanent, they might recommend that you switch and file for bankruptcy under chapter 7.
Conclusion
Bankruptcy courts acknowledge that even the most minor changes in your financial situation can cause extreme hardships in your ability to pay off your repayment plan. Once you file a motion, a hearing will be scheduled in front of a judge so that you can plead your case. For more information, contact companies like http://www.tblakelaw.com. |
1. Field of the Invention
The invention relates to a load detecting device for a roller bearing that is used to support, for example, a main shaft of a wind power generator, and a roller bearing apparatus.
2. Description of the Related Art
Wind power generation has become a focus of attention anew in recent years as an eco-friendly way to generate electric power without emitting carbon dioxide. Wind power generation has been rapidly becoming widespread in the world, while upsizing of a wind power generator has been proceeding in order to obtain a larger amount of generated power. In addition, in order to suppress an increase in the weight of a generator resulting from such upsizing, various structure improvements, such as employment of a thin-walled generator frame and a thin-walled journal box, for weight reduction have also been carried out. On the other hand, a load applied to a bearing that supports a main shaft of a rotor has been increasing due to the upsizing of the generator, and, in addition, a distribution of load applied to a bearing, particularly, rolling elements, has become complex due to improvement in the structure of the journal box, or the like. Therefore, it is especially important to accurately analyze the durability and service life of the bearing. Accordingly, there is a need for a measuring method by which a load applied to the rolling elements of the bearing is more accurately obtained.
In an existing art, in order to measure a load applied to a bearing, a strain gauge is provided inside a rolling element of the bearing as described in Japanese Patent Application Publication No. 7-77218 (JP-A-7-77218). Specifically, in the technique described in JP-A-7-77218, a hole is formed along the axis of a roller, a strain gauge is provided on an inner surface of the hole, and the strain gauge is connected to a transmitting coil provided integrally with the roller. In addition, an annular receiving coil is provided on a side surface of an outer ring, and an output signal from the strain gauge, transmitted from the transmitting coil in real time, is received by the receiving coil. Then, the receiving coil is connected to an external computer, and received data are processed by the computer.
Reorganization of North Atlantic marine copepod biodiversity and climate.
We provide evidence of large-scale changes in the biogeography of calanoid copepod crustaceans in the eastern North Atlantic Ocean and European shelf seas. We demonstrate that strong biogeographical shifts in all copepod assemblages have occurred with a northward extension of more than 10 degrees latitude of warm-water species associated with a decrease in the number of colder-water species. These biogeographical shifts are in agreement with recent changes in the spatial distribution and phenology detected for many taxonomic groups in terrestrial European ecosystems and are related to both the increasing trend in Northern Hemisphere temperature and the North Atlantic Oscillation. |
I run them from time to time. It's one of the few things the Smith Machine is good for. Scorches the peewaddins out of my inner triceps. I couldn't see doing them with real heavy weights or without a spotter or Smith.
Not a bad exercise. Not the be all/end all either.
__________________Bogdan Petia Sarac - Must keel moose and squirrel
Cancer Survivor - 7/21/10
Benchmark 5K time:27:45 (3/5/11)
It's not the weight we move, but the people we move that matters. -- Bearded Beast of Duloc (12/31/10) |
<!DOCTYPE HTML>
<html>
<head>
<meta charset="utf-8">
<meta http-equiv="x-ua-compatible" content="ie=edge">
<title>EightShapes Blocks</title>
<meta name="description" content="">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="apple-touch-icon" href="apple-touch-icon.png">
<!-- Place favicon.ico in the root directory -->
<link rel="stylesheet" href="site/css/blocks_site.css">
<!-- <link rel="stylesheet" href="css/main.css"> -->
<!-- <script src="js/vendor/modernizr-2.8.3.min.js"></script> -->
<script src="dist/esb.js"></script>
<link rel="stylesheet" href="dist/esb.min.css"/>
</head>
<body>
</body>
</html> |
Namespace BattleSystem.Moves.Normal
Public Class ViseGrip
Inherits Attack
Public Sub New()
'#Definitions
Me.Type = New Element(Element.Types.Normal)
Me.ID = 11
Me.OriginalPP = 30
Me.CurrentPP = 30
Me.MaxPP = 30
Me.Power = 55
Me.Accuracy = 100
Me.Category = Categories.Physical
Me.ContestCategory = ContestCategories.Tough
Me.Name = "Vise Grip"
Me.Description = "The target is gripped and squeezed from both sides to inflict damage."
Me.CriticalChance = 1
Me.IsHMMove = False
Me.Target = Targets.OneAdjacentTarget
Me.Priority = 0
Me.TimesToAttack = 1
'#End
'#SpecialDefinitions
Me.MakesContact = True
Me.ProtectAffected = True
Me.MagicCoatAffected = False
Me.SnatchAffected = False
Me.MirrorMoveAffected = True
Me.KingsrockAffected = True
Me.CounterAffected = True
Me.DisabledWhileGravity = False
Me.UseEffectiveness = True
Me.ImmunityAffected = True
Me.HasSecondaryEffect = False
Me.RemovesFrozen = False
Me.IsHealingMove = False
Me.IsRecoilMove = False
Me.IsPunchingMove = False
Me.IsDamagingMove = True
Me.IsProtectMove = False
Me.IsSoundMove = False
Me.IsAffectedBySubstitute = True
Me.IsOneHitKOMove = False
Me.IsWonderGuardAffected = True
'#End
End Sub
End Class
End Namespace |
; RUN: opt < %s -instcombine -S | grep "16" | count 1
define i8* @bork(i8** %qux) {
%tmp275 = load i8** %qux, align 1
%tmp275276 = ptrtoint i8* %tmp275 to i32
%tmp277 = add i32 %tmp275276, 16
%tmp277278 = inttoptr i32 %tmp277 to i8*
ret i8* %tmp277278
}
|
The steady stream of incidents in which hackers have been able to access traditional passwords highlights the need for something more secure over and over again. Adding so-called two-factor authentication increases security by validating users with something they know (a regular password) and something they have (a hardware or software generated one-time password).
Protecting Workspaces desktops with two-factor authentication helps prevent unauthorized users from gaining access to enterprise resources, while defending against password attacks such as phishing and keystroke logging. The feature itself is available now at no extra charge, Amazon said in a blog post.
For the authentication to work, organizations need a Radius server. Amazon has verified its implementation against the Symantec VIP (Validation and ID Protection) and Microsoft Radius Server products.
Gemalto offers two products that can be used to generate the one-time passwords; the Ezio keyfob costs US$12.99 and the Ezio display card costs $19.99. The six digit passwords they generate are valid for one attempt and for 30 seconds.
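To illustrate why such codes are six digits long and live for 30 seconds, here is a minimal sketch of the standard RFC 6238 (TOTP) computation. This is an assumption for illustration only: the article does not describe Gemalto's actual algorithm, and the Ezio devices may use a different (e.g. event-based) scheme.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;

// RFC 6238-style time-based one-time password (illustrative, not
// necessarily what the Ezio devices implement).
final class Totp {
    static int code(byte[] secret, long unixSeconds) throws Exception {
        long counter = unixSeconds / 30;               // 30-second time step
        byte[] msg = ByteBuffer.allocate(8).putLong(counter).array();
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret, "HmacSHA1"));
        byte[] h = mac.doFinal(msg);                   // 20-byte HMAC
        int off = h[h.length - 1] & 0x0f;              // dynamic truncation (RFC 4226)
        int bin = ((h[off] & 0x7f) << 24) | ((h[off + 1] & 0xff) << 16)
                | ((h[off + 2] & 0xff) << 8) | (h[off + 3] & 0xff);
        return bin % 1_000_000;                        // six decimal digits
    }
}
```

Because the counter only changes every 30 seconds, every call within the same window yields the same code; the "valid for one attempt" property comes from the server additionally refusing to accept a code twice.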
For companies that don’t want to roll out new hardware, there are applications for Android, BlackBerry OS, iOS and Windows. The applications are free, but aren’t considered as secure.
Amazon Workspaces was made generally available at the end of March. The service offers managed virtual desktops users can access from PCs, Macs, Apple’s iPads and tablets based on Android, including Amazon’s own Kindle Fire products.
The desktops cost from $35 per user per month and are available from Amazon’s data centers in North Virginia, Oregon, Sydney and Ireland. |
1. Field of the Invention
The present invention relates to devices used in orthognatic surgery interventions or in the preparation thereof. Such interventions are surgery interventions of repair, in particular, of a mispositioning of the jaws with respect to each other. An orthognatic surgery intervention especially consists of performing osteotomies of the maxilla and/or of the mandible to reposition them correctly with respect to the rest of the skull by correcting a defective bite.
2. Discussion of the Related Art
The preparation of such a surgery intervention requires implementing orthodontic and radiological techniques.
A mandibular casting and a maxillary casting providing the respective implantations of the patient's teeth in the respectively mandibular and maxillary bone segments are first performed. The castings, generally made of plaster, are used to simulate the relative displacement which has to be applied to the jaws to recreate the bite. To enable the surgeon to respect these simulated relative positions, a plate comprising, on each of its surfaces, tooth-prints of the two castings is made with the dental castings. Such a plate, called an interscupidation plate, is used to maintain the castings or the jaws in relative positions where the teeth are in occlusion.
Since the surgical intervention generally includes osteotomies of both jaws, two interscupidation plates are generally made from the dental castings, in addition to a so-called initial interscupidation plate linking the two jaws in their occlusion position before the intervention.
A so-called intermediary plate determines the foreseeable displacement of the maxilla with respect to the mandible when said mandible is in its original (preoperative) position. This plate enables the surgeon to place the maxilla back on the skull in the desired definitive position before intervening on the mandible. A so-called definitive plate determines the occlusion objective to be surgically achieved and is thus used to correctly position the mandible on the skull by setting the position of the mandible with respect to the previously replaced maxilla.
The preparation of the surgical operation also uses a profile radiography of the patient enabling, in particular, performing an approximate simulation of the operative action.
This simulation is performed manually from a tracing paper placed on the radiography. For example, the contours of the mandible are first drawn. The tracing paper is then moved to approximately reproduce thereon the desired postoperatory occlusion, after which the maxillary contours are drawn. The maxillomandibulary assembly drawn on the tracing paper is then moved in one block while respecting cephalometric standards, labial ratios, as well as other criteria known for this type of intervention. The direction and amplitude of the jaw displacements are thus radiologically and approximately defined. The results of this simulation are compared and adjusted according to the relative motion of the mandible and of the maxilla envisaged by means of the interscupidation plates.
The actual simulation of an orthognatic surgery intervention is thus performed essentially manually. Further, this simulation is only done in two dimensions based on a plane profile view of the skull.
The development of scanners associated with image processing systems enables obtaining three-dimensional views of a patient's skull. Such systems would be particularly useful to perform a three-dimensional simulation of an orthognatic surgery intervention. In particular, it is known to isolate from one another different portions of the three-dimensional images reconstructed from scanner cross-sections. Thus, a portion corresponding to the maxilla and a portion corresponding to the mandible could be isolated from the rest of the skull. This would enable simulating, by means of the image processing system, relative displacements of these elements with respect to one another. However, the sole use of such three-dimensional image processing systems for a simulation of an orthognatic surgery intervention remains up to now impossible for several reasons.
First, the accuracy of a scanner is incompatible with the accuracy requirement of a bite. Indeed, the jaw positioning accuracy required for the bite is on the order of one tenth of a millimeter while the minimum pitch between two scanner tomographies ranges from approximately two to five millimeters. The respective initial positions of the mandible and of the maxillary thus cannot be precisely reproduced by means of the scanner.
Second, teeth amalgams (fillings) create artifacts which appear as blurred spots on the scanner images. It is thus impossible to plot, on a three-dimensional view, the exact position of the teeth based on the scanner images to obtain the bite.
A technique of preparation and assistance of an operatory action in orthognatic surgery using a scanner is however known. This technique consists of affixing three titanium screws on the patient's maxillary. A resin model of the skull is then made based on scanner cross-sections of the patient's skull. Since the screws appear on the scanner views, they are reproduced on the model. The screws are used to position with respect to the skull a metal frame for receiving a final interscupidation plate. This plate is made based on maxillary and mandibulary dental castings taken on the patient. Once the castings are made, the maxillary is cut-off from the resin model to be replaced with the corresponding casting. The maxillary casting is attached to the model in the desired definitive position. Then, the mandible is cut-off from the model to be replaced with a casting previously made on the patient.
The position of the mandibular casting is given, with respect to the maxillary, by the interscupidation plate which is then rigidly coupled to the metal frame forming a system of transfer of the plate position between the model and the patient. The frame is then brought back on the patient in the position defined by the three maxillary screws and is attached to the patient's skull by two additional screws. The position of the transfer system being now fixed by these two screws, the osteotomy of the maxillary, which is correctly repositioned by means of the interscupidation plate which is rigidly coupled to the transfer system, is performed. Then, the osteotomy of the mandible is performed, and said mandible is correctly positioned by means of the interscupidation plate.
Such a technique has several drawbacks. On the one hand, it requires an additional surgical intervention to place the screws in the patient's mouth. On the other hand, it requires making a resin model of the skull, which is particularly costly. Further, the simulation is performed, empirically, by means of the resin model and does not enable taking account of cephalometric standards based on accurate data. |
Q:
Speed-optimizing tree data parser
I'm working on an assignment where the input is in the following format, and I have to parse it as fast as possible:
5 (
5 (
3 (
)
)
3 (
3 (
)
3 (
)
)
5 (
2 (
)
4 (
)
)
)
It is a tree structure of "Employees", the numbers are for the subsequent task (index of language).
Each employee can have any number of subordinates and one superior (the root node is "Boss").
Here's my parser: (Originally I used Scanner and it was short and simple, but about twice as slow)
// Invocation
// Employee boss = collectEmployee(null, 0, reader);
private Employee collectEmployee(final Employee parent, int indent, final Reader r) throws IOException
{
final StringBuilder sb = new StringBuilder();
boolean nums = false;
while (true) {
char c = (char) r.read();
if (c == 10 || c == 13) continue; // newline
if (c == ' ') {
if (nums) break;
} else {
nums = true;
sb.append(c);
}
}
final int lang = Integer.parseInt(sb.toString());
final Employee self = new Employee(lang, parent);
r.skip(1); // opening paren
int spaces = 0;
while (true) {
r.mark(1);
int i = r.read();
char c = (char) i;
if (c == 10 || c == 13) continue; // newline
if (c == ' ') {
spaces++;
} else {
if (spaces == indent) {
break; // End of this employee
} else {
spaces = 0; // new line.
r.reset();
self.add(collectEmployee(self, indent + 1, r));
}
}
}
return self; // the root employee for this subtree
}
I need to shave a few more cycles off the code, so it will pass the strict requirements. I've profiled it and this part is indeed what slows the app down. The input file can be up to 30 MiB, so any little improvement makes a big difference.
Any ideas appreciated. thanks.
(Just for completeness, the Scanner implementation is here - it can give you idea of how I parse it)
private Employee collectEmployee(final Employee parent, final Scanner sc)
{
final int lang = Integer.parseInt(sc.next());
sc.nextLine(); // trash the opening parenthesis
final Employee self = new Employee(lang, parent);
while (sc.hasNextInt()) {
Employee sub = collectEmployee(self, sc);
self.add(sub);
}
sc.nextLine(); // trash the closing parenthesis
return self;
}
A:
You are doing a lot of data pushing with the StringBuilder — it may be beneficial to keep an int value that you update on encountering a decimal char ('0'-'9') (num = num * 10 + (c - '0')) and storing/resetting on encountering a non-decimal. That way you can also get rid of Integer.parseInt.
You seem to be using/checking indentation for the hierarchy, but your input format contains parentheses which makes it an S-expression based syntax — so your parser does a lot more work than needed (you can ignore spaces and handle parentheses using a stack of Employees).
I'd consider using a JMH benchmark and run with perf-asm (if available) to see where your code spends its time. Really, it's an invaluable tool.
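The two suggestions above can be combined into a compact single-pass sketch. `Node` and `SexpParser` are hypothetical stand-ins for the poster's `Employee` and parser (names are illustrative, not from the original code); the point is the digit accumulation and the parenthesis-driven stack:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical stand-in for the poster's Employee class.
final class Node {
    final int lang;
    final List<Node> children = new ArrayList<>();
    Node(int lang) { this.lang = lang; }
}

final class SexpParser {
    static Node parse(String input) {
        Deque<Node> stack = new ArrayDeque<>();
        Node root = null;
        int num = 0;            // digits accumulated so far
        boolean inNum = false;  // currently inside a number?
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            if (c >= '0' && c <= '9') {
                num = num * 10 + (c - '0');   // no StringBuilder, no parseInt
                inNum = true;
            } else {
                if (inNum) {                  // number ended: start a new node
                    Node n = new Node(num);
                    if (stack.isEmpty()) root = n;
                    else stack.peek().children.add(n);
                    stack.push(n);            // its '(' body follows
                    num = 0;
                    inNum = false;
                }
                if (c == ')') stack.pop();    // body closed: back to parent
                // spaces, newlines and '(' carry no extra information
            }
        }
        return root;
    }
}
```

Note that indentation is never inspected and no `mark`/`reset` is needed — one forward pass over the characters (or over a buffered stream) builds the whole tree.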
|
BLD-026 NOT PRECEDENTIAL
UNITED STATES COURT OF APPEALS
FOR THE THIRD CIRCUIT
___________
No. 16-3206
___________
JERMAINE CLARK,
Appellant
v.
WARDEN ALLENWOOD FCI
____________________________________
On Appeal from the United States District Court
for the Middle District of Pennsylvania
(D.C. Civil No. 3-15-cv-02263)
District Judge: Honorable Malachy E. Mannion
____________________________________
Submitted for Summary Action Pursuant to Third Circuit LAR 27.4 and I.O.P. 10.6
October 27, 2016
Before: AMBRO, GREENAWAY, JR. and SCIRICA, Circuit Judges
(Opinion filed: November 16, 2016)
_________
OPINION*
_________
PER CURIAM
*This disposition is not an opinion of the full Court and pursuant to I.O.P. 5.7 does not
constitute binding precedent.
Jermaine Clark, a prisoner proceeding pro se, appeals from the order of the
District Court dismissing his petition for writ of habeas corpus under 28 U.S.C. § 2241
for failure to exhaust. For the reasons that follow, we will summarily affirm.
On May 5, 2008, Clark was arrested on Pennsylvania state charges. On November
2, 2009, the District Court revoked Clark’s federal probation and sentenced him to 36
months of incarceration. The District Court ordered Clark’s federal sentence to run
consecutively to his state sentence, which was imposed on December 16, 2009.
According to Clark’s habeas petition, he was transferred both to and from state and
federal institutions before and after that date. Although the parties do not agree on the
precise date, Clark was received for the last time by the Bureau of Prisons (“BOP”)
sometime in 2015.
Clark filed a petition for a writ of habeas corpus under 28 U.S.C. § 2241, arguing
that the BOP had miscalculated his sentence. Citing Clark’s failure to exhaust his
administrative remedies, the respondent sought dismissal of the claim. The District Court
was persuaded and dismissed the petition. Clark timely appealed.
We have jurisdiction pursuant to 28 U.S.C. § 1291. Clark’s petition was properly
brought under § 2241 as it attacked the term of his federal custody. See, e.g., Barden v.
Keohane, 921 F.2d 476, 478-79 (3d Cir. 1990). We review the District Court’s denial of
habeas relief de novo and its factual findings for clear error. Denny v. Schultz, 708 F.3d
140, 143 (3d Cir. 2013).
Federal prisoners are ordinarily required to exhaust available administrative
remedies before seeking relief under § 2241. Moscato v. Fed. Bureau of Prisons, 98 F.3d
757, 760 (3d Cir. 1996); Arias v. U.S. Parole Comm’n, 648 F.2d 196, 199 (3d Cir. 1981).
In order to exhaust, petitioners must satisfy the procedural requirements of the
administrative remedy process. Moscato, 98 F.3d at 761–62.
On February 12, 2016, an attorney for the BOP conducted a search of the BOP
SENTRY Administrative Remedy Retrieval system. The search revealed that Clark had
filed 12 administrative remedies while in BOP custody. None of these filings involved
Clark’s sentence or its computation. Indeed, on the face of his petition, Clark admitted
that he had not filed any administrative grievance regarding his claim.
Clark alleged that he did not exhaust his administrative remedies because the
BOP’s procedures amount to “bureaucratic misadventures” and because “he would be
released by the time he could fully exhaust them.” We have held that the administrative
exhaustion requirement may be excused if an attempt to obtain relief would be futile or
where the purposes of exhaustion would not be served. See Woodall v. Fed. Bureau of
Prisons, 432 F.3d 235, 239 n.2 (3d Cir. 2005); Gambino v. Morris, 134 F.3d 156, 171 (3d
Cir. 1998) (Roth, J., concurring).
Clark claimed that the BOP, not Pennsylvania, had primary jurisdiction over him
beginning in 2009 and, thus, he has served his federal sentence. He further contended
that the BOP illegally transferred him to state custody because his Pennsylvania sentence
was ordered to be served concurrently with his federal sentence. The authority to
calculate a federal sentence and provide credit for time served has been delegated to the
Attorney General, who acts through the BOP. See United States v. Wilson, 503 U.S. 329,
334-35 (1992). In calculating the sentence, the BOP determines (1) when the federal
sentence commenced, and (2) whether there are any credits to which the prisoner may be
entitled. See 18 U.S.C. § 3585. Clark’s failure to exhaust his claims deprived the BOP
of the opportunity to resolve any errors, 1 to create an administrative record, and to
provide a factual and legal basis for its decision. Specifically, here, Clark has been
transferred between state and federal custody multiple times. An administrative record
detailing each transfer is necessary for this Court to review whether Clark has received all
the credit to which he is entitled. Thus, presenting his claims to the BOP is not futile.
Moreover, Clark’s belief that the BOP will not act on his grievance before he is released
from custody does not make the administrative remedy system futile. See Fazzini v. N.E.
Ohio Corr. Ctr., 473 F.3d 229, 236 (6th Cir. 2006) (“[A] habeas petitioner’s failure to
complete the administrative remedy process may be excused where his failure is due to
the administrator, rather than the petitioner . . . .”). Clark has failed to file a grievance
1
We do not intend to imply that any errors necessarily occurred here. District Judges
have the authority to impose sentences consecutively to as-yet imposed state sentences.
See Setser v. United States, 132 S. Ct. 1463, 1468 (2012); see also Barden, 921 F.2d at
478 n.4 (noting that, under the Supremacy Clause, state judges cannot override a federal
judge’s decision to run sentences consecutively). Because Clark did not give the BOP
the opportunity to address his claims, we are unable to verify whether – during his
various transfers to and from federal custody – Clark received federal credit for all
periods of time he should have.
with the BOP; therefore, the BOP has not been given the opportunity to act on his
complaints, much less failed to do so in a timely manner.
For the foregoing reasons, we will summarily affirm the judgment of the District
Court. See 3d Cir. L.A.R. 27.4; I.O.P. 10.6.
|
List of people from Prescott, Arizona
The people below were all born in, resident of, or otherwise closely associated with the city of Prescott, Arizona.
Henry F. Ashurst, first Arizona senator following statehood
Coles Bashford, lawyer, governor of Wisconsin
Ken Bennett, state senator, Arizona secretary of state
Big Nose Kate, Wild West companion of Doc Holliday
Bret Blevins, comic-book artist
Michael Broggie, historian and author
William Mansfield Buffum, merchant and member of the Arizona Territorial Legislature
Robert Burnham, Jr., astronomer
John G. Campbell, Scottish-born politician
Thomas Edward Campbell, second governor of Arizona
Paul G. Comba, computer scientist and asteroid hunter
Virginia Lee Corbin, silent film actress
Rosemary DeCamp, actress
John Denny, baseball player
Josephine Earp, wife of lawman Wyatt Earp
Dorothy Fay, actress
Alan Dean Foster, science-fiction author
Ana Frohmiller, county treasurer, gubernatorial candidate
Barry Goldwater, United States Senator and 1964 Republican Presidential nominee
Morris Goldwater, Arizona territorial and state legislator, mayor of Prescott, and businessman.
Paul Gosar, member of the House of Representatives
Don Imus, radio personality
John W. Kieckhefer, businessman
John Kinney, outlaw and founder of the John Kinney Gang (rivals of Billy the Kid's Lincoln County Regulators)
Fiorello H. LaGuardia, mayor of New York City
Amy Lukavics, young adult novelist
Cody Lundin, survival expert, author, and co-star of the Discovery Channel series Dual Survival
Wayman Mitchell, preacher
Mollie Monroe, Wild West figure
Kayla Mueller, human rights activist, humanitarian aid worker taken captive by ISIL while working with Médecins Sans Frontières
Buckey O'Neill, mayor of Prescott, sheriff, newspaper editor, miner, and Rough Rider
Archbishop Peter D. Robinson, United Episcopal Church of North America, rector of St. Paul's Anglican Church
William C. Rodgers, controversial environmentalist
William B. Ruger, firearms manufacturer
Nat Russo, fantasy fiction author, spent childhood and teen years in Prescott; graduated Prescott High School in 1988
Holly Sampson, adult film actress
Alvie Self, musician in Rockabilly Hall of Fame
Frederick Sommer, photographer
Dick Sprang, comic-book artist
Brian Stauffer, award-winning illustrator
Sam Steiger, former U.S. Congressman and former Mayor of Prescott, 1999–2001
Piper Stoeckel, Miss Arizona 2012
Richard Longstreet Tea, Civil War soldier
J. R. Williams, who drew the mid-20th century comic strips Out Our Way and The Worry Wart, spent most of his life on a ranch near Prescott
|
Solve s*o - c*q = -2, 3*o + 4*q + 21 = q for o.
-5
Let w(z) = z**2 + 5*z + 5. Let j be w(-6). Let a = 8 + -6. Let b be (2 + ((-72)/5)/6)*-5. Solve 2*t + a = -b*h, -4*t = -0*t + h - j for t.
4
Let c(f) = f + 9. Let m be c(-5). Suppose 5 - 21 = 2*w. Let h be (15/(-10))/(6/w). Solve -4*s = -2*r + 14, 0*s = -2*s + h*r - m for s.
-5
Suppose 0 - 2 = -x. Suppose q + 17 = 3*z, 0*z - 2*z - x*q = 2. Solve -4 = -4*j + z*f, -2*j - f = 2*f - 2 for j.
1
Let z = 21 - 16. Solve z*k + 4*v = 2*v + 14, 4*k - 2*v = 22 for k.
4
Let t(f) = -f**2 - 6*f - 5. Let m(g) = -g**2 + 5*g - 5. Let d be m(5). Let u be t(d). Solve 2*a + 2*c = -3*a - 18, u = -4*a + 3*c + 4 for a.
-2
Let d be (8/(-24))/((-1)/54). Solve d = 3*i + 3*f, -10 = -2*i - i + 5*f for i.
5
Suppose 52 = 5*y - y - 4*w, y + 2*w - 28 = 0. Suppose 4*l = -5*q + y, q - l + 8 = 4*l. Solve 6*c - 5*z - 12 = q*c, -z = 0 for c.
3
Let y = 3 - 0. Suppose 0 = -y*g, -3*i - 6 = -5*g - 18. Solve -3*x = -i*m - 12, -3*x + 2*m + 3 = -3 for x.
0
Let m(p) = 0 - 2*p**3 - p + 2*p**2 + 1 - 3. Let u be m(2). Let z be 2*3/(u/(-26)). Solve 4*d = -3*w - z, 5*d - 2*d + 2*w + 10 = 0 for d.
-4
Let z = -37 - -39. Solve -2*i - 3*w + z - 9 = 0, 17 = 5*i - 4*w for i.
1
Suppose h - 7 = 3. Suppose 3*c - h = -2*c. Let n = 8 - 6. Solve c*a + 2 = 2*l + n*l, -3*l = -5*a - 5 for l.
0
Let j be (-4)/(-14) + 72/42. Suppose -j*i + i + 14 = 0. Solve 5*c = -2*r + 6, -5*r = -4*c + 2*c + i for c.
2
Let v be 3 - (-3)/((-9)/(-6)). Suppose -5 = -5*m + v. Suppose -5*q + m*w - 5*w + 53 = 0, 2*q - 10 = -4*w. Solve 5*n = 3*b - 2*b - 11, 5*n - 3*b = -q for n.
-2
Suppose -3*r - 4*v = -14, -4*r + 0 = -2*v - 4. Suppose 2 = r*k - 8. Let l be (-1 - 2) + (0 - -5). Solve 5*f - 2 = -t + 7, -8 = -l*t - k*f for t.
-1
Suppose -2*d = 3*d - 15. Solve -5*a + 1 = s, -5*a + d*s - 3 = -a for a.
0
Let i(j) = 3*j**3 - j**2 - j + 2. Let v be i(1). Solve v*b = 2*f + 4, -f - 10 = -5*b + 3*f for b.
-2
Let l be (-15)/(-1) - (-10 - -12). Solve -2 = 2*p + h + 5, -4*h = 5*p + l for p.
-5
Let l be (1 - 1)/(2/(-1)). Suppose -j + 20 = 5*f, -2*f + j + 8 = -l*f. Solve 0 = -5*v + 3*o - 6 - 9, 6 = -2*v - f*o for v.
-3
Suppose s - 6 = -s. Solve n - s*j + 9 = 0, 0*n + 5*j - 15 = n for n.
0
Let s = 82 - 77. Solve -2*j - 3*k - 4 = -0*j, s*j + 5*k = -5 for j.
1
Suppose 5*j - 3*a = 24 + 4, j = -a + 4. Suppose 2*x + 18 = j*l, -2*x = x - 3. Solve b + 4*p = 4*b - 10, -l*p + 2 = 3*b for b.
2
Let g = 4 + -2. Solve -3*t = 4*c + 7, g*c - 4*t - 3 = -1 for c.
-1
Let g = 125 + -112. Solve 2*r + a = g - 2, 4*a = -4*r + 32 for r.
3
Let b be (8 - 2)/(15/10). Solve 18 = 5*o + 4*s, 8 = b*s - 0*s for o.
2
Let t be (25/10)/(2/4). Solve -3*u + 7 - 3 = -t*a, a - 4 = 3*u for a.
-2
Suppose 3*v = 4*h + 27, 0 = v + h + h + 1. Solve -i + 5 = 5*f - 9, v*i = f + 18 for i.
4
Suppose -4*o + 8 = -4*k + 3*k, -2*o = 2*k - 4. Solve s - 6*s + 5*i = 0, 7 = o*s + 5*i for s.
1
Let f = 11 - 9. Let x = f + 1. Solve 5*l + 6 = -x*g - 5, g = 2*l for l.
-1
Let t(m) = 2*m**2 - 4*m - 8. Let b be t(4). Solve -2*x + 6 = 3*z, 8*x + b = -z + 4*x for z.
4
Let s(y) = 0 - 4 + 3*y + 1. Let l be s(2). Suppose l*q + j + 1 = 14, 0 = 5*q + 4*j - 24. Solve -3*f = q*d + f + 4, -3*d - 5*f - 1 = 0 for d.
-2
Suppose -3*y + 2*u - 4*u + 36 = 0, 4*y + u = 48. Suppose 4*k = -0*k + y. Solve -3*a = 3*n + k, -a + 0*n + n = -1 for a.
0
Let x = -109 - -112. Solve -13 = -x*f + 2*v, 0 = 2*f + 3*f + 2*v - 11 for f.
3
Let j(f) = -f**2 + 8*f - 10. Let o be j(7). Let d = 3 + o. Solve g + 4*k + 5 = d, -22 = -4*g - 3*k + 8*k for g.
3
Let c be (-4)/(3 - -1)*-3. Let a(s) = s**2 - 3*s - 3. Let w be a(-2). Solve -3*o - w + 0 = -4*m, c*m - 5*o = 8 for m.
1
Suppose 3*a + 3 = -5*b + 3*b, -a = 2*b + 9. Let q(m) = -m**3 - 5*m**2 + 5*m - 2. Let s be q(b). Solve -6*l = -2*w - 2*l + s, 4*l = -w - 4 for w.
0
Suppose -3*s + 6 = 0, 5*u + s + s - 39 = 0. Solve 0 = -v - 5*j + 2 + u, 0 = v - 5*j + 11 for v.
-1
Suppose 4*k = 14 - 2. Solve -2*n - k*c + 4*c + 6 = 0, 5*n - c = 18 for n.
4
Let q be 6/21 - 33/(-7). Let m be 0 - (-14 - (0 + 2)). Suppose 2*i - m = -k - 4*k, -8 = -4*i + 2*k. Solve 0 = -q*n + 2*a + 24, 4*n - a = -i*a + 12 for n.
4
Let w(s) = s**3 - 5*s**2 + 4*s - 17. Let h be w(5). Solve h*d = -2*d - 2*m + 12, -4*d - 3*m = -4 for d.
4
Suppose 4*r - 2 = 5*j + 11, 0 = 4*j - 4*r + 8. Let m = 10 + j. Let g be (-2)/m + (-104)/(-10). Solve -4*w + 3*s = g, -3*w + 3*s + 10 = -4*w for w.
-4
Let u(h) = 3*h - 2. Let p be u(2). Suppose 1 + 7 = p*j. Solve -5*d = -j*z - 22, 3*z - 2*z + 5*d - 19 = 0 for z.
-1
Suppose 5*n - v - 11 = 2*n, -18 = -4*n + 3*v. Solve 8 - 17 = -k + 4*a, -a = -2*k - n for k.
-3
Suppose 2*d - 2*q - 3 - 5 = 0, 4 = d + 2*q. Let g = 6 - d. Suppose 0*j + j - 6 = 0. Solve -4*l + 5*l = g*f + 10, l - j = f for f.
-4
Suppose 2*a - 12 = 6*a. Suppose 0*o - 3*o = 18. Let x = a - o. Solve 3*s = 2*s - 2*v + x, 6 = 5*s + v for s.
1
Let r(y) = 14*y - 2. Let g be r(-1). Let s = g - -22. Solve -3*t = 3*u + 6, -u + 0*u - s = -t for t.
2
Let q(s) = s + 6. Let p be q(6). Suppose 4*f - n = p, 0 = 3*f + 5*n - 1 + 15. Solve f*x + 3*v = 6, 7 = -4*x - v - 1 for x.
-3
Suppose 0 = 5*d + 5*i + 5, 2*d + 20 = 6*d - 2*i. Suppose 0 = d*y + 23 - 68. Suppose 2*s + s - y = 0. Solve l - s*f + 24 = 0, -l + f + 2 = 6 for l.
1
Suppose -2*i = 3*z - 21, 4*z = 3*z + 3*i - 4. Let v(q) = -q**3 - q**2 + 5*q + 3. Let l be v(-4). Solve -z*s - 5*f + f = l, 0 = 4*s - 5*f - 8 for s.
-3
Let i be (-3)/2*24/(-9). Suppose -2*n = -2*r - 16, 3*n + 0 - 16 = r. Solve 3*y = i*c + 23, -2*c + n*c = -3*y + 11 for y.
5
Let m = 8 - 4. Suppose -z = z - 22. Solve -z = -0*j + j + m*v, 2*v = -6 for j.
1
Suppose -5*d + 3 = 43. Let i(q) = -q**2 - q + 1. Let t be i(-3). Let l = t - d. Solve 3*h + h = 2*z + 6, 4*z + l = 5*h for z.
3
Let n = 27 + -27. Solve 2*a = -5*x - 1, 5*x - 4*a + 3 + 10 = n for x.
-1
Suppose 1 = -5*l + 16. Suppose 2*s - 2 = -3*b + 12, -5*s = 4*b - 14. Let t = 9 - b. Solve l*o = t*d, 4*o = -5*d - o + 30 for d.
3
Let o be 6/4*(-24)/(-9). Let q(k) = -k**2 + 6*k - 3. Let n be q(o). Solve -2*r - 2*s = -n*s - 5, 7 = 4*r - 5*s for r.
-2
Let i = 9 + -7. Suppose -8 + i = -3*o. Solve o*j + 4*b + 18 = 0, 14 + 7 = -3*j - 4*b for j.
-3
Let r(f) = 2*f + 2*f - 7*f + 3 - f. Let k be r(4). Let a = 18 + k. Solve 8 = 3*g - a*y, 2*y + 4 = 3*g - 2*g for g.
-4
Let u be (20/2)/(4/2). Let r = -1 + 3. Suppose -r*h = -u*h + 93. Solve 5*d = -2*n + 29, -3*n - 7*d = -2*d - h for n.
2
Let x = -7 + 12. Suppose 2*z = 3*q + 7, -2*q - q = 4*z - x. Solve -z*l + y = 3*y - 16, -3*y = -4*l - 3 for l.
3
Let g be 5 - 6 - (-1 - -1). Let f(m) = -m**3 - m. Let y be f(g). Solve y*l = -2*p, l + 3*l = 12 for p.
-3
Suppose -3*i = -13 + 22. Let r be ((-12)/8)/(i/10). Solve -2*m + 4*m = 5*k + 2, 0 = -k - r*m - 22 for k.
-2
Let o = 61 - 43. Solve -4*m + o = 3*n - 17, -m + 3*n = 10 for m.
5
Let h be 3/9 + 11/3. Let k be (-3)/(-2)*64/(-6). Let l = -14 - k. Solve l*t + 1 = -4*z + 3, 27 = -3*t + h*z for t.
-5
Suppose 4*l + 128 - 36 = 0. Let t = l - -32. Solve t = -4*o - d, -3*d = -4*o - d - 6 for o.
-2
Suppose 3*h - 12 = 3. Solve 0*j = -4*f + j + 19, -5*f = -h*j - 20 for f.
5
Let t(l) = -l**3 + 5*l**2 - 5*l + 2. Let c be t(3). Suppose 3*o - 5*o = -10. Solve 0 = o*f - b + 5, 4*b = c*f - 0*b - 10 for f.
-2
Let i(u) be the first derivative of u**4/4 + 10*u**3/3 - u**2/2 - 7*u - 3. Let n be i(-10). Solve 3*m - 3 - 6 = 0, -n*o - 4*m + 3 = 0 for o.
-3
Let q(o) be the second derivative of -o**3/6 + 3*o**2/2 - 2*o. Let t be q(3). Let d = 1 + -1. Solve d = 4*g - t*g - 16, 3*f + g = 13 for f.
3
Let l(z) = 2*z - 2*z - 6 + 3*z - 4*z. Suppose -4*t = -14 + 50. Let o be l(t). Solve -3*j = -3, o*y - 3*j = -j + 13 for y.
5
Suppose 0 = -h + 4 + 15. Solve n = 4*j - h, 2*j - 4 + 8 = -4*n for j.
4
Let u(o) = -3*o - 7. Let h be u(-5). Suppose -4*f = h*b - 3*b - 40, f + b - 9 = 0. Solve 5*t - 2*d - 9 - 16 = 0, f = -d for t.
3
Let c(v) = -v**2 + v**2 - 12*v + 8 + 8 + v**2. Let d be c(11). Solve 8 = 3*t - d*a, 1 = -2*t + t - 2*a for t.
1
Let o = 15 - -1. Let b = 5 + o. Solve -2*x = 3*x - g + b, -x - 4*g = 21 for x.
-5
Let f(m) = m - 2. Let l be f(7). Suppose 2*t = -3*h - 10, 0*h + 5*t = l*h - 25. Solve h*r + 5*d - 5 = 2*r, -2*d + 2 = 0 for r.
0
Let w(o) = 3*o + 20. Let v be w(-5). Solve -7*c + x = -2*c - 26, v*c = -5*x + 20 for c.
5
Suppose 4*o + 7 = 39. Let w(b) = -2*b + b + 2*b - o*b. Let s be w(-1). So |
Slater and Gordon Lawyers is one of the largest personal injury law firms in the UK. Our solicitors deal with every type of personal injury claim, from car accidents to asbestos compensation claims.
Read more.
Our team is independently recognised as the UK's leading employment team. Our standing is confirmed by our solo top-tier ranking achieved in the professional directories, and the awards we have won for our employment services.
Read more.
Slater and Gordon Lawyers is home to the largest group of family lawyers in the country, with offices across England & Wales. Contact us for advice on your issue, along with information on flexible pricing and fixed-fee services.
Read more.
Police Federation Members
For 50 years, Slater and Gordon Lawyers has prided itself on providing great value to the Police Federation and an exceptional service to its members in personal injury, criminal, employment, defamation, family and wills, inheritance and welfare advice.
The information in this section of our website provides:
basic legal guidance
an overview of the legal services on offer to Federation members, and
information on who you as a member need to contact in order to access them.
Services provided by Slater and Gordon for members are either Federation-funded, non-Federation-funded (family law and wills, trusts and estates) or value-added (services provided at no cost to members as a benefit of Federation membership). |
CONTACT US
Contact Us
Location
Our offices are located at 130 St. George Street, on the 14th floor of the Robarts Library building. Take the P4 elevator from the second floor of Robarts to the 14th floor. On exiting the elevator, head LEFT and follow signs to EAS.
Mailing Address
Department of East Asian Studies
University of Toronto
130 St. George Street, #14080
Toronto, ON
M5S 3H1
Canada
Fax: 416-978-5711
Hours
Our offices are open from 9 a.m. to 4:30 p.m. Monday to Friday (and 9 a.m. to 4:00 p.m. during the months of July and August). |
Jewish Architectural Heritage Foundation
The Jewish Architectural Heritage Foundation is a non-profit, Staten Island, New York-based corporation which assumes responsibility for maintaining, restoring, renovating and building Jewish heritage buildings and monuments on a global scale. It has 501(c)(3) tax-exempt status.
The organization's work is philanthropic in nature, and is focused on restoring and erecting Jewish public buildings and holy sites. Many of its activities are planned in disadvantaged communities, serving to benefit members of all faiths and creeds and promoting inter-racial tolerance and understanding of local heritage. Currently, its focus is set on projects within Romania and Hungary.
Philanthropic Undertakings
Northern Transylvania Holocaust Memorial Museum
The Northern Transylvania Holocaust Memorial Museum is located in Șimleu Silvaniei, Romania and was opened September 11, 2005. The museum is operated and maintained by the Jewish Architectural Heritage Foundation of New York and Asociata Memoralia Hebraica Nusfalau - a Romanian NGO, with the support of the Claims Conference, Elie Wiesel National Institute for Studying the Holocaust in Romania, among other philanthropic and pedagogical partners, such as Oliver Lustig, Liviu Beris, Mihail E. Ionescu, Felicia Waldman, Lya Benjamin and Harry Kuller.
The old synagogue of Simleu Silvaniei was erected in 1876. In May/June 1944, the area's Jewish population was forced out of their homes into the brutal Cehei ghetto and from there packed into cattle cars and transported to Auschwitz-Birkenau. Over 160,000 Jews from the region perished. Of those few remaining Jews who survived the Holocaust and remained in Romania, the last Jewish family emigrated from the region during the mid-1960s, while the country was still under Communist rule. The loss of its congregation left the Synagogue to fate, decaying silently over time.
JAHF launched a vigorous campaign driving the restoration project forward. Its efforts contributed to raising funds to complete construction, establishing educational criteria, and supporting pedagogical training for the regional school systems. The Museum now functions as an educational hub and essential resource for Holocaust education in the region. Guided tours tailored to students are offered daily. The museum's centerpiece is the synagogue originally built in 1876.
Şimleu Silvaniei Multicultural Holocaust Education and Research Center
In the Spring of 2008, the Museum inaugurated the annex to the Northern Transylvania Holocaust Memorial Museum: the Şimleu Silvaniei Multicultural Holocaust Education and Research Center. The facility is now used to host lectures and seminars about the Holocaust, with programs geared to students, teachers and academics. The teacher program encourages and helps teachers to sensitively incorporate the subject of the Holocaust into their curriculum, a discipline sorely lacking in Romania's school system. JAHF is devoted to maintaining operations at the Holocaust Education and Research Center and is a financial supporter of its pedagogical activities.
Cehei Ghetto Memorial
The Jews of Sălaj County were concentrated in the Klein Brickyard of Cehei, in a marshy and muddy area about three miles from Şimleu Silvaniei. At its peak in May 1944, the ghetto held about 8,500 Jews. Among these were the Jews from the communities in the districts of Crasna, Cehu Silvaniei, Jibou, Şimleu Silvaniei, Supuru de Jos, Tăşnad, and Zalău. Since the brick-drying sheds were rather limited, many of the ghetto inhabitants were compelled to live under the open sky. The ghetto was guarded by a special unit of gendarmes from Budapest and operated under the command of Krasznai, one of the most cruel ghetto commanders in Hungary.
As a result of torture, malnutrition, and a totally inadequate water supply in the ghetto, the Jews of Sălaj County arrived at Auschwitz in such poor condition that an unusually large percentage were selected for gassing immediately upon arrival. The deportations from Cehei were carried out in three transports between May 31 and June 8.
Although built structures no longer exist on the site of the former brickyard, the Jewish Architectural Heritage Foundation has set up signposts commemorating the events that took place at what is now known as the Cehei Ghetto. Organized tours frequently visit the site, as scheduled by the Northern Transylvania Holocaust Memorial Museum. JAHF is currently negotiating the procurement of an authentic cattle car that was used to transport Jews to the death camps, for a permanent exhibit at the site of the former ghetto.
Nuşfalău Memorial
The Jewish Architectural Heritage Foundation is planning the construction of a Holocaust memorial designed by architect Adam Aaron Wapniak on the site of the old Nuşfalău Synagogue, directly adjacent to an existing war memorial. The barren site aptly reflects the absence of the town's once-vibrant Jewish population. Serving as a silent memorial to the Holocaust, the design conveys its message through sculptural suggestion and abstraction. Embracing a dichotomous design, the steel tracks represent both the grotesqueness of the Holocaust and the temporal journey of the Jewish people. Resembling a flower, the breach in the tracks is emblematic of incidents of persecution and repression, all the while rising forth, blooming like a lotus.
The Jibou Cemetery Restoration Project
JAHF has funded the restoration of the Jewish cemetery in Jibou, Romania, and has contributed to the care of other outlying cemeteries in the region. Typical tasks include straightening or righting fallen tombstones, repairing inscriptions, and landscaping these holy sites.
Holocaust Documentation
The Jewish Architectural Heritage Foundation has also completed a number of video documentaries, noting the personal accounts of individuals from around the Șimleu Silvaniei area. The most notable documentary (currently in editing) features Elly Gross as she walks through the town of Simleu Silvaniei, giving an account of the Jews who once walked in its streets.
Sister Organizations
Asociata Memoralia Hebraica Nuşfalău
|
Prostaglandin dependence of membrane order changes during myogenesis in vitro.
Myogenic differentiation in vitro involves at least three events at the cell surface: binding of prostaglandin to cells, cell-cell adhesion, and fusion of the myoblast membranes into syncytia. Previous work has suggested that binding of prostaglandin is causal to the change in cell-cell adhesion and that both are accompanied by a characteristic reorganization of the myoblast membrane detected as a transient increase in membrane order by electron paramagnetic resonance. We show here that this membrane order change, which reaches a maximum at 38 h of development in vitro, was the last membrane order change before bilayer fusion which begins several hours later. This membrane order change, which accompanies the change in cell-cell adhesion, was dependent on the availability of prostaglandin. In myoblasts maintained in indomethacin, where further differentiation is known to be blocked at the prostaglandin binding step, the membrane order change did not occur. However, if myoblasts are provided with exogenous prostaglandin, the membrane order change occurred and differentiation proceeded. The results indicate that the basis of the membrane order change was the reorganization of myoblast membranes to allow increased adhesion and prepare the membrane for bilayer fusion. They also demonstrate that, like the increase in myoblast adhesion, the membrane order change was dependent on prostaglandin being available to bind to its receptor. |
Gerald McCoy RUMORS & NEWS
Detroit Free Press
"Gerald McCoy signed the richest contract for a defensive tackle in NFL history in October, but he said that the man who's about to top his deal is the best player at his position right now.
During an appearance on ESPN's "Mike & Mike" radio show this morning, McCoy was asked who is the best..."
January 22
ESPN.com
"All-Pro defensive tackle Gerald McCoy said Thursday that the Buccaneers' defense has been "soft."
"Yeah. I mean, if you look out there on tape and you see a bunch of guys sitting on blocks, are you not earning the title of being soft?'' McCoy said.
"I mean, guys get so sensitive around the..."
October 23
Bucs Nation
"The Houston Texans have signed J.J. Watt to a six-year, $100 million extension, according to John McClain of The Houston Chronicle. The deal includes $51 million in guarantees, and runs through 2021: Watt was under contract through 2015 under a cheap rookie deal. The structure of the deal..."
September 02
Tampa Tribune
"Tampa Bay Buccaneers defensive tackle Gerald McCoy is entering the final year of his rookie contract, a five-year, $63 million pact that will pay him $10.295 million this year.
The Bucs have already said that re-signing McCoy to a long-term deal is a high priority but negotiations aimed at..."
April 16
Tampa Tribune
"Tampa Bay Buccaneers defensive tackle Gerald McCoy might never be the meanest or orneriest player on the field, but the generally good-natured 2012 Pro Bowler is trying to play the game with a little more anger.
“Ninety-nine (Bucs Hall of Famer Warren Sapp) told me he needs to see more, I guess..."
November 22
Fox Sports Florida
"Tampa Bay Buccaneers defensive tackle Gerald McCoy left practice Saturday with a tweaked calf, coach Greg Schiano said. The severity is unknown.
McCoy, named to his first Pro Bowl after last season, limped off during a two-minute drill late in the workout at One Buc Place. After leaving the field..."
August 10
The Oklahoman
"Gerald McCoy flashed his trademark smile, shook plenty of hands and took pictures with everyone who wanted one.
The former Southeast High School and Oklahoma star and current defensive tackle for the Tampa Bay Buccaneers is always happy to be home.
McCoy was happy to be back home, both for..."
June 21
Fox Sports Florida
"Tampa Bay Buccaneers defensive tackle Gerald McCoy was scheduled to throw out the ceremonial first pitch before a game Saturday between the Tampa Bay Rays and New York Yankees, but he never received the chance.
McCoy was involved in a minor accident on his way to Tropicana Field and missed the..."
May 26
Tampa Tribune
"Since Hardy Nickerson re-lit it 20 years ago, the mantle of leadership within the Buccaneers locker room — that proverbial torch everyone talks about — has been left behind more often than it has been passed on.
It was left for Warren Sapp, who held it after Nickerson left in 2000. It was left..."
May 12 |
[**Tests of hadronic vacuum polarization fits for the muon anomalous magnetic moment**]{}
\
Maarten Golterman,$^{a,b}$ Kim Maltman,$^{c,d}$ Santiago Peris$^e$\
[*$^a$Institut de Física d’Altes Energies (IFAE), Universitat Autònoma de Barcelona\
E-08193 Bellaterra, Barcelona, Spain\
$^b$Department of Physics and Astronomy, San Francisco State University\
San Francisco, CA 94132, USA\
$^c$Department of Mathematics and Statistics, York University\
Toronto, ON Canada M3J 1P3\
$^d$CSSM, University of Adelaide, Adelaide, SA 5005 Australia\
$^e$Department of Physics, Universitat Autònoma de Barcelona\
E-08193 Bellaterra, Barcelona, Spain*]{}\
[ABSTRACT]{}\
> Using experimental spectral data for hadronic $\t$ decays from the OPAL experiment, supplemented by a phenomenologically successful parameterization for the high-$s$ region not covered by the data, we construct a physically constrained model of the isospin-one vector-channel polarization function. Having such a model as a function of Euclidean momentum $Q^2$ allows us to explore the systematic error associated with fits to the $Q^2$ dependence of lattice data for the hadronic electromagnetic current polarization function which have been used in attempts to compute the leading order hadronic contribution, $a_\m^{\rm HLO}$, to the muon anomalous magnetic moment. In contrast to recent claims made in the literature, we find that a final error in this quantity of the order of a few percent does not appear possible with current lattice data, given the present lack of precision in the determination of the vacuum polarization at low $Q^2$. We also find that fits to the vacuum polarization using fit functions based on Vector Meson Dominance are unreliable, in that the fit error on $a_\m^{\rm HLO}$ is typically much smaller than the difference between the value obtained from the fit and the exact model value. The use of a sequence of Padé approximants known to converge to the true vacuum polarization appears to represent a more promising approach.
\[intro\] Introduction
======================
In the quest for a precision computation of the muon anomalous magnetic moment $a_\m=(g-2)/2$, the contribution from the hadronic vacuum polarization at lowest order in the fine-structure constant $\a$, $a_\m^{\rm HLO}$, plays an important role. While the contribution itself is rather small (of order 0.06 per mille!) the error in this contribution dominates the total uncertainty in the present estimate of the Standard-Model value. In order to reduce this uncertainty, and resolve or solidify the potential discrepancy between the experimental and Standard-Model values, it is thus important to corroborate, and if possible improve on, the total error in $a_\m^{\rm HLO}$.
Recently, there has been much interest in computing this quantity using Lattice QCD [@TB12]. In terms of the vacuum polarization $\P^{\rm em}(Q^2)$ at Euclidean momenta $Q^2$, $a_\m^{\rm HLO}$ is given by the integral [@TB2003; @ER] $$\begin{aligned}
\label{amu}
a_\m^{\rm HLO}&=&4\a^2\int_0^\infty dQ^2\,f(Q^2)\left(\P^{\rm em}(0)-\P^{\rm em}(Q^2)\right)\ ,\\
f(Q^2)&=&m_\m^2 Q^2 Z^3(Q^2)\,\frac{1-Q^2 Z(Q^2)}{1+m_\m^2 Q^2 Z^2(Q^2)}\ ,\nonumber\\
Z(Q^2)&=&\left(\sqrt{(Q^2)^2+4m_\m^2 Q^2}-Q^2\right)/(2m_\m^2 Q^2)\ ,\nonumber\end{aligned}$$ where $m_\m$ is the muon mass, and for non-zero momenta $\P^{\rm em}(Q^2)$ is defined from the hadronic contribution to the electromagnetic vacuum polarization $\P^{\rm em}_{\m\n}(Q)$, $$\label{Pem}
\P^{\rm em}_{\m\n}(Q)=\left(Q^2\d_{\m\n}-Q_\m Q_\n\right)\P^{\rm em}(Q^2)$$ in momentum space.
Since the integral is over Euclidean momentum, this is an ideal task for the lattice, if $\P^{\rm em}(Q^2)$ can be computed at sufficiently many non-zero values of $Q^2$, especially in the region $Q^2\sim m^2_\m$ which dominates the integral. However, because of the necessity of working in a finite volume, momenta are quantized on the lattice, which turns out to make this a difficult problem. Figure \[f1\] demonstrates the problem. On the left, we see a typical form of the subtracted vacuum polarization, together with the low-$Q^2$ points from a typical lattice data set.[^1] On the right, we see the same information, but now multiplied by the weight $f(Q^2)$ in Eq. (\[amu\]).
![image](vacpol_model_fakedata.pdf){width="2.9in"} ![image](integrand_model_fakedata.pdf){width="2.9in"}
Figure \[f1\] clearly shows why evaluating the integral in Eq. (\[amu\]) as a Riemann sum using typical lattice data is ruled out. In principle, going to larger volumes, or using twisted boundary conditions [@DJJW2012; @ABGP2013] can help, but it will be necessary to fit the lattice data for $\P(Q^2)$ to a continuous function of $Q^2$ in order to evaluate the integral. The problem then becomes that of finding a theoretically well-founded functional form for the $Q^2$ dependence of $\P(Q^2)$, so that this functional form can be fitted to available data, after which the integral in Eq. (\[amu\]) is performed using the fitted function.
A number of fit functions have been used and/or proposed recently. One class of fit functions is based on Vector Meson Dominance (VMD) [@AB2007; @FJPR2011; @BDKZ2011], another class on Padé Approximants (PAs) [@DJJW2012; @ABGP2012], while a position-space version of VMD-type fits was recently proposed in Ref. [@FJMW]. VMD-type fits, as well as the PAs used in Ref. [@DJJW2012] do not represent members of a sequence of functions guaranteed to converge to the actual vacuum polarization, whereas the PAs of Ref. [@ABGP2012] do. Thus, theoretical prejudice would lead one to choose the PAs of Ref. [@ABGP2012] as the appropriate set of functions to fit lattice data for the vacuum polarization.
However, this does not guarantee that any particular fit to lattice data for the vacuum polarization will yield an accurate estimate of $a_\m^{\rm HLO}$ with a reliable error. This depends not only on the theoretical validity of the fit function, but also, simply, on the availability of good data. Moreover, even if a sequence of PAs converges (on a certain $Q^2$ interval), not much is known in practice about how fast its rate of convergence may be. For example, if the convergence is very slow given a certain lattice data set, it could be that only PAs with a number of parameters far beyond the reach of these data give a numerically adequate representation of the true vacuum polarization, for the goal of computing $a_\m^{\rm HLO}$ to a phenomenologically interesting accuracy.
It would therefore be useful to have a good model, in which the “exact” answer is known. One can then investigate any given fitting method, and ask questions such as whether a good fit (for instance, as measured by the $\c^2$ per degree of freedom) leads to an accurate result for $a_\m^{\rm HLO}$. If the model is a good model, this will not only test the theoretical validity of a given fit function, but also how well this fit works, given a required accuracy, and given a set of data for $\P(Q^2)$. In other words, it will give us a reliable quantitative estimate of the systematic error.
Such a model is available for the vacuum polarization. The $I=1$ non-strange hadronic vector spectral function has been very accurately measured in hadronic $\t$ decays. From this spectral function, one can, using a dispersion relation, construct the corresponding component of the vacuum polarization, if one has a reliable theoretical representation for the spectral function beyond the $\t$ mass. Such a representation was constructed in Refs. [@BCGJMOP2011; @BGJMMOP2012] from OPAL data for this spectral function [@OPAL]. The thus obtained vacuum polarization is closely related to the $I=1$ component of the vacuum polarization obtained from $\s(e^+e^-\to\g\to\mbox{hadrons})$.
Three points are relevant to understanding the use of the term “model” for the resulting $I=1$ polarization function, in the context of the underlying $a_\m^{\rm HLO}$ problem. First, $a_\m^{\rm HLO}$ is related directly to $\s(e^+ e^- \rightarrow \g \rightarrow \mbox{hadrons})$ [@BM1961] and the associated electromagnetic (EM) current polarization function, which, unlike the model, has both an $I=1$ and $I=0$ component. Second, even for the $I=1$ part there are subtleties involved in relating the spectral functions obtained from $\s(e^+ e^- \rightarrow \g\rightarrow\mbox{hadrons})$ and non-strange $\t$ decays [@Detal2009; @DM2010]. Finally, since the $\t$ data extends only up to $s=m_\t^2$, a model representation is required for the $I=1$ spectral function beyond this point.
In fact, we consider the pure $I=1$ nature of the model polarization function an advantage for the purposes of this study, as it corresponds to a simpler spectral distribution than that of the EM current (the latter involving also the light quark and $\bar{s}s$ $I=0$ components). Working with the $\t$ data also allows us to avoid having to deal with the discrepancies between the determinations of the $\p^+\p^-$ electroproduction cross-sections obtained by different experiments [@CMD2pipi07; @SNDpipi06; @BaBarpipi12; @KLOEpipi12].[^2] We should add that, though a model is needed for the part of the $I=1$ spectral function beyond $s=m_\t^2$, for the low $Q^2$ values relevant to $a_\m^{\rm HLO}$, the vacuum polarization we construct is very insensitive to the parametrization used in this region. Finally, we note that the model vacuum polarization satisfies, by construction, the same analyticity properties as the real vacuum polarization. In particular, the subtracted model vacuum polarization is equal to $Q^2$ times a Stieltjes function [@ABGP2012]. We thus expect our model to be an excellent model for the purpose of this article, which is to test a number of methods that have been employed in fitting the $Q^2$ dependence of the vacuum polarization to lattice data, and not to determine the $I=1$ component of $a_\m^{\rm HLO}$ from $\t$ spectral data.
This article is organized as follows. In the following two sections, we construct the model, and define the fit functions we will consider here. Throughout this paper, we will consider only VMD-type fits, which have been extensively used, and PA fits of the type defined in Ref. [@ABGP2012].[^3] In Sec. \[lattice\], we use the model and a typical covariance matrix obtained in a lattice computation to generate fake “lattice” data sets, which are then fitted in Sec. \[fits\]. We consider both correlated and diagonal (“uncorrelated”) fits, where in the latter case errors are computed by linear propagation of the full data covariance matrix through the fit. From these fits, estimates for $a_\m^{\rm HLO}$ with errors are obtained, and compared with the exact model value in order to test the accuracy of the fits. Section \[conclusion\] contains our conclusions.
\[model\] Construction of the model
===================================
The non-strange, $I=1$ subtracted vacuum polarization is given by the dispersive integral $$\label{disp}
\tP(Q^2)=\P(Q^2)-\P(0)=-Q^2\int_{4m_\p^2}^\infty dt\;\frac{\r(t)}{t(t+Q^2)}\ ,$$ where $\r(t)$ is the corresponding spectral function, and $m_\p$ the pion mass. In order to construct our model for $\tP(Q^2)$, we split this integral into two parts: one with $4m_\p^2\le t\le s_{min}
\le m_\t^2$, and one with $s_{min}\le t<\infty$. In the first region, we use OPAL data to estimate the integral by a simple Riemann sum: $$\label{region1}
\left(\P(Q^2)-\P(0)\right)_{t\le s_{min}}=-Q^2\D t\sum_{i=1}^{N_{min}}\;
\frac{\r(t_i)}{t_i(t_i+Q^2)}\ .$$ Here the $t_i$ label the midpoints of the bins from the lowest bin $i=1$ to the highest bin $N_{min}$ below $s_{min}=N_{min}\D t$, and $\D t$ is the bin width, which for the OPAL data we use is equal to $0.032$ GeV$^2$. For the contribution from the spectral function above $s_{min}$, we use the representation
$$\begin{aligned}
\label{region2}
\left(\P(Q^2)-\P(0)\right)_{t\ge s_{min}}&=&-Q^2\int_{s_{min}}^\infty dt\;\frac{\r_{t\ge s_{min}}(t)}{t(t+Q^2)}\ ,\label{region2a}\\
\r_{t\ge s_{min}}(t)&=&\r_{\rm pert}(t)+e^{-\d-\g t}\sin(\a+\b t)\ ,\label{region2b}\end{aligned}$$
where $\r_{\rm pert}(t)$ is the perturbative part calculated to five loops in perturbation theory, expressed in terms of $\a_s(m_\t^2)$ [@BCK2008], with $m_\t$ the $\t$ mass. The oscillatory term is our representation of the duality-violating part, and models the presence of resonances in the measured spectral function. This representation of the spectral function was extensively investigated in Refs. [@BCGJMOP2011; @BGJMMOP2012], and found to give a very good description of the data between $s_{min}=1.504$ GeV$^2$ and $m_\t^2$. Figure \[f2\] shows the comparison between the data and the representation (\[region2b\]) for this value of $s_{min}$; the blue continuous curve shows the representation we will be employing here. Our central values for $\a_s(m_\t^2)$, $\a$, $\b$, $\g$ and $\d$ have been taken from the FOPT $w=1$ finite-energy sum rule fit of Ref. [@BGJMMOP2012]:[^4] $$\begin{aligned}
\label{w1FOPT}
\a_s(m_\t^2)&=&0.3234 \ ,\\
\a&=&-0.4848 \ ,\nonumber\\
\b&=&3.379~\mbox{GeV}^{-2} \ ,\nonumber\\
\g&=&0.1170~\mbox{GeV}^{-2} \ ,\nonumber\\
\d&=&4.210 \ .\nonumber\end{aligned}$$ The low-$Q^2$ part of the function $\P(Q^2)$ obtained through this strategy is shown as the blue curve in the left-hand panel of Fig. \[f1\].
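The two-region construction of Eqs. (\[region1\])-(\[region2b\]) can be sketched numerically as follows. The binned spectral values below are invented stand-ins for illustration (they are not the OPAL data), and the perturbative part of the tail is truncated to leading order in $\a_s$ rather than the five-loop expression used in the text; only the duality-violating parameters are taken from Eq. (\[w1FOPT\]).

```python
import numpy as np

# Duality-violating tail parameters, Eq. (w1FOPT)
ALPHA_S, ALPHA, BETA, GAMMA, DELTA = 0.3234, -0.4848, 3.379, 0.1170, 4.210
S_MIN, DT = 1.504, 0.032  # GeV^2 (s_min and the OPAL bin width)

def rho_tail(t):
    """Spectral function above s_min, Eq. (region2b); the perturbative
    part is a leading-order stand-in for the five-loop expression."""
    rho_pert = (1.0 / (4.0 * np.pi**2)) * (1.0 + ALPHA_S / np.pi)
    return rho_pert + np.exp(-DELTA - GAMMA * t) * np.sin(ALPHA + BETA * t)

def subtracted_pi(Q2, t_mid, rho_bins, t_max=200.0, n_tail=20000):
    """Model Pi(Q^2) - Pi(0) from Eqs. (region1) and (region2a)."""
    # Region 1: Riemann sum over the data bins below s_min
    region1 = -Q2 * DT * np.sum(rho_bins / (t_mid * (t_mid + Q2)))
    # Region 2: trapezoidal dispersive integral of the tail representation
    t = np.linspace(S_MIN, t_max, n_tail)
    w = rho_tail(t) / (t * (t + Q2))
    region2 = -Q2 * np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(t))
    return region1 + region2

# Invented stand-in for the binned spectral data below s_min
t_mid = np.arange(0.1, S_MIN, DT)          # bin midpoints, hypothetical
rho_bins = 0.03 / (1 + (t_mid - 0.6)**2)   # a rho-like bump, hypothetical
```

Note that the subtraction at $Q^2=0$ is automatic: the overall factor of $Q^2$ guarantees $\tP(0)=0$, and the positivity of the integrand makes $\tP(Q^2)$ negative and monotonically decreasing, as in Fig. \[f1\].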
![image](updateCIredFOblueVspec_table1_15.pdf){width="4in"}
As in Ref. [@ABGP2012] we will take as a benchmark the low- and medium-$Q^2$ part of $a_\m^{\rm HLO}$, $$\label{amu1}
\ta_\m^{{\rm HLO},Q^2\le 1}=4\a^2\int_0^{1~{\rm GeV}^2} dQ^2\,f(Q^2)\left(\P(0)-\P(Q^2)\right)\ .$$ To make it clear that we are computing this quantity from $\tP(Q^2)$ defined from Eqs. (\[disp\])-(\[w1FOPT\]), and not from $\P^{\rm em}(Q^2)$, we will use the symbol $\ta_\m$, instead of $a_\m$, in the rest of this article.
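For reference, Eq. (\[amu1\]) can be evaluated numerically once $\tP(Q^2)$ is known on $0\le Q^2\le 1$ GeV$^2$. The sketch below uses the standard kinematic weight $f(Q^2)$ of the muon $g-2$ literature (its explicit form is not spelled out in the text above, so it is taken here on that assumption), together with a purely hypothetical subtracted polarization `pi_toy` supplied for demonstration; a log-spaced grid is used because the integrand is strongly peaked at very low $Q^2$.

```python
import numpy as np

ALPHA_EM = 1.0 / 137.035999   # fine-structure constant
M_MU = 0.1056584              # muon mass in GeV

def f_kernel(Q2):
    """Standard kinematic weight of the a_mu integrand (assumed form)."""
    Z = (np.sqrt(Q2**2 + 4.0 * M_MU**2 * Q2) - Q2) / (2.0 * M_MU**2 * Q2)
    return M_MU**2 * Q2 * Z**3 * (1.0 - Q2 * Z) / (1.0 + M_MU**2 * Q2 * Z**2)

def amu_upto_1GeV2(pi_sub, n=4000):
    """4 alpha^2 int_0^1 dQ2 f(Q2) (Pi(0) - Pi(Q2)), Eq. (amu1),
    by the trapezoidal rule on a log-spaced grid."""
    Q2 = np.logspace(-6, 0, n)          # GeV^2
    g = f_kernel(Q2) * (-pi_sub(Q2))    # Pi(0) - Pi(Q2) = -pi_sub(Q2)
    return 4.0 * ALPHA_EM**2 * np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(Q2))

# Hypothetical subtracted polarization, just for demonstration
pi_toy = lambda Q2: -0.07 * Q2 / (Q2 + 0.6)
```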
Using the OPAL data as described above, and fully propagating errors,[^5] we find the value $$\label{amuvalue}
\ta_\m^{{\rm HLO},Q^2\le 1}=1.204(27)\times 10^{-7}\ .$$
In our tests of lattice data in Sec. \[fits\] below, we will declare the model to be “exact,” and see how various fits to fake lattice data generated from the model will fare in reproducing this exact value. For our purposes, it is sufficient to have a four-digit “exact” value, which we take to be $$\label{amuexact}
\ta_{\m,{\rm model}}^{{\rm HLO},Q^2\le 1}=1.204\times 10^{-7}\ .$$
We close this section with a few remarks. In the region $0\le Q^2\le 1$ GeV$^2$, the model we constructed for $\tP(Q^2)$ is very insensitive both to the detailed quantitative form of Eq. (\[region2b\]) and to the choice of $s_{min}$. Moreover, the precise quantitative values that we obtain for $\tP(Q^2)$ as a function of $Q^2$ are not important. What is important is that this is a very realistic model, based on hadronic data which are very well understood in the framework of QCD, for the $I=1$ part of $\P^{\rm em}(Q^2)$.
\[functions\] Fit functions
===========================
We will consider two classes of fit functions to be employed in fits to data for $\P(Q^2)$. The first class of functions involves PAs of the form $$\label{PA}
\P(Q^2)=\P(0)-Q^2\left(a_0+\sum_{k=1}^K\frac{a_k}{b_k+Q^2}\right)\ .$$ For $a_0=0$, the expression between parentheses is a $[K-1,K]$ Padé; if also $a_0$ is a parameter, it is a $[K,K]$ Padé. With $a_{k\ge 1}>0$ and $b_k>b_{k-1}>\dots>b_1>4m_\p^2$, these PAs constitute a sequence converging to the exact vacuum polarization in the sense described in detail in Ref. [@ABGP2012]. With “good enough” data, we thus expect that, after fitting the data, one or more of these PAs will provide a numerically accurate representation of $\P(Q^2)$ on a compact interval for $Q^2$ on the positive real axis. For each such fit, we may compute $\ta_\m^{{\rm HLO},Q^2\le 1}$, and compare the result to the exact model value. Of course, the aim of this article is to gain quantitative insight into what it means for the data to be “good enough,” as well as into what order of PA might be required to achieve a given desired accuracy in the representation of $\P(Q^2)$ at low $Q^2$.
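As an illustration of Eq. (\[PA\]), the following sketch fits a $[1,1]$ Padé to synthetic data (not the lattice data of this article); `scipy.optimize.curve_fit` is one convenient, though not the only, way to perform the least-squares minimization, and the noise level and starting values are chosen purely for demonstration.

```python
import numpy as np
from scipy.optimize import curve_fit

def pade_11(Q2, pi0, a0, a1, b1):
    """[1,1] Pade of Eq. (PA): Pi(Q2) = Pi0 - Q2*(a0 + a1/(b1+Q2)).
    Setting a0 = 0 instead would give the [0,1] form."""
    return pi0 - Q2 * (a0 + a1 / (b1 + Q2))

# Synthetic data for demonstration (not the lattice set of the text)
rng = np.random.default_rng(1)
Q2 = np.linspace(0.02, 1.0, 50)
truth = pade_11(Q2, 0.0, 0.01, 0.05, 0.8)
data = truth + 1e-4 * rng.standard_normal(Q2.size)

popt, pcov = curve_fit(pade_11, Q2, data, p0=[0.0, 0.01, 0.05, 0.5],
                       sigma=1e-4 * np.ones(Q2.size), absolute_sigma=True,
                       maxfev=10000)
```

Note that $\P(0)$ (`pi0`) is kept as a free parameter, in line with the discussion below Eq. (\[PA\]).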
We note that in the model, by construction we have that $\tP(0)=0$. In contrast, a lattice computation yields only the unsubtracted $\P(Q^2)$ at non-zero values of $Q^2$.[^6] It thus appears that the model does not quite match the lattice framework it is designed to simulate. However, if in the test fits we treat $\P(0)$ in Eq. (\[PA\]) as a free parameter, we discard the information that $\tP(0)=0$ in the model, and we can use the fake data generated from the model as a test case for the lattice. In other words, if we treat $\P(0)$ in Eq. (\[PA\]) as a free parameter, we can think of the model vacuum polarization as $\P(Q^2)$ in a scheme in which $\P(0)$ happens to vanish, rather than as $\tP(Q^2)$. This turns out to be a very important observation, because even if a PA or VMD-type fit does a good job of fitting the overall $Q^2$ behavior over a given interval, it is generally difficult for these fits to yield the correct curvature at very low $Q^2$. Because the integral in Eq. (\[amu\]) is dominated by the low-$Q^2$ region, this effect can lead to significant deviations of $\ta_\m^{{\rm HLO},Q^2\le 1}$ from the exact model value, as we will see below.
We will also consider VMD-type fits, which have been widely used in the literature. Typical VMD-type fits have the form of Eq. (\[PA\]), but with the lowest pole, $b_1$, fixed to the $\r$ mass, $b_1=m_\r^2$. We will consider two versions: straight VMD, obtained by taking $K=1$ in Eq. (\[PA\]) and setting $a_0=0$, and VMD$+$, which is similar but with $a_0$ a free parameter. Such VMD-type fits have been employed previously [@DJJW2012; @AB2007; @FJPR2011; @BDKZ2011; @BFHJPR2013]. We emphasize that VMD-type fits, despite their resemblance to the PAs of Eq. (\[PA\]), are [*not*]{} of that type. The exact function $\P(Q^2)$ has a cut at $Q^2=-4m_\p^2$, which has to be reproduced by the gradual accumulation of poles in Eq. (\[PA\]) toward that value. If instead we choose the lowest pole at the $\r$ mass, the fit function is a model function based on the intuitive picture of vector meson dominance, and is definitely not a member of the convergent sequence introduced in Ref. [@ABGP2012]. However, as already emphasized in Sec. \[intro\], the aim here is to investigate the quality of various fits on test data, without theoretical prejudice. We will thus investigate both PA and VMD-type fits in the remainder of this article.
Ref. [@BDKZ2011] also considered a VMD-type fit with two poles, obtained by choosing $K=2$, $a_0=0$ and $b_1=m_\r^2$. In our case, such a fit turns out not to yield any extra information beyond VMD+: we always find that $b_2$ is very large, and that $a_2$ and $b_2$ are very strongly correlated, with the value of $a_2/b_2$ equal to the value of $a_0$ found in the VMD+ fit. The reason this does not happen in Ref. [@BDKZ2011] is probably that in that case the connected part of the $I=0$ component is also included in $\P(Q^2)$, and this component has a resonance corresponding to the octet component of the $\phi$-$\omega$ meson pair.
\[lattice\] The generation of fake lattice data
===============================================
![image](model_fake_data_all.pdf){width="4in"}
In order to carry out the tests, we need data that correspond to a world described by our model, and that resemble a typical set of lattice data. In order to construct such a data set, we proceed as follows. First, we choose a set of $Q^2$ values. The $Q^2$ values we will consider are those available on an $L^3\times T=64^3\times 144$ lattice with periodic boundary conditions, and an inverse lattice spacing $1/a=3.3554$ GeV. The smallest momenta on such a lattice in the temporal and spatial directions are $$\begin{aligned}
\label{exmom}
Q&=&\left(0,0,0,\frac{2\p}{aT}\right)\quad\rightarrow\quad Q_1^2=0.02143~\mbox{GeV}^2\ ,\\
Q&=&\left(0,0,0,\frac{4\p}{aT}\right)\quad\rightarrow\quad Q_2^2=0.08574~\mbox{GeV}^2\ ,\nonumber\\
Q&=&\left(\frac{2\p}{aL},0,0,0\right)\quad\rightarrow\quad Q_3^2=0.1085~\mbox{GeV}^2\ .\nonumber\end{aligned}$$ Next, we construct a multivariate Gaussian distribution with central values $\P(Q_i^2)$, $i=1,2,\dots$, and a typical covariance matrix obtained in an actual lattice computation of the vacuum polarization on this lattice. The covariance matrix we employed is the covariance matrix for the $a=0.06$ fm data set considered in Ref. [@ABGP2012]. The fake data set is then constructed by drawing a random sample from this distribution.[^7] The data points shown in Fig. \[f1\] are the first three data points of this fake data set. The full data set is shown in Fig. \[f3\]. We will refer to this as the “lattice” data set.
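The fake-data prescription just described amounts to a single draw from a multivariate normal distribution. A minimal sketch, in which the central curve and the covariance matrix are invented stand-ins for the model values and the actual lattice covariance:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for the model central values and the lattice covariance
Q2 = np.array([0.02143, 0.08574, 0.1085, 0.1286])        # GeV^2
central = -0.07 * Q2 / (Q2 + 0.6)                        # hypothetical Pi(Q2)
sig = 2e-4 * np.ones_like(Q2)                            # hypothetical errors
corr = 0.9 ** np.abs(np.subtract.outer(np.arange(4), np.arange(4)))
cov = np.outer(sig, sig) * corr                          # correlated errors

fake = rng.multivariate_normal(central, cov)             # "lattice" set
fake_sf = rng.multivariate_normal(central, cov / 1.0e4)  # "science fiction"
```

Dividing the covariance matrix by $10^4$ reduces each diagonal error by a factor of 100, which is exactly the construction of the "science-fiction" set defined below.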
Below, we will also have use for a “science-fiction” data set. This second data set is obtained exactly as the fake data set described above, except that we first divide the lattice covariance matrix by 10000, which corresponds to reducing diagonal errors by a factor 100. After this reduction, the data set is generated as before. We refer to this as the “science-fiction” data set because it seems unlikely that a realistic lattice data set with such good statistics will exist in the near future. However, this second data set will allow us to gain some additional insights in the context of this model study.
\[fits\] Fits to the fake lattice data
======================================
Fit          $\ta_\m^{{\rm HLO},Q^2\le 1}\times 10^7$   $\s$       $\c^2$/dof   $\ta_\m^{{\rm HLO},Q^2\le 1}\times 10^7$   $\s$   $\c^2$/dof
------------ ------------------------------------------ ---------- ------------ ------------------------------------------ ------ ------------
PA $[0,1]$ 0.8703(95) 285/46 0.6805(45) 1627/84
PA $[1,1]$ 1.116(22) 4 61.4/45 1.016(12) 16 189/83
PA $[1,2]$ 1.182(43) 0.5 55.0/44 1.117(22) 4 129/82
PA $[2,2]$ 1.177(58) 0.5 54.6/43 1.136(38) 1.8 128/81
VMD 1.3201(52) 2189/47 1.3873(44) 18094/85
VMD+ 1.0658(76) 18 67.4/46 1.1041(48) 21 243/84
In this section, we will present and discuss the results of a number of fits, based on the data sets constructed in Sec. \[lattice\].
\[fake1\] “Lattice” data set
----------------------------
Table \[table1\] shows the results of a number of correlated fits of the lattice data set to the functional forms defined in Sec. \[functions\]. To the left of the vertical double line the fitted data are those in the interval $0<Q^2\le 1$ GeV$^2$; to the right the fitted data are those in the interval $0<Q^2\le 1.5$ GeV$^2$. In each of these two halves, the left-most column shows the fit function, and the second column gives the value of $\ta_\m^{{\rm HLO},Q^2\le 1}$ obtained from the fit, with the $\c^2$ fit error between parentheses. The “pull” $\s$ in the third column is defined as $$\label{pull}
\s=\frac{|\mbox{exact\ value}-\mbox{fit\ value}|}{\mbox{error}}\ .$$ For instance, with the exact value of Eq. (\[amuexact\]), we have for the $[1,1]$ PA on the interval $0<Q^2\le 1$ GeV$^2$ that $\s=|1.204-1.116|/0.022=4$. The fourth column gives the $\c^2$ value per degree of freedom (dof) of the fit.
Of course, the pull can only be computed because we know the exact model value. This is precisely the merit of this model study: it gives us insight into the quality of the fit independent of the $\c^2$ value. Clearly, the fit does a good job if the pull is of order one, because if that is the case, the fit error covers the difference between the exact value and the fitted value.
The primary measure of the quality of a fit is the value of $\c^2/$dof. This value clearly rules out the $[0,1]$ PA and VMD as good fits; in these cases we do not even consider the pull, since these functional forms simply do not represent the data well. In all other cases, one might consider the value of $\c^2/$dof to be reasonable, although less so for fits on the interval $0<Q^2\le 1.5$ GeV$^2$. However, only the $[1,2]$ and $[2,2]$ PAs have a good value for the pull for fits on the interval $0<Q^2\le 1$ GeV$^2$, whereas the pull for the $[1,1]$ PA and VMD$+$ is bad: the fit error does not come close to covering the difference between the true (i.e., exact model) value and the fitted value. Note that with the errors of the “lattice” data set even the best result, from the $[2,2]$ PA, only reaches an accuracy of 5% for $\ta_\m$. On the interval $0<Q^2\le 1.5$ GeV$^2$ all fits get worse, as measured both by $\c^2$ and by $\s$, and only the $[2,2]$ PA may be considered acceptable.
![image](vacpol_12pade_correlated_0to1.pdf){width="2.9in"} ![image](integrand_12pade_correlated_0to1.pdf){width="2.9in"}
![image](vacpol_VMDplus_correlated_0to1.pdf){width="2.9in"} ![image](integrand_VMDplus_correlated_0to1.pdf){width="2.9in"}
For illustration, we show the $[1,2]$ PA and VMD$+$ fits on the interval $0<Q^2\le 1$ GeV$^2$ in Figs. \[PA12\] and \[VMDplus\]. The left-hand panels show the fit over a wider range of $Q^2$, including the full set of $Q^2$ values employed in the fit, while the right-hand panels focus on the low $Q^2$ region of the integrand in Eq. (\[amu\]) of primary relevance to $\ta_\m^{{\rm HLO},Q^2\le 1}$, which contains only a few of the $Q^2$ fit points. The blue solid curve shows the exact model, the green dashed curve the fit, and the red points are the lattice data. Both fits to the vacuum polarization look like good fits (confirmed by the $\c^2/$dof values) when viewed from the perspective of the left-hand panels. A clear distinction, however, emerges between the $[1,2]$ PA and VMD$+$ cases when one focuses on the low-$Q^2$ region shown in the right-hand panels. In these panels, the PA fit follows the exact curve very closely, while the VMD$+$ fit undershoots the exact curve by a significant amount, as quantified by the pull. Looking at the left hand panels in Figs. \[PA12\] and \[VMDplus\], one would never suspect the difference in the results for $\ta_\m^{{\rm HLO},Q^2\le 1}$ illustrated in the corresponding right hand panels.
\[fake10000\] “Science-fiction” data set
----------------------------------------
In Table \[table2\], we show the same type of fits as in Table \[table1\], but now using the “science-fiction” data set defined in Sec. \[lattice\]. The corresponding figures are very similar to Figs. \[PA12\] and \[VMDplus\], and hence are not shown here.
Fit $\ta_\m^{{\rm HLO},Q^2\le 1}\times 10^7$ $\s$ $\c^2$/dof
------------ ------------------------------------------ ------ -------------
PA $[0,1]$ 0.87782(9) 1926084/46
PA $[1,1]$ 1.0991(2) 51431/45
PA $[1,2]$ 1.1623(4) 1340/44
PA $[2,2]$ 1.1862(15) 12 76.4/43
PA $[2,3]$ 1.1965(28) 2 42.0/42
VMD 1.31861(5) 20157120/47
VMD+ 1.07117(8) 70770/46
This data set is, of course, quite unrealistic: real lattice data with such precision will not be generated any time soon. But these fits address the question of which of the fit functions considered might still be acceptable in this hypothetical world, and whether simply decreasing the errors (here by the large factor of 100), without also filling in low-$Q^2$ values, is sufficient to reach the desired $\sim 1\%$ accuracy in the determination of $\ta_\m^{{\rm HLO},Q^2\le 1}$. The answer is: barely.
First, we see that the VMD-type fits are completely ruled out already by their $\c^2$ values. The higher-precision data are also more punishing for the PA fits: judged by their $\c^2$ values, the first three PAs are excluded, in contrast to Table \[table1\], where only the $[0,1]$ PA is really excluded by its $\c^2$ value. The $[2,2]$ PA has a possibly reasonable $\c^2$ value, but its accuracy does not match its precision, with a pull equal to 12.[^8] The more precise data make it possible to perform a $[2,3]$ PA fit, and this fit is borderline acceptable, given the value of the pull.
The best fit for each data set yields $\ta_\m^{{\rm HLO},Q^2\le 1}$ with an error of 5% for the lattice data set, down to 0.2% for the science-fiction data set. While this means that (real) lattice data with a precision somewhere in between would yield an error of order 1% or below, we also see from this example that such precision does not necessarily translate into an equal accuracy. We conjecture that in order to increase the accuracy, data at more low-$Q^2$ values than are present in the fake data sets considered here will be needed. While precision data in the region of the peak of the integrand would be ideal, we suspect that filling in the region between the two lowest $Q^2$ values in this data set might already be of significant help.
\[diagonal\] Diagonal fits
--------------------------
It is important to emphasize that the data sets considered here are constructed such that by definition the covariance matrix employed is the true covariance matrix, and not some estimator for the true one. However, it is possible that for some unknown reason the covariance matrix we employed for generating the fake data set is less realistic, even though we took it to come from an actual lattice computation. For instance, the vacuum polarization of this lattice computation contains both $I=1$ and (the connected part of the) $I=0$ components, whereas the vacuum polarization considered here has only an $I=1$ component.
For this reason, we also considered diagonal fits, in which instead of minimizing the $\c^2$ function, we minimize the quadratic form $\cq^2$ obtained by keeping only the diagonal of the covariance matrix. However, our errors take into account the full data covariance matrix by linear error propagation. (For a detailed description of the procedure, see the appendix of Ref. [@BCGJMOP2011].[^9])
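For a model linear in its parameters, this procedure can be written in closed form, and the sketch below illustrates the idea (it is our own minimal illustration, not the code of Ref. [@BCGJMOP2011], which treats the general nonlinear case): minimize the diagonal quadratic form, which makes the parameter estimate a fixed linear map $A$ of the data, and then push the full data covariance $C$ through that map, $\mbox{Cov}(\hat\theta)=ACA^T$.

```python
import numpy as np

def diagonal_fit(X, y, cov_full):
    """Fit y ~ X @ theta by minimizing the *diagonal* quadratic form
    Q^2 = sum_i (y_i - (X theta)_i)^2 / C_ii, then propagate the *full*
    data covariance through the linear estimator theta_hat = A @ y."""
    w = 1.0 / np.diag(cov_full)                  # diagonal weights only
    A = np.linalg.solve((X.T * w) @ X, X.T * w)  # (X^T W X)^{-1} X^T W
    theta = A @ y
    cov_theta = A @ cov_full @ A.T               # full-covariance errors
    return theta, cov_theta
```

Because $AX=1$, the estimator is unbiased even though the minimized quadratic form ignores the correlations; only the quoted errors change relative to a correlated fit.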
Results of diagonal fits are shown in Tables \[table3\] and \[table4\]. These tables show fits analogous to those shown in Tables \[table1\] and \[table2\], but instead of taking the full covariance matrix into account through a $\c^2$ fit, it is only taken into account in the error propagation, after the fit parameters have been determined from a diagonal fit.
$\ta_\m^{{\rm HLO},Q^2\le 1}\times 10^7$ $\s$ $\cq^2$ $\ta_\m^{{\rm HLO},Q^2\le 1}\times 10^7$ $\s$ $\cq^2$
------------ ------------------------------------------ ------ --------- ------------------------------------------ ------ ---------
PA $[0,1]$ 0.997(23) 19 20.1 0.906(15) 20 62.4
PA $[1,1]$ 1.173(74) 0.4 13.8 1.108(39) 2.5 30.3
PA $[1,2]$ 1.30(32) 0.3 13.55 1.22(15) 0.1 29.5
VMD 1.2122(82) 1 75.2 1.2895(69) 12 510
VMD+ 1.083(17) 7 15.0 1.081(12) 10 30.7
The results of these diagonal fits are consistent with, and confirm, the conclusions drawn from the correlated fits shown in Tables \[table1\] and \[table2\]. For the PA fits, the only differences are that the errors from the diagonal fits are larger, and that the maximum order of the PA for which we can find a stable fit is one notch lower. Since the fit quality $\cq^2$ is not a $\c^2$ function, its absolute value (per degree of freedom) has no quantitative probabilistic meaning. But clearly the $[0,1]$, $[1,1]$, VMD and VMD$+$ fits shown in Table \[table4\] are bad fits, as judged from their $\cq^2$ values, and we therefore did not compute the pull for these fits. For all other fits in Tables \[table3\] and \[table4\] the pull is shown, and it is consistent with the pull shown in Tables \[table1\] and \[table2\] for PAs of one order higher.
From these diagonal fits we also conclude that the VMD-type fits considered here do not work. Amusingly, VMD appears to get it right, if one takes the VMD fit on the interval $0<Q^2\le 1$ GeV$^2$ in Table \[table3\] at face value. However, this should be considered an accident. If one adds a parameter to move to a VMD$+$ fit, the value of $\cq^2$ decreases significantly, as it should, but the pull increases dramatically, showing that VMD$+$ is not a reliable fit; this would not happen if the VMD result itself were reliable. Likewise, if we change the fitting interval from $0<Q^2\le 1$ GeV$^2$ to $0<Q^2\le 1.5$ GeV$^2$, the pull increases much more dramatically than for the PA fits. In addition, both VMD-type fits in Table \[table4\] are bad fits, as judged by their $\cq^2$ values, even though, because of the same accident, the VMD value for $\ta_\m^{{\rm HLO},Q^2\le 1}$ looks very good. Note, however, that here too the error is nowhere near realistic: we did not compute the pull because of the large $\cq^2$ value, but the numbers reported imply that it would be very large.
We conclude from this example that in order to gauge the reliability of a fit, ideally one should consider a sequence of fit functions in which parameters are systematically added to the fit function. This allows one to test the stability of such a sequence of fits, and avoid mistakenly interpreting an accidental agreement with the model result as an indication that a particular fit strategy is reliable when it is not, as happens here for the VMD fit and the specific $0$ to $1$ GeV$^2$ fitting window. The PA approach provides a systematic sequence of fit functions in this respect.
$\ta_\m^{{\rm HLO},Q^2\le 1}\times 10^7$ $\s$ $\cq^2$
------------ ------------------------------------------ ------ --------- -- --
PA $[0,1]$ 0.99623(23) 40350
PA $[1,1]$ 1.12875(68) 623
PA $[1,2]$ 1.1762(21) 13 31.3
PA $[2,2]$ 1.1904(54) 2.5 22.1
VMD 1.21076(8) 589751
VMD+ 1.08341(16) 4081
\[medium\] The region $1\le Q^2\le 2$ GeV$^2$
---------------------------------------------
While higher-order PAs appear to work reasonably well, in the sense that their accuracy matches their precision, we also noted that on our fake lattice data set this is less true when one increases the fit interval from $0<Q^2\le 1$ GeV$^2$ to $0<Q^2\le 1.5$ GeV$^2$. At the same time, one expects QCD perturbation theory to be reliable only above approximately $2$ GeV$^2$. This leads to the question of whether one can do better on the interval between 0 and 2 GeV$^2$.
As we saw in Sec. \[fits\], the accuracy of the contribution to $\ta_\m^{{\rm HLO},Q^2\le 1}$ is limited to about 5% on the lattice data set, because of the relatively sparse data at low values of $Q^2$. We will therefore limit ourselves here to a few exploratory comments, in anticipation of future data sets with smaller errors in the low-$Q^2$ region, and a denser set of $Q^2$ values.[^10]
A possible strategy is to fit the data using a higher-order PA on the interval $0<Q^2\le Q^2_{\max}$, while computing the contribution between $Q^2_{\max}$ and 2 GeV$^2$, $\ta_\m^{{\rm HLO},Q^2_{\max}\le Q^2\le 2}$, directly from the data, for some value of $Q^2_{\max}$ such that the PA fits lead to reliable results for $\ta_\m^{\rm HLO}$ on the interval between 0 and $Q^2_{\max}$. This is best explained by an example, in which we choose $Q^2_{max}\approx 1$ GeV$^2$.
The $Q^2$ value closest to 1 GeV$^2$ is $Q^2_{49}=0.995985$ GeV$^2$; that closest to 2 GeV$^2$ is $Q^2_{129}=2.00909$ GeV$^2$. From our fake data set, using the covariance matrix with which it was generated, we use the trapezoidal rule to find an estimate $$\begin{aligned}
\label{trapest}
\ta_\m^{{\rm HLO},Q^2_{49}\le Q^2\le Q^2_{129}}
&=&\half\sum_{i=49}^{128}\left(Q^2_{i+1}-Q^2_i\right)\Bigl(f(Q^2_i)(\P(0)-\P(Q^2_i))\\
&&\hspace{3.5cm}+f(Q^2_{i+1})(\P(0)-\P(Q^2_{i+1}))\Bigr)\nonumber\\
&=&6.925(26)\times 10^{-10}\qquad\mbox{(estimate)}\ .\nonumber\end{aligned}$$ This is in good agreement with the exact value $$\label{exactvalue}
\ta_\m^{{\rm HLO},0.995985\le Q^2\le 2.00909}=6.922\times 10^{-10}\qquad
\mbox{(exact)}\ .$$ On this interval no extrapolation in $Q^2$ is needed, nor does the function $f(Q^2)$ play a “magnifying” role, so we expect the error in Eq. (\[trapest\]) to be reliable, and we see that this is indeed the case. In contrast, it is obvious from Fig. \[f1\] that estimating $\ta_\m^{{\rm HLO},Q^2\le 1}$ in this way would not work. One may now combine the estimate (\[trapest\]) with, for instance, the result from a fit to the $[1,2]$ PA on the interval $0<Q^2\le 0.995985$ GeV$^2$ in order to estimate $\ta_\m^{{\rm HLO},Q^2\le 2.00909}$.[^11] The error on this estimate would be determined completely by that on $\ta_\m^{{\rm HLO},Q^2\le 1}$ coming from the fit, since the error in Eq. (\[trapest\]) is tiny. Of course, in a complete analysis of this type, correlations between the “fit” and “data” parts of $\ta_\m^{\rm HLO}$ should be taken into account, because the values obtained for the fit parameters in Eq. (\[PA\]) will be correlated with the data. However, we do not expect this to change the basic observation of this subsection: The contribution to $\ta_\m^{\rm HLO}$ from the $Q^2$ region between 1 and 2 GeV$^2$ can be estimated directly from the data with a negligible error, simply because this contribution to $\ta_\m^{\rm HLO}$ is itself very small (less than 0.6%). With better data, this strategy can be optimized by varying the value of $Q^2_{max}$.
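The hybrid strategy of this subsection (fit below $Q^2_{max}$, integrate the data directly above it) needs only the trapezoidal sum of Eq. (\[trapest\]). A sketch, with the data-point integrand values `integrand` standing for $f(Q^2_i)\left(\P(0)-\P(Q^2_i)\right)$ at the available momenta:

```python
import numpy as np

def trapezoid_contribution(Q2, integrand):
    """Trapezoidal estimate of int dQ2 f(Q2) (Pi(0) - Pi(Q2)) over the
    data points, as in Eq. (trapest); Q2 holds the data momenta and
    integrand the corresponding integrand values."""
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(Q2))
```

Since the integrand is smooth and slowly varying between 1 and 2 GeV$^2$, the discretization error of this sum is negligible compared to the data errors, as the comparison of Eqs. (\[trapest\]) and (\[exactvalue\]) shows.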
\[conclusion\] Conclusion
=========================
In order to compute the lowest-order hadronic vacuum polarization contribution $a_\m^{\rm HLO}$ to the muon anomalous magnetic moment, it is necessary to extrapolate lattice data for the hadronic vacuum polarization $\P(Q^2)$ to low $Q^2$. Because of the sensitivity of $a_\m^{\rm HLO}$ to $\P(Q^2)$ in the $Q^2$ region around $m_\m^2$, one expects a strong dependence on the functional form used in order to fit data for $\P(Q^2)$ as a function of $Q^2$.
It is therefore important to test various possible forms of the fit function, and a good way to do this is to use a model. Given a model, given a set of values of $Q^2$ at which lattice data are available, and given a covariance matrix typical of the lattice data, one can generate fake data sets, and test fitting methods by comparing the difference between the fitted and model values for $a_\m^{\rm HLO}$ with the error on the fitted value obtained from the fit. In this article, we carried out such tests, using a model constructed from the OPAL data for the $I=1$ hadronic spectral function as measured in $\t$ decays, considering both fit functions based on Vector Meson Dominance as well as a sequence of Padé approximants introduced in Ref. [@ABGP2012]. We took our $Q^2$ values and covariance matrix from a recent lattice data set with lattice spacing $0.06$ fm and volume $64^3\times 144$ [@ABGP2012].
For a fake data set generated at these $Q^2$ values with the given covariance matrix, we found that it can indeed happen that the precision of $\ta_\m^{\rm HLO}$ (the analog of $a_\m^{\rm HLO}$ for our model), i.e., the error obtained from the fit, is much smaller than the accuracy, i.e., the difference between the fitted and exact values. We considered correlated fits as well as diagonal fits, and we also considered fits to a “science-fiction” data set generated with the same covariance matrix scaled by a factor $1/10000$.
From these tests, we conclude that fits based on the VMD-type fit functions we considered cannot be trusted. In nearly all cases, the accuracy is much worse than the precision, and there is no improvement with the more precise data set with the rescaled covariance matrix. Adding parameters (VMD$+$) does not appear to help. Based on our tests, we therefore call into question the use of VMD-type fits for the accurate computation of $a_\m^{\rm HLO}$.[^12]
The sequence of PAs considered here performs better, if one goes to high enough order. The order needed may be higher if one uses more precise data, as shown in the comparison between Tables \[table1\] and \[table2\]. Still, with the lattice $Q^2$ values and covariance matrix of Ref. [@ABGP2012], the maximum accuracy obtained is of order a few percent, but at least this is reflected in the errors obtained from the fits. Of course, given a certain data set, one cannot add too many parameters to the fit, and indeed we find that adding parameters beyond the $[2,2]$ PA ($[2,3]$ PA for the science-fiction data set) does not help: parameters for the added poles at larger $Q^2$ have such large fitting errors that they do not add any information. We also found that PA fits do less well when one increases the fitting interval, and proposed that the contribution to $a_\m^{\rm HLO}$ from the region between around 1 GeV$^2$ and the value where QCD perturbation theory becomes reliable can, instead, be accurately computed using (for instance) the trapezoidal rule (cf. Sec. \[medium\]).
We believe that tests such as that proposed in this article should be carried out for all high-precision computations of $a_\m^{\rm HLO}$. We have clearly demonstrated that a good $\c^2$ value may [*not*]{} be sufficient to conclude that a given fit is good enough to compute $a_\m^{\rm HLO}$ with a reliable error. The reason is the “magnifying effect” produced by the multiplication of the subtracted vacuum polarization by the kinematic weight in the integral yielding $a_\m^{\rm HLO}$. While other useful models (for instance, based on $\s(e^+e^-\to\mbox{hadrons})$ data) may also be constructed, the model considered here, for the $I=1$ polarization function $\P(Q^2)$, is already available, and data on this model will be provided on request.
[**Acknowledgments**]{}
We would like to thank Christopher Aubin and Tom Blum for discussions. KM thanks the Department of Physics at the Universitat Autònoma de Barcelona for hospitality. This work was supported in part by the US Department of Energy, the Spanish Ministerio de Educación, Cultura y Deporte, under program SAB2011-0074 (MG), the Natural Sciences and Engineering Research Council of Canada (KM), and by CICYTFEDER-FPA2011-25948, SGR2009-894, the Spanish Consolider-Ingenio 2010 Program CPAN (CSD2007-00042) (SP).
[99]{}
For a recent review, see T. Blum, M. Hayakawa and T. Izubuchi, PoS LATTICE [**2012**]{}, 022 (2012) \[arXiv:1301.2607 \[hep-lat\]\]. T. Blum, Phys. Rev. Lett. [**91**]{}, 052001 (2003) \[hep-lat/0212018\]. B. E. Lautrup, A. Peterman and E. de Rafael, Nuovo Cim. A [**1**]{}, 238 (1971). M. Della Morte, B. Jäger, A. Jüttner and H. Wittig, JHEP [**1203**]{}, 055 (2012) \[arXiv:1112.2894 \[hep-lat\]\]; PoS LATTICE [**2012**]{}, 175 (2012) \[arXiv:1211.1159 \[hep-lat\]\]. C. Aubin, T. Blum, M. Golterman and S. Peris, arXiv:1307.4701 \[hep-lat\]. C. Aubin and T. Blum, Phys. Rev. D [**75**]{}, 114502 (2007) \[arXiv:hep-lat/0608011\]. X. Feng, K. Jansen, M. Petschlies and D. B. Renner, Phys. Rev. Lett. [**107**]{}, 081802 (2011) \[arXiv:1103.4818 \[hep-lat\]\]; X. Feng, G. Hotzel, K. Jansen, M. Petschlies and D. B. Renner, PoS LATTICE [**2012**]{}, 174 (2012) \[arXiv:1211.0828 \[hep-lat\]\]. P. Boyle, L. Del Debbio, E. Kerrane and J. Zanotti, arXiv:1107.1497 \[hep-lat\]. C. Aubin, T. Blum, M. Golterman and S. Peris, Phys. Rev. D [**86**]{}, 054509 (2012) \[arXiv:1205.3695 \[hep-lat\]\]. A. Francis, B. Jäger, H. B. Meyer and H. Wittig, arXiv:1306.2532 \[hep-lat\]. D. Boito, O. Catá, M. Golterman, M. Jamin, K. Maltman, J. Osborne and S. Peris, Phys. Rev. D [**84**]{}, 113006 (2011) \[arXiv:1110.1127 \[hep-ph\]\].
D. Boito, M. Golterman, M. Jamin, A. Mahdavi, K. Maltman, J. Osborne and S. Peris, Phys. Rev. D [**85**]{}, 093015 (2012) \[arXiv:1203.3146 \[hep-ph\]\].
K. Ackerstaff [*et al.*]{} \[OPAL Collaboration\], Eur. Phys. J. C [**7**]{}, 571 (1999) \[arXiv:hep-ex/9808019\].
C. Bouchiat and L. Michel, J. Phys. Radium [**22**]{}, 121 (1961).
M. Davier, A. Hoecker, G. Lopez Castro, B. Malaescu, X. H. Mo, G. Toledo Sanchez, P. Wang and C. Z. Yuan [*et al.*]{}, Eur. Phys. J. C [**66**]{}, 127 (2010) \[arXiv:0906.5443 \[hep-ph\]\].
C. E. Wolfe and K. Maltman, Phys. Rev. D [**83**]{}, 077301 (2011) \[arXiv:1011.4511 \[hep-ph\]\].
R. R. Akhmetshin [*et al.*]{} \[CMD-2 Collaboration\], Phys. Lett. B [**648**]{}, 28 (2007) \[hep-ex/0610021\].
M. N. Achasov, K. I. Beloborodov, A. V. Berdyugin, A. G. Bogdanchikov, A. V. Bozhenok, A. D. Bukin, D. A. Bukin and T. V. Dimova [*et al.*]{}, J. Exp. Theor. Phys. [**103**]{}, 380 (2006) \[Zh. Eksp. Teor. Fiz. [**130**]{}, 437 (2006)\] \[hep-ex/0605013\].
J. P. Lees [*et al.*]{} \[BaBar Collaboration\], Phys. Rev. D [**86**]{}, 032013 (2012) \[arXiv:1205.2228 \[hep-ex\]\].
D. Babusci [*et al.*]{} \[KLOE Collaboration\], Phys. Lett. B [**720**]{}, 336 (2013) \[arXiv:1212.4524 \[hep-ex\]\].
P. A. Baikov, K. G. Chetyrkin and J. H. Kühn, Phys. Rev. Lett. [**101**]{}, 012002 (2008) \[arXiv:0801.1821 \[hep-ph\]\].
G. M. de Divitiis, R. Petronzio and N. Tantalo, Phys. Lett. B [**718**]{}, 589 (2012) \[arXiv:1208.5914 \[hep-lat\]\].
X. Feng, S. Hashimoto, G. Hotzel, K. Jansen, M. Petschlies and D. B. Renner, arXiv:1305.5878 \[hep-lat\].
F. Burger, X. Feng, G. Hotzel, K. Jansen, M. Petschlies and D. B. Renner, arXiv:1308.4327 \[hep-lat\].
[^1]: For the curve and data shown here, see Sec. \[model\] and Sec. \[lattice\].
[^2]: Figs. 48 and 50 of Ref. [@BaBarpipi12] provide a useful overview of the current situation.
[^3]: For other PA fits considered in the literature, we are not aware of any convergence theorems.
[^4]: The final one or two digits of these parameter values are not significant in view of the errors obtained in Eq. (5.3) of Ref. [@BGJMMOP2012], but these are the values we used to construct the model.
[^5]: Taking into account the OPAL data covariance matrix, the parameter covariance matrix for the parameters in Eq. (\[region2b\]), as well as the correlations between OPAL data and the parameters.
[^6]: A recent paper proposed a method for computing $\Pi(0)$ directly on the lattice [@DPT2012], whereas another recent paper proposed to obtain $\Pi(Q^2)$ at and near $Q^2=0$ by analytic continuation [@FHHJPR2013]. Since we do not know yet what the size of the combined statistical and systematic errors on $\Pi(0)$ determined in such ways will turn out to be, we do not consider these options in this article.
[^7]: We used the Mathematica routines [MultinormalDistribution]{} and [RandomVariate]{}.
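In languages without a built-in multinormal sampler, the same fake-data generation can be sketched via a Cholesky factorization $C=LL^T$: draw independent standard normals $z$ and form $y=\bar{y}+Lz$. The class and method names below are illustrative only, not taken from any of the cited codes:

```java
import java.util.Random;

// Illustrative stand-in for MultinormalDistribution + RandomVariate:
// draw one correlated Gaussian vector y = mean + L z, where C = L L^T.
public class FakeDataGenerator {

    // Cholesky factor (lower triangular) of a symmetric positive-definite matrix.
    static double[][] cholesky(double[][] c) {
        int n = c.length;
        double[][] l = new double[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j <= i; j++) {
                double sum = c[i][j];
                for (int k = 0; k < j; k++) {
                    sum -= l[i][k] * l[j][k];
                }
                l[i][j] = (i == j) ? Math.sqrt(sum) : sum / l[j][j];
            }
        }
        return l;
    }

    // One sample from N(mean, cov), using the supplied random source.
    static double[] sample(double[] mean, double[][] cov, Random rng) {
        int n = mean.length;
        double[][] l = cholesky(cov);
        double[] z = new double[n];
        for (int i = 0; i < n; i++) {
            z[i] = rng.nextGaussian();
        }
        double[] y = new double[n];
        for (int i = 0; i < n; i++) {
            y[i] = mean[i];
            for (int k = 0; k <= i; k++) {
                y[i] += l[i][k] * z[k];
            }
        }
        return y;
    }
}
```

Repeating `sample` many times with the model's data covariance matrix yields an ensemble of fake data sets on which fits can be run, as in the test described in the text.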
[^8]: We define the “precision” as the error we obtain, while the “accuracy” is the difference between the exact and fitted values.
[^9]: We prefer to refer to this type of fit as a “diagonal” fit, instead of an “uncorrelated” fit, as the latter phrase suggests, incorrectly, that the off-diagonal part of the covariance matrix is completely omitted from the analysis.
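As a sketch of the distinction (notation ours, not the paper's): with data covariance matrix $C$, a diagonal fit minimizes

$$\chi^2_{\rm diag}=\sum_i \frac{\left(y_i - f(Q_i^2;\vec{p}\,)\right)^2}{C_{ii}}\,,$$

rather than the fully correlated

$$\chi^2_{\rm corr}=\sum_{i,j}\left(y_i - f(Q_i^2;\vec{p}\,)\right)\left(C^{-1}\right)_{ij}\left(y_j - f(Q_j^2;\vec{p}\,)\right)\,,$$

but the full $C$ still enters when propagating errors to the fitted parameters, which is why “uncorrelated” would be a misleading label.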
[^10]: A denser set can be obtained by going to larger volumes, and/or the use of twisted boundary conditions [@DJJW2012; @ABGP2013].
[^11]: The result from this fit is identical to that on the interval $0<Q^2\le 1$ GeV$^2$ given in Table \[table1\] to the precision shown in that table.
[^12]: This includes the recent work in Ref. [@BFHJPR2013], in which the error on $a_\mu^{\rm HLO}$ is obtained from a VMD$+$ fit, and in which, reportedly, the error from PA-type fits is much larger. Based on the results we have obtained, we strongly suspect that the error on $a_\mu^{\rm HLO}$ in Ref. [@BFHJPR2013] is significantly underestimated. For example, the results from the $[1,2]$ PA and VMD$+$ fits on the interval $0<Q^2\le 1.5$ GeV$^2$ in Table \[table1\] are compatible within errors, with the PA error 5 times larger than the VMD$+$ error. Moreover, in both cases the fit error is too small.
|
Reform All Categorical Programs
Kudos to David Hornbeck and members of the Commission on Chapter 1
for bold plans to improve the learning of economically disadvantaged
students. We think, however, the independent commission framed its work
too narrowly. There is every reason--political, economic, scientific,
and professional--in 1993 to work for broad reformation of all
categorical school programs. We believe the panel made a major mistake
by excluding from its proposals categorical programs other than Chapter
1, such as programs for migrant workers' children, handicapped
children, neglected and delinquent children, limited-English-proficient
children, and Native American children. (See Education Week, Dec. 16,
1992.)
Narrowly framed categorical programs designated to serve only
specific categories of students cause the disjointedness and
inefficiency that plague schools as they attempt to meet legislative
mandates. This problem is particularly serious in urban schools with
high concentrations of economically disadvantaged students. Although
the Chapter 1 commission recognized that the "school is the primary
unit in need of change and improvement," panel members chose not to
say how Chapter 1 reform, as they envision it, will be carried out in
isolation from other categorical programs.
Categorical programs have become very large and expensive--and an
administrative nightmare for school administrators and teachers. It is
not unusual to find schools in which over 50 percent of students are
separated in "pullout" programs from the mainstream and from one
another. Classrooms become something like Grand Central Station and
teachers turn into dispatchers. According to one report, 25 percent of
New York City public school expenditures were tied to special
education. Add the costs of other categorical programs and we're
dealing with what the late Sen. Everett M. Dirksen of Illinois called
"real money."
On the scientific side, a study of placement practices in the
schools by a panel created by the National Academy of Sciences is of
great importance. The panel reported that there is little empirical
justification for categorical labeling that differentiates mildly
mentally retarded children from other children with academic
difficulties; and, perhaps more importantly, it said that similar
instructional processes appear to be effective with educable mentally
retarded, learning-disabled, and compensatory-educational populations,
including Chapter 1.
It is noteworthy that so-called "learning disabled" children, a
special-education category, now constitute more than half of the
special-education population in the nation's public schools. The
researcher Joe Jenkins at the University of Washington reports a
virtually total "overlap" in characteristics of Chapter 1 and
learning-disabled student populations. Richard Allington at the State
University of New York at Albany reports that students in categorical
programs actually receive less instruction in special support programs
designed to provide intensive instructional support, such as Chapter 1
and learning-disabled, or L.D., programs. In what we have called the
"Matthew effect," these students tend to fall further and further
behind students in the mainstream.
School psychologists in many school districts are very heavily
occupied in psychometrics just to allocate children to categorical
programs. This often involves waiting until discrepancies between
"expectations" and actual achievements of students are large enough to
warrant a given categorical label for placement and/or services. Such
pseudoscientific procedures involve calculating "points" generated out
of test results to suggest that a child needs intensive help. All of
this "micromanaging on the input side" of meaningless boundaries is
wasteful and unjustified. Indeed, much of it may be harmful in that it
involves delays in providing help to children at early stages of their
studies. It preoccupies school staff members with the tasks of
justifying eligibility for services and precludes broader professional
services, which they could provide. In fact, the preoccupation with
eligibility certification precludes the delivery of a broader vision of
improvement.
It is true, of course, that legislators can make provisions in law
and in funding systems in almost any fashion. But it is the job of
educators to help frame the concerns of public policymakers so that we
don't have disarrayed programs at the school level.
Research and practical experience indicate that categorical school
programs as they are currently implemented in schools are in disorder.
They cause extreme disjointedness in schools. Further aggravation is
provided by state and federal monitoring activities that are more
oriented to processes than to the substance of teaching and learning.
Literally, monitors seem more interested in what they find in filing
cabinets (Did the parents approve, in writing, before the testing of
their child began?) than in what goes on in classrooms.
By noting these problems, we suggest that the strategies for making
schools work for children in poverty proposed by the independent
commission will, in fact, continue to cause the kinds of disjointedness
that have failed to serve the many children presently in all kinds of
categorical programs, including Chapter 1. We agree with Mr. Hornbeck
and other commission members that aggressive teaching, higher
expectations, and broader curricular approaches are needed by many
students presently being served by Chapter 1 programs and, we add, by
other specially designed entitlement programs.
In thinking about a solution, the quite simple proposal by Carl
Bereiter at the Ontario Institute for Studies in Education may be
useful. He stated: "For any sort of learning, from swimming to reading,
some children learn with almost no help and other children need a great
deal of help. Children whom we have labeled educationally disadvantaged
are typically children who need more than ordinary amounts of help with
academic learning. Why they need help is open to all sorts of
explanations. But suppose that, instead of reopening that issue, we
simply accept the fact that youngsters vary greatly in how much help
they need and why."
In addition, the problem of Chapter 1 must be viewed in the context
of proposed radical reforms in educational organization. The worrisome
difficulties of Chapter 1 identified by the commission and further
elaborated here may be part of a larger
problem--"intergovernmentalism"--making more levels and parts of
government responsible for domestic affairs, although common sense says
that when all are nominally responsible, none is truly responsible. On
this subject, John Kincaid of the University of Washington concluded:
"Virtually all of the factors most associated with academically
effective education are school- and neighborhood-based. Yet, we have
shifted more control and financing of education to state and national
institutions." Our own reviews of research literature certainly
confirm this view.
The commission links its ideas for revision in Chapter 1 to the
integration of schools with other public agencies, such as health and
welfare. We agree that such broader integration approaches are
desirable. But schools cannot lead in these broad services unless and
until the services are integrated in the schools' own internal
operations.
In sum, we believe it is a mistake to prepare revised legislation
and complex accountability procedures in just the Chapter 1 framework
alone. There are no remaining rewards in school practices that make
categorical distinctions that have no merit. It is time to make
organizational and curricular changes that reform all categorical
programs in ways that involve the regular or general education programs
as well. We believe that reframing the entire set of categorical
programs, all the way from legislative and regulatory levels down to
classrooms and individual students, is long overdue.
Perhaps the Commission on Chapter 1's report can be taken as the
first step toward broader deliberation and reform. We are among those
ready to join in such an effort.
Margaret C. Wang is director of the Center for Research in Human
Development at Temple University and director of the National Center on
Education in the Inner Cities. Maynard C. Reynolds is a professor
emeritus of educational psychology at the University of Minnesota and a
senior research associate at the Temple University center. Herbert J.
Walberg is a research professor of education at the University of
Illinois at Chicago.
Vol. 12, Issue 26, Page 64
Published in Print: March 24, 1993, as Reform All Categorical Programs
|
Duration of breastfeeding and gender are associated with methylation of the LEPTIN gene in very young children.
Perinatal environmental factors have been associated with the metabolic programming of children and consequent disease risks in later life. Epigenetic modifications that lead to altered gene expression may be involved. Here, we study early life environmental and constitutional factors in association with the DNA methylation of leptin (LEP), a non-imprinted gene implicated in appetite regulation and fat metabolism. We investigated maternal education, breastfeeding, and constitutional factors of the child at 17 mo of age. We measured the DNA methylation of LEP in whole blood and the concentration of leptin in serum. Duration of breastfeeding was negatively associated with LEP methylation. Low education (≤12 y of education) was associated with higher LEP methylation. Boys had higher birth weight and lower LEP methylation than girls. An inverse association was established between birth weight per SD increase (+584 g) and LEP methylation. High BMI and leptin concentration were associated with lower methylation of LEP. The early life environment and constitutional factors of the child are associated with epigenetic variations in LEP. Future studies must reveal whether breastfeeding and the associated decrease in LEP methylation is an epigenetic mechanism contributing to the protective effect of breastfeeding against obesity. |
Q:
HttpClient in java
I want to use a simple HttpClient.
However, it appears sun.net.www.http.HttpClient is inaccessible.
Also, com.ibm.ws.http.HTTPConnection appears to support the HTTP server side rather than the client. Why? Because when I create an instance of HttpConnection, it has a "getHttpResponse" method to which I am supposed to write.
Anyway to use the IBM HttpConnection for HttpClient?
Or, is there any standard httpClient code that I can use?
A:
Many people use Apache's HttpClient.
Have a look at the first few chapters of its tutorial to see if it's what you're looking for.
If you're after something simple that's already built into Java, you can look at HttpURLConnection, which you can use to build HTTP requests (example). If you need to do anything more than just simple HTTP requests, though, HTTPClient is probably the way to go.
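For completeness, a minimal GET with HttpURLConnection looks roughly like this (the URL and timeout values are just placeholders):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class SimpleHttpGet {

    // Drain a stream into a String, one line at a time (UTF-8 assumed).
    static String readAll(InputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        BufferedReader reader = new BufferedReader(new InputStreamReader(in, "UTF-8"));
        String line;
        while ((line = reader.readLine()) != null) {
            sb.append(line).append('\n');
        }
        return sb.toString();
    }

    // Issue a GET and return the response body; throws on HTTP error status.
    static String get(String url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("GET");
        conn.setConnectTimeout(5000); // placeholder timeouts
        conn.setReadTimeout(5000);
        try {
            int status = conn.getResponseCode();
            if (status >= 400) {
                throw new IOException("HTTP error: " + status);
            }
            return readAll(conn.getInputStream());
        } finally {
            conn.disconnect();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(get("http://httpbin.org/get"));
    }
}
```

This is fine for one-off requests; for connection pooling, retries, or anything stateful, one of the libraries above is a better fit.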
A:
I highly recommend Unirest:
Unirest.post("http://httpbin.org/post")
    .queryString("name", "Mark")
    .field("last", "Polo")
    .asString();
A:
Try jcabi-http, which acts as a wrapper of JDK HttpURLConnection or Apache HttpClient:
String body = new JdkRequest("http://www.example.com")
.uri().queryParam("id", "123").back()
.method(Request.GET)
.fetch()
.body();
Check this blog post for more information: http://www.yegor256.com/2014/04/11/jcabi-http-intro.html
|
qeip.com is a group of expert domain investors that provides digital asset consulting and brokerage services designed to connect businesses with domain owners. Our team's experience in corporate sales, in addition to domain investing, enables us to be the perfect intermediary to secure & sell domains for you! |
Tallahassee, Florida, city commissioners last Friday voted to approve a $2.6 million settlement in the wrongful death suit of a young woman killed in a drug sting when she agreed to be a confidential informant for police after being busted on marijuana and ecstasy charges. The payout comes even as a similar killing is shaking the Detroit area.
Rachel Hoffman (facebook.com)
Rachel Hoffman, 23, a recent Florida State University graduate, inhabited student drug circles, but after she was busted and agreed to become a snitch in 2008, Tallahassee police sent her out into an entirely different world. They set up a "buy-bust" sting, giving Hoffman $13,000 in marked bills to buy ecstasy, cocaine, and a gun. Instead of completing the transaction, the two men targeted in the sting shot and killed her, stole the money, her credit cards, and her car, and left her body in a ditch. The killers were later caught and are now serving life sentences. But Hoffman's parents sued after her death, claiming police were negligent in setting her up as an informant and putting her in harm's way. Jury selection in the case began two weeks ago, and the trial was set to begin Monday. After meeting with city attorneys, commissioners voted 3-2 to approve the settlement. The city itself will pay an initial $200,000 installment shortly, but under Florida law, the rest will only be paid after the Florida legislature passes a "claims bill," which could take years. The city's settlement isn't the only fallout from Hoffman's killing. After her death, her parents lobbied for, and the legislature passed, "Rachel's Law," which mandated reforms to protect informants. Under that law, police who work with informants are also required to get special training, must allow them to talk with an attorney before agreeing to anything, and cannot promise them reduced sentences if they cooperate. If Michigan had such a law, perhaps Shelley Hilliard would be alive today. The 19-year-old transgender woman was found murdered and mutilated on Detroit's east side in October after last being seen going to meet a man she had set up in a drug sting after being busted herself for marijuana. In a Thursday preliminary hearing for Qasim Raqib, the man charged with her killing, testimony revealed that police told her she could avoid arrest by helping to set up a drug deal.
She used her cell phone to call Raqib as police listened in on a speaker phone and told him she knew someone who wanted to buy $335 worth of marijuana and cocaine. He was arrested when he arrived at a local motel 20 minutes later. Further testimony suggested Raqib called Hilliard two days later and urged her to meet him. A taxi driver who took Hilliard on all her calls testified she said she was worried that Raqib would seek payback over the drug bust. The taxi driver testified that after dropping her off, she called him and sounded fearful, and he then heard a sound like the phone dropping to the ground before it went dead. Her body was found hours later. |
Later today, NASA astronaut Steve Swanson will lift off toward the International Space Station, not from the Space Coast of Florida or some other American spaceport, but from Kazakhstan on a Russian spacecraft. And unfortunately, the plan put forward by the Obama Administration to address this situation has been stymied by some in Congress.
Since the retirement of the Space Shuttle – a decision made in 2004 – the United States has been dependent on the Russians to get our astronauts to the International Space Station. Recognizing that this was unacceptable, President Obama has requested in NASA’s budget more than $800 million each of the past 5 years to incentivize the American aerospace industry to build the spacecraft needed to launch our astronauts from American soil. Had this plan been fully funded, we would have returned American human spaceflight launches – and the jobs they support – back to the United States next year. With the reduced level of funding approved by Congress, we’re now looking at launching from U.S. soil in 2017.
Budgets are about choices. The choice moving forward is between fully funding the President’s request to bring space launches back to American soil or continuing to send millions to the Russians. It’s that simple. The Obama Administration chooses to invest in America – and we are hopeful that Congress will do the same.
Over the past few years, two U.S. companies, Orbital Sciences and SpaceX, have demonstrated a new way of partnering NASA with the U.S. aerospace industry, providing more bang for the taxpayer buck in space. There have already been five private spacecraft visits to the ISS with the Dragon and Cygnus capsules – and another one is slated to launch in just a few days. At the end of last year, SpaceX launched a commercial satellite—a global industry worth nearly $190 billion per year—from Florida for the first time in four years. One study estimated that if NASA had procured this launcher and capsule using a more traditional contracting method, it could have been about three times the cost of this new public-private partnership approach.
NASA has already returned ISS cargo resupply missions to America using these two companies, bringing space launches and jobs back to our shores – and we are using the same model to send our astronauts to the space station. Three American companies – Boeing, Sierra Nevada, and SpaceX – are developing spacecraft and competing to replace the Space Shuttle and launch American astronauts within the next three years. We are betting on American innovation and competition to help lead us into a new era of space exploration. As President Obama has said, this is “a capture the flag moment for [U.S.] commercial space flight.”
Earlier this month, the President proposed a $17.5 billion fiscal year 2015 budget for NASA. This includes $848 million for NASA’s Commercial Crew Program and supports the Administration’s commitment that NASA be a catalyst for the growth of a vibrant American commercial space industry. It also keeps us on target to ending our reliance on the Russians for transporting our astronauts to and from space, and frees NASA to carry out even more ambitious missions beyond low-Earth orbit, including a mission to redirect and visit an asteroid and a human mission to Mars in the 2030s. The International Space Station—which the Obama Administration just extended to at least 2024—remains our springboard to going beyond the Moon and exploring deep space for the first time.
The American commercial space flight industry is boosting our economy and creating thousands of good paying jobs. More than a dozen states in the U.S. are trying to build spaceports, hoping to help foster the next job-creating, innovation-based industry in their areas.
With such strong economic potential, it is no wonder that this approach has garnered bipartisan support. House Majority Whip Rep. Kevin McCarthy (R-Calif.) recently noted, “Support for U.S. commercial space will lead to American astronauts flying on American-made rockets from American soil.” He added, “That is exceptionalism that both parties can get behind.”
It is important to note that NASA continues to cooperate successfully with Russia on International Space Station (ISS) activities. But even as the “space race” has evolved over the past 50 years from competition to collaboration with Russia, NASA is rightfully focused now more than ever on returning our astronauts to space aboard American rockets – launched from U.S. soil – as soon as possible. |
251 P.3d 990 (2011)
Carol CALVERT, Appellant,
v.
STATE of Alaska, DEPARTMENT OF LABOR & WORKFORCE DEVELOPMENT, EMPLOYMENT SECURITY DIVISION, Appellee.
No. S-13721.
Supreme Court of Alaska.
April 15, 2011.
*994 Carol Calvert, pro se, Soldotna, Appellant.
Erin Pohland, Assistant Attorney General, Anchorage, and Daniel S. Sullivan, Attorney General, Juneau, for Appellee.
Before: CARPENETI, Chief Justice, FABE, WINFREE, CHRISTEN, and STOWERS, Justices.
OPINION
CHRISTEN, Justice.
I. INTRODUCTION
Carol Calvert quit her job at a seafood processing plant and filed for unemployment insurance benefits. Her reasons for quitting included difficulties with transportation to work and personality conflicts with coworkers. The Department of Labor's unemployment insurance claim center determined that Calvert voluntarily left work without good cause; as a result, she was statutorily ineligible for unemployment benefits for the first six weeks of her unemployment, and her maximum potential benefits were reduced by three times the weekly benefit amount.
Calvert appealed to the Department of Labor's Appeal Tribunal where the assigned Hearing Officer found that transportation problems were the "precipitating event" in Calvert's decision to quit. The Hearing Officer concluded that, although Calvert's transportation problems may have provided a compelling reason to quit, Calvert had not "exhaust[ed] all reasonable alternatives prior to quitting," as is required in order to show good cause. The Hearing Officer affirmed the claim center's determination.
The Commissioner of the Department of Labor affirmed the Hearing Officer's decisions, as did the superior court. For the reasons explained below, we affirm.
II. FACTS AND PROCEEDINGS
Carol Calvert was a seasonal employee of Snug Harbor Seafoods in Kenai. She worked there for the first time during the summer of 2007. She was rehired on March 10, 2008, and quit on April 6, 2008. She filed for unemployment benefits on April 6.
The Department of Labor's (Department) unemployment insurance claim center sent Calvert a "Voluntary Leaving Statement" to complete and return in order to provide additional information about her separation from employment. Calvert returned the Voluntary Leaving Statement on May 9. In her explanation of why she quit, she cited conflicts that had begun during the 2007 season with a supervisor, Mike, and his girlfriend, *995 Hope (also a Snug Harbor employee). Calvert alleged that the conflict began when the plant manager asked Calvert to run the "gear department" and planned to move Hope out of her position in that department. Calvert also described her disappointment at the March 2008 departure of Brandi O'Reagan, who was the plant manager during the 2007 season and who had encouraged Calvert to return for the 2008 season. Calvert reported that Mike cut her hours immediately after O'Reagan left, allegedly in retaliation for Calvert's conflicts with Hope. Calvert was also concerned that Richard King, the plant manager who replaced O'Reagan, was aligned with Mike and Hope, and shared their hostility toward her.
Calvert also described transportation difficulties. She biked ten miles each way to get to Snug Harbor, so she required ample notice of the start times for her shifts. Work shifts were announced via a "hotline." In 2007, shifts were posted so there was typically a three-hour lead time. According to Calvert, in 2008 the hotline was updated less frequently, later in the day, and sometimes as little as half an hour in advance. Calvert argued, "[i]t became a cruel guessing game whether [she] should start for work on [her] bike." She found the local public transit agency to be "relentlessly uncooperative" in arranging transportation, her bike broke, and she anticipated increased difficulties associated with springtime road construction on her route to work.
In addition, Calvert expressed concern on her Voluntary Leaving Statement about the new plant manager's attitude toward workplace safety. During the 2007 season, a co-worker standing next to Calvert was "badly shocked" after water hit an electric box. When Calvert was asked later that season to coil an extension cord lying in several inches of water, she expressed her safety concerns in King's presence. According to Calvert, King "looked at [her], turned his back on [her], and has not spoken to [her] since," except when she approached him about her hours.
The Department's claim center contacted Calvert on May 13 to ask what "final incident" caused her to quit. Calvert reiterated the reasons cited in her Voluntary Leaving Statement:
There was really no final incident, just a compilation of everything that happened, the old branch manager quitting without telling me, and then problems with the new manager. I quit because my hours week [sic] being cut, and problems with co-workers, and transportation problems. . . . I talked to the owner my last day about them cutting my hours, and he didn't seem like he wanted to do anything about it. . . . I don't know if it was any one thing, just everything piled together.
On May 14, the claim center issued a notice of determination finding that Calvert "quit work at Snug Harbor Seafoods because [she was] unhappy with the new manager's supervisory style and apportionment of work." The claim center reasoned that because Calvert had not provided information demonstrating that the manager's actions were "hostile or discriminatory," she had not established "good cause for leaving." As a result, Calvert was denied waiting-week credit for the first week of unemployment and benefits for the next five weeks,[1] and her maximum potential benefits were reduced by three times the weekly benefit amount.[2]
On June 16, Calvert filed a Notice of Unemployment Insurance Appeal with the Department of Labor's Anchorage Appeal Tribunal. In her appeal, she argued: (1) the claim center did not establish that the work she left was "sufficient and suitable," and she was therefore not required to show good *996 cause for leaving it; (2) "good cause" for leaving the job existed in any case based on Calvert's insufficient work hours, transportation issues, lack of notice by the employer regarding work hours, and the patterns of "[w]orkplace violence" she experienced; and (3) the Department of Labor has a duty to better inform employees on the rules and requirements for unemployment insurance benefits. In addition to her brief, Calvert submitted a request for subpoenas to the Appeal Tribunal.
Hearing Officer Kathy A. Thorstad conducted the Appeal Tribunal hearing telephonically on July 29, 2008. The Hearing Officer observed that, under AS 23.20.379, a person who quits a job without good cause is ineligible for full unemployment insurance benefits. She also explained that the burden of showing good cause is on the employee seeking benefits and that a worker has "good cause" when she has a compelling reason for leaving work and has exhausted all reasonable alternatives to quitting.
During the hearing, the Hearing Officer heard testimony from Calvert, Snug Harbor plant manager King, and the president of Snug Harbor, Paul Dale. Calvert testified that she quit her job on April 6 because she was "upset all day long" and made her decision "based on the amount of stress . . . [and] based on the problems [she] was having with transportation." In response to the Hearing Officer's questions about her transportation difficulties, Calvert stated that she had not realized that biking to work "would be much more difficult" during March than it had been when she worked at Snug Harbor the previous summer. She noted that her bike broke down and weather conditions were bad in March and April, forcing her to rely on the Central Area Rural Transit System (CARTS). CARTS requires its passengers to book trips hours in advance, which was difficult for Calvert given her unpredictable work schedule and King's habit of posting the hours for the following day after 6:00 p.m., when CARTS had stopped answering its phones for the evening. According to Calvert, her only other transportation option was to take a taxi, which she stated would not be cost-effective given the amount of money she was making at her job.
The Hearing Officer also questioned Calvert about her work-related stress. Calvert stated that Mike cut her hours after O'Reagan's departure and told her that the decision had been sanctioned by Dale. Calvert noted that she had not asked Mike why her hours were being cut but later asked King, the plant manager, who told her he would "see that the work got done." She also recounted a subsequent conversation with Dale, who told her he had not authorized Mike to cut her hours. Calvert told the Hearing Officer that she did not directly ask Dale to address the situation with Mike, explaining that she "didn't feel that it was necessary" because she assumed someone in Dale's position would "look into it and . . . find out exactly what happened." Calvert also stated that she did not directly confront Mike after learning that Dale had not authorized the reduction in her hours.
When the Hearing Officer asked Calvert what efforts she made to keep her job, Calvert responded that she tried to get CARTS to provide her with transportation to work and that she asked King and Dale about the reduction to her hours, "and that's about it." The Hearing Officer asked Calvert whether she explicitly informed King or Dale that Mike's "messing with [her] hours was creating enough of a hardship that [she] would not be able to continue to work if it wasn't corrected"; Calvert confirmed that she did not. The Hearing Officer then asked Calvert whether it was her transportation difficulties or her problems with Mike that caused her to quit. Calvert answered, "It's both of them. . . . I don't know if one . . . had been taken away, if the other one could have been solved and vice versa." The Hearing Officer rephrased her question and asked, "If CARTS had not been giving you any difficulty on that day, would you still have quit your job?" Calvert replied, "I think I would have gone to work, yes. I think I would have given it another week. . . . I might have complained harder."
Plant manager King testified that he became aware of Calvert's problems with Mike after Mike reported having had a confrontation with Calvert about her hours. He claimed that Calvert's impression that she was being singled out for reduced hours was incorrect, and that the company was trying to "keep hours at a minimum" based on its "limited product" in April. He also stated that he had not known why Calvert quit her job until he saw the exhibits she presented at the hearing.
Dale testified that he did not recall the conversation Calvert reported having had with him about whether Mike had been authorized to cut her hours, but that he "wouldn't dispute it" and it "sound[ed] plausible." Dale added that, after Calvert quit, he asked King "on at least four occasions" if he had contacted her "to discuss her concerns regarding employment"; King reportedly told Dale that he had left messages for Calvert but had not heard back from her. In subsequent appeals, Calvert denied receiving any calls or messages, but she did not raise this point before the Hearing Officer.
The Hearing Officer affirmed the claim center's determination. Because she found that Calvert's transportation problems were the "precipitating event" in her decision to quit, the Hearing Officer did not address Calvert's conflicts with supervisors and co-workers.[3] The Hearing Officer determined that, although the loss of transportation can create a compelling reason for a worker to quit a job, Calvert had not "exhaust[ed] all reasonable alternatives prior to quitting." She found that "[t]he claimant did not discuss her transportation problems with the employer nor did she request a possible adjustment to her work schedule which would have enabled her to use the transit system and continue working." The Hearing Officer concluded that Calvert had not established that she had "good cause for quitting suitable work." The Hearing Officer ruled that Calvert was not entitled to waiting-week benefits under AS 23.20.379.
Calvert appealed the Hearing Officer's decision to the Commissioner of the Department of Labor. In her appeal, she argued that the Hearing Officer had confused the facts; that her phone bills contradicted King's claim that he had attempted to call her after she quit; that the Hearing Officer's reliance on Calvert's testimony about her decision to quit on April 6 was "sleight of hand"; and that there had been no reason to believe talking to her employer about her transportation problems would lead to an adjustment of hours or other resolution. On October 3, 2008, the Commissioner affirmed the Hearing Officer's decision, finding that any factual errors in the decision were not prejudicial and adopting the Hearing Officer's finding that Calvert did not give Snug Harbor a chance to adjust "by making known her problems in getting to work." The Commissioner upheld the Hearing Officer's conclusion that Calvert failed to show good cause for quitting.
Calvert subsequently appealed to the superior court, which held that "[t]here was substantial evidence to support the Hearing Officer's . . . conclusion that Calvert did not exhaust all reasonable alternatives before voluntarily quitting, a requirement for finding good cause." The superior court decision also found that the Department had adequately informed Calvert of the law regarding unemployment benefit eligibility. Calvert appeals.
III. STANDARD OF REVIEW
Calvert appeals the decision of the superior court, which affirmed the decisions of the Commissioner of the Department of Labor and the Appeal Tribunal for the Department of Labor. As we have noted, "when the superior court acts as an intermediate court of appeal, no deference is given to the lower court's decision"; rather, we "independently scrutinize directly the merits of the administrative determination."[4] In this case, our independent review has led us to substantial agreement with the superior court's carefully considered decision.
We apply four standards of review to administrative decisions. The "substantial evidence" test applies to questions of fact.[5] The "reasonable basis" test is used for questions of law involving agency expertise.[6] Where no expertise is involved, questions of law are reviewed under the "substitution of judgment" test.[7] Finally, the "reasonable and not arbitrary" test applies to review of administrative regulations.[8]
We have held that the question of whether a person was dismissed from her job for "misconduct" (one of the grounds for disqualification for waiting-week credits under AS 23.20.379(a)(2)) is a question of fact to be reviewed under the "substantial evidence" test.[9] Consistent with that holding, whether Calvert voluntarily quit suitable work for good cause is reviewed here as a question of fact.[10] In applying this test, we must determine whether there exists "such relevant evidence as a reasonable mind might accept as adequate to support a conclusion"[11]; the court "does not reweigh the evidence or choose between competing inferences."[12]
"[D]ue process and evidentiary arguments raise questions of law which we will review de novo."[13]
IV. DISCUSSION
A. The Unemployment Insurance Benefits Eligibility Framework
Under AS 23.20.379(a), a worker may be partially disqualified from receiving benefits if she "left . . . suitable work voluntarily without good cause" or was "discharged for misconduct." These rules are further detailed in the Department's Benefit Policy Manual (BPM).[14] The BPM clarifies that suitability and good cause are independent inquiries. "A worker who voluntarily leaves unsuitable work leaves with good cause"[15] and need not make a separate showing of good cause to quit.[16] The factors relevant to suitability are distinguishable from those affecting good cause: "[s]uitability is based on circumstances surrounding the job, and usually involves a comparison of the offered work with other similar work in the locality. . . . Good cause is based on personal circumstances surrounding the claimant . . . and not directly related to the conditions of the work."[17] And while a worker may leave unsuitable work without further efforts to remedy the situation, establishing good cause for leaving work that is otherwise suitable requires a two-step showing: not only must "[t]he underlying reason for leaving work . . . be compelling," but "[t]he worker must exhaust all reasonable alternatives before leaving the work."[18]
B. The Hearing Officer Correctly Determined That Calvert Left Suitable Work.
1. The Hearing Officer was required to analyze the suitability of Calvert's job at Snug Harbor.
In finding that Calvert failed to establish good cause for quitting suitable work, the Hearing Officer did not explicitly discuss whether Calvert's job at Snug Harbor was suitable. Calvert argues that the Hearing Officer improperly "abandoned the issue of suitable work" and therefore did not correctly analyze whether she was required to show good cause for leaving. We agree that the Hearing Officer was required to analyze the suitability of Calvert's work. But we hold that the Hearing Officer's decision implicitly found that Calvert's work was suitable.
Alaska Statute 23.20.385 provides that the suitability of work depends on a range of factors, including whether wages, hours, or other conditions of work are substantially less favorable than prevailing conditions in the locality; the degree of risk to a claimant's health, safety, and morals; the claimant's physical fitness for the work; the distance of the work from the claimant's residence; "and other factors that influence a reasonably prudent person in the claimant's circumstances."[19] Although suitability of work may not be presumed, it need not be analyzed in all cases.[20] Suitability of work must be examined if: (1) a worker objects to the suitability of wages, hours, or other "conditions of work"; (2) a worker specifically raises the issue of suitability of work; or (3) facts appear during investigation of a worker's claim that put the Department on notice that wages or other conditions of work may be substantially less favorable than prevailing conditions for similar work in the locality.[21]
Calvert raised the issue of suitability in her initial appeal to the Appeal Tribunal, arguing that the Department "makes no claim of sufficient and suitable work from my employer." She did not provide explicit justification for the claim that her work at Snug Harbor was unsuitable, but elsewhere in her appeal and in her initial Voluntary Leaving Statement, Calvert did identify a number of concerns that might be considered objections to "conditions of work" sufficient to place the Hearing Officer on notice that conditions were potentially unfavorable. Specifically, Calvert cited safety concerns, "workplace violence," personality conflicts, and difficulties with transportation to work.
The Department contends that none of the issues Calvert raised other than transportation could render her work unsuitable because they were not found to be the "precipitating event" that led Calvert to quit. We disagree with this reasoning. The "precipitating event" analysis described in the BPM identifies which of a worker's reasons for leaving is to be analyzed for good cause: "good cause depends on the precipitating event and the other reasons [for quitting] are irrelevant."[22] By contrast, the determination of whether work is unsuitable is a separate inquiry that is not similarly limited; if work is unsuitable, a worker has good cause to leave it without having to make a separate showing.[23] The fact that a circumstance did or did not precipitate a worker's decision to quit is not relevant to whether the circumstance may render work unsuitable.
The Department also argues that none of the issues Calvert raised can properly be considered "conditions of work" as the term is used in the BPM. It contends that this term should be interpreted to refer not to work conditions generally, but to "an essential aspect of the job."[24] The Department argues that workplace hostility and transportation problems of the type Calvert claims are not properly categorized as "essential aspects" of a job and should instead be considered under the good cause rubric.[25]
We find this argument convincing as it applies to Calvert's transportation problems. The BPM provides that "[w]ork that is unreasonably distant from a worker's residence is unsuitable and the worker has good cause for leaving it."[26] The BPM illustrates this rule with a case involving a claimant whose employer assigned him to work in a community 118 miles from his home; the Commissioner found this to be an "unreasonable commuting distance" and concluded that the job was unsuitable, giving the claimant good cause for quitting.[27] But as the Department argues, there is "a subtle but logical distinction" between distance to work and personal factors affecting a commute; "[p]ersonal circumstances that render a reasonable, customary commute no longer feasible cannot make a job unsuitable." Here, Calvert's ten-mile commute was not "unreasonably distant" by any objective measure; she had easily made the commute by bike during the previous summer and would have had no problem getting to work at other times of year if her means of transportation had been less limited. And unlike the case described in the BPM, Calvert's employer had not asked her to relocate or done anything else to change the distance she had to travel to get to work. Her difficulties stemmed from personal circumstances, not from an inherent characteristic of her job at Snug Harbor; they did not give rise to a question of suitability.
In contrast with Calvert's transportation issues, the workplace hostility and safety breaches Calvert described in her Voluntary Leaving Statement and Appeal Tribunal brief may be considered "circumstances surrounding the job," rather than merely "personal circumstances surrounding the claimant."[28] By mentioning suitability and raising workplace hostility and safety issues, Calvert objected to conditions of her work, raised the issue of suitability, and put the Department on notice that conditions at Snug Harbor might be less favorable than standard conditions in the locality. Thus, the Hearing Officer was obligated to analyze the suitability of Calvert's job before determining that she had failed to show good cause for quitting.
2. Calvert did not show that her job at Snug Harbor was unsuitable.
Although the Hearing Officer did not explicitly address the question of suitability in reviewing Calvert's appeal, her determination that Calvert "did not establish that she had good cause for quitting suitable work" implicitly concluded that Calvert's work was suitable. Upon independent review of the evidence, we agree with the Hearing Officer's implied finding that Calvert did not show that her job at Snug Harbor was unsuitable.
Calvert mentioned "workplace violence" in her written appeal to the Hearing Officer, but she neither explained what she meant by this term nor provided any evidence of "physical violence" at Snug Harbor. Construing this phrase in light of Calvert's oral testimony, we understand her to refer to workplace hostility and to the personality conflicts she had with Mike and plant manager King. We hold that the level of hostility Calvert describes at Snug Harbor, while no doubt uncomfortable, did not rise to the level of unsuitability. Calvert's personality conflicts did not pose a risk to her "health, safety, and morals."[29] And she does not claim she experienced threats or even serious verbal altercations. Rather, she describes tensions of the type that commonly develop in a workplace as a result of poor communication and the suspicion that coworkers are receiving preferential treatment based on personal relationships. Although certainly not ideal, these workplace conditions did not render Calvert's job "unsuitable."
Nor does Calvert's description of unsafe practices indicate that her work was unsuitable. The incident Calvert described in her Voluntary Leaving Statement took place in 2007. The BPM provides that "[i]f the conditions of work violate a state or federal law concerning wages, hours, safety, or sanitation, the worker has good cause for leaving, regardless of the length of time that the worker has worked under the objectionable condition."[30] Therefore, the fact that Calvert continued to work at Snug Harbor the following year does not, by itself, imply that the work was suitable. But Calvert did not describe any safety-related incidents in 2008 or offer evidence that unsafe practices were an ongoing condition of work. The single 2007 incident cited by Calvert does not provide evidence of health and safety risks sufficient to demonstrate unsafe conditions rendering work at Snug Harbor unsuitable in 2008.
C. Calvert Did Not Show Good Cause For Voluntarily Leaving Work.
If the work a claimant has left is determined to be suitable, that claimant's eligibility for unemployment insurance benefits depends on whether she left for good cause.[31] To show good cause, a worker must demonstrate that the underlying reason for leaving work was compelling, and that the worker exhausted all reasonable alternatives before leaving the work.[32] The burden of demonstrating both elements of good cause is on the worker.[33] The BPM provides that "[a] compelling reason is one that causes a reasonable and prudent person of normal sensitivity, exercising ordinary common sense, to leave employment."[34]
The BPM further notes that "[a] reasonable and prudent worker sincerely interested in remaining at work attempts to correct any condition or circumstance that interferes with continued employment."[35] In order to exhaust all reasonable alternatives, the worker must notify the employer of the problem and request adjustment; the worker must also bring the problem to the attention of someone with the authority to make the necessary adjustments, describe the problem in sufficient detail to allow for resolution, and give the employer enough time to correct the problem.[36] At the same time, "a worker is not expected to do something futile or useless in order to establish good cause for leaving employment."[37]
We agree with the Hearing Officer that Calvert failed to exhaust alternatives to quitting and therefore did not demonstrate good cause for leaving work.
1. We analyze both transportation problems and workplace hostility as potential precipitating causes.
The Hearing Officer identified transportation problems as the precipitating cause of Calvert's decision to quit. The BPM provides that, where a worker gives multiple reasons for quitting, "the one reason that was the precipitating event is the real cause of the quit, with the other reasons being incidental. In such cases, good cause depends on the precipitating event and the other reasons are irrelevant."[38] In other words, whether a worker has shown good cause for quitting is to be analyzed in reference only to the event that directly led the worker to quit and not to any other events or circumstances.
Throughout her application for unemployment benefits and the subsequent appeals process, Calvert identified two major factors, transportation obstacles and workplace hostility, in her decision to quit. During the hearing on her administrative appeal, the Hearing Officer asked Calvert whether it was her transportation difficulties or personality conflicts that caused her to quit. Calvert answered, "It's both of them . . . I don't know if one . . . had been taken away, if the other one could have been solved and vice versa." The Hearing Officer then asked, "If CARTS had not been giving you any difficulty on that day, would you still have quit your job?" Calvert replied, "I think I would have gone to work, yes. I think I would have given it another week. . . . I might have tried the new schedule. I might have complained harder." Based on this testimony, the Hearing Officer concluded that the precipitating event in Calvert's quitting was the loss of transportation, and she therefore limited her good cause analysis to transportation issues.
In her brief to our court, Calvert objects that the Hearing Officer took her words out of context, focusing on "[t]he one comment [she] elicited which was speculative, retrospective, and in conflict with prior testimony and actions." While there is no indication that the Hearing Officer's question was intentionally designed to "trick" Calvert (indeed, the question's structure simply reflected the BPM's emphasis on determining which event led a worker to "quit at [a] particular time"[39]), we nonetheless acknowledge that the Hearing Officer's question may have elicited a different answer than Calvert would have provided in response to an alternatively worded or more open-ended inquiry.[40] Therefore, we analyze both transportation problems and workplace hostility to determine whether Calvert demonstrated good cause for leaving work on the basis of either issue.
2. Calvert did not show good cause for leaving work on the basis of her transportation problems.
a. Calvert's transportation problems did provide a compelling reason to quit.
The first element of good cause requires that a worker have a "compelling reason" for leaving work.[41] In contrast to the requirements for determining suitability, "[t]here is no requirement that [a] worker's reasons for leaving work be connected with the work. Either work-connected or personal factors may present sufficiently compelling reasons."[42] 8 AAC 85.095 provides a limited list of factors the Department may consider in determining the existence of good cause, including those factors identified in AS 23.20.385(b). One such factor is "distance of . . . available work from the claimant's residence."[43] The BPM clarifies that, for purposes of determining good cause, "[t]he actual mileage from the worker's residence to work is never the determining factor in establishing compelling reasons. It is the time and expense of commuting which must be considered."[44]
The Hearing Officer concluded in her decision that "[t]he loss of transportation can create a compelling reason for a worker to quit [his or her] job." We agree that Calvert demonstrated that her transportation difficulties gave her a compelling reason to quit. The "actual mileage" from Calvert's home to Snug Harbor was not unusual, but the "time and expense" involved in her commute were significant: her bike was gradually breaking down to the point where her commute took an hour and a half each way, CARTS would not allow her to schedule open-ended trips or make last-minute arrangements to fit her work schedule, and taxi fare was prohibitively expensive. These facts provide substantial evidence that meets the standard for showing a compelling reason to quit: left unresolved, they would cause a "reasonable and prudent person of normal sensitivity . . . to leave employment."[45]
b. Calvert did not exhaust all reasonable alternatives before leaving work due to transportation problems.
The Hearing Officer noted in her findings of fact that Calvert had never told her supervisor that her work schedule, and particularly the lack of notice regarding working hours, was creating transportation problems for her. The Hearing Officer also found that Snug Harbor made repeated attempts to contact Calvert after she quit and was not aware of her reasons for quitting until the hearing. The Hearing Officer concluded that, because Calvert did not discuss her transportation problems with her employer or request an adjustment to her work schedule, she did not exhaust all reasonable alternatives prior to quitting and therefore left without good cause. We agree.
Calvert argues that the primary alternative envisioned by the Hearing Officer, i.e., talking to her supervisors and seeking adjustments to her schedule, was "neither reasonable nor proved to be viable." First, she suggests that the Hearing Officer's failure to investigate why Snug Harbor's representatives (presumably King and Dale) "did not talk to [Calvert] when she brought problems to them" casts doubt on whether her employers would have been willing to accommodate her requests. Second, she contends that "her work schedule was reliant upon the schedule of everyone else" and was therefore not amenable to adjustment.[46] Finally, Calvert contends that the Hearing Officer erroneously relied upon Dale's unreliable "hearsay" report that King had attempted to contact Calvert several times after she quit, a claim that Calvert disputed in earlier stages of the proceeding.[47]
An employer's limited authority or expressed refusal to accommodate an employee can establish that requesting an adjustment to work conditions would be futile: "[i]f the employer has already made it known that the matter will not be adjusted to the worker's satisfaction, or if the matter is one which is beyond the power of the employer to adjust, then the worker is not expected to perform a futile act."[48] That does not appear to be the situation here. King and Dale apparently had the authority to assign work hours and adjust employee schedules. Even taking into account the limited flexibility of hours in the gear department, giving Calvert more advance notice of her hours would have significantly mitigated her transportation problems; alternatively, her supervisors may have been able to transfer her to one of the other departments at Snug Harbor where she had worked in the past.
Moreover, neither King nor Dale (or any other Snug Harbor employee) had explicitly "made it known"[49] that they would not accommodate Calvert. Calvert's claim that her employers "did not talk to [her] when she brought problems to them" seems to refer to King's and Dale's failure to follow up on her inquiries about Mike cutting her hours. But as Calvert acknowledged to the Hearing Officer, she did not actually "ask [Dale] to do anything" to address her problems with Mike or the reduction in her hours, on the assumption that to do so would be "presumptuous." When the Hearing Officer asked if Calvert told "Richard [King], Paul [Dale] or Mike that if they didn't stop messing with [her] hours, [she was] going to quit," Calvert said she had not. Nor did she ever raise the issue of her transportation difficulties. There is no indication that Calvert's inquiries about her hours were framed as complaints demanding a response, or that King and Dale would have ignored more direct requests for assistance. Although she reports general "hostility," Calvert presented no evidence beyond her own subjective belief to suggest that her employers' attitudes toward her would make them unwilling to help resolve her transportation problems had they known she was otherwise likely to quit.[50]
Calvert's claim that the Hearing Officer relied on "hearsay" to find that she had not exhausted her alternatives is also unconvincing. First, Calvert raised this argument for the first time in the superior court; we therefore consider it to have been waived.[51] Second, even if this argument had not been waived, the hearsay claim would be misplaced. Calvert did not object at the Appeal Tribunal hearing to Dale's testimony that King had reported attempting to call Calvert after she quit. "In the absence of a hearsay objection, hearsay evidence is competent evidence which may be considered."[52] Moreover, "[t]he strict rules of evidence governing admissibility of hearsay in judicial proceedings do not apply to administrative hearings, and [this court] will not reverse an administrative judgment based on hearsay unless the hearsay was inherently unreliable or jeopardized the fairness of the proceedings."[53] Here, the admission of Dale's testimony does not appear to have "jeopardized the fairness" of Calvert's appeal proceedings. It is not clear from the Hearing Officer's decision that Dale's testimony was, in fact, used to lay a foundation for the viability of the proposed alternative. The Hearing Officer stated in her Findings of Fact that "[a]t no time did the claimant approach the supervisor and explain that the new work schedule created transportation issues for her . . . . [T]he employer made repeated attempts to contact [Calvert] in an attempt to discover why she had not returned." This context suggests that the Hearing Officer relied on Dale's testimony primarily in support of the general finding that Calvert never informed her employers of her transportation problems (a finding that is amply supported by other evidence in the record, including Calvert's own testimony), rather than as evidence of Dale's and King's willingness to accommodate Calvert.
We hold that there is substantial evidence in support of the Hearing Officer's finding that Calvert failed to exhaust all reasonable alternatives to quitting on the basis of transportation problems and therefore did not show good cause for leaving suitable work.
3. Calvert did not show good cause for leaving work on the basis of workplace hostility.
a. Calvert's personality conflicts did not provide a compelling reason to quit.
Under the BPM, dislike for a fellow employee may only be considered good cause for leaving work if "[t]he worker establishes that the actions of the fellow worker subjected the worker to abuse, endangered the worker's health, or caused the employer to demand an unreasonable amount of work from the worker."[54] Mike's behavior toward Calvert did not endanger her health and, if anything, decreased the amount of work demanded of her (although with correspondingly decreased wages). Nor did it rise to the level of "abuse" as the Department has used the term in the past. Cases in which the Commissioner has found dislike of a fellow employee to be a compelling reason for quitting involve much more serious conflicts, such as threats of physical violence to the claimant.[55] To the extent that the change in Calvert's schedule motivated her decision to quit, it also did not constitute a compelling reason. "A change in [a] worker's hours, shifts, or days of work initiated by the employer is seldom a sufficient breach of the contract of hire to give a compelling reason to quit."[56] A reduction in hours is rarely considered compelling for purposes of establishing good cause: "a worker who leaves work merely because the work is less than full-time has voluntarily left work without good cause" and a "reduction in hours is not good cause for voluntarily leaving work" even where that reduction results in reduced earnings.[57]
b. Calvert did not exhaust all reasonable alternatives before leaving work due to personality conflicts.
Even if the personality conflicts Calvert describes were to qualify as "abuse," she made only limited efforts to remedy the situation. As we have already noted, Calvert spoke to King and Dale about Mike reducing her hours but never directly asked either of them to address the issue or, as far as the record indicates, described the full extent of her conflicts with Mike. By her own admission, Calvert never explicitly sought a remedy for her problem. She contended that Dale and King "should have known . . . what [she] was saying to them without [her] having to challenge them to do something about [her] problem." But without more information about precisely what Calvert said to her supervisors, there was little basis for a finding that they should have guessed or intuited what Calvert failed to articulate. Calvert was required to take more active steps to exhaust her alternatives before quitting; because she failed to do so, we agree with the Hearing Officer that she did not show good cause for leaving work on the basis of personality conflicts.
D. Calvert Received A Fair Hearing.
Calvert makes a number of arguments relating to the procedural adequacy of her administrative hearing. We review these arguments de novo.[58] We note at the outset that Calvert has waived a number of her due process arguments by not raising them earlier in the appeals process. For example, she argues that the Hearing Officer "[n]eglected the fair hearing principle of discovery to claimant by employer" and "declined to obtain discovery from the employer, disregarding claimant's request for it." She also contends that "[d]ue [p]rocess requires notice of evidence to be used against claimant and an appropriate amount of time to develop a challenge and answer to any information from any source" and claims that she did not have sufficient notice of the evidence to be presented at the Appeal Tribunal hearing. Because these arguments were raised for the first time in Calvert's appeal to the superior court, rather than in her initial post-hearing appeal to the Commissioner, we consider them waived.[59] Similarly, Calvert's argument that the Hearing Officer improperly admitted hearsay evidence is waived because she raised it for the first time on appeal to the superior court.
1. Calvert did not demonstrate actual bias by the Hearing Officer.
Calvert alleges that the hearing was biased, claiming that "[t]he hearing officer picked what she wanted out of the evidence and used it to try to prove her point" and that "[t]he reasonings and conclusions of the Tribunal were not fairly and impartially supported by the record." But as the Department notes in its brief, administrative officers are "presumed to be honest and impartial until a party shows actual bias or prejudgment."[60] To show the bias of a hearing officer, a party must demonstrate that the hearing officer "had a predisposition to find against a party or that the hearing officer interfered with the orderly presentation of the evidence."[61] This is a demanding standard. The United States Supreme Court has found a "probability of actual bias. . . too high to be constitutionally tolerable" in cases where "the adjudicator has a pecuniary interest in the outcome" or "has been the target of personal abuse or criticism from the party before him,"[62] but not where a decisionmaker merely performs combined investigative and adjudicative functions.[63] Similarly, we have held that a hearing officer's failure to disclose his position as an AFL-CIO president during a worker's compensation hearing was insufficient to show actual or probable bias.[64]
Calvert has not presented any evidence that the Hearing Officer was predisposed to find against her. The assertion that the *1007 Hearing Officer selected evidence to support her findings is insufficient to show actual bias. Nor does the hearing transcript suggest that the Hearing Officer interfered in any way with the presentation of evidence. The Hearing Officer's questions were thorough and objective; the only evidence she excluded was related to Calvert's efforts to find work after quitting at Snug Harbor, an issue irrelevant to the question of whether Calvert quit suitable work with good cause. Calvert failed to demonstrate bias sufficient to overcome the presumption of the Hearing Officer's impartiality.
2. The superior court did not improperly reweigh evidence.
In applying the substantial evidence test to review an administrative determination, a reviewing court may not reweigh evidence.[65] Calvert argues that "[t]he Superior Court erred when it improperly reweighed evidence concerning transportation: Transportation problems DID present insurmountable difficulties. . . . This would affect the issue of suitable work." Though the meaning of this argument is somewhat unclear, Calvert seems to be referring to the superior court's conclusion that, notwithstanding Calvert's expressed concerns regarding transportation (among other issues), "the record does not support a finding that the work at Snug was unsuitable." But this statement does not suggest that the superior court "reweighed" evidence. The superior court clearly indicated that its conclusion regarding suitability was based on the record created by the Hearing Officer. And although the Hearing Officer did not explicitly address the question of suitability, her factual findings provide sufficient evidence to support the conclusion that Calvert's work was suitable. The superior court presumably relied on the Hearing Officer's findings for its conclusion that "there is no evidence that the work was inconsistent with Calvert's physical capability, training, experience, earning capacity, or skill" and that the work was therefore suitable; this did not constitute a reweighing of the evidence.
E. The Department Of Labor Did Not Fail To Inform Calvert Regarding Eligibility For Unemployment Insurance Benefits.
Calvert argues that "[t]he Department of Labor & Workforce Development neglected [its] duty" by failing to "inform the public or claimant adequately concerning its requirements for separation from employment regarding eligibility for full benefits before separation takes place." This argument reiterates Calvert's claim in her brief to the Appeal Tribunal that "[t]he DOL neglects to make known its presence [and] expectations. . . regarding [unemployment insurance] benefits"[66] and that, although "[a] reasonably prudent person would believe they had been completely informed by orientation, handbook, practices, and notices posted," the materials distributed to new employees do not in fact provide sufficient information on unemployment insurance eligibility.
To the extent Calvert is arguing that she lacked access to the policies governing unemployment benefits eligibility, we find her argument unconvincing. The Department's Wage and Hour Information brochure, which Calvert submitted as an exhibit in her appeal to the Commissioner, includes detailed information about the relevant statutes and regulations as well as directions for accessing past unemployment insurance appeals decisions online and reviewing the BPM at Department offices. And as the Department notes, the BPM is also available online. The "Voluntary Leaving Statement" that Calvert filled out and submitted after she filed for unemployment benefits gives notice to claimants that they must show "reasons for quitting . . . so compelling" as to leave "no reasonable alternative." The Hearing Officer also explained the eligibility requirements to Calvert at the start of the *1008 Appeal Tribunal hearing. As a result, Calvert had notice of the Department's basic eligibility requirements and directions for accessing additional information, both prior to her Appeal Tribunal Hearing and throughout the appeals process.
To the extent Calvert contends that the Department had a duty to inform her, while she was working, of how she might quit her job and maintain her eligibility for unemployment benefits, we find this argument equally unavailing. We have held that "[a]s a general rule, people are presumed to know the law" without being specifically informed of it.[67] The United States Supreme Court has required explicit notice of hearing procedures only where "the administrative procedures at issue were not described in any publicly available document."[68]
All of the statutes, regulations, and internal policy documents governing eligibility for unemployment insurance benefits are "publicly available documents" that are easily accessible and identified in Department-published materials, such as the Wage and Hour Information brochure and the unemployment insurance section of the Department's website.[69] Workers may be presumed to be familiar with the provisions of those documents. In this case, Calvert has not demonstrated that any circumstance prevented her from informing herself about the Department's eligibility requirements before she left work. The Department did not neglect a legal duty or deny Calvert due process by not informing her of its policies more directly.
V. CONCLUSION
We AFFIRM the decision of the superior court. Calvert did not demonstrate good cause for leaving suitable work voluntarily.
NOTES
[1] Under AS 23.20.379(a)(1), "[a]n insured worker is disqualified for waiting-week credit or benefits for the first week in which the insured worker is unemployed and for the next five weeks of unemployment" if the worker left the "last suitable work voluntarily without good cause." "Waiting-week credit" refers to credit received for the initial week of unemployment, during which the worker does not immediately receive unemployment insurance benefits but still accrues benefits eligibility. See Alaska Department of Labor, Frequently Asked Questions: Filing for Unemployment Insurance, available at http://labor.state.ak.us/esd_unemployment_insurance/faq.htm.
[2] AS 23.20.379(c).
[3] The Department of Labor's Benefit Policy Manual (hereinafter BPM) provides that "A worker may give two or more reasons for quitting. However, the one reason that was the precipitating event is the real cause of the quit, with the other reasons being incidental. In such cases, good cause depends on the precipitating event and the other reasons are irrelevant." Department of Labor, BPM at VL 385-2 (Nov.2009), available at http://labor.state.ak.us/esd_unemployment_insurance/ui-bpm.htm.
[4] Tesoro Alaska Petroleum Co. v. Kenai Pipe Line Co., 746 P.2d 896, 903 (Alaska 1987); see also Handley v. State, Dep't of Revenue, 838 P.2d 1231, 1233 (Alaska 1992).
[5] Handley, 838 P.2d at 1233 (citing Jager v. State, 537 P.2d 1100, 1107 n. 23 (Alaska 1975)).
[6] Id.
[7] Id.
[8] Id.
[9] Smith v. Sampson, 816 P.2d 902, 904 (Alaska 1991) (applying the substantial evidence test to the "factual determination" of whether an employee was dismissed from his job for "misconduct" for purposes of AS 23.20.379); see also Risch v. State, 879 P.2d 358, 363 n. 4 (Alaska 1994).
[10] Though the Hearing Officer's Appeal Tribunal Decision separates its "Findings of Fact" from its "Conclusion," the conclusion section includes the Hearing Officer's finding that Calvert quit without good cause. The Hearing Officer's conclusion appears to be entirely fact-based; the determinative factual question was "whether the claimant exhausted all reasonable alternatives prior to quitting her job."
[11] Storrs v. State Medical Bd., 664 P.2d 547, 554 (Alaska 1983) (citing Keiner v. City of Anchorage, 378 P.2d 406, 411 (Alaska 1963)).
[12] Id. (citing Interior Paint Co. v. Rodgers, 522 P.2d 164, 170 (Alaska 1974)).
[13] Smith, 816 P.2d at 904; see also Childs v. Kalgin Island Lodge, 779 P.2d 310, 313 (Alaska 1989) (holding that findings of the Alaska Workers' Compensation Board will not be vacated when supported by substantial evidence, but "independent review of the law is proper" where "the Board's decision rests on an incorrect legal foundation").
[14] The BPM fulfills 8 AAC 85.360's mandate that "the department . . . maintain a policy manual interpreting the provisions of AS 23.20 and this chapter." We have looked to the BPM to interpret AS 23.20 in the past, and continue to do so here. See, e.g., Wescott v. State, Dep't of Labor, 996 P.2d 723 (Alaska 2000) (adopting the BPM's criteria for determining good cause and citing the BPM throughout). The Wescott opinion refers to the BPM as the "Precedent Manual." The BPM is divided into eight sections: Able & Available, Evidence, Labor Dispute, Miscellaneous, Misconduct, Suitable Work, Total & Partial Unemployment, and Voluntary Leaving. Content within each section is indicated by a combination of the abbreviated section title (e.g., "VL" for Voluntary Leaving, "EV" for Evidence) and a numbered subsection (e.g., VL 385-2). Individual subsections may have different dates based on their most recent updates.
[15] BPM at VL 425-1 (Nov.2009).
[16] Id. at VL 5-2 (Apr.2004); see also Wescott, 996 P.2d at 726.
[17] BPM at SW 5-4 to 5-5 (Aug.2008).
[18] Id. at VL 210-1 (Oct. 1999). The language of the statute and the BPM, while defining suitability and good cause as separate inquiries, creates significant overlap in the criteria applicable to each. AS 23.20.385(b), for example, identifies a single set of factors to be used "[i]n determining whether work is suitable for a claimant and in determining the existence of good cause for leaving or refusing work." Similarly, the BPM includes statements such as "work that is unreasonably distant from a worker's residence is unsuitable, and the worker has good cause for leaving it," id. at VL 150-2 (Nov.2010), followed by a discussion addressing distance from work primarily in terms of good cause. As a result, it can be difficult to draw a clear line between the two concepts in practice.
[19] AS 23.20.385(a)-(b); see also BPM at VL 425-1 to 425-3 (Nov.2009).
[20] BPM at EV 190.3-1 (July 1999).
[21] Id. at VL 425-1 (Nov.2009); see also id. at EV 190.3-1 to 190.3-2 (July 1999).
[22] See id. at VL 385-2 (Nov.2009).
[23] Id. at VL 5-2 (Apr.2004); see also Wescott, 996 P.2d at 726.
[24] In support of this reading, the Department cites the fact that a suitability inquiry does not require a claimant to show that she exhausted reasonable alternatives before leaving a job, "presumably because an issue that makes work unsuitable is a fundamental attribute of the job itself" and cannot be easily changed. By contrast, a worker who quits for "good cause" unrelated to suitability is required to demonstrate that she explored alternatives, which implies that good cause is determined by factors that are at least potentially within the worker's power to control or adjust.
[25] The Department's brief does not address the safety concerns raised by Calvert in her Voluntary Leaving Statement.
[26] BPM at VL 425-2 (Nov.2009).
[27] Id. (citing Appeal Tribunal Decision, Docket No. 99-1253, September 2, 1999).
[28] See BPM at SW 5-4 to 5-5 (Aug.2008).
[29] Id. at VL 425-1 (Nov.2009).
[30] Id. at VL 425-3 (Nov.2009).
[31] Id. at VL 210-1 (Oct. 1999).
[32] Id.
[33] Id. at VL 5-3 (Apr.2004); id. at EV 5-1 (July 1999); see also Wescott v. State, Dep't of Labor, 996 P.2d 723, 727 (Alaska 2000) (citing Reedy v. M.H. King Co., 128 Idaho 896, 920 P.2d 915, 918 (1996)).
[34] BPM at VL 210-1 (Oct. 1999).
[35] Id. at VL 210-2 (Oct. 1999).
[36] Id. at VL 160-2 (Nov.2010).
[37] Id. at VL 210-2 (Oct. 1999). The BPM quotes the Commissioner of Labor: "The `good cause' test only requires a worker to exhaust all reasonable alternatives. An alternative is reasonable only if it has some assurance of being successful. . . . [T]here must be a foundation laid that the alternative does have some chance of producing that which the employee desires." Id. at VL 160-1 (Nov.2010).
[38] Id. at VL 385-2 (Nov.2009); see also id. at VL 385-3 (Nov.2009) ("[T]he precipitating event is the reason for the separation, although the combined effect of the reasons may be taken into account in determining good cause.").
[39] Id. at VL 385-2 (Nov.2009).
[40] For instance, as Calvert pointed out in her appeal brief to the Commissioner of Labor, "[t]he Hearing Officer did not ask the opposite question: Would you have quit if your job security and agreement with your employer had not been tampered with [as a result of personality conflicts]?"
[41] BPM at VL 210-1 (Oct. 1999).
[42] Id.
[43] AS 23.20.385(b).
[44] Id. at VL 150-2 (Nov.2010). The BPM also explains that "if the time and expense of commuting is customary in the worker's occupation and locality, the worker generally does not have good cause." Id.
[45] Id. at VL 210-1 (Oct. 1999).
[46] Calvert worked in the "gear" department, where she was responsible for ensuring that other employees' lab coats and other specialized clothing were cleaned daily and ready to be handed out at the start of the work day. This required her to get to Snug Harbor an hour before most employees started work to "start coffee and . . . make sure that there was enough gear to hand out and . . . everything was ready to go."
[47] In her appeal to the Commissioner of the Department of Labor, Calvert argued that her phone bill did not reflect that she had received any calls from King during the relevant period. And in her appeal to the superior court, she contended that "[t]he claim by employer of attempting to phone claimant four times is unsupported HEARSAY and there is a preponderance of credible evidence (my phone bill and written statements) in opposition to that claim."
[48] BPM at VL 160-3 (Nov.2010).
[49] Id.
[50] At least one prior decision of the Commissioner of Labor has held that where an employer's actions established a "pattern of abuse and hostility" toward his employee, it would have been futile for the employee to confront the employer about his offensive behavior. See Decision of the Comm'r, Docket No. 98-0321, April 30, 1998. But the situation in Docket No. 98-0321 is distinguishable from the present case. There, the employer was the sole owner of the business, was verbally abusive, and had proven hostile to previous attempts by the employee to resolve other problems. Here, Calvert does not allege a relationship with her employers of such open hostility. She also had multiple levels of authority within Snug Harbor management from whom to seek assistance, and she does not appear to have been refused accommodation (upon direct request) on prior occasions.
[51] See, e.g., Wagner v. Stuckagain Heights, 926 P.2d 456, 459 (Alaska 1996).
[52] Smith v. Sampson, 816 P.2d 902, 907 (Alaska 1991).
[53] Button v. Haines Borough, 208 P.3d 194, 201 (Alaska 2009) (internal quotation marks and citations omitted).
[54] BPM at VL 515.4-1 (Nov.2009). The worker must also "present[] the grievance to the employer and allow[] the employer an opportunity to adjust the situation." Id. This requirement is addressed below.
[55] See, e.g., Decision of the Comm'r, Docket No. 95-1484, August 1, 1995 (implying that verbal threats by a fellow employee gave worker "adequate reason" for leaving work, though still finding an absence of good cause based on the worker's failure to attempt to remedy the situation); Appeal Tribunal Decision, Docket No. 98-0392, March 20, 1998 (finding that a worker had good cause to quit after a fellow employee threatened to get in a gun fight with him, and the worker reported the incident to his employer).
[56] BPM at VL 450.05-5 (Nov.2009).
[57] Id. at VL 450.4-1 (Nov.2009) (noting that a worker whose hours are reduced to part-time "is able to seek other work without leaving the existing employment").
[58] Smith v. Sampson, 816 P.2d 902, 904 (Alaska 1991); see also Childs v. Kalgin Island Lodge, 779 P.2d 310 (Alaska 1989).
[59] See, e.g., Wagner v. Stuckagain Heights, 926 P.2d 456, 459 (Alaska 1996).
[60] AT & T Alascom v. Orchitt, 161 P.3d 1232, 1246 (Alaska 2007) (citing Bruner v. Petersen, 944 P.2d 43, 49 (Alaska 1997)); see also Withrow v. Larkin, 421 U.S. 35, 47, 95 S.Ct. 1456, 43 L.Ed.2d 712 (1975).
[61] AT & T Alascom, 161 P.3d at 1246 (citing Tachick Freight Lines, Inc. v. State, Dep't of Labor, Emp't Sec. Div., 773 P.2d 451, 452 (Alaska 1989)).
[62] Withrow, 421 U.S. at 47, 95 S.Ct. 1456.
[63] Id. at 58, 95 S.Ct. 1456.
[64] AT&T Alascom, 161 P.3d at 1246.
[65] Bollerud v. State, Dep't of Pub. Safety, 929 P.2d 1283, 1286 (Alaska 1997).
[66] Similarly, in her brief on appeal to the superior court, Calvert contended that she "was never properly warned or informed by the employer or DOL that her OWN judgments regarding good cause for leaving work . . . was not the standard for which she could voluntarily quit her job and still be eligible for [unemployment insurance] benefits."
[67] Hutton v. Realty Executives, Inc., 14 P.3d 977, 980 (Alaska 2000) (citing Ferrell v. Baxter, 484 P.2d 250, 265 (Alaska 1971)).
[68] City of W. Covina v. Perkins, 525 U.S. 234, 241-42, 119 S.Ct. 678, 142 L.Ed.2d 636 (1999) (distinguishing Memphis Light, Gas & Water Div. v. Craft, 436 U.S. 1, 98 S.Ct. 1554, 56 L.Ed.2d 30 (1978), from West Covina, because the state law remedies at issue in West Covina were "established by published, generally available state statutes and case law").
[69] See http://labor.state.ak.us/esd_unemployment_insurance/home.htm.
|
Completion Shell
################
.. note::
    The documentation is not currently supported in Chinese language for this
    page.

    Please feel free to send us a pull request on
    `Github <https://github.com/cakephp/docs>`_ or use the **Improve This Doc**
    button to directly propose your changes.

    You can refer to the English version in the select top menu to have
    information about this page's topic.
|
Maryland police revealed Friday that a woman stabbed outside a parking lot earlier this month was attacked by a man wielding a syringe full of semen.
Thomas Bryon Stemen, 51, was arrested Tuesday and charged with first and second-degree assault and reckless endangerment after police allege he attacked a woman outside a grocery store on February 18, the Anne Arundel County Police Department said in a news release.
ARKANSAS POLICE OFFER TO TEST METH FOR CORONAVIRUS: ‘BETTER SAFE THAN SORRY’
Police were called to a parking lot at 5570 Shady Side Rd. in Churton at around 7 p.m. over reports that an adult female had been assaulted and poked with what appeared to be a syringe.
OKLAHOMA MAN JAILED FOR 23 YEARS AFTER KIDNAPPING STEPDAUGHTER, FATHERING 9 CHILDREN WITH HER
The suspect, later identified as Stemen, can be seen in video surveillance following the woman as she returns her shopping cart and then stabbing her with an object as he bumps into her.
The startled woman looks back as Stemen appears to act confused while still hovering around her.
The victim, identified by WBAL as Katie Peters, told the station that she initially thought she had been burned by a cigarette. She said she confronted Stemen, who reportedly asked her, “It felt like a bee sting, didn’t it?”
CLICK HERE TO GET THE FOX NEWS APP
Stemen was arrested on Tuesday but at that point police still couldn’t confirm what Peters had been stabbed with.
On Friday the Anne Arundel County Police Department learned that the substance in the syringe was semen. Authorities warned that there are likely more victims, that the investigation is “extremely active,” and that more charges could follow.
In a filing submitted today to the Federal Court, the AGU (Brazil's Office of the Attorney General) defended the legality of Eduardo Bolsonaro's nomination to Brazil's embassy in the United States, O Globo reports.
The document was submitted at the request of federal judge André Jackson de Holanda, of the 1st Federal Civil Court of Bahia, in response to a petition by federal deputy Jorge Solla (PT-BA). In the suit, the lawmaker challenges the legality of the nomination of the president's son to the post.
Federal attorney Samuel Augusto Rodrigues Nogueira Neto argued that the nomination is a political decision, within the sole purview of President Jair Bolsonaro, and that it does not violate legal principles.
"The President of the Republic cannot be shackled in his characteristic space of discretion in political leadership," the attorney says in the filing.
fake or not, im here for the meme |
Standards such as Bluetooth wireless technology and WiFi are often used to carry GSM data, sensor data, GPS data, etc.
All of these devices lack a few key elements:
a. The devices do not enable the accumulation of wireless data from one or more wireless devices that are connected to the device, which then conveys the data through a single Bluetooth wireless link to a paired and connected product.
b. The devices do not enable the synchronization of the wireless links so as to reduce power consumption. It should be noted that reducing power consumption may increase battery life.
c. They do not facilitate the abstraction of third-party wireless standards so as to provide an extension of existing Bluetooth profiles and protocols.
d. Wireless protocols have differing power, communication frequency, and timing requirements, and are generally not optimized for use with small battery-powered devices.
Furthermore, no designs currently exist that accumulate data from paired and connected Bluetooth Low Energy (or other low power standard such as ANT and IEEE 802.15.4 (ZigBee)) wireless technology devices into a single standardized Bluetooth wireless technology pipe for use with existing Bluetooth wireless technology products. |
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
/*
Input to cgo -godefs. See README.md
*/
// +godefs map struct_in_addr [4]byte /* in_addr */
// +godefs map struct_in6_addr [16]byte /* in6_addr */
package unix
/*
#define KERNEL
#include <dirent.h>
#include <fcntl.h>
#include <poll.h>
#include <signal.h>
#include <termios.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/capability.h>
#include <sys/event.h>
#include <sys/mman.h>
#include <sys/mount.h>
#include <sys/param.h>
#include <sys/ptrace.h>
#include <sys/resource.h>
#include <sys/select.h>
#include <sys/signal.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/un.h>
#include <sys/utsname.h>
#include <sys/wait.h>
#include <net/bpf.h>
#include <net/if.h>
#include <net/if_dl.h>
#include <net/route.h>
#include <netinet/in.h>
#include <netinet/icmp6.h>
#include <netinet/tcp.h>
enum {
sizeofPtr = sizeof(void*),
};
union sockaddr_all {
struct sockaddr s1; // this one gets used for fields
struct sockaddr_in s2; // these pad it out
struct sockaddr_in6 s3;
struct sockaddr_un s4;
struct sockaddr_dl s5;
};
struct sockaddr_any {
struct sockaddr addr;
char pad[sizeof(union sockaddr_all) - sizeof(struct sockaddr)];
};
// This structure is a duplicate of stat on FreeBSD 8-STABLE.
// See /usr/include/sys/stat.h.
struct stat8 {
#undef st_atimespec st_atim
#undef st_mtimespec st_mtim
#undef st_ctimespec st_ctim
#undef st_birthtimespec st_birthtim
__dev_t st_dev;
ino_t st_ino;
mode_t st_mode;
nlink_t st_nlink;
uid_t st_uid;
gid_t st_gid;
__dev_t st_rdev;
#if __BSD_VISIBLE
struct timespec st_atimespec;
struct timespec st_mtimespec;
struct timespec st_ctimespec;
#else
time_t st_atime;
long __st_atimensec;
time_t st_mtime;
long __st_mtimensec;
time_t st_ctime;
long __st_ctimensec;
#endif
off_t st_size;
blkcnt_t st_blocks;
blksize_t st_blksize;
fflags_t st_flags;
__uint32_t st_gen;
__int32_t st_lspare;
#if __BSD_VISIBLE
struct timespec st_birthtimespec;
unsigned int :(8 / 2) * (16 - (int)sizeof(struct timespec));
unsigned int :(8 / 2) * (16 - (int)sizeof(struct timespec));
#else
time_t st_birthtime;
long st_birthtimensec;
unsigned int :(8 / 2) * (16 - (int)sizeof(struct __timespec));
unsigned int :(8 / 2) * (16 - (int)sizeof(struct __timespec));
#endif
};
// This structure is a duplicate of if_data on FreeBSD 8-STABLE.
// See /usr/include/net/if.h.
struct if_data8 {
u_char ifi_type;
u_char ifi_physical;
u_char ifi_addrlen;
u_char ifi_hdrlen;
u_char ifi_link_state;
u_char ifi_spare_char1;
u_char ifi_spare_char2;
u_char ifi_datalen;
u_long ifi_mtu;
u_long ifi_metric;
u_long ifi_baudrate;
u_long ifi_ipackets;
u_long ifi_ierrors;
u_long ifi_opackets;
u_long ifi_oerrors;
u_long ifi_collisions;
u_long ifi_ibytes;
u_long ifi_obytes;
u_long ifi_imcasts;
u_long ifi_omcasts;
u_long ifi_iqdrops;
u_long ifi_noproto;
u_long ifi_hwassist;
// FIXME: these are now unions, so maybe need to change definitions?
#undef ifi_epoch
time_t ifi_epoch;
#undef ifi_lastchange
struct timeval ifi_lastchange;
};
// This structure is a duplicate of if_msghdr on FreeBSD 8-STABLE.
// See /usr/include/net/if.h.
struct if_msghdr8 {
u_short ifm_msglen;
u_char ifm_version;
u_char ifm_type;
int ifm_addrs;
int ifm_flags;
u_short ifm_index;
struct if_data8 ifm_data;
};
*/
import "C"
// Machine characteristics; for internal use.
const (
sizeofPtr = C.sizeofPtr
sizeofShort = C.sizeof_short
sizeofInt = C.sizeof_int
sizeofLong = C.sizeof_long
sizeofLongLong = C.sizeof_longlong
)
// Basic types
type (
_C_short C.short
_C_int C.int
_C_long C.long
_C_long_long C.longlong
)
// Time
type Timespec C.struct_timespec
type Timeval C.struct_timeval
// Processes
type Rusage C.struct_rusage
type Rlimit C.struct_rlimit
type _Gid_t C.gid_t
// Files
const ( // Directory mode bits
S_IFMT = C.S_IFMT
S_IFIFO = C.S_IFIFO
S_IFCHR = C.S_IFCHR
S_IFDIR = C.S_IFDIR
S_IFBLK = C.S_IFBLK
S_IFREG = C.S_IFREG
S_IFLNK = C.S_IFLNK
S_IFSOCK = C.S_IFSOCK
S_ISUID = C.S_ISUID
S_ISGID = C.S_ISGID
S_ISVTX = C.S_ISVTX
S_IRUSR = C.S_IRUSR
S_IWUSR = C.S_IWUSR
S_IXUSR = C.S_IXUSR
)
type Stat_t C.struct_stat8
type Statfs_t C.struct_statfs
type Flock_t C.struct_flock
type Dirent C.struct_dirent
type Fsid C.struct_fsid
// File system limits
const (
PathMax = C.PATH_MAX
)
// Advice to Fadvise
const (
FADV_NORMAL = C.POSIX_FADV_NORMAL
FADV_RANDOM = C.POSIX_FADV_RANDOM
FADV_SEQUENTIAL = C.POSIX_FADV_SEQUENTIAL
FADV_WILLNEED = C.POSIX_FADV_WILLNEED
FADV_DONTNEED = C.POSIX_FADV_DONTNEED
FADV_NOREUSE = C.POSIX_FADV_NOREUSE
)
// Sockets
type RawSockaddrInet4 C.struct_sockaddr_in
type RawSockaddrInet6 C.struct_sockaddr_in6
type RawSockaddrUnix C.struct_sockaddr_un
type RawSockaddrDatalink C.struct_sockaddr_dl
type RawSockaddr C.struct_sockaddr
type RawSockaddrAny C.struct_sockaddr_any
type _Socklen C.socklen_t
type Linger C.struct_linger
type Iovec C.struct_iovec
type IPMreq C.struct_ip_mreq
type IPMreqn C.struct_ip_mreqn
type IPv6Mreq C.struct_ipv6_mreq
type Msghdr C.struct_msghdr
type Cmsghdr C.struct_cmsghdr
type Inet6Pktinfo C.struct_in6_pktinfo
type IPv6MTUInfo C.struct_ip6_mtuinfo
type ICMPv6Filter C.struct_icmp6_filter
const (
SizeofSockaddrInet4 = C.sizeof_struct_sockaddr_in
SizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6
SizeofSockaddrAny = C.sizeof_struct_sockaddr_any
SizeofSockaddrUnix = C.sizeof_struct_sockaddr_un
SizeofSockaddrDatalink = C.sizeof_struct_sockaddr_dl
SizeofLinger = C.sizeof_struct_linger
SizeofIPMreq = C.sizeof_struct_ip_mreq
SizeofIPMreqn = C.sizeof_struct_ip_mreqn
SizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq
SizeofMsghdr = C.sizeof_struct_msghdr
SizeofCmsghdr = C.sizeof_struct_cmsghdr
SizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo
SizeofIPv6MTUInfo = C.sizeof_struct_ip6_mtuinfo
SizeofICMPv6Filter = C.sizeof_struct_icmp6_filter
)
// Ptrace requests
const (
PTRACE_TRACEME = C.PT_TRACE_ME
PTRACE_CONT = C.PT_CONTINUE
PTRACE_KILL = C.PT_KILL
)
// Events (kqueue, kevent)
type Kevent_t C.struct_kevent
// Select
type FdSet C.fd_set
// Routing and interface messages
const (
sizeofIfMsghdr = C.sizeof_struct_if_msghdr
SizeofIfMsghdr = C.sizeof_struct_if_msghdr8
sizeofIfData = C.sizeof_struct_if_data
SizeofIfData = C.sizeof_struct_if_data8
SizeofIfaMsghdr = C.sizeof_struct_ifa_msghdr
SizeofIfmaMsghdr = C.sizeof_struct_ifma_msghdr
SizeofIfAnnounceMsghdr = C.sizeof_struct_if_announcemsghdr
SizeofRtMsghdr = C.sizeof_struct_rt_msghdr
SizeofRtMetrics = C.sizeof_struct_rt_metrics
)
type ifMsghdr C.struct_if_msghdr
type IfMsghdr C.struct_if_msghdr8
type ifData C.struct_if_data
type IfData C.struct_if_data8
type IfaMsghdr C.struct_ifa_msghdr
type IfmaMsghdr C.struct_ifma_msghdr
type IfAnnounceMsghdr C.struct_if_announcemsghdr
type RtMsghdr C.struct_rt_msghdr
type RtMetrics C.struct_rt_metrics
// Berkeley packet filter
const (
SizeofBpfVersion = C.sizeof_struct_bpf_version
SizeofBpfStat = C.sizeof_struct_bpf_stat
SizeofBpfZbuf = C.sizeof_struct_bpf_zbuf
SizeofBpfProgram = C.sizeof_struct_bpf_program
SizeofBpfInsn = C.sizeof_struct_bpf_insn
SizeofBpfHdr = C.sizeof_struct_bpf_hdr
SizeofBpfZbufHeader = C.sizeof_struct_bpf_zbuf_header
)
type BpfVersion C.struct_bpf_version
type BpfStat C.struct_bpf_stat
type BpfZbuf C.struct_bpf_zbuf
type BpfProgram C.struct_bpf_program
type BpfInsn C.struct_bpf_insn
type BpfHdr C.struct_bpf_hdr
type BpfZbufHeader C.struct_bpf_zbuf_header
// Terminal handling
type Termios C.struct_termios
type Winsize C.struct_winsize
// fchmodat-like syscalls.
const (
AT_FDCWD = C.AT_FDCWD
AT_REMOVEDIR = C.AT_REMOVEDIR
AT_SYMLINK_FOLLOW = C.AT_SYMLINK_FOLLOW
AT_SYMLINK_NOFOLLOW = C.AT_SYMLINK_NOFOLLOW
)
// poll
type PollFd C.struct_pollfd
const (
POLLERR = C.POLLERR
POLLHUP = C.POLLHUP
POLLIN = C.POLLIN
POLLINIGNEOF = C.POLLINIGNEOF
POLLNVAL = C.POLLNVAL
POLLOUT = C.POLLOUT
POLLPRI = C.POLLPRI
POLLRDBAND = C.POLLRDBAND
POLLRDNORM = C.POLLRDNORM
POLLWRBAND = C.POLLWRBAND
POLLWRNORM = C.POLLWRNORM
)
// Capabilities
type CapRights C.struct_cap_rights
// Uname
type Utsname C.struct_utsname
|
Common Name: None
Synonyms: Hemidoras micropoeus
Family: Doradidae
Distribution: South America: Essequibo, Demerara and Corantijn River basins and possibly other coastal drainages east to the mouth of the Amazon River. Type locality: British Guiana at Lama stop-off and Wismar on the Demerara River.
Q:
Create a fixed length string function (originally from Java)
I'm trying to create a fixed length string function in Swift. I know how it's done in Java for Android but I'm unsure how to translate it to Swift. This is what the Java function looks like:
private String getFixedLengthString(String name, int fixedLength) {
if(name == null) {
name = "";
}
if(name.length() > fixedLength) {
name = name.substring(0, fixedLength);
}
return String.format("%1$-" + fixedLength + "s", name);
}
Any help is appreciated!
A:
You can use the prefix method to truncate the String to the first fixedLength characters, and padding(toLength:withPad:startingAt:) to left-justify and pad the result with spaces, which is what the Java format specifier "%-Ns" does:
func getFixedLengthString(name: String?, fixedLength: Int) -> String {
    // Treat nil as an empty string, as the Java version does
    let truncated = String((name ?? "").prefix(fixedLength))
    // Pad on the right with spaces up to fixedLength,
    // equivalent to String.format("%1$-" + fixedLength + "s", name)
    return truncated.padding(toLength: fixedLength, withPad: " ", startingAt: 0)
}
|
Q:
what's the difference between mid=(beg+end)/2 and mid=beg+(end-beg)/2 in binary search?
This is problem 3.26 from C++ Primer, fifth edition. I don't know the difference between them.
Maybe the second one can avoid overflow?
A:
Maybe the second one can avoid overflow.
Exactly. There's no guarantee that beg+end is representable; but in the second case the intermediate values, as well as the expected result, are no larger than end, so there is no danger of overflow.
The second form can also be used for affine types like pointers and other random-access iterators, which can be subtracted to give a distance, but not added together.
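To make the failure mode concrete, here is a short Python sketch (illustrative, not from the question) that simulates C-style signed 32-bit arithmetic. Note that in real C++ the overflow of `beg + end` is undefined behavior; the wraparound shown here is merely the typical two's-complement outcome:

```python
def to_int32(x):
    """Wrap a Python int to a signed 32-bit value, as two's-complement C int arithmetic would."""
    x &= 0xFFFFFFFF
    return x - 0x1_0000_0000 if x >= 0x8000_0000 else x

beg, end = 1_500_000_000, 2_000_000_000   # each fits in a signed 32-bit int

naive_mid = to_int32(to_int32(beg + end) // 2)       # beg + end exceeds INT_MAX
safe_mid  = to_int32(beg + to_int32(end - beg) // 2) # intermediates never exceed end

print(naive_mid)  # negative garbage from the wrapped sum
print(safe_mid)   # 1750000000, the correct midpoint
```

In the safe form, `end - beg` and the final sum are both bounded by `end`, so no intermediate can overflow.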
|
Q:
Is there any relationship between 田 (tián), 由 (yóu), 甲 (jiǎ), and 申 (shēn)?
Question: Is there any relationship between 田, 由, 甲, and 申?
田 (tián) = field; it seems rarely used
由 (yóu) = ???; I've only seen it as a radical for 油 (yóu) = oil
甲 (jiǎ) = nails, as in fingernails 手指甲 (shǒuzhǐjiǎ); it's also a radical in 鸭子 (yāzi) = duck
申 (shēn) = ???; I've seen it used in 申请 (shēnqǐng) = application
The characters appear similar, so I tend to get them confused often. It might be there's a reason these characters are similar which might help me remember them better (or there might be no meaningful relationship).
A:
The origin of 田 was a pictograph of a rice field
http://www.zdic.net/z/1e/zy/7530.htm
The primary meaning of 田 is 'rice field'
~
The origin of 由 was a pictograph of a sprout
http://www.zdic.net/z/1f/zy/7531.htm
The primary meaning of 由 is 'from; begin with'
~
The origin of 甲 was a pictograph of the intersection of a planted armor
http://www.zdic.net/z/1f/zy/7532.htm
The primary meaning of 甲 is 'armor; top class'
~
The origin of 申 was a pictograph of a man kneeling with pleading hands (to plead to the gods)
http://www.zdic.net/z/1f/zy/7533.htm
The primary meaning of 申 is 'plead'
|
---
abstract: 'Sterile neutrinos with mass $\simeq 1$ eV and order 10% mixing with active neutrinos have been proposed as a solution to anomalies in neutrino oscillation data, but are tightly constrained by cosmological limits. It was recently shown that these constraints are avoided if sterile neutrinos couple to a new MeV-scale gauge boson $A''$. However, even this scenario is restricted by structure formation constraints when $A''$-mediated collisional processes lead to efficient active-to-sterile neutrino conversion after neutrinos have decoupled. In view of this, we reevaluate in this paper the viability of sterile neutrinos with such “secret” interactions. We carefully dissect their evolution in the early Universe, including the various production channels and the expected modifications to large scale structure formation. We argue that there are two regions in parameter space — one at very small $A''$ coupling, one at relatively large $A''$ coupling — where all constraints from big bang nucleosynthesis (BBN), cosmic microwave background (CMB), and large scale structure (LSS) data are satisfied. Interestingly, the large $A''$ coupling region is precisely the region that was previously shown to have potentially important consequences for the small scale structure of dark matter halos if the $A''$ boson couples also to the dark matter in the Universe.'
author:
- Xiaoyong Chu
- Basudeb Dasgupta
- Joachim Kopp
bibliography:
- 'reference.bib'
title: 'Sterile Neutrinos with Secret Interactions — Lasting Friendship with Cosmology'
---
Introduction {#sec:intro}
============
The possible existence of extra, “sterile”, neutrino species with masses at the eV scale and $\mathcal{O}(10\%)$ mixing with the Standard Model (SM) neutrinos is one of the most debated topics in neutrino physics today. Several anomalies in neutrino oscillation experiments [@Aguilar:2001ty; @AguilarArevalo:2012va; @Mueller:2011nm; @Mention:2011rk; @Hayes:2013wra; @Acero:2007su] seem to point towards the existence of such particles, but null results from other experiments that did not observe a signal cast doubt on this hypothesis [@Kopp:2011qd; @Kopp:2013vaa; @Conrad:2012qt; @Kristiansen:2013mza; @Giunti:2013aea]. A multi-faceted experimental program is under way to clarify the issue and either detect, or conclusively rule out, eV-scale sterile neutrinos with large mixing angle.
If the SM is indeed augmented with one or several such sterile neutrinos, but nothing else, some of the tightest constraints come from cosmological observations. In particular, measurements of the effective number of relativistic particle species in the primordial plasma, $N_\text{eff}$ [@Steigman:2012ve; @Planck:2015xua], disfavor the existence of an abundance of light or massless particles beyond the SM neutrinos and the photon in the early Universe. If sterile neutrinos are at the eV scale or above, they are also constrained by the distribution of large scale structure (LSS) in the Universe [@Hamann:2011ge] which would be washed out due to efficient energy transport over large distances by free-streaming neutrinos.
Cosmology, however, only constrains particle species that are abundantly produced in the early Universe. In two recent letters [@Hannestad:2013ana; @Dasgupta:2013zpn], it was demonstrated that the production of sterile neutrinos can be suppressed until relatively late times if they are charged under a new interaction. This idea has elicited interest in detailed model building and has interesting phenomenological consequences[@Bringmann:2013vra; @Ko:2014nha; @Ng:2014pca; @Kopp:2014fha; @Saviano:2014esa; @Mirizzi:2014ama; @Cherry:2014xra; @Kouvaris:2014uoa; @Tang:2014yla]. However, Mirizzi *et al.* [@Mirizzi:2014ama] have recently pointed out that collisions mediated by the new interaction can result in significant late time production of sterile neutrinos and lead to tensions with CMB and LSS data on structure formation. There are, however, several important caveats to this statement. In particular, the bounds from Ref. [@Mirizzi:2014ama] can be evaded if the sterile neutrinos either never recouple with active neutrinos or remain collisional until matter-radiation equality. (We communicated on these caveats with the authors of Ref. [@Mirizzi:2014ama], who were aware of the first possibility but did not mention it as they found it less interesting in the context of previous literature. They mention the second possibility in the final version of their paper.)
Our aim in the present paper is to understand in detail the role and impact of sterile neutrino collisions, and reevaluate if secretly interacting sterile neutrinos remain cosmologically viable. We begin in Sec. \[sec:setup\] by reviewing the main features of self-interacting sterile neutrino scenarios. Then, in Sec. \[sec:Neff\], we calculate the additional contribution to $N_{\rm eff}$ at the BBN and CMB epochs. In Sec. \[sec:LSS\] we consider the impact on the large scale structure in the Universe, focusing in particular on the sensitivity of Sloan Digital Sky Survey (SDSS) and Lyman-$\alpha$ data. We find that there are two regions of parameter space where sterile neutrinos with secret interactions are only weakly constrained. In Sec. \[sec:conclusions\], we discuss our conclusions and summarize the results.
Secret Interactions and Sterile Neutrino Production {#sec:setup}
====================================================
We assume that the Standard Model is augmented by a sterile neutrino $\nu_s$ with mass $m_s$[^1] and with order 10% mixing with the SM neutrinos. We moreover assume the existence of a new secret $U(1)_{s}$ gauge interaction, mediated by a vector boson $A'$ of mass $M$ at the MeV scale and coupling to sterile neutrinos through an interaction of the form $$\begin{aligned}
{\cal L}_\text{int} = e_s \bar{\nu}_s \gamma^\mu P_L \nu_s A'_\mu \,.\end{aligned}$$ Here, $e_s$ is the $U(1)_s$ coupling constant and $P_L = (1 - \gamma^5)/2$ is the left-handed chirality projection operator. We define the secret fine structure constant as $\alpha_s \equiv e_s^2/(4\pi)$.
This new interaction generates a large temperature-dependent potential [@Dasgupta:2013zpn] $$\begin{aligned}
V_\text{eff} \simeq
\left\lbrace
\begin{array}{lcl}
-\dfrac{7 \pi^2 e_s^2 E T_s^4}{45 M^4} &\quad& \text{for $T_s \ll M$} \\[3ex]
+\dfrac{e_s^2 T_s^2}{8 E} &\quad& \text{for $T_s \gg M$}
\end{array}
\right.\,
\label{eq:Veff}\end{aligned}$$ for sterile neutrinos of energy $E$ and sterile sector temperature $T_s$. This potential leads to an in-medium mixing angle $\theta_m$ between active neutrinos $\nu_a$ and sterile neutrinos $\nu_s$, given by $$\begin{aligned}
\sin^2 2\theta_m
= \frac{\sin^2 2\theta_0}
{\big(\cos 2\theta_0 + \tfrac{2 E}{\Delta m^2} V_\text{eff} \big)^2
+ \sin^2 2\theta_0} \,.
\label{eq:s22thm}\end{aligned}$$ In the following, we will use a vacuum mixing angle $\theta_0 \simeq 0.1$ and an active–sterile mass squared difference $\Delta m^2 \simeq 1\,\text{eV}^2$. As shown in [@Hannestad:2013ana; @Dasgupta:2013zpn], the secret interactions can suppress $\theta_m$, and thus active to sterile neutrino oscillations, until after neutrino decoupling as long as $|V_{\rm eff}|\gg |\Delta m^2/(2E)|$.
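As an illustrative cross-check of Eq. \[eq:s22thm\] (a sketch, not part of the original analysis), the in-medium mixing can be evaluated as a function of the dimensionless ratio $x \equiv 2E\,V_\text{eff}/\Delta m^2$:

```python
import math

def sin2_2theta_m(x, theta0=0.1):
    """In-medium mixing of Eq. [eq:s22thm]; x = 2*E*V_eff / Delta m^2."""
    s2 = math.sin(2 * theta0) ** 2
    c2 = math.cos(2 * theta0)
    return s2 / ((c2 + x) ** 2 + s2)

print(sin2_2theta_m(0.0))     # vacuum limit: sin^2(2*theta0), about 0.039
print(sin2_2theta_m(-100.0))  # |V_eff| >> Delta m^2/(2E): strongly suppressed
```

For $|x| \gg 1$ the mixing is suppressed as $1/x^2$, which is the suppression mechanism described above.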
The new interaction also leads to collisions of sterile neutrinos. The collision rate for $\nu_s\nu_s\leftrightarrow \nu_s\nu_s$ scattering is given by $$\begin{aligned}
\Gamma_\text{coll}= n_{\nu_s} \sigma\sim
\left\lbrace
\begin{array}{lcl}
n_{\nu_s} e_s^4\frac{E^2}{M^4}
&\quad& \text{for $T_s \ll M$} \\[1.5ex]
n_{\nu_s} e_s^4\frac{1}{E^2}
&\quad& \text{for $T_s \gg M$}
\end{array}
\right. \,,
\label{eq:Gamma-coll}\end{aligned}$$ where $n_{\nu_s}$ is the sterile neutrino density. The sterile neutrino production rate $\Gamma_s$ and the final density depend on this collision rate. Two qualitatively different scenarios must be distinguished:
*Collisionless production:* If the collision rate $\Gamma_\text{coll}$ is smaller than the Hubble rate $H$ at all times, the active and sterile neutrinos can be taken to be oscillating without scattering [@Dodelson:1993je].[^2] If $\Delta m^2/(2 T_{\nu_a}) \gg H$, $\nu_s$ are then produced only through oscillations, so that the final sterile neutrino number density is $n_{\nu_s}\simeq\tfrac{1}{2} \sin^2
2\theta_m\,n_{\nu_a}$, where $n_{\nu_a}=3\zeta(3)/(4\pi^2)g_{\nu_a}T_{\nu_a}^3$ is the density of one active neutrino flavor and $T_{\nu_a}$ is the active neutrino temperature. The final population of sterile neutrinos thus remains small, at most ${\cal O}(10^{-2})$ of the active neutrino density, because of the small mixing angle.
*Collisional production:* If $\Gamma_\text{coll}$ exceeds the Hubble rate $H$, then sterile neutrinos cannot be treated as non-collisional [@Stodolsky:1986dx]. In each collision, the sterile component of a $\nu_a$–$\nu_s$ superposition changes its momentum, separates from the $\nu_a$ component, and continues to evolve independently. Subsequently, the active component again generates a sterile component, which again gets scattered. This process continues for all neutrinos until eventually the phase space distributions of $\nu_a$ and $\nu_s$ have become identical. Thus, the fraction of $\nu_a$ converted to sterile neutrinos is not limited by the mixing angle, and all neutrino flavors end up with equal number densities.
The $\nu_a\rightarrow \nu_s$ production rate in this case is $\Gamma_s \simeq
\frac{1}{2}\sin^2 2\theta_m \cdot \Gamma_\text{coll}$ [@Jacques:2013xr], where we can interpret the first factor as the average probability that an initially active neutrino is in its sterile state at the time of collision. The second factor gives the scattering rate that keeps it in the sterile state. We note that the production rate $\Gamma_s$ is proportional to $n_{\nu_s}$ and thus rapidly approaches its final value, $$\Gamma_s \simeq {1\over2}\sin^2 2\theta_m \times
\frac{3}{4} n^\text{SM}_{\nu_a} \cdot
\left\lbrace
\begin{array}{lcl}
e_s^4\frac{E^2}{M^4}
&\quad& \text{for $T_s \ll M$} \\[1.5ex]
e_s^4\frac{1}{E^2}
&\quad& \text{for $T_s \gg M$}
\end{array}
\right. \,.
\label{eq:Gamma-s}$$ Note that, when $\Gamma_\text{coll}$ is much larger than the oscillation frequency, using the average oscillation probability $\frac{1}{2}\sin^2
2\theta_m$ is inappropriate, and in fact the production rate $\Gamma_s$ goes to zero in this case. Such a situation is, however, not realized for the parameter values explored in this work.
In the following, we will look at both collisionless and collisional production of sterile neutrinos in more detail,[^3] with a special focus on the latter where more sterile neutrinos may be produced.
Constraints on $N_\text{eff}$ {#sec:Neff}
=============================
Cosmology is sensitive to the presence of relativistic degrees of freedom through their contribution to the overall energy density. At early times the sterile sector was presumably in equilibrium with the SM plasma, so that $\nu_s$ and $A'$ were thermally populated. We assume that the sterile sector decouples from the SM sector well above the QCD scale and that oscillations remain suppressed until active neutrinos also decouple. Since the temperature $T_\gamma$ of the SM sector drops more slowly than the sterile sector temperature $T_s$ when extra entropy is produced during the QCD phase transition, $T_s$ at BBN is significantly smaller than $T_\gamma$.
It is useful to track the ratio $$\begin{aligned}
\xi \equiv \frac{T_s}{T_{\nu}^\text{SM}}\end{aligned}$$ of the sterile sector temperature $T_s$ and the temperature $T_\nu^\text{SM}$ of a standard neutrino. Before $e^+e^-$ annihilation, $T_{\nu}^\text{SM}
= T_\gamma$, while afterwards $T_{\nu}^\text{SM} = (4/11)^{1/3} T_\gamma$. Assuming comoving entropy is conserved, the ratio $\xi$ at BBN is $$\begin{aligned}
\xi_\text{BBN}
&= \left\lbrace
\begin{array}{ll}
\big( \frac{10.75}{106.7} \big)^\frac{1}{3}
\big( \frac{2 \cdot 7/8 + 3}{2 \cdot 7/8} \big)^\frac{1}{3}
&\quad\text{for $M \gg 0.5$~MeV} \\[1ex]
\big( \frac{10.75}{106.7} \big)^\frac{1}{3}
&\quad\text{for $M \ll 0.5$~MeV}
\end{array} \right. \nonumber\\[1ex]
&= \left\lbrace
\begin{array}{lcl}
0.649 &\quad\text{for $M \gg 0.5$~MeV} &\quad\text{(case A)} \\[1ex]
0.465 &\quad\text{for $M \ll 0.5$~MeV} &\quad\text{(case B)}
\end{array} \right.\,.
\label{eq:xi-BBN}\end{aligned}$$ Here, the factor $(10.75/106.7)^{1/3}$ gives the ratio of the sterile sector temperature to the active sector temperature before $A'$ decay, assuming that the two sectors have decoupled above the electroweak scale. It is based on counting the SM degrees of freedom that freeze out between the electroweak and BBN epochs.
$A'$ is present in the Universe at the BBN epoch if $M \ll 3T_s|_{\rm BBN} \simeq 0.5$ MeV, and has decayed away if heavier. The factor $(2 \cdot 7/8 + 3) / (2 \cdot 7/8)$ in the first row of Eq. \[eq:xi-BBN\] corresponds to the ratio of sterile sector degrees of freedom[^4] before and after the decay of $A'$ at $T_s\simeq M/3$.
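The numbers in Eq. \[eq:xi-BBN\] follow from entropy conservation in each sector; a quick numerical cross-check (Python, for illustration only):

```python
g_nu_s = 2 * 7 / 8   # sterile neutrino degrees of freedom
g_Ap = 3             # massive A' degrees of freedom
dilution = (10.75 / 106.7) ** (1 / 3)   # SM entropy release between decoupling and BBN

xi_A = dilution * ((g_nu_s + g_Ap) / g_nu_s) ** (1 / 3)  # case A: A' decayed before BBN
xi_B = dilution                                          # case B: A' still present at BBN

print(round(xi_A, 3), round(xi_B, 3))  # 0.649 0.465
```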
The extra radiation in the Universe is parameterized as $N_\text{eff} \equiv (\rho_{\nu_a} + \rho_{\nu_s,A'}) / \rho_\nu^\text{SM}$, i.e., the energy density of all non-photon relativistic species, measured in units of the energy density of a SM neutrino species. The primordial population of $\nu_s$ and $A'$ leads to $N_{\rm eff}$ marginally larger than 3. For $M \gg 0.5\ \text{MeV}$,
$$\begin{aligned}
N_\text{eff,BBN{\tiny (A)}}
&= N_{\nu_a} + \xi_\text{BBN{\tiny (A)}}^{4}
\simeq 3.22 \,,
\label{eq:N-BBN1}\end{aligned}$$
at the BBN epoch. The first term, $N_{\nu_a}$, on the right hand side accounts for the active neutrinos and is equal to 3.045. The second term includes the relativistic sterile sector particles, i.e., only $\nu_s$ if $M \gg 0.5\ \text{MeV}$. If the $A'$ bosons are lighter, i.e., $M \ll 0.5\ \text{MeV}$, they are present during BBN and contribute $g_{A'} = 3$ degrees of freedom in the sterile sector, in addition to the $g_{\nu_s} =
2 \times 7/8$ degrees of freedom of a sterile neutrino. Using the fact that also each active neutrino species has $g_{\nu_a} = 2 \times 7/8$ degrees of freedom, we find $$\begin{aligned}
N_\text{eff,BBN{\tiny (B)}}
&= N_{\nu_a} + \frac{g_{\nu_s} + g_{A'}}{g_{\nu_a}} \xi_\text{BBN{\tiny (B)}}^{4}
\simeq 3.17 \,,
\label{eq:N-BBN}\end{aligned}$$
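Both benchmark values of $N_\text{eff}$ at BBN can be reproduced numerically (an illustrative sketch, using the degree-of-freedom counting above):

```python
N_nu_a = 3.045                 # active neutrino contribution
g_nu_a = g_nu_s = 2 * 7 / 8    # d.o.f. per (active or sterile) neutrino species
g_Ap = 3                       # massive A' degrees of freedom
dilution = (10.75 / 106.7) ** (1 / 3)

xi_A = dilution * ((g_nu_s + g_Ap) / g_nu_s) ** (1 / 3)   # Eq. [eq:xi-BBN], case A
xi_B = dilution                                           # Eq. [eq:xi-BBN], case B

N_eff_A = N_nu_a + xi_A ** 4                              # Eq. [eq:N-BBN1]
N_eff_B = N_nu_a + (g_nu_s + g_Ap) / g_nu_a * xi_B ** 4   # Eq. [eq:N-BBN]

print(round(N_eff_A, 2), round(N_eff_B, 2))  # 3.22 3.17
```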
![Possible cosmological histories of the active neutrinos $\nu_a$, the sterile neutrinos $\nu_s$, and the sterile sector gauge bosons $A'$ below the electroweak (EW) scale. Various possibilities, labeled as A1, A2 and B1, B2, B3, are determined by the values of the $A'$ mass $M$ and the $U(1)_s$ fine structure constant $\alpha_s$ and lead to testable predictions for $N_\text{eff}$ at both the BBN and CMB epochs. See text for details.[]{data-label="fig:chart"}](Charts.pdf){width="0.95\columnwidth"}
In Fig. \[fig:chart\], these two cases are summarized as BBN[(A)]{} and BBN[(B)]{}, respectively. In either case, $N_\text{eff,BBN{\tiny (A/B)}}$ remains consistent with the current BBN bound on extra radiation, $\Delta N_\text{eff,BBN} = 0.66^{+0.47}_{-0.45}$ (68% C.L.) [@Steigman:2012ve].
After BBN, the next important event is a possible secret *recoupling* of $\nu_a$ and $\nu_s$. If the sterile neutrino production rate $\Gamma_s > H$, a new hotter population of $\nu_s$ can be collisionally produced from $\nu_a$, and they achieve kinetic equilibrium with the primordially produced colder population of $\nu_s$. Also, the $A'$ can decay and heat up the sterile neutrinos. Both processes change the number and energy density of neutrinos, and $N_{\rm eff}$ at CMB depends on the order in which they occur.
In Fig. \[fig:thermalization-rate\], we show the collisional $\nu_a\rightarrow \nu_s$ production rate $\Gamma_s$, normalized to the Hubble expansion rate $H$, as a function of the photon temperature $T_\gamma$: $\Gamma_s/H$ is suppressed at high temperatures (say, above GeV), where $\sin^2
2\theta_m$ is small due to the large $V_\text{eff}$. Since $\Gamma_s \propto
T_\gamma^{-3}$ in this regime (see Eqs. \[eq:Veff\], \[eq:s22thm\], and \[eq:Gamma-s\]) and $H \propto T_\gamma^2$, $\Gamma_s/H$ increases as $T_\gamma^{-5}$ as the temperature decreases. We define the [recoupling]{} temperature $T_\text{re}$ as the temperature where $\Gamma_s
/H > 1$ for the first time since the primordial decoupling of the active and sterile sectors above the QCD phase transition. When $T_s \sim M$, the energy and temperature dependence of $\Gamma_s$ changes (see Eq. \[eq:Gamma-s\]), and when additionally $V_\text{eff}$ drops below $\Delta m^2 / (2 T_s)$ at $T_s < M$, $\Gamma_s$ begins to drop again. The asymptotic behavior is $\Gamma_s / H \propto T_\gamma^3$ at $T_s \ll M$ and $\theta_m \simeq
\theta_0$. There are then three possible sequences of events:
1. *No recoupling*: For a sufficiently small interaction strength $\alpha_s$, the scattering rate $\Gamma_s$ always stays below the Hubble rate and there is no recoupling (solid black curve in Fig. \[fig:thermalization-rate\]).
If the interaction is stronger, a recoupling of $\nu_a$ and $\nu_s$ can happen either *after* or *before* $A'$ decay:
2. *Recoupling after $A'$ decay*: If $M > \text{few} \times
10^{-2}$ MeV, the recoupling happens after $A'$ have decayed (dotted blue curve in Fig. \[fig:thermalization-rate\]).
3. *Recoupling before $A'$ decay*: If $M < \text{few} \times
10^{-2}$ MeV, the recoupling happens before $A'$ have decayed (dashed red curve in Fig. \[fig:thermalization-rate\]).
In the second and third cases, there is also a secret *decoupling* when $\Gamma_s/H$ again drops below one. If $e_s^2/M^2 \le {\cal O}(10\ \text{MeV}^{-2})$, this decoupling happens while $\nu_s$ are still relativistic, i.e. $T_s \gtrsim m_s/3$.[^5]
![Evolution of the collisional $\nu_a\rightarrow \nu_s$ production rate $\Gamma_s$, normalized by the Hubble rate $H$, versus the photon temperature $T_\gamma$, for different representative choices of the secret gauge boson mass $M$ and the secret fine structure constant $\alpha_s$. When $\Gamma_s/H > 1$, collisional production of $\nu_s$ from the thermal bath of $\nu_a$ is effective. The solid black curve shows a case where this never happens. The shoulder around $T_\gamma \sim M$ is where the $A'$ decay away. The dotted blue and dashed red curves correspond to recoupling after and before $A'$ decay, respectively.[]{data-label="fig:thermalization-rate"}](fig1.pdf){width="0.9\columnwidth"}
In the following, we discuss the three aforementioned cases in detail.
### No Recoupling
In the *no recoupling* cases, labeled as A1 and B1 in Fig. \[fig:chart\], the cosmological evolution after BBN is very straightforward. Vacuum oscillations convert a small fraction $\simeq \frac{1}{2} \sin^2 2\theta_0 \simeq 0.01$ of active neutrinos into sterile neutrinos (and vice versa), but this has negligible impact on the cosmological observables. Therefore, the temperature ratio $\xi$ at CMB can be derived from the separate conservation of entropy in the active neutrino sector and in the sterile neutrino sector. It is independent of when the $A'$ decay (provided that the decay happens before the CMB epoch and proceeds approximately in chemical equilibrium). That is, $\xi_\text{CMB\tiny{A/B}} \simeq \xi_\text{BBN\tiny{(A)}} = 0.649$.
$N_\text{eff,CMB}$ can be estimated in analogy to Eq. \[eq:N-BBN\]. For the assumed sterile neutrino mass $\simeq 1$ eV, the $\nu_s$ contribution to the relativistic energy density has to be weighted by an extra factor because they are already semi-relativistic at the CMB epoch, where the photon temperature is $T_\gamma \simeq 0.30$ eV and the kinetic temperature of the sterile sector is $T_s = \xi_\text{CMB}\cdot (4/11)^{1/3} T_\gamma \simeq
0.14$ eV. As in [@Mirizzi:2014ama], we assume that the extra weight factor is characterized by the pressure $P$. (See Appendix \[sec:Tkin\] for the definition and calculations of the kinetic temperature and the pressure $P$ used here.) We thus obtain $$\begin{aligned}
N_\text{eff,CMB}
&= N_{\nu_a} + \frac{P_{m_s=\text{1\,eV}}}{P_{m_s=0}}
\Bigg|_\text{CMB} \cdot \xi_\text{CMB}^{4}
\simeq 3.13 \,.
\label{eq:N-CMB1}\end{aligned}$$ It is worth noting that the CMB temperature spectrum does not exactly measure the value of $N_\text{eff,CMB}$. Instead, the observed spectrum depends on the evolution of the energy density in relativistic degrees of freedom between the epoch of matter-radiation equality ($T_{\gamma,\text{eq}} \sim 0.7$ eV) and recombination ($T_{\gamma,\text{CMB}} \simeq 0.30$ eV) [@Planck:2015xua]. Therefore, the value of $N_\text{eff,CMB}$ measured from the CMB temperature power spectrum lies between the values of $N_\text{eff}$ at $T_{\gamma,\text{CMB}}$ and $T_{\gamma,\text{eq}}$. The latter value, which we will denote by $N_\text{eff,eq}$, is given by $$\begin{aligned}
N_\text{eff,eq}
&= N_{\nu_a} + \frac{P_{m_s=\text{1\,eV}}}
{P_{m_s=0}}
\Bigg|_\text{eq} \cdot \xi_\text{eq}^{4} \simeq 3.18 \,.\end{aligned}$$ Both of these values agree with the bound from the 2015 Planck data release, $N_\text{eff} = 3.15 \pm 0.23$ (68% C.L.) [@Planck:2015xua].
### Recoupling after $A'$ decay
The cases of *recoupling after $A'$ decay* are labeled as A2 and B2 in Fig. \[fig:chart\]. In both cases, entropy conservation in the sterile sector before recoupling leads to a temperature ratio just after $A'$ decay of $\xi_M \simeq \xi_\text{BBN\tiny{(A)}} = 0.649$, which in turn implies $$\begin{aligned}
N_{\text{eff},M} = 3.045 + \xi^4_M = 3.22 \,.\end{aligned}$$ Here, we have assumed that during $A'$ decay chemical equilibrium holds in the sterile sector.
After recoupling, efficient neutrino oscillations and collisions lead to equilibration of the number densities and energy densities of all active and sterile neutrino species. Nevertheless, since number-changing interactions are strongly suppressed at $T_s \ll M$, this recoupling cannot change the total (active + sterile) neutrino number density and energy density beyond what is necessitated by cosmological expansion. The kinetic temperature $T_{\nu,\text{re}}$ shared by all neutrinos after recoupling is then given by $$\begin{aligned}
T_{\nu,\text{re}} \simeq
\frac{3.045 \cdot (T^\text{SM}_{\nu,\text{re}})^4 + 1 \cdot T_{s,\text{re},0}^4}
{3.045 \cdot (T^\text{SM}_{\nu,\text{re}})^3 + 1 \cdot T_{s,\text{re},0}^3}
\simeq 0.97 \, T^\text{SM}_\nu \,,\end{aligned}$$ where $T^\text{SM}_\nu$ is the active neutrino temperature just prior to recoupling (which is at its SM value) and $T_{s,\text{re},0}$ denotes the sterile sector temperature just prior to recoupling.
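Numerically (an illustrative sketch), with $T_{s,\text{re},0} = 0.649\,T^\text{SM}_\nu$ from the earlier entropy-conservation argument:

```python
N_nu_a = 3.045
xi = 0.649   # T_s / T_nu^SM just before recoupling

# Number and energy densities are separately conserved during recoupling,
# so the common kinetic temperature is the ratio of the two weighted sums:
T_re = (N_nu_a + xi ** 4) / (N_nu_a + xi ** 3)   # in units of T_nu^SM

print(round(T_re, 2))  # 0.97
```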
Eventually, the mostly sterile eV-scale mass eigenstate decouples from the light mass eigenstates and becomes semi-relativistic at the CMB time. Its kinetic temperature at this epoch is $T_{s,\text{CMB}} \simeq 0.13$ eV. The effective number of relativistic species at the CMB epoch is given by $$\begin{aligned}
N_\text{eff,CMB} &=
N_\text{eff,$M$} \bigg( \frac{3}{4}
+ \frac{1}{4} \frac{P_{m_s=\text{1\,eV}}}{P_{m_s=0}}
\bigg|_\text{CMB}
\bigg) \notag \\
&\simeq 2.51 + 0.39 \simeq 2.90 \,.\end{aligned}$$ Note that this is smaller than the SM value 3.045. This happens because part of the energy of active neutrinos has been transferred to the mostly sterile mass eigenstate $\nu_4$, whose kinetic energy gets redshifted away more efficiently after it becomes non-relativistic. Ref. [@Mirizzi:2014ama] also found $N_\text{eff} < 3$ for this scenario. Similarly, we obtain for the time of matter-radiation equality: $$\begin{aligned}
N_\text{eff,eq} = 3.09 \,.\end{aligned}$$ Both values are in agreement with the Planck bound [@Planck:2015xua].
### Recoupling before $A'$ decay
The last possibility, labeled as case B3 in Fig. \[fig:chart\], is that recoupling happens before $A'$ decay. In this case, all neutrinos, together with $A'$, reach a common chemical equilibrium, which lasts until most of the $A'$ particles have decayed. During the formation of chemical equilibrium, the total energy is conserved while entropy increases. Energy conservation allows us to calculate the temperature $T_{\nu,\text{re}}$ of the active + sterile neutrino sector immediately after recoupling: $$\begin{gathered}
(3 g_{\nu_a} + g_{\nu_s} + g_{A'}) \, T_{\nu,\text{re}}^4 \\
= \big[ 3 g_{\nu_a} + (g_{\nu_s} + g_{A'}) \,
\xi_\text{BBN{\tiny (B)}}^4 \big] \, (T^{SM}_{\nu,\text{re}})^4 \,,\end{gathered}$$ where $T^{SM}_{\nu,\text{re}}$ is again the active neutrino temperature just prior to recoupling. Plugging in numbers for the effective numbers of degrees of freedom $g_{\nu_a}$, $g_{\nu_s}$, $g_{A'}$ and using $\xi_\text{BBN{\tiny (B)}}
= 0.465$, we obtain $$\begin{aligned}
T_{\nu,\text{re}} = 0.861 \, T^{SM}_{\nu,\text{re}} \,.\end{aligned}$$ Later, the $A'$ decay and the thermal bath of neutrinos gets reheated by a factor $[(3 g_{\nu_a} + g_{\nu_s} + g_{A'}) / (3 g_{\nu_a} + g_{\nu_s})]^{1/3}
\simeq 1.125$. The effective number of relativistic species after $A'$ decay is then $$\begin{aligned}
N_{\text{eff},M} \simeq
\frac{3 g_{\nu_a} + g_{\nu_s}}{g_{\nu_a}} \,
(1.125 \cdot 0.861)^4 \simeq 3.568.\end{aligned}$$ The next steps are the decoupling of sterile neutrinos and active neutrinos, and then the freeze-out of sterile neutrino self-interactions. Since the number densities and energy densities of the different species do not change during these decouplings, the total effective number of relativistic degrees of freedom at the CMB epoch is given by $$\begin{aligned}
N_\text{eff,CMB} &=
N_\text{eff,$M$} \bigg( \frac{3}{4}
+ \frac{1}{4} \frac{P_{m_s=\text{1\,eV}}}{P_{m_s=0}}
\bigg|_\text{CMB}
\bigg)
\simeq 3.21 \,,\end{aligned}$$ where again the pressure characterizes the contribution of the semi-relativistic $\nu_4$ to the radiation density in the Universe. Similarly, at matter-radiation equality we have $$\begin{aligned}
N_\text{eff,eq} & \simeq 3.43 \,.\end{aligned}$$ This number is still within the $2\sigma$ error of the Planck bound [@Planck:2015xua].
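The chain of numbers in this subsection can be cross-checked with a few lines of Python (an illustrative sketch; small differences in the last digit come from rounding):

```python
g_nu_a = g_nu_s = 2 * 7 / 8
g_Ap = 3
xi_B = (10.75 / 106.7) ** (1 / 3)    # = 0.465

# Energy conservation when nu_a, nu_s, and A' reach a common temperature:
T_re = ((3 * g_nu_a + (g_nu_s + g_Ap) * xi_B ** 4)
        / (3 * g_nu_a + g_nu_s + g_Ap)) ** 0.25            # ~0.861 (units of T_nu^SM)

# Entropy conservation during A' decay reheats the neutrino bath:
reheat = ((3 * g_nu_a + g_nu_s + g_Ap) / (3 * g_nu_a + g_nu_s)) ** (1 / 3)  # ~1.125

N_eff_M = (3.045 + 1) * (reheat * T_re) ** 4               # ~3.56, vs. 3.568 in the text

print(round(T_re, 3), round(reheat, 3), round(N_eff_M, 2))
```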
Structure Formation {#sec:LSS}
===================
Besides the constraints on extra radiation measured by $N_\text{eff}$, CMB data also prefer that most of the active (massless) neutrinos start to free-stream before redshift $z \sim 10^5$ [@Archidiacono:2013dua; @Cyr-Racine:2013jua]. On the other hand, matter power spectrum observations forbid these free-streaming degrees of freedom from carrying so much energy as to suppress small scale structures [@Lesgourgues:2012uu]. Therefore, measurements of the matter power spectrum put the most stringent upper bound on the mass of all fully thermalized neutrino species: $\sum m_\nu \lesssim 0.2$–$0.7$ eV (95% C.L.) [@Planck:2015xua]. This constraint [@Mirizzi:2014ama] excludes a large part of the parameter region for self-interacting sterile neutrinos considered in [@Dasgupta:2013zpn]. However, like the constraint on $N_\text{eff}$ discussed in Sec. \[sec:Neff\], it is avoided if $\Gamma_s$ never exceeds $H$ after the epoch when $V_\text{eff}$ drops below the oscillation frequency (cases A1 and B1 in Fig. \[fig:chart\]).
Interestingly, structure formation constraints are *also* significantly relaxed when the $U(1)_s$ gauge coupling $e_s$ is large and/or the gauge boson mass $M$ is small. In this case, sterile neutrinos, although produced abundantly through collisional production (see Sec. \[sec:setup\]), cannot free-stream until late times, long after matter-radiation equality. Thus, their influence on structure formation is significantly reduced. We will now discuss this observation in more detail.
After the active and sterile neutrinos have equilibrated through $A'$-mediated collisions, they should be treated as an incoherent mixture of the four mass eigenstates $\nu_i$ ($i = 1\dots4$). The reason is that for $m_4 \sim 1$ eV, their oscillation time scales are much smaller than both the Hubble time and the time interval between scatterings. For simplicity, assume that only the mostly sterile mass eigenstate $\nu_4$ is massive with mass $m_4 \simeq 1$ eV, and that it only mixes appreciably with one of the mostly active mass eigenstates, say $\nu_1$: $$\begin{aligned}
\nu_s \simeq \sin\theta_0 \,\nu_1 + \cos \theta_0 \,\nu_4 \,.\end{aligned}$$ We take the vacuum mixing angle to be $\theta_0 \simeq 0.1$ and we take into account that matter effects are negligible at temperatures relevant for structure formation (after matter–radiation equality). Since it is the flavor eigenstate $\nu_s$ that is charged under $U(1)_s$, the mass eigenstates $\nu_1$ and $\nu_4$ interact with relative rates $\sin^2\theta_0$ and $\cos^2\theta_0$, respectively, while $\nu_2$ and $\nu_3$ essentially free-stream.
To study the influence of the secret interaction on structure formation, we estimate the mean comoving distance $\lambda_s$ that each $\nu_4$ can travel in the early Universe. Since neutrinos can transport energy efficiently over scales smaller than $\lambda_s$, the matter power spectrum will be suppressed on these scales. As long as neutrinos are collisional, they do not free stream, but diffuse over scales of order [@Kolb:1988aj] $$\begin{aligned}
(\lambda_s^\text{coll})^2 \simeq \int_0^{t_s^\text{dec}} \! dt \,
\frac{{\ensuremath{\langle v_s \rangle}}^2}{a^2(t)} \, \frac{1}{n_s\,{\ensuremath{\langle \sigma v \rangle}}_s} \,,
\label{eq:lambda-Silk}\end{aligned}$$ where $a(t)$ is the scale factor of the Universe, $t_s^\text{dec}$ is the time at which sterile neutrino self-interactions decouple, $$\begin{aligned}
{\ensuremath{\langle \sigma v \rangle}}_s \sim {\ensuremath{\langle v_s \rangle}}
\frac{e_s^4 \cos^2\theta_0}{(M^2 + T_s^2)^2} \, (T_s + m_s)^2
\label{eq:sigma-v}\end{aligned}$$ is the thermally averaged interaction cross section of the mostly sterile mass eigenstate $\nu_4$, estimated here by naïve dimensional analysis, and $n_s$ is the number density of sterile neutrinos. For simplicity, we take the kinetic temperature $T_s$ of the sterile sector equal to the active neutrino temperature in this section, i.e.$T_s = T_\nu^\text{SM} = (4/11)^{1/3} T_\gamma \propto a^{-1}(t)$, as long as $\nu_4$ are relativistic. If sterile neutrinos become non-relativistic ($T_s <
m_s$) while they are still strongly self-coupled, the kinetic temperature of the sterile sector scales as $T_s \propto a^{-2}(t)$ until $T_s$ drops below $T_{s,\text{dec}}$. After that, the sterile neutrino momenta are simply redshifted proportional to $a^{-1}(t)$. This implies in particular that, at $T_s \gg m_s$, we have $n_s \propto T_s^3$, while after $\nu_4$ become non-relativistic, but are still strongly coupled, this changes to $n_s \propto T_s^{3/2}$. The computation of the average velocity ${\ensuremath{\langle v_s \rangle}}$ of $\nu_4$ entering eq. is discussed in Appendix \[sec:Tkin\].
The decoupling temperature $T_{s,\text{dec}}$ and the corresponding time $t_s^\text{dec}$ are defined by the condition that the sterile neutrino interaction rate is just equal to the Hubble rate: $$\begin{aligned}
n_s \, {\ensuremath{\langle \sigma v \rangle}}_s \big|_{t=t^\text{dec}}
= H(t^\text{dec}) \,.
\label{eq:fs-cond}\end{aligned}$$ After $t^\text{dec}$, sterile neutrinos start to free stream. The total comoving distance that a $\nu_4$ travels between the time $t^\text{dec}$ and the present epoch $t^0$ is [@Kolb:1988aj] $$\begin{aligned}
\lambda^\text{fs}_s
= \int_{t^\text{dec}}^{t_0} \! dt \, \frac{{\ensuremath{\langle v_s(t) \rangle}}}{a(t)} \,.
\label{eq:lambda}\end{aligned}$$ The overall damping scale is then given by $$\begin{aligned}
\lambda_s^2 = (\lambda_s^\text{coll})^2 + (\lambda^\text{fs}_s)^2 \,.\end{aligned}$$ At scales larger than $\lambda_s$, structure formation is unaffected by the existence of sterile neutrinos, while at smaller scales, structures are washed out.
------------------------------------------------- ----------------------------------------------------
![image](Pk-SDSS-LRG){width="0.95\columnwidth"} ![image](Pk-Lyman-alpha){width="0.95\columnwidth"}
(a) (b)
------------------------------------------------- ----------------------------------------------------
As a numerical example, for $M = 0.1\,\text{MeV}$, $e_s = 0.1$, we find $\lambda_s^\text{coll} \simeq 29~\text{Mpc}/h$, $\lambda^\text{fs}_s \simeq 68~\text{Mpc}/h$ and thus $$\begin{aligned}
\lambda_s \simeq 74~\text{Mpc}/h \,,\end{aligned}$$ corresponding to a wave number of $$\begin{aligned}
k_s \equiv 2 \pi / \lambda_s \simeq 0.085~h/\text{Mpc} \,.
\label{eq:ks}\end{aligned}$$ This should be compared to the free streaming scale of a decoupled sterile neutrino with a mass $\lesssim 1$ eV, $$\begin{aligned}
k_s^\text{no self-int.} \simeq 0.018 \, \sqrt{m\over \text{eV}} \,h/\text{Mpc} \,.\end{aligned}$$ This factor of $\sim 5$ decrease in the free streaming scale compared to a conventional sterile neutrino without self-interactions implies that data on large scale structure (LSS) and baryon acoustic oscillations (BAO) will be in much better agreement with our model than with sterile neutrino models that do not feature self-interactions. The strongest constraints will come from data probing very small scales, in particular Lyman-$\alpha$ forests.
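As a quick arithmetic cross-check of these numbers (a sketch; the two damping lengths are the values quoted above, combined in quadrature as in the definition of $\lambda_s$):

```python
import math

# Collisional (diffusion) and free-streaming contributions quoted in the
# text, in Mpc/h, for M = 0.1 MeV, e_s = 0.1:
lam_coll = 29.0
lam_fs = 68.0

# The two damping lengths add in quadrature.
lam_s = math.sqrt(lam_coll**2 + lam_fs**2)
print(round(lam_s))             # 74 (Mpc/h), as quoted

# Corresponding wave number k_s = 2*pi/lambda_s, in h/Mpc.
k_s = 2.0 * math.pi / lam_s
print(round(k_s, 3))            # 0.085 (h/Mpc), as quoted

# Compared to k_s ~ 0.018 h/Mpc for a non-interacting 1 eV sterile
# neutrino, the damping wave number grows by the factor ~5 quoted above.
print(round(k_s / 0.018, 1))    # ~4.7
```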
Even at scales $k > k_s$, the suppression of the matter power spectrum $P_M(|\vec{k}|)$ does not set in abruptly, but increases gradually. For non-interacting sterile neutrinos, numerical simulations show that the suppression saturates at $k \simeq 50\, k_s$. At even smaller scales (even larger $k$), the deviation from the prediction of standard cosmology is [@Lesgourgues:2006nd; @Lesgourgues:2012uu] $$\begin{aligned}
\frac{\delta P_M(|\vec{k}|)}{P_M(|\vec{k}|)} \simeq -8 f_\nu
\label{eq:DeltaP-lin}\end{aligned}$$ in the linear structure formation regime. Here, $f_\nu = 3 m_s \zeta(3) / (2\pi^2) \, T_s^3(t_0) \times 8\pi G / (3
H^2(t_0)) / \Omega_m \simeq 0.07$ is the ratio of the sterile neutrino mass density $\Omega_s$ to the total mass density $\Omega_m \simeq 0.3$ today. In the regime of non-linear structure formation, $\delta P_M(|\vec{k}|) / P_M(|\vec{k}|)$ is somewhat larger [@Lesgourgues:2006nd; @Lesgourgues:2012uu], but N-body simulations show that it decreases again at scales $k \gtrsim \text{few}~h /
\text{Mpc}$ [@Brandbyge:2008rv; @Rossi:2014wsa].
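The value $f_\nu \simeq 0.07$ quoted above can be reproduced from the familiar relation $\Omega h^2 \simeq m/(93.14\ \text{eV})$ for one fully thermalized neutrino species (a sketch; the value $h \simeq 0.7$ is an assumption, as the text does not fix it, and the precise denominator varies slightly in the literature):

```python
# Standard relation for one fully thermalized neutrino species:
# Omega * h^2 = m / (93.14 eV).  Assumed values: h = 0.7, Omega_m = 0.3.
m_s = 1.0        # sterile neutrino mass in eV
h = 0.7          # reduced Hubble constant (assumption)
omega_m = 0.3    # total matter density, as in the text

omega_s = m_s / 93.14 / h**2    # Omega_s
f_nu = omega_s / omega_m
print(round(f_nu, 2))           # ~0.07, as quoted in the text

# Maximal suppression of the matter power spectrum in the linear regime:
print(round(-8.0 * f_nu, 2))    # ~ -0.58
```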
It is, however, difficult to directly measure $P_M(|\vec{k}|)$ at these nonlinear scales. The most sensitive data sets are Lyman-$\alpha$ forests, from which the 1-dimensional *flux* power spectrum $P_F(k)$ of Lyman-$\alpha$ photons can be extracted. Translating $P_F(k)$ into a measurement of $P_M(|\vec{k}|)$ requires a determination of the bias $b(k)$, which is obtained from numerical simulations of structure formation that include the dynamics of the gas clouds in which Lyman-$\alpha$ photons from distant quasars are absorbed. For SM neutrinos, such simulations have been performed for instance in [@Rossi:2014wsa], and we can estimate from Fig. 13 of that paper that the maximal suppression of $P_F(k)$ is of order $$\begin{aligned}
\frac{\delta P_F(k)}{P_F(k)} \sim
-0.1 \times \bigg( \frac{\sum m_\nu}{1\ \text{eV}}\bigg) \,,
\label{eq:DeltaP-nonlin}\end{aligned}$$ where $\sum m_\nu$ is the sum of all neutrino masses. This estimate is crude but conservative, and ignores the fact that the maximal suppression is actually smaller at lower redshifts. The suppression of $P_F(k)$ described by Eq. is *smaller* than the suppression of $P_M(|\vec{k}|)$ from Eq. because of the nonlinear $k$-dependent relation between the matter power spectrum and the flux power spectrum (see for instance [@Croft:2000hs], especially Fig. 16 in that paper). Since no dedicated simulations are available for our self-interacting sterile neutrino model, we will assume in the following that $\delta P_F(k) / P_F(k)$ saturates at the value given by Eq. even when $\sum m_\nu$ is dominated by the sterile neutrino mass $m_s$. This amounts to assuming that the impact of secretly interacting sterile neutrinos on these small scales is *qualitatively* similar to that of active neutrinos. A more detailed treatment requires a dedicated simulation including these secretly interacting sterile neutrinos. Note that neutrino free-streaming after CMB decoupling may lead to less suppression than described in Eqs. and because perturbation modes well within the horizon have already grown significantly by that time. We will not include this effect in the following discussion to remain conservative.
We show the qualitative impact of self-interacting sterile neutrinos on large scale structure in Fig. \[fig:matter-power\]. Panel (a) compares theoretical predictions in models with and without sterile neutrinos to data on the three-dimensional matter power spectrum $P_M(|\vec{k}|)$ from the Sloan Digital Sky Survey (SDSS) Luminous Red Galaxy (LRG) catalog [@Tegmark:2006az]. Panel (b) compares to one-dimensional flux power spectra $\Delta^2(k) \equiv k
\, P_F(k) / \pi$ from Lyman-$\alpha$ forest data [@Viel:2013fqw]. SDSS-LRG data corresponds to a mean redshift of $z \simeq 0.35$, while Lyman-$\alpha$ data is split up according to redshift and reaches up to $z \simeq 5.4$. Note that the data in [@Viel:2013fqw] is presented as a function of the wave number $k_v$ in velocity space, measured in units of sec/km. The conversion to the wave number $k$ in coordinate space, measured in units of $h/\text{Mpc}$, is done according to the formula $k = k_v \, H(z) / (1+z)$, where $H(z)$ is the Hubble rate at redshift $z$. The theoretical predictions for the SM with vanishing neutrino mass (solid green curves in Fig. \[fig:matter-power\]) are taken from [@Tegmark:2006az] and [@Viel:2013fqw], respectively. Our (qualitative) predictions for sterile neutrino models with and without self-interactions are obtained in the following way: we start from the numerical prediction for the neutrino-induced suppression of the matter power spectrum from Ref. [@Lesgourgues:2012uu]. In particular, we use the curve corresponding to $f_\nu = 0.07$ from Fig. 7 in that paper. We then shift this curve such that the onset of the suppression coincides with our calculated damping scale $k_s$, and we rescale it such that the maximal suppression is $-8 f_\nu$ in Fig. \[fig:matter-power\] (a) (linear regime) and 10% in Fig. \[fig:matter-power\] (b) (nonlinear regime), see Eqs. and . We then multiply with the SM prediction to obtain the dotted green curves for sterile neutrinos without self-interactions and the red dashed curves for sterile neutrinos with self-interactions in Fig. \[fig:matter-power\]. We use $e_s = 0.1$, $M =
0.1$ MeV and $m_s = 1$ eV. Since we neglect a possible upturn of the power spectrum at $k \gtrsim 1~h / \text{Mpc}$ [@Brandbyge:2008rv], our estimates are very conservative.
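The conversion from velocity-space to coordinate-space wave numbers quoted above can be sketched as follows (assuming flat $\Lambda$CDM with $\Omega_m = 0.3$; since $H(z) = 100\,h\,E(z)$ km/s/Mpc, the factor $h$ cancels when $k$ is expressed in $h/\text{Mpc}$):

```python
import math

def k_coord(k_v, z, omega_m=0.3):
    """Convert a wave number k_v in velocity space (in s/km) to a comoving
    wave number in h/Mpc, via k = k_v * H(z) / (1 + z) as in the text.
    Assumes a flat LambdaCDM expansion history (an assumption here)."""
    E = math.sqrt(omega_m * (1.0 + z)**3 + (1.0 - omega_m))
    return k_v * 100.0 * E / (1.0 + z)   # 100 E(z) = H(z)/h in km/s/Mpc

# Example: k_v = 0.01 s/km at redshift z = 3
print(round(k_coord(0.01, 3.0), 2))  # ~1.12 h/Mpc
```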
From Fig. \[fig:matter-power\] (a), we observe that the suppression of the matter power spectrum at scales $\lesssim 0.2~h$/Mpc due to self-interacting sterile neutrinos is completely negligible, while a fully thermalized non-interacting sterile neutrino with the same mass leads to a clear suppression already at these scales. This implies that self-interacting sterile neutrinos with the parameters chosen here are not constrained by data on linear structure formation. Going to smaller scales or larger $k$ (Fig. \[fig:matter-power\] (b)), where non-linear effects become relevant, we see that both the sterile neutrino model with self-interactions and the one without lead to suppression, but the amount of suppression is reduced in the self-interacting case. It was shown in Ref. [@Viel:2013fqw] that the data disfavors suppression larger than 10% at $k=10 \,h/\text{Mpc}$. Self-interacting sterile neutrinos at the benchmark point shown in Fig. \[fig:matter-power\] appear to be marginally consistent with this constraint. It should be kept in mind, however, that our predictions are only qualitative. Therefore, only a detailed fit using simulations of non-linear structure formation that include sterile neutrino self-interactions could provide a conclusive assessment of the viability of such a scenario.
Let us finally discuss how the cosmological effects of the three active neutrinos are modified in the self-interacting sterile neutrino scenario. The dynamics of the mass eigenstates $\nu_2$ and $\nu_3$, which we assume not to mix with $\nu_4$, is the same as in standard cosmology: they start to free stream at redshift $z \gg 10^5$. $\nu_1$, however, starts to free stream later than a non-interacting neutrino, but earlier than $\nu_4$. The free-streaming condition for $\nu_1$ is, in analogy to Eq. , $$\begin{aligned}
(T_\nu^\text{SM})^3 \cdot (T_\nu^\text{SM})^2 \bigg({e_s^2 \over M^2}\bigg)^2
\sin^2\theta_0 \lesssim H \,.
\label{eq:fs-cond-nu-1}\end{aligned}$$ As shown in Ref. [@Cyr-Racine:2013jua], free-streaming of active neutrinos before redshift $z \sim 10^5$ is required to sufficiently suppress the acoustic peaks in the CMB power spectrum. The change from three to two truly free streaming neutrino species in our model will lead to minor modifications of the CMB power spectrum, but the analysis from [@Cyr-Racine:2013jua] suggests that these are unlikely to spoil the fit to CMB data, in particular since they may be compensated by changes in the best fit values of other cosmological parameters.
Note that Planck CMB data alone, without including large scale structure observations, imposes an upper limit on the mass of sterile neutrinos, which, for a fully thermalized species, is $m_s \lesssim 0.5$ eV at 95% C.L. [@Planck:2015xua]. A much weaker bound is expected if self-interactions among sterile neutrinos are so strong that they remain collisional until after the CMB epoch. In this case, the early Integrated Sachs-Wolfe (ISW) effect induced by $\nu_s$ perturbations at low multipole order ($50\le l \le 200$) will be suppressed [@Lesgourgues:2012uu]. Thus, the main effect on the CMB will come from the shift of matter–radiation equality, to which the sensitivity is, however, much weaker.
Discussion and conclusions {#sec:conclusions}
==========================
As we have seen in the previous section, there are two main scenarios in which self-interacting sterile neutrinos do not run into conflict with cosmological data:
[*(i)*]{} The $\nu_s$ production rate $\Gamma_s$ drops below the Hubble expansion rate $H$ before the effective potential $|V_\text{eff}|$ drops below the oscillation frequency $|\Delta m^2/(2E)|$ and the dynamic suppression of active–sterile mixing due to $V_\text{eff}$ ends. In this case, sterile neutrinos are not produced in significant numbers in the early Universe and hence cosmology is not sensitive to their existence. An explanation of small scale structure anomalies as advocated in [@Dasgupta:2013zpn] is, however, not possible in this scenario.[^6] In particular, even if the new interaction also couples to dark matter as proposed in [@Dasgupta:2013zpn], it is too weak to have phenomenological consequences.
This disadvantage can be avoided if more than one self-interacting sterile neutrino exists. Consider for example, a model with three mostly sterile neutrino mass eigenstates $\nu_4$, $\nu_5$, $\nu_6$. Let $\nu_4$ and $\nu_5$ have a relatively large mixing $\theta_0 \sim 0.1$ with active neutrinos, as motivated for instance by the short baseline oscillation anomalies. Let their coupling to the $A'$ gauge boson be $e_s^{(4,5)} \simeq 10^{-5}$, large enough to dynamically suppress their mixing with the mostly active mass eigenstates until after BBN, but small enough to prevent their equilibration afterwards. On the other hand, let $\nu_6$ have a vanishing mixing with $\nu_{1,2,3}$, but a larger secret gauge coupling $e_s^{(6)} \simeq
0.1$. Due to its small mixing, it is never produced through oscillations. However, its primordial population—the relic density produced before the visible and sterile sectors decoupled in the very early Universe—still acts as a thermal bath to which the dark matter may be strongly coupled, thus potentially solving the missing satellites problem [@Klypin:1999uc; @Boehm:2000gq; @Bringmann:2006mu; @Aarssen:2012fx; @Shoemaker:2013tda].
[*(ii)*]{} The self-interactions are so strong that sterile neutrinos remain collisional at least until matter–radiation equality. In this scenario, sterile neutrinos are produced when $|V_\text{eff}| \leq |\Delta m^2/(2E)|$. However, as shown in Sec. \[sec:Neff\], the effective number of relativistic degrees of freedom in the Universe, $N_\text{eff}$, remains close to 3 because equilibration between active and sterile neutrinos happens after neutrinos have decoupled from the photon bath. Moreover, as argued in Sec. \[sec:LSS\], the impact of self-interacting sterile neutrinos on structure formation is much smaller in this scenario than that of a conventional non-interacting sterile neutrino because they cannot transport energy efficiently over large distances due to their reduced free-streaming. Structure formation constraints could be further relaxed in models containing, besides an eV-scale mass eigenstate $\nu_4$, one or more additional mostly sterile states with much lower masses [@Tang:2014yla]. It is intriguing that the parameter region corresponding to scenario [*(ii)*]{} contains the region where small scale structure anomalies can be explained, as shown in [@Dasgupta:2013zpn].
We summarize these results in Fig. \[fig:paramspace\]. The yellow cross-hatched region on the right is ruled out because active and sterile neutrinos come into thermal equilibrium before active neutrinos decouple from the SM plasma. In the lower part of this region, this happens simply because $V_\text{eff}$ is negligibly small. In the upper part, $V_\text{eff}$ is large, but also $\Gamma_s$ is large so that collisional production of sterile neutrinos is efficient in spite of the suppressed in-medium mixing $\theta_m$. This leads to constraints from $N_\text{eff}$ and from the light element abundances in BBN [@Saviano:2014esa]. In the blue vertically hatched region, sterile neutrinos are produced after $\nu_a$ decoupling, so that CMB constraints on $N_\text{eff}$ remain satisfied. However, sterile neutrinos free-stream early on in this region and violate the CMB and structure formation constraints on their mass. This mass constraint can be considerably relaxed if the sterile neutrinos remain collisional until *after* the CMB epoch at $T_\gamma \simeq 0.3$ eV. This defines the upper edge of the blue hatched region. In the red shaded region in the upper left corner, the secret interaction is too strong and $\nu_1$ free streams too late. CMB data requires that active neutrinos free stream early enough, and thus strongly disfavors this region. Two white regions remain allowed: Scenario $(i)$, with weak self-interactions, corresponds to the wedge-shaped white region in the lower part of the plot. Scenario $(ii)$, with strong self-interactions, is realized in the thin white band between the blue vertically hatched region and the red shaded region. As explained above, whether or not this white band is allowed depends strongly on systematic uncertainties at Lyman-$\alpha$ scales and on the possible existence of additional states with masses $\ll 1$ eV.
![Schematic illustration of the parameter space for eV-scale sterile neutrinos coupled to a new “secret” gauge boson with mass $M$ and a secret fine structure constant $\alpha_s$. The vacuum mixing angle between active and sterile neutrinos was taken to be $\theta_0 = 0.1$. The white region in the lower half of the plot is allowed by all constraints, while the narrow white band in the upper left part satisfies all constraints except possibly large scale structure (LSS) limits from Lyman-$\alpha$ data at the smallest scales. The red stars show representative models in scenarios [*(i)*]{} and [*(ii)*]{}. The colored regions are excluded, either by LSS observations (blue vertically hatched), by the requirement that active neutrinos should free stream early enough (red shaded), or by a combination of CMB and BBN data (yellow cross-hatched).[]{data-label="fig:paramspace"}](paramspace.pdf){width="0.9\columnwidth"}
There are several important caveats and limitations to the above analysis. First, we have only worked with thermal averages for the parameters characterizing each particle species, such as energy, velocity, pressure, etc. To obtain more accurate predictions, it would be necessary to solve momentum-dependent quantum kinetic equations. This would be in particular interesting in the temperature regions where $V_\text{eff}$ changes sign and where $V_\text{eff} \sim \Delta m^2 / (2 T_{\nu_a}) \times \cos 2\theta_0$. We expect that our modeling of flavor conversions in this region as fully non-adiabatic transitions is accurate, but this assumption remains to be checked explicitly. Moreover, the impact of self-interacting sterile neutrinos on non-linear structure formation at the smallest scales probed by Lyman-$\alpha$ data should be calculated more carefully. Improving these issues is left for future work.
In conclusion, we have argued in this paper that self-interacting sterile neutrinos remain a cosmologically viable extension of the Standard Model. As long as the self-interaction dynamically suppresses sterile neutrino production until neutrinos decouple from the photon bath, the abundance produced afterwards is not in conflict with constraints on $N_\text{eff}$. Moreover, if the self-interaction is either weak enough for scattering to be negligible after the dynamic mixing suppression is lifted, or strong enough to delay free streaming of sterile neutrinos until sufficiently late times, also structure formation constraints can be avoided or significantly relaxed.
Acknowledgments {#acknowledgments .unnumbered}
===============
We are grateful to Vid Irsic, Gianpiero Mangano, Alessandro Mirizzi and Ninetta Saviano for very useful discussions. Moreover, it is a pleasure to thank Matteo Viel for providing the Lyman-$\alpha$ data underlying Fig. \[fig:matter-power\] (b) in machine-readable form and for discussing it with us.
Kinetic temperature and pressure {#sec:Tkin}
================================
In the following, we give more details on the momentum distribution function $f(p,t)$ of sterile neutrinos $\nu_s$ after they have decoupled from all other particle species. $f(p,t)$ is essential in the calculation of the pressure $P$ in Sec. \[sec:Neff\] and the average velocity ${\ensuremath{\langle v_s \rangle}}$ in Sec. \[sec:LSS\].
Even when $\nu_s$ are decoupled from other particles, they may still couple to themselves via strong self-interactions. If the self-interaction conserves the number of particles, such as $\nu_s \nu_s \leftrightarrow \nu_s \nu_s$, it only maintains kinetic equilibrium, but not chemical equilibrium. Number conservation and entropy maximization force the $\nu_s$ momentum distribution function in kinetic equilibrium to take the form $$\begin{aligned}
f(p, t) = \frac{1}{e^{[E(p) - \mu_s(t)]/T_{s}(t)} + 1} \,,
\label{kinetical:MDF}\end{aligned}$$ where $T_{s}(t)$ is defined as the *kinetic temperature*, $\mu_s(t)$ is the chemical potential, and $E(p) = (p^2 + m_s^2)^{1/2}$. Here and in the following, we use the definition $p \equiv |\vec{p}|$. Since we are interested in the evolution at relatively late times, when the sterile neutrino density is low compared to the density of a degenerate fermion gas and thus ${\ensuremath{\langle f(p,t) \rangle}} \ll 1$, the classical approximation $$\begin{aligned}
f(p,t) \simeq e^{-[E(p) - \mu_s(t)]/T_{s}(t)}\end{aligned}$$ is adequate.
Our goal is to solve for the functions $T_{s}(t)$ and $\mu_s(t)$ with the initial condition of a relativistic thermal ensemble of sterile neutrinos. This means that, initially, $T_s = T_i \gg m_s$ and $\mu_s = 0$ at $a=a_i$. Note that the sterile neutrino mass will lead to a non-zero $\mu_s$ soon after neutrinos go out of chemical equilibrium. Although it is difficult to solve the corresponding Boltzmann equation analytically, there are two conditions that can be used to numerically obtain $T_{s}$ and $\mu_s$ as functions of the scale factor $a(t)$. One is number conservation. The other is entropy conservation, which holds approximately for kinetic equilibrium in the classical limit [@Bernstein:1988bw].
The number density is $$\begin{aligned}
n_s(t) &= \int \! \frac{d^3 p}{(2\pi)^3} \, f(p, t)\end{aligned}$$ and the classical entropy density is defined as $$\begin{aligned}
s_s(t) \equiv \int \! \frac{d^3 p}{(2\pi)^3} \, f(p,t) \, [1 - \ln f(p,t)] \,.\end{aligned}$$ It is straightforward to obtain the asymptotic solutions [@Bernstein:1988bw] $$\begin{aligned}
T_{s}(t) &\propto
\begin{cases}
a^{-1}(t) & \text{for $T_s \gg m_s$} \\[0.5em]
a^{-2}(t) & \text{for $T_s \ll m_s$}
\end{cases} \, \\
\intertext{and}
\mu_s(t) &\propto
\begin{cases}
a(t) & \text{for $T_s \gg m_s$} \\[0.5em]
\text{const} & \text{for $T_s \ll m_s$}
\end{cases} \,.\end{aligned}$$ In the transition region $T_s \sim m_s$, the solution needs to be obtained numerically. The result is plotted in Fig. \[fig:pressure\].
![Kinetic temperature $T_s$ and chemical potential $\mu_s$ of sterile neutrinos, as functions of the scale factor $a(t)$ during the transition from the relativistic regime to the non-relativistic regime, with initial conditions $T_s = T_i \gg m_s$ and $\mu_s = 0$ at $a=a_i$.[]{data-label="fig:pressure"}](TEvolution.pdf){width="0.85\columnwidth"}
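A minimal numerical implementation of this procedure can be sketched as follows (illustrative parameter values in units where $m_s = 1$; this is a sketch, not the code used for Fig. \[fig:pressure\]). For a trial $T_s$, number conservation fixes $\mu_s$ in closed form; entropy conservation is then solved for $T_s$ by bisection, which is safe because the entropy per particle grows monotonically with $T_s$ at fixed density. The asymptotic scalings quoted above come out automatically:

```python
import math

M_S = 1.0  # sterile neutrino mass in illustrative units (hbar = c = 1)

def moments(T, n_pts=3000):
    """c_n = int dp p^2 e^{-(E-m)/T} and c_rho = int dp p^2 E e^{-(E-m)/T}
    (trapezoid rule; the factor e^{-m/T} is pulled out so nothing
    underflows at T << m)."""
    pmax = 20.0 * math.sqrt(T * (T + M_S)) + 30.0 * T
    dp = pmax / n_pts
    c_n = c_rho = 0.0
    for i in range(1, n_pts + 1):
        p = i * dp
        E = math.sqrt(p * p + M_S * M_S)
        w = p * p * math.exp(-(E - M_S) / T) * dp
        c_n += w
        c_rho += w * E
    return c_n, c_rho

def entropy_per_particle(T, a, N_com):
    """s/n = 1 - mu/T + rho/(n T) for a classical gas, with mu fixed by
    number conservation n(T, mu) a^3 = N_com."""
    c_n, c_rho = moments(T)
    x = math.log(N_com / a**3 * 2.0 * math.pi**2 / c_n)  # x = (mu - m)/T
    return 1.0 - (x + M_S / T) + (c_rho / c_n) / T

def solve_T(a, N_com, sigma_target):
    """Entropy conservation fixes T at scale factor a; s/n grows
    monotonically with T at fixed comoving number, so bisect in log T."""
    lo, hi = 1e-8, 1e4
    for _ in range(80):
        mid = math.sqrt(lo * hi)
        if entropy_per_particle(mid, a, N_com) > sigma_target:
            hi = mid
        else:
            lo = mid
    return math.sqrt(lo * hi)

# Initial condition: relativistic thermal ensemble, T_i >> m_s, mu_s = 0.
T_i, a_i = 100.0, 1.0
c_n, c_rho = moments(T_i)
N_com = math.exp(-M_S / T_i) * c_n / (2.0 * math.pi**2) * a_i**3
sigma_i = 1.0 + (c_rho / c_n) / T_i          # s/n at mu = 0

r_rel = solve_T(2.0, N_com, sigma_i) / T_i   # relativistic regime
print(round(r_rel, 3))                       # ~0.5, i.e. T_s ~ 1/a
T_a = solve_T(2000.0, N_com, sigma_i)        # deep non-relativistic regime
T_2a = solve_T(4000.0, N_com, sigma_i)
print(round(T_2a / T_a, 3))                  # ~0.25, i.e. T_s ~ 1/a^2
```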
Finally, we comment on the calculation of the pressure $P$ and the average velocity ${\ensuremath{\langle v_s \rangle}}$ of sterile neutrinos. The pressure is given by [@Bernstein:1988bw] $$\begin{aligned}
P &\equiv \int \! \frac{d^3 p}{(2\pi)^3} \, \frac{p^2}{3E} \, f(p,t)
\nonumber\\[1ex]
&= -T_s \, e^{\mu_s/T_s} \int \! \frac{d^3 p}{(2\pi)^3} \, \frac{p}{3} \, \frac{d}{dp}
e^{-E(p)/T_{s}}
\nonumber\\[1ex]
&= T_{s} \cdot n_s \,,\end{aligned}$$ and the average velocity is $$\begin{aligned}
{\ensuremath{\langle v_s \rangle}} \simeq
\frac{1}{N} \int\!\frac{d^3 p}{(2\pi)^3} \frac{p}{E(p)} \, f(p, t) \,.\end{aligned}$$ Here, $N \equiv \int\!dp\,4\pi p^2 / (2\pi)^3 \times f(p, t)$ is a normalization factor. Besides the conditions of kinetic equilibrium discussed above, we also need to take into account that sterile neutrino self-interactions freeze out at a time $t^\text{dec}$ and sterile sector temperature $T_s = T_{s,\text{dec}}$, after which kinetic equilibrium is lost and sterile neutrino momenta are simply redshifted as $a^{-1}(t)$. This implies for the momentum distribution function:
$$\begin{aligned}
f(p, t) = \!
\begin{cases}
\frac{1}{\exp\!\big[
\frac{1}{T_s(t)} \big( \sqrt{p^2 + m_s^2} - \mu_s(t) \big) \big] + 1}
&\text{for $t < t^\text{dec}$} \\[0.4cm]
\frac{1}{\exp\!\big[
\frac{1}{T_{s,\text{dec}}} \big( \sqrt{\frac{p^2 a^2(t)}{a^2(t^\text{dec})}
+ m_s^2} - \mu_s(t^\text{dec}) \big) \big] + 1}
&\text{for $t > t^\text{dec}$}
\end{cases} \,.\end{aligned}$$
Here, $\mu_s(t^\text{dec})$ is the chemical potential at the time of decoupling. We have checked that the exact decoupling time only slightly changes the evolution of $P$, so we regard our solution in Fig. \[fig:pressure\] as universal for all parameter values of interest. In Sec. \[sec:LSS\], however, we have for simplicity assumed a sudden decoupling of self-interactions. For the value $e_s^2/M^2 \simeq
1$ MeV$^{-2}$ chosen there, this leads to $T_{s,\text{dec}} \sim 0.0024$ eV, corresponding to a photon temperature of 0.038 eV.
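Note that the intermediate steps above show that $P = T_s \cdot n_s$ is just the classical ideal-gas law, valid for any mass and chemical potential. This can be checked numerically (a sketch in arbitrary units):

```python
import math

def classical_moments(m, T, mu, n_pts=100000):
    """Number density and pressure of a classical (Maxwell-Boltzmann) gas,
    n = int d^3p/(2pi)^3 f  and  P = int d^3p/(2pi)^3 p^2/(3E) f,
    with f = exp(-(E - mu)/T), evaluated by the trapezoid rule."""
    pmax = 30.0 * (T + math.sqrt(m * T))
    dp = pmax / n_pts
    n = P = 0.0
    for i in range(1, n_pts + 1):
        p = i * dp
        E = math.sqrt(p * p + m * m)
        f = math.exp(-(E - mu) / T)
        n += p * p * f * dp / (2.0 * math.pi**2)
        P += p * p * (p * p / (3.0 * E)) * f * dp / (2.0 * math.pi**2)
    return n, P

# Any mass, temperature and chemical potential should give P = T * n:
for m, T, mu in [(0.0, 1.0, 0.0), (1.0, 0.5, -0.3), (5.0, 0.2, 0.1)]:
    n, P = classical_moments(m, T, mu)
    print(round(P / (T * n), 4))  # ~1.0 in all cases
```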
[^1]: Since $\nu_s$ is not a mass eigenstate, $m_s$ actually means the mass of the fourth, mostly sterile, mass eigenstate.
[^2]: We ignore the SM matter potential and scattering experienced by active neutrinos because we will be interested in the regime where the secret interaction dominates over the SM interaction.
[^3]: There is also the possibility that Mikheyev-Smirnov-Wolfenstein (MSW) type resonant effects, e.g., because of the sign-flip of the secret potential $V_\text{eff}$ around $T_s \simeq M$, modify the $\nu_s$ production probability. In this work we treat all MSW transitions to be completely non-adiabatic and thus ignore them. A careful momentum-dependent treatment, which we defer to future work, is needed to accurately describe resonant conversion.
[^4]: Note that in complete models, for instance in scenarios including a dark Higgs sector to break the $U(1)_s$ symmetry, more degrees of freedom may need to be taken into account in the above equations.
[^5]: In this paper, we will always assume this to be the case since we will find that the parameter region with $e_s^2/M^2 \ge {\cal
O}(10\ \text{MeV}^{-2})$ is already disfavored by the requirement that active neutrinos should free stream sufficiently early [@Cyr-Racine:2013jua] (see Secs. \[sec:LSS\] and \[sec:conclusions\]). If $\nu_s$ and $\nu_a$ are still coupled when the $\nu_s$ become non-relativistic, the mostly sterile mass eigenstate $\nu_4$ will undergo a non-relativistic freeze-out and partly annihilate to pairs of mostly active neutrinos. Similarly, there is the possibility that the $A'$ decay after the decoupling, but this does not happen for the range of parameters we will discuss here.
[^6]: Note that recent simulations of cosmological structure formation suggest that these anomalies may be resolved once baryons are included in the simulations [@Vogelsberger:2014kha; @Sawala:2014xka].
|
Manitoba CBC Investigates
Federal pilots union surprised Air Transat given weeks to fix 'major' safety system problems in 2015
A report, recently released under Access to Information, says maintenance checks on Boeing 737s were missed
A recently released Transport Canada report from 2015 shows inspectors found 22 safety problems at Air Transat. Of those, 14 were classified as 'major,' which means a 'system-wide failure' was evident.
When Transport Canada audited Air Transat's safety system in 2015 and found 22 problems, including missed maintenance checks on Boeing 737s and workers without proper training, the airline was given up to a month to come up with some of the necessary fixes.
Those deadlines, outlined in a three-year-old report recently released under Access to Information, surprised Mark Laurence, national chair of the Canadian Federal Pilots Association, which represents the 450 pilots who conduct inspections and safety analyses for Transport Canada, the Transportation Safety Board and Nav Canada.
Based on the litany of problems described, he says he would have expected Air Transat to receive a notice of suspension with a time limit attached "to push the air operator to fix the issues."
"Either they are major findings or they aren't. From the response, it looks like they aren't."
Transport Canada considers a problem to be "major" when procedures have not been established or followed and a system-wide failure is evident. Corrections for major findings are often more difficult and involved, the report says.
Depending on the nature of the problem, Transport Canada gave Air Transat between just over a week and one month to come up with a plan to fix it.
'Enforcement action'
CBC News put Laurence's concerns about the deadlines to Transport Canada. In a statement, the department said compliance is monitored and "if the operator does not work to address identified safety concerns, Transport Canada does not hesitate to take enforcement action."
Enforcement action can include verbal counselling, monetary penalties, and, in some cases, suspending a company's air operator certificate.
Mark Laurence worked for Transport Canada for nearly two decades before he became the national chairperson of the Canadian Federal Pilots Association. The union's members include the 450 pilots who conduct inspections and safety analyses for Transport Canada, the Transportation Safety Board and Nav Canada. (Brian Morris/CBC News)
Included with the report was a letter from Transport Canada that says all its findings were satisfactorily addressed. "Air Transat takes significant efforts to maintain strong safety records," it says.
In a statement sent to CBC, Air Transat says none of the findings in the 2015 report ever compromised the safety of its operations, and that all of the required corrections were made "swiftly" and the airline passed two subsequent inspections in 2016 and 2018.
Air Transat's 450 pilots and 1,700 flight attendants, as well as its fleet of 33 aircraft, were put under the microscope with on-site reviews between Feb. 16 and 27, 2015, according to the report.
It says Air Transat was given a month to submit a plan of action after inspectors found hangar employees lacked understanding of how to communicate safety information.
Another "major" finding was that some maintenance checks on Air Transat's Boeing 737s were being missed. Inspectors had discovered four different maintenance schedules for the planes. The airline was given nine days to come up with a plan to solve the problem.
Air Transat also confirmed to inspectors that 41 out of 42 contracted employees performing maintenance work in Montreal and Toronto did not meet its own training program requirements.
A flight attendant manager told Transport Canada she relied on social media — in addition to a Transport Canada training guide, an aviation conference and brainstorming sessions with a colleague — to figure out appropriate training.
The deadline to fix the different training shortcomings was two weeks.
Inspectors also discovered a service difficulty report was never submitted after corroded hinge pins were identified on a plane's rudder. These reports, which are required of all airlines, are to be submitted to Transport Canada, which keeps track of conditions that can adversely affect the airworthiness of planes. This finding was considered "moderate," which means a simple modification could correct the issue. Air Transat was given a month to submit its solution.
Airlines in charge of safety systems
In 2005, the federal government started shifting responsibility for safety oversight to the airline industry.
Canada's major airlines are now responsible for developing and following their own safety management systems (SMS). These are extensive playbooks of best practices covering everything from maintenance to operational safety and emergency protocols — all of which must be in line with Transport Canada regulations.
When mistakes occur, the airline is supposed to document them.
Transport Canada's role in the system is to conduct periodic reviews of the airline SMS documentation to make sure it complies with Canadian aviation rules.
The department and some aviation safety experts tout SMS programs as the global standard for aviation safety.
But critics of Canada's shift to SMS say it puts too much of the responsibility for safety in the hands of industry.
After reviewing the report about Air Transat, Elaine Parker, vice-president of Beyond Risk Management, a Calgary-based aviation safety consultancy, zeroed in on what she saw as the root cause of the airline's poor showing in the evaluation: A lack of organized systems and processes.
"Having all of your processes well written out, well followed and then well checked internally is what's got aviation from its beginnings to where it is today," she said.
"No one should be afraid of getting on an airplane ever and going anywhere in the world, because that's really an international standard."
/*
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1
import (
"k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// CrossVersionObjectReference contains enough information to let you identify the referred resource.
type CrossVersionObjectReference struct {
// Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
Kind string `json:"kind" protobuf:"bytes,1,opt,name=kind"`
// Name of the referent; More info: http://kubernetes.io/docs/user-guide/identifiers#names
Name string `json:"name" protobuf:"bytes,2,opt,name=name"`
// API version of the referent
// +optional
APIVersion string `json:"apiVersion,omitempty" protobuf:"bytes,3,opt,name=apiVersion"`
}
// specification of a horizontal pod autoscaler.
type HorizontalPodAutoscalerSpec struct {
// reference to scaled resource; horizontal pod autoscaler will learn the current resource consumption
// and will set the desired number of pods by using its Scale subresource.
ScaleTargetRef CrossVersionObjectReference `json:"scaleTargetRef" protobuf:"bytes,1,opt,name=scaleTargetRef"`
// lower limit for the number of pods that can be set by the autoscaler, default 1.
// +optional
MinReplicas *int32 `json:"minReplicas,omitempty" protobuf:"varint,2,opt,name=minReplicas"`
// upper limit for the number of pods that can be set by the autoscaler; cannot be smaller than MinReplicas.
MaxReplicas int32 `json:"maxReplicas" protobuf:"varint,3,opt,name=maxReplicas"`
// target average CPU utilization (represented as a percentage of requested CPU) over all the pods;
// if not specified the default autoscaling policy will be used.
// +optional
TargetCPUUtilizationPercentage *int32 `json:"targetCPUUtilizationPercentage,omitempty" protobuf:"varint,4,opt,name=targetCPUUtilizationPercentage"`
}
// current status of a horizontal pod autoscaler
type HorizontalPodAutoscalerStatus struct {
// most recent generation observed by this autoscaler.
// +optional
ObservedGeneration *int64 `json:"observedGeneration,omitempty" protobuf:"varint,1,opt,name=observedGeneration"`
// last time the HorizontalPodAutoscaler scaled the number of pods;
// used by the autoscaler to control how often the number of pods is changed.
// +optional
LastScaleTime *metav1.Time `json:"lastScaleTime,omitempty" protobuf:"bytes,2,opt,name=lastScaleTime"`
// current number of replicas of pods managed by this autoscaler.
CurrentReplicas int32 `json:"currentReplicas" protobuf:"varint,3,opt,name=currentReplicas"`
// desired number of replicas of pods managed by this autoscaler.
DesiredReplicas int32 `json:"desiredReplicas" protobuf:"varint,4,opt,name=desiredReplicas"`
// current average CPU utilization over all pods, represented as a percentage of requested CPU,
// e.g. 70 means that an average pod is using now 70% of its requested CPU.
// +optional
CurrentCPUUtilizationPercentage *int32 `json:"currentCPUUtilizationPercentage,omitempty" protobuf:"varint,5,opt,name=currentCPUUtilizationPercentage"`
}
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// configuration of a horizontal pod autoscaler.
type HorizontalPodAutoscaler struct {
metav1.TypeMeta `json:",inline"`
// Standard object metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// behaviour of autoscaler. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status.
// +optional
Spec HorizontalPodAutoscalerSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
// current information about the autoscaler.
// +optional
Status HorizontalPodAutoscalerStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"`
}
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// list of horizontal pod autoscaler objects.
type HorizontalPodAutoscalerList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// list of horizontal pod autoscaler objects.
Items []HorizontalPodAutoscaler `json:"items" protobuf:"bytes,2,rep,name=items"`
}
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// Scale represents a scaling request for a resource.
type Scale struct {
metav1.TypeMeta `json:",inline"`
// Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata.
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// defines the behavior of the scale. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status.
// +optional
Spec ScaleSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
// current status of the scale. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status. Read-only.
// +optional
Status ScaleStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"`
}
// ScaleSpec describes the attributes of a scale subresource.
type ScaleSpec struct {
// desired number of instances for the scaled object.
// +optional
Replicas int32 `json:"replicas,omitempty" protobuf:"varint,1,opt,name=replicas"`
}
// ScaleStatus represents the current status of a scale subresource.
type ScaleStatus struct {
// actual number of observed instances of the scaled object.
Replicas int32 `json:"replicas" protobuf:"varint,1,opt,name=replicas"`
// label query over pods that should match the replicas count. This is same
// as the label selector but in the string format to avoid introspection
// by clients. The string will be in the same format as the query-param syntax.
// More info about label selectors: http://kubernetes.io/docs/user-guide/labels#label-selectors
// +optional
Selector string `json:"selector,omitempty" protobuf:"bytes,2,opt,name=selector"`
}
// the types below are used in the alpha metrics annotation
// MetricSourceType indicates the type of metric.
type MetricSourceType string
var (
// ObjectMetricSourceType is a metric describing a kubernetes object
// (for example, hits-per-second on an Ingress object).
ObjectMetricSourceType MetricSourceType = "Object"
// PodsMetricSourceType is a metric describing each pod in the current scale
// target (for example, transactions-processed-per-second). The values
// will be averaged together before being compared to the target value.
PodsMetricSourceType MetricSourceType = "Pods"
// ResourceMetricSourceType is a resource metric known to Kubernetes, as
// specified in requests and limits, describing each pod in the current
// scale target (e.g. CPU or memory). Such metrics are built in to
// Kubernetes, and have special scaling options on top of those available
// to normal per-pod metrics (the "pods" source).
ResourceMetricSourceType MetricSourceType = "Resource"
// ExternalMetricSourceType is a global metric that is not associated
// with any Kubernetes object. It allows autoscaling based on information
// coming from components running outside of cluster
// (for example length of queue in cloud messaging service, or
// QPS from loadbalancer running outside of cluster).
ExternalMetricSourceType MetricSourceType = "External"
)
// MetricSpec specifies how to scale based on a single metric
// (only `type` and one other matching field should be set at once).
type MetricSpec struct {
// type is the type of metric source. It should be one of "Object",
// "Pods" or "Resource", each mapping to a matching field in the object.
Type MetricSourceType `json:"type" protobuf:"bytes,1,name=type"`
// object refers to a metric describing a single kubernetes object
// (for example, hits-per-second on an Ingress object).
// +optional
Object *ObjectMetricSource `json:"object,omitempty" protobuf:"bytes,2,opt,name=object"`
// pods refers to a metric describing each pod in the current scale target
// (for example, transactions-processed-per-second). The values will be
// averaged together before being compared to the target value.
// +optional
Pods *PodsMetricSource `json:"pods,omitempty" protobuf:"bytes,3,opt,name=pods"`
// resource refers to a resource metric (such as those specified in
// requests and limits) known to Kubernetes describing each pod in the
// current scale target (e.g. CPU or memory). Such metrics are built in to
// Kubernetes, and have special scaling options on top of those available
// to normal per-pod metrics using the "pods" source.
// +optional
Resource *ResourceMetricSource `json:"resource,omitempty" protobuf:"bytes,4,opt,name=resource"`
// external refers to a global metric that is not associated
// with any Kubernetes object. It allows autoscaling based on information
// coming from components running outside of cluster
// (for example length of queue in cloud messaging service, or
// QPS from loadbalancer running outside of cluster).
// +optional
External *ExternalMetricSource `json:"external,omitempty" protobuf:"bytes,5,opt,name=external"`
}
// ObjectMetricSource indicates how to scale on a metric describing a
// kubernetes object (for example, hits-per-second on an Ingress object).
type ObjectMetricSource struct {
// target is the described Kubernetes object.
Target CrossVersionObjectReference `json:"target" protobuf:"bytes,1,name=target"`
// metricName is the name of the metric in question.
MetricName string `json:"metricName" protobuf:"bytes,2,name=metricName"`
// targetValue is the target value of the metric (as a quantity).
TargetValue resource.Quantity `json:"targetValue" protobuf:"bytes,3,name=targetValue"`
// selector is the string-encoded form of a standard kubernetes label selector for the given metric.
// When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping
// When unset, just the metricName will be used to gather metrics.
// +optional
Selector *metav1.LabelSelector `json:"selector,omitempty" protobuf:"bytes,4,name=selector"`
// averageValue is the target value of the average of the
// metric across all relevant pods (as a quantity)
// +optional
AverageValue *resource.Quantity `json:"averageValue,omitempty" protobuf:"bytes,5,name=averageValue"`
}
// PodsMetricSource indicates how to scale on a metric describing each pod in
// the current scale target (for example, transactions-processed-per-second).
// The values will be averaged together before being compared to the target
// value.
type PodsMetricSource struct {
// metricName is the name of the metric in question
MetricName string `json:"metricName" protobuf:"bytes,1,name=metricName"`
// targetAverageValue is the target value of the average of the
// metric across all relevant pods (as a quantity)
TargetAverageValue resource.Quantity `json:"targetAverageValue" protobuf:"bytes,2,name=targetAverageValue"`
// selector is the string-encoded form of a standard kubernetes label selector for the given metric
// When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping
// When unset, just the metricName will be used to gather metrics.
// +optional
Selector *metav1.LabelSelector `json:"selector,omitempty" protobuf:"bytes,3,name=selector"`
}
// ResourceMetricSource indicates how to scale on a resource metric known to
// Kubernetes, as specified in requests and limits, describing each pod in the
// current scale target (e.g. CPU or memory). The values will be averaged
// together before being compared to the target. Such metrics are built in to
// Kubernetes, and have special scaling options on top of those available to
// normal per-pod metrics using the "pods" source. Only one "target" type
// should be set.
type ResourceMetricSource struct {
// name is the name of the resource in question.
Name v1.ResourceName `json:"name" protobuf:"bytes,1,name=name"`
// targetAverageUtilization is the target value of the average of the
// resource metric across all relevant pods, represented as a percentage of
// the requested value of the resource for the pods.
// +optional
TargetAverageUtilization *int32 `json:"targetAverageUtilization,omitempty" protobuf:"varint,2,opt,name=targetAverageUtilization"`
// targetAverageValue is the target value of the average of the
// resource metric across all relevant pods, as a raw value (instead of as
// a percentage of the request), similar to the "pods" metric source type.
// +optional
TargetAverageValue *resource.Quantity `json:"targetAverageValue,omitempty" protobuf:"bytes,3,opt,name=targetAverageValue"`
}
// ExternalMetricSource indicates how to scale on a metric not associated with
// any Kubernetes object (for example length of queue in cloud
// messaging service, or QPS from loadbalancer running outside of cluster).
type ExternalMetricSource struct {
// metricName is the name of the metric in question.
MetricName string `json:"metricName" protobuf:"bytes,1,name=metricName"`
// metricSelector is used to identify a specific time series
// within a given metric.
// +optional
MetricSelector *metav1.LabelSelector `json:"metricSelector,omitempty" protobuf:"bytes,2,opt,name=metricSelector"`
// targetValue is the target value of the metric (as a quantity).
// Mutually exclusive with TargetAverageValue.
// +optional
TargetValue *resource.Quantity `json:"targetValue,omitempty" protobuf:"bytes,3,opt,name=targetValue"`
// targetAverageValue is the target per-pod value of global metric (as a quantity).
// Mutually exclusive with TargetValue.
// +optional
TargetAverageValue *resource.Quantity `json:"targetAverageValue,omitempty" protobuf:"bytes,4,opt,name=targetAverageValue"`
}
// MetricStatus describes the last-read state of a single metric.
type MetricStatus struct {
// type is the type of metric source. It will be one of "Object",
// "Pods" or "Resource", each corresponds to a matching field in the object.
Type MetricSourceType `json:"type" protobuf:"bytes,1,name=type"`
// object refers to a metric describing a single kubernetes object
// (for example, hits-per-second on an Ingress object).
// +optional
Object *ObjectMetricStatus `json:"object,omitempty" protobuf:"bytes,2,opt,name=object"`
// pods refers to a metric describing each pod in the current scale target
// (for example, transactions-processed-per-second). The values will be
// averaged together before being compared to the target value.
// +optional
Pods *PodsMetricStatus `json:"pods,omitempty" protobuf:"bytes,3,opt,name=pods"`
// resource refers to a resource metric (such as those specified in
// requests and limits) known to Kubernetes describing each pod in the
// current scale target (e.g. CPU or memory). Such metrics are built in to
// Kubernetes, and have special scaling options on top of those available
// to normal per-pod metrics using the "pods" source.
// +optional
Resource *ResourceMetricStatus `json:"resource,omitempty" protobuf:"bytes,4,opt,name=resource"`
// external refers to a global metric that is not associated
// with any Kubernetes object. It allows autoscaling based on information
// coming from components running outside of cluster
// (for example length of queue in cloud messaging service, or
// QPS from loadbalancer running outside of cluster).
// +optional
External *ExternalMetricStatus `json:"external,omitempty" protobuf:"bytes,5,opt,name=external"`
}
// HorizontalPodAutoscalerConditionType are the valid conditions of
// a HorizontalPodAutoscaler.
type HorizontalPodAutoscalerConditionType string
var (
// ScalingActive indicates that the HPA controller is able to scale if necessary:
// it's correctly configured, can fetch the desired metrics, and isn't disabled.
ScalingActive HorizontalPodAutoscalerConditionType = "ScalingActive"
// AbleToScale indicates a lack of transient issues which prevent scaling from occurring,
// such as being in a backoff window, or being unable to access/update the target scale.
AbleToScale HorizontalPodAutoscalerConditionType = "AbleToScale"
// ScalingLimited indicates that the calculated scale based on metrics would be above or
// below the range for the HPA, and has thus been capped.
ScalingLimited HorizontalPodAutoscalerConditionType = "ScalingLimited"
)
// HorizontalPodAutoscalerCondition describes the state of
// a HorizontalPodAutoscaler at a certain point.
type HorizontalPodAutoscalerCondition struct {
// type describes the current condition
Type HorizontalPodAutoscalerConditionType `json:"type" protobuf:"bytes,1,name=type"`
// status is the status of the condition (True, False, Unknown)
Status v1.ConditionStatus `json:"status" protobuf:"bytes,2,name=status"`
// lastTransitionTime is the last time the condition transitioned from
// one status to another
// +optional
LastTransitionTime metav1.Time `json:"lastTransitionTime,omitempty" protobuf:"bytes,3,opt,name=lastTransitionTime"`
// reason is the reason for the condition's last transition.
// +optional
Reason string `json:"reason,omitempty" protobuf:"bytes,4,opt,name=reason"`
// message is a human-readable explanation containing details about
// the transition
// +optional
Message string `json:"message,omitempty" protobuf:"bytes,5,opt,name=message"`
}
// ObjectMetricStatus indicates the current value of a metric describing a
// kubernetes object (for example, hits-per-second on an Ingress object).
type ObjectMetricStatus struct {
// target is the described Kubernetes object.
Target CrossVersionObjectReference `json:"target" protobuf:"bytes,1,name=target"`
// metricName is the name of the metric in question.
MetricName string `json:"metricName" protobuf:"bytes,2,name=metricName"`
// currentValue is the current value of the metric (as a quantity).
CurrentValue resource.Quantity `json:"currentValue" protobuf:"bytes,3,name=currentValue"`
// selector is the string-encoded form of a standard kubernetes label selector for the given metric
// When set in the ObjectMetricSource, it is passed as an additional parameter to the metrics server for more specific metrics scoping.
// When unset, just the metricName will be used to gather metrics.
// +optional
Selector *metav1.LabelSelector `json:"selector,omitempty" protobuf:"bytes,4,name=selector"`
// averageValue is the current value of the average of the
// metric across all relevant pods (as a quantity)
// +optional
AverageValue *resource.Quantity `json:"averageValue,omitempty" protobuf:"bytes,5,name=averageValue"`
}
// PodsMetricStatus indicates the current value of a metric describing each pod in
// the current scale target (for example, transactions-processed-per-second).
type PodsMetricStatus struct {
// metricName is the name of the metric in question
MetricName string `json:"metricName" protobuf:"bytes,1,name=metricName"`
// currentAverageValue is the current value of the average of the
// metric across all relevant pods (as a quantity)
CurrentAverageValue resource.Quantity `json:"currentAverageValue" protobuf:"bytes,2,name=currentAverageValue"`
// selector is the string-encoded form of a standard kubernetes label selector for the given metric
// When set in the PodsMetricSource, it is passed as an additional parameter to the metrics server for more specific metrics scoping.
// When unset, just the metricName will be used to gather metrics.
// +optional
Selector *metav1.LabelSelector `json:"selector,omitempty" protobuf:"bytes,3,name=selector"`
}
// ResourceMetricStatus indicates the current value of a resource metric known to
// Kubernetes, as specified in requests and limits, describing each pod in the
// current scale target (e.g. CPU or memory). Such metrics are built in to
// Kubernetes, and have special scaling options on top of those available to
// normal per-pod metrics using the "pods" source.
type ResourceMetricStatus struct {
// name is the name of the resource in question.
Name v1.ResourceName `json:"name" protobuf:"bytes,1,name=name"`
// currentAverageUtilization is the current value of the average of the
// resource metric across all relevant pods, represented as a percentage of
// the requested value of the resource for the pods. It will only be
// present if `targetAverageValue` was set in the corresponding metric
// specification.
// +optional
CurrentAverageUtilization *int32 `json:"currentAverageUtilization,omitempty" protobuf:"bytes,2,opt,name=currentAverageUtilization"`
// currentAverageValue is the current value of the average of the
// resource metric across all relevant pods, as a raw value (instead of as
// a percentage of the request), similar to the "pods" metric source type.
// It will always be set, regardless of the corresponding metric specification.
CurrentAverageValue resource.Quantity `json:"currentAverageValue" protobuf:"bytes,3,name=currentAverageValue"`
}
// ExternalMetricStatus indicates the current value of a global metric
// not associated with any Kubernetes object.
type ExternalMetricStatus struct {
// metricName is the name of a metric used for autoscaling in
// metric system.
MetricName string `json:"metricName" protobuf:"bytes,1,name=metricName"`
// metricSelector is used to identify a specific time series
// within a given metric.
// +optional
MetricSelector *metav1.LabelSelector `json:"metricSelector,omitempty" protobuf:"bytes,2,opt,name=metricSelector"`
// currentValue is the current value of the metric (as a quantity)
CurrentValue resource.Quantity `json:"currentValue" protobuf:"bytes,3,name=currentValue"`
// currentAverageValue is the current value of metric averaged over autoscaled pods.
// +optional
CurrentAverageValue *resource.Quantity `json:"currentAverageValue,omitempty" protobuf:"bytes,4,opt,name=currentAverageValue"`
}
Jacob Ralph Abarbanell
Jacob Ralph Abarbanell (December 6, 1852 – November 9, 1922) was an American lawyer, author, and playwright from New York City.
Early life
Abarbanell was born to the furrier Rudolph Abarbanell and his wife Rosalia. After graduating from City College in 1872 and Columbia Law School in 1874, he practiced law in the city. He married Cornelia L. Eaton, of Jersey City, on June 30, 1892.
Literary career
While practicing law, he also wrote stories, articles, magazine serials, and plays throughout his life.
While some work and translations were published under his own name, he also used the pseudonyms 'Ralph Royal' and 'Paul Revere'. His best known works were the books The Model Pair (1881) and The Rector's Secret (1892), and the dramas Countess of Monte Cristo (1902) and The Heart of the People (1909). He also published translations of stories from French and German.
References
Category:1852 births
Category:1922 deaths
Category:American male short story writers
Category:American male dramatists and playwrights
Category:Columbia Law School alumni
Category:City College of New York alumni
Category:American male novelists
Category:American translators
Category:19th-century American male writers
Category:19th-century American novelists
Category:19th-century American short story writers
Category:19th-century American dramatists and playwrights
Category:19th-century American lawyers
Category:19th-century translators
Category:20th-century American novelists
Category:20th-century American male writers
Category:20th-century American short story writers
Category:20th-century American dramatists and playwrights
Category:20th-century American lawyers
Category:20th-century translators
Category:New York (state) lawyers
12 Early Signs You Might Be in Labour and Not Know It! How to know it's time!
Jody Allen Founder/Chief Content Editor
Jody is the founder and essence of Stay at Home Mum. An insatiable appetite for reading from a very young age had Jody harbouring dreams of being a published author since primary school. That deep-seated need to write found its way to the public eye in 2011 with the launch of SAHM. Fast forward four years and a few thousand articles, and Jody has fulfilled her dream of being published in print. With the 2014 launch of Once a Month Cooking and 2015's Live Well on Less, thanks to Penguin Random House, Jody shows no signs of slowing down. The master of true native content, Jody lives and experiences first-hand every word of advertorial she pens.
Mum to two magnificent boys and wife to her beloved Brendan, Jody's voice is a surefire winner when you need to talk to Mums.
Labour – the thing most first time Mums dread the most.
But the thing is – first baby or fifth – signs of labour are not always easy to distinguish – especially in the early stages! Here are some usual, quirky and odd signs – you may be just close to getting your newborn in your arms.
1. Your Braxton Hicks Are Evenly Spread
Braxton Hicks are those annoying practice contractions your body uses to prepare you for the real deal of labour. You know you are having Braxton Hicks when your stomach area goes very, very tight – a contraction. The thing is – in early labour – your body is doing exactly the same thing! Many a mother (first time or not) has had a 'false start' because of these annoying bad boys. The best way to tell is to have a nice bath or go for a walk and time them. Real labour won't go away and will be regular. Braxton Hicks will fade down and stop after a while.
2. Period Type Cramps
Not everyone feels or even has Braxton Hicks (that they are aware of). But early labour usually starts out very, very slowly… In late pregnancy, you hurt all over anyway from being just plain ol' uncomfortable – so you might not even realise that you have started getting the odd cramp, because they blend in with the ache of everything else. But period-style cramps are usually how labour begins… ever so slowly. So be aware of them, and time them if they start getting stronger.
I was 28 weeks pregnant and so achey all over. I had to go to the shops with Mum to buy a few things to make sure the house was fully stocked and wasn’t game to leave the house without having someone with me. I’d been having cramps sporadically for days… and didn’t give them a second thought because just everything hurt. When I got to Coles, I actually had to ‘stop’ when some of these cramps hit me – I STILL didn’t realise I was in early labour… Not until about six hours later and I started thinking about timing them – and they were five minutes apart!!! Two hours later, my son was born!
3. Very Uncomfortable the Night Before
Many women (myself included) just couldn’t rest the night before labour – right when you need that rest – no fair!!! Late pregnancy is uncomfortable in itself, but when you feel yourself being unusually restless, your legs won’t stop moving, and you just can’t find a comfortable position – this is a good sign labour isn’t far away!
4. Your Vagina is Puffy
If you have ever witnessed a birth (or seen a YouTube video), you will notice that the vagina doesn't normally look quite that puffy. Your labia can really 'swell' during the late stages of pregnancy, and with the baby moving further down and into the birthing position, it also puts a lot more pressure on your precious girly bits, making them feel big and swollen. If they are giving you a bit of trouble, grab a bag of frozen peas, wrap them in a clean cloth that you will never use again, and apply it to the area for a bit of relief.
5. Urgent Poo and Diarrhoea
Our bodies naturally try to get rid of anything in our bowel just before labour. Plus, by emptying our bowel, it makes more room for the baby to move. Oh, don't worry – you will still probably shit yourself during labour – but I can assure you, you won't give a single care at the time, and the nurses know and will discreetly clean it up before you've even noticed it.
Whatever you do – don’t take anything to make yourself go to the toilet before labour… Going to the toilet during labour is perfectly natural, but full on spraying diarrhoea is just nasty on the poor nurses!
6. Mucus Plug or ‘Show’
The Mucus Plug is the 'wine bottle cork' of the cervix. If you all of a sudden see a mass of snot-like bloody mess in your undies, this is the mucus plug. It signals that your body is ready to give birth soon – usually in the next three days. Of course, not everyone has this happen to them – everyone is different. But keep an eye out! Always let your midwife know if you have any blood in your undies during late pregnancy – and perhaps even save any 'show' that you do have to show the midwife – she can assess whether it is just the plug or you are bleeding.
7. Lower Back Pain
If you can feel a dull ache in the small of your back – this is also a good sign labour is close. Some of us ladies feel a lot of the labour pains in our backs. Grab a hot pack and apply it. Lie down and rest and see if it goes away. If it doesn’t, start to take note of any regular Braxton Hicks or regular pain. The lower back pain is usually caused by the baby getting into the birthing position – and it can be quite painful. Plus, some women just feel the pain in their backs as opposed to their abdomen.
8. Urgent Nesting
The day before I had my son – I just HAD to paint a wall. Just absolutely – positively HAD to. So there I was, nine months pregnant and as big as a house, up a ladder, making sure that wall was absolutely perfect. Most women go into full-on nesting mode just before. You get a surge of energy to make sure the house is clean, the clothes are ready, the floors are vacuumed etc. You won't be able to sit down – the urge to get shit done will be way too strong.
9. You Can’t Make it to the Toilet for a Wee
Losing bladder control is totally normal for a heavily pregnant woman. But when it gets to the stage that the urge to wee and making it to the toilet do not align (i.e. you wee yourself on the way to the toilet), this is a good sign labour is imminent. See, most of the time, the cause of this is that the baby's head has moved right down into the lower pelvis, pushing on your bladder. With all systems go, the bladder just can't hold itself.
10. Your Waters Break
Your waters can break in a great big ‘snap’ like you see in the movies (seriously!) or they could break quietly and the baby is holding back the flow. So if you feel a constant dampness in your undies that you don’t think is wee, this could be a sign your waters have broken. Whether you have started to go into labour or feel any pain or not, go get it checked out. If your waters break and it takes a few days for the baby to come, there is a great chance you or your baby can get an infection. So go see your doctor.
Just to be gross – do the sniff test. Wee smells like wee. Amniotic fluid kinda smells like a wet chook.
via giphy
11. Your Cervix is Dilating
Not for the faint-hearted, but I have heard of ladies who (after washing their hands well) could actually feel their cervix dilating. Best leave this one for the doctor.
via giphy
12. Constant Shivering or Trembling
A change in the hormones from our brain telling our body to go into labour can cause a woman’s body to ‘shiver’ or ‘tremble’. It is fairly rare though – and always get checked out by your quack just in case.
via giphy
So, what do I do now?
Whilst you are waiting for more signs, this is a great time to make sure you:
Call your doctor or midwife and let them know what is happening, they will probably tell you to stay home for now.
Make sure your maternity bag is packed and at the door.
Ring your partner and your Mum to let them know – plus it’s always nice to have someone with you.
Have a nice bath, wash your hair, get into some comfy pj’s or clothing.
Have a nice cup of tea and put your feet up and rest – you have some hard work in front of you!
How did your pregnancy go? Did you experience any of these signs? Share it with us in the comments! |
Expression of neutral endopeptidase (NEP/CD10) on pancreatic tumor cell lines, pancreatitis and pancreatic tumor tissues.
Neutral endopeptidase (NEP/CD10) is a cell surface zinc metalloprotease that cleaves peptide bonds on the amino-terminal side of hydrophobic amino acids, inactivating multiple physiologically active peptides. Loss or decrease of NEP/CD10 expression has been reported in many types of malignancies, but the role of NEP/CD10 in pancreatic carcinoma has not yet been identified. Using real-time RT-PCR, flow cytometry and immunohistochemistry, NEP/CD10 expression was quantified both in pancreatic carcinoma cell lines and in tumor specimens obtained from patients with primary pancreatic carcinomas. Three out of 8 pancreatic carcinoma cell lines exhibited heterogeneous NEP/CD10 expression levels: PATU-8988T expressed the highest NEP/CD10 levels, whereas HUP-T4 and HUP-T3 cells showed moderate to low NEP/CD10 expression. NEP/CD10 immunoreactivity was found in 6 of 24 pancreatic ductal adenocarcinomas, but also in 3 of 6 tissues of patients with chronic pancreatitis. NEP/CD10 expression in pancreatic tumor lesions and cell lines was not associated with tumor grading and staging. Treatment of PATU-8988T cells with the histone deacetylase inhibitors sodium butyrate and valproic acid induced an increase of NEP/CD10 expression. This was accompanied by a reduced proliferation rate of PATU-8988T cells, which was increased by the addition of the enzyme activity inhibitors phosphoramidon and thiorphan. Thus, NEP/CD10 is differentially expressed in pancreatic tumors and might be involved in the proliferative activity of pancreatic cancer cells. However, further studies are needed to provide more detailed information on the role of NEP/CD10 under physiological and pathophysiological conditions of the pancreas.
Baby Sathanya
Baby Sathanya (born 7 July 2007 as Sathanya Vijayasundar) is an Indian child actress from Chennai who has appeared in feature films, short films and television commercials. She took up acting when she was five years old, appearing in TV commercials and in shows on the Tamil kids' channel Chutti TV. She has also dubbed for the Tamil version of the English movie "Baby Geniuses". In 2013, she was signed on to portray a lead role in D. Suresh's Tamil horror film Baby, which brought her further opportunities. She has since completed shooting for Veera Sivaji.
Filmography
Feature films
Short films
Awards
Awarded "Best Actor" for the short Film "Kolors"
Awarded "Chutti Princess" by Chutti TV
External links
https://www.facebook.com/BabySathanya
References
http://www.deccanchronicle.com/content/tags/baby-sathanya
Category:2007 births
Category:Living people
Category:Indian film actresses
Category:Child actresses in Tamil cinema
Category:Actresses in Tamil cinema
Category:Actresses from Chennai
Category:21st-century Indian child actresses |
// This file is part of Eigen, a lightweight C++ template library
// for linear algebra.
//
// Copyright (C) 2008 Gael Guennebaud <[email protected]>
// Copyright (C) 2006-2008 Benoit Jacob <[email protected]>
//
// This Source Code Form is subject to the terms of the Mozilla
// Public License v. 2.0. If a copy of the MPL was not distributed
// with this file, You can obtain one at http://mozilla.org/MPL/2.0/.
#ifndef EIGEN_GENERIC_PACKET_MATH_H
#define EIGEN_GENERIC_PACKET_MATH_H
namespace Eigen {
namespace internal {
/** \internal
* \file GenericPacketMath.h
*
* Default implementation for types not supported by the vectorization.
 * In practice these functions are provided to make it easier to write
 * generic vectorized code.
*/
#ifndef EIGEN_DEBUG_ALIGNED_LOAD
#define EIGEN_DEBUG_ALIGNED_LOAD
#endif
#ifndef EIGEN_DEBUG_UNALIGNED_LOAD
#define EIGEN_DEBUG_UNALIGNED_LOAD
#endif
#ifndef EIGEN_DEBUG_ALIGNED_STORE
#define EIGEN_DEBUG_ALIGNED_STORE
#endif
#ifndef EIGEN_DEBUG_UNALIGNED_STORE
#define EIGEN_DEBUG_UNALIGNED_STORE
#endif
struct default_packet_traits
{
enum {
HasHalfPacket = 0,
HasAdd = 1,
HasSub = 1,
HasMul = 1,
HasNegate = 1,
HasAbs = 1,
HasArg = 0,
HasAbs2 = 1,
HasMin = 1,
HasMax = 1,
HasConj = 1,
HasSetLinear = 1,
HasBlend = 0,
HasDiv = 0,
HasSqrt = 0,
HasRsqrt = 0,
HasExp = 0,
HasExpm1 = 0,
HasLog = 0,
HasLog1p = 0,
HasLog10 = 0,
HasPow = 0,
HasSin = 0,
HasCos = 0,
HasTan = 0,
HasASin = 0,
HasACos = 0,
HasATan = 0,
HasSinh = 0,
HasCosh = 0,
HasTanh = 0,
HasLGamma = 0,
HasDiGamma = 0,
HasZeta = 0,
HasPolygamma = 0,
HasErf = 0,
HasErfc = 0,
HasIGamma = 0,
HasIGammac = 0,
HasBetaInc = 0,
HasRound = 0,
HasFloor = 0,
HasCeil = 0,
HasSign = 0
};
};
template<typename T> struct packet_traits : default_packet_traits
{
typedef T type;
typedef T half;
enum {
Vectorizable = 0,
size = 1,
AlignedOnScalar = 0,
HasHalfPacket = 0
};
enum {
HasAdd = 0,
HasSub = 0,
HasMul = 0,
HasNegate = 0,
HasAbs = 0,
HasAbs2 = 0,
HasMin = 0,
HasMax = 0,
HasConj = 0,
HasSetLinear = 0
};
};
template<typename T> struct packet_traits<const T> : packet_traits<T> { };
template <typename Src, typename Tgt> struct type_casting_traits {
enum {
VectorizedCast = 0,
SrcCoeffRatio = 1,
TgtCoeffRatio = 1
};
};
/** \internal \returns static_cast<TgtType>(a) (coeff-wise) */
template <typename SrcPacket, typename TgtPacket>
EIGEN_DEVICE_FUNC inline TgtPacket
pcast(const SrcPacket& a) {
return static_cast<TgtPacket>(a);
}
template <typename SrcPacket, typename TgtPacket>
EIGEN_DEVICE_FUNC inline TgtPacket
pcast(const SrcPacket& a, const SrcPacket& /*b*/) {
return static_cast<TgtPacket>(a);
}
template <typename SrcPacket, typename TgtPacket>
EIGEN_DEVICE_FUNC inline TgtPacket
pcast(const SrcPacket& a, const SrcPacket& /*b*/, const SrcPacket& /*c*/, const SrcPacket& /*d*/) {
return static_cast<TgtPacket>(a);
}
/** \internal \returns a + b (coeff-wise) */
template<typename Packet> EIGEN_DEVICE_FUNC inline Packet
padd(const Packet& a,
const Packet& b) { return a+b; }
/** \internal \returns a - b (coeff-wise) */
template<typename Packet> EIGEN_DEVICE_FUNC inline Packet
psub(const Packet& a,
const Packet& b) { return a-b; }
/** \internal \returns -a (coeff-wise) */
template<typename Packet> EIGEN_DEVICE_FUNC inline Packet
pnegate(const Packet& a) { return -a; }
/** \internal \returns conj(a) (coeff-wise) */
template<typename Packet> EIGEN_DEVICE_FUNC inline Packet
pconj(const Packet& a) { return numext::conj(a); }
/** \internal \returns a * b (coeff-wise) */
template<typename Packet> EIGEN_DEVICE_FUNC inline Packet
pmul(const Packet& a,
const Packet& b) { return a*b; }
/** \internal \returns a / b (coeff-wise) */
template<typename Packet> EIGEN_DEVICE_FUNC inline Packet
pdiv(const Packet& a,
const Packet& b) { return a/b; }
/** \internal \returns the min of \a a and \a b (coeff-wise) */
template<typename Packet> EIGEN_DEVICE_FUNC inline Packet
pmin(const Packet& a,
const Packet& b) { return numext::mini(a, b); }
/** \internal \returns the max of \a a and \a b (coeff-wise) */
template<typename Packet> EIGEN_DEVICE_FUNC inline Packet
pmax(const Packet& a,
const Packet& b) { return numext::maxi(a, b); }
/** \internal \returns the absolute value of \a a */
template<typename Packet> EIGEN_DEVICE_FUNC inline Packet
pabs(const Packet& a) { using std::abs; return abs(a); }
/** \internal \returns the phase angle of \a a */
template<typename Packet> EIGEN_DEVICE_FUNC inline Packet
parg(const Packet& a) { using numext::arg; return arg(a); }
/** \internal \returns the bitwise and of \a a and \a b */
template<typename Packet> EIGEN_DEVICE_FUNC inline Packet
pand(const Packet& a, const Packet& b) { return a & b; }
/** \internal \returns the bitwise or of \a a and \a b */
template<typename Packet> EIGEN_DEVICE_FUNC inline Packet
por(const Packet& a, const Packet& b) { return a | b; }
/** \internal \returns the bitwise xor of \a a and \a b */
template<typename Packet> EIGEN_DEVICE_FUNC inline Packet
pxor(const Packet& a, const Packet& b) { return a ^ b; }
/** \internal \returns the bitwise andnot of \a a and \a b */
template<typename Packet> EIGEN_DEVICE_FUNC inline Packet
pandnot(const Packet& a, const Packet& b) { return a & (!b); }
/** \internal \returns a packet version of \a *from, from must be 16 bytes aligned */
template<typename Packet> EIGEN_DEVICE_FUNC inline Packet
pload(const typename unpacket_traits<Packet>::type* from) { return *from; }
/** \internal \returns a packet version of \a *from, (un-aligned load) */
template<typename Packet> EIGEN_DEVICE_FUNC inline Packet
ploadu(const typename unpacket_traits<Packet>::type* from) { return *from; }
/** \internal \returns a packet with constant coefficients \a a, e.g.: (a,a,a,a) */
template<typename Packet> EIGEN_DEVICE_FUNC inline Packet
pset1(const typename unpacket_traits<Packet>::type& a) { return a; }
/** \internal \returns a packet with constant coefficients \a a[0], e.g.: (a[0],a[0],a[0],a[0]) */
template<typename Packet> EIGEN_DEVICE_FUNC inline Packet
pload1(const typename unpacket_traits<Packet>::type *a) { return pset1<Packet>(*a); }
/** \internal \returns a packet with elements of \a *from duplicated.
* For instance, for a packet of 8 elements, 4 scalars will be read from \a *from and
* duplicated to form: {from[0],from[0],from[1],from[1],from[2],from[2],from[3],from[3]}
* Currently, this function is only used for scalar * complex products.
*/
template<typename Packet> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet
ploaddup(const typename unpacket_traits<Packet>::type* from) { return *from; }
/** \internal \returns a packet with elements of \a *from quadrupled.
* For instance, for a packet of 8 elements, 2 scalars will be read from \a *from and
* replicated to form: {from[0],from[0],from[0],from[0],from[1],from[1],from[1],from[1]}
* Currently, this function is only used in matrix products.
* For packet-size smaller or equal to 4, this function is equivalent to pload1
*/
template<typename Packet> EIGEN_DEVICE_FUNC inline Packet
ploadquad(const typename unpacket_traits<Packet>::type* from)
{ return pload1<Packet>(from); }
/** \internal equivalent to
* \code
* a0 = pload1(a+0);
* a1 = pload1(a+1);
* a2 = pload1(a+2);
* a3 = pload1(a+3);
* \endcode
* \sa pset1, pload1, ploaddup, pbroadcast2
*/
template<typename Packet> EIGEN_DEVICE_FUNC
inline void pbroadcast4(const typename unpacket_traits<Packet>::type *a,
Packet& a0, Packet& a1, Packet& a2, Packet& a3)
{
a0 = pload1<Packet>(a+0);
a1 = pload1<Packet>(a+1);
a2 = pload1<Packet>(a+2);
a3 = pload1<Packet>(a+3);
}
/** \internal equivalent to
* \code
* a0 = pload1(a+0);
* a1 = pload1(a+1);
* \endcode
* \sa pset1, pload1, ploaddup, pbroadcast4
*/
template<typename Packet> EIGEN_DEVICE_FUNC
inline void pbroadcast2(const typename unpacket_traits<Packet>::type *a,
Packet& a0, Packet& a1)
{
a0 = pload1<Packet>(a+0);
a1 = pload1<Packet>(a+1);
}
/** \internal \brief Returns a packet with coefficients (a,a+1,...,a+packet_size-1). */
template<typename Packet> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet
plset(const typename unpacket_traits<Packet>::type& a) { return a; }
/** \internal copy the packet \a from to \a *to, \a to must be 16 bytes aligned */
template<typename Scalar, typename Packet> EIGEN_DEVICE_FUNC inline void pstore(Scalar* to, const Packet& from)
{ (*to) = from; }
/** \internal copy the packet \a from to \a *to, (un-aligned store) */
template<typename Scalar, typename Packet> EIGEN_DEVICE_FUNC inline void pstoreu(Scalar* to, const Packet& from)
{ (*to) = from; }
template<typename Scalar, typename Packet> EIGEN_DEVICE_FUNC inline Packet pgather(const Scalar* from, Index /*stride*/)
{ return ploadu<Packet>(from); }
template<typename Scalar, typename Packet> EIGEN_DEVICE_FUNC inline void pscatter(Scalar* to, const Packet& from, Index /*stride*/)
{ pstore(to, from); }
/** \internal tries to do cache prefetching of \a addr */
template<typename Scalar> EIGEN_DEVICE_FUNC inline void prefetch(const Scalar* addr)
{
#ifdef __CUDA_ARCH__
#if defined(__LP64__)
// 64-bit pointer operand constraint for inlined asm
asm(" prefetch.L1 [ %1 ];" : "=l"(addr) : "l"(addr));
#else
// 32-bit pointer operand constraint for inlined asm
asm(" prefetch.L1 [ %1 ];" : "=r"(addr) : "r"(addr));
#endif
#elif (!EIGEN_COMP_MSVC) && (EIGEN_COMP_GNUC || EIGEN_COMP_CLANG || EIGEN_COMP_ICC)
__builtin_prefetch(addr);
#endif
}
/** \internal \returns the first element of a packet */
template<typename Packet> EIGEN_DEVICE_FUNC inline typename unpacket_traits<Packet>::type pfirst(const Packet& a)
{ return a; }
/** \internal \returns a packet where the element i contains the sum of the packet of \a vec[i] */
template<typename Packet> EIGEN_DEVICE_FUNC inline Packet
preduxp(const Packet* vecs) { return vecs[0]; }
/** \internal \returns the sum of the elements of \a a*/
template<typename Packet> EIGEN_DEVICE_FUNC inline typename unpacket_traits<Packet>::type predux(const Packet& a)
{ return a; }
/** \internal \returns the sum of the elements of \a a by block of 4 elements.
* For a packet {a0, a1, a2, a3, a4, a5, a6, a7}, it returns a half packet {a0+a4, a1+a5, a2+a6, a3+a7}
* For packet-size smaller or equal to 4, this boils down to a noop.
*/
template<typename Packet> EIGEN_DEVICE_FUNC inline
typename conditional<(unpacket_traits<Packet>::size%8)==0,typename unpacket_traits<Packet>::half,Packet>::type
predux_downto4(const Packet& a)
{ return a; }
/** \internal \returns the product of the elements of \a a*/
template<typename Packet> EIGEN_DEVICE_FUNC inline typename unpacket_traits<Packet>::type predux_mul(const Packet& a)
{ return a; }
/** \internal \returns the min of the elements of \a a*/
template<typename Packet> EIGEN_DEVICE_FUNC inline typename unpacket_traits<Packet>::type predux_min(const Packet& a)
{ return a; }
/** \internal \returns the max of the elements of \a a*/
template<typename Packet> EIGEN_DEVICE_FUNC inline typename unpacket_traits<Packet>::type predux_max(const Packet& a)
{ return a; }
/** \internal \returns the reversed elements of \a a*/
template<typename Packet> EIGEN_DEVICE_FUNC inline Packet preverse(const Packet& a)
{ return a; }
/** \internal \returns \a a with real and imaginary part flipped (for complex type only) */
template<typename Packet> EIGEN_DEVICE_FUNC inline Packet pcplxflip(const Packet& a)
{
// FIXME: uncomment the following in case we drop the internal imag and real functions.
// using std::imag;
// using std::real;
return Packet(imag(a),real(a));
}
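// Example: for the scalar complex case defined above, the flip swaps the
// real and imaginary parts, e.g.
//   pcplxflip(std::complex<float>(1.f, 2.f)) yields std::complex<float>(2.f, 1.f).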
/**************************
* Special math functions
***************************/
/** \internal \returns the sine of \a a (coeff-wise) */
template<typename Packet> EIGEN_DECLARE_FUNCTION_ALLOWING_MULTIPLE_DEFINITIONS
Packet psin(const Packet& a) { using std::sin; return sin(a); }
/** \internal \returns the cosine of \a a (coeff-wise) */
template<typename Packet> EIGEN_DECLARE_FUNCTION_ALLOWING_MULTIPLE_DEFINITIONS
Packet pcos(const Packet& a) { using std::cos; return cos(a); }
/** \internal \returns the tan of \a a (coeff-wise) */
template<typename Packet> EIGEN_DECLARE_FUNCTION_ALLOWING_MULTIPLE_DEFINITIONS
Packet ptan(const Packet& a) { using std::tan; return tan(a); }
/** \internal \returns the arc sine of \a a (coeff-wise) */
template<typename Packet> EIGEN_DECLARE_FUNCTION_ALLOWING_MULTIPLE_DEFINITIONS
Packet pasin(const Packet& a) { using std::asin; return asin(a); }
/** \internal \returns the arc cosine of \a a (coeff-wise) */
template<typename Packet> EIGEN_DECLARE_FUNCTION_ALLOWING_MULTIPLE_DEFINITIONS
Packet pacos(const Packet& a) { using std::acos; return acos(a); }
/** \internal \returns the arc tangent of \a a (coeff-wise) */
template<typename Packet> EIGEN_DECLARE_FUNCTION_ALLOWING_MULTIPLE_DEFINITIONS
Packet patan(const Packet& a) { using std::atan; return atan(a); }
/** \internal \returns the hyperbolic sine of \a a (coeff-wise) */
template<typename Packet> EIGEN_DECLARE_FUNCTION_ALLOWING_MULTIPLE_DEFINITIONS
Packet psinh(const Packet& a) { using std::sinh; return sinh(a); }
/** \internal \returns the hyperbolic cosine of \a a (coeff-wise) */
template<typename Packet> EIGEN_DECLARE_FUNCTION_ALLOWING_MULTIPLE_DEFINITIONS
Packet pcosh(const Packet& a) { using std::cosh; return cosh(a); }
/** \internal \returns the hyperbolic tan of \a a (coeff-wise) */
template<typename Packet> EIGEN_DECLARE_FUNCTION_ALLOWING_MULTIPLE_DEFINITIONS
Packet ptanh(const Packet& a) { using std::tanh; return tanh(a); }
/** \internal \returns the exp of \a a (coeff-wise) */
template<typename Packet> EIGEN_DECLARE_FUNCTION_ALLOWING_MULTIPLE_DEFINITIONS
Packet pexp(const Packet& a) { using std::exp; return exp(a); }
/** \internal \returns the expm1 of \a a (coeff-wise) */
template<typename Packet> EIGEN_DECLARE_FUNCTION_ALLOWING_MULTIPLE_DEFINITIONS
Packet pexpm1(const Packet& a) { return numext::expm1(a); }
/** \internal \returns the log of \a a (coeff-wise) */
template<typename Packet> EIGEN_DECLARE_FUNCTION_ALLOWING_MULTIPLE_DEFINITIONS
Packet plog(const Packet& a) { using std::log; return log(a); }
/** \internal \returns the log1p of \a a (coeff-wise) */
template<typename Packet> EIGEN_DECLARE_FUNCTION_ALLOWING_MULTIPLE_DEFINITIONS
Packet plog1p(const Packet& a) { return numext::log1p(a); }
/** \internal \returns the log10 of \a a (coeff-wise) */
template<typename Packet> EIGEN_DECLARE_FUNCTION_ALLOWING_MULTIPLE_DEFINITIONS
Packet plog10(const Packet& a) { using std::log10; return log10(a); }
/** \internal \returns the square-root of \a a (coeff-wise) */
template<typename Packet> EIGEN_DECLARE_FUNCTION_ALLOWING_MULTIPLE_DEFINITIONS
Packet psqrt(const Packet& a) { using std::sqrt; return sqrt(a); }
/** \internal \returns the reciprocal square-root of \a a (coeff-wise) */
template<typename Packet> EIGEN_DECLARE_FUNCTION_ALLOWING_MULTIPLE_DEFINITIONS
Packet prsqrt(const Packet& a) {
return pdiv(pset1<Packet>(1), psqrt(a));
}
/** \internal \returns the rounded value of \a a (coeff-wise) */
template<typename Packet> EIGEN_DECLARE_FUNCTION_ALLOWING_MULTIPLE_DEFINITIONS
Packet pround(const Packet& a) { using numext::round; return round(a); }
/** \internal \returns the floor of \a a (coeff-wise) */
template<typename Packet> EIGEN_DECLARE_FUNCTION_ALLOWING_MULTIPLE_DEFINITIONS
Packet pfloor(const Packet& a) { using numext::floor; return floor(a); }
/** \internal \returns the ceil of \a a (coeff-wise) */
template<typename Packet> EIGEN_DECLARE_FUNCTION_ALLOWING_MULTIPLE_DEFINITIONS
Packet pceil(const Packet& a) { using numext::ceil; return ceil(a); }
/***************************************************************************
* The following functions might not have to be overwritten for vectorized types
***************************************************************************/
/** \internal copy a packet with constant coefficient \a a (e.g., [a,a,a,a]) to \a *to. \a to must be 16 bytes aligned */
// NOTE: this function must really be templated on the packet type (think about different packet types for the same scalar type)
template<typename Packet>
inline void pstore1(typename unpacket_traits<Packet>::type* to, const typename unpacket_traits<Packet>::type& a)
{
pstore(to, pset1<Packet>(a));
}
/** \internal \returns a * b + c (coeff-wise) */
template<typename Packet> EIGEN_DEVICE_FUNC inline Packet
pmadd(const Packet& a,
const Packet& b,
const Packet& c)
{ return padd(pmul(a, b),c); }
/** \internal \returns a packet version of \a *from.
* The pointer \a from must be aligned on a \a Alignment bytes boundary. */
template<typename Packet, int Alignment>
EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE Packet ploadt(const typename unpacket_traits<Packet>::type* from)
{
if(Alignment >= unpacket_traits<Packet>::alignment)
return pload<Packet>(from);
else
return ploadu<Packet>(from);
}
/** \internal copy the packet \a from to \a *to.
* The pointer \a from must be aligned on a \a Alignment bytes boundary. */
template<typename Scalar, typename Packet, int Alignment>
EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE void pstoret(Scalar* to, const Packet& from)
{
if(Alignment >= unpacket_traits<Packet>::alignment)
pstore(to, from);
else
pstoreu(to, from);
}
/** \internal \returns a packet version of \a *from.
* Unlike ploadt, ploadt_ro takes advantage of the read-only memory path on the
 * hardware if available to speed up the loading of data that won't be modified
* by the current computation.
*/
template<typename Packet, int LoadMode>
EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE Packet ploadt_ro(const typename unpacket_traits<Packet>::type* from)
{
return ploadt<Packet, LoadMode>(from);
}
/** \internal default implementation of palign() allowing partial specialization */
template<int Offset,typename PacketType>
struct palign_impl
{
// by default data are aligned, so there is nothing to be done :)
static inline void run(PacketType&, const PacketType&) {}
};
/** \internal update \a first using the concatenation of the packet_size minus \a Offset last elements
* of \a first and \a Offset first elements of \a second.
*
 * This function is currently only used to optimize matrix-vector products on unaligned matrices.
* It takes 2 packets that represent a contiguous memory array, and returns a packet starting
* at the position \a Offset. For instance, for packets of 4 elements, we have:
* Input:
* - first = {f0,f1,f2,f3}
* - second = {s0,s1,s2,s3}
* Output:
* - if Offset==0 then {f0,f1,f2,f3}
* - if Offset==1 then {f1,f2,f3,s0}
* - if Offset==2 then {f2,f3,s0,s1}
 * - if Offset==3 then {f3,s0,s1,s2}
*/
template<int Offset,typename PacketType>
inline void palign(PacketType& first, const PacketType& second)
{
palign_impl<Offset,PacketType>::run(first,second);
}
/***************************************************************************
* Fast complex products (GCC generates a function call which is very slow)
***************************************************************************/
// Eigen+CUDA does not support complexes.
#ifndef __CUDACC__
template<> inline std::complex<float> pmul(const std::complex<float>& a, const std::complex<float>& b)
{ return std::complex<float>(real(a)*real(b) - imag(a)*imag(b), imag(a)*real(b) + real(a)*imag(b)); }
template<> inline std::complex<double> pmul(const std::complex<double>& a, const std::complex<double>& b)
{ return std::complex<double>(real(a)*real(b) - imag(a)*imag(b), imag(a)*real(b) + real(a)*imag(b)); }
#endif
/***************************************************************************
* PacketBlock, that is a collection of N packets where the number of words
* in the packet is a multiple of N.
***************************************************************************/
template <typename Packet,int N=unpacket_traits<Packet>::size> struct PacketBlock {
Packet packet[N];
};
template<typename Packet> EIGEN_DEVICE_FUNC inline void
ptranspose(PacketBlock<Packet,1>& /*kernel*/) {
// Nothing to do in the scalar case, i.e. a 1x1 matrix.
}
/***************************************************************************
* Selector, i.e. vector of N boolean values used to select (i.e. blend)
* words from 2 packets.
***************************************************************************/
template <size_t N> struct Selector {
bool select[N];
};
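// Example: for a 4-element packet type, with ifPacket.select set to
// {true, false, true, false},
//   pblend(ifPacket, {t0,t1,t2,t3}, {e0,e1,e2,e3})
// is expected to yield {t0,e1,t2,e3}. The generic overload below covers only
// the scalar (size-1) case; vectorized packet types provide specializations.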
template<typename Packet> EIGEN_DEVICE_FUNC inline Packet
pblend(const Selector<unpacket_traits<Packet>::size>& ifPacket, const Packet& thenPacket, const Packet& elsePacket) {
return ifPacket.select[0] ? thenPacket : elsePacket;
}
/** \internal \returns \a a with the first coefficient replaced by the scalar b */
template<typename Packet> EIGEN_DEVICE_FUNC inline Packet
pinsertfirst(const Packet& a, typename unpacket_traits<Packet>::type b)
{
// Default implementation based on pblend.
// It must be specialized for higher performance.
Selector<unpacket_traits<Packet>::size> mask;
mask.select[0] = true;
// This for loop should be optimized away by the compiler.
for(Index i=1; i<unpacket_traits<Packet>::size; ++i)
mask.select[i] = false;
return pblend(mask, pset1<Packet>(b), a);
}
/** \internal \returns \a a with the last coefficient replaced by the scalar b */
template<typename Packet> EIGEN_DEVICE_FUNC inline Packet
pinsertlast(const Packet& a, typename unpacket_traits<Packet>::type b)
{
// Default implementation based on pblend.
// It must be specialized for higher performance.
Selector<unpacket_traits<Packet>::size> mask;
// This for loop should be optimized away by the compiler.
for(Index i=0; i<unpacket_traits<Packet>::size-1; ++i)
mask.select[i] = false;
mask.select[unpacket_traits<Packet>::size-1] = true;
return pblend(mask, pset1<Packet>(b), a);
}
} // end namespace internal
} // end namespace Eigen
#endif // EIGEN_GENERIC_PACKET_MATH_H
|
Chameleon (Ira Losco song)
"Chameleon" is a song performed by Maltese singer Ira Losco. Originally, the song would have represented Malta in the Eurovision Song Contest 2016. However, the Maltese broadcaster TVM later changed it to "Walk on Water".
References
Category:Eurovision songs of Malta
Category:Eurovision songs of 2016
Category:2015 songs
Category:2016 singles |
# coding: utf-8
# menuTitle : Select Negative Advance Widths
# Ethan Cohen 2019-11-03
# Fraunces-LightOpMaxGoofyMax.ufo and Fraunces-BlackOpMaxGoofyMax.ufo
# failed to test install because a few glyphs (/hyphensoft /gnrl:hyphen
# /registered /fhook) had negative advance widths.
f = CurrentFont()
f.selection = list(g.name for g in f if g.width < 0) |
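# The script body above is just a filter over glyph advance widths. The same
# logic on plain data (hypothetical glyph names and widths, no font object):
#
#     widths = {"hyphensoft": -10, "registered": -2, "A": 500}
#     negative = [name for name, w in widths.items() if w < 0]
#     # negative == ["hyphensoft", "registered"]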
1. INTRODUCTION {#sec1}
===============
Cisplatin and the structurally related platinum-based drugs represent one of the most important classes of antineoplastic agents, being especially valuable for the treatment of germ cell cancer and a variety of other solid malignancies \[[@B1]--[@B3]\]. Despite their important clinical role, however, the platinum-based chemotherapeutics possess relatively low selectivity to malignant cells and hence their application is associated with significant dose-limiting organ toxicities \[[@B1]\]. Besides their unfavorable safety profile, the major limitation in the clinical application of the currently marketed platinum agents is the development of acquired resistance by the tumor cells \[[@B2]\]. Consequently, a significant interest is manifested towards the design and synthesis of cisplatin-dissimilar analogues with modified pharmacological properties capable of bypassing the cellular resistance mechanisms \[[@B4]--[@B6]\].
Considering that the thermodynamic stability and kinetic behavior of metal complexes in biological milieu, and hence their biochemical and pharmacological properties, depend greatly on the nature of the adduct-forming metal centers, it is well appreciated that a change of the metal ion could alter the antineoplastic activity \[[@B6]\]. Among the nonplatinum metal-based chemotherapeutics, much attention has been paid to gold complexes \[[@B7]\]. Well known for their clinical antiarthritic application \[[@B8]\], the gold-based drugs have also attracted interest as potential antineoplastic agents, with gold(I)-phosphine derivatives being among the most active in vivo against murine tumor models \[[@B9]\]. Currently, the greatest interest towards development of gold-based chemotherapeutics is focused on Au(III) compounds which, being isoelectronic with platinum(II), share the propensity of forming square planar complexes analogous to cisplatin \[[@B7], [@B10]\]. It could be anticipated that, similar to Pt(II) compounds, the gold(III) species are capable of binding to DNA, which would account for their cytotoxicity. Unlike platinum(II), however, the gold(III) complexes are extremely unstable under physiological conditions, which has practically precluded interest in this class of metal-based drugs. In addition, the gold(III) complexes are highly reactive and are able to oxidize a series of biomolecules such as methionine, glycine, and albumin, leading to rapid reduction to gold(I) or even to elemental gold \[[@B10]--[@B12]\]. It has been proven that the stability of Au(III) compounds can be augmented by bonding with nitrogen donor-containing bi- and multidentate chelating ligands such as ethylenediamine, cyclam, bipyridine, and so forth, which lower the redox potential of the metal center \[[@B10], [@B13], [@B14]\].
Recently, a large number of reports on the preparation, structural characterization, and cytotoxic studies of stable gold(III) complexes and organometallic compounds appeared in the scientific literature \[[@B10]\].
With regard to redox stability and kinetic behavior, the search for suitable cytotoxic gold(II) complexes is an intriguing and previously unexplored area of anticancer drug design. Nowadays, the Au(II) oxidation state can be considered a common state in gold chemistry. Despite the large number of stable diamagnetic dinuclear and polynuclear gold(II) complexes, examples of mononuclear ones are scarce and most of them involve S-containing ligands \[[@B15]--[@B17]\]. Recently, the synthesis and structural characterization of a stable monomeric hematoporphyrin Au(II) complex with the general formula \[Au(II)Hp~−2H~.(H~2~O)~2~\] ([Figure 1](#fig1){ref-type="fig"}) and a distorted octahedral structure has been reported \[[@B18]\]. The Au(II) species is stabilized in the complex through coordination via the four nitrogen atoms of the porphyrin macrocycle, with the two water molecules in axial positions.
The rationale for designing porphyrin-based metal complexes as anticancer drugs is their selective accumulation within malignant tissue, together with the augmentation of cytotoxicity upon light irradiation. Hence, such complexes are expected to behave like hybrid drugs with combined cytotoxic/phototoxic properties \[[@B4]\]. Brunner and coworkers have described large series of planar platinum(II)-porphyrin conjugates whereby the metal centers are coordinated with the porphyrin residues via the pendant functionalities \[[@B19]--[@B21]\]. Recently, we synthesized and characterized three stable octahedral platinum hematoporphyrin complexes in the unusual platinum oxidation state of 3+. In these complexes the hematoporphyrin ligand is coordinated as follows: via the four pyrrole N-atoms, forming a metalloporphyrin-type complex; by asymmetric coordination through two N-atoms from adjacent pyrrole rings, forming a SAT-type complex; or by the propionic COO^−^ side-chain groups outside the porphyrin macrocycle. The complexes displayed significant cytotoxic and proapoptotic activities against human tumor cell lines \[[@B22]\].
This study deals with the cytotoxic activity of the newly synthesized stable gold(II) complex Au(II)Hp~−2H~.(H~2~O)~2~ against a spectrum of tumor cell lines. Its effect on the human embryonic kidney cell line 293T has been studied as well, in order to estimate the selectivity of its cytotoxicity.
2. MATERIALS AND METHODS {#sec2}
========================
2.1. Chemicals and reagents {#subsec2.1}
---------------------------
RPMI-1640 and DMEM growth media, fetal calf serum, and L-glutamine were purchased from Sigma-Aldrich Co. (St. Louis, Missouri, USA). 3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT), Triton X-100, gold standard, Tris-HCl, DMSO, and EDTA were supplied by Merck Co.
The gold(II) complex with hematoporphyrin---\[Au(II)Hp~−2H~.(H~2~O)~2~\] was synthesized as previously described \[[@B18]\]. The reference anticancer drug cisplatin was purchased from Sigma. Stock solutions of both agents were freshly prepared in DMSO and promptly diluted serially with RPMI-1640 medium to the desired extent. The DMSO concentration never exceeded 1% in the final dilutions obtained.
2.2. Cell lines and culture conditions {#subsec2.2}
--------------------------------------
The T-cell leukaemia SKW-3 (a KE-37 derivative) (DSMZ No.: ACC 53); the non-Hodgkin lymphoma DOHH-2 (DSMZ No.: ACC 47); the chronic myeloid leukaemias K-562 (DSMZ No.: ACC 10) and LAMA-84 (DSMZ No.: ACC 168), as well as the urinary bladder carcinoma-derived 5637 (DSMZ No.: ACC 35) were obtained from DSMZ GmbH (Braunschweig, Germany). The human urinary bladder carcinoma cell line MGH-U1 was supplied by the American Type Culture Collection (Rockville, MD, USA). The cells were maintained as suspension-type cultures (leukaemias) or as adherent cultures (5637 and MGH-U1) in a controlled environment: RPMI-1640 medium, supplemented with 10% heat-inactivated fetal calf serum and 2 mM L-glutamine, at 37°C in a "Heraeus" incubator with a 5% CO~2~ humidified atmosphere. In order to keep cells in log phase, the cultures were refed with fresh RPMI-1640 medium two or three times per week.
2.3. Cytotoxicity assay {#subsec2.3}
-----------------------
Cell viability was assessed using the standard MTT-dye reduction assay as previously described \[[@B23]\] with minor modifications \[[@B24]\]. Exponentially growing cells were seeded in 96-well flat-bottomed microplates (100 *μ*L/well) at a density of 1 × 10^5^ cells per mL and, after 24-hour incubation at 37°C, they were exposed to various concentrations of the tested complexes for 72 hours. For each concentration at least 8 wells were used. After the incubation with the test compounds, MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide; Sigma) solution (10 mg/mL in PBS) was added (10 *μ*L/well). Microplates were further incubated for 4 hours at 37°C and the quantity of formazan product obtained was determined spectrophotometrically using a microprocessor-controlled multiplate reader (Labexim LMR-1) at 580 nm. The cell survival fractions were calculated as a percentage of the untreated control (untreated control = 100%). The experimental data were transformed to sigmoidal dose-response curves using nonlinear regression analysis (GraphPad Prizm), which enabled the calculation of the corresponding IC~50~ values.
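As a rough illustration of the survival-fraction arithmetic and the IC~50~ read-out described above, the sketch below uses a crude linear interpolation between bracketing concentrations rather than the paper's GraphPad sigmoidal fit; all function names and numbers are hypothetical:

```javascript
// Hypothetical sketch, not the study's actual analysis pipeline.
// Survival fraction: treated optical density as a % of the untreated control.
const survivalPercent = (treatedOD, controlOD) => (treatedOD / controlOD) * 100;

// Crude IC50: linear interpolation between the two tested concentrations
// that bracket 50% survival (the paper instead fits a sigmoidal curve).
const crudeIC50 = (concentrations, survivals) => {
    for (let i = 1; i < concentrations.length; i++) {
        if (survivals[i - 1] > 50 && survivals[i] <= 50) {
            const t = (survivals[i - 1] - 50) / (survivals[i - 1] - survivals[i]);
            return concentrations[i - 1] + t * (concentrations[i] - concentrations[i - 1]);
        }
    }
    return null; // 50% inhibition never reached within the tested range
};

// Example with made-up readings: survival falls through 50% between 50 and 100 uM.
const ic50 = crudeIC50([12.5, 25, 50, 100], [95, 80, 60, 40]); // 75 (uM)
```

A curve-fitting package would of course give a smoother estimate; the interpolation above only shows where the IC~50~ number comes from conceptually.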
2.4. DNA fragmentation analysis {#subsec2.4}
-------------------------------
The mono- and oligonucleosomal fragmentation of genomic DNA characteristic of apoptosis was detected using the "Cell Death Detection" ELISA kit (Roche Diagnostics, Germany). The exponentially growing MGH-U1, K-562, HD-MY-Z, and SKW-3 cells were plated in sterile Petri dishes and exposed to equipotent concentrations for 24 hours. Cytosolic fractions of 1 × 10^4^ cells per group (treated or untreated) served as the antigen source in a sandwich ELISA utilizing a primary antihistone antibody-coated microplate and a secondary peroxidase-conjugated anti-DNA antibody. The photometric immunoassay for histone-associated DNA fragments was executed in accordance with the manufacturer\'s instructions at 405 nm using an ELISA reader (Labexim LMR-1). The results were expressed as the oligonucleosome enrichment factor, representing the ratio between the absorption in the treated versus the untreated control samples.
2.5. Cellular accumulation kinetics {#subsec2.5}
-----------------------------------
Aliquots of 2 × 10^7^ K-562 cells and HD-MY-Z cells (in 2 mL RPMI 1640) were placed in sterile Petri dishes and exposed to different concentrations of the gold complex (12.5, 25, 50, and 100 *μ*M) for 30 or 60 minutes. After the exposure period, the cells were spun at 2000 rpm for 5 minutes and the drug-containing medium was discarded. The cells were then washed thrice with phosphate buffered saline and aliquots were taken for counting and cell viability determination (trypan blue dye exclusion assay). Thereafter, the cells were digested in 50 *μ*L 10% Triton X-100 (EDTA) for 5 minutes, and a 950 *μ*L mixture of concentrated hydrochloric acid and concentrated nitric acid (3 : 1) was added in order to allow complete decomposition of the cells and dissolution of the accumulated gold complex for 1 hour. Following the complete decomposition of the cells, the volume of the samples was adjusted to 1 mL and the gold concentration was determined. Gold concentrations in the sample solutions higher than 0.5 *μ*g/mL were determined using flame atomic absorption spectrometry (FAAS) with an air-acetylene flame (PYE UNICAM SP 1950). Lower gold concentrations were determined by electrothermal AAS using a Perkin-Elmer Zeeman 3030 spectrometer with an HGA-600 graphite furnace. The light source was a hollow cathode lamp for Au. The spectral bandpass was 0.7 nm. Standard uncoated graphite tubes were used as atomizer. Only peak areas were used for quantification. The results were expressed as nmol gold/10^6^ cells.
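The final normalization described above (measured gold concentration in the digest converted to nmol gold per 10^6 cells) is simple arithmetic, sketched below; the molar mass constant and the example reading are illustrative assumptions, not values reported in the study:

```javascript
// Illustrative sketch (not from the paper): convert a gold concentration
// measured by AAS in the digest (ug/mL) into nmol Au per 10^6 cells.
const AU_MOLAR_MASS = 196.97; // g/mol, standard atomic weight of gold

const nmolGoldPerMillionCells = (ugPerMl, sampleVolumeMl, cellCount) => {
    const totalUg = ugPerMl * sampleVolumeMl;          // total gold in the digest
    const totalNmol = (totalUg / AU_MOLAR_MASS) * 1e3; // ug / (g/mol) = umol; x1000 -> nmol
    return totalNmol / (cellCount / 1e6);              // normalize to 10^6 cells
};

// e.g. a 1 mL digest of 2e7 cells reading 196.97 ug/mL (hypothetical value)
const result = nmolGoldPerMillionCells(196.97, 1, 2e7); // 50 nmol per 10^6 cells
```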
2.6. Data processing and statistics {#subsec2.6}
-----------------------------------
The cytotoxicity assays were carried out in eight separate experiments, whereas the apoptosis induction evaluation was conducted in quadruplicate. Data processing was performed with MS Excel and GraphPad Prizm software for PC. Student\'s *t*-test was performed, with *P* ≤ .05 taken as the significance level.
3. RESULTS {#sec3}
==========
3.1. Cytotoxicity against tumor cell lines {#subsec3.1}
------------------------------------------
The cytotoxic potential of the novel Au(II) complex was studied in a panel of malignant cell lines, originating from leukaemias, lymphomas, and solid tumors. The results of the chemosensitivity screening program, following 72-hour treatment, are summarized in [Table 1](#tab1){ref-type="table"}. Throughout the screening investigation cisplatin was used as a positive control.
The cytotoxic effects of the Au(II) complex were evaluated using the concentration-response curves presented in [Figure 2](#fig2){ref-type="fig"}. Cellular viability was reduced significantly, with 50% inhibition reached at micromolar concentrations in the majority of the cell lines tested.
Generally, the leukaemia and lymphoma-derived cell lines were more sensitive to the Au(II) complex and the IC~50~ values obtained were comparable to those of the referent anticancer drug cisplatin. Among them the most responsive tumor model was the T-cell leukaemia SKW-3 (KE-37 derivative) ([Table 1](#tab1){ref-type="table"}, [Figure 2](#fig2){ref-type="fig"}). Against this cell line the relative potency of \[Au(II)Hp~−2H~.(H~2~O)~2~\] even surpassed that of cisplatin. In contrast, the solid-tumor-derived cell lines showed far more pronounced sensitivity to cisplatin as compared to the novel gold(II) complex. The murine neuroblastoma Neuro2A was resistant to both metal complexes.
3.2. In vitro cytotoxicity study of \[Au(II)Hp~−2H~.(H~2~O)~2~\] on human kidney cells in comparison to cisplatin {#subsec3.2}
-------------------------------------------------------------------------------------------------------------------
The nephrotoxicity of the novel compound was estimated in an in vitro test system in comparison with the established nephrotoxic drug cisplatin. The human embryonic kidney 293T cells were exposed for 72 hours to either cisplatin or \[Au(II)Hp~−2H~.(H~2~O)~2~\], and thereafter their viability was assessed with the MTT-dye reduction assay ([Figure 3](#fig3){ref-type="fig"}). Throughout the tested range of concentrations the gold(II) complex proved to be only marginally cytotoxic and failed to cause a 50% reduction of cell viability. In contrast, the referent drug cisplatin exhibited more pronounced cytotoxicity against kidney cells, with an IC~50~ value of 3.87 *μ*M.
3.3. Induction of apoptosis following \[Au(II)Hp~−2H~.(H~2~O)~2~\] treatment {#subsec3.3}
------------------------------------------------------------------------------
Regardless of their principal modes of action, the majority of anticancer drugs share the distinction of being capable of recruiting the apoptotic cell death signaling pathways in malignant cells. Therefore, the ability of \[Au(II)Hp~−2H~.(H~2~O)~2~\] to evoke genomic DNA-fragmentation, a key hallmark of programmed cell death, was investigated. For this purpose, the exponentially growing SKW-3, K-562, HD-MY-Z, and MGH-U1 cells were exposed to equieffective concentrations of the gold(II) complex or cisplatin for 24 hours, and thereafter the levels of oligonucleosomal DNA fragmentation were assessed using a commercially available ELISA kit ([Figure 4](#fig4){ref-type="fig"}).
The results obtained indicate that \[Au(II)Hp~−2H~.(H~2~O)~2~\] is a potent apoptosis inductor, causing a level of DNA fragmentation similar to that of the referent drug cisplatin when applied at equipotent concentrations. The effect of the gold complex in MGH-U1 and HD-MY-Z was characterized by a straightforward concentration dependence, whereby the proportion of apoptotic cells rose with increasing concentration. In contrast, in SKW-3 the level of oligonucleosomal DNA-fragmentation was found to decrease at the highest concentration as compared to the lower ones. This result could be ascribed to a relative increase of the proportion of necrotic cells, which are undetectable in our experimental setting. The gold complex failed to induce prominent DNA-fragmentation in K-562 cells throughout the concentration range used. The level of apoptosis was significantly higher than the control only after exposure of cells to the highest concentration (twice the IC~50~ value).
3.4. Intracellular accumulation of \[Au(II)Hp~−2H~.(H~2~O)~2~\] {#subsec3.4}
-----------------------------------------------------------------
The intracellular levels of gold attained after 30 or 60 minutes of treatment of either K-562 or HD-MY-Z cells with the Au(II) complex are depicted in [Figure 5](#fig5){ref-type="fig"}. A prominent time- and concentration-dependent pattern of gold accumulation is evident, whereby the intracellular levels in the K-562 cells are substantially higher as compared to those in HD-MY-Z cells. These data indicate that putative pharmacological targets of the tested compound are readily accessible after a short incubation period.
4. DISCUSSION {#sec4}
=============
To the best of our knowledge, the present study is the first one addressing the cytotoxic potential of stable monomeric octahedral Au(II) complexes. The antiproliferative activity of the novel compound was evaluated in a wide spectrum of cell lines representative of some important types of human cancer. The results of the MTT-dye reduction assay unambiguously indicate that \[Au(II)Hp~−2H~.(H~2~O)~2~\] exerts a potent cytotoxic/antiproliferative effect which in some cases is comparable to that of the referent cytotoxic drug cisplatin. Among the cell lines under evaluation, the human T-cell leukaemia SKW-3 proved to be the most sensitive to Au(II)-complex treatment; indeed, the IC~50~ value in these cells was lower than that of cisplatin. In the other leukaemia models, the relative potency of \[Au(II)Hp~−2H~.(H~2~O)~2~\] was somewhat lower but more or less comparable to that of cisplatin. Our experimental data indicated that cisplatin was prominently superior against the solid tumor-derived cell lines.
The clinically used platinum drugs as well as the gold antirheumatic agents are characterized by significant nephrotoxicity, which is recognized as a major dose-limiting factor. Hence, we sought to determine the nephrotoxic potential of \[Au(II)Hp~−2H~.(H~2~O)~2~\] versus the established nephrotoxin cisplatin. Transformed cell lines appear to be an attractive model among the in vitro test systems used for assessment of nephrotoxicity: they are more versatile than primary cells on the one hand and retain most of the biochemical features of normal kidney tissue on the other \[[@B26]\]. In the present study, 293T cells were used, which have recently been characterized as a suitable model for in vitro assessment of nephrotoxicity \[[@B27]--[@B29]\]. The newly synthesized complex \[Au(II)Hp~−2H~.(H~2~O)~2~\] proved to be far less cytotoxic against kidney cells and, in contrast to cisplatin, failed to induce 50% inhibition of cellular viability. Cisplatin, by comparison, exerted prominent cytotoxic effects, with an IC~50~ value similar to those obtained in the cancer cell lines.
Another important objective of the present investigation was to determine the intracellular penetration of the gold complex. Although the specific mechanisms of the cytotoxicity of gold species are not fully elucidated, there is a general consensus that they interact with intracellular targets, a feature which is common to most anticancer drugs \[[@B7], [@B10]\]. Thus, the cellular accumulation of the drug appears to be a crucial prerequisite for optimal cytotoxic activity. The novel gold(II) complex is characterised by a significant intracellular accumulation, which is more pronounced in the leukemic model K-562 than in the HD-MY-Z Hodgkin\'s lymphoma. A possible explanation is the difference in culture type between these cell lines: while the K-562 cells are suspended in the medium, the HD-MY-Z cells tend to attach to the bottom of the cultivation vessel, forming monolayers. Consequently, the exposure area in K-562 cells is greater in comparison to that of the semiadherent HD-MY-Z.
In order to elucidate the mechanisms underlying the observed cytotoxicity of \[Au(II)Hp~−2H~.(H~2~O)~2~\], the level of DNA-fragmentation has been quantified. The induction of programmed cell death appears to be a common feature mediating the cytotoxic effects of anticancer agents and in particular of metal-based drugs \[[@B1], [@B4]\]. The results reported here confirm the general character of this phenomenon: \[Au(II)Hp~−2H~.(H~2~O)~2~\] was found to induce apoptotic cell death in SKW-3, MGH-U1, K-562, and HD-MY-Z cells after 24-hour exposure. The low responsiveness of K-562 cells to the proapoptotic effects of \[Au(II)Hp~−2H~.(H~2~O)~2~\] is the most probable explanation for the discrepancy between its low sensitivity to the gold agent and its excellent intracellular accumulation patterns.
The present data for the Au(II) metalloporphyrin complex are in accordance with the effects established with structurally similar octahedral Pt(III) complexes, which are also characterized by significant cytotoxicity \[[@B22]\]. The complex \[Au(II)Hp~−2H~.(H~2~O)~2~\] has a structure analogous to one of these Pt(III) complexes, both having an octahedral geometry with the metal center coordinated in the porphyrin ring via the four pyrrolic nitrogens. The juxtaposition of their cytotoxicity shows that while the platinum complex is far more active against the chronic myeloid leukaemia LAMA-84 \[[@B18]\], the Au(II) complex studied exerts superior activity against the T-cell leukaemia SKW3 (KE-37). These data correlate well with the reported specific inhibitory effect of gold species upon immune cells, and T-cells in particular \[[@B7]\].
The selective uptake of porphyrins in malignant tissues/cells is due to complex mechanisms, among them the most important being the LDL-receptor mediated endocytosis of porphyrin/lipoprotein complexes formed in the circulation. Hence, porphyrins are employed as targeting moieties to ensure selective accumulation of cytotoxic agents into the solid tumor microenvironment \[[@B19]--[@B21]\]. The novel complex used in the present study demonstrated significant intracellular accumulation presumably mediated by formation of FCS-lipoprotein complexes and subsequent endocytosis. Due to the phototoxic properties of the ligand, a light-borne augmentation of the cytotoxicity of \[Au(II)Hp~−2H~.(H~2~O)~2~\] could not be ruled out and would be addressed in a further, more detailed evaluation of this compound.
All experimental data presented in this study indicate that \[Au(II)Hp~−2H~.(H~2~O)~2~\] is a biologically active compound with well-pronounced cytotoxic and proapoptotic properties against malignant cells. As compared to cisplatin, it is less cytotoxic for human kidney cells and this feature may prove to be advantageous.
The present study was financially supported by the National Science Fund of the Bulgarian Ministry of Education and Science through Grant no. WU-06/05.
![Chemical structure of the tested gold complex \[Au(II)Hp~−2H~.(H~2~O)~2~\].](BCA2008-367471.001){#fig1}
![Concentration-response curves of \[Au(II)Hp~−2H~.(H~2~O)~2~\] (■) and cisplatin (▲) against a panel of tumor cell lines as assessed by the MTT-dye reduction assay after 72-hour exposure. Each data point represents the arithmetic mean ± sd of eight separate experiments.](BCA2008-367471.002){#fig2}
![Cytotoxic effects of \[Au(II)Hp~−2H~.(H~2~O)~2~\] (■) and cisplatin (▲) against the human embryonic kidney cell line 293T as assessed by the MTT-dye reduction assay after 72 hours exposure. Each data point represents the arithmetic mean ± sd of eight separate experiments.](BCA2008-367471.003){#fig3}
![Internucleosomal DNA fragmentation in SKW-3, K-562, HD-MY-Z, and MGH-U1 cells after 24-hour exposure to equipotent concentrations of \[Au(II)Hp~−2H~.(H~2~O)~2~\] (white columns) or cisplatin (gray columns). The level of DNA fragmentation expressed as the corresponding enrichment factor (ef = 1 in untreated control) was determined using "Cell Death Detection" ELISA (Roche Diagnostics).](BCA2008-367471.004){#fig4}
![Intracellular accumulation of gold following \[Au(II)Hp~−2H~.(H~2~O)~2~\] treatment of HD-MY-Z or K-562 cells for 30 minutes (white columns) or 60 minutes (gray columns), means of 4 independent experiments.](BCA2008-367471.005){#fig5}
######
IC~50~ values of Au(II)Hp~−2H~.(H~2~O)~2~ and cisplatin against a panel of tumor cell lines assessed after 72-hour exposure (MTT-dye reduction assay).
Cell line     Origin/Cell type            \[Au(II)Hp~−2H~.(H~2~O)~2~\]   Cisplatin
------------- --------------------------- ------------------------------ -----------
LAMA-84 chronic myeloid leukaemia 65.1 20.3
K-562 chronic myeloid leukaemia 161.2 32.0
SKW-3^(a)^ T-cell leukaemia 7.6 11.7
DOHH-2 nonHodgkin lymphoma 50.1 35.0
HD-MY-Z Hodgkin\'s lymphoma 43.1 12.2
MGH-U1^(b)^ urinary bladder cancer 56.4 5.9
MCF-7 breast cancer 166.8 33.4
SAOS-2 osteogenic sarcoma \>200 15.2
Neuro-2a murine neuroblastoma \>200 \>200
^(a)^ KE-37 derivative;
^(b)^ Formerly designated as EJ.
[^1]: Recommended by Lundmila Krylova
Two Lehigh researchers studied full-term births in a four-county New Jersey area immediately downwind of the Portland Generating Station in Upper Mount Bethel Township, Pennsylvania. Photo courtesy of New Jersey Department of Environmental Protection.
Women who live with pollution generated by coal-fired power plants may be far more likely to give birth to a child with below normal weight than women who are not subjected to similar pollution, according to a study by Lehigh researchers.
Muzhe Yang, associate professor of economics, and co-author Shin-Yi Chou, professor of economics, studied 52,000 full-term births in a four-county New Jersey area immediately downwind of Portland Generating Station in Upper Mount Bethel Township, Pennsylvania, from 2004 to 2010.
They found that the likelihood of having a low birth weight (2,500 grams or less) baby jumped from 2 to 3 percent if the mother was exposed to sulfur dioxide emissions during the first month of pregnancy.
“This is a 50 percent increase in the occurrence of low birth weight among full-term babies,” Yang said. “So really, this is a big effect, and full-term, low birth weight usually results from intrauterine growth restriction.”
In 2011, the U.S. Environmental Protection Agency ruled that the plant, located on the west bank of the Delaware River, was the sole reason that four downwind New Jersey counties often suffered sulfur dioxide levels in excess of EPA’s National Ambient Air Quality Standards. Warren County is across the river from the plant. Sussex, Morris and Hunterdon counties are further away, but close enough to be affected by sulfur dioxide released from the plant and carried by prevailing westerly winds, Yang said.
According to a 2007 report by the Environmental Integrity Project, the Portland Generating Station was ranked fifth among the top 50 “dirtiest” power plants by sulfur dioxide emission rate. The researchers said the 30,465 tons of sulfur dioxide emitted by the plant in 2009 was more than double the sulfur dioxide emissions from all power-generating facilities in New Jersey combined.
Then-owner GenON REMA LLC challenged in court the EPA’s authority to impose emission limits on the plant. But in 2013, the U.S. Third Circuit Court of Appeals upheld the so-called “Portland Rule,” and in 2014, the plant stopped burning coal under a consent decree with the N.J. Department of Environmental Protection. The researchers recognized that the ruling provided them with a unique opportunity to study the causal impacts of a coal-fired power plant on its downwind neighbors.
“It’s about the most famous ruling in recent history because this is the first time that the EPA won the case where the sole contributor is identified,” Yang said. “In general, when we talk about air pollution, it is very difficult to identify one single source.”
The study—the first on the impacts of prenatal exposure to a uniquely identified large polluter—notes that a full-term, low birth weight indicates an intrauterine growth restriction and that low birth weight infants can die at rates of up to 40 times greater than their normal weight counterparts.
In addition to being capable of causing respiratory distress, sulfur dioxide is also a known precursor in the formation of fine particles, often referred to in the scientific community as PM2.5 (particulate matter smaller than 2.5 micrometers in diameter). These smallest of dust particles, which are about 1/30th the width of a human hair, are “actually the biggest threat of all air pollutants,” Yang said, “because they can just penetrate through our lungs and permanently stay inside our bodies.”
The researchers combined data from several sources: birth certificates, power plant emission records, air pollution data and weather data.
Why it matters:
Weighing the long-term impact of coal-fired power plant pollution on prenatal life can help improve the cost-benefit analysis needed in establishing EPA regulations.
By Daryl Nerl
New Products
Skye has a programme of continual product improvement. We are a vibrant and enthusiastic Company with a list of New Products to develop that would take us into the next century!! We try to introduce a new product annually. Please see the posts below for our latest New Products.
Live display of the weather on your PC Live display on a website Pre-configured templates OR design your own Updates every 10 seconds Data sent by GPRS from remote stations Alarm functions Email of data Low cost On-line forums Visit www.skyedata.com for live display of the weather from Skye MiniMets Contact Skye for further details… [Read More]
SpectroSense2+/.GPS has been upgraded to include more Vegetation Indices. The indices are calculated from the raw data of the connected sensors and the results are displayed on the screen. Vegetation Indices available on the main menu are NDVI, PRI, MODIS-EVI, EVI2, MSAVI, LAI, fPAR, RVI, & WBI. Many related indices have been grouped together… [Read More]
PRI and NDVI Sensors & Systems These are set wavelengths for NDVI or PRI measurements with 10nm bandwidths. Other bespoke wavelengths are still available as normal. For further details, please click on the following link or contact Skye on [email protected] NDVI/PRI Sensors
NEW HYDROSENSE3 METER – We are happy to announce the launch of our new HydroSense3 meter, which is a new upgraded version of our popular HydroSense product. It allows easy, instantaneous readings & storing of soil moisture measurements from a Skye Needle Sensor and septum tensiometers, as well as being able to be used with electronic… [Read More]
ABSOLUTE CALIBRATION OF RADIANCE SENSORS – after a large investment in extra calibration equipment and procedures, we are now able to supply fully traceable calibration figures in engineering units for radiance sensors. In the past we were only able to give figures as a Relative Sensitivity between the channels in each sensor. Whilst these figures… [Read More]
NEW MULTI-SPECTRAL RADIOMETERS – Skye is pleased to announce the introduction of brand new multi-channel sensors, which have been designed especially for long term installation applications such as on Flux Towers. The new design can be built with up to 4 channels of ‘user choice’ wavelengths from 400 to 2500nm and ‘user choice’ band widths… [Read More]
Skye Instruments’ underwater lowering frame allows users to take accurate radiation measurements underwater. Up to two sensors may be mounted on the frame. This accessory is designed to be very stable, even in high flows.
Recently we attended Exeter University with an invitation to contribute to a meeting of UAV enthusiast experts and scientists with an interest in using unmanned aerial vehicles in remote sensing studies. UAVs are already used in individual studies all around the world, but Exeter University wants to develop a unique vehicle ready for off-the-shelf deployment… [Read More]
I want marriage and kids, but is now the time to talk about it?
My boyfriend and I have been together for three years. We're in our mid-30s and have been living together for almost a year. He's a great guy, and it's the best relationship I've ever had. He's smart, fun, caring, we have similar interests - we are just a really good match, in my opinion. His family tells me the same thing about us and he's given me no reason to think he doesn't feel the same.
So what's the problem? He's never been great with feelings and expressing himself. The most I get from him is the occasional "I love you." I know he does feel it, and he shows it in different ways (ie: he’s excited when he comes home and sees me, and sometimes he sings these silly little made up songs with my pet names in them). I was the one who had to broach the subject of moving in together, even though I was renting and he owned a condo. I feel like I'm always the one bringing up the big questions.
I'm at that point where I want to take the next step: marriage and kids. At my age, there is only so much of a window of time left to start a family. We have talked about marriage/kids, but only in a very general "yes, that's something I want" kind of way, and less framed around our specific plans as a couple.
With the current state of the world today, I have no idea how to start this conversation with him. The world is on lockdown due to COVID-19. There is no shopping for engagement rings, setting weddings dates, picking venues, etc. And who knows when any of that will be possible again; could be months, or over a year. He's fairly traditional and I know he'd want to be married first before having kids, but if we stick to that type of timeline, I could be nearly 40 before kids are an option, and I don't want to wait that long because it may be too late. I'd be fine with getting engaged (don't need a real ring) and then beginning to start a family soon, knowing we'd get married later once the world goes back to normal. I don't mind if we were to do the marriage and kids out of order, but I'm worried he will care.
And because he's been so bad about expressing himself in the past, I guess I don't even feel 100 percent sure that he wants what I want. I'm nervous about how things will play out if we aren’t on the same page once we begin to discuss this. He changed jobs literally right before the pandemic hit and he's been quite stressed with learning this new role remotely, and I don't want to add any more stress to his plate, but I also don't want to put my life and future on hold. How do I have this conversation? How can I try to get him to understand we don't have the luxury of time if we want a family? If the world wasn't in the state it is right now I'd be far less worried about talking to him about all of this.
– Starting a future during a pandemic?
This is a very stressful time. But life is still happening. In fact, some people are making big decisions because of this pandemic. We've seen stories about secluded proposals and Zoom weddings. Trust me, wedding planners are booking out into 2021 and beyond.
Your boyfriend is learning a new job, which is a lot to handle right now (although it’s great that he has work, in general ... so many people don't). But that's temporary stress. You're assuming that your life plans will be new obligations for him to worry about, but they're supposed to be things to look forward to. If he wants to get married and have kids someday, this talk will be about shared desires and how to make them work.
Everything you said in this letter makes perfect sense. I mean, you could wait a few more months before you act on any decisions; we have no idea what will go down over the next year, and maybe we'll see more light at the end of the tunnel soon. But at the very least, you can talk about possible timelines. Honestly, if he's not on board with any of this, you need to know.
I'm sorry you have to be the one to bring it up, but sometimes one partner is better at that kind of thing. Hopefully he can balance that by bringing some enthusiasm to the table. Maybe he'll write some little songs about a life with you. Get brave, be honest, and see if he does.
Reader Favorites
The Book
CAN’T HELP MYSELF is Meredith’s memoir about giving advice, learning from readers, working with an ex, and moms and daughters. It’s also a story about how an online community can become another kind of family.
Hip Hop Museum Could Be Coming To Harlem
The Hip Hop Hall of Fame is a non-profit founded by James Thompson. James is also an Army veteran. He started the organization on a mission to primarily chronicle Hip Hop’s influence & significance on economic and social factors. Furthermore, Thompson even created and executive-produced a Hip Hop Hall of Fame Awards show on BET.
Also, according to Newsweek, the Hip Hop Hall of Fame announced on Tuesday, June 6th, that they won a bid for a building in Harlem near the Apollo Theater. It is a twenty-story building, as reported in Time Out New York. Additionally, the development will be completed in two phases. The Hip Hop Museum is set to feature an art gallery, event space, multimedia studio, and a number of other amenities.
The non-profit is also said to be hosting a $150 million fundraising campaign to assist in constructing the museum. Lastly, James Thompson admitted that he’d wanted to get the project completed years ago. He said after the loss of Tupac and Biggie, he felt that it wasn’t the right time.
So now, with the building secured, maybe Hip Hop lovers can eventually experience The Hip Hop Hall of Fame Museum. |
import { isString } from '../../utils/isString';
import { createFileProcessorFunction } from './createFileProcessorFunction';
export const createProcessorFunction = (apiUrl = '', action, name, options) => {
// custom handler (should also handle file, load, error, progress and abort)
if (typeof action === 'function') return (...params) => action(name, ...params, options);
// no action supplied
if (!action || !isString(action.url)) return null;
// internal handler
return createFileProcessorFunction(apiUrl, action, name, options);
};
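A minimal usage sketch of the three branches above may help; `isString` and `createFileProcessorFunction` are stubbed stand-ins here (the real implementations live in the imported modules), and the field name `'filepond'` and the URLs are hypothetical:

```javascript
// Stand-ins for the real imports, just enough to exercise the branches.
const isString = (value) => typeof value === 'string';
const createFileProcessorFunction = (apiUrl, action) => () => `${apiUrl}${action.url}`;

const createProcessorFunction = (apiUrl = '', action, name, options) => {
    // custom handler: forward the field name first, then params, then options
    if (typeof action === 'function') return (...params) => action(name, ...params, options);
    // no usable action supplied: processing is disabled
    if (!action || !isString(action.url)) return null;
    // url-based action: delegate to the internal processor factory
    return createFileProcessorFunction(apiUrl, action, name, options);
};

// branch 1: a user-supplied processing function
const custom = createProcessorFunction('', (name, file) => `${name}:${file}`, 'filepond');
// branch 2: no action configured
const disabled = createProcessorFunction('', null, 'filepond');
// branch 3: an action object with a url string
const internal = createProcessorFunction('https://api.example.com', { url: '/process' }, 'filepond');
```

The early-return cascade keeps the precedence explicit: a function action always wins, and a null return tells the caller that processing is switched off rather than broken.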
Q:
Why do XSS strings often start with ">?
One of the ways, XSS can be exploited, is to use following tag:
"><script>alert(document.cookie)</script>
Here, What is the meaning of "> before script (<script> tag) and why it is used?
A:
This way you escape from a double-quoted attribute (") and close the previous tag (>) before opening a script tag that contains your payload. It's one of the most basic XSS patterns.
Example:
<input type="text" value="$XSS">
With your sequence it becomes:
<input type="text" value=""><script>alert(document.cookie)</script>">
^- a completed tag ^- payload garbage -^
Note that your vector only works if HTML entities aren't filtered.
So if you can't escape from that attribute, it's XSS-safe. This doesn't trigger:
<input type="text" value="<script>alert(document.cookie)</script>">
You can see the same idea with XSS inside JavaScript (e.g. '); to end a string and close a function call) or with SQL injections. The first characters of an injection sequence often serve to escape from the current context.
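The defense follows from the same logic: if the characters that close an attribute or tag are entity-encoded before the input lands in the page, the payload can never break out of its context. A minimal sketch, using a hypothetical `escapeHtml` helper (not from any particular library):

```javascript
// Encode the characters that let input escape an attribute or tag context.
const escapeHtml = (str) =>
  str.replace(/[&<>"']/g, (ch) => ({
    '&': '&amp;',
    '<': '&lt;',
    '>': '&gt;',
    '"': '&quot;',
    "'": '&#39;',
  }[ch]));

const payload = '"><script>alert(document.cookie)</script>';
const safe = `<input type="text" value="${escapeHtml(payload)}">`;
console.log(safe);
// → <input type="text" value="&quot;&gt;&lt;script&gt;alert(document.cookie)&lt;/script&gt;">
```

With the quote and angle brackets encoded, the browser renders the payload as inert attribute text instead of parsing a new tag — which is exactly why the answer says the vector only works when entities aren't filtered.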
As for @Mindwin's obligatory SQL injection xkcd strip, I freehand-circled the part I'm referring to:
|
The effect of hypophysectomy on somatostatin-like immunoreactivity in discrete hypothalamic and extrahypothalamic nuclei.
Several hypothalamic and extrahypothalamic sites that have high concentrations of somatostatin-positive nerve terminals and/or cell bodies are important in the regulation of GH secretion. GH is capable of inhibiting its own secretion under certain prescribed conditions, and a short loop feedback regulatory mechanism may involve somatostatinergic pathways. The purpose of this investigation was to determine the effect of removal of GH by hypophysectomy on the content of somatostatin-like immunoreactivity (SLI) in discrete hypothalamic and extrahypothalamic nuclei. Individual nuclei were removed from frozen brain sections of hypophysectomized and sham-operated male rats. The tissue content of somatostatin was determined by a specific RIA. The content of SLI in the median eminence of hypophysectomized animals was significantly reduced by 38%, compared to sham-operated controls (278 +/- 53.2 vs. 447.0 +/- 57.4 pg/microgram protein, respectively). Significant reductions of SLI in the medial preoptic (50%), arcuate (33%), and periventricular (30%) nuclei were also observed in hypophysectomized animals when compared to controls (10.2 +/- 1.6 vs. 20.0 +/- 3.0; 60.2 +/- 8.2 vs. 89.8 +/- 13.3; and 19.4 +/- 1.8 vs. 27.8 +/- 3.1 pg/microgram protein, respectively). No significant changes were detected in the ventromedial, suprachiasmatic, medial, central, or cortical amygdaloid nuclei nor in the nucleus interstitialis striae terminalis. These data suggest that GH may exert a feedback effect on specific hypothalamic nuclei that involves somatostatin-containing systems.
Unlike some of his colleagues and the President, McConnell stopped short of stating protesters were paid to protest. However, he did say they were clearly "trained," based on how well they performed.
In organizing protests, it is common practice to let participants know what to expect and what they should and should not do. Many of the most effective protests likely benefited from that shared knowledge.
However, McConnell, like his fellow members of the GOP, implied that the protesters were not there to express their own views and opinions, but rather were paid by rich liberal leaders to victimize the Republican Party. That narrative has dominated statements by numerous members of the GOP over the last several weeks.
The Congressperson as victim to the voice of their constituents was further emphasized by McConnell’s choice of words in his opening statement. He said:
“I couldn’t be prouder of the Senate Republican Conference… we were literally under assault.”
“These demonstrators, I’m sure some of them were well-meaning citizens. But many of them were obviously trained to get in our faces, to go to our homes up there. Basically almost attack us in the halls of the capitol. So there was a full-scale effort to intimidate…”
Watch McConnell’s comments here.
Considering the reason many people protested—the multiple sexual assault allegations against Kavanaugh—the Senate Majority Leader’s choice to use the word “assault” to describe his own situation angered many people. |
Sub-optimal dose of Sodium Antimony Gluconate (SAG)-diperoxovanadate combination clears organ parasites from BALB/c mice infected with antimony resistant Leishmania donovani by expanding antileishmanial T-cell repertoire and increasing IFN-gamma to IL-10 ratio.
We demonstrate that the combination of sub-optimal doses of Sodium Antimony Gluconate (SAG) and the diperoxovanadate compound K[VO(O2)2(H2O)], also designated as PV6, is highly effective in combating experimental infection of BALB/c mice with antimony resistant (Sb(R)) Leishmania donovani (LD), as evident from the significant reduction in organ parasite burden where SAG is essentially ineffective. Interestingly, such treatment also allowed clonal expansion of antileishmanial T-cells coupled with a robust surge of IFN-γ and a concomitant decrease in IL-10 production. The splenocytes from the treated animals generated significantly higher amounts of IFN-γ-inducible parasiticidal effector molecules like superoxide and nitric oxide as compared to the infected group. Our study indicates that the combination of sub-optimal doses of SAG and PV6 may be beneficial for the treatment of SAG resistant visceral leishmaniasis patients.
Silver pikeconger
The silver pikeconger (Hoplunnis pacifica) is an eel in the family Nettastomatidae (duckbill/witch eels). It was described by E. David Lane and Kenneth W. Stewart in 1968. It is a marine, tropical eel which is known from the eastern Pacific Ocean. Males can reach a maximum total length of , but more commonly reach a TL of .
References
Category:Nettastomatidae
Category:Fish described in 1968 |
Specialized intramembrane organizations of the cone presynaptic membrane in the pigeon retina. Freeze-fracture study.
The presynaptic membranes of the cone cell endings of the pigeon retina were investigated using the freeze-fracture technique. En face views of the cytoplasmic leaflet (P-face) of the split presynaptic membrane revealed several specialized membrane organizations: 1. membrane particle aggregates composed of 10-20 particles which were larger than the usual ones seen in the cell membrane; 2. fenestration-like circular structures of 30-50 nm in diameter which were not surrounded by membrane particles; 3. similar circular structures as described above but which were accompanied by a few membrane particles on the circular margin and were considered to be an intermediate form of the first and second membrane structures. These three structures appeared simultaneously in one fracture plane of the presynaptic membrane, were situated at the same intervals from one another, and were approximately equal in size to synaptic vesicles (30-50 nm). These findings strongly suggested that these three structures were serial events in presynaptic membrane organization. When fortuitous cross fractures exposed both the P-face of the presynaptic membrane and the adjacent cytoplasm of the cone ending, fusion of the synaptic vesicles to the presynaptic membrane was observed, and was considered to be the opening of the synaptic vesicle to the synaptic cleft. These openings were also situated at the same distance as the structures described above. These findings demonstrate the process of exocytosis of the synaptic vesicles by which the chemical transmitter is probably released to the synaptic cleft.
Andi Story
Andrea Douglas Story (born April 2, 1959) is a Democratic member of the Alaska Legislature representing the state's 34th House district.
Career
Story won the election for her House seat on November 6, 2018 as the candidate of the Democratic Party. She won fifty-three percent of the vote, while Republican Jerry Nankervis received forty-seven percent.
References
Category:1959 births
Category:21st-century American women politicians
Category:Alaska Democrats
Category:Living people
Story, Andi
Category:People from Juneau, Alaska
Category:Women state legislators in Alaska
Category:People from Olivia, Minnesota |
1. Field of the Invention
The present invention relates to an automatic drawer slide homing apparatus, and more particularly to an automatic drawer slide homing apparatus capable of increasing the length of embedding a middle rail onto a bottom rail by suspending a slide base, such that when a drawer is pulled out, the drawer can be supported to prevent a deformation of the rails.
2. Description of the Related Art
A traditional hanging basket or drawer slide structure usually installs corresponding rollers on both sides of a cabinet, a rail on each side of the hanging basket or drawer, a roller on each side of the rail, and a concavely curved guiding track at the rear end of the rail, wherein the guiding track is positioned lower than the rail. The rollers roll to move the hanging basket or drawer along the guiding track of a rail, a slope forms naturally between the guiding track and the rail, and the weight of the hanging basket or drawer positions the drawer at a fixed position. However, the hanging basket or drawer has its own weight and carries heavy objects, and the pressure exerted on the guiding track will deform it, changing the slope between the rail and the guiding track from smooth to rough. As a result, the drawer easily gets stuck or moves roughly when pushed inward from the guiding track. Since the guiding track is positioned lower than the rail, users may find it difficult to move drawers containing heavy objects from the guiding track toward the rail, and such an application definitely requires improvement.
Referring to FIG. 1 for R.O.C. Pat. No. 504988, an external slide element is provided to overcome the foregoing shortcomings. The external slide element includes an external fixed base fixed at an end of the slide element, a quick restoring element that freely slides back and forth in the fixed base, and a resilient element coiled around the periphery of the fixed base with both ends fixed to the quick restoring element. When the internal slide element is pulled out from the drawer, an end of a pulling element hooks the quick restoring element, which is displaced outward and latched at the final position of a path in the fixed base. In the meantime, the quick restoring element pulls the resilient element into a tense state; then, as the internal slide element shuts the drawer and the pulling element and the quick restoring element engage, the tensed resilient element pulls the drawer back rapidly, achieving the effect of automatically shutting the drawer. Although this patented invention avoids the difficulty of pulling a drawer, the fixed base is fixed at an end of the external slide element and the quick restoring element is pivotally coupled to the fixed base and an end of the external slide element, so the length of the slide element embedded in the external slide element becomes relatively short. When the slide element moves outward with the internal slide element connected to the drawer, the drawer cannot be fully supported when pulled out, and may become deformed. Since the quick restoring element must be raised in order to latch the pulling element, it may easily get stuck, and all of this causes inconvenience in the use of drawers.
SEATTLE - There is no denying Puget Sound has seen remarkable growth. Seattle, for example, has for years been the fastest-growing city in the US.
With growth comes more density, more traffic and more expensive homes.
Despite all that, people move here in droves.
“People are friendly, but you do get the Seattle freeze when it gets cold, that I do notice,” Ray Munsami said.
Munsami moved to Washington state from Texas because of family. Q13 News also met a woman who moved to Seattle from France who says a job and a healthy lifestyle in this region attracted her here.
“I really like it,” said the woman.
Both of those people were at a branch of the Department of Licensing on Thursday, an agency that tracks new Washington state driver’s licenses.
The latest numbers from DOL continue to show a decrease in people from out of state moving to Washington. There were 9,169 fewer people who moved into Washington from other states in the first two months of 2019 compared to the same time in 2016.
Couple that with the fact that home prices in King County have dropped more than $100,000 in the last year. So could all that signal an end to the economic boom in our region?
“I don’t think it is,” Executive Director of Puget Sound Regional Council Josh Brown said.
Brown says when it comes to the local economy, one thing is king: jobs.
“Our economy continues to perform very well. The 5 largest employers in Washington state are all in our region,” Brown said.
The region last year created more jobs than the previous year.
“Last year we actually rebounded and created 55,000 jobs, remember this was during the HQ2 debate, it was the head tax,” Brown said.
Brown is talking about Amazon’s search and announcement over a second headquarters elsewhere in the country.
But what about the news that Amazon will no longer move into the Rainier Square Tower in Seattle?
Brown says region-wide that will have little impact since it appears the tech giant will move those jobs to Bellevue.
“At the end of the day, whether Amazon’s presence is in Seattle or Bellevue or anywhere in our region, the fact that they are still advertising 10,000 jobs shows they're committed to our region,” Brown said.
Besides jobs, another important player is Sea-Tac Airport, which just eclipsed Las Vegas as the 8th busiest airport in the country.
“Airport data is important because it shows disposable income,” Brown said.
As for the big drop in home prices, Brown says it’s important to watch the market.
“You could argue that there are other things that are impacting those factors in case of real estate prices, interest rates have gone up in the same period of time,” Brown said. |
After a four-week hiatus, Grey’s Anatomy came back March 24 with a bang (literally), which left me stunned…and a little peeved.
Whenever characters on television are driving and talking, I always get nervous. Inevitably, intense car scenes end in trauma — and unfortunately, the March 24 episode of Grey’s Anatomy was no exception. Callie and Arizona were driving through the woods when…WHAM…disaster struck. And whether he deserves it or not, I place ALL the blame on Mark.
I love Calzona, so it breaks my heart watching Callie put Mark and Arizona on an equal level. Sure, Mark’s the father of Callie’s baby, but Arizona is her PARTNER. I don’t understand why she thinks they both get equal say in her life. Yes, parenting the child together is one thing. However, at the end of the day, Callie should be looking out for Arizona first and foremost.
But she doesn’t. After hyper-focusing on her baby shower the entire episode, Callie offers to take Arizona to a bed and breakfast away from the city. The trip goes well for a whopping 10 minutes, before Mark calls Callie, destroyed because he realizes Avery and Lexie are hooking up (that’s a whole other story). In typical Callie fashion, she puts Mark first and tries to call him back to console him. Frustrated (and rightfully so), Arizona grabs Callie’s cell phone and throws it in the back seat.
Never one to back down, Callie refuses to give up, takes off her seat belt (make a mental note of that) and picks up her phone. Meanwhile, she goes on a rant about how she’s trying to make Arizona, Mark and the baby inside her happy and it’s hard and she’ll do anything Arizona tells her.
(At this point, I was so angry at Callie. Once again, WHY is she trying to keep Mark happy? He’s the father and the friend, NOT her husband or boyfriend.)
Then, in a jealousy-fueled power struggle, Arizona asks Callie to marry her out of nowhere. It’s not romantic and it’s clearly not meant to happen right then. The proposal seems forced and done for the wrong reasons.
Unfortunately, before any of this can be digested Calzona’s car crashes and it’s all over. End scene. UGH.
Even though I saw it coming, my mouth was still agape. All I wanted was one scene with the old-fashioned Calzona chemistry! Is that too much to ask?
Apparently so. Next week, we’ll watch the beloved members of Seattle Grace Mercy West sing through the hallways as they desperately try to save Callie and her unborn baby. I hope they both make it, but I have a feeling Calzona the couple won’t be as lucky.
Do YOU think Calzona is over for good or will they figure out their issues after this accident?
What did you think about the rest of the episode “This Is How We Do It?” Will Adele be admitted into Mer and Der’s trial? Will Cristina become Chief Resident, even though Owen is in charge of picking?
Lastly, how do you feel about Avery and Lexie? Are they more than just sex or is she meant to get back together with Mark? Sound off below! |
Continuously monitoring a patient's physiological condition generally requires the patient's hospitalization, usually at great cost, especially where long-term monitoring is required. In some circumstances a wide variety of out-patient monitoring devices may be used to monitor the physiology of patients who are physically outside of the hospital. Some out-patient monitoring devices have a limited range of operation, requiring monitored patients to remain close to a receiving station and thus limiting their mobility. Other devices are adapted for monitoring mobile or ambulatory patients while they move about in a vehicle or on foot, and have a wide range of operation.
One such group of devices includes holter devices, which generally record a patient's physiological data, such as the patient's ECG, during a predetermined period of time for examination at a later time. Other devices include event recorders. These devices provide for the capture of a patient's physiological data during a physiological “event,” such as a cardiac arrhythmia or an episode of patient discomfort. These devices may be patient activated or activated automatically when physiological data are detected which meet predefined event criteria.
Holter devices and event recorders typically require that a patient return to the hospital periodically in order to transfer the recorded data. Some of these devices provide for transmission via telephone or other communications facilities to a remote location for interpretation by a clinician. These devices generally require additional communications and medical testing devices to be present at the patient's location. In the case of event recorders, unnecessary delay between event recording and transmission is often introduced where such additional devices are not present during the event.
The mobility of high-risk patients must be weighed against the need to monitor a patient's location in order to provide a patient with emergency medical attention should a dangerous event occur. |