Q:
Swift 3 sorting an array of tuples
I found these answers:
Sort an array of tuples in swift 3
How to sort an Array of Tuples?
But I'm still having issues. Here is my code:
var countsForLetter:[(count:Int, letter:Character)] = []
...
countsForLetter.sorted(by: {$0.count < $1.count})
Swift 3 wanted me to add the by: label, and now it says that the result of the call to sorted(by:) is unused.
I'm new to swift 3. Sorry if this is a basic question.
A:
You are getting that warning because sorted(by:) returns a new, sorted version of the array you call it on, so the warning is pointing out that you are not assigning the result to anything. You could say:
countsForLetter = countsForLetter.sorted(by: {$0.count < $1.count})
I suspect that you're trying to "sort in place", in which case you could change sorted(by:) to sort(by:) and it would sort countsForLetter in place, leaving it assigned to the same variable.
A:
sorted() returns a new array; it does not sort in place.
You can use:
countsForLetter = countsForLetter.sorted(by: {$0.count < $1.count})
or
countsForLetter.sort(by: {$0.count < $1.count})
|
---
abstract: 'The phase diagram and surface critical behaviour of the vertex-interacting self-avoiding walk are examined using transfer matrix methods extended using DMRG and coupled with finite-size scaling. Particular attention is paid to the critical exponents at the ordinary and special points along the collapse transition line. The question of the bulk exponents ($\nu$ and $\gamma$) is addressed, and the results found are at variance with previously conjectured exact values.'
address: 'Laboratoire de Physique Théorique et Modélisation (CNRS UMR 8089), Université de Cergy-Pontoise, 2 ave A. Chauvin 95302 Cergy-Pontoise cedex, France'
author:
- D P Foster and C Pinettes
title: 'Surface critical behaviour of the vertex-interacting self-avoiding walk on the square lattice'
---
Introduction
============
Polymers in dilute solution are known to undergo a collapse transition as the temperature or solvent quality is changed at what has come to be known as the $\Theta$-point[@flory]. Using universality arguments, it is reasonable to expect the thermodynamic behaviour of a lattice model to be the same as the continuum real system as long as the dimension of the system, basic symmetries and range of interactions are the same. Such lattice models (self-avoiding walks) have been used as models for real, linear polymers in solution for over three decades[@vanderbook].
The quality of the solvent may be introduced by the inclusion of short-ranged interactions in the model; typically an attractive energy is included for non-consecutive nearest-neighbour occupied lattice sites. This model is the standard Interacting Self-Avoiding Walk model (ISAW) or $\Theta$-point model[@flory; @degennes75]. The model has been shown to accurately predict the critical behaviour of a wide range of real linear polymers in solution, not only in the high-temperature phase, but also at the collapse transition, which occurs as the temperature is lowered, at the $\Theta$ temperature. The model is successful because it captures the strong entropic repulsion between different portions of the polymer chain (the self-avoidance), as well as the effect of the difference of affinity between monomer-monomer contacts and monomer-solvent contacts (attractive interaction).
Whilst the relevant physical dimension in polymer physics would usually be $d=3$, the ISAW model has been much studied in two dimensions. This is partly motivated by the realisation that $d=3$ is the upper critical dimension of the collapse transition, and that the model in two dimensions provides an interesting playground. In this paper we shall concentrate on the two-dimensional square lattice.
In the late eighties and early nineties there was much discussion about the universality class of the ISAW model, particularly with respect to the adsorption of the collapsing walk in the presence of an adsorbing wall[@ds; @seno88; @dsb; @merovitchlima; @dsc; @vss91; @foster92]. For a while there was an apparent contradiction between a slightly modified walk model on the hexagonal lattice (the $\Theta^\prime$ model) and the standard $\Theta$ model[@ds]. This contradiction arose in the surface exponents; the exact surface exponents from the $\Theta^\prime$ point model were not the same as those calculated numerically for the $\Theta$ model[@veal; @ss88]. The apparent contradiction was resolved when it was realised that the exact solution of the $\Theta^\prime$ model gives the exponents at the so called special point (where collapse and adsorption occur simultaneously) whilst the numerical calculations were at the ordinary point (where collapse occurs in the presence of the wall, but without adsorption)[@vss91]. This was verified for both models at the two different points using exact enumeration[@foster92].
The debate over the $\Theta$ and $\Theta^\prime$ models raised the question of to what extent the nature of the collapse transition depends on the details of the model. Different models were examined, and a range of collapse transitions were observed. Blöte and Nienhuis introduced an $O(n)$ symmetric model which in the limit $n\to 0$ gives a bond self-avoiding walk model, which is allowed to visit lattice sites more than once but not the lattice bonds[@blotenienuis]. The walk is not allowed to cross itself. Since the interactions are on the lattice vertices, we shall henceforth refer to this model as the vertex-interacting self-avoiding walk (VISAW). This model was shown to have a different collapse transition from the $\Theta$-point model, with different bulk exponents[@wbn]: the correlation length exponent is $\nu=12/23$ for the VISAW compared with $4/7$ for the ISAW, and the entropic exponent is $\gamma=53/46$ compared with $\gamma=8/7$ for the ISAW. These exponents are conjectured based on a Bethe-Ansatz calculation of the Izergin-Korepin model[@wbn] and, to the best of our knowledge, have not been numerically tested since their conjecture.
In recent years there has been a revival in another model with vertex interactions: the interacting self-avoiding trails (ISAT) model[@massih75]. This model corresponds to the VISAW in which the no-crossing constraint is relaxed. Evidence was presented by one of us that the $\nu$ exponent was also given by $\nu=12/23$, whilst $\gamma=22/23$[@F09]. A similar situation occurs in the ISAW on the Manhattan lattice, where the walk can only go one way down a row or column of lattice bonds, but the allowed direction alternates from one row (column) to the next. Here too the correlation length exponent is the same as the normal ISAW one, but $\gamma=6/7$ rather than $8/7$[@bradley89; @bradley90].
Recently the surface exponents of the ISAT model were calculated using transfer matrix calculations[@F10]. We propose here to similarly calculate the surface critical behaviour of the VISAW model. In the case of the VISAW model, the no-crossing constraint allows us to extend the transfer matrix calculation using the related density matrix renormalisation group (DMRG) method introduced by White[@white92; @white93], applied to two-dimensional classical spin models by Nishino[@nishino95] and extended to self-avoiding walk models by Foster and Pinettes[@FC03a; @FC03b].
The finite-size calculations rely on results from conformal invariance, which lead one naturally to calculate the scaling dimensions $x_\sigma$ and $x_\varepsilon$ with fixed boundary conditions. Translating these to the more standard exponents requires a knowledge of $\nu$. The value of $\nu$ arising from the transfer matrix calculation is at variance with the conjectured exact value for the model. We take the opportunity to extend the original transfer matrix calculation of Blöte and Nienhuis[@blotenienuis]. We find that, up to the lattice widths we attain, our best estimate is $x_\varepsilon=0.155$, as found by Blöte and Nienhuis[@blotenienuis], rather than the required $x_\varepsilon =1/12=0.083333\cdots$. We conclude that either the finite-size effects are particularly severe with this particular model, or a more subtle effect is at play. Either way more work is required.
Model and Transfer Matrix Calculation
=====================================
The model studied here is defined as follows: consider all random walks on the square lattice which do not visit any lattice bond more than once. The walk is not allowed to cross at doubly visited sites but may “collide”. A collision is assigned an attractive energy $-\varepsilon$. The walk is allowed to touch, but not cross, a surface defined as a horizontal line on the lattice. Each step along the surface is assigned an attractive energy $-\varepsilon_S$. For the transfer matrix calculation that follows, it is convenient to consider a strip of width $L$ with an attractive surface on both sides of the strip. This is not expected to change the behaviour in the thermodynamic limit $L\to \infty$; the bulk critical behaviour should not depend on the boundary conditions, and when calculating the surface critical behaviour, a walk adsorbed to one surface needs an infinite excursion in order to “see” the other surface. Additionally, the finite-size scaling results which link the eigenvalues of the transfer matrix to the scaling dimensions $x_\sigma^s$ and $x_\varepsilon^s$ (see (\[sig-dim\]) and (\[eng-dim\])) rely on the conformal mapping of the half plane (one adsorbing surface) onto a strip with two adsorbing surfaces[@cardy]. A typical configuration is shown in Figure \[model\].
The partition function for the model is $$\label{part}
{\cal Z}=\sum_{\rm walks} K^N \omega_s^{N_S} \tau^{N_I},$$ where $K$ is the fugacity, $\omega_s=\exp(\beta\varepsilon_S)$ and $\tau=\exp(\beta\varepsilon)$. $N$ is the length of the walk, $N_S$ is the number of steps on the surface, and $N_I$ is the number of doubly-visited sites.
![A vertex interacting self-avoiding walk model showing the vertex collisions, weighted with a Boltzmann factor $\tau=\exp(\beta\varepsilon)$. Surface contacts are weighted $\omega_s=\exp(\beta\varepsilon_s)$ and a fugacity $K$ is introduced per walk step. The walk is shown on a strip of width $L=5$. []{data-label="model"}](model){width="10cm"}
The average length of the walk is controlled by the fugacity $K$ through $$\label{n}
\langle N\rangle=K\frac{\partial \ln{\cal Z}}{\partial K}.$$ As $K$ increases from zero, $\langle N \rangle$ increases, diverging at some value $K=K^\star(\omega_s,\tau)$. To start, we consider what happens in the absence of the adsorbing boundary. For $\tau$ small enough, $$\langle N\rangle\sim (K^\star(\omega_s,\tau)-K)^{-1},$$ whilst for large enough $\tau$ the divergence is discontinuous. Whilst $\langle N\rangle$ is finite, the density of occupied bonds on an infinite lattice is zero; once $\langle N \rangle$ has diverged, the density is in general finite. For small enough $\tau$ the density becomes non-zero continuously at $K=K^\star$, and for large enough $\tau$ the density jumps at $K=K^\star$. $K^\star$ may then be understood as the location of a phase transition, critical for $\tau<\tau_{\rm coll}$ and first order for $\tau>\tau_{\rm coll}$. The problem of infinite walks on the lattice is equivalent to setting $K=K^\star$ and varying $\tau$; it may then be seen that the density is zero for $\tau<\tau_{\rm coll}$ and non-zero for $\tau>\tau_{\rm coll}$. It then follows that $\tau_{\rm coll}$ defines the collapse transition point.
Now let us consider the effect of the adsorbing boundary at constant $\tau$. For $\omega_s$ small, the entropic repulsion of the wall is strong enough for the walk to remain in the bulk. Once $\omega_s$ is large enough for the energy gain to overcome the entropic repulsion, the walk will visit the boundary a macroscopic number of times, and the walk adsorbs to the surface. These two behaviours are separated by $\omega_s=\omega_s^\star$. For $\omega_s\leq \omega_s^\star$ the behaviour of the walk is not influenced by the wall, and $K^\star$ is independent of $\omega_s$. If the transition at $K=K^\star$ is critical ($\tau\leq\tau_{\rm coll}$), it corresponds to ordinary critical behaviour. However, for $\omega_s>\omega_s^\star$, $K^\star$ is a function of $\omega_s$, and the transition is referred to as a surface transition. The point $K=K^\star$, $\omega_s=\omega_s^\star$ is referred to as the special critical point (again $\tau\leq\tau_{\rm coll}$).
As the critical value $K^\star$ is approached, and in the absence of a surface, the partition function and the correlation length $\xi$ are expected to diverge, defining the standard exponents $\gamma$ and $\nu$: $$\begin{aligned}
\label{xib}
\xi\sim|K-K^\star|^{-\nu}\\
{\cal Z}\sim|K-K^\star|^{-\gamma}\end{aligned}$$ The effect of the surface on the walk is to introduce an entropic repulsion, pushing the walk away from the surface. The number of allowed walks is reduced exponentially if the walk is constrained to remain near the surface, in particular if one or both ends of the walk are obliged to remain in contact with the surface. In this case the divergence of $\cal Z$ is modified, and two new exponents are introduced, $\gamma_1$ and $\gamma_{11}$. Defining ${\cal Z}_1$ and ${\cal Z}_{11}$ as the partition functions for a walk with one end, and both ends, attached to the surface respectively, then: $$\begin{aligned}
{\cal Z}_1\sim|K-K^\star|^{-\gamma_1}\\
{\cal Z}_{11}\sim|K-K^\star|^{-\gamma_{11}}\end{aligned}$$ Whilst the bulk exponents, such as $\nu$ and $\gamma$, are the same at an ordinary critical point and at the special critical point, the surface exponents $\gamma_1$ and $\gamma_{11}$ differ. The exponents $\nu$, $\gamma$, $\gamma_1$ and $\gamma_{11}$ are related by the Barber relation[@barber]: $$\label{barb}
\nu+\gamma=2\gamma_1-\gamma_{11}.$$
The partition function may be calculated exactly on a strip of length $L_x\to\infty$ and of finite width $L$ by defining a transfer matrix ${\cal T}$. If periodic boundary conditions are assumed in the $x$-direction, the partition function for the strip is given by: $${\cal Z}_L=\lim_{L_x\to\infty}\Tr\left({\cal T}^{L_x}\right).$$ The free energy per lattice site, the density, surface density and correlation length for the infinite strip may be calculated from the eigenvalues of the transfer matrix: $$\begin{aligned}
f(K,\omega_s,\tau)=\frac{1}{L}\ln\left(\lambda_0\right),\\
\rho(K,\omega_s,\tau)= \frac{K}{L\lambda_0}\frac{\partial \lambda_0}{\partial K},\\
\rho_S(K,\omega_s,\tau)= \frac{\omega_s}{\lambda_0}\frac{\partial \lambda_0}{\partial \omega_s},\\\label{xi}
\xi(K,\omega_s,\tau)=\left(\ln\left|\frac{\lambda_0}{\lambda_1}\right|\right)^{-1},\end{aligned}$$ where $\lambda_0$ and $\lambda_1$ are the largest and second largest (in modulus) eigenvalues.
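As an illustration of how these spectral quantities are used in practice, the following Python sketch (not the authors' code; the routine `build_transfer_matrix` is a hypothetical user-supplied constructor of ${\cal T}$ for a strip of width $L$) evaluates the free energy, density and correlation length from the leading eigenvalues, with the $K$-derivative taken by finite differences:

```python
import numpy as np

def leading_eigenvalues(T, k=3):
    """Return the k eigenvalues of T with the largest modulus."""
    lam = np.linalg.eigvals(T)
    return lam[np.argsort(-np.abs(lam))][:k]

def strip_observables(build_transfer_matrix, L, K, omega_s, tau, dK=1e-6):
    """Free energy per site, bond density and correlation length on a strip."""
    lam0, lam1, _ = leading_eigenvalues(build_transfer_matrix(L, K, omega_s, tau))
    f = np.log(np.abs(lam0)) / L                    # f = ln(lambda_0)/L
    xi = 1.0 / np.log(np.abs(lam0 / lam1))          # xi = 1/ln|lambda_0/lambda_1|
    # rho = (K / (L lambda_0)) d(lambda_0)/dK, via a centred finite difference
    lp = leading_eigenvalues(build_transfer_matrix(L, K + dK, omega_s, tau), k=1)[0]
    lm = leading_eigenvalues(build_transfer_matrix(L, K - dK, omega_s, tau), k=1)[0]
    rho = (K / (L * np.abs(lam0))) * (np.abs(lp) - np.abs(lm)) / (2 * dK)
    return f, rho, xi
```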
Our first task is to find estimates of $K^\star(\omega_s,\tau)$. An estimate for the critical point where the length of the walk diverges may be found using phenomenological renormalisation group for a pair of lattice widths[@mpn76], $L$ and $L^\prime$. The estimated value of $K^\star$ is given by the solution of the equation: $$\label{nrg}
\frac{\xi_L}{L}=\frac{\xi_{L^\prime}}{L^\prime}$$
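A minimal sketch of solving this phenomenological renormalisation group condition numerically is given below; it assumes a function `xi_of(L, K, omega_s, tau)` returning the strip correlation length (for instance via the sketch above) and a bracketing interval containing the crossing:

```python
from scipy.optimize import brentq

def K_star(xi_of, L, Lp, omega_s, tau, K_lo=0.2, K_hi=0.6):
    """Solve xi_L(K)/L = xi_L'(K)/L' for K on a bracketing interval."""
    g = lambda K: xi_of(L, K, omega_s, tau) / L - xi_of(Lp, K, omega_s, tau) / Lp
    return brentq(g, K_lo, K_hi)   # requires g to change sign on [K_lo, K_hi]
```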
These finite-size estimates $K^\star_L(\omega_s,\tau)$ should converge to the same limiting value as $L\to\infty$. Using the correlation length (\[xi\]) at the fixed point defined by (\[nrg\]), estimates for $\nu$ and the corresponding surface correlation length exponent, $\nu_s$, may be calculated using $$\begin{aligned}
\label{nuestim}
\frac{1}{\nu(L)}&=&\frac{\log\left(\frac{{\rm d}\xi_L}{{\rm d}K}/\frac{{\rm d}\xi_{L+1}}{{\rm d}K} \right)}{\log\left(L/(L+1)\right)}-1,\\\label{nusestim}
\frac{1}{\nu_s(L)}&=&\frac{\log\left(\frac{{\rm d}\xi_L}{{\rm d}\omega_s}/\frac{{\rm d}\xi_{L+1}}{{\rm d}\omega_s} \right)}{\log\left(L/(L+1)\right)}-1.\end{aligned}$$
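The finite-size estimators above translate directly into code; the sketch below (again assuming a hypothetical `xi_of` routine) implements the $\nu(L)$ estimate, taking the $K$-derivatives of $\xi_L$ by centred finite differences:

```python
import numpy as np

def nu_estimate(xi_of, L, K, omega_s, tau, dK=1e-6):
    """Two-width estimate of nu from the growth of d(xi_L)/dK with L."""
    def dxi_dK(width):
        return (xi_of(width, K + dK, omega_s, tau)
                - xi_of(width, K - dK, omega_s, tau)) / (2.0 * dK)
    inv_nu = np.log(dxi_dK(L) / dxi_dK(L + 1)) / np.log(L / (L + 1.0)) - 1.0
    return 1.0 / inv_nu
```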
The critical dimensions of the surface magnetic and energy fields may be calculated from the first few eigenvalues of the transfer matrix: $$\begin{aligned}
\label{sig-dim}
x^s_\sigma&=&\frac{L\ln\left|\frac{\lambda_0}{\lambda_1}\right|}{\pi},\\\label{eng-dim}
x^s_\varepsilon&=&\frac{L\ln\left|\frac{\lambda_0}{\lambda_2}\right|}{\pi},\end{aligned}$$ with $\lambda_2$ the eigenvalue with the third largest absolute value.
The surface scaling dimensions $x^s_\sigma$ and $x^s_\varepsilon$ may be related to the surface correlation length exponent $\nu_s$ and the exponent $\eta_\parallel$, controlling the decay of the correlation function along the surface, through standard relations $$\begin{aligned}
\label{nuref}
\nu_s&=&\frac{1}{1-x^s_\varepsilon},\\\label{eta}
\eta_{\parallel}&=& 2x^s_\sigma.\end{aligned}$$ The entropic exponent $\gamma_{11}$ is related to $\eta_{\parallel}$ through: $$\label{gam11eta}
\gamma_{11}=\nu(1-\eta_\parallel).$$
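For reference, a short sketch collecting these relations, taking the three leading eigenvalues and a value of $\nu$ as input (an illustration only, not the code used to produce the tables below):

```python
import numpy as np

def surface_exponents(lam0, lam1, lam2, L, nu):
    """Surface scaling dimensions and derived exponents from the three
    leading transfer-matrix eigenvalues on a strip of width L."""
    x_sigma = L * np.log(np.abs(lam0 / lam1)) / np.pi   # surface magnetic dimension
    x_eps = L * np.log(np.abs(lam0 / lam2)) / np.pi     # surface energy dimension
    eta_par = 2.0 * x_sigma                             # eta_parallel
    nu_s = 1.0 / (1.0 - x_eps)                          # surface correlation exponent
    gamma_11 = nu * (1.0 - eta_par)                     # gamma_11 = nu (1 - eta_par)
    return x_sigma, x_eps, eta_par, nu_s, gamma_11
```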
For a more detailed discussion of the transfer matrix method, and in particular how to decompose the matrix, the reader is referred to the article of Blöte and Nienhuis [@blotenienuis].
Results
=======
The finite-size results obtained are, where possible, extrapolated on the one hand using the Bulirsch and Stoer (BST) extrapolation procedure[@bst] and on the other hand using a three-point extrapolation scheme, fitting the following expression for the quantity $X_L$: $$\label{3ext}
X_L=X_\infty+aL^{-b}.$$ Calculating $X_\infty$, $a$ and $b$ requires three lattice widths. The extrapolated values $X_\infty$ will clearly still depend weakly on $L$, and the procedure may be repeated; however, weak parity effects can be seen in their values, often impeding further reasonable extrapolation by this method.
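The three-point fit can be solved exactly for $(X_\infty, a, b)$ from three consecutive widths; a sketch of one way to do this (assuming monotonically converging data, so that the ratio equation has a single root in the search interval) is:

```python
from scipy.optimize import brentq

def three_point_extrapolation(L, XL, XL1, XL2, b_lo=1e-3, b_hi=10.0):
    """Fit X_L = X_inf + a*L**(-b) exactly through the widths L, L+1, L+2."""
    target = (XL - XL1) / (XL1 - XL2)
    def g(b):
        return (L**-b - (L + 1.0)**-b) / ((L + 1.0)**-b - (L + 2.0)**-b) - target
    b = brentq(g, b_lo, b_hi)                # assumes a single sign change in [b_lo, b_hi]
    a = (XL - XL1) / (L**-b - (L + 1.0)**-b)
    X_inf = XL - a * L**-b
    return X_inf, a, b
```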
Phase Diagram
-------------
![The phase diagram calculated using the Phenomenological Renormalisation Group equation. The vertical line is placed at the best estimate of the collapse transition, expected to be independent of the surface interaction. (Colour online)[]{data-label="pd"}](phase.eps){width="10cm"}
The phase diagram is shown as a function of $\omega_s$ and $\tau$, projected onto the $K=K^\star(\tau,\omega_s)$ plane, in Figure \[pd\]. $K^\star$ is determined from equation (\[nrg\]) using two lattice sizes $L,L+1$. The adsorption line is then fixed by the simultaneous solution of (\[nrg\]) for two sets of lattice sizes, $L,L+1$ and $L+1,L+2$, so that each line requires three lattice sizes. The vertical line is fixed from the best estimate for the bulk collapse transition, here $\tau_{\rm coll}=4.69$[@FC03a].
In the adsorbed phase, shown in the phase diagram in Figure \[pd\], the number of contacts with the surface becomes macroscopic, scaling with the length of the walk, and the density decays rapidly with the distance from the surface. For the $\Theta$-point model it has been shown that there is another special line in the phase diagram separating the collapsed phase into two: for small enough $\omega_s$ the collapsed walk avoids contacts with the wall, but for higher values of $\omega_s$ the outer perimeter of the collapsed globule wets the surface, defining an attached globule “phase”[@kumar]. To investigate the possibility of such a phase, we examine the order parameter for the adsorbed phase (Figure \[op\]) and the density of interactions one lattice site out from the wall (Figure \[rhoi\]); there are no interactions on the wall itself, since a collision requires four occupied bonds. In the $\Theta$-point model the presence of such a phase manifests itself by a plateau in the order parameter. Such a plateau exists, but starts at or below $\omega_s=1$, indicating that the globule is probably attached for all attractive wall interaction energies. This is consistent with the plots of the normalised density of interactions. Both plots show crossings at a value of $\omega_s$ consistent with the adsorption transition. We suggest that the entire phase is “surface-attached”, and so there is no additional line on the phase diagram shown in Figure \[pd\].
![The order parameter ${\cal O}=\rho_s/(L\rho)$ plotted as a function of $\omega_s$ for $\tau=6$. (Colour online)[]{data-label="op"}](op.eps){width="10cm"}
![The order parameter for a possible globule attached phase ${\cal O_I}=\rho_{i1}/(L\rho)$ is plotted as a function of $\omega_s$ for $\tau=6$. The density of interactions one row from the surface is used since it is not possible to have collisions on the surface because there are only three lattice bonds per surface site. (Colour online)[]{data-label="rhoi"}](rhoi1.eps){width="10cm"}
$L$ & $K_{\rm coll}$ & $\tau_{\rm coll}$ & $\nu_{\rm coll}$ & $\eta^{\rm ord}_{||} $\
3 & 0.359410 & 4.071423 & 0.614465& 1.024334\
4 & 0.351725 & 4.410526 & 0.596955& 1.147824\
5 & 0.347865 & 4.540658 & 0.588407& 1.233790\
6 & 0.345694 & 4.598914& 0.583712& 1.297254\
7 & 0.344369 & 4.628215 & 0.580898 & 1.346184\
8 & 0.343508 & 4.644460 & 0.579079& 1.385168\
9 & 0.342920 & 4.654221 & 0.577827& 1.417020\
10 &0.342502 & 4.660572 & 0.576905& 1.443585\
BST $\infty$ & 0.3408 & 4.673 & 0.574 & 1.77\
\
3 & 0.339412 &4.696857 & 0.571064& 2.103158\
4 & 0.340217 & 4.681105& 0.572693& 1.963018\
5 & 0.340540& 4.676727 & 0.573260& 1.901623\
6 & 0.340676& 4.676316& 0.573231& 1.868539\
7 & 0.340749& 4.676871& 0.573019& 1.845418\
8 & 0.340767 & 4.678731 & 0.572356& 1.832298\
BST $\infty$ & 0.3408 & 4.69 & 0.572 & 1.77\
In the table above we locate the collapse transition in strips with fixed walls and $\omega_s=1$. The collapse transition is determined as follows: solutions to the critical line $K^\star_L(\tau)$ are found by using the phenomenological renormalisation group on a pair of lattice widths ($L$ and $L+1$) and looking for crossings in the estimates for $\nu$ for consecutive pairs of $L, L+1$. Since $\nu$ is different at the collapse point from its values along the $\tau<\tau_{\rm coll}$ and $\tau>\tau_{\rm coll}$ lines, these estimates converge to the correct $\nu$ for the collapse transition. This gives the following estimates: $K_{\rm coll}=0.3408$, $\tau_{\rm coll}=4.69$ and $\eta_{\parallel}^{\rm ord}=1.77$, as well as $\nu_{\rm coll}=0.572$. It is noticeable that this value of $\nu_{\rm coll}$ is much closer to that expected for the $\Theta$-point model ($\nu_{\Theta}=4/7$) than to the predicted value for this model ($\nu_{O(n=0)}=12/23$). We shall see that, whilst the estimates for the other quantities of interest are remarkably stable, the estimates for $\nu_{\rm coll}$ seem rather sensitive to how they are calculated. We will return to this point later.
$L$ & $K_{\rm coll}$ & $\tau_{\rm coll}$ & $\omega_s^{\rm sp}$ & $\eta^{\rm sp}_{||} $\
3 & 0.335871 & 4.989134 & 2.452162& -0.120915\
4 & 0.337720& 4.868882 & 2.418298 & -0.110883\
5 & 0.338679 & 4.809216& 2.399537 &-0.104452\
6 &0.339256 & 4.774456& 2.387539 & -0.099806\
7 & 0.339624& 4.752731 & 2.379405 & -0.096306\
8 & 0.339874 &4.738301 &2.373598 &-0.093563\
9 & 0.340050& 4.728276& 2.369291&-0.091352\
BST $\infty$ & 0.3408 & 4.6901 & 2.3513 & -0.07843\
\
3 & 0.340975 &4.682860 & 2.344211& -0.06881\
4 & 0.341078& 4.676046 & 2.338622 & -0.059977\
5 & 0.340899 & 4.684292& 2.343044 &-0.064925\
6 & 0.340848 & 4.686507&2.343975 & -0.065385\
7 & 0.340811& 4.688205 & 2.344840 & -0.066177\
In the table above we seek the special point along the collapse transition line, in other words the point at which the extended, collapsed and adsorbed phases co-exist. A different set of critical exponents is expected there. In order to find the special point, we need an extra lattice width; three are required to find $\omega_s^\star(\tau)$ and a fourth is required to fix the collapse transition. We find $K_{\rm coll}=0.3408$ and $\tau_{\rm coll}=4.69$ as in the case when $\omega_s=1$. The special point is found to be at a value of $\omega^{\rm sp}_s=2.35\pm0.01$. The estimate of $\eta_{\parallel}^{\rm sp}$ is not very precise, but here appears to lie in the range $-0.06$ to $-0.07$.
Results for the semi-flexible VISAW
-----------------------------------
The model may be extended by introducing different weightings for the corners and the straight sections. We follow the definitions in Reference [@blotenienuis] and add a weight $p$ for each site where the walk does not take a corner (i.e. for the straight sections). As $p$ is varied we expect the collapse transition point to extend into a line, along which there is an exactly known point. The location of this point is given by[@blotenienuis]: $$\label{exactpt}
\left.\begin{array}{rcl}
z=K^2\tau&=&\left\{2-\left[1-2\sin(\theta/2)\right]
\left[1+2\sin(\theta/2)\right]^2\right\}^{-1}\\
K&=&-4z\sin(\theta/2)\cos(\pi/4-\theta/4)\\
pK&=&z\left[1+2\sin(\theta/2)\right]\\
\theta&=&-\pi/4
\end{array}\right\}.$$ This gives exactly the location of the multicritical collapse point when $p=p^\star=0.275899\cdots$ as $K_{\rm coll}=0.446933\cdots$ and $\tau_{\rm coll}=2.630986\cdots$. Using this exactly known point we hope to extend the number of available data points and improve the precision of the determination of the surface exponents.
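As a quick numerical cross-check of (\[exactpt\]) (a sketch, not part of the original calculation), evaluating the equations at $\theta=-\pi/4$ reproduces the quoted values of $p^\star$, $K_{\rm coll}$ and $\tau_{\rm coll}$:

```python
import numpy as np

theta = -np.pi / 4.0
s = np.sin(theta / 2.0)
z = 1.0 / (2.0 - (1.0 - 2.0 * s) * (1.0 + 2.0 * s) ** 2)
K = -4.0 * z * s * np.cos(np.pi / 4.0 - theta / 4.0)
p = z * (1.0 + 2.0 * s) / K          # from p*K = z*(1 + 2*sin(theta/2))
tau = z / K ** 2                     # from z = K^2 * tau

print(p, K, tau)                     # ~0.275899, 0.446933, 2.630986
```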
In the table below we calculate estimates for $\eta_\parallel^{\rm ord}$ in two ways. Firstly, we fix $\omega_s=1$ and $p=p^\star$, determine $K^\star$ by solving (\[nrg\]), and locate the multicritical point by looking for crossings in the estimates for $\nu$. Fixing the multicritical point this way requires three lattice sizes, $L, L+1$ and $L+2$. Estimates for $K_{\rm coll}$ and $\tau_{\rm coll}$ calculated in this way are shown in the columns marked [**A**]{} and are seen to converge nicely to the expected values. The second method consisted of fixing $K$, $\tau$ and $p$ to their exactly known multicritical values, and fixing $\omega_s$ to the ordinary fixed point by looking for solutions to (\[nrg\]). This only requires two lattice widths, giving an extra lattice size. The values of $\eta_\parallel^{\rm ord}$ calculated from the two methods are shown, and converge to values consistent with the $p=1$ case.
$L$ & $K_{\rm coll}$ & $\tau_{\rm coll}$ & $\eta^{\rm ord}_{||} $ & $\omega^{\rm ord}_s$ & $\eta_\parallel^{\rm ord}$\
3 &0.464018 &2.309912 & 1.401892& 0.760808 &1.813498\
4 & 0.457207& 2.451700 &1.471983&0.785333 &1.787052\
5 & 0.453616& 2.520291& 1.520159&0.797646 &1.776227\
6 & 0.451604 &2.556015 &1.554337& 0.805442 &1.770439\
7 & 0.450391 &2.576330 &1.579631 & 0.811096 & 1.766807\
8 &0.449615 & 2.588789 &1.599034& 0.815550 &1.764286\
9 & 0.449089 & 2.596962 & 1.614375& 0.819245 & 1.762416\
10 &0.448720 & 2.602583 &1.626774 & 0.822417 & 1.760965\
11 & — & — & — & 0.825204 & 1.759802\
BST $\infty$ & 0.4473 & 2.597 & 1.708 & 0.8955 &1.7499\
exact & $ 0.446933\cdots$ & $2.630986\cdots$ &\
\
3 & 0.444582 &2.656251 & 1.953742& 0.824571& 1.761506\
4 & 0.446572 &2.628298 & 1.807661& 0.789157& 1.757867\
5 & 0.447052& 2.622897& 1.774328& 0.799994& 1.755193\
6 & 0.447197 & 2.621984& 1.762266& 0.863901&1.753208\
7 & 0.447252 &2.622163 & 1.754657& 0.881961&1.751795\
8 & 0.447224 & 2.623411&1.753850 & 0.816627& 1.750839\
9 & — & — & — & 0.820168& 1.750223\
In the table below we present results calculated by fixing $\tau=\tau_{\rm coll}$ and looking for simultaneous solutions of the phenomenological renormalisation group equation (\[nrg\]). These solutions exist at two values of $\omega_s$: the ordinary and the special fixed points. The values of $K_{\rm coll}$, $\omega_s$ and $\eta_\parallel$ are given for the two fixed points. Again, agreement is found with the values calculated previously.
$L$ & $K_{\rm coll}$ & $\omega^{\rm ord}_s$ & $\eta_{||}^{\rm ord}$ & $K_{\rm coll}$ & $\omega^{\rm sp}_s$ & $\eta_{||}^{\rm sp}$\
3 & 0.444849 &0.727730& 1.876811 & 0.444289 & 3.840487& -0.254004\
4 & 0.446261& 0.765809& 1.816907 &0.445726 & 3.660264 & -0.191557\
5 &0.446626&0.782852 & 1.795039 & 0.446279&3.575039 & -0.157742\
6 & 0.446763 &0.792806& 1.784181& 0.446537 &3.527515& -0.136796\
7 &0.446826 &0.799587&1.777736 & 0.446675 & 3.498075& -0.122630\
8 & 0.446861&0.804688 & 1.773440 & 0.446754 & 3.478454& -0.112441\
9 & 0.446881&0.808784& 1.770341 & 0.446804& 3.464654& -0.104773\
10 & 0.446894 &0.812223& 1.767979& 0.446836 & 3.454537& -0.098797\
BST $\infty$ & 0.446933 &0.8529 & 1.75 &0.44693 &3.4029 &-0.05241\
exact & $0.446933\cdots$& & & & &\
\
3 & 0.445398&0.740647& 1.855322& 0.444797 &3.415470& -0.065260\
4 &0.446899&0.821222& 1.809568&0.445915 & 3.410716& -0.062347\
5 &0.446915& 0.830707& 1.791541 &0.446934&3.506065 & -0.060003\
6 & 0.446923&0.794846& 1.782160& 0.446933&3.407005& -0.058240\
7 & 0.446928 &0.801100 & 1.755376&0.446932&3.406126& -0.056866\
8 &0.446931 & 0.867939 & 1.753498 &0.446932&3.405498& -0.055762\
$L$ & $\omega^{\rm sp}_s$ & $\eta_{||}^{\rm sp}$ & $\nu$ & $\nu^{\rm sp}_{s} $ & $\phi_s=\nu/\nu_s$ & $x_\varepsilon^s(L)$ & $x_\varepsilon^s(L+1)$\
3 & 3.575571 & -0.170489&0.527116& 1.829877 & 0.288061 & 0.527910&0.422737\
4 & 3.506382& -0.135551&0.534983& 1.725710& 0.310008 & 0.449880&0.401500\
5 & 3.472677&-0.116031&0.540007&1.668509 & 0.323646 & 0.416568&0.388462\
6 & 3.453634& -0.103697&0.543502& 1.632771&0.332874 & 0.397954&0.379518\
7 &3.441756 &-0.095233&0.546069&1.608442 & 0.339497&0.386019 &0.372974\
8 &3.433803 & -0.089075&0.548029&1.590960 & 0.344477 & 0.377697&0.367970\
9 &3.428190 &-0.084396&0.549571& 1.577675& 0.348349& 0.371554&0.364018\
10 & 3.424062 &-0.080719&0.550811&1.567260 & 0.351393& 0.366832&0.360818\
11 &3.420926 &-0.077751&0.551828 & 1.558893& 0.353944 &0.363087 &0.358175\
BST $\infty$ & 3.402 & -0.0499 & 0.5592 & 1.487& 0.3777 & 0.332 & 0.332\
\
3 & 3.404662& -0.056758&0.567274& 1.504747& & 0.367160& 0.330606\
4 &3.404891 &-0.056025 &0.566011&1.501149& & 0.351913& 0.326878\
5 &3.404606&-0.054962 & 0.564760&1.497315& & 0.344469&0.326617\
6 & 3.404268 &-0.053981&0.563832&1.498468& &0.340278 & 0.327150\
7 & 3.403967&-0.053136& 0.563148& 1.488934& &0.337719 & 0.327793\
8 & 3.403715& -0.052418&0.562652&1.488636& & 0.336068& 0.328375\
9 & 3.403508 & -0.051807 &0.562224&1.488766& & 0.334959& 0.328865\
At the ordinary point the exponent $\nu_s$ is expected trivially to take the value $-1$, and this was verified in the various calculations at the ordinary point, with good convergence. At the special point the correlation length along the surface is not expected to be trivial. In order to obtain the best estimate for this exponent we determined the location of the special point by fixing $K$, $\tau$ and $p$ to their multicritical values and then looking again for solutions of the phenomenological renormalisation equation (\[nrg\]); the resulting estimates, obtained using the estimators defined above, are shown in the table above. One may also obtain an independent estimate of $\nu_s$ by calculating the scaling dimension $x_\varepsilon^s$ using (\[eng-dim\]), from which $\nu_s=(1-x_\varepsilon^s)^{-1}$. The special point was determined using the odd sector of the transfer matrix, whereas the calculation of $x_\varepsilon^s$ requires the even sector, so whilst the determination method only gives one estimate of $\eta_\parallel^{\rm sp}$, it gives two estimates (one for each lattice size) of the critical dimension $x_\varepsilon^s$. These different estimates are also shown in the table above. The values of $\nu^{\rm sp}_s$ converge to $1.487$, whilst $x_\varepsilon^s=0.332$, which gives $\nu^{\rm sp}_s=1.497$. It seems likely that $\nu^{\rm sp}_s=1.49\pm0.01$. Again the estimates of $\nu$ do not converge to $\nu=12/23$, but neither do they converge to the values found above for $\omega_s=1$ and $p=1$. The crossover exponent $\phi_s=\nu/\nu_s$ is calculated from the estimates found for each size; therefore the extrapolated value is only as good as the estimated values of $\nu$ and $\nu_s$. If we believe $\nu=12/23$ and $\nu_s=1.5$ then $\phi_s=8/23=0.34782\cdots$.
Extending the results with DMRG
===============================
One of the limitations of the transfer matrix method is the limited number of lattice widths that may be investigated. One way of getting round this problem is to generate approximate transfer matrices for larger widths. There exists a method of choice for doing this: the density matrix renormalisation group (DMRG) method introduced by White[@white92; @white93], extended to classical 2d models by Nishino[@nishino95] and to self-avoiding walk models by Foster and Pinettes[@FC03a; @FC03b].
The DMRG method constructs approximate transfer matrices for size $L$ from a transfer matrix approximation for a lattice of size $L-2$ by adding two rows in the middle of the system. This process is local, whereas the constraints defining the VISAW walk configurations are non-local. This problem is solved by viewing the model as the limit $n\to 0$ of the $O(n)$ model.
The partition function of the $O(n)$ model is given by: $${\cal Z}_{O(n)}=\sum_{\cal G} n^{l} K^N p^{N_{\rm st}} \omega_s^{N_S}\tau^{N_I},$$ where ${\cal G}$ denotes the sum over all graphs containing loops which visit lattice bonds at most once and which do not cross at lattice sites and $l$ is the number of such loops. $N_{\rm st}$ is the number of straight sections. In the limit $n\to 0$ the model maps onto the expected model with the odd sector of the corresponding transfer matrix giving the walk graphs, as above. Viewing the model in this way enables us to map the loop graphs into oriented loop graphs. Each loop graph corresponds to $2^{N_{\rm loops}}$ oriented loop graphs. We associate different weights $n_+$ and $n_-$ for the different orientations, with $n=n_++n_-$, and this enables us to rewrite the partition function as follows: $$\begin{aligned}
{\cal Z}_{O(n)}&=&\sum_{\cal G} \left(n_++n_-\right)^{l} K^N p^{N_{\rm st}} \omega_s^{N_S}\tau^{N_I}\\
&=&\sum_{\cal G^\star} n_+^{l_+}n_-^{l_-} K^N p^{N_{\rm st}} \omega_s^{N_S}\tau^{N_I}.\end{aligned}$$ Whilst the weights $n_+$ and $n_-$ are still not local, they may be made local by realising that closing a loop on the square lattice requires four more corners in one direction than in the other. If we associate $\alpha$ with each clockwise corner and $\alpha^{-1}$ with each anti-clockwise corner, we find $n_+=\alpha^4$ and $n_-=\alpha^{-4}$; setting $\alpha=\exp(i\theta/4)$ gives: $$n=\alpha^4+\alpha^{-4}=2\cos\theta.$$ The model studied here then corresponds to $\theta=\pi/2$. The resulting local (complex) weights are shown in Figure \[vertices\].
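A trivial numerical check of this orientation bookkeeping (illustrative only) confirms that $\theta=\pi/2$ corresponds to the $n\to 0$ limit:

```python
import numpy as np

theta = np.pi / 2.0                   # the n -> 0 point of the O(n) model
alpha = np.exp(1j * theta / 4.0)      # weight per clockwise quarter-turn
n = alpha**4 + alpha**(-4)            # a closed loop makes +/- four net turns
print(n.real, 2.0 * np.cos(theta))    # both vanish (up to rounding)
```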
\
![Local complex vertices for the DMRG method](vertices "fig:"){width="10cm"}\[vertices\]
Now that the vertices are local, the DMRG method may be applied. The vertices represented in Figure \[vertices\] may be most easily encoded by defining a three-state spin on the lattice bonds; for the horizontal bonds the three states would be arrow left, empty and arrow right. For details of the DMRG method the reader is referred to the articles by White[@white92; @white93] and Nishino[@nishino95], but in essence the method consists of representing the transfer matrix for the VISAW model as the transfer matrix for an equivalent system where the top and bottom of the strip are represented by an $m$-state pseudo-spin, with only the two inner rows kept explicitly in terms of the original three-state spins. For small lattice widths this identification may be done exactly, and the interaction matrix may be chosen exactly, but for a fixed value of $m$ there will come a stage where this procedure is no longer exact. At this stage the phase space in the $m$-spin representation is smaller than for the real system and an approximation must be made. Starting from the largest lattice width that may be treated exactly by the pseudo-spin system, two vertices are inserted in the middle (see Figure \[DMRG\]). The $3\times m$ states at the top of the system must then be projected onto $m$ states to recover a new pseudo-spin system. This must be done so as to lose the smallest amount of information, and this is where the DMRG method comes in. It turns out that the best change of basis is obtained by constructing the density matrix for the top half of the lattice strip from the ground-state eigenvector by tracing over the lower half of the system. The density matrix is then diagonalised, and the $m$ basis vectors corresponding to the $m$ largest eigenvalues of the density matrix are kept.
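A minimal sketch of this truncation step is shown below; it assumes the dominant eigenvector of a symmetrised transfer matrix is available and that its configuration index factorises into upper and lower half-strip indices (an illustration of the projection just described, not the authors' implementation):

```python
import numpy as np

def truncate_upper_half(psi0, dim_upper, dim_lower, m):
    """One DMRG truncation: build the reduced density matrix of the upper
    half-strip from the dominant eigenvector psi0 and keep the m dominant
    density-matrix eigenstates as the new pseudo-spin basis."""
    psi = np.asarray(psi0).reshape(dim_upper, dim_lower)  # split the configuration index
    rho = psi @ psi.conj().T                              # trace over the lower half
    w, v = np.linalg.eigh(rho)                            # rho is Hermitian
    keep = np.argsort(w)[::-1][:m]                        # m largest weights
    P = v[:, keep]                                        # projector (dim_upper x m)
    truncation_error = 1.0 - w[keep].sum() / w.sum()
    return P, truncation_error
```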
\
![Schematic transfer matrix obtained from a DMRG iteration. Circles show spins defined for the original model (three-state spins on the lattice bonds: empty, and two arrow states). Squares show the $m$-state pseudo-spins. The open circles are summed over. The projection of the upper half is also shown schematically. []{data-label="DMRG"}](projection "fig:"){width="10cm"}
\
[**A**]{} \
![Location of the ordinary collapse point (A) and corresponding value of $\eta^{\rm ord}_{\parallel}$ (B) with $p=p^\star$, $K=K_{\rm coll}$ and $\tau=\tau_{\rm coll}$. (Colour online)[]{data-label="DMRGord"}](wsord "fig:"){width="10cm"} \
[**B**]{} \
![Location of the ordinary collapse point (A) and corresponding value of $\eta^{\rm ord}_{\parallel}$ (B) with $p=p^\star$, $K=K_{\rm coll}$ and $\tau=\tau_{\rm coll}$. (Colour online)[]{data-label="DMRGord"}](etaord "fig:"){width="10cm"}
\
[**A**]{} \
![Location of the special collapse point (A) and corresponding value of $\eta^{\rm sp}_{\parallel}$ (B) with $p=p^\star$, $K=K_{\rm coll}$ and $\tau=\tau_{\rm coll}$. (Colour online)[]{data-label="DMRGsp"}](wssp "fig:"){width="10cm"} \
[**B**]{} \
![Location of the special collapse point (A) and corresponding value of $\eta^{\rm sp}_{\parallel}$ (B) with $p=p^\star$, $K=K_{\rm coll}$ and $\tau=\tau_{\rm coll}$. (Colour online)[]{data-label="DMRGsp"}](etasp "fig:"){width="10cm"}
There are two modifications to the basic method which improve the quality of the results obtained. Firstly, the number of left arrows minus the number of right arrows is conserved from one column to the next, so the transfer matrix may be split into sectors which are much smaller than the original matrix. Secondly, since the DMRG method may be viewed as a variational method, the quality of the results may be improved by using the scanning (or finite-size) DMRG method: once the desired lattice width is obtained, one half of the system is grown and the other shrunk, whilst projecting as before, so that the exactly treated spins sweep across the system. As few as three or four sweeps are known to vastly improve the precision of the method[@white92; @white93].
![Calculation of $x_\varepsilon^s$ using DMRG at the special collapse transition with $p=p^\star, K=K_{\rm coll}$ and $\tau=\tau_{\rm coll}$. (Colour online)[]{data-label="DMRGxepsi"}](xepsi){width="10cm"}
Clearly the precision of the method is controlled by $m$; the larger $m$, the more information is kept. In what follows we varied $m$ up to $m=200$ and verified that good convergence was obtained. This limited the lattice widths we could reach with DMRG. Whilst physical quantities such as the density converge rapidly with $m$, scaling dimensions (which interest us here) converge more slowly. As a result the largest lattice width presented here is $L=20$, which nevertheless represents a substantial improvement over the pure transfer matrix method.
For the DMRG calculation we fixed $p=p^\star$, $K=K_{\rm coll}$ and $\tau=\tau_{\rm coll}$, and used the solutions of (\[nrg\]) to find the ordinary and special fixed points as well as the corresponding $\eta_\parallel$. The $x_\varepsilon^s$ were calculated from the even sector at these fixed points.
In Figures \[DMRGord\] and \[DMRGsp\] we show the DMRG results along with the transfer matrix results for $\omega_S$ and $\eta_\parallel$ at the ordinary and special points. We deduce for the ordinary point $\omega_S^{\rm ord}=0.86\pm0.01$ and $\eta_\parallel^{\rm ord}=1.75\pm 0.01$, and for the special point $\omega_S^{\rm sp}=3.41\pm0.01$ and $\eta_\parallel^{\rm sp}=-0.05\pm0.01$.
In Figure \[DMRGxepsi\] we show the estimates for the scaling dimension $x_\varepsilon^s$ calculated at the special point. We determine $x_\varepsilon^s=0.333\pm0.001$. This leads to $\nu^{\rm sp}_s=1.5$.
Discussion
==========
\
![Bulk scaling dimensions $x_\sigma$ and $x_\varepsilon$ calculated at $p=p^\star$, $K=K_{\rm coll}$ and $\tau=\tau_{\rm coll}$ for periodic boundary conditions using DMRG with $m$ up to $m=190$. (Colour online)[]{data-label="DMRGperiodic"}](xper "fig:"){width="10cm"}
To conclude, we summarise the exponent values found:
Method & $\eta_\parallel^{\rm ord}$ & $\eta_\parallel^{\rm sp}$ & $\nu_s^{\rm sp}$ & $x_\varepsilon^s$\
TM& $1.75\pm 0.05$ & $-0.05 \to -0.08$ & $1.48\pm0.04$ & $0.332\pm 0.005$\
DMRG&$1.75\pm 0.01$ & $-0.05 \pm 0.01$ &—& $ 0.333\pm 0.001$\
As discussed above, the calculation of the exponents $\gamma_1$ and $\gamma_{11}$, as well as $\phi_s$, requires a knowledge of the bulk exponents $\nu$ and $\gamma$. There are conjectured exact values for these exponents, and in particular the exponent $\nu=12/23$ is found to a good level of precision for the trails model, which tends to lend support to this value. However, the transfer matrix calculations for the VISAW model do not seem to reproduce the required values; for example, Blöte and Nienhuis[@blotenienuis] find $x_\varepsilon=0.155$ rather than the required $x_\varepsilon=1/12$. We extend the transfer matrix results for periodic boundary conditions using DMRG, and find a result for $x_\varepsilon$ consistent with Blöte and Nienhuis[@blotenienuis] (see Figure \[DMRGperiodic\]).
Further work is required to calculate the exponents by different methods, for example Monte Carlo, in order to understand the apparent discrepancies that arise in the exponent $\nu$. Either the differences are a result of particularly strong finite-size effects, which would be surprising since the surface exponents themselves seem remarkably stable in comparison, or they are an indication that the critical behaviour of this model is more subtle than initially thought. Either way, the model warrants further study.
P. Flory, *Principles of Polymer Chemistry*, Ithaca: Cornell University Press, 1971
C. Vanderzande, *Lattice Models of Polymers*, Cambridge: CUP, 1998
P. G. de Gennes, *J. Phys. Lett.* [**36**]{} L55 (1975)
B. Duplantier and H. Saleur, *Phys. Rev. Lett.* [**59**]{} 539 (1987)
F. Seno, A. L. Stella and C. Vanderzande, *Phys. Rev. Lett.* [**61**]{} 1520 (1988)
B. Duplantier and H. Saleur, *Phys. Rev. Lett.* [**61**]{} 1521 (1988)
H. Meirovitch and H. A. Lim, *Phys. Rev. Lett.* [**62**]{} 2640 (1989)
B. Duplantier and H. Saleur, *Phys. Rev. Lett.* [**62**]{} 2641 (1989)
C. Vanderzande, A. L. Stella and F. Seno, *Phys. Rev. Lett.* [**67**]{} 2757 (1991)
D. P. Foster, E. Orlandini and M. C. Tesi, *J. Phys. A* [**25**]{} L1211 (1992)
A. R. Veal, J. M. Yeomans and G. Jug, *J. Phys. A* [**24**]{} 827 (1991)
F. Seno and A. L. Stella, *Europhysics Lett.* [**7**]{} 605 (1988)
H. W. J. [Blöte]{} and B. Nienhuis, *J. Phys. A* [**22**]{} 1415 (1989)
S. O. Warnaar, M. T. Batchelor and B. Nienhuis, *J. Phys. A* [**25**]{} 3077 (1992)
A. R. Massih and M. A. Moore, *J. Phys. A* [**8**]{} 237 (1975)
D. P. Foster, *J. Phys. A* [**42**]{} 372002 (2009)
R. M. Bradley, *Phys. Rev. A* [**39**]{} R3738 (1989)
R. M. Bradley, *Phys. Rev. A* [**41**]{} 914 (1990)
D. P. Foster, *J. Phys. A* [**43**]{} 335004 (2010)
S. R. White, *Phys. Rev. Lett.* [**69**]{} 2863 (1992)
S. R. White, *Phys. Rev. B* [**48**]{} 10345 (1993)
T. Nishino, *J. Phys. Soc. Jpn.* [**64**]{} 3598 (1995)
D. P. Foster and C. Pinettes, *J. Phys. A* [**36**]{} 10279 (2003)
D. P. Foster and C. Pinettes, *Phys. Rev. E* [**67**]{} R045105 (2003)
J. Cardy in *Phase Transitions and Critical Phenomena*, eds. Domb and Lebowitz, [**Vol. XI**]{} (Academic, New York), 1986
M. N. Barber, *Phys. Rev. B* [**8**]{} 407 (1973)
B. Derrida and H. G. Herrmann, 1365 (1983)
M. P. Nightingale, [*Physica*]{} [**A83**]{} 561 (1976)
R. Bulirsch and J. Stoer, *Numer. Math.* [**6**]{} 413 (1964); M. Henkel and G. Schütz, *J. Phys. A* [**21**]{} 2617 (1988)
Y. Singh, D. Giri and S. Kumar, *J. Phys. A* [**34**]{} L67 (2001)
|
Double-blind comparison of ropivacaine 7.5 mg ml(-1) with bupivacaine 5 mg ml(-1) for sciatic nerve block.
Two groups of 12 patients had a sciatic nerve block performed with 20 ml of either ropivacaine 7.5 mg ml(-1) or bupivacaine 5 mg ml(-1). There was no statistically significant difference in the mean time to onset of complete anaesthesia of the foot or to first request for post-operative analgesia. The quality of the block was the same in each group. Although there was no statistically significant difference in the mean time to peak plasma concentrations, the mean peak concentration of ropivacaine was significantly higher than that of bupivacaine. There were no signs of systemic local anaesthetic toxicity in any patient in either group. |
UNPUBLISHED
UNITED STATES COURT OF APPEALS
FOR THE FOURTH CIRCUIT
No. 97-7592
STEVEN WHISENANT; RICHARD LAMAR FENSTERMACHER,
Plaintiffs - Appellants,
versus
RONALD ANGELONE, Director of Virginia Depart-
ment of Corrections; LARRY W. HUFFMAN, Region-
al Director; G. P. DODSON, Warden, Coffeewood
Correctional Center,
Defendants - Appellees.
Appeal from the United States District Court for the Western Dis-
trict of Virginia, at Roanoke. Samuel G. Wilson, Chief District
Judge. (CA-96-234-R)
Submitted: April 29, 1998 Decided: May 14, 1998
Before MURNAGHAN, NIEMEYER, and WILLIAMS, Circuit Judges.
Dismissed by unpublished per curiam opinion.
Steven Whisenant, Richard Lamar Fenstermacher, Appellants Pro Se.
Collin Jefferson Hite, SANDS, ANDERSON, MARKS & MILLER, Richmond,
Virginia, for Appellees.
Unpublished opinions are not binding precedent in this circuit.
See Local Rule 36(c).
PER CURIAM:
Appellants filed an untimely notice of appeal. We dismiss for
lack of jurisdiction. The time periods for filing notices of appeal
are governed by Fed. R. App. P. 4. These periods are "mandatory and
jurisdictional." Browder v. Director, Dep't of Corrections, 434
U.S. 257, 264 (1978) (quoting United States v. Robinson, 361 U.S.
220, 229 (1960)). Parties to civil actions have thirty days within
which to file in the district court notices of appeal from judg-
ments or final orders. Fed. R. App. P. 4(a)(1). The only exceptions
to the appeal period are when the district court extends the time
to appeal under Fed. R. App. P. 4(a)(5) or reopens the appeal
period under Fed. R. App. P. 4(a)(6).
The district court entered its order on Sept. 29, 1997; Appel-
lants' notice of appeal was filed on Nov. 3, 1997, which is beyond
the thirty-day appeal period. Appellants' failure to note a timely
appeal or obtain an extension of the appeal period leaves this
court without jurisdiction to consider the merits of Appellants'
appeal. We therefore dismiss the appeal. We dispense with oral
argument because the facts and legal contentions are adequately
presented in the materials before the court and argument would not
aid the decisional process.
DISMISSED
|
.\" Copyright (c) 1997
.\" John-Mark Gurney. All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\" notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\" notice, this list of conditions and the following disclaimer in the
.\" documentation and/or other materials provided with the distribution.
.\" 3. Neither the name of the author nor the names of any co-contributors
.\" may be used to endorse or promote products derived from this software
.\" without specific prior written permission.
.\"
.\" THIS SOFTWARE IS PROVIDED BY John-Mark Gurney AND CONTRIBUTORS ``AS IS''
.\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.\" $FreeBSD: src/usr.bin/brandelf/brandelf.1,v 1.17 2007/03/09 14:36:18 ru Exp $
.\"
.Dd February 6, 1997
.Dt BRANDELF 1
.Os
.Sh NAME
.Nm brandelf
.Nd mark an ELF binary for a specific ABI
.Sh SYNOPSIS
.Nm
.Op Fl l
.Op Fl f Ar ELF_ABI_number
.Op Fl t Ar string
.Ar
.Sh DESCRIPTION
The
.Nm
utility marks an ELF binary to be run under a certain ABI for
.Dx .
.Pp
The options are as follows:
.Bl -tag -width indent
.It Fl f Ar ELF_ABI_number
Forces branding with the supplied ELF ABI number.
Incompatible with the
.Fl t
option.
These values are assigned by SCO/USL.
.It Fl l
Writes the list of all known ELF types to the standard error.
.It Fl t Ar string
Brands the given ELF binaries to be of the
.Ar string
ABI type.
Currently supported ABIs are
.Dq Li FreeBSD ,
.Dq Li Linux ,
and
.Dq Li SVR4 .
.Dx
uses
.Dq Li FreeBSD
as its native branding.
.It Ar file
If
.Fl t Ar string
is given it will brand
.Ar file
to be of type
.Ar string ,
otherwise it will simply display the branding of
.Ar file .
.El
.Sh EXIT STATUS
Exit status is 0 on success, and 1 if the command
fails because a file does not exist, is too short, fails to brand properly,
or the brand requested is not one of the known types and the
.Fl f
option is not set.
.Sh EXAMPLES
The following is an example of a typical usage
of the
.Nm
command:
.Bd -literal -offset indent
brandelf file
brandelf -t Linux file
.Ed
.Sh SEE ALSO
.Rs
.%A The Santa Cruz Operation, Inc.
.%T System V Application Binary Interface
.%D April 29, 1998 (DRAFT)
.%O http://www.sco.com/developer/devspecs/
.Re
.Sh HISTORY
The
.Nm
manual page first appeared in
.Fx 2.2 .
.Sh AUTHORS
This manual page was written by
.An John-Mark Gurney Aq Mt [email protected] .
|
1. Introduction {#sec1-materials-13-02142}
===============
Silicon carbide (SiC) fiber synthesized from polycarbosilane is one of the most important reinforcements for ceramic matrix composites (CMCs), which are finding more and more applications in harsh environments of high temperature and air oxidation, such as turbo-engine blades in the aerospace industry \[[@B1-materials-13-02142],[@B2-materials-13-02142],[@B3-materials-13-02142],[@B4-materials-13-02142],[@B5-materials-13-02142]\]. Polycrystalline SiC fiber exhibits brittle fracture behavior at room temperature but becomes ductile under certain applied stresses at temperatures above 1200 °C. In fact, plastic deformation and rupture caused by creep have become a key limitation of this material for any long-term application under load at temperatures above 1200 °C \[[@B6-materials-13-02142],[@B7-materials-13-02142],[@B8-materials-13-02142],[@B9-materials-13-02142]\].
In general, SiC does not melt at any known temperature, and its high decomposition temperature (approximately 2700 °C) makes it a natural candidate for high temperature applications without the risk of creep failure at temperatures below 1200 °C (\~0.5T~m~, in K) \[[@B10-materials-13-02142],[@B11-materials-13-02142]\]. However, recent research showed that cavitation-governed creep of crystalline SiC fine fibers with diameters smaller than 15 microns occurs dramatically at 1200 °C \[[@B5-materials-13-02142]\]. Amorphous silica (glass phase) and crystalline oxides (alumina or titanium oxide) with low melting points existing along the grain boundaries (GBs) of the fine SiC grains enhance creep. Therefore, a larger grain size in stoichiometric SiC fibers leads both to fewer GBs and to higher GB viscosity, which suppresses cavitation movement and GB sliding. On the other hand, the larger crystallite size of SiC in a continuous fiber results in an extremely high modulus (about 440 GPa for Hi-Nicalon type S) with decreased tensile strength and toughness. Thus, the rigid SiC fibers are more difficult to weave, and the cost rises because of the purification and growth of SiC grains required at higher temperatures and extended retention times \[[@B12-materials-13-02142],[@B13-materials-13-02142]\].
Crystallization and strengthening of GBs in polycrystalline SiC can also be achieved via precipitation or introduction of a non-soluble secondary phase with a higher melting point and modulus than SiC. Creep in SiC fiber can be retarded by introducing TiB~2~ at only 2.4% by mass. It was found that the incorporated \~50 nm TiB~2~ particles reside at triple points of the SiC GBs, which limits the sliding of SiC \[[@B14-materials-13-02142],[@B15-materials-13-02142]\]. Thus, high-melting carbides and borides of metals such as zirconium and hafnium are promising additives for improving the creep resistance of SiC fiber. With this aim, direct polymerization of 1-methylsilene into polycarbosilanes has been investigated using various metallocenes as catalysts during surface dechlorination of dichloromethylsilanes by sodium \[[@B16-materials-13-02142]\]. For the first time, we have shown a metallocene-catalysed insertion polymerization of tautomeric 1-silene into polycarbosilanes as analogs of polyolefins \[[@B16-materials-13-02142],[@B17-materials-13-02142]\]. The polycarbosilanes synthesized through this molecular insertion process are suitable for spinning into SiC--ZrC composite ceramic fibers. These transition metal carbides may act as reinforcements that improve the creep resistance as well as the thermal and oxidation resistance of the SiC ceramic \[[@B18-materials-13-02142]\].
2. Materials and Methods {#sec2-materials-13-02142}
========================
2.1. Polymeric Precursors {#sec2dot1-materials-13-02142}
-------------------------
Polyzirconocenecarbosilane (PZCS) was synthesized from dimethyldichlorosilane, zirconocene dichloride and metallic sodium in toluene and used as the precursor for the fabrication of the SiC--ZrC composite fiber. The synthesis procedure and pyrolysis behavior of the PZCS polymer have been reported in detail \[[@B16-materials-13-02142],[@B17-materials-13-02142]\]; it is a product of the zirconocene-catalysed insertion polymerization of 1-methylsilene transient intermediates (CH~2~=SiHCH~3~) with the molecular formula given in Equation (1) \[[@B13-materials-13-02142]\], where R = CH~3~ and n = 10--25. The polymer has an average molecular weight of 1080 g/mol, as determined by gel-permeation chromatography (GPC) using toluene as the eluent and polystyrene as the calibration standard. The softening point of PZCS for melt spinning is around 120 °C and the ceramic yield in Ar at 1000 °C is 58%.
2.2. Fabrication of Fibers {#sec2dot2-materials-13-02142}
--------------------------
About 40 g of PZCS was charged into the spinning can, heated to the spinning temperature (135--140 °C) under a nitrogen atmosphere and then extruded through a single-hole spinneret with a diameter of 0.25 ± 0.05 mm. The PZCS green fibers were cured in a flow reactor in argon by electron-beam irradiation (beam current of 1.0--2.5 mA, retention time of 3--5 h and a curing dose of about 5--8 MGy). The as-cured fibers were heated to 1000 °C under an H~2~ or Ar atmosphere, then heated to 1600 °C under an Ar atmosphere and maintained at 1600 °C for 1 h. In both cases, the heating and cooling rate was 2 °C/min. For ease of description, the former is referred to as the H~2~--Ar process fiber and the latter as the Ar--Ar process fiber.
2.3. Characterizations {#sec2dot3-materials-13-02142}
----------------------
The elemental contents of the fibers were analyzed as follows: the contents of Si and Zr were measured by ICP-OES in a Thermo Fisher ICAP6300 spectrometer (Waltham, MA, USA), and the contents of carbon and hydrogen were determined with an Elementar Vario EL analyzer (Langenselbold, Germany). A TC-436 N/O analyzer (LECO, St. Joseph, MI, USA) was used to determine the oxygen content.
The phase compositions of the pyrolysed fibers were identified by X-ray diffraction (XRD, PANalytical X'Pert-PRO diffractometer, Eindhoven, Netherlands) at 2θ = 10°--90° with Cu K~α~ radiation (λ = 0.15406 nm, 40 kV and 30 mA).
Free carbon in the fibers was examined by Raman micro-spectrometry (Horiba Jobin-Yvon, Paris, France), using the 632.8 nm line of a He-Ne laser as the excitation source; scattering was measured in the first-order spectrum over the range 900--2000 cm^−1^.
The microstructures and elemental concentrations of the particles in the fibers were characterized with scanning electron microscopy (SEM, S4800, Hitachi, Tokyo, Japan) and transmission electron microscopy (TEM, TecnaiG20, FEI, Hillsboro, OR, USA) equipped with an X-ray energy dispersive spectrometer (EDS). The samples were sprayed with a carbon film and then observed with SEM.
3. Results and Discussion {#sec3-materials-13-02142}
=========================
3.1. Morphologies of the Polymeric and Ceramic Fibers {#sec3dot1-materials-13-02142}
-----------------------------------------------------
The precursor PZCS is a thermoplastic polymer which shows excellent spinnability around 150 °C, but the derived green fiber will remelt and lose its shape unless it is thermoset before pyrolysis into an inorganic fiber. Therefore, curing or aging of the green fiber into a thermoset one is the first key step preceding the inorganic chemical transformation. It is well known that a traditional polycarbosilane can be cured by oxidation in hot air or oxidizing gases such as NO~2~, which proceeds via chemical reactions of Si-H groups with oxygen to form Si-O-Si linkages and water \[[@B1-materials-13-02142],[@B12-materials-13-02142],[@B14-materials-13-02142]\]. This curing clearly starts from the surface of the fiber and proceeds slowly inwards, governed by oxygen diffusion. Oxidation curing will inevitably and inhomogeneously introduce oxygen into the polymeric fibers, which leads to the formation of a silicon-carbon-oxygen complex in the fiber after pyrolysis. Therefore, irradiation of the fiber by high-energy electron beams (EB) was applied for a homogeneous curing of the green fiber without introducing oxygen contamination, and this is the approach also used in this study. The mechanism of this thermosetting process, based on the elimination reaction between two Si-H groups to give an Si-Si linkage with release of hydrogen, has been investigated and discussed by Takeda et al. \[[@B13-materials-13-02142]\].
Surface and cross-section morphologies of the EB-cured PZCS fiber are shown in [Figure 1](#materials-13-02142-f001){ref-type="fig"}a,b: the green fiber has a smooth surface and a very dense fracture cross-section after EB-curing in argon. The EDS maps of the Si and Zr distributions from surface to core are shown in [Figure 1](#materials-13-02142-f001){ref-type="fig"}c,d. No aggregation of the zirconium phase is observed at either the surface or the cross-section of the as-cured fiber.
The as-cured fibers are thereby converted into a thermoset state that does not remelt during pyrolysis up to 1000 °C in either H~2~ or Ar. Pyrolysis of the PZCS in Ar finally leads to the formation of ZrC, SiC and free carbon in the residual inorganic fibers after release of complex gaseous species such as methane, hydrogen and silanes \[[@B18-materials-13-02142]\]. The surface and cross-sectional morphologies of the ceramic fibers treated by the H~2~--Ar process or the Ar--Ar process at 1200, 1400 or 1600 °C show only minor differences from each other. [Figure 2](#materials-13-02142-f002){ref-type="fig"} shows SEM images of the surface and cross-section of the fibers obtained by the H~2~--Ar process at 1200, 1400 or 1600 °C for 1 h. In all three cases, the ceramic fibers show very dense and homogeneous microstructures without any visible cracks, voids or other flaws.
The backscattered electron (BSE) image mainly reflects the distribution of elements at the sample surface: the brighter the region, the higher the mean atomic number. The BSE images of the fibers at 1200 °C ([Figure 2](#materials-13-02142-f002){ref-type="fig"}b) and 1400 °C ([Figure 2](#materials-13-02142-f002){ref-type="fig"}d) appear uniformly bright, so the SiC and ZrC in these fibers cannot be distinguished. A contrast between bright and dark regions is observed in the image at 1600 °C ([Figure 2](#materials-13-02142-f002){ref-type="fig"}f), wherein Zr-rich brighter spots with diameters of about 200 nm are dispersed in a darker Si-rich matrix. Obvious aggregation of Zr in the fibers is thus more likely to occur at 1600 °C, which may be ascribed to the faster migration of Zr cations at higher temperatures.
3.2. Phases Composition in the Ceramic Fibers {#sec3dot2-materials-13-02142}
---------------------------------------------
XRD patterns of the ceramic fibers annealed by the H~2~--Ar process at 1200, 1400 and 1600 °C for 1 h are shown in [Figure 3](#materials-13-02142-f003){ref-type="fig"}a. ZrC is the only crystalline phase present in the ceramic fibers after annealing at 1200 °C. After annealing at 1400 and 1600 °C for 1 h, both crystalline ZrC and SiC are identified in the ceramic fibers. The diffraction peaks at 1600 °C are sharper than those at 1400 °C, indicating better crystallinity, in accordance with the SEM results.
XRD patterns of the ceramic fibers obtained by the Ar--Ar process at 1200, 1400 or 1600 °C are given in [Figure 3](#materials-13-02142-f003){ref-type="fig"}b. According to these patterns, ZrC is also the only major crystalline phase in the ceramic fibers obtained at 1200 and 1400 °C. When the annealing temperature reaches 1600 °C, both crystalline ZrC and SiC can be identified, indicating that the crystallinity of ZrC and SiC increases with increasing temperature.
Comparing the results in [Figure 3](#materials-13-02142-f003){ref-type="fig"}a,b, the diffraction peaks of the crystalline phases formed by the Ar--Ar process at 1200 °C are close to those obtained by the H~2~--Ar process. At 1400 and 1600 °C, the diffraction peaks of crystalline ZrC formed via the H~2~--Ar process become sharper than those formed via the Ar--Ar process. It is also clear that the crystallinity of SiC formed via the H~2~--Ar process is better than that formed via the Ar--Ar process after heat treatment at 1600 °C. That is, the introduction of an H~2~ atmosphere below 1000 °C affects the growth of the ZrC and SiC grains at 1600 °C, as shown by the following analysis.
[Table 1](#materials-13-02142-t001){ref-type="table"} lists the elemental compositions and C/(Si + Zr) atomic ratios of the different fibers. Compared with the green fibers, the fibers pyrolyzed at 1000 °C in an Ar or H~2~ atmosphere consist of Zr, Si, C and O. When the pyrolysis atmosphere below 1000 °C is changed from Ar to H~2~, the Si content increases from 43.82% to 51.95%, the Zr content from 14.88% to 17.10%, and the carbon content decreases by about 10%, so that the C/(Si + Zr) atomic ratio decreases from 1.90 to 1.15. After the Ar--Ar or H~2~--Ar process at 1600 °C, the Si and Zr contents increase slightly while the carbon content decreases further, which can be ascribed to carbothermal reduction involving the C and O in the fibers. The C/(Si + Zr) atomic ratio in the fibers obtained by the H~2~--Ar process at 1600 °C is 1.11, which means these fibers consist of near-stoichiometric ZrC and SiC.
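The C/(Si + Zr) atomic ratios quoted above follow from the weight fractions in the usual way (our reconstruction; the paper does not state the conversion explicitly):

$$\frac{\mathrm{C}}{\mathrm{Si}+\mathrm{Zr}} = \frac{w_{\mathrm{C}}/M_{\mathrm{C}}}{w_{\mathrm{Si}}/M_{\mathrm{Si}} + w_{\mathrm{Zr}}/M_{\mathrm{Zr}}}, \qquad M_{\mathrm{C}} \approx 12.01,\; M_{\mathrm{Si}} \approx 28.09,\; M_{\mathrm{Zr}} \approx 91.22\ \mathrm{g\,mol^{-1}}.$$

For the fiber pyrolyzed in Ar at 1000 °C, for example, $(39.51/12.01)/(43.82/28.09 + 14.88/91.22) \approx 3.29/1.72 \approx 1.9$, consistent with the tabulated value of 1.90.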
It is known that pyrolysis of PZCS in Ar leads to the formation of ZrC, SiC and free carbon in the resultant fiber \[[@B18-materials-13-02142]\]. The free carbon remaining in the fibers obtained at 1600 °C was therefore analyzed by micro-Raman spectroscopy ([Figure 4](#materials-13-02142-f004){ref-type="fig"}). For the fibers obtained via the Ar--Ar process, strong and sharp peaks at 1358 and 1590 cm^−1^ are recorded. The scattering peak at 1590 cm^−1^ is ascribed to the E~2\ *g*~ mode of the graphene layers and is usually labeled the G band (named after "graphite"), while the peak at 1358 cm^−1^ is assigned to the D band of pyrolytic carbon (named after "defect"). The intensity ratio of the D band to the G band is larger than 1, which means a large amount of free carbon exists in the ceramic fiber obtained in argon at 1600 °C. In the fibers obtained via the H~2~--Ar process, both peaks at 1358 and 1590 cm^−1^ become very weak, which means the free carbon in the SiC--ZrC ceramic fibers is almost completely removed by H~2~.
Compared with the Raman spectrum of the SiC--ZrC fibers obtained via the Ar--Ar process ([Figure 4](#materials-13-02142-f004){ref-type="fig"}a), peaks at 784 and 955 cm^−1^ ascribed to β-SiC are identified in [Figure 4](#materials-13-02142-f004){ref-type="fig"}b, indicating better crystallization of β-SiC in the ceramic fibers obtained via the H~2~--Ar process.
From the elemental analysis and Raman spectra, it is found that a large amount of carbon can be removed from the as-cured fibers by introducing an H~2~ atmosphere below 1000 °C. Benefiting from this decarbonization by H~2~, the amount of free carbon in the ceramic fibers is reduced, the crystallinity and grain size of ZrC and SiC are increased, and near-stoichiometric ZrC and SiC can be obtained.
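A plausible route for this decarbonization (our assumption; the paper does not specify the reaction) is hydrogasification of the excess carbon during the hydrogen pyrolysis step:

$$\mathrm{C_{(free)}} + 2\,\mathrm{H_2} \;\longrightarrow\; \mathrm{CH_4}$$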
[Figure 5](#materials-13-02142-f005){ref-type="fig"} shows high-resolution TEM (HR-TEM) images of the as-cured fibers after the Ar--Ar process up to 1400 or 1600 °C. Amorphous carbon is observed around the ZrC and SiC nanocrystallites. In contrast, the ceramic fibers obtained via the H~2~--Ar process consist of two clearly defined phases, SiC and ZrC, while free carbon is hardly observed in [Figure 6](#materials-13-02142-f006){ref-type="fig"}a,b. These results confirm the Raman analysis.
Based on the X-ray powder diffraction data and the Debye-Scherrer formula, the average grain sizes of SiC and ZrC in the ceramic fibers heated at various temperatures were computed, as shown in [Figure 7](#materials-13-02142-f007){ref-type="fig"}. At heat treatment temperatures of 1200--1300 °C, ZrC crystals form first, with sizes of about 2--4 nm. As the heat treatment temperature increases from 1400 to 1600 °C, the ZrC grain size grows to 10 nm or even larger. The ZrC grain size at 1600 °C is around 8--10 nm larger than that of SiC, which may be related to the aggregation of Zr cations in the fibers at higher temperatures. After heat treatment at 1600 °C via the H~2~--Ar process, the average ZrC grain size is about 18 nm ([Figure 7](#materials-13-02142-f007){ref-type="fig"}a) and the SiC grain size also increases to about 8 nm ([Figure 7](#materials-13-02142-f007){ref-type="fig"}b). The ZrC and SiC grain sizes obtained at 1600 °C via the H~2~--Ar process are 3--5 nm larger than those obtained via the Ar--Ar process. From this trend, it is expected that the rapid growth of ZrC and SiC grains in the H~2~--Ar process fibers will continue, whereas grain growth in the Ar--Ar process fibers will slow down.
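For reference, the Scherrer (Debye-Scherrer) estimate used for these grain sizes relates the mean crystallite size D to the broadening of an XRD reflection; in its standard form, with the Cu K~α~ wavelength given in Section 2.3,

$$D = \frac{K\lambda}{\beta\cos\theta}, \qquad \lambda = 0.15406\ \mathrm{nm},$$

where K ≈ 0.9 is the shape factor, β is the peak full width at half maximum (in radians) after instrumental correction, and θ is the Bragg angle. The exact K value and correction procedure used by the authors are not stated, so this is the conventional form rather than a detail taken from the paper.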
4. Conclusions {#sec4-materials-13-02142}
==============
A composite SiC--ZrC ceramic fiber was fabricated from a single polymeric precursor, polyzirconocenecarbosilane, and it was shown that near-stoichiometric β-SiC and ZrC could both be formed in the fibers by decarbonizing the electron-beam-cured green fiber in hydrogen up to 1000 °C and subsequently annealing the inorganic fiber in argon up to 1600 °C. The microstructure of the SiC--ZrC fibers exhibited a homogeneous dispersion of nano-sized ZrC crystallites (\~18 nm) in a β-SiC matrix with a smaller grain size (\~8 nm). After pyrolysis in hydrogen below 1000 °C, a more rapid growth of the ZrC and SiC crystalline grains occurred during annealing in Ar up to 1400 or 1600 °C. Within the same ceramic fiber, the ZrC grain size was larger than that of SiC, and the aggregation of Zr became apparent at 1600 °C.
We are grateful to Yanguo Wang at the Institute of Physics/CAS for help with microstructure analysis of the ceramic fibers.
M.G. and W.Z. conceived and designed the experiments; X.L., S.Y. and H.Z. performed the experiments; M.G., S.Y. and Z.L. helped perform the data analysis; M.G. wrote the manuscript; W.Z. performed the manuscript review. All authors have read and agreed to the published version of the manuscript.
This work was funded by the National Key R&D Program of China (No. 2018YFC1902401), the National Natural Science Foundation of China (Grant Numbers 51471159, 51671180, 51472243 and 51272251).
The authors declare no conflict of interest.
######
Scanning electron microscopy (SEM) images of the surface (**a**) and cross-section (**b**) of the electron-beam (EB)-cured polyzirconocenecarbosilane (PZCS) fiber, and X-ray energy dispersive spectrometer (EDS) images (**c**,**d**) of the Si and Zr distributions from surface to core of the fiber.
![](materials-13-02142-g001a)
![](materials-13-02142-g001b)
![Surface and cross-sectional SEM images of the as-cured PZCS fibers after H~2~--Ar process up to various temperatures of (**a**,**b**): 1200 °C; (**c**,**d**): 1400 °C; (**e**,**f**): 1600 °C, wherein (**b**,**d**,**f**) are the backscattered electron images.](materials-13-02142-g002){#materials-13-02142-f002}
![XRD patterns of SiC--ZrC ceramic fibers through (**a**) H~2~--Ar process and (**b**) Ar--Ar process up to various temperatures of 1200, 1400 or 1600 °C.](materials-13-02142-g003){#materials-13-02142-f003}
![Raman spectra of the SiC--ZrC fibers obtained at 1600 °C via the Ar--Ar process (**a**) and the H~2~--Ar process (**b**).](materials-13-02142-g004){#materials-13-02142-f004}
![HR-TEM images of the as-cured fibers obtained via the Ar--Ar process up to (**a**) 1400 and (**b**) 1600 °C.](materials-13-02142-g005){#materials-13-02142-f005}
![HR-TEM images of the as-cured fibers obtained via the H~2~--Ar process up to (**a**) 1400 and (**b**) 1600 °C.](materials-13-02142-g006){#materials-13-02142-f006}
![Crystalline grain sizes of ZrC (**a**,**b**) and SiC (**c**,**d**) in the composite fibers after pyrolysis at 1000 °C and annealing at various temperatures from 1200 to 1600 °C (**a**,**c**: H~2~--Ar process; **b**,**d**: Ar--Ar process).](materials-13-02142-g007){#materials-13-02142-f007}
materials-13-02142-t001_Table 1
######
Elemental contents (wt %) and C/(Si + Zr) atomic ratios of the different fibers.
Content (wt %) Si C Zr O H Cl C/(Si + Zr) Atomic Ratio
----------------------------------- ------- ------- ------- ------ ------- ------ --------------------------
Green fibers 32.94 44.37 6.80 1.21 12.66 2.02 2.96
Fibers in Ar (1000 °C) 43.82 39.51 14.88 1.89 / / 1.90
Fibers in H~2~ (1000 °C) 51.95 28.32 17.10 2.63 / / 1.15
Ar--Ar process fiber at 1600 °C 45.19 38.82 14.93 1.16 / / 1.82
H~2~--Ar process fiber at 1600 °C 52.73 27.68 17.45 2.10 / / 1.11
|
Chromatographia
Chromatographia is a peer-reviewed scientific journal published by Springer Verlag, covering liquid and gas chromatography, as well as electrophoresis and TLC.
Impact factor
Chromatographia had a 2014 impact factor of 1.411, ranking it 50th out of 74 in the subject category "Analytical Chemistry" and 65th out of 79 in "Biochemical Research Methods".
Category:Chemistry journals
Category:Publications established in 1968
Category:Springer Science+Business Media academic journals
Category:Monthly journals |
Q:
Has a Kendo UI Widget been applied to a DOM object (to avoid duplication)
This function displays the dialog. While opening, it also creates a Kendo editor. The problem is that when the dialog is closed and then reopened, the editor is duplicated.
function openChangeProjectStatusPopup(popupElementName) {
$("#" + popupElementName).dialog({
width: 700,
height: 400,
draggable: false,
resizable: false,
modal: true,
open: function () {
$('#changePhaseTextArea').kendoEditor();
},
close: function () {
}
});
}
To avoid duplication, I should do a check like this
if(changePhaseTextArea is not already a kendoEditor){
$('#changePhaseTextArea').kendoEditor();
}
I've checked the Kendo websites, but I can't find where one can check the type of the object.
Thanks for helping
A:
The easiest way is to ask for the widget object, which is referenced via data("kendoEditor") or, in general, data("kendo<WidgetName>") where <WidgetName> is the name of the widget.
For your code example:
var elem = $("#changePhaseTextArea");
if (!elem.data("kendoEditor")) {
$('#changePhaseTextArea').kendoEditor();
}
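If you need the same check for several widgets, you could wrap it in a small helper. This is just a sketch assuming jQuery and Kendo UI are already loaded; initWidgetOnce is a made-up name, not part of the Kendo API:
function initWidgetOnce(selector, widgetName, options) {
    var elem = $(selector);
    // Kendo stores the widget instance in jQuery data under the plugin name,
    // e.g. $("#x").kendoEditor() can later be retrieved via $("#x").data("kendoEditor").
    if (!elem.data(widgetName)) {
        elem[widgetName](options); // e.g. elem.kendoEditor(options)
    }
    return elem.data(widgetName);  // existing or newly created widget instance
}
// Usage in the dialog's open handler:
// open: function () {
//     initWidgetOnce("#changePhaseTextArea", "kendoEditor");
// }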
|
Sunday, April 22, 2012
Gignac Market (Gignac, France)
We are in the south of France...Languedoc. It was Saturday and that means it's market time in Gignac, the town next to ours (which is only a 10 minute drive away). We always drive to the Gignac market because we have several favourite producers we like to buy products from.
The first beautiful flowers of Spring.
The organic vegetable producer.
A cheese trailer.
Beautiful veggies and fruits.
The meat trailer; every meat eater's dream.
Our favourite egg and chicken woman. We always buy her fresh eggs; her eggs are always so fresh and big...and they are so cheap. We also always buy her chicken for our roasted chicken dinner. They are always fresh and delicious.
Our favourite cheese woman. We cannot believe this old woman makes the best cheeses around (also a cheese cake to die for). She also has fresh milk, yoghurt and butter. We buy it all.
The bread stand.
The roasted chicken trailer.
The olives and tapenade trailer.
There are hundreds of stalls and trailers selling the best agricultural products of the area. We are really lucky to have one of such good quality near our home. The day was sunny with blue skies, a bit chilly, and there were so many yummy-looking fresh products...a day with so many pleasures for the senses.
Alexis Sanchez's family arrive in London in preparation for Arsenal exit
The Chilean's family are in London and sources close to the player believe that a January move is close to happening
Alexis Sanchez’s family have arrived in London in preparation for the Chilean’s departure from Arsenal, Goal has learned.
The 29-year-old is the subject of a transfer tussle between Manchester City and Manchester United this month, although it’s understood that the former Barcelona man has his heart set on a move to the Etihad Stadium, where he will link up with his old coach Pep Guardiola.
Arsenal are demanding a fee of £35m for Sanchez while the player's agent Fernando Felicevich, previously dubbed ‘the king of South American football’ by Forbes magazine, is requesting a fee of £5m for facilitating the transfer.
City have made it known that they are unwilling to meet Felicevich's personal demands, believing the total package requested by the agent and Arsenal does not represent value for money.
But Goal understands that while the club insist they are willing to walk away from a deal this month, potentially leaving United with a clear run, the Blues would rather find an agreement that suits all parties and sign the Chilean before the transfer window closes.
Sanchez’s older brother Humberto arrived at Heathrow Airport on Saturday with a number of family members in preparation for the player’s departure from Arsenal.
It's understood that the two-time winner wants his closest family to be alongside him for when his exit from the Gunners is confirmed this month.
“It looks like Sanchez will not extend his contract,” Arsenal boss Wenger told reporters in the embargoed section of Friday’s pre-match press conference.
“But we want to keep Jack [Wilshere] and if we have an opportunity maybe to keep [Mesut] Ozil, the rebuild will be less deep than if all the three left.”
Goal understands that Bordeaux starlet Malcom tops Wenger’s wishlist as a replacement for Sanchez, while Thomas Lemar – who came close to joining the Gunners on transfer deadline day last summer – remains another target.
Contrary to reports, the French side own 100 per cent of the player's rights and there is no third party ownership involved in any potential deal.
Malcom’s representatives were in London meeting with Arsenal officials on Friday and it's understood that the club will sanction the sale of Sanchez to City once they secure a replacement. |
Soldak
Soldak Entertainment, Inc. is a small independent developer, focused on bringing new and unique gameplay to the entertainment industry. Soldak was founded by Steven Peeler. Before embarking on his own in late 2004 to create Depths of Peril, Steven was Technical Director of Ritual Entertainment.
http://www.soldak.com/ |
Optical sensors with molecularly imprinted nanospheres: a promising approach for robust and label-free detection of small molecules.
Molecularly imprinted nanospheres obtained by miniemulsion polymerization have been applied as the sensitive layer for label-free direct optical sensing of small molecules. Using these particles as the sensitive layer allowed for improving response times in comparison to sensors using MIP layers. As a model compound, well-characterized nanospheres imprinted against L-Boc-phenylalanine anilide (L-BFA) were chosen. For immobilization, a simple concept based on electrostatic adsorption was used, showing its applicability to different types of surfaces, leading to a good surface coverage. The sensor showed short response times, good selectivity, and high reversibility with a limit of detection down to 60 μM and a limit of quantitation of 94 μM. Furthermore, reproducibility, selectivity, and long-term stability of the sensitive layers were tested. The best results were achieved with an adsorption on aminopropylsilane layers, showing a chip-to-chip reproducibility of 22%. Furthermore, the sensors showed no loss in signal after a storage time of 1 year. |
You will need Real Player 8 Basic which you can obtain free of charge. The one for the Mac downloads an installer. Clicking on the installer icon unpacks Real Player and makes appropriate adjustments to your system. I believe that the one for Mac challenged, PC users works the same way.
If you believe that you have Real Player G2 installed, see if you can play the following sound clip. You will use physical as the name and rocks as the password. Click on the arrow pointing to the right on the real icon at the top of the page.
If your system does not meet or exceed the above minimum system requirements, please click here to download an older version of RealPlayer. This is somewhat experimental depending on the operating system you are using. |
The Apple Lightning connector is new. Apple is a registered trademark of Apple, Inc. of Cupertino, Calif. Apple Lightning is a pending trademark of Apple Inc. of Cupertino, Calif. (US Trademark application 85726560). The Apple Lightning connector has a smaller external footprint than the legacy 30-pin connector, but a portion of the connector that does not have pins extends deeper into the device. |
About Gas Grill Parts
Gas grills can be an investment and when you get used to the nuances of operating your particular model, it makes sense to try to fix it when something goes wrong before thinking of purchasing a new one. Sometimes you can even find grill accessories to take performance to the next level. From replacement grill igniters to propane regulators, count on Ace to help you find the gas grill parts you need.
Depending on how frequently you use your grill, the grill grates can accumulate grease, food residue and other stubborn cooking sediment that cannot be easily removed with a grill brush or good wash. Some grates even have a tendency to rust when exposed to the outdoors over long periods of time. You can typically find two types of replacement cooking grates – either stainless steel or ceramic cooking grates. Stainless steel grates are more common, but ceramic cooking grates tend to supply a more even heat to the food being grilled.
For more help finding the gas grill parts you need, visit your local Ace. Or if you end up needing a new grill, check out our grill buying guide.
Monday, October 26, 2009
The sense that Wall Street has pulled off a coup d'etat and taken over the machinery of the United States is the most powerful meme out there now, and its power is growing in magnitude every day among all classes of Americans...
The government has given trillions in bailout or other emergency funds to private companies, but is largely refusing to disclose to either the media, the American people or even Congress where the money went
Congress has largely been bought and paid for, and two powerful congressmen have said that banks run Congress
The head of the Federal Reserve Bank of Kansas City, the former Vice President of the Dallas Federal Reserve, and two top IMF officials have all said that we have – or are in danger of having – oligarchy in the U.S.
Economist Dean Baker says that the real purpose of bank rescue plans was “A massive redistribution of wealth to the bank shareholders and their top executives”
The big banks killed any real chance for financial reform months ago
----
Tent cities growing:
----
Bloomberg: Marc Faber, publisher of the Gloom, Boom & Doom Report, talks with Bloomberg's Deirdre Bolton and Erik Schatzker about the performance of the dollar and U.S. economic outlook.
Faber also discusses the outlook for equities, the correlation between the dollar and the stock market and the fiscal position of the U.S. which he describes as a "complete disaster" and "Dollar Will Go To A Value Of Exactly Zero" ... |
Tag Archives: data modeler
Recently I started looking at the unit testing functionality within Oracle’s SQL Developer, as a possible alternative to ut/PLSQL. Oracle’s Jeff Smith has a useful starter page on this, Unit Testing Your PL/SQL with Oracle SQL Developer. As he notes, the first thing you need to do is set up a unit test repository, which is essentially a schema within an Oracle database that SQL Developer will create for you for the unit test metadata. The schema contains 24 tables, and when I set it up I thought it would be nice to see an entity-relationship diagram for it.
I often draw these myself after running an analysis query on the foreign key network, as described in a recent post, PL/SQL Pipelined Function for Network Analysis. However, in this case I remembered that SQL Developer also has a data modeler component, and I thought it would be a good opportunity to learn how to use this tool to reverse-engineer a diagram from the schema. It’s actually very easy, and in this article I will show the result (see another Jeff Smith post for learning resources on the data modeler, Data Modeling). For good measure, I will include outputs from my own metadata queries, and add similar for Oracle’s HR demo schema (and linked schemas, in version 12.1).
12 July 2015: I added a section on the Oracle v11.2 Apex schema, APEX_040000, which has 425 tables, to see how the data modeler handles larger schemas.
Unit Test Repository Schema
I created the diagram by following the wizard from File/Import/Data Dictionary – see the link above for more information on how to use the Data Modeler.
I had to import the three schemas (HR, OE and PM) one at a time, merging each with the previous one; I am not sure if you can do it in one go, or if the diagram is affected by the order of import.
Data Modeler Diagram
Manual Visio Diagram
For comparison, here is a manual diagram, deliberately omitting column and other detail. It was done as an illustration of the network analysis program output, and I therefore did not include useful information such as on relationship optionality.
Apex Schema – APEX_040000 (v11.2)
This imported quickly, with 425 tables. However, the diagram was not so useful. Trying to print it to .png or .jpg (via File/Print Diagram/To Image File) silently failed; printing to .pdf worked, but the reader opens at a zoom level of 1.41% and it’s not practical to navigate the links. Maybe text-based analysis reports are more useful for the larger schemas.
Data Modeler Diagram
Here is a screenshot of the modeler diagram:
Network Analysis Output
I include my standard summary listings in this case, showing that the schema splits into 21 sub-networks, with 298 tables having foreign key links, the remaining 127 being thereby excluded from this report. |
DNA replication fidelity.
DNA replication fidelity is a key determinant of genome stability and is central to the evolution of species and to the origins of human diseases. Here we review our current understanding of replication fidelity, with emphasis on structural and biochemical studies of DNA polymerases that provide new insights into the importance of hydrogen bonding, base pair geometry, and substrate-induced conformational changes to fidelity. These studies also reveal polymerase interactions with the DNA minor groove at and upstream of the active site that influence nucleotide selectivity, the efficiency of exonucleolytic proofreading, and the rate of forming errors via strand misalignments. We highlight common features that are relevant to the fidelity of any DNA synthesis reaction, and consider why fidelity varies depending on the enzymes, the error, and the local sequence environment. |
Attorney General William Barr claimed in an NBC News interview that former President Barack Obama posed the “greatest danger” to democracy in the 2016 election — not Russia.
Barr told the network that he disagreed with his own department’s inspector general report, which concluded that the FBI did not “spy” on the Trump campaign and was justified in launching an investigation into its ties to Russia.
Nonetheless, the attorney general claimed that the Obama administration posed the biggest threat to democracy because of alleged spying, which was debunked Monday by the release of the report.
“I think, probably, from a civil liberties standpoint, the greatest danger to our free system is that the incumbent government used the apparatus of the state — principally, the law enforcement agencies and the intelligence agencies — both to spy on political opponents. But as to use them in a way that could affect the outcome of the election,” Barr alleged. “As far as I’m aware, this is the first time in history that this has been done to a presidential campaign.”
Barr claimed to NBC News reporter Pete Williams that the FBI may have opened the investigation in “bad faith” and insisted that Trump’s campaign was “clearly spied upon” in spite of inspector general Michael Horowitz’s nearly two-year investigation which found no such evidence.
He also downplayed the Trump campaign’s extensive contacts with Russian officials, insisting that “presidential campaigns are frequently in contact with foreign persons.”
Barr’s comments were a stark contrast from Horowitz’s report, which found no evidence of the “spying” allegations invoked by the president and his conservative allies.
"I think our nation was turned on its head for three years based on a completely bogus narrative that was largely fanned and hyped by a completely irresponsible press," Barr said Tuesday as he rejected the report’s findings. "I think there were gross abuses . . . and inexplicable behavior that is intolerable in the FBI."
"There was and never has been any evidence of collusion, and yet this campaign and the president’s administration has been dominated by this investigation into what turns out to be completely baseless,” Barr claimed.
However, former special counsel Robert Mueller’s investigators, who took over the FBI probe, found 272 contacts between Trump’s team and Russia. Barr also did not mention that Trump’s former campaign chief, deputy campaign chief, national security adviser, longtime attorney, and multiple other campaign advisers were convicted in the Russia probe.
Barr issued a statement slamming Horowitz’s report Monday, as did U.S. attorney John Durham, who was handpicked by Barr to investigate the origins of the Russia probe.
The attorney general praised Durham for issuing a rare statement criticizing the inspector general report, arguing it was “necessary to avoid public confusion.”
"It was necessary to avoid public confusion," he said. "It was sort of being reported by the press that the issue of predication was sort of done and over. I think it was important for people to understand that Durham's work was not being preempted and that Durham was doing something different."
Speaking at an event hosted by the Wall Street Journal after the interview, Barr doubled down on his comments, calling the Russia investigation a “travesty.”
“There were many abuses, and that’s by far the most important part of the [inspector general] report," he said. “If you actually spent time looking at what happened, I think you’d be appalled."
Unlike Barr and Durham, FBI Director Christopher Wray issued a statement acknowledging that the investigation found that the Russia probe was opened “for an authorized purpose and with adequate factual predication.”
“The FBI accepts the report’s findings,” he added.
Trump lashed out Tuesday at Wray after he did not put a positive spin on the report.
“I don’t know what report current Director of the FBI Christopher Wray was reading, but it sure wasn’t the one given to me,” Trump tweeted, after insisting that the report had proven his baseless allegations about the FBI. It did the opposite.
Barr’s latest comments were widely criticized and renewed allegations that he was acting as the president’s personal attorney instead of the top law enforcement official in the country.
“Barr is acting in incredibly bad faith,” tweeted Sen. Mark Warner, D-Va., the vice chair of the Senate Intelligence Committee. “With this revisionist campaign to undermine a thorough, two-year IG investigation, the Attorney General is once again substituting partisan rhetoric for politically inconvenient facts.”
“We cannot overstate the damage Bill Barr is doing to the rule of law,” added Rep. David Cicilline, D-R.I., who sits on the House Judiciary Committee.
“Barr’s accusation that the career men and women at the FBI were acting in bad faith a day after a comprehensive investigation failed to find that is a new low,” former Justice Department spokesman Matthew Miller said. “Just sheer partisan hackery.”
New York Times columnist Wajahat Ali predicted that Barr’s clear partisanship could pose a serious risk to the 2020 election.
“He's an ideological extremist using Trump for his dangerous agenda. I hope more people in the Justice Department speak out against him,” he wrote.
National security attorney Bradley Moss concluded that as he did with his interference in the release of the Mueller report, Barr once again showed that “Trump definitely found his Roy Cohn.” |
Early maternal separation induces preference for sucrose and aspartame associated with increased blood glucose and hyperactivity.
Early life stress and exposure to sweeteners lead to physiological and behavioral alterations in adulthood. Nevertheless, many genetic and environmental factors as well as the neurobiological mechanisms that contribute to the development of these disorders are not fully understood. Similarly, evidence about the long-term metabolic effects of exposure to sweeteners in early life is limited and inconsistent. This study used an animal model of maternal separation during breastfeeding (MS) to analyze the effects of early life stress on consumption of sweeteners, weight gain, blood glucose and locomotion. Rats were housed under a reversed light/dark cycle (lights off at 7:00 h) with ad libitum access to water and food. In the MS protocol, MS pups were separated from the dam for 6 h per day in two periods of 180 minutes (7:00-10:00 and 13:00-16:00 h) during the dark phase of postnatal day (PND) 1 to PND 21. Non-separated (NS) pups served as controls. On PND 22 rats were grouped by sex and treatment. From PND 26 to PND 50 sucrose and aspartame were provided to rats, and sweetener intake, body weight and blood glucose-related measures were scored. On PND 50, both male and female rats were exposed to the open field test to obtain locomotion and anxiety-related measures. Results showed that both early maternal separation and sweetener intake during adolescence resulted in increased blood glucose and hyperactivity in male rats but not in female rats. Data suggest that the combination of early stress and exposure to sucrose and aspartame could be a risk factor for the development of chronic diseases such as diabetes, as well as for behavioral alterations. |
PUT IT ALL ON THE LINE |
Monday, September 29, 2008
This was just awesome! We have seen Singapore turned party town! Not only was the race sensational, but also the entire set-up for the Grand Prix. Needless to say, everything (at least on the surface) went smooth as silk. The organisers of the inaugural Singapore GP have done a great job. The spirit of the helpers on track was amazing and I liked the fact that the Formula 1 ticket was a ticket into numerous parties as well (NB: in KL you would have to pay to get into most venues). The only thing I really hated was that there was no live commentary on the radio. Earmuffs with radio were advertised as "Race Radio" and cost a fortune. When asked, the sellers told me how cool it is to listen to music at the race. Honestly,... I don't listen to radio when I am in the cinema. You know what I mean...?
Kersten (above) made the trip from Hong Kong, just to see the race. He left Monday morning at 4 to catch his flight back to the office. Thanks to him I actually got to see a lot of sights I haven't seen so far. Like the Cafe del Mar on Sentosa. Now I need to get some sleep. |
This is a mix I recorded recently for the Acid Wave Podcast crew in Dortmund. The mix actually premiered back on January 1st, so I have been a bit slack in getting around to posting it here and making it available for download. Sorry about that!
So what is going on with this mix? Well, it’s a modern acid mix, starting with acid house / lighter acid techno before going in to acid electro and then finishing with some boshing full-power acid techno. All vinyl and all stuff from the last few years.
Here’s a little guide to the tracks I included in the mix:
Posthuman – Nightride to New Reno (Balkan Vinyl) The mix kicks off with the first track from Posthuman’s new album, Mutant Acid City – Posthuman are probably the most prominent London acid house revivalists, known as they are for the excellent I Love Acid label and parties, as well as their productions and appearances on other labels. This is a slow-burning spacy acid track, and I thought it would be the perfect intro tune.
Neville Watson – De-Basement (I Love Acid) The second track is from the most recent release on Posthuman’s I Love Acid label, and is by Neville Watson. I don’t know anything about the guy actually, I just know this is my favorite tune on the release, and it’s a perfect continuation from the opener – a minimal acid rumbler that is just that bit grittier, crucial to slowly turning up the intensity.
Photonz – Blood Is Life (Acid Avengers) Acid Avengers is a French label with some of the best artwork around – just check out their Bandcamp. This is a thumping acid track with a deep and eerie breakdown.
Boston 168 – J The Master (Attic Music) Boston 168 are an Italian acid techno duo making stuff that is pretty clearly influenced by old psychedelic and hard trance. Which is nice! Music is pretty cyclical, right, so it’s interesting that producers like Boston 168 are rediscovering the early trance sound, when trance was rhythmic, melodic and hypnotic, and not the over-the-top cheese it became by the late 1990’s. These guys seem to have become pretty big and are taking their live set everywhere at the moment. One apology: there is a slightly wonky mix into this track. I did fix it quickly, though. This is on the darker end of their sound. Heavy drums, acid lines, and dark vibes. Simple but effective.
Alien Rain – Alienated 2A (Alien Rain) Alien Rain is a long-running minimal acid project from Berlin-based Patrick Radomski (aka Milton Bradley). If you’ve heard any of my other modern acid mixes in recent years you will have heard some of his other tracks. They mostly follow a similar template – lots and lots of hypnotic repetition, but I think they work nicely as a bridging device.
Dax J – Zion (Monnom Black) Dax J is definitely the biggest name on this mix. He’s an English techno dj and producer who is based in Berlin and who has become a key figure in the revival of proper banging techno. Fuck loopy minimalism! His label Monnom Black has been consistently putting out bangers, and on the dj front he just kills it. This is another great track of his, wiring multiple acid lines over a slamming percussive base.
Nite Fleit – Partly Sunny (Steel City Dance Discs) Nite Fleit is an Australian dj/producer who is based in London (I think?). I think her release on Steel City Dance Discs was probably one of my all-around favorite releases last year, featuring as it does three kicking electro tracks and one gorgeous breaks track. I featured ‘Little Friend’ from the same EP on Get It 002 and am planning to use the breaks tune as the intro on a breakbeat-oriented Get It mix, maybe 4 or 5. This is a great track to transition into the electro section of the mix – jagged robotic beats and a steadily filthier acid riff. Massive tune.
Nonentity – Granite City Acid (Source Material) Here’s some seriously pumping acid electro from a new label from Aberdeen (hence the Granite City name). Back in the Rampage days I even played in Aberdeen once and then went back to an afterparty in a council flat where I tried (and failed!) to drink Buckfast on an empty stomach. Interesting times.
No Moon – Acid IX (Mechatronica) No Moon is an electro artist from Manchester and this was his first release on Mechatronica, which is a label that my friend Mejle runs with some friends. This is probably my favorite release on Mechatronica and I also used the title track Sirens on Get It 002. This is quite a spaced-out electro track, a little step down in intensity from the previous track, but that was deliberate, as I thought it would make sense to take the pressure off for a moment or two.
Locked Club – Electro Raw (Private Persons) Again, this is another release where I used a different track on Get It 002, and in this case this is from Locked Club, who are described in their promo material as ‘Russian electro punk’, which sounds good enough to me! This is another step up in intensity after the lull of the previous track, and a fitting end to the electro section.
Andreas Gehm – Two Times More (I Love Acid) OK, so back to techno. This is a straightforward acid banger from Andreas Gehm, the Cologne producer who sadly passed away several years ago. It’s well worth checking out his back catalogue, as he produced a lot of great dance tracks, covering everything from funky electro to jacking Chicago-style house to smooth Detroit-style techno and on to heads-down acid mayhem like this. A great starting point would be his posthumously released album on Solar One, The Worst of Gehm.
Collin Strange – Private Thoughts (Long Island Electrical Systems) Another release that I don’t know too much about – this is from New York’s Collin Strange and it’s an acid banger, no more no less. L.I.E.S. has become a pretty big label in the techno scene in recent years, with a very diverse output, and I have to be honest that I’m not huge on all of it. This release was great though – I also used the track Private Lives on last year’s Junior Techno Fruit Gang.
Regal – Acid Is The Answer (Involve) Regal is a Spanish producer from Madrid who is doing ravey acid techno, and this is one of his bigger tunes (in fact it just got repressed) – rolling basslines, stuttering vocal, and twisting acid – dude this hits me right in the metaphorical g-spot.
Dax J – Reign Of Terror (Electric Deluxe) Yeah, another Dax J tune. So? I was happy with the double-drop into this … go me!
I’ll talk about these two together, because … it’s easier. Anyways, the artist seems to be from somewhere in Northern England and he has done a whole bunch of these Extreme Acid releases, all of which involve heavy metal style artwork with dragons and stuff like that and hard distorted drums and really filthy acid. Absolute mayhem, but probably good only in small doses.
Regal – Still Raving (Involve) Regal’s most recent tune – I just threw this in because I like it. Actually I guess you could argue this mix was a little lazy in that I reused a number of the artists through the mix, but I guess it’s fine.
SMD – SMD5 (A) (SMD) The very last track is from the one and only DJ Slipmatt under his SMD (aka Slipmatt Dubs) alias – this one takes the famous acid riff from Josh Wink’s ‘Higher States of Consciousness’ and the melody from Underworld’s ‘Rez’ and layers it all over a thumping 4/4 kick. It’s always been one of my favorite acid riffs, so it’s nice to have it in a format like this, as the original version is a somewhat slow breaks track that doesn’t always work for me, given my long-term love for fast music. |
Emergence and phylogenetic analysis of amantadine-resistant influenza a subtype H3N2 viruses in Dublin, Ireland, over Six Seasons from 2003/2004 to 2008/2009.
To determine the prevalence of amantadine-resistant influenza A viruses and perform genetic analysis of isolates collected in Dublin during six seasons (2003/2004 to 2008/2009). Known mutations in the matrix 2 gene (M2) conferring amantadine resistance were screened and phylogenetic analysis of the haemagglutinin gene (HA) performed. Of 1,180 samples, 67 influenza A viruses were isolated, 88% of which were subtype H3N2. Amantadine resistance was only found in subtype H3N2 and increased dramatically from 7% in 2003/2004 to 90% in 2008/2009. A maximum likelihood tree of the HA gene of influenza A H3N2 isolates differentiated them into two distinct clades, clade N and clade S, where the majority of isolates were amantadine-resistant and amantadine-sensitive, respectively. The clades were distinguished by amino acid substitutions, S193F and D225N, which probably conferred a selective advantage for the spread of such viruses. Phylogenetic analysis showed some degree of antigenic drift when compared with the vaccine strain of the corresponding season. This study showed that circulation in Ireland of a distinct lineage, clade N, among H3N2 viruses favoured emergence of amantadine resistance. Furthermore, comparison of circulating Irish viruses and vaccine strains used in the northern hemisphere showed high similarity. |
Q:
cycling through a div
Take a look at this code: jsfiddle
Use the arrow keys to cycle through the div list. As you can see there is a gap after ''Mark'' and above ''Luca''. In other words, at some point none of the divs has a blue background. My question: How can I cycle through the divs without the gap?
(Focus the input first)
A:
With only a slight modification of your original code: http://jsfiddle.net/Wf2mR/
|
United States Court of Appeals
For the Eighth Circuit
___________________________
No. 14-3295
___________________________
Michael A. Hoelscher; Theresa Hoelscher; C. M. Hoelscher, by Next Friend,
Theresa Hoelscher, a Minor
Plaintiffs - Appellants
v.
Miller's First Insurance Co.; Miller’s Classified Insurance Co.; George S. Milnor,
President and CEO, Individually and Corporate Capacity; John M. Huff, Division
Director, Consumer Affairs, State of Missouri Department of Insurance, Financial
Institution and Professional Registration, Consumer Affairs Division; Matt Barton,
Director, Consumer Affairs, State of Missouri Department of Insurance, Financial
Institution and Professional Registration, Consumer Affairs Division; Carol A.
Harden, Consumer Service Coordinator, State of Missouri Department of
Insurance, Financial Institution and Professional Registration, Consumer Affairs
Division; Mike Haymart; G & C Adjusting Services LLC; Wayne Bernskoetter;
Wayne Bernskoetter Constructions; Victor R. Sapp; Sapp Home Pro, Inc.; Jude
Markway; Jude Markway Construction; Jeff Sapp
Defendants - Appellees
____________
Appeal from United States District Court
for the Western District of Missouri - Jefferson City
____________
Submitted: March 23, 2015
Filed: March 31, 2015
[Unpublished]
____________
Before LOKEN, BOWMAN, and KELLY, Circuit Judges.
_____________
PER CURIAM.
Michael Hoelscher, Theresa Hoelscher, and C. M. Hoelscher, by Next Friend,
Theresa Hoelscher (the Hoelschers) appeal the district court’s1 dismissal of their civil
complaint. Upon careful de novo review, we conclude that the Hoelschers’ claims
were time-barred, and that dismissal was therefore proper. See Fullington v. Pfizer,
Inc., 720 F.3d 739, 744, 747 (8th Cir. 2013) (standard of review; appellate court may
affirm dismissal on any basis supported by record); cf. Gross v. United States, 676
F.2d 295, 300 (8th Cir. 1982) (statute of limitations for continuing tort generally runs
from date of last tortious act). We further conclude that the district court did not
abuse its discretion in denying the Hoelschers post-judgment relief. See Miller v.
Baker Implement Co., 439 F.3d 407, 414 (8th Cir. 2006) (standard of review).
Accordingly, the judgment of the district court is affirmed. See 8th Cir. R.
47B.
______________________________
1
The Honorable Nanette K. Laughrey, United States District Judge for the
Western District of Missouri.
|
COKE BRINGS FIFA TROPHY TOUR TO PAKISTAN
Pakistan is a country where football is loved by millions. The love can be seen during the league matches, and the enthusiasm is at its peak during the World Cup. Even the famous city of Sialkot produces nearly half of the world’s soccer balls. Unfortunately Pakistan failed to qualify for the FIFA World Cup, but we still have great news for all the football fans. We will have a chance to get a glimpse of the glorious 2018 FIFA World Cup Trophy, which is being brought to our country by Coke.
COKE BRINGS FIFA TROPHY TOUR TO PAKISTAN – GET READY FOR CELEBRATION
Pakistan has suffered a lot of damage over the past many years. Last year Ronaldinho and friends came to Pakistan for an exhibition match, and we witnessed how excited Karachi and Lahore were for them. Coke’s step of bringing the trophy to Pakistan promotes a soft image of Pakistan. These activities are of utmost importance for a country, as they result in international recognition of the country.
Coke is a multinational brand and it has contributed a lot to promoting Pakistan at an international level. Pakistan is one of the 51 blessed countries that will see the FIFA World Cup Trophy on February 3rd 2018, and it is going to be a big day for football enthusiasts in Pakistan. Coke is a renowned brand and needs no introduction. It is also said that the most popular word in the world after HELLO is COCA COLA. It associates itself with various sports around the world. It is evident through social media that there is a cult following in Pakistan for different international football clubs.
There is no doubt cricket is the most cherished sport in Pakistan and hockey is the national sport, but football has a special place in the hearts of Pakistanis. The enthusiasm of Lyari's youth cannot be matched. There is no doubt football has a promising future in our country.
Lightning Data Upgrade - NEW
Lightning Events
In December 2014 we upgraded our lightning network to the latest in sensor technology as used by the world's leading meteorological agencies. This has resulted in changes and improvements to the lightning data you will now see. The main changes are:
Related Links
About Weatherzone Radar
Distance and latitude/longitude coordinates are displayed when you mouse over the map. The origin for distance measuring is indicated by a red dot and defaults to either your location, if specified and in range, or the location of the radar/the centre of the map. The origin may be changed by clicking elsewhere on the map.
The colours and symbols used on the radar and satellite maps are described on our legend page. View legend »
Radar Details
Dampier Radar has an unrestricted 360 degree view from its site 50 metres above sea level, and though no major permanent echoes appear, a small amount of low intensity clutter may be visible around parts of the coast and the islands surrounding Dampier and offshore to the west. Dampier Radar is susceptible to a small amount of false echoes on land during the dry months. These echoes are characterised by erratic movement and very low intensities. During the wet season between December and March, anomalous propagation may cause significant false echoes to appear for distances up to 60 kilometres along the coastline and seaward of it. During the wet season (primarily January to March), thunderstorm clouds and cyclonic formations are generally well defined for distances up to approximately 250 kilometres. Beyond that distance signal attenuation gives the appearance of less intensity than possibly exists. These formations are easily distinguished from false echoes by their regular rates of movement and direction. Thunderstorm activity can generally be viewed on a daily basis during the wet season; the generally preferred locations are in a trough line from the southwest to the southeast of Dampier/Karratha in and about the ranges. Heavy rain directly over the radar site can cause attenuation of all signals. Path attenuation can also occur when the radar beam passes through intense rainfall, with the returned signals from cells further along that path reduced.
The field of transgenics was initially developed to understand the action of a single gene in the context of the whole animal and the phenomena of gene activation, expression, and interaction. This technology has also been used to produce models for various diseases in humans and other animals and is amongst the most powerful tools available for the study of genetics, and the understanding of genetic mechanisms and function. From an economic perspective, the use of transgenic technology for the production of specific proteins or other substances of pharmaceutical interest (Gordon et al., 1987, Biotechnology 5: 1183-1187; Wilmut et al., 1990, Theriogenology 33: 113-123) offers significant advantages over more conventional methods of protein production by gene expression.
Heterologous nucleic acids have been engineered so that an expressed protein may be joined to a protein or peptide that will allow secretion of the transgenic expression product into milk or urine, from which the protein may then be recovered. These procedures have had limited success and may require lactating animals, with the attendant costs of maintaining individual animals or herds of large species, including cows, sheep, or goats.
The hen oviduct offers outstanding potential as a protein bioreactor because of the high levels of protein production, the promise of proper folding and post-translation modification of the target protein, the ease of product recovery, and the shorter developmental period of chickens compared to other potential animal species. The production of an avian egg begins with formation of a large yolk in the ovary of the hen. The unfertilized oocyte or ovum is positioned on top of the yolk sac. After ovulation, the ovum passes into the infundibulum of the oviduct where it is fertilized, if sperm are present, and then moves into the magnum of the oviduct, lined with tubular gland cells. These cells secrete the egg-white proteins, including ovalbumin, lysozyme, ovomucoid, conalbumin and ovomucin, into the lumen of the magnum where they are deposited onto the avian embryo and yolk.
2.1 Microinjection
Historically, transgenic animals have been produced almost exclusively by microinjection of the fertilized egg. Mammalian pronuclei from fertilized eggs are microinjected in vitro with foreign, i.e., xenogeneic or allogeneic, heterologous DNA or hybrid DNA molecules. The microinjected fertilized eggs are then transferred to the genital tract of a pseudopregnant female (e.g., Krimpenfort et al., in U.S. Pat. No. 5,175,384). However, the production of a transgenic avian using microinjection techniques is more difficult than the production of a transgenic mammal. In avians, the opaque yolk is positioned such that visualization of the pronucleus, or nucleus of a single-cell embryo, is impaired, thus preventing efficient injection of these structures with heterologous DNA. What is therefore needed is an efficient method of introducing a heterologous nucleic acid into a recipient avian embryonic cell.
Cytoplasmic DNA injection has previously been described for introduction of DNA directly into the germinal disk of a chick embryo by Sang and Perry, 1989, Mol. Reprod. Dev. 1: 98-106, Love et al., 1994, Biotechnology 12: 60-3, and Naito et al., 1994, Mol. Reprod. Dev. 37:167-171; incorporated herein by reference in their entireties. Sang and Perry described only episomal replication of the injected cloned DNA, while Love et al. suggested that the injected DNA becomes integrated into the cell's genome and Naito et al. showed no direct evidence of integration. In all these cases, the germinal disk was not visualized during microinjection, i.e., the DNA was injected “blind” into the germinal disk. Such prior efforts resulted in poor and unstable transgene integration. None of these methods were reported to result in expression of the transgene in eggs and the level of mosaicism in the one transgenic chicken reported to be obtained was one copy per 10 genome equivalents.
2.2 Retroviral Vectors
Other techniques have been used in efforts to create transgenic chickens expressing heterologous proteins in the oviduct. Previously, this has been attempted by microinjection of replication defective retroviral vectors near the blastoderm (PCT Publication WO 97/47739, entitled Vectors and Methods for Tissue Specific Synthesis of Protein in Eggs of Transgenic Hens, by MacArthur). Bosselman et al. in U.S. Pat. No. 5,162,215 also describes a method for introducing a replication-defective retroviral vector into a pluripotent stem cell of an unincubated chick embryo, and further describes chimeric chickens whose cells express a heterologous vector nucleic acid sequence. However, the percentage of G1 transgenic offspring (progeny from vector-positive male G0 birds) was low and varied between 1% and approximately 8%. Such retroviral vectors have other significant limitations, for example, only relatively small fragments of nucleic acid can be inserted into the vectors precluding, in most instances, the use of large portions of the regulatory regions and/or introns of a genomic locus which, as described herein, can be useful in obtaining significant levels of heterologous protein expression. Additionally, retroviral vectors are generally not appropriate for generating transgenics for the production of pharmaceuticals due to safety and regulatory issues.
2.3 Transfection of Male Germ Cells, Followed by Transfer to Recipient Testis
Other methods include in vitro stable transfection of male germ cells, followed by transfer to a recipient testis. PCT Publication WO 87/05325 discloses a method of transferring organic and/or inorganic material into sperm or egg cells by using liposomes. Bachiller et al. (1991, Mol. Reprod. Develop. 30: 194-200) used Lipofectin-based liposomes to transfer DNA into mouse sperm, and provided evidence that the liposome-transfected DNA was overwhelmingly contained within the sperm's nucleus, although no transgenic mice could be produced by this technique. Nakanishi & Iritani (1993, Mol. Reprod. Develop. 36: 258-261) used Lipofectin-based liposomes to associate heterologous DNA with chicken sperm, which were in turn used to artificially inseminate hens. There was no evidence of genomic integration of the heterologous DNA either in the DNA-liposome-treated sperm or in the resultant chicks.
Several methods exist for transferring DNA into sperm cells. For example, heterologous DNA may also be transferred into sperm cells by electroporation that creates temporary, short-lived pores in the cell membrane of living cells by exposing them to a sequence of brief electrical pulses of high field strength. The pores allow cells to take up heterologous material such as DNA, while only slightly compromising cell viability. Gagne et al. (1991, Mol. Reprod. Dev. 29: 6-15) disclosed the use of electroporation to introduce heterologous DNA into bovine sperm subsequently used to fertilize ova. However, there was no evidence of integration of the electroporated DNA either in the sperm nucleus or in the nucleus of the egg subsequent to fertilization by the sperm.
Another method for transferring DNA into sperm cells, initially developed for integrating heterologous DNA into yeasts and slime molds and later adapted to sperm, is restriction enzyme mediated integration (REMI) (Shemesh et al., PCT International Publication WO 99/42569). REMI utilizes a linear DNA derived from a plasmid DNA by cutting that plasmid with a restriction enzyme that generates single-stranded cohesive ends. The linear, cohesive-ended DNA together with the restriction enzyme used to produce the cohesive ends is then introduced into the target cells by electroporation or liposome transfection. The restriction enzyme is then thought to cut the genomic DNA at sites that enable the heterologous DNA to integrate via its matching cohesive ends (Schiestl and Petes, 1991, Proc. Natl. Acad. Sci. USA 88: 7585-7589).
It is advantageous, before the implantation of the transgenic germ cells into a testis of a recipient male, to depopulate the testis of untransfected male germ cells. Depopulation of the testis has commonly been achieved by exposing the whole animal to gamma irradiation or by localized irradiation of the testis. Gamma radiation-induced spermatogonial degeneration is probably related to the process of apoptosis (Hasegawa et al., 1998, Radiat. Res. 149:263-70). Alternatively, a composition containing an alkylating agent such as busulfan (MYLERAN™) can be used, as disclosed in Jiang F. X., 1998, Anat. Embryol. 198(1):53-61; Russell and Brinster, 1996, J. Androl. 17(6):615-27; Boujrad et al., Andrologia 27(4), 223-28 (1995); Linder et al., 1992, Reprod. Toxicol. 6(6):491-505; Kasuga and Takahashi, 1986, Endocrinol. Jpn 33(1):105-15. These methods likewise have not resulted in efficient transgenesis or heterologous protein production in avian eggs.
2.5 Nuclear Transfer
Nuclear transfer from cultured cell populations provides an alternative method of genetic modification, whereby donor cells may be sexed, optionally genetically modified, and then selected in culture before their use. The resultant transgenic animal originates from a single transgenic nucleus and mosaics are avoided. The genetic modification is easily transmitted to the offspring. Nuclear transfer from cultured somatic cells also provides a route for directed genetic manipulation of animal species, including the addition or “knock-in” of genes, and the removal or inactivation or “knock-out” of genes or their associated control sequences (Polejaeva et al., 2000, Theriogenology, 53: 117-26). Gene targeting techniques also promise the generation of transgenic animals in which specific genes coding for endogenous proteins have been replaced by exogenous genes such as those coding for human proteins.
The nuclei of donor cells are transferred to oocytes or zygotes and, once activated, result in a reconstructed embryo. After enucleation and introduction of donor genetic material, the reconstructed embryo is cultured to the morula or blastocyst stage, and transferred to a recipient animal, either in vitro or in vivo (Eyestone and Campbell, 1999, J. Reprod Fertil Suppl. 54:489-97). Double nuclear transfer has also been reported in which an activated, previously transferred nucleus is removed from the host unfertilized egg and transferred again into an enucleated fertilized embryo.
The embryos are then transplanted into surrogate mothers and develop to term. In some mammalian species (mice, cattle and sheep) the reconstructed embryos can be grown in culture to the blastocyst stage before transfer to a recipient female. The total number of offspring produced from a single embryo, however, is limited by the number of available blastomeres (embryos at the 32-64 cell stage are the most widely used) and the efficiency of the nuclear transfer procedure. Cultured cells can also be frozen and stored indefinitely for future use.
Two types of recipient cells are commonly used in nuclear transfer procedures: oocytes arrested at the metaphase of the second meiotic division (MII), which have a metaphase plate with the chromosomes arranged on the meiotic spindle, and pronuclear zygotes. Enucleated two-cell stage blastomeres of mice have also been used as recipients. In agricultural mammals, however, development does not always occur when pronuclear zygotes are used, and, therefore, MII-arrested oocytes are the preferred recipient cells.
Although gene targeting techniques combined with nuclear transfer hold tremendous promise for nutritional and medical applications, current approaches suffer from several limitations, including long generation times between the founder animal and production transgenic herds, and extensive husbandry and veterinary costs. It is therefore desirable to use a system where cultured somatic cells for nuclear transfer are more efficiently employed.
What is needed, therefore, is an efficient method of generating transgenic avians that express a heterologous protein encoded by a transgene, particularly in the oviduct for deposition into egg whites. |
Low Back Pain.
This article provides an overview of evaluating and treating low back pain in the outpatient setting. As most cases of acute low back pain have a favorable prognosis, current guidelines on imaging studies recommend conservative treatment for 6 weeks prior to obtaining an MRI if no red flags are present. Of these red flags, a prior history of cancer is the strongest risk factor for a malignant etiology and requires urgent evaluation with MRI. Management of acute low back pain is mainly conservative with oral non-narcotic analgesics and mobilization as the initial recommendations. For patients with radiculopathy, epidural steroids may result in short-term pain relief, but long-term effects are still unclear. A systematic, evidence-based approach to the patient with low back pain is key to providing safe and cost-efficient care. |
Business Communications II FYBMS Question Bank 2019. Please note that these are a set of important questions; however, the whole syllabus needs to be done well. For any further clarifications, please feel free to contact Prof Vipin Saboo on 9820779873. Paper pattern: Q1. Objectives (15 marks) Q2. Full...
BMS.co.in is aimed at revolutionising Bachelors in Management Studies education, also known as BMS for students appearing for BMS exams across all states of India. We provide free study material, 100s of tutorials with worked examples, past papers, tips, tricks for BMS exams, we are creating a digital learning library.
Disclaimer: We are not affiliated with any university or government body in anyway. |
Calculate 0 + -2 + 3 - -4.
5
Calculate 2 + 2 + 1 - 3.
2
Evaluate (-1 - 1) + 2 + -2 + -2.
-4
What is the value of 0 + -6 - (0 - 0)?
-6
Calculate -3 - (0 + -3 + -5).
5
Evaluate -7 - (5 + -3 - (5 + 3)).
-1
What is -7 + 0 + -1 - (-18 - -14)?
-4
Calculate 10 + 5 + -11 + 3 + -8.
-1
Calculate -5 + (14 - (8 + -4)).
5
What is 23 - (2 + 5) - (17 + -11)?
10
Evaluate 5 + (-7 - (0 + (2 - 2))).
-2
What is the value of (-4 - -1 - -8) + (4 - 7)?
2
29 - 16 - (4 + 12)
-3
1 + (0 - 4) + 2 + 3
2
Evaluate -6 - (-2 - -7 - 1) - -2.
-8
-17 - -19 - ((-2 - 1) + 0)
5
3 + (7 - (4 - 0))
6
What is (-5 - -1) + (2 - -5) + 2?
5
Calculate 8 - (-6 - -11) - (7 - -2).
-6
1 + -2 - -6 - (1 - -3)
1
What is the value of 11 - 2 - (10 - -6)?
-7
15 + 0 + -17 + 0 + -6
-8
Evaluate 3 - (2 + -1) - (5 - 4).
1
What is 23 - (16 + -8) - 9?
6
What is 7 + -8 + -4 - -6?
1
(2 - -3) + (1 - (3 + -8))
11
What is 1 - ((-28 - -29) + -2 + -1)?
3
Evaluate 7 + -25 + 6 + 4.
-8
What is 0 - (4 + 6 + -3 - 3)?
-4
Calculate (-17 - -20) + (-1 - -1).
3
What is the value of -1 + -3 - (-9 - -8) - -3?
0
-5 + (-2 - (7 - 13) - 1)
-2
-1 + (-13 - -7) + 5 + 1
-1
Calculate (-12 + 15 - (3 + 4)) + 3.
-1
Calculate -10 + 11 + (2 - (2 - 3)).
4
-6 + (3 - -4) + 12 - 7
6
What is 0 + -5 - ((6 - 6) + 1)?
-6
What is the value of (2 - -2 - (4 + 3)) + -3?
-6
What is the value of 3 - (3 + (-4 - -3) - 3)?
4
Calculate 8 - (12 + (2 - -1)).
-7
What is 110 + -121 + 1 + 2?
-8
Calculate -2 + 12 - (16 - 11).
5
(-4 - -3) + 2 + (-1 - -1)
1
Evaluate -8 + 0 - (-42 - -38).
-4
What is the value of -4 + 6 + -4 - 2 - -2?
-2
-5 + 6 + -4 + -1
-4
What is the value of (-8 - -4) + 4 + -1?
-1
3 - (2 - (1 - -1))
3
Calculate -10 + 2 + (-4 - (7 - 15)).
-4
Calculate -7 + 19 - -1 - 7.
6
Evaluate -5 - (7 + (-6 - 2)).
-4
(0 - (1 - 1)) + 5
5
What is -2 + 0 + (1 - 0) + 0?
-1
What is (-3 - (-2 - -8)) + 5 + 6?
2
What is (1 - (0 - 0) - 3) + 7?
5
Calculate -7 + (-1 + -1 - -5).
-4
Calculate -5 - (-3 + 6 + -4 + -1).
-3
-7 + 3 + -4 + 3
-5
-5 - 9 - -9 - (-13 - 2)
10
What is the value of 0 + 1 + 1 + -3 - 3?
-4
What is 0 + -5 + (-7 - -18) + -11?
-5
Evaluate -1 + (-13 - -2) + 9.
-3
What is 0 + -1 + (25 - 25) + 4?
3
Calculate -12 + -2 + 4 + 6.
-4
What is 0 - (0 - ((2 - 1) + 1))?
2
Calculate (35 - 39) + 0 + 4 - 0.
0
What is (9 - (7 - 9)) + -21?
-10
Evaluate -6 + (5 - -5) + 1.
5
2 - (3 - (6 + -12 + 2))
-5
Evaluate -30 + 26 + 2 + -2.
-4
Evaluate (-13 - (-18 + -5)) + 0 + 1.
11
Evaluate -1 - 0 - (4 - 4 - -5).
-6
What is the value of -4 + 2 + 3 + -3 + -2?
-4
What is 3 + 3 - (-3 + 7)?
2
Evaluate -19 + (1 - -12) - -7.
1
Evaluate 3 + (-1 - 5) - 0.
-3
Calculate 1 - (-12 + 10 + 0 + 0).
3
Evaluate 1 + 1 + 3 + -4 + 4.
5
Calculate (-3 - (-2 - -6) - -12) + -5.
0
0 + -3 + (-3 - 0 - -2)
-4
Evaluate -7 - (11 - 15 - -4).
-7
What is -5 - -6 - ((-3 - -5) + -2)?
1
16 - 16 - (3 - 7)
4
What is the value of -8 - -9 - (0 - 4)?
5
Evaluate 1 + 1 + 5 - 6.
1
29 - 46 - (-16 + 4)
-5
What is the value of -57 - -69 - (14 + 2)?
-4
What is (-2 + -3 - -5) + 7?
7
What is the value of -13 - -11 - -5 - 11?
-8
Calculate -7 - (-6 - (-3 - -2)).
-2
Calculate 1 + -9 - (171 - 184).
5
Evaluate 1 - 1 - (-1 + -3).
4
Calculate -2 - (-3 - (-7 - -11 - 6)).
-1
(0 - (8 + -17)) + -9
0
Evaluate -11 + 15 + 1 + -4.
1
1 + -2 + -4 - (66 + -69)
-2
Calculate (2 - 5) + (7 - 1).
3
Calculate (-6 - -5) + 2 - (-4 - -1).
4
Evaluate (1 - 0) + 1 + (23 - 17).
8
Calculate -8 + 20 + -8 + 6.
10
What is the value of -9 + -2 - (-28 + 10)?
7
Calculate (-3 - (-3 + 6 - 1)) + 5.
0
What is -9 - -12 - (0 - -1 - 1)?
3
What is the value of 3 - (-3 - (-3 + (-1 - 1)))?
1
What is the value of 4 - (-4 + 3 + -3)?
8
What is (1 - -8) + (-96 - -84)?
-3
8 + (-29 - -8) + 7
-6
-3 - (4 - 7 - 5)
5
Evaluate -1 + -3 + -3 + (2 - 0).
-5
-4 - (-12 - (2 - 7))
3
Evaluate 0 - (2 + -7 + (56 - 46)).
-5
What is the value of -5 + (0 - (-8 - -5))?
-2
What is 6 - (9 + 0) - -11?
8
5 - (6 - (-2 + 4 - 5))
-4
What is 8 - (4 - (-5 - -10) - -1)?
8
Calculate -12 + 15 - (2 + 6) - -5.
0
Evaluate -3 + 3 - -9 - 3.
6
Calculate 19 - (26 - 12) - (-1 - -2).
4
Evaluate -3 + 9 + (-3 - 3).
0
Calculate -15 + 29 + 0 + -24.
-10
Calculate -9 - (-2 - -6 - 9).
-4
What is 8 + (-2 - -2) + (-4 - 0)?
4
What is -1 - -4 - (102 + -91)?
-8
Evaluate -1 - (8 - 15 - (-1 - 2)).
3
(-4 + 4 - -5) + -5
0
Evaluate (-4 - -10 - -1) + -6.
1
Evaluate -9 + 3 - (-8 - -12 - 12).
2
Calculate 1 + (5 - (1 - -6) - 6).
-7
Evaluate -2 - ((2 - 5) + -3).
4
Calculate 3 - (-7 + (8 - 6) - -15).
-7
What is 7 - (5 + 10) - -6?
-2
What is the value of 1 - (-1 - (-3 + -3))?
-4
What is the value of -1 + -1 + 1 + (1 - 0)?
0
What is 173 + -187 + (-1 - -21)?
6
5 + -1 + 6 + -4
6
(1 - 1 - -1) + -22 + 23
2
What is the value of -11 + -9 + 25 - 9?
-4
(11 - (3 - 3)) + -16
-5
What is the value of (40 - 41) + (3 - 1)?
1
What is -1 + (7 - 3 - 4)?
-1
What is -7 + (-1 - (-7 + -5))?
4
What is the value of -4 + 15 + -6 - (5 + -2)?
2
Evaluate -1 + -12 - (-316 - -302).
1
Evaluate 4 + (9 - 6) - 14.
-7
-8 + 2 + (-2 - (-5 + 0))
-3
What is the value of 1 + -1 + 2 - (-76 - -75)?
3
What is the value of 6 + -2 + -1 + -9 - -3?
-3
Evaluate (5 - 9) + 1 + 1 + -2.
-4
What is 3 + 0 - (-41 + 43)?
1
What is the value of 5 - (0 + 8 + (-3 - -1))?
-1
What is the value of (-5 + 2 + -4 - -3) + 2?
-2
Calculate -3 + -1 - 0 - 3 - -2.
-5
Calculate (-2 + 1 - (2 + -5)) + 0.
2
1 + 1 + -1 + -1 + 5
5
What is the value of -4 + (5 - (-2 + -4 - 1))?
8
5 - ((2 - 8) + 9)
2
Evaluate 4 + -4 - -9 - (0 + 0).
9
Evaluate (4 - 5) + -3 + 2.
-2
What is (6 - 0) + -29 + 23?
0
What is the value of -8 + 1 + 7 + -5 - -3?
-2
Calculate -4 + (-1 - (3 + -6)).
-2
Evaluate 10 + -6 + 3 + -4 + 0.
3
-3 + -3 + 5 + 0 + -1
-2
Evaluate (252 - 240) + (2 - 1 - 7).
6
Evaluate (-1 - (0 - -2 - 4)) + -3.
-2
Calculate -1 - -2 - (0 + (7 - 2)).
-4
Calculate 0 + (4 - -6 - 5).
5
What is (-4 + 6 - 8) + 6?
0
What is 2 + (-3 - 0) + 1?
0
What is -1 + (3 - 0) + -9 + 2?
-5
Evaluate -9 + (14 + -5 - 7).
-7
2 + 6 + -19 + 7
-4
Calculate 1 + (-4 - 1) + (0 - -1).
-3
What is (-1 - (-3 - -3)) + -5 + 1?
-5
-2 - (2 + -5 + 6)
-5
What is -2 - -9 - 4 - -4?
7
What is the value of 6 + (-6 - 2 - -8)?
6
What is the value of -30 + 25 - (-1 + -2)?
-2
What is the value of 7 - ((-3 - -1) + 2 - -2)?
5
Evaluate 0 - (2 - (-1 - 2)).
-5
What is 5 - 5 - (-5 - (-2 + 1))?
4
(-8 - (2 + -7)) + 11 - 11
-3
Evaluate 5 + (5 - 11) - (-4 - 0).
3
What is -3 + (1 - 0 - (2 - 6))?
2
What is the value of (-3 - -2) + -1 - (-1 - -2)?
-3
Evaluate (-6 - -6) + -8 - -6.
-2
Evaluate 10 + -3 + -6 + 2 + 2.
5
What is the value of 5 - (-4 - (-9 + 4)) - 8?
-4
Evaluate -5 - (-8 - (-5 - 4)).
-6
What is the value of (-3 + 0 - (6 + -8)) + -3?
-4
What is 9 - (-1 + (6 - (3 - 2)))?
5
Evaluate 1 + -16 + 8 + -23 + 23.
-7
Calculate (-2 - -3) + (-5 + 2 - -6).
4
Evaluate -5 + 2 + 2 + 3 - 1.
1
What is -5 + 8 + -7 + 1?
-3
-2 + 0 + -6 + (-23 - -40)
9
Calculate -1 + (1 - -2 - 0) + 2.
4
Evaluate -5 - -2 - ((2 - 6) + 2).
-1
What is the value of 0 + (3 - (4 - 1)) + -5?
-5
7 + 1 - (5 + (1 - 0))
2
Evaluate 0 - (2 + 1 + 0) - 2.
-5
Calculate -9 - (-14 - (-10 - -8)).
3
Calculate -5 - -8 - (2 - (-5 - -4)).
0
What is the value of 4 + -1 - (12 - 14)?
5
What is -4 - (3 - 6) - -4?
3
Calculate -5 - (3 + 1 - 4).
-5
-6 - (-11 - -15) - -3
-7
What is 1 - (-1 + 6) - 1?
-5
What is 0 + -3 + -2 + 6?
1
Calculate (-15 - -1) + 10 - 3.
-7
What is the value of -1 - (4 + -1) - (-17 - -10)?
3
Evaluate -2 - (0 + (-4 - -4)) - -8.
6
What is the value of (7 - 2 - 5 - -4) + -1?
3
Evaluate (0 - (-2 + 2)) + -33 + 31.
-2
What is -1 + -3 + 3 + (6 - 6)?
-1
Evaluate -5 - (1 + -2 + 1).
-5
4 + 3 + 9 + -16
0
Evaluate 1 + -5 - (4 - (0 + 2)).
-6
What is the value of (-2 - 3) + 1 + 4 + -4?
-4
3 + -6 + (-1 + 6 - 0)
2
What is the value of -2 - (7 + -22 + 7)?
6
(-3 - (-2 - -3)) + 3
-1
Calculate 1 + 0 + -6 + 5.
0
Evaluate 1 - (-4 - (-1 - (-1 + 3))).
2
Calculate -4 + 0 + 7 - 3.
0
What is the value of -3 - (-8 - -2) - 2 - -1?
2
(-15 - -23 - 11) + -1 + 7
3
Calculate (-7 - 2) + (11 - 11).
-9
What is -6 + 0 - (-37 - -28)?
3
Calculate 10 + -3 + -13 + -2.
-8
Calculate 0 + (-6 - (6 + -18)) + -1.
5
What is the value of -6 - (-11 - (5 + (4 - 9)))?
5
-1 - (13 - 5 - 3)
-6
What is the value of -1 + (10 - (13 + 1))?
-5
Calculate -3 + (8 - (9 - 6)) + -4.
-2
What is (1 - 6 - -3) + 0?
-2
What is (-12 - (-5 + -2)) + -8 + 3?
-10
What is the value of -4 - (12 + -7 + -8)?
-1
Wha |
In view of the news today (Straits Times, Home section, first page), you know by now that SBS Transit, the bus company has blocked apps such as ShowNearby from featuring bus timings.
We at ShowNearby have initiated conversation with the bus company. However, they stand by their decision not to allow ShowNearby to feature their bus timings.
We believe that with the provision of bus timings on multiple platforms and various applications, commuters in Singapore are better served as a result of wider reach and convenient access to the information.
We also wish to state that we are open to collaborating with SBS Transit to bring the timings to iPhone, Android and BlackBerry users.
We have almost half a million users using our free-to-download app and see it as a genuine disservice to all of them when access to bus timing information is blocked.
This is an unfortunate setback for us as a provider of information to people in Singapore. We are always working in collaboration with state and public service organisations to aggregate relevant and value-added information to any person in Singapore who downloads the app.
We are most supportive of the free flow of information in the interest and benefit of the public. There are similarly spirited initiatives spearheaded by the government in the Data.Gov.sg project. There have been movements towards and arguments for open information too, and we wish to share them here and here. According to the latter source, the opening of data will "drive the creation of innovative business and services that deliver social and commercial value".
We will continue to work hard towards an outcome that is favourable to all our users, whether they are on iPhone, Android phones or BlackBerry. We hope to continue to provide value-added services to people in Singapore. |
Bacteriology of deep neck abscesses: a retrospective review of 96 consecutive cases.
This study aimed to review the microbiology of deep neck abscesses and identify the factors that influence their occurrence. A retrospective chart review was done of patients diagnosed with deep neck abscesses at the Department of Otorhinolaryngology, Tan Tock Seng Hospital, Singapore between 2004 and 2009. The data of 131 deep neck abscess patients were reviewed, and those with positive pus culture were included in the study. Logistic regression was applied to analyse and compare the incidence of common organisms in various conditions (age, gender, aetiology and effects of diabetes mellitus). Of the 96 patients recruited, 18 had polymicrobial cultures. The leading pathogens cultured were Klebsiella (K.) pneumoniae (27.1 percent), Streptococcus milleri group (SMG) bacteria (21.9 percent) and anaerobic bacteria-not otherwise specified (NOS) (20.8 percent). K. pneumoniae (50.0 percent) was over-represented in the diabetic group. SMG bacteria (68.8 percent) and anaerobic bacteria-NOS (43.8 percent) were most commonly isolated in patients with odontogenic infections. K. pneumoniae was found more commonly among female patients (39.3 percent). The distribution of the three leading pathogens between patients aged below 50 years and those 50 years and above was comparable. K. pneumoniae was the commonest organism cultured in parapharyngeal space abscesses, while the submandibular space and parotid space most commonly isolated SMG bacteria and Staphylococcus aureus, respectively. Broad-spectrum antibiotics are recommended for treating deep neck abscesses. Empirical antibiotic coverage against K. pneumoniae infection in diabetic patients, and SMG and anaerobic bacteria in patients with an odontogenic infection, is advocated. Routine antibiotic coverage against Gram-negative bacteria is not paramount. |
1. Field of the Invention
The present invention relates generally to the field of emission control equipment for boilers, heaters, kilns, or other flue gas-, or combustion gas-, generating devices (e.g., those located at power plants, processing plants, etc.) and, in particular to a new and useful method and apparatus for preventing the plugging, blockage and/or contamination of an SCR catalyst. In another embodiment, the method and apparatus of the present invention is designed to protect an SCR catalyst from plugging and/or blockage from large particle ash that may be generated during combustion.
2. Description of the Related Art
NOx refers to the cumulative emissions of nitric oxide (NO), nitrogen dioxide (NO2) and trace quantities of other nitrogen oxide species generated during combustion. Combustion of any fossil fuel generates some level of NOx due to high temperatures and the availability of oxygen and nitrogen from both the air and fuel. NOx emissions may be controlled using low NOx combustion technology and post-combustion techniques. One such post-combustion technique is selective catalytic reduction using an apparatus generally referred to as a selective catalytic reactor or simply as an SCR.
SCR technology is used worldwide to control NOx emissions from combustion sources. This technology has been used widely in Japan for NOx control from utility boilers since the late 1970's, in Germany since the late 1980's, and in the US since the 1990's. The function of the SCR system is to react NOx with ammonia (NH3) and oxygen to form molecular nitrogen and water. Industrial scale SCRs have been designed to operate principally in the temperature range of 500° F. to 900° F., but most often in the range of 550° F. to 750° F. SCRs are typically designed to meet a specified NOx reduction efficiency at a maximum allowable ammonia slip. Ammonia slip is the concentration, expressed in parts per million by volume, of unreacted ammonia exiting the SCR.
For additional details concerning NOx removal technologies used in the industrial and power generation industries, the reader is referred to Steam: its generation and use, 41st Edition, Kitto and Stultz, Eds., Copyright© 2005, The Babcock & Wilcox Company, Barberton, Ohio, U.S.A., particularly Chapter 34—Nitrogen Oxides Control, the text of which is hereby incorporated by reference as though fully set forth herein.
Regulations (March 2005) issued by the EPA promise to increase the portion of utility boilers equipped with SCRs. SCRs are generally designed for a maximum efficiency of about 90%. This limit is not set by any theoretical limits on the capability of SCRs to achieve higher levels of NOx destruction. Rather, it is a practical limit set to prevent excessive levels of ammonia slip. This problem is explained as follows.
In an SCR, ammonia reacts with NOx according to the following stoichiometric reactions (a) to (c):
4NO + 4NH3 + O2 → 4N2 + 6H2O (a)
12NO2 + 12NH3 → 12N2 + 18H2O + 3O2 (b)
2NO2 + 4NH3 + O2 → 3N2 + 6H2O (c).
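As a quick bookkeeping aid (illustrative only; the species compositions and coefficients below are hard-coded from reactions (a) to (c) exactly as written above), the following short Python sketch checks that each reaction is atom-balanced:

from collections import Counter

SPECIES = {
    "NO":  {"N": 1, "O": 1},
    "NO2": {"N": 1, "O": 2},
    "NH3": {"N": 1, "H": 3},
    "O2":  {"O": 2},
    "N2":  {"N": 2},
    "H2O": {"H": 2, "O": 1},
}

def atom_count(side):
    # Sum elemental atoms over a list of (coefficient, species) pairs.
    total = Counter()
    for coeff, species in side:
        for element, n in SPECIES[species].items():
            total[element] += coeff * n
    return total

REACTIONS = {
    "a": ([(4, "NO"), (4, "NH3"), (1, "O2")], [(4, "N2"), (6, "H2O")]),
    "b": ([(12, "NO2"), (12, "NH3")], [(12, "N2"), (18, "H2O"), (3, "O2")]),
    "c": ([(2, "NO2"), (4, "NH3"), (1, "O2")], [(3, "N2"), (6, "H2O")]),
}

for label, (reactants, products) in REACTIONS.items():
    assert atom_count(reactants) == atom_count(products), f"reaction ({label}) unbalanced"
    print(f"reaction ({label}) balances:", dict(atom_count(reactants)))

Running the script confirms that nitrogen, hydrogen and oxygen balance on both sides of each of the three reactions.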
The above reactions are catalyzed using a suitable catalyst. Suitable catalysts are discussed in, for example, U.S. Pat. Nos. 5,540,897; 5,567,394; and 5,585,081 to Chu et al., all of which are hereby incorporated by reference as though fully set forth herein. Catalyst formulations generally fall into one of three categories: base metal, zeolite and precious metal.
Base metal catalysts use titanium oxide with small amounts of vanadium, molybdenum, tungsten or a combination of several other active chemical agents. The base metal catalysts are selective and operate in the specified temperature range. The major drawback of the base metal catalyst is its potential to oxidize SO2 to SO3; the degree of oxidation varies based on catalyst chemical formulation. The quantities of SO3 which are formed can react with the ammonia carryover to form various ammonium-sulfate salts.
Zeolite catalysts are aluminosilicate materials which function similarly to base metal catalysts. One potential advantage of zeolite catalysts is their higher operating temperature of about 970° F. (521° C.). These catalysts can also oxidize SO2 to SO3 and must be carefully matched to the flue gas conditions.
Precious metal catalysts are generally manufactured from platinum and rhodium. Precious metal catalysts also require careful consideration of flue gas constituents and operating temperatures. While effective in reducing NOx, these catalysts can also act as oxidizing catalysts, converting CO to CO2 under proper temperature conditions. However, SO2 oxidation to SO3 and high material costs often make precious metal catalysts less attractive.
As is known to those of skill in the art, various SCR catalysts undergo plugging and/or poisoning when they become contaminated by various compounds including, but not limited to, ash from the combustion process (in particular coal ash). One common source of plugging in SCRs is large particle ash (typically defined as any ash that has a particle size large enough to lodge in the catalyst passages, pores, or honeycomb structure present in the SCR catalyst blocks).
Given the above, a need exists for a system and method that can prevent the plugging and/or poisoning of a catalyst in an SCR with fly ash, particularly large particle ash. |
On-column refolding and purification of recombinant human interleukin-1 receptor antagonist (rHuIL-1ra) expressed as inclusion body in Escherichia coli.
Recombinant human interleukin-1 receptor antagonist (rHuIL-1ra) was produced in E. coli as an inclusion body. rHuIL-1ra was purified to over 98% purity by anion exchange chromatography after on-column refolding. The optimized processes produced more than 2 g pure refolded rHuIL-1ra per 1 l culture, corresponding to a 44% recovery, without an intermediate dialysis step. Refolded rHuIL-1ra had full biological activity with the MTT assay. An intramolecular disulfide linkage in the oxidized recombinant protein was suggested by data from HPLC and non-reducing SDS-PAGE.
Weight loss interventions and progression of diabetic kidney disease.
Progressive renal impairment (diabetic kidney disease (DKD)) occurs in upwards of 40 % of patients with obesity and type 2 diabetes mellitus (T2DM) and is a cause of significant morbidity and mortality. Means of attenuating the progression of DKD focus on amelioration of risk factors. Visceral obesity is implicated as a causative agent in impaired metabolic and cardiovascular control in T2DM, and various approaches primarily targeting weight have been examined for their impact on markers of renal injury and dysfunction in DKD. The current report summarises the evidence base for the impact of surgical, lifestyle and pharmacological approaches to weight loss on renal end points in DKD. The potential for a threshold of weight loss more readily achievable by surgical intervention to be a prerequisite for renal improvement is highlighted. Comparing efficacious non-surgical weight loss strategies with surgical strategies in appropriately powered and controlled prospective studies is a priority for the field. |
//
// INSearchForNotebookItemsIntent_Deprecated.h
// Intents
//
// Created by Kyle Zhao on 9/18/17.
// Copyright © 2017 Apple. All rights reserved.
//
#import <Intents/INSearchForNotebookItemsIntent.h>
NS_ASSUME_NONNULL_BEGIN
@interface INSearchForNotebookItemsIntent (Deprecated)
- (instancetype)initWithTitle:(nullable INSpeakableString *)title
content:(nullable NSString *)content
itemType:(INNotebookItemType)itemType
status:(INTaskStatus)status
location:(nullable CLPlacemark *)location
locationSearchType:(INLocationSearchType)locationSearchType
dateTime:(nullable INDateComponentsRange *)dateTime
dateSearchType:(INDateSearchType)dateSearchType API_DEPRECATED("Use the designated initializer instead", ios(11.0, 11.2), watchos(4.0, 4.2));
@end
NS_ASSUME_NONNULL_END
|
Memory disturbances in migraine with and without aura: a strategy problem?
Cognitive defects in migraine have been reported by several authors. These findings however, are controversial. In this study we carried out an investigation on 14 patients with migraine with aura and 16 with migraine without aura according to the International Headache Society criteria. They were submitted to a comprehensive battery of neuropsychological tests. The patients were compared with a control group not significantly different as to age, sex and education. Migraine subjects showed impaired neuropsychological performances only on some cognitive tests. Both groups of patients did worse than the control group on visuo-spatial memory tasks, while only migraineurs without aura showed significantly impaired verbal memory performances. The memory defects, both on visuo-spatial and on verbal cognitive tasks, could depend on an impaired recall mechanism. These memory difficulties seem related to strategically and organizationally defective aspects of learning. |
Sunday, April 22, 2012
“Some of them were saying they didn’t know they were prostitutes,” explained Congressman Peter King, chairman of the House Homeland Security Committee. “Some are saying they were women at the bar.”
Amazing to hear government agents channeling Dudley Moore in Arthur: “You’re a hooker? I thought I was doing so well.” It turns out U.S. Secret Service agents are the only men who can walk into a Colombian nightclub and not spot the professionals. Are they really the guys you want protecting the president?
[Clinical case of the month. A recurrent chylothorax in Bourneville tuberous sclerosis].
Bourneville's disease, first described in 1862, is a phacomatosis that is either autosomal dominant or sporadic. Its typical clinical signs include mental retardation, epilepsy and cutaneous adenomas. The pulmonary form is rare, less than 1%, and is secondary to occlusion of the bronchus, vascular and lymphatics by immature smooth muscle cells. Chylothorax may appear in more than 50% of all cases. No guidelines currently exist for treatment of recurrent chylothorax. However, several possibilities are described in the literature. |
On Thursday, Linden Lab had a large in-world meeting in their private Second Life regions. Linden staffers that we’ve not seen in-world for a long time (at least not using their Linden accounts) dusted off their long-unused Linden accounts and logged in for the meeting. Since then, I’ve been getting queries and questions from Second Life customers who are a bit concerned about this sort of gathering.
Right or wrong, these rare “all-hands” (or so they seem) meetings are considered to be portents or harbingers of bad news by Second Life customers. People tend to associate them with the announcement of nasty things like layoffs or major strategic direction changes.
Whether it is truly justified or not, these sudden, sporadic major meets make people concerned and a bit gun-shy.
I reached out to Peter Gray, Linden Lab’s PR spokesperson to see if there was any public reassurances that he could give us.
“We don’t discuss internal meetings publicly,” he told me.
Okay, so not exactly an unexpected response, considering the Lab’s communications policies.
Realistically, there’s probably nothing to actually worry about, however it wouldn’t surprise me if any of the Lab’s upcoming actions are examined with a more critical eye than is usual by Second Life users and customers over the next little while in the wake of it.
Tags: Linden Lab / Linden Research Inc, Opinion, Peter Gray / Pete Linden, Second Life, Virtual Environments and Virtual Worlds |
Q:
Delphi XE2 TPointerList usage
I have a following problem trying to compile some components in XE2. These components were not prepared for XE2, but I'm trying to compile them anyway.
Within a component it is declared like
FList : TList;
when used it is for example like
SomeVariable := Integer(FList.List^[i]);
It produces "Pointer type required" compile error.
I can be corrected it like this
SomeVariable := Integer(FList.List[i]);
but god knows how much time would I need to fix all occurencies of error.
Is there some compiler directive, or setting that can handle this. I've tried {$X} and {$T} without effect.
In XE2 Delphi TPointerList (TList.List property) is declared as dynamic array
type TPointerList = array of Pointer;
If anyone can help?
A:
a) Integer(FList[i]) would also work.
b) There is no such setting.
c) Maybe you can Search&Replace .List^[ -> [ ?
|
#define GLM_ENABLE_EXPERIMENTAL
#include <glm/gtx/compatibility.hpp>
int main()
{
int Error(0);
Error += glm::isfinite(1.0f) ? 0 : 1;
Error += glm::isfinite(1.0) ? 0 : 1;
Error += glm::isfinite(-1.0f) ? 0 : 1;
Error += glm::isfinite(-1.0) ? 0 : 1;
Error += glm::all(glm::isfinite(glm::vec4(1.0f))) ? 0 : 1;
Error += glm::all(glm::isfinite(glm::dvec4(1.0))) ? 0 : 1;
Error += glm::all(glm::isfinite(glm::vec4(-1.0f))) ? 0 : 1;
Error += glm::all(glm::isfinite(glm::dvec4(-1.0))) ? 0 : 1;
return Error;
}
|
Coworker tell boss you're on your phone at your desk shit on her desk
|
Bibhu Bhattacharya
Bibhu Bhattacharya (17 September 1944 – 22 September 2011) was a Bengali Indian male actor of TV and films. He was born in Jharia, Bihar, British India (now Jharia, Jharkhand, India). He gained prominence and became a household name only in 1998 as Jatayu (Lalmohan Ganguly) in Sandip Ray’s Feluda, based on stories by his late father, maestro Satyajit Ray. In 2011, he died of a heart attack in Kolkata, West Bengal.
Acting career
Bibhu Bhattacharya never attended any school. He was acting in studios, when other boys of his age were studying. At the age of four-and-a-half he started acting in a film called Maryada, starring Uttam Kumar.
He was called Master Bibhu, one of the most prominent child actors in Bengali films and very popular with actors like Jahar Ganguly and Chhabi Biswas. He played the title role in the movie Prahlad (1952) and did movies like Bindur Chhele (1952), Dhruba (1953), Rani Rashmoni (1955) and Dui Bon (1955). After he grew up and became a teenager, he could no longer be a child actor and offers began to dry up. His last film as a child actor was Sagar Sangame (1959). Then, he didn’t get another film for almost 38 years. In between, he kept himself busy with theater and TV serials.
Feluda series
It was only in 1998 that he got the dream role of Jatayu. His first film as Jatayu was Jahangirer Shornomudra (1999) and he also acted as Jatayu in Bombaiyer Bombete (2003), Kailashey Kelenkari (2007), Tintorettor Jishu (2007), Gorosthaney Sabdhan (2010) and Royal Bengal Rahasya (2011). He was legendary in his performance and managed to capture in essence the spirit of the character so well that even Sandip Ray, the director, found him perfect as a replacement. He had completed dubbing for the Feluda film Royal Bengal Rahasya the day before he died. Initially, it was difficult to replace the legendary actor Santosh Dutta as Jatayu, but he became popular with time.
Filmography
Macho Mustanaa (2012)
Bhooter Bhabishyat (2012)
Aalo Chhaya (2012)
Royal Bengal Rahasya (Feluda theatrical film) (2011) as Jatayu (Lalmohan Ganguly)
Tenida (2011)
Gorosthaney Sabdhan (Feluda theatrical film) (2010) as Jatayu (Lalmohan Ganguly)
Bela Sheshe (2009)
Mallick Bari (2009)
Swartha (2009)
Tintorettor Jishu (Feluda theatrical film) (2008) as Jatayu (Lalmohan Ganguly)
Abelay Garam Bhaat (2008)
Kailashey Kelenkari (Feluda theatrical film) (2007) as Jatayu (Lalmohan Ganguly)
Bombaiyer Bombete (Feluda theatrical film) (2003) as Jatayu (Lalmohan Ganguly)
Satyajiter Priyo Golpo (Dr Munshir Diary For ETV Bangla) (Feluda TV film) (2000) as Jatayu (Lalmohan Ganguly)
Satyajiter Goppo (Jahangirer Swarnamudra, Ghurghutiyar Ghotona, Golapi Mukto Rahashya, Ambar Sen Antardhan Rahashya for DD Bangla) (Feluda TV films) (1999) as Jatayu (Lalmohan Ganguly)
Sagar Sangame (1959)
Swapnapuri (1959)
Thakur Haridas (1959)
Purir Mandir (1958)
Sree Sree Maa (1958)
Harishchandra (1957)
Janmatithi (1957)
Khela Bhangar Khela (1957)
Omkarer Joyjatra (1957)
Mamlar Phal (1956)
Putrabadhu (1956)
Bir Hambir (1955)
Dui Bon (1955)
Jharer Pare (1955)
Prashna (1955)
Rani Rasmani (1955)
Srikrishna Sudama (1955)
Agnipariksha (1954)
Bakul (1954)
Ladies Seat (1954)
Nababidhan (1954)
Prafulla (1954)
Dhruba (1953)
Sitar Patal Prabesh (1953)
Aandhi (1952)
Bindur Chhele (1952)
Bishwamitra (1952)
Nildarpan (1952)
Pallisamaj (1952)
Prahlad (1952)
Sahasa (1952)
Bhakta Raghunath (1951)
Kulhara (1951)
Pratyabartan (1951)
Maryada (1950)
Feluda Series TV films
Jahangirer Swarnamudra (1998)
Ambar Sen Antordhan Rahashya (1998)
Golapi Mukta Rahashaya (1998)
Dr. Munshir Diary (2000)
See also
Satyajit Ray
Literary works of Satyajit Ray
Sandip Ray
Santosh Dutta
Feluda
Jatayu (Lalmohan Ganguly)
Feluda in film
Professor Shonku
Tarini khuro
Tarini Khuro in other media
Culture of Bengal
Culture of West Bengal
References
External links
My Fundays Telegraph Kolkata article 24 December 2008
Category:Indian male film actors
Category:Male actors from Kolkata
Category:Bengali male actors
Category:Male actors in Bengali cinema
Category:2011 deaths
Category:1944 births
Category:20th-century Indian male actors
Category:21st-century Indian male actors
Category:Bengali male television actors
Category:Indian male television actors
Category:People from Howrah |
Should College Freshman Start A Roth IRA?
September 3, 2009 — Sam H. Fawaz
At no time since the Great Depression have college students worried more about money. Tuition continues to rise, financing sources continue to contract. So why should a student worry about finding money for, of all things, retirement?
Because even a few dollars a week put toward a Roth IRA can reap enormous benefits over the 40-50 years of a career lifetime that today’s average college student will complete after graduation. Take the example of an 18-year-old who contributes $5,000 each year of school until she graduates. Assume that $20,000 grows at 7.5 percent a year until age 65. That would mean more than a half-million dollars from that initial four-year investment without adding another dime.
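As a rough sanity check of that figure, a few lines of Python will do (assumptions, for illustration only: end-of-year contributions of $5,000 at ages 18 through 21, a constant 7.5 percent annual return, growth through age 65, and no taxes or fees):

RATE = 0.075          # assumed constant annual return
CONTRIBUTION = 5_000  # contributed at the end of each school year

balance = 0.0
for age in range(18, 66):      # ages 18 through 65 inclusive
    balance *= 1 + RATE        # one year of growth
    if age <= 21:              # four years of contributions
        balance += CONTRIBUTION

print(f"Balance at age 65: ${balance:,.0f}")

Under these assumptions the script prints roughly $540,000, consistent with the "more than a half-million dollars" quoted above; changing the contribution timing or the assumed return shifts the figure, but not its order of magnitude.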
Consider what would happen if she added more.
There are a few considerations before a student starts to accumulate funds for the IRA. First, students should try and avoid or extinguish as much debt – particularly high-rate credit card debt – as possible. Then, it’s time to establish an emergency fund of 3-6 months of living expenses to make sure that a student can continue to afford the basics at school if an unexpected problem occurs.
To contribute to an IRA, you must have earned income; that is, income earned from a job or self-employment. Even working in the family business is allowable if you get a form W-2 or 1099 for your earnings. Contributions from savings, investment income or other sources is not allowed.
Certainly $5,000 a year sounds like an enormous amount of outside money for today’s student to gather, but it’s not impossible. Here’s some information about Roth IRAs and ideas for students to find the money to fund them.
The basics of Roth IRAs: I’ll start by describing the difference between a traditional IRA and a Roth IRA and why a Roth might be a better choice for the average student. Traditional IRAs allow investors to save money tax-deferred with deductible contributions until they’re ready to begin withdrawals anytime between age 59 ½ and 70 ½. After age 70 1/2, minimum withdrawals become mandatory.
Roth IRAs don’t allow a current tax-deductible contribution; instead they allow tax-free withdrawal of funds with no mandatory distribution age and allow these assets to pass to heirs tax-free as well. If someone leaves their savings in the Roth for at least five years and waits until they’re 59 1/2 to take withdrawals, they’ll never pay taxes on the gains. That’s a good thing in light of expected increases in future tax rates. For someone in their late teens and early 20s, that offers the potential for significant earnings over decades with great tax consequences later. Also, after five years and before you turn age 59 1/2, you may withdraw your original contributions (not any accumulated earnings) without penalty.
Getting started is easy: Some banks, brokerages and mutual fund companies will let an investor open a Roth IRA for as little as $50 and $25 a month afterward. It’s a good idea to check around for the lowest minimum amounts that can get a student in the game so they can plan to increase those contributions as their income goes up over time. Also, some institutions offer cash bonuses for starting an account. Go with the best deal and start by putting that bonus right into the account. Watch the fine print for annual fees or commissions and avoid them if possible.
It’s wise to get advice first: Every student’s financial situation is different. One of the best gifts a student can get is an early visit – accompanied by their parents – to a financial advisor such as a Certified Financial Planner™ professional. A planner trained in working with students can certainly talk about this IRA idea, but also provide a broader viewpoint on a student’s overall goals and challenges. While starting an early IRA is a great idea for everyone, students may also need to know how to find scholarships, grants and other smart ideas for borrowing to stay in school. A good planner is a one-stop source of advice for all those issues unique to the student’s situation.
Plan to invest a set percentage from the student’s vacation, part-time or work/study paychecks: People who save in excess of 10 percent of their earnings are much better positioned for retirement than anyone else. Remarkably few people set that goal. One of the benefits of the IRA idea is it gets students committing early to the 10 percent figure every time they deposit a paycheck. It’s a habit that will help them build a good life. Better yet, set up an automatic withdrawal from your savings or checking account for the IRA contribution.
Get relatives to contribute: If a student regularly gets gifts of money from relatives, it might not be a bad idea to mention the IRA idea to those relatives. Adults like to help kids who are smart with money, and if the student can commit to this savings plan rather than spending it at the mall, they might feel considerably better about the money they give away. At a minimum, the student should earmark a set amount of “found” money like birthday and holiday gift money toward a Roth IRA in excess of the 10 percent figure. Again, the IRA contributions cannot exceed the student’s earned income for the year.
Sam H. Fawaz is a Certified Financial Planner ( CFP ), Certified Public Accountant and registered member of the National Association of Personal Financial Advisors (NAPFA) fee-only financial planner group. Sam has expertise in many areas of personal finance and wealth management and has always been fascinated with the role of money in society. Helping others prosper and succeed has been Sam’s mission since he decided to dedicate his life to financial planning. He specializes in entrepreneurs, professionals, company executives and their families. This column was co-authored by Sam H. Fawaz CPA, CFP and the Financial Planning Association, the membership organization for the financial planning community, and is provided by YDream Financial Services, Inc., a local member of FPA.
No doubt about that fact that the earlier one begins saving, especially in a Roth IRA, the bigger the nest egg. Those students who could pull off only 10-20% of what Sam is recommending would end up well ahead of their peers. There are a few issues to consider: 1) will saving in a Roth result in taking on more student debt? If so, the returns will be minimized. 2) will the student’s financial aid be reduced by 25% of the value of the Roth? At private colleges using the CSS Profile, this is a virtual certainty. All things considered, this is good food for thought. I commend the author for taking the time to address the issue. |
USS Calvert
USS Calvert may refer to:
, was a motor boat that served in World War I
, served in World War II
Category:United States Navy ship names |
Q:
Alternative to Firefox and Chrome with an element inspector?
Assuming that
Mozilla is not trustworthy,
Chrome isn't any better, because Google …,
Chromium isn't any better just because it's open source, after all Firefox is open source,
is there any browser that:
is trustworthy
is open source, free and none profit
works on Linux (I'm on Fedora)
supports basic extensions like ad-blocker and LastPass
has an element inspector for programmers like me
is fast, and relatively lightweight?
A:
The problem with "trustworthy" is that trust isn't something one can be recommended to...it's something that you gotta have. (I didn't watch the video but I'm sure you can technically do a rant about just any application/OS developer based on some mess ups...)
But what I would suggest is taking a look on some of the more popular browsers based on the Chromium project as well as on the Mozilla browser project, but not controlled in any way by these big corporations/organizations.
On the Chromium end of things, there's Brave and Vivaldi. While from Firefox there are Pale Moon and Waterfox.
|
/***************************************************************************
* *
* Copyright (C) 2017 Seamly, LLC *
* *
* https://github.com/fashionfreedom/seamly2d *
* *
***************************************************************************
**
** Seamly2D is free software: you can redistribute it and/or modify
** it under the terms of the GNU General Public License as published by
** the Free Software Foundation, either version 3 of the License, or
** (at your option) any later version.
**
** Seamly2D is distributed in the hope that it will be useful,
** but WITHOUT ANY WARRANTY; without even the implied warranty of
** MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
** GNU General Public License for more details.
**
** You should have received a copy of the GNU General Public License
** along with Seamly2D. If not, see <http://www.gnu.org/licenses/>.
**
**************************************************************************
************************************************************************
**
** @file vtooluniondetails.h
** @author Roman Telezhynskyi <dismine(at)gmail.com>
** @date 26 12, 2013
**
** @brief
** @copyright
** This source code is part of the Valentine project, a pattern making
** program, which allows creating and modeling patterns of clothing.
** Copyright (C) 2013-2015 Seamly2D project
** <https://github.com/fashionfreedom/seamly2d> All Rights Reserved.
**
** Seamly2D is free software: you can redistribute it and/or modify
** it under the terms of the GNU General Public License as published by
** the Free Software Foundation, either version 3 of the License, or
** (at your option) any later version.
**
** Seamly2D is distributed in the hope that it will be useful,
** but WITHOUT ANY WARRANTY; without even the implied warranty of
** MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
** GNU General Public License for more details.
**
** You should have received a copy of the GNU General Public License
** along with Seamly2D. If not, see <http://www.gnu.org/licenses/>.
**
*************************************************************************/
#ifndef VTOOLUNIONDETAILS_H
#define VTOOLUNIONDETAILS_H
#include <qcompilerdetection.h>
#include <QDomElement>
#include <QDomNode>
#include <QMetaObject>
#include <QObject>
#include <QPointF>
#include <QString>
#include <QVector>
#include <QtGlobal>
#include "../ifc/ifcdef.h"
#include "../ifc/xml/vabstractpattern.h"
#include "../vmisc/def.h"
#include "vabstracttool.h"
#include "../vpatterndb/vpiece.h"
class DialogTool;
struct VToolUnionDetailsInitData
{
VToolUnionDetailsInitData()
: d1id(NULL_ID),
d2id(NULL_ID),
indexD1(NULL_ID),
indexD2(NULL_ID),
scene(nullptr),
doc(nullptr),
data(nullptr),
parse(Document::FullParse),
typeCreation(Source::FromFile),
retainPieces(false)
{}
quint32 d1id;
quint32 d2id;
quint32 indexD1;
quint32 indexD2;
VMainGraphicsScene *scene;
VAbstractPattern *doc;
VContainer *data;
Document parse;
Source typeCreation;
bool retainPieces;
};
/**
* @brief The VToolUnionDetails class tool union details.
*/
class VToolUnionDetails : public VAbstractTool
{
Q_OBJECT
public:
static VToolUnionDetails *Create(QSharedPointer<DialogTool> dialog, VMainGraphicsScene *scene,
VAbstractPattern *doc, VContainer *data);
static VToolUnionDetails *Create(const quint32 _id, const VToolUnionDetailsInitData &initData);
static const QString ToolType;
static const QString TagDetail;
static const QString TagNode;
static const QString TagChildren;
static const QString TagChild;
static const QString AttrIndexD1;
static const QString AttrIndexD2;
static const QString AttrIdObject;
static const QString AttrNodeType;
static const QString NodeTypeContour;
static const QString NodeTypeModeling;
virtual QString getTagName() const Q_DECL_OVERRIDE;
virtual void ShowVisualization(bool show) Q_DECL_OVERRIDE;
virtual void incrementReferens() Q_DECL_OVERRIDE;
virtual void decrementReferens() Q_DECL_OVERRIDE;
virtual void GroupVisibility(quint32 object, bool visible) Q_DECL_OVERRIDE;
public slots:
/**
* @brief FullUpdateFromFile update tool data form file.
*/
virtual void FullUpdateFromFile () Q_DECL_OVERRIDE {}
virtual void AllowHover(bool) Q_DECL_OVERRIDE {}
virtual void AllowSelecting(bool) Q_DECL_OVERRIDE {}
protected:
virtual void AddToFile() Q_DECL_OVERRIDE;
virtual void SetVisualization() Q_DECL_OVERRIDE {}
private:
Q_DISABLE_COPY(VToolUnionDetails)
/** @brief d1 first detail id. */
quint32 d1id;
/** @brief d2 second detail id. */
quint32 d2id;
/** @brief indexD1 index edge in first detail. */
quint32 indexD1;
/** @brief indexD2 index edge in second detail. */
quint32 indexD2;
VToolUnionDetails(quint32 id, const VToolUnionDetailsInitData &initData, QObject *parent = nullptr);
void AddDetail(QDomElement &domElement, const VPiece &d) const;
void AddToModeling(const QDomElement &domElement);
QVector<quint32> GetReferenceObjects() const;
QVector<quint32> ReferenceObjects(const QDomElement &root, const QString &tag, const QString &attribute) const;
};
#endif // VTOOLUNIONDETAILS_H
|
— The formation of Atlanta dates back to 1836, when the State of Georgia decided to build a railroad to the U.S. Midwest. A stake driven into the ground marked the rail line’s “terminus,” and the settlement that grew up around it eventually became the city of Atlanta, incorporated in 1847.
Another train-themed endeavor terminated in Atlanta Saturday evening, as the increasingly hapless Carolina RailHawks fell to the Atlanta Silverbacks 2-1 and saw their already dwindling playoff hopes tumble down the NASL table.
If you’re keeping track at home, that’s another early RailHawks lead lost to multiple goals surrendered during a defensive lapse lasting mere minutes. Rinse-repeat.
The visiting RailHawks grabbed a surprise lead less than a minute into the match. Atlanta goalkeeper Steward Ceus’ clearance collided directly off an oncoming Tyler Engel and ricocheted across the goal line. The former North Carolina Tar Heel took a bow for his first professional goal as the RailHawks led 1-0.
In the 22nd minute, RailHawks center back Futty Danso absorbed a ball to his midsection, forcing him to leave the game for Austen King. With that tidbit posed for cause-and-effect purposes, Atlanta equalized in the 31st minute. A through ball was played ahead to midfielder Jaime Chavez charging up the left channel. Chavez centered to Pedro Mendes, who maneuvered right of defender Connor Tobin and slotted a shot under diving Carolina keeper Akira Fitzgerald to even the match at 1-1.
Three minutes later, Chavez settled a ball in the box before playing it back to forward Kyle Porter stationed near the outer left corner of the area. Porter calmly flew his one-timer into the far right netting to put the Silverbacks up 2-1, a lead that would last until intermission.
The scoreless second half saw the RailHawks muster a few promising chances that only managed to ding the woodwork. In the 52nd minute, Nacho Novo blasted a shot from distance off the crossbar, and Engel’s follow-up header sailed over goal. Nazmi Albadawi uncorked a nifty left-footer in the 57th minute that chipped paint. Blake Wagner had one on a plate from 17 yards out in the 64th minute, but his curler skimmed the top of the net. In the 70th minute, Austin da Luz delivered a low burner snared by a diving Ceus.
Finally, Tiyi Shipalane executed a tremendous drive off the left wing in the 76th minute, winding up inside the box. But he pushed his sure-shot wide right.
Carolina has only one win over their last 10 matches, with seven losses and two draws over that same span. On the bright side, the RailHawks only surrendered two goals to Atlanta, snapping a streak of six losses in which Carolina has allowed three goals in the game.
The RailHawks (6-8-10, 26 pts.) slide to eighth place in the combined NASL season standings, with seven points and three other teams separating Carolina from the final league playoff spot, currently occupied by the Tampa Bay Rowdies. With just six games left in their regular season, the RailHawks stagger back to WakeMed Soccer Park next Saturday to host the Ottawa Fury, the first of three straight home games.
BOX SCORE
CAR: Fitzgerald, Low, Tobin, Danso (King, 25’), Wagner (Bracalello, 80’), Hlavaty, Albadawi, Shipalane, Engel (Da Silva, 62’), da Luz, Novo
ATL: Ceus, Black, Mensing, Reed, Burgos, Abdul Bangura, Kimura, Paulo Mendes (Mravec,74’), Pedro Mendes (Okafor, 83’), Chavez (Shaka Bangura, 69’), Porter
GOALS
CAR: Engel, 1’
ATL: Pedro Mendes, 31’ (Chavez); Porter, 34’ (Chavez)
CAUTIONS
CAR: Engel (49’); Tobin (90’)
ATL: Chavez (25’); Mravec (90’)
EJECTIONS
CAR: --
ATL: --
ATTENDANCE: 4,322 |
Q:
Prove that the set $\left\{ x, Ax, \dots, A^{k-1} x \right\}$ is linearly independent
Problem: Let $A\in M_{n\times n}(\mathbb R)\,$ be a matrix and suppose that a positive number $k\,$ exists such that $A^k = 0\,$ and $A^{k-1} \neq 0$.
Suppose that $x=\left[ \begin{matrix} x_1 \\ \vdots \\ x_n \end{matrix} \right]$ is a vector in $\mathbb{R^n}$ such that $A^{k-1} x \neq 0$.
Prove that the $k\,$ vectors $\,x,Ax,\dots,A^{k-1}x\,$ are linearly independent.
My attempt: Suppose $x + Ax + \dots + A^{k-1}x = 0$. Multiply both sides with $A^{k-1}$. Then we have $A^{k-1}x + A^k (x + Ax + \dots + A^{k-2}x) = 0 \Leftrightarrow A^{k-1}x = 0 \Leftrightarrow x = 0$
which implies $x + Ax + \dots + A^{k-1}x\,$ is linearly independent.
This problem looks quite easy but I want my proof to be checked. Is it correct?
A:
Take $\alpha_0,\ldots,\alpha_{k-1}\in\mathbb R$ and suppose that$$\alpha_0x+\alpha_1Ax+\cdots+\alpha_{k-1}A^{k-1}x=0.\tag1$$Then $A^{k-1}(\alpha_0x+\alpha_1Ax+\cdots+\alpha_{k-1}A^{k-1}x)=0$, but this means that $\alpha_0A^{k-1}x=0$ and, since $A^{k-1}x\neq0$, $\alpha_0=0$. So, $(1)$ means that$$\alpha_1Ax+\alpha_2A^2x+\cdots+\alpha_{k-1}A^{k-1}x=0.\tag2$$Now, start all over again, multiplying $(2)$ by $A^{k-2}$ and so on.
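Not a substitute for the proof, but as a concrete numerical illustration you can check a small case, e.g. the $3\times3$ Jordan block with $k=3$ (a sketch in Python/NumPy):

import numpy as np

k = 3
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])   # nilpotent: A^3 = 0, A^2 != 0
x = np.array([0., 0., 1.])     # chosen so that A^(k-1) x != 0

assert np.allclose(np.linalg.matrix_power(A, k), 0)              # A^k = 0
assert not np.allclose(np.linalg.matrix_power(A, k - 1) @ x, 0)  # A^(k-1) x != 0

M = np.column_stack([np.linalg.matrix_power(A, j) @ x for j in range(k)])
print(np.linalg.matrix_rank(M))  # prints 3, so x, Ax, A^2 x are independent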
|
Final destination of an ingested needle: the liver.
Foreign body ingestion is a common problem in children, but it is also seen among adults. Most foreign bodies pass through the gastrointestinal tract without causing complications. Perforation of the gut by a foreign body, followed by migration of the foreign body to the liver is quite rare. Herein we report a case of inadvertent ingestion of a sewing needle that perforated the duodenum and migrated to the liver. The patient was monitored weekly with abdominal radiographs, but displacement of the needle could not be observed. At follow-up, right upper quadrant pain was noted. Two weeks later, computed tomography revealed that the needle was completely buried into the right lobe of the liver. Ultrasonographic examination successfully showed the extracapsular displacement of the needle. Eventually, laparoscopic removal of the needle was easily performed. |
[Six scanned page images, edinbmedj74217-0040 through edinbmedj74217-0045; no machine-readable text or captions available.]
|
Q:
Can I give an ng-form a name that I can check with $pristine?
I looked at the documentation for ng-form:
http://docs.angularjs.org/api/ng.directive:ngForm
But it gives me almost no examples and I am still very confused. What I would like to do is to have a table with input fields and then check if fields on the table are unchanged? I don't want to include this in a form so I was wondering if I can use ng-form.
My HTML looks like this:
<form name="itemForm">
<table>
....
</table>
<button type="submit" data-ng-disabled="itemForm.$pristine">Save</button>
</form>
Can I do this with an ng-form directive enclosed in a DIV and still set the name in the same way?
A:
Yes, you can do it with the ng-form directive as well.
Demo: Fiddle
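(The Fiddle link above carries no URL here, so a minimal sketch of the idea follows; the names tableForm, items and save() are placeholders, not from the question.)
<div ng-form="tableForm">
  <table>
    <tr ng-repeat="item in items">
      <td><input type="text" ng-model="item.name"></td>
    </tr>
  </table>
  <!-- ng-form publishes its controller on the scope under the given name,
       so tableForm.$pristine works just like it does for a named <form> -->
  <button type="button" ng-click="save()" ng-disabled="tableForm.$pristine">Save</button>
</div>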
|
---
abstract: |
Radiation damage to space-based Charge-Coupled Device (CCD) detectors creates defects which result in an increasing Charge Transfer Inefficiency (CTI) that causes spurious image trailing. Most of the trailing can be corrected during post-processing, by modelling the charge trapping and moving electrons back to where they belong. However, such correction is not perfect – and damage is continuing to accumulate in orbit. To aid future development, we quantify the limitations of current approaches, and determine where imperfect knowledge of model parameters most degrade measurements of photometry and morphology.
As a concrete application, we simulate $1.5\times10^{9}$ “worst case” galaxy and $1.5\times10^{8}$ star images to test the performance of the *Euclid* visual instrument detectors. There are two separable challenges. First, even if the model used to correct CTI is identical to that used to add CTI, only $99.68$ % of the spurious ellipticity is corrected in our setup, because readout noise is not subject to CTI but gets over-corrected during correction. Second, assuming the first issue to be solved, the charge trap density will need to be known to within $\Delta\rho/\rho\!=\!(0.0272\pm0.0005)$%, and the characteristic release time of the dominant species to within $\Delta\tau/\tau\!=\!(0.0400\pm0.0004)$%. This work presents the next level of definition of in-orbit CTI calibration procedures for *Euclid*.
author:
- |
Holger Israel$^{1,2,*}$, Richard Massey$^{1,3}$, Thibaut Prod’homme$^{4}$, Mark Cropper$^{5}$ Oliver Cordes$^{6}$, Jason Gow$^{7}$, Ralf Kohley$^{8}$, Ole Marggraf$^{6}$, Sami Niemi$^{5}$, Jason Rhodes$^{9}$, Alex Short$^{4}$, Peter Verhoeve$^{4}$\
$^{1}$Institute for Computational Cosmology, Durham University, South Road, Durham DH1 3LE, UK\
$^{2}$Centre for Extragalactic Astronomy, Durham University, South Road, Durham DH1 3LE, UK\
$^{3}$Centre for Advanced Instrumentation, Durham University, South Road, Durham DH1 3LE, UK\
$^{4}$European Space Agency, ESTEC, Keplerlaan 1, 2200AG Noordwijk, The Netherlands\
$^{5}$Mullard Space Science Laboratory, University College London, Holmbury St Mary, Dorking, Surrey RH5 6NT, UK\
$^{6}$Argelander-Institut für Astronomie, Universität Bonn, Auf dem Hügel 71, 53121 Bonn, Germany\
$^{7}$e2v Centre for Electronic Imaging, The Open University, Walton Hall, Milton Keynes MK7 6AA, UK\
$^{8}$European Space Agency, ESAC, P.O. Box 78, 28691 Villanueva de la Cañada, Madrid, Spain\
$^{9}$Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109, United States\
$^{*}$e-mail: [[email protected]]{}
bibliography:
- 'CTICorr\_v13.bib'
date: 'Accepted —. Received —; in original form . '
title: |
How well can Charge Transfer Inefficiency be corrected?\
A parameter sensitivity study for iterative correction
---
\[firstpage\]
Introduction
============
The harsh radiation environment above the Earth’s atmosphere gradually degrades all electronic equipment, including the sensitive Charge-Coupled Device (CCD) imaging detectors used in the [*Hubble Space Telescope*]{} (HST) and [*Gaia*]{} [@2008IAUS..248..217L], and proposed for use by [*Euclid*]{} [@2011arXiv1110.3193L]. CCD detectors work by collecting photoelectrons which are stored within a pixel created by an electrostatic potential well. After each exposure these electrons are transferred via a process called clocking, where alternate electrodes are held high and low to move charge through the pixels towards the serial register. The serial register is then clocked towards the output circuit where charge-to-voltage conversion occurs providing an output signal dependent on the charge contained within a pixel. The amount of charge lost with each transfer is described by the Charge Transfer Inefficiency (CTI). One of the results of radiation-induced defects within the silicon lattice is the creation of charge traps at different energy levels within the silicon band-gap. These traps can temporarily capture electrons and release them after a characteristic delay, increasing the CTI. Any electrons captured during charge transfer can re-join a charge packet later, as spurious charge, often observed as a charge tail behind each source.
Charge trailing can be (partially) removed during image postprocessing. Since charge transfer is the last process to happen during data acquisition, the fastest and most successful attempts to correct CTI take place as the second step of data reduction, right after the analogue-digital converter bias has been subtracted [@2003astro.ph.10714B]. By modelling the solid-state physics of the readout process in [*HST*]{}’s [*Advanced Camera for Surveys*]{} (ACS), then iteratively reversing the model, @2010MNRAS.401..371M demonstrated a $10$-fold reduction in the level of charge trailing. The algorithm was sped up by @2010PASP..122.1035A and incorporated into STScI’s [*HST*]{} default analysis pipeline [@2012AAS...21924101S]. As the radiation damage accumulated, the trailing got bigger and easier to measure. With an updated and more accurate [*HST*]{} model, @2010MNRAS.409L.109M achieved a $20$-fold reduction. In an independent programme for [*Gaia*]{}, @2013MNRAS.430.3078S developed a model using different underlying assumptions about the solid-state physics in CCDs. @2014MNRAS.439..887M created a meta-algorithm that could reproduce either approach through a choice of parameters, and optimised these parameters for [*HST*]{} to correct $98$% of the charge trailing.
The current level of achievable correction is acceptable for most immediate applications. However, radiation damage is constantly accumulating in [*HST*]{} and [*Gaia*]{}; and increasing accuracy is required as datasets grow, and statistical uncertainties shrink. One particularly challenging example of stringent requirements in future surveys will be the measurement of faint galaxy shapes by [*Euclid*]{}.
In this paper, we investigate the effect of imperfect CTI correction on artificial images with known properties. We add charge trailing to simulated data using a CTI model $\mathbf{M}$, then correct the data using a CTI model with imperfectly known parameters, $\mathbf{M}+\delta\mathbf{M}$. After each stage, we compare the measured photometry (flux) and morphology (size and shape) of astronomical sources to their true (or perfectly-corrected) values. We develop a general model to predict these errors based on the errors in CTI model parameters. We focus on the most important parameters of a ‘volume-driven’ CTI model: the density $\rho_{i}$ of charge traps, the characteristic time $\tau_{i}$ in which they release captured electrons, and the power law index $\beta$ describing how an electron cloud fills up the physical pixel volume.
This paper is organised as follows. In Sect. \[sec:simulations\], we simulate *Euclid* images and present our analysis methods. In Sect. \[sec:estim\], we address the challenge of measuring an average ellipticity in the presence of strong noise. We present our CTI model and measure the CTI effects as a function of trap release timescale $\tau$ in Sect. \[sec:modcorr\]. Based on laboratory measurements of an irradiated CCD273 [@2012SPIE.8453E..04E], we adopt a baseline trap model $\mathbf{M}$ for the *Euclid* VIS instrument (Sect. \[sec:euclid\]). In this context, we discuss how well charge trailing can be removed in the presence of readout noise. We go on to present our results for the modified correction model ($\mathbf{M}+\delta\mathbf{M}$) and derive tolerances in terms of the trap parameters based on *Euclid* requirements. We discuss these results in Sect. \[sec:disc\] and conclude in Sect. \[sec:conclusion\].
Simulations and data analysis {#sec:simulations}
=============================
Simulated galaxy images {#sec:imsim}
-----------------------
![image](fig1.pdf){width="150mm"}
Charge Transfer Inefficiency has the greatest impact on small, faint objects that are far from the readout register (i.e. that have undergone a great number of transfers). To quantify the worst case scenario, we therefore simulate the smallest, faintest galaxy whose properties are likely to be measured – with an exponential flux profile $f(r)\propto\mathrm{e}^{-r}$ whose broad wings (compared to a Gaussian or de Vaucouleurs profile) also make it more susceptible to CTI. To beat down shot noise, we simulate $10^{7}$ noisy image realisations for each measurement. We locate these galaxies $2048\pm0.5$pixels from both the serial readout register and the amplifier, uniformly randomising the sub-pixel centre to average out effects that depend on proximity to a pixel boundary. All our simulated galaxies have the same circularly symmetric profile, following the observation by @2010PASP..122..439R that this produces the same mean result as randomly oriented elliptical galaxies with no preferred direction.
We create the simulated images spatially oversampled by a factor 20, convolve them with a similarly oversampled Point Spread Function (PSF), then resample them to the final pixel scale. We use a preliminary PSF model and the $0\farcs1$pixels of the *Euclid* VIS instrument, but our setup can easily be adapted to other instruments, e.g. ACS. To the image signal of $\sim\!\!1300$ electrons, we add a uniform sky background of $105$ electrons, as expected for a $560\,$s VIS exposure, and Poisson photon noise to both the source and the background. After clocking and charge trailing (if it is being done; see Sect. \[sec:trailing\]), we then add additional readout noise, which follows a Gaussian distribution with a root mean square (rms) of $4.5$electrons, the nominal *Euclid* VIS value.
In the absence of charge trailing, the final galaxies have a mean $S/N$=$11.35$ and a Full Width at Half Maximum (FWHM) size of $0\farcs18$, as measured by `SExtractor`. This size, equal to that of the PSF and at the small end of the range expected from Fig. 4 of @2013MNRAS.429..661M, makes our galaxies the most challenging in terms of CTI correction. Examples of input, degraded, and corrected images are shown in Fig. \[fig:sampleimage\].
Separately, we perform a second suite of simulations, containing $10^{6}$ realisations of a *Euclid* VIS PSF at $S/N\!\approx\!200$. The PSF simulations follow the above recipe, but skip the convolution of the PSF with an exponential disk.
Image analysis {#sec:dataflow}
--------------
On each of the sets of images (input, degraded, and corrected), we detect the sources using `SExtractor`. Moments of the brightness distribution and fluxes of the detected objects are measured using an `IDL` implementation of the RRG [@2001ApJ...552L..85R] shape measurement method. RRG is more robust than `SExtractor` for faint images, combining Gaussian-weighted moments of the image $I(\btheta)$ to measure integrated source flux $$\label{eq:fmom}
F\equiv\!\int{W(\btheta)\,I(\btheta)\,\mathrm{d}^{2}\btheta},$$ where $W(\btheta)$ is a Gaussian weight function with standard deviation $w$, and the integral extends over $2.5w$; the position $$\label{eq:xymom}
y\equiv\!\int{ \theta_2\, W(\btheta)\,I(\btheta)\,\mathrm{d}^{2}\btheta};$$ the size $$\label{eq:rmom}
R^{2}\equiv Q_{11}+Q_{22};$$ and the ellipticity $$\label{eq:chidef}
\{e_1,e_2\}\equiv \left\{\frac{Q_{11}-Q_{22}}{Q_{11}+Q_{22}},\frac{2Q_{12}}{Q_{11}+Q_{22}}\right\},$$ where the second-order brightness moments are $$\label{eq:qmom}
Q_{\alpha\beta}\!=\!\int{\theta_{\alpha}\,\theta_{\beta}\,W(\btheta)\,I(\btheta)\,\mathrm{d}^{2}\btheta},
\qquad\{\alpha,\beta\}\!\in\!\{1,2\}.$$ For measurements on stars, we choose a window size $w\!=\!0\farcs75$, the *Euclid* prescription for stars. For galaxies, we seek to reproduce the window functions used in weak lensing surveys. We adopt the radius of the `SExtractor` object [e.g. @2007ApJS..172..219L], which, with $w\!=\!0\farcs34$, truncates more of the noise and thus returns more robust measurements.
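To make these definitions concrete, the following Python sketch (our own transcription, not the `IDL` RRG pipeline used in this work) computes the Gaussian-weighted flux, centroid, size and ellipticity of a postage stamp. The normalisation by the weighted flux, the centring on the weighted centroid, and the default pixel scale of $0\farcs1$ are conventions of this sketch.

```python
import numpy as np

def weighted_moments(image, x0, y0, w, pixel_scale=0.1):
    """Gaussian-weighted moments of a postage stamp (cf. eqs. fmom-qmom).

    image       : 2D array of counts
    x0, y0      : centroid guess in pixels
    w           : weight-function sigma in arcsec
    pixel_scale : arcsec per pixel (0.1 arcsec for Euclid VIS)

    The normalisation by the weighted flux and the centring of the
    second-order moments are conventions of this sketch.
    """
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx]
    dx = (x - x0) * pixel_scale
    dy = (y - y0) * pixel_scale
    r2 = dx**2 + dy**2

    W = np.exp(-0.5 * r2 / w**2)
    W[r2 > (2.5 * w)**2] = 0.0            # the integral extends over 2.5 w

    F = np.sum(W * image)                  # weighted flux
    xc = np.sum(dx * W * image) / F        # weighted centroid offsets
    yc = np.sum(dy * W * image) / F

    Q11 = np.sum((dx - xc)**2 * W * image) / F
    Q22 = np.sum((dy - yc)**2 * W * image) / F
    Q12 = np.sum((dx - xc) * (dy - yc) * W * image) / F

    R2 = Q11 + Q22                         # size
    e1 = (Q11 - Q22) / R2                  # ellipticity components
    e2 = 2.0 * Q12 / R2
    return F, xc, yc, R2, e1, e2
```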
Note that we are measuring a raw galaxy ellipticity, a proxy for the (reduced) shear, in which we are actually interested [cf. @2012MNRAS.423.3163K for a recent overview of the effects a cosmic shear measurement pipeline needs to address]. A full shear measurement pipeline must also correct ellipticity for convolution by the telescope’s PSF and calibrate it via a shear ‘responsivity factor’ [@1995ApJ...449..460K]. The first operation typically enlarges $e$ by a factor of $\sim1.6$ and the second lowers it by about the same amount. Since this is already within the precision of other concerns, we shall ignore both conversions. The absolute calibration of shear measurement with RRG may not be sufficiently accurate to be used on future surveys. However, it certainly has sufficient [*relative*]{} accuracy to measure small deviations in galaxy ellipticity when an image is perturbed.
High precision ellipticity measurements {#sec:estim}
=======================================
Measurements of a non-linear quantity
-------------------------------------
A fundamental difficulty arises in our attempt to measure galaxy shapes to a very high precision, by averaging over a large number of images. Mathematically, the problem is that calculating ellipticity $e_{1}$ directly from the moments and then taking the expectation value $\mathcal{E}(\cdot)$ of all objects, i.e.: $$\label{eq:simpleell}
e_{1}=\mathcal{E}\!\left(\frac{Q_{11}-Q_{22}}{Q_{11}+Q_{22}}\right),\quad
e_{2}=\mathcal{E}\!\left(\frac{2Q_{12}}{Q_{11}+Q_{22}}\right),$$ means dividing one noisy quantity by another noisy quantity. Furthermore, the numerator and denominator are highly correlated. If the noise in each follows a Gaussian distribution, and their expectation values are zero, the probability density function of the ratio is a Lorentzian (also known as Cauchy) distribution. If the expectation values of the Gaussians are nonzero, as we expect, the ratio distribution becomes a generalised Lorentzian, called the Marsaglia-Tin distribution [@Marsaglia65; @Marsaglia:2006:JSSOBK:v16i04; @Tin65]. In either case, the ratio distribution has infinite second and first moments, i.e. its variance – and even its expectation value – are undefined. Implications of this for shear measurement are discussed in detail by @2012MNRAS.424.2757M [@2012MNRAS.425.1951R; @2012MNRAS.427.2711K; @2013MNRAS.429.2858M; @2014MNRAS.439.1909V].
Therefore, we cannot simply average over ellipticity measurements for $10^{7}$ simulated images. The mean estimator (Eq. \[eq:simpleell\]) would not converge, but follow a random walk in which entries from the broad wings of the distribution pull the average up or down by an arbitrarily large amount.
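A toy Monte Carlo (with made-up Gaussian moments, not our simulation data) illustrates the problem in a few lines of Python: the mean of the per-object ratios is unstable, whereas the ratio of the means is well behaved.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10**6

# Toy model: noisy numerator and denominator of an ellipticity-like ratio,
# correlated through a shared noise term, with a denominator that can
# scatter close to zero for faint objects.
true_num, true_den = 0.02, 0.10
noise = rng.normal(size=(n, 2))
num = true_num + 0.05 * noise[:, 0] + 0.04 * noise[:, 1]
den = true_den + 0.05 * noise[:, 1]

ratio_of_means = num.mean() / den.mean()   # well-behaved estimator
mean_of_ratios = np.mean(num / den)        # heavy-tailed, unstable

print(f"true value     : {true_num / true_den:.3f}")
print(f"ratio of means : {ratio_of_means:.3f}")
print(f"mean of ratios : {mean_of_ratios:+.3f}  (changes wildly with the seed)")
```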
“Delta method” (Taylor expansion) estimators for ellipticity
------------------------------------------------------------
As an alternative estimator, we employ what is called in statistics the ‘delta method’: a Taylor expansion of Eq. (\[eq:simpleell\]) around the expectation value of the denominator [e.g. @casella+berger:2002]. The expectation value of the ratio of two random variables $X$, $Y$ is thus approximated by: $$\begin{gathered}
\label{eq:deltamethod}
\mathcal{E}(X/Y)\!\approx\!\frac{\mathcal{E}(X)}{\mathcal{E}(Y)}
-\frac{\mathcal{C}(X,Y)}{\mathcal{E}^{2}(Y)}
+\frac{\mathcal{E}(X)\sigma^{2}(Y)}{\mathcal{E}^{3}(Y)}\\
+\frac{\mathcal{C}(X,Y^{2})}{\mathcal{E}^{3}(Y)}
-\frac{\mathcal{E}(X)\mathcal{E}[Y-\mathcal{E}(Y)]^{3}}{\mathcal{E}^{4}(Y)}\end{gathered}$$ where $\mathcal{E}(X)$, $\sigma(X)$, $\sigma^{2}(X)$ denote the expectation value, standard deviation, and variance of $X$, and $\mathcal{C}(X,Y)$ its covariance with $Y$. The zero-order term in Eq. (\[eq:deltamethod\]) is the often-used approximation $\mathcal{E}(X/Y)\approx\mathcal{E}(X)/\mathcal{E}(Y)$ that switches the ratio of the averages for the average of the ratio. We note that beginning from the first order there are two terms per order with opposite signs. Inserting Eq. (\[eq:qmom\]) into Eq. (\[eq:deltamethod\]), the first-order estimator for the ellipticity reads in terms of the brightness distribution moments $Q_{\alpha\beta}$ as follows: $$\begin{aligned}
\label{eq:estim}
\begin{split}
e_{1} =& \frac{\mathcal{E}(Q_{11}\!-\!Q_{22})}{\mathcal{E}(Q_{11}\!+\!Q_{22})} \\
&- \frac{\sigma^{2}(Q_{11})\!-\!\sigma^{2}(Q_{22})}{\mathcal{E}^{2}(Q_{11}\!+\!Q_{22})} +
\frac{\mathcal{E}(Q_{11}\!-\!Q_{22})\sigma^{2}(Q_{11}\!+\!Q_{22})}{\mathcal{E}^{3}(Q_{11}\!+\!Q_{22})}
\end{split}\\
\begin{split}\label{eq:estim_e2}
e_{2} =& \frac{\mathcal{E}(2Q_{12})}{\mathcal{E}(Q_{11}\!+\!Q_{22})}\\
&- \frac{\mathcal{C}(Q_{11},Q_{12})\!+\!\mathcal{C}(Q_{12},Q_{22})}{\mathcal{E}^{2}(Q_{11}\!+\!Q_{22})} +
\frac{\mathcal{E}(2Q_{12})\sigma^{2}(Q_{11}\!+\!Q_{22})}{\mathcal{E}^{3}(Q_{11}\!+\!Q_{22})},
\end{split}\end{aligned}$$
with the corresponding uncertainties, likewise derived using the delta method [e.g. @casella+berger:2002]:
$$\begin{aligned}
\label{eq:formal}
\begin{split}
\sigma^{2}(e_{1}) =& \frac{\sigma^{2}(Q_{11}\!-\!Q_{22})}{\mathcal{E}^{2}(Q_{11}\!+\!Q_{22})}\\
&- \frac{\mathcal{E}(Q_{11}\!-\!Q_{22})\left[\sigma^{2}(Q_{11})\!-\!\sigma^{2}(Q_{22})\right]}
{\mathcal{E}^{3}(Q_{11}\!+\!Q_{22})} \\
&+\frac{\mathcal{E}^{2}(Q_{11}\!-\!Q_{22})\sigma^{2}(Q_{11}\!+\!Q_{22})}{\mathcal{E}^{4}(Q_{11}\!+\!Q_{22})}
\end{split} \\
\begin{split} \label{eq:formal_e2}
\sigma^{2}(e_{2}) =& \frac{\sigma^{2}(Q_{11}\!+\!Q_{22})}{\mathcal{E}^{2}(Q_{11}\!+\!Q_{22})}\\
&- \frac{\mathcal{E}(Q_{11}\!+\!Q_{22})\left[\mathcal{C}(Q_{11},Q_{12})\!+
\!\mathcal{C}(Q_{12},Q_{22})\right]}{\mathcal{E}^{3}(Q_{11}\!+\!Q_{22})} \\
&+\frac{\mathcal{E}^{2}(Q_{11}\!+\!Q_{22})\sigma^{2}(Q_{11}\!+\!Q_{22})}{\mathcal{E}^{4}(Q_{11}\!+\!Q_{22})}\quad.
\end{split}\end{aligned}$$
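In code, the first-order estimator and its uncertainty amount to only a few lines. The Python sketch below (again our own transcription, operating on per-object arrays of $Q_{11}$, $Q_{22}$, $Q_{12}$) implements eqs. (\[eq:estim\]) and (\[eq:formal\]) for $e_{1}$; the guard against a negative variance estimate is an addition of the sketch.

```python
import numpy as np

def e1_delta_method(Q11, Q22, Q12):
    """First-order 'delta method' estimate of e1 and its standard error,
    following eqs. (estim) and (formal); inputs are per-object moment arrays."""
    num = Q11 - Q22                       # numerator of e1
    den = Q11 + Q22                       # denominator of e1
    E_num, E_den = num.mean(), den.mean()
    var_num, var_den = num.var(ddof=1), den.var(ddof=1)
    dvar = Q11.var(ddof=1) - Q22.var(ddof=1)

    e1 = (E_num / E_den
          - dvar / E_den**2
          + E_num * var_den / E_den**3)

    var_e1 = (var_num / E_den**2
              - E_num * dvar / E_den**3
              + E_num**2 * var_den / E_den**4)

    # standard error of the mean, sigma(e1)/sqrt(N); guard against a
    # (rare) negative variance estimate from the truncated expansion
    return e1, np.sqrt(max(var_e1, 0.0) / len(Q11))
```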
Application to our simulations {#sec:sigmas}
------------------------------
For our input galaxies, the combined effect of the first-order terms in eq. (\[eq:estim\]) is $\sim\!10$ %. Second-order contributions to the estimator are small, so we truncate after the first order. However, because of the divergent moments of the Marsaglia-Tin distribution, the third and higher-order contributions to the Taylor series increase again.
Nevertheless, while this delta-method estimator neither mitigates noise bias nor overcomes the infinite moments of the Marsaglia-Tin distribution at a fundamental level, it sufficiently suppresses the random walk behaviour for the purposes of this study, the averaging over noise realisations of the same object. We advocate re-casting the *Euclid* requirements in terms of the *Stokes parameters* [$Q_{11}\pm Q_{22},2Q_{12}$; @2014MNRAS.439.1909V]. These are the numerators and denominator of eq. (\[eq:simpleell\]) and are well-behaved Gaussians with finite first and second moments.
The formal uncertainties on ellipticity we quote in the rest of this article are the standard errors $\sigma(e_{1})/\sqrt{N}$ given by eq. (\[eq:formal\]). Because our experimental setup re-uses the same simulated sources (which are computationally expensive to produce in the numbers needed), our measurements will be intrinsically correlated (Sect. \[sec:corr\]). Hence the error bars we show overestimate the true uncertainties.
The effects of fast and slow traps {#sec:modcorr}
==================================
How CTI is simulated {#sec:trailing}
--------------------
The input images are degraded using a `C` implementation of the @2014MNRAS.439..887M CTI model. During each pixel-to-pixel transfer, in a cloud of $n_{\mathrm{e}}$ electrons, the number captured is $$\label{eq:nenc}
n_{\mathrm{c}}(n_{\mathrm{e}}) =
\left(1-\exp{\left(-\alpha n_{\mathrm{e}}^{1\!-\!\beta}\right)}\right)
\sum_{i}\rho_{i}
\left(\frac{n_{\mathrm{e}}}{w}\right)^{\beta},$$ where the sum is over different charge trap species with density $\rho_i$ per pixel, and $w$ is the full-well capacity. Parameter $\alpha$ controls the speed at which electrons are captured by traps within the physical volume of the charge cloud, which grows in a way determined by parameter $\beta$ .
Release of electrons from charge traps is modelled by a simple exponential decay, with a fraction $1-\mathrm{e}^{(-1/\tau_{i})}$ escaping during each subsequent transfer. The characteristic release timescale $\tau_{i}$ depends on the physical nature of the trap species and the operating temperature of the CCD.
In this paper, we make the simplifying ‘volume-driven’ assumption that charge capture is instantaneous, i.e. $\alpha\!\to\!\infty$ in eq. (\[eq:nenc\]). Based on laboratory studies of an irradiated VIS CCD (detailed in Sect. \[sec:labdata\]), we adopt a $\beta\!=\!0.58$ baseline well fill, and an end-of-life total density of one trap per pixel, $\rho\!=\!1$. In our first, general tests, we investigate a single trap species and explore the consequences of different values of $\tau$.
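The following Python sketch shows one possible implementation of a single-species, single-phase transfer sequence under this instant-capture limit. The full-well value is a placeholder, and the `C` implementation used in this paper is considerably more general and faster.

```python
import numpy as np

def clock_column(column, rho, tau, beta, full_well=84700.0):
    """Read out one CCD column (index 0 is the pixel next to the register)
    with a single trap species: density rho per pixel, release timescale
    tau (in transfers) and well fill power beta.  Instant ('volume-driven')
    capture is assumed.  Schematic O(n^2) sketch; full_well is a placeholder."""
    charge = list(np.asarray(column, dtype=float))
    n = len(charge)
    trapped = np.zeros(n)                      # electrons held by traps in pixel j
    release = 1.0 - np.exp(-1.0 / tau)         # fraction released per transfer
    readout = []

    while charge:
        for j in range(len(charge)):           # packet currently sitting in pixel j
            freed = release * trapped[j]       # exponential release into this packet
            trapped[j] -= freed
            q = charge[j] + freed
            # eq. (nenc) in the instant-capture limit: traps hold up to
            # rho * (n_e / w)^beta electrons; only top up, never over-fill
            want = rho * (max(q, 0.0) / full_well) ** beta
            grab = min(max(want - trapped[j], 0.0), q)
            trapped[j] += grab
            charge[j] = q - grab
        readout.append(charge.pop(0))          # pixel 0 reaches the serial register
        # the remaining packets have implicitly shifted one pixel towards it
    return np.array(readout)
```

Trailing a whole image then amounts to applying such a routine to every column; several trap species simply contribute additively to the capture term, as in eq. (\[eq:nenc\]).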
Iterative CTI correction {#sec:corr}
------------------------
![image](fig2.pdf){width="155mm"}
[lccccccc]{} & $A$ & $D_{\mathrm{a}}$ & $D_{\mathrm{p}}$ & $D_{\mathrm{w}}$ & $G_{\mathrm{a}}$ & $G_{\mathrm{p}}$ & $G_{\mathrm{w}}$\
\
$\Delta F/F_{\mathrm{true}}$ & $-0.5367\pm0.0098$ & $-0.3144\pm0.0085$& $6.199\pm0.044$ & $4.639\pm0.260$ & $0.2116\pm0.0194$ & $49.53\pm1.64$ & $41.54\pm2.39$\
$\Delta y$ & $1.1098\pm0.0014$ & $-0.5291\pm0.0028$ & $8.392\pm0.080$ & $2.110\pm0.234$ & $0.3061\pm0.0185$ & $6.935\pm0.402$ & $7.083\pm0.210$\
$\Delta R^{2}/R^{2}_{\mathrm{true}}$ & $0.4226\pm0.0025$ & $-0.3857\pm0.0038$ & $15.72\pm0.18$ & $2.576\pm0.375$ & $1.0866\pm0.0448$ & $4.382\pm0.047$ & $3.779\pm0.160$\
$\Delta e_{1}$ & $0.5333\pm0.0016$ & $-0.3357\pm0.0026$ & $16.28\pm0.22$ & $2.951\pm0.326$ & $0.9901\pm0.0203$ & $4.553\pm0.054$ & $4.132\pm0.081$\
\
$\Delta F/F_{\mathrm{true}}$ & $-0.5549\pm0.0029$ & $0.0446\pm0.0028$ & $129.6\pm13.7$ & $26.00\pm13.36$ & $0.1301\pm0.0121$ & $73.47\pm6.78$ & $56.84\pm5.21$\
$\Delta y$ & $0.09582\pm0.01011$ & $0.0517\pm0.0111$ & $5.622\pm8.911$ & $2.227\pm4.557$ & $0.0810\pm0.1170$ & $2.757\pm5.369$ & $3.154\pm2.784$\
$\Delta R^{2}/R^{2}_{\mathrm{true}}$ & $-2.3181\pm0.0173$ & $0.4431\pm0.0202$ & $75.90\pm25.02$ & $28.47\pm11.03$ & $0.5471\pm0.2294$ & $41.31\pm16.09$ & $35.33\pm9.12$\
$\Delta e_{1}$ & $0.01383\pm0.0115$ & $0.0039\pm0.0066$ & $12.30\pm20.49$ & $1.000\pm0.000$ & $0.0982\pm0.0274$ & $5.738\pm2.085$ & $5.353\pm2.078$\
\
$\Delta F/F_{\mathrm{true}}$ & $-2.2472\pm0.0239$ & $-1.4558\pm0.0189$ & $107.5\pm0.3$ & $55.11\pm0.95$ & $1.151\pm0.047$ & $496.6\pm3.2$ & $343.6\pm4.4$\
$\Delta y$ & $4.3532\pm0.0014$ & $-1.8608\pm0.0027$ & $173.1\pm0.4$ & $29.20\pm0.67$ & $5.0987\pm0.0173$ & $67.20\pm0.20$ & $43.91\pm0.22$\
$\Delta R^{2}/R^{2}_{\mathrm{true}}$ & $0.9489\pm0.00098$ & $-6.434\pm0.0095$ & $288.8\pm4.7$ & $18.71\pm4.49$ & $20.237\pm0.716$ & $94.42\pm0.15$ & $50.20\pm0.25$\
$\Delta e_{1}$ & $1.2336\pm0.0077$ & $-0.7941\pm0.0086$ & $266.7\pm2.4$ & $17.54\pm3.90$ & $16.513\pm0.046$ & $94.87\pm0.19$ & $52.57\pm0.21$\
\
$\Delta F/F_{\mathrm{true}}$ & $-0.0035\pm0.0002$ & $0.0027\pm0.0003$ & $110.2\pm10.5$ & $42.21\pm20.02$ & $0.0006\pm0.0271$ & $182.6\pm71.3$ & $3.5\pm100.0$\
$\Delta y$ & $0.1504\pm0.00066$ & $0.0970\pm0.0067$ & $12.46\pm1.86$ & $2.731\pm1.552$ & $0.0218\pm0.0034$ & $7.377\pm1.024$ & $5.063\pm0.717$\
$\Delta R^{2}/R^{2}_{\mathrm{true}}$ & $-0.0163\pm0.0038$ & $-0.0182\pm0.0036$ & $1269\pm33$ & $24.57\pm47.63$ & $0.0198\pm0.0146$ & $50.83\pm34.56$ & $37.95\pm38.64$\
$\Delta e_{1}$ & $0.0012\pm0.0024$ & $0.0003\pm0.0014$ & $2.26\pm50.92$ & $1.000\pm0.000$ & $0.02668\pm0.0061$ & $8.465\pm1.800$ & $5.379\pm1.647$\
\[tab:taufits\]
The @2014MNRAS.439..887M code can also be used to ‘untrail’ the CTI. If required, we use $n_\mathrm{iter}=5$ iterations to attempt to correct the image (possibly with slightly different model parameters). Note that we perform this correction only after adding readout noise in the simulated images.
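Schematically, the iterative correction works by repeatedly pushing the current best guess of the true image through the forward model and subtracting the predicted trail; the sketch below is our own simplified Python version, not the production code.

```python
import numpy as np

def correct_cti(observed, add_cti, n_iter=5):
    """Iteratively remove CTI trailing from an image.

    observed : 2D array, the trailed image (after bias subtraction)
    add_cti  : callable applying the forward CTI model to an image
    n_iter   : number of iterations (5 in this paper)

    Schematic forward-model iteration; the production code differs in detail.
    """
    model = observed.copy()
    for _ in range(n_iter):
        trail = add_cti(model) - model     # trailing predicted for the current guess
        model = observed - trail           # remove it from the observed image
    return model
```

With a perfect forward model and no readout noise this loop essentially undoes the trailing (the limit recovered in Sect. \[sec:rn\]); readout noise, which is added only after clocking, is what prevents a perfect correction in practice.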
Our main interest in this study is the impact of uncertainties in the trap model on the recovered estimate of an observable $\eta$ (e.g. ellipticity). Therefore, we present our results in terms of differences between the estimators measured for the corrected images, and the input values: $$\label{eq:deltadef}
\Delta\eta_{i} = \eta_{i,\mathrm{corrected}} - \eta_{i,\mathrm{input}}.$$ Because, for each object $i$, the noise in the measurements of $\eta_{i,\mathrm{corrected}}$ and $\eta_{i,\mathrm{input}}$ is strongly correlated, it partially cancels out. Thus the actual uncertainty of each $\Delta\eta_{i}$ is lower than quoted. Moreover, because we re-use the same noise realisation in all our measurements (cases of different $\rho_{i}$ and $\tau_{i}$), these measurements are correlated as well.
CTI as a function of trap timescale {#sec:fourpanels}
-----------------------------------
The impact of charge trapping depends on the defect responsible. Figure \[fig:raw4panel\] demonstrates the effect of charge trap species with different release times $\tau$ on various scientific observables. To compute each datum (filled symbols), we simulate $10^{7}$ galaxies, add shot noise, add CTI trailing in the $y$ direction (i.e. vertical in Fig. \[fig:sampleimage\]), and only then add readout noise. Separately, we simulate $10^{6}$ stars. Using eqs. (\[eq:fmom\])–(\[eq:formal\_e2\]), we measure mean values of photometry (top panel), astrometry (second panel) and morphology (size in the third, and ellipticity in the bottom panel). Our results confirm what @2010PASP..122..439R found in a different context.
Three trap regimes are apparent for all observables. Very fast traps ($\tau\!\la\!0.3$ transfers) do not displace electrons far from the object; thus their effect on photometry is minimal (top plot in Fig. \[fig:raw4panel\]). We observe significant relative changes in position, size, and ellipticity, forming a plateau at low $\tau$, because even if captured electrons are released after the shortest amount of time, some of them will be counted one pixel off their origin. This is probably an artifact: We expect the effect of traps with $\tau\!<\!0.1$ to be different in a model simulating the transfer between the constituent electrodes of the physical pixels, rather than entire pixels.
Very slow traps ($\tau\!\ga\!30$ transfers) result in electrons being carried away over a long distance such that they can no longer be assigned to their original source image. Hence, they cause a charge loss compared to the CTI-free case. However, because charge is removed from nearly everywhere in the image, their impact on astrometry and morphology is small.
The most interesting behaviour is seen in the transitional region, for traps with a characteristic release time of a few transfer times. If electrons re-emerge several pixels from their origin, they are close enough to be still associated with their source image, but yield the strongest distortions in size and ellipticity measurements. This produces local maxima in the lower two panels of Fig. \[fig:raw4panel\]. If these measurements are scientifically important, performance can – to some degree – be optimised by adjusting a CCD’s clock speed or operating temperature to move release times outside the most critical range $1\!\la\!\tau\!\la\!10$ [@2012SPIE.8453E..17M].
In the star simulations (crosses in Fig. \[fig:raw4panel\] for degraded images, plus signs for CTI-corrected images), the CTI effects are generally smaller than for the faint galaxies, because the stars we simulate are brighter and thus experience less trailing *relative to their signal*. Still, we measure about the same spurious ellipticity $\Delta e_{1}$ and even a slightly higher relative size bias $\Delta R^{2}/R^{2}_{\mathrm{true}}$ for the stars. The explanation is that the quadratic terms in the second-order moments (eq. \[eq:qmom\]) allow for larger contributions from the outskirts of the object, given the right circumstances. In particular, the wider window size $w$ explains the differences between the galaxy and PSF simulations. Notably, the peak in the $\Delta e_{1}(\tau)$ and $\Delta R^{2}/R^{2}_{\mathrm{true}}(\tau)$ curves shifts from $\sim\!3\,\mbox{px}$ for the galaxies to $\sim\!9\,\mbox{px}$ for the stars. Because the wider window function gives more weight to pixels away from the centroid, the photometry becomes more sensitive to slower traps.
For a limited number of trap configurations, we have also tried varying the trap density or the number of transfers (i.e. object position on the CCD). In both cases, the dependence is linear. Overall, for all tested observables, the measurements in the degraded images (Fig. \[fig:raw4panel\], solid symbols) are well-fit by the empirical fitting function $$\begin{gathered}
f^{\mathrm{degrade}}(\rho,\tau)=\rho\Big(A+D_{\mathrm{a}}\,{\mathrm{atan}}{((\log{\tau}-D_{\mathrm{p}})/D_{\mathrm{w}})}+\\
G_{\mathrm{a}}\exp{(-(\log{\tau}-G_{\mathrm{p}})^2/2G_{\mathrm{w}}^2)}\Big),
\label{eq:adg}\end{gathered}$$ which combines an arc-tangent drop (“D”) and a Gaussian peak (“G”). The best-fitting amplitudes ($A$, $D_{\mathrm{a}}$ and $G_{\mathrm{a}}$), positions on the $\tau$ axis ($D_{\mathrm{p}}$ and $G_{\mathrm{p}}$) and widths ($D_{\mathrm{w}}$ and $G_{\mathrm{w}}$) are listed in Table \[tab:taufits\]. The same functional form provides a good match to the residuals after CTI correction, $f^{\mathrm{resid}}(\rho,\tau)$ (open symbols in Fig. \[fig:raw4panel\]). These residuals are caused by readout noise, which is not subject to CTI trailing, but undergoes CTI correction (see Sect. \[sec:rn\]).
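For reference, eq. (\[eq:adg\]) translates directly into code. In the Python transcription below the sign of the Gaussian peak is written out explicitly, and the base of the logarithm (not spelled out in the text) is assumed to be $10$.

```python
import numpy as np

def f_degrade(rho, tau, A, Da, Dp, Dw, Ga, Gp, Gw):
    """Empirical CTI-degradation curve of eq. (adg): an arctangent drop plus
    a Gaussian peak in log(tau), scaled by the trap density rho.  Parameter
    values for each observable are listed in Table (taufits)."""
    logt = np.log10(tau)                 # base of the log assumed here
    drop = Da * np.arctan((logt - Dp) / Dw)
    peak = Ga * np.exp(-(logt - Gp) ** 2 / (2.0 * Gw ** 2))
    return rho * (A + drop + peak)
```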
Predictive model for imperfect correction
-----------------------------------------
We set out to construct a predictive model $\Delta f^{\mathrm{Pr}}$ of $\Delta\eta$, the CTI effect in an observable relative to the underlying true value (eq. \[eq:deltadef\]). There are two terms, the CTI degradation (eq. \[eq:adg\]), and a second term for the effect of the ‘inverse’ CTI correction allowing for a slightly imperfect CTI model: $$\Delta f^\mathrm{Pr}=f^\mathrm{degr}(\rho,\tau)+f^\mathrm{correct}(\rho+\Delta\rho,\tau+\Delta\tau).
\label{eq:adg2}$$ Since CTI trailing perturbs an image by only a small amount, the correction acts on an almost identical image. Assuming the coefficients of eq. (\[eq:adg\]) to be constant, we get: $$\label{eq:prediction}
\Delta f^\mathrm{Pr}\approx f^\mathrm{degr}(\rho,\tau)-f^\mathrm{degr}(\rho+\Delta\rho,\tau+\Delta\tau)
+ f^{\mathrm{res}}(\rho,\tau),$$ where $f^{\mathrm{res}}(\rho,\tau)$ is approximately constant, and depends on the readout noise (see Section \[sec:zprn\]). We could expand this equation as a Taylor series, but the derivatives of $f$ do not provide much further insight.
Because eq. (\[eq:nenc\]) is non-linear in the number $n_{\mathrm{e}}$ of signal electrons, our observation (Sect. \[sec:fourpanels\]) that the *effects* of CTI behave linearly in $\rho$ is not a trivial result. Assuming this linearity in $\rho$, we can expand eq. (\[eq:prediction\]) and factor out $\rho$. The combined effect of several trap species $i$ with release timescales $\tau_{i}$ and densities $\rho_{i}$ can then be written as: $$\begin{gathered}
\label{eq:sumpred}
\Delta f^{\mathrm{Pr}}(\rho_{i}+\Delta\rho_{i},\tau_{i}+\Delta\tau_{i})\!
=\!\sum_{i}\rho_{i}f^{\mathrm{resid}}(\tau_{i}) + \\
\sum_{i}\left[\rho_{i}f(\tau_{i}) - (\rho_{i}+\Delta\rho_{i})f(\tau_{i}+\Delta\tau_{i})\right],\end{gathered}$$ in which we dropped the superscript of $f^{\mathrm{degr}}$ for the sake of legibility. We are going to test this model in the remainder of this study, where we consider a mixture of three trap species. We find eq. (\[eq:sumpred\]) to correctly describe measurements of spurious ellipticity $\Delta e_{1}$, as well as the relative bias in source size $\Delta R^{2}/R^{2}_{\mathrm{true}}$ and flux $\Delta F/F_{\mathrm{true}}$.
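Equation (\[eq:sumpred\]) itself is a one-liner once the fitted curves are available. In the sketch below, `f_degrade` and `f_resid` stand for the per-unit-density degradation and residual curves of eq. (\[eq:adg\]) and Table \[tab:taufits\]; the data structure is ours.

```python
def predicted_residual(species, f_degrade, f_resid):
    """Residual bias after correction with a perturbed trap model, eq. (sumpred).

    species   : list of dicts with keys 'rho', 'tau', 'drho', 'dtau'
    f_degrade : callable f(tau), degradation curve per unit trap density
    f_resid   : callable f(tau), readout-noise-induced residual per unit density
    """
    zeropoint = sum(s["rho"] * f_resid(s["tau"]) for s in species)
    mismatch = sum(s["rho"] * f_degrade(s["tau"])
                   - (s["rho"] + s["drho"]) * f_degrade(s["tau"] + s["dtau"])
                   for s in species)
    return zeropoint + mismatch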
Euclid as a concrete example {#sec:euclid}
============================
Context for this study
----------------------
To test the general prediction eq. (\[eq:sumpred\]), we now evaluate the effect of imperfect CTI correction in simulations of [*Euclid*]{} data, with a full [*Euclid*]{} CTI model featuring multiple trap species (see Sect. \[sec:blm\]). We call this the $\mathbf{M}+\delta\mathbf{M}$ experiment.
Akin to @2012MNRAS.419.2995P for *Gaia*, this study is useful in the larger context of the flow down of requirements from *Euclid*’s science goals [@2010arXiv1001.0061R] to its imaging capabilities [@2013MNRAS.429..661M] and instrument implementation [@2013MNRAS.431.3103C; @2014SPIE.9143E..0JC]. In particular, @2013MNRAS.429..661M highlight that the mission’s overall success will be determined both by its absolute instrumental performance and our knowledge about it. We now present the next step in the flow down: to what accuracy do we need to constrain the parameters of the @2014MNRAS.439..887M CTI model? Future work will then determine which calibration observations are required to achieve this accuracy.
While the final *Euclid* requirements remain to be confirmed, we adopt the current values as discussed by @2013MNRAS.431.3103C. Foremost, the “CTI contribution to the PSF ellipticity shall be $<\!1.1\times10^{-4}$ per ellipticity component”.
The *Euclid* VIS PSF model will bear an uncertainty due to CTI, which translates into an additional error on measured galaxy properties. For the bright stars (which have much higher $S/N$) tracing the PSF, @2013MNRAS.431.3103C quote a required knowledge of $R^{2}$ to a precision $\left|\sigma(R^{2})\right|\!<\!4\times10^{-4}$. We test this requirement with our second suite of simulations, containing $10^{6}$ realisations of a *Euclid* VIS PSF at $S/N\!\approx\!200$ (cf. Sect. \[sec:imsim\]).
In reality, CTI affects the charge transport in both CCD directions, serial and parallel. For the sake of simplicity, we only consider serial CTI, and thus underestimate the total charge trailing. There is no explicit photometric error budget allocated to CTI, while “ground data processing shall correct for the detection chain response to better than $0.7$ % error in photometry in the nominal VIS science images”.
CTI model for the *Euclid* VISual instrument {#sec:blm}
--------------------------------------------
--------------------------------------- ----------- ----------- -----------
**Baseline model** $i\!=\!1$ $i\!=\!2$ $i\!=\!3$
Trap density $\rho_{i}$ \[px$^{-1}$\] $0.02$ $0.03$ $0.95$
Release timescale $\tau_{i}$ \[px\] $0.8$ $3.5$ $20.0$
\[tab:traps\]
--------------------------------------- ----------- ----------- -----------
: The baseline trap model $\mathbf{M}$. The model includes a baseline well fill power of $\beta_{0}\!=\!0.58$.
![image](fig3a.pdf){width="88.1mm"} ![image](fig3b.pdf){width="88.1mm"}
Based on a suite of laboratory test data, we define a baseline model $\mathbf{M}$ of the most important CTI parameters ($\rho_{i}$, $\tau_{i}$, $\beta_{0}$). We degrade our set of $10^{7}$ simulated galaxies using $\mathbf{M}$. The $\mathbf{M}+\delta\mathbf{M}$ experiment then consists of correcting the trailing in the degraded images with slight alterations to $\mathbf{M}$. We investigate $>\!100$ correction models $\mathbf{M}+\delta\mathbf{M}$, resulting in an impressive $1.4\times10^{9}$ simulated galaxies used in this study.
Exposure to the radiation environment in space was simulated in the laboratory by irradiating a prototype of the e2v CCD273 to be used for *Euclid* VIS with a $10$ MeV equivalent fluence of $4.8\!\times\!10^{9}\,\mathrm{protons\,cm}^{-2}$ [@2014P1P; @2014V1V]. Characterisation experiments were performed in nominal VIS conditions of $153\,\mbox{K}$ temperature and a $70\,\mbox{kHz}$ readout frequency. We refer to Appendix \[sec:labdata\] for further details on the experiments and data analysis.
We emphasize that our results for $e_{1}$ pertain to faint and small galaxies, with an exponential disk profile (viz. Sect. \[sec:imsim\]), and placed at the maximum distance from the readout register ($y\!=\!2051$ transfers). Furthermore, we assume the level of radiation damage expected at the end of *Euclid*’s six-year mission. Because CTI trailing increases roughly linearly with time in orbit [cf. @2014MNRAS.439..887M], the CTI experienced by the typical faintest galaxy (i.e. at half the maximum distance to the readout register and three years into the mission) will be smaller by a factor of $4$ compared to the results quoted below.
Where not stated otherwise the nominal *Euclid* VIS rms readout noise of $4.5$ electrons was used. Table \[tab:traps\] summarises the baseline model $\mathbf{M}$ that was constructed based on these analyses. The default well fill power is $\beta_{0}\!=\!0.58$. Slow traps with $\tau_{3}\!=\!20$ clock cycles and $\rho_{3}\!=\!0.95$ dominate our baseline model, with small fractions of medium-fast ($\tau_{2}\!=\!3.5$, $\rho_{2}\!=\!0.03$) and fast ($\tau_{1}\!=\!0.8$, $\rho_{1}\!=\!0.02$) traps. Figure \[fig:models\] shows how trails change with changing trap parameters.
Readout noise impedes perfect CTI correction {#sec:zprn}
--------------------------------------------
### Not quite there yet: the zeropoint {#sec:zp}
First, we consider the ellipticities measured in the degraded and corrected images, applying the same baseline model in the degradation and correction steps. The reasons why this experiment does not retrieve the same corrected ellipticity $e_{\mathrm{corr}}$ as input ellipticity $e_{\mathrm{in}}$ are the Poissonian image noise and Gaussian readout noise. We quantify this in terms of spurious ellipticity $\Delta e\!=\!e_{\mathrm{corr}}-e_{\mathrm{in}}$, and shall refer to it as the *zeropoint* of the $\mathbf{M}+\delta\mathbf{M}$ experiment. The spurious ellipticity in the serial direction is $Z_{\mathrm{e_{1}}}\!=\!\Delta e_{1}\!=\!-0.00118\pm0.00060$. Thus, this experiment on worst-case galaxies using the current software exceeds the *Euclid* requirement of $\left|\Delta e_{\alpha}\right|\!<\!1.1\times10^{-4}$ by a factor of $\sim\!10$. With respect to the degraded image, $99.68$ % of the CTI-induced ellipticity is corrected. Virtually the same zeropoint, $\Delta e_{1}\!=\!-0.00118\pm0.00058$, is predicted by adding the contributions of the three species from single-species runs based on the full $10^{7}$ galaxies. We point out that these results on the faintest galaxies furthest from the readout register have been obtained using non-flight readout electronics [cf. @2014SPIE.9154E..0RS].
From our simulation of $10^{6}$ bright ($S/N\!\approx\!200$) stars, we measure the residual bias in source size $R^{2}$ after CTI correction of $Z_{\!R^{2}}\!=\!\Delta R^{2}/R^{2}_{\mathrm{true}}\!=\!(-0.00112\pm0.00030)$, in moderate excess of the requirement $\left|\Delta R^{2}/R^{2}_{\mathrm{true}}\right|\!<\!4\times10^{-4}$. While the $S/N$ of the star simulations is selected to represent the typical *Euclid* VIS PSF tracers, the same arguments of longest distance from the readout register and non-flight status of the electronics apply.
### The effect of readout noise {#sec:rn}
In Fig. \[fig:readnoise\], we explore the effect of varying the rms readout noise in our simulations about the nominal value of $4.5$ electrons (grey lines) discussed in Sect. \[sec:zp\]. We continue to use the baseline trap model for both degradation and correction. For the rms readout noise, a range of values between $0$ and $15$ electrons was assumed. For the faint galaxies (Fig. \[fig:readnoise\], left plot), we find $\Delta e_{1}$ to increase with readout noise in a way well described by a second-order polynomial. A similar, cubic fit can be found for $\Delta R^{2}/R^{2}_{\mathrm{true}}$ measured from the star simulations (Fig. \[fig:readnoise\], right plot), but with a hint towards saturation in the highest tested readout noise level.
The most important result from Fig. \[fig:readnoise\] is that in absence of readout noise, if the correction assumes the correct trap model $\mathbf{M}$, it removes the CTI perfectly, with $\Delta e_{1}\!=\!(0.3\pm5.9)\times 10^{-4}$ and $\Delta R^{2}/R^{2}_{\mathrm{true}}\!=\!(0.0\pm2.8)\times 10^{-4}$. The quoted uncertainties are determined by the $N\!=\!10^{7}$ ($10^{6}$) galaxy images we simulated. We conclude that the combination of our simulations and correction code pass this crucial sanity check. If the rms readout noise is $\lesssim\!3$ electrons ($\lesssim\!0.5$ electrons), the spurious ellipticity (the relative size bias) stays within *Euclid* requirements.
Sensitivity to imperfect CTI modelling {#sec:res}
--------------------------------------
### Morphology biases as a function of well fill power, and determining tolerance ranges {#sec:beta}
![image](fig4a.pdf "fig:"){width="85mm"} ![Sensitivity of the CTI-induced spurious ellipticity $\Delta e_{1}$ *(upper plot)* and the relative spurious source size $\Delta R^{2}/R^{2}_{\mathrm{true}}$ *(lower plot)* to the well fill power $\beta$. At the default value of $\beta\!=\!0.58$ (vertical grey line), the measurements deviate from zero due to readout noise, as indicated by arrows. The shaded regions around the measurements indicate the *Euclid* requirement ranges as a visual aid. Solid and dashed lines display quadratic (linear) fits to the measured $\Delta e_{1}(\beta)$ and $\Delta R^{2}(\beta)/R^{2}_{\mathrm{true}}$, respectively. We study the worst affected objects (at the end of the mission and furthest from the readout register) and the faintest *Euclid* galaxies. This plot also assumes CTI is calibrated from charge injection lines at full well capacity only. This will not be the case.[]{data-label="fig:mdm2_exp0"}](fig4b.pdf "fig:"){width="85mm"}
Now that we have assessed the performance of the correction using the same CTI model as for the degradation (given the specifications of our simulations), we turn to the $\mathbf{M}+\delta\mathbf{M}$ experiment for determining the sensitivities to imperfections in the CTI model. To this end, we assume the zeropoint offset $Z_{e_{1}}$ (or $Z_{\!R^{2}}$) of Sect. \[sec:zp\] to be corrected, and ‘shift’ the requirement range to be centred on it (see, e.g., Fig. \[fig:mdm2\_exp0\]).
Figure \[fig:mdm2\_exp0\] shows the $\mathbf{M}+\delta\mathbf{M}$ experiment for the well fill power $\beta$. If the degraded images are corrected with the baseline $\beta_{0}\!=\!0.58$, we retrieve the zeropoint measurement from Sect. \[sec:zp\]. For the $\mathbf{M}+\delta\mathbf{M}$ experiment, we corrected the degraded images with slightly different well fill powers $0.56\!\leq\!\beta\!\leq\!0.60$. The upper plot in Fig. \[fig:mdm2\_exp0\] shows the resulting $\Delta e_{1}$ in galaxies, and the lower plot $\Delta R^{2}/R^{2}_{\mathrm{true}}$ in stars. We find a strong dependence of both the spurious serial ellipticity $\Delta e_{1}$ and $\Delta R^{2}/R^{2}_{\mathrm{true}}$ on $\Delta\beta\!=\!\beta\!-\!\beta_{0}$.
In order to determine a tolerance range with respect to a CTI model parameter $\xi$ with baseline value $\xi_{0}$ (here, the well fill power $\beta$), we fit the measured bias $\Delta\eta$ (e.g. $\Delta e_{1}$, cf. eq. \[eq:deltadef\]) as a function of $\Delta\xi\!=\!\xi\!-\!\xi_{0}$. By assuming a polynomial $$\label{eq:polynom}
\Delta\eta(\Delta\xi) = Z_{\eta} + \sum_{j=1}^{J}{a_{j}(\Delta\xi)^{j}}$$ of low order $J$, we perform a Taylor expansion around $\xi_{0}$. In eq. \[eq:polynom\], $Z_{\eta}$ is the zeropoint (Sect. \[sec:zp\]) to which we have shifted our requirement margin. The coefficients $a_{j}$ are determined using the `IDL` singular value decomposition least-square fitting routine `SVDFIT`. For consistency, our fits include $Z_{\eta}$ as the zeroth order. In Fig. \[fig:mdm2\_exp0\], the best-fitting quadratic (linear) fits to $\Delta e_{1}$ ($\Delta R^{2}/R^{2}_{\mathrm{true}}$) are shown as a solid and dashed line, respectively.
In both plots, the data stick tightly to the best-fitting lines, given the measurement uncertainties. If the measurements were uncorrelated, this would be a suspiciously orderly trend. However, as already pointed out in Sect. \[sec:sigmas\], we re-use the same $10^{7}$ simulations with the same peaks and troughs in the noise in all data points shown in Figs. \[fig:mdm2\_exp0\] to \[fig:flux2\]. Hence, we do not expect data points to deviate from the regression lines to the degree their quoted uncertainties would indicate. As a consequence, we do not make use of the $\chi^{2}_{\mathrm{red}}\!\ll\!1$ that our fits commonly yield for any interpretation.
Because the interpretation of the reduced $\chi^{2}$ is tainted by the correlation between our data points, we use an alternative criterion to decide the degree $J$ of the polynomial: If the uncertainty returned by `SVDFIT` allows for a coefficient $a_{j}\!=\!0$, we do not consider this or higher terms. For the panels of Fig. \[fig:mdm2\_exp0\], this procedure yields $J\!=\!2$ ($J\!=\!1$). The different signs of the slopes are expected because $R^{2}$ appears in the denominator of eq. (\[eq:chidef\]).
Given a requirement $\Delta\eta_{\mathrm{req}}$, e.g. $\Delta e_{1,\mathrm{req}}\!=\!1.1\times10^{-4}$, the parametric form (eq. \[eq:polynom\]) of the sensitivity curves allows us to derive tolerance ranges to changes in the trap parameters. Assuming the zeropoint (the bias at the correct value of $\xi$) to be accounted for, we find the limits of the tolerance range as the solutions $\Delta\xi_{\mathrm{tol}}$ of $$\label{eq:tol}
\left|\sum_{j=1}^{J}{a_{j}(\Delta\xi)^{j}}\right|\!=\!\Delta\eta_{\mathrm{req}}$$ with the smallest values of $|\Delta\xi|$ on either side of $\Delta\xi\!=\!0$. Using eq. (\[eq:tol\]), we obtain $\Delta\beta_{\mathrm{tol}}\!=\!\pm(6.31\pm0.07)\times10^{-5}$ from the requirement on the spurious ellipticity $\Delta e_{1}\!<\!1.1\times10^{-4}$, for which the quadratic term is small. From the requirement on the relative size bias $\Delta R^{2}/R^{2}_{\mathrm{true}}\!<\!4\!\times\!10^{-4}$ we obtain $\Delta\beta_{\mathrm{tol}}\!=\!\pm(4.78\pm0.05)\times10^{-4}$. In other words, the ellipticity sets the more stringent requirement, and we need to be able to constrain $\beta$ to an accuracy of at least $(6.31\pm0.07)\times10^{-5}$ in absolute terms. This analysis assumes calibration by a single charge injection line at full well capacity, such that eq. (\[eq:nenc\]) needs to be extrapolated to lower signal levels. We acknowledge that *Euclid* planning has already adopted the use of additional, fainter charge injection lines, lessening the need to extrapolate.
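The tolerance computation of eqs. (\[eq:polynom\]) and (\[eq:tol\]) can be reproduced with standard tools; the sketch below uses `numpy.polyfit` in place of the `IDL` routine `SVDFIT`, with hypothetical variable names, and subtracts the zeropoint before fitting rather than fixing it as the zeroth-order coefficient.

```python
import numpy as np

def tolerance(dxi, bias, zeropoint, requirement, order=2):
    """Fit bias(dxi) ~ Z + sum_j a_j dxi**j and return the closest values of
    dxi on either side of zero at which the zeropoint-subtracted fit reaches
    the requirement (sketch of eqs. polynom and tol)."""
    coeffs = np.polyfit(dxi, np.asarray(bias) - zeropoint, order)  # highest power first
    candidates = []
    for sign in (+1.0, -1.0):
        shifted = coeffs.copy()
        shifted[-1] -= sign * requirement       # roots where the fit equals +/- requirement
        for r in np.roots(shifted):
            if abs(r.imag) < 1e-12:
                candidates.append(r.real)
    neg = max((c for c in candidates if c < 0), default=-np.inf)
    pos = min((c for c in candidates if c > 0), default=np.inf)
    return neg, pos
```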
### Ellipticity bias as a function of trap density {#sec:rho}
![image](fig5a.pdf){width="140mm"} ![image](fig5b.pdf){width="140mm"}
We now analyse the sensitivity of $\Delta e_{\alpha}$ towards changes in the trap densities. Figure \[fig:mdm2\_exp1\] shows the $\mathbf{M}+\delta\mathbf{M}$ experiment for one or more of the trap densities $\rho_{i}$ of the baseline model. The upper panel of Fig. \[fig:mdm2\_exp1\] presents the spurious ellipticity $\Delta e_{1}$ for five different branches of the experiment. In each of the branches, we modify the densities $\rho_{i}$ of one or several of the trap species. For example, the upward triangles in Fig. \[fig:mdm2\_exp1\] denote that the correction model applied to the images degraded with the baseline model used a density of the fast trap species $\rho_{1}\!+\!\Delta\rho_{1}$, tested at several values of $\Delta\rho_{1}$ with $0.9\!\leq\!1\!+\!\Delta\rho_{1}/\rho_{1}\!\leq\!1.1$. The densities of the other species are kept to their baseline values in this case. The other four branches modify $\rho_{2}$ (downward triangles); $\rho_{3}$ (squares); $\rho_{1}$ and $\rho_{2}$ (diamonds); and all three trap species (circles).
Because a value of $\Delta\rho_{i}\!\!=\!\!0$ reproduces the baseline model in all branches, all of them recover the zeropoint measurement of $\Delta e_{1}$ there (cf. Sect. \[sec:zp\]). Noticing that $e_{\mathrm{degr,1}}\!-\!e_{\mathrm{in,1}}\!<\!0$ for the degraded images relative to the input images, we explain the more negative $\Delta e_{1}$ for $\Delta\rho_{i}\!<\!0$ as the effect of undercorrecting the CTI. This applies to all branches of the experiment. Likewise, with increasing $\Delta\rho_{i}\!>\!0$, the residual undercorrection at the zeropoint decreases. Eventually, at even higher $\Delta\rho_{i}$, we overcorrect the CTI and measure $\Delta e_{1}\!>\!0$.
Over the range of $0.9\!\leq\!1\!+\!\Delta\rho_{1}/\rho_{1}\!\leq\!1.1$ we tested, $\Delta e_{1}$ responds linearly to a change in the densities. Indeed, our model (eq. \[eq:sumpred\]), which is linear in the $\rho_{i}$ and additive in the effects of the different trap species, provides an excellent description of the measured data, both for $\Delta e_{1}$ and $\Delta R^{2}/R^{2}_{\mathrm{true}}$ (Fig. \[fig:mdm2\_exp1\], lower panel). The lines in Fig. \[fig:mdm2\_exp1\] denote the model prediction from a simplified version of eq. (\[eq:sumpred\]), $$\label{eq:rhopred}
\Delta f^{\mathrm{Pr}}(\rho_{i}+\Delta\rho_{i})\!=\!
\sum_{i}\rho_{i}f^{\mathrm{resid}}(\tau_{i}) - \sum_{i}\Delta\rho_{i}\,f(\tau_{i})\,.$$ In eq. (\[eq:rhopred\]), we assumed the $\tau_{i}$ to be correct, i.e. $\Delta\tau_{i}\!=\!0$.
Next, we compute the tolerance $\Delta\rho_{i,\mathrm{tol}}/\rho$ by which, for each branch of the experiment, we might deviate from the correct trap model and still recover the zeropoint within the *Euclid* requirements of $\left|\Delta e_{\alpha,\mathrm{req}}\right|\!<\!1.1\times10^{-4}$ and $\left|\Delta R^{2}_{\mathrm{req}}/R^{2}_{\mathrm{true}}\right|\!<\!4\times10^{-4}$, respectively. Again, we calculate these tolerances about the zeropoints $Z\!=\!\sum_{i}\rho_{i}f^{\mathrm{resid}}(\tau_{i})$ (cf. eq. \[eq:rhopred\]), which we found to exceed the requirements in Sect. \[sec:zp\], but assume to be corrected for in this experiment.
In accordance with the linearity in $\Delta\rho_{i}$, applying the Taylor expansion recipe of Sect. \[sec:beta\], we find the data in Fig. \[fig:mdm2\_exp1\] to be well represented by first-order polynomials (eq. \[eq:polynom\]). The results for $\Delta\rho_{i,\mathrm{tol}}/\rho$ we obtain from eq. (\[eq:tol\]) are summarised in Table \[tab:tolerances\]. For all species, the constraints from $\Delta e_{1}$ for faint galaxies are tighter than the ones from $\Delta R^{2}/R^{2}_{\mathrm{true}}$ for bright stars.
Only considering the fast traps, $\rho_{1}$ can change by $0.84\pm0.33$% and still be within *Euclid* VIS requirements, *given the measured zeropoint has been corrected for*. While a tolerance of $0.39\pm0.06$% is found for $\rho_{2}$, the slow traps put a much tighter tolerance of $0.0303\pm0.0007$% on the density $\rho_{3}$. This is expected because slow traps amount to $95$% of all baseline model traps (Table \[tab:traps\]). Varying the density of all trap species in unison, we measure a tolerance of $0.0272\pm0.0005$%.
Computing the weighted mean of the $\Delta\tau\!=\!0$ intercepts in Fig. \[fig:mdm2\_exp1\], we derive better constraints on the zeropoints: $Z_{\mathrm{e_{1}}}\!=\!\Delta e_{1}\!=\!-0.00117\pm0.00008$ for the faint galaxies, and $Z_{\!R^{2}}\!=\!\Delta R^{2}/R^{2}_{\mathrm{true}}\!=\!-0.00112\pm0.00004$ for the bright stars.
### Ellipticity bias as a function of trap release time {#sec:tau}
![image](fig6a.pdf){width="140mm"} ![image](fig6b.pdf){width="140mm"}
--------------------------------------------------- ------------------------------------------------- ------------------------------------------------- ------------------------------------------------- ------------------------------------------------- ------------------------------------------------- ------------------------------------------------- --
branch $\xi$ $10^{4}\Delta\xi_{\mathrm{tol}}^{\mathrm{min}}$ $10^{4}\Delta\xi_{\mathrm{tol}}^{\mathrm{max}}$ $10^{4}\Delta\xi_{\mathrm{tol}}^{\mathrm{min}}$ $10^{4}\Delta\xi_{\mathrm{tol}}^{\mathrm{max}}$ $10^{4}\Delta\xi_{\mathrm{tol}}^{\mathrm{min}}$ $10^{4}\Delta\xi_{\mathrm{tol}}^{\mathrm{max}}$
$\eta\!=\!\Delta e_{1}$ $\eta\!=\!\Delta e_{1}$ $\eta\!=\!\Delta R^{2}/R^{2}_{\mathrm{true}}$ $\eta\!=\!\Delta R^{2}/R^{2}_{\mathrm{true}}$ $\eta\!=\!\Delta F/F_{\mathrm{true}}$ $\eta\!=\!\Delta F/F_{\mathrm{true}}$
galaxies galaxies stars stars galaxies galaxies
$\beta$ $-0.631\pm0.007$ $0.631\pm0.007$ $-4.78\pm0.05$ $4.78\pm0.05$ $-61.5\pm0.3$ $60.5\pm0.3$
$\rho_{1}$ $-84_{-33}^{+18}$ $84_{-18}^{+33}$ $-1250_{-1800}^{+450}$ $1250_{-450}^{+1800}$ $--$ $--$
$\rho_{2}$ $-39_{-6}^{+4}$ $39_{-4}^{+6}$ $-191_{-19}^{+16}$ $191_{-16}^{+19}$ $--$ $--$
$\rho_{3}$ $-3.03_{-0.07}^{+0.06}$ $3.03_{-0.06}^{+0.07}$ $-5.91\pm0.03$ $5.91\pm0.03$ $-267.5\pm1.6$ $267.5\pm1.6$
$\rho_{1,2}$ $-26_{-3}^{+2}$ $26_{-2}^{+3}$ $-166_{-14}^{+12}$ $166_{-12}^{+14}$ $--$ $--$
$\rho_{1,2,3}$ $-2.72\pm0.05$ $2.72\pm0.05$ $-5.71\pm0.03$ $5.71\pm0.03$ $-262.8\pm1.6$ $262.8\pm1.6$
$\tau_{1}$ $-193_{-23}^{+19}$ $193_{-19}^{+23}$ $-1310_{-150}^{+120}$ $1310_{-120}^{+150}$ $<-10000$ $>10000$
$\tau_{2}$ $-300_{-360}^{+90}$ $270_{-70}^{+150}$ $-270_{-70}^{+50}$ $270_{-50}^{+80}$ $<-10000$ $>10000$
$\tau_{3}$ $-4.00\pm0.04$ $4.00\pm0.04$ $-11.30\pm0.05$ $11.31\pm0.05$ $-1574_{-23}^{+24}$ $2320_{-90}^{+100}$
$\tau_{1,2}$ $-420_{-420}^{+150}$ $700_{-400}^{+900}$ $-220_{-50}^{+30}$ $230_{-40}^{+50}$ $<-10000$ $>10000$
$\tau_{1,2,3}$ $-4.03\pm0.04$ $4.04\pm0.04$ $-11.69\pm0.05$ $11.68\pm0.05$ $-1454_{-20}^{+19}$ $2020_{-60}^{+70}$
$\tau_{1,2,3}, \rho_{1,2,3}$, first pixel matched $-16.07_{-0.61}^{+0.57}$ $16.09_{-0.57}^{+0.61}$ $-16.17\pm0.09$ $16.21\pm0.09$ $-262.5\pm0.7$ $263.0\pm0.7$
\[tab:tolerances\]
--------------------------------------------------- ------------------------------------------------- ------------------------------------------------- ------------------------------------------------- ------------------------------------------------- ------------------------------------------------- ------------------------------------------------- --
Figure \[fig:mdm2\_exp2\] shows the $\mathbf{M}+\delta\mathbf{M}$ experiment for one or more of the release timescales $\tau_{i}$ of the trap model. The upper panel of Fig. \[fig:mdm2\_exp2\] presents the spurious ellipticity $\Delta e_{1}$ for five different branches of the experiment. In each of the branches, we modify the release timescales $\tau_{i}$ of one or several of the trap species by multiplying it with a factor $(\tau_{i}+\Delta\tau_{i})/\tau_{i}$.
As in Fig. \[fig:mdm2\_exp1\], the upward triangles in Fig. \[fig:mdm2\_exp2\] denote that the correction model applied to the images degraded with the baseline model used a release timescale of $\tau_{1}+\Delta\tau_{1}$ for the fast trap species. The release timescales of the other species are kept to their baseline values in this case. The other four branches modify $\tau_{2}$ (downward triangles); $\tau_{3}$ (squares); $\tau_{1}$ and $\tau_{2}$ (diamonds); and all three trap species (circles).
Because a value of $\Delta\tau\!=\!0$ reproduces the baseline model in all branches, all of them recover the zeropoint measurement of $\Delta e_{1}$ there. The three trap species differ in how the $\Delta e_{1}$ they induce varies as a function of $\Delta\tau_{i}$. On the one hand, for $\tau_{1}$, we observe more negative $\Delta e_{1}$ for $(\tau_{i}+\Delta\tau_{i})/\tau_{i}\!<\!1$, and less negative values for $(\tau_{i}+\Delta\tau_{i})/\tau_{i}\!>\!1$, with a null at $(\tau_{i}+\Delta\tau_{i})/\tau_{i}\!\approx\!1.5$. On the other hand, with the slow traps ($\tau_{3}$), we find $\Delta e_{1}\!>\!0$ for $(\tau_{i}+\Delta\tau_{i})/\tau_{i}\!\la\!0.99$, and more negative values than the zeropoint for $(\tau_{i}+\Delta\tau_{i})/\tau_{i}\!>\!1$. The curve of $\Delta e_{1}$ as a function of $(\tau_{2}+\Delta\tau_{2})/\tau_{2}$ shows a maximum at $(\tau_{i}+\Delta\tau_{i})/\tau_{i}\!\approx\!0.8$, with a weak dependence on $0.7\!\la\!(\tau_{i}+\Delta\tau_{i})/\tau_{i}\!\la\!1.1$.
Key to understanding the spurious ellipticity as a function of the $\tau_{i}$ is the dependence of $\Delta e_{1}(\tau)$ for a single trap species that we presented in Fig. \[fig:raw4panel\], and expressed by the empirical fitting function $f_{\mathrm{e_{\alpha}}}(\tau)$ (Eq. \[eq:adg\]) with the parameters quoted in Table \[tab:taufits\]. While the correction algorithm effectively removes the trailing when the true $\tau_{i}$ is used, the residual of the correction will depend on the difference between the $\Delta e_{\alpha}$ for $\tau_{i}$ and for the timescale $\tau_{i}+\Delta\tau_{i}$ actually used in the correction. This dependence is captured by the predictive model (Eq. \[eq:sumpred\]), which simplifies for the situation in Fig. \[fig:mdm2\_exp2\] ($\Delta\rho_{i}\!=\!0$) to $$\label{eq:taupred}
\Delta f^{\mathrm{Pr}}(\tau_{i}+\Delta\tau_{i})\!=\!Z+
\sum_{i}\rho_{i}\left[f(\tau_{i}) - f(\tau_{i}+\Delta\tau_{i})\right],$$ with $Z\!=\!\sum_{i}\rho_{i}f^{\mathrm{resid}}(\tau_{i})$ (lines in Fig. \[fig:mdm2\_exp2\]). In the branches modifying $\tau_{1}$ and/or $\tau_{2}$, but not $\tau_{3}$, the measurements over the whole range of $0.5\!\leq\!(\tau_{i}+\Delta\tau_{i})/\tau_{i}\!\leq\!1.6$ agree with the empirical model within their uncertainties. If $\tau_{3}$ is varied, Eq. (\[eq:taupred\]) overestimates $|\Delta e_{1}|$ significantly for $|\Delta\tau_{i}|\!>\!0.05\tau_{i}$. We discuss a possible explanation in Sect. \[sec:disc\]. Our empirical model provides a natural explanation for the maximum in $\Delta e_{1}(\tau_{2})$: Because $\tau_{2}\!=\!3.5$ is located near the peak in $f_{\mathrm{e_{1}}}(\tau)$, assuming $(\tau_{i}+\Delta\tau_{i})/\tau_{i}\!\leq\!0.8$ for correction means using a release time regime where $\Delta e_{1}(\tau)$ is still rising instead of falling. The correction software accounts for this; hence the spurious ellipticity from using the wrong release time scale shows the same maximum as $f_{\mathrm{e_{1}}}(\tau)$.
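To make the use of Eq. (\[eq:taupred\]) concrete, the sketch below evaluates it for an arbitrary trap mix. It is a minimal illustration only: the single-species curves `f` and `f_resid` stand in for the fitted $f_{\mathrm{e_{1}}}(\tau)$ and its post-correction residual, and their functional forms as well as the trap parameters are invented placeholders rather than the values used in this work.

```python
import numpy as np

def predicted_residual(rho, tau, dtau, f, f_resid):
    """Eq. (taupred): predicted spurious ellipticity when the correction uses
    release timescales tau + dtau instead of the true tau (densities unchanged)."""
    rho, tau, dtau = (np.asarray(a, float) for a in (rho, tau, dtau))
    zeropoint = np.sum(rho * f_resid(tau))              # Z: residual for a perfect trap model
    mismatch = np.sum(rho * (f(tau) - f(tau + dtau)))   # penalty for using the wrong timescales
    return zeropoint + mismatch

# Invented stand-ins for the fitted single-species curves of Eq. (adg):
f = lambda t: -4e-4 * (t / 4.0) * np.exp(1.0 - t / 4.0)   # peaks near t ~ 4 clock cycles
f_resid = lambda t: 0.05 * f(t)                           # assume ~95 % of the effect is removed

rho = np.array([0.02, 0.03, 0.95])          # placeholder trap densities
tau = np.array([0.8, 3.5, 20.0])            # placeholder release timescales
dtau = np.array([0.0, 0.0, -0.05 * 20.0])   # vary only the slow species by -5 %
print(predicted_residual(rho, tau, dtau, f, f_resid))
```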
Because $\tau_{2}$ is not located very close to the peak in $\Delta R^{2}/\Delta R^{2}_{\mathrm{true}}(\tau)$ (cf. Fig. \[fig:raw4panel\]), we do not see an extremum in the lower panel of Fig. \[fig:mdm2\_exp2\], which shows the sensitivity of the size bias in bright stars to variations in the $\tau_{i}$.
In order to compute the tolerances $\Delta\tau_{\mathrm{tol}}$ towards changes in the release timescales, we again employ a polynomial fit (eq. \[eq:tol\]). Evidently, the tolerances differ substantially between the $\tau_{i}$, again with narrower tolerance intervals from $\Delta e_{1}$ than from $\Delta R^{2}/\Delta R^{2}_{\mathrm{true}}$. Only for $\Delta\tau_{2}$, with its extremum of $\Delta e_{1}$ near the baseline value, do we find similar tolerances in both cases. However, even for the rare trap species $\tau_{1}$, the tolerance is only $\Delta\tau_{1,\mathrm{tol}}\!=\!(1.93\pm0.23)$ %. One needs to know the release timescale of the slow trap species to an accuracy of $(0.0400\pm0.0004)$ % to be able to correct it within *Euclid* VIS requirements. We find the same tolerance if all timescales are varied in unison.
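The tolerance intervals quoted here and in Table \[tab:tolerances\] follow from the polynomial fits of Eq. (\[eq:tol\]): the measured bias is fitted as a function of the parameter offset, and one reads off where it departs from the zeropoint by more than the requirement. The sketch below is schematic, with invented measurements; the requirement value of $1.1\times10^{-4}$ is the ellipticity requirement quoted in this paper, but the quadratic shape of the toy data is arbitrary.

```python
import numpy as np

def tolerance_range(x, y, requirement, deg=3):
    """Fit a polynomial y(x) and return the interval around x = 0 within which
    |y(x) - y(0)| stays below `requirement` (cf. Eq. tol)."""
    coeffs = np.polyfit(x, y, deg)
    xs = np.linspace(x.min(), x.max(), 100001)
    inside = np.abs(np.polyval(coeffs, xs) - np.polyval(coeffs, 0.0)) < requirement
    centre = int(np.argmin(np.abs(xs)))     # grid point closest to the zeropoint
    lo = hi = centre
    while lo > 0 and inside[lo - 1]:        # walk outward until the requirement is violated
        lo -= 1
    while hi < xs.size - 1 and inside[hi + 1]:
        hi += 1
    return xs[lo], xs[hi]

# Invented measurements of spurious ellipticity versus relative timescale offset:
dtau_rel = np.linspace(-0.5, 0.6, 12)
de1 = -1.2e-3 + 2.0e-3 * dtau_rel - 1.0e-3 * dtau_rel**2
print(tolerance_range(dtau_rel, de1, requirement=1.1e-4))   # roughly (-0.055, +0.055)
```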
### Combinations of timescales and densities yielding the same first trail pixel flux {#sec:sti}
![The same as Fig. \[fig:mdm2\_exp2\], but for $\Delta\tau_{i}\!<\!0$ combinations of timescales $\tau_{i}$ and densities $\rho_{i}$ that yield the same count rate in the first trail pixel as the baseline model. All trap species are modified in unison (large symbols and solid line). For comparison, small symbols and the dotted line repeat the result from Fig. \[fig:mdm2\_exp2\], where only the $\tau_{i}$ were modified, not the $\rho_{i}$. (Notice the different scale of the ordinates.) The lines show the predictive models (Eq. \[eq:sumpred\]). We study the worst affected objects (end of the mission, furthest from the readout register) and the faintest *Euclid* galaxies.[]{data-label="fig:mdm2_exp3"}](fig7a.pdf "fig:"){width="90mm"} ![The same as Fig. \[fig:mdm2\_exp2\], but for $\Delta\tau_{i}\!<\!0$ combinations of timescales $\tau_{i}$ and densities $\rho_{i}$ that yield the same count rate in the first trail pixel as the baseline model. All trap species are modified in unison (large symbols and solid line). For comparison, small symbols and the dotted line repeat the result from Fig. \[fig:mdm2\_exp2\], where only the $\tau_{i}$ were modified, not the $\rho_{i}$. (Notice the different scale of the ordinates.) The lines show the predictive models (Eq. \[eq:sumpred\]). We study the worst affected objects (end of the mission, furthest from the readout register) and the faintest *Euclid* galaxies.[]{data-label="fig:mdm2_exp3"}](fig7b.pdf "fig:"){width="90mm"}
Considering how trap parameters are constrained practically from Extended Pixel Edge Response (EPER) and First Pixel Response (FPR) data, it is instructive to consider combinations of trap release timescales $\tau_{i}$ and densities $\rho_{i}$ that yield the same number of electrons in the first pixel of the trail as the baseline model. This is interesting because, under realistic conditions, the first pixel of the trail will have the largest signal-to-noise ratio and will be most easily constrained. We thus perform an initial exploration of the parameter degeneracies. In our “first pixel matched” models, the effect of a given change in $\tau$ on the first trail pixel needs to be compensated by a change in $\rho$. Because a larger (smaller) $\tau$ means less (more) charge released close to the original pixel, the compensation requires a density factor $\Delta\rho_{i}\!<\!1$ for a timescale factor $\Delta\tau_{i}\!<\!1$, and $\Delta\rho_{i}\!>\!1$ for $\Delta\tau_{i}\!>\!1$. Only in the branches where we vary $\tau_{3}$ or all timescales together do we find the $\Delta\rho_{i}$ to differ noticeably from unity. For the latter two, they populate a range from $\Delta\rho_{i}\!=\!0.745$ for $\Delta\tau_{i}\!=\!0.7$ to $\Delta\rho_{i}\!=\!1.333$ for $\Delta\tau_{i}\!=\!1.4$.
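The near-unity density factors can be made plausible with a toy release model. The sketch below assumes that the first trail pixel receives a charge proportional to $\rho\,[\exp(-1/\tau)-\exp(-2/\tau)]$, i.e. purely exponential release with re-capture and the details of the clocking scheme ignored. This is a simplified stand-in for the full simulation used above, so the resulting factors only roughly resemble the quoted 0.745 and 1.333.

```python
import numpy as np

def first_pixel_fraction(tau):
    """Fraction of the captured charge released exactly one pixel behind the source,
    assuming simple exponential release with timescale tau (in clock cycles)."""
    return np.exp(-1.0 / tau) - np.exp(-2.0 / tau)

def matched_density_factor(tau, timescale_factor):
    """Density factor that keeps rho * first_pixel_fraction(tau) unchanged when
    tau is multiplied by `timescale_factor`."""
    return first_pixel_fraction(tau) / first_pixel_fraction(tau * timescale_factor)

tau_slow = 20.0                              # slow species of the baseline mix
for factor in (0.7, 1.0, 1.4):
    print(factor, round(matched_density_factor(tau_slow, factor), 3))
# prints ~0.72, 1.0, 1.37: a shorter timescale needs less density, a longer one more
```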
Figure \[fig:mdm2\_exp3\] shows the $\mathbf{M}+\delta\mathbf{M}$ experiment for all $\tau_{i}$ and $\rho_{i}$ (large symbols). Small symbols depict the alteration to $\tau_{i}$, but with the $\rho_{i}$ kept fixed, i.e. the same measurement as the open circles in Fig. \[fig:mdm2\_exp2\]. Compared to these, $\Delta e_{1}$ in faint galaxies (upper panel) is of opposite sign in the “first pixel matched” case, relative to the zeropoint. This can be understood as an effect of our baseline trap mix being dominated by slow traps, for which a small increase in $\tau$ leads to *less* CTI-induced ellipticity. The simultaneous increase in trap density effects *more* CTI-induced ellipticity, and this is the larger of the two terms, such that a change in sign ensues. The same holds for $\Delta R^{2}/R^{2}_{\mathrm{true}}$ in bright stars (lower panel of Fig. \[fig:mdm2\_exp3\]), but with inverted slopes compared to $\Delta e_{1}$.
Again using eq. (\[eq:tol\]), we compute the tolerance range for the changes to the $\tau_{i}$ in the “first pixel matched” case. (The respective changes to the $\rho_{i}$ are determined by the first pixel constraint.) Modifying all release time scales, we arrive at $\Delta\tau_{\mathrm{tol}}\!=\!0.16$ %. (Table \[tab:tolerances\]). This tolerance is wider than the $0.04$% for $\Delta e_{1}$ when only the $\tau_{i}$ are varied, again due to the different signs arising from variations to $\tau_{3}$ and $\rho_{3}$. By coincidence, we also arrive at $\Delta\tau_{\mathrm{tol}}\!=\!0.16$ % when repeating that test with the size bias measured in bright stars.
The black solid line in Fig. \[fig:mdm2\_exp3\] shows the predictive model (eq. \[eq:sumpred\]), taking into account the combined effect of the $\Delta\tau_{i}$ and $\Delta\rho_{i}$, giving the same first pixel flux. In both the $\tau_{i}$-only (dotted line) and the “first pixel matched” cases it matches the measurements only within a few percent of $(\tau_{i}+\Delta\tau_{i})/\tau_{i}\!=\!1$. Crucially, this mismatch only occurs for $\Delta e_{1}$ in faint galaxies, but not for $\Delta R^{2}/R^{2}_{\mathrm{true}}$ in bright stars.
We attribute this discrepancy to the uncertainties with which our measurements and modelling (Fig. \[fig:raw4panel\]) describe the underlying function $f_{\mathrm{e_{1}}}(\tau)$. The range $20\!\la\!\tau\!\la\!100$ is where the fitting function Eq. (\[eq:adg\]) deviates most from the observations in Fig. \[fig:raw4panel\]. The CTI correction effectively removes almost all CTI effects on photometry and morphology, leaving the residuals presented in Figs. \[fig:mdm2\_exp1\] to \[fig:flux2\] at least one order of magnitude smaller than the scales of the uncorrected CTI effects. Hence, a relatively small uncertainty in $f(\tau)$ causes a comparatively large mismatch with the data.
The cause of the uncertainty in the parameters of Eq. (\[eq:adg\]), shown in Table \[tab:taufits\], is twofold: First, there is uncertainty in the fit as such. Second, there is uncertainty due to the finite sampling of the $\Delta e_{\alpha}(\tau)$ and $\Delta F_{\mathrm{rel}}(\tau)$ curves. Running a denser grid in $\tau$ can remove the latter, but the former might be ultimately limited by our choice of the function (Eq. \[eq:adg\]), which is empirically motivated, not physically. We further discuss the limits of the predictive model in Sect. \[sec:disc\].
Residual flux errors after imperfect CTI correction {#sec:photo}
---------------------------------------------------
### Flux bias as a function of readout noise {#sec:fluxrn}
![Relative bias in RRG flux with respect to the true input flux, as a function of readout noise (*upper panel*) and well fill power $\beta$ (*lower panel*). Solid lines give the best-fit polynomial models. The grey-shaded *Euclid* requirement range is centred on zero for the readout noise plot, and on the zeropoint corresponding to the default readout noise for the $\beta$ plot. Measurement uncertainties are shown, but very small. We study the worst affected objects (end of the mission, furthest from the readout register) and the faintest *Euclid* galaxies.[]{data-label="fig:flux1"}](fig8a.pdf "fig:"){width="90mm"} ![Relative bias in RRG flux with respect to the true input flux, as a function of readout noise (*upper panel*) and well fill power $\beta$ (*lower panel*). Solid lines give the best-fit polynomial models. The grey-shaded *Euclid* requirement range is centred on zero for the readout noise plot, and on the zeropoint corresponding to the default readout noise for the $\beta$ plot. Measurement uncertainties are shown, but very small. We study the worst affected objects (end of the mission, furthest from the readout register) and the faintest *Euclid* galaxies.[]{data-label="fig:flux1"}](fig8b.pdf "fig:"){width="90mm"}
Given the default rms readout noise of $4.5$ electrons, we measure, after CTI correction, a flux bias of $\Delta F_{\mathrm{rel}}\!=\!\Delta F/F_{\mathrm{true}}\!=\!(-1.980\pm0.012)$ % relative to the true flux $F_{\mathrm{true}}$ of the input faint galaxy simulations, corresponding to $92.9$ % of the CTI-induced flux bias being corrected. The upper panel of Fig. \[fig:flux1\] shows the relative flux biases before and after correction as a function of rms readout noise. Without readout noise, the flux bias can be corrected perfectly ($\Delta F_{\mathrm{rel}}\!=\!(0.002\pm0.012)\times 10^{-2}$ after correction). With increasing readout noise, the flux bias worsens, in a way that can be fitted with a cubic polynomial in terms of readout noise. Comparing to the degraded images, we notice that the correction software applies the same amount of correction, independent of the readout noise. Because the mitigation algorithm in its current form does not include a readout noise model, this confirms our expectations.
We show the *Euclid* requirement on photometric accuracy as the grey-shaded area in Fig. \[fig:flux1\] (upper panel), centred on zero. The nominal readout noise case exceeds the requirement of $<\!0.7$ % photometric uncertainty for the faintest, worst-affected galaxies we study. However, the CTI-induced bias affects all VIS images, and would thus be calibrated out. The *Euclid* flux requirement can be understood as pertaining to *uncertainties*, not *biases*, in the photometric calibration. The uncertainty of the flux bias, $0.0012$ %, then makes only a tiny contribution to the photometric error budget. We now go on to study the sensitivity of the flux bias towards changes in the trap model.
### Flux bias as a function of well fill power $\beta$
The lower panel of Fig. \[fig:flux1\] shows how a change in well fill power $\beta$ alters the flux bias. If we correct the degraded images using a $\beta\!>\!\beta_{0}$, the model accounts for less CTI in small charge packages, i.e. less CTI in the image’s wings, which are crucial for both photometry and morphology (cf. Fig. \[fig:mdm2\_exp0\]). Hence, a $\beta\!>\!\beta_{0}$ leads to an undercorrection relative to the flux bias zeropoint $Z_{\mathrm{F}}$ (Sect. \[sec:fluxrn\]), while for $\beta\!-\!\beta_{0}\!\la\!-0.017$, the zero line is crossed and overcorrection occurs.
Although $\Delta F_{\mathrm{rel}}(\beta)$ in Fig. \[fig:flux1\] appears linear, using the criterion based on significant components (Sect. \[sec:rho\]), a quadratic is preferred, indicated by the solid line. Using eq. (\[eq:tol\]), we compute the tolerance range in $\beta$ given $\Delta F_{\mathrm{rel}}(\beta_{\mathrm{tol}})\!=\!0.007$, centred on $Z_{\mathrm{F}}$. Towards smaller well fill powers, we find $\Delta\beta_{\mathrm{tol}}^{\mathrm{min}}\!=\!-(6.15\pm0.03)\times 10^{-3}$, while towards larger $\beta$, we find $\Delta\beta_{\mathrm{tol}}^{\mathrm{max}}\!=\!(6.05\pm0.03)\times 10^{-3}$. Compared to the constraints on the knowledge of $\beta$ from $\Delta e_{1}$ derived in Sect. \[sec:beta\], these margins are $\sim\!100$ times wider.
### Flux bias as a function of trap densities {#sec:fluxrho}
The upper plot of Fig. \[fig:flux2\] shows the flux bias $\Delta F_{\mathrm{rel}}$ as a function of a change $\Delta\rho_{i}$ to the densities $\rho_{i}$ in the correction model, in analogy to Sect. \[sec:rho\]. Unless the density of the dominant trap species $\rho_{3}$ is modified, we measure only small departures from the zeropoint $Z_{\mathrm{F}}$. Given the high accuracy of the flux measurements, these departures are statistically significant, but they are negligible with respect to the *Euclid* requirement on flux. If all $\rho_{i}$ are varied in unison, the effect on $\Delta F_{\mathrm{rel}}$ is largest. A linear model using Eq. (\[eq:polynom\]) yields a tolerance of $\Delta\rho_{i}^{\mathrm{tol}}/\rho_{i}\!=\!\pm2.628\pm0.016$ %, wider than the tolerances for $\Delta e_{1}$ or $\Delta R^{2}/R^{2}_{\mathrm{true}}$ (Table \[tab:tolerances\]). The lines in the upper plot of Fig. \[fig:flux2\] show that the model (eq. \[eq:rhopred\]) matches our measurements well over the range in $\Delta\rho_{i}$ we tested.
### Flux bias as a function of release timescales
The lower plot of Fig. \[fig:flux2\] shows the flux bias $\Delta F_{\mathrm{rel}}$ as a function of a change $\Delta\tau_{i}$ in the correction model, as in Sect. \[sec:tau\]. As for varying the $\rho_{i}$ (Sect. \[sec:fluxrho\]), a change to only the fast and/or the medium traps yields only small departures from the zeropoint, such that we can bundle together all trap species for deriving a tolerance range. The respective measurements (black circles in Fig. \[fig:flux2\]) show a steep slope at $\Delta\tau_{i}\!<\!0$ that flattens out towards $\Delta\tau_{i}\!>\!0$. This can be explained by the saturation of $\Delta F_{\mathrm{rel}}$ found at large $\tau$ in Fig. \[fig:raw4panel\] and is confirmed by our model (eq. \[eq:taupred\]; dotted line in Fig. \[fig:flux2\]). Our prediction is offset from the measurement due to uncertainties in the modelling, but the slopes agree well.
Although polynomial fits using eq. (\[eq:polynom\]) warrant cubic terms in both cases, $\Delta F_{\mathrm{rel}}(\tau_{i}\!+\!\Delta\tau_{i})$ is much straighter in the “first pixel matched” case, where the $\rho_{i}$ are also altered (star symbols in Fig. \[fig:flux2\]; cf. Sect. \[sec:sti\]). The reason is that the slopes of $\Delta F_{\mathrm{rel}}(\rho_{i}\!+\!\Delta\rho_{i})$ and $\Delta F_{\mathrm{rel}}(\tau_{i}\!+\!\Delta\tau_{i})$ have the same sign and do not partially cancel each other out, as is the case for $\Delta e_{1}(\rho_{i}\!+\!\Delta\rho_{i})$ and $\Delta e_{1}(\tau_{i}\!+\!\Delta\tau_{i})$. Again, eq. (\[eq:sumpred\]) succeeds in predicting the measurements, despite offsets that are significant given the small uncertainties, but small compared with $\Delta F_{\mathrm{rel}}$ in the uncorrected images.
Using the cubic fits, we find the following wide tolerance ranges (eq. \[eq:tol\]): $\Delta\tau_{3,\mathrm{min}}^{\mathrm{tol}}/\tau_{3}\!=\!15.7\pm0.2$ % and $\Delta\tau_{3,\mathrm{max}}^{\mathrm{tol}}/\tau_{3}\!=\!23.2_{-0.9}^{+1.0}$ %. In the “first pixel matched” case, the intervals are considerably tighter, due to the contribution from the change in densities, with $\Delta\tau_{i,\mathrm{min}}^{\mathrm{tol}}/\tau_{i}\!=\!2.625\pm0.007$ % and $\Delta\tau_{i,\mathrm{max}}^{\mathrm{tol}}/\tau_{i}\!=\!2.630\pm0.007$ %. Again, the strictest constraints come from the ellipticity component $\Delta e_{1}$.
![*Upper panel:* The same as Fig. \[fig:mdm2\_exp1\], but showing the sensitivity of the measured flux bias $\Delta F/F_{\mathrm{true}}$ as a function of the relative change in trap densities $\rho_{i}$. *Lower panel:* The same as Fig. \[fig:mdm2\_exp2\], but showing the flux bias $\Delta F/F_{\mathrm{true}}$ as a function of the relative change in release timescales $\tau_{i}$. Star symbols and the solid line denote the “first pixel matched” model for all trap species. The lines in both panels show the predictive model (eq. \[eq:sumpred\]). We study the worst affected objects (end of the mission, furthest from the readout register) and the faintest *Euclid* galaxies.[]{data-label="fig:flux2"}](fig9a.pdf "fig:"){width="90mm"} ![*Upper panel:* The same as Fig. \[fig:mdm2\_exp1\], but showing the sensitivity of the measured flux bias $\Delta F/F_{\mathrm{true}}$ as a function of the relative change in trap densities $\rho_{i}$. *Lower panel:* The same as Fig. \[fig:mdm2\_exp2\], but showing the flux bias $\Delta F/F_{\mathrm{true}}$ as a function of the relative change in release timescales $\tau_{i}$. Star symbols and the solid line denote the “first pixel matched” model for all trap species. The lines in both panels show the predictive model (eq. \[eq:sumpred\]). We study the worst affected objects (end of the mission, furthest from the readout register) and the faintest *Euclid* galaxies.[]{data-label="fig:flux2"}](fig9b.pdf "fig:"){width="90mm"}
Discussion {#sec:disc}
==========
Limits of the predictive model
------------------------------
We measured tolerance ranges for changes in the $\rho_{i}$ and $\tau_{i}$ given the *Euclid* VIS requirements, and presented a model (Eq. \[eq:sumpred\]) capable of predicting these results based on the $\Delta\eta(\tau)$ curves (e.g. $\Delta e_{1}(\tau)$, Fig. \[fig:raw4panel\]), which are less expensive to obtain in terms of CPU time. However, as can be seen in particular in Fig. \[fig:mdm2\_exp3\], there is a mismatch between predictions and measurements for $\tau_{3}$, the most common baseline model trap species. As discussed in Sect. \[sec:sti\], this is caused by the finite sampling and the empirical nature of eq. (\[eq:adg\]).
Unfortunately, $f(\tau)$ and $f^{\mathrm{resid}}(\tau)$ will likely depend non-trivially on the source profile. Moreover, Eq. (\[eq:sumpred\]), if applied to ellipticity, treats it as additive. Where this approximation breaks down, i.e. when values that are not $\ll\!1$ are involved, the correct addition formula [e.g. @2006glsw.book.....S] must be used. This applies to CTI-induced ellipticity as well as to large intrinsic or shear components.
We tested that the dependence on $\beta$ (Fig. \[fig:mdm2\_exp0\]) can be included in the model as well, yielding $$\begin{gathered}
\label{eq:betapred}
\Delta f^{\mathrm{Pr}}(\beta,\rho_{i},\tau_{i})\!=\!\sum_{i}\rho_{i}f^{\mathrm{resid}}(\tau_{i})
+ [f(\beta\!+\!\Delta\beta)\!-\!f(\beta)] \\
\times\sum_{i}\left[\rho_{i}f(\tau_{i})-(\rho_{i}\!+\!\Delta\rho_{i})f(\tau_{i}\!+\!\Delta\tau_{i})\right].\end{gathered}$$
Applicability
-------------
Our findings pertain specifically to CTI correction employing the @2014MNRAS.439..887M iterative correction scheme, the current nominal procedure for *Euclid* VIS. Other algorithms for the removal of CTI trailing exist that might not be susceptible in the same way to readout noise. @2012MNRAS.419.2995P, investigating the full-forward approach designed for *Gaia*, did not observe a readout noise floor similar to the one we found. The same might hold for including CTI correction in a forward-modelling shear measurement pipeline [e.g. @2013MNRAS.429.2858M]. However, the *Gaia* method has not yet been applied to actual observational data, and the @2014MNRAS.439..887M scheme is the most accurate method for the CTI correction of real data today.
We remind the reader that our results on the zeropoints upon correcting with the correct model (Fig. \[fig:mdm2\_exp0\]) are dependent on the specifics of the small and faint galaxies we simulated. Further tests will determine if the large bias in $R^{2}$ persists under more realistic scenarios.
The narrow tolerances of $\Delta\rho/\rho\!=\!0.11$% and $\Delta\tau/\tau\!=\!0.17$% for the density and release timescale of the slow trap species might look daunting, but fortunately, due to the discernible trails caused by these traps, it is also the easiest species of which to determine the properties. Conversely, the $\Delta\rho/\rho\!=\!3$% and $\Delta\tau/\tau\!=\!8$% for the fast traps are much larger, but constraints on these traps will be harder to achieve from laboratory and in-flight calibration data. Considering the “first pixel matched” case, taking into account how trap parameters are determined from CTI trails, relaxes the tolerances from ellipticity but tightens the (much broader) tolerances from photometry, for our particular baseline trap mix. We notice that, while trap parameters are degenerate and Sect. \[sec:sti\] marks a first attempt to disentangle these parameters, each (degenerate) set of parameters can yield a viable CTI correction. Characterising the true trap species, however, is crucial with respect to device physics applications.
Source profile-dependence of the CTI-induced flux bias $\Delta F_{\mathrm{rel}}$ will lead to a sample of realistic sources (i.e. with a distribution of source profiles) showing a range in $\Delta F_{\mathrm{rel}}$ at any given readout noise level. Thus, the uncertainty in $\Delta F_{\mathrm{rel}}$ will be larger than the $10^{-4}$ we measured for our broad-winged, but homogeneous images in Sect. \[sec:fluxrn\]. More sophisticated simulations are necessary to assess the role of the variable CTI-induced flux bias in *Euclid*’s photometric error budget.
Conclusions and Outlook {#sec:conclusion}
=======================
Our goal was to bridge the divide between engineering measurements of CTI and its degradation of scientific measurements of galaxy shapes. We have developed a very fast algorithm to model CTI in irradiated e2v Technologies CCD273 devices, reproducing laboratory measurements performed at ESTEC. We take a worst-case approach and simulate the faintest galaxies to be studied by *Euclid*, with a broad-winged exponential profile, at the end of the mission and furthest away from the readout register. Our analysis is hindered by the divergent moments of the Marsaglia-Tin distribution that the measured ellipticity components, being ratios of noisy surface brightness moments, follow. We alleviate this problem by means of a Taylor expansion around the mean of the denominator, yielding an accuracy of $\sigma e_{\alpha}\!\approx\!10^{-4}$ by averaging over $10^{7}$ simulated images. We advocate that *Euclid* requirements be re-defined in a way that avoids ratios of noisy quantities.
Our detailed study of the trapping process has confirmed that not all traps are equally bad for shape measurement [@2010PASP..122..439R]: Traps with release timescales of a few clocking cycles cause the largest spurious ellipticity, while traps with longer $\tau_{i}$ yield the strongest flux bias.
The impact of uncertainties in the trap densities $\rho_{i}$ and time scales $\tau_{i}$ on CTI effects can be predicted to a satisfactory accuracy by a model that is linear in the $\rho_{i}$ and additive in the effects of different trap species. For future applications, this will allow us to reduce the simulational effort in CTI forecasts, calculating the effect of a mix of trap species from single-species data.
Informed by laboratory data of the irradiated CCD273, we have adopted a baseline trap model for *Euclid* VIS forecasts. We corrected images with a trap model $\mathbf{M}+\delta\mathbf{M}$ offset from the model $\mathbf{M}$ used for applying CTI. Thus we derived tolerance ranges for the uncertainties in the trap parameters, given *Euclid* requirements, positing that the required level of correction will be achieved. We conclude:
1.
: In the absence of readout noise, perfect CTI correction in terms of ellipticity and flux can be achieved.
2.
: Given the nominal rms readout noise of $4.5$ electrons, we measure $Z_{\mathrm{e_{1}}}\!=\!\Delta e_{1}\!=\!-1.18\times10^{-3}$ after CTI correction. This still exceeds the *Euclid* requirement of $\left|\Delta e_{1}\right|\!<\!1.1\times10^{-4}$. The requirement may still be met on the actual ensemble of galaxies *Euclid* will measure, since we consider only the smallest galaxies of $S/N\!=\!11$. Likewise, in $S/N\!=\!200$ stars, we measure a size bias of $1.12\times10^{-3}$, exceeding the requirement of $\left|\Delta R^{2}/R^{2}_{\mathrm{true}}\right|\!<\!4\times10^{-4}$.
3.
: The spurious ellipticity $\Delta e_{1}$ sensitively depends on the correct well fill power $\beta$, which we need to constrain to an accuracy of $\Delta\beta_{\mathrm{tol}}\!=\!(6.31\pm0.07)\times 10^{-5}$ to meet requirements. This assumes calibration by a single, bright charge injection line. The narrowest tolerance intervals are found for the dominant slow trap species in our baseline mix: $\Delta\rho_{\mathrm{tol}}/\rho_{0}\!=\!(\pm0.0272\pm0.0005)$%, and $\Delta\tau_{\mathrm{tol}}/\tau_{0}\!=\!(\pm0.0400\pm0.0004)$%.
4.
: Given the nominal rms readout noise, we measure a flux bias $Z_{\mathrm{F}}\!=\!\Delta F_{\mathrm{rel}}\!=\!(-1.980\pm0.012)$% after CTI correction, within the required $\left|\Delta F_{\mathrm{rel}}\right|\!<\!0.7$ % for the photometric uncertainty. More relevant for *Euclid* will be the uncertainty of this bias, which for realistic sources depends on their source profile. Further study is necessary here, as well as for the impact of CTI on photometric nonlinearity.
The final correction will only be as good as the on-orbit characterisation of physical parameters such as trap locations, densities and release times. The next steps building on this study should include: 1.) Researching and testing novel algorithms mitigating the effects of read noise as part of the CTI correction. 2.) Characterising the effect of realistic source profile distributions in terms of the photometric and nonlinearity requirements. 3.) Translating the tolerances in trap model parameters into recommendations of calibration measurements and their analysis, based on modelling the characterisation of trap species.
Plans for *Euclid* VIS calibration have already been updated to include charge injection at multiple levels such that $\beta$ does not need to be extrapolated from bright charge injection lines to faint galaxies. We will continue to liaise between engineers and scientists to determine how accurately it will be necessary to measure these physical parameters. The VIS readout electronics will be capable of several new in-orbit calibration modes such as trap pumping [@2012SPIE.8453E..17M] that are not possible with HST, and our calculations will advise what will be required, and how frequently they need to be performed, in order to adequately characterise the instrument for scientific success.
Acknowledgements {#acknowledgements .unnumbered}
================
This work used the DiRAC Data Centric system at Durham University, operated by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). This equipment was funded by BIS National E-infrastructure capital grant ST/K00042X/1, STFC capital grants ST/H008519/1 and ST/K00087X/1, STFC DiRAC Operations grant ST/K003267/1 and Durham University. DiRAC is part of the National E-Infrastructure.
HI thanks Lydia Heck and Alan Lotts for friendly and helpful system administration. HI acknowledges support through European Research Council grant MIRG-CT-208994. RM and HI are supported by the Science and Technology Facilities Council (grant numbers ST/H005234/1 and ST/N001494/1) and the Leverhulme Trust (grant number PLP-2011-003). JR was supported by JPL, run under a contract for NASA by Caltech. OC and OM acknowledge support from the German Federal Ministry for Economic Affairs and Energy (BMWi) provided via DLR under project no. 50QE1103.
The authors thank Henk Hoekstra, Peter Schneider, Yannick Mellier, Tom Kitching, Reiko Nakajima, Massimo Viola, and the members of *Euclid* CCD Working group, *Euclid* OU-VIS and OU-SHE groups for comments on the text and useful discussions.
Informing the baseline model with laboratory data {#sec:labdata}
=================================================
EPER/FPR data with irradiated CCD
---------------------------------
![CCD273 EPER trails in the serial (*upper plot*) and parallel (*lower plot*) directions. Shown here are the G quadrant trails at an input signal of $\sim\!1000$ electrons. Solid lines within the light and dark grey shaded areas denote the average and its uncertainty of the profile before and after correction for electronic effects. The best-fit model to the corrected trail is shown as a long-dashed line. For the purpose of illustration, the baseline trap model is shown in both plots as a short-dashed line. Building on the serial EPER model, the baseline model includes fast traps that are seen in quadrant F.[]{data-label="fig:trails"}](figA1a.pdf "fig:"){width="85mm"}\
![CCD273 EPER trails in the serial (*upper plot*) and parallel (*lower plot*) directions. Shown here are the G quadrant trails at an input signal of $\sim\!1000$ electrons. Solid lines within the light and dark grey shaded areas denote the average and its uncertainty of the profile before and after correction for electronic effects. The best-fit model to the corrected trail is shown as a long-dashed line. For the purpose of illustration, the baseline trap model is shown in both plots as a short-dashed line. Building on the serial EPER model, the baseline model includes fast traps that are seen in quadrant F.[]{data-label="fig:trails"}](figA1b.pdf "fig:"){width="85mm"}
![The well fill power $\beta$ measured from the integrated EPER CTI as a function of input signal. The *upper panel* shows the results from the serial EPER measurements, for which CTI is present in the F and G quadrants and can be corrected using the E and H quadrants. The *lower panel* shows the results from the parallel EPER measurements, for which CTI is present in the F, G, and H quadrants and can be corrected using the E quadrant. Open symbols denote the raw measurements, filled symbols the calibrated measurements from which the fits for $\beta$ are derived.[]{data-label="fig:labbeta"}](figA2a.pdf "fig:"){width="85mm"}\
![The well fill power $\beta$ measured from the integrated EPER CTI as a function of input signal. The *upper panel* shows the results from the serial EPER measurements, for which CTI is present in the F and G quadrants and can be corrected using the E and H quadrants. The *lower panel* shows the results from the parallel EPER measurements, for which CTI is present in the F, G, and H quadrants and can be corrected using the E quadrant. Open symbols denote the raw measurements, filled symbols the calibrated measurements from which the fits for $\beta$ are derived.[]{data-label="fig:labbeta"}](figA2b.pdf "fig:"){width="85mm"}
In this Appendix, we define a baseline CTI model for [*Euclid*]{} VIS. Our model is based upon laboratory tests of an irradiated e2v Technologies back-illuminated *Euclid* prototype CCD273, analysed at ESA/ESTEC [@2014P1P]. The device was irradiated at ambient room temperature using $10.4$ MeV protons, degraded from a $38.5$ MeV primary proton beam at the Kernfysisch Versneller Instituut, Groningen, in April 2013. Two different shielding masks were used [@2014P1P], resulting in the four quadrants of the CCD (called E, F, G, and H, and corresponding to the four output nodes) receiving different radiation doses. Half of each of the two quadrants called G and H received a $10$ MeV equivalent fluence of $4.8\times 10^{9}\,\mbox{protons}\,\mbox{cm}^{-2}$, representative of the predicted end-of-life (eol) proton fluence for *Euclid*. Half of the F quadrant was irradiated with a $10$ MeV equivalent fluence of $2.4\times 10^{9}\,\mbox{protons}\,\mbox{cm}^{-2}$, the $\mbox{eol}/2$ fluence. Neither the E quadrant, the serial register of the H quadrant, nor the readout nodes were irradiated [@2014V1V; @2014P1P].
At the ESA Payload Technology Validation section CCD test bench located at ESTEC [@2014V1V], the irradiated CCD273 was characterised at the *Euclid* VIS nominal conditions of $153\,\mbox{K}$ temperature and a $70\,\mbox{kHz}$ readout frequency. While a serial clocking scheme with the same width for each register phase at each step was used, minimising serial CTI, the nominal line/parallel transfer duration of $0.11\,\mbox{ms}$ was not optimised.
As part of the characterisation, a suite of extended pixel edge response (EPER) and first pixel response (FPR) experiments was performed, at different flatfield signal levels. For the purpose of deriving a fiducial baseline model of the charge traps present in the CCD273, we focus on the parallel and serial EPER data. To study the serial EPER (sEPER) CTI, a flatfield image is taken, then the half opposite to the readout direction is dumped; then the frame is read out. This yields a flatfield with a sharp trailing edge in flatfield signal. Electrons captured from flatfield pixels are then released into signal-less pixels, resulting in a CTI trail. Our parallel EPER (pEPER) tests make use of the parallel overscan region, providing a similar signal edge.
Each measurement was performed repeatedly, in order to gather statistics: $45$ times for the sEPER data at low signal, and $20$ times for the pEPER data. Raw trail profiles are extracted from the first $200$ pixels following the signal edge, taking the arithmetic mean over the independent lines perpendicular to the direction (serial or parallel) of interest. The same is done in the overscan region, unexposed to light, and the pixel-by-pixel median of this reference is subtracted as background from the raw trails. In the same way as the reference, the median flatfield signal is determined, and also corrected for the overscan reference. Finally, the trail (flatfield signal) at zero flatfield exposure time is subtracted from the trails (flatfield signals) at exposure times $>\!0$.
Figure \[fig:trails\] shows the resulting “uncalibrated” trail profiles for the sEPER (upper panel) and pEPER (lower panel) measurements in the G quadrant (eol radiation dose), at a flatfield exposure time corresponding to an average of $1018$ signal electrons per pixel. These are the upper solid lines, with light grey shading denoting the propagated standard errors from the repeated experiments. Effects in the readout electronics mimic CTI. We correct for the electronic effect by subtracting the average trail in the unirradiated quadrants (E for pEPER, and E and H for sEPER). The resulting “calibrated” trail profiles and their uncertainties are presented as the lower solid lines and dark grey shadings in Fig. \[fig:trails\]. The calibration makes only a small correction to the sEPER trail, which is dominated by slow traps, yielding a significant signal out to $\sim\!60$ pixels. By contrast, the electronic effect accounts for $1/3$ of the uncalibrated pEPER trail even in the first pixel, and for all of it beyond the tenth. Thus the $S/N$ in the calibrated trail is much lower.
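For clarity, the reduction described above can be condensed into a few lines of array code. This is a schematic re-implementation rather than the actual ESTEC analysis: gain correction, cosmetics and the exact cut-out geometry are omitted, and the array shapes are assumptions.

```python
import numpy as np

def reduce_trail(trails, reference, zero_exposure):
    """Schematic EPER trail reduction.

    trails        : (n_rep, n_lines, n_pix) raw cut-outs behind the signal edge
    reference     : (n_rep, n_lines, n_pix) cut-outs from a signal-free overscan region
    zero_exposure : already-reduced trail measured at zero flatfield exposure time
    """
    raw = trails.mean(axis=(0, 1))                  # arithmetic mean over repeats and lines
    background = np.median(reference, axis=(0, 1))  # pixel-by-pixel median of the reference
    return raw - background - zero_exposure

# "Calibrated" trail: subtract the electronic effect measured in the unirradiated
# quadrants (E and H for serial EPER), which mimics CTI:
#   calibrated_G = reduced_G - 0.5 * (reduced_E + reduced_H)
```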
The well fill power $\beta$
---------------------------
--------------------------------------- ----------- ----------- -----------
best-fit sEPER model $i\!=\!1$ $i\!=\!2$ $i\!=\!3$
Trap density $\rho_{i}$ \[px$^{-1}$\] $0.01$ $0.03$ $0.90$
Release timescale $\tau_{i}$ \[px\] $0.8$ $3.5$ $20.0$
best-fit pEPER model $i\!=\!1$ $i\!=\!2$ $i\!=\!3$
Trap density $\rho_{i}$ \[px$^{-1}$\] $0.13$ $0.25$ $--$
Release timescale $\tau_{i}$ \[px\] $1.25$ $4.4$ $--$
\[tab:models\]
--------------------------------------- ----------- ----------- -----------
: The same as Table \[tab:traps\], but for the best-fit models shown in Fig. \[fig:trails\]. The baseline well fill power is $\beta_{0}\!=\!0.58$.
In a volume-driven CTI model, the cloud of photoelectrons in any given pixel is assumed to fill up a height within the silicon that increases as electrons are added (Eq. \[eq:nenc\]). The growth of the cloud volume is governed by the term $\left(\frac{n_{\mathrm{e}}}{w}\right)^{\beta}\sum_{i}\rho_{i}$ in Eq. (\[eq:nenc\]), with the full-well depth $w\!=\!84700$ limiting the maximum number of electrons in a pixel. There is no supplementary buried channel in the CCD273, which for *HST*/ACS leads to the first $\sim100$ electrons effectively occupying zero volume [@2010MNRAS.401..371M].
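As a numerical illustration of this term, the snippet below evaluates the charge exposed to capture at the signal levels relevant here, assuming the capture per pixel reduces to $(n_{\mathrm{e}}/w)^{\beta}\sum_{i}\rho_{i}$ (a simplification of Eq. \[eq:nenc\] in which any geometric prefactors are absorbed into the densities); the densities used are placeholders close to the baseline mix.

```python
def exposed_to_capture(n_e, rho, beta=0.58, w=84700.0):
    """Electrons of a charge cloud of size n_e exposed to trapping per pixel,
    assuming the volume-driven scaling (n_e / w)**beta * sum(rho)."""
    return (n_e / w) ** beta * sum(rho)

for n_e in (100, 360, 1000):
    print(n_e, round(exposed_to_capture(n_e, rho=[0.02, 0.03, 0.95]), 3))
```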
![image](figA3.pdf){width="150mm"}
We use measurements of the integrated EPER as a function of input signal to constrain the well fill power $\beta$ of the trapping model. Our simulated galaxies are faint, so we restrict ourselves to the four lowest signal levels measured in the laboratory, with up to $\sim\!1000$ electrons. The input signal is measured as the average count difference between the flatfield and overscan regions, corrected for the CCD gain.
Figure \[fig:labbeta\] shows the CTI trails from Fig. \[fig:trails\], integrated over the first $12$ pixels. We checked that integrating over up to the full overscan region length of $200$ pixels does not change the results drastically. In the sEPER data (upper panel of Fig. \[fig:labbeta\]), the unirradiated quadrants E and H (open squares and diamonds) exhibit very small trail integrals (caused by the readout electronics), one order of magnitude smaller than in the irradiated quadrants F and G (open circle and triangle). Hence, calibrating out the instrumental effect by subtracting the arithmetic average of the E and H quadrants yields only a small correction to the F and G trail integrals. To these calibrated sEPER measurements (filled circle and triangle), we fit linear relations in log-log space using the `IDL fitexy` routine and measure $\beta_{\mathrm{F,cal}}\!=\!0.49\pm0.04$ and $\beta_{\mathrm{G,cal}}\!=\!0.58\pm0.03$.
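Since the captured (and hence re-emitted) charge scales as $n_{\mathrm{e}}^{\beta}$ in this model, $\beta$ is simply the slope of a straight line fitted to the logarithms of trail integral and input signal. A rough sketch with invented numbers; unlike the `fitexy` fit used above, `np.polyfit` ignores the uncertainties on both axes:

```python
import numpy as np

signal = np.array([88.0, 200.0, 360.0, 1018.0])    # invented signal levels [electrons]
integral = np.array([1.2, 1.8, 2.7, 4.8])          # invented calibrated trail integrals

slope, intercept = np.polyfit(np.log10(signal), np.log10(integral), 1)
print("beta ~", round(slope, 2))                    # ~0.57 for these toy numbers
```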
We repeat this procedure for the pEPER measurements, where the unirradiated E quadrant shows an EPER integral similar to those of the irradiated F, G, and H quadrants (lower panel of Fig. \[fig:labbeta\]). Thus, the pEPER and sEPER integrals may yield similar values as a function of signal, but for pEPER the low $S/N$ ($\ll\!1$) causes large uncertainties. Consequently, $\beta$ is not well constrained, with $\beta_{\mathrm{F,cal}}\!=\!0.66\pm0.53$, $\beta_{\mathrm{G,cal}}\!=\!0.61\pm0.36$, and $\beta_{\mathrm{H,cal}}\!=\!0.61\pm0.89$, but they agree with the sEPER results.
In conclusion, we adopt a baseline well-fill power of $\beta_{0}\!=\!0.58$ for our further tests, based on the precise sEPER result for the full radiation dose.
From trail profiles to trap parameters {#sec:baseline}
--------------------------------------
To constrain the trap release time-scales $\tau_{i}$ and trap densities $\rho_{i}$, we make use of the two signal levels of $\sim\!360$ electrons and $\sim\!1000$ electrons that bracket the number of electrons we expect to be found in a typical faint *Euclid* galaxy. These are the two highest data points in Fig. \[fig:labbeta\]. We take the average, measured, calibrated trails from the irradiated quadrants (examples for the G quadrant are presented in Fig. \[fig:trails\]) and compare them to the output that a one-dimensional version of our @2014MNRAS.439..887M clocking routine produces for given trap densities $\rho_{i}$ and release timescales $\tau_{i}$, under circumstances close to the laboratory data (i.e. a $200$ pixel overscan region following a $2048$ pixel flatfield column of $1018$ signal electrons). The fitting is performed using the `MPFIT` implementation of the Levenberg-Marquardt algorithm for nonlinear regression [@2009ASPC..411..251M; @1978LNM..630..105M].
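The structure of such a fit can be prototyped with a generic least-squares routine. The sketch below replaces the one-dimensional clocking code and `MPFIT` by `scipy.optimize.curve_fit` and a plain sum of exponentials; the amplitudes are only proportional to the trap densities and the synthetic “measured” trail is invented, so this illustrates the correlated-parameter problem rather than reproducing our fitting setup.

```python
import numpy as np
from scipy.optimize import curve_fit

def trail_model(k, a1, t1, a2, t2, a3, t3):
    """Charge released k pixels behind the signal edge: sum of three exponentials."""
    return a1 * np.exp(-k / t1) + a2 * np.exp(-k / t2) + a3 * np.exp(-k / t3)

k = np.arange(1, 121, dtype=float)                        # trail pixels behind the edge
truth = trail_model(k, 5.0, 0.8, 8.0, 3.5, 40.0, 20.0)    # invented noiseless trail
noisy = truth + np.random.default_rng(1).normal(0.0, 0.3, k.size)

p0 = (1.0, 1.0, 5.0, 4.0, 30.0, 15.0)                     # starting guesses for three species
bounds = ([0, 0.1, 0, 0.1, 0, 0.1], [np.inf, 200, np.inf, 200, np.inf, 200])
popt, pcov = curve_fit(trail_model, k, noisy, p0=p0, bounds=bounds)
print(np.round(popt, 2))   # amplitudes and timescales are strongly correlated,
                           # so the recovered values carry large, correlated uncertainties
```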
Fitting a sum of exponentials is remarkably sensitive to noise in the data because the parameters ($\tau_{i}$ and $\rho_{i}$) we are probing are intrinsically correlated. We assess the robustness of our results by repeating the fit not only for the two (three) irradiated sEPER (pEPER) quadrants at two signal levels, but also for the wide range of trail lengths ($60\!<\!K\!<\!150$) we consider, and with and without adding a constant term.
There are several possible trap species, as defined by their $\tau_{i}$, that show up in our data set. We rule out those of very low densities and consider the frequent “species” whose time-scales are within a few percent of each other as one. Still, this leaves us with more than one competing family of trap species that nevertheless give similar trails in some of the quadrant/signal combinations. Because, at this stage, our goal is to derive *a plausible baseline model* rather than pinpointing the correct trap species, we filter for the most common $\tau_{i}$ and give precedence to the higher-$S/N$ data (sEPER, end-of-life dose, $1000$ signal electrons). The resulting best-fit models are shown in Table \[tab:models\] and Fig. \[fig:trails\]. The actual baseline model (Table \[tab:traps\]; short-dashed line in Fig. \[fig:trails\]) includes additional fast traps seen in the lower-$S/N$ data. We raise the density from $0.94$ traps per pixel to a mnemonic total of $1$ trap per pixel at end-of-life dose. More refined methods will be used to determine the trap species in a more detailed analysis of irradiated CCD273 data.
Because only $464$ pixels of the serial register in the test device were irradiated, the effective density of charge traps an electron clocked through it experiences is smaller by a factor of $464/2051$ than the actual trap density corresponding to the end-of-life radiation dose that was applied. We correct for this by quoting results that have been scaled up by a factor of $2051/(464\times0.94)\!\approx\!4.155$.
Example CTI trails {#sec:trailexamples}
------------------
Figure \[fig:models\] shows, for the largest deviations from the baseline trap model we consider, their effect on the shape of the CTI trails. Using our CTI model, we simulated the trail caused by a single pixel containing a signal of $\sim\!1000$ electrons, comparable to a hot pixel in actual CCD data.
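A toy one-dimensional clocking calculation already reproduces the qualitative shape of such hot-pixel trails. The sketch below assumes that every pixel the charge packet traverses captures $\rho_{i}(n_{\mathrm{e}}/w)^{\beta}$ electrons per species (traps empty beforehand, signal depletion and re-capture from the trail neglected) and releases them exponentially into the following pixels; it is a qualitative illustration only, not the @2014MNRAS.439..887M clocking code used elsewhere in this work.

```python
import numpy as np

def toy_trail(n_e, n_transfers, rho, tau, beta=0.58, w=84700.0, n_trail=60):
    """Toy CTI trail behind a single bright pixel after n_transfers parallel transfers."""
    rho, tau = np.asarray(rho, float), np.asarray(tau, float)
    captured = rho * (n_e / w) ** beta                      # electrons per species and pixel
    j = np.arange(1, n_trail + 1, dtype=float)[:, None]     # trail pixel index
    released = np.exp(-(j - 1.0) / tau) - np.exp(-j / tau)  # released j transfers after capture
    return n_transfers * (captured * released).sum(axis=1)

trail = toy_trail(n_e=1000, n_transfers=2051,
                  rho=[0.02, 0.03, 0.95], tau=[0.8, 3.5, 20.0])
print(trail[:5].round(1))   # leading trail pixels; the dense slow species dominates this mix
```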
\[lastpage\]
|
Q:
Is it true that $A \cup (B - C) = (A \cup B) - (A \cup C)$?
I am brain farting on this question pretty hard, I'm asked to determine if
$$A \cup (B - C) = (A \cup B) - (A \cup C)$$
That is, determine if this set equality is true, and if not, which inclusions are true. I was able to show that equality fails by showing '$\subset$' does not hold using a relatively simple example, but I am having issues with opposite inclusion. My attempts began by assuming I had an arbitrary element $x \in (A \cup B) - (A \cup C)$ and tried to work my way down, but I haven't had much success.
Any help or hints would be appreciated, for I am too stubborn to move on to the next part without finishing this one.
A:
You started out right.
Suppose that $x\in(A\cup B)\setminus(A\cup C)$. Then by definition $x\in A\cup B$, and $x\notin A\cup C$. Since $x\notin A\cup C$, and $A\subseteq A\cup C$, you know that $x\notin A$. Thus, the only way to have $x\in A\cup B$ is to have $x\in B$. Now recall that $x\notin A\cup C$; this also implies that $x\notin C$, so $x\in B\setminus C$, and therefore $x\in A\cup(B\setminus C)$. Thus, $(A\cup B)\setminus(A\cup C)\subseteq A\cup(B\setminus C)$.
|
FILED
NOT FOR PUBLICATION JUL 24 2012
MOLLY C. DWYER, CLERK
UNITED STATES COURT OF APPEALS U.S. COURT OF APPEALS
FOR THE NINTH CIRCUIT
SHANG ZHE PIAO, No. 10-70568
Petitioner, Agency No. A098-660-735
v.
MEMORANDUM *
ERIC H. HOLDER, Jr., Attorney General,
Respondent.
On Petition for Review of an Order of the
Board of Immigration Appeals
Submitted July 17, 2012 **
Before: SCHROEDER, THOMAS, and SILVERMAN, Circuit Judges.
Shang Zhe Piao, a native and citizen of China, petitions for review of the Board of Immigration Appeals’ (“BIA”) order denying his motion to reopen removal proceedings based on ineffective assistance of counsel. We have jurisdiction under 8 U.S.C. § 1252. We review for abuse of discretion the denial of a motion to reopen, and review de novo claims of due process violations. Iturribarria v. INS, 321 F.3d 889, 894 (9th Cir. 2003). We deny the petition for review.
The BIA did not abuse its discretion in denying Piao’s motion to reopen on the ground that he presented insufficient evidence to establish prejudice resulting from the alleged errors of his former counsel. See id. at 900-02 (requiring prejudice to prevail on an ineffective assistance claim).
In light of our disposition, we need not address Piao’s remaining contention that the BIA erred in requiring him to comply with Matter of Lozada, 19 I. & N. Dec. 637 (BIA 1988).
PETITION FOR REVIEW DENIED.
*
This disposition is not appropriate for publication and is not precedent except as provided by 9th Cir. R. 36-3.
**
The panel unanimously concludes this case is suitable for decision without oral argument. See Fed. R. App. P. 34(a)(2).
|
/*
* Copyright 2017-2018 B2i Healthcare Pte Ltd, http://b2i.sg
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.b2international.snowowl.snomed.datastore.request;
import com.b2international.commons.exceptions.BadRequestException;
import com.b2international.snowowl.core.date.DateFormats;
import com.b2international.snowowl.core.date.EffectiveTimes;
import com.b2international.snowowl.core.domain.TransactionContext;
import com.b2international.snowowl.snomed.common.SnomedRf2Headers;
import com.b2international.snowowl.snomed.core.domain.refset.SnomedRefSetType;
import com.b2international.snowowl.snomed.core.domain.refset.SnomedReferenceSet;
import com.b2international.snowowl.snomed.core.store.SnomedComponents;
import com.b2international.snowowl.snomed.core.store.SnomedModuleDependencyReferenceSetMemberBuilder;
import com.google.common.base.Strings;
/**
* @since 5.0
*/
final class SnomedModuleDependencyMemberCreateDelegate extends SnomedRefSetMemberCreateDelegate {
SnomedModuleDependencyMemberCreateDelegate(SnomedRefSetMemberCreateRequest request) {
super(request);
}
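/**
 * Creates a module dependency reference set member from the request: validates the
 * reference set type and the referenced component, checks that the module and the
 * referenced component exist, resolves the optional source/target effective time
 * properties, and returns the identifier of the member added to the transaction.
 */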
@Override
public String execute(SnomedReferenceSet refSet, TransactionContext context) {
checkRefSetType(refSet, SnomedRefSetType.MODULE_DEPENDENCY);
checkReferencedComponent(refSet);
checkComponentExists(refSet, context, SnomedRf2Headers.FIELD_MODULE_ID, getModuleId());
checkComponentExists(refSet, context, SnomedRf2Headers.FIELD_REFERENCED_COMPONENT_ID, getReferencedComponentId());
SnomedModuleDependencyReferenceSetMemberBuilder builder = SnomedComponents.newModuleDependencyMember()
.withId(getId())
.withActive(isActive())
.withReferencedComponent(getReferencedComponentId())
.withModule(getModuleId())
.withRefSet(getReferenceSetId());
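// The optional source/target effective times arrive as short-format date strings;
// failures in EffectiveTimes.parse surface as IllegalArgumentExceptions and are
// converted to BadRequestExceptions so that the client receives a validation error.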
try {
if (hasProperty(SnomedRf2Headers.FIELD_SOURCE_EFFECTIVE_TIME)) {
String sourceEffectiveTime = getProperty(SnomedRf2Headers.FIELD_SOURCE_EFFECTIVE_TIME);
builder.withSourceEffectiveTime(Strings.isNullOrEmpty(sourceEffectiveTime) ? null : EffectiveTimes.parse(sourceEffectiveTime, DateFormats.SHORT));
}
} catch (IllegalArgumentException e) {
if (e.getMessage().contains("Error while parsing date")) {
throw new BadRequestException(e.getMessage());
}
}
try {
if (hasProperty(SnomedRf2Headers.FIELD_TARGET_EFFECTIVE_TIME)) {
String targetEffectiveTime = getProperty(SnomedRf2Headers.FIELD_TARGET_EFFECTIVE_TIME);
builder.withTargetEffectiveTime(Strings.isNullOrEmpty(targetEffectiveTime) ? null : EffectiveTimes.parse(targetEffectiveTime, DateFormats.SHORT));
}
} catch (IllegalArgumentException e) {
if (e.getMessage().contains("Error while parsing date")) {
throw new BadRequestException(e.getMessage());
}
}
return builder.addTo(context).getId();
}
}
|
Q:
How to do a MySQL select query on two tables linked by another table
Suppose I have a table whose function is specifically to link two other tables in terms of OOP.
Suppose that I have two tables: one for person's name and another one for phone numbers:
Table 1:
id person's name
1 John
2 Smith
Table 2:
id Phone number
5 23424224
6 23424242
And then I have a third table that links the person and their respective phone numbers:
Table 3:
id person-id phone-number-id
1 1 5
2 2 6
Hence John has phone number 23424224 and Smith has phone number 23424242.
And I want to run an SQL query to fetch all persons from Table 1 whose phone numbers start with, let's say, (234).
How would I go about linking the select queries within this table structure...what query would I run?
A:
First, the only reason to do that table is if you have a many-to-many relation.
While a person can have many phone numbers, can really one phone number have many persons?
If that is true, then your schema implements that requirement, but that seems a little over-engineered to me :-)
Second, this is a fairly simple join. What you want to do is first select out the phone numbers in question, then given that, select out the person IDs from the third table, then given that, select the names from the first table. Something like:
SELECT t1.name as name, t2.number from table1 t1, table2 t2, table3 t3 where t2.number like '234%' and t3.personid = t1.id and t3.phoneid = t2.id;
You can also rewrite the "blah.id = blah.id" as a join if you need outer join semantics (include certain fields with NULLs).
|
1. Field of the Invention
The present invention relates to a fluid seal and, more particularly, to a shaft seal which may be suitably used in automotive shock absorbers and the like for sealing a reciprocating shaft which undergoes a substantial lateral thrust.
2. Description of the Prior Art
In an automotive shock absorber, the piston rod is sealed by a shaft seal with respect to the housing. The shaft seal includes a primary fluid sealing lip for sealing the oil side of the piston rod to prevent release of the hydraulic fluid contained in the housing. In most instances, the seal also includes a dust sealing lip for sealing the air side to protect the primary sealing lip from ingress of dust and dirt.
During operation of the shock absorber, the piston rod undergoes a substantial lateral thrust as the wheel hits a bump or when the automobile undergoes cornering. In particular, a severe lateral thrust is encountered in the MacPherson strut type suspension systems.
One of the problems which must be overcome in designing a shaft seal for shock absorber applications is to effectively prevent ingress of dust and dirt for a long period of time despite repeated lateral thrust.
Japanese Utility Model Kokai Publication No. 6-28429 discloses a shaft seal which is provided with an auxiliary dust sealing lip situated inwardly of the primary dust sealing lip. The auxiliary dust sealing lip is profiled in the form of an edge that functions to scrape incoming dust and dirt back to the air side.
The problem associated with the conventional shaft seal is that the auxiliary dust sealing lip undergoes considerable wear, so that the dust sealing function of the shaft seal is prematurely degraded.
It is therefore an object of the present invention to provide a shaft seal which is capable of providing a high degree of dust sealing capability for a long period of time. |
Abbeyhill Junction
Abbeyhill Junction was a railway junction in the Abbeyhill area of Edinburgh. It connected the East Coast Main Line with the line towards Abbeyhill railway station. Passenger services stopped using this line in the 1960s, but it briefly reopened in 1986 when a shuttle service was set up between Waverley station and Meadowbank Stadium railway station for the Commonwealth Games. Abbeyhill Junction signal box closed on 6 November 1938, when an old box at Waverley East took over control of the junction.
Closure
The junction closed in 1986 as the line was no longer being used, even for freight. In 1988, the tracks were disconnected at both ends of the line. The tracks remained, overgrown, for over 18 years until 2007, when the lines were dismantled and the area they occupied was concreted over.
Category:Rail junctions in Scotland
Category:Railway lines in Scotland
Category:Transport in Edinburgh
Category:1986 disestablishments in Scotland |
"I expect him to come back and be the same that he was," Zimmer told reporters of Cook, who tumbled to the turf with the non-contact injury early in the third quarter against Detroit, a play that saw him also lose the ball for a fumble before grasping at his knee. |
Evaluation and management of corneal abrasions.
Corneal abrasions are commonly encountered in primary care. Patients typically present with a history of trauma and symptoms of foreign body sensation, tearing, and sensitivity to light. History and physical examination should exclude serious causes of eye pain, including penetrating injury, infective keratitis, and corneal ulcers. After fluorescein staining of the cornea, an abrasion will appear yellow under normal light and green in cobalt blue light. Physicians should carefully examine for foreign bodies and remove them, if present. The goals of treatment include pain control, prevention of infection, and healing. Pain relief may be achieved with topical nonsteroidal anti-inflammatory drugs or oral analgesics. Evidence does not support the use of topical cycloplegics for uncomplicated corneal abrasions. Patching is not recommended because it does not improve pain and has the potential to delay healing. Although evidence is lacking, topical antibiotics are commonly prescribed to prevent bacterial superinfection. Contact lens-related abrasions should be treated with antipseudomonal topical antibiotics. Follow-up may not be necessary for patients with small (4 mm or less), uncomplicated abrasions; normal vision; and resolving symptoms. All other patients should be reevaluated in 24 hours. Referral is indicated for any patient with symptoms that do not improve or that worsen, a corneal infiltrate or ulcer, significant vision loss, or a penetrating eye injury. |
Nicolas Sarkozy is in police custody in Nanterre. The former French president was summoned as part of the investigation into the possible financing of his 2007 election campaign by Libya. At the centre of the inquiry into the alleged funding of Nicolas Sarkozy by the then Libyan dictator Muammar Gheddafi are said to be 5 million euros in cash.
It is the first time Sarkozy has been questioned on this matter since a judicial investigation was opened in April 2013. Police custody can last up to a maximum of 48 hours. At the end of the two days of custody, Sarkozy could be compelled to appear before the magistrates to be formally charged. The former minister and close ally Brice Hortefeux was also questioned this morning, but in a voluntary interview and, unlike Sarkozy, he is not in custody.
In 2012 the website Mediapart published documents alleging that the Libyan leader Muammar Gheddafi had financed Sarkozy's run for the Élysée, an accusation he has always denied. The former head of state, who withdrew from political life after his defeat in the centre-right primaries of November 2016, has already been sent to trial for failing to comply with the rules on the financing of his 2012 election campaign, having spent roughly 20 million euros more than the 22.5 million cap allowed by law.
In January the 58-year-old French businessman Alexandre Djouhri was arrested at London's Heathrow airport on an international arrest warrant issued by France: he is alleged to have acted as the intermediary for the money with which the former Libyan leader Muammar Gheddafi is said to have financed Sarkozy's 2007 election campaign, when he was elected president. The extradition hearing will begin on 17 April. In 2011 it was Nicolas Sarkozy's France that pushed for the attack on Libya which then accelerated the overthrow of Gheddafi's regime.
French Prime Minister Edouard Philippe, interviewed this morning by the French media, said he did not want to make "any comment" on Nicolas Sarkozy being taken into custody, out of respect for the former president. |
1. Field of the Invention
The present invention relates to a sheet conveying apparatus used with an image forming apparatus or an image reading apparatus such as a copying machine, a scanner, a printer and the like. More particularly, it relates to a sheet conveying apparatus having a skew correction means for correcting skew-feed of a sheet conveyed to an image forming portion or an image reading portion.
2. Related Background Art
In some conventional image forming apparatuses and image reading apparatuses such as copying machines, printers or scanners, a regist means acting as a skew correction means for correcting skew-feed of a sheet is disposed in front of an image forming portion or an image reading portion in order to correct the posture and position of a sheet.
Among such regist means, there is a loop regist means in which the tip end of a sheet abuts against a nip between a pair of regist rollers which are stopped at that moment, so as to form a loop in the sheet, and the skew-feed of the sheet is corrected by aligning the tip end of the sheet with the nip by means of the elasticity of the sheet. As another regist means, there is a shutter regist means in which a shutter member for stopping the tip end of the sheet is retractably disposed in a sheet convey path, and the skew-feed of the sheet is corrected by retracting the shutter member from the sheet convey path after the tip end of the sheet has been aligned with the shutter member.
Recently, image forming apparatuses and image reading apparatuses have been digitalized. In the image forming apparatus, for example, the substantial image forming speed has been increased by treating many sheets in a short time, decreasing the distance between the sheets (sheet interval) without increasing the process speed of the image formation. On the other hand, in conventional analogue apparatuses (for example, copying machines), even when a copying operation is continued after a single sheet (original) is read, an optical device for exposing the original must be reciprocated a number of times corresponding to the number of copies, so that the distance between the sheets (sheet interval) is determined accordingly.
However, now that image reading and image formation are digitalized, after the original is read once, the image information of the original can be electrically coded and stored in a memory. For image formation, the information in the memory is read out, and an image corresponding to the image information of the original is formed on a photosensitive member by an exposure device such as a laser or an LED array. Consequently, even when a plurality of copies are formed, mechanical movement of the optical device is not required.
As a method for reducing the time required for the above-mentioned registration, which is one of the factors determining the distance between the sheets (sheet interval), an active regist method has been proposed in which the skew-feed of the sheet is corrected while the sheet is being conveyed, without stopping the sheet temporarily.
In this method, two sensors are disposed in the sheet convey path, spaced a predetermined distance apart along a direction substantially perpendicular to the sheet conveying direction, so that the inclination of the sheet can be detected from the signals indicating that the tip end of the sheet has been detected by the respective sensors. The skew-feed of the sheet is then corrected by controlling the sheet conveying speeds of a pair of regist rollers which are disposed coaxially along a direction substantially perpendicular to the sheet conveying direction, spaced apart from each other by a predetermined distance, and driven independently. By effecting the skew correction without stopping the sheet temporarily in this way, the distance between the sheets (sheet interval) can be reduced more than with the other methods.
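As a rough illustration of the detection principle, assume a constant conveying speed v and let the two sensors be spaced a distance L apart in the width direction; if the times at which they detect the tip end differ by Δt, the inclination θ of the tip end satisfies tan θ = v·Δt / L (v, L, Δt and θ are introduced here only for illustration), so the speed difference to be applied to the two regist rollers can be derived directly from the two detection signals.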
However, in the above-mentioned conventional sheet conveying apparatus, and in the image reading apparatus and the image forming apparatus having such a sheet conveying apparatus, when the sizes of the sheets to be conveyed are not constant or identical (particularly when a long sheet is conveyed), the skew correction must be effected by the pair of regist rollers while the trail end of the sheet is still pinched between a pair of upstream convey rollers.
Further, in the active regist method, the skew correction is effected by advancing the delayed side of the sheet with one of the pair of regist rollers, or by delaying the advanced side of the sheet with the other of the pair of regist rollers. In both cases, however, rotational movement of the entire sheet is required. Thus, in the condition that the trail end of the sheet is pinched between the pair of convey rollers, it is difficult to rotate the sheet by the required amount, which makes accurate skew correction difficult. Further, depending upon the size of the sheet, the sliding resistance of the sheet convey guide increases, worsening the accuracy of the skew correction. |
The Montgomery County Sheriff’s office in Conroe, Texas announced on Saturday the arrest of Jose Manuel Tiscareno Hernandez, a 31-year-old man. An illegal alien, Hernandez had been deported back to Mexico multiple times.
There are no repercussions for these people, so they keep coming back again and again.
Hernandez was arrested for sexual abuse of an 11-year-old child — more than once.
THE PRESS RELEASE
On January 10, the Montgomery County Sheriff’s Office Special Victim’s Unit and Crime Scene Investigators executed a search warrant at a residence in the 400 Block of Gladstell in Conroe. The search warrant was for an investigation of Aggravated Sexual Assault of a Child. The victim was 11 years old when the abuse started.
During the execution of the search warrant at the suspect’s residence, the suspect was not home, but detectives received information that the suspect was intending to flee the United States and head back to Mexico.
An arrest warrant was filed and a second search warrant was executed on January 11, 2019, in a separate location to collect additional evidence.
@MCTXSheriff and numerous law enforcement agencies partner together to find and arrest sexual assault suspect. pic.twitter.com/uiB3wa5sK2 — MCTXSheriff (@MCTXSheriff) January 12, 2019
ONE OF SO MANY
Hernandez is only one pervert among many. There aren’t a lot of readily available statistics on this specific crime, but NCFIRE keeps a record of this type of crime in its state.
North Carolinians For Immigration Reform and Enforcement (NCFIRE) publishes a monthly report on its website of sex crimes against children. In 2018, 215 illegal aliens committed at least 743 sexual molestations of children in North Carolina.
Multiply that by 50 states and D.C.
Oh, and thank a Democrat.
Build the wall! |
Atherosclerosis is a progressive inflammatory disease and the underlying cause of heart attack and stroke. Macrophages play a crucial role in the formation and progression of atherosclerotic lesions. Macrophage apoptosis occurs throughout all stages of atherosclerosis with a differential impact on lesion morphology in early versus late atherosclerosis. Loss of macrophages in early lesions is thought to reduce lesion size, whereas cell death in advanced lesions contributes to the necrotic core and plaque destabilization. Studies by Tabas and coworkers have demonstrated that defective phagocytic clearance results in apoptotic cell accumulation in atherosclerotic plaques. Here we propose that the intrinsic ability of macrophages to resist pro-apoptotic stimuli may be another important determinant of macrophage survival and apoptotic cell numbers in atherosclerotic lesions. There are two major pro-survival pathways, PI3K/Akt and NF-κB, and both are constitutively active in macrophages and macrophage-derived foam cells of atherosclerotic lesions. Recent studies in our laboratory have shown that genetic deficiency of the prostaglandin E2 receptor, EP4, in hematopoietic cells promotes macrophage apoptosis in atherosclerotic lesions by modulating the PI3K/Akt and NF-κB signaling pathways. Two Akt isoforms are expressed in macrophages, Akt1 and Akt2, yet their relative contributions to macrophage apoptosis and atherogenesis have not been determined. Interestingly, Akt has been reported to mediate signaling through IKKα that may activate the NF-κB pathways with its anti-apoptotic activity. We hypothesize that cross-talk between the Akt and NF-κB signaling pathways is a critical determinant of macrophage survival and atherogenesis. In this proposal we intend to define the contribution of distinct members of the Akt and NF-κB signaling pathways, including Akt1, Akt2, and IKKα, to macrophage survival and atherosclerotic lesion formation. We hypothesize that both Akt1 and Akt2 contribute to macrophage survival but that deficiency of both isoforms will promote macrophage apoptosis to a greater extent than deficiency of either isoform alone. The goal of Specific Aim 1 is to examine the impact of hematopoietic cell deficiency of Akt1, Akt2, or both on macrophage survival and atherogenesis in LDLR-/- mice in vivo. The goal of Specific Aim 2 is to define the impact of macrophage deficiency of Akt1 and/or Akt2 on apoptosis and the Akt and NF-κB signaling pathways in vitro. Akt and IKKα are necessary for TORC1 formation in signal transduction. Therefore, we will examine the hypothesis that macrophage deficiency of Akt1 and/or Akt2 will suppress mTOR activity. In Specific Aim 3, we will examine the hypothesis that IKKα deficiency in hematopoietic cells will reduce macrophage survival and impact atherogenesis through alterations in the Akt and NF-κB signaling pathways. A better understanding of the molecular mechanisms of macrophage survival may provide new targets for the prevention of atherosclerosis and cardiovascular events. |
Introduction {#sec1-1}
============
Globally, 36.9 (31.1--43.9) million people were estimated to be living with HIV in 2017. This is an increase from previous years and is thought to be because more people are currently receiving the life-saving antiretroviral therapy (ART). There were 1.8 (1.4--2.4) million new cases of HIV infection globally each year, showing a 47% decline from the 3.4 (3.1--3.7) million in 1996.\[[@ref1]\] India has been categorized as a nation with a low prevalence of HIV with seroprevalence rates of less than 1%,\[[@ref2]\] and the adult HIV incidence has decreased by more than 50% from 2001 to 2013. The current prevalence of HIV among antenatal women in the country is 0.35%, which also shows a declining trend.\[[@ref2]\]
The first case of HIV infection in India was reported in Chennai in 1986.\[[@ref3]\] In 1987, the National AIDS Control Programme (NACP) was launched under the Ministry of Health and Family Welfare, Government of India, to coordinate national responses to the spread of the infection. Its activities included surveillance, blood screening, and health education for HIV. To prevent mother-to-child transmission (MTCT) of HIV, the most important source of HIV in children less than 15 years of age, the Prevention of Parent-To-Child Transmission (PPTCT) program was launched under the NACP in 2002. PPTCT is the largest national antenatal screening program in the world.\[[@ref4]\]
The NACO Technical Estimate Report (2015) estimated that 35,255 of the 29 million annual pregnancies in India occur in HIV-positive women. In the absence of any intervention, an estimated 10,361 infected babies (2015 estimate) would be born annually. The PPTCT program aims to prevent the perinatal transmission of HIV from the HIV-infected mother to her newborn baby. The program entails counseling, testing, and treatment of pregnant women.
In India, the diagnosis and treatment of HIV are largely concentrated in areas with high HIV prevalence; Tamil Nadu is one of these states. However, the seroprevalence rate in Tamil Nadu, which was 1.6% among antenatal women in 2001, came down to 0.5% in 2005.\[[@ref5]\]
Prevention of HIV in India has been based on the assumption that the principal drivers of the epidemic are individuals in high-risk groups, such as commercial sex workers and men who have sex with men.\[[@ref6]\] Though targeting these high-risk groups has remarkably lowered the prevalence of HIV, it is uncertain whether these methods can be used in rural populations where these high-risk groups form a minority. Therefore, other strategies to lower HIV prevalence in rural populations are necessary.
Direct measurement of HIV incidence involves following up a seronegative population with repeated HIV tests, which is tedious. Therefore, an indirect estimation of the prevalence can be made from a population of people who may have recently been exposed, such as antenatal mothers. The aim of this study was to measure the prevalence of HIV among antenatal mothers and its change over a period of 14 years.
Materials and Methods {#sec1-2}
=====================
This study is a retrospective, cross-sectional study. It was approved by the Institutional Review Board of Christian Medical College. The data included and analyzed in this study were collected from the PPTCT program as conducted in the Kaniyambadi block (population, 108,000) between January 2002 and December 2016 by the Department of Community Health, Christian Medical College.
Pregnant women identified by the health workers were registered and encouraged to visit the mobile clinics for antenatal care. Once they were registered, blood was collected for routine investigations, including HIV and HBsAg testing, and antenatal care was given by our mobile health teams, each led by a doctor, which visited each village at least once a month. A few antenatal women did not register with us.
All women were offered screening for HIV under the PPTCT program, and an opt-out procedure was followed. HIV testing was performed according to World Health Organization (WHO) recommendations.\[[@ref7]\] First, a rapid test was performed. If it was positive, the sample was retested. If both tests were positive, the patient and her husband were called to the base hospital. Detailed pretest counseling was done and blood was drawn for a repeat rapid test and Western blot. If the repeat rapid test was positive, the sample was sent to the Department of Virology, Christian Medical College, Vellore, for confirmation with Western blot.
Results {#sec1-3}
=======
During the study period, 32,088 pregnancies were registered for antenatal care in the peripheral clinics. A total of 29,985 antenatal women were tested for HIV, whereas 2103 women received antenatal care from other healthcare providers. Of all the samples tested, 55 (0.18%) tested positive for HIV. The observed HIV prevalence, which was 5.9 per 1000 in 2002, had declined to 1.2 per 1000 in 2016. No women tested positive for HIV between 2012 and 2015 \[[Table 1](#T1){ref-type="table"}\]. The data analyzed are presented in 5-year blocks in [Table 2](#T2){ref-type="table"} to remove the fluctuation in annual rates caused by the small numbers of HIV-positive women detected each year \[[Figure 1](#F1){ref-type="fig"}\].
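The 95% confidence intervals reported in \[[Table 1](#T1){ref-type="table"}\] are consistent with the usual normal-approximation formula p ± 1.96√(p(1 − p)/n); for 2002, for example, p = 9/1514 ≈ 0.594%, which gives an interval of approximately (0.207%, 0.982%).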
######
HIV prevalence in Kaniyambadi block
| Year | No. positive | No. screened | Prevalence (%) | 95% CI |
|------|--------------|--------------|----------------|------------------|
| 2002 | 9 | 1514 | 0.594 | (0.207, 0.9817) |
| 2003 | 7 | 2089 | 0.335 | (0.087, 0.5829) |
| 2004 | 7 | 2310 | 0.303 | (0.079, 0.5272) |
| 2005 | 2 | 2068 | 0.097 | (0, 0.2307) |
| 2006 | 5\* | 2127 | 0.235 | (0.029, 0.4409) |
| 2007 | 8\* | 2196 | 0.364 | (0.112, 0.6163) |
| 2008 | 4\* | 2038 | 0.196 | (0.004, 0.3884) |
| 2009 | 4 | 2152 | 0.186 | (0.004, 0.3679) |
| 2010 | 3\* | 2012 | 0.149 | (0, 0.3177) |
| 2011 | 4\* | 2210 | 0.181 | (0.004, 0.3582) |
| 2012 | 0 | 2035 | 0 | (0, 0.147) |
| 2013 | 0 | 2007 | 0 | (0, 0.147) |
| 2014 | 0 | 1766 | 0 | (0, 0.17) |
| 2015 | 0 | 1840 | 0 | (0, 0.163) |
| 2016 | 2 | 1621 | 0.123 | (0, 0.2944) |
\*Includes patients who have been tested more than one time in subsequent pregnancies. CI: Confidence interval
######
HIV prevalence in 5-year blocks in Kaniyambadi
| Year | No. positive | No. screened | Prevalence (%) | 95% CI |
|-----------|----|-------|------|------------------|
| 2002-2006 | 30 | 10108 | 0.3 | (0.191, 0.403) |
| 2007-2011 | 23 | 10608 | 0.22 | (0.128, 0.305) |
| 2012-2016 | 2 | 9269 | 0.02 | (0, 0.051) |
CI: Confidence interval
![A graph showing the decline in HIV prevalence over the years](JFMPC-8-669-g001){#F1}
A declining trend in HIV prevalence was also seen in the hospital setting, where a total of 37,244 pregnant women were tested. The prevalence of HIV, which was 3.7 per 1000 women in 2004, had declined to 0.31 per 1000 women in 2016 \[[Table 3](#T3){ref-type="table"}\].
######
Prevalence of HIV in pregnant women attending the hospital
| Year | Positive | Tested | Prevalence (%) | 95% CI |
|------|----------|--------|----------------|------------------|
| 2004 | 6 | 1623 | 0.37 | (0.0744, 0.665) |
| 2005 | 5 | 2186 | 0.229 | (0.028, 0.429) |
| 2006 | 7 | 2271 | 0.308 | (0.08, 0.536) |
| 2007 | 3 | 2752 | 0.109 | (0, 0.232) |
| 2008 | 5 | 2982 | 0.168 | (0.021, 0.315) |
| 2009 | 3 | 3207 | 0.094 | (0, 0.199) |
| 2010 | 3 | 3056 | 0.098 | (0, 0.209) |
| 2011 | 3 | 3293 | 0.091 | (0, 0.194) |
| 2012 | 7 | 3140 | 0.223 | (0.058, 0.388) |
| 2013 | 1 | 3063 | 0.033 | (0, 0.097) |
| 2014 | 2 | 3259 | 0.061 | (0, 0.146) |
| 2015 | 2 | 3134 | 0.064 | (0, 0.152) |
| 2016 | 1 | 3278 | 0.031 | (0, 0.09) |
CI: Confidence interval
A declining trend was seen in both primi- and multigravid women \[[Table 4](#T4){ref-type="table"}\].
######
Prevalence of HIV among primi- and multigravid women
| Year | Primi: No. positive | Primi: No. screened | Primi: Prevalence, % (95% CI) | Multigravid: No. positive | Multigravid: No. screened | Multigravid: Prevalence, % (95% CI) |
|------|------|------|------------------------|------|------|------------------------|
| 2003 | 2 | 852 | 0.235 (0.000, 0.560) | 5 | 1237 | 0.404 (0.051, 0.758) |
| 2004 | 5 | 1008 | 0.496 (0.000, 0.930) | 2 | 1302 | 0.154 (0.000, 0.366) |
| 2005 | 2 | 926 | 0.216 (0.000, 0.515) | 0 | 1142 | 0.000 (0.000, 0.263) |
| 2006 | 2 | 958 | 0.209 (0.000, 0.498) | 1 | 1169 | 0.086 (0.000, 0.253) |
| 2007 | 5 | 1018 | 0.491 (0.062, 0.921) | 1 | 1178 | 0.085 (0.000, 0.251) |
| 2008 | 1 | 1014 | 0.099 (0.000, 0.292) | 0 | 1024 | 0.000 (0.000, 0.293) |
| 2009 | 3 | 1064 | 0.282 (0.000, 0.601) | 1 | 1088 | 0.092 (0.000, 0.272) |
| 2010 | 1 | 986 | 0.101 (0.000, 0.300) | 1 | 1026 | 0.097 (0.000, 0.288) |
| 2011 | 3 | 1059 | 0.283 (0.000, 0.630) | 0 | 1151 | 0.000 (0.000, 0.261) |
| 2012 | 0 | 959 | 0.000 (0.000, 0.313) | 0 | 1076 | 0.000 (0.000, 0.279) |
| 2013 | 0 | 917 | 0.000 (0.000, 0.327) | 0 | 1090 | 0.000 (0.000, 0.275) |
| 2014 | 0 | 817 | 0.000 (0.000, 0.367) | 0 | 949 | 0.000 (0.000, 0.316) |
| 2015 | 0 | 821 | 0.000 (0.000, 0.365) | 0 | 1019 | 0.000 (0.000, 0.294) |
| 2016 | 0 | 748 | 0.000 (0.000, 0.401) | 2 | 873 | 0.229 (0.000, 0.401) |
CI: Confidence interval
Discussion {#sec1-4}
==========
India, being a country with poor socioeconomic development and a large number of migrant workers, appears vulnerable to a growing HIV epidemic.\[[@ref8]\] A large number of programs have been implemented by the Government of India to screen for HIV and to prevent MTCT of HIV.
The prevalence of HIV in Tamil Nadu and other southern states of India seems to be declining. This is in contrast to earlier studies where the prevalence was found to be higher in Tamil Nadu than expected, involving even populations that were not at high risk.\[[@ref9]\] The prevalence of HIV in the community was found to range from 1.8% to 7.4% in earlier studies.\[[@ref9][@ref10]\] Various studies have reported a decline in HIV prevalence across the country,\[[@ref11][@ref12]\] whereas other studies have reported an increasing trend, such as the study by Gupta *et al.* that reports an increase from 0.7% in 2003--2004 to 0.9% in 2005--2006.\[[@ref13]\] Our study showed a declining trend in HIV prevalence among pregnant women.
The decline in HIV prevalence could be attributed to the various interventions done by the Department of Community Health of CMC, Vellore, which might have decreased the rates of transmission in the community. A few such interventions are as follows: barbers were educated on the need to use disposable blades in their practice and were given certificates of their compliance for displaying to their clientele; traditional dais were introduced to sterile techniques of conducting deliveries and to the use of disposable needles and syringes; newly married couples were counseled about safe sex practices and the use of condoms; school children were educated about HIV, modes of its spread, and safe sex practices; and health education was conducted among the masses about HIV and the prevention of its spread. In addition, programs to screen for sexually transmitted diseases were conducted among women in the reproductive age group.
What primary care physicians need to know is that the Government of India has a well-structured approach to controlling HIV. Screening of antenatal women is essential for preventing MTCT. Health education for both the woman and her husband on safe sex practices is also essential in keeping the prevalence of HIV low. Primary care physicians, being the first point of contact between the patient and the health system, play an important role in the education of women and their families.
The Government of India is committed to eliminating new HIV among children. Based on the new WHO guidelines, NACO will provide lifelong ART to all pregnant and breastfeeding women regardless of their CD4 count and the clinical stage of their disease.
Conclusion {#sec1-5}
==========
There has been a decline in HIV prevalence among antenatal women over the years. However, it is difficult to credit any single intervention for it. A multipronged approach that improved awareness among different groups of people and involved various organizations, such as the WHO, government bodies, and various nongovernmental organizations including our community health department, has helped in decreasing the prevalence of HIV in Kaniyambadi block. This approach could be a model that other developing countries with high HIV prevalence rates could follow.
Financial support and sponsorship {#sec2-1}
---------------------------------
Nil.
Conflicts of interest {#sec2-2}
---------------------
There are no conflicts of interest.
|
Q:
Extract values from multidimensional array and store in separate array
I need to extract values from a multidimensional array. The starting point, however, is a stdClass object. The aim is to use the extracted values to create a graph. The graph is not part of this question.
Question:
Is there a shorter and more straightforward way than the approach below?
Note that there can be 100 values, so I do not plan to extract them one by one.
// Create an stdClass.
$products = (object)[
'group' => [
['level' => "12"],
['level' => "30"],
['level' => "70"],
]
];
// Transform stdClass to array.
$products = json_decode(json_encode($products), true);
var_dump($products);
// Calc amount of subarrays.
$amount_of_subarrays = count($products['group']);
$amount_of_subarrays = $amount_of_subarrays - 1; // Adjust since objects start with [0].
// Extract data from [$products], populate new array [$array].
$array = [];
for ($i=0; $i <= $amount_of_subarrays; $i++) {
$tmp = $products['group'][$i]['level'];
array_push($array, $tmp);
}
var_dump($array);
Result (as expected):
array(3) {
[0] =>
string(2) "12"
[1] =>
string(2) "30"
[2] =>
string(2) "70"
}
A:
The simplest way I know of is to use the array_column function, which returns the values from a single column of the input array.
E.g. array_column($products['group'], 'level') should return the expected result.
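For example, reusing the $products array from the question (after the json_decode round-trip), the whole loop can be replaced with a single call:
// Pull every 'level' value out of the subarrays in one step.
$array = array_column($products['group'], 'level');
var_dump($array);
This prints the same array(3) of "12", "30" and "70" as the loop-based version.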
|
Said to smell like God's feet, France's favourite soft cheese is at the heart of a battle pitting small farmers against the corporations
In his tiny workshop with a view of his cows, Francois Durand stood lovingly ladling raw milk curd into cheese moulds. After several weeks of salting, ripening and maturing, these would turn into the pungent, oozing Camembert that is France's favourite soft cheese - as much a part of the national stereotype as the Basque beret, the baguette and a glass of red wine.
"When you use raw, unpasteurised milk, the taste is nice and fruity," Durand mused as he inspected the smelly contents of his ripening rooms. "You can taste what the cows have been eating at different times of year."
Durand is the last dairy farmer in the tiny Normandy village of Camembert still making traditional, raw milk Camembert cheese. But the farm's visitor book hints at the bitter cheese wars that have poisoned the air of the surrounding hills and dales. "Be brave!" urges one scribbled French entry. "Keep up the fight! Thanks for defending real cheese."
For months, small cheese producers and Camembert connoisseurs have been engaged in a battle of David and Goliath, dubbed the "camembert wars", which have captured the French imagination and seen Normans take to the streets to defend their cheese's pungent tang.
"Camembert is a subject that unites all the French," the former president Francois Mitterrand once said. But when small, traditional producers are pitted against France's industrial dairy giants the divide seems vast.
Camembert, whose sharp aroma was once likened to "God's feet", was made fashionable by Napoleon III and popularised as part of rations to soldiers in the first world war. It is France's best-selling cheese after Emmental, so it is not surprising that French industrial dairy giants moved in to mass-produce it, buying up small producers and delivering vast amounts of cheaper, machine-produced camembert to supermarket shelves. There are only five remaining small, traditional producers of the prized "Camembert de Normandie".
Last year, the two industrial giants that produced 80% of the exclusive Normandy Camembert that carries France's famous Apellation d'Origine Contrôlée (AOC) stamp of approval, tried to change the rules. Until then, all prized AOC-approved Normandy camembert had been made with raw milk.
The big groups decided instead to make most of their Camembert with pasteurised milk, saying they wanted to protect consumers' health because, when manufacturing large volumes, they could not ensure raw milk was free of dangerous bacteria.
Pasteurising their milk - a process which was cheaper and better suited to mass-production - meant the dairy giants could no longer carry the prized AOC label. But they began a fight to win back the precious AOC stamp, arguing that pasteurised cheese should be included in it.
Last month, Camembert aficionados breathed a sigh of relief when, after a long public battle, cheese authorities said they would protect small producers by reserving the AOC only for Normandy Camembert made in the traditional way with raw milk.
But small cheese-makers say the war is not over and the fight could be turning dirty. In recent weeks, the biggest industrial producer, Lactalis, snitched on a smaller, traditional competitor, telling authorities that dangerous bacteria was found in a batch of AOC raw milk Camembert produced by Reaux. Coincidentally, Reaux happened to be one of Lactalis's biggest critics. The smaller company said there was no evidence of contamination. "This was an operation to destabilise us, it's a new episode in the camembert war, that's for sure," said Reaux's director Bertrand Gillot.
"The camembert war is a symbol of the wider cheese crisis in France," warned Véronique Richez-Lerouge, founder of France's Regional Cheese Association, which lobbies to protect traditional raw-milk varieties.
Nicolas Sarkozy has vowed to apply for Unesco world heritage status for French cuisine. Yet, while French leaders have long promoted the ideal of French countryside produce, small, regional cheeses are under threat from intensive production and the food industry giants behind it.
France produces 1,000 cheese varieties, and its huge consumption is second only to the champion cheese-eaters of Europe, the Greeks. But the problem for French purists is the type of cheese that the French are wolfing down. Raw milk cheese makes up only 15% of the market. Dozens of traditional cheese varieties have disappeared over the past 30 years as small producers die out or are bought up by industrial giants.
The new types of cheese created in France now include squeezable, spreadable, and artificially flavoured varieties which strike horror into experts who worry that French teenagers can no longer recognise a proper goat's cheese as their palates have been numbed. Around 95% of French cheese is now bought in supermarkets, where even cheesemonger counters are disappearing as people prefer their fromage packaged and ready sliced from a fridge unit.
"If it continues like this, in 10 years' time traditional raw milk cheese will be over," Richez-Lerouge said. "France defends its terroir, its great chefs, but that's just window-dressing, in fact France is the nation of Carrefour [the world's second-biggest supermarket giant] and a vast density of McDonalds. Consumers in France aren't aware of the disaster that's happening."
She said that even in Britain, where, as in the US and Spain, raw milk cheese is currently in fashion, traditional makers were held in more esteem.
At a table on Durand's Normandy farm, Gérard Roger, a camembert historian and president of the newly-created Defence Committee for Authentic Camembert, reluctantly agreed to taste-test a mass-produced, big-selling supermarket camembert.
"Wow, it stinks," he says sniffing the pale, uniform cheese. "It's dull, it tastes of nothing." Roger's group, which has organised street demonstrations, see themselves as "guardians of the temple". Now they have won a victory in the AOC battle for raw milk camembert, they are lobbying to protect authentic production methods, encouraging more small farmers to make cheese using milk from local Normandy cows.
Francis Rouchaud the group's secretary and a former marketing expert, said the big industrial producers wanted to put out a maximum number of Camembert products: "It's Coca-Cola thinking".
Lactalis, the world's second largest dairy processor, countered: "We are not trying to kill off the small people, that doesn't interest us at all. We're a global dairy company in 20 countries. We've got better things to do." A spokesman said that although the risk from raw milk was very small, for the company's big brands it preferred not to take it. He said there was nothing malicious in alerting the authorities to a bacteria-risk in competitor's cheese.
Charlie Turnbull, an exclusive cheesemonger from Dorset and judge at the world cheese awards, was visiting Durand's farm to pay homage to the "cathedral" of camembert. "The French put art before enterprise," he said. "Whereas the British put enterprise before art."
But small French producers are still on guard against mass-produced cheese. Inspecting his matured Camemberts, Durand said: "We must keep fighting to defend raw milk cheese, but we can't do it alone, French consumers must help us." |
THE GLENCOE LITERATURE LIBRARY
Study Guide
for
The Adventures of Huckleberry Finn
by Mark Twain
Meet Mark Twain
I was born the 30th of November, 1835, in the almost invisible village of Florida, Monroe County, Missouri. . . . The village contained a hundred people and I increased the population by 1 percent. It is more than many of the best men in history could have done for a town.
Mark Twain, whose real name was Samuel Clemens, was in many ways a self-made man. Clemens was born on the Missouri frontier, learned several trades, traveled widely, and transformed himself into Mark Twain, the larger-than-life writer, lecturer, and symbol of America. Four years after Clemens was born, his father moved the family to Hannibal, Missouri, on the Mississippi River. There, the young boy lived an idyllic life. Some of his happiest days were spent on the riverbanks watching the parade of boats that passed by. In his memoir Life on the Mississippi (1883), he recalls the excitement people felt when the lazy summer air was pierced by the cry of "S-t-ea-m-boat a-comin!" "All in a twinkling," he wrote, "the dead town is alive and moving." Hannibal was also home to relatives, friends, and townspeople who served as the inspiration for characters in his fiction. But before Clemens could turn his childhood memories into literature, he needed to see something of the world. At the age of seventeen, he left Hannibal to work as a printer's assistant.
He held printing jobs in New York, Pennsylvania, and Iowa. Then, when he was twenty-one, he returned to the Mississippi River to train for the job he wanted above all others: steamboat pilot. A few years later, he became a licensed pilot, but his time as a pilot was cut short by the start of the Civil War, in 1861. After a two-week stint in the Confederate army, Clemens joined his brother in Carson City, Nevada. There, Clemens began to write humorous sketches and tall tales for the local newspaper. In February 1863, he first used the pseudonym, or pen name, that would later be known by readers throughout the world. It was a riverboating term for water two fathoms, or twelve feet, deep: "Mark Twain." Clemens next worked as a miner near San Francisco. In 1865 he published in a national magazine a tall tale he had heard in the minefields--"The Celebrated Jumping Frog of Calaveras County." It was an instant success. Later, he traveled to Hawaii, Europe, and the Middle East. The humorous book he wrote about his travels, The Innocents Abroad, made him famous. In 1870 Clemens married Olivia Langdon. A year later they moved to Hartford, Connecticut. At the same time, he began a successful career as a lecturer, telling humorous stories and reading from his books. More books followed, including Roughing It, a travel memoir about the West; The Adventures of Tom Sawyer; Life on the Mississippi; and The Prince and the Pauper. Thanks to his lecture tours and books, Mark Twain became familiar around the world. His death in 1910 was met with great sorrow.
Introducing the Novel
Persons attempting to find a motive in this narrative will be prosecuted; persons attempting to find a moral in it will be banished; persons attempting to find a plot in it will be shot.
--author's note from The Adventures of Huckleberry Finn
These humorous warnings were the first words that readers of The Adventures of Huckleberry Finn saw when they opened Mark Twain's new novel in 1885. At the time, Twain was already well known as a humorist and the author of the nostalgic "boy's book" The Adventures of Tom Sawyer. Therefore, Twain's readers probably did not expect that Twain would have serious motives for writing Huckleberry Finn or that the novel would teach serious moral lessons. In some ways, Huckleberry Finn is a sequel to, or a continuation of, Tom Sawyer. Huck was an important member of Tom Sawyer's group of friends in the earlier novel, and Jim appeared as well. The fictional setting of both books is St. Petersburg, a small Mississippi River port that Twain modeled on his hometown of Hannibal, Missouri. The earlier book tells of the rollicking good times had by all and is recognized as one of American literature's finest portrayals of a happy childhood. Readers therefore had reason to expect more lighthearted escapades and harmless hijinks in Huckleberry Finn. Readers soon found out, however, that Huckleberry Finn is very different from Tom Sawyer. The odd notice at the beginning of the novel is the first warning that things may not be exactly as they seem. The warning is ironic because the novel definitely has a motive, a moral, and a plot; and Twain wanted his readers to be aware of each of them. The structure of the book, which centers around a journey, allows Huck and Jim to meet many different kinds of people. The society of the small towns and villages along the great river mirrors American society as a whole, with all its
variety. The cast of characters includes many personalities with whom Twain was familiar: liars, cheaters, and hypocrites. The author examines these representative types, mercilessly exposing their weaknesses and displaying their terrible, senseless cruelty to others. Twain is especially bitter about the way slavery degraded the moral fabric of life along the river. His bitterness was, perhaps, rooted in the knowledge that he himself grew up thinking there was nothing wrong with a system that enslaved human beings. But Twain also holds up a few shining examples of human decency as models. In fact, Huckleberry Finn can be seen as hopeful. The novel shows that people can make the right decisions and defy injustice, that an individual's moral beliefs can lead him or her to reject what is wrong in society, and that sound personal values can overcome evil. Twain himself explained that the novel revolves around conflict between "a sound heart and a deformed conscience." Huck Finn is a child of his time, like the author who created him. Both character and author struggled to recognize and correct some of the wrongs of their society. Both learned to listen to the teachings of their sound hearts. Even though Huckleberry Finn is a serious book addressing important themes, it is also humorous. The novel is filled with hilarious incidents, oddball characters, and goofy misadventures, and the language the characters use is often laugh-out-loud funny. Like many authors, Twain based his characters on the people he knew. In his Autobiography, Twain disclosed the model for his most famous character, a boy he knew growing up in Hannibal:
Huckleberry Finn was Tom Blankenship. . . . In Huckleberry Finn I have drawn Tom Blankenship exactly as he was. He was ignorant, unwashed, insufficiently fed; but he had as good a heart as any boy ever had. His liberties were totally unrestricted. He was the only really independent person . . . in the community.
Many of the first readers of Huckleberry Finn were critical of the book. Some found its honest and unflinching portrayal of life to be coarse, while other readers found its dark view of society distasteful. Critics complained, and some libraries banned the book as unsuitable for children. Today, however, Huckleberry Finn is generally viewed as a masterpiece of American literature.
THE TIME AND PLACE
The Adventures of Huckleberry Finn is set in the Mississippi River Valley, around 1840. During the course of the novel, Huck and Jim float down the Mississippi River. They travel from
their hometown of St. Petersburg, Missouri, north of St. Louis, hundreds of miles into the Deep South. Some of the places they visit are real, while others are products of Twain's imagination. So important to the novel is the great Mississippi River that many readers consider it as much a character as a place. T. S. Eliot, the great twentieth-century poet who grew up in St. Louis, said, "The River makes the book a great book." It fired the imagination of the young Twain, served as the setting for his beloved riverboats, and became the only real home Huckleberry Finn and Jim were to know.
Did You Know?
In the years before the Civil War, which started in 1861, Missouri and other southern states allowed slavery. Mark Twain's father was a slaveholder, and enslaved Africans were a common sight in Twain's boyhood home of Hannibal. However, even though many people in Missouri were immigrants from southern states and supporters of slavery, many others opposed it. Missourians' mixed feelings about slavery prevented the state from ever joining other slaveholding states in the Confederacy and made it a battleground during the Civil War.
Freedom means different things to different people. What does it mean to you? List Ideas With a partner, examine what the concept of freedom means to you. Brainstorm a list of statements that describe the idea of freedom. Setting a Purpose Read to find out what freedom means to a boy and a man living during the 1800s.
BACKGROUND
Point of View Point of view is the relationship of the narrator, or storyteller, to the events of the story. Huckleberry Finn is told by the character Huck, using words like I and we. Therefore, it is told from the first-person point of view. The reader sees everything through Huck's eyes and is given his perspective on events. When examining a narrative point of view, it is important to distinguish the narrator from the author. Huck is an uneducated fourteen-year-old boy living in a village in the 1840s. He has the knowledge, beliefs, and experiences of such a boy. Twain, on the other hand, was a well-traveled writer and experienced lecturer. He was well aware of how to use narrative techniques, adopt different points of view, and speak in the role of different characters, and he used that knowledge to create a narrator who is very different from himself. Unreliable Narrator Huckleberry Finn is also an example of an unreliable narrator--one who does not understand the full significance of the events he describes and comments on. Huck is not intentionally unreliable; his lack of education and experience makes him so. Much of the humor in the first chapters comes from Huck's incomplete understanding of the adults around him and their "sivilized" ways.
The first chapters of a novel introduce readers to the conflicts, or struggles, that the characters will face throughout the course of the story. External conflicts are struggles between characters who have different goals or between a character and forces of nature. Internal conflicts are psychological struggles that characters experience when they are unhappy or face difficult decisions. External conflicts often trigger internal conflicts. As you read the first fifteen chapters of Huckleberry Finn, use the chart below to keep track of the conflicts that the characters experience. Add boxes on a separate sheet of paper if you need to. Recognizing major conflicts will help you understand the major themes, or ideas about life, that are developed in the novel.
Huck
vs.
Miss Watson and the Widow
Explanation of conflict: the sisters want to "sivilize" Huck; he wants to be free
4. Where is Huck reunited with Jim? In what significant ways are Jim and Huck alike? In what significant ways are they different?
5. Why does Huck put a dead snake on Jim's blanket? What harm comes to Jim as a result of the incident? In your opinion, is Huck sorry for the harm he caused? Explain.
Name
Date
Class
Responding
The Adventures of Huckleberry Finn Chapters 115
Analyzing Literature (continued)
Evaluate and Connect 6. How successful do you feel Mark Twain is in creating the character of Jim? Does Jim seem like a real person to you? Explain why or why not.
7. Huck takes to the river to find freedom and escape from people and situations that restrict his liberty. What are some ways that people today can find personal freedom? Is Huck's way still possible? Explain your answer.
Analyzing Relationships Review Chapters 2 through 15, paying special attention to Huck's relationship with Jim. Note how Huck treats Jim as well as how Huck feels about him. Then, on a separate sheet of paper, write a brief analysis of their relationship. What changes does it undergo? What do you think causes these changes? Support your opinions with quotations and other evidence from the novel.
Extending Your Response
Literature Groups Nature plays an important part in Huck's life. In your group, find passages in Chapters 1 through 15 in which Huck describes nature and natural elements. Then discuss what meanings these elements seem to have for Huck. Pay particular attention to what Huck finds in nature that is lacking in his relationships with people. Present your examples to the rest of the class. Geography Connection Draw or photocopy a map of the Mississippi River Valley. Then track Huck and Jim's journey on the Mississippi River. Put a star or other symbol next to towns that they visit.
Save your work for your portfolio.
Before You Read
The Adventures of Huckleberry Finn Chapters 1631
FOCUS ACTIVITY
How do you go about making important decisions? Do you tend to follow your heart or your head? Journal In your journal, write about a time when you had to make an important decision. Briefly describe how you decided what to do. Setting a Purpose Read to find out what important decisions Huck faces and how he goes about making them.
BACKGROUND
Satire and Irony Satire is a kind of literature that tries to open people's eyes to the need for change by exposing the flaws of a person or society. Satirists' main weapon is humor, which is created through techniques such as irony. Irony is the contrast between what appears to be true and is actually true, or between what we expect to happen and what actually happens. Twain created an ironic character in Pap. We expect a father to be proud of his son and provide for him, but Pap is angry that Huck is learning to read and "getting religion," and Pap wants to spend Huck's money on himself. Though we may laugh at Pap, we should also be aware of the messages behind the humor: Judge Thatcher is too easily tricked by Pap's "reformation," and there is something wrong with a system that would let Pap take Huck. Through the use of irony, Twain develops some of the most important themes of Huckleberry Finn. As you read Chapters 16 through 31, look for examples of irony, and think about the flaws that Twain is attempting to expose.
In Huckleberry Finn, people and things are not always what they appear to be. As you read Chapters 16 through 31, make note of times when people or things appear to be one way but are actually very different underneath. In the left-hand column of the chart below, note what the character or thing seems to be. In the right-hand column, note what the character or thing actually is. Add rows to the chart if necessary. Appearance Reality
3. What does Buck say when Huck asks him how the feud between the Shepherdsons and the Grangerfords got started? What is ironic about Buck's response?
4. Who is Colonel Sherburn? Briefly sum up the speech he makes to the mob. What aspect of human nature does Sherburn criticize?
Name
Date
Class
Responding
The Adventures of Huckleberry Finn Chapters 1631
Analyzing Literature (continued)
Evaluate and Connect 5. Mark Twain makes heavy use of dialect in Huckleberry Finn. How successful do you feel he is? What are some advantages for an author in deciding to render speech in dialect, as Twain does? What are some possible disadvantages?
6. How might Huck answer the Focus Activity question that you answered in your journal? How does this answer compare with yours?
Literature and Writing
Isn't It Ironic? Throughout the novel, Huck is taught that "sivilized society" is right and he is wrong. As a result, he believes he will "go to hell" for rescuing Jim. On a separate sheet of paper, write a brief analysis of the irony in Huck's situation. What evil does the irony expose?
Literature Groups In this section of the novel, Mark Twain contrasts life on the raft with life on shore. In your group, discuss the differences between what the raft represents to Huck and what life on shore is like. Cite lines from the text that describe raft life and shore life to support your argument. Then present your conclusions to others in your class. Learning for Life The Shepherdsons and the Grangerfords are unable to settle their differences, and so they resort to violence. Imagine that you have been called into help them resolve their conflict through peaceful means. What would you say to them? What would you have them do? In a small group, role-play a conflict resolution meeting between the two families.
Save your work for your portfolio.
Before You Read
The Adventures of Huckleberry Finn Chapters 3243
FOCUS ACTIVITY
In many popular adventure stories, the hero is held captive by evil enemies or forces yet manages to escape. Sharing Ideas As a class, discuss books and movies in which a hero overcomes seemingly impossible odds to find freedom. Who or what holds the hero captive? What miseries does the hero endure while being held? How does the hero escape? Do friends help? Setting a Purpose Read to find out how Huck and a friend plan to help Jim escape.
BACKGROUND
The Antihero Traditional heroes are often superhuman. We look up to them because they are braver, stronger, more clever, or more unwilling to sacrifice their principles than we. Antiheroes, on the other hand, are very human. Like us, they have faults, make mistakes, and puzzle over difficult decisions. In the end, however, antiheroes usually do the "right thing"--what we, ourselves, hope we would do in similar circumstances. As you read the final chapters of Huckleberry Finn, think about the heroes of the novel. Are they traditional heroes or antiheroes? What makes them so? The Controversial Conclusion As Mark Twain wrote Huckleberry Finn, he pondered over the plot. He thought especially long and hard about how to end the novel and effectively resolve the conflicts that he had presented. Though some critics feel that the conclusion of Huckleberry Finn is logical and effective, other critics have severely criticized it. As you read the last chapters of Huckleberry Finn, think about the events that came before and the way that the characters in the novel usually behave. Then judge the conclusion for yourself. Is it consistent with the characters we have come to know? Does it resolve the major conflicts in the novel in a satisfactory way?
As you saw at the beginning of Huckleberry Finn, Tom Sawyer is fond of romantic adventure stories and enjoys pretending that he is taking part in one. Use the diagram below to chart the major events in Tom's adventurous "rescue" of Jim. You may extend the diagram if necessary.
3. What does Tom's elaborate plan to free Jim tell you about Tom? What does it tell you about his attitude toward Jim?
4. What does Huck decide to do at the end of the novel? Why doesn't he stay with Aunt Sally?
Name
Date
Class
Responding
The Adventures of Huckleberry Finn Chapters 3243
Analyzing Literature (continued)
Evaluate and Connect 5. Many critics of Huckleberry Finn have pointed out that the Phelps' farm episode differs in tone and seriousness from the first two-thirds of the novel. Do you agree? Explain your answer, supporting it with evidence from the text.
6. Mark Twain called Huckleberry Finn "a book of mine where a sound heart and a deformed conscience come into collision and conscience suffers defeat." What influences have "deformed" Huck's conscience? Are such influences still at work in the world today? What forces are available to try to change "deformed consciences"?
Literature and Writing
You Have Mail Imagine a friend in another city has learned that you have just finished reading Huckleberry Finn. Your curious friend sends you an E-mail that says, "All I know is that that book is about a journey down the Mississippi River--what does this journey mean?" Write a short E-mail response to your friend, explaining the meaning of the journey.
Literature Groups Flat characters remain the same from the beginning of a novel to the end. Round characters undergo psychological changes as a result of the conflicts they face and try to resolve. In your group, discuss the characters of Huck and Jim. Are they flat or round? Use evidence from the novel to support your opinions, and present your conclusions to the rest of the class. Psychology Connection Psychologists often evaluate the mental health and personalities of their patients by observing their behavior or listening to their answers to questions. Play the role of a psychologist and prepare short personality evaluations of Huck and Tom, based on their actions and words in Chapters 32 through 43. Compare their two personalities, citing differences and similarities. Offer evidence from the text to support your evaluation.
Save your work for your portfolio.
Name
Date
Class
Responding
The Adventures of Huckleberry Finn
Personal Response
The novel ends with Huck feeling unsure about what his future holds. What do you predict will happen to Huck? What sort of life do you think he will have? Why?
A symbol is a person, place, or thing that represents something beyond itself. On a separate sheet of paper, analyze the Mississippi River as a symbol. Suggest what it means in the novel, and explain why the river is such an appropriate symbol for the meanings the author assigns it. Give examples from the text to support your views.
Imagine that Huck is a fourteen-year-old living today. "Update" Huck's dialect by translating it into today's slang. On a separate sheet of paper, rewrite the first few paragraphs of the novel (or another passage of your choice).
Name
Date
Class
from Incidents in the Life of a Slave Girl Harriet Jacobs
Before You Read
Focus Question
What dangers did enslaved people face in order to escape slavery?
Background
Like Jim, Harriet Jacobs was born into slavery. Unlike Jim, Jacobs was a real person. In her autobiography, published in 1861, she gives an account of her experiences as a slave and of her journey to freedom.
Responding to the Reading
1. What are your first impressions of Jacobs's account? Why do you think you responded this way?
2. What ultimately happens to Jacobs's children? How does it make her feel? Why?
3. Making Connections How does this reading help you understand the character of Jim?
Literature Groups
Imagine that Jim, Huck, and Harriet Jacobs could have a conversation about relations between African Americans and whites during the time they lived. Work together to write a dialogue, and share it with other groups.
Name
Date
Class
Before the Fire Canoe Frank Donovan
Before You Read
Focus Question
What makes a good description?
Background
Donovan's historical nonfiction describes the boats and the people who worked on America's rivers. This reading looks at life on the Mississippi River.
Responding to the Reading
1. From Donovan's description, do you think you would have liked to work on a riverboat? Explain.
2. Donovan writes, "Samuel Clemens [Mark Twain] was . . . being magnanimous" in his description of the rivermen. What does Donovan mean? What does he think of the rivermen?
3. Making Connections Donovan and Twain write about the "natural hazards" of boat travel on the river. Compare and contrast their writing styles--including point of view and word choice.
Donovan and Twain recorded many observations about river life. Think of a busy place you know well, and write a paragraph describing the place in detail. Ask other students how well they are able to form a mental image of the place you have described. Use their suggestions to revise your description.
Name
Date
Class
The Late Benjamin Franklin and My First Lie, and How I Got Out of It Mark Twain
Before You Read
Focus Question
When is using humor a good way to convey a message?
Background
Mark Twain was a master of satire. As you will see in the following two essays, he was a keen observer of society and used wit and sarcasm to ridicule human weaknesses.
Responding to the Reading
1. Give three or four examples of people or things Twain satirizes in these essays.
2. In "The Late Benjamin Franklin" Twain writes, "His maxims were full of animosity toward boys." What does he mean by this statement?
3. Making Connections What is the "lie of silent assertion" that Twain refers to in "My First Lie, and How I Got Out of It"? When does Huck tell this type of lie? From the novel, give an example.
Write a paragraph that uses humor to criticize some aspect of high school life that you would like to see changed.
Name
Date
Class
from Stride Toward Freedom Martin Luther King Jr.
Before You Read
Focus Question
In your opinion, is it ever right to break a rule? Explain.
Background
Martin Luther King Jr. received more than forty awards for his work in the civil rights movement. Here, in his own words, he recounts his observations of Montgomery's African American community and his own struggle to find methods to deal with injustice.
Responding to the Reading
1. How did King answer the Montgomery man who asked, "Why have you and your associates come in to destroy [our] long tradition [of peaceful race relations]?" Do you find King's reply to be persuasive? Explain.
2. What was King's ethical dilemma regarding the bus boycott? How did he resolve the dilemma? |
a few more undertale pieces. this is a bookmark collection i’ll be selling at a local con. i placed the iconic heart in every single picture here~
i really like how these turned out~ |
Multianalytical non-invasive characterization of phthalocyanine acrylic paints through spectroscopic and non-linear optical techniques.
The documentation and monitoring of cleaning operations on paintings benefit from the identification and determination of thickness of the materials to be selectively removed. Since in artworks diagnosis the preservation of the object's integrity is a priority, the application of non-invasive techniques is commonly preferred. In this work, we present the results obtained with a set of non-invasive optical techniques for the chemical and physical characterization of six copper-phthalocyanine (Cu-Pc) acrylic paints. Cu-Pc pigments have been extensively used by artists over the past century, thanks to their properties and low cost of manufacture. They can also be found in historical paintings in the form of overpaints/retouchings, providing evidence of recent conservation treatments. The optical behaviour and the chemical composition of Cu-Pc paints were investigated through a multi-analytical approach involving micro-Raman spectroscopy, Fibre Optics Reflectance Spectroscopy (FORS) and Laser Induced Fluorescence (LIF), enabling the differentiation among pigments and highlighting discrepancies with the composition declared by the manufacturer. The applicability of Non Linear Optical Microscopy (NLOM) for the evaluation of paint layer thickness was assessed using the modality of Multi-photon Excitation Fluorescence (MPEF). Thickness values measured with MPEF were compared with those retrieved through Optical Coherence Tomography (OCT), showing significant consistency and paving the way for further non-linear stratigraphic investigations on painting materials. |
Roppongi is one of Tokyo’s biggest nightlife districts, renowned for its high-end, multi-level night clubs. For visitors from the Midwest, though, there’s one establishment that especially sticks out: Bar Milwaukee, Roppongi’s nod to a traditional Midwestern dive bar, billiards table, dart boards, neon beer lights and all.
“It seriously feels like home,” says Milwaukee native Nicole Enea, who visited the bar this fall. “It’s almost like a Milwaukee basement bar; it’s just missing bar dice.” Despite the language barrier, Enea says, the bartender was quick to offer up shots — just like at home — and the bar’s music was a mix of the same ’80s and ’90s staples that dominate jukeboxes at home.
Looks inviting | Photo courtesy Nicole Enea
Some of the finer details aren’t completely right. The bar serves more Bud than Miller (although it does have Miller Genuine Draft), and I’m pretty sure a Milwaukee bar hasn’t displayed a sign for Zima since the days of “The Arsenio Hall Show.”
But Milwaukee visitors have helped fill in some of the true local color the bar was missing. Guests have plastered the bar with stickers for Milwaukee institutions like Real Chili, Tonic Tavern and the Brat House.
You can see a gallery of Enea’s photos from her visit to the bar below, and find many more on Bar Milwaukee’s Facebook page. |
#ifndef PhysicsTools_UtilAlgos_CollectionAdder_h
#define PhysicsTools_UtilAlgos_CollectionAdder_h
/* \class CollectionAdder<C>
*
* \author Luca Lista, INFN
*
* \version $Id: CollectionAdder.h,v 1.3 2010/02/20 20:55:17 wmtan Exp $
*/
#include "FWCore/Framework/interface/EDProducer.h"
#include "FWCore/ParameterSet/interface/ParameterSet.h"
#include "FWCore/Utilities/interface/transform.h"
#include "FWCore/Utilities/interface/InputTag.h"
#include "FWCore/Framework/interface/Event.h"
#include "DataFormats/Common/interface/Handle.h"
template <typename C>
class CollectionAdder : public edm::EDProducer {
public:
typedef C collection;
CollectionAdder(const edm::ParameterSet& cfg)
: srcTokens_(edm::vector_transform(cfg.template getParameter<std::vector<edm::InputTag>>("src"),
[this](edm::InputTag const& tag) { return consumes<collection>(tag); })) {
produces<collection>();
}
private:
std::vector<edm::EDGetTokenT<collection>> srcTokens_;
void produce(edm::Event& evt, const edm::EventSetup&) override {
std::unique_ptr<collection> coll(new collection);
typename collection::Filler filler(*coll);
for (size_t i = 0; i < srcTokens_.size(); ++i) {
edm::Handle<collection> src;
evt.getByToken(srcTokens_[i], src);
*coll += *src;
}
evt.put(std::move(coll));
}
};
#endif
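/*
 * Usage note (added for illustration; not part of the original header).
 * A concrete instantiation of CollectionAdder<C> is configured from CMSSW
 * Python with a vector of input tags under the "src" parameter, which is the
 * only name taken from this header. The plugin name "IntCollectionAdder",
 * the module label and the process name below are assumptions.
 *
 *   import FWCore.ParameterSet.Config as cms
 *
 *   process = cms.Process("MERGE")
 *   process.summedValues = cms.EDProducer(
 *       "IntCollectionAdder",              # hypothetical concrete plugin
 *       src = cms.VInputTag(
 *           cms.InputTag("producerA"),
 *           cms.InputTag("producerB"),
 *       ),
 *   )
 *   process.p = cms.Path(process.summedValues)
 */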
|
Susceptibility of MT-null mice to chronic CdCl2-induced nephrotoxicity indicates that renal injury is not mediated by the CdMT complex.
Chronic human exposure to Cd results in kidney injury. It has been proposed that nephrotoxicity produced by chronic Cd exposure is via the Cd-metallothionein complex (CdMT) and not by inorganic forms of Cd. If this hypothesis is correct, then MT-null mice, which cannot form CdMT, should not develop nephrotoxicity. Control and MT-null mice were injected s.c. with a wide range of CdCl2 doses, six times/week for up to 10 weeks, and their renal Cd burden, renal MT concentration, and nephrotoxicity were quantified. In control mice, renal Cd burden increased in a dose- and time-dependent manner, reaching as high as 140 microg Cd/g kidney, along with 150-fold increases in renal MT concentrations, reaching 800 microg MT/g kidney. In MT-null mice, renal Cd concentration (10 microg/g) was much lower, and renal MT was nonexistent. The maximum tolerated dose of Cd in MT-null mice was approximately one-eighth that of controls. MT-null mice were more susceptible than controls to Cd-induced renal injury, as evidenced by increased urinary excretion of protein, glucose, gamma-glutamyltransferase, and N-acetyl-beta-D-glucosaminidase, as well as by increased blood urea nitrogen levels. Kidneys of Cd-treated mice were enlarged and histopathology showed various types of lesions, including proximal tubular degeneration, apoptosis, atrophy, interstitial inflammation, and glomerular swelling. These lesions were more severe in MT-null than in control mice, mirroring the biochemical analyses. These data indicate that Cd-induced renal injury is not necessarily mediated through the CdMT complex and that MT is an important intracellular protein in protecting against chronic Cd nephrotoxicity. |
Having breakdown insurance or an extended warranty can be extremely valuable if your vehicle requires major repairs. Unfortunately, there are some circumstances under which the insurance or warranty will not pay out. Those circumstances usually include owner neglect of routine maintenance.
Routine Maintenance for Your Vehicle
Routine maintenance includes all those small trips to the repair shop, the ones that are probably going to cost less than the deductible on your insurance or warranty. Our lives are often filled with small financial obligations such as credit card payments, school lunches, work lunches, union dues, or even your membership in an online game or at an exercise club. It becomes too easy to say, “I’ll get the oil changed next week when I don’t have so many things due.” Auto mechanics will tell you that those routine maintenance tasks are an important part of keeping your vehicle in good repair and running correctly.
Topping Up Fluids
Topping up the fluids in your car is easy, and you can do it yourself. A clerk at a gas station remarked to a customer who was buying oil, “I just listen to the motor, and when it starts making a tapping sound I tell my husband. Sure enough, it is usually low on oil.” If you are waiting for the “tapping sound” you are probably waiting too long. Make it a practice to check the oil and look at the coolant in the overflow reservoir each time you fill the gas tank. Keep a little notebook with the owner’s manual and track the oil and coolant as well as the gasoline and mileage. You’ll be glad of the information when you talk with your mechanic.
Oil Change and Lube
One of the least expensive vehicle repair and maintenance events, and one of the most important. Even if you are keeping the fluids in your vehicle topped up, eventually the oil will become dirty. The dirt includes environmental crud as well as tiny metal filings that are the result of metal parts moving against each other. Your oil filter will catch some of these things, but eventually, it will become clogged and will begin to choke down the flow of oil. This is also true of the fuel filter. Moving parts that are not directly affected by the motor oil or transmission fluid might also need attention. Your auto mechanic will check all fluids and moving parts when doing a routine oil change and lube. If you sign up with your auto shop, many will now send an email or text message when it is time for this maintenance event.
Tires, Brakes and Wheels
Tires, brakes and wheels are other parts where just driving your vehicle normally will create wear. Tires need to be rotated frequently to make sure they are wearing evenly, and the wheel alignment needs to be checked. Of course, there is no need to emphasize the importance of good stopping power! The world is full of small children, dogs, cats, squirrels and other motorists who do the unexpected.
Importance of Maintenance
When you attend to regular maintenance on your vehicle, you help the engine to run with less strain and you prolong its life. In addition, you meet the requirements of your warranty or breakdown insurance policy. If your insurer finds that the mechanical failure of your vehicle was due to neglect, there is a chance that your claim will not be honored. If you make regular maintenance visits and keep a record of them, your insurance company will see that you are doing your best to maintain your vehicle in good condition. |
Is fragile X syndrome a pervasive developmental disability? Cognitive ability and adaptive behavior in males with the full mutation.
In addition to mental retardation (MR), fragile X [fra(X)] syndrome has been associated with various psychopathologies, although it appears that the link is secondary to MR. It has been proposed that individuals with the full mutation be classified as a subcategory of pervasive developmental disorders (PDD). If fra(X) males are to be categorized as PDD, how do they compare with other types of developmental disabilities? We examined 27 fra(X) males aged 3-14 years, from 4 sites in North America. Measures of cognitive abilities were obtained from the Stanford-Binet Fourth Edition (SBFE), while levels of adaptive behavior were evaluated using the Vineland Adaptive Behavior Scales (VABS). Control subjects were sex-, age-, and IQ matched children and adolescents ascertained from the Developmental Evaluation Clinic (DEC) at Kings County Hospital. At the DEC, control subjects were diagnosed as either MR (n = 43) or autistic disorder (AD; n = 22). To compare subjects' adaptive behavior (SQ) with their cognitive abilities (IQ), a ratio of [(SQ/IQ) x 100] was computed. Results graphed as cumulative distribution functions (cdf) revealed that the cdf for AD males, who by definition are socially impaired, was positioned to the left of the cdf for MR controls, as expected. Mean ratio for AD males (70) was lower than for MR males (84). On the other hand, the cdf for fra(X) males was positioned far to the right of either AD or MR controls (mean ratio = 125). Statistical tests showed that SQ of fra(X) males was significantly higher than controls.(ABSTRACT TRUNCATED AT 250 WORDS) |
Find cheap airfares for flights from Takaroa to Raiatea Island. Use gh.wego.com to search and compare low airfare airline tickets for Takaroa to Raiatea Island flights on various international airlines. Find last-minute flights and the latest low airfares for this route. Compare cheap Takaroa to Raiatea Island flights at a glance and get the best deal for your trip.
<!-- Generated by pkgdown: do not edit by hand -->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>coronavirus_spatial — coronavirus_spatial • coronavirus</title>
<!-- jquery -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js" integrity="sha256-FgpCb/KJQlLNfOu91ta32o/NMZxltwRo8QtmkMRdAu8=" crossorigin="anonymous"></script>
<!-- Bootstrap -->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha256-916EbMg70RQy9LHiGkXzG8hSg9EdNy97GazNG/aiY1w=" crossorigin="anonymous" />
<script src="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js" integrity="sha256-U5ZEeKfGNOja007MMD3YBI0A3OSZOQbeG6z2f2Y0hu8=" crossorigin="anonymous"></script>
<!-- Font Awesome icons -->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css" integrity="sha256-eZrrJcwDc/3uDhsdt61sL2oOBY362qM3lon1gyExkL0=" crossorigin="anonymous" />
<!-- clipboard.js -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/clipboard.js/2.0.4/clipboard.min.js" integrity="sha256-FiZwavyI2V6+EXO1U+xzLG3IKldpiTFf3153ea9zikQ=" crossorigin="anonymous"></script>
<!-- sticky kit -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/sticky-kit/1.1.3/sticky-kit.min.js" integrity="sha256-c4Rlo1ZozqTPE2RLuvbusY3+SU1pQaJC0TjuhygMipw=" crossorigin="anonymous"></script>
<!-- pkgdown -->
<link href="../pkgdown.css" rel="stylesheet">
<script src="../pkgdown.js"></script>
<meta property="og:title" content="coronavirus_spatial — coronavirus_spatial" />
<meta property="og:description" content="Create a geospatial version of the coronavirus data set for
easier visualization and spatial analysis. Uses rnaturalearth for
the spatial info and generates sf objects using st_join to match up
datasets." />
<meta name="twitter:card" content="summary" />
<!-- mathjax -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/MathJax.js" integrity="sha256-nvJJv9wWKEm88qvoQl9ekL2J+k/RWIsaSScxxlsrv8k=" crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/config/TeX-AMS-MML_HTMLorMML.js" integrity="sha256-84DKXVJXs0/F8OTMzX4UR909+jtl4G7SPypPavF+GfA=" crossorigin="anonymous"></script>
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/html5shiv/3.7.3/html5shiv.min.js"></script>
<script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
<![endif]-->
</head>
<body>
<div class="container template-reference-topic">
<header>
<div class="navbar navbar-default navbar-fixed-top" role="navigation">
<div class="container">
<div class="navbar-header">
<button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#navbar" aria-expanded="false">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<span class="navbar-brand">
<a class="navbar-link" href="../index.html">coronavirus</a>
<span class="version label label-default" data-toggle="tooltip" data-placement="bottom" title="Released version">0.1.0.9002</span>
</span>
</div>
<div id="navbar" class="navbar-collapse collapse">
<ul class="nav navbar-nav">
<li>
<a href="../index.html">
<span class="fa fa-home fa-lg"></span>
</a>
</li>
<li>
<a href="../reference/index.html">Reference</a>
</li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-expanded="false">
Articles
<span class="caret"></span>
</a>
<ul class="dropdown-menu" role="menu">
<li>
<a href="../articles/intro_coronavirus_dataset.html">Introduction to the Coronavirus Dataset</a>
</li>
<li>
<a href="../articles/spatial_coronavirus.html">Showing the Spatial Distribution of Covid-19 Confirmed Cases</a>
</li>
</ul>
</li>
<li>
<a href="../news/index.html">Changelog</a>
</li>
</ul>
<ul class="nav navbar-nav navbar-right">
<li>
<a href="https://github.com/covid19r/coronavirus">
<span class="fa fa-github fa-lg"></span>
</a>
</li>
</ul>
</div><!--/.nav-collapse -->
</div><!--/.container -->
</div><!--/.navbar -->
</header>
<div class="row">
<div class="col-md-9 contents">
<div class="page-header">
<h1>coronavirus_spatial</h1>
<small class="dont-index">Source: <a href='https://github.com/covid19r/coronavirus/blob/master/R/coronavirus_spatial.R'><code>R/coronavirus_spatial.R</code></a></small>
<div class="hidden name"><code>coronavirus_spatial.Rd</code></div>
</div>
<div class="ref-description">
<p>Create a geospatial version of the <a href='coronavirus.html'>coronavirus</a> data set for
easier visualization and spatial analysis. Uses <a href='https://www.rdocumentation.org/packages/rnaturalearth/topics/rnaturalearth'>rnaturalearth</a> for
the spatial info and generates <a href='https://www.rdocumentation.org/packages/sf/topics/sf'>sf</a> objects using <a href='https://www.rdocumentation.org/packages/sf/topics/st_join'>st_join</a> to match up
datasets.</p>
</div>
<pre class="usage"><span class='fu'>coronavirus_spatial</span>(
<span class='kw'>return_shape</span> <span class='kw'>=</span> <span class='fu'><a href='https://www.rdocumentation.org/packages/base/topics/c'>c</a></span>(<span class='st'>"point"</span>, <span class='st'>"polygon"</span>),
<span class='kw'>returncols</span> <span class='kw'>=</span> <span class='fu'><a href='https://www.rdocumentation.org/packages/base/topics/c'>c</a></span>(<span class='st'>"all"</span>, <span class='st'>"simple"</span>, <span class='st'>"reduced"</span>),
<span class='no'>...</span>
)</pre>
<h2 class="hasAnchor" id="arguments"><a class="anchor" href="#arguments"></a>Arguments</h2>
<table class="ref-arguments">
<colgroup><col class="name" /><col class="desc" /></colgroup>
<tr>
<th>return_shape</th>
<td><p>Should the <a href='https://www.rdocumentation.org/packages/sf/topics/sf'>sf</a> object returned be points for cases or polygons of countries?
Defaults to `point`.</p></td>
</tr>
<tr>
<th>returncols</th>
<td><p>Which columns do you want returned? Defaults to `all`, giving all columns from
the original `coronavirus` dataset as well as those returned by <a href='https://www.rdocumentation.org/packages/rnaturalearth/topics/ne_countries'>ne_countries</a>.
`simple` returns those from `coronavirus` as well as some larger-scale geographic information.
`reduced` returns the info from `simple` as well as information on population, income, and a
number of ISO codes.</p></td>
</tr>
<tr>
<th>...</th>
<td><p>Other arguments to <a href='https://www.rdocumentation.org/packages/rnaturalearth/topics/ne_countries'>ne_countries</a></p></td>
</tr>
</table>
<h2 class="hasAnchor" id="source"><a class="anchor" href="#source"></a>Source</h2>
<p>Johns Hopkins University Center for Systems Science and Engineering (JHU CCSE) Coronavirus <a href='https://systems.jhu.edu/research/public-health/ncov/'>website</a></p>
<p>The <a href='https://www.rdocumentation.org/packages/rnaturalearth/topics/rnaturalearth'>rnaturalearth</a> package</p>
<h2 class="hasAnchor" id="value"><a class="anchor" href="#value"></a>Value</h2>
<p>An `sf` object with either country borders as polygons or cases as points</p>
<h2 class="hasAnchor" id="examples"><a class="anchor" href="#examples"></a>Examples</h2>
<pre class="examples"><span class='co'># NOT RUN {</span>
<span class='fu'><a href='https://www.rdocumentation.org/packages/base/topics/library'>library</a></span>(<span class='no'>ggplot2</span>)
<span class='fu'><a href='https://www.rdocumentation.org/packages/base/topics/library'>library</a></span>(<span class='no'>dplyr</span>)
<span class='fu'><a href='https://www.rdocumentation.org/packages/base/topics/library'>library</a></span>(<span class='no'>rnaturalearth</span>)
<span class='no'>worldmap</span> <span class='kw'><-</span> <span class='fu'><a href='https://www.rdocumentation.org/packages/rnaturalearth/topics/ne_countries'>ne_countries</a></span>(<span class='kw'>returnclass</span> <span class='kw'>=</span> <span class='st'>"sf"</span>)
<span class='no'>coronavirus_points</span> <span class='kw'><-</span> <span class='fu'>coronavirus_spatial</span>() <span class='kw'>%>%</span>
<span class='fu'><a href='https://dplyr.tidyverse.org/reference/filter.html'>filter</a></span>(<span class='no'>date</span> <span class='kw'>==</span> <span class='st'>"2020-03-08"</span>) <span class='kw'>%>%</span>
<span class='fu'><a href='https://dplyr.tidyverse.org/reference/filter.html'>filter</a></span>(<span class='no'>type</span> <span class='kw'>==</span> <span class='st'>"confirmed"</span>)
<span class='no'>coronavirus_polys</span> <span class='kw'><-</span> <span class='fu'>coronavirus_spatial</span>(<span class='kw'>return_shape</span> <span class='kw'>=</span> <span class='st'>"polygon"</span>)<span class='kw'>%>%</span>
<span class='fu'><a href='https://dplyr.tidyverse.org/reference/filter.html'>filter</a></span>(<span class='no'>date</span> <span class='kw'>==</span> <span class='st'>"2020-03-08"</span>)<span class='kw'>%>%</span>
<span class='fu'><a href='https://dplyr.tidyverse.org/reference/filter.html'>filter</a></span>(<span class='no'>type</span> <span class='kw'>==</span> <span class='st'>"confirmed"</span>)
<span class='fu'><a href='https://ggplot2.tidyverse.org/reference/ggplot.html'>ggplot</a></span>(<span class='no'>worldmap</span>) +
<span class='fu'><a href='https://ggplot2.tidyverse.org/reference/ggsf.html'>geom_sf</a></span>() +
<span class='fu'><a href='https://ggplot2.tidyverse.org/reference/ggsf.html'>geom_sf</a></span>(<span class='kw'>data</span> <span class='kw'>=</span> <span class='no'>coronavirus_polys</span>, <span class='fu'><a href='https://ggplot2.tidyverse.org/reference/aes.html'>aes</a></span>(<span class='kw'>fill</span> <span class='kw'>=</span> <span class='fu'><a href='https://www.rdocumentation.org/packages/base/topics/Log'>log10</a></span>(<span class='no'>cases</span>+<span class='fl'>1</span>))) +
<span class='fu'><a href='https://ggplot2.tidyverse.org/reference/ggsf.html'>geom_sf</a></span>(<span class='kw'>data</span> <span class='kw'>=</span> <span class='no'>coronavirus_points</span>) +
<span class='fu'><a href='https://ggplot2.tidyverse.org/reference/scale_viridis.html'>scale_fill_viridis_c</a></span>() +
<span class='fu'><a href='https://ggplot2.tidyverse.org/reference/ggtheme.html'>theme_void</a></span>()
<span class='co'># }</span></pre>
</div>
<div class="col-md-3 hidden-xs hidden-sm" id="sidebar">
<h2>Contents</h2>
<ul class="nav nav-pills nav-stacked">
<li><a href="#arguments">Arguments</a></li>
<li><a href="#source">Source</a></li>
<li><a href="#value">Value</a></li>
<li><a href="#examples">Examples</a></li>
</ul>
</div>
</div>
<footer>
<div class="copyright">
<p>Developed by Rami Krispin.</p>
</div>
<div class="pkgdown">
<p>Site built with <a href="https://pkgdown.r-lib.org/">pkgdown</a> 1.3.0.</p>
</div>
</footer>
</div>
</body>
</html>
|
{
"name": "Analytics",
"version": "1.7.9",
"summary": "Segment analytics and marketing tools library for iOS.",
"homepage": "https://segment.com/libraries/ios",
"license": {
"type": "MIT",
"file": "License.md"
},
"authors": {
"Segment": "[email protected]"
},
"source": {
"git": "https://github.com/segmentio/analytics-ios.git",
"tag": "1.7.9"
},
"platforms": {
"ios": "6.0"
},
"requires_arc": true,
"xcconfig": {
"GCC_PREPROCESSOR_DEFINITIONS": "ANALYTICS_VERSION=1.7.9"
},
"subspecs": [
{
"name": "Core-iOS",
"public_header_files": "Analytics/*",
"source_files": [
"Analytics/*.{h,m}",
"Analytics/Helpers/*.{h,m}",
"Analytics/Integrations/SEGAnalyticsIntegrations.h"
],
"platforms": [
"ios"
],
"dependencies": {
"TRVSDictionaryWithCaseInsensitivity": [
"0.0.2"
]
},
"weak_frameworks": [
"iAd",
"AdSupport",
"CoreBlueTooth",
"SystemConfiguration"
],
"frameworks": [
"SystemConfiguration"
]
},
{
"name": "Amplitude",
"prefix_header_contents": "#define USE_ANALYTICS_AMPLITUDE 1",
"public_header_files": "Analytics/Integrations/*",
"ios": {
"source_files": "Analytics/Integrations/Amplitude/SEGAmplitudeIntegration.{h,m}"
},
"platforms": [
"ios"
],
"dependencies": {
"Amplitude-iOS": [
"2.1.1"
],
"Analytics/Core-iOS": [
],
"Analytics/Segmentio": [
]
}
},
{
"name": "AppsFlyer",
"prefix_header_contents": "#define USE_ANALYTICS_APPSFLYER 1",
"public_header_files": "Analytics/Integrations/*",
"ios": {
"source_files": "Analytics/Integrations/AppsFlyer/SEGAppsFlyerIntegration.{h,m}"
},
"platforms": [
"ios"
],
"dependencies": {
"Analytics/Core-iOS": [
],
"Analytics/Segmentio": [
],
"AppsFlyer-SDK": [
"2.5.3.10"
]
}
},
{
"name": "Bugsnag",
"prefix_header_contents": "#define USE_ANALYTICS_BUGSNAG 1",
"public_header_files": "Analytics/Integrations/*",
"ios": {
"source_files": "Analytics/Integrations/Bugsnag/SEGBugsnagIntegration.{h,m}"
},
"platforms": [
"ios"
],
"dependencies": {
"Analytics/Core-iOS": [
],
"Analytics/Segmentio": [
],
"Bugsnag": [
"3.1.2"
]
}
},
{
"name": "Countly",
"prefix_header_contents": "#define USE_ANALYTICS_COUNTLY 1",
"public_header_files": "Analytics/Integrations/*",
"ios": {
"source_files": "Analytics/Integrations/Countly/SEGCountlyIntegration.{h,m}"
},
"platforms": [
"ios"
],
"dependencies": {
"Analytics/Core-iOS": [
],
"Analytics/Segmentio": [
],
"Countly": [
"1.0.0"
]
}
},
{
"name": "Crittercism",
"prefix_header_contents": "#define USE_ANALYTICS_CRITTERCISM 1",
"public_header_files": "Analytics/Integrations/*",
"ios": {
"source_files": "Analytics/Integrations/Crittercism/SEGCrittercismIntegration.{h,m}"
},
"platforms": [
"ios"
],
"dependencies": {
"Analytics/Core-iOS": [
],
"Analytics/Segmentio": [
],
"CrittercismSDK": [
"4.3.4"
]
}
},
{
"name": "Flurry",
"prefix_header_contents": "#define USE_ANALYTICS_FLURRY 1",
"public_header_files": "Analytics/Integrations/*",
"ios": {
"source_files": "Analytics/Integrations/Flurry/SEGFlurryIntegration.{h,m}"
},
"platforms": [
"ios"
],
"dependencies": {
"Analytics/Core-iOS": [
],
"Analytics/Segmentio": [
],
"FlurrySDK": [
"4.4.0"
]
}
},
{
"name": "GoogleAnalytics",
"prefix_header_contents": "#define USE_ANALYTICS_GOOGLEANALYTICS 1",
"public_header_files": "Analytics/Integrations/*",
"ios": {
"source_files": "Analytics/Integrations/GoogleAnalytics/SEGGoogleAnalyticsIntegration.{h,m}"
},
"platforms": [
"ios"
],
"dependencies": {
"Analytics/Core-iOS": [
],
"Analytics/Segmentio": [
],
"GoogleAnalytics-iOS-SDK": [
"3.0.9"
]
}
},
{
"name": "Localytics",
"prefix_header_contents": "#define USE_ANALYTICS_LOCALYTICS 1",
"public_header_files": "Analytics/Integrations/*",
"ios": {
"source_files": "Analytics/Integrations/Localytics/SEGLocalyticsIntegration.{h,m}"
},
"platforms": [
"ios"
],
"dependencies": {
"Analytics/Core-iOS": [
],
"Analytics/Segmentio": [
],
"Localytics-AMP": [
"2.71.0"
]
}
},
{
"name": "Mixpanel",
"prefix_header_contents": "#define USE_ANALYTICS_MIXPANEL 1",
"public_header_files": "Analytics/Integrations/*",
"ios": {
"source_files": "Analytics/Integrations/Mixpanel/SEGMixpanelIntegration.{h,m}"
},
"platforms": [
"ios"
],
"dependencies": {
"Analytics/Core-iOS": [
],
"Analytics/Segmentio": [
],
"Mixpanel": [
"2.5.3"
]
}
},
{
"name": "Optimizely",
"prefix_header_contents": "#define USE_ANALYTICS_OPTIMIZELY 1",
"public_header_files": "Analytics/Integrations/*",
"ios": {
"source_files": "Analytics/Integrations/Optimizely/SEGOptimizelyIntegration.{h,m}"
},
"platforms": [
"ios"
],
"dependencies": {
"Analytics/Core-iOS": [
],
"Analytics/Segmentio": [
],
"Optimizely-iOS-SDK": [
"0.6.52"
]
}
},
{
"name": "Quantcast",
"prefix_header_contents": "#define USE_ANALYTICS_QUANTCAST 1",
"public_header_files": "Analytics/Integrations/*",
"ios": {
"source_files": "Analytics/Integrations/Quantcast/SEGQuantcastIntegration.{h,m}"
},
"platforms": [
"ios"
],
"dependencies": {
"Analytics/Core-iOS": [
],
"Analytics/Segmentio": [
],
"Quantcast-Measure": [
"1.4.6"
]
}
},
{
"name": "Segmentio",
"prefix_header_contents": "#define USE_ANALYTICS_SEGMENTIO 1",
"public_header_files": "Analytics/Integrations/*",
"ios": {
"source_files": "Analytics/Integrations/Segmentio/SEGSegmentioIntegration.{h,m}"
},
"platforms": [
"ios"
],
"dependencies": {
"Analytics/Core-iOS": [
]
}
},
{
"name": "Taplytics",
"prefix_header_contents": "#define USE_ANALYTICS_TAPLYTICS 1",
"public_header_files": "Analytics/Integrations/*",
"ios": {
"source_files": "Analytics/Integrations/Taplytics/SEGTaplyticsIntegration.{h,m}"
},
"platforms": [
"ios"
],
"dependencies": {
"Analytics/Core-iOS": [
],
"Analytics/Segmentio": [
],
"Taplytics": [
"2.0.10"
]
}
},
{
"name": "Tapstream",
"prefix_header_contents": "#define USE_ANALYTICS_TAPSTREAM 1",
"public_header_files": "Analytics/Integrations/*",
"ios": {
"source_files": "Analytics/Integrations/Tapstream/SEGTapstreamIntegration.{h,m}"
},
"platforms": [
"ios"
],
"dependencies": {
"Analytics/Core-iOS": [
],
"Analytics/Segmentio": [
],
"Tapstream": [
"2.8.1"
]
}
},
{
"name": "TestFlight",
"prefix_header_contents": "#define USE_ANALYTICS_TESTFLIGHT 1",
"public_header_files": "Analytics/Integrations/*",
"ios": {
"source_files": "Analytics/Integrations/TestFlight/SEGTestFlightIntegration.{h,m}"
},
"platforms": [
"ios"
],
"dependencies": {
"Analytics/Core-iOS": [
],
"Analytics/Segmentio": [
],
"TestFlightSDK": [
"3.0.2"
]
}
}
]
}
|
Autophagy and ubiquitin-mediated proteolysis may not be involved in the degradation of spermatozoon mitochondria in mouse and porcine early embryos.
The mitochondrial genome is maternally inherited in animals, despite the fact that paternal mitochondria enter oocytes during fertilization. Autophagy and ubiquitin-mediated degradation are responsible for the elimination of paternal mitochondria in Caenorhabditis elegans; however, the involvement of these two processes in the degradation of paternal mitochondria in mammals is not well understood. We investigated the localization patterns of light chain 3 (LC3) and ubiquitin in mouse and porcine embryos during preimplantation development. We found that LC3 and ubiquitin localized to the spermatozoon midpiece at 3 h post-fertilization, and that both proteins were colocalized with paternal mitochondria and removed upon fertilization during the 4-cell stage in mouse and the zygote stage in porcine embryos. Sporadic paternal mitochondria were present beyond the morula stage in the mouse, and paternal mitochondria were restricted to one blastomere of 4-cell embryos. An autophagy inhibitor, 3-methyladenine (3-MA), did not affect the distribution of paternal mitochondria compared with the positive control, while an autophagy inducer, rapamycin, accelerated the removal of paternal mitochondria compared with the control. After the intracytoplasmic injection of intact spermatozoon into mouse oocytes, LC3 and ubiquitin localized to the spermatozoon midpiece, but remnants of undegraded paternal mitochondria were retained until the blastocyst stage. Our results show that paternal mitochondria colocalize with autophagy receptors and ubiquitin and are removed after in vitro fertilization, but some remnants of sperm mitochondrial sheath may persist up to morula stage after intracytoplasmic spermatozoon injection (ICSI). |
Q:
Adding an OpenGL graphics card to a PCI32 only motherboard
I need to add a 3D graphics adapter to a PCI32-only server motherboard. All modern graphics adapters are PCI-Express based; what options do I have?
Thanks.
A:
Matrox still make PCI video cards
http://www.matrox.com/graphics/en/products/graphics_cards/g_series/g450pci/
http://www.matrox.com/graphics/en/products/graphics_cards/g_series/g550lppci/
http://www.matrox.com/graphics/en/products/graphics_cards/p_series/p690pci/
|
### Changes in 4.2.2
---
- Fix bug with config merging.
- Support for Laravel 7.0
- Update many dependencies for compatibility with all Laravel versions.
- New config accessor to read and test the config injection.
- Tests for the config merging.
- Use composer scripts for easier local testing.
### Changes in 4.2.1
---
- Fix unhandled null type in user agent string accessor.
### Changes in 4.2.0
---
- Standalone mode, removed the requirement for Laravel.
- Support for user configs, also supports the Laravel config manager.
### Changes in 4.1.0
---
- OS detectors for Windows, Linux, Android, and Mac/iOS.
- 100% test coverage.
- Type hinted every class and function.
- Introduced the static code analysis to the test flow.
- Introduced the code quality analysis to the test flow.
- Moved to PSR12 standards with the code base.
- Fixed potential type errors.
- Improve resistance to HTTP header based attacks.
- First iteration of a demo site.
### Change in 4.0.0
---
- PHP 5.6 is no longer supported.
- Raised the minimum Laravel version to 6.0.
- Support for Laravel 6.0, 6.1, 6.2, 6.3, 6.4, 6.5.
- Unify the coding standards.
- Remove legacy PHP workarounds.
- Release the isEdge result variable.
- Invalidate cache with 3.x versions.
- Update the tests to test for every laravel framework version.
### Changes in 3.1.4
---
- Fix blade directives, add test coverage.
### Changes in 3.1.3
---
- Allow PHPUnit 7.0 as dependency.
### Changes in 3.1.2
---
- Bump version testing to laravel 5.6.
### Changes in 3.1.1
---
- Fix: MobileDetect still used the osName instead of platformName.
- Fix: isIEVersion comparison called the parameters in wrong order.
- Addition: Version parser now forces the semantic version pieces to be integer.
- Fixed: MobileDetect test only ran on one sample.
- Addition: More test coverage, getting closer to the maximum.
### Changes in 3.1.0
---
- Added the DeviceDetector stage to the pipeline.
- Fixed a minor issue with versions and trailing dots.
- Added the Browser::browserEngine() function.
- Much better detection rates with the new stage.
### Changes in 3.0.1
---
- Fixed the result object's bad property calls.
- Added more unit test for the fixed case.
### Changes in 3.0.0
---
- The package has been rewritten from the ground up.
- Added PHPUnit, and covering the main features.
- Added the travis ci to the release cycle.
- Moved to the Develop -> Staging -> Stable branch model.
- Interfaced everything, seriously!
- Custom exceptions for easier package managing.
- Blade directives.
- Result is now a well annotated object, any IDE can work with it.
- End of the plugin era, pipelines have arrived.
- Added the crawler detect package.
- Replaced the UAParser to a more supported one.
- Support for MobileDetect 2.0 to 2.8, 3.0 will never come :D
- Parser class is much more simple to use.
- PSR-2 code style.
- Browsecap plugin has been removed.
- UserAgentStringApi plugin has been removed. (Too slow to call)
- Everything is easier now, but there is also less flexibility in the package.
- Better version support for PHP and Laravel.
- Easy fast setup.
- Namespaces are redesigned to be more descriptive.
### Changes in 2.0 version
---
- Laravel 5 is now supported, first draft.
### Changes in 1.0.0pre
---
The code has been almost totally rewritten, except for roughly 30 lines of code from v0.9.\*. This breaks compatibility with older versions, so the major version has been increased to v1.0.0pre.
Version 1.0.0 was promised for when Mobile Detect 3 comes out, but since they passed the due date for that release, support for their new detector will be introduced in a plugin so the package development can move on.
- The most important change is that the PHP requirement increased to 5.4~; this allows the usage of traits.
- Class loading now uses the PSR-4 structure instead of PSR-0. This is handled by composer automatically.
- Package now requires the hisorange/traits package to share resources between packages.
- PHP namespaces are moved from **hisorange\browserdetect** to **hisorange\BrowserDetect** to avoid collisions.
- Package now uses the 'browser-detect.parser' and 'browser-detect.result' component names in the L4 DI container.
- Service provider is more extendable with split parser and result component keys.
- Manager class has been renamed to Parser.
- Instead of using the basic Cache and Config classes from the Laravel app, the package now uses the app's DI container to forge the needed components.
- Most of the Manager class' functions have been renamed and reorganized in the Parser.
- Generic values that were previously hardcoded are now stored in the config file.
- Default cache prefix has been changed to 'hbd1'.
- Caching now requires less memory; the results are stored in a compact string format instead of an array.
- Parser now determines the browser's javascript support.
- Parsing is now plugin oriented instead of hardcoded.
- Plugins are customizable from the config/plugins.php file.
- Package ships with 4 built-in plugins.
- UserAgentStringApi plugin is turned off by default, because it requires a greater amount of time to process.
### v0.9.2
---
- Fix the case where data is imported and the current agent is queried in the same request.
- Perform a self-analysis before importing data.
### v0.9.1
---
- New import and export function on the info object.
### Initial release v0.9.0
|
Q:
How to change seed number in Fortran stochastic simulator code
I'm running a Fortran code which performs a stochastic simulation of a marked Poisson cluster process. In practice, event properties (e.g. times of occurrence) are generated by the inversion method, i.e. by random sampling of the cumulative distribution function.
Because of the Poissonian randomness, I expect each generated sequence to be different, but this is not the case. I guess the reason is that the seed for the pseudorandom number generator is the same at each simulation.
I do not know Fortran, so I have no idea how to solve this issue. Here is the part of the code involved with the pseudorandom number generator, any idea?
subroutine pseud0(r)
c generation of pseudo-random numbers
c data ir/584287/
data ir/574289/
ir=ir*48828125
if(ir) 10,20,20
10 ir=(ir+2147483647)+1
20 r=float(ir)*0.4656613e-9
return
end
subroutine pseudo(random)
c wichmann+hill (1982) Appl. Statist 31
data ix,iy,iz /1992,1111,1151/
ix=171*mod(ix,177)-2*(ix/177)
iy=172*mod(iy,176)-35*(iy/176)
iz=170*mod(iz,178)-63*(iz/178)
if (ix.lt.0) ix=ix+30269
if (iy.lt.0) iy=iy+30307
if (iz.lt.0) iz=iz+30323
random=mod(float(ix)/30269.0+float(iy)/30307.0+
& float(iz)/30323.0,1.0)
return
end
A:
First, I would review the modern literature for PRNG and pick a modern implementation. Second, I would rewrite the code in modern Fortran.
You need to follow @francescalus advice and have a method for updating the seed. Without attempting to modernizing your code, here is one method for the pseud0 prng
subroutine init0(i)
integer, intent(in) :: i
common /myseed0/iseed
iseed = i
end subroutine init0
subroutine pseud0(r)
common /myseed0/ir
ir = ir * 48828125
if (ir) 10,20,20
10 ir = (ir+2147483647)+1
20 r = ir*0.4656613e-9
end subroutine pseud0
program foo
integer i
real r1
call init0(574289) ! Original seed
do i = 1, 10
call pseud0(r1)
print *, r1
end do
print *
call init0(289574) ! New seed
do i = 1, 10
call pseud0(r1)
print *, r1
end do
print *
end program foo
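For reference, the Wichmann-Hill recurrence implemented by subroutine pseudo can be checked against a short Python port (an illustration added here, not part of the original answer; the three seeds are the values from the Fortran DATA statement, and the plain modular form below is mathematically equivalent to the Schrage-style arithmetic used in the Fortran):
# Python sketch of the Wichmann-Hill (1982) generator
def wichmann_hill(ix=1992, iy=1111, iz=1151):
    while True:
        ix = (171 * ix) % 30269
        iy = (172 * iy) % 30307
        iz = (170 * iz) % 30323
        yield (ix / 30269.0 + iy / 30307.0 + iz / 30323.0) % 1.0

gen = wichmann_hill()            # pass different seeds to get a different sequence
print([round(next(gen), 6) for _ in range(3)])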
|
Q:
More stars visible from the Australian outback than anywhere else on earth?
An ad for Vodafone in Cairns airport, Australia, presents as a fact "worth ringing home about" that you can see more stars from the Australian outback than anywhere else on earth.
While the Australian outback would have less light pollution and less humidity than some places, I'm doubtful about this claim because if the outback is a good place to see stars, it'd also be a good place to build optical telescopes, and I don't think the Australian outback has a lot of optical telescopes. I thought that most optical telescopes these days are being built in mountainous areas such as Hawaii.
Are more stars visible with the naked eye from the Australian outback than anywhere else on earth?
A:
The relevant paper seems to be "Cinzano, P., Falchi, F., Elvidge C.D. 2001, The first world atlas of the artificial night sky brightness". It gives numerical data per country, and in terms of excellent observation conditions Australia is on a list with many other countries that are also pretty good.
In table 1 they give the percentage of each country's population that has excellent viewing conditions. Unfortunately, Australia is not the best location. But these are only percentages, so with a little work you might figure out absolute numbers.
However, viewing conditions depend not only on light pollution (which is what the Bortle Dark-Sky Scale measures), but also on the 'seeing', which is influenced by turbulence in the atmosphere, humidity and the like (compare http://en.m.wikipedia.org/wiki/Astronomical_seeing)
That's why people bothered building telescopes in Chile's Atacama desert in the first place.
So, Vodafone has a claim, but it's only half the truth if you're talking about naked eye visibility - other locations are good too.
Source:
http://www.inquinamentoluminoso.it/cinzano/download/0108052.pdf
|
Revolutionizing the Art of Metal Fabrication
Contrary to that old cooking adage, “a watched pot never boils,” keeping a careful eye on things—in the kitchen or in the laboratory—can be essential to making a useable (or edible!) final product. Take chocolate, for instance, that foundational block of the food pyramid. An important part of creating high-grade chocolate is a step called tempering, or the melting, stirring, and cooling of the liquid chocolate to align the crystals that give it a smooth texture and a glossy shine. One of the key senses chocolatiers use to monitor tempering is sight, giving them information on the thickness and color of the batch to make sure it tempers evenly as it cools.
But what if they had to do it blind?
For many years, that’s exactly what has been happening in metallurgy laboratories across the world. While the crafting of specialty metal alloys, like titanium or zirconium, can be far more complex than making chocolate, metals are often put through a process that is somewhat akin to chocolate tempering—vacuum arc remelting (VAR). VAR is an important step in metal fabrication, the process by which the chemical and physical homogeneity of the material is refined to ensure a quality end-product.
During the process, electrical power is used to heat a consumable electrode by means of an electric arc—a luminous electrical discharge like a lightning strike—and the melting material drops into a water-cooled copper crucible. Like chocolate, flaws in specialty metals are often caused by solidification problems that arise during the melting and refining process—problems that can lead to failure of the final product. Unlike chocolate, these products are often used in aerospace and aviation applications, where lives can depend on the quality of the metal components that make up their vehicles.
Previously, the conditions that cause flaws in the alloys could not be identified during furnace operations, requiring manufacturers to perform extensive testing on the resulting ingots to test for safe levels of homogeneity. However, a new process developed by NETL metallurgists, called arc position sensing (APS), allows operators to digitally monitor arc location during VAR processing. Being able to “see” the arcs during melting helps the engineer to control them and the melting process to produce consistently defect-free materials—something that was not possible prior to the development of this technology.
The APS system has the potential to revolutionize the fabrication of specialty metals. Adoption of this technology can improve the quality of the ingots produced and reduce the amount of ingot testing required, saving manufacturers millions of dollars. In addition, APS could also lead to the production of materials with better chemical homogeneity, resulting in higher performance alloys.
This patented and award-winning technology has been licensed by AmpSci, an Oregon-based company founded by the technology’s inventors. Researchers at AmpSci are working to further develop the technology for widespread commercial deployment to the specialty metals industry. You can learn more about this NETL success story here. |
Getting serious about the social determinants of health: new directions for public health workers.
International interest in the social determinants of health and their public policy antecedents is increasing. Despite evidence that as compared to other wealthy nations Canada presents a mediocre population health profile and public policy environments increasingly less supportive of health, the Canadian public health gaze is firmly - and narrowly - focused on lifestyle issues of diet, physical activity and tobacco use. Much of this has to do with Canada being identified as being driven by a liberal political economy, a situation shared with a cluster of other developed nations. Reasons for Canada's neglect of structural and public policy issues are explored and ways by which public health workers in Canada and elsewhere can help to shift policymakers and the general public's understandings of the determinants of health are outlined. |
Finding better covers for public domain ebooks
Here at NYPL Labs we’re working on an ebook-borrowing and reading app. On the technical side, Leonard Richardson is doing all the back end magic, consolidating multiple data sources for each book into a single concise format: title, author, book cover and description. John Nowak is writing the code of the app itself (that you will be able to download to your phone). I am doing the design (and writing blog posts). Many of the ebooks we will be offering come from public domain sites such as Project Gutenberg. If you spend a few minutes browsing that site you will notice that many of its ebooks either have a really crappy cover image or none at all:
Book covers weren’t a big deal until the 20th century, but now they’re how people first interact with a book, so not having one really puts a book at a disadvantage. They are problematic, and not only in ebooks. It’s difficult to find high-quality, reusable covers of out-of-print or public domain books. There are some projects such as Recovering the Classics that approach this problem in interesting ways. However, we at NYPL are still left with very limited (and expensive) solutions to this problem.
Given that the app’s visual quality is highly dependent on ebook cover quality (a wall of bad book covers makes the whole app look bad), we had to have a solution for displaying ebooks with no cover or a bad cover. The easy answer in this situation is doing what retail websites do for products with no associated image: display a generic image.
This is not a very elegant solution. When dealing with books, it seems lazy to have a “nothing to see here” image. We will have at least a title and an author to work with. The next obvious choice is to make a generic cover that incorporates the book’s title and author. This is also a common choice in software such as iBooks:
Skeuomorphism aside, it is a decent book cover. However, it feels a bit cheesy and I wanted something more in line with the rest of the design of the app (a design which I am leaving for a future post). We need a design that can display very long titles (up to 80 characters) but that would also look good with short ones (two or three characters); it should allow for one credited author, multiple authors or none at all. I decided on a more plain and generic cover image:
Needless to say, this didn’t impress anyone, which is OK because the point was not to impress; we needed a cover that displayed author and title information and was legible to most people, and this checked every box… but… at the same time… wouldn’t it be cool if…
10 PRINT “BOOK COVER”
While discussing options for doing a better generative cover I remembered 10 PRINT, a generative-art project and book led by Casey Reas that explores one line of Commodore 64 (C64) code:
10 PRINT CHR$(205.5+RND(1)); : GOTO 10
This code draws one of two possible characters (diagonal up or diagonal down) on the screen at random, over and over again. The C64 screen can show up to 40 characters in a row. The end result is a maze-like graphic like the one seen in this video:
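For readers without a Commodore 64 handy, the same coin-toss idea can be sketched in a few lines of Python (slashes standing in for the two PETSCII diagonals; this sketch is an illustration and is not from the original post):
import random

# 25 rows of 40 characters, roughly the C64 text screen
for _ in range(25):
    print("".join(random.choice("\\/") for _ in range(40)))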
At the 2012 Eyeo festival, Casey Reas talked about this project, which involves nine other authors who are collected in this book. I highly recommend watching Reas’s presentation (link jumps to 30:11 when 10 PRINT is mentioned). The two characters–diagonal up and diagonal down–come from the C64 PETSCII character list which is laid out here on the Commodore keyboard:
Each key on the PETSCII keyboard has a geometric shape associated with it. These shapes can be used to generate primitive graphics in the C64 operating system. For example, here is a rounded rectangle (I added some space to make it easier to see each character):
In terms of the letters on the same keyboard, that rectangle looks like this:
UCCCI B B B B JCCCK
10 PRINT was the starting point for my next ebook cover generator. In 10 PRINT a non-alphanumeric character is chosen by a random “coin toss” and displayed as a graphic. In my cover generator, a book’s title is transformed into a graphic. Each letter A-Z and digit 0-9 is replaced with its PETSCII graphic equivalent (e.g. the W gets replaced with an empty circle). I used Processing to quickly create sketches that allowed for some parameter control such as line thickness and grid size. For characters not on the PETSCII “keyboard” (such as accented Latin letters or Chinese characters) I chose a replacement graphic based on the output of passing the character into Processing’s int() function.
Colors and fonts
In order to have a variety of colors across the books, I decided to use the combined length of the book title and the author’s name as a seed number, and use that seed to generate a color. This color and its complementary are used for drawing the shapes. Processing has a few functions that let you easily create colors. I used the HSL color space, which facilitates generating complementary colors (each color, or hue in HSL parlance, is located at a point on a circle; its complementary is the diametrically opposite point). The gist code:
int counts = title.length() + author.length();
int colorSeed = int(map(counts, 2, 80, 30, 260));
colorMode(HSB, 360, 100, 100);
shapeColor = color(colorSeed, baseSaturation, baseBrightness - (counts % 20));
baseColor = color((colorSeed + 180) % 360, baseSaturation, baseBrightness);
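The same seed-to-colour idea can be written in plain Python with the standard colorsys module (a sketch, not from the original post: the 2-80 character range and the 30-260 hue window come from the Processing snippet above, the small brightness tweak is ignored, and HSV stands in for Processing's HSB):
import colorsys

def cover_colors(title, author, saturation=0.9, brightness=0.9):
    counts = len(title) + len(author)
    # map the combined length from the range 2..80 onto a hue window of 30..260 degrees
    counts = min(max(counts, 2), 80)
    hue = 30 + (counts - 2) * (260 - 30) / (80 - 2)
    shape = colorsys.hsv_to_rgb(hue / 360.0, saturation, brightness)
    base = colorsys.hsv_to_rgb(((hue + 180) % 360) / 360.0, saturation, brightness)
    return shape, base   # each is an (r, g, b) tuple with components in 0..1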
This results in something like:
To ensure legibility and avoid clashes with the generated colors, I always use black on white for text. I chose Avenir Next as the font. The app as a whole uses that font for its interface, it’s already installed on the OS and it contains glyphs for multiple languages.
There are more (and better) ways to create colors using code. I didn’t really go down the rabbit hole here but if you feel so inclined, take a look at Herman Tulleken’s work with procedural color palettes, Rob Simmon’s extensive work on color, or this cool post on emulating iTunes 11’s album cover color extractor.
Shapes
I created a function that draws graphic alternate characters for the letters A-Z and the digits 0-9. I decided to simplify a few graphics to more basic shapes: the PETSCII club (X) became three dots, and the spade (A) became a triangle.
I wrote a function that draws a shape given a character k, a position x, y and a size s. Here you can see the code for drawing the graphics for the letter Q (a filled circle) and the letter W (an open circle).
void drawShape(char k, int x, int y, int s) {
  ellipseMode(CORNER);
  fill(shapeColor);
  switch (k) {
    case 'q':
    case 'Q':
      ellipse(x, y, s, s);
      break;
    case 'w':
    case 'W':
      ellipse(x, y, s, s);
      s = s - (shapeThickness * 2);
      fill(baseColor);
      ellipse(x + shapeThickness, y + shapeThickness, s, s);
      break;
  }
}
My cover generator calls drawShape repeatedly for each character in a book’s title. The size of the shape is controlled by the length of the title: the longer the title, the smaller the shape.
Each letter in the title is replaced by a graphic and repeated as many times as it can fit in the space allotted. The resulting grid is a sort of visualization of the title; an alternate alphabet. In the example below, the M in “Macbeth” is replaced by a diagonal downwards stroke (the same character used to great effect in 10 PRINT). The A is replaced by a triangle (rather than the club found on the PETSCII keyboard). The C becomes a horizontal line offset from the top, the B a vertical line offset from the left, and so on. Since the title is short, the grid is large, and the full title is not visible, but you get the idea:
There is a Git repository for this cover generator you can play with.
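If you just want the gist of the tiling logic without opening the Processing sketch, here is a minimal Python sketch of it (the function and variable names are hypothetical, and the grid-sizing rule is an assumption based on the description above: the longer the title, the more and smaller the cells):
import math

def cover_grid(title, cover_size=400):
    text = title or "?"
    grid = max(2, math.ceil(math.sqrt(len(text))))   # longer titles -> finer grid, smaller shapes
    cell = cover_size // grid
    tiles = []
    for i in range(grid * grid):                     # repeat the title until the grid is full
        ch = text[i % len(text)]
        x, y = (i % grid) * cell, (i // grid) * cell
        tiles.append((ch, x, y, cell))               # each tuple feeds something like drawShape(ch, x, y, cell)
    return tiles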
Some more examples (notice how “Moby Dick”, nine characters including the space, does fit in the 3x3 grid below and how the M in “Max” is repeated):
And so on:
The original design featured the cover on a white (or very light) background. This proved problematic, as the text could be dissociated from the artwork, so we went for a more “enclosed” version (I especially like how the Ruzhen Li cover turned out!):
We initially thought about generating all these images and putting them on a server along with the ebooks themselves, but 1) it is an inefficient use of network resources since we needed several different sizes and resolutions and 2) when converted to PNG the covers lose a lot of their quality. I ended up producing an Objective-C version of this code (Git repo) that will run on the device and generate a cover on-the-fly when no cover is available. The Obj-C version subclasses UIView and can be used as a fancy-ish “no cover found” replacement.
Cover, illustrated
Of course, these covers do not reflect the content of the book. You can’t get an idea of what the book is about by looking at the cover. However, Leonard brought up the fact that many Project Gutenberg books, such as this one, include illustrations embedded as JPG or PNG files. We decided to use those images, when they are available, as a starting point for a generated cover. Our idea is to generate one cover for each illustration in a book and let people decide which cover is best using a simple web interface.
I tried a very basic first pass using Python (which I later abandoned for Processing):
This lacks personality and becomes problematic as titles get longer. I then ran into Chris Marker and Jason Simon’s work, and was inspired:
I liked the desaturated color and emphasis on faces. Faces can be automatically detected in images using computer-vision algorithms, and some of those are included in OpenCV, an open-source library that can be used in Processing. Here’s my first attempt in the style of Marker and Simon, with and without face detection added:
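(For readers curious about the face-detection step itself, it can be sketched with OpenCV's bundled Haar cascade in Python; the blog's generator used OpenCV through Processing, so treat this only as an equivalent illustration, not the code that produced the covers.)
import cv2

def detect_faces(path):
    # load OpenCV's stock frontal-face Haar cascade
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    # returns a list of (x, y, w, h) boxes that a cover layout could crop around
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)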
I also tried variations on the design, adding or removing elements, and inverting the colors:
Since Leonard and I couldn’t agree on which variation was best, we decided to create a survey and let the people decide (I am not a fan of this approach, which can easily become a 41 shades of blue situation but I also didn’t have a compelling case for either version). The clear winner was, to my surprise, using inverted colors and no face detection:
The final Processing sketch (Git repo) has many more parameters than the 10 PRINT generator:
Conclusion
As with many subjects, you can go really deep down the rabbit hole when it comes to creating the perfect automated book cover. What if we detect illustrations vs. photographs and produce a different style for each? What about detecting where the main image is so we can crop it better? What if we do some OCR on the images to automatically exclude text-heavy images which will probably not work as covers?
This can become a never-ending project and we have an app to ship. This is good enough for now. Of course, you are welcome to play with and improve on it: |
William Clark (congressman)
William Clark (February 18, 1774 – March 28, 1851) was a farmer, jurist, and politician from Dauphin, Pennsylvania.
Biography
He served as secretary of the Pennsylvania land office from 1818 to 1821, and State treasurer from 1821 to 1827. He was Treasurer of the United States from June 4, 1828 to November 1829.
Clark was elected as an Anti-Masonic candidate to the Twenty-third and Twenty-fourth Congresses. He was a member of the State constitutional revision commission in 1837. After Congress, he engaged in agricultural pursuits and died near Dauphin in 1851. He was interred in English Presbyterian Cemetery.
External links
The Political Graveyard
References
Category:1774 births
Category:1851 deaths
Category:People from Dauphin County, Pennsylvania
Category:Anti-Masonic Party members of the United States House of Representatives from Pennsylvania
Category:19th-century American politicians
Category:Treasurers of the United States
Category:Pennsylvania state court judges
Category:People from Pennsylvania in the War of 1812 |
Determination of seven free anabolic steroid residues in eggs by high-performance liquid chromatography-tandem mass spectrometry.
A cheap, reliable and practical high-performance liquid chromatography-tandem mass spectrometric method was developed for the simultaneous determination of seven anabolic steroids in eggs, including trenbolone, boldenone, nandrolone, stanozolol, methandienone, testosterone and methyl testosterone. The analytes were extracted from the egg samples using methanol. The extracts were subjected to the removal of fat by freezing-lipid filtration and then further purified by liquid-liquid extraction using tert-butyl methyl ether. The analytes were separated on a Luna C18 column by a gradient elution program with 0.1% formic acid and acetonitrile. This method was validated over 1.00-100 ng/g for all steroids of interest. The correlation coefficients (r) for each calibration curve are higher than 0.99 within the experimental concentration range. The decision limits of the steroids in eggs ranged from 0.20 to 0.44 ng/g, and the detection capabilities were below 1.03 ng/g. The average recoveries were between 66.3 and 82.8% in eggs at three spiked levels of 1.00, 1.50 and 2.00 ng/g for each analyte. The between-day and within-day relative standard deviations were in the range of 2.4-11%. High matrix suppression effects were observed for all compounds of interest. |
|
2018 Sindh provincial election
Provincial elections were held in Sindh on 25 July 2018 to elect the members of the 13th Provincial Assembly of Sindh.
Background
Following the 2013 elections, despite a significant drop in vote share, the left-wing Pakistan Peoples Party remained the largest party in the assembly and held a comfortable majority with 91 seats. It was followed by the secularist, Muhajir-centric Muttahida Qaumi Movement (MQM), which repeated its 2008 exploits by securing 51 seats. New additions to the assembly included Pakistan Tehreek-e-Insaf, a welfarist, anti-establishment party led by former cricketer Imran Khan, which emerged as the second largest party in Karachi and gained 4 seats. Meanwhile, the Pakistan Muslim League (F), the PPP's perennial rival in Interior Sindh, held 11 seats.
In the subsequent election for the office of chief minister, the Pakistan Peoples Party was easily able to form a government in Sindh for the ninth time in its existence. Party veteran Qaim Ali Shah was elected provincial chief minister for the third time in his career, and remained in the position until 2016, when he stepped down and was replaced by Syed Murad Ali Shah.
MQM Splits
During this tenure, the MQM ceased to exist as a single party due to internal rifts in the wake of a controversial speech given by the party's leader, Altaf Hussain, in August 2016. It split into MQM-Pakistan and MQM-London, the former controlled by Farooq Sattar and the latter managed by Hussain, who has been in self-imposed exile in London since 1991.
Meanwhile, Mustafa Kamal's nascent Pak Sarzameen Party chipped away at MQM-P members. Kamal himself is a former MQM stalwart and erstwhile Mayor of Karachi, and he formed the PSP on 23 March 2016.
Further still, in the lead-up to the 2018 Senate elections, the MQM-P faction saw another split, into Sattar's MQM-PIB and Aamir Khan's MQM-Bahadurabad, the reason being grievances over the allotment of Senate tickets.
Results
The election was postponed in constituency PS-94 following the death of the MQM-P incumbent.
References
Category:2018 elections in Pakistan
2018 |
Despite many claims to the contrary, North Korea tensions aren't actually what's driving the rally in gold, Goldman Sachs said in a Tuesday note.
Instead, the bank said, uncertainty inspired by President Donald Trump has boosted the yellow metal — but that's set to fade.
Spot gold has certainly rallied of late, climbing from levels under $1,212 an ounce in July to as high as $1,342.90 this week, touching its highest levels in around a year, according to Reuters data.
Gold, which traditionally acts as a safe-haven play when investors turn nervous, was at $1,338.50 an ounce at 9:41 a.m. HK/SIN on Wednesday.
Some of the metal's gains have coincided with increased tensions on the Korean Peninsula, including when North Korea claimed a successful hydrogen bomb test on Sunday.
Goldman, however, wasn't saying the gold rally was unrelated to the North Korean tensions, just that they explained only around $15 of the more-than-$100 rally.
"We find that the events in Washington over the past two months play a far larger role in the recent gold rally followed by a weaker ," it said, adding that's the reason the yellow metal likely wouldn't hold its gains.
Barring a "substantial" escalation of North Korean tensions, Goldman said it was sticking with an end-of-year gold forecast of $1,250 an ounce. |
Q:
Mark deletion on custom fields
I am using MongoDB as the backend for the node_save functionality, and I have migrated my custom fields. Every node save calls hook_field_storage_write(), so the data is first saved in MySQL and then the MongoDB implementation is called. This hook inserts the document into MongoDB and calls mongodb_migrate_write_helper(), which sets the deleted column of the migrated fields to 2.
Thus, if I have migrated a field 'field_email' from MySQL to MongoDB, mongodb_migrate_write_helper() sets field_email for the entity as deleted = 2. What does the deleted flag do? Are rows marked as deleted = 2 removed at a specific point in time, or by some hook calls? I have seen many instances in the core modules where deleted is set to 1. Are there any purge scripts that run at specific points in time to delete fields marked as deleted?
function mongodb_migrate_write_helper($entity_type, $entity_id) {
  // Migrated field names are stored in a variable.
  $migrate_fields = variable_get('mongodb_migrate_fields', array());
  foreach ($migrate_fields as $field_name => $v) {
    $field = field_info_field($field_name);
    // Mark this entity's rows in the field's SQL data table with deleted = 2.
    db_update(_field_sql_storage_tablename($field))
      ->fields(array('deleted' => 2))
      ->condition('entity_type', $entity_type)
      ->condition('entity_id', $entity_id)
      ->execute();
  }
}
A:
The deleted column in a field table is:
A boolean indicating whether this data item has been deleted
Therefore the only valid values are 0 and 1. Or, at least, 0 == FALSE, and anything non-zero is equivalent to TRUE.
You'd need to ask the module developers for their motivation to be 100% sure why they're bucking the trend there and using '2' instead, but maybe it's some sort of hack to exclude certain records from being queried with WHERE deleted = 1, but still available for WHERE deleted > 0. Not sure though, that's just a guess.
As for what it does: it simply marks a record as deleted, so it won't be included in query results, and can be moved into a deleted data table, from which it's subsequently removed on cron runs.
|
1. Introduction {#sec1-sensors-18-00892}
===============
We are interested in high-precision positioning of shortwave signal sources in this paper. Two-step methods, such as the Angle Of Arrival (AOA) method, were usually used for shortwave signal positioning, and these methods provide poor performance in low Signal-to-Noise Ratio (SNR) scenarios. It has been shown that available prior knowledge on deterministic multi-path components can be beneficial for localization \[[@B1-sensors-18-00892]\]. Jan Kietlinski-Zaleski presented techniques to benefit from signal reflections from known indoor features such as walls \[[@B2-sensors-18-00892]\]. Inspired by those ideas, we propose a novel geolocation system architecture to locate shortwave sources. This new architecture, termed "Multiple Transponders and Multiple Receivers for Multiple Emitters Positioning System (MTRE)", uses multiple transponders and receivers with known locations to locate multiple narrow band signals. The raw signals are transferred "in band" (i.e., as a man-made multi-path) by the transponders, so there is no need for the network infrastructure or the out-of-band channel bandwidth required in an up/down converter system. In order to avoid interference between the receiving and sending signals of a transponder, we use different polarization modes to isolate the signals. In an MTRE system, man-made multiple paths from an emitter to a receiver are created to improve the positioning precision and extend the positioning range.
Multi-path propagation is a major problem in outdoor and indoor positioning systems, and it is still the main source of estimation errors for range-based indoor localization approaches \[[@B3-sensors-18-00892],[@B4-sensors-18-00892]\]. Recent research on dealing with multi-path either tries to detect these situations statistically based on the received signals \[[@B5-sensors-18-00892],[@B6-sensors-18-00892]\] or directly mitigates the corresponding errors with statistical techniques \[[@B7-sensors-18-00892],[@B8-sensors-18-00892]\]. Some algorithms for indoor localization make use of, e.g., the cooperation of multiple agents to overcome multi-path situations \[[@B9-sensors-18-00892]\]. Arrays have been used for beam-forming to separate signals from different directions, so that the multi-path positioning problem is simplified into a single-path positioning problem \[[@B10-sensors-18-00892],[@B11-sensors-18-00892]\]. Furthermore, location fingerprinting, e.g., Received Signal Strength (RSS)-based methods, has been widely used in harsh environments \[[@B12-sensors-18-00892],[@B13-sensors-18-00892]\]. It makes use of a priori training signals in multiple regions of the environment to train a classification algorithm \[[@B14-sensors-18-00892]\]. However, the required training phase, as well as the missing flexibility w.r.t. changes in the environment, may limit its application.
Most of the above literature focused on the UWB signals and indoor positioning applications, and two-step approaches were adopted to locate the emitters. The very high bandwidth of the UWB signal translates into very good time resolution and makes the UWB signal resistant to multi-path. It is possible to extract parameters, e.g., RSS, Time Of Arrival (TOA), AOA, Time Difference Of Arrival (TDOA) and Frequency Difference Of Arrival (FDOA), from UWB signals in the presence of multi-path propagation and to locate the emitter based on those parameters \[[@B15-sensors-18-00892],[@B16-sensors-18-00892]\]. However, narrow-band systems have a low time resolution, and it is difficult to get the measurements in the first step.
Direct Position Determination (DPD) methods were proposed in \[[@B17-sensors-18-00892]\] for single narrow band signal positioning and in \[[@B18-sensors-18-00892]\] for multiple narrow band signal positioning. A DPD approach processes the data collected at all sensors together and uses both the array responses and the Times Of Arrival (TOA) at each array, in contrast to the two separate steps of parameter measurement and location determination. From the optimization theory point of view, two-step methods are sub-optimal, since the parameter estimation in the first phase is done independently, without considering the constraint that the measurements must correspond to the same source position. DPD methods overcome the problem of associating estimated parameters with their relevant sources and were shown to outperform two-step methods, especially in low SNR scenarios \[[@B19-sensors-18-00892]\].
There have been only a few attempts to improve the accuracy of emitter positioning in the presence of multi-path propagation under the DPD framework. Most of the existing DPD methods were developed for a single-path channel in which the multi-path was modeled as additive noise \[[@B20-sensors-18-00892]\]. In \[[@B20-sensors-18-00892]\], the single-path DPD was tested in a two-path channel scenario and showed improved performance over two-step methods. DPD with small local scattering was studied in \[[@B21-sensors-18-00892],[@B22-sensors-18-00892]\]. In a scattering scenario, sensors were affected by a set of virtual emitters placed randomly in close proximity to the real emitter. It was assumed that the positions of the virtual emitters were i.i.d., each following a 3D Gaussian distribution. Oded Bialer, Dan Raphaeli and Anthony J. Weiss \[[@B23-sensors-18-00892],[@B24-sensors-18-00892]\] proposed a positioning algorithm for a dense multipath environment. Each received signal was obtained by convolving the transmitted pulse with a channel impulse response, and only the first arrival cluster (the direct path) was taken into consideration in their work. The signals reflected from other objects were not modeled in their work.
Papakonstantinou and Slock \[[@B25-sensors-18-00892],[@B26-sensors-18-00892]\] considered a simplified single-bounce multipath model. The model assumed that the transmitted signal did not bounce off more than one scatterer. They jointly estimated the positions of the target and the scatterers. They studied the single-emitter positioning problem in the presence of multi-path propagation and assumed that the signal waveform and path attenuations were known in advance. Miljko and Vucic \[[@B27-sensors-18-00892]\] proposed a novel direct geolocation method for an Ultra WideBand (UWB) source in the presence of multi-path using the MUltiple SIgnal Classification (MUSIC) method and focusing matrices. Only one emitter was taken into consideration, and the path attenuations were assumed known in advance in their work. Bar-Shalom et al. \[[@B28-sensors-18-00892],[@B29-sensors-18-00892]\] proposed a transponder-aided Single Platform Geolocation (SPG) model. A single emitter and a single receiver were assumed in the SPG model. They stated that the SPG model achieved a similar performance to the multiple-RX DPD algorithm. The multiple-RX DPD algorithm mentioned in their works assumed that the transponders were replaced by receivers directly. In a weak-signal location application, a single receiver cannot receive signals from all transponders stably; some paths may be blocked or disrupted. Multiple emitters, multiple transponders and multiple receivers need to be taken into consideration in a weak-signal positioning application.
All unknown parameters should be estimated together in a DPD model, and this leads to a large-scale parameter search. MUSIC methods calculate the spatial spectrum of each candidate position rather than searching over the combinations of all emitter positions. Amar et al. \[[@B30-sensors-18-00892]\] studied multiple known and unknown radio-frequency signals under the LoS (Line of Sight) channel assumption. A simplified MUSIC algorithm was adopted to avoid the large-scale parameter search. The cost function in \[[@B30-sensors-18-00892]\] maximized the projection of the array manifold onto the signal subspace rather than minimizing the projection onto the noise subspace. The simplified cost function took advantage of the maximization of a convex Quadratic Programming (QP) problem with linear constraints, and the eigenvalue structure was exploited to avoid searching over the path attenuation parameters in their work. The simplified MUSIC worked well in an LoS propagation context, but it had a poor performance in a multi-path propagation scenario due to the singularity of the array manifold. Minimizing the projection of the array manifold onto the noise subspace overcomes the shortcomings of a signal subspace projection method. However, the eigenvalue system fails to resolve the minimization programming.
Existing DPD methods mainly focus on narrow-band signal positioning \[[@B18-sensors-18-00892],[@B29-sensors-18-00892],[@B31-sensors-18-00892],[@B32-sensors-18-00892],[@B33-sensors-18-00892]\] and usually assume that the carrier phase does not carry the propagation delay information. Complex channel attenuations were estimated to eliminate the influence of carrier phase misalignment in a narrow-band signal positioning method. We point out that the narrow-band assumption loses the phase information in an LoS positioning application and cannot locate emitters at all in a multi-path positioning application. We add the constraint that path attenuations are non-negative real numbers in our model. In an existing DPD model, path attenuations are complex numbers with only one equality constraint (the norm of the path attenuations is one), and the Lagrange-multiplier method is very effective at solving optimizations with equality constraints \[[@B34-sensors-18-00892]\]. However, it is difficult to solve an optimization with inequality constraints (the path attenuations should be non-negative). We are therefore required to design an efficient algorithm to solve the QP with inequality constraints.
The performance of a MUSIC method is determined by the precision of the covariance matrix estimation. In a time-sensitive application, the number of snapshots is not enough, and it is difficult to estimate the covariance matrix precisely. The maximum likelihood method maximizes the likelihood function of the received data rather than estimating the covariance matrix, and it achieves a better performance than that of the MUSIC method. However, the dimension of the searching space turns out to be unacceptable in the maximum likelihood method.
Our motivation is to develop a simple and accurate positioning model and corresponding algorithms for the case of unknown waveform signals and multi-path environment. We establish a Multi-path Propagation (MP)-DPD model for the scenario of multiple emitters, multiple transponders and multiple receiving arrays. It can be viewed as a modified and extended version of the SPG model proposed in \[[@B29-sensors-18-00892]\]. The MP-DPD reduces the risk of paths being blocked or disrupted and fixes the constraints on path attenuations. Multiple emitters can be simultaneously positioned in the MP-DPD model, as well. MP-MUSIC and MP-ML methods are proposed to reduce the time consumption of the optimization. The numerical results and the Cramér--Rao Lower Bound (CRLB) analysis show that the MP-MUSIC method has a lower computing complexity than MP-ML, especially in the case of a complex multipath scenario. The MP-ML method is more precise than MP-MUSIC, especially in the case of positioning with limited snapshots. An Active Set Algorithm (ASA) for the MP-MUSIC and an iterative algorithm for the MP-ML are developed to reduce the computational complexity of the methods further. Numerical results demonstrate that the MP-MUSIC and MP-ML proposed in this paper outperform the conventional methods.
The paper is organized as follows: [Section 2](#sec2-sensors-18-00892){ref-type="sec"} outlines the problem formulation, and an MP-DPD model is established in this section. The MP-MUSIC method, the MP-ML method and the corresponding algorithms are proposed in [Section 3](#sec3-sensors-18-00892){ref-type="sec"} and [Section 4](#sec4-sensors-18-00892){ref-type="sec"}. Numerical performance examples of these algorithms are given in [Section 5](#sec5-sensors-18-00892){ref-type="sec"}. The final conclusions are given in [Section 6](#sec6-sensors-18-00892){ref-type="sec"}. Finally, the detailed descriptions of the ASA algorithm, the iterative algorithm for the MP-ML method and the derivation of the CRLB are provided in the Appendix.
2. Problem Formulation {#sec2-sensors-18-00892}
======================
Consider that there are *D* emitters located at $\mathbf{p}_{e} = {\lbrack\mathbf{p}_{e}^{T}\left( 1 \right),\mathbf{p}_{e}^{T}\left( 2 \right),\ldots,\mathbf{p}_{e}^{T}\left( D \right)\rbrack}^{T}$ and *L* passive transponders placed at $\mathbf{p}_{t} = {\lbrack\mathbf{p}_{t}^{T}\left( 1 \right),\mathbf{p}_{t}^{T}\left( 2 \right),\ldots,\mathbf{p}_{t}^{T}\left( L \right)\rbrack}^{T}$. The signals transmitted by the emitters are reflected by the transponders and intercepted by *N* receiving arrays. Each array includes *M* antennas. The centers of the arrays are located at $\mathbf{p}_{r} = {\lbrack\mathbf{p}_{r}^{T}\left( 1 \right),\mathbf{p}_{r}^{T}\left( 2 \right),\ldots,\mathbf{p}_{r}^{T}\left( N \right)\rbrack}^{T}$. It is assumed that the locations of the transponders and the receiving arrays are known a priori and that the signal waveforms are unknown. The scenario is depicted in [Figure 1](#sensors-18-00892-f001){ref-type="fig"}.
Denote the signal propagation delay between the *d*-th emitter and the *ℓ*-th transponder by ${\overline{\tau}}_{d\ell}$. Denote:$${\widetilde{\mathbf{\tau}}}_{\ell n} = {\lbrack{\widetilde{\tau}}_{\ell n1},{\widetilde{\tau}}_{\ell n1},\ldots,{\widetilde{\tau}}_{\ell nM}\rbrack}^{T},$$ where ${\widetilde{\tau}}_{\ell nm}$ is the propagation delay between the *ℓ*-th transponder and the *m*-th antenna in the *n*-th receiving array. ${\widetilde{\mathbf{\tau}}}_{\ell n}$ is an $M \times 1$ column vector, which represents the propagation delays from the *ℓ*-th transponder to the *n*-th receiving array. ${\widetilde{\mathbf{\tau}}}_{\ell n}$ is known in advance, and it is independent of the emitter positions.
The path attenuation from the *d*-th emitter to the *n*-th receiving array, reflected by the *ℓ*-th transponder, is denoted by $\alpha_{d\ell n}$. The path attenuation coefficients are assumed to be non-negative real numbers, and the rationale for this assumption is discussed in detail in [Section 3.1.2](#sec3dot1dot2-sensors-18-00892){ref-type="sec"}. We assume that the antennas in a receiving array are uniform, and all antennas in an array share the same path attenuation coefficient.
The time-domain model of the signals that are received by the *n*-th receiving array is:$${\overline{\mathbf{r}}}_{n}\left( t \right) = \sum\limits_{\ell = 1}^{L}\sum\limits_{d = 1}^{D}{\lbrack\alpha_{d\ell n}{\overline{\mathbf{s}}}_{d}\left( t - {\widetilde{\mathbf{\tau}}}_{\ell n} - {\overline{\tau}}_{d\ell} - t_{d} \right)\rbrack} + \overline{\mathbf{n}}\left( t \right),$$ where ${\overline{\mathbf{r}}}_{n}\left( t \right)$ is an $M \times 1$ column vector, which represents *M* snapshots at time *t* of the *n*-th receiving array. ${\overline{\mathbf{s}}}_{d}\left( \mathbf{t} \right)$ is an $M \times 1$ column vector, which represents *M* snapshots of the *d*-th source signal at time vector $\mathbf{t} \triangleq t - {\widetilde{\mathbf{\tau}}}_{\ell n} - {\overline{\tau}}_{d\ell} - t_{d}$. $\overline{\mathbf{n}}\left( t \right)$ is an $M \times 1$ noise vector at time *t*. $0 \leq t \leq T$, and $t_{d}$ is the unknown transmit time of the emitter *d*. We assume that the path attenuation, $\alpha_{d\ell n}$, remains constant during the observation time interval. This paper mainly focuses on the positioning of deterministic, but unknown signals. It is assumed that source signals are independent of one another, and there is no further requirement for the code or waveform of the signals. The frequency-domain model for the *k*-th DFT coefficients is given by:$$\mathbf{r}_{n}\left( k \right) = \sum\limits_{\ell = 1}^{L}\sum\limits_{d = 1}^{D}\alpha_{d\ell n}{\widetilde{\mathbf{a}}}_{\ell n}\left( k \right)e^{- i\omega_{k}{\overline{\tau}}_{d\ell}}s_{d}\left( k \right) + \mathbf{n}\left( k \right),$$ where:$$\begin{aligned}
{{\widetilde{\mathbf{a}}}_{\ell n}\left( k \right)} & {= e^{- i\omega_{k}{\widetilde{\mathbf{\tau}}}_{\ell n}},} \\
{{\check{s}}_{d}\left( k \right)} & {= s_{d}\left( k \right)e^{- i\omega_{k}t_{d}},} \\
\omega_{k} & {= \frac{2\pi k}{T},} \\
k & {= 1,2,\cdots,K,} \\
\end{aligned}$$ where $s_{d}\left( k \right)$ is the *k*-th Fourier coefficient of the *d*-th source signal ${\overline{s}}_{d}\left( t \right),t \in {\lbrack 0,T\rbrack}$. $\mathbf{r}_{n}\left( k \right)$ and $\mathbf{n}\left( k \right)$ are $M \times 1$ vectors of the *k*-th Fourier coefficients of ${\overline{\mathbf{r}}}_{n}\left( t \right)$ and $\overline{\mathbf{n}}\left( t \right)$. ${\widetilde{\mathbf{a}}}_{\ell n}\left( k \right)$ is an $M \times 1$ vector, which denotes the generalized array response of the *n*-th receiver at frequency $\omega_{k}$. Make ([3](#FD3-sensors-18-00892){ref-type="disp-formula"}) into matrix form:$$\mathbf{r}\left( k \right) = \mathbf{A}\left( k \right)\check{\mathbf{s}}\left( k \right) + \mathbf{n}\left( k \right),$$ where:$$\begin{aligned}
{\mathbf{r}\left( k \right)} & {\triangleq {\lbrack\mathbf{r}_{1}^{T}\left( k \right),\mathbf{r}_{2}^{T}\left( k \right),\ldots,\mathbf{r}_{N}^{T}\left( k \right)\rbrack}^{T},} \\
{\mathbf{r}_{n}^{T}\left( k \right)} & {= {\lbrack\mathbf{r}_{n1}^{T}\left( k \right),\mathbf{r}_{n2}^{T}\left( k \right),\ldots,\mathbf{r}_{nM}^{T}\left( k \right)\rbrack}^{T},} \\
{\mathbf{A}\left( k \right)} & {\triangleq \widetilde{\mathbf{A}}\left( k \right)\mathbf{V}\left( k \right)\mathbf{\alpha},} \\
{\widetilde{\mathbf{A}}\left( k \right)} & {= \begin{bmatrix}
{{\widetilde{\mathbf{A}}}_{1}\left( k \right)} & 0 & \cdots & 0 \\
0 & {{\widetilde{\mathbf{A}}}_{2}\left( k \right)} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & {{\widetilde{\mathbf{A}}}_{N}\left( k \right)} \\
\end{bmatrix},} \\
{{\widetilde{\mathbf{A}}}_{n}\left( k \right)} & {= \lbrack{\widetilde{\mathbf{a}}}_{1n}\left( k \right),{\widetilde{\mathbf{a}}}_{2n}\left( k \right),\ldots,{\widetilde{\mathbf{a}}}_{Ln}\left( k \right)\rbrack,} \\
\end{aligned}$$ $$\begin{aligned}
{\mathbf{V}\left( k \right)} & {= \mathbf{I}_{N} \otimes \overline{\mathbf{V}}\left( k \right),} \\
{\overline{\mathbf{V}}\left( k \right)} & {= \lbrack{\overline{\mathbf{V}}}_{1}\left( k \right),{\overline{\mathbf{V}}}_{2}\left( k \right),\ldots,{\overline{\mathbf{V}}}_{D}\left( k \right)\rbrack,} \\
{{\overline{\mathbf{V}}}_{d}\left( k \right)} & {= {diag}\left( {\lbrack e^{- i\omega_{k}{\overline{\tau}}_{d1}},e^{- i\omega_{k}{\overline{\tau}}_{d2}},\ldots,e^{- i\omega_{k}{\overline{\tau}}_{dL}}\rbrack} \right),} \\
\mathbf{\alpha} & {= \begin{bmatrix}
\mathbf{\alpha}_{1} \\
\mathbf{\alpha}_{2} \\
\vdots \\
\mathbf{\alpha}_{N} \\
\end{bmatrix},} \\
\mathbf{\alpha}_{n} & {= \begin{bmatrix}
\mathbf{\alpha}_{1n} & 0 & \cdots & 0 \\
0 & \mathbf{\alpha}_{2n} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \mathbf{\alpha}_{Dn} \\
\end{bmatrix},} \\
\mathbf{\alpha}_{dn} & {= {\lbrack\alpha_{d1n},\alpha_{d2n},\ldots,\alpha_{dLn}\rbrack}^{T},} \\
{\check{\mathbf{s}}\left( k \right)} & {\triangleq {\lbrack{\check{s}}_{1}\left( k \right),{\check{s}}_{2}\left( k \right),\ldots,{\check{s}}_{D}\left( k \right)\rbrack}^{T}.} \\
\end{aligned}$$ where ⊗ is the Kronecker product and $\mathbf{I}_{N}$ is an identity matrix with a size of $N \times N$.
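To make the notation concrete, the following minimal Python sketch is given purely for illustration (the array shapes and variable names are assumptions made for this example, not notation from the model above). It builds the generalized array response $\widetilde{\mathbf{a}}_{\ell n}\left( k \right) = e^{- i\omega_{k}{\widetilde{\mathbf{\tau}}}_{\ell n}}$ with $\omega_{k} = 2\pi k/T$, as used in the frequency-domain model ([3](#FD3-sensors-18-00892){ref-type="disp-formula"}), for the *M* known delays between one transponder and one receiving array.

```python
import numpy as np

def generalized_array_response(tau_ln: np.ndarray, T: float, K: int) -> np.ndarray:
    """Return a (K, M) array whose k-th row is a_tilde_{ln}(k) = exp(-1j * w_k * tau_ln),
    for the M delays tau_ln between one transponder and the M antennas of one
    receiving array (shapes are illustrative assumptions)."""
    k = np.arange(1, K + 1)
    w_k = 2.0 * np.pi * k / T                   # omega_k = 2*pi*k / T
    return np.exp(-1j * np.outer(w_k, tau_ln))  # element (k, m) = exp(-i * omega_k * tau_m)
```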
Denote the second moments of variables by:$$\begin{aligned}
{E\{\mathbf{n}\left( k \right)\mathbf{n}^{H}\left( k \right)\}} & {= \sigma^{2}\mathbf{I}_{MN} \triangleq \Sigma,} \\
{E\{\mathbf{n}\left( k \right)\mathbf{n}\left( k \right)^{T}\}} & {= 0,} \\
\end{aligned}$$$$\begin{aligned}
{\mathbf{R}\left( k \right)} & {\triangleq E{\{\mathbf{r}\left( k \right)\mathbf{r}^{H}\left( k \right)\}} = \mathbf{A}\left( k \right)\mathsf{\Lambda}\left( k \right)\mathbf{A}^{H}\left( k \right) + \Sigma,} \\
\end{aligned}$$$$\begin{aligned}
{\mathsf{\Lambda}\left( k \right)} & {\triangleq E\{\check{\mathbf{s}}\left( k \right){\check{\mathbf{s}}}^{H}\left( k \right)\},} \\
\end{aligned}$$ where $\mathbf{I}_{MN}$ is an identity matrix with a size of $MN \times MN$. $\mathbf{R}\left( k \right)$ is a covariance matrix of received signals at frequency $\omega_{k}$. $\sigma$ is the noise standard deviation. The observed signal of each antenna $\overline{\mathbf{r}}\left( t \right)$ is partitioned into *J* sections, and each section is Fourier transformed. The *k*-th Fourier coefficient of the *j*-th section is denoted by $\mathbf{r}_{j}\left( k \right)$. The covariance matrix at frequency $\omega_{k}$ is estimated by:$$\begin{aligned}
{\hat{\mathbf{R}}\left( k \right)} & {= \frac{1}{J}\sum\limits_{j = 1}^{J}{\mathbf{r}_{j}\left( k \right)\mathbf{r}_{j}^{H}\left( k \right)}.} \\
\end{aligned}$$
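For illustration only, the sample covariance estimate $\hat{\mathbf{R}}\left( k \right)$ above can be computed per frequency bin as in the following minimal sketch; the $(J, K, MN)$ array layout is an assumption made for this example.

```python
import numpy as np

def estimate_covariances(r: np.ndarray) -> np.ndarray:
    """r: complex array of shape (J, K, M*N) holding the k-th Fourier coefficients of the
    J observation sections for all M*N antennas (shapes are illustrative assumptions).
    Returns R_hat of shape (K, M*N, M*N)."""
    J = r.shape[0]
    # Average the outer products r_j(k) r_j(k)^H over the J sections.
    return np.einsum('jka,jkb->kab', r, r.conj()) / J
```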
The relationship between the received signals and emitter positions has been established in ([5](#FD5-sensors-18-00892){ref-type="disp-formula"}), and it is named the MP-DPD model. The MP-DPD model optimizes the emitter positions directly to achieve a more accurate estimation. Two DPD methods are proposed under the MP-DPD framework: the MP-MUSIC method and the MP-ML method.
The array manifold projection onto the noise subspace is adopted as the cost function in the MP-MUSIC method, and the likelihood function of the received signals is adopted in the MP-ML method. If the number of snapshots is sufficient, MP-MUSIC consumes less time than MP-ML without degrading the performance. Besides, if the number of snapshots is not enough, MP-ML obtains more precise position estimations than MP-MUSIC.
3. MP-MUSIC Method {#sec3-sensors-18-00892}
==================
We first analyze the shortcomings of the existing MUSIC method for multi-path propagation positioning and then establish an MP-MUSIC model suitable for positioning in a multi-path environment. The corresponding algorithm is given at the end of this section.
3.1. The Limitation of Existing MUSIC Methods {#sec3dot1-sensors-18-00892}
---------------------------------------------
We first introduce the Signal Subspace Projection MUSIC (SSP-MUSIC) method, which is commonly used in DPD models, and then develop a Noise Subspace Projection MUSIC (NSP-MUSIC) method to overcome the shortcomings of SSP-MUSIC. Finally, we discuss the performance of SSP-MUSIC and NSP-MUSIC when adopted in a multi-path positioning application.
### 3.1.1. SSP-MUSIC {#sec3dot1dot1-sensors-18-00892}
Alon Amar and Anthony J. Weiss studied the positioning problem of multiple unknown radio-frequency signals in \[[@B30-sensors-18-00892]\]. The MUSIC method for LoS propagation positioning was proposed in their work. Amar maximized the manifold projection onto the signal subspace rather than minimizing the projection onto the noise subspace. The programming model of Amar's method was defined as:$${\lbrack{\hat{\mathbf{p}}}_{e},\hat{\mathbf{\alpha}}\rbrack} = \arg\max F\left( \mathbf{p}_{e},\mathbf{\alpha} \right) = \mathbf{\alpha}^{H}\mathbf{D}\left( \mathbf{p}_{e} \right)\mathbf{\alpha},$$ $$s.t.\left\{ \begin{array}{l}
{{\parallel \mathbf{\alpha} \parallel}_{F}^{2} = 1,} \\
{\mathbf{\alpha} \in \mathbb{C}^{ND},} \\
\end{array} \right.$$ where: $$\begin{aligned}
{\mathbf{D}\left( \mathbf{p}_{e} \right)} & {\triangleq \mathbf{H}^{H}\left\lbrack {\sum\limits_{k = 1}^{K}\Gamma^{H}\left( k \right)\mathbf{U}_{s}\left( k \right)\mathbf{U}_{s}^{H}\left( k \right)\Gamma\left( k \right)} \right\rbrack\mathbf{H},} \\
\end{aligned}$$$$\begin{aligned}
\mathbf{H} & {\triangleq \mathbf{I}_{N} \otimes 1_{M},} \\
\end{aligned}$$ where $\mathbf{\alpha}$ is the path attenuation vector, $\mathbb{C}^{ND}$ is the set of complex column vectors with the length of $ND$, ${\parallel \cdot \parallel}_{F}$ is the Frobenius norm of a matrix, $\mathbf{p}_{e}$ is the vector of emitter positions, $\mathbf{I}_{N}$ stands for the $N \times N$ identity matrix, $1_{M}$ stands for an $M \times 1$ column vector of ones, *M* stands for the number of antennas of each array, $\Gamma\left( k \right)$ is the array manifold matrix at frequency $\omega_{k}$, *N* is the number of receivers, *D* is the number of emitters, *K* is the frequency points of received signals and $\mathbf{U}_{s}\left( k \right)$ is made up of the eigenvectors of the covariance matrix of received signals corresponding to the *D* largest eigenvalues. The other parameter notations can be found in \[[@B30-sensors-18-00892]\].
Since ([10](#FD10-sensors-18-00892){ref-type="disp-formula"}) is a quadratic convex optimization with linear constraints in the complex field, the maximum of the cost function is the maximal eigenvalue of the matrix $\mathbf{D}\left( \mathbf{p}_{e} \right)$\[[@B31-sensors-18-00892]\]; thus, the optimal cost function value is $F^{*}\left( \mathbf{p}_{e} \right) = \lambda_{\max}{\{\mathbf{D}\left( \mathbf{p}_{e} \right)\}}$, where $\lambda_{\max}{\{ \cdot \}}$ represents the maximal eigenvalue of a matrix. Benefiting from the simplified cost function and the eigenvalue system, Amar's method reduced the searching dimension from $2DK + 2\left( N - 1 \right)D + 3$ to three.
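For orientation, a minimal sketch of evaluating this SSP-MUSIC spectrum value $F^{*}\left( \mathbf{p}_{e} \right) = \lambda_{\max}{\{\mathbf{D}\left( \mathbf{p}_{e} \right)\}}$ for a single candidate position is given below. It is an illustration rather than the implementation of \[[@B30-sensors-18-00892]\]; the shapes assumed for `Gamma` (the $MN \times MN$ diagonal array response of the candidate position) and `Us` (the $MN \times D$ signal-subspace bases) are stated in the docstring and are assumptions made for the example.

```python
import numpy as np

def ssp_music_value(Gamma, Us, N, M):
    """F*(p) = lambda_max{ H^H [ sum_k Gamma_k^H U_k U_k^H Gamma_k ] H },  H = I_N kron 1_M.
    Gamma: list of K (M*N, M*N) diagonal array-response matrices for one candidate position.
    Us:    list of K (M*N, D) signal-subspace bases (assumed shapes)."""
    H = np.kron(np.eye(N), np.ones((M, 1)))         # (M*N, N)
    D = sum(G.conj().T @ U @ U.conj().T @ G for G, U in zip(Gamma, Us))
    D = H.conj().T @ D @ H                          # (N, N), Hermitian
    return np.linalg.eigvalsh(D)[-1]                # largest eigenvalue = spectrum value at p
```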
### 3.1.2. NSP-MUSIC {#sec3dot1dot2-sensors-18-00892}
A noise subspace projection MUSIC method is proposed in this paper to remove the simplification of the SSP-MUSIC. NSP-MUSIC minimizes the manifold projection onto the noise subspace:$${\lbrack\hat{\mathbf{p}_{e}},\hat{\mathbf{\alpha}}\rbrack} = \arg\min\overline{F}\left( \mathbf{p}_{e},\mathbf{\alpha} \right) = \mathbf{\alpha}^{H}\mathbf{H}^{H}\sum\limits_{k = 1}^{K}\left\{ {\Gamma^{H}\left( k \right)\left\lbrack {\mathbf{I} - \mathbf{U}_{s}\left( k \right)\mathbf{U}_{s}^{H}\left( k \right)} \right\rbrack\Gamma\left( k \right)} \right\}\mathbf{H}\mathbf{\alpha}.$$
Reorganize the items in ([14](#FD14-sensors-18-00892){ref-type="disp-formula"}):$${\lbrack{\hat{\mathbf{p}}}_{e},\hat{\mathbf{\alpha}}\rbrack} = \arg\max\widetilde{F}\left( \mathbf{p}_{e},\mathbf{\alpha} \right) = \frac{1}{\mathbf{\alpha}^{H}\mathbf{I}\left( \mathbf{p}_{e} \right)\mathbf{\alpha} - \mathbf{\alpha}^{H}\mathbf{D}\left( \mathbf{p}_{e} \right)\mathbf{\alpha}},$$ where:$$\mathbf{I}\left( \mathbf{p}_{e} \right) \triangleq \mathbf{H}^{H}\left\lbrack {\sum\limits_{k}^{K}\Gamma^{H}\left( k \right)\Gamma\left( k \right)} \right\rbrack\mathbf{H},$$ and $\mathbf{D}\left( \mathbf{p}_{e} \right)$ has been defined in ([12](#FD12-sensors-18-00892){ref-type="disp-formula"}). SSP-MUSIC is viewed as a simplified version of ([15](#FD15-sensors-18-00892){ref-type="disp-formula"}). In a direction finding application, it is assumed that the path attenuation of each antenna in an array has been normalized in advance, and $\mathbf{\alpha}^{H}\mathbf{I}\left( \mathbf{p} \right)\mathbf{\alpha}$ is a known constant item. The dropping of the constant item in an optimization is reasonable. However, in a multi-path positioning application, $\mathbf{\alpha}^{H}\mathbf{I}\left( \mathbf{p} \right)\mathbf{\alpha}$ changes with the change of $\mathbf{\alpha}$, and the dropping of $\mathbf{\alpha}^{H}\mathbf{I}\left( \mathbf{p} \right)\mathbf{\alpha}$ is unreasonable.
Following the standard noise subspace MUSIC method and the model for the multi-path positioning, we define the cost function of NSP-MUSIC:$${\lbrack{\hat{\mathbf{p}}}_{e},\hat{\mathbf{\alpha}}\rbrack} = \arg\max Q\left( \mathbf{p}_{e},\mathbf{\alpha} \right) = \frac{1}{\sum_{k = 1}^{K}{\mathbf{a}^{H}\left( k \right)\left\lbrack {\mathbf{I}_{MN} - \mathbf{U}_{s}\left( k \right)\mathbf{U}_{s}^{H}\left( k \right)} \right\rbrack\mathbf{a}\left( k \right)}},$$ where $\mathbf{I}_{MN}$ is an identity matrix with a size of $MN \times MN$. $\mathbf{U}_{s}\left( k \right)$ is a matrix consisting of the eigenvectors of $\mathbf{R}_{k}$ corresponding to the *D* largest eigenvalues. $\mathbf{p}_{e}$ and $\mathbf{\alpha}$ are decision making variable vectors representing the candidate emitter positions and the corresponding path attenuations. In order to facilitate a unique solution, we assume that the norm of $\mathbf{\alpha}$ is one. $\mathbf{a}\left( k \right)$ is the array manifold vector for $\mathbf{p}_{e}$ at frequency $\omega_{k}$. Unfortunately, the cost function requires an $LN - 1 + 3$ dimensional searching, and it is difficult to get the optimal solution over such a high dimensional space.
Note that the *d*-th column of matrix $\mathbf{A}\left( k \right)$ in ([5](#FD5-sensors-18-00892){ref-type="disp-formula"}) is denoted by:$$\mathbf{a}_{d}\left( k \right) = \widetilde{\mathbf{A}}\left( k \right){\lbrack\mathbf{I}_{N} \otimes {\overline{\mathbf{V}}}_{d}\left( k \right)\rbrack}\mathbf{\alpha}_{d} \triangleq \Gamma_{d}\left( k \right)\mathbf{\alpha}_{d},$$ where $\mathbf{\alpha}_{d} \triangleq {\lbrack\mathbf{\alpha}_{d1}^{T},\mathbf{\alpha}_{d2}^{T},\ldots,\mathbf{\alpha}_{dN}^{T}\rbrack}^{T}$ represent attenuations of paths from the *d*-th emitter. The vector $\mathbf{a}\left( k \right)$ of a candidate emitter in the MUSIC algorithm ([17](#FD17-sensors-18-00892){ref-type="disp-formula"}) is similar to $\mathbf{a}_{d}\left( k \right)$, but the position of the *d*-th emitter is replaced by the candidate emitter position $\mathbf{p}_{e}$. Denote $\Gamma\left( k \right) \triangleq \Gamma_{d}\left( k \right)$ to simplify the explanation, and substitute ([18](#FD18-sensors-18-00892){ref-type="disp-formula"}) into ([17](#FD17-sensors-18-00892){ref-type="disp-formula"}):$${\lbrack\hat{\mathbf{p}_{e}},\hat{\mathbf{b}}\rbrack} = \arg\max Q\left( \mathbf{p}_{e},\mathbf{\alpha} \right) = \frac{1}{\mathbf{\alpha}^{H}\mathbf{E}\left( \mathbf{p}_{e} \right)\mathbf{\alpha}},$$ $$s.t.\left\{ \begin{array}{l}
{\mathbf{E}\left( \mathbf{p}_{e} \right) = \sum\limits_{k = 1}^{K}\Gamma^{H}\left( k \right)\left\lbrack {\mathbf{I}_{MN} - \mathbf{U}_{s}\left( k \right)\mathbf{U}_{s}^{H}\left( k \right)} \right\rbrack\Gamma\left( k \right),} \\
{{\parallel \mathbf{\alpha} \parallel}_{F}^{2} = 1,} \\
{\mathbf{\alpha} \in \mathbb{C}^{ND},} \\
\end{array} \right.$$
Tom Tirer and Anthony J. Weiss studied a similar programming problem in \[[@B35-sensors-18-00892]\]. They transformed the cost function into:$${\lbrack{\hat{\mathbf{p}}}_{e},\hat{\mathbf{\alpha}}\rbrack} = \arg\max\widetilde{Q}\left( \mathbf{p}_{e},\mathbf{\alpha} \right) = \frac{1}{\lambda_{\min}\left\{ {\mathbf{E}\left( \mathbf{p}_{e} \right)} \right\}},$$ where $\lambda_{\min}{\{ \cdot \}}$ represents the minimal eigenvalue of a matrix. This is a relaxation of the maximization QP, and it gives only a "not bad" solution rather than the optimal one. If $\Gamma\left( k \right)$ is singular, $\mathbf{E}\left( \mathbf{p}_{e} \right)$ turns out to be singular, and $\lambda_{\min}\left\{ {\mathbf{E}\left( \mathbf{p}_{e} \right)} \right\} = 0$. In this case, the cost function reaches a peak, and NSP-MUSIC only finds the solutions that make $\Gamma\left( k \right)$ singular rather than the true emitter positions.
From another point of view, if $\exists i,j$, which satisfy $\mathbf{e}_{i}\left( \mathbf{p}_{e} \right) \approx \mathbf{e}_{j}\left( \mathbf{p}_{e} \right)$, where $\mathbf{e}_{i}\left( \mathbf{p}_{e} \right)$ and $\mathbf{e}_{j}\left( \mathbf{p}_{e} \right)$ are the column *i* and column *j* in $\mathbf{E}\left( \mathbf{p}_{e} \right)$, the matrix $\mathbf{E}\left( \mathbf{p}_{e} \right)$ turns out to be singular or near singular (It should be noticed that, in a single path positioning application, $\Gamma\left( k \right)$ is a block diagonal matrix, and each block is an $M \times 1$ column vector. It is impossible that $\Gamma\left( k \right)$ has the same two columns, but this is possible for a multi-path positioning application. In a multi-path environment, $\Gamma\left( k \right)$ is a block diagonal matrix, and each block is an $M \times L$ matrix. It is possible that the block is singular.). In this case, the optimal estimations of path attenuations are $\hat{\mathbf{\alpha}} = {\lbrack{\hat{\alpha}}_{1},{\hat{\alpha}}_{2},\ldots,{\hat{\alpha}}_{\ell \cdot n}\rbrack}^{T}$, where: $${\hat{\alpha}}_{z} = \left\{ \begin{aligned}
\frac{\sqrt{2}}{2} & {z = i,} \\
{- \frac{\sqrt{2}}{2}} & {z = j,} \\
0 & {else.} \\
\end{aligned} \right.$$ ${\hat{\alpha}}_{z}$ is a feasible solution that satisfies ([20](#FD20-sensors-18-00892){ref-type="disp-formula"}). Substitute ([22](#FD22-sensors-18-00892){ref-type="disp-formula"}) into ([19](#FD19-sensors-18-00892){ref-type="disp-formula"}); $\left. Q\left( \mathbf{p}_{e},\mathbf{\alpha} \right)\rightarrow + \infty \right.$. If $\mathbf{\alpha}$ are complex scaled path attenuations, the cost function of NSP-MUSIC will reach a peak where $\mathbf{E}\left( \mathbf{p}_{e} \right)$ is singular or near singular.
In summary, if the manifold matrix $\Gamma\left( k \right)$ is singular or near singular, SSP-MUSIC fails to recover the emitter positions. Moreover, if $\Gamma\left( k \right)$ is singular or near singular and $\mathbf{\alpha}$ are complex path attenuations, NSP-MUSIC also fails to recover the emitter positions. In the next section, we will discuss the singularity of the manifold matrix $\Gamma\left( k \right)$ and the necessity of non-negative real number constraints on the path attenuations.
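A toy numerical illustration of this failure mode (constructed for this discussion, not taken from any of the cited works) is sketched below: when two columns of the manifold nearly coincide, a signed attenuation vector of the form ([22](#FD22-sensors-18-00892){ref-type="disp-formula"}) drives $\mathbf{\alpha}^{H}\mathbf{E}\left( \mathbf{p}_{e} \right)\mathbf{\alpha}$ to (numerically) zero, so the NSP cost peaks regardless of the true emitter position, whereas a non-negative attenuation vector does not.

```python
import numpy as np

rng = np.random.default_rng(0)
M, L = 4, 3
Gamma = rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))
Gamma[:, 1] = Gamma[:, 0] + 1e-8            # two nearly identical columns -> near-singular manifold

u = rng.standard_normal((M, 1)) + 1j * rng.standard_normal((M, 1))
u /= np.linalg.norm(u)
Pn = np.eye(M) - u @ u.conj().T             # projector onto a mock noise subspace
E = Gamma.conj().T @ Pn @ Gamma

alpha_signed = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)   # feasible point of the form (22)
alpha_nonneg = np.ones(L) / L                            # non-negative attenuations on the simplex
print(abs(alpha_signed @ E @ alpha_signed))   # ~0: the NSP cost 1/(a^H E a) blows up here
print(abs(alpha_nonneg @ E @ alpha_nonneg))   # O(1): non-negative attenuations avoid the blow-up
```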
### 3.1.3. Singularity of the Manifold Matrix in the Presence of Multi-Path Propagation {#sec3dot1dot3-sensors-18-00892}
We have discussed that SSP-MUSIC and NSP-MUSIC fail to locate the emitters if $\Gamma\left( k \right)$ is a near singular matrix, and we derive the conditions under which a candidate position makes $\Gamma\left( k \right)$ near singular in [Appendix A](#app1-sensors-18-00892){ref-type="app"}.
Unfortunately, $\Gamma\left( k \right)$ is often near singular in practice. For example, in a shortwave positioning application, the size of a shortwave antenna is large. If the receivers need to be installed on mobile platforms (e.g., aircraft), only one antenna can be installed in a receiver ($M = 1$). In this case, from the condition in Theorem A2 in [Appendix A](#app1-sensors-18-00892){ref-type="app"}, $a_{\ell_{1}n}\left( k \right) = a_{\ell_{2}n}\left( k \right),k = 1,2,\ldots,K$ are always satisfied, and the manifold matrix $\Gamma\left( k \right)$ of a CSMC is a singular matrix.
In another application, the transponders are installed on a mobile platform (e.g., Unmanned Aerial Vehicle (UAV) platform or satellite platform). If one transponder is relatively close to another transponder, or two transponders and a receiving station are near collinear, this makes $\mathbf{a}_{\ell_{1}n}\left( k \right) \approx \mathbf{a}_{\ell_{2}n}\left( k \right),k = 1,2,\ldots,K$. In this situation, the conditions in Theorem A2 in [Appendix A](#app1-sensors-18-00892){ref-type="app"} are satisfied, and $\Gamma\left( k \right)$ turns out to be near singular.
In addition, it is necessary to first down-convert the Radio Frequency (RF) signals to baseband signals to avoid a multi-peak search of the cost function (see [Figure 2](#sensors-18-00892-f002){ref-type="fig"}). The suboptimal peaks in [Figure 2](#sensors-18-00892-f002){ref-type="fig"}a are caused by the carrier wave: the higher the carrier frequency, the more suboptimal peaks appear in the cost function. In contrast, there is only one peak in the cost function for a baseband signal positioning model (see [Figure 2](#sensors-18-00892-f002){ref-type="fig"}b). [Figure 2](#sensors-18-00892-f002){ref-type="fig"}b is an upper envelope of [Figure 2](#sensors-18-00892-f002){ref-type="fig"}a. If the cost function is a surface with multiple peaks, it is difficult to develop a search strategy other than a high-density grid search. However, it is easy to obtain the global optimal solution for a continuous function with a single peak (using, e.g., the Steepest Descent Method (SDM) or the Newton Method (NM)). In a baseband signal positioning application, $\lambda \gg R$, where $\lambda$ is the wavelength corresponding to the maximal frequency of the baseband signal, and *R* is the radius of the circular receiving array. If $\lambda \gg R$, it is easy to satisfy $\mathbf{a}_{\ell_{1}n}\left( k \right) \approx \mathbf{a}_{\ell_{2}n}\left( k \right),k = 1,2,\ldots,K$, and $\Gamma\left( k \right)$ is near singular.
### 3.1.4. Non-Negative Real Path Attenuation Constraints {#sec3dot1dot4-sensors-18-00892}
Our model requires the path attenuations to be non-negative real numbers, whereas they are complex values in existing studies on narrow band signal positioning \[[@B18-sensors-18-00892],[@B29-sensors-18-00892],[@B31-sensors-18-00892],[@B32-sensors-18-00892],[@B33-sensors-18-00892]\]. Weiss ignored the path attenuations (setting the attenuation $\alpha = 1$) in wide-band emitter positioning in \[[@B36-sensors-18-00892]\].
In a narrow band signal positioning application, it is assumed that $\mathbf{a}_{\ell n}\left( k \right) \approx \mathbf{a}_{\ell n}\left( k_{0} \right) \triangleq \mathbf{a}_{\ell n}$, where $k = 1,2,\cdots,K$, and $\omega_{k_{0}}$ is the carrier frequency. Based on the above assumptions, ([3](#FD3-sensors-18-00892){ref-type="disp-formula"}) turns out to be:$$\begin{array}{cl}
{\mathbf{r}_{n}\left( k \right)} & {= \sum\limits_{\ell = 1}^{L}\sum\limits_{d = 1}^{D}\alpha_{d\ell n}\mathbf{a}_{\ell n}\left( k \right)e^{- i\omega_{k}{\overline{\mathbf{\tau}}}_{\ell d}}{\check{s}}_{d}\left( k \right) + \mathbf{n}\left( k \right)} \\
& {\approx \sum\limits_{\ell = 1}^{L}\sum\limits_{d = 1}^{D}\alpha_{d\ell n}e^{j\omega_{k_{0}}\tau}\mathbf{a}_{\ell n}e^{- i\omega_{k}{\overline{\mathbf{\tau}}}_{\ell d}}{\check{s}}_{d}\left( k \right) + \mathbf{n}\left( k \right),} \\
\end{array}$$ where $e^{j\omega_{k_{0}}\tau}$ is a phase adjustment item to satisfy $\left. e^{j\omega_{k_{0}}\tau}\mathbf{a}_{\ell n}\left( k_{0} \right)\rightarrow\mathbf{a}_{\ell n}\left( k \right) \right.$. Existing studies used the envelope information only to estimate the propagation delay and dropped the carrier phase information. $e^{j\omega_{k_{0}}\tau}$ was an adjustment factor for carrier phase alignment, and it was used to reduce the interference with the propagation delay estimation. Denote ${\overline{\alpha}}_{d\ell n} \triangleq \alpha_{d\ell n}e^{j\omega_{k_{0}}\tau}$; ([23](#FD23-sensors-18-00892){ref-type="disp-formula"}) turns out to be:$$\mathbf{r}_{n}\left( k \right) \approx \sum\limits_{\ell = 1}^{L}\sum\limits_{d = 1}^{D}{\overline{\alpha}}_{d\ell n}\mathbf{a}_{\ell n}e^{- i\omega_{k}{\overline{\mathbf{\tau}}}_{\ell d}}{\check{s}}_{d}\left( k \right) + \mathbf{n}\left( k \right),$$ where ${\overline{\alpha}}_{d\ell n}$ is a complex scalar representing the "channel attenuation" (It is not a real channel attenuation coefficient, but an equivalent parameter, which is determined by the real path attenuation and the model error caused by the narrow band signal assumption), and $\mathbf{a}_{\ell n}$ denotes the generalized array response matrix.
The commonly-used DPD methods \[[@B18-sensors-18-00892]\] modeled the received signal as:$$\mathbf{r}_{n}\left( k \right) \approx \sum\limits_{d = 1}^{D}\alpha_{dn}\mathbf{a}_{n}\left( \mathbf{p}_{d} \right)e^{- i\omega_{k}\tau_{nd}}{\check{s}}_{d}\left( k \right) + \mathbf{n}\left( k \right).$$ ([25](#FD25-sensors-18-00892){ref-type="disp-formula"}) is an LoS positioning model for narrow band signal positioning, while ([24](#FD24-sensors-18-00892){ref-type="disp-formula"}) is an NLoS positioning model. Existing models with complex path attenuation assumptions are viewed as simplified models of ([3](#FD3-sensors-18-00892){ref-type="disp-formula"}) in a narrow band signal positioning application. We point out that the simplified model ([24](#FD24-sensors-18-00892){ref-type="disp-formula"}) cannot be adopted in either a MUSIC method or an ML method in a multi-path propagation and unknown waveform application. We have shown that the manifold matrix $\Gamma\left( k \right)$ may be singular in [Section 3.1.3](#sec3dot1dot3-sensors-18-00892){ref-type="sec"} and discussed in [Section 3.1.2](#sec3dot1dot2-sensors-18-00892){ref-type="sec"} that if the path attenuations were complex numbers, a MUSIC method could not obtain the emitter positions correctly. We will discuss this further in [Section 4.1](#sec4dot1-sensors-18-00892){ref-type="sec"} to explain the necessity of the real and non-negative constraints in an ML method.
Overall, we develop ([3](#FD3-sensors-18-00892){ref-type="disp-formula"}) as the signal model for positioning and constrain the path attenuations in $\mathbb{R}^{+}$.
3.2. Mathematical Model of MP-MUSIC {#sec3dot2-sensors-18-00892}
-----------------------------------
In an MP-MUSIC method, the optimal estimation of $\mathbf{\alpha}$ for a fixed emitter position $\mathbf{p}_{e}$ is given by solving the following programming:$$\begin{array}{r}
{\hat{\mathbf{\alpha}} = \arg\min Q\left( \mathbf{\alpha} \right) = \mathbf{\alpha}^{H}\mathbf{E}\left( \mathbf{p}_{e} \right)\mathbf{\alpha},} \\
\end{array}$$ $$\begin{array}{r}
{s.t.\left\{ \begin{array}{l}
{{\parallel \mathbf{\alpha} \parallel}_{1} = 1,} \\
{\mathbf{\alpha} \in \mathbb{R}^{LN},\mathbf{\alpha} \geq 0,} \\
{\mathbf{E}\left( \mathbf{p}_{e} \right) = \sum\limits_{k = 1}^{K}\Gamma^{H}\left( k \right)\left\lbrack {\mathbf{I}_{MN} - \mathbf{U}_{s}\left( k \right)\mathbf{U}_{s}^{H}\left( k \right)} \right\rbrack\Gamma\left( k \right).} \\
\end{array} \right.} \\
\end{array}$$ where ${\parallel \cdot \parallel}_{1}$ is the one norm of the vector (that is, the sum of the absolute values of the elements of the vector) and $\mathbf{I}_{MN}$ is an $MN \times MN$ identity matrix.
The programming is a non-linear program with real-valued constraints. For each candidate emitter position, it involves an $LN$-dimensional search, so it is difficult to solve directly. We first remove the imaginary terms from the programming without changing the optimal solution and then prove the convexity of the modified programming. After the proof, an iterative algorithm named ASA is proposed to solve the convex programming.
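As a concrete illustration, the minimal sketch below (in Python/NumPy) assembles $\mathbf{E}\left( \mathbf{p}_{e} \right)$ from the per-frequency manifold matrices $\Gamma\left( k \right)$ and signal-subspace bases $\mathbf{U}_{s}\left( k \right)$ defined above; the variable names (`gammas`, `signal_subspaces`) are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def build_E(gammas, signal_subspaces):
    """Assemble E(p_e) = sum_k Gamma(k)^H [I - U_s(k) U_s(k)^H] Gamma(k).

    gammas           : list of K arrays, each MN x LN  (manifold matrix Gamma(k)
                       evaluated at the candidate emitter position p_e)
    signal_subspaces : list of K arrays, each MN x D   (orthonormal signal-subspace
                       bases U_s(k) of the sample covariance matrices)
    """
    MN, LN = gammas[0].shape
    E = np.zeros((LN, LN), dtype=complex)
    for G, Us in zip(gammas, signal_subspaces):
        P_noise = np.eye(MN) - Us @ Us.conj().T   # projector onto the noise subspace
        E += G.conj().T @ P_noise @ G
    return E
```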
### 3.2.1. Remove the Imaginary Terms in the Programming {#sec3dot2dot1-sensors-18-00892}
Rewrite the objective function by:$$\begin{array}{r}
{\mathbf{\alpha}^{H}\mathbf{E}\left( \mathbf{p}_{e} \right)\mathbf{\alpha} = \mathbf{\alpha}^{H}\Psi\mathbf{\alpha} + \mathbf{\alpha}^{H}\Phi\mathbf{\alpha},} \\
\end{array}$$ where $\Psi \triangleq {Re}\lbrack\mathbf{E}\left( \mathbf{p}_{e} \right)\rbrack$ is the real part of $\mathbf{E}\left( \mathbf{p}_{e} \right)$ and $\Phi \triangleq {Im}\lbrack\mathbf{E}\left( \mathbf{p}_{e} \right)\rbrack$ is the imaginary part of $\mathbf{E}\left( \mathbf{p}_{e} \right)$. Since $\mathbf{E}\left( \mathbf{p}_{e} \right)$ is a Hermitian matrix, $\Phi$ is skew-symmetric and satisfies:$$\begin{array}{r}
{\Phi_{i,j} = \left\{ \begin{array}{cl}
0 & {i = j,} \\
{- \Phi_{j,i}} & {{else}.} \\
\end{array} \right.} \\
\end{array}$$ where $\Phi_{i,j}$ is the element at the *i*-th row and *j*-th column of the matrix $\Phi$. Since $\mathbf{\alpha}$ is a real-valued (non-negative) vector and the diagonal elements of $\Phi$ are zero, $$\begin{array}{cl}
{\mathbf{\alpha}^{H}\Phi\mathbf{\alpha}} & {= \sum\limits_{i = 1}^{NL}{\sum\limits_{j = 1}^{NL}{\alpha_{i}\Phi_{i,j}\alpha_{j}}}} \\
& {= \sum\limits_{i = 2}^{NL}{\sum\limits_{j = 1}^{i - 1}\left( \alpha_{i}\Phi_{i,j}\alpha_{j} + \alpha_{j}\Phi_{j,i}\alpha_{i} \right)}} \\
& {= 0.} \\
\end{array}$$
Substituting ([29](#FD29-sensors-18-00892){ref-type="disp-formula"}) into ([27](#FD27-sensors-18-00892){ref-type="disp-formula"}), the programming reduces to:$$\begin{array}{r}
{\hat{\mathbf{\alpha}} = \arg\min q\left( \mathbf{\alpha} \right) = \mathbf{\alpha}^{H}\Psi\mathbf{\alpha}} \\
\end{array}$$ $$\begin{array}{r}
{s.t.\left\{ \begin{array}{l}
{{\parallel \mathbf{\alpha} \parallel}_{1} = 1,} \\
{\mathbf{\alpha} \in \mathbb{R}^{LN},\mathbf{\alpha} \geq 0.} \\
\end{array} \right.} \\
\end{array}$$
The programming ([30](#FD30-sensors-18-00892){ref-type="disp-formula"}) is a QP over the real field. If there were no inequality constraints and the objective function were convex, the Lagrange multiplier method would solve it directly. However, the non-negative constraints on the path attenuations are necessary because of the possible singularity of the array manifold. To obtain the optimal solution of ([30](#FD30-sensors-18-00892){ref-type="disp-formula"}), we first verify the convexity of the programming and then design an algorithm to solve the convex program.
### 3.2.2. Convexity of the Programming {#sec3dot2dot2-sensors-18-00892}
**Theorem 1.** ([30](#FD30-sensors-18-00892){ref-type="disp-formula"}) *is a convex quadratic program with linear equality constraints and lower bounds.*
Decompose $\mathbf{E}\left( \mathbf{p}_{e} \right)$ as:$$\begin{aligned}
{\mathbf{E}\left( \mathbf{p}_{e} \right)} & {= \sum\limits_{k = 1}^{K}{\mathbf{E}\left( k \right)},} \\
\end{aligned}$$ where: $$\begin{array}{cl}
{\mathbf{E}\left( k \right)} & {= \Gamma^{H}\left( k \right)\left\lbrack {\mathbf{I}_{MN} - \mathbf{U}_{s}\left( k \right)\mathbf{U}_{s}^{H}\left( k \right)} \right\rbrack\Gamma\left( k \right)} \\
& {= \Gamma^{H}\left( k \right)\mathbf{U}_{n}\left( k \right)\mathbf{U}_{n}^{H}\left( k \right)\Gamma\left( k \right),} \\
\end{array}$$ where $\mathbf{U}_{n}$ is the noise subspace of the received signal. $\forall\mathbf{x} \in \mathbb{R}^{LN}$, $$\begin{array}{cl}
{\mathbf{x}^{H}\mathbf{E}\left( \mathbf{p}_{e} \right)\mathbf{x}} & {= \sum\limits_{k = 1}^{K}\mathbf{x}^{H}\mathbf{E}\left( k \right)\mathbf{x}} \\
& {= \sum\limits_{k = 1}^{K}{\parallel \mathbf{x}^{H}\Gamma^{H}\left( k \right)\mathbf{U}_{n}\left( k \right) \parallel}^{2} \geq 0,} \\
\end{array}$$ $$\begin{aligned}
{\mathbf{x}^{H}\Psi\mathbf{x}} & {= \mathbf{x}^{H}\mathbf{E}\left( \mathbf{p}_{e} \right)\mathbf{x} - \mathbf{x}^{H}\Phi\mathbf{x}.} \\
\end{aligned}$$
Since $\forall\mathbf{x} \in \mathbb{R}^{LN},\mathbf{x}^{H}\Phi\mathbf{x} = 0$, substituting ([33](#FD33-sensors-18-00892){ref-type="disp-formula"}) into ([34](#FD34-sensors-18-00892){ref-type="disp-formula"}) gives $\mathbf{x}^{H}\Psi\mathbf{x} \geq 0$; hence $\Psi$ is Positive Semi-Definite (PSD) and the objective function is convex. The optimization problem ([30](#FD30-sensors-18-00892){ref-type="disp-formula"}) is therefore a convex quadratic program with linear equality constraints and lower bounds. ☐
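The two facts used in the proof can be checked numerically. The following short sketch (an illustrative sanity check, not part of the original derivation) builds a Hermitian PSD matrix of the same structure as $\mathbf{E}\left( \mathbf{p}_{e} \right)$ and verifies that its imaginary part is skew-symmetric, that the corresponding quadratic form vanishes for real vectors and that the quadratic form of the real part is non-negative.

```python
import numpy as np

rng = np.random.default_rng(0)
LN = 6

# E is built as a sum of terms B^H B, i.e., Hermitian and positive semi-definite,
# which mirrors the structure used in the proof.
B = rng.standard_normal((10, LN)) + 1j * rng.standard_normal((10, LN))
E = B.conj().T @ B

Psi, Phi = E.real, E.imag               # Psi = Re(E), Phi = Im(E)
alpha = rng.random(LN)                  # a real, non-negative vector

print(np.allclose(Phi, -Phi.T))                # Im(E) is skew-symmetric
print(abs(alpha @ Phi @ alpha) < 1e-12)        # alpha^T Phi alpha = 0 for real alpha
print(alpha @ Psi @ alpha >= 0)                # the quadratic form with Psi is non-negative
```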
It is possible to find the global optimal solution of a convex quadratic program \[[@B37-sensors-18-00892],[@B38-sensors-18-00892]\]. The interior-point algorithm or any Heuristic Searching Algorithm (HSA) can be adopted to solve the optimization problem with equality constraints and lower bounds. However, those algorithms apply numerical searching strategies with low efficiency. We introduce a faster algorithm named the Active Set Algorithm (ASA) in this paper to obtain the global optimal solution based on some theoretical analysis.
### 3.2.3. Active Set Algorithm {#sec3dot2dot3-sensors-18-00892}
The first widely-used algorithm for solving a similar problem is the active set method published by Lawson and Hanson \[[@B34-sensors-18-00892],[@B39-sensors-18-00892]\], who proposed it to solve the Non-Negative Least Squares (NNLS) problem.
*Active set \[[@B40-sensors-18-00892]\] In mathematical optimization, a problem is defined using an objective function to minimize or maximize and a set of constraints:* $$g_{1}\left( x \right) \geq 0,\cdots,g_{k}\left( x \right) \geq 0.$$
*Given a point $x$ in the feasible region, a constraint:* $$g_{i}\left( x \right) \geq 0,$$ *is called active at x if $g_{i}\left( x \right) = 0$ and inactive at x if $g_{i}\left( x \right) > 0$. The set of active ones is called the active set and denoted by $\left. \mathcal{A}\left( x \right) = \{ i \middle| g_{i}\left( x \right) = 0\} \right.$.*
We describe the active set method for solving the quadratic programs of the form ([30](#FD30-sensors-18-00892){ref-type="disp-formula"}) containing equality and inequality constraints based on the methods described in \[[@B34-sensors-18-00892],[@B39-sensors-18-00892]\].
Denote the optimal solution of ([30](#FD30-sensors-18-00892){ref-type="disp-formula"}) by $\hat{\mathbf{\alpha}}$. If the active set of the optimal solution $\mathcal{A}\left( \hat{\mathbf{\alpha}} \right)$ were known in advance, we could find the optimal solution $\hat{\mathbf{\alpha}}$ by applying techniques, such as the Lagrange multiplier method, for equality-constrained QP. The prior knowledge of the active set accelerates the algorithm effectively.
$\mathbf{\alpha}^{*} = {\lbrack\alpha_{1}^{*},\alpha_{2}^{*},\ldots,\alpha_{LN}^{*}\rbrack}^{T}$ represents the true, but unknown, path attenuations. If the *n*-th receiver cannot receive the signal from the emitter reflected by the *ℓ*-th transponder, $\alpha_{\ell n}^{*} = 0$; otherwise, $\alpha_{\ell n}^{*} > 0$. In most cases, we know the set $\left. \{ i \middle| \alpha_{i}^{*} = 0\} \right.$ in advance and have removed the unconnected paths from the model. Without loss of generality, we set $\alpha_{i}^{*} > 0,i = 1,2,\ldots,LN$. Denote by $\hat{\mathbf{\alpha}}$ an estimate of $\mathbf{\alpha}^{*}$, that is, $\hat{\mathbf{\alpha}} \approx \mathbf{\alpha}^{*}$, and write $\hat{\mathbf{\alpha}} = \mathbf{\alpha}^{*} + \varepsilon$, where $\varepsilon$ is the estimation error vector of $\mathbf{\alpha}^{*}$. Since $\alpha_{i}^{*} > 0$, $i = 1,2,\ldots,LN$, we set the initial working set to $\mathcal{W}^{(0)} = \varnothing$.
The searching path of the active set algorithm stays strictly inside the feasible region. Choose a feasible solution as the initial point of the algorithm. Solve the QP with the equality constraints in the working set and obtain the optimal searching direction. If the searching direction is blocked by constraints that are not in the working set, add the constraint that blocks the searching path first into the working set. Re-solve the QP with the updated working set until the searching direction is no longer blocked by any constraint; the algorithm then reaches the optimum for the current working set. To obtain an even better solution, we drop one active constraint from the working set to relax the programming. If the objective function cannot be decreased by dropping any constraint in the working set, the current point is the optimal solution of the original programming. Otherwise, drop the constraint that causes the fastest decrease.
The details of the ASA are described in Section 1 of the Supplementary File. The spatial spectrum of an emitter is determined by substituting $\hat{\mathbf{\alpha}}$ into the MUSIC cost function ([19](#FD19-sensors-18-00892){ref-type="disp-formula"}).
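For reference, the convex program ([30](#FD30-sensors-18-00892){ref-type="disp-formula"}) can also be solved with an off-the-shelf solver. The sketch below uses SciPy's SLSQP (a general-purpose solver, not the ASA of the Supplementary File) to obtain $\hat{\mathbf{\alpha}}$ for one candidate position; since the program is convex, both solvers reach the same optimum. Taking the reciprocal of the optimal cost as the spectrum value is a common MUSIC convention and is only an assumption here, since ([19](#FD19-sensors-18-00892){ref-type="disp-formula"}) is not reproduced in this section.

```python
import numpy as np
from scipy.optimize import minimize

def music_cost(E):
    """Reference solver for (30): min a^T Psi a  s.t.  sum(a) = 1, a >= 0."""
    Psi = E.real                                   # the imaginary part drops out for real a
    LN = Psi.shape[0]
    a0 = np.full(LN, 1.0 / LN)                     # feasible starting point
    res = minimize(lambda a: a @ Psi @ a,
                   a0,
                   jac=lambda a: 2.0 * Psi @ a,
                   method="SLSQP",
                   bounds=[(0.0, None)] * LN,
                   constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}])
    return res.x, res.fun                          # alpha_hat and q(alpha_hat)

# A common convention is to take 1 / q(alpha_hat) as the spatial-spectrum value at p_e.
```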
### 3.2.4. MP-MUSIC Algorithm {#sec3dot2dot4-sensors-18-00892}
The spatial spectrum of the emitter positions requires only a three-dimensional search, and the size of $\mathbf{E}\left( \mathbf{p}_{e} \right)$ is $LN \times LN$, which is usually rather small. The detailed procedure of the MP-MUSIC algorithm is presented in Algorithm 1.
Algorithm 1: MP-MUSIC algorithm.
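As an illustration of the scan implied by Algorithm 1, the sketch below reuses the `build_E` and `music_cost` helpers from the previous sketches; `manifold_fn` (a user-supplied function returning the matrices $\Gamma\left( k \right)$ for a candidate position) is an illustrative assumption.

```python
import numpy as np

def mp_music_spectrum(grid_points, manifold_fn, signal_subspaces):
    """Sketch of the MP-MUSIC scan: one constrained QP per candidate position.

    grid_points      : iterable of candidate positions p_e (3-vectors)
    manifold_fn      : callable p_e -> list of K manifold matrices Gamma(k) (MN x LN)
    signal_subspaces : list of K signal-subspace bases U_s(k) (MN x D)
    Returns one spectrum value per grid point; emitters are read off the peaks.
    """
    spectrum = []
    for p_e in grid_points:
        E = build_E(manifold_fn(p_e), signal_subspaces)   # noise-subspace cost matrix
        _, q = music_cost(E)                              # constrained QP of (30)
        spectrum.append(1.0 / max(q, 1e-30))              # reciprocal as the spectrum value
    return np.array(spectrum)
```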
The performance of the MP-MUSIC algorithm is determined by the estimation precision of $\hat{\mathbf{R}}\left( k \right)$. If the number of snapshots is insufficient, neither the covariance matrix $\hat{\mathbf{R}}\left( k \right)$ nor the spatial spectrum $q\left( \mathbf{p}_{e} \right)$ can be estimated precisely.
In a time-sensitive positioning application, it is difficult to get enough snapshots to estimate $\hat{\mathbf{R}}\left( k \right)$. We develop a Maximum Likelihood method in the presence of Multi-path Propagation (MP-ML) to estimate the emitter positions directly.
4. MP-ML Method {#sec4-sensors-18-00892}
===============
The MP-ML method maximizes the conditional likelihood function of the received signals. The noise is assumed to be Additive White Gaussian Noise (AWGN) with a known standard deviation $\sigma$.
4.1. Mathematical Model of MP-ML {#sec4dot1-sensors-18-00892}
--------------------------------
The likelihood function of the received signals is:$$P\left( \mathbf{r} \middle| \mathbf{\theta} \right) = \prod\limits_{k = 1}^{K}\frac{1}{\left| \pi\Sigma \right|}e^{- {\lbrack\mathbf{r}{(k)} - \mathbf{A}{(k)}\check{\mathbf{s}}{(k)}\rbrack}^{H}\Sigma^{- 1}{\lbrack\mathbf{r}{(k)} - \mathbf{A}{(k)}\check{\mathbf{s}}{(k)}\rbrack}},$$ where $\mathbf{r}$ is the observed data, $\Sigma$ is the covariance matrix of noises, which is defined in (6), and the unknown parameter vector $\mathbf{\theta} \triangleq {\lbrack\mathbf{p}_{e}^{T},\mathbf{\alpha}^{T},\mathbf{s}^{T}\rbrack}^{T}$ consists of:$$\begin{aligned}
\mathbf{p}_{e} & {\triangleq {\lbrack\mathbf{p}_{e}^{T}\left( 1 \right),\mathbf{p}_{e}^{T}\left( 2 \right),\ldots,\mathbf{p}_{e}^{T}\left( D \right)\rbrack}^{T},} \\
{\mathbf{p}_{e}\left( d \right)} & {\triangleq {\lbrack p_{ex}\left( d \right),p_{ey}\left( d \right),p_{ez}\left( d \right)\rbrack}^{T},} \\
\overline{\mathbf{\alpha}} & {\triangleq {\lbrack{\overline{\mathbf{\alpha}}}_{1}^{T},{\overline{\mathbf{\alpha}}}_{2}^{T},\ldots,{\overline{\mathbf{\alpha}}}_{N}^{T}\rbrack}^{T},} \\
{\overline{\mathbf{\alpha}}}_{n} & {\triangleq {\lbrack\alpha_{1n}^{T},\alpha_{2n}^{T},\ldots,\alpha_{Dn}^{T}\rbrack}^{T},} \\
\check{\mathbf{s}} & {\triangleq {\lbrack{\check{\mathbf{s}}}^{T}\left( 1 \right),{\check{\mathbf{s}}}^{T}\left( 2 \right),\ldots,{\check{\mathbf{s}}}^{T}\left( K \right)\rbrack}^{T},} \\
\end{aligned}$$ where $\mathbf{\alpha}_{dn}$ and $\check{\mathbf{s}}\left( k \right)$ have been defined in ([5](#FD5-sensors-18-00892){ref-type="disp-formula"}). The log-likelihood function of ([37](#FD37-sensors-18-00892){ref-type="disp-formula"}) is:$$L\left( \mathbf{\theta} \right) = - KMN\log{\pi\sigma^{2}} - \frac{1}{\sigma^{2}}\sum\limits_{k = 1}^{K}{\lbrack\mathbf{r}\left( k \right) - \mathbf{A}\left( k \right)\check{\mathbf{s}}\left( k \right)\rbrack}^{H}{\lbrack\mathbf{r}\left( k \right) - \mathbf{A}\left( k \right)\check{\mathbf{s}}\left( k \right)\rbrack}.$$
Removing the constant terms, we get the modified cost function of MP-ML:$$\hat{\mathbf{\theta}} = \arg\min\overline{Q}\left( \mathbf{\theta} \right) = \sum\limits_{k = 1}^{K}{\lbrack\mathbf{r}\left( k \right) - \mathbf{A}\left( k \right)\check{\mathbf{s}}\left( k \right)\rbrack}^{H}{\lbrack\mathbf{r}\left( k \right) - \mathbf{A}\left( k \right)\check{\mathbf{s}}\left( k \right)\rbrack}.$$
The searching space dimension of ([40](#FD40-sensors-18-00892){ref-type="disp-formula"}) is $3D + DNL + DK$, and it is necessary to reduce the searching space dimension. For fixed attenuations $\mathbf{\alpha}$ and emitter position combination $\mathbf{p}_{e}$ in ([40](#FD40-sensors-18-00892){ref-type="disp-formula"}), the optimal estimation of the source signals at frequency $\omega_{k}$ is:$$\hat{\check{\mathbf{s}}}\left( k \right) = \mathbf{A}^{+}\left( k \right)\mathbf{r}\left( k \right),$$ where $\mathbf{A}^{+}\left( k \right) \triangleq {\lbrack\mathbf{A}^{H}\left( k \right)\mathbf{A}\left( k \right)\rbrack}^{- 1}\mathbf{A}^{H}\left( k \right)$ is the Moore--Penrose inverse of $\mathbf{A}\left( k \right)$. Substitute ([41](#FD41-sensors-18-00892){ref-type="disp-formula"}) into ([40](#FD40-sensors-18-00892){ref-type="disp-formula"}), $$\hat{\mathbf{\eta}} = \arg\min\overline{Q}\left( \mathbf{\eta} \right) = \sum\limits_{k = 1}^{K}{\lbrack\mathbf{r}\left( k \right) - \mathbf{P}_{\mathbf{A}}\left( k \right)\mathbf{r}\left( k \right)\rbrack}^{H}{\lbrack\mathbf{r}\left( k \right) - \mathbf{P}_{\mathbf{A}(k)}\mathbf{r}\left( k \right)\rbrack},$$ where $\mathbf{\eta} \triangleq {\lbrack\mathbf{p}_{e}^{T},{\overline{\mathbf{\alpha}}}^{T}\rbrack}^{T}$, $\overline{\mathbf{\alpha}} = \mathbf{\alpha}\mathbf{I}_{D} \triangleq {\lbrack\alpha_{1,1,1},\cdots,\alpha_{d,\ell,n},\cdots,\alpha_{DLN}\rbrack}^{T}$, $\mathbf{I}_{D}$ is a column vector of *D* ones. $\mathbf{P}_{\mathbf{A}}\left( k \right) = \mathbf{A}\left( k \right)\mathbf{A}^{+}\left( k \right)$ is the projection matrix of $\mathbf{A}\left( k \right)$. Expand ([42](#FD42-sensors-18-00892){ref-type="disp-formula"}):$$\begin{aligned}
{\overline{Q}\left( \mathbf{\eta} \right)} & {= \sum\limits_{k = 1}^{K}{\lbrack\mathbf{r}^{H}\left( k \right)\mathbf{r}\left( k \right) - \mathbf{r}^{H}\left( k \right)\mathbf{P}_{\mathbf{A}}^{H}\left( k \right)\mathbf{r}\left( k \right) - \mathbf{r}^{H}\left( k \right)\mathbf{P}_{\mathbf{A}}\left( k \right)\mathbf{r}\left( k \right) + \mathbf{r}^{H}\left( k \right)\mathbf{P}_{\mathbf{A}}^{H}\left( k \right)\mathbf{P}_{\mathbf{A}}\left( k \right)\mathbf{r}\left( k \right)\rbrack},} \\
\end{aligned}$$ and move the constant items $\mathbf{r}^{H}\left( k \right)\mathbf{r}\left( k \right)$. Applying the properties of the projection matrix $\mathbf{P}_{\mathbf{A}}\left( k \right) = \mathbf{P}_{\mathbf{A}}^{H}\left( k \right)$ and $\mathbf{P}_{\mathbf{A}}^{H}\left( k \right)\mathbf{P}_{\mathbf{A}}\left( k \right) = \mathbf{P}_{\mathbf{A}}\left( k \right)$, we get the modified programming of MP-ML:$$Q\left( \mathbf{\eta} \right) = - \sum\limits_{k = 0}^{K - 1}\mathbf{r}^{H}\left( k \right)\mathbf{P}_{\mathbf{A}}\left( k \right)\mathbf{r}\left( k \right),$$ $$\begin{array}{l}
{s.t.\left\{ \begin{array}{l}
{\mathbf{P}_{\mathbf{A}}\left( k \right) = \mathbf{A}\left( k,\mathbf{p}_{e} \right)\mathbf{A}^{+}\left( k,\mathbf{p}_{e} \right),} \\
{\mathbf{A}^{+}\left( k \right) = {\lbrack\mathbf{A}^{H}\left( k,\mathbf{p}_{e} \right)\mathbf{A}\left( k,\mathbf{p}_{e} \right)\rbrack}^{- 1}\mathbf{A}^{H}\left( k,\mathbf{p}_{e} \right),} \\
{\mathbf{A}\left( k,\mathbf{p}_{e} \right) = \Gamma\left( k,\mathbf{p}_{e} \right)\mathbf{\alpha},} \\
{\Gamma\left( k,\mathbf{p}_{e} \right) = \widetilde{\mathbf{A}}\left( k,\mathbf{p}_{e} \right)\mathbf{V}\left( k,\mathbf{p}_{e} \right),} \\
{\overline{\mathbf{\alpha}} = \mathbf{\alpha}\mathbf{I}_{D},} \\
{\overline{\mathbf{\alpha}} \in \mathbb{R}^{DLN},} \\
{\overline{\mathbf{\alpha}} \geq 0.} \\
\end{array} \right.} \\
\end{array}$$
There are two differences between our model and the model of Bar-Shalom and Weiss in \[[@B28-sensors-18-00892]\]. First, only a single emitter and a single receiver were modeled in their work, whereas multiple emitters and multiple receivers are taken into consideration in our work. Second, the path attenuations in \[[@B28-sensors-18-00892]\] are complex, whereas they are real and non-negative in our model.
The cost function in \[[@B28-sensors-18-00892]\] was modeled as:$$\max Q\left( \mathbf{\eta} \right) = \sum\limits_{k = 1}^{K}\frac{\mathbf{\alpha}^{H}\mathbf{f}\left( k \right)\mathbf{f}^{H}\left( k \right)\mathbf{\alpha}}{\mathbf{\alpha}^{H}\mathbf{C}\left( k \right)\mathbf{\alpha}}.$$ where $\mathbf{f}\left( k \right) \triangleq \Gamma^{H}\left( k,\mathbf{p}_{e} \right)\mathbf{r}\left( k \right)$, $\mathbf{C}\left( k \right) \triangleq \Gamma^{H}\left( k,\mathbf{p}_{e} \right)\Gamma\left( k,\mathbf{p}_{e} \right)$. [Section 3.1.1](#sec3dot1dot1-sensors-18-00892){ref-type="sec"} discussed that $\Gamma\left( k,\mathbf{p}_{e} \right)$ may be singular, and $\mathbf{C}\left( k \right)$ may be singular as well. When $\mathbf{C}\left( k \right)$ is singular, there is an $\mathbf{\alpha}$ that satisfies $\mathbf{\alpha}^{H}\mathbf{C}\left( k \right)\mathbf{\alpha} = 0$, and the cost function $Q\left( \mathbf{\eta} \right)$ reaches its peak; however, the corresponding candidate emitter position is not the true emitter position. If $\Gamma\left( k,\mathbf{p}_{e} \right)$ is nearly singular, $\mathbf{f}\left( k \right) = \Gamma^{H}\left( k,\mathbf{p}_{e} \right)\mathbf{r}\left( k \right)$ is nearly singular as well. The numerator and denominator of the cost function $Q\left( \mathbf{\eta} \right)$ both tend to zero, and the value of the cost function turns out to be unstable. The noise level will seriously affect the value of the cost function in this case, and the model cannot find the emitter accurately.
The searching space dimension of ([44](#FD44-sensors-18-00892){ref-type="disp-formula"}) has been reduced to $3D + DNL$, but it is still difficult to solve such a high-dimensional non-linear program directly. We therefore propose an iterative algorithm to estimate the path attenuations and reduce the computation time of MP-ML.
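To illustrate the concentrated cost ([44](#FD44-sensors-18-00892){ref-type="disp-formula"}), the sketch below evaluates $Q\left( \mathbf{\eta} \right) = - \sum_{k}\mathbf{r}^{H}\left( k \right)\mathbf{P}_{\mathbf{A}}\left( k \right)\mathbf{r}\left( k \right)$ for a single emitter ($D = 1$), so that $\mathbf{A}\left( k \right)$ is a column vector; the argument names are illustrative assumptions.

```python
import numpy as np

def mp_ml_cost(r_list, gamma_list, alpha):
    """Concentrated ML cost of (44): Q = - sum_k r(k)^H P_A(k) r(k), with D = 1.

    r_list     : list of K received-signal vectors r(k), each of length MN
    gamma_list : list of K manifold matrices Gamma(k, p_e), each MN x LN
    alpha      : real non-negative path attenuations (length LN)
    """
    Q = 0.0
    for r, G in zip(r_list, gamma_list):
        A = G @ alpha                              # A(k) = Gamma(k, p_e) alpha
        # projection of r(k) onto span{A(k)}: P_A r = A (A^H A)^{-1} A^H r
        coeff = (A.conj() @ r) / (A.conj() @ A)
        Q -= np.real(r.conj() @ (A * coeff))       # subtract r^H P_A r
    return Q
```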
4.2. Remove Imaginary Terms in the Programming {#sec4dot2-sensors-18-00892}
----------------------------------------------
Substitute the constraints into the objective function of ([44](#FD44-sensors-18-00892){ref-type="disp-formula"}), $$\begin{array}{cl}
{Q\left( \mathbf{\eta} \right)} & {= - \sum\limits_{k = 1}^{K}\mathbf{r}^{H}\left( k \right)\Gamma\left( k,\mathbf{p}_{e} \right)\mathbf{\alpha}{\lbrack\mathbf{\alpha}^{H}\Gamma^{H}\left( k,\mathbf{p}_{e} \right)\Gamma\left( k \right)\mathbf{\alpha}\rbrack}^{- 1}\mathbf{\alpha}^{H}\Gamma^{H}\left( k,\mathbf{p}_{e} \right)\mathbf{r}\left( k \right)} \\
& {= - \sum\limits_{k = 1}^{K}\mathbf{f}^{H}\left( k \right)\mathbf{\alpha}{\lbrack\mathbf{\alpha}^{H}\mathbf{C}\left( k \right)\mathbf{\alpha}\rbrack}^{- 1}\mathbf{\alpha}^{H}\mathbf{f}\left( k \right),} \\
\end{array}$$ where $\mathbf{f}\left( k \right) \triangleq \Gamma^{H}\left( k,\mathbf{p}_{e} \right)\mathbf{r}\left( k \right)$, $\mathbf{C}\left( k \right) \triangleq \Gamma^{H}\left( k,\mathbf{p}_{e} \right)\Gamma\left( k,\mathbf{p}_{e} \right)$.
Henk A. L. Kiers studied a similar optimization problem in \[[@B41-sensors-18-00892]\]. Ofer Bar-Shalom and Anthony J. Weiss studied the complex-valued form of the optimization in \[[@B28-sensors-18-00892]\] and its application in \[[@B29-sensors-18-00892]\]. The programming in our work has complex $\mathbf{f}\left( k \right)$ and $\mathbf{C}\left( k \right)$, but the decision variables $\mathbf{\alpha}$ are real and non-negative. We modify the iterative process in \[[@B28-sensors-18-00892],[@B41-sensors-18-00892]\] to satisfy the real non-negative constraints in our work.
The cost function ([46](#FD46-sensors-18-00892){ref-type="disp-formula"}) can be rewritten by:$$\begin{array}{cl}
{Q\left( \mathbf{\eta} \right)} & {= - \sum\limits_{k = 0}^{K - 1}\mathbf{f}^{H}\left( k \right)\mathbf{\alpha}{\lbrack\mathbf{\alpha}^{H}\mathbf{C}\left( k \right)\mathbf{\alpha}\rbrack}^{- 1}\mathbf{\alpha}^{H}\mathbf{f}\left( k \right)} \\
& {= - \sum\limits_{k = 0}^{K - 1}{tr}\left\{ {\mathbf{\alpha}^{H}\mathbf{f}\left( k \right)\mathbf{f}^{H}\left( k \right)\mathbf{\alpha}{\lbrack\mathbf{\alpha}^{H}\mathbf{C}\left( k \right)\mathbf{\alpha}\rbrack}^{- 1}} \right\},} \\
\end{array}$$ where ${tr}\left( \cdot \right)$ is the trace operator of a matrix. Since $\mathbf{C}\left( k \right)$ and $\mathbf{f}\left( k \right)\mathbf{f}^{H}\left( k \right)$ are Hermitian metrics, $\forall\mathbf{\alpha}$ satisfy:$$\begin{aligned}
{\mathbf{\alpha}^{H}\mathbf{C}\left( k \right)\mathbf{\alpha}} & {= \mathbf{\alpha}^{H}\overline{\mathbf{C}}\left( k \right)\mathbf{\alpha},} \\
{\mathbf{\alpha}^{H}\mathbf{f}\left( k \right)\mathbf{f}^{H}\left( k \right)\mathbf{\alpha}} & {= \mathbf{\alpha}^{H}\overline{\mathbf{f}}\left( k \right){\overline{\mathbf{f}}}^{H}\left( k \right)\mathbf{\alpha},} \\
\end{aligned}$$ where $\overline{\mathbf{C}}\left( k \right) \triangleq {Re}{\{\mathbf{C}\left( k \right)\}}$, $\overline{\mathbf{f}}\left( k \right) = \mathbf{u}\left( k \right)\mathbf{s}^{\frac{1}{2}}\left( k \right)$, and $\mathbf{u}\left( k \right)\mathbf{s}\left( k \right)\mathbf{v}^{H}\left( k \right)$ is the Singular Value Decomposition (SVD) of ${Re}\{\mathbf{f}\left( k \right)\mathbf{f}^{H}\left( k \right)\}$. The complex matrices $\mathbf{f}\left( k \right)$ and $\mathbf{C}\left( k \right)$ are replaced by the matrices $\overline{\mathbf{f}}\left( k \right)$ and $\overline{\mathbf{C}}\left( k \right)$ with real-valued elements:$$\begin{array}{cl}
{Q\left( \mathbf{\eta} \right)} & {= - \sum\limits_{k = 1}^{K}{tr}\left\{ {\mathbf{\alpha}^{H}\mathbf{f}\left( k \right)\mathbf{f}^{H}\left( k \right)\mathbf{\alpha}{\lbrack\mathbf{\alpha}^{H}\mathbf{C}\left( k \right)\mathbf{\alpha}\rbrack}^{- 1}} \right\}} \\
& {= - \sum\limits_{k = 1}^{K}{\overline{\mathbf{f}}}^{H}\left( k \right)\mathbf{\alpha}{\lbrack\mathbf{\alpha}^{H}\overline{\mathbf{C}}\left( k \right)\mathbf{\alpha}\rbrack}^{- 1}\mathbf{\alpha}^{H}\overline{\mathbf{f}}\left( k \right).} \\
\end{array}$$
The complex-valued non-linear program ([46](#FD46-sensors-18-00892){ref-type="disp-formula"}) with real-valued constraints is thus reduced to the real-valued non-linear program (49).
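A minimal sketch of the real-valued surrogates used in (49): given $\mathbf{f}\left( k \right)$ and $\mathbf{C}\left( k \right)$, it returns $\overline{\mathbf{f}}\left( k \right)$ and $\overline{\mathbf{C}}\left( k \right)$ exactly as constructed above.

```python
import numpy as np

def realify(f_k, C_k):
    """Real-valued surrogates, valid for real decision vectors alpha.

    f_k : complex vector  f(k) = Gamma(k, p_e)^H r(k)
    C_k : complex matrix  C(k) = Gamma(k, p_e)^H Gamma(k, p_e)   (Hermitian)
    Returns (f_bar, C_bar) with real entries such that, for real alpha,
    alpha^T f_bar f_bar^T alpha = alpha^H f f^H alpha  and
    alpha^T C_bar alpha        = alpha^H C alpha.
    """
    C_bar = C_k.real                               # Re{C(k)}
    M = np.real(np.outer(f_k, f_k.conj()))         # Re{ f f^H }, symmetric PSD
    u, s, _ = np.linalg.svd(M)
    f_bar = u @ np.diag(np.sqrt(s))                # f_bar f_bar^T = Re{ f f^H }
    return f_bar, C_bar
```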
4.3. An Iterative Algorithm for Solving MP-ML {#sec4dot3-sensors-18-00892}
---------------------------------------------
We first introduce a theorem and then give an iterative algorithm for solving the program (49).
**Theorem 2.** *${\overline{\mathbf{\alpha}}}_{i}$ is a feasible solution of* (49)*, and a better solution of* (49) *is obtained by solving the following programming:* $${\overline{\mathbf{\alpha}}}_{i + 1} = \arg\min\limits_{\overline{\mathbf{\alpha}}}G\left( \mathbf{\alpha}_{i},\overline{\mathbf{\alpha}},\mathbf{p}_{e} \right) = {\parallel \mathbf{Y} - {\overline{\mathbf{\alpha}}}^{H}\mathbf{X} \parallel}^{2} - \mathbf{Z},$$ $$\begin{array}{r}
{s.t.\mspace{720mu}\overline{\mathbf{\alpha}} \geq 0.} \\
\end{array}$$ *where:* $$\begin{aligned}
\mathbf{F} & {\triangleq \sum\limits_{k = 1}^{K}\overline{\mathbf{f}}\left( k \right)^{T}\mathbf{W}\left( k \right)^{T},} \\
\mathbf{Y} & {\triangleq \mathbf{FU}^{- 1},} \\
\mathbf{X} & {\triangleq \mathbf{U}^{T},} \\
\mathbf{Z} & {\triangleq \mathbf{YY}^{T},} \\
\mathbf{U} & {= \overline{\mathbf{U}}\Sigma^{\frac{1}{2}},} \\
{\mathbf{W}\left( k \right)} & {\triangleq \mathbf{I}_{N} \otimes {diag}{\{\mathbf{w}\left( k \right)\}} \otimes \mathbf{I}_{L},} \\
{\mathbf{w}\left( k \right)} & {\triangleq \overline{\mathbf{f}}\left( k \right)^{T}\mathbf{\alpha}_{i}{\lbrack\mathbf{\alpha}_{i}^{T}\overline{\mathbf{C}}\left( k \right)\mathbf{\alpha}_{i}\rbrack}^{- 1},} \\
\end{aligned}$$ *$\mathbf{I}_{L}$ is an identity matrix with a size of $L \times L$ and $\mathbf{I}_{N}$ is an identity matrix with a size of $N \times N$. $\overline{\mathbf{U}}$ and $\Sigma$ are obtained from the SVD of the following matrix:* $$\begin{aligned}
{\sum\limits_{k = 1}^{K}\mathbf{W}\left( k \right)\overline{\mathbf{C}}\left( k \right)\mathbf{W}^{T}\left( k \right)} & {= {\overline{\mathbf{U}}}^{T}\Sigma\overline{\mathbf{U}}.} \\
\end{aligned}$$
The proof of Theorem 2 is given in [Appendix B](#app2-sensors-18-00892){ref-type="app"}. The programming ([50](#FD50-sensors-18-00892){ref-type="disp-formula"}) is a linear least-squares problem with bound constraints, and the Trust-Region-Reflective (TRR) algorithm is adopted to solve it. The details of TRR are described in \[[@B42-sensors-18-00892],[@B43-sensors-18-00892],[@B44-sensors-18-00892]\].
Following Theorem 2 and (A11) in the [Appendix B](#app2-sensors-18-00892){ref-type="app"}:$$Q\left( \mathbf{\eta}_{\mathbf{i} + 1} \right) = H\left( {\overline{\mathbf{\alpha}}}_{i + 1} \right) \leq G\left( \mathbf{\alpha}_{i},{\overline{\mathbf{\alpha}}}_{i + 1},\mathbf{p}_{e} \right) \leq G\left( \mathbf{\alpha}_{i},{\overline{\mathbf{\alpha}}}_{i},\mathbf{p}_{e} \right) = H\left( {\overline{\mathbf{\alpha}}}_{i} \right) = Q\left( \mathbf{\eta}_{i} \right).$$
We thus obtain an iterative method in which each step yields a solution no worse than the previous one. Denote the initial estimate of the unknown channel attenuations by $\mathbf{\alpha}_{0}$. The solution of ([50](#FD50-sensors-18-00892){ref-type="disp-formula"}) gives a better estimate of the unknown parameter vector due to the inequality ([51](#FD51-sensors-18-00892){ref-type="disp-formula"}). Furthermore, we get a better estimate ${\overline{\mathbf{\alpha}}}_{2}$, so that $H\left( {\overline{\mathbf{\alpha}}}_{2} \right) \leq G\left( \mathbf{\alpha}_{1},{\overline{\mathbf{\alpha}}}_{2},\mathbf{p}_{e} \right) \leq G\left( \mathbf{\alpha}_{1},{\overline{\mathbf{\alpha}}}_{1},\mathbf{p}_{e} \right) = H\left( {\overline{\mathbf{\alpha}}}_{1} \right)$. Thus, $H\left( {\overline{\mathbf{\alpha}}}_{2} \right) \leq H\left( {\overline{\mathbf{\alpha}}}_{1} \right) \leq H\left( {\overline{\mathbf{\alpha}}}_{0} \right)$. The details of the iterative procedure are given in Algorithm 2 of the Supplementary File.
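Once $\mathbf{X}$, $\mathbf{Y}$ and $\mathbf{Z}$ have been formed as in Theorem 2, the inner update ([50](#FD50-sensors-18-00892){ref-type="disp-formula"}) is a bound-constrained linear least-squares problem ($\mathbf{Z}$ does not affect the minimizer). The sketch below shows one such step using SciPy's `lsq_linear`, whose `'trf'` method is a Trust-Region-Reflective solver; the shapes of `X` and `Y` and their construction are treated as given and are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import lsq_linear

def mm_step(X, Y):
    """One attenuation update of programming (50).

    X : n x m real matrix, Y : length-m real vector (built from U, F and W(k) as in
    Theorem 2; their construction is omitted here). (50) reduces to
        min_{a >= 0} || X^T a - Y ||^2 ,
    which is solved with the Trust-Region-Reflective method ('trf').
    """
    res = lsq_linear(X.T, Y, bounds=(0.0, np.inf), method="trf")
    return res.x                                   # alpha_{i+1}
```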
4.4. Getting the Initial Value {#sec4dot4-sensors-18-00892}
------------------------------
The performance of the iterative algorithm is determined by the initial value ${\overline{\mathbf{\alpha}}}_{0}$. The path attenuations from the MP-MUSIC algorithm are used as the initial value of the MP-ML.
The initial path attenuations of the emitter *d* are denoted by ${\overline{\mathbf{\alpha}}}_{0}\left( d \right)$, and they are estimated by the MP-MUSIC in Algorithm 1.
For a fixed emitter position combination $\mathbf{p}_{e}$, evaluate the MP-MUSIC algorithm to get the initial path attenuations ${\hat{\mathbf{\alpha}}}_{d}$ of the position $\mathbf{p}_{e}\left( d \right)$, $d = 1,2,\ldots,D$.
Reshape $\hat{\mathbf{\alpha}} = {\lbrack{\hat{\mathbf{\alpha}}}_{1}^{T},{\hat{\mathbf{\alpha}}}_{2}^{T},\ldots,{\hat{\mathbf{\alpha}}}_{D}^{T}\rbrack}^{T}$ to get the initial attenuation vector:$${\overline{\mathbf{\alpha}}}_{0} = {\lbrack{\overline{\mathbf{\alpha}}}_{1}^{T},{\overline{\mathbf{\alpha}}}_{2}^{T},\ldots,{\overline{\mathbf{\alpha}}}_{D}^{T}\rbrack}^{T}$$ where:$$\begin{aligned}
{\hat{\mathbf{\alpha}}}_{d} & {= {\lbrack{\hat{\mathbf{\alpha}}}_{d1}^{T},{\hat{\mathbf{\alpha}}}_{d2}^{T},\ldots,{\hat{\mathbf{\alpha}}}_{dN}^{T}\rbrack}^{T},} \\
{\hat{\mathbf{\alpha}}}_{dn} & {= {\lbrack{\hat{\alpha}}_{dn1},{\hat{\alpha}}_{dn2},\ldots,{\hat{\alpha}}_{dnL}\rbrack}^{T},} \\
\end{aligned}$$ and ${\hat{\alpha}}_{dn\ell}$ is estimated from MP-MUSIC.
4.5. MP-ML Algorithm {#sec4dot5-sensors-18-00892}
--------------------
Algorithm 2 in the Supplementary File optimizes the parameters $\mathbf{\alpha}$ for a fixed emitter position combination $\mathbf{p}_{e}$. The searching dimension is thus further reduced to $3D$, and it becomes feasible to solve the resulting $3D$-dimensional nonlinear program. The MP-ML algorithm is presented in Algorithm 2.
Define the Region of Interest (RoI) by $\mathbf{p}$, and apply the MP-MUSIC algorithm described in Algorithm 1 to get an initial solution $\mathbf{p}_{e} = {\lbrack\mathbf{p}_{e}^{T}\left( 1 \right),\mathbf{p}_{e}^{T}\left( 2 \right),\ldots,\mathbf{p}_{e}^{T}\left( D \right)\rbrack}^{T}$ and the corresponding path attenuations $\mathbf{\alpha}_{0}$. Adopt Algorithm 2 in the Supplementary File to get the optimal estimates of the attenuations $\mathbf{\alpha}$ and the cost function value at the emitter positions $\mathbf{p}_{e}$. Design a suitable searching path $\mathbf{p}_{e}^{i}$, $i = 1,2,\ldots$, such as the Gaussian method, and obtain the optimal emitter position estimates.
Algorithm 2: MP-ML algorithm.
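A high-level sketch of the outer position search for a single emitter is given below: the MP-MUSIC peak initializes the position, and a generic derivative-free local optimizer stands in for the searching strategy of Algorithm 2. The callable `concentrated_cost` (which should wrap the inner attenuation iterations, e.g., repeated `mm_step` calls) is an illustrative assumption, not the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import minimize

def mp_ml_refine(p0, concentrated_cost):
    """Outer search of MP-ML (single emitter, D = 1, for brevity).

    p0                : 3-vector, the MP-MUSIC peak used as the initial position
    concentrated_cost : callable p_e -> Q(p_e), the cost of (44) after the inner
                        attenuation iterations have converged
    """
    res = minimize(concentrated_cost, np.asarray(p0, dtype=float),
                   method="Nelder-Mead",
                   options={"xatol": 1e-3, "fatol": 1e-9})
    return res.x, res.fun
```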
5. Numerical Examples {#sec5-sensors-18-00892}
=====================
Some numerical examples are given to demonstrate the performances of the above algorithms.
5.1. Scenario Setting and Performance Index Definition {#sec5dot1-sensors-18-00892}
------------------------------------------------------
In numerical simulations, three emitters are located at $\lbrack 0,0,0\rbrack$, $\lbrack 50,0,0\rbrack$ and $\lbrack 0,50,0\rbrack$, and four receiving arrays are located at $\lbrack 2200, - 2100,0\rbrack$, $\lbrack 3300,600,0\rbrack$, $\lbrack 3100, - 700,0\rbrack$ and $\lbrack 2300,2500,0\rbrack$. There are two different layouts of transponders in the simulations. The first one is the basic scenario, and the second one is designed to test the performances when a transponder is close to another. There are four transponders located at $\lbrack - 1210,100,200\rbrack$, $\lbrack 100,1120,200\rbrack$, $\lbrack - 100, - 1040,200\rbrack$ and $\lbrack 970,160,200\rbrack$ in Scenario A (see [Figure 3](#sensors-18-00892-f003){ref-type="fig"}a; [Figure 3](#sensors-18-00892-f003){ref-type="fig"} is a top view of the system layout, and height information is not indicated in the figure). We move the third transponder to $\lbrack 100,1100,200\rbrack$ in Scenario B (see [Figure 3](#sensors-18-00892-f003){ref-type="fig"}b). All the positions are measured in km.
Each receiving array is a Uniform Circular Array (UCA) with eleven antennas and a radius of 30 m. The bandwidth is 8 kHz. The carrier frequency is 10 MHz. The simulation results are based on 200 Monte Carlo runs to gather enough statistics. The source signal of each emitter and the path attenuation coefficients are generated randomly once for all the Monte Carlo runs, while the additive noises are regenerated at each run. The complex-valued signal frequency coefficients are subject to ${\parallel {\check{\mathbf{s}}}_{d} \parallel}_{F}^{2} = 1$. The path attenuation coefficients are drawn from a uniform distribution between zero and one. The SNR is defined in terms of "post-processing SNR", which is given by:$$\begin{aligned}
{SNR} & {\triangleq \frac{E\left\{ {\sum_{k = 1}^{K}{\parallel \mathbf{A}\left( k \right)\check{\mathbf{s}}\left( k \right) \parallel}_{F}^{2}} \right\}}{K\sigma^{2}}.} \\
\end{aligned}$$
The Root-Mean-Square Error (RMSE) of the estimated positions is adopted as the performance index of the algorithms. The RMSE is given by:$${RMSE}\left( \mathbf{p}_{e} \right) \triangleq \sqrt{\frac{\sum_{i = 1}^{N_{s}}{\parallel {\hat{\mathbf{p}}}_{e} - \mathbf{p}_{e} \parallel}_{F}^{2}}{N_{s}D}},$$ where $N_{s}$ is the number of Monte Carlo runs, *D* is the number of emitters, $\mathbf{p}_{e}$ are the true emitter positions and ${\hat{\mathbf{p}}}_{e}$ are the estimated emitter positions.
A scalar quantity of the CRLB matrix corresponding to RMSE is defined as:$$\overline{CRLB}\left( \mathbf{p}_{e} \right) \triangleq \sqrt{\frac{\lambda_{\max}{\lbrack{CRLB}\left( \mathbf{p}_{e} \right)\rbrack}}{D}},$$ where $\lambda_{\max}{\lbrack{CRLB}\left( \mathbf{p}_{e} \right)\rbrack}$ is the maximal eigenvalue of ${CRLB}\left( \mathbf{p}_{e} \right)$. The computation of the CRLB matrix is presented in Section 3 of the Supplementary File \[[@B45-sensors-18-00892],[@B46-sensors-18-00892],[@B47-sensors-18-00892]\].
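The RMSE and the scalar CRLB defined above can be computed directly; a minimal sketch follows, in which the array shapes are illustrative assumptions.

```python
import numpy as np

def rmse(p_est, p_true):
    """RMSE as defined above: p_est has shape (N_s, D, 3), p_true has shape (D, 3)."""
    N_s, D, _ = p_est.shape
    err = np.sum((p_est - p_true) ** 2)        # sum of squared errors over all runs
    return np.sqrt(err / (N_s * D))

def crlb_scalar(crlb_matrix, D):
    """Scalar CRLB: square root of the largest eigenvalue divided by D."""
    lam_max = np.max(np.linalg.eigvalsh(crlb_matrix))
    return np.sqrt(lam_max / D)
```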
5.2. Performances of the MP-MUSIC Method {#sec5dot2-sensors-18-00892}
----------------------------------------
We compare the performances of the following MUSIC algorithms:

-   SSP-MUSIC: Signal Subspace Projection MUSIC proposed in \[[@B18-sensors-18-00892]\];
-   NSP-MUSIC: Noise Subspace Projection MUSIC without non-negative and real constraints;
-   MP-MUSIC-IPA: Noise subspace projection MUSIC with real and non-negative constraints in the multipath propagation scenario and solved by the Interior Point Algorithm;
-   MP-MUSIC-ASA: Noise subspace projection MUSIC with real and non-negative constraints in the multipath propagation scenario and solved by the Active Set Algorithm.
### 5.2.1. SSP-MUSIC and NSP-MUSIC in a Single Path Propagation Positioning Scenario {#sec5dot2dot1-sensors-18-00892}
The single emitter is placed at \[0,0\] (km), and only the direct paths are taken into consideration. [Figure 4](#sensors-18-00892-f004){ref-type="fig"} gives the spatial spectrum in a single-path propagation positioning scenario (the spatial spectrum of a 3D positioning problem is a 3D function; to show its form more intuitively, the spectra displayed in this paper are horizontal slices of the 3D spatial spectrum at the true *Z* value, i.e., $z = 0$).
We find the peak in the spatial spectrum of SSP-MUSIC ([Figure 4](#sensors-18-00892-f004){ref-type="fig"}a), but the NSP-MUSIC proposed in this paper ([Figure 4](#sensors-18-00892-f004){ref-type="fig"}b) yields a sharper peak and a more precise estimate than SSP-MUSIC in a single-path positioning context.
The following simulations are designed to study the performance of those MUSIC methods in the presence of multi-path propagation.
### 5.2.2. SSP-MUSIC, NSP-MUSIC and MP-MUSIC in a Multi-Path Propagation Positioning {#sec5dot2dot2-sensors-18-00892}
We design four numerical simulations in this section. The simulation parameters are set as in [Table 1](#sensors-18-00892-t001){ref-type="table"}, where *R* is the radius of the UCA, $\lambda$ and *f* are the carrier wavelength and frequency, *M* is the number of antennas in a receiving array, *B* is the bandwidth of the source signals, *K* is the number of frequencies and *J* is the number of sections of a received signal.
Baseband signal positioning: [Figure 5](#sensors-18-00892-f005){ref-type="fig"} gives the spatial spectrum when $R \ll \lambda$. The emitter positions $\mathbf{p}_{\mathbf{e}}$ that make $\mathbf{E}\left( \mathbf{p}_{\mathbf{e}} \right)$ singular or nearly singular constitute the yellow hyperbolic curves in the SSP-MUSIC spectrum and the NSP-MUSIC spectrum. Neither SSP-MUSIC nor NSP-MUSIC finds the emitters correctly when $R \ll \lambda$: no peak can be found in the SSP-MUSIC spectrum, and the three peaks found in the NSP-MUSIC spectrum are disrupted by the hyperbolic curves. MP-MUSIC with real and non-negative constraints finds three sharp peaks correctly.
Two transponders are close: [Figure 6](#sensors-18-00892-f006){ref-type="fig"} gives the spatial spectrum when a transponder is close to another. The nearest distance between transponders is 20 km in the simulation. When two transponders are close, the array responses of the receiving array with respect to the two transponders are almost the same. The yellow hyperbolic curves in the SSP-MUSIC and NSP-MUSIC spectra are the candidate positions that make $\Gamma\left( k \right)$ singular. SSP-MUSIC and NSP-MUSIC fail to locate the emitters in this context, but MP-MUSIC obtains the emitter positions correctly.
Single antenna of each receiving station: [Figure 7](#sensors-18-00892-f007){ref-type="fig"} gives the spatial spectrum when $M = 1$. The array responses reduce to $\mathbf{a}_{\ell n}\left( k \right) = 1$, and an SMC will make $\Gamma\left( k \right)$ singular.
General scenario: [Figure 8](#sensors-18-00892-f008){ref-type="fig"} gives the spatial spectrum of a general parameter setting scenario. Although there is no deliberate construction of conditions that leads to $\Gamma\left( k \right)$ singularity in the general scenario, SSP-MUSIC and NSP-MUSIC still cannot obtain the emitter positions. However, MP-MUSIC obtains the three emitter locations accurately.
### 5.2.3. Performances of MP-MUSIC-ASA and MP-MUSIC-IPA {#sec5dot2dot3-sensors-18-00892}
We first down-convert the radio frequency signals to baseband to avoid the multiple-peak searching of radio-frequency signal positioning. The simulation parameters are set as in [Table 2](#sensors-18-00892-t002){ref-type="table"}.
The RMSE of MP-MUSIC-ASA, MP-MUSIC-IPA and the CRLB are given in [Figure 9](#sensors-18-00892-f009){ref-type="fig"}.
Since ASA could find the global optimal solution, the performance of MP-MUSIC-ASA should be no worse than that of MP-MUSIC-IPA. The numerical simulation results show that both MP-MUSIC-ASA and MP-MUSIC-IPA find the global optimal solution and have the same RMSE because of the convexity of the cost function.
Simulations are run on a server with an Intel Xeon CPU E5-2630 v4, 16 GB of memory and MATLAB 2016a. The MATLAB function "lsqlin" is adopted to verify the performance of IPA. We repeat the simulation 1000 times to get the distribution of the time consumption of the ASA and IPA sections. It is assumed that the time consumptions $t_{ASA}$ and $t_{IPA}$ follow normal distributions, with $t_{ASA} \sim N\left( 5.8175 \times 10^{- 5},4.8311 \times 10^{- 6} \right)$ and $t_{IPA} \sim N\left( 3.4756 \times 10^{- 3},2.2225 \times 10^{- 5} \right)$ (seconds).
Benefiting from the convex properties and a reasonable initial value of $\mathbf{\alpha}$ in ASA, ASA consumes only 1.67% of the time consumed by IPA and has a more stable time consumption than IPA. (It should be noticed that the time consumption of MP-MUSIC-ASA is determined by the path attenuations and the SNR. A low SNR and a small $\alpha_{\ell n}$ make the constraint $\alpha_{\ell n} \geq 0$ an active constraint ($\alpha_{\ell n} = 0$). The time consumption of MP-MUSIC-ASA is deeply affected by the number of elements in the active set.)
5.3. The Performance of MP-ML and MP-MUSIC {#sec5dot3-sensors-18-00892}
------------------------------------------
We compare the performances of MP-ML and MP-MUSIC in cases of different numbers of snapshots.
### 5.3.1. Insufficient Snapshots {#sec5dot3dot1-sensors-18-00892}
If the number of snapshots is insufficient, we cannot obtain a reliable covariance matrix estimation of observations, and MP-MUSIC will fail to get the emitter positions. The simulation parameters are set as in [Table 3](#sensors-18-00892-t003){ref-type="table"}.
The performances of MP-MUSIC and MP-ML with 32 snapshots (16 frequencies) are compared in [Figure 10](#sensors-18-00892-f010){ref-type="fig"}.
Only 32 snapshots of each receiver are taken for positioning. The thirty-two snapshots are not divided into sections to estimate the covariance matrix in NSP-MUSIC (that is, $K = 16,J = 1$). The RMSE of NSP-MUSIC cannot be reduced further as the SNR increases because of the error of the covariance matrix estimation. MP-ML obtains a better performance than MP-MUSIC when the snapshots are insufficient.
### 5.3.2. Performances of Different *K* and *J* Combinations {#sec5dot3dot2-sensors-18-00892}
We discuss the performances of MP-MUSIC with different *K* and *J* combinations in [Figure 11](#sensors-18-00892-f011){ref-type="fig"}. The total number of snapshots in the simulations is 256 ($K \cdot J = 128$). The other parameters are set as in [Table 4](#sensors-18-00892-t004){ref-type="table"}.
[Figure 11](#sensors-18-00892-f011){ref-type="fig"} gives the performances of MP-MUSIC with different combinations of *J* and *K*. The number of snapshots used in the simulation is 256. MP-ML constructs a maximum likelihood function of all 256 snapshots to estimate the emitter positions jointly, and it has a better performance than MP-MUSIC.
The performances of the MP-MUSIC methods are affected by the error of the covariance estimates and by the bandwidth. A smaller *J* leads to a larger error of the covariance estimation; in addition, a smaller *K* leads to a smaller number of observation equations.
### 5.3.3. The Performances of Different Numbers of Snapshots {#sec5dot3dot3-sensors-18-00892}
We compare the performances of the MP-MUSIC and the MP-ML with different numbers of snapshots in [Figure 12](#sensors-18-00892-f012){ref-type="fig"} (The CRLB derived in the Appendix shows that the CRLB is determined by the signal snapshots. The CRLB in [Figure 12](#sensors-18-00892-f012){ref-type="fig"} is the average of 100 simulations with random signal snapshots.). The other parameters are set as in [Table 5](#sensors-18-00892-t005){ref-type="table"}.
The performances of MP-MUSIC and MP-ML with $K \cdot J = 2^{i}$, $\left( i = 4,5,\cdots,13 \right)$ are studied in this section. Since the performance of MP-MUSIC is determined by the combination of *K* and *J*, we choose the best combination of $K,J$ in the MP-MUSIC simulation. As the number of snapshots increases, the RMSE decreases gradually in [Figure 12](#sensors-18-00892-f012){ref-type="fig"}. If the number of snapshots is abundant ($K \cdot J > 1024$), both MP-MUSIC and MP-ML are close to the CRLB. If the number of snapshots is insufficient ($K \cdot J < 256$), the performance of MP-MUSIC will suffer a serious deterioration. However, MP-ML is much less affected by the insufficient snapshots.
### 5.3.4. Time Consumptions of MP-MUSIC and MP-ML {#sec5dot3dot4-sensors-18-00892}
MP-MUSIC computes the cost values of the candidates one by one to obtain the spatial spectrum, while MP-ML computes the likelihood function over all combinations of *d* emitters. MP-ML therefore consumes more time to obtain the emitter positions than MP-MUSIC. We compare the time consumptions and the corresponding performances of MP-MUSIC and MP-ML with different *d*. The simulation parameters are set as in [Table 6](#sensors-18-00892-t006){ref-type="table"}, and the simulation results are given in [Figure 13](#sensors-18-00892-f013){ref-type="fig"}.
The layout of the transponders and receiving arrays is defined as in [Figure 3](#sensors-18-00892-f003){ref-type="fig"}a, and the four emitters are placed at $\lbrack 0,0,0\rbrack,\lbrack 50,0,0\rbrack,\lbrack 0,50,0\rbrack,\lbrack 50,50,0\rbrack$ (km). We choose *d* emitters from the four placements at random to analyze the time consumptions of MP-MUSIC and MP-ML with different *d*. We perform 100 simulation runs to gather enough statistics.
In [Figure 13](#sensors-18-00892-f013){ref-type="fig"}a, because MP-MUSIC calculates the cost function value of each candidate one by one, while MP-ML computes all combinations of *d* emitters, the time consumption of MP-MUSIC is almost independent of *d*, whereas that of MP-ML increases exponentially with *d*. The time consumption of MP-MUSIC is mainly determined by *K* rather than by the number of snapshots (the two red lines in [Figure 13](#sensors-18-00892-f013){ref-type="fig"}a, which represent MP-MUSIC ($K = 16,J = 8$) and MP-MUSIC ($K = 16,J = 1$), are almost coincident), because the dimension of the matrix in the cost function is determined by *K* and is independent of *J*. Besides, the time consumption of MP-ML is determined by the number of snapshots ($KJ = 128$ costs much more time than $KJ = 16$).
[Figure 13](#sensors-18-00892-f013){ref-type="fig"}b presents the performances corresponding to [Figure 13](#sensors-18-00892-f013){ref-type="fig"}a. When the snapshots are sufficient ($KJ = 128$), the positioning accuracy improvement of MP-ML over MP-MUSIC is not significant, but MP-ML costs much more time to obtain the positions; in this case, applying MP-MUSIC to get the emitter positions is a wise choice. When the snapshots are insufficient ($KJ = 16$), MP-ML obtains a better performance than MP-MUSIC, although it costs much more time to get the results; the much better performance is then worth the extra computation time.
Benefiting from a good initial value from MP-MUSIC and an efficient searching strategy (the steepest descent method), the time consumption of MP-ML is acceptable when the snapshots are insufficient. The time consumption of ASA in MP-MUSIC is determined by the a priori information about the initial active set $\mathcal{W}$, which can be derived with some other techniques, e.g., direction finding. MP-ML costs much more time than MP-MUSIC, especially in the case of a large number of emitters. Fortunately, MP-MUSIC, which is adopted to obtain an initial value for MP-ML, provides "not bad" solutions quickly, and the iterative algorithm then continues outputting better and better solutions (see [Figure 14](#sensors-18-00892-f014){ref-type="fig"}).
The simulation parameters are set the same as in the second row of [Table 6](#sensors-18-00892-t006){ref-type="table"}, with $d = 3$. MP-ML does not output a result until $T_{0}$, because MP-MUSIC runs from zero to $T_{0}$ to obtain the initial solution of MP-ML. The RMSE of MP-ML decreases monotonically as results continue to be output. In real applications, we can find a trade-off between the time consumption and the positioning accuracy to obtain an acceptable solution.
5.4. Performance of SPG and MP-ML {#sec5dot4-sensors-18-00892}
---------------------------------
Single Platform Geolocation (SPG), mentioned in \[[@B28-sensors-18-00892]\], uses only one platform to position a single emitter, while MP-ML uses multiple receiving arrays and locates multiple emitters simultaneously.
### 5.4.1. SPG and MP-ML with a Single Emitter {#sec5dot4dot1-sensors-18-00892}
Assume that there is only one emitter ($D = 1$, $\mathbf{p}_{e} = {\lbrack 0,0,0\rbrack}^{T}$km) in the RoI. [Figure 15](#sensors-18-00892-f015){ref-type="fig"} compares the performance of SPG and that of MP-ML with different numbers of receivers. The other parameters are set as in [Table 7](#sensors-18-00892-t007){ref-type="table"}.
We compare the performances of MP-ML with *N* receivers, $N = 1,2,3,4$, and give the corresponding CRLB as well. When $D = 1$ and $N = 1$, MP-ML degenerates into SPG. The RMSE for *N* receivers is obtained by averaging the RMSE over all combinations of *N* receivers out of the four positions, and the positions of the four receiving stations are defined as Layout A. The simulation demonstrates that the performance is significantly improved as the number of receiving stations increases. The configuration with four receiving stations gains nearly a $10^{5}$ performance improvement compared to SPG.
### 5.4.2. Performance of Positioning Multiple Emitters {#sec5dot4dot2-sensors-18-00892}
MP-ML has the ability to position multiple emitters simultaneously. We analyze the performance for different numbers of emitters in this section.
We place five emitters in the RoI, at $\lbrack 0,0,0\rbrack,\lbrack 50,0,0\rbrack,\lbrack 0,50,0\rbrack,\lbrack 50,50,0\rbrack$ and $\lbrack 25,25,0\rbrack$ (km). We lay out four transponders and four receiving arrays as in [Figure 3](#sensors-18-00892-f003){ref-type="fig"}a. The RMSE for *D* emitters, where $D = 1,2,3,4,5$, is obtained by averaging the RMSE over all combinations of *D* emitters out of the five positions. The RMSEs and CRLBs for different *D* are displayed in [Figure 16](#sensors-18-00892-f016){ref-type="fig"}. The other parameters are set the same as in [Table 7](#sensors-18-00892-t007){ref-type="table"}.
[Figure 16](#sensors-18-00892-f016){ref-type="fig"} only gives the RMSEs and CRLBs for $D = 1,2,3,4$, since the RMSE turns out to be unstable and the CRLB turns out to be $+ \infty$ when $D = 5$. This can be explained by the algebraic principle that the number of unknown emitters cannot be greater than the number of transponders.
The numerical simulations and CRLB results demonstrate that the performance of MP-ML is influenced by the number of emitters. If $D \ll L$, the number of emitters has only a slight effect on the performance of MP-ML, and if $D = L$, the performance will decline significantly. If $D > L$, MP-ML cannot find any emitter at all.
6. Conclusions {#sec6-sensors-18-00892}
==============
A novel geolocation architecture, termed "Multiple Transponders and Multiple Receivers for Multiple Emitters Positioning System (MTRE)", is proposed in this paper. A Direct Position Determination for Multi-path Propagation positioning (MP-DPD) model and a MUltiple SIgnal Classification algorithm for Multi-path Propagation positioning (MP-MUSIC) are proposed to position the emitters in an MTRE system. To optimize the cost function of MP-MUSIC efficiently, we prove that it is a convex quadratic program with linear equality and non-negativity constraints. An algorithm named the Active Set Algorithm (ASA) is designed to solve this convex quadratic program. Numerical results show that MP-MUSIC with ASA locates multiple emitters precisely, while the Signal Subspace Projection MUSIC algorithm (SSP-MUSIC) does not. We also compare the time consumptions of the Interior Point Algorithm (IPA) and the ASA; ASA consumes only 1.67% of the time consumed by IPA.
In the case of time-sensitive positioning, the number of snapshots may not be sufficient. The maximum likelihood estimation algorithm for Multi-path Propagation positioning (MP-ML) maximizes the likelihood function rather than calculating the covariance matrix of the observations, which avoids the requirement of a large number of snapshots. We design an iterative algorithm and propose a strategy for choosing an initial solution to accelerate the solving of the programming. Numerical simulation results show that MP-ML approaches the Cramér--Rao Lower Bound (CRLB) more closely than MP-MUSIC with the same data length, but MP-ML requires more computation time than the MP-MUSIC method.
Furthermore, we discussed the performance of MP-ML with different numbers of receiving arrays and emitters. SPG mentioned in \[[@B28-sensors-18-00892]\] is viewed as a degenerate version of MP-ML (where $D = 1$ and $N = 1$). The numerical results show that it is worthwhile to increase the number of receiving stations for weak signals, although doing so increases the hardware costs, communication overhead and computational complexity of MP-ML.
We compared the performances and time consumptions of MP-MUSIC and MP-ML by numerical simulations. MP-ML obtains a more precise position estimate than MP-MUSIC, while MP-MUSIC consumes less time than MP-ML. In a specific positioning application, the appropriate method can be chosen according to the number of snapshots, the precision requirement and the available computing resources.
MP-ML has the ability to position multiple emitters simultaneously. If the number of emitters is far less than the number of transponders, the number of emitters has only a slight influence on the positioning performance. If the number of emitters is equal to the number of transponders, the performance declines significantly. If the number of emitters is greater than the number of transponders, MP-ML cannot find any emitter at all.
An MTRE system requires more receiving arrays, more transponders and more computing resources than a Single Platform Geolocation (SPG) system or a Direction Finding System (DFS). However, an MTRE system can locate multiple emitters simultaneously and provides a higher positioning accuracy than SPG and DFS. It is suitable for cost-insensitive applications, such as military and national security applications.
This work was supported by the National Natural Science Foundation of China (61201381 and 61401513), the China Postdoctoral Science Foundation (2016M592989), the Outstanding Youth Foundation of Information Engineering University (2016603201) and the Self-Topic Foundation of Information Engineering University (2016600701).
Jianping Du developed the program and mathematical model and wrote the paper. Ding Wang provided much useful advice and checked the paper. Wanting Yu worked on the data collection, experiments and data analyses. Hongyi Yu provided the initial idea of this research.
The authors declare no conflict of interest.
We discuss the conditions that make the manifold matrix singular in this section. We first define the terms Singular Manifold Candidate (SMC) and Center Singular Manifold Candidate (CSMC).
Singular Manifold Candidate (SMC): In a multi-path positioning application, a candidate position $\mathbf{p}_{e}$ is named a singular manifold candidate if it satisfies $e^{- j\omega_{k}{\lbrack{\widetilde{\mathbf{\tau}}}_{\ell_{1}n} + {\overline{\tau}}_{\ell_{1}}{(\mathbf{p}_{e})}\rbrack}} = e^{- j\omega_{k}{\lbrack{\widetilde{\mathbf{\tau}}}_{\ell_{2}n} + {\overline{\tau}}_{\ell_{2}}{(\mathbf{p}_{e})}\rbrack}}$, $k = 1,2,\cdots,K$, where ${\widetilde{\mathbf{\tau}}}_{\ell_{i}n},i = 1,2$, are the propagation delays from the $\ell_{i}$-th transponder to the antennas of the n-th receiving array and ${\overline{\tau}}_{\ell_{i}},i = 1,2$, is the propagation delay from the candidate position to the $\ell_{i}$-th transponder.
Center Singular Manifold Candidate (CSMC): In a multi-path positioning application, a candidate position $\mathbf{p}_{e}$ is named a center singular manifold candidate if it satisfies $e^{- j\omega_{k}{\lbrack{\widetilde{\tau}}_{\ell_{1}n} + {\overline{\tau}}_{\ell_{1}}{(\mathbf{p}_{e})}\rbrack}} = e^{- j\omega_{k}{\lbrack{\widetilde{\tau}}_{\ell_{2}n} + {\overline{\tau}}_{\ell_{2}}{(\mathbf{p}_{e})}\rbrack}}$, $k = 1,2,\cdots,K$, where ${\widetilde{\tau}}_{\ell_{i}n},i = 1,2$, is the propagation delay from the $\ell_{i}$-th transponder to the center of the n-th receiving array and ${\overline{\tau}}_{\ell_{i}},i = 1,2$, is the propagation delay from the candidate position to the $\ell_{i}$-th transponder.
The difference between an SMC and a CSMC is that, for an SMC, all the antennas satisfy the equation, while for a CSMC only the array centers do.
*If $\mathbf{p}_{e}$ is a CSMC, there are at least two paths from $\mathbf{p}_{e}$ to a receiving array, which satisfy:* $${\widetilde{D}}_{\ell_{1,2}n}\left( \mathbf{p}_{e} \right) = z\lambda,z = 0,1,2,\cdots,$$ *where n is the receiving array index, $\ell_{1}$ and $\ell_{2}$ are the indexes of two transponders in two paths, ${\widetilde{D}}_{\ell_{1,2}n}{\triangleq \parallel}\mathbf{p}_{e} - \mathbf{p}_{t}\left( \ell_{1} \right) \parallel_{F}{+ \parallel}\mathbf{p}_{r}\left( n \right) - \mathbf{p}_{t}\left( \ell_{1} \right) \parallel_{F}{- \parallel}\mathbf{p}_{e} - \mathbf{p}_{t}\left( \ell_{2} \right) \parallel_{F} - {\parallel \mathbf{p}_{r}\left( n \right) - \mathbf{p}_{t}\left( \ell_{2} \right) \parallel}_{F}$ is the difference between the two path lengths, $\mathbf{p}_{r}\left( n \right)$ is the center of the n-th receiving array, z is an integer, λ is the Least Common Multiple (LCM) of $\{\lambda_{1},\lambda_{2},\cdots,\lambda_{K}\}$ and $\lambda_{k}$ is the wave length of frequency $\omega_{k}$.*
Move the right-hand side of the equation in the CSMC condition to the left: $$\begin{array}{l}
{e^{- j\omega_{k}{\{{{\lbrack{\widetilde{\tau}}_{\ell_{1}n} + {\overline{\tau}}_{\ell_{1}}{(\mathbf{p}_{e})}\rbrack} - {\lbrack{\widetilde{\tau}}_{\ell_{2}n} + {\overline{\tau}}_{\ell_{2}}{(\mathbf{p}_{e})}\rbrack}}\}}} = 1,} \\
\left. \Rightarrow e^{- j\frac{2\pi c}{\lambda_{k}}{\{{{\lbrack{\widetilde{\tau}}_{\ell_{1}n} + {\overline{\tau}}_{\ell_{1}}{(\mathbf{p}_{e})}\rbrack} - {\lbrack{\widetilde{\tau}}_{\ell_{2}n} + {\overline{\tau}}_{\ell_{2}}{(\mathbf{p}_{e})}\rbrack}}\}}} = 1, \right. \\
\left. \Rightarrow\frac{2\pi c}{\lambda_{k}}\left\{ {{\lbrack{\widetilde{\tau}}_{\ell_{1}n} + {\overline{\tau}}_{\ell_{1}}\left( \mathbf{p}_{e} \right)\rbrack} - {\lbrack{\widetilde{\tau}}_{\ell_{2}n} + {\overline{\tau}}_{\ell_{2}}\left( \mathbf{p}_{e} \right)\rbrack}} \right\}\mspace{600mu}{mod}\; 2\pi = 0, \right. \\
\left. \Rightarrow\frac{{\widetilde{D}}_{\ell_{1,2}n}\left( \mathbf{p}_{e} \right)}{\lambda_{k}} \in \mathbb{Z},k = 1,2,\cdots,K, \right. \\
\left. \Rightarrow{\widetilde{D}}_{\ell_{1,2}n}\left( \mathbf{p}_{e} \right) = z\lambda,z = 0,1,2,\cdots, \right. \\
\end{array}$$ where $\lambda_{k} = \frac{2\pi c}{\omega_{k}}$ is the wave length of frequency $\omega_{k}$, $c = 3.0 \times 10^{8}$ m/s is the light speed constant, ${\widetilde{D}}_{\ell_{1,2}n}{\triangleq \parallel}\mathbf{p}_{e} - \mathbf{p}_{t}\left( \ell_{1} \right) \parallel_{F}{+ \parallel}\mathbf{p}_{r}\left( n \right) - \mathbf{p}_{t}\left( \ell_{1} \right) \parallel_{F}{- \parallel}\mathbf{p}_{e} - \mathbf{p}_{t}\left( \ell_{2} \right) \parallel_{F} - {\parallel \mathbf{p}_{r}\left( n \right) - \mathbf{p}_{t}\left( \ell_{2} \right) \parallel}_{F}$ is the difference between the two path lengths, $\mathbf{p}_{r}\left( n \right)$ is the center of the *n* th receiver and $\mathbb{Z}$ is the integer set. Denote $\lambda$ as the Least Common Multiple (LCM) of $\{\lambda_{1},\lambda_{2},\cdots,\lambda_{K}\}$. ☐
In particular, setting $z = 0$ gives ${\widetilde{D}}_{\ell_{1,2}n} = 0$ (see [Figure A1](#sensors-18-00892-f0A1){ref-type="fig"}); in fact, any candidate $\mathbf{p}_{e}$ on this curve satisfies ${\widetilde{\tau}}_{\ell_{1}n} + {\overline{\tau}}_{\ell_{1}} = {\widetilde{\tau}}_{\ell_{2}n} + {\overline{\tau}}_{\ell_{2}}$.
![Two paths with the same delay.](sensors-18-00892-g0A1){#sensors-18-00892-f0A1}
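As a quick illustration of the lemma, the following sketch evaluates the path-length difference ${\widetilde{D}}_{\ell_{1,2}n}\left( \mathbf{p}_{e} \right)$ for a candidate position and tests whether it is an integer multiple of a common wavelength $\lambda$. The geometry and wavelength below are hypothetical values chosen only for the example, not parameters from the paper.

```python
import numpy as np

def path_length_diff(p_e, p_t1, p_t2, p_r):
    """D = ||p_e - p_t1|| + ||p_r - p_t1|| - ||p_e - p_t2|| - ||p_r - p_t2||."""
    return (np.linalg.norm(p_e - p_t1) + np.linalg.norm(p_r - p_t1)
            - np.linalg.norm(p_e - p_t2) - np.linalg.norm(p_r - p_t2))

def is_csmc(p_e, p_t1, p_t2, p_r, lam, tol=1e-9):
    """True if D is (numerically) an integer multiple of the common wavelength lam."""
    D = path_length_diff(p_e, p_t1, p_t2, p_r)
    z = np.round(D / lam)
    return abs(D - z * lam) < tol

# Hypothetical 2-D geometry (metres): two transponders and one receiving-array centre.
p_t1 = np.array([0.0, 0.0])
p_t2 = np.array([40.0, 0.0])
p_r = np.array([20.0, 30.0])
lam = 30.0                                   # assumed common wavelength (LCM)

# This candidate is equidistant from both transponders (as is p_r), so D = 0
# and the z = 0 case of the lemma applies.
print(is_csmc(np.array([20.0, 10.0]), p_t1, p_t2, p_r, lam))
```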
If $\mathbf{p}_{e}$ is a CSMC and $\mathbf{a}_{\ell_{1}n}\left( k \right) \approx \mathbf{a}_{\ell_{2}n}\left( k \right)$ for $k = 1,2,\ldots,K$, then the candidate emitter position $\mathbf{p}_{e}$ renders the manifold matrix nearly singular.
In a multi-path positioning model, $\Gamma\left( k \right)$ in ([16](#FD16-sensors-18-00892){ref-type="disp-formula"}) is defined as: $$\Gamma\left( k \right) \triangleq \widetilde{\mathbf{A}}\left( k \right)\mathbf{V}\left( k \right),$$ where $\Gamma\left( k \right)$ is a block diagonal matrix: $$\Gamma\left( k \right) = \begin{bmatrix}
{\Gamma_{1}\left( k \right)} & 0 & \cdots & 0 \\
0 & {\Gamma_{2}\left( k \right)} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & {\Gamma_{N}\left( k \right)} \\
\end{bmatrix}_{MN \times LN}$$
The *n*-th block $\Gamma_{n}\left( k \right)$ in the diagonal is a matrix with a size of $M \times L$. The *ℓ*-th column of $\Gamma_{n}\left( k \right)$ is defined as $e^{- i\omega_{k}{({\widetilde{\mathbf{\tau}}}_{\ell n} + {\overline{\tau}}_{\ell})}}$, where ${\widetilde{\mathbf{\tau}}}_{\ell n}$ is an $M \times 1$ column vector representing the propagation delays from the *ℓ*-th transponder to the *M* antennas in the *n*-th receiving station. Notice that: $$e^{- i\omega_{k}{({\widetilde{\mathbf{\tau}}}_{\ell n} + {\overline{\tau}}_{\ell})}} = e^{- i\omega_{k}{({\widetilde{\mathbf{\tau}}}_{\ell n} - \tau_{\ell n} + \tau_{\ell n} + {\overline{\tau}}_{\ell})}} = e^{- i\omega_{k}{({\widetilde{\mathbf{\tau}}}_{\ell n} - \tau_{\ell n})}}e^{- i\omega_{k}{(\tau_{\ell n} + {\overline{\tau}}_{\ell})}} \triangleq \mathbf{a}_{\ell n}\left( k \right)e^{- i\omega_{k}{(\tau_{\ell n} + {\overline{\tau}}_{\ell})}},$$ where $\tau_{\ell n}$ is the propagation delay from the *ℓ*-th transponder to the center of the *n*-th receiving array and $\mathbf{a}_{\ell n}\left( k \right) = e^{- i\omega_{k}{({\widetilde{\mathbf{\tau}}}_{\ell n} - \tau_{\ell n})}}$ represents the array response of the *n*-th receiving station from the *ℓ*-th transponder.
If $\mathbf{p}_{e}$ is a CSMC that satisfies ${\widetilde{D}}_{\ell_{1,2}n} = z\lambda$ and $\mathbf{a}_{\ell_{1}n}\left( k \right) \approx \mathbf{a}_{\ell_{2}n}\left( k \right)$, we get that $e^{- i\omega_{k}{({\widetilde{\mathbf{\tau}}}_{\ell_{1}n} + {\overline{\tau}}_{\ell_{1}})}} \approx e^{- i\omega_{k}{({\widetilde{\mathbf{\tau}}}_{\ell_{2}n} + {\overline{\tau}}_{\ell_{2}})}}$ from ([A5](#FD60-sensors-18-00892){ref-type="disp-formula"}). In this case, the $\ell_{1}$-th column of matrix $\Gamma_{n}\left( k \right)$ is approximately equal to the $\ell_{2}$-th column. Because there are two columns in the matrix $\Gamma\left( k \right)$ that are almost equal, $\Gamma\left( k \right)$ is a near singular matrix. ☐
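To see numerically how a CSMC with nearly equal array responses degrades $\Gamma_{n}\left( k \right)$, the sketch below builds one block column by column as $e^{- i\omega_{k}{({\widetilde{\mathbf{\tau}}}_{\ell n} + {\overline{\tau}}_{\ell})}}$ and reports its smallest singular value; a value close to zero indicates two nearly identical columns and hence a near-singular block. All positions, the array layout and the wavelength are hypothetical choices for illustration only.

```python
import numpy as np

c = 3.0e8  # speed of light (m/s)

def delays(p_src, antennas):
    """Propagation delays from a point source to each of the M antennas."""
    return np.linalg.norm(antennas - p_src, axis=1) / c

def gamma_block(p_e, transponders, antennas, omega_k):
    """n-th block of Gamma(k); column l is exp(-i w_k (tau_tilde_ln + tau_bar_l))."""
    cols = []
    for p_t in transponders:
        tau_tilde = delays(p_t, antennas)              # transponder -> array antennas
        tau_bar = np.linalg.norm(p_e - p_t) / c        # candidate position -> transponder
        cols.append(np.exp(-1j * omega_k * (tau_tilde + tau_bar)))
    return np.column_stack(cols)                       # M x L

# Hypothetical setup: 2 transponders, a 4-antenna array whose aperture (2 m) is small
# compared with the wavelength (30 m), and a candidate on the D = 0 locus of Figure A1.
transponders = [np.array([0.0, 0.0]), np.array([40.0, 0.0])]
antennas = np.array([[19.0, 30.0], [19.7, 30.0], [20.3, 30.0], [21.0, 30.0]])
p_e = np.array([20.0, 10.0])
omega_k = 2 * np.pi * c / 30.0

G = gamma_block(p_e, transponders, antennas, omega_k)
print("smallest singular value of Gamma_n(k):", np.linalg.svd(G, compute_uv=False)[-1])
```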
Proof of Theorem 2:
Denote a specific value of $\mathbf{\alpha}$ by $\mathbf{\alpha}_{i}$. $\overline{\mathbf{f}}\left( k \right)$ and $\overline{\mathbf{C}}\left( k \right)$ are real-valued. The following inequality always holds: $$\parallel \overline{\mathbf{f}}\left( k \right)^{T}\mathbf{\alpha}{\lbrack\mathbf{\alpha}^{T}\overline{\mathbf{C}}\left( k \right)\mathbf{\alpha}\rbrack}^{- \frac{1}{2}} - \overline{\mathbf{f}}\left( k \right)^{T}\mathbf{\alpha}_{i}{\lbrack\mathbf{\alpha}_{i}^{T}\overline{\mathbf{C}}\left( k \right)\mathbf{\alpha}_{i}\rbrack}^{- 1}{\lbrack\mathbf{\alpha}^{T}\overline{\mathbf{C}}\left( k \right)\mathbf{\alpha}\rbrack}^{\frac{1}{2}} \parallel_{F}^{2} \geq 0.$$
After expanding the expression in ([A6](#FD61-sensors-18-00892){ref-type="disp-formula"}) and re-arranging the terms, $$\begin{array}{cl}
{\overline{\mathbf{f}}\left( k \right)^{T}\mathbf{\alpha}{\lbrack\mathbf{\alpha}^{T}\overline{\mathbf{C}}\left( k \right)\mathbf{\alpha}\rbrack}^{- 1}\mathbf{\alpha}^{T}\overline{\mathbf{f}}\left( k \right) \geq} & {2\{\overline{\mathbf{f}}\left( k \right)^{T}\mathbf{\alpha}{\lbrack\mathbf{\alpha}_{i}^{T}\overline{\mathbf{C}}\left( k \right)\mathbf{\alpha}_{i}\rbrack}^{- T}\mathbf{\alpha}_{i}^{T}\overline{\mathbf{f}}\left( k \right)\}} \\
& {- \overline{\mathbf{f}}\left( k \right)^{T}\mathbf{\alpha}_{i}{\lbrack\mathbf{\alpha}_{i}^{T}\overline{\mathbf{C}}\left( k \right)\mathbf{\alpha}_{i}\rbrack}^{- 1}{\lbrack\mathbf{\alpha}^{T}\overline{\mathbf{C}}\left( k \right)\mathbf{\alpha}\rbrack}{\lbrack\mathbf{\alpha}_{i}^{T}\overline{\mathbf{C}}\left( k \right)\mathbf{\alpha}_{i}\rbrack}^{- T}\mathbf{\alpha}_{i}^{T}\overline{\mathbf{f}}\left( k \right).} \\
\end{array}$$
Defining $\mathbf{w}\left( k \right) \triangleq \overline{\mathbf{f}}\left( k \right)^{T}\mathbf{\alpha}_{i}{\lbrack\mathbf{\alpha}_{i}^{T}\overline{\mathbf{C}}\left( k \right)\mathbf{\alpha}_{i}\rbrack}^{- 1}$, and summing (A7) over *k*, $$\begin{array}{cl}
{Q\left( \eta \right)} & {= H\left( \mathbf{\alpha} \right)} \\
& {= - \sum\limits_{k = 1}^{K}\overline{\mathbf{f}}\left( k \right)^{T}\mathbf{\alpha}{\lbrack\mathbf{\alpha}^{T}\overline{\mathbf{C}}\left( k \right)\mathbf{\alpha}\rbrack}^{- 1}\mathbf{\alpha}^{T}\overline{\mathbf{f}}\left( k \right)} \\
& {\leq - 2\sum\limits_{k = 1}^{K}{\{{\overline{\mathbf{f}}}^{T}\left( k \right)\mathbf{\alpha}\mathbf{w}^{T}\left( k \right)\}} + \sum\limits_{k = 1}^{K}\mathbf{w}\left( k \right){\lbrack\mathbf{\alpha}^{T}\overline{\mathbf{C}}\left( k \right)\mathbf{\alpha}\rbrack}\mathbf{w}^{T}\left( k \right).} \\
\end{array}$$
Notice that: $$\mathbf{w}\left( k \right)\mathbf{\alpha}^{T} = {\overline{\mathbf{\alpha}}}^{T}\mathbf{W}\left( k \right),$$ where $\overline{\mathbf{\alpha}}$, defined in ([38](#FD38-sensors-18-00892){ref-type="disp-formula"}), is a reshaped form of $\mathbf{\alpha}$, which is defined in ([5](#FD5-sensors-18-00892){ref-type="disp-formula"}), and: $$\mathbf{W}\left( k \right) \triangleq \mathbf{I}_{N} \otimes {diag}{\{\mathbf{w}\left( k \right)\}} \otimes \mathbf{I}_{L},$$ where $\mathbf{I}_{L}$ is an identity matrix of size $L \times L$ and $\mathbf{I}_{N}$ is an identity matrix of size $N \times N$. Substituting ([A9](#FD64-sensors-18-00892){ref-type="disp-formula"}) into (A8) yields: $$\begin{aligned}
{H\left( \overline{\mathbf{\alpha}} \right)} & {\leq - 2{\lbrack\sum\limits_{k = 1}^{K}{\overline{\mathbf{f}}}^{T}\left( k \right)\mathbf{W}^{T}\left( k \right)\rbrack}\overline{\mathbf{\alpha}} + {\overline{\mathbf{\alpha}}}^{T}{\lbrack\sum\limits_{k = 1}^{K}\mathbf{W}\left( k \right)\overline{\mathbf{C}}\left( k \right)\mathbf{W}^{T}\left( k \right)\rbrack}\overline{\mathbf{\alpha}}.} \\
\end{aligned}$$
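The Kronecker structure of $\mathbf{W}\left( k \right)$ defined above can be assembled directly with `numpy.kron`. The sketch below uses toy sizes only, since the actual dimensions of $\mathbf{w}\left( k \right)$, $N$ and $L$ are fixed by the paper's definitions.

```python
import numpy as np

def build_W(w_k, N, L):
    """W(k) = I_N (kron) diag{w(k)} (kron) I_L, as defined above."""
    return np.kron(np.kron(np.eye(N), np.diag(w_k)), np.eye(L))

# Toy example: a hypothetical w(k) of length 3, N = 2 receivers, L = 2 transponders.
w_k = np.array([0.7, -0.2, 1.1])
W = build_W(w_k, N=2, L=2)
print(W.shape)   # (2 * 3 * 2, 2 * 3 * 2) = (12, 12)
```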
The eigenvalue decomposition of the second term on the right-hand side is denoted by: $$\begin{array}{cl}
{\sum\limits_{k = 1}^{K}\mathbf{W}\left( k \right)\overline{\mathbf{C}}\left( k \right)\mathbf{W}^{T}\left( k \right)} & {= {\overline{\mathbf{U}}}^{T}\Sigma\overline{\mathbf{U}}} \\
& {= \mathbf{U}^{T}\mathbf{U},} \\
\end{array}$$ where $\mathbf{U} = \overline{\mathbf{U}}\Sigma^{\frac{1}{2}}$. The right-hand side of (A11) simplifies to: $$\begin{array}{l}
{- 2{\lbrack\sum\limits_{k = 1}^{K}\overline{\mathbf{f}}\left( k \right)^{T}\mathbf{W}\left( k \right)^{T}\rbrack}\overline{\mathbf{\alpha}} + {\overline{\mathbf{\alpha}}}^{T}{\lbrack\sum\limits_{k = 1}^{K}\mathbf{W}\left( k \right)\overline{\mathbf{C}}\left( k \right)\mathbf{W}\left( k \right)^{T}\rbrack}\overline{\mathbf{\alpha}}} \\
{= \parallel \mathbf{F}\mathbf{U}^{- 1} - {\overline{\mathbf{\alpha}}}^{T}\mathbf{U}^{T} \parallel^{2} - \mathbf{FU}^{- 1}\mathbf{U}^{- T}\mathbf{F}^{T}} \\
{\triangleq \parallel \mathbf{Y} - {\overline{\mathbf{\alpha}}}^{T}\mathbf{X} \parallel^{2} - \mathbf{Z},} \\
\end{array}$$ where: $$\begin{aligned}
\mathbf{F} & {\triangleq \sum\limits_{k = 1}^{K}\overline{\mathbf{f}}\left( k \right)^{T}\mathbf{W}\left( k \right)^{T},} \\
\mathbf{Y} & {\triangleq \mathbf{FU}^{- 1},} \\
\mathbf{X} & {\triangleq \mathbf{U}^{T},} \\
\mathbf{Z} & {\triangleq \mathbf{YY}^{T}.} \\
\end{aligned}$$
Denote: $$\begin{array}{r}
{G\left( \mathbf{\alpha}_{i},\overline{\mathbf{\alpha}},\mathbf{p}_{e} \right) \triangleq \parallel \mathbf{Y} - {\overline{\mathbf{\alpha}}}^{T}\mathbf{X} \parallel^{2} - \mathbf{Z}.} \\
\end{array}$$
For a given $\mathbf{\alpha}_{i}$, $G\left( \mathbf{\alpha}_{i},\overline{\mathbf{\alpha}},\mathbf{p}_{e} \right)$ is an upper bound on the cost function $Q\left( \eta \right)$. Minimizing the cost function defined in (A14) subject to ${\overline{\mathbf{\alpha}}} \geq 0$ is viewed as a relaxed program that replaces the original one: $${\overline{\mathbf{\alpha}}}_{i + 1} = \arg\min\limits_{\overline{\mathbf{\alpha}}}G\left( \mathbf{\alpha}_{i},\overline{\mathbf{\alpha}},\mathbf{p}_{e} \right) = \arg\min\limits_{\overline{\mathbf{\alpha}}}\left\{ \parallel \mathbf{Y} - {\overline{\mathbf{\alpha}}}^{T}\mathbf{X} \parallel^{2} - \mathbf{Z} \right\},$$ $$\begin{array}{r}
{s.t.\quad\overline{\mathbf{\alpha}} \geq 0.} \\
\end{array}$$
Since ${\overline{\mathbf{\alpha}}}_{i + 1}$ is the optimal solution of the relaxed program, $$Q\left( \mathbf{\eta}_{i + 1} \right) = H\left( {\overline{\mathbf{\alpha}}}_{i + 1} \right) \leq G\left( \mathbf{\alpha}_{i},{\overline{\mathbf{\alpha}}}_{i + 1},\mathbf{p}_{e} \right) \leq G\left( \mathbf{\alpha}_{i},{\overline{\mathbf{\alpha}}}_{i},\mathbf{p}_{e} \right) = H\left( {\overline{\mathbf{\alpha}}}_{i} \right) = Q\left( \mathbf{\eta}_{i} \right),$$ i.e., the cost function is non-increasing over the iterations. ☐
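Minimising $G$ over ${\overline{\mathbf{\alpha}}} \geq 0$ is, up to the constant $\mathbf{Z}$, a non-negative least-squares problem, so each outer iteration can reuse a standard solver. A minimal sketch, assuming $\mathbf{Y}$ (a row vector) and $\mathbf{X}$ have already been formed as above, is given below; the data here are random placeholders, not quantities from the paper.

```python
import numpy as np
from scipy.optimize import nnls

def solve_relaxed(Y, X):
    """
    Minimise || Y - alpha_bar^T X ||^2  subject to  alpha_bar >= 0.
    Y has shape (1, D) and X has shape (P, D); this is equivalent to the
    standard NNLS problem  min_{a >= 0} || X^T a - Y^T ||.
    """
    alpha_bar, _residual = nnls(X.T, Y.ravel())
    return alpha_bar

# Placeholder data, only to show the call and the non-negativity of the solution.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 6))
alpha_true = np.array([0.5, 0.0, 1.2, 0.3])
Y = (alpha_true @ X).reshape(1, -1)
print(solve_relaxed(Y, X))   # returns a non-negative alpha_bar close to alpha_true
```

Solving this subproblem for ${\overline{\mathbf{\alpha}}}_{i + 1}$ at each iteration is what yields the non-increasing sequence of cost values in the inequality chain above.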
![Multiple-path positioning problem with static transponders/receivers.](sensors-18-00892-g001){#sensors-18-00892-f001}
![Multiple-peak cost function of a frequency band signal and single peak cost function of a base band signal. (**a**) Cost function for a frequency band signal (**b**) Cost function for a base band signal.](sensors-18-00892-g002){#sensors-18-00892-f002}
![Layouts of the numerical examples. (**a**) Layout A (**b**) Layout B.](sensors-18-00892-g003){#sensors-18-00892-f003}
![Spatial spectrum of Signal Subspace Projection (SSP)-MUSIC and Noise Subspace Projection (NSP)-MUSIC in a single path scenario. (**a**) SSP-MUSIC (**b**) NSP-MUSIC.](sensors-18-00892-g004){#sensors-18-00892-f004}
![Spatial spectrum in baseband signal positioning. (**a**) Spatial spectrum of SSP-MUSIC and NSP-MUSIC (**b**) Spatial spectrum of Multi-path Propagation (MP)-MUSIC.](sensors-18-00892-g005){#sensors-18-00892-f005}
![Spatial spectrum when a transponder is close to another. (**a**) Spatial spectrum of SSP-MUSIC and NSP-MUSIC (**b**) Spatial spectrum of MP-MUSIC.](sensors-18-00892-g006){#sensors-18-00892-f006}
![Spatial spectrum for a single antenna of each receiving array. (**a**) Spatial spectrum of SSP-MUSIC and NSP-MUSIC (**b**) Spatial spectrum of MP-MUSIC.](sensors-18-00892-g007){#sensors-18-00892-f007}
![Spatial spectrum in a general scenario. (**a**) Spatial spectrum of SSP-MUSIC and NSP-MUSIC (**b**) Spatial spectrum of MP-MUSIC.](sensors-18-00892-g008){#sensors-18-00892-f008}
![Performance of MP-MUSIC-Active Set Algorithm (ASA) and MP-MUSIC-Interior Point Algorithm (IPA).](sensors-18-00892-g009){#sensors-18-00892-f009}
![Performances of MP-MUSIC and MP-ML ($K = 16,J = 1$).](sensors-18-00892-g010){#sensors-18-00892-f010}
![Performance of MP-ML and MP-MUSIC with different $J,K$ combinations.](sensors-18-00892-g011){#sensors-18-00892-f011}
![Performances of MP-MUSIC and MP-ML with different numbers of snapshots.](sensors-18-00892-g012){#sensors-18-00892-f012}
![Time consumptions and RMSE of different numbers of emitters. (**a**) Time consumptions of MP-MUSIC and MP-ML (**b**) RMSE of MP-MUSIC and MP-ML.](sensors-18-00892-g013){#sensors-18-00892-f013}
![Positioning accuracies and time consumptions of MP-ML.](sensors-18-00892-g014){#sensors-18-00892-f014}
![MP-ML and CRLB of different numbers of receivers ($K = 1024$).](sensors-18-00892-g015){#sensors-18-00892-f015}
![MP-ML and CRLB of different numbers of emitters ($K = 1024,J = 1$).](sensors-18-00892-g016){#sensors-18-00892-f016}
sensors-18-00892-t001_Table 1
######
Parameter setting in the numerical simulations.
Description Layout *R* (m) $\mathbf{\mathbf{\lambda}}$ (m)/*f* (MHz) *M* *B* (kHz) *K* *J* SNR (dB)
--------------------------------- -------- --------- ------------------------------------------- ----- ----------- ----- ----- ----------
Baseband signal positioning A 30 *1157.5/0.26* 11 8 64 100 10
Transponders are close *B* 30 30/10 11 8 64 100 10
Single antenna of each receiver A 30 30/10 *1* 8 64 100 10
Standard scenario A 30 30/10 11 8 64 100 10
sensors-18-00892-t002_Table 2
######
Parameter setting in MP-MUSIC simulations.
Description Layout *R* (m) $\mathbf{\mathbf{\lambda}}$ (m)/*f* (MHz) *M* *B* (kHz) *K* *J* SNR (dB)
----------------------- -------- --------- ------------------------------------------- ----- ----------- ----- ----- ----------------
RMSE of MUSIC methods A 30 1157.5/0.26 11 8 128 100 $- 15 \sim 15$
sensors-18-00892-t003_Table 3
######
Parameter setting in insufficient snapshot scenarios.
Description Layout *R* (m) $\mathbf{\mathbf{\lambda}}$ (m)/*f* (MHz) *M* *B* (kHz) *K* *J* SNR (dB)
------------- -------- --------- ------------------------------------------- ----- ----------- ----- ----- ----------------
*MP-MUSIC* A 30 1157.5/0.26 11 8 16 1 $- 15 \sim 15$
*MP-ML* A 30 1157.5/0.26 11 8 16 1 $- 15 \sim 15$
*CRLB* A 30 1157.5/0.26 11 8 16 1 $- 15 \sim 15$
sensors-18-00892-t004_Table 4
######
Parameter setting in MP-MUSIC with different $J,K$ combinations.
Description Layout *R* (m) $\mathbf{\mathbf{\lambda}}$ (m)/*f* (MHz) *M* *B* (kHz) *K* *J* SNR (dB)
-------------------------------- -------- --------- ------------------------------------------- ----- ----------- ------- ------ -------------
MP-MUSIC $J,K$ Combination I A 30 1157.5/0.26 11 8 *32* *4* $0 \sim 30$
MP-MUSIC $J,K$ Combination II A 30 1157.5/0.26 11 8 *16* *8* $0 \sim 30$
MP-MUSIC $J,K$ Combination III A 30 1157.5/0.26 11 8 *8* *16* $0 \sim 30$
MP-MUSIC $J,K$ Combination IV A 30 1157.5/0.26 11 8 *4* *32* $0 \sim 30$
MP-ML A 30 1157.5/0.26 11 8 *128* *1* $0 \sim 30$
CRLB A 30 1157.5/0.26 11 8 *128* *1* $0 \sim 30$
sensors-18-00892-t005_Table 5
######
Parameter setting in simulations of different numbers of snapshots.
Description Layout *R* (m) $\mathbf{\mathbf{\lambda}}$ (m)/*f* (MHz) *M* *B* (kHz) $\mathbf{\mathbf{K} \cdot \mathbf{J}}$ SNR (dB)
------------- -------- --------- ------------------------------------------- ----- ----------- --------------------------------------------- ----------
*MP-MUSIC* A 30 1157.5/0.26 11 8 $2^{i}$, $\left( i = 4,5,\cdots,13 \right)$ 10
*MP-ML* A 30 1157.5/0.26 11 8 $2^{i}$, $\left( i = 4,5,\cdots,13 \right)$ 10
*CRLB* A 30 1157.5/0.26 11 8 $2^{i}$, $\left( i = 4,5,\cdots,13 \right)$ 10
sensors-18-00892-t006_Table 6
######
Parameter setting in simulations of time consumption.
Description Layout *d* *R* (m) $\mathbf{\mathbf{\lambda}}$ (m)/*f* (MHz) *M* *B* (kHz) SNR *K* *J*
----------------------- -------- ------------ --------- ------------------------------------------- ----- ----------- ----- ------- -----
MP-ML ($KJ = 16$) A $1 \sim 4$ 30 1157.5/0.26 11 8 15 *16* *1*
MP-MUSIC ($KJ = 16$) A $1 \sim 4$ 30 1157.5/0.26 11 8 15 *16* *1*
MP-ML ($KJ = 128$) A $1 \sim 4$ 30 1157.5/0.26 11 8 15 *128* *1*
MP-MUSIC ($KJ = 128$) A $1 \sim 4$ 30 1157.5/0.26 11 8 15 *16* *8*
sensors-18-00892-t007_Table 7
######
Parameter setting in simulations of different numbers of receivers. SGP, Single Platform Geolocation.
Description Number of Receivers *R* (m) $\mathbf{\mathbf{\lambda}}$ (m)/*f* (MHz) *M* *B* (kHz) *K* SNR (dB)
------------- --------------------- --------- ------------------------------------------- ----- ----------- ----- ----------
MP-ML *2*∼*4* 30 1157.5/0.26 11 8 128 10
CRLB *1*∼*4* 30 1157.5/0.26 11 8 128 10
SGP *1* 30 1157.5/0.26 11 8 128 10
The background description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.
Air is drawn into an engine through an intake manifold. A throttle valve controls airflow into the engine. The air mixes with fuel from one or more fuel injectors to form an air/fuel mixture. The air/fuel mixture is combusted within one or more cylinders of the engine.
Combustion of the air/fuel mixture produces torque and exhaust gas. Torque is generated via heat release and expansion during combustion of the air/fuel mixture. The engine transfers torque to a transmission via a crankshaft, and the transmission transfers torque to one or more wheels via a driveline. The exhaust gas is expelled from the cylinders to an exhaust system.
An engine control module (ECM) controls the torque output of the engine. The ECM may control the torque output of the engine based on driver inputs and/or other suitable inputs. The driver inputs may include, for example, accelerator pedal position, brake pedal position, and/or one or more other suitable driver inputs.