\section{Introduction}
Supersymmetric lattice models have been studied starting from the $\mathcal{N}=1$ supersymmetry in the tricritical Ising model \cite{FriadenQiuShenker, KastorMartinecShenker, Qiu} and the fully frustrated XY-model \cite{Foda}. In the introduction of \cite{SaleurWarner} the interested reader can find further references to early supersymmetric lattice models.
Our work goes back to \cite{FendleySchoutensBoer}, where a certain one-dimensional fermionic lattice model was constructed based on two supercharges (generators of supersymmetry), $Q$ and its hermitian conjugate $Q^\dagger$. An alternative, more general approach is to work with $Q_1 = Q+Q^\dagger$ and $Q_2 ={\rm i}( Q-Q^\dagger )$, which naturally admits an extension of the number of generators, $\mathcal{N}$, which in this case is $\mathcal{N}=2$.
Supersymmetry is based on the property that $Q^2=0$ (in the general approach $Q_j^2 = Q_k^2$), which has many consequences for the spectrum of the Hamiltonian, defined as $H = \{Q,Q^\dagger\}$. In particular, there are no negative energy states, and all positive energy states come in doublets with the same eigenvalue, but differing in fermion number by one.
States with zero energy, the lowest possible, can be highly degenerate, but form singlets with respect to supersymmetry.
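These spectral statements follow in one line from the algebra: for any state $\ket{\psi}$,
\begin{equation}
\bra{\psi} H \ket{\psi} = \bra{\psi}\left( Q Q^\dagger + Q^\dagger Q\right)\ket{\psi} = \| Q^\dagger \ket{\psi} \|^2 + \| Q \ket{\psi}\|^2 \ge 0,
\end{equation}
so zero-energy states are precisely those annihilated by both $Q$ and $Q^\dagger$, while for an eigenstate $\ket{\psi}$ with $H\ket{\psi}=E\ket{\psi}$, $E>0$ and $Q\ket{\psi}\neq 0$, the state $Q\ket{\psi}$ is a degenerate partner with one more fermion, and $Q^2=0$ guarantees that the construction stops there, producing doublets.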
By the specific choice of $Q$, the model in \cite{FendleySchoutensBoer} has a repulsive hardcore potential between the fermions, i.e. two neighbouring particles have to be separated by at least one empty site. In \cite{FendleyNienhuisSchoutens} this work was continued by generalizing the interaction: not single fermions, but strings of up to $k$ fermions have to be separated by an empty site.
We modify the supersymmetric model of
\cite{FendleySchoutensBoer} in a different way: we restore the
particle--hole symmetry by symmetrizing the building elements of the model, $Q$ and $Q^{\dagger}$. In the original definition
$Q$ creates a solitary fermion: a fermion is created only on sites of which the neighbours are empty. We simply symmetrize this action with respect to the particle--hole symmetry. As an additional term in $Q$, we introduce the
operator that creates a ``solitary hole'', i.e. annihilates a fermion whose
neighbouring sites are both occupied.
Because in the original model the particle--hole symmetry is strongly broken by the hardcore repulsion between fermions,
this modification changes the nature of the model to a large
extent. For example, the fermion number is no longer conserved.
Instead, we identify domain walls with Majorana-like properties as the
conserved objects.
But most surprisingly, this modification of $Q$ does not violate supersymmetry, i.e. the property that $Q^2=0$.
On the contrary, the high degeneracy of the zero energy state, a common
feature of supersymmetric models \cite{HS10,MSP03,FS05,E05}, is no longer limited to the ground
state, but in
this model all the eigenvalues exhibit an {\em extensive} degeneracy. While in the original model all energy levels are two-fold degenerate, in the model investigated here, the degeneracy is a power of two with an exponent growing linearly in the system size.
This property suggests a much higher symmetry and, because this symmetry
is not at all evident, it was our prime motivation to study the model.
We investigate the reason for the large degeneracy, and provide an answer in terms of symmetries and zero energy Cooper-pair like excitations as in \cite{FS}. We further show that the system's energy gap scales as $\sim 1/L^2$, which is usually associated with classical diffusive modes.
The paper is organized as follows. In Section \ref{sec:ModelDef} we define the model and introduce the most important operators and notation. The model turns out to be solvable by nested Bethe ansatz \cite{Baxter70}. In Section~\ref{sec:Symmetries} we present the Bethe equations without derivation, the large degeneracy of the model, and the associated symmetry operators. In Section \ref{sec:Consequences} we provide some detailed examples of consequences of the symmetry operators, and in Section \ref{sec:BetheAnsatz} we derive the Bethe ansatz equations for the model. Our approach is pedagogical: we treat gradually more complicated cases and derive the Bethe equations for each.
\subsection{Model definition}
\label{sec:ModelDef}
In this section we define our model. We introduce the usual fermionic operators, the fermionic Fock space on the lattice, and the supersymmetry generators $Q$ and $Q^{\dagger}$. The Hamiltonian is defined in terms of these generators. As we will see, the fermion number is not a conserved quantity, so we introduce another notation, based on different conserved quantities (domain walls), useful for solving the model with Bethe ansatz.
We define the usual fermionic creation and annihilation operators $c_j$ and $c_j^\dagger$, acting on site $j$, satisfying the anti-commutation relations
\begin{equation}
\{c_i^\dagger,c_j\}=\delta_{ij},\qquad \{c_i^\dagger,c_j^\dagger\}=\{c_i,c_j\}=0.
\end{equation}
The on-site fermion-number and hole-number operators are defined as
\begin{equation}
n_i = c_i^\dagger c_i, \qquad p_i=1-n_i.
\end{equation}
The number operator, $\mathcal{N}_i$, of fermions on positions $1$ to $i$ and the total fermion number operator, $\mathcal{N}_{\rm F}$, are defined as
\begin{equation}
\mathcal{N}_i = \sum_{j=1}^i n_j, \qquad \mathcal{N}_{\rm F}:=\mathcal{N}_L.
\end{equation}
These operators act in a fermionic Fock space spanned by ket vectors of the form
\begin{equation}
\ket{\mathbf{\tau}} = \prod_{i=1}^L \left(c_i^\dagger\right)^{\tau_i} \ket{\mathbf{0}},
\end{equation}
where the product is ordered and $\ket{\mathbf{0}}$ is the vacuum state defined by $c_i \ket{\mathbf{0}} =0$ for $i=1,\ldots, L$. The label $\mathbf{\tau}=\{\tau_1,\ldots,\tau_L\}$ encodes the occupations: $\tau_i=1$ if there is a fermion at site $i$ and $\tau_i=0$ if there is a hole. Hence we have
\begin{equation}
n_i \ket{\mathbf{\tau}}= \tau_i \ket{\mathbf{\tau}}.
\end{equation}
We consider a one-dimensional supersymmetric lattice model analogous to
\cite{FendleySchoutensBoer}, but satisfying fermion-hole symmetry. For that purpose define the operators $d_i^\dagger$ and $e_i$ by
\begin{equation}
d_i^\dagger = p_{i-1}c_i^\dagger p_{i+1},\qquad e_i = n_{i-1}c_i n_{i+1}.
\end{equation}
Hence $d^\dagger_i$ creates a fermion at position $i$ provided all three of the positions $i-1$, $i$ and $i+1$ are empty. Similarly, $e_i$ annihilates a fermion at position $i$ provided $i$ and its neighbouring sites are occupied, i.e.
\begin{align}
d^\dagger_i \ket{\tau_1\ldots \tau_{i-2}\, 000\,\tau_{i+2}\ldots\tau_L} &= (-1)^{\mathcal{N}_{i-1}}\ket{\tau_1\ldots\tau_{i-2}\, 010\,\tau_{i+2}\ldots\tau_L},\\
e_i \ket{\tau_1\ldots\tau_{i-2}\, 111\,\tau_{i+2}\ldots\tau_L} &= (-1)^{\mathcal{N}_{i-1}}\ket{\tau_1\ldots\tau_{i-2}\, 101\,\tau_{i+2}\ldots\tau_L},
\end{align}
while these operators kill all other states.
We now define a supersymmetric Hamiltonian $H$ for fermions on a chain, by
\begin{equation}
H=\{Q^\dagger,Q\},\qquad Q=\sum_{i=1}^L (d_i^\dagger + e_i),\qquad Q^2=0.
\end{equation}
This is a simple variation of the supercharge considered in
\cite{FendleySchoutensBoer}, $Q=\sum_{i=1}^L d_i^{\dagger}$, by adding to it
the fermion--hole symmetric partner of $d_i^{\dagger}$ thus restoring that symmetry.
It is unexpected that this variation respects the requirement $Q^2=0$ of supersymmetry.
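This can be confirmed by brute force on small chains. The following is a minimal numerical sketch (our own construction, not part of the model definition), assuming a Jordan--Wigner representation of the $c_i$ and $L=6$ with periodic boundary conditions:
\begin{verbatim}
import numpy as np
from functools import reduce

L = 6                                   # small chain; cost grows as 4^L
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])                # Jordan-Wigner string factor
a = np.array([[0.0, 1.0],
              [0.0, 0.0]])              # on-site annihilator, a|1> = |0>

def c(j):
    # fermionic annihilation operator on site j (0-based), with JW string
    return reduce(np.kron, [Z]*j + [a] + [I2]*(L - j - 1))

cs = [c(j) for j in range(L)]
n  = [cj.conj().T @ cj for cj in cs]    # number operators
p  = [np.eye(2**L) - nj for nj in n]    # hole operators

def d_dag(i):                           # solitary-fermion creator
    return p[(i-1) % L] @ cs[i].conj().T @ p[(i+1) % L]

def e(i):                               # solitary-hole creator
    return n[(i-1) % L] @ cs[i] @ n[(i+1) % L]

Q = sum(d_dag(i) + e(i) for i in range(L))
H = Q @ Q.conj().T + Q.conj().T @ Q

print(np.allclose(Q @ Q, 0))            # expect True: Q^2 = 0 survives
\end{verbatim}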
The Hamiltonian splits up naturally as a sum of three terms, the first term consists solely of $d$-type operators, the second solely of $e$-type operators and the third contains mixed terms.
\begin{equation}
H=H_I +H_{II}+H_{III},
\label{eq:modeldef}
\end{equation}
\begin{align}
H_I &= \sum_i \left( d_i^\dagger d_i + d_i d_i^\dagger + d_{i}^\dagger d_{i+1} + d_{i+1}^\dagger d_i\right),\nonumber\\
H_{II} &= \sum_i \left( e_i e_i^\dagger + e_i^\dagger e_i + e_i e_{i+1}^\dagger + e_{i+1} e_{i}^\dagger \right),\\
H_{III} &= \sum_i \left( e_i^\dagger d_{i+1}^\dagger\ + d_{i+1} e_i + e_{i+1}^\dagger d_i^\dagger + d_i e_{i+1}\right), \nonumber
\end{align}
where we use periodic boundary conditions
$
c_{i+L}^\dagger = c_{i}^\dagger.
$
Because the $d$ and $e$ are not simple fermion operators, they do not satisfy the canonical anticommutation relations. As a result this bilinear Hamiltonian cannot be diagonalized by taking linear combinations of $d$, $e$, $d^\dagger$ and $e^\dagger$.
The Hamiltonian $H_I$ was considered in \cite{FendleySchoutensBoer} and is obtained when operators $e_i^\dagger$ and $e_i$ are not included in $Q$. In this case the model is equivalent to the integrable spin-1/2 quantum XXZ spin chain with $\Delta=-1/2$ and with variable length. The groundstate of this model exhibits interesting combinatorial properties.
The addition of the operator $e_i$ adds an obvious `fermion-hole' symmetry $d_i^\dagger \leftrightarrow e_i$ to the model, which was our original motivation. As we will see, this symmetry results in a surprisingly large degeneracy across the full spectrum of $H$. Moreover, the new model \eqref{eq:modeldef} unexpectedly turns out to be integrable, as we will show below.
Note that the Hamiltonians $H_I$ and $H_{II}$ each contain only number operators and hopping terms and thus conserve the total number of fermions. The third Hamiltonian $H_{III}$ breaks this conservation law. For example, the term $e_i^\dagger d_{i+1}^\dagger$ sends the state $\ket{\ldots 1000\ldots}$ to $\ket{\ldots 1110\ldots}$, thus creating two fermions. Hence the fermion number is not conserved and not a good quantum number. However, the number of interfaces or domain walls between fermions and holes is conserved, and we shall therefore describe our states in terms of these.
\subsection{Domain walls}
\label{se:domainwalls}
We call an interface between a string of 0's followed by a string of 1's a 01-domain wall, and a string of 1's followed by a string of 0's, a 10-domain wall. For example, the following configuration contains six domain walls (we consider periodic boundary conditions), three of each type and starting with a 01-domain wall,
\[
000\Big| 11\Big| 000\Big| 1\Big| 0000\Big| 111\Big|
\]
Let us consider the effect of the various terms appearing in \eqref{eq:modeldef}. As already discussed in an example above, the terms in $H_{III}$ correspond to hopping of domain walls and map between the following states
\begin{equation}
\ket{\ldots 1\Big|000\ldots} \leftrightarrow \ket{\ldots 111\Big|0\ldots},\qquad \ket{\ldots 0\Big|111\ldots} \leftrightarrow -\ket{\ldots 000\Big|1\ldots},
\label{eq:process1}
\end{equation}
where the minus sign in the second case arises because of the fermionic nature of the model. Hopping of a domain wall always takes place in steps of two, so parity of position is conserved. Aside from their diagonal terms, $H_I$ and $H_{II}$ correspond to hopping of single fermions or holes, and therefore to hopping of \textit{pairs} of domain walls. They give rise to transitions between the states
\begin{equation}
\ket{\ldots 0\Big|1\Big|00\ldots} \leftrightarrow \ket{\ldots 00\Big|1\Big|0\ldots},\qquad \ket{\ldots 1\Big|0\Big|11\ldots} \leftrightarrow -\ket{\ldots 11\Big|0\Big|1\ldots}.
\label{eq:oddprocess}
\end{equation}
Note that in these processes the total parity of positions of interfaces is again conserved, i.e. all processes in $H$ conserve the number of domain walls at even and odd positions separately.
Finally, the diagonal term $\sum_i (d_i^\dagger d_i + d_i d_i^\dagger + e_i^\dagger e_i + e_i e_i^\dagger)$ in $H_{I}$ and $H_{II}$ counts the number of $010$, $000$, $111$ and $101$ configurations. In other words it counts the number of pairs of second neighbour sites that are both empty or both occupied,
\begin{equation}
\sum_i (d_i^\dagger d_i + d_i d_i^\dagger + e_i^\dagger e_i + e_i e_i^\dagger) = \sum_i (p_{i-1}p_{i+1} + n_{i-1}n_{i+1}).
\end{equation}
This equals the total number of sites minus twice the number of domain walls that do not separate a single fermion or hole, i.e. minus twice the number of well separated domain walls.
Since the number of odd and even domain walls is conserved, the Hilbert space naturally breaks up into sectors labelled by $(m, k)$, where $m$ is the total number of domain walls, and $k$ the number of odd domain walls.
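For concreteness, a small sketch of this bookkeeping (our own convention: a wall sits at position $x$ when sites $x-1$ and $x$, counted periodically, carry different occupations):
\begin{verbatim}
def sector(tau):
    # tau: list of 0/1 occupations of sites 1..L, periodic boundary
    L = len(tau)
    walls = [x for x in range(1, L + 1) if tau[x - 1] != tau[x - 2]]
    m = len(walls)                      # total number of domain walls
    k = sum(x % 2 for x in walls)       # walls at odd positions
    return m, k

print(sector([0, 0, 0, 1, 1, 0]))      # (2, 0): walls at positions 4 and 6
\end{verbatim}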
\section{Symmetries}
\label {sec:Symmetries}
The most remarkable feature of the model introduced in Section~\ref{sec:ModelDef} is the high degeneracy not only of the ground state, but of all the eigenvalues of the Hamiltonian. The number of different eigenvalues and the typical degeneracy both grow exponentially with the system size. Aside from some staggering with the system size modulo 4, the growth rates of the degeneracy and of the number of levels appear similar.
In this section we show that the model possesses symmetries which explain the large degeneracy of the energy levels. Fermions and holes are treated on the same footing, and consequently the model is symmetric under the exchange of fermions and holes. Even though the number of fermions is not conserved, the fermion number can only change by two, so the parity of the number of fermions is conserved. The model is also invariant under the exchange of domain walls with non-domain walls. This symmetry interchanges the off-diagonal terms of $H_I$ and $H_{II}$ with $H_{III}$. Below we will describe further symmetries, first those that we can describe by simple real-space operators.
In addition to these, the model possesses a symmetry in momentum space due to the possibility of creating and removing zero mode Cooper pairs. This symmetry leads to an extensive degeneracy of the ground state and of other eigenstates.
As an indication of the high degeneracy, we list for system sizes $L$ up to 12 the number of zero-energy groundstates $G$ and the number of distinct energy levels $\ell$, see Table~\ref{tab:degeneracy}. As the model respects particle--hole symmetry, it makes sense to consider, besides periodic, also antiperiodic boundary conditions, defined by $c_j=c^{\dagger}_{L+j}$. We give the results for this boundary condition as well, because the two lists together give a better idea of the growth of these numbers.
While the mean degeneracy can be inferred from the number of energy levels, we remark that almost all degeneracies that we see are powers of two. All this seems to indicate a high symmetry, which this paper aims to explain.
\begin{table}[h]
\begin{center}
\begin{tabular}{r|rr|rr}
& \multicolumn{2}{c|}{periodic}& \multicolumn{2}{c}{antiperiodic}\\
\hline
$L$ & $G$ & $\ell$& $G$ & $\ell$\\ \hline
4 & 8 & 2 & 4 & 3 \\
5 & 8 & 6 & 8 & 5 \\
6 & 0 & 4 & 16 & 4 \\
7 & 16 & 15 & 16 & 14 \\
8 & 32 & 7 & 16 & 20 \\
9 & 32 & 54 & 32 & 54 \\
10 & 0 & 46 & 64 & 94 \\
11 & 64 & 204 & 64 & 210 \\
12 & 128 & 80 & 64 & 201
\end{tabular}
\caption{The number $G$ of zero-energy groundstates and the number $\ell$ of distinct energy levels, for periodic and antiperiodic boundary conditions.}
\label{tab:degeneracy}
\end{center}
\end{table}
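The entries of Table~\ref{tab:degeneracy} are straightforward to reproduce numerically; a sketch, reusing the matrix $H$ constructed in the snippet of Section~\ref{sec:ModelDef} (reading $G$ as the number of exact zero modes, with a rounding tolerance of our choosing):
\begin{verbatim}
evals = np.linalg.eigvalsh(H)
G = int(np.sum(np.abs(evals) < 1e-8))      # zero-energy states
levels = len(set(np.round(evals, 6)))      # distinct energy levels
print(G, levels)                           # L = 6 periodic: 0 and 4
\end{verbatim}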
\subsubsection*{Supersymmetry}
Obviously, by construction the supersymmetry generators commute with the Hamiltonian,
\begin{align}
[ H,\, Q ]= 0, \qquad [ H,\, Q^\dagger ] = 0.
\end{align}
The supercharges $Q$ and $Q^\dagger$ are operators that add or remove a fermion, which means that they add or remove two neighbouring domain walls, one even and one odd, respectively, i.e.
\begin{equation}
Q:\ (m,k)\mapsto (m+2,k+1),\qquad Q^\dag:\ (m,k)\mapsto (m-2,k-1),
\end{equation}
where $(m,k)$ denotes the sector with $m$ domain walls of which $k$ are odd.
\subsubsection*{Domain wall number conservation and translational symmetry}
Two obvious symmetries are the conservation of the total number of domain walls and, for even system sizes with periodic boundary conditions, translational symmetry. The domain wall number operator $\mathcal{D}$ commutes with $H$, $[H,\mathcal{D}]=0$, and so does the translation operator $T$.
\subsubsection*{Particle parity symmetry}
The total number of fermions is not conserved, as $H_{III}$ changes the fermion number. We denote the fermion parity operator by $P$, which acts on pure states $\ket{\tau_1,\ldots,\tau_L}$ as
\begin{equation}
P \ket{\tau_1,\ldots,\tau_L} = (-1)^{\mathcal{N}_L} \ket{\tau_1,\ldots,\tau_L} .
\end{equation}
Since $Q$ and $Q^\dag$ change parity the supersymmetry generators anti-commute with $P$,
\begin{equation}
\{Q,P\}=\{Q^\dag,P\}=0,
\end{equation}
from which it is simple to show that $[H,P]=0$.
\subsubsection*{Particle -- hole symmetry}
Introduce the operator
\begin{equation}
\Gamma= \prod_{i=1}^L \gamma_i, \qquad \gamma_i=c_i + c_i^\dagger,
\end{equation}
in terms of the Majorana fermions $\gamma_i$. This operator acts on a fermionic state $\ket{\tau}$ by exchanging the holes and fermions, and it is easy to see that this is a symmetry of the model:
\begin{equation}
[ H,\, \Gamma ] = 0.
\end{equation}
In fact one can show that $\Gamma$ either commutes or anti-commutes with the supersymmetry generators
\begin{equation}
Q\Gamma+(-1)^L \Gamma Q=0,\qquad Q^\dag \Gamma +(-1)^L \Gamma Q^\dag=0.
\end{equation}
\subsubsection*{Domain wall -- non-domain wall symmetry}
For even system sizes it is not hard to see that we can expect a domain wall (DW) -- non-domain wall (nonDW) symmetry. The processes described in \eqref{eq:process1} and \eqref{eq:oddprocess} are interpreted as the movement of a single DW or a bound double DW, but equivalently they can be interpreted as the movement of a bound double nonDW and a single nonDW, respectively. The DW-nonDW exchange operator can be written as
\begin{equation}
E = \prod_{i=1}^{L/2} (c_{2i}^{\vphantom{\dagger}} - c_{2i}^\dagger),
\end{equation}
which satisfies the commutation relations
\begin{align}
EQ &= Q^\dagger E, \qquad EQ^\dagger = QE, \qquad EH = HE.
\end{align}
%
The DW -- nonDW symmetry interchanges the sectors $(m,k)$ with $(L-m,L/2-m+k)$.
\subsubsection*{Shift symmetry}
\label{sec:Shift}
There is a further symmetry, defined by the operator $S$:
\begin{equation}
S=\sum_{i=1}^L n_{i-1} \gamma_i p_{i+1} + p_{i-1} \gamma_i n_{i+1},\qquad \gamma_i=c_i + c_i^\dagger.
\end{equation}
The operator $S$ shifts one of the domain walls either to the left or to the right by one. It is easy to see from the definition that $S$ is self-adjoint; in fact, each summand is self-adjoint. By explicit computation, we can show that $S$ anticommutes with $Q$ and $Q^\dagger$,
\begin{equation}
\{Q,\,S\}=0,\quad \{Q^\dagger,\,S\}=0,\qquad [H,S]=0.
\end{equation}
This defines a symmetry of the model which relates the sector $(m,k)$ with the sectors $(m,k\pm 1)$.
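These relations are again easy to confirm by machine; a sketch reusing the matrices of the Section~\ref{sec:ModelDef} snippet, with the particle--hole check for $\Gamma$ included as well:
\begin{verbatim}
gam = [cs[i] + cs[i].conj().T for i in range(L)]    # Majorana operators
Gamma = reduce(lambda A, B: A @ B, gam)             # particle-hole operator
print(np.allclose(H @ Gamma - Gamma @ H, 0))        # [H, Gamma] = 0

S = sum(n[(i-1) % L] @ gam[i] @ p[(i+1) % L]
        + p[(i-1) % L] @ gam[i] @ n[(i+1) % L] for i in range(L))
print(np.allclose(Q @ S + S @ Q, 0))                # {Q, S} = 0
print(np.allclose(H @ S - S @ H, 0))                # [H, S] = 0
\end{verbatim}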
\subsubsection*{Reflection symmetry of the spectrum for $L=4n$}
It is easy to prove that for $L=4n, \, n \in \mathbb{N}$, the groundstate energy is $\Lambda_{0}=0$ and the highest energy level is given by $\Lambda_{\text{max}}=L$. We have observed that the spectrum has a reflection symmetry, i.e., if there is an energy level with energy $\Lambda=L-\Delta \Lambda$, then there is one with $\widetilde{\Lambda}= \Delta \Lambda$. The degeneracy of these two mirrored levels is the same. The two energy levels are related by an operator defined in the following way. Let
\begin{equation}
\delta_j ={\rm i}\, (c_j-c_j^\dag),\qquad \delta_j^\dag = \delta_j.
\end{equation}
Then define
\begin{equation}
M = \prod_{i=0}^{n-1} \delta_{4i+1} \delta_{4i+2} = (-1)^{n} \prod_{i=0}^{n-1} (c_{4i+1} - c_{4i+1}^\dagger ) (c_{4i+2} - c_{4i+2}^\dagger ).
\end{equation}
The operator $M$ is (anti)hermitian depending on the parity of $n$, and squares to a multiple of the identity,
\begin{equation}
M^{\dagger} = (-1)^n M, \qquad M^2 = (-1)^n \mathbb{I}.
\end{equation}
The mirroring property is encoded in $M$ in the following way,
\begin{equation}
M (L \mathbb{I} - H) = H M,
\end{equation}
which means that every eigenvector has a mirrored partner,
\begin{equation}
H\ket{\Psi} = \Lambda \ket{\Psi} \Leftrightarrow H M \ket{\Psi} = (L-\Lambda)M \ket{\Psi}.
\end{equation}
A good example of this pairing is the pseudo-vacuum $\ket{000 \ldots 0}$. This state maps into a half-filled true groundstate, i.e. into $\pm \ket{110011001 \ldots 100}$ (where the sign depends on $n$).
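The mirroring relation, too, can be checked numerically for the smallest case $L=8$ ($n=2$); a sketch, assuming the construction of Section~\ref{sec:ModelDef} has been rebuilt with \texttt{L = 8}:
\begin{verbatim}
delta = [1j * (cs[i] - cs[i].conj().T) for i in range(L)]
# M = delta_1 delta_2 delta_5 delta_6 (1-based sites 4i+1, 4i+2)
M = reduce(lambda A, B: A @ B,
           [delta[4*i] @ delta[4*i + 1] for i in range(L // 4)])
print(np.allclose(M @ (L*np.eye(2**L) - H), H @ M))  # mirroring relation
\end{verbatim}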
\subsubsection*{Antiperiodic boundary conditions and reflection symmetry of the spectrum for $L=4n-2$}
The reflection symmetry can be extended to antiperiodic boundary conditions, and for $L=4n-2$ systems we can relate the antiperiodic spectrum to the periodic one by the same mirroring.
Introduce antiperiodic boundary conditions, which we will use only in this section:
\begin{equation}
c_{i+L}^\dagger = c_i.
\end{equation}
This modifies the Hamiltonian, which we will denote by $H_{ap}$. The antiperiodic Hamiltonian's spectrum has the same reflection symmetry as the periodic one for $L=4n$. The definition of $M$ is independent of the boundary condition, so we can write
\begin{equation}
M (L \mathbb{I} - H^{(L=4n)}_{ap}) = H^{(L=4n)}_{ap} M,
\end{equation}
where for clarity we emphasized the system size $L=4n$.
We have observed that for $L=4n-2$ the periodic and antiperiodic spectra are related by the same reflection, i.e. if there is a state of $H_{ap}$ with energy $\Lambda_{ap}$, there is a corresponding state of $H$ with energy $L-\Lambda_{ap}$. The largest energy for $H$ is $\Lambda_{p,\text{max}} = L$, corresponding to the antiperiodic groundstate with $\Lambda_{ap,\text{GS}} = 0$. This reflection is realized by the following operator equation:
\begin{equation}
M (L \mathbb{I} - H^{(L=4n-2)}_{ap}) = H_{p}^{(L=4n-2)} M,
\end{equation}
where we stressed the periodic Hamiltonian by $H_p$.
The last relation is easy to understand intuitively: for $L=4n-2$, $H_p$ has the largest eigenvalue equal to $L$, corresponding e.g. to the state $\ket{000 \ldots 00}$. This is mapped to $\ket{1100110...0011}$, where the first and the last two sites are all occupied. But since the boundary conditions are antiperiodic, this groundstate is analogous to the periodic groundstate $\ket{0011..0011}$ for $L=4n$.
\subsubsection*{Zero mode Cooper pairs}
The Hamiltonian $H$ in \eqref{eq:modeldef} is diagonalisable using Bethe's ansatz. We derive the Bethe equations and present the explicit form of the Bethe vectors in Section~\ref{sec:BetheAnsatz}. Here we present the Bethe equations to elucidate a large symmetry which is most obvious in momentum space.
Note that there are two types of pseudo-particles, namely domain walls and odd domain walls. To diagonalise \eqref{eq:modeldef} we therefore employ a nested Bethe ansatz. Each domain wall is associated with a Bethe-root $z_j$, where $\log z_j$ is proportional to the momentum of the domain wall, and each odd domain wall is associated with an additional, nested Bethe-root $u_l$. In Section~\ref{sec:BetheAnsatz} we show that in the sector $(m,k)$, and for even system sizes $L$, the eigenvalue of $H$ is given by
\begin{equation}
\Lambda =L+ \sum_{i=1}^{m} (z_i^2+z_i^{-2}-2),
\label{eq:eigval}
\end{equation}
where the set of $z_1, z_2, \ldots, z_m$ and $u_1, \ldots, u_k$ satisfies the equations,
\begin{align}
z_j^L & =\pm {\rm i}^{-L/2} \prod_{l=1}^k \frac{u_l-(z_j-1/z_j)^2}{u_l+(z_j-1/z_j)^2},\qquad j=1,\ldots,m
\label{eq:bae1}\\
1 &= \prod_{j=1}^{m} \frac{u_l-(z_j-1/z_j)^2}{u_l+(z_j-1/z_j)^2},\qquad l=1,\ldots,k,
\label{eq:bae2}
\end{align}
where the $\pm$ is the same for all $j$.
\begin{figure}[t]
\centering
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.7\linewidth]{BetheRoots1.pdf}
\captionsetup{width=0.7\linewidth}
\captionof{figure}{$L=16$, $(6,\,3)$ sector, $\Lambda=6.613$, free fermionic solution. The six $z_j$'s take six of the $8^{\text{th}}$ roots of unity. Two $u_l$'s form a zero mode Cooper pair, hence they are imaginary and each other's negatives.}
\label{fig:BetheRoots1}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.7\linewidth]{BetheRoots3.pdf}
\captionsetup{width=0.7\linewidth}
\captionof{figure}{$L=10$, $(4,\,2)$ sector, $\Lambda=6.721$, non-free-fermionic solution. The $z_j$'s not on the unit circle are related as $z\to (z^*)^{-1}$.}
\label{fig:BetheRoots3}
\end{minipage}
\end{figure}
A solution to the Bethe equations gives rise to an eigenvector; however, this correspondence is not unique. Two solutions $\{z_1, \ldots, z_m, u_1, \ldots, u_k\}$ and $\{z'_1, \ldots, z'_m, u'_1, \ldots, u'_k\}$ give rise to the same eigenvector if there are permutations $\pi \in S_m$, $\sigma \in S_k$ and signs $\epsilon_1, \ldots, \epsilon_m$, $\epsilon_j \in \{+,-\}$ such that $z_j =\epsilon_j z'_{\pi(j)}$ and $u_l = u'_{\sigma(l)}$. In other words, the solutions are invariant under permutations of the Bethe-roots, and under changing the signs of the $z_j$'s.
Note that the eigenvalue $\Lambda$ depends only on the $z_j$'s. In the absence of odd domain walls, i.e. $k=0$, the equations become free-fermion and are solved by
\begin{equation}
z_j = {\rm i}^{-1/2} {\rm e}^{\frac{2{\rm i} \pi I_j}{L}},\qquad j=1,\ldots,m
\label{eq:FFsol}
\end{equation}
where $I_j$ is a (half-)integer. This same solution \eqref{eq:FFsol} can be used to find a solution in the sector with $k=2$ for any solutions $u_1$ and $u_2$ of \eqref{eq:bae2},
\begin{equation}
1 = \prod_{j=1}^{m} \frac{u-(z_j-1/z_j)^2}{u+(z_j-1/z_j)^2},
\label{eq:bae_u}
\end{equation}
that are each other's negatives, i.e. $u_2=-u_1$. In this case the product in (\ref{eq:bae1}) is
\begin{align}
\frac{u_1-(z_j-1/z_j)^2}{u_1+(z_j-1/z_j)^2}\times\frac{u_2-(z_j-1/z_j)^2}{u_2+(z_j-1/z_j)^2} =\frac{u_1-(z_j-1/z_j)^2}{u_1+(z_j-1/z_j)^2}\times\frac{u_1+(z_j-1/z_j)^2}{u_1-(z_j-1/z_j)^2}=1,
\end{align}
for any $z_j$, so that \eqref{eq:bae1} with $k=0$, i.e. \eqref{eq:FFsol}, is unchanged.
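Indeed, substituting \eqref{eq:FFsol} into the left hand side of \eqref{eq:bae1} gives
\begin{equation}
z_j^L = {\rm i}^{-L/2}\, {\rm e}^{2{\rm i}\pi I_j} = \pm\, {\rm i}^{-L/2},
\end{equation}
with the $+$ sign for integer and the $-$ sign for half-integer $I_j$, so \eqref{eq:FFsol} solves \eqref{eq:bae1} whenever the product over the $u_l$'s equals one.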
We can continue like this as long as $m$ is large enough to generate new solutions from \eqref{eq:bae_u}, and add (Cooper) pairs $(u_l,-u_l)$ without changing the eigenvalue. A similar construction is also possible if we start in a non-free-fermion sector with $k\neq 0$. In sectors where the total number of domain walls $m$ is proportional to the system size $L$, this gives rise to an extensive degeneracy of energy levels, as we explain in detail in the next section. Some typical solutions to the Bethe equations are shown in Figs.~\ref{fig:BetheRoots1}, \ref{fig:BetheRoots3}, \ref{fig:BetheRoots2}.
We have not been able to find an explicit operator that creates a Cooper pair when acting on a state that admits this. If such an operator can be constructed, it must either select one of the solution pairs $(u,-u)$ of (\ref{eq:bae_u}), or more likely create a linear combination of all such solution pairs. Since the pairs do not affect the energy, such a linear combination is an eigenstate of the Hamiltonian, but not a pure Bethe state.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.7]{BetheRoots2.pdf}
\caption{$L=16$, $(6,\,3)$ sector, $\Lambda=2.226$, non-free-fermionic solution on the complex plane. The six $z_j$'s are on the unit circle, but do not coincide with $8^{\text{th}}$ roots of unity.}
\label{fig:BetheRoots2}
\end{center}
\end{figure}
\section{Consequences of symmetry}
\label{sec:Consequences}
We find that not all the eigenvectors of $H$ are directly described by the Bethe ansatz. However, in all the finite size cases that we looked at, all the eigenvectors could be obtained by applying symmetry operations to known Bethe vectors.
The translation symmetry $T$ maps $(m,\,k)$ into $(m,\,m-k)$, taking eigenvectors into eigenvectors. Also $E$, the DW--nonDW symmetry, maps $(m,k)$ into $(L-m,L/2-k)$. By applying both consecutively, $(m,\,k)$ is mapped into $(L-m,\, L/2-m+k)$. The process of constructing all the eigenvectors from the Bethe vectors is complicated, and we did not find the general structure. Here and in Appendix~\ref{app:GSdeg} we report on certain cases that we studied.
\subsection{$L=6$, full spectrum}
For $L=6$, the problem is easily solvable by direct diagonalisation of the Hamiltonian. According to this, there are four energy levels, each $16$-fold degenerate (Table~\ref{tab:L6energies}).
\begin{table}[h]
\begin{center}
\begin{tabular}{l | l}
$\Lambda$ & deg. \\ \hline
$0.268$ & $16$ \\
$2.000$ & $16$ \\
$3.732$ & $16$ \\
$6.000$ & $16$
\end{tabular}
\caption{$L=6$ sector energy levels and degeneracies.}
\label{tab:L6energies}
\end{center}
\end{table}
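These numbers are easily confirmed with the numerical Hamiltonian of the Section~\ref{sec:ModelDef} sketch (the rounding is ours):
\begin{verbatim}
from collections import Counter
print(Counter(np.round(np.linalg.eigvalsh(H), 3)))
# L = 6: {0.268: 16, 2.0: 16, 3.732: 16, 6.0: 16}
\end{verbatim}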
In the Bethe ansatz we distinguish even and odd domain walls with an additional nested Bethe root $u_l$, because the interaction between walls depends on the parity of the distance between domain walls. But because it only depends on the distance between domain walls, it makes no difference if we change the parity of all domain walls. In other words, associating a nested Bethe root to the odd DWs is an artificial choice. We therefore identify the sector $(m,k)=(2,0)$ with two even domain walls with that of two odd domain walls, $(m,k)=(2,2)$. Hence for $L=6$ there are $6$ different sectors, which are listed in Table~\ref{tab:L6sectors}.
\begin{table}[h]
\begin{center}
\begin{tabular}{l | l | l}
$m$ & $k$ & dim. \\ \hline
$0$ & $0$ & $2$ \\
$2$ & $0$,$2$ & $6$,$6$ \\
$2$ & $1$ & $18$ \\
$4$ & $2$ & $18$ \\
$4$ & $1$,$3$ & $6$,$6$ \\
$6$ & $3$ & $2$
\end{tabular}
\caption{$L=6$ sectors. Certain sectors have the same dimensions and are listed in the same line.}
\label{tab:L6sectors}
\end{center}
\end{table}
Because of the DW-nonDW symmetry, it is enough to probe the lower half of the sectors, i.e. those with $m=0$ and $m=2$. Below we find all eigenvectors corresponding to the dimensions of the eigenspaces given in Table~\ref{tab:L6sectors}.
\subsubsection{$\Lambda=6$}
The $(m,k)=(0,0)$ sector contains two trivial Bethe vectors: $b_1=\ket{000000}$ and $b_2=\ket{111111}$, both are eigenvectors with $\Lambda = 6$. These vectors are mapped to the (6,3) sector by the DW -- nonDW exchange $E$, to the (2,1) sector by $Q$ and to the (4,2) sector by the combined action of $E$ and $Q$, giving rise to eight vectors: $\{b_1,b_2,Qb_1,Qb_2,Eb_1,Eb_2,Q^\dag Eb_1= EQb_1, Q^\dag E b_2 =EQb_2\}$.
The other eight eigenvectors of this eigenvalue come about in the following way. In the (2,1) sector the Bethe equations are
\begin{align}
z_j^6 & = \pm{\rm i}^{-3} \frac{u-(z_j-1/z_j)^2}{u+(z_j-1/z_j)^2},\qquad j=1,2,\label{eq:baeLambda6a}\\
1 &= \prod_{j=1}^2 \frac{u-(z_j-1/z_j)^2}{u+(z_j-1/z_j)^2}.\label{eq:baeLambda6b}
\end{align}
Due to the Pauli exclusion principle, only distinct pairs $(z_1^2,z_2^2)$ of solutions of \eqref{eq:baeLambda6a} and \eqref{eq:baeLambda6b} give rise to different eigenvectors. In the (2,1) sector there are two independent non-free fermion solutions ($u\neq 0$ and $u\neq\infty$) with $\Lambda=6$, namely
$$(z_1^2,z_2^2,u_\pm)=(\frac{\sqrt{3}}{2}(\sqrt{3}+{\rm i}), \frac{1}{2\sqrt{3}}(\sqrt{3}+{\rm i}),\frac{1}{39}(-9\pm14\sqrt{3})).$$
If we denote the corresponding two Bethe vectors by $b_3$ and $b_4$ then in the (2,1) sector we have the four vectors $\{b_3,b_4,Q^\dag Eb_3=EQb_3,Q^\dag Eb_4=EQb_4\}$ and in the (4,2) sector we find $\{Qb_3,Qb_4,Eb_3,Eb_4\}$.
In summary we have recovered the full sixteen-dimensional $\Lambda=6$ eigenspace.
\subsubsection{Other eigenvalues}
Based on direct diagonalisation, the $(m,k)=(2,0)$ and $(m,k)=(2,2)$ sectors each contain two eigenvectors associated to each of the lower three eigenvalues. These are reproduced by the Bethe roots in the $(m,k)=(2,0)$ sector, as these satisfy the equation
\begin{equation}
z_j^6 = \pm {\rm i}^{-3}, \qquad j=1,2.
\label{eq:baeL6}
\end{equation}
There are precisely two times three distinct pairs $(z_1^2,z_2^2)$ of allowed solutions, three for the $+$ and three for the $-$ sign, giving each of the lower three eigenvalues twice, and this is doubled using the combined action of $E$ and $Q$. Similarly for $(m,k)=(2,2)$, and by symmetries also in the sectors $(m,k)=(4,1)$ and $(m,k)=(4,3)$. Hence we obtain eight vectors each for the first three eigenvalues. This leaves $24=3\times 8$ vectors still to be determined, and they all must come from the remaining twelve dimensions of the $(m,k)=(2,1)$ sector (six of the eighteen available vectors in this sector contribute to $\Lambda=6$), as well as the twelve remaining dimensions of the $(m,k)=(4,2)$ sector.
In the $(m,k)=(2,1)$ sector we may distinguish two types of solutions, the free fermionic (FF) and the non free fermionic (nonFF). The latter, as we found above, correspond to $\Lambda=6$, and the FF solutions are those with $u=0$ and $u=\infty$. For $u=0$, we obtain the following BEs,
\begin{align}
z_1^6 = - {\rm i}^{-3},\qquad z_2^6 = - {\rm i}^{-3},
\end{align}
while with $u=\infty$, we find
\begin{align}
z_1^6 = {\rm i}^{-3},\qquad z_2^6 = {\rm i}^{-3},
\end{align}
which are the same as for the $(m,k)=(2,0)$ sector. By the same reasoning as for \eqref{eq:baeL6}, these two sets each produce six solutions, i.e. twelve in total, and by DW-nonDW symmetry we obtain all of the remaining 24 solutions.
We have thus found the complete spectrum for $L=6$ from the Bethe equations and the symmetries.
\subsection{$L=10,\, \Lambda = 6$ degeneracy}
As another example, we examined the most degenerate level for $L=10$, the $\Lambda = 6$ eigenvalue, which is 64-fold degenerate. Because of the DW-nonDW symmetry it is enough to look at the sectors $(m,k)$ with $m<L/2=5$. The Hamiltonian is easily diagonalisable in these sectors, giving rise to the degeneracies shown in Table~\ref{tab:L10deg}.
\begin{table}
\begin{center}
\begin{tabular}{l | l | l}
$m$ & $k$ & deg. of $\Lambda = 6$ \\ \hline
$0$ & $0$ & $0$ \\
$2$ & $0$,$2$ & $4$ \\
$2$ & $1$ & $8$ \\
$4$ & $0$ & $0$ \\
$4$ & $1$,$3$ & $4$ \\
$4$ & $2$ & $8$ \\
\end{tabular}
\caption{$L=10,\, \Lambda = 6$ degeneracies sector by sector. The unlisted sectors follow by DW-nonDW symmetry.}
\label{tab:L10deg}
\end{center}
\end{table}
The four states in $(2,0)$ are pure Bethe-states and we denote the four-dimensional span of these by $\mathcal{B}^{(2,0)}$. The four states in $(2,2)$ are the copies of these states under the translation symmetry $T$ which shifts all the sites by one.
Out of the eight states in $(2,1)$, only four are pure Bethe states spanning $\mathcal{B}^{(2,1)}$. Since $Q$ is a symmetry which maps from $(m,\,k)$ to $(m+2,\,k+1)$, by applying $Q$ we create four states each in the $Q\mathcal{B}^{(2,0)}$ subspace of $(4,1)$, the subspace $Q\mathcal{B}^{(2,1)}$ of $(4,2)$, and $QT\mathcal{B}^{(2,0)}$ of $(4,3)$. These all turn out to be linearly independent.
$S$ is a symmetry operator which moves one of the domain walls by one unit, so it maps a state in the sector $(m,\,k)$ into $(m,\,k-1)$ and $(m,\,k+1)$, possibly creating a zero vector. By applying $S$ on $Q\mathcal{B}^{(2,0)}$ we can create two linearly independent (and two linearly dependent) vectors in $(4,2)$, and by applying $S$ on $QT\mathcal{B}^{(2,0)}$ we create the missing two linearly independent vectors (and again two linearly dependent). Applying $Q^\dagger$ on these four new vectors created by $S$, we found the missing four linearly independent vectors in $(2,1)$. We thus found thirty-two states, and using the DW-nonDW symmetry we find all sixty-four. This process is depicted in Fig.~\ref{fig:L10symmetries}.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\node[circle,draw](20) at (0,2) {2,0};
\node[circle,draw](41) at (2,2) {4,1};
\draw[->] (20) edge (41);
\node [above] at (1,2){$Q$};
\node[circle,draw](21) at (0,0) {2,1};
\node[circle,draw](42) at (2,0) {4,2};
\draw[->] (21) edge (42);
\node [above] at (1,0){$Q$};
\draw[<-] (21) edge (42);
\node [below] at (1,0){$Q^\dag$};
\node[circle,draw](22) at (0,-2) {2,2};
\node[circle,draw](43) at (2,-2) {4,3};
\draw[->] (22) edge (43);
\node [above] at (1,-2){$Q$};
\draw[->] (43) edge (42);
\node [right] at (2,-1){$S$};
\draw[->] (41) edge (42);
\node [right] at (2,1){$S$};
\draw[->] (20) edge [bend right] (22);
\node[left] at (-1,0) {$T$};
\end{tikzpicture}
\end{center}
\caption{Action of symmetries between domain wall sectors}
\label{fig:L10symmetries}
\end{figure}
It would be very interesting to find the full underlying symmetry algebra, i.e. the general algorithm to create all the eigenvectors for given system size and given energy. This may be challenging as it seems not obvious which symmetries create nonzero and linearly independent vectors.
\subsection{The groundstate for $L=4n$}
\label{sec:L4nGS}
In the half-filled sector $(2n,\,0)$ with $L=4n$ where $2n$ is the total number of domain walls, the Bethe equations
\begin{equation}
z_j^{4n} =\pm {\rm i}^{-4n/2} = \pm (-1)^n =\pm 1\quad (j=1, \ldots, 2n)
\label{eq:BeqFFL4n}
\end{equation}
are satisfied by the free fermion solutions
\begin{align}
z^{(+)}_j = {\rm e}^{\frac{{\rm i} \pi j}{2 n}} \quad (j=1, \ldots, 2n),\qquad
z^{(-)}_j = {\rm e}^{\frac{{\rm i} \pi (2j+1)}{4 n}} \quad (j=1, \ldots,2n).
\label{eq:ffsol}
\end{align}
These solutions produce groundstates, as for each of them the eigenvalue
\begin{equation}
\Lambda=4n+\sum_{j=1}^{2n} (z_j^2+z_j^{-2} -2) = 0.
\end{equation}
These solutions span the sector $(2n,\,0)$, which is also spanned by the two vectors $\ket{0011 \ldots 0011}$ and $\ket{1100 \ldots 1100}$, hence giving these groundstates in terms of Bethe states.
Based on these solutions, we can construct further eigenstates in the sectors $(2n,\,k)$.
In the presence of $k$ odd DWs, we have
\begin{equation}
1=\prod_{j=1}^{2n} \frac{u_l-(z_j-1/z_j)^2}{u_l+(z_j-1/z_j)^2}\quad (l=1,\ldots,k).
\label{eq:u}
\end{equation}
After substituting the free fermion solution \eqref{eq:ffsol} into the right hand side of \eqref{eq:u}, the resulting equation for $u$ has purely imaginary roots that form complex conjugate pairs. The key observation is that the Bethe equations of the $(2n,\, 2k)$ sector can be satisfied with the free fermionic solution \eqref{eq:ffsol} if we choose the solutions for $u_l$ in (purely imaginary) complex conjugate pairs, as for such a pair we have $u^*=-u$, so that for each $j$
\begin{equation}
\frac{u-(z_j-1/z_j)^2}{u+(z_j-1/z_j)^2}\;\frac{u^*-(z_j-1/z_j)^2}{u^*+(z_j-1/z_j)^2}=1.
\end{equation}
Hence the Bethe equations \eqref{eq:bae1} remain of the free fermion form \eqref{eq:BeqFFL4n} for such solutions. This mechanism of zero energy Cooper pairs results in an overall degeneracy for the sector $m=2n$ growing exponentially in $L$. A computation of a lower bound on this growth is given in Appendix~\ref{app:GSdeg}.
\subsection{The first excited state for $L=4n$}
Based on direct diagonalisation of the Hamiltonian for $L=4,\,8,\,12$, we observe that the first excited state occurs in the sectors $m=2n \pm 2$ with $k$ arbitrary, and $m=2n$ with $k\neq 0,2n$. Since the $(2n-2,\,0)$ sector is purely free fermionic, the Bethe equations are trivial and we can easily determine the first excited state energy for any $L=4n$. This computation is based on the assumption that the identified free fermion state is indeed the first excited state for any system size; in case it does not hold, the results give an upper bound for the first excited state energy.
The Bethe equations for the $L=4n$, $(2n-2,\,0)$ sector read
\begin{equation}
z_j^L = \pm {\rm i}^{-L/2} = \pm (-1)^n =\pm 1, \quad (j=1, \ldots, 2n-2)
\label{eq:exc}
\end{equation}
These are the same equations as (\ref{eq:BeqFFL4n}), so the independent solutions are (\ref{eq:ffsol}). The only difference compared to the groundstate is that there we had to select all independent Bethe roots, while now we must leave out two,
\begin{equation}
\Lambda = 4n + \sum_{i=1}^{2n-2} \left( z_i^{2} + z_i^{-2} - 2 \right) = 4 + \sum_{i=1}^{2n-2} \left( z_i^{2} + z_i^{-2} \right).
\end{equation}
To minimise the energy, we have to minimise $\sum_i (z_i^2 + z_i^{-2})$, which is the same as leaving out the two Bethe roots contributing the most. The two largest contributing Bethe roots are $z_{2n}^{(+)}=1$, $z_{1}^{(+)}={\rm e}^{{\rm i}\pi/2n}$ for the $+$ case of \eqref{eq:exc}, and $z_{1}^{(-)}={\rm e}^{{\rm i}\pi /4n}$, $z_{2n-1}^{(-)}={\rm e}^{- {\rm i}\pi/4n}$ for the $-$ case. The two associated energy levels are
\begin{align}
\Lambda^{(+)}(L=4n) &= 4-2- 2\cos (\pi/n)= 2 (1-\cos (\pi/n)) \\
\Lambda^{(-)}(L=4n) &= 4-4 \cos (\pi/2n)= 4 (1-\cos (\pi/2n))
\end{align}
It is easy to see that $\Lambda^{(-)}<\Lambda^{(+)}$, hence $\Lambda^{(-)}$ gives the first excited state energy. This result correctly reproduces the $L=4,\,8,\,12$ first excited state energies.
The construction above creates the first excited state in the $(2n-2,\, 0)$ sector, but this highly degenerate energy level occurs in many other sectors.
The groundstate energy $\Lambda_{0} = 0$ and therefore the energy gap is given by
\begin{equation}
\Delta \Lambda_{n} = \Lambda^{(-)}_{n} - \Lambda_{0} = 4 (1-\cos (\pi/2n)) \approx \frac{\pi^2}{2n^2} = \frac{8\pi^2}{L^2}.
\end{equation}
As we can see, the gap vanishes as $\sim 1/L^2$, which is a sign of a classical diffusive mode. We stress that it is a conjecture that for large $n$ the energy $\Lambda_n^{(-)}$ is the first excited level; strictly speaking it is an upper bound.
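A quick numerical illustration of this scaling (our sketch; the last column tends to $1$, confirming $\Delta\Lambda \approx 8\pi^2/L^2$):
\begin{verbatim}
import numpy as np
for n in (2, 10, 50, 250):
    L = 4 * n
    gap = 4 * (1 - np.cos(np.pi / (2 * n)))   # first excited state energy
    print(L, gap, gap * L**2 / (8 * np.pi**2))
\end{verbatim}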
\section{Bethe ansatz}
\label{sec:BetheAnsatz}
In this section we give a detailed derivation of the eigenvalues and eigenvectors using the coordinate Bethe ansatz. We have not been able to identify our model with any of the known solvable lattice models in the literature.
As the space of fermions naturally breaks up into sectors labelled by numbers of domain walls, we now introduce a new labelling of the states instead of the fermionic Fock space notation $\ket{\mathbf{\tau}}$. Let $1\le x_1 < x_2 < \cdots < x_m \le L$ denote the positions of $m$ domain walls, then
\begin{equation}
\ket{x_1,\ldots,x_m;p_1,\ldots,p_k}_\epsilon,
\end{equation}
denotes the state with $m$ domain walls, of which the walls $p_1,\ldots,p_k$ are at an odd position. (This notation is convenient for the Bethe ansatz.) If the first domain wall is of $01$ type then $\epsilon=0$, otherwise $\epsilon=1$. If all walls are at an even position (or all at an odd position) then the processes \eqref{eq:oddprocess}, involving pairs of domain walls, cannot take place, as $x_{i+1} \ge x_i+2$, and the action of the Hamiltonian on a ket state with two domain walls is given by diffusion with hardcore exclusion.
For clarity and definiteness we give the explicit action of $H$ on the sector with two even domain walls. First we introduce the shift operators $S_i^\pm$
\begin{equation}
S_i^\pm \ket{x_1,\ldots,x_i,\ldots,x_m;p_1,\ldots, p_k} = \ket{x_1,\ldots,x_i\pm 1,\ldots,x_m;p_1,\ldots, p_k}.
\end{equation}
Then for $x_2>x_1+2$:
\begin{align}
H \ket{x_1,x_2}_\epsilon &= \left(L-4 + \sum_{i=1,2}(-1)^{i+\epsilon} \left(S_i^{+2}+S_i^{-2}\right)\right) \ket{x_1,x_2}_\epsilon,
\label{eq:m=2k=0gen}\\
H \ket{x,x+2}_\epsilon &= \left(L-4 +(-1)^\epsilon\left(-S_1^{-2} + S_2^{+2}\right)\right) \ket{x,x+2}_\epsilon.
\label{eq:m=2k=0exc}
\end{align}
Now consider the case where the first domain wall is odd. Again, if the walls are well separated, i.e. $x_2 > x_1+1$, the action of $H$ is that of diffusion:
\begin{equation}
H \ket{x_1,x_2;1}_\epsilon = \left(L-4 + \sum_{i=1,2}(-1)^{i+\epsilon} \left(S_i^{+2}+S_i^{-2}\right)\right) \ket{x_1,x_2;1}_\epsilon.
\label{eq:m=2k=1gen}
\end{equation}
The equations are the same if the second wall were on an odd position rather than the first. When two walls are close we no longer have hardcore exclusion, but there is a non-trivial interaction between the walls:
\begin{multline}
H \ket{x,x+1;1}_\epsilon = \big(L-2 +(-1)^\epsilon (- S_1^{-2} + S_2^{+2})\big) \ket{x,x+1;1}_\epsilon \\
\mbox{} + (-1)^\epsilon \big(\ket{x-1,x;2}_\epsilon + \ket{x+1,x+2;2}_\epsilon\big),
\label{eq:m=2k=1exc}
\end{multline}
where in the last two terms the second domain wall has become odd.
In this section we diagonalise the Hamiltonian $H$ given by \eqref{eq:modeldef} using Bethe's ansatz. We will assume that $L$ is even and impose periodic boundary conditions. Since the total number of domain walls, $m$, is conserved, as well as the number of odd domain walls, $k$, the Bethe ansatz can be constructed separately within each $(m,k)$-sector. We therefore write general wave functions in the form
\begin{equation}
\ket{\Psi(m;k)} = \sum_{\{x_i\}} \sum_{\{p_j\}} \sum_{\epsilon=0,1} \psi_\epsilon(x_1,\ldots,x_m;p_1,\ldots, p_k) \ket{x_1,\ldots,x_m;p_1,\ldots, p_k}_\epsilon,
\end{equation}
and derive the conditions for the coefficients $\psi$ such that $\ket{\Psi(m;k)}$ is an eigenfunction of $H$,
\begin{equation}
H \ket{\Psi(m;k)} = \Lambda \ket{\Psi(m;k)}.
\end{equation}
We start with the simplest sectors, namely those with two domain walls.
\subsection{Two domain walls $(m=2)$}
\subsubsection{No odd wall $(k=0)$}
Assuming $x_1$ and $x_2$ are even, from \eqref{eq:m=2k=0gen} we find that two walls far apart satisfy
\begin{multline}
\Lambda\psi_\epsilon(x_1,x_2) = (L-4)\psi_\epsilon (x_1,x_2) + (-1)^\epsilon \big( -\psi_\epsilon(x_1+2,x_2) - \psi_\epsilon(x_1-2,x_2)\\ \mbox{} + \psi_\epsilon(x_1,x_2+2) + \psi_\epsilon(x_1,x_2-2) \big) ,
\end{multline}
while from \eqref{eq:m=2k=0exc} it follows that $\psi$ satisfies the condition
\begin{equation}
-\psi_\epsilon(x-2,x-2) + \psi_\epsilon(x,x) =0.
\end{equation}
These two equations can be satisfied if we make the ansatz
\begin{equation}
\psi_\epsilon(x_1,x_2) = c_\epsilon \left(A^{12} ({\rm i}^{1-\epsilon} z_1)^{x_1} ({\rm i}^\epsilon z_2)^{x_2} + A^{21} ({\rm i}^{1-\epsilon} z_2)^{x_1} ({\rm i}^\epsilon z_1)^{x_2}\right),
\end{equation}
where $z_1$ and $z_2$ are some auxiliary complex numbers to be determined shortly. Using this ansatz we find that the eigenvalue $\Lambda$ and amplitudes $A$ satisfy the conditions
\begin{equation}
\Lambda =L+ \sum_{i=1}^2 (z_i^2+z_i^{-2}-2),\qquad A^{12}+A^{21}=0.
\end{equation}
Imposing the periodic boundary condition on an even lattice of size $L$ gives
\begin{equation}
\psi_\epsilon(x,L+2) = \psi_{1-\epsilon}(2,x),
\end{equation}
which results in
\begin{align}
c_0 A^{12} z_2^L &= c_1A^{21} & c_1A^{12} ({\rm i} z_2)^L &= c_0A^{21} \\
c_0 A^{21} z_1^L &= c_1A^{12} & c_1A^{21} ({\rm i} z_1)^L &= c_0A^{12} \nonumber
\end{align}
We thus find that $(c_0/c_1)^2 = {\rm i}^L$ and
\begin{equation}
z_1^{2L} = z_2^{2L} = {\rm i}^{-L}.
\end{equation}
Taking the square root we have two different sets of solutions:
\begin{align}
\frac{c_0}{c_1} &= {\rm i}^{L/2} & \frac{c_0}{c_1} &= -{\rm i}^{L/2} \\
z_1^L &= -\frac{c_1}{c_0} = -{\rm i}^{-L/2} & z_1^L &= -\frac{c_1}{c_0} = {\rm i}^{-L/2} \label{EvenWallsBE}\\
z_2^L &= -\frac{c_1}{c_0} = -{\rm i}^{-L/2} & z_2^L &= -\frac{c_1}{c_0} = {\rm i}^{-L/2} \nonumber
\end{align}
The Pauli exclusion principle implies $z_1 \neq \pm z_2$. Two different solutions $(z_1,\, z_2)$ and $(z'_1,\, z'_2)$ are independent
(the corresponding Bethe vectors are orthogonal) if their squares are not equal up to interchange. Since for every solution $z$,
$-z$ is also a solution, it is enough to deal with half of the solutions to (\ref{EvenWallsBE}). This gives $2 \binom{L/2}{2}$ different solutions (where the factor $2$ comes from the two different sets of solutions); for $L=6$ this is $2\binom{3}{2}=6$, in accordance with Table~\ref{tab:L6sectors}. The dimension of the $(2,\,0)$ sector is $2 \binom{L/2}{2}$, so we conclude that the Bethe ansatz gives the full solution in this sector.
\subsubsection{One odd wall}
\label{se:d2o1}
We consider now the case that the first wall is at an odd position. Two walls far apart do not interact and satisfy the same equation as if both were on even positions, from \eqref{eq:m=2k=1gen}:
\begin{multline}
\Lambda\psi_\epsilon(x_1,x_2;1) = (L-4)\psi_\epsilon (x_1,x_2;1) + (-1)^\epsilon \big(-\psi_\epsilon(x_1+2,x_2;1) - \psi_\epsilon(x_1-2,x_2;1) \\ \mbox{} + \psi_\epsilon(x_1,x_2+2;1) + \psi_\epsilon(x_1,x_2-2;1)\big).
\label{eq:eo2d}
\end{multline}
When the walls are distance one apart, the eigenvalue equation changes due to the process described in \eqref{eq:m=2k=1exc}. We find in this case that
\begin{multline}
\Lambda\psi_\epsilon(x,x+1;1) = (L-2)\psi_\epsilon(x,x+1;1) + (-1)^\epsilon \big(-\psi_\epsilon(x-2,x+1;1) + \psi_\epsilon(x,x+3;1) \\ + \psi_{\epsilon}(x-1,x;2) + \psi_{\epsilon}(x+1,x+2;2) \big).
\end{multline}
And so, setting $x_2=x_1+1$ in \eqref{eq:eo2d}, it follows that the wave function has to satisfy
\begin{multline}
2\psi_\epsilon(x,x+1;1) + (-1)^\epsilon \big( \psi_\epsilon(x+2,x+1;1) - \psi_\epsilon(x,x-1;1) \\ + \psi_{\epsilon}(x-1,x;2) + \psi_{\epsilon}(x+1,x+2;2) \big)=0.
\label{eq:eo10}
\end{multline}
Likewise, considering the case where the second wall is odd, the condition on the wave function results in
\begin{multline}
2\psi_\epsilon(x,x+1;2) + (-1)^\epsilon \big(\psi_\epsilon(x+2,x+1;2) - \psi_\epsilon(x,x-1;2) \\ + \psi_{\epsilon}(x-1,x;1) + \psi_{\epsilon}(x+1,x+2;1) \big)=0.
\label{eq:eo01}
\end{multline}
To solve equations \eqref{eq:eo2d}, \eqref{eq:eo10} and \eqref{eq:eo01} we make the following ansatz
\begin{equation}
\psi_\epsilon(x_1,x_2;p) = \sum_{\pi\in S_2} B^{\pi_1\pi_2}_\epsilon(p) ({\rm i}^{1-\epsilon} z_{\pi_1})^{x_1} ({\rm i}^\epsilon z_{\pi_2})^{x_2},
\end{equation}
where, with a view to later generalizations, we take
\begin{align}
B^{\pi_1\pi_2}_\epsilon(p) &= c_\epsilon (-1)^{\lfloor (p+\epsilon-1)/2\rfloor } A^{\pi_1\pi_2} g(u,z_{\pi_{p}}) \prod_{j=1}^{p-1} f(u,z_{\pi_j}),
\label{eq:Bdef}
\end{align}
and
\begin{equation}
\Lambda = L+ \sum_{i=1}^2 (z_i^2+z_i^{-2}-2),\qquad A^{12}+A^{21}=0.
\end{equation}
With this ansatz, the scattering conditions \eqref{eq:eo10} and \eqref{eq:eo01} become the following equations for the functions $f$ and $g$,
\begin{align}
&\sum_{\pi\in S_2} A^{\pi_1\pi_2} \left[ z_{\pi_2} g(u,z_{\pi_1}) (2- z_{\pi_1}^2-z_{\pi_2}^{-2}) -{\rm i} z_{\pi_1} f(u,z_{\pi_1})g(u,z_{\pi_2}) (z_{\pi_1}^{-2}-z_{\pi_2}^2) \right]=0,\\
&\sum_{\pi\in S_2} A^{\pi_1\pi_2} \left[ z_{\pi_2} f(u,z_{\pi_1}) g(u,z_{\pi_2}) (2- z_{\pi_1}^2-z_{\pi_2}^{-2}) -{\rm i} z_{\pi_1} g(u,z_{\pi_1}) (z_{\pi_1}^{-2}-z_{\pi_2}^2) \right]=0.
\end{align}
It can be easily checked that these equations are solved by the functions
\begin{align}
f(u,z) &= {\rm i} \frac{u-(z-1/z)^2}{u+(z-1/z)^2},\\
g(u,z) &= \frac{z-1/z}{u+(z-1/z)^2},
\end{align}
where $u$ is an additional complex number to be fixed by the boundary conditions.
The periodic boundary condition needs to be implemented carefully as it introduces minus signs,
\begin{equation}
\psi_\epsilon(x,L+2;1) = \psi_{1-\epsilon} (2,x;2),\qquad \psi_\epsilon(x,L+1;2) = (-1)^{\mathcal{N}_F-1}\psi_{1-\epsilon} (1,x;1),
\end{equation}
and since $\mathcal{N}_F$ is odd in this case, these conditions result in
\begin{equation}
B^{\pi_1\pi_2}_\epsilon (1) ({\rm i}^\epsilon z_{\pi_2})^L = B^{\pi_2\pi_1}_{1-\epsilon} (2),\qquad
B^{\pi_1\pi_2}_\epsilon (2) ({\rm i}^\epsilon z_{\pi_2})^L = B^{\pi_2\pi_1}_{1-\epsilon} (1).
\label{eq:n=2bc}
\end{equation}
Combining $\epsilon=0$ and $\epsilon=1$ and using \eqref{eq:Bdef} we find that
\begin{equation}
c_0/c_1 = \pm {\rm i}^{L/2+1}.
\end{equation}
Finally we obtain from the two cases in \eqref{eq:n=2bc} that
\begin{align}
z_{\pi_2}^L &= -\frac{c_1}{c_0} \frac{A^{\pi_2\pi_1}}{A^{\pi_1\pi_2}} f(u,z_{\pi_2})
= \frac{c_1}{c_0} \frac{A^{\pi_2\pi_1}}{A^{\pi_1\pi_2}} f(u,z_{\pi_1})^{-1}
\end{align}
resulting in
\begin{align}
z_1^L &= \pm{\rm i}^{-L/2} \frac{u-(z_{1}-1/z_{1})^2}{u+(z_{1}-1/z_{1})^2},\\
z_2^L &= \pm{\rm i}^{-L/2} \frac{u-(z_{2}-1/z_{2})^2}{u+(z_{2}-1/z_{2})^2},
\end{align}
with consistency condition
\begin{equation}
1=-f(u,z_1)f(u,z_2)=\frac{u-(z_{1}-1/z_{1})^2}{u+(z_{1}-1/z_{1})^2} \frac{u-(z_{2}-1/z_{2})^2}{u+(z_{2}-1/z_{2})^2}.
\end{equation}
Note that solutions with $u=0$ give a free fermion spectrum.
\subsection{Arbitrary number of walls}
\subsubsection{No odd wall}
As long as all the walls are far apart ($x_{j+1} - x_{j} > 2 \, \forall j$), the wavefunction amplitude satisfies
\begin{multline}
\Lambda \psi_{\epsilon} (x_1, \ldots , x_m ) = (L-2m) \psi_{\epsilon} (x_1, \ldots , x_m) \\
+ (-1)^{\epsilon} \sum_{j=1}^m (-1)^j \left[ \psi_{\epsilon} (\ldots, x_j -2, \ldots ) + \psi_{\epsilon} (\ldots, x_j + 2, \ldots ) \right].
\label{eq:ArbEvenZeroOdd}
\end{multline}
If two walls are distance 2 apart, $x_{i+1} = x_i +2$, then $\psi_{\epsilon} (\ldots, x_i, x_{i+1}-2)$ and $\psi_{\epsilon} (x_i+2, x_{i+1}, \ldots)$ are missing from the sum.
Taking the $x_{i+1} = x_i + 2$ limit in (\ref{eq:ArbEvenZeroOdd}), we get
\begin{equation}
0 = (-1)^{\epsilon} (-1)^i \psi_\epsilon (\ldots, x_i +2, x_i +2, \ldots ) + (-1)^\epsilon (-1)^{i+1} \psi_\epsilon (\ldots, x_i, x_i, \ldots)
\end{equation}
These equations are solved by the ansatz
\begin{equation}
\psi_\epsilon (x_1, \ldots, x_m) = c_\epsilon \sum_{\pi \in S_m} A^{\pi} \prod_{j=1}^{m/2} ({\rm i}^{1-\epsilon} z_{\pi_{2j-1}})^{x_{2j-1}} ({\rm i}^\epsilon z_{\pi_{2j}})^{x_{2j}},
\end{equation}
which is the generalization of the case with two even walls. The solution is also a generalization of that case; namely we find that
\begin{equation}
\Lambda = L + \sum_{j=1}^m (z_j^2 + z_j^{-2}-2), \quad A^\pi = \text{sign} (\pi).
\end{equation}
Imposing the periodic boundary condition
\begin{equation}
\psi_\epsilon (x_2, \ldots, x_m, x_1 +L) = \psi_{1-\epsilon} (x_1, \ldots, x_m),
\end{equation}
results in one of the following equations
\begin{equation}
z_j^L = -{\rm i}^{-L/2}, \quad z_j^L = {\rm i}^{-L/2}.
\end{equation}
\subsubsection{One odd wall}
Let $p$ denote the index of the odd wall, and thus $x_{p}$ denotes its position. In analogy with \eqref{eq:eo10} and \eqref{eq:eo01} we have the following equations for the wave function components in the case $x_{p+1}=x_p+1$,
\begin{multline}
2\psi_\epsilon(\ldots,x_p,x_{p}+1,\ldots;p+1) + (-1)^{\epsilon+p-1} \big[\psi_\epsilon(\ldots,x_p+2,x_p+1,\ldots;p+1) - \psi_\epsilon(\ldots,x_p,x_p-1,\dots;p+1)\\ + \psi_{\epsilon}(\ldots,x_p-1,x_p,\ldots;p) + \psi_{\epsilon}(\ldots,x_p+1,x_p+2,\ldots;p)\big]=0.
\label{eq:eom01}
\end{multline}
and
\begin{multline}
2\psi_\epsilon(\ldots,x_p,x_{p}+1,\ldots;p) + (-1)^{\epsilon+p-1} \big[\psi_\epsilon(\ldots,x_p+2,x_p+1,\ldots;p) - \psi_\epsilon(\ldots,x_p,x_p-1,\ldots;p)\\ + \psi_{\epsilon}(\ldots,x_p-1,x_p,\ldots;p+1) + \psi_{\epsilon}(\ldots,x_p+1,x_p+2,\ldots;p+1)\big]=0.
\label{eq:eom10}
\end{multline}
There are additional equations when three walls are close together. In the case where $x_{p+2}=x_{p+1}+1=x_p+2$ with $x_p$ even, the eigenvalue equation leads to the condition
\begin{multline}
4\psi_\epsilon(\ldots,x_p,x_{p}+1,x_p+2,\ldots;p+1) + (-1)^{\epsilon+p-1} \big[\psi_\epsilon(\ldots,x_p+2,x_p+1,x_p+2,\ldots;p+1) \\ + \psi_\epsilon(\ldots,x_p,x_p+1,x_p,\ldots;p+1) - \psi_\epsilon(\ldots,x_p,x_p-1,x_p+2;p+1)\\
- \psi_\epsilon(\ldots,x_p,x_p+3,x_p+2;p+1) + \psi_{\epsilon}(\ldots,x_p-1,x_p,x_p+2,\ldots;p) \\- \psi_{\epsilon}(\ldots,x_p,x_p+2,x_p+3,\ldots;p+2)\big]=0.
\label{eq:eoem}
\end{multline}
In the case where $x_{p+2}=x_{p+1}+1=x_p+3$ with $x_p$ even, the eigenvalue equation leads to the condition
\begin{multline}
2\psi_\epsilon(\ldots,x_p,x_{p}+2,x_p+3,\ldots;p+2) + (-1)^{\epsilon+p-1} \big[\psi_\epsilon(\ldots,x_p,x_p+2,x_p+1,\ldots;p+2) \\ - \psi_\epsilon(\ldots,x_p,x_p+4,x_p+3,\ldots;p+2) +\psi_\epsilon(\ldots,x_p+2,x_p+2,x_p+3,\ldots;p+2)\\
-\psi_\epsilon(\ldots,x_p,x_p,x_p+3,\ldots;p+2) - \psi_\epsilon(\ldots,x_p,x_p+1,x_p+2,\ldots;p+1)\\
+ \psi_\epsilon(\ldots,x_p,x_p+3,x_p+4;p+1) \big]=0,
\label{eq:eem}
\end{multline}
and similarly for the case $x_{p+2}=x_{p+1}+2=x_p+3$ with $x_p$ even. These equations are automatically satisfied by the solution from Section~\ref{se:d2o1}. We therefore define the one-domain wall nested wave function by
\begin{equation}
\phi_{p}^{(\epsilon)} (u;\pi) = g(u,z_{\pi_{p}}) (-1)^{\lfloor (p+\epsilon-1)/2\rfloor} \prod_{j=1}^{p-1} f (u,z_{\pi_j}).
\end{equation}
Then the $2n$-domain wall ansatz for the wave function with one odd wall is
\begin{equation}
\psi_{\epsilon}(x_1,\ldots,x_{2n};p) = c_\epsilon \sum_{\pi\in S_{2n}} A^{\pi_1\ldots\pi_{2n}} \phi_{p}^{(\epsilon)} (u;\pi) \prod_{j=1}^n \left[ ({\rm i}^{1-\epsilon} z_{\pi_{2j-1}})^{x_{2j-1}} ({\rm i}^\epsilon z_{\pi_{2j}})^{x_{2j}} \right],
\end{equation}
corresponding to the eigenvalue given by
\begin{equation}
\Lambda =L+ \sum_{i=1}^{2n} (z_i^2+z_i^{-2}-2),
\label{eq:eigvalm2no1}
\end{equation}
with wavefunction amplitudes
\begin{equation}
A^{\pi_1 \ldots \pi_{2n}} = \sign(\pi_1 \ldots \pi_{2n}).
\end{equation}
Periodic boundary conditions lead to
\begin{equation}
\psi_\epsilon(x_1,\ldots,x_{2n-1},L+2;p) = \psi_{1-\epsilon}(2, x_1,\ldots,x_{2n-1};p+1),
\end{equation}
and
\begin{equation}
\psi_\epsilon(x_1,\ldots,x_{2n-1},L+1;2n) = (-1)^{\mathcal{N}_F-1}\psi_{1-\epsilon}(1, x_1,\ldots,x_{2n-1};1).
\end{equation}
Since the parity of $\mathcal{N}_F$ is equal to the parity of the number of odd domain walls, we find the following conditions:
\begin{equation}
c_\epsilon A^{\pi_1\ldots\pi_{2n}} ({\rm i}^\epsilon z_{\pi_{2n}})^L (-1)^{\lfloor (p+\epsilon-1)/2\rfloor} = c_{1-\epsilon} A^{\pi_{2n}\pi_1\ldots\pi_{2n-1}} (-1)^{\lfloor (p+1-\epsilon)/2\rfloor} f(u,z_{\pi_{2n}}),
\end{equation}
and
\begin{equation}
c_\epsilon A^{\pi_1\ldots\pi_{2n}} ({\rm i}^\epsilon z_{\pi_{2n}})^L (-1)^{\lfloor (2n+\epsilon-1)/2\rfloor} \prod_{j=1}^{2n-1} f(u,z_{\pi_{j}}) = c_{1-\epsilon} (-1)^{\lfloor (1-\epsilon)/2\rfloor} A^{\pi_{2n}\pi_1\ldots\pi_{2n-1}}.
\end{equation}
Using $(-1)^{\lfloor (p+\epsilon-1)/2\rfloor}=(-1)^{\epsilon-1}(-1)^{\lfloor (p+1-\epsilon)/2\rfloor}$, we obtain again
\begin{equation}
c_0/c_1=\pm {\rm i}^{L/2+1},
\end{equation}
and the following consistency conditions
\begin{align}
&\prod_{j=1}^{2n} f(u,z_j) = (-1)^{n}\quad \Leftrightarrow \quad\prod_{j=1}^{2n} \frac{u-(z_j-1/z_j)^2}{u+(z_j-1/z_j)^2}=1, \\
& z_j^L = \pm {\rm i}^{-L/2-1} f(u,z_j) = \pm {\rm i}^{-L/2} \frac{u-(z_j-1/z_j)^2}{u+(z_j-1/z_j)^2}\qquad (j=1,\ldots,2n).
\end{align}
Recalling the eigenvalue \eqref{eq:eigvalm2no1}, we note that for all sectors with $2n$ domain walls, one of which is odd, there exist solutions with $u=0$ that give the free-fermion part of the spectrum.
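To make this explicit, the following minimal numerical sketch (in Python; all names are ours, and we take the upper sign in the Bethe equations) checks that $u=0$ satisfies the consistency condition identically and enumerates the resulting free-fermion levels for a small chain:
\begin{verbatim}
import numpy as np
from itertools import combinations

# At u = 0 each ratio (u - w_j)/(u + w_j), with w_j = (z_j - 1/z_j)^2,
# equals -1, so the product over 2n roots is automatically +1 and the
# momenta satisfy z^L = -i^{-L/2} (upper sign chosen).
L, n = 8, 2
rhs = -1j**(-L / 2)
zs = np.exp(1j * (np.angle(rhs) + 2 * np.pi * np.arange(L)) / L)

for z in zs:
    w = (z - 1 / z)**2
    assert np.isclose(z**L, rhs)
    assert np.isclose((0 - w) / (0 + w), -1)

# free-fermion part of the spectrum: any 2n distinct roots
levels = sorted({round(L + sum((z**2 + 1 / z**2 - 2).real for z in c), 10)
                 for c in combinations(zs, 2 * n)})
print(levels)
\end{verbatim}
Since the roots lie on the unit circle, each level takes the manifestly real form $\Lambda = L + \sum_j(2\cos 2\theta_j - 2)$ with $z_j={\rm e}^{{\rm i}\theta_j}$.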
\subsubsection{Two odd walls}
The condition equivalent to \eqref{eq:eoem} when three walls are close together, but now with two at odd positions so that $p_2=p_1+2=p+2$, leads to
\begin{multline}
4\psi_\epsilon(\ldots,x_p,x_{p}+1,x_p+2,\ldots;p,p+2) + (-1)^{\epsilon+p-1} \big[\psi_\epsilon(\ldots,x_p+2,x_p+1,x_p+2,\ldots;p,p+2) \\ + \psi_\epsilon(\ldots,x_p,x_p+1,x_p,\ldots;p,p+2) - \psi_\epsilon(\ldots,x_p,x_p-1,x_p+2,\ldots;p,p+2)\\
- \psi_\epsilon(\ldots,x_p,x_p+3,x_p+2,\ldots;p,p+2) + \psi_{\epsilon}(\ldots,x_p-1,x_p,x_p+2,\ldots;p+1,p+2) \\- \psi_{\epsilon}(\ldots,x_p,x_p+2,x_p+3,\ldots;p,p+1)\big]=0.
\label{eq:oeom}
\end{multline}
The analogue of \eqref{eq:eem} is similar. We find that these are satisfied by the following ansatz for the wave function for $2n$ domain walls, two of which are at odd positions:
\begin{multline}
\psi_{\epsilon}(x_1,\ldots,x_{2n};p_1,p_2) = c_\epsilon \sum_{\pi\in S_{2n}} A^{\pi_1\ldots\pi_{2n}} \sum_{\sigma \in S_2} B^{\sigma_1\sigma_2} \\\phi_{p_1}^{(\epsilon)} (u_{\sigma_1};\pi) \phi_{p_2}^{(\epsilon)} (u_{\sigma_2};\pi) \prod_{j=1}^n \left[ ({\rm i}^{1-\epsilon} z_{\pi_{2j-1}})^{x_{2j-1}} ({\rm i}^\epsilon z_{\pi_{2j}})^{x_{2j}} \right],
\end{multline}
where
\[
A^\pi = \sign(\pi),\quad B^{\sigma}=\sign(\sigma).
\]
Implementing periodic boundary conditions gives rise to
\begin{equation}
\psi_\epsilon(x_1,\ldots,x_{2n-1},L+2;p_1,p_2) = \psi_{1-\epsilon}(2, x_1,\ldots,x_{2n-1};p_1+1,p_2+1),
\end{equation}
and
\begin{equation}
\psi_\epsilon(x_1,\ldots,x_{2n-1},L+1;p_1,2n) = (-1)^{\mathcal{N}_F-1}\psi_{1-\epsilon}(1, x_1,\ldots,x_{2n-1};1,p_1+1).
\end{equation}
These give rise to $c_0/c_1=\pm{\rm i}^{L/2+2}$ and the final set of Bethe equations is given by
\begin{align}
z_j^L & = \pm{\rm i}^{-L/2} \prod_{k=1,2}\frac{u_k-(z_j-1/z_j)^2}{u_k+(z_j-1/z_j)^2},\qquad j=1,\ldots,2n\\
1 &= \prod_{j=1}^{2n} \frac{u_k-(z_j-1/z_j)^2}{u_k+(z_j-1/z_j)^2},\qquad k=1,2.
\end{align}
\subsubsection{Arbitrary number of odd walls}
For the general case we find that the Hamiltonian can be diagonalised by the ansatz
\begin{multline}
\psi_{\epsilon}(x_1,\ldots,x_{2n};p_1,\ldots,p_m) = c_\epsilon \sum_{\pi\in S_{2n}} A^{\pi_1\ldots\pi_{2n}} \sum_{\sigma \in S_m} B^{\sigma_1\ldots\sigma_m} \\ \prod_{j=1}^m \phi_{p_j}^{(\epsilon)} (u_{\sigma_j};\pi) \prod_{j=1}^n \left[ ({\rm i}^{1-\epsilon} z_{\pi_{2j-1}})^{x_{2j-1}} ({\rm i}^\epsilon z_{\pi_{2j}})^{x_{2j}} \right],
\end{multline}
where we recall that the wave function for one odd domain wall is given by
\begin{equation}
\phi_{p}^{(\epsilon)} (u;\pi) = g(z_{\pi_{p}}) (-1)^{\lfloor (p+\epsilon-1)/2\rfloor} \prod_{j=1}^{p-1} f (u,z_{\pi_j}).
\end{equation}
We find that the eigenvalues of the Hamiltonian are given by
\begin{equation}
\Lambda =L+ \sum_{i=1}^{2n} (z_i^2+z_i^{-2}-2),
\end{equation}
where the numbers $z_i$ and $u_k$ satisfy the following equations:
\begin{align}
z_j^L & = \pm{\rm i}^{-L/2} \prod_{k=1}^m \frac{u_k-(z_j-1/z_j)^2}{u_k+(z_j-1/z_j)^2},\qquad j=1,\ldots,2n\\
1 &= \prod_{j=1}^{2n} \frac{u_k-(z_j-1/z_j)^2}{u_k+(z_j-1/z_j)^2},\qquad k=1,\ldots,m.
\end{align}
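These coupled equations lend themselves to standard numerical root finding. The sketch below (Python; the encoding and all names are ours) packages the residuals of the nested system and checks that the free-fermion point $u_k=0$ annihilates them; after splitting into real and imaginary parts, the same function could be handed to a library root finder (e.g.\ scipy.optimize.root) to reach solutions with $u_k\neq 0$:
\begin{verbatim}
import numpy as np

def bethe_residuals(z, u, L, s=+1):
    """2n + m complex residuals; all zero for a Bethe solution."""
    z, u = np.asarray(z, dtype=complex), np.asarray(u, dtype=complex)
    w = (z - 1 / z)**2
    ratio = (u[:, None] - w[None, :]) / (u[:, None] + w[None, :])  # (m, 2n)
    res_z = z**L - s * 1j**(-L / 2) * ratio.prod(axis=0)
    res_u = ratio.prod(axis=1) - 1.0
    return np.concatenate([res_z, res_u])

# check at the free-fermion point (one odd wall, m = 1, u_1 = 0):
L, n = 8, 1
rhs = -1j**(-L / 2)
z0 = np.exp(1j * (np.angle(rhs) + 2 * np.pi * np.arange(2 * n)) / L)
print(np.max(np.abs(bethe_residuals(z0, [0.0], L))))   # ~ 1e-15
\end{verbatim}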
\section{Conclusion}
We have introduced a new supersymmetric lattice chain in which fermion number conservation is violated. The model turns out to be integrable, and we have given a detailed derivation of the equations governing its spectrum using the coordinate Bethe ansatz.
The energy spectrum is highly degenerate: all states with a finite density have an extensive degeneracy. This degeneracy is explained by the identification of several symmetry operators, but most significantly by the possibility of creating, at each level, modes that do not cost any energy. These modes are analogous to Cooper pairs in BCS theory, and our model contains a direct realisation of them, which can be identified explicitly in the Bethe equations.
The class of finite solutions to the Bethe ansatz does not provide all eigenvectors. We give circumstantial evidence that all eigenvectors are obtained by the application of the symmetry operators to Bethe vectors. We furthermore find that the energy gap to the first excited state scales as $1/L^2$, where $L$ is the system size, which is a signature of classical diffusion.
\section*{Acknowledgment}
We are grateful for financial support from the Australian Research Council (ARC), the ARC Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS), and the Koninklijke Nederlandse Academie voor de Wetenschap (KNAW). JdG thanks the Galileo Galilei Institute in Florence for its hospitality while part of this work was finalised. We further warmly thank Lisa Huijse, Jon Links and Kareljan Schoutens for discussions.
\section{Introduction}
In Part I, \cite{part1} we derived a quantum generalization of classical transition-state theory (TST), which corresponds to the $t \to 0_+$\ limit of a new form of quantum flux-side time-correlation function. This function uses a ring-polymer \cite{chandler} dividing surface, which is invariant under cyclic permutation of the polymer beads, and thus becomes invariant to imaginary-time translation in the infinite-bead limit. The resulting quantum TST appears to be unique, in the sense that the $t \to 0_+$\ limit of any other known form of flux-side time-correlation function \cite{part1,MST,mill,pollack} gives either incorrect quantum statistics, or zero. Remarkably, this quantum TST is {\em identical} to ring-polymer molecular dynamics (RPMD) TST, \cite{jor} and thus validates a large number of recent RPMD rate calculations,\cite{rates,refined,azzouz,bimolec,ch4,mustuff,anrev,yury,tommy1,tommy2,tommy3,stecher,guo} as well as the earlier-developed `quantum TST method'\cite{gillan1,gillan2,centroid1,centroid2,schwieters,ides} (which is RPMD-TST in the special case of a centroid dividing surface,\cite{jor} and which, to avoid confusion,
we will refer to here as `centroid-TST'\cite{cent}).
There are a variety of other methods for estimating the quantum rate based on short-time\cite{pollack,scivr1,scivr2,scivr3,QI1,QI2} or
semiclassical\cite{billact,billhandy,stanton,bill,cole,benders,jonss,spanish,kastner1,kastner2,equiv} dynamics. What is different about quantum TST is that it corresponds to the instantaneous $t\rightarrow 0_+$ quantum flux through a dividing surface.
Classical TST corresponds to the analogous $t \to 0_+$\ classical flux, which is well known to give the exact (classical) rate if there is no recrossing of the dividing surface;\cite{green,daan} in practice, there is always some such recrossing, and thus classical TST gives a {\em good approximation} to the exact (classical) rate for systems in
which the amount of recrossing is small, namely direct reactions. The purpose of this article is to derive the analogous result for quantum TST (i.e.\ RPMD-TST), to show
that it gives the exact quantum rate if there is no recrossing (by the exact quantum dynamics\cite{nospring}), and thus that it gives a good
approximation to the exact quantum rate for direct reactions.
To clarify the work ahead, we summarize two important differences between classical and quantum TST. First, classical TST gives a strict upper bound to the corresponding exact rate, but quantum TST does not, since real-time coherences may increase the quantum flux upon recrossing.\cite{part1} Quantum TST breaks down if such coherences are large; one then has no choice but to attempt to model the real-time quantum dynamics. However, in many systems (especially in the condensed phase), real-time quantum coherence has a negligible effect on the rate. In such systems, quantum TST gives a {\em good approximation to} an upper bound to the exact quantum rate. This becomes a strict upper bound only in the high-temperature limit, where classical TST is recovered as a special limiting case.
Second, when discussing recrossing in classical TST, one has only to consider whether trajectories initiated on the dividing surface recross that surface. In quantum TST, the time-evolution operator is applied to a series of $N$ initial positions, corresponding to the positions of the polymer beads. A consequence of this, as we discuss below, is that one needs to consider, not just recrossing (by the exact quantum dynamics) of the ring-polymer dividing surface,
but also of surfaces orthogonal to it in the ($N\!-\!1$)-dimensional space describing fluctuations in the polymer-bead positions along the reaction coordinate.
A major task of this article will be to show that the recrossing of these surfaces (by the exact quantum dynamics) causes the long-time limit of the ring-polymer flux-side time-correlation function to differ from the exact quantum rate. It then follows that the RPMD-TST rate is equal to the exact quantum rate if there is neither recrossing of the ring-polymer dividing surface, nor of any of these $N\!-\!1$ orthogonal surfaces.
We will use quantum scattering theory to derive these results, although we emphasise that they apply also in condensed phases (where RPMD has proved particularly groundbreaking \cite{azzouz,anrev,yury,tommy1,tommy2,tommy3}). The scattering theory is employed merely as a derivational tool, exploiting the property that the flux-side plateau in a scattering system extends to infinite time, which makes derivation of the rate straightforward. The results thus derived can be applied in the condensed phase, subject to the usual caveat of there being a separation in timescales between barrier-crossing and equilibration. \cite{isomer,linres} We have relegated most of the scattering theory to Appendices, in the hope that the outline of the derivation can be followed in the main body of the text.
The article is structured as follows: After summarizing the main findings of Part I in Sec.~II, we introduce in Sec.~III a hybrid flux-side time-correlation function, which correlates flux through the ring-polymer dividing surface with the Miller-Schwarz-Tromp\cite{MST} side function, and which gives the exact quantum rate in the limit $t\rightarrow\infty$. We describe the $N$-dimensional integral over momenta obtained in this limit by an $N$-dimensional hypercube, and note that the $t\rightarrow\infty$ limits of the ring-polymer and hybrid flux-side time-correlation functions cut out different volumes from the hypercube, thus explaining why the former does not in general give the exact quantum rate. In Sec.~IV we show that the only parts of the integrand that cause this difference are
a series of Dirac $\delta$-function spikes running through the hypercube. In Sec.~V we show that these spikes disappear if there is no recrossing (by the exact quantum dynamics\cite{nospring}) in the ($N\!-\!1$)-dimensional space orthogonal to the dividing surface (mentioned above). It then follows that the RPMD-TST rate is equal to the exact rate if there is also no recrossing of the dividing surface itself. In Sec.~VI we explain how these results (which were derived in one dimension) generalize to multi-dimensions. Section VII concludes the article.
\label{sec:intro}
\section{Summary of Part I}
Here we summarize the main results of Part I. To simplify the algebra, we focus on a one-dimensional scattering system with hamiltonian ${\hat H}$, potential $V(x)$ and mass $m$. However, the results generalize immediately
to multi-dimensional systems (see Sec.~VI) and to the condensed phase (see comments in the Introduction).
The ring-polymer flux-side time-correlation function, introduced in Part I, is
\begin{align}
C_{\rm fs}^{[N]}(t)=&\int\! d{\bf q}\, \int\! d{\bf z}\,\int\! d{\bf \Delta}\,{\cal \hat F}[f({\bf q})]h[f({\bf z})]\nonumber\\
&\times\prod_{i=1}^{N}\expect{q_{i-1}-\Delta_{i-1}/2|e^{-\beta_N{\hat H}}|q_i+\Delta_i/2}
\nonumber\\
&\quad\times \expect{q_i+\Delta_i/2|e^{i{\hat H}t/\hbar}|z_i} \nonumber \\
& \quad \times \expect{z_i|e^{-i{\hat H}t/\hbar}|q_i-\Delta_i/2}
\label{utter}
\end{align}
where $N$ is the number of polymer beads, $\beta_N=\beta/N$, with $\beta=1/k_{\rm B}T$, and
${\bf q}\equiv\{q_1,\dots,q_N\}$, with ${\bf z}$ and $\Delta$ similarly defined. The function $f({\bf q})$ is the ring-polymer dividing surface, which is invariant under cyclic permutations of the polymer beads (i.e.\ of the individual $q_i$), and thus becomes invariant to imaginary-time translation in the limit $N\rightarrow\infty$. The operator ${\cal \hat F}[f({\bf q})]$ gives the flux perpendicular to $f({\bf q})$, and is given by
\begin{align}
{\cal \hat F}[f({\bf q})] &= {1\over 2m}\sum_{i=1}^N\left\{{\hat p_i}{\partial f({\bf q})\over\partial q_i} \delta[f({\bf q})]+ \delta[f({\bf q})]{\partial f({\bf q})\over\partial q_i}{\hat p_i}\right\}
\label{eq:superflux}
\end{align}
Note that we employ here a convention introduced in Part I, that the first term inside the curly brackets is inserted between $e^{-\beta_N \hat H}\ket{q_i+\Delta_i/2}$ and $\bra{q_i+\Delta_i/2}e^{i{\hat H}t/\hbar}$ in \eqn{utter},
and the second term between $e^{-i{\hat H}t/\hbar}\ket{q_i+\Delta_i/2}$ and $\bra{q_i+\Delta_i/2}e^{-\beta_N\hat H}$. This is done to emphasise the form of $C_{\rm fs}^{[N]}(t)$; [\eqn{utter} is written out in full in Part I].
We can regard $C_{\rm fs}^{[N]}(t)$ as a generalized Kubo-transformed time-correlation function, since it
correlates an operator (in this case ${\cal \hat F}[f({\bf q})]$) on the (imaginary-time) Feynman paths at $t=0$ with another operator (in this case $h[f({\bf z})])$ at some later time $t$, and would reduce to a standard Kubo-transformed function if these operators were replaced by linear functions of position or momentum operators. The advantage of $C_{\rm fs}^{[N]}(t)$ is that it allows both the flux and the side dividing surface to be made the {\em same} function of ring-polymer space (i.e. $f$), which is what makes $C_{\rm fs}^{[N]}(t)$ non-zero in the limit $t\rightarrow\infty$. One can show\cite{part1} that the invariance of $f({\bf q})$ to imaginary time-translation in the limit $N\rightarrow\infty$ ensures that $C_{\rm fs}^{[N]}(t)$ is positive-definite in the
limits ${t\rightarrow 0_+}$ and ${N\rightarrow\infty}$. This allows us to define the quantum TST rate
\begin{align}
k_{Q}^\ddag(\beta)Q_{\rm r}(\beta)=\lim_{t\rightarrow 0_+}\lim_{N\rightarrow\infty}C_{\rm fs}^{[N]}(t)
\end{align}
where
\begin{align}
k_{Q}^\ddag(\beta)Q_{\rm r}(\beta)=&\lim_{N\rightarrow\infty}{1\over (2\pi\hbar)^N}\int\! d{\bf q}\, \int\! dP_0\, \delta[f({\bf q})] \nonumber \\
&\times \sqrt{B_N({\bf q})}\frac{P_0}{m}h\!\left(P_0\right)\sqrt{2\pi\beta_N\hbar^2\over m} \nonumber\\
& \times e^{-P_0^2\beta_N/2m}\prod_{i=1}^{N}\expect{q_{i-1}|e^{-\beta_N{\hat H}}|q_i}\label{bog}
\end{align}
Comparison with refs.~\onlinecite{jor,rates,refined} shows that $k_{Q}^\ddag(\beta)$ is {\em identical} to the RPMD-TST rate. The terms `quantum TST' and `RPMD-TST' are therefore equivalent (and will be used interchangeably throughout the article).
For quantum TST to be applicable, one must be able to assume that real-time coherences have only a small effect on the rate. It then follows that (a good approximation to) the optimal dividing surface $f({\bf q})$ is the one that maximises the free energy of the ring-polymer ensemble. If the reaction barrier is reasonably symmetric,\cite{vlow} or if it is asymmetric but the temperature is too hot for deep tunnelling, then a good choice of dividing surface is
\begin{align}
f({\bf q}) = {\overline q}_0-q^\ddag
\end{align}
where
\begin{align}
{\overline q}_0 = {1\over N}\sum_{i=1}^Nq_i
\end{align}
is the centroid.
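For concreteness, here is a minimal sketch (Python; names are ours) of this dividing surface and of its gradient with respect to the bead positions, the ingredients required by the flux operator of \eqn{eq:superflux}; the cyclic invariance is manifest, since every bead enters with the same weight $1/N$:
\begin{verbatim}
import numpy as np

def f_centroid(q, q_dagger):
    """f(q) = qbar_0 - q_dagger for a vector q of N bead positions."""
    return np.mean(q) - q_dagger

def grad_f_centroid(q):
    """df/dq_i = 1/N for every bead i."""
    return np.full(len(q), 1.0 / len(q))

q = np.array([0.3, -0.1, 0.2, 0.1])
print(f_centroid(q, q_dagger=0.0), grad_f_centroid(q))
\end{verbatim}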
(This special case of RPMD-TST was introduced earlier\cite{gillan1,gillan2,centroid1,centroid2,schwieters} and referred to as `quantum TST'; to avoid confusion we refer to it here as `centroid-TST'\cite{cent}.) If the barrier is asymmetric, and the temperature is below the cross-over to deep tunnelling, then a more complicated dividing surface should be used which allows the polymer to stretch.\cite{jor} As mentioned above, $f({\bf q})$ must be invariant under cyclic permutation of the beads so that it becomes invariant to imaginary time-translation in the limit $N\rightarrow\infty$, and thus gives positive-definite quantum statistics.
It is assumed above, and was stated without proof in Part I, that the RPMD-TST rate gives the exact quantum rate in the absence of recrossing, and is thus a good approximation to the exact rate if the amount of recrossing is small. The remainder of this article is devoted to deriving this result.
\section{Long-time limits}
\subsection{Hybrid flux-side time-correlation function}
To analyze the $t\rightarrow\infty$ limit of $C_{\rm fs}^{[N]}(t)$, we will find it convenient to consider the $t\rightarrow\infty$ limit of the closely related {\em hybrid} flux-side time-correlation function:
\begin{align}
{\overline C}_{\rm fs}^{[N]}(t)= & \int\! d{\bf q}\, \int\! d{\bf z}\,\int\! d{\bf \Delta}\,{\cal \hat F}[f({\bf q})]h(z_1-q^\ddag)\nonumber\\
& \times \prod_{i=1}^{N}\expect{q_{i-1}-\Delta_{i-1}/2|e^{-\beta_N{\hat H}}|q_i+\Delta_i/2}
\nonumber\\
& \quad\times \expect{q_i+\Delta_i/2|e^{i{\hat H}t/\hbar}|z_i} \nonumber \\
& \quad\times \expect{z_i|e^{-i{\hat H}t/\hbar}|q_i-\Delta_i/2}
\label{gruel}
\end{align}
Note that we could equivalently have inserted any one of the other $z_i$ into the side-function, and also that we could simplify this expression by
collapsing the identities $\int\! dz_i\, e^{i{\hat H}t/\hbar}\ket{z_i}
\bra{z_i}e^{-i{\hat H}t/\hbar}$, $ i\ne 1$ [but we have not done so in order to emphasise the relation with ${ C}_{\rm fs}^{[N]}(t)$].
The function ${\overline C}_{\rm fs}^{[N]}(t)$ does not give a quantum TST, except in the special case that $N=1$ and $f({\bf q})=q_1$. In this case,
${\overline C}_{\rm fs}^{[N]}(t)$ is identical to $C_{\rm fs}^{[1]}(t)$, whose $t \to 0_+$\ limit was shown in Part I to be identical to
the quantum TST introduced on heuristic grounds by Wigner in 1932.\cite{wiggy} For $N>1$,
the flux and side dividing surfaces in ${\overline C}_{\rm fs}^{[N]}(t)$ are different functions of ring-polymer space, with the result that ${\overline C}_{\rm fs}^{[N]}(t)$ tends smoothly to zero in the limit $t \to 0_+$. \cite{part1}
By taking the $t\to\infty$ limit of the equivalent {\em side-flux}
time-correlation function ${\overline C}_{\rm sf}^{[N]}(t)$, we show in Appendix A that
\begin{align}
k_{Q}(\beta)Q_{\rm r}(\beta)=\lim_{t\rightarrow\infty}{\overline C}_{\rm fs}^{[N]}(t)
\label{thicky}
\end{align}
where $k_Q(\beta)$ is the exact quantum rate, and this expression holds for all $N\ge 1$. For $N=1$, we have thus proved that the flux-side time-correlation function
that gives the Wigner form of quantum TST (see above) also gives the exact rate in the limit $t\to\infty$. \cite{23} For $N>1$, which is our main concern here, ${\overline C}_{\rm fs}^{[N]}(t)$ has the same limits as the Miller-Schwarz-Tromp\cite{MST} flux-side time-correlation function, tending smoothly to zero as $t \to 0_+$, and giving the exact quantum rate as $t\to \infty$. We can also evaluate the $t\to \infty$ limit of ${\overline C}_{\rm fs}^{[N]}(t)$ directly [i.e.\ not via ${\overline C}_{\rm sf}^{[N]}(t)$]. We apply
first the relation
\begin{align}
\lim_{t\to\infty} &\int_{-\infty}^\infty \! dz\, \expect{x|e^{i\hat K t/\hbar}|z}h(z-q^\ddag)\expect{z|e^{-i\hat K t/\hbar}|y}=\nonumber\\
&\int_{-\infty}^\infty \! dp\,\expect{\!x|p\!}h(p)\expect{\!p|y\!}\label{dragons}
\end{align}
where ${\hat K}$ is the kinetic energy operator and $\expect{x|p} = (2\pi\hbar)^{-1/2}\exp{(ipx)}$;
this converts \eqn{gruel} into a form that involves applications of the M{\o }ller operator\cite{taylor}
\begin{align}
\hat\Omega_-\equiv\lim_{t\to\infty}e^{i\hat H t/\hbar}e^{-i\hat K t/\hbar}
\end{align}
onto momentum states $\ket{p_i}$. We then use the relation
\begin{align}
\hat\Omega_-\ket{p}= \ket{\phi^-_{p}}
\end{align}
where $\ket{\phi^-_{p}}$ is the (reactive) scattering wave function with outgoing boundary conditions, \cite{bc} to obtain
\begin{align}
\lim_{t\rightarrow \infty} {\overline C}_{\rm fs}^{[N]}(t)=\int\! d{\bf p}\,A_N({\bf p})h(p_1)
\label{eq:longtana}
\end{align}
with
\begin{align}
A_N({\bf p})=\int\! d{\bf q}\, &\int\! d{\bf \Delta}\,{\cal \hat F}[f({\bf q})]\nonumber\\
\times&\prod_{i=1}^{N}\expect{q_{i-1}-\Delta_{i-1}/2|e^{-\beta_N{\hat H}}|q_i+\Delta_i/2}
\nonumber\\
\quad\times &\expect{q_i+\Delta_i/2|\phi^-_{p_i}}
\expect{\phi^-_{p_i}|q_i-\Delta_i/2}\label{pete}
\end{align}
\subsection{Representation of the ring-polymer momentum integral}
To analyze the properties of \eqn{eq:longtana} (and of \eqn{boodles} given below), we will find it helpful to
represent the space occupied by the integrand as an $N$-dimensional hypercube,\cite{hyper} whose edges are the axes
$-p_{\rm max}<p_i<p_{\rm max}$, $i=1\dots N$, in the limit $p_{\rm max}\rightarrow\infty$. We assume no familiarity with the geometry of hypercubes, and in fact use this terminology mainly to indicate that once a property of $A_N({\bf p})$ has been derived for $N=3$ (where the hypercube is simply a cube and thus easily visualised as in Fig.~1) it generalizes straightforwardly to higher $N$.
\newlength{\figwidths}
\setlength{\figwidths}{0.45\columnwidth}
\begin{figure}[tb]
\centering
\resizebox{\columnwidth}{!} {\includegraphics[angle=270]{fig1_3.pdf}}
\caption{Representation of the momentum integrals in \eqnn{eq:longtana}{boodles} for $N=3$. The axes (a) are positioned such that the origin is at the centre of each of the cubes, which are cut by (b) the centroid dividing surface $h({\overline p}_0)$ (blue), and (c) the dividing surface $h(p_1)$ (blue). The red arrow represents the centroid axis. This picture can be generalized to $N>3$, by replacing the cubes with $N$-dimensional hypercubes.}
\end{figure}
The only formal properties of hypercubes that we need are, first that a hypercube has $2^N$ vertices, second that one can represent the hypercube by constructing a graph showing the connections between its vertices, and third that the graph for a hypercube of dimension $N$ can be made by connecting equivalent vertices on the graphs of two hypercubes of dimension $N-1$. Figure 2 illustrates this last point, showing how the graph for a cube ($N=3$) can be made by connecting equivalent vertices on the graphs for two squares ($N=2$). Figure 2 also introduces the (self-evident) notation that we will use to label vertices; e.g. $(-1,1,1)$ refers to the vertex on an $N=3$ hypercube (i.e.\ a cube) located
at $p_1=-p_{\rm max}$, $p_2=p_3=p_{\rm max}$.
\begin{figure}[b]
\resizebox{\columnwidth}{!} {\includegraphics[angle=270]{fig2_3.pdf}}
\caption{Diagram showing how a cube can be built up by connecting the equivalent vertices on two squares. One can similarly build up an $N$-dimensional hypercube by connecting the equivalent vertices on two $(N\!-\!1)$-dimensional hypercubes. This figure also illustrates the notation used in the text to label the vertices of a hypercube.}
\end{figure}
These properties allow one to build up a hypercube by adding together its {\em subcubes} in a recursive sequence. By subcube we mean that each $p_i$ is confined to either the positive or negative axis; there are therefore
$2^N$ subcubes, each corresponding to a different vertex of the hypercube (so we can label the subcubes using the vertex notation introduced above). Figure 3 shows how one can build up an $N=3$ hypercube (i.e.\ a cube) by adding its subcubes together recursively, joining first two individual subcubes along a line, then joining two lines
of subcubes in the form of a square, and finally joining two squares of subcubes to give the entire cube. The analogous sequence can be used to build up a hypercube of any dimension $N$ from its subcubes, and will be useful in Sec.~IV.B.
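The bookkeeping behind this recursion is elementary to automate. The following sketch (Python; the encoding is ours) enumerates the $2^N$ subcubes in the sign-vector notation introduced above, classifies them as even or odd by the number of negative axes (the notion used in Sec.~IV.A), and verifies that every edge of the hypercube graph joins subcubes of opposite parity, which is what drives the pairwise cancellations of Sec.~IV.B:
\begin{verbatim}
from itertools import product

def subcubes(N):
    return list(product((1, -1), repeat=N))

def parity(v):                 # 0 = even subcube, 1 = odd subcube
    return sum(1 for s in v if s < 0) % 2

def adjacent(v, w):            # differ in exactly one axis
    return sum(a != b for a, b in zip(v, w)) == 1

N = 3
edges = [(v, w) for v in subcubes(N) for w in subcubes(N)
         if v < w and adjacent(v, w)]
assert all(parity(v) != parity(w) for v, w in edges)
print(len(edges))              # N * 2^(N-1) = 12 edges for N = 3
\end{verbatim}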
\begin{figure}[tb]
\resizebox{.8\columnwidth}{!} {\includegraphics{fig3_1.pdf}}
\caption{Diagram showing how a cube can be built up recursively in three steps from its eight subcubes. One can similarly build up an $N$-dimensional hypercube in $N$ steps from its $2^N$ subcubes.}
\end{figure}
We now define the energies
\begin{align}
E_i\equiv E^-(p_i)&={p_i^2\over 2m}+V_{\rm prod}&p_i>0\nonumber\\
&={{p}_i^2\over 2m}+V_{\rm reac}&p_i<0\label{nofood}
\end{align}
and introduce the notation ${\widetilde p}_i$, such that
\begin{align}
{\widetilde p}_i&=-\sqrt{p_i^2+2m(V_{\rm prod}-V_{\rm reac})} &p_i>0\nonumber\\
{\widetilde p}_i&=+\sqrt{p_i^2+2m(V_{\rm reac}-V_{\rm prod})} &p_i<0\label{mister}
\end{align}
where $V_{\rm reac}$ and $V_{\rm prod}$ are the asymptotes of the potential $V(x)$ in the reactant ($x\to-\infty$) and product ($x\to\infty$) regions; i.e.\ the tilde has the effect of converting a product momentum to the reactant momentum corresponding to the same energy $E_i$, and vice versa. Note that we will not need to interconvert between the reactant and product momenta if one or other of them is imaginary, and hence the square roots in \eqn{mister} are always real.
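In code, the energy and tilde maps of \eqnn{nofood}{mister} read as follows (a Python sketch with names of our own; the parameters are chosen so that the square roots are real, in line with the remark above):
\begin{verbatim}
import numpy as np

def energy(p, m, V_reac, V_prod):
    return p**2 / (2 * m) + (V_prod if p > 0 else V_reac)

def tilde(p, m, V_reac, V_prod):
    if p > 0:   # product -> reactant momentum at the same energy
        return -np.sqrt(p**2 + 2 * m * (V_prod - V_reac))
    else:       # reactant -> product momentum at the same energy
        return +np.sqrt(p**2 + 2 * m * (V_reac - V_prod))

m, V_reac, V_prod = 1.0, 0.0, 0.2
p = -1.5
pt = tilde(p, m, V_reac, V_prod)
# the map preserves the energy E_i and is an involution:
assert np.isclose(energy(p, m, V_reac, V_prod),
                  energy(pt, m, V_reac, V_prod))
assert np.isclose(tilde(pt, m, V_reac, V_prod), p)
\end{verbatim}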
For a symmetric barrier, it is clear that ${\widetilde p}_i=-p_i$, and from this it is easy to show that
\begin{align}
A_N({\widetilde{\bf p}}) = - A_N({\bf p}) \ \ \ \ \ \ \text{for symmetric barriers}
\label{goody}
\end{align}
where ${\widetilde{\bf p}}\equiv({\widetilde p}_1,{\widetilde p}_2,\dots,{\widetilde p}_N)$; i.e.\
$ A_N({{\bf p}})$ is antisymmetric with respect to inversion through the origin.
Clearly this antisymmetry ensures that the integration of $A_N({\bf p})$ over the entire hypercube
(i.e.\ with the side function omitted) gives zero. This integral is also zero for an asymmetric barrier, but there is then no simple cancellation of $A_N({{\bf p}})$ with $A_N({\widetilde{\bf p}})$.
Finally, we note that $A_N({{\bf p}})$ is symmetric with respect to cyclic permutations of the $p_i$, and thus has an $N$-fold axis of rotational symmetry around the diagonal of the hypercube on which all $p_i$ are equal. We will refer to this diagonal as the `centroid axis', since displacement along this axis measures the displacement of the momentum centroid ${\overline p}_0=\sum_{i=1}^Np_i/N$.
\subsection{Ring-polymer flux-side time-correlation function}
It is straightforward to modify the above derivation to obtain the $t\to \infty$ limit of the ring-polymer flux-side time-correlation function
${ C}_{\rm fs}^{[N]}(t)$. The only change necessary is to replace the side function $h(z_1)$ by $h[f({\bf z})]$, which gives
\begin{align}
\lim_{t\rightarrow \infty} { C}_{\rm fs}^{[N]}(t)=\int\! d{\bf p}\,A_N({\bf p})h[{\overline{f}}(\bf p)]
\label{boodles}
\end{align}
where $ A_N({{\bf p}})$ is defined in \eqn{pete}, and ${\overline{f}}(\bf p)$ is defined by
\begin{align}
\lim_{t\rightarrow \infty} h[f({\bf p}t/m)]=h[{\overline f}({\bf p})]\label{lunchtime}
\end{align}
i.e.\ ${\overline{f}}(\bf p)$ is the limit of ${{f}}(\bf p)$ at very large distances. In the special case that
${{f}}({\bf q})={\overline q}_0$, we obtain ${\overline{f}}({\bf p})={\overline p}_0={{f}}({\bf p})$;
but in general ${\overline{f}}({\bf p})\ne {{f}}(\bf p)$. A time-independent limit of \eqn{lunchtime}
is guaranteed to exist, since otherwise ${{f}}({\bf q})$ would not satisfy the requirements of a dividing surface.
Whatever the choice of ${{f}}({\bf q})$, it is clear that the (permutationally invariant) $h[{\overline{f}}(\bf p)]$ encloses a different part of the hypercube than does $h(p_1)$. For example, if ${{f}}({\bf q})={\overline q}_0$ and $N=3$, then $h[{\overline{f}}({\bf p})]=h({\overline p}_0)$ cuts out the half of the cube on the positive side of the hexagonal cross-section shown in Fig.~1b, whereas $h(p_1)$ cuts off the top half of the cube on the $p_1$ axis (Fig.~1c). Thus we cannot in general expect the $t\to \infty$ limits of ${ C}_{\rm fs}^{[N]}(t)$ and ${\overline C}_{\rm fs}^{[N]}(t)$ to be the same, unless $A_N({\bf p})$ satisfies some special properties in addition to those just mentioned. We will show in the next two Sections that $A_N({\bf p})$ does satisfy such properties if there is no recrossing of any surface orthogonal to ${{f}}({\bf q})$ in ring-polymer space.
\section{Ring-polymer momentum integrals}
\subsection{Structure of $A_N({\bf p})$}
One can show using scattering theory (see Appendix B) that $A_N({\bf p})$ consists of the terms
\begin{align}
A_N({\bf p}) = a_N({\bf p})\left[\prod_{i=1}^{N-1}\delta(E_{i+1}-E_i)\right]+r_N({\bf p})
\label{piggy}
\end{align}
where $a_N({\bf p})$ is some function of ${\bf p}$, and $r_N({\bf p})$ satisfies
\begin{align}
r_N(p_1,\dots,{\widetilde p}_j,\dots,p_N)=-\left|{\widetilde p}_j\over p_j\right| r_N(p_1,\dots,p_j,\dots,p_N)
\label{flap}
\end{align}
(where the dots indicate that all the $p_i$ except $p_j$ take the same values on both sides of the equation). \Eqn{flap} is equivalent to stating that $r_N({\bf p})$ alternates in sign between {\em adjacent subcubes} (i.e.~subcubes that differ in respect of just one axis), or that $r_N({\bf p})$ takes opposite signs in {\em even} and {\em odd} subcubes (where a subcube is defined to be even/odd if it has an even/odd number of axes for which $p_i<0$). Note that $r_N({\bf p})=0$ if any ${\widetilde p}_i$, $i=1\dots N$, is imaginary (see Appendix B).
The first term in \eqn{piggy} describes a set of $2^N$ $\delta$-function spikes running along all the lines in the hypercube for which the energies $E_i$, $i=1\dots N$, are equal.
There is one such line in every subcube. Two of these lines point in positive and negative directions along the centroid axis (i.e.\ the diagonal of the hypercube). The other $2^N-2$ {\em off-diagonal} spikes radiate out from this axis. If the barrier is symmetric, then each off-diagonal
spike is a straight line joining the centre of the hypercube to one of its vertices. If the barrier is asymmetric, the off-diagonal spikes are hyperbolae [on account of \eqn{mister}]. The off-diagonal spikes are distributed with $N$-fold rotational symmetry about the centroid axis because of the invariance of $A_N({\bf p})$ under cyclic permutations; e.g.\ for $N=3$, the spikes $(-1,1,1)$, $(1,-1,1)$, $(1,1,-1)$ (where this notation identifies each spike by the subcube that it runs through) rotate into one another under cyclic permutation of the beads; see Fig.~4.
\begin{figure}[tb]
\resizebox{.6\columnwidth}{!} {\includegraphics{fig4.pdf}}
\caption{Plot of the off-diagonal spikes in $A_N({\bf p})$ for $N=3$, obtained by looking down the centroid axis (the red arrow in Fig.~1b).}
\end{figure}
\subsection{Cancellation of the term $r_N({\bf p})$}
We now show that $r_N({\bf p})$ in \eqn{piggy} contributes zero to ${\overline C}_{\rm fs}^{[N]}(t)$ and ${ C}_{\rm fs}^{[N]}(t)$ in the limits $t,N\to\infty$, and may therefore be ignored when discussing whether ${ C}_{\rm fs}^{[N]}(t)$ gives the exact quantum rate in these limits. This property is easy to show for a symmetric barrier, for which \eqnn{goody}{flap} imply that $r_N({\bf p})$ is zero for all even $N$, and thus that the contribution to the integral from $r_N({\bf p})$ tends to zero in the limit $N \to \infty$. For an asymmetric barrier, $r_N({\bf p})$ is in general non-zero. However, we now show that the alternation in sign between adjacent subcubes [\eqn{flap}] causes $r_N({\bf p})$ to cancel out in both ${\overline C}_{\rm fs}^{[N]}(t)$ and ${ C}_{\rm fs}^{[N]}(t)$ in the limits $t,N\to\infty$.
This cancellation is easy to demonstrate for ${\overline C}_{\rm fs}^{[N]}(t)$: one simply notes that the side-function $h(p_1)$ encloses an even number of subcubes, which can be added together in adjacent pairs. For example, if we add together the adjacent subcubes $(1,\dots,1,1)$ and $(1,\dots,1,-1)$ (where the dots indicate that the intervening values of 1 and $-1$ are the same for the two subcubes), we obtain
\begin{align}
&\int_{0}^\infty\! dp_1\dots\int_{0}^\infty\! dp_{N-1}\int_{0}^\infty\! dp_{N}\,r_N({\bf p})h(p_1)\nonumber\\+&\int_{0}^\infty\! dp_1\dots\int_{0}^\infty\! dp_{N-1}\int_{-\infty}^0\! dp_{N}\,r_N({\bf p})h(p_1)\label{yokel}
\end{align}
(where the dots indicate that the integration ranges for $p_i$, $i=2\dots N\!-\!2$ are the same in both terms).
We can change the limits on the last integral to $0\rightarrow\infty$ by transforming the integration variable from $p_N$ to ${\widetilde p}_N$, and using the relation $p_i\,dp_i={\widetilde p}_i\,d{\widetilde p}_i$ [see \eqn{mister}]. \Eqn{flap} then ensures that the two terms in \eqn{yokel} cancel out. Hence the contribution from $r_N({\bf p})$ cancels out in the $t \to \infty$\ limit of ${\overline C}_{\rm fs}^{[N]}(t)$ (for any $N>0$).
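This cancellation is also easy to verify numerically. We stress that the sketch below (Python; ours) does not use the true $r_N({\bf p})$ of Appendix B: it uses a separable stand-in, $r(p_1,p_2)=\prod_i p_i\,{\rm e}^{-E_i}$, chosen purely because it satisfies \eqn{flap} exactly and vanishes on closed channels, where ${\widetilde p}_i$ would be imaginary:
\begin{verbatim}
import numpy as np

m, V_reac, V_prod = 1.0, 0.0, 0.3

def E(p):
    return p**2 / (2 * m) + np.where(p > 0, V_prod, V_reac)

def factor(p):    # one factor of the toy r(p1, p2); 0 on closed channels
    closed = (p < 0) & (p**2 + 2 * m * (V_reac - V_prod) < 0)
    return np.where(closed, 0.0, p * np.exp(-E(p)))

P, M = 8.0, 20000                    # integration cutoff and grid size
p = (np.arange(M) + 0.5) * P / M     # midpoint grid on (0, P)
w = P / M

I_pp = (np.sum(factor(p)) * w) * (np.sum(factor(p)) * w)   # subcube (1, 1)
I_pm = (np.sum(factor(p)) * w) * (np.sum(factor(-p)) * w)  # subcube (1, -1)
print(I_pp + I_pm)                   # ~ 0 to quadrature accuracy
\end{verbatim}
The sum of the two adjacent-subcube integrals vanishes (up to quadrature error); the closed-channel cutoff is essential, exactly as in the analytic argument.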
Using similar reasoning, we can show that the contribution from $r_N({\bf p})$ to
${C}_{\rm fs}^{[N]}(t)$ cancels out in the limits $t,N\to\infty$. For finite $N$, this cancellation is in general\cite{centy} only partial, because the function $h[{\overline f}({\bf p})]$ encloses different volumes in any two adjacent subcubes. However, one can show that the total mismatch in the volumes enclosed in the even subcubes and the odd subcubes tends rapidly to zero as $N \to \infty$. The trick is to build up the hypercube recursively, by extending to higher $N$ the sequence shown in Fig.~3 for $N=3$. The $j$th step in this sequence can be written
\begin{align}
S(N)=&\int_{-\infty}^\infty\! dp_1\dots\int_{-\infty}^\infty\! dp_{j}\int_{0}^\infty\! dp_{j+1}\dots\int_{0}^\infty\! dp_{N} \nonumber \\
& \times \,r_N({\bf p})h[{\overline{f}}(\bf p)]\nonumber\\
=&\int_{-\infty}^\infty\! dp_1\dots\int_{-\infty}^\infty\! dp_{j-1}\int_{0}^\infty\! dp_{j}\dots\int_{0}^\infty\! dp_{N}\,\nonumber \\
& \times r_N({\bf p})h[{\overline{f}}(\bf p)]\nonumber\\
+&\int_{-\infty}^\infty\! dp_1\dots\int_{-\infty}^\infty\! dp_{j-1}\int_{-\infty}^0\! dp_{j} \nonumber \\
& \times \int_{0}^\infty\! dp_{j+1}\dots\int_{0}^\infty\! dp_{N}\,r_N({\bf p})h[{\overline{f}}(\bf p)]
\label{ladder}
\end{align}
(where the first set of dots in each term indicates that the intervening integration ranges are $-\infty<p_i<\infty$, and the second set that they are $0<p_i<\infty$). Because each subcube in the second term is adjacent to its counterpart in the third term, there is an almost complete cancellation in the $r_N({\bf p})$ terms. All that is left is the residue,
\begin{align}
S(N)
=&\int_{-\infty}^\infty\! dp_1\dots\int_{-\infty}^\infty\! dp_{j-1}\int_{0}^\infty\! dp_{j}\dots\int_{0}^\infty\! dp_{N}\,r_N({\bf p})\nonumber\\
&\times\left\{ h[{\overline{f}}(p_1,\dots,p_j,\dots,p_N)] \nonumber \right.\\
&\qquad\left. - h[{\overline{f}}(p_1,\dots,{\widetilde p}_j,\dots,p_N)]\right\}
\label{residue}
\end{align}
which occupies the volume sandwiched between the two heaviside functions. Appendix C shows that this volume is a thin strip on the order of $N$ times smaller than the volume occupied by $r_N({\bf p})$
in each of the two terms that were added together in \eqn{ladder}. Now, each of these terms was itself the result of a similar addition in the $(j\!-\!1)$th step, which also reduced the volume occupied by $r_N({\bf p})$ by a factor on the order of $N$, and so on. As a result, the volume occupied by $r_N({\bf p})$ after the $N$th (i.e.\ final) step is on the order of $N^N$ times smaller than the volume of a single subcube. The mismatch in volume between the even and odd subcubes thus tends rapidly to zero in the limit $N \to \infty$, with the result that $r_N({\bf p})$ cancels out completely \cite{unless} in ${ C}_{\rm fs}^{[N]}(t)$ in the limits $t,N\to\infty$.
\subsection{Comparison of $\delta$-function spikes}
We have just shown that only the first term in \eqn{piggy} contributes to
${\overline C}_{\rm fs}^{[N]}(t)$ and ${ C}_{\rm fs}^{[N]}(t)$ in the limits $t,N\to\infty$. Any difference between these quantities can thus be accounted for by comparing which spikes are enclosed by the side functions $h(p_1)$ and $h[{\overline f}({\bf p})]$. It is clear that both $h(p_1)$ and $h({\overline p}_0)$ enclose the spike that runs along the centroid axis in a positive direction, and exclude the spike that runs in a negative direction. A little thought shows that this property must hold for any choice of $h[{\overline f}({\bf p})]$ (since the positive spike corresponds to all momenta $p_i$ travelling in the product direction as $t \to \infty$, and vice versa for the negative spike).
Any difference between the $t,N\to\infty$ limits of ${\overline C}_{\rm fs}^{[N]}(t)$ and ${ C}_{\rm fs}^{[N]}(t)$ can therefore be explained in terms of which off-diagonal spikes are enclosed by $h(p_1)$ and $h[{\overline f}({\bf p})]$. These functions will enclose different sets of spikes. For example, for a symmetric barrier, with $N=3$, the function $h({\overline p}_0)$ encloses the off-diagonal spikes $(-1,1,1)$, $(1,-1,1)$ and $(1,1,-1)$, whereas $h(p_1)$ encloses $(1,-1,1)$, $(1,1,-1)$ and $(1,-1,-1)$.
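For a symmetric barrier this bookkeeping can be checked by machine, since each off-diagonal spike then runs along a sign vector ${\bf v}$, i.e.\ ${\bf p}=t\,{\bf v}$ with $t>0$. In the following sketch (Python; our own encoding), $h(p_1)$ encloses the spikes with $v_1>0$ and $h({\overline p}_0)$ those with $\sum_i v_i>0$:
\begin{verbatim}
from itertools import product

N = 3
diag = [(1,) * N, (-1,) * N]
spikes = [v for v in product((1, -1), repeat=N) if v not in diag]

print([v for v in spikes if v[0] > 0])      # enclosed by h(p_1)
print([v for v in spikes if sum(v) > 0])    # enclosed by h(pbar_0)
\end{verbatim}
This reproduces the two sets of three spikes quoted above, which share $(1,-1,1)$ and $(1,1,-1)$ but differ in the third member.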
We have therefore obtained the result that the $t,N\to\infty$ limit of ${ C}_{\rm fs}^{[N]}(t)$ is identical to that of ${\overline C}_{\rm fs}^{[N]}(t)$ (and thus gives the exact quantum rate) {\em if} the contribution from each off-diagonal spike to $A_N({\bf p})$ is individually zero.
We make use of this important result in the next Section.
\section{Effects of recrossing}
The results just obtained show that quantum TST will give the exact quantum rate if two conditions are satisfied. First, there must be no recrossing of the cyclically invariant dividing surface $f({\bf q})$ (by which we mean simply that ${ C}_{\rm fs}^{[N]}(t)$ is time-independent). Second, each of the off-diagonal spikes [in the first term of \eqn{piggy}] must contribute zero to ${ C}_{\rm fs}^{[N]}(t)$ in the long-time limit. We now show that this last condition is satisfied if there is no recrossing of any dividing surface orthogonal to $f({\bf q})$ in ring-polymer space.
\subsection{Orthogonal dividing surfaces}
A dividing surface $g({\bf q})$ orthogonal to $f({\bf q})$ satisfies
\begin{align}
\sum_{i=1}^N {\partial g({\bf q})\over \partial q_i} {\partial f({\bf q})\over \partial q_i} &=0\label{orthy}
\end{align}
When $f({\bf q})={\overline q}_0$, the surface $g({\bf q})$ can be any function of any linear combination of polymer beads orthogonal to ${\overline q}_0$.
For a more general (cyclically permutable) $f({\bf q})$, $g({\bf q})$ will also take this form close to the centroid axis (where, by definition, all degrees of freedom orthogonal to the centroid vanish), and will assume a more general curvilinear form away from this axis.
By no recrossing of $g({\bf q})$, we mean that the time-correlation function
\begin{align}
M_{\rm fs}^{[N]}(t)=&\int\! d{\bf q}\, \int\! d{\bf z}\,\int\! d{\bf \Delta}\,{\cal \hat F}[f({\bf q})]h[g({\bf z})]
\nonumber\\
&\times\prod_{i=1}^{N}\expect{q_{i-1}-\Delta_{i-1}/2|e^{-\beta_N{\hat H}}|q_i+\Delta_i/2}
\nonumber\\
&\qquad\times \expect{q_i+\Delta_i/2|e^{i{\hat H}t/\hbar}|z_i} \nonumber\\
& \qquad\times \expect{z_i|e^{-i{\hat H}t/\hbar}|q_i-\Delta_i/2}
\label{puss}
\end{align}
is time-independent. We know from Part I that the $t \to 0_+$\ limit of $M_{\rm fs}^{[N]}(t)$ is zero, since the flux and side dividing surfaces are different. Hence no recrossing of $g({\bf q})$ implies that $M_{\rm fs}^{[N]}(t)$ is zero for all time $t$, indicating that there is no net passage of flux from the initial distribution on $f({\bf q})$ through the surface $g({\bf q})$. Taking the $t \to \infty$\ limit
(using the same approach as in Sec.~III), we obtain
\begin{align}
\lim_{t\rightarrow \infty} { M}_{\rm fs}^{[N]}(t)&=\int\! d{\bf p}\,A_N({\bf p})h[{\overline{g}}(\bf p)]\nonumber\\
&=0\ \ \ \ \ \ {\text{if no recrossing of }}g({\bf q})
\label{lunghi}
\end{align}
where $A_N({\bf p})$ is defined in \eqn{pete}, and
${\overline g}({\bf p})$ is defined analogously to ${\overline f}({\bf p})$, i.e.
\begin{align}
\lim_{t\rightarrow \infty} h[g({\bf p}t/m)]=h[{\overline g}({\bf p})]
\label{baffi}
\end{align}
In the $N \to \infty$\ limit, the contribution of $r_N({\bf p})$ to ${ M}_{\rm fs}^{[N]}(t)$ cancels out (for the same reason that it cancels out in ${ C}_{\rm fs}^{[N]}(t)$---see Sec.~IV.B). \Eqn{lunghi} is thus equivalent to stating that the total contribution to $A_N({\bf p})$ from the spikes enclosed by $h[{\overline{g}}(\bf p)]$ is zero if there is no recrossing of $g({\bf q})$.
\subsection{Effect of no recrossing orthogonal to $f({\bf q})$}
If there is no recrossing of any ${g}({\bf q})$ orthogonal to $f({\bf q})$, we can use \eqn{lunghi} to generate a set of equations giving constraints on the spikes.
Let us see what effect these constraints have in the simple case that $N=3$ and $f({\bf q})={\overline q}_0$. \cite{cancel} We can construct dividing surfaces $g({\bf q})$ orthogonal to $f({\bf q})$ by taking any function of the normal mode coordinates
\begin{align}
Q_{x}&={1\over\sqrt{6}}\left(2q_1-q_2-q_3\right)\nonumber\\
Q_{y}&={1\over\sqrt{2}}\left(q_2-q_3\right)
\end{align}
Let us take
\begin{align}
g_r({\bf q})&=\sqrt{Q_{x}^2+Q_y^2}-r^\ddag\nonumber\\
g_F({\bf q}) &= F\!\left[\phi(Q_x,Q_y) \right]
\end{align}
where $r^\ddag>0$ specifies the position of surface $g_r({\bf q})$, and $F$ can be chosen to be any smooth function \cite{smooth} of the angle
\begin{align}
\phi(Q_x,Q_y)=\arctan(Q_y/Q_x)
\end{align}
Clearly $g_r({\bf q})$ and $\phi$ are polar coordinates in the plane orthogonal to the centroid axis. If there is no recrossing of
$g_r({\bf q})$ or $g_F({\bf q})$, then \eqn{lunghi} will hold with
\begin{align}
{\overline g}_r({\bf p})&=\lim_{\epsilon\rightarrow 0}\sqrt{P_{x}^2+P_y^2}-\epsilon\nonumber\\
{\overline g}_F({\bf p})&=F\!\left[\phi(P_x,P_y) \right]
\end{align}
in place of ${\overline g}({\bf p})$ [where $(P_x,P_y)$ are the combinations of $p_i$ analogous to $(Q_x,Q_y)$]. Now, ${\overline g}_r({\bf p})$ is a thin cylinder enclosing the centroid axis, and hence this function gives the constraint that the contributions to $A_N({\bf p})$ from the two spikes lying along this axis (in positive and negative directions) cancel out.\cite{symmetric} We are then free to choose $F$ so that $h[{\overline g}_F({\bf p})]$ encloses each off-diagonal spike in turn, since no two off-diagonal spikes pass through the same angle $\phi$ (see Fig.~4). We do not need to worry about the spikes along the centroid axis (which appear as a point at the origin---see Fig.~4), since we have just shown that they cancel out. \Eqn{lunghi} then gives a set of constraints, each of which specifies that the contribution to $A_N({\bf p})$ from one of the spikes is individually zero [if there is no recrossing orthogonal to $f({\bf q})$].
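The geometry is easy to make explicit. The following sketch (Python; names are ours, and we use the two-argument arctangent to resolve the quadrant of $\phi$) evaluates the angle at which each off-diagonal spike of a symmetric barrier with $N=3$ meets the plane of Fig.~4, confirming that no two spikes share the same $\phi$:
\begin{verbatim}
import numpy as np

def phi(p1, p2, p3):
    Px = (2 * p1 - p2 - p3) / np.sqrt(6)
    Py = (p2 - p3) / np.sqrt(2)
    return np.degrees(np.arctan2(Py, Px))

for spike in [(-1, 1, 1), (1, -1, 1), (1, 1, -1)]:
    print(spike, phi(*spike))     # 180, -60, +60 degrees
\end{verbatim}
The three angles are $120^\circ$ apart, as required by the three-fold rotational symmetry about the centroid axis.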
In Appendix D, we show that this result generalizes to any $N$ and to any choice of the cyclically invariant dividing surface
$f({\bf q})$. The $t,N\to\infty$ limit of ${ C}_{\rm fs}^{[N]}(t)$ is therefore equal to the $t \to \infty$\ limit of ${\overline C}_{\rm fs}^{[N]}(t)$ if there is no recrossing orthogonal to $f({\bf q})$. Since the $t \to 0_+$\ limit of ${ C}_{\rm fs}^{[N]}(t)$ is by definition equal to its $t \to \infty$\ limit if there is also no recrossing of $f({\bf q})$, we have therefore derived
the main result of this article: {\em quantum TST (i.e.\ RPMD-TST) gives the exact quantum rate for a one-dimensional system
if there is no recrossing of $f({\bf q})$, nor of any surface orthogonal to it in ring-polymer space}. We will show in Sec.~VI that this result generalises straightforwardly to multi-dimensions.
\subsection{Interpretation}
Quantum TST therefore differs from classical TST in requiring an extra condition to be satisfied if it is to give the exact rate: in addition to no recrossing of the dividing-surface
$f({\bf q})$, there should also be no
recrossing (by the exact quantum dynamics) of surfaces in the ($N\!-\!1$)-dimensional
space orthogonal to $f({\bf q})$. In the limit $t\to 0_+$, this space describes {\em fluctuations} in the positions of the ring-polymer beads.
The extra condition is therefore satisfied automatically in the classical (i.e.\ high temperature) limit, where it is impossible to recross any surface orthogonal to $f({\bf q})$, since the initial distribution of polymer beads is localised at a point and only the projection of the momentum along the centroid axis is non-zero.
For similar reasons, it is also impossible to recross any surface orthogonal to $f({\bf q})={\overline q}_0-q^\ddag$ for a parabolic barrier
at any temperature (at which the parabolic-barrier rate is defined).
As a result, quantum TST gives the exact rate in the classical limit and for a parabolic barrier, provided there is no recrossing of $f({\bf q})$ (which condition is satisfied
for a parabolic barrier when $q^\ddag$ is located at the top of the barrier).
In a real system, there will always be some recrossing of surfaces orthogonal to $f({\bf q})$ on account of the anharmonicity.
However, the amount of such recrossing is zero in the high temperature limit (see above), and will only become significant at temperatures
sufficiently low that the $t \to 0_+$\ distribution of polymer beads is delocalised beyond the parabolic tip of the potential barrier. In practice, this means that
quantum TST (i.e.\ RPMD-TST) will give a good approximation to the exact quantum rate at temperatures above the cross-over to deep-tunnelling (provided the reaction is not dominated by dynamical recrossing or real-time coherence effects). On reducing the temperature below cross-over, the amount of recrossing orthogonal to $f({\bf q})$ will increase, with the result that quantum TST will become progressively less accurate. Previous work on RPMD \cite{jor,rates,refined,bimolec,ch4,anrev}
and related instanton methods \cite{jor,bill,cole,benders,jonss,spanish,kastner1,kastner2,equiv} has shown that this deterioration in accuracy is gradual, with the RPMD-TST rate typically giving a good approximation to the exact quantum rate at temperatures down to half the cross-over temperature and below.
\subsection{Correction terms}
An alternative way of formulating the above is to regard the ${ M}_{\rm fs}^{[N]}(t)$ as a set of correction terms, which can be added to
${ C}_{\rm fs}^{[N]}(t)$ in order to recover the exact quantum rate in the limits $t\to\infty$. The orthogonal
surfaces $g({\bf q})$ should be chosen such that the resulting sum of terms contains the same set of spikes in the $t \to \infty$\ limit as does ${\overline C}_{\rm fs}^{[N]}(t)$. For example, if $N=3$ and $f({\bf q})={\overline q}_0$, we can define two time-correlation functions ${M}_{1}(t)$ and ${M}_{2}(t)$ which use dividing surfaces of the form of $g_F({\bf q})$, with $F$ chosen to enclose, respectively, the spikes $(1,1,-1)$ and $(1,-1,-1)$. The corrected flux-side time-correlation function
\begin{align}
{ C}_{\rm corr}^{[N=3]}(t) = { C}_{\rm fs}^{[N=3]}(t) - {M}_{1}(t) + {M}_{2}(t)
\end{align}
then contains the same spikes in the $t \to \infty$\ limit as ${\overline C}_{\rm fs}^{[N]}(t)$. Since ${M}_{1}(t)$ and ${M}_{2}(t)$ are zero in the limit $t \to 0_+$, it follows that ${ C}_{\rm corr}^{[N=3]}(t)$ interpolates between the RPMD-TST rate in the limit $t \to 0_+$, and the exact quantum rate in the limit $t \to \infty$. \cite{cancel} Clearly ${M}_{1}(t)$ and ${M}_{2}(t)$ will be zero for all values of $t$ if there is no recrossing of surfaces orthogonal to $f({\bf q})$ in ring-polymer space. This treatment generalizes in an obvious way to $N>3$.
An alternative way of stating the result of Sec.~V.B is thus that ${ C}_{\rm fs}^{[N]}(t)$ gives the exact rate in the $t \to \infty$\ limit when added to correction terms which are zero in the absence of recrossing.
\section{Application to multi-dimensional systems}
Here we outline the modifications needed to extend Secs.~III-V to multi-dimensional systems. As in Secs.~III-V,
we make use of quantum scattering theory, but we emphasise that the results obtained here apply also
in the condensed phase, provided there is the usual separation in timescales between barrier-crossing and equilibration.\cite{isomer}
Following Part I, we represent the space of an $F$-dimensional reactive scattering system using cartesian coordinates $q_j$, $j=1\dots F$, and define ring-polymer coordinates ${\bf q}\equiv \{{\bf q}_{1},\dots, {\bf q}_{N}\}$,
where ${\bf q}_i\equiv \{q_{i,1},\dots,q_{i,F}\}$ is the geometry of the $i$th replica of the system.
Analogous generalizations can be made of $\bf z$, $\bf p$, $\bf \Delta$, and so on. We then construct a multi-dimensional
version of $C_{\rm fs}^{[N]}(t)$ by
making the replacements
\begin{align}
\ket{q_i+\Delta_i/2}\rightarrow \ket{q_{i,1}+\Delta_{i,1}/2,\dots ,q_{i,F}+\Delta_{i,F}/2 }\label{tires}
\end{align}
in \eqn{utter}, and integrating over the multi-dimensional coordinates $({\bf q},{\bf z},{\bf \Delta})$. The dividing surface $f({\bf q})$ is now invariant under {\em collective} cyclic permutations of the coordinates ${\bf q}$, and is thus a permutationally invariant function of the replicas ${{\boldsymbol \sigma}}\equiv \{\sigma_1({\bf q}_{1}),\dots,\sigma_N({\bf q}_{N})\}$ of a (classical) reaction coordinate $\sigma(q_1,\dots,q_F)$.
It is straightforward to analyze the $t \to \infty$\ behaviour of $C_{\rm fs}^{[N]}(t)$ by combining the analysis of Secs.~III-V with centre-of-mass-frame scattering theory. All we need to note is that the relative motion of the reactant or product molecules can be described by a one-dimensional scattering coordinate, with all other degrees of freedom being described by channel functions \cite{taylor} (which include the rovibrational states of the scattered molecules, and also specify whether the system is in the reactant or product arrangement). We will denote the momentum of the $i$th replica along the scattering coordinate as $\pi_i$, using the convention that $\pi_i$ is negative in the reactant
arrangement and positive in the product arrangement. Since all other internal degrees of freedom are bound, it follows that
\begin{align}
\lim_{t\rightarrow \infty} h[\sigma_i({\bf p}_it/m)]=h(\pi_i)
\end{align}
This last result allows us to construct a multi-dimensional generalisation of the hybrid function ${\overline C}_{\rm fs}^{[N]}(t)$
by replacing $h[f({\bf q})]$ in $C_{\rm fs}^{[N]}(t)$ by $h[\sigma_1({\bf q}_1)]$ (any other replica index would serve equally well). One can show (by generalizing Appendix~A) that the multi-dimensional
${\overline C}_{\rm fs}^{[N]}(t)$ gives the exact quantum rate in the limit $t \to \infty$. We then take the $t \to \infty$\
limits of ${\overline C}_{\rm fs}^{[N]}(t)$ and $C_{\rm fs}^{[N]}(t)$ by using the scattering relation
\begin{align}
\hat\Omega_-\ket{\pi_i}\ket{v_i}&= \ket{\phi^-_{{\pi}_i,v_i}}
\end{align}
where ${\hat \Omega}_-$ is the (multi-dimensional) M{\o }ller operator,\cite{taylor} $\ket{\pi_i}$ is a momentum eigenstate, $\ket{v_i}$ is a reactant or product channel function, and $\ket{\phi^-_{{\pi}_i,v_i}}$ is a scattering eigenstate satisfying outgoing boundary conditions. As in one-dimension, we obtain integrals over an $N$-dimensional hypercube:
\begin{align}
\lim_{t\rightarrow \infty} {\overline C}_{\rm fs}^{[N]}(t)&=\int\! d{{\boldsymbol \pi}}\,A_N({ {\boldsymbol \pi}})h(\pi_1)\\
\lim_{t\rightarrow \infty} { C}_{\rm fs}^{[N]}(t)&=\int\! d{{\boldsymbol \pi}}\,A_N({ {\boldsymbol \pi}}) h[{\overline{f}}({\boldsymbol \pi})]\
\label{eq:longtnf}
\end{align}
where ${\boldsymbol \pi}\equiv\{\pi_1,\dots,\pi_N\}$, and $A_N({{\boldsymbol \pi}})$ is a generalisation of $A_N({\bf p})$, obtained by making the replacements of \eqn{tires} in
\eqn{pete}, replacing $\ket{\phi^-_{p_i}}$ by $\ket{\phi^-_{{\pi}_i,v_i}} $, and summing over $v_i$. The function ${\overline f}({{\boldsymbol \pi}})$ is a multi-dimensional generalisation of ${\overline f}({\bf p})$, and satisfies
\begin{align}
\lim_{t\rightarrow \infty} h[f({\bf p}t/m)]=h[{\overline f}({{\boldsymbol \pi}})]\label{basingstoke}
\end{align}
(which is equivalent to stating that $f({\bf q})$ separates cleanly the reactants from the products in the limit $t \to \infty$).
The derivation of Appendix B generalizes straightforwardly to multi-dimensions,
with the result that $A_N({{\boldsymbol \pi}})$ has the analogous structure to $A_N({\bf p})$ in \eqn{piggy}. Following Sec.~IV and Appendix C, one can show that only the $\delta$-function spikes [corresponding to the first term in \eqn{piggy}] contribute to ${\overline C}_{\rm fs}^{[N]}(t)$ and $C_{\rm fs}^{[N]}(t)$ in the limits $t,N\to\infty$. There are many more of these spikes in multi-dimensions than in one-dimension, since there is a spike for every possible pair of (open) reactant or product channels. However, it is possible to isolate each off-diagonal spike by constructing angular functions $F$ (see Sec.~V and Appendix D) in the space orthogonal to ${\overline f}({{\boldsymbol \pi}})$. It then follows that each off-diagonal spike in $A_N({{\boldsymbol \pi}})$ contributes zero to $C_{\rm fs}^{[N]}(t)$ in the limits $t,N\to\infty$, if there is no recrossing of surfaces orthogonal to $f({\bf q})$ in the space ${\boldsymbol \sigma}$.
Hence we have obtained the same result in multi-dimensions as in one-dimension: the RPMD-TST rate is equal to the exact quantum rate if there is no recrossing of the dividing surface $f({\bf q})$, nor of any surface orthogonal to it in the $(N\!-\!1)$-dimensional space which describes (in the $t \to 0_+$\ limit) the
fluctuations in the polymer-bead positions along the reaction coordinate $\sigma(q_1,\dots,q_F)$. It is impossible to recross these
orthogonal surfaces in the classical (i.e.\ high-temperature) limit, where RPMD-TST thus reduces to classical TST.
\section{Conclusions}
\label{sec:con}
We have shown that quantum TST (i.e.\ RPMD-TST) is related to the exact quantum rate in the same way that classical TST is related
to the exact classical rate; i.e.\
quantum TST is exact in the absence of recrossing.
Recrossing in quantum TST is more complex than in classical TST, since, in addition to recrossing of the ring-polymer dividing surface, one must also consider recrossing through surfaces that describe fluctuations in
the positions of the polymer beads along the reaction coordinate. Such additional recrossing disappears in the classical and parabolic barrier limits, and thus becomes important only
at temperatures below the cross-over to deep tunnelling. Previous RPMD-TST calculations\cite{jor} indicate that the resulting loss in accuracy increases slowly as the temperature is reduced below cross-over, such that quantum TST remains within a factor of two of the exact rate at temperatures down to below half the cross-over temperature. However, it is clear that further work will be needed in order to predict quantitatively how far one can decrease the temperature below cross-over before quantum TST breaks down (which will always happen
at a sufficiently low temperature).
Just as with classical TST, quantum TST will not work for indirect reactions, such as those involving long-lived intermediates, or diffusive dynamics (e.g.\ the high-friction regimes of the quantum Kramers problem\cite{hang}). However, this leaves a vast range of chemical reactions for which quantum TST is applicable, and for which it will give an excellent approximation to the exact quantum rate. The findings in Part I and in this article thus validate the already extensive (and growing) body of results from RPMD rate-simulations\cite{rates,refined,azzouz,bimolec,ch4,mustuff,anrev,yury,tommy1,tommy2,tommy3,stecher,guo} (which give a lower bound to the RPMD-TST rate), as well as results obtained using the older centroid-TST method\cite{gillan1,gillan2,centroid1,centroid2,schwieters,ides} (which is a special case of RPMD-TST\cite{jor,cent}).
\begin{acknowledgments}
TJHH is supported by a Project Studentship from the UK Engineering and
Physical Sciences Research Council.
\end{acknowledgments}
\section*{APPENDIX A: Long-time limit of the hybrid flux-side time-correlation function}
\label{app:a}
\renewcommand{\theequation}{A\arabic{equation}}
\setcounter{equation}{0}
Here we derive \eqn{thicky}, which states that ${\overline C}_{\rm fs}^{[N]}(t)$ gives the exact quantum rate in the $t \to \infty$\ limit. We use the property that
\begin{align}
{\overline C}_{\rm fs}^{[N]}(t)= {\overline C}_{\rm sf}^{[N]}(t)=-{d\over d t}{\overline C}_{\rm ss}^{[N]}(t)
\end{align}
where ${\overline C}_{\rm sf}^{[N]}(t)$ and ${\overline C}_{\rm ss}^{[N]}(t)$ are the side-flux and side-side time-correlation functions corresponding to ${\overline C}_{\rm fs}^{[N]}(t)$. We then write ${\overline C}_{\rm sf}^{[N]}(t)$ as
\begin{align}
{\overline C}_{\rm sf}^{[N]}(t) = & \int\! d{\bf q}\, \int_{-\infty}^\infty\! d\Delta_1\,h[f({\bf q})]\nonumber\\
\times&\expect{q_{1}-\Delta_{1}/2|e^{-\beta_N{\hat H}}|q_2}\nonumber\\
\times&\prod_{i=3}^{N}\expect{q_{i-1}|e^{-\beta_N{\hat H}}|q_i}\nonumber\\
\times&\expect{q_N|e^{-\beta_N{\hat H}}|q_1+\Delta_1/2}
\nonumber\\
\quad\times &\expect{q_1+\Delta_1/2|e^{i{\hat H}t/\hbar}{\hat F}(q^\ddag) e^{-i{\hat H}t/\hbar}|q_1-\Delta_1/2}
\label{eq:cssN}
\end{align}
where ${\hat F}(q^\ddag)$ is the flux operator\cite{MST}
\begin{align}
{\hat F}(q^\ddag) = {1\over 2 m}\left[\hat p\delta(q-q^\ddag) + \delta(q-q^\ddag)\hat p \right]
\end{align}
and insert identities of the form $e^{i{\hat H}t/\hbar}e^{-i{\hat H}t/\hbar}$ to obtain
\begin{align}
{\overline C}_{\rm sf}^{[N]}(t) = &\int\! d{\bf q}\,\int_{-\infty}^\infty\! d\Delta_1\,h[f({\bf q})]\nonumber\\
\times&\expect{q_{1}-\Delta_{1}/2|e^{i{\hat H}t/\hbar}e^{-\beta_N{\hat H}}e^{-i{\hat H}t/\hbar}|q_2}\nonumber\\
\times&\prod_{i=3}^{N}\expect{q_{i-1}|e^{i{\hat H}t/\hbar}e^{-\beta_N{\hat H}}e^{-i{\hat H}t/\hbar}|q_i}\nonumber\\
\times&\expect{q_N|e^{i{\hat H}t/\hbar}e^{-\beta_N{\hat H}}e^{-i{\hat H}t/\hbar}|q_1+\Delta_1/2}
\nonumber\\
\quad\times &\expect{q_1+\Delta_1/2|e^{i{\hat H}t/\hbar}{\hat F}(q^\ddag) e^{-i{\hat H}t/\hbar}|q_1-\Delta_1/2}
\label{bob}
\end{align}
This allows us to take the $t \to \infty$\ limit of ${\overline C}_{\rm sf}^{[N]}(t)$ by using \eqn{dragons} together with the relation
\begin{align}
\hat\Omega_+\ket{p_i}= \ket{\phi^+_{p_i}}
\end{align}
where
\begin{align}
\hat\Omega_+\equiv\lim_{t\to\infty}e^{-i\hat H t/\hbar}e^{i\hat K t/\hbar}
\end{align}
and $\ket{\phi^+_{p_i}}$ is a (reactive) scattering wave function with incoming boundary conditions.\cite{taylor} We then
obtain
\begin{align}
{\overline C}_{\rm sf}^{[N]}(t) = \int\! d{\bf p}\, &\int_{-\infty}^\infty\! dp'_1\,h\!\left[{\overline f}\left({p_1+p_1'\over 2},p_2,\dots,p_N\right)\right]\nonumber\\
\times&\expect{\phi^+_{p_1'}|e^{-\beta_N{\hat H}}|\phi^+_{p_{2}}}\nonumber\\
\times&\prod_{i=3}^{N}\expect{\phi^+_{p_{i-1}}|e^{-\beta_N{\hat H}}|\phi^+_{p_i}}\nonumber\\
\times&\expect{\phi^+_{p_{N}}|e^{-\beta_N{\hat H}}|\phi^+_{p_1}}\expect{\phi^+_{p_1}|{\hat F}(q^\ddag) |\phi^+_{p_1'}}
\label{jupiter}
\end{align}
From the orthogonality of the scattering eigenstates,\cite{taylor} we obtain
\begin{align}
\expect{\phi^+_p|e^{-\beta_N{\hat H}}|\phi^+_{p'}}=e^{-p^2\beta_N/2m}\delta(p-p')
\end{align}
We also know that
\begin{align}
h\left[{\overline f}(p,p,\dots,p)\right]=h(p)
\end{align}
(since otherwise $f({\bf q})$ would not correctly distinguish between reactants and products in the limit $t \to \infty$).
We thus obtain
\begin{align}
\lim_{t\to\infty}{\overline C}_{\rm sf}^{[N]}(t) &= \int_{-\infty}^{\infty}\! dp\, e^{-p^2\beta/2m} h(p)
\expect{\phi^+_p|{\hat F}(q^\ddag) |\phi^+_p}
\label{pip}
\end{align}
which is the $t \to \infty$\ limit of the Miller-Schwartz-Tromp flux-side time-correlation function, \cite{MST} from which we obtain \eqn{thicky}.
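We note in passing that the relations between the flux-side, side-flux and side-side functions used at the start of this Appendix follow from the fact that ${\hat F}(q^\ddag)$ is the Heisenberg time-derivative of the side operator $h(\hat q-q^\ddag)$. This is a standard identity, which we record here for completeness:
\begin{align*}
{i\over\hbar}\left[{\hat H},h(\hat q-q^\ddag)\right]={1\over 2 m}\left[\hat p\,\delta(\hat q-q^\ddag) + \delta(\hat q-q^\ddag)\,\hat p \right]={\hat F}(q^\ddag)\;,
\end{align*}
which follows from $[\hat p,h(\hat q-q^\ddag)]=-i\hbar\,\delta(\hat q-q^\ddag)$.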
\section*{APPENDIX B: Derivation of the structure of $A_N({\bf p})$}
\label{app:b}
\renewcommand{\theequation}{B\arabic{equation}}
\setcounter{equation}{0}
Here we derive \eqn{piggy} of Sec.~IV.
We first define a special type of side-side time-correlation function,
\begin{align}
{ P}_{l}^{[N]}({\bf E},t)=\int\! & d{\bf q}\, \int\! d{\bf z}\,\int\! d{\bf \Delta}\,\nonumber\\
\times & h[f({\bf q})]
\left[\prod_{i=1,i\ne l}^N
h(z_i-q^\ddag)\right]\nonumber\\
\times&\prod_{i=1}^{N}\expect{q_{i-1}-\Delta_{i-1}/2|e^{-\beta_N{\hat H}}|q_i+\Delta_i/2}
\nonumber\\
\quad\times &\expect{q_i+\Delta_i/2|e^{i{\hat H}t/\hbar}\delta({\hat H}-E_i)|z_i}\nonumber\\
\times&\expect{z_i|e^{-i{\hat H}t/\hbar}|q_i-\Delta_i/2}
\label{tebbit}
\end{align}
where ${\bf E}\equiv\{E_1,E_2,\dots,E_N\}$, and then consider its $t \to \infty$\ time-derivative,
\begin{align}
{Q}_{l}^{[N]}({\bf E})=\lim_{t\rightarrow\infty}{d\over dt}{ P}_{l}^{[N]}({\bf E},t)
\end{align}
We can obtain two equivalent expressions for ${Q}_{l}^{[N]}({\bf E})$, by evaluating it as either a flux-side or a side-flux time-correlation function.
The flux-side version is
\begin{align}
{Q}_{l}^{[N]}({\bf E})=\int\! d{\bf q}\, &\int\! d{\bf p}\,\int\! d{\bf \Delta}\,{\cal \hat F}[f({\bf q})]
\left[\prod_{i=1,i\ne l}^N
h(p_i)\right]\nonumber\\
\times&\prod_{i=1}^{N}\expect{q_{i-1}-\Delta_{i-1}/2|e^{-\beta_N{\hat H}}|q_i+\Delta_i/2}
\nonumber\\
\quad\times &\expect{q_i+\Delta_i/2|\delta({\hat H}-E_i)|\phi^-_{p_i}}\nonumber\\
\times&\expect{\phi^-_{p_i}|q_i-\Delta_i/2}
\label{rip_thatcher}
\end{align}
which gives
\begin{align}
|p_l|^{-1} & A_N(p_1,\dots,p_l,\dots,p_N)\nonumber \\
& + |{\widetilde p}_l|^{-1}A_N(p_1,\dots,{\widetilde p}_l,\dots,p_N)\nonumber\\
&\qquad ={{Q}_{l}^{[N]}({\bf E})\over m^N}\prod_{i=1,i\ne l}^N |p_i|\ \ \ \ \text{\rm if ${\widetilde p}_l$ real} \label{dennis}
\end{align}
and
\begin{align}
|p_l|^{-1}&A_N(p_1,\dots,p_l,\dots,p_N)= \nonumber \\
&{{Q}_{l}^{[N]}({\bf E})\over m^N}\prod_{i=1,i\ne l}^N |p_i| \ \ \ \text{\rm if ${\widetilde p}_l$ imaginary}
\label{mennis}
\end{align}
The side-flux version is
\begin{align}
{ Q}_{l}^{[N]}({\bf E})=\int\! d{\bf s}\, &\int\! d{\bf s}'\,h[f({\bf s+s'})]\nonumber\\
\times&\left[\prod_{i=1}^{N}\expect{\phi^+_{s_{i-1}'}|e^{-\beta_N{\hat H}}|\phi^+_{s_i}}\right]\nonumber\\
\times&\expect{\phi^+_{s_l}|\delta({\hat H}-E_l)|\phi^+_{s_l'}}\nonumber\\
\times &\sum_{j=1,j\ne l}^N\ \expect{\phi^+_{s_j}|\delta({\hat H}-E_j){\hat F}(q^\ddag)|\phi^+_{s_j'}}\nonumber\\
\quad\times &\prod_{\substack{i=1,i\ne l\\i\ne j}}^N
\expect{\phi^+_{s_i}|\delta({\hat H}-E_i){\hat h}(q^\ddag)|\phi^+_{s_i'}}
\label{maggie_morta}
\end{align}
The second to fourth lines in this expression contain the $\delta$-functions,
\begin{align}
\delta(s_l-s'_l)\prod_{i=1}^{N}\delta(s'_{i-1}-s_i)\delta[E^+(s_i)-E_i]
\end{align}
where $E^+(s_i)$ is defined the other way round to $E^-(p_i)$ of \eqn{nofood}, and where the $\delta$-functions in $s_i $ and $s'_i$ result from the orthogonality of the scattering states $\ket{\phi^+_{s}}$.\cite{taylor} Integrating over $s_i $ and $s'_i$, we obtain
\begin{align}
{ Q}_{l}^{[N]}({\bf E})=b_N({\bf p})\delta(E_{l+1}-E_l)
\end{align}
where $b_N({\bf p})$ is some function of ${\bf p}$ (which we do not need to know explicitly). Substituting this expression into \eqnn{dennis}{mennis}, we obtain
\begin{align}
|p_l|^{-1} & A_N(p_1,\dots,p_l,\dots,p_N)\nonumber\\
&+ |{\widetilde p}_l|^{-1}A_N(p_1,\dots,{\widetilde p}_l,\dots,p_N)\nonumber\\
&\qquad =c_N({\bf p})\delta(E_{l+1}-E_l)
\ \ \ \ \text{\rm if ${\widetilde p}_l$ real}
\label{brutus}
\end{align}
and
\begin{align}
|p_l|^{-1}&A_N(p_1,\dots,p_l,\dots,p_N)\nonumber\\
& =c_N({\bf p})\delta(E_{l+1}-E_l)
\ \ \ \ \text{\rm if ${\widetilde p}_l$ imaginary}
\label{crutus}
\end{align}
where $c_N({\bf p})$ is some function of ${\bf p}$. This derivation assumed that $p_i>0$, $i\ne l$, but it can clearly be repeated for all combinations of positive and negative $p_i$ [by replacing various $h(z_i-q^\ddag)$
by $h(-z_i+q^\ddag)$]. Hence \eqnn{brutus}{crutus} hold for all ${\bf p}$.
Now, we can obtain alternative expressions for the right-hand sides of \eqnn{brutus}{crutus} by adding and subtracting sequences of terms that correspond to following different paths through the hypercube. Consider, for example (for the case that ${\widetilde p}_i, {\widetilde p}_j$ are both real), the sequence
\begin{align}
|p_j|^{-1}&A_N(p_1,\dots,p_i,\dots,p_j,\dots,p_N)\nonumber\\
&+ |{\widetilde p}_j|^{-1}A_N(p_1,\dots,{p}_i,\dots,{\widetilde p}_j,\dots,p_N)\nonumber\\
&\qquad =X_N({\bf p})\delta(E_{j+1}-E_j)\nonumber\\
|p_i|^{-1}&A_N(p_1,\dots,p_i,\dots,{\widetilde p_j},\dots,p_N)\nonumber\\
&+ |{\widetilde p}_i|^{-1}A_N(p_1,\dots,{\widetilde p}_i,\dots,{\widetilde p}_j,\dots,p_N)\nonumber\\
&\qquad =Y_N({\bf p})\delta(E_{i+1}-E_i)\nonumber\\
|{\widetilde p}_j|^{-1}&A_N(p_1,\dots,{\widetilde p}_i,\dots,{\widetilde p}_j,\dots,p_N)\nonumber\\
&+ |{p}_j|^{-1}A_N(p_1,\dots,{\widetilde p}_i,\dots,{p}_j,\dots,p_N)\nonumber\\
&\qquad =Z_N({\bf p})\delta(E_{j+1}-E_j)
\label{bag-lady}
\end{align}
where each of $X_N({\bf p}),Y_N({\bf p}),Z_N({\bf p})$ is some (different) function of ${\bf p}$. Combining these expressions [multiplying the first by $|p_j|\,|p_i|^{-1}$ and the third by $|p_j|\,|{\widetilde p}_i|^{-1}$, then subtracting $|p_j|\,|{\widetilde p}_j|^{-1}$ times the second, so that all terms containing ${\widetilde p}_j$ cancel], we obtain
\begin{align}
|p_i|^{-1}&A_N(p_1,\dots,p_i,\dots,p_N)\nonumber\\
& + |{\widetilde p}_i|^{-1}A_N(p_1,\dots,{\widetilde p}_i,\dots,p_N)\nonumber\\
&\qquad=P_N({\bf p})\delta(E_{i+1}-E_i)+Q_N({\bf p})\delta(E_{j+1}-E_j)
\label{thugus}
\end{align}
where $P_N({\bf p})=-|p_j||{\widetilde p}_j|^{-1}Y_N({\bf p})$,
and $Q_N({\bf p})=|p_j|\left[|p_i|^{-1}X_N({\bf p})+|{\widetilde p}_i|^{-1}Z_N({\bf p})\right]$. \cite{qzero} We can repeat this procedure for each of the $N-1$ different values of $j\ne i$. Because the resulting set of coefficients $P_N({\bf p})$ and $Q_N({\bf p})$ is linearly independent\footnote{It is conceivable that these coefficients might become linearly dependent at some value of ${\bf p}$, but these would be isolated points and would thereby contribute nothing to the integral.}, it follows that
\begin{align}
|p_i|^{-1}&A_N(p_1,\dots,p_i,\dots,p_N)\nonumber\\
&+ |{\widetilde p}_i|^{-1}A_N(p_1,\dots,{\widetilde p}_i,\dots,p_N)\nonumber\\
&\qquad=d_N({\bf p})\prod_{k=1}^{N-1}\delta(E_{k+1}-E_k) \ \ \ \ \text{\rm if ${\widetilde p}_i$ real}
\label{belgrano}
\end{align}
and
\begin{align}
|p_i|^{-1}&A_N(p_1,\dots,p_i,\dots,p_N)\nonumber\\
&=d_N({\bf p})\prod_{k=1}^{N-1}\delta(E_{k+1}-E_k) \ \ \ \ \text{\rm if ${\widetilde p}_i$ imaginary}
\label{borax}
\end{align}
where $d_N({\bf p})$ is some function of ${\bf p}$. From this, we obtain \eqn{piggy} of Sec.~IV.
\section*{APPENDIX C: Cancellation of the term $r_N({\bf p})$ in the limit $N \to \infty$}
\label{app:c}
\renewcommand{\theequation}{C\arabic{equation}}
\setcounter{equation}{0}
Because the function ${\overline{f}({\bf p})}$ must vary smoothly with imaginary time and converge in the limit $N \to \infty$, it can be rewritten as a function of a finite number $K$ of the linear combinations
\begin{align}
{\overline P}_i=\sum_{j=1}^N T_{ij}p_j\ \ \ \ \ \ i=1,\dots,K
\end{align}
in which $T_{ij}\sim N^{-1}$ (i.e.~${\overline P}_i$ is normalised such that it converges in the limit $N\to\infty$; e.g.~$T_{ij}= N^{-1}$ for all $j$ corresponds to the centroid).
It then follows that ${\partial {\overline f}({\bf p})/ \partial p_j}\sim N^{-1}$, and hence that
\begin{align}
\lim_{N\rightarrow\infty}{\overline{f}}(p_1,\dots,{\widetilde p}_j,\dots,p_N)={\overline{f}}({\bf p})
+({\widetilde p}_j-p_j)
{\partial {\overline f}({\bf p})\over \partial p_j}
\end{align}
provided the range of ${\widetilde p}_j-p_j$ is finite [which it is because $r_N({\bf p})$ contains Boltzmann factors].
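For the reader's convenience, we note that the scaling ${\partial {\overline f}({\bf p})/\partial p_j}\sim N^{-1}$ used above is simply the chain rule applied to the linear combinations ${\overline P}_i$:
\begin{align*}
{\partial {\overline f}({\bf p})\over \partial p_j}=\sum_{i=1}^K{\partial {\overline f}\over \partial {\overline P}_i}\,T_{ij}\sim N^{-1}\;,
\end{align*}
since each ${\partial {\overline f}/\partial {\overline P}_i}$ converges to an $N$-independent limit while $T_{ij}\sim N^{-1}$.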
We can therefore write the $N \to \infty$\ limit of \eqn{residue} as
\begin{align}
\lim_{N\rightarrow\infty}&S(N)=\nonumber\\
&\int_{-\infty}^\infty\! dp_1\dots\int_{-\infty}^\infty\! dp_{j-1}\int_{0}^\infty\! dp_{j}\dots\int_{0}^\infty\! dp_{N}\,r_N({\bf p})\nonumber\\
&\qquad \times ({\widetilde p}_j-p_j)
{\partial {\overline f}({\bf p})\over \partial p_j}\delta[{\overline{f}}(p_1,\dots,p_j,\dots,p_N)]
\end{align}
which shows that the volume sandwiched between the two Heaviside functions becomes a strip of width
$({\widetilde p}_j-p_j){\partial {\overline f}({\bf p})/ \partial p_j}\sim N^{-1}$ in the limit $N \to \infty$, so that the contribution $S(N)$ tends to zero in this limit.
\section*{APPENDIX D: Isolating the off-diagonal spikes for $N>3$}
\label{app:d}
\renewcommand{\theequation}{D\arabic{equation}}
\setcounter{equation}{0}
It is straightforward to generalize the result obtained for $N=3$ and $f({\bf q})={\overline q}_0$ in Sec.~V.B to general $N$ and to any (cyclically invariant) choice of $f({\bf q})$.
We consider first the special case of a centroid dividing surface [$f({\bf q})={\overline q}_0$]. The space orthogonal to ${\overline q}_0$
can be represented by orthogonal linear combinations $Q_i$, $i=1,\dots,N-1$, of the $q_i$, analogous to $Q_x$ and $Q_y$ in Sec.~V.B. We can then define a generalized radial dividing surface
\begin{equation}
g_r({\bf q})=\sqrt{\sum_{i=1}^{N-1}Q_i^2}-r^\ddag
\end{equation}
(which is invariant under cyclic permutation of the $q_i$) and generalized angular dividing surfaces
\begin{equation}
g_F({\bf q})=F\!\left[\phi({Q_X,Q_Y}) \right]
\end{equation}
with
\begin{align}
\phi(Q_X,Q_Y) &= \arctan(Q_Y/Q_X)\label{phiphi}
\end{align}
where $(Q_X,Q_Y)$ can be chosen to be any mutually orthogonal pair of linear combinations of the $Q_i$.
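One concrete choice of the $Q_i$ (given here only for illustration; the argument requires nothing beyond mutual orthogonality) is the set of free ring-polymer normal-mode coordinates orthogonal to the centroid,
\begin{align*}
Q_{2k-1}=\sqrt{2\over N}\sum_{j=1}^{N}q_j\cos\!\left({2\pi jk\over N}\right)\;,\qquad
Q_{2k}=\sqrt{2\over N}\sum_{j=1}^{N}q_j\sin\!\left({2\pi jk\over N}\right)\;,
\end{align*}
with $k=1,\dots,\lfloor(N-1)/2\rfloor$ (supplemented, for even $N$, by the alternating combination $N^{-1/2}\sum_{j}(-1)^jq_j$); any mutually orthogonal pair of these can then serve as $(Q_X,Q_Y)$.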
From \eqn{baffi}, the $t \to \infty$\ limits of $g_r({\bf q})$ and $g_F({\bf q})$ are
\begin{equation}
{\overline g}_r({\bf p})=\lim_{\epsilon\to 0}\sqrt{\sum_{i=1}^{N-1}P_i^2}-\epsilon
\end{equation}
and
\begin{equation}
{\overline g}_F({\bf p})=F\!\left[\phi(P_X,P_Y) \right]
\end{equation}
where $P_i$ and $(P_X,P_Y)$ are the linear combinations of $p_i$ analogous to $Q_i$ and $(Q_X,Q_Y)$.
We can then proceed as for the $N=3$ example in Sec.~V.B. Substituting ${\overline g}_r({\bf p})$ into \eqn{lunghi}, we obtain the constraint that the spikes along the centroid axis contribute zero (since ${\overline g}_r({\bf p})$ encloses these spikes only). This leaves us free to construct angular dividing surfaces $g_F({\bf q})$ in various planes $(Q_X,Q_Y)$ (which need not be mutually orthogonal) in order to enclose individual off-diagonal spikes. \cite{exist} \Eqn{lunghi} then gives a set of constraints, each stating that the contribution to $A_N({\bf p})$ from one of these spikes is zero if there is no recrossing of any surface orthogonal to~$f({\bf q})$.
This reasoning can be applied with slight modification to a general (cyclically invariant) dividing surface $f({\bf q})$. By construction, such a surface reduces to a function of just the centroid near the centroid axis, and hence there exists a radial coordinate in the $(N\!-\!1)$-dimensional curvilinear space orthogonal to $f({\bf q})$ which reduces to $g_r({\bf q})$ close to the centroid axis. We therefore obtain the constraint that the spikes along the centroid axis contribute zero, and are then free to isolate each of the off-diagonal spikes by using curvilinear generalisations of the angles $\phi$, which sweep over curvilinear surfaces that are
orthogonal to $f({\bf q})$, and which reduce to the form of \eqn{phiphi} close to the centroid axis.
\section{Asymptotic expansions}
\label{summary1x}
We have seen in Section~\ref{apriori_subsection} that existence of a smooth completion at null infinity requires $g_{AB}=\mathcal{O}(r^2)$ with $(\det \ol g_{AB})_{-4} > 0$, and thus $\varphi=\mathcal{O}(r)$ with $ \varphi_{-1}> 0$.
But then
\begin{equation*}
\frac{1}{\sqrt{\det\gamma}}\gamma_{AB} = \varphi^{-2} \frac{1}{\sqrt{\det s}} g_{AB} = \mathcal{O}(1)
\;.
\end{equation*}
Since only the conformal class of $\gamma_{AB}$ matters, we see that there is no loss of generality to assume that
$\gamma_{AB} = \mathcal{O}(r^2)$, with $(\det \gamma_{AB})_{-4}\ne 0$;
this is convenient because then $\gamma_{AB}$ and $\ol g_{AB}$ will display similar asymptotic behaviour.
Moreover, since any Riemannian metric on the 2-sphere is conformal to the standard metric $s=s_{AB}\mathrm{d}x^A\mathrm{d}x^B$,
in the case of smooth conformal completions we may without loss of generality require the initial data $\gamma$ to be of
the form, for large $r$,
\begin{equation}
\gamma_{AB} \sim r^2 \Big( s_{AB}+ \sum_{n=1}^{\infty} h^{(n)} _{AB} r^{-n}\Big)
\;,
\label{initial_data}
\end{equation}
for some smooth tensor fields $ h^{(n)} _{AB}$ on $S^2$.
(Recall that the symbol ``$\sim $'' has been defined in Section~\ref{ss12XII13.2}.)
If the initial data $\gamma_{AB}$ are not directly of the form \eq{initial_data}, they can either be brought to \eq{initial_data} via an appropriate choice of coordinates and conformal rescaling, or they lead to a metric $\ol g_{\mu\nu}$ which is not smoothly extendable across~$\scri^+$.
In the second part of this work~\cite{TimAsymptotics} the following theorem will be proved:
\begin{theorem}
\label{thm_asympt_exp}
Consider the characteristic initial value problem for Einstein's vacuum field equations in four space-time dimensions with smooth
conformal data $\gamma=\gamma_{AB}\mathrm{d}x^A\mathrm{d}x^B$ and gauge functions $\kappa$ and $\ol W^{\lambda}$ on a cone $C_O$ which has smooth closure
in the conformally completed space-time.
The following conditions are necessary and sufficient for the trace of the metric $g=g_{\mu\nu}\mathrm{d}x^{\mu}\mathrm{d}x^{\nu}$ on $C_O$,
obtained as solution to Einstein's wave-map characteristic vacuum constraint equations \eq{constraint_phi} and \eq{eqn_nuA_general}-\eq{dfn_zeta},
to admit a smooth conformal completion at infinity and for the connection coefficients $\ol \Gamma^r_{rA}$ to be smooth at $\scri^+$, in the sense of Definition~\ref{definition_smooth}, when imposing a generalized wave-map gauge condition $H^{\lambda}=0$:
\begin{enumerate}
\item[(i)] There exists a conformal factor so that the conformally rescaled $\gamma$ satisfies \eq{initial_data}.
\item[(ii)]The functions $\varphi$, $\nu^0$, $\varphi_{-1}$ and $(\nu_0)_0$ have no zeros on $C_O\setminus \{O\}$ and $S^2$, respectively, with the non-vanishing of $(\nu^0)_0$ being equivalent to
\begin{eqnarray}
(\ol W{}^0)_1
&< &
2(\varphi_{-1})^{-2}
\;.
\end{eqnarray}
\item[(iii)]
The gauge functions satisfy $\kappa=\mathcal{O}(r^{-3})$,
$\overline W{}^0=\mathcal{O}(r^{-1})$, $\overline W{}^A=\mathcal{O}(r^{-1})$, $\overline W{}^1=\mathcal{O}(r)$ and, setting $\ol W_A:= \ol g_{AB}\ol W{}^A$,
\begin{eqnarray}
\label{22XII13.11x}
(\overline W{}^0)_2
& = &
\Big[\frac{1}{2} (\overline W{}^0)_1 + (\varphi_{-1})^{-2}\Big]\tau_2
\;,
\label{0gaugecond}
\\
(\overline W_A)_1 &=& 4(\sigma_A{}^B)_2\mathring\nabla_B\log\varphi_{-1}
- (\check\varphi_{-1})^{-2}[(\nu_0)_2(\overline W_A)_{-1}
+ (\nu_0)_1(\overline W_A)_0]
\nonumber
\\
&& - \mathring\nabla_A \tau_2
- \frac{1}{2}( w_A{}^B)_1( w_B{}^C)_1 (\overline W_C)_{-1}
-\frac{1}{2} ( w_A{}^B)_2 (\overline W_B)_{-1}
\nonumber
\\
&& - ( w_A{}^B)_1\Big[(\overline W_B)_{0} + (\check\varphi_{-1})^{ 2} (\nu_0)_1 (\overline W_B)_{-1} \Big]
\label{nuA_cond}
\;,
\\ (\overline W{}^1)_2 &=& \frac{\zeta_2}{2} + (\varphi_{-1})^{-2}\tau_2 + \frac{\tau_2}{4}\check R_2 + \frac{\tau_2}{2} (\overline W{}^1)_1
+ \Big[ \frac{\tau_3}{4} + \frac{\kappa_3}{2} - \frac{(\tau_2)^2}{8} \Big](\overline W{}^1)_0
\nonumber
\\
&& +\Big[ \frac{1}{48}(\tau_2)^3 - \frac{1}{8}\tau_2\tau_3 - \frac{1}{4}\tau_2\kappa_3 + \frac{1}{6}\tau_4 + \frac{1}{3}\kappa_4 \Big](\overline W{}^1)_{-1}
\;,
\label{g00_cond}
\end{eqnarray}
where $\mathring\nabla$ is the covariant derivative operator of the unit round metric on the sphere $s_{AB}\mathrm{d}x^A\mathrm{d}x^B$, $\check R_2$ is the $r^{-2}$-coefficient of the scalar curvature $\check R$ of the metric $\check g_{AB}\mathrm{d}x^A\mathrm{d}x^B$,
$\check\varphi_{-1} :=[(\varphi_{-1})^{-2} - \frac{1}{2}(\ol W{}^0)_1]^{-1/2}$,
and the expansion coefficients $( w_A{}^B)_n$ are defined using
\begin{equation*}
w_A{}^B := \Big[ \frac{r}{2}\nu_0(\ol W{}^0 + \ol{\hat\Gamma}{}^0)-1 \Big]\delta_A{}^B +2r\chi_A{}^B
\;.
\end{equation*}
\item[(iv)]
The \underline{no-logs-condition}
is satisfied:
%
\begin{eqnarray}
(\sigma_A{}^B)_3 = \tau_2 (\sigma_A{}^B)_2
\;.
\label{no-log-conditions}
\end{eqnarray}
\end{enumerate}
\end{theorem}
\begin{Remark}
{\rm
If any of the equations \eq{22XII13.11x}-\eq{no-log-conditions} fail to hold, the resulting characteristic initial data sets will have a \emph{polyhomogeneous} expansion, involving powers of $r$ and $\log r$.
}
\end{Remark}
\begin{Remark}
{\rm
Theorem~\ref{thm_asympt_exp} is independent of the particular setting used (and also remains valid when the light-cone is replaced by one of two transversally intersecting null hypersurfaces meeting $\scrip$ in a sphere), cf.\ Section~\ref{s16XII13.1}:
As long as the generalized wave-map gauge condition is imposed one can always compute $\ol W^{\lambda}$, $\tau$, $\sigma$ etc.\
and check the validity of \eq{0gaugecond}-\eq{no-log-conditions}, whatever the prescribed initial data sets are.
Some care is needed when the Minkowski target is replaced by some other target metric, cf.\ \cite{TimAsymptotics}.
}
\end{Remark}
All the conditions in (ii) and (iii) which involve $\kappa$ or $\overline W{}^{\lambda}$ can always be satisfied by an appropriate choice of coordinates.
Equivalently, those logarithmic terms which appear if these conditions are not satisfied are pure gauge artifacts.
Recall that to solve the equation for $\xi_A$ both $\kappa$ and $\varphi$ need to be known; this requires a choice of the $\kappa$-gauge. Since the choice of $\overline W{}^0$ does not affect the $\xi_A$-equation, there is no gauge freedom left in that equation, and if the no-logs-condition \eq{no-log-conditions} does not hold there is no possibility to get rid of the log terms that arise there. (In Section~\ref{sec_no_log} we will return to the question whether \eq{no-log-conditions} can be satisfied by a choice of $\kappa$.)
Similarly there is no gauge-freedom left when the equation for $\zeta$ is integrated but, due to the special structure of the asymptotic expansion of its source term, no new log terms arise in the expansion of $\zeta$.
The no-logs-condition involves two functions, $\varphi_{-1}$ and $\varphi_0$, which are globally determined by the gauge function $\kappa$ and the initial data $\gamma$, cf.\ \eq{constraint_phi}.
The dependence of these functions on the gauge and on the initial data is rather intricate.
Thus the question arises for which class of initial data one can find a function $\kappa=\mathcal{O}(r^{-3})$, such that the no-logs-condition
holds, and accordingly what the geometric restrictions are for this to be possible.
This issue will be analysed in part II of this work,
using a gauge scheme adjusted to the initial data so that all relevant globally defined integration functions can be computed explicitly.
\section{The no-logs-condition}
\label{sec_no_log}
\subsection{Gauge-independence}
In this section we show gauge-independence of \eq{no-log-conditions}.
It is shown in paper II \cite{TimAsymptotics} that \eq{no-log-conditions} arises from
integration of the equation for $\xi_A$, which is independent of the gauge functions $W^{\mu}$. Equation \eq{no-log-conditions} is therefore independent of those functions, as well. So the only relevant freedom is that of rescaling the $r$-coordinate parameterizing the null rays.
We therefore need to compute how (\ref{no-log-conditions}) transforms under rescalings of $r$.
For this we consider a smooth
coordinate transformation
\begin{equation}
r\mapsto \tilde r=\tilde r(r,x^A)
\;.
\label{rescaling_r}
\end{equation}
Under \eq{rescaling_r} the function $\varphi$ transforms as a scalar.
We have seen above that a necessary condition for the metric to be smoothly extendable across $\scri^+$ is that $\varphi$ has the asymptotic
behaviour
\begin{equation}
\varphi(r,x^A) \,=\, \varphi_{-1}(x^A) r +\varphi_0 + \mathcal{O}(r^{-1})\;, \quad \text{with} \quad \varphi_{-1}>0
\;.
\label{asympt_phi}
\end{equation}
The transformed $\varphi$ thus takes the form
\begin{eqnarray*}
\tilde\varphi(\tilde r,x^A) &:=&\varphi(r(\tilde r),x^A)\,=\, \varphi_{-1}(x^A) r(\tilde r) +\varphi_0 + O(r(\tilde r)^{-1})
\;,
\\
\partial_{\tilde r}\tilde\varphi(\tilde r,x^A) &=& \frac{\partial r}{\partial \tilde r}\partial_r\varphi(r(\tilde r),x^A)\,=\,
\frac{\partial r}{\partial \tilde r}\varphi_{-1}(x^A) r(\tilde r) + \frac{\partial r}{\partial \tilde r}O(r(\tilde r)^{-2})
\;.
\end{eqnarray*}
If we require $\tilde \varphi$ to be of the form \eq{asympt_phi} as well, it is easy to check that we must have
\begin{eqnarray}
r(\tilde r, x^A) &=& r_{-1}(x^A)\tilde r + r_0 + O(\tilde r^{-1}) \quad \text{and}
\label{asympt_coord_trafo}
\\
\partial_{\tilde r}r(\tilde r, x^A) &=& r_{-1}(x^A) + O(\tilde r^{-2})\;, \quad \text{with} \quad r_{-1}>0
\;.
\label{asympt_coord_trafo2}
\end{eqnarray}
We have:
\begin{Proposition}
\label{P11XII13.1}
The no-logs-condition (\ref{no-log-conditions}) is invariant under the coordinate transformations \eq{asympt_coord_trafo}-\eq{asympt_coord_trafo2}.
\end{Proposition}
{\sc Proof:}\
For the transformation behavior of the expansion coefficients we obtain
\begin{eqnarray*}
&\varphi_{-1} \,=\, (r_{-1})^{-1}\tilde\varphi_{-1}\;, \quad \varphi_0 \,=\,\tilde \varphi_0 - r_0(r_{-1})^{-1}\tilde\varphi_{-1}&
\\
&\Longrightarrow \quad \tau_2 \,=\, -2(\varphi_{-1})^{-1}\varphi_0 \,=\, r_{-1}\tilde\tau_2
+2 r_0
\;.&
\end{eqnarray*}
Moreover, with \eq{asympt_coord_trafo}-\eq{asympt_coord_trafo2}
we find
\begin{eqnarray*}
\tilde\sigma_A{}^B &=& \frac{\partial r}{\partial \tilde r} \sigma_A{}^B =\left[ r_{-1} + O(\tilde r^{-2}) \right]\left[ (\sigma_A{}^B)_2r(\tilde r)^{-2} + (\sigma_A{}^B)_3 r(\tilde r)^{-3} + \mathcal{O}(r(\tilde r)^{-4}) \right]
\\
&=& (r_{-1})^{-1} (\sigma_A{}^B)_2\tilde r^{-2} + \left[ (r_{-1})^{-2}(\sigma_A{}^B)_3 - 2r_0(r_{-1})^{-2}(\sigma_A{}^B)_2 \right]\tilde r^{-3}
+ O(\tilde r^{-4})
\\
\Longrightarrow && (\sigma_A{}^B)_2 = r_{-1} (\tilde \sigma_A{}^B)_2\;,
\\
&& (\sigma_A{}^B)_3 = (r_{-1})^2(\tilde\sigma_A{}^B)_3 + 2r_0r_{-1}(\tilde \sigma_A{}^B)_2
\;.
\end{eqnarray*}
Hence
\begin{equation*}
(\sigma_A{}^B)_3- \tau_2 (\sigma_A{}^B)_2
=
(r_{-1})^2[ (\tilde\sigma_A{}^B)_3 - \tilde\tau_2 (\tilde\sigma_A{}^B)_2 ]
\;.
\end{equation*}
\hfill $\Box$ \medskip
Although the No-Go Theorem~\ref{T9XII13.11} shows that the ($\kappa=0$, $\ol W^\lambda =0$)-wave-map gauge invariably produces logarithmic terms except in the flat case,
one can decide whether the logarithmic terms can be transformed away by checking \eq{no-log-conditions} using this gauge, or in fact any other.
In the $([\gamma],\kappa)$ scheme this requires determining $\tau_2$ by solving the Raychaudhuri equation, which makes the scheme impractical for this purpose. In particular, it is not a priori clear within this scheme whether \emph{any} initial data satisfying this condition exist unless both $(\sigma_A{}^B)_2$ and $(\sigma_A{}^B)_3$ vanish.
On the other hand,
in any gauge scheme where $\check g$
is prescribed on the cone, the no-logs-condition \eq{no-log-conditions} is a straightforward condition on the asymptotic
behaviour of the metric.
Let us assume that \eq{no-log-conditions} is violated for say $\kappa=0$. We know that the metric cannot have a smooth conformal completion at infinity in an adapted null coordinate system arising from the $ \kappa=0$-gauge via a transformation which \textit{is not} of the asymptotic form \eq{asympt_coord_trafo}-\eq{asympt_coord_trafo2}.
On the other hand, if the transformation \textit{is} of the form \eq{asympt_coord_trafo}, then the no-logs-condition will also be violated in the new coordinates. We conclude that we cannot have a smooth conformal completion in \emph{any} adapted null coordinate system. This yields
\begin{theorem}
\label{T3III14.1}
Consider initial data $\gamma$ on a light-cone $C_O$ in a $ \kappa=0$-gauge
with asymptotic behaviour $\gamma_{AB}\sim r^2 ( s_{AB}+ \sum_{n=1}^{\infty} h^{(n)}_{AB} r^{-n})$.
Assume that $\varphi$, $\nu^0$ and $\varphi_{-1}$ are strictly positive on $C_O\setminus\{O\}$ and $S^2$, respectively.
Then there exists
a gauge w.r.t.\ which the trace $\ol g$ of the metric on the cone admits a smooth conformal completion at infinity
and where the connection coefficients $\ol \Gamma^r_{rA}$ are smooth at $\scrip$
(in the sense of Definition~\ref{definition_smooth})
if and only if the no-logs-condition \eq{no-log-conditions} holds
in one (and then any) coordinate system related to the original one by a coordinate transformation of the form \eq{asympt_coord_trafo}-\eq{asympt_coord_trafo2}.
\end{theorem}
\subsection{Geometric interpretation}
\label{no-logs_geom}
Here we provide a geometric interpretation
of the no-logs-condition \eq{no-log-conditions} in terms of the conformal Weyl tensor. This ties our results to the analysis in~\cite{andersson:chrusciel:PRL} (compare also Section~\ref{ss16XII13.5}).
For this purpose let us consider the components of the conformal Weyl tensor, $ C_{rAr}{}^B$, on the cone.
To end up with smooth initial data for the conformal field equations
we need to require its rescaled counterpart $\overline {\tilde d}_{rAr}{}^B = \ol \Theta^{-1} \ol {\tilde C}_{rAr}{}^B = \ol \Theta^{-1} \ol C_{rAr}{}^B$ to be smooth at $\scri^+$, which is equivalent to
\begin{equation}
\overline C_{rAr}{}^B = \mathcal{O}(r^{-5})
\;.
\label{asympt_beh_Weyl}
\end{equation}
In particular the $\ol C_{rAr}{}^B $-components of the Weyl tensor need to vanish one order faster than naively expected from the asymptotic behavior of the metric.
In adapted null coordinates and in vacuum we have, using the formulae of \cite[Appendix~A]{CCM2},
\begin{eqnarray*}
\ol C_{rAr}{}^B &=& \ol R_{rAr}{}^B \,=\, -\partial_r\ol \Gamma^B_{rA} + \ol\Gamma^B_{rA}\ol \Gamma^r_{rr}
- \ol \Gamma^B_{rC}\ol\Gamma^C_{rA}
\\
&=& -(\partial_r-\kappa)\chi_A{}^B
- \chi_A{}^C\chi_C{}^B
\\
&=& -\frac{1}{2}(\partial_r\tau-\kappa\tau + \frac{1}{2}\tau^2 ) \delta_A{}^B
-(\partial_r + \tau -\kappa)\sigma_A{}^B
- \sigma_A{}^C\sigma_C{}^B
\\
&=& \frac{1}{2}|\sigma|^2\delta_A{}^B -(\partial_r+\tau -\kappa)\sigma_A{}^B
- \sigma_A{}^C\sigma_C{}^B
\;.
\end{eqnarray*}
Assuming, for definiteness, that $\kappa=\mathcal{O}(r^{-3})$ and $\ol g_{AB} = \mathcal{O}(r^2)$ with $(\det \ol g_{AB})_{-4}>0$
we have
\begin{eqnarray*}
\ol C_{rAr}{}^B &=& \Big( (\sigma_A{}^B)_3 - \tau_2 (\sigma_A{}^B)_2 + \frac{1}{2} (\sigma_C{}^D)_2(\sigma_D{}^C)_2\delta_A{}^B
- (\sigma_A{}^C)_2(\sigma_C{}^B)_2\Big) r^{-4}
\\
&& + \mathcal{O}(r^{-5})
\;.
\end{eqnarray*}
As an $s$-symmetric, trace-free tensor, $(\sigma_A{}^{C})_2$ has the property (by the Cayley--Hamilton identity $M^2=-\det(M)\,\mathrm{Id}=\frac{1}{2}\mathrm{tr}(M^2)\,\mathrm{Id}$, valid for any trace-free two-by-two matrix $M$)
\begin{equation*}
(\sigma_A{}^{C})_2(\sigma_C{}^{B})_2 = \frac{1}{2}(\sigma_D{}^{C})_2(\sigma_C{}^{D})_2\delta_A{}^{B}
\;,
\end{equation*}
i.e.
\begin{eqnarray*}
\ol C_{rAr}{}^B &=& \big[ (\sigma_A{}^B)_3 - \tau_2 (\sigma_A{}^B)_2 \big] r^{-4} + \mathcal{O}(r^{-5})
\;,
\end{eqnarray*}
and \eq{asympt_beh_Weyl} holds if and only if the no-logs-condition is satisfied.
\section{Other settings}
\label{s16XII13.1}
We pass now to the discussion of how to modify the above when other data sets are given, or when Cauchy problems other than that on a light-cone are considered.
\subsection{Prescribed $(\check g_{AB},\kappa)$}
\label{ss16XII13.1}
In this setting the initial data are a symmetric degenerate twice-covariant tensor field $\check g$, and a connection $\kappa$ on the family of bundles tangent to the integral curves of the kernel of $\check g$, satisfying the Raychaudhuri constraint \eq{constraint_tau}.
Recall that so far we have mainly been considering a characteristic Cauchy problem where $([\gamma], \kappa)$ are given. There \eq{constraint_phi} was used to solve for the conformal factor relating $\check g$ and $\gamma$:
\bel{16XII13.1}
\check g \equiv \overline g_{AB} \mathrm{d}x^A\mathrm{d}x^B = \varphi^2 \big( \frac{\det s_{CD}}{\det \gamma_{EF}}\big)^{\frac{1}{n-1}}
\gamma_{AB} \mathrm{d}x^A \mathrm{d}x^B
\;.
\end{equation}
But then a pair $(\check g,\kappa)$ satisfying \eq{constraint_tau} is
obtained.
So, in fact, prescribing the pair $(\check g,\kappa)$ satisfying \eq{constraint_tau} can be viewed as a special case of the $([\gamma], \kappa)$-prescription, where one sets $ \gamma:=\check g$.
Indeed, when $\check g$ and $\kappa$ are suitably regular at the vertex, uniqueness of solutions of \eq{constraint_phi} with the boundary conditions $\varphi(0)=0$ and $\partial_r\varphi(0)=1$ shows that %
\bel{18XII13.1}
\varphi= \big( \frac{\det \ol g_{EF}}{\det s_{CD}}\big)^{\frac{1}{2(n-1)}}
\quad
\Longleftrightarrow
\quad
\ol g_{AB}\equiv \gamma_{AB}
\quad
\Longleftrightarrow
\quad
\check g \equiv \gamma
\;.
\end{equation}
In particular all the results so far apply to this case.
If $\tau$ is nowhere vanishing, as necessary for a smooth null-geodesically complete light-cone extending to null infinity, then \eq{constraint_tau} can be algebraically solved for $\kappa$, so that the constraint becomes trivial.
\subsection{Prescribed $(\overline g_{\mu\nu},\kappa)$}
\label{ss16XII13.2}
In this approach one prescribes all metric functions $\overline g_{\mu\nu}$ on the initial characteristic hypersurface, together with the connection
coefficient $\kappa$, subject to the Raychaudhuri equation \eq{constraint_tau}.
\Eq{R11_constraint} relating $\kappa$ and $\nu_0$ becomes an algebraic equation for the gauge-source function $\ol W{}^0$, while the equations $\ol R_{r A}=0=\ol g^{AB}\ol R_{AB}$ become algebraic equations for $\ol W^A$ and $\ol W{}^r$.
In four space-time dimensions, a smooth conformal completion at null infinity will exist
if and only if $r^{-2}\ol g_{\mu\nu}$ can be smoothly extended as a Lorentzian metric across $\scri^+$ and no logarithmic terms appear in the asymptotic expansion of
$\ol \Gamma^r_{rA}$;
this last fact is equivalent to \eq{no-log-conditions}. To see this,
note that since the equations for $\ol W^{\mu}$ are algebraic,
no log terms arise in these fields as long as none appear in the remaining fields entering the constraint equations.
Similarly no log terms arise in the $\zeta$-equation.
The only possible source of log terms is thus the $\xi_A$-equation, and the appearance of log terms there is excluded precisely by the no-logs-condition. The existence of an associated space-time with a ``piece of smooth $\scrip$'' then follows from the analysis of the initial data for Friedrich's conformal equations in part II of this work, together with the analysis in \cite{CPW}.
We conclude that \eq{no-log-conditions} is again a necessary and sufficient condition for existence of a smooth $\scrip$ for the current scheme in space-time dimension four.
\subsection{Frame components of $\sigma$ as free data}
\label{ss16XII13.4}
In this section we consider as free data the components $\chi_{ab}$ in an adapted parallel-propagated frame as in~\cite[Section~5.6]{ChPaetz}.
We will assume that
\bel{22XII13.1}
\chi^a{}_b = \frac 1 r \delta^ a{}_b + \mathcal{O}(r^{-2})
\;,
\quad
a,b \in\{2,3\}
\;.
\end{equation}
There are actually at least two schemes which would lead to this form of $ \chi^a{}_b $:
One can e.g.\ prescribe any $ \chi^a{}_b $ satisfying
\eq{22XII13.1} such that $\chi^2{}_2+\chi^3{}_3= \chi_{22}+\chi_{33}$ has no zeros, define $\sigma_{ab}=\chi_{ab}- \frac 12 (\chi^2{}_2+\chi^3{}_3) \delta_{ab}$,
and solve algebraically the
Raychaudhuri equation for $\kappa$.
Another possibility is to prescribe directly a symmetric trace-free tensor $\sigma_{ab}$
in the $\kappa=0$ gauge, use the Raychaudhuri equation to determine $\tau$, and construct $\chi_{ab}$ using
\bel{22XII13.2}
\chi^{a}{}_{b}= \frac \tau {2} \delta^ a{}_b + \sigma^ a{}_b
\;,
\quad
a,b \in\{2,3\}
\;.
\end{equation}
The asymptotics \eq{22XII13.1} will then hold if $\sigma^ a{}_b$ is taken to be $ \mathcal{O}(r^{-2})$.
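Indeed, for a light-cone with the asymptotics \eq{asympt_phi} one has (as a consistency sketch on our part, using $\tau=(n-1)\partial_r\varphi/\varphi$, which follows from \eq{16XII13.1}) $\tau = 2/r + \mathcal{O}(r^{-2})$ in space-time dimension four, and then
\begin{equation*}
\chi^a{}_b = \frac \tau 2 \delta^a{}_b + \sigma^a{}_b = \frac 1r \delta^a{}_b + \mathcal{O}(r^{-2})
\quad \text{whenever} \quad \sigma^a{}_b = \mathcal{O}(r^{-2})
\;.
\end{equation*}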
Given $\chi_{ab}$, the tensor field $\check g$ is obtained by setting
\bel{22XII13.3}
\check g = \big
(\theta^2{}_A \theta^2{}_B
+ \theta^3{}_A \theta^3{}_B
\big)
\mathrm{d}x^ A \mathrm{d} x^B
\;,
\end{equation}
where the co-frame coefficients $
\theta^a{}_A$ are solutions of the equation~\cite{ChPaetz}
\bel{22XII13.4-}
\partial_r \theta^a{}_A = \chi^a{}_b \theta^b{}_A
\;,
\quad
a,b \in \{2,3\}
\;.
\end{equation}
Assuming \eq{22XII13.1}, one finds that solutions of \eq{22XII13.4-} have an asymptotic expansion for $\theta^a{}_A$ without log terms:
\bel{22XII13.4+}
\theta^a{}_A = r \varphi^a{}_A + \mathcal{O}(1)
\;,
\quad
a,b \in \{2,3\}
\end{equation}
for some globally determined functions $\varphi^a{}_A$. If the determinant of the two-by-two matrix $( \varphi^a{}_A )$ does not vanish, one obtains a tensor field $\check g$ to which our previous considerations apply. This leads again to the no-logs-condition (\ref{no-log-conditions}).
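To see why no log terms occur in \eq{22XII13.4+}, note that by \eq{22XII13.1} the system \eq{22XII13.4-} reads, schematically, $\partial_r\theta=r^{-1}\theta+\mathcal{O}(r^{-2})\,\theta$; substituting $\theta=r\psi$ gives $\partial_r\psi=\mathcal{O}(r^{-2})\,\psi$, whose solutions converge as $r\to\infty$ since the coefficient is integrable in $r$. Hence (this is only a sketch of the mechanism)
\begin{equation*}
\theta^a{}_A=r\,\varphi^a{}_A\big(1+\mathcal{O}(r^{-1})\big)=r\,\varphi^a{}_A+\mathcal{O}(1)
\;.
\end{equation*}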
Writing, as usual,
\bel{22XII13.5}
\sigma_{ab} = (\sigma_{ab})_{2} r^{-2} +
(\sigma_{ab})_{3} r^{-3} + \mathcal{O}(r^{-4})
\;,
\quad
a,b \in \{2,3\}
\;,
\end{equation}
the no-logs-condition rewritten in terms of $\sigma_{ab}$ reads
\bel{22XII13.6}
(\sigma_{ab})_3 = \tau_2(\sigma_{ab})_2
\;,
\quad
a,b \in \{2,3\}
\;.
\end{equation}
\subsection{Frame components of the Weyl tensor as free data}
\label{ss16XII13.5}
Let $C_{\alpha \beta \gamma \delta}$ denote the space-time Weyl tensor.
For $a,b\ge 2$ let
\begin{equation*}
\psi_{ab} := e_a{}^A e_b {}^B \overline C_{A r B r}
\end{equation*}
represent the components of $ \overline C_{A r B r }$ in a parallelly-transported adapted frame, as in Section~\ref{ss16XII13.4}. The tensor field
$\psi_{ab}$ is symmetric, with vanishing $\eta$-trace, and we have in space-time dimension four (cf., e.g., \cite[Section~5.7]{ChPaetz})
\begin{eqnarray}
(\partial_r -\kappa)\chi_{ab} & = &
- \sum_{c=2}^3
\chi_{ac}\chi_{cb}
- \psi_{ab} -\frac 1{2}\eta_{ab}\overline T_{rr}
\;.
\label{22I12.1x2}
\end{eqnarray}
Given $(\kappa, \psi_{ab})$, we can integrate this equation in vacuum to obtain the tensor field $\chi_{ab}$ needed in Section~\ref{ss16XII13.4}.
However, this approach leads to at least two difficulties: First, it is not clear under which conditions on $\psi_{ab}$ the solutions will exist for all values of $r$. Next, it is not clear that the global solutions will have the desired asymptotics. We will not address these questions but, taking into account the behaviour of the Weyl tensor under conformal transformations, we will assume that
\bel{22XII13.11}
\kappa =
\mathcal{O}(r^{-3})
\;,
\quad
\psi_{ab}=
\mathcal{O}(r^{-4})
\;,
\end{equation}
and that the associated tensor field $\chi_{ab}$ exists globally and satisfies \eq{22XII13.1}. The no-logs-condition will then hold if and only if
\bel{22XII13.12}
\psi_{ab}=
\mathcal{O}(r^{-5})
\qquad
\Longleftrightarrow
\qquad
(\psi_{ab})_4= 0
\;.
\end{equation}
Note that one can reverse the procedure just described: given $\chi_{ab}$ we can use \eq{22I12.1x2} to determine $\psi_{ab}$.
Assuming \eq{22XII13.1}, the no-logs-condition will hold if and only if the $\psi_{ab}$-components of the Weyl tensor vanish one order faster than naively expected from the asymptotic behaviour of the metric (cf.\ Section~\ref{no-logs_geom}).
\Eq{22XII13.12} is the well-known starting point of the analysis in \cite{Newman:Penrose}, and has also been obtained previously as a necessary condition for existence of a smooth $\scri$ in the analysis of the hyperboloidal Cauchy problem~\cite{andersson:chrusciel:PRL}. It is therefore not surprising that it reappears in the analysis of the characteristic Cauchy problem. However, as pointed out above, a satisfactory treatment of the problem using $\psi_{ab}$ as initial data requires further work.
\subsection{Characteristic surfaces intersecting transversally}
\label{s10XII13.1}
Consider two characteristic surfaces, say ${\cal N}_1$ and ${\cal N}_2$, intersecting transversally along a smooth submanifold $S$ diffeomorphic
to $S^2$.
Assume moreover that the initial data on ${\cal N}_1$ (in any of the versions just discussed) are such that the metric $\ol g_{\mu\nu}$ admits a smooth conformal completion across the sphere $\{x=0\}$, as in Definition~\ref{definition_smooth}. The no-logs-condition \eq{no-log-conditions} remains unchanged. Indeed, the only difference is the integration procedure for the constraint equations:
while on the light-cone we have been integrating from the tip of the light-cone, on ${\cal N}_1$ we integrate from the intersection surface~$S$. This leads to the need to provide supplementary data at $S$ which render the solutions unique. Hence the asymptotic values of the fields, which arise from~the integration of the constraints, will also depend on the supplementary data at $S$.
\subsection{Mixed spacelike-characteristic initial value problem}
\label{s10XII13.2}
Consider a mixed initial value problem, where the initial data set consists of:
\begin{enumerate}
\item
A spacelike initial data set $(\cal S,{}^3g,K)$, where ${}^3g$
is a Riemannian metric on $\cal S$ and $K$ is a symmetric two-covariant tensor field on $\cal S$. The three-dimensional manifold $\cal S$ is supposed to have a compact smooth boundary $S$ diffeomorphic to $S^2$, and the fields $( {}^3g,K)$ are assumed to satisfy the usual vacuum Einstein constraint equations.
\item A hypersurface ${\cal N}_1$ with boundary $S$ equipped with a characteristic initial data set, in any of the configurations discussed so far. Here ${\cal N}_1$ should be thought of as a characteristic initial data surface emanating from $S$ in the outgoing direction.
\item The data on $\cal S$ and ${\cal N}_1$ satisfy a set of ``corner conditions'' at $S$, to be defined shortly.
\end{enumerate}
The usual evolution theorems for the spacelike general relativistic initial value problem provide a unique future maximal globally hyperbolic vacuum development ${\mycal D}^+$ of $(\cal S,{}^3g,K)$. Since $\cal S$ has a boundary, ${\mycal D}^+$ will also have a boundary. Near $S$, the null part of the boundary $\partial {\mycal D}^+$ will be a smooth null hypersurface emanating from $S$, say ${\cal N}_2$, generated by null geodesics normal to $S$ and ``pointing towards $\cal S$'' at $S$.
In particular the space-time metric on ${\mycal D}^+$ induces characteristic initial data on ${\cal N}_2$. In fact, all derivatives of the metric, both in directions tangent and transverse to ${\cal N}_2$, will be determined on ${\cal N}_2$ by the initial data set $(\cal S,{}^3g,K)$. This implies that the characteristic initial data needed on ${\cal N}_1$, as well as their derivatives in directions tangent to ${\cal N}_1$, are determined on $S$ by $(\cal S,{}^3g,K)$. These are the ``corner conditions'' which have to be satisfied by the data on ${\cal N}_1$ at $S$, with these data being arbitrary otherwise. The corner conditions can be calculated algebraically in terms of the fields $( {}^3g,K)$, the gauge-source functions $W^\mu$, and the derivatives of those fields, at $S$, using the vacuum Einstein equations.
One can use now the Cauchy problem discussed in Section~\ref{s10XII13.1} to obtain the metric to the future of ${\cal N}_1\cup {\cal N}_2$, and the discussion of the no-logs-condition given in Section~\ref{s10XII13.1} applies.
\subsection{Global solutions}
\label{s6XII13.1}
A prerequisite for obtaining asymptotic expansions is existence of solutions of the constraint equations defined for all $r$.
The question of globally defined data becomes trivial when all metric components are prescribed on $C_O$: Then the only condition is that $\tau$, as calculated from $\ol g_{AB}$, is strictly positive. Now, as is well-known, and will be rederived shortly in any case, negativity of $\tau$ implies formation of conjugate points in finite affine time, or geodesic incompleteness of the generators. In this work we will only be interested in light-cones $C_O$ which are globally smooth (except, of course, at the vertex), and extending all the way to conformal infinity. Such cones have complete generators without conjugate points,
and so $\tau$ must remain positive. But then one can solve algebraically the Raychaudhuri equation to globally determine $\kappa$.
We note that the function $\tau$ depends upon the choice of parameterisation of the generators, but its sign does not, hence the above discussion applies regardless of that choice.
Recall that we assume that the tip of the cone corresponds to $r \rightarrow 0$, and that the condition $\kappa=O(r^{-3})$ ensures that an affine parameter along the generators
tends to infinity as $r\rightarrow \infty$, so that the parameterization by $r$ covers the whole cone from $O$ to null infinity.
In some situations it might be convenient
to request that $\kappa$ vanishes, or takes some prescribed value. In this case the Raychaudhuri equation becomes an equation for the function $\varphi$, and the question of its global positivity arises.
Recall that the initial conditions for $\varphi$ at the vertex are $\varphi(0)=0$ and $\partial_r\varphi(0)=1$, and so both $\partial_r \varphi$ and $\varphi$ are positive near zero.
Now, (\ref{constraint_phi}) with $\kappa=0$ shows that $\varphi$ is concave
as long as it is non-negative; equivalently, $\partial_r \varphi$ is non-increasing in the region where $\varphi>0$.
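Explicitly, \eq{constraint_phi} with $\kappa=0$ takes the form (cf.\ \eq{constraint_phi_alter} below with $H=0$ and $s=r$)
\begin{equation*}
\partial^2_r\varphi=-\frac{|\sigma|^2}{n-1}\,\varphi\,\le\, 0 \qquad \text{wherever}\quad \varphi\ge 0
\;.
\end{equation*}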
An immediate consequence of this is that if $\partial_r\varphi$ becomes negative at some $r_0>0$, then it stays so, with $\varphi$ vanishing for some $r_0<r_1<\infty$, i.e.\ after some finite affine parameter time. We recover the result just mentioned, that negativity of $\partial_r\varphi$ indicates incompleteness, or occurrence of conjugate points, or both. In the first case the solution will not be defined for all affine parameters $r$, in the second $C_O$ will fail to be smooth for $r>r_1$ by standard results on conjugate points. Since the sign of $\partial_r\varphi$ is invariant under orientation-preserving reparameterisations, we conclude that:
\begin{Proposition}
\label{P12XII13.1}
Globally smooth and null-geodesically-complete light-cones must have $\partial_r\varphi$ positive.
\end{Proposition}
A set of conditions guaranteeing global existence of positive solutions of the Raychaudhuri equation, viewed as an equation for $\varphi$, has been given in~\cite[Theorem~7.3]{CCM2}. Here we shall give an alternative, simpler criterion, as follows:
Suppose, first, that $\kappa=0$. Integration of \eq{constraint_phi} gives
\begin{eqnarray}
\partial_r\varphi(r,x^A) &=&1 -\frac{1}{n-1} \int_0^r \big(\varphi|\sigma|^2\big)(\tilde r,x^A)\,\mathrm{d}\tilde r \,\leq\, 1
\label{6XII13.3}
\end{eqnarray}
as long as $\varphi$ remains positive. Since $\varphi(0)=0$, we see that we always have %
$$
\varphi(r,x^A)\le r
$$
in the region where $\varphi$ is positive, and in that region it holds
\begin{eqnarray*}
\partial_r\varphi(r,x^A) &\ge & 1 -\frac{1}{n-1} \int_0^r \tilde r \,|\sigma (\tilde r,x^A)|^2\mathrm{d}\tilde r
\\
&\geq & 1 -\frac{1}{n-1} \int_0^{\infty} \tilde r\, |\sigma (\tilde r,x^A)|^2\mathrm{d} \tilde r
\;.
\end{eqnarray*}
This implies that $\varphi$ is strictly increasing if
\begin{equation}
\int_0^{\infty} r|\sigma|^2 \,\mathrm{d} r \,<\, n-1
\;.
\label{second_integral}
\end{equation}
Since $\varphi$ is positive for small $r$ it remains positive as long as $\partial_r \varphi$ remains positive,
and so global
positivity of $\varphi$ is guaranteed whenever \eq{second_integral} holds.
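For illustration (an example of ours, not needed in what follows): since $|\sigma|^2=O(r^{-4})$ for data of the form \eq{initial_data}, suppose that $|\sigma|^2\le C(1+r)^{-4}$ for some constant $C$. Then
\begin{equation*}
\int_0^{\infty} r|\sigma|^2 \,\mathrm{d} r \,\le\, C\int_0^{\infty} \frac{r}{(1+r)^{4}} \,\mathrm{d} r \,=\, \frac{C}{6}
\;,
\end{equation*}
so \eq{second_integral} holds whenever $C<6(n-1)$.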
A rather similar analysis applies to the case $\kappa\ne 0$, in which we set
\bel{30IV12.1}
H(r,x^A) := \int_0 ^r {\kappa(\tilde r,x^A)} \mathrm{d}\tilde r
\;.
\end{equation}
Let
\begin{eqnarray}
\varphi(r) = \mathring \varphi (s(r))\;, \quad \text{where} \quad s(r):= \int_0^re^{H(\hat r)}\mathrm{d}\hat r
\;,
\label{dfn_mathring_varphi}
\end{eqnarray}
the $x^A$-dependence being implicit.
The function $s(r)$ is strictly increasing with $s(0)=0$. If we assume that $\kappa$ is continuous in $r$ with $\kappa(0)=0$, defined for all $r$ and, e.g.,
\bel{9XII13.1}
\int_0^\infty\kappa>-\infty
\;,
\end{equation}
then $\lim_{r\rightarrow \infty} s(r)=+\infty$,
and thus the function $r\mapsto s(r)$ defines a differentiable bijection from $\mathbb{R}^+$ to itself.
Consequently, a differentiable inverse function $s\mapsto r(s)$ exists, and is smooth if $\kappa$ is.
Expressed in terms of \eq{dfn_mathring_varphi}, \eq{constraint_phi} becomes
\begin{equation}
\partial^2_s\mathring\varphi( s)
+e^{-2H(r(s))} \frac{|\sigma|^2(r(s))}{n-1}\mathring\varphi(s) =0
\;.
\label{constraint_phi_alter}
\end{equation}
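Indeed, since $\partial_r s=e^{H}$ and $\partial_r H=\kappa$, one has
\begin{equation*}
\partial_r\varphi=e^{H}\partial_s\mathring\varphi
\;,
\qquad
\partial^2_r\varphi-\kappa\,\partial_r\varphi=e^{2H}\partial^2_s\mathring\varphi
\;,
\end{equation*}
so the reparameterisation \eq{dfn_mathring_varphi} precisely absorbs the $\kappa$-term of \eq{constraint_phi}.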
A global solution $\varphi>0$ of \eq{constraint_phi} exists if and only if a global solution $\mathring\varphi>0$ of
\eq{constraint_phi_alter} exists.
It follows from the considerations above
(note that $\mathring\varphi(s=0)=0$ and $\partial_s\mathring\varphi(s=0)=1$)
that a sufficient condition for global existence of positive solutions of \eq{constraint_phi_alter} is
\begin{eqnarray}
\lefteqn{\int_0^{\infty}s e^{-2H(r(s))} |\sigma|^2(r(s)) \,\mathrm{d} s < n-1
}
&&
\nonumber
\\
&&
\Longleftrightarrow \int_0^{\infty}\Big( \int_0^re^{H(\hat r)}\mathrm{d}\hat r \Big) e^{-H(r)} |\sigma|^2(r) \,\mathrm{d} r < n-1
\;.
\label{second_integral_alter}
\end{eqnarray}
Consider now the question of positivity of $\nu^0$.
In the $\kappa=0$-wave-map gauge with Minkowski metric as a target
we have (see~\cite[Equation~(4.7)]{ChConeExistence})
\begin{equation}
\nu^{0}(r,x^A)
= \frac{\varphi^{-(n-1)/2}(r,x^A)}{2}\int_0^r \Big(\hat r\varphi^{(n-1)/2}\overline{g}{}^{AB}s_{AB}\Big)(\hat r,x^A)\, \mathrm{d}\hat r
\;.
\label{11II.1}
\end{equation}
In an $s$-orthonormal coframe $\theta^{(A)}$, $\overline{g}{}^{AB}s_{AB}$ is the sum of the diagonal elements $\overline{g}{}^{(A)(A)}=\ol g^\sharp(\theta^{(A)},\theta^{(A)})$, $A=1,\ldots,n-1$, where $\ol g^\sharp$ is the scalar product on $T^*\Sigma_r$ associated to $\ol g_{AB}\mathrm{d}x^A \mathrm{d}x^B$, each of which is positive in Riemannian signature. Hence
$$
\overline{g}{}^{AB}s_{AB}>0
\;.
$$
So, for globally positive $\varphi$ we obtain a globally defined strictly positive $\nu^0$, hence also a globally defined strictly positive $\nu_0\equiv 1/\nu^0$.
When $\kappa\ne 0$,
and
further allowing a non-vanishing $\ol W{}^{0}$, we find instead
\begin{eqnarray}\nonumber
\nu^{0}(r,x^A)
& = & \frac{\left(e^{-H }\varphi^{-(n-1)/2}\right)(r,x^A)}{2}
\times
\\
&& \int_0^r
\Big( e^{H }\varphi^{(n-1)/2}(\hat r\overline{g}{}^{AB}s_{AB} - \ol W{}^0)
\Big)(\hat r,x^A)\, \mathrm{d}\hat r
\;,
\label{11II.4v2}
\end{eqnarray}
with $H$ as in \eq{30IV12.1}.
If $\ol W{}^0=0$ we obtain positivity as before.
More generally, we see that a necessary and sufficient condition for positivity of $\nu^0$ is positivity of the integral in the last line of \eq{11II.4v2} for all $r$. This will certainly be the case if the gauge-source function $\ol W{}^0$ satisfies
\begin{equation}
\ol W{}^0<r\overline{g}{}^{AB}s_{AB}= r\varphi^{-2} \Big( \frac{\det\gamma}{\det s} \Big)^{1/(n-1)}\gamma^{AB} s_{AB}
\;.
\label{cond_non-vanish_W0}
\end{equation}
Summarising, we have proved:
\begin{Proposition}
\label{P6XII13.1}
\begin{enumerate}
\item Solutions of the Raychaudhuri equation with prescribed $\kappa$ and $\sigma$ are global when \eq{9XII13.1} and \eq{second_integral_alter} hold, and lead to globally positive functions $\varphi$ and $\tau$.
\item Any global solution of the Raychaudhuri equation with $\varphi>0$ leads to a globally defined positive function $\nu_0$ when the gauge source function $\ol W^{0}$ satisfies \eq{cond_non-vanish_W0}. This condition will be satisfied for any $\ol W^{0}\le 0$.
\end{enumerate}
\end{Proposition}
\subsection{Positivity of $\varphi_{-1}$ and $(\nu^0)_0$}
\label{nonvanishing_subsection}
For reasons that will become clear in Section~\ref{s12XII13.2}, we are interested in fields $\varphi$ and $\nu^0$ which, for large $r$, take the form
\bel{13XII13.11}
\varphi(r,x^A) = {\varphi_{-1}}(x^A)r
+o(r)\;,
\quad
\nu^0(r,x^A) = ( {\nu^0})_0(x^A) +o(1)\;,
\end{equation}
with $ {\varphi_{-1}}$ and $ ( {\nu^0})_0$ positive. The object of this section is to provide conditions which guarantee existence of such expansions, assuming a global positive solution $\varphi$.
Let us further assume
that $ e^{-2H }\varphi|\sigma|^2$ is continuous in $r$ with
$$\int_0^\infty \big(e^{-2H }\varphi|\sigma|^2\big)\big|_{r=r(s)} \mathrm{d}s
=\int_0^\infty e^{- H }\varphi|\sigma|^2 \mathrm{d}r
<\infty
\;.
$$
Integration of \eq{constraint_phi_alter} and the de l'Hospital rule at infinity give
\begin{eqnarray}
\mathring \varphi_{-1}:=\lim_{s\to\infty} \frac{\mathring \varphi(s)}{s} =\lim_{s\to\infty} \partial_s \mathring \varphi(s) =1 -\frac{1}{n-1} \int_0^\infty e^{- H }\varphi|\sigma|^2\mathrm{d} r
\label{6XII13.4}
\;.
\end{eqnarray}
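In more detail, integrating \eq{constraint_phi_alter} once gives
\begin{equation*}
\partial_s\mathring\varphi(s)=1-\frac{1}{n-1}\int_0^{s} \big(e^{-2H }\varphi|\sigma|^2\big)\big|_{r=r(\tilde s)}\,\mathrm{d}\tilde s
\;,
\end{equation*}
and the change of variables $\mathrm{d}\tilde s=e^{H}\mathrm{d} r$ converts the integrand into the $e^{-H }\varphi|\sigma|^2\,\mathrm{d}r$ appearing in \eq{6XII13.4}.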
This will be strictly positive if e.g.\ \eq{second_integral_alter} holds, as
\begin{eqnarray*}
\int_0^r e^{H(\tilde r)}\mathrm{d}\tilde r -\varphi(r) &=& \int_0^r (e^{H(\tilde r)}-\partial_{\tilde r} \varphi(\tilde r)) \mathrm{d}\tilde r
\\
&=& \int_0^{r(s)} \underbrace{(1-\partial_{\tilde s}\mathring \varphi(\tilde s)) }_{\geq 0 \text{ by } \eq{6XII13.3}}\mathrm{d} \tilde s \,\geq\, 0
\;,
\end{eqnarray*}
and thus by \eq{6XII13.4} and \eq{second_integral_alter}
\begin{eqnarray*}
\mathring\varphi_{-1} \,\geq\, 1- \frac{1}{n-1} \int_0^{\infty}\Big( \int_0^re^{H(\hat r)}\mathrm{d}\hat r \Big) e^{-H(r)} |\sigma|^2(r) \,\mathrm{d} r\,>\, 0
\;.
\end{eqnarray*}
One can now use \eq{6XII13.4} to obtain \eq{13XII13.11} if we assume that the integral of $\kappa$
over $r$ converges:
\bel{6XII12.6}
\forall x^A \qquad-\infty < \beta(x^A):=\int_0^{\infty} \kappa(r,x^A)\mathrm{d}r <\infty
\;,
\end{equation}
so that
\begin{equation}
\int_0^r \kappa(s,\cdot) \mathrm{d}s = \beta(\cdot) + o (1)
\;.
\end{equation}
Indeed, it follows from \eq{6XII12.6} that there exists a constant $C$ such that the parameter $s$ defined in \eq{dfn_mathring_varphi} satisfies
\bel{12XII13.2}
C^{-1} \le \frac{\partial s}{\partial r} \le C
\;,
\quad
C^{-1} r \le s \le C r
\;,
\quad
\lim_{r\to\infty} \frac{\partial s}{\partial r} = e^\beta
\;.
\end{equation}
We then have
\begin{eqnarray}
\nonumber
\varphi_{-1} &= &
\lim_{r\to\infty} \frac{ \varphi(r)}{r} =\lim_{s\to\infty} \frac{\mathring \varphi(s )}{r(s)}=
\lim_{s\to\infty} \frac{ \partial_s \mathring \varphi(s)}{\partial_s r(s)} = e^{-\beta}\mathring\varphi_{-1}
\\
&
=
&
e^{-\beta}\bigg(1 -\frac{1}{n-1} \int_0^\infty e^{-H }\varphi|\sigma|^2\mathrm{d} r \bigg)
\label{6XII13.4xx}
\;.
\end{eqnarray}
We have proved:
\begin{proposition}
Suppose that \eq{9XII13.1}, \eq{second_integral_alter} and \eq{6XII12.6} hold.
Then the function $\varphi$ is globally positive with $\varphi_{-1}>0$.
\end{proposition}
Consider, next, the asymptotic behaviour of $\nu^0$. In addition to \eq{6XII12.6}, we assume now that $\varphi = \varphi_{-1}r + o(r)$, for some function of the angles $\varphi_{-1}$, and that there exists a bounded
function of the angles $\alpha$ such that
\beal{6XII12.5}
&
\displaystyle
r\overline g^{AB} s_{AB}-\ol W{}^0 = \frac \alpha {r} + o (r^{-1})
\;.
&
\end{eqnarray}
Passing to the limit $r\to\infty$ in \eq{11II.4v2} one obtains
\begin{eqnarray}\nonumber
\nu^{0}(r,x^A)
& = & \frac {\alpha(x^A)} {n-1} + o (1)
\;.
\label{11II.4v3}
\end{eqnarray}
We see thus that
\begin{eqnarray}\nonumber
(\nu^{0})_0 > 0
\quad
\Longleftrightarrow
\quad
\alpha > 0
\;,
\qquad
(\nu_{0})_0 > 0
\quad
\Longleftrightarrow
\quad
\alpha < \infty
\;.
\label{11II.4v4}
\end{eqnarray}
\begin{Remark}
\label{R13XII13.1}
{\rm
Note that \eq{6XII12.6} and \eq{6XII12.5} will hold with smooth functions $\alpha$ and $\beta$ when the a priori restrictions \eq{5XII13.2}-\eq{a_priori_W0}, discussed below, are satisfied and when both $\varphi$ and $\varphi_{-1} $ are positive. Recall also that if $\ol W{}^0 \le 0$ (in particular, if $\ol W{}^0 \equiv 0$), then the condition $\alpha\ge0$ follows from the fact that both $s_{AB}$ and $\ol g_{AB}$ are Riemannian.
}
\end{Remark}
So far we have justified the expansion \eq{13XII13.11}. For the purposes of Section~\ref{s9XII13.1} we need to push the expansion one order further. This is the content of the following:
\begin{Proposition}
\label{P6XII13.2}
Suppose that there exists a Riemannian metric $(\gamma_{AB})_{-2} \equiv (\gamma_{AB})_{-2}(x^C)$ and a tensor field $(\gamma_{AB})_{-1} \equiv (\gamma_{AB})_{-1}(x^C)$ on $S^{n-1}$ such that for large $r$ we have
\begin{eqnarray}
\label{12XII13.11}
&
\gamma_{AB} =r^2 (\gamma_{AB})_{-2} + r(\gamma_{AB})_{-1} + o (r ) \;,
&
\\
&
\partial_r \big(\gamma_{AB}-r^2(\gamma_{AB})_{-2} - r(\gamma_{AB})_{-1} \big)= o (1 )
\;,
&
\label{12XII13.12}
\\
\label{9XII13.1x}
&
\displaystyle
\int_0^ r \kappa(s,x^A)\mathrm{d}s = \beta_0(x^ A)+ \beta_{1}(x^A) r^{-2} + o ( r^{-2})
\;.
&
\end{eqnarray}
Assume moreover that $\varphi$ exists for all $r$, with $\varphi>0$. Then:
\begin{enumerate}
\item
There exist bounded functions of the angles $\varphi_{-1}\ge 0$ and $\varphi_{0}$ such that
\bel{12XII13.21}
\varphi (r) = \varphi_{-1} r + \varphi_{0} + O(r^{-1})\;.
\end{equation}
\item
If, in addition, $\nu_0$ exists for all $r$, if it holds that $\varphi_{-1}>0$
and if $\ol W{}^0$
takes the form
$\ol W{}^0(r,x^A)=(\ol W{}^0)_1 (x^A) r^{-1} + o(r^{-1})$ with
\bel{9XII13.3}
(\ol W{}^0)_1< s_{AB} (\ol g^{AB})_{2}=(\varphi_{-1})^{-2} \Big( \frac{\det\gamma_{-2}}{\det s} \Big)^{1/(n-1)}\gamma_{-2}^{AB} s_{AB}
\;,
\end{equation}
then
$$
0
<(\nu_0)_0<\infty
\;.
$$
\end{enumerate}
\end{Proposition}
\begin{Remark}
\label{R18XII13.1}
{\rm
If the space-time is not vacuum, then \eq{constraint_phi_alter} becomes
\begin{equation}
\partial^2_s\mathring\varphi( s)
+e^{-2H(r(s))} \frac{\big(|\sigma|^2+\ol R_{rr}\big)(r(s))}{n-1}\mathring\varphi(s) =0
\;.
\label{constraint_phi_alter nonvac}
\end{equation}
The conclusions of Proposition~\ref{P6XII13.2} then remain unchanged if we assume in addition that
\bel{18XII13.10}
\ol R_{rr} = O(r^{-4})
\;.
\end{equation}
}
\end{Remark}
{\sc Proof:}\
From \eq{definition_sigma} one finds
$$|\sigma|^2 = O(r^{-4})
\;.
$$
We have already seen that
$$
\mathring \varphi = \mathring \varphi_{-1}s + o(s)
\;.
$$
Plugging this in the second term in \eq{constraint_phi_alter} and integrating shows that
$$
\partial_s \mathring \varphi (s) = \mathring \varphi_{-1} + O(s^{-2})\;,
\quad
\mathring \varphi (s) = \mathring \varphi_{-1} s + \mathring \varphi_{0} + O(s^{-1})\;.
$$
A simple analysis of the equation relating $r$ with $s$ now gives
$$
\partial_r \varphi (r) = \varphi_{-1} + O(r^{-2})\;,
\quad
\varphi (r) = \varphi_{-1} r + \varphi_{0} + O(r^{-1})\;.
$$
This establishes point 1.
When $\varphi_{-1}$ is positive one finds that \eq{6XII12.5} holds, and from what has been said the result follows.
\hfill $\Box$ \medskip
\input{NoGoSection}
\section{Introduction}
An issue of central importance in general relativity is the understanding of gravitational radiation. This has direct implications for the soon-expected direct detection of gravitational waves. The current main effort in this topic appears to be a mixture of numerical modeling and approximation methods. From this perspective there does not seem to be a need for a better understanding of the exact properties of the gravitational field in the radiation regime. However, once observations and numerics have become routine, solid theoretical foundations for the problem will become necessary.
Now, a generally accepted framework for describing gravitational radiation seems to be the method of conformal completions of Penrose. Here a key hypothesis is that a suitable conformal rescaling of the space-time metric becomes smooth on a new manifold with boundary $\scrip$.
One then needs to face the question of whether and how such space-times can be constructed. Ultimately one would like to isolate the class of initial data, on a spacelike slice extending to spatial infinity, the evolution of which admits a Penrose-type conformal completion at infinity, and show that the class is large enough to model all physical processes at hand. Direct attempts to carry this out (see~\cite{Friedrich:tuebingen,Kroon6,Valiente-Kroon:2002fa} and references therein) have not been successful so far. Similarly, the asymptotic behaviour of the gravitational field established in~\cite{Ch-Kl,KlainermanNicoloBook,klainerman:nicolo:review,KlainermanNicoloPeeling,BieriZipser,LindbladRodnianski} is inconclusive as far as the smoothness of the conformally rescaled metric at $\scrip$ is concerned. The reader is referred to~\cite{FriedrichCMP13} for an extensive discussion of the issues arising.
On the other hand, clear-cut constructions have been carried out in less demanding settings, with data on characteristic surfaces as pioneered by Bondi et al.~\cite{BBM}, or with initial data with hyperboloidal asymptotics. It has been found~\cite{TorrenceCouch,ChMS,andersson:chrusciel:PRL,ACF}
that both generic Bondi data and generic hyperboloidal data, constructed out of conformally smooth seed data, will \emph{not} lead to space-times with a smooth conformal completion. Instead, a \emph{polyhomogeneous} asymptotics of solutions of the relevant constraint equations was obtained, with logarithmic terms appearing in asymptotic expansions of the fields.
The case for the necessity of a polyhomogeneous-at-best framework, as resulting from the above work, is not watertight: in both cases it is not clear whether initial data with logarithmic terms can arise from evolution of a physical system which is asymptotically flat in spacelike directions. There is a further issue with the Bondi expansions, because the framework of Bondi et al.~\cite{BBM,Sachs} does not provide a well-posed system of evolution equations for the problem at hand.
The aim of this work is to rederive the existence of obstructions to smoothness of the metric at $\scrip$ in a framework in which the evolution problem for the Einstein vacuum equations is well-posed and where free initial data are given on a light-cone extending to null infinity, or on two characteristic hypersurfaces one of which extends to infinity, or in a mixed setting where part of the data are prescribed on a spacelike surface and part on a characteristic one extending to infinity.
This can be viewed as a revisiting of the Bondi-type setting in a framework where an associated space-time is guaranteed to exist.
One of the attractive features of the characteristic Cauchy problem is that one can explicitly provide an exhaustive class of freely prescribable initial data. By ``exhaustive class'' we mean that the map from the space of free initial data to the set of solutions is surjective, where ``solution'' refers to that part of space-time which is covered by the domain of dependence of the smooth part of the light-cone, or of the smooth part of the null hypersurfaces issuing normally from a smooth submanifold of codimension two.
\footnote{This should be contrasted with the spacelike Cauchy problem, where no exhaustive method for constructing non-CMC initial data sets is known. It should, however, be kept in mind that the spacelike Cauchy problem does not suffer from the serious problem of formation of caustics, inherent to the characteristic one.}
There is, moreover, considerable flexibility in prescribing characteristic initial data~\cite{ChPaetz}. In this work we will concentrate on the following approaches:
\begin{enumerate}
\item The free data are a triple $({\cal N},[\gamma], \kappa)$,
where ${\cal N}$ is an $n$-dimensional manifold, $[\gamma]$ is a conformal class of symmetric two-covariant tensors on ${\cal N}$ of signature $(0,+,\ldots,+)$, and $\kappa$ is a field of connections on the bundles of tangents to the integral curves of the kernel of $\gamma$.
\footnote{Recall that a connection $\nabla$ on each such bundle is uniquely described by
writing
$
\nabla_r \partial_r = \kappa \partial_r
$, in a coordinate system where $\partial_r$ is in the kernel of $\gamma$.
Once the associated space-time has been constructed we will also have
$
\nabla_r \partial_r = \kappa \partial_r
$,
where $\nabla$ now is the covariant derivative operator associated with the space-time metric.}
\item Alternatively, the data are a triple $({\cal N},\check g, \kappa)$, where $\check g$ is a field of symmetric two-covariant tensors on ${\cal N}$ of signature $(0,+,\ldots,+)$, and $\kappa$ is a field of connections on the bundles of tangents to the integral curves of the characteristic direction of $\check g$.
\footnote{We will often write $(\check g, \kappa)$ instead of $({\cal N},\check g, \kappa)$, with ${\cal N}$ being implicitly understood, when no precise description of ${\cal N}$ is required.}
The pair $(\check g, \kappa)$ is further required to satisfy the constraint equation
\bel{10XII13.2}
\partial_r\tau - \kappa \tau + |\sigma|^2 + \frac{\tau^2}{n-1} = 0
\;,
\end{equation}
where $\tau$ is the divergence and $\sigma$ is the shear (see Section~\ref{kappa_freedom} for details), which will be referred to as the \emph{Raychaudhuri equation}.
\item Alternatively, the connection coefficient $\kappa$ and all the components of the space-time metric are prescribed on ${\cal N}$, subject to the Raychaudhuri constraint equation.
Here ${\cal N}$ is viewed as the hypersurface $\{u=0\}$ in the space-time to-be-constructed,
and thus all metric components $g_{\mu\nu}$ are prescribed at $u=0$ in a coordinate system $(x^\mu)=(u,x^i)$, where $(x^i)$ are local coordinates on ${\cal N}$.
\item Finally, schemes where tetrad components of the conformal Weyl tensor are used as
free data are briefly discussed.
\end{enumerate}
In the first two cases, to obtain a well posed evolution problem one needs to impose gauge conditions; in the third case,
the initial data themselves determine the gauge, with the ``gauge-source functions'' determined from the initial data.
The aim of this work is to analyze the occurrence of log terms in the asymptotic expansions as $r$ goes to infinity for initial data sets as above.
The gauge choice $\kappa=O(r^{-3})$ below (in particular the gauge choice $\kappa=\frac{r}{2}|\sigma|^2$,
on which we focus in part II \cite{TimAsymptotics})
ensures that affine parameters along the generators of ${\cal N}$ diverge as $r$ goes to infinity (cf.\ \cite[Appendix~B]{TimAsymptotics}), so that in the associated space-time the limit $r\to\infty$ will correspond to null geodesics approaching a (possibly non-smooth) null infinity.
It turns out that the simplest choice of gauge conditions, namely $\kappa=0$ and harmonic coordinates, is \emph{not compatible} with smooth asymptotics at the conformal boundary at infinity: we prove that the \emph{only} vacuum metric, constructed from characteristic Cauchy data on a light-cone, and which has a smooth conformal completion in this gauge, is Minkowski space-time.
{It should be pointed out that the observation that some sets of harmonic coordinates are problematic for an analysis of null infinity has already been made in \cite{ChoquetBruhat73,BlanchetPRSL87}. Our contribution here is to make a precise \emph{no-go} statement, without approximation procedures or supplementary assumptions.}
One way out of the problem is to replace the harmonic-coordinates condition by a wave-map gauge with non-vanishing gauge-source functions. This provides a useful tool to isolate those log terms which are gauge artifacts, in the sense that they can be removed from the solution by an appropriate choice of the gauge-source functions. There remain, however, some logarithmic coefficients which cannot be removed in this way. We identify those coefficients, and show that the requirement that these coefficients do not vanish is gauge-independent. In part~II of this work
we show that the logarithmic coefficients are non-zero for generic initial data.
The equations which lead to vanishing logarithmic coefficients will be referred to as the
\emph{no-logs-condition}.
It is expected that for generic initial data sets, as considered here, the space-times obtained by solving the Cauchy problem will have a polyhomogeneous expansion at null infinity.
There are, however, no theorems in the existing mathematical literature which guarantee existence of a polyhomogeneous $\scrip$ when the initial data have non-trivial log terms.
The situation is different when the no-logs-condition is satisfied. In part~II of this work
we show that the resulting initial data lead to smooth initial data for Friedrich's conformal field equations~\cite{F1} as considered in~\cite{CPW}. This implies that the no-logs-condition provides a necessary-and-sufficient condition for the evolved space-time to possess a smooth $\scrip$. For initial data close enough to Minkowskian ones, solutions global to the future are obtained.
It may still be the case that the logarithmic expansions are irrelevant as far as our understanding of gravitational radiation is concerned, either because they never arise from the evolution of isolated physical systems, or because their occurrence prevents existence of a sufficiently long evolution of the data, or because all essential physical issues are already satisfactorily described by smooth conformal completions. While we have not provided a definite answer to those questions, we hope that our results here will contribute to resolving the issue.
If not explicitly stated otherwise, all manifolds, fields, and expansion coefficients are assumed to be smooth.
\section{The characteristic Cauchy problem on a light-cone}
\label{cauchy_problem}
In this section we will review some facts concerning the characteristic Cauchy problem. Most of the discussion applies to any characteristic surface. We concentrate on a light-cone,
as in this case all the information needed is contained in the characteristic initial data together with the requirement of the smoothness
of the metric at the vertex. The remaining Cauchy problems mentioned in the Introduction will be discussed in Section~\ref{s16XII13.1} below.
\subsection{Gauge freedom}
\subsubsection{Adapted null coordinates}
\label{Adapted null coordinates}
Our starting point is a $C^{\infty}$-manifold ${\cal M} \cong\mathbb{R}^{n+1}$ and a future light-cone $C_O\subset {\cal M}$ emanating from some point $O\in {\cal M}$.
We make the assumption that the subset $C_O$ can be \textit{globally} represented in suitable coordinates $(y^{\mu})$ by the equation of a Minkowskian cone, i.e.\
\begin{equation*}
C_O = \{ (y^{\mu}) : y^0 = \sqrt{\sum_{i=1}^n (y^i)^2} \} \subset {\cal M}
\;.
\end{equation*}
Given a $C^{1,1}$-Lorentzian space-time,
such a representation is always possible in some neighbourhood of the vertex.
However, since caustics may develop along the null geodesics which generate the cone,
it is a geometric restriction to assume the existence of a Minkowskian representation globally.
A treatment of the characteristic initial value problem at hand is easier in coordinates $x^{\mu}$ adapted to the geometry of the light-cone~\cite{RendallCIVP,CCM2}. We consider space-time-dimensions $n+1\geq 3$.
It is standard to construct a set of coordinates $(x^{\mu})\equiv(u,r,x^A)$, $A=2,\dots,n$, so that $C_O\setminus\{0\}=\{u = 0\}$.
The $x^A$'s denote local coordinates on the level sets $\Sigma_r:=\{r=\text{const},u=0\}\cong S^{n-1}$, and are constant along the generators. The coordinate $r$ induces, by restriction, a parameterization of the generators and is chosen so that the point $O$ is approached when $r \rightarrow 0$.
The general form of the trace $\overline g$ on the cone $C_O$ of the space-time metric $g$ reduces in these \textit{adapted null coordinates} to
\begin{equation}
\overline g = \overline g_{00}\mathrm{d}u^2 + 2\nu_0\mathrm{d}u\mathrm{d}r + 2\nu_A\mathrm{d}u\mathrm{d}x^A + \check g
\;,
\label{null2}
\end{equation}
where
\begin{equation*}
\nu_0:=\overline g_{01}\;, \quad \nu_A:=\overline g_{0A}\;,
\end{equation*}
and where
\begin{equation*}
\check g = \check g_{AB}\mathrm{d}x^A\mathrm{d}x^B := \overline g_{AB}\mathrm{d}x^A\mathrm{d}x^B
\end{equation*}
is a degenerate quadratic form induced by $g$ on $C_O$ which induces on each slice $\Sigma_r$ an $r$-dependent Riemannian metric $\check g_{\Sigma_r}$ (coinciding with $\check g (r,\cdot)$ in the coordinates above).
\footnote{The degenerate quadratic form denoted here by $\check g$ has been denoted by $\tilde g$ in~\cite{CCM2,ChPaetz}. However, here we will use~$\tilde g$ to denote the conformally rescaled unphysical metric, as done in most of the literature on the subject.}
The components $\overline g_{00}$, $\nu_0$ and $\nu_A$ are gauge-dependent quantities.
In particular, $\nu_0$ changes sign when $u$ is replaced by $-u$. Whenever useful and/or relevant, we will assume that $\partial_r$ is future-directed and $\partial_u$ is past-directed, which corresponds to requiring that $\nu_0>0$.
The quadratic form $\check g$ is intrinsically defined on $C_O$,
independently of the choice of the parameter $r$ and of how the coordinates are extended off the cone.
Throughout this work an overline denotes the restriction of space-time objects to $C_O$.
The restriction of the inverse metric to the light-cone takes the form
\begin{equation*}
{\overline g}^\# \equiv \overline g^{\mu\nu} \partial_\mu\partial_\nu= 2\nu^0\partial_u\partial_r + \overline g^{11} \partial_r\partial_r + 2\overline g^{1A}\partial_r\partial_A + \overline g^{AB}\partial_A\partial_B
\;,
\end{equation*}
where
\begin{equation*}
\nu^0:=\overline g^{01}=(\nu_0)^{-1}\;, \enspace \nu^A:=\overline g^{AB}\nu_B \;,\enspace
\overline g^{1A} =-\nu^0\nu^A\;,\enspace \overline g^{11}=(\nu^0)^2(\nu^A\nu_A - \overline g_{00})
\;,
\end{equation*}
and where $\overline g^{AB}$ is the inverse of $\overline g_{AB}$.
The coordinate transformation relating the two coordinate systems $(y^{\mu})$ and $(x^{\mu})$ takes the form
\begin{equation*}
u = \hat r - y^0\;, \quad r = \hat r\;, \quad x^A=\mu^A(y^i/\hat r)\;, \quad \text{with} \quad \hat r := \sqrt{\sum_i (y^i)^2 }
\;.
\end{equation*}
The inverse transformation reads
\begin{equation*}
y^0=r-u\;, \quad y^i=r\Theta^i(x^A)\;, \quad \text{with}\quad \sum_i(\Theta^i)^2=1
\;.
\end{equation*}
Adapted null coordinates are singular at the vertex of the cone $C_O$ and $C^{\infty}$ elsewhere.
They are convenient to analyze the initial data constraints satisfied by the trace $\overline g$ on the light-cone.
Note that the space-time metric $g$ will in general not be of the form (\ref{null2}) away from $C_O$.
We further remark that adapted null coordinates are not uniquely fixed, for there remains the freedom to redefine the coordinate $r$ (the only restriction being that $r$ is strictly increasing on the generators and that $r=0$ at the vertex; compare Section~\ref{kappa_freedom} below),
and to choose local coordinates on $S^{n-1}$.
\subsubsection{Generalized wave-map gauge}
\label{generalizedwg}
Let us be given an auxiliary Lorentzian metric $\hat g$.
A standard method to establish existence and well-posedness results for Einstein's vacuum field equations $R_{\mu\nu}=0$ is a ``hyperbolic reduction'' where the Ricci tensor is replaced
by the \textit{reduced Ricci tensor in $\hat g$-wave-map gauge},
\begin{equation}
\label{17V14.1}
R^{(H)}_{\mu\nu} := R_{\mu\nu} - g_{\sigma(\mu}\hat\nabla_{\nu)}H^{\sigma}
\;.
\end{equation}
Here
\begin{equation}
H^{\lambda} :=\Gamma^{\lambda}-\hat \Gamma^{\lambda} - W^{\lambda}
\;, \quad
\Gamma^{\lambda}:= g^{\alpha\beta}\Gamma^{\lambda}_{\alpha\beta}\;, \quad
\hat \Gamma^{\lambda}:= g^{\alpha\beta}\hat \Gamma^{\lambda}_{\alpha\beta}
\;.
\end{equation}
We use the hat symbol ``$\,\hat\enspace\,$'' to indicate quantities associated with the \textit{target metric $\hat g$},
while
$W^{\lambda}=W^{\lambda}(x^{\mu},g_{\mu\nu})$ denotes a vector field which is allowed to depend upon the coordinates
and the metric $g$, but not upon derivatives of $g$.
The \textit{wave-gauge vector $H^{\lambda}$} has been chosen in the above form~\cite{FriedrichCMP,Friedrich:hyperbolicreview,CCM2} to remove some second-derivative terms in the Ricci tensor, so that the \emph{reduced vacuum
Einstein equations}
\bel{17V14.3}
R^{(H)}_{\mu\nu}=0
\end{equation}
form a system of quasi-linear wave equations for $g$.
Any solution of \eq{17V14.3} will provide a solution of the vacuum Einstein equations provided that the so-called \textit{$\hat g$-generalized wave-map gauge condition}
\bel{17V14.2}
H^\lambda=0
\end{equation}
is satisfied. In the context of the characteristic initial value problem, the ``gauge condition'' \eq{17V14.2} is satisfied by solutions of the reduced Einstein equations if it is satisfied on the initial characteristic hypersurfaces.
The vector field $W^\lambda$ reflects the freedom to choose coordinates off the cone. Its components can be freely specified, or chosen to satisfy ad hoc equations.
Indeed, by a suitable choice of coordinates the gauge source functions $W^{\lambda}$ can locally be given any preassigned form, and conversely the $W^{\lambda}$'s can be used to determine coordinates by solving wave equations, given appropriate initial data on the cone.
In most of this work we will use a Minkowski target in adapted null coordinates, that is
\begin{equation}
\hat g = \eta \equiv -\mathrm{d}u^2 + 2\mathrm{d}u\mathrm{d}r + r^2s_{AB}\mathrm{d}x^A\mathrm{d}x^B
\;,
\label{Minktarget}
\end{equation}
where $s$ is the unit round metric on the sphere $S^{n-1}$.
\subsection{The first constraint equation}
\label{kappa_freedom}
Set $\ell\equiv \ell^{\mu}\partial_\mu\equiv \partial_r$.
The Raychaudhuri equation $\overline R_{\mu\nu}\ell^{\mu}\ell^{\nu}\equiv \overline R_{11}=0$ provides a constraining relation between the connection coefficient $\kappa$ and other geometric objects on $C_O$, as follows: Recall that the \textit{null second fundamental form} of $C_O$ is defined as
\begin{equation*}
\chi_{ij} \,:=\, \frac{1}{2} (\mathcal{L}_{\ell} \check g)_{ij}
\;,
\end{equation*}
where $\mathcal{L}$ denotes the Lie derivative. In the adapted coordinates described above we have
\begin{equation*}
\chi_{AB} = -\overline\Gamma{}^0_{AB}\nu_0 = \frac{1}{2}\partial_r\overline g_{AB}
\;, \quad
\chi_{11}\,=\,0\;, \quad \chi_{1A}\,=\,0
\;.
\end{equation*}
The null second fundamental form is sometimes called \emph{null extrinsic curvature} of the initial surface $C_O$, which is misleading since
only objects intrinsic to $C_O$ are involved in its definition.
The \textit{mean null extrinsic curvature} of $C_O$, or the \textit{divergence} of $C_O$, which we denote by $\tau$
and which is often denoted by $\theta$ in the literature, is defined as the trace of $\chi$:
\begin{equation}
\tau:= \chi_A^{\phantom{A}A}\equiv \overline g^{AB}\chi_{AB}\equiv \frac{1}{2}\overline g^{AB}\partial_r\overline g_{AB} \equiv \partial_r{\log\sqrt{\det\check g_{\Sigma_r}}}
\;.
\label{definition_tau}
\end{equation}
It measures the rate of change of area along the null geodesic generators of $C_O$.
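For instance, for the flat-cone data $\overline g_{AB}=r^{2}s_{AB}$ of the Minkowskian light-cone one finds
\begin{equation*}
\tau = \frac{1}{2}\,r^{-2}s^{AB}\,\partial_r\big(r^{2}s_{AB}\big) = \frac{n-1}{r}
\;,
\end{equation*}
so that deviations of $\tau$ from $(n-1)/r$ measure the failure of the area element to grow at the Minkowskian rate.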
The traceless part of $\chi$,
\begin{eqnarray}
\sigma_A^{\phantom{A}B} &:=& \chi_A^{\phantom{A}B} - \frac{1}{n-1}\delta_A^{\phantom{A}B}\tau \,\equiv\, \overline g^{BC}\chi_{AC} - \frac{1}{n-1}\delta_A^{\phantom{A}B}\tau
\label{definition_sigmaAB}
\\
&=& \frac{1}{2}\gamma^{BC}(\partial_r\gamma_{AC})\breve{}
\;,
\label{formula_sigmaAB}
\end{eqnarray}
is known as the \textit{shear} of $C_O$.
In (\ref{formula_sigmaAB}) the field $\gamma$ is any representative of the conformal class of $\check g_{\Sigma_r}$, which is sometimes regarded as the free initial data.
The addition of the ``$\breve{~~} $''-symbol to a tensor $w_{AB}$ denotes ``the trace-free part of'':
\begin{equation}
\label{12XII13.1}
\breve{w}_{AB} :=w_{AB} - \frac{1}{n-1} \gamma_{AB}\gamma^{CD}w_{CD}
\;.
\end{equation}
We set
\begin{eqnarray}
|\sigma|^2 &:=& \sigma_A^{\phantom{A}B}\sigma_B^{\phantom{B}A} = - \frac{1}{4}(\partial_r\gamma^{AB})\breve{}\,(\partial_r\gamma_{AB})\breve{}
\label{definition_sigma}
\;.
\end{eqnarray}
We thus observe that the shear $\sigma_A{}^B$ depends merely on the conformal class of $\check g_{\Sigma_r}$.
This is not true for $\tau$, which is instead in one-to-one correspondence with the conformal factor relating $\check g_{\Sigma_r}$ and $\gamma$.
Imposing the generalized wave-map gauge condition $H^{\lambda}=0$, the wave-gauge constraint equation induced by $\ol R_{11}=0$
reads \cite[equation (6.13)]{CCM2},
\begin{equation}
\partial_r\tau - \underbrace{\Big( \nu^0\partial_r\nu_0 - \frac{1}{2}\nu_0(\overline W{}^0 + \ol {\hat\Gamma}^0) - \frac{1}{2}\tau \Big)}_{=:\kappa}\tau + |\sigma|^2 + \frac{\tau^2}{n-1} = 0
\;.
\label{R11_constraint}
\end{equation}
Under the allowed changes of the coordinate $r$, $r\mapsto \overline r(r,x^A)$, with $\partial \overline r/\partial r>0$, $\overline r(0,x^A)=0$,
the tensor field $g_{AB}$ transforms as a scalar,
\bel{17V14.7}
\overline g_{AB}(\overline r, x^C)
=
g_{AB}(r(\overline r, x^C),x^C)
\;,
\end{equation}
the field $\kappa$ changes as a connection coefficient
\bel{17V14.5}
\bar \kappa = \frac{\partial r}{\partial\overline r} \kappa + \frac{\partial \overline r}{\partial r} \frac{\partial^2 r}{\partial \overline r ^2}
\;,
\end{equation}
while $\tau$ and $\sigma_{AB}$ transform as one-forms:
\bel{17V14.6}
\overline \tau = \frac {\partial r}{\partial \overline r} \tau\;,
\quad
\overline \sigma_{AB} = \frac {\partial r}{\partial \overline r} \sigma_{AB}
\;.
\end{equation}
The freedom to choose $\kappa$ is thus directly related to the freedom to reparameterize the generators of $C_O$.
Geometrically, $\kappa$ describes the acceleration of the integral curves of $\ell$, as seen from the identity $\nabla_{\ell}\ell^{\mu}=\kappa\ell^{\mu}$.
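The transformation law \eq{17V14.5} can be checked directly from this identity: writing $\partial_{\overline r}=\frac{\partial r}{\partial \overline r}\,\partial_r$ one obtains
\begin{equation*}
\nabla_{\partial_{\overline r}}\partial_{\overline r}
= \frac{\partial^{2} r}{\partial \overline r^{2}}\,\partial_r
+\Big(\frac{\partial r}{\partial \overline r}\Big)^{2}\kappa\,\partial_r
= \Big(\frac{\partial r}{\partial \overline r}\,\kappa
+ \frac{\partial \overline r}{\partial r}\,\frac{\partial^{2} r}{\partial \overline r^{2}}\Big)\,\partial_{\overline r}
\;.
\end{equation*}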
The choice $\kappa=0$ corresponds to the requirement that the coordinate $r$ be an affine parameter along the rays.
For a given $\kappa$ the first constraint equation splits into an equation for $\tau$ and, once this has been solved, an equation for $\nu_0$.
Once a parameterization of generators has been chosen,
we see that the metric function $\nu_0$ is largely determined by the choice of the gauge-source function $\overline W{}^0$ and, in fact,
the remaining gauge-freedom in $\nu_0$ can be encoded in $\overline W{}^0$.
\subsection{The wave-map gauge characteristic constraint equations}
Here we present the whole hierarchical ODE-system
of Einstein wave-map gauge constraints induced by the vacuum Einstein equations in a generalized wave-map gauge (cf.~\cite{CCM2} for details)
for given initial data $([\gamma],\kappa)$ and gauge source-functions $\overline W^\lambda$.
The equation \eq{R11_constraint} induced by $\ol R_{11}=0$ leads to the equations
\begin{eqnarray}
\partial_r\tau - \kappa \tau + |\sigma|^2 + \frac{\tau^2}{n-1} &=& 0
\;,
\label{constraint_tau}
\\
\partial_r\nu^0 + \frac{1}{2}(\overline W{}^0+ \ol {\hat\Gamma}^0) + \nu^0(\frac{1}{2}\tau + \kappa ) &=& 0
\;.
\label{constraint_nu0}
\end{eqnarray}
Equation \eq{constraint_tau} is a Riccati differential equation for $\tau$ along each null ray; for $\kappa=0$ it reduces to the standard form of the Raychaudhuri equation.
Equation \eq{constraint_nu0} is expressed in terms of
$$
\nu^0:=\frac 1 {\nu_0}
$$
rather than of $\nu_0$, as then it becomes linear.
Our aim is to analyze the asymptotic behaviour of solutions of the constraints; for this
it turns out to be convenient to introduce an auxiliary positive function $\varphi$, defined as
\begin{equation}
\tau =(n-1)\partial_r\log\varphi
\;,
\label{relation_tau_phi}
\end{equation}
which transforms \eq{constraint_tau} into a second-order \textit{linear} ODE,
\begin{equation}
\partial^2_{r}\varphi -\kappa\partial_r\varphi + \frac{|\sigma|^2}{n-1}\varphi =0
\;.
\label{constraint_phi}
\end{equation}
The function $\varphi$ is essentially a rewriting of the conformal factor $\Omega$ relating $\check g $
and the initial data $\gamma$,
$\overline g_{AB} = \Omega^2 \gamma_{AB}$:
\begin{equation}
\Omega = \varphi \left( \frac{\det s}{\det \gamma}\right)^{1/(2n-2)}
\;.
\label{definition_Omega}
\end{equation}
Here $s=s_{AB}\mathrm{d}x^A\mathrm{d}x^B$ denotes the standard metric on $S^{n-1}$. The symmetric tensor field $\gamma=\gamma_{AB}\mathrm{d}x^A \mathrm{d}x^B$ providing the initial data is assumed to form a
one-parameter family of Riemannian metrics $r\mapsto \gamma(r,x^A)$ on $S^{n-1}$.
The boundary conditions at the vertex $O$ of the cone for the ODEs occurring in this work follow from the requirement of regularity of the metric there.
When imposed, they guarantee that (\ref{constraint_nu0}) and (\ref{constraint_phi}), as well as all the remaining constraint equations below, have unique solutions. The relevant conditions at
the vertex have been computed in regular coordinates and then translated into adapted null coordinates in~\cite{CCM2} for a natural family of gauges.
For $\nu^0$ and $\varphi$ the boundary conditions read
\begin{eqnarray*}
\begin{cases}
\lim_{r\rightarrow 0}\nu^0 = 1
\;,
\\
\lim_{r\rightarrow 0}\varphi=0\;,\quad \lim_{r\rightarrow 0}\partial_r\varphi=1
\;.
\end{cases}
\end{eqnarray*}
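As a simple consistency check: for the Minkowskian data $\gamma_{AB}=r^{2}s_{AB}$ with $\kappa=0$ (so that $\sigma=0$ by \eq{definition_sigma}), equation \eq{constraint_phi} reduces to $\partial_r^2\varphi=0$, and the boundary conditions give $\varphi=r$; then \eq{relation_tau_phi} returns $\tau=(n-1)/r$, while \eq{definition_Omega} yields $\Omega=1$ and hence $\overline g_{AB}=\gamma_{AB}$.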
The Einstein equations $\overline R_{1A} = 0$ imply the equations
\cite[Equation (9.2)]{CCM2} (compare~\cite[Equation~(3.12)]{ChPaetz})
\begin{eqnarray}
\frac{1}{2}(\partial_r + \tau)\xi_A - \check \nabla_B \sigma_A^{\phantom{A}B} + \frac{n-2}{n-1}\partial_A\tau +\partial_A \kappa
=0
\;,
\label{eqn_nuA_general}
\end{eqnarray}
where $\check \nabla$ denotes the Riemannian connection defined by $\check g_{\Sigma_r}$,
and
$$
\xi_A:=-2\ol \Gamma^1_{1A}
\;.
$$
When $\ol H^0=0 $ one has $\ol H^A=0$ if and only if
\begin{eqnarray}
\xi_A
&= & -2\nu^0\partial_r\nu_A + 4\nu^0\nu_B\chi_A{}^B + \nu_A(\overline W{}^0+ \ol {\hat\Gamma}^0) + \overline g_{AB}( \overline W{}^B
+ \ol {\hat\Gamma}^B)
\nonumber
\\
&&- \gamma_{AB} \gamma^{CD} \check \Gamma{}^B_{CD}
\;.
\label{eqn_xiA}
\end{eqnarray}
Here $\check \Gamma^B_{CD}$ are the Christoffel symbols associated to the metric $\check g_{\Sigma_r}$.
Given fields $\kappa$ and $\overline g_{AB}=g_{AB}|_{u=0}$ satisfying the Raychaudhuri constraint equation,
the equations (\ref{eqn_nuA_general}) and \eq{eqn_xiA} can be read as a hierarchical linear first-order PDE-system which successively determines
$\xi_A$ and $\nu_A$ by solving ODEs. The boundary conditions at the vertex are
\begin{equation*}
\lim_{r\rightarrow 0}\nu_A = 0=\lim_{r\rightarrow 0}\xi_A
\;.
\end{equation*}
The remaining constraint equation follows from the Einstein equation $\overline g^{AB} \overline R_{AB} = 0$
\cite[Equations (10.33) \& (10.36)]{CCM2},
\begin{eqnarray}
(\partial_r + \tau + \kappa)\zeta + \check R - \frac{1}{2}\xi_A\xi^A +\check \nabla_A\xi^A =0
\;,
\label{zeta_constraint}
\end{eqnarray}
where we have set $\xi^A:= \ol g^{AB}\xi_B$.
The function $\check R$ is the curvature scalar associated to $\check g_{\Sigma_r}$.
The auxiliary function $\zeta$ is defined as
\begin{equation}
\zeta:= (2\partial_r + \tau + 2\kappa)\overline g^{11} + 2\overline W{}^1 + 2 \ol {\hat\Gamma}^1
\;,
\label{dfn_zeta}
\end{equation}
and satisfies, if $\ol H^{\lambda}=0$, the relation $\zeta=2\ol g^{AB}\ol\Gamma^1_{AB} + \tau \ol g^{11}$.
The term $\ol{{\hat\Gamma}}{}^1$ depends upon the target metric chosen, and with our current Minkowski target $\hat g=\eta$ we have
\begin{equation}
\label{5XII13.1}
\ol {\hat\Gamma}^1 = \ol {\hat\Gamma}^0= - r\overline g^{AB}s_{AB}
\;.
\end{equation}
Taking the relation
\begin{equation}
\overline g^{11} = (\nu^0)^2(\nu^A\nu_A - \overline g_{00})
\end{equation}
into account, the definition \eq{dfn_zeta} of $\zeta$ becomes an equation for $\overline g_{00}$ once $\zeta$ has been determined.
The boundary conditions for \eq{zeta_constraint} and \eq{dfn_zeta} are
\begin{equation*}
\lim_{r\rightarrow 0}\overline g^{11} =1\;,\quad \lim_{r\rightarrow 0}(\zeta+2 r^{-1}) =0
\;.
\end{equation*}
\input{GlobalSolutions}
\section{Preliminaries to solve the constraints asymptotically}
\label{s12XII13.2}
\subsection{Notation and terminology}
\label{ss12XII13.2}
Consider a metric which has a smooth, or polyhomogeneous, conformal completion at infinity \emph{\`a la Penrose}, and suppose that the closure (in the completed space-time) $\ol{{\cal N}}$ of a null hypersurface ${\cal N}$ of $O$ meets $\scrip$ in a smooth sphere.
One can then introduce Bondi coordinates $(u,r,x^A)$ near $\scrip$, with $\ol{{\cal N}}\cap \scrip$ being the level set of a Bondi retarded coordinate $u$ (see~\cite{TamburinoWinicour} in the smooth case, and \cite[Appendix~B]{ChMS} in the polyhomogeneous case).
The resulting Bondi area coordinate $r$ behaves as $1/\Omega$, where $\Omega$ is the compactifying factor. If one uses $\Omega$ as one of the coordinates near $\scrip$, say $x$, and chooses $1/x$ as a parameter along the generators of ${\cal N}$, one is led to an asymptotic behaviour of the metric which is captured by the following definition:
\begin{definition}
\label{definition_smooth}
\rm{
We say that a smooth metric tensor $\overline g_{\mu\nu}$ defined on a null hypersurface ${\cal N}$ given in adapted null coordinates has a
\textit{smooth conformal completion at infinity}
if the unphysical metric tensor field $\overline{\tilde g}_{\mu\nu}$
obtained via
the coordinate transformation $r\mapsto 1/r=: x$ and the conformal rescaling $\overline g\mapsto \overline{\tilde g} \equiv x^2 \overline g$
is, as a Lorentzian metric, smoothly extendable across $\{x=0\}$. We will say that $\overline g_{\mu\nu}$ is \emph{polyhomogeneous} if the conformal extension obtained as above is polyhomogeneous at $\{x=0\}$, see Appendix~\ref{A22XII13.1}.
The components of a smooth tensor field on ${\cal N}$ will be said to be \textit{smooth at infinity},
respectively \emph{polyhomogeneous at infinity}, whenever they admit, in the $(x,x^A)$-coordinates, a smooth, respectively polyhomogeneous, extension in the conformally rescaled space-time across $\{x=0\}$.
}
\end{definition}
\begin{Remark}
\label{R12XII13.1}
{\rm The reader is warned that the definition contains an implicit restriction, that} ${\cal N}$
is a smooth hypersurface in the conformally completed space-time. {\rm In the case of a light-cone, this excludes existence of points which are conjugate
to $O$ both in ${\cal M}$ and on $\overline C_O\cap \scrip$.}
\end{Remark}
We emphasise that Definition~\ref{definition_smooth} concerns only fields on ${\cal N}$, and \emph{no assumptions are made concerning existence, or properties, of an associated space-time.} In particular there \emph{might not} {be} an associated space-time; and if there is one, it \emph{might or might not} have a smooth completion through a conformal boundary at null infinity.
The conditions of the definition are both conditions on the metric and on the coordinate system. While the definition restricts the class of parameters $r$, there remains considerable freedom, which will be exploited in what follows.
It should be clear that the existence of a coordinate system as above on a globally-smooth light-cone is a necessary condition for a space-time to admit a smooth conformal completion at null infinity, for points $O$ such that $\overline{C}_O\cap \scrip$ forms a smooth hypersurface in the conformally completed space-time.
Consider a real-valued function
\begin{equation*}
f : (0,\infty)\times S^{n-1} \longrightarrow \mathbb{R} \;, \quad (r,x^A) \longmapsto f(r,x^A)
\;.
\end{equation*}
If this function admits an asymptotic expansion in terms of powers of $r$ (whether to finite or arbitrarily high order)
we denote by $f_n$, or $(f)_n$, the coefficient of $r^{-n}$
in the expansion.
We will write $f=\mathcal{O}(r^N)$ (or $f=\mathcal{O}(x^{-N})$, $x\equiv1/r$), $N\in\mathbb{Z}$ if
the function
\begin{equation}
\label{4XII13.1}
F(x,\cdot) := x^N f(x^{-1},\cdot )
\end{equation}
is smooth at $x=0$. We emphasize that this is a restriction on $f$ for large $r$, and the condition does not say anything
about the behaviour of $f$ near the vertex of the cone (whenever relevant), where $r$ approaches zero.
We write
\[ f(r,x^A) \sim
\sum_{k=-N}^{\infty} f_k(x^A) r^{-k} \]
if the right-hand side is the asymptotic expansion at $x=0$ of the function $x\mapsto r^{-N} f(r, \cdot)|_{r=1/x}$, compare Appendix~\ref{A22XII13.1}.
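As an example, $f(r)=\sqrt{r^{2}+1}$ satisfies $f=\mathcal{O}(r)$: here $F(x)=x\,f(x^{-1})=\sqrt{1+x^{2}}$ is smooth at $x=0$, and
\begin{equation*}
f(r) \sim r + \tfrac{1}{2}\,r^{-1} - \tfrac{1}{8}\,r^{-3} + \ldots\;,
\end{equation*}
so that $f_{-1}=1$, $f_{0}=0$ and $f_{1}=\tfrac 12$.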
The next lemma summarizes some useful properties of the symbol $\mathcal{O}$:
\begin{lemma}
Let $f=\mathcal{O}(r^N)$ and $g=\mathcal{O}(r^M)$ with $N,M\in\mathbb{Z}$.
\begin{enumerate}
\item $f$ can be asymptotically expanded as a power series starting from $r^N$,
\[ f(r,x^A) \sim
\sum_{k=-N}^{\infty} f_k(x^A) r^{-k} \]
for some suitable smooth functions $f_k: S^{n-1} \rightarrow \mathbb{R}$.
\item The $n$-th order derivative, $n\geq 0$, satisfies
\[ \partial^n_rf(r,x^A) = \begin{cases}
\mathcal{O}(r^{N-n})
\;,
& \text{for $N<0$,} \\
\mathcal{O}(r^{N-n})
\;,
& \text{for $N\geq 0$ and $N-n\geq 0$,} \\
\mathcal{O}(r^{N-n-1})
\;,
& \text{for $N\geq 0$ and $N-n\leq -1$,}
\end{cases}
\]
as well as
\[ \partial^n_A f(r,x^B) = \mathcal{O}(r^N)\;. \]
\item $f^n g^m = \mathcal{O}(r^{nN+mM})$ for all $n,m\in\mathbb{Z}$.
\end{enumerate}
\end{lemma}
\subsection{Some a priori restrictions}
\label{apriorirestrictions}
In order to solve the constraint equations asymptotically and derive necessary-and-sufficient conditions concerning smoothness of the solutions at infinity in adapted coordinates, it is convenient to have some a priori knowledge regarding the lowest admissible orders of certain functions appearing in these equations,
and to exclude the appearance of logarithmic terms in the expansions of fields such as $\xi_A$ and $\ol W^{\lambda}$.
Let us therefore derive the necessary restrictions on the metric, the gauge source functions, etc.\
needed to end up with a trace of a metric on the light-cone which admits a smooth conformal completion at infinity.
\subsubsection{Non-vanishing of $\varphi$ and $\nu^0$}
As described above, the Einstein wave-map gauge constraints
can be represented as a system of \textit{linear} ODEs for $\varphi$, $\nu^0$, $\nu_A$ and $\ol{g}^{11}$,
so that existence and
uniqueness (with the described boundary conditions)
of global solutions is guaranteed if the coefficients in the relevant ODEs are globally defined.
Indeed, we have to make sure that the resulting symmetric tensor field $\overline g_{\mu\nu}$ does not degenerate, so that it represents a regular Lorentzian metric in the respective adapted null coordinate system.
In a setting where the starting point are conformal data $\gamma_{AB}(r,\cdot)\mathrm{d}x^A\mathrm{d}x^B$ which
define a Riemannian metric for all $r>0$, this will be the case if and only if $\varphi$ and $\nu^0$ are nowhere vanishing, in fact \emph{strictly positive} in our conventions,
\begin{equation}
\varphi >0 \;, \ \nu^0 >0 \quad \forall \,r>0\;.
\end{equation}
\subsubsection{A priori restrictions on $\overline g_{\mu\nu}$}
\label{apriori_subsection}
Assume that $\ol g_{\mu\nu}$ admits a smooth conformal completion in the sense of Definition~\ref{definition_smooth}. Then
its conformally rescaled counterpart $\ol{\tilde g}_{\mu\nu} \equiv x^2 \ol g_{\mu\nu}$
satisfies
\begin{equation}
\overline{\tilde g}_{\mu\nu} = \mathcal{O}(1) \quad \text{with} \quad \ol{\tilde g}_{0x}|_{x=0}> 0\;, \quad \det \ol{\tilde g}_{AB}|_{x=0} > 0
\;.
\end{equation}
This imposes the following restrictions on the admissible asymptotic form of the components $g_{\mu\nu}$ in adapted null coordinates $(u,r\equiv 1/x,x^A)$:
\begin{equation}
\nu_0 \,=\, \mathcal{O}(1)\;, \quad
\nu_A \,=\, \mathcal{O}(r^2)\;, \quad
\overline g_{00} \,=\, \mathcal{O}(r^2)\;, \quad
\overline g_{AB} \,=\, \mathcal{O}(r^2)
\;,
\label{asympt_rest}
\end{equation}
with
\begin{equation}
(\nu_0)_0 > 0 \quad \text{and} \quad (\det \check g_{\Sigma_r})_{-4}> 0
\;.
\label{det_cond}
\end{equation}
Moreover,
\begin{eqnarray}
\tau \,\equiv\, \frac{1}{2}\overline g{}^{AB}\partial_r \overline g_{AB} \,=\, \frac{n-1}{r} + \mathcal{O}(r^{-2})\;,
\end{eqnarray}
and (recall that $\tau = (n-1)\partial_r\log\varphi $)
\begin{eqnarray}
\varphi \,=\,\varphi_{-1}r + \mathcal{O}(1) \quad \text{for some positive function $\varphi_{-1} $ on $S^{n-1}$.}
\label{asympt_rest_phi}
\end{eqnarray}
Indeed, assuming that $\varphi_{-1}$ vanishes for some $x^A$, the function $\varphi$ does not diverge as $r$ goes to infinity along some null ray $\Upsilon$ emanating from $O$,
i.e.\ $\varphi|_{\Upsilon}= \mathcal{O}(1)$ and $\det \check g_{\Sigma_r}|_{\Upsilon} \equiv (\varphi^{2(n-1)}\det s) |_{\Upsilon} = \mathcal{O}(1)$,
which is incompatible with~\eq{det_cond}.
The assumptions $\varphi(r,x^A)>0$ and $\varphi_{-1}(x^A)>0$ imply the non-existence of conjugate points on the light-cone up-to-and-including conformal infinity.
\subsubsection{A priori restrictions on gauge source functions}
Assume that there exists a smooth conformal completion of the metric,
as in Definition~\ref{definition_smooth}. We wish to find the class of gauge functions
$\kappa$ and $\ol W^
\mu$ which are compatible with this asymptotic behaviour.
The relation $\overline g_{AB}= \mathcal{O}(r^2)$ together with $\partial_r = -x^2\partial_x$ and the definition \eq{definition_sigmaAB} implies
\begin{equation}
\mbox{ $ \sigma_A{}^B=\mathcal{O}(r^{-2})$,\quad $|\sigma|^2=\mathcal{O}(r^{-4})$. }
\label{5XII13.2}
\end{equation}
Using the estimate (\ref{asympt_rest_phi}) for $\tau$ and the Raychaudhuri equation \eq{constraint_tau} we find
\begin{equation}
\kappa= \mathcal{O}(r^{-3})
\;,
\label{a_priori_kappa}
\end{equation}
where cancellations in both the leading and the next-to-leading terms in \eq{constraint_tau} have been used.
Then \eq{constraint_nu0}, \eq{asympt_rest}, \eq{asympt_rest_phi} and \eq{a_priori_kappa} imply
\begin{equation}
\ol W^0 = \mathcal{O}(r^{-1})
\;.
\label{a_priori_W0}
\end{equation}
Similarly to $\kappa = \ol \Gamma^r_{rr}$, $\xi_A$
corresponds to the restriction to $C_O$ of certain connection coefficients (cf.\ \cite{CCM2,ChPaetz})
\begin{eqnarray*}
\xi_A = -2\ol \Gamma^r_{rA}
\;.
\end{eqnarray*}
We will use this equation to determine the asymptotic behaviour of $\xi_A$; the main point is to show that there needs to exist a gauge in which $\xi_A$ has no logarithmic terms. We note that the argument here requires assumptions about the whole space-time metric and some of its derivatives transverse to the characteristic initial surface, rather than on ${\overline{g}}_{AB}$.
A necessary condition for the space-time metric to be smoothly extendable across $\scri^+$
is that
the Christoffel symbols of the unphysical metric $\tilde g$ in coordinates $(u,x\equiv 1/r,x^A)$ are smooth at $\scri^+$,
in particular
\begin{equation}
\ol {\tilde\Gamma}{}^x_{xA}=\mathcal{O}(1) \;.
\label{smooth_Christoffels}
\end{equation}
The formula for the transformation of Christoffel symbols under conformal rescalings of the metric, $\tilde g = \Theta^2 g$, reads
\begin{eqnarray*}
\tilde\Gamma^{\rho}_{\mu\nu} &=& \Gamma^{\rho}_{\mu\nu} + \frac{1}{\Theta}\left(\delta_{\nu}^{\phantom{\nu}\rho}\partial_{\mu} \Theta + \delta_{\mu}^{\phantom{\mu}\rho}\partial_{\nu} \Theta -g_{\mu\nu}g^{\rho\sigma}\partial_{\sigma} \Theta\right)
\;,
\end{eqnarray*}
and
shows that \eq{smooth_Christoffels} is equivalent to
\begin{equation}
\ol\Gamma^x_{xA}= \mathcal{O}(1)
\;, \quad
\text{or} \quad \ol\Gamma^r_{rA}= \mathcal{O}(1)
\;;
\end{equation}
the second equation is obtained from the first one using the transformation law of the Christoffel symbols under the coordinate transformation $x\mapsto r\equiv 1/x$.
Hence $\xi_A=\mathcal{O}(1)$. Inspection of the leading-order terms in \eq{eqn_nuA_general} leads now to
\begin{equation}
\xi_A = \mathcal{O}(r^{-1})
\;.
\end{equation}
One can insert all this into \eq{eqn_xiA}, viewed as an equation for $ \ol W^A$, to obtain
\begin{eqnarray*}
\ol W^A = \mathcal{O}(r^{-1})
\;.
\end{eqnarray*}
We note the formula
$$
\zeta =\overline{ 2 g^{AB} \Gamma^r_{AB} + \tau g^{rr}}
$$
which allows one to relate $\zeta$ to the Christoffel symbols of $g$, and hence also to those of $\tilde g$.
However, when relating $\ol{\tilde\Gamma}^x_{AB}$ and $\ol\Gamma^r_{AB}$
derivatives of the conformal factor $\Theta$ appear which are transverse to the light-cone and whose expansion is a priori not clear. Therefore this formula
cannot be used to obtain information about $\zeta$ in a direct way, and one has to proceed
differently.
Assuming, from now on, that we are in space-dimension three,
it will be shown in part II of this work that the above a priori restrictions and the constraint equation \eq{zeta_constraint} \textit{imply} that the auxiliary function
$\zeta$ has the asymptotic behaviour
\begin{equation}
\zeta=\mathcal{O}(r^{-1})
\;.
\end{equation}
It then follows from \eq{dfn_zeta} and \eq{asympt_rest} that
\begin{equation}
\ol W{}^1 = \mathcal{O}(r)
\;.
\end{equation}
This is our final condition on the gauge functions.
To summarize, necessary conditions
for existence of both a smooth conformal completion of the metric $\ol g$ and of smooth extensions of
the connection coefficients $\ol \Gamma^1_{1A}$ are
\begin{eqnarray}
& \xi_A=\mathcal{O}(r^{-1})
\;, \quad \ol W{}^0 = \mathcal{O}(r^{-1})\;, \quad \ol W{}^A = \mathcal{O}(r^{-1})
\;. &
\end{eqnarray}
Moreover,
\begin{eqnarray}
\text{if} \enspace \zeta=\mathcal{O}(r^{-1})\enspace \text{then} \enspace \ol W{}^1 = \mathcal{O}(r)
\;.
\end{eqnarray}
\input{asymptoticExpansions}
\vspace{1.2em}
\noindent {\textbf {Acknowledgments}}
Supported in part by the Austrian Science Fund (FWF): P 24170-N16. Parts of this material are based upon work supported by the National Science Foundation under Grant No. 0932078 000, while the authors were in residence at the Mathematical Sciences Research Institute in Berkeley, California, during the fall semester of 2013.
\section{A no-go theorem for the ($ {\kappa=0}$, $ \ol W{}^0= 0$)-wave-map gauge}
\label{s9XII13.1}
Rendall's proposal, to solve the characteristic Cauchy problem using the ($ {\kappa=0}$, $ \ol W{}^\mu= 0$)-wave-map gauge, has been adopted by many authors. The object of this section is to show that,
in $3+1$ dimensions,
this approach will always lead to logarithmic terms in an asymptotic expansion of the metric \emph{except for the Minkowski metric}. This makes clear the need to allow non-vanishing gauge-source functions $\ol W^{\mu}$.
More precisely, we prove (compare~\cite{ChoquetBruhat73}):
\begin{Theorem}
\label{T9XII13.11}
Consider a four-dimensional vacuum space-time $({\cal M},g)$
which has a
conformal completion at future null infinity $({\cal M}\cup\scrip,\tilde g)$
with a $C^3$ conformally rescaled metric, and suppose that there exists a point $O\in {\cal M}$ such that
$ \overline{C}_O\setminus \{O\}$, where $\overline C_O$ denotes the closure of $C_O$ in ${\cal M}\cup\scrip$, is a smooth hypersurface in the conformally completed space-time.
If the metric $g$ has no logarithmic terms in its asymptotic expansion for large $r$ in the $ \ol W{}^0=0$ wave-map gauge,
where $r$ is an affine parameter on the generators of $C_O$, then $({\cal M},g)$ is the Minkowski space-time.
\end{Theorem}
{\sc Proof:}\
Let $S \subset \scrip$
denote the intersection of $\overline C_O$ with $ \scrip$. Elementary arguments show
that $\overline C_O$ intersects $ \scrip$ transversally and that $S$ is diffeomorphic to $S^2$. Introduce near $S$ coordinates so that $S$ is given by the equation $\{u=0=x\}$, where $x$ is an $\tilde g$-affine
parameter along the generators of $\overline C_O$, with $x=0$ at $S$, while the $x^A$'s are coordinates on $S$ in which the metric induced by $\check g$
is manifestly conformal to the round-unit metric $s_{AB}\mathrm{d}x^A \mathrm{d}x^B$ on $S^2$.
(Note that for finitely-differentiable metrics this construction might lead to the loss of one derivative of the metric.) The usual calculation shows that the $g$-affine parameter $r$ along the generators of $\overline C_O$ equals $a(x^A)/x $
for some positive function of the angles $a(x^A)$. Discarding strictly positive conformal factors, we conclude that for large $r$ the tensor field $\check g$ is conformal to a tensor field $
\gamma_{AB} \mathrm{d}x^A \mathrm{d}x^B$ satisfying
\begin{eqnarray}
\label{9XII13.21}
&
\gamma_{AB} = r^2\big(s_{AB} + (\gamma_{AB})_{-1} r^{-1} + o (r^{-1} )\big)
\;,
&
\\
&
\partial_r \big(\gamma_{AB}-r^2 s_{AB} - r(\gamma_{AB})_{-1} \big)= o (1 )
\;.
&
\label{9XII13.21b}
\end{eqnarray}
The result follows now immediately from~\cite{CCG} and from our next Theorem~\ref{T21IV11.1}.
\hfill $\Box$ \medskip
\begin{theorem}
\label{T21IV11.1}
Suppose that the space-dimension $n$ equals three.
Let $r|\sigma|$, $r\ol W^0$ and $r^2\ol R_{\mu\nu} \ell^\mu \ell ^\nu$ be bounded for small $r$.
Suppose that $\gamma_{AB}(r,x^A)$ is positive definite for all $r>0$ and admits the expansion \eq{9XII13.21}-\eq{9XII13.21b}, for large $r$
with the coefficients in the expansion depending only upon $x^C$.
Assume that the first constraint equation \eq{constraint_phi} with $\kappa=0$ and
$$
0\le \overline R_{\mu\nu}\ell^\mu \ell^\nu =O(r^{-4})
$$
has a globally defined positive solution satisfying $\varphi(0)=0$, $\partial_r\varphi(0)=1$, $\varphi >0$, and $\varphi_{-1}>0$.
Then there are no logarithmic terms in the asymptotic expansion of $\nu^0$ in a gauge where $ {\kappa=0}$ and $ \ol W{}^0= o(r^{-2})$ (for large $r$) if and only if
$$
\sigma\equiv 0 \equiv \overline R_{\mu\nu}\ell^\mu \ell^\nu
\;.
$$
\end{theorem}
{\noindent \sc Proof of Theorem~\ref{T21IV11.1}:}
At the heart of the proof lies the following observation:
\begin{Lemma}
\label{L9XII13.1}
In space-dimension $n$,
suppose that $\kappa=0$ and set
\bel{9XII13.9}
\Psi = r^2 \exp( \int_0^r \big(\frac {\tau+\tau_1} 2 - \frac {n-1} r\big)\,\mathrm{d}r )
\;,
\end{equation}
with $\tau_1 \equiv (n-1)/r$.
We have $\tau = (n-1) r^{-1} + \tau_2 r^{-2} + o(r^{-2})$,
where
\bel{9XII13.10}
\tau_2 := - \lim_{r\to\infty} r^2 {\Psi^{-1}} \times \int_0^r ( |\sigma|^2 + \overline R_{\mu\nu}\ell^\mu \ell^\nu)\Psi \,\mathrm{d}r
\;,
\end{equation}
provided that the limit exists.
\end{Lemma}
\begin{proof}
Let $\delta \tau = \tau - \tau_1$.
It follows from the Raychaudhuri equation with $\kappa=0$ that $\delta \tau$ satisfies the equation
$$
\frac{ \mathrm{d}\delta \tau } {\mathrm{d}r} + \frac {\tau+\tau_1} 2 \delta \tau = -|\sigma|^2 - 8 \pi \ol T_{rr}
\;.
$$
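For the case $n=3$ needed in Theorem~\ref{T21IV11.1}, the function $\Psi$ of \eq{9XII13.9} is precisely the integrating factor of this equation, since then
\begin{equation*}
\partial_r \Psi = \Big(\frac{2}{r} + \frac{\tau+\tau_1}{2} - \frac{n-1}{r}\Big)\Psi
= \frac{\tau+\tau_1}{2}\,\Psi
\;.
\end{equation*}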
Solving, one finds
\begin{eqnarray*}
\delta \tau & = & -\Psi^{-1} \int_0^r ( |\sigma|^2 + 8 \pi \ol T_{rr} )\Psi \,\mathrm{d}r
\\
& = & \frac{\tau_2 }{r^2} +o(r^{-2})
\;,
\end{eqnarray*}
as claimed.
\hfill $\Box$ \medskip
\end{proof}
Let us return to the proof of Theorem~\ref{T21IV11.1}. Proposition~\ref{P6XII13.2} and Remark~\ref{R18XII13.1} show that
\beal{9XII13.12}
&
\varphi (r,x^A)= \varphi_{-1}(x^A)r + \varphi_0(x^A) + O(r^{-1})
\;,
&
\\
&
\tau \,\equiv\, 2\partial_r\log\varphi \,=\, 2r^{-1} -2\varphi_0(\varphi_{-1})^{-1}r^{-2}
+ o(r^{-2})
\;.
&
\eeal{9XII13.13}
Recall, next, the solution formula \eq{11II.1} for the constraint equation (\ref{constraint_nu0}) with $\kappa=0$ and $n=3$:
\begin{equation}
\nu^{0}(r,x^A)
= \frac{1}{2\varphi (r,x^A)}\int_0^r \varphi \left(s\overline{g}{}^{AB}s_{AB} - \ol W{}^0\right)(s,x^A)\, \mathrm{d}s
\;.
\label{11II.1b}
\end{equation}
From \eq{9XII13.12}-\eq{9XII13.13} one finds
\begin{equation}
\ol g^{AB} = r^{-2} (\varphi_{-1})^{-2}[s^{AB} + r^{-1}(\tau_2 s^{AB} - \breve \gamma_{-1}^{AB}) + o(r^{-1})]
\;,
\end{equation}
with
$$
\breve \gamma_{-1}^{AB} := s^{AC} s^{BD}[(\gamma_{CD})_{-1}- \frac{1}{2}s_{CD}s^{EF}(\gamma_{EF})_{-1} ]\;.
$$
Inserting this into \eq{11II.1b}, and assuming that $\ol W^{0}=o(r^{-2})$, one finds for large $r$
\begin{equation}
\nu^ 0 = (\varphi_{-1})^{-2}+ \frac{1}{2}\tau_2 (\varphi_{-1})^{-2}\frac{\ln r} r + O(r^{-1})
\;,
\end{equation}
with the coefficient of the logarithmic term vanishing if and only if $\tau_2=0$, given that a bounded positive coefficient $\varphi_{-1}$ exists.
One can check that the hypotheses of Lemma~\ref{L9XII13.1} are satisfied, and the result follows.
\hfill $\Box$ \medskip
\section{Introduction} \label{section:Introduction}
Hubel and Wiesel \cite{Hube59a} discovered that certain visual cells in cats' striate cortex have a directional preference.
It has turned out that there exists an intriguing and extremely precise spatial and directional organization into so-called cortical hypercolumns, see Figure~\ref{fig:VisualCortex}.
A hypercolumn can be interpreted as a ``visual pixel'', representing the optical world at a single location, neatly decomposed into a complete set of orientations. Moreover, correlated horizontal connections run parallel to the cortical surface and link columns across the spatial visual field with a shared orientation preference, allowing cells to combine visual information from spatially separated receptive fields.
Synaptic physiological studies of these horizontal pathways in cats' striate cortex show that neurons with aligned receptive field sites excite each other \cite{Bosking}. Apparently, the visual system not only constructs a score of local orientations, but also accounts for context and alignment by excitation and inhibition \emph{a priori}, which can be modeled by left-invariant PDE's and ODE's on $SE(2)$ \cite{Petitot,Citti,DuitsPhDThesis,Duits2007IJCV,Boscain3,August,BenYosef2012a,Chirikjian2,MashtakovNMTMA,Gonzalo,SartiCitteCompiledBook2014,Zweck,DuitsAMS1,DuitsAMS2,DuitsAlmsick2008,FrankenPhDThesis,BarbieriArxiv2013,Mumford}.
\begin{figure}[!b]
\centering
\includegraphics[width= 0.6\hsize]{v1_simple.pdf}
\caption{The orientation columns in the primary visual cortex.}
\label{fig:VisualCortex}
\end{figure}
Motivated by the orientation-selective cells, so-called orientation scores are constructed
by lifting all elongated structures (in 2D images) along an extra orientation dimension \cite{Kalitzin97,DuitsPhDThesis,Duits2007IJCV}. The main advantage of using the orientation score is that we can disentangle the elongated structures involved in a crossing, allowing for a crossing-preserving flow.
Invertibility of the transform between image and score is of vital importance to both tracking \cite{BekkersJMIV} and enhancement \cite{Sharma2014,Franken2009IJCV}, as we do not want to tamper with data-evidence in our continuous coherent state transform \cite{Alibook,Zweck} before actual processing takes place. This is a key advantage over related state-of-the-art methods \cite{AugustPAMI,MashtakovNMTMA,Zweck,Boscain3,Citti}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=.7\textwidth]{OSStack.pdf}
\caption{Real part of an orientation score of an example image.}
\label{fig:OSIntro}
\end{figure}
Invertible orientation scores (see Figure~\ref{fig:OSIntro}) are obtained via a unitary transform between the space of disk-limited images $\mathbb{L}_{2}^{\varrho}(\mathbb{R}^{2}):=\{f \in \mathbb{L}_{2}(\mathbb{R}^{2}) \; |\; \textrm{support}\{\mathcall{F}_{\mathbb{R}^{2}}f\} \subset B_{\textbf{0},\varrho}\}$ (with $\varrho>0$ close to the Nyquist frequency and $B_{\textbf{0},\varrho}=\{\mbox{\boldmath$\omega$} \in \mathbb{R}^{2}\;|\; \|\mbox{\boldmath$\omega$}\|\leq \varrho\}$), and the space of orientation scores. The space of orientation scores is a specific reproducing kernel vector subspace \cite{DuitsPhDThesis,Aronszajn1950,Alibook} of $\mathbb{L}_{2}(\mathbb{R}^{2}\times S^{1})$, see Appendix~\ref{app:new} for the connection with continuous wavelet theory. The transform from an image $f$ to an orientation score $U_f:=\mathcall{W}_\psi f$ is constructed via an anisotropic convolution kernel $\psi \in \mathbb{L}_{2}(\mathbb{R}^{2}) \!\cap\! \mathbb{L}_{1}(\mathbb{R}^{2})$:
\begin{equation} \label{OrientationScoreConstruction}
U_f(\textbf{x},\theta)=(\mathcall{W}_\psi [f])(\textbf{x},\theta)=\int_{\mathbb{R}^2}\overline{\psi(\textbf{R}_{\theta}^{-1}(\textbf{y}{-\textbf{x}}))}f(\textbf{y})d\textbf{y},
\end{equation}
where $\mathcall{W}_\psi$ denotes the transform and $\small \textbf{R}_\theta=
\left( \begin{array}{cc}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta \\
\end{array} \right).$
Exact reconstruction is obtained by
\begin{equation}\label{OrientationScoreReconstruction}
\begin{aligned}
f(\textbf{x})
=(\mathcall{W}_\psi^*[U_f])(\textbf{x})
=\left(\mathcall{F}_{\mathbb{R}^2}^{-1}\left[M_\psi^{-1}\mathcall{F}_{\mathbb{R}^2}\left[\frac{1}{2\pi}\int_0^{2\pi}(\psi_\theta*U_f(\cdot,\theta))d\theta\right]\right]\right)(\textbf{x}),
\end{aligned}
\end{equation}
for all $\textbf{x} \in \mathbb{R}^2$, where $\mathcall{F}_{\mathbb{R}^2}$ is the unitary Fourier transform on $\mathbb{L}_2(\mathbb{R}^2)$ and $M_\psi \in C(\mathbb{R}^2, \mathbb{R})$ is given by $M_\psi(\pmb{\omega})=\int_0^{2\pi}|\hat{\psi}(\textbf{R}_\theta^{-1}\pmb{\omega})|^2 d\theta$ for all $\pmb{\omega} \in \mathbb{R}^{2}$, with $\hat{\psi}:=\mathcall{F}_{\mathbb{R}^2}\psi$, $\psi_{\theta}(\textbf{x})=\psi(R_{\theta}^{-1}\textbf{x})$. Furthermore, $\mathcall{W}_\psi^*$ denotes the adjoint of wavelet transform $\mathcall{W}_\psi:\mathbb{L}_2(\mathbb{R}^2)\rightarrow \mathbb{C}_{K}^{SE(2)}$, where the reproducing kernel norm on the space of orientation scores, $\mathbb{C}_{K}^{SE(2)}=\{\mathcall{W}_{\psi}f \; |\; f \in \mathbb{L}_{2}(\mathbb{R}^{2})\}$, is explicitly characterized in \cite[Thm.4, Eq.~11]{Duits2007IJCV}. Well-posedness of the reconstruction is controlled by $M_\psi$\cite{Duits2007IJCV,BekkersJMIV}. For details see Appendix~\ref{app:new}. Regarding the choice of $\psi$ in our algorithms, we rely on the wavelets proposed in \cite[ch:4.6.1]{DuitsPhDThesis},\cite{BekkersJMIV}.
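For illustration only, a minimal numerical sketch of the discretized transform (\ref{OrientationScoreConstruction}) could take the following form (in Python/NumPy; the wavelet array \texttt{psi} is a placeholder for a sampled wavelet such as those of \cite{DuitsPhDThesis,BekkersJMIV}, and rotation and sampling conventions are glossed over):
\begin{verbatim}
# Minimal sketch, not the implementation used in this article:
# discretize U_f(x, theta_k) by correlating the image f with
# rotated copies of a user-supplied anisotropic wavelet psi.
import numpy as np
from scipy import ndimage, signal

def orientation_score(f, psi, n_theta=32):
    psi = np.asarray(psi)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    U = np.empty((n_theta,) + f.shape, dtype=complex)
    for k, th in enumerate(thetas):
        deg = np.degrees(th)
        # sample psi(R_theta^{-1} x) by rotating the kernel
        psi_th = (ndimage.rotate(psi.real, deg, reshape=False)
                  + 1j * ndimage.rotate(psi.imag, deg, reshape=False))
        # cross-correlation with conj(psi_th), cf. the integral above
        U[k] = signal.fftconvolve(f, np.conj(psi_th)[::-1, ::-1],
                                  mode='same')
    return thetas, U
\end{verbatim}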
In this article, the invertible orientation scores serve as the initial condition of left-invariant $\mbox{(non-)}$linear PDE evolutions on the rotation-translation group $\mathbb{R}^2 \rtimes SO(2) \equiv SE(2)$, where by definition $\mathbb{R}^d \rtimes S^{d-1}:=\mathbb{R}^d \rtimes SO(d)/(\{0\} \times SO(d-1))$. Now in our case $d=2$, so $\mathbb{R}^2 \rtimes S^1=\mathbb{R}^2 \rtimes SO(2)$ and we identify rotations with orientations. The primary focus of this article, however, is on the numerics and comparison to the exact solutions of linear left-invariant PDE's on $SE(2)$. Here, by left-invariance and linearity, we can restrict ourselves in our numerical analysis to the impulse response, where the initial condition is equal to $\delta_e=\delta_0^x \otimes \delta_0^y \otimes \delta_0^\theta$, where $\otimes$ denotes the tensor product in the distributional sense.
In fact, we consider all linear, second order, left-invariant evolution equations and their resolvents on $\mathbb{L}_{2}(\mathbb{R}^{2} \rtimes S^{1}) \equiv \mathbb{L}_2(SE(2))$, which actually correspond to the forward Kolmogorov equations of left-invariant stochastic processes. Specifically, there are two types of stochastic processes we will investigate in the field of imaging and vision:
\begin{compactitem}
\item The contour enhancement process as proposed by Citti et al.\ \cite{Citti} in cortical modeling.
\item The contour completion process as proposed by Mumford \cite{Mumford}, also called the direction process.
\end{compactitem}
In image analysis, the difference between the two processes is that contour enhancement focuses on de-noising elongated structures, while contour completion aims at bridging gaps in interrupted contours, since it contains a convection part.
Although not being considered in this article, we mention related 3D $(SE(3))$ extensions of these processes and applications (primarily in DW-MRI) in \cite{Creusen2013,MomayyezSiakhal2013,ReisertSE3-2012}. Most of our numerical findings in this article apply to these $SE(3)$ extensions as well.
Many numerical approaches for implementing left-invariant PDE's on $SE(2)$ have been investigated intensively in the fields of cortical modeling and image analysis. Petitot introduced a geometrical model for the visual cortex V1 \cite{Petitot}, further refined to the $SE(2)$ setting by Citti and Sarti \cite{Citti}. A method for completing the boundaries of partially occluded objects based on stochastic completion fields was proposed by Zweck and Williams~\cite{Zweck}. Also, Barbieri et al.~\cite{BarbieriArxiv2013} proposed a left-invariant cortical contour perception and motion integration model within a 5D contact manifold. In the recent work of Boscain et al.~\cite{Boscain3}, a numerical algorithm for integration of a hypoelliptic diffusion equation on the group of translations and discrete rotations $SE(2,N)$ is investigated. Moreover, some numerical schemes were also proposed by August et al.~\cite{August,AugustPAMI} to understand the direction process for curvilinear structures in images. Duits, van Almsick and Franken~\cite{DuitsAMS1,DuitsAMS2,DuitsAlmsick2008,FrankenPhDThesis,MarkusThesis,DuitsPhDThesis} also investigated different models based on Lie group theory, with many applications to medical imaging.
The numerical schemes for left-invariant PDE's on $SE(2)$ can be categorized into 3 approaches:
\begin{compactitem}
\item Finite difference approaches.
\item Fourier based approaches, including $SE(2)$-Fourier methods.
\item Stochastic approaches.
\end{compactitem}
Recently, several explicit representations of exact solutions were derived \cite{DuitsCASA2005,DuitsCASA2007,DuitsAMS1,MarkusThesis,DuitsAlmsick2008,Boscain1}. In this paper we will set up a structured framework to compare all the numerical approaches to the exact solutions. \\
\textbf{Contributions of the article:}
In this article, we:
\begin{compactitem}
\item compare all numerical approaches (finite difference methods, a stochastic method based on Monte Carlo simulation and Fourier based methods) to the exact solution for contour enhancement/completion. We show that the Fourier based approaches perform best and we also explain this theoretically in Theorem \ref{th:RelationofFourierBasedWithExactSolution};
\item provide a concise overview of all exact approaches;
\item implement exact solutions, including improvements of Mathieu-function evaluations in \textit{Mathematica};
\item establish explicit connections between exact and numerical approaches for contour enhancement;
\item analyze the poles/singularities of the resolvent kernels;
\item propose a new probabilistic time integration to overcome the poles, and prove that it indeed removes them via new simple asymptotic formulas
for the corresponding kernels that we present in this article;
\item show benefits of the newly proposed time integration in contour completion via stochastic completion fields \cite{Zweck};
\item analyze errors when using the $\textbf{DFT}$ (Discrete Fourier Transform) instead of the $\textbf{CFT}$ (Continuous Fourier Transform) to transform exact formulas in the spatial Fourier domain to the $SE(2)$ domain;
\item apply left-invariant evolutions as preprocessing before tracking the retinal vasculature via the ETOS-algorithm \cite{BekkersJMIV} in optical imaging of the eye.
\end{compactitem}
\vspace{1.5ex}
\textbf{Structure of the article:} In Section 2 we will briefly describe the theory of the $SE(2)$ group and left-invariant vector fields. Subsequently, in Section 3 we will discuss the linear time dependent (convection-)diffusion processes on $SE(2)$ and the corresponding resolvent equations for contour enhancement and contour completion. In Subsection~\ref{IterationResolventOperators} we provide improved kernels via iteration of resolvent operators and give a probabilistic interpretation.
Then we show the benefit in stochastic completion fields.
For completeness, the fundamental solution and underlying probability theory for contour enhancement/completion is explained in Subsection~\ref{section:FundamentalSolutions}.
In Section 4 we will give the full implementations of all our numerical schemes for contour enhancement/completion, i.e. explicit and implicit finite difference schemes, numerical Fourier based techniques, and the Monte-Carlo simulation of the stochastic approaches. Then, in Section 5, we will provide a new concise overview of all three exact approaches in the general left-invariant PDE-setting. Direct relations between the exact solution representations and the numerical approaches are also given in this section. After that, we will provide experiments with different parameter settings and show the performance of all numerical approaches compared to the exact solutions. Finally, we conclude our paper with applications on retinal images, showing the power of our multi-orientation left-invariant diffusion for complex vessel enhancement, i.e. in the presence of crossings and bifurcations.
\section{The $SE(2)$ Group and Left-invariant Vector Fields}
\label{section:The $SE(2)$ Group and Left-invariant Vector Fields}
\subsection{The Euclidean Motion Group $SE(2)$ and Representations}\label{section:The Euclidean Motion Group $SE(2)$ and Group Representations}
An orientation score $U:SE(2) \to \mathbb{C}$ is defined on the Euclidean motion group $SE(2)=\mathbb{R}^2 \rtimes S^1$. The group product on $SE(2)$ is given by
\begin{equation}
gg'=(\textbf{x},\theta)(\textbf{x}',\theta')=(\textbf{x}+\textbf{R}_\theta \cdot \textbf{x}',\theta+\theta'), \quad \textit{for all} \quad g,g' \in SE(2).
\end{equation}
The translation and rotation operators on an image $f$ are given by $(\mathcall{T}_\textbf{x}f)(\textbf{y})=f(\textbf{y}-\textbf{x})$ and $(\mathcall{R}_\theta f)(\textbf{x})=f((\textbf{R}_\theta)^{-1}\textbf{x})$. Combining these operators yields the unitary $SE(2)$ group representation $\mathcall{U}_g=\mathcall{T}_\textbf{x} \circ \mathcall{R}_\theta$. Note that $g h \mapsto \mathcall{U}_{gh}=\mathcall{U}_{g} \mathcall{U}_{h}$ and $\mathcall{U}_{g^{-1}}=\mathcall{U}_{g}^{-1}=\mathcall{U}_{g}^{*}$.
We have
\begin{equation}
\forall g \in SE(2):(\mathcall{W}_\psi \circ \mathcall{U}_g)= (\mathcall{L}_g \circ \mathcall{W}_\psi)
\end{equation}
with group representation $g \mapsto \mathcall{L}_{g}$ given by $\mathcall{L}_{g}U(h)=U(g^{-1}h)$, and consequently, the effective operator $\Upsilon:=\mathcall{W}_\psi^* \circ \Phi \circ \mathcall{W}_\psi$ on the image domain (see Figure~\ref{fig:ImageProcessingViaOS}) commutes with rotations and translations if the operator $\Phi$ on the orientation score satisfies
\begin{align}\label{rel}
\Phi \circ \mathcall{L}_g=\mathcall{L}_g \circ \Phi, \quad \textit{for all}\quad g \in SE(2).
\end{align}
Moreover, if $\Phi$ maps the space of orientation scores onto itself, sufficient condition (\ref{rel}) is also necessary for rotation and translation covariant image processing (i.e. $\Upsilon$ commutes with $\mathcall{U}_{g}$ for all $g \in SE(2)$).
For details and proof see \cite[Thm.21, p.153]{DuitsPhDThesis}.
However, operator $\Phi$ should not be right-invariant, i.e. $\Phi$ should not commute with the right-regular representation $g \mapsto \mathcall{R}_{g}$ given by $\mathcall{R}_{g}U(h)=U(hg)$, as $\mathcall{R}_{g}\mathcall{W}_{\psi}=\mathcall{W}_{\mathcall{U}_{g}\psi}$ and operator $\Upsilon$ should rather take advantage of the anisotropy of the wavelet $\psi$.
We conclude that by our construction of orientation scores \emph{only left-invariant operators are of interest}.
Next we will discuss the left-invariant derivatives (vector fields) on smooth functions on $SE(2)$, which we will employ in the PDE's of interest presented in Section~\ref{section:The PDE's of Interest}. For an intuition of left-invariant processing on orientation scores (via left-invariant vector fields) see
Figure~\ref{fig:ImageProcessingViaOS}.
\begin{figure}
\centering
\includegraphics[width=.9\textwidth]{pipelineOS.pdf}
\caption{Image processing via invertible orientation scores. Operators $\Phi$ on the invertible orientation score robustly relate to operators $\Upsilon$ on the image domain. Euclidean-invariance of $\Upsilon$ is obtained by left-invariance of $\Phi$. Thus, we consider left-invariant (convection)-diffusion operators $\Phi=\Phi_t$ with evolution time $t$, which are generated by a quadratic form $Q=Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3)$ (
cf.~\!Eq.~\!(\ref{diffusionconvectiongenerator})) on the left-invariant vector fields $\{\mathcall{A}_i\}$, cf.~\!Eq.~\!(\ref{leftInvariantDerivatives}). We show the relevance of left-invariance of $\mathcall{A}_2$ acting on an image of a circle (as in Figure \ref{fig:OSIntro}) compared to action of the non-left-invariant derivative $\partial_y$ on the same image. }
\label{fig:ImageProcessingViaOS}
\end{figure}
\subsection{Left-invariant Tangent Vectors and Vector Fields}\label{section:Left-invariant Vector Fields}
The Euclidean motion group $SE(2)$ is a Lie group. Its tangent space at the unity element $T_e(SE(2))$ is the corresponding Lie algebra and it is spanned by the basis $\{\textbf{e}_x,\textbf{e}_y,\textbf{e}_\theta\}$. Next we derive the left-invariant derivatives associated to $\textbf{e}_x,\textbf{e}_y,\textbf{e}_\theta$, respectively.
A tangent vector $X_e \in T_e(SE(2))$ is tangent to a curve $\gamma$ at unity element $e=(0,0,0)$. Left-multiplication of the curve $\gamma$ with $g \in SE(2)$ associates to each $X_{e} \in T_{e}(SE(2))$ a corresponding tangent vector $X_{g}=(L_{g})_{*}X_{e} \in T_{g}(SE(2))$:
\begin{equation}
\begin{aligned}
\{\textbf{e}_\xi(g),\textbf{e}_\eta(g),\textbf{e}_\theta(g)\} &=\{(L_g)_{*} \textbf{e}_x,(L_g)_{*} \textbf{e}_y,(L_g)_{*} \textbf{e}_\theta\} \\ &=\{\cos\theta\textbf{e}_x\!+\!\sin\theta\textbf{e}_y,-\sin\theta\textbf{e}_x\!+\!\cos\theta\textbf{e}_y,\textbf{e}_\theta\},
\end{aligned}
\end{equation}
where $(L_g)_*$ denotes the pushforward of left-multiplication $L_gh = gh$, and where we introduce the local coordinates $\xi:= x \cos \theta + y \sin \theta$ and $\eta:= -x \sin \theta + y \cos \theta$.
As vector fields can also be considered as differential operators on locally defined smooth functions \cite{aubin2001diffgeo}, we replace $\textbf{e}_i$ by $\partial_i$, $i=\xi,\eta,\theta$, yielding the general form of a left-invariant vector field
\begin{equation}
\begin{aligned}
&X_g(U)=(c^\xi(\cos\theta\partial_x+\sin\theta\partial_y)
+c^\eta(-\sin\theta\partial_x+\cos\theta\partial_y)+c^\theta\partial_\theta)U,
\textit{ for all } c^\xi, c^\eta, c^\theta \in \mathbb{R}.
\end{aligned}
\end{equation}
Throughout this article, we shall rely on the following notation for left-invariant vector fields
\begin{equation} \label{leftInvariantDerivatives}
\{\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3\}:=\{\partial_\xi,\partial_\eta,\partial_\theta\}=\{\cos\theta\partial_x+\sin\theta\partial_y,-\sin\theta\partial_x+\cos\theta\partial_y,\partial_\theta\},
\end{equation}
which is the frame of left-invariant derivatives acting on $SE(2)$, the domain of the orientation scores.
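For a sampled orientation score, the left-invariant derivatives of Eq.~(\ref{leftInvariantDerivatives}) can be approximated by combining fixed-grid finite differences per $\theta$-slice; a minimal sketch (the interpolating B-spline scheme of Section~\ref{section:Implementation} is more accurate):
\begin{verbatim}
# Sketch: left-invariant derivatives A1, A2, A3 of Eq.
# (leftInvariantDerivatives) on U sampled on an (x, y, theta)-grid,
# via centered differences in the fixed frame, combined per theta-slice.
import numpy as np

def left_invariant_derivatives(U, dx=1.0):
    No = U.shape[2]
    dth = 2*np.pi/No
    th = np.arange(No)*dth
    Ux = np.gradient(U, dx, axis=0)
    Uy = np.gradient(U, dx, axis=1)
    # periodic centered difference in theta
    Uth = (np.roll(U, -1, axis=2) - np.roll(U, 1, axis=2)) / (2*dth)
    A1 = np.cos(th)*Ux + np.sin(th)*Uy    # d/dxi
    A2 = -np.sin(th)*Ux + np.cos(th)*Uy   # d/deta
    return A1, A2, Uth
\end{verbatim}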
\section{The PDE's of Interest} \label{section:The PDE's of Interest}
\subsection{Diffusions and Convection-Diffusions on $SE(2)$ }\label{section:TimedDiffusion}
A diffusion process on $\mathbb{R}^n$ with a square integrable input image $f:\mathbb{R}^n \rightarrow \mathbb{R}$ is given by
\begin{align}
\left\{\begin{aligned}
&\partial_t u(\textbf{x},t)=\nabla \cdot (\textbf{D}\nabla u(\textbf{x},t)) \qquad \textbf{x}\in\mathbb{R}^n,t \geq 0, \\
&u(\textbf{x},0)=f(\textbf{x}).\\
\end{aligned} \right.
\end{align}
Here, the gradient $\nabla=(\partial_{x_1},...,\partial_{x_n})$ acts on the spatial coordinates, and the constant diffusion tensor $\textbf{D}$ is a positive definite matrix of size $n \times n$. Similarly, the left-invariant diffusion equation on $SE(2)$ is given by:
\begin{align} \label{ExactDiffusionConvectionEquation}
\left\{\begin{aligned}
\partial_t W(g,t)&=\left( \begin{array}{ccc}
\partial_\xi & \partial_\eta & \partial_\theta \end{array} \right)
\left( \begin{array}{ccc}
D_{\xi\xi} & D_{\xi\eta} & D_{\xi\theta} \\
D_{\eta\xi} & D_{\eta\eta} & D_{\eta\theta} \\
D_{\theta\xi} & D_{\theta\eta} & D_{\theta\theta}\\
\end{array} \right)
\left( \begin{array}{ccc}
\partial_\xi\\
\partial_\eta\\
\partial_\theta \end{array} \right)W(g,t),\\
W(g,t=0)&=U^{0}(g),\\
\end{aligned} \right.
\end{align}
where by default the initial condition is chosen as the orientation score of image $f \in \mathbb{L}_{2}(\mathbb{R}^{2})$, $U^{0}=U_{f}=\mathcall{W}_\psi f$. From the general theory of left-invariant scale spaces \cite{DuitsSSVM2007}, the quadratic form of the convection-diffusion generator is given by
\begin{equation} \label{diffusionconvectiongenerator}
\begin{aligned}
&Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3)=\sum_{i=1}^3\left(-a_i\mathcall{A}_i+\sum_{j=1}^3 D_{ij}\mathcall{A}_i \mathcall{A}_j \right),\\ &a_i,D_{ij} \in \mathbb{R}, \textbf{D}:=[D_{ij}] \geq 0, \textbf{D}^T=\textbf{D},
\end{aligned}
\end{equation}
where the first order part takes care of the convection process, moving along the exponential curves $t \mapsto g \cdot \exp(t(\sum_{i=1}^3 a_iA_i))$ over time with $g \in SE(2)$, and the second order part specifies the diffusion in the following left-invariant evolutions
\begin{align} \label{diffusionconvection}
\left\{ \begin{aligned}
&\partial_t W=Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3)W,\\
&W(\cdot,t=0)=U^{0}(\cdot).\\
\end{aligned} \right.
\end{align}
In case of linear diffusion, the positive (semi-)definite diffusion matrix $\textbf{D}$ is constant. Then we obtain the solution of the left-invariant diffusion equation via an $SE(2)$-convolution with the Green's function $K_t^{\textbf{D},\textbf{a}}: SE(2)\rightarrow \mathbb{R}^+$ and the
initial condition $U^{0}:SE(2) \to \mathbb{C}$:
\begin{equation} \label{SE(2)ConvolutionOnDiffusion}
\begin{aligned}
W(g,t) =(K_t^{\textbf{D},\textbf{a}} \ast_{SE(2)}U^{0})(g) &=\int \limits_{SE(2)}K_t^{\textbf{D},\textbf{a}}(h^{-1}g)U^{0}(h)\, {\rm d}h \\ &=\int \limits_{\mathbb{R}^2}\int \limits_{-\pi}^{\pi}K_t^{\textbf{D},\textbf{a}}(\textbf{R}_{\theta'}^{-1}(\textbf{x}-\textbf{x}'), \theta-\theta')U^{0}(\textbf{x}',\theta')\, {\rm d}\theta'{\rm d}\textbf{x}',
\end{aligned}
\end{equation}
for all $g=(\textbf{x},\theta)\in SE(2)$.
This can symbolically be written as $W(\cdot,t)=e^{tQ^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3)}U^{0}(\cdot)$.
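A brute-force discretization of the $SE(2)$-convolution (\ref{SE(2)ConvolutionOnDiffusion}) reads as follows; this sketch is meant for illustration only, as the Fourier based techniques of Subsection~\ref{section:Duitsmatrixalgorithm} are far more efficient:
\begin{verbatim}
# Brute-force SE(2)-convolution of kernel K with score U, both sampled on
# an (Nx, Ny, No)-grid; output orientation m, integration over mp (theta').
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import fftconvolve

def se2_convolve(K, U):
    No = U.shape[2]
    W = np.zeros_like(U)
    for m in range(No):
        for mp in range(No):
            dm = (m - mp) % No   # theta - theta' (periodic)
            # K(R_{theta'}^{-1}(x-x'), theta-theta'): rotate slice by theta'
            K_rot = rotate(K[:, :, dm], 360.0*mp/No, reshape=False, order=1)
            W[:, :, m] += fftconvolve(U[:, :, mp], K_rot, mode="same")
    return W * (2*np.pi/No)      # dtheta' measure (spatial measure omitted)
\end{verbatim}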
In this time dependent diffusion we have to set a fixed time $t>0$. In the subsequent sections we consider time integration while imposing a negatively exponential distribution $T \sim NE(\alpha)$, i.e. $P(T=t)=\alpha e^{-\alpha t}$. We choose this distribution since it is the only continuous memoryless distribution, and in order to ensure that the underlying stochastic process is Markovian, traveling time must be memoryless.
There are two specific cases of interest:
\begin{compactitem}
\item Contour enhancement, where $\textbf{a}=\textbf{0}$ and $\textbf{D}$ is symmetric positive semi-definite such that H\"{o}rmander's condition \cite{Hoermander} is satisfied. This includes both elliptic diffusion ($\textbf{D}>0$) and hypo-elliptic diffusion ($\textbf{D} \geq 0$ while H\"{o}rmander's condition still holds). In the linear case we shall be mainly concerned with the hypo-elliptic case $\textbf{D}=\textrm{diag}\{D_{11},0,D_{33}\}$;
\item Contour completion, where $\textbf{a}=(1,0,0)$ and $\textbf{D}=\textrm{diag}\{0,0,D_{33}\}$ with $D_{33}>0$.
\end{compactitem}
Several new exact representations for the (resolvent) Green's functions in $SE(2)$ were derived by Duits et al. \cite{DuitsAMS1,DuitsAlmsick2008,DuitsCASA2005,DuitsCASA2007,MarkusThesis} in the spatial Fourier domain, as explicit formulas were still missing, see e.g.~\cite{Mumford}.
This includes the Fourier series representations, studied independently in \cite{Boscain3}, but also a series of rapidly decaying terms, and explicit representations obtained by computing the latter series explicitly via the Floquet theorem, producing explicit formulas involving only 4 Mathieu functions. The works in \cite{DuitsAMS1,DuitsAlmsick2008} relied to a large extent on distribution theory to derive these explicit formulas. Here we deal with the general case with $\textbf{D}\geq 0$ and $\textbf{a} \in \mathbb{R}^{3}$ (as long as H\"{o}rmander's condition
\cite{Hoermander} is satisfied) and we stress the analogy between the contour completion and contour enhancement case in
the appropriate general setting (for the resolvent PDE, for the (convection-)diffusion PDE, and for the fundamental solution PDE).
Instead of relying on distribution theory \cite{DuitsAlmsick2008,DuitsAMS1}, we obtain the general solutions more directly via Sturm-Liouville theory.
Furthermore, in Section \ref{section:Experimental results} we include, for the first time, numerical comparisons of various numerical approaches to the exact solutions. The outcome is underpinned by a strong convergence result that we will present in Theorem~\ref{th:RelationofFourierBasedWithExactSolution}.
On top of this, in Appendix~\ref{app:A}, we shall present new asymptotic expansions around the origin that allow us to analyze the order of the singularity at the origin of the resolvent kernels. From these asymptotic expansions we deduce that the singularities in the resolvent kernels
(and fundamental solutions) are quite severe. In fact, the better the equations are numerically approximated, the weaker the completion and enhancement properties of the kernels.
To overcome this severe discrepancy between the mathematical PDE theory and the practical requirements, we propose time-integration via Gamma distributions (beyond the negative exponential distribution case).
Mathematically, as we will prove in Subsection~\ref{IterationResolventOperators}, this newly proposed time integration both reduces the singularities and maintains the formal PDE theory. In fact, using a Gamma distribution coincides with iterating the resolvents, with an iteration depth $k$ equal to the squared mean divided by the variance of the Gamma distribution.
We will also show practical experiments that demonstrate the advantage of using the Gamma-distributions: we can control and amplify the infilling property (``the spread of ink'') of the PDE's.
\subsection{The Resolvent Equation}\label{section:ResolventEquation}
The traveling time of a memoryless random walker in $SE(2)$ is negatively exponentially distributed, i.e.
\begin{align} \label{exponentialdistribution}
p(T=t)=\alpha e^{-\alpha t}, t\geq0,
\end{align}
with the expected life time $E(T)=\frac{1}{\alpha}$. Then the resolvent kernel is obtained by integrating Green's function $K_t^{\textbf{D},\textbf{a}}:SE(2)\rightarrow \mathbb{R}^+$ over the time distribution, i.e.
\[\label{ResolventKernel}
\begin{aligned}
R_{\alpha}^{\textbf{D},\textbf{a}}&=\alpha\int_0^\infty K_t^{\textbf{D},\textbf{a}}e^{-\alpha t}dt=\alpha\int_0^\infty e^{tQ}\delta_ee^{-\alpha t}dt=-\alpha(Q-\alpha I)^{-1}\delta_e,
\end{aligned}
\]
where we use short notation $Q=Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3)$.
Via this resolvent kernel, one gets the probability density $P_{\alpha}(g)$ of finding a random walker at location
$g \in SE(2)$ regardless of its traveling time, given that it has departed from distribution $U:SE(2) \to \mathbb{R}^{+}$:
\begin{equation} \label{Resolvent}
\begin{aligned}
P_\alpha(g)=(R_{\alpha}^{\textbf{D},\textbf{a}} \ast_{SE(2)}U)(g)=-\alpha(Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3)-\alpha I)^{-1}U(g),
\end{aligned}
\end{equation}
which is the same as taking the Laplace transform of the left-invariant evolution equations~(\ref{diffusionconvection}) over time. The resolvent equation can be written as
\[
\begin{aligned}
P_\alpha(g)=\alpha\int_0^\infty e^{-\alpha t}(e^{tQ}U^{0})(g)dt=\alpha((\alpha I-Q)^{-1}U)(g).
\end{aligned}
\]
However, we do not want to go into the details of semigroup theory \cite{Yosida} here; we merely note that $(e^{tQ}U^{0})$ is short notation for the solution of Eq.~(\ref{diffusionconvection}).
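The identity $\alpha\int_0^\infty e^{-\alpha t}e^{tQ}\,{\rm d}t=\alpha(\alpha I-Q)^{-1}$ can be verified numerically on any finite-dimensional surrogate of the generator; in the sketch below a small random negative semi-definite matrix stands in for the discretized generator $Q$:
\begin{verbatim}
# Numerical check of alpha * int_0^infty e^{-alpha t} e^{tQ} dt
# = alpha (alpha I - Q)^{-1}, with a small random negative semi-definite
# matrix standing in for the discretized generator Q.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n, alpha = 6, 0.5
A = rng.standard_normal((n, n))
Q = -(A @ A.T)                       # negative semi-definite surrogate

ts, dt = np.linspace(0.0, 40.0, 4001, retstep=True)
R = alpha * sum(np.exp(-alpha*t) * expm(t*Q) for t in ts) * dt
R_exact = alpha * np.linalg.inv(alpha*np.eye(n) - Q)
print(np.max(np.abs(R - R_exact)))   # small; limited by the quadrature step
\end{verbatim}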
Resolvents can be used in completion fields~\cite{Zweck,DuitsAMS1,August}. Some resolvent kernels of the contour enhancement and completion process are given in Figure~\ref{fig:ResolventCompletionEnhancementKernels}.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\hsize]{ResolventCompletionEnhancementKernels.pdf}
\caption{Left: the $xy$-marginal of the contour enhancement kernel $R_{\alpha}^{\textbf{D}}:=R_{\alpha}^{\textbf{D},\textbf{0}}$ with parameters $\alpha=\frac{1}{100}$, $\textbf{D}=\{1,0,0.08\}$, number of orientations $N_o = 48$ and spatial dimensions $N_s = 128$. Right: the $xy$-marginal of the contour completion kernel $R_{\alpha}^{\textbf{D},\textbf{a}}$ with parameters $\alpha=\frac{1}{100}$, $\textbf{a}=(1,0,0)$, $\textbf{D}=\{0,0,0.08\}$, $N_o = 72$ and $N_s = 192$.}
\label{fig:ResolventCompletionEnhancementKernels}
\end{figure}
\subsection{Improved Kernels via Iteration of Resolvent Operators \label{IterationResolventOperators}}
The kernels of the resolvent operators suffer from singularities at the origin. Especially for contour completion, this is cumbersome from the application point of view, since here the better one approximates Mumford's direction process and its inherent singularity in the Green's function, the less ``ink'' is spread in the areas with missing and interrupted contours. To overcome this problem we extend the temporal negatively exponential distribution in our line enhancement/completion models by a 1-parameter family of Gamma-distributions.
As a sum $T=T_{1} + \ldots + T_{k}$ of independent negatively exponential time variables is Gamma distributed, $P(T=t)= \frac{\alpha^{k} t^{k-1}}{(k-1)!} e^{-\alpha t}$, the time integrated process is now obtained by a $k$-fold resolvent operator. While keeping the expectation of the Gamma distribution fixed by $E(T)=k/\alpha$, increasing $k$ will induce more mass transport away from $t=0$ towards the void areas of interrupted contours. For $k\geq 3$
the corresponding Green's function of the $k$-step approach even no longer suffers from a singularity at the origin. This procedure is summarized in the following theorem and applied in Figure~\ref{fig:Gamma}.
\begin{theorem}\label{th:prob}
A concatenation of $k$ subsequent, independent time-integrated memoryless stochastic processes for contour enhancement(/completion), each with expected traveling time $\alpha^{-1}$,
corresponds to a time-integrated contour enhancement(/completion) process with a Gamma distributed traveling time $T=T_{1}+ \ldots +T_{k}$ with
\begin{equation}\label{GammaDistributionIntegration}
\begin{array}{l}
P(T_{i}=t)=\alpha e^{-\alpha t}, \textrm{ for } i=1,\ldots,k, \\
P(T=t)=\Gamma(t; k,\alpha):=\frac{\alpha^{k} t^{k-1}}{\Gamma(k)} e^{-\alpha t}.
\end{array}
\end{equation}
The probability density kernel of this stochastic process is given by
\begin{equation}\label{ProbabilityDensityKernel}
R_{\alpha,k}^{\textbf{D},\textbf{a}}=R_{\alpha}^{\textbf{D},\textbf{a}} *^{(k-1)}_{SE(2)}R_{\alpha}^{\textbf{D},\textbf{a}}= \alpha^{k} (\alpha I - Q^{\textbf{D},\textbf{a}}(\underline{\mathcall{A}}))^{-k} \delta_{e}.
\end{equation}
For the linear, hypo-elliptic, contour enhancement case (i.e. $\textbf{D}=\textrm{diag}\{D_{11},0,D_{33}\}$ and $\textbf{a}=\textbf{0}$) the kernels admit the following asymptotic formula for $|g| \ll 1$:
\begin{equation}\label{enhass}
\begin{array}{ll}
R_{\alpha,k}(g) &= \int \limits_{0}^{\infty} \frac{\alpha^{k} t^{k-1}e^{-\alpha t}}{(k-1)!}
\frac{e^{-C^2\frac{|g|^2}{4t}}}{4\pi D_{11}D_{33}t^2} {\rm d}t=
\frac{\alpha^k}{(k-1)! 4\pi D_{11}D_{33}} \int \limits_{0}^{\infty}
t^{k-3}e^{-C^2\frac{|g|^2}{4t}-\alpha t}\,{\rm d}t \\
&= \frac{2^{1-k}}{\pi D_{11}D_{33} (k-1)!}\alpha^{k}
||g|C|^{k-2} \; \mathcall{K}(2-k,|g|C\sqrt{\alpha}),
\end{array}
\end{equation}
where $\mathcall{K}(n,z)$ denotes the modified Bessel function of the 2nd kind, and
with $C \in [2^{-1},\sqrt[4]{2}]$ and with
\begin{equation} \label{logmodulus}
|g|=\left|e^{c^{1}\mathcall{A}_{1}+c^{2}\mathcall{A}_{2}+c^{3}\mathcall{A}_{3}}\right|=
\sqrt{\left(\frac{|c^{1}|^2}{D_{11}}+\frac{|c^{3}|^2}{D_{33}}\right)^2 +\frac{|c^{2}|^2}{D_{11}D_{33}}}
\end{equation}
with $c^{1}=\frac{\theta(y-\eta)}{2(1-\cos \theta)}$, $c^{2}=\frac{\theta(\xi-x)}{2(1-\cos \theta)}$, $c^{3}=\theta$ if $\theta \neq 0$ and $(c^{1},c^{2},c^{3})=(x,y,0)$ if $\theta=0$.
\end{theorem}
\textbf{Proof }
We consider a random traveling time $T=\sum_{i=1}^{k} T_{i}$ in a
$k$-step random process $G_{T}=\sum_{i=1}^{k}G_{T_i}$ on $SE(2)$,
with $G_{T_i}$ independent random walks whose Fokker-Planck equations are given by (\ref{diffusionconvection}), and with independent traveling times $T_{i} \sim NE(\alpha)$ (i.e. $P(T_{i}=t)=f(t):=\alpha e^{-\alpha t}$).
Then for $k \geq 2$ we have $T \sim f *_{\mathbb{R}^{+}}^{k-1} f=\Gamma(\cdot; k,\alpha)$, (with $f*_{\mathbb{R}^+}g(t)=\int_{0}^{t} f(\tau)g(t-\tau)\,{\rm d}\tau$),
which follows by consideration of the characteristic function and application of Laplace transform $\mathcall{L}$.
We have $\alpha^{k}(\alpha I - Q)^{-k}=(\alpha (\alpha I - Q)^{-1})^k$, and for $k=2$ we have the
identity
\[
\begin{array}{l}
R_{\alpha,k=2}^{\textbf{D},\textbf{a}}(\textbf{x},\theta)
=\int \limits_{0}^{\infty} p(G_{T}=(\textbf{x},\theta) | T=t, G_{0}=e)\cdot p(T=t)\, {\rm d}t \\
=\int \limits_{0}^{\infty} p(G_{T}=(\textbf{x},\theta) \; |\; T=T_{1}+T_{2}=t, G_{0}=e)\cdot p(T_{1}+T_{2}=t) \, {\rm d}t \\
=\int \limits_{0}^{\infty} \int \limits_{0}^{t} p(G_{T_{1}+T_2}=(\textbf{x},\theta) \; |\; T_{1}=t-s, T_{2}=s, G_{0}=e)\cdot
p(T_{1}=t-s)\; p(T_{2}=s) \, {\rm d}s {\rm d}t \\
=\alpha^2 \, \mathcall{L}\left(t \mapsto \int \limits_{0}^{t} (K_{t-s}^{\textbf{D},\textbf{a}}*_{SE(2)}K_{s}^{\textbf{D},\textbf{a}} *_{SE(2)} \delta_{e})(\textbf{x},\theta) {\rm d}s\right)(\alpha)\\
= \alpha^2 \, \mathcall{L}\left(t \mapsto \int \limits_{0}^{t} (K_{t-s}^{\textbf{D},\textbf{a}}*_{SE(2)} K_{s}^{\textbf{D},\textbf{a}} )(\textbf{x},\theta) {\rm d}s\right)(\alpha) \\
= \alpha^2 \, \left(\mathcall{L}\left(t \mapsto K_{t}^{\textbf{D},\textbf{a}}(\cdot)\right)(\alpha) *_{SE(2)}\mathcall{L}\left(t \mapsto K_{t}^{\textbf{D},\textbf{a}}(\cdot)\right)(\alpha)\right)(\textbf{x},\theta)
= (R_{\alpha,k=1}^{\textbf{D},\textbf{a}}*_{SE(2)}R_{\alpha,k=1}^{\textbf{D},\textbf{a}})(\textbf{x},\theta).
\end{array}
\]
Thereby, the main result Eq.~\!(\ref{ProbabilityDensityKernel}) follows by induction.
Result (\ref{enhass}) follows by direct computation and application of the theory of weighted
sub-coercive operators on Lie groups \cite{TerElst} to the $SE(2)$ case. We have filtration $\gothic{g}_0:=
\textrm{span}\{\mathcall{A}_{1},\mathcall{A}_{3}\}$,
and $\gothic{g}_{1}=[\gothic{g}_0,\gothic{g}_0]=\textrm{span}\{\mathcall{A}_{1},\mathcall{A}_{2},\mathcall{A}_{3}\}=\mathcall{L}(SE(2))$,
so $w_1=1$, $w_3=1$ and $w_{2}=2$ and computation of the logarithmic map on $SE(2)$,
$g=e^{\sum_{i=1}^{3}c^{i} A_{i}} \Leftrightarrow \sum_{i=1}^{3}c^{i} A_{i} = \log g$, yields a non-smooth logarithmic squared modulus
locally equivalent to smooth $|g|^2$ given by (\ref{logmodulus}), see \cite[ch:5.4,eq.5.28]{DuitsAMS1}.
$\hfill \Box$ \\
\\
We have the following asymptotic formula for $\mathcall{K}(n,z)$:
\[
\mathcall{K}(n,z)
\approx
\left\{
\begin{array}{ll}
- \log(z/2) -\gamma_{EUL} & \textrm{if }n=0, \\
\frac{1}{2}(|n|-1)! \left( \frac{z}{2}\right)^{-|n|} & \textrm{if }n\neq 0,
\end{array}
\right.
\qquad \textrm{for }0 < z \ll 1,
\]
with Euler's constant $\gamma_{EUL}$,
and thereby Eq.~(\ref{enhass}) implies the following result:
\begin{corollary}\label{corr:X}
If $k=1$ then $R_{\alpha,k}^{\textbf{D}}(g)\equiv O(|g|^{-2})$. If $k=2$ then $R_{\alpha,k}^{\textbf{D}}(g)\equiv O(\log |g|^{-1})$.
If $k\geq 3$ then $R_{\alpha,k}^{\textbf{D}}(g)\equiv O(1)$ and the kernel has no singularity.
\end{corollary}
\begin{remark}
As this approach also naturally extends to positive (non-integer) fractional powers $k \in \mathbb{Q}, k\geq 0$ of the resolvent operator we wrote $\Gamma(k)$ instead of $(k-1)!$ in
Eq.~\!(\ref{GammaDistributionIntegration}). The recursion depth $k$ equals $(E(T))^2/Var(T)$, since the variance of $T$ equals $Var(T)= k/\alpha^2$.
\end{remark}
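The statements $T=T_{1}+\ldots+T_{k} \sim \Gamma(\cdot;k,\alpha)$, $E(T)=k/\alpha$ and $Var(T)=k/\alpha^2$ are easily checked by sampling; a minimal sketch:
\begin{verbatim}
# Monte-Carlo check: a sum of k independent NE(alpha) traveling times is
# Gamma(k, alpha) distributed (Eq. GammaDistributionIntegration), and
# k = (E(T))^2 / Var(T).
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(1)
k, alpha, n = 3, 0.1, 200000
T = rng.exponential(scale=1.0/alpha, size=(n, k)).sum(axis=1)

hist, edges = np.histogram(T, bins=100, density=True)
mids = 0.5*(edges[:-1] + edges[1:])
print(np.max(np.abs(hist - gamma.pdf(mids, a=k, scale=1.0/alpha))))  # small
print(T.mean()*alpha, T.var()*alpha**2)   # both approximately k
\end{verbatim}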
In Figure~\ref{fig:Gamma}, we show that increasing $k$ (while fixing $E(T)=k/\alpha$) allows for better propagation of ink towards the completion areas. The same concept applies to the contour enhancement process. Here we change the time integration (using the stochastic approach outlined in Section~\ref{section:MonteCarloStochasticImplementation}) in Eq.~\!(\ref{GammaDistributionIntegration}) rather than iterating the resolvents in Eq.~\!(\ref{ProbabilityDensityKernel}), for better accuracy.
\begin{figure}
\centering
\includegraphics[width=0.85\hsize]{figGamma.pdf}
\caption{Illustration of Theorem~\ref{th:prob} and Corollary~\ref{corr:X}, via the stochastic implementation for the $k$-step contour completion process ($T=\sum_{i=1}^k T_{i}$) explained in Subsection~\ref{section:MonteCarloStochasticImplementation}. We have depicted the (2D marginals) of 3D completion fields \cite{Zweck} now generalized to
$\mathcall{C}(x,y,\theta):=((Q-(\alpha k) I)^{-k}\delta_{g_{0}})(x,y,\theta) \cdot ((Q^{*}-(\alpha k) I)^{-k}\delta_{g_{1}})(x,y,\theta)$, with $Q=-\mathcall{A}_{1}+ D_{33} \mathcall{A}_{3}^2$ and with
$g_0=(\textbf{x}_0, \frac{\pi}{6})$ and $g_{1}=(\textbf{x}_1, -\frac{\pi}{6})$, $\alpha=0.1$, $D_{33}=0.1$, via a rough resolution
(on a $200\times 200 \times 32$-grid) and a finer resolution (on a $400\times 400 \times 64$-grid).
Image intensities have been scaled to full range.
The resolvent process $k=1$ suffers from: ``the better the approximation, the less relative infilling in the completion'' (cf.~left column). The singularities at $g_0$
and $g_{1}$ vanish at $k=3$. A reasonable compromise is found at $k=2$, where infilling is stronger, and where the modes (i.e. curves $\gamma$ with $\mathcall{A}_{2}\mathcall{C}(\gamma)=\mathcall{A}_{3}\mathcall{C}(\gamma)=0$, cf.~\cite[App.~A]{BekkersJMIV},\cite{DuitsAlmsick2008}) are easy to detect. \label{fig:Gamma}}
\end{figure}
\subsection{Fundamental Solutions\label{section:FundamentalSolutions}}
The fundamental solution $S^{\textbf{D},\textbf{a}}:SE(2) \to \mathbb{R}^{+}$ associated to generator
$Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3)$
solves
\begin{equation} \label{fundsolPDE}
Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3) \; S^{\textbf{D},\textbf{a}} =-\delta_{e}\ ,
\end{equation}
and is given by
\begin{equation}\label{FundamentalSolution}
\begin{aligned}
S^{\textbf{D},\textbf{a}}(x, y, \theta) &= \int \limits_{0}^{\infty}K_{t}^{\textbf{D},\textbf{a}}(x,y,\theta)\, {\rm d}t =
\left(-(Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3))^{-1}\delta_e \right)(x,y,\theta) \\
&=\lim_{\alpha \downarrow 0}\left(\frac{-\alpha(Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3)-\alpha I)^{-1}}{\alpha}\delta_e \right)(x,y,\theta)
=\lim_{\alpha \downarrow 0}\frac{R_{\alpha}^{\textbf{D},\textbf{a}}(x, y, \theta)}{\alpha}.\\
\end{aligned}
\end{equation}
There exist many intriguing relations \cite{DuitsAMS2,Boscain2} between fundamental solutions of hypo-elliptic diffusions
and left-invariant metrics on $SE(2)$, which make these solutions interesting. Furthermore, fundamental solutions on the nilpotent approximation $(SE(2))_{0}$ take a relatively simple explicit form \cite{Gaveau,DuitsAMS1}.
However, by Eq.~(\ref{FundamentalSolution}) these fundamental solutions suffer from some practical drawbacks: they are not probability kernels, in fact they are not even $\mathbb{L}_1$-normalizable, and they suffer from poles in both spatial and Fourier domain. Nevertheless, they are interesting to study for the limiting case $\alpha \downarrow 0$ and they have been suggested in cortical modeling \cite{Barbieri2012,BarbieriArxiv2013}. \\
\\
\subsection{The Underlying Probability Theory}
In this section we provide an overview of the underlying probability theory
belonging to our PDE's of interest, given by Eq.~(\ref{diffusionconvection}), (\ref{Resolvent}) and (\ref{fundsolPDE}).
We obtain the contour enhancement case by setting $\textbf{D}=\textrm{diag}\{D_{11},0,D_{33}\}$ and $\textbf{a}=\textbf{0}$. Then, by application of Eq.~(\ref{leftInvariantDerivatives}), Eq.~(\ref{diffusionconvection}) becomes the forward Kolmogorov equation
\begin{equation} \label{StochasticEnhancementEvolution}
\left\{
\begin{aligned}
&\partial_t W(g,t)=(D_{11}\partial_\xi^2+D_{33}\partial_\theta^2)W(g,t),\\
&W(g,t=0)=U(g)\\
\end{aligned} \right.
\end{equation}
of the following stochastic process for contour enhancement:
\begin{equation} \label{StochasticEnhancementProcess}
\left\{\begin{aligned}
&\textbf{X}(t)=\textbf{X}(0)+\sqrt{2D_{11}}\varepsilon_\xi\int^t_0(\cos\Theta(\tau)\textbf{e}_x+\sin\Theta(\tau)\textbf{e}_y)\frac{1}{2\sqrt{\tau}}\,{\rm d}\tau\\
&\Theta(t)=\Theta(0)+\sqrt{t}\sqrt{2D_{33}}\varepsilon_\theta,\qquad\varepsilon_\xi,\varepsilon_\theta\thicksim\mathcall{N}(0,1)\\
\end{aligned} \right.
\end{equation}
For contour completion, we must set the diffusion matrix $\textbf{D}=\textrm{diag}\{0,0,D_{33}\}$ and convection vector $\textbf{a}=(1,0,0)$. In this case Eq.~(\ref{diffusionconvection}) takes the form
\begin{equation} \label{StochasticCompletionEvolution}
\left\{
\begin{aligned}
&\partial_t W(g,t)=(\partial_\xi+D_{33}\partial_\theta^2)W(g,t),\qquad g\in SE(2), t>0,\\
&W(g,t=0)=U(g).\\
\end{aligned} \right.
\end{equation}
This is the Kolmogorov equation of Mumford's direction process \cite{Mumford}
\begin{equation} \label{eq:MumfordDirectionProcess}
\left\{\begin{aligned}
&\textbf{X}(t)=X(t)\textbf{e}_x+Y(t)\textbf{e}_y=\textbf{X}(0)+\int^t_0 (\cos\Theta(\tau)\textbf{e}_x+\sin\Theta(\tau)\textbf{e}_y)\,{\rm d}\tau\\
&\Theta(t)=\Theta(0)+\sqrt{t}\sqrt{2D_{33}}\varepsilon_\theta,\qquad\varepsilon_\theta\thicksim\mathcall{N}(0,1)\\
\end{aligned} \right.
\end{equation}
\begin{remark}
As contour completion processes aim to reconstruct the missing parts of interrupted contours based on the contextual information of the data, a positive direction $\textbf{e}_{\xi}=\cos(\theta)\textbf{e}_x+\sin(\theta)\textbf{e}_y$ in the spatial plane is given to a random walker.
On the contrary, in contour enhancement processes a bi-directional movement of a random walker along $\pm\textbf{e}_{\xi}$ is included for noise removal by anisotropic diffusion.
\end{remark}
The general stochastic process on $SE(2)$ underlying Eq.~(\ref{diffusionconvection}) is:
{\small
\begin{equation} \label{eq:form}
\left\{
\begin{array}{l}
G_{n+1}:=(X_{n+1},\Theta_{n+1})=G_n + \Delta t \sum \limits_{i \in I} a_{i} \left.\textbf{e}_{i}\right|_{G_n} +\sqrt{\Delta t}\sum \limits_{i \in I} \epsilon_{i, n+1}\,\sum \limits_{j \in I} \sigma_{ji}\,
\left. \textbf{e}_{j}\right|_{G_n}, \\
G_{0}=(X_{0},\Theta_{0}),
\end{array}
\right.
\end{equation}
}
with $I = \{1,2,3\}$ in the elliptic case and $I = \{1,3\}$ in the hypo-elliptic case, where $n =1,\ldots, N-1$, with $N \in \mathbb{N}$ the number of steps of size $\Delta t >0$, $\sigma=\sqrt{2\textbf{D}}$ the unique symmetric positive definite matrix such that $\sigma^2=2\textbf{D}$, $\{\epsilon_{i, n+1}\}_{i \in I, n =1,\ldots, N-1 }$ independent normally distributed \mbox{$\epsilon_{i, n+1} \sim \mathcall{N}(0,1)$}, and {\small $\left. \textbf{e}_{1} \right|_{G_{n}}=(\cos \Theta_{n},\sin \Theta_{n},0)$, $\left. \textbf{e}_{2} \right|_{G_{n}}=(-\sin \Theta_{n},\cos \Theta_{n},0)$, and $\left. \textbf{e}_{3} \right|_{G_{n}}=(0,0,1)$}. In case $I = \{1,2,3\}$, Eq.~(\ref{eq:form}) boils down to:
{
\begin{equation}
\begin{array}{l}
\begin{array}{l}
\left(
\begin{array}{c}
X_{n+1} \\
Y_{n+1} \\
\Theta_{n+1}
\end{array}
\right)=
\left(
\begin{array}{c}
X_{n} \\
Y_{n} \\
\Theta_{n}
\end{array}
\right)+
\Delta t \,
{\rm R}_{\Theta_n}
\left(
\begin{array}{c}
a_{1} \\
a_{2} \\
a_{3}
\end{array}
\right)
+
\sqrt{\Delta t}\,
({\rm R}_{\Theta_n})^{T} \,
\sigma \,
\rm{ R}_{\Theta_n}
\left(
\begin{array}{c}
\epsilon_{1,n+1} \\
\epsilon_{2,n+1} \\
\epsilon_{3,n+1}
\end{array}
\right),\\
\textrm{ with }{\rm R}_{\theta}=
\left(
\begin{array}{ccc}
\cos \theta & -\sin \theta & 0 \\
\sin \theta & \cos \theta & 0 \\
0 & 0 & 1
\end{array}
\right).
\end{array}
\end{array}
\end{equation}
}
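A minimal sketch of the discretization (\ref{eq:form}) for the hypo-elliptic contour enhancement case $I=\{1,3\}$ (i.e. $\textbf{a}=\textbf{0}$, $\textbf{D}=\textrm{diag}\{D_{11},0,D_{33}\}$); the parameter values are illustrative only:
\begin{verbatim}
# Random walks of Eq. (eq:form), hypo-elliptic enhancement case I = {1,3}.
import numpy as np

def sample_paths(n_paths=20, n_steps=1000, dt=0.05, D11=0.5, D33=0.5, seed=2):
    rng = np.random.default_rng(seed)
    x = np.zeros((n_paths, n_steps + 1))
    y = np.zeros((n_paths, n_steps + 1))
    th = np.zeros((n_paths, n_steps + 1))
    s1, s3 = np.sqrt(2*D11), np.sqrt(2*D33)      # sigma = sqrt(2 D)
    for n in range(n_steps):
        e1 = rng.standard_normal(n_paths)        # epsilon_{1,n+1}
        e3 = rng.standard_normal(n_paths)        # epsilon_{3,n+1}
        # noise along e_1|G_n = (cos Th_n, sin Th_n, 0), e_3|G_n = (0,0,1)
        x[:, n+1] = x[:, n] + np.sqrt(dt)*s1*e1*np.cos(th[:, n])
        y[:, n+1] = y[:, n] + np.sqrt(dt)*s1*e1*np.sin(th[:, n])
        th[:, n+1] = th[:, n] + np.sqrt(dt)*s3*e3
    return x, y, th
\end{verbatim}
Histogramming end points over many such paths, with traveling times sampled from (\ref{exponentialdistribution}) (or from the Gamma distribution of Theorem~\ref{th:prob}), yields the Monte-Carlo kernel estimates of Subsection~\ref{section:MonteCarloStochasticImplementation}.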
See Figure~\ref{figure:StochasticRandomWalkerCompletionEnhancementResult} for random walks of the Brownian motion and the direction process in $SE(2)$.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.7\hsize]{StochasticRandomWalkerCompletionEnhancementResult.pdf}
\caption{Top row: 20 random walks of the direction process for contour completion in $SE(2)=\mathbb{R}^2 \rtimes S^1$ by Mumford \cite{Mumford} with $\textbf{a}=(1,0,0)$, $D_{33}=0.3$, time step $\Delta t=0.005$ and 1000 steps. Bottom row: 20 random walks of the linear left-invariant stochastic process for contour enhancement within $SE(2)$ with parameter settings $D_{11}=D_{33}=0.5$ and $D_{22}=0$, time step $\Delta t=0.05$ and 1000 steps.}
\label{figure:StochasticRandomWalkerCompletionEnhancementResult}
\end{figure}
\section{Implementation} \label{section:Implementation}
\subsection{Left-invariant Differences} \label{section:Left-invariantDifferences}
\subsubsection{Left-invariant Finite Differences with B-Spline Interpolation} \label{section: Left-invariant Finite Differences with B-spline Interpolation}
As explained in Section \ref{section:The Euclidean Motion Group $SE(2)$ and Group Representations}, our diffusions must be left-invariant. Therefore, a new grid template based on the left-invariant frame $\{\textbf{e}_\xi,\textbf{e}_\eta,\textbf{e}_\theta\}$, instead of the fixed frame $\{\textbf{e}_x,\textbf{e}_y,\textbf{e}_\theta\}$, needs to be used in the finite difference methods.
\begin{figure}[!htbp]
\centering
\subfloat{\includegraphics[width=0.9\hsize]{finiteDifferenceScheme.pdf}}
\caption{Illustration of the spatial part of the stencil of the numerical scheme. The horizontal and vertical dashed lines indicate the sampling grid, which is aligned with $\{\textbf{e}_x,\textbf{e}_y\}$. The black dots are aligned with the rotated left-invariant coordinate system $\{\textbf{e}_\xi,\textbf{e}_\eta\}$ with $\theta=m \cdot \Delta\theta$, where $m \in \{0,1,...,N_o-1\}$ indexes the orientations, equidistantly sampled with distance $\Delta \theta = \frac{2\pi}{N_o}$.}
\label{fig:finiteDifferenceScheme}
\end{figure}
To understand how left-invariant finite differences are implemented, see Figure~\ref{fig:finiteDifferenceScheme}, where 2nd order B-spline interpolation \cite{Unser1993} is used to approximate off-grid samples.
The main advantage of this left-invariant finite difference scheme is the improved rotation invariance compared to finite differences applied after expressing the PDE's in fixed $(x,y,\theta)$-coordinates, such as in \cite{Boscain2,FrankenPhDThesis,Zweck}. This advantage is clearly demonstrated in \cite[Fig.~10]{Franken2009IJCV}. The drawback, however, is the low computational speed and a small amount of additional blurring caused by the interpolation scheme \cite{FrankenPhDThesis}.
\subsection{Left-invariant Finite Difference Approaches for Contour Enhancement and Completion}
\label{section:Left-invariant Finite Difference Approaches for Contour Enhancement}
Eq.~(\ref{StochasticEnhancementEvolution}) of the contour enhancement process and Eq.~(\ref{StochasticCompletionEvolution}) of the contour completion process show us respectively the Brownian motion and direction process of oriented particles moving in $SE(2)\equiv \mathbb{R}^2 \rtimes S^1$. Next we will provide and analyze finite difference schemes for both processes.
\subsubsection{Explicit Scheme for Linear Contour Enhancement and Completion}\label{section:ExplicitSchemeforLinearContourEnhancementCompletion}
We can represent the explicit numerical approximations of the contour enhancement process and contour completion process by using the generator
$Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3)$ in a general form, i.e. $Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3) = (D_{11}\mathcall{A}_1^2+D_{33}\mathcall{A}_3^2) = (D_{11}\partial_\xi^2+D_{33}\partial_\theta^2)$ for the diffusion process and $Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3)=(\partial_\xi+D_{33}\partial_\theta^2)$ for
the convection-diffusion process, which yield the following forward Euler discretization:
\begin{align} \label{forwardEuler}
\left\{ \begin{aligned}
&W(g,t+\Delta t)=W(g,t)+\Delta t \, Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3) \, W(g,t),\\
&W(g,0)=U_f(g).\\
\end{aligned} \right.
\end{align}
We take the centered 2nd order finite difference scheme with B-spline interpolation as shown in Figure~\ref{fig:finiteDifferenceScheme} to numerically approximate the diffusion terms $(D_{11}\partial_\xi^2+D_{33}\partial_\theta^2)$, and use upwind finite differences for $\partial_\xi$. In the forward Euler discretization, the time step $\Delta t$ is critical for the stability of the algorithm. Typically, the convection process and the diffusion process impose different requirements on the step size $\Delta t$. The convection requires time steps equal to the spatial grid size ($\Delta t=\Delta x$) to prevent additional blurring due to interpolation, while the diffusion process requires a sufficiently small $\Delta t$ for stability, as we show next. In this combined case, we simulate the diffusion process and convection process alternately with different step sizes $\Delta t$ according to the splitting scheme in \cite{Creusen2013}, where half of the diffusion steps are carried out before one convection step, and half after the convection.
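For illustration, the sketch below performs one forward Euler step of Eq.~(\ref{forwardEuler}) for the enhancement generator $D_{11}\partial_\xi^2+D_{33}\partial_\theta^2$. For simplicity it expands $\partial_\xi^2=\cos^2\theta\,\partial_x^2+\sin 2\theta\,\partial_x\partial_y+\sin^2\theta\,\partial_y^2$ on the fixed grid with periodic boundaries, instead of using the B-spline interpolated stencil of Figure~\ref{fig:finiteDifferenceScheme}:
\begin{verbatim}
# One forward Euler step for the hypo-elliptic enhancement generator
# D11*(d/dxi)^2 + D33*(d/dtheta)^2 on W of shape (Nx, Ny, No);
# periodic boundaries in all axes via np.roll (a simplification).
import numpy as np

def euler_step(W, dt, dx, D11, D33):
    No = W.shape[2]
    dth = 2*np.pi/No
    th = np.arange(No)*dth
    Wxx = (np.roll(W, -1, 0) - 2*W + np.roll(W, 1, 0)) / dx**2
    Wyy = (np.roll(W, -1, 1) - 2*W + np.roll(W, 1, 1)) / dx**2
    Wxy = (np.roll(np.roll(W, -1, 0), -1, 1) - np.roll(np.roll(W, -1, 0), 1, 1)
         - np.roll(np.roll(W, 1, 0), -1, 1) + np.roll(np.roll(W, 1, 0), 1, 1)) / (4*dx**2)
    Wthth = (np.roll(W, -1, 2) - 2*W + np.roll(W, 1, 2)) / dth**2
    Wxixi = np.cos(th)**2*Wxx + np.sin(2*th)*Wxy + np.sin(th)**2*Wyy
    return W + dt*(D11*Wxixi + D33*Wthth)
\end{verbatim}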
The resolvent of the (convection-)diffusion process can be obtained by integrating and weighting each evolution step with the negative exponential distribution in Eq.~(\ref{exponentialdistribution}). We set the parameters $\textbf{a}=(1,0,0)$ and $\textbf{D}=\textrm{diag} \{1,0,D_{33}\}$ with $\frac{D_{33}}{D_{11}}\approx0.01$ to avoid too much blurring on $S^{1}$.
\begin{remark}
Referring to the stability analysis of Franken et al.\cite{Franken2009IJCV} in the general gauge frame setting, we similarly obtain: $\Delta t \leq \frac{1}{2(1+\sqrt{2}+\frac{1}{q^2})}$ in our case of normal left-invariant derivatives.
For a typical value of $q=\frac{\Delta\theta}{\beta}=\frac{(\pi/24)}{0.1}$ in our convention with $\beta^2:=\frac{D_{33}}{D_{11}} = 0.01$, in which $D_{33} = 0.01$ and $D_{11} = 1$, cf.~\cite{DuitsJMIV2014b}, we obtain stability bound $\Delta t \leq 0.16$ in the case of contour enhancement Eq.~(\ref{StochasticEnhancementEvolution}).
\end{remark}
\subsubsection{Implicit Scheme for Linear Contour Enhancement and Completion}
The implicit scheme of the contour enhancement and contour completion is given by:
\begin{align} \label{ImplicitScheme}
\left\{ \begin{aligned}
&W(g,t+\Delta t)=W(g,t)+\Delta t \, Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3) \, W(g,t+\Delta t),\\
&W(g,0)=U_f(g).\\
\end{aligned} \right.
\end{align}
Then, the discretized form of this implicit (backward Euler) scheme can be written as:
\begin{align} \label{DiscretizationImplicitScheme}
\left\{ \begin{aligned}
&\textbf{w}^{s+1}=\textbf{w}^s+\hat{\textbf{Q}}\textbf{w}^{s+1},\\
&\textbf{w}^1=\textbf{u},\\
\end{aligned} \right.
\end{align}
in which $\hat{\textbf{Q}} \equiv \Delta t (Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3))$, and $\textbf{w}^s$ is the solution of the PDE at $t=(s-1)\Delta t, s \in \{1,2,...\}$, with the initial state $\textbf{w}^1=\textbf{u}$. Using the conjugate gradient method as shown in \cite{Creusen2013}, we can solve the resulting linear system $(\textbf{I}-\hat{\textbf{Q}})\textbf{w}^{s+1}=\textbf{w}^s$ iteratively without evaluating the matrix $\hat{\textbf{Q}}$ explicitly. The advantage of an implicit method is that it is unconditionally stable, even for large step sizes.
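A matrix-free sketch of one implicit step of Eq.~(\ref{DiscretizationImplicitScheme}), where \texttt{apply\_Q} denotes any (hypothetical) routine evaluating $\hat{\textbf{Q}}$, e.g. built from the explicit stencil above; note that plain conjugate gradients presumes a symmetric operator, which holds in the pure diffusion case $\textbf{a}=\textbf{0}$:
\begin{verbatim}
# Matrix-free implicit step: solve (I - Qhat) w^{s+1} = w^s with CG.
# apply_Q: array of shape `shape` -> dt * Q applied to it (hypothetical helper).
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def implicit_step(w, apply_Q, shape):
    n = w.size
    def matvec(v):
        V = v.reshape(shape)
        return (V - apply_Q(V)).ravel()          # (I - Qhat) v
    A = LinearOperator((n, n), matvec=matvec, dtype=w.dtype)
    w_next, info = cg(A, w)
    assert info == 0, "CG did not converge"
    return w_next
\end{verbatim}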
\subsection{Numerical Fourier Approaches \label{section:Duitsmatrixalgorithm}}
The following numerical scheme is a generalization of the numerical scheme proposed by Jonas August for the direction process \cite{August}.
An advantage of this scheme over others, such as the algorithm by Zweck et al. \cite{Zweck} or other finite difference schemes \cite{Franken2009IJCV}, is that (as we will show later in Theorem \ref{th:RelationofFourierBasedWithExactSolution}) it is directly related to the exact analytic solutions (approach 1) presented in Section~\ref{3GeneralFormsExactSolutions}.
The goal is to obtain a numerical approximation of the exact solution of
\begin{equation} \label{theeqn}
\alpha(\alpha I-Q^{\textbf{D},\textbf{a}}(\underline{\mathcall{A}}))^{-1}U=P, \, U \in \mathbb{L}_{2}(G), \quad \textit{with} \quad \underline{\mathcall{A}}=(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3),
\end{equation}
where the generator $Q^{\textbf{D},\textbf{a}}(\underline{\mathcall{A}})$ is given in the general form Eq.~(\ref{diffusionconvectiongenerator})
without further assumptions on the parameters $a_{i}>0$, $D_{ii}>0$. Recall that its solution is given by $SE(2)$-convolution with the corresponding kernel. First we write
\begin{equation} \label{ansatz}
\begin{array}{l}
\mathcall{F}[P(\cdot,e^{i\theta})](\mbox{\boldmath$\omega$})=\hat{P}(\mbox{\boldmath$\omega$},e^{i\theta})= \sum \limits_{l=-\infty}^{\infty} \hat{P}^{l}(\mbox{\boldmath$\omega$}) e^{i l \theta}, \\
\mathcall{F}[U(\cdot,e^{i\theta})](\mbox{\boldmath$\omega$})=\hat{U}(\mbox{\boldmath$\omega$},e^{i\theta})= \sum \limits_{l=-\infty}^{\infty} \hat{U}^{l}(\mbox{\boldmath$\omega$}) e^{i l \theta}. \\
\end{array}
\end{equation}
Then by substituting (\ref{ansatz}) into (\ref{theeqn}) we obtain the following 4-fold recursion
{\small
\begin{equation} \label{5recursion}
\begin{array}{l}
(\alpha \!+\!l^2 D_{33}\!+\! i\, a_{3} l+\frac{\rho^2}{2}(D_{11}+D_{22}))\hat{P}^{l}(\mbox{\boldmath$\omega$}) + \frac{ a_1(i\, \omega_x
\!+\! \omega_{y})\!+\!a_2(i \, \omega_y \!-\!\omega_{x})}{2} \hat{P}^{l-1}(\mbox{\boldmath$\omega$})\\+\frac{ a_1(i\, \omega_x\!-\! \omega_{y})+a_2(i \, \omega_y \!+\!\omega_{x})}{2} \hat{P}^{l+1}(\mbox{\boldmath$\omega$})
-
\frac{ D_{11}(i\, \omega_x\!+\! \omega_{y})^2\!+\!D_{22}(i \, \omega_y \!-\!\omega_{x})^2}{4}
\hat{P}^{l-2}(\mbox{\boldmath$\omega$}) \\
-
\frac{ D_{11}(i\, \omega_x\!-\! \omega_{y})^2+D_{22}(i \, \omega_y \!+\! \omega_{x})^2}{4}
\hat{P}^{l+2}(\mbox{\boldmath$\omega$}) = \alpha \, \hat{U}^{l}(\mbox{\boldmath$\omega$}),
\end{array}
\end{equation}
}
which can be rewritten in polar coordinates
\begin{equation} \label{recurs}
\begin{array}{l}
(\alpha + i l a_3 +D_{33}l^2+ \frac{\rho^2}{2}(D_{11}+D_{22})) \, \tilde{P}^{l}(\rho)+ \frac{\rho}{2}(i a_{1}-a_2)\, \tilde{P}^{l-1}(\rho)+ \\
\frac{\rho}{2}(i a_{1}+a_2) \, \tilde{P}^{l+1}(\rho) + \frac{\rho^2}{4}(D_{11}-D_{22})\, (\tilde{P}^{l+2}(\rho)+\tilde{P}^{l-2}(\rho))=
\alpha \, \tilde{U}^{l}(\rho)
\end{array}
\end{equation}
for all $l=0,1,2,\ldots$ with $\tilde{P}^{l}(\rho) = e^{il\varphi} \hat{P}^{l}(\mbox{\boldmath$\omega$})$ and $\tilde{U}^{l}(\rho) = e^{il\varphi} \hat{U}^{l}(\mbox{\boldmath$\omega$})$, with
$\mbox{\boldmath$\omega$}=(\rho \cos \varphi, \rho \sin \varphi)$.
Equation (\ref{recurs}) can be written in matrix-form, where a 5-band matrix must be inverted. For an explicit representation
of this 5-band matrix, where the spatial Fourier transform in (\ref{ansatz}) is replaced by the $\textbf{DFT}$, we refer to \cite[p.230]{DuitsPhDThesis}. Here we stick to a Fourier series on $\mathbb{T}$, the $\textbf{CFT}$ on $\mathbb{R}^2$, and truncation of the series at $N \in \mathbb{N}$, which yields the
$(2N+1) \times (2N+1)$ matrix equation:
\begin{equation} \label{MatrixInverse}
{\tiny \left(
\begin{array}{ccccccc}
p_{-N} & q+t & r & 0 & 0 & 0 & 0 \\
q-t & p_{-N+1} & q+t & r & 0 & 0 & 0 \\
r & \ddots & \ddots & \ddots & r & 0 & 0 \\
0 & \ddots & q-t & p_{0} & q+t & r & 0 \\
0 & 0 & r & \ddots & \ddots & \ddots & r \\
0 & 0 & 0 & r & q-t & p_{N-1} & q+t \\
0 & 0 & 0 & 0 & r & q-t & p_{N}
\end{array}
\right)
\left(
\begin{array}{c}
\tilde{P}^{-N}(\rho) \\
\tilde{P}^{-N+1}(\rho) \\
\vdots \\
\tilde{P}^{0}(\rho) \\
\vdots \\
\tilde{P}^{N-1}(\rho)
\\
\tilde{P}^{N}(\rho)
\end{array}
\right)=
\frac{4 \alpha}{ D_{33}} \!
\left(
\begin{array}{c}
\tilde{U}^{-N}(\rho) \\
\tilde{U}^{-N+1}(\rho) \\
\vdots \\
\tilde{U}^{0}(\rho) \\
\vdots \\
\tilde{U}^{N-1}(\rho)
\\
\tilde{U}^{N}(\rho)
\end{array}
\right)
}
\end{equation}
where $p_{l}= (2l)^2 + \frac{4 \alpha + 2 \rho^2(D_{11}+D_{22})+4 i a_{3} l}{D_{33}}$, $r=\frac{\rho^2(D_{11}-D_{22})}{D_{33}}$, $q= \frac{2 \rho a_{1}i}{D_{33}}$ and $t= \frac{2 a_2 \rho}{D_{33}}.$
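For a fixed radial frequency $\rho$, the band system Eq.~(\ref{MatrixInverse}) can be solved directly in diagonal-ordered form, as the following sketch illustrates (with $p_{l}$, $q$, $r$, $t$ as defined above):
\begin{verbatim}
# Solve the (2N+1)x(2N+1) 5-band system of Eq. (MatrixInverse) for one
# radial frequency rho; U_tilde holds (U^{-N}, ..., U^{N}).
import numpy as np
from scipy.linalg import solve_banded

def solve_fourier_mode(U_tilde, rho, alpha, a, D, N):
    a1, a2, a3 = a
    D11, D22, D33 = D
    l = np.arange(-N, N + 1)
    p = (2*l)**2 + (4*alpha + 2*rho**2*(D11 + D22) + 4j*a3*l) / D33
    q = 2j*rho*a1 / D33
    t = 2*a2*rho / D33
    r = rho**2*(D11 - D22) / D33
    n = 2*N + 1
    ab = np.zeros((5, n), dtype=complex)   # diagonal-ordered form, bands +2..-2
    ab[0, 2:] = r                          # second superdiagonal
    ab[1, 1:] = q + t                      # first superdiagonal
    ab[2, :]  = p                          # main diagonal
    ab[3, :-1] = q - t                     # first subdiagonal
    ab[4, :-2] = r                         # second subdiagonal
    return solve_banded((2, 2), ab, (4*alpha/D33) * U_tilde)
\end{verbatim}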
\begin{remark}\label{rem:41}
The four-fold recursion Eq.~(\ref{recurs}) is uniquely determined by $\tilde{P}^{-N-1}=0, \tilde{P}^{-N-2}=0$, $\tilde{P}^{N+1}=0, \tilde{P}^{N+2}=0$, which is applied in Eq.~(\ref{MatrixInverse}).
\end{remark}
\begin{remark}\label{rem:42}
When applying the Fourier transform on $SE(2)$ to the PDE's of interest, as done in \cite{DuitsAlmsick2008,Boscain3,Boscain2}, one obtains a fully isomorphic 5-band matrix system, as pointed out in \cite[App.A, Lemma A.1, Thm A.2]{DuitsAlmsick2008}. The basic underlying coordinate transition to be applied is given by
\[
(p,\phi)= (\rho,\varphi - \theta)
\]
where $p$ indexes the irreducible representations of $SE(2)$ and $\phi$
denotes the angular argument of the $p$-th irreducible function subspace $\mathbb{L}_{2}(S^{1})$ on which
the $p$-th irreducible representation acts. For further details see \cite[App.A]{DuitsAlmsick2008} and \cite{Chirikjian}.
\end{remark}
In \cite{DuitsAlmsick2008}, we showed the relation between spectral decomposition of this matrix (for $N \to \infty$) and the exact solutions of contour completion. In this paper we do the same for the contour enhancement case in Section \ref{section:FourierBasedForEnhancement}.
\subsection{Stochastic Implementation}\label{section:MonteCarloStochasticImplementation}
In a Monte-Carlo simulation as proposed in \cite{Gonzalo,BarbieriArxiv2013}, we sample the stochastic process (Eq.~\!(\ref{eq:form})) such that we obtain the kernels for our linear left-invariant diffusions, in particular the kernel of the contour enhancement process and the kernel of the contour completion process. Figure~\ref{figure:MentoCarloSimulation} shows the $xy$-marginals of the enhancement and the completion kernel, which were obtained by counting the number of paths crossing each voxel in the orientation score domain. To obtain the resolvent kernels, each path was weighted using the negative exponential distribution with respect to its traveling time, Eq.~\!(\ref{exponentialdistribution}).
Within Figure~\ref{figure:MentoCarloSimulation} we see, for practically reasonable parameter settings, that increasing the number of sample paths to 50000 already provides a reasonable approximation of the exact kernels.
\begin{figure}[!htb]
\centering
{\includegraphics[width=\textwidth]{stochastic.pdf}}
\caption{Stochastic random process for the contour enhancement kernel (top) and for the contour completion kernel (bottom). Both are obtained via Monte Carlo simulation of random process
(\ref{eq:form}). In contour enhancement, we set step size $\Delta t=0.05$, $\alpha=10$, $D_{11}=D_{33}=0.5$, and $D_{22}=0$. In contour completion, we set step size $\Delta t=0.005$, $\alpha=5$, $D_{33}=1$, and $\textbf{a}=(1,0,0)$.}
\label{figure:MentoCarloSimulation}
\end{figure}
The implementation of the $k$-fold resolvent kernels is obtained by application of Theorem~\ref{th:prob}, i.e. by imposing a Gamma distribution instead of a negatively exponential distribution. Here stochastic implementations become slower, as one can no longer rely on the memoryless property of the negatively exponential distribution; instead, one should only take the end point of each sample path $G_T$ after sampling a random traveling time $T\sim\Gamma(t;k,\alpha)$. Still, such stochastic implementations are favorable (in view of the singularity) over the concatenation of $SE(2)$-convolutions of the resolvent kernels with themselves.
\section{Implementation of the Exact Solution in the Fourier and the Spatial Domain and their Relation to Numerical Methods}\label{section:Comparison}
In previous works by Duits and van~Almsick \cite{DuitsCASA2005,DuitsCASA2007,DuitsAlmsick2008}, three methods were applied producing three different exact representations for the kernels (or ``Green's functions'') of the forward Kolmogorov equations of the contour completion process:
\begin{enumerate}
\item The first method involves a spectral decomposition of the bi-orthogonal generator in the $\theta$-direction for each fixed spatial frequency $(\omega_{x},\omega_y)=(\rho \cos\varphi, \rho \sin\varphi) \in \mathbb{R}^{2}$ which is an unbounded Mathieu operator, producing a (for reasonably small times $t>0$) \emph{slowly converging} Fourier series representation. Disadvantages include the Gibbs phenomenon. Nevertheless, the Fourier series representation in terms of \emph{periodic} Mathieu functions directly relates to the numerical algorithm proposed by August in \cite{August}, as shown in \cite[ch:5]{DuitsAlmsick2008}. Indeed the Gibbs phenomenon appears in this algorithm as the method requires some smoothness of data: running the algorithm on a sharp discrete delta-spike provides Gibbs-oscillations. The same holds for Fourier transform on $SE(2)$ methods \cite{DuitsAlmsick2008,Boscain3,Boscain2}, recall Remark \ref{rem:42}.
\item The second method unwraps for each spatial frequency the circle $S^{1}$ to the real line $\mathbb{R}$, to solve the Green's function with absorbing boundary conditions at infinity which results in a quickly converging series in rapidly decaying terms expressed in \emph{non-periodic} Mathieu functions. There is a nice probabilistic interpretation: The $k$-th number in the series reflects the contribution of sample-paths in a Monte-Carlo simulation, carrying homotopy number $k \in \mathbb{Z}$, see Figure~\ref{fig:K0K1K2}.
\item The third method applies the Floquet theorem on the resulting series of the second method and application of the geometric series produces a formula involving only 4 Mathieu functions \cite{DuitsAlmsick2008,MarkusThesis}.
\end{enumerate}
We briefly summarize these results in the general case and then we provide the end-results of the three approaches for respectively the contour enhancement case and the contour completion case in the theorems below.
In Figure~\ref{fig:EnhancementKernel}, we show an illustration of an exact resolvent enhancement kernel and an exact fundamental solution and their marginals.
\begin{figure}[!htb]
\centerline{
\includegraphics[width=0.9\hsize]{fig11Heat.pdf}
}
\caption{Top row, left: The three marginals of the exact Green's function $R_{\alpha}^{\textbf{D}}$ of the resolvent process where $\textbf{D} = \textrm{diag}\{D_{11},0,D_{33}\}$ with parameter settings {\small $\alpha=0.025$ and $\textbf{D}=\{1,0,0.08\}$}.
right: The isotropic case of the exact Green's function $R_{\alpha}^{\textbf{D}}$ of the resolvent process with {\small $\alpha=0.025$, $\textbf{D}=\{1,0.9,1\}$}.
Bottom row: The fundamental solution $S^{\textbf{D}}$ of the resolvent process with {\small $\textbf{D}=\{1,0,0.08\}$}. The iso-contour values are indicated in the Figure.
}\label{fig:EnhancementKernel}
\end{figure}
Furthermore, we investigate the distribution of the stochastic line propagation process of the exact kernel, with periodic boundaries from $-\pi-2k\pi$ to $\pi+2k\pi$. The probability density distribution of the kernel shows that most of the random walks move within $k=2$ loops, i.e. from $-3\pi$ to $3\pi$. See Figure~\ref{fig:K0K1K2}, where it can be seen
that the series of rapidly decaying terms of method 2 can, for reasonable parameter settings, already be truncated at $N=1$ or $N=2$.
\begin{figure}[!htb]
\centerline{
\includegraphics[width=0.9\hsize]{K0K1K2.pdf}
}
\caption{Top row, left to right: Two random walks in $SE(2)=\mathbb{R}^2 \rtimes S^1$ (and their projection on $\mathbb{R}^2$) of the direction processes for the cases $k=0, 1, 2$ (where $k$ denotes the number of loops) of contour enhancement with $\mathbf{D}=\{0.5,0.,0.19\}$ (800 steps, step size $\Delta t = 0.005$). Bottom row, left to right: the intensity projection of the exact enhancement kernels corresponding to the three cases in the top row, i.e. $\theta$ ranges from $-\pi$ to $\pi$ for the $k=0$ case, from $-3\pi$ to $-\pi$ and $\pi$ to $3\pi$ for the $k=1$ case, and from $-5\pi$ to $-3\pi$ and $3\pi$ to $5\pi$ for the $k=2$ case, with {\small $\alpha=\frac{1}{40}$, $\mathbf{D}=\{0.5,0.,0.19\}$}.}
\label{fig:K0K1K2}
\end{figure}
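The random walks in the top row of Figure~\ref{fig:K0K1K2} can be reproduced with a straightforward Euler--Maruyama discretization of the Brownian motion underlying contour enhancement. The following minimal sketch (in Python; we assume the standard It\^{o} discretization of the hypo-elliptic SDE with generator $D_{11}\mathcall{A}_{1}^{2}+D_{33}\mathcall{A}_{3}^{2}$, with the parameter values taken from the caption of Figure~\ref{fig:K0K1K2}) samples one path and reads off its homotopy number $k$ from the unwrapped orientation:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
D11, D33 = 0.5, 0.19         # diffusion constants as in Figure K0K1K2
dt, nsteps = 0.005, 800      # step size and number of steps

x, y, theta = 0.0, 0.0, 0.0  # start in the unity element e = (0, 0, 0)
for _ in range(nsteps):
    # increment along the current orientation (the A_1 direction) ...
    dxi = np.sqrt(2.0 * D11 * dt) * rng.standard_normal()
    x += np.cos(theta) * dxi
    y += np.sin(theta) * dxi
    # ... and angular diffusion (the A_3 direction); theta is NOT wrapped to S^1
    theta += np.sqrt(2.0 * D33 * dt) * rng.standard_normal()

# homotopy number: k = 0 for theta in [-pi,pi], k = 1 for |theta| in [pi,3pi], ...
k = int((abs(theta) + np.pi) // (2.0 * np.pi))
print(f"final orientation {theta:.2f} rad, homotopy number k = {k}")
\end{verbatim}
Repeating this over many samples yields the Monte-Carlo simulation alluded to above, in which the paths with homotopy number $k$ contribute precisely the $k$-th term of the unwrapped series.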
In Appendix~\ref{app:A} we analyze the asymptotic behavior of the spatial Fourier transform of the kernels at the origin and at infinity. It turns out that the fundamental solutions (the case $\alpha \downarrow 0$) are the only kernels with a pole at the origin. This reflects that fundamental solutions are not $\mathbb{L}_{1}$-normalizable, in contrast to resolvent kernels and temporal kernels. Furthermore, the Fourier transform of any kernel restricted to a fixed $\theta$-layer has a rapidly decaying direction $\omega_{\eta}$ and a slowly decaying direction $\omega_{\xi}$. Therefore, we analyze the decay behavior of the spatially Fourier transformed kernels along these axes at infinity, and we deduce that all resolvent kernels and fundamental solutions have a singularity at the origin, whereas the time-dependent kernels do not suffer from such a singularity.
\subsection{Spectral Decomposition and the 3 General Forms of Exact Solutions}\label{3GeneralFormsExactSolutions}
In this section, we will derive 3 general forms of the exact solutions. To this end we note that the analysis of strongly continuous semigroups \cite{Yosida} and their resolvents starts with the analysis of the generator $Q^{\textbf{D},\textbf{a}}(\underline{\mathcall{A}})$. Symmetries of the solutions
directly follow from the symmetries of the generator. Furthermore, spectral analysis of the generator $Q^{\textbf{D},\textbf{a}}(\underline{\mathcall{A}})$ as an unbounded operator on $\mathbb{L}_{2}(SE(2))$ provides a spectral decomposition and explicit formulas for the time-dependent kernels, their resolvents and fundamental solutions, as we will see next.
First of all, the domain of the self-adjoint operator $Q^{\textbf{D},\textbf{a}}(\underline{\mathcall{A}})$ equals
\[
\begin{array}{l}
\mathcall{D}(Q^{\textbf{D},\textbf{a}}(\underline{\mathcall{A}}))=\mathbb{H}_{2}(\mathbb{R}^{2}) \otimes \mathbb{H}_{2}(S^{1}), \textrm{ with second order Sobolev space} \\
\mathbb{H}_{2}(S^{1})\equiv \{\phi \in \mathbb{H}_{2}([0,2\pi])\;|\; \phi(0)=\phi(2\pi) \textrm{ and } {\rm d}\phi(0)={\rm d}\phi(2\pi)\},
\end{array}
\]
where ${\rm d}\phi \in \mathbb{H}_{1}(S^{1})$ is the weak derivative of $\phi$ and where both Sobolev spaces $\mathbb{H}_{2}(S^{1})$ and $\mathbb{H}_{2}(\mathbb{R}^{2})$ are endowed with the $\mathbb{L}_{2}$-norm. Operator $Q^{\textbf{D},\textbf{a}}(\underline{\mathcall{A}})$ is equivalent to the corresponding operator
\[
\mathcall{B}^{\textbf{D},\textbf{a}}:=(\mathcall{F}_{\mathbb{R}^{2}} \otimes \textrm{id}_{\mathbb{L}_{2}(S^{1})}) \circ Q^{\textbf{D},\textbf{a}}(\underline{\mathcall{A}}) \circ (\mathcall{F}_{\mathbb{R}^{2}}^{-1} \otimes \textrm{id}_{\mathbb{H}_{2}(S^{1})}),
\]
where $\otimes$ denotes the tensor product in distributional sense, $\mathcall{F}_{\mathbb{R}^{2}}$ denotes the unitary Fourier transform operator on $\mathbb{L}_{2}(\mathbb{R}^{2})$ almost everywhere given by
\[
\mathcall{F}_{\mathbb{R}^{2}}f(\mbox{\boldmath$\omega$})=\hat{f}(\mbox{\boldmath$\omega$}):= \frac{1}{2\pi} \int_{\mathbb{R}^{2}} f(\textbf{x}) e^{-i\, \mbox{\boldmath$\omega$} \cdot \textbf{x}}\, {\rm d}\textbf{x},
\]
and where $\textrm{id}_{\mathbb{H}_{2}(S^{1})}$ denotes the identity map on $\mathbb{H}_{2}(S^{1})$.
This operator $\mathcall{B}^{\textbf{D},\textbf{a}}$ is given by
\[
(\mathcall{B}^{\textbf{D},\textbf{a}}\hat{U})(\mbox{\boldmath$\omega$},\theta)= (\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}\hat{U}(\mbox{\boldmath$\omega$},\cdot))(\theta),
\]
where for each fixed spatial frequency $\mbox{\boldmath$\omega$}=(\rho \cos \varphi, \rho \sin \varphi) \in \mathbb{R}^{2}$ operator $\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}: \mathbb{H}_{2}(S^{1}) \to \mathbb{L}_{2}(S^{1})$ is a mixture of multiplier operators and
weak derivative operators $d=\partial_{\theta}$:
\begin{equation}
\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}= -\sum \limits_{j=1}^{2} a_{j} m_j + \sum \limits_{k,j=1}^{2} D_{kj} m_k m_j
+\Big(-a_{3} + 2\sum \limits_{j=1}^{2} D_{j3} m_j\Big)\, d + D_{33}\, d^2,
\end{equation}
with multipliers $m_{1}=i \rho \cos (\varphi - \theta)$ and $m_{2}= -i \rho \sin(\varphi - \theta)$ corresponding to respectively $\partial_{\xi}=\cos \theta \partial_{x} +\sin \theta \partial_{y}$ and $\partial_{\eta}=-\sin \theta \partial_{x} +\cos \theta \partial_{y}$.
By straightforward trigonometric relations it follows that for each $\mbox{\boldmath$\omega$} \in \mathbb{R}^{2}$ the operator
$\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}$ boils down to a 2nd order Mathieu-type operator (i.e. an operator of the type $\frac{d^2}{dz^2}-2q\cos(2z)+a $).
In case of the contour enhancement we have
\[\label{ContourEnhancementMathieuOperator}
\begin{array}{l}
\left(\textbf{a}=\textbf{0} \textrm{ and }\textbf{D}=\textrm{diag}\{D_{11},D_{22},D_{33}\}\textrm{ and } D_{11},D_{22} \geq 0, D_{33}> 0 \right) \Rightarrow \\
\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}= - D_{11} \rho^2 \cos^{2}(\varphi - \theta) - D_{22} \rho^{2}\sin^{2}(\varphi - \theta)+
D_{33} \partial_{\theta}^2.
\end{array}
\]
In case of the contour completion we have
\[
(\textbf{a}=(1,0,0) \textrm{ and }D_{33}>0 ) \Rightarrow
\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}= - i\rho \cos(\varphi - \theta) + D_{33} \partial_{\theta}^2.
\]
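To make the reduction to Mathieu form explicit in the contour completion case, substitute $y(w)=F(\theta)$ with $w=\frac{\varphi-\theta}{2}$ (so that $\partial_{\theta}^{2}=\frac{1}{4}\partial_{w}^{2}$) into the nullspace equation $(-\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}+\alpha I)F=0$:
\[
D_{33}F''(\theta) - i\rho\cos(\varphi-\theta)F(\theta) = \alpha F(\theta)
\;\Longleftrightarrow\;
y''(w) - \frac{4 i \rho}{D_{33}}\cos(2w)\, y(w) = \frac{4\alpha}{D_{33}}\, y(w),
\]
which is of the Mathieu type mentioned above, with $q_{\rho}=\frac{2\rho i}{D_{33}}$ and $a_{\rho}=-\frac{4\alpha}{D_{33}}$, consistent with the settings for the contour completion case given further below.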
Operator $\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}$ satisfies
\[
(\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}})^* \Theta = \overline{\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}\overline{\Theta}},
\]
and moreover it admits a right-inverse kernel operator
$K:\mathbb{L}_{2}(S^{1}) \to \mathbb{H}_{2}(S^{1})$ given by
\begin{equation}\label{relconj}
Kf(\theta) = \int \limits_{S^{1}} k(\theta,\nu) f(\nu) {\rm d}\nu,
\end{equation}
with a kernel satisfying $k(\theta,\nu)=k(\nu,\theta)$ (without conjugation). This kernel $k$
relates to the fundamental solution of operator $\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}$:
\[
\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}} \hat{S}^{\textbf{D},\textbf{a}}(\mbox{\boldmath$\omega$},\cdot) =\delta^{\theta}_{0},
\textrm{ for all }\mbox{\boldmath$\omega$}=(\rho \cos\varphi, \rho \sin \varphi) \in \mathbb{R}^{2},
\]
where the fundamental solution $S^{\textbf{D},\textbf{a}}$ is infinitely differentiable on $SE(2)\setminus \{e\}$. By left-invariance of our generator $Q^{\textbf{D},\textbf{a}}(\underline{\mathcall{A}})$, we
have
\[
k(\theta,\nu)= \hat{S}^{\textbf{D},\textbf{a}}(\rho \cos(\varphi-\theta), \rho \sin (\varphi-\theta),\nu-\theta),
\]
where $\hat{S}^{\textbf{D},\textbf{a}}(\mbox{\boldmath$\omega$},\theta)$ denotes the spatial Fourier transform of the fundamental solution $S^{\textbf{D},\textbf{a}}:SE(2) \setminus \{e\} \to \mathbb{R}^{+}$. Now that we have analyzed the generator of our PDE evolutions, we summarize 3 exact approaches describing the kernels of the PDE's of interest.
\subsubsection*{Exact Approach 1}
Kernel operator $K$ given by Eq.~(\ref{relconj}) is compact and its kernel satisfies $k(\theta,\nu) = k(\nu,\theta)$, and thereby it has a complete bi-orthonormal basis of eigenfunctions $\{\Theta_{n}\}_{n \in \mathbb{Z}}$:
\[
\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}} \Theta_{n}^{\mbox{\boldmath$\omega$}}= \lambda_{n} \Theta_{n}^{\mbox{\boldmath$\omega$}} \textrm{ and } K \Theta_{n}^{\mbox{\boldmath$\omega$}} =\lambda_{n}^{-1} \Theta_{n}^{\mbox{\boldmath$\omega$}}, \textrm{ with }0\geq \lambda_{n} \to -\infty.
\]
As operator $\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}$ is a Mathieu type of operator these eigenfunctions $\Theta_{n}$ can be expressed in periodic Mathieu functions, and the corresponding
eigenvalues can be expressed in Mathieu characteristics as we will explicitly see in the subsequent subsections for both the contour-enhancement and contour-completion cases.
The resulting solutions of our first approach are
\begin{equation} \label{sols1}
\boxed{
\begin{array}{l}
W(x,y,\theta,s)= [\mathcall{F}^{-1}_{\mathbb{R}^{2}}\hat{W}(\cdot,\theta,s)](x,y) \textrm{ with }
\hat{W}(\mbox{\boldmath$\omega$},\theta,s)= \sum \limits_{n \in \mathbb{Z}} e^{s \lambda_{n}} (\hat{U}(\mbox{\boldmath$\omega$},\cdot),\overline{\Theta_{n}^{\mbox{\boldmath$\omega$}}})\Theta_{n}^{\mbox{\boldmath$\omega$}}(\theta), \\
P_{\alpha}(x,y,\theta)= [\mathcall{F}^{-1}_{\mathbb{R}^{2}}\hat{P}_{\alpha}(\cdot,\theta)](x,y) \textrm{ with }
\hat{P}_{\alpha}(\mbox{\boldmath$\omega$},\theta)= \alpha \sum \limits_{n \in \mathbb{Z}} \frac{1}{\alpha -\lambda_n} (\hat{U}(\mbox{\boldmath$\omega$},\cdot),\overline{\Theta_{n}^{\mbox{\boldmath$\omega$}}}) \Theta_{n}^{\mbox{\boldmath$\omega$}}(\theta), \\
\hat{R}_{\alpha}^{\textbf{D},\textbf{a}}(\textbf{\mbox{\boldmath$\omega$}},\theta)= \frac{\alpha}{2\pi} \sum \limits_{n \in \mathbb{Z}} \frac{1}{\alpha-\lambda_n} \overline{\Theta_{n}^{\mbox{\boldmath$\omega$}}(\theta)}\, \Theta_{n}^{\mbox{\boldmath$\omega$}}(0),\\
S^{\textbf{D},\textbf{a}}(x,y,\theta)= [\mathcall{F}^{-1}_{\mathbb{R}^{2}}\hat{S}^{\textbf{D},\textbf{a}}(\cdot,\theta)](x,y) \textrm{ with }
\hat{S}^{\textbf{D},\textbf{a}}(\mbox{\boldmath$\omega$},\theta)=-\frac{1}{2\pi} \sum \limits_{n \in \mathbb{N}} \frac{1}{\lambda_n} \overline{\Theta_{n}^{\mbox{\boldmath$\omega$}}(\theta)}\, \Theta_{n}^{\mbox{\boldmath$\omega$}}(0).
\end{array}
}
\end{equation}
\begin{remark}
If $\textbf{a}=\textbf{0}$ then $(\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}})^*=(\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}})$ and $\overline{\Theta_{n}^{\mbox{\boldmath$\omega$}}}=\Theta_{n}^{\mbox{\boldmath$\omega$}}$ and the $\{\Theta_{n}^{\mbox{\boldmath$\omega$}}\}$ form an orthonormal basis for $\mathbb{L}_{2}(S^{1})$ for each fixed $\mbox{\boldmath$\omega$} \in \mathbb{R}^{2}$.
\end{remark}
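For the contour enhancement case ($\textbf{a}=\textbf{0}$), where $\mathcall{B}^{\textbf{D},\textbf{0}}_{\mbox{\boldmath$\omega$}}$ is self-adjoint, the spectral decomposition behind Eq.~(\ref{sols1}) is easy to verify numerically. The following minimal sketch (in Python; it assumes SciPy's \texttt{mathieu\_a}/\texttt{mathieu\_b} routines for the Mathieu characteristics, a plain truncation of the operator on the Fourier basis $\{e^{il\theta}\}$, and illustrative parameter values) checks that the eigenvalues of the truncated operator match $\lambda_{n}$ expressed in Mathieu characteristics, where the $2\pi$-periodic spectrum consists of the characteristics $a_{m}$ of the even and $b_{m}$ of the odd periodic Mathieu functions:
\begin{verbatim}
import numpy as np
from scipy.special import mathieu_a, mathieu_b

# Contour enhancement at a fixed spatial frequency (phi = 0); illustrative values.
D11, D22, D33, rho = 1.0, 0.0, 0.1, 1.0
q = rho**2 * (D11 - D22) / (4.0 * D33)

# Truncate B^{D,0}_omega on the Fourier basis e^{il theta}, |l| <= L:
# diagonal -(D11+D22) rho^2/2 - D33 l^2, couplings l <-> l+-2 equal to -q D33.
L = 60
l = np.arange(-L, L + 1, dtype=float)
B = np.diag(-(D11 + D22) * rho**2 / 2.0 - D33 * l**2)
off = -q * D33 * np.ones(2 * L - 1)
B += np.diag(off, 2) + np.diag(off, -2)
eigs = np.sort(np.linalg.eigvalsh(B))[::-1]          # least negative first

# Exact eigenvalues from the Mathieu characteristics a_m (ce_m) and b_m (se_m).
chars = np.sort([mathieu_a(m, q) for m in range(8)]
                + [mathieu_b(m, q) for m in range(1, 8)])
lam = np.sort(-D33 * chars - rho**2 * (D11 + D22) / 2.0)[::-1]

print(np.allclose(eigs[:10], lam[:10], atol=1e-7))   # -> True
\end{verbatim}
The same truncated matrix reappears in the Fourier based technique discussed further on, which is precisely why its spectral decomposition converges to the exact solutions.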
\subsubsection*{Exact Approach 2}
The problem with the solutions (\ref{sols1}) is that these Fourier series representations converge slowly for small $s>0$.
Therefore, in the second approach we unfold the circle and, for the moment, replace the $2\pi$-periodic boundary condition in $\theta$ by absorbing boundary conditions at infinity,
and we consider the auxiliary problem of finding $\hat{R}^{\textbf{D},\textbf{a},\infty}_{\alpha}: \mathbb{R}^{2} \times \mathbb{R} \setminus \{e\} \to \mathbb{R}^{+}$, such that
\begin{equation} \label{unfoldeqs}
\begin{array}{l}
(-Q^{\textbf{D},\textbf{a}}+\alpha I) R^{\textbf{D},\textbf{a},\infty}_{\alpha} =\alpha \delta_{0}^{x} \otimes \delta_{0}^{y} \otimes \delta_{0}^{\theta}, \\
R^{\textbf{D},\textbf{a},\infty}_{\alpha}(\cdot, \theta) \to 0 \textrm{ as }|\theta| \to \infty.
\end{array}
\Leftrightarrow \forall_{\mbox{\boldmath$\omega$} \in \mathbb{R}^{2}}\;:\:
\left\{
\begin{array}{l}
(-\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}+\alpha I) \hat{R}^{\textbf{D},\textbf{a},\infty}_{\alpha}(\mbox{\boldmath$\omega$}, \cdot) =\alpha \, \frac{1}{2\pi}\, \delta_{0}^{\theta}, \\
\hat{R}^{\textbf{D},\textbf{a},\infty}_{\alpha}(\mbox{\boldmath$\omega$}, \theta) \to 0 \textrm{ as }|\theta| \to \infty.
\end{array}
\right.
\end{equation}
The spatial Fourier transform of the corresponding fundamental solution again follows by taking the limit $\alpha \downarrow 0$: $\hat{S}^{\infty}:=\lim \limits_{\alpha \downarrow 0}\alpha^{-1}\hat{R}^{\textbf{D},\textbf{a},\infty}_{\alpha}$. Now the solution of (\ref{unfoldeqs}) is given by
\begin{equation}\label{genform}
\boxed{
\hat{R}^{\textbf{D},\textbf{a},\infty}_{\alpha}(\mbox{\boldmath$\omega$}, \theta)=
\frac{\alpha}{ 2\pi D_{33}\, W_{\rho}}
\left\{
\begin{array}{l}
G_{\rho}(\varphi)F_{\rho}(\varphi-\theta), \textrm{ for }\theta \geq 0, \\
F_{\rho}(\varphi)G_{\rho}(\varphi-\theta), \textrm{ for }\theta \leq 0,
\end{array}
\right.
\textrm{ for all }
\mbox{\boldmath$\omega$}=(\rho \cos \varphi, \rho \sin\varphi)}
\end{equation}
where $\theta \mapsto F_{\rho}(\varphi-\theta)$ is the unique solution in the nullspace of operator $-\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}+\alpha I$ satisfying $F_{\rho}(\theta) \to 0$ for $\theta \to +\infty$,
and where $G_{\rho}$ is the unique solution in the nullspace of operator $-\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}+\alpha I$ satisfying $G_{\rho}(\theta) \to 0$ for $\theta \to -\infty$.
The Wronskian of $F_{\rho}$ and $G_{\rho}$ is given by
\begin{equation}\label{WronskianComputation}
W_{\rho}=F_{\rho}G_{\rho}'-G_{\rho}F_{\rho}'=F_{\rho}(0)\,G_{\rho}'(0)-G_{\rho}(0)\,F_{\rho}'(0).
\end{equation}
See Figure~\ref{fig:ContinuousFit}.
We conclude with the periodized solutions
\begin{equation} \label{periodized}
\boxed{
\begin{array}{l}
R_{\alpha}^{\textbf{D},\textbf{a}}(x,y,\theta)= [\mathcall{F}^{-1}_{\mathbb{R}^{2}}\hat{R}_{\alpha}^{\textbf{D},\textbf{a}}(\cdot,\theta)](x,y) \textrm{ with }
\hat{R}_{\alpha}^{\textbf{D},\textbf{a}}(\mbox{\boldmath$\omega$},\theta)
= \sum \limits_{n \in \mathbb{Z}} \hat{R}_{\alpha}^{\textbf{D},\textbf{a},\infty}(\mbox{\boldmath$\omega$},\theta+2n \pi) , \\
S^{\textbf{D},\textbf{a}}(x,y,\theta)= [\mathcall{F}^{-1}_{\mathbb{R}^{2}}\hat{S}^{\textbf{D},\textbf{a}}(\cdot,\theta)](x,y) \textrm{ with }
\hat{S}^{\textbf{D},\textbf{a}}(\mbox{\boldmath$\omega$},\theta)=\sum \limits_{n \in \mathbb{Z}}
\hat{S}^{\textbf{D},\textbf{a},\infty}(\mbox{\boldmath$\omega$},\theta+2n \pi).
\end{array}
}
\end{equation}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\hsize]{ContinuousFit.pdf}
\caption{Illustration of the continuous fit of $\theta \mapsto \hat{R}_{\alpha}^{\textbf{D},\textbf{0},\infty}(\mbox{\boldmath$\omega$},\theta)$ in Eq.~(\ref{genform}) for contour enhancement with parameter settings
$D_{11}=1, D_{22}=0, D_{33}=0.05$ and $\alpha=\frac{1}{20}$, at $(\omega_x, \omega_y)=(\frac{\pi}{20},\frac{\pi}{20})$.}
\label{fig:ContinuousFit}
\end{figure}
For further details see \cite{DuitsAlmsick2008,DuitsCASA2005,DuitsCASA2007,DuitsAMS1,MarkusThesis}. Here we omit the details on these explicit solutions for the general case, as the proof is fully equivalent to \cite[Lemma~4.4 \& Thm~4.5]{DuitsAlmsick2008}, and moreover the techniques generalize directly from standard Sturm-Liouville theory.
\subsubsection*{Exact Approach 3}
In the third approach, where for simplicity we restrict ourselves to the cases of contour enhancement and contour completion, we apply the well-known Floquet theorem to the second order ODE
\begin{equation}\label{Mathieu}
(-\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}+\alpha I)F(\theta)=0 \Leftrightarrow
y''(w) -2 q_{\rho} \cos(2w)\, y(w) = -a_{\rho}\, y(w), \qquad y(w)=F(\theta),\ w=\frac{\varphi-\theta}{\mu},
\end{equation}
with $\mu \in \{1,2\}$. For the precise settings/formulas of $a_{\rho}$, $q_{\rho}$ and $\mu$ in the cases of contour enhancement and contour completion we refer to the next subsections.
Note that in both the contour enhancement and the contour completion case we have the Mathieu functions (following the conventions of \cite{AbraMathieu,Schaefke}) with
\begin{equation} \label{MatheiuEllipticFunctions}
\boxed{
\begin{array}{l}
\textrm{me}_\nu(z;q_\rho)=\textrm{ce}_\nu(z;q_\rho)+i\textrm{se}_\nu(z;q_\rho)\\
\textrm{me}_{-\nu}(z;q_\rho)=\textrm{ce}_\nu(z;q_\rho)-i\textrm{se}_\nu(z;q_\rho)
\end{array},
}
\end{equation}
where $z= \varphi-\theta$, $\nu=\nu(a_{\rho},q_{\rho})$, and where $\textrm{ce}_\nu(z;q_\rho)$ denotes the cosine-elliptic function and $\textrm{se}_\nu(z;q_\rho)$ the sine-elliptic function, given by
\begin{equation*} \label{CosineSineEllipticFunctions}
\begin{array}{l}
\textrm{ce}_\nu(z;q_\rho)=\sum \limits_{r=-\infty}^{\infty} \textrm{c}_{2r}^\nu(q_\rho)\cos{((\nu+2r)z)}\; \textrm{with}\; \textrm{ce}_\nu(z;0)=\cos{\nu z},\\
\textrm{se}_{\nu}(z;q_\rho)=\sum \limits_{r=-\infty}^{\infty} \textrm{c}_{2r}^\nu(q_\rho)\sin{((\nu+2r)z)}\; \textrm{with}\; \textrm{se}_\nu(z;0)=\sin{\nu z}.
\end{array}
\end{equation*}
For details see \cite{Schaefke}.
Then, we have \[
F_{\rho}(z)=\textrm{me}_{-\nu}(z/\mu ,q_{\rho}),\;
G_{\rho}(z)=\textrm{me}_{\nu}(z/\mu ,q_{\rho}),
\]
with $\mu=1$ in the contour enhancement case and $\mu=2$ in the contour completion case. Furthermore, $a_{\rho}$ denotes the Mathieu characteristic, $q_{\rho}$
the Mathieu coefficient, and $\nu=\nu(a_{\rho},q_{\rho})$ the purely imaginary Floquet exponent (with $i \nu <0$)
with respect to the Mathieu ODE (\ref{Mathieu}), whose general form is
\[
y''(z)- 2q \cos(2z) y(z)= -a y(z).
\]
Application of this theorem to the solutions $F_{\rho}$ and $G_{\rho}$ in Eq.~(\ref{genform}) yields
\begin{equation} \label{geom}
F_{\rho}\left(z -2n \pi\right)=e^{\frac{2n \pi \,i\, \nu}{\mu}} F_{\rho}\left(z\right) \textrm{ and }G_{\rho}\left(z -2n \pi\right)=e^{-\frac{2n \pi\, i \,\nu}{\mu}} G_{\rho}\left(z\right), \qquad z=\varphi-\theta.
\end{equation}
Substitution of (\ref{geom}) into (\ref{periodized}) together with the geometric series
\[
\sum \limits_{n=0}^{\infty} \left(e^{2 \nu \pi i/\mu} \right)^{n}=\frac{1}{1-e^{2 i\nu \pi/\mu}} \textrm{ and } \frac{1+e^{2i\nu \pi/\mu}}{1-
e^{2i \nu \pi/\mu}}= -\textrm{coth}\,(i \nu \pi/\mu)= i\cot(\nu \pi/\mu),
\]
with Floquet exponent $\nu=\nu(a_{\rho},q_{\rho}), \ \textrm{Im}(\nu)>0$,
yields
the following closed form solution expressed in 4 Mathieu functions:
{\small
\begin{equation} \label{sols3}
\boxed{
\begin{array}{rcl}
\hat{R}_{\alpha}^{\textbf{D},\textbf{a}}(\mbox{\boldmath$\omega$},\theta) &=& \frac{\alpha}{D_{33}\, i \, W_{\rho}}
\Big[ \Big(\!-\cot(\frac{\nu \pi}{\mu})\big(\textrm{ce}_{\nu}(\frac{\varphi}{\mu},q_{\rho})\, \textrm{ce}_{\nu}(\frac{\varphi-\theta}{\mu}, q_{\rho})+
\textrm{se}_{\nu}(\frac{\varphi}{\mu},q_{\rho})\, \textrm{se}_{\nu}(\frac{\varphi-\theta}{\mu},q_{\rho})\big) \\
& & \qquad +\,
\textrm{ce}_{\nu}(\frac{\varphi}{\mu},q_{\rho})\, \textrm{se}_{\nu}(\frac{\varphi-\theta}{\mu},q_{\rho})-
\textrm{se}_{\nu}(\frac{\varphi}{\mu},q_{\rho})\, \textrm{ce}_{\nu}(\frac{\varphi-\theta}{\mu},q_{\rho})\Big)\,{\rm u}(\theta) \\
& & +\,\Big(\!-\cot(\frac{\nu \pi}{\mu})\big(\textrm{ce}_{\nu}(\frac{\varphi}{\mu},q_{\rho})\, \textrm{ce}_{\nu}(\frac{\varphi-\theta}{\mu},q_{\rho})-
\textrm{se}_{\nu}(\frac{\varphi}{\mu},q_{\rho})\, \textrm{se}_{\nu}(\frac{\varphi-\theta}{\mu},q_{\rho})\big) \\
& & \qquad +\,
\textrm{ce}_{\nu}(\frac{\varphi}{\mu},q_{\rho})\, \textrm{se}_{\nu}(\frac{\varphi-\theta}{\mu},q_{\rho})+
\textrm{se}_{\nu}(\frac{\varphi}{\mu},q_{\rho})\, \textrm{ce}_{\nu}(\frac{\varphi-\theta}{\mu},q_{\rho})\Big)\,{\rm u}(-\theta)\Big]
\end{array}
}
\end{equation}
}
with Floquet exponent $\nu=\nu(a_{\rho},q_{\rho})$ and where $\theta \mapsto {\rm u}(\theta)$ denotes the unit step function.
Next we will summarize the main results, before we consider the special cases of the contour enhancement and the contour completion.
\begin{theorem}\label{th:exact}
The exact solutions of all linear left-invariant (convection-)diffusions on $SE(2)$, their resolvents, and their fundamental solutions, given by
\[
W(g,t)=(K_{t}^{\textbf{D},\textbf{a}} *_{SE(2)} U)(g), \ \ P_{\alpha}(g)= (R_{\alpha}^{\textbf{D},\textbf{a}} *_{SE(2)} U)(g), \ \
S^{\textbf{D},\textbf{a}}= (Q^{\textbf{D},\textbf{a}}(\underline{\mathcall{A}}))^{-1} \delta_{e},
\]
admit three types of exact representations. The first type is a series expressed in periodic Mathieu functions, given by Eq.~(\ref{sols1}).
The second type is a rapidly decaying series involving non-periodic Mathieu functions, given by Eq.~(\ref{genform}) together with Eq.~(\ref{periodized}),
and the third type involves only four non-periodic Mathieu functions and is given by Eq.~(\ref{sols3}).
\end{theorem}
\subsubsection{The Contour Enhancement Case}
In case $\textbf{D}=\textrm{diag}\{D_{11},D_{22}, D_{33}\}$ with $D_{11},D_{33}>0$ and $D_{22}\geq 0$ and $\textbf{a}=\textbf{0}$,
the settings in the solution formula of the first approach Eq.\!~(\ref{sols1}) are
\begin{equation} \label{efenh}
\begin{array}{l}
\Theta_{n}(\theta)= \frac{\textrm{me}_{n}(\varphi-\theta,q_{\rho})}{\sqrt{2\pi}},\;
q_{\rho}=\frac{\rho^2 (D_{11}-D_{22})}{4 D_{33}},\;
\lambda_{n}=-a_{n}(q_{\rho}) D_{33} - \frac{\rho^{2}(D_{11}+D_{22})}{2},
\end{array}
\end{equation}
where $\textrm{me}_{n}(z,q)$ denotes the periodic Mathieu function with parameter $q$ and $a_{n}(q)$ the corresponding Mathieu characteristic,
and with Floquet exponent $\nu = n \in \mathbb{Z}$.
The settings of the solution formula of the second approach
Eq.\!~(\ref{Mathieu}) together with Eq.\!~(\ref{periodized}) are
\begin{equation} \label{aqenh}
\begin{array}{l}
a_{\rho}=\frac{-\alpha -\frac{\rho^2}{2}(D_{11}+D_{22})}{D_{33}}, \
q_{\rho}=\frac{\rho^2 (D_{11}-D_{22})}{4 D_{33}}, \
\mu = 1, \ W_{\rho}=-2i \, \textrm{se}_{\nu}'(0,q_{\rho})\textrm{ce}_{\nu}(0,q_{\rho}),
\end{array}
\end{equation}
where $\textrm{se}_\nu'(0,q_\rho)=\left.\frac{d}{dz}\textrm{se}_\nu(z,q_\rho)\right|_{z=0}$.
The third approach Eq.\!~(\ref{sols3}) yields for $D_{11}>D_{22}$ the result
in \cite[Thm 5.3]{DuitsAMS1}.
\begin{remark}
As the generator $Q^{\textbf{D},\textbf{0}}(\underline{\mathcall{A}})=D_{11}\mathcall{A}_{1}^{2} + D_{33}\mathcall{A}_{3}^{2}$ is invariant under the reflection
$\mathcall{A}_{3} \mapsto -\mathcall{A}_{3}$, our real-valued kernels satisfy $K(x,y,\theta)=K(-x,-y,\theta)$. As a result, the spatially Fourier transformed enhancement kernels
$\hat{K}_{t}^{\textbf{D}}(\mbox{\boldmath$\omega$},\theta)$, $\hat{R}_{\alpha}^{\textbf{D}}(\mbox{\boldmath$\omega$},\theta)$, $\hat{S}^{\textbf{D}}(\mbox{\boldmath$\omega$},\theta)$ are real-valued.
This is indeed the case in e.g. Eq.\!~(\ref{genform}) and Eq.\!~(\ref{sols3}), as for $q,z \in \mathbb{R}$ and $\overline{\nu}=-\nu$ we have
$\overline{\textrm{me}_{\nu}(z,q)}=
\textrm{me}_{\overline{\nu}}(-\overline{z},\overline{q})=\textrm{me}_{\nu}(z,q)$,
so that
$\textrm{se}_{\nu}(z,q) \in i\mathbb{R}$ and $\textrm{ce}_{\nu}(z,q) \in \mathbb{R}$.
\end{remark}
\subsubsection{The Contour Completion Case}
In case $\textbf{D}=\textrm{diag}\{0,0, D_{33}\}$ with $D_{33}>0$ and $\textbf{a}=(1,0,0)$,
the settings in the solution formula of the first approach Eq.\!~(\ref{sols1}) are
\begin{equation} \label{efcom}
\begin{array}{l}
\Theta_{n}(\theta)= \frac{\textrm{ce}_{n}\left(\frac{\varphi-\theta}{2},q_{\rho}\right)}{\sqrt{\pi}} , n \in \mathbb{N}\cup \{0\}, \qquad
\lambda_{n}=-\frac{a_{n}(q_{\rho})D_{33}}{4}, \ \ q_{\rho}=\frac{2\rho i}{D_{33}},
\end{array}
\end{equation}
where $\textrm{ce}_{n}$ denotes the even periodic Mathieu-function
with Floquet exponent $n$.
The settings of the solution formula of the second approach
Eq.\!~(\ref{Mathieu}) together with Eq.\!~(\ref{periodized}) are
\begin{equation} \label{aqcom}
\begin{array}{l}
a_{\rho}= -\frac{4\alpha}{D_{33}}, \
q_{\rho}= \frac{2\rho i}{D_{33}}, \
\mu = 2, \ W_{\rho}= - i \, \textrm{se}_{\nu}'(0,q_{\rho})\textrm{ce}_{\nu}(0,q_{\rho}).
\end{array}
\end{equation}
See Figure~\ref{fig:ExactCompletionKernel} for plots of completion kernels.
\begin{figure}[!htb]
\centerline{
\includegraphics[width=0.9\hsize]{fig10FP.pdf}
}
\caption{The marginals of the exact Green's functions for contour completion. All the figures have the same settings: $\sigma=0.4$, $\textbf{D}=\{0,0,0.08\}$ and $\textbf{a}=(1,0,0)$. Top row, left: The resolvent process with {\small $\alpha=0.1$},
right: The resolvent process with {\small $\alpha=0.01$}.
Bottom row: The fundamental solution, i.e. the limiting case {\small $\alpha \downarrow 0$}. The iso-contour values are indicated in the figure.
}\label{fig:ExactCompletionKernel}
\end{figure}
\subsubsection{Overview of the Relation of Exact Solutions to Numerical Implementation Schemes}
Theorem~\ref{th:exact} provides three types of exact solutions for our PDE's of interest, and the question arises how these exact solutions
relate to the common numerical approaches to these PDE's.
The solutions of the first type relate to $SE(2)$-Fourier and finite-element-type techniques (but then using a Fourier basis), as we will show for the general case in Section~\ref{section:Duitsmatrixalgorithm}. The general idea is that if the dimension of the square band matrices (where the bandsize is at most $5$) tends to infinity, the exact solutions arise in the spectral decomposition of the numerical matrices.
To compare the second and third type of exact solutions to the numerics, we must sample the solutions involving non-periodic Mathieu functions
in the Fourier domain. Unfortunately, as also reported by Boscain et al. \cite{Boscain2}, well-tested and complete publicly available packages for Mathieu-function evaluation
are not easy to find. The routines for Mathieu-function evaluation in \emph{Mathematica 7,8,9} at least show proper results for specific parameter settings. However,
in the case of contour enhancement their evaluations numerically break down for the interesting cases $D_{11}=1$ and $D_{33}<0.2$, see Figure~\ref{fig:MathieuImplementationComparison} in Appendix \ref{app:B}. Therefore, in Appendix~\ref{app:B}, we provide
our own algorithm for Mathieu-function evaluation relying on standard theory of continued fractions \cite{ContinuedFractions}.
This allows us to sample the exact solutions in the Fourier domain for comparisons. Still, there are two issues left that we address in the next section:
1. one needs to analyze the errors that arise by replacing $\textbf{CFT}^{-1}$ (the inverse of the Continuous Fourier
Transform) by $\textbf{DFT}^{-1}$ (the inverse of the Discrete Fourier Transform); 2. one needs to deal with singularities at the origin.
\subsubsection{The Direct Relation of Fourier Based Techniques to the Exact Solutions}\label{section:FourierBasedForEnhancement}
In \cite{DuitsAlmsick2008} we have related matrix-inversion in Eq.~(\ref{MatrixInverse}) to the exact solutions for the contour completion case. Next we follow a similar approach for the contour enhancement case with ($D_{22}=0$, i.e. hypo-elliptic diffusion), where again we relate diagonalization of the five-band matrix to the exact solutions.
\begin{theorem}\label{th:RelationofFourierBasedWithExactSolution}
Let $\pmb{\omega}=(\rho\cos\varphi, \rho\sin\varphi) \in \mathbb{R}^2$ be fixed. In case of contour enhancement with $\textbf{D}=\textrm{diag}\{D_{11},0,D_{33}\}$ and $\textbf{a}= \mathbf{0}$, the solution of the matrix system (\ref{5recursion}), for $N \rightarrow \infty$, can be written as\\
{\small
\begin{equation}
\boxed{
\hat{P}=S \Lambda^{-1} S^T \hat{\mathbf{u}}}
\end{equation}
}
with
\begin{equation}\label{recursionParameters}
\begin{array}{l}
\hat{P}=\{\tilde{P}^\ell(\rho)\}_{\ell \in \mathbb{Z}},\quad \hat{\mathbf{u}}=\{\tilde{u}^\ell(\rho)\}_{\ell \in \mathbb{Z}}, \quad S=[S_n^\ell]=[c_\ell^n(q_\rho)],\\
\Lambda=\textrm{diag}\{\alpha-\lambda_n({\rho})\},\quad \lambda_n(\rho)=-a_{2n}(q_\rho)D_{33}-\frac{\rho^2 D_{11}}{2}, \quad q_\rho=\frac{\rho^2D_{11}}{4 D_{33}},\\
\end{array}
\end{equation}
and where
\begin{align*}
c_\ell^n=
\left\{
\begin{array}{ll}
\textrm{Mathieu coefficient } c_\ell^n(q_\rho), & \textrm{if } \ell \textrm{ is even},\\
0, & \textrm{if } \ell \textrm{ is odd}.
\end{array}
\right.
\end{align*}
In fact Eq.~(\ref{5recursion}), for $N \rightarrow \infty$, boils down to a steerable $SE(2)$ convolution \cite{FrankenPhDThesis} with the corresponding exact kernel $R_\alpha^{\textbf{D},\textbf{a}}:SE(2)\rightarrow \mathbb{R}^+$.
\end{theorem}
\begin{proof}
Both $\{\frac{1}{\sqrt{2\pi}}e^{i\ell(\varphi-\theta)}\,|\, \ell \in \mathbb{Z}\}$
and $\{\Theta_n^{\pmb{\omega}}(\theta):=\frac{me_n(\varphi-\theta,q_\rho)}{\sqrt{2\pi}}\,|\,n \in \mathbb{Z}\}$
form orthonormal bases of $\mathbb{L}_2({S^1})$. The corresponding basis transformation is given by $S$. As this basis transformation is unitary, we have $S^{-1}=S^\dagger=\bar{S}^T$. As a result we have
\begin{equation}
\begin{aligned}
\tilde{P}^\ell(\rho) = \sum_{m,n,p \in \mathbb{Z}} S_m^\ell \delta_n^m \frac{1}{\alpha-\lambda_n(\rho)} (S^\dagger)_p^n\tilde{u}^p(\rho) =\sum_{n \in \mathbb{Z}}\sum_{p \in \mathbb{Z}} \frac{c_\ell^n(q_\rho)c_p^n(q_\rho)\tilde{u}^p(\rho)}{\alpha-\lambda_n(\rho)}.
\end{aligned}
\end{equation}
Thereby, as $me_n(z)=\sum_{\ell \in \mathbb{Z}}c_\ell^n(q_\rho)e^{i \ell z}$, we have:
\begin{equation}
\begin{aligned}
\hat{P}_\alpha(\pmb{\omega},\theta)
=\alpha \sum_{\ell \in \mathbb{Z}} \tilde{P}^\ell(\rho)e^{i \ell (\varphi - \theta)}
=\alpha \sum_{n \in \mathbb{Z}}\sum_{p \in \mathbb{Z}}\frac{me_n(\varphi - \theta, q_\rho)c_p^n(q_\rho)e^{ip\varphi} \hat{u}^p(\rho)}{\alpha-\lambda_n(\rho)},
\end{aligned}
\end{equation}
where we recall $\tilde{u}^p=e^{ip\varphi}\hat{u}^p$.
Now, setting $u=\delta_e \Leftrightarrow \hat{u}(\pmb{\omega},\theta)=\frac{1}{2\pi}\delta_0^\theta \Leftrightarrow \hat{u}^p=\frac{1}{2\pi} \textrm{ for all } p \in \mathbb{Z}$,
we obtain the exact kernel
\begin{equation}
\hat{R}_\alpha^{\textbf{D},\textbf{a}}(\pmb{\omega},\theta)=\frac{\alpha}{2\pi}\sum_{n \in \mathbb{Z}}\frac{\Theta_n^{\pmb{\omega}}(\theta)\Theta_n^{\pmb{\omega}}(0)}{\alpha-\lambda_n(\rho)},
\end{equation}
from which the result follows. $\hfill \Box$
\end{proof}
\textbf{Conclusion:} This theorem supports our numerical findings that will follow in Section \ref{section:Experimental results}. The small relative errors are due to the rapid convergence $\frac{1}{\alpha-\lambda_n(\rho)} \rightarrow 0$ as $n\rightarrow \infty$, so that truncation of the 5-band matrix produces very small \emph{uniform} errors compared to the exact solutions. It is therefore not surprising that the Fourier based techniques outperform the finite difference solutions in terms of numerical approximation (see the experiments in Section~\ref{section:Experimental results}).
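The following minimal sketch (in Python; with $\textbf{a}=\textbf{0}$ and $D_{22}=0$ only the bands $0,\pm 2$ of the five-band matrix are nonzero, and the parameter values are illustrative) demonstrates this uniform convergence under truncation: the low-order Fourier coefficients of the resolvent stabilize rapidly as the truncation size $L$ grows.
\begin{verbatim}
import numpy as np

def resolvent_coeffs(L, rho=2.0, alpha=0.05, D11=1.0, D33=0.08):
    """Solve the truncated system (alpha I - B) p = alpha u at fixed |omega| = rho."""
    l = np.arange(-L, L + 1, dtype=float)
    q = rho**2 * D11 / (4.0 * D33)
    # Truncated matrix of B on the Fourier basis: bands 0 and +-2 only.
    B = np.diag(-D11 * rho**2 / 2.0 - D33 * l**2)
    off = -q * D33 * np.ones(2 * L - 1)
    B += np.diag(off, 2) + np.diag(off, -2)
    u = np.full(2 * L + 1, 1.0 / (2.0 * np.pi))  # coefficients of a delta-spike in theta
    p = alpha * np.linalg.solve(alpha * np.eye(2 * L + 1) - B, u)
    return p[L:L + 4]                            # coefficients l = 0,...,3

for L in (8, 16, 32, 64):
    print(L, resolvent_coeffs(L))                # entries stabilize rapidly with L
\end{verbatim}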
\subsection{Comparison to The Exact Solutions in the Fourier Domain\label{ch:comparison}}
In the previous section we have derived the Green's function of the exact solutions of the system
\begin{align} \label{ResolventEquations2}
\left\{
\begin{aligned}
&(\alpha I-Q^{\textbf{D},\textbf{a}}) R_{\alpha}^{\textbf{D},\textbf{a}}=\alpha \delta_e\\
&R_{\alpha}^{\mathbf{D},\mathbf{a}}(x,y,-\pi)
=
R_{\alpha}^{\mathbf{D},\mathbf{a}}(x,y,\pi)
\end{aligned}
\right. \end{align}
in the continuous Fourier domain. However, we still need to produce nearly exact solutions $R_{\alpha}^{\textbf{D},\textbf{a}}(x,y,\theta_r)$ in the spatial domain, given by
\begin{equation}\label{ContinuousExactSolutions}
\begin{aligned}
R_{\alpha}^{\textbf{D},\textbf{a}}(x,y,\theta_r)&=\left(\frac{1}{2\pi}\right)^2\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
\hat{R}_{\alpha}^{\textbf{D},\textbf{a}}(\pmb{\omega},\theta_r)e^{i\pmb{\omega}\pmb{x}}d\pmb{\omega}\\
&=\left(\frac{1}{2\pi}\right)^2\int_{-\varsigma\pi}^{\varsigma\pi}\int_{-\varsigma\pi}^{\varsigma\pi}\hat{R}_{\alpha}^{\textbf{D},\textbf{a}}(\pmb{\omega},\theta_r)e^{i\pmb{\omega}\pmb{x}}d\pmb{\omega}+I_\varsigma(\textbf{x},r),
\end{aligned}
\end{equation}
where $\pmb{x}=(x,y) \in \mathbb{R}^2$, $\pmb{\omega}=(\omega_x,\omega_y)\in \mathbb{R}^2$, $\theta_r=\frac{2\pi}{2R+1} \cdot r \in [-\pi,\pi]$ are the discrete angles with $r \in \{-R,-(R-1),\ldots,0,\ldots,R-1,R\}$, $\varsigma$ is an oversampling factor, and $I_\varsigma(\textbf{x},r)$ represents the tails of the exact solutions due to their support outside the range $[-\varsigma\pi,\varsigma\pi]$ in the Fourier domain, given by
\begin{equation}
I_{\varsigma}(\textbf{x},r)=\left(\frac{1}{2\pi}\right)^2 \int_{\mathbb{R}^2 \setminus [-\varsigma\pi,\varsigma\pi]^2}\hat{R}_{\alpha}^{\textbf{D},\textbf{a}}(\pmb{\omega},\theta_r)e^{i\pmb{\omega}\cdot\textbf{x}}d\pmb{\omega}.
\end{equation}
However, in practice we sample the exact solutions in the Fourier domain and then obtain the spatial kernel by directly applying the $\textbf{DFT}^{-1}$. Here, errors emerge from using the $\textbf{DFT}^{-1}$ instead of the $\textbf{CFT}^{-1}$. More precisely, we shall rely on the $\textbf{CDFT}^{-1}$ (the inverse of the Centered Discrete Fourier Transform). Next we analyze and estimate the errors via Riemann sum approximations \cite{RiemannSum}. The nearly exact solutions of the spatial kernel in Eq.~(\ref{ContinuousExactSolutions}) can be written as
\begin{equation}\label{DiscreteExactSolutions}
\begin{array}{l}
R^{\textbf{D},\textbf{a}}_\alpha(x,y,\theta_r)=\left(\frac{1}{2\pi}\right)^2 \sum\limits_{p'=-\varsigma P}^{\varsigma P}\sum\limits_{q'=-\varsigma Q}^{\varsigma Q}\hat{R}^{\textbf{D},\textbf{a}}_\alpha (\omega_{p'}^1,\omega_{q'}^2,\theta_r)e^{i(\omega_{p'}^1 x+\omega_{q'}^2 y)}\Delta\omega^1\Delta\omega^2\\
\qquad\qquad\qquad\quad +\,I_\varsigma(\textbf{x},r)+O\left(\frac{1}{2P+1}\right)+O\left(\frac{1}{2Q+1}\right)\\
\qquad\qquad =\frac{1}{2P+1}\frac{1}{2Q+1} \sum\limits_{p'=-\varsigma P}^{\varsigma P}\sum\limits_{q'=-\varsigma Q}^{\varsigma Q}\hat{R}^{\textbf{D},\textbf{a}}_\alpha (\omega_{p'}^1,\omega_{q'}^2,\theta_r)e^{i(\omega_{p'}^1 x+\omega_{q'}^2 y)}\\
\qquad\qquad\qquad\quad +\,I_\varsigma(\textbf{x},r)+O\left(\frac{1}{2P+1}\right)+O\left(\frac{1}{2Q+1}\right),
\end{array}
\end{equation}
where
$\Delta\omega^1=\frac{2\pi}{2P+1}=\frac{2\pi}{x_{dim}},
\Delta\omega^2=\frac{2\pi}{2Q+1}=\frac{2\pi}{y_{dim}}$ and $P, \, Q \in \mathbb{N}$ determine the number of samples in the spatial domain, with discrete frequencies and angles given by
\begin{equation}\label{discretefrequencies}
\omega_{p'}^1=\frac{2\pi}{2P+1} \cdot p' \in [-\varsigma\pi,\varsigma\pi],\;
\omega_{q'}^2=\frac{2\pi}{2Q+1} \cdot q' \in [-\varsigma\pi,\varsigma\pi],\;
\theta_r=\frac{2\pi}{2R+1} \cdot r \in [-\pi,\pi].
\end{equation}
There are three approximation terms in Eq.~(\ref{DiscreteExactSolutions}); two of them, i.e. $O\left(\frac{1}{2P+1}\right)$ and $O\left(\frac{1}{2Q+1}\right)$, are standard Riemann-sum approximation errors. However, $I_\varsigma(\textbf{x},r)$ is harder to control and estimate.
This is one of the reasons why we include a spatial Gaussian blurring with small scale $0<s \ll 1$. This means that instead of solving
$R_\alpha^{\textbf{D},\textbf{a}}=
\alpha(\alpha I-Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3))^{-1}\delta_e$,
we compute
\begin{equation}\label{ResolventWithGaussian}
R_\alpha^{\textbf{D},\textbf{a},s}=e^{s\Delta}\alpha(\alpha I-Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3))^{-1}\delta_e
=\alpha(\alpha I - Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3))^{-1} e^{s\Delta} \delta_e.
\end{equation}
So instead of computing the impulse response of a resolvent diffusion, we compute the response of a spatially blurred spike $G_s \otimes \delta_0^\theta$ with Gaussian kernel $G_s(x)=\frac{e^{-\frac{||x||^2}{4s}}}{4\pi s}$. Another reason for including a linear isotropic diffusion is that the kernels $R_\alpha^{\textbf{D},\textbf{a},s}$ are not singular at the origin. The singularity at the origin $(0,0,0)$ of $R_\alpha^{\textbf{D},\textbf{a}}$ reproduces the original data, whereas the tails of $R_\alpha^{\textbf{D},\textbf{a}}$ take care of the actual visual enhancement. Therefore, reducing the singularity at the origin by slightly increasing $s>0$ amplifies the enhancement properties of the kernel in practice.
However, $s>0$ should not be too large, as we do not want the isotropic diffusion to dominate the anisotropic diffusion.
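The following minimal sketch (in Python; the isotropic multiplier $\alpha/(\alpha+D_{11}|\pmb{\omega}|^{2})$ is used as a stand-in for a sampled exact kernel slice $\hat{R}_{\alpha}^{\textbf{D},\textbf{a}}(\cdot,\cdot,\theta_r)$, cf.\ the asymptotic remark below, and the parameter values are illustrative) demonstrates the practical pipeline: sample in the Fourier domain, apply the Gaussian window $e^{-s|\pmb{\omega}|^{2}}$, and apply the $\textbf{CDFT}^{-1}$ via shifted FFTs:
\begin{verbatim}
import numpy as np

# Centered frequency grid with (2P+1) x (2Q+1) samples, omega in [-pi, pi]^2.
alpha, D11, s = 0.05, 1.0, 0.5           # s = sigma^2/2 with sigma = 1 pixel
P = Q = 64
p = np.arange(-P, P + 1)
w1, w2 = np.meshgrid(2.0 * np.pi * p / (2 * P + 1),
                     2.0 * np.pi * p / (2 * Q + 1), indexing='ij')

# Stand-in for a sampled exact kernel slice in the Fourier domain.
Rhat = alpha / (alpha + D11 * (w1**2 + w2**2))

# Gaussian window: damps high frequencies so that the tails outside the
# sampled frequency square become negligible.
Rhat_s = Rhat * np.exp(-s * (w1**2 + w2**2))

# CDFT^{-1}: map the centered grid to standard FFT order, invert, re-center.
R = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(Rhat_s))).real
\end{verbatim}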
\begin{theorem}
The exact solutions of $R_\alpha^{\textbf{D},\textbf{a},s}:SE(2) \rightarrow \mathbb{R}^+$ are given by
\begin{equation}\begin{aligned} \label{ExactSolutionsFourier}
\left(\mathcall{F}_{\mathbb{R}^2}R_\alpha^{\textbf{D},\textbf{a},s}(\cdot,\theta)\right)(\pmb{\omega})
=
\left(\mathcall{F}_{\mathbb{R}^2}R_\alpha^{\textbf{D},\textbf{a}}(\cdot,\theta)\right)(\pmb{\omega})e^{-s|\pmb{\omega}|^2},
\end{aligned}\end{equation}
where analytic expressions for $\hat{R}_\alpha^{\textbf{D},\textbf{a}}(\pmb{\omega},\theta)=\left[\mathcall{F}_{\mathbb{R}^2}(R_\alpha^{\textbf{D},\textbf{a}}(\cdot,\theta))\right](\pmb{\omega})$ in terms of Mathieu functions are provided in Theorem \ref{th:exact}. For the spatial distribution, we have the following error estimate:
\begin{equation}\label{ExactSolutionRiemannSumApproximation}
R_\alpha^{\textbf{D},\textbf{a},s}(\textbf{x},\theta_r)=\left(\left[\mathbf{CDFT}\right]^{-1}(\hat{R}_{\alpha}^{\mathbf{D},\mathbf{a},s}(\omega_\cdot^1,\omega_\cdot^2,\theta_r))\right)(\textbf{x})+I_\varsigma^s(\textbf{x},r)+O\left(\frac{1}{2P+1}\right)+O\left(\frac{1}{2Q+1}\right),
\end{equation}
for all $\textbf{x}=(x,y) \in \mathbb{Z}_P \times \mathbb{Z}_Q$, with discretization as in Eq.~(\ref{discretefrequencies}), where $\varsigma \in \mathbb{N}$ denotes the oversampling factor in the Fourier domain, $s=\frac{1}{2}\sigma^2$ is the spatial Gaussian blurring scale with $\sigma \approx 1$ or $2$ pixel lengths, and \\
\begin{equation}
I_\varsigma^s(\textbf{x},r)=\int_{\mathbb{R}^2 \setminus [-\varsigma\pi,\varsigma\pi]^2}e^{-s|\pmb{\omega}|^2}\hat{R}_{\alpha}^{\textbf{D},\textbf{a}}(\pmb{\omega},\theta_r)e^{i\pmb{\omega}\cdot\textbf{x}}d\pmb{\omega}.
\end{equation}
\label{thm:DiscreteExactSolutionsErrorEstimation}
\end{theorem}
First of all, we recall Eq.~(\ref{ResolventWithGaussian}), from which Eq.~(\ref{ExactSolutionsFourier}) follows. Eq.~(\ref{ExactSolutionRiemannSumApproximation}) follows by a standard Riemann-sum approximation akin to Eq.~(\ref{DiscreteExactSolutions}).
Finally, we note that due to H\"{o}rmander theory \cite{Hoermander} the kernel $R_\alpha^{\textbf{D},\textbf{a}}$ is smooth on $SE(2)\setminus\{e\}$, with $e=(0,0,0)$. Now, thanks to the isotropic diffusion, $R_\alpha^{\textbf{D},\textbf{a},s}$ is well-defined and smooth on the whole group $SE(2)$.
\begin{remark}
In the isotropic case $D_{11}=D_{22}$ we have the asymptotic formula (for $\rho \gg 0$ fixed):
\begin{equation}
(D_{11}\rho^2+D_{33}\rho_\theta^2+\alpha)\,\hat{R}_{\alpha}^{\textbf{D},\textbf{a}}(\pmb\omega,\rho_\theta)=1
\Longrightarrow \hat{R}_{\alpha}^{\textbf{D},\textbf{a}}(\pmb\omega,\rho_\theta)=\frac{1}{D_{11}\rho^2+D_{33}\rho_\theta^2+\alpha} = O\!\left(\frac{1}{\rho^2}\right).
\end{equation}
\end{remark}
Now we estimate the truncation error:
\begin{equation}
\begin{array}{l}
|I_\varsigma^s(\textbf{x},r)|=\left|\int_{\mathbb{R}^2 \setminus [-\varsigma\pi,\varsigma\pi]^2}e^{-s|\pmb{\omega}|^2}\hat{R}_{\alpha}^{\textbf{D},\textbf{a}}(\pmb{\omega},\theta_r)e^{i\pmb{\omega}\cdot\textbf{x}}d\pmb{\omega}\right|
\leq 2\pi\int_{\varsigma\pi}^\infty e^{-s\rho^2}\frac{C}{\rho}d\rho = \pi C \; \Gamma(0,\pi^2 s \varsigma^2),
\end{array}
\end{equation}
for fixed $\textbf{a}$, with $C \approx \frac{1}{D_{11}}$ (for $D_{33}$ small), and where $\Gamma(a,z)$ denotes the upper incomplete Gamma function. We have $s=\frac{1}{2}\sigma^2$. For typical parameter settings in the contour enhancement case, $\sigma=1$ pixel length, $D_{11}=1,D_{33}=0.05$, we have
\begin{align} \label{GammaDistribution}
|I_\varsigma^s(\textbf{x},r)|\leq\left\{
\begin{aligned}
&(0.00124)\pi C, &\varsigma=1 \\
&(10^{-10})\pi C, &\varsigma=2
\end{aligned}
\right. \end{align}
which is sufficiently small for $\varsigma \geq 2$.
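This bound is easy to evaluate numerically, e.g. with SciPy, where $\Gamma(0,z)$ coincides with the exponential integral $E_{1}(z)$ (\texttt{scipy.special.exp1}); a quick sketch reproducing the two bounds above:
\begin{verbatim}
import numpy as np
from scipy.special import exp1   # exp1(z) = E_1(z) = Gamma(0, z)

s = 0.5                          # s = sigma^2/2 with sigma = 1 pixel
for varsigma in (1, 2):
    print(varsigma, exp1(np.pi**2 * s * varsigma**2))
# prints ~1.24e-3 for varsigma = 1 and ~1.3e-10 for varsigma = 2
\end{verbatim}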
\subsubsection{Scale Selection of the Gaussian Mask and Inner-scale}\label{ScaleSelectionGaussianMask}
In the previous section, we proposed to use a narrow spatial isotropic Gaussian window to control errors caused by using the $\textbf{DFT}^{-1}$. In $\mathbb{R}$, we have $\sqrt{4\pi s} \mathcall{F}G_{s}=G_{\frac{1}{4s}}$, i.e.
\begin{equation}\label{GaussianFunction}
\begin{aligned}
(\mathcall{F}G_{s})(\omega)&=e^{-s||\omega||^2}, \qquad
G_s(x)=\frac{1}{\sqrt{4\pi s}}e^{\frac{-||x||^2}{4s}},
\qquad
\sigma_s \cdot \sigma_f=1,
\end{aligned}
\end{equation}
where $\sigma_f$ denotes the standard deviation of the Fourier window, and $\sigma_s$ denotes the standard deviation of the spatial window.
In our convention, we always take $\Delta x=\frac{l}{N_s}$ as the spatial pixel unit length, where $l$ gives the spatial physical length and $N_s$ denotes the number of samples.
The size of the Fourier Gaussian window can be represented as $2\sigma_f=\nu\cdot\varsigma\pi$,
where $\nu\in[\frac{1}{2},1]$ is the factor that specifies the percentage of the maximum frequency we are going to sample in the Fourier domain and $\varsigma$ is the oversampling factor. Then, we can represent the sizes of the continuous and discrete spatial Gaussian windows $\sigma_{s}$ and $\sigma_{s}^{Discrete}$ as:
\begin{equation}
\sigma_{s}=\frac{2}{\nu \varsigma \pi},\qquad
\sigma_{s}^{Discrete}=\sigma_s \cdot \frac{l}{N_s}=\frac{2}{\nu \varsigma \pi}\left(\frac{l}{N_s}\right).
\end{equation}
From Figure~\ref{fig:GaussianScale}, we can see that a Fourier Gaussian window with $\nu<1$ corresponds to a spatial Gaussian blurring of slightly more than 1 pixel unit.
\begin{figure}[htbp]
\centering
\subfloat{\includegraphics[scale=0.6]{GaussianScale.pdf}}
\caption{Illustration of the scales between a Fourier Gaussian window and the corresponding spatial Gaussian window. Here we define the number of samples $N_s=65$.}
\label{fig:GaussianScale}
\end{figure}
If we set the oversampling factor $\varsigma=1$, one has $2\sigma_{s}^{Discrete}\in \left[\Delta x, 2\Delta x\right]$. Then, the scale of the spatial Gaussian window satisfies $s_s=\frac{1}{2}(\sigma_s^{Discrete})^2\leq\frac{1}{2}(\Delta x)^2$, where $\frac{1}{2}(\Delta x)^2$ is called the \emph{inner-scale} \cite{FlorackInnerScale1992}, i.e. by definition the minimum reasonable Gaussian scale given the sampling distance.
\subsubsection{Comparison by Relative $\ell_K-$errors in the Spatial and Fourier Domain}\label{section:ComparisonRelativeErrorFormula}
Firstly, we explain how to make comparisons in the Fourier domain. Before the comparison, we apply a normalization such that all the DC components in the discrete Fourier domain add up to 1, i.e.
\begin{align*}
\sum_{r=-R}^R\sum_{x=-P}^P\sum_{y=-Q}^Q R_\alpha^{\textbf{D},\textbf{a}}(x,y,\theta_r)\Delta x \Delta y \Delta \theta=\sum_{r=-R}^R \left(\left[\mathbf{CDFT}\right]R_\alpha^{\textbf{D},\textbf{a}}(\cdot,\cdot,\theta_r)\right)(0,0)\cdot \Delta\theta=1,
\end{align*}
where the $\textbf{CDFT}$ and its inverse are given by
\begin{equation}
\begin{array}{l}
\left[\mathbf{CDFT}\left(R_\alpha^{\textbf{D},\textbf{a}}(\cdot,\cdot,\theta_r)\right)\right][p',q']:=\sum\limits_{p=-P}^P\sum\limits_{q=-Q}^Q R_\alpha^{\textbf{D},\textbf{a}}(p,q,\theta_r)e^{\frac{-2\pi i p p'}{2P+1}}e^{\frac{-2\pi i q q'}{2Q+1}},\\[6pt]
\left[\mathbf{CDFT^{-1}}\left([p',q']\rightarrow \hat{R}_\alpha^{\textbf{D},\textbf{a}}(\omega_{p'}^1,\omega_{q'}^2,\theta_r)\right)\right][p,q]\\
\qquad\qquad\qquad:=\left(\frac{1}{2P+1}\frac{1}{2Q+1}\right)\sum\limits_{p'=-P}^P\sum\limits_{q'=-Q}^Q \hat{R}_\alpha^{\textbf{D},\textbf{a}}(\omega_{p'}^1,\omega_{q'}^2,\theta_r)e^{\frac{2\pi i p p'}{2P+1}}e^{\frac{2\pi i q q'}{2Q+1}},
\end{array}
\end{equation}
in order to be consistent with the normalization in the continuous domain:
\begin{align*}
\int_{-\pi}^\pi\hat{R}_\alpha^{\textbf{D},\textbf{a}}(0,0,\theta){\rm d}\theta=\int_{-\pi}^\pi\int_\mathbb{R}\int_\mathbb{R} R_\alpha^{\textbf{D},\textbf{a}}(x,y,\theta){\rm d}x {\rm d}y {\rm d}\theta=1.
\end{align*}
The relative errors $\epsilon_R^f$ in the Fourier domain are computed as follows:
\begin{equation}\label{RelativeError}
\epsilon_R^f=\frac{\textbf{|}\hat{R}_\alpha^{\textbf{D},\textbf{a},exact}(\omega_{\cdot}^1,\omega_{\cdot}^2,\theta_\cdot)-\hat{R}_\alpha^{\textbf{D},\textbf{a},approx}(\omega_{\cdot}^1,\omega_{\cdot}^2,\theta_\cdot)\textbf{|}_{\ell_K(\mathbb{Z}_P \times \mathbb{Z}_Q \times \mathbb{Z}_R )}}{{\textbf{|}\hat{R}_\alpha^{\textbf{D},\textbf{a},exact}(\omega_{\cdot}^1,\omega_{\cdot}^2,\theta_\cdot)\textbf{|}_{\ell_K(\mathbb{Z}_P \times \mathbb{Z}_Q \times \mathbb{Z}_R )}}
},
\end{equation}
where $K \in \mathbb{N}$ indexes the $\ell_K$ norm on the discrete domain $\mathbb{Z}_P \times \mathbb{Z}_Q \times \mathbb{Z}_R$. Akin to comparisons in the Fourier domain, we compute relative errors $\epsilon_R^s$ in the spatial domain as follows:
\begin{equation}\label{RelativeErrorSpatial}
\epsilon_R^s=\frac{\textbf{|}R_\alpha^{\textbf{D},\textbf{a},exact}(x_\cdot,y_\cdot,\theta_\cdot)-R_\alpha^{\textbf{D},\textbf{a},approx}(x_\cdot,y_\cdot,\theta_\cdot)\textbf{|}_{\ell_K(\mathbb{Z}_P \times \mathbb{Z}_Q \times \mathbb{Z}_R )}}{{\textbf{|}R_\alpha^{\textbf{D},\textbf{a},exact}(x_\cdot,y_\cdot,\theta_\cdot)\textbf{|}_{\ell_K(\mathbb{Z}_P \times \mathbb{Z}_Q \times \mathbb{Z}_R )}}
},
\end{equation}
where we firstly normalize the approximation kernel with respect to the $\ell_1(\mathbb{Z}_P \times \mathbb{Z}_Q \times \mathbb{Z}_R )$ norm.
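As a minimal sketch (in Python; the function name and the $\ell_1$ pre-normalization convention are our own, following the description above), the relative $\ell_K$-errors can be computed as:
\begin{verbatim}
import numpy as np

def relative_lK_error(exact, approx, K=1):
    """Relative l_K error on Z_P x Z_Q x Z_R; approx is first l_1-normalized."""
    approx = approx * (np.sum(np.abs(exact)) / np.sum(np.abs(approx)))
    num = np.sum(np.abs(exact - approx) ** K) ** (1.0 / K)
    den = np.sum(np.abs(exact) ** K) ** (1.0 / K)
    return num / den
\end{verbatim}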
\section{Experimental Results}\label{section:Experimental results}
To compare the performance of the different numerical approaches with the exact solution, Fourier and spatial kernels with specific parameter settings are produced by each approach, in both the enhancement and the completion case. The evolution of all our numerical schemes starts from a spatially blurred orientation score spike, i.e. $(G_{\sigma_s}*\delta_0^{\mathbb{R}^2})\otimes\delta_0^{S^1}$, which corresponds to the Fourier Gaussian window mentioned in Section~\ref{ch:comparison} for the error control of the exact kernel in Theorem \ref{thm:DiscreteExactSolutionsErrorEstimation}. We vary $\sigma_s>0$ in our comparisons. We analyze the relative errors of both spatial and Fourier kernels for varying standard deviation $\sigma_s$ of the Gaussian blurring in the finite difference and the Fourier based approaches for contour enhancement, see Figure~\ref{fig:RelativeErrorWithChangingSigma}.
All the kernels in our experiments are $\ell_1$-normalized before comparisons are made. In the contour completion experiments, we construct all the kernels with the number of orientations $N_o = 72$ and spatial dimensions $N_s = 192$, while in the contour enhancement experiments we set $N_o = 48$ and $N_s = 128$. Our experiments do not aim for speed of convergence in terms of $N_o$ and $N_s$, as this can be derived theoretically from Theorem \ref{th:RelationofFourierBasedWithExactSolution}; rather, we stick to reasonable sampling settings to compare our methods and to analyze a reasonable choice of $\sigma_s > 0$.
\begin{figure}[!htbp]
\centering
\subfloat{\includegraphics[width=.6\textwidth]{RelativeErrorWithChangingSigma.pdf}}
\caption{The relative errors, Eq.~(\ref{RelativeError}), of the finite difference (FD), and Fourier based techniques (FBT) with respect to the exact methods (Exact) for contour enhancement. Both $\ell_1$ and $\ell_2$ normalized spatial and Fourier kernels are calculated based on different standard deviation $\sigma_s$ ranging from 0.5 to 1.7 pixels, with parameter settings $\textbf{D}=\{1.,0.,0.03\}, \alpha=0.05$ and time step size $\Delta t=0.005$ in the FD explicit approach.
}
\label{fig:RelativeErrorWithChangingSigma}
\end{figure}
From Figure~\ref{fig:RelativeErrorWithChangingSigma} we deduce that the relative errors of the $\ell_1$ and $\ell_2$ normalized finite difference (FD) spatial kernels converge to an offset of approximately $5\%$, which is explained by additional numerical blurring due to the B-spline approximation in Section \ref{section: Left-invariant Finite Differences with B-spline Interpolation}, which is needed for rotation covariance in discrete implementations \cite[Figure~\!10]{Franken2009IJCV}, but which does affect the actual diffusion parameters. The relative errors of the Fourier based techniques (FBT) decay very slowly from $0.61\%$ along the $\sigma_s$-axis. We conclude that an appropriate stable choice of $\sigma_s$ for a fair comparison of our methods is $\sigma_s=1$, recall also Section~\ref{ScaleSelectionGaussianMask}.
\begin{table*}[!ht]
\centering
\caption{Enhancement kernel comparison of the exact analytic solution with the numerical Fourier based techniques, the stochastic methods and the finite difference schemes.}
\begin{tabular}{r|l|l|l}
\toprule
Relative Error & $\textbf{D}=\{1.,0.,0.05\}$ & $\textbf{D}=\{1.,0.,0.05\}$ & $\textbf{D}=\{1.,0.9,1.\}$ \\
(\textbf{\%}) & $\alpha=0.01$ & $\alpha=0.05$ & $\alpha=0.05$ \\
\midrule
$\ell_1$-norm & Spatial \quad Fourier & Spatial \quad Fourier & Spatial \quad Fourier \\
\midrule
\textit{Exact-FBT} & \textbf{0.12} \qquad \textbf{1.30} & \textbf{0.35} \qquad \textbf{1.92} & \textbf{2.27} \qquad \textbf{0.60} \\
\textit{Exact-Stochastic} & 2.18 \qquad 3.94 & 1.74 \qquad 3.82 & 2.66 \qquad 2.54 \\
\textit{Exact-FDExplicit} & 5.07 \qquad 1.82 & 5.70 \qquad 2.34 & 2.99 \qquad 3.56 \\
\textit{Exact-FDImplicit} & 5.08 \qquad 2.29 & 5.70 \qquad 3.03 & 3.00 \qquad 5.59 \\
\midrule
$\ell_2$-norm & Spatial \quad Fourier & Spatial \quad Fourier & Spatial \quad Fourier \\
\midrule
\textit{Exact-FBT} & \textbf{1.40} \qquad \textbf{1.37}& \textbf{2.39} \qquad \textbf{2.30} & \textbf{2.24} \qquad \textbf{1.23} \\
\textit{Exact-Stochastic} & 2.26 \qquad 2.32& 3.50 \qquad 3.16 & 2.93 \qquad 2.65 \\
\textit{Exact-FDExplicit} & 4.80 \qquad 1.72& 4.97 \qquad 1.60 & 2.90 \qquad 3.15 \\
\textit{Exact-FDImplicit} & 5.17 \qquad 2.11& 5.80 \qquad 2.29 & 5.42 \qquad 5.56 \\
\bottomrule
\end{tabular}%
\caption*{
\textbf{Measurement method abbreviations}: ($\textit{Exact}$) - Ground truth measurements based on the analytic solution by using Mathieu functions in Section~\ref{3GeneralFormsExactSolutions}, ($\textit{FBT}$) - Fourier based techniques in Section~\ref{section:Duitsmatrixalgorithm} and Section~\ref{section:FourierBasedForEnhancement}, ($\textit{Stochastic}$) - Stochastic method in Section~\ref{section:MonteCarloStochasticImplementation} (with $\Delta t=0.02$ and $10^8$ samples), ($\textit{FDExplicit}$) and ($\textit{FDImplicit}$) - Explicit and implicit left-invariant finite difference approaches with B-Spline interpolation in Section~\ref{section:Left-invariant Finite Difference Approaches for Contour Enhancement}, respectively. The settings of time step size are $\Delta t=0.005$ in the $\textit{FDExplicit}$ scheme, and $\Delta t=0.05$ in the $\textit{FDImplicit}$ scheme. }
\label{tab:RelativeErrorEnhancementComparison}%
\end{table*}
Table~\ref{tab:RelativeErrorEnhancementComparison} shows the validation results of our numerical enhancement kernels in comparison with the exact solution using the same parameter settings. The first 5 rows and the last 5 rows of the table show the relative errors of the $\ell_1$ and $\ell_2$ normalized kernels, respectively. In all three parameter settings, the kernels obtained by the FBT method provide the best approximation of the exact solutions, with the smallest relative errors in both the spatial and the Fourier domain.
Overall, the stochastic approach (a Monte Carlo simulation with $\Delta t=0.02$
and $10^8$ samples) performs second best.
Although the finite difference scheme performs less well compared to the more computationally demanding FBT and the stochastic approach, the relative errors of the FD explicit approach are still acceptable, less than $5.7\%$. The $5\%$ offset is explained by the B-spline interpolation used to compute on a left-invariant grid. Here we note that finite differences do have the advantage of straightforward extension to non-linear diffusion processes \cite{Citti,Creusen2013,FrankenPhDThesis,Franken2009IJCV}, which will also be employed in the subsequent application section. In the FD implicit approach, a larger step size can be used than in the FD explicit approach in order to achieve a much faster implementation, still with negligible influence on the relative errors.
Table~\ref{tab:RelativeErrorCompletionComparison} shows the validation results of the numerical completion kernels with three sets of parameters. Again, all the $\ell_1$ and $\ell_2$ normalized FBT kernels show the best performance (less than $1.2\%$ relative error) in the comparison.
\begin{table*}[!ht]
\centering
\caption{Completion kernel comparison of the exact analytic solution with the numerical Fourier based techniques, the stochastic methods and the finite difference schemes.}
\begin{tabular}{r|l|l|l}
\toprule
Relative Error & $\textbf{D}=\{0.,0.,0.08\}$& $\textbf{D}=\{0.,0.,0.08\}$ & $\textbf{D}=\{0.,0.,0.18\}$ \\
& $\textbf{a}=(1.,0.,0.)$ & $\textbf{a}=(1.,0.,0.)$ & $\textbf{a}=(1.,0.,0.)$ \\
(\textbf{\%}) & $\alpha=0.01$ & $\alpha=0.05$ & $\alpha=0.05$ \\
\midrule
$\ell_1$-norm & Spatial \quad Fourier & Spatial \quad Fourier & Spatial \quad Fourier \\
\midrule
\textit{Exact-FBT} & \textbf{0.02} \qquad \textbf{1.06}& \textbf{0.11} \qquad \textbf{1.17} & \textbf{0.05} \qquad \textbf{0.52} \\
\textit{Exact-Stochastic} & 2.49 \qquad 3.31 & 2.37 \qquad 5.40 & 1.95 \qquad 4.26\\
\textit{Exact-FDExplicit} & 1.91 \qquad 8.36& 4.29 \qquad 8.68 & 4.57 \qquad 9.03 \\
\midrule
$\ell_2$-norm & Spatial \quad Fourier & Spatial \quad Fourier & Spatial \quad Fourier \\
\midrule
\textit{Exact-FBT} & \textbf{0.94} \qquad \textbf{1.21}& \textbf{1.20} \qquad \textbf{1.50} & \textbf{0.65} \qquad \textbf{0.79} \\
\textit{Exact-Stochastic} & 4.96 \qquad 3.40 & 4.84 \qquad 3.25 & 4.39 \qquad 2.45\\
\textit{Exact-FDExplicit} & 6.60 \qquad 5.50& 7.92 \qquad 6.56 & 8.46 \qquad 6.48 \\
\bottomrule
\end{tabular}%
\caption*{
\textbf{Measurement method abbreviations}:
($\textit{Exact}$) - Ground truth measurements based on the analytic solution by using Mathieu functions in Section~\ref{3GeneralFormsExactSolutions}, ($\textit{FBT}$) - Fourier based techniques in Section~\ref{section:Duitsmatrixalgorithm} and Section~\ref{section:FourierBasedForEnhancement}, ($\textit{Stochastic}$) - Stochastic method in Section~\ref{section:MonteCarloStochasticImplementation}
(with $\Delta t=0.02$ and $10^8$ samples), ($\textit{FDExplicit}$) - Explicit left-invariant finite difference approaches with B-Spline interpolation in Section~\ref{section:Left-invariant Finite Difference Approaches for Contour Enhancement}. The settings of time step size are $\Delta t=0.005$ in the $\textit{FDExplicit}$ scheme.}
\label{tab:RelativeErrorCompletionComparison}%
\end{table*}
\section{Application of Contour Enhancement to Improve Vascular Tree Detection in Retinal Imaging}\label{section:Applications on Retinal Image}
In this section, we will show the potential of achieving better vessel tracking results by applying the $SE(2)$ contour enhancement approach to challenging retinal images where the vascular tree (starting from the optic disk) must be detected. The retinal vasculature provides a convenient means for non-invasive observation of the human circulatory system. A variety of eye-related and systemic diseases such as glaucoma \cite{CassonGlaucoma2012}, age-related macular degeneration, diabetes, hypertension, arteriosclerosis or Alzheimer's disease affect the vasculature and may cause functional or geometric changes \cite{Ikram2013}. Automated quantification of these defects enables
massive screening for systemic and eye-related vascular diseases on the basis of fast and inexpensive imaging modalities, i.e. retinal photography. To automatically extract and assess the state of the retinal vascular tree, vessels have to be segmented, modeled and analyzed. Bekkers et al. \cite{BekkersJMIV} proposed a fully automatic multi-orientation vessel tracking method (ETOS) that performs excellently in comparison with other state-of-the-art algorithms. However, the ETOS algorithm often suffers from low signal-to-noise ratios, crossings and bifurcations, or problematic regions caused by leakages/blobs due to certain diseases. See Figure~\ref{fig:ProblematicRetinalVesselTracking}.
\begin{figure}[htbp]
\centering
\subfloat{\includegraphics[width=.75\textwidth]{ProblematicRetinalVesselTracking.pdf}}
\caption{Three problematic cases for the ETOS tracking algorithm \cite{BekkersJMIV}. From left to right: blurry crossing parts, small vessels with noise, and small vessels with high curvature.
}
\label{fig:ProblematicRetinalVesselTracking}
\end{figure}
We aim to solve these problems via left-invariant contour enhancement processes on invertible orientation scores as pre-processing for subsequent tracking\cite{BekkersJMIV}, recall Figure~\ref{fig:OSIntro}. In our enhancements, we rely on non-linear extension \cite{Franken2009IJCV} of finite difference implementations of the contour enhancement process to improve adaptation of our model to the data in the orientation score. Finally, the ETOS tracking algorithm \cite{BekkersJMIV} is performed on the enhanced retinal images with respect to various problematic tracking cases, in order to show the benefit of the left-invariant diffusion on $SE(2)$.
As a proof of concept, we show examples of tracking on left-invariantly diffused invertible orientation scores on cases where standard ETOS-tracking without left-invariant diffusion fails, see~\mbox{Figure~\ref{fig:VesselTrackingonCEDOSRetinalImages}.}
\begin{figure}[htbp]
\centering
\includegraphics[width=.75\textwidth]{VesselTrackingonCEDOSRetinalImages.pdf}
\caption{Vessel tracking on retinal images. From top to bottom: the original retinal images with erroneous ETOS tracking, and the enhanced retinal images with accurate tracking after enhancement.}
\label{fig:VesselTrackingonCEDOSRetinalImages}
\end{figure}
All the experiments in this section use the same parameters. All retinal images are of size $400 \times 400$. Parameters used for tracking are the same as the parameters of the ETOS algorithm in \cite{BekkersJMIV}: number of orientations $N_o = 36$, wavelet periodicity $2\pi$. The following parameters are used for the non-linear coherence-enhancing diffusion (CED-OS): spatial scale of the Gaussian kernel for isotropic diffusion $t_s=\frac{1}{2}\sigma_s^2=12$, scale for computing Gaussian derivatives $t_s'=0.15$, metric $\beta=0.058$, end time $t=20$, and $c=1.2$ for controlling the balance between isotropic diffusion and anisotropic diffusion; for details see \cite{Franken2009IJCV}.
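For reproducibility, the settings above can be collected in a single configuration record. The following sketch is ours and purely illustrative (the variable names are not part of any published implementation):
\begin{verbatim}
# Hypothetical configuration record mirroring the parameters listed above.
cedos_params = {
    "image_size": (400, 400),
    "num_orientations": 36,       # N_o in the ETOS tracker
    "wavelet_periodicity": "2*pi",
    "t_s": 12,                    # isotropic diffusion scale, (1/2) sigma_s^2
    "t_s_derivatives": 0.15,      # scale for Gaussian derivatives
    "beta": 0.058,                # metric parameter
    "end_time": 20,
    "c": 1.2,                     # isotropic vs. anisotropic balance
}
\end{verbatim}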
\section{Conclusion}
We analyzed linear left-invariant diffusion, convection-diffusion and their resolvents on invertible orientation scores, following 3 numerical and 3 exact approaches. In particular, we considered the Fokker-Planck equations of Brownian motion for contour enhancement, and the direction process for contour completion. We have provided 3 exact solution formulas for the generic left-invariant PDE's on $SE(2)$ to place previous exact formulas into context. These formulas involve either infinitely many periodic or non-periodic Mathieu functions, or only 4 non-periodic Mathieu functions.
Furthermore, as resolvent kernels suffer from severe singularities that we analyzed in this article, we propose a new time integration via Gamma distributions, corresponding to iterations of resolvent kernels. We derived new asymptotic formulas
for the resulting kernels and show benefits towards applications, illustrated via stochastic completion fields in Figure~\ref{fig:Gamma}.
Numerical techniques can be categorized into 3 classes: finite difference, Fourier based and stochastic approaches. Regarding the finite difference schemes, rotation and translation covariance on reasonably sized grids requires B-spline interpolation \cite{Franken2009IJCV} (towards a left-invariant grid), including additional numerical blurring. We applied this to both implicit schemes and explicit schemes with an explicit stability bound. Regarding Fourier based techniques (which are equivalent to $SE(2)$ Fourier methods, recall Remark~\ref{rem:42}), we have established an explicit connection in Theorem \ref{th:RelationofFourierBasedWithExactSolution} to the exact representations in periodic Mathieu functions, from which convergence rates are directly deduced. This is confirmed in the experiments, where the Fourier based techniques perform best in the numerical comparisons.
We compared the exact analytic solution kernels to the numerically computed kernels for all schemes. We computed the relative $\ell_1$ and $\ell_2$ errors in both the spatial and the Fourier domain. We also analyzed errors due to Riemann sum approximations that arise from using the $\textbf{DFT}^{-1}$ instead of the $\textbf{CFT}^{-1}$. Here, we needed to introduce a spatial Gaussian blurring with a small ``inner-scale'' due to finite sampling. This small Gaussian blurring allows us to control truncation errors, to maintain exact solutions, and to reduce the singularities.
We implemented all the numerical schemes in $\textit{Mathematica}$, and constructed the exact kernels based on our own implementation of Mathieu functions to avoid the numerical errors and slow speed caused by $\textit{Mathematica}$'s Mathieu functions.
We showed that FBT, stochastic and FD provide reliable numerical schemes. Based on the error analysis we demonstrated that the best numerical results were obtained using the FBT, with negligible differences. The stochastic approach (via a Monte Carlo simulation) performs second best.
The errors of the FD method are larger, but still within an admissible range, and FD schemes do allow non-linear adaptation. Preliminary results in a retinal vessel tracking application show that the PDE's in the orientation score domain preserve the crossing parts and help the ETOS algorithm\cite{BekkersJMIV} to achieve more robust tracking.
\section*{Acknowledgements}
The research leading to the results of this article
has received funding from the European Research Council under the European Community's 7th Framework Programme (FP7/2007-2014)/ERC grant agreement No. 335555. The China Scholarship Council (CSC) is gratefully acknowledged for the financial support No. 201206300010.
\begin{figure}[htbp]
\centering
\includegraphics[width=.75\textwidth]{logos.pdf}
\label{fig:logos}
\end{figure}
\section{Introduction}
Rado asked in 1966 (see Problem P531 in \cite{rado1966abstract}) if it is possible to extend the concept of matroids to infinite ground sets without losing duality and minors. Based on the works of Higgs (see \cite{higgs1969matroids}) and
Oxley (see \cite{oxley1978infinite} and \cite{oxley1992infinite}), Bruhn, Diestel, Kriesell, Pendavingh and Wollan
settled Rado's problem affirmatively in \cite{bruhn2013axioms} by finding a
set of cryptomorphic axioms for infinite matroids, generalising the usual independent set-, bases-, circuit-, closure- and
rank-axioms for finite matroids. Higgs originally named these structures B-matroids to distinguish them from the original concept.
Later this terminology faded, and in the context of infinite combinatorics B-matroids are referred to as matroids, while the term
`finite matroid' is used to differentiate.
A pair $ M=(E, \mathcal{I}) $ is a matroid if $ \mathcal{I}\subseteq \mathcal{P}(E) $ satisfies
\begin{enumerate}
[label=(\arabic*)]
\item\label{item axiom1} $ \varnothing\in \mathcal{I} $;
\item\label{item axiom2} $ \mathcal{I} $ is downward closed;
\item\label{item axiom3} For every $ I,J\in \mathcal{I} $ where $J $ is $ \subseteq $-maximal in $ \mathcal{I} $ but $ I $ is
not, there exists an $ e\in J\setminus I $ such that
$ I+e\in \mathcal{I} $;
\item\label{item axiom4} For every $ X\subseteq E $, any $ I\in \mathcal{I}\cap
\mathcal{P}(X) $ can be extended to a $ \subseteq $-maximal element of
$ \mathcal{I}\cap \mathcal{P}(X) $.
\end{enumerate}
If $ E $ is finite, then \ref{item axiom4} is redundant and \ref{item axiom1}-\ref{item axiom3} is one of the usual
axiomatizations
of finite matroids. One can show that every dependent set in an infinite matroid contains a minimal dependent set, which is called
a \emph{circuit}. Before Rado's program
was settled, a more restrictive axiom was used as a replacement for \ref{item axiom4}:
\begin{enumerate}[label={(4')}]
\item\label{item axiom4'} If all the finite subsets of an $ X\subseteq E $ are in $ \mathcal{I} $, then $ X\in \mathcal{I} $.
\end{enumerate}
The implication \ref{item axiom4'}$ \Longrightarrow $\ref{item axiom4} follows directly from Zorn's lemma, thus axioms
\ref{item axiom1}, \ref{item axiom2}, \ref{item axiom3} and \ref{item axiom4'} describe a subclass $ \mathfrak{F} $ of the
matroids. This $ \mathfrak{F} $ consists of the matroids having only finite circuits and is called the class of \emph{finitary}
matroids. The class $ \mathfrak{F} $ is closed under several important operations like direct sums and taking minors, but not under
taking duals, which was the main motivation of Rado's program for looking for a more general matroid concept. The class $
\mathfrak{F}^{*} $ of the duals of the matroids in $ \mathfrak{F} $
consists of the \emph{cofinitary} matroids, i.e. matroids all of whose cocircuits are finite. In order to be closed under all the
matroid operations we need, we work with the class $
\mathfrak{F}\oplus \mathfrak{F}^{*} $ of matroids having only finitary and cofinitary components, equivalently, the matroids that are the
direct sum of a finitary and a cofinitary matroid.
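To see that the two classes differ, consider the following standard example: let $ E $ be infinite and let $ U_{E,1}^{*} $ be the dual of the $ 1 $-uniform matroid on $ E $. A set is independent in $ U_{E,1}^{*} $ iff it is a proper subset of $ E $, so the unique circuit of $ U_{E,1}^{*} $ is the infinite set $ E $ itself; hence $ U_{E,1}^{*} $ is cofinitary but not finitary. Note that \ref{item axiom4'} fails for $ U_{E,1}^{*} $ although \ref{item axiom4} holds.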
Matroid union is a fundamental concept in the theory of finite matroids. For a finite family $ (M_i: i\leq n) $ of matroids on a
common finite edge set $ E $ one can define a matroid $ \bigvee_{i\leq n}M_i $ on $ E $ by letting $ I\subseteq E $ be
independent in $ \bigvee_{i\leq n}M_i $ if $ I=\bigcup_{i\leq n}I_i $ for suitable $ I_i\in \mathcal{I}_{M_i} $ (see
\cite{edmonds1968matroid}). This
construction fails for infinite families of finitary matroids. Indeed, let $ E $ be uncountable and let $ M_i $ be the $ 1
$-uniform
matroid on $ E $ for $ i\in \mathbb{N} $. Then exactly the countable subsets of $ E $ would be independent in
$ \bigvee_{i\in \mathbb{N}}M_i $ and hence there would be no maximal independent set contradicting
\ref{item axiom4}.\footnote{For a finite family of finitary matroids the union operation results in a matroid (Proposition 4.1
in \cite{aigner2018intersection}).} Even so,
Bowler and Carmesin observed (see section 3 in \cite{bowler2015matroid}) that the rank formula in the Matroid Partition
Theorem by Edmonds and Fulkerson (Theorem 13.3.1 in \cite{frank2011connections}), namely:
\[ r\left( \bigvee_{i\leq n}M_i \right) =\max_{I_i\in \mathcal{I}_{M_i}}\left|\bigcup_{i\leq n}I_i\right|=\min_{E=E_p\sqcup
E_c}\left|E_c\right|+\sum_{i\leq n}r_{M_i}(E_p),\]
can be interpreted in the infinite setting via the complementary slackness conditions. In the minimax formula above, equality holds
for the family $ (I_i: i\leq n) $ and the partition $ E=E_p\sqcup E_c $ iff
\begin{itemize}
\item $ I_i $ is independent in $ M_i $,
\item $\bigcup_{i\leq n} I_i\supseteq E_c $,
\item $ I_i\cap E_p $ spans $ E_p $ in $ M_i $ for every $ i $,
\item $ I_i\cap I_j\cap E_p=\varnothing $ for $ i\neq j$.
\end{itemize}
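As a minimal finite illustration of these conditions (our example, only meant to fix ideas), let $ E=\{ a,b,c \} $ and $ M_1=M_2=U_{E,1} $. Then
\[ r\left( M_1\vee M_2 \right)=\max_{I_i\in \mathcal{I}_{M_i}}\left|I_1\cup I_2\right|=2=\left|E_c\right|+\sum_{i\leq 2}r_{M_i}(E_p) \quad\text{for } E_p=E,\ E_c=\varnothing, \]
and equality is witnessed by $ I_1=\{ a \} $ and $ I_2=\{ b \} $: the sets $ I_1, I_2 $ are independent and disjoint on $ E_p $, each of them spans $ E_p $ in its matroid, and they trivially cover $ E_c=\varnothing $.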
Bowler and Carmesin conjectured that for
every family $\mathcal{M}:= (M_i: i\in\Theta) $ of matroids on a common edge set $ E $ there is a family $ (I_i: i\in \Theta) $
and a partition $
E=E_p\sqcup E_c $ satisfying
the conditions above. To explain the name ``Packing/Covering Conjecture'' let us provide an alternative formulation. A
\emph{packing} for $ \mathcal{M} $ is a system $ (S_i: i\in\Theta ) $ of pairwise disjoint subsets of $ E $ where $ S_i $ is
spanning in $ M_i $. Similarly, a \emph{covering} for $ \mathcal{M} $ is a system $ (I_i: i\in\Theta ) $ with
$\bigcup_{i\in\Theta} I_i=E $ where $ I_i $ is independent in $ M_i $.
\begin{conj}[Packing/Covering, Conjecture 1.3 in \cite{bowler2015matroid} ]\label{conj: Pack/Cov}
For every family $ (M_i: i\in \Theta) $ of matroids on a common edge set $ E $ there is a partition $E=E_p \sqcup E_c$ such that $ (M_i \upharpoonright E_p: i\in \Theta) $ admits a packing and $ (M_i. E_c: i\in \Theta) $ admits a covering.
\end{conj}
We shall prove the following special case of the Packing/Covering Conjecture \ref{conj: Pack/Cov}:
\begin{restatable}{thm}{PC}\label{t: main result0}
For every family $ (M_i: i\in \Theta) $ of matroids on a common countable edge set $ E $ where $ M_i \in
\mathfrak{F}\oplus \mathfrak{F}^{*} $, there is a partition $E=E_p \sqcup E_c$ such
that $ (M_i \upharpoonright E_p: i\in \Theta) $ admits a packing and $ (M_i. E_c: i\in \Theta) $ admits a covering.
\end{restatable}
It is worth mentioning that packings and coverings play a crucial role in other problems as well. For example, if $ (M_i: i\in\Theta) $ is as
in Theorem
\ref{t: main result0} and admits
both a packing and a covering, then there is a partition
$ E=\bigsqcup_{i\in \Theta}B_i $ where $ B_i $ is a base of $ M_i $ (see \cite{erde2019base}). Perhaps surprisingly, the failure
of the analogous statement
for arbitrary
matroids is consistent with ZFC (Theorem 1.5 of \cite{erde2019base}), which might raise some scepticism about the
provability of Conjecture
\ref{conj: Pack/Cov} for general matroids.
The Packing/Covering Conjecture \ref{conj: Pack/Cov} is closely related to the Matroid Intersection Conjecture which has
been one of the central open problems in the theory of infinite matroids:
\begin{conj}[Matroid Intersection Conjecture by Nash-Williams, \cite{aharoni1998intersection}]\label{MIC}
If $ M $ and $ N $ are finitary matroids on the same edge set $ E $, then they admit a common
independent set $ I $ for which there is a partition $ E=E_M\sqcup E_N $ such that $ I_M:=I\cap E_M $ spans $ E_M $ in $
M $ and $ I_N:=I\cap E_N $ spans $ E_N $ in $ N $.
\end{conj}
Aharoni proved in \cite{aharoni1984konig} based on his earlier works with Nash-Williams and Shelah (see
\cite{aharoni1983general} and \cite{aharoni1984another})
that the special case of Conjecture \ref{MIC} where $ M $ and $ N $ are partition matroids holds. The conjecture
is also known
to be true if we assume that $ E $ is countable but $ M $ and
$ N $ can be otherwise arbitrary (see \cite{joo2020MIC}).
Let us call the statement obtained from Conjecture \ref{MIC} by extending it to arbitrary matroids
(i.e. omitting the word ``finitary'') the Generalized Matroid Intersection Conjecture. Several partial results have been obtained for this generalization, but only for well-behaved
matroid classes. The positive answer is known, for example, if: $ M $ is finitary and $ N $ is cofinitary
\cite{aigner2018intersection}; both
matroids are
singular\footnote{A matroid is singular if it is the direct sum of $ 1 $-uniform matroids and duals of $ 1
$-uniform matroids.} and countable \cite{ghaderi2017}; or $ M $ is arbitrary and $ N $ is the direct sum of
finitely many uniform matroids \cite{joó2020intersection}.
Bowler and Carmesin showed that their Packing/Covering Conjecture \ref{conj: Pack/Cov} and the
Generalized Matroid Intersection Conjecture are equivalent, and they also found an important reduction for both (see Corollary
3.9 in \cite{bowler2015matroid}). By analysing their proof it is clear that the equivalence can be established even if we restrict both
conjectures to a class of matroids closed under certain operations. This allows us to prove Theorem \ref{t: main result0} by showing
the following instance of the Generalized Matroid Intersection Conjecture, which itself is a common extension of the singular case
by Ghaderi \cite{ghaderi2017} and our previous work \cite{joo2020MIC}:
\begin{restatable}{thm}{MI}\label{t: main result}
If $ M $ and $ N $ are matroids in $ \mathfrak{F}\oplus \mathfrak{F}^{*} $ on the same countable edge set $ E $, then they
admit a common independent set $ I $ for which there is a partition $ E=E_M\sqcup E_N $ such that $ I_M:=I\cap E_M $ spans
$ E_M $ in $ M $ and $ I_N:=I\cap E_N $ spans $ E_N $ in $ N $.
\end{restatable}
The paper is organized as follows. In the following section we introduce some notation and fundamental facts about matroids that
are mostly well-known for finite ones. In Section \ref{s: premil} we collect some previous results and relatively easy technical
lemmas in order to be able to discuss the proofs of the main results later without distraction. Then in Section
\ref{s: reduction} we reduce the main results to a key-lemma. After these preparations the actual proof begins in
Section \ref{s: augP} by developing and analysing an `augmenting path' type of technique. Our main principle from this point on is
to handle the
finitary and the
cofinitary parts of the matroid $ N $ differently, in order to exploit the
finiteness of the circuits and cocircuits respectively. Equipped with these ``mixed'' augmenting paths, we discuss the proof
of our key-lemma in Section
\ref{s: proof of key-lemma}. Finally, in Section \ref{s: application} we present an application about orientations of a graph with
in-degree requirements.
\section{Notation and basic facts}\label{sec notation}
In this section we introduce the notation and recall some basic facts about
matroids that we will use later without further explanation. For more details we refer to \cite{nathanhabil}.
An enumeration of a countable set $ X $ is an $ \mathbb{N}\rightarrow X $ surjection that we write as $ \{x_n: n\in \mathbb{N}
\} $. We denote the symmetric difference $ (X\setminus Y)\cup (Y\setminus X) $ of $ X $ and $ Y $ by $
\boldsymbol{X\vartriangle Y} $.
A pair ${M=(E,\mathcal{I})}$ is a \emph{matroid} if ${\mathcal{I} \subseteq \mathcal{P}(E)}$ satisfies the axioms
\ref{item axiom1}-\ref{item axiom4}.
The sets in~$\mathcal{I}$ are called \emph{independent} while the sets in ${\mathcal{P}(E) \setminus \mathcal{I}}$ are
\emph{dependent}. An $ e\in E $ is a \emph{loop} if $ \{ e \} $ is dependent.
If~$E$ is finite, then \ref{item axiom1}-\ref{item axiom3} is one of the usual axiomatizations of matroids in terms of
independent sets (while \ref{item axiom4} is redundant).
The maximal independent sets are called \emph{bases}. If $ M $ admits a finite base, then all the bases have the same size, which
is the rank $ \boldsymbol{r(M)} $ of $ M $; otherwise we let $ r(M):=\infty $.\footnote{It is independent of ZFC that the bases of
a fixed matroid have the same
size (see \cite{higgs1969equicardinality} and \cite{bowler2016self}).} The minimal dependent sets are called \emph{circuits}.
Every dependent set contains a circuit. The \emph{components} of a matroid are the
components of the hypergraph of its circuits. The
\emph{dual} of a matroid~${M}$ is the
matroid~${M^*}$ with $ E(M^*)=E(M) $ whose bases are the complements of
the bases of~$M$. For an ${X \subseteq E}$, ${\boldsymbol{M \upharpoonright X} :=(X,\mathcal{I}
\cap \mathcal{P}(X))}$ is a matroid and it is called the \emph{restriction} of~$M$ to~$X$.
We write ${\boldsymbol{M - X}}$ for $ M \upharpoonright (E\setminus X) $ and call it the minor obtained by the
\emph{deletion} of~$X$.
The \emph{contraction} of $ X $ in $ M $ and the contraction of $ M $ onto $ X $ are
${\boldsymbol{M/X}:=(M^* - X)^*}$ and $\boldsymbol{M.X}:= M/(E\setminus X) $ respectively.
Contraction and deletion commute, i.e., for
disjoint
$ X,Y\subseteq E $, we have $ (M/X)-Y=(M-Y)/X $. Matroids of this form are the \emph{minors} of~$M$. The
independence of an $ I\subseteq X $ in $ M.X $ is equivalent to $ I\subseteq \mathsf{span}_{M^{*}}(X\setminus I) $.
If $ I $ is independent in $ M $ but $ I+e $ is dependent for some $ e\in E\setminus I $ then there is a unique
circuit $ \boldsymbol{C_M(e,I)} $ of $ M $ through $ e $ contained in $ I+e $. We say~${X
\subseteq E}$ \emph{spans}~${e \in E}$ in matroid~$M$ if either~${e \in X}$ or there exists a circuit~${C
\ni e}$ with~${C-e \subseteq X}$.
By letting $\boldsymbol{\mathsf{span}_{M}(X)}$ be the set of edges spanned by~$X$ in~$M$, we obtain a closure
operation
$ \mathsf{span}_{M}: \mathcal{P}(E)\rightarrow \mathcal{P}(E) $.
An ${S \subseteq E}$ is \emph{spanning} in~$M$ if~${\mathsf{span}_{M}(S) = E}$. An $ S\subseteq X $ spans $ X $ in $
M.X $ iff $ X\setminus S $ is independent in $ M^{*} $. If $ M_i=(E_i,
\mathcal{I}_i)$ is a matroid for $ i\in
\Theta $ and the sets $ E_i $ are pairwise disjoint, then their direct sum is $ \boldsymbol{\bigoplus_{i\in
\Theta}M_i}=(E,\mathcal{I}) $ where
$ E=\bigsqcup_{i\in \Theta}E_i $ and $ \mathcal{I}=\{ \bigsqcup_{i\in \Theta}I_i : I_i\in \mathcal{I}_i\} $. For a class $
\mathfrak{C} $ of matroids $ \boldsymbol{\mathfrak{C}(E)} $ denotes the subclass $ \{ M\in \mathfrak{C}: E(M)=E \} $.
A matroid is called \emph{uniform} if for every base $ B $ and all edges $ e\in B $ and $ f\in E\setminus B $, the set $ B-e+f $ is also a
base. Let
$ \boldsymbol{U_{E,n}}$ be the $ n $-uniform matroid on $ E $, formally $ U_{E,n}:=(E , [E]^{\leq n})$.
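For instance, the circuits of $ U_{E,n} $ are exactly the $ (n+1) $-element subsets of $ E $; in particular, the circuits of the $ 1 $-uniform matroid $ U_{E,1} $ are the $ 2 $-element subsets of $ E $.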
We need some further, more subject-specific definitions. From now on let $ M
$ and $ N $ be matroids on a common edge set $ E $. We call a $ W\subseteq E $ an
$ (M,N) $-\emph{wave} if $ M\upharpoonright W $ admits an $ N.W $-independent base. Waves in the matroidal context were
introduced by
Aharoni and Ziv in \cite{aharoni1998intersection} but it was also an important tool in the proof of the infinite version of Menger's
theorem \cite{aharoni2009menger} by Aharoni and Berger. We write $ \boldsymbol{\mathsf{cond}(M,N)} $ for
the condition: `For every $ (M,N) $-wave $ W $ there is an $ M $-independent base of $ N.W $.' A set $ I\in\mathcal{I}_M\cap
\mathcal{I}_N $ is \emph{feasible} if $
\mathsf{cond}(M/I,N/I) $ holds.
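For example, any set $ L $ of $ M $-loops is an $ (M,N) $-wave: the empty set is a base of $ M\upharpoonright L $ and is trivially independent in $ N.L $.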
It is known (see Proposition \ref{wave union}) that there exists a $ \subseteq $-largest $ (M,N) $-wave which we denote by
$ \boldsymbol{W(M,N)} $. Let $ \boldsymbol{\mathsf{cond}^{+}(M,N) }$ be the statement that $ W(M,N) $ consists of $ M
$-loops and $
{r(N.W(M,N))=0} $. As the notation indicates it is a strengthening of $ \mathsf{cond}(M,N) $. Indeed, under the assumption $
\mathsf{cond}^{+}(M,N) $, $ \varnothing $ is an $ M $-independent base of $ N.W $ for every wave $ W $. A feasible $ I $ is
called \emph{nice} if $ \mathsf{cond}^{+}(M/I,N/I) $ holds. For $ X\subseteq E $ let $ \boldsymbol{B(M,N,X)} $ be the
(possibly empty) set of common bases of $ M \upharpoonright
X $ and $ N.X $.
\section{Preliminary lemmas and preparation}\label{s: premil}
We collect those necessary lemmas in this section that are either known from previous papers or follow more or less directly
from definitions.
\subsection{Classical results}
The following two statements were proved by Edmonds in \cite{edmonds2003submodular}:
\begin{prop}\label{prop: simult change}
Assume that~$I$ is independent,
${e_1, \dots, e_{m} \in \mathsf{span}(I) \setminus I}$
and~${f_1, \dots, f_{m} \in I}$
with ${f_j \in C(e_j, I)}$
but ${f_j\notin C(e_k, I)}$ for~${k < j}$.
Then
\[
{\left( I \cup \{e_1,\dots, e_{m}\} \right) \setminus \{f_1,\dots, f_{m}\}}
\]
is independent and spans the same set as~$I$.
\end{prop}
\begin{proof}
We use induction on~$m$.
The case~${m = 0}$ is trivial.
Suppose that~${m > 0}$.
On the one hand, the set ${I-f_m+e_m}$ is independent and spans the same set as~$I$.
On the other hand, ${C(e_j, I-f_m+e_m) = C(e_j, I)}$ for~${j < m}$ because ${f_m \notin C(e_j, I)}$ for~${j < m}$.
Hence by using the induction hypothesis for~${I-f_m+e_m}$ and ${e_1,\dots, e_{m-1}, f_1,\dots, f_{m-1}}$ we are done.
\end{proof}
\begin{lem}[Edmonds' augmenting path method]\label{l: augP Edmonds}
For $ I\in \mathcal{I}_M\cap \mathcal{I}_N $, exactly one of the following statements holds:
\begin{enumerate}
\item There is a partition $ E=E_M\sqcup E_N $ such that $ I_M:=I\cap E_M $ spans $ E_M $ in $ M $ and $ I_N:=I\cap E_N $
spans $ E_N $ in $ N $.
\item There is a
$ P=\{ x_1,\dots, x_{2n+1} \}\subseteq E $ with $ x_{1}\notin \mathsf{span}_N(I) $ and $
x_{2n+1}\notin
\mathsf{span}_M(I) $ such that $ I\vartriangle P\in
\mathcal{I}_M\cap \mathcal{I}_N $ with $ \mathsf{span}_{M}(I\vartriangle P) =\mathsf{span}_{M}(I+x_{2n+1}) $ and
$ \mathsf{span}_{N}(I\vartriangle P) =\mathsf{span}_{M}(I+x_{1}) $.
\end{enumerate}
\end{lem}
\noindent We will develop in Section
\ref{s: augP} a ``mixed'' augmenting path method which operates differently on the finitary
and on the cofinitary part of an $ N\in (\mathfrak{F}\oplus \mathfrak{F}^{*})(E) $. The phrase `augmenting path' always refers
to our mixed method, except in the proof of Lemma \ref{one more edge}.
Note that $ E_M $ is an $ (M,N) $-wave witnessed by $ I_M $ and $ E_N $ is an $ (N,M) $-wave witnessed by $
I_N $.
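For readers less familiar with the classical finite method behind Lemma \ref{l: augP Edmonds}, the following minimal sketch (ours, purely illustrative and not part of the formal development) implements Edmonds' augmenting path algorithm for two finite matroids given by independence oracles. Note that its paths are oriented oppositely to our convention: they start at an element addable on the $ M $ side.
\begin{verbatim}
from collections import deque

def matroid_intersection(E, indep_M, indep_N):
    """Return a maximum common independent set of two finite matroids on E.

    indep_M, indep_N: independence oracles mapping a set of elements to bool.
    Classical shortest-augmenting-path algorithm in the exchange digraph
    (see e.g. Schrijver, "Combinatorial Optimization", Chapter 41).
    """
    E, I = set(E), set()
    while True:
        X_M = {x for x in E - I if indep_M(I | {x})}   # addable on the M side
        X_N = {x for x in E - I if indep_N(I | {x})}   # addable on the N side
        if X_M & X_N:                                  # length-0 augmenting path
            I.add((X_M & X_N).pop())
            continue
        # Breadth-first search for a shortest X_M -> X_N path.
        parent, queue, goal = {x: None for x in X_M}, deque(X_M), None
        while queue and goal is None:
            v = queue.popleft()
            if v in I:   # arcs v -> x (x not in I) with I - v + x M-independent
                nbrs = (x for x in E - I if indep_M((I - {v}) | {x}))
            else:        # arcs v -> y (y in I) with I - y + v N-independent
                nbrs = (y for y in I if indep_N((I - {y}) | {v}))
            for w in nbrs:
                if w not in parent:
                    parent[w] = v
                    if w in X_N:
                        goal = w
                        break
                    queue.append(w)
        if goal is None:
            return I                 # no augmenting path: I is maximum
        while goal is not None:      # augment along the path found
            I ^= {goal}
            goal = parent[goal]

# Example: maximum bipartite matching as an intersection of two
# partition matroids (at most one edge per left / per right vertex).
edges = {("u1", "v1"), ("u1", "v2"), ("u2", "v1")}
indep_left  = lambda S: len({e[0] for e in S}) == len(S)
indep_right = lambda S: len({e[1] for e in S}) == len(S)
print(matroid_intersection(edges, indep_left, indep_right))
\end{verbatim}
Of course, in the infinite setting such a naive search need not terminate; this is one of the reasons for the more careful ``mixed'' method developed in Section \ref{s: augP}.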
One can define matroids in the language of circuits (see \cite{bruhn2013axioms}). The following claim is one of the axioms in
that case.
\begin{claim}[Circuit elimination axiom]\label{Circuit elim}
Assume that $ C\ni e $ is a circuit and $ \{ C_x: x\in X \} $ is a family of circuits where $ X\subseteq C-e $ and $ C_x $ is a
circuit with $C\cap X=\{ x \} $ avoiding $ e $. Then there is a circuit through $ e $ contained in
\[ \left( C\cup \bigcup_{x\in X}C_x \right) \setminus X =:Y. \]
\end{claim}
\begin{proof}
Since $ C_x-x$ spans $ x $ we have $ C-e\subseteq\mathsf{span}(Y-e) $ and therefore $e\in \mathsf{span}(\mathsf{span}(Y-e)
)$. But
then $ e\in \mathsf{span}(Y-e) $ because $ \mathsf{span} $ is a closure operator.
\end{proof}
For finite matroids the axiom above is required only in the special case where $ X $ is a singleton (known as ``strong circuit
elimination''), from which the case of arbitrary $ X $ can be derived by repeated application.
\begin{cor}\label{cor: Noutgoing arc}
Let $ I $ be independent and suppose that there is a circuit $ C\subseteq \mathsf{span}(I) $
with $ e\in I\cap C $. Then there is an $ f\in C\setminus I $ with $e\in C(f,I) $.
\end{cor}
\begin{proof}
For every $ x\in C\setminus I $ we pick a circuit $ C_x $ with $ C_x\setminus I=\{ x \} $. If $
e\in C_x $
for some $ x $,
then $ f:=x $ is as desired. Suppose for a contradiction that there is no such $ x $. Then by Circuit elimination (Claim
\ref{Circuit elim}) we obtain a circuit through $ e $ which is
contained entirely in $ I $ contradicting the independence of $ I $.
\end{proof}
The following statement was shown by Aharoni and Ziv in \cite{aharoni1998intersection} using a slightly
different terminology.
\begin{prop}\label{wave union}
The union of arbitrarily many waves is a wave.
\end{prop}
\begin{proof}
Suppose that $ W_{\beta} $ is a wave for $ \beta<\kappa $ and let
$W_{<\alpha}:=\bigcup_{\beta<\alpha}W_{\beta} $ for $ \alpha\leq \kappa $. We fix a base
$ B_{\beta} \subseteq W_{\beta} $
of $ M\upharpoonright W_{\beta} $ which is independent in $ N.W_{\beta} $. Let us define
$ B_{<\alpha} $ by transfinite recursion for $ \alpha\leq \kappa $ as follows.
\[B_{<\alpha}:= \begin{cases} \varnothing &\mbox{if } \alpha=0 \\
B_{<\beta}\cup (B_\beta \setminus W_{<\beta}) & \mbox{if } \alpha=\beta+1\\
\bigcup_{\beta<\alpha}B_{<\beta} & \mbox{if } \alpha \text{ is a limit ordinal}.
\end{cases} \]
First we show by transfinite induction
that $ B_{<\alpha} $ is spanning in $ M\upharpoonright W_{<\alpha} $. For $ \alpha=0 $ it is trivial.
For a limit $ \alpha $ it follows directly from the induction hypothesis. If $ \alpha=\beta+1 $, then
by the choice of $ B_\beta $, the set $ B_\beta \setminus W_{<\beta} $ spans $ W_{\beta}\setminus W_{<\beta} $
in $ M/W_{<\beta} $. Since $ W_{<\beta} $ is spanned by $ B_{<\beta} $ in $ M $ by induction, it follows that
$ W_{<\beta+1} $ is spanned by $ B_{<\beta+1} $ in $ M $.
The independence of $ B_{<\alpha} $ in
$ N.W_{<\alpha} $ can be reformulated as ``$W_{<\alpha}\setminus B_{<\alpha}$ is spanning in $
N^{*} \upharpoonright
W_{<\alpha} $'',
which can be proved the same way as above.
\end{proof}
\subsection{Some more recent results and basic facts}
\begin{thm}[Aigner-Horev, Carmesin and Frölich; Theorem 1.5 in \cite{aigner2018intersection}]\label{t: mixed}
If $ M\in \mathfrak{F}(E) $ and $ N\in
\mathfrak{F}^{*}(E) $, then there is an $ I\in \mathcal{I}_M\cap \mathcal{I}_N $ and a partition $ E=E_M\sqcup E_N $ such
that $ I_M:=I\cap E_M $ spans $ E_M $ in $ M $ and $ I_N:=I\cap E_N $ spans $ E_N $ in $ N $.
\end{thm}
\begin{cor}\label{cor: applyMixed}
If $ M\in \mathfrak{F}(E) $ and $ N\in
\mathfrak{F}^{*}(E) $ satisfy $ \mathsf{cond}^{+}(M,N) $, then there is an $ M $-independent $ N $-base.
\end{cor}
\begin{proof}
Let $ E_M, E_N,
I_M $ and $ I_N $ be as in Theorem \ref{t: mixed}. Then $ E_M $ is a wave witnessed by $ I_M $ thus by
$ \mathsf{cond}^{+}(M,N) $ we know that $ E_M $ consists of $ M $-loops and $ r(N.E_M)=0 $. But then $ I_M=\varnothing $
and $ I_N $ is a base of $ N $ which is independent in $ M $.
\end{proof}
\begin{obs}\label{o: Mloop}
If $ \mathsf{cond}(M,N) $ holds and $ L $ is a set of $ M $-loops, then $ r(N.L)=0 $, which means $
L\subseteq\mathsf{span}_N(E\setminus L) $.
\end{obs}
\begin{cor}\label{cor: Mloop}
If $ I $ is feasible, then $ r(N.(\mathsf{span}_{M}(I)\setminus I))=0$.
\end{cor}
\begin{obs}
If $ W_0 $ is an $ (M,N) $-wave and $ W_1 $ is an $ (M/W_0, N-W_0) $-wave, then $ W_0\cup W_1 $ is an $ (M,N)
$-wave.
\end{obs}
\begin{cor}\label{cor: empty wave}
For $ W:=W(M,N) $, the largest $ (M/W, N-W) $-wave is $ \varnothing $. In particular, $ \mathsf{cond}^{+}(M/W,N-W) $
holds.
\end{cor}
\begin{obs}\label{o: condRestrict}
$ \mathsf{cond}^{+}(M,N) $ implies ${\mathsf{cond}^{+}(M\upharpoonright X,N.X) }$ for every $ X\subseteq E $.
\end{obs}
\begin{prop}\label{p: iterate feasible}
If $ I_0\in \mathcal{I}_N\cap \mathcal{I}_M $ and $ I_1 $ is feasible with respect to $ (M/I_0,
N/I_0) $, then $ I_0\cup I_1 $ is
feasible with respect to $ (M,N) $. If in addition $ I_1 $ is nice feasible with respect to $ (M/I_0, N/I_0) $, then so is
$ I_0\cup I_1 $ with respect to $ (M,N) $.
\end{prop}
\begin{proof}
By definition the feasibility of $ I_1 $ w.r.t. $ (M/I_0, N/I_0) $ means that the condition $ \mathsf{cond}(M/(I_0\cup
I_1),N/(I_0\cup I_1)) $
holds. The feasibility of $ I_0\cup I_1 $ w.r.t. $ (M, N) $ means the same also by definition. For `nice feasible' the argument is
similar, only $ \mathsf{cond} $ must be replaced by $ \mathsf{cond}^{+} $.
\end{proof}
The following lemma was introduced in \cite{joo2020MIC}.
\begin{lem}\label{one more edge}
Condition $ \mathsf{cond}^{+}(M, N) $ implies that
whenever $ W $ is an $ (M/e, N/e) $-wave for some $ e\in E $ witnessed by $ B\subseteq W $, then
$B\in B(M/e,N/e,W) $, i.e. $ B $ is spanning in $ N.W $.
\end{lem}
\begin{proof}
Let $ W $ be an $ (M/e, N/e) $-wave. Note that
$(N/e).W=N.W $ by definition. Pick a $ B\subseteq W $ which is an $ N.W $-independent base of $ (M/e)\upharpoonright
W $. We may assume
that
$ e\in \mathsf{span}_M(W) $ and $ e $ is not an $ M $-loop. Indeed, otherwise $ (M/e) \upharpoonright
W=M\upharpoonright
W$ holds
and hence $ W $ is also an $ (M,N) $-wave. Thus by $ \mathsf{cond}^{+}(M, N) $ we may conclude that $ W $ consists of $ M
$-loops and hence $ B=\varnothing $, moreover, $ r(N.W)=0 $ and therefore $
\varnothing $ is a base of $ N.W $.
Then $ B $ is not a base of
$ M\upharpoonright W $ but ``almost'', namely $r(M/B \upharpoonright W) =1 $. We apply the
augmenting path Lemma \ref{l: augP Edmonds} by Edmonds with $ B $ with respect to $M\upharpoonright W$ and $ N.W $. An
augmenting
path $ P $ cannot exist. Indeed, if $ P $ were an augmenting path then $ B\vartriangle P $ would show that $ W $ is an
$ (M,N) $-wave which does not consist of $ M $-loops, contradiction. Thus we get a partition
$ W=W_{0}\sqcup W_{1} $ instead where $ W_0 $ is an $ (M \upharpoonright W,N.W) $-wave witnessed by $ B\cap W_0 $,
and therefore also an $ (M,N)
$-wave, and $ W_1 $ is an
$ (N.W, M\upharpoonright W) $-wave witnessed by $ B\cap W_1 $.
Then $ W_0 $ must consist of $ M $-loops by $ \mathsf{cond}^{+}(M, N) $ and
therefore $ B\subseteq W_1 $ by the $ M $-independence of $ B $. We need to show that $ B $ is spanning not just in $ N.W_1 $
but also in $ N.W $. To do so let $ B' $ be a base of $ N-W$.
Then $ B\cup B' $ spans $ E\setminus W_0 $ in $ N $ because $ B=B\cap W_1 $ is a base $ N.W_1 $. But
then $ B\cup B' $ is spanning in $ N $ because $ r(N.W_0)=0 $ by Observation \ref{o: Mloop}. We conclude that $ B $ is
spanning in $ N.W $ as desired.
\end{proof}
\subsection{Technical lemmas regarding \texorpdfstring{$ \boldsymbol{B(M,N,W)} $}{BMNW}}
\begin{prop}\label{prop: WB nice feasible}
For $ W:=W(M,N) $, the elements of $ B(M,N,W) $ are nice feasible sets.
\end{prop}
\begin{proof}
Let $ B\in B(M,N,W) $. Clearly $ B\in \mathcal{I}_M\cap \mathcal{I}_N $ because $ B\in \mathcal{I}_M\cap
\mathcal{I}_{N.W} $ by definition and $ \mathcal{I}_{N.W} \subseteq \mathcal{I}_N $. We know that $ W\setminus B $ is
an $
(M/B, N/B) $-wave consisting of $ M/B $-loops and $ r(N.(W\setminus B))=0 $ because $ B $ is a
base of $ N.W $. In order to show $ \mathsf{cond}^{+}(M/B, N/B)
$, it is enough to prove that $ W\setminus B $ is actually $ W(M/B, N/B)=:W' $. Suppose for a contradiction that $ W'\supsetneq
W\setminus B$. Let $ B' $ be an $ N.W' $-independent base of $ M/B \upharpoonright W' $. Note that $ B\cup B' $ is a base of $
M \upharpoonright (W\cup
W') $ and $ B'\cap (W\setminus B)=\varnothing $. Since $ B $ is $ N^{*} $-spanned by $ W\setminus B\subseteq (W\cup
W')\setminus (B\cup B') $ and $ B' $ is $ N^{*} $-spanned by $ W'\setminus B'\subseteq (W\cup W')\setminus (B\cup B') $, we
may conclude that $ B\cup B' $ is independent in $ N.(W\cup W') $. Thus $ W\cup W' $ is an $ (M,N) $-wave witnessed by $
B\cup B' $ which contradicts the maximality of $ W $.
\end{proof}
\begin{cor}\label{cor: extNice}
Assume that $ I\in \mathcal{I}_M\cap \mathcal{I}_N $ and $ B\in B(M/I,N/I, W) $ where $ W:=W(M/I,N/I) $. Then $ I\cup B
$ is a nice feasible set.
\end{cor}
\begin{proof}
Combine Propositions \ref{p: iterate feasible} and \ref{prop: WB nice feasible}.
\end{proof}
\begin{obs}\label{obs: cond+ loop delete}
If $ \mathsf{cond}^{+}(M,N) $ holds and $ L $ is a set of $ M $-loops, then $ \mathsf{cond}^{+}(M-L,N-L) $ also holds.
\end{obs}
\begin{obs}\label{obs: common loops remove}
Let $ W $ be a wave and let $ L $ be a set of common loops of $ M $ and $ N $. Then $ W\setminus L $ is also a wave and $
B(M,N,W)=B(M,N,W\setminus L) $.
\end{obs}
\begin{lem}\label{l: wave modify}
Let $ W $ be a wave and let $ L\subseteq W $ be such that $ L $ consists of $ M $-loops with $ r(N.L)=0 $. Then $ W\setminus L $ is
an $ (M-L, N-L) $-wave with \[
B(M,N,W)=B(M-L, N-L, W\setminus L). \]
\end{lem}
\begin{proof}
A set $ B $ is a base of $ M\upharpoonright W $ iff it is a base of
$ M\upharpoonright (W\setminus L) $ because $ L $ consists of $ M $-loops. A $ B\subseteq W\setminus L $ is $ N.W
$-independent iff $ B\subseteq\mathsf{span}_{N^{*}}(W\setminus B) $. This holds if and only if $
B\subseteq\mathsf{span}_{N^{*}/L}(W\setminus
(B\cup L)) $, i.e. $ B $ is independent in $ (N-L).(W\setminus L) $. Note that $ r(N.L)=0 $ is equivalent to the $
N^{*} $-independence of $ L $. Thus for $ B\subseteq W\setminus L $, $ W\setminus B $ is $ N^{*} $-independent iff
$ W\setminus (B\cup L) $ is $ N^{*}/L $-independent. It means that $ B $ is spanning in $ N.W $ iff it is spanning in $
(N-L).(W\setminus L) $.
Thus the sets witnessing that $ W\setminus L $ is
an $ (M-L, N-L) $-wave are exactly those witnessing that $ W $ is an $ (M,N) $-wave; moreover, $
B(M,N,W)=B(M-L, N-L, W\setminus L) $ holds.
\end{proof}
\begin{lem}\label{l: minorsChanged}
Assume that $ X_j, Y_j \subseteq E $ for $ j\in \{ 0,1 \} $ where $ X_j\sqcup Y_j=Z $ for $ j\in \{ 0,1 \} $, furthermore
$ \mathsf{span}_M(X_0)=\mathsf{span}_M(X_1) $ and $ \mathsf{span}_{N^{*}}(Y_0) =\mathsf{span}_{N^{*}}(Y_1) $.
Then for every $ X\subseteq E\setminus Z $ we have\[ B(M/X_0-Y_0, N/X_0-Y_0, X)=B(M/X_1-Y_1, N/X_1-Y_1, X). \]
\end{lem}
\begin{proof}
The matroids $ M/X_0-Y_0 $ and $ M/X_1-Y_1$ are the same as well as the matroids $ N/X_0-Y_0$ and $ N/X_1-Y_1 $.
\end{proof}
\section{Reductions}\label{s: reduction}
We repeat here our main results for convenience:
\PC*
\MI*
\subsection{Matroid Intersection with a finitary \texorpdfstring{$\boldsymbol{M} $}{M}}
As we mentioned, the method by Bowler and Carmesin used to prove Corollary 3.9 in \cite{bowler2015matroid} works not only
for the class of all matroids but can be adapted for every class closed under certain operations. We apply their
technique to obtain the following reduction:
\begin{lem}\label{l: M finitary}
Theorems \ref{t: main result0} and \ref{t: main result} are implied by the special case of Theorem \ref{t: main result} where $
M\in \mathfrak{F} $.
\end{lem}
\begin{proof}
First we show that one can assume without loss of generality in the proof of Theorem \ref{t: main result0} that $ \Theta $ is
countable. To
do so let \[ E':=\{ e\in E: \left|\{i\in
\Theta: \{ e \}\in \mathcal{I}_{M_i} \}\right|\leq \aleph_0 \} \] and
\[ \Theta':= \{ i\in \Theta: (\exists e\in E' ) (\{ e \}\in \mathcal{I}_{M_i}) \}. \]
We apply Theorem \ref{t: main result0} with $ E' $ and with the countable family $(M_i\upharpoonright E': i\in \Theta') $.
Then we obtain a partition
$ E'=E'_p\sqcup E'_c $ such that $ (M_i \upharpoonright E'_p: i\in \Theta') $ admits a packing $ (S_i: i\in \Theta') $ and
$ ((M_i\upharpoonright E'). E'_c: i\in \Theta') $ admits a covering $ (I_i: i\in \Theta') $. Let $ E_p:=E'_p $ and $ E_c:= E\setminus
E_p=E'_c\cup (E\setminus E') $. By construction $ r_{M_i}(E')=0 $ for $ i\in
\Theta \setminus \Theta' $. Thus by letting $ S_i:=\varnothing $ for $ i\in \Theta \setminus \Theta' $ the
family $ (S_i: i\in \Theta) $ is a packing w.r.t. $ (M_i \upharpoonright E_p: i\in \Theta) $.
Let $ g: E\setminus E'\rightarrow
\Theta\setminus \Theta' $ be an injection such that $ \{ e \}\in \mathcal{I}_{M_{g(e)}} $ for every $ e\in E\setminus E' $; such a $ g $ exists because for each $ e\in E\setminus E' $ uncountably many $ i\in \Theta $ satisfy $ \{ e \}\in \mathcal{I}_{M_i} $, while $ \Theta' $ and $ E\setminus E' $ are countable. For $ i\in \Theta \setminus \Theta' $, we take $ I_i:=\{ g^{-1}(i) \}$ if $ i\in
\mathsf{ran}(g) $ and $ I_i:=\varnothing $ otherwise. Then $ (I_i: i\in \Theta) $ is a covering for $ (M_i. E_c: i\in \Theta) $ and
we are done.
We proceed with the proof of Theorem \ref{t: main result0} assuming that $ \Theta $ is countable. For $ i\in \Theta $, let $ M'_i
$ be the matroid on $
E\times \{ i \} $ that we
obtain by
``copying'' $ M_i $ via the bijection $ e\mapsto (e,i) $. Then for
\[M:= \bigoplus_{i\in \Theta}M_i',\ \text{ and } N:= \bigoplus_{e\in E}U_{ \{ e \}\times \Theta, 1 } \]
we have $M\in ( \mathfrak{F}\oplus \mathfrak{F}^{*})(E\times \Theta) $ and $ N\in \mathfrak{F}(E\times \Theta) $ where $
E\times \Theta $ is countable. Thus by assumption there is a partition $
E\times \Theta=E_M\sqcup E_N $ and an $ I\in \mathcal{I}_{M}\cap \mathcal{I}_{N} $ such that $ I_M:=I\cap E_M $
spans $ E_M $ in $ M$ and $ I_N:=I\cap E_N $
spans $ E_N$ in $ N$. The $ M $-independence of $ I $ ensures that $ J_i:=\{ e\in E: (e,i)\in I \} $ is $ M_i $-independent.
The $ N $-independence of $ I $ guarantees that the sets $ J_i $ are pairwise disjoint. Let
$ E_c:=\{ e\in E: (\exists i\in \Theta) (e,i)\in E_N \} $. Then for each $ e\in E_c $ there must be some $ i\in \Theta $ with $
(e,i)\in
I_N $ because $ E_N\subseteq \mathsf{span}_N(I_N) $. Thus the sets $ J_i $ cover $ E_c $ and so do the sets $
I_i:=J_i\cap E_c $. It is enough to show that $ S_i:=J_i\setminus I_i $ spans $ E_p:=E\setminus E_c $ in $ M_i $ for every $
i\in \Theta $. Let $ f\in E_p $ and $ i\in \Theta $ be given. Then $ \{ f \}\times \Theta \subseteq E_M $ follows directly from the
definition of $ E_p $, in particular $ (f,i)\in E_M $. We know that $(f,i)\in
\mathsf{span}_{M}(I_M)$ and hence $ f\in \mathsf{span}_{M_i}(\{ e\in E: (e,i)\in I_M \}) $. Suppose for a contradiction that
$h\in E_c\cap \{ e\in E: (e,i)\in I_M \} $. Then for some $ j\in
\Theta $ we have $ (h,j)\in E_N $. Since $ (h,j)\in \mathsf{span}_N(I_N) $, we have $ (h,k)\in I_N $ for some $ k\in \Theta $.
But then $ (h,i), (h,k)\in I $ are distinct elements, thus $ i\neq k $, which contradicts the $ N $-independence of $ I $. Therefore $
E_c\cap \{ e\in E: (e,i)\in I_M \}=\varnothing $. Since $ \{ e\in E: (e,i)\in I_M \}\subseteq J_i $ by the definition of $ J_i
$ we conclude $ \{ e\in E: (e,i)\in I_M \}=S_i $. Therefore $ (S_i: i\in \Theta) $ is a packing for $ (M_i \upharpoonright E_p:
i\in \Theta) $ and $ (I_i: i\in \Theta) $ is a covering for $ (M_i. E_c: i\in \Theta) $ as desired.
Now we derive Theorem \ref{t: main result} from Theorem \ref{t: main result0}. To do so, we take a partition $E=E_p\sqcup
E_c$ such that $ (S_M, S_N) $ is a packing for
$ (M\upharpoonright E_p, N^{*}\upharpoonright E_p) $ and $ (R_M, R_N) $ is a covering for $ (M.E_c, N^{*}.E_c ) $. Let $
I_M\subseteq S_M $ be a base of $ M\upharpoonright E_p $ and we define $ I_N:= R_M $. By construction
$E_p\subseteq \mathsf{span}_M(I_M) $ and $ I_N\in \mathcal{I}_{M.E_c} $. We also know that
\[ I_M\subseteq \mathsf{span}_{N^{*}}(S_N) \subseteq \mathsf{span}_{N^{*}}(E_p\setminus I_M) \]
which means $ I_M\in \mathcal{I}_{N.E_{p}} $.
Finally, $R_N\in \mathcal{I}_{N^{*}.E_c} $ means that $ E_c\setminus R_N $ spans $ E_c $ in $ N $ and therefore so does $
I_N=R_M\supseteq E_c\setminus R_N $.
\end{proof}
\subsection{Finding an \texorpdfstring{$ \boldsymbol{M} $}{M}-independent base of \texorpdfstring{$ \boldsymbol{N}
$}{N}}
The following reformulation of the matroid intersection problem was introduced by Aharoni and Ziv in
\cite{aharoni1998intersection} but its analogue by Aharoni was already an important tool to attack (and eventually solve in
\cite{aharoni2009menger}) the Erdős-Menger Conjecture.
\begin{restatable}{claim}{indepB}\label{c: M-indep N-base}
Assume that $ M\in \mathfrak{F}(E) $ and $ N\in (\mathfrak{F}\oplus\mathfrak{F}^{*})(E) $ such that $ E $ is countable and
$ \mathsf{cond}^{+}(M,N) $ holds. Then there is an $ M $-independent base of $ N $.
\end{restatable}
\begin{lem}\label{l: reduc2}
Claim \ref{c: M-indep N-base} implies our main results Theorems \ref{t: main result0} and \ref{t: main result}.
\end{lem}
\begin{proof}
By Lemma \ref{l: M finitary} it is enough to show that the special case of Theorem \ref{t: main result} where $ M\in
\mathfrak{F} $ follows from Claim \ref{c: M-indep N-base}.
To do so, let $ E_M:=W(M,N) $ and let $ I_M\subseteq E_M $ be a witness that $ E_M $ is a wave. For $ E_N:=E\setminus
E_M $, we have $
M/E_M\in \mathfrak{F}(E_N) $ and
$ N-E_M\in (\mathfrak{F}\oplus \mathfrak{F}^{*})(E_N) $, furthermore, $ \mathsf{cond}^{+}(M/E_M,N-E_M) $ holds (see
Corollary \ref{cor: empty wave}). By Claim \ref{c: M-indep N-base}, there is an $ M/E_M $-independent base $ I_N $ of $
N-E_M $. Then $ I:=I_M\cup I_N\in \mathcal{I}_M\cap \mathcal{I}_{N}$ and the partition $ E=E_M\sqcup E_N $ are as desired: $ I_M=I\cap E_M $ spans
$ E_M $ in $ M $ and $ I_N=I\cap E_N $ spans $ E_N $ in $ N $.
\end{proof}
\subsection{Reduction to a key-lemma}
From now on we assume that $ M\in \mathfrak{F}(E) $ and $ N\in (\mathfrak{F}\oplus\mathfrak{F}^{*})(E) $ where $ E $ is
countable. Let $
\boldsymbol{E_0} $ be
the union of the finitary components of $ N $ and let
$ \boldsymbol{E_1}:=E\setminus E_0 $. Note that $ N\upharpoonright E_0 $ is finitary,
$ N\upharpoonright E_1 $ is cofinitary and no $ N $-circuit meets both $ E_0 $ and $ E_1 $.
\begin{restatable}[key-lemma]{lem}{keylemma}\label{l: key-lemma}
If $ \mathsf{cond}^{+}(M,N) $ holds, then for every $ e\in E_0 $ there is a nice
feasible $ I $ with $ e\in \mathsf{span}_{N}(I) $.
\end{restatable}
\begin{lem}
Lemma \ref{l: key-lemma} implies our main results Theorems \ref{t: main result0} and \ref{t: main result}.
\end{lem}
\begin{proof}
It is enough to show that Lemma \ref{l: key-lemma} implies Claim \ref{c: M-indep N-base} because of Lemma \ref{l: reduc2}.
Let us
fix an enumeration $ \{ e_n: n\in
\mathbb{N} \} $ of $ E_0 $.
We build an $ \subseteq $-increasing sequence $ (I_n) $ of nice feasible sets starting
with $ I_0:=\varnothing $ (which is nice feasible by $ \mathsf{cond}^{+}(M,N) $) in such a way that $ {e_n\in
\mathsf{span}_N(I_{n+1})} $. Suppose that $ I_n $ is already defined.
If $ e_n\notin \mathsf{span}_N(I_n) $, then we
apply Lemma \ref{l: key-lemma} with $ (M/I_n, N/I_n) $ and $ e_n $ and take the union of the resulting $ I $ with $ I_{n} $ to
obtain $ I_{n+1} $ (see Proposition \ref{p: iterate feasible}), otherwise let $ I_{n+1}:=I_n $. The recursion is done. Now we
construct an $ M $-independent
$ I^{+}_n \supseteq I_n $ with $ E_1\subseteq \mathsf{span}_N(I^{+}_n) $ for $ n\in \mathbb{N} $. The matroid $ M/I_n
\upharpoonright
(E_1\setminus I_n) $ is finitary and $ N.(E_1\setminus I_n)=(N\upharpoonright E_1)/(I_n\cap E_1)$ is cofinitary, moreover, by
Observation \ref{o: condRestrict}
\[ \mathsf{cond}^{+}(M/I_n, N/I_n) \Longrightarrow \mathsf{cond}^{+}(M/I_n \upharpoonright (E_1\setminus I_n),
N.(E_1\setminus I_n )). \] Thus by Corollary \ref{cor: applyMixed} there is an $ M/I_n $-independent base $ B_n $ of
$ (N\upharpoonright E_1)/(E_1\cap I_n) $ and $
I^{+}_n:=I_n\cup B_n $ is as desired.
Let $ \mathcal{U} $ be a free ultrafilter on $ \mathbb{N} $ and we define
\[ S:=\{ e\in E: \{ n\in \mathbb{N}: e\in I^{+}_n \}\in \mathcal{U} \}. \] Then $ S $ is $ M
$-independent and $ N $-spanning and therefore we are done. Indeed, suppose for a contradiction that $ S $ contains an $ M
$-circuit $ C $. For $ e\in C $, we pick a $ U_e\in
\mathcal{U} $ with $ e\in I^{+}_n $ for $ n\in U_e $. Since $ M $ is finitary, $ C $ is finite, thus $U:= \cap \{ U_e: e\in C \}\in
\mathcal{U} $.
But then for $ n\in U $ we have $ C\subseteq I^{+}_n $ which contradicts the $ M $-independence of $ I^{+}_n $. Clearly, $
I_{n+1}\subseteq S $ for every $ n\in \mathbb{N} $ and therefore $ E_0\subseteq \mathsf{span}_N(S) $. Finally, suppose for a
contradiction that there is some
$ N^{*}\upharpoonright E_1 $-circuit $ C' $ with $ S\cap C'=\varnothing $. For $ e\in C' $, we can pick a $ U_e'\in
\mathcal{U} $ with $ e\notin I^{+}_n $ for $ n\in U_e' $. Since $ N^{*}\upharpoonright E_1 $ is finitary, $ C' $ is finite, thus
$U':= \cap \{ U_e': e\in
C' \}\in
\mathcal{U} $.
But then for $ n\in U' $ we have $ I^{+}_n\cap C'=\varnothing $ which contradicts $E_1\subseteq
\mathsf{span}_N(I^{+}_n) $.
\end{proof}
\section{Mixed augmenting paths}\label{s: augP}
In this section we introduce an `augmenting path' type of method and analyse it in order to establish some properties we need later.
On $ E_0 $ the definition will be
the same as in the proof of
the Matroid Intersection Theorem by Edmonds \cite{edmonds2003submodular} but on $ E_1 $ we need to define it in a different
way considering that $ N\upharpoonright E_1 $ is cofinitary. For brevity we write
$ \boldsymbol{\overset{\circ}{\mathsf{span}}_{M}(F)} $ for $ \mathsf{span}_{M}(F)\setminus F $
and $ \boldsymbol{F^{j}} $ for $F\cap E_j $
where $ F\subseteq E $ and $ j\in \{ 0,1 \} $.
We call an $F\subseteq E $ \emph{dually safe} if $ F^{1} $ is spanned by $
\overset{\circ}{\mathsf{span}}_M(F) $ in $ N^{*} $.
\begin{lem}\label{l: enoughAddB}
If $ I\in \mathcal{I}_M\cap \mathcal{I}_N $ is dually safe and $ B\in B(M/I, N/I, W) $ for $ W:=W(M/I, N/I) $, then
$ I\cup B $ is a nice dually safe feasible set.
\end{lem}
\begin{proof}
We already know by Corollary \ref{cor: extNice} that $ I\cup B $ is a nice feasible set. By using that $ I $ is dually safe and $
\overset{\circ}{\mathsf{span}}_M(I)\subseteq\overset{\circ}{\mathsf{span}}_M(I\cup B) $ we get
\[ I^{1}\subseteq \mathsf{span}_{N^{*}}(\overset{\circ}{\mathsf{span}}_M(I))\subseteq
\mathsf{span}_{N^{*}}(\overset{\circ}{\mathsf{span}}_M(I\cup B)). \]
Since $ B\in B(M/I, N/I, W) $ we have $ W\setminus B\subseteq \overset{\circ}{\mathsf{span}}_M(I\cup B) $ and $ B $ is a
base of $ N.W $. The latter can be rephrased as `$ W\setminus B $ is a base of $ N^{*}\upharpoonright W $'. Combining
these facts, $ \overset{\circ}{\mathsf{span}}_M(I\cup B) $ spans $ B $ in $ N^{*} $. Therefore
$(I\cup B)\cap E_1 \subseteq \mathsf{span}_{N^{*}}(\overset{\circ}{\mathsf{span}}_M(I\cup B)) $, which means that $ I\cup
B $ is dually safe.
\end{proof}
\begin{prop}\label{prop: IE_1 common base}
For a dually safe feasible $ I $, $ \overset{\circ}{\mathsf{span}}_{M}(I)^{1} $ is
a base of $ {N^{*}\upharpoonright \mathsf{span}_{M}(I)^{1} } $.
\end{prop}
\begin{proof}
By the definition of `dually safe', $ \overset{\circ}{\mathsf{span}}_{M}(I)^{1} $ spans $ N^{*}\upharpoonright
\mathsf{span}_{M}(I)^{1} $. Furthermore, $ r(N. \overset{\circ}{\mathsf{span}}_{M}(I))=0 $ by Corollary \ref{cor: Mloop},
which is equivalent to the $ N^{*} $-independence of $
\overset{\circ}{\mathsf{span}}_{M}(I) $.
\end{proof}
For a dually safe feasible $ I $,
we define an auxiliary digraph $ D(I) $ on $ E $.
Let $ xy $ be an arc of $ D(I) $ iff
one of the following possibilities occurs:
\begin{enumerate}
\item\label{item: D(I) 1} $ x\in E\setminus I $ and $ I+x $ is $ M $-dependent with $ y\in C_M(x, I)-x $,
\item\label{item: D(I) 2} $x\in I^{0} $ and $ C_{N}(y,I) $ is well-defined and contains $ x $,
\item\label{item: D(I) 3} $x\in I^{1} $ and
$ y\in C_{N^{*}}(x,\overset{\circ}{\mathsf{span}}_{M}(I)^{1} ) -x $ (see
Proposition \ref{prop: IE_1 common base}).
\end{enumerate}
An augmenting path for a nice dually safe feasible $ I $ is a $ P=\{ x_1,\dots, x_{2n+1} \} $ where
\begin{enumerate}
[label=(\roman*)]
\item $ x_1\in E_0\setminus \mathsf{span}_N(I) $,
\item $ x_{2n+1}\in E_0\setminus \mathsf{span}_M(I) $,
\item $ x_kx_{k+1}\in D(I) $ for $ 1\leq k\leq 2n $,
\item\label{item: no jumping} $ x_kx_{\ell}\notin D(I) $ if $ k+1<\ell $.
\end{enumerate}
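Note that arcs of type \ref{item: D(I) 1} go from $ E\setminus I $ into $ I $, while arcs of types \ref{item: D(I) 2} and \ref{item: D(I) 3} leave $ I $ and end outside of $ I $; hence along an augmenting path the odd-indexed elements $ x_1, x_3,\dots, x_{2n+1} $ lie outside $ I $ and the even-indexed ones lie in $ I $.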
\begin{prop}\label{p: augpath}
If $ I $ is a dually safe feasible set and $ P=\{ x_1,\dots, x_{2n+1} \} $ is an augmenting path for $ I $, then
$ I\vartriangle P$ is a dually safe element of $ \mathcal{I}_M\cap \mathcal{I}_N $ with
\begin{enumerate}
[label=(\Alph*)]
\item\label{item: A} $\mathsf{span}_{M}(I\vartriangle P)=\mathsf{span}_{M}(I+x_{2n+1}) $,
\item\label{item: B} $\mathsf{span}_{N}(I\vartriangle P)\cap E_0=\mathsf{span}_{N}(I+x_1)\cap E_0 $,
\item\label{item: C} $\mathsf{span}_{N^{*}} (\overset{\circ}{\mathsf{span}}_{M}(I)^{1})=\mathsf{span}_{N^{*}}
(\overset{\circ}{\mathsf{span}}_{M}(I)^{1}\vartriangle P^{1}) $ and $
\overset{\circ}{\mathsf{span}}_{M}(I)^{1}\vartriangle P^{1}\in \mathcal{I}_{N^{*}} $.
\end{enumerate}
\end{prop}
\begin{proof}
The set $ I+x_{2n+1} $ is $ M $-independent by the definition of $ P $. Property \ref{item: no jumping} ensures that we can
apply Proposition \ref{prop: simult change} with $I+x_{2n+1},\ e_j=x_{2j-1},\ f_j=x_{2j}\ (1\leq j\leq n) $ and $ M $ and
conclude that
$ I\vartriangle P\in \mathcal{I}_M $ and \ref{item: A} holds.
To prove $ (I\vartriangle P)\cap E_0\in \mathcal{I}_N $ and
\ref{item: B} we
proceed similarly. We start with the $ N $-independent set $ (I+x_{1})\cap E_0 $. In order to satisfy the premisses of
Proposition \ref{prop: simult change} via property
\ref{item: no jumping}, we need to enumerate the relevant edge
pairs backwards.
Namely, for $j\leq \left| I^{0}\cap P \right|$ let $ e_j:=x_{i_j+1} $ where $ i_j $ is the $ j $th largest index with $ x_{i_j}\in
I^{0} $ and $ f_j:=x_{i_j} $. We conclude that $ (I\vartriangle P)\cap E_0\in \mathcal{I}_N
$ and \ref{item: B}
holds.
Finally, we let $ e_j:=x_{i_j} $ for $ j\leq \left|I^{1}\cap P\right| $ where $ i_j $ is the $j$th smallest index with $
x_{i_j}\in I^{1} $ and $
f_j:=x_{i_j+1} $.
Recall that $ \overset{\circ}{\mathsf{span}}_{M}(I)^{1} $ is $ N^{*} $-independent
(see Proposition \ref{prop: IE_1 common base}).
We apply Proposition \ref{prop: simult change} with
$\overset{\circ}{\mathsf{span}}_{M}(I)^{1} ,\ e_j,\ f_j $ and $ N^{*} $ to conclude \ref{item: C}. This means that $
\overset{\circ}{\mathsf{span}}_{M}(I)^{1}\vartriangle P^{1} $ is a base of $ N^{*}\upharpoonright
\mathsf{span}_{M}(I)^{1}$ because $ \overset{\circ}{\mathsf{span}}_{M}(I)^{1} $ was a base
of it by Proposition \ref{prop: IE_1 common base}. By \ref{item: A} and by the definition of $ P $ we
know that \[ (I\vartriangle P)\cap
E_1\subseteq
I^{1}\cup P^{1}\subseteq \mathsf{span}^{1}_{M}(I).
\] By combining these we obtain \[ (I\vartriangle P)\cap
E_1\subseteq \mathsf{span}_{N^{*}}(\overset{\circ}{\mathsf{span}}_{M}(I)^{1}\vartriangle P^{1}). \] The set $
I\vartriangle
P $ is disjoint from
$ \overset{\circ}{\mathsf{span}}_{M}(I)\vartriangle P$ because $ I $ is disjoint
from $
\overset{\circ}{\mathsf{span}}_{M}(I) $; moreover, $ \overset{\circ}{\mathsf{span}}_{M}(I)\vartriangle P $ is contained in $
\mathsf{span}_{M}(I\vartriangle P) $. Hence $ \overset{\circ}{\mathsf{span}}_{M}(I\vartriangle P) $
contains $
\overset{\circ}{\mathsf{span}}_{M}(I)^{1}\vartriangle P^{1} $ and therefore $ N^{*} $-spans
$ (I\vartriangle P) \cap E_1 $, i.e. $ I\vartriangle P $ is dually safe. It means that $ (I\vartriangle P)\cap E_1 $ is independent in $
N.\mathsf{span}_{M}(I\vartriangle P)^{1} $.
Thus uniting $ (I\vartriangle P)\cap E_0\in \mathcal{I}_N $
with $ (I\vartriangle P)\cap E_1 $ preserves $ N
$-independence, i.e. $ I\vartriangle P\in \mathcal{I}_N $.
\end{proof}
\begin{lem}\label{l: aug extend}
If $ I $ is a nice dually safe feasible set and $ P=\{ x_1,\dots, x_{2n+1} \} $ is an augmenting path for $ I $, then
$ I \vartriangle P $ can be extended to a nice dually safe feasible set.
\end{lem}
\begin{proof}
Let $ M':= M/(I\vartriangle P)$ and $ N':=N/(I\vartriangle P) $. By Lemma \ref{l: enoughAddB} it is enough to show that
$ B(M', N', W)\neq \varnothing $ for $
W:=W(M', N') $. Statements \ref{item: A} and \ref{item: B} of Proposition \ref{p: augpath} ensure that the elements of $ L:=
\{
x_1, x_3,\dots,
x_{2n-1} \}\cap E_0 $ are common
loops of $ M'$ and $ N' $. Statement \ref{item: C} tells us that $ Y_0:=\overset{\circ}{\mathsf{span}}_{M}(I)^{1}
$ and $
Y_1:=\overset{\circ}{\mathsf{span}}_{M}(I)^{1}\vartriangle P^{1} $ have the same $ N^{*}
$-span, furthermore, $ Y_1 $ is $ N^{*} $-independent, i.e. $ r(N.Y_1)=0 $. Note that $ Y_1 $ consists of $ M' $-loops by
\ref{item: A}.
Thus by applying Observation \ref{obs: common loops remove} with $ W $ and $ L $ and then Lemma \ref{l: wave modify}
with $ W\setminus L $ and $ Y_1 $ we can conclude that $ W\setminus (L\cup Y_1) $ is an
$ (M'-Y_1, N'-Y_1) $-wave, furthermore,
\[ B(M',N', W)=B(M'-Y_1, N'-Y_1, W\setminus (L\cup Y_1)). \]
The sets $X_0:= I+x_{2n+1} $ and $X_1:= I\vartriangle P $ have the same $ M $-span (see
Proposition \ref{p: augpath}/ \ref{item: A}). Recall
that $ M'=M/X_1 $ and
$
N'=N/X_1 $ by definition. Hence Lemma \ref{l: minorsChanged}
ensures that $ W\setminus (Y_1\cup L) $ is also an $ (M/X_0-Y_0, N/X_0-Y_0) $-wave with
\[ B(M/X_1-Y_1, N/X_1-Y_1, W\setminus (Y_1\cup L))=B(M/X_0-Y_0, N/X_0-Y_0, W\setminus (Y_1\cup L)). \]
We have $ \mathsf{cond}^{+}(M/I, N/I) $ because $ I $ is a nice feasible set by assumption. Then by
Observation \ref{obs: cond+ loop delete}
$ \mathsf{cond}^{+}(M/I-Y_0, N/I-Y_0) $ also holds. Applying Lemma \ref{one more edge} with $ M/I-Y_0,\ N/I-Y_0,\
W\setminus(Y_1\cup L) $ and $ x_{2n+1}$ yields
$ B(M/X_0-Y_0, N/X_0-Y_0, W\setminus (Y_1\cup L))\neq \varnothing $, which completes the proof.
\end{proof}
\begin{lem}\label{l: arc remain lemma}
If $ P=\{ x_1, \dots, x_{2n+1} \} $ is an augmenting path for $ I $ which contains neither $x$ nor any of its out-neighbours in $
D(I) $, then $xy\in D(I) $ implies $ xy \in D(I \vartriangle P) $.
\end{lem}
\begin{proof}
Suppose that $ xy\in D(I) $. First we assume that $ x\notin I $. Then the set of the out-neighbours of $ x $ is
$ C_M(x,I)-x $. By
assumption $ P\cap C_M(x,I)=\varnothing $ and
therefore $C_M(x,I)-x\subseteq I \vartriangle P$, thus $ C_M(x,I)=C_M(x,I \vartriangle P) $. This means by definition that $ x $
has the same out-neighbours in
$ D(I) $ and $ D(I \vartriangle P) $.
If $ x\in I^{1} $, then we can argue similarly. The set of the out-neighbours of $ x $ in $ D(I) $ is
$ C_{N^{*}}(x,\overset{\circ}{\mathsf{span}}_{M}(I)^{1} ) -x $. By
assumption $ P\cap C_{N^{*}}(x,\overset{\circ}{\mathsf{span}}_{M}(I)^{1} )=\varnothing $ and
therefore $ C_{N^{*}}(x,\overset{\circ}{\mathsf{span}}_{M}(I)^{1} ) \cap (I \vartriangle P)=\{ x\} $. Since $
\mathsf{span}^{1}_{M}(I\vartriangle P) \supseteq
\mathsf{span}^{1}_{M}(I)$ because of Proposition \ref{p: augpath}/\ref{item: A}, we also have $
C_{N^{*}}(x,\overset{\circ}{\mathsf{span}}_{M}(I)^{1} )\subseteq
\mathsf{span}^{1}_{M}(I\vartriangle P) $. By combining these we conclude
\[ C_{N^{*}}(x,\overset{\circ}{\mathsf{span}}_{M}(I\vartriangle P)^{1} )
=C_{N^{*}}(x,\overset{\circ}{\mathsf{span}}_{M}(I)^{1} ). \] This means
by definition that $ x $ has the same out-neighbours in $ D(I) $ and $ D(I \vartriangle P) $.
We turn to the case where $ x\in I^{0} $. By definition $ C_N(y,I) $ is well-defined and
contains $ x $, in particular $ y\in E_0 $.
For $ k\leq n $, let us denote
$ I+x_1-x_2+x_3-\hdots -x_{2k}+x_{2k+1} $ by $ I_k $. Note that
$ {I_n=I \vartriangle P} $. We show by induction on $ k $ that $ I_k $ is $ N $-independent and
$ {x\in C_N(y, I_k)} $. Since
$ I+x_1 $ is $ N $-independent by definition and $ x_1\neq y $ because $ y\notin P$ by assumption, we obtain $
C_N(y,I)=C_N(y,I_0) $, thus for $ k=0 $ it holds. Suppose that $ n>0 $ and
we already know the statement for some $ k<n $. We have $ C_N(x_{2k+3},I_k)=C_N(x_{2k+3},I)\ni x_{2k+2} $ because
there is no ``jumping arc'' in the augmenting path by property \ref{item: no jumping}. It follows via the $ N $-independence of
$ I_k $ that $ I_{k+1} $ is also $ N
$-independent.
If $ x_{2k+2}\notin C_N(y, I_k) $ then $ C_N(y, I_k)=C_N(y, I_{k+1}) $
and the induction step is done. Suppose that $ x_{2k+2}\in C_N(y, I_k) $. Then $x_{2k+2}, x_{2k+3}\in E_0 $, moreover,
$x\notin C_N(x_{2k+3},I) $ since
otherwise $P$ would contain the out-neighbour $x_{2k+3} $ of $ x $ in $ D(I) $. We apply circuit elimination (Claim
\ref{Circuit elim})
with $C= C_N(y, I_k),\ e=x,\ X=\{ x_{2k+2} \},\ C_{x_{2k+2}}=C_N(x_{2k+3},I_k) $. The resulting circuit $ C'\ni x $ can
have at most one element outside of $ I_{k+1} $, namely $ y $. Since $ I_{k+1} $ is $ N $-independent, there must be at least one
such element and therefore $ C'=C_N(y,I_{k+1}) $.
\end{proof}
\begin{obs}\label{arc remain fact}
If $ xy \in D(I) $ and $ J\supseteq I $ is a dually safe feasible set with
$ \{ x,y \}\cap J=\{ x, y \}\cap I $, then $ xy \in D(J) $ (the same circuit is the witness).
\end{obs}
\section{Proof of the key-lemma}\label{s: proof of key-lemma}
\keylemma*
\begin{proof}
It is enough to build a sequence $(I_n)$ of nice dually safe feasible sets such that
$ (\mathsf{span}_N(I_n)\cap E_0) $ is an ascending sequence exhausting $ E_0 $. We fix a well-order $ \boldsymbol{\prec} $
of type $ \left|E_0\right| $ on $ E_0 $. Let $ I_0=\varnothing $, which is a nice dually safe feasible set by $
\mathsf{cond}^{+}(M,N) $.
Suppose that $ I_n $ is already defined. If there is no augmenting path for $ I_n $, then we let $ I_m:=I_n $ for $ m>n $.
Otherwise we pick an augmenting path $ P_n $ for $ I_n $ in such a way that its first element is as $ \prec $-small as possible.
Then we apply Proposition \ref{p: augpath} to extend $ I_n\vartriangle P_n $ to a nice dually safe feasible set, which we define to be $
I_{n+1} $. The recursion is done.
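For orientation, the finite analogue of one step of this recursion is the classical augmenting-path algorithm for matroid intersection. The following Python sketch (with hypothetical independence oracles \texttt{ind\_M} and \texttt{ind\_N} on a finite ground set) illustrates that finite procedure; it follows the textbook exchange digraph rather than the digraph $ D(I) $ of this paper, whose arcs are defined via circuits and cocircuits.
\begin{verbatim}
# Finite-case sketch: one augmentation step of matroid intersection
# given hypothetical independence oracles ind_M and ind_N; E and I
# are Python sets, I a common independent set.
from collections import deque

def augment(E, I, ind_M, ind_N):
    """Return a common independent set larger than I, or None."""
    I = set(I)
    X1 = {y for y in E - I if ind_M(I | {y})}   # possible path starts
    X2 = {y for y in E - I if ind_N(I | {y})}   # possible path ends
    def out_neighbours(v):
        if v in I:   # arcs v -> z realized by an M-exchange
            return {z for z in E - I if ind_M((I - {v}) | {z})}
        # arcs v -> x realized by an N-exchange
        return {x for x in I if ind_N((I - {x}) | {v})}
    # BFS finds a *shortest* X1 -> X2 path, which is what makes the
    # symmetric difference I ^ P a common independent set again.
    parent, queue = {y: None for y in X1}, deque(X1)
    while queue:
        v = queue.popleft()
        if v in X2:
            P = set()
            while v is not None:
                P.add(v)
                v = parent[v]
            return I ^ P
        for w in out_neighbours(v):
            if w not in parent:
                parent[w] = v
                queue.append(w)
    return None   # no augmenting path: |I| is maximum
\end{verbatim}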
Let $ \boldsymbol{X}:=E\setminus \bigcup_{n\in \mathbb{N}}\mathsf{span}_{N}(I_n) $ and for $ x\in X $, let $
\boldsymbol{E(x,n)} $ be the set of elements that are
reachable from $ x $ in $ D(I_n) $ by a directed path. We define $\boldsymbol{n_x} $ to be the smallest
natural number such that for every $ y\in E\setminus X $ with $ y \prec x $ we have $ y\in \mathsf{span}_N(I_{n_x}) $.
We shall prove that \[\boldsymbol{W}:= \bigcup_{x\in X}\bigcup_{n\geq n_x}E(x,n) \] is a wave.
\begin{lem}\label{l: stabilazing stuff}
For every $ x\in X $ and $ \ell\geq m\geq n_x $,
\begin{enumerate}
\item\label{item stabilize} $ I_m\cap E(x,m)=I_{\ell}\cap E(x,m) $,
\item\label{item same circuit} $ C_M(y,I_\ell)=C_M(y,I_{m})\subseteq E(x,m)$ for every $y\in E(x,m)\setminus I_m $,
\item \label{item same cocircuit} $C_{N^{*}}(y,\overset{\circ}{\mathsf{span}}_{M}(I_\ell)^{1})=
C_{N^{*}}(y,\overset{\circ}{\mathsf{span}}_{M}(I_{m})^{1}) $ for every $y\in E(x,m)\cap I_m^{1} $,
\item\label{item subdigraph} If $ yz\in D(I_m)$ with $y,z\in E(x,m)$, then $yz\in D(I_\ell) $,
\item\label{item increasing} $ E(x,m)\subseteq E(x,\ell) $.
\end{enumerate}
\end{lem}
\begin{proof}
Suppose that there is an $ n\geq n_x $ such that we already know the statement whenever $m,\ell \leq n $. For the induction step
it is enough to
show that the claim holds for $ n $ and $ n+1 $. We may assume that $ P_n $ exists, i.e. $ I_n\neq I_{n+1} $, since otherwise
there is nothing to prove.
\begin{prop}\label{p: nx no aug}
$ P_n \cap E(x,n)=\varnothing $.
\end{prop}
\begin{proof}
A common element of $ P_n $ and $ E(x,n) $ would show that there is also an augmenting path in $ D(I_n) $ starting at $ x $
which is impossible since $ x\in X $ and $ n\geq n_x $.
\end{proof}
\begin{cor}
$ I_{n}\cap E(x,n)=(I_n \vartriangle P_n)\cap E(x,n) $.
\end{cor}
\begin{prop}
$ (I_n \vartriangle P_n)\cap E(x,n)=I_{n+1}\cap E(x,n) $.
\end{prop}
\begin{proof}
If $y\in E(x,n)\setminus I_n $, then its out-neighbours in $ D(I_n) $ are in $ E(x,n)\cap I_n $ and span $ y $ in $ M $.
Thus $I_{n+1}\setminus (I_n \vartriangle P_n)$ cannot contain any edge from $ E(x,n) $.
\end{proof}
\begin{cor}\label{circ subset}
$ I_n\cap E(x,n)=I_{n+1}\cap E(x,n) $ and for every $y\in E(x,n)\setminus I_n $ we have
$ C_M(y,I_n)=C_M(y,I_{n+1})\subseteq E(x,n)$.
\end{cor}
\begin{cor}\label{cor: cocircuit stabil}
For $y\in E(x,n)\cap I_n^{1} $, $ y $ has the same out-neighbours in $ D(I_n) $ and in $ D(I_{n+1}) $ and they span $ y $ in
$ N^{*} $. More concretely:
\[ C_{N^{*}}(y,\overset{\circ}{\mathsf{span}}_{M}(I_{n+1})^{1})=
C_{N^{*}}(y,\overset{\circ}{\mathsf{span}}_{M}(I_{n})^{1}). \]
\end{cor}
Finally, for $ y\in E(x,n)$, $ P_n $ does not contain $ y $ or any of its out-neighbours with respect
to $ D(I_n) $ because $ P_n\cap E(x,n)=\varnothing $. Hence by applying Lemma \ref{l: arc remain lemma} with $P_n, y$ and
$I_n $ (and then Observation \ref{arc remain fact}) we may conclude that $ yz\in D(I_{n+1}) $ whenever $yz\in D(I_n) $.
This implies $ E(x,n)\subseteq E(x,n+1) $ since reachability from $ x $ is witnessed by the same directed paths.
\end{proof}
Let \[ B:=\bigcup_{m\in \mathbb{N}} \bigcap_{n>m}W\cap I_n. \] We are going to show that $ B $ witnesses that $ W $ is a
wave. Since $ M $ is
finitary the $ M $-independence of the sets $ I_n\cap W $ implies the $ M $-independence of $ B $. Similarly $ B^{0} $ is
independent in $ N $ because $ N \upharpoonright E_0 $ is finitary. Statements (\ref{item stabilize}) and
(\ref{item same circuit}) of Lemma \ref{l: stabilazing stuff} ensure $W\subseteq
\mathsf{span}_{M}(B) $, while (\ref{item stabilize}) and (\ref{item same cocircuit}) guarantee $ B^{1} \subseteq
\mathsf{span}_{N^{*}}(W\setminus B) $. The latter means that $ B^{1} $ is independent in $ N.(W^{1}\setminus B^{1}) $.
Suppose for
a contradiction that $ B^{0} $ is not independent in $ N.W^{0} $. Then there exists an $ N $-circuit $ C\subseteq E_0 $ that
meets $ B $
but avoids $ W\setminus B $. We already know that $ B^{0} $ is $ N $-independent, thus $ C $ is not contained in $ B $. Hence
$C\setminus B= C\setminus W\neq\varnothing $. Let us pick some $ e\in C\cap B$.
Since $ C $ is finite, for every large enough $ n $ we have $ C\cap B\subseteq C\cap I_n $ and $ I_n $ spans $ C $ in $ N $ (for
the latter we use
$ X\subseteq W\setminus B $). Applying Corollary \ref{cor: Noutgoing arc}
with $I_n, N, C $ and $ e $ shows that $e\in C_N(f,I_n) $
for some $ f\in C\setminus W $ whenever $ n $ is large enough.
Then by (\ref{item increasing}) of Lemma \ref{l: stabilazing stuff} we can take an $ x\in X $ and an $ n\geq n_x $ such that $ e\in
E(x,n) \cap C_N(f,I_n) $
for some $ f\in C\setminus W $. Then by definition $ f \in E(x,n) \subseteq W $, which contradicts $ f\in C\setminus W $. Thus
$ B^{0} $ is indeed
independent in $ N.W^{0} $, and hence $ B $ is independent in $ N.W $ as well; therefore $ W $ is a wave.
By $ \mathsf{cond}^{+}(M,N) $ we know that $ W $ consists of $ M $-loops and $ r(N.W)=0 $. This implies $ r(N.X)=0 $
because $ X\subseteq W $ by definition. This means $ X\subseteq \mathsf{span}_N(E\setminus X) $. Since $X\subseteq E_0 $
and $ E_0 $ is the union of the finitary $ N $-components, $ X\subseteq \mathsf{span}_N(E_0\setminus X) $ follows. Thus for
every $ x\in X $
there is a finite $ N $-circuit $ C\subseteq E_0 $ with $ C\cap X=\{ x \} $. The sequence $ (\mathsf{span}_N(I_n)\cap E_0 )$ is
ascending by construction and exhausts $ E_0\setminus X $ by the definition of $ X $. As $ C-x\subseteq E_0\setminus X $ is
finite, this implies that for every large enough $ n $, $ I_n $
spans $ C-x $ in $ N $ and hence spans $ x $ itself as well. But then by the definition of $ X $, we must have
$ X=\varnothing $. Therefore $ (\mathsf{span}_N(I_n)\cap E_0) $ exhausts $ E_0 $ and the proof of
Lemma \ref{l: key-lemma} is complete.
\end{proof}
\section{An application: Degree-constrained orientations of infinite graphs}\label{s: application}
Matroid intersection is a powerful tool in graph theory and in combinatorial optimization. Our generalization,
Theorem \ref{t: main result}, extends the scope of its applicability to infinite
graphs. To illustrate this, let us consider a classical problem in combinatorial optimization. A graph is given with
degree-constraints and we are looking for either an orientation
that satisfies them or a certain substructure witnessing the non-existence of such an orientation (see \cite{hakimi1965degrees}).
Let a (possibly infinite) graph $ G=(V,E) $ be fixed throughout this section. We denote the set of edges incident with $ v $ by $
\boldsymbol{\delta(v)} $.
Let $ o: V\rightarrow \mathbb{Z}$ with $ \left|o(v)\right|\leq d(v) $ for $ v\in V $, which we will treat as
`lower bounds' for in-degrees in orientations in the following sense. We say that the orientation $ D $ of $ G $ is
\emph{above} $ o $ at $ v $ if either $ o(v)\geq 0 $ and $ v $ has at least $ o(v) $ ingoing edges in $ D $, or $ o(v)< 0 $
and all but at most $ -o(v) $ edges in $ \delta(v) $
are oriented towards $ v $ by $ D $. We say \emph{strictly above} if we forbid equality in the definition. Orientation $ D $ is
above $ o $ if it is above $ o $ at every $ v\in V $. We say that $ D $ is (strictly) below $ o $ at $ v $ if the reverse of $ D $ is
(strictly) above $ -o $ at $ v $. Finally, $ D $ is (strictly) below $ o $ if the reverse of $ D $ is (strictly) above $ -o $.
\begin{thm}\label{t: indegree demand}
Let $ G=(V,E) $ be a countable graph and let $ o: V\rightarrow \mathbb{Z} $. If there is no
orientation of $ G $ above $ o $, then there is a $ V'\subseteq V $ and an orientation $ D $ of $ G $ such that
\begin{itemize}
\item $ D $ is below $ o $ at every $ v\in V' $;
\item There exists a $ v\in V' $ such that $ D $ is strictly below $ o $ at $ v $;
\item Every edge between $ V' $ and $ V\setminus V' $ is oriented by $ D $ towards $ V' $.
\end{itemize}
\end{thm}
\begin{proof}
Without loss of generality we may assume that $ G $ is loopless. We define the digraph $
\overset{\leftrightarrow}{G}=(V, A) $ by
replacing each $ e\in E
$ by back and forth arcs $ a_{e}, a'_e $ between the
end-vertices of $ e $. Let $ \delta^{+}(v) $ be the set of the ingoing arcs of $ v $ in $ \overset{\leftrightarrow}{G} $. For
$ v\in V $, let $ M_v $ be $
U_{\delta^{+}(v), o(v)} $ if $
o(v)\geq 0 $ and $
U_{\delta^{+}(v),-o(v)}^{*} $ if $ o(v)<0 $. We define $ N_e $ to be $ U_{\{a_e, a'_e \}, 1} $ for $ e\in E $. Let \[
M:=\bigoplus_{v\in V}M_v\text{ and }N:=\bigoplus_{e\in E} N_e. \] Since $ M, N\in
(\mathfrak{F}\oplus \mathfrak{F}^{*})(A) $, Theorem \ref{t: main result} guarantees that there exists an $ I\in
\mathcal{I}_M\cap
\mathcal{I}_N $ and a partition $ A=A_M\sqcup A_N $ such that $I_M:=I\cap A_M $ spans $ A_M $ in $ M $ and $
I_N:=I\cap A_N $ spans $ A_N $ in $ N $. Note that $ \left|I\cap \{ a_e, a'_e \} \right|\leq 1$ by the $ N $-independence of $ I
$. We define $ D $ by taking the orientation $ a_e $ of $ e $ if $ a_e\in I $ and $ a'_e $ otherwise. Let $ V'' $ consist of those
vertices $ v $ for which $ I_M $ contains a base of $ M_v $ and let $ V':=V\setminus V'' $.
We claim that whenever an edge $ e\in E $ is incident with some $ v\in V' $, then $ I $ contains one of $ a_e $ and $ a'_e $.
Indeed, if $ I $ contains none of them, then they cannot be $ N $-spanned by $ I_N $, thus they are $ M $-spanned by $ I_M $, which
implies that both end-vertices of $ e $ belong to $ V'' $, a contradiction. Thus if $ e $ is incident with some $ v\in V' $, then all ingoing
arcs of $ v $ in $ D $ must be in $ I $. Then the $ M $-independence of $ I $ ensures that $ D $ is below $ o $ at $ v $.
Suppose for a contradiction that $ a_e $ is an arc in $ D $ from a $ v\in V' $ to a $ w\in V'' $. As we have already shown, we must
have $ a_e\in I $. By $ w\in V''
$ we know that $ I_M $ contains a base of
$ M_w $, thus $ a_e\notin I_N $ by the $ M $-independence of $ I $ and therefore $ a_e\in I_M $. But then $ a'_e $ cannot be
spanned by $ I_N $ in $ N $, hence $ a'_e\in \mathsf{span}_M(I_M) $, which means that $ I_M $ contains a base of $ M_v $,
contradicting $ v\in V' $. We conclude that all the edges between $ V'' $ and $ V' $ are oriented towards $ V' $ in $ D $. By the
definition of $ V'' $, $ D $ is above $ o $ at every $ w\in V'' $. If $ D $ is also above $ o $ at every $ v\in V' $, then $ D $
is above $ o $. Otherwise there exists a $v\in V' $ such that $ D $ is strictly below $ o $ at $ v $, but then $ V' $ is as desired.
\end{proof}
An easy calculation shows that if $ G $ is finite, then the existence of a $ V' $ described in Theorem \ref{t: indegree demand}
implies the non-existence of an orientation above $ o $. Indeed, the total demand by $ o $ on $ V' $ is more than the number of all
the edges that are incident with a vertex in $ V' $. That is why for finite $ G $, ``if'' can be replaced by ``if and only if'' in
Theorem \ref{t: indegree demand}. For an infinite $ G $ this is not always the case. Indeed, let $ G $ be the one-way infinite path $
v_0, v_1,\dots $ and let $ o(v_n)=1 $ for $ n\in \mathbb{N} $. Then orienting edge $ \{ v_{n}, v_{n+1} \} $ towards $ v_{n+1}
$ for each $ n\in \mathbb{N} $ and taking $ V':=V $ satisfies the three points in Theorem \ref{t: indegree demand}. However,
the opposite orientation is above $ o $.
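For a finite $ G $ the existence of an orientation above $ o $ can also be tested directly: each bound reduces to an in-degree lower bound, and these can be met if and only if an auxiliary bipartite flow is saturated. The following Python sketch (assuming the \texttt{networkx} library) illustrates this finite check; it is a standard flow reduction, independent of the matroid machinery above.
\begin{verbatim}
# Finite-graph sanity check for "is there an orientation above o?".
# Each vertex v needs in-degree >= need(v), where need(v) = o(v) if
# o(v) >= 0 and need(v) = deg(v) + o(v) otherwise (see the text).
# Reduction: source -> edge (cap 1), edge -> endpoint (cap 1),
# vertex -> sink (cap need(v)); feasible iff max flow = sum need(v).
import networkx as nx

def orientation_above_exists(G, o):
    """G: finite loopless nx.Graph; o: dict with |o[v]| <= deg(v)."""
    need = {v: o[v] if o[v] >= 0 else G.degree(v) + o[v] for v in G}
    H = nx.DiGraph()
    H.add_node('s'); H.add_node('t')
    for k, (u, v) in enumerate(G.edges()):
        e = ('edge', k)
        H.add_edge('s', e, capacity=1)
        H.add_edge(e, ('vx', u), capacity=1)
        H.add_edge(e, ('vx', v), capacity=1)
    for v in G:
        H.add_edge(('vx', v), 't', capacity=max(need[v], 0))
    demand = sum(max(need[v], 0) for v in G)
    return nx.maximum_flow_value(H, 's', 't') == demand
\end{verbatim}
An edge carrying flow to $(\mathrm{vx},v)$ is oriented towards $ v $; the remaining edges may be oriented arbitrarily.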
A natural next step is to introduce upper bounds $ p:V\rightarrow \mathbb{Z} $ besides the lower bounds
$ o:V\rightarrow \mathbb{Z} $. To avoid trivial obstructions we assume that $ o $ and $ p $ are \emph{consistent}, which means
that for every $ v\in V $ there is an orientation $ D_v $
which is above $ o $ and below $ p $ at $ v $.
\begin{quest}
Let $ G $ be a countable graph and let $ o, p: V\rightarrow \mathbb{Z} $ be a consistent pair of bounding functions.
Suppose that there are orientations $ D_o $ and $ D_p $ that are above $ o $ and below $ p $ respectively. Is there always a
single
orientation $ D $ which is above $ o $ and bellow $ p $?
\end{quest}
The positive answer for finite graphs is not too hard to prove; as far as we know, its first appearance in the literature is
\cite{frank1978orient}.
\begin{bibdiv}
\begin{biblist}
\bib{aharoni1984konig}{article}{
author={Aharoni, Ron},
title={K{\"o}nig's duality theorem for infinite bipartite graphs},
date={1984},
journal={Journal of the London Mathematical Society},
volume={2},
number={1},
pages={1\ndash 12},
}
\bib{aharoni2009menger}{article}{
author={Aharoni, Ron},
author={Berger, Eli},
title={Menger’s theorem for infinite graphs},
date={2009},
journal={Inventiones mathematicae},
volume={176},
number={1},
pages={1\ndash 62},
}
\bib{aharoni1983general}{article}{
author={Aharoni, Ron},
author={Nash-Williams, Crispin},
author={Shelah, Saharon},
title={A general criterion for the existence of transversals},
date={1983},
journal={Proceedings of the London Mathematical Society},
volume={3},
number={1},
pages={43\ndash 68},
}
\bib{aharoni1984another}{article}{
author={Aharoni, Ron},
author={Nash-Williams, Crispin},
author={Shelah, Saharon},
title={Another form of a criterion for the existence of transversals},
date={1984},
journal={Journal of the London Mathematical Society},
volume={2},
number={2},
pages={193\ndash 203},
}
\bib{aharoni1998intersection}{article}{
author={Aharoni, Ron},
author={Ziv, Ran},
title={The intersection of two infinite matroids},
date={1998},
journal={Journal of the London Mathematical Society},
volume={58},
number={03},
pages={513\ndash 525},
}
\bib{aigner2018intersection}{article}{
author={Aigner-Horev, Elad},
author={Carmesin, Johannes},
author={Fr{\"o}hlich, Jan-Oliver},
title={On the intersection of infinite matroids},
date={2018},
journal={Discrete Mathematics},
volume={341},
number={6},
pages={1582\ndash 1596},
}
\bib{nathanhabil}{thesis}{
author={Bowler, Nathan},
title={Infinite matroids},
type={Habilitation Thesis},
date={2014},
note={\url{https://www.math.uni-hamburg.de/spag/dm/papers/Bowler\_Habil.pdf}},
}
\bib{bowler2015matroid}{article}{
author={Bowler, Nathan},
author={Carmesin, Johannes},
title={Matroid intersection, base packing and base covering for infinite
matroids},
date={2015},
journal={Combinatorica},
volume={35},
number={2},
pages={153\ndash 180},
}
\bib{bowler2016self}{article}{
author={Bowler, Nathan},
author={Geschke, Stefan},
title={Self-dual uniform matroids on infinite sets},
date={2016},
journal={Proceedings of the American Mathematical Society},
volume={144},
number={2},
pages={459\ndash 471},
}
\bib{bruhn2013axioms}{article}{
author={Bruhn, Henning},
author={Diestel, Reinhard},
author={Kriesell, Matthias},
author={Pendavingh, Rudi},
author={Wollan, Paul},
title={Axioms for infinite matroids},
date={2013},
journal={Advances in Mathematics},
volume={239},
pages={18\ndash 46},
}
\bib{edmonds1968matroid}{article}{
author={Edmonds, Jack},
title={Matroid partition},
date={1968},
journal={Mathematics of the Decision Sciences},
volume={11},
pages={335\ndash 345},
}
\bib{edmonds2003submodular}{incollection}{
author={Edmonds, Jack},
title={Submodular functions, matroids, and certain polyhedra},
date={2003},
booktitle={Combinatorial optimization—eureka, you shrink!},
publisher={Springer},
pages={11\ndash 26},
}
\bib{erde2019base}{article}{
author={Erde, Joshua},
author={Gollin, J.~Pascal},
author={Jo{\'o}, Attila},
author={Knappe, Paul},
author={Pitz, Max},
title={Base partition for mixed families of finitary and cofinitary
matroids},
date={2021},
ISSN={1439-6912},
journal={Combinatorica},
volume={41},
number={1},
pages={31\ndash 52},
url={https://doi.org/10.1007/s00493-020-4422-4},
}
\bib{frank1978orient}{article}{
author={Frank, Andr{\'a}s},
title={How to orient the edges of a graph?},
date={1978},
journal={Combinatorics},
pages={353\ndash 364},
}
\bib{frank2011connections}{book}{
author={Frank, Andr{\'a}s},
title={Connections in combinatorial optimization},
publisher={OUP Oxford},
date={2011},
volume={38},
}
\bib{ghaderi2017}{thesis}{
author={Ghaderi, Shadisadat},
title={On the matroid intersection conjecture},
type={PhD. Thesis},
date={2017},
}
\bib{hakimi1965degrees}{article}{
author={Hakimi, S~Louis},
title={On the degrees of the vertices of a directed graph},
date={1965},
journal={Journal of the Franklin Institute},
volume={279},
number={4},
pages={290\ndash 308},
}
\bib{higgs1969equicardinality}{article}{
author={Higgs, DA},
title={Equicardinality of bases in {B}-matroids},
date={1969},
journal={Can. Math. Bull},
volume={12},
pages={861\ndash 862},
}
\bib{higgs1969matroids}{inproceedings}{
author={Higgs, Denis~Arthur},
title={Matroids and duality},
date={1969},
booktitle={Colloquium mathematicum},
volume={2},
pages={215\ndash 220},
}
\bib{joó2020intersection}{misc}{
author={Joó, Attila},
title={Intersection of a partitional and a general infinite matroid},
how={arXiv},
date={2020},
note={\url{https://arxiv.org/abs/2009.07205}},
}
\bib{joo2020MIC}{article}{
author={Jo{\'o}, Attila},
title={Proof of {N}ash-{W}illiams' intersection conjecture for countable
matroids},
date={2021},
ISSN={0001-8708},
journal={Advances in Mathematics},
volume={380},
pages={107608},
}
\bib{oxley1978infinite}{article}{
author={Oxley, James},
title={Infinite matroids},
date={1978},
journal={Proc. London Math. Soc},
volume={37},
number={3},
pages={259\ndash 272},
}
\bib{oxley1992infinite}{article}{
author={Oxley, James},
title={Infinite matroids},
date={1992},
journal={Matroid applications},
volume={40},
pages={73\ndash 90},
}
\bib{rado1966abstract}{inproceedings}{
author={Rado, Richard},
title={Abstract linear dependence},
organization={Institute of Mathematics Polish Academy of Sciences},
date={1966},
booktitle={Colloquium mathematicum},
volume={14},
pages={257\ndash 264},
}
\end{biblist}
\end{bibdiv}
\end{document}
\section{Outlook: Multi-Matroid Intersection by Aharoni}
For $ k\geq 2 $, a $k $\emph{-partite hypergraph} is a hypergraph $ H=(V_1,\dots, V_k; E) $ where the sets $ V_i $ are pairwise disjoint
and each edge
$ e\in E $ is a set of size $ k $ containing one element from each $ V_i $. A set of pairwise disjoint edges is called a
\emph{matching} and
a set of vertices meeting every edge is a \emph{vertex cover}. The sizes of a largest matching and of a smallest vertex cover are denoted
by $ \boldsymbol{\nu(H)} $ and $ \boldsymbol{\tau(H)} $, respectively.
\begin{conj}[Ryser’s Conjecture]\label{conj: Rys}
For every finite $ k $-partite hypergraph $ H $, $ \tau(H)\leq (k-1)\nu(H) $.
\end{conj}
Aharoni settled the question affirmatively for $ k=3 $ in \cite{aharoni2001ryser} and conjectured the following structural
generalization, which is equally
meaningful for infinite $ k $-partite hypergraphs:
\begin{conj}\label{conj: str Rys}
For every hypergraph $ H=(V_1,\dots, V_k; E) $ there are matchings $ I_1, \dots, I_{k-1} $ such that one can choose
for every $ 1\leq i\leq k-1 $ and $ e\in I_i $ a vertex $ v_{i,e}\in e $ such that the (multi)set
$ \{v_{i,e}: 1\leq i\leq k-1 \wedge e\in I_i \} $ is a vertex cover.
\end{conj}
\noindent For a finite $ H $ the Pigeonhole principle ensures that one of the matchings $ I_i $ must have at least $
\frac{\tau(H)}{k-1} $ elements,
hence this is indeed a generalization of Ryser's Conjecture \ref{conj: Rys}.
The matchings of a $ k $-partite hypergraph can be considered as common
independent sets of the partition matroids $ M_1,\dots, M_k $ on $ E $ where $I\in \mathcal{I}_{M_i} $ iff no distinct $ e,f\in I
$ have a common vertex in $ V_i $. This motivated the following matroidal generalization:
\begin{conj}[Aharoni and Berger]
Assume that $ M_1,\dots, M_k $ are matroids on the finite edge
set $ E $ where $ 2\leq k\in \mathbb{N} $. Then they
admit a common independent set of size at least
\[ \frac{1}{k-1} \min \left\lbrace \sum_{i=1}^{k}r_i(X_i): E= \bigsqcup_{i=1}^{k} X_i \right\rbrace. \]
\end{conj}
The case $ k=3 $ was solved by Aharoni and Berger in \cite{aharoni2006intersection}.
\begin{conj}[Multi-Matroid Intersection Conjecture by Aharoni]
Assume that $ M_1,\dots, M_k $ are arbitrary/finitary/finite matroids on $ E $ where $ 2\leq k\in \mathbb{N} $. Then they
admit common independent sets $ I_1,\dots, I_{k-1} $ such that for
each $ 1\leq i\leq k-1 $ there is a $ k $-partition $ I_i=\bigsqcup_{j=1}^{k}I_i^{j} $ for which
\[ \bigcup_{j=1}^{k}\mathsf{span}_{M_j}\left( \bigcup_{i=1}^{k-1}I_{i}^{j}\right) =E. \]
\end{conj}
For $ k=2 $, we get the structural formulation of the Matroid Intersection Theorem by Edmonds if we consider finite matroids,
the Matroid Intersection Conjecture by Nash-Williams if the matroids are assumed to be finitary and the Generalized Matroid
Intersection Conjecture if the matroids are arbitrary. | {'timestamp': '2021-03-30T02:14:36', 'yymm': '2103', 'arxiv_id': '2103.14881', 'language': 'en', 'url': 'https://arxiv.org/abs/2103.14881'} |
\section{\bf{I.} Sample structure for the record-quality two-dimensional electron systems}
\begin{figure*}[h]
\centering
\includegraphics[width=.80\textwidth]{Supplement_R1_Page_2.png}
\caption{\label{figS1}Sample structure for the record-quality GaAs two-dimensional electron systems whose data are shown in the main text. The spacer layer thickness is varied to obtain the desired sample density, and the quantum well width is chosen accordingly to prevent second-subband occupation.}
\end{figure*}
All record-quality samples follow the structure shown in Fig. S1. Standard modulation-doped structures are $\delta$-doped in AlGaAs while the doping-well structures (DWSs) use the doping scheme described in the right panels of the figure. A 12\%/24\% stepped-barrier structure is implemented to reduce the Al composition of the barrier directly in contact with the main quantum well. The thickness of each of the barrier layers is controlled so that no parallel channel forms at the 12\%/24\% barrier interface, and the total spacer thickness is varied to attain the desired sample density. The well width of the quantum well is varied for each sample so that there is no second-subband occupation. Specific structural parameters are summarized in Table S1 for some representative samples.
\newpage
\begin{table}[h]
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\hline
$n$ ($\times10^{11}$ /cm$^2$)& $\mu$ ($\times10^6$ cm$^{2}$/Vs) & $s_1$ (nm) & $s_2$ (nm) & $w$ (nm) \\
\hline
\hline
2.06 & 44.0 & 68.2 & 60.0 & 34.0 \\
\hline
1.15 & 41.5 & 114 & 100 & 45.3 \\
\hline
1.00 & 36.0 & 136 & 120 & 50.0 \\
\hline
0.71 & 29.0 & 195 & 171 & 58.5 \\
\hline
\end{tabular}
\caption{Structural parameters of some representative record-quality doping-well-structure samples whose data are shown in the main text; for definitions of $s_1$, $s_2$, and $w$, see the sample structure in Fig. S1.} \label{tab:table1}
\end{center}
\end{table}
\newpage
\section{\bf{II.} Measurement of energy gap for the $\nu=5/2$ fractional quantum Hall state}
\begin{figure}[h]
\centering
\includegraphics[width=.50\textwidth]{Supplement_R3_Page_2.png}
\caption{\label{figS2} Arrhenius plot of resistance at $\nu=5/2$. The sample density is $n\simeq1.0\times10^{11}$/cm$^2$. }
\end{figure}
Magnetoresistance traces were taken at a sweep rate of 0.1 T/hour in the vicinity of the $\nu=5/2$ fractional quantum Hall state to precisely determine the magnetic field position of the $R_{xx}$ minimum at base temperature. Then the energy gap $^{5/2}\Delta$ was measured by raising the temperature of the sample while monitoring the sample resistance at the magnetic field corresponding to $\nu=5/2$. The data are shown in the Arrhenius plot of Fig. S2. A line corresponding to $R_{xx}\sim e^{-{}^{5/2}\Delta/2k_BT}$ was fitted to the data plotted in Fig. S2 (shown in red), yielding an energy gap of $^{5/2}\Delta=820$ mK. Here $k_B$ is the Boltzmann constant.
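For reference, the gap extraction amounts to a straight-line fit of $\ln R_{xx}$ against $1/T$. A minimal Python sketch of such a fit follows; the data arrays are placeholders, not the measured values.
\begin{verbatim}
import numpy as np

# Placeholder (T, Rxx) pairs standing in for the measured data;
# Rxx ~ exp(-Delta/2kT) gives ln Rxx = const - (Delta/2k_B)(1/T),
# so the gap follows from the slope of ln Rxx versus 1/T.
T   = np.array([0.060, 0.075, 0.090, 0.110, 0.130])   # K
Rxx = np.array([1.2,   4.8,   13.5,  33.0,  61.0])    # arb. units

slope, intercept = np.polyfit(1.0 / T, np.log(Rxx), 1)
gap_mK = -2.0 * slope * 1e3   # slope = -Delta/(2 k_B); K -> mK
print("5/2 gap ~ %.0f mK" % gap_mK)
\end{verbatim}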
\newpage
\end{document} | {'timestamp': '2020-10-07T02:02:27', 'yymm': '2010', 'arxiv_id': '2010.02283', 'language': 'en', 'url': 'https://arxiv.org/abs/2010.02283'} |
\section{Introduction}
The introduction of a conducting surface into a region disturbs the electromagnetic field. As a general
measure of the disturbance we can calculate the mode sum
\begin{equation}
\label{modesum}
S(\Omega) = \sum_{\alpha} \left (e^{-\omega_{\alpha}/\Omega} - e^{-\overline{\omega}_{\alpha}/\Omega}\right )
\end{equation}
where $\overline \omega_{\alpha}$ are the mode frequencies before introduction of the surface \cite{note1}
and $\omega_{\alpha}$ is the resulting spectrum. For a general spectrum, the mode sum $S(\Omega)$ is divergent for large values of the parameter $\Omega$ (for example, for a massless scalar field, it diverges as $\Omega^{d-1}$, where $d$ is the dimensionality); however,
for the electromagnetic modes disturbed by a thin perfectly conducting cylindrical shell (of any cross-section), the mode sum per unit length is finite
\cite{BD}.
In this case (the subject of this paper) the quantities of particular interest are the
Kac number per unit length ${\cal K}/L$ (the change in the number of possible modes \cite{kac,BD})
and the Casimir energy per unit length ${\cal C}/L$ (the change in the zero point energy \cite{casimir} formally calculable from the sum
of $\frac {1}{2} (\omega_{\alpha} - \overline\omega_{\alpha})$ \cite{note2}). These can be extracted from $S$ by
an expansion in powers of $1/\Omega $:
\begin{equation}
\label{kacnumber}
S(\Omega)/L = {\cal K}/L - \frac {2}{\Omega} {\cal C}/L + ...
\end{equation}
From here the Kac number $\mathcal{K}$ can be recognized as the coefficient $B_{3/2}$ of the heat kernel expansion widely used in the Casimir calculations \cite{heat}.
For the case of a cylinder (of any cross section) the modes have the form
\begin{equation}
\label{cylindermodes}
\omega_{\alpha}(q) = \sqrt{q^2+ w_{\alpha}^2} ,
\end{equation}
where $q$ is the continuous wave vector of the modes in the axial direction and $w_{\alpha}$ are the cutoff frequencies for the modes. There are two types of modes, in which either the magnetic field or the electric field is purely transverse \cite{Jackson}. The cutoff frequencies $w_{\alpha}$ are determined by the
solutions of the two-dimensional Helmholtz problem for a scalar field representing the component of the electric or magnetic field parallel to the axis of the cylinder, with Dirichlet and Neumann boundary conditions for the two mode types. The mode sum (\ref{modesum}) is over both types of modes.
The sum over the cutoff frequencies (\ref{modesum}) can be done with the aid of a contour integral representation \cite{contour}. Let $\psi(w)$ be a function that is zero for each cutoff frequency, and let $\overline \psi(w)$ be the
corresponding function for the undisturbed space. Then
\begin{equation}
\label{contour}
\frac{S(\Omega)}{L} = \frac {1} {2 \pi i} \oint dw
\int_{- \infty}^{\infty} \frac {dq}{2 \pi} e^{-\sqrt{q^{2}+w^{2}}/\Omega} \frac{d}{dw}
\ln \frac{\psi (w)}{\overline \psi (w)}
\end{equation}
where the first integral is over a contour enclosing the real axis, and $L$ is the (infinite) length of
the cylinder. The advantage to this
representation is that the contour can be deformed to lie along the imaginary $w$ axis, where the
functions $\psi(w)$ and $\overline \psi(w)$ are slowly varying, instead of highly oscillatory. The integral over $q$ is
\begin{equation}
\int_{- \infty}^{\infty} e^{-\sqrt{q^2+w^2}/\Omega} dq = 2w K_{1}\left (\frac{w}{\Omega}\right )
\end{equation}
where $K_{n}$ is the modified Bessel function of order $n$. Using this and making the change in variables $w \rightarrow iy$ gives
\begin{eqnarray}
\label{contour2}
\frac{S(\Omega)}{L} &=& \frac {1} {2 \pi^2} \oint dy \
y K_{1}\left (\frac {iy}{\Omega}\right ) \frac {d}{dy}
\ln \frac{\psi (iy)}{\overline \psi (iy)}
\nonumber
\\&=&- \frac {1} {2 \pi^2 \Omega} \int_{-\infty}^{\infty} dy \
i y K_{0}\left (\frac {iy}{\Omega}\right )
\ln \frac{\psi (iy)}{\overline \psi (iy)}
\end{eqnarray}
In the second representation an integration by parts has been performed, and the contour integral has been converted to an integral over the $y$ axis.
In the cases to be considered, the function $\ln[\psi (iy)/\overline \psi (iy)]$
can be taken to be an even function of $y$. Then we can replace $i y K_{0}(iy/\Omega)$ by its even part, $(\pi/2) |y| J_{0}(y/\Omega)$ ($J_{n}$ is the Bessel function of order $n$), and reduce the range of integration to the positive $y$ axis, arriving at the representation
\begin{equation}
\label{contour3}
\frac{S(\Omega)}{L} = - \frac {1} {2 \pi \Omega} \int_{0}^{\infty} ydyJ_{0}\left (\frac{y}{\Omega}\right ) \ln \frac{\psi (iy)}{\overline \psi (iy)}
\end{equation}
The quantity $\ln[\psi (iy)/\overline \psi (iy)]$ becomes small for large $y$.
In general the leading term is of order $y^{-1}$; this determines the Kac number per unit length, as will now be shown.
Define
\begin{equation}
\label{kacdef}
\frac{{\cal K}}{L} = -\frac{1}{2\pi} \lim_{y \rightarrow \infty} y \ln \frac{\psi (iy)}{\overline \psi (iy)}
\end{equation}
and separate Eq.(\ref{contour3}) into two terms:
\begin{eqnarray}
\label{contour4}
\frac{S(\Omega)}{L} &=& \frac{{\cal K}}{L} \int_{0}^{\infty} \frac {dy}{\Omega} J_{0}\left (\frac{y}{\Omega}\right )\nonumber\\
&-& \frac {1}{\Omega} \int_{0}^{\infty} dy \
J_{0}\left (\frac{y}{\Omega}\right ) \left (\frac {y} {2 \pi} \ln \frac{\psi (iy)}{\overline \psi (iy)} + \frac{{\cal K}}{L}\right )
\end{eqnarray}
The first integral converges and is equal to unity, independent of the cutoff $\Omega$. The second integral is convergent without the Bessel function, so that we may take the limit $\Omega \rightarrow \infty$ in the integrand. Then according to Eq.(\ref{kacnumber}), the Casimir energy per unit length is
\begin{equation}
\label{casmir}
\frac{{\cal C}}{L} = \frac {1}{2} \int_{0}^{\infty} dy
\left (\frac {y} {2 \pi} \ln \frac{\psi (iy)}{\overline \psi (iy)} + \frac{{\cal K}}{L} \right)
\end{equation}
\section{Circular Cylinder}
The Kac number for an arbitrarily shaped conductive shell has been given previously \cite{BD}. The Casimir energy of the circular cylindrical shell has also been determined \cite{deraad}. Here we will give a new route to the known result in the familiar setting of the circular cylinder, in preparation for the extension to the case of the cylinder of elliptical cross section.
We consider the effect on the electromagnetic spectrum of the introduction of a cylindrical conducting boundary of radius $A$ into another cylinder of radius $Z$ (which will be taken to infinity shortly \cite{note1}). The cutoff frequencies of the system interior to the larger cylinder are specified by a scalar function $\varphi(\rho,\phi)$, which satisfies the two-dimensional Helmholtz equation. The variables can be separated, so that $\varphi$ is a combination of angular factors $\exp(i n \phi)$ and Bessel functions of corresponding order. There are both $TE$ and $TM$ modes, for the regions $\rho < A$ and $A < \rho < Z$, so that there are four mode conditions: $J_{n}(w A)=0$; $J_{n}'(w A)=0$; $H_{n}^{(1)}(w A) J_{n}(w Z) - H_{n}^{(1)}(w Z) J_{n}(w A)= 0$;
and $H_{n}^{(1)}{'}(w A) J_{n}'(w Z) - H_{n}^{(1)}{'}(w Z) J_{n}'(w A) = 0$ ($H_{n}^{(1)}$ is the Hankel function of order $n$). Then a candidate for $\psi$ is the product for all $n$ of the left-hand sides of these four expressions, and $\overline \psi$ is the product of the factors $J_{n}(w Z) J_{n}'(w Z)$. However, for $w = iy$, the Bessel functions are increasing or decreasing exponentially, and in the limit of large $Z$ the exponentially small $H_{n}^{(1)}(iyZ)$ terms can be dropped so that $ \psi(iy)/\overline \psi (iy) $
reduces to a product of modified Bessel functions
\begin{equation}
\label{cylinder1}
\frac{\psi(iy)}{\overline \psi (iy)} = \prod_{n}
-(2 A y)^2 I_{n}(A y) K_{n}(A y) I_{n}'(A y) K_{n}'(A y)
\end{equation}
The factor $-(2Ay)^2$ has been introduced so that the factors in the product approach unity for large $y$.
This product can be written in a simpler form with the aid of the Wronskian relation $r I_{n}'(r)K_{n}(r)- r I_n(r)K_{n}'(r)= 1$:
\begin{equation}
\label{cylinder2}
\frac{\psi(iy)}{\overline \psi (iy)} = \prod_{n} \left (1 - \sigma_{n}^{2}\right )
\end{equation}
where
\begin{equation}
\label{sigmadefinition}
\sigma_{n} = y \frac {d}{dy} I_{n}(A y) K_n(A y)
\end{equation}
There is an alternate route to (\ref{cylinder1}) that sheds some light on its meaning \cite{KSLZ}. The path integral representation of quantum mechanics turns three-dimensional quantum mechanics at zero temperature into a four-dimensional classical statistical mechanics problem where the ratio $H/T$ that determines the Boltzmann weight at a temperature $T$ is represented by the ratio of the action to Planck's constant. The introduction of a boundary suppresses $TM$ modes (because the longitudinal electric field must vanish at the boundary) but introduces new $TE$ modes (because the longitudinal magnetic field can be discontinuous across the boundary). In either case the effect is localized near the boundary, and can be described by solutions of the modified Helmholtz equation $(\triangle-y^{2})u(\textbf{r},y) = 0$ with sources on the boundary. These solutions have radial parts that are described by the modified Bessel functions, and the amplitudes of the sources are proportional to $I_{n}(yA)$, $K_n(yA)$, $I_{n}'(yA)$, and $K_{n}'(yA)$ -- the quantities that enter into (\ref{cylinder1}). This approach gives a route to the calculation of the Casimir
energy entirely in terms of functions defined on the
imaginary $w$ axis.
Combining (\ref{casmir}) and (\ref{cylinder2}) gives
\begin{equation}
\label{casimircylinder}
\frac{{\cal C}}{L} = \frac {1}{2}
\int_{0}^{\infty}
\left ( \frac{{\cal K}}
{L} + \frac {y} {2 \pi} \sum_{n=-\infty}^{\infty}
\ln (1-\sigma_{n}^2) \right ) dy
\end{equation}
The logarithm can be expanded in a sum of powers of $\sigma_{n}$. Then ${\cal C}/L = \sum T_{m}$, where
\begin{eqnarray}
\label{cylinderexpansion}
T_{1}&=& \frac {1}{4 \pi } \int_{0}^{\infty} \left (\frac{2\pi {\cal K}}{L} - y \sum_{n=-\infty}^{\infty}
\sigma_{n}^{2}\right ) dy
\nonumber
\\
T_{m}&=& -\frac {1}{4 \pi m} \int_{0}^{\infty} y dy \sum_{n=-\infty}^{\infty}
\sigma_{n}^{2m}
\end{eqnarray}
\subsection{Klich Expansion}
Klich \cite{klich} has pointed out a useful trick for evaluating the sums that appear in these equations.
The Green function for the modified Helmholtz problem can be expanded in modified Bessel functions about an arbitrary origin, and by equating representations we can derive the identity \cite{JDJ}
\begin{equation}
\label{green}
K_{0}\left (yA \rho(\phi-\phi')\right ) = \sum_{n=-\infty}^{\infty} I_{n}(Ay) K_{n}(Ay) e^{in(\phi-\phi')}
\end{equation}
where
\begin{equation}
\rho(\phi) = \sqrt{2 - 2 \cos\phi}= 2 \left |\sin \frac{\phi}{2}\right |
\end{equation}
Performing the operation $y d/dy$ on both sides of (\ref{green}) gives
\begin{eqnarray}
\label{Gdef}
H(1,2)&\equiv& y \frac {d}{dy} K_{0}\left (2 y A \left |\sin \frac{\phi_{1}-\phi_{2}}{2}\right |\right )
\nonumber
\\
&=& \sum_{n=-\infty}^{\infty} y \frac{d}{dy} \left (I_{n}(yA) K_{n}(yA)\right ) e^{in(\phi_{1}-\phi_{2})}
\nonumber
\\
&=&\sum_{n=-\infty}^{\infty} \sigma_{n} e^{in(\phi_{1}-\phi_{2})}
\end{eqnarray}
By multiplying this expression times itself $2m$ times, identifying the $\phi_{i}$ in a chain of pairs, and integrating over all $\phi_{i}$ we may derive a series of identities \cite{CM}
\begin{eqnarray}
\label{klichidentities}
\sum_{n=-\infty}^{\infty} \sigma_{n}^{2m} &=& \int_{-\pi}^{\pi}\frac{d\phi_{1}}{2\pi} \ldots
\int_{-\pi}^{\pi}\frac{d\phi_{2m}}{2\pi} H(1,2)\nonumber\\
&\times&H(2,3)\ldots H(2m,1)
\end{eqnarray}
Substituting these into (\ref{cylinderexpansion}) gives an extension of the representation for ${\cal C}$ similar to that given by Balian and Duplantier \cite{BD}.
\subsection{Kac number}
For large $y$ the $\sigma_{n}$, Eq.(\ref{sigmadefinition}), are small, and Eq.(\ref{kacdef}) gives
\begin{eqnarray}
\label{cylK}
\frac {{\cal K}}{ L} &= &- \lim_{y \rightarrow \infty}
\frac{y}{2\pi} \ln\frac{\psi(iy)}{\overline \psi (iy)}\nonumber\\
&=& - \lim_{y\rightarrow \infty} \frac{y}{2\pi} \sum_n \ln (1-\sigma_n^2)
= \lim_{y\rightarrow \infty} \frac{y}{2\pi} \sum_n \sigma_n^2\nonumber\\
&=& \lim_{y\rightarrow \infty} \frac{y}{2\pi} \int_{-\pi}^{\pi} \frac {d \phi_1}{2\pi}\int_{-\pi}^{\pi} \frac {d \phi_2}{2\pi} H(1,2) H(2,1)
\nonumber
\\
&=& \lim_{y\rightarrow \infty} \frac{y}{\pi^{2}} \int_{0}^{\pi} d \beta \left [ 2yA\sin \beta K_{1}\left (2yA \sin \beta \right )\right ]^2
\end{eqnarray}
For large $y$, this integral is dominated by the contribution
from small $\beta$, so that we may replace $\sin\beta$
by its argument, leading to \cite{integral}
\begin{eqnarray}
\label{cylK2}
\frac { {\cal K}}{L}
&=& \frac {1}{2\pi^{2}A} \lim_{y\rightarrow \infty}\int_{0}^{2\pi yA} d(2yA\beta) (2yA\beta K_{1}(2yA \beta))^2\nonumber\\
&=& \frac{1}{2\pi^{2}A}\int_{0}^{\infty}dx (xK_{1}(x))^{2}=\frac {3}{64A}
\end{eqnarray}
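The last integral is easily checked numerically; a short Python sketch using SciPy confirms $\int_0^\infty (xK_1(x))^2 dx = 3\pi^2/32$ and hence ${\cal K}A/L=3/64$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import k1

val, err = quad(lambda x: (x * k1(x))**2, 0, np.inf)
print(val, 3 * np.pi**2 / 32)          # both ~ 0.925275
print(val / (2 * np.pi**2), 3.0 / 64)  # K A / L ~ 0.046875
\end{verbatim}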
This result was given previously by Balian and Duplantier \cite{BD}. They also showed that with this choice $T_{1}$ in (\ref{cylinderexpansion}) vanishes.
\subsection{Casimir energy}
We evaluated the expressions for $T_{m}$ (\ref{cylinderexpansion}) by a Monte Carlo integration. The first step is to change variables to $y = (\ln z)^{4}$, which compresses the range of integration to the interval $z \in [0,1]$ (the choice of the fourth power is somewhat arbitrary, but was found to give better sampling of the function in the Monte Carlo integration). Then the multiple integral over the angles and $z$ is given by the average of the integrand (in the new variable) evaluated at uniformly and randomly sampled values for $z$ and the $\phi_{i}$. Averaging over $10^8$ configurations we find
\begin{eqnarray}
\label{results}
T_2 = \frac{-0.00758 \pm 0.00002}{A^{2}}
\nonumber
\\
T_3= \frac{-0.002264 \pm 0.000002}{A^{2}}
\nonumber
\\
T_4= \frac{-0.001080 \pm 0.000001}{A^{2}}
\nonumber
\end{eqnarray}
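A minimal sketch of this Monte Carlo estimator is given below (Python with NumPy/SciPy). It uses the substitution $y=(\ln z)^{4}$, samples the angles uniformly, and evaluates the kernel $H(i,j)=-uK_1(u)$ with $u=2yA\left|\sin[(\phi_i-\phi_j)/2]\right|$; the sample count and the guard against the endpoints $z=0,1$ are our choices, and far more samples are needed to reproduce the quoted error bars.
\begin{verbatim}
import numpy as np
from scipy.special import k1

def T_m(m, A=1.0, samples=10**6, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.uniform(1e-12, 1.0 - 1e-12, samples)  # avoid endpoints
    y = np.log(z)**4                   # substitution y = (ln z)^4
    w = -4.0 * np.log(z)**3 / z        # |dy/dz| (positive for z < 1)
    phi = rng.uniform(-np.pi, np.pi, (samples, 2 * m))
    prod = np.ones(samples)
    for i in range(2 * m):             # H(1,2) H(2,3) ... H(2m,1)
        d = phi[:, i] - phi[:, (i + 1) % (2 * m)]
        u = 2.0 * y * A * np.abs(np.sin(0.5 * d))
        prod *= -u * k1(u)             # y d/dy K0(u) = -u K1(u)
    return -np.mean(w * y * prod) / (4.0 * np.pi * m)

print(T_m(2))    # roughly -0.0076 / A^2, slowly converging
\end{verbatim}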
For large $m$ the integral is almost entirely due to the terms involving $\sigma_{0} = y d/dy (I_{0}(Ay)K_{0}(Ay))$:
\begin{eqnarray}
\label{results2}
T_{20} =
-\frac{1}{8\pi}\int_{0}^{\infty} \sigma_{0}^4 ydy =-\frac{0.00685}{A^{2}}
\nonumber
\\
T_{30} =
-\frac{1}{12\pi}\int_{0}^{\infty} \sigma_{0}^6 ydy = -\frac{0.002258}{A^{2}}
\nonumber
\\
T_{40} =
-\frac{1}{16\pi}\int_{0}^{\infty} \sigma_{0}^8 ydy = -\frac{0.001081}{A^{2}}
\end{eqnarray}
This series $\{ T_{m0} \}$ is found to fall off as $0.0346 \times m^{-2.5}$ rather accurately, and it is also possible
to evaluate the sum of the series in the form
\begin{equation}
S_0=
\frac{1}{4\pi} \int_{0}^{\infty} \left (\ln(1-\sigma_{0}^{2}) + \sigma_{0}^{2}\right ) y\, dy = -\frac{0.012799}{A^{2}}
\end{equation}
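These one-dimensional integrals can be checked with ordinary quadrature. In the sketch below (Python with SciPy, for $A=1$; by scaling, the results carry the $1/A^{2}$ factor) the exponentially scaled Bessel functions are used so that the product $I_0K_0$ is evaluated stably at large argument:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import i0e, i1e, k0e, k1e

# sigma_0(y) = y d/dy [I0(y)K0(y)] = y [I1(y)K0(y) - I0(y)K1(y)];
# the e-scaled pairs i1e*k0e and i0e*k1e equal I1 K0 and I0 K1.
s0 = lambda y: y * (i1e(y) * k0e(y) - i0e(y) * k1e(y))

T20, _ = quad(lambda y: -s0(y)**4 * y / (8 * np.pi), 0, np.inf)
S0,  _ = quad(lambda y: (np.log(max(1.0 - s0(y)**2, 1e-300))
                         + s0(y)**2) * y / (4 * np.pi), 0, np.inf)
print(T20, S0)   # ~ -0.00685 and ~ -0.012799 (units of 1/A^2)
\end{verbatim}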
Therefore only the first few $T_{m}$ need to be calculated; we thus arrive at the evaluation
\begin{eqnarray}
\frac{{\cal C}}{L} &=& S_0-T_{20}-T_{30}-T_{40}+T_{2}+T_{3}+T_{4}
\nonumber
\\
&=& \frac{- 0.01354 \pm 0.00002}{A^{2}}
\end{eqnarray}
in agreement with the accepted value ${\cal C}/L = -0.013561343/A^{2}$ \cite{deraad}. The Monte Carlo program is simple in structure and only requires modest computational resources. The accuracy is limited by the $T_2$ term, which is difficult to evaluate because the integrand has significant contributions from large $y$ when the four angles $\phi_{i}$ are nearly the same.
\section{Elliptical Cylinder}
\subsection{Mode expansion}
Elliptical coordinates are related to the rectangular coordinates $X, Y$ by
\begin{eqnarray}
\label{elliptical}
X = h \cosh \xi \cos \eta
\nonumber
\\
Y = h \sinh \xi \sin \eta .
\end{eqnarray}
The surfaces of constant $\xi$ are confocal ellipses with axes $A = h \cosh \xi$ and $B = h \sinh \xi$.
The distance between the foci is $2h$; the eccentricity of the ellipse is $\epsilon = {\text {sech}} \ \xi$.
The case of the circle is recovered in the limit $\xi \rightarrow
\infty$, $h \cosh\xi = \sqrt{X^{2}+Y^{2}}$ (which implies $h \rightarrow 0$).
The two-dimensional modified Helmholtz equation can be separated in these variables. The solutions have the form of products $P(\eta)Q(\xi)$, where $P$ and $Q$ satisfy the Mathieu equations
\begin{equation}
\label{de-eta}
\frac{d^2 P}{d \eta^2} = -(a +h^2 y^2 \cos^2 \eta) P(\eta)
\end{equation}
\begin{equation}
\label{de-xi}
\frac{d^2 Q}{d \xi^2} = (a + h^2 y^2 \cosh^2 \xi) Q(\xi)
\end{equation}
where $a$ is the separation constant. The functions $P(\eta;a,y)$ play a role similar to that of the trigonometric functions in the case of the cylinder, and become them in the circle limit. The condition that $P(\eta;a,y)$ be periodic restricts $a$ to a discrete set of values, but unlike the case of the circle the allowed values are not integers and depend on $y$; we will refer to the set of these as $\{a_{n}(y)\}$,
where $n$ is a counting label (the
modes can be ordered
so that $a_{n+1} > a_n$, and so that $P(\eta;a_{0},y)$ is
the nodeless function of $\eta$).
For any $y$, the corresponding set of functions $P(\eta;a_{n}(y),y)$ forms a complete set for expression of a general function of $\eta$; they will be normalized so that for any
$a$ and $b$ in $\{a_n\}$,
\begin{equation}
\label{normaliztion}
\int_{-\pi}^{\pi} P(\eta;a,y) P(\eta;b,y) d\eta = \delta_{ab}
\end{equation}
Then it follows
from the completeness of the $P(\eta; a,y)$ that
\begin{equation}
\label{complete}
\sum_{a \in \{a_n\}} P(\eta;a,y) P(\eta';a,y) = \delta ( \eta-\eta')
\end{equation}
The function $Q(\xi;a,y)$ plays a role analogous to a modified Bessel function; there are again two solutions: $Q_{1}$, which is regular at small $\xi$ (and increasing); and $Q_{2}$, which is small at large $\xi$ (and decreasing). These are related by the Wronskian relationship
\begin{equation}
\label{MathieuWronskian}
Q_{2}(\xi;a,y) \frac {d}{d\xi} Q_{1}(\xi;a,y)-
Q_{1}(\xi;a,y) \frac {d}{d\xi} Q_{2}(\xi;a,y) = 1
\end{equation}
However, there are some important differences from the case of the circular cylinder. Eq.(\ref{de-xi}) differs from the Bessel equation in that there is no term in the first derivative. This implies that the Wronskian is constant, and we are choosing the normalization so that it is unity, as stated in (\ref{MathieuWronskian}). This also means that the product of the two solutions is asymptotically unity at $\xi$ large, and (via Eq.(\ref{de-xi})) at $y$ large as well. Additionally the Bessel functions depend on radial position $r$ only through the $ry$ combination, and on what happens in the angular sector only through the order $n$, while the Mathieu function $Q(\xi;a,y)$ depends on $y$, $\xi$, and $a$ separately. We can make the mathematics of the ellipse look more like the mathematics of the circle if we define the restricted functions $I_{n}(y;\xi) = Q_{1}(\xi;a_{n}(y),y)$ and $K_{n}(y;\xi) = Q_{2}(\xi;a_{n}(y),y)$. For real cutoff frequency $w$ we can similarly define the oscillatory functions $J_n(w;\xi)$ and $H_{n}(w;\xi)$ that correspond to the usual Bessel functions.
The cutoff frequencies for the elliptical cylinder whose shape and size are determined by $h$ and $\xi$ are determined by the values of $w$ for which either
$J_{n}(w;\xi)= 0$ or $d/d\xi (J_{n}(w,\xi)) = 0$ for the modes inside the cylinder and with a similar but more complicated relationship for the modes outside the cylinder. The quantity $\psi (w)/\overline \psi (w)$ needed for the contour integral relationship (\ref{contour}) for the mode sum is constructed from these elements as in the circular cylinder case. After the change of variables $w \rightarrow iy$, we arrive at
\begin{eqnarray}
\label{ellipse1}
\frac{\psi (iy)}{\overline \psi (iy)}&=& \prod_{n}\left[-4 I_{n}(y;\xi) \frac {d}{d\xi} I_{n}(y;\xi) K_{n}(y;\xi) \frac {d}{d\xi} K_{n}(y;\xi)\right]
\nonumber
\\
&=&\prod_{n}\left[1-\left (\frac {d}{d\xi} \left(I_n(y;\xi)K_{n}(y;\xi)\right)\right )^{2}\right]
\nonumber
\\
&\equiv&\prod_{n}\left(1-\sigma_{n}^2\right)
\end{eqnarray}
where we have used the Wronskian relationship (\ref{MathieuWronskian}) in the second line.
The cautious reader will have reason to be suspicious about the analytic continuation of $J_{n}(w;\xi)$ to $I_{n}(y;\xi)$. In the case of the circular cylinder the order $n$ of the Bessel function is an integer and unaffected by the frequency (its argument $w$); here, the $a_{n}$ depend on the frequency, and take on different values on the real and imaginary axes. For complex values of $y$, the $a_{n}(y)$ are surely also complex. However, the point is that the Mathieu function $Q(\xi,y,a)$ is analytic in $y$ for any choice of $\xi$ and $a$, and then the restricted function $I_{n}(y;\xi)$ is also analytic; the way that the set $a_{n}$ evolves as $y$ is varied is built into the definition of the restricted functions. Finally, we will note that the path integral approach \cite{KSLZ} gives an alternate route to the representation (\ref{ellipse1}).
With this foundation, the determination of the Kac number and the Casimir energy for the cylinder of elliptic cross section is similar to that for the circular cylinder. In particular, Eqs. (\ref{kacdef})-(\ref{casmir}), (\ref{casimircylinder}), and (\ref{cylinderexpansion}) are unaltered, except that the sums are now over the set $\{a_n(y)\}$.
\subsection{Klich expansion}
In elliptic variables the Green function satisfies the partial differential equation
\begin{eqnarray}
\label{PDE}
\left ( \frac {\partial^2}{\partial \xi^2} +\frac {\partial^2}{\partial \eta^2}
-y^2 h^2 (\sinh^2 \xi + \sin^2 \eta )\right ) G(\xi,\eta;\xi'
,\eta')
\nonumber
\\
= - 2\pi \delta (\xi-\xi') \delta (\eta - \eta')
\end{eqnarray}
We can represent it in two ways: by direct transcription from the polar variables
\begin{equation}
\label{green1}
G(\xi,\eta;\xi',\eta') = K_0(y \rho) ,
\end{equation}
where
$\rho$
is the distance between the points $(\xi,\eta)$ and $(\xi',\eta')$; or as an
expansion in Mathieu functions
\begin{eqnarray}
\label{green2}
G(\xi,\eta;\xi',\eta') &=& \sum_{a \in \{a_n\}}
g_n(y)
P(\eta;a,y) P(\eta';a,y)\nonumber\\
&\times& I_n(\xi,y) K_n(\xi',y)
\end{eqnarray}
for $\xi < \xi'$ (and with $\xi$ and $\xi'$ interchanged for $\xi > \xi'$).
The coefficients $g_n(y)$ are determined by the conditions that $G$ and $\partial G/\partial\xi$ be
continuous at $\xi = \xi'$ (for $\eta \ne \eta'$), and that the singularity at $(\xi,\eta) = (\xi',\eta')$
has the amplitude implied by (\ref{PDE}).
The result of this analysis is that $g_n(y) = 2\pi$ for all $n$ and $y$.
Thus we have the identity
\begin{eqnarray}
\label{identity}
K_0(y\rho) &=& 2 \pi \sum_{a \in \{a_n(y)\}} P(\eta_1;a,y)P(\eta_2;a,y)\nonumber\\
&\times& I_n(\xi_1,y) K_n(\xi_2,y)
\end{eqnarray}
We only need this result for the case $\xi_1 = \xi_2 = \xi$, where $\rho$ reduces to
\begin{eqnarray}
\label{rhodef}
\rho &=& h\sqrt {\cosh^{2}\xi (\cos \eta - \cos \eta')^2 +\sinh^{2} \xi (\sin \eta - \sin \eta')^2}
\nonumber
\\
&=& 2h \left |\sin \frac {\eta -\eta'}{2} \right | \Gamma \left (\frac {\eta+\eta'}{2}\right )
\end{eqnarray}
where
\begin{equation}
\label{Gamma}
\Gamma(\alpha) = \sqrt{\sinh^2\xi+\sin^2 \alpha}
\end{equation}
Take the $\xi$ derivative of (\ref{identity}) to get
\begin{eqnarray}
\label{ident2}
\frac {\partial}{\partial \xi} K_0\left (2 y h \left |\sin\frac {\eta_1-\eta_2}{2}\right | \Gamma\left (\frac {\eta_1+\eta_2}{2}\right )\right )
\nonumber
\\
= 2 \pi \sum_{a \in \{a_n(y)\}} P(\eta_1;a,y)P(\eta_2;a,y) \sigma_n
\end{eqnarray}
Thus by defining
\begin{equation}
\label{Gdef2}
H(\eta_1,\eta_2) = \frac {\partial}{\partial \xi} K_0\left (2 y h \left |\sin\frac{\eta_1-\eta_2}{2}\right | \Gamma \left (\frac{\eta_1+\eta_2}{2}\right ) \right )
\end{equation}
we obtain an identity equivalent to (\ref{klichidentities}).
In the circular case (large $\xi$),
$\Gamma \approx \sinh \xi$ and
$\sinh \xi \approx \cosh \xi $, so that (\ref{Gdef2}) reduces to (\ref{Gdef}) with $A = h \cosh \xi$.
For the circular case,
the Klich expansion was a convenience that improved the rate of convergence by
summing over all orders of the Bessel functions. In the elliptical case it is even
more useful, because it allows us to do
the sums equivalent to (\ref{cylinderexpansion}) without constructing
the sets $\{a_n(y)\}$ or the Mathieu functions.
\begin{figure}
\includegraphics[width=1.0\columnwidth,keepaspectratio]{kacfigure3.eps}
\caption{How the Kac number per unit length depends on the shape of the cylinder. }
\end{figure}
\subsection{Kac number}
Reprising the argument given above, the Kac number per unit length determines the leading
contribution to $\ln[\psi(iy)/\bar \psi(iy)]$ at large $y$. The analogue to Eq. (\ref{cylK}) is
\begin{eqnarray}
\label{ellipseK}
\frac { {\cal K}}{L} &=& \lim_{y \rightarrow \infty}
\frac {y}{2\pi}
\int_{-\pi}^{\pi}
\frac {d\eta_1}{2\pi} \int_{-\pi}^{\pi}\frac {d\eta_2}{2\pi}
H^{2}(\eta_1,\eta_2)
\nonumber
\\
&=&
\lim_{y\rightarrow \infty}\frac {y}{\pi^{3}} \int_{0}^{\pi}
d\alpha \int_{0}^{\pi}d\beta
\left (\frac {d}{d\xi} K_{0}\left (2yh \sin\beta \Gamma(\alpha)\right ) \right )^2\nonumber\\
\end{eqnarray}
where we have changed variables to $\alpha=(\eta_1+\eta_2)/2$ and
$\beta = (\eta_1 -\eta_2)/2$. As was previously explained, for large $y$
we can replace $\sin\beta$ by $\beta$; then the integral over $\beta$ is
the same as was considered in Eqs.(\ref{cylK}) and (\ref{cylK2}). The result is
\begin{eqnarray}
\label{ellipseK2}
\frac {\cal K}{L} &=& \frac{3}{64h}\int_0^{\pi} \frac{d\alpha}{\pi}
\frac {\sinh^2(\xi) \cosh^2(\xi)}{\Gamma^{5}(\alpha)}
\nonumber
\\
&=& \frac {3}{64} \int_0^{\pi} \frac {d\alpha}{\pi}
\frac {A^2 B^2}
{(A^2 \sin^2 \alpha + B^2 \cos^2 \alpha )^{5/2}}
\end{eqnarray}
which can be evaluated in terms of the complete elliptic integrals ($A > B$ are the
axes of the ellipse). This agrees with the general expression given
by Balian and Duplantier \cite{BD}.
In the limit of small $B$, this expression is of order $B^{-2}$. Therefore in Figure 1 we plot ${\cal K}B^2/LA$, which is
independent of the size of the cylinder and only weakly dependent on its shape. The horizontal axis is the eccentricity $\epsilon = {\text {sech}} \xi$.
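The curve in Figure 1 can be reproduced by direct quadrature of the last expression; a short Python sketch using SciPy follows, with the circle $A=B$ reproducing ${\cal K}/L=3/(64A)$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def kac_per_length(A, B):
    f = lambda a: A**2 * B**2 / (A**2 * np.sin(a)**2
                                 + B**2 * np.cos(a)**2)**2.5
    val, _ = quad(f, 0, np.pi)
    return 3.0 / 64.0 * val / np.pi

print(kac_per_length(1.0, 1.0))       # circle: 3/64 = 0.046875
eps = 0.9                             # eccentricity, with A = 1
B = np.sqrt(1.0 - eps**2)
print(kac_per_length(1.0, B) * B**2)  # the quantity K B^2 / (L A)
\end{verbatim}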
For large eccentricity, the cylindrical shell is approximately a pair of planar strips of width $A$ and area $LA$ separated by a distance $B$. Then its Kac number must be proportional to the area ($\mathcal{K}\propto LA$), while the Kac number per unit area can only depend on the strip separation $B$. Since the Kac number itself is dimensionless, the combination of these two arguments predicts a dependence $\mathcal{K}\simeq LA/B^{2}$, thus explaining the divergence of the Kac number at large eccentricity.
\subsection{Casimir energy}
The term $T_1$ in Eq.(\ref{cylinderexpansion}) again vanishes for the case of the cylinder of elliptical cross-section:
substituting (\ref{Gdef2}) and (\ref{klichidentities}) into
(\ref{cylinderexpansion}) and then changing variables from
$\eta_i$ to $\alpha$ and $\beta$ leads to an integral
that differs from the circular case only by a scale factor.
The expansion (\ref{cylinderexpansion}) continues to be relevant, as does (\ref{klichidentities}), except
that now $H(\eta_{1},\eta_{2})$ is given by
(\ref{Gdef2}). It follows from Eq. (\ref{cylinderexpansion}) that all of the $T_{m}$ are negative, and thus that the Casimir energy is negative.
The numerical evaluation of the $T_{m}$ can be done as above, using almost the same program.
\begin{figure}
\includegraphics[width=1.0\columnwidth, keepaspectratio]{rints3.eps}
\caption{The dependence of $|T_{m}|$ on $m$.
The curves are drawn for $\epsilon = 0.05, 0.50, 0.80, 0.90, $ and $0.95$, in ascending order.
The nearly constant slope of these lines indicates that
$-T_{m}$ is decreasing as $m^{-2.5}$.
}
\end{figure}
Figure 2 shows that it is again true that $-T_{m}$ decreases as $m^{-2.5}$, so that the series
sum converges.
We evaluated $T_{2}$ through $T_{10}$ by Monte Carlo integration, and applied a truncation correction based on the $m^{-2.5}$ law. The resulting dependence of ${\cal C}/L$ on the eccentricity $\epsilon$ is shown in Figure 3.
Also shown in Figure 3 is $\sqrt{1-\epsilon^{2}} \times {\cal C}/L$. The lack of dependence on $\epsilon$ for small $\epsilon$
agrees with the results of Kitson and Romeo \cite{KR}, who discussed the Casimir energy of a cylinder of elliptical cross-section perturbatively.
As was already observed, for large eccentricity the cylindrical shell is approximately a pair of planar strips of width $A$ separated by a distance $B$. The Casimir attraction between these would give an energy ${\cal C} \simeq - LA/B^3$ \cite{casimir} ($LA$ is the area of a strip); this can be rewritten in terms of the eccentricity as ${\cal C} A^{2}/L \simeq - (1-\epsilon^{2})^{-1.5}$. The dash-dotted line in Figure 3 shows that this crude argument successfully accounts for the divergence of the Casimir energy at large eccentricity.
\begin{figure}
\includegraphics[width=1.0\columnwidth, keepaspectratio]{casenrint5.eps}
\caption{How the Casimir energy per unit length varies with the eccentricity of the cylinder.
The dash-dotted line is $(1-\epsilon^{2})^{1.5} \times |{\cal C}| A^{2}/L$; this removes the
divergence at large eccentricity.
The dashed line is $\sqrt{1-\epsilon^{2}} \times |{\cal C}| A^{2}/L$, which removes the leading order dependence on $\epsilon$ for small $\epsilon$.
}
\end{figure}
\section{Discussion}
The case of a cylindrical shell with circular cross-section holds a special place in the physics of the Casimir effect. At zero temperature Casimir forces are known to be attractive for parallel plates \cite{casimir}, repulsive for the sphere \cite{Boyer68, BD} and nearly zero (very weakly attractive) for a long cylindrical shell \cite{BD,deraad}. Thus the latter is approximately the intermediate case. The solution of the elliptical cross-section case allows us to follow the evolution of the Casimir attraction with eccentricity $\epsilon$ as the shell cross-section evolves from circular to highly eccentric, resembling the parallel plate geometry. Therefore it is not surprising that the Casimir energy is found to decrease with eccentricity with the circular case corresponding to the energy maximum. Naively one might have expected that the Casimir attraction in the $\epsilon\rightarrow 1$ limit would be stronger than its parallel plate counterpart \cite{casimir}. Our results indicate the opposite, however: even though both attractions have the same order of magnitude, the parallel plate geometry generates stronger attraction. This illustrates the non-additive character of the Casimir forces.
Since the interactions are attractive, with the circular cross-section corresponding to the energy maximum and the large eccentricity $\epsilon=1$ limit being the minimum, with only Casimir stresses present a fixed-area shell would be unstable with respect to collapse onto itself. We argue that this remains the equilibrium state of the system at arbitrary temperature. Indeed, the high-temperature thermodynamics of the system is dominated by the Kac number as the latter determines the form of the free energy \cite{BD}
\begin{equation}
\label{fenergy}
\mathcal{F}\simeq -\mathcal{K}T\ln(Tl)
\end{equation}
at a temperature $T\gg 1/l$, where $l$ is a length scale approximately corresponding to the largest eigenfrequency of the problem. For $\mathcal{K}>0$, as is the case in the problem under study, the equilibrium configuration of the shell must have the largest Kac number $\mathcal{K}$. But we found (Figure 1) that the Kac number diverges in the large eccentricity $\epsilon=1$ limit, which corresponds to the collapsed state. However, as this limit is approached we have $l\simeq B\rightarrow 0$, thus inevitably leaving the range of applicability of the high-temperature expression (\ref{fenergy}) and entering the range of applicability of the zero-temperature theory, where the ground state is still collapsed. This allows us to argue that the equilibrium state of a fixed-area shell in the presence of Casimir stresses only is collapsed at arbitrary temperature.
\section{Acknowledgements}
EBK is supported by the US AFOSR Grant No. FA9550-11-1-0297. GAW gratefully acknowledges an Australian Postgraduate Award and the J. L. William Ph. D. scholarship.
\section{Introduction}
\label{sec:intro}
With the rapid growth of society and the modern logistics industry, road infrastructure has expanded greatly across the world. There are more than 64,000,000 kilometers of roads in total~\cite{length}, which creates massive operational requirements for pavement maintenance. Pavement inspection is one of the key steps~\cite{c1}. Generally speaking, cameras are often utilized as pavement inspection equipment due to their low cost and the powerful representational ability of images. The pavement inspection task is therefore often translated into a pavement distress analysis task based on the acquired pavement images, and this task is then accomplished manually by proficient workers. Clearly, such an operation consumes plenty of time and labor due to the enormous number of pavement images produced daily~\cite{c2}. Automating pavement distress analysis can therefore play a critical role in improving efficiency, reducing cost, and avoiding the labeling errors of manual pavement inspection.
\begin{figure}[t]
\centering
\includegraphics[scale=0.33]{first_page.pdf}
\caption{The illustrations of pavement distress detection (identifying the distressed images) and recognition (classifying the pavement distress into specific categories).}
\label{fig.firstpage}
\vspace{-0.5cm}
\end{figure}
Pavement distress detection and recognition are two fundamental tasks for pavement distress analysis, which aim at identifying the distressed pavement images and classifying the distressed images into specific categories, respectively, as shown in Figure~\ref{fig.firstpage}. In recent decades, many classical approaches have been proposed to address these two tasks, and they can be roughly categorized into two groups.
The first group utilizes image processing, hand-crafted features, and conventional classifiers to recognize pavement distress~\cite{c3,c27,c28,c29,c30}. For example, Zhou et al.~\cite{c29} developed a two-step method that applies a wavelet transform followed by a Radon transform to classify pavement distress. Sun et al.~\cite{c3} proposed a crack classification method based on topological properties and chain codes. The main drawback of these methods is that they often optimize the feature extraction and classification steps separately, or do not involve any learning process at all, which leads to poor performance. Moreover, they usually require plenty of sophisticated image pre-processing.
The second group is comprised of those using deep learning-based methods. Inspired by the advance of deep learning approaches, it is more and more popular to apply different deep learning-based visual learning models for pavement distress detection and recognition~\cite{c8,c9,c10,c11,c16}. For example, K. Gopalakrishnan et al.~\cite{c8} leveraged a VGG-16 pre-trained on ImageNet~\cite{c9} to identify whether a specific pavement image is ``crack'' or ``non-crack''. Laha et al.~\cite{rddretina} detected road damage with RetinaNet~\cite{retinanet}. Compared to the conventional approaches, deep learning-based approaches often achieve better performance. However, most of these approaches simply regard the pavement distress detection or recognition problem as a common object detection or image classification problem and directly apply classical deep learning approaches. They seldom pay attention to the specific characteristics of pavement images, such as the high image resolution, the low distress area ratio, and uneven illumination, in the model design phase.
To address these issues, instead of directly classifying the pavement images, IOPLIN~\cite{c16} performs histogram equalization to suppress the negative effects of illumination, and tackles the pavement distress detection task by inferring the labels of patches from pavement images with a Patch Label Inference Network (PLIN), thereby fully exploiting the high-resolution image information. IOPLIN can be trained iteratively with only image-level labels via the Expectation-Maximization Inspired Patch Label Distillation (EMIPLD) strategy, and it achieves promising detection performance for various categories of pavement distress. The main drawback of IOPLIN is that its optimization process is quite complex and time-consuming. Moreover, IOPLIN is not end-to-end trainable and cannot be extended to the pavement distress recognition scenario.
To address the aforementioned issues, we present a novel pavement image classification framework named Weakly Supervised Patch Label Inference Network (WSPLIN)~\cite{wsplin} for both pavement distress detection and recognition. Similar to IOPLIN, our method accomplishes the pavement image classification by inferring the labels of patches from the pavement images with Patch Label Inference Networks (PLIN). WSPLIN therefore inherits the merits of IOPLIN, such as better image information utilization and result interpretability, but also faces the obstacle of training PLIN with only image labels. Compared to IOPLIN, WSPLIN solves this model training issue by introducing a more concise end-to-end weakly supervised learning framework. Such a framework endows WSPLIN with better efficiency and greater flexibility, enabling the pavement distress recognition application.
In WSPLIN, the pavement image is divided into patches with different patch collection strategies under different scales for exploiting both global and local information. Then, a CNN is implemented as PLIN for inferring the labels of patches with a sparsity constraint. Finally, the patch label inference results are fed into a Comprehensive Decision Network (CDN) for completing the classification. We integrate PLIN and CDN as an end-to-end deep learning model. In such a manner, the PLIN can be optimized under the guidance of CDN and the patch label sparsity constraint in a cleaner and more efficient fashion. Moreover, three different strategies, namely Sliding Window (SW), Image Pyramid (IP), and Sparse Sampling (SS), are adopted for collecting patches from images. We name the corresponding WSPLIN versions WSPLIN-SW, WSPLIN-IP, and WSPLIN-SS, respectively. Like IOPLIN, WSPLIN-SW does not consider any scale information during patch collection. It can be deemed a naive version of WSPLIN. Unlike WSPLIN-SW, WSPLIN-IP incorporates scale information by dividing images into patches from coarse to fine based on an image pyramid. It is the default version of WSPLIN. WSPLIN-SS conducts sparse patch sampling to collect only a few patches from the image pyramid in order to improve the efficiency of WSPLIN. It can be seen as the speedy version of WSPLIN. We evaluate WSPLIN on a large-scale pavement image dataset named CQU-BPDD~\cite{c16} under different settings, including distress detection, one-stage recognition, and two-stage recognition. The experimental results show that WSPLIN outperforms extensive baselines and demonstrates prominent advantages over IOPLIN in both efficiency and performance.
The main contributions of our work are summarized as follows:
\begin{itemize}
\item We propose a novel end-to-end weakly supervised deep learning model named WSPLIN for addressing both pavement distress detection and recognition issues. WSPLIN not only inherits the merits of IOPLIN, but also enjoys faster training speed, better classification performance, and wider application scenarios over IOPLIN.
\item Different from IOPLIN and the conventional CNN-based image classification methods, we introduce image pyramid to WSPLIN-IP for exploiting scale information. Moreover, we design a sparse patch sampling strategy in the image pyramid for further speeding up WSPLIN. The model training time of this faster WSPLIN version (WSPLIN-SS) is only one-fourth of the training time of IOPLIN while they share similar performance in pavement distress detection.
\item We design a patch label sparsity constraint based on the prior knowledge of distress distribution and leverage the CDN to guide the training of PLIN in a weakly supervised way. The patch labels produced by PLIN provide interpretable intermediate information, such as the rough location and the type of distress.
\item We empirically evaluate our model against the current state-of-the-art CNN methods and some classic transformer methods as baselines in both the pavement distress detection and recognition tasks. Extensive results show that WSPLIN outperforms them in both tasks under different settings.
\end{itemize}
\section{Related Work}
\label{sec:rela_work}
\subsection{Image-based Pavement Distress Analysis}
The traditional pavement distress analysis approaches mainly include filter-based methods and hand-crafted feature-based classical classifiers. For example, in~\cite{zhou2006wavelet}, the wavelet transform is used to decompose a pavement image into different-frequency subbands. Hu et al.~\cite{hu2010novel} propose a novel Local Binary Pattern (LBP) based operator for pavement crack detection. In~\cite{shi2016automatic}, a random structured forest named CrackForest, combined with integral channel features, is proposed for automatic road crack detection. Kapela et al.~\cite{kapela2015asphalt} propose a crack recognition system based on Histograms of Oriented Gradients (HOG). Pan et al.~\cite{pan2017object} use four popular supervised learning algorithms (KNN, SVM, ANN, RF) to discern pavement damage. However, the traditional methods usually perform poorly owing to their many hand-designed components and separately optimized procedures, and they do not scale to the large amounts of data available today.
Inspired by the recent remarkable successes of deep learning in extensive applications, simple and efficient convolutional neural network (CNN) based pavement distress analysis methods have gradually become the mainstream in recent years. In general, these methods can be divided into three groups according to the task objective: pavement distress segmentation~\cite{c11,crack500,yang2019feature,zou2018deepcrack}, pavement distress localization~\cite{ibragimov2020automated,ZHU2022103991}, and pavement distress classification~\cite{c16,few_shot,dong2021automatic}. Among them, pixel-based pavement distress segmentation is a hot research field. Zhang et al.~\cite{crack500} leverage a CNN to classify image patches for segmenting pavement distress. In~\cite{c11}, a CNN is used to learn the structure of the cracks from raw images, and the segmentation result is then generated from the obtained structure information. Based on the fully convolutional network (FCN), Yang et al.~\cite{yang2019feature} fuse multiscale features from top to bottom for pavement crack segmentation. In DeepCrack~\cite{zou2018deepcrack}, multiscale deep convolutional features learned at hierarchical convolutional stages are fused together to capture the line structures. For distress localization, Ibragimov et al.~\cite{ibragimov2020automated} propose a method for localizing signs of pavement distress based on the faster region-based convolutional neural network (Faster R-CNN). Zhu et al.~\cite{ZHU2022103991} compare the performance of three state-of-the-art object-detection algorithms on an unmanned aerial vehicle (UAV) pavement image dataset, which includes six types of distress. Because pavement distress annotation requires professional knowledge and a large amount of time, the datasets used in the above methods are low-resolution and small-scale. However, it remains to be determined whether models derived from small-scale datasets can be applied in real-world practice.
For pavement distress classification, Dong et al.~\cite{few_shot} propose a metric-learning based method for multi-target few-shot pavement distress classification on a dataset that includes 10 different kinds of distress. In~\cite{dong2021automatic}, discriminative super-features constructed from the multi-level context information of a CNN are used to determine whether there is distress in the pavement image and to recognize the type of the distress. All of these methods classify well on the datasets they use, which are small and contain only distressed images, and on which the test accuracy even reaches 100\%~\cite{dong2021automatic}. There have been few works that systematically evaluate model performance on a difficult large-scale multi-type dataset. Moreover, these approaches only regard the pavement distress detection or recognition problem as a common image classification problem and directly apply classical deep learning approaches. In~\cite{c16}, the patch-based weakly supervised learning model IOPLIN and the large-scale distress dataset CQU-BPDD are proposed to solve these problems. However, the main drawback of IOPLIN is that its pseudo-label-based patch label inference strategy makes it incompatible with pavement distress recognition, and its optimization process is quite complex and time-consuming. Our approach takes inspiration from IOPLIN but operates with a different patch label inference strategy and uses more effective, hierarchical patch collection strategies.
\vspace{-0.2cm}
\subsection{Deep Learning-based Image Classification}
In recent years, due to the popularity of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC)~\cite{ilsvrc}, many computer vision algorithms based on deep learning have emerged. Among them, a series of convolutional neural networks (CNNs) play a leading role in the field of image classification. For example, AlexNet~\cite{alexNet} first applies the convolutional neural network architecture to large-scale image classification datasets. Simonyan et al. first propose the deep and large-scale convolutional neural network VGGNet~\cite{c17} (e.g., the VGG19 model has 19 layers and more than 130 million parameters, while previous convolutional neural networks had fewer than 10 layers and millions to tens of millions of parameters). InceptionNet~\cite{c18} first applies the $1 \times 1$ convolution kernel. He et al.~\cite{c19} first propose a residual structure and a network extension strategy to construct a network family. Howard et al.~\cite{mobilenets} bring CNNs to embedded mobile platforms and propose MobileNet, specially designed for low-computing-power and low-memory platforms. Then, based on MobileNet and neural architecture search (NAS)~\cite{nas}, Tan et al. propose an efficient CNN dubbed EfficientNet~\cite{c21}. More recently, inspired by the field of Natural Language Processing (NLP), Dosovitskiy et al.~\cite{vit} propose a visual classification model based on the Transformer~\cite{attention}, named ViT.
However, general CNN- and Transformer-based classification models are difficult to apply directly to pavement distress analysis. This is mainly because pavement images have many specific characteristics, such as high image resolution, a low distress area ratio, and uneven illumination, compared with object-centric natural images. This paper uses patch collection strategies to incorporate both local and multiscale information for pavement distress analysis.
\section{Methodology}
\label{sec:format}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.46]{Network.pdf}
\caption{The overview of WSPLIN approaches. WSPLIN divides the images under varied scales into patches with different collection strategies, and then leverages a PLIN to produce the pseudo patch labels. Finally, CDN is used to yield the final pavement image label based on these patch labels.}
\label{fig.network}
\vspace{-0.5cm}
\end{figure*}
In this section, the network architecture of the Weakly Supervised Patch Label Inference Network (WSPLIN) is shown in Figure~\ref{fig.network}. We first introduce the problem formulation and overview of WSPLIN in section~\ref{sec:formulation}. Then, we introduce the involved patch collection strategies in section~\ref{sec:pc}. After that, the core modules of WSPLIN, Patch Label Inference Network (PLIN), and Comprehensive Decision Network (CDN) are detailed in section~\ref{sec:plin} and section~\ref{sec:cdn} respectively. Finally, we will show how to apply WSPLIN to detect and recognize the pavement distress in section~\ref{sec:pavement DR}.
\vspace{-0.2cm}
\subsection{Problem Formulation and Overview}
\label{sec:formulation}
Both pavement distress detection and recognition can be deemed as an image classification task from the perspective of computer vision. Let $X=\{x_1,\cdots,x_n\}$ and $Y=\{y_1,\cdots,y_n\}$ be the collection of pavement images and their pavement labels respectively. $y_i$ is a $C$-dimensional one-hot vector where $C$ is the number of categories and $y_{ij}$ indicates the $j$-th element of $y_i$. In the detection case, such a classification task is a binary image classification issue (distressed or normal) where $C=2$. In the recognition case, this classification task is a multi-class image classification problem where $C>2$. In a pavement label $y$, if the $j$-th element is the only nonzero element, it indicates that the corresponding pavement image belongs to the $j$-th category.
Pavement distress detection or recognition is to learn a classifier $F_{det}(\cdot)$ or $F_{reg}(\cdot)$ that can label the pavement image correctly, $y_i\leftarrow F(x_i)$.
There are two strategies for accomplishing the pavement distress recognition task. One is the two-stage recognition flow path and the other is the one-stage recognition flow path. The two-stage recognition is to identify distressed images first via pavement distress detection, and then apply the pavement distress recognition to further classify each distressed image into a specific type of pavement distress. The one-stage recognition is to directly consider the normal case as an additional category in the recognition procedure, and therefore the pavement distress detection and recognition tasks are jointly tackled with one image classification model.
Similar to the Iteratively Optimized Patch Label Inference Network (IOPLIN), WSPLIN is a patch-based pavement image classification method whose main obstacle is training the Patch Label Inference Network (PLIN) with only the image label. WSPLIN introduces an additional module named the Comprehensive Decision Network (CDN) to guide the optimization of PLINs in an end-to-end weakly supervised learning manner. The flow path of WSPLIN is very concise. In WSPLIN, the pavement image is first divided into several patches, then PLIN is used to infer the labels of these patches, and finally the inferred labels are fed into CDN to yield the final pavement label. Clearly, WSPLIN has only two core modules, namely PLIN and CDN, whose corresponding mapping functions are $\Gamma_\zeta(\cdot)$ and $\Theta_\xi(\cdot)$ respectively.
\vspace{-0.2cm}
\subsection{Patch Collection}
\label{sec:pc}
We adopt three different patch collection strategies for producing patches: Sliding Window (SW), Image Pyramid (IP), and Sparse Sampling (SS). The first strategy is also adopted by IOPLIN. WSPLIN uses the second strategy to fully exploit image information across scales. The third strategy is newly designed by us on top of the IP strategy to speed up WSPLIN by reducing the number of patches used for training.
\textbf{Sliding Window}:
The pavement image is simply divided into a series of uniform-scale patches following a non-overlapping strategy. We adopt a $300\times300$ sliding window with a stride of 300. The patch collection can be denoted as $P_i= \tau(x_i) = \{p^i_1, ..., p^i_m\}, m = 12$, where $\tau(\cdot)$ is our patch extraction operation.
\textbf{Image Pyramid}:
The sliding window strategy does not consider scale information. We therefore resize the pavement image into three resolutions, $300 \times 300$, $600\times 600$ and $1200 \times 900$ (the original size), to construct a three-layer image pyramid from top to bottom, and then employ the sliding window method to divide each layer into patches. The patch collection can be denoted as $P_i= \{\tau(x_i^l)\}_{l\in\{0,1,2\}} = \{p^i_1, ..., p^i_m\}, m=\sum_l m_l = 17$, where $x_i^0=x_i$ and $l$ indicates the layer ID. As in the sliding window strategy, we apply a $300\times300$ window with a stride of 300 in each layer of the image pyramid. So, $m_0=12$, $m_1=4$, and $m_2=1$.
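To make the two strategies above concrete, the following is a minimal Pillow-based sketch (the function names and the use of Pillow are our own choices, not part of any released implementation):
\begin{verbatim}
from PIL import Image

def slide_window(img, size=300, stride=300):
    """Sliding Window (SW): non-overlapping size x size crops."""
    W, H = img.size
    return [img.crop((x, y, x + size, y + size))
            for y in range(0, H - size + 1, stride)
            for x in range(0, W - size + 1, stride)]

def image_pyramid(img):
    """Image Pyramid (IP): 12 + 4 + 1 = 17 patches of a 1200x900 image."""
    layers = [img, img.resize((600, 600)), img.resize((300, 300))]
    return [p for layer in layers for p in slide_window(layer)]

# usage sketch: patches = image_pyramid(Image.open("pavement.jpg"))
\end{verbatim}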
\textbf{Sparse Sampling}:
The patch number determines the scale of the training data, and the patches in the same image pyramid contain some redundant information in scale space. Therefore, we can sample a subset of patches from each image to reduce the training burden and thereby speed up the model. More specifically, let $\alpha$ be the sparse sampling ratio that controls
the number of sampled patches for each layer, $n_l=\left \lceil m_{l}\times \alpha \right \rceil$, where $\left \lceil \cdot \right \rceil$ returns the smallest integer that is greater than or equal to the input. We design a simple strategy for sampling patches in each layer: the sampled patches of all three layers should cover all scales while maximizing the spatial coverage. The optimal patch sparse sampling strategy is mathematically denoted as follows,
\begin{equation}
\setlength\abovedisplayskip{1pt}
\setlength\belowdisplayskip{1pt}
\{\hat{\mathcal{C}}_l\}_{l=0,1,2}\leftarrow\arg\underset{\{\mathcal{C}_l\}_{l=0,1,2}}\max { \text{Vol}(\bigcup_{l=0}^2\bigcup_{t\in \mathcal{C}_l} p_t)}, \quad \text{s.t.}\; |\mathcal{C}_l|=n_l,
\end{equation}
where $\text{Vol}(\cdot)$ returns the volume of the given set and $\mathcal{C}_l$ denotes an index subset of the patches in the $l$-th layer. Since the number of feasible solutions to the above problem is small, we can solve it efficiently by enumeration once $n_l$ is fixed. In this paper, we empirically set $\alpha = 0.25$. In such a manner, $n_0=3$, $n_1=1$ and $n_2=1$.
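A brute-force sketch of this enumeration is given below (all names are ours; the coverage measure discretizes the union volume on a coarse grid, and since the single whole-image patch of the coarsest layer is always selected, only the two finer layers affect the objective):
\begin{verbatim}
import itertools
import numpy as np

W, H, ALPHA = 1200, 900, 0.25

# patch footprints per layer, mapped back to original-image coordinates
L0 = [(x, y, x + 300, y + 300)                          # 12 windows, original
      for y in range(0, H, 300) for x in range(0, W, 300)]
L1 = [(x * 600, y * 450, (x + 1) * 600, (y + 1) * 450)  # 4 windows, 600x600
      for y in range(2) for x in range(2)]
n0, n1 = int(np.ceil(12 * ALPHA)), int(np.ceil(4 * ALPHA))  # 3 and 1

def coverage(rects, step=50):
    """Area covered by the union of rectangles, on a step x step grid."""
    mask = np.zeros((H // step, W // step), dtype=bool)
    for x0, y0, x1, y1 in rects:
        mask[y0 // step:y1 // step, x0 // step:x1 // step] = True
    return mask.sum()

best = max(
    (list(c0) + [i1]
     for c0 in itertools.combinations(range(12), n0)
     for i1 in range(4)),
    key=lambda sel: coverage([L0[i] for i in sel[:-1]] + [L1[sel[-1]]]))
\end{verbatim}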
\emph{For distinguishing different versions of WSPLIN, WSPLIN-SW, WSPLIN-IP and WSPLIN-SS indicate the versions that use sliding window, image pyramid and sparse sampling patch collection strategies respectively. The default version of WSPLIN is WSPLIN-IP.}
\vspace{-0.2cm}
\subsection{Patch Label Inference Network}
\label{sec:plin}
Similar to IOPLIN~\cite{c16}, we adopt EfficientNet-B3~\cite{c21} as our Patch Label Inference Network (PLIN) due to its good trade-off between performance and efficiency. We denote $\Gamma_\zeta(\cdot)$ as the mapping function of PLIN. The patch label inference procedure is denoted as,
\begin{equation}
S_i = \Gamma_\zeta(\tau(x_i)),
\end{equation}
where $S_i$ is an $m\times C$-dimensional matrix in which each row encodes the label inference confidences of one patch. Such confidences are expected to be zero if the patch contains no distress, and otherwise to reflect the likelihood of the specific distress that the corresponding patch exhibits. Note that there is no patch-level supervision, so initially these labels are produced arbitrarily by forward propagation. We need to leverage the subsequent comprehensive decision network to guide PLIN to generate reasonable patch labels from image-level labels in a weakly supervised manner. We introduce this procedure later.
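A minimal PyTorch sketch of PLIN follows, assuming a recent torchvision; any patch classifier with $300\times300$ inputs would serve equally well:
\begin{verbatim}
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b3

class PLIN(nn.Module):
    """Shared EfficientNet-B3 applied to every patch; returns the
    (m x C) confidence matrix S_i per image (a sketch)."""
    def __init__(self, num_classes):
        super().__init__()
        net = efficientnet_b3(weights=None)   # backbone as in the paper
        net.classifier[-1] = nn.Linear(
            net.classifier[-1].in_features, num_classes)
        self.backbone = net

    def forward(self, patches):               # patches: (B, m, 3, 300, 300)
        B, m = patches.shape[:2]
        logits = self.backbone(patches.flatten(0, 1))       # (B*m, C)
        return torch.sigmoid(logits).view(B, m, -1)         # in [0, 1]
\end{verbatim}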
\noindent\textbf{Patch Label Sparsity Constraint (PLSC):} The distressed area is often only a small part of the pavement, and a pavement image seldom exhibits many different types of distress. Consequently, the label confidence matrix $S_i$ should have very few nonzero elements; in other words, $S_i$ should be sparse. Thus, we introduce an $L_1$-norm constraint on the label confidence matrices of the distressed training samples,
\begin{equation}
\mathcal{L}_s=\sum_{i \in \{i|y_i\neq y_{\text{normal}}\}}||S_i||_1,
\end{equation}
where $y_{\text{normal}}$ is the label of the normal pavement image. We introduce this constraint only for the distressed samples, since there should be no nonzero element in the label confidence matrices of the normal samples, i.e., $||S_i||_1=0$.
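A sketch of this constraint as a PyTorch loss term follows (the index of the ``normal'' class is an assumption of the sketch):
\begin{verbatim}
import torch

def plsc_loss(S, y, normal_idx=0):
    """L1 sparsity penalty on the confidence matrices of distressed
    images only (sketch; `normal_idx` marks the 'normal' class)."""
    distressed = y != normal_idx               # y: (B,) integer labels
    if not distressed.any():
        return S.sum() * 0.0                   # keeps the autograd graph
    return S[distressed].abs().sum(dim=(1, 2)).mean()
\end{verbatim}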
\vspace{-0.2cm}
\subsection{Comprehensive Decision Network}
\label{sec:cdn}
We establish a Comprehensive Decision Network (CDN) to accomplish the final pavement image classification based on the aforementioned patch label results. CDN consists of four layers: the first two are fully connected layers of size $m\times C$, each followed by a ReLU and a Dropout layer; the third is also a fully connected layer of size $m\times C$; and the output fully connected layer has size $C$. Here, $C$ is the number of categories. Let $\Theta_\xi(\cdot)$ be the mapping function of CDN; then the predicted pavement distress label $\hat{y}_i$ can be obtained by,
\begin{equation}
\hat{y}_i = \Theta_\xi(\Gamma_\zeta(\tau(x_i)))=\Theta_\xi(S_i).
\end{equation}
We use the cross-entropy to measure the discrepancy between the predicted label and ground-truth and denote it as the classification loss $\mathcal{L}_c$,
\begin{equation}
\label{eq:cel}
\mathcal{L}_c = -\frac{1}{n}\sum_{i=1}^n\sum_{j=1}^C y_{ij}\log{\hat{y}_{ij}}.
\end{equation}
Finally, the optimal WSPLIN model is learned by minimizing the following loss,
\begin{equation}
(\xi,\zeta) \leftarrow\arg\underset{\hat{\xi},\hat{\zeta}}\min~~\mathcal{L}_{total}:=\mathcal{L}_c+\lambda\mathcal{L}_s,
\end{equation}
where $\lambda$ is a positive parameter for reconciling the classification loss and the sparsity constraint.
WSPLIN is an end-to-end deep learning framework that uses back-propagation to compute the gradients of the loss and update the parameters layer by layer. In WSPLIN, CDN requires that the patch label results produced by PLIN be useful for the final classification, and the patch label sparsity constraint forces WSPLIN to highlight only a few of the most crucial patches for the final decision. Clearly, these highlighted patches should be the distressed ones, and their inferred patch labels should be nonzero, since only the distressed patches can provide helpful information for the final detection and recognition. In such a manner, CDN essentially guides the training of PLIN in a weakly supervised manner.
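The following sketch puts CDN and the total loss together (layer widths follow the description above; \texttt{plsc\_loss} refers to the sketch from the previous subsection):
\begin{verbatim}
import torch.nn as nn
import torch.nn.functional as F

class CDN(nn.Module):
    """Four-layer decision head on the flattened (m x C) matrix."""
    def __init__(self, m, num_classes, p_drop=0.5):
        super().__init__()
        d = m * num_classes
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(d, d), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(d, d), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(d, d),
            nn.Linear(d, num_classes))

    def forward(self, S):                      # S: (B, m, C)
        return self.net(S)

def wsplin_loss(plin, cdn, patches, y, lam=1e-3, normal_idx=0):
    """Total loss L_c + lambda * L_s (a sketch)."""
    S = plin(patches)
    ce = F.cross_entropy(cdn(S), y)
    return ce + lam * plsc_loss(S, y, normal_idx)
\end{verbatim}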
\vspace{-0.2cm}
\subsection{Pavement Distress Detection and Recognition}
\label{sec:pavement DR}
\textbf{Detection and One-Stage Recognition:}
Pavement distress detection and one-stage recognition can both be deemed one-stage pavement image classification problems. To tackle these tasks, we train our model as a pavement image classifier. Once the model is trained, a pavement image $x$ can be divided into patches with the different patch collection strategies, which are fed into WSPLIN to yield the final classification,
\begin{equation}
y = F(x) = \Theta_\xi(\Gamma_\zeta(\tau(x))),
\end{equation}
where $F \in \{F_{det}, F_{reg}\}$ and the predicted category should be corresponding to the maximum element of $y$.
\textbf{Two-Stage Recognition:}
Two-stage recognition accomplishes pavement distress recognition in two stages. The first stage trains our model as a pavement distress detector $F_{det}$ to filter out the normal samples and find the distressed samples. The second stage trains our model as a multi-class pavement image classifier $F_{reg}$ to complete the final distress recognition,
\begin{equation}
y_i = F_{reg}(x_i) = \Theta_\xi(\Gamma_\zeta(\tau(x_i))),
\end{equation}
where $x_i \in \{x|F_{det}(x)\neq y_{\text{normal}}\}$ and the maximum element of $y_i$ reflects the specific pavement distress category of the distressed pavement image $x_i$.
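A sketch of the two-stage inference flow (assuming the label spaces are arranged so that index 0 is ``normal'' and the distress categories follow):
\begin{verbatim}
import torch

@torch.no_grad()
def two_stage_recognize(detector, recognizer, patches):
    """Two-stage flow (sketch): the detector filters out normal
    images; the recognizer labels the rest. Both models are assumed
    to map a patch batch to class logits."""
    pred = detector(patches).argmax(-1)        # 0 = normal, 1 = distressed
    out = torch.zeros_like(pred)               # default: normal
    hit = pred != 0
    if hit.any():
        # distress classes assumed to occupy indices 1..C after 'normal'=0
        out[hit] = recognizer(patches[hit]).argmax(-1) + 1
    return out
\end{verbatim}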
\vspace{-0.2cm}
\section{Experiments and Results}
\vspace{-0.2cm}
\subsection{Dataset and Setup}
We test our method on pavement distress detection and recognition tasks under four application settings. The first is one-stage recognition (\textbf{I-REC}), which tackles the pavement distress detection and recognition tasks jointly. In this setting, all samples (both distressed and normal) and their fine-grained category labels are available for training and testing the model. Moreover, both detection and recognition performance can be evaluated under this setting. The second is one-stage detection (\textbf{I-DET}), the conventional detection fashion. In this setting, all samples (both distressed and normal) are involved, but only the binary coarse-grained category label (distressed or normal) is available. The other two settings both come from the two-stage recognition scenario. One is the ideal second-stage recognition \textbf{II-REC(i)}, which assumes all distressed samples are ideally detected by the first-stage detection. In this setting, the recognition models are evaluated only on distressed pavement images. The last setting is the normal second-stage recognition \textbf{II-REC(n)}. The training stages of \textbf{II-REC(i)} and \textbf{II-REC(n)} are identical, but their testing stages differ. In \textbf{II-REC(n)}, the recognition models are evaluated only on the images detected by the detection model trained in \textbf{I-DET}. In such a manner, the recognition error under this setting accumulates the errors of both the first-stage detection and the second-stage recognition, since some distressed images may be incorrectly filtered out while some normal images may be incorrectly classified as distressed by the detector in the recognition testing stage under \textbf{II-REC(n)}. The results in \textbf{II-REC(n)} therefore reflect the comprehensive performance of two-stage recognition.
A large-scale bituminous pavement distress dataset named CQU-BPDD~\cite{c16} is used to evaluate the approaches under the four application settings. This dataset involves seven different types of distress: alligator crack, crack pouring, longitudinal crack, massive crack, transverse crack, raveling, and mending. For the settings of \textbf{I-DET}, \textbf{I-REC}, and \textbf{II-REC(n)}, we simply follow the data split strategy in \cite{c16}. For the setting of \textbf{II-REC(i)}, 5140 distressed pavement images are randomly selected as the training set while the remaining 11589 distressed pavement images are used for testing. The detailed data split information of the different settings is tabulated in Table~\ref{settings}.
Similar to IOPLIN, we adopt EfficientNet-B3 as the Patch Label Inference Network (PLIN). Since the Comprehensive Decision Network (CDN) adopts fully connected layers with fixed dimensions, WSPLIN requires the input size to be fixed at $300\times300$. The optimizer is RangerLars, a combination of RAdam\cite{c24}, LookAhead\cite{c25} and LARS\cite{c26}. The learning rate is $8\times 10^{-4}$, and a cosine annealing strategy is adopted to adjust it: the learning rate remains unchanged for the first 25\% of the training process, and then gradually decreases with a cosine function for the remainder. Data augmentations such as rotation, flipping, and brightness balancing are carried out on the raw images. The dropout rate of the classification layer is 0.5.
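The flat-then-cosine schedule can be realized generically, e.g. (a sketch; we make no claim about the released training code):
\begin{verbatim}
import math
from torch.optim.lr_scheduler import LambdaLR

def flat_then_cosine(optimizer, total_steps, flat_frac=0.25):
    """Constant LR for the first 25% of training, then cosine decay."""
    def scale(step):
        if step < flat_frac * total_steps:
            return 1.0
        t = (step - flat_frac * total_steps) / \
            ((1.0 - flat_frac) * total_steps)
        return 0.5 * (1.0 + math.cos(math.pi * t))
    return LambdaLR(optimizer, scale)
\end{verbatim}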
\begin{table}[tb]
\small
\caption{The detailed information of four different application settings on CQU-BPDD.} \label{settings}
\small
\begin{tabularx}{9cm}{p{1.4cm}p{1.8cm}<{\centering}p{0.9cm}<{\centering}p{0.6cm}<{\centering}p{0.9cm}<{\centering}p{0.6cm}<{\centering}}
\toprule
\multirow{2}{*}{Setting}& \multirow{2}{*}{Classifier}&\multicolumn{2}{c}{Train} & \multicolumn{2}{c}{Test} \\
\cmidrule(r){3-4}\cmidrule(r){5-6}
&& \#Sample & \#Class & \#Sample & \#Class \\
\midrule
\textbf{I-DET} & Detector & 10137 & 2 & 49919 & 2\\[2pt]
\hdashline[1.5pt/1.5pt]
\textbf{I-REC} & I-Recognizer & 10137 & 8 & 49919 & 8 \Tstrut\\[2pt]
\hdashline[1.5pt/1.5pt]
\textbf{II-REC(i)} & II-Recognizer & 5140 & 7 & 11589 & 7 \Tstrut\\[2pt]
\hdashline[1.5pt/1.5pt]
\multirow{2}{*}{\textbf{II-REC(n)} } & Detector & 10137 & 2 & \multirow{2}{*}{49919} & \multirow{2}{*}{8} \Tstrut\\[1.5pt]
&II-Recognizer& 5140 & 7 & & \\[2pt]
\bottomrule
\end{tabularx}
\vspace{-0.5cm}
\end{table}
\vspace{-0.2cm}
\subsection{Evaluation Metrics}
\subsubsection{Evaluation Metrics of Detection}
For the pavement distress detection task, we adopt the Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC)~\cite{auc}, which is common in binary classification tasks (this metric is not affected by the classification threshold). It is mathematically defined as follows,
\begin{equation}\label{AUC}
AUC=\frac{S_{p}-N_{p}\left ( N_{p}+1 \right )/2}{N_{p}N_{n}}
\end{equation}
where $S_{p}$ is the sum of the ranks of all positive samples, while $N_{p}$ and $N_{n}$ denote the numbers of positive and negative samples.
Additionally, Binary $F1$ score, which is the harmonic mean of precision and recall, is used to measure the models more comprehensively. The binary $F1$ score is defined as:
\begin{equation}\label{f1_bin}\small
F1_{binary}= \frac{2\times \textbf{P}\times\textbf{R}}{\textbf{P} + \textbf{R}},\quad \textbf{P}=\frac{TP}{TP+FP},\quad\textbf{R}=\frac{TP}{TP+FN},
\end{equation}
where $\textbf{P}$ is the precision and \textbf{R} is the recall. $TP$, $FP$, and $FN$ are the numbers of true positives, false positives and false negatives respectively. The precision measures how many true positive samples are among the samples predicted as positive. Similarly, the recall measures how many positive samples are correctly detected among all positive samples. Moreover, in medical or pavement image analysis tasks, it is more meaningful to discuss the precision under high recall, since missing positive samples (the distressed samples) may have a more serious impact than missing negative ones.
\subsubsection{Evaluation Metrics of Recognition}
For the pavement distress recognition task, we mainly use the Top-1 accuracy and the Macro $F1$ score to evaluate the performance of models. Top-1 accuracy measures the overall accuracy of the models, while the Macro $F1$ score evaluates the accuracy of the model across different categories. The Macro $F1$ score can be mathematically represented as follows,
\begin{equation}\label{f1_marco}
F1_{macro}=\frac{1}{c}\sum_{i}^{c} F1_{binary}^{i},
\end{equation}
where $F1_{binary}^{i}$ indicates the binary $F1$ score of the $i$-th category, and $c$ is the total number of categories.
\textbf{Note:} $F1$ denotes $F1_{binary}$ in the pavement distress detection task and $F1_{macro}$ in the pavement distress recognition task.
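For concreteness, the three metrics can be computed directly from their definitions, e.g. as follows (a NumPy sketch; ties in the AUC ranking are ignored):
\begin{verbatim}
import numpy as np

def auc_rank(scores, labels):
    """AUC via the rank-sum formula above (labels in {0, 1})."""
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_p, n_n = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_p * (n_p + 1) / 2) / (n_p * n_n)

def binary_f1(pred, labels):
    tp = np.sum((pred == 1) & (labels == 1))
    fp = np.sum((pred == 1) & (labels == 0))
    fn = np.sum((pred == 0) & (labels == 1))
    return 2 * tp / (2 * tp + fp + fn)   # = harmonic mean of P and R

def macro_f1(pred, labels):
    return np.mean([binary_f1((pred == c).astype(int),
                              (labels == c).astype(int))
                    for c in np.unique(labels)])
\end{verbatim}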
\vspace{-0.2cm}
\subsection{Baselines}
Histogram of Oriented Gradient (HOG)~\cite{c27}, Local Binary Pattern (LBP)~\cite{lbp}, Fisher Vector (FV)~\cite{fv}, Support Vector Machine (SVM)~\cite{c23}, ResNet-50~\cite{c19}, Inception-v3~\cite{c13}, VGG-19~\cite{c17}, ViT-S/16~\cite{vit}, ViT-B/16~\cite{vit}, EfficientNet-B3~\cite{c21}, and the Iteratively Optimized Patch Label Inference Network (IOPLIN)~\cite{c16} are selected as baselines. The first four are shallow learning-based approaches. ResNet-50, Inception-v3, VGG-19, and EfficientNet-B3 are classical Convolutional Neural Network (CNN) models. ViT-S/16 and ViT-B/16 are recently popular transformer models. IOPLIN is a carefully designed, dedicated pavement distress detection approach.
\begin{table}[htb]
\small
\caption{The pavement distress detection performances of different approaches on the CQU-BPDD detection benchmark. P@R = $n$\% indicates the precision when the corresponding recall is equal to $n$\%. }
\centering
\begin{tabularx}{9cm}{l p{0.6cm}<{\centering} X<{\centering}X<{\centering}p{0.6cm}<{\centering}}
\toprule
Detectors(\textbf{I-DET}) & AUC & P@R=90\% & P@R=95\% & $F1$\\
\midrule
HOG+PCA+SVM~\cite{c27}& 77.7\% &31.2\%&28.4\%& -\\[1.5pt]
LBP+PCA+SVM~\cite{lbp}&82.4\% &34.9\%&30.3\%& -\\[1.5pt]
HOG+FV+SVM~\cite{fv}&88.8\% &43.9\%&35.4\%& -\\[1.5pt]
ResNet-50~\cite{c19} & 90.5\% & 45.0\% & 35.3\% & -\\[1.5pt]
Inception-v3~\cite{c13} & 93.3\% & 56.0\% & 42.3\% & -\\[1.5pt]
VGG-19~\cite{c17} & 94.2\% & 60.0\% & 45.0\% & -\\[1.5pt]
ViT-S/16~\cite{vit} & 95.4\% & 67.7\% & 51.0\% & 81.1\%\\[1.5pt]
ViT-B/16~\cite{vit} & 96.1\% & 71.2\% & 56.1\% & 80.6\%\\[1.5pt]
EfficientNet-B3~\cite{c21} & 95.4\% & 68.9\% & 51.1\% & 81.3\%\\[1.5pt]
IOPLIN~\cite{c16} & 97.4\% & 81.7\% & 67.0\% & 85.3\%\\[1.5pt]
{\bf WSPLIN-IP} & {\bf 97.5\%} & {\bf 83.2\%} & {\bf 69.5\%} & {\bf 86.4\%}\\[1.5pt]
\midrule
Recognizers(\textbf{I-REC}) & AUC & P@R=90\% & P@R=95\% & $F1$\\[2pt]
\midrule
EfficientNet-B3~\cite{c21} & 96.0\% & 77.3\% & 59.9\% & 83.2\%\\[1.5pt]
{\bf WSPLIN-IP} & {\bf 97.6\%} & {\bf 85.3\%} & {\bf 72.6\%} & {\bf 87.4\%}\\[1.5pt]
\bottomrule
\end{tabularx}
\label{tab:compare_binary}
\vspace{-0.5cm}
\end{table}
\vspace{-0.4cm}
\subsection{Pavement Distress Detection}
Table~\ref{tab:compare_binary} reports the pavement distress detection performances of different approaches. These approaches include the detectors trained under \textbf{I-DET} and the recognizers trained under \textbf{I-REC}, where the recognizers address the detection issue along with the recognition task. As the table shows, WSPLIN-IP outperforms all baselines in all evaluation metrics under both \textbf{I-DET} and \textbf{I-REC}. In \textbf{I-DET}, WSPLIN-IP improves the performance of IOPLIN by 0.1\%, 1.5\%, 2.5\%, and 1.1\% in AUC, P@R=90\%, P@R=95\%, and F1-score respectively. In \textbf{I-REC}, WSPLIN-IP achieves 1.6\%, 8.0\%, 12.7\%, and 4.2\% performance gains over EfficientNet-B3 in AUC, P@R=90\%, P@R=95\%, and F1-score respectively. Moreover, the methods under \textbf{I-REC} consistently perform much better than the ones under~\textbf{I-DET}. For example, the WSPLIN-IP trained under \textbf{I-REC} achieves 0.1\%, 2.1\%, 3.1\%, and 1.0\% performance gains over the WSPLIN-IP trained under \textbf{I-DET} in AUC, P@R=90\%, P@R=95\%, and F1-score respectively. Similarly, the gains of EfficientNet-B3 are 0.6\%, 8.4\%, 8.8\%, and 1.9\%. We attribute this to the fact that recognizers trained under \textbf{I-REC} use fine-grained distress labels instead of binary distress labels to train the pavement image classification models. This indicates that finer-grained supervision, such as the specific pavement distress information, can benefit pavement distress detection.
\begin{table}[tb]
\small
\centering
\caption{The pavement distress recognition performances of different methods under different settings, namely \textbf{I-REC}, \textbf{II-REC(n)} and \textbf{II-REC(i)}. "Para." indicates the parameter scale of the deep learning model. Top-1 indicates the top-1 accuracy.}
\begin{tabularx}{9cm}{p{3.3cm} X<{\centering}X<{\centering}X<{\centering} }
\toprule
Recognizers(\textbf{I-REC}) &Para. & Top-1& $F1$\\\midrule
ResNet-50 \cite{c19}&23M & 88.3\% & 60.2\% \\[1.5pt]
VGG-16 \cite{c17}&134M & 87.7\% & 58.4\% \\[1.5pt]
ViT-S/16~\cite{vit}&22M & 86.8\% & 59.0\% \\[1.5pt]
ViT-B/16~\cite{vit}&86M & 88.1\% & 61.2\% \\[1.5pt]
Inception-v3 \cite{c13}&22M & 89.3\% & 62.9\%\\[1.5pt]
EfficientNet-B3~\cite{c21}&11M & 88.1\% &63.2\%\\[1.5pt]
\textbf{WSPLIN-IP}&\textbf{11M} & \textbf{91.1\%} &\textbf{66.3\%}\\[1.5pt] \midrule
Recognizers(\textbf{II-REC(n)}) &Para. & Top-1& $F1$\\
\midrule
ResNet-50 \cite{c19}&23M & 82.8\% & 53.6\% \\[1.5pt]
VGG-16 \cite{c17}&134M & 86.5\% & 55.0\% \\[1.5pt]
ViT-S/16~\cite{vit}&22M & 86.7\% & 56.6\% \\[1.5pt]
ViT-B/16~\cite{vit}&86M & 87.5\% & 59.6\% \\[1.5pt]
Inception-v3 \cite{c13} &22M& 88.3\% & 59.8\% \\[1.5pt]
EfficientNet-B3 \cite{c21}&11M & 88.9\% & 61.2\% \\[1.5pt]
\textbf{WSPLIN-IP} & \textbf{11M} & \textbf{90.0\%} & \textbf{64.5\%} \\[1.5pt]
\midrule
Recognizers(\textbf{II-REC(i)}) &Para. & Top-1& $F1$\\
\midrule
ResNet-50 \cite{c19}&23M & 71.2\% & 61.5\% \\[1.5pt]
VGG-16 \cite{c17}&134M & 74.6\% & 65.0\% \\[1.5pt]
ViT-S/16~\cite{vit}&22M & 75.0\% & 64.9\% \\[1.5pt]
ViT-B/16~\cite{vit}&86M & 75.3\% & 67.0\% \\[1.5pt]
Inception-v3 \cite{c13} &22M& 77.6\% & 69.8\% \\[1.5pt]
EfficientNet-B3 \cite{c21}&11M & 78.6\% & 70.3\% \\[1.5pt]
\textbf{WSPLIN-IP} & \textbf{11M} & \textbf{85.0\%} & \textbf{77.2\%} \\[1.5pt]
\bottomrule
\end{tabularx}
\vspace{-0.5cm}
\label{rec}
\end{table}
\vspace{-0.5cm}
\subsection{Pavement Distress Recognition}
Table~\ref{rec} records the pavement distress recognition performances and parameter scales of different approaches under different application settings on the CQU-BPDD dataset. As with pavement distress detection, WSPLIN-IP achieves better recognition performance than the baselines under all settings while having a smaller parameter scale. In \textbf{I-REC}, the performance gains of WSPLIN-IP over Inception-v3, the second-best method, are 1.8\% and 3.4\% in top-1 accuracy and F1-score, respectively. In~\textbf{II-REC(n)} and \textbf{II-REC(i)}, EfficientNet-B3 achieves the second-best performance. The gains of WSPLIN-IP over it under \textbf{II-REC(n)} are 1.1\% and 3.3\% in top-1 accuracy and F1-score, respectively; under \textbf{II-REC(i)}, the gains are 6.4\% and 6.9\%. The distribution of the different pavement image categories is imbalanced. Top-1 accuracy is sensitive to this imbalance while the F1-score is more stable under it, so the F1-score better reflects the comprehensive performance of recognizers. According to these observations, WSPLIN-IP shows a more pronounced advantage over the baselines in F1-score.
The test settings of \textbf{I-REC} and \textbf{II-REC(n)} are identical, as seen in Table~\ref{settings}. However, the models trained under~\textbf{I-REC} outperform those of \textbf{II-REC(n)}. For example, Inception-v3, ViT-B/16, EfficientNet-B3, and WSPLIN-IP trained under \textbf{I-REC} achieve 3.1\%, 1.6\%, 2.0\%, and 1.8\% improvements in F1-score over their counterparts under \textbf{II-REC(n)}. This implies that the end-to-end pavement distress recognition solution, which addresses the detection and recognition tasks jointly (\textbf{I-REC}), has more advantages in real-world applications than the conventional two-stage solution, which addresses the detection and recognition tasks individually (\textbf{II-REC(n)}). We attribute this to the fact that the end-to-end solution exploits the complementarity of the two tasks and enables global optimization.
An interesting phenomenon observed in Table~\ref{rec} is that the top-1 accuracies of \textbf{II-REC(i)} are lower than those of the other two settings while its F1-scores are higher. This is because the test setting of \textbf{II-REC(i)}, unlike those of \textbf{I-REC} and \textbf{II-REC(n)}, does not involve any normal pavement samples. Top-1 accuracy is measured sample-wise, whereas the F1-score is measured category-wise. The normal samples comprise a large proportion of the whole data in \textbf{I-REC} and \textbf{II-REC(n)}. The superabundant normal samples therefore bias the recognizers trained under \textbf{I-REC} toward classifying the normal samples, which leads to high top-1 accuracy but a low F1-score. With regard to \textbf{II-REC(n)}, the massive number of normal samples pushes up the top-1 accuracy. However, the F1-score is independent of the sample amount, and the classification error of \textbf{II-REC(n)} is accumulated from both the detection and recognition stages. Therefore, it achieves a lower F1-score in comparison with \textbf{II-REC(i)}.
\vspace{-0.2cm}
\subsection{Ablation Study}
In this section, we will systematically discuss the effects of different components and different hyperparameters on our model. Table~\ref{ablation} records the performances and efficiencies of our approaches under different settings in comparison with IOPLIN.
\begin{table}[b]
\vspace{-0.3cm}
\small
\caption{The ablation study results of the proposed approaches under the same memory in different settings and evaluation metrics. (w/o indicates without and PLSC is the patch label sparsity constraint.)}
\begin{tabular}{p{3.15cm}p{1.3cm}<{\centering}p{1.2cm}<{\centering}c}
\toprule
\textbf{Method} & \tabincell{c}{\textbf{I-DET}\\(P@R=90\%)} & \tabincell{c}{\textbf{I-REC} \\($F1$)}& \tabincell{c}{\textbf{TrainTime}\\(Reduction)}\\
\midrule
IOPLIN & 81.7\% & - &12.5h\\%[1.5pt]
WSPLIN-SW & 80.9\% & 64.4\% &9.6h (\textcolor{blue}{-23\%})\\%[1.5pt]
WSPLIN-IP w/o PLSC & 81.2\%&64.5\% &11.0h (\textcolor{blue}{-12\%}) \\%[1.5pt]
WSPLIN-IP & \textbf{83.2\%} & \textbf{66.3\%} &11.1h (\textcolor{blue}{-11\%})\\%[1.5pt]
WSPLIN-SS ($\alpha = 0.25$) & 81.1\% & 64.1\%& \textbf{3.2h (\textcolor{blue}{-74\%})}\\%[1.5pt]
WSPLIN-SS ($\alpha = 0.50$) & 81.4\% & 64.9\% &5.7h (\textcolor{blue}{-54\%})\\%[1.5pt]
WSPLIN-SS ($\alpha = 0.75$) & 80.0\% & 63.7\% & 8.4h (\textcolor{blue}{-33\%})\\%[1.5pt]
\bottomrule
\end{tabular}
\label{ablation}
\end{table}
\subsubsection{Discussion on Patch Collection Strategies}We adopt three strategies, named Sliding Window (SW), Image Pyramid (IP), and Sparse Sampling (SS), to collect the patches from pavement images. Their corresponding versions are WSPLIN-SW, WSPLIN-IP, and WSPLIN-SS respectively. Among the three versions, WSPLIN-IP, which is also the default version of WSPLIN, achieves the best performance under both application settings and all evaluation metrics. WSPLIN-IP achieves a 1.5\% performance gain over IOPLIN in P@R=90\% in the pavement distress detection case, while its training time is only 89\% of the training time of IOPLIN. In comparison with WSPLIN-SW, WSPLIN-IP exploits not only the local information but also the scale information of the pavement image. The results indicate that such scale information can further improve the performance of WSPLIN. Although WSPLIN-SW does not outperform IOPLIN (a 0.8\% performance decrease in pavement distress detection), it is much faster than WSPLIN-IP, and its training takes only around 3/4 of the training time of IOPLIN. We attribute this to the efficiency advantage of the end-to-end model optimization strategy. Moreover, compared with the other versions, WSPLIN-SW does not suffer from scale variation, so it can produce better patch label inference visualizations and thereby enjoys better interpretability.
WSPLIN-SS also takes the scale information into consideration, and can be deemed a speedy version of WSPLIN-IP. The best-performing WSPLIN-SS ($\alpha = 0.5$) achieves similar performance to IOPLIN, where $\alpha$ is the hyperparameter controlling the number of patches sampled from each layer of the image pyramid. However, WSPLIN-SS saves around half of the training time compared with IOPLIN, and takes only about half the training time of WSPLIN-IP. Clearly, WSPLIN-SS greatly speeds up WSPLIN with only an acceptable performance decrease. Another interesting observation is that WSPLIN-SS with a higher $\alpha$ does not always perform better. Generally, a higher $\alpha$ implies collecting more patches, which means more information is preserved for classification. However, the results indicate that not all preserved information is necessary for classification. Moreover, fewer patches per image means that more images can fit into one batch during model optimization, since the memory size is fixed in our case. The higher diversity of pavement images in each batch benefits the model optimization. A good sparse sampling strategy should optimize the trade-off between patch preservation and the diversity of samples within a batch. We believe this is the reason why WSPLIN-SS ($\alpha = 0.50$) performs well in both tasks.
In summary, all WSPLIN versions show prominent advantages in training efficiency with similar or even better performance. We recommend WSPLIN-IP for application scenarios that prioritize performance over efficiency, WSPLIN-SS for scenarios that must balance performance and efficiency, and WSPLIN-SW for scenarios focused on the visual analysis of distressed images.
\subsubsection{Discussion on Patch Label Sparsity Constraint}
The distressed area is often a small proportion of the whole image, so we introduce the Patch Label Sparsity Constraint (PLSC) to model and leverage this prior for better addressing the pavement image classification issue. Table~\ref{ablation} reports the performance of WSPLIN-IP with and without PLSC. WSPLIN-IP with PLSC achieves a 2.0\% higher P@R=90\% under \textbf{I-DET} and a 1.8\% higher F1-score under \textbf{I-REC} than WSPLIN-IP without PLSC. This implies that PLSC offers a considerable improvement to WSPLIN. We also leverage Grad-CAM~\cite{grad_cam} to plot the Class Activation Maps (CAM) of the features extracted by the WSPLIN-IP models before and after using PLSC in Figure~\ref{pic_heatmap}. The CAM visualization results also validate that PLSC benefits the distressed feature extraction.
$\lambda$ is a positive hyperparameter for reconciling the classification loss and the PLSC. Figure~\ref{pic_lambda} plots the relationships between the different values of $\lambda$ and the performances of WSPLIN-IP under \textbf{I-DET} and \textbf{I-REC}. From observations, we can find that the WSPLIN-IP is insensitive to the setting of $\lambda$. The optimal $\lambda$ is $10^{-3}$.
\begin{figure}[tb]
\begin{center}
\includegraphics[scale=0.35]{./PLSC_CAM.pdf}
\caption{The Class Activation Map (CAM) Visualizations of the WSPLIN-IP models trained with or without PLSC. From the left column to the right column respectively indicate the ground truth, CAMs of baseline, CAMs of WSPLIN-IP without PLSC, and CAMs of WSPLIN-IP with PLSC.}
\label{pic_heatmap}
\vspace{-0.4cm}
\end{center}
\end{figure}
\begin{figure}[tb]
\begin{center}
\includegraphics[scale=0.5]{./hyperpara.pdf}
\vspace{-0.2cm}
\caption{ The impacts of different $\lambda$ to the performance of WSPLIN-IP under \textbf{I-DET} and \textbf{I-REC}.}
\label{pic_lambda}
\end{center}
\vspace{-0.6cm}
\end{figure}
\subsubsection{The Efficiency of WSPLIN}
As Table~\ref{ablation} shows, all WSPLIN versions are more efficient than IOPLIN. Moreover, IOPLIN and WSPLIN have very similar network structures, so they have the same parameter scale.
\vspace{-0.2cm}
\subsection{User Scenarios}
WSPLIN has wider application scenarios in comparison to IOPLIN. IOPLIN can only address the pavement distress detection problem, which is a typical binary image classification issue and attempts only to find the distressed samples. WSPLIN can tackle both the pavement distress detection and recognition tasks under the various application settings shown in Table~\ref{settings}. The pavement distress recognition task is a multi-class image classification task, which tries to classify the pavement image into a specific distress category. The usages of WSPLIN and IOPLIN are similar. Once the corresponding models are trained, we can input pavement images into the models to acquire their detection or recognition labels. For more details about their usage, please refer to~\cite{c16}. Similar to IOPLIN, WSPLIN can also roughly localize the distressed areas of a pavement image. The main difference between IOPLIN and WSPLIN in this process is that WSPLIN can further recognize the type of distress in those areas. Figure~\ref{fig:visualization} gives some examples of this scenario by visualizing the patch labels produced by the trained WSPLIN.
\begin{figure}[tb]
\centering
\includegraphics[width=9cm, keepaspectratio]{ROI_vertical.jpg}
\caption{The visualizations of pavement images labeled by WSPLIN in the patch level.}
\label{fig:visualization}
\vspace{-0.4cm}
\end{figure}
\vspace{-0.2cm}
\section{Conclusions}
\label{sec:conclusions}
In this paper, we present a novel patch-based deep learning model named WSPLIN for automatic pavement distress detection and recognition in the wild. WSPLIN divides the pavement image into patches with different patch collection strategies and then infers the labels of these patches in a weakly supervised manner. Finally, the inferred patch labels are fed into a comprehensive decision network to yield the final recognition results. Similar to IOPLIN, WSPLIN can sufficiently utilize the resolution and scale information, and can also provide interpretable information, such as the location of the distressed area. However, WSPLIN is more efficient than IOPLIN with similar or even better performance. Experiments on a large pavement distress dataset validate the effectiveness of our approach.
\footnotesize
\bibliographystyle{IEEEtran}
\section{Introduction}
The study of supremum of Gaussian processes is a major area of study in probability and functional analysis as epitomized by the celebrated {\it majorizing measures} theorem of Fernique and Talagrand (see \cite{LedouxT}, \cite{Talagrand05} and references therein). There is by now a rich body of work on obtaining tight estimates and characterizations of the supremum of Gaussian processes with several applications in analysis \cite{Talagrand05}, convex geometry \cite{Pisier99} and more. Recently, in a striking result, Ding, Lee and Peres \cite{DingLP11} used the theory to resolve the Winkler-Zuckerman {\it blanket time} conjectures \cite{WinklerZ96}, indicating the usefulness of Gaussian processes even for the study of combinatorial problems over discrete domains.
Ding, Lee and Peres \cite{DingLP11} used the powerful {\it Dynkin isomorphism theory} and majorizing measures theory to establish a structural connection between the cover time (and blanket time) of a graph $G$ and the supremum of a Gaussian process associated with the Gaussian Free Field on $G$. They then use this connection to resolve the Winkler-Zuckerman blanket time conjectures and to obtain the first deterministic polynomial time constant factor approximation algorithm for computing the cover time of graphs. This latter result resolves an old open question of Aldous and Fill (1994).
Besides showing the relevance of the study of Gaussian processes to discrete combinatorial questions, the work of Ding, Lee and Peres gives evidence that studying Gaussian processes could even be an important algorithmic tool; an aspect seldom investigated in the rich literature on Gaussian processes in probability and functional analysis. Here we address the corresponding computational question directly, which, given the importance of Gaussian processes in probability, could be of use elsewhere. In this context, the following question was asked by Lee \cite{leepost} and Ding \cite{Ding11}.
\begin{question}
For every $\epsilon > 0$, is there a deterministic polynomial time algorithm that given a set of vectors $v_1,\ldots,v_m \in \rd$, computes a $(1 + \epsilon)$-factor approximation to $\ex_{X \lfta \N^d}[\sup_i |\inp{v_i,X}|]$\footnote{Throughout, $\N$ denotes the univariate Gaussian distribution with mean $0$ and variance $1$ and for a distribution $\calD$, $X \lfta \calD$ denotes a random variable with distribution $\calD$. By a $\alpha$-factor approximation to a quantity $X$ we mean a number $p$ such that $p \leq X \leq \alpha p$.}.
\end{question}
We remark that Lee \cite{leepost} and \cite{Ding11} actually ask for an approximation to $\ex_{X \lfta \N^d}[\sup_i \inp{v_i, X}]$ (and not the supremum of the absolute value). However, this formulation results in a somewhat artificial asymmetry and for most interesting cases these two are essentially equivalent: if $\ex_{X \lfta \N^d}[\sup_i \inp{v_i, X}] = \omega(\max_i \nmt{v_i})$, then $\ex_{X \lfta \N^d}[\sup_i |\inp{v_i, X}|] = (1+o(1)) \ex_{X \lfta \N^d}[\sup_i \inp{v_i, X}]$\footnote{This follows from standard concentration bounds for supremum of Gaussian processes; we do not elaborate on it here as we ignore this issue.}. We shall overlook this distinction from now on.
There is a simple randomized algorithm for the problem: sample a few Gaussian vectors and output the median supremum value over the sampled vectors. This, however, requires $O(d\log d/\epsilon^2)$ random bits. Using Talagrand's majorizing measures theorem, Ding, Lee and Peres give a deterministic polynomial time $O(1)$-factor approximation algorithm for the problem. This approach inherently cannot yield a PTAS, as the majorizing measures characterization is bound to lose a universal constant factor. Here we give a PTAS for the problem, thus resolving the above question.
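For concreteness, the randomized baseline reads as follows (a NumPy sketch; names are ours):
\begin{verbatim}
import numpy as np

def sup_gaussian_mc(V, trials=2000, seed=0):
    """Monte-Carlo estimate of E sup_i |<v_i, X>|: sample Gaussian
    vectors and report the median of the per-sample suprema."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((trials, V.shape[1]))   # trials x d Gaussians
    return float(np.median(np.abs(X @ V.T).max(axis=1)))
\end{verbatim}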
\begin{theorem}\label{th:main}
For every $\epsilon > 0$, there is a deterministic algorithm that given a set of vectors $v_1,\ldots,v_m \in \rd$, computes a $(1 + \epsilon)$-factor approximation to $\ex_{x \lfta \N^d}[\sup_i |\inp{v_i,x}|]$ in time $\poly(d) \cdot m^{\tilde{O}(1/\epsilon^2)}$.
\end{theorem}
Our approach is comparatively simpler than that of Ding, Lee and Peres, relying on some classical {\it comparison inequalities} in convex geometry.
We explain our result on estimating semi-norms with respect to Gaussian measures, mentioned in the abstract, in \sref{sec:linest}.
We next discuss some applications of our result to computing cover times of graphs as implied by the works of Ding, Lee and Peres \cite{DingLP11} and Ding \cite{Ding11}.
\subsection{Application to Computing Cover Times of Graphs}
The study of random walks on graphs is an important area of research in probability, algorithm design, statistical physics and more. As this is not the main topic of our work, we avoid giving formal definitions and refer the readers to \cite{AldousF}, \cite{Lovasz93} for background information.
Given a graph $G$ on $n$ vertices, the cover time, $\tau_{cov}(G)$, of $G$ is defined as the expected time a random walk on $G$ takes to visit all the vertices in $G$ when starting from the worst possible vertex in $G$. Cover time is a fundamental parameter of graphs and is extensively studied. Algorithmically, there is a simple randomized algorithm for approximating the cover time: simulate a few trials of the random walk on $G$ for $\poly(n)$ steps and output the median cover time. However, without randomness the problem becomes significantly harder. This was one of the motivations of the work of Ding, Lee and Peres \cite{DingLP11}, who gave the first deterministic constant factor approximation algorithm for the problem, improving on an earlier work of Kahn, Kim, Lov\'asz and Vu \cite{KahnKLV00}, who obtained a deterministic $O((\log \log n)^2)$-factor approximation algorithm. For the simpler case of trees, Feige and Zeitouni \cite{FeigeZ09} gave an FPTAS.
Ding, Lee and Peres also conjectured that the cover time of a graph $G$ (satisfying a certain reasonable technical condition) is asymptotically equivalent to the supremum of an explicitly defined Gaussian process---the Gaussian Free Field on $G$. However, this conjecture, though quite interesting on its own, is not enough to give a PTAS for cover time; one still needs a PTAS for computing the supremum of the relevant Gaussian process. Our main result provides this missing piece, thus removing one of the obstacles in their posited strategy to obtain a PTAS for computing the cover time of graphs. Recently, Ding \cite{Ding11} showed the main conjecture of Ding, Lee and Peres to be true for bounded-degree graphs and trees. Thus, combining his result (see Theorem 1.1 in \cite{Ding11}) with \tref{th:main}, we get a PTAS for computing the cover time of bounded-degree graphs with $\tau_{hit}(G) = o(\tau_{cov}(G))$\footnote{The hitting time $\tau_{hit}(G)$ is defined as the maximum over all pairs of vertices $u,v \in G$ of the expected time for a random walk starting at $u$ to reach $v$. See the discussion in \cite{Ding11} for why this is a reasonable condition.}. As mentioned earlier, such algorithms were previously known only for trees \cite{FeigeZ09}.
\ignore{
\begin{theorem}
For every $\epsilon > 0$ and $\Delta > 0$ there exists a constant $C_{\Delta,\epsilon}$ such that the following holds. For every graph $G$ with maximum degree at most $\Delta$ and $\tau_{hit}(G) < C_{\Delta,\epsilon}(\tau_{cov}(G))$, there exists a deterministic $n^{O_{\Delta,\epsilon}(1)}$-time algorithm to compute a $(1+\epsilon)$-factor approximation to $\tau_{cov}(G)$.
\end{theorem}}
\section{Outline of Algorithm}
The high level idea of our PTAS is as follows. Fix the set of vectors $V = \{v_1,\ldots,v_m\} \subseteq \R^d$ and $\epsilon > 0$. Without loss of generality suppose that $\max_{v \in V} \nmt{v} = 1$. We first reduce the dimension of $V$ by projecting $V$ onto a space of dimension $O((\log m)/\epsilon^2)$ \`a la the classical Johnson-Lindenstrauss lemma (JLL). We then give an algorithm that runs in time polynomial in the number of vectors but exponential in the underlying dimension. Our analysis relies on two elegant comparison inequalities in convex geometry---Slepian's lemma \cite{Slepian62} for the first step and Kanter's lemma \cite{Kanter77} for the second step. We discuss these modular steps below.
\subsection{Dimension Reduction} We project the set of vectors $V\subseteq \R^d$ to $\R^k$ for $k = O((\log m)/\epsilon^2)$ so as to preserve all pairwise (Euclidean) distances within a $(1+\epsilon)$-factor as in the Johnson-Lindenstrauss lemma (JLL). We then show that the expected supremum of the {\it projected} Gaussian process is within a $(1 + \epsilon)$ factor of the original value. The intuition is that the supremum of a Gaussian process, though a global property, can be controlled by pairwise correlations between the variables. To quantify this, we use Slepian's lemma, which lets us compare the suprema of two Gaussian processes in terms of their pairwise correlations. Finally, observe that using known derandomizations of the JLL, the dimension reduction can be done deterministically in time $\poly(d,m,1/\epsilon)$ \cite{DJLL}.
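A sketch of this step (with a random Gaussian projection standing in for the deterministic construction of \cite{DJLL}, and an illustrative constant in the choice of $k$) might read:
\begin{verbatim}
import numpy as np

def jl_project(V, eps, seed=0):
    """Project the rows of V from R^d to R^k, k = O(log(m)/eps^2).

    A random projection is used here as a stand-in for the
    derandomized JLL; it preserves all pairwise distances within
    (1 + eps) with high probability.
    """
    rng = np.random.default_rng(seed)
    m, d = V.shape
    k = max(1, int(np.ceil(8.0 * np.log(max(m, 2)) / eps ** 2)))
    A = rng.standard_normal((d, k)) / np.sqrt(k)
    return V @ A      # rows are the projected vectors A(v_i)
\end{verbatim}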
Thus, to obtain a PTAS it would be enough to have a deterministic algorithm that approximates the supremum of a Gaussian process in time exponential in the dimension $k = O((\log m)/\epsilon^2)$. Unfortunately, a naive argument that discretizes the Gaussian measure in $\R^k$ leads to a run-time of at least $k^{O(k)}$, which gives an $m^{O((\log \log m)/\epsilon^2)}$-time algorithm. This question was recently addressed by Dadush and Vempala \cite{DadushV12}, who needed a similar sub-routine for their work on computing {\it M-Ellipsoids} of convex sets and gave a deterministic algorithm with a run-time of $(\log k)^{O(k)}$. Combining their algorithm with the dimension reduction step gives a deterministic $m^{O((\log \log \log m)/\epsilon^2)}$-time algorithm for approximating the supremum. We next get rid of this $\omega(1)$ dependence in the exponent.
\subsection{Oblivious Linear Estimators for Semi-Norms}\label{sec:linest}
We, in fact, solve a more general problem by constructing an optimal {\it linear estimator} for semi-norms in Gaussian space.
Let $\phi:\R^k \rgta \R_+$ be a semi-norm, i.e., $\phi$ is homogeneous and satisfies the triangle inequality. For normalization purposes, we assume that $1 \leq \ex_{x \lfta \N^k}[\phi(x)]$ and that the Lipschitz constant of $\phi$ is at most $k^{O(1)}$. Note that the supremum function $\phi_V(x) = \sup_{v \in V}|\inp{v,x}|$ satisfies these conditions. Our goal will be to compute a $(1+\epsilon)$-factor approximation to $\ex_{x \lfta \N^k}[\phi(x)]$ in time $2^{O_\epsilon(k)}$.
\ignore{
\begin{theorem}\label{th:epsnet}
For every $\epsilon > 0$, there exists a distribution $\calD$ on $\R^k$ which can be sampled using $O(k\log(1/\epsilon))$ bits in time $\poly(k,1/\epsilon)$ and space $O(\log k + \log(1/\epsilon))$ such that for every semi-norm $\phi:\R^k \rgta \R_+$,
\[(1-\epsilon) \ex_{x \lfta \calD}[\phi(x)] \leq \ex_{x \lfta \N^k}[\phi(x)] \leq (1+\epsilon)\ex_{x \lfta \calD}[\phi(x)].\]
In particular, there exists a deterministic $(1/\epsilon)^{O(k)}$-time algorithm for computing a $(1+\epsilon)$-factor approximation to $\ex_{X \lfta \N^k}[\phi(X)]$ using only oracle access to $\phi$.
\end{theorem}}
\begin{theorem}\label{th:epsnetintro}
For every $\epsilon > 0$, there exists a deterministic algorithm running in time $(1/\epsilon)^{O(k)}$ and space $\poly(k,1/\epsilon)$ that computes a $(1+\epsilon)$-factor approximation to $\ex_{X \lfta \N^k}[\phi(X)]$ using only oracle access to $\phi$.
\end{theorem}
Our algorithm has the additional property of being an {\it oblivious linear estimator}: the set of query points does not depend on $\phi$ and the output is a positive weighted sum of the evaluations of $\phi$ on the query points. Further, the construction is essentially optimal as any such oblivious estimator needs to make at least $(1/\epsilon)^{\Omega(k)}$ queries (see \sref{sec:appendix}). In comparison, the previous best bound of Dadush and Vempala \cite{DadushV12} needed $(\log k)^{O(k)}$ queries. We also remark that the query points of our algorithm are essentially the same as those of Dadush and Vempala; however, our analysis is quite different and leads to better parameters.
As in the analysis of the dimension reduction step, our analysis of the oblivious estimator relies on a comparison inequality---Kanter's lemma---that allows us to ``lift'' a simple estimator for the univariate case to the multi-dimensional case.
We first construct a symmetric distribution $\mu$ on $\R$ that has a simple {\it piecewise flat graph} and {\it sandwiches} the one-dimensional Gaussian distribution in the following sense. Let $\nu$ be a ``shrinking'' of $\mu$ defined to be the probability density function (pdf) of $(1-\epsilon)x$ for $x \lfta \mu$. Then, for every symmetric interval $I \subseteq \R$, $\mu(I) \leq \N(I) \leq \nu(I)$.
Kanter's lemma \cite{Kanter77} says that for pdf's $\mu,\nu$ as above that are in addition {\it unimodal}, the above relation carries over to the product distributions $\mu^k, \nu^k$: for every symmetric convex set $K \subseteq \R^k$, $\mu^k(K) \leq \N^k(K) \leq \nu^k(K)$. This last inequality immediately implies that semi-norms cannot {\it distinguish} between $\mu^k$ and $\N^k$: for any semi-norm $\phi$, $\ex_{\mu^k}[\phi(x)] = (1\pm \epsilon)\ex_{\N^k}[\phi(x)]$. We then suitably prune the distribution $\mu^k$ to have small support and prove \tref{th:epsnet}.\\
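The one-dimensional building block is easy to compute explicitly, since each cell mass of $\mu$ is a difference of two values of the Gaussian distribution function. A minimal sketch (function names and the truncation parameter are ours):
\begin{verbatim}
from math import erf, sqrt, ceil

def gaussian_mass(a, b):
    """gamma([a, b)): standard Gaussian mass of the interval [a, b)."""
    Phi = lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0)))
    return Phi(b) - Phi(a)

def mu_cells(eps, tail=10.0):
    """Cell masses of the piecewise-flat approximator mu, for i >= 0:
    mu spreads gamma's mass on [i*delta, (i+1)*delta) evenly over that
    cell, with delta = (2*eps)^(3/2); mu extends to x < 0 by symmetry.
    Cells beyond |x| > tail carry negligible mass and are dropped.
    """
    delta = (2.0 * eps) ** 1.5
    n = ceil(tail / delta)
    return delta, [gaussian_mass(i * delta, (i + 1) * delta)
                   for i in range(n)]
\end{verbatim}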
Our main result, \tref{th:main}, follows by first reducing the dimension as in the previous section and applying \tref{th:epsnet} to the semi-norm $\phi:\R^k \rgta \R_+$, $\phi(x) = \sup_i|\inp{u_i,x}|$ for the projected vectors $\{u_1,\ldots,u_m\}$.
\section{Dimension Reduction}
The use of JLL type random projections for estimating the supremum comes from the following comparison inequality for Gaussian processes. We call a collection of real-valued random variables $\{X_t\}_{t \in T}$ a Gaussian process if every finite linear combination of the variables has a normal distribution with mean zero. For a reference to Slepian's lemma we refer the reader to Corollary 3.14 and the following discussion in \cite{LedouxT}.
\begin{theorem}[Slepian's Lemma \cite{Slepian62}]\label{lm:slepian}
Let $\{X_t\}_{t \in T}$ and $\{Y_t\}_{t \in T}$ be two Gaussian processes such that for every $s,t \in T$, $\ex[(X_s - X_t)^2] \leq \ex[(Y_s - Y_t)^2]$. Then, $\ex[\sup_t X_t] \leq \ex[\sup_t Y_t]$.
\end{theorem}
We also need a derandomized version of the Johnson-Lindenstrauss Lemma.
\begin{theorem}[\cite{DJLL}]\label{th:djll}
For every $\epsilon > 0$, there exists a deterministic $(d m^2 (\log m + 1/\epsilon)^{O(1)})$-time algorithm that given vectors $v_1,\ldots,v_m \in \R^d$ computes a linear mapping $A:\R^d \rgta \R^k$ for $k = O((\log m)/\epsilon^2)$ such that for every $i,j \in [m]$, $\nmt{v_i - v_j} \leq \nmt{A(v_i) - A(v_j)} \leq (1+\epsilon)\nmt{v_i - v_j}$.
\end{theorem}
Combining the above two theorems immediately implies the following.
\begin{lemma}\label{lm:derandjl}
For every $\epsilon > 0$, there exists a deterministic $(d m^2 (\log m + 1/\epsilon)^{O(1)})$-time algorithm that given vectors $v_1,\ldots,v_m \in \R^d$ computes a linear mapping $A:\R^d \rgta \R^k$ for $k = O((\log m)/\epsilon^2)$ such that
\begin{equation}
\label{eq:jllsup}
\ex_{x \lfta \N^d}[\sup_i |\inp{v_i,x}|] \leq \ex_{y \lfta \N^k}[\sup_i |\inp{A(v_i),y}|] \leq (1+\epsilon) \ex_{x \lfta \N^d}[\sup_i |\inp{v_i,x}|].
\end{equation}
\end{lemma}
\begin{proof}
Let $V = \{v_1,\ldots,v_m\} \cup \{-v_1,\ldots,-v_m\}$ and let $\{X_v\}_{v \in V}$ be the Gaussian process where the joint distribution is given by $X_v \equiv \inp{v,x}$ for $x \lfta \N^d$. Then, $\ex_{x \lfta \N^d}[\sup_i |\inp{v_i,x}|] = \ex[\sup_v X_v]$.
Let $A:\R^d \rgta \R^k$ be the linear mapping as given by \tref{th:djll} applied to $V$. Let $\{Y_v\}_{v \in V}$ be the ``projected'' Gaussian process with joint distribution given by $Y_v \equiv \inp{A(v),y}$ for $y \lfta \N^k$. Then, $\ex_{y \lfta \N^k}[\sup_i |\inp{A(v_i),y}|] = \ex[\sup_v Y_v]$.
Finally, observe that for any $u,v \in V$,
\[ \ex[(X_u - X_v)^2] = \nmt{u-v}^2 \leq \nmt{A(u) - A(v)}^2 = \ex[(Y_u - Y_v)^2] \leq (1+\epsilon)^2\ex[(X_u - X_v)^2].\]
Combining the above inequality with Slepian's lemma \lref{lm:slepian} applied to the pairs of processes $\left(\{X_v\}_{v \in V}, \{Y_v\}_{v \in V}\right)$ and $\left(\{Y_v\}_{v \in V}, \{(1+\epsilon)X_v\}_{v \in V}\right)$ it follows that
\[ \ex[\sup_v X_v] \leq \ex[\sup_v Y_v] \leq \ex[\sup_v (1+\epsilon)X_v] = (1+\epsilon)\ex[\sup_v X_v].\]
The lemma now follows.
\end{proof}
\section{Oblivious Estimators for Semi-Norms in Gaussian Space}\label{sec:epsnet}
In the previous section we reduced the problem of computing the supremum of a $d$-dimensional Gaussian process to that of a Gaussian process in $k = O((\log m)/\epsilon^2)$-dimensions. Thus, it suffices to have an algorithm for approximating the supremum of Gaussian processes in time exponential in the dimension. We will give such an algorithm that works more generally for all semi-norms.
Let $\phi:\R^k \rgta \R_+$ be a semi-norm. That is, $\phi$ satisfies the triangle inequality and is homogeneous. For normalization purposes we assume that $1 \leq \ex_{\N^k}[\phi(X)]$ and the Lipschitz constant of $\phi$ is at most $k^{O(1)}$.
\begin{theorem}\label{th:epsnet}
For every $\epsilon > 0$, there exists a set $S \subseteq \R^k$ with $|S| = (1/\epsilon)^{O(k)}$ and a function $p:\R^k \rgta \R_+$ computable in $\poly(k,1/\epsilon)$ time such that the following holds. For every semi-norm $\phi:\R^k \rgta \R_+$,
\[(1 - \epsilon) \left(\sum_{x \in S} p(x) \phi(x)\right) \leq \ex_{X \lfta \N^k}[\phi(X)] \leq (1 + \epsilon) \left(\sum_{x \in S} p(x) \phi(x)\right).\]
Moreover, successive elements of $S$ can be enumerated in $\poly(k,1/\epsilon)$ time and $O(k\log(1/\epsilon))$ space.
\end{theorem}
\tref{th:epsnetintro} follows immediately from the above.
\begin{proof}[Proof of \tref{th:epsnetintro}]
Follows by enumerating over the set $S$ and computing $\sum_{x \in S} p(x) \phi(x)$ by querying $\phi$ on the points in $S$.
\end{proof}
We now prove \tref{th:epsnet}.
Here and henceforth, let $\gamma$ denote the pdf of the standard univariate Gaussian distribution. Fix $\epsilon > 0$ and let $\delta > 0$ be a parameter to be chosen later. Let $\mu \equiv \mu_{\epsilon,\delta}$ be the pdf which is a piecewise-flat approximator to $\gamma$ obtained by spreading the mass $\gamma$ gives to an interval $I = [i\delta, (i+1)\delta)$ evenly over $I$. Formally, $\mu(z) = \mu(-z)$ and for $z > 0$, $z \in [i\delta,(i+1)\delta)$,
\begin{equation}
\label{eq:defmu}
\mu(z) = \frac{\gamma([i\delta,(i+1)\delta))}{\delta}.
\end{equation}
Clearly, $\mu$ defines a symmetric distribution on $\R$. We will show that for $\delta \ll \epsilon$ sufficiently small, semi-norms cannot {\it distinguish} the product distribution $\mu^k$ from $\N^k$:
\begin{lemma}\label{lm:epsnetm}
Let $\delta = (2\epsilon)^{3/2}$. Then, for every semi-norm $\phi:\R^k \rgta \R$,
$$(1-\epsilon) \ex_{X \lfta \mu^k}[\phi(X)] \leq \ex_{Z \lfta \N^k}[\phi(Z)] \leq \ex_{X \lfta \mu^k}[\phi(X)].$$
\end{lemma}
\newcommand{\hX}{\hat{X}}
We first prove \tref{th:epsnet} assuming the above lemma, whose proof is deferred to the next section.
\begin{proof}[Proof of \tref{th:epsnet}]
Let $\hat{\mu}$ be the symmetric distribution supported on $\delta(\Z + 1/2)$ with probability mass function defined by
$$\hat{\mu}(\delta(i+1/2)) = \mu([i\delta, (i+1)\delta)),$$ for $i \geq 0$. Further, let $X \lfta \mu^k$, $\hX \lfta \hat{\mu}^k$, $Z \lfta \N^k$.
We claim that $\ex[\phi(\hX)] = (1\pm \epsilon)\ex[\phi(Z)]$. Let $Y$ be uniformly distributed on $[-\delta/2,\delta/2]^k$ and observe that the random variable $X$ has the same law as $\hX + Y$. Therefore,
\begin{multline}
\ex[\phi(X)] = \ex[\phi(\hX+Y)] = \ex[\phi(\hX)] \pm \ex[\phi(Y)] = \ex[\phi(\hX)] \pm \frac{\delta}{2}\, \ex[\phi(2Y/\delta)]\\
= \ex[\phi(\hX)] \pm \frac{\delta}{2}\, \ex_{Z' \in_u [-1,1]^k}[\phi(Z')] = \ex[\phi(\hX)] \pm \frac{\delta}{2}\, \ex[\phi(Z)] \text{ (\lref{lm:cube})}.
\end{multline}
Thus, by \lref{lm:epsnetm},
\begin{equation}
\label{eq:lm1}
\ex[\phi(\hX)] = (1\pm O(\epsilon)) \ex[\phi(Z)]
\end{equation}
We next prune $\hat{\mu}^k$ to reduce its support. Define $p:\R^k \rgta \R_+$ by $p(x) = \hat{\mu}^k(x)$. Clearly, $p(x)$, being a product distribution, is computable in $\poly(k,1/\epsilon)$ time.
Let $S = \left(\delta(\Z + 1/2)\right)^k \cap B_2(3\sqrt{k})$, where $B_2(r) \subseteq \R^k$ denotes the Euclidean ball of radius $r$. As $\phi$ has Lipschitz constant bounded by $k^{O(1)}$, a simple calculation shows that throwing away all points in the support of $\hX$ outside $S$ does not change $\ex[\phi(\hX)]$ much. It is easy to check that for $x \notin S$, $p(x) \leq \exp(-\nmt{x}^2/4)/(2\pi)^{k/2}$. Therefore,
\begin{multline}
\ex[\phi(\hX)] = \sum_{x} p(x) \phi(x) = \sum_{x \in S}p(x) \phi(x) + \sum_{x \notin S}p(x) \phi(x)\\
= \sum_{x \in S} p(x) \phi(x) \pm \sum_{x \notin S} \frac{\exp(-\nmt{x}^2/4)}{(2\pi)^{k/2}}\cdot ( k^{O(1)} \nmt{x}) = \sum_{x \in S}p(x) \phi(x) \pm o(1).
\end{multline}
From \eref{eq:lm1} and the above equation we get (recall that $\ex[\phi(Z)] \geq 1$)
\[ \ex[\phi(Z)] = (1\pm O(\epsilon)) \left(\sum_{x \in S} p(x) \phi(x)\right),\]
which is what we want to show.
We now reason about the complexity of $S$. First, by a simple covering argument $|S| < (1/\delta)^{O(k)}$:
\[ |S| < \frac{Vol\,(B_2(3\sqrt{k}) + [-\delta,\delta]^k)}{Vol\,([-\delta,\delta]^k)} = (1/\delta)^{O(k)} = (1/\epsilon)^{O(k)},\]
where for sets $A, B \subseteq \R^k$, $A+B$ denotes the Minkowski sum and $Vol$ denotes Lebesgue volume. This size bound almost suffices to prove \tref{th:epsnet} except for the complexity of enumerating elements from $S$. Without loss of generality assume that $R = 3\sqrt{k}/\delta$ is an integer. Then, enumerating elements in $S$ is equivalent to enumerating integer points in the $k$-dimensional ball of radius $R$. This can be accomplished by going through the set of lattice points in the natural lexicographic order, and takes $\poly(k,1/\epsilon)$ time and $O(k\log(1/\epsilon))$ space per point in $S$.
\end{proof}
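The resulting estimator admits a direct, if brute-force, rendering; the sketch below (in Python, with our own names) enumerates the $(1/\epsilon)^{O(k)}$ points of $S$ and is only meant to be run for small $k$. For the supremum semi-norm one would pass, e.g., \verb|phi = lambda x: np.abs(U @ x).max()| for a matrix \verb|U| whose rows are the projected vectors.
\begin{verbatim}
import itertools
import numpy as np
from math import erf, sqrt, ceil

def oblivious_estimate(phi, k, eps):
    """Compute sum_{x in S} p(x) phi(x), where S = (delta*(Z + 1/2))^k
    intersected with B_2(3*sqrt(k)) and p is the product of hat-mu
    cell masses, as in the proof above.
    """
    delta = (2.0 * eps) ** 1.5
    Phi = lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0)))
    R = 3.0 * sqrt(k)
    imax = ceil(R / delta)
    idx = np.arange(-imax, imax)
    pts = delta * (idx + 0.5)        # 1-d grid delta*(i + 1/2)
    wts = np.array([Phi(delta * (i + 1)) - Phi(delta * i) for i in idx])
    total = 0.0
    for tup in itertools.product(range(len(pts)), repeat=k):
        x = pts[list(tup)]
        if x @ x <= R * R:           # prune to the ball B_2(3*sqrt(k))
            total += float(np.prod(wts[list(tup)])) * phi(x)
    return total
\end{verbatim}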
\section{Proof of \lref{lm:epsnetm}}
Our starting point is the following definition that helps us {\it compare} multivariate distributions when we are only interested in volumes of convex sets. We shall follow the notation of \cite{Ball}.
\begin{definition}
Given two symmetric pdf's, $f,g$ on $\R^k$, we say that $f$ is less peaked than $g$ ($f \preceq g$) if for every symmetric convex set $K \subseteq \R^k$, $f(K) \leq g(K)$.
\end{definition}
We also need the following elementary facts. The first follows from the unimodality of the Gaussian density and the second from partial integration.
\begin{fact}
For any $\delta > 0$ and $\mu$ as defined by \eref{eq:defmu}, $\mu$ is less peaked than $\gamma$.
\end{fact}
\begin{fact}\label{fct:peaked}
Let $f, g$ be distributions on $\R^k$ with $f \preceq g$. Then for any semi-norm $\phi:\R^k \rgta \R$, $\ex_f[\phi(x)] \geq \ex_g[\phi(x)]$.
\end{fact}
\begin{proof}
Observe that for any $t > 0$, $\{x: \phi(x) \leq t\}$ is a symmetric convex set. Let random variables $X \lfta f$, $Y \lfta g$. Then, by partial integration, $\ex[\phi(X)] = \int_0^\infty \pr[ \phi(X) > t]\, dt \geq \int_0^\infty \pr[\phi(Y)> t]\, dt = \ex[\phi(Y)]$.
\end{proof}
The above statements give us a way to compare the expectations of $\mu$ and $\gamma$ for one-dimensional convex functions. We would now like to do a similar comparison for the product distributions $\mu^k$ and $\gamma^k$. For this we use Kanter's lemma \cite{Kanter77}, which says that the relation $\preceq$ is preserved under tensoring if the individual distributions have the additional property of being {\it unimodal}.
\begin{definition}
A distribution $f$ on $\R^n$ is unimodal if $f$ can be written as an increasing limit of a sequence of distributions each of which is a finite positively weighted sum of uniform distributions on symmetric convex sets.
\end{definition}
\begin{theorem}[Kanter's Lemma \cite{Kanter77}; cf.~\cite{Ball}]\label{th:kanter}
Let $\mu_1,\mu_2$ be symmetric distributions on $\R^n$ with $\mu_1 \preceq \mu_2$ and let $\nu$ be a unimodal distribution on $\R^m$. Then, the product distributions $\mu_1 \times \nu$, $\mu_2 \times \nu$ on $\R^n \times \R^m$ satisfy $\mu_1 \times \nu \preceq \mu_2 \times \nu$.
\end{theorem}
We next show that $\mu$ and its ``shrinking'' $\nu$ sandwich $\gamma$ in the following sense.
\begin{lemma}
Let $\nu$ be the pdf of the random variable $y = (1-\epsilon)x$ for $x \lfta \mu$. Then, for $\delta \leq (2\epsilon)^{3/2}$, $\mu \preceq \gamma \preceq \nu$.
\end{lemma}
\begin{proof}
As mentioned above, $\mu \preceq \gamma$. We next show that $\gamma \preceq \nu$. Intuitively, $\nu$ is obtained by spreading the mass that $\gamma$ puts on an interval $I = [i\delta, (i+1)\delta)$ evenly on the \emph{smaller} interval $(1-\epsilon)I$. The net effect of this operation is to push the mass closer towards the origin; for $\delta$ sufficiently small, the inward push from this ``shrinking'' wins over the outward push incurred in flattening $\gamma$ into $\mu$.
Fix an interval $I = [-i \delta(1-\epsilon) - \theta, i\delta(1-\epsilon) + \theta]$ for $0 \leq \theta < \delta(1-\epsilon)$. Then,
\begin{align}\label{eq:cases}
\nu(I) &= \nu\left(\,[-i\delta(1-\epsilon), i\delta(1-\epsilon)]\,\right) + 2\, \nu\left(\,[i\delta(1-\epsilon), i\delta(1-\epsilon) + \theta]\,\right)\\
&= \gamma\left(\,[-i\delta,i\delta]\,\right) + \frac{2\, \theta \cdot \gamma(\,[i\delta,(i+1)\delta)\,)}{\delta(1-\epsilon)}.
\end{align}
We now consider two cases.
Case 1: $i \geq (1-\epsilon)/\epsilon$ so that $i\delta(1-\epsilon) + \theta \leq i\delta$. Then, from the above equation,
\[ \nu(I) \geq \gamma\left(\,[-i\delta, i \delta]\,\right) \geq \gamma\left(\,[-i\delta(1-\epsilon)-\theta, i\delta(1-\epsilon)+\theta]\,\right) = \gamma(I).\]
Case 2: $i < (1-\epsilon)/\epsilon$. Let $\alpha = (i+1)\delta \leq \delta/\epsilon$. Then, as $1 - x^2/2 \leq e^{-x^2/2} \leq 1$,
\[ \gamma((i\delta, i\delta + \theta]) \leq \theta \cdot \gamma(0),\;\;\;\; \gamma(\,[i\delta,(i+1)\delta)\,) \geq \delta \cdot \gamma(0) \cdot (1-\alpha^2/2).\]
Therefore,
\begin{align*}
\nu(I) &= \gamma(I) - 2 \gamma\left(\,(i\delta, i\delta(1-\epsilon) + \theta]\,\right) + \frac{2 \theta \cdot \gamma(\,[i\delta,(i+1)\delta)\,)}{\delta(1-\epsilon)}\\
&\geq \gamma(I) - 2 \gamma\left(\,(i\delta, i\delta + \theta]\,\right) + \frac{2 \theta \cdot \gamma(\,[i\delta,(i+1)\delta)\,)}{\delta(1-\epsilon)}\\
&\geq \gamma(I) - 2 \theta \gamma(0) + \frac{2 \theta \cdot \delta \cdot \gamma(0) \cdot (1-\alpha^2/2)}{\delta(1-\epsilon)}\\
&= \gamma(I) + \frac{2\theta \gamma(0)}{1-\epsilon} \cdot (\epsilon - \alpha^2/2) \geq \gamma(I),
\end{align*}
for $\alpha^2 \leq 2 \epsilon$, i.e., if $\delta \leq (2\epsilon)^{3/2}$.
\ignore{
Let $\delta \ll \epsilon$ be sufficiently small so that the pdf of $\gamma$ is nearly constant in the interval $[0,(i+1)\delta]$, that is $\gamma([\alpha,\beta]) = (\beta-\alpha) \gamma(0) \pm O((\beta-\alpha)^2)$ for $0 < \alpha < \beta < (i+1) \delta$. Then, by \eref{eq:cases}, as $\theta < \delta(1-\epsilon)$,
\begin{align*}
\nu(I) &\geq \gamma(I) - 2 \gamma\left(\,(i\delta, i\delta(1-\epsilon) + \theta]\,\right) + \frac{2 \theta \cdot \gamma(\,[i\delta,(i+1)\delta)\,)}{\delta(1-\epsilon)}\\
&\geq \gamma(I) - 2 \gamma\left(\,(i\delta, i\delta + \theta]\,\right) + \frac{2 \theta \cdot \gamma(\,[i\delta,(i+1)\delta)\,)}{\delta(1-\epsilon)}\\
&\geq \gamma(I) - 2 \theta \gamma(0) - O(\theta^2) + \frac{2 \theta (\delta \gamma(0) - O(\delta^2))}{\delta(1-\epsilon)}\\
&= \gamma(I) + \frac{2\theta \gamma(0)\epsilon}{1-\epsilon} - O(\theta \,\delta) \geq \gamma(I)
\end{align*}}
\end{proof}
\lref{lm:epsnetm} follows easily from the above two claims.
\begin{proof}[Proof of \lref{lm:epsnetm}]
Clearly, $\mu,\nu,\gamma$ are unimodal, and a product of unimodal distributions is unimodal. Thus, from the above lemma and by iteratively applying Kanter's lemma we get $\mu^k \preceq \gamma^k \preceq \nu^k$. Therefore, by Fact \ref{fct:peaked}, for any semi-norm $\phi$,
\[ \ex_{\mu^k}[\phi(X)] \geq \ex_{\gamma^k}[\phi(Y)] \geq \ex_{\nu^k}[\phi(X)] = \ex_{\mu^k}[\phi((1-\epsilon)X)] = (1-\epsilon)\ex_{\mu^k}[\phi(X)].\]
\end{proof}
We now prove the auxiliary lemma we used in proof of \tref{th:epsnet}.
\begin{lemma}\label{lm:cube}
Let $\rho$ be the uniform distribution on $[-1,1]$. Then, $\gamma \preceq \rho$ and for any semi-norm $\phi:\R^k \rgta \R$, $\ex_{\rho^k}[\phi(x)] \leq \ex_{\gamma^k}[\phi(x)]$.
\end{lemma}
\begin{proof}
It is easy to check that $\gamma \preceq \rho$. Then, by Kanter's lemma $\gamma^k \preceq \rho^k$ and the inequality follows from Fact \ref{fct:peaked}.
\end{proof}
\section{A PTAS for Supremum of Gaussian Processes}
Our main theorem, \tref{th:main}, follows immediately from \lref{lm:derandjl} and \tref{th:epsnetintro} applied to the semi-norm $\phi:\R^k \rgta \R$ defined by $\phi(x) = \sup_{i \leq m} |\inp{A(v_i),x}|$.
\section{Lowerbound for Oblivious Estimators}\label{sec:appendix}
We now show that \tref{th:epsnet} is optimal: any oblivious linear estimator for semi-norms as in the theorem must make at least $(C/\epsilon)^{k}$ queries for some constant $C > 0$.
Let $S \subseteq \R^k$ be the set of query points of an oblivious estimator. That is, there exists a function $f:\R_+^S \rgta \R_+$ such that for any semi-norm $\phi:\R^k \rgta \R_+$, $f((\phi(x): x \in S)) = (1\pm \epsilon) \ex_{Y \lfta \N^k}[\phi(Y)]$. We will assume that $f$ is monotone in the following sense: $f(x_1,\ldots,x_{|S|}) \leq f(y_1,\ldots,y_{|S|})$ if $0 \leq x_i \leq y_i$ for all $i$. This is clearly true for any linear estimator (and also for the median estimator). Without loss of generality suppose that $\epsilon < 1/4$.
\newcommand{\spk}{\mathcal{S}^{k-1}}
The idea is to define a suitable semi-norm based on $S$: define $\phi:\R^k \rgta \R$ by $\phi(x) = \sup_{u \in S}|\inp{u/\nmt{u},x}|$. It is easy to check that for any $v \in S$, $\nmt{v} \leq \phi(v)$. Therefore, the output of the oblivious estimator when querying the Euclidean norm is at most the output of the estimator when querying $\phi$. In particular,
\begin{equation}
\label{eq:app1}
(1-\epsilon) \ex_{Y \lfta \N^k}[\nmt{Y}] \leq f((\nmt{x}: x \in S)) \leq f((\phi(x): x \in S)) \leq (1+\epsilon) \ex_{Y \lfta \N^k}[\phi(Y)].
\end{equation}
We will argue that the above is possible only if $|S| > (C/\epsilon)^k$. Let $\spk$ denote the unit sphere in $\R^k$. For the remaining argument, we shall view $Y \lfta \N^k$ as drawn as $Y = R X$, where $X \in \spk$ is uniformly random on the sphere and $R \geq 0$ is independent of $X$ and is distributed as the Euclidean norm of a standard Gaussian vector in $\R^k$ (the chi distribution with $k$ degrees of freedom). Let $S(\epsilon) = \cup_{u \in S} \{y \in \spk: |\inp{u/\nmt{u},y}| \geq 1 - 4\epsilon\}$.
Now, by a standard volume argument, for any $y \in \spk$, $\pr_X[|\inp{X,y}| \geq 1 - 4\epsilon] < (O(\epsilon))^k$. Thus, by a union bound, $p = \pr_X[X \in S(\epsilon)] < |S| \cdot (O(\epsilon))^k$. Further, for any $y \in \spk\setminus S(\epsilon)$, $\phi(y) < 1-4\epsilon$. Therefore,
\begin{multline*}
\ex_{X}[\phi(X)] = \pr[X \notin S(\epsilon)] \cdot \ex[\phi(X) | X \notin S(\epsilon)] + \pr[X \in S(\epsilon)] \cdot \ex[\phi(X) | X \in S(\epsilon)]\leq \\(1-p) (1-4\epsilon) + p.
\end{multline*}
Thus,
\begin{equation}
\label{eq:app2}
\ex[\phi(Y)] = \ex[\phi(R X)] = \ex[R] \cdot \ex[\phi(X)] \leq \ex[\nmt{Y}] \cdot ((1-p)(1-4\epsilon) + p).
\end{equation}
Combining Equations \ref{eq:app1} and \ref{eq:app2}, we get
\[ 1 - \epsilon \leq (1+\epsilon)\cdot ((1-p)(1-4\epsilon) + p) < 1-3\epsilon + 2p.\]
As $p < |S| \cdot (O(\epsilon))^k$, the above leads to a contradiction unless $|S| > (C/\epsilon)^{k}$ for some constant $C > 0$.
\bibliographystyle{amsalpha}
\section{Introduction}
\label{intro}
The development and refinement in recent years of new techniques in materials
growth has made it possible
to fabricate superconducting heterostructures with various materials and
high quality interfaces. These advances, coupled
with the continuing intense level of activity in the study of the nature of
high temperature\cite{agl,dj,har} and other exotic
superconductors,\cite{upt,mae}
have led to renewed interest in
tunneling spectroscopy.
It has been demonstrated\cite{har,hu,tan} that this
technique yields information about
both the magnitude and the phase of the superconducting pair potential (PP).
This implies that the method can
provide a systematic way to distinguish among various proposed PP
candidates, including both spin singlet and spin triplet
pairing states.\cite{tan3,sig}
For example, it has been argued that the observed zero bias conductance
peak\cite{hu,tan,xu,wei,alf}
(ZBCP),
attributed to mid-gap surface states, is an indication of
unconventional superconductivity with a sign change of the PP,
as occurs in pairing with
$d_{x^2-y^2}$-wave symmetry. Furthermore, the splitting of the ZBCP and
the formation
of a finite bias conductance peak (FBCP) in the spectrum have been examined
and interpreted\cite{cov,sauls,ting2} as support for the admixture of
an imaginary
PP component to the dominant $d_{x^2-y^2}$-wave part, leading to a broken
time-reversal symmetry.\cite{shiba,rice}
The same developments, and the ability to make low interfacial
resistance junctions between high spin polarization
ferromagnets and superconductors,
have stimulated significant efforts to study transport in these
structures.\cite{prinz} There have been
various experiments in both
conventional\cite{soul,up,john} and high temperature
superconductors\cite{vas,dong,vas2,chen} (HTSC's),
as well as re-examinations of earlier work\cite{ted,mers} which was
performed generally in the
tunneling limit of a strong interfacial barrier.
Theoretical studies of
the effects of spin polarized transport on the current-voltage characteristics
and the conductance in ferromagnet/superconductor (F/S)
junctions have been carried out
in conventional\cite{been}
and, recently, in high-temperature superconductors.\cite{ting,zv,kash}
The feasibility of nanofabricating F/S structures
has also
generated interest in studying the influence of ferromagnetism on mesoscopic
superconductivity.\cite{vol}
One of the important questions raised by the possibility of making high
transmissivity F/S junctions was that of studying the influence of Andreev
reflection (AR)\cite{been,and,bru,nat} on spin polarized transport. In AR
an electron, belonging to one of the two spin bands,
incoming from the ferromagnetic region to the F/S
interface, will be reflected as a hole in the opposite spin band.
The splitting of spin bands by the exchange energy in ferromagnetic
materials implies that only a fraction of the incident
majority spin electrons can be Andreev reflected.\cite{been}
This simple argument was used in
previous studies\cite{soul,up,been} to infer that
the effect of spin polarization (exchange energy)
was generally to reduce AR. The sensitivity of AR to
the exchange energy in a ferromagnet was employed\cite{soul,up} to determine
the degree of polarization in various materials.
In this paper we will study the tunneling spectroscopy of F/S junctions.
We will adopt the basic
approach of Ref. \onlinecite{btk} but we will extend and
generalize it to include the
effects of spin polarization, the presence of an unconventional PP state
(pure or mixed), and the existence of Fermi wavevector
mismatch (FWM)\cite{btk2,dutch}
stemming from the different band widths in the two junction materials.
Our aim in this paper is twofold: Firstly, to investigate and
reveal novel features
in the conductance spectra arising from the interplay of ferromagnetism and
unconventional superconductivity, and secondly, to show the importance of FWM, and
how its inclusion can lead to some unexpected results, even for F/s-wave
superconductor junctions, where we find, for example, that in some cases
Andreev reflection can be enhanced by spin polarization.
In the next section (Sec. \ref{meth}), we present the methods we use
to obtain the amplitudes for the various scattering processes
that occur in the junction when spin polarized
electrons are injected from the F into the S region. We will use these
methods to calculate the conductance
of the F/S junctions. In Sec. \ref{results}, we first give results for
a conventional
($s$-wave) superconductor in the S side,
and then illustrate the unconventional case of the
pairing potential by considering both pure $d$- and mixed
$d+is$-wave symmetry. In
Sec. \ref{conc},
we summarize our results and discuss future problems.
\section{Methods}
\label{meth}
As explained in the Introduction,
we investigate in this work F/S junctions by extending and generalizing
the techniques previously employed in the study of simpler
cases without spin polarization, or for conventional superconductors.
Thus, we use here the Bogoliubov-de Gennes
(BdG) equations\cite{hu,tan,been,bru,deg} in the ballistic limit.
We consider a geometry where the ferromagnetic material is at $x<0$, and
is described by the
Stoner model. We take the usual approach\cite{been} of assuming a
single particle Hamiltonian
with the exchange energy being therefore of the form
$h({\bf r})=h_0 \Theta(-x)$, where $\Theta(x)$ is a step
function. The F/S interface is at $x=0$, where there is
interfacial scattering modeled by a potential
$V({\bf r})=H \delta(x)$,\cite{tan,soul,up,btk} where $H$ is the
variable strength of the
potential barrier. The dimensionless parameter characterizing barrier
strength\cite{btk} is $Z_0\equiv mH/\hbar k_F$, where the
effective mass, $m$, is taken to be equal in the F and S regions.
In the superconducting region, at $x>0$, we
assume\cite{hu,tan,been,btk,arnold}
that there is a pair
potential $\Delta({\bf k}', {\bf r})=
\Delta({\bf \hat{k}}') \Theta(x)$.
This approximation for the PP becomes more accurate\cite{simple}
in the presence of FWM and allows analytic solution of the BdG equations.
We will denote quantities
pertaining to the S region by primed letters.
From these considerations, the BdG equations for an F/S junction,
in the absence of
spin-flip scattering, can be written as\cite{deg}
\begin{eqnarray}
\left[\begin{array}{cc} H_0-\rho_S h & \Delta \\
\Delta^* & -(H_0+\rho_S h)\end{array} \right]
\left[ \begin{array}{c} u_S \\
v_{\overline{S}} \end{array} \right]=\epsilon
\left[ \begin{array}{c} u_S \\
v_{\overline{S}} \end{array} \right],
\label{BdG}
\end{eqnarray}
where $H_0$ is the single particle Hamiltonian and $\rho_S=\pm1$ for
spin $S=\uparrow, \downarrow$. The exchange energy $h({\bf r})$ and
the PP $\Delta$ are as defined above. The excitation energy is denoted by
$\epsilon$, and $u_S$, $v_{\overline{S}}$ are the
electronlike quasiparticle (ELQ)
and holelike quasiparticle (HLQ) amplitudes, respectively.
We take
$H_0\equiv -\hbar^2\nabla^2/2m+V({\bf r})-E_F^{F,S}$, where $V({\bf r})$ is
defined above. In the F region,
we have $E_F^F\equiv E_F = \hbar^2 k_F^2/2m$, so that $E_F$ is the spin
averaged value,
$E_F=(\hbar^2 k_{F\uparrow}^2/2m+\hbar^2 k^2_{F\downarrow}/2m )/2$.
We assume that in general it differs from
the value in the superconductor,
$E_F^S \equiv E'_F = \hbar^2 k'^2_F/2m$. Thus,
we take the Fermi energies to be different in the F and S regions;
that is, we allow for different
band widths, stemming from the different carrier densities
in the two regions. Indeed,
as the results in the next Section will show, the
Fermi wavevector mismatch (FWM) between the two regions
has an important influence on our findings.
We will parameterize the FWM by
the value of $L_0$, $L_0 \equiv k'_F/k_F$ and describe the degree of
spin polarization, related to the exchange energy, by the dimensionless
parameter $X\equiv h_0/E_F$.
The invariance of the Hamiltonian with respect to translations
parallel to $x=0$ implies conservation\cite{mil} of the
(spin dependent) parallel component of the wavevector at the junction.
As we shall show, this will be an important
consideration in understanding the possible scattering processes.
An electron
injected from the F side, with spin $S=\uparrow,
\downarrow$, excitation energy $\epsilon$, and wavevector
${\bf k}^+_S$ (with magnitude
$k^+_{S}=(2m/\hbar^2)^{1/2} [E_F +\epsilon+\rho_S h_0]^{1/2}$),
at an angle $\theta$ from the interface
normal, can undergo
four scattering processes \cite{tan,btk} each described by a
different amplitude. Assuming specular reflection at the interface, these can
be characterized as follows:
1) Andreev reflection, with amplitude that we denote by $a_S$,
as a hole with
spin, $\overline{S}$, belonging to the spin band opposite to that of the
incident electron ($\rho_{\overline{S}}=-\rho_S$), wavevector
${\bf k}_{\overline{S}}^-$
($k^-_{\overline{S}}=(2m/\hbar^2)^{1/2}
[E_F -\epsilon+\rho_{\overline{S}} h_0]^{1/2}$),
and spin dependent angle of reflection $\theta_{\overline{S}}$, generally
different
from $\theta$.\cite{zv} As is the case with the angles
corresponding to the other scattering processes,
$\theta_{\overline{S}}$, as we shall see below,
is determined from the requirement that the
parallel component of the wavevector is conserved.
Even in the absence of exchange energy ($h_0=0$),
one has that,
for $\epsilon \neq 0$,
$\theta_{\overline{S}}$ (although then spin independent)
is slightly different\cite{kummel} from $\theta$. When $h_0>0$, the
typical situation is, as we
discuss later, that
$|\theta_{\overline{\downarrow}}| <|\theta|
< |\theta_{\overline{\uparrow}}|$.
2) The second process is ordinary
reflection into the F region, characterized by an amplitude
which we call $b_S$, as
an electron with variables $S$, $-{\bf k}^+_{S}$, $-\theta$.
The other two processes are:
3) Transmission into the S region, with amplitude $c_S$,
as an ELQ with ${\bf k}'^+_S$, and 4) Transmission as a HLQ with
amplitude $d_S$ and wavevector $-{\bf k}'^-_S$. Here the corresponding
wavevector magnitudes
are $k'^{\pm}_{S}=(2m/\hbar^2)^{1/2}
[E'_F\pm(\epsilon^2-|\Delta_{S\pm}|^2)^{1/2}]^{1/2}$. We
denote by $\Delta_{S\pm}
=|\Delta_{S\pm}|\exp(i\phi_{S\pm})$,
the different PP's felt by the ELQ and the HLQ, respectively,
as determined by
${\bf k}'^\pm_S$. We see, therefore, that up to four different energy
scales of the PP are involved for each incident angle $\theta$.
In our considerations, which pertain to the common experimental situation,
\cite{amg,venky}
$E_F, E'_F $ $\gg$ $\max(\epsilon, |\Delta_{S\pm}|)$,
we can employ the Andreev
approximation\cite{tan,been,and,bru} and write
$k^{\pm}_{S} \approx k_{FS}
\equiv(2m/\hbar^2)^{1/2}
[E_F +\rho_S h_0]^{1/2}$,
$k'^{\pm}_{S} \approx k'_F$. It then follows that the appropriate wavevectors
for the transmission of ELQ's and HLQ's are at
angles $\theta'_S$, $-\theta'_S$,
with the interface normal, respectively.
Within this approximation the components of
the vectors ${\bf k}^{\pm}_{S}$,
${\bf k}'^{\pm}_{S}$
normal and parallel to the interface, can be expressed as
${\bf k}^{\pm}_{S}\equiv(k_{S}, k_{\|S})$, and
${\bf k}'^{\pm}_{S}\equiv(k'_S, k_{\|S})$,
in the F and S regions. From the conservation of $k_{\|S}$, we have then
an analogue of Snell's law
\begin{mathletters}
\begin{equation}
k_{FS}\sin\theta=k_{F\overline{S}}\sin\theta_{\overline{S}},
\label{snella}
\end{equation}
\begin{equation}
k_{FS}\sin\theta=k'_F\sin\theta'_S,
\label{snellb}
\end{equation}
\label{snell}
\end{mathletters}
which has several important implications, including the existence
of critical angles,\cite{crit} as one encounters in well-known phenomena in
the propagation of electromagnetic waves.\cite{jackson}
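These kinematic relations are simple to evaluate numerically; the following minimal sketch (in Python; the function name and conventions are ours) returns the real-angle solutions of Eq. (\ref{snell}) when they exist, and signals evanescent Andreev reflection or total reflection otherwise:
\begin{verbatim}
import numpy as np

def scattering_angles(theta, X, L0, spin=+1):
    """Real-angle solutions of the Snell-type relations above.

    theta: incidence angle in F; X = h_0/E_F; L0 = k'_F/k_F;
    spin = +1 (up) or -1 (down) for the incident electron.
    Returns (theta_AR, theta_T); an entry is None when the
    corresponding wave is evanescent (no propagating solution).
    """
    kS  = np.sqrt(1.0 + spin * X)   # k_{F,S} / k_F
    kSb = np.sqrt(1.0 - spin * X)   # k_{F,Sbar} / k_F (Andreev hole)
    s = np.sin(theta)
    sAR = kS * s / kSb              # from k_FS sin(theta) = k_FSbar sin(theta_Sbar)
    sT  = kS * s / L0               # from k_FS sin(theta) = k'_F sin(theta'_S)
    theta_AR = np.arcsin(sAR) if abs(sAR) <= 1.0 else None  # else evanescent AR
    theta_T  = np.arcsin(sT)  if abs(sT)  <= 1.0 else None  # else total reflection
    return theta_AR, theta_T
\end{verbatim}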
Using the conservation of $k_{\|S}$,
the solution to Eq. (\ref{BdG}),
$\Psi_S\equiv(u_S,v_{\overline{S}})^T$, can be expressed in a separable form,
effectively reducing the problem to a one-dimensional one. In the F region
we write
\begin{equation}
\Psi_S({\bf r})\equiv e^{i {\bf k}_{\| S}\cdot {\bf r}}\psi_S(x),
\label{Psif}
\end{equation}
where
\begin{eqnarray}
\psi_S(x)=
e^{i k_S x} \left[\begin{array}{c} 1 \\ 0 \end{array} \right]+
a_S e^{i k_{\overline{S}}x} \left[\begin{array}{c} 0 \\ 1 \end{array} \right]
+b_S e^{-i k_S x} \left[\begin{array}{c} 1 \\ 0 \end{array} \right],
\label{psif}
\end{eqnarray}
analogously, in the $S$ region we have\cite{tan}
\begin{equation}
\Psi'_S({\bf r})\equiv e^{i {\bf k}_{\| S}\cdot {\bf r}}\psi'_S(x),
\label{Psis}
\end{equation}
\begin{eqnarray}
\psi'_S(x)=
c_S e^{i k'_S x} \left[\begin{array}{c}
\left(\frac{\epsilon +\Omega_{S+}}{2 \epsilon}\right)^{\frac{1}{2}} \\
e^{-i\phi_{S+}} \left(\frac{\epsilon -\Omega_{S+}}{2 \epsilon}\right)^{\frac{1}{2}}
\end{array} \right]
+d_S e^{-i k'_S x} \left[\begin{array}{c}
e^{i\phi_{S-}} \left(\frac{\epsilon -\Omega_{S-}}{2 \epsilon}\right)^{\frac{1}{2}} \\
\left(\frac{\epsilon +\Omega_{S-}}{2 \epsilon}\right)^{\frac{1}{2}}
\end{array} \right],
\label{psis}
\end{eqnarray}
with $\Omega_{S\pm}\equiv(\epsilon^2-|\Delta_{S\pm}|^2)^{\frac{1}{2}}$,
and the appropriate boundary conditions\cite{tan,btk} at the F/S
interface are
\begin{equation}
\psi_S(0)=\psi'_S(0), \quad
\partial_x \psi_S(0)-\partial_x \psi'_S(0)=\frac{2mH}{\hbar^2}\psi'_S(0).
\label{bc}
\end{equation}
We pause next to discuss some implications of Eq. (\ref{snell})
for the various
scattering processes. In typical realizations of ferromagnet/HTSC structures,
the appropriate FWM corresponds to $L_0 \leq 1$.\cite{amg}
Consider first $L_0=1$, i.e. $E_F=E'_F$. If $X>0$ it follows
that $k_{F\downarrow}<k'_F<k_{F\uparrow}$, for an $S=\downarrow$
incoming electron. Then,
at any incident angle, Eq. (\ref{snell}) is satisfied so that $k_\|$ will be
conserved. In this case $|\theta|>|\theta'_\downarrow|>|\theta_{\overline
{\downarrow}}|$, and all the corresponding wave vectors are real.
For an $S=\uparrow$ incident electron at angle
$|\theta| > |\sin^{-1}(k'_F/k_{F\uparrow})|$, a solution of
Eq. (\ref{snellb}) for a real $\theta'_\uparrow$ no longer exists; one
has
a complex $\theta'_{\uparrow}$.\cite{jackson}
The scattering problem does not have a solution with propagating
wavevectors in the S region:
there is total reflection.
The wavevectors for
ELQ and HLQ have purely imaginary components along the $x$-axis, while
their components parallel to the interface are real.
This corresponds to a surface (evanescent) wave, propagating
along the interface and exponentially damped away from
it.\cite{jackson}
An analogous, but physically more interesting, situation occurs for AR in
the particular case where
$|\theta|$ is smaller than the angle of total reflection and satisfies
$|\theta|>|\sin^{-1}(k_{F\downarrow}/k_{F\uparrow})|$. This regime
corresponds to
$k_{\|\overline{\uparrow}} > k_{F\downarrow}$. In this case it is
Eq. (\ref{snella}) that has no solution for real angles. This means
that Andreev reflection as a {\it propagating} wave is impossible.
From the condition, which follows from the Andreev approximation,
$k_{\overline{\uparrow}}^2+k_{\|\overline{\uparrow}}^2
\equiv k_{F\downarrow}^2$, we see that the
component $k_{\overline{\uparrow}}$ along the $x$ axis must
be purely imaginary,\cite{kash} while
$k_{\|\overline{\uparrow}}$ is still real.
With these considerations we then find
\begin{equation}
k_{\overline{\uparrow}}=-i
(k_{F\uparrow}^2 \sin^2\theta-k_{F\downarrow}^2)^{1/2},
\label{kim}
\end{equation}
where we have expressed
$k_{\overline{\uparrow}}$ in terms of quantities which are always real
and which pertain to the F region only. As with total reflection,
there is
propagation only along the interface and an exponential decay away from it.
This case differs from that
of total reflection in that, since the evanescence affects only the
Andreev reflected component, there may still be
transmission across the junction.
The above considerations apply {\it a fortiori} in the
presence of FWM. For example, if we now consider $L_0<1$, we can see
by inspection of Eq. (\ref{snell})
that there can also be total reflection for an
$S=\downarrow$ incident electron, when $k_{F\downarrow}>k'_F$. This
condition would imply the absence of imaginary
$k_{\overline{\uparrow}}$ for any incident angle and any exchange energy.
Returning now to the basic equations, we see that by solving for
$\psi_S(x)$, $\psi'_S(x)$ in Eqs. (\ref{psif}) and (\ref{psis}) with
the boundary conditions given by Eq. (\ref{bc}), we can obtain the
amplitudes $a_S$, $b_S$, $c_S$ and $d_S$, $S=\uparrow,\downarrow$.
For each spin, there is a sum rule, related to the conservation of probability,
for the squares of the absolute
values of the amplitudes.
We can thus, in a way similar to what was done in
Ref. \onlinecite{btk}, express the various quantities in terms of the
amplitudes $a_S$ and $b_S$ only. These amplitudes are given by
\begin{equation}
a_S=\frac{4 t_S L_S \Gamma_+ e^{-i\phi_{S+}}}
{U_{SS+}U_{\overline{S} S-}-
V_{SS-} V_{\overline{S} S+}
\Gamma_+\Gamma_-e^{i(\phi_{S-}-\phi_{S+})}},
\label{as}
\end{equation}
\begin{equation}
b_S= \frac
{V_{S S+}U_{\overline{S} S-}-
U_{S S-} V_{\overline{S} S+}
\Gamma_+\Gamma_-e^{i(\phi_{S-}-\phi_{S+})}}
{U_{SS+} U_{\overline{S} S-}-
V_{SS-} V_{\overline{S} S+}
\Gamma_+\Gamma_-e^{i(\phi_{S-}-\phi_{S+})}},
\label{bs}
\end{equation}
where we have introduced the notation
$\Gamma_\pm\equiv
(\epsilon-\Omega_{S\pm})/|\Delta_{S\pm}|$,
$L_S\equiv L_0 \cos\theta'_S/\cos \theta$,
describing FWM,
$t_S\equiv k_S/k_{Fx}=(1+\rho_S X)^{1/2}$, where $k_{Fx}\equiv k_F\cos\theta$,
$t_{\overline{S}}\equiv k_{\overline{S}}/k_{Fx}=
(1-\rho_S X)^{1/2} \cos \theta_{\overline{S}} /\cos \theta$, for
$k_{\overline{S}}$ real, ($-i[(1+X)\sin^2\theta-(1-X)]^{1/2}/\cos\theta$, for
$k_{\overline{\uparrow}}$ imaginary, see Eq. (\ref{kim})).
The other abbreviations are defined as:
$U_{\overline{S} S\pm}\equiv t_{\overline{S}}+w_{S\pm}$,
$V_{S S\pm}\equiv t_{S}-w_{S\pm}$,
$w_{S\pm}\equiv L_S\pm 2iZ$, $Z\equiv Z_0/\cos\theta$, where
$Z_0\equiv m H/\hbar k_F$ is the interfacial barrier parameter,
as defined above.
The limits $Z_0 \rightarrow 0$ and
$Z_0 \rightarrow \infty$ correspond to the extreme cases of a metallic
point contact and the tunnel junction limit, respectively.
Given the above amplitudes, the results for the dimensionless differential
conductance\cite{btk} can be written down in the standard way
by computing, as a function of the excitation energy arising
from the application of a bias voltage, the ratio of the induced flux
densities across the junction
to the corresponding incident
flux density. One straightforwardly generalizes the
methods used in previous work\cite{tan,btk,aver}
to include now the
effects of unconventional
superconductivity, FWM, and net spin polarization,
to obtain,
\begin{equation}
G\equiv G_{\uparrow}+G_{\downarrow}=\sum_{S=\uparrow,\downarrow} P_S
(1+\frac{k_{\overline{S}}}{k_S}|a_S|^2-|b_S|^2),
\label{gs}
\end{equation}
where we introduce
the probability $P_S$ of an incident electron having spin $S$,
related to the exchange energy as $P_S=(1+\rho_SX)/2$.\cite{been}
In deriving Eq. (\ref{gs}), care has to be taken to
properly include the flux factors, which
are, at $X>0$, different for the incident and the Andreev reflected particle.
The ratio of wavevectors in the second term on
the right side of Eq. (\ref{gs}) results from the incident electron and the AR
hole belonging to different spin bands.
The quantity $k_{\overline{S}}$ in that term
is real; the case
of imaginary $k_{\overline{S}}$ can only contribute to $G_\uparrow$ indirectly,
by modifying $|b_\uparrow|$.
It can be shown\cite{zv}
from the conservation of probability current\cite{btk} that
such a contribution vanishes
for the subgap conductance ($\epsilon<|\Delta_{\uparrow \pm}|$).\cite{total} It
is, furthermore, possible to express
the subgap conductance in terms of the AR amplitude only.\cite{zv}
At $X=0$ we recover the results of Ref. \onlinecite{tan}.
The suppression of the conductance due to ordinary reflection
at $X\neq 0$ has the same
form as for the unpolarized case since the magnitude of the normal component of
the wavevectors before and after ordinary reflection remains the same.
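To make the structure of Eqs. (\ref{as}), (\ref{bs}) and (\ref{gs}) concrete, the following minimal numerical sketch (in Python; the function names and conventions are ours) evaluates the amplitudes and the conductance in the simplest case treated in Sec. \ref{results}, an $s$-wave PP at normal incidence, where $\Gamma_+=\Gamma_-$, the phases vanish, $L_S=L_0$ and $Z=Z_0$:
\begin{verbatim}
import numpy as np

def amplitudes_swave(E, X, L0, Z0, spin=+1, Delta0=1.0):
    """a_S and b_S for an s-wave pair potential at theta = 0.
    E is the excitation energy in units of Delta0."""
    eps = E * Delta0
    Omega = np.sqrt(complex(eps**2 - Delta0**2))
    Gamma = (eps - Omega) / Delta0          # Gamma_+ = Gamma_-
    tS  = np.sqrt(1.0 + spin * X)           # t_S
    tSb = np.sqrt(1.0 - spin * X)           # t_{Sbar}
    wp, wm = L0 + 2j * Z0, L0 - 2j * Z0     # w_{S+}, w_{S-}
    D = (tS + wp)*(tSb + wm) - (tS - wm)*(tSb - wp)*Gamma**2
    a = 4.0 * tS * L0 * Gamma / D
    b = ((tS - wp)*(tSb + wm) - (tS + wm)*(tSb - wp)*Gamma**2) / D
    return a, b

def conductance_swave(E, X, L0, Z0):
    """Dimensionless G(E), both spin channels, at theta = 0."""
    G = 0.0
    for spin in (+1, -1):
        P = (1.0 + spin * X) / 2.0          # P_S
        a, b = amplitudes_swave(E, X, L0, Z0, spin)
        ratio = np.sqrt((1.0 - spin * X) / (1.0 + spin * X))  # k_Sbar/k_S
        G += P * (1.0 + ratio * abs(a)**2 - abs(b)**2)
    return G
\end{verbatim}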
We focus in this work (see results in the next Section) on the conductance
spectrum of the charge current as given by Eq. (\ref{gs}),
but the amplitudes $a_S$, $b_S$, given by Eq. (\ref{as}), (\ref{bs})
can be used to calculate many other quantities of interest, such as
current-voltage characteristics, the spin current,
and the spin conductance.\cite{kash} We consider also here
angularly averaged quantities and notice
that Eq. (\ref{gs}) implies that the conductance
vanishes for $|\theta|$
greater than the angle of total reflection (we recall that this angle
is spin dependent). We define the angularly averaged
(AA) conductance, $\langle G_S\rangle$, as
\begin{equation}
\langle G_S \rangle=\int_{\Omega_S} d\theta \cos \theta G_S(\theta)
/\int_{\Omega_S} d\theta \cos\theta,
\label{ga}
\end{equation}
where $\Omega_S$ is limited by the angle of total reflection
or by the experimental setup. This form correctly reduces to
that used in the previously
investigated spin unpolarized situation.\cite{tan} One may choose
a different weight
function in
performing such angular averages, depending
on the specific experimental geometry
and the strength of the interfacial
scattering.\cite{wei,aver,sauls2}
However, all expressions for angularly averaged results, obtained
from different averaging methods,
would still have a factor of
$(1+\frac{k_{\overline{S}}}{k_S}|a_S|^2-|b_S|^2)$ in the kernel of
integration, and would merely require numerical integration of the
amplitudes we have already given here.
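Numerically, the average in Eq. (\ref{ga}) reduces to a one-dimensional quadrature over the allowed angular range; a minimal sketch (the grid size is arbitrary) is:
\begin{verbatim}
import numpy as np

def averaged_conductance(G_of_theta, theta_max):
    """<G> = int cos(t) G(t) dt / int cos(t) dt over |t| <= theta_max,
    with theta_max set by total reflection or by the experimental
    setup.  Plain trapezoidal quadrature."""
    th = np.linspace(-theta_max, theta_max, 2001)
    w = np.cos(th)
    g = np.array([G_of_theta(t) for t in th])
    num = np.sum(0.5 * (w[1:]*g[1:] + w[:-1]*g[:-1]) * np.diff(th))
    den = np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(th))
    return num / den
\end{verbatim}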
\section{Results}
\label{results}
\subsection{Conventional pair potentials}
\label{cpp}
We present our results in terms of the dimensionless
differential conductance, plotted as a function of the
dimensionless energy $E\equiv \epsilon/\Delta_0$. We
concentrate on the region $E\lesssim 1$ since for larger bias
various extrinsic effects, such as heating, tend to dominate the behavior
of the measured conductance.\cite{vas3} While our findings,
and the analytic results from Section \ref{meth}, are valid for any value
of the interfacial scattering, we focus on smaller values of $Z_0$,
$Z_0\leq 1$, where the novel effects of ferromagnetism on Andreev reflection,
and consequently on the conductance, are more pronounced than in the tunneling
limit, $Z_0 \gg 1$. The regime on which we focus is also the one
that is believed to
correspond to several ongoing experiments on F/S structures, where the
samples typically have small interface resistance.\cite{vas2,chen}
To present numerical results, we choose $E'_F/\Delta_0=12.5$, consistent with
optimally doped
$YBa_2Cu_3O_{7-\delta}$.\cite{kresin,hars} We will include
FWM, as parametrized by the quantity $L_0$ introduced above,
$E_F=E'_F/L_0^2$.
We first give some results for an $s$-wave PP, with a constant energy gap.
This will serve to illustrate the influence of FWM coupled with
that of $Z_0$
within a simpler and more familiar context.
In this case,
for any incident angle, $\theta$, of an injected electron the ELQ and HLQ
feel the same PP with $\Delta_{S\pm}=\Delta_0$, and $\phi_{S\pm}\equiv 0$.
Therefore, the results that we give here for the $s$-wave case and
normal incidence ($\theta=0$),
also correspond to the case of a PP of the $d_{x^2-y^2}$ form, with
the angle $\alpha \in (-\pi/2,\pi/2)$, between the crystallographic
$a$-axis and the interface normal, set to $\alpha=0$. This would
represent an F/S interface along the (100) plane.
\begin{figure}[htbp]
\vspace*{-2.0cm} \hspace*{4cm}
\epsfxsize = 3.4 in \epsfbox{wl1.ps}
\vspace*{-0.75cm}
\caption{$G(E)$ (Eq.(\protect{\ref{gs}})) versus
$E\equiv \epsilon/\Delta_0$. Results are for $\theta=0$ (normal incidence).
The curves are for $Z_0=0$ (no barrier):
in panel (a) at exchange energy $X\equiv h_0/E_F=0$ (no spin polarization)
they are (from top to bottom at any $E$)
for the FWM values of $L_0^2=E'_F/E_F=1,1/\sqrt{2},1/2,1/4,1/9,1/16$.
In panel (b) they are for $X=0.866$. Since the curves now cross at $E=1$
they are drawn in different ways for clarity.
For $E>1$ they are in the same order as in panel (a) and
for the same values of $L_0$,
while for $E<1$ they correspond, from top to bottom,
to $L_0^2=1/2,1/\sqrt{2},1,1/9,1/16$. The $L_0^2=1/4$ curve
overlaps with that for $L_0^2=1$ in this range.}\label{l1}
\end{figure}
In Fig. \ref{l1} we show results for $G(E)$, given by Eq. (\ref{gs}), at
$\theta=0$, and $Z_0=0$
(this limit of no interfacial barrier was also considered
in Ref.\onlinecite{been}). We plot results for various values of the FWM
parameter $L_0$. Panel (a) corresponds to no polarization ($X=0$)
and panel (b) to high polarization $X=\sqrt{3}/2\approx 0.866$.
For normal incidence,
we have
$t_S=(1+\rho_S X)^{1/2}$,
$t_{\overline{S}}= (1-\rho_S X)^{1/2}$ (as defined below Eq. (\ref{bs})), and
the subgap conductance can be expressed as
\begin{equation}
G=\frac{32 L^2_0 (1-X^2)^{1/2}}
{|t_\uparrow t_\downarrow+ (t_\uparrow+t_\downarrow)L_0+L^2_0
-(t_\uparrow t_\downarrow- (t_\uparrow+t_\downarrow) L_0+L_0^2)
\Gamma_+ \Gamma_-|^2}.
\label{sub}
\end{equation}
Panel (a) displays results in the absence of exchange energy.
With increasing
FWM (i.e. decreasing $L_0$), the amplitude at zero bias voltage (AZB) decreases
monotonically. This effect was explained\cite{btk2} in previous work
as resulting from the increase in a single parameter
$Z_{eff}$, which combined $Z_0$ with the effects of FWM.
Our curves with FWM ($L_0 < 1$) reduce in the appropriate limits to those
previously found\cite{btk2} with $L_0=1$
and $Z_0 \rightarrow Z_{eff}$, $Z_{eff}>Z_0$. We will see below that this
is not the case at $X\neq 0$.
In panel (b) we give results for high $X$
while keeping the
other parameters at the same values as in panel (a). We notice that the presence of
exchange energy gives rise to non-monotonic behavior in the AZB. At low
bias, the conductance can be enhanced with increasing FWM (compare,
for example, the $L_0=1$ and $L_0=1/\sqrt{2}$ results),
and a zero bias conductance peak (ZBCP) can form. This behavior is qualitatively
different from that found in the
unpolarized case and the effect of FWM can no longer be reproduced
by simply increasing the interface scattering parameter.
Thus the often implied\cite{soul,up} expectation that the effects of $Z_0$ and $L_0$
could also be
subsumed in a single parameter in the spin polarized case is not fulfilled.
In this panel we have an example of coinciding subgap conductances
for $L_0=1$ and $L_0=1/2$.
The condition for this coincidence to take place at
fixed $X$ can be simply obtained from Eq. (\ref{sub}) as
\begin{equation}
t_\uparrow t_\downarrow=L_0 L'_0 \quad \Rightarrow \quad
(1-X^2)=L'^2_0 \: (L_0\equiv1),
\label{coincide}
\end{equation}
where $L_0$, $L'_0$ correspond to two different values of FWM for
which the subgap conductances will coincide.
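This coincidence is easy to confirm numerically with the sketch from Sec. \ref{meth}: for $X=0.866$ one has $t_\uparrow t_\downarrow=(1-X^2)^{1/2}=1/2$, so $L_0=1$ and $L'_0=1/2$ should give identical subgap conductances at $Z_0=0$ and $\theta=0$:
\begin{verbatim}
for E in (0.2, 0.5, 0.9):                     # subgap energies
    g1 = conductance_swave(E, X=0.866, L0=1.0, Z0=0.0)
    g2 = conductance_swave(E, X=0.866, L0=0.5, Z0=0.0)
    assert abs(g1 - g2) < 1e-9, (E, g1, g2)
\end{verbatim}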
We next look, in the same situation as in the previous figure,
at the effects of the presence of an
interfacial barrier.
\begin{figure}[htbp]
\vspace*{-2.0cm} \hspace*{4cm}
\epsfxsize = 3.4 in \epsfbox{wl2.ps}
\vspace*{-0.75cm}
\caption{$G(E)$ for $\theta=0$ and
interfacial barrier strength $Z_0=1$. All the other parameters
are taken as in the previous figure. In both panels,
curves from top to bottom correspond to decreasing values of $L_0$.}
\label{l2}
\end{figure}
In Fig. \ref{l2}, we choose $Z_0=1$,
while keeping all the other parameters the same as in
the corresponding panel of the previous figure. In panel (a)
we show results in the absence of spin
polarization. A finite bias conductance peak
(FBCP) appears at the gap edge. It becomes increasingly narrow with
greater FWM (smaller $L_0$).
Its amplitude
is $2$, independent of $L_0$. In panel (b), at $X=0.866$, the conductance
curves display similar behavior, but with a reduced FBCP at the gap
edge. From Eqs. (\ref{as}), (\ref{bs}), and (\ref{gs}), the amplitude
of the FBCP in this case is
\begin{equation}
G(E=1)=\frac{4(1-X^2)^{1/2}}{1+(1-X^2)^{1/2}}.
\label{edge}
\end{equation}
An interesting feature of this result is that it depends only
on the exchange energy
(spin polarization) and not on the FWM parameter
or the barrier strength. It can be shown that this property holds
for all angles of incidence. This is in contrast
with the value of the zero bias conductance which depends, both for normal
incidence and for other angles, on the value of the FWM. This dependence
could introduce difficulties in the accurate determination of spin
polarization from the AZB.\cite{soul,up} The gap edge
value is less susceptible to these problems.
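The independence of Eq. (\ref{edge}) from $L_0$ and $Z_0$ can likewise be checked numerically with the same sketch:
\begin{verbatim}
X = 0.6
target = 4.0*np.sqrt(1.0 - X**2) / (1.0 + np.sqrt(1.0 - X**2))
for L0, Z0 in [(1.0, 0.0), (0.5, 1.0), (0.25, 3.0)]:
    assert abs(conductance_swave(1.0, X, L0, Z0) - target) < 1e-9
\end{verbatim}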
\begin{figure}[htbp]
\vspace*{-3.75cm} \hspace*{4cm}
\epsfxsize = 3.4 in \epsfbox{wl3.ps}
\vspace*{-3.0cm}
\caption{Evolution of the zero bias conductance, $G(E)$ for $\theta=0$.
Results are given at $Z_0=0$ for $X$ determined from Eq. (\protect\ref{azb})
and values of $L_0$ as in Fig. \protect{\ref{l1}}. From top to
bottom the curves correspond to
values of $(L_0^2,X)$ given by $(1,0)$, $(1/\sqrt{2},1/\sqrt{2})$,
$(1/2,0.866)$, $(1/4,0.968)$, $(1/9,0.994)$, and $(1/16,0.998)$.}
\label{l3}
\end{figure}
The presence of spin polarized carriers, due to nonvanishing
exchange energy, is usually held\cite{up,been,ting} to result in the suppression
of Andreev reflection and thus in a reduction of the subgap
conductance. A simple explanation,\cite{been} which neglects the effects of FWM,
predicts that the AZB should monotonically decrease with increasing
$X$, because of the reduction of Andreev reflection, when
only a fraction of injected electrons from the majority spin band
can be reflected as holes belonging to the minority spin band.
This follows from the reduction of the density of states in the
minority spin band with increasing $X$, and eventually causes
the subgap conductance to vanish for a half-metallic ferromagnet when
$X\rightarrow 1$.
\begin{figure}[htbp]
\vspace*{-2.0cm} \hspace*{4cm}
\epsfxsize = 3.4 in \epsfbox{wl4.ps}
\vspace*{-0.75cm}
\caption{$\langle G(E)\rangle$, the $\theta$ averaged
conductance, for an $s$-wave PP and the same values
of $X$, $L_0$ as in panels (b) of Figs. \protect{\ref{l1}},
\protect{\ref{l2}}, respectively.
In both panels curves from top to bottom, at
$E=2$, correspond to decreasing $L_0$.}
\label{l4}
\end{figure}
We now proceed to examine whether these findings are
modified when FWM is taken into account.
In Fig. \ref{l3}, which shows results at $Z_0=0$ and normal incidence,
we consider the evolution of the conductance
curves for different values of $X$ and $L_0$ chosen to yield maximum AZB,
($G(E=0)=2$), starting from the step-like feature at $L_0=1$ and $X=0$
(see Fig. \ref{l1}).
The condition for maximum AZB at fixed FWM and polarization
can be derived\cite{zv} from Eq. (\ref{sub}) and is
\begin{equation}
k_{\uparrow} k_{\downarrow}=k'^2_F
\quad \Rightarrow \quad(1-X^2)^{1/2}=L_0^2.
\label{azb}
\end{equation}
We have used this equation to determine the optimal value of $X$ for each
value of $L_0$ used
in Figs. \ref{l1}, \ref{l2}. The resulting curves are plotted
in Fig. \ref{l3}.
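The optimal pairs quoted in the caption of Fig. \ref{l3} follow directly
from Eq. (\ref{azb}), as this short numerical sketch (with our own naming)
shows:
\begin{verbatim}
import math

# Maximum-AZB condition, Eq. (azb): (1 - X^2)^(1/2) = L0^2, so the
# optimal polarization at a given FWM is X = (1 - L0^4)^(1/2).
for L0_sq in (1.0, 1.0 / math.sqrt(2.0), 1.0 / 2, 1.0 / 4,
              1.0 / 9, 1.0 / 16):
    X = math.sqrt(1.0 - L0_sq**2)
    print(f"L0^2 = {L0_sq:.4f} -> X = {X:.3f}")
# Reproduces the (L0^2, X) pairs quoted in the caption: (1, 0),
# (0.7071, 0.707), (0.5, 0.866), (0.25, 0.968), (0.1111, 0.994),
# (0.0625, 0.998).
\end{verbatim}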
This figure reveals several interesting features. With the increase
of FWM and the correspondingly larger
optimal spin polarization (according
to the value of $X$ found from Eq. (\ref{azb})), a ZBCP forms.
This is a novel effect in which the peak arises from a mechanism completely
different
from the one usually put forward, where the ZBCP
is attributed to the
presence of unconventional superconductivity. In that case, the ZBCP
is produced by the sign change of the PP and the concomitant
formation of Andreev bound
states.\cite{hu,tan,alf}
Furthermore, if we compare these curves with those in panel (b) of Fig. \ref{l1},
we see that the subgap conductance can increase with increasing
spin polarization at fixed $L_0$. This implies that Andreev
reflection can be enhanced by spin polarization.
We now turn to angular averages (AA).
In Fig. \ref{l4} we show angularly averaged results, obtained
from the expression for $\langle G \rangle$,
Eq. (\ref{ga}). The averaged results are no longer equivalent
(as in the previous figures with normal incidence) to the case of
a $d_{x^2-y^2}$ PP with an F/S interface along the (100) plane: the angular
dependence of the PP would then modify the results.
Each of the two panels shown includes results for the same set of parameter
values
used in panels (b) of Figs. \ref{l1}, and \ref{l2}, respectively.
In panel (a) of the current figure we show how the novel features
previously discussed are largely
preserved after angular averaging. There is still formation
of a ZBCP with increased FWM and the AZB retains its non-monotonic
behavior with $L_0$, as in the case of fixed normal incidence.
The angularly averaged results in panel (b), at $Z_0=1$,
display behavior similar to that found in the $\theta=0$ case,
with the conductance
peak at $E=1$ becoming sharper at increasing FWM.
\subsection{Unconventional pair potentials}
\label{upp}
We next consider an angularly dependent PP, specifically that
for a $d_{x^2-y^2}$ pairing state. With this PP
we have different, spin dependent, PP's for ELQ's and HLQ's. These
are given respectively by
$\Delta_{S\pm}=\Delta_0\cos(2 \theta'_{S\pm})$, where $\theta'_{S\pm}$
can be expressed as
$\theta'_{S\pm}=\theta'_{S}\mp\alpha$ (we recall that $\alpha$ is
the angle between the interface normal and the crystallographic $a$-axis,
and $\theta'_S$ is related to $\theta$ through Eq. (\ref{snell})).
\begin{figure}[htbp]
\vspace*{-2.0cm} \hspace*{4cm}
\epsfxsize = 3.4 in \epsfbox{wl5.ps}
\vspace*{-0.75cm}
\caption{$G(E)$
for $\theta=\pi/10$, $\alpha=\pi/4$, $Z_0=0$, and $L_0=1$.
In (a) the curves are for
$X=0,0.5,0.7,0.8, 0.866,0.95$,
(top to bottom at $E=0$).
In (b), we plot the
spin resolved conductance, for two values of $X$.
The upper curve at $E>1$ corresponds to
$G_\uparrow$, and the lower curve to $G_\downarrow$.}
\label{l5}
\end{figure}
In Fig. \ref{l5} we give some of our results
for $d$-wave pairing and $\alpha=\pi/4$ (interface in the (110) plane),
in the absence of both interfacial barrier and FWM and
at a fixed $\theta=\pi/10$, for various values of $X$.
Panel (a) shows curves for the total conductance as it
evolves from a step-like
feature at $X=0$ to a zero bias conductance dip (ZBCD) for large
spin polarization. The width of the plateau at $X=0$ is determined by a
single energy scale set by the equal magnitudes of the PP's for ELQ and HLQ
in that case, as given by
$\Delta_{S+}=\Delta_{S-}<\Delta_0$, $S=\uparrow,\downarrow$. As the
exchange energy is increased, $k_{F\uparrow}$ and $k_{F\downarrow}$
are no longer equal. As one can see from Eq. (\ref{snellb}), it follows that
$\theta'_\uparrow \neq \theta'_\downarrow$ and thus
$\Delta_{\uparrow\pm} \neq \Delta_{\downarrow\pm}$. These two different energy
scales are responsible for the position of various features, such as the
several finite bias conductance peaks (FBCP's) that are seen.
\begin{figure}[htbp]
\vspace*{-2.0cm} \hspace*{4cm}
\epsfxsize = 3.4 in \epsfbox{wl6.ps}
\vspace*{-0.75cm}
\caption{$G(E)$
for $\theta=\pi/10$, $\alpha=\pi/6$, $Z_0=0$, and $L_0=1$.
In both panels ordering and values of $X$ for each curve are as in
Fig. \protect{\ref{l5}}.}
\label{l6}
\end{figure}
In panel (b) we show the spin decomposition $G=G_\uparrow+G_\downarrow$,
which better reveals these scales, at two different exchange energies.
At $X=0.5$, the shapes of $G_\uparrow$, $G_\downarrow$ are only slightly
modified
from those in the unpolarized case.
\begin{figure}[htbp]
\vspace*{-2.0cm} \hspace*{4cm}
\epsfxsize = 3.4 in \epsfbox{wl7.ps}
\vspace*{-0.75cm}
\caption{$G(E)$
for $\theta=\pi/10$, $Z_0=1$, and $L_0=1$. In both panels
curves (top to bottom at $E=0$) correspond to $X=0,0.5,0.7,0.8,0.9$.
In (a) $\alpha=\pi/4$, and in (b) $\alpha=\pi/6$.}
\label{l7}
\end{figure}
At larger exchange energy, $X=0.866$,
the situation is
very different, as shown in the figure.
We also see, in
panel (b), that as
stated in the previous section, the evanescent wave associated
with the imaginary $k_{\overline{\uparrow}}$
does not contribute to the subgap conductance $G_\uparrow$.
In general, for an arbitrary orientation of the
F/S interface, $\alpha\neq 0,\pi/4$, at a fixed $\theta$, all four
spin dependent PP's for ELQ and HLQ will have different magnitudes.
There are, therefore, specific features at four different energy scales.
It is only for the particular and atypical (but often
chosen in theoretical work) case of $\alpha=\pi/4$
that these four scales reduce to two.
In Fig. \ref{l6} we show the general behavior by choosing $\alpha=\pi/6$,
while retaining the values of all the other parameters from the previous
figure. One can easily calculate, for example, that at $X=0.5$ the
normalized values
of the PP are, in units of the gap maximum, $\Delta_0$,
$|\Delta_{\uparrow+}|=0.963$, $|\Delta_{\uparrow-}|=0.250$,
$|\Delta_{\downarrow+}|=0.822$, $|\Delta_{\downarrow-}|=0.083$.
These numbers can also be approximately inferred from the spin resolved
results given by the solid lines in panel (b).
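These magnitudes can be checked numerically. The sketch below assumes the
refraction relation $\sin\theta'_{S} = \sqrt{1\pm X}\,\sin\theta$, our
reading of Eqs. (\ref{snell}), (\ref{snellb}) for $L_0=1$ (no FWM):
\begin{verbatim}
import math

theta, alpha, X = math.pi / 10, math.pi / 6, 0.5
for spin, sign in (("up", +1), ("down", -1)):
    # sin(theta'_S) = sqrt(1 +/- X) sin(theta), assumed form for L0 = 1
    tp = math.asin(math.sqrt(1.0 + sign * X) * math.sin(theta))
    for branch, s in (("+", -1), ("-", +1)):  # theta'_{S+-} = theta'_S -/+ alpha
        print(spin, branch, abs(math.cos(2.0 * (tp + s * alpha))))
# -> 0.9635, 0.2500, 0.8216, 0.0829 in units of Delta_0, matching
#    (after rounding) the values quoted above
\end{verbatim}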
\begin{figure}[htbp]
\vspace*{-2.0cm} \hspace*{4cm}
\epsfxsize = 3.4 in \epsfbox{wl8.ps}
\vspace*{-0.75cm}
\caption{$\langle G(E) \rangle$, at $Z_0=1$, and $L_0=1$
for $d_{x^2-y^2}$, and
$d_{x^2-y^2}+is$ pair potentials. The latter is of the form
$\Delta_{S\pm}=\Delta_0\cos(2 \theta'_{S\pm})+i0.1\Delta_0$.
In panel (a) $\alpha=\pi/4$ and in (b) $\alpha=\pi/6$. From top to
bottom (at $E=0$), the curves correspond to $X=0,0.5,0.9$,
in both panels and for each pair potential.}
\label{l8}
\end{figure}
We next turn to the case where there is a nonvanishing potential barrier,
choosing for illustration the value $Z_0=1$. In the absence of spin
polarization, the
formation of a ZBCP at finite barrier strength
has been extensively investigated\cite{hu,tan,xu}
and explained by Andreev bound states
in the context of $d$-wave superconductivity. We will consider here also
the effects of $X$, not included in previous work.
In Fig. \ref{l7} we show results for various values of $X$ at $\alpha=\pi/4$
(in panel (a)) and $\alpha=\pi/6$ (panel (b)). One can see that
for intermediate values of $X$
the conductance maximum is at finite bias. Comparing the two panels,
one sees that the AZB
at a fixed $X\neq 0$ is larger for $\alpha=\pi/4$, in agreement with the
results obtained for the
unpolarized case where, at zero bias, the spectral weight is maximal\cite{hu}
for a (110) interface.
For a different choice of incident angle $\theta$ there will be,
if the values of all other parameters are held fixed, a change in the effective
barrier strength for various scattering processes.
We recall
(see below Eq. (\ref{bs})) that $Z=Z_0/\cos\theta$, and with an
increase in $|\theta|$ typically there will be,
as in the unpolarized case,\cite{bag} a decrease in the amplitude for Andreev
reflection and an increased amplitude for
ordinary reflection.
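A quick numerical illustration (variable names are ours):
\begin{verbatim}
import math

# Effective barrier Z = Z0 / cos(theta) grows away from normal
# incidence, suppressing Andreev reflection in favor of ordinary
# reflection.
Z0 = 1.0
for deg in (0, 30, 60, 80):
    print(deg, Z0 / math.cos(math.radians(deg)))
# -> Z = 1.00, 1.15, 2.00, 5.76
\end{verbatim}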
Results such as those discussed above can be obtained as a function of angle,
and the angular average can then be computed from Eq. (\ref{ga}). We will
combine showing some of these angularly averaged results with a brief study of
another point:
it is straightforward to use the formalism discussed here to examine more
complicated superconducting order parameters. A question that has given
rise to a considerable amount of discussion is that of
whether the superconducting order parameter in high $T_c$ materials
is pure $d$-wave or contains a mixture of $s$ wave as well, with an
imaginary component, so that there would not be, strictly speaking,
gap nodes, but only very deep minima. With this in mind,
the effect of a possible ``imaginary'' PP admixture (for example in a
$d+is$ form) on Andreev bound states has also been recently studied.
\cite{cov,sauls,ting2} We consider this question here, including the
effects of polarization.
In Fig. \ref{l8}, we illustrate the difference in the
angularly averaged conductance values
obtained for a pure
$d_{x^2-y^2}$ PP and for a mixed $d_{x^2-y^2}+is$ case. We choose the
particular
form $\Delta_{S\pm}=0.9\Delta_0\cos(2 \theta'_{S\pm})+i 0.1\Delta_0$.
The phase of the PP, $\phi_{S\pm}$, is no longer equal to $\pi$ or $0$ as in
the pure $d$-wave case. We give AA results for several values
of $X$, both for the pure $d$ and the mixed $d+is$ cases.
\begin{figure}[htbp]
\vspace*{-2.0cm} \hspace*{4cm}
\epsfxsize = 3.4 in \epsfbox{wl9.ps}
\vspace*{-0.75cm}
\caption{$G(E)$
for $\theta=\pi/10$, $Z_0=0$, and $L_0=1/2, 1/3$.
In panel (a) $\alpha=\pi/4$ and in (b) $\alpha=\pi/6$.
From top to bottom, at $E=0$, curves correspond to $X=0,0.5,0.8$,
in both panels and for each value of $L_0$.
\label{l9}
\end{figure}
The former
represents the angular average of results similar to those
previously displayed.
\begin{figure}[htbp]
\vspace*{-2.0cm} \hspace*{4cm}
\epsfxsize = 3.4 in \epsfbox{wl10.ps}
\vspace*{-0.75cm}
\caption{Conductance curves for $\theta=\pi/10$ at $Z_0=1$ with the same
parameters and ordering as in Fig. \protect{\ref{l9}}.}
\label{l10}
\end{figure}
As in the unpolarized case,\cite{sauls,ting} the $is$
admixture
in the PP is responsible for a FBCP, approximately at
$E=0.1$. The conductance maximum is reduced with increased $X$ and
with departure from a (110) oriented interface.
Replacing the
$d_{x^2-y^2}+is$ PP by a ``real'' admixture $d_{x^2-y^2}+s$
(taking again $0.1\Delta_0$ for the $s$-wave part) gives
results almost indistinguishable from the pure $d$-wave for any
value of spin polarization.
To show the effects of FWM on conductance for a pure $d$-wave PP we take
$L_0=1/2,1/3$ and give results at the fixed angle, $\theta=\pi/10$,
previously considered. In Fig. \ref{l9} we show curves at $Z_0=0$
and $\alpha=\pi/4$ (panel (a)), and for $\alpha=\pi/6$ in panel (b).
It is useful to compare this figure to panel (a) in Figs. \ref{l5}, and
\ref{l6}, corresponding to no FWM for $\alpha=\pi/4$ and $\pi/6$, respectively.
In the absence of spin polarization the effect of FWM resembles the influence
of a nonvanishing barrier strength, $Z_0$, and leads to the formation
of a ZBCP,
which becomes increasingly narrow for smaller $L_0$. The effect of moderate
spin polarization ($X \lesssim 0.5$, for comparison with the above
mentioned figures) on the AZB is
rather small for $L_0=1, 1/2$ but it is significantly larger at $L_0=1/3$.
In the next figure, Fig. \ref{l10}, we use $Z_0=1$ and the same parameters as in
the previous
figure, so that the influence of barrier strength can
be gauged. One sees that in the presence of spin polarization the position
of the conductance maximum depends on FWM. With increasing mismatch,
the FBCP
evolves into a ZBCP. By comparing the curves
corresponding to $L_0=1$ in Fig. \ref{l7} with those
for smaller $L_0$ in Fig. \ref{l10}, it is
interesting to notice that an effect similar to that
discussed previously for $s$-wave PP without an interfacial barrier
and at normal incidence is also manifested in other regimes, in that
the conductance maximum can actually be enhanced,
in the spin polarized case (at fixed $X$), by the FWM.
\section{Conclusions}
\label{conc}
In this paper we have studied the conductance spectra of
ferromagnetic/superconductor structures. The expressions we have given
for the Andreev and ordinary reflection amplitudes allow one to readily
obtain other quantities of interest, such as current-voltage
characteristics or conductance spectra for spin current.\cite{kash}
We have developed the appropriate extensions of
the standard approach and approximations used in
the absence of spin polarization. This has enabled us to present analytic
results. Within these approximations, and with the inclusion of FWM, we have
shown a number of important
qualitative differences from the unpolarized case or from that where
spin polarization is included
in the absence of FWM.
Our considerations may also be important in the interpretation of recent
experiments\cite{soul,up} attempting to use tunneling to
measure the degree of spin polarization in the ferromagnetic
side of the junction, since the experimental determination
of spin polarization in a ferromagnet is a very difficult
and important question in its own right.
As we have shown, the ZBCP is sensitive to both spin polarization and FWM,
while the gap edge amplitude depends only on $X$.
It is then not possible to straightforwardly determine the spin polarization
by using the results for the amplitude of the zero bias conductance unless the
appropriate FWM of the F/S structure is
known and properly taken into account. Furthermore, FWM cannot, unlike in the
unpolarized case,\cite{btk2}
be simply described by a rescaled
value of the interfacial barrier strength.
The procedures used here have the advantages of simplicity and of allowing
for analytic solutions. These advantages have enabled us to investigate
widely the relevant parameter space.
We have left for future work considerations that would have diminished
these advantages. Among these
are the question of the self consistent treatment of the PP, inclusion
of spin-flip scattering or of a
more realistic band structure, and non-equilibrium transport. However,
we believe that the methods we have employed
are sufficient to elucidate the hitherto unappreciated subtleties and
the richness and variety of the phenomena associated with
spin polarized tunneling spectroscopy.
We hope that our work,
as reported here and in Ref.~\onlinecite{zv}, will prompt additional experiments
and theoretical work. In particular,
an important clue about spin polarized transport would be provided by
measurements of the spin resolved conductance. Indeed, we have
already become aware of two very recent related preprints
pointing in these directions, one\cite{si}
on Andreev reflection and spin injection into $s$- and
$d$-wave superconductors, and another\cite{enhance} discussing,
in conventional superconductors,
out of equilibrium enhanced Andreev reflection with spin polarization.
\section{Acknowledgements}
We would like to thank J. Fabian, A.M. Goldman, A.J. Millis, S. Das Sarma,
T. Venkatesan, V.A. Vas'ko, and S. Gasiorowicz for useful
discussions. This work was supported by the US-ONR.
\section{Introduction}
\begin{figure}[t]
\includegraphics[width=\columnwidth]{figures/intro_fig_2.png}
\caption{Overview of our experimental design. Two probes are evaluated using learning curves (including zero-shot). \robertal{}'s (red squares, upper text in black) accuracy is compared to a \textsc{No Language} (\nolang{}) control (red circles, lower text in black), and \mlmbaseline{}, which is not pre-trained (green triangles). Here, we conclude that the LM representations are well-suited for task A), whereas in task B) the model is adapting to the task during fine-tuning.}~\label{fig:intro-fig}
\end{figure}
Large pre-trained language models (LM) have revolutionized the field of natural language processing in the last few years \cite{NIPS2015_5949,peters2018elmo,yang2019xlnet,radford2019gpt2,devlin2019bert}.
This has instigated research exploring what is captured by the contextualized representations that these LMs compute, revealing that they encode substantial amounts of syntax and semantics \cite{linzen2016assessing,tenney2018what,tenney-etal-2019-bert,lexcomp_tacl_2019,lin2019open,coenen2019visualizing}.
\begin{table*}[t]
\centering
\resizebox{1.0\textwidth}{!}{
\begin{tabular}{l|l|l|c}
\textbf{Probe name} & \textbf{Setup} & \textbf{Example} & \textbf{Human\footnotemark[1]} \\
\hline
\textsc{Always-Never} & \mcmlm{} & \emph{A \underline{chicken} [MASK] has \underline{horns}.} \ansa{} never \ansb{} rarely \ansc{} sometimes \ansd{} often \anse{} always & 91\% \\
\hdashline
\textsc{Age Comparison} & \mcmlm{} & \emph{A \underline{21} year old person is [MASK] than me in age, If I am a \underline{35} year old person.} \ansa{} younger \ansb{} older & 100\% \\
\textsc{Objects Comparison} & \mcmlm{} & \emph{The size of a \underline{airplane} is [MASK] than the size of a \underline{house} .} \ansa{} larger \ansb{} smaller & 100\% \\
\hdashline
\textsc{Antonym Negation} & \mcmlm{} & \emph{It was [MASK] \underline{hot}, it was really \underline{cold} .} \ansa{} not \ansb{} really & 90\% \\
\hdashline
\textsc{Property Conjunction} & \mcqa{} & \emph{What is usually \underline{located at hand} and \underline{used for writing}?} \ansa{} pen \ansb{} spoon \ansc{} computer & 92\% \\
\textsc{Taxonomy Conjunction} & \mcmlm{} & \emph{A \underline{ferry} and a \underline{floatplane} are both a type of [MASK].} \ansa{} vehicle \ansb{} airplane \ansc{} boat & 85\% \\
\hdashline
\textsc{Encyc. Composition} & \mcqa{} & \emph{When did the band where Junior Cony played first form?} \ansa{} 1978 \ansb{} 1977 \ansc{} 1980 & 85\% \\
\textsc{Multi-hop Composition} & \mcmlm{} & \emph{When comparing a \underline{23}, a \underline{38} and a \underline{31} year old, the [MASK] is oldest} \ansa{} second \ansb{} first \ansc{} third & 100\% \\
\end{tabular}}
\caption{Examples for our reasoning probes. We use two types of experimental setups, explained in \S\ref{sec:models}. In all examples, \ansa{} is the correct answer.}
\label{tab:intro}
\end{table*}
Despite these efforts, it remains unclear \emph{what symbolic reasoning capabilities are difficult to learn from an LM objective only?} In this paper, we propose a diverse set of probing tasks for types of symbolic reasoning that are potentially difficult to capture using an LM objective (see Table~\ref{tab:intro}). Our intuition is that since an LM objective focuses on word co-occurrence, it will struggle with tasks that are considered to involve symbolic reasoning, such as determining whether a \emph{conjunction} of properties is held by an object, and \emph{comparing} the sizes of different objects. Understanding what is missing from current LMs may help design datasets and objectives that will endow models with the missing capabilities.
However, how does one verify whether pre-trained representations hold information that is useful for a particular task?
Past work mostly resorted to fixing the representations and \emph{fine-tuning} a simple, often linear, randomly initialized probe, to determine whether the representations hold relevant information \cite{ettinger2016probing,adi2016fine,belinkov2019analysis,structural-probe,wallace2019nlp,rozen2019diversify,peters2018dissecting,warstadt2019investigating}. However, it is difficult to determine whether success is due to the pre-trained representations or due to fine-tuning itself \cite{hewitt2019designing}.
To handle this challenge, we include multiple controls that improve our understanding of the
results.
Our ``purest'' setup is zero-shot: we cast tasks in the \emph{masked LM} format, and use a pre-trained LM without any fine-tuning. For example, given the statement \nl{A cat is [MASK] than a mouse}, an LM can decide if the probability of \nl{larger} is higher than \nl{smaller} for the masked word (Figure~\ref{fig:intro-fig}). If a model succeeds over many pairs of objects without any fine-tuning, then its representations are useful for this task. However, if it fails, it could be due to a mismatch between the language it was pre-trained on and the language of the probing task (which might be automatically generated, containing grammatical errors). Thus, we also compute the learning curve (Figure~\ref{fig:intro-fig}),
by fine-tuning with increasing amounts of data on the already pre-trained masked language modeling (MLM) output ``head'', a 1-hidden layer MLP on top of the model's contextualized representations. A model that adapts from fewer examples arguably has better representations for it.
Moreover, to diagnose whether model performance is related to pre-training or fine-tuning, we add controls to every experiment (Figures~\ref{fig:intro-fig},\ref{fig:controls-example}). First, we add a control
that makes minimal use
of language tokens, i.e., \nl{cat [MASK] mouse} (\nolang{} in Figure~\ref{fig:intro-fig}).
If a model succeeds given minimal use of language, the
performance can be mostly attributed to fine-tuning rather than to the pre-trained language representations. Similar logic is used to compare against baselines that are not pre-trained (except for non-contextualized word embeddings).
Overall, our setup provides a rich picture of whether LM representations help in solving a wide range of tasks.
We introduce eight tasks that test different types of reasoning, as shown in Table~\ref{tab:intro}.\footnote{Average human accuracy was evaluated by two of the authors. Overall inter-annotator agreement accuracy was 92\%.}
We run experiments using several pre-trained LMs, based on \bert{} \cite{devlin2019bert} and \roberta{} \cite{liu2019roberta}. We find that there are clear qualitative differences between different LMs with similar architecture. For example, \textsc{RoBERTa-Large} (\robertal{}) can perfectly solve some reasoning tasks, such as comparing numbers, even in a zero-shot setup, while other models' performance is close to random. However, good performance is highly \emph{context-dependent}. Specifically, we repeatedly observe that even when a model solves a task, small changes to the input quickly derail it to low performance. For example, \robertal{} can almost perfectly compare people's ages when the numeric values are in the expected range (15-105), but fails miserably if the values are outside this range.
Interestingly, it is able to reliably answer when ages are specified through the birth year in the range 1920-2000.
This highlights that the LMs ability to solve this task is strongly tied to the specific values and linguistic context and does not generalize to arbitrary scenarios.
Last, we find that in four out of eight tasks, all LMs perform poorly compared to the controls.
Our contributions are summarized as follows:
\begin{itemize}[leftmargin=*,topsep=0pt,itemsep=0pt,parsep=0pt]
\item A set of probes that test whether specific reasoning skills are
captured by pre-trained LMs.
\item An evaluation protocol for understanding whether a capability is encoded in pre-trained representations or is learned during fine-tuning.
\item An analysis of skills that current LMs possess. We find that LMs with similar architectures are qualitatively different, that their success is context-dependent, and that often all LMs fail.
\item Code and infrastructure for designing and testing new probes on a large set of pre-trained LMs.
\end{itemize}
\comment{
\begin{itemize}
\item \at{the Olympics are coming soon, and the NLP community is sending it's best contestants - Pre-trained Language Models. But how well can they do against the praised human reasoning capabilities? (let's say something amusing here? :) }
\item In the last 1-2 years large models pre-trained on large amounts of data had enormous success in NLP, substantially advancing the field etc. etc.
\item This has resulted in a lot of work that tries to investigate what types of knowledge is captured by these LMs, where it was found that they capture some world knowledge (LM as KBs among others) that they captures syntactic stuff (many work including from Yoav and Tal and Stanford and more
\item Still there is something missing - results have shown that LMs capture many different things to some extent, but it is difficult from current literature to systematically compare the extent to which LM pre-training captures different types of knowledge. This is especially pronounced when considering tasks that require what humans usually view as symbolic reasoning, such as the ability to determine whether some object manifests a conjunction of properties, or determining whether a certain fact is always true or only frequently. Intuitively, such symbolic reasoning would be difficult to capture through an objective that focuses on co-occurrence only. (should refer to figure with examples)
\item \at{why would we like the LM the capture stuff? }
\item In this paper, we propose a diverse set of probing/reasoning tasks (i.e. "games") to examine whether LM-like objectives capture various types of reasoning/knowledge over a large number of different LMs (refer to table with tasks or overview them?).
\item Our set of tasks includes tasks that intuitively should be easily captured by LM pre-training and other tasks that should be difficult to solve given LM pre-training.
\item To test what LMs capture in pre-training, ideally, we would like to test them as is, without influencing their params, thus we conduct several zero shot experiments in the masked language model objective.
\item However, slight language mismatch between pre-training and the games may cause low zero-shot results to be low, not truly indicating if the LMs know this task.
\item Fine-tuning may help overcome this mismatch, however, a sufficiently expressive fine-tuning probe with enough training data can learn any task on top of it, regardless of the LM capabilities. (cite NNs are universal approximates? )
\item To constraint the expressiveness of the probe, and apply minimum changes to the language model, we prefer games set up using the language model masked objective.
\item however we acknowledge that some tasks a hard to express in this format, therefore a multi-choice question answering probe is used in some of the games.
\item To control for the amount of training data the games recieve, we make use of fine-tuning learning curves, generally limiting the number of examples of 4,000.
\item We then compare the learning curves of the original probes to specially introduced controls designed to test the selectivity of the LM to the task at hand, these controls include a lexical variations (removing the language from the probe), weight freezing, and comparing to a model that has not been pre-trained.
\item We find that, when tested for knowledge, the LMs show strong performance in lexical-semantic and frequent common-sense and world-knowledge fact prediction (cite Comet, and Riedel's KB stuff) but struggle with less frequently mentioned "long tail" world knowledge.
\item Interestingly, we find that the LM struggle with "negative facts", facts that state what does not exist, e.g. "rhinoceros never has fur" (talk about blind people experiment?). Moreover, knowledge in the form of quantifiers, e.g. "men sometimes/never/always have a beard" is not well expressed by the pre-trained language models.
\item We also test a wide veriaty of probes for low level reasoning challenges. In general we find that most LMs struggle with low level reasoning "out-of-the-box". However, we did find, specifically for RoBERTa-Large, a few cases that indicate it is able to capture various types of low level reasoning such as age comparison etc..
\item However this reasoning is very context specific: for age comparison, once the range is out of frequent ranges or the use of the parameters is out of context (numbers used for some other task) the model performance on this task decreases. (elaborate on the reasoning,... )
\end{itemize}
Contributions:
\begin{itemize}
\item A novel set of probes and challenges designed to test if specific reasoning skills and knowledge were encoded in the Language Model during its pre-training.
\item A novel evaluation method based on learning-curves over a training-set limited to only 4,000 examples, by which we determine if knowledge has been acquired during the training.
\item A detailed analysis of the knowledge and reasoning skills acquired by current state-of-the-art language models during pre-training, and the ease in which some reasoning capabilities not encoded during training can be acquired.
\item Code and infrastructure for easily designing new probes and testing them on a large set of pre-trained language models
\end{itemize}
}
The code and models are available at \url{http://github.com/alontalmor/oLMpics}.
\section{Models}
\label{sec:models}
We now turn to the architectures and loss functions used throughout the different probing tasks.
\subsection{Pre-trained Language Models}
All models in this paper take a sequence of tokens $\inputtokens = (\inputtokens_1, \dots, \inputtokens_n)$, and compute contextualized representations with a pre-trained LM, that is, $\reprs = \text{ENCODE}(\inputtokens) = (\reprs_1, \dots, \reprs_n)$.
Specifically, we consider (a) \bert{} \cite{devlin2019bert}, a pre-trained LM built using the Transformer \cite{vaswani2017attention} architecture, which consists of a stack of Transformer layers, where each layer includes a multi-head attention sub-layer and a feed-forward sub-layer. \bert{} is trained on large corpora using the \emph{masked language modeling} (MLM) objective, i.e., the model is trained to predict words that are masked from the input; we include \textsc{BERT-Whole-Word-Masking} (\bertwwm{}), which was trained using \emph{whole-word masking}; and (b) \textsc{RoBERTa} \cite{liu2019roberta}, which has the same architecture as \bert{}, but was trained on 10x more data and optimized carefully.
\subsection{Probing setups}
We probe the pre-trained LMs using two setups:
multi-choice masked LM (\mcmlm{}) and multi-choice question answering (\mcqa{}). The default setup is \mcmlm{}, used for tasks where the answer-set is small, consistent across the different questions, and each answer appears as a single item in the word-piece vocabulary.\footnote{Vocabularies of LMs such as \bert{} and \roberta{} contain \emph{word-pieces}, which are sub-word units that are frequent in the training corpus. For details see \newcite{sennrich2015neural}.} The MC-QA setup is used when the answer-set substantially varies between questions, and many of the answers have more than one word piece.
\paragraph{\mcmlm{}}
Here, we convert the MLM setup to a multi-choice setup (\mcmlm{}). Specifically, the input to the LM is the sequence $\inputtokens = (\texttt{[CLS]}, \dots, \inputtokens{}_{i-1}, \texttt{[MASK]}, \inputtokens{}_{i+1}, \dots, \texttt{[SEP]})$, where a single token $\inputtokens_i$ is masked. Then, the contextualized representation $\reprs_i$ is passed through a \emph{\mcmlm{} head}, where $\mathcal{V}$ is the vocabulary and $FF_\text{MLM}$ is a 1-hidden layer MLP:
$$
l = FF_\text{MLM}(\reprs_i) \in \mathbb{R}^{|\mathcal{V}|},
p = \text{softmax}(m \oplus l),
$$
where $\oplus$ is element-wise addition and $m \in \{0, -\infty\}^{|\mathcal{V}|}$ is a mask that
guarantees that the support of the probability distribution will be over exactly $K \in \{2,3,4,5\}$ candidate tokens: the correct one and $K-1$ distractors. Training minimizes cross-entropy loss given the gold masked token.
An input, e.g. \nl{\texttt{[CLS]} Cats \texttt{[MASK]} drink coffee \texttt{[SEP]}}, is passed through the model, the contextualized representation of the masked token is passed through the MC-MLM head, and the final distribution is over the vocabulary words \nl{always}, \nl{sometimes}, and \nl{never}; in this case, the gold token is \nl{never}.
A compelling advantage of this setup
is that reasonable performance can be obtained without training, using the original LM representations and the already pre-trained MLM head weights \cite{petroni2019language}.
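To make the head concrete, the following minimal PyTorch sketch (with
hypothetical toy dimensions) implements the restricted softmax defined
above:
\begin{verbatim}
import torch
import torch.nn.functional as F

def mc_mlm_probs(vocab_logits, candidate_ids):
    # m in {0, -inf}^|V| restricts the support to the K candidates,
    # then p = softmax(m + l) as defined above.
    m = torch.full_like(vocab_logits, float("-inf"))
    m[candidate_ids] = 0.0
    return F.softmax(m + vocab_logits, dim=-1)

# Toy example: |V| = 10, candidates are token ids 3 and 7.
p = mc_mlm_probs(torch.randn(10), torch.tensor([3, 7]))
print(p)   # nonzero probability only at indices 3 and 7
\end{verbatim}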
\paragraph{\mcqa{}}
Constructing a MC-MLM probe limits the answer candidates to a single token from the word-piece vocabulary. To relax this, in two tasks we use the standard setup for answering multi-choice questions with pre-trained LMs \cite{talmor2019commonsenseqa, OpenBookQA2018}.
Given a question $\bm{q}$ and candidate answers $\bm{a}_1, \dots, \bm{a}_K$, we compute for each candidate answer $\bm{a}_k$ representations $\bm{h}^{(k)}$ from the input tokens \nl{\texttt{[CLS]} $\bm{q}$ \texttt{[SEP]} $\bm{a}_k$ \texttt{[SEP]}}. Then the probability over answers is obtained using the \emph{multi-choice QA head}:
$$
l^{(k)} = FF_\text{QA}(\reprs_1^{(k)}),
p = \text{softmax}(l^{(1)}, \dots, l^{(K)}),
$$
where $FF_\text{QA}$ is a 1-hidden layer MLP that is run over the \texttt{[CLS]} (first) token of an answer candidate and outputs a single logit. Note that in this setup the parameters of $FF_\text{QA}$ cannot be initialized using the original pre-trained LM.
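A corresponding PyTorch sketch of the \mcqa{} head (toy dimensions;
$FF_\text{QA}$ here is randomly initialized, as it must be):
\begin{verbatim}
import torch
import torch.nn.functional as F

hidden = 1024                  # hidden size of the large models
ff_qa = torch.nn.Sequential(   # 1-hidden layer MLP -> single logit
    torch.nn.Linear(hidden, hidden), torch.nn.Tanh(),
    torch.nn.Linear(hidden, 1))

# One [CLS] representation per '[CLS] q [SEP] a_k [SEP]' encoding (K = 3).
cls_reprs = [torch.randn(hidden) for _ in range(3)]
logits = torch.stack([ff_qa(h).squeeze(-1) for h in cls_reprs])
print(F.softmax(logits, dim=-1))   # distribution over the K candidates
\end{verbatim}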
\comment{
\subsection{Multi-Choice Question Answering}
Constructing a MC-MLM probe limits the answer candidates to a single token from the word-piece vocabulary. To relax this setup we also explore the \mcqa{} setup from \S\ref{sec:models}.
In MC-QA, we phrase the task as a question, letting answer candidates be arbitrary strings, which provides ample expressivity \cite{gardner2019question}. In Table~\ref{tab:intro}, \textsc{Property conjunction} and \textsc{Encyc. Comparison} serve as examples for this setup.
For \agecomp{} we use the same task in \mcqa{} setup, Figure~\ref{fig:controls-example}F shows the learning curves. Because in \mcqa{}, the network $\MLPQA{}$ cannot be initialized by pre-trained weights, it is impossible to obtain meaningful zero-shot results, and more training examples are needed to train $\MLPQA{}$.
Still, the trends observed in \mcmlm{} remain, with \robertal{} achieving best performance with the fewest examples.
}
\comment{
\begin{figure}[t]
\includegraphics[width=\columnwidth]{figures/models.png}
\caption{\mcmlm{} setup. A few vocabulary tokens are possible outputs; the MLP is initialized with pre-trained weights.
}~\label{fig:models}
\end{figure}
}
\subsection{Baseline Models}
To provide a lower bound on the performance of pre-trained LMs, we introduce two baseline models with only non-contextualized representations.
\paragraph{\mlmbaseline{}}
This serves as a lower-bound for the \mcmlm{} setup.
The input to $FF_\text{MLM}(\cdot)$ is the hidden representation $\reprs \in \mathbb{R}^{1024}$ (for large models). To obtain a similar architecture with non-contextualized representations, we concatenate the first $20$ tokens of each example, representing each token with a $50$-dimensional \textsc{GloVe} vector \cite{pennington2014glove}, and pass this $1000$-dimensional representation of the input through $FF_\text{MLM}$, exactly like in \mcmlm{}. In all probes, phrases are limited to 20 tokens. If there are fewer than 20 tokens in the input, we zero-pad it.
\paragraph{\mcqa{} baseline}
This serves as a lower-bound for \mcqa{}.
We use the ESIM architecture over \textsc{GloVe}
representations, which is known to provide a strong model when the input is a pair of text fragments \cite{chen2017enhanced}. We adapt the architecture to the multi-choice setup using the procedure proposed by \newcite{zellers2018swag}. Each phrase and candidate answer are passed as a sequence of tokens \texttt{`[CLS] phrase [SEP] answer [SEP]'} to the model. The contextualized representation of the \texttt{[CLS]} token is linearly projected to a single logit. The logits for candidate answers are passed through a softmax layer to obtain probabilities, and the argmax is selected as the model prediction.
\section{Controlled Experiments}
\label{sec:controlled_experiments}
We now describe the experimental design and controls used to interpret the results.
We use the \agecomp{} task as a running example, where models need to compare the numeric value of ages.
\subsection{Zero-shot Experiments with \mcmlm{}}
\label{subsec:zero_shot}
Fine-tuning pre-trained LMs makes it hard to disentangle what is captured by the original representations and what was learned during fine-tuning. Thus, ideally, one should test LMs using the pre-trained weights \emph{without}
fine-tuning \cite{linzen2016agreement,goldberg2019assessing}.
The \mcmlm{} setup, which uses a pre-trained MLM head, achieves exactly that. One only needs to design the task as a statement with a single masked token and $K$ possible output tokens. For example, in \agecomp{}, we chose the phrasing \nl{A \texttt{AGE-1} year old person is \texttt{[MASK]} than me in age, If I am a \texttt{AGE-2} year old person.}, where \texttt{AGE-1} and \texttt{AGE-2} are replaced with different integers, and
the possible answers are \nl{younger} and \nl{older}. Beyond this, no training is needed, and the original representations are tested.
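For concreteness, the zero-shot protocol can be sketched with the
HuggingFace \texttt{transformers} API as follows; the model identifier
and the assumption that each candidate is a single word-piece are ours:
\begin{verbatim}
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForMaskedLM.from_pretrained("roberta-large").eval()
# Leading spaces give the word-initial BPE tokens; we assume each
# candidate maps to a single word-piece.
cand_ids = [tok(c, add_special_tokens=False).input_ids[0]
            for c in (" younger", " older")]

def predict(age1, age2):
    text = (f"A {age1} year old person is {tok.mask_token} than me in "
            f"age, If I am a {age2} year old person.")
    enc = tok(text, return_tensors="pt")
    pos = (enc.input_ids[0] == tok.mask_token_id).nonzero().item()
    with torch.no_grad():
        logits = model(**enc).logits[0, pos]
    return "younger" if logits[cand_ids[0]] > logits[cand_ids[1]] else "older"

pairs = [(a, b) for a in range(15, 39) for b in range(15, 39) if a != b]
acc = sum(predict(a, b) == ("younger" if a < b else "older")
          for a, b in pairs) / len(pairs)
print(f"zero-shot accuracy: {acc:.1%}")
\end{verbatim}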
Figure~\ref{fig:controls-example}A provides an example of such zero-shot evaluation. Different values are assigned to \texttt{AGE-1} and \texttt{AGE-2}, and the pixel is colored when the model predicts \nl{younger}. Accuracy (acc.) is measured as the proportion of cases when the model output is correct.
The performance of \bertwwm{}, is on the left (blue), and of \robertal{} on the right (green). The results in Figure~\ref{fig:controls-example}A and Table~\ref{tab:agecomparison} show that \robertal{} compares numbers correctly (98\% acc.), \bertwwm{} achieves higher than random acc. (70\% acc.), while \bertl{} is random (50\% acc.).
The performance of \mlmbaseline{} is also random,
as the $\MLP{}$ weights are randomly initialized.
We note that picking the statement for each task was done through manual experimentation. We tried multiple phrasings \cite{jiangHowCanWe2019} and chose the one that achieves the highest average zero-shot accuracy across all tested LMs.
A case in point ...
Thus, if a model performs well, one can infer that it has the tested reasoning skill. However, failure does not entail that the reasoning skill is missing, as it is possible that there is a problem with the lexical-syntactic construction we picked.
\subsection{Learning Curves}
Despite the advantages of zero-shot evaluation, performance of a model might be adversely affected by mismatches between the language the pre-trained LM was trained on and the language of the examples in our tasks \cite{jiangHowCanWe2019}.
To tackle this, we fine-tune models with a small number of examples.
We assume that if the LM representations are useful for a task, it will require few examples to overcome the language mismatch and achieve high performance. In most
cases, we train with $N \in \{62, 125, 250, 500, 1K, 2K, 4K\}$ examples.
To account for optimization instabilities, we fine-tune several times with different seeds,
and report average accuracy across seeds.
The representations $\reprs{}$ are fixed during fine-tuning, and we only fine-tune the parameters of $\MLP{}$.
\begin{table}[t]
\centering
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{l|c|cc|cc|cc}
Model & Zero & \multicolumn{2}{c|}{$\MLP{}$} & \multicolumn{2}{c|}{\linear{}}
& \multicolumn{2}{c}{\langsenses{}} \\
\toprule
& shot &\smetric{}&\maxmetric{}&\smetric{}&\maxmetric{}&\pertlangs{}& \nolangs{}\\
\midrule
RoBERTa-L & 98 & 98 & 100 & 97 & 100 & 31 & 51 \\
BERT-WWM & 70 & 82 & 100 & 69 & 85 & 13 & 15 \\
BERT-L & 50 & 52 & 57 & 50 & 51 & 1 & 0 \\
\hdashline
RoBERTa-B & 68 & 75 & 91 & 69 & 84 & 24 & 25 \\
BERT-B & 49 & 49 & 50 & 50 & 50 & 0 & 0 \\
\hdashline
Baseline & 49 & 58 & 79 & - & - & 0 & 0 \\
\end{tabular}}
\caption{\agecomp{} results. Accuracy over two answer candidates (random is 50\%). \langsenses{} are the \langsense{} controls, \pertlangs{} is \pertlanguage{} and \nolangs{} is \nolanguage{}. The baseline row is \mlmbaseline{}. }
\label{tab:agecomparison}
\end{table}
\paragraph{Evaluation and learning-curve metrics}
Learning curves are informative, but inspecting many learning curves can be difficult. Thus, we summarize them using two aggregate statistics.
We report: (a) \textsc{Max}, i.e., the maximal accuracy on the learning curve, used to estimate how well the model can handle the task given the limited amount of examples. (b) The metric \smetric{}, which is a weighted average of accuracies across the learning curve, where higher weights are given to points where $N$ is small.\footnote{
We use the decreasing weights
$W=(0.23, 0.2, 0.17, 0.14, 0.11, 0.08, 0.07)$. }
\smetric{} is related to the area under the accuracy curve, and to the online code metric, proposed by \citet{yogatama2019learning,blier2018description}. The linearly decreasing weights emphasize our focus on performance given little training data, as they highlight what was encoded by the model \emph{before} fine-tuning.
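A small sketch of the two aggregate statistics (the accuracy values here
are hypothetical):
\begin{verbatim}
# Max and the weighted average S over a 7-point learning curve,
# N = 62, 125, 250, 500, 1K, 2K, 4K.
W = [0.23, 0.20, 0.17, 0.14, 0.11, 0.08, 0.07]  # decreasing, sum = 1

def curve_metrics(acc):
    return max(acc), sum(w * a for w, a in zip(W, acc))

# Hypothetical curve that starts near random (50%) and saturates:
print(curve_metrics([52, 60, 75, 88, 95, 98, 98]))  # -> (98, 74.18)
# The early low-N points pull S well below Max.
\end{verbatim}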
For \agecomp{}, the solid lines in Figure~\ref{fig:controls-example}B illustrate the learning curves of \robertal{} and \bertwwm{}, and Table~\ref{tab:agecomparison} shows the aggregate statistics. We fine-tune the model by replacing \texttt{AGE-1} and \texttt{AGE-2} with values between $43$ and $120$, but test with values between $15$ and $38$, to guarantee that the model \emph{generalizes} to values unseen at training time. Again, we see that the representations learned by \robertal{} are already equipped with the knowledge necessary for solving this task.
\subsection{Controls}
Comparing learning curves tells us which model learns from fewer examples.
However, since highly-parameterized MLPs, as used in LMs, can approximate a wide range of functions, it is difficult to determine whether performance is tied to the knowledge acquired at pre-training time, or to the process of fine-tuning itself. We present controls that attempt to disentangle these two factors.
\begin{figure}[t]
\includegraphics[width=\columnwidth]{figures/controls_example_5.png}
\caption{An illustration of our evaluation protocol. We compare \robertal{} (green) and \bertwwm{} (blue), controls are in dashed lines and markers are described in the legends. Zero-shot evaluation on the top left, \texttt{AGE-1} is \nl{younger} (in color) vs. \nl{older} (in white) than \texttt{AGE-2}.}~\label{fig:controls-example}
\end{figure}
\paragraph{Are LMs sensitive to the language input?}
We are interested in whether pre-trained representations reason over language examples. Thus, a natural control is to present the reasoning task \emph{without} language and inspect performance. If the learning curve of a model does not change when the input is perturbed or even mostly deleted, then the model shows low \emph{language sensitivity} and the pre-trained representations do not explain the probe performance.
This approach is related to work by \newcite{hewitt2019designing}, who proposed a control task, where the learning curve of a model is compared to a learning curve when words are associated with random behaviour. We propose two control tasks:
\noindent
\textbf{\emph{\textsc{No Language} control}}
We remove all input tokens, except for \texttt{[MASK]} and the \emph{arguments} of the task, i.e., the tokens that are necessary for computing the output.
In \agecomp{}, an example is reduced to the phrase \nl{24 [MASK] 55}, where the candidate answers are the words \nl{blah}, for \nl{older}, and \nl{ya}, for \nl{younger}.
If the learning curve is similar to when the full example is given (low language sensitivity), then the LM is not strongly using the language input.
The dashed lines in Figure~\ref{fig:controls-example}B illustrate the learning curves in \nolanguage{}: \robertal{} (green) shows high language sensitivity, while \bertwwm{} (blue) has lower language sensitivity. This suggests it handles this task partially during fine-tuning. Table~\ref{tab:agecomparison} paints a similar picture, where the metric we use is identical to \smetric{}, except that instead of averaging accuracies, we average the \emph{difference} in accuracies between the standard model and \nolanguage{} (rounding negative numbers to zero). For \robertal{} the value is $51$, because \robertal{} gets almost $100\%$ acc. in the presence of language, and is random ($50\%$ acc.) without language.
\noindent
\textbf{\emph{\textsc{Perturbed Language} control}}
A more targeted language control, is to replace words that are central for the reasoning task with nonsense words. Specifically, we pick key words in each probe template, and replace these words by randomly sampling from a list of 10 words that carry relatively limited meaning.\footnote{The list of substitutions is: \nl{blah}, \nl{ya}, \nl{foo}, \nl{snap}, \nl{woo}, \nl{boo}, \nl{da}, \nl{wee}, \nl{foe} and \nl{fee}.}
For example, in \textsc{Property Conjunction}, we can replace the word \nl{and} with the word \nl{blah} to get the example \nl{What is located at hand \textbf{blah} used for writing?}.
If the learning curve of \pertlanguage{} is similar to the original example, then the model does not utilize the pre-trained representation of \nl{and} to solve the task, and may not capture its effect on the semantics of the statement.
Targeted words change from probe to probe.
For example, in \agecomp{}, the targeted words are \nl{age} and \nl{than}, resulting in examples like \nl{A \texttt{AGE-1} year old person is [MASK] \textbf{blah} me in \textbf{da}, If I am a \texttt{AGE-2} year old person.}.
Figure~\ref{fig:controls-example}C shows the learning curves for \robertal{} and \bertwwm{}, where solid lines corresponds to the original examples and dashed lines are the \pertlanguage{} control.
Despite this minor perturbation, the performance of \robertal{} substantially decreases, implying that the model needs the input. Conversely, the performance of \bertwwm{} decreases only moderately.
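A sketch of the control construction (word list and template as above;
splitting on whitespace is a simplification):
\begin{verbatim}
import random

NONSENSE = ["blah", "ya", "foo", "snap", "woo", "boo",
            "da", "wee", "foe", "fee"]

def perturb(statement, targeted_words):
    # Perturbed Language: swap each targeted word for a sampled
    # low-content token, leaving the rest of the template intact.
    return " ".join(random.choice(NONSENSE) if w in targeted_words else w
                    for w in statement.split())

print(perturb("A 24 year old person is [MASK] than me in age , "
              "If I am a 55 year old person .", {"than", "age"}))
\end{verbatim}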
\paragraph{Does a linear transformation suffice?}
In MC-MLM, the representations $\reprs{}$ are fixed, and only the pre-trained parameters of $\MLP{}$ are fine-tuned. As a proxy for measuring ``how far" the representations are from solving a task, we fix the weights of the first layer of $\MLP{}$, and only train the final layer. Succeeding in this setup means that only a linear transformation of $\reprs{}$ is required.
Table~\ref{tab:agecomparison} shows the performance of this setup (\linear{}), compared to $\MLP{}$.
\paragraph{Why is \mcmlm{} preferred over \mcqa{}?}
Constructing a MC-MLM probe limits the answer candidates to a single token from the word-piece vocabulary. To relax this setup we also explore the \mcqa{} setup from \S\ref{sec:models}.
In MC-QA, we phrase the task as a question, letting answer candidates be arbitrary strings, which provides ample expressivity \cite{gardner2019question} and facilitates probing questions involving complex and commonsense reasoning \cite{talmor2019commonsenseqa,gardner2019making,talmor2018web}.
In Table~\ref{tab:intro}, \textsc{Property Conjunction} and \textsc{Encyc. Composition} serve as examples for this setup. For \agecomp{}, we use the same task in the \mcqa{} setup.
Figure~\ref{fig:controls-example}D compares the learning curves of \mcmlm{} and \mcqa{} in \agecomp{}.
Because in \mcqa{}, the network $\MLPQA{}$ cannot be initialized by pre-trained weights, zero-shot evaluation is not meaningful, and more training examples are needed to train $\MLPQA{}$.
Still, the trends observed in \mcmlm{} remain, with \robertal{} achieving best performance with the fewest examples.
\comment{
\paragraph{Are LMs sensitive to the input distribution?}
In probes where the \emph{arguments} of the symbolic reasoning can take a range of values, we can test whether models are robust to changes in the input distribution.
In \agecomp{}, we shift ages to values that are not within a human life span: $215-230$. Figure~\ref{fig:controls-example}E shows that models are substantially affected by shift the age values. \robertal{} partially recover and achieve fair acc., but the drop in zero-shot performance illustrates that the ability of LMs to predict \nl{younger} or \nl{older} is tied to the natural distribution of ages, and the models cannot just abstractly reason about numbers in any context.
\subsection{Multi-Choice Question Answering}
Constructing a MC-MLM probe limits the answer candidates to a single token from the word-piece vocabulary. To relax this setup we also explore the \mcqa{} setup from \S\ref{sec:models}.
In MC-QA, we phrase the task as a question, letting answer candidates be arbitrary strings, which provides ample expressivity \cite{gardner2019question, }. In Table~\ref{tab:intro}, \textsc{Property conjunction} and \textsc{Encyc. Comparison} serve as examples for this setup. For \agecomp{} we use the same task in \mcqa{} setup, Figure~\ref{fig:controls-example}F shows the learning curves. Because in \mcqa{}, the network $\MLPQA{}$ cannot be initialized by pre-trained weights, it is impossible to obtain meaningful zero-shot results, and more training examples are needed to train $\MLPQA{}$.
Still, the trends observed in \mcmlm{} remain, with \robertal{} achieving best performance with the fewest examples.
}
\subsection{Baseline Tasks}
\label{sec:baseline_tasks}
We first present a few baseline tasks that we build on in later tasks and that aid in understanding the experimental setup and metrics.
\paragraph{Multi-choice language modeling}
As a sanity check, we construct a 3-way \multichoicelm{} task. For instance, an example will include the statement \nl{the film was released in the US and [MASK] well at the box office.} and $K=3$ answer candidates: \nl{recurring}, \nl{always} and the gold answer \nl{performed}.
\noindent
\bfemph{Probe Construction}
\noindent
\bfemph{results}
Table~\ref{tab:multichoicelm} presents zero-shot accuracy, \smetric{}, and \maxmetric{} for $\MLP{}$ and \linear{}. As expected, all LMs perform very well, close to 100\% accuracy. \mlmbaseline{}, which uses \textsc{GloVe} representations, achieves a maximum of 52\% accuracy (random is 33\%).
We also evaluate this task in the \mcqa{} setup (\S\ref{sec:models}). The maximum accuracy is comparable to the \mcmlm{} setup, but since the parameters of $\MLPQA{}$ need to be trained during fine-tuning, the \smetric{} is substantially lower.
\begin{table}[h]
\centering
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{l|c|cc|cc|cc}
Model & Zero & \multicolumn{2}{c|}{$\MLP{}$} & \multicolumn{2}{c|}{\linear{}}
& \multicolumn{2}{c}{\langsenses{}} \\
\toprule
& shot &\smetric{}&\maxmetric{}&\smetric{}&\maxmetric{}&\pertlangs{}& \nolangs{}\\
\midrule
RoBERTa-L & 96 & 96 & 97 & 96 & 97 & 72 & 95 \\
BERT-WWM & 98 & 98 & 99 & 98 & 98 & 86 & 98 \\
BERT-L & 98 & 98 & 98 & 92 & 98 & 77 & 95 \\
\hdashline
BERT-B & 98 & 98 & 98 & 98 & 98 & 78 & 96 \\
RoBERTa-B & 95 & 95 & 96 & 95 & 95 & 80 & 94 \\
\hdashline
Baseline & 47 & 38 & 52 & - & - & - & - \\
\end{tabular}}
\caption{\multichoicelm{}.}
\label{tab:multichoicelm}
\end{table}
\paragraph{Lexical-semantic knowledge}
\textsc{ConceptNet} \cite{speer2017conceptnet} is a Knowledge-Base (KB) that specifies semantic relations between words and concepts in English, and fine-tuning LMs on the knowledge it contains will be useful in later reasoning tasks.
Following \newcite{Bosselut2019COMETCT}, we fine-tune LMs to predict \textsc{ConceptNet} facts.
\noindent
\bfemph{Probe Construction}
\textsc{ConceptNet} contains more than 34 million triples of the form (\texttt{subject}, \texttt{predicate}, \texttt{object}). We first construct pseudo-language statements
by mapping each predicate to a natural language phrase. For example, we map the predicate \texttt{atLocation} to \nl{can usually be found at} to obtain statements like \nl{flower can usually be found at [MASK].}, masking the \texttt{object} concept. We use 15 predicates from \textsc{ConceptNet} and create two distractors by randomly choosing \texttt{object}s that occur in the context of the example's \texttt{predicate}, but with a different \texttt{subject}.
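A sketch of this construction (the predicate-to-phrase map shown is one
of the 15 we use; the triples here are toy examples):
\begin{verbatim}
import random

PHRASES = {"atLocation": "can usually be found at"}
TRIPLES = [("flower", "atLocation", "garden"),
           ("book",   "atLocation", "library"),
           ("car",    "atLocation", "street")]

def make_example(subj, pred, obj, k=2):
    statement = f"{subj} {PHRASES[pred]} [MASK]."
    # Distractors: objects that occur with the same predicate but a
    # different subject.
    pool = [o for s, p, o in TRIPLES if p == pred and s != subj and o != obj]
    return statement, obj, random.sample(pool, k)

print(make_example("flower", "atLocation", "garden"))
# e.g. ('flower can usually be found at [MASK].', 'garden',
#       ['street', 'library'])
\end{verbatim}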
\noindent
\bfemph{Results}
Table~\ref{tab:lexicalsemnatic} shows the results for this task.
Zero-shot performance is lower than \smetric{} and \maxmetric{}, implying that the LMs had to adapt to the pseudo-language of the task. \bertwwm{} achieves the highest zero-shot accuracy, 68\%, and a maximum of 80\%. \mlmbaseline{} is incapable of solving the task, which requires substantial amounts of lexical-semantic knowledge, and achieves a close to random \smetric{} of 33\% and a maximum of 38\%.
\begin{table}[h]
\centering
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{l|c|cc|cc|cc}
Model & Zero & \multicolumn{2}{c|}{$\MLP{}$} & \multicolumn{2}{c|}{\linear{}}
& \multicolumn{2}{c}{\langsenses{}} \\
\toprule
& shot &\smetric{}&\maxmetric{}&\smetric{}&\maxmetric{}&\pertlangs{}& \nolangs{}\\
\midrule
RoBERTa-L & 56 & 59 & 68 & 57 & 63 & 0 & 24 \\
BERT-WWM & 68 & 71 & 80 & 68 & 73 & 5 & 17 \\
BERT-L & 43 & 44 & 47 & 43 & 45 & 3 & 0 \\
\hdashline
BERT-B & 49 & 54 & 76 & 52 & 72 & 9 & 1 \\
RoBERTa-B & 53 & 57 & 68 & 55 & 62 & 1 & 15 \\
\hdashline
Baseline & 32 & 33 & 38 & - & - & 0 & 0 \\
\end{tabular}}
\caption{\textsc{Lexical-semantic knowledge}.}
\label{tab:lexicalsemnatic}
\end{table}
\subsection{Can LMs capture the \nl{long-tail} of Encyclopedic knowledge?}
Acquiring knowledge from LMs has received an increasing amount of attention lately \cite{logan2019barack,petroni2019language,xiong2020pretrained}.
Recent language models all use Wikipedia as part of their training data, and since it contains a large amount of encyclopedic knowledge, we are interested in the ability of LMs to capture this kind of information. Recently, \citet{petroni2019language} showed that \bert{} does indeed capture some encyclopedic facts, and achieves performance comparable to specialized KB construction systems, especially on 1-to-1 relations (e.g. \textit{capital of}). In their work, they use multiple data sources, such as triplets from Wikipedia \cite{elsahar2018t}, Google-RE,\footnote{\url{https://code.google.com/archive/p/relation-extraction-corpus/}} ConceptNet \cite{speer2017conceptnet} and SQuAD \cite{squad2016url}.
We note that some of these facts are indeed expected to be successfully completed due to very explicit cues in the query (e.g. \nl{The official language of Lithuania is [MASK].}, which is successfully completed to \textbf{Lithuanian}, or \nl{Jules de Gaultier was born in the city of [MASK].}, which is successfully completed to \textbf{Paris}, the largest city in France).
In contrast, in this probe we aim to inspect the \nl{long-tail} distribution of encyclopedic knowledge.
\bfemph{Probe Construction}
We follow \citet{petroni2019language} and use the Google-RE data triplets to query the LMs. We use the three relations from this dataset: \textit{birth-place}, \textit{birth-date} and \textit{death-place}. To construct distractors for the location relations,\footnote{We only use the city locations as answers.} we use cities from the same country, whereas for the date relations,\footnote{Which consist of the year when the event happened.} the distractors are drawn from a window of 2 years around the gold answer.
We split the data into train/dev as follows: countries with fewer than 8 possible cities go into training; for the rest, we sort the cities by population size such that the smaller cities are used in the dev split and the larger ones in training. The distractors are drawn from cities with a similar population size (from a window of size 2 on each side).
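A minimal sketch of the distractor construction and the population-based split described above; the exact split proportion within large countries is an assumption, as only the ordering is specified.
\begin{verbatim}
import random

def date_distractors(gold_year, window=2):
    """Two distractor years from a +/-window range, excluding the gold."""
    pool = [y for y in range(gold_year - window, gold_year + window + 1)
            if y != gold_year]
    return random.sample(pool, 2)

def split_cities(cities_by_country, min_cities=8):
    """Small countries go to train; otherwise smaller cities go to dev."""
    train, dev = [], []
    for country, cities in cities_by_country.items():
        if len(cities) < min_cities:
            train.extend(cities)
        else:
            ordered = sorted(cities, key=lambda c: c["population"])
            half = len(ordered) // 2  # assumed split point
            dev.extend(ordered[:half])
            train.extend(ordered[half:])
    return train, dev
\end{verbatim}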
\bfemph{Results}
The results of this experiment are summarized in Table \ref{tab:res-encyclopedic}. We note that the results across models are low, do not improve with training, and demonstrate low sensitivity to language, implying that LMs contain some encyclopedic knowledge, but cannot acquire more of it by training on this task.
\begin{table}[h]
\centering
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{l|c|cc|cc|cc}
Model & Zero & \multicolumn{2}{c|}{$\MLP{}$} & \multicolumn{2}{c|}{\linear{}}
& \multicolumn{2}{c}{\langsenses{}} \\
\toprule
& shot &\smetric{}&\maxmetric{}&\smetric{}&\maxmetric{}&\pertlangs{}& \nolangs{}\\
\midrule
RoBERTa-L & 48 & 47 & 49 & 46 & 49 & 0 & 1 \\
BERT-WWM & 52 & 50 & 54 & 50 & 54 & 1 & 2 \\
BERT-L & 46 & 46 & 50 & 45 & 49 & 0 & 0 \\
\hdashline
BERT-B & 48 & 47 & 51 & 46 & 51 & 0 & 1 \\
RoBERTa-B & 48 & 47 & 51 & 47 & 49 & 0 & 2 \\
\hdashline
Baseline & 33 & 33 & 41 & - & - & 0 & 0 \\
\end{tabular}}
\caption{Results for the \textsc{Encyclopedic knowledge} probe. Accuracy over three answer candidates (random is 33\%).}
\label{tab:res-encyclopedic}
\end{table}
\subsection{Do LMs know \nl{always} from \nl{often}?}
Adverbial modifiers such as \nl{always}, \nl{sometimes} or \nl{never}, tell us about the quantity or frequency of events~\cite{lewis1975adverbs, barwise1981generalized}.
Anecdotally, when \robertal{} predicts a completion for the phrase \nl{Cats usually drink [MASK].}, the top completion is \nl{coffee}, a frequent drink in the literature it was trained on, rather than \nl{water}.
However, humans know that \nl{Cats NEVER drink coffee.}
Prior work explored retrieving the correct
quantifier for a statement \cite{herbelot2015building,NIPS2017_6871}. Here we adapt this task to a masked language model.
\noindent
\bfemph{The \textsc{\nl{Always-Never}} task}
We present statements, such as
\nl{rhinoceros [MASK] have fur}, with answer candidates, such as \nl{never} or \nl{always}.
To succeed, the model must know the frequency of an event, and map the appropriate adverbial modifier to that representation.
Linguistically, the task tests how well the model predicts frequency quantifiers (or adverbs) modifying predicates in different statements \cite{lepore2007donald}.
\noindent
\bfemph{Probe Construction}
We manually craft templates that contain one slot for a \texttt{subject} and another for an \texttt{object}, e.g. \nl{\texttt{FOOD-TYPE} is \texttt{[MASK]} part of a \texttt{ANIMAL}'s diet.} (more examples available in Table~\ref{tab:model_analysis}).
The \texttt{subject} slot is instantiated with concepts of the correct semantic type, according to the \texttt{isA} predicate in \textsc{ConceptNet}. In the example above we will find concepts that are of type \texttt{FOOD-TYPE} and \texttt{ANIMAL}. The \texttt{object} slot is then instantiated by forming masked templates of the form \nl{meat is part of a [MASK]'s diet.} and \nl{cats have [MASK].} and letting \bertl{} produce the top-20 completions. We filter out completions that do not have the correct semantic type according to the \texttt{isA} predicate.
Finally, we crowdsource gold answers using Amazon Mechanical Turk. Annotators were presented with an instantiated template (with the masked token removed), such as \nl{Chickens have horns.} and chose the correct answer from $5$ candidates: \nl{never}, \nl{rarely}, \nl{sometimes}, \nl{often} and \nl{always}.\footnote{The class distribution over the answers is \nl{never}:24\%, \nl{rarely}:10\%, \nl{sometimes}:34\%, \nl{often}:7\% and \nl{always}:23\%.}
We collected 1,300 examples with 1,000 used for training and 300 for evaluation.
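The \texttt{object}-slot instantiation step can be sketched with a HuggingFace fill-mask pipeline as below; the \texttt{isA} type check is stubbed out, and the model name is an assumed checkpoint for \bertl{}.
\begin{verbatim}
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-large-uncased", top_k=20)

def instantiate_objects(template, has_correct_type):
    """Top-20 BERT completions for [MASK] that pass the isA type filter."""
    completions = unmasker(template)
    return [c["token_str"].strip() for c in completions
            if has_correct_type(c["token_str"].strip())]

# e.g. instantiate_objects("meat is part of a [MASK]'s diet.", is_animal)
\end{verbatim}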
We note that some examples in this probe are similar to \textsc{Objects Comparison} (line 4 in Table~\ref{tab:coffeecatsresults}). However, the model must also determine if sizes can be overlapping, which is the case in 56\% of the examples.
\noindent
\bfemph{Results}
Table~\ref{tab:coffeecatsresults} shows the results, where random accuracy is 20\%, and majority vote accuracy is
35.5\%.
In the zero-shot setup, acc. is less than random. In the $\MLP{}$ and \linear{} setups, acc. reaches a maximum of 57\% in \bertwwm{}, but \mlmbaseline{} obtains similar acc., implying that the task was mostly tackled at fine-tuning time, and the pre-trained representations did not contribute much.
Language controls strengthen this hypothesis, where performance hardly drops in the \pertlanguage{} control
and slightly drops in the \nolanguage{} control.
Figure~\ref{fig:intro-fig}B compares the learning curve of \robertal{} with controls.
\mlmbaseline{} consistently outperforms \robertal{}, which displays only minor language sensitivity, suggesting that pre-training is not effective for solving this task.
\begin{table}[h]
\centering
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{l|c|cc|cc|cc}
Model & Zero & \multicolumn{2}{c|}{$\MLP{}$} & \multicolumn{2}{c|}{\linear{}}
& \multicolumn{2}{c}{\langsenses{}} \\
\toprule
& shot &\smetric{}&\maxmetric{}&\smetric{}&\maxmetric{}&\pertlangs{}& \nolangs{}\\
\midrule
RoBERTa-L & 14 & 44 & 55 & 26 & 41 & 3 & 5 \\
BERT-WWM & 10 & 46 & 57 & 32 & 52 & 2 & 3 \\
BERT-L & 22 & 45 & 55 & 36 & 50 & 3 & 8 \\
\hdashline
BERT-B & 11 & 44 & 56 & 30 & 52 & 3 & 8 \\
RoBERTa-B & 15 & 43 & 53 & 25 & 44 & 2 & 6 \\
\hdashline
Baseline & 20 & 46 & 56 & - & - & 1 & 2 \\
\end{tabular}}
\caption{Results for the \textsc{Always-Never} probe. Accuracy over five answer candidates (random is 20\%).}
\label{tab:coffeecatsresults}
\end{table}
\paragraph{Analysis}
We generated predictions from the best model, \bertwwm{}, and show analysis results in
Table~\ref{tab:model_analysis}. For reference, we only selected examples where human majority vote led to the correct answer, and thus the majority vote is near 100\% on these examples.
Although the answers \nl{often} and \nl{rarely} are the gold answer in 19\% of the training data, the LMs predict these answers in less than 1\% of examples.
In the template \nl{A dish with \texttt{FOOD-TYPE} [MASK] contains \texttt{FOOD-TYPE}.} the LM always predicts \nl{sometimes}.
Overall, we find the models do not perform well. Reporting bias \cite{gordon2013reporting} may play a role in the inability to correctly determine that \nl{A rhinoceros NEVER has fur.} Interestingly, behavioral research conducted on blind humans shows that they exhibit a similar bias \cite{kim2019knowledge}.
\begin{table}[h]
\centering
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{l|l|l|l}
\textbf{Question} & \textbf{Answer} & \textbf{Distractor} & \textbf{Acc.} \\
\hline
\emph{A dish with \underline{pasta} [MASK] contains \underline{pork} . } & \textbf{sometimes} & sometimes & 75 \\
\hdashline
\emph{\underline{stool} is [MASK] placed in the \underline{box} . } & never & \textbf{sometimes} & 68 \\
\hdashline
\emph{A \underline{lizard} [MASK] has a \underline{wing} . } & never & \textbf{always} & 61 \\
\hdashline
\emph{A \underline{pig} is [MASK] smaller than a \underline{cat} . } & rarely & \textbf{always} & 47 \\
\hdashline
\emph{\underline{meat} is [MASK] part of a \underline{elephant}'s diet .} & never & \textbf{sometimes} & 41 \\
\hdashline
\emph{A \underline{calf} is [MASK] larger than a \underline{dog} .} & \textbf{sometimes} & often & 30 \\
\hdashline
\end{tabular}}
\caption{Error analysis for \textsc{Always-Never}. Model predictions are in bold, and Acc. shows acc. per template.}
\label{tab:model_analysis}
\end{table}
\subsection{Can LMs perform robust comparison?}
\label{sec:num-comparison}
Comparing two numeric values requires representing the values and performing the comparison operations. In \S\ref{sec:controlled_experiments} we saw the \agecomp{} task, in which ages of two people were compared. We found that \robertal{} and to some extent \bertwwm{} were able to handle this task, performing well under the controls.
We expand on this to related comparison tasks and perturbations that assess the sensitivity of LMs to the particular context and to the numerical value.
\paragraph{Is \robertal{} comparing numbers or ages?}
\robertal{} obtained zero-shot acc. of 98\% in \agecomp{}. But is it robust? We test this using perturbations to the task and present the results in Figure~\ref{fig:age-comp-pert}. Figure~\ref{fig:age-comp-pert}A corresponds to the experiment from~\S\ref{sec:controlled_experiments},
where we observed that \robertal{} predicts \nl{younger} (blue pixels) and \nl{older} (white pixels) almost perfectly.
To test whether \robertal{} can compare ages given the birth year rather than the age, we use the statement \nl{A person born in \texttt{YEAR-1} is [MASK] than me in age, If i was born in \texttt{YEAR-2}.} Figure~\ref{fig:age-comp-pert}B shows that
it correctly flips \nl{younger} to \nl{older} (76\% acc.), reasoning that a person born in 1980 is older than one born in 2000.
However, when evaluated on the exact same statement, but with values corresponding to typical \emph{ages} instead of years (Figure~\ref{fig:age-comp-pert}D), \robertal{} obtains an acc. of 12\%, consistently outputting the opposite prediction.
With ages as values and not years, it seems to disregard the language, performing the comparison based on the values only.
We will revisit this tendency in \S\ref{subsec:conjunction}.
Symmetrically, Figure~\ref{fig:age-comp-pert}C shows results when numeric values of ages are swapped with typical years of birth.
\robertal{} is unable to handle this, always predicting \nl{older}.\footnote{We observed that in neutral contexts models have a slight preference for \nl{older} over \nl{younger}, which could potentially explain this result.}
This emphasizes that the model is sensitive to the argument values.
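A sketch of how one cell of such a perturbation grid can be probed in the zero-shot setup, assuming a HuggingFace fill-mask pipeline; note that RoBERTa uses \texttt{<mask>} rather than \texttt{[MASK]}, and candidate scoring via the \texttt{targets} argument is one possible implementation, not necessarily the one used in the paper.
\begin{verbatim}
from transformers import pipeline

unmasker = pipeline("fill-mask", model="roberta-large")

TEMPLATE = ("A person born in {y1} is <mask> than me in age, "
            "If i was born in {y2}.")

def predict(y1, y2, candidates=("younger", "older")):
    """Return whichever candidate RoBERTa scores higher for the mask."""
    scores = {}
    for cand in candidates:
        # Leading space: RoBERTa tokens for mid-sentence words carry one.
        out = unmasker(TEMPLATE.format(y1=y1, y2=y2), targets=[" " + cand])
        scores[cand] = out[0]["score"]
    return max(scores, key=scores.get)

print(predict(1980, 2000))  # expected: "older"
\end{verbatim}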
\begin{figure}[h]
\includegraphics[width=\columnwidth]{figures/age_comparison_perturbations_3.png}
\caption{\textsc{Age comparison} perturbations. Left side graphs are age-comparison, right side graphs are age comparison by birth-year. In the bottom row, the values of ages are swapped with birth-years and vice versa. In blue pixels the model predicts \nl{older}, in white
\nl{younger}. (A) is the correct answer.}~\label{fig:age-comp-pert}
\end{figure}
\paragraph{Can Language Models compare object sizes?}
Comparing physical properties of objects requires knowledge of the numeric value of the property and the ability to perform comparison. Previous work has shown that such knowledge can be extracted from text and images \cite{bagherinezhad2016elephants,forbes2017verb,yang2018extracting,elazar-etal-2019-large,pezzelle2019red}. Can LMs do the same?
\noindent
\bfemph{Probe Construction}
We construct statements of the form \nl{The size of a \texttt{OBJ-1} is usually much [MASK] than the size of a \texttt{OBJ-2}.}, where the candidate answers are \nl{larger} and \nl{smaller}.
To instantiate the two objects, we manually sample from a list of objects from two domains: animals (e.g. \nl{camel}) and general objects (e.g. \nl{sun}), and use the first domain for training and the second for evaluation.
We bucket objects by the numerical value of their \emph{size}, based on their median value in \textsc{DoQ} \cite{elazar-etal-2019-large}, and then manually fix any errors.
This probe requires prior knowledge of object sizes and understanding of a comparative language construction.
Overall, we collected 127 and 35 objects for training and development respectively. We automatically instantiate object slots using objects that are in the same bucket.
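A sketch of the bucketing and instantiation steps; the size values and bucket edges are illustrative placeholders rather than the actual \textsc{DoQ} entries, and we pair only cross-bucket objects so that the gold label is unambiguous.
\begin{verbatim}
# Illustrative median sizes in meters (placeholders for the DoQ values).
MEDIAN_SIZE = {"nail": 0.03, "pen": 0.14, "laptop": 0.35, "camel": 2.2}
BUCKET_EDGES = [0.01, 0.1, 1.0, 10.0]  # assumed bucket boundaries

def bucket(size):
    return sum(size >= edge for edge in BUCKET_EDGES)

def statement(obj1, obj2):
    return ("The size of a {} is usually much [MASK] "
            "than the size of a {}.".format(obj1, obj2))

pairs = [(a, b) for a in MEDIAN_SIZE for b in MEDIAN_SIZE
         if bucket(MEDIAN_SIZE[a]) != bucket(MEDIAN_SIZE[b])]
examples = [(statement(a, b),
             "smaller" if MEDIAN_SIZE[a] < MEDIAN_SIZE[b] else "larger")
            for a, b in pairs]
\end{verbatim}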
\noindent
\bfemph{Results}
\robertal{} excels in this task, starting from 84\% acc. in the zero-shot setup and reaching \maxmetric{} of 91\% (Table \ref{tab:numbercomparison}). Other models start with random performance and are roughly on par with \mlmbaseline{}. \robertal{} shows sensitivity to the language, suggesting that the ability to compare object sizes is encoded in it.
\noindent
\bfemph{Analysis}
Table \ref{tab:obj-comparison-matrix} shows results of running \robertal{} in the zero-shot setup over pairs of objects, where we sampled a single object from each bucket.
Objects are ordered by their size from small to large. Overall, \robertal{} correctly predicts \nl{larger} below the diagonal and \nl{smaller} above it. Interestingly, errors are concentrated around the diagonal, where differences in size are more fine-grained, and in comparisons with \nl{sun}, where the model mostly emits \nl{larger}, ignoring the rest of the statement.
\begin{table}[h]
\centering
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{l|c|cc|cc|cc}
Model & Zero & \multicolumn{2}{c|}{$\MLP{}$} & \multicolumn{2}{c|}{\linear{}}
& \multicolumn{2}{c}{\langsenses{}} \\
\toprule
& shot &\smetric{}&\maxmetric{}&\smetric{}&\maxmetric{}&\pertlangs{}& \nolangs{}\\
\midrule
RoBERTa-L & 84 & 88 & 91 & 86 & 90 & 22 & 26 \\
BERT-WWM & 55 & 65 & 81 & 63 & 77 & 9 & 9 \\
BERT-L & 52 & 56 & 66 & 53 & 56 & 5 & 4 \\
\hdashline
BERT-B & 56 & 55 & 72 & 53 & 56 & 2 & 3 \\
RoBERTa-B & 50 & 61 & 74 & 57 & 66 & 8 & 0 \\
\hdashline
Baseline & 46 & 57 & 74 & - & - & 2 & 1 \\
\end{tabular}}
\caption{Results for the \textsc{Objects Comparison} probe. Accuracy over two answer candidates (random is 50\%).}
\label{tab:numbercomparison}
\end{table}
\begin{table}[h]
\centering
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{lllllllll}
\toprule
{} & nail & pen & laptop & table & house & airplane & city & sun \\
\midrule
nail & - & \textcolor[HTML]{009F3D}{smaller} & \textcolor[HTML]{009F3D}{smaller} & \textcolor[HTML]{009F3D}{smaller} & \textcolor[HTML]{009F3D}{smaller} & \textcolor[HTML]{009F3D}{smaller} & \textcolor[HTML]{009F3D}{smaller} & \textcolor[HTML]{009F3D}{smaller} \\
pen & \textcolor[HTML]{009F3D}{smaller} & - & \textcolor[HTML]{009F3D}{smaller} & \textcolor[HTML]{009F3D}{smaller} & \textcolor[HTML]{009F3D}{smaller} & \textcolor[HTML]{009F3D}{smaller} & \textcolor[HTML]{009F3D}{smaller} & \textcolor[HTML]{009F3D}{smaller} \\
laptop & \textcolor[HTML]{DF0024}{larger} & \textcolor[HTML]{DF0024}{larger} & - & \textcolor[HTML]{DF0024}{larger} & \textcolor[HTML]{009F3D}{smaller} & \textcolor[HTML]{009F3D}{smaller} & \textcolor[HTML]{009F3D}{smaller} & \textcolor[HTML]{009F3D}{smaller} \\
table & \textcolor[HTML]{DF0024}{larger} & \textcolor[HTML]{DF0024}{larger} & \textcolor[HTML]{DF0024}{larger} & - & \textcolor[HTML]{009F3D}{smaller} & \textcolor[HTML]{DF0024}{larger} & \textcolor[HTML]{009F3D}{smaller} & \textcolor[HTML]{DF0024}{larger} \\
house & \textcolor[HTML]{DF0024}{larger} & \textcolor[HTML]{DF0024}{larger} & \textcolor[HTML]{DF0024}{larger} & \textcolor[HTML]{DF0024}{larger} & - & \textcolor[HTML]{DF0024}{larger} & \textcolor[HTML]{009F3D}{smaller} & \textcolor[HTML]{DF0024}{larger} \\
airplane & \textcolor[HTML]{DF0024}{larger} & \textcolor[HTML]{DF0024}{larger} & \textcolor[HTML]{DF0024}{larger} & \textcolor[HTML]{DF0024}{larger} & \textcolor[HTML]{DF0024}{larger} & - & \textcolor[HTML]{DF0024}{larger} & \textcolor[HTML]{DF0024}{larger} \\
city & \textcolor[HTML]{DF0024}{larger} & \textcolor[HTML]{DF0024}{larger} & \textcolor[HTML]{DF0024}{larger} & \textcolor[HTML]{DF0024}{larger} & \textcolor[HTML]{DF0024}{larger} & \textcolor[HTML]{DF0024}{larger} & - & \textcolor[HTML]{DF0024}{larger} \\
sun & \textcolor[HTML]{DF0024}{larger} & \textcolor[HTML]{DF0024}{larger} & \textcolor[HTML]{DF0024}{larger} & \textcolor[HTML]{DF0024}{larger} & \textcolor[HTML]{DF0024}{larger} & \textcolor[HTML]{DF0024}{larger} & \textcolor[HTML]{DF0024}{larger} & - \\
\bottomrule
\end{tabular}}
\caption{\robertal{} Zero-shot \textsc{Size comp.} predictions.}
\label{tab:obj-comparison-matrix}
\end{table}
\subsection{Do LMs Capture Negation?}
Ideally, the presence of the word \nl{not} should affect the prediction of a masked token.
However, several recent works have shown that LMs
do not take into account the presence of negation in sentences \cite{ettinger2019bert,nie2019adversarial,kassner2019negated}.
Here, we add to this literature, by probing whether LMs can properly use negation in the context of \emph{synonyms} vs. \emph{antonyms}.
\paragraph{Do LMs Capture the Semantics of Antonyms?}
In the statement \nl{He was [MASK] fast, he was very slow.}, [MASK] should be replaced with \nl{not}, since \nl{fast} and \nl{slow} are antonyms.
Conversely, in \nl{He was [MASK] fast, he was very \textbf{rapid}}, the LM should choose a word like \nl{very} in the presence of the synonyms \nl{fast} and \nl{rapid}.
An LM that correctly distinguishes between \nl{not} and \nl{very}, demonstrates knowledge of the taxonomic relations
as well as the ability to reason about the usage of negation in this context.
\noindent
\bfemph{Probe Construction}
We sample synonym and antonym pairs from
\textsc{ConceptNet} \cite{speer2017conceptnet} and \textsc{WordNet} \cite{fellbaum1998wordnet}, and use Google Books Corpus to choose pairs that occur frequently in language. We make use of the statements introduced above. Half of the examples are synonym pairs and half antonyms, generating 4,000 training examples and 500 for evaluation.
Linguistically, we test whether the model appropriately predicts a negation vs. intensification adverb based on synonymy/antonymy relations between nouns, adjectives and verbs.
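Antonym pairs of the kind used here can be sampled, for instance, from \textsc{WordNet} via NLTK; the following sketches that sampling step (the frequency filtering against the Google Books Corpus is omitted).
\begin{verbatim}
from nltk.corpus import wordnet as wn

def antonym_pairs(pos=wn.ADJ):
    """Collect (word, antonym) pairs from WordNet for a part of speech."""
    pairs = set()
    for synset in wn.all_synsets(pos):
        for lemma in synset.lemmas():
            for ant in lemma.antonyms():
                pairs.add((lemma.name(), ant.name()))
    return pairs

def make_statement(w1, w2, are_antonyms):
    gold = "not" if are_antonyms else "very"
    return "He was [MASK] {}, he was very {}.".format(w1, w2), gold
\end{verbatim}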
\noindent
\bfemph{Results}
\robertal{} shows higher than chance acc. of 75\% in the zero-shot setting, as well as high \langsense{}
(Table~\ref{tab:negation}). \mlmbaseline{}, equipped with GloVe word embeddings, is able to reach a comparable \smetric{} of 67\% and \maxmetric{} of 80\%,
suggesting that the LMs do not have a large advantage on this task.
\begin{table}[h]
\centering
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{l|c|cc|cc|cc}
Model & Zero & \multicolumn{2}{c|}{$\MLP{}$} & \multicolumn{2}{c|}{\linear{}}
& \multicolumn{2}{c}{\langsenses{}} \\
\toprule
& shot &\smetric{}&\maxmetric{}&\smetric{}&\maxmetric{}&\pertlangs{}& \nolangs{}\\
\midrule
RoBERTa-L & 75 & 85 & 91 & 77 & 84 & 14 & 21 \\
BERT-WWM & 57 & 70 & 81 & 61 & 73 & 5 & 6 \\
BERT-L & 51 & 70 & 82 & 58 & 74 & 5 & 9 \\
\hdashline
BERT-B & 52 & 68 & 81 & 59 & 74 & 2 & 9 \\
RoBERTa-B & 57 & 74 & 87 & 63 & 78 & 10 & 16 \\
\hdashline
Baseline & 47 & 67 & 80 & - & - & 0 & 0 \\
\end{tabular}}
\caption{Results for the \textsc{Antonym Negation} probe. Accuracy over two answer candidates (random is 50\%).}
\label{tab:negation}
\end{table}
\subsection{Can LMs handle conjunctions of facts?}
\label{subsec:conjunction}
We present two probes where a model should understand the reasoning expressed by the word \emph{and}.
\noindent
\paragraph{Property conjunction}
\textsc{ConceptNet} is a knowledge base that describes the properties of millions of concepts through its \texttt{(subject, predicate, object)} triples. We use \textsc{ConceptNet} to test whether LMs can find concepts for which a conjunction of properties holds. For example, we create a question like \nl{What is located in a street and is related to octagon?}, where the correct answer is \nl{stop sign}. Because answers are drawn from \textsc{ConceptNet}, they often consist of more than one word-piece, thus examples are generated in the MC-QA setup.
\noindent
\bfemph{Probe Construction}
To construct an example, we first choose a concept that has two properties in \textsc{ConceptNet}, where a property is a (\texttt{predicate}, \texttt{object}) pair.
For example, \texttt{stop sign} has the properties \texttt{(atLocation, street)} and \texttt{(relatedTo, octagon)}. Then, we create two distractor concepts, for which only one property holds: \texttt{car} has the property \texttt{(atLocation, street)}, and \texttt{math} has the property \texttt{(relatedTo, octagon)}. Given the answer concept, the distractors and the properties, we can automatically generate pseudo-language questions and answers by mapping 15 \textsc{ConceptNet} predicates to natural language questions. We split examples such that concepts in training and evaluation are disjoint.
This linguistic structure tests whether the LM can answer questions with conjoined predicates, requiring world knowledge of object and relations.
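The example-generation logic can be sketched as follows; the input is a toy list of \textsc{ConceptNet}-style triples, and distractors are concepts satisfying exactly one of the two properties.
\begin{verbatim}
from collections import defaultdict
import random

def conjunction_examples(triples):
    """triples: iterable of (subject, predicate, object) rows."""
    props = defaultdict(set)    # concept -> {(predicate, object)}
    holders = defaultdict(set)  # (predicate, object) -> {concepts}
    for s, p, o in triples:
        props[s].add((p, o))
        holders[(p, o)].add(s)
    examples = []
    for concept, ps in props.items():
        if len(ps) < 2:
            continue
        p1, p2 = random.sample(sorted(ps), 2)
        d1 = holders[p1] - holders[p2] - {concept}  # satisfies only p1
        d2 = holders[p2] - holders[p1] - {concept}  # satisfies only p2
        if d1 and d2:
            examples.append((concept, p1, p2, min(d1), min(d2)))
    return examples
\end{verbatim}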
\noindent
\bfemph{Results}
In MC-QA, we fine-tune the entire network and do not freeze any representations. Zero-shot cannot be applied since the weights of $\MLPQA{}$ are untrained.
All LMs consistently improve as the number of examples increases, reaching a \maxmetric{} of 57--87\% (Table~\ref{tab:property_conjunction}). The high \maxmetric{} results suggest that the LMs generally have the required pre-existing knowledge. The \smetric{} of most models is slightly higher than that of the baseline (\maxmetric{} of 49\% and \smetric{} of 39\%). \langsense{} is slightly higher than zero in some models.
Overall, results suggest the LMs do have some capability in this task, but proximity to baseline results, and low language selectivity make it hard to clearly determine if it existed before fine-tuning.
To further validate our findings, we construct a parallel version of our data, where we replace the word \nl{and} with the phrase \nl{but not}. In this version, the correct answer is the first distractor from the original experiment, for which one property holds and the other does not. Overall, we observe a similar trend (with an increase in performance across all models): \maxmetric{} results are high (79--96\%), indicating that the LMs hold the relevant information, but the improvement over the ESIM baseline and the language sensitivity are low. For brevity, we omit the detailed numerical results.
\begin{table}[h]
\begin{center}
\resizebox{0.7\columnwidth}{!}{
\begin{tabular}{l|cc|cc}
Model & \multicolumn{2}{c|}{\textsc{LearnCurve}} & \multicolumn{2}{c}{\langsenses{}} \\
\toprule
&\smetric{}&\maxmetric{}& \pertlangs{}& \nolangs{} \\
\midrule
RoBERTa-L & 49 & 87 & 2 & 4 \\
BERT-WWM & 46 & 80 & 0 & 1 \\
BERT-L & 48 & 75 & 2 & 5 \\
\hdashline
BERT-B & 47 & 71 & 2 & 1 \\
RoBERTa-B & 40 & 57 & 0 & 0 \\
\hdashline
Baseline & 39 & 49 & 0 & 0 \\
\end{tabular}
}
\end{center}
\caption{Results for the \textsc{Property Conjunction} probe. Accuracy over three answer candidates (random is 33\%).}
\label{tab:property_conjunction}
\end{table}
\paragraph{Taxonomy conjunction}
A different operation is to find properties that are shared by two concepts. Specifically, we test whether LMs can find the mutual hypernym of a pair of concepts. For example,
\nl{A germ and a human are both a type of [MASK].}, where the answer is \nl{organism}.
\noindent
\bfemph{Probe Construction}
We use \textsc{ConceptNet} and \textsc{WordNet} to find pairs of concepts and their hypernyms, keeping only pairs that frequently appear in the \textsc{Google Book Corpus}. The example template is \nl{A \texttt{ENT-1} and a \texttt{ENT-2} are both a type of [MASK].}, where \texttt{ENT-1} and \texttt{ENT-2} are replaced with entities that have a common hypernym, which is the gold answer. Distractors are concepts that are hypernyms of \texttt{ENT-1}, but not \texttt{ENT-2}, or vice versa. For evaluation, we keep all examples related to food and animal taxonomies, e.g., \nl{A beer and a ricotta are both a type of [MASK].}, where the answer is \nl{food} and the distractors are \nl{cheese} and \nl{alcohol}.
This phrasing requires the model to handle conjoined co-hyponyms in the subject position, based on lexical relations of hyponymy / hypernymy between nouns.
For training, we use examples from different taxonomic trees, such that the concepts in the training and evaluation sets are disjoint.
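A sketch of how such examples can be derived from \textsc{WordNet} via NLTK: the gold answer is the lowest common hypernym, and distractors are hypernyms of only one of the two concepts (the \textsc{Google Book Corpus} frequency filter is omitted).
\begin{verbatim}
from nltk.corpus import wordnet as wn

def taxonomy_example(word1, word2):
    s1 = wn.synsets(word1, pos=wn.NOUN)[0]
    s2 = wn.synsets(word2, pos=wn.NOUN)[0]
    gold = s1.lowest_common_hypernyms(s2)[0]
    h1 = {h for path in s1.hypernym_paths() for h in path} - {gold}
    h2 = {h for path in s2.hypernym_paths() for h in path} - {gold}
    name = lambda syn: syn.lemmas()[0].name().replace("_", " ")
    stmt = "A {} and a {} are both a type of [MASK].".format(word1, word2)
    distractors = [name(d) for d in (h1 - h2) | (h2 - h1)]
    return stmt, name(gold), distractors

# e.g. taxonomy_example("germ", "human") -> gold close to "organism"
\end{verbatim}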
\noindent
\bfemph{Results}
Table~\ref{tab:taxonomy} shows that models' zero-shot acc. is substantially higher than random (33\%), but overall even after fine-tuning acc. is at most 59\%.
However, the \nolanguage{} control shows some language sensitivity, suggesting that some models have pre-existing capabilities.
\begin{table}[h]
\centering
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{l|c|cc|cc|cc}
Model & Zero & \multicolumn{2}{c|}{$\MLP{}$} & \multicolumn{2}{c|}{\linear{}}
& \multicolumn{2}{c}{\langsenses{}} \\
\toprule
& shot &\smetric{}&\maxmetric{}&\smetric{}&\maxmetric{}&\pertlangs{}& \nolangs{}\\
\midrule
RoBERTa-L & 45 & 50 & 56 & 45 & 46 & 0 & 3 \\
BERT-WWM & 46 & 48 & 52 & 46 & 46 & 0 & 7 \\
BERT-L & 53 & 54 & 57 & 53 & 54 & 0 & 15 \\
\hdashline
BERT-B & 47 & 48 & 50 & 47 & 47 & 0 & 12 \\
RoBERTa-B & 46 & 50 & 59 & 47 & 49 & 0 & 18 \\
\hdashline
Baseline & 33 & 33 & 47 & - & - & 1 & 2 \\
\end{tabular}}
\caption{Results for the \textsc{Taxonomy Conjunction} probe.
Accuracy over three answer candidates (random is 33\%).
}
\label{tab:taxonomy}
\end{table}
\noindent
\bfemph{Analysis}
Analyzing the errors of \robertal{}, we found that a typical error is predicting for \nl{A crow and a horse are both a type of [MASK].} that the answer is \nl{bird}, rather than \nl{animal}.
Specifically, LMs prefer hypernyms that are closer in terms of edge distance on the taxonomy tree. Thus, a crow is first a bird, and then an animal. We find that when distractors are closer
to one of the entities in the statement than the gold answer, the models will consistently (80\%) choose the distractor, ignoring the second entity in the phrase.
\section{The oLMpic Games}
We now move to describe the research questions and various probes used to answer these questions. For each task we describe how it was constructed, show results via a table as described in the controls section, and present an analysis.
Our probes are mostly targeted towards symbolic reasoning skills (Table~\ref{tab:intro}). We examine the ability of language models to compare numbers, to understand whether an object has a conjunction of properties, and to perform multi-hop composition of facts, among others. However, since we generate examples automatically from existing resources, some probes also require background knowledge, such as sizes of objects. Moreover, as explained in \S\ref{subsec:zero_shot}, we test models on a manually-picked phrasing that might interact with the language abilities of the model. Thus, when a model succeeds, this is evidence that it has the necessary skill, but failure could also be attributed to issues with background knowledge or linguistic abilities. In each probe, we explicitly mention what knowledge and language abilities are necessary.
\input{04d_comparison}
\input{04c_coffeecats}
\input{04e_negation}
\begin{figure}[t]
\includegraphics[width=\columnwidth]{figures/learning_curves_3.png}
\caption{Learning curves in two tasks. For each task, the best performing LM is shown alongside the \nolanguage{} control and the baseline model.}~\label{fig:learning-curves}
\end{figure}
\input{04f_conjunction}
\input{04g_composition.tex}
\subsection{Can LMs do multi-hop reasoning?}
Questions that require multi-hop reasoning, such as \nl{Who is the director of the movie about a WW2 pacific medic?},
have recently drawn attention
\cite{yang2018hotpotqa,welbl2017constructing,talmor2018web} as a challenging task for contemporary models.
But do pre-trained LMs have some internal mechanism to handle such questions?
To address this question, we create two probes: one for compositional question answering, and another that uses a multi-hop setup, building upon our observation (\S\ref{sec:controlled_experiments}) that some LMs can compare ages.
\paragraph{Encyclopedic composition}
We construct questions such as \nl{When did the band where John Lennon played first form?}. Here answers require multiple tokens, thus we use the MC-QA setup.
\noindent
\bfemph{Probe Construction}
We use the following three templates: (1) \nl{when did the band where \texttt{ENT} played first form?}, (2) \nl{who is the spouse of the actor that played in \texttt{ENT}?} and (3) \nl{where is the headquarters of the company that \texttt{ENT} established located?}.
We instantiate \texttt{ENT} using information from \textsc{Wikidata} \cite{vrandecic2014wikidata}, choosing challenging distractors. For example, for template 1, the distractor will be a year close to the gold answer, and for template 3, it will be a city in the same country as the gold answer city.
This linguistic structure introduces a (restrictive) relative clause that requires (a) correctly resolving the reference of the noun modified by the relative clause, and (b) subsequently answering the full question.
To solve the question, the model must have knowledge of all single-hop encyclopedic facts required for answering it. Thus, we first fine-tune the model on all such facts (e.g. \nl{What company did Bill Gates establish? Microsoft})
from the training and evaluation set, and then fine-tune on multi-hop composition.
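A sketch of the template instantiation; the entities and gold answers would come from \textsc{Wikidata}, and the year-distractor window is an assumption based on the description above.
\begin{verbatim}
TEMPLATES = {
    "band": "when did the band where {ent} played first form?",
    "spouse": "who is the spouse of the actor that played in {ent}?",
    "company": ("where is the headquarters of the company "
                "that {ent} established located?"),
}

def year_distractors(gold_year, window=2):
    """Assumed: years near the gold answer, excluding the gold year."""
    return [gold_year - window, gold_year + 1]

def make_question(kind, ent, gold, distractors):
    return {"question": TEMPLATES[kind].format(ent=ent),
            "candidates": [gold] + distractors,
            "answer": gold}
\end{verbatim}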
\noindent
\bfemph{Results}
Results are summarized in Table \ref{tab:encyclopedic-composition-res}. All models achieve low acc. on this task, and the baseline performs best with a \maxmetric{} of 54\%. The language sensitivity of all models is small, and \mlmbaseline{} performs slightly better (Figure~\ref{fig:learning-curves}B), suggesting that the LMs are unable to resolve compositional questions, and also struggle to learn to do so with supervision.
\begin{table}[h]
\begin{center}
\resizebox{0.7\columnwidth}{!}{
\begin{tabular}{l|cc|cc}
Model & \multicolumn{2}{c|}{\textsc{LearnCurve}} & \multicolumn{2}{c}{\langsenses{}} \\
\toprule
&\smetric{}&\maxmetric{}& \pertlangs{}& \nolangs{} \\
\midrule
RoBERTa-L & 42 & 50 & 0 & 2 \\
BERT-WWM & 47 & 53 & 1 & 4 \\
BERT-L & 45 & 51 & 1 & 4 \\
\hdashline
BERT-B & 43 & 48 & 0 & 3 \\
RoBERTa-B & 41 & 46 & 0 & 0 \\
\hdashline
ESIM-Baseline & 49 & 54 & 3 & 0 \\
\end{tabular}
}
\end{center}
\caption{Results for \textsc{Encyclopedic composition}. Accuracy over three answer candidates (random is 33\%).
}
\label{tab:encyclopedic-composition-res}
\end{table}
\paragraph{Multi-hop Comparison}
Multi-hop reasoning can be found in many common structures in natural language. In the phrase \nl{When comparing a 83 year old, a 63 year old and a 56 year old, the [MASK] is oldest}, one must find the oldest person, and then refer to their ordering: first, second or third.
\noindent
\bfemph{Probe Construction}
We use the template above, treating the ages as arguments,
and \nl{first}, \nl{second}, and \nl{third} as answers. Age arguments are in the same ranges as in \agecomp{}.
Linguistically, the task requires predicting the subject of sentences whose predicate is in a superlative form, where the relevant information is contained in a ``when''-clause. The sentence also contains nominal ellipsis, also known as fused-heads \cite{elazar_head}.
\begin{table}[h]
\centering
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{l|c|cc|cc|cc}
Model & Zero & \multicolumn{2}{c|}{$\MLP{}$} & \multicolumn{2}{c|}{\linear{}}
& \multicolumn{2}{c}{\langsenses{}} \\
\toprule
& shot &\smetric{}&\maxmetric{}&\smetric{}&\maxmetric{}&\pertlangs{}& \nolangs{}\\
\midrule
RoBERTa-L & 29 & 36 & 49 & 31 & 41 & 2 & 2 \\
BERT-WWM & 33 & 41 & 65 & 32 & 36 & 6 & 4 \\
BERT-L & 33 & 32 & 35 & 31 & 34 & 0 & 3 \\
\hdashline
BERT-B & 32 & 33 & 35 & 33 & 35 & 0 & 2 \\
RoBERTa-B & 33 & 32 & 40 & 29 & 33 & 0 & 0 \\
\hdashline
Baseline & 34 & 35 & 48 & - & - & 1 & 0 \\
\end{tabular}}
\caption{Results for \textsc{Compositional Comparison}. Accuracy over three answer candidates (random is 33\%).
}
\label{tab:compositional_comparison}
\end{table}
\noindent
\bfemph{Results}
All three possible answers appear in \robertal{}'s top-10 zero-shot predictions, indicating that the model sees the answers as viable choices. Although successful in \agecomp{}, \robertal{} performs poorly in this probe (Table \ref{tab:compositional_comparison}), with zero-shot acc. that is almost random, \smetric{} slightly above random, \maxmetric{} lower than that of \mlmbaseline{} (48\%), and close to zero language sensitivity. All LMs seem to be learning the task during probing.
Although \bertwwm{} was able to partially solve the task with a \maxmetric{} of 65\% when approaching 4,000 training examples, the models do not appear to show multi-step capability in this task.
\section{Medals}
\begin{table}[h]
\centering
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{l|c|c|c|c|c}
& RoBERTa & BERT & BERT & RoBERTa & BERT \\
& Large & WWM & Large & Base & Base \\
\hline
\textsc{Always-Never} & & & & & \\
\hdashline
\textsc{Age Comparison} & \checkmark & \checkmark & & \semicheck & \\
\textsc{Objects Compar.} & \checkmark & \semicheck & & & \\
\hdashline
\textsc{Antonym Neg.} & \checkmark & & \semicheck & \semicheck& \\
\hdashline
\textsc{Property Conj.} & \semicheck & \semicheck & & & \\
\textsc{Taxonomy Conj.} & \semicheck & \semicheck & & \semicheck & \\
\hdashline
\textsc{Encyc. Comp.} & & & & & \\
\textsc{Multi-hop Comp.} & & & & &
\end{tabular}}
\caption{The oLMpic games' medals, summarizing per-task success. \checkmark{} indicates that the LM achieved high accuracy considering controls and baselines; \semicheck{} indicates partial success.}
\label{tab:medals}
\end{table}
We summarize the results of the oLMpic Games in Table \ref{tab:medals}.
Generally, the LMs did not demonstrate strong pre-training capabilities in these symbolic reasoning tasks.
\bertwwm{} showed partial success in a few tasks, whereas
\robertal{} showed high performance in \textsc{Age Comparison}, \textsc{Objects Comparison} and \textsc{Antonym Negation}, and emerges as the most promising LM. However, when perturbed, \robertal{} failed to demonstrate consistent generalization and abstraction.
\paragraph{Analysis of correlation with pre-training data}
A possible hypothesis for why a particular model is successful on a particular task is that the language of the probe is more common in the corpus it was pre-trained on. To check this, we compute the unigram distribution over the training corpora of both \bert{} and \roberta{}. We then compute the average log probability of the development set under these two unigram distributions for each task (taking into account only content words). Finally, we compute the correlation between which model performs better on a probe (\robertal{} vs. \bertwwm{}) and which training corpus induces a higher average log probability on that probe. We find that the Spearman correlation is 0.22, hinting that the unigram distributions do not fully explain the difference in performance.
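This analysis can be sketched as below, assuming per-probe token lists and unigram counts over each pre-training corpus are available; \texttt{spearmanr} comes from SciPy.
\begin{verbatim}
import math
from collections import Counter
from scipy.stats import spearmanr

def avg_unigram_logprob(dev_tokens, counts, total):
    """Average log prob of dev-set content words under a unigram LM."""
    toks = [t for t in dev_tokens if counts[t] > 0]
    return sum(math.log(counts[t] / total) for t in toks) / len(toks)

# counts_roberta = Counter(roberta_corpus_tokens), etc.
# Per probe: +1 if RoBERTa-L beats BERT-WWM (else -1), and +1 if the
# RoBERTa corpus yields the higher average log prob (else -1); then
# rho, _ = spearmanr(model_wins, corpus_wins)
\end{verbatim}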
\section{Discussion}
We presented eight different tasks for evaluating the reasoning abilities of models, alongside an evaluation protocol for disentangling pre-training from fine-tuning. We found that even models that have an identical structure and objective function differ not only quantitatively but also qualitatively. Specifically, \robertal{} has shown reasoning abilities that are absent from other models. Thus, with appropriate data and optimization, models can acquire from an LM objective skills that might seem intuitively surprising.
However, when current LMs succeed in a reasoning task, they do not do so through abstraction and composition as humans perceive it. The abilities are context-dependent: if ages are compared, then the numbers should be typical ages. Discrepancies from the training distribution lead to large drops in performance. Lastly, the performance of LMs in many reasoning tasks is poor.
Our work sheds light on some of the blind spots of current LMs. We will release our code and data to help researchers evaluate the reasoning abilities of models, aid the design of new probes, and guide future work on pre-training, objective functions and model design for endowing models with capabilities they are currently lacking.
\subsection{Can LMs handle ``set-negation"?}
The set of objects that have some property is the complement of the set of objects that do not. We test whether LMs can handle this case, building on the \textsc{Lexical-semantic knowledge} probe from \S\ref{sec:baseline_tasks}.
\noindent
\bfemph{Probe Construction}
Our setup is identical to \textsc{Lexical-semantic knowledge}, where we build statements from \texttt{subject predicate object} triples, except that in 50\% of the examples we add the word \nl{not}: \nl{[MASK] is \textbf{not} a prerequisite of eating.} Then we sample two concepts from \textsc{ConceptNet} that have the described property, which serve as distractors (\nl{food}, \nl{chewing}), and one concept that does not (\nl{ticket}), which is the gold answer.
\noindent
\bfemph{Results}
Results on this task are only slightly above random, which is 33\%. \bertwwm{} achieves the highest zero-shot accuracy, \smetric{} and \maxmetric{} scores, of 47\%, 47\% and 51\%, respectively. In this setting, fine-tuning only slightly improves performance. Error analysis shows that negation examples account for the majority of errors, and the language controls show that there is very little sensitivity to the words in the input.
These results are aligned with previous results showing that LMs struggle with negation.
\begin{table}[h]
\centering
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{l|c|cc|cc|cc}
Model & Zero & \multicolumn{2}{c|}{$\MLP{}$} & \multicolumn{2}{c|}{\linear{}}
& \multicolumn{2}{c}{\langsenses{}} \\
\toprule
& shot &\smetric{}&\maxmetric{}&\smetric{}&\maxmetric{}&\pertlangs{}& \nolangs{}\\
\midrule
RoBERTa-L & 38 & 39 & 44 & 39 & 42 & 0 & 0 \\
BERT-WWM & 47 & 47 & 51 & 47 & 49 & 0 & 1 \\
BERT-L & 40 & 40 & 42 & 40 & 43 & 0 & 0 \\
\hdashline
BERT-B & 41 & 41 & 45 & 41 & 43 & 0 & 5 \\
RoBERTa-B & 42 & 42 & 45 & 42 & 45 & 0 & 3 \\
\hdashline
Baseline & 35 & 34 & 38 & - & - & 0 & 0 \\
\end{tabular}}
\caption{Results for the \textsc{Set Negation} probe. Accuracy over three answer candidates (random is 33\%).}
\label{tab:set-negation}
\end{table}
\section{Supplementary Experiments}
\input{04a_baseline_tasks}
\input{04b_encyclopedic}
\input{08a_set_negation}
\section{Introduction}
Spectral line surveys have revealed that high-mass star-forming
regions are rich reservoirs of molecules from simple diatomic species
to complex and larger molecules (e.g.,
\citealt{schilke1997b,hatchell1998b,comito2005,bisschop2007}).
However, studies have rarely been undertaken to investigate the
chemical evolution during massive star formation from the earliest
evolutionary stages, i.e., from High-Mass Starless Cores (HMSCs) and
High-Mass Cores with embedded low- to intermediate-mass protostars
destined to become massive stars, via High-Mass Protostellar Objects
(HMPOs) to the final stars that are able to produce Ultracompact H{\sc
ii} regions (UCH{\sc ii}s, see \citealt{beuther2006b} for a recent
description of the evolutionary sequence). The first two evolutionary
stages are found within so-called Infrared Dark Clouds (IRDCs). While
for low-mass stars the chemical evolution from early molecular
freeze-out to more evolved protostellar cores is well studied (e.g.,
\citealt{bergin1997,dutrey1997,pavlyuchenkov2006,joergensen2007}),
it is far from clear whether similar evolutionary patterns are present
during massive star formation.
To better understand the chemical evolution of high-mass star-forming
regions we initiated a program to investigate the chemical properties
from IRDCs to UCH{\sc ii}s from an observational and theoretical
perspective. We start with single-dish line surveys toward a large
sample obtaining their basic characteristics, and then perform
detailed studies of selected sources using interferometers on smaller
scales. These observations are accompanied by theoretical modeling of
the chemical processes. Long-term goals are the chemical
characterization of the evolutionary sequence in massive star
formation, the development of chemical clocks, and the identification
of molecules as astrophysical tools to study the physical processes
during different evolutionary stages. Here, we present an initial
study of the reactive radical ethynyl (C$_2$H) combining single-dish
and interferometer observations with chemical modeling. Although
C$_2$H was previously observed in low-mass cores and Photon Dominated
Regions (e.g., \citealt{millar1984,jansen1995}), so far it was not
systematically investigated in the framework of high-mass star
formation.
\section{Observations}
\label{obs}
The 21 massive star-forming regions were observed with the Atacama
Pathfinder Experiment (APEX) in the 875\,$\mu$m window in fall 2006.
We observed 1\,GHz from 338 to 339\,GHz and 1\,GHz in the image
sideband from 349 to 350\,GHz. The spectral resolution was
0.1\,km\,s$^{-1}$, but we smoothed the data to
$\sim$0.9\,km\,s$^{-1}$. The average system temperatures were around
200\,K, each source had on-source integration times between 5 and 16
min. The data were converted to main-beam temperatures with forward
and beam efficiencies of 0.97 and 0.73, respectively
\citep{belloche2006}. The average $1\sigma$ rms was 0.4\,K. The main
spectral features of interest are the C$_2$H lines around 349.4\,GHz
with upper level excitation energies $E_u/k$ of 42\,K (line blends of
C$_2$H$(4_{5,5}-3_{4,4})$ \& C$_2$H$(4_{5,4}-3_{4,3})$ at
349.338\,GHz, and C$_2$H$(4_{4,4}-3_{3,3})$ \&
C$_2$H$(4_{4,3}-3_{3,2})$ at 349.399\,GHz). The beam size was $\sim
18''$.
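For reference, this conversion assumes the standard single-dish relation between the antenna temperature $T_A^*$ and the main-beam temperature $T_{\rm mb}$; with the quoted efficiencies it amounts to
\begin{equation*}
T_{\rm mb} = \frac{F_{\rm eff}}{B_{\rm eff}}\, T_A^* = \frac{0.97}{0.73}\, T_A^* \approx 1.33\, T_A^*.
\end{equation*}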
The original Submillimeter Array (SMA) C$_2$H data toward the
HMPO\,18089-1732 were first presented in \citet{beuther2005c}. There
we used the compact and extended configurations resulting in good
images for all spectral lines except of C$_2$H. For this project, we
re-worked on these data only using the compact configuration. Because
the C$_2$H emission is distributed on larger scales (see
\S\ref{results}), we were now able to derive a C$_2$H image. The
integration range was from 32 to 35\,km\,s$^{-1}$, and the achieved
$1\sigma$ rms of the C$_2$H image was 450\,mJy\,beam$^{-1}$. For more
details on these observations see \citet{beuther2005c}.
\section{Results}
\label{results}
The sources were selected to cover all evolutionary stages from IRDCs
via HMPOs to UCH{\sc ii}s. We derived our target list from the samples
of \citet{klein2005,fontani2005,hill2005,beltran2006}. Table
\ref{sample} lists the observed sources, their coordinates, distances,
luminosities and a first order classification into the evolutionary
sub-groups IRDCs, HMPOs and UCH{\sc ii}s based on the previously
available data. Although this classification is only based on a
limited set of data, here we are just interested in general
evolutionary trends. Hence, the division into the three main classes
is sufficient.
Figure \ref{spectra} presents sample spectra toward one source of each
evolutionary group. While we see several CH$_3$OH lines as well as
SO$_2$ and H$_2$CS toward some of the HMPOs and UCH{\sc ii}s but not
toward the IRDCs, the surprising result of this comparison is the
presence of the C$_2$H lines around 349.4\,GHz toward all source types
from young IRDCs via the HMPOs to evolved UCH{\sc ii}s. Table
\ref{sample} lists the peak brightness temperatures, the integrated
intensities and the FWHM line-widths of the C$_2$H line blend at
349.399\,GHz. The separation of the two lines of 1.375\,MHz already
corresponds to a line-width of 1.2\,km\,s$^{-1}$. We have three C$_2$H
non-detections (2 IRDCs and 1 HMPO), however, with no clear trend with
respect to the distances or the luminosities (the latter comparison is
only possible for the HMPOs). While IRDCs are on average colder than
more evolved sources, and have lower brightness temperatures, the
non-detections are more probably due to the relatively low sensitivity
of the short observations (\S\ref{obs}). Hence, the data indicate
that the C$_2$H lines are detected independent of the evolutionary
stage of the sources in contrast to the situation with other
molecules. When comparing the line-widths between the different
sub-groups, one finds only a marginal difference between the IRDCs and
the HMPOs (the average $\Delta v$ of the two groups are 2.8 and
3.1\,km\,s$^{-1}$). However, the UCH{\sc ii}s exhibit significantly
broader line-widths with an average value of 5.5\,km\,s$^{-1}$.
Intrigued by this finding, we wanted to understand the C$_2$H spatial
structure during the different evolutionary stages. Therefore, we
went back to a dataset obtained with the Submillimeter Array toward
the hypercompact H{\sc ii} region IRAS\,18089-1732 with a much higher
spatial resolution of $\sim 1''$ \citep{beuther2005c}. Although this
hypercompact H{\sc ii} region belongs to the class of HMPOs, it is
already in a relatively evolved stage and has formed a hot core with a
rich molecular spectrum. \citet{beuther2005c} showed the spectral
detection of the C$_2$H lines toward this source, but they did not
present any spatially resolved images. To recover large-scale
structure, we restricted the data to those from the compact SMA
configuration (\S\ref{obs}). With this refinement, we were able to
produce a spatially resolved C$_2$H map of the line blend at
349.338\,GHz with an angular resolution of $2.9''\times 1.4''$
(corresponding to an average linear resolution of 7700\,AU at the
given distance of 3.6\,kpc). Figure \ref{18089} presents the
integrated C$_2$H emission with a contour overlay of the 860\,$\mu$m
continuum source outlining the position of the massive protostar. In
contrast to almost all other molecular lines that peak along with the
dust continuum \citep{beuther2005c}, the C$_2$H emission surrounds the
continuum peak in a shell-like fashion.
\section{Discussion and Conclusions}
To understand the observations, we conducted a simple chemical
modeling of massive star-forming regions. A 1D cloud model with a mass
of 1200\,M$_\sun$, an outer radius of 0.36\,pc and a power-law density
profile ($\rho\propto r^p$ with $p=-1.5$) is the initially assumed
configuration. Three cases are studied: (1) a cold isothermal cloud
with $T=10$\,K, (2) $T=50$\,K, and (3) a warm model with a temperature
profile $T\propto r^q$ with $q=-0.4$ and a temperature at the outer
radius of 44\,K. The cloud is illuminated by the interstellar UV
radiation field (IRSF, \citealt{draine1978}) and by cosmic ray
particles (CRP). The ISRF attenuation by single-sized $0.1\mu$m
silicate grains at a given radius is calculated in a plane-parallel
geometry following \citet{vandishoeck1988}. The CRP ionization rate is
assumed to be $1.3\times 10^{-17}$~s$^{-1}$ \citep{spitzer1968}. The
gas-grain chemical model by \citet{vasyunin2008} with the desorption
energies and surface reactions from \citet{garrod2006} is used.
Gas-phase reaction rates are taken from RATE\,06 \citep{woodall2007},
initial abundances were adopted from the ``low metal'' set of
\citet{lee1998}.
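To make the model setup concrete, the assumed radial profiles can be evaluated with a short script; the density normalization below is a placeholder, since the paper fixes the total mass to 1200\,M$_\sun$ instead.
\begin{verbatim}
import numpy as np

R_OUT = 0.36        # outer radius [pc]
p, q = -1.5, -0.4   # density and temperature power-law exponents
T_OUT = 44.0        # temperature at the outer radius [K], warm model

def density(r, rho_out=1.0):   # rho_out is an illustrative normalization
    return rho_out * (r / R_OUT) ** p

def temperature(r):
    return T_OUT * (r / R_OUT) ** q

print(temperature(R_OUT))         # 44 K at the cloud edge
print(temperature(0.01 * R_OUT))  # ~277 K toward the center
\end{verbatim}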
Figure \ref{model} presents the C$_2$H abundances for the three models
at two different time steps: (a) 100\,yr, and (b) in a more evolved
stage after $5\times10^4$\,yr. The C$_2$H abundance is high toward the
core center right from the beginning of the evolution, similar to
previous models (e.g., \citealt{millar1985,herbst1986,turner1999}).
During the evolution, the C$_2$H abundance stays approximately
constant at the outer core edges, whereas it decreases by more than
three orders of magnitude in the center, except for the cold $T=10$~K
model. The C$_2$H abundance profiles for all three models show
similar behavior.
The chemical evolution of ethynyl is determined by relative removal
rates of carbon and oxygen atoms or ions into molecules like CO, OH,
H$_2$O. Light ionized hydrocarbons CH$^+_{\rm n}$ (n=2..5) are quickly
formed by radiative association of C$^+$ with H$_2$ and hydrogen
addition reactions: C$^+$ $\rightarrow$ CH$_2^+$ $\rightarrow$
CH$_3^+$ $\rightarrow$ CH$_5^+$. The protonated methane reacts with
electrons, CO, C, OH, and more complex species at a later stage and
forms methane. The CH$_4$ molecules undergo reactive collisions with
C$^+$, producing C$_2$H$_2^+$ and C$_2$H$_3^+$. An alternative way to
produce C$_2$H$_2^+$ is the dissociative recombination of CH$_5^+$
into CH$_3$ followed by reactions with C$^+$. Finally, C$_2$H$_2^+$
and C$_2$H$_3^+$ dissociatively recombine into CH, C$_2$H, and
C$_2$H$_2$. The major removal for C$_2$H is either the direct
neutral-neutral reaction with O that forms CO, or the same reaction
but with heavier carbon chain ions that are formed from C$_2$H by
subsequent insertion of carbon. At later times, depletion and
gas-phase reactions with more complex species may enter into this
cycle. At the cloud edge the interstellar UV radiation
instantaneously dissociates CO despite its self-shielding,
re-enriching the gas with elemental carbon.
The transformation of C$_2$H into CO and other species proceeds
efficiently in dense regions, in particular in the ``warm'' model
where endothermic reactions result in rich molecular complexity of the
gas (see Fig.~\ref{model}). In contrast, in the ``cold'' 10\,K model
gas-grain interactions and surface reactions become important. As a
result, a large fraction of oxygen is locked in water ice that is hard
to desorb ($E_{\rm des} \sim 5500$~K), while half of the elemental
carbon goes to volatile methane ice ($E_{\rm des} \sim 1300$~K). Upon
CRP heating of dust grains, this leads to much higher gas-phase
abundance of C$_2$H in the cloud core for the cold model compared to
the warm model. The effect is not that strong for less dense regions
at larger radii from the center.
Since the C$_2$H emission is anti-correlated with the dust continuum
emission in the case of IRAS\,18089-1732 (Fig.\,\ref{18089}), we do
not have the H$_2$ column densities to quantitatively compare the
abundance profiles of IRAS\,18089-1732 with our model. However, data
and model allow a qualitative comparison of the spatial structures.
Estimating an exact evolutionary time for IRAS\,18089-1732 is hardly
possible, but based on the strong molecular line emission, its high
central gas temperatures and the observed outflow-disk system
\citep{beuther2004a,beuther2004b,beuther2005c}, an approximate age of
$5\times10^4$\,yr appears reasonable. Although dynamical and chemical
times are not necessarily exactly the same, in high-mass star
formation they should not differ too much: following the models by
\citet{mckee2003} or \citet{krumholz2006b}, the luminosity rises
strongly right from the onset of collapse which can be considered as a
starting point for the chemical evolution. At the same time disks and
outflows evolve, which should hence have similar time-scales. The
diameter of the shell-like C$_2$H structure in IRAS\,18089-1732 is
$\sim 5''$ (Fig.\,\ref{18089}), or $\sim$9000\,AU in radius at the
given distance of 3.6\,kpc. This value is well matched by the modeled
region with decreased C$_2$H abundance (Fig.\,\ref{model}). Although
in principle optical depths and/or excitation effects could mimic the
C$_2$H morphology, we consider this as unlikely because the other
observed molecules with many different transitions all peak toward the
central submm continuum emission in IRAS\,18089-1732
\citep{beuther2005c}. Since C$_2$H is the only exception in that rich
dataset, chemical effects appear the more plausible explanation.
The fact that we see C$_2$H at the earliest and the later evolutionary
stages can be explained by the reactive nature of C$_2$H: it is
produced quickly early on and gets replenished at the core edges by
the UV photodissociation of CO. The inner ``chemical'' hole observed
toward IRAS\,18089-1732 can be explained by C$_2$H being consumed in
the chemical network forming CO and more complex molecules like larger
carbon-hydrogen complexes and/or depletion.
The data show that C$_2$H is not suited to investigate the central gas
cores in more evolved sources; however, our analysis indicates that
C$_2$H may be a suitable tracer of the earliest stages of (massive)
star formation, like N$_2$H$^+$ or NH$_3$ (e.g.,
\citealt{bergin2002,tafalla2004,beuther2005a,pillai2006}). While a
spatial analysis of the line emission will give insights into the
kinematics of the gas and also the evolutionary stage from chemical
models, multiple C$_2$H lines will even allow a temperature
characterization. With its lowest $J=1-0$ transitions around 87\,GHz,
C$_2$H has easily accessible spectral lines in several bands between
3\,mm and 850\,$\mu$m. Furthermore, even the 349\,GHz lines
presented here still have relatively low upper-level excitation
energies ($E_u/k\sim42$\,K), hence allowing the study of cold cores even
at sub-millimeter wavelengths. This prediction can be tested further
via high spectral and spatial resolution observations of different
C$_2$H lines toward young IRDCs.
\acknowledgments{H.B. acknowledges financial support
by the Emmy-Noether-Programm of the Deutsche Forschungsgemeinschaft
(DFG, grant BE2578). }
\section{Introduction} \label{sec:introduction}
We consider nonconvex quadratically constrained quadratic programs (QCQPs) of the form
\begin{align}
\label{eq:hqcqp} \tag{$\PC$}
\begin{array}{rl}
\min & \trans{\x}Q^0 \x \\
\subto & \trans{\x}Q^p \x \leq b_p, \quad p \in [m],
\end{array}
\end{align}
where $Q^0, \ldots, Q^m \in \SymMat^n$, $\b \in \Real^m$, $\x \in \Real^n$, and $[m]$ denotes
the set $\left\{i \in \Natural\,\middle|\, 1 \leq i \leq m \right\}$.
We use $\SymMat^n$ to denote
the space of $n \times n$ symmetric matrices.
A general form of QCQPs with linear terms
\begin{align*}
\begin{array}{rl}
\min & \trans{\x}Q^0 \x + \trans{(\q^0)}\x \\
\subto & \trans{\x}Q^p \x + \trans{(\q^p)}\x \leq b_p, \quad p \in [m],
\end{array}
\end{align*}
can be represented in the form of \eqref{eq:hqcqp}
using a new variable $x_0$ such that $x_0^2 = 1$, where $\q^0, \ldots, \q^m \in \Real^n$.
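Concretely, under the constraint $x_0^2 = 1$ (imposed by the two quadratic inequalities $x_0^2 \leq 1$ and $-x_0^2 \leq -1$), each quadratic function is homogenized as
\begin{equation*}
\trans{\x}Q^p \x + \trans{(\q^p)}\x
= \trans{\begin{bmatrix} x_0 \\ \x \end{bmatrix}}
\begin{bmatrix} 0 & \trans{(\q^p)}/2 \\ \q^p/2 & Q^p \end{bmatrix}
\begin{bmatrix} x_0 \\ \x \end{bmatrix}
\quad \text{when } x_0 = 1;
\end{equation*}
a solution with $x_0 = -1$ is mapped back to the original variables by flipping the sign of $\x$.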
For simplicity, we describe QCQPs as \eqref{eq:hqcqp}
and we assume that \eqref{eq:hqcqp} is feasible in this paper.
Nonconvex QCQPs \eqref{eq:hqcqp} are known to be NP-hard in general; however, finding the exact solution of
certain classes of QCQPs has been a popular subject \cite{Azuma2021,Burer2019,Jeyakumar2014,kim2003exact,Sojoudi2014exactness,Wang2021geometric,Wang2021tightness},
as they can provide solutions for important applications
formulated as QCQPs \eqref{eq:hqcqp}. These include optimal power flow problems~\cite{Lavaei2012,Zhou2019}, pooling problems~\cite{kimizuka2019solving},
sensor network localization problems~\cite{BISWAS2004,KIM2009,SO2007},
quadratic assignment problems~\cite{PRendl09,ZHAO1998},
and the max-cut problem~\cite{Geomans1995}.
Moreover, it is well-known that polynomial optimization problems can be recast as QCQPs.
By replacing $\x\trans{\x}$ with a rank-1 matrix $X \in \SymMat^n$ in \eqref{eq:hqcqp}
and removing the rank constraint of $X$,
the standard (Shor) SDP relaxation
and its dual problem can be expressed as
\begin{align}
\label{eq:hsdr} \tag{$\PC_R$} &
\begin{array}{rl}
\min & \ip{Q^0}{X} \\
\subto & \ip{Q^p}{X} \le b_p, \quad p \in [m], \\
& X \succeq O,
\end{array} \\
\label{eq:hsdrd} \tag{$\DC_R$} &
\begin{array}{rl}
\max & \trans{-\b}\y \\
\subto & S(\y) := Q^0 + \sum\limits_{p=1}^m y_p Q^p \succeq O,
\quad \y \ge \0,
\end{array}
\end{align}
where $\ip{Q^p}{X}$ denotes the Frobenius inner product of $Q^p$ and $X$, i.e., $\ip{Q^p}{X} \coloneqq \sum_{i,j} Q^p_{ij} X_{ij}$,
and $X \succeq O$ means that $X$ is positive semidefinite.
The SDP relaxation provides a lower bound of the optimal value of \eqref{eq:hqcqp} in general.
When the SDP relaxation ({$\PC_R$}) provides a rank-1 solution $X$, we say that
the SDP relaxation is exact. In this case, the exact optimal solution
and exact optimal value can be computed in polynomial time.
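As a minimal computational sketch of \eqref{eq:hsdr} and the rank-1 test (assuming Python with the \texttt{numpy} and \texttt{cvxpy} packages; the helper names \texttt{shor\_relaxation} and \texttt{is\_rank\_one} are ours, purely for illustration):
\begin{verbatim}
import numpy as np
import cvxpy as cp

def shor_relaxation(Qs, b):
    """Solve the Shor SDP relaxation (P_R); Qs = [Q0, Q1, ..., Qm]."""
    n = Qs[0].shape[0]
    X = cp.Variable((n, n), PSD=True)  # X >= O replaces the rank-1 X = x x^T
    cons = [cp.trace(Qs[p] @ X) <= b[p - 1] for p in range(1, len(Qs))]
    prob = cp.Problem(cp.Minimize(cp.trace(Qs[0] @ X)), cons)
    prob.solve()
    return X.value, prob.value

def is_rank_one(X, tol=1e-6):
    """Exactness check: the optimal X should have numerical rank 1."""
    w = np.linalg.eigvalsh(X)          # eigenvalues in ascending order
    return w[-1] > tol and w[-2] <= tol * w[-1]
\end{verbatim}
When \texttt{is\_rank\_one} succeeds, the leading eigenvector of $X$, scaled by the square root of the leading eigenvalue, recovers an optimal $\x$ of \eqref{eq:hqcqp} up to sign.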
A second-order cone programming (SOCP) relaxation can be obtained by
further relaxing the positive semidefinite constraint $X \succeq O$, for instance
by requiring all $2 \times 2$ principal submatrices of $X$ to be positive
semidefinite~\cite{kim2003exact, sheen2020exploiting}.
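Each such $2 \times 2$ condition is a rotated second-order cone constraint, since
\begin{equation*}
\begin{bmatrix} X_{ii} & X_{ij} \\ X_{ij} & X_{jj} \end{bmatrix} \succeq O
\quad \Longleftrightarrow \quad
X_{ii} \geq 0, \quad X_{jj} \geq 0, \quad X_{ij}^2 \leq X_{ii}X_{jj}.
\end{equation*}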
For QCQPs with a certain sparsity structure, e.g., forest structures,
the SDP relaxation coincides with the SOCP relaxation.
In this paper, we present a wider class of QCQPs that can be solved exactly with the SDP relaxation by extending
the results in \cite{Azuma2021} and \cite{Sojoudi2014exactness}. The extension is based on the facts that trees and forests are bipartite graphs,
and that QCQPs with no particular structure, in which each $Q_{ij}^p$ has the same sign for $p=0,1,\ldots,m$, can be transformed into
ones with bipartite structures.
Sufficient conditions for the exact
SDP relaxation of QCQP \eqref{eq:hqcqp} are described. These conditions are called exactness conditions in the subsequent discussion.
We mention that our results on the exact SDP relaxation are obtained by investigating the rank of $S(\y)$ in the dual of the SDP relaxation ({$\DC_R$}).
When discussing the exact optimal solution of nonconvex QCQPs, convex relaxations of QCQPs such as the SDP or
SOCP have played a pivotal role.
In particular,
the signs of the elements in the data matrices $Q^0, \ldots, Q^m$
as in \cite{kim2003exact,Sojoudi2014exactness} and
graph structures such as forests \cite{Azuma2021} and bipartite structures \cite{Sojoudi2014exactness} have been used to identify the classes
of nonconvex QCQPs whose exact optimal solution can be attained via the SDP relaxation.
QCQPs with nonpositive off-diagonal data matrices were shown to have an exact SDP and SOCP relaxation~\cite{kim2003exact}.
This result was generalized by Sojoudi and Lavaei~\cite{Sojoudi2014exactness}
with a sufficient condition that can be tested by
the sign-definiteness based on the cycles in the aggregated sparsity pattern graph induced from the nonzero elements of data matrices in \eqref{eq:hqcqp}.
A finite set $\{Q^0_{ij}, Q^1_{ij}, \ldots, Q^m_{ij}\} \subseteq \Real$ is called sign-definite
if the elements of the set are either all nonnegative or all nonpositive.
We note that these results are obtained by analyzing the primal problem ($\PC_R$).
For general QCQPs with no particular structure,
Burer and Ye in \cite{Burer2019} presented sufficient conditions for the exact semidefinite formulation
with a polynomial-time checkable polyhedral system.
From the dual SDP relaxation \eqref{eq:hsdrd} using strong duality,
they proposed
an LP-based technique to detect the exactness of the SDP relaxation of QCQPs
consisting of diagonal matrices $Q^0, \ldots, Q^m$ and linear terms.
Azuma et al.~\cite{Azuma2021} presented related results on QCQPs with forest structures.
With respect to the exactness conditions,
Yakubovich's S-lemma~\cite{Polik2007, Yakubovich1971}
(also known as S-procedure) can be regarded as one of the most important results.
It showed that the trust-region subproblem,
a subclass of QCQPs with only one constraint ($m = 1$) and $Q^1 \succeq O$, always admits an exact SDP relaxation.
Under some mild assumptions,
Wang and Xia~\cite{Wang2015} generalized this result to QCQPs with two constraints ($m = 2$)
and any matrices satisfying $Q^1 = -Q^2$, not necessarily positive semidefinite.
For the extended trust-region subproblem
whose constraints consist of one ellipsoid and linear inequalities,
the exact SDP relaxation has been studied by
Jeyakumar and Li~\cite{Jeyakumar2014}. They proved that
the SDP relaxation of the extended trust-region subproblem is exact if
the algebraic multiplicity of the minimum eigenvalue of $Q^0$ is strictly greater than
the dimension of the space spanned by the coefficient vectors of the linear inequalities.
This condition was slightly improved by Hsia and Sheu~\cite{Hsia2013}.
In addition, Locatelli~\cite{Locatelli2016} introduced
a new exactness condition for the extended trust-region subproblem
based on the KKT conditions
and proved that it is more general than the previous results.
A different approach to the exactness of the SDP relaxation for QCQPs is to study the convex hull exactness, i.e., the coincidence of
the convex hull of the epigraph of a QCQP and the projected epigraph of its SDP relaxation.
Wang and K{\i}l{\i}n\c{c}-Karzan in~\cite{Wang2021tightness}
presented sufficient conditions for the convex hull exactness under the condition that
the feasible set $\Gamma \coloneqq \{\y \geq \0 \,|\, S(\y) \succeq O\}$ of \eqref{eq:hsdrd} is polyhedral.
Their results were improved in~\cite{Wang2021geometric} by eliminating this condition.
The rank-one generated (ROG) property, a geometric property,
was employed by Argue et al.~\cite{Argue2020necessary}
to evaluate the feasible set of the SDP relaxation.
In their paper, they proposed sufficient conditions that
the feasible set of the SDP relaxation is ROG, and
connected the ROG property with both the objective value and the exactness of the convex hull.
We describe our contributions:
\begin{itemize} \vspace{-2mm}
\item
We first show that
if the aggregated sparsity pattern graph is connected and bipartite
and a feasibility checking system constructed from QCQP \eqref{eq:hqcqp} is infeasible,
then the SDP relaxation is exact in section~\ref{sec:main}.
It is a polynomial-time method as the systems can be represented as SDPs.
This result can be regarded as an extension of Azuma et al.~\cite{Azuma2021}
in the sense that
the aggregated sparsity pattern
is generalized from forests to bipartite graphs. We should mention that the signs of the data are irrelevant.
We give in section~\ref{sec:example} two numerical examples of QCQPs which can be shown to
have exact SDP relaxations by our method, but fail to meet the conditions for real-valued
QCQPs of \cite{Sojoudi2014exactness}.
\item
We propose a conversion method to derive a bipartite graph structure in \eqref{eq:hqcqp} from QCQPs with no apparent structure,
so that the SDP relaxation of the resulting QCQP provides the exact optimal solution.
More precisely, for every off-diagonal index $(i,j)$, if the set $\{Q^0_{ij},\ldots,Q^m_{ij}\}$ is sign-definite,
i.e., either all nonnegative or all nonpositive,
then any QCQP \eqref{eq:hqcqp} can be transformed into nonnegative off-diagonal QCQPs
with bipartite aggregated sparsity
by introducing a new variable $\z \coloneqq -\x$ and a new constraint $\|\x + \z\|_2^2 \leq 0$, which
covers a result for the real-valued QCQP proposed in \cite{Sojoudi2014exactness}.
\item We also show that the known results on the exactness of QCQPs
where (a) all the off-diagonal elements are sign-definite and the aggregated sparsity pattern graph is a forest,
or (b) all the off-diagonal elements are nonpositive can be proved using our method.
\item For disconnected pattern graphs,
a perturbation of the objective function is introduced, as in~\cite{Azuma2021}, in section~\ref{sec:perturbation}
to demonstrate that the SDP relaxation of a QCQP is exact
if there exists a sequence of perturbed problems converging to the QCQP
while maintaining the exactness of their SDP relaxations,
under assumptions weaker than those in \cite{Azuma2021}.
\end{itemize}
Throughout this paper, the following example is used to illustrate the difference between our result and previous works.
\begin{example}
\label{examp:cycle-graph-4-vertices}
\begin{align*}
\begin{array}{rl}
\min & \trans{\x} Q^0 \x \\
\subto & \trans{\x} Q^1 \x \leq 10, \quad \trans{\x} Q^2 \x \leq 10,
\end{array}
\end{align*}
where \begin{align*}
& Q^0 = \begin{bmatrix}
0 & -2 & 0 & 2 \\ -2 & 0 & -1 & 0 \\
0 & -1 & 5 & 1 \\ 2 & 0 & 1 & -4 \end{bmatrix}, \
Q^1 = \begin{bmatrix}
5 & 2 & 0 & 1 \\ 2 & -1 & 3 & 0 \\
0 & 3 & 3 & -1 \\ 1 & 0 & -1 & 4 \end{bmatrix}, \
Q^2 = \begin{bmatrix}
-1 & 1 & 0 & 0 \\ 1 & 4 & -1 & 0 \\
0 & -1 & 6 & 1 \\ 0 & 0 & 1 & -2 \end{bmatrix}.
\end{align*}
\end{example}
\noindent
Although Example \ref{examp:cycle-graph-4-vertices} does not satisfy
the sign-definiteness condition,
the proposed method can successfully show that the SDP relaxation is exact.
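For instance, the claim can be checked numerically with the \texttt{shor\_relaxation} sketch given above (illustrative only; the rank test is subject to solver tolerances):
\begin{verbatim}
import numpy as np

Q0 = np.array([[ 0, -2,  0,  2], [-2,  0, -1,  0],
               [ 0, -1,  5,  1], [ 2,  0,  1, -4]], dtype=float)
Q1 = np.array([[ 5,  2,  0,  1], [ 2, -1,  3,  0],
               [ 0,  3,  3, -1], [ 1,  0, -1,  4]], dtype=float)
Q2 = np.array([[-1,  1,  0,  0], [ 1,  4, -1,  0],
               [ 0, -1,  6,  1], [ 0,  0,  1, -2]], dtype=float)

X, val = shor_relaxation([Q0, Q1, Q2], b=[10.0, 10.0])
print(np.linalg.eigvalsh(X))  # a single dominant eigenvalue indicates rank 1
\end{verbatim}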
The rest of this paper is organized as follows.
In section~\ref{sec:preliminaries},
the aggregated sparsity pattern of QCQPs and the sign-definiteness are defined
and related works on the exactness of the SDP relaxation for QCQPs
with some aggregated sparsity pattern are described.
Sections~\ref{sec:main} and \ref{sec:perturbation} include the main results of this paper.
In section~\ref{sec:main}, the assumptions necessary for the exact SDP relaxation are described,
and sufficient conditions for the exact SDP relaxation are presented
under the connectivity of the aggregated sparsity pattern.
In section~\ref{sec:perturbation},
we show that the sufficient conditions can be extended to
QCQPs which do not satisfy the connectivity condition.
The perturbation results on the exactness are utilized to remove the connectivity condition.
In section~\ref{sec:example}, we also provide specific numerical instances to compare our result with the existing work and
illustrate our method.
Finally, we conclude in section~\ref{sec:conclution}.
\section{Preliminaries} \label{sec:preliminaries}
We denote
the $n$-dimensional Euclidean space by $\Real^n$
and the nonnegative orthant of $\Real^n$ by $\Real_+^n$.
We write the zero vector and the vector of all ones as $\0 \in \Real^n$ and $\1 \in \Real^n$, respectively.
We also write $M \succeq O$ and $M \succ O$ to indicate that
the matrix $M$ is positive semidefinite and positive definite, respectively.
We use $[n]:=\left\{i \in \Natural\,\middle|\, 1 \leq i \leq n\right\}$
and $[n, m]:=\left\{i \in \Natural\,\middle|\, n \leq i \leq m\right\}$.
The graph $G(\VC, \EC)$ denotes an undirected graph with
the vertex set $\VC$ and the edge set $\EC$.
We sometimes write $G$ if the vertex and edge sets are clear.
\subsection{Aggregated sparsity pattern}
The aggregated sparsity pattern of the SDP relaxation, defined from the data matrices $Q^p \, (p \in [0, m])$, is used
to describe the sparsity structure of QCQPs.
Let $\VC = [n]$ denote the set of indices of
rows and columns of $n \times n$ symmetric matrices.
Then, the set of indices
\begin{equation*}
\EC = \left\{
(i, j) \in \VC \times \VC \,\middle|\,
\text{$i \neq j$ and $Q^p_{ij} \ne 0$ for some $p \in [0, m]$}
\right\}
\end{equation*}
is called the aggregated sparsity pattern
for both a given QCQP \eqref{eq:hqcqp} and its SDP relaxation \eqref{eq:hsdr}.
If $\EC$ denotes the set of edges of a graph with vertices $\VC$,
the graph $G(\VC, \EC)$ is called the aggregated sparsity pattern graph.
If $\EC$ corresponds to an adjacency matrix $\QC$ on $n$ vertices,
$\QC$ is called the aggregated sparsity pattern matrix.
Consider the QCQP in Example \ref{examp:cycle-graph-4-vertices} as an illustrative example.
\noindent
As the $(1, 3)$th and $(2, 4)$th elements are zero in $Q^0, Q^1, Q^2$,
the aggregated sparsity pattern graph is a cycle with 4 vertices, as
shown in Figure~\ref{fig:example-aggregated-sparsity}.
This graph is the simplest connected bipartite graph containing a cycle.
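A small sketch of this construction (assuming Python with \texttt{numpy} and \texttt{networkx}; the helper name is ours):
\begin{verbatim}
import numpy as np
import networkx as nx

def aggregated_pattern_graph(Qs, tol=0.0):
    """Edge (i, j) whenever Q^p_ij != 0 for some p in [0, m]."""
    n = Qs[0].shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if any(abs(Q[i, j]) > tol for Q in Qs):
                G.add_edge(i, j)
    return G

# For the example above, the resulting graph is the 4-cycle, and both
# nx.is_connected(G) and nx.is_bipartite(G) return True.
\end{verbatim}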
\begin{figure}[t]
\centering
\begin{minipage}{0.30\textwidth}
\tikzset{every picture/.style={line width=0.75pt}}
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-0.6,xscale=0.6]
\draw (140,40) -- (40,140) -- (140,140) -- (40,40) -- cycle ;
\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (10,40) .. controls (10,23.43) and (23.43,10) .. (40,10) .. controls (56.57,10) and (70,23.43) .. (70,40) .. controls (70,56.57) and (56.57,70) .. (40,70) .. controls (23.43,70) and (10,56.57) .. (10,40) -- cycle ;
\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (10,140) .. controls (10,123.43) and (23.43,110) .. (40,110) .. controls (56.57,110) and (70,123.43) .. (70,140) .. controls (70,156.57) and (56.57,170) .. (40,170) .. controls (23.43,170) and (10,156.57) .. (10,140) -- cycle ;
\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (110,140) .. controls (110,123.43) and (123.43,110) .. (140,110) .. controls (156.57,110) and (170,123.43) .. (170,140) .. controls (170,156.57) and (156.57,170) .. (140,170) .. controls (123.43,170) and (110,156.57) .. (110,140) -- cycle ;
\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (110,40) .. controls (110,23.43) and (123.43,10) .. (140,10) .. controls (156.57,10) and (170,23.43) .. (170,40) .. controls (170,56.57) and (156.57,70) .. (140,70) .. controls (123.43,70) and (110,56.57) .. (110,40) -- cycle ;
\draw (40,40) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt}
\begin{center}
{\fontfamily{pcr}\selectfont 1}
\end{center}\end{minipage}};
\draw (140,40) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt}
\begin{center}
{\fontfamily{pcr}\selectfont 2}
\end{center}\end{minipage}};
\draw (140,140) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt}
\begin{center}
{\fontfamily{pcr}\selectfont 4}
\end{center}\end{minipage}};
\draw (40,140) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt}
\begin{center}
{\fontfamily{pcr}\selectfont 3}
\end{center}\end{minipage}};
\end{tikzpicture}
\end{minipage}
\begin{minipage}{0.30\textwidth}
\begin{equation*}
\QC = \begin{bmatrix}
\star & \star & 0 & \star \\ \star & \star & \star & 0 \\
0 & \star & \star & \star \\ \star & 0 & \star & \star
\end{bmatrix}.
\end{equation*}
\end{minipage}
\caption{The aggregated sparsity pattern graph and matrix of Example~\ref{examp:cycle-graph-4-vertices}. $\star$ denotes an arbitrary value.}
\label{fig:example-aggregated-sparsity}
\end{figure}
For the discussion on QCQPs with sign-definiteness, we adopt the following notation
from \cite{Sojoudi2014exactness}.
We define the sign $\sigma_{ij}$ of each edge in $\VC \times \VC$ as
\begin{equation} \label{eq:definition-sigma-ij}
\sigma_{ij} = \begin{cases}
\quad +1 \quad & \text{if $Q^0_{ij}, \ldots, Q^m_{ij} \geq 0$,} \\
\quad -1 \quad & \text{if $Q^0_{ij}, \ldots, Q^m_{ij} \leq 0$,} \\
\quad 0 \quad & \text{otherwise.}
\end{cases}
\end{equation}
Obviously,
$\sigma_{ij} \in \{-1, +1\}$ if and only if $\{Q^0_{ij}, \ldots, Q^m_{ij}\}$ is sign-definite.
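The classification \eqref{eq:definition-sigma-ij} can be computed directly from the data; a Python sketch (the helper name is ours):
\begin{verbatim}
def edge_signs(Qs):
    """sigma_ij for all off-diagonal pairs appearing in the pattern."""
    n = Qs[0].shape[0]
    sigma = {}
    for i in range(n):
        for j in range(i + 1, n):
            vals = [Q[i, j] for Q in Qs]
            if all(v == 0 for v in vals):
                continue                 # (i, j) is not an edge of G
            if all(v >= 0 for v in vals):
                sigma[(i, j)] = +1       # sign-definite, all nonnegative
            elif all(v <= 0 for v in vals):
                sigma[(i, j)] = -1       # sign-definite, all nonpositive
            else:
                sigma[(i, j)] = 0        # not sign-definite
    return sigma
\end{verbatim}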
Sojoudi and Lavaei~\cite{Sojoudi2014exactness}
proposed the following condition for exactness.
\vspace{-2mm}
\begin{theorem}[{\cite[Theorem 2]{Sojoudi2014exactness}}] \label{thm:sojoudi-theorem}
The SOCP relaxation and the SDP relaxation of \eqref{eq:hqcqp} are exact
if both of the following hold:
\begin{align}
&\sigma_{ij} \neq 0, && \forall (i, j) \in \EC, \label{eq:sign-constraint-sign-definite} \\
\prod_{(i,j) \in \mathcal{C}_r} & \sigma_{ij} = (-1)^{\left|\mathcal{C}_r\right|}, && \forall r \in \{1,\ldots, \kappa\}, \label{eq:sign-constraint-simple-cycle}
\end{align}
where the set of cycles $\mathcal{C}_1, \ldots, \mathcal{C}_\kappa \subseteq \EC$ denotes a cycle basis for $G$.
\end{theorem}
\vspace{-2mm}
With the aggregated sparsity pattern graph $G$ of a given QCQP,
they presented the following corollary:
\begin{coro}[{\cite[Corollary 1]{Sojoudi2014exactness}}]
\label{coro:sojoudi-corollary1}
The SDP relaxation and the SOCP relaxation of \eqref{eq:hqcqp} are exact
if one of the following holds:
\begin{enumerate}[label=(\alph*)]
\item $G$ is a forest with $\sigma_{ij} \in \{-1, 1\}$ for all $(i, j) \in \EC$, \label{cond:sojoudi-forest}
\item $G$ is bipartite with $\sigma_{ij} = 1$ for all $(i, j) \in \EC$, \label{cond:sojoudi-bipartite}
\item $G$ is arbitrary with $\sigma_{ij} = -1$ for all $(i, j) \in \EC$. \label{cond:sojoudi-arbitrary}
\end{enumerate}
\end{coro}
\subsection{Conditions for exact SDP relaxations with forest structures}
Recently,
Azuma et al.~\cite{Azuma2021} proposed
a method to decide the exactness of the SDP relaxation of QCQPs with forest structures.
Forest-structured QCQPs and their SDP relaxations have no cycles in the aggregated sparsity pattern graph.
In their work,
the rank of the dual SDP relaxation was determined using feasibility systems under the following assumption:
\begin{assum}
\label{assum:previous-assumption}
The following conditions hold for \eqref{eq:hqcqp}:
\begin{enumerate}[label=(\roman*)]
\item there exists $\bar{\y} \geq 0$ such that $\sum \bar{y}_p Q^p \succ O$, and \label{assum:previous-assumption-1}
\item \eqref{eq:hsdr} has an interior feasible point. \label{assum:previous-assumption-2}
\end{enumerate}
\end{assum}
\noindent
We note that Assumption \ref{assum:previous-assumption} is used to derive
strong duality of the SDP relaxation
and the boundedness of the feasible set.
More precisely, for $\bar{\y}$ in Assumption~\ref{assum:previous-assumption},
multiplying $\ip{Q^p}{X} \leq b_p$ by $\bar{y}_p$ and adding them together leads to
\begin{equation*}
\ip{\left(\sum_{p = 1}^m \bar{y}_pQ^p\right)}{X} \leq \trans{\b}\bar{\y},
\end{equation*}
which implies that the feasible set of \eqref{eq:hsdr} is bounded: for $X \succeq O$ we have $\ip{\sum_{p=1}^m \bar{y}_pQ^p}{X} \geq \lambda_{\min}\,\mathrm{tr}(X)$, where $\lambda_{\min} > 0$ is the smallest eigenvalue of $\sum_{p=1}^m \bar{y}_pQ^p$, so $\mathrm{tr}(X)$ is bounded above.
We describe the result in \cite{Azuma2021} for our subsequent discussion.
\begin{prop}[\cite{Azuma2021}] \label{prop:forest-results}
Assume that a given QCQP satisfies Assumption~\ref{assum:previous-assumption},
and that its aggregated sparsity pattern graph $G(\VC, \EC)$ is a forest.
The SDP relaxation \eqref{eq:hsdr} is exact
if, for all $(k, \ell) \in \EC$, the following system has no solutions:
\begin{equation} \label{eq:system-zero}
\y \geq 0,\; S(\y) \succeq O,\; S(\y)_{k\ell} = 0.
\end{equation}
\end{prop}
\noindent
The above feasibility systems, formulated as SDPs, can be checked in polynomial time
since the number of edges of a forest graph with $n$ vertices is at most $n - 1$.
\section{Conditions for exact SDP relaxations with connected bipartite structures} \label{sec:main}
Throughout this section,
we assume that the aggregated sparsity pattern graph $G(\VC, \EC)$ of a QCQP is connected and bipartite.
Under this assumption,
we present sufficient conditions for the SDP relaxation to be exact.
The main result described in Theorem~\ref{thm:system-based-condition-connected} in this section is extended to
the case of a disconnected aggregated sparsity pattern in section~\ref{sec:perturbation}.
Assumption~\ref{assum:previous-assumption}
has been introduced
only to derive the strong duality which is used in the proof of Proposition~\ref{prop:forest-results}.
Instead of Assumption~\ref{assum:previous-assumption}, we introduce
Assumption~\ref{assum:new-assumption}.
In Remark~\ref{rema:comparison-assumption} below, we will consider a relation between
Assumptions~\ref{assum:previous-assumption} and
\ref{assum:new-assumption}.
\begin{assum} \label{assum:new-assumption}
The following two conditions hold:
\begin{enumerate}[label=(\roman*)]
\item \label{assum:new-assumption-1}
the sets of optimal solutions for \eqref{eq:hsdr} and \eqref{eq:hsdrd} are nonempty; and
\item \label{assum:new-assumption-2}
at least one of the following two conditions holds:
\begin{enumerate}[label=(\alph*)]
\item \label{assum:new-assumption-2-1}
the feasible set of \eqref{eq:hsdr} is bounded; or
\item \label{assum:new-assumption-2-2}
the set of optimal solutions for \eqref{eq:hsdrd} is bounded.
\end{enumerate}
\end{enumerate}
\end{assum}
\noindent
The following lemma states that strong duality holds under Assumption~\ref{assum:new-assumption}.
\begin{lemma}
\label{lem:feasible-set-strong-duality}
If Assumption~\ref{assum:new-assumption} is satisfied,
strong duality holds between \eqref{eq:hsdr} and \eqref{eq:hsdrd}, that is,
\eqref{eq:hsdr} and \eqref{eq:hsdrd} have optimal solutions
and their optimal values are finite and equal.
\end{lemma}
\begin{proof}
Since either the set of optimal solutions for \eqref{eq:hsdr} or
that for \eqref{eq:hsdrd} is nonempty and bounded,
Corollary~4.4 of Kim and Kojima~\cite{kim2021strong} indicates that
the optimal values of \eqref{eq:hsdr} and \eqref{eq:hsdrd} are finite and equal.
\end{proof}
\begin{rema} \label{rema:comparison-assumption}
Assumption~\ref{assum:new-assumption} is weaker than Assumption~\ref{assum:previous-assumption}.
To compare these assumptions, we suppose that there exists $\bar{\y} \geq 0$ such that $\sum_p \bar{y}_pQ^p \succ O$.
Then, there obviously exists a sufficiently large $\lambda > 0$
such that
\begin{equation*}
\lambda \bar{\y} \geq \0 \quad \text{and} \quad Q^0 + \sum_p \lambda\bar{y}_pQ^p \succ O,
\end{equation*}
which implies \eqref{eq:hsdrd} has an interior feasible point.
It follows that the set of optimal solutions of \eqref{eq:hsdr} is bounded.
Similarly, since \eqref{eq:hsdr} has an interior point by Assumption~\ref{assum:previous-assumption},
the set of optimal solutions of \eqref{eq:hsdrd} is also bounded.
This indicates Assumption~\ref{assum:new-assumption} {\it \ref{assum:new-assumption-1}} and
{\it \ref{assum:new-assumption-2}\ref{assum:new-assumption-2-2}}.
In addition,
as mentioned right after Assumption~\ref{assum:previous-assumption},
the feasible set of \eqref{eq:hsdr} is bounded.
%
Thus,
Assumption~\ref{assum:new-assumption}
{\it \ref{assum:new-assumption-2}\ref{assum:new-assumption-2-1}} is also satisfied,
under Assumption~\ref{assum:previous-assumption}.
\end{rema}
\subsection{Bipartite sparsity pattern matrix} \label{ssec:bipartite-matrix}
For a given matrix $M \in \SymMat^n$,
a sparsity pattern graph $G(\VC, \EC_M)$ can be defined by the vertex set and edge set:
\begin{equation*}
\VC = [n], \quad
\EC_M = \left\{(i, j) \in \VC \times \VC \,\middle|\, M_{ij} \neq 0\right\}.
\end{equation*}
Conversely, if $(i, j) \not\in \EC_M$, then the $(i, j)$th element of $M$ must be zero.
The graph $G(\VC, \EC)$ is called bipartite if
its vertices can be divided into two disjoint sets $\LC$ and $\RC$ such that
no two vertices in the same set are adjacent.
Equivalently, $G$ is bipartite if and only if it contains no odd cycles.
If $G(\VC, \EC)$ is bipartite, it can be represented with $G(\LC, \RC, \EC)$,
where $\LC$ and $\RC$ are disjoint sets of vertices.
The sets $\LC$ and $\RC$ are sometimes called parts of the bipartite graph $G$.
The following lemma is an immediate consequence of Proposition 1 of \cite{grone1992nonchordal}.
It shows that the rank of a nonnegative positive semidefinite matrix can be bounded below by $n - 1$
under some sparsity conditions if the sum of every row of the matrix is positive.
We utilize Lemma~\ref{lemma:bipartite-rank} to estimate the rank of solutions of the dual SDP relaxation,
and establish conditions for the exact SDP relaxation in this section.
\begin{lemma}[{\cite[Proposition 1]{grone1992nonchordal}}] \label{lemma:bipartite-rank}
Let $M \in \Real^{n \times n}$ be a nonnegative and positive semidefinite matrix with $M\1 > \0$.
If the sparsity pattern graph of $M$ is bipartite and connected,
then $\rank(M) \geq n - 1$.
\end{lemma}
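As a quick numerical illustration of the lemma (Python/\texttt{numpy}; the construction $M = I + A/2$ is ours), take $A$ to be the adjacency matrix of the 4-cycle, a connected bipartite pattern:
\begin{verbatim}
import numpy as np

A = np.array([[0, 1, 0, 1], [1, 0, 1, 0],
              [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
M = np.eye(4) + A / 2            # nonnegative, PSD, 4-cycle pattern
print(np.linalg.eigvalsh(M))     # [0., 1., 1., 2.]  ->  rank(M) = 3 = n - 1
print(M @ np.ones(4))            # [2., 2., 2., 2.]  ->  M 1 > 0
\end{verbatim}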
The aggregated sparsity pattern graph $G$ composed from
$Q^0, Q^1, \dots, Q^m$ is used
to investigate the exactness of the SDP relaxation of a QCQP, and
the sparsity pattern graph of the matrix $S(\y)$ in the dual of the SDP relaxation is clearly a subgraph of $G$.
As a result, if $G$ is bipartite,
then the rank of $S(\y)$ can be estimated by Lemma~\ref{lemma:bipartite-rank}
since the sparsity pattern graph of $S(\y)$ is also bipartite.
This will be used in the proof of Theorem~\ref{thm:system-based-condition-connected}.
\subsection{Main results} \label{ssec:main-connected}
We present our main results, that is,
sufficient conditions for the SDP relaxation of the QCQP with bipartite structures to be exact.
\begin{theorem}
\label{thm:system-based-condition-connected}
Suppose that Assumption~\ref{assum:new-assumption} holds
and the aggregated sparsity pattern $G(\VC, \EC)$ is a bipartite graph.
Then, \eqref{eq:hsdr} is exact if
\begin{itemize}
\item $G(\VC, \EC)$ is connected,
\item
for all $(k, \ell) \in \EC$, the following system has no solutions:
\begin{equation} \label{eq:system-nonpositive}
\y \geq \0,\; S(\y) \succeq O,\; S(\y)_{k\ell} \leq 0.
\end{equation}
\end{itemize}
\end{theorem}
\begin{proof}
Let $X^*$ be any optimal solution for \eqref{eq:hsdr} which exists by Assumption~\ref{assum:new-assumption}.
By Lemma~\ref{lem:feasible-set-strong-duality},
the optimal values of \eqref{eq:hsdr} and \eqref{eq:hsdrd} are finite and equal.
Thus, there exists an optimal solution $\y^*$ for \eqref{eq:hsdrd}
such that the complementary slackness holds, i.e.,
\begin{equation*}
X^* S(\y^*) = O.
\end{equation*}
%
Since $\y^* \geq \0$ and $S(\y^*) \succeq O$,
by the infeasibility of \eqref{eq:system-nonpositive},
we obtain $S(\y^*)_{k\ell} > 0$ for every $(k, \ell) \in \EC$.
Furthermore, for each $i \in \VC$, the $i$th element of $S(\y^*)\1$ is
\begin{equation*}
[S(\y^*)\1]_i
= \sum_{j = 1}^n S(\y^*)_{ij}
= S(\y^*)_{ii} + \sum_{(i,j) \in \EC} S(\y^*)_{ij}
> 0,
\end{equation*}
where $S(\y^*)_{ii} \geq 0$ follows from $S(\y^*) \succeq O$, each off-diagonal term is positive, and every vertex of the connected graph $G$ is incident to at least one edge.
By Lemma~\ref{lemma:bipartite-rank},
$\rank\left\{S(\y^*)\right\} \geq n - 1$.
%
From Sylvester's rank inequality~\cite{Anton2014},
\begin{equation*}
\rank(X^*)
\leq n - \rank\left\{S(\y^*)\right\} + \rank\left\{X^*S(\y^*)\right\}
\leq n - (n - 1)
= 1.
\end{equation*}
%
Therefore, the SDP relaxation is exact.
\end{proof}
The exactness of a given QCQP can thus be determined
by checking the infeasibility of the $|\EC|$ systems \eqref{eq:system-nonpositive}.
Since each system can be formulated as
an SDP with the objective function $0$,
checking its infeasibility is not difficult.
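A feasibility test along these lines could be sketched as follows (Python/\texttt{cvxpy}; illustrative, with the $Q^p$ assumed symmetric; replacing the inequality on $S(\y)_{k\ell}$ by an equality gives the forest test \eqref{eq:system-zero}):
\begin{verbatim}
import cvxpy as cp

def system_infeasible(Qs, k, l):
    """True if  y >= 0, S(y) PSD, S(y)_kl <= 0  has no solution."""
    m = len(Qs) - 1
    y = cp.Variable(m, nonneg=True)
    S = Qs[0] + sum(y[p] * Qs[p + 1] for p in range(m))
    prob = cp.Problem(cp.Minimize(0), [S >> 0, S[k, l] <= 0])
    prob.solve()
    return prob.status in ("infeasible", "infeasible_inaccurate")

# Exactness test: system_infeasible(Qs, k, l) should return True
# for every edge (k, l) of the aggregated sparsity pattern graph.
\end{verbatim}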
Compared with Proposition~\ref{prop:forest-results} in \cite{Azuma2021},
Theorem~\ref{thm:system-based-condition-connected} can determine the exactness of a wider class of QCQPs
in terms of the required assumption and sparsity.
As mentioned in Remark~\ref{rema:comparison-assumption},
the assumptions in Theorem \ref{thm:system-based-condition-connected} are weaker
than those in Proposition~\ref{prop:forest-results},
and the aggregated sparsity pattern of $G$ is extended from forest graphs to bipartite graphs.
\subsection{Nonnegative off-diagonal QCQPs} \label{ssec:nonnegative-offdiagonal-connected}
We can also prove a known result by Theorem~\ref{thm:system-based-condition-connected},
i.e.,
the exactness of the SDP relaxation for QCQPs with nonnegative off-diagonal data matrices $Q^0, \ldots, Q^m$,
which was
referred to as Corollary~\ref{coro:sojoudi-corollary1}\ref{cond:sojoudi-bipartite} above and was proved in~\cite{Sojoudi2014exactness}.
The aggregated sparsity pattern graph $G(\VC, \EC)$ is assumed to be connected
and $Q^0_{ij} > 0$ for all $(i, j) \in \EC$ in this subsection.
These assumptions will be relaxed in
section~\ref{ssec:nonnegative-offdiagonal}.
\begin{coro} \label{coro:nonnegative-offdiagonal-connected}
Suppose that Assumption~\ref{assum:new-assumption} holds,
and the aggregated sparsity pattern graph $G(\VC, \EC)$ of \eqref{eq:hqcqp}
is bipartite and connected.
If $Q^0_{ij} > 0$ for all $(i, j) \in \EC$ and
$Q^p_{ij} \geq 0$ for all $(i, j) \in \EC$ and all $p \in [m]$,
then the SDP relaxation is exact.
\end{coro}
\begin{proof}
Let $\hat{\y} \ge \0$ be any nonnegative vector satisfying $S(\hat{\y}) \succeq O$.
By the assumption, for any $(i, j) \in \EC$,
\begin{equation*}
S(\hat{\y})_{ij}
= Q^0_{ij} + \sum_{p \in [m]} \hat{y}_p Q^p_{ij}
\geq Q^0_{ij}
> 0.
\end{equation*}
Hence, the system \eqref{eq:system-nonpositive} for every $(i, j) \in \EC$ has no solutions.
Therefore, by Theorem~\ref{thm:system-based-condition-connected},
the SDP relaxation is exact.
\end{proof}
\subsection{Conversion to QCQPs with bipartite structures} \label{ssec:comparison}
We show that a QCQP can be transformed into an equivalent QCQP with bipartite structures.
We then compare Theorem~\ref{thm:system-based-condition-connected}
with Theorem~\ref{thm:sojoudi-theorem}.
As our result has been obtained by
investigating the rank in the dual SDP \eqref{eq:hsdrd} via strong duality, while
the result in \cite{Sojoudi2014exactness} comes from the evaluation of the primal \eqref{eq:hsdr}, the classes of QCQPs that can be solved exactly with the SDP relaxation differ.
In this section, we show that
a class of QCQPs obtained by Theorem~\ref{thm:system-based-condition-connected}
under Assumption~\ref{assum:new-assumption} is wider than those by Theorem~\ref{thm:sojoudi-theorem}.
To transform a QCQP into an equivalent QCQP with bipartite structures and to apply Theorem~\ref{thm:system-based-condition-connected},
we define, for every $p$, a diagonal matrix $D^p \in \SymMat^n$
from the diagonal of $Q^p$ shifted by a positive number.
In addition, off-diagonal elements of $Q^p$ are divided into
two nonnegative symmetric matrices $2N^p_+,\, 2N^p_- \in \SymMat^n$ according to their signs
such that $Q^p = D^p + 2N^p_+ - 2N^p_-$.
More precisely, for an arbitrary positive number $\delta > 0$,
\begin{align*}
D^p_{ii} &= Q^p_{ii} + 2 \delta, \\
2[N^p_+]_{ij} &= \begin{cases}
+ Q^p_{ij} & \text{if $i \neq j$ and $Q^p_{ij} > 0$}, \\
0 & \text{otherwise,}
\end{cases} \\
2[N^p_-]_{ij} &= \begin{cases}
- Q^p_{ij} & \text{if $i \neq j$ and $Q^p_{ij} < 0$}, \\
2 \delta & \text{if $i = j$}, \\
0 & \text{otherwise.}
\end{cases}
\end{align*}
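One can verify entrywise that these matrices indeed recover $Q^p$ for any $\delta > 0$:
\begin{equation*}
[D^p + 2N^p_+ - 2N^p_-]_{ii} = (Q^p_{ii} + 2\delta) - 2\delta = Q^p_{ii},
\qquad
[2N^p_+ - 2N^p_-]_{ij} = Q^p_{ij} \quad (i \neq j).
\end{equation*}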
We introduce a new variable $\z$ such that $\z \coloneqq -\x$. Then,
\begin{equation*}
\trans{\x}Q^p\x =
\trans{\begin{bmatrix} \x \\ \z \end{bmatrix}}
\begin{bmatrix} D^p + 2N^p_+ & N^p_- \\ N^p_- & O \end{bmatrix}
\begin{bmatrix} \x \\ \z \end{bmatrix}.
\end{equation*}
The constraint $\z = -\x$ can be expressed as
$\|\x + \z\|^2 \leq 0$,
which can be written as
\begin{equation*}
\trans{(\x + \z)}(\x + \z)
= \trans{\begin{bmatrix} \x \\ \z \end{bmatrix}}
\begin{bmatrix} I & I \\ I & I \end{bmatrix}
\begin{bmatrix} \x \\ \z \end{bmatrix}
\leq 0.
\end{equation*}
Thus, we have an equivalent QCQP:
\begin{equation} \label{eq:decomposed-hqcqp}
\begin{array}{rl}
\min & \trans{\begin{bmatrix} \x \\ \z \end{bmatrix}}
\begin{bmatrix} D^0 + 2N^0_+ & N^0_- \\ N^0_- & O \end{bmatrix}
\begin{bmatrix} \x \\ \z \end{bmatrix} \\
\subto & \trans{\begin{bmatrix} \x \\ \z \end{bmatrix}}
\begin{bmatrix} D^p + 2N^p_+ & N^p_- \\ N^p_- & O \end{bmatrix}
\begin{bmatrix} \x \\ \z \end{bmatrix} \leq b_p, \quad p \in [m], \\
& \trans{\begin{bmatrix} \x \\ \z \end{bmatrix}}
\begin{bmatrix} I & I \\ I & I \end{bmatrix}
\begin{bmatrix} \x \\ \z \end{bmatrix} \leq 0.
\end{array}
\end{equation}
Note that
\eqref{eq:decomposed-hqcqp} includes $m + 1$ constraints, and
all off-diagonal elements of its data matrices are nonnegative since $N^p_+$ and $N^p_-$ are nonnegative.
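A sketch of the lifted data matrices (Python/\texttt{numpy}; the helper name is ours):
\begin{verbatim}
import numpy as np

def lifted_matrix(Q, delta=1.0):
    """Block data matrix of the lifted QCQP in the variables (x, z)."""
    n = Q.shape[0]
    off = Q - np.diag(np.diag(Q))
    Np = np.maximum(off, 0.0) / 2.0   # N_+ : 2 N_+ is the positive part
    Nm = np.maximum(-off, 0.0) / 2.0 + delta * np.eye(n)  # N_- (diag = delta)
    D = np.diag(np.diag(Q)) + 2.0 * delta * np.eye(n)
    top = np.hstack([D + 2.0 * Np, Nm])
    bot = np.hstack([Nm, np.zeros((n, n))])
    return np.vstack([top, bot])      # all off-diagonal entries >= 0

# The additional constraint uses np.block([[np.eye(n), np.eye(n)],
#                                          [np.eye(n), np.eye(n)]]).
\end{verbatim}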
Let $\bar{G}(\bar{\VC}, \bar{\EC})$ denote
the aggregated sparsity pattern graph of \eqref{eq:decomposed-hqcqp}.
The number of vertices in $\bar{G}$ is twice that in $G$ due to the additional variable $\z$.
If $\bar{G}$ is bipartite and $Q^0_{ij} \neq 0$ for all $(i, j) \in \EC$,
the SDP relaxation of \eqref{eq:decomposed-hqcqp} is exact
since the assumptions of Corollary~\ref{coro:nonnegative-offdiagonal-connected} are satisfied.
\begin{figure}[t]
\centering
\tikzset{every picture/.style={line width=0.75pt}}
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-0.6,xscale=0.6]
\draw [dash pattern={on 4.5pt off 4.5pt}, line width=0.75] (40,40) -- (140,140) ;
\draw (40,40) -- (140,40) -- (140,140) -- (40,140) -- cycle ;
\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (10,40) .. controls (10,23.43) and (23.43,10) .. (40,10) .. controls (56.57,10) and (70,23.43) .. (70,40) .. controls (70,56.57) and (56.57,70) .. (40,70) .. controls (23.43,70) and (10,56.57) .. (10,40) -- cycle ;
\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (10,140) .. controls (10,123.43) and (23.43,110) .. (40,110) .. controls (56.57,110) and (70,123.43) .. (70,140) .. controls (70,156.57) and (56.57,170) .. (40,170) .. controls (23.43,170) and (10,156.57) .. (10,140) -- cycle ;
\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (110,140) .. controls (110,123.43) and (123.43,110) .. (140,110) .. controls (156.57,110) and (170,123.43) .. (170,140) .. controls (170,156.57) and (156.57,170) .. (140,170) .. controls (123.43,170) and (110,156.57) .. (110,140) -- cycle ;
\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (110,40) .. controls (110,23.43) and (123.43,10) .. (140,10) .. controls (156.57,10) and (170,23.43) .. (170,40) .. controls (170,56.57) and (156.57,70) .. (140,70) .. controls (123.43,70) and (110,56.57) .. (110,40) -- cycle ;
\draw (40,40) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt}
\begin{center}
{\fontfamily{pcr}\selectfont 1}
\end{center}\end{minipage}};
\draw (140,40) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt}
\begin{center}
{\fontfamily{pcr}\selectfont 2}
\end{center}\end{minipage}};
\draw (140,140) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt}
\begin{center}
{\fontfamily{pcr}\selectfont 3}
\end{center}\end{minipage}};
\draw (40,140) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt}
\begin{center}
{\fontfamily{pcr}\selectfont 4}
\end{center}\end{minipage}};
\draw (91.33,64.33) node [anchor=north west][inner sep=0.75pt] [font=\large] [align=left] {$\displaystyle -$};
\draw (79.67,7) node [anchor=north west][inner sep=0.75pt] [font=\large] [align=left] {$\displaystyle +$};
\draw (80.33,106.67) node [anchor=north west][inner sep=0.75pt] [font=\large] [align=left] {$\displaystyle +$};
\draw (42,71) node [anchor=north west][inner sep=0.75pt] [font=\large] [align=left] {$\displaystyle +$};
\draw (142,71) node [anchor=north west][inner sep=0.75pt] [font=\large] [align=left] {$\displaystyle +$};
\end{tikzpicture}
\caption{
An aggregated sparsity pattern graph with edge signs.
The solid and dashed lines show that
the corresponding $\sigma_{ij}$ are $+1$ and $-1$, respectively.
Both lines indicate the existence of nonzero elements in some $Q^p$.
}
\label{fig:example-edge-signs}
\end{figure}
\begin{example}
Now, consider an instance of QCQP~\eqref{eq:hqcqp}
with $n=4$, $Q^p_{24} = 0 \, (p \in [0, m])$ and the edge signs as:
\begin{equation*}
\begin{aligned}
\sigma_{12} &= +1, & \sigma_{13} &= -1, & \sigma_{14} &= +1, & \sigma_{23} &= +1, & \sigma_{34} &= +1.
\end{aligned}
\end{equation*}
\autoref{fig:example-edge-signs} illustrates the above signs.
We also suppose that $Q^0_{ij} \neq 0$ for all $(i, j) \in \EC$.
Then, for any distinct $i, j \in [n]$,
the set $\left\{Q^0_{ij}, \ldots, Q^m_{ij}\right\}$ is sign-definite by definition.
Since there exist odd cycles, e.g., $\{(1, 2), (2, 3), (3, 1)\}$,
the aggregated sparsity pattern graph of a QCQP with the above edge signs is not bipartite.
Next, we transform the QCQP instance into an equivalent QCQP with bipartite structures.
Since $n=4$, we see $\bar{\VC} = [8]$.
\autoref{fig:example-transformed-sparsity-before} displays $\bar{G}$ from
\begin{equation*}
\begin{bmatrix} D^p + 2N^p_+ & N^p_- \\ N^p_- & O \end{bmatrix} =
\left[
\begin{array}{c|c}
\begin{matrix}
Q^p_{11} & Q^p_{12} & 0 & Q^p_{14} \\
Q^p_{21} & Q^p_{22} & Q^p_{23} & 0 \\
0 & Q^p_{32} & Q^p_{33} & Q^p_{34} \\
Q^p_{41} & 0 & Q^p_{43} & Q^p_{44}
\end{matrix} &
\begin{matrix}
0 & 0 & -\frac{1}{2}Q^p_{13} & 0 \\
0 & 0 & 0 & 0 \\
-\frac{1}{2}Q^p_{31} & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{matrix} \\ \hline
\begin{matrix}
0 & 0 & -\frac{1}{2}Q^p_{13} & 0 \\
0 & 0 & 0 & 0 \\
-\frac{1}{2}Q^p_{31} & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{matrix} & O \\
\end{array}
\right] +
\delta \left[
\begin{array}{c|c}
2 I & I \\ \hline
I & O
\end{array}
\right]
\end{equation*}
and $[I\; I; I\; I]$.
There exist three types of edges:
\begin{equation*}
\def1.5{1.5}
\left\{\begin{array}{rl}
\text{ (i)} & (1, 2), (2, 3), (3, 4), (1, 4); \\
\text{ (ii)} & (1, 7), (3, 5); \\
\text{(iii)} & (1, 5), (2, 6), (3, 7), (4, 8).
\end{array}\right.
\end{equation*}
The four edges in (i) and the two edges in (ii) are derived from
$N^p_+$ in the upper-left block of the data matrices
and from $N^p_-$ in the upper-right and lower-left blocks, respectively.
The edges in (iii) represent the off-diagonal elements of $[I\; I; I\; I]$ in the new constraint.
In \autoref{fig:example-transformed-sparsity-before},
the cycle formed by the solid lines is bipartite on the vertices $\{1,2,3,4\}$,
and hence its vertices can be divided into two disjoint sets $L_1 = \{1, 3\}$ and $R_1 = \{2, 4\}$.
If we let $L_2 \coloneqq \{6, 8\}$ and $R_2 \coloneqq \{5, 7\}$,
there are no edges between any distinct $i, j$ in $L_1 \cup L_2$,
and the same is true for $R_1 \cup R_2$.
The graph $\bar{G}$ is thus bipartite (\autoref{fig:example-transformed-sparsity-after}).
We can conclude that the SDP relaxation of \eqref{eq:decomposed-hqcqp} is exact
by Corollary~\ref{coro:nonnegative-offdiagonal-connected}.
\end{example}
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\centering
\tikzset{every picture/.style={line width=0.75pt}}
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-0.6,xscale=0.6]
\draw [dash pattern={on 4.5pt off 4.5pt}] (70,60) -- (270,160) ;
\draw [dash pattern={on 4.5pt off 4.5pt}] (70,160) -- (270,60) ;
\draw [dash pattern={on 0.84pt off 2.51pt}] (270,60) -- (270,160) ;
\draw [dash pattern={on 0.84pt off 2.51pt}] (70,60) -- (70,160) ;
\draw [dash pattern={on 0.84pt off 2.51pt}] (170,60) -- (170,160) ;
\draw [dash pattern={on 0.84pt off 2.51pt}] (370,60) -- (370,160) ;
\draw (370,10) -- (70,10) -- (70,60) -- (370,60) -- cycle ;
\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (40,60) .. controls (40,43.43) and (53.43,30) .. (70,30) .. controls (86.57,30) and (100,43.43) .. (100,60) .. controls (100,76.57) and (86.57,90) .. (70,90) .. controls (53.43,90) and (40,76.57) .. (40,60) -- cycle ;
\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (240,60) .. controls (240,43.43) and (253.43,30) .. (270,30) .. controls (286.57,30) and (300,43.43) .. (300,60) .. controls (300,76.57) and (286.57,90) .. (270,90) .. controls (253.43,90) and (240,76.57) .. (240,60) -- cycle ;
\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (140,60) .. controls (140,43.43) and (153.43,30) .. (170,30) .. controls (186.57,30) and (200,43.43) .. (200,60) .. controls (200,76.57) and (186.57,90) .. (170,90) .. controls (153.43,90) and (140,76.57) .. (140,60) -- cycle ;
\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (340,60) .. controls (340,43.43) and (353.43,30) .. (370,30) .. controls (386.57,30) and (400,43.43) .. (400,60) .. controls (400,76.57) and (386.57,90) .. (370,90) .. controls (353.43,90) and (340,76.57) .. (340,60) -- cycle ;
\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (40,160) .. controls (40,143.43) and (53.43,130) .. (70,130) .. controls (86.57,130) and (100,143.43) .. (100,160) .. controls (100,176.57) and (86.57,190) .. (70,190) .. controls (53.43,190) and (40,176.57) .. (40,160) -- cycle ;
\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (240,160) .. controls (240,143.43) and (253.43,130) .. (270,130) .. controls (286.57,130) and (300,143.43) .. (300,160) .. controls (300,176.57) and (286.57,190) .. (270,190) .. controls (253.43,190) and (240,176.57) .. (240,160) -- cycle ;
\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (140,160) .. controls (140,143.43) and (153.43,130) .. (170,130) .. controls (186.57,130) and (200,143.43) .. (200,160) .. controls (200,176.57) and (186.57,190) .. (170,190) .. controls (153.43,190) and (140,176.57) .. (140,160) -- cycle ;
\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (340,160) .. controls (340,143.43) and (353.43,130) .. (370,130) .. controls (386.57,130) and (400,143.43) .. (400,160) .. controls (400,176.57) and (386.57,190) .. (370,190) .. controls (353.43,190) and (340,176.57) .. (340,160) -- cycle ;
\draw (170,60) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt}
\begin{center}
{\fontfamily{pcr}\selectfont 2}
\end{center}\end{minipage}};
\draw (370,60) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt}
\begin{center}
{\fontfamily{pcr}\selectfont 4}
\end{center}\end{minipage}};
\draw (70,60) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt}
\begin{center}
{\fontfamily{pcr}\selectfont 1}
\end{center}\end{minipage}};
\draw (270,60) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt}
\begin{center}
{\fontfamily{pcr}\selectfont 3}
\end{center}\end{minipage}};
\draw (170,160) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt}
\begin{center}
{\fontfamily{pcr}\selectfont 6}
\end{center}\end{minipage}};
\draw (370,160) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt}
\begin{center}
{\fontfamily{pcr}\selectfont 8}
\end{center}\end{minipage}};
\draw (70,160) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt}
\begin{center}
{\fontfamily{pcr}\selectfont 5}
\end{center}\end{minipage}};
\draw (270,160) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt}
\begin{center}
{\fontfamily{pcr}\selectfont 7}
\end{center}\end{minipage}};
\draw (15,60) node [font=\normalsize] [align=left] {\begin{minipage}[lt]{20.4pt}\setlength\topsep{0pt}
\begin{center}
$\displaystyle \x$
\end{center}\end{minipage}};
\draw (15,160) node [font=\normalsize] [align=left] {\begin{minipage}[lt]{20.4pt}\setlength\topsep{0pt}
\begin{center}
$\displaystyle \z$
\end{center}\end{minipage}};
\end{tikzpicture}
\caption{
Vertices are divided into two groups:
the upper vertices correspond to $\x$
while the lower ones correspond to $\z$.
}
\label{fig:example-transformed-sparsity-before}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\textwidth}
\centering
\tikzset{every picture/.style={line width=0.75pt}}
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-0.6,xscale=0.6]
\draw (215,110) -- (315,70) -- (388,125.35) -- (141,53.35) -- cycle ;
\draw [dash pattern={on 4.5pt off 4.5pt}] (115,40) -- (315,140) ;
\draw [dash pattern={on 4.5pt off 4.5pt}] (115,140) -- (315,40) ;
\draw [dash pattern={on 0.84pt off 2.51pt}] (315,40) -- (315,140) ;
\draw [dash pattern={on 0.84pt off 2.51pt}] (115,40) -- (115,140) ;
\draw [dash pattern={on 0.84pt off 2.51pt}] (215,40) -- (215,140) ;
\draw [dash pattern={on 0.84pt off 2.51pt}] (415,40) -- (415,140) ;
\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (85,40) .. controls (85,23.43) and (98.43,10) .. (115,10) .. controls (131.57,10) and (145,23.43) .. (145,40) .. controls (145,56.57) and (131.57,70) .. (115,70) .. controls (98.43,70) and (85,56.57) .. (85,40) -- cycle ;
\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (285,40) .. controls (285,23.43) and (298.43,10) .. (315,10) .. controls (331.57,10) and (345,23.43) .. (345,40) .. controls (345,56.57) and (331.57,70) .. (315,70) .. controls (298.43,70) and (285,56.57) .. (285,40) -- cycle ;
\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (185,40) .. controls (185,23.43) and (198.43,10) .. (215,10) .. controls (231.57,10) and (245,23.43) .. (245,40) .. controls (245,56.57) and (231.57,70) .. (215,70) .. controls (198.43,70) and (185,56.57) .. (185,40) -- cycle ;
\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (385,40) .. controls (385,23.43) and (398.43,10) .. (415,10) .. controls (431.57,10) and (445,23.43) .. (445,40) .. controls (445,56.57) and (431.57,70) .. (415,70) .. controls (398.43,70) and (385,56.57) .. (385,40) -- cycle ;
\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (85,140) .. controls (85,123.43) and (98.43,110) .. (115,110) .. controls (131.57,110) and (145,123.43) .. (145,140) .. controls (145,156.57) and (131.57,170) .. (115,170) .. controls (98.43,170) and (85,156.57) .. (85,140) -- cycle ;
\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (285,140) .. controls (285,123.43) and (298.43,110) .. (315,110) .. controls (331.57,110) and (345,123.43) .. (345,140) .. controls (345,156.57) and (331.57,170) .. (315,170) .. controls (298.43,170) and (285,156.57) .. (285,140) -- cycle ;
\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (185,140) .. controls (185,123.43) and (198.43,110) .. (215,110) .. controls (231.57,110) and (245,123.43) .. (245,140) .. controls (245,156.57) and (231.57,170) .. (215,170) .. controls (198.43,170) and (185,156.57) .. (185,140) -- cycle ;
\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (385,140) .. controls (385,123.43) and (398.43,110) .. (415,110) .. controls (431.57,110) and (445,123.43) .. (445,140) .. controls (445,156.57) and (431.57,170) .. (415,170) .. controls (398.43,170) and (385,156.57) .. (385,140) -- cycle ;
\draw (215,40) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt}
\begin{center}
{\fontfamily{pcr}\selectfont 6}
\end{center}\end{minipage}};
\draw (415,40) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt}
\begin{center}
{\fontfamily{pcr}\selectfont 8}
\end{center}\end{minipage}};
\draw (115,40) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt}
\begin{center}
{\fontfamily{pcr}\selectfont 1}
\end{center}\end{minipage}};
\draw (315,40) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt}
\begin{center}
{\fontfamily{pcr}\selectfont 3}
\end{center}\end{minipage}};
\draw (215,140) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt}
\begin{center}
{\fontfamily{pcr}\selectfont 2}
\end{center}\end{minipage}};
\draw (415,140) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt}
\begin{center}
{\fontfamily{pcr}\selectfont 4}
\end{center}\end{minipage}};
\draw (115,140) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt}
\begin{center}
{\fontfamily{pcr}\selectfont 5}
\end{center}\end{minipage}};
\draw (315,140) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt}
\begin{center}
{\fontfamily{pcr}\selectfont 7}
\end{center}\end{minipage}};
\draw (30,40) node [font=\normalsize] [align=left] {\begin{minipage}[lt]{54.4pt}\setlength\topsep{0pt}
\begin{center}
$\displaystyle L_{1} \cup L_{2}$
\end{center}\end{minipage}};
\draw (30,140) node [font=\normalsize] [align=left] {\begin{minipage}[lt]{54.4pt}\setlength\topsep{0pt}
\begin{center}
$\displaystyle R_{1} \cup R_{2}$
\end{center}\end{minipage}};
\end{tikzpicture}
\caption{
Vertices are reorganized to show the bipartite structure of the graph.
}
\label{fig:example-transformed-sparsity-after}
\end{subfigure}
\caption{
Aggregated sparsity pattern graph of the transformed example.
The solid lines and the dashed lines come from $N^p_+$ and $N^p_-$, respectively.
The dotted lines are for the new constraint $\|\x + \z\|^2 \leq 0$.
}
\label{fig:example-transformed-sparsity}
\end{figure}
Similarly, the SDP relaxation of any QCQP that satisfies Theorem~\ref{thm:sojoudi-theorem}
can be shown to be exact by the transformation.
Therefore,
Theorem~\ref{thm:system-based-condition-connected} covers a wider class of QCQPs than Theorem~\ref{thm:sojoudi-theorem}.
We prove this assertion in the following.
\begin{prop}
\label{prop:weaker-than-sojoudi-connected}
Suppose that Assumption~\ref{assum:new-assumption} holds,
the aggregated sparsity pattern graph $G(\VC, \EC)$ of \eqref{eq:hqcqp}
is connected,
and for all $(i, j) \in \EC$, $Q^0_{ij} \neq 0$.
If \eqref{eq:hqcqp} satisfies the assumption of Theorem~\ref{thm:sojoudi-theorem},
then \eqref{eq:hqcqp} also satisfies that of Corollary~\ref{coro:nonnegative-offdiagonal-connected}.
In addition, the exactness of its SDP relaxation
can be proved by Theorem~\ref{thm:system-based-condition-connected}.
\end{prop}
\begin{proof}
Let $\bar{G}(\bar{\VC}, \bar{\EC})$ be
the aggregated sparsity pattern graph of \eqref{eq:decomposed-hqcqp}.
Since the number of variables is $2n$, $\bar{\VC} = [2n]$ holds.
The edges in $\bar{G}$ are:
\begin{equation*}
\def1.5{1.5}
\left\{\begin{array}{rll}
\text{ (i)} & (i, j) & \text{for $i, j \in \VC$ such that $\sigma_{ij} = +1$}, \\
\text{ (ii)} & (i, j + n), (j, i + n) &
\text{for $i, j \in \VC$ such that $\sigma_{ij} = -1$}, \\
\text{(iii)} & (i, i + n) & \text{for $i \in \VC$}.
\end{array}\right.
\end{equation*}
%
Note that
no edges exist among the vertices in $\{n+1, \ldots, 2n\}$.
By the definition of \eqref{eq:decomposed-hqcqp},
an edge $(i, j)$ with $\sigma_{ij} = -1$ in $G$
is decomposed into two paths with positive signs in $\bar{G}$:
(a) the edges $(j, i + n)$ and $(i + n, i)$;
(b) the edges $(i, j + n)$ and $(j + n, j)$, as shown in \autoref{fig:transform-minus-edge-sign}.
Since $G$ is connected, so is the graph $\bar{G}$.
Recall that all off-diagonal elements of
the data matrices in \eqref{eq:decomposed-hqcqp} are nonnegative, since both $N^p_+$ and $N^p_-$ are nonnegative matrices.
In particular, for each $(i, j) \in \bar{\EC}$,
the $(i, j)$th element of the matrix in the objective function is not only nonnegative but also positive by assumption.
Thus, to apply Corollary~\ref{coro:nonnegative-offdiagonal-connected}, it remains to show that $\bar{G}$ is bipartite.
Assume on the contrary that there exists an odd cycle $\bar{\CC}$ in $\bar{G}$.
Let $\bar{\UC} \subseteq [n+1, 2n]$ denote the set of vertices of $\bar{\CC}$ that lie in $[n+1, 2n]$.
As illustrated in \autoref{fig:transform-minus-edge-sign},
any vertex $v \coloneqq i + n \in \bar{\UC}$ is adjacent in $\bar{\CC}$ to $i$ and to some $j \in \VC$.
Hence for every vertex $v \in \bar{\UC}$,
by removing the edges $(i, v)$ and $(v, j)$ from $\bar{\CC}$ and
adding the edge $(i, j)$ with the negative sign to $\bar{\CC}$,
we obtain a new cycle $\CC$ in $G$.
Since $2|\bar{\UC}|$ edges are removed and $|\bar{\UC}|$ edges are added in this procedure,
it follows that $|\CC| = |\bar{\CC}| - 2|\bar{\UC}| + |\bar{\UC}| = |\bar{\CC}| - |\bar{\UC}|$.
\autoref{fig:removing-adding-cycle-edges} displays a case for $|\bar{\UC}| = 2$.
Thus, if $|\bar{\UC}|$ is even (odd), $|\CC|$ is odd (resp., even),
hence, by \eqref{eq:sign-constraint-simple-cycle} in Theorem~\ref{thm:sojoudi-theorem},
the number of negative edges in $\CC$ must be odd (resp., even).
However, the number of negative edges in $\CC$ is equal to $|\bar{\UC}|$
since $\bar{\CC}$ has no negative edges and all the additional edges
in the conversion from $\bar{\CC}$ to $\CC$
are negative.
This is a contradiction.
Therefore, there are no odd cycles in $\bar{G}$,
which implies $\bar{G}$ is bipartite.
Since \eqref{eq:decomposed-hqcqp} satisfies the assumptions of Corollary~\ref{coro:nonnegative-offdiagonal-connected},
it also satisfies the assumptions of Theorem~\ref{thm:system-based-condition-connected}.
\end{proof}
\begin{figure}[t]
\centering
\tikzset{every picture/.style={line width=0.75pt}}
\begin{minipage}[t]{0.54\textwidth}
\centering
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-0.6,xscale=0.6]
\draw [draw opacity=0] (324.92,109.3) .. controls (318.07,127.62) and (305,140) .. (290,140) .. controls (267.91,140) and (250,113.14) .. (250,80) .. controls (250,46.86) and (267.91,20) .. (290,20) .. controls (305,20) and (318.07,32.38) .. (324.92,50.7) -- (290,80) -- cycle ; \draw (324.92,109.3) .. controls (318.07,127.62) and (305,140) .. (290,140) .. controls (267.91,140) and (250,113.14) .. (250,80) .. controls (250,46.86) and (267.91,20) .. (290,20) .. controls (305,20) and (318.07,32.38) .. (324.92,50.7) ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 1.69pt off 2.76pt}] (324.92,50.7) -- (404.9,50.7) ;
\draw [draw opacity=0][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (318.92,50.7) .. controls (318.92,47.39) and (321.6,44.7) .. (324.92,44.7) .. controls (328.23,44.7) and (330.92,47.39) .. (330.92,50.7) .. controls (330.92,54.02) and (328.23,56.7) .. (324.92,56.7) .. controls (321.6,56.7) and (318.92,54.02) .. (318.92,50.7) -- cycle ;
\draw [draw opacity=0][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (318.92,109.3) .. controls (318.92,105.98) and (321.6,103.3) .. (324.92,103.3) .. controls (328.23,103.3) and (330.92,105.98) .. (330.92,109.3) .. controls (330.92,112.61) and (328.23,115.3) .. (324.92,115.3) .. controls (321.6,115.3) and (318.92,112.61) .. (318.92,109.3) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 1.69pt off 2.76pt}] (324.92,109.3) -- (404.9,109.3) ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 5.63pt off 4.5pt}] (324.92,50.7) -- (404.9,109.3) ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 5.63pt off 4.5pt}] (324.92,109.3) -- (404.9,50.7) ;
\draw [draw opacity=0][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (398.9,50.7) .. controls (398.9,47.39) and (401.59,44.7) .. (404.9,44.7) .. controls (408.21,44.7) and (410.9,47.39) .. (410.9,50.7) .. controls (410.9,54.01) and (408.21,56.7) .. (404.9,56.7) .. controls (401.59,56.7) and (398.9,54.01) .. (398.9,50.7) -- cycle ;
\draw [draw opacity=0][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (398.9,109.3) .. controls (398.9,105.98) and (401.59,103.3) .. (404.9,103.3) .. controls (408.21,103.3) and (410.9,105.98) .. (410.9,109.3) .. controls (410.9,112.61) and (408.21,115.3) .. (404.9,115.3) .. controls (401.59,115.3) and (398.9,112.61) .. (398.9,109.3) -- cycle ;
\draw [draw opacity=0] (114.92,109.3) .. controls (108.07,127.62) and (95,140) .. (80,140) .. controls (57.91,140) and (40,113.14) .. (40,80) .. controls (40,46.86) and (57.91,20) .. (80,20) .. controls (95,20) and (108.07,32.38) .. (114.92,50.7) -- (80,80) -- cycle ; \draw (114.92,109.3) .. controls (108.07,127.62) and (95,140) .. (80,140) .. controls (57.91,140) and (40,113.14) .. (40,80) .. controls (40,46.86) and (57.91,20) .. (80,20) .. controls (95,20) and (108.07,32.38) .. (114.92,50.7) ;
\draw [draw opacity=0][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (108.92,50.7) .. controls (108.92,47.39) and (111.6,44.7) .. (114.92,44.7) .. controls (118.23,44.7) and (120.92,47.39) .. (120.92,50.7) .. controls (120.92,54.02) and (118.23,56.7) .. (114.92,56.7) .. controls (111.6,56.7) and (108.92,54.02) .. (108.92,50.7) -- cycle ;
\draw [draw opacity=0][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (108.92,109.3) .. controls (108.92,105.98) and (111.6,103.3) .. (114.92,103.3) .. controls (118.23,103.3) and (120.92,105.98) .. (120.92,109.3) .. controls (120.92,112.61) and (118.23,115.3) .. (114.92,115.3) .. controls (111.6,115.3) and (108.92,112.61) .. (108.92,109.3) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][line width=1.5] (114.92,50.7) -- (114.92,109.3) ;
\draw (150,80) -- (167.5,60) -- (167.5,70) -- (202.5,70) -- (202.5,60) -- (220,80) -- (202.5,100) -- (202.5,90) -- (167.5,90) -- (167.5,100) -- cycle ;
\draw (290,157.5) node [font=\normalsize] [align=left] {\begin{minipage}[lt]{68pt}\setlength\topsep{0pt}
\begin{center}
$\displaystyle \mathcal{C} \setminus \{( i,j)\}$
\end{center}\end{minipage}};
\draw (240,30) node [font=\large] [align=left] {\begin{minipage}[lt]{27.2pt}\setlength\topsep{0pt}
\begin{center}
$\displaystyle \overline{G}$
\end{center}\end{minipage}};
\draw (404.9,34.7) node [font=\normalsize] [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt}
\begin{center}
$\displaystyle i+n$
\end{center}\end{minipage}};
\draw (404.9,129.3) node [font=\normalsize] [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt}
\begin{center}
$\displaystyle j+n$
\end{center}\end{minipage}};
\draw (334.92,34.7) node [font=\normalsize] [align=left] {\begin{minipage}[lt]{13.6pt}\setlength\topsep{0pt}
\begin{center}
$\displaystyle i$
\end{center}\end{minipage}};
\draw (334.92,129.3) node [font=\normalsize] [align=left] {\begin{minipage}[lt]{13.6pt}\setlength\topsep{0pt}
\begin{center}
$\displaystyle j$
\end{center}\end{minipage}};
\draw (80,157.5) node [font=\normalsize] [align=left] {\begin{minipage}[lt]{68pt}\setlength\topsep{0pt}
\begin{center}
$\displaystyle \mathcal{C}$
\end{center}\end{minipage}};
\draw (30,30) node [font=\large] [align=left] {\begin{minipage}[lt]{27.2pt}\setlength\topsep{0pt}
\begin{center}
$\displaystyle G$
\end{center}\end{minipage}};
\draw (124.92,34.7) node [font=\normalsize] [align=left] {\begin{minipage}[lt]{13.6pt}\setlength\topsep{0pt}
\begin{center}
$\displaystyle i$
\end{center}\end{minipage}};
\draw (124.92,129.3) node [font=\normalsize] [align=left] {\begin{minipage}[lt]{13.6pt}\setlength\topsep{0pt}
\begin{center}
$\displaystyle j$
\end{center}\end{minipage}};
\draw (100,80) node [font=\large] [align=left] {\begin{minipage}[lt]{20.4pt}\setlength\topsep{0pt}
\begin{center}
$\displaystyle -$
\end{center}\end{minipage}};
\end{tikzpicture}
\caption{
An edge with the negative sign.
If the cycle $\CC$ has the edge $(i, j)$ with $\sigma_{ij} = -1$,
then $(i, j)$ is decomposed into two paths:
(a) $(j, i+n)$ and $(i+n, i)$ via the vertex $i+n$;
(b) $(i, j+n)$ and $(j+n, j)$ via the vertex $j+n$.
}
\label{fig:transform-minus-edge-sign}
\end{minipage}
\hfill
\begin{minipage}[t]{0.42\textwidth}
\centering
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-0.725,xscale=0.725]
\draw [draw opacity=0][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (45,10) .. controls (45,7.24) and (47.24,5) .. (50,5) .. controls (52.76,5) and (55,7.24) .. (55,10) .. controls (55,12.76) and (52.76,15) .. (50,15) .. controls (47.24,15) and (45,12.76) .. (45,10) -- cycle ;
\draw [draw opacity=0][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (45,110) .. controls (45,107.24) and (47.24,105) .. (50,105) .. controls (52.76,105) and (55,107.24) .. (55,110) .. controls (55,112.76) and (52.76,115) .. (50,115) .. controls (47.24,115) and (45,112.76) .. (45,110) -- cycle ;
\draw [draw opacity=0][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (45,40) .. controls (45,37.24) and (47.24,35) .. (50,35) .. controls (52.76,35) and (55,37.24) .. (55,40) .. controls (55,42.76) and (52.76,45) .. (50,45) .. controls (47.24,45) and (45,42.76) .. (45,40) -- cycle ;
\draw [draw opacity=0][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (45,80) .. controls (45,77.24) and (47.24,75) .. (50,75) .. controls (52.76,75) and (55,77.24) .. (55,80) .. controls (55,82.76) and (52.76,85) .. (50,85) .. controls (47.24,85) and (45,82.76) .. (45,80) -- cycle ;
\draw (10,10) -- (50,10) -- (90,25) -- (50,40) -- (50,80) -- (90,95) -- (50,110) -- (10,110) -- (10,10);
\draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (45,64) -- (55,59) -- (55,55) -- (45,60) -- cycle ;
\draw (45,60) -- (55,55) ;
\draw (45,64) -- (55,59) ;
\draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (5,64) -- (15,59) -- (15,55) -- (5,60) -- cycle ;
\draw (5,60) -- (15,55) ;
\draw (5,64) -- (15,59) ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (85,25) .. controls (85,22.24) and (87.24,20) .. (90,20) .. controls (92.76,20) and (95,22.24) .. (95,25) .. controls (95,27.76) and (92.76,30) .. (90,30) .. controls (87.24,30) and (85,27.76) .. (85,25) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (85,95) .. controls (85,92.24) and (87.24,90) .. (90,90) .. controls (92.76,90) and (95,92.24) .. (95,95) .. controls (95,97.76) and (92.76,100) .. (90,100) .. controls (87.24,100) and (85,97.76) .. (85,95) -- cycle ;
\draw [draw opacity=0][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (125,10) .. controls (125,7.24) and (127.24,5) .. (130,5) .. controls (132.76,5) and (135,7.24) .. (135,10) .. controls (135,12.76) and (132.76,15) .. (130,15) .. controls (127.24,15) and (125,12.76) .. (125,10) -- cycle ;
\draw [draw opacity=0][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (125,110) .. controls (125,107.24) and (127.24,105) .. (130,105) .. controls (132.76,105) and (135,107.24) .. (135,110) .. controls (135,112.76) and (132.76,115) .. (130,115) .. controls (127.24,115) and (125,112.76) .. (125,110) -- cycle ;
\draw [draw opacity=0][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (125,40) .. controls (125,37.24) and (127.24,35) .. (130,35) .. controls (132.76,35) and (135,37.24) .. (135,40) .. controls (135,42.76) and (132.76,45) .. (130,45) .. controls (127.24,45) and (125,42.76) .. (125,40) -- cycle ;
\draw [draw opacity=0][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (125,80) .. controls (125,77.24) and (127.24,75) .. (130,75) .. controls (132.76,75) and (135,77.24) .. (135,80) .. controls (135,82.76) and (132.76,85) .. (130,85) .. controls (127.24,85) and (125,82.76) .. (125,80) -- cycle ;
\draw (130,10) -- (170,25) -- (130,40) ;
\draw (130,80) -- (170,95) -- (130,110) ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (165,25) .. controls (165,22.24) and (167.24,20) .. (170,20) .. controls (172.76,20) and (175,22.24) .. (175,25) .. controls (175,27.76) and (172.76,30) .. (170,30) .. controls (167.24,30) and (165,27.76) .. (165,25) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (165,95) .. controls (165,92.24) and (167.24,90) .. (170,90) .. controls (172.76,90) and (175,92.24) .. (175,95) .. controls (175,97.76) and (172.76,100) .. (170,100) .. controls (167.24,100) and (165,97.76) .. (165,95) -- cycle ;
\draw [draw opacity=0][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (215,10) .. controls (215,7.24) and (217.24,5) .. (220,5) .. controls (222.76,5) and (225,7.24) .. (225,10) .. controls (225,12.76) and (222.76,15) .. (220,15) .. controls (217.24,15) and (215,12.76) .. (215,10) -- cycle ;
\draw [draw opacity=0][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (215,110) .. controls (215,107.24) and (217.24,105) .. (220,105) .. controls (222.76,105) and (225,107.24) .. (225,110) .. controls (225,112.76) and (222.76,115) .. (220,115) .. controls (217.24,115) and (215,112.76) .. (215,110) -- cycle ;
\draw [draw opacity=0][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (215,40) .. controls (215,37.24) and (217.24,35) .. (220,35) .. controls (222.76,35) and (225,37.24) .. (225,40) .. controls (225,42.76) and (222.76,45) .. (220,45) .. controls (217.24,45) and (215,42.76) .. (215,40) -- cycle ;
\draw [draw opacity=0][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (215,80) .. controls (215,77.24) and (217.24,75) .. (220,75) .. controls (222.76,75) and (225,77.24) .. (225,80) .. controls (225,82.76) and (222.76,85) .. (220,85) .. controls (217.24,85) and (215,82.76) .. (215,80) -- cycle ;
\draw (220,10) -- (220,40) ;
\draw (220,80) -- (220,110) ;
\draw [draw opacity=0][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (320,10) .. controls (320,7.24) and (322.24,5) .. (325,5) .. controls (327.76,5) and (330,7.24) .. (330,10) .. controls (330,12.76) and (327.76,15) .. (325,15) .. controls (322.24,15) and (320,12.76) .. (320,10) -- cycle ;
\draw [draw opacity=0][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (320,110) .. controls (320,107.24) and (322.24,105) .. (325,105) .. controls (327.76,105) and (330,107.24) .. (330,110) .. controls (330,112.76) and (327.76,115) .. (325,115) .. controls (322.24,115) and (320,112.76) .. (320,110) -- cycle ;
\draw [draw opacity=0][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (320,40) .. controls (320,37.24) and (322.24,35) .. (325,35) .. controls (327.76,35) and (330,37.24) .. (330,40) .. controls (330,42.76) and (327.76,45) .. (325,45) .. controls (322.24,45) and (320,42.76) .. (320,40) -- cycle ;
\draw [draw opacity=0][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (320,80) .. controls (320,77.24) and (322.24,75) .. (325,75) .. controls (327.76,75) and (330,77.24) .. (330,80) .. controls (330,82.76) and (327.76,85) .. (325,85) .. controls (322.24,85) and (320,82.76) .. (320,80) -- cycle ;
\draw (285,10) -- (285,110) -- (325,110) -- (325,10) -- (285,10) ;
\draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (320,64) -- (330,59) -- (330,55) -- (320,60) -- cycle ;
\draw (320,60) -- (330,55) ;
\draw (320,64) -- (330,59) ;
\draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (280,64) -- (290,59) -- (290,55) -- (280,60) -- cycle ;
\draw (280,60) -- (290,55) ;
\draw (280,64) -- (290,59) ;
\draw (110,60) node [align=left] {$\displaystyle -$};
\draw (195,60) node [align=left] {$\displaystyle +$};
\draw (250,60) node [align=left] {$\displaystyle =$};
\draw (50,135) node [align=left] {$\displaystyle | \overline{\mathcal{C}}| $};
\draw (150,135) node [align=left] {$\displaystyle 2| \overline{U}| $};
\draw (220,135) node [align=left] {$\displaystyle | \overline{U}| $};
\draw (305,135) node [align=left] {$\displaystyle | \mathcal{C}| $};
\end{tikzpicture}
\caption{
Removing and adding edges, and counting the number of edges when $|\bar{\UC}| = 2$.
The black circles are the vertices in $[n]$ while the white circles represent those in $[n+1, 2n]$.
}
\label{fig:removing-adding-cycle-edges}
\end{minipage}
\end{figure}
\noindent
Proposition~\ref{prop:weaker-than-sojoudi-connected} is proved
under the assumptions that: (i) $G$ is connected; (ii) for all $(i, j) \in \EC$, $Q^0_{ij} \neq 0$.
These assumptions may seem strong;
however, we will show that they can be removed
using
Corollary~\ref{coro:nonnegative-offdiagonal} in section~\ref{sec:perturbation}.
At the end of this section, we apply Proposition~\ref{prop:weaker-than-sojoudi-connected} to
a class of QCQPs where all the off-diagonal elements of every matrix $Q^0, \ldots, Q^m$ are nonpositive.
We call QCQPs in this class nonpositive off-diagonal QCQPs.
It is well-known that their SDP relaxations are exact~\cite{kim2003exact}.
By applying the same transformation as above,
we obtain \eqref{eq:decomposed-hqcqp} with $N^p_+ = O$ for every $p$
since no positive off-diagonal elements exist.
The diagonal elements of $D^p$ do not generate edges in the aggregated sparsity pattern graph;
thus, the data matrices in \eqref{eq:decomposed-hqcqp} induce a bipartite sparsity pattern graph.
Therefore, the SDP relaxation is exact.
This can be regarded as an alternative proof for \cite{kim2003exact} and Corollary~\ref{coro:sojoudi-corollary1}\ref{cond:sojoudi-arbitrary}.
\begin{coro} \label{coro:nonpositive-offdiagonal-connected}
Under Assumption~\ref{assum:new-assumption},
the SDP relaxation of a nonpositive off-diagonal QCQP is exact
if the aggregated sparsity pattern graph $G(\VC, \EC)$ of \eqref{eq:hqcqp} is connected
and $Q^0_{ij} < 0$ for all $(i, j) \in \EC$.
\end{coro}
\section{Perturbation for disconnected aggregated sparsity pattern graph} \label{sec:perturbation}
The connectivity of $G$ has played an important role in
our main theorem in \autoref{sec:main}.
For QCQPs with sparse data matrices,
the connectivity assumption may be difficult to satisfy.
In this section,
we replace the assumption of a connected graph
with a slightly different assumption (Assumption~\ref{assum:new-assumption-strong}),
and present a new condition for the exact SDP relaxation.
The following assumption is slightly stronger than Assumption~\ref{assum:new-assumption}
in the sense that it requires the existence of a feasible interior point of \eqref{eq:hsdrd}.
However, it can be satisfied in practice without much difficulty.
\begin{assum} \label{assum:new-assumption-strong}
The following two conditions hold:
\begin{enumerate}[label=(\roman*)]
\item \label{assum:new-assumption-strong-1}
the sets of optimal solutions for \eqref{eq:hsdr} and \eqref{eq:hsdrd} are nonempty; and
\item \label{assum:new-assumption-strong-2}
at least one of the following two conditions holds:
\begin{enumerate}[label=(\alph*)]
\item \label{assum:new-assumption-strong-2-1}
the feasible set of \eqref{eq:hsdr} is bounded; or
\item \label{assum:new-assumption-strong-2-2}
for \eqref{eq:hsdrd},
the set of optimal solutions is bounded,
and the interior of the feasible set is nonempty.
\end{enumerate}
\end{enumerate}
\end{assum}
We now perturb the objective function of a given QCQP
to remove the connectivity of $G$ from Theorem~\ref{thm:system-based-condition-connected}.
Let $P \in \SymMat^n$ be an $n \times n$ nonzero matrix,
and let $\varepsilon > 0$ denote the magnitude of the perturbation.
An $\varepsilon$-perturbed QCQP is described as follows:
\begin{equation}
\label{eq:hqcqp-perturbed} \tag{$\PC^\varepsilon$}
\begin{array}{rl}
\min & \trans{\x} \left(Q^0 + \varepsilon P\right) \x \\
\subto & \trans{\x} Q^p \x \le b_p, \quad p \in [m].
\end{array}
\end{equation}
To generalize $S(\y)$ for the $\varepsilon$-perturbed QCQP,
we define
\begin{equation*}
S(\y;\, \varepsilon)
\coloneqq Q^0 + \varepsilon P + \sum_{p = 1}^m y_pQ^p
= S(\y) + \varepsilon P.
\end{equation*}
\subsection{Perturbation techniques}
\label{ssec:perturbation-techniques}
Under the condition that the feasible set of a QCQP is bounded,
Azuma et al.~\cite[Lemma 3.3]{Azuma2021} proved that the SDP relaxation is exact
if a sequence of perturbed QCQPs that satisfy the exactness condition converges to the original one.
This result was used to eliminate the requirement that
the aggregated sparsity pattern graph is connected from their main theorem.
The following lemmas are extensions of the results in \cite{Azuma2021} under a weaker assumption.
\begin{lemma} \label{lemma:perturbation-technique-primal}
Suppose that Assumption
~\ref{assum:new-assumption-strong} {\it \ref{assum:new-assumption-strong-1}} and
\ref{assum:new-assumption-strong-2}\ref{assum:new-assumption-strong-2-1} hold.
Let $P \in \SymMat^n$ be a nonzero matrix, and
$\{\varepsilon_t\}_{t = 1}^\infty$ be a monotonically decreasing sequence
such that $\lim_{t \to \infty} \varepsilon_t = 0$.
If the SDP relaxation of the $\varepsilon_t$-perturbed problem
$(\PC^{\varepsilon_t})$
is exact for all $t = 1, 2, \ldots$,
then the SDP relaxation of the original problem \eqref{eq:hqcqp} is also exact.
\end{lemma}
\begin{proof}
Let $A$ and $B$ be
the feasible sets of \eqref{eq:hqcqp} and \eqref{eq:hsdr}, respectively:
%
\begin{align*}
A \coloneqq & \left\{
\x \in \Real^n \,\middle|\,
\ip{Q^p}{(\x\trans{\x})} \leq b_p, \quad p = 1, \ldots, m\right\}, \\
B \coloneqq & \left\{
X \in \SymMat_+^n \,\middle|\,
\ip{Q^p}{X} \leq b_p, \quad p = 1, \ldots, m\right\}.
\end{align*}
%
Note that $B$ is a compact set by the assumption.
The intersection of $B$ and the set of rank-1 matrices
\begin{align*}
B_1
&\coloneqq B \cap \left\{X \in \SymMat^n \,\middle|\, \rank(X) \leq 1\right\} \\
&= \left\{
X \succeq O \,\middle|\,
\rank(X) \leq 1,\;
\ip{Q^p}{X} \leq b_p, \; p = 1, \ldots, m\right\}
\end{align*}
is also a compact set since $\left\{X \in \SymMat^n \,\middle|\, \rank(X) \leq 1\right\}$ is closed.
There exists a bijection $f: A \to B_1$ given by $f(\x) = \x\trans{\x}$;
since $\|f(\x)\| = \|\x\|^2$, the closedness and boundedness of $B_1$ carry over to $A$, so $A$ is also a compact set.
By an argument similar to the proof of \cite[Lemma 3.3]{Azuma2021},
we obtain the desired result.
\end{proof}
\begin{lemma} \label{lemma:perturbation-technique-dual}
Suppose that Assumption~\ref{assum:new-assumption-strong} {\it \ref{assum:new-assumption-strong-1}} and
\ref{assum:new-assumption-strong-2}\ref{assum:new-assumption-strong-2-2} hold.
Let $P \in \SymMat^n$ be a nonzero negative semidefinite matrix, and
$\{\varepsilon_t\}_{t = 1}^\infty$ be a monotonically decreasing sequence
such that $\lim_{t \to \infty} \varepsilon_t = 0$.
If the SDP relaxation of the $\varepsilon_t$-perturbed problem
$(\PC^{\varepsilon_t})$
is exact for all $t = 1, 2, \ldots$,
then the SDP relaxation of the original problem \eqref{eq:hqcqp} is also exact.
\end{lemma}
\begin{proof}
Let $\Gamma \coloneqq \left\{\y \geq \0 \,\middle|\, S(\y) \succeq O\right\}$
be the feasible set of \eqref{eq:hsdrd}.
Let $(\DC_R^{\varepsilon})$ denote
the dual of the SDP relaxation for $\varepsilon$-perturbed QCQP \eqref{eq:hqcqp-perturbed},
and define $\Gamma(\varepsilon) \coloneqq \left\{\y \geq \0 \,\middle|\, S(\y;\, \varepsilon) \succeq O\right\}$
as the feasible set of $(\DC_R^{\varepsilon})$.
%
Since $P$ is negative semidefinite, we have $S(\y;\, \varepsilon_1) \preceq S(\y;\, \varepsilon_2)$
for any $\y \geq \0$ and $\varepsilon_1 > \varepsilon_2 > 0$, which indicates
a monotonic structure of the sequence $\left\{\Gamma(\varepsilon_t)\right\}_{t=1}^\infty$:
\begin{equation*}
\Gamma = \Gamma(0) \supseteq \cdots \supseteq \Gamma(\varepsilon_{t+1})
\supseteq \Gamma(\varepsilon_t) \supseteq \cdots.
\end{equation*}
From Assumption~\ref{assum:new-assumption-strong}{\it\ref{assum:new-assumption-strong-2}\ref{assum:new-assumption-strong-2-2}},
there exists a point $\bar{\y} \in \Gamma$ such that $S(\bar{\y}) \succ O$.
Since each $\Gamma(\varepsilon_t)$ is a closed set and $\lim_{t \to \infty} \varepsilon_t = 0$,
there exists an integer $T$ such that
$S(\bar{\y}; \varepsilon_T) \succ O$. In addition, it holds that
$S(\bar{\y}; \varepsilon_t) \succeq S(\bar{\y}; \varepsilon_T)$ for $t \ge T$.
Let $v_t^*$ and $B^*(\varepsilon_t)$ be the optimal value and
the set of the corresponding optimal solutions of $(\PC^{\varepsilon_t})$, respectively.
From the assumptions that $(\PC)$ has a feasible point
and $P$ is negative semidefinite,
there is an upper bound $\bar{v}$ such that $v_t^* \le \bar{v}$ for any $t$.
Therefore, it holds that, for any $t \ge T$,
\begin{align*}
B^*(\varepsilon_t)
&= \left\{
X \in \SymMat^n \,\middle|\,
X \succeq O,\;
\ip{(Q^0 + \varepsilon_t P)}{X} = v_t^*,\;
\ip{Q^p}{X} \leq b_p \; \text{for all $p \in [m]$}
\right\} \\
& \subseteq \left\{
X \in \SymMat^n \,\middle|\,
X \succeq O,\;
\ip{\left(Q^0 + \varepsilon_t P + \sum_{p=1}^m \bar{y}_pQ^p\right)}{X} \leq v_t^* + \trans{\bar{\y}}\b
\right\} \\
&= \left\{
X \in \SymMat^n \,\middle|\,
X \succeq O,\;
\ip{S(\bar{\y};\, \varepsilon_t)}{X} \leq v_t^* + \trans{\bar{\y}}\b
\right\}, \\
&\subseteq \left\{
X \in \SymMat^n \,\middle|\,
X \succeq O,\;
\ip{S(\bar{\y};\, \varepsilon_T)}{X} \leq \bar{v} + \trans{\bar{\y}}\b
\right\},
\end{align*}
which implies $\bigcup_{t=T}^\infty \; B^*(\varepsilon_t)$ is bounded
since $S(\bar{\y};\, \varepsilon_T) \succ O$.
%
With the exact SDP relaxation of the perturbed problems and strong duality,
we can consider $X^t \in B^*(\varepsilon_t)$, a rank-1 solution of the primal SDP relaxation,
and $\y^t \in \Gamma(\varepsilon_t)$, an optimal solution of $(\DC_R^{\varepsilon_t})$
satisfying $X^t S(\y^t;\, \varepsilon_t) = O$.
We define a closed set as
\begin{equation*}
U \coloneqq \closure\left(\bigcup_{t=T}^\infty \; B^*(\varepsilon_t)\right)
\end{equation*}
so that the sequence $\{X^t\}_{t=T}^\infty \subseteq U$.
Since $\bigcup_{t=T}^\infty \; B^*(\varepsilon_t)$ is bounded, the set $U$ is a compact set.
%
As the sequence has an accumulation point, we let
$X^\mathrm{lim} \coloneqq \lim_{t \to \infty} X^t \in U$
by taking an appropriate subsequence from $\{X^t \,|\, t \ge T\}$.
Moreover, since $\bigcup_{t=T}^\infty \; B^*(\varepsilon_t)$ is included in the feasible set of \eqref{eq:hsdr},
its closure $U$ is also in the same set,
which implies that $X^\mathrm{lim}$ is an at most rank-1 feasible point of \eqref{eq:hsdr}.
Finally, we show the optimality of $X^\mathrm{lim}$ for \eqref{eq:hsdr}.
We assume that $\bar{X}$ is a feasible point of \eqref{eq:hsdr}
such that $\ip{Q^0}{\bar{X}} < \ip{Q^0}{X^\mathrm{lim}}$
and derive a contradiction.
Since $\bigcup_{t=T}^\infty \; B^*(\varepsilon_t)$ is bounded,
there is a sufficiently large $M$ such that
$\| \bar{X} \| \le M$ and $\| X^{t} \| \le M$ for all $t \geq T$.
%
Let $\delta = \ip{Q^0}{X^\mathrm{lim}} - \ip{Q^0}{\bar{X}} > 0$.
Since
$X^\mathrm{lim} = \lim_{t \to \infty} X^t $ and
$\lim_{t \to \infty} \varepsilon_t = 0$,
we can find $\hat{T} \ge T$ such that
$|Q^0 \bullet (X^\mathrm{lim} - X^{\hat{T}})| \le \frac{\delta}{4}$
and $\varepsilon_{\hat{T}} \le \frac{\delta}{8 \|P\| M }$.
Since $\bar{X}$ and $X^{\hat{T}}$ are feasible for $(\PC^{\varepsilon_{\hat{T}}})$,
$\frac{\bar{X} +X^{\hat{T}}}{2}$ is also feasible
for $(\PC^{\varepsilon_{\hat{T}}})$.
Thus, we have
\begin{align}
& \ \left(Q^0 + \varepsilon_{\hat{T}} P\right) \bullet \left(\frac{\bar{X} +X^{\hat{T}}}{2}\right)
- \left(Q^0 + \varepsilon_{\hat{T}} P\right) \bullet X^{\hat{T}} \\
= & \ \frac{1}{2}\left(Q^0 + \varepsilon_{\hat{T}} P\right) \bullet \left(\bar{X} - X^{\hat{T}}\right) \\
= & \ \frac{1}{2} Q^0 \bullet \left(\bar{X} - X^{\mathrm{lim}}\right)
+ \frac{1}{2} Q^0 \bullet \left(X^{\mathrm{lim}} - X^{\hat{T}}\right)
+ \frac{1}{2} \varepsilon_{\hat{T}} P \bullet \left(\bar{X} - X^{\hat{T}}\right) \\
\le & \ \frac{1}{2} Q^0 \bullet \left(\bar{X} - X^{\mathrm{lim}}\right)
+ \frac{1}{2} \left|Q^0 \bullet \left(X^{\mathrm{lim}} - X^{\hat{T}}\right)\right|
+ \frac{1}{2} \varepsilon_{\hat{T}} \|P\| (2M) \\
\le & -\frac{\delta}{2} + \frac{\delta}{8} + \frac{\delta}{8}
= - \frac{\delta}{4} < 0.
\end{align}
This contradicts the optimality of
$X^{\hat{T}}$ in $(\PC^{\varepsilon_{\hat{T}}})$.
This completes the proof.
\end{proof}
We note that the negative semidefiniteness of $P$ assumed in Lemma~\ref{lemma:perturbation-technique-dual}
is not required in Lemma~\ref{lemma:perturbation-technique-primal}.
In the subsequent discussion, we remove the assumption on the connectivity of $G$ from Theorem~\ref{thm:system-based-condition-connected}
using Lemmas~\ref{lemma:perturbation-technique-primal} and \ref{lemma:perturbation-technique-dual}.
\subsection{QCQPs with disconnected bipartite structures} \label{ssec:main-disconnected}
We present an improved version of Theorem~\ref{thm:system-based-condition-connected}
for QCQPs with disconnected aggregated sparsity pattern graphs $G$.
\begin{theorem}
\label{prop:system-based-condition}
Suppose that Assumption~\ref{assum:new-assumption-strong} holds
and that the aggregated sparsity pattern graph $G(\VC, \EC)$ is bipartite.
Then, \eqref{eq:hsdr} is exact if, for all $(k, \ell) \in \EC$,
the system \eqref{eq:system-nonpositive} has no solutions.
\end{theorem}
\begin{proof}
Let $L$ denote the number of connected components of $G$,
and choose an arbitrary vertex $u_i$ from the connected component indexed by $i \in [L]$.
Then, we define the edge set
\begin{equation*}
\FC = \bigcup_{i \in [L-1]} \left\{\left(u_i, u_{i+1}\right), \left(u_{i+1}, u_i\right)\right\}.
\end{equation*}
%
Since $\FC$ connects the $i$th and $(i+1)$th components,
the graph $\tilde{G}(\VC, \tilde{\EC} \coloneqq \EC \cup \FC)$
is a connected and bipartite graph.
%
Let $P \in \SymMat^n$ be the negative of the Laplacian matrix of a subgraph $\hat{G}(\VC, \FC)$ of $\tilde{G}$ induced by $\FC$, i.e.,
\begin{equation*}
P_{ij} = \begin{cases}
\; -\deg(i) & \quad \text{if $i = j$}, \\
\; 1 & \quad \text{if $(i,j) \in \FC$}, \\
\; 0 & \quad\text{otherwise},
\end{cases}
\end{equation*}
where $\deg(i)$ denotes the degree of the vertex $i$ in the subgraph $\hat{G}(\VC, \FC)$.
Since the Laplacian matrix is positive semidefinite,
$P$ is negative semidefinite.
By adding a perturbation $\varepsilon P$ with any $\varepsilon > 0$ into \eqref{eq:hqcqp},
we obtain an $\varepsilon$-perturbed QCQP \eqref{eq:hqcqp-perturbed}
whose aggregated sparsity pattern graph is $\tilde{G}(\VC, \tilde{\EC})$.
To check the exactness of the SDP relaxation for \eqref{eq:hqcqp-perturbed}
by Theorem~\ref{thm:system-based-condition-connected},
it suffices to show that the following system
\begin{equation*}
\y \geq \0,\; S(\y;\, \varepsilon) \succeq O,\;
S(\y;\, \varepsilon)_{k\ell} \leq 0
\end{equation*}
has no solutions for all $(k, \ell) \in \tilde{\EC}$,
where $S(\y;\, \varepsilon) \coloneqq (Q^0 + \varepsilon P) + \sum_{p \in [m]} y_p Q^p$.
Let $\hat{\y}$ be an arbitrary vector satisfying the first two constraints, i.e.,
$\hat{\y} \geq \0$ and $S(\hat{\y};\, \varepsilon) \succeq O$.
\begin{enumerate}[label=(\roman*)]
\item
If $(k, \ell) \in \FC$, then $P_{k\ell} = 1$ and
$Q^p_{k\ell} = 0$ for any $p \in [0, m]$ by definition.
Thus, we have
\begin{equation*}
S(\hat{\y};\, \varepsilon)_{k\ell}
= \varepsilon P_{k\ell}
> 0.
\end{equation*}
\item
If $(k, \ell) \in \tilde{\EC} \setminus \FC = \EC$,
the system \eqref{eq:system-nonpositive} with $(k, \ell)$ has no solutions,
which implies $S(\hat{\y})_{k\ell} > 0$.
Since $(k, \ell) \not\in \FC$, we have $P_{k\ell} = 0$.
Hence, it follows
\begin{equation*}
S(\hat{\y};\, \varepsilon)_{k\ell}
= S(\hat{\y})_{k\ell}
> 0.
\end{equation*}
\end{enumerate}
%
Therefore, none of the systems has a solution,
and the SDP relaxation of \eqref{eq:hqcqp-perturbed} is exact.
Let $\{\varepsilon_t\}_{t=1}^\infty \subseteq \Real_+$ be a monotonically decreasing sequence converging to zero;
then the SDP relaxation of each $\varepsilon_t$-perturbed QCQP is exact as discussed above.
By Lemma~\ref{lemma:perturbation-technique-primal} or \ref{lemma:perturbation-technique-dual},
the desired result follows.
\end{proof}
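For concreteness, the perturbation matrix $P$ used in the proof can be assembled directly from the linking edge set $\FC$. The following sketch is a minimal illustration, assuming NumPy and a hypothetical choice of representative vertices $u_i$; it forms the negative Laplacian of $(\VC, \FC)$ and checks numerically that it is negative semidefinite.
\begin{verbatim}
# Minimal sketch (assumes NumPy); the linking edges F are illustrative.
import numpy as np

n = 6
u = [0, 2, 5]                       # one representative per component
F = [(u[i], u[i + 1]) for i in range(len(u) - 1)]

P = np.zeros((n, n))
for i, j in F:                      # P is minus the Laplacian of (V, F)
    P[i, j] = P[j, i] = 1.0
    P[i, i] -= 1.0
    P[j, j] -= 1.0

# All eigenvalues are nonpositive (up to roundoff).
print(np.linalg.eigvalsh(P).max() <= 1e-12)
\end{verbatim}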
\subsection{Disconnected sign-definite QCQPs} \label{ssec:nonnegative-offdiagonal}
The SDP relaxation of QCQPs with a bipartite sparsity pattern and nonnegative off-diagonal elements of $Q^0, \ldots, Q^m$
is known to be exact (see Theorem~\ref{thm:sojoudi-theorem}
of \cite{Sojoudi2014exactness}).
In contrast, when we dealt with such QCQPs in section~\ref{ssec:nonnegative-offdiagonal-connected},
the connectivity of $G$ and $Q^0_{ij} > 0$ were assumed to derive the exactness of the SDP relaxation.
In this subsection, we eliminate these assumptions using the perturbation techniques of section~\ref{ssec:perturbation-techniques}.
\begin{coro} \label{coro:nonnegative-offdiagonal}
Suppose that Assumption~\ref{assum:new-assumption-strong} holds, and
suppose the aggregated sparsity pattern graph $G(\VC, \EC)$ of \eqref{eq:hqcqp}
is bipartite.
If $Q^p_{ij} \geq 0$ for all $(i, j) \in \EC$ and for all $p \in [0, m]$,
then the SDP relaxation is exact.
\end{coro}
\begin{proof}
Let $P \in \SymMat^n$ be the negative of the Laplacian matrix of $G(\VC, \EC)$, i.e.,
\begin{equation*}
P_{ij} = \begin{cases}
\; -\deg(i) & \quad \text{if $i = j$}, \\
\; 1 & \quad \text{if $(i,j) \in \EC$}, \\
\; 0 & \quad\text{otherwise}.
\end{cases}
\end{equation*}
Since the Laplacian matrix is positive semidefinite,
$P$ is negative semidefinite.
By adding a perturbation $\varepsilon P$ with any $\varepsilon > 0$,
we obtain an $\varepsilon$-perturbed QCQP \eqref{eq:hqcqp-perturbed}
whose aggregated sparsity pattern graph remains the same as the graph $G(\VC, \EC)$.
To determine whether the SDP relaxation is exact for this $\varepsilon$-perturbed QCQP \eqref{eq:hqcqp-perturbed},
it suffices to check the infeasibility of the system, according to Theorem~\ref{prop:system-based-condition}:
\begin{equation*}
\y \geq \0,\; S(\y;\, \varepsilon) \succeq O,\;
S(\y;\, \varepsilon)_{k\ell} \leq 0.
\end{equation*}
Let $\hat{\y} \geq \0$ be an arbitrary vector
satisfying the first two constraints, i.e.,
$\hat{\y} \geq \0$ and $S(\hat{\y};\, \varepsilon) \succeq O$.
For every $(k, \ell) \in \EC$, since $S(\hat{\y})_{k\ell} \geq 0$ and $P_{k\ell} > 0$,
we have
\begin{equation*}
S(\hat{\y};\, \varepsilon)_{k\ell}
\geq \varepsilon P_{k\ell} > 0,
\end{equation*}
which implies that the system above has no solutions.
Hence, by Theorem~\ref{prop:system-based-condition},
the SDP relaxation of the $\varepsilon$-perturbed QCQP \eqref{eq:hqcqp-perturbed} is exact.
Let $\{\varepsilon_t\}_{t=1}^\infty \subseteq \Real_+$ be a monotonically decreasing sequence converging to zero;
then, as discussed above, the SDP relaxation of each $\varepsilon_t$-perturbed QCQP is exact.
By Lemma~\ref{lemma:perturbation-technique-primal} or \ref{lemma:perturbation-technique-dual},
the SDP relaxation of a QCQP with nonnegative off-diagonal elements and bipartite structures
is also exact.
\end{proof}
Using Corollary~\ref{coro:nonnegative-offdiagonal},
we can extend
Proposition~\ref{prop:weaker-than-sojoudi-connected}
and Corollary~\ref{coro:nonpositive-offdiagonal-connected}
to the following results.
\begin{prop}
\label{prop:weaker-than-sojoudi}
Suppose that Assumption~\ref{assum:new-assumption-strong} holds and that no condition on sparsity is imposed.
If \eqref{eq:hqcqp} satisfies the assumption of Theorem~\ref{thm:sojoudi-theorem},
then \eqref{eq:hqcqp} also satisfies that of Corollary~\ref{coro:nonnegative-offdiagonal}.
In addition, the exactness of its SDP relaxation
can be proved by Theorem~\ref{prop:system-based-condition}.
\end{prop}
\begin{coro} \label{coro:nonpositive-offdiagonal}
Under Assumption~\ref{assum:new-assumption-strong},
the SDP relaxation of a nonpositive off-diagonal QCQP is exact.
\end{coro}
\begin{proof}
{(Both Proposition~\ref{prop:weaker-than-sojoudi} and Corollary~\ref{coro:nonpositive-offdiagonal})}
It is easy to check that
the aggregated sparsity pattern graph of
\eqref{eq:decomposed-hqcqp} generated by the given problem is bipartite
by the arguments similar to the proof of Proposition~\ref{prop:weaker-than-sojoudi-connected}.
Therefore, \eqref{eq:decomposed-hqcqp} satisfies the assumption of Corollary~\ref{coro:nonnegative-offdiagonal}.
\end{proof}
\section{Numerical experiments} \label{sec:example}
We investigate analytical and computational aspects of the conditions in
Theorem~\ref{thm:system-based-condition-connected}
with two QCQP instances below.
The first QCQP consists of $2 \times 2$ data matrices.
We show the exactness of its SDP relaxation
by checking the feasibility systems in Theorem~\ref{thm:system-based-condition-connected} without SDP solvers.
Next, Example~\ref{examp:cycle-graph-4-vertices} is considered for the second QCQP.
As the size $n$ of the second QCQP is 4, it is difficult to handle
the positive semidefinite constraint $S(\y) \succeq O$ without numerical computation.
We present a numerical method for testing the exactness of the SDP relaxation with a computational solver.
We also detail the difference between our results and the existing results
using these two QCQP instances.
As discussed in section~\ref{ssec:comparison},
if the aggregated sparsity pattern graph is bipartite,
then Theorem~\ref{thm:system-based-condition-connected} covers a wider class of QCQPs than
those by Theorem~\ref{thm:sojoudi-theorem} in~\cite{Sojoudi2014exactness}
under the connectivity and the elementwise condition on $Q^0$.
Theorem~\ref{thm:system-based-condition-connected} has been generalized in section~\ref{sec:perturbation} to Theorem~\ref{prop:system-based-condition},
and this theorem covers a wider class of QCQPs without the connectivity condition.
For numerical experiments,
JuMP~\cite{Dunning2017} was used with the solver MOSEK~\cite{mosek}
and SDPs were solved with tolerance $1.0 \times 10^{-8}$.
All numerical results are shown with four significant digits.
\subsection{A QCQP instance with $n=2$} \label{ssec:analytical-example}
\begin{example}
\label{examp:small-example}
Consider the QCQP \eqref{eq:hqcqp} with
\begin{align*}
& n = 2, \quad m = 1, \quad \b = \begin{bmatrix} 1 \end{bmatrix}, \\
& Q^0 = \begin{bmatrix} -3 & -1 \\ -1 & -2 \end{bmatrix}, \quad
Q^1 = \begin{bmatrix} 3 & 4 \\ 4 & 6 \end{bmatrix}.
\end{align*}
\end{example}
We first verify whether the problem satisfies the assumption of Theorem~\ref{thm:system-based-condition-connected}.
The aggregated sparsity pattern graph $G$ is bipartite and connected
as it has only two vertices and $Q^0_{12} \neq 0$.
Since $Q^1$ is positive definite,
the problem satisfies Assumption~\ref{assum:previous-assumption}{\it \ref{assum:previous-assumption-1}}.
By the discussion in Remark~\ref{rema:comparison-assumption},
it also satisfies Assumption~\ref{assum:new-assumption}.
It only remains to show that the system
\begin{equation*}
y_1 \geq 0, \quad
\hat{S}(y_1) \coloneqq \begin{bmatrix} -3 & -1 \\ -1 & -2 \end{bmatrix} +
y_1 \begin{bmatrix} 3 & 4 \\ 4 & 6 \end{bmatrix} \succeq O, \quad
-1 + 4y_1 \leq 0
\end{equation*}
has no solutions.
By definition,
$\hat{S}(y_1) \succeq O$ holds if and only if all the principal minors of $\hat{S}(y_1)$ are nonnegative,
or equivalently, $-3 + 3y_1 \geq 0$, $-2 + 6y_1\ge 0$, and $2y_1^2 - 16y_1 + 5 \geq 0$.
Hence, the first two inequalities of the system hold only if $y_1 \geq 4 + 3\sqrt{6}/2 \simeq 7.674$.
Since $-1 + 4y_1 \geq -1 + 4(4 + 3\sqrt{6}/2) = 15 + 6\sqrt{6} > 0$,
the last inequality does not hold for such $y_1$.
The problem therefore admits the exact SDP relaxation.
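The principal-minor computation above can also be reproduced symbolically. The following sketch is a minimal illustration assuming the SymPy package; it recovers the three minors and the threshold $4 + 3\sqrt{6}/2$.
\begin{verbatim}
# Minimal sketch (assumes SymPy) reproducing the principal minors.
import sympy as sp

y1 = sp.symbols('y1', nonnegative=True)
S = sp.Matrix([[-3, -1], [-1, -2]]) + y1 * sp.Matrix([[3, 4], [4, 6]])

print(S[0, 0])                          # 3*y1 - 3
print(S[1, 1])                          # 6*y1 - 2
print(sp.expand(S.det()))               # 2*y1**2 - 16*y1 + 5
print(sp.solve(sp.Eq(S.det(), 0), y1))  # roots 4 +- 3*sqrt(6)/2
\end{verbatim}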
Actually, we numerically obtained an optimal solution of the above QCQP in
Example~\ref{examp:small-example} and its SDP relaxation
as $\x^* \simeq [1.731; -1.167]$ and
$X^* \simeq [2.997, -2.021; -2.021, 1.362]$, respectively.
From $\trans{(\x^*)}Q^0 \x^* - Q^0 \bullet X^* \simeq 5.379 \times 10^{-10}$,
we see numerically that the SDP relaxation provided the exact optimal value.
Since $G$ is clearly a forest (no cycles),
we can also apply Proposition~\ref{prop:forest-results} in~\cite{Azuma2021}.
From the discussion above,
the system \eqref{eq:system-zero} has no solutions for $(k, \ell) = (1, 2)$
and Assumption~\ref{assum:previous-assumption}{\it \ref{assum:previous-assumption-1}} is satisfied.
By taking $\hat{X} = [0.1 \ \ 0; 0 \ \ 0.1] \succ O$,
we know $\ip{Q^1}{\hat{X}} = 0.9 \leq 1 = b_1$.
Hence, the exactness of the SDP relaxation can be proved by Proposition~\ref{prop:forest-results}.
We mention that this result cannot be obtained by
Theorem~\ref{thm:sojoudi-theorem} in~\cite{Sojoudi2014exactness}.
Since $Q^0_{12} = -1$ and $Q^1_{12} = 4$,
the edge sign $\sigma_{12}$ of the edge $(1, 2)$ must be zero by definition,
contradicting \eqref{eq:sign-constraint-sign-definite}.
\subsection{Example~\ref{examp:cycle-graph-4-vertices}} \label{ssec:computational-example}
We computed an optimal solution of Example~\ref{examp:cycle-graph-4-vertices} and that of its SDP relaxation as
\begin{equation*}
x^* \simeq \begin{bmatrix}
7.818 \\ -8.331 \\ 1.721 \\ -7.019
\end{bmatrix}\ \text{and} \
X^* \simeq \begin{bmatrix}
61.12 & -65.13 & 13.45 & -54.87 \\
-65.13 & 69.41 & -14.34 & 58.48 \\
13.45 & -14.34 & 2.961 & -12.08 \\
-54.87 & 58.48 & -12.08 & 49.27
\end{bmatrix} \in \SymMat^4,
\end{equation*}
respectively.
From $\trans{(\x^*)}Q^0 \x^* - Q^0 \bullet X^* \simeq 7.676 \times 10^{-8}$,
we see numerically that the SDP relaxation resulted in the exact optimal value.
The aggregated sparsity pattern graph $G(\VC, \EC)$ is
a cycle graph with 4 vertices (\autoref{fig:example-aggregated-sparsity}).
We first check whether
it satisfies the assumption of Theorem~\ref{thm:system-based-condition-connected}.
We compute $3Q^1 + 4Q^2$ as
\begin{equation*}
3 \begin{bmatrix}
5 & 2 & 0 & 1 \\ 2 & -1 & 3 & 0 \\
0 & 3 & 3 & -1 \\ 1 & 0 & -1 & 4 \end{bmatrix} +
4 \begin{bmatrix}
-1 & 1 & 0 & 0 \\ 1 & 4 & -1 & 0 \\
0 & -1 & 6 & 1 \\ 0 & 0 & 1 & -2 \end{bmatrix} =
\begin{bmatrix}
11 & 10 & 0 & 3 \\ 10 & 13 & 5 & 0 \\
0 & 5 & 33 & 1 \\ 3 & 0 & 1 & 4 \end{bmatrix},
\end{equation*}
and its minimum eigenvalue is approximately $0.1577$.
Thus, there exists $\bar{\y} \geq 0$ such that $\bar{y}_1 Q^1 + \bar{y}_2 Q^2 \succ O$,
e.g., $\bar{\y} = [3; 4]$.
As mentioned in Remark~\ref{rema:comparison-assumption},
it follows that the second problem satisfies Assumption~\ref{assum:new-assumption}.
To show the exactness of the SDP relaxation for the problem,
it only remains to show that
the systems \eqref{eq:system-nonpositive} for all $(k, \ell) \in \EC$ have no solutions.
Using an SDP solver, we verified that none of these systems has a solution.
Indeed, for every $(k, \ell) \in \EC$, the SDP
\begin{equation} \label{eq:optimal-value-systems-example}
\begin{array}{rl}
\mu^* =
\min & S(\y)_{k\ell} \\
\subto & \y \geq \0, \; S(\y) \succeq O,
\end{array}
\end{equation}
returns the optimal values shown in \autoref{tab:optimal-value-systems-example},
which implies that no solution exists for \eqref{eq:system-nonpositive}
since $S(\y)_{k\ell}$ cannot attain a nonpositive value.
Therefore,
the SDP relaxation of Example~\ref{examp:cycle-graph-4-vertices} is exact by
Theorem~\ref{thm:system-based-condition-connected}.
\begin{table}[b]
\caption{Optimal values of \eqref{eq:optimal-value-systems-example} for each $(k, \ell)$}
\label{tab:optimal-value-systems-example}
\centering
\begin{tabular}{c|cccc}
$(k, \ell)$ & $(1, 2)$ & $(2, 3)$ & $(1, 4)$ & $(3, 4)$ \\ \hline
$\mu^*$ & 18.58 & 12.84 & 8.897 & 0.3215
\end{tabular}
\end{table}
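Each value $\mu^*$ in \autoref{tab:optimal-value-systems-example} can be computed with a few lines in a modeling language. The following sketch is a minimal illustration assuming the CVXPY and NumPy packages; since $Q^0$ of Example~\ref{examp:cycle-graph-4-vertices} is specified elsewhere in the text, a placeholder is used for it here.
\begin{verbatim}
# Minimal sketch (assumes CVXPY/NumPy) of the SDP above for one (k, l).
import cvxpy as cp
import numpy as np

Q0 = np.eye(4)                       # placeholder for the example's Q^0
Q1 = np.array([[ 5, 2, 0, 1], [2, -1,  3,  0],
               [ 0, 3, 3,-1], [1,  0, -1,  4]], dtype=float)
Q2 = np.array([[-1, 1, 0, 0], [1,  4, -1,  0],
               [ 0,-1, 6, 1], [0,  0,  1, -2]], dtype=float)

k, l = 0, 1                          # edge (1, 2) in 0-based indexing
y = cp.Variable(2, nonneg=True)
S = Q0 + y[0] * Q1 + y[1] * Q2
prob = cp.Problem(cp.Minimize(S[k, l]), [S >> 0])
prob.solve()
print(prob.value)   # mu* > 0 certifies that the system is infeasible
\end{verbatim}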
With Theorem~\ref{thm:sojoudi-theorem} in~\cite{Sojoudi2014exactness},
it is not possible to show the exactness of the SDP relaxation.
The edge sign $\sigma_{12}$ of the edge $(1, 2)$ is $0$ by definition.
Since the cycle basis of $\GC$ consists only of $\CC_1 = \GC$,
the left-hand side of \eqref{eq:sign-constraint-simple-cycle} is
$\sigma_{12}\sigma_{23}\sigma_{34}\sigma_{41} = 0$.
However, its right-hand side only takes $-1$ or $+1$.
This implies that Theorem~\ref{thm:sojoudi-theorem} cannot be applied to Example~\ref{examp:cycle-graph-4-vertices}.
\section{Concluding remarks} \label{sec:conclution}
We have proposed sufficient conditions for
the exact SDP relaxation of QCQPs whose
aggregated sparsity pattern graph can be represented by bipartite graphs.
Since these conditions consist of at most $n^2/4$ SDP systems,
the exactness can be investigated in polynomial time.
The derivation of the conditions is based on
the rank of $S(\y)$ at optimal solutions $\y$ of the dual SDP relaxation under strong duality.
More precisely, a QCQP admits the exact SDP relaxation
if the rank of $S(\y)$ is bounded below by $n-1$.
For the lower bound,
we have used the fact that
any entrywise nonnegative matrix $M \succeq O$ with a bipartite sparsity pattern has rank at least $n - 1$
if it satisfies $M\1 > \0$.
Using results from the recent paper~\cite{kim2021strong},
the sufficient conditions have been considered under weaker assumptions than
those in \cite{Azuma2021}.
That is, the sparsity of bipartite graphs includes that of tree and forest graphs;
therefore, the proposed conditions
apply to a wider class of QCQPs
than those in \cite{Azuma2021}.
We have also shown in Proposition~\ref{prop:weaker-than-sojoudi} that
one can determine the exactness for all the problems
which satisfy the condition considered in Theorem~\ref{thm:sojoudi-theorem} of~\cite{Sojoudi2014exactness}.
For our future work,
sufficient conditions for the exactness of
a wider class of QCQPs than those with bipartite structures will be investigated.
Furthermore, examining our conditions
to analyze the exact SDP relaxation of QCQPs
transformed from polynomial optimization would be an interesting subject.
\vspace{0.5cm}
\noindent
{\bf Acknowledgements.}
The authors would like to thank Prof. Ram Vasudevan and Mr. Jinsun Liu for pointing out that there exists
no edge $(i,i+n)$ in the objective function in the proof of Proposition 3.8
of the original version.
\section{Introduction}
Conventional approaches to distributed network optimization are based on iterative descent in either the primal or dual domain. The reason for this is that for many types of network optimization problems there exist descent directions that can be computed in a distributed fashion. Subgradient descent algorithms, for example, implement iterations through distributed updates based on local information exchanges with neighboring nodes; see e.g., \cite{mung, kelly, low, Srikant}. However, practical applicability of the resulting algorithms is limited by exceedingly slow convergence rates typical of gradient descent algorithms. Furthermore, since traditional line search methods require global information, fixed stepsizes are used, exacerbating the already slow convergence rate, \cite{averagepaper, CISS-rate}.
Faster distributed descent algorithms have been recently developed by constructing approximations to the Newton direction using iterative local information exchanges, \cite{lowdiag,cdc09,acc11}. These results build on earlier work in \cite{BeG83} and \cite{cgNewt} which present Newton-type algorithms for network flow problems that, different from the more recent versions in \cite{cdc09} and \cite{acc11}, require access to all network variables. To achieve global convergence and recover quadratic rates of centralized Newton's algorithm \cite{cdc09} and \cite{acc11} use distributed backtracking line searches that use average consensus to verify global exit conditions. Since each backtracking line search step requires running a consensus iteration with consequently asymptotic convergence \cite{fagnani, spielman}, the exit conditions of the backtracking line search can only be achieved up to some error. Besides introducing inaccuracies, computing stepsizes with a consensus iteration is not a suitable solution because the consensus iteration itself is slow. Thus, the quadratic convergence rate of the algorithms in \cite{cdc09} and \cite{acc11} is to some extent hampered by the linear convergence rate of the line search. This paper presents a distributed line search algorithm based on local information so that each node in the network can solve its own backtracking line search using only locally available information.
Work on line search methods for descent algorithms can be found in \cite{NMLS,zhang,cgLS}. The focus in \cite{NMLS} and \cite{zhang} is on nonmonotone line searches, which improve convergence rates for Newton and Newton-like descent algorithms. The objective in \cite{cgLS} is to avoid local optimal solutions in nonconvex problems. While these works provide insights for developing line searches, they do not tackle the problem of dependence on information that is distributed through the nodes of a graph.
To simplify discussion we restrict attention to the network flow problem. Network connectivity is modeled as a directed graph and the goal of the network is to support a single information flow specified by incoming rates at an arbitrary number of sources and outgoing rates at an arbitrary number of sinks. Each edge of the network is associated with a convex function that determines the cost of traversing that edge as a function of the flow units transmitted across the link. Our objective is to find the optimal flows over all links. Optimal flows can be found by solving a concave maximization problem with linear equality constraints (Section II). Evaluating a line search algorithm requires us to choose a descent direction. We choose to work with the family of Accelerated Dual Descent (ADD) methods introduced in \cite{acc11}. Algorithms in this family are parameterized by the information dependence between nodes. The $N$th member of the family, shorthanded as ADD-N, relies on information exchanges with nodes not more than $N$ hops away. Similarly, we propose a group of line searches that can be implemented through information exchanges with nodes in this $N$-hop neighborhood.
Our work is based on the Armijo rule, which is the workhorse condition used in backtracking line searches, \cite[Section 7.5]{LaNLP}. We construct a local version of the Armijo rule at each node by taking only the terms computable at that node, using information from no more than $N$ hops away (Section III). Thus the line search always has the same information requirements as the descent direction computed via the ADD-N algorithm. Our proofs (Section IV) leverage the information dependence properties of the algorithm to show that key properties of the backtracking line search are preserved: (i) We guarantee the selection of unit stepsize within a neighborhood of the optimal value (Section IV-A). (ii) Away from this neighborhood, we guarantee a strict decrease in the optimization objective (Section IV-B). These properties make our algorithm a practical distributed alternative to standard backtracking line search techniques. Simulations further demonstrate that our line search is functionally equivalent to its centralized counterpart (Section V).
\section{Network Optimization}\label{mincost-Newton}
Consider a network represented by a directed graph ${\cal G}=({\cal N},{\cal E})$ with node set ${\cal N}=\{1,\ldots,n\}$, and edge set ${\cal E} = \{1,\ldots,E\}$. The $i$th component of vector $x$ is denoted as $x^{i}$. The notation $x\ge 0$ means that all components $x^i\ge 0$. The network is deployed to support a single information flow specified by incoming rates $b^{i}>0$ at source nodes and outgoing rates $b^{i}<0$ at sink nodes. Rate requirements are collected in a vector $b$, which to ensure problem feasibility has to satisfy $\sum_{i=1}^{n}b^{i}=0$. Our goal is to determine a flow vector $x=[x^e]_{e\in {\cal E}}$, with $x^e$ denoting the amount of flow on edge $e=(i,j)$.
Flow conservation implies that it must be $Ax=b$, with $A$ the $n\times E$ node-edge incidence matrix defined as
\[ [A]_{ij} = \left\{
\begin{array}{ll}
1 & \hbox{if edge $j$ leaves node $i$} , \\
-1 & \hbox{if edge $j$ enters node $i$}, \\
0 & \hbox{otherwise,}
\end{array}\right.\]
where $[A]_{ij}$ denotes the element in the $i$th row and $j$th column of the matrix $A$. We define the reward as the negative of the scalar cost function $\phi_e(x^{e})$, which denotes the cost of $x^e$ units of flow traversing edge $e$. We assume that the cost functions $\phi_e$ are strictly convex and twice continuously differentiable. The maximum reward network optimization problem is then defined as
\begin{equation}
\begin{array}{ll}\hbox{maximize }& -f(x)=\sum_{e=1}^E -\phi_e(x^e)\\ \hbox{subject to: }& Ax=b.\label{optnet}\end{array}
\end{equation}
Our goal is to investigate a distributed line search technique for use with Accelerated Dual Descent (ADD) methods for solving the optimization problem in (\ref{optnet}). We begin by discussing the Lagrange dual problem of the formulation in (\ref{optnet}) in Section \ref{subsection:dualform} and reviewing the ADD method in Section \ref{subsection:ADD}.
\subsection{Dual Formulation} \label{subsection:dualform}
Dual descent algorithms solve (\ref{optnet}) by descending on the Lagrange dual function $q(\lambda)$. To construct the dual function consider the Lagrangian ${\cal L} (x,\lambda) = -\sum_{e=1}^E \phi_e(x^e) +\lambda'(Ax-b)$ and define
\begin{eqnarray} \label{eqn_separable_dual}
q(\lambda) \!\!&=&\!\! \sup_{x\in \mathbb{R}^E} {\cal L} (x,\lambda) \nonumber \\
&=&\sup_{x\in \mathbb{R}^E} \left(-\sum_{e=1}^E \phi_e(x^e) +\lambda'Ax\right) - \lambda'b \nonumber\\
&=&\!\! \sum_{e=1}^E \sup_{x^e \in \mathbb{R}} \Big((\lambda'A)^e x^e-\phi_e(x^e)\Big) - \lambda'b,
\end{eqnarray}
where in the last equality we wrote $\lambda'Ax = \sum_{e=1}^{E}(\lambda'A)^e x^e$ and exchanged the order of the sum and supremum operators.
It can be seen from (\ref{eqn_separable_dual}) that the evaluation of the dual function $q(\lambda)$ decomposes into the $E$ one-dimensional optimization problems that appear in the sum. We assume that each of these problems has an optimal solution, which is unique because of the strict convexity of the functions $\phi_e$. Denote this unique solution as $x^e(\lambda)$ and use the first order optimality conditions for these problems in order to write
\begin{equation}
x^e(\lambda) = (\phi_e')^{-1} (\lambda^i-\lambda^j),\label{primal-sol}
\end{equation}
where $i\in{\cal N}$ and $j\in{\cal N}$ respectively denote the source and destination nodes of edge $e=(i,j)$. As per (\ref{primal-sol}) the evaluation of $x^e(\lambda)$ for each edge $e$ is based on local information
about the edge cost function $\phi_e$ and the dual variables of the incident nodes $i$ and $j$.
The dual problem of (\ref{optnet}) is defined as $\min_{\lambda\in \mathbb{R}^n} q(\lambda)$. The dual function is convex, as all Lagrange dual functions are, and differentiable, because the functions $\phi_e$ are strictly convex. Therefore, the dual problem can be solved using any descent algorithm of the form
\begin{equation}
\lambda_{k+1} = \lambda_k + \alpha_k d_k\qquad \hbox{for all }k\ge 0, \label{sgit}
\end{equation}
where the descent direction $d_k$ satisfies $g_k'd_k<0$ for all times $k$ with $g_k = g(\lambda_{k})=\nabla q(\lambda_{k})$ denoting the gradient of the dual function $q(\lambda)$ at $\lambda=\lambda_k$. An important observation here is that we can compute the elements of $g_k$ as
\begin{equation}\label{eqn_dual_update_distributed}
g_k^{i} = \sum_{e =(i,j)} x^e(\lambda_{k}) - \sum_{e =(j,i)} x^e(\lambda_{k}) - b^{i},
\end{equation}
with the vector $x(\lambda_k)$ having components $x^e(\lambda_k)$ as determined by (\ref{primal-sol}) with $\lambda=\lambda_k$, \cite[Section 6.4]{nlp}. An important fact that follows from \eqref{eqn_dual_update_distributed} is that the $i$th element $g_k^{i}$ of the gradient $g_k$ can be computed using information that is either locally available $x^{(i,j)}$ or available at neighbors $x^{(j,i)}$. Thus, the simplest distributed dual descent algorithm, known as subgradient descent, takes $d_k=-g_k$. Subgradient descent suffers from slow convergence, so we work with an approximate Newton direction instead.
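As a concrete illustration, for quadratic costs $\phi_e(x^e) = (x^e)^2/2$ the inverse in (\ref{primal-sol}) is the identity, so $x^e(\lambda) = \lambda^i - \lambda^j = (A'\lambda)^e$ and \eqref{eqn_dual_update_distributed} reads $g_k = Ax(\lambda_k) - b$. The following sketch is a minimal illustration assuming NumPy and a hypothetical three-node network.
\begin{verbatim}
# Minimal sketch (assumes NumPy): primal recovery and dual gradient
# for quadratic costs phi_e(x) = x^2/2, where (phi_e')^{-1}(t) = t.
import numpy as np

edges = [(0, 1), (1, 2), (2, 0)]   # illustrative directed graph
n, E = 3, len(edges)
A = np.zeros((n, E))               # node-edge incidence matrix
for e, (i, j) in enumerate(edges):
    A[i, e], A[j, e] = 1.0, -1.0

b = np.array([1.0, 0.0, -1.0])     # rates, summing to zero
lam = np.array([0.5, 0.2, 0.0])    # current dual iterate

x = A.T @ lam                      # x^e(lam) = lam^i - lam^j
g = A @ x - b                      # dual gradient, node-local
print(x, g)
\end{verbatim}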
\subsection{Accelerated Dual Descent} \label{subsection:ADD}
The Accelerated Dual Descent (ADD) method is a parameterized family of dual descent algorithms developed in \cite{acc11}. An algorithm in the ADD family is called ADD-N, and each node uses information from $N$-hop neighbors to compute its portion of an approximate Newton direction. Two nodes are $N$-hop neighbors if the shortest undirected path between them has length at most $N$.
The exact Newton direction $d_k$ is defined as the solution of the linear equation $ H_k d_k = -g_k$ where $H_k = H(\lambda_{k})=\nabla^2 q(\lambda_{k})$ denotes the Hessian of the dual function. We approximate $d_k$ using the ADD-N direction defined as
\begin{equation}
d_k^{(N)} = -\bar H_k^{(N)} g_k \label{ADDd}
\end{equation}
where the approximate Hessian inverse $\bar H_k^{(N)}$ is defined as
\begin{equation}
\bar H_k^{(N)} =\sum_{r=0}^N D_k^{-\frac{1}{2}}\!\left(D_k^{-\frac{1}{2}} B_k D_k^{-\frac{1}{2}}\right)^r\!D_k^{-\frac{1}{2}}\!
\label{ADD_h}
\end{equation}
using the Hessian splitting $H_k= D_k-B_k$, where $D_k$ is the diagonal matrix with $[D_k]_{ii} = [H_k]_{ii}$. The resulting accelerated dual descent algorithm
\begin{equation}
\lambda_{k+1} = \lambda_k + \alpha_k d^{(N)}_k\qquad \hbox{for all }k\ge 0, \label{update}
\end{equation}
can be computed using information from $N$-hop neighbors because the dependence structure of $g_k$ shown in equation (\ref{eqn_dual_update_distributed}) causes the Hessian to have a local structure as well: $[H_k]_{ij} \not = 0$ only if $i = j$ or $(i,j)\in \mathcal{E}$. Since $H_k$ has the sparsity pattern of the network, $B_k$ and thus $D_k^{-\frac{1}{2}} B_k D_k^{-\frac{1}{2}}$ must also have the sparsity pattern of the graph. Each term $D_k^{-\frac{1}{2}}\!\left(D_k^{-\frac{1}{2}} B_k D_k^{-\frac{1}{2}}\right)^r\!D_k^{-\frac{1}{2}}\!$ is a matrix which is non-zero only for $r$-hop neighbors, so the sum is non-zero only for $N$-hop neighbors.
Analysis of the ADD-N algorithm fundamentally depends on a network connectivity coefficient $\bar \rho$, which is defined in \cite{acc11} as the bound
\begin{equation}
\rho \left(B_k D_k^{-1}\right) \le \bar\rho \in (0,1) \label{rhobar}
\end{equation}
where $\rho(\cdot)$ denotes the second largest eigenvalue modulus. When $\bar\rho$ is small, information in the network spreads efficiently and $d_k^{(N)}$ is a more exact approximation of $d_k$. See \cite{acc11} for details.
\section{Distributed Backtracking Line Search}
Algorithms ADD-$N$ for different $N$ differ in their information dependence. Our goal is to develop a family of distributed backtracking line searches parameterized by the same $N$ and having the same information dependence. The idea is that the $N^{th}$ member of the family of line searches is used in conjunction with the $N^{th}$ member of the ADD family to determine the step and descent direction in (\ref{update}). As with the ADD-$N$ algorithm, implementing the distributed backtracking line search requires each node to get information from its $N$-hop neighbors.
Centralized backtracking line searches are typically intended as a method to find a stepsize $\alpha$ that satisfies Armijo's rule. This rule requires the stepsize $\alpha$ to satisfy the inequality
\begin{equation}q(\lambda+\alpha d) \le q(\lambda) + \sigma \alpha d'g, \label{armijo}\end{equation}
for given descent direction $d$ and search parameter $\sigma\in (0,1/2)$. The backtracking line search algorithm is then defined as follows:
\begin{algorithm}\label{BLS}
Consider the objective function $q(\cdot)$ and given variable value $\lambda$ and corresponding descent direction $d$ and dual gradient $g$. The backtracking line search algorithm is:
\begin{itemize}
\item[\footnotesize\bf] Initialize $\alpha=1$
\item[\footnotesize\bf] {\bf while} $q(\lambda+\alpha d) > q(\lambda) + \sigma \alpha d'g$
\item[\footnotesize\bf] $\qquad\alpha = \alpha \beta$
\item[\footnotesize\bf] \bf{end}
\end{itemize}
The scalars $\beta\in (0,1)$ and $\sigma \in (0,1/2)$ are given parameters.
\end{algorithm}
This line search algorithm is commonly used with Newton's method because it guarantees a strict decrease in the objective and, once within an error neighborhood, it always selects $\alpha=1$, allowing for quadratic convergence \cite[Section 9.5]{boydbook}.
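For concreteness, a minimal Python sketch of Algorithm \ref{BLS} follows; the objective \texttt{q} is a callable on numpy vectors, and the parameter values shown are illustrative rather than prescribed by the algorithm:
\begin{verbatim}
def backtracking(q, lam, d, g, sigma=0.1, beta=0.5):
    # Shrink alpha geometrically until Armijo's rule holds.
    alpha = 1.0
    while q(lam + alpha * d) > q(lam) + sigma * alpha * (d @ g):
        alpha *= beta
    return alpha
\end{verbatim}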
In order to create a distributed version of the backtracking line search we need a local version of the Armijo rule. We start by decomposing the dual objective $q(\lambda) = \sum_{i=1}^n q_i(\lambda)$ where the local objectives take the form \begin{equation}q_i(\lambda)= \sum_{e=(j,i)} \phi_e(x^e)-\lambda_i(a_i'x-b_i).\label{ldep}\end{equation} The vector $a_i'$ is the $i^{th}$ row of the incidence matrix $A$. Thus the local objective $q_i(\lambda)$ depends only on the flows adjacent to node $i$ and on $\lambda_i$.
An $N$-parameterized local Armijo rule is therefore given by
\begin{equation}
q_i(\lambda+\alpha_i d)\le q_i(\lambda)+\sigma\alpha_i\sum_{j\in \mathcal{N}_i^{(N)}} {d^j g^j},
\label{armijoN}
\end{equation}
where $\mathcal{N}_i^{(N)}$ is the set of $N$-hop neighbors of node $i$. The scalar $\sigma\in (0,1/2)$ is the same as in (\ref{armijo}), $g=\nabla q(\lambda)$ and $d$ is a descent direction. Each node is able to compute a stepsize $\alpha_i$
satisfying (\ref{armijoN}) using $N$-hop information. The stepsize used for the dual descent update (\ref{update}) is
\begin{equation}
\alpha = \min_{i\in \mathcal{N}} \alpha_i.
\end{equation}
Therefore, we define the distributed backtracking line search according to the following algorithm.
\begin{algorithm}
Given local objectives $q_i(\cdot)$, descent direction $d$ and dual gradient $g$.
\begin{itemize}
\item[\footnotesize\bf] \bf{for} $i=1:n$
\item[\footnotesize\bf] $\qquad$Initialize $\alpha_i=1$
\item[\footnotesize\bf] $\qquad${\bf while} $q_i(\lambda+\alpha_i d)> q_i(\lambda)+\sigma\alpha_i\sum_{j\in \mathcal{N}_i^{(N)}} {d^j g^j}$
\item[\footnotesize\bf] $\qquad\qquad\alpha_i = \alpha_i \beta$
\item[\footnotesize\bf] $\qquad$\bf{end}
\item[\footnotesize\bf] \bf{end}
\item[\footnotesize\bf] $\alpha = \min_i \alpha_i$
\end{itemize}
The scalars $\beta\in (0,1)$, $\sigma \in (0,1/2-\bar\rho^{N+1}/2)$ and $N\in \field{Z}^+$ are parameters.
\label{DBLS}
\end{algorithm}
The distributed backtracking line search described in Algorithm \ref{DBLS} works by allowing each node to execute its own modified version of Algorithm \ref{BLS} using only information from $N$-hop neighbors. Reaching a minimum consensus on $\alpha_i$ requires at most $\hbox{diam}(\mathcal{G})$ iterations. If each node shares its current $\alpha_i$ along with $g_k^i$ with its $N$-hop neighbors, the maximum number of iterations drops to $\lceil \hbox{diam}(\mathcal{G})/N \rceil$.
The parameter $\sigma$ is restricted by the network connectivity coefficient $\bar\rho$ and the choice of $N$ because these are the scalars which encode information availability. Smaller $\bar\rho^{N+1}$ indicates more accessible information, allowing for a greater $\sigma$ and thus a more aggressive search. As $\bar\rho^{N+1}$ approaches zero, we recover the condition $\sigma\in(0,1/2)$ from Algorithm \ref{BLS}.
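A minimal sketch of the node-level procedure in Algorithm \ref{DBLS} follows (Python; the names are ours: \texttt{q\_local} is a list of callables $q_i$, \texttt{neighborhoods} holds precomputed $N$-hop neighbor index lists, and the final minimum stands in for the consensus sweep discussed above):
\begin{verbatim}
import numpy as np

def distributed_backtracking(q_local, lam, d, g, neighborhoods,
                             sigma=0.1, beta=0.5):
    n = len(neighborhoods)
    alphas = np.ones(n)
    for i in range(n):              # in practice: run at every node
        slope = sum(d[j] * g[j] for j in neighborhoods[i])  # local d'g
        while (q_local[i](lam + alphas[i] * d)
               > q_local[i](lam) + sigma * alphas[i] * slope):
            alphas[i] *= beta
    return alphas.min()             # min-consensus over the network
\end{verbatim}
Note that node $i$ only ever evaluates $q_i$, which by \eqref{ldep} depends on $N$-hop information once the relevant entries of $d$ and $g$ have been exchanged.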
\section{Analysis}
In this section we show that when implemented with the Accelerated Dual Descent update in (\ref{update}) the distributed backtracking line search defined in Algorithm \ref{DBLS} recovers the key properties of Algorithm \ref{BLS}: strict decrease of the dual objective and selection of $\alpha=1$ within an error neighborhood.
We proceed by outlining our assumptions. The standard Lipschitz continuity and strict convexity assumptions on the dual Hessian are stated here.
\begin{assumption} \label{standard}
The Hessian $H(\lambda)$ of the dual function $q(\lambda)$ satisfies the following conditions
\begin{list}{}{\setlength{\itemsep }{2pt} \setlength{\parsep }{2pt}
\setlength{\parskip }{0in} \setlength{\topsep }{2pt}
\setlength{\partopsep}{0in} \setlength{\leftmargin}{10pt}
\setlength{\labelsep }{10pt} \setlength{\labelwidth}{-0pt}}
\item[({\it Lipschitz dual Hessian})] There exists some constant $L>0$ such that
\[\|H(\lambda)-H(\bar{\lambda})\| \le L\|\lambda-\bar{\lambda}\| \, \forall \lambda,\bar{\lambda}\in \mathbb{R}^n.\]
\item [({\it Strictly convex dual function})] There exists some constant $M>0$ such that $\|H(\lambda)^{-1}\|\le M \qquad \, \forall \lambda\in \mathbb{R}^n.$
\end{list}\end{assumption}
In addition to assumptions about the dual Hessian we assume that key properties of the inverse Hessian carry forward to our approximation.
\begin{assumption}
The approximate inverse Hessian remains well conditioned within the subspace $\mathbf{1}^\perp$,
\[ m \le \|\bar H^{(N)}\| \le M.\]
\label{cond}
\end{assumption}
These assumptions are reasonable because $\bar H^{(N)}$ is a truncated sum whose limit as $N$ approaches infinity is $H^{-1}$, a matrix we already assume to be well conditioned on $\mathbf{1}^\perp$ even when solving this problem in the centralized case. Furthermore, the first term in the sum is $D^{-1}$, which is well conditioned by construction.
We begin our analysis by characterizing the stepsize $\alpha$ chosen by Algorithm \ref{DBLS} when the descent direction $d$ is chosen according to the ADD-$N$ method.
\begin{lemma}
For any $\alpha_i$ satisfying the distributed Armijo rule in equation (\ref{armijoN})
with descent direction $d= -\bar H^{(N)}g$ we have
\[q_i(\lambda+\alpha_i d)- q_i(\lambda)\le 0.\]
\label{neg}\vspace{-5mm}
\end{lemma}
\begin{proof}
Recall that $\bar H ^{(N)}$ is non-zero only for elements corresponding to $N$-hop neighbors by construction. Therefore, by defining the local gradient vector $\tilde g^{(i)}$ as a sparse vector with nonzero elements $[\tilde g^{(i)}]_j=g^j$ for $j\in \mathcal{N}_i^{(N)}$ we can write
\begin{equation}\sum_{j\in \mathcal{N}_i^{(N)}} {d^j g^j} = -\left(\tilde g^{(i)}\right)' \bar H ^{(N)}\tilde g^{(i)}\label{quad}\end{equation}
Because $\bar H^{(N)}$ is positive definite, the right hand side of (\ref{quad}) is nonpositive, from which it follows that $\sum_{j\in \mathcal{N}_i^{(N)}} {d^j g^j}\leq 0$. The desired result follows by noting that $\alpha_i$ and $\sigma$ are positive scalars.\end{proof}
Lemma \ref{neg} tells us that when using the distributed backtracking line search with the ADD-N algorithm, we achieve improvement in each element of the decomposed objective $q_i(\lambda)$. From the quadratic form in equation (\ref{quad}) it also follows that if equation (\ref{armijoN}) is satisfied by a stepsize $\alpha_i$, then it is also satisfied by any $\alpha\le \alpha_i$ and in particular $\alpha= \min_i \alpha_i$ satisfies equation (\ref{armijoN}) for all $i$.
\subsection{Unit Stepsize Phase}
A fundamental property of the backtracking line search using Armijo's rule summarized in Algorithm \ref{BLS} is that it always selects $\alpha=1$ when the iterates $\lambda$ are within a neighborhood of the optimal argument. This property is necessary to ensure quadratic convergence of Newton's method and is therefore a desirable property for the distributed line search summarized in Algorithm \ref{DBLS}. We prove that it also holds here, as stated in the following theorem.
\begin{theorem}
Consider the distributed line search in Algorithm \ref{DBLS} with parameter $N$, starting point $\lambda=\lambda_k$, and descent direction $d = d_k^{(N)} = -\bar H_k^{(N)} g_k$ computed by the ADD-$N$ algorithm [cf. \eqref{ADDd} and \eqref{ADD_h}]. If the search parameter $\sigma$ is chosen such that
\[ \sigma\in \left(0, \frac{1-\bar\rho^{N+1}}{2}\right) \]
and the norm of the dual gradient satisfies
\[\|g_k\|\le\frac{3m}{LM^3}\left({1-\bar\rho^{N+1}}-2\sigma\right), \]
then Algorithm \ref{DBLS} selects stepsize $\alpha=1$.
\label{a1}
\end{theorem}
\begin{proof}
Recall the definition of the local gradient $\tilde g^{(i)}_k$ as the sparse vector with nonzero elements $[\tilde g^{(i)}_k]_j=g_k^j$ for $j\in \mathcal{N}_i^{(N)}$. Further define the local update vector $\tilde d_k^{(i)} := \bar H^{(N)}_k \tilde g^{(i)}_k$ whose sparsity pattern is the same as that of $\tilde g^{(i)}_k$. Due to this and to the fact that the local objective $q_i(\lambda)$ in \eqref{ldep} depends only on values in $\mathcal{N}_i^{(N)}$, we have
\begin{align}\label{eqn_theo_unit_step_size_pf_10}
q_i(\lambda_k+\alpha d_k) = q_i(\lambda_k+\alpha\tilde d_k^{(i)}).
\end{align}
Applying the Lipschitz dual Hessian assumption to the local update vector $\tilde d_k^{(i)}$ we get
\begin{align}\label{eqn_theo_unit_step_size_pf_20}
\Vert H(\lambda_k+\alpha \tilde d_k^{(i)})- H(\lambda_k)\Vert \le
\alpha L \Vert \tilde d_k^{(i)}\Vert.
\end{align}
We further define a reduced Hessian $\nabla^2 q_i(\lambda) = \tilde H^{(i)}$ by setting to zero the rows and columns corresponding to nodes outside of the neighborhood $\mathcal{N}_i^{(N)}$, i.e.,
\begin{equation}
\left[\tilde H^{(i)}\right]_{ij} :=
\left\{ \begin{array}{ll} H_{ij} & i,j\in \mathcal{N}_i^{(N)}\\
0 & else\end{array}\right. \label{RH}
\end{equation}
Since the elements of $H$ already satisfy $H_{ij}= 0$ for all $(i,j)\not\in \mathcal{E}$, the resulting $\tilde H^{(i)}$ has the structure of a principal submatrix of $H$ with the deleted rows left as zeros. Since the norm $\Vert H(\lambda_k+\alpha \tilde d_k^{(i)})- H(\lambda_k)\Vert$ in \eqref{eqn_theo_unit_step_size_pf_20} is the maximum eigenvalue modulus of the matrix $H(\lambda_k+\alpha \tilde d_k^{(i)})- H(\lambda_k)$, it is larger than the norm $\Vert \tilde H^{(i)}(\lambda_k+\alpha \tilde d_k^{(i)})- \tilde H^{(i)}(\lambda_k)\Vert$ because the latter is the maximum over a subset of the eigenvalues of the former. Combining this observation with \eqref{eqn_theo_unit_step_size_pf_20} yields
\begin{equation}\Vert \tilde H^{(i)}(\lambda_k+\alpha \tilde d_k^{(i)})- \tilde H^{(i)}(\lambda_k)\Vert\le \alpha L\Vert \tilde d_k^{(i)}\Vert.\label{LL}\end{equation}
Now interpret the update in \eqref{eqn_theo_unit_step_size_pf_10} as a function of the stepsize $\alpha$ by defining
\begin{equation}\label{eqn_theo_unit_step_size_pf_50}
\tilde q_i(\alpha) := q_i(\lambda_k+\alpha\tilde d_k^{(i)}).
\end{equation}
Differentiating with respect to $\alpha$ and using the definition of the local gradient $\tilde g^{(i)}_k$ we get the derivative of $\tilde q_i(\alpha)$ as
\begin{equation}\label{eqn_theo_unit_step_size_pf_60}
\tilde q_i'(\alpha)
= \nabla q_i (\lambda_k+\alpha\tilde d_k^{(i)})' \tilde d_k^{(i)}
= \tilde g^{(i)}(\lambda_k+\alpha\tilde d_k^{(i)})' \tilde d_k^{(i)}.
\end{equation}
Differentiating with respect to $\alpha$ a second time and using the definition of $\tilde H^{(i)}$ in \eqref{RH} yields
\begin{align}\label{eqn_theo_unit_step_size_pf_70}
\tilde q_i''(\alpha)
&=\ \tilde d_k^{(i)}\phantom{}' \nabla^2 q_i(\lambda_k+\alpha\tilde d_k^{(i)})\tilde d_k^{(i)}\nonumber\\
&=\ \tilde d_k^{(i)}\phantom{}' \tilde H^{(i)}(\lambda_k+\alpha\tilde d_k^{(i)})\tilde d_k^{(i)}.
\end{align}
Return now to (\ref{LL}), bound the matrix norm on the left hand side from below by left and right multiplication with the unit vector $\tilde d_k^{(i)}/\|\tilde d_k^{(i)}\|$, and multiply through by $\Vert \tilde d_k^{(i)}\Vert^2$. This yields
\begin{align}\label{eqn_theo_unit_step_size_pf_80}
\tilde d_k^{(i)}\phantom{}'
\left[\tilde H^{(i)}(\lambda_k+\alpha \tilde d_k^{(i)})\!- \tilde H^{(i)}(\lambda_k)\right]
\tilde d_k^{(i)}
\!\le \alpha L\Vert \tilde d_k^{(i)}\Vert^3.
\end{align}
Comparing the expressions for the derivatives $\tilde q_i''(\alpha)$ in \eqref{eqn_theo_unit_step_size_pf_70} with the left hand side of \eqref{eqn_theo_unit_step_size_pf_80} we can simplify the latter to
\[\tilde q_i''(\alpha)-\tilde q_i''(0)\le \alpha L\Vert \tilde d_k^{(i)}\Vert^3.\]
Integrating the above expression with respect to $\alpha$ results in
\[\tilde q_i'(\alpha)-\tilde q_i'(0)\le \frac{\alpha^2}{2} L\Vert \tilde d_k^{(i)}\Vert^3+\alpha\tilde q_i''(0),\]
which upon a second integration with respect to $\alpha$ yields
\[\tilde q_i(\alpha)-\tilde q_i(0)\le \frac{\alpha^3}{6} L\Vert \tilde d_k^{(i)}\Vert^3+\frac{\alpha^2}{2}\tilde q_i''(0)+\alpha\tilde q_i'(0).\]
Since we are interested in the unit stepsize, substitute $\alpha=1$ and the definitions of the derivatives $\tilde q_i'(0)$ and $\tilde q_i''(0)$ given in \eqref{eqn_theo_unit_step_size_pf_60} and \eqref{eqn_theo_unit_step_size_pf_70} to get
\begin{equation*}
\tilde q_i(1)-\tilde q_i(0)\le
\frac{L}{6} \Vert \tilde d_k^{(i)}\Vert^3 \!
+ \frac{1}{2} \tilde d_k^{(i)}\phantom{}'\tilde H^{(i)}(\lambda_k)\tilde d_k^{(i)}
+ \tilde g_k^{(i)}\phantom{}' \tilde d_k^{(i)}.
\end{equation*}
Since according to (\ref{RH}) the reduced Hessian $\tilde H^{(i)}$ has the structure of a principal submatrix of the Hessian $H$ and $H\succeq 0$ it follows that $0\preceq\tilde H^{(i)}\preceq H$ and that as a consequence \[\tilde d_k^{(i)}\phantom{}' \tilde H^{(i)}(\lambda_k)\tilde d_k^{(i)} \le \tilde d_k^{(i)}\phantom{}' H_k \tilde d_k^{(i)}.\]
Incorporating this latter relation and the definition of the local update $\tilde d_k^{(i)} = \bar H^{(N)}_k \tilde g^{(i)}_k$ in the previous equation we obtain
\begin{align}\label{eqn_theo_unit_step_size_pf_140}
\tilde q_i(1)-\tilde q_i(0)\le & \
\frac{L}{6}\Vert \bar H^{(N)}_k \tilde g^{(i)}_k\Vert^3 \\&\hspace{-4mm}\nonumber
+ \frac{1}{2}\left( \bar H^{(N)}_k \tilde g^{(i)}_k\right)' H_k\bar H^{(N)}_k \tilde g^{(i)}_k
- \tilde g_k^{(i)}\phantom{}' \bar H^{(N)}_k \tilde g^{(i)}_k.
\end{align}
Consider now the last term in the right hand side and recall the sparsity pattern of the local gradient $\tilde g_k^{(i)}$ to write
\begin{align}\label{eqn_theo_unit_step_size_pf_150}
-\tilde g_k^{(i)}\phantom{}' \bar H^{(N)}_k \tilde g^{(i)}_k = \sum_{j\in \mathcal{N}_i^{(N)}} g_k^j d_k^j,
\end{align}
and further split the right hand side of \eqref{eqn_theo_unit_step_size_pf_150} to generate suitable structure
\begin{align}\label{eqn_theo_unit_step_size_pf_160}
\sum_{j\in \mathcal{N}_i^{(N)}} g_k^j d_k^j
= \sum_{j\in \mathcal{N}_i^{(N)}} \left[\sigma{g_k^j d_k^j} + (1-\sigma){g_k^j d_k^j}\right].
\end{align}
Substitute now \eqref{eqn_theo_unit_step_size_pf_160} into \eqref{eqn_theo_unit_step_size_pf_150} and the result into \eqref{eqn_theo_unit_step_size_pf_140} to write
\begin{align*}
\tilde q_i(1)-\tilde q_i(0)\le \
\frac{L}{6}&\Vert \bar H^{(N)}_k \tilde g^{(i)}_k\Vert^3
+ \frac{1}{2}\tilde d_k^{(i)}\phantom{}' H\tilde d_k^{(i)}\\&
+ \sigma \sum_{j\in \mathcal{N}_i^{(N)}} {g_k^j d_k^j}
+ (1-\sigma)\sum_{j\in \mathcal{N}_i^{(N)}}{g_k^j d_k^j}.
\end{align*}
Using the expression for the quadratic form in (\ref{eqn_theo_unit_step_size_pf_150}) to substitute the last term in the previous equation yields
\begin{align}\label{terms}
\tilde q_i(1)-\tilde q_i(0)\le \
\frac{L}{6}&\Vert\bar H_k ^{(N)}\tilde g_k^{(i)}\Vert^3
+\frac{1}{2}\tilde d_k^{(i)}\phantom{}' H\tilde d_k^{(i)}\\&
+\sigma \sum_{j\in \mathcal{N}_i^{(N)}} {g_k^j d_k^j}
-(1-\sigma)\tilde g^{(i)}_k\phantom{}' \bar H_k ^{(N)}\tilde g_k^{(i)}\nonumber
\end{align}
Further note that from the definition of $\tilde d^{(i)}$ it follows that
\[\tilde d_k^{(i)}\phantom{}' H_k\tilde d_k^{(i)} = \tilde g_k^{(i)}\phantom{}'\bar H_k^{(N)} H_k\bar H_k^{(N)}\tilde g_k^{(i)}.\]
The right hand side of this latter equality can be bounded using the Cauchy--Schwarz inequality and the submultiplicativity of matrix norms as
\[\tilde g_k^{(i)}\phantom{}'\bar H_k^{(N)} H_k\bar H_k^{(N)}\tilde g_k^{(i)}\le \|\tilde g_k^{(i)}\|\, \| \bar H_k^{(N)}\|\, \|H_k \bar H_k^{(N)}\|\, \|\tilde g_k^{(i)}\|.\]
The norm $\|H_k \bar H_k^{(N)}\|$ can be further bounded using the result $\|H_k\bar H^{(N)}_k\|\le \bar \rho^{N+1}+1$ from \cite{acc11}. The norm $\|\bar H^{(N)}_k\|$ can be bounded as $\|\bar H^{(N)}_k\|\le M$ according to Assumption \ref{cond}. These two observations substituted in the last displayed equation yield
\begin{equation}\tilde d_k^{(i)}\phantom{}' H_k\tilde d_k^{(i)} \le M(\bar\rho^{N+1}+1)\|\tilde g_k^{(i)}\|^2. \label{term2}\end{equation}
Applying the bound $\|\bar H^{(N)}_k\|\le M$ from Assumption \ref{cond} to the norm $\Vert\bar H_k ^{(N)}\tilde g_k^{(i)}\Vert^3$ we get $\Vert\bar H_k ^{(N)}\tilde g_k^{(i)}\Vert^3 \le M^3 \|\tilde g_k^{(i)}\|^3$. Since Assumption \ref{cond} also guarantees that $\|\bar H^{(N)}_k\|\ge m$, we have \[\frac{\tilde g_k^{(i)}\phantom{}'\bar H_k^{(N)} \tilde g_k^{(i)}}{\|\tilde g_k^{(i)}\|^2} \ge m.\] Therefore, we can write
\begin{equation}
\Vert\bar H_k ^{(N)}\tilde g_k^{(i)}\Vert^3 \le \frac{M^3}{m}\|\tilde g_k^{(i)}\|\ \tilde g_k^{(i)}\phantom{}'\bar H_k^{(N)} \tilde g_k^{(i)}.
\label{term1}
\end{equation}
Substituting the relations (\ref{term2}) and (\ref{term1}) in relation (\ref{terms}) and factoring we get
\begin{align*}
& \tilde q_i(1)-\tilde q_i(0)\le \
\sigma\sum_{j\in \mathcal{N}_i^{(N)}} {g_k^j d_k^j}\\&\hspace{3mm}
+ \tilde g^{(i)}_k\phantom{}' \bar H_k ^{(N)}\tilde g_k^{(i)}
\left[-(1-\sigma)+\frac{LM^3}{6m}\Vert \tilde g_k^{(i)}\Vert+ \frac{\bar\rho^{N+1}+1}{2}\right].
\end{align*}
Use $\Vert \tilde g_k^{(i)}\Vert\le \|g_k\|\le \frac{6m}{LM^3}\left(\frac{1-\bar\rho^{N+1}}{2}-\sigma\right)$ to write
\begin{align*}
& \tilde q_i(1)-\tilde q_i(0)\le
\sigma\sum_{j\in \mathcal{N}_i^{(N)}} {g_k^j d_k^j} \\&
+ M \Vert \tilde g_k^{(i)}\Vert^2
\left[-(1-\sigma)
+\left(\frac{1-\bar\rho^{N+1}}{2}-\sigma\right)
+\frac{\bar\rho^{N+1}+1}{2} \right].
\end{align*}
Algebraic simplification of the bracketed portion yields
\begin{equation}\left[-(1-\sigma)+\left(\frac{1-\bar\rho^{N+1}}{2}-\sigma\right)+\frac{\bar\rho^{N+1}+1}{2} \right]=0\label{cancel}.\end{equation}
Thus we have \[\tilde q_i(1)-\tilde q_i(0)\le \sigma\sum_{j\in \mathcal{N}_i^{(N)}} {g_k^j d_k^j}.\]
Substituting the definition of $\tilde q_i(\alpha)$ in \eqref{eqn_theo_unit_step_size_pf_50} together with \eqref{eqn_theo_unit_step_size_pf_10} into this equation we arrive at
\[q_i\left(\lambda_{k}+d_k^{(N)}\right)\le q_i(\lambda_k)+\sigma\sum_{j\in \mathcal{N}_i^{(N)}} {d_k^j g_k^j},\]
which means that the exit condition \eqref{armijoN} in Algorithm \ref{DBLS} is met with $\alpha=1$.
\end{proof}
Theorem \ref{a1} guarantees that for an appropriately chosen line search parameter $\sigma$, the local backtracking line search always chooses a stepsize of $\alpha=1$ once the norm of the dual gradient becomes small. Furthermore, the condition on the line search parameter tells us that $\bar\rho$ and our choice of $N$ fully capture the impact of distributing the line search: the distributed Armijo rule requires $\left(1-\bar\rho^{N+1}-2\sigma\right)>0$ while the standard Armijo rule requires $(1-2\sigma)>0$. In the limit $N\rightarrow \infty$ these conditions coincide, at a rate controlled by $\bar\rho$.
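To make the interplay between $\bar\rho$, $N$, and $\sigma$ concrete, the following snippet (with hypothetical values for the constants $L$, $M$, $m$, and $\bar\rho$) evaluates the admissible range for $\sigma$ and the gradient threshold of Theorem \ref{a1}:
\begin{verbatim}
L, M, m = 1.0, 2.0, 0.5            # assumed problem constants
for rho_bar, N in [(0.8, 1), (0.8, 3), (0.5, 1)]:
    sigma_max = (1 - rho_bar ** (N + 1)) / 2
    sigma = 0.5 * sigma_max        # any sigma in (0, sigma_max) works
    g_thresh = 3 * m / (L * M**3) * (1 - rho_bar**(N + 1) - 2 * sigma)
    print(rho_bar, N, sigma_max, g_thresh)
\end{verbatim}
Larger $N$ (or smaller $\bar\rho$) widens the admissible range for $\sigma$ and enlarges the unit-stepsize neighborhood, consistent with the discussion above.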
\subsection{Strict Decrease Phase}
A second fundamental property of the backtracking line search with the Armijo rule is that there is a strict decrease in the objective when iterates are outside of an arbitrary noninfinitesimal neighborhood of the optimal solution. This property is necessary to ensure global convergence of Newton's algorithm as it ensures the quadratic convergence phase is eventually reached. Our goal here is to prove that this strict decrease can be also achieved using the distributed backtracking line search specified by Algorithm \ref{DBLS}.
Traditional analysis of the centralized backtracking line search of Algorithm \ref{BLS} leverages a lower bound on the stepsize $\alpha$ to prove strict decrease. We take the same approach here and begin by finding a global lower bound on the stepsize $\hat\alpha\le\alpha_i$ that holds for all nodes $i$. We do this in the following lemma.
\begin{lemma}
Consider the distributed line search in Algorithm \ref{DBLS} with parameter $N$, starting point $\lambda=\lambda_k$, and descent direction $d = d_k^{(N)} = -\bar H_k^{(N)} g_k$ computed by the ADD-$N$ algorithm [cf. \eqref{ADDd} and \eqref{ADD_h}]. The stepsize
\[\hat \alpha = {2(1-\sigma)}\frac{m^2}{M^2}\]
satisfies the local Armijo rule in \eqref{armijoN}, i.e.,
\[q_i(\lambda_{k}+\hat\alpha d_k)\le q_i(\lambda_k)+\sigma \hat{\alpha}\sum_{j\in \mathcal{N}_i^{(N)}} {d_k^j g_k^j}\]
for all network nodes $i$ and all $k$.
\label{ahat}
\end{lemma}
\begin{proof}
From the mean value theorem centered at $\lambda_k$ we can write the dual function's value as
\[q_i(\lambda_k+\alpha \tilde d_k^{(i)}) = q_i(\lambda_k)+\alpha\tilde g_k^{(i)}\phantom{}' \tilde d_k^{(i)}+\frac{\alpha^2}{2} \tilde d_k^{(i)}\phantom{}' \tilde H^{(i)} (z) \tilde d_k^{(i)}\]
where the vector $z= \lambda_k + t\alpha\tilde d_k^{(i)}$ for some $t\in (0,1)$; see, e.g., \cite[Section 9.1]{boydbook}. We use the relation $0\preceq\tilde H^{(i)}\preceq H$ and the bound $\|H^{-1}\|>m$ from Assumption \ref{cond} to transform this equality into the bound
\[q_i(\lambda_k+\alpha \tilde d_k^{(i)}) \le q_i(\lambda_k)+\alpha\tilde g_k^{(i)}\phantom{}' \tilde d_k^{(i)}+\frac{\alpha^2}{2 m} \Vert\tilde d_k^{(i)}\Vert^2.\]
Introduce now a splitting of the term $\alpha\tilde g_k^{(i)} \tilde d_k^{(i)}$ to generate convenient structure
\begin{align*}
q_i(\lambda_k+\alpha \tilde d_k^{(i)}) \le\ &
q_i(\lambda_k)\\& \hspace{-8mm}
+\alpha\sigma\tilde g_k^{(i)}\phantom{}' \tilde d_k^{(i)}
+ \alpha(1-\sigma)\tilde g_k^{(i)}\phantom{}' \tilde d_k^{(i)}
+\frac{\alpha^2}{2 m}\Vert\tilde d_k^{(i)}\Vert^2.
\end{align*}
Further apply the definition of the local update vector $\tilde d_k^{(i)} := \bar H^{(N)}_k \tilde g^{(i)}_k$ and use the well-conditioning of the approximate inverse Hessian $\bar H_k^{(N)}$ as per Assumption \ref{cond} to claim that $m\le \|\bar H_k^{(N)}\|\le M$ and obtain
\begin{align*}
q_i(\lambda_k+\alpha \tilde d_k^{(i)}) & \le \
q_i(\lambda_k) \\& \hspace{-12mm}
+\alpha\sigma\tilde g_k^{(i)}\phantom{}' \tilde d_k^{(i)}
- \alpha(1-\sigma)m\|\tilde g_k^{(i)}\|^2
+ \frac{\alpha^2M^2}{2 m} \Vert\tilde g_k^{(i)}\Vert^2.
\end{align*}
Factoring common terms in this latter equation yields
\begin{align*}
q_i(\lambda_k+\alpha \tilde d_k^{(i)}) & \le \
q_i(\lambda_k)\\&\hspace{-8mm}
+ \alpha\sigma\tilde g_k^{(i)}\phantom{}' \tilde d_k^{(i)}
+\alpha m\|\tilde g_k^{(i)}\|^2 \left[-(1-\sigma)+\frac{\alpha M^2}{2m^2}\right].
\end{align*}
Substituting $\hat \alpha$ for $\alpha$ in this inequality, we observe that $[-(1-\sigma)+{\hat\alpha M^2}/({2m^2})]=0$, so the last term vanishes from this expression. Therefore
\[q_i(\lambda_k+\hat\alpha \tilde d_k^{(i)}) \le q_i(\lambda_k)+\hat\alpha\sigma\tilde g_k^{(i)}\phantom{}' \tilde d_k^{(i)}.\]
From the definitions of the local gradient $\tilde g^{(i)}_k$ as the sparse vector with nonzero elements $[\tilde g^{(i)}_k]_j=g_k^j$ for $j\in \mathcal{N}_i^{(N)}$ and the local update vector $\tilde d_k^{(i)} := \bar H^{(N)}_k \tilde g^{(i)}_k$, the desired result follows. \end{proof}
We proceed with the second main result using Lemma \ref{ahat}, in the same manner that strict decrease is proven for the Newton method with the standard backtracking line search in \cite[Section 9.5]{boydbook}.
\begin{theorem}
Consider the distributed line search in Algorithm \ref{DBLS} with parameter $N$, starting point $\lambda=\lambda_k$, and descent direction $d = d_k^{(N)} = -\bar H_k^{(N)} g_k$ computed by the ADD-$N$ algorithm [cf. \eqref{ADDd} and \eqref{ADD_h}]. If the norm of the dual gradient is bounded away from zero as $\|g_k\|\ge \eta$, the function value at $\lambda_{k+1} = \lambda_k + \alpha_k d^{(N)}_k$ satisfies
\[q(\lambda_{k+1})-q(\lambda_k) \le -\beta \hat \alpha \sigma m N \eta^2,\]
i.e., the dual function decreases by at least $\beta \hat \alpha \sigma m N \eta^2$.
\label{strict}
\end{theorem}
\begin{proof}
According to Lemma \ref{ahat} we have
According to Lemma \ref{ahat}, the stepsize $\hat \alpha$ satisfies the exit condition in (\ref{armijoN}) at every node, and so does any $\alpha\le \hat\alpha$. The backtracking loop in Algorithm \ref{DBLS} therefore exits with a stepsize of at least $\beta \hat \alpha$, which yields
\[q_i(\lambda_{k+1})-q_i(\lambda_k) \le \beta \hat \alpha \sigma \tilde g^{(i)}_k\phantom{}' \tilde d^{(i)}.\]
Applying Assumption \ref{cond} with the definition of $\tilde d^{(i)}$ we get
\[q_i(\lambda_{k+1})-q_i(\lambda_k) \le -\beta\hat \alpha \sigma m\|\tilde g^{(i)}_k\|^2.\]
Summing over all $i$,
\[q(\lambda_{k+1})-q(\lambda_k) \le -\beta\hat \alpha \sigma m \sum_{i=1}^n \|\tilde g^{(i)}_k\|^2.\]
Using the definition of the 2-norm we can write $\sum_{i=1}^n \|\tilde g^{(i)}_k\|^2 = \sum_{i=1}^n \sum_{j\in \mathcal{N}_i^{(N)}} (g_k^j)^2$. Counting the appearances of each $(g_k^j)^2$ term in this sum we have that $\sum_{i=1}^n \sum_{j\in \mathcal{N}_i^{(N)}} (g_k^j)^2 = \sum_{i=1}^n |\mathcal{N}_i^{(N)}| (g_k^i)^2$. Notice however that since the network is connected it must be that $|\mathcal{N}_i^{(N)}|\ge N$, from which it follows that $\sum_{i=1}^n \sum_{j\in \mathcal{N}_i^{(N)}} (g_k^j)^2 \geq N \sum_{i=1}^n (g^i_k)^2$. Substituting this bound into the above equation yields
\[q(\lambda_{k+1})-q(\lambda_k) \le -\beta\hat \alpha \sigma m N\sum_{i=1}^n (g^i_k)^2\]
Observe now that $\sum_{i=1}^n (g^i_k)^2= \|g_k\|^2$ and substitute the lower bound $\eta\le \|g_k\|$ to obtain the desired relation.
\end{proof}
\begin{figure}[t]
\includegraphics[width=\columnwidth]{example.pdf}
\centering
\caption{\label{ex} The distributed line search results in solution trajectories nearly equivalent to those of the centralized line search. Top: the Primal Objective follows a similar trajectory in both cases. Middle: Primal Feasibility is achieved asymptotically. Bottom: unit stepsize is achieved in roughly the same number of steps.}
\end{figure}
Theorem \ref{strict} guarantees global convergence into any error neighborhood $\|g_k\|\le \eta$ around the optimal value because the dual objective strictly decreases by at least the noninfinitesimal quantity $\beta\hat \alpha \sigma m N \eta^2$ while we remain outside of this neighborhood. In particular, we are guaranteed to reach a point inside the neighborhood $\|g_k\| \le \eta = 3m/(LM^3)\left({1-\bar\rho^{N+1}}-2\sigma\right)$, at which point Theorem \ref{a1} applies and the ADD-$N$ algorithm with the local line search becomes simply
\[ \lambda_{k+1} = \lambda_k - \bar H^{(N)}_k g_k.\]
This iteration is shown to have quadratic convergence properties in \cite{acc11}.
\begin{figure}[t]
\includegraphics[width=\columnwidth]{data.pdf}
\centering
\caption{
\label{data}The distributed line search reaches unit stepsize in 2 to 3 iterations. Fifty simulations were done for each algorithm with $N=1$, $N=2$ and $N=3$, and for networks with 25 nodes and 100 edges (small), 50 nodes and 200 edges (medium), and 100 nodes and 400 edges (large).}
\end{figure}
\section{Numerical results}
Numerical experiments demonstrate that the distributed version of the backtracking line search is functionally equivalent to the centralized backtracking line search when the descent direction is chosen by the ADD method. The simulations use networks in which edges are added uniformly at random, restricted to connected networks. The primal objective function is given by $\phi_e(x^e) = e^{c x^e}+e^{-c x^e}$ where $c$ captures the notion of edge capacity. For simplicity we let $c=1$ for all edges.
Figure \ref{ex} shows an example of a network optimization problem with 25 nodes and 100 edges being solved using ADD-1 with the centralized and distributed backtracking line searches. The top plot shows that the trajectory of the primal objective is not significantly affected by the choice of line search. The middle plot shows that primal feasibility is approached asymptotically at the same rate for both algorithms. The bottom plot shows that a unit stepsize is achieved in roughly the same number of steps.
In Figure \ref{data} we look more closely at the number of steps required to reach a unit stepsize. We compare the distributed backtracking line search to its centralized counterpart on networks with 25 nodes and 100 edges, 50 nodes and 200 edges, and 100 nodes and 400 edges. For each network optimization problem generated we implemented distributed optimization using ADD-1, ADD-2, and ADD-3. Most trials required only 2 or 3 iterations to reach $\alpha=1$ for both the centralized and distributed line searches. The variation came from the few trials which required significantly more iterations. As might be expected, increasing $N$ brings the distributed and centralized algorithms closer to each other. When we increase the size of the network, most trials still require only 2 to 3 iterations to reach $\alpha=1$, but the cases that take longer jump from around 10 iterations for the 25-node networks to around 40 iterations for the 100-node networks.
\section{Conclusion}
We presented an alternative version of the backtracking line search using a local version of the Armijo rule which allows the stepsize for the dual update in the single commodity network flow problem to be computed using only local information. When this distributed backtracking line search technique is paired with the ADD method for selecting the dual descent direction we recover the key properties of the standard centralized backtracking line search: a strict decrease in the dual objective and unit stepsize in a region around the optimal point. We use simulations to demonstrate that the distributed backtracking line search is functionally equivalent to its centralized counterpart.
This work focuses on line searches when the ADD-$N$ method is used to select the descent direction; however, the proof method relies primarily on the sparsity structure of the inverse Hessian approximation. This implies that our line search method could be applied with other descent directions, provided they themselves depend only on local information.
\vfill
\bibliographystyle{amsplain}
\section{Introduction}
In the past few years, the synthesis of ferromagnetic semiconductors has become a major challenge for spintronics. Indeed, growing a material that is both magnetic and semiconducting could lead to promising advances such as spin injection into non-magnetic semiconductors, or the electrical manipulation of carrier-induced magnetism in magnetic semiconductors \cite{ohno00,Bouk02}. Up to now, major efforts have focused on diluted magnetic semiconductors (DMS) in which the host semiconducting matrix is randomly substituted by transition metal (TM) ions such as Mn, Cr, Ni, Fe or Co \cite{Diet02}. However, Curie temperatures ($T_{C}$) in DMS remain rather low, and TM concentrations must be raised drastically in order to increase $T_{C}$ up to room temperature, which usually leads to phase separation and the formation of secondary phases. It was recently shown that phase separation induced by spinodal decomposition could lead to a significant increase of $T_{C}$ \cite{Diet06,Fuku06}. For semiconductors showing $T_{C}$ higher than room temperature, one can foresee the fabrication of nanodevices such as memory nanodots or nanochannels for spin injection. Therefore, the precise control of inhomogeneities appears as a new challenge which may open the way to industrial applications of ferromagnetism in semiconductors.
The increasing interest in group-IV magnetic semiconductors can also be explained by their potential compatibility with existing silicon technology. In 2002, carrier-mediated ferromagnetism was reported in MBE-grown Ge$_{0.94}$Mn$_{0.06}$ films by Park \textit{et al.} \cite{Park02}, with a maximum critical temperature of 116 K. Recently, many publications have indicated a significant increase of $T_{C}$ in Ge$_{1-x}$Mn$_{x}$ material depending on growth conditions \cite{Pint05,Li05,tsui03}. Cho \textit{et al.} reported a Curie temperature as high as 285 K \cite{Cho02}.
Taking into account the strong tendency of Mn ions to form intermetallic compounds in germanium, a detailed investigation of the nanoscale structure is required. Up to now, only a few studies have focused on the nanoscale composition of Ge$_{1-x}$Mn$_{x}$ films. Local chemical inhomogeneities have been recently reported by Kang \textit{et al.} \cite{Kang05}, who evidenced a micrometer-scale segregation of manganese in large Mn-rich stripes. Ge$_3$Mn$_5$ as well as Ge$_8$Mn$_{11}$ clusters embedded in a germanium matrix have been reported by many authors; however, Curie temperatures never exceed 300 K \cite{Bihl06,Morr06,Pass06,Ahle06}. Ge$_3$Mn$_5$ clusters exhibit a Curie temperature of 296 K \cite{Mass90}. This phase, frequently observed in Ge$_{1-x}$Mn$_{x}$ films, is the most stable (Ge,Mn) alloy. The other stable compound, Ge$_8$Mn$_{11}$, has also been observed in nanocrystallites surrounded with pure germanium \cite{Park01}. Ge$_8$Mn$_{11}$ and Ge$_3$Mn$_5$ phases are ferromagnetic, but their metallic character considerably complicates their potential use as spin injectors.
Recently, some new Mn-rich nanostructures have been evidenced in Ge$_{1-x}$Mn$_{x}$ layers. Sugahara \textit{et al.} \cite{Sugh05} reported the formation of high Mn content (between 10 \% and 20 \% of Mn) amorphous Ge$_{1-x}$Mn$_x$ precipitates in a Mn-free germanium matrix. Mn-rich coherent cubic clusters were observed by Ahlers \textit{et al.} \cite{Ahle06}, which exhibit Curie temperatures below 200 K. Finally, high-$T_{C}$ ($>$ 400 K) Mn-rich nanocolumns have been evidenced \cite{Jame06}, which could lead to silicon compatible room temperature operational devices.\\
In the present paper, we investigate the structural and magnetic properties of Ge$_{1-x}$Mn$_x$ thin films for low growth temperatures ($<$ 200$^{\circ}$C) and low Mn concentrations (between 1 \% and 11 \%). By combining TEM, x-ray diffraction and SQUID magnetometry, we could identify different magnetic phases. We show that depending on growth conditions, we obtain either Mn-rich nanocolumns or Ge$_{3}$Mn$_{5}$ clusters embedded in a germanium matrix. We discuss the structural and magnetic properties of these nanostructures as a function of manganese concentration and growth temperature. We also discuss the magnetic anisotropy of nanocolumns and
Ge$_3$Mn$_5$ clusters.
\section{Sample growth}
Growth was performed using solid sources molecular beam epitaxy (MBE) by co-depositing Ge and Mn evaporated from standard Knudsen effusion cells. The deposition rate was low ($\approx$ 0.2 \AA.s$^{-1}$). Germanium substrates were epi-ready Ge(001) wafers with a residual n-type doping of 10$^{15}$ cm$^{-3}$ and a resistivity of 5 $\Omega.cm$. After thermal desorption of the surface oxide, a 40 nm thick Ge buffer layer was grown at 250$^{\circ}$C, resulting in a 2 $\times$ 1 surface reconstruction as observed by reflection high energy electron diffraction (RHEED) (see Fig. 1a). Next, 80 nm thick Ge$_{1-x}$Mn$_{x}$ films were subsequently grown at low substrate temperature (from 80$^{\circ}$C to 200$^{\circ}$C). The Mn content has been determined by x-ray fluorescence measurements performed on thick samples ($\approx$ 1 $\mu m$ thick) and by complementary Rutherford backscattering (RBS) on thin Ge$_{1-x}$Mn$_{x}$ films grown on silicon. Mn concentrations range from 1 \% to 11 \%.
For Ge$_{1-x}$Mn$_{x}$ films grown at substrate temperatures below 180$^{\circ}$C, after the first monolayer (ML) deposition, the 2 $\times$ 1 surface reconstruction almost totally disappears. After depositing few MLs, a slightly diffuse 1 $\times$ 1 streaky RHEED pattern and a very weak 2 $\times$ 1 reconstruction (Fig. 1b) indicate a predominantly two-dimensional growth. For growth temperatures above 180$^{\circ}$C additional spots appear in the RHEED pattern during the Ge$_{1-x}$Mn$_{x}$ growth (Fig. 1c). These spots may correspond to the formation of very small secondary phase crystallites. The nature of these crystallites will be discussed below.
Transmission electron microscopy (TEM) observations were performed using a JEOL 4000EX microscope with an acceleration voltage of 400 kV. Energy filtered transmission electron microscopy (EFTEM) was done using a JEOL 3010 microscope equipped with a Gatan Image Filter. Sample preparation was carried out by standard mechanical polishing and argon ion milling for cross-section investigations, while plane views were prepared by wet etching with a H$_3$PO$_4$-H$_2$O$_2$ solution \cite{Kaga82}.
\begin{figure}[htb]
\center
\includegraphics[width=.29\linewidth]{./fig1a.eps}
\includegraphics[width=.29\linewidth]{./fig1b.eps}
\includegraphics[width=.29\linewidth]{./fig1c.eps}
\caption{RHEED patterns recorded during the growth of Ge$_{1-x}$Mn$_{x}$ films: (a) 2 $\times$ 1 surface reconstruction of the germanium buffer layer. (b) 1 $\times$ 1 streaky RHEED pattern obtained at low growth temperatures ($T_g<$180$^{\circ}$C). (c) RHEED pattern of a sample grown at $T_g=$180$^{\circ}$C. The additional spots reveal the presence of Ge$_3$Mn$_5$ clusters at the surface of the film.}
\label{fig1}
\end{figure}
\section{Structural properties \label{structural}}
\begin{figure}[htb]
\center
\includegraphics[width=.49\linewidth]{./fig2a.eps}
\includegraphics[width=.49\linewidth]{./fig2b.eps}
\includegraphics[width=.49\linewidth]{./fig2c.eps}
\includegraphics[width=.49\linewidth]{./fig2d.eps}
\caption{Transmission electron micrographs of a Ge$_{1-x}$Mn$_{x}$ film grown at 130$^{\circ}$C and containing 6 \% of manganese. (a) cross-section along the [110] axis : we clearly see the presence of nanocolumns elongated along the growth axis. (b) High resolution image of the interface between the Ge$_{1-x}$Mn$_{x}$ film and the Ge buffer layer. The Ge$_{1-x}$Mn$_{x}$ film exhibits the same diamond structure as pure germanium. No defect can be seen which could be caused by the presence of nanocolumns. (c) Plane view micrograph performed on the same sample confirms the columnar structure and gives the density and size distribution of nanocolumns. (d) Mn chemical map obtained by energy filtered transmission electron microscopy (EFTEM). The background was carefully subtracted from pre-edge images. Bright areas correspond to Mn-rich regions.}
\label{fig2}
\end{figure}
In samples grown at 130$^{\circ}$C and containing 6 \% Mn, we observe vertical elongated nanostructures, \textit{i.e.} nanocolumns, as shown in Fig. 2a. Nanocolumns extend through the whole Ge$_{1-x}$Mn$_{x}$ film thickness. From the high resolution TEM image shown in Fig. 2b, we deduce an average diameter of around 3 nm. Moreover, in Fig. 2b the interface between the Ge buffer layer and the Ge$_{1-x}$Mn$_{x}$ film is flat and no defect propagates from the interface into the film. The Ge$_{1-x}$Mn$_{x}$ film is a perfect single crystal in epitaxial relationship with the substrate. Fig. 2c shows a plane view micrograph of the same sample, confirming the presence of nanocolumns in the film. From this image, we can deduce the size and density of the nanocolumns: the density is 13000 $\rm{\mu m}^{-2}$ with a mean diameter of 3 nm, consistent with the cross-section measurements. In order to estimate the chemical composition of these nanocolumns, we further performed chemical mapping using EFTEM. In Fig. 2d we show a cross sectional Mn chemical map of the Ge$_{1-x}$Mn$_{x}$ film. This map shows that the formation of nanocolumns is a consequence of Mn segregation: nanocolumns are Mn-rich and the surrounding matrix is Mn-poor. However, it is impossible to deduce the Mn concentration in the Ge$_{1-x}$Mn$_{x}$ nanocolumns from this cross section. Indeed, in cross section observations, the column diameter is much smaller than the probed film thickness and the signal comes from the superposition of the Ge matrix and the Mn-rich nanocolumns. In order to quantify the Mn concentration inside the nanocolumns and inside the Ge matrix, EELS measurements (not shown here) have been performed in a plane view geometry \cite{Jame06}. These observations revealed that the matrix Mn content is below 1 \% (the detection limit of our instrument). Measuring the surface area occupied by the matrix and the nanocolumns in plane view TEM images, and considering the average Mn concentration in the sample (6 \%), we can estimate the Mn concentration in the nanocolumns: since the matrix Mn concentration measured by EELS lies between 0 \% and 1 \%, we can conclude that the Mn content in the nanocolumns is between 30 \% and 38 \%.\\
For samples grown between 80$^\circ$C and 150$^\circ$C cross section and plane view TEM observations reveal the presence of Mn rich nanocolumns surrounded with a Mn poor Ge matrix. In order to investigate the influence of Mn concentration on the structural properties of Ge$_{1-x}$Mn$_{x}$ films, ten samples have been grown at 100$^\circ$C and at 150$^\circ$C with Mn concentrations of 1.3 \%, 2.3 \%, 4 \%, 7 \% and 11.3 \%. Their structural properties have been investigated by plane view TEM observations.
\begin{figure}[htb]
\center
\includegraphics[width=.98\linewidth]{./fig3a.eps}
\includegraphics[width=.45\linewidth]{./fig3b.eps}
\includegraphics[width=.45\linewidth]{./fig3c.eps}
\caption{Nanocolumns size and density as a function of growth conditions. The samples considered have been grown at 100$^{\circ}$C and 150$^{\circ}$C respectively. (a) Mn concentration dependence of the size distribution. (b) Column density as a function of Mn concentration. (c) Volume fraction of the nanocolumns as a function of Mn concentration.}
\label{fig3}
\end{figure}
For samples grown at 100$^\circ$C with Mn concentrations below 5 \%, the nanocolumns' mean diameter is 1.8$\pm$0.2 nm. The evolution of the column density as a function of Mn concentration is reported in Fig. 3b. By increasing the Mn concentration from 1.3 \% to 4 \%, we observe a significant increase of the column density from 13000 to 30000 $\rm{\mu m}^{-2}$. For Mn concentrations higher than 5 \%, the density seems to reach a plateau around 35000 $\rm{\mu m}^{-2}$ and the diameter slightly increases from 1.8 nm at 4 \% to 2.8 nm at 11.3 \%. By plotting the volume fraction occupied by the columns in the film as a function of Mn concentration, we observe a linear dependence for Mn contents below 5 \%. The non-linear behavior above 5 \% may indicate that the mechanism of Mn incorporation is different in this concentration range, leading to an increase of the Mn concentration in the columns or in the matrix. For samples grown at 100$^\circ$C, nanocolumns are always fully coherent with the surrounding matrix (Fig. 4a).
Increasing the Mn content in the samples grown at 150$^\circ$C from 1.3 \% to 11.3 \% leads to a decrease of the column density (Fig. 3b). Moreover, their average diameter increases significantly and the size distributions become very broad (see Fig. 3a). For the highest Mn concentration (11.3 \%) we observe the coexistence of very small columns with a diameter of 2.5 nm and very large columns with a diameter of 9 nm. In samples grown at 150$^\circ$C containing 11.3 \% of Mn, the crystalline structure of the nanocolumns is also highly modified. In plane view TEM micrographs, one can see columns exhibiting several different crystalline structures. We still observe some columns which are fully coherent with the Ge matrix, as in the samples grown at lower temperature. Nevertheless, observations performed on these samples grown at 150$^\circ$C with 11.3 \% Mn reveal some uniaxially \cite{Jame06} or fully relaxed columns exhibiting a misfit of 4 \% between the matrix and the columns, leading to misfit dislocations at the interface between the column and the matrix (see Fig. 4b). Thus we can conclude that coherent columns are probably in strong compression and the surrounding matrix in tension. On the same samples (T$_g$=150$^{\circ}$C, 11.3 \% Mn), we also observe a large number of highly disordered nanocolumns leading to an amorphous-like TEM contrast (Fig. 4c).
\begin{figure}[htb]
\center
\includegraphics[width=.31\linewidth]{./fig4a.eps}
\includegraphics[width=.31\linewidth]{./fig4b.eps}
\includegraphics[width=.31\linewidth]{./fig4c.eps}
\caption{Plane view high resolution transmission electron micrographs of different types of nanocolumns : (a) typical structure of a column grown at 100$^{\circ}$C. The crystal structure is exactly the same as that of germanium. (b) Partially relaxed nanocolumn. One can see dislocations at the interface between the column and the matrix, leading to stress relaxation. (c) Amorphous nanocolumn. Such columns are typical of samples grown at 150$^{\circ}$C with high Mn contents.}
\label{fig4}
\end{figure}
In conclusion, we have evidenced a complex mechanism of Mn incorporation in Mn doped Ge films grown at low temperature. In particular Mn incorporation is highly inhomogeneous. For very low growth temperatures (below 120$^\circ$C) the diffusion of Mn atoms leads to the formation of Mn rich, vertical nanocolumns. Their density mostly depends on Mn concentration and their mean diameter is about 2 nm. These results can be compared with the theoretical predictions of Fukushima \textit{et al.} \cite{Fuku06}: they proposed a model of spinodal decomposition in (Ga,Mn)N and (Zn,Cr)Te based on layer by layer growth conditions and a strong pair attraction between Mn atoms which leads to the formation of nanocolumns. This model may also properly describe the formation of Mn rich nanocolumns in our samples. Layer by layer growth conditions can be deduced from RHEED pattern evolution during growth. For all the samples grown at low temperature, RHEED observations clearly indicate two-dimensional growth. Moreover, Ge/Ge$_{1-x}$Mn$_{x}$/Ge heterostructures have been grown and observed by TEM (see Fig. 5). Ge$_{1-x}$Mn$_{x}$/Ge (as well as Ge/Ge$_{1-x}$Mn$_{x}$) interfaces are very flat and sharp thus confirming a two-dimensional, layer by layer growth mode. Therefore we can assume that the formation of Mn rich nanocolumns is a consequence of 2D-spinodal decomposition.
\begin{figure}[htb]
\center
\includegraphics[width=.7\linewidth]{./fig5.eps}
\caption{Cross section high resolution micrograph of a Ge/Ge$_{1-x}$Mn$_{x}$/Ge/Ge$_{1-x}$Mn$_{x}$/Ge heterostructure. This sample has been grown at 130 $^{\circ}$C with 6\% Mn. Ge$_{1-x}$Mn$_{x}$ layers are 15 nm thick and Ge spacers 5 nm thick. We clearly see the sharpness of both Ge$_{1-x}$Mn$_{x}$/Ge and Ge/Ge$_{1-x}$Mn$_{x}$ interfaces. Mn segregation leading to the columns formation already takes place in very thin Ge$_{1-x}$Mn$_{x}$ films.}
\label{fig5}
\end{figure}
For growth temperatures higher than 160$^\circ$C, cross section TEM and EFTEM observations (not shown here) reveal the coexistence of two Mn-rich phases: nanocolumns and Ge$_{3}$Mn$_{5}$ nanoclusters embedded in the germanium matrix. A typical high resolution TEM image is shown in figure 6.
Ge$_{3}$Mn$_{5}$ clusters are not visible in RHEED patterns for temperatures below 180$^\circ$C. To investigate the nature of these clusters, we performed x-ray diffraction in the $\theta-2\theta$ mode. Diffraction scans were acquired on a high resolution diffractometer using copper K$_\alpha$ radiation and on the GMT station of the BM32 beamline at the European Synchrotron Radiation Facility (ESRF). Three samples grown at different temperatures and/or annealed at high temperature were investigated. The first two samples are Ge$_{1-x}$Mn$_{x}$ films grown at 130$^\circ$C and 170$^\circ$C respectively. The third one has been grown at 130$^\circ$C and post-growth annealed at 650$^\circ$C. By analysing the x-ray diffraction spectra, we can evidence two different crystalline structures. For the sample grown at 130$^\circ$C, the $\theta-2\theta$ scan only reveals the (004) Bragg peak of the germanium crystal, confirming the good epitaxial relationship between the layer and the substrate, and the absence of secondary phases in the film in spite of a high dynamic range of the order of 10$^7$. For both samples grown at 170$^\circ$C and annealed at 650$^\circ$C, the $\theta-2\theta$ spectra are identical. In addition to the (004) peak of germanium, we observe three additional weak peaks. The first one corresponds to the (002) germanium forbidden peak, which probably comes from a small distortion of the germanium crystal; the two other peaks are respectively attributed to the (002) and (004) Bragg peaks of a secondary phase. The $c$ lattice parameter of the Ge$_3$Mn$_5$ hexagonal crystal is 5.053 \AA \ \cite{Fort90}, which is in very good agreement with the values obtained from the diffraction data for both the (002) and (004) lines, assuming that the $c$ axis of Ge$_3$Mn$_5$ is along the [001] direction of the Ge substrate.
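As a consistency check, the expected peak positions follow directly from Bragg's law, $\lambda = 2d\sin\theta$. The short Python snippet below is a sketch of this calculation (the Cu K$_\alpha$ wavelength value is the standard 1.5406 \AA, inserted here as an assumption):
\begin{verbatim}
import numpy as np

lam = 1.5406      # Cu K-alpha wavelength (angstrom), assumed value
c = 5.053         # Ge3Mn5 c lattice parameter (angstrom)
for l in (2, 4):  # (00l) reflections along the growth direction
    d = c / l                                     # interplanar spacing
    two_theta = 2 * np.degrees(np.arcsin(lam / (2 * d)))
    print(f"(00{l}): 2theta = {two_theta:.1f} deg")
\end{verbatim}
This places the (002) and (004) Ge$_3$Mn$_5$ reflections near $2\theta \approx 35.5^\circ$ and $75.2^\circ$ respectively, well separated from the Ge(004) line.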
\begin{figure}[htb]
\center
\includegraphics[width=.7\linewidth]{./fig6.eps}
\caption{Cross section high resolution transmission electron micrograph of a sample grown at 170$^{\circ}$C. We observe the coexistence of two different Mn-rich phases: Ge$_{1-x}$Mn$_{x}$ nanocolumns and Ge$_3$Mn$_5$ clusters.}
\label{fig6}
\end{figure}
In summary, in a wide range of growth temperatures and Mn concentrations, we have evidenced a two-dimensional spinodal decomposition leading to the formation of Mn-rich nanocolumns in Ge$_{1-x}$Mn$_{x}$ films. This decomposition is probably the consequence of: $(i)$ a strong pair attraction between Mn atoms, $(ii)$ a strong surface diffusion of Mn atoms in germanium even at low growth temperatures and $(iii)$ layer by layer growth conditions. We have also investigated the influence of growth parameters on the spinodal decomposition: at low growth temperatures (100$^{\circ}$C), increasing the Mn content leads to higher columns densities while at higher growth temperatures (150$^{\circ}$C), the columns density remains nearly constant whereas their size increases drastically. By plotting the nanocolumns density as a function of Mn content, we have shown that the mechanism of Mn incorporation in Ge changes above 5 \% of Mn. Finally, using TEM observations and x-ray diffraction, we have shown that Ge$_3$Mn$_5$ nanoclusters start to form at growth temperatures higher than 160$^\circ$C.
\section{Magnetic properties \label{magnetic}}
We have thoroughly investigated the magnetic properties of thin Ge$_{1-x}$Mn$_{x}$ films for different growth temperatures and Mn concentrations. In this section, we focus on Mn concentrations between 2 \% and 11 \%. We could clearly identify four different magnetic phases in Ge$_{1-x}$Mn$_{x}$ films : diluted Mn atoms in the germanium matrix, low-$T_{C}$ nanocolumns ($T_{C}$ $\leq$ 170 K), high-$T_{C}$ nanocolumns ($T_{C}$ $\geq$ 400 K) and Ge$_{3}$Mn$_{5}$ clusters ($T_{C}$ $\thickapprox$ 300 K). The relative weight of each phase clearly depends on the growth temperature and, to a lesser extent, on the Mn concentration. For low growth temperatures ($<$ 120$^{\circ}$C), we show that the nanocolumns are actually made of four uncorrelated superparamagnetic nanostructures. Increasing T$_{g}$ above 120$^{\circ}$C, we first obtain continuous columns exhibiting low $T_{C}$ ($<$ 170 K), and then high $T_{C}$ ($>$ 400 K) for $T_{g}\approx$130$^{\circ}$C. The larger columns become ferromagnetic, \textit{i.e.} $T_{B}>T_{C}$. Meanwhile, Ge$_{3}$Mn$_{5}$ clusters start to form. Finally, for higher $T_{g}$, the magnetic contribution from Ge$_{3}$Mn$_{5}$ clusters keeps increasing while the nanocolumn signal progressively disappears.
\begin{figure}[htb]
\center
\includegraphics[width=.6\linewidth]{./fig7a.eps}
\includegraphics[width=.3\linewidth]{./fig7b.eps}
\caption{(a) Temperature dependence of the saturation magnetization (in $\mu_{B}$/Mn) of Ge$_{0.93}$Mn$_{0.07}$ samples for different growth temperatures. The magnetic field is applied in the film plane. The inset shows the temperature dependence of a sample grown at 130$^{\circ}$C and annealed at 650$^{\circ}$C for 15 minutes. After annealing, the magnetic signal mostly arises from Ge$_{3}$Mn$_{5}$ clusters. (b) ZFC-FC measurements performed on Ge$_{0.93}$Mn$_{0.07}$ samples for different growth temperatures. The in-plane applied field is 0.015 T. The ZFC peak at low temperature ($\leq$150 K) can be attributed to the superparamagnetic nanocolumns. This peak widens and shifts towards high blocking temperatures when increasing growth temperature. The second peak above 150 K in the ZFC curve which increases with increasing growth temperature is attributed to superparamagnetic Ge$_{3}$Mn$_{5}$ clusters. The increasing ZFC-FC irreversibility at $\approx$ 300 K is due to the increasing contribution from large ferromagnetic Ge$_{3}$Mn$_{5}$ clusters. The nanocolumns signal completely vanishes after annealing at 650$^{\circ}$C for 15 minutes.}
\label{fig7}
\end{figure}
In Fig. 7a, the saturation magnetization at 2 Tesla in $\mu_{B}$/Mn of Ge$_{1-x}$Mn$_{x}$ films with 7 \% of Mn is plotted as a function of temperature for different growth temperatures ranging from $T_{g}$=90$^{\circ}$C up to 160$^{\circ}$C. The inset shows the temperature dependence of the magnetization at 2 Tesla after annealing at 650$^{\circ}$C during 15 minutes. Figure 7b displays the corresponding Zero Field Cooled - Field Cooled (ZFC-FC) curves recorded at 0.015 Tesla. In the ZFC-FC procedure, the sample is first cooled down to 5 K in zero magnetic field and the susceptibility is subsequently recorded at 0.015 Tesla while increasing the temperature up to 400 K (ZFC curve). Then, the susceptibility is recorded under the same magnetic field while decreasing the temperature down to 5 K (FC curve). Three different regimes can be clearly distinguished. \\
For $T_{g}\leq$120$^{\circ}$C, the temperature dependence of the saturation magnetization remains nearly the same while increasing growth temperature. The overall magnetic signal vanishing above 200 K is attributed to the nanocolumns whereas the increasing signal below 50 K originates from diluted Mn atoms in the surrounding matrix. The Mn concentration dependence of the saturation magnetization is displayed in figure 8. For the lowest Mn concentration (4 \%), the contribution from diluted Mn atoms is very high and drops sharply for higher Mn concentrations (7 \%, 9 \% and 11.3 \%). Therefore the fraction of Mn atoms in the diluted matrix decreases with Mn concentration probably because Mn atoms are more and more incorporated in the nanocolumns. In parallel, the Curie temperature of nanocolumns increases with the Mn concentration reaching 170 K for 11.3 \% of Mn. This behavior may be related to different Mn compositions and to the increasing diameter of nanocolumns (from 1.8 nm to 2.8 nm) as discussed in section \ref{structural}.
\begin{figure}[htb]
\center
\includegraphics[width=.7\linewidth]{./fig8.eps}
\caption{Temperature dependence of the saturation magnetization (in $\mu_{B}$/Mn) of Ge$_{1-x}$Mn$_{x}$ films grown at 100$^{\circ}$C plotted for different Mn concentrations: 4.1 \%; 7 \%; 8.9 \% and 11.3 \%.}
\label{fig8}
\end{figure}
ZFC-FC measurements show that the nanocolumns are superparamagnetic. The magnetic signal from the diluted Mn atoms in the matrix is too weak to be detected in susceptibility measurements at low temperature. In samples containing 4 \% of Mn, ZFC and FC curves superimpose down to low temperatures. As we do not observe hysteresis loops at low temperature, we believe that at this Mn concentration nanocolumns are superparamagnetic in the whole temperature range and the blocking temperature cannot be measured. For higher Mn contents, the ZFC curve exhibits a very narrow peak with a maximum at the blocking temperature of 15 K whatever the Mn concentration and growth temperature (see Fig. 7b). Therefore the anisotropy barrier distribution is narrow and assuming that nanocolumns have the same magnetic anisotropy, this is a consequence of the very narrow size distribution of the nanocolumns as observed by TEM. To probe the anisotropy barrier distribution, we have performed ZFC-FC measurements but instead of warming the sample up to 400 K, we stopped at a lower temperature $T_{0}$.
\begin{figure}[htb]
\center
\includegraphics[width=.6\linewidth]{./fig9.eps}
\caption{Schematic drawing of the anisotropy barrier distribution n($E_{B}$) of superparamagnetic nanostructures. If magnetic anisotropy does not depend on the particle size, this distribution exactly reflects their magnetic size distribution. In this drawing the blocking temperature ($T_{B}$) corresponds to the distribution maximum. At a given temperature $T_{0}$ such that 25$k_{B}T_{0}$ falls into the anisotropy barrier distribution, the largest nanostructures with an anisotropy energy larger than 25$k_{B}T_{0}$ are blocked whereas the others are superparamagnetic.}
\label{fig9}
\end{figure}
If this temperature falls into the anisotropy barrier distribution as depicted in Fig. 9, the FC curve deviates from the ZFC curve. Indeed the smallest nanostructures have become superparamagnetic at $T_{0}$ and when decreasing again the temperature, their magnetization freezes along a direction close to the magnetic field and the FC susceptibility is higher than the ZFC susceptibility. Therefore any irreversibility in this procedure points at the presence of superparamagnetic nanostructures. The results are given in Fig. 10a. ZFC and FC curves clearly superimpose up to $T_{0}$=250 K thus the nanocolumns are superparamagnetic up to their Curie temperature and no Ge$_{3}$Mn$_{5}$ clusters could be detected. Moreover for low $T_{0}$ values, a peak appears at low temperature in FC curves which evidences strong antiferromagnetic interactions between the nanocolumns \cite{Chan00}.
\begin{figure}[htb]
\center
\includegraphics[width=.35\linewidth]{./fig10a.eps}
\includegraphics[width=.63\linewidth]{./fig10b.eps}
\caption{(a) ZFC-FC measurements performed on a Ge$_{0.887}$Mn$_{0.113}$ sample grown at 115$^{\circ}$C. The in-plane applied field is 0.015 T. Magnetization was recorded up to different T$_{0}$ temperatures: 30 K, 50 K, 100 K, 150 K, 200 K and 250 K. Curves are shifted up for more clarity. (b) ZFC-FC curves for in-plane and out-of-plane applied fields (0.015 T).}
\label{fig10}
\end{figure}
In order to derive the magnetic size and anisotropy of the Mn-rich nanocolumns embedded in the Ge matrix, we have fitted the inverse normalized in-plane (resp. out-of-plane) susceptibility: $\chi_{\parallel}^{-1}$ (resp. $\chi_{\perp}^{-1}$). The corresponding experimental ZFC-FC curves are reported in Fig. 10b. Since susceptibility measurements are performed at low field (0.015 T), the matrix magnetic signal remains negligible. In order to normalize susceptibility data, we need to divide the magnetic moment by the saturated magnetic moment recorded at 5 T. However, the matrix magnetic signal becomes very strong at 5 T and low temperature, so we subtract it from the saturated magnetic moment using a simple Curie function. From Fig. 10b, we can conclude that the nanocolumns are isotropic. Therefore, to fit the experimental data we use the following expression, well suited for isotropic systems or cubic anisotropy: $\chi_{\parallel}^{-1}= \chi_{\perp}^{-1}\approx 3k_{B}T/M(T)+\mu_{0}H_{eff}(T)$. Here $k_{B}$ is the Boltzmann constant and $M=M_{s}v$ is the magnetic moment of a single-domain nanostructure (macrospin approximation), where $M_{s}$ is its magnetization and $v$ its volume. The in-plane magnetic field is applied along $[110]$ or $[-110]$ crystal axes. Since the nanostructures' Curie temperature does not exceed 170 K, the temperature dependence of the saturation magnetization is also accounted for by writing $M(T)$. Antiferromagnetic interactions between nanostructures are also considered by adding an effective field estimated in the mean field approximation \cite{Fruc02}: $\mu_{0}H_{eff}(T)$.
The only fitting parameters are the maximum magnetic moment (\textit{i.e.} at low temperature) per nanostructure: $M$ (in Bohr magnetons $\mu_{B}$) and the maximum interaction field (\textit{i.e.} at low temperature): $\mu_{0}H_{eff}$.
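For illustration only, a minimal least-squares sketch of such a fit is given below. The Bloch-like form of $M(T)$, the value of $T_{C}$ and the synthetic data are assumptions made for this example; this is not the analysis code actually used here.
\begin{verbatim}
# Minimal sketch (assumptions: Bloch-like M(T), T_C = 170 K,
# synthetic noisy data; NOT the authors' analysis code).
import numpy as np
from scipy.optimize import curve_fit

k_B = 1.380649e-23   # J/K
mu_B = 9.274e-24     # J/T
T_C = 170.0          # K (assumed)

def moment(T, M0):
    """Assumed temperature dependence of the nanostructure moment, in J/T."""
    return M0 * mu_B * np.maximum(1.0 - (T / T_C) ** 1.5, 1e-6)

def inv_chi(T, M0, mu0_Heff):
    """Inverse normalized susceptibility (in tesla) of an isotropic macrospin."""
    return 3.0 * k_B * T / moment(T, M0) + mu0_Heff

rng = np.random.default_rng(0)
T_data = np.linspace(20.0, 120.0, 30)
y_data = inv_chi(T_data, 1400.0, 0.10) * (1.0 + 0.02 * rng.standard_normal(30))

(M0_fit, Heff_fit), _ = curve_fit(inv_chi, T_data, y_data, p0=(1000.0, 0.05))
print(f"M ~ {M0_fit:.0f} mu_B, mu0*H_eff ~ {1e3 * Heff_fit:.0f} mT")
\end{verbatim}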
\begin{figure}[htb]
\center
\includegraphics[width=.7\linewidth]{./fig11.eps}
\caption{Temperature dependence of the inverse in-plane (open circles) and out-of-plane (open squares) normalized susceptibilities of a Ge$_{0.887}$Mn$_{0.113}$ sample grown at 115$^{\circ}$C. Fits were performed assuming isotropic nanostructures or cubic anisotropy. Dashed line is for in-plane susceptibility and solid line for out-of-plane susceptibility.}
\label{fig11}
\end{figure}
In Fig. 11, the best fits lead to $M\approx$1250 $\mu_{B}$ and $\mu_{0}H_{eff}\approx$102 mT for the in-plane susceptibility and $M\approx$1600 $\mu_{B}$ and $\mu_{0}H_{eff}\approx$98 mT for the out-of-plane susceptibility. This gives an average magnetic moment of 1425 $\mu_{B}$ per column and an effective interaction field of 100 mT. Using this magnetic moment and its temperature dependence, magnetization curves could be fitted using a Langevin function, and $M(H/T)$ curves superimpose for $T<$100 K. However, from the saturated magnetic moment of the columns and their density (35000 $\rm{\mu m}^{-2}$), we find almost 6000 $\mu_{B}$ per column. Therefore, for low growth temperatures, we need to assume that the nanocolumns are actually made of almost four independent elongated magnetic nanostructures. The effective field for antiferromagnetic interactions between nanostructures estimated from the susceptibility fits is at least one order of magnitude larger than what is expected from pure magnetostatic coupling. This difference may be due either to an additional antiferromagnetic coupling through the matrix, whose origin remains unexplained, or to the mean field approximation, which is no longer valid in this strong coupling regime. As for magnetic anisotropy, the nanostructures behave as isotropic magnetic systems or exhibit a cubic magnetic anisotropy. First, we can confirm that the nanostructures are not amorphous: otherwise shape anisotropy would dominate, leading to out-of-plane anisotropy. We can also rule out a random distribution of magnetic easy axes, since the nanostructures are clearly crystallized in the diamond structure and would exhibit at least a cubic anisotropy (unless the random distribution of Mn atoms within the nanostructures yields random easy axes). Since the nanostructures are under strong in-plane compression (their lattice parameter is larger than that of the matrix), the cubic symmetry of the diamond structure is broken and a magnetic cubic anisotropy is thus unlikely. We rather believe that the out-of-plane shape anisotropy is nearly compensated by the in-plane magnetoelastic anisotropy due to compression, leading to a \textit{pseudo} cubic anisotropy. From the blocking temperature (15 K) and the magnetic volume of the nanostructures, we can derive their magnetic anisotropy constant using $Kv=25k_{B}T_{B}$: K$\approx$10 kJ.m$^{-3}$, which is of the same order of magnitude as the shape anisotropy.
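As a rough numerical check of this last step (the magnetic volume below, $v\approx 5\times10^{-25}$ m$^{3}$, i.e. a few hundred nm$^{3}$ per independent nanostructure, is our order-of-magnitude assumption):
$$
K=\frac{25k_{B}T_{B}}{v}\approx\frac{25\times 1.38\times10^{-23}\,\mathrm{J\,K^{-1}}\times 15\,\mathrm{K}}{5\times10^{-25}\,\mathrm{m}^{3}}\approx 1\times10^{4}\,\mathrm{J\,m^{-3}}=10\,\mathrm{kJ\,m^{-3}}.
$$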
\begin{figure}[htb]
\center
\includegraphics[width=.35\linewidth]{./fig12a.eps}
\includegraphics[width=.63\linewidth]{./fig12b.eps}
\caption{(a) ZFC-FC measurements performed on a Ge$_{0.93}$Mn$_{0.07}$ sample grown at 122$^{\circ}$C. The in-plane applied field is 0.015 T. Magnetization was recorded up to different T$_{0}$ temperatures: 50 K, 100 K, 150 K, 200 K and 250 K. Curves are shifted up for more clarity. (b) ZFC-FC curves for in-plane and out-of-plane applied fields (0.015 T).}
\label{fig12}
\end{figure}
For growth temperatures $T_{g}\geq$120$^{\circ}$C and Mn concentrations $\geq$ 7 \%, samples exhibit a magnetic signal above 200 K corresponding to Ge$_{3}$Mn$_{5}$ clusters (see Fig. 7a). SQUID measurements are thus much more sensitive to the presence of Ge$_{3}$Mn$_{5}$ clusters, even at low concentration, than the TEM and x-ray diffraction used in section \ref{structural}. We also observe a sharp transition in the ZFC curve (see Fig. 7b, Fig. 12a and 12b): the peak becomes very large and is shifted towards higher blocking temperatures (the signal is maximum at $T=$23 K). This can be understood as a magnetic percolation of the four independent nanostructures obtained at low growth temperatures into a single magnetic nanocolumn. The magnetic volume, and consequently the blocking temperature, therefore increases sharply. At the same time, the size distribution widens, as observed in TEM. In Fig. 12a, we have performed ZFC-FC measurements at different $T_{0}$ temperatures. The ZFC-FC irreversibility is observed up to the Curie temperature of $\approx$120 K, meaning that a fraction of the nanocolumns is ferromagnetic (\textit{i.e.} $T_{B}\geq T_{C}$).
In Fig. 12b, in-plane and out-of-plane ZFC curves nearly superimpose for $T\leq$150 K due to the isotropic magnetic behavior of the nanocolumns: in-plane magnetoelastic anisotropy is still compensating out-of-plane shape anisotropy. Moreover the magnetic signal above 150 K corresponding to Ge$_{3}$Mn$_{5}$ clusters that start to form in this growth temperature range is strongly anisotropic. This perpendicular anisotropy confirms the epitaxial relation: (0002) Ge$_{3}$Mn$_{5}$ $\parallel$ (002) Ge discussed in Ref.\cite{Bihl06}. The magnetic easy axis of the clusters lies along the hexagonal $c$-axis which is perpendicular to the film plane.
\begin{figure}[ht]
\center
\includegraphics[width=.35\linewidth]{./fig13a.eps}
\includegraphics[width=.63\linewidth]{./fig13b.eps}
\caption{(a) ZFC-FC measurements performed on a Ge$_{0.887}$Mn$_{0.113}$ sample grown at 145$^{\circ}$C. The in-plane applied field is 0.015 T. Magnetization was recorded up to different T$_{0}$ temperatures: 50 K, 100 K, 150 K, 200 K, 250 K and 300 K. Curves are shifted up for more clarity. (b) ZFC-FC curves for in-plane and out-of-plane applied fields (0.015 T).}
\label{fig13}
\end{figure}
For growth temperatures $T_{g}\geq$145$^{\circ}$C the cluster magnetic signal dominates (Fig. 13b). Superparamagnetic nanostructures are investigated by performing ZFC-FC measurements at different $T_{0}$ temperatures (Fig. 13a). The first ZFC peak at low temperature (\textit{i.e.} $\leq$ 150 K) is attributed to low-$T_{C}$ nanocolumns ($T_{C}\approx$130 K). This peak is wider than for lower growth temperatures and its maximum is shifted further up, to 30 K. These results are in agreement with TEM observations: increasing $T_{g}$ leads to larger nanocolumns (\textit{i.e.} higher blocking temperatures) and wider size distributions. ZFC-FC irreversibility is observed up to the Curie temperature due to the presence of ferromagnetic columns. The second peak, above 180 K in the ZFC curve, is attributed to Ge$_{3}$Mn$_{5}$ clusters, and the corresponding ZFC-FC irreversibility persisting up to 300 K means that some clusters are ferromagnetic. We clearly evidence the out-of-plane anisotropy of the Ge$_{3}$Mn$_{5}$ clusters and the isotropic magnetic behavior of the nanocolumns (Fig. 13b). In this growth temperature range, we have also investigated the Mn concentration dependence of the magnetic properties.
\begin{figure}[ht]
\center
\includegraphics[width=.49\linewidth]{./fig14a.eps}
\includegraphics[width=.49\linewidth]{./fig14b.eps}
\caption{(a) Temperature dependence of the saturation magnetization (in $\mu_{B}$/Mn) of Ge$_{1-x}$Mn$_{x}$ films grown at 150$^{\circ}$C plotted for different Mn concentrations: 2.3 \%; 4 \%; 7 \%; 9 \%; 11.3 \%. (b) ZFC-FC measurements performed on Ge$_{1-x}$Mn$_{x}$ films grown at 150$^{\circ}$C. The in-plane applied field is 0.025 T for 2.3 \% and 4 \% and 0.015 T for 8 \% and 11.3 \%.}
\label{fig14}
\end{figure}
In Fig. 14a, for low Mn concentrations (2.3 \% and 4 \%), the contribution from diluted Mn atoms in the germanium matrix to the saturation magnetization is very high; it nearly vanishes for higher Mn concentrations (7 \%, 9 \% and 11.3 \%), as observed for low growth temperatures. Above 7 \%, the magnetic signal mainly comes from nanocolumns and Ge$_{3}$Mn$_{5}$ clusters. We can derive more information from ZFC-FC measurements (Fig. 14b). Indeed, for 2.3 \% of Mn, ZFC and FC curves nearly superimpose down to low temperature, meaning that the nanocolumns are superparamagnetic over the whole temperature range. Moreover, the weak irreversibility arising at 300 K means that some Ge$_{3}$Mn$_{5}$ clusters have already formed in the samples even at very low Mn concentrations. For 4 \% of Mn, we observe a peak in the ZFC curve with a maximum at the blocking temperature (12 K). We can also derive the Curie temperature of the nanocolumns: $\approx$45 K. The irreversibility arising at 300 K still comes from Ge$_{3}$Mn$_{5}$ clusters. Increasing the Mn concentration above 7 \% leads to higher blocking temperatures (20 K and 30 K), due to larger nanocolumns, and to wider ZFC peaks, due to wider size distributions, in agreement with TEM observations (see Fig. 3a). Curie temperatures also increase (110 K and 130 K), as does the contribution from Ge$_{3}$Mn$_{5}$ clusters.\\
Finally, when increasing $T_{g}$ above 160$^{\circ}$C, the magnetic signal of the nanocolumns vanishes and only Ge$_{3}$Mn$_{5}$ clusters and diluted Mn atoms coexist. The overall magnetic signal becomes comparable to the one measured on annealed samples in which only Ge$_{3}$Mn$_{5}$ clusters are observed by TEM (see Fig. 7a).\\
The magnetic properties of high-$T_{C}$ nanocolumns obtained for $T_{g}$ close to 130$^{\circ}$C are discussed in detail in Ref.\cite{Jame06}.\\
In conclusion, at low growth temperatures ($T_{g}\leq$120$^{\circ}$C), the nanocolumns are made of almost 4 independent elongated magnetic nanostructures. For $T_{g}\geq$120$^{\circ}$C, these independent nanostructures abruptly percolate into a single nanocolumn, leading to higher blocking temperatures. Increasing $T_{g}$ leads to larger columns with a wider size distribution, as evidenced by ZFC-FC measurements and confirmed by TEM observations. In parallel, some Ge$_{3}$Mn$_{5}$ clusters start to form and their contribution increases with increasing $T_{g}$. The results on magnetic anisotropy seem counter-intuitive: Ge$_{3}$Mn$_{5}$ clusters exhibit strong out-of-plane anisotropy, whereas the nanocolumns, which are highly elongated magnetic structures, are almost isotropic. This effect is probably due to the in-plane magnetoelastic coupling (arising from the compression of the columns) compensating the out-of-plane shape anisotropy.
\section{Conclusion}
In this paper, we have investigated the structural and magnetic properties of thin Ge$_{1-x}$Mn$_{x}$ films grown by low temperature molecular beam epitaxy. A wide range of growth temperatures and Mn concentrations has been explored. All the samples contain Mn-rich nanocolumns as a consequence of 2D-spinodal decomposition. However, their size, crystalline structure and magnetic properties depend on growth temperature and Mn concentration. For low growth temperatures, the nanocolumns are very small (their diameter ranges between 1.8 nm for 1.3 \% of Mn and 2.8 nm for 11.3 \% of Mn), their Curie temperature is rather low ($<$ 170 K) and they behave as almost four uncorrelated superparamagnetic nanostructures. Increasing the Mn concentration leads to higher column densities while the diameters remain nearly unchanged. For higher growth temperatures, the mean diameter of the nanocolumns increases and their size distribution widens. Moreover, the 4 independent magnetic nanostructures percolate into a single magnetic nanocolumn. Some columns are ferromagnetic even if the Curie temperatures remain quite low. In this regime, increasing the Mn concentration leads to larger columns while their density remains nearly the same. In parallel, Ge$_{3}$Mn$_{5}$ nanoclusters start to form in the film with their $c$-axis perpendicular to the film plane. In both temperature regimes, the Mn incorporation mechanism in the nanocolumns and/or in the matrix changes above 5 \% of Mn, and the nanocolumns exhibit an isotropic magnetic behaviour due to the competing effects of out-of-plane shape anisotropy and in-plane magnetoelastic coupling. Finally, for a narrow range of growth temperatures around 130$^{\circ}$C, nanocolumns exhibit Curie temperatures higher than 400 K. Our goal is now to investigate the crystalline structure inside the nanocolumns, in particular the position of Mn atoms in the distorted diamond structure, which is essential to understand the magnetic and future transport properties of Ge$_{1-x}$Mn$_{x}$ films.
\section{Acknowledgements}
The authors would like to thank Dr. F. Rieutord for grazing incidence x-ray diffraction measurements performed on the GMT station of BM32 beamline at the European Synchrotron Radiation Facility.
\section{Introduction}
In 1914 Fekete constructed a formal power series $\sum_{n = 1}^{\infty} a_n x^n$ with the following \textit{universal} property: For any continuous function $f$ on $[-1,1]$ (with $f(0) = 0$) and given any $\varepsilon > 0$ there exists an integer $N > 0$ such that
$$
\sup_{-1 \leq x \leq 1} \Big | \sum_{n \leq N} a_n x^n - f(x) \Big | < \varepsilon.
$$
In the 1970's Voronin \cite{Voronin} discovered the remarkable fact that the Riemann zeta-function satisfies a similar universal property. He showed that for any $r < \tfrac 14$, any non-vanishing continuous function $f$ in $|z| \leq r$, which is analytic in the interior, and for arbitrary $\varepsilon>0$, there exists a $T > 0$
such that
\begin{equation} \label{universal}
\max_{|z| \leq r} \Big | \zeta(\tfrac 34 + i T + z) - f(z) \Big | < \varepsilon.
\end{equation}
Voronin obtained a more quantitative description of this phenomenon, stated below.
\begin{theoremvor}
Let $0 < r < \tfrac 14$ be a real number. Let $f$ be a non-vanishing continuous function in $|z|\leq r$, that is analytic in the interior. Then, for any $\varepsilon > 0$,
\begin{equation}\label{LIMINF}
\liminf_{T \rightarrow \infty}
\frac{1}{T} \cdot \textup{meas} \Big \{ T \leq t \leq 2T: \max_{|z| \leq r}
\Big | \zeta(\tfrac 34 + it + z) - f(z) \Big | < \varepsilon \Big \}
> 0,
\end{equation}
where $\textup{meas}$ is Lebesgue's measure on $\mathbb{R}$.
\end{theoremvor}
There are several extensions of this theorem, for example to domains more general than compact discs (such as any compact set $K$ contained in the strip $1/2 < \re(s) < 1$ and with connected complement), or to more general $L$-functions. For a complete history of this subject, we refer the reader to \cite{Matsumoto}.
The assumption that $f(z) \neq 0$ is necessary:
if $f$ were allowed to vanish, then an application of Rouch\'e's theorem would produce $\gg T$ zeros $\rho = \beta + i \gamma$ of $\zeta(s)$ with $\beta > \tfrac 12 + \varepsilon$ and $T \leq \gamma \leq 2T$, contradicting even the simplest zero-density theorems.
Subsequent work of Bagchi \cite{Bagchi} clarified Voronin's universality theorem by setting it in the context of probability theory (see \cite{Kowalski} for a streamlined proof). Viewing $\zeta(\tfrac 34 + it + z)$ with $t \in [T, 2T]$ as a random variable $X_T$ in the space of random analytic functions (i.e.\ $X_T(z) = \zeta(\tfrac 34 + i U_T + z)$ with $U_T$ uniformly distributed in $[T, 2T]$), Bagchi showed that as $T \rightarrow \infty$ this sequence of random variables converges in law (in the space of random analytic functions) to a random Euler product,
$$
\zeta(s, X) := \prod_{p} \Big (1 - \frac{X(p)}{p^s} \Big )^{-1}
$$
with $\{X(p)\}_p$ a sequence of independent random variables uniformly distributed on the unit circle (and with $p$ running over prime numbers). This product converges almost surely for $\re(s) > \tfrac 12$ and defines almost surely a holomorphic function in the half-plane $\re(s) > \sigma_0$ for any $\sigma_0 > \tfrac 12$ (see Section 2 below). The proof of Voronin's universality is then reduced to showing that the support of $\zeta(s+3/4, X)$ in the space of random analytic functions contains all non-vanishing analytic $f : \{ z : |z| < r\} \rightarrow \mathbb{C} \backslash \{0\}$. Moreover it follows from Bagchi's work that the limit in Voronin's universality theorem exists for all but at most countably many $\varepsilon > 0$.
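The random model is straightforward to simulate numerically. The following minimal sketch (not from the literature; the truncation at $p\leq P$, the seed, and all parameter values are arbitrary choices made for illustration) samples the truncated random Euler product:
\begin{verbatim}
# Minimal sketch: sampling the random Euler product zeta(s, X),
# truncated at primes p <= P (truncation level is an arbitrary choice).
import numpy as np
from sympy import primerange

rng = np.random.default_rng(0)

def zeta_X(s, P=10_000):
    """One sample of the truncated random Euler product at the point s."""
    p = np.array(list(primerange(2, P)), dtype=float)
    Xp = np.exp(2j * np.pi * rng.uniform(size=p.size))  # X(p) on the unit circle
    return np.prod(1.0 / (1.0 - Xp * p ** (-s)))

print([zeta_X(0.75) for _ in range(3)])
\end{verbatim}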
In this paper, we present an alternative approach to Bagchi's result using methods from hard analysis. As a result we obtain, for the first time, a rate of convergence in Voronin's universality theorem. We also give an explicit description for the limit in terms of the random model $\zeta(s, X)$.
\begin{theorem} \label{thm:main}
Let $0 < r < \tfrac 14$. Let $f$ be a non-vanishing continuous function on $|z|\leq (r+1/4)/2$ that is holomorphic in $|z| < (r+1/4)/2$. Let $\omega$ be a real-valued continuously differentiable function with compact support. Then, we have
\begin{align*}
\frac{1}{T}\int_{T}^{2T}\omega\left(\max_{|z| \leq r} |\zeta(\tfrac 34 + it + z) - f(z)| \right)dt
& = \ex\left(\omega\left(\max_{|z|\leq r}|\zeta(\tfrac 34 + z, X)-
f(z)|\right)\right)\\
&+ O\left((\log T)^{-\frac{(3/4-r)}{11}+o(1)} \right),
\end{align*}
where the constant in the $O$ depends on $f, \omega$ and $r$.
\end{theorem}
If the random variable $Y_{r, f}=\max_{|z|\leq r}|\zeta(\tfrac 34 + z, X)-f(z)|$ is absolutely continuous, then it follows from the proof of Theorem \ref{thm:main} that for any fixed $\varepsilon>0$ we have
\begin{align*}
&\frac{1}{T} \cdot \textup{meas} \Big \{ T \leq t \leq 2T: \max_{|z| \leq r}
\Big | \zeta(\tfrac 34 + it + z) - f(z) \Big | < \varepsilon \Big \}\\
&= \mathbb{P}\left(\max_{|z|\leq r}\big|\zeta(\tfrac 34 + z, X)-
f(z)\big|<\varepsilon\right)
+ O\left((\log T)^{-\frac{(3/4-r)}{11}+o(1)} \right).
\end{align*}
Unfortunately, we have not even been able to show that $Y_{r, f}$ has no jump discontinuities. We conjecture the latter to be true, and one might even hope that $Y_{r, f}$ is absolutely continuous.
A slight modification of the proof of Theorem \ref{thm:main} allows for more general domains than the disc $|z| \leq r$.
Furthermore, if $\omega \geq \mathbf{1}_{(0, \varepsilon)}$ (where $\mathbf{1}_{S}$ is the indicator function of the set $S$), then it follows from Voronin's universality theorem that the main term in Theorem \ref{thm:main} is positive. Explicit lower bounds for the limit in \eqref{LIMINF} (in terms of $\varepsilon$) are contained in the papers of Good \cite{Good} and Garunkstis \cite{Garunkstis}.
Our approach is flexible, and can be generalized to other $L$-functions in the $t$-aspect, as well as to ``natural'' families of $L$-functions in the conductor aspect. The only analytic ingredients that are needed are zero density estimates, and bounds on the coefficients of these $L$-functions (the so-called Ramanujan conjecture). In particular, the techniques of this paper can be used to obtain an effective version of a recent result of Kowalski \cite{Kowalski}, who proved an analogue of Voronin's universality theorem for families of $L$-functions attached to $GL_2$ automorphic forms. In fact, using the zero-density estimates near $1$ that are known for a very large class of $L$-functions (including those in the Selberg class by Kaczorowski and Perelli \cite{KP}, and for families of $L$-functions attached to $GL_n$ automorphic forms by Kowalski and Michel \cite{KM}), one can prove an analogue of Theorem \ref{thm:main} for these $L$-functions, where we replace $3/4$ by some $\sigma<1$ (and $r<1-\sigma$).
The main idea in the proof of Theorem \ref{thm:main} is to cover the boundary of the disc $|z|\leq r$ with a union of a growing (with $T$) number of discs, while maintaining a global control of the size of $|\zeta'(s + z)|$ on $|z| \leq r$. It is enough to focus on the boundary of the disc thanks to the maximum modulus principle.
The behavior of $\zeta(s + z)$ with $z$ localized to a shrinking disc is essentially governed by the behavior at a single point $z = z_i$ in the disc.
This allows us to reduce the problem to understanding the joint distribution of a growing number of shifts $\log\zeta(s + z_i)$ with the $z_i$ well-spaced, which can be understood by computing the moments of these shifts and using standard Fourier techniques.
It seems very difficult to obtain a rate of convergence which is better than logarithmic in Theorem \ref{thm:main}. We have at present no understanding as to what the correct rate of convergence should be.
\section{Key Ingredients and detailed results} \label{sec:propositions}
We first begin with stating certain important properties of the random model $\zeta(s, X)$. Let $\{X(p)\}_p$ be a sequence of independent random variables uniformly distributed on the unit circle. Then we have
$$-\log\left(1-\frac{X(p)}{p^s}\right)=\frac{X(p)}{p^s}+ h_X(p,s),$$
where the random series
\begin{equation}\label{ErrorRandom}
\sum_{p} h_X(p,s),
\end{equation}
converges absolutely (and surely, i.e.\ for every realization of the $X(p)$) for $\re(s)>1/2$. Hence, it defines a holomorphic function of $s$ in this half-plane. Moreover, since $\ex(X(p))=0$ and $\ex(|X(p)|^2)=1$, it follows from Kolmogorov's three-series theorem that the series
\begin{equation}\label{MainRandom}
\sum_{p}\frac{X(p)}{p^s}
\end{equation}
is almost surely convergent for $\re(s)>1/2$. By well-known results on Dirichlet series, this shows that this series defines (almost surely) a holomorphic function on the half-plane $\re(s)>\sigma_0$, for any $\sigma_0>1/2$. Thus, by taking the exponential of the sum of the random series in \eqref{ErrorRandom} and \eqref{MainRandom}, it follows that $\zeta(s, X)$ converges almost surely to a holomorphic function on the half-plane $\re(s)>\sigma_0$, for any $\sigma_0>1/2$.
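For the record, the absolute convergence of \eqref{ErrorRandom} follows from the expansion of the logarithm (a standard computation, recorded here for completeness): for $\re(s)=\sigma>\tfrac12$,
$$
h_X(p,s)=\sum_{k\geq 2}\frac{X(p)^{k}}{k\,p^{ks}},\qquad\text{so}\qquad |h_X(p,s)|\leq \sum_{k\geq 2}p^{-k\sigma}=\frac{p^{-2\sigma}}{1-p^{-\sigma}}\ll p^{-2\sigma},
$$
and $\sum_p p^{-2\sigma}<\infty$.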
We extend the $X(p)$ multiplicatively to all positive integers by setting $X(1)=1$ and
$X(n):= X(p_1)^{a_1}\cdots X(p_k)^{a_k}, \text{ if } n= p_1^{a_1}\dots p_k^{a_k}.$
Then we have
\begin{equation}\label{orthogonality}
\ex\left(X(n)\overline{X(m)}\right)=\begin{cases} 1 & \text{if } m=n,\\
0 & \text{otherwise}.\\
\end{cases}
\end{equation}
Furthermore, for any complex number $s$ with $\re(s)>1/2$ we have almost surely that
$$ \zeta(s, X)= \sum_{n=1}^{\infty} \frac{X(n)}{n^s}.$$
To compare the distribution of $\zeta(s+it)$ to that of $\zeta(s, X)$, we define a probability measure on $[T, 2T]$ in a standard way, by
$$\mathbb{P}_T(S):= \frac{1}{T}\textup{meas}(S), \text{ for any } S\subseteq [T, 2T].$$
The idea behind our proof of effective universality is to
first reduce the problem to the discrete problem of controlling the
distribution of many shifts $\log \zeta(s_j + it)$ with
all of the $s_j$ contained in a compact set inside the strip $\tfrac 12 < \re(s) < 1$.
One of the main ingredients in this reduction is
the following result which allows us
to control the maximum of the derivative of the Riemann zeta-function.
This is proven in Section \ref{sec:derivative}.
\begin{proposition}\label{ControlDerivative}
Let $0<r<1/4$ be fixed. Then there exist positive constants $b_1$, $b_2$ and $b_3$ (that depend only on $r$) such that
$$
\mathbb{P}_T \left( \max_{|z|\leq r} |\zeta'(\tfrac 34 + it + z)| > e^{V}
\right) \ll \exp\bigg( -b_1 V^{\frac{1}{1-\sigma(r)}}(\log V)^{\frac{\sigma(r)}{1-\sigma(r)}}\bigg)
$$
where $\sigma(r)=\tfrac34-r$,
uniformly for $V$ in the range $b_2<V \leq b_3 (\log T)^{1-\sigma(r)}/(\log \log T)$.
\end{proposition}
We also prove an analogous result for the random model $\zeta(s, X)$, which holds for all sufficiently large $V$.
\begin{proposition}\label{DerRandom}
Let $0<r<1/4$ be fixed and $\sigma(r)=\tfrac34-r$. Then there exist positive constants $b_1$ and $b_2$ (that depend only on $r$) such that for all $V>b_2$ we have
$$\mathbb{P}\left(\max_{|z| \leq r} |\zeta'(\tfrac 34 + z, X)| > e^V\right) \ll \exp\bigg( -b_1 V^{\frac{1}{1-\sigma(r)}}(\log V)^{\frac{\sigma(r)}{1-\sigma(r)}}\bigg).$$
\end{proposition}
Once the reduction has been accomplished, it remains to understand the joint
distribution of the shifts $\{\log \zeta(s_1 + it), \log \zeta(s_2 + it), \dots, \log \zeta(s_J + it)\}$
with $J \rightarrow \infty$ as $T \rightarrow \infty$ at a certain rate,
and $s_1, \ldots, s_J$ are complex numbers with $\tfrac 12 < \re (s_j) < 1$ for all $j \leq J$. Heuristically, this should be well approximated by the joint distribution of the random variables
$\{\log \zeta(s_1, X),\log \zeta(s_2, X), \dots, \log \zeta(s_J, X)\}$. In order to establish this fact (in a certain range of $J$), we first prove, in Section \ref{sec:moments}, that the
moments of the joint shifts $\log\zeta(s_j+it)$ are very close to the corresponding ones of $\log\zeta(s_j, X)$, for $j\leq J$.
\begin{thm} \label{MomentsShifts}
Fix $1/2<\sigma_0<1$. Let $s_1, s_2, \dots, s_k, r_1, \dots, r_{\ell}$ be complex numbers in the rectangle $\sigma_0<\re(z)<1$ and $|\im(z)|\leq T^{(\sigma_0-1/2)/4}$. Then, there exist positive constants $c_3, c_4, c_5$ and a set $\mathcal{E}(T)\subset [T,2T]$ of measure $\ll T^{1-c_3}$, such that if $k, \ell \leq c_4\log T/\log\log T$ then
\begin{align*}
&\frac{1}{T} \int_{[T,2T]\setminus \mathcal{E}(T)}\left(\prod_{j=1}^k\log\zeta(s_j+it)\right)\left(\prod_{j=1}^{\ell}\log\zeta(r_j-it)\right)dt\\
&= \ex\left(\left(\prod_{j=1}^k\log\zeta(s_j,X)\right)\left(\prod_{j=1}^{\ell}\log\overline{\zeta(r_j, X)}\right)\right)+ O\left(T^{-c_5}\right).
\end{align*}
\end{thm}
Having obtained the moments we are in position to understand the
characteristic function,
$$ \Phi_T(\mathbf{u}, \mathbf{v}):= \frac1T \int_T^{2T} \exp\left(i\left(\sum_{j=1}^J (u_j \re\log\zeta(s_j+it)+ v_j \im\log\zeta(s_j+it))\right)\right)dt,$$
where $\mathbf{u}=(u_1,\dots, u_J)\in \mathbb{R}^J$ and $\mathbf{v}=(v_1,\dots, v_J)\in \mathbb{R}^J$. We relate the above characteristic function to
the characteristic function of the probabilistic model,
$$ \Phi_{\textup{rand}}(\mathbf{u}, \mathbf{v}):= \ex\left( \exp\left(i\left(\sum_{j=1}^J (u_j \re\log\zeta(s_j, X)+ v_j \im\log\zeta(s_j, X))\right)\right)\right).$$
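As a quick illustration (not used in any proof), $\Phi_{\textup{rand}}$ can be estimated by Monte Carlo. In the sketch below, the truncation of the Euler product at $p\leq P$, the sample size, and the restriction to a single shift ($J=1$) are all assumptions made for the example:
\begin{verbatim}
# Monte Carlo sketch for Phi_rand with a single shift (J = 1).
import numpy as np
from sympy import primerange

rng = np.random.default_rng(1)
primes = np.array(list(primerange(2, 2_000)), dtype=float)

def log_zeta_X(s):
    """One sample of log zeta(s, X) (sum of principal branches, truncated)."""
    Xp = np.exp(2j * np.pi * rng.uniform(size=primes.size))
    return -np.sum(np.log(1.0 - Xp * primes ** (-s)))

def phi_rand(s, u, v, n_samples=500):
    L = np.array([log_zeta_X(s) for _ in range(n_samples)])
    return np.mean(np.exp(1j * (u * L.real + v * L.imag)))

print(phi_rand(0.75, 0.3, -0.2))
\end{verbatim}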
This is obtained in the following theorem, which we prove in Section \ref{sec:characteristic}.
\begin{thm}\label{characteristic}
Fix $1/2<\sigma<1$. Let $T$ be large and $J\leq (\log T)^{\sigma}$ be a positive integer. Let $s_1, s_2, \dots, s_J$ be complex numbers such that $\min(\re(s_j))=\sigma$ and $\max(|\im(s_j)|)<T^{(\sigma-1/2)/4}$. Then, there exist positive constants $c_1(\sigma), c_2(\sigma)$, such that for all $ \mathbf{u}, \mathbf{v} \in \mathbb{R}^J$ such that $\max(|u_j|), \max(|v_j|)\leq c_1(\sigma) (\log T)^{\sigma}/J$ we have
$$ \Phi_T(\mathbf{u}, \mathbf{v})= \Phi_{\textup{rand}}(\mathbf{u}, \mathbf{v})+ O\left(
\exp\left(-c_2(\sigma)\frac{\log T}{\log\log T}\right)\right).$$
\end{thm}
Using this result, we can show that the joint distribution of the shifts $\log \zeta(s_j+it)$ is very close to the corresponding joint distribution of the random variables $\log \zeta(s_j, X)$. The proof depends on Beurling-Selberg functions.
To measure how close these distributions are, we introduce the discrepancy $\mathcal{D}_T(s_1, \ldots, s_J)$ defined as
\begin{align*}
\sup_{(\mathcal R_1, \ldots, \mathcal R_J) \subset \mathbb C^J} \bigg| \mathbb P_T\bigg( \log \zeta(s_j+it) \in \mathcal R_j, \forall j \le J \bigg)-
\mathbb P\bigg( \log \zeta(s_j, X) \in \mathcal R_j, \forall j \le J \bigg)\bigg|
\end{align*}
where the supremum is taken over all $(\mathcal R_1, \ldots, \mathcal R_J) \subset \mathbb C^J$ and
for each $j=1,\ldots, J$ the set $\mathcal R_j$
is a rectangle with sides parallel to the coordinate axes.
Our next theorem, proven in Section \ref{sec:distribution}, states a bound
for the above discrepancy. This generalizes Theorem 1.1 of \cite{LLR}, which corresponds to the special case $J=1$.
\begin{thm} \label{discrep}
Let $T$ be large, $\tfrac12<\sigma<1$ and $J \le (\log T)^{\sigma/2}$ be a positive integer.
Let $s_1,s_2, \ldots, s_J$ be complex numbers such that
$$
\tfrac12 < \sigma:=\min_j(\tmop{Re}(s_j)) \le \max_j(\tmop{Re}(s_j)) <1
\quad \mbox{and} \quad \max_j(|\tmop{Im}(s_j)|) < T^{ (\sigma-\frac12)/4}.
$$
Then, we have
\begin{align*}
\mathcal{D}_T(s_1, \ldots, s_J) \ll \frac{J^2 }{(\log T)^{\sigma}}.
\end{align*}
\end{thm}
With all of the above tools in place we are ready to prove Theorem \ref{thm:main}. This is accomplished in the next section.
\section{Effective universality: Proof of Theorem \ref{thm:main}} \label{sec:proof}
In this section, we will prove Theorem \ref{thm:main} using the results
described in Section \ref{sec:propositions}. First, by the maximum modulus principle, the maximum of $|\zeta(\tfrac 34 + it + z) -
f(z)|$ in the disc $\{z: |z|\leq r\}$ must occur on its boundary $\{z: |z|=r\}$.
Our idea consists of first covering the circle $|z| = r$ with $J$ discs of radius $\varepsilon$ and centres $z_j$, where $z_j\in \{z: |z|=r\} $ for all $1\leq j\leq J$, and $J \asymp 1/\varepsilon$. We call each of the discs
$\mathcal{D}_j$. Then, we observe that
\begin{equation}\label{ComparisonSupMax}
\max_{j\leq J}|\zeta(\tfrac 34 + it + z_j) -
f(z_j)|\leq \max_{|z| \leq r} |\zeta(\tfrac 34 + it + z) - f(z)|\leq \max_{j\leq J}\max_{z\in \mathcal{D}_j} |\zeta(\tfrac 34 + it + z) - f(z)|.
\end{equation}
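As a concrete (purely illustrative) sketch of this covering, with arbitrary parameter values:
\begin{verbatim}
# Covering |z| = r by J ~ 2*pi*r/eps discs of radius eps centred at
# z_j = r * exp(2*pi*i*j/J); checks that every boundary point is
# eps-close to some centre. Parameter values are arbitrary.
import numpy as np

r, eps = 0.2, 0.01
J = int(np.ceil(2 * np.pi * r / eps))
centres = r * np.exp(2j * np.pi * np.arange(J) / J)

phi = np.linspace(0.0, 2.0 * np.pi, 10_000)
boundary = r * np.exp(1j * phi)
dist = np.abs(boundary[:, None] - centres[None, :]).min(axis=1)
print(J, bool(dist.max() < eps))
\end{verbatim}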
Using Proposition \ref{ControlDerivative}, we shall prove that for all $j\leq J$ (where $J$ is a small power of $\log T$) we have
$$\max_{z\in \mathcal{D}_j} |\zeta(\tfrac 34 + it + z) - f(z)|\approx |\zeta(\tfrac 34 + it + z_j)-
f(z_j)|$$ for all $t\in [T, 2T]$ except for a set of points $t$ of very small measure. We will then deduce that the (weighted) distribution of $\max_{|z| \leq r} |\zeta(\tfrac 34 + it + z) - f(z)|$ is very close to the corresponding distribution of $\max_{j\leq J}|\zeta(\tfrac 34 + it + z_j)-
f(z_j)|$, for $t\in [T, 2T]$. We will also establish an analogous result for the random model $\zeta(s, X)$ along the same lines, by using Proposition \ref{DerRandom} instead of Proposition \ref{ControlDerivative}. Therefore, to complete the proof of Theorem \ref{thm:main} we need to compare the distributions of $\max_{j\leq J}|\zeta(\tfrac 34 + it + z_j)-f(z_j)|$ and $\max_{ j\leq J}|\zeta(\tfrac 34 + z_j, X)-
f(z_j)|$. Using Theorem \ref{discrep} we prove
\begin{proposition}\label{DistributionMax}
Let $T$ be large, $0<r<1/4$ and $J\leq (\log T)^{(3/4-r)/7}$ be a positive integer. Let $z_1, \dots, z_J$ be complex numbers such that $|z_j|\leq r$. Then we have
\begin{align*}
&\left|\mathbb{P}_T\left(\max_{j\leq J}|\zeta(\tfrac 34 + it + z_j)-
f(z_j)|\leq u\right)-\mathbb P\left(\max_{ j\leq J}|\zeta(\tfrac 34 + z_j, X)-
f(z_j)|\leq u\right)\right|\\
&\ll_u \frac{(J\log\log T)^{6/5}}{(\log T)^{(3/4-r)/5}}.
\end{align*}
\end{proposition}
\begin{proof}
Fix a positive real number $u$. Let $\mathcal{A}_J(T)$ be the set of those $t$ for which
$|\arg\zeta(\tfrac 34 + it + z_j)|\leq \log\log T$ for every $j\leq J$. Since $\re(\tfrac 34 + it + z_j)\geq \tfrac34-r$ and $\im(\tfrac 34 + it + z_j)=t+O(1)$, it follows from Theorem 1.1 and Remark 1 of \cite{La} that for each $j\leq J$ we have
\begin{equation}\label{LargeDeviationArg}
\mathbb{P}_T\left(|\arg \zeta(\tfrac 34 + it + z_j)|\geq \log\log T\right)
\ll \exp\left(-(\log\log T)^{(\tfrac14+r)^{-1}}\right)\ll \frac{1}{(\log T)^4}.
\end{equation}
Therefore, we obtain
$$\mathbb{P}_T\left([T, 2T]\setminus\mathcal{A}_J(T)\right)\leq \sum_{j=1}^J
\mathbb{P}_T\left(|\arg \zeta(\tfrac 34 + it + z_j)|\geq \log\log T\right)\ll \frac{J}{(\log T)^4}\ll \frac{1}{(\log T)^2},$$
and this implies that
\begin{equation} \label{mainterm}
\begin{aligned}\mathbb{P}_T\left(\max_{j\leq J}|\zeta(\tfrac 34 + it + z_j)-
f(z_j)|\leq u\right)&=\mathbb{P}_T \left(\max_{ j\leq J} |\zeta(\tfrac 34 + it + z_j) - f(z_j)
| \leq u \ , \ t \in \mathcal{A}_J(T)\right)\\
&+O\left(\frac{1}{(\log T)^2}\right).
\end{aligned}
\end{equation}
For each $j\leq J$ consider the region
$$
\mathcal U_j=\left\{ z:
|e^z - f(z_j)| \leq u \ , \ |\im(z) | \leq \log\log T \right\}.
$$
We
cover $\mathcal U_j$ with $K \asymp \tmop{area}(\mathcal U_j) / \varepsilon^2 \asymp \log \log T/\varepsilon^2$ squares
$\mathcal{R}_{j, k}$ with
sides of length $\varepsilon=\varepsilon(T)$, where $\varepsilon$ is a small positive parameter to be chosen later. Let $\mathcal K_j$ denote the set of $k \in \{1, 2, \ldots, K\}$ such that the intersection of
$\mathcal R_{j,k}$ with the boundary of $\mathcal U_j$ is empty and write $\mathcal K_j^c$ for the relative complement of $\mathcal K_j$ with respect to $\{1, 2, \ldots, K\}$. Note that $ |\mathcal K_j^c| \asymp \log \log T/\varepsilon$. By construction,
\[
\left( \bigcup_{k \in \mathcal K_j} \mathcal R_{j,k} \right)
\subset
\mathcal U_j \subset \left( \bigcup_{k \le K} \mathcal R_{j,k} \right).
\]
Therefore (\ref{mainterm}) can be expressed as
$$
\mathbb{P}_T \Big ( \forall j \leq J, \forall k \leq K:
\log \zeta(\tfrac 34 + it + z_j) \in \mathcal{R}_{j,k} \Big) +
\mathcal{E}_1
$$
where by Theorem \ref{discrep}
\begin{equation}
\begin{split}
\mathcal{E}_1 \ll & \sum_{j \leq J} \sum_{k \in \mathcal K_j^c}
\mathbb{P}_{T} \Big ( \log \zeta(\tfrac 34 + it + z_j) \in \
\mathcal{R}_{j,k} \Big ) \\
\ll& \sum_{j \leq J} \sum_{k \in \mathcal K_j^c} \left(
\mathbb{P} \Big ( \log \zeta(\tfrac 34+z_j,X) \in \
\mathcal{R}_{j,k} \Big ) +\frac{1}{(\log T)^{3/4-r}} \right) \\
\ll& J \cdot \frac{ \log \log T}{\varepsilon} \left( \varepsilon^2+\frac{1}{ (\log T)^{3/4-r}} \right),
\end{split}
\end{equation}
and in the last step we used the fact that
$\log \zeta(s, X)$ is an absolutely continuous random variable (see for example Jessen and Wintner \cite{JeWi}).
We conclude that
\begin{equation}\label{CoveringRectanglesZeta}
\begin{aligned}
\mathbb{P}_T \Big (\max_{j\leq J} |\zeta(\tfrac 34 + it + z_j) - f(z_j)
| \leq u \Big) & = \mathbb{P}_T \Big ( \forall j \leq J,
\forall k \leq K:
\log \zeta(\tfrac 34 + it + z_j) \in \mathcal{R}_{j,k} \Big ) \\
& + O\left( \varepsilon J \log\log T + \frac{J\log\log T}{\varepsilon (\log T)^{3/4-r}} \right).
\end{aligned}
\end{equation}
Additionally, it follows from Theorem \ref{discrep} that the main term of this last estimate
equals
\begin{equation} \label{eq:mainterm}
\mathbb{P} \Big ( \forall j \leq J, \forall k \leq K:
\log \zeta(\tfrac 34 + z_j, X) \in \mathcal{R}_{j, k} \Big) +
O \left( \frac{J^2 (\log\log T)^2}{\varepsilon^4(\log T)^{3/4-r}} \right).
\end{equation}
We now repeat the exact same argument but for the random model $\zeta(s, X)$ instead of the zeta function. In particular, instead of \eqref{LargeDeviationArg} we shall use that
$$
\mathbb{P} \left(|\arg \zeta(\tfrac 34 + z_j, X)|\geq \log\log T\right)
\ll \exp\left(-(\log\log T)^{(\tfrac14+r)^{-1}}\right)\ll \frac{1}{(\log T)^4},
$$
which follows from Theorem 1.9 of \cite{La}. Thus, similarly to \eqref{CoveringRectanglesZeta} we obtain
\begin{align*}
\mathbb{P} \Big ( \forall j \leq J, \forall k \leq K:
\log \zeta(\tfrac 34 + z_j, X) \in \mathcal{R}_{j, k} \Big) &= \mathbb{P}
\left(\max_{j\leq J} |\zeta(\tfrac 34 + z_j, X) - f(z_j)
| \leq u\right)\\
& + O\left( \varepsilon J \log\log T + \frac{J\log\log T}{\varepsilon (\log T)^{3/4-r}} \right).
\end{align*}
Combining the above estimate with \eqref{CoveringRectanglesZeta} and \eqref{eq:mainterm} we conclude that
\begin{align*}
\mathbb{P}_T \Big (\max_{j\leq J} |\zeta(\tfrac 34 + it + z_j) - f(z_j)
| \leq u \Big) & = \mathbb{P}
\left(\max_{j\leq J} |\zeta(\tfrac 34 + z_j, X) - f(z_j)
| \leq u\right) \\
& + O\left( \frac{J^2 (\log\log T)^2}{\varepsilon^4(\log T)^{3/4-r}}+ \varepsilon J \log\log T\right).
\end{align*}
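The choice of $\varepsilon$ below balances the two error terms; for the record,
$$
\frac{J^{2}(\log\log T)^{2}}{\varepsilon^{4}(\log T)^{3/4-r}}=\varepsilon J\log\log T \iff \varepsilon^{5}=\frac{J\log\log T}{(\log T)^{3/4-r}},
$$
and with this choice both terms are $\ll (J\log\log T)^{6/5}(\log T)^{-(3/4-r)/5}$.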
Finally, choosing
$$\varepsilon= \left(\frac{J\log\log T}{(\log T)^{3/4-r}}\right)^{1/5}$$
completes the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:main}]
We wish to estimate
\begin{equation} \label{toestimate}
\frac{1}{T}\int_{T}^{2T}\omega\left(\max_{|z| \leq r} |\zeta(\tfrac 34 + it + z) - f(z)| \right)dt
\end{equation}
with $f$ an analytic non-vanishing function, and where $\omega$ is a continuously differentiable function with compact support.
Recall that the maximum of $|\zeta(\tfrac 34 + it + z) -
f(z)|$
on the disc $\{z: |z|\leq r\}$ must occur on its boundary $\{z: |z|=r\}$, by the maximum modulus principle. Let $\varepsilon \leq (1/4-r)/4$ be a small positive parameter to be chosen later, and
cover the circle $|z| = r$ with $J\asymp 1/\varepsilon$ discs $\mathcal{D}_j$ of radius $\varepsilon$ and centres $z_j$, where $z_j\in \{z: |z|=r\} $ for all $j\leq J$.
Let $\mathcal{S}_V(T)$ denote the set of those $t \in [T, 2T]$
such that
$$
\max_{|z| \leq (r+1/4)/2} |\zeta'(\tfrac 34 + it + z)| \leq e^V
$$
where $V\leq \log\log T$ is a large parameter to be chosen later, and let $L := \max_{|z| \leq (r+1/4)/2} |f'(z)|$.
Then for $t \in \mathcal{S}_V(T)$, and for all $z\in \mathcal{D}_j$ we have
\begin{equation}\label{SmallDistance}
\begin{aligned}
& \Big|\zeta(\tfrac 34 +it+z)-f(z) - \big(\zeta(\tfrac 34 + it+ z_j)-f(z_j)\big)\Big|
= \left|\int_{z_j}^{z} \zeta'(\tfrac 34 + it+ s)-f'(s) ds\right| \\
& \leq |z - z_j| \cdot \left(\max_{|z| \leq (r+1/4)/2} |\zeta'(\tfrac 34 + it+ z)|+L\right)
\leq \varepsilon (e^V+L) \leq C\varepsilon e^V,
\end{aligned}
\end{equation}
for some constant $C$ depending at most on $L$. Define
$$
\theta(t):= \max_{|z| \leq r} |\zeta(\tfrac 34 + it + z) - f(z)|- \max_{j\leq J}|\zeta(\tfrac 34 + it + z_j) -
f(z_j)|.$$ Then, it follows from \eqref{ComparisonSupMax} and \eqref{SmallDistance} that for all $t\in \mathcal{S}_V(T)$ we have
\begin{equation}\label{BoundErrorSupMax}
0\leq \theta(t)\leq C\varepsilon e^V.
\end{equation}
Therefore, using this estimate together with Proposition \ref{ControlDerivative} and the fact that $\omega$ is bounded, we deduce that \eqref{toestimate} equals
\begin{equation}\label{SplitSmall}
\begin{aligned}
&\frac{1}{T}\int_{t\in \mathcal{S}_V(T)}\omega\left(\max_{j\leq J}|\zeta(\tfrac 34 + it + z_j)-
f(z_j)|+\theta(t)\right)dt+ O\left(e^{-V^2}\right)\\
&=\frac{1}{T}\int_{t\in \mathcal{S}_V(T)}\omega\left(\max_{ j\leq J}|\zeta(\tfrac 34 + it + z_j)-
f(z_j)|\right)dt+ O\left(|\mathcal{E}_2|+ e^{-V^2}\right)\\
&=\frac{1}{T}\int_{T}^{2T}\omega\left(\max_{j\leq J}|\zeta(\tfrac 34 + it + z_j)-
f(z_j)|\right)dt+ O\left(|\mathcal{E}_2|+ e^{-V^2}\right),\\
\end{aligned}
\end{equation}
where
$$\mathcal{E}_2= \frac{1}{T}\int_{t\in \mathcal{S}_V(T)} \int_0^{\theta(t)} \omega'\left(\max_{j\leq J}|\zeta(\tfrac 34 + it + z_j)-
f(z_j)|+x\right) dx \cdot dt
\ll \varepsilon e^V, $$
using the fact that $\omega'$ is bounded on $\mathbb{R}$ together with \eqref{BoundErrorSupMax}.
Furthermore, observe that
\begin{equation}\label{SmoothProb}
\begin{aligned}
&\frac{1}{T}\int_{T}^{2T}\omega\left(\max_{j\leq J}|\zeta(\tfrac 34 + it + z_j)-
f(z_j)|\right)dt\\
=&
-\frac{1}{T}\int_{T}^{2T}\int_{\max_{j\leq J}|\zeta(\frac 34 + it + z_j)-
f(z_j)|}^{\infty}\omega'(u) du \cdot dt\\
=&
-\int_{0}^{\infty}\omega'(u) \cdot \mathbb P_T\left(\max_{j\leq J}|\zeta(\tfrac 34 + it + z_j)-
f(z_j)|\leq u\right) du.
\end{aligned}
\end{equation}
Since $\omega$ has compact support, $\omega'(u)=0$ for $u>A$, for some positive constant $A$. Furthermore, it follows from Proposition \ref{DistributionMax} that for all $0\leq u\leq A$ we have
\begin{align*}
\mathbb P_T\left(\max_{j\leq J}|\zeta(\tfrac 34 + it + z_j)-
f(z_j)|\leq u\right)&= \mathbb P\left(\max_{j\leq J}|\zeta(\tfrac 34 + z_j, X)-
f(z_j)|\leq u\right)\\
& +O\left(\frac{(J\log\log T)^{6/5}}{(\log T)^{(3/4-r)/5}}\right).
\end{align*}
Inserting this estimate in \eqref{SmoothProb} gives that
\begin{equation}\label{ApproximationMAX}
\begin{aligned}
&\frac{1}{T}\int_{T}^{2T}\omega\left(\max_{j\leq J}|\zeta(\tfrac 34 + it + z_j)-
f(z_j)|\right)dt\\
&= -\int_{0}^{\infty}\omega'(u) \cdot \mathbb P\left(\max_{j\leq J}|\zeta(\tfrac 34 + z_j, X)-
f(z_j)|\leq u\right) du+ O\left(\frac{(J\log\log T)^{6/5}}{(\log T)^{(3/4-r)/5}}\right)\\
&= \ex\left(\omega\left(\max_{j\leq J}|\zeta(\tfrac 34 + z_j, X)-
f(z_j)|\right) \right)+ O\left(\frac{(J\log\log T)^{6/5}}{(\log T)^{(3/4-r)/5}}\right).\\
\end{aligned}
\end{equation}
To finish the proof, we shall appeal to the same argument used to establish \eqref{SplitSmall}, in order to compare the (weighted) distributions of $\max_{j\leq J}|\zeta(\tfrac 34 + z_j, X)-f(z_j)|$ and $\max_{|z|\leq r}|\zeta(\tfrac 34 + z, X)-
f(z)|$. Let $\mathcal{S}_V(X)$ denote the event corresponding to
$$
\max_{|z| \leq (r+1/4)/2} |\zeta'(\tfrac 34 + z, X)| \leq e^V,
$$
and let $\mathcal{S}_V^c(X)$ be its complement. Then, it follows from Proposition \ref{DerRandom} that $\mathbb{P}\big(\mathcal{S}_V^c(X)\big)\ll \exp(-V^2).$ Moreover, similarly to \eqref{SmallDistance} one can see that for all outcomes in $\mathcal{S}_V(X)$ we have, for all $z\in \mathcal{D}_j$
$$\Big|\zeta(\tfrac 34+z, X)-f(z) - \big(\zeta(\tfrac 34 + z_j, X)-f(z_j)\big)\Big|
= \left|\int_{z_j}^{z} \zeta'(\tfrac 34 + s, X)-f'(s) ds\right| \ll \varepsilon e^V.
$$
Thus, since the maximum of $|\zeta(\tfrac 34 + z, X) - f(z)|$ for $|z|\leq r$ occurs (almost surely) on the boundary $|z|=r$, following the argument leading to \eqref{SplitSmall} we conclude that
\begin{align*}
&\ex\left(\omega\left(\max_{|z|\leq r}|\zeta(\tfrac 34 + z, X)-
f(z)|\right)\right)\\
&=\ex \left( \mathbf 1_{\mathcal S_V(X)} \, \omega\left(\max_{|z|\leq r}|\zeta(\tfrac 34 + z, X)-
f(z)| \right) \right)+ O\left(e^{-V^2}\right)\\
&=\ex \left( \mathbf 1_{\mathcal S_V(X)} \, \omega\left(\max_{j\leq J}|\zeta(\tfrac 34 + z_j, X)-
f(z_j)| \right) \right)+ O\left(\varepsilon e^V+ e^{-V^2}\right)\\
&= \ex\left(\omega\left(\max_{j\leq J}|\zeta(\tfrac 34 + z_j, X)-
f(z_j)|\right)\right)+ O\left(\varepsilon e^V+ e^{-V^2}\right).\\
\end{align*}
Finally, combining this estimate with \eqref{SplitSmall} and \eqref{ApproximationMAX}, and noting that $J\asymp 1/\varepsilon $ we deduce that
\begin{align*}
\frac{1}{T}\int_{T}^{2T}\omega\left(\max_{|z| \leq r} |\zeta(\tfrac 34 + it + z) - f(z)| \right)dt&=\ex\left(\omega\left(\max_{|z|\leq r}|\zeta(\tfrac 34 + z, X)-
f(z)|\right)\right)\\
&+O\left(\varepsilon e^V+ e^{-V^2}+\frac{(\log\log T)^{6/5}}{\varepsilon^{6/5}(\log T)^{(3/4-r)/5}}\right).
\end{align*}
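A quick check of the exponents in the final optimization: with $\varepsilon=(\log T)^{-(3/4-r)/11}$ and $V=2\sqrt{\log\log T}$,
$$
\varepsilon e^{V}=(\log T)^{-\frac{3/4-r}{11}+o(1)},\qquad e^{-V^{2}}=(\log T)^{-4},\qquad \frac{(\log\log T)^{6/5}}{\varepsilon^{6/5}(\log T)^{(3/4-r)/5}}=(\log T)^{-\frac{3/4-r}{11}+o(1)},
$$
since $\tfrac15-\tfrac{6}{55}=\tfrac{1}{11}$, so all three error terms are $(\log T)^{-(3/4-r)/11+o(1)}$.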
Choosing $\varepsilon=(\log T)^{-(3/4-r)/11}$ and $V=2\sqrt{\log\log T}$ completes the proof.
\end{proof}
\section{Controlling the derivatives of the zeta function and the random model: Proof of Propositions \ref{ControlDerivative} and \ref{DerRandom}} \label{sec:derivative}
By Cauchy's theorem we have
$$
|\zeta'(\tfrac 34 + it + z)| \leq \frac{1}{\delta}\max_{|s-z|=\delta} |\zeta(\tfrac 34 + it + s)|,
$$
and hence we get
\begin{equation}\label{Cauchy}
\max_{|z|\leq r} |\zeta'(\tfrac 34 + it + z)| \leq
\frac{1}{\delta} \max_{|s| \leq r + \delta} |\zeta(\tfrac 34 + it + s)|.
\end{equation}
Therefore, it follows that
\begin{equation} \label{derivative}
\begin{split}
\mathbb{P}_T \left( \max_{|z| \leq r} |\zeta'(\tfrac 34 + it + z)| >
e^V \right) &\leq
\mathbb{P}_T \left( \max_{|s| \leq r + \delta} |\zeta(\tfrac 34 + it + s)| >
\delta e^V \right) \\
&=\mathbb{P}_T \left( \max_{|s| \leq r + \delta} \log |\zeta(\tfrac 34 + it + s)| >
V+\log \delta \right).
\end{split}
\end{equation}
To bound the RHS we estimate large moments of $\log \zeta(\tfrac34+it+s)$. This is accomplished by approximating
$\log \zeta(\tfrac 34 + it + s)$ by a short Dirichlet polynomial, uniformly for all $s$ in the disc $\{|s|\leq r+\delta\}$. Using zero density estimates and large sieve inequalities, we can show that such an approximation holds for all $t\in [T, 2T]$, except for an exceptional set of $t$'s with very small measure. We prove
\begin{lemma}\label{UnifShortDirichlet}
Let $0<r<1/4$ be fixed, and $\delta= (1/4-r)/4$. Let $y\leq \log T$ be a real number. There exists a set $\mathcal{I}(T)\subset [T, 2T]$ with $\text{meas}(\mathcal{I}(T))\ll T^{1-\delta} y(\log T)^5$, such that for all $t\in [T, 2T]\setminus\mathcal{I}(T)$ and all $|s|\leq r+\delta$ we have
$$
\log \zeta(\tfrac 34 +it+s)=\sum_{ n \le y} \frac{\Lambda(n)}{n^{\frac34+it+s} \log n}+O\left(
\frac{(\log y)^2 \log T}{y^{(1/4-r)/2}}\right).
$$
\end{lemma}
To prove this result, we need the following lemma from Granville and Soundararajan \cite{GrSo}.
\begin{lemma}[Lemma 1 of \cite{GrSo}]\label{ApproxShortEuler}
Let $y\geq 2$ and $|t|\geq y+3$ be real numbers. Let $1/2\leq \sigma_0< 1$ and suppose that the rectangle
$\{z: \sigma_0<\textup{Re}(z)\leq 1, |\textup{Im}(z)-t|\leq y+2\}$ is free of zeros of $\zeta(z)$. Then for any $\sigma$ with
$\sigma_0+2/\log y<\sigma\leq 1$ we have
$$
\log \zeta(\sigma+it)=\sum_{n\leq y}\frac{\Lambda(n)}{n^{\sigma+it}\log n} +O\left(\log |t| \frac{(\log y)^2}{y^{\sigma-\sigma_0}}\right).
$$
\end{lemma}
\begin{proof}[Proof of Lemma \ref{UnifShortDirichlet}]
Let $\sigma_0= 1/2+\delta$. For $j=1, 2$ let $\mathcal{T}_j$ be the set of those $t \in [T, 2T]$ for which
the rectangle
$$
\{ z : \sigma_0 < \textup{Re}(z) \leq 1 , |\textup{Im}(z) - t| < y+ 1+ j \}
$$
is free of zeros of $\zeta(z)$. Then, note that $\mathcal{T}_2\subseteq \mathcal{T}_1$, and for all $t\in\mathcal{T}_2 $, we have $t+\im(s)\in \mathcal{T}_1$ for all $|s|\leq r+\delta $. Hence, by Lemma \ref{ApproxShortEuler}
we have
$$
\log \zeta(\tfrac 34 +it+s) = \sum_{n \leq y} \frac{\Lambda(n)}{n^{3/4+ it+s}
\log n} + O \left( \frac{(\log y)^2 \log T}{y^{(1/4-r)/2}}\right),
$$
for all $t\in \mathcal{T}_2$ and all $|s|\leq r+\delta$. Let $N(\sigma, T)$ be the number of zeros of $\zeta(s)$ in the rectangle $\sigma<\re(s)\leq 1$ and $|\im(s)|\leq T$. By the classical zero density estimate $N(\sigma, T)\ll T^{3/2-\sigma}(\log T)^5$ (see for example Theorem 9.19 A of Titchmarsh \cite{Ti})
we deduce that the measure of the complement of $\mathcal{T}_2$ in $[T, 2T]$
is $\ll T^{1- \delta}y(\log T)^5$.
\end{proof}
We also require a minor variant of Lemma 3.3 of \cite{LLR}, whose proof we will omit.
\begin{lemma} \label{lem:momentbd}
Fix $1/2<\sigma<1$, and let $s$ be a complex number such that $\re(s)=\sigma,$ and $|\im(s)|\le1$. Then, for any positive integer
$k \le \log T/(3 \log y)$ we have
\[
\frac{1}{T} \int_{T}^{2T} \left| \sum_{n \le y} \frac{\Lambda(n)}{n^{s+it} \log n} \right|^{2k} \, dt \ll \left(\frac{c_8 k^{1-{\sigma}}}{(\log k)^{\sigma}} \right)^{2k}
\]
and
\[
\mathbb E \left(\left| \log \zeta(s,X) \right|^{2k}\right)
\ll \left(\frac{c_8 k^{1-{\sigma}}}{(\log k)^{\sigma}} \right)^{2k}
\]
for some positive constant $c_8$ that depends at most on $\sigma$.
\end{lemma}
\begin{proof}[Proof of Proposition \ref{ControlDerivative}]
Let $\delta=e^{-V/2}$. Taking $y=(\log T)^{5(1/4-r)^{-1}}$ in Lemma \ref{UnifShortDirichlet} gives, for all $t\in [T, 2T]$ except for a set of measure $\ll T^{1- (1/4-r)/5}$, that
\begin{equation}\label{ZeroDensityApp}
\log \zeta(\tfrac 34 +it+s) = \sum_{n \leq y} \frac{\Lambda(n)}{n^{3/4+ it+s}
\log n}+ O \left(\frac{1}{\log T}\right),
\end{equation}
for all $|s|\leq r+\delta$.
Furthermore, it follows from Cauchy's integral formula that
\[
\left( \sum_{n \le y} \frac{\Lambda(n)}{n^{3/4+it+s} \log n} \right)^{2k}=
\frac{1}{2\pi i} \int_{|z|=r+2\delta} \left( \sum_{n \le y} \frac{\Lambda(n)}{n^{3/4+it+z} \log n} \right)^{2k} \frac{dz}{z-s}.
\]
Applying Lemma \ref{lem:momentbd} we get that
\begin{equation} \label{eq:cauchy}
\begin{split}
\frac{1}{T}
\int_T^{2T}
\left(\max_{|s| \le r+\delta} \left| \sum_{n \le y} \frac{\Lambda(n)}{n^{3/4+it+s} \log n} \right| \right)^{2k} \, dt
\ll & \frac{1}{\delta} \int_{|z|=r+2\delta} \frac{1}{T} \int_{T}^{2T} \left| \sum_{n \le y} \frac{\Lambda(n)}{n^{3/4+it+z} \log n} \right|^{2k} \, dt |dz| \\
\ll & e^{V/2} \left(c_8(r) \frac{k^{1-\sigma'(r)}}{(\log k)^{\sigma'(r)}} \right)^{2k}
\end{split}
\end{equation}
where $\sigma'(r)=\tfrac34-r-2\delta$, and $k \le c_9 \log T/\log \log T$, for some sufficiently small constant $c_9>0$.
We now choose $k=\lfloor c_6(r) V^{\frac{1}{(1-\sigma(r))}} (\log V)^{\frac{\sigma(r)}{1-\sigma(r)}} \rfloor$ (so that $k^{\sigma'(r)} \asymp k^{\sigma(r)}$), where $c_6(r)$ is a sufficiently small constant depending only on $r$.
Using \eqref{derivative} and \eqref{ZeroDensityApp} along with Chebyshev's inequality and the above estimate we conclude that
there exists $c_7(r)>0$ such that
\[
\begin{split}
\mathbb P_T\bigg( \max_{|z| \le r} |\zeta'(\tfrac34+it+z)| > e^V \bigg)
\ll & \mathbb P_T\bigg( \max_{|s| \le r+\delta} \left| \sum_{n \le y} \frac{\Lambda(n)}{n^{3/4+it+s} \log n} \right| > \frac{V}{4} \bigg)+T^{- (1/4-r)/5} \\
\ll & e^{V/2} \bigg(\frac{4}{V} \cdot c_8(r)\frac{ k^{1-\sigma'(r)}}{ (\log k)^{\sigma'(r)}}\bigg)^{2k}+T^{- (1/4-r)/5}\\
\ll & \exp\left(-c_6 V^{\frac{1}{1-\sigma(r)}} (\log V)^{\frac{\sigma(r)}{1-\sigma(r)}}\right)
\end{split}
\]
for $V \le c_7 (\log T)^{1-\sigma(r)}/\log \log T$.
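Indeed, with this choice of $k$ we have $\log k\asymp \log V$ and $k^{1-\sigma(r)}\asymp c_6(r)^{1-\sigma(r)}V(\log V)^{\sigma(r)}$, so that
$$
\frac{4c_8(r)}{V}\cdot\frac{k^{1-\sigma'(r)}}{(\log k)^{\sigma'(r)}}\asymp c_6(r)^{1-\sigma(r)},
$$
which is at most $e^{-1}$ once $c_6(r)$ is small enough; the middle bound above is then $\ll e^{V/2}e^{-2k}\ll e^{-k}$, which is of the stated shape.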
\end{proof}
We now prove Proposition \ref{DerRandom} along the same lines. The proof is in fact easier than in the zeta function case, since we can compute the moments of $\log \zeta(s, X)$, for any $s$ with $\re(s)>1/2$.
\begin{proof}[Proof of Proposition \ref{DerRandom}] Let $\delta= e^{-V/2}$. Since $\zeta(\tfrac34+s, X)$ is almost surely analytic in $|s|\leq r+2\delta$, then by Cauchy's estimate we have almost surely that
$$
\max_{|z|\leq r} |\zeta'(\tfrac 34 + z, X)| \leq
\frac{1}{\delta} \max_{|s|\leq r + \delta} |\zeta(\tfrac 34 + s, X)|.
$$
Therefore, we obtain
\begin{equation} \label{derivativeRand}
\begin{split}
\mathbb{P} \left( \max_{|z|\leq r} |\zeta'(\tfrac 34 + z, X)| >
e^V \right)\leq &
\mathbb{P} \left( \max_{|s|\leq r+\delta} |\zeta(\tfrac 34 + s, X)| >
\delta e^V \right) \\
\le & \mathbb{P} \left( \max_{|s|\leq r+\delta} |\log \zeta(\tfrac 34 + s, X)| >
\frac{V}{2} \right).
\end{split}
\end{equation}
Let $k$ be a positive integer. By the almost sure convergence of the series \eqref{ErrorRandom} and \eqref{MainRandom}, $\log \zeta(\tfrac34+s,X)$ is almost surely a holomorphic function in $|s|\leq r+2\delta$.
Using Cauchy's integral formula as in \eqref{eq:cauchy}, we obtain almost surely that
$$
\left(\max_{|s|\leq r+\delta} |\log \zeta(\tfrac 34 + s, X)|\right)^{2k}
\ll \frac{1}{\delta} \int_{|z|=r+2\delta} \left| \log \zeta(\tfrac 34 + z, X)\right|^{2k}\cdot |dz|.
$$
Hence, applying Lemma
\ref{lem:momentbd} we get
\begin{equation}\label{BoundProbSubHarm}
\begin{aligned}
\mathbb{P} \left(\max_{|s|\leq r+\delta} |\log \zeta(\tfrac 34 + s, X)| >
V/2 \right)
&\leq \left( \frac{2}{V}\right)^{2k} \cdot \ex\left(\left(\max_{|s|\leq r+\delta} |\log \zeta(\tfrac 34 + s, X)| \right)^{2k} \right)\\
& \ll \left( \frac{2}{V}\right)^{2k} e^{V/2} \int_{|z|=r+2\delta} \ex\left(|\log \zeta(\tfrac 34 + z, X)|^{2k}\right) \cdot |dz| \\
& \ll e^{V/2} \left(\frac{2 c_8(r) k^{1-\sigma'(r)}}{V (\log k)^{\sigma'(r)}}\right)^{2k},
\end{aligned}
\end{equation}
where $\sigma'(r)=\tfrac34-r-2\delta$. Let $\sigma(r)=\tfrac34-r$ and
take $k=\lfloor c_6 V^{\frac{1}{1-\sigma(r)}} (\log V)^{\frac{\sigma(r)}{1-\sigma(r)}} \rfloor$, where $c_6$ is sufficiently small (note that $k^{\sigma'(r)} \asymp k^{\sigma(r)}$), and then apply \eqref{BoundProbSubHarm} to complete the proof.
\end{proof}
\section{Moments of joint shifts of $\log \zeta(s)$: Proof of Theorem \ref{MomentsShifts} }\label{sec:moments}
The proof of Theorem \ref{MomentsShifts} splits into two parts.
In the first part we derive an approximation to
$$
\prod_{j = 1}^{k} \log \zeta(s_j + it)
$$
by a short Dirichlet polynomial.
In the second part we compute the resulting mean-values and obtain
Theorem \ref{MomentsShifts}.
\subsection{Approximating $\prod_{j = 1}^{k} \log \zeta(s_j + it)$ by short Dirichlet polynomials}
Fix $1/2<\sigma_0<1$, and let $\delta:=\sigma_0-1/2$. Let $k\leq \log T$ be a positive integer and $s_1, s_2, \dots, s_k$ be complex numbers (not necessarily distinct)
in the rectangle $\{ z: \sigma_0\leq \re(z)<1, \text{ and }|\im(z)|\leq T^{\delta/4}\}$. We let ${\mathbf s}=(s_1,\dots, s_k)$, and define
$$
F_{{\mathbf s}}(n)= \sum_{\substack{n_1,n_2,\dots,n_k\geq 2\\ n_1n_2\cdots n_k=n}}\prod_{\ell=1}^k \frac{\Lambda(n_{\ell})}{n_{\ell}^{s_{\ell}}\log(n_{\ell})}.
$$
Then for all complex numbers $z$ with $\re(z)>1-\sigma_0$ we have
$$
\prod_{\ell=1}^k\log\zeta(s_{\ell}+z)= \sum_{n=1}^{\infty}\frac{F_{{\mathbf s}}(n)}{n^z}.$$
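For small $n$ the coefficients $F_{{\mathbf s}}(n)$ can be computed directly from the definition. The following brute-force sketch (purely illustrative, with hypothetical shift values) enumerates the ordered factorizations of $n$ into prime powers:
\begin{verbatim}
# Brute-force sketch: F_s(n) as a sum over ordered factorizations
# n_1 * ... * n_k = n into prime powers (illustrative only).
import math
from sympy import factorint

def mangoldt(n):
    """von Mangoldt function: log p if n is a power of the prime p, else 0."""
    f = factorint(n)
    return math.log(next(iter(f))) if len(f) == 1 else 0.0

def F(n, s_list):
    k = len(s_list)
    prime_powers = [d for d in range(2, n + 1)
                    if n % d == 0 and mangoldt(d) > 0]
    total = 0j

    def rec(m, j, val):
        nonlocal total
        if j == k:
            if m == 1:
                total += val
            return
        for d in prime_powers:
            if m % d == 0:
                rec(m // d, j + 1,
                    val * mangoldt(d) / (d ** s_list[j] * math.log(d)))

    rec(n, 0, 1 + 0j)
    return total

# n = 8 with two equal shifts: the factorizations 2*4 and 4*2 contribute.
print(F(8, [0.75 + 0j, 0.75 + 0j]))
\end{verbatim}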
The main result of this subsection is the following proposition.
\begin{proposition}\label{Approx}
Let $T$ be large, $s_1, \dots, s_k$ be as above, and $\mathcal{E}(T)$ be as in Lemma \ref{ExceptionalSet} below. Then, there exist positive constants $a(\sigma_0), b(\sigma_0)$ such that if $k\leq a(\sigma_0)(\log T)/\log\log T$ and $t\in [T, 2T]\setminus \mathcal{E}(T)$ then
$$\prod_{j=1}^k\log\zeta(s_j+it) = \sum_{n\leq T^{\delta/8}} \frac{F_{\mathbf s}(n)}{n^{it}}+ O\left(T^{-b(\sigma_0)}\right).$$
\end{proposition}
This depends on a sequence of fairly standard lemmas which we now describe.
\begin{lemma}\label{DirichletC} With the same notation as above, we have
$$|F_{{\mathbf s}}(n)|\leq \frac{(2\log n)^k}{n^{\sigma_0}}.$$
\end{lemma}
\begin{proof}
We have
$$|F_{{\mathbf s}}(n)|\leq \frac{1}{n^{\sigma_0}(\log 2)^k}\sum_{\substack{n_1,n_2,\dots,n_k\geq 2\\ n_1n_2\cdots n_k=n}}\prod_{\ell=1}^k \Lambda(n_{\ell})\leq \frac{2^k}{n^{\sigma_0}}\left(\sum_{m|n}\Lambda (m)\right)^k\leq \frac{(2\log n)^k}{n^{\sigma_0}}.
$$
\end{proof}
\begin{lemma}\label{BoundLogZ}
Let $y\geq 2$ and $|t|\geq y+3$ be real numbers. Suppose that the rectangle $\{z: \sigma_0-\delta/2<\re(z)\leq 1, |\im(z)-t|\leq y+2\}$ is free of zeros of $\zeta(z)$. Then, for all complex numbers $s$ such that $\re(s)\geq \sigma_0-\delta/4$ and $|\im(s)|\leq y$ we have
$$\log\zeta(s+it)\ll_{\sigma_0} \log|t|.$$
\end{lemma}
\begin{proof}
This follows from Theorem 9.6 B of Titchmarsh \cite{Ti}.
\end{proof}
\begin{lemma}\label{ExceptionalSet}
Let $s_1, \dots, s_k$ be as above. Then, there exists a set $\mathcal{E}(T)\subset [T, 2T]$ with measure $\text{meas}(\mathcal{E}(T))\ll T^{1-\delta/8}$, and such that for all $t\in [T, 2T]\setminus \mathcal{E}(T)$ we have $\zeta(s_j+it+z)\neq 0$ for every $1\leq j\leq k$ and every $z$ in the rectangle $\{z: -\delta/2<\re(z)\leq 1, |\im(z)|\leq 3T^{\delta/4}\}$.
\end{lemma}
\begin{proof}
For every $1\leq j\leq k$, let $\mathcal{E}_j(T)$ be the set of $t\in [T, 2T]$ such that the rectangle $\{z: -\delta/2<\re(z)\leq 1, |\im(z)|\leq 3T^{\delta/4}\}$ contains a zero of $\zeta(s_j+it+z)$. Then, by the classical zero density estimate $N(\sigma, T)\ll T^{3/2-\sigma}(\log T)^5$, we deduce that
$$ \textup{meas}(\mathcal{E}_j(T))\ll T^{\delta/4} T^{3/2-\sigma_0+\delta/2}(\log T)^5 < T^{1-\delta/4}(\log T)^5.$$
We take $\mathcal{E}(T)=\cup_{j=1}^k \mathcal{E}_j(T)$. Then $\mathcal{E}(T)$ satisfies the assumptions of the lemma, since $\textup{meas}(\mathcal{E}(T))\ll T^{1-\delta/4}(\log T)^6\ll T^{1-\delta/8}$.
\end{proof}
We are now ready to prove
Proposition \ref{Approx}.
\begin{proof}[Proof of Proposition \ref{Approx}]
Let $x=\lfloor T^{\delta/8}\rfloor +1/2$. Let $c=1-\sigma_0+ 1/\log T$, and $Y=T^{\delta/4}$. Then by Perron's formula, we have for $t\in [T, 2T]\setminus \mathcal{E}(T)$
$$ \frac{1}{2\pi i}\int_{c-iY}^{c+iY} \left(\prod_{j=1}^k\log\zeta(s_j+it+z)\right)\frac{x^z}{z}dz=
\sum_{n\leq x} \frac{F_{{\mathbf s}}(n)}{n^{it}}+ O\left(\frac{x^c}{Y}\sum_{n=1}^{\infty} \frac{|F_{{\mathbf s}}(n)|}{n^{c}|\log(x/n)|}\right).$$
To bound the error term of this last estimate, we split the sum into three parts: $n\leq x/2$, $x/2<n<2x$ and $n\geq 2x$. The terms in the first and third parts satisfy $|\log(x/n)|\geq \log 2$, and hence their contribution is
$$\ll \frac{x^{1-\sigma_0}}{Y} \sum_{n=1}^{\infty}\frac{|F_{{\mathbf s}}(n)|}{n^{c}}\leq \frac{x^{1-\sigma_0}}{Y} \left(\sum_{n=1}^{\infty}\frac{\Lambda(n)}{n^{\sigma_0+c}\log n}\right)^k\leq \frac{x^{1-\sigma_0}(2\log T)^k}{Y}\ll T^{-b(\sigma_0)},$$
for some positive constant $b(\sigma_0)$, if $a(\sigma_0)$ is sufficiently small.
To handle the contribution of the terms $x/2<n<2x$, we put $r=x-n$, and use that $|\log(x/n)|\gg |r|/x$. Then by Lemma \ref{DirichletC} we deduce that the contribution of these terms is
$$\ll \frac{x^{1-\sigma_0}(3\log x)^k}{Y}\sum_{r\leq x}\frac{1}{r}\ll \frac{x^{1-\sigma_0}(3\log x)^{k+1}}{Y}\ll T^{-b(\sigma_0)}.$$
We now move the contour to the line $\re(s)=-\delta/4$. By Lemma \ref{ExceptionalSet}, we do not encounter any zeros of $\zeta(s_j+it+z)$ since $t\in [T, 2T]\setminus \mathcal{E}(T)$. We pick up a simple pole at $z=0$ which leaves a residue $\prod_{j=1}^k\log\zeta(s_j+it)$.
Also Lemma \ref{BoundLogZ} implies that for any $z$ on our contour we have
$$|\log\zeta(s_j+it+z)|\leq c(\sigma_0) \log T,$$
for all $j$ where $c(\sigma_0)$ is a positive constant. Therefore, we deduce that
$$ \frac{1}{2\pi i}\int_{c-iY}^{c+iY} \left(\prod_{j=1}^k\log\zeta(s_j+it+z)\right)\frac{x^z}{z}dz= \prod_{j=1}^k\log\zeta(s_j+it) + E_1,$$
where
\begin{align*}
E_1&=\frac{1}{2\pi i} \left(\int_{c-iY}^{-\delta/4-iY}+ \int_{-\delta/4-iY}^{-\delta/4+iY}+ \int_{-\delta/4+iY}^{c+iY}\right) \left(\prod_{j=1}^k\log\zeta(s_j+it+z)\right)\frac{x^z}{z}dz\\
&\ll \frac{x^{1-\sigma_0}(c(\sigma_0)\log T)^k}{Y}+ x^{-\delta/4}(c(\sigma_0)\log T)^k\log Y\ll T^{-b(\sigma_0)},
\end{align*}
as desired.
\end{proof}
\subsection{An asymptotic formula for the moment of products of shifts of $\log\zeta(s)$}
\begin{proof}[Proof of Theorem \ref{MomentsShifts}]
Let $\mathcal{E}_1(T)$ and $\mathcal{E}_2(T)$ be the corresponding exceptional sets for ${\mathbf s}$ and ${\mathbf r}$ respectively as in Lemma \ref{ExceptionalSet}, and let $\mathcal{E}(T)= \mathcal{E}_1(T)\cup \mathcal{E}_2(T)$. First, note that if $t\in [T,2T]\setminus \mathcal{E}(T)$ then by Proposition \ref{Approx} and Lemma \ref{BoundLogZ} we have
$$ \left|\sum_{n\leq x} \frac{F_{\mathbf s}(n)}{n^{it}}\right|\ll (c(\sigma_0)\log T)^{k}, \text{ and } \left|\sum_{m\leq x} F_{\mathbf r}(m)\, m^{it}\right|\ll (c(\sigma_0)\log T)^{\ell},$$
for some positive constant $c(\sigma_0)$. Let $x=T^{(\sigma_0-1/2)/8}$.
Then, it follows from Proposition \ref{Approx} that
\begin{equation}\label{MomentsProduct}
\begin{aligned}
&\frac{1}{T} \int_{[T,2T]\setminus \mathcal{E}(T)}\left(\prod_{j=1}^k\log\zeta(s_j+it)\right)\left(\prod_{j=1}^{\ell}\log\zeta(r_j-it)\right)dt\\
&= \frac{1}{T} \int_{[T,2T]\setminus \mathcal{E}(T)} \left(\sum_{n\leq x} \frac{F_{\mathbf s}(n)}{n^{it}}\right)\left(\sum_{m\leq x} F_{\mathbf r}(m)\,m^{it}\right) dt + O\left(T^{-b(\sigma_0)}(\log T)^{\max(k, \ell)}\right)\\
&= \frac{1}{T} \int_T^{2T} \left(\sum_{n\leq x} \frac{F_{\mathbf s}(n)}{n^{it}}\right)\left(\sum_{m\leq x} F_{\mathbf r}(m)m^{it}\right)dt + O\left(T^{-b(\sigma_0)/2}\right).
\end{aligned}
\end{equation}
Furthermore, we have
\begin{equation} \label{eq:mvt}
\frac{1}{T} \int_T^{2T} \left(\sum_{n\leq x} \frac{F_{\mathbf s}(n)}{n^{it}}\right)\left(\sum_{m\leq x} F_{\mathbf r}(m)\,m^{it}\right)dt= \sum_{m,n\leq x} F_{\mathbf s}(n)F_{\mathbf r}(m)\frac{1}{T} \int_T^{2T}\left(\frac{m}{n}\right)^{it}dt.
\end{equation}
The contribution of the diagonal terms $n=m$ equals $\sum_{n\leq x} F_{\mathbf s}(n)F_{\mathbf r}(n)$. On the other hand, by Lemma \ref{DirichletC} the contribution of the off-diagonal terms $n\neq m$ is
\begin{equation} \label{eq:offdiag}
\ll \frac{1}{T}\sum_{\substack{m,n\leq x \\ m \neq n}} \frac{(2\log n)^k (2\log m)^{\ell}}{(mn)^{\sigma_0}}\frac{1}{|\log(m/n)|}\ll \frac{x^{3-2\sigma_0}(2\log x)^{k+\ell}}{T}\ll T^{-1/2},
\end{equation}
since $|\log(m/n)|\gg 1/x$.
Furthermore, it follows from \eqref{orthogonality} that
\begin{equation} \label{eq:randmvt}
\ex\left[\left(\prod_{j=1}^k\log\zeta(s_j,X)\right)\left(\prod_{j=1}^{\ell}\log\overline{\zeta(r_j, X)}\right)\right]= \sum_{n=1}^{\infty} F_{\mathbf s}(n)F_{\mathbf{r}}(n)= \sum_{n\leq x} F_{\mathbf s}(n)F_{\mathbf{r}}(n)+E_2,
\end{equation}
where
$$E_2\leq \sum_{n>x}\frac{(2\log n)^{k+\ell}}{n^{2\sigma_0}}.$$
Since the function $(\log t)^{\beta}/t^{\alpha}$ is decreasing for $t>\exp(\beta/\alpha)$, choosing $\alpha=(2\sigma_0-1)/2$ we obtain
$$ E_2\leq \frac{(2\log x)^{k+\ell}}{x^{\alpha}}\sum_{n>x}\frac{1}{n^{1+\alpha}}\ll \frac{(2\log x)^{k+\ell}}{x^{2\alpha}}\ll x^{-\alpha}.$$
Combining this with \eqref{eq:mvt}, \eqref{eq:offdiag}, and \eqref{eq:randmvt} completes the proof.
\end{proof}
\section{The characteristic function of joint shifts of $\log \zeta(s)$} \label{sec:characteristic}
\begin{proof}[Proof of Theorem \ref{characteristic}]
Let $\mathcal{E}(T)$ be as in Theorem \ref{MomentsShifts}. Let $N=[\log T/(C(\log\log T))]$ where $C$ is a suitably large constant. Then,
$\Phi_T(\mathbf{u}, \mathbf{v})$ equals
\begin{align}\label{Taylor}
& \nonumber \frac1T \int_{[T, 2T]\setminus \mathcal{E}(T)} \exp\left(i\left(\sum_{j=1}^J (u_j \re \log\zeta(s_j+it)+ v_j \im \log\zeta (s_j+it))\right)\right)dt +O\left(T^{-c_3}\right)\\
& =\sum_{n=0}^{2N-1} \frac{i^n}{n!} \cdot \frac1T \int_{[T, 2T]\setminus \mathcal{E}(T)}\left(\sum_{j=1}^J (u_j \re \log\zeta(s_j+it)+ v_j \im \log\zeta (s_j+it))\right)^ndt + E_3,
\end{align}
where
$$E_3 \ll T^{-c_3}+ \frac{1}{(2N)!}\left(\frac{2c_1(\log T)^{\sigma}}{J}\right)^{2N}\frac1T \int_{[T, 2T]\setminus \mathcal{E}(T)} \left(\sum_{j=1}^J|\log\zeta(s_j+it)|\right)^{2N}dt.$$
Now, by Theorem \ref{MomentsShifts} along with Lemma \ref{lem:momentbd}, we obtain that for all $1\leq j\leq J$
\begin{equation}\label{BoundM}
\frac1T \int_{[T, 2T]\setminus \mathcal{E}(T)} |\log\zeta(s_j+it)|^{2N}dt \ll
\ex\left(|\log\zeta(s_j, X)|^{2N}\right)\leq \left(\frac{c_8(\sigma) N^{1-\sigma}}{(\log N)^{\sigma}}\right)^{2N},
\end{equation}
for some positive constant $c_8=c_8(\sigma).$ Furthermore, by Minkowski's inequality we have
$$ \frac1T \int_{[T, 2T]\setminus \mathcal{E}(T)} \left(\sum_{j=1}^J|\log\zeta(s_j+it)|\right)^{2N}dt \leq \left(c_8 J \frac{N^{1-\sigma}}{(\log N)^{\sigma}}\right)^{2N}.$$
Therefore, we deduce that for some positive constant $c_{9}=c_{9}(\sigma)$, we have
$$ E_3\ll T^{-c_3} + \left(c_{9}\frac{(\log T)^{\sigma}}{(N\log N)^{\sigma}}\right)^{2N}\ll e^{-N}.$$
Next, we handle the main term of \eqref{Taylor}. Let $\tilde{u_j}=(u_j-iv_j)/2$ and $\tilde{v_j}=(u_j+iv_j)/2$, so that $u_j\re L + v_j\im L = \tilde{u_j}L+\tilde{v_j}\overline{L}$. Then by Theorem \ref{MomentsShifts} we obtain
\begin{align*}
&\frac1T \int_{[T, 2T]\setminus \mathcal{E}(T)}\left(\sum_{j=1}^J (u_j \re \log\zeta(s_j+it)+ v_j \im \log\zeta(s_j+it))\right)^ndt\\
&=\frac1T \int_{[T, 2T]\setminus \mathcal{E}(T)}\left(\sum_{j=1}^J (\tilde{u_j} \log\zeta(s_j+it)+ \tilde{v_j} \log\zeta(s_j-it))\right)^ndt\\
&=\sum_{\substack{k_1,\dots, k_{2J}\geq 0\\ k_1+\cdots+k_{2J}=n}}{n\choose k_1, k_2, \dots, k_{2J}}\prod_{j=1}^J\tilde{u_j}^{k_j} \prod_{\ell=1}^J\tilde{v_{\ell}}^{k_{J+\ell}}\\
& \quad \quad \times \frac1T \int_{[T, 2T]\setminus \mathcal{E}(T)} \prod_{j=1}^J(\log\zeta(s_j+it))^{k_j}\prod_{\ell=1}^J (\log\zeta(s_\ell-it))^{k_{J+\ell}}dt\\
&= \sum_{\substack{k_1,\dots, k_{2J}\geq 0\\ k_1+\cdots+k_{2J}=n}}{n\choose k_1, k_2, \dots, k_{2J}}\prod_{j=1}^J\tilde{u_j}^{k_j} \prod_{\ell=1}^J\tilde{v_{\ell}}^{k_{J+\ell}}\\
& \quad \quad \times \ex\left(\prod_{j=1}^J(\log\zeta(s_j, X))^{k_j}\prod_{\ell=1}^J \left(\overline{\log\zeta(s_\ell, X)}\right)^{k_{J+\ell}}\right) +O\Big(T^{-c_5} \big(2c_1(\log T)^{\sigma}\big)^n\Big),\\
&= \ex\left(\left(\sum_{j=1}^J (u_j \re \log\zeta(s_j, X)+ v_j \im \log\zeta(s_j, X))\right)^n\right) +O\Big(T^{-c_5} \big(2c_1(\log T)^{\sigma}\big)^n\Big).
\end{align*}
Inserting this estimate in \eqref{Taylor}, we derive
\begin{align*}
\Phi_T(\mathbf{u}, \mathbf{v})&=\sum_{n=0}^{2N-1} \frac{i^n}{n!}\ex\left(\left(\sum_{j=1}^J (u_j \re \log\zeta(s_j, X)+ v_j \im \log\zeta(s_j, X))\right)^n\right) + O\Big(e^{-N}\Big)\\ &= \Phi_{\text{rand}}(\mathbf{u}, \mathbf{v}) +E_4,
\end{align*}
where
$$ E_4\ll e^{-N} + \frac{1}{(2N)!}\left(\frac{2c_1(\log T)^{\sigma}}{J}\right)^{2N}\ex\left(\left(\sum_{j=1}^J | \log\zeta(s_j, X)|\right)^{2N}\right)\ll e^{-N}
$$
by \eqref{BoundM} and Minkowski's inequality. This completes the proof.
\end{proof}
\section{Discrepancy estimates for the distribution of shifts} \label{sec:distribution}
The deduction of Theorem \ref{discrep} from Theorem \ref{characteristic} uses Beurling-Selberg functions.
For $z\in \mathbb C$ let
\[
H(z) =\bigg( \frac{\sin \pi z}{\pi} \bigg)^2 \bigg( \sum_{n=-\infty}^{\infty} \frac{\tmop{sgn}(n)}{(z-n)^2}+\frac{2}{z}\bigg)
\qquad\mbox{and} \qquad K(z)=\Big(\frac{\sin \pi z}{\pi z}\Big)^2.
\]
Beurling proved that the function $B^+(x)=H(x)+K(x)$
majorizes $\tmop{sgn}(x)$ and that its Fourier transform
is supported in $(-1,1)$. Similarly, the function $B^-(x)=H(x)-K(x)$ minorizes $\tmop{sgn}(x)$, and its Fourier
transform has the same support property (see Vaaler \cite{Vaaler}, Lemma 5).
Let $\Delta>0$ and $a,b$ be real numbers with $a<b$. Take $\mathcal I=[a,b]$
and
define
\[
F_{\mathcal I} (z)=\frac12 \Big(B^-(\Delta(z-a))+B^-(\Delta(b-z))\Big).
\]
The function $F_{\mathcal I}$ has the following remarkable properties.
First, it follows from the inequality $B^-(x) \le \tmop{sgn}(x) \le B^+(x)$ that
\begin{equation} \label{l1 bd}
0 \le \mathbf 1_{\mathcal I}(x)- F_{\mathcal I}(x)\le K(\Delta(x-a))+K(\Delta(b-x)).
\end{equation}
Additionally, one has
\begin{equation} \label{Fourier}
\widehat F_{\mathcal I}(\xi)=
\begin{cases}\widehat{ \mathbf 1}_{\mathcal I}(\xi)+O\Big(\frac{1}{\Delta} \Big) \mbox{ if } |\xi| < \Delta, \\
0 \mbox{ if } |\xi|\ge \Delta.
\end{cases}
\end{equation}
The first estimate above follows from \eqref{l1 bd} and
the second follows from the fact that the Fourier transform
of $B^-$ is supported in $(-1,1)$.
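These properties are easy to verify numerically. The Python sketch below (illustrative only; the series truncation $M$, the grid, the interval $[a,b]$, and the scale $\Delta$ are arbitrary choices) implements $H$, $K$, and $F_{\mathcal I}$, and checks the majorant/minorant property together with the bound \eqref{l1 bd}:
\begin{verbatim}
# Illustrative check of the Beurling-Selberg construction; the truncation
# M, the grid, the interval, and Delta are arbitrary choices.
import numpy as np

M = 3000                              # truncation of the series defining H

def K(x):
    return np.sinc(x) ** 2            # np.sinc(x) = sin(pi x)/(pi x)

def H(x):
    n = np.arange(1, M + 1)[:, None]  # sum_{n != 0} sgn(n)/(x-n)^2
    series = (1.0 / (x[None, :] - n) ** 2
              - 1.0 / (x[None, :] + n) ** 2).sum(axis=0)
    return (np.sin(np.pi * x) / np.pi) ** 2 * (series + 2.0 / x)

x = np.linspace(-6.0, 6.0, 1201) + 1e-9   # offset avoids the pole at x = 0
Bm, Bp = H(x) - K(x), H(x) + K(x)
assert np.all(Bm <= np.sign(x) + 1e-4) and np.all(np.sign(x) <= Bp + 1e-4)

a, b, Delta = -1.0, 2.0, 4.0
F_I = 0.5 * ((H(Delta*(x - a)) - K(Delta*(x - a)))
             + (H(Delta*(b - x)) - K(Delta*(b - x))))
gap = ((x >= a) & (x <= b)).astype(float) - F_I
assert np.all(gap >= -1e-4)                          # F_I minorizes 1_I
assert np.all(gap <= K(Delta*(x - a)) + K(Delta*(b - x)) + 1e-4)
\end{verbatim}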
Before proving Theorem \ref{discrep} we first require the following lemmas.
\begin{lemma} \label{lem:functionbd}
For $x \in \mathbb R$ we have
$
|F_{\mathcal I}(x)| \le 1.
$
\end{lemma}
\begin{proof}
It suffices to prove the lemma for $\Delta=1$. Also, note that we only need to show that $F_{\mathcal I}(x) \ge -1$: the upper bound follows from $B^-\le \tmop{sgn}$, which gives $F_{\mathcal I}(x)\le \frac12\left(\tmop{sgn}(x-a)+\tmop{sgn}(b-x)\right)\le 1$.
From the identity
\[
\sum_{ n=-\infty}^{\infty} \frac{1}{(n-z)^2}=
\left(\frac{\pi}{\sin \pi z}\right)^2
\]
it follows that for $y \ge 0$
\begin{equation}\label{eq:Hid}
H(y)=1-K(y)G(y)
\end{equation}
where
\[
G(y)=2y^2 \sum_{m=0}^{\infty} \frac{1}{(y+m)^2}-2y-1.
\]
In Lemma 5 of \cite{Vaaler}, Vaaler shows for $y \ge 0$ that
\begin{equation} \label{eq:Gbd}
0 \le G(y) \le 1.
\end{equation}
Also, note that for each $m \ge 1$, and $0<y \le 1$ one has $\frac{m}{(y+m)^3} \le \int_{m-1}^m \frac{t}{(y+t)^3} \, dt$ so that for $0<y \le 1$
\begin{equation} \label{eq:Gdec}
G'(y)=4y \sum_{m \ge 1} \frac{m}{(y+m)^3}-2 \le 4y \int_0^{\infty} \frac{t}{(y+t)^3} \, dt-2 = 0.
\end{equation}
First consider the case $a\le x \le b$. By \eqref{eq:Hid} we get that in this range
\[
F_{\mathcal I}(x)=\frac12 \left(2- K(x-a)(G(x-a)+1)-K(b-x)(G(b-x)+1) \right),
\]
which along with \eqref{eq:Gbd} implies $F_{\mathcal I}(x) \ge -1$ for $a \le x \le b$.
Now consider the case $x<a$. Since $H$ is an odd function
\eqref{eq:Hid} and \eqref{eq:Gbd} imply
\[
\begin{split}
F_{\mathcal I}(x)=& \frac12 \left(K(x-a)(G(a-x)-1)-K(b-x)(G(b-x)+1) \right) \\
\ge & \frac12\left( -K(x-a)-2K(x-b)\right),
\end{split}
\]
which is $\ge -1$ if $K(x-b) \le 1/2$. If $K(x-b) \ge 1/2$
we also have $K(x-a)>K(x-b)$ and $0<b-x < 1$.
By this and \eqref{eq:Gdec} we have in this range as well that
\[
F_{\mathcal I}(x) \ge \frac12 \left( K(x-b)(G(a-x)-G(b-x)-2)\right) \ge -1.
\]
Hence, $F_{\mathcal I}(x)\ge -1$ for $x<a$.
The remaining case when $x>b$ follows from a similar argument.
\end{proof}
\begin{lemma} \label{upper bd}
Fix $1/2<\sigma<1$, and let $s$ be a complex number such that $\tmop{Re}(s)=\sigma$ and $|\tmop{Im}(s)| \le T^{\frac14\cdot(\sigma-\frac12)}$. Then there exists a positive constant $c_1(\sigma)$ such that for $|u| \le c_1(\sigma)(\log T)^{\sigma}$ we have
\[
\Phi_T(u,0) \ll \exp\left( -\frac{|u|}{5 \log |u|} \right) \quad
\mbox{ and } \quad \Phi_T(0,u) \ll \exp\left( -\frac{|u|}{5 \log |u|} \right).
\]
\end{lemma}
\begin{proof}
By a straightforward modification of
Lemma 6.3 of \cite{LLR} one has that
\[
\mathbb E \bigg( \exp\Big(i u \tmop{Re} \log \zeta(s, X)\Big) \bigg) \ll \exp\bigg(-\frac{|u|}{5 \log |u|} \bigg),
\]
and
\[
\mathbb E \bigg( \exp\Big(i u \tmop{Im} \log \zeta(s, X)\Big) \bigg) \ll \exp\bigg(-\frac{|u|}{5 \log |u|} \bigg).
\]
Using the first bound and applying Theorem \ref{characteristic} with $J=1$ establishes the first claim. The second claim follows similarly by using the second bound and Theorem \ref{characteristic}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{discrep}]
First, we claim that it suffices to estimate
the discrepancy over $(\mathcal R_1, \ldots, \mathcal R_J)$ such
that for each $j$ we have $\mathcal R_j \subset [-\sqrt{\log T}, \sqrt{\log T}] \times [-\sqrt{\log T}, \sqrt{\log T}]$.
To see this consider $( \widetilde{\mathcal R_1}, \ldots, \widetilde{\mathcal R_J})$ where
$\widetilde{\mathcal R_j}=\mathcal R_j \cap [-\sqrt{\log T}, \sqrt{\log T}] \times [-\sqrt{\log T}, \sqrt{\log T}] $.
It follows that
\begin{equation} \notag
\begin{split}
&\bigg|\mathbb P_T \bigg( \log \zeta(s_j+it) \in \mathcal R_j, \forall j \le J \bigg)
-\mathbb P_T \bigg( \log \zeta(s_1+it) \in \widetilde{\mathcal R_1},\log \zeta(s_j+it) \in \mathcal R_j, 2 \le j \le J \bigg)
\bigg| \\
&\ll \mathbb P_T \bigg( |\log \zeta(s_1+it)| \ge \sqrt{\log T} \bigg) \ll \exp\Big(-\sqrt{\log T}\Big),
\end{split}
\end{equation}
where the last bound follows from Theorem 1.1 and Remark 1 of
\cite{La}.
Repeating this argument gives
\[
\bigg|\mathbb P_T \bigg( \log \zeta(s_j+it) \in \widetilde{\mathcal R_j}, \forall j \le J \bigg)
-\mathbb P_T \bigg( \log \zeta(s_j+it) \in \mathcal R_j, \forall j \le J \bigg)
\bigg| \ll J \exp\Big(-\sqrt{\log T}\Big).
\]
Similarly,
\[
\bigg|\mathbb P \bigg( \log \zeta(s_j,X) \in \widetilde{\mathcal R_j}, \forall j \le J \bigg)
-\mathbb P \bigg( \log \zeta(s_j,X) \in \mathcal R_j, \forall j \le J \bigg)
\bigg| \ll J \exp\Big(-\sqrt{\log T}\Big).
\]
Hence, the error from restricting to $( \widetilde{\mathcal R_1}, \ldots, \widetilde{\mathcal R_J})$ is negligible and establishes
the claim.
Let $\Delta=c_1(\sigma) (\log T)^{\sigma}/J$ and $\mathcal R_j=[a_j,b_j]\times[c_j, d_j]$ for $j=1, \ldots, J$, with
$|b_j-a_j|,|d_j-c_j| \le 2\sqrt{\log T}$.
Also, write $\mathcal I_j=[a_j,b_j]$ and $ \mathcal J_j=[c_j,d_j]$.
By Fourier inversion, \eqref{Fourier}, and Theorem \ref{characteristic} we have that
\begin{equation} \label{long est}
\begin{split}
&\frac1T \int_T^{2T} \prod_{j=1}^J F_{\mathcal I_j} \Big( \tmop{Re} \log \zeta(s_j+it)\Big)
F_{\mathcal J_j}\Big( \tmop{Im} \log \zeta(s_j+it)\Big) \, dt\\
&
=\int_{\mathbb R^{2J}} \bigg(\prod_{j=1}^J \widehat{F}_{\mathcal I_j} (u_j)
\widehat{F}_{\mathcal J_j}( v_j)\bigg) \Phi_T(\mathbf u, \mathbf v) \, d\mathbf u \, d\mathbf v \\
&
= \int\limits_{\substack{|u_j|,|v_j| \le \Delta \\ j=1,2, \ldots, J}} \bigg(\prod_{j=1}^J \widehat{F}_{\mathcal I_j} (u_j)
\widehat{F}_{\mathcal J_j}( v_j)\bigg) \Phi_{\tmop{rand}}(\mathbf u, \mathbf v) \, d\mathbf u \, d\mathbf v +O\left(\left(2\Delta\sqrt{\log T}\right)^{2J} \exp\Big(-
\frac{c_2 \log T}{\log \log T}\Big)\right)\\
&
=\mathbb E \bigg( \prod_{j=1}^J F_{\mathcal I_j} \Big( \tmop{Re} \log \zeta(s_j,X)\Big)
F_{\mathcal J_j}\Big( \tmop{Im} \log \zeta(s_j,X)\Big) \bigg)+O\left( \exp\left(-
\frac{c_2 \log T}{2\log \log T}\right)\right).
\end{split}
\end{equation}
Next note that $\widehat K(\xi)=\max(0,1-|\xi|)$. Applying Fourier inversion, Theorem \ref{characteristic} with $J=1$,
and Lemma \ref{upper bd} we have that
\begin{equation} \notag
\begin{split}
\frac1T \int_T^{2T} K\Big( \Delta \cdot \Big(\tmop{Re} \log \zeta(s+it)-\alpha\Big)\Big) \, dt
=&\frac{1}{\Delta}\int_{-\Delta}^{\Delta}\Big(1-\frac{|\xi|}{\Delta}\Big) e^{-2\pi i \alpha \xi} \Phi_T(\xi,0) \, d\xi
\ll \frac{1}{\Delta},
\end{split}
\end{equation}
where $\alpha$ is an arbitrary real number and $s \in \mathbb C$ satisfies
$\sigma \le \tmop{Re}(s) <1$ and $|\tmop{Im}(s)|< T^{\frac14(\sigma-\frac12)}$.
By this and \eqref{l1 bd} we get that
\begin{equation} \label{K bd}
\frac1T \int_T^{2T} F_{\mathcal I_1}\Big(\tmop{Re} \log \zeta(s_1+it)\Big) \, dt
=\frac1T \int_{T}^{2T} \mathbf 1_{\mathcal I_1}\Big(\tmop{Re} \log \zeta(s_1+it)\Big) dt+O(1/\Delta).
\end{equation}
Lemma \ref{lem:functionbd} implies that $|F_{\mathcal I_j}(x)|, |F_{\mathcal J_j}(x)| \le 1$ for $j=1,\ldots, J$. Hence, by this and \eqref{K bd}
\begin{equation} \notag
\begin{split}
&\frac1T \int_T^{2T} \prod_{j=1}^J F_{\mathcal I_j} \Big( \tmop{Re} \log \zeta(s_j+it)\Big)
F_{\mathcal J_j}\Big( \tmop{Im} \log \zeta(s_j+it)\Big) \, dt \\
&=\frac1T \int_T^{2T} \mathbf 1_{\mathcal I_1} \Big( \tmop{Re} \log \zeta(s_1+it)\Big)
F_{\mathcal J_1}\Big( \tmop{Im} \log \zeta(s_1+it)\Big) \\
&\qquad \qquad \qquad \times \prod_{j=2}^J F_{\mathcal I_j} \Big( \tmop{Re} \log \zeta(s_j+it)\Big)
F_{\mathcal J_j}\Big( \tmop{Im} \log \zeta(s_j+it)\Big) \, dt+O(1/\Delta).
\end{split}
\end{equation}
Iterating this argument and using an analog of \eqref{K bd} for $\tmop{Im } \log \zeta(s+it)$, which
is proved in the same way, gives
\begin{equation} \label{one}
\begin{split}
&\frac1T \int_T^{2T} \prod_{j=1}^J F_{\mathcal I_j} \Big( \tmop{Re} \log \zeta(s_j+it)\Big)
F_{\mathcal J_j}\Big( \tmop{Im} \log \zeta(s_j+it)\Big) \, dt \\
&\qquad \qquad \qquad
=\mathbb P_T\bigg(\log \zeta(s_j+it) \in \mathcal R_j, \forall j \le J\bigg) +O\left(\frac{J}{\Delta}\right).
\end{split}
\end{equation}
Similarly, it can be shown that
\begin{equation} \label{two}
\begin{aligned}
\mathbb E \bigg( \prod_{j=1}^J F_{\mathcal I_j} \Big( \tmop{Re} \log \zeta(s_j,X)\Big)
F_{\mathcal J_j}\Big( \tmop{Im} \log \zeta(s_j,X)\Big) \bigg)
&=\mathbb P\bigg(\log \zeta(s_j,X) \in \mathcal R_j, \forall j \le J \bigg) \\
&\ \ \ +O\left(\frac{J}{\Delta}\right).
\end{aligned}
\end{equation}
Using \eqref{one} and \eqref{two} in \eqref{long est} completes the proof.
\end{proof}
\section{Introduction }
\label{intro}
Majorana zero modes (MZMs) in topological superconductors~\cite{m1,m2,m4,m5,m6} have generated significant interest because of their potential
utility in topological quantum computation~\cite{nayakrmp}. For this
purpose, proximity-induced topological superconductors based on
semiconductors~\cite{fu-kane,*sau,*lutchyn,*oreg} have
been considered particularly convenient, because of the large tunability
resulting from the conventional nature of the constituents.
However, quasi-one-dimensional (quasi-1D) topological superconductors with the
potential for harboring multiple MZMs, while not ideal for quantum
computation applications, are interesting systems in their own right.
According to the classification table for topological systems~\cite{kitaev-classi,*Schnyder}, one-dimensional (1D) superconductors
can support Kramers pairs of Majoranas or multiple Majoranas,
where the systems are time-reversal symmetric (class DIII) \cite{kane-mele}
or chiral symmetric (class BDI) \cite{fidkowski,ipsita-sudip,Mandal2015,*Mandal2016a,*Mandal2016b}, respectively.
While proximity effect in wide semiconductor nanowires can lead to
multiple Majoranas in class BDI for the appropriate spin-orbit coupling~\cite{Tewari_2011}, class DIII Majorana Kramers pairs are found to require
interactions to generate from spin-singlet proximity effect~\cite{jelena,*ips-kramers}.
Multiple MZMs have turned out to be particularly interesting because of
novel phenomena that can result from their interplay with interactions.
The most direct addition of interactions in this case was shown to modify
the $Z$ invariant to $Z_8$~\cite{fidkowski}. Recently, more interesting physics has been shown to arise from the interplay of such multiple MZMs with random interactions in the form of the Sachdev-Ye-Kitaev (SYK) model~\cite{sy,*syk-kitaev,*marcel}. From a more pedestrian standpoint, details of experimental manifestations, such as quantization
of conductance or degeneracy of Josephson spectra, are expected to be
more intricate for systems with multiple MZMs as compared to the ones
with single MZMs (that have dominated experimental attention so far).
Specifically, it has been shown~\cite{beenakker} that the conductance into a
wire in the BDI symmetry class takes values that are integer multiples of the
quantum of conductance. Furthermore, perturbations that reduce the
symmetry to class D, also reduce the conductance to the characteristic single
quantum of conductance or vanishing conductance, associated with the
$Z_2$ topological invariant.
\begin{figure}
\includegraphics[scale=0.1]{general_system}
\caption{\label{gs}
Schematic representation of the generic system treated in Sec.~\ref{tomo}. The strength of the tunnelling barrier is fixed at $\tau=5\,t_x$, where $t_x$ is the hopping along the chain direction. The superconducting system, as well as the leads, are quasi-1D in nature ($t_y \ll t_x$).}
\end{figure}
Quasi-1D superconductors, that may be viewed as
weakly coupled 1D chains \cite{Kaladzhyan_2017} (as shown in Fig.~\ref{gs}), have been suggested in several potential spin-triplet superconductors, such as lithium
purple bronze~\cite{Mercure_12,Lebed_13}, Bechgaard salts~\cite{Lebed_00,Lee_2001,Lee_2003,Shinagawa_2007}, and even possibly SrRuO$_4$
~\cite{raghu}. Given the evidence for spin-triplet pairing in these systems,
in the form of high upper critical fields, these materials have been conjectured to
host MZMs at the ends~\cite{Yakovenko}. Their quasi-1D
structure, composed of many chains coupled by weak transverse
hopping, suggests the possibility of one MZM from each of the chains.
Recent work shows~\cite{Dumitrescu13} that in the cases of time-reversal
(TR) invariant superconductivity in the form of equal spin pairing (ESP),
these MZMs would not split, leading to the possibility of multiple MZMs
at the ends of these materials. Furthermore, spin-triplet superconductors can support
low-energy end-modes even in the absence of external perturbations such as
magnetic fields. This is different from topological superconductivity in
semiconductor systems~\cite{fu-kane,*sau,*lutchyn,*oreg}, which despite being
topologically equivalent to $p-$wave superconductors in certain limits,
cannot realize time-reversal invariant phases without magnetic fields.
p-wave systems thus offer the flexibility to realize topological superconductivity hosting MZMs with a high degree
of symmetry. As shown earlier~\cite{Dumitrescu13}, the stability of the multiple
MZMs depends on the variety of symmetries of the system, and therefore,
in principle, they can be split by a variety of perturbations.
In this paper, we study the effect of various symmetry-breaking
perturbations on experimentally measurable signatures of the quasi-1D
spin-triplet superconductors, such as transport and Josephson response~\cite{yukio1,yukio2,jorge1,jorge2}.
In the first part of the paper (Sec.~\ref{tomo}), we numerically compute the zero and non-zero temperature conductances of quasi-1D nanowires, that host multiple MZMs
in the configuration depicted in Fig.~\ref{gs}. For the purposes of reference, we start by reviewing the results~\cite{beenakker} on the conductance of the quasi-1D s-wave Rashba nanowire, with parameters chosen such that the system is in the BDI symmetry class. In this case, we study how the conductance into the wire, as a function of density, tracks the bandstructure and the topological invariant -- it is shown to decouple into single nanowires with modified chemical potentials that belong to the BDI class. Following
this (Sec.~\ref{pwave}), we shift to the main focus, i.e., spin-triplet superconductors.
We extend the class of perturbations previously considered~\cite{Dumitrescu13},
and start with a model with mirror, chiral, and TR symmetries. We systematically break these symmetries by changing various spin-orbit coupling
terms that may be controlled by gate voltages and magnetic fields. We study how
the conductance tracks the topological invariant and spectrum in all these cases.
In the second part of the paper (Sec.~\ref{cavity}), we study the effect of the symmetry-breaking perturbations on the
Andreev spectra of Josephson junctions~\cite{Dumitrescu13}. Recent
measurements have demonstrated the ability to measure aspects of the Andreev
state spectrum in a Josephson junction by two-tone spectroscopy~\cite{urbina,Devoret}. Similar to the case of conductance, we show that the spectrum exhibits
multiple zero-energy Andreev states in the highly symmetric case with mirror and
chiral symmetries. We then study how these states in the junction are split by applying gate voltages that change the symmetry of the spin-orbit coupling, as well as by applying magnetic fields.
\section{Differential conductance with normal leads}
\label{tomo}
In this section, we analyze the behaviour of the differential conductance, which can be measured using normal leads connected to the first lattice sites of the system, as shown in Fig.~\ref{gs}. Let $t_{x}$ be the hopping strength between the neighboring sites in the same chain, and $t_{y}$ be the hopping strength between the neighboring sites in the neighboring chains. We consider the limit $t_y\ll t_x$ in order to model a quasi-1D chain.
The leads are modeled as having only hopping (of strength $t_x$) and chemical potential ($\mu_j$) terms corresponding to the single chains, and they are represented by the Hamiltonian
\begin{align}
H_{leads} & =
-t_{x} \sum_{i=1}^{N_\ell-1} \sum_{j=1}^{N_y}
{\Phi}^{\dagger}_{i+1, j} \,\tau_{z}\, {\Phi}_{i,j}
\nonumber \\ &
\hspace{1 cm}
- \left( \tau \,t_{x} \sum_{j=1}^{N_y}
{\Phi}^{\dagger}_{1, j} \,\tau_{z}\, {\psi}_{1,j} + {\rm h.c.} \right) .
\label{eqlead}
\end{align}
Here, $\tau $ is the strength of the tunnelling barrier in units of $t_x$, and is set at $5$ for the systems we analyze. Furthermore, ${\Phi}^{\dagger}_{i,j}= (d^{\dagger}_{i,j,\uparrow},d^{\dagger}_{i,j,\downarrow},d_{i,j,\downarrow},- d_{i,j,\uparrow})$ and
${\psi}^{\dagger}_{\ell,j}= (c^{\dagger}_{\ell,j,\uparrow},c^{\dagger}_{\ell,j,\downarrow},c_{\ell,j,\downarrow},- c_{\ell,j,\uparrow})$ are the spinors belonging to the lead and chain sites, respectively.
The site-indices $(\ell,j)$ label the fermions in the $(x,y)$ strip, such that $\ell \in [1,N_x]$ and $j \in [1,N_y]$.
Lastly, ${\sigma}_\alpha$ and ${\tau}_\alpha$ ($\alpha \in \lbrace x,y,z \rbrace$) are the Pauli matrices which act on the spin and particle-hole spaces, respectively. In our numerics, we have taken the number of sites in each chain (lead)
to be $N_x \, (N_\ell) = 100$, while the number of chains is set to $N_y =3$, corresponding to a three-channel lead. Energy eigenvalues, voltages, and all the parameters in the Hamiltonian are expressed in units of $t_x$. The conductance is calculated in units of $\frac{e^2}{h}$.
The zero temperature ($T=0$) conductance $G_0(V)$ has been computed using the KWANT package \cite{Groth_2014}, which uses the scattering matrix formalism. These results are extended to the non-zero temperature ($T>0$) conductance using the convolution:
\begin{equation}
G_T (V) = \int dE\, \left(-\frac{df_{\epsilon}(T)}{d\epsilon}\right) \bigg \vert_{\epsilon = E-V} G_0(E)\,,
\label{GT}
\end{equation}
where $f_{\epsilon} (T)=\frac{1}{e^{\epsilon/kT}+1}$ is the Fermi function at temperature $T$ and energy $\epsilon$, and its derivative with respect to energy is evaluated at $\epsilon = E-V$.
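As an illustration of this convolution, the following Python sketch applies the thermal kernel to a synthetic zero-temperature conductance curve (a Lorentzian stand-in for the actual KWANT output; all parameter values are illustrative):
\begin{verbatim}
# Sketch of the finite-temperature convolution in Eq. (GT); G0 below is a
# synthetic Lorentzian stand-in for the KWANT output, values illustrative.
import numpy as np

def thermal_conductance(E, G0, V, kT):
    """G_T(V) on a uniform energy grid E, using -df/de evaluated at E-V."""
    dE = E[1] - E[0]
    arg = np.clip((E - V) / (2.0 * kT), -300.0, 300.0)
    kernel = 1.0 / (4.0 * kT * np.cosh(arg) ** 2)   # -df/de
    return np.sum(kernel * G0) * dE

E = np.linspace(-0.1, 0.1, 4001)          # energies in units of t_x
G0 = 2.0 / (1.0 + (E / 0.002) ** 2)       # zero-bias peak, units of e^2/h
for kT in (1e-4, 1e-3, 4.3e-3):           # temperatures in units of t_x
    GT = np.array([thermal_conductance(E, G0, V, kT) for V in E])
    print(kT, GT[len(E) // 2])            # zero-bias value drops as T grows
\end{verbatim}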
\subsection{Rashba nanowire}
\label{swave}
\begin{figure*}[]
{\includegraphics[width = 0.5 \textwidth]{F1a}} \qquad
{\includegraphics[width = 0.35\textwidth,height = 0.35\textwidth]{F1b}}
{\includegraphics[width = 0.35\textwidth,height = 0.35\textwidth]{F1c}} \qquad
{\includegraphics[width = 0.35\textwidth,height = 0.35\textwidth]{F1d}}
\caption{\label{1dnano}
We show the results for the quasi-1D Rashba nanowire described by Eq.~\ref{nanoham}, with $\Delta_s=0.4$, $V_x=0.8$, $\alpha=0.25$, and $t_y=0.1$. The first panel (including (a), (b), and (c)) shows the correspondence between the spectrum, the invariant, and the zero-bias differential conductance, all plotted as functions of the chemical potential $\mu$. The remaining three panels (i.e., (d), (e), and (f)) show the differential conductance as a function of voltage, for $\mu=-0.1,\,0.725,$ and $0.8$ respectively, at different temperature values (as shown in the plot-legends).
All the temperature values are in units of $t_x$.}
\end{figure*}
We consider a multi-channel 1D Rashba nanowire aligned along the $x$-axis, brought in contact with an s-wave superconductor, in the presence of an external magnetic field of strength $V_x$ applied in the $x$-direction. This can be modeled as an array of 1D chains coupled by a weak hopping amplitude, with the Hamiltonian written below:
\begin{align}
\label{nanoham}
H = & \sum_{\ell=1}^{N_x}\sum_{j=1}^{N_y} {\psi}^{\dagger}_{\ell,j}
\left [
\left \lbrace -\mu+2 \left (t_x+t_y \right ) \right \rbrace \tau_{z}
+\Delta_s \tau_{x}+V_{x} \sigma_{x} \right ] {\psi}_{\ell,j}
\nonumber \\
& - \sum_{\ell=1}^{N_x-1} \sum_{j=1}^{N_y}
\left \{ {\psi}^{\dagger}_{\ell+1, j}
\left (t_{x}+ \mathrm{i}\,\alpha\, \sigma_{y}\right )\tau_{z}\, {\psi}_{\ell,j}+{\rm h.c.} \right\}
\nonumber \\ &
- \sum_{\ell=1}^{N_x} \sum_{j=1}^{N_y-1}
\left ( {\psi}^{\dagger}_{\ell, j+1} \,t_{y}\,\tau_{z}\, {\psi}_{\ell,j}
+{\rm h.c.} \right) .
\end{align}
Here, $\mu$ is the chemical potential, $\Delta _s$ is the magnitude of the s-wave superconducting gap, and $\alpha $ is the spin-orbit coupling.
The system has a chiral symmetry operator $\mathcal{S} = \sigma_y\, \tau_y$, which can be used to off-block diagonalize the Hamiltonian~\cite{tewsau} to the form $\begin{pmatrix} 0 & A(k) \\ A^{T}(-k) & 0\end{pmatrix}$. The topological invariant is calculated as~\cite{tewsau}:
\begin{equation} \label{e1}
\mathcal{Z} = \frac{1}{2 \,\pi \, \mathrm{i}}\,\int_{-\pi}^{\pi} dk\, \frac{d}{dk}\log z(k) \,,\quad
z(k)= \frac{\det \left (A(k) \right)}{|\det \left (A(k) \right )|}\,.
\end{equation}
More details of this calculation are presented in Appendix~\ref{app}.
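Numerically, the invariant in Eq.~\ref{e1} can be evaluated by accumulating the phase increments of $\det A(k)$ across the Brillouin zone. The Python sketch below is illustrative; the callable \texttt{A} used in the example is a single Kitaev-type chain standing in for the off-diagonal block obtained from the chiral rotation:
\begin{verbatim}
# Sketch of the winding-number invariant of Eq. (e1); A is any callable
# returning the off-diagonal block A(k) as a square complex matrix.
import numpy as np

def chiral_invariant(A, nk=2001):
    ks = np.linspace(-np.pi, np.pi, nk)
    phases = np.angle([np.linalg.det(A(k)) for k in ks])
    dphi = np.diff(phases)
    dphi = (dphi + np.pi) % (2.0*np.pi) - np.pi  # increments in (-pi, pi]
    return int(round(dphi.sum() / (2.0 * np.pi)))

# Illustration: a single Kitaev-type chain with a 1x1 block,
# A(k) = -2 t cos k - mu + i Delta sin k, with t = 1, mu = 1, Delta = 0.5.
A = lambda k: np.array([[-2.0*np.cos(k) - 1.0 + 0.5j*np.sin(k)]])
print(chiral_invariant(A))   # +-1 in the topological phase |mu| < 2t,
                             # with the sign fixed by conventions
\end{verbatim}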
The topological behaviour of the quasi-1D Rashba wire described by
Eq.~\ref{nanoham} can be understood from the evolution of the spectrum of
the nanowire with a Zeeman field, as shown in Fig.~\ref{1dnano}(a).
The gap-closures seen in this spectrum indicate the presence of a sequence of six
topological phase transitions. In addition, for chemical potential in the range $0.75\lesssim\mu\lesssim1.5$, one can see Andreev bound states \cite{DEGENNES,PhysRevB.86.100503,PhysRevB.86.180503,PhysRevB.97.165302,PhysRevB.96.075161,PhysRevB.98.155314,Vuik_2019} as states that have ``peeled off'' below the continuum of states. These states are similar to those obtained in single channel nanowires in the absence of any lead quantum dot, and have an energy that approaches zero at the phase transitions~\cite{PhysRevB.86.100503,PhysRevB.86.180503,PhysRevB.97.165302,PhysRevB.96.075161,PhysRevB.98.155314,Vuik_2019}.
The topological phase transitions seen in Fig.~\ref{1dnano}(a) are not accompanied by a change in symmetry. Rather, they correspond to changes in the topological invariant that is calculated using Eq.~\ref{e1}, and plotted in Fig.~\ref{1dnano}(b). We see that the topological invariant changes between consecutive integer values. As the chemical potential crosses each phase transition, we observe the corresponding gap closures in (a), as expected for topological phase transitions. The range of integer values (i.e., from zero to three) for the topological invariant in this system can be understood from the fact that the normal state (i.e., $\Delta_s=0$) of the isolated system can be decomposed into a sequence of three sub-bands with different wave-function profiles in the $y$ direction. Adding superconductivity does not couple these bands, and hence the Hamiltonian in Eq.~\ref{nanoham} describes a stack of decoupled topological nanowires, with their normal state bands shifted relative to each other. The applied Zeeman field splits the spin components of each of these sub-bands in a way such that they are topological in a range of chemical potentials, where one of these spin-split bands is occupied. Because the Zeeman splitting of the electrons for $V_x=2$
is larger than this separation between the various sub-bands,
changing chemical potential can sequentially drive all of them into a topological phase,
before the lowest sub-band gets both its spins occupied. This leads to a situation where
the number of topological bands can increase to three before decreasing back to zero
\cite{PhysRevB.84.144522}. While the results for the topological invariant in this sub-section can be understood from the sub-band decomposition of the Hamiltonian in Eq.~\ref{nanoham}, in later sub-sections we will find that the presence of chiral symmetry protects the topological invariant from small perturbations that couple the sub-bands.
While the topological invariant, plotted in Fig.~\ref{1dnano}(b), is not directly measurable in experiments, one can see the evidence for this invariant in the zero-bias conductance, plotted in Fig.~\ref{1dnano}(c).
We observe that the zero temperature zero-bias conductance tracks the integer topological invariant in (b) quite closely. The quantization of the conductance here represents the topological invariant, despite the fact that the tunneling Hamiltonian from the lead (Eq.~\ref{eqlead}) couples the different channels, which we used in the previous paragraph to understand the integer values. Thus, measuring the zero-bias conductance at low-enough temperatures can provide insights into the topological phase diagram of such a wire.
Furthermore, the ABSs do not appear to affect the value of the zero-bias conductance peak (ZBCP) for the parameters of our calculation. Fig.~\ref{1dnano}(c) tracks the change of the conductance with rising temperature. We find that the conductance values are lowered as the temperature is increased. However, even at $T =4.3 \times 10^{-3}\,t_x$, the phase transition points can be identified from the conductance plots, though the conductance is significantly reduced from the correct quantized value.
The thermal-suppression of the zero-bias conductance, as seen in Fig.~\ref{1dnano}(c), can be understood by considering the conductance as a function of voltage for different temperatures. One sees from Fig.~\ref{1dnano}(d) (plotted for $\mu=0.1$) that the temperature-suppression arises from a broadening of the zero-bias peak with temperature. The conductance as a function of voltage at two other values of $\mu$ (viz. $\mu=0.725$ and $0.8$), plotted in Figs.~\ref{1dnano}(e) and \ref{1dnano}(f), shows a similar thermal-suppression.
Interestingly, this thermal-suppression effect appears to be weaker in the case of the smaller conductance peaks, that are associated with fewer MZMs. This is accompanied by narrower zero-bias peak widths for the case of larger number of MZMs. These observations suggest that the extra MZMs, that occur in the case of larger number of modes, are coupled to the lead with a weaker tunneling amplitude.
This behaviour is found in all the cases examined in this paper. The $T=0$ plot in Fig.~\ref{1dnano}(e) shows sharp peaks away from the zero voltage, which are associated with the bulk states that are quantized by finite size effects. The width of these resonances are suppressed because of the weak tunneling
of these states across the tunnel barrier. These narrow peaks associated with the
sub-gap states are washed away at higher temperatures.
Furthermore, Fig.~\ref{1dnano}(f) shows additional broad peaks, away from zero-energy but below the superconducting gap. These peaks are associated with finite energy ABSs, the evidence of which is seen in the spectrum in Fig.~\ref{1dnano}(a).
The ABSs that result from splitting of the MZMs, are localized near the end of the wire, and therefore couple strongly to the leads resulting in larger broadening compared to the sub-gap states. As expected, these extra states do not change the zero-bias conductance, which is controlled by the topological invariant.
\subsection{p-wave superconductors}
\label{pwave}
We will now consider the main focus of this work, i.e., TR-invariant topological superconductors~\cite{Dumitrescu13} that can be realized by spin-triplet pairing, exhibiting equal spin pairing (ESP) $p$-wave superconductivity. These properties are conjectured to be present in the quasi-1D transition metal oxide Li$_{0.9}$Mo$_6$O$_{17}$, and some organic superconductors~\cite{limo1,triplet1,triplet2,Lebed_00,Mercure_12,Lebed_13,Lee_2001,
Lee_2003,Shinagawa_2007}. The hopping integrals along the crystallographic directions of these materials vary as $t_x \gg t_y \gg t_z$, making them quasi-1D conductors.
The triplet ($S=1$) superconductivity of the p-wave wire can be represented by a matrix pair
potential $\Delta_{\alpha\beta}(k)=\Delta\left[\textbf{d}(k)\cdot \bm{\sigma} \right ] _{\alpha\beta}$, where
$k=k_x$ is the 1D crystal-momentum.
The triplet component, characterized by the vector $\textbf{d}(k)$, is odd in momentum (i.e., $\textbf{d}(k)=-\textbf{d}(-k)$), while $\Delta$ represents its magnitude.
Here, we choose $\mathbf d(k) =d( k)\left(0,0,1\right )$, i.e., along
the $z$-direction in the spin space. The superconducting term in real space is then of the form: $ \mathrm{i}\,\Delta \left ( c_{ \ell+1,\uparrow}^{\dagger}\, c_{ \ell,\uparrow}^{\dagger}\,+\,c_{\ell+1,\downarrow}^{\dagger}\,
c_{ \ell,\downarrow}^{\dagger} \right )+ {\rm h.c.}$
This choice of the $\bm d$-vector represents a TR-invariant superconductor containing the ESP spin-triplet $p$-wave state.
The bulk Hamiltonian for a p-wave superconducting chain, with the order parameter described
in the previous paragraph, can be written in the Nambu basis (defined with the spinor $\Psi_{k}=(c_{k,\uparrow},c_{k,\downarrow},c_{-k,\downarrow}^{\dagger},-c_{-k,\uparrow}^{\dagger})^{T}$) as:
\begin{align}
\label{eq:H1DK}
& {\cal{H}}^{1D}_{k}(\mu,\Delta, \mathbf V_Z,\alpha_R) \nonumber \\
& = \left [\, \epsilon(k)-\mu+2\,t_x\, \right ]\sigma_{0}\tau_z + \Delta\, \sin k \,\sigma_z\, \tau_{x}
+ {\cal{H}}^{Z}\,+ {\cal{H}}^{SO}\,.
\end{align}
Here, $\epsilon(k)=-2\,t_x \cos k $ is the single-particle kinetic energy,
and $ \Delta\,\sin k$ is the $p$-wave superconducting order parameter.
In addition to p-wave superconductivity, the above Hamiltonian allows us to consider the
effect of an electric-field-induced inversion-symmetry breaking spin-orbit interaction (SOI) term aligned in an arbitrary direction ${\mathbf{a}}$ in the spin space. This is written as
\begin{align}{\cal{H}}^{SO}=\alpha_R \,\sin k
\left ({\mathbf{a}} \cdot \bm{\sigma}\right ) \tau_z\,.
\end{align}
The Hamiltonian also allows us to consider the breaking of TR symmetry,
through the Zeeman effect of an applied magnetic field, which is
captured by the term
\begin{align}
{\cal{H}}^{Z}= \left( \bm{V} \cdot \bm{\sigma} \right ) \tau_0 \,.
\end{align}
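For concreteness, the $4\times 4$ Bloch Hamiltonian of Eq.~\ref{eq:H1DK} can be assembled as in the following Python sketch (the parameter values in the usage line are illustrative):
\begin{verbatim}
# Sketch of the 4x4 Bloch Hamiltonian of Eq. (eq:H1DK); basis sigma x tau.
import numpy as np

s0 = np.eye(2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
sz = np.diag([1.0, -1.0])
t0, tx_, tz = s0, sx, sz          # Pauli matrices in particle-hole space

def H1D(k, mu, Delta, V, alphaR, a=(0.0, 0.0, 1.0), t_x=1.0):
    eps = -2.0 * t_x * np.cos(k)
    aS = a[0]*sx + a[1]*sy + a[2]*sz
    Vs = V[0]*sx + V[1]*sy + V[2]*sz
    return ((eps - mu + 2.0*t_x) * np.kron(s0, tz)   # kinetic term
            + Delta*np.sin(k) * np.kron(sz, tx_)     # ESP p-wave pairing
            + np.kron(Vs, t0)                        # Zeeman term H^Z
            + alphaR*np.sin(k) * np.kron(aS, tz))    # spin-orbit term H^SO

# e.g. the BdG bands at one momentum:
bands = np.linalg.eigvalsh(H1D(0.3, mu=0.5, Delta=0.5,
                               V=(0.0, 0.0, 0.0), alphaR=0.25))
\end{verbatim}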
The single-chain p-wave Hamiltonian described above can be generalized
to model a quasi-1D system that is more relevant to the experimental materials (such as Li$_{0.9}$Mo$_6$O$_{17}$~\cite{Lebed_00,Mercure_12,Lebed_13,Lee_2001,
Lee_2003,Shinagawa_2007}), by coupling
multiple copies of Eq.~\ref{eq:H1DK} into a multi-channel Hamiltonian that is written as:
\begin{align}
H^{Q1D}_{k;jj'} = & \,{\cal{H}}^{1\text{D}}_{k,j} \delta_{j, j'}+{\cal{H}}^{y}_{j,j'} \,,
\nonumber \\
{\cal{H}}^{1\text{D}}_{k,j}
= & \, {\cal{H}}^{1D}_{k}
\left (\mu_j,\Delta_j,\mathbf V^j,\alpha_{R}^j \right ).
\label{eq:HQ1D}
\end{align}
The different chains in this quasi-1D bundle are coupled together
with an amplitude $t_y$, given by the y-directional Hamiltonian
\begin{align}
{\cal{H}}^{y}_{j,j'}
= &- t_y \, \tau_z \left (\delta_{j,j'+1}+\delta_{j,j'-1} \right )
\nonumber \\ & \quad
+ \mathrm{i}\,\alpha_R'\, \sigma_y \,
\tau_z \left (\delta_{j,j'+1}-\delta_{j,j'-1} \right ),
\end{align}
where $\alpha_R'$ represents the magnitude of the inter-chain Rashba SOI.
As we saw in the last subsection, coupling identical chains leads to an artificial decoupling
of sub-bands, which can lead to non-generic results. For this reason, we have introduced
a $j$-dependence of the single-chain parameters $\mathcal{O}_j=\mu_j,\Delta_j,\mathbf V^j,\alpha_{R}^j$, which is assumed to be of the form
\begin{align}
&\mathcal{O}_j=\bar{\mathcal{O}}\left (1-\tilde{j}\, \gamma \right ) ,
\end{align}
where $\bar{\mathcal{O}}$ is the average value of the parameters, $\gamma=0.1$, and $\tilde{j} = j-2$.
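Reusing the \texttt{H1D} sketch above, the quasi-1D Bloch Hamiltonian of Eq.~\ref{eq:HQ1D} is then a block-tridiagonal matrix with graded chain parameters (again a sketch; the default values are illustrative):
\begin{verbatim}
# Sketch assembling H^{Q1D}_k of Eq. (eq:HQ1D) from the H1D sketch above,
# with graded chain parameters O_j = Obar (1 - (j-2) gamma).
def HQ1D(k, Ny=3, t_y=0.1, alphaRp=0.0, gamma=0.1,
         mu=0.5, Delta=0.5, V=(0.0, 0.0, 0.0), alphaR=0.25):
    H = np.zeros((4*Ny, 4*Ny), dtype=complex)
    # inter-chain block H^y_{j,j+1} = -t_y tau_z - i alphaR' sigma_y tau_z
    hop = -t_y * np.kron(s0, tz) - 1.0j * alphaRp * np.kron(sy, tz)
    for j in range(Ny):
        g = 1.0 - (j - 1) * gamma      # j - 1 here equals j~ = j - 2
        H[4*j:4*j+4, 4*j:4*j+4] = H1D(k, g*mu, g*Delta,
                                      tuple(g*v for v in V), g*alphaR)
        if j + 1 < Ny:
            H[4*j:4*j+4, 4*(j+1):4*(j+1)+4] = hop
            H[4*(j+1):4*(j+1)+4, 4*j:4*j+4] = hop.conj().T
    return H
\end{verbatim}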
Combining all the ingredients discussed in this subsection so far, the total Hamiltonian in Eq.~\ref{eq:HQ1D} can be explicitly written out in the position space as:
\begin{widetext}
\begin{equation}
\label{nanoham2}
\begin{split}
H =
&\sum_{\ell=1}^{N_x}
\sum_{j=1}^{N_y}
{\psi}^{\dagger}_{\ell,j}
\left [ \left \lbrace 2(t_x+t_y) -\mu_{j} \right \rbrace \tau_{z}
+ \mathbf{V}^j\cdot \bm{\sigma} \right ] {\psi}_{\ell,j}
- \sum_{\ell=1}^{N_x-1} \sum_{j=1}^{N_y}
\left [ {\psi}^{\dagger}_{\ell+1, j}
\left \lbrace t_{x}\, \tau_{z}\,
+ \frac{ \mathrm{i}\, \Delta_j}{2}\, \sigma_z\, \tau_x +
\frac{ \mathrm{i}\, \alpha_{R}^{j}}{2}
\left (\mathbf{a}\cdot \bm{\sigma} \right ) \tau_z \right \rbrace
{\psi}_{\ell,j}+{\rm h.c.} \right ]
\nonumber \\ &- \sum_{\ell=1}^{N_x} \sum_{j=1}^{N_y-1}
\left \{ {\psi}^{\dagger}_{\ell, j+1}
\left ( \,t_{y}\,\tau_{z}\, -\mathrm{i}\,\alpha_R'\,\sigma_y\,\tau_z
\right ){\psi}_{\ell,j} +{\rm h.c.} \right \} .
\end{split}
\end{equation}
\end{widetext}
The above Hamiltonian, in addition to obeying the particle-hole symmetry $\mathcal{P}=\sigma_y\tau_y K$ (combined with $k\rightarrow -k$) that applies to any superconductor, obeys
a TR symmetry for $\bm{V}=0$. The corresponding TR operator is $\mathcal{T}= \mathrm{i}\,\sigma_y\, K$ (again combined with $k\rightarrow -k$), where $K$ denotes complex conjugation. In addition, depending on the presence or absence of spin-conservation (the latter caused by SOI), or inversion symmetry, the system can possess various
mirror or chiral symmetries. This makes the p-wave system particularly interesting because
it allows, in principle, turning the various symmetries on or off by applying electric and magnetic fields, while remaining in the topological phase.
We now study the signatures of the symmetry breaking phenomena in this topological p-wave
superconductor in the subsubsections below. Specifically, we will compute the results for the spectrum, topological invariant, and conductance, similar to what we reviewed in the more familiar and simpler case of the Rashba nanowire in the previous subsection (illustrated in Fig.~\ref{1dnano}).
For our numerics, we will choose the superconducting pairing amplitude to be $\Delta=0.5$ (in units where $t_x=1$), in addition to other parameters specified in the relevant subsubsection.
\subsubsection{Time-reversal and chiral symmetric case: $\mathbf V =\alpha_R'=0$}
The p-wave system of Eq.~\ref{eq:HQ1D}, with the restriction $\bm{V} =\alpha_R' = 0 $, is in the BDI symmetry class, similar to the Rashba nanowire studied in Sec.~\ref{swave}. It has a chiral symmetry operator $ \mathcal{S} = \sigma_z\, \tau_y$, which, however, differs from that of the Rashba nanowire.
The spectrum of the model as a function of $\mu$ is shown in Fig.~\ref{F2}(a). This illustrates a sequence of bulk-gap closings, together with a set of zero-energy states,
similar to the Rashba nanowire case of Sec.~\ref{swave}. However, in this case, we do
not see significant sub-gap ABSs associated with the ends. The evolution
of the spectrum can again be understood in terms of changing of the filling of sub-bands.
Unlike the Rashba nanowire case, both spin components of a sub-band with p-wave
filling are topological, with the same sign of the topological invariant. This can be understood from the chiral topological invariant, which is calculated analogous to the Rashba nanowire case, and is plotted in Fig.~\ref{F2}(b). From this plot, we see that the topological invariant jumps from zero to two, as $\mu$ increases from $\mu\sim 0$. This regime corresponds to both spin components of the lowest sub-band starting to fill. The fact that the topological invariant increases from zero to two indicates that both spin components of the sub-band contribute the same value to the topological invariant, which is different from the case of the Rashba nanowire.
The difference can be understood in the simpler case with $\alpha_R=0$, where
the Hamiltonian and the chiral symmetry $\mathcal{S}$ commute with the mirror symmetry operator $M = \mathrm{i} \left (\hat{\mathbf d}
\cdot \bm{\sigma} \right )\tau_0=\mathrm{i}\,\sigma_z\tau_0$. $M$ can thus be used to define a mirror invariant as well~\cite{tudor15}, so that a chiral topological invariant can be computed for each mirror sector $\sigma_z=\pm 1$. Furthermore, the two sectors $\sigma_z=\pm 1$ can be mapped into
each other by $\sigma_x$, which commutes with $\mathcal{S}$. This explains why the two sectors have the same topological invariant. The inclusion of non-zero $\alpha_R$ does not break the chiral symmetry (though it breaks the mirror symmetry), and therefore it cannot change the topological invariant. In fact, numerical results for $\alpha_R=0$ (not included here) appear qualitatively identical to Fig.~\ref{F2}.
The topological invariant begins to decrease at $\mu \gtrsim 4$, as the Fermi level crosses the tops of the bands.
The topological invariant indicates the number of MZMs that appear at each end
of the system, which appear as zero-energy states in the spectrum shown in Fig.~\ref{F2}(a).
The even parity of the topological invariant can be understood as a consequence of
the TR symmetry, which constrains the MZMs to appear in Kramers pairs. The
even number of MZMs is reflected in the zero-bias conductance plotted in Fig.~\ref{F2}(c). Similar
to the case of the Rashba nanowire, the zero-bias conductance closely tracks the
number of MZMs, and provides a measurable indication of the change of the
topological invariant (that cannot be measured directly). As in the case of the Rashba nanowire, the temperature dependence of the zero-bias conductance provides a sense of the energy scale with which the MZMs couple to the leads. As before, we find that the higher conductance peaks are more sensitive to temperature.
\begin{figure}[htb]
\includegraphics[width = 0.4 \textwidth]{F3}
\caption{\label{F2}
For quasi-1D p-wave superconductors of Eq.~\ref{eq:HQ1D}, described by the parameters $ t_y=0.1,\,\alpha_R =0.25, \, \alpha_R'=0.0,\,\mathbf V =0$, and ${\mathbf{a}}\,=\,(0,0,1)$, the figure shows the correspondence between the spectrum, the BDI invariant (associated with the chiral symmetry operator $\sigma_z\,\tau_{y}$), and the
zero-bias differential conductance, all of which have been plotted as functions of the chemical potential $\mu$. All the temperature values are in units of $t_x$.}
\end{figure}
\begin{figure}[htb]
{\includegraphics[width = 0.4 \textwidth]{F4}}
\caption{\label{F3}For quasi-1D p-wave superconductors of Eq.~\ref{eq:HQ1D}, described by the parameters $ t_y=0.1,\,\alpha_R =0.25, \, \alpha_R'=0$, ${\mathbf{a}}=(0,0,1)$, and $\mathbf V= (0,0.2,0)$, the figure shows the correspondence between the spectrum, the BDI invariant (associated with the chiral symmetry operator $\sigma_{z}\,\tau_{y}$), and the zero-bias differential conductance, all of which have been plotted as functions of the chemical potential $\mu$. All the temperature values are in units of $t_x$.}
\end{figure}
\subsubsection{Time-reversal broken chiral symmetric case:
$|{\mathbf{a}} \times {\hat{z}}|=\alpha_R'=0$}
The TR symmetry of the system, discussed in the previous sub-subsection,
can be broken by applying a Zeeman field $\mathbf V= (0,0.2,0)$.
The nanowire, however, still has chiral symmetry, encoded by the operator $\mathcal{S}=\sigma_z\tau_y$. The breaking of the TR symmetry splits the Kramers degeneracy of the bulk states near the phase transitions seen in Fig.~\ref{F3}, such that the three phase transitions for $\mu<1$ (seen in Fig.~\ref{F2}(a)) are now split into six phase transitions
(see Fig.~\ref{F3}(a)). For the parameters chosen, the splitting of some of the higher chemical potential ($\mu>3$) transitions are too small to resolve. As seen
in Fig.~\ref{F3}(b), these split-transitions are indeed topological phase transitions, as
they are accompanied by a change of the topological invariant. The topological
invariant, which is identical to the one calculated in the last sub-subsection,
shows integer jumps for $\mu<1$, as opposed to the double jumps seen in the
TR symmetric case discussed in the last sub-subsection.
The MZMs that appear at the end because of the non-trivial topological invariant are
no longer required to be Kramers degenerate. This allows both even and odd
number of MZMs. This is apparent from the numerical result for the zero bias
conductance shown in Fig.~\ref{F3}(c), where we see that many of the conductance
steps that are multiples of $4e^2/h$ split into steps with height $2e^2/h$
associated with individual MZMs. Unfortunately, many of the smallest steps are washed out
even at the lowest temperature shown, $T\sim 10\,$K.
\begin{figure}[]
\includegraphics[width = 0.4 \textwidth]{F5}
\caption{\label{F4} For quasi-1D p-wave superconductors of Eq.~\ref{eq:HQ1D}, described by the parameters $t_y=0.3, \, \alpha_R=0.25$, $ \alpha_R^\prime=0 $, ${\mathbf{a}}\,=\,(1,1,0)$, and $\mathbf V= (0,0.1,0.1)$, the figure shows the correspondence between the spectrum, the Pfaffian invariant (as the system belongs to class D), and the zero-bias differential conductance, all of which have been plotted as functions of the chemical potential $\mu$.
All the temperature values are in units of $t_x$.}
\end{figure}
\subsubsection{Superconductors without symmetry}
\label{classD}
Changing the electric field symmetry of the system, such that either the intra-chain
Rashba coupling picks up a non-zero component with $|\mathbf {a}\times \hat{z}|\neq 0$, and/or an inter-chain coupling $\alpha_R'\neq 0$ is generated, breaks all the chiral symmetries of the system. This places the system in the symmetry class D~\cite{kitaev-classi,*Schnyder}, which is the minimal symmetry class for a superconductor, showing only particle-hole symmetry. As seen from the spectrum plotted in Fig.~\ref{F4}(a), this leads
to a structure of bulk-gap closings that is similar to Fig.~\ref{F3}(a), in the sense
of showing six phase transitions at low chemical potential values ($\mu<2$). In this case, we also find six phase transitions for $\mu>2$.
However, unlike Fig.~\ref{F3}(a), the zero-energy states in Fig.~\ref{F4}(a) appear at one phase transition and disappear at the next. This is consistent with the fact
that the topological invariant for class D is in the group $Z_2$, and therefore it takes only two values given by \cite{Kitaev_2001} $Q =\frac{1-\nu}{2}$,
where
\begin{align}
& \nu = \text{sgn} \left[ Pf(Q_1) \,Pf(Q_2) \right ],\nonumber \\
& Q_1 = \mathcal{H}^{Q1D}(k) \Big \vert_{k=0} \sigma_y\, \tau_y \,,\nonumber \\
& Q_2 = \mathcal{H}^{Q1D}(k) \Big \vert_{k=\pi}
\sigma_y\, \tau_y \,.
\end{align}
Here, $Pf(A)$ represents the Pfaffian of the matrix $A$ \cite{pf}.
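As an illustration, the invariant can be evaluated with a small recursive Pfaffian routine, which is adequate for the matrix sizes considered here (the dedicated algorithm of Ref.~\cite{pf} is preferable for larger systems). The sketch below reuses the Pauli matrices and the \texttt{HQ1D} callable from the sketches above; by particle-hole symmetry the matrices $Q_{1,2}$ are antisymmetric, with Pfaffians that are real up to roundoff in this basis:
\begin{verbatim}
# Sketch of the class-D invariant; pfaffian() is a Laplace-type recursion,
# adequate for the small matrices here; Hk is a callable such as HQ1D.
import numpy as np

def pfaffian(A):
    n = A.shape[0]
    if n == 0:
        return 1.0 + 0.0j
    total = 0.0 + 0.0j
    for j in range(1, n):
        rest = [m for m in range(1, n) if m != j]
        total += (-1.0)**(j - 1) * A[0, j] * pfaffian(A[np.ix_(rest, rest)])
    return total

def classD_invariant(Hk, Ny=3):
    syty = np.kron(np.eye(Ny), np.kron(sy, sy))   # sigma_y tau_y per chain
    nu = 1.0
    for k in (0.0, np.pi):
        Q = Hk(k) @ syty
        Q = 0.5 * (Q - Q.T)          # enforce antisymmetry against roundoff
        nu *= np.sign(pfaffian(Q).real)
    return int(round((1.0 - nu) / 2.0))   # Q = (1 - nu)/2
\end{verbatim}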
The resulting topological invariant $Q$ is plotted in Fig.~\ref{F4}(b), which shows
that the system alternates between a topological phase (with one MZM at each end), and
a trivial phase. Most of the trivial phases, seen in Fig.~\ref{F4}(b), correspond to
the chemical potential ranges of Fig.~\ref{F3}(b) with an even value of the chiral topological invariant. These regions in the spectrum of Fig.~\ref{F4}(a) contain sub-gap states, which result from splitting the even number of MZMs by the breaking of chiral symmetry with SOI.
As in the earlier cases, the plot of the zero-bias conductance shown in Fig.~\ref{F4}(c)
indicates the topological invariant. The topological region with invariant $Q=1$
shows a conductance of $2e^2/h$, corresponding to a single MZM at each end of the wire.
The zero-bias conductance in this case shows a stronger suppression with temperature compared to its chiral symmetric counterpart in Fig.~\ref{F3}(c). This indicates that the residual MZM is the one that couples weakest to the lead.
\begin{figure}[]
\includegraphics[width = 0.4 \textwidth]{F6}
\caption{\label{F5}For the quasi-1D p-wave superconductors of Eq.~\ref{eq:HQ1D}, described by the parameters $t_y=0.3, \alpha_R=0.25$, $ \alpha_R^\prime=0.1 $, ${\mathbf{a}}\,=\,(1,1,0)$, and $\mathbf{V}= 0$, the figure shows the correspondence between the spectrum, the DIII invariant ($Q$), and the zero-bias differential conductance, all of which have been plotted as functions of the chemical potential $\mu$.
All the temperature values are in units of $t_x$.}
\end{figure}
\subsubsection{Time-reversal preserving case}
\label{sectr}
Applying an electric field to generate an inter-chain Rashba SOI ($\alpha_R'\neq 0$),
as well as an intra-chain SOI ($\alpha_R\neq 0$), in the absence of a Zeeman field,
breaks all chiral and spin symmetries without breaking TR symmetry.
This leads to a superconductor in the symmetry class DIII. The symmetry class
of the system becomes apparent from the spectrum shown in
Fig.~\ref{F5}(a), which exhibits three phase transitions for $\mu\lesssim 1$, similar to
the spectrum of the TR symmetric superconductor shown in
Fig.~\ref{F2}(a). However, the chiral symmetry present in the case of Fig.~\ref{F2}(a) is broken here by the applied electric field. As a result,
the MZMs disappear at alternate phase transition points,
similar to the spectrum of the class D nanowires shown in Fig.~\ref{F4}(a).
Similar to the case of Sec.~\ref{classD}, the alternating presence of
MZMs can be understood from the topological invariant for this case, which can take
two values (see Fig.~\ref{F5}(b)). As in the other cases, the topological
invariant only changes at the phase transitions.
The topological invariant for the symmetry class DIII, shown in Fig.~\ref{F5}(b), is calculated as $Q=\frac{1-\nu}{2}$~\cite{DIII}, where
\begin{equation}
\nu = \det \left({\mathcal{U}}^K \right )\,\frac{Pf \left(\hat{\theta} (0) \right )}
{ Pf \left (\hat{\theta} (\pi) \right )}\,.
\end{equation}
Here, $\hat{\theta} (0)$ and $\hat{\theta} (\pi)$ represent the matrix elements of
the TR operator $\mathcal{T}$, in the basis of the occupied states,
at $k=0$ and $k = \pi$, respectively. The matrix ${\mathcal{U}}^K $ in this basis
is given by the so-called Kato Propagator~\cite{DIII}:
\begin{align}
{\mathcal{U}}^K(0,\pi) = \lim_{n\rightarrow \infty} \,\prod
\limits_{\lambda=0}^{n} \, \mathcal{P}_{o}(k_\lambda)\,,
\end{align}
where $\mathcal{P}_{o}(k_\lambda)$ is the projector into the occupied bands (negative energy) and $k_\lambda = \frac{ \lambda \, \pi}{n}$.
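A discretized version of this product is straightforward to implement. The sketch below is illustrative (\texttt{Hk} is a Bloch-Hamiltonian callable such as the \texttt{HQ1D} above, and $n$ is the discretization); it constructs only the propagator itself, while extracting $\nu$ further requires expressing $\hat{\theta}$ and $\mathcal{U}^K$ in the occupied-band bases:
\begin{verbatim}
# Sketch of the discretized Kato propagator; Hk is a Bloch-Hamiltonian
# callable and n the number of momentum steps between k=0 and k=pi.
import numpy as np

def kato_propagator(Hk, n=400):
    U = np.eye(Hk(0.0).shape[0], dtype=complex)
    for lam in range(n + 1):
        w, v = np.linalg.eigh(Hk(np.pi * lam / n))
        occ = v[:, w < 0]               # occupied (negative-energy) bands
        U = (occ @ occ.conj().T) @ U    # projector P_o(k_lambda), in order
    return U
\end{verbatim}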
The zero-bias conductance as a function
of $\mu$, which is plotted in Fig.~\ref{F5}(c), provides a measurable indication
of the topological invariant. Similar to the previous sub-subsection, this correspondence
is apparent from the fact that the zero-bias conductance takes two values in the
tunneling limit, corresponding to the two values of the topological invariant.
The conductance in the topologically non-trivial regime takes a value $4e^2/h$
corresponding to two MZMs at each end. The doubled value of the quantized
conductance, in contrast with the case in Sec.~\ref{classD}, is a
result of the Kramers degeneracy associated with TR symmetry.
Relative to the spectrum of the multiple Kramers pairs of MZMs shown in
Fig.~\ref{F2}(a) (for the chiral and TR symmetric p-wave superconductor),
at most a single Kramers pair of ABSs or MZMs is seen in the spectrum in Fig.~\ref{F5}(a).
Thus the DIII class topological invariant and the structure of ABSs in Fig.~\ref{F5} can be understood from the chiral symmetric topological invariant at the corresponding
values of chemical potential in Fig.~\ref{F2}(b).
\section{Signatures of multiple MZMs in Andreev spectroscopy}
\label{cavity}
In this section, we focus on the effect of the symmetry-breaking on the ABS spectra of Josephson junctions (JJs) of superconducting nanowires.
A JJ is created by introducing a weak link with tunneling amplitude
$\gamma$, which has negligible conductance compared to the rest of the superconducting wire.
We can therefore assume that the supercurrent has a negligible contribution to the superconducting phase drop around the wire, which can be controlled by a flux-loop~\cite{felix}.
The ABS spectrum of a JJ generates features in the ac absorption that can be measured by several techniques, such as microwave spectroscopy~\cite{van_Woerkom} and two-tone spectroscopy~\cite{urbina,Devoret}.
Following the argument in the last paragraph, the superconducting phase
difference $\phi$ across the JJ, generated by the magnetic flux-loops, is introduced through a modified Hamiltonian corresponding to Eq.~\ref{eq:HQ1D}:
\begin{widetext}
\begin{align}
\label{nanoham3}
H = & \sum_{\ell=1}^{N_x}\sum_{j=1}^{N_y} {\psi}^{\dagger}_{\ell,j}\left ((-\mu+2(t_x+t_y)) \,\tau_{z}+\Delta_s\, \tau_{x}^{\ell}+V_{x} \,\sigma_{x} \right ){\psi}_{\ell,j}
- \sum_{\ell=1}^{N_x-1} \sum_{j=1}^{N_y}
\left \{ {\psi}^{\dagger}_{\ell+1, j}
\left (t_{x}+ \mathrm{i}\,\alpha\, \sigma_{y}\right )\tau_{z}\, {\psi}_{\ell,j}+{\rm h.c.} \right\}
\nonumber \\ &
- \sum_{\ell=1}^{N_x} \sum_{j=1}^{N_y-1}
\left ( {\psi}^{\dagger}_{\ell, j+1} \,t_{y}\,\tau_{z}\, {\psi}_{\ell,j} +{\rm h.c.} \right ) - \gamma \sum_{j=1}^{N_y}
\left ( {\psi}^{\dagger}_{N_x, j} \,\tau_{z}\, {\psi}_{1,j}+{\rm h.c.} \right )\,,
\end{align}
\end{widetext}
where $\tau_x^{\ell} =\begin{pmatrix} 0 &
e^{\mathrm{i}\, \ell\,\phi/N_x}
\\ e^{-\mathrm{i}\, \ell\,\phi/N_x}
& 0\end{pmatrix}$. Furthermore, the phase difference $\phi$ across the JJ is controlled by the magnetic flux $\Phi$ in the superconducting loop through the relation
$\phi=2\,\pi\,\Phi/\Phi_0$, where $\Phi_0= \frac{h\,c} {2\,e}$ is the superconducting flux quantum.
The $\ell$ dependence of the matrix $\tau_x^{\ell}$ can be understood as a winding of the superconducting phase around the loop, which is needed to ensure that the superconducting phase difference between the ends $\ell=1$ and $\ell=N_x$ of the JJ is equal to $\phi$.
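As a short consistency check of this statement, the pairing phase at site $\ell$ is $\ell\,\phi/N_x$, so the phase accumulated between the two ends of the junction is
\begin{equation*}
\frac{N_x\,\phi}{N_x}-\frac{\phi}{N_x}=\phi\,\frac{N_x-1}{N_x}\;\longrightarrow\;\phi
\quad\text{for } N_x\gg 1,
\end{equation*}
i.e., the weak link carries essentially the full phase difference $\phi$.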
\begin{figure}[]
\includegraphics[width =0.4 \textwidth]{Rf}
\caption{\label{sring}
ABS spectrum of a quasi-1D Rashba nanowire, with the same parameters as in Fig.~\ref{1dnano}, as a function of the flux across a Josephson junction. The ends of the multi-chain ring are connected by a weak link of strength $\gamma=0.1$.
The three panels have different values of the chemical potential:
$\mu=0.1$ (top), $\mu= -0.52 $ (middle), and $\mu = 0.4$ (bottom). The spectrum shows degeneracies of $6$, $4$, and $2$, at flux $\Phi=\Phi_0/2$. These correspond to $3$, $2$, and $1$ MZM(s) at each end of the nanowire in the open boundary case, as seen in Fig.~\ref{1dnano}.}
\end{figure}
The ABS spectrum of the above Hamiltonian is plotted in Fig.~\ref{sring} for
several chemical potential values, and it shows zero-energy crossings at
$\frac{\phi}{2\,\pi} = 1/2$. The class BDI topological invariant for
the bulk Hamiltonian, with the same parameters as used to plot the spectrum in Fig.~\ref{1dnano}, shows clearly that the number of zero-energy ABSs at
$\frac{\phi}{2\,\pi}=1/2$, for the different chemical potentials, is twice the topological invariant. This can be understood by noting that, prior to introducing the tunnel coupling (i.e., at $\gamma=0$), the topological invariant in the BDI symmetry class equals the number of MZMs at each end of the JJ. Introducing the tunnel coupling $\gamma$ across the JJ splits the pairs of MZMs into ABSs with energies that are typically non-zero, except at $\phi=\pi$.
We can understand the above argument more explicitly by applying a gauge transformation, $U=e^{ \frac{\mathrm{i}\,\ell \,\phi\,\tau_z}{2\,N_x}}$, to each site $\ell$ of the lattice. This eliminates the $\ell$ dependence of the SC phase in $\tau_x^\ell$, in favour of introducing a phase $e^{\mathrm{i}\,\phi\,\tau_z/2}$ into the tunneling
term proportional to $\gamma$ (in Eq.~\ref{nanoham3}). Then this term becomes
proportional to $\mathrm{i}\,\gamma$ for phase $\phi=\pi$. Since this term commutes with the
chirality operator $\mathcal{S}=\sigma_y\,\tau_y$ (see Sec.~\ref{swave}), and
the MZMs for the BDI class are eigenstates of $\mathcal{S}$ with eigenvalues $\mathcal{S}=\pm 1$ in the absence of tunneling (i.e., $\gamma=0$), these MZMs cannot be hybridized by the coupling $\gamma$ at phase $\phi=\pi$. As a result, they appear as zero-energy ABSs, as shown in Fig.~\ref{sring}.
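Explicitly, using $e^{\mathrm{i}\theta\tau_z}=\cos\theta+\mathrm{i}\,\tau_z\sin\theta$, at $\phi=\pi$ the phase factor acting on the tunneling term gives
\begin{equation*}
e^{\mathrm{i}\,\pi\,\tau_z/2}\,\tau_z=\mathrm{i}\,\tau_z\,\tau_z=\mathrm{i}\,\mathbb{1},
\end{equation*}
i.e., a term proportional to the identity in both spin and particle-hole space, which trivially commutes with $\mathcal{S}=\sigma_y\,\tau_y$ and is therefore consistent with the protected zero-energy crossings.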
We now analyze the p-wave system treated earlier in Sec.~\ref{pwave}. Introducing a JJ into the system with a phase difference $\phi$ (similar to Eq.~\ref{nanoham3}), leads to the modified Hamiltonian:
\begin{widetext}
\begin{align}
\label{nanoham4}
H = &\sum_{\ell=1}^{N_x}
\sum \limits_{j=1}^{N_y}
{\psi}^{\dagger}_{\ell,j} \left[
\left \lbrace -\mu_j +2 \left (t_x+t_y \right ) \right \rbrace
\tau_{z} + \mathbf{V}^j\cdot\mathbf{\sigma} \right ] {\psi}_{\ell,j}
- \sum_{\ell=1}^{N_x-1} \sum_{j=1}^{N_y}
\left [ {\psi}^{\dagger}_{\ell+1, j}\left \lbrace t_{x}\,\tau_{z}
+ \frac{\mathrm{i}\, \Delta_j}{2} \,\sigma_z \, \tau_x^{\ell}
+ \frac{ \mathrm{i}\, \alpha_{R}^{j}}{2}\left (\mathbf{a}\cdot\mathbf{\sigma} \right )
\tau_z \right \rbrace {\psi}_{\ell,j}+{\rm h.c.} \right ] \nonumber \\
&- \sum_{\ell=1}^{N_x} \sum_{j=1}^{N_y-1}
\left \{ {\psi}^{\dagger}_{\ell, j+1}
\left( t_{y}\,\tau_{z}\, -\mathrm{i}\,\alpha_R'\,\sigma_y\tau_z \right)
{\psi}_{\ell,j} +{\rm h.c.} \right \} - \gamma \sum_{j=1}^{N_y}
\left ( {\psi}^{\dagger}_{N_x, j} \,\tau_{z}\, {\psi}_{1,j}+{\rm h.c.} \right ) ,
\end{align}
\end{widetext}
with $\tau_x^{\ell}$ having the same meaning as in Eq.~\ref{nanoham3}.
We consider the same parameters as in Sec.~\ref{sectr}.
Similar to the case of the s-wave superconductor with Rashba SOI
studied in Fig.~\ref{sring}, the tunnel coupling proportional to $\gamma$ splits
the end MZMs into finite-energy ABSs, as seen in Fig.~\ref{pring}. However, unlike the s-wave case, we see the ABSs cross zero energy at two points: at $\Phi = \Phi_0/4$ and $3\,\Phi_0/4$.
The number of ABSs merging at zero energy varies with the
chemical potential, as shown in the three different panels of Fig.~\ref{pring}.
Comparison of the values of the chemical potential with the topological invariant in Fig.~\ref{F2}, shows clearly that the number of zero-energy ABSs at $\Phi=\Phi_0/4$ and $ 3\Phi_0/4$ is twice the topological invariant, similar to the case of the Rashba nanowire.
The middle panel also shows low energy ABSs, in addition to the zero-energy
crossings associated with the MZMs.
\begin{figure}[]
\includegraphics[width =0.4\textwidth]{Jf}
\caption{\label{pring}
ABS spectrum of a quasi-1D p-wave superconductor, with the same parameters as in Fig.~\ref{F5}, as a function of the flux across a Josephson junction. The ends of the multi-chain ring are connected by a weak link of strength $\gamma=0.1$.
The three panels have different values of the chemical potential:
$\mu=0.0$ (top), $\mu= 2.0$ (middle), and $\mu =5.0$ (bottom). The spectrum shows degeneracies of $0$, $4$, and $4$, at flux $\Phi=\Phi_0/2$. These correspond to $0$, $2$, and $2$ MZMs at each end of the nanowire in the open boundary case, as seen in Fig.~\ref{F5}. The middle panel shows additional states due to the presence of ABSs (also seen in Fig.~\ref{F5}(a)), which also exist in the open boundary case at low energies.}
\end{figure}
The results in Fig.~\ref{sring} and Fig.~\ref{pring} for the highly symmetric cases demonstrate that the Andreev spectra exhibit multiple zero-energy ABSs, mirroring the corresponding conductance plots. Therefore, we expect these results to be somewhat fragile in the sense that breaking the symmetries will lower the number of Andreev crossings (similar to what is seen for ZBCPs).
\section{Discussion and conclusion}
\label{end}
In this paper, we have studied the effect of symmetry-breaking fields on
multi-channel p-wave superconductors that have a high degree of symmetry.
These symmetries allow the possibility of topological superconductivity
characterized by integer topological invariants and a corresponding integer number of MZMs.
We find that the ZBCP reflects this topological invariant, as both vary
synchronously with the chemical potential. Breaking the symmetries systematically, by applying strain and magnetic fields, reduces the conductance to
lower integer values by splitting some of the MZMs. The integer topological
invariants also appear to manifest as zero-energy crossings of Andreev spectra for the highly symmetric topological superconductors. For the examples we consider, the number of
zero-energy crossings of ABSs corresponds to the topological invariant,
similar to the zero-bias conductance. The crossings of the Andreev spectra in
topological Josephson junctions are expected to be measurable through
recent advances in Andreev spectroscopy~\cite{Geresdi-Kouwenhoven,tosi-urbina,dev}. The changes in the signatures (such as zero-bias conductance and Andreev spectroscopy) of the topological invariant can elevate the fingerprints of the MZMs to a rich structure, where the observables vary over several quantized values in a multi-dimensional phase space.
While the signatures for integer topological invariants appear to be quite
robust to variations in the Hamiltonian, the numerical examples we considered
so far do not include disorder. How far these predictions hold up to realistic
disorder in these systems will be an interesting direction for future work. In
addition, it will be interesting to see if the zero-bias conductance with normal
leads translates to a robust signature for superconducting leads, as with the
case of non-degenerate Majorana modes~\cite{glazman-oppen}.
Explicit computation of the detection of the ABSs through a
cavity-response experiment is also left for future work.
\section{Acknowledgments}
We thank Pablo San-Jose for collaboration in the initial stages of the project. We also thank Denis Chevallier for useful discussions. Pfaffians were calculated using a code written by Bas Nijholt.
ABR acknowledges the computational facilities provided by IIT-KGP, where parts of the work were carried out.
JS acknowledges support from NSF
DMR1555135 (CAREER).
\section{Introduction}
\label{sec:intro}
Face hallucination
(FH) refers to the task of recovering high-resolution (HR) facial images from corresponding low-resolution (LR) inputs~\cite{baker_pami2002, chen2018fsrnet, grm2018face}. Solutions to this task have applications in face-oriented vision problems, such as face editing and alignment, 3D reconstruction or face attribute estimation~\cite{Super-FAN,chen2018fsrnet, jourabloo2017pose, li2017generative, lin2007super,QLC,roth2016adaptive, yu2018super} and are used to mitigate performance degradations caused by input images of insufficient resolution. One particularly popular use of FH models is for LR face recognition tasks\cite{gunturk2003eigenface,lin2007super, zhao2015heterogeneous}, where LR probe images are super-resolved to reduce the dissimilarity with HR gallery data.
\begin{figure}[t!]
\centering
\begin{minipage}{0.96\columnwidth}
\includegraphics[width=\textwidth]{downsampling_method_comparison.pdf}\vspace{1mm}
\end{minipage}
\begin{minipage}{0.03\columnwidth}
\begin{turn}{270}
\scriptsize{ MS \hspace{8mm} NMS}
\end{turn}
\end{minipage}
\begin{minipage}{\columnwidth}
\vspace{-1mm}
\scriptsize{\hspace{0.4mm} LR ($24$ px) \hspace{2mm} URDGN \hspace{3mm} LapSRN \hspace{3.2mm} SRResNet \hspace{3.5mm} CARN \hspace{4.8mm} C-SRIP}
\end{minipage}
\caption{Hallucination examples ($8\times$)
for the five FH models used in this work (see Sec.~\ref{Sec: Method} for details).
The top row shows results for a LR image generated with a degradation procedure matching (MS) the one used during training and the bottom row shows results for an image produced by a non-matching degradation function (NMS). Note the difference in the reconstruction quality.
In this paper, we study the bias introduced into FH models by the training data, which has so far received limited attention in the literature.
}
\label{fig: sr_bias}\vspace{-3mm}
\end{figure}
Formally, face hallucination is defined as an inverse problem described by the following observation model~\cite{nasrollahi2014super}:
\begin{equation}
\mathbf{x}=\mathbf{H}\mathbf{y}+\mathbf{n},
\label{Eq: Inverse_SR_problem}
\end{equation}
where $\mathbf{x}$ denotes the observed low-resolution face image, $\mathbf{H}$ stands for a composite down-sampling and blurring operator, $\mathbf{n}$ represents an additive i.i.d. Gaussian noise term with standard deviation $\sigma_n$, and $\mathbf{y}$ is the latent high-resolution face image that needs to be recovered~\cite{nasrollahi2014super}.
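As an illustration of the observation model in~\eqref{Eq: Inverse_SR_problem}, a Gaussian-blur-plus-bicubic-decimation degradation can be sketched as follows (Python; the particular choice of \texttt{scipy} and \texttt{OpenCV} routines is an assumption of the sketch rather than a statement about any specific FH implementation, and a single-channel image $\mathbf{y}$ is assumed):
\begin{verbatim}
import numpy as np
import cv2
from scipy.ndimage import gaussian_filter

def degrade(y, scale=8, sigma_b=8/3, sigma_n=0.0, rng=None):
    # x = H y + n: Gaussian blur, bicubic decimation, i.i.d. noise.
    rng = rng or np.random.default_rng()
    blurred = gaussian_filter(y.astype(np.float32), sigma=sigma_b)
    h, w = blurred.shape
    x = cv2.resize(blurred, (w // scale, h // scale),
                   interpolation=cv2.INTER_CUBIC)
    return x + rng.normal(0.0, sigma_n, x.shape)
\end{verbatim}
The default $\sigma_b=8/3$ mirrors the training-time degradation described in Sec.~\ref{Sec: Method}.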
Recent techniques increasingly approach the FH problem in~\eqref{Eq: Inverse_SR_problem} using machine learning methods~\cite{ahn2018carn,bulat2018learn,lai2017lapsrn,nguyen_SR_2018,yu2018face, yu2016ultra} and try to learn a direct (non-linear) mapping $f_{\theta}$ from the LR inputs to the desired HR outputs, i.e.,
$f_{\theta}: \mathbf{x} \rightarrow \mathbf{y}.$
This mapping is commonly implemented with a parameterized regression model, e.g., a convolutional neural network (CNN), and the parameters of the model $\theta$ are learned through an optimization procedure that minimizes a selected training objective (e.g., an $L_p$ loss) over a set of corresponding LR-HR image pairs. Because the learning procedure is supervised, the image pairs needed for training are constructed by artificially degrading HR training images using a selected degradation function, i.e., a known operator $\mathbf{H}$ and noise level $\sigma_n$. Such an approach ensures that all generated LR images have corresponding HR ground truth faces available for training, but also implicitly defines the type of image degradations the learned model is able to handle. If the actual degradation function encountered with (real-world) test data differs from the one used during training, the result of the face hallucination model may be far from optimal - as illustrated in Fig.~\ref{fig: sr_bias} for five recent state-of-the-art
FH models~\cite{ahn2018carn,grm2018face,lai2017lapsrn,ledig2016photo,yu2016ultra}.
As can be seen from the presented examples, the HR images recovered from a LR input that matches the characteristics of the training data (Fig.~\ref{fig: sr_bias}, top row) are of significantly better quality than those produced from a non-matching LR input image (Fig.~\ref{fig: sr_bias}, bottom row). While all models are able to convincingly ($8\times$) upscale the example $24\times 24$ face with a matching LR image, the hallucination results exhibit considerable artifacts when a small change in terms of blur and noise is introduced into the degradation procedure.
These examples show that the bias introduced into the FH models by the training data has a detrimental effect on the quality of the super-resolved faces and may adversely affect the generalization ability of the trained models to data with unseen characteristics.
Surprisingly, the problem of (face hallucination) model bias has received little attention from the research community so far. Nevertheless, it has important implications for the generalization abilities of FH models as well as for the performance of high-level vision tasks that rely on the generated hallucination results, most notably face recognition. The existing literature on the generalization abilities of FH techniques is typically focused on generalization across different facial characteristics, such as pose, facial expressions, occlusions or alignment, and less so on the mismatch in the degradation functions used to produce the LR test data or qualitative experiments with real-world imagery. Difficulties with model bias are, therefore, rarely observed. Similarly, when used to improve performance of LR face recognition problems, FH models are predominantly applied on artificially degraded images, leaving the question of generalization to real-world LR data unanswered.
In this paper, we aim to address these issues and study the problem of model bias in the field of face hallucination. We try to answer obvious research questions, such as: How do different image characteristics affect the reconstruction quality of FH models? How do FH models trained on artificially degraded images generalize to real-world data? Do FH models ensure improvements in LR face recognition when applied as a preprocessing step? Are there differences in recognition performance when using either artificially generated or real-world LR data? To answer these and related questions we conduct a rigorous analysis using five recent state-of-the-art FH models and examine in detail: \textit{i)} the mismatch between the degradation procedure used to generate the LR-HR training pairs and the actual degradation function encountered with LR data, \textit{ii)} changes in classifier-independent separability measures before and after the application of FH models, and \textit{iii)} face recognition performance with hallucinated images and a state-of-the-art CNN recognition model. We make interesting findings that point to open and rarely addressed problems in the area of face hallucination
and provide insights into future research challenges in this area.
\section{Related Work}
\textbf{Bias in computer vision.} Machine learning techniques are known to be sensitive to the characteristics of the training data and typically result in models with sub-optimal generalization abilities if the training data
is biased towards certain data characteristics.
The
effect of dataset bias can, for example, be seen in \cite{buolamwini2018gender}, where commercial gender classification systems are shown to have a drop in gender-classification accuracy on darker-skinned subjects compared to lighter-skinned subjects, indicating insufficient training data coverage of the latter. Torralba and Efros~\cite{torralba2011unbiased} dem\-onstrate that image datasets used to train classification models are heavily biased towards specific appearances of object categories, causing poor performance in cross-dataset experiments.
Zhao et al. \cite{zhao2017men} show that datasets for semantic role labeling tasks contain significant gender bias and introduce strong associations between gender labels and verbs/objects (e.g., \textit{woman} and \textit{cooking}) that lead to biased models for certain labeling tasks.
These examples show that understanding dataset bias is paramount for the generalization abilities of machine learning models.
Our work is related to these studies, as we also explore dataset bias. However, different from prior work, we focus on the task of face hallucination, which
has not been studied from this perspective so far.
\textbf{Face hallucination for face recognition.} Face recognition performance with LR images tends to degrade severely in comparison to HR face data.
To mitigate this problem, a significant body of work resorts to FH models and tries to up-sample images during pre-processing~\cite{farrugia2017face,gunturk2003eigenface, lin2007super,su2016supervised}
or to devise models that jointly learn an upscaling function and recognition procedure~\cite{hennings2008simultaneous,jian2015simultaneous, wu2016deep}.
While performance improvements are reported in these works, experiments are commonly limited to artificially down-sampled images; findings are then simply extrapolated to real-world data, and potential issues due to dataset bias are often overlooked.
Experiments with real LR images, on the other hand, are scarce in the literature and the usefulness of FH models for face recognition with real-world LR imagery has not received much attention by the research community. As part of our analysis, we study this issue and explore the effect of FH models on data separability and recognition performance on artificially down-sampled and real-world LR data.
\section{Methodology}\label{Sec: Method}
\subsection{Experimental setup}
We conduct our analysis with several state-of-the-art FH models and LR images of size $24\times 24$ pixels. Since there is no clear definition of what constitutes a LR image, we select the LR image data to be smaller than $32\times 32$ pixels, which represents an image size below which most computer vision models are known to deteriorate quickly in performance~\cite{grm2017strengths, torralba200880,wang2016studying}. Given this rather small size, we use an upscaling factor of $8\times$ with the FH models and generate $192\times 192$ images that are used as the basis for our analysis.
\subsection{Face hallucination (FH) models}
Using the presented setup, we study the effect of dataset bias using five recent FH (or super-resolution) models, i.e.: the Ultra Resolving Discriminative Generative Network (URDGN,~\cite{yu2016ultra}), the Deep Laplacian Super-Resolution Network (LapSRN,~\cite{lai2017lapsrn}), the Super-Resolution Residual Network (SRResNet,~\cite{ledig2016photo}), the Cascading Residual Network (CARN,~\cite{ahn2018carn}), and the Cascading Super Resolution Network with Identity Priors (C-SRIP,~\cite{grm2018face}). The selected models differ in the network architecture and training objective, but are all considered to produce state-of-the-art hallucination results as shown in Fig.~\ref{fig: sr_bias}. We also include an interpolation-based method in the experiments to have a baseline for comparisons. A short summary of the models is given below:
\begin{itemize}[leftmargin=*]\vspace{-0.5mm}
\item \textbf{Bicubic interpolation}~\cite{bicubic} is a learning-free approach that up-samples images by interpolating missing pixel values using Lagrange polynomials, cubic splines, or other similar functions. Unlike FH models, it does not rely on domain knowledge when generating HR faces. \vspace{-1mm}
\item \textbf{URDGN} consists of a generator and a discriminator network, and is trained using the generative adversarial network (GAN~\cite{goodfellow2014generative}) framework, where the discriminator is trained to tell apart real and generated HR images, whereas the generator is trained to minimize an $L_2$ reconstruction loss and the accuracy of the discriminator.\vspace{-1mm}
\item \textbf{LapSRN} represents a CNN-based model that progressively up-samples LR images by factors of $2$ through bilinear deconvolution and relies on a feature prediction branch to calculate the high-frequency residuals at each scale. Because of the progressive up-sampling, multi-scale supervision signals are used during training. \vspace{-1mm}
\item \textbf{SRResNet} is a variant of the SRGAN~\cite{ledig2016photo} model that incorporates many of the recent tweaks used in CNN-based super-resolution, such as adversarial training, pixel shuffle up-sampling, batch normalization and leaky ReLU activations. SRResNet represents the generator network of SRGAN trained with the $L_2$ loss. \vspace{-1mm}
\item \textbf{CARN} consists of a light-weight CNN, which is able to achieve state-of-the-art performance for general super-resolution problems using an efficient cascading architecture that combines the design principles of densely connected networks~\cite{huang2017densely} and residual networks~\cite{He_2016_CVPR}. We use the variant with local and global cascading connections, as opposed to the lighter variants of the network.\vspace{-1mm}
\item \textbf{C-SRIP} is a CNN-based FH model that incorporates explicit face identity constraints into the training procedure in addition to the main reconstruction objective. The model has a cascaded architecture that allows it to use supervision signals at multiple scales during training.
\end{itemize}
To incorporate face-specific domain knowledge into the models and ensure a fair comparison, we train all models on the CASIA Webface~\cite{yi2014learning} dataset using $494,414$ images of $10,575$ subjects. We crop the $192\times 192$ central part of the images and generate the HR-LR data pairs for training by blurring the HR images with a Gaussian kernel of $\sigma_b=\frac{8}{3}$ and then downscaling them $8\times$ using bicubic interpolation.
\begin{figure*}[!tb]
\begin{minipage}{0.02\textwidth}
\end{minipage}
\hfill
\begin{minipage}{0.29\textwidth}
\centering
\includegraphics[width=1\textwidth]{downsampling_model_comparison_lr.pdf}\vspace{0.1mm}
\text{\footnotesize (a) LR inputs ($\sigma_n$ vs. $\sigma_b$)}
\end{minipage}
\hfill
\begin{minipage}{0.29\textwidth}
\centering
\includegraphics[width=1\textwidth]{downsampling_model_comparison_bicubic.jpg}\vspace{0.1mm}
\text{\footnotesize (b) Bicubic ($\sigma_n$ vs. $\sigma_b$)}
\end{minipage}
\hfill
\begin{minipage}{0.29\textwidth}
\centering
\includegraphics[width=1\textwidth]{downsampling_model_comparison_C-SRIP.jpg}\vspace{0.1mm}
\text{\footnotesize (c) C-SRIP ($\sigma_n$ vs. $\sigma_b$)}
\end{minipage}
\hfill
\begin{minipage}{0.02\textwidth}
\end{minipage}\vspace{2mm}
\caption{Reconstruction capabilities of the learning-free bicubic interpolation and a selected FH model. The image block on the left (with samples of size $24 \times 24$ pixels) illustrates the effect of increasing noise ($\sigma_n$, increases vertically) and blur ($\sigma_b$, increases horizontally) for a sample LR LFW image, the second and third blocks show $192 \times 192$ reconstructions generated by bicubic interpolation and C-SRIP, respectively. Images marked green are generated with a degradation function matching the one used during training. For the FH model, good HR reconstructions are achieved only for images degraded similarly to the training data, whereas interpolation ensures reasonable reconstructions with all input images. Results for the remaining FH models are shown in the Appendix. Best viewed zoomed in.}\vspace{0mm}
\label{fig:SR_grids_noise}
\end{figure*}
\begin{figure*}[!tb]
\begin{minipage}{0.162\textwidth}
\centering
\includegraphics[width=1\textwidth,trim=6mm 75mm 6mm 6mm, clip]{reconstruction_grid_bicubic.pdf}\vspace{-2mm}
\text{\footnotesize (a) Bicubic}
\end{minipage}
\hfill
\begin{minipage}{0.162\textwidth}
\centering
\includegraphics[width=1\textwidth,trim=6mm 75mm 6mm 6mm, clip]{reconstruction_grid_URDGN.pdf}\vspace{-2mm}
\text{\footnotesize (b) URDGN}
\end{minipage}
\hfill
\begin{minipage}{0.162\textwidth}
\centering
\includegraphics[width=1\textwidth,trim=6mm 75mm 6mm 6mm, clip]{reconstruction_grid_LapSRN.pdf}\vspace{-2mm}
\text{\footnotesize (c) LapSRN}
\end{minipage}
\hfill
\begin{minipage}{0.162\textwidth}
\centering
\includegraphics[width=1\textwidth,trim=6mm 75mm 6mm 6mm, clip]{reconstruction_grid_CARN.pdf}\vspace{-2mm}
\text{\footnotesize (d) CARN}
\end{minipage}
\hfill
\begin{minipage}{0.162\textwidth}
\centering
\includegraphics[width=1\textwidth,trim=6mm 75mm 6mm 6mm, clip]{reconstruction_grid_srresnet.pdf}\vspace{-2mm}
\text{\footnotesize (e) SRResNet}
\end{minipage}
\hfill
\begin{minipage}{0.162\textwidth}
\centering
\includegraphics[width=1\textwidth,trim=6mm 75mm 6mm 6mm, clip]{reconstruction_grid_C-SRIP.pdf}\vspace{-1mm}
\text{\footnotesize (f) C-SRIP}
\end{minipage}\vspace{2mm}
\caption{Reconstruction capabilities with mismatching degradation functions due to different blur and noise levels. The heat maps show the average SSIM values computed over artificially degraded LFW images. The points marked in the heat maps correspond to the sampled levels of noise ($\sigma_n$, increases vertically) and blur ($\sigma_b$, increases horizontally). The value of $\sigma_n$ and $\sigma_b$ that was used for training is marked green. Note that all FH models achieve good reconstructions only around values that match the training setup. Best viewed in color.}\vspace{-1.5mm}
\label{fig:heat_maps}
\end{figure*}
\subsection{Datasets.} \label{subsec:datasets}
We conduct experiments on the Labeled Faces in the Wild (LFW~\cite{huang2007labeled}) and SCFace~\cite{grgic2011scface} datasets. We introduce artificial down-sampling to simulate low image resolutions with LFW and use the SCFace images to explore the effect of training data bias on real-world LR images.
\begin{itemize}[leftmargin=*]\vspace{-0.5mm}
\item \textbf{LFW} is one of the most popular face datasets available, mainly due to the unconstrained settings in which the images were captured. The dataset~\cite{huang2007labeled} consists of $13,233$ face images of size $250\times 250$ pixels belonging to $5749$ subjects. For the experiments, we use only the central crop of the images to have faces of similar proportion to the ones used during FH model training.\vspace{-1mm}
\item \textbf{SCface} contains images of $130$ subjects that are split between a gallery set, containing $130$ high-resolution frontal mug\-shots ($1$ per subject), and a larger probe set of surveillance-camera images. The daylight camera set, which we use for our experiments, consists of images from $5$ different security cameras.
Each subject is recorded by each camera at $3$ different distances, resulting in a total of $130\times 5\times 3 = 1950$ probe set images. We crop facial areas from all images based on the provided facial landmarks prior to the experiments.
\end{itemize}
\subsection{Bias exploration with synthetic LR data}
We start our analysis by exploring the sensitivity of FH models to a controlled mismatch in the degradation function. We first crop the ($192\times 192$) central part of the LFW images and generate baseline LR test data using the same degradation function as used during training. To simulate the mismatch, we generate additional sets of LR data from LFW by varying the standard deviations of the Gaussian blurring kernel $\sigma_b$ and Gaussian noise term $\sigma_n$, which define $\mathbf{H}$ and $\mathbf{n}$ in~\eqref{Eq: Inverse_SR_problem}. We consider five different values for each parameter and select $\sigma_b$ from $[0.75, 1.5, 2.25, 3, 3.75]$ and $\sigma_n$ from $[0, 5, 10, 15, 20]$. Because the LR test data is generated artificially, the HR ground truth can be used to evaluate the reconstruction capabilities of the FH models for each combination of $\sigma_b$ and $\sigma_n$. Note that it is in general infeasible to include all possible data variations in the training procedure, so there will always be image characteristics that have not been accounted for by data augmentation. The selected noise and blur levels are therefore as reasonable factors as any to simulate the mismatch.
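The evaluation over this grid can be sketched as follows (Python; \texttt{degrade} refers to the hypothetical routine sketched in Sec.~\ref{sec:intro}, \texttt{fh\_model} is a placeholder for any of the trained models returning an HR image of the same shape as the ground truth, and scikit-image's SSIM is assumed):
\begin{verbatim}
import numpy as np
from skimage.metrics import structural_similarity as ssim

SIGMAS_B = [0.75, 1.5, 2.25, 3.0, 3.75]
SIGMAS_N = [0, 5, 10, 15, 20]

def ssim_grid(hr_images, fh_model):
    # Mean SSIM between ground truth and hallucinated output
    # for every (sigma_n, sigma_b) combination.
    grid = np.zeros((len(SIGMAS_N), len(SIGMAS_B)))
    for i, sn in enumerate(SIGMAS_N):
        for j, sb in enumerate(SIGMAS_B):
            grid[i, j] = np.mean(
                [ssim(y, fh_model(degrade(y, sigma_b=sb, sigma_n=sn)),
                      data_range=255.0) for y in hr_images])
    return grid
\end{verbatim}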
From the hallucination examples in Fig.~\ref{fig:SR_grids_noise} we see that visually convincing results for the FH model are produced only for LR images generated with blur and noise levels similar to those used during training (close to the images marked green), and deteriorate quickly as the difference to the training blur and noise levels gets larger (see Appendix for additional results). The interpolation baseline produces less convincing results compared to the best hallucinated image of C-SRIP, but it also introduces fewer distortions for images with other blur and noise levels. A similar observation can also be made for the remaining FH models based on the results in Fig.~\ref{fig:heat_maps}, where average structural similarity (SSIM) values computed over the entire LFW dataset are shown for different levels of noise and blur. Here, the computed SSIM scores are shown in the form of interpolated heat maps for all five FH models and the baseline (bicubic) interpolation procedure.
The first thing to notice is that the degradation in reconstruction quality is also visible for the (learning-free) interpolation method. This suggests that the reconstruction problem gets harder with increased noise and blur levels
and the worsened reconstruction quality is not linked exclusively to the mismatch in the degradation function. However, the heat maps also clearly show that performance degrades much faster for the FH models than for the interpolation approach and that the degradation is particularly extreme for the C-SRIP model, which otherwise results in the highest peak SSIM score among all models.
In general, all FH models achieve significantly higher SSIM scores with matching degradation functions (see the green points in Fig.~\ref{fig:heat_maps}) than the interpolation approach, but their performance falls below bicubic interpolation at the highest noise and blur levels - see the lower parts of the heat maps in Fig.~\ref{fig:heat_maps}.
This is an important finding and implies that for imaging conditions that are difficult to model and challenging to reproduce using \eqref{Eq: Inverse_SR_problem}, interpolation may still be a better choice for recovering HR faces than FH models, which require representative HR-LR image pairs for training.
The presented results are consistent with recent studies~\cite{stutz2018disentangling,su2018robustness}, which suggest that the performance of CNN models may come at the expense of robustness and that trying to learn models that are more robust to varying imaging conditions leads to less accurate results. We observe similar behaviour with the tested FH models (compare the heat maps of C-SRIP and URDGN, for example) and hypothesize that the relatively narrow focus of the models on specific degradation functions may be one of the reasons for the convincing performance of recent CNN-based FH models.
\subsection{Bias exploration with synthetic and real data}
Next, we explore the impact of dataset bias with synthetic LR images from LFW and with real-world surveillance data from SCFace, where the observed image degradations due to the acquisition hardware are not well modelled by the training degradation function. Since there is no HR ground truth available for the SCFace data, measuring the reconstruction quality is not possible with this dataset. We therefore focus on face recognition, which is regularly advocated in the literature as one of the main applications for FH models~\cite{farrugia2017face,gunturk2003eigenface,lin2007super,su2016supervised}, and use it as a proxy for face hallucination performance.
Because this task is different from the reconstruction task studied above, we first run experiments with artificially degraded LFW images to have a baseline for later comparisons with results obtained on real-world SCFace data. We note that recognition experiments add another dimension to our analysis, as we now also explore the impact of the dataset bias on the semantic content of the reconstructed HR data and not only on the perceived quality of the hallucinated faces.
For the experiments, we use a ResNet-$101$ model~\cite{He_2016_CVPR} and train it for face recognition on a dataset of close to $1.8$ million images and $2622$ identities~\cite{VGGface}. We select the model because of its state-of-the-art performance~\cite{masi2016we,ranjan2017hyperface} and the fact that an open-source implementation is readily available. We perform network surgery on the trained ResNet-$101$ and use the activations from the penultimate network layer as a $512$-dimensional descriptor of the input face images.
\begin{figure}[t!]
\centering\vspace{-0.7mm}
\begin{minipage}{0.51\columnwidth}
\centering
\includegraphics[width=\textwidth]{lr_samples.pdf}
\end{minipage}
\begin{minipage}{0.48\columnwidth}
\centering
\includegraphics[width=\textwidth]{dist1_size_hist.pdf}
\end{minipage}
\caption{Examples of LR LFW and SCFace images used in the experiments. Left: the first row shows LFW samples degraded using the \textit{matching} scheme (MS), the next row shows LFW images degraded with the \textit{non-matching} scheme (NMS) and the last row shows images from SCFace. Right: distribution of SCFace image widths/heights (in $px$) for faces captured at the largest distance.\vspace{-2.5mm}}
\label{fig:face_samples_scface_lfw}
\end{figure}
\begin{figure*}
\centering\vspace{-0.5mm}
\begin{minipage}{0.239\textwidth}\centering
\includegraphics[width=\textwidth]{tsne_lfw_192px_hr_.pdf}\vspace{0.3mm}
\text{\footnotesize (a) HR images}
\end{minipage}
\begin{minipage}{0.118\textwidth}\centering
\includegraphics[width=\textwidth]{tsne_optimal_downsampling_bicubic.pdf}
\includegraphics[width=\textwidth]{tsne_adverse_downsampling_bicubic.pdf}\vspace{0.3mm}
\text{\footnotesize (b) Bicubic}
\end{minipage}
\begin{minipage}{0.118\textwidth}\centering
\includegraphics[width=\textwidth]{tsne_optimal_downsampling_URDGN.pdf}
\includegraphics[width=\textwidth]{tsne_adverse_downsampling_URDGN.pdf}\vspace{0.3mm}
\text{\footnotesize (c) URDGN}
\end{minipage}
\begin{minipage}{0.118\textwidth}\centering
\includegraphics[width=\textwidth]{tsne_optimal_downsampling_LapSRN.pdf}
\includegraphics[width=\textwidth]{tsne_adverse_downsampling_LapSRN.pdf}\vspace{0.3mm}
\text{\footnotesize (d) LapSRN}
\end{minipage}
\begin{minipage}{0.118\textwidth}\centering
\includegraphics[width=\textwidth]{tsne_optimal_downsampling_CARN.pdf}
\includegraphics[width=\textwidth]{tsne_adverse_downsampling_CARN.pdf}\vspace{0.3mm}
\text{\footnotesize (e) CARN}
\end{minipage}
\begin{minipage}{0.118\textwidth}\centering
\includegraphics[width=\textwidth]{tsne_optimal_downsampling_srresnet.pdf}
\includegraphics[width=\textwidth]{tsne_adverse_downsampling_srresnet.pdf}\vspace{0.3mm}
\text{\footnotesize (f) SRResNet}
\end{minipage}
\begin{minipage}{0.118\textwidth}\centering
\includegraphics[width=1\textwidth]{tsne_optimal_downsampling_C-SRIP.pdf}
\includegraphics[width=1\textwidth]{tsne_adverse_downsampling_C-SRIP.pdf}\vspace{0.3mm}
\text{\footnotesize (g) C-SRIP}
\end{minipage}\vspace{2.2mm}
\begin{minipage}{0.015\textwidth}
\begin{turn}{270}
\footnotesize{\hspace{-5mm} MS \hspace{14mm} NMS}
\end{turn}
\end{minipage}
\caption{Visualization of ResNet-$101$ features extracted from hallucinated HR images using t-SNE~\cite{maaten2008visualizing}. Results are shown for the $10$ largest classes of LFW. The plots show distributions for: (a) the original HR images, and (b-g) hallucinated HR face images down-sampled using the matching (MS) or non-matching (NMS) degradation schemes. Best viewed in color and zoomed in.\vspace{-1.6mm}}
\label{fig:tsne}
\end{figure*}
For the experiments with artificially down-sampled LFW data, we consider two different degradation schemes:
\begin{itemize}[leftmargin=*]\vspace{-0.5mm}
\item \textbf{A matching scheme (MS),} where each full-resolution LFW image is first filtered with a Gaussian kernel of $\sigma_b= 10$ and
the generated image is then decimated to the target size using bicubic interpolation. No noise is added. This scheme matches the training setup.\vspace{-1mm}
\item \textbf{A non-matching scheme (NMS)}, where $\sigma_b$ is selected randomly from a uniform distribution, i.e., $\mathcal{U}\left(0.5, 4\right)$, for each LFW image.
After filtering and down-sampling, images are corrupted through additive Gaussian noise with standard deviation $\sigma_n$, drawn randomly from $\mathcal{U}(0, 20)$. This ensures a mismatch between the applied degradation function and the one used during training. Furthermore, it results in a different degradation for every test image.\vspace{-0.5mm}
\end{itemize}
The two schemes generate $24\times 24$ LR data of the same size but with different characteristics, as shown in Fig.~\ref{fig:face_samples_scface_lfw}. The generated images are then fed to the FH models for up-sampling and the HR results are used as inputs for ResNet-$101$.
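A minimal sketch of the two schemes, reusing the hypothetical \texttt{degrade} routine from Sec.~\ref{sec:intro} (the constant \texttt{TRAIN\_SIGMA\_B} denotes the blur level used to build the training pairs, cf. Sec.~\ref{Sec: Method}):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
TRAIN_SIGMA_B = 8 / 3   # blur used for the training pairs

def degrade_ms(y):
    # Matching scheme: training-time blur, no noise.
    return degrade(y, sigma_b=TRAIN_SIGMA_B, sigma_n=0.0, rng=rng)

def degrade_nms(y):
    # Non-matching scheme: per-image random blur and noise.
    return degrade(y, sigma_b=rng.uniform(0.5, 4.0),
                   sigma_n=rng.uniform(0.0, 20.0), rng=rng)
\end{verbatim}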
For the experiments with the SCFace data, we use a subset of $650$ images captured by the five surveillance cameras at the largest of all recorded distances. After removing the interlaced rows from the images as well as a corresponding number of columns to ensure a correct aspect-ratio, we end up with images, where the facial area covers an image region close in size to the $24\times 24$ pixels expected by the FH models - a distribution for the SCFace face widths/heights is shown on the right of Fig.~\ref{fig:face_samples_scface_lfw}. We rescale all images to the correct input size (using bicubic interpolation) and then feed the hallucination results produced by the FH models to ResNet-$101$ for descriptor computation.
\textbf{Experiments on data separability.} Using the experimental setup described above, we explore whether data separability is improved when facial details are hallucinated and
how the separability is affected by the mismatch in the degradation function. To this end, we visualize the distribution of ResNet-$101$ feature descriptors extracted from hallucinated HR images of the $10$ largest LFW classes (i.e., the $10$ subjects with the highest number of images) using t-SNE~\cite{maaten2008visualizing} in Fig.~\ref{fig:tsne}. In order to quantitatively evaluate the separability of the presented distributions, we also compute a separability measure in the form of the Kullback-Leibler (KL) divergence between the distribution of a given class and joint distribution of all remaining classes in the 2D t-SNE embedding space and report average values calculated over all $10$ considered LFW classes in Table~\ref{tab: KL divergence}.
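One possible histogram-based estimate of this class-versus-rest divergence in the 2D embedding space is sketched below (the binning and smoothing constants are assumptions of the sketch, not the exact settings used for Table~\ref{tab: KL divergence}):
\begin{verbatim}
import numpy as np
from scipy.stats import entropy

def kl_class_vs_rest(emb, labels, cls, bins=32, eps=1e-9):
    # KL(P_cls || P_rest) from 2D t-SNE coordinates via histograms.
    extent = [[emb[:, 0].min(), emb[:, 0].max()],
              [emb[:, 1].min(), emb[:, 1].max()]]
    p, _, _ = np.histogram2d(*emb[labels == cls].T,
                             bins=bins, range=extent)
    q, _, _ = np.histogram2d(*emb[labels != cls].T,
                             bins=bins, range=extent)
    p = (p + eps).ravel()
    q = (q + eps).ravel()
    return entropy(p / p.sum(), q / q.sum())
\end{verbatim}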
We observe that for the original HR images (before down-sampling) the classes are well separated and show no overlap. After down-sampling with the matching scheme (MS) and subsequent up-sampling (top row in Fig.~\ref{fig:tsne}), we see considerable overlap in the class distributions for bicubic interpolation. The FH models, on the other hand, improve the data separability over the interpolation-based baseline and result in significantly higher KL-divergence scores. C-SRIP performs particularly well and generates compact class clusters with very little overlap.
\begin{table}[!tb]
\renewcommand{\arraystretch}{1.05}
\caption{Average KL divergence for the $10$ largest LFW classes with the MS and NMS degradation schemes estimated in the 2D space generated by t-SNE. Arrows indicate an increase or decrease in value compared to the baseline bicubic interpolation method.\vspace{1mm}}
\label{tab: KL divergence}
\centering
\footnotesize
\begin{tabular}{ l cccc}
\hline
\multirow{2}{*}{Approach} & & \multicolumn{3}{c}{LFW} \\\cline{3-5}
&& MS & NMS & Change \\ \hline
Bicubic (baseline) & &$0.5389$ & $0.2135$ & $-0.3254$ \\
URDGN & &$0.5561$ $\blu{\uparrow}$ & $0.2143$ $\blu{\uparrow}$ & $-0.3418$ \\
LapSRN && $0.6346$ $\blu{\uparrow}$ & $0.2087$ $\red{\downarrow}$ & $-0.4259$ \\
CARN & &$0.6851$ $\blu{\uparrow}$ & $0.1957$ $\red{\downarrow}$ & $-0.4894$ \\
SRResNet & & $0.7148$ $\blu{\uparrow}$ & $0.1962$ $\red{\downarrow}$ & $-0.5222$ \\
C-SRIP && $0.7676$ $\blu{\uparrow}$ & $0.1972$ $\red{\downarrow}$ & $-0.5704$ \\ \hline
\end{tabular}\vspace{-3mm}
\end{table}
With the non-matching scheme (NMS) all mo\-dels perform noticeably worse, as shown in the bottom row of Fig.~\ref{fig:tsne}. As with the reconstruction experiments, we again see a drop in performance for bicubic interpolation, which is a learning-free approach and was hence not trained for specific image characteristics. This suggests that ensuring good data separation is a harder task for LR images generated by NMS and that the drop in the KL divergence is not only a result of mismatched degradation functions. However, if we take the performance drop of the interpolation approach as our baseline, we observe that the FH models are much more sensitive to the characteristics of the LR data. The KL divergence of all models drops to a comparable value around $0.2$, and for all models except URDGN it even falls slightly behind bicubic interpolation.
To further analyze the separability of the ResNet-$101$ descriptors of the hallucinated images, we report values for another non-parametric separability measure, i.e., Thornton's Geometric Separability Index (GSI), this time for the entire LFW and SCFace datasets and all FH models in Table~\ref{tab: GSI}. The index is defined as the fraction of data instances of a given dataset, $\mathcal{S}$, that have the same class-labels as their nearest neighbors, i.e.~\cite{thornton1998separability}:
$GSI = \frac{1}{n}\sum_{i=1}^nf(\mathbf{z}_i,\mathbf{z}'_i)$,
where $n$ stands for the cardinality of $\mathcal{S}$ and $f$ is an indicator function that returns $1$ if the $i$-th ResNet-$101$ descriptor $\mathbf{z}_i$ and its nearest neighbor $\mathbf{z}'_i$ share the same label and $0$ otherwise. GSI is bounded between $0$ and $1$, where a higher value indicates better separability. We use the cosine similarity to determine nearest neighbors.
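A direct implementation of this index is straightforward (sketch; the descriptors are assumed to be the rows of \texttt{Z}):
\begin{verbatim}
import numpy as np

def gsi(Z, labels):
    # Thornton's GSI with cosine-similarity nearest neighbours.
    labels = np.asarray(labels)
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    sim = Zn @ Zn.T
    np.fill_diagonal(sim, -np.inf)   # exclude self-matches
    nn = sim.argmax(axis=1)
    return float(np.mean(labels[nn] == labels))
\end{verbatim}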
\begin{table}[!tb]
\renewcommand{\arraystretch}{1.05}
\caption{GSI values achieved by the FH models in the ResNet-$101$ feature space. Note the decrease in the data separability due to mismatched degradation functions. Arrows indicate an increase or decrease in value compared to the baseline bicubic interpolation.\vspace{1mm}}
\label{tab: GSI}
\centering
\footnotesize
\begin{tabular}{ l ccccc}
\hline
\multicolumn{1}{l}{\multirow{2}{*}{Approach}} & \multicolumn{3}{c}{LFW} &\multirow{2}{*}{SCFace} \\\cline{2-4}
& MS & NMS & Change & \\ \hline
Bicubic (baseline) & $0.6283$ & $0.5032$ &{$-19.9\%$} & $0.5963$\\
URDGN & $0.6481$ $\blu{\uparrow}$ & $0.4866$ $\red{\downarrow}$ & {$-24.9\%$} & $0.5346$\\
LapSRN & $0.6657$ $\blu{\uparrow}$ & $0.4906$ $\red{\downarrow}$& {$-26.3\%$} & $0.6218$\\
CARN & $0.7130$ $\blu{\uparrow}$ & $0.4858$ $\red{\downarrow}$& {$-31.8\%$} & $0.5691$\\
SRResNet & $0.7084$ $\blu{\uparrow}$ & $0.4927$ $\red{\downarrow}$& {$-30.4\%$} & $0.5840$\\
C-SRIP & $0.7104$ $\blu{\uparrow}$ & $0.4893$ $\red{\downarrow}$& {$-31.1\%$} & $0.5712$\\ \hline
\end{tabular}\vspace{-3mm}
\end{table}
The results in Table~\ref{tab: GSI} again show that the data separability is improved with all FH models compared to the baseline with the MS scheme on LFW. With the NMS scheme all models perform worse than the baseline and also exhibit a larger drop in separability than simple bicubic interpolation. On SCFace we see a similar picture. Only LapSRN results in better separability than the interpolation-based baseline, while all other FH models decrease separability.
These results again point to the importance of suitable training data, as FH models do not generalize well to unseen image characteristics and perform different than expected when applied on real-world imagery.
\textbf{Recognition experiments.} In our last series of experiments we look at the recognition performance achieved with the FH models and the extracted ResNet-$101$ descriptors on LFW and SCFace. For LFW we follow the so-called ``unrestricted outside data'' protocol and use the $6000$ pre-defined image pairs in verification experiments.
We keep one of the images in each pair unchanged (at the original resolution), and down-sample the second one using either the MS or NMS scheme. The LR images are then upscaled with the FH models and used to extract ResNet-$101$ descriptors. Matching is done with the cosine similarity. We report verification accuracy for the $10$ predefined experimental folds. For SCFace we perform a series of identification experiments, where we try to recognize subjects in the upscaled HR probe images based on the HR gallery data.
\begin{figure}[!tb]
\begin{minipage}{0.49\columnwidth}
\centering
\includegraphics[width=1\textwidth]{recog_results_lfw_resnet101.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.49\columnwidth}
\centering
\vspace{1.5mm}
\includegraphics[width=1\textwidth]{recog_results_scf_resnet101.pdf}
\end{minipage}\vspace{1mm}
\caption{Recognition results on LFW (left) and SCFace (right). With a matching degradation function all models improve upon interpolation. The results are less predictable with image characteristics not seen during training. Best viewed in color.\vspace{-2mm}}
\label{fig: LFW_SCFace_results}
\end{figure}
Fig.~\ref{fig: LFW_SCFace_results} shows that on HR LFW images the ResNet-$101$ model achieves a median verification accuracy of $95.1\%$. When the image size is reduced to $24\times24$ pixels with the MS scheme and the LR images are upscaled with bicubic interpolation, the performance drops to $84.5\%$. The FH models improve on this and achieve significantly better results. The highest median accuracy of $91.8\%$ comes from C-SRIP, which is the top performer in this setting. With the NMS scheme the drop in performance is larger for all methods compared to the HR data. URDGN, LapSRN and CARN are only able to match the performance achieved by bicubic interpolation, while SRResNet and C-SRIP degrade results.
Results for SCFace are shown separately for each of the five cameras and in the form of the overall mean identification accuracy (i.e., rank-$1$) in Fig.~\ref{fig: LFW_SCFace_results}. We see that none of the FH models outperforms the bicubic baseline on all cameras. Overall, LapSRN offers a slight improvement over bicubic interpolation considering the average identification accuracy, but the performance gain is modest and in the range of $3\%$. The ranking of the models is also not consistent across different cameras, which generate LR data with very different characteristics. Observe, for example, C-SRIP, which performs worst with images from camera $2$, but is one of the top performers on camera $4$, where it gains around $10\%$ in performance over bicubic interpolation.
These results show that without suitable mechanisms that are able to compensate for the bias introduced into FH model by the training data, hallucination results with real-world images are unpredictable and findings made with artificially down-sampled images cannot simply be extrapolated to real-world data.
\section{Conclusion, discussion and outlook}
We have studied the impact of dataset bias on the problem of face hallucination and analyzed five recent CNN-based FH models on artificially degraded as well as real-world LR images. Below is a summary of the main findings:
\begin{itemize}[leftmargin=*]\vspace{-0.5mm}
\item \textbf{Reconstruction and robustness:} FH models achieve better reconstruction performance than the learning-free interpolation baseline on LR images matching the training data in terms of characteristics. However, their superiority fades away quickly
as the LR image characteristics diverge from the training setting.
The rather sudden drop in reconstruction quality points to an accuracy-robustness trade-off with FH models not present with learning-free approaches, as also observed for other CNN-based models by recent studies~\cite{stutz2018disentangling,su2018robustness}.\vspace{-0.5mm}
\item \textbf{Separability and recognition:} We observe statistically significant improvements in data separability and face recognition performance when LR images are pre-processed with FH models (as opposed to interpolated), but \textit{only} for LR images degraded with the same approach as used during training. For mismatched image characteristics (with real-world data) we found no significant improvements in separability or recognition performance for any of the FH models, which in most cases fall behind simple interpolation.
\end{itemize}
Overall, our results suggest that despite recent progress, FH models are still very sensitive to the characteristics of the LR input data. We found limited empirical proof of their usefulness for higher-level vision tasks (e.g., recognition) beyond improvements in perceptual quality -- which might be important for representation-oriented problems, such as alignment or detection. Our analysis shows that we, as a community, need to move away from the standard evaluation methodology involving artificially degraded LR images and focus on more challenging real-world data when developing FH models for specific vision problems.
Common ways to mitigate the effects of dataset bias in CNN-based models in the literature are domain adaptation (DA) techniques and ensemble approaches \cite{csurka2017domain}. These have not been explored extensively for the problem of face hallucination yet (see~\cite{bulat2018learn} for initial attempts), but seem like a good starting point to improve the generalization abilities of FH models and make them applicable to real-world data.
{\small
\bibliographystyle{ieee}
\section{Introduction}
In this paper, a robust tracking control scheme is designed for a class of uncertain nonlinear systems. The design is composed of two steps. In the first step a linearized uncertainty model for the uncertain nonlinear system is developed using a robust feedback linearization approach.
The feedback linearization approach has many applications in the process control and aerospace industries. Using this method, a large class of nonlinear systems can be made to exhibit linear input-output behavior using a nonlinear state feedback control law. Once the input-output map is linearized, any linear controller design method can be used to design a desired controller. One of the limitations of the standard feedback linearization method is that the model of the system must be exactly known. In the presence of uncertainty in the system, exact feedback linearization is not possible and uncertain nonlinear terms remain in the system.
In order to resolve the issue of uncertainty after canceling the nominal nonlinear terms using the feedback linearization method, several approaches have been considered in the literature \cite{FBRC,NBSS,FBLetter,FBchemical,FBRC,FBbound,FB_robust_Friedovich,FBobserver1}. Most of these approaches use adaptive or related design procedures to estimate the uncertainty in the system. In these methods, mismatched uncertainties are decomposed into matched and mismatched parts. These methods typically require the mismatched parts not to exceed some maximum allowable bound \cite{FBoutput}. The existing results that are based on mismatched uncertainties either do not guarantee stability or require some stringent conditions on the nonlinearities and uncertainties to guarantee the stability of the system \cite{FBchemical,FBRC}.
In this paper, we approach the uncertainty issue in a different way and represent the uncertain nonlinear system in an uncertain linearized form. We use a nominal feedback linearization method to cancel the nominal nonlinear terms, and use a generalized mean value theorem to linearize the nonlinear uncertain terms. In our previous work \cite{Rehman_ASJC,CDC02}, the uncertain nonlinear terms are linearized using a Taylor expansion at a steady state operating point by considering a structured representation of the uncertainties. This linearization approach approximates the actual nonlinear uncertainty by considering only the first order terms and neglecting all of the higher order terms. In \cite{CDC02}, the uncertain nonlinear terms are linearized using a Taylor expansion but an unstructured representation of uncertainty is considered. In both of these methods, the linearized uncertainty model was obtained by ignoring higher order terms. In \cite{Rehman_ASCC01}, we introduced the linearization of nonlinear terms using a generalized mean value theorem \cite{Lin_algebra,Mcleod_Mean_value} approach. This method exactly linearizes the uncertain nonlinear terms at an arbitrary point and, therefore, no higher order terms exist. In \cite{Rehman_ASCC01}, the upper bound on the uncertainties is obtained by using unstructured uncertainty representations. The bound obtained using an unstructured uncertainty representation may be conservative, which may degrade the performance of the closed loop system. In order to reduce conservatism and to obtain an uncertain linearized model with a structured uncertainty representation, a different approach for obtaining an upper bound is presented here. In contrast to \cite{Rehman_ASCC01}, here we propose a minimax linear quadratic regulation (LQR) \cite{IP} controller which, combined with a standard feedback linearization law, gives a stable closed-loop system in the presence of uncertainty. Here, we assume that the uncertainty satisfies a certain integral quadratic constraint (IQC).
The paper is organized as follows. Section \ref{sec:PSlqr} presents a description of the considered class of uncertain nonlinear systems. Our approach to robust feedback linearization is given in Section \ref{sec:FBlqr}. Derivation of the linearized uncertainty model and tracking controller for an air-breathing hypersonic flight vehicle (AHFV) along with simulation results are presented in Section \ref{sec:VM}. The paper is concluded in Section \ref{sec:concl} with some final remarks on the proposed scheme.
\section{System Definition}\label{sec:PSlqr}
Consider a multi-input multi-output (MIMO) uncertain nonlinear system with the same number of inputs and outputs as follows:
\begin{align}
\label{eqsystemlqr}
\dot{x}(t)&=f(x,\hat{p})+\sum\limits_{k=1}^m{g_k(x,\hat{p})u_k(t)}+\epsilon \bar{g}(\hat{p},x,u),\nonumber\\
y_i(t)&=\nu_i(x),\quad i=1,2,\cdots,m
\end{align}
where $x(t)\in \mathbb{R}^n$, $u(t)=[u_1,\ldots,u_m]^T\in\mathbb{R}^m$, $y(t)=[y_1,\ldots,y_m]^T\in\mathbb{R}^m$, and $\epsilon\neq0$.
Furthermore, the system has norm bounded uncertain parameters lumped in the vector $\hat{p}\in \mathbb{R}^q$. Also, $f(x,\hat{p})$, $g_i(x,\hat{p})$, and $\nu_i(x)$ for $i=1,\cdots,m$ are assumed to be infinitely differentiable (or differentiable to a sufficiently large degree) functions of their arguments. The term $\bar{g}(\hat{p},x,u)$ in (\ref{eqsystemlqr}) is a nonlinear function which represents the couplings in the system. The full state vector $x$ is assumed to be available for measurement.
\section{Feedback Linearization}\label{sec:FBlqr}
In this section, we first simplify the model (\ref{eqsystemlqr}) so that the term involving $\epsilon$ vanishes. Here we assume that $\vert\epsilon\vert$ is sufficiently small and hence indicates weak coupling. In general, $\epsilon$ depends on the physical parameters of the system and may be available through measurement or known in advance. Instead of neglecting this coupling in the control design, we model $\epsilon \bar{g}(\hat{p},x,u)$ as an uncertainty function $\tilde{g}(\hat{p},\bar{p},x)$ with a known bound, where $\bar{p}$ denotes a new uncertainty parameter whose magnitude is bounded. The parameter $\bar{p}$ appears due to the removal of coupling terms which depend on the input $u$. Now we can write (\ref{eqsystemlqr}) as follows:
\begin{equation}
\label{eqSSsystem}
\begin{split}
\dot{x}(t)&=\bar{f}(x,p)+\sum\limits_{k=1}^m{g_k(x,\hat{p})u_k(t)}\\
y_i(t)&=\nu_i(x),\quad i=1,2,\cdots,m\\
\end{split}
\end{equation}
where $\bar{f}(x,p)=f(x,\hat{p})+\tilde{g}(\hat{p},\bar{p},x)$ is an infinitely differentiable function and $p=[\hat{p} \quad \bar{p} ]^T$. Also, note that in equation (\ref{eqSSsystem}), which includes the new uncertain parameter $\bar{p}$, we can write the system in terms of a new uncertainty vector $p=p_0+\Delta p \in \mathbb{R}^{\bar{q}}$, where $\bar{q}=q+1$. Here, $p_0$ is the vector of the nominal values of the parameter vector $p$ and $\Delta p$ is the vector of uncertainties in the corresponding parameters as follows:
\begin{align*}
&p_{0}=\left[\begin{array}{ccccc}
p_{10} & p_{20} & \cdots & p_{({\bar{q}}-1)0} & p_{{\bar{q}}0}
\end{array}\right],\\
&\Delta p=\left[\begin{array}{ccccc}
\Delta p_1 & \Delta p_2 & \cdots & \Delta p_{({\bar{q}}-1)} & \Delta p_{\bar{q}}
\end{array}\right].
\end{align*}
We assume that a bound on $|\Delta p_s|$ is known for each $s\in \{1,\cdots,\bar{q}\}$, and that the functions in the system (\ref{eqSSsystem}) are sufficiently differentiable. The standard feedback linearization method can be used on the nominal model (without uncertainties) by differentiating each individual element $y_i$ of the output vector $y$ a sufficient number of times until a term containing the control element $u$ appears explicitly. The number of differentiations needed is equal to the relative degree $r_i$ of the system with respect to each output for $i=1,2,\cdots,m$. Note that a nonlinear system of the form (\ref{eqSSsystem}) with $m$ output channels has a vector relative degree $r=[r_1~r_2~\cdots~r_m]$ \cite{IS}. We assume that the nonlinear system (\ref{eqSSsystem}) has full relative degree; i.e. $\sum\limits_{i=1}^m{r_i}=n$, where $n$ is the order of the system.
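To make the relative degree computation concrete, the following Python sketch (our illustration, using SymPy on a toy pendulum-like system that is not related to the systems considered here) differentiates the output via Lie derivatives until the input appears:
\begin{verbatim}
import sympy as sp

# Toy single-input system dx/dt = f(x) + g(x) u, y = nu(x).
x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -sp.sin(x1)])   # drift vector field
g = sp.Matrix([0, 1])              # input vector field
nu = x1                            # regulated output

def lie(h, v):
    # Lie derivative L_v h = (dh/dx) v for a scalar h.
    return (sp.Matrix([h]).jacobian(x) * v)[0]

# Differentiate y until the input shows up: that count is r_i.
h, r = nu, 1
while sp.simplify(lie(h, g)) == 0:
    h = lie(h, f)
    r += 1
print('relative degree r =', r)    # prints 2 for this toy system
\end{verbatim}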
It is shown in \cite{CDC01} that in the presence of uncertainties exact cancellation of the nonlinearities is not possible because only an upper bound on the uncertainties is known. Indeed, we obtain
\begin{equation}
\label{eqSsystem}
\begin{split}
\dot{x}(t)&=\underbrace{\bar{f}_0(x,p_0)+\sum\limits_{k=1}^m{g_{k0}(x,\hat{p_0})u_k(t)}}_\text{Nominal part}\\
&+\underbrace{\Delta \bar{f}(x,p)+\sum\limits_{k=1}^m{\Delta g_k(x,p)u_k(t)}}_\text{Uncertain part}\\
y_i(t)&=\nu_i(x),\quad i=1,2,\cdots,m
\end{split}
\end{equation}
where, $\Delta \bar{f}$, and $\Delta g_k$ are the uncertain parts of their respective functions. After taking the Lie derivative of the regulated outputs a sufficient number of times, the system (\ref{eqSsystem}) can be written as follows:
\begin{align}
\label{eqdiffoutputlqr}
\left[\begin{array}{c}
y_1^{r_1}\\
\vdots \\
y_m^{r_m}
\end{array}\right]
&=f_*(x)+g_*(x)u \nonumber\\
&+\left[\begin{array}{c}
L_{\Delta\bar{f}}^{r_1}(\nu_1)+\sum\limits_{k=1}^m L^{r_1-1}_{ \Delta g_k}[L_{ \Delta \bar{f}}( \nu_1)]u_k\\
\vdots\\
L_{\Delta\bar{f}}^{r_m}(\nu_m)+\sum\limits_{k=1}^m L^{r_m-1}_{\Delta g_k}[L_{\Delta \bar{f}}(\nu_m)]u_k
\end{array}\right],
\end{align}
where,
\begin{footnotesize}
\begin{align*}
f_*(x)&=[L_{\bar{f}_0}^{r_1}(\nu_1) \cdots L_{\bar{f}_0}^{r_m}(\nu_m)]^T,\\
g_*(x)&=\left[
\begin{array}{cccc}
L_{g_{10}}L_{\bar{f}_0}^{r_1-1}(\nu_1) & L_{g_{20}}L_{\bar{f}_0}^{r_1-1}(\nu_1) & \cdots & L_{g_{m0}}L_{\bar{f}_0}^{r_1-1}(\nu_1)\\
L_{g_{10}}L_{\bar{f}_0}^{r_2-1}(\nu_2) & L_{g_{20}}L_{\bar{f}_0}^{r_2-1}(\nu_2) & \cdots & L_{g_{m0}}L_{\bar{f}_0}^{r_2-1}(\nu_2)\\
\vdots & \vdots & & \vdots \\
L_{g_{10}}L_{\bar{f}_0}^{r_m-1}(\nu_m) & L_{g_{20}}L_{\bar{f}_0}^{r_m-1}(\nu_m) & \cdots & L_{g_{m0}}L_{\bar{f}_0}^{r_m-1}(\nu_m)
\end{array}\right],
\end{align*}
\end{footnotesize}
and the Lie derivative of the functions $\nu_i$ with respect to the vector fields $\bar{f}$ and $g_k$ are given by
\begin{footnotesize}
\begin{align*}
L_{\bar{f}} \nu_i=&\frac{\partial \nu_i (x)}{\partial x} \bar{f} ,~ L_{\bar{f}}^j\nu_i=L_{\bar{f}} (L_{\bar{f}}^{j-1}\nu_i (x))
,~ L_{g_k }(\nu_i)=\frac{\partial \nu_i (x)}{\partial x}g_k.
\end{align*}
\end{footnotesize}
Note that in equation (\ref{eqdiffoutputlqr}), we have deliberately lumped the uncertainties at the end of a chain of integrators. This is because the uncertainties in $y^{1}_i,\cdots,y^{r_i-1}_i$ will be included in the diffeomorphism, which will be defined in the sequel. This definition of the diffeomorphism is in contrast to \cite{CDC01}, where the uncertainties in $y^{1}_i,\cdots,y^{r_i-1}_i$ are assumed to be zero; i.e. they satisfy a generalized matching condition \cite{NBSS}.
The feedback control law
\begin{equation}
\label{equ}
u=-g_*(x)^{-1}f_*(x)+g_*(x)^{-1}v,
\end{equation}
partially linearizes the input-output map (\ref{eqdiffoutputlqr}) in the presence of uncertainties as follows:
\begin{align}
\label{eqsdiffoutputm}
&y^{r_i}_*=\underbrace{\left[
\begin{array}{c}
v_1\\
\vdots\\
v_m
\end{array}\right]}_\text{Nominal part}+
\underbrace{\left[
\begin{array}{c}
\Delta W_1^{r_1}(x,u,p)\\
\vdots\\
\Delta W_m^{r_m}(x,u,p)
\end{array}\right]}_\text{Uncertainty part},
\end{align}
where
$\Delta W_i^{r_i}(x,u,p)=L_{\Delta \bar{f}}^{r_i}( \nu_i)+\sum\limits_{k=1}^m L^{r_i-1}_{\Delta g_k}[L_{\bar{f}}( \nu_i)]u_k$, $y_*=[y_1^{r_1} \cdots y_m^{r_m}]^T$, and $v=[v_1 \cdots v_m]^T$ is the new control input vector. Furthermore, we define an uncertainty vector \begin{small}$\Delta W_i$\end{small} which represents the uncertainty in each derivative of the $i^{th}$ regulated output as
\begin{equation}
\label{eqWi}
\begin{split}
\Delta W_i(x,u,p)
&=\left[\begin{array}{c}
L_{\Delta \bar{f}}(\nu_i)\\
L_{\Delta \bar{f}}^{2}(\nu_i)\\
\vdots\\
L_{\Delta \bar{f}}^{r_i}( \nu_i)+\sum\limits_{k=1}^m L^{r_i-1}_{\Delta g_k}[L_{\bar{f}}( \nu_i)]u_k
\end{array}\right]\\&=\left[\begin{array}{c}
\Delta W_i^{1}(x,u,p)\\
\Delta W_i^{2}(x,u,p)\\
\vdots\\
\Delta W_i^{r_i}(x,u,p)
\end{array}\right],
\end{split}
\end{equation}
and write $y_i$ for $i=1,2,\cdots,m$ as given below.
\begin{small}
\begin{equation}
\label{eqyi}
\left[\begin{array}{c}
y_i^1\\
y_i^2\\
\vdots\\
y_i^{r_i}
\end{array}\right]=\left[\begin{array}{c}
0\\
0\\
\vdots\\
v_i
\end{array}\right]+\left[\begin{array}{c}
\Delta W_i^{1}(x,u,p)\\
\Delta W_i^{2}(x,u,p)\\
\vdots\\
\Delta W_i^{r_i}(x,u,p)
\end{array}\right].
\end{equation}
\end{small}
Let us define a nominal diffeomorphism similar to the one defined in \cite{Rehman_ASCC01} for each partially linearized system in (\ref{eqyi}) for $i=1,\cdots, m$ as given below:
\begin{equation}
\label{eqdiffm}
\chi_i=T_i(x,p_0)=\left[\begin{array}{ccccc}
\int (y_i-y_{ic}) & y_i-y_{ic} & y^{1}_i & \cdots & y^{r_i-1}_i\end{array}\right]^{T}.
\end{equation}
Using the diffeomorphism (\ref{eqdiffm}) and system (\ref{eqyi}), we obtain the following:
\begin{equation}
\label{eqfullBvsky}
\dot{\chi}=A{\chi}+B v+\Delta \bar{W}(\chi,v,p),\quad\quad
\end{equation}
where $\chi(t)=[\chi_1^T(t),\cdots,\chi_m^T(t)]^T\in \mathbb{R}^{\bar{n}}$ ($\bar{n}=n+m$), $v(t)=[v_1~v_2 \cdots v_m]^T\in\mathbb{R}^m$ is the new control input vector, $ \Delta \bar{W}(\chi,v,p)=[\Delta \bar{W}_1^T(.), \Delta \bar{W}_2^T(.),~ \cdots ~, \Delta \bar{W}_m^T(.)]^T$ is a transformed version of $\Delta W (x,u,p)$ using (\ref{eqdiffm}) and $\Delta\bar{W}_i(.)=[w_i^{(1)}(.), w_i^{(2)}(.),~ \cdots,~ w_i^{(r_i)}(.)]^T$ for $i=1,2,~\cdots,~m$. Also,
\[
A=\left[
\begin{array}{ccc}
A_1& \dots & 0\\
\vdots & \ddots & \vdots \\
0 & \dots & A_m
\end{array}\right];
~~B=\left[
\begin{array}{cccc}
\bar{B}_1& \dots & 0\\
\vdots & \ddots & \vdots \\
0 & \dots & \bar{B}_m
\end{array}\right],
\]
where
\[
A_i=\left[\begin{array}{cccccc}
0 & 1 & 0 & \cdots & 0 & 0\\
0 & 0 & 1 & \cdots & 0 & 0\\
\vdots & &\vdots & &\vdots &\\
0 & 0 & 0 & 0 & 0 & 0\\
\end{array}\right],\quad
\bar{B}_i=\left[\begin{array}{c}
0\\
\vdots
\\
0\\
1\\
\end{array}\right].
\]
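For illustration, a short Python sketch (our own, assuming NumPy and SciPy are available) that assembles the block-diagonal pair $(A,B)$ above from the relative degrees $r=[r_1,\cdots,r_m]$; each block has size $r_i+1$ because of the integral state added in (\ref{eqdiffm}):
\begin{verbatim}
import numpy as np
from scipy.linalg import block_diag

def brunovsky_blocks(r):
    # One integrator chain of length r_i + 1 per output
    # (the +1 accounts for the integral state in T_i).
    As, Bs = [], []
    for ri in r:
        ni = ri + 1
        Ai = np.diag(np.ones(ni - 1), k=1)   # superdiagonal of ones
        bi = np.zeros((ni, 1)); bi[-1, 0] = 1.0
        As.append(Ai); Bs.append(bi)
    return block_diag(*As), block_diag(*Bs)

A, B = brunovsky_blocks([3, 4])   # e.g. the AHFV case r = [3, 4]
print(A.shape, B.shape)           # (9, 9) (9, 2)
\end{verbatim}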
In our previous work, these uncertainty terms were linearized at a steady state operating point and all the higher order terms in the states, controls and parameters were ignored in order to obtain a fully linearized form for (\ref{eqfullBvsky}). In this paper, we adopt a different approach to the linearization of the uncertain nonlinear terms in (\ref{eqfullBvsky}). Here, we perform the linearization of $\Delta \bar{W}(\chi,v,p)$ using a generalized mean value theorem \cite{Lin_algebra, vector_valued_mean} such that no higher order terms exist.
\begin{theorem}\label{th_mean}
Let $\bar{w}_i^{(j)}:\mathbb{R}^{\bar{n}} \rightarrow \mathbb{R}$ be a differentiable mapping on $\mathbb{R}^{\bar{n}}$ with a Lipschitz continuous gradient $\nabla \bar{w}_i^{(j)}$. Then for given $\chi$ and $\chi(0)$ in $\mathbb{R}^{\bar{n}}$, there exists a vector $c_i=\chi(0)+\bar{t}(\chi-\chi(0))$ with $\bar{t}\in [0,1]$, such that
\begin{equation}
\label{eqvect_mean}
\bar{w}_i^{(j)}(\chi)-\bar{w}_i^{(j)}(\chi(0))=\nabla \bar{w}_i^{(j)}(c_i)\cdot(\chi-\chi(0)).
\end{equation}
\end{theorem}
\startproof
For proof, see \cite{vector_valued_mean}.
\finishproof
We can apply Theorem \ref{th_mean} to the nonlinear uncertain part of (\ref{eqfullBvsky}). Let us define a hyper-rectangle
\begin{equation}
\mathfrak{B}=\{\left[
\begin{array}{c}
\chi\\
v
\end{array}\right]:\begin{array}{c}
\underline{\chi_i}\leq\chi_i\leq\bar{\chi_i}\\
\underline{v_i}\leq v_i\leq\bar{v_i}
\end{array}\},
\end{equation}
where $\underline{\chi_i}$ and $\underline{v_i}$ denote the lower bounds and $\bar{\chi_i}$ and $\bar{v_i}$ denote the upper bounds on the new states and inputs respectively. For this purpose, the gradient of $w_i^{(j)}(.)$ is found by differentiating it with respect to $\chi$ and $v$ at an arbitrary operating point $c_{ij}=(\tilde{\chi}, ~ \tilde{v}, ~ \tilde{p})$ for $i=1,2,\cdots,m$ and $j=1,2,\cdots, r_i$, where $\left[\tilde{\chi}^T~~ \tilde{v}^T\right]^{T}\in\mathfrak{B}$ and $\tilde{p}\in\Theta$. We assume $w_i^{(j)}(\chi_0,v_0,p_0)=0$, $\chi(0)=0$, and $v(0)=0$, and write $w_i^{(j)}(.)$ as follows:
\begin{equation}
w_i^{(j)}(\chi,v,p) = {w'}_i^{(j)}(c_{ij})\cdot[\chi^T \quad v^T]^T.
\end{equation}
Then $\Delta \bar{W}(.)$ can be written as
\begin{equation}
\label{eqbarW}
\Delta\bar{W}(.)= \Phi \left[
\begin{array}{c}
\chi\\
v
\end{array}\right],
\end{equation}
where
\[
\Phi=\left[
\begin{array}{c}
{w'}_{1}^{(1)}(c_{11}) \\
\vdots\\
{w'}_{1}^{(r_1)}(c_{1r_1}) \\
\vdots\\
{w'}_{m}^{(1)}(c_{m1}) \\
\vdots\\
{w'}_{m}^{(r_m)}(c_{m r_m})
\end{array}\right].
\]
\subsection{Linearized model with structured uncertainty representation}\label{sec:structure_model}
The equation (\ref{eqfullBvsky}) can be written in a linearized form using (\ref{eqbarW}). Note that the matrix $\Phi$ is unknown. However, it is possible to write bounds on each term ${w'}_{i}^{(j)}(c_{ij})$ in $\Phi$ individually and represent them in a structured form. For this purpose, we define each individual bound as follows:
\begin{equation}
\label{eqrhos}
\begin{split}
\hat{\rho}_{1}&=\max_{c_{11}}(\Vert {w'}_{1}^{(1)}(c_{11}) \Vert),\\
\hat{\rho}_{2}&=\max_{c_{12}}(\Vert {w'}_{1}^{(2)}(c_{12}) \Vert), \\
&\vdots\\
\hat{\rho}_{r_1}&=\max_{c_{1r_1}}(\Vert {w'}_{1}^{(r_1)}(c_{1r_1})\Vert), \\
\hat{\rho}_{r_1+1}&=\max_{c_{21}}(\Vert {w'}_{2}^{(1)}(c_{21}) \Vert), \\
&\vdots\\
\hat{\rho}_{r_1+r_2+\cdots+r_m}&=\max_{c_{m r_m }}(\Vert {w'}_{m}^{(r_m)}(c_{m r_m }) \Vert).
\end{split}
\end{equation}
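In practice, the bounds in (\ref{eqrhos}) can be approximated numerically. The sketch below is our illustration: \texttt{grad\_w} is a hypothetical user-supplied gradient function, and grid sampling only approximates the true maximum over $\mathfrak{B}\times\Theta$.
\begin{verbatim}
import numpy as np
from itertools import product

def estimate_rho(grad_w, chi_bounds, v_bounds, p_bounds, n_grid=5):
    # grad_w(chi, v, p) -> gradient row vector w'(c) of one w_i^(j).
    # *_bounds are lists of (lower, upper) pairs defining B and Theta.
    axes = [np.linspace(lo, hi, n_grid)
            for (lo, hi) in chi_bounds + v_bounds + p_bounds]
    nchi, nv = len(chi_bounds), len(v_bounds)
    rho = 0.0
    for point in product(*axes):
        chi = np.array(point[:nchi])
        v = np.array(point[nchi:nchi + nv])
        p = np.array(point[nchi + nv:])
        rho = max(rho, np.linalg.norm(grad_w(chi, v, p)))
    return rho
\end{verbatim}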
Using the definition in (\ref{eqrhos}) and (\ref{eqbarW}), the model (\ref{eqfullBvsky}) can be rewritten as
\begin{footnotesize}
\begin{equation}
\label{eqgforms}
\begin{split}
\dot {\chi}(t) &=(A+\tilde{C}_{1} \Delta_{1} \tilde{K}_{1}+\tilde{C}_{2} \Delta_{2} \tilde{K}_{2}+\cdots +\tilde{C}_{r_1} \Delta_{r_1} \tilde{K}_{r_1}+\cdots\\
&+\tilde{C}_{\bar{n}} \Delta_{\bar{n}} \tilde{K}_{\bar{n}})\chi(t)
+(B+\tilde{C}_{1} \Delta_{1} \tilde{G}_{1}+\tilde{C}_{2} \Delta_{2} \tilde{G}_{2}+\cdots \\
&+\tilde{C}_{r_1} \Delta_{r_1} \tilde{G}_{r_1}+\cdots
+\tilde{C}_{\bar{n}} \Delta_{\bar{n}} \tilde{G}_{\bar{n}}) v(t),
\end{split}
\end{equation}
\end{footnotesize}
where $\tilde{C}_{k}$ for $k=1,2,\cdots,\bar{n}$ is a $\bar{n} \times 1$ vector whose $k$th entry is one and the other entries are zeros, $\tilde{K}_{k}$ for $k=1,2,\cdots,\bar{n}$ is a $1 \times \bar{n}$ vector whose $k$th entry is $\hat{\rho}_{k}$ and the other entries are zero,
$\tilde{G}_{k}=0$ for all $k\notin\{r_1,\,r_1+r_2,\,\cdots,\,r_1+r_2+\cdots+r_m\}$, and the remaining $\tilde{G}_{(.)}$ are $1 \times m$ vectors defined as follows:
\begin{footnotesize}
\begin{equation}
\label{eqGs}
\begin{array}{c}
\tilde{G}_{r_1}=[\hat{\rho}_{r_1}~ \hat{\rho}_{r_1} \cdots \hat{\rho}_{r_1}],\\
\tilde{G}_{r_1+r_2}=[\hat{\rho}_{r_1+r_2}~ \hat{\rho}_{r_1+r_2} \cdots \hat{\rho}_{r_1+r_2}],\\
\vdots\\
\tilde{G}_{r_1+r_2+\cdots+r_m}=[\hat{\rho}_{r_1+r_2+\cdots+r_m}~ \hat{\rho}_{r_1+r_2+\cdots+r_m} \cdots \hat{\rho}_{r_1+r_2+\cdots+r_m}],
\end{array}
\end{equation}
\end{footnotesize}
and $\Vert \Delta_{k} \Vert < 1$.
Using the above definitions of variables, we will write the system in a general MIMO form as given below:
\begin{equation}
\label{eqggforms}
\begin{split}
\dot {\chi}(t) &=A\chi(t)+Bv(t)+\sum^{\bar{n}}_{k=1} \tilde{C}_k\zeta_k(t);\\
z_k(t)&=\tilde{K}_k{\chi(t)}+\tilde{G}_kv(t);\quad k=1,2,\cdots,\bar{n},\\
\end{split}
\end{equation}
where $\chi(t)\in \mathbb{R}^{\bar{n}}$ is the state; $\zeta_k (t)=\Delta_{k}[\tilde{K}_k\chi+\tilde{G}_k v] \in \mathbb{R}$ is the uncertainty input, $z_k(t)\in \mathbb{R}$ is the uncertainty output, $v(t)\in\mathbb{R}^m$ is the new control input vector and $y(t)\in \mathbb{R}^m$ is the measured output vector.
\begin{theorem}
\label{th1m}
Consider the nonlinear uncertain system (\ref{eqsystemlqr}) with vector relative degree $\{r_1,\cdots,r_m\}$ at $x=x(0)$. Suppose also that $\bar{f}(x(0),p)=0$ and $\nu(x(0))=0$. There exists a feedback control law of the form (\ref{equ}) and a coordinate transformation (\ref{eqdiffm}), defined locally around $x(0)$, transforming the nonlinear system into an equivalent linear controllable system (\ref{eqggforms}) with uncertainty norm bound $\hat{\rho}$ in a certain domain of attraction if
\begin{enumerate}
\item $L_{\Delta \bar{f}}L_{\bar{f}}^jh\neq0$ and $L_{\Delta g}L_{\bar{f}}^jh\neq0$ for all $0\leq j \leq r_i-1$,
\item $r_1+r_2+\cdots+r_m=n$,
\item the matrix $\beta =[g_*(x,p_0)]^{-1}$ exists,
\item the uncertainty satisfies $\vert \Delta \vert \leq 1 \quad \forall {\chi}\in \mathfrak{B}, {v}\in \mathfrak{B} \text{, and}~ p \in \Theta$.
\end{enumerate}
\end{theorem}
\startproof
The proof follows directly from the form of the feedback control law (\ref{equ}), which cancels all the nominal nonlinearities, and from the linearization of the remaining uncertain nonlinear terms using the generalized mean value theorem at $\left[\tilde{\chi}~ \tilde{v}\right]^T\in\mathfrak{B}$ and $\tilde{p}\in \Theta$. The generalized mean value theorem allows us to write any nonlinear function as an equivalent linear function, tangent to the nonlinear function at some given point. Finally, it is straightforward to write the entire uncertain nonlinear system (\ref{eqsystemlqr}) in the linear controllable form (\ref{eqggforms}) by finding the maximum norm bound $\hat{\rho}$ in (\ref{eqrhos}) on the linearized uncertain terms in the region under consideration, $\left[\tilde{\chi}~ \tilde{v}\right]^T\in\mathfrak{B}$ and ${p}\in \Theta$.
\finishproof
\section{AHFV Example}\label{sec:VM}
\subsection{Vehicle Model}
The nonlinear equations of motion of an AHFV used in this study are taken from the work of Bolender et al.\ \cite{BD01} and the descriptions of the coefficients are taken from Sigthorsson and Serrani \cite{LPV01}. The equations of motion are given below:
\begin{small}
\begin{equation*}
\label{eqn1}
\dot {V}=\frac{T \cos \alpha -D}{m}- g \sin\gamma,~~
\dot {\gamma }=\frac{L+T\sin \alpha }{mV}-\frac{g \cos \gamma }{V}
\end{equation*}
\begin{equation}
\label{eqn2}
\dot {h}=V\sin \gamma,\quad
\dot {\alpha }=Q-\dot {\gamma },~~
\dot {Q}=M_{yy} /I_{yy}
\end{equation}
\begin{equation*}
\label{eqn3}
\ddot{\eta_i}=-2\zeta_{m} w_{m,i} \dot{\eta_i}-w_{m,i}^2 \eta_i+N_i,\qquad i=1,2,3
\end{equation*}
\end{small}
See \cite{LPV01,BD01} for a full description of the variables in this model. The forces and moments in the actual nonlinear model are intractable and do not give a closed-form representation of the relationship between the control inputs and the controlled outputs. In order to obtain tractable expressions for the purpose of control design, these forces and moments are replaced with curve-fitted approximations in \cite{LPV01}, which leads to a curve-fitted model (CFM). The CFM has been derived by assuming a flat earth and unit vehicle depth; it retains the relevant dynamical features of the actual model and also offers the advantage of being analytically tractable \cite{LPV01}. The approximations of the forces and moments are given as follows in \cite{LPV01}:
\begin{small}
\begin{equation}
L\approx \bar{q} S C_L (\alpha ,\delta_e,\delta_c,\Delta \tau_1,\Delta \tau_2 ),
\end{equation}
\begin{equation}
D\approx \bar{q} S C_D (\alpha ,\delta_e,\delta_c,\Delta \tau_1,\Delta \tau_2 ),
\end{equation}
\begin{equation}
M_{yy} \approx z_T T+\bar{q} S \bar {c} C_M(\alpha ,\delta_e,\delta_c,\Delta \tau_1,\Delta \tau_2 ),
\end{equation}
\begin{equation}
T\approx \bar{q} [\phi C_{T,\phi}(\alpha,\Delta \tau_1,M_{\infty})+C_T(\alpha,\Delta \tau_1,M_{\infty},A_d)],
\end{equation}
\begin{equation}
N_i\approx\bar{q} C_{N_i} [\alpha ,\delta_e,\delta_c,\Delta \tau_1,\Delta \tau_2],\quad i=1,2,3.
\end{equation}
\end{small}
The coefficients obtained from fitting the curves are given as follows. These coefficients are obtained by assuming that the states and inputs are bounded, and they are only valid over the given ranges. Here, we remove the function arguments for the sake of brevity:
\begin{small}
\[
C_L=C_L^\alpha \alpha+C_L^{\delta_e} \delta_e+C_L^{\delta_c} \delta_c+C_L^{\Delta\tau_1}\Delta\tau_1+C_L^{\Delta\tau_2}\Delta\tau_2+C_L^0,\]
\[
C_M=C_M^\alpha \alpha+C_M^{\delta_e} \delta_e+C_M^{\delta_c} \delta_c+C_M^{\Delta\tau_1}\Delta\tau_1+C_M^{\Delta\tau_2}\Delta\tau_2+C_M^0,
\]
\begin{align*}
C_D =& C_D^{(\alpha+\Delta\tau_1)^2} (\alpha+\Delta\tau_1)^2+C_D^{(\alpha+\Delta\tau_1)} (\alpha+\Delta\tau_1)\\
&+C_D^{\delta_e^2} \delta_e^2+C_D^{\delta_e} \delta_e+C_D^{\delta_c^2} \delta_c^2+C_D^{\delta_c} \delta_c
+C_D^{\alpha\delta_e}\alpha\delta_e\\
&+C_D^{\alpha\delta_c}\alpha\delta_c+C_D^{\delta\tau_1} \delta\tau_1+C_D^0,
\end{align*}
\begin{align*}
C_{T,\phi}&=C_{T,\phi}^\alpha \alpha+C_{T,\phi}^{\alpha M_{\infty}^{-2}} \alpha M_{\infty}^{-2}+ C_{T,\phi}^{\alpha \Delta\tau_1} \alpha \Delta\tau_1\\
&+C_{T,\phi}^{M_{\infty}^{-2}} M_{\infty}^{-2}+C_{T,\phi}^{{\Delta\tau_1}^2} {\Delta\tau_1}^2+C_{T,\phi}^{{\Delta\tau_1}}{\Delta\tau_1}+C_{T,\phi}^0,
\end{align*}
\[
C_T=C_T^{A_d}A_d+C_T^\alpha \alpha+C_{T}^{M_{\infty}^{-2}} M_{\infty}^{-2}+C_{T}^{{\Delta\tau_1}}{\Delta\tau_1}+C_T^0,
\]
\[
C_{N_i}=C_{N_i}^\alpha \alpha+C_{N_i}^{\delta_e} \delta_e+C_{N_i}^{\delta_c} \delta_c+C_{N_i}^{\Delta\tau_1}\Delta\tau_1+C_{N_i}^{\Delta\tau_2}\Delta\tau_2+C_{N_i}^0.
\]
\end{small}
Here, $M_{\infty}$ is the free-stream Mach number, and $\bar{q}$ is the dynamic pressure, which are defined as follows:
\begin{small}
\begin{equation}
\bar{q}=\frac{\rho(h) V^2}{2},\quad
M_{\infty}=\frac{V}{M_0}.
\end{equation}
\end{small}
Also, $\rho(h)$ is the altitude dependent air-density and $M_0$ is the speed of sound at a given altitude and temperature.
The nonlinear equations of motion have five rigid body states; i.e., velocity $V$, altitude $h$, angle of attack $\alpha$, flight path angle $\gamma$, and pitch rate $Q$. The CFM also includes vibrational modes, represented by the generalized modal coordinates $\eta_i$, which contribute six additional flexible states. There are four inputs: the diffuser-area-ratio $A_d$, the throttle setting or fuel equivalence ratio $\phi$, the elevator deflection $\delta_e$, and the canard deflection $\delta_c$. In this example, tracking of velocity and altitude will be considered.
\subsection{Feedback linearization of the AHFV nonlinear model}\label{sec:CLUM}
\subsubsection{Simplification of the CFM}\label{sec:SM}
The CFM contains input coupling terms in the lift and drag coefficients. We simplify the CFM in a robust way as presented in Section \ref{sec:FBlqr} so that the simplified model approximates the real model and the input term vanishes in the low order derivatives during feedback linearization.
In the simplification process, we will first remove the flexible states as they have stable dynamics. A canard is introduced in the AHFV model by Bolender and Doman \cite{BD02} to cancel the elevator-lift coupling using an ideal interconnect gain $k_{ec}=-\frac{C_L^{\delta_e}}{C_L^{\delta_c}}$ which relates the canard deflection to elevator deflection ($\delta_c=k_{ec} \delta_e$). In practice, an ideal interconnect gain is hard to achieve and thus exact cancellation of the lift-elevator coupling is not possible. In the simplified model, we assume that the interconnect gain is uncertain with a bound on its magnitude and it also satisfies an IQC \cite{IP}. The drag coefficient is also affected due to the presence of elevator and canard coupling terms in the corresponding expression. We also model this coupling as uncertainty.
The simplified expressions for lift, moment, drag, and thrust coefficients now can be written as follows:
\begin{footnotesize}
\begin{equation}
\begin{split}
C_L&=C_L^\alpha \alpha+C_L^0+\Delta C_l,\\
C_M&=C_M^\alpha \alpha+[C_M^{\delta_e}-C_M^{\delta_c}(\frac{C_L^{\delta_e}}{C_L^{\delta_c}})] \delta_e+C_M^0,\\
C_D &= C_D^{(\alpha+\Delta\tau_1)^2} (\alpha)^2+C_D^{(\alpha+\Delta\tau_1)} (\alpha)+C_D^0+\Delta C_d,\\
C_{T,\phi}&=C_{T,\phi}^\alpha \alpha+C_{T,\phi}^{\alpha M_{\infty}^{-2}} \alpha M_{\infty}^{-2}+C_{T,\phi}^{M_{\infty}^{-2}} M_{\infty}^{-2}+C_{T,\phi}^0,\\
C_T&=C_T^{A_d}A_d+C_T^\alpha \alpha+C_{T}^{M_{\infty}^{-2}} M_{\infty}^{-2}+C_T^0,
\end{split}
\end{equation}
\end{footnotesize}
where $\Delta C_l$ is the uncertainty in the lift coefficient $C_L(\alpha)$ due to the uncertain interconnect gain and $\Delta C_d$ is the uncertainty in the drag coefficient $C_D(\alpha)$ due to the input coupling terms. Furthermore, in order to obtain full relative degree for the purpose of feedback linearization, we dynamically extend the system by introducing second order actuator dynamics into the fuel equivalence ratio input as follows:
\begin{small}
\begin{equation}
\label{eqfuel}
\ddot {\phi }=-2\zeta \omega _n \dot {\phi }-\omega _n^2 \phi +\omega
_n^2 \phi _c.
\end{equation}
\end{small}
After this extension we have two more states, $\phi$ and $\dot{\phi}$, and the sum of the vector relative degrees is equal to the order of the system; i.e. $n=7$, thus satisfying one of the conditions for feedback linearization \cite{IS}.
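As a small illustration of the dynamic extension, the following Python sketch (our own; the damping ratio $\zeta$ and natural frequency $\omega_n$ below are placeholder values, not vehicle data) propagates the actuator states of (\ref{eqfuel}) by one Euler step:
\begin{verbatim}
def actuator_rhs(phi, phi_dot, phi_c, zeta=0.7, wn=20.0):
    # Second-order actuator dynamics of Eq. (eqfuel).
    phi_ddot = -2.0 * zeta * wn * phi_dot - wn**2 * phi + wn**2 * phi_c
    return phi_dot, phi_ddot

# One forward-Euler step of the two augmented states (phi, phi_dot):
dt, phi, phi_dot = 1e-3, 0.4388, 0.0
d1, d2 = actuator_rhs(phi, phi_dot, phi_c=0.5)
phi, phi_dot = phi + dt * d1, phi_dot + dt * d2
\end{verbatim}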
\begin{table}
\caption{States, Inputs and Selected Physical Parameters at the Trim Condition}
\label{tab01}
\centering
\begin{tabular}{l c}
\hline\hline
Vehicle Velocity $V$ & $7702.08$ ft/sec\\
Altitude $h$& 85000 ft \\
Fuel-to-air ratio $\phi$ & $0.4388$ \\
Pitch Rate $Q$ & $0$ \\
Angle of Attack $\alpha$ & $-0.0134$ rad \\
Flight Path Angle $\gamma$ & $0$\\
\hline \hline
Elevator Deflection $\delta_e$& $2.005$ deg\\
Canard Deflection $\delta_c$& $2.79$ deg \\
Diffuser Area ratio $A_d$ & $1$ \\
\hline \hline
Reference Area $S$ & $17$ ft$^2\cdot$ft$^{-1}$\\
Mean Aerodynamic Chord $\bar{c}$ & $17$ ft\\
Air Density $\rho$ & $6.6\times 10^{-5}$~slug/ft$^3$\\
Mass with 50\% fuel level $m$ & $147.9$ slug$\cdot$ft$^{-1}$ \\
Moment of Inertia $I_{yy}$ & $5.0\times 10^{5}$~slug$\cdot$ft$^2\cdot$ft$^{-1}$\\
\hline \hline
\end{tabular}
\end{table}
We use Theorem \ref{th1m} to linearize the AHFV dynamics. The outputs to be regulated are selected as velocity $V$ and altitude $h$ using two inputs, elevator deflection $\delta_e$, and fuel equivalence ratio $\phi_c$. Note that the canard deflection $\delta_c$ is a function of the elevator deflection $\delta_e$ and they are related via an interconnect gain. Also, we fix the diffuser area ratio $A_d$ to unity.
The new simplified model consists of seven rigid states $x=\left[\begin{array}{ccccccccc}
V & h & \gamma & \alpha & \phi & \dot{\phi} & Q
\end{array}\right]^T$ and can be represented by a general form as follows:
\begin{small}
\begin{align}
\label{equsystemm}
\dot{x}(t)&=\bar{f}(x,p)+\sum\limits_{k=1}^2{g_k(x,p)u_k};\quad
y_i(t)=\nu_i(x),~i=1,2.
\end{align}
\end{small}
where the control vector $u$ and output vector $y$ are defined as
\begin{small}
\[
u=[u_1, u_2]^T=\left[ {\delta _e, \phi _c } \right]^T, y=[y_1, y_2]^T=\left[ {V,h}
\right]^T.
\]
\end{small}
The following set of uncertain parameters are considered for the development of a linearized uncertainty model:
\begin{small}
\begin{align}
p=[
&C_L^\alpha \quad C_M^{\delta_c} \quad C_{T,\phi}^{\alpha M_{\infty}^{-2}} \quad C_{T,\phi}^{M_{\infty}^{-2}} \quad \Delta C_l \quad \Delta C_d \quad \Delta C_T \nonumber\\
&\Delta C_M \quad \Delta C_{T,\phi}]^T \in \mathbb{R}^{9}.
\end{align}
\end{small}
We assume that $p \in \Theta$, where $\Theta=\{p\in \mathbb{R}^{9}~ |~ 0.9 p_{i0} \leq p_i \leq 1.1 p_{i0}~~ \text{for}~ i=1,\cdots,9\}$.
In order to get the linearized uncertain model for the uncertain nonlinear AHFV model, the output $V$ and the output $h$ are differentiated three times and four times respectively using the Lie derivative:
\begin{small}
\begin{equation}
\label{equv3dotm}
\left[
\begin{array}{c}
\stackrel{\dots}{V} \\
h^{4}
\end{array}\right]\underbrace{
=f_*(x,p_0)+
g_*(x,p_0)
\left[
\begin{array}{c}
\delta _e \\
\phi _c
\end{array}\right]}_\text{Nominal nonlinear part}+\underbrace{
\left[\begin{array}{c}
\Delta\stackrel{\dots}{V}(x,u,p) \\
\Delta {h}^4(x,u,p)\end{array}\right]}_\text{Uncertain nonlinear part},
\end{equation}
\end{small}
where
\begin{footnotesize}
\begin{equation*}
f_*(x,p_0)=\left[
\begin{array}{c}
L_f^3 V\\
L_f^4 h
\end{array}\right], \quad
g_*(x,p_0)=\left[\begin{array}{cc}
{L_{g_1}(L_f^2 V)} & {L_{g_2}(L_f^2 V)}\\
{L_{g_1}(L_f^3 h)} & {L_{g_2}(L_f^3 h)}
\end{array}\right],
\end{equation*}
\end{footnotesize}
and $\Delta\stackrel{\dots}{V}(x,u,p)$ and $\Delta {h}^4(x,u,p)$ are the uncertainties in $\stackrel{\dots}{V}$ and ${h}^4$ respectively.
The application of the control law (\ref{equ}) yields the following:
\begin{small}
\begin{equation}
\label{equfblin}
\left[\begin{array}{c}
\stackrel{\dots}{V} \\
h^{4}
\end{array}\right]=\underbrace{\left[\begin{array}{c}
v_1 \\
v_2
\end{array}\right]}_\text{Nominal linear part}+\underbrace{
\left[\begin{array}{c}
\Delta\stackrel{\dots}{V}(x,u,p) \\
\Delta {h}^4(x,u,p)\end{array}\right]}_\text{Uncertain nonlinear part}.
\end{equation}
\end{small}
Also, by using the fact that there are no uncertainty terms in $\dot{V}$ and $\dot{h}$, we can write the linearized input-output map for the AHFV model using (\ref{eqyi}) as follows:
\begin{small}
\begin{equation}
\label{equyfblin}
\left[\begin{array}{c}
\dot{V}\\
\ddot{V}\\
\stackrel{\dots}{V} \\
\dot{h}\\
\ddot{h}\\
\stackrel{\dots}{h} \\
h^{4}
\end{array}\right]=\left[\begin{array}{c}
0\\
0\\
v_1 \\
0\\
0\\
0\\
v_2
\end{array}\right]+
\left[\begin{array}{c}
0\\
\Delta\ddot{V}\\
\Delta\stackrel{\dots}{V}(x,u,p) \\
0\\
\Delta\ddot{h}\\
\Delta\stackrel{\dots}{h}(x,u,p) \\
\Delta {h}^4(x,u,p)\end{array}\right].
\end{equation}
\end{small}
Let us define a diffeomorphism for each subsystem as in (\ref{eqdiffm}), which maps the original state vector $x$ to the new vectors $\xi$ and $\eta$ respectively, as follows:
\begin{small}
\begin{equation}
\xi=T_1(x,p_0,V_c)
,\quad
\eta=T_2(x,p_0,h_c),
\end{equation}
where
\begin{align*}
&T_1(x,p,V_c)=\left[
\begin{array}{cccc}
\int_0^{t}{(V(\tau)-V_c)}d\tau & V-V_c & \dot{V} & \ddot{V}\end{array}\right]^T,\\
&T_2(x,p,h_c)=\left[
\begin{array}{ccccc}
\int_0^{t}{ (h(\tau)-h_c)}d\tau & h-h_c & \dot{h} & \ddot{h} & \stackrel{\dots}{h}\end{array}\right]^T,
\end{align*}
\end{small}
and $V_c$ and $h_c$ are the desired command values for the velocity and altitude respectively.
We write each diffeomorphism as follows:
\begin{small}
\begin{equation}
\label{eqtransformx}
\chi = T(x,p_0,V_c,h_c),
\end{equation}
where $\chi= \left[\begin{array}{ccccccccc}
\xi_1 & \xi_2 & \xi_3 &\xi_4 & \eta_1& \eta_2& \eta_3& \eta_4& \eta_5 \end{array}\right]^T$, and
$T(x,p,V_c,h_c)=\left[\begin{array}{cc}
T_1(x,p,V_c) \\
T_2(x,p,h_c) \end{array}\right]$.
\end{small}
Now, we can transform the nominal part of (\ref{equyfblin}) into the new states using the transformation (\ref{eqtransformx}) and linearize the uncertainty parts of (\ref{equfblin}) using the generalized mean value theorem as follows:
\begin{small}
\begin{align}
\label{eqlfinalm}
\left[\begin{array}{c}
\dot{\xi}_1 \\
\dot{\xi}_2 \\
\dot{\xi}_3 \\
\dot{\xi}_4 \\
\dot{\eta_1}\\
\dot{\eta_2}\\
\dot{\eta_3}\\
\dot{\eta_4}\\
\dot{\eta}_5
\end{array}\right]=\left[\begin{array}{c}
0\\
0\\
0\\
v_1 \\
0\\
0\\
0\\
0\\
v_2
\end{array}\right]
+
\left[\begin{array}{c}
0\\
0\\
\Delta w_{1}(\tilde{\chi},\tilde{v},p) \\
\Delta w_{2}(\tilde{\chi},\tilde{v},p) \\
0\\
0\\
\Delta w_{3}(\tilde{\chi},\tilde{v},p) \\
\Delta w_{4}(\tilde{\chi},\tilde{v},p) \\
\Delta w_{5}(\tilde{\chi},\tilde{v},p)
\end{array}\right]\chi+
\left[\begin{array}{c}
0\\
0\\
0\\
\Delta \tilde{w}_{1}(\tilde{\chi},\tilde{v},p) \\
0\\
0\\
0\\
0\\
\Delta \tilde{w}_{2}(\tilde{\chi},\tilde{v},p)
\end{array}\right]v.
\end{align}
\end{small}
We now write equation (\ref{eqlfinalm}) in a structured form as presented in Subsection \ref{sec:structure_model}. Using (\ref{eqrhos}), (\ref{eqgforms}), and (\ref{eqGs}), we can write (\ref{eqlfinalm}) as given below:
\begin{small}
\begin{equation}
\label{eqgforms_ex}
\begin{split}
\dot {\chi}(t) &=(A+\tilde{C}_{1} \Delta_{1} \tilde{K}_{1}+\tilde{C}_{2} \Delta_{2} \tilde{K}_{2}+\cdots +\tilde{C}_{9} \Delta_{9} \tilde{K}_{9})\chi(t)\\
&+(B+\tilde{C}_{1} \Delta_{1} \tilde{G}_{1}+\tilde{C}_{2} \Delta_{2} \tilde{G}_{2}+\cdots +\tilde{C}_{9} \Delta_{9} \tilde{G}_{9}) v(t)
\end{split}
\end{equation}
\end{small}
where
\begin{footnotesize}
\[
\tilde{C}_3=\left[
\begin{array}{ccccccccc}
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0
\end{array}\right],\]
\[\tilde{C}_4=\left[
\begin{array}{ccccccccc}
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0
\end{array}\right],
\]
\[
\tilde{C}_7=\left[
\begin{array}{ccccccccc}
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0
\end{array}\right],\]
\[\tilde{C}_8=\left[
\begin{array}{ccccccccc}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0
\end{array}\right],
\]
\[
\tilde{C}_9=\left[
\begin{array}{ccccccccc}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{array}\right],\]\[
\tilde{C}_1=\tilde{C}_2=\tilde{C}_5=\tilde{C}_6=\left[
\begin{array}{ccccccccc}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{array}\right];
\]
\[
\tilde{K}_3=\left[
\begin{array}{ccccccccc}
\hat{\rho}_3 & \hat{\rho}_3 & \hat{\rho}_3 & \hat{\rho}_3 & \hat{\rho}_3 & \hat{\rho}_3 & \hat{\rho}_3 & \hat{\rho}_3 & \hat{\rho}_3
\end{array}\right],\]
\[\tilde{K}_4=\left[
\begin{array}{ccccccccc}
\hat{\rho}_4 & \hat{\rho}_4 & \hat{\rho}_4 & \hat{\rho}_4 & \hat{\rho}_4 & \hat{\rho}_4 & \hat{\rho}_4 & \hat{\rho}_4 & \hat{\rho}_4
\end{array}\right],
\]
\[
\tilde{K}_7=\left[
\begin{array}{ccccccccc}
\hat{\rho}_7 & \hat{\rho}_7 & \hat{\rho}_7 & \hat{\rho}_7 & \hat{\rho}_7 & \hat{\rho}_7 & \hat{\rho}_7 & \hat{\rho}_7 & \hat{\rho}_7
\end{array}\right],\]
\[\tilde{K}_8=\left[
\begin{array}{ccccccccc}
\hat{\rho}_8 & \hat{\rho}_8 & \hat{\rho}_8 & \hat{\rho}_8 & \hat{\rho}_8 & \hat{\rho}_8 & \hat{\rho}_8 & \hat{\rho}_8 & \hat{\rho}_8
\end{array}\right],
\]
\[
\tilde{K}_9=\left[
\begin{array}{ccccccccc}
\hat{\rho}_9 & \hat{\rho}_9 & \hat{\rho}_9 & \hat{\rho}_9 & \hat{\rho}_9 & \hat{\rho}_9 & \hat{\rho}_9 & \hat{\rho}_9 & \hat{\rho}_9
\end{array}\right],
\]
\[
\tilde{K}_1=\tilde{K}_2=\tilde{K}_5=\tilde{K}_6=\left[
\begin{array}{ccccccccc}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{array}\right];
\]
\[
\tilde{G}_4=\left[
\begin{array}{cc}
\hat{\rho}_4 & \hat{\rho}_4
\end{array}\right],\]
\[\tilde{G}_9=\left[
\begin{array}{cc}
\hat{\rho}_9 & \hat{\rho}_9
\end{array}\right],
\]
\[
\tilde{G}_1=\tilde{G}_2=\tilde{G}_3=\tilde{G}_5=\tilde{G}_6=\tilde{G}_7=\tilde{G}_8=\left[
\begin{array}{cc}
0 & 0
\end{array}\right].
\]
\end{footnotesize}
Using the above definition of the variables, we can write the system in the general MIMO form given in (\ref{eqggforms}), where $\chi(t)\in \mathbb{R}^9$ is the state, $\zeta_k (t)=\Delta_{k}[\tilde{K}_k\chi+\tilde{G}_k v] \in \mathbb{R}$ is the uncertainty input, $z_k(t)\in \mathbb{R}$ is the uncertainty output, and $v(t)\in\mathbb{R}^2$ is the new control input vector.
\subsection{Minimax LQR Control Design}
The linearized model (\ref{eqggforms}) corresponding to the AHFV uncertain nonlinear model (\ref{eqn2}) allows for the design of a minimax LQR controller for the velocity and altitude reference tracking problem. The method of designing a minimax LQR controller is given in \cite{IP}. Here, we follow the same method and propose a minimax LQR controller for the linearized system (\ref{eqggforms}). We assume that the uncertainty in the system (\ref{eqggforms}) satisfies the following IQC and that the original state vector $x$ is available for measurement.
\begin{small}
\begin{equation}
\label{eqIQC}
\int_0^{\infty}(\parallel z_j(t)\parallel^2-\parallel \zeta_j(t)\parallel^2)dt\geq-\chi^T(0)D_j\chi(0),
\end{equation}
\end{small}
where $D_j>0$ for each $j=1,\cdots,\bar{n}$ is a given positive definite matrix. The cost function selected is as given below:
\begin{equation}
\label{eqfcost}
F=\int_0^{\infty}{[\chi(t)^T Q \chi(t)+v(t)^T R v(t)]dt},
\end{equation}
where $Q=Q^T>0$ and $R=R^T>0$ are the state and control weighting matrices respectively. A minimax optimal controller can be designed by solving a game type Riccati equation
\begin{small}
\begin{align}
\label{eqARE}
&(A-BE^{-1}G^T K)^T X_\tau +X_\tau(A-BE^{-1}G^T K)\nonumber\\&+X_\tau(CC^T-BE^{-1}B^T)X_\tau
+K^T(I-GE^{-1}G^T)K=0,
\end{align}
where
\[
K=\left[\begin{array}{c}
Q^{1/2} \\
0 \\
\sqrt{\tau_1}\tilde{K}_1 \\
\vdots\\
\sqrt{\tau_9}\tilde{K}_9
\end{array}\right]
,\quad
G=\left[\begin{array}{c}
0 \\
R^{1/2} \\
\sqrt{\tau_1}\tilde{G}_1 \\
\vdots\\
\sqrt{\tau_9}\tilde{G}_9
\end{array}\right]
,\quad
E=G^{T}G,
\]
\[
C=\left[\begin{array}{ccc}
\frac{1}{\sqrt{\tau_1}}\tilde{C}_1 & \dots & \frac{1}{\sqrt{\tau_9}}\tilde{C}_9
\end{array}\right].
\]
\end{small}
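For completeness, a Python sketch of one way to compute $X_\tau$ and the controller gain is given below. This is our illustration, not the procedure of \cite{IP}; it assumes $A$, $B$, $C$, $K$ and $G$ have been assembled as NumPy arrays, that $E=G^{T}G$ is nonsingular, and that the Hamiltonian matrix below has no imaginary-axis eigenvalues (which holds only for admissible choices of the $\tau_k$):
\begin{verbatim}
import numpy as np
from scipy.linalg import schur

def minimax_lqr_gain(A, B, C, K, G):
    # Rewrite (eqARE) as Acl^T X + X Acl + X R X + Qbar = 0 and solve
    # via the stable invariant subspace of the Hamiltonian matrix.
    E = G.T @ G
    Ei = np.linalg.inv(E)
    Acl = A - B @ Ei @ G.T @ K
    R = C @ C.T - B @ Ei @ B.T
    Qbar = K.T @ (np.eye(K.shape[0]) - G @ Ei @ G.T) @ K
    n = A.shape[0]
    H = np.block([[Acl, R], [-Qbar, -Acl.T]])
    _, Z, _ = schur(H, output='real', sort='lhp')
    U1, U2 = Z[:n, :n], Z[n:, :n]
    X = U2 @ np.linalg.inv(U1)          # stabilizing solution X_tau
    G_tau = Ei @ (B.T @ X + G.T @ K)    # gain of (eqcontrllaw_mv)
    return X, G_tau
\end{verbatim}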
The weighting matrices $Q$ and $R$, and parameters $\tau_k$, for $k=1,2,\cdots,9$ are selected such that they give the minimum bound
\begin{small}
\begin{equation}
\label{eqfbond}
\text{min}[\chi^T(0)X_\tau \chi(0)+\sum_{j=1}^{9}\tau_j \chi^T(0)D_j\chi(0)],
\end{equation}
\end{small}
on the cost function (\ref{eqfcost}). The minimax LQR control law can be obtained by solving the ARE (\ref{eqARE}) for given values of the parameters as given below:
\begin{small}
\begin{equation}
\label{eqcontrllaw_mv}
v(t)=-G_\tau\chi(t),
\end{equation}
where
\[
G_\tau=E^{-1}[B^T X_\tau+G^T K]
\]
\end{small}
is the controller gain matrix. The parameters $Q$ and $R$ are selected intuitively so that the required performance can be obtained, and the $\tau_k$ for $k=1,2,\cdots,9$ correspond to the minimum bound on (\ref{eqfcost}). These parameters are given as follows:
\begin{small}
\begin{equation*}
\label{eqSW_mv}
\mathbf{Q}=\textbf{diag}\left[
\begin{array}{c}
1000, 500, 500, 100, 0.001, 100, 100, 500, 500
\end{array}\right],
\end{equation*}
\begin{equation*}
\label{eqCW_mv}
\mathbf{R}=\left[
\begin{array}{cc}
3.0 & 0\\
0 & 3.0
\end{array}\right],\quad \tau_3=547.9,
\tau_4=8.0,~~ \tau_7=4935.7,
\end{equation*}
\[
\tau_8=4935.3,~~\tau_9=3768.0.
\]
\end{small}
\subsection{Simulation Results} \label{sec:resultslqr_mv}
The closed loop nonlinear AHFV system with the minimax LQR controller (\ref{eqcontrllaw_mv}) is simulated using different sizes and combinations of uncertainties. For the sake of brevity, here we evaluate the performance of the proposed controller by using step input commands for the following three cases:
\begin{itemize}
\item[1.] Uncertain parameters equal to their nominal values, with no uncertainty.
\item[2.] Uncertain parameters $20\%$ lower than their nominal values.
\item[3.] Uncertain parameters $20\%$ larger than their nominal values.
\end{itemize}
The responses of the closed loop system given in Fig. \ref{fig:vstep_mv1} -- Fig. \ref{fig:hstep_flex_mv} show that the minimax LQR controller along with the feedback linearization law gives satisfactory performance.
\begin{figure}[htbp]
\hfill
\begin{center}
\epsfig{file=Vstep_lqrp20.eps, scale=0.38}
\caption{System response to a step change in velocity reference.}
\label{fig:vstep_mv1}
\end{center}
\end{figure}
\begin{figure}[htbp]
\hfill
\begin{center}
\epsfig{file=Vstep_control_lqrp20.eps, scale=0.38}
\caption{Control input responses corresponding to a step change in velocity reference.}
\label{fig:vstep_mv}
\end{center}
\end{figure}
\begin{figure}[htbp]
\hfill
\begin{center}
\epsfig{file=Vstep_states_lqrp20.eps, scale=0.38}
\caption{Responses of $\alpha$ and $\gamma$ for a step change in velocity reference.}
\label{fig:vstep_states_mv1}
\end{center}
\end{figure}
\begin{figure}[htbp]
\hfill
\begin{center}
\epsfig{file=Vstep_flex_lqrp20.eps, scale=0.38}
\caption{Flexible states during velocity reference tracking.}
\label{fig:vstep_flex_mv}
\end{center}
\end{figure}
\begin{figure}[htbp]
\hfill
\begin{center}
\epsfig{file=hstep_lqrp20.eps, scale=0.38}
\caption{System response to a step change in altitude reference.}
\label{fig:hstep_mv2}
\end{center}
\end{figure}
\begin{figure}[htbp]
\hfill
\begin{center}
\epsfig{file=hstep_control_lqrp20.eps, scale=0.38}
\caption{Control input responses corresponding to a step change in altitude reference.}
\label{fig:hstep_mv}
\end{center}
\end{figure}
\begin{figure}[htbp]
\hfill
\begin{center}
\epsfig{file=hstep_states_lqrp20.eps, scale=0.38}
\caption{Responses of $\alpha$ and $\gamma$ for a step change in altitude reference.}
\label{fig:hstep_states_mv3}
\end{center}
\end{figure}
\begin{figure}[htbp]
\hfill
\begin{center}
\epsfig{file=hstep_flex_lqrp20.eps, scale=0.38}
\caption{Flexible states during altitude reference tracking.}
\label{fig:hstep_flex_mv}
\end{center}
\end{figure}
\section{Conclusion}\label{sec:concl}
In this paper, a robust nonlinear tracking control scheme for a class of uncertain nonlinear systems has been proposed. The proposed method uses a robust feedback linearization approach and the generalized mean value theorem to obtain an uncertain linear model for the corresponding uncertain nonlinear system. The scheme allows for a structured uncertainty representation. In order to demonstrate the applicability of the proposed method to a real world problem, the method is applied to a tracking control problem for an air-breathing hypersonic flight vehicle. Simulation results for step changes in the velocity and altitude reference commands show that the proposed scheme works very well in this example and the tracking of velocity and altitude is achieved effectively even in the presence of uncertainties.
\section{Introduction}
One of the major goals of relativistic heavy-ion collisions is to explore the QCD phase diagram and phase boundary~\cite{StephanovPD,adams2005experimental,conservecharge0,conservecharge1}. Theoretical studies have shown the existence of a critical point (CP) at finite baryon chemical potential and temperature~\cite{CEP1, CEP2}. The CP is proposed to be characterized by a second-order phase transition, a unique property of strongly interacting matter~\cite{searchCEP1,searchCEP2,searchCEP3,searchCEP4,STARPRLMoment}.
In the thermodynamic limit, the correlation length diverges at the CP and the system becomes scale invariant and fractal~\cite{invariant1,invariant2,invariant3}. It has been shown that the density fluctuations near the QCD critical point form a distinct pattern of power-law or intermittency behavior in the matter produced in high energy heavy-ion collisions~\cite{Antoniou2006PRL,Antoniou2010PRC,Antoniou2016PRC,Antoniou2018PRD}. Intermittency is a manifestation of the scale invariance and fractality of a physical process and of the stochastic nature of the underlying scaling law~\cite{invariant3}. It can be revealed in the transverse momentum spectrum as a power-law behavior of scaled factorial moments (SFMs)~\cite{invariant1, Antoniou2006PRL}. In current high energy heavy-ion experiments at the CERN SPS~\cite{NA49SFM,NA61universe}, the NA49 and NA61 collaborations have searched for critical fluctuations by employing an intermittency analysis with various sizes of colliding nuclei. A power-law behavior has been observed in Si + Si collisions at 158A GeV~\cite{NA49SFM}. At RHIC BES-I energies, the preliminary result for a critical exponent extracted from the intermittency index shows a minimum around $\sqrt{s_\mathrm{NN}}$ = 20 - 30 GeV in central collisions~\cite{STARintermittency}. Meanwhile, several model studies have investigated the behavior of intermittency under various underlying mechanisms~\cite{CMCPLB,UrQMDLi,IJMPE,amptNPA,epos}.
For a self-similar system with intermittency, the multiplicity distribution in momentum space is expected to be associated with a strong clustering effect, indicating a remarkably structured phase-space density~\cite{invariant3,densityF}. However, the inclusive single-particle multiplicity spectra in the finite phase space of high energy collisions are significantly influenced by background effects. The multiplicity distribution is changed or distorted by conservation laws, resonance decays, statistical fluctuations, etc. It has been shown that statistical fluctuations due to the finite number of particles~\cite{SFM1} or the choice of bin sizes in momentum space~\cite{APPB} influence the measured SFMs. Therefore, it is necessary to estimate and remove these trivial effects in order to obtain a clean power-law exponent that can be compared with theoretical predictions. For this purpose, Ochs~\cite{OchsCumulative} and Bialas~\cite{BialasCumulative} have proposed to study intermittency in terms of a so-called cumulative variable, in which the single-particle density is constant. It has been shown that the cumulative variable very effectively reduces the distortions of the simple scaling law caused by a non-uniform single-particle spectrum, with no bias from the shape of the inclusive distribution. We will study how to remove the trivial background effects by the cumulative variable method in the measurement of SFMs in heavy-ion collisions.
In the study of intermittency in high energy experiments, a particle detector has a finite detection efficiency, which results from its limited capability to register the final-state particles~\cite{STAREfficiency,EfficiencyLuo}. This efficiency effect leads to a loss of particle multiplicity in an event, which changes the shape of the original event-by-event multiplicity distributions in momentum space~\cite{EfficiencyLuo,MomentOverview}. Since the calculation of SFMs is determined by the particle distributions, their values could be significantly modified by the detector efficiency, which would distort the original signal possibly induced by the CP. We should therefore recover the SFM of the original true multiplicity distribution from the experimentally measured one by applying a proper efficiency correction technique. Then the true intermittency index can be extracted from the efficiency corrected SFM.
The paper is organized as follows. A brief introduction to the UrQMD model is given in Sec. II. In Sec. III, we introduce the method of intermittency analysis using SFMs. In Sec. IV, the collision energy and centrality dependence of the SFMs are investigated with the UrQMD model in Au + Au collisions from $\sqrt{s_\mathrm{NN}}$ = 7.7 to 200 GeV. In Sec. V, we discuss the estimation and subtraction of background in the calculation of SFMs. In Sec. VI, the efficiency correction formula is derived, followed by a check of the validity of the method with the UrQMD model. Finally, we give a summary and outlook of this work.
\section{UrQMD MODEL}
The UrQMD (Ultra Relativistic Quantum Molecular Dynamics) model is a microscopic many-body model and has been extensively applied to simulate p + p, p + A, and A + A interactions at ultra-relativistic heavy-ion collision energies~\cite{MBUrQMD,SABassUrQMD,HPUrQMD}. It provides phase space descriptions of different reaction mechanisms based on the covariant propagation of all hadrons with stochastic binary scattering, color string formation and resonance decay~\cite{SABassUrQMD}. This model includes baryon-baryon, meson-baryon and meson-meson interactions with more than 50 baryon and 45 meson species. It preserves the conservation of electric charge, baryon number, and strangeness number as expected for the evolution of QCD matter. It models the phenomenon of baryon stopping, which is an essential feature of heavy-ion collisions at low beam energies. It is a well-designed transport model~\cite{MBUrQMD} for simulations over the entire available range of energies, from the SIS energy ($\sqrt{s_\mathrm{NN}}=2$ GeV) to the top RHIC energy ($\sqrt{s_\mathrm{NN}}=200$ GeV). More details about the UrQMD model can be found in Refs.~\cite{MBUrQMD,SABassUrQMD,HPUrQMD}.
The UrQMD model is a suitable simulator to estimate the non-critical contributions and other trivial background effects in the measurement of correlations and fluctuations in heavy-ion collisions. In this work, we use the UrQMD model (version 2.3) to generate Monte-Carlo event samples in Au + Au collisions at RHIC energies. The corresponding statistics are 72.5, 105, 106, 81, 133, 38 and 56 million events at $\sqrt{s_\mathrm{NN}}$ = 7.7, 11.5, 19.6, 27, 39, 62.4 and 200 GeV, respectively.
\begin{figure*}
\centering
\includegraphics[scale=0.85]{pdf/CFM-FM-UrQMD-7-200GeV.pdf}
\caption{The second-order scaled factorial moment (black circles), $F_{2}(M)$, as a function of the number of partitioned cells on a double-logarithmic scale at $\sqrt{s_\mathrm{NN}}$ = 7.7 - 200 GeV from the UrQMD model. The black lines are power-law fits. The corresponding red symbols represent the SFMs calculated by the cumulative variable method.}
\label{Fig:F2E}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale=0.85]{pdf/Centrality-FM-CFM.pdf}
\caption{The second-order scaled factorial moment (black circles) as a function of the number of partitioned cells from the most central ($0-5\%$) to the most peripheral ($60-80\%$) collisions at $\sqrt{s_\mathrm{NN}}$ = 19.6 GeV. The corresponding red symbols represent the SFMs calculated by the cumulative variable method.}
\label{Fig:F2C}
\end{figure*}
\section{Method of Analysis}
It is argued that in heavy-ion collisions a large baryon density fluctuation may provide a unique signal of the phase transition in the QCD phase diagram. Critical density fluctuations are expected to be observed as a power-law pattern at the available phase space resolution if the system freezes out right in the vicinity of the critical point~\cite{Antoniou2006PRL,NA61universe}.
In high energy experiments, the power-law or intermittency behavior can be measured through the SFMs of the baryon number density~\cite{HattaPRL,Antoniou2006PRL, NA61universe}. For this purpose, an available region of momentum space is partitioned into $M^{D}$ equal-size bins, and $F_{q}(M)$ is defined as:\\
\begin{equation}
F_{q}(M)=\frac{\langle\frac{1}{M^{D}}\sum_{i=1}^{M^{D}}n_{i}(n_{i}-1)\cdots(n_{i}-q+1)\rangle}{\langle\frac{1}{M^{D}}\sum_{i=1}^{M^{D}}n_{i}\rangle^{q}},
\label{Eq:FM}
\end{equation}
\noindent with $M^{D}$ the number of cells in the $D$-dimensional partitioned momentum space, $n_{i}$ the measured multiplicity in the $i$-th cell, and $q$ the order of the moment.
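As an illustration of Eq.~\eqref{Eq:FM} for $D=2$, the following Python sketch (our own; the event format and the default transverse momentum range are assumptions chosen to roughly match the analysis cuts) computes $F_{q}(M)$ from per-event proton momenta:
\begin{verbatim}
import numpy as np

def scaled_factorial_moment(events, M, q=2, pt_range=(-2.0, 2.0)):
    # events: list of (N, 2) arrays holding (px, py) of protons per event.
    lo, hi = pt_range
    edges = np.linspace(lo, hi, M + 1)
    num, den = 0.0, 0.0
    for ev in events:
        n, _, _ = np.histogram2d(ev[:, 0], ev[:, 1],
                                 bins=[edges, edges])
        fq = n.copy()
        for k in range(1, q):
            fq *= (n - k)       # n(n-1)...(n-q+1) in each cell
        num += fq.mean()        # cell average of f_q for this event
        den += n.mean()         # cell average of n for this event
    nev = len(events)
    return (num / nev) / (den / nev) ** q
\end{verbatim}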
If the system exhibits critical fluctuations, $F_{q}(M)$ is expected to follow a scaling function:~\cite{Antoniou2006PRL,NA49SFM}\\
\begin{equation}
F_{q}(M)\sim (M^{D})^{\phi_{q}}, M\rightarrow\infty.
\label{Eq:PowerLaw}
\end{equation}
\noindent A power-law behavior of $F_{q}$ on the number of partitioned cells $M^{D}$, when $M$ is large enough, is referred to as intermittency. The scaling exponent, $\phi_{q}$, is called the intermittency index and characterizes the strength of the intermittency behavior. By using a critical equation of state of the 3D Ising universality class, the second-order intermittency index in a two-dimensional transverse momentum space is predicted to be $\phi_{2}=\frac{5}{6}$~\cite{Antoniou2006PRL} for the baryon density and $\phi_{2}=\frac{2}{3}$ for the sigma condensate~\cite{NGNPA2005}. The search for multiplicity fluctuations in an increasing number of partition intervals using the SFM method was first proposed several years ago~\cite{SFM1,SFM2}. Recent studies show that one can probe QCD critical fluctuations~\cite{CMCPLB} and estimate the possible critical region~\cite{Antoniou2018PRD} from an intermittency analysis in relativistic heavy-ion collisions.
In the following section, we calculate the second-order SFM of the proton density in transverse momentum space using event samples from the UrQMD model in Au + Au collisions at $\sqrt{s_\mathrm{NN}}$ = 7.7 - 200 GeV. The intermittency index, $\phi_{2}$, can then be extracted by fitting Eq.~\eqref{Eq:PowerLaw}.
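The fit itself reduces to a linear regression in double-logarithmic scale; a minimal sketch (ours) is:
\begin{verbatim}
import numpy as np

def intermittency_index(Ms, F2s, D=2):
    # Least-squares fit of ln F_2 = phi_2 * ln(M^D) + const, Eq. (2).
    slope, _ = np.polyfit(np.log(np.asarray(Ms, float) ** D),
                          np.log(F2s), 1)
    return slope
\end{verbatim}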
\section{Energy and centrality dependence of Scaled Factorial moments}
By using the UrQMD model, we generate event samples at various centralities in Au + Au collisions at $\sqrt{s_\mathrm{NN}}$ = 7.7, 11.5, 19.6, 27, 39, 62.4, 200 GeV. In the model calculations, we apply the same kinematic cuts and technical analysis methods as those used for the RHIC/STAR experimental data~\cite{net_proton2014}. The protons are measured at mid-rapidity ($|y|<0.5$) within the transverse momentum range $0.4<p_{T}<2.0$ GeV/c. The centrality is defined by the charged pion and kaon multiplicities within pseudo-rapidity $|\eta|<1.0$, which effectively avoids auto-correlation effects in the measurement of the SFMs. In our analysis, we focus on the proton multiplicities in the two-dimensional transverse momentum space of $p_{x}$ and $p_{y}$. The available 2D region of transverse momentum is partitioned into $M^2$ equal-size bins to calculate the SFMs for various cell sizes. The statistical error is estimated by the bootstrap method~\cite{BEboostrap}.
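A sketch of the bootstrap error estimate (our illustration; \texttt{statistic} is any event-sample functional, e.g.\ an $F_{2}(M)$ calculation, and the number of resamplings is an arbitrary choice):
\begin{verbatim}
import numpy as np

def bootstrap_error(events, statistic, n_boot=200, seed=0):
    # Resample events with replacement and take the spread of the
    # statistic over the resampled samples as its statistical error.
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(events), len(events))
        vals.append(statistic([events[i] for i in idx]))
    return np.std(vals, ddof=1)
\end{verbatim}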
In Fig.~\ref{Fig:F2E}, the black circles represent the SFMs as a function of the number of partitioned bins, directly calculated in transverse momentum space for proton numbers in the $0-5\%$ most central Au + Au collisions at $\sqrt{s_\mathrm{NN}}$ = 7.7 - 200 GeV. It is observed that $F_{2}(M)$ increases slowly with an increasing number of bins. The black lines show the power-law fits of $F_{2}(M)$ according to Eq.~\eqref{Eq:PowerLaw}. The slopes of the fits, i.e. the intermittency indices $\phi_{2}$, are found to be small at all energies, and much smaller than the theoretical prediction $\phi_{2}=5/6$ for a critical system of the 3D Ising universality class~\cite{Antoniou2006PRL}.
The $F_{2}(M)$ measured at various collision centralities in Au + Au collisions at $\sqrt{s_\mathrm{NN}}$ = 19.6 GeV are shown as black circles in Fig.~\ref{Fig:F2C}, and the black lines are fits according to Eq.~\eqref{Eq:PowerLaw}. Again, we find that the directly calculated SFMs can be fitted with a small intermittency index. The values of $\phi_{2}$ increase slightly from the most central ($0-5\%$) to the most peripheral ($60-80\%$) collisions.
Therefore, we observe that the intermittency indices from the directly calculated SFMs are small but non-zero in Au + Au collisions from the UrQMD model. However, the UrQMD model is a transport model that does not include any criticality-related self-similar fluctuations~\cite{MBUrQMD}, so $\phi_{2}$ should be zero. There must exist some trivial non-critical contributions from the background. We investigate how to remove these background effects from the directly measured SFMs in the next section.
\section{Background Subtraction}
\begin{figure}[!htp]
\centering
\includegraphics[scale=0.45]{pdf/CFM-FM-CMCMergeGaus0p5.pdf}
\caption{The black symbols represent the second-order scaled factorial moment as a function of the number of partitioned cells (a) in pure CMC events and (b) in CMC events contaminated with Gaussian background fluctuations. The corresponding red symbols are the SFMs calculated by the cumulative variable method.}
\label{Fig:GasandCMC}
\end{figure}
To extract the signature of critical fluctuations, it is essential to understand the non-critical effects or background contributions to the experimental observables in heavy-ion collisions. The background effects change the inclusive single-particle multiplicity distributions in the measured finite momentum space and thus significantly influence the value of the calculated SFMs, and thereby the intermittency index. For this purpose, NA49 and NA61 use the mixed-event method to estimate and subtract the background, by assuming that the particle multiplicity in each cell can be simply divided into background and critical contributions~\cite{NA49SFM}. In this paper, we pursue the cumulative variable method, which has been proved to drastically reduce distortions of intermittency due to a non-uniform single-particle density from background contributions~\cite{OchsCumulative,BialasCumulative,na22}, to understand and remove the background effects.
Following Ochs and Bialas~\cite{BialasCumulative,OchsCumulative}, the cumulative variable $X(x)$ is related to the single-particle density distribution $\rho(x)$ through:
\begin{equation}
X(x)=\frac{\int_{x_{min}}^{x} \rho(x')dx'}{\int_{x_{min}}^{x_{max}}\rho(x')dx'}.
\label{Eq:cvariable}
\end{equation}
\noindent Here $x$ represents the original measured variable, {\it e.g.,} $p_{x}$ or $p_{y}$. $\rho(x)$ is the density function of $x$. $x_{min}$ and $x_{max}$ are the lower and upper phase space limits of the chosen variable $x$.
The cumulative variable $X(x)$ is determined by the shape of the density distribution $\rho(x)$, and the distribution of the new variable $X(x)$ is uniform in the interval from 0 to 1. It has been proved that the cumulative variable removes the dependence of the intermittency parameters on the shape of the particle density distribution and gives a new way to compare measurements from different experiments~\cite{BialasCumulative}. With the cumulative variables, the two-dimensional momentum space $p_{x}p_{y}$, which is partitioned into $M^{2}$ cells, is transformed into the $p_{X}p_{Y}$ space, and the SFM directly calculated in $p_{x}p_{y}$ space, $F_{2}(M)$, becomes $CF_{2}(M)$, calculated in $p_{X}p_{Y}$ space. The extraction of $\phi_{2}^{c}$ from $CF_{2}(M)$ proceeds in the same way as that of $\phi_{2}$ from $F_{2}(M)$ according to Eq.~\eqref{Eq:PowerLaw}.
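In practice the transformation can be implemented as an empirical rank transform, applied independently to each momentum component; a minimal sketch (ours, assuming the inclusive spectrum is estimated from the same event sample) is:
\begin{verbatim}
import numpy as np

def cumulative_transform(ref_sample, values):
    # Discrete version of Eq. (3): map values to X in (0, 1] using the
    # empirical CDF of the inclusive single-particle sample ref_sample.
    sorted_ref = np.sort(ref_sample)
    ranks = np.searchsorted(sorted_ref, values, side='right')
    return ranks / len(sorted_ref)

# Apply to px and py separately, then compute CF_2(M) on the unit square.
\end{verbatim}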
In order to test the validity of the cumulative variable method in the calculation of SFMs, we use a Critical Monte-Carlo (CMC) model~\cite{Antoniou2006PRL,CMCPLB} of the 3D Ising universality class to generate a critical event sample. The CMC model involves a self-similar or intermittency nature of the particle correlations and leads to an intermittency index of $\phi_{2}=\frac{5}{6}$~\cite{Antoniou2006PRL}. In Fig.~\ref{Fig:GasandCMC}(a), both $F_{2}(M)$ and $CF_{2}(M)$ are shown in the same pad. We observe that $CF_{2}(M)$ follows a good power-law behavior, as $F_{2}(M)$ does, with increasing $M^{2}$. Within statistical errors, the intermittency index $\phi_{2}^{c}$ fitted from $CF_{2}(M)$ equals $\phi_{2}$ obtained from $F_{2}(M)$. This means that the cumulative variable method does not change the intermittency behavior for a pure critical signal sample. In Fig.~\ref{Fig:GasandCMC}(b), the CMC event sample is contaminated by hand with a statistical Gaussian background contribution, with a mixing probability $\lambda=95\%$. The chosen value of $\lambda$ is close to the one used in the simulations of the background in the NA49 experiment~\cite{NA49SFM}. In this case, one finds that the directly calculated $F_{2}(M)$ deviates substantially from the linear dependence, i.e. the scaling law is violated because of the Gaussian background contribution. However, $CF_{2}(M)$, which is calculated by the cumulative variable method, still obeys a similar power-law dependence on $M^{2}$ as that in Fig.~\ref{Fig:GasandCMC}(a). Furthermore, the intermittency index $\phi_{2}^{c}$ calculated from $CF_{2}(M)$ remains unchanged when compared to the one in the original CMC sample shown in Fig.~\ref{Fig:GasandCMC}(a). These results are encouraging. They confirm that, in the intermittency analysis, the cumulative variable method can efficiently remove the distortions caused by background contributions.
Let us return to the background effect in the UrQMD model discussed in Sec.~IV. We calculate the SFMs in the same event sample with the proposed cumulative variable method and extract the intermittency index from $CF_{2}(M)$. The results are shown as red triangles and red lines in Fig.~1 and Fig.~2. The $CF_{2}(M)$ is found to be nearly flat with increasing number of cells at all measured energies and centralities. Furthermore, the intermittency index, with values close to $0$, is much smaller than the one directly calculated from $F_{2}(M)$. This verifies that the non-critical background can be efficiently removed by the cumulative variable method in the calculation of SFMs in the UrQMD model. The method can also be used for the intermittency analysis in the ongoing experiments at RHIC/STAR and in future heavy-ion experiments in search of the QCD critical point.
\section{Efficiency correction}
One of the difficulties of measuring SFMs and intermittency in experiments is the efficiency correction. The values of the SFMs deviate from the true ones because detectors miss some particles; the detection probability is called the efficiency. To understand the underlying physics associated with the measurement, one needs to study the efficiency effect carefully. Generally, the efficiencies in experiments are obtained by the Monte Carlo (MC) embedding technique~\cite{embedding}, which determines the efficiency as the ratio of the number of matched MC tracks to the number of input tracks. It contains the effects of tracking efficiency, detector acceptance and interaction losses.
We denote the number of produced particles by $N$ and the number of measured ones by $n$, with a detection efficiency $\epsilon$. The true factorial moment is $f_{q}^{true}=\langle N(N-1)\cdots(N-q+1)\rangle$. It can be recovered by dividing the measured factorial moment, $f_{q}^{measured}=\langle n(n-1)\cdots(n-q+1)\rangle$, by appropriate powers of the detection efficiency~\cite{EfficiencyFM, EfficiencyLuo,EfficiencyKoch}:
\begin{equation}
f_{q}^{corrected}=\frac{f_{q}^{measured}}{\epsilon^{q}}=\frac{\langle n(n-1)\cdots(n-q+1)\rangle}{\epsilon^{q}}.
\label{Eq:f2correction}
\end{equation}
This strategy has been used for the efficiency correction in high-order cumulant analyses~\cite{EfficiencyLuo,MomentOverview, EfficiencyFM,EfficiencyToshihiro, EfficiencyKoch}. Assuming that the probability to detect a particle follows a binomial distribution, both cumulants~\cite{EfficiencyLuo,MomentOverview} and off-diagonal cumulants~\cite{EfficiencyKoch} can be expressed in terms of factorial moments and then corrected by using Eq.~\eqref{Eq:f2correction}.
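As a sanity check of Eq.~\eqref{Eq:f2correction}, consider the following small Monte Carlo sketch in Python (not part of the analysis code; the Poisson multiplicities and the constant efficiency $\epsilon=0.7$ are hypothetical):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
eps, q = 0.7, 2                           # hypothetical efficiency, order
N = rng.poisson(lam=5.0, size=200_000)    # produced multiplicities (toy)
n = rng.binomial(N, eps)                  # measured after binomial losses

def f_q(m, q):
    """Factorial moment <m(m-1)...(m-q+1)>."""
    out = np.ones(len(m))
    for j in range(q):
        out = out * (m - j)
    return out.mean()

print(f_q(N, q))             # true f_2
print(f_q(n, q) / eps**q)    # corrected measured f_2
\end{verbatim}
For binomial losses, $\langle n(n-1)\rangle = \epsilon^{2}\langle N(N-1)\rangle$, so the two printed numbers agree up to statistical fluctuations.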
We apply this strategy to the efficiency correction of the SFMs defined in Eq.~\eqref{Eq:FM}. Since the available region of phase space is partitioned into a lattice of $M^{2}$ equal-size cells, every element, $f_{q,i}^{measured}=n_{i}(n_{i}-1)\cdots(n_{i}-q+1)$, of the measured $F_{q}(M)$ is corrected cell by cell. The efficiency-corrected $F_{q}(M)$ is:
\begin{eqnarray}
F_{q}^{corrected}(M) &=& \frac{\langle\frac{1}{M^{2}}\sum_{i=1}^{M^{2}} \frac{f_{q,i}^{measured}}{\bar{\epsilon}_{i}^{q}}\rangle}{\langle\frac{1}{M^{2}}\sum_{i=1}^{M^{2}} \frac{f_{1,i}^{measured}}{\bar{\epsilon}_{i}}\rangle^{q}} \nonumber\\
&=& \frac{\langle\frac{1}{M^{2}}\sum_{i=1}^{M^{2}}\frac{n_{i}(n_{i}-1)\cdots(n_{i}-q+1)}{\bar{\epsilon}_{i}^{q}}\rangle}{\langle\frac{1}{M^{2}}\sum_{i=1}^{M^{2}}\frac{n_{i}}{\bar{\epsilon}_{i}}\rangle^{q}}.
\label{Eq:FMcorrection}
\end{eqnarray}
\noindent Here, $n_{i}$ denotes the number of measured particles located in the $i$-th cell, and $\bar{\epsilon}_{i}=(\epsilon_{1}+\epsilon_{2}+\cdots+\epsilon_{n_i})/n_i$ is the average efficiency of the particles measured in the $i$-th cell. We call the efficiency correction technique of Eq.~\eqref{Eq:FMcorrection} the cell-by-cell method.
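For concreteness, a minimal Python sketch of the cell-by-cell correction for $q=2$ is given below; the input format (one array of per-particle efficiencies per cell) and all function names are illustrative rather than taken from any experiment's software:
\begin{verbatim}
import numpy as np

def corrected_F2(events):
    """Cell-by-cell efficiency-corrected F_2(M), Eq. (FMcorrection), q=2.
    `events` is a list of events; each event is a list of M^2 cells,
    each cell being the NumPy array of per-particle efficiencies of the
    particles measured in that cell (illustrative input format)."""
    num, den = [], []
    for cells in events:
        f2, f1 = [], []
        for eff in cells:
            n_i = len(eff)
            eps_bar = eff.mean() if n_i > 0 else 1.0  # mean efficiency
            f2.append(n_i * (n_i - 1) / eps_bar**2)
            f1.append(n_i / eps_bar)
        num.append(np.mean(f2))  # (1/M^2) sum_i f_{2,i} / eps_bar_i^2
        den.append(np.mean(f1))  # (1/M^2) sum_i n_i / eps_bar_i
    return np.mean(num) / np.mean(den)**2  # event averages <...>
\end{verbatim}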
To demonstrate the validity of the cell-by-cell method, we employ the UrQMD model together with the particle detection efficiencies used in a real experiment. The simulation injects particle tracks from UrQMD events into the RHIC/STAR detector acceptance with the experimental efficiencies. In the STAR experiment, the detection efficiency is not constant but depends on the momentum of the particles~\cite{STAREfficiency,STARMoment,LuoCPOD}. The particle identification method differs between the low and high $p_{T}$ regions. The main particle detector at STAR, the Time Projection Chamber (TPC), provides the momenta of charged particles in the low-$p_{T}$ region $0.4<p_{T}<0.8$ GeV/c~\cite{STAREfficiency}, and the Time-Of-Flight (TOF) detector performs the particle identification in the higher-$p_{T}$ region $0.8<p_{T}<2$ GeV/c~\cite{STARMoment,LuoCPOD}. Particles therefore need to be counted separately in the two $p_{T}$ regions, in which the values of the efficiency are different.
\begin{figure}[!htp]
\hspace{-0.8cm}
\includegraphics[scale=0.45]{pdf/FM-TPC-Efficiency.pdf}
\caption{(a) Experimental tracking efficiencies as a function of $p_{T}$ in TPC detector at mid-rapidity ($|y|<0.5$) for protons in $0 - 5\%$ Au + Au collisions. (b) The second-order SFM as a function of number of partitioned cells from UrQMD calculations. The black circles represent the true $F_{2}(M)$, the blue solid triangles are the measured $F_{2}(M)$ after applying the TPC efficiency, and the red stars show the efficiency corrected SFMs by using the cell-by-cell method.}
\label{Fig:F2TPC}
\end{figure}
Fig.~4(a) shows the $p_{T}$ dependence of the experimental efficiency of the TPC detector alone at mid-rapidity ($|y|<0.5$) for protons in the most central Au + Au collisions at $\sqrt{s_\mathrm{NN}}$ = 19.6 GeV~\cite{STAREfficiency}. The efficiency first increases with increasing $p_{T}$ and then saturates at higher $p_{T}$. After applying this $p_{T}$-dependent efficiency to the UrQMD event sample, the measured $F_{2}(M)$ is calculated, and the correction formula of Eq.~\eqref{Eq:FMcorrection} is then used to correct it. In Fig.~4(b), the black circles represent the original true $F_{2}(M)$, the blue triangles the measured $F_{2}(M)$ with the STAR TPC efficiency, and the red stars the efficiency-corrected $F_{2}(M)$. The measured SFMs are systematically smaller than the true ones, especially at large numbers of partitioned cells, while the efficiency-corrected SFMs are found to be consistent with the original true ones.
\begin{figure}[hbtp]
\hspace{-0.8cm}
\includegraphics[scale=0.45]{pdf/FM-TPCTOF-Efficiency.pdf}
\caption{(a) Experimental tracking efficiencies as a function of $p_{T}$ in TPC+TOF detectors at mid-rapidity ($|y|<0.5$) for protons in $0 - 5\%$ Au + Au collisions. (b) The second-order SFM as a function of number of partitioned cells from UrQMD calculations. The black circles represent the true $F_{2}(M)$, the blue solid triangles are the measured $F_{2}(M)$ after applying the TPC+TOF efficiency, and the red stars show the efficiency corrected SFMs by using the cell-by-cell method.}
\label{Fig:F2TPTOF}
\end{figure}
For the case of the TPC+TOF efficiencies, Fig.~5(a) shows the tracking efficiencies as a function of $p_{T}$ in the TPC and TOF at STAR~\cite{LuoCPOD, EfficiencyToshihiro, STARMoment}. One notes a steplike dependence of the efficiencies on $p_{T}$, which arises because the particle identification method differs between the TPC and TOF detectors in the STAR experiment. We apply the TPC+TOF efficiency to the UrQMD event sample at $\sqrt{s_\mathrm{NN}}$ = 19.6 GeV and then correct the measured SFMs by Eq.~\eqref{Eq:FMcorrection}. The results are shown in Fig.~5(b). Again, the SFMs corrected by the proposed cell-by-cell method are found to coincide with the original true ones.
In this section, we have demonstrated that the cell-by-cell method serves as a precise and effective efficiency correction for SFMs. It can be readily applied to current intermittency analyses at STAR~\cite{STARintermittency}, NA49~\cite{NA49SFM}, NA61~\cite{NA61universe} and other heavy-ion experiments. One also needs to consider how to treat the momentum resolution in different experiments: since the $p_{T}$ of individual particles is used to obtain the efficiency, the momentum resolution may directly affect the calculation of SFMs. This effect can be studied by smearing the $p_{T}$ of each particle with the known momentum resolution.
\section{Summary and outlook}
In summary, we have investigated the collision energy and centrality dependence of the SFMs in Au + Au collisions at $\sqrt{s_\mathrm{NN}}=7.7-200$ GeV using the UrQMD model. The second-order intermittency index is found to be small but non-zero in this transport model, which implements no critical self-similar fluctuations. A cumulative variable method is then proposed for the calculation of SFMs to remove the background in the intermittency analysis. It has been verified that this method successfully reduces the distortion caused by a Gaussian background added to a pure self-similar event sample generated by the CMC model. Applying the method to the UrQMD event sample confirms that the non-critical background effect can be efficiently removed, with the resulting intermittency index close to zero.
In experimental measurements of intermittency, the measured SFMs should be corrected for detection efficiencies. We derive a cell-by-cell formula for the calculation of SFMs in heavy-ion collisions. The validity of the method has been checked on UrQMD event samples to which the TPC and TPC+TOF tracking efficiencies of the RHIC/STAR experiment are applied. It is demonstrated that the cell-by-cell method provides a precise and effective efficiency correction for SFMs. The correction method is universal and can be applied to ongoing intermittency studies in heavy-ion experiments.
\begin{figure}
\includegraphics[scale=0.35]{pdf/UrQMD-phi2-energy-dependence-vsData.pdf}
\caption{The second-order intermittency index measured at NA49~\cite{NA49SFM,NA49C+C} (solid blue symbols) and NA61~\cite{NA61Ar+Sc} (open blue circles). The results from the UrQMD model in central Au + Au collisions are plotted as black circles. The red arrow represents the theoretical expectation from a critical QCD model~\cite{Antoniou2006PRL}.}
\label{Figphi2-energy}
\end{figure}
In current experimental explorations of intermittency in heavy-ion collisions, the NA49 and NA61 collaborations have directly measured $\phi_{2}$ for various sizes of colliding nuclei~\cite{NA49SFM,NA49C+C,NA61Ar+Sc}, represented as blue symbols in Fig.~6. The intermittency parameter at $\sqrt{s_\mathrm{NN}}$ = 17.3 GeV for the Si + Si system at the NA49 experiment approaches the theoretical expectation, shown as the red arrow in the figure, for a second-order phase transition in a critical QCD model~\cite{Antoniou2006PRL}. The black circles of the UrQMD results show a flat trend with values around 0 at all energies, since no critical mechanisms are implemented in the transport model.
The RHIC/STAR experiment has been running the second phase of the beam energy scan (BES-II) program in 2018--2021~\cite{BESII}. With significantly improved statistics and particle identification in BES-II, it would be interesting for STAR to measure intermittency to explore the CP in the QCD phase diagram. Our work provides a non-critical baseline and gives guidance on background subtraction and efficiency correction for the calculation of intermittency in heavy-ion collisions.
\section* {Acknowledgements}
This work is supported by the Ministry of Science and Technology (MoST) under
grant No. 2016YFE0104800, the National Key Research and Development Program of China (Grant No. 2020YFE0202002 and 2018YFE0205201), and the National Natural Science Foundation of China (Grant No. 11828500, 11828501, 11575069, 11890711 and 11861131009). The first two authors contributed equally to this work.
| {'timestamp': '2021-04-29T02:07:24', 'yymm': '2104', 'arxiv_id': '2104.11524', 'language': 'en', 'url': 'https://arxiv.org/abs/2104.11524'} |
\section{Introduction}
In general, a model of distributed computing is a set of \emph{runs}, i.e., all
allowed interleavings of \emph{steps} of concurrent processes.
There are multiple ways to define these sets of runs in a tractable way.
A natural one is based on \emph{failure models} that
describe the assumptions on where and when failures might occur.
By the conventional assumption of \emph{uniform} failures, processes fail with equal and
independent probabilities, giving rise to the classical model of
\emph{$t$-resilience}, where at most $t$ processes may fail in a given run.
The extreme case of $t=n-1$, where~$n$ is the number of processes in
the system, corresponds to the
\emph{wait-free} model.
The notion of \emph{adversaries}~\cite{DFGT11} generalizes
uniform failure models by defining a set of process subsets, called \emph{live
sets}, and assuming that in every model run, the set of \emph{correct}, i.e., taking infinitely many steps, processes
must be a live set.
In this paper, we consider adversarial read-write shared memory
models, i.e., sets of runs in which processes communicate via reading
and writing in the shared memory and live sets define which sets of
processes can be correct.
A conventional way to capture the power of a model is to determine its
\emph{task computability}, i.e., the
set of distributed tasks that can be solved in it.
For example, consider the \emph{$0$-resilient} adversary $\ensuremath{\mathcal{A}}_{0\textrm{-}\textit{res}}$
defined through a single live set $\{p_1,\ldots,p_n\}$: the
adversary says that no process is allowed to fail (by taking only
finitely many steps).
It is easy to see that the model is strong enough to
solve \emph{consensus} and, thus, any task~\cite{Her91}.\footnote{In
the ``universal'' task of consensus,
every process has a private \emph{input} value, and is expected to
produce an \emph{output} value, so
that (validity)~every output is an input of some process,
(agreement)~no two processes produce different output values, and (termination)~every process
taking sufficiently many steps returns.}
In this paper, we propose a surprisingly simple characterization of
the task computability of a large class of adversarial models
through \emph{agreement functions}.
An agreement function $\alpha$ maps subsets of processes $\{p_1,\ldots,p_n\}$ to
positive integers in $\{0,1,\ldots,n\}$.
For each subset $P$, $\alpha(P)$ determines, intuitively, the level of
\emph{set consensus} that processes in $P$ can reach when no other process is
active, i.e., the smallest number of distinct input values they can
decide on.
For example, the agreement function of the wait-free
shared-memory model is $\alpha_{wf}: P \mapsto |P|$ and
the $t$-resilient model, where at most $t$ processes may fail or not
participate, has $\alpha_{t,\textit{res}}:\; P \mapsto \max(0,|P|-n+t+1)$.
The agreement function of an adversary $\ensuremath{\mathcal{A}}$ can be computed
using the notion of \emph{set consensus power} of an adversary introduced
in~\cite{GK11}: $\alpha_{\ensuremath{\mathcal{A}}}(P)=\mathit{setcon}(\ensuremath{\mathcal{A}}|_P)$.
Here $\ensuremath{\mathcal{A}}|_P$ is the \emph{restriction of
$\ensuremath{\mathcal{A}}$ to $P$}, i.e., the adversary defined through the live sets of $\ensuremath{\mathcal{A}}$ that
are subsets of $P$.
To each agreement function $\alpha$, corresponding to an existing model,
we associate a particular model, the \emph{$\alpha$-model}.
The $\alpha$-model is defined as the set of runs satisfying the
following property: the set $P$ of \emph{participating}
(taking at least one step) processes in a run is
such that $\alpha(P)\geq 1$, and at most $\alpha(P)-1$
processes take only finitely many steps in it.
An algorithm solves a task in the~$\alpha$-model if
every process taking infinitely many steps produces an output.
We show that, for the class of \emph{{fair}} adversaries, agreement functions ``tell it
all'' about task computability: a task is solvable in a {fair}
adversarial model with agreement function $\alpha$
\emph{if and only if} it is solvable in the $\alpha$-model.
{Fair} adversaries notably include the classes of superset-closed~\cite{HR10,Kuz12}
and symmetric~\cite{Tau10} adversaries.
Intuitively, superset-closed adversaries do not anticipate failures of
processes: if $S\in \ensuremath{\mathcal{A}}$ and~$S\subseteq S'$, then~$S'\in\ensuremath{\mathcal{A}}$.
Symmetric adversaries do not depend on processes identifiers:
if~$S\in\ensuremath{\mathcal{A}}$, then for every set of processes $S'$ such that $|S'|=|S|$,
we have $S'\in\ensuremath{\mathcal{A}}$.
A corollary of our result is a characterization of the $k$-concurrency model~\cite{GG09}.
Here we use the fact that the $k$-concurrency model is equivalent, with respect to task solvability, to
\emph{$k$-obstruction-freedom}~\cite{GK11}, the symmetric adversary
consisting of all live sets of sizes $1$ to $k$.
Thus, the agreement function
$\alpha_{k\textit{-conc}}:\; P\mapsto \min(|P|,k)$ captures the
$k$-concurrent task
computability.
An alternative characterization of $k$-concurrency via a compact
\emph{affine} task was suggested in~\cite{GHKR16}.
There are, however, models that are not captured by their agreement
functions.
We give an example of a \emph{non-{fair}} adversary that solves
strictly more tasks than its \emph{$\alpha$-model}.
Characterizing the class of models that can be captured through their
agreement function is an intriguing open question.
The rest of the paper is organized as follows.
Section~\ref{sec:model} gives model definitions.
In Section~\ref{sec:agr}, we formally define the notion of an agreement
function.
In Section~\ref{sec:adapt}, we present an $\alpha$-adaptive
set consensus algorithm, and in
Section~\ref{sec:univer}, we prove a few useful properties of
$\alpha$-models.
In Section~\ref{sec:adv}, we present the class of {fair} adversaries,
show that superset-closed and symmetric adversaries are {fair}, and
show that {fair} adversaries are captured by their agreement functions.
In Section~\ref{sec:counter}, we give examples of models that are
\emph{not} captured by agreement functions.
Section~\ref{sec:related} reviews related work, and
Section~\ref{sec:conc} concludes the paper.
\section{Preliminaries}
\label{sec:model}
\myparagraph{Processes, runs, models.}
Let $\Pi$ be a system of $n$ asynchronous processes, $p_1,\ldots,p_n$
that communicate via a shared atomic-snapshot memory~\cite{AADGMS93}.
The atomic-snapshot (AS) memory is represented as a vector of $n$ shared
variables, where each process is associated with a distinct position
in this vector, and exports two operations: \emph{update} and
\emph{snapshot}.
An \emph{update} operation performed by $p_i$
replaces position $i$ with a new value and a
\emph{snapshot} operation returns the current state of the vector.
We assume that processes run the
\emph{full-information} protocol:
the first value each process writes is its \emph{input value}.
A process then alternates between taking snapshots of the memory
and writing back the result of its latest snapshot.
A \emph{run} is thus a sequence
of process identifiers stipulating the order in which the processes
take operations: each odd appearance of $i$
corresponds to an \emph{update} and each even appearance corresponds
to a \emph{snapshot}.
A \emph{model} is a set of runs.
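As an illustration of this setting, the following Python sketch (names and the bounded number of rounds are only for demonstration) emulates the atomic-snapshot memory with a global lock and runs the full-information protocol on three threads:
\begin{verbatim}
import threading

class AtomicSnapshot:
    """Toy atomic-snapshot memory: a lock makes update/snapshot atomic."""
    def __init__(self, n):
        self._mem = [None] * n
        self._lock = threading.Lock()

    def update(self, i, value):
        with self._lock:
            self._mem[i] = value

    def snapshot(self):
        with self._lock:
            return list(self._mem)

def full_information(mem, i, my_input, rounds=3):
    """Process p_i: first write the input, then alternate snapshots
    and write-backs of the latest snapshot."""
    view = my_input
    for _ in range(rounds):
        mem.update(i, view)
        view = mem.snapshot()
    return view

mem = AtomicSnapshot(3)
threads = [threading.Thread(target=full_information,
                            args=(mem, i, f"in{i}")) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(mem.snapshot())
\end{verbatim}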
\myparagraph{Failures and participation.}
A process that takes only finitely many steps of the
full-information protocol in a given run is called \emph{faulty}; otherwise, it is called
\emph{correct}. A process that took at least one step in a given run
is called \emph{participating} in it.
The set of participating processes in a given run is called its
\emph{participating set}. Note that, since every process writes its input value in its first step, the inputs of
participating processes are eventually known to every process that
takes sufficiently many steps.
\myparagraph{Tasks.}
In this paper, we focus on
distributed \emph{tasks}~\cite{HS99}.
A process invokes a task with an input value and the task returns an output value, so that the inputs and the
outputs across the processes which invoked the task respect the task
specification.
Formally, a \emph{task} is defined through a set $\mathcal{I}$ of input vectors (one input value for each process),
a set $\mathcal{O}$ of output vectors (one output value for each process), and a total relation $\Delta:\mathcal{I}\mapsto 2^{\mathcal{O}}$
associating to each input vector a set of valid output vectors. An input value $\bot$ denotes a
\emph{non-participating} process, and an output value $\bot$ denotes an
\emph{undecided} process. See~\cite{HKR14} for more details.
In the task of \emph{$k$-set consensus}, input values are in a set of values $V$ ($|V|\geq k+1$), output values
are in $V$, and for each input vector $I$ and output vector $O$, $(I,O) \in\Delta$ if the set of non-$\bot$
values in $O$ is a subset of values in $I$ of size at most $k$.
The special case of $1$-set consensus is called \emph{consensus}~\cite{FLP85}.
\myparagraph{Solving a task.} We say that an algorithm $A$ solves a task $T=(\mathcal{I},\mathcal{O},\Delta)$ in a model $M$ if $A$ ensures that (1) in every run in which processes start with an input vector $I\in\mathcal{I}$, all decided values form a vector $O\in\mathcal{O}$ such that $(I,O)\in\Delta$, and (2) if the run is in $M$, then every correct process decides.
This gives rise to the notion of task solvability, i.e., a task $T$ is solvable in a model $M$ if and only if there exists an algorithm $A$ which solves $T$ in $M$.
\myparagraph{BGG simulation.}
The principal technical tool in this paper is a simulation technique
that we call the \emph{BGG simulation}, after Borowsky, Gafni, and
Guerraoui, collecting ideas presented in~\cite{BG93a,Gaf09-EBG,GG09,GG11-univ}.
The technique allows a system of~$n$ processes that communicate via
read-write shared memory and $k$-set consensus objects to
\emph{simulate} a $k$-process system running an arbitrary read-write
algorithm.
In particular, we can use this technique to run an extended BG simulation~\cite{Gaf09-EBG} on top
of these $k$ simulated processes, which gives a simulation of an
arbitrary $k$-concurrent algorithm.
An important feature of the simulation is that it adapts to the
number of currently active simulated processes: if this number drops below
$k$ to some value $a$ (after some simulated processes complete their computations), the
number of simulators in use also drops to $a$.
We refer to~\cite{GHKR16} for a detailed description of this
simulation algorithm.
\section{Agreement functions}
\label{sec:agr}
\begin{definition}[Agreement function]
The \emph{agreement function} of a model $M$ is
a function $\alpha: 2^{\Pi} \to \{0,\ldots,n\}$, such that
for each $P\in 2^{\Pi}$, in the set of runs of $M$ in which no process
in $\Pi\setminus P$ participates,
iterative $\alpha(P)$-set consensus can be solved,
but $(\alpha(P)-1)$-set consensus cannot.
By convention, if $M$ contains no (infinite) runs with participating set
$P$, then $\alpha(P)=0$.
\end{definition}
\noindent
Intuitively, for each $P$, we consider a model consisting of runs of $M$ in which only
processes in $P$ participate and determine the best level of set
consensus that can be reached in this model,
with $0$ corresponding to a model that consists of \emph{finite} runs only.
Note that the agreement function $\alpha$ of a model $M$ is \emph{monotonic}:
$P\subseteq P'$ $\Rightarrow$ $\alpha(P)\leq \alpha(P')\leq|P'|$.
Indeed, the set of runs of $M$ where
the processes in $\Pi\setminus P$ do not take any step
is a subset of the set of runs of $M$ where
the processes in $\Pi\setminus P'$ do not take any step.
Moreover, $|P'|$-set consensus is trivially solvable in any model
by making processes return their own proposal directly.
In this paper, we only consider monotonic functions~$\alpha$.
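As a small illustration, the following Python sketch encodes the agreement functions $\alpha_{wf}$ and $\alpha_{t,\textit{res}}$ given above and verifies the monotonicity property by brute force; the system size $n=4$ and $t=1$ are arbitrary choices:
\begin{verbatim}
from itertools import combinations

n, t = 4, 1                        # arbitrary system size and resilience
procs = range(1, n + 1)

def alpha_wf(P):                   # wait-free model
    return len(P)

def alpha_t_res(P):                # t-resilient model
    return max(0, len(P) - n + t + 1)

def is_monotonic(alpha):
    """Check P subset of P'  =>  alpha(P) <= alpha(P') <= |P'|."""
    subsets = [frozenset(c) for k in range(n + 1)
               for c in combinations(procs, k)]
    return all(alpha(P) <= alpha(Q) <= len(Q)
               for P in subsets for Q in subsets if P <= Q)

print(is_monotonic(alpha_wf), is_monotonic(alpha_t_res))  # True True
\end{verbatim}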
\begin{definition}[$\alpha$-model]
Given a monotonic agreement function $\alpha$, the \emph{$\alpha$-model} is the set of runs in which
the participating set $P$ satisfies:
(1)~$\alpha(P)\geq 1$; and (2)~at most $\alpha(P)-1$ participating
processes take only finitely many
steps.
\end{definition}
\noindent
We say that a model is \emph{characterized
by its agreement function} $\alpha$ if and only if
it solves the same set of tasks as the $\alpha$-model.
\begin{definition}[$\alpha$-adaptive set consensus]
The $\alpha$-adaptive set consensus task is an agreement task satisfying the
{\bf validity} and {\bf termination} properties of consensus
and the {\bf $\alpha$-agreement} property:
if at some time $\tau$, $k$ distinct values have been returned,
then the current participating set $P_\tau$
is such that $\alpha(P_\tau)\geq k$.
\end{definition}
\section{Adaptive set consensus}
\label{sec:adapt}
We can easily show that
any model with agreement function $\alpha$ can solve the
$\alpha$-adaptive set consensus task, i.e., achieve the best level
of set consensus without a priori knowledge of the set of processes
that are allowed to participate.
\begin{algorithm}
\caption{Adaptive set consensus for process $p_i$.\label{Alg:AdaptiveAgreement}}
\SetKwRepeat{Repeat}{Repeat}{Until}%
\textbf{Shared variables}: $R[1,\dots,n] \leftarrow (\bot,\bot)$\;\label{Alg:l:MemoryInitState}
$\mathbf{Local\ variables}$: $\mathit{parts},P \in 2^\Pi,v \leftarrow \mathit{Proposal}, k\in \mathbb{N},r[1,\dots,n] \leftarrow (\bot,\bot)$\;\label{Alg:l:Init}
\vspace{1em}
$R[i].\mathit{update}(v,0)$\;\label{Alg:l:Input}
$r := R.\mathit{snapshot}()$\;\label{Alg:l:InitPart1}
$P :=$ set of processes $p_j$ such that $r[j]\neq(\bot,\bot)$\;\label{Alg:l:InitPart2}
\Repeat{$\mathit{parts}=P$\label{Alg:l:ConstantPart}}{\label{Alg:l:loop-begin}
$ \mathit{parts} := P$\;
$k :=$ the greatest integer such that $(-,k)$ appears in $r$\;\label{Alg:l:AdoptProp1}
$v :=$ any value such that $(v,k)$ appears in $r$\;\label{Alg:l:AdoptProp2}
$v := \alpha(\mathit{parts})$-$\mathit{agreement}(v)$\;\label{Alg:l:Agrrement}
$R[i].\mathit{update}(v,|\mathit{parts}|)$\;\label{Alg:l:LockProp}
\vspace{1em}
$r := R.\mathit{snapshot}()$\;\label{Alg:l:UpdatePart1}
$P :=$ set of processes $p_j$ such that $r[j] \neq (\bot,\bot)$\;\label{Alg:l:UpdatePart2}
}\label{Alg:l:loop-end}
\Return $v$\;\label{Alg:l:Decide}
\end{algorithm}
Let $M$ be a model and let $\alpha$ be its agreement function.
Recall that, by definition, if only subsets of $P$ participate,
there exists an algorithm that solves $\alpha(P)$-set consensus;
let $\alpha(P)$-$\mathit{agreement}$ denote such an algorithm.
We now describe an \emph{$\alpha$-adaptive} set consensus
algorithm providing the ``best'' level of set consensus available for every
participating set, without prior knowledge of who may participate.
The algorithm adaptively ensures that if the participating set is $P$, then at most
$\alpha(P)$ distinct input values can be decided.
It is presented in
Algorithm~\ref{Alg:AdaptiveAgreement}, and the idea is the following:
every process writes its input to the shared memory
(line~\ref{Alg:l:Input}) and then takes a snapshot to obtain the current set
$P$ of participating processes (lines~\ref{Alg:l:InitPart1}--\ref{Alg:l:InitPart2}).
Each process then adopts the value ``locked'' with the largest observed participation
(lines~\ref{Alg:l:AdoptProp1}--\ref{Alg:l:AdoptProp2}) and uses it as its proposal for
an $\alpha(\mathit{parts})$-set consensus algorithm (line~\ref{Alg:l:Agrrement}); this is feasible since the algorithm is accessed only by processes in $\mathit{parts}$ and $\alpha$ can only increase with participation. The obtained decision value is then ``locked'' in memory by writing it, together with the size of the current participation estimate, $|\mathit{parts}|$, to the shared
memory (line~\ref{Alg:l:LockProp}). If, after locking the value, the updated
participating set $P$ (lines~\ref{Alg:l:UpdatePart1}--\ref{Alg:l:UpdatePart2}) has not
changed (line~\ref{Alg:l:ConstantPart}), the process returns its current decision estimate $v$ (line~\ref{Alg:l:Decide}). Otherwise, the same steps are repeated until an iteration completes with a stable observed participation (lines~\ref{Alg:l:loop-begin}--\ref{Alg:l:loop-end}).
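For readability, here is a sequential Python sketch of the code executed by a single process in Algorithm~\ref{Alg:AdaptiveAgreement}; the atomic-snapshot object $R$ and the $\alpha(\mathit{parts})$-set-consensus black box are assumed to be given, and all names are illustrative:
\begin{verbatim}
def adaptive_set_consensus(R, i, proposal, agreement):
    """Sketch of Algorithm 1 for process p_i.

    R              -- atomic-snapshot memory of (value, level) pairs
    agreement(P,v) -- assumed black box running alpha(P)-set consensus
                      among the processes in P, returning a decision"""
    v = proposal
    R.update(i, (v, 0))                  # write the input at level 0
    r = R.snapshot()
    P = frozenset(j for j, c in enumerate(r) if c is not None)
    while True:
        parts = P
        # Adopt a value "locked" with the largest participation seen.
        k = max(level for (_, level) in (c for c in r if c is not None))
        v = next(val for (val, level) in (c for c in r if c is not None)
                 if level == k)
        v = agreement(parts, v)          # alpha(parts)-set consensus
        R.update(i, (v, len(parts)))     # lock v at level |parts|
        r = R.snapshot()
        P = frozenset(j for j, c in enumerate(r) if c is not None)
        if parts == P:                   # stable participation: decide
            return v
\end{verbatim}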
\begin{theorem}
\label{thm:AdaptiveAgreement}
In any run with a participating set $P$, Algorithm~\ref{Alg:AdaptiveAgreement} satisfies the following
properties:
\begin{itemize}
\item \emph{Termination:} All \emph{correct} processes eventually decide.
\item \emph{$\alpha$-Agreement:} if at some time $\tau$, $k$ distinct values have been returned,
then the current participating set $P_\tau$
is such that $\alpha(P_\tau)\geq k$.
\item \emph{Validity:} Each decided value has been proposed by some process.
\end{itemize}
\end{theorem}
\begin{proof}
Let us show that Algorithm~\ref{Alg:AdaptiveAgreement} satisfies the following
properties:
\begin{itemize}
\item \textbf{Validity:}
Processes can only decide on their current estimated decision value~$v$
(line~\ref{Alg:l:Decide}).
This value~$v$ is initialized to the process's own input proposal (line~\ref{Alg:l:Init}),
and it can then only be replaced by adopting another process's current,
initialized, estimated decision value. Adopting this value is done either
directly (line~\ref{Alg:l:AdoptProp2})
or through an agreement protocol (line~\ref{Alg:l:Agrrement})
satisfying the same \emph{validity} property.
\item \textbf{Termination:}
Assume that a correct process never decides.
Then it must execute the repeat loop
(lines~\ref{Alg:l:loop-begin}--\ref{Alg:l:loop-end}) forever,
so the observed participation must have changed at every iteration
(line~\ref{Alg:l:ConstantPart}). But, as participation can only increase,
it must have strictly increased infinitely often
-- a contradiction with the finiteness of the system size.
\item \textbf{$\alpha$-Agreement:}
We say that a process \emph{returns at level} $t$ if
it exited the repeat loop (line~\ref{Alg:l:ConstantPart}) with
a last observed participation of size $t$. Let $l$ be the
smallest level at which a process \emph{returns}, and let
$O_l$ be the set of values ever written to $R$ at \emph{level} $l$,
i.e., values $v$ such that $(v,l)$ ever appears in
$R$. We shall show that for all $l'>l$, $O_{l'}\subseteq O_l$.
Indeed, let $q$ be the first process to write a value $(v',l')$ (in
line~\ref{Alg:l:LockProp}), with~$l'>l$, in $R$. Thus, the
immediately preceding snapshot, taken before this write in
line~\ref{Alg:l:InitPart1} or~\ref{Alg:l:UpdatePart1}, witnessed a
participating set of size $l'$. Hence, the snapshot of $q$ succeeds
the last snapshot (of size $l<l'$) taken by any process $p$ that
\emph{returned at level} $l$. But immediately before taking this last
snapshot, every such process has written $(v,l)$ in $R$
(line~\ref{Alg:l:LockProp}) for some $v\in O_l$. Therefore $q$ must see
$(v,l)$ in its snapshot of size $l'$ and, since, by assumption, the snapshot
contains no values written at levels higher than $l$, $q$ must have adopted
some value written at level $l$, i.e., $v'\in O_l$.
Inductively, we derive that any subsequent process must adopt a value in $O_l$.
We have thus shown that every returned value belongs to $O_l$, where $l$ is the
smallest \emph{level} at which some process exits the repeat loop (line~\ref{Alg:l:ConstantPart}).
By the snapshot inclusion property, participation can only grow;
thus there is a single participating set
of size $l$, denoted $P_l$, that can be observed in a given run.
Every value in $O_l$ has been adopted as a decision returned by
the $\alpha(P_l)$-agreement, which can return at most $\alpha(P_l)$ distinct
values, as it is accessed only by processes in~$P_l$.
Therefore the set of returned decisions, being included in $O_l$,
has size at most~$\alpha(P_l)$. Finally, it is easy to see that,
for any process returning at a time $\tau$, the participation $P_\tau$
at time $\tau$ satisfies $P_l\subseteq P_\tau$ and thus $\alpha(P_l)\leq\alpha(P_\tau)$.
Therefore, the number of distinct decisions returned
by any time $\tau$ is at most $\alpha(P_\tau)$.
\end{itemize}
\end{proof}
This adaptive agreement is simple but central to all our model simulations. All reductions share a common simulation structure: processes write their inputs, then use their access to the shared memory and their ability to solve adaptive agreement to run an adaptive BGG simulation in which, at any time $\tau$, the number of active BG simulators is at most $\alpha_M(P_\tau)$, where $P_\tau$ is the participating set at time $\tau$, and at least one of these BG simulators takes infinitely many steps.
\section{Properties of the $\alpha$-model}
\label{sec:univer}
We now relate task solvability in the $\alpha$-model and in $M$.
More precisely, we show that (1) the agreement function of
the $\alpha$-model is $\alpha$ and
(2) any task $T$ solvable in the $\alpha$-model is also solvable in every
model with agreement function $\alpha$.
\begin{theorem}
\label{lem:activeResilency}
The agreement function of the $\alpha$-model is $\alpha$.
\end{theorem}
\begin{proof}
Take $P$ such that $\alpha(P)\geq 1$ and consider the set of runs of the
$\alpha$-model in which no process in $\Pi\setminus P$ participates
and, thus, by the monotonicity property, at most $\alpha(P)-1$ processes are faulty.
To solve $\alpha(P)$-set consensus, we use the \emph{safe-agreement} protocol~\cite{BG93b},
the crucial element of BG simulation.
Safe agreement solves consensus if every process that participates in
it takes enough steps; the failure of a process may \emph{block} the protocol.
In our case, at most $\alpha(P)-1$ processes in $P$ can fail, so we
can simply run $\alpha(P)$ safe-agreement protocols: every process
goes through the protocols one by one, using its input as the proposed
value; if a protocol blocks, the process proceeds to the next one in
round-robin manner.
The first protocol that returns gives the output value.
Since at most $\alpha(P)-1$ processes are faulty, at least one safe
agreement eventually terminates, and there are at most $\alpha(P)$
distinct outputs.
To see that $(\alpha(P)-1)$-set consensus cannot be solved in this set of runs,
recall that one cannot solve $(\alpha(P)-1)$-set consensus
$(\alpha(P)-1)$-resiliently~\cite{BG93b,HS99,SZ00}.
\end{proof}
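The round-robin strategy used in the proof can be sketched as follows in Python, assuming a hypothetical non-blocking interface to the safe-agreement instances:
\begin{verbatim}
def round_robin_set_consensus(safe_agreements, my_input):
    """Cycle through alpha(P) safe-agreement instances, moving on
    whenever the current instance is blocked by a slow or crashed
    process.  `safe_agreements` is a list of assumed black-box objects
    whose non-blocking try_decide(v) returns either
    ('decided', value) or ('blocked', None)."""
    while True:
        for sa in safe_agreements:
            status, value = sa.try_decide(my_input)
            if status == 'decided':
                return value
            # otherwise this instance is currently blocked: try the next
\end{verbatim}
Since at most $\alpha(P)-1$ processes can block instances, at least one of the $\alpha(P)$ instances eventually decides for every correct process.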
\noindent The following result is instrumental in our
characterizations of \emph{fair} adversaries:
\begin{theorem}\label{thm:adaptiveAbstract}
For any task $T$ solvable in an $\alpha$-model, $T$ is solvable in any read-write shared
memory model which solves the $\alpha$-adaptive set consensus task.
\end{theorem}
\begin{proof}
Using $\alpha$-adaptive set consensus
and read-write shared memory,
we can run the BGG simulation so that,
when the participating set is~$P$,
at most $\alpha(P)$ \emph{BG simulators} are activated
and at least one of them is live
(i.e., takes part in infinitely many simulation steps).
Moreover, we make a process stop proposing simulated steps to the BGG
simulation once it has been provided with a (simulated) task output.
Hence, the number of active simulators is also bounded by
the number of participating processes without an output,
with at least one live BG simulator if there is
a correct process without a task output.
These BG simulators are used to simulate an execution of
a protocol solving~$T$ in the $\alpha$-model.
Since any finite run can be extended to
a valid run of the $\alpha$-model,
the protocol can only provide valid outputs.
We make the BG simulators execute a \emph{breadth-first} simulation:
every BG simulator executes an infinite loop consisting of
(1) updating the estimated participating set $P$ and then (2) trying to
execute a simulation step of every process in~$P$, one by one.
Now assume that there exist $k\geq 1$ correct processes that
are never provided with a task output.
The BGG simulation ensures that we eventually have at most
$\min(k,\alpha(P))$ active simulators, with at least one live simulator among them.
Let $s$ be such a live simulator. After every process in $P$ has taken
its first step, $s$ tries to simulate steps of every process in $P$
infinitely often. The simulation of a process's step can be blocked forever only
by an active but not live BG simulator\footnote{
Note that the extended BG-simulation provides a mechanism which ensures that
a simulation step is not blocked forever by a no longer active BG simulator.},
so at most $\min(k,\alpha(P))-1$ simulated processes in $P$
take only finitely many steps.
As at most $\alpha(P)-1$ processes have finitely many
simulated steps, the simulated run is a valid run of the $\alpha$-model.
Moreover, as at most $k-1$ processes have finitely many
simulated steps, at least one process that is never provided with a task output
is simulated as a correct process.
But a protocol solving a task eventually provides
task outputs to every correct process --- a contradiction.
\end{proof}
\noindent
Using Theorem~\ref{thm:AdaptiveAgreement} in combination with Theorem~\ref{thm:adaptiveAbstract},
we derive that:
\begin{corollary}
\label{cor:adaptive}
Let $M$ be any model, $\alpha_M$ be its agreement function, and $T$
be any task that is solvable in the $\alpha_M$-model.
Then $M$ solves $T$.
\end{corollary}
\section{Characterizing fair adversaries}
\label{sec:adv}
An \emph{adversary} $\ensuremath{\mathcal{A}}$ is a set of subsets of $\Pi$, called \emph{live sets}, $\ensuremath{\mathcal{A}}\subseteq 2^{\Pi}$.
An infinite run is \emph{$\ensuremath{\mathcal{A}}$-compliant} if the set of processes that are correct in that run
belongs to~$\ensuremath{\mathcal{A}}$. An adversarial $\ensuremath{\mathcal{A}}$-model is thus defined as the set of
$\ensuremath{\mathcal{A}}$-compliant runs.
An adversary is \emph{superset-closed}~\cite{Kuz12} if each
superset of a live set of~$\ensuremath{\mathcal{A}}$ is also an element of $\ensuremath{\mathcal{A}}$, i.e.,
if $\forall S\in \ensuremath{\mathcal{A}}$, $\forall S'\subseteq \Pi$, $S\subseteq S' \implies S'\in\ensuremath{\mathcal{A}}$.
Superset-closed adversaries provide a non-uniform
generalization of the classical \emph{$t$-resilient} adversary
consisting of sets of $n-t$ or more processes.
An adversary $\ensuremath{\mathcal{A}}$ is a \emph{symmetric} adversary if it does not depend on process
identifiers: $\forall S \in \ensuremath{\mathcal{A}}$, $\forall S' \subseteq \Pi$, $|S'|=|S|
\implies S'\in\ensuremath{\mathcal{A}}$. Symmetric adversaries provide another interesting
generalization of the classical $t$-resilience and
$k$-obstruction-freedom~\cite{GG09} progress conditions, previously
formalized by Taubenfeld as symmetric progress conditions~\cite{Tau10}.
\subsection{Set consensus power}
The notion of the \emph{set consensus power}~\cite{GK10} was originally
proposed to capture the power of adversaries in solving
\emph{colorless} tasks~\cite{BG93a,BGLR01}, i.e., tasks that can be defined by relating \emph{sets}
of inputs and outputs, independently of process identifiers.
\begin{definition}[Set consensus power]\label{def:scn}
The \emph{set consensus power} of $\ensuremath{\mathcal{A}}$, denoted by $\mathit{setcon}(\ensuremath{\mathcal{A}})$, is defined as follows:
\begin{itemize}
\item If $\ensuremath{\mathcal{A}}=\emptyset$, then $\mathit{setcon}(\ensuremath{\mathcal{A}})=0$
\item Otherwise, $\mathit{setcon}(\ensuremath{\mathcal{A}})= \max_{S\in \ensuremath{\mathcal{A}}} \min_{a\in S}
\mathit{setcon}(\ensuremath{\mathcal{A}}|_{S\setminus\{a\}}) + 1.$
\footnote{$\ensuremath{\mathcal{A}}|_P$ is the adversary consisting of all live sets of $\ensuremath{\mathcal{A}}$ that are subsets of~$P$.}
\end{itemize}
\end{definition}
Thus, for a non-empty adversary $\ensuremath{\mathcal{A}}$, $\mathit{setcon}(\ensuremath{\mathcal{A}})$ is determined as
$\mathit{setcon}(\ensuremath{\mathcal{A}}|_{S\setminus\{a\}})+1$, where~$S$ is an element of~$\ensuremath{\mathcal{A}}$ and~$a$
is a process in~$S$ that ``max-minimizes'' $\mathit{setcon}(\ensuremath{\mathcal{A}}|_{S\setminus\{a\}})$.
Note that for $\ensuremath{\mathcal{A}}\neq\emptyset$, $\mathit{setcon}(\ensuremath{\mathcal{A}})\geq 1$.
It is shown in~\cite{GK10} that $\mathit{setcon}(\ensuremath{\mathcal{A}})$ is the smallest
$k$ such that $\ensuremath{\mathcal{A}}$ can solve $k$-set consensus.
It was previously shown in~\cite{GK11} that, for a superset-closed adversary $\ensuremath{\mathcal{A}}$,
the set consensus power of $\ensuremath{\mathcal{A}}$ is equal to $\mathit{csize}(\ensuremath{\mathcal{A}})$,
where $\mathit{csize}(\ensuremath{\mathcal{A}})$ denotes the size of a minimal \emph{hitting set} of $\ensuremath{\mathcal{A}}$,
i.e., of a minimal subset of $\Pi$ that intersects each live set of $\ensuremath{\mathcal{A}}$.
Therefore, if $\ensuremath{\mathcal{A}}$ is superset-closed, then $\mathit{setcon}(\ensuremath{\mathcal{A}})=\mathit{csize}(\ensuremath{\mathcal{A}})$.
For a symmetric adversary $\ensuremath{\mathcal{A}}$, it can be easily derived from
the definition of $\mathit{setcon}$ that
$\mathit{setcon}(\ensuremath{\mathcal{A}})= |\{k\in\{1,\dots,n\}:\exists S\in\ensuremath{\mathcal{A}},|S|=k\}|$.
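Definition~\ref{def:scn} translates directly into a recursive procedure. The following Python sketch computes $\mathit{setcon}$ for small adversaries and checks the two observations above on toy examples (the system size $n=4$ is an arbitrary choice):
\begin{verbatim}
from functools import lru_cache
from itertools import combinations

def setcon(live_sets):
    """Set consensus power by direct recursion: setcon(A) = 0 if A is
    empty, else max over S in A of (min over a in S of
    setcon(A restricted to S \ {a})) + 1."""
    @lru_cache(maxsize=None)
    def sc(A):
        if not A:
            return 0
        return max(min(sc(frozenset(L for L in A if L <= S - {a}))
                       for a in S)
                   for S in A) + 1
    return sc(frozenset(frozenset(S) for S in live_sets))

Pi = range(1, 5)                  # four processes (arbitrary)
# Symmetric adversary with live sets of sizes 1 and 3:
A_sym = [c for k in (1, 3) for c in combinations(Pi, k)]
print(setcon(A_sym))              # 2 distinct live-set sizes -> 2
# 1-resilient adversary (superset-closed): live sets of size >= 3;
# its minimal hitting sets have size 2, and indeed setcon = 2.
A_1res = [c for k in (3, 4) for c in combinations(Pi, k)]
print(setcon(A_1res))             # 2
\end{verbatim}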
\begin{theorem}
\label{th:adv}
The agreement function of adversary $\ensuremath{\mathcal{A}}$ is
$\alpha_{\ensuremath{\mathcal{A}}}(P)= \mathit{setcon}(\ensuremath{\mathcal{A}}|_P)$.
\end{theorem}
\begin{proof}
An algorithm $A_{P}$ that solves $\alpha_{\ensuremath{\mathcal{A}}}(P)$-set consensus,
assuming that the participating set is a subset of $P$,
is a straightforward generalization of the result of~\cite{GK10}.
It is shown in~\cite{GK10} that $\mathit{setcon}(\ensuremath{\mathcal{A}})$-set consensus can be solved in $\ensuremath{\mathcal{A}}$.
But if we restrict the runs so that
the processes in $\Pi\setminus P$ do not take a single step,
then the set of possible live sets reduces to $\ensuremath{\mathcal{A}}|_P$.
Thus using the agreement algorithm of~\cite{GK10} for the adversary $\ensuremath{\mathcal{A}}|_P$,
we obtain a $\mathit{setcon}(\ensuremath{\mathcal{A}}|_P)$-set consensus algorithm,
or equivalently, an $\alpha_{\ensuremath{\mathcal{A}}}(P)$-set consensus algorithm.
\end{proof}
\noindent
It is immediate from Theorem~\ref{th:adv} that $\ensuremath{\mathcal{A}}\subseteq \ensuremath{\mathcal{A}}'$ implies
$\mathit{setcon}(\ensuremath{\mathcal{A}})\leq \mathit{setcon}(\ensuremath{\mathcal{A}}')$.
\subsection{{Fair} adversaries}
In this paper we propose a class of adversaries that encompasses both
classical classes of superset-closed and symmetric adversaries.
Informally, an adversary is \emph{{fair}}
if its set consensus power does not change when only a
subset of the processes participates in an agreement protocol.
More precisely, consider $\ensuremath{\mathcal{A}}$-compliant runs with participating set $P$ and assume that processes in
$Q\subseteq P$ want to reach agreement \emph{among themselves}: only
these processes propose inputs and are expected to produce outputs.
We can only guarantee outputs to processes in $Q$ when the set of
correct processes includes some process in $Q$, i.e.,
when the current live set intersects with $Q$.
Thus, the best level of set consensus reachable by $Q$ is
defined as the set consensus power of the adversary $\ensuremath{\mathcal{A}}|_{P,Q}=\{S\in \ensuremath{\mathcal{A}}|_P, S\cap Q\neq
\emptyset\}$, unless $|Q|<\mathit{setcon}(\ensuremath{\mathcal{A}}|_P)$.
\begin{definition}[{Fair} adversary]\label{def:fair}
An adversary $\ensuremath{\mathcal{A}}$ is {fair} if and only if:
\[ \forall P \subseteq \Pi,\ \forall Q\subseteq P:\ \mathit{setcon}(\ensuremath{\mathcal{A}}|_{P,Q})= \min(|Q|,\mathit{setcon}(\ensuremath{\mathcal{A}}|_P)){}.\]
\end{definition}
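For small adversaries, fairness can likewise be checked by brute force.
The sketch below reuses \texttt{setcon} and \texttt{restrict} from the earlier sketch;
the enumeration of all $P$ and $Q$ is exponential and meant for illustration only.
\begin{verbatim}
from itertools import combinations

def subsets(X):
    # all subsets of X, as frozensets
    return [frozenset(c) for r in range(len(X) + 1)
            for c in combinations(X, r)]

def restrict_PQ(adversary, P, Q):
    # A|_{P,Q}: live sets of A|_P that intersect Q
    return {S for S in restrict(adversary, P) if S & Q}

def is_fair(adversary, processes):
    # brute-force check of the fairness condition
    return all(
        setcon(restrict_PQ(adversary, P, Q))
        == min(len(Q), setcon(restrict(adversary, P)))
        for P in subsets(processes) for Q in subsets(P))

assert not is_fair(A, {p1, p2, p3})  # A above is not fair
\end{verbatim}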
\begin{property}
\label{prop:nonfair}
\[\forall P\subseteq\Pi,\ \forall Q\subseteq P:\ \mathit{setcon}(\ensuremath{\mathcal{A}}|_{P,Q})\leq \min(|Q|,\mathit{setcon}(\ensuremath{\mathcal{A}}|_P))\]
\end{property}
\begin{proof}
For any $P\subseteq \Pi$ and $Q\subseteq P$,
$\ensuremath{\mathcal{A}}|_{P,Q}=\{S\in \ensuremath{\mathcal{A}}|_P, S\cap Q\neq \emptyset\}$ is a subset of $\ensuremath{\mathcal{A}}|_P$
and, thus, $\mathit{setcon}(\ensuremath{\mathcal{A}}|_{P,Q}) \leq \mathit{setcon}(\ensuremath{\mathcal{A}}|_P)$.
Moreover, $\mathit{setcon}(\ensuremath{\mathcal{A}}|_{P,Q})\leq |Q|$,
as $|Q|$-set consensus can be solved in $\{S\in \ensuremath{\mathcal{A}}|_P, S\cap Q\neq
\emptyset\}$ as follows: every process waits until some
process in $Q$ writes its input and decides on it.
\end{proof}
\begin{theorem}
Any superset-closed adversary is {fair}.
\end{theorem}
\begin{proof}
Suppose that there exists a superset-closed adversary $\ensuremath{\mathcal{A}}$ that
is not {fair}, i.e., by Property~\ref{prop:nonfair}, $\exists P\subseteq \Pi,\exists Q\subseteq P,
\mathit{setcon}(\{S\in \ensuremath{\mathcal{A}}|_P, S\cap Q\neq \emptyset\})<\min(|Q|,\mathit{setcon}(\ensuremath{\mathcal{A}}|_P))$.
Clearly $\ensuremath{\mathcal{A}}|_P$ and $\ensuremath{\mathcal{A}}|_{P,Q}$ are
also superset-closed and, thus, $\mathit{setcon}(\ensuremath{\mathcal{A}}|_P)=\mathit{csize}(\ensuremath{\mathcal{A}}|_P)$ and
$\mathit{setcon}(\ensuremath{\mathcal{A}}|_{P,Q})=\mathit{csize}(\ensuremath{\mathcal{A}}|_{P,Q})$.
Since $\mathit{setcon}(\ensuremath{\mathcal{A}}|_{P,Q})<|Q|$, a minimal hitting set $H'$ of
$\ensuremath{\mathcal{A}}|_{P,Q}$ is such that $|H'|<|Q|$, and therefore there exists a process
$q\in Q$, $q\not\in H'$. Also, since $\mathit{setcon}(\ensuremath{\mathcal{A}}|_{P,Q})<\mathit{setcon}(\ensuremath{\mathcal{A}}|_P)$, $H'$
is not a hitting set of $\ensuremath{\mathcal{A}}|_P$.
Thus, there exists $S\in \ensuremath{\mathcal{A}}|_P$ such that $S\cap
H'=\emptyset$.
Hence, $(S\cup\{q\})\cap H'=\emptyset$.
Since $\ensuremath{\mathcal{A}}|_P$ is superset closed, we have $S\cup\{q\}\in \ensuremath{\mathcal{A}}|_P$ and, since
$q\in Q$, $S\cup\{q\}\in \ensuremath{\mathcal{A}}|_{P,Q}$.
But $(S\cup\{q\})\cap H'=\emptyset$---a contradiction
with $H'$ being a hitting set of $\ensuremath{\mathcal{A}}|_{P,Q}$.
\end{proof}
\begin{theorem}
Any symmetric adversary is {fair}.
\end{theorem}
\begin{proof}
The set consensus power of a generic adversary $\ensuremath{\mathcal{A}}$ is defined recursively
through finding $S\in \ensuremath{\mathcal{A}}$ and $p\in S$ which max-minimize the
set consensus power of $\ensuremath{\mathcal{A}}|_{S\setminus\{p\}}$.
Let us recall that if $\ensuremath{\mathcal{A}}\subseteq \ensuremath{\mathcal{A}}'$ then
$\mathit{setcon}(\ensuremath{\mathcal{A}})\leq \mathit{setcon}(\ensuremath{\mathcal{A}}')$.
Therefore, $S$ can always be selected to
be \emph{locally maximal}, i.e., such that there is no live set $S'\in \ensuremath{\mathcal{A}}$
with $S\subsetneq S'$.
Suppose by contradiction that $\ensuremath{\mathcal{A}}$ is symmetric but not {fair}, i.e.,
by Property~\ref{prop:nonfair},
for some $P\subseteq \Pi$ and $Q\subseteq P$,
$\mathit{setcon}(\ensuremath{\mathcal{A}}|_{P,Q})<\min(|Q|,\mathit{setcon}(\ensuremath{\mathcal{A}}|_P))$.
We show that if the property holds for $P$ and $Q$ such that $\ensuremath{\mathcal{A}}|_{P,Q}\neq\emptyset$ then
it also holds for some $P'\subsetneq P$ and $Q'\subseteq Q$.
First, we observe that $|Q|>1$, otherwise $\mathit{setcon}(\ensuremath{\mathcal{A}}|_{P,Q})=0$ and, thus,
we have $\ensuremath{\mathcal{A}}|_{P,Q}=\emptyset$.
Since $\ensuremath{\mathcal{A}}$ is symmetric, $\ensuremath{\mathcal{A}}|_P$ is also symmetric.
Thus, for every $S\in\ensuremath{\mathcal{A}}|_P$ and $p\in S$ such that
$\mathit{setcon}(\ensuremath{\mathcal{A}}|_P)= 1+\mathit{setcon}(\ensuremath{\mathcal{A}}|_{S\setminus\{p\}})$,
for any $S'\in\ensuremath{\mathcal{A}}|_P$ with $|S'|=|S|$ and any $p'\in S'$, we also have
$\mathit{setcon}(\ensuremath{\mathcal{A}}|_P)= 1+\mathit{setcon}(\ensuremath{\mathcal{A}}|_{S'\setminus\{p'\}})$.
Since we can always choose $S$ to be a maximal set, we derive that
the equality holds for every maximal set $S$ in $\ensuremath{\mathcal{A}}|_P$ and every
$p\in S$.
Let us recall that, by the definition of $\mathit{setcon}$, there exists $L\in \ensuremath{\mathcal{A}}|_{P,Q}$ and $a\in L$ such that
$\mathit{setcon}(\ensuremath{\mathcal{A}}|_{P,Q})= 1+\mathit{setcon}((\ensuremath{\mathcal{A}}|_{P,Q})|_{L\setminus\{a\}})=\mathit{setcon}(\ensuremath{\mathcal{A}}|_{L,Q})$.
Since $\ensuremath{\mathcal{A}}|_P$ is symmetric,
for all $L'\in\ensuremath{\mathcal{A}}|_P$ with $|L'|=|L|$ and $L\cap Q\subseteq L'\cap Q$, we have
$\mathit{setcon}(\ensuremath{\mathcal{A}}|_{L',Q})\geq\mathit{setcon}(\ensuremath{\mathcal{A}}|_{L,Q})$. Indeed, modulo a permutation
of process identifiers, $\ensuremath{\mathcal{A}}|_{L',Q}$ contains all the live sets of
$\ensuremath{\mathcal{A}}|_{L,Q}$ plus live sets in $\ensuremath{\mathcal{A}}|_{L'}$ that overlap with $(L'\cap
Q)\setminus(L\cap Q)$.
Since $\mathit{setcon}(\ensuremath{\mathcal{A}}|_{L,Q})=\mathit{setcon}(\ensuremath{\mathcal{A}}|_{P,Q})$ and $L'\in \ensuremath{\mathcal{A}}|_{P,Q}$, we
have $\mathit{setcon}(\ensuremath{\mathcal{A}}|_{L',Q})=\mathit{setcon}(\ensuremath{\mathcal{A}}|_{L,Q})$.
Therefore, for any $a\in L'$,
$\mathit{setcon}(\ensuremath{\mathcal{A}}|_{L'\setminus\{a\},Q})< \mathit{setcon}(\ensuremath{\mathcal{A}}|_{L'\setminus\{a\}})$.
In particular, for $L'$ with $L'\cap Q\in \{L',Q\}$,
$\mathit{setcon}(\ensuremath{\mathcal{A}}|_{L',Q})=\mathit{setcon}(\ensuremath{\mathcal{A}}|_{L,Q})$.
Note that $L'\nsubseteq Q$, otherwise, $\ensuremath{\mathcal{A}}|_{L',Q}=\ensuremath{\mathcal{A}}|_{L'}$ and, thus,
$\mathit{setcon}(\ensuremath{\mathcal{A}}|_{L',Q})=\mathit{setcon}(\ensuremath{\mathcal{A}}|_{L'})=\mathit{setcon}(\ensuremath{\mathcal{A}}|_P)$, contradicting
our assumption.
Thus, let us assume that $Q\subsetneq L'$.
Note that $Q'=Q\setminus\{a\}\subsetneq L'\setminus\{a\}$, and since
$|Q|\geq 2$, $Q'\neq\emptyset$,
we have $\mathit{setcon}(\ensuremath{\mathcal{A}}|_{P',Q'})< \mathit{setcon}(\ensuremath{\mathcal{A}}|_{P'})$ for
$P'=L'\setminus\{a\}$ and $Q'\subseteq P'$, $Q'\neq\emptyset$.
Furthermore, since $\mathit{setcon}(\ensuremath{\mathcal{A}}|_{P,Q})<|Q|$, we have
$\mathit{setcon}(\ensuremath{\mathcal{A}}|_{P',Q'})<|Q'|$.
By applying this argument inductively, we end up with sets
$P$ and $Q\subseteq P$ such that $\mathit{setcon}(\ensuremath{\mathcal{A}}|_P)\geq 1$,
$Q\neq\emptyset$ and
$\mathit{setcon}(\ensuremath{\mathcal{A}}|_{P,Q})=0$.
By the definition of $\mathit{setcon}$, $\ensuremath{\mathcal{A}}|_P\neq\emptyset$ and
$\ensuremath{\mathcal{A}}|_{P,Q}=\emptyset$.
But $\ensuremath{\mathcal{A}}|_P$ is symmetric and $Q\neq\emptyset$, so for every $S\in \ensuremath{\mathcal{A}}|_P$, there exists
$S'\in\ensuremath{\mathcal{A}}|_P$ such that $|S|=|S'|$ and $S'\cap Q \neq \emptyset$,
i.e., $\ensuremath{\mathcal{A}}|_{P,Q}\neq\emptyset$---a contradiction.
\end{proof}
Note that not all adversaries are fair.
For example, the adversary
$\ensuremath{\mathcal{A}}=\{\{p_1\},\{p_2,p_3\},\{p_1,p_2,p_3\}\}$ is not fair.
On the other hand, not all fair adversaries are either
superset-closed or symmetric. For example, the adversary
$\ensuremath{\mathcal{A}}=2^{\{p_1,p_2,p_3\}}\setminus\{\{p_1,p_2\}\}$, i.e., all non-empty
subsets of $\{p_1,p_2,p_3\}$ except $\{p_1,p_2\}$, is fair but is
neither symmetric nor superset-closed. Understanding
what makes an adversary fair is an interesting challenge.
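The last claim can be verified mechanically with the sketches above (assuming, as in the
rest of this section, that adversaries contain only non-empty live sets):
\begin{verbatim}
B = {S for S in subsets({p1, p2, p3})
     if S and S != frozenset({p1, p2})}
assert setcon(B) == 2
assert is_fair(B, {p1, p2, p3})
\end{verbatim}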
\subsection{Task computability in {fair} adversarial models}
In this section, we show that the task computability
of a {fair} adversarial $\ensuremath{\mathcal{A}}$-model is fully captured by
its associated agreement function $\alpha_\ensuremath{\mathcal{A}}$.
\begin{algorithm}
\caption{Code for BG simulator $b_i$ to simulate adversary $\ensuremath{\mathcal{A}}$.\label{Alg:AdversarySimulation}}
\SetKwRepeat{Repeat}{Repeat}{Forever}%
\textbf{Shared variables:} $R[1,\dots,\alpha_\ensuremath{\mathcal{A}}(\Pi)] \leftarrow (\bot,\emptyset),\ P_{MEM}[p_1,\dots,p_n] \leftarrow \bot$\;\label{Alg:Adv:MemoryInitState}
\textbf{Local variables:} $S_{cur},S_{tmp},P,A,W \in 2^\Pi,p_{cur},p_{tmp} \in \mathbb{N},S_{cur}\leftarrow\emptyset$\;\label{Alg:Adv:Init}
\vspace{1em}
\Repeat{}{\label{Alg:Adv:loop-begin}
$P = \{p\in \Pi, P_{MEM}[p]\neq \bot\}$\;\label{Alg:Adv:Participation}
$A = \{p\in P, P_{MEM}[p]\neq \top\}$\;
\If{$i \leq \min(|A|,\alpha_\ensuremath{\mathcal{A}}(P))$}{\label{Alg:Adv:TestActive}
$W = P$\;\label{Alg:Adv:LSselection-begin}
\For{$j=\alpha_\ensuremath{\mathcal{A}}(\Pi)$ \textbf{down to} $i+1$}{\label{Alg:Adv:WselectLoop}
$(p_{tmp},S_{tmp}) \leftarrow R[j]$\;
\If{$(p_{tmp}\neq\bot)\wedge(S_{tmp}\subseteq W)\wedge(\mathit{setcon}(\ensuremath{\mathcal{A}}|_{S_{tmp},A})\geq j)$}
{\label{Alg:Adv:RecentlyActive}
$W \leftarrow S_{tmp}\setminus\{p_{tmp}\}$\;\label{Alg:Adv:Wselect}
}
}
\vspace{1em}
\If{$(S_{cur}\not\subseteq W)\vee(\mathit{setcon}(\ensuremath{\mathcal{A}}|_{S_{cur},A})< i)$}{\label{Alg:Adv:CheckCur}
\If{$\exists S\in \ensuremath{\mathcal{A}}|_W, \mathit{setcon}(\ensuremath{\mathcal{A}}|_{S,A})\geq i$}{\label{Alg:Adv:CheckExists}
$S_{cur} = S\in \ensuremath{\mathcal{A}}|_W \mathbf{\ such\ that\ } \mathit{setcon}(\ensuremath{\mathcal{A}}|_{S,A})\geq i$\;\label{Alg:Adv:SelectNew}
}
\lElse{$S_{cur}= S\in \ensuremath{\mathcal{A}}|_P$}\label{Alg:Adv:SelectAny}
$p_{cur} = S_{cur}.first()$\;
$R[i] \leftarrow (p_{cur},S_{cur})$\;\label{Alg:Adv:LSselection-end}
}
\vspace{1em}
\If{$(\mathbf{SimulateStep}(p_{cur})=SUCCESS)$}{\label{Alg:Adv:LSsimulation-begin}
\lIf{$\mathbf{Outputed}(p_{cur})$}{$P_{MEM}[p_{cur}]=\top$}\label{Alg:Adv:SetTerminated}
$p_{cur} = S_{cur}.next(p_{cur})$\;
}
\lElse{
$\mathbf{AbortStep}(p_{cur})$\label{Alg:Adv:LSsimulation-end}
}
}
}\label{Alg:Adv:loop-end}
\end{algorithm}
Using BGG simulation, we show that the $\alpha_\ensuremath{\mathcal{A}}$-model
can be used to solve any task $T$ solvable in the $\ensuremath{\mathcal{A}}$-model.
In the simulation, up to $\alpha_\ensuremath{\mathcal{A}}(P)$ BG simulators execute the
given algorithm solving $T$, where $P$ is the participating set of the
current run.
We adapt the currently simulated live set to include processes
not yet provided with a task output, and ensure that the chosen live set is
simulated long enough for some active processes to be provided
with outputs of~$T$.
The simulation terminates as soon as all correct processes are
provided with outputs.
The code for BG simulator $b_i\in\{b_1,\dots,b_{\alpha_\ensuremath{\mathcal{A}}(\Pi)}\}$ is
given in Algorithm~\ref{Alg:AdversarySimulation}.
It consists of two parts: (1)~selecting a live set to simulate
(lines~\ref{Alg:Adv:LSselection-begin}--\ref{Alg:Adv:LSselection-end}),
and (2)~simulating processes in the selected live set
(lines~\ref{Alg:Adv:LSsimulation-begin}--\ref{Alg:Adv:LSsimulation-end}).
\myparagraph{Selecting a live set.}
This is the most involved part.
The idea is to select a participating live set
$L\subseteq P$ such that:
(1) the set consensus power of $\ensuremath{\mathcal{A}}|_{L,A}=\{S\in \ensuremath{\mathcal{A}}|_L,S\cap
A\neq\emptyset\}$, with $A$ the set of participating processes not yet provided
with a task output, is greater than or equal to the BG simulator identifier $i$;
(2) $L$ is a subset of the live sets currently selected
by live BG simulators with greater identifiers;
(3) $L$ does not contain the processes currently simulated
by live BG simulators with greater identifiers.
The live set selection in Algorithm~\ref{Alg:AdversarySimulation} consists of two phases.
First, BG simulators determine a \emph{selection window} $W$, $W\subseteq P$, i.e.,
the largest set of processes which is a subset
of the live sets selected by live BG simulators with greater identifiers,
and which excludes the processes currently selected
by live BG simulators with greater identifiers
(lines~\ref{Alg:Adv:LSselection-begin}--\ref{Alg:Adv:Wselect}).
This is done iteratively on all BG simulators with
greater identifiers, from the greatest to the lowest.
At each iteration, if the targeted BG simulator $b_k$ \emph{appears live},
the current window is restricted to the live set selected by $b_k$,
but excluding the process selected by $b_k$.
Determining if $b_k$ appears live is simply done by checking whether,
with the current simulation status observed,
the live set selected by $b_k$ is \emph{valid},
i.e., satisfies conditions~(1),~(2) and~(3) above.
The second phase (lines~\ref{Alg:Adv:CheckCur}--\ref{Alg:Adv:LSselection-end})
consists of checking whether the currently selected live set is valid (line~\ref{Alg:Adv:CheckCur}).
If not, the BG simulator
tries to select a live set $L$ which belongs to the selection window $W$, and hence satisfies (2) and (3),
but also such that the set consensus power of $\ensuremath{\mathcal{A}}|_{L,A}$ is at least $i$,
the BG simulator identifier (line~\ref{Alg:Adv:SelectNew}).
If the simulator does not find such a live set,
it simply selects any available live set (line~\ref{Alg:Adv:SelectAny}).
\myparagraph{Simulating a live set.}
The idea is that, if the selected live set does not change,
the BG simulator simulates steps of
every process in its selected live set infinitely often.
Unlike conventional variations of the BG simulation, a BG simulator here
does not skip a blocked process simulation;
instead, it aborts and retries the same simulation step until it succeeds.
Intuitively, this does not obstruct progress because,
in case of a conflict, there are two live BG simulators blocked on the
same simulation step, but the BG simulator with the smaller identifier
will eventually change its selected live set and release the corresponding process.
\myparagraph{Pseudocode.}
The protocol executed by processes in the $\alpha_\ensuremath{\mathcal{A}}$-model is the following:
Processes first update their status in $P_{MEM}$
by replacing $\bot$ with their initial state.
Then, processes participate in an $\alpha_{\ensuremath{\mathcal{A}}}$-adaptive BGG simulation
(i.e., BGG simulation runs on top of an $\alpha_{\ensuremath{\mathcal{A}}}$-adaptive
set consensus protocol),
where BG simulators use Algorithm~\ref{Alg:AdversarySimulation}
to simulate an algorithm solving a given task~$T$ in the adversarial $\ensuremath{\mathcal{A}}$-model.
When a process $p$ observes that $P_{MEM}[p]$ has been set to $\top$
(``termination state''),
it stops proposing simulation steps.
\myparagraph{Proof of correctness.}
Let $P_f$ be the participating set of the $\alpha_\ensuremath{\mathcal{A}}$-model run,
and let $A_f$ be the set of processes $p\in P_f$ such that $P_{MEM}[p]$ is never set to $\top$.
\begin{lemma}
\label{lem:stable}
There is a time after which variables $P$ and $A$ in
Algorithm~\ref{Alg:AdversarySimulation}
become constant and equal to $P_f$ and $A_f$, respectively, for all live BG simulators.
\end{lemma}
\begin{proof}
Since $\Pi$ is finite, and since the first step of each process $p$ is to set
$P_{MEM}[p]$ to its initial state, and $P_{MEM}[p]$ can only be updated to $\top$
afterwards, the set of processes $p$ such that $P_{MEM}[p]\neq \bot$ eventually
corresponds to $P_f$.
Moreover, after $P_{MEM}[p]$ is set to~$\top$, it cannot be set to another value;
hence, eventually, the set of processes from $P_f$
such that $P_{MEM}[p]\neq \top$ is equal to~$A_f$.
Live BG simulators update $P$ and $A$ infinitely often,
so eventually their values of $P$ and $A$ are equal to $P_f$ and $A_f$, respectively.
\end{proof}
\begin{lemma}
\label{lem:bgsim}
If $A_f$ contains a correct process, then there is a
correct BG simulator with an identifier smaller than or equal to $\min(|A_f|,\alpha_\ensuremath{\mathcal{A}}(P_f))$.
\end{lemma}
\begin{proof}
In our protocol, eventually only correct processes in $A_f$
propose BGG simulation steps. Thus eventually, at most
$|A_f|$ distinct simulation steps are proposed.
The $\alpha_\ensuremath{\mathcal{A}}$-adaptive set consensus protocol used for the BGG simulation
ensures that at most $\alpha_\ensuremath{\mathcal{A}}(P_f)$ distinct
proposed values are decided. But as there is a time after which only processes in $A_f$
propose values, eventually, $\min(|A_f|,\alpha_\ensuremath{\mathcal{A}}(P_f))$-set consensus
is solved. Thus the BGG simulation ensures that, when this is the case, there is a live
BG simulator with an identifier smaller than or equal to $\min(|A_f|,\alpha_\ensuremath{\mathcal{A}}(P_f))$.
\end{proof}
Suppose that $A_f$ contains a correct process, and let $b_m$ be the
greatest live BG simulator such that $m\leq
\min(|A_f|,\alpha_\ensuremath{\mathcal{A}}(P_f))$ (such a simulator exists by Lemma~\ref{lem:bgsim}).
Let $S_i(t)$ denote the value of $S_{cur}$ and
let $p_i(t)$ denote the value of $p_{cur}$ at simulator $b_i$ at time
$t$.
Let also $\tau_f$ be the time after which every active but not live BG simulator
has taken all its steps, and after which $A$ and $P$ have become constant and
equal to $A_f$ and~$P_f$ for every live BG simulator (by Lemma~\ref{lem:stable}).
\begin{lemma}
For every live BG simulator $b_s$, with $s\leq
\min(|A_f|,\alpha_\ensuremath{\mathcal{A}}(P_f))$,
eventually, $b_s$ cannot fail the test on line~\ref{Alg:Adv:CheckExists}.\label{lem:ValidSetcon}
\end{lemma}
\begin{proof}
Consider a correct BG simulator $b_s$ starting a round after time
$\tau_f$. Let $W_s$ be the value of $W$ at the end of
line~\ref{Alg:Adv:Wselect}. Two cases may arise:
\begin{itemize}
\item If $W_s=P_f$ then, since $\ensuremath{\mathcal{A}}$ is {fair},
$\mathit{setcon}(\ensuremath{\mathcal{A}}|_{W_s,A_f})=\min(|A_f|,\mathit{setcon}(\ensuremath{\mathcal{A}}|_{P_f}))$.
Thus, $\mathit{setcon}(\ensuremath{\mathcal{A}}|_{W_s,A_f})\geq s$.
\item Otherwise, $W_s$ is set on line~\ref{Alg:Adv:Wselect} to some
$S_{target}\setminus \{p_{target}\}$ at some iteration~$l$,
with $\mathit{setcon}(\ensuremath{\mathcal{A}}|_{S_{target},A_f})\geq l$ and $l>s$.
We have $\mathit{setcon}(\ensuremath{\mathcal{A}}|_{W_s,A_f}) =
\mathit{setcon}((\ensuremath{\mathcal{A}}|_{S_{target},A_f})|_{S_{target}\setminus\{p_{target}\}})$
which, by the definition of $\mathit{setcon}$,
is greater than or equal to $\mathit{setcon}(\ensuremath{\mathcal{A}}|_{S_{target},A_f}) - 1\geq l-1\geq s$,
so we have
$\mathit{setcon}(\ensuremath{\mathcal{A}}|_{W_s,A_f})\geq s$.
\end{itemize}
By the definition of $\mathit{setcon}$, as $\mathit{setcon}(\ensuremath{\mathcal{A}}|_{W_s,A_f})\geq s$,
there exists a live set $S\in \ensuremath{\mathcal{A}}|_{W_s}$ such that $\mathit{setcon}(\ensuremath{\mathcal{A}}|_{S,A_f})\geq s$.
So, eventually $b_s$ will always succeed the test on line~\ref{Alg:Adv:CheckExists}.
\end{proof}
\begin{lemma}
For every live BG simulator $b_s$, with $s\leq \min(|A_f|,\alpha_\ensuremath{\mathcal{A}}(P_f))$,
eventually, the value of $W$ computed at the end of iteration $m+1$ (at lines~\ref{Alg:Adv:WselectLoop}--\ref{Alg:Adv:Wselect}) is
equal to some constant value $W_{m,f}$.\label{lem:StableWindow}
\end{lemma}
\begin{proof}
No BG simulator $b_l$, with $l> m$, executes lines~\ref{Alg:Adv:LSselection-begin}--\ref{Alg:Adv:LSsimulation-end} after time~$\tau_f$; therefore $R[l]$ is constant after time~$\tau_f$, for every $l>m$. The computation of~$W$ at lines~\ref{Alg:Adv:LSselection-begin}--\ref{Alg:Adv:Wselect} only depends on the values of $A$, $P$ and $R[l]$, for $\alpha_\ensuremath{\mathcal{A}}(\Pi)\geq l>m$, all of which are constant after time~$\tau_f$. Hence the value of $W$ computed at the end of line~\ref{Alg:Adv:Wselect} for iteration $m+1$
is the same in every round initiated after time~$\tau_f$, for any
live BG simulator $b_s$ with $s\leq \min(|A_f|,\alpha_\ensuremath{\mathcal{A}}(P_f))$.
\end{proof}
\begin{lemma}If $A_f$ contains a correct process,
then the set of processes with an infinite number of simulated steps
is a live set of $\ensuremath{\mathcal{A}}$ containing a process of~$A_f$.\label{lem:ContradictoryLemma}
\end{lemma}
\begin{proof}
As $b_m$ is live, it proceeds to an infinite number of rounds.
By Lemma~\ref{lem:StableWindow}, eventually $b_m$ computes the same window $W_{m,f}$ in every
round. By Lemma~\ref{lem:ValidSetcon}, if $b_m$ does not have a valid live set
selected, then it eventually selects a valid one within~$W_{m,f}$.
Thus, eventually $b_m$ never changes its selected live set.
Let $S_{m,f}$ be this live set.
Afterwards, in each round, $b_m$ tries to complete a simulation step of $p_m(t)$ and,
if the step completes successfully, advances $p_m(t)$ in a round-robin manner among $S_{m,f}$.
Two cases may arise:
\begin{itemize}
\item If $p_m(t)$ never stabilizes, then the set of processes with
an infinite number of simulated steps includes $S_{m,f}$.
By Lemma~\ref{lem:StableWindow}, every other live BG simulator
with a smaller identifier computes the same value of $W$
at the end of round $m+1$ (of the loop at lines~\ref{Alg:Adv:WselectLoop}--\ref{Alg:Adv:Wselect}).
Thus, after $S_{m,f}$ is selected by $b_m$, as $S_{m,f}$ is valid,
every live BG simulator with a smaller identifier will select a subset of $S_{m,f}$ as its window value in every round.
Moreover, by Lemma~\ref{lem:ValidSetcon},
these BG simulators will always find valid live sets to select,
and so they will eventually simulate only processes in $S_{m,f}$.
Thus, the set of processes with infinitely many simulated steps
is equal to $S_{m,f}$, a live set intersecting with $A_f$.
\item Otherwise, $p_m(t)$ eventually stabilizes on some $p_{m,f}$.
Therefore, $b_m$ attempts to complete a simulation step of $p_{m,f}$ infinitely often.
Two sub-cases may arise:
\begin{itemize}
\item Either $|S_{m,f}|=1$ and, therefore, $b_m$ is the only live
BG simulator performing simulation steps; thus,
the set of processes with an infinite number of simulated steps
is equal to $S_{m,f}$, a live set intersecting with $A_f$.
\item Otherwise, by Lemma~\ref{lem:StableWindow},
every live BG simulator with a smaller identifier eventually
selects a window, and thus a live set (Lemma~\ref{lem:ValidSetcon}),
which is a subset of $S_{m,f}\setminus\{p_{m,f}\}$. Thus
every live BG simulator with a smaller identifier eventually
selects processes to simulate distinct from $p_{m,f}$ and, thus,
cannot block $b_m$ infinitely often---a contradiction.
\end{itemize}
\end{itemize}
\end{proof}
\begin{lemma}
If $\ensuremath{\mathcal{A}}$ is {fair}, then any task $T$ solvable in the $\ensuremath{\mathcal{A}}$-model is solvable in the $\alpha_\ensuremath{\mathcal{A}}$-model.\label{lem:AdvReduction}
\end{lemma}
\begin{proof}
Let us assume the contrary: there exists a task $T$ and a
{fair} adversary $\ensuremath{\mathcal{A}}$ such that $T$ is solvable in the adversarial
$\ensuremath{\mathcal{A}}$-model but not in the $\alpha_\ensuremath{\mathcal{A}}$-model. As every finite run
of the $\ensuremath{\mathcal{A}}$-model can be extended to an $\ensuremath{\mathcal{A}}$-compliant run, the
simulated algorithm can only provide valid outputs to the simulated
processes. Thus, it can only be the case that a correct process is
not provided with a task output, i.e., belongs to $A_f$.
Therefore, by Lemma~\ref{lem:ContradictoryLemma},
the simulation provides an $\ensuremath{\mathcal{A}}$-compliant run, i.e.,
the set of processes with an infinite number of simulated steps is a live set.
As the run is $\ensuremath{\mathcal{A}}$-compliant, each process $p$
with an infinite number of simulated steps
is eventually provided with a task output,
and thus $P_{MEM}[p]$ is set to $\top$.
Hence, such processes cannot belong to $A_f$---a contradiction.
\end{proof}
Combining Corollary~\ref{cor:adaptive} and Lemma~\ref{lem:AdvReduction} we obtain the following result:
\begin{theorem}
\label{th:adv:task}
For any {fair} adversary $\ensuremath{\mathcal{A}}$, the adversarial $\ensuremath{\mathcal{A}}$-model and the $\alpha_\ensuremath{\mathcal{A}}$-model are equivalent regarding task solvability.
\end{theorem}
\section{Agreement functions do not always tell it all}
\label{sec:counter}
We observe that agreement functions are not able to characterize
the task computability power of \emph{all} models.
In particular, there are non-{fair} adversaries
that are not captured by their agreement functions.
Consider for example the adversary
$\ensuremath{\mathcal{A}}=\{\{p_1\},\{p_2,p_3\},\{p_1,p_2,p_3\}\}$.
It is easy to see that $\mathit{setcon}(\ensuremath{\mathcal{A}})=2$, but that
$\mathit{setcon}(\ensuremath{\mathcal{A}}|_{\Pi,\{p_2,p_3\}})= 1$ which is strictly smaller than
$\min(|\{p_2,p_3\}|,\mathit{setcon}(\ensuremath{\mathcal{A}}))= 2$.
Therefore, $\ensuremath{\mathcal{A}}$ is non-{fair}.
Consider the task $Cons_{2,3}$ consisting in consensus among $p_2$ and
$p_3$: every process in $\{p_2,p_3\}$ proposes a value and every
correct process in $\{p_2,p_3\}$ decides a proposed value, so that $p_2$ and $p_3$ cannot decide
different values.
$Cons_{2,3}$ is solvable in the adversarial $\ensuremath{\mathcal{A}}$-model:
every process in $\{p_2,p_3\}$ simply waits until $p_2$ writes its proposed value and decides on
it. Indeed, this protocol solves $Cons_{2,3}$ in the $\ensuremath{\mathcal{A}}$-model
because every live set of $\ensuremath{\mathcal{A}}$ that contains $p_3$ also contains $p_2$:
if $p_3$ is correct, then $p_2$ is also correct.
The agreement function of $\ensuremath{\mathcal{A}}$, $\alpha_{\ensuremath{\mathcal{A}}}$, is equal to $0$ for
$\emptyset$, $\{p_2\}$ and $\{p_3\}$, to $2$ for $\{p_1,p_2,p_3\}$, and to $1$ for
all other participating sets.
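These values can be reproduced with the \texttt{setcon} sketch given earlier:
\begin{verbatim}
alpha = lambda P: setcon(restrict(A, frozenset(P)))
assert alpha({p1, p2, p3}) == 2
assert alpha({p1}) == alpha({p1, p2}) == 1
assert alpha({p1, p3}) == alpha({p2, p3}) == 1
assert alpha({p2}) == alpha({p3}) == 0
\end{verbatim}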
It is easy to see that $\alpha_{\ensuremath{\mathcal{A}}}$ only differs from
$\alpha_{1-res}$, the agreement function of the $1$-resilient
adversary, for $\{p_1\}$, where
$\alpha_{\ensuremath{\mathcal{A}}}(\{p_1\})=1>\alpha_{1-res}(\{p_1\})=0$.
Therefore, $\forall P\subseteq\Pi$, $\alpha_{\ensuremath{\mathcal{A}}}(P)\geq \alpha_{1-res}(P)$, and thus any task solvable in the $\alpha_{\ensuremath{\mathcal{A}}}$-model is solvable in the $1$-resilient model.
The impossibility of solving such a task $1$-resiliently can be
directly derived from the characterization of tasks solvable
$t$-resiliently from~\cite{Gaf09-EBG}.
Indeed, suppose $Cons_{2,3}$ is solvable $1$-resiliently, and let $p_1$ wait for some
process to output in order to decide the same value, while processes $p_2$
and $p_3$ use their ability to solve consensus among themselves to
output a unique value. As at least two processes in the
system are correct, $p_2$ or $p_3$ eventually terminates and thus $p_1$ does
not wait indefinitely. This gives a
$3$-process $1$-resilient consensus algorithm---a
contradiction~\cite{FLP85,LA87}.
Thus, the $\ensuremath{\mathcal{A}}$-model is not equivalent to the
$\alpha_{\ensuremath{\mathcal{A}}}$-model, even though they have the same agreement function.
\section{Related work}
\label{sec:related}
Adversarial models were introduced by Delporte et
al. in~\cite{DFGT11}.
With respect to colorless tasks, Herlihy and Rajsbaum~\cite{HR12}
characterized the class of \emph{superset-closed}~\cite{Kuz12}
adversaries (adversaries closed under the superset operation) via their minimal
core sizes.
Still with respect to colorless tasks, Gafni and Kuznetsov~\cite{GK10}
derived a characterization of a general adversary using its
\emph{set consensus power} function $\mathit{setcon}$.
A side result of this present paper is an extension of the characterization
in~\cite{GK10} to any (not necessarily colorless) tasks.
Taubenfeld introduced in~\cite{Tau10} the notion of symmetric progress
conditions, which is equivalent to our symmetric adversaries.
The BG simulation establishes equivalence between $t$-resilience and wait-freedom with respect to task
solvability~\cite{BG93a,BGLR01,Gaf09-EBG}. Gafni and Guerraoui~\cite{GG11-univ} showed that if a model allows
for solving $k$-set consensus, then it can be used to simulate a \emph{$k$-concurrent} system in which at
most $k$ processes are concurrently invoking a task.
In our simulation, we use the fact that a model $M$ associated with an agreement function $\alpha_M$
allows solving $\alpha_M$-adaptive set consensus, using the technique proposed in~\cite{DFGK15},
which enables a composition of the ideas in~\cite{BG93a,BGLR01,Gaf09-EBG} and~\cite{GG11-univ}.
Running BG simulation on top of a $k$-concurrent system,
we are able to derive the equivalence between fair adversaries
and their corresponding $\alpha$-models.
\section{Concluding remarks}
\label{sec:conc}
By Theorem \ref{th:adv:task}, task
computability
of a {fair} adversary $\ensuremath{\mathcal{A}}$ is \emph{characterized} by its
agreement function $\alpha$: a task is solvable with $\ensuremath{\mathcal{A}}$ if and only
if it is solvable in the $\alpha$-model.
The result implies characterizations of superset-closed~\cite{HR10,Kuz12} and
symmetric~\cite{Tau10} adversaries and, via the equivalence
result established in~\cite{GG09}, the model of $k$-concurrency.
As a corollary, for all models $M$ and $M'$ characterized by their
agreement functions, such that $\forall P\subseteq
\Pi,\alpha_{M'}(P)\geq\alpha_M(P)$,
we have that $M$ is \emph{stronger} than $M'$, i.e.,
the set of tasks solvable in $M$ contains the set of tasks solvable in
$M'$.
In particular, if the two agreement functions are equal, then $M$ and
$M'$ solve exactly the same sets of tasks.
Note that if a model $M$ is characterized by its agreement function $\alpha$, then it
belongs to the weakest equivalence class among the models whose
agreement function is $\alpha$.
An intriguing open question is therefore how to precisely determine the scope of the
approach based on agreement functions and if it can be extended to capture larger classes of models.
\bibliographystyle{abbrv}
\section{Introduction}
Recent observational and theoretical studies suggest that
gravitational microlensing can induce variability not only in
optical light, but also in the X-ray emission of lensed QSOs
\citep{Chart02a,Chart04,Dai03,Dai04,pj06,Pop01,Pop03a,Pop03b,Pop06a,Pop06b}.
Variability studies of QSOs indicate that the size of the X-ray
emitting region is significantly smaller ($\sim$ several light
hours), than the optical and UV emitting regions ($\sim$ several
light days).
Gravitational lensing is achromatic (the deflection angle of a light
ray does not depend on its wavelength), but it is clear that if the
geometries of the emitting regions at different wavelengths are
different then chromatic effects could occur. For example, if the
microlens is a binary star or if the microlensed source is extended
\citep{Griest_Hu92,Griest_Hu93,Bog_Cher95a,Bog_Cher95b,Zakh97,Zakh_Sazh98,Pc04}
different amplifications in different spectral bands can be present.
Studies aiming to determine the influence of microlensing on the
spectra of lensed QSOs need to take into account the complex
structure of the QSO central emitting region \citep{Pc04}. Since the
sizes of the emitting regions are wavelength dependent, microlensing
by stars in the lens galaxy may lead to a wavelength dependent
magnification. For example, Blackburne et al. (2006) reported such a
'flux anomaly' in quadruply imaged quasar 1RXS J1131-1231. In
particular, they found discrepancies between the X-ray and optical
flux ratio anomalies. Such anomalies in the different spectral band
flux ratios can be attributed to micro- or milli-lensing in the
massive lensing halo. In the case of milli-lensing one can infer the
nature of substructure in the lensing galaxy, which can be connected
to Cold Dark Matter (CDM) structures (see e.g. Dobler \& Keeton
2006). Besides microlensing, there are several mechanisms which can
produce flux anomalies, such as extinction and intrinsic
variability. These anomalies were discussed in \citet{Pc04}, where
the authors gave a method that can help distinguish
variations produced by microlensing from those resulting from other
effects. In this paper we discuss the consequences of variations due to
gravitational microlensing, taking into account the different geometries and
dimensions of the emitting regions in different spectral bands.
The influence of microlensing on QSO spectra emitted from their
accretion discs in the range from the X-ray to the optical
spectral band is analyzed. Moreover, assuming different sizes
of the emitting regions, we investigate the microlensing time
scales for those regions, as well as the time-dependent response
of the amplification in different spectral bands due to
microlensing. We also give estimates of the microlensing time
scales for a sample of lensed QSOs.
In Section 2, we describe our model of the quasar emitting regions
and a model of the micro-lens. In Section 3 we discuss the time
scales of microlensing and in Section 4 we present our results.
Finally, in Section 5, we draw our conclusions.
\section{A model of QSO emitting regions and microlens}
\subsection{A model of the QSO emitting regions}
In our models we adopt a disc geometry for the emitting regions of
Active Galactic Nuclei (AGN) since the most widely accepted
paradigm for AGN includes a supermassive black hole fed by an
accretion disc. \citet{Fabian89} calculated spectral line profiles
for radiation emitted from the inner parts of accretion discs and
later on such features of Fe $K\alpha$ lines were discovered by
\citet{Tanaka95} in Japanese ASCA satellite data for the Seyfert galaxy
MCG-6-30-15.
distribution of the emitters in the central part is supported by the
spectral shape of the Fe K$\alpha$ line in AGN (e.g.
\citet{Nandra_1997}, see also results of simulations
\citep{Zak_Rep99,Zak_Rep02,ZKLR02,Zak_Rep03a,Zak_Rep03b,ZR_Nuovo_Cim03,Zak_Rep03c,ZR_5SCSLSA,ZR_NA_05,Zak_SPIG04,ZR_NANP_05}).
On the other hand, a bump in the UV band is very often present in
the spectra of AGN, which indicates that the UV and optical continuum
originates in an accretion disc.
We should note here that probably most of the X-ray emission in the
1--10 keV energy range originates from inverse Compton scattering of
photons from the disc by electrons in a tenuous hot corona. Proposed
geometries of the hot corona of AGN include a spherical corona
sandwiching the disc and a patchy corona made of a few compact
regions covering a small fraction of the disc (see e.g.
\citet{mz07}). On the other hand, it is known that part of the
accretion disc that emits in the 1--10~keV rest-frame band (e.g. the
region that emits the continuum Compton reflection component and the
fluorescent emission lines) is very compact and may contribute to
X-ray variability in this energy range. In order to study the
microlensing time scales, one should consider the dimensions of the
X-ray emitting region which are very important for the integral flux
variations due to microlensing. The geometry of the emitting regions
adopted in microlensing models will affect the simulated spectra of
the continuum and spectral line emission \citep{Pc04,pj06}. In this
paper we assume that the AGN emission from the optical to the X-ray
band originates from different parts of the accretion disc.
Also we assume that we have a stratification in the disc, where the
innermost part emits the X-ray radiation and outer parts the UV and
optical continuum emission. To study the effects of microlensing on
a compact accretion disc we use the ray-tracing method, considering
only those photon trajectories that reach the sky plane at a given
observer angle (see, e.g. Popovi\'c et al. 2003a,b and references
therein). The amplified brightness with amplification $A(X,Y)$ for
the continuum is given by
\begin{equation}
I_{C}(X,Y;E_{\rm obs}) = I_{P}(E_{\rm obs},T(X,Y)) \cdot A(X,Y),
\end{equation}
where $T(X,Y)$ is the temperature, $X$ and $Y$ are the impact
parameters which describe the apparent position of each point of the
accretion disc image on the celestial sphere as seen by an observer
at infinity, $E_{\rm obs}$ is the observed energy, and $I_P$ is the
emissivity function.
In the standard Shakura-Sunyaev disc model \citeyearpar{Shakura73},
accretion occurs via an optically thick and geometrically thin disc.
The effective optical depth in the disc is very high and photons are
close to the thermal equilibrium with electrons. The surface
temperature is a function of disc parameters and results in the
multicolor black body spectrum. This component is thought to explain
the 'blue bump' in AGN and the soft X-ray emission in galactic black
holes. Although the standard Shakura-Sunyaev disc model does not
predict the power-law X-ray emission observed in all sub-Eddington
accreting black holes, the power law for the X-ray emissivity in AGN
is usually adopted (see e.g. \cite{Nandra_1999}). But one cannot
exclude other approximations for the emissivity, such as black-body
or modified black-body emissivity laws. Moreover, we will assume
that the emissivity law is the same throughout the whole disc.
Therefore we use the black-body radiation law. The disc emissivity
is given as (e.g. \cite{Jarosz_1992}):
$$I_P(X,Y;E)=B[E,T_s(X,Y)],$$
where
\begin{equation}
B\left(E,T_s(X,Y)\right) = \frac{2E^{3}}{h^{2}c^{2}}\,
\frac{1}{e^{E/kT_s(X,Y)} - 1},
\end{equation}
where $c$ is the speed of light, $h$ is the Planck constant,
$k$ is the Boltzmann constant and $T_s(X, Y)$ is the surface
temperature. Concerning the standard accretion disc
\citep{Shakura73}, here we assumed that (see \citet{Pop06a}):
\begin{equation}
T_s(X, Y) \sim r^{-3/2}(X,Y)(1-r^{-1/2}(X,Y))^{4/5} \,{\rm K},
\end{equation}
assuming that the effective temperature in the innermost part (where
the X-ray emission originates) lies in the interval from 10$^7$ K to
10$^8$ K.
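Both ingredients translate directly into code. The following minimal Python sketch (in cgs
units) is only illustrative; in particular, the normalization \texttt{T\_in} of the temperature
profile is our assumption, chosen so that the innermost part reaches the quoted
$10^7$--$10^8$~K range.
\begin{verbatim}
import numpy as np

H  = 6.62607e-27   # Planck constant [erg s]
C  = 2.99792e10    # speed of light [cm/s]
KB = 1.38065e-16   # Boltzmann constant [erg/K]

def planck(E, T):
    # Eq. (2): B(E,T) = 2 E^3 / (h^2 c^2) / (exp(E/kT) - 1)
    return 2.0 * E**3 / (H**2 * C**2) / np.expm1(E / (KB * T))

def surface_temperature(r, T_in=1.0e7):
    # Eq. (3): T_s ~ r^{-3/2} (1 - r^{-1/2})^{4/5}, r in units of R_g
    return T_in * r**-1.5 * (1.0 - r**-0.5)**0.8
\end{verbatim}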
\begin{figure*}
\centering
\includegraphics[width=0.49\textwidth]{fig1a.eps}\hfill
\includegraphics[width=0.49\textwidth]{fig1b.eps}
\caption{Magnification map of the QSO 2237+0305A image (left) and of
a "typical" lens system (right). The white solid lines represent
three analyzed paths of an accretion disc center: horizontal
($y=0$), diagonal ($y = -x$) and vertical ($x=0$).}
\end{figure*}
\subsection{A model for microlensing}
To explain the observed microlensing events in quasars, one can use
different microlensing models. The simplest approximation is a
point-like microlens, where microlensing is caused by some compact
isolated object (e.g. by a star). Such a microlens is characterized
by its Einstein Ring Radius in the lens plane:
$ERR=\sqrt{\dfrac{4Gm}{c^2}\dfrac{D_lD_{ls}}{D_s}}$ or by the
corresponding projection in the source plane:
$R_E=\dfrac{D_s}{D_l}ERR=\sqrt{\dfrac{4Gm}{c^2}\dfrac{D_sD_{ls}}{D_l}}$,
where $G$ is the gravitational constant, $c$ is the speed of light,
$m$ is the microlens mass and $D_l$, $D_s$ and $D_{ls}$ are the
cosmological angular distances between observer-lens,
observer-source and lens-source, respectively. In most cases we can
not simply consider that microlensing is caused by an isolated
compact object but we must take into account that the
micro-deflector is located in an extended object (typically, the
lens galaxy). Therefore, when the size of the Einstein ring radius
projection $R_E$ of the microlens is larger than the size of the
accretion disc and when a number of microlenses form a caustic net,
one can describe the microlensing in terms of the crossing of the
disc by a straight fold caustic (Schneider et al. 1992). The
amplification at a point of an extended source (accretion disc)
close to the caustic is given by \citet{Chang84} (a more general
expression for a magnification near a cusp type singularity is given
by \citet{Schneider92,Zakharov95}):
\begin{equation}
A(X,Y)=A_0+K\sqrt{\frac{r_{\rm caustic}}{\kappa(\xi-\xi_c)}}\cdot
H(\kappa(\xi-\xi_c)),
\label{eq04}
\end{equation}
where $A_0$ is the amplification outside the caustic and
$K=A_0\beta$ is the caustic amplification factor, $\beta$ being a
constant of order unity (e.g. Witt et al. 1993). The ``caustic
size'' $r_{\rm caustic}$ is the distance, in the direction
perpendicular to the caustic, at which the caustic amplification
equals 1; this parameter therefore defines a typical linear scale of
the caustic in the perpendicular direction. $\xi$ is
the distance perpendicular to the caustic in units of gravitational
radii and $\xi_c$ is the minimum distance from the disc center to
the caustic. Thus,
\begin{equation}
\xi_c={\sqrt{X_c^2+Y_c^2}},
\end{equation}
\begin{equation}
{\rm tg}\alpha=\frac{Y_c}{X_c},
\end{equation}
and
\begin{equation}
\xi=\xi_c+\frac{(X-X_c){\rm tg}\phi+Y_c-Y}{\sqrt{{\rm tg}^2\phi+1}},
\end{equation}
where
$\phi=\alpha+{\pi/2}$. $H(\kappa(\xi-\xi_c))$ is the Heaviside step
function: $H(\kappa(\xi-\xi_c))=1$ for $\kappa(\xi-\xi_c)>0$,
and 0 otherwise. $\kappa=\pm 1$ depends on the direction
of the caustic motion: if the caustic moves from the
approaching side of the disc, $\kappa=-1$; otherwise $\kappa=+1$. Also,
in the special case of a caustic crossing perpendicular to the
rotation axis, $\kappa=+1$ for caustic motion from $-Y$
to $+Y$, and $-1$ otherwise. A microlensing event in which a caustic
crosses over an emission region can be described in the following
way: before the caustic reaches the emission region, the
amplification is equal to $A_0$, because the Heaviside function in
equation (\ref{eq04}) is zero. Just as the caustic begins to cross the
emitting region, the amplification rises rapidly and then decays
gradually towards $A_0$ as the source moves away from the
caustic fold.
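The straight-fold magnification of equations (\ref{eq04})--(7) can be evaluated on a grid as
follows (an illustrative Python sketch reusing \texttt{numpy} from above, with $K=A_0\beta$):
\begin{verbatim}
def amplification(X, Y, Xc, Yc, r_caustic,
                  A0=1.0, beta=1.0, kappa=1):
    # magnification at disc point (X, Y); (Xc, Yc) is the caustic
    # point closest to the disc center, lengths in R_g units
    xi_c = np.hypot(Xc, Yc)
    phi = np.arctan2(Yc, Xc) + np.pi / 2.0
    xi = xi_c + ((X - Xc) * np.tan(phi) + Yc - Y) \
         / np.sqrt(np.tan(phi)**2 + 1.0)
    d = kappa * (xi - xi_c)
    # Heaviside factor: extra magnification only where d > 0
    return A0 + A0 * beta * np.sqrt(
        r_caustic / np.where(d > 0, d, np.inf))
\end{verbatim}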
Moreover, for the specific event one can model the caustic shape to
obtain different parameters (see e.g. Abajas et al. 2005, Kochanek
2004 for the case of Q2237+0305). In order to apply an appropriate
microlens model, additionally we will consider a standard
microlensing magnification pattern for the Q2237+0305A image (Fig. 1
left). For generating this map we used the ray-shooting method
\citep{Kay86,sch1,sch2,WP90,WSP90}. In this method the input
parameters are the average surface mass density $\sigma$, shear
$\gamma$ and width of the microlensing magnification map expressed
in units of the Einstein ring radius (defined for one solar mass in
the lens plane).
First, we generate a random star field in the lens plane using
the parameter $\sigma$. After that, we solve the Poisson equation
$\nabla^2\psi=2\sigma$ in the lens plane numerically, so that we can
determine the lens potential $\psi$ at every point of the grid in
the lens plane. To solve the Poisson equation numerically one
writes its finite difference form:
\begin{equation}
\psi_{i+1,j}+\psi_{i-1,j}+\psi_{i,j+1}+\psi_{i,j-1}-4\psi_{i,j}=2\sigma_{i,j}.
\label{psi}
\end{equation}
Here we used the standard 5-point formula for the two-dimensional
Laplacian. The next step is the inversion of equation (\ref{psi}) using
a Fourier transform. After some transformations we obtain:
\begin{equation}
\hat{\psi}=\frac{\hat{\sigma}_{mn}}{2(\cos{\frac{m\pi}{N_{1}}}+\cos{\frac{n\pi}{N_{2}}}-2)},
\label{hatpsi}
\end{equation}
where $N_1$ and $N_2$ are dimensions of the grid in the lens plane.
Now, using the finite difference technique, we can compute the
deflection angle $\vec{\alpha}=\nabla\psi$ at each point of the
grid in the lens plane. After computing the deflection angle, we can map
the regular grid of points in the lens plane, via the lens equation,
onto the source plane. These light rays are then collected in pixels
in the source plane, and the number of rays in a given pixel is
proportional to the magnification due to microlensing at that point
in the source plane.
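A compact sketch of the potential inversion in Python/NumPy is given below. Note one
assumption: we impose periodic boundary conditions, for which the eigenvalues of the 5-point
Laplacian involve $2\pi m/N_1$ and $2\pi n/N_2$ (a variant of equation (\ref{hatpsi})).
\begin{verbatim}
def lens_potential(sigma):
    # solve the 5-point discretization of nabla^2 psi = 2 sigma
    # on a periodic N1 x N2 grid via FFT
    N1, N2 = sigma.shape
    m = np.arange(N1)[:, None]
    n = np.arange(N2)[None, :]
    lam = 2.0 * (np.cos(2.0 * np.pi * m / N1)
                 + np.cos(2.0 * np.pi * n / N2) - 2.0)
    lam[0, 0] = 1.0        # avoid dividing the zero mode by zero
    psi_hat = 2.0 * np.fft.fft2(sigma) / lam
    psi_hat[0, 0] = 0.0    # psi is defined up to a constant
    return np.fft.ifft2(psi_hat).real

def deflection(psi):
    # deflection angle: alpha = grad psi (centered differences)
    dpsi_dy, dpsi_dx = np.gradient(psi)
    return dpsi_dx, dpsi_dy
\end{verbatim}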
Typically, for calculations of microlensing in
gravitationally lensed systems one considers cases where
the dimensionless surface mass density $\sigma$ is some fraction of
unity, e.g. 0.2, 0.4, 0.6, 0.8 \citep{Treyer03}, cases without
external shear ($\gamma=0$), and cases with $\gamma=\sigma$
corresponding to an isothermal sphere model for the lensing galaxy. In this article
we assume values of these parameters within the range
generally adopted by other authors, in particular by
\citet{Treyer03}.
The microlensing magnification pattern for the Q2237+0305A image (Fig. 1
left), with 16~$R_E$ on a side (where $R_E\approx 5867\ R_g$), is
calculated using the following parameters: $\sigma=0.36$ and
$\gamma=0.4$ (see \cite{Pop06a}, Fig. 2), the mass of microlens is
taken to be 0.3$M_\odot$ and we assume a flat cosmological model
with $\Omega =0.3$ and $H_{0}= 75\ \rm km\ s^{-1} Mpc^{-1}$.
We also calculated a microlensing magnification pattern for a
``typical'' lens system (Fig. 1 right), where the redshifts of
microlens and source are $z_l=0.5$ and $z_s=2$. In this case, the
microlens parameters are chosen arbitrarily: $\sigma=0.45$ and
$\gamma=0.3$, and the size of the obtained microlensing pattern is also
$16\ R_E \times 16\ R_E$, where $R_E\approx 3107\ R_g$.
\section{Typical time scales for microlensing}
Typical scales for microlensing are discussed not only in books on
gravitational lensing \citep{Schneider92a,Zakh97,Petters01}, but also
in recent papers (see, for example, \cite{Treyer03}). In this
paper we discuss microlenses located in gravitational macrolenses
(stars in lensing galaxies), since optical depth for microlensing is
then the highest \citep{Wyithe02,Wyithe02b,Zakharov04,Zakharov05} in
comparison with other possible locations of gravitational
microlenses, as for example stars situated in galactic clusters and
extragalactic dark halos \citep{Tadros98,Totani03,Inoue03}.
Assuming the concordance cosmological model with $\Omega_{\rm tot}=1$,
$\Omega_{\rm matter}=0.3$ and $\Omega_{\Lambda}=0.7$ we
recall that the typical length scale for microlensing is \citep{Treyer03}:
\begin{equation}
R_E = \sqrt{2 r_s \cdot \frac{D_s D_{ls}}{D_{l}}}
\approx 3.2 \cdot 10^{16} \sqrt{\frac{m}{M_\odot}} h_{75}^{-0.5} \mathrm{~cm},
\label{eq_suppl1}
\end{equation}
where "typical" microlens and sources redshifts are assumed to be
$z_l=0.5, z_s=2.$ (similar to \cite{Treyer03}),
$r_s=\dfrac{2Gm}{c^2}$ is the Schwarzschild radius corresponding to
microlens mass $m$, $h_{75}=H_0/((75 {\rm~km/sec})/{\rm Mpc})$ is
dimensionless Hubble constant.
The corresponding angular scale is \citep{Treyer03}
\begin{equation}
\theta_0=\frac{R_E}{D_s}
\approx 2.2 \cdot 10^{-6} \sqrt{\frac{m}{M_\odot}} h_{75}^{-0.5}{\rm ~arcsec}.
\label{eq_suppl2}
\end{equation}
Using the length scale (\ref{eq_suppl1}) and a velocity scale
(say $v_\bot \sim$~600~km/sec, as \cite{Treyer03} did), one
can calculate the standard time scale corresponding to
crossing the projected Einstein radius:
\begin{equation}
t_E=(1+z_l)\frac{R_E}{v_\bot}
\approx 25 \sqrt{\frac{m}{M_\odot}}v_{600}^{-1} h_{75}^{-0.5}{\rm ~years},
\label{eq_suppl3}
\end{equation}
where $v_{600}=v_\bot/(600{\rm ~km/sec})$ is the dimensionless relative transverse velocity.
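As a simple numerical cross-check of equations
(\ref{eq_suppl1})--(\ref{eq_suppl3}) (a sketch with illustrative
fiducial values, using the prefactors quoted above):
\begin{verbatim}
import numpy as np

m_msun, h75, v600, z_l = 1.0, 1.0, 1.0, 0.5     # fiducial values
R_E = 3.2e16 * np.sqrt(m_msun) * h75**-0.5      # cm,     length scale
theta0 = 2.2e-6 * np.sqrt(m_msun) * h75**-0.5   # arcsec, angular scale
v_perp = v600 * 600e5                           # cm/s
t_E = (1 + z_l) * R_E / v_perp / 3.156e7        # years
print(t_E)   # ~25 years, as quoted in the text
\end{verbatim}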
The time scale $t_E$, corresponding to a point-mass lens and to a
small source (compared to the projected Einstein radius of the
lens), could be used if microlenses are distributed freely at
cosmological distances and if each Einstein angle is located far
enough from the others. However, the estimate (\ref{eq_suppl3})
gives long and most likely overestimated time scales, especially for
gravitationally lensed systems. Thus we must apply another microlens
model to estimate the time scales.
For a simple caustic model, such as one that considers a straight
fold caustic\footnote{We use the following approximation for the
extra magnification near the caustic: $\mu=\sqrt{\dfrac{r_{\rm
caustic}}{\xi-\xi_c}}$, $\xi>\xi_c$ where $\xi$ is the perpendicular
direction to the caustic fold (it is obtained from Eq. (\ref{eq04})
assuming that factor $K$ is about unity).}, there are two time
scales depending either on the "caustic size" ($r_{\rm caustic}$) or
the source radius ($R_{\rm source}$). In the case when the source
radius is larger than, or at least close to, the "caustic size"
($R_{\rm source} \gtrsim r_{\rm caustic}$), the relevant time scale
is the "crossing time" \citep{Treyer03}:
\begin{eqnarray}
\label{eq_suppl4}
t_{\rm cross} & = & (1+z_l)\frac{R_{\rm source}}{v_\bot (D_s/D_l)} \nonumber \\
& \approx & 0.69\ R_{15}\ v_{600}^{-1} \left(\frac{D_s}{D_l}\right)^{-1} h_{75}^{-0.5}{\rm ~years} \\
& \approx & 251\ R_{15}\ v_{600}^{-1}\ h_{75}^{-0.5}{\rm ~days},\nonumber
\end{eqnarray}
where $D_l$ and $D_s$ correspond to $z_l=0.5$ and $z_s=2$,
respectively, and $R_{15}=R_{\rm source}/10^{15}$~cm. Strictly
speaking, the time scale is characterized by the velocity component
perpendicular to the straight fold caustic, equal to $v_\bot \sin \beta$,
where $\beta$ is the angle between the caustic and the velocity
$v_\bot$ in the lens plane; in our rough estimates we omit the
factor $\sin \beta$, which is of order unity. However, if the source
radius $R_{\rm source}$ is much smaller than the "caustic size"
$r_{\rm caustic}$ ($R_{\rm source} \ll r_{\rm caustic}$), one could
use the "caustic time", i.e. the time during which the source is
located in the area near the caustic:
\begin{eqnarray}
\label{eq_suppl5}
t_{\rm caustic} & = & (1+z_l)\frac{r_{\rm caustic}}{v_\bot (D_s/D_l)} \nonumber \\
& \approx & 0.69\ r_{15}\ v_{600}^{-1} \left(\frac{D_s}{D_l}\right)^{-1} h_{75}^{-0.5}{\rm ~years} \\
& \approx & 251\ r_{15}\ v_{600}^{-1}\ h_{75}^{-0.5}{\rm ~days}, \nonumber
\end{eqnarray}
where $r_{15}=r_{\rm caustic}/10^{15}$~cm.
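The same kind of numerical estimate applies to equations
(\ref{eq_suppl4}) and (\ref{eq_suppl5}); in the sketch below the ratio
$D_s/D_l \approx 1.15$ is an assumed value, chosen so as to reproduce
the quoted 0.69 years rather than computed from a cosmological model:
\begin{verbatim}
R15, r15, v600, z_l = 1.0, 1.0, 1.0, 0.5   # fiducial values
Ds_over_Dl = 1.15                          # assumed distance ratio
year = 3.156e7                             # seconds
t_cross = (1 + z_l) * R15 * 1e15 / (v600 * 600e5 * Ds_over_Dl) / year
t_caustic = t_cross * (r15 / R15)
print(t_cross, t_caustic)  # ~0.69 years (~251 days) for R15 = r15 = 1
\end{verbatim}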
Therefore, $t_{\rm cross}$ could be used as a lower limit for
typical time scales in the case of a simple caustic microlens model.
From equations (\ref{eq_suppl4}) and (\ref{eq_suppl5}) it is clear
that one cannot unambiguously infer the source size $R_{\rm source}$
from variability measurements alone, without making some further
assumptions. In general, however, we expect that $t_{\rm cross}$
corresponds to smaller amplitude variations than $t_{\rm caustic}$,
since in the first case only a fraction of the source is significantly
amplified by the caustic (due to the assumption that $R_{\rm source}
\gtrsim r_{\rm caustic}$), while in the second case it is likely
that the entire source could be strongly affected by the caustic
amplification (due to the assumption that $R_{\rm source} \ll r_{\rm
caustic}$).
\begin{figure}
\centering
\includegraphics[width=8cm]{fig2.eps}
\caption{The variations of the normalized total continuum flux in the
optical (3500--7000 \AA), UV (1000--3500 \AA) and X-ray (1.24--12.4 \AA,
i.e. 1--10 keV) bands due to microlensing by a caustic crossing
along the $y=-x$ direction in the case of the Schwarzschild metric. The
time scale corresponds to "typical" redshifts of microlens and source:
$z_l=0.5$ and $z_s=2$. The parameters of the caustic are: $A_0$=1,
$\beta$=1, $\kappa=+1$ and its "size" is 9000 $R_g$. Negative
distances and times correspond to the approaching side, and positive
to the receding side of the accretion disc. In this case, due to
$\kappa=+1$, the caustic moves from the receding towards the
approaching side (i.e. from the right to the left). The black hole mass
is $10^8\ M_\odot$. The radii of the optical emitting region are:
$R_{in}= 100\ R_g$, $R_{out}=2000\ R_g$, for the UV emitting region:
$R_{in}= 100\ R_g$, $R_{out}=1000\ R_g$ and for the X-ray emitting region:
$R_{in}= R_{ms}$, $R_{out}=80\ R_g$.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8cm]{fig3.eps}
\caption{The same as in Fig. 2, but for $r_{\rm caustic}=2000\ R_g$.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8cm]{fig4.eps}
\caption{The same as in Fig. 2, but in the case when the time scale
corresponds to $z_l=0.04$, $z_s=1.69$ (i.e. to the Q2237+0305 lens
system).}
\end{figure}
In this paper, we estimated the microlensing time scales for the
X-ray, UV and optical emitting regions of the accretion disc using
the following three methods:
\begin{enumerate}
\item by converting the distance scales of microlensing events
      to the corresponding time scales according to formula
      (13), in which $R_{\rm source}$ is replaced by the distance
      from the center of the accretion disc. Caustic rise times
      ($t_{HME}$) are then derived from the simulated variations
      of the normalized total flux in the X-ray, UV and optical
      spectral bands (see Figs. 2-4) by measuring the time from
      the beginning to the peak of the magnification event;
\item by calculating the caustic times ($t_{\rm caustic}$)
      according to equation (14);
\item using light curves (see Figs. 5 and 6) produced when the
source crosses over a magnification pattern. Rise times of
high magnification events ($t_{HME}$) are then measured as
the time intervals between the beginning and the maximum of
the corresponding microlensing events (for more details, see
the next section).
\end{enumerate}
\section{Results and discussion}
In order to explore different cases of microlensing and evaluate
the time scales for different spectral bands, we first numerically
simulate the crossing of a straight-fold caustic with parameters
$A_0=1,\ \beta=1$, $\kappa=+1$ and $r_{\rm caustic}=9000$ $R_g$ over
an accretion disc with an inclination angle of 35$^\circ$, which is
stratified into three parts:
(i) The innermost part that emits the X-ray continuum (1.24 \AA\ -- 12.4
\AA \ or 1--10 keV). The inner radius is taken to be
$R_{inn}=R_{ms}$ (where $R_{ms}$ is the radius of the marginally
stable orbit: $R_{ms}=6\ R_g$ in the Schwarzschild metric) and the outer
radius is $R_{out}=80$ $R_g$ (where $R_g=GM/c^2$ is the
gravitational radius for a black hole with mass $M$).
(ii) A UV emitting part of the disc (contributing to the emission
from 1000 \AA\ to 3500 \AA), with $R_{inn}=100$ $R_g$ and
$R_{out}=1000$ $R_g$.
(iii) An outer optical part of the disc with $R_{inn}=100$ $R_g$ and
$R_{out}=2000$ $R_g$ that emits in the wavelength band from 3500
\AA\ to 7000 \AA.
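The time-scale hierarchy between these three regions can be read off
directly from their outer radii; the following back-of-the-envelope
check (our own sketch, in cgs units) gives the corresponding physical
sizes for the adopted black hole mass:
\begin{verbatim}
G, c, Msun = 6.674e-8, 2.998e10, 1.989e33   # cgs constants
Rg = G * 1e8 * Msun / c**2                  # ~1.5e13 cm for M = 1e8 Msun
for band, Rout in [("X-ray", 80), ("UV", 1000), ("optical", 2000)]:
    print(band, Rout * Rg, "cm")
# The ratios of the caustic times in Table 1 (UV/X ~ 12.5,
# optical/X ~ 25) track the ratios of these outer radii
# (1000/80 and 2000/80).
\end{verbatim}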
\begin{table*}
\centering \caption{The estimated time scales (in years) for
microlensing of the X-ray, UV and optical emission region for lensed
QSOs observed by Chandra X-ray Observatory \citep{Dai04}. The
calculated caustic times $t_{\rm caustic}$ are obtained according to
formula (14) for the following values of the cosmological
parameters: $H_0=75\rm \ km\ s^{-1}Mpc^{-1}$ and $\Omega_0=0.3$. The caustic
rise times $t_{HME}$ are derived from caustic crossing simulations
(see Figs 2 -- 4). The black hole mass is assumed to be 10$^8\rm
M_\odot$.}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
Object & $z_s$ & $z_l$ & \multicolumn{2}{c|}{X-ray} & \multicolumn{2}{c|}{UV} & \multicolumn{2}{c|}{optical} \\
\cline{4-9}
& & & $t_{\rm caustic}$ & $t_{HME}$ & $t_{\rm caustic}$ & $t_{HME}$ & $t_{\rm caustic}$ & $t_{HME}$ \\
\hline
\hline
HS 0818+1227 & 3.115 & 0.39 & 0.572 & 0.660 & 7.147 & 7.070 & 14.293 & 15.160 \\
RXJ 0911.4+0551 & 2.800 & 0.77 & 0.976 & 1.120 & 12.200 & 12.080 & 24.399 & 25.880 \\
LBQS 1009-0252 & 2.740 & 0.88 & 1.077 & 1.240 & 13.468 & 13.330 & 26.935 & 28.570 \\
HE 1104-1805 & 2.303 & 0.73 & 0.918 & 1.050 & 11.479 & 11.370 & 22.957 & 24.350 \\
PG 1115+080 & 1.720 & 0.31 & 0.451 & 0.520 & 5.634 & 5.570 & 11.269 & 11.950 \\
HE 2149-2745 & 2.033 & 0.50 & 0.675 & 0.780 & 8.436 & 8.350 & 16.871 & 17.890 \\
Q 2237+0305 & 1.695 & 0.04 & 0.066 & 0.080 & 0.828 & 0.820 & 1.655 & 1.760 \\
\hline
\end{tabular}
\end{table*}
\begin{table*}
\centering \caption{Average rise times ($<t_{HME}>$) of high
magnification events, their average number ($<N_{\rm caustic}>_0$)
per unit length ($R_E$) and their average number ($<N_{\rm
caustic}>_y$) per year in the light curves of Q2237+0305A (Fig. 5)
and "typical" lens system (Fig. 6).}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
& & \multicolumn{3}{|c|}{Q2237+0305} & \multicolumn{3}{c|}{"Typical" lens} \\
\hline
Disc path & Sp. band & $<t_{HME}>$ & $<N_{\rm caustic}>_0$ & $<N_{\rm caustic}>_y$ & $<t_{HME}>$ & $<N_{\rm caustic}>_0$ & $<N_{\rm caustic}>_y$ \\
\hline
\hline
& X & 0.37 & 0.95 & 0.20 & 3.94 & 1.63 & 0.06 \\
$y=0$ & UV & 1.29 & 0.59 & 0.12 &10.96 & 0.78 & 0.03 \\
& Optical & 2.96 & 0.51 & 0.10 &15.81 & 0.62 & 0.02 \\
\hline
& X & 0.57 & 0.52 & 0.11 & 2.77 & 1.65 & 0.06 \\
$y=-x$ & UV & 1.92 & 0.36 & 0.07 & 6.62 & 0.49 & 0.02 \\
& Optical & 3.98 & 0.26 & 0.05 &17.04 & 0.38 & 0.01 \\
\hline
& X & 0.68 & 1.61 & 0.33 & 4.52 & 1.40 & 0.05 \\
$x=0$ & UV & 1.38 & 0.88 & 0.18 &13.25 & 0.54 & 0.02 \\
& Optical & 2.96 & 0.44 & 0.09 &31.62 & 0.31 & 0.01 \\
\hline
\end{tabular}
\end{table*}
Bearing in mind that the aims of this investigation are to study the
microlensing time scales for different emitting regions and the time
dependent response of the amplification in different spectral bands, we
considered microlensing magnification patterns only for image A of
Q2237+0305 and for a "typical" lens system. Our intention was not to
create a complete microlensing model for a specific lens system, and
therefore we did not analyze the differences between images (as for
instance the time delay between them). The variations in the total
flux in the different spectral bands are given in Figs. 2--6. In
Figs. 2 and 3 the simulations for a typical lens system with
$z_l=0.5$ and $z_s=2$ and for two different "caustic sizes" are
given. As one can see from Figs. 2 and 3, the microlensing time
scales are different for different regions: the durations of
variations in the X-ray band are on the order of several months to a
few years, while in the UV/optical emission regions they are on the
order of several years. Also, as one can see in Figs. 2 and 3, the time
scales do not depend on the "caustic size" which, on the other hand,
affects only the maximal amplifications in all three spectral bands.
The results corresponding to the lens system of QSO 2237+0305
($z_l=0.04$, $z_s=1.69$) are given in Figs. 4 and 5. We considered
the straight-fold caustic (Fig. 4) and a microlensing pattern for
the image A of QSO 2237+0305 (Fig. 5). As one can see from Figs. 4
and 5, a higher amplification in the X-ray continuum than in the
UV/optical is expected. In this case, the corresponding time scales
are much shorter: from a few days up to a few months for the
X-ray and a few years for the UV/optical spectral bands. A similar
conclusion arises when we compare the latter results with those for
a "typical" lens system in the case of a microlensing pattern (Fig.
6).
\begin{figure*}
\centering
\includegraphics[width=0.45\textwidth]{fig5a.eps}
\includegraphics[width=0.45\textwidth]{fig5b.eps} \\
\includegraphics[width=0.45\textwidth]{fig5c.eps}
\caption{Variations in the X-ray (solid), UV (dashed) and optical
(dotted) spectral bands corresponding to the horizontal (top left),
diagonal (top right) and vertical (bottom) path in the magnification
map of the QSO 2237+0305A (Fig. 1 left).}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.45\textwidth]{fig6a.eps}
\includegraphics[width=0.45\textwidth]{fig6b.eps} \\
\includegraphics[width=0.45\textwidth]{fig6c.eps}
\caption{The same as in Fig. 5 but for the magnification map of a
"typical" lens system (Fig. 1 right).}
\end{figure*}
We also estimated the time scales for seven lensed QSOs which have been
observed in the X-ray band \citep{Dai04}. For each spectral band two
estimates are made: the caustic time ($t_{\rm caustic}$), obtained from
equation (14), and the caustic rise time ($t_{HME}$), obtained from
caustic crossing simulations (see Figs. 2--4). In the second case,
we measured the time from the beginning of the microlensing event
until it reaches its maximum (i.e. in the direction from the right
to the left in Figs. 2--4). The duration of the magnification
event beyond the maximum of the amplification could not be
accurately determined because of the asymptotic decrease of the
magnification curve beyond the peak. The estimated time scales for
the different spectral bands are given in Table 1. As one can see from
Table 1, the microlensing time scales are significantly shorter for
the X-ray than for the UV/optical bands.
The unamplified and amplified brightness profiles of the X-ray emitting
region, corresponding to the highest peak in Fig. 5 (top right), are
presented in Fig. 7. As one can see in Fig. 7, the assumed
brightness profile of the source is very complex due to the applied
ray-tracing method, which allows us to obtain an image of the entire
disc, not only its one-dimensional profile. Therefore, we could not
use a simple source profile for the calculation of the microlensing time
scales (as was done by \citet{Witt95}). Instead, we estimated the
frequency of high magnification events (HMEs), i.e. the number of
such events per unit length, directly from the light curves
presented in Figs 5 and 6. For models with non-zero shear, this
frequency depends on the direction of motion, and for both calculated
maps (Q2237+0305A and "typical" lens) we counted the number of high
magnification events along the following paths (see Fig. 1): i)
horizontal ($y=0$) in the direction from $-x$ to $x$, ii) diagonal
($y = -x$) in the direction from $-x$ to $x$ and iii) vertical ($x=0$)
in the direction from $y$ to $-y$. For each map the lengths of the
horizontal and vertical paths in the source plane are the same
(13.636 $R_E$ for Q2237+0305A and 12.875 $R_E$ for the "typical" lens),
as are the corresponding crossing times (66.22 years for
Q2237+0305A and 337.32 years for the "typical" lens). For Q2237+0305A
the length of the diagonal path in the source plane is 19.284 $R_E$ and
the crossing time is 93.62 years, while the corresponding length in
the "typical" case is 18.208 $R_E$ and the crossing time is 477.04
years.
\begin{figure*}
\centering
\includegraphics[width=0.495\textwidth]{fig7a.eps}
\includegraphics[width=0.495\textwidth]{fig7b.eps}
\caption{Unamplified (left) and amplified (right) brightness profile
of the X-ray emitting region, corresponding to the highest peak in
Fig. 5 (top right). The profiles are obtained using the ray tracing
method (see, e.g., Popovi\'c et al. 2003a,b and references
therein).}
\end{figure*}
HMEs are asymmetric peaks in the light curves which depend not only
on the microlens parameters but also on the sizes of the emitting
regions, in the following sense: larger emitting regions are expected
to produce smoother light curves with more symmetric peaks
\citep{Witt95}. Consequently, it can be expected that the majority
of HMEs should be detected in X-ray light curves, fewer in the UV
and the fewest in the optical light curves. Therefore, we
isolated only clearly asymmetric peaks in all light curves and
measured their rise times $t_{HME}$ as the intervals between the
beginning and the maximum of the corresponding microlensing events.
In the case of Q2237+0305A we found the following numbers of HMEs: i)
horizontal path: 13 in X-ray, 8 in UV and 7 in optical band, ii)
diagonal path: 10 in X-ray, 7 in UV and 5 in optical band and iii)
vertical path: 22 in X-ray, 12 in UV and 6 in optical band. In the case
of the "typical" lens these numbers are: i) horizontal path: 21 in
X-ray, 10 in UV and 8 in optical band, ii) diagonal path: 30 in
X-ray, 9 in UV and 7 in optical band and iii) vertical path: 18 in
X-ray, 7 in UV and 4 in optical band.
\begin{figure*}
\centering
\includegraphics[width=0.495\textwidth]{fig8a.eps}\hfill
\includegraphics[width=0.495\textwidth]{fig8b.eps} \\
\vspace*{0.3cm}
\includegraphics[width=0.495\textwidth]{fig8c.eps}
\caption{Rise times ($t_{HME}$) of high magnification
events for all three spectral bands in the light curves of
Q2237+0305A (Fig. 5). Top left panel corresponds to the
horizontal, top right to the diagonal and bottom to the
vertical path of accretion disc.}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.495\textwidth]{fig9a.eps}\hfill
\includegraphics[width=0.495\textwidth]{fig9b.eps} \\
\vspace*{0.3cm}
\includegraphics[width=0.495\textwidth]{fig9c.eps}
\caption{The same as in Fig. 8, but for the light
curves of the "typical" lens system (Fig. 6).}
\end{figure*}
The average numbers of caustic crossings per unit length ($<N_{\rm
caustic}>_0$) and per year ($<N_{\rm caustic}>_y$) are given in
Table 2. This table also contains the average rise times
($<t_{HME}>$) in all three spectral bands, derived from the rise
times ($t_{HME}$) of individual HMEs, which are presented in Figs. 8
and 9 in the form of histograms. These results, as expected, show
that the rise times are the shortest and the frequency of caustic
crossings is the highest in the X-ray spectral band in comparison to
the other two spectral bands. One can also see from Tables 1 and 2
that in the case of Q2237+0305A, the average rise times of HMEs for
all three spectral bands (obtained from microlens magnification
pattern simulations) are longer than both the caustic rise times
(obtained from caustic simulations) and the caustic times (calculated
from equation (14)).
Microlensing can result in flux anomalies, in the sense that
different image flux ratios are observed in different spectral bands
\citep{Pc04,Pop06b}. As shown in Figs. 2--6, the amplification in the
X-ray band is larger and lasts for a shorter time than in the UV and
optical bands. Consequently, monitoring of lensed QSOs in the X-ray
and UV/optical bands can clarify whether the flux anomaly is
produced by CDM clouds, massive black holes or globular clusters
(millilensing), or by stars in the foreground galaxy (microlensing).
\section{Conclusion}
In this paper we calculated microlensing time scales of different
emitting regions. Using a model of an accretion disc (in the center
of lensed QSOs) that emits in the X-ray and UV/optical spectral
bands, we calculated the variations in the continuum flux caused by
a straight-fold caustic crossing an accretion disc. We also
simulated crossings of accretion discs over microlensing
magnification patterns for the case of image A of Q2237+0305 and for
a "typical" lens system. From these simulations we concluded the
following:
(i) one can expect that the X-ray radiation is more amplified by
microlensing than the UV/optical radiation, which can induce the so
called 'flux anomaly' of lensed QSOs;
(ii) the typical microlensing time scales for the X-ray band
are on the order of several months, while for the UV/optical they
are on the order of several years (although the time scales
obtained from microlensing magnification pattern simulations
are longer in comparison to those obtained from caustic
simulations);
(iii) monitoring of the X-ray emission of lensed QSOs can
reveal the nature of the 'flux anomaly' observed in some lensed
QSOs.
All results obtained in this work indicate that monitoring the X-ray
emission of lensed QSOs is useful not only for clarifying the nature of
the 'flux anomaly', but also for constraining the size
of the emitting region.
\section*{Acknowledgments}
This work is a part of the project (146002) "Astrophysical
Spectroscopy of Extragalactic Objects" supported by the Ministry of
Science of Serbia. The authors would like to thank the anonymous
referee for very useful comments.
\titlespacing\section{0pt}{6pt plus 4pt minus 2pt}{4pt plus 2pt minus 2pt}
\titlespacing\subsection{0pt}{6pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt}
\titlespacing\subsubsection{0pt}{6pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt}
\titlespacing\paragraph{0pt}{6pt plus 4pt minus 2pt}{12pt plus 2pt minus 2pt}
\titleformat*{\section}{\large\bfseries}
\titleformat*{\subsection}{\normalsize\bfseries}
\titleformat*{\subsubsection}{\normalsize\bfseries}
\titleformat*{\paragraph}{\bfseries\color{myblue1}}
\begin{document}
\setlist[itemize]{label={},leftmargin=7mm,rightmargin=7mm}
\Large
\noindent Astro2020 APC White Paper: State of the Profession Consideration\\
\noindent The Early Career Perspective on the Coming Decade, Astrophysics Career Paths, and the Decadal Survey Process \\
\normalsize
\vspace{1mm}
\noindent \textbf{Thematic Areas:} State of the profession considerations: Early career concerns for the coming decade, graduate and postdoctoral training, career preparation, career transitions, and structure and dissemination of decadal survey. \\
\noindent\textbf{Principal Authors:}
\noindent Names: Emily Moravec | Ian Czekala | Kate Follette
\noindent Institutions: University of Florida, 2018 NASEM Christine Mirzayan Science and Technology Policy Fellow | UC Berkeley | Amherst College
\noindent Emails: [email protected] | [email protected] | [email protected] \\
\noindent \textbf{Co-authors:} Zeeshan Ahmed, Mehmet Alpaslan, Alexandra Amon, Will Armentrout, Giada Arney, Darcy Barron, Eric Bellm, Amy Bender, Joanna Bridge, Knicole Colon, Rahul Datta, Casey DeRoo, Wanda Feng, Michael Florian, Travis Gabriel, Kirsten Hall, Erika Hamden, Nimish Hathi, Keith Hawkins, Keri Hoadley, Rebecca Jensen-Clem, Melodie Kao, Erin Kara, Kirit Karkare, Alina Kiessling, Amy Kimball, Allison Kirkpatrick, Paul La Plante, Jarron Leisenring, Miao Li, Jamie Lomax, Michael B. Lund, Jacqueline McCleary, Elisabeth Mills, Edward Montiel, Nicholas Nelson, Rebecca Nevin, Ryan Norris, Michelle Ntampaka, Christine O'Donnell, Eliad Peretz, Andres Plazas Malagon, Chanda Prescod-Weinstein, Anthony Pullen, Jared Rice, Rachael Roettenbacher, Robyn Sanderson, Joseph Simon, Krista Lynne Smith, Kevin Stevenson, Todd Veach, Andrew Wetzel, and Allison Youngblood
\begin{center}
\includegraphics[width=\textwidth]{NAS.JPG}
\end{center}
\pagebreak
\noindent \textbf{Executive Summary:} In response to the need for the Astro2020 Decadal Survey to explicitly engage early career astronomers, the National Academies of Sciences, Engineering, and Medicine hosted the Early Career Astronomer and Astrophysicist Focus Session (ECFS)\footnote{http://sites.nationalacademies.org/SSB/SSB\_185166} on October 8-9, 2018 under the auspices of the Committee on Astronomy and Astrophysics. The meeting was attended by fifty-six pre-tenure faculty, research scientists, postdoctoral scholars, and senior graduate students\footnote{List of participants http://sites.nationalacademies.org/cs/groups/ssbsite/documents/webpage/ssb\_189919.pdf}, as well as eight former decadal survey committee members, who acted as facilitators. The event was designed to educate early career astronomers about the decadal survey process, to provide them with guidance toward writing effective white papers, to solicit their feedback on the role that early career astronomers should play in Astro2020, and to provide a forum for the discussion of a wide range of topics regarding the astrophysics career path.
This white paper presents highlights and themes that emerged during two days of discussion. In Section 1, we discuss concerns that emerged regarding the coming decade and the astrophysics career path, as well as specific recommendations from participants regarding how to address them. We have organized these concerns and suggestions into five broad themes. These include (sequentially): (1) adequately training astronomers in the statistical and computational techniques necessary in an era of ``big data", (2) responses to the growth of collaborations and telescopes, (3) concerns about the adequacy of graduate and postdoctoral training, (4) the need for improvements in equity and inclusion in astronomy, and (5) smoothing and facilitating transitions between early career stages. Section 2 is focused on ideas regarding the decadal survey itself, including: incorporating early career voices, ensuring diverse input from a variety of stakeholders, and successfully and broadly disseminating the results of the survey.
Recommendations presented here do not necessarily represent a universal consensus among participants, nor do they reflect the entirety of the discussions. Rather, we have endeavored to highlight themes, patterns, and concrete suggestions.
\clearpage
\Large
\begin{centering}
\textbf{The Early Career Perspective on the Coming Decade, Astrophysics Career Paths, and the Decadal Survey Process}
\end{centering}
\normalsize
\section{Discussion Themes: Topics of Importance for Early Career Professionals in the Coming Decade}
\subsection{Theme 1 - The big role of big data and data science in the 2020s}
Big data and data science have become integral parts of astronomical research. In an era of data proliferation (both observational and simulated), professional and funding agencies must consider the provision of fiscal, physical, and computational resources when planning projects and evaluating proposals. Particular needs in the area of data science include: the development of infrastructure to manage large datasets, the training of early career professionals in the techniques necessary to work with sophisticated data analysis tools, and the expansion and development of career paths focused on software development, data collection, and data analysis. We recommend that management, development, and analysis of big data be considered both within its own decadal panel and holistically across the survey panels and committees.
During the ECFS, ``big data" and modern data analysis techniques featured prominently in discussions about preparation for academic career paths, as well as transitions into professions that employ astronomy Ph.D.s (data science, engineering, defense). There was a general concern among participants that graduate curricula do not provide sufficient training in the sophisticated computational and statistical methodologies necessary to efficiently handle data in general, and large datasets in particular. Participants felt strongly that in order to incentivize the improvement of graduate and postdoctoral training in this area, the decadal survey committee should make explicit recommendations to funding agencies, professional societies, and departments regarding improved training.
\paragraph{Recommendations}
\begin{itemize}
\item \textbf{1.1.1} The decadal survey should include explicit prioritization of theoretical and computational questions in astronomy.
\item \textbf{1.1.2} The decadal survey should consider whether the evolution of the profession means that a redistribution or re-prioritization of permanent positions is necessary. In particular, ECFS participants suggested that the panels consider ways in which grant funding might be restructured to incentivize semi-permanent software development and data management positions rather than postdoctoral positions (see also Theme 5).
\item \textbf{1.1.3} Professional societies and funding agencies should provide support for conference workshops on data science. Wherever possible, these should be embedded rather than pre-conference workshops to minimize the additional cost of attendance for early career participants.
\item \textbf{1.1.4} Departments should restructure graduate curricula to explicitly address the computational and statistical techniques necessary to succeed in the big data era (see also Theme 3).
\item \textbf{1.1.5} Observatories, national labs, and large collaborations (LSST, DES, etc.) should increase the number of data science workshops and data-science themed postdoctoral fellowships.
\end{itemize}
\subsection{Theme 2 - The evolving challenges of large missions and collaborations}
A further concern among ECFS participants was the trend toward larger facilities and projects. While acknowledging that flagship missions and thirty meter class telescopes are critical for advancing the field, early career astronomers voiced a need for the astronomical community to maintain a broad portfolio. There was widespread concern about the closing of current facilities to ``make room" in budgets for large telescopes and missions. Continued operation of an array of facilities operating in various wavelength regimes will be key for the professional training of future astronomers, as well as being needed more broadly for followup observations, time-domain monitoring, etc. We urge the decadal survey committee to consider the need for smaller and more easily accessible facilities that will allow early career astronomers to lead proposals, execute observations, and obtain their own data.
While many early career astronomers are enthusiastic about the science enabled by large facilities and big collaborations, discussions at the ECFS revealed significant anxiety around the preparedness of Ph.D. astronomers to work in an era where `big' is the norm. Particular concerns in this regard are: the availability of a sufficient number of desirable long-term (non-postdoctoral) positions within academia to support large projects, the marketability of PhD astronomers in fields outside of or peripheral to academic astronomy, and concerns regarding whether and how contributions to large collaborations will be recognized by potential employers. The group predicted that more support scientist positions will be needed to design and maintain hardware and software for large facilities, but have observed a lack of sufficient funding for facility operations and maintenance under the current funding paradigm.
\paragraph{Recommendations}
\begin{itemize}
\item \textbf{1.2.1} The decadal survey committee, funding agencies, and observatories should carefully consider the prioritization of and funding for construction of new facilities vs. operation and maintenance of existing facilities. A broad portfolio of facilities should be maintained, including those where early career astronomers can reasonably expect to lead proposals and obtain telescope time for smaller projects.
\item \textbf{1.2.2} The reasonableness of cost estimates for missions and facilities proposed to the decadal survey should be carefully evaluated during project prioritization so that smaller facilities and support staff positions are not ``squeezed out" in agency budgets if and when the cost for these missions balloons.
\item \textbf{1.2.3} More astronomical journals should move toward a model where contributions to published papers are detailed specifically. This will incentivize early career astronomers to make contributions to large projects even when it will not lead to first authorship, and will provide a means for potential employers to gauge the role of applicants in coauthored papers.
\item \textbf{1.2.4} Professional societies and publishers should provide more opportunities for software development work to be recognized, published, and cited.
\end{itemize}
\subsection{Theme 3 - Career Preparation and Opportunities}
Early career astronomers are ideally situated to reflect on the experience of their education as well as assess its adequacy in preparing them for the challenge of navigating early career transitions. During the ECFS, participants focused on evaluating existing training, discussing the challenges that mark the transitions to postdoctoral and long-term positions, and preparing recommendations to improve these areas.
ECFS participants felt that some substantive changes are necessary to make the process of obtaining a Ph.D. more inclusive and supportive, and to reduce imbalances that lead to systemic inequalities in the profession (see Theme 5). In this section, we focus on recommendations around maximizing flexibility and marketability of Ph.D. astronomers, and around restructuring graduate curricula to reflect the changing nature of the profession.
\paragraph{Recommendations}
\begin{itemize}
\item \textbf{1.3.1} Departments should incorporate ethics and professional development seminars into graduate curricula. In addition to focusing on proper scientific and professional conduct, these seminars are an excellent venue to hone research and presentation skills.
\item \textbf{1.3.2} Departments should consider the flexibility of their curricula, de-emphasizing a burdensome number of core classes and allowing for more electives, particularly in statistical and computational methods.
\item \textbf{1.3.3} Grant components such as ``Mentoring Plans,'' and ``Facilities and Resources'' documents should be given more weight and structure. In particular, they should be tied specifically to curricular methodologies and professional development opportunities for all grants requesting graduate student or postdoctoral funding, and should be standardized across agencies.
\item \textbf{1.3.4} AAS should incentivize (e.g. through reduced fees for registration, exhibit space, and workshop facilitation) the participation of members of industry in annual conferences and should consider organizing an alternative careers networking event.
\end{itemize}
\subsection{Theme 4 - The Postdoctoral Years}
While the postdoctoral scholar period can be a time of great opportunity and scholarly freedom, it is also fraught with uncertainty. The traditionally transient nature of the postdoctoral years, where frequent cross-country (or international) moves are common and even encouraged, has a variety of hidden societal costs.
``2-body problems'' and relocation costs add stress and uncertainty to the career transition process and disproportionately affect women and underrepresented minorities.\footnote{See \url{http://womeninastronomy.blogspot.com/2013/02/figure-1-two-body-problem.html} and references therein.}
Efforts are also needed to de-stigmatize the actions of those astronomers who express interest in non-academic career options, including during the graduate admission process and during graduate and postdoctoral mentoring.
\paragraph{Recommendations}
\begin{itemize}
\item \textbf{1.4.1} Five year postdoctoral appointments should be encouraged by funding agencies and appointments with terms of less than three years should be strongly discouraged.
\item \textbf{1.4.2} To increase the quality of applications while also reducing the considerable stress on applicants during job application season, the postdoctoral fellowship application process should be standardized in terms of the nature and length of required materials. Specific and measurable criteria should be posted in job ads to help applicants in evaluating their suitability for positions. For example, vague advertising for ``outstanding candidates'' preselects for candidates with high self-efficacy.
\item \textbf{1.4.3} The AAS should contract with social scientists to conduct regular membership surveys that include astronomers who have left academia, and should include the results in state of the profession reports.
\item \textbf{1.4.4} The social and psychological pressures of the Astrophysics Jobs Rumor Mill outweigh the benefits, and its information is often inaccurate or incomplete. It should be replaced by a commitment on the part of departments and agencies offering employment to release timely information to \textbf{all} job applicants regarding the search status, even if it has not concluded (e.g. ``we have created a short list and invited five applicants to visit").
\item \textbf{1.4.5} Development of teaching and communication skills should be incentivized and given greater emphasis in graduate curricula, postdoctoral training, and analysis of candidates for academic jobs. This can be accomplished through targeted graduate coursework, workshops, structured feedback to graduate students giving presentations (i.e. journal clubs), a greater emphasis on teaching and diversity statements in job applications, etc.
\item \textbf{1.4.6} The community should rethink the distribution of temporary (e.g. postdoc) vs. potentially permanent (e.g. staff scientist) positions and the ways in which they are funded (e.g. grants vs. operations budgets). In particular, support and software design for large surveys and telescopes should be done by long-term staff members and not by postdocs.
\end{itemize}
\subsection{Theme 5 - A Diverse Workforce and Equitable Community}
There was a strong consensus and desire among attendees that the astronomy community of the future should reflect the breadth of identities of the nation as a whole. Progress has been made in this area in the past decade, however there remains a great deal of work to be done. In particular, more work is needed to eliminate inequities and barriers to entry into the profession, which will enable more people of color, white women, members of the LGBTQ+ community, and people with disabilities to enter and, more importantly, to remain in the field. These efforts will benefit \textbf{all} members of the profession in myriad ways. In order to enable meaningful change, we hope to see the 2020 decadal survey make specific, high-priority, and far-reaching recommendations in this regard.
\paragraph{Recommendations}
\begin{itemize}
\item \textbf{1.5.1} We urge the decadal panels to consider white papers regarding the state of the profession to be of equal importance to science and facilities white papers. Although these papers will be read by separate panels, we hope that members of the state of the profession panels will work together with members of the science panels and survey officials to ensure that the broader recommendations of Astro2020 integrate science and state of the profession recommendations powerfully and effectively.
\item \textbf{1.5.2} Minoritized individuals often bear a disproportionate service burden, including during graduate school and postdoc years when these efforts are not generally a recognized responsibility. Graduate and postdoctoral fellowships should explicitly ask for information about, value, and reward this important work.
\item \textbf{1.5.3} AAS should consider establishing an award that specifically recognizes service in the area of improving representation in our profession.
\item \textbf{1.5.4} Training concerning gender and racial harassment should be required for all members of academic departments, including: faculty, postdocs, graduate students (especially teaching assistants), and staff. More training is needed across the board to combat gender and racial harassment in our field, particularly training with a focus on intersectionality.
\item \textbf{1.5.5} To mitigate concerns about harassment and bullying, graduate departments should move away from the single mentor model towards advising strategies that encourage interaction with multiple faculty members on a regular basis (e.g., larger advising committees with semi-regular meetings). This would have the added benefit of providing more opportunities for all students to seek support and advice from a range of mentors in addition to providing a safety net for students with difficult or abusive advising relationships.
\item \textbf{1.5.6} The AAS should establish a list of best-practices for parental and child leave policies for graduate students and postdocs. Similar to recommendation 1.3.3, this information should be included in grant documents that request funding for graduate students and postdocs.
\end{itemize}
\section{Recommendations Regarding the Structure and Dissemination of the Decadal Survey}
\subsection{Early career participation in the decadal survey}
A key goal of the ECFS was to solicit feedback regarding how early career astronomers would like to be represented throughout the decadal survey process. Many participants argued for the most direct form of representation: full membership on survey panels. Several concerns regarding full panel membership were raised, however. These included: 1) reduced scientific productivity during a vulnerable career stage, 2) uncertainty regarding recognition by tenure committees and potential employers of the importance of this service to the profession, and 3) the potential to become enmeshed in controversial decisions that could negatively affect future career prospects. Several ideas were put forward regarding ways to mitigate these concerns for early career participants on the panels, namely: funding (e.g., a ``decadal fellowship''), recognition (e.g. a specific title or award that could be included in job and tenure applications), commitments from panel chairs to write letters of recommendation for jobs and letters of support for tenure, commitments from senior panel members to support early career colleagues with invitations to give seminars and colloquia, and childcare support.
The principal alternative to full participation put forward was the formation of ``early career consultation groups." These groups would travel to or virtually participate in some subset of decadal survey meetings. This would give early career astronomers a ``seat at the table" but would also mitigate the potentially deleterious effects of full survey participation on the careers of participants in these groups.
Other ideas for early career involvement ranged from early career town halls, workshops, and conferences hosted by the American Astronomical Society and National Academies to inviting early career astronomers to make presentations to the decadal panels. While the assembled astronomers were not uniformly in support of a single recommendation, increased participation in some format (and perhaps multiple formats) was a universal theme in the discussions.
\paragraph{Recommendations}
In order to increase the early career representation in the decadal survey process, we recommend that the decadal survey committee:
\begin{itemize}
\item \textbf{2.1.1} Implement incentives for early career astronomers to participate as full panel members (see above).
\item \textbf{2.1.2} Invite early career astronomers to serve as consultants to the decadal panels.
\item \textbf{2.1.3} Invite early career astronomers to deliver presentations to panels.
\end{itemize}
\subsection{Soliciting inclusive input to the 2020 decadal and beyond} Another discussion theme revolved around how to solicit broad input from the astronomical community, ensure that diverse voices are heard, and successfully disseminate the conclusions of the decadal survey to all of its ``stakeholders." First, town halls could be standardized (so that more places could easily arrange them) and conducted both digitally and in a wide range of geographic locations. Focused virtual town halls could explicitly target the challenges facing certain groups in astronomy (e.g. postdocs, observatory staff).
Second, we suggest that panel members are either present at the town halls, or that minutes from these events are compiled and circulated so that panel members are aware of what was discussed. Lastly, the survey committee should also consider inviting additional stakeholders not traditionally involved in the decadal survey process (e.g. native Hawaiians, amateur astronomers, K12 astronomy educators, data science employers) and certain additional subsets of the community (e.g. disabled astronomers, LGBTQ+ astronomers, etc.) to participate in Astro2020, either by making invited presentations to the committee or through focused town halls that are broadly advertised.
\paragraph{Recommendations}
In order to ensure equitable participation and dissemination to the broadest possible audience of stakeholders (within and beyond the astronomy community), we recommend that the decadal survey committee:
\begin{itemize}
\item \textbf{2.2.1} Standardize town halls.
\item \textbf{2.2.2} Organize official gatherings (town hall, focus session, etc.) that target specific groups of people whose perspectives have been historically missing in the decadal survey process and/or are important for the current decadal at hand (e.g. underrepresented demographic groups, geographically underrepresented groups, graduate students, postdocs, etc.).
\item \textbf{2.2.3} Encourage participation by all stakeholders in the decadal survey process through invited presentations to the committee and targeted virtual town halls.
\item \textbf{2.2.4} Make survey highlights available in multiple easily-digestible formats that adhere to the principles of universal design. For example: an ``Executive Summary" (1-page) document, publicly available slide decks highlighting priorities, digital representations that are captioned and easy to share on social media platforms (e.g. 5-10 minute videos).
\end{itemize}
\section{Conclusion}
The Early Career Astronomer and Astrophysicist Focus Session was a unique opportunity for a large group of early career professionals to gather and discuss the state of the astronomy profession. Lively and at times contentious discussions about the state of the profession centered around the five main themes identified in this document, namely: the role of big data in the future of astronomy, the evolving challenges of large missions and collaborations, graduate training and career preparation, the postdoctoral years, and creating a diverse workforce and equitable community. Although ECFS participants had many concerns about and ideas for improvements in these areas, there was also an overwhelming energy and enthusiasm about the future of our field and an optimism about the potential for meaningful change. A central goal of the event was to collect ideas about changes that early career astronomers would like to see made to the decadal survey process, and ideas that were generated in this regard are outlined in section 2. We, the participants, felt strongly that early career voices should be heard at every stage of the decadal process, and not just at the focus session. We look forward to participating in more of the Astro2020 decadal survey.
\end{document}
\section{Introduction}
Recently, substantial progress has been achieved in the search for non-supersymmetric
Minkowski/dS vacua in the context of string/M-theory compactifications. This was mainly related
to the understanding of the superpotentials generated by background fluxes \cite{GVW} and by
non-perturbative effects like gaugino condensation \cite{GC}, which generate a potential for
the moduli fields coming from the compactification and have suggested new
interesting possibilities for model building, like in particular those proposed in refs.~\cite{GKP,KKLT}.
From a phenomenological point of view, this type
of models must however posses some characteristics in order to be viable: supersymmetry must be
broken, the cosmological constant should be tiny, and all the moduli fields should be stabilized. In
the low energy effective theory all these crucial
features are controlled by a single quantity, the four-dimensional scalar potential, which gives
information on the dynamics of the moduli fields, on how supersymmetry is broken and on
the value of the cosmological constant. The characterization of the conditions under which a
supersymmetry-breaking stationary point of the scalar potential satisfies simultaneously the flatness
condition (vanishing of the cosmological constant) and the stability condition (the stationary point is
indeed a minimum) is therefore very relevant in the search for phenomenologically viable string models.
In this note we review the techniques presented in \cite{yo1,yo2,yo3} to study the
possibility of obtaining such vacua in the context of general supergravity
theories in which both chiral and vector multiplets participate in supersymmetry breaking.
\section{Viable supersymmetry breaking vacua}
The goal of this section is to find conditions for the existence of
non-supersymmetric extrema of the scalar potential of general supergravity theories fulfilling two basic
properties: i) they are locally stable and ii) they lead to a negligible cosmological constant. We will
first study this issue for theories with only chiral multiplets and then when
also vector multiplets are present.
\subsection{Constraints for chiral theories}
The Lagrangian of the most general supergravity theory with $n$ chiral superfields is entirely defined
by a single arbitrary real function $G$ depending on the corresponding chiral superfields $\Phi_i$ and
their conjugates $\bar \Phi_i$, as well as on its derivatives \cite{sugra}. The function $G$ can be
written in terms of a real K\"ahler potential $K$ and a holomorphic superpotential $W$ in the following
way\footnote{We will use the standard notation in which subindices $i$, $\bar \jmath$ mean
derivatives with respect to $\Phi^i$, $\bar \Phi^{j}$ and Planck units, $M_P=1$.}:
\begin{equation}
G(\Phi_i,\bar \Phi_i) = K(\Phi_i,\bar\Phi_i) + \log W(\Phi_i) +\log \bar W(\bar\Phi_i) \,.
\end{equation}
The quantities $K$ and $W$ are however defined only up to K\"ahler transformations acting as
$K \to K + f + \bar f$ and $W \to e^{-f} W $, $f$ being an arbitrary holomorphic function of the superfields,
which leave the function $G$ invariant. The scalar components of the chiral multiplets span an
$n$-dimensional K\"ahler manifold whose metric is given by $G_{i\bar \jmath}$, which
can be used to lower and raise indices.
The 4D scalar potential of this theory takes the following simple form:
\begin{equation}
V = e^G \Big(G^k G_k - 3 \Big) \,.
\label{genpot}
\end{equation}
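For later comparison with the literature, we note that in terms of $K$ and $W$ this is
the familiar expression
\begin{equation}
V = e^{K} \Big( g^{i \bar \jmath}\, D_i W \, \bar D_{\bar \jmath} \bar W - 3\, |W|^2 \Big) \,,
\qquad D_i W = W_i + K_i W \,,
\end{equation}
which follows directly from $G_i = K_i + W_i/W$ and $e^G = e^K |W|^2$.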
The auxiliary fields of the chiral multiplets are fixed by the Lagrangian through the equations
of motion, and are given by $F_i = - \, e^{G/2}\, G_{i}$ where $e^{G/2}=m_{3/2}$
is the mass of the gravitino. Whenever $F_i\neq0$ at the vacuum, supersymmetry is spontaneously
broken and the direction given by the $G_i$'s defines the direction of the Goldstino eaten
by the gravitino in the process of supersymmetry breaking.
In order to find local non-supersymmetric minima of the potential (\ref{genpot}) with small non-negative
cosmological constant, one should proceed as follows: First impose the condition that the
cosmological constant is negligible and fix $V =0$. This flatness condition implies that:
\begin{equation}
\label{fc}
G^k G_k = 3 \,.
\end{equation}
Then look for stationary points of the potential where the flatness condition is satisfied.
This implies:
\begin{equation}\label{sc}
G_i + G^k \nabla_i G_k = 0 \,,
\end{equation}
where by $\nabla_i G_k=G_{ik}-\Gamma^n_{ik}G_n$ we denote the covariant derivative with
respect to the K\"ahler metric.
Finally, make sure that the matrix of second derivatives of the potential,
\begin{equation}
m^{2} = \left(
\begin{matrix}
m^2_{i \bar\jmath} & m^2_{ij} \smallskip \cr
m^2_{\bar i \bar \jmath} & m^2_{\bar \imath j}
\end{matrix}
\right) \,,
\label{VIJ}
\end{equation}
is positive definite. This matrix has two different $n$-dimensional blocks, $m^2_{i \bar \jmath} =
\nabla_i \nabla_{\bar \jmath} V$ and $m^2_{i j} = \nabla_i \nabla_j V$, and after a straightforward computation these are found to be given by the following expressions:
\begin{eqnarray}\label{vij}
\begin{array}{lll}
m^2_{i \bar \jmath} \!\!\!&=&\!\!\! e^G\Big(G_{i \bar \jmath} + \nabla_i G_k \nabla_{\bar
\jmath} G^k - R_{i \bar \jmath p \bar q} G^p G^{\bar q}\Big) \,, \smallskip\ \\
m^2_{i j} \!\!\!&=&\!\!\! e^G\Big(\displaystyle{\nabla_i G_j + \nabla_j G_i + \frac 12 G^k
\big\{\nabla_i,\nabla_j \big\}G_k}\Big) \,, \\
\end{array}
\end{eqnarray}
where $R_{i \bar \jmath p \bar q}$ denotes the Riemann tensor with respect to the K\"ahler metric.
The conditions under which this $2n$-dimensional matrix (\ref{VIJ}) is
positive definite are complicated to work out in full generality; the only
systematic way is to study the behaviour of its $2n$ eigenvalues.
Nevertheless, a necessary condition for this matrix to be positive definite
is that the quadratic form $m^2_{i \bar\jmath} z^i {\bar z}^{\bar \jmath}$ be positive
for any choice of non-null complex vector $z^i$. Our strategy will then be to look for a special
vector $z^i$ which leads to a simple constraint.
In this case there is only one special direction in field space, namely the direction given by $z^i = G^i$.
Indeed, projecting onto that direction we find the following simple expression:
\begin{equation}\label{eq}
m^2_{i \bar \jmath} G^i G^{\bar \jmath} = 6 - R_{i \bar \jmath p \bar q}\, G^i G^{\bar \jmath} G^p
G^{\bar q}
\,.
\end{equation}
This quantity must be positive if we want the matrix (\ref{VIJ}) to be positive definite
\footnote{Actually, as emphasized in \cite{yo4}, the Goldstino multiplet cannot receive any supersymmetric
mass contribution from $W$, since in the limit of rigid supersymmetry its fermionic
component must be massless. This means that, in order to study metastability, it is enough to study
the projection of the diagonal block $m^2_{i \bar \jmath} $ of the mass matrix along the
Goldstino direction $G^i$, as the rest of the projections can be given a mass with the help of
the superpotential.}. Using the
rescaled variables $f^i = - \frac{1}{\sqrt{3}} G^i$ the conditions for the existence of
non-supersymmetric flat minima can then be written as:
\begin{eqnarray}\label{fs-1}
\left\{\!\!
\begin{array}{l}
G_{i\,\bar \jmath}f^if^{\bar \jmath} = 1\,,\\
R_{i \,\bar \jmath \,p \,\bar q}\, f^i f^{\bar \jmath} f^p f^{\bar q}< \displaystyle{\frac23}\,.
\end{array}\right.
\end{eqnarray}
The first condition, the flatness condition, fixes the amount of supersymmetry breaking,
whereas the second condition, the stability condition, requires the existence of directions with
K\"ahler curvature less than {$2/3$} and constrains the direction of supersymmetry breaking
to be sufficiently aligned with them.
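As a simple illustration of how these conditions work, consider a toy model
(our own example, with an assumed K\"ahler potential, not a model discussed in the
original analysis) consisting of a single chiral field with
\begin{equation}
K = - n \log \big( \Phi + \bar \Phi \big) \,,
\end{equation}
whose scalar manifold has constant sectional curvature
$R_{\Phi \bar \Phi \Phi \bar \Phi}\, G_{\Phi \bar \Phi}^{-2} = 2/n$. With a single field
the Goldstino direction necessarily lies along $\Phi$, so the stability condition
in (\ref{fs-1}) reduces to $2/n < 2/3$, i.e. $n > 3$. In particular, the no-scale
case $n = 3$ is marginal, the combination (\ref{eq}) vanishing identically, whereas
for $n < 3$ any flat non-supersymmetric stationary point is unstable, whatever the
superpotential.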
\subsection{Constraints for gauge invariant theories}
It can happen that the supergravity theory with $n$ chiral multiplets $\Phi^i$ described
above admits a group of $m$ global symmetries compatible with supersymmetry. In this
subsection we consider the possibility of gauging such isometries with the introduction of
vector multiplets. The corresponding supergravity theory then includes, in addition to the $n$ chiral
multiplets $\Phi^i$, $m$ vector multiplets $V^a$.
The two-derivative Lagrangian is specified in this case by a
real K\"ahler function $G(\Phi^k,\bar\Phi^{k},V^a)$, determining in particular the scalar geometry,
$m$ holomorphic Killing vectors $X_a^i(\Phi^k)$, generating the isometries that are gauged, and an
$m$ by $m$ matrix of holomorphic gauge kinetic functions $H_{ab}(\Phi^k)$, defining the
gauge couplings\footnote{The gauge kinetic function $H_{ab}$ must have an appropriate
behavior under gauge transformations, in such a way as to cancel possible gauge anomalies
$Q_{abc}$. Actually, the part
$h_{ab}={\rm Re}\,H_{ab}$ defines a metric for the gauge fields and must be gauge invariant.
On the other hand ${\rm Im}\,H_{ab}$ must have a variation that matches the coefficient of
$Q_{abc}$, namely $X_a^i h_{bc i} = \frac i2 \, Q_{abc}$.}. In this case the minimal coupling between chiral and vector multiplets turns ordinary
derivatives into covariant derivatives, and induces a new contribution to the scalar potential
coming from the vector auxiliary fields $D^a$, in addition to the standard one coming from the
chiral auxiliary fields $F^i$. The 4D scalar potential takes the form:
\begin{equation}
\label{genpot2}
V = e^G\Big( g^{i \bar \jmath}\, G_i G_{\bar \jmath}-3\Big) + \frac 12 h^{ab} D_a D_b\,.
\end{equation}
The auxiliary fields are fixed from the Lagrangian through the equations of motion to be:
\begin{eqnarray}
&& F_i = - m_{3/2}\, G_i \,, \label{F} \\[1mm]
&& D_a = -G_a = i \, X_a^i \, G_i = - i \, X_a^{\bar \imath} \, G_{\bar \imath} \,, \label{D}
\end{eqnarray}
where to get the relations in (\ref{D}) one should also use gauge invariance of the action.
Now in order to find local non-supersymmetric minima of the potential (\ref{genpot2})
with small non-negative cosmological constant, we will proceed as in the previous subsection.
First we will impose the condition that the cosmological constant is negligible and fix $V =0$.
This flatness condition implies that:
\begin{equation}
- 3 + G^i G_i + \frac 12 \, e^{-G} D^a D_a = 0\,.
\label{flatness}
\end{equation}
The stationarity conditions correspond now to the requirement that $\nabla_i V = 0$, and
they are given by:
\begin{equation}
G_i + G^k \nabla_i G_k + e^{-G} \Big[D^a\Big(\nabla_i - \frac 12\, G_i \Big) D_a
+ \frac 12\, h_{abi} D^a D^b \Big] = 0\,.
\label{stationarity}
\end{equation}
The $2n$-dimensional mass matrix (\ref{VIJ}) for small fluctuations of the scalar fields around the
vacuum has as before two different $n$-dimensional blocks, which can be computed as
$m_{i \bar\jmath}^2 = \nabla_i \nabla_{\bar\jmath} V$ and $m_{i j}^2 = \nabla_i \nabla_j V$. Using
the flatness and stationarity conditions, one finds, after a straightforward computation
\cite{FKZ,dudasvempati}:
\begin{eqnarray}
&&\hspace{-1.5cm}m_{i \bar \jmath}^2 = e^G \Big[g_{i \bar \jmath} - R_{i \bar \jmath p \bar q} G^p G^{\bar q}
+ \nabla_i G_k \nabla_{\bar \jmath} G^k \Big] - \frac 12\, \Big(g_{i \bar \jmath} - G_i G_{\bar \jmath} \Big) D^a D_a
-\,2\, D^a G_{(i} \nabla_{\bar \jmath)} D_a \label{mijbar} \\
&&\hspace{-.5cm} + \Big(G_{(i} h_{ab \bar \jmath)} + h^{cd} h_{a c i} h_{b d \bar \jmath} \Big)\, D^a D^b -
2\, D^a h^{bc} h_{ab(i} \nabla_{\bar \jmath)} D_c
+ h^{ab} \nabla_i D_a \nabla_{\bar \jmath} D_b + D^a \nabla_i \nabla_{\bar \jmath} D_a\,, \nonumber \\
&& \hspace{-1.5cm}m_{i j}^2 = e^G \Big[2 \, \nabla_{(i} G_{j)} + G^k \nabla_{(i} \nabla_{j)} G_k \Big] - \frac 12 \Big(\nabla_{(i} G_{j)} - G_i G_j \Big) D^a D_a+ h^{ab}\, \nabla_i D_a \nabla_j D_b
\label{mij} \\
&& \hspace{-.5cm}-\,2\, D^a G_{(i} \nabla_{j)} D_a - 2\,D^a h^{bc} h_{ab(i} \nabla_{j)} D_c
+ \Big(G_{(i} h_{abj)} + h^{cd} h_{a c i} h_{b d j} - \frac 12 h_{a b i j} \Big) D^a D^b \,.\nonumber
\end{eqnarray}
We want to analyze now the restrictions imposed by the requirement
that the physical squared masses of the scalar fields are all positive. In general the theory displays a spontaneous breakdown of both supersymmetry and gauge symmetries, so in the study of the stability
of the vacuum it is
necessary to take appropriately into account the spontaneous breaking of gauge symmetries.
In that process $m$ of the $2n$ scalars, the would-be Goldstone
bosons, are absorbed by the gauge fields and get a positive mass, so we do not need to take them
into account for the analysis of the stability. Nevertheless the would-be
Goldstone modes correspond to flat directions of the unphysical mass matrix,
and get their physical mass through their kinetic mixing with the gauge bosons.
This means that positivity of the physical mass matrix implies semi-positivity of the unphysical mass
matrix in (\ref{mijbar}), (\ref{mij}). We can then use the same strategy as before, but with the
strict positivity condition relaxed to semi-positivity.
In this case there exist two types of special complex directions $z^i$ one could look at. The first is
the direction $G^i$, which is associated with the Goldstino direction in the subspace of chiral multiplet
fermions. Projecting into this direction one finds, after a long but straightforward computation:
\begin{eqnarray}
m^2_{i \bar \jmath} G^i G^{\bar \jmath} \!\!\!&=&\!\!\!
e^G \Big[6 -R_{i \bar \jmath p \bar q} \, G^i G^{\bar \jmath} G^p G^{\bar q} \Big]
+\, \Big[\!-\! 2\, D^a D_a + h^{cd} h_{a c i} h_{b d \bar \jmath} \, G^i G^{\bar \jmath} D^a D^b \Big] \label{mGG}\\
\!\!\!&\;&\!\!\! +\,e^{-G} \Big[M^2_{ab} D^a D^b \!+\! \frac 34\,Q_{abc} D^a D^b D^c
\!-\! \frac 12 \Big(D^a D_a\Big)^2 \! \!+\! \frac 14 h_{ab}^{\;\;\;i} h_{cdi} D^a D^b D^c D^d \Big]\,, \nonumber
\end{eqnarray}
where $Q_{abc}=-2iX^i_ah_{bci}$.
The condition $m^2\hspace{-4pt}{}_{i \bar \jmath} \, G^i G^{\bar \jmath} \ge 0$ is then the generalization of the
condition in (\ref{eq}) for theories involving only chiral multiplets. In terms of the rescaled variables:
\begin{equation}
f_i=\frac{1}{\sqrt{3}}\frac{F_i}{m_{3/2}}=-\frac{1}{\sqrt{3}}G_i\,,\hspace{1.8cm}
d_a=\frac{1}{\sqrt{6}}\frac{D_a}{m_{3/2}}\,,
\end{equation}
the flatness and stability conditions take then the following form:
\begin{eqnarray}
\hspace{-.8cm}\left\{\hspace{-4pt}
\begin{array}{l}
\displaystyle{ \,f^i f_{i} + d^a d_a} = 1\,,\\
\displaystyle{R_{i \,\bar \jmath\, p\, \bar q} \, f^i f^{\bar \jmath} f^p f^{\bar q} \le \frac23
+ \frac23 \Big({M_{ab}^2}/{m^2_{3/2}} -2 h_{a\,b} \Big) d^a d^b
+2 h^{c\,d}h_{a\,c\,i}h_{b\,d\,\bar \jmath}f^if^{\bar \jmath}d^ad^b}\\
\displaystyle{\hspace{2.8cm}- \Big(2\, h_{a\,b} h_{c\,d}- h_{a\,b}^ih_{c\,d\,i}\Big)\, d^a d^b d^c d^d
+\sqrt{\frac 32} \frac{Q_{abc}}{m_{3/2}} d^a d^b d^c\,.}
\end{array}
\right.
\end{eqnarray}
Again we have that the flatness condition fixes the amount of supersymmetry breaking whereas
the stability condition constrains its direction. One could also consider the
directions $X_a^i$, which are instead associated with the Goldstone directions in the space of chiral
multiplet scalars. Nevertheless the constraint $m^2_{i\bar \jmath}X_a^iX_a^{\bar \jmath}\ge0$ turns
out to be more complicated and no useful condition seems to emerge from it.
\section{Analysis of the constraints}
The analysis of the flatness and stability conditions in the case where both chiral and vector multiplets
participate in supersymmetry breaking presents an additional complication with respect to the case
where only chiral multiplets are present, due to the fact that the auxiliary
fields of the chiral and vector multiplets are not independent of each other.
The rescaled auxiliary fields $f_i$ and $d_a$ are actually related in several ways.
One first relation (consequence of gauge invariance) can be read from
eq. (\ref{D}) and is given by:
\begin{equation}
d^a = \frac{i\,X^a_i}{\sqrt{2} m_{3/2}} \, f^i \label{DFkin} \,.
\end{equation}
This relation is satisfied as a functional relation valid at any point of the scalar field space.
It shows that the $d_a$ are actually linear combinations of the $f_i$. Using
now the Cauchy--Schwarz inequality $|a^i b_i| \le \sqrt{a^i a_i} \sqrt{b^j b_j}$, one can derive a simple bound on
the size that the $d_a$ can have relative to the $f_i$:
\begin{equation}
|d_a| \le \frac {1}{2} \frac {M_{aa}}{m_{3/2}} \sqrt{f^i f_i}\,.
\label{DF}
\end{equation}
There is also a second relation between $f_i$ and $d_a$, that is instead valid
only at the stationary points of the potential.
It arises by considering a suitable linear combination of the stationarity conditions along the
direction $X_a^i$, in other words, by imposing $X_a^i \nabla_i V= 0$. This relation reads
\cite{KawamuraDFF,ChoiDFF} (see also \cite{KawaKobDFF}):
\begin{equation}
i\,\nabla_i X_{a\bar \jmath} \, f^i f^{\bar \jmath}
- \sqrt{\frac23}m_{3/2}\Big(3 f^i f_i - 1 \Big) \,d_a - \, \frac{M^2_{ab}}{\sqrt{6} \,m_{3/2}} \, d^b
+Q_{abc}\,d^b d^c= 0\label{DFF}\,.
\end{equation}
These relations show that whenever the $f_i$ auxiliary fields vanish also the $d_a$ auxiliary fields
must vanish. Therefore we can say that the {$f_i$}'s represent the basic qualitative seed
for supersymmetry breaking whereas the {$d_a$}'s provide additional quantitative effects.
In this section we will address
the problem of working out more concretely the implications of these constraints. In order to do so
we will concentrate on the case in which the gauge kinetic function is constant and diagonal:
$h_{ab} = g_a^{-2} \delta_{ab}$. In this case we can rescale the vector fields in such a way as to
include a factor $g_a$ for each vector index $a$. In this way, no explicit dependence on $g_a$ is left
in the formulas and the metric becomes just $\delta_{ab}$. Using this the flatness and stability
conditions take the following simple form:
\begin{eqnarray}\label{fs-2}
\left\{\hspace{-4pt}
\begin{array}{l}
f^i f_i +\sum_a d_a^2 = 1 \,,\\
R_{i\, \bar \jmath\, p\, \bar q} \, f^i f^{\bar \jmath} f^p f^{\bar q} \le \displaystyle{\frac 23}
+ \displaystyle{\frac 43} \, \mbox{$\sum_{a}$} \Big(2\, m_{a}^2 -1 \Big)\, d_a^2
- 2\, \mbox{$\sum_{a,b}$} d_a^2 d_b^2\,,
\end{array}
\right.
\end{eqnarray}
where we have defined the quantity $m_{a} = {M_{a}}/{(2\, m_{3/2})}$
measuring the hierarchies between scales. Denoting $v_a^i = {\sqrt{2} X_a^i}/{M_a}$ and
$T_{a\, i \bar \jmath} = {i\,\nabla_i X_{a\,\bar \jmath}}/{M_a}$ the relations between
{$f^i$} and {$d^a$} read:
\begin{eqnarray}
&& d_a = i\, m_a v_a^i f_i \hspace{.4cm}\Longrightarrow\hspace{.4cm}
|d_a| \le m_{a} \, \sqrt{ f^i f_i} \,,\\
&& d_a = \sqrt{\frac 32} \, \frac {m_a \,T_{a\, i \,\bar \jmath} \, f^i f^{\bar \jmath}}
{m_{a}^2 - 1/2 + 3/2\, f^i f_i} \label{KIn}\,.
\end{eqnarray}
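As a consistency check, the second of these relations follows directly from (\ref{DFF}) in the
setting considered here: for constant diagonal $h_{ab}$ one has $h_{abi} = 0$ and $Q_{abc} = 0$,
and assuming in addition a diagonal vector mass matrix, $M^2_{ab} = M_a^2\, \delta_{ab}$
(as is implicit in (\ref{KIn})), eq.~(\ref{DFF}) reduces to
\[
M_a\, T_{a\, i\, \bar \jmath}\, f^i f^{\bar \jmath}
= \bigg[\sqrt{\frac{2}{3}}\, m_{3/2} \Big(3\, f^i f_i - 1\Big)
+ \frac{M_a^2}{\sqrt{6}\, m_{3/2}}\bigg]\, d_a\,.
\]
Multiplying both sides by $\sqrt{6}/(4\, m_{3/2})$ and using $M_a = 2\, m_a\, m_{3/2}$ then
reproduces (\ref{KIn}).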
\subsection{Interplay between F and D breaking effects}
In this subsection we will study the interplay between the {$F$} and {$D$} supersymmetry
breaking effects. In order to do so it is useful to introduce the variables ${\hat f}^i = {f^i}/
{\sqrt{1 - \sum_a d_a^2}}$. Using these variables the conditions for flatness and stability can
be rewritten as:
\begin{eqnarray}
\left\{\hspace{-4pt}
\begin{array}{l}
\displaystyle{ {\hat f}^i {\hat f}_{i} = 1 } \,,\\
\displaystyle{R_{i\, \bar \jmath\, p\, \bar q} \, {\hat f}^i {\hat f}^{\bar \jmath} {\hat f}^p {\hat f}^{\bar q}
\le \frac 23 \, K(d_a^2,m_a^2) }\,,
\end{array}
\right.
\end{eqnarray}
where the function $K(d_a^2,m_a^2)$ is given by:
\begin{equation}
K(d_a^2,m_a^2) = 1 + 4\, \frac {\sum_a m_a^2 d_a^2 - \big(\sum_a d_a^2\big)^2 \raisebox{-5pt}{}}
{\big(1 - \sum_b d_b^2\big)^2} \,.
\end{equation}
In the limit in which the rescaled vector auxiliary fields are small ($d_a \ll 1$) we have that
${\hat f}^i \simeq f^i$ and therefore these variables $\hat f^i$ are the right variables to study
the effect of vector multiplets with respect to the case where only chiral multiplets are present. Note that
in such a limit the relation (\ref{KIn}) between F and D auxiliary fields can be written at first order as
$d_a\,\simeq\,\sqrt{3/2}\,m_a/(1+m_a^2)\,T_{a\,i\,\bar \jmath}\,{\hat f}^i\,{\hat f}^{\bar \jmath}$.
Using this we get:
\begin{equation}
K\,\simeq\,1+6\, \mbox{$\sum_a$} \,\xi^2_a(m)\,T_{a\,i\,\bar \jmath} \,T_{a\,p\,\bar q} \,
{\hat f}^i {\hat f}^{\bar \jmath}{\hat f}^p {\hat f}^{\bar q}\,,\hspace{1cm}\xi_a(m)=\frac{m_a^2}{1+m_a^2}\,,
\end{equation}
and we can write the flatness and stability conditions as:
\begin{eqnarray}
\left\{\hspace{-4pt}
\begin{array}{l}
\displaystyle{ {\hat f}^i {\hat f}_{i} = 1 } \,,\\
\displaystyle{\hat{R}_{i\, \bar \jmath\, p\, \bar q} \, {\hat f}^i {\hat f}^{\bar \jmath} {\hat f}^p
{\hat f}^{\bar q} \le \frac 23 }\,,
\end{array}
\right.
\end{eqnarray}
where $\hat{R}_{i\, \bar \jmath\, p\, \bar q} = R_{i\, \bar \jmath\, p\, \bar q} -
4\,\sum_a \xi^2_a(m)\,T_{a\,i\,(\bar \jmath} \,T_{a\,p\,\bar q)}$. This means
that the net effect in this case is to change
the curvature felt by the chiral multiplets. Note as well that in the case in which the mass of the
vectors is large this is not necessarily a small effect and can compete with the
curvature effects due to the chiral multiplets. Actually for heavy vector fields one can check that
integrating out the vector fields modifies the K\"ahler potential of the chiral multiplets
in a way that accounts for this shift in the K\"ahler curvature.
For larger values of {$d_a$} one can instead find an upper
bound to {$K$} (see \cite{yo3} for details):
\begin{equation}
\hspace{-.5cm}K \leq 1+6 \, \mbox{$\sum_a$} \, \xi^2_a(m) T_{a\,i\,\bar \jmath} \,T_{a\,p\,\bar q} \,
{\hat f}^i {\hat f}^{\bar \jmath}{\hat f}^p {\hat f}^{\bar q}\,,\hspace{1cm}
\xi_a(m)=\frac{m_a^2\, (1+\sum_bm_b^2)}{1+m_a^2+(m_a^2-\frac12)
\sum_bm_b^2}\,.
\end{equation}
So in this general case we get as well that the effect of vector multiplets can be encoded into an
effective curvature $\hat{R}_{i\, \bar \jmath\, p\, \bar q} = R_{i\, \bar \jmath\, p\, \bar q} -
4\,\sum_a \xi^2_a(m)\,T_{a\,i\,(\bar \jmath} \,T_{a\,p\,\bar q)}$.
In this section we have derived the implications of the flatness and stability conditions taking into
account the fact that $f^i$ and $d^a$ are not independent variables. The strategy that we have followed
is to use the relation (\ref{DFkin}) to write $d_a$ in terms of $f^i$. A second possibility would be
to use instead the relation (\ref{DFF}) to write $d^a$ in terms of $f^i$, and a third one would be to
impose only the bound (\ref{DF}) to restrict the values of the $d^a$ in terms of the values of $f^i$.
It is clear that switching from the relation (\ref{DFkin}) to the relation (\ref{DFF}) and finally to
the bound (\ref{DF}) represents a gradual simplification of the formulas, which is also
accompanied by a loss of information. As a consequence, these different types of strategies will
be tractable over an increasingly larger domain of parameters, but this will be
accompanied by a gradual weakening of the implied constraints. A detailed
derivation of the implications of the flatness and stability conditions when the relations (\ref{DFF})
and (\ref{DF}) are used can be found in \cite{yo3}.
\section{Some examples: moduli fields in string models}
In this section we will apply our results to the typical situations arising for the moduli sector of string models. The K\"ahler potential and superpotential governing the dynamics of
these moduli fields typically have the general structure:
\begin{equation}
\label{stringk}
K = - \,\mbox{$\sum_i$}\, n_i\, \ln (\Phi_i + \bar\Phi_i) + \dots\,,
\end{equation}
where by the dots we denote corrections that are subleading in the derivative and loop expansions
defining the effective theory. The K\"ahler metric computed from (\ref{stringk}) becomes diagonal
and the whole K\"ahler manifold factorizes into the product of $n$ one-dimensional K\"ahler
submanifolds. Also the only non-vanishing components of the Riemann
tensor are the $n$ totally diagonal components ${R_{i \,\bar \jmath\, p\, \bar q} =
R_{i} \,g_{i\,\bar \imath}^2 \, \delta_{i\, \bar \jmath p \bar q}}$ where $R_i={2}/{n_i}$.
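As an elementary check of this statement, consider a single field with
$K = -\, n\, \ln (\Phi + \bar\Phi)$. One then finds
\[
g_{\Phi \bar\Phi} = \frac{n}{(\Phi + \bar\Phi)^2}\,, \qquad
R_{\Phi \bar\Phi \Phi \bar\Phi} = \frac{2}{n}\, g_{\Phi \bar\Phi}^2\,,
\]
in the curvature sign conventions used here, so that indeed $R = 2/n$; in particular, the
no-scale value $n = 3$ corresponds exactly to the critical curvature $2/3$.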
Recall now that when only chiral fields participate in supersymmetry breaking the flatness and
stability conditions take the form (\ref{fs-1}), so in this particular case they just read:
\begin{equation}\label{32}
\,\mbox{$\sum_i$}\, |f^i|^2 = 1\,,\hspace{1.5cm}\,\mbox{$\sum_i$}\, R_{i}\, |f^i|^4< \frac 23 \,.
\end{equation}
These relations represent a quadratic inequality in the variable $|f^i|^2$ subject to a linear constraint.
This system of equations can be easily solved to get the condition $\sum_{i} R^{-1}_{i}
> \frac32$, which translates into:
\begin{equation}\label{con}
\hspace{4cm} \,\mbox{$\sum_i$}\, n_{i} > 3\,.
\end{equation}
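For completeness, here is a sketch of the minimization behind this result. Writing
$x_i = |f^i|^2 \ge 0$, the stability condition can be satisfied if and only if the minimum of
$\sum_i R_i\, x_i^2$ over the plane $\sum_i x_i = 1$ lies below $2/3$. Extremizing with a
Lagrange multiplier gives $x_i = R_i^{-1}/\sum_j R_j^{-1}$, so that
\[
\min_{\sum_i x_i = 1}\; \sum_i R_i\, x_i^2 \;=\; \Big(\sum_i R_i^{-1}\Big)^{-1} < \frac{2}{3}
\quad \Longleftrightarrow \quad
\sum_i R_i^{-1} > \frac{3}{2}\,,
\]
which, using $R_i = 2/n_i$, is precisely (\ref{con}).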
Also eqs.~(\ref{32}) constrain the values that the auxiliary fields {$|f_i|$} can take.
When a single modulus dominates the dynamics the condition (\ref{con}) implies $n>3$ (this result was
already found in \cite{Brustein:2004xn} in a less direct way). For the universal dilaton $S$ we have
$n_S = 1$ and therefore
it does not fulfill the necessary condition (\ref{con}). This shows in a very clear way that
just the dilaton modulus cannot lead to a viable situation \cite{nodildom} unless subleading corrections
to its K\"ahler potential become large \cite{nonpertdil,yetmoredil}.
We can therefore conclude that the scenario proposed in ref.~\cite{dilatondom}, in which
the dilaton dominates supersymmetry breaking, can never be realized in a controllable way.
On the other hand, the overall K\"ahler modulus $T$ has $n_T = 3$, and violates
only marginally the necessary condition. In this case, subleading corrections to the K\"ahler potential
are crucial. Recently some interesting cases where subleading corrections can help in achieving a
satisfactory scenario based only on the $T$ field have been identified for example in
\cite{nonpertvol,nonpertvolbis}.
In this case where the dynamics is dominated by
just one field the K\"ahler potential of (\ref{stringk}) corresponds to a constant curvature manifold with
$R = 2/n$ and it has a
global symmetry associated to the Killing vector $X = i\, \xi$, which can be gauged as long as the
superpotential is also gauge invariant. By doing so the potential would get a $D$-term contribution
that should be taken into account in the analysis of stability, as was explained in the previous section.
In such a situation the flatness condition in (\ref{fs-2}) can be solved by introducing
an angle $\delta$ and parametrizing the rescaled auxiliary fields as $f = \cos \delta$ and
$d = \sin \delta$. In terms of this angle the stability condition implies:
\begin{equation}
n > \frac {3}{1 + 4\, \tan^6\delta} \,.
\label{c2}
\end{equation}
From this expression, it is clear that it is always possible to
satisfy the stability condition for a large enough value of $\tan\delta$.
Note in particular that eq.~(\ref{c2}) implies that when $n$ is substantially less than
$3$, which is the critical value for stability in the absence of gauging, the contribution to
supersymmetry breaking coming from the $D$ auxiliary field must be comparable to the
one coming from the $F$ auxiliary field.
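As a concrete illustration, take $n = 1$, the value relevant for the dilaton: condition
(\ref{c2}) then requires $1 + 4 \tan^6 \delta > 3$, i.e. $\tan \delta > 2^{-1/6} \simeq 0.89$,
so that $d = \sin \delta$ must indeed be comparable in size to $f = \cos \delta$.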
A final comment is in order regarding the issue of implementing the idea of uplifting with an uplifting
sector that breaks supersymmetry in a soft way. It is clear that such a sector will have to contain
some light degrees of freedom, providing also some non-vanishing $F$ and/or $D$ auxiliary field.
Models realizing an $F$-term uplifting are easy to construct. A basic precursor of such models
was first constructed in \cite{lutysundrum}.
More recently, a variety of other examples have been constructed, where the extra chiral multiplets
have an O' Raifeartaigh like dynamics that is either genuinely postulated from the beginning
\cite{Fup} or effectively derived from the dual description of a strongly coupled theory \cite{Fupiss}
admitting a metastable supersymmetry breaking vacuum as in \cite{ISS}. Actually, a very simple
and general class of such models can be constructed by using as uplifting sector any kind of sector
breaking supersymmetry at a scale much lower than the Planck scale \cite{yo1}. Models realizing
a $D$-term uplifting, on the other hand, are difficult to achieve. The natural idea of relying on some
Fayet-Iliopoulos term \cite{BKQ} does not work, due to the already mentioned fact that such terms
must generically be field-dependent in supergravity, so that the induced $D$ is actually
proportional to the available charged $F$'s. It is then clear that there is an obstruction in getting
$D$ much bigger than the $F$'s (see also \cite{cr}).
Most importantly, if the only charged chiral multiplet in the model
is the one of the would-be supersymmetric sector (which is supposed to have vanishing $F$) then
also $D$ must vanish, implying that a vector multiplet cannot act alone as an uplifting sector
\cite{ChoiDup,DealwisDup}. This difference between $F$-term and $D$-term uplifting is, as
was emphasized in the previous section,
due to the basic fact that chiral multiplets can dominate supersymmetry breaking whereas vector
multiplets cannot.
Finally we would like to mention that the flatness and stability conditions simplify not only for
factorizable K\"ahler manifolds but also for some other classes of scalar manifolds that present
a simple structure for the Riemann tensor. This is the case for example for K\"ahler potentials
generating a scalar manifold of the form $G/H$ which arise for example in orbifold string
models \cite{yo2,yo3}, and also for no-scale supergravities and
Calabi-Yau string models \cite{yo4}.
\section{Conclusions}
In this note we have reviewed the constraints that can be put on gauge invariant supergravity
models from the requirement of the existence of a flat and metastable vacuum, following the
results of \cite{yo1,yo2,yo3}. We have shown that in a general {${\cal N}=1$} supergravity theory
with chiral and vector multiplets there are {strong necessary conditions} for the existence of
phenomenologically viable vacua. Our results can be summarized as follows. These necessary conditions severely {constrain the geometry} of the scalar manifold as well as {the direction} of supersymmetry breaking and {the size} of the auxiliary fields. When supersymmetry breaking is
dominated by the chiral multiplets the conditions restrict the {K\"ahler curvature}, whereas when
also vector multiplets participate in supersymmetry breaking the net effect is to alleviate the
constraints through a {lower effective curvature}. This is mainly because the $D$-type
auxiliary fields give a positive definite contribution to the scalar potential, in contrast to the
$F$-type auxiliary fields, whose contribution has indefinite sign. Nevertheless one should also
take into account the fact that the local symmetries associated to the vector multiplets also
restrict the allowed superpotentials. These results should be useful in discriminating
more efficiently potentially viable models among those emerging, for instance, as low-energy
effective descriptions of string models.
\begin{acknowledgement}
M.G.-R. would like to thank the organizers of the RTN meeting ``Constituents, Fundamental Forces and
Symmetries of the Universe'' held in Valencia, 1-5 October 2007, for the opportunity to present this work.
The work of C.A.S. has been supported by the Swiss National Science Foundation.
\end{acknowledgement}
\section{Related Work}
\label{sec:background}
There has been a growing interest in flaky tests in recent years, especially after the publication of Martin Fowler's article on the potential issues with non-deterministic tests \cite{fowler2011eradicating}, and Luo et al.'s \cite{luo2014empirical} seminal study.
To the best of our knowledge, there have been three reviews of studies on test flakiness: two systematic literature reviews, one by Zolfaghari et al. \cite{zolfaghari2020root} and the other by Zheng et al. \cite{zheng2021research}, and a survey by Parry et al. \cite{parry2021survey}.
There have also been developers' surveys aimed at understanding how developers perceive and deal with flaky tests in practice. A developer survey conducted by Eck et al. \cite{eck2019understanding} with 21 Mozilla developers studied the nature and origin of 200 flaky tests that had been fixed by those same developers. The survey looked into how those tests were introduced and fixed, and found 11 main causes for those 200 flaky tests (including concurrency, async wait and test order dependency). It also pointed out that flaky tests can be the result of issues in the production code (the code under test) rather than in the test itself. The authors additionally surveyed another 121 developers about their experience with flaky tests, finding that flakiness is perceived as a significant issue by the vast majority of the developers surveyed. The study also reported that developers found flaky tests to have a wider impact on the reliability of their test suites.
As part of their survey with developers, the authors conducted a mini multivocal review to collect evidence from the literature on the challenges of dealing with flaky tests. However, this was a small, targeted review addressing only those challenges, and it covered just 19 articles.
A recent developers' survey \cite{habchi2022qualitative} echoed the results found in Eck et al., noting that flakiness can result from interactions between the system components, the testing infrastructure, and other external factors.
Ahmad et al. \cite{ahmad2021empirical} conducted a similar survey with developers, aiming to understand developers' perception of test flakiness (e.g., how developers define flaky tests, and what factors are known to impact the presence of flaky tests).
The study identified several key factors that are believed to be impacted by the presence of test flakiness, such as software product quality and the quality of the test suite.
The systematic review by Zolfaghari et al. \cite{zolfaghari2020root} identified what has been done so far on test flakiness in general, and presented points for future research directions. The authors identified the main methods behind approaches for detecting flaky tests, methods for fixing flaky tests, empirical studies on test flakiness and root causes of flakiness, and listed tools for detecting flaky tests. The study suggested, as future directions, building a taxonomy of flaky tests that covers all dimensions (causes, impact, detection), formal modelling of flaky tests, setting standards for flakiness-free testing, investigating the application of AI-based approaches to the problem, and automated flaky test repair.
Zheng et al. \cite{zheng2021research} also discussed current trends and research progress on flaky tests. The study analysed questions similar to those in this paper, on the causes and detection techniques of flaky tests, across 31 primary studies. This review was thus limited in scope, and it did not discuss in detail the mechanisms of current detection approaches or the wider impact of flaky tests on other techniques.
There was a short scoping grey literature review by Barboni et al. \cite{barboni2021we} that focused on investigating the definition of flaky tests in grey literature by analysing flaky-test related blogs posted on \emph{Medium}. The study is limited to understanding the definition of flaky tests (highlighting the problem of inconsistent terminology used in the surveyed articles), and covered a small subset of the posts (analysing only 12 articles in total).
Parry et al. \cite{parry2021survey} conducted a more recent comprehensive survey of academic literature on the topic of test flakiness. The study addressed similar research questions to our review, and to those in the previous reviews, by studying causes, detection and mitigation of flaky tests. The study reviewed 76 articles that focused on flaky tests.
The review presented in this paper covers a longer period of time than those previous reviews \cite{parry2021survey,zolfaghari2020root,zheng2021research}, including work dating back further (on ``non-deterministic tests''), before the term ``flaky tests'' became popular. The review contains a discussion of publications through the end of April 2022, whereas the most recent review, by Parry et al. \cite{parry2021survey}, covers publications through April 2021. We found a significant number of academic articles published in the period between the two reviews (229 articles published between 2021 and 2022). In general, our study complements previous work in that 1) we gather more detailed evidence about causes of flaky tests, and investigate the relationships between different causes, 2) we investigate both the impact of and responses to flaky tests in research and practice, and 3) we list the \textit{indirect} impact of flaky tests on other analysis methods and techniques (e.g., software debugging and maintenance).
All previous reviews focused on academic literature. The review by Zolfaghari et al. \cite{zolfaghari2020root} covered a total of 43 articles, Parry et al. \cite{parry2021survey} covered 76 articles, and Zheng et al. \cite{zheng2021research} covered 31 articles. Our study complements these reviews by providing much wider coverage and a more in-depth perspective on the topic of flaky tests. We include a total of 560 academic articles, as well as 91 grey literature entries (details in Section \ref{sec:data-extraction}). We cover not only studies that directly report on flaky tests, but also those that reference or discuss the issue of test flakiness without it being the focus of the study. We also discuss a wide range of flakiness-related tools used in research and practice (including industrial and open-source tools), and discuss the impact of flaky tests from different perspectives.
A comparative summary of this review with the previous three reviews is shown in Table \ref{tab:reviews-summary}.
\begin{table*}[]
\caption{Summary of prior reviews on test flakiness compared with our review}
\label{tab:reviews-summary}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{@{}lllll@{}}
\toprule
Paper & Period covered & \begin{tabular}[c]{@{}l@{}}\# of reviewed\\ articles\end{tabular} & \begin{tabular}[c]{@{}l@{}}Grey \\ literature\end{tabular} & Focus \\ \midrule
\cite{zolfaghari2020root} Zolfaghari et al. & 2013 - 2020 & 43 & -- & causes and detection techniques \\
\cite{zheng2021research} Zheng et al. & 2014 - 2020 & 31 & -- & causes, impact, detection and fixing approaches \\
\cite{parry2021survey} Parry et al. & 2009 - 4/2021 & 76 & -- &
\begin{tabular}[c]{@{}l@{}}causes, costs and consequences, detection and approaches for \\ mitigation and repair \end{tabular} \\
This review & 1994 - 4/2022 & 560 & 91 & \begin{tabular}[c]{@{}l@{}}taxonomy of causes, detection and response techniques, and \\impact on developers, process and product in research and practice\end{tabular} \\ \bottomrule
\end{tabular}}
\end{table*}
\section{Study Design}
\label{sec:design}
We designed this review following the Systematic Literature Review (SLR) guidelines by Kitchenham and Charters \cite{kitchenham2007guidelines}, and the guidelines of Garousi et al. \cite{Garousi2019Guidelines} on multivocal review studies in software engineering. The review process is summarized in Fig. \ref{fig:protocol}.
\begin{figure*}[h]
\centering
\includegraphics[width=0.70\linewidth]{media/review_protocol.png}
\caption{An overview of our review process}
\label{fig:protocol}
\end{figure*}
\subsection{Research Questions}
\label{sec:questions}
This review addresses a set of research questions that we categorized along four main dimensions: \textit{causes}, \textit{detection}, \textit{impact} and \textit{responses}. We list our research questions below, with the rationale for each.
\subsubsection{Common Causes of Flaky Tests}
\noindent\textbf{RQ1. What are the common causes of flaky tests?
}
The rationale behind this question is to list the common causes of flaky tests and then group similar categories of causes together.
We also investigate the cause-effect relationships between different flakiness causes as reported in the reviewed studies, as we believe that some causes are interrelated. For example, flakiness related to the User Interface (UI) could be attributed to the underlying environment (e.g., the Operating System (OS) used). Understanding the causes and their relationships is key to dealing with flaky tests (i.e., detection, quarantine or elimination).
\subsubsection{Detection of Flaky Tests}
\noindent\textbf{RQ2. How are flaky tests being detected?}\\
\noindent To better understand flaky tests detection, we divide this research question into the following two sub-questions.\\
\noindent\textbf{RQ2.1. What \textit{methods} have been used to detect flaky tests?\\
}
\noindent\textbf{RQ2.2. What \textit{tools} have been developed to detect flaky tests?
} \\
In RQs 2.1 and 2.2, we gather evidence regarding methods/tools that have been proposed/used to detect flaky tests. We seek to understand how these methods work. We later discuss the potential advantages and limitations of current approaches.
\subsubsection{Impact of Flaky Tests}
\noindent\textbf{RQ3. What is the impact of flaky tests?}
As reported in previous studies, flaky tests are generally perceived to have a negative impact on software product and process \cite{fowler2011eradicating,googleFlaky2016,harman2018start,eck2019understanding}. However, it is important to understand the extent of this impact, and what exactly is affected (e.g., process, product, personnel).
\subsubsection{Responses to Flaky Tests}
\noindent\textbf{RQ4. How do developers/organizations respond to flaky tests when detected?}
Here we look at the responses and mitigation strategies employed by developers, development teams and organisations. We note that there are both technical responses (i.e., how to fix the test or the underlying code that causes the flakiness) and management responses (i.e., the processes followed to manage flaky test suites).
\subsection{Review Process}
\label{sec:review-process}
Since this is a multivocal review where we search for academic and grey literature in different forums, the search process for each of the two parts (academic and grey literature) is different and requires different steps.
The systematic literature review search targets peer-reviewed publications that have been published in relevant international journals, conference proceedings and theses in the areas of software engineering, software testing and software maintenance. The search also covers preprint and postprint articles available in open access repositories such as \textit{arXiv}. For the grey literature review, we searched for top-ranked online posts on flaky tests. This includes blog posts, technical reports, white papers, and official documentation for tools.
We used \textit{Google Scholar} to search for academic literature and \textit{Google} search engine to search for grey literature.
Both \textit{Google Scholar} and \textit{Google search} have been used in similar multivocal studies in software engineering \cite{garousi2018smell,myrbakken2017devsecops,garousi2016and} and other areas of computer science \cite{Islam2019Security,pereira2021security}.
Google Scholar indexes most major publication databases relevant to computer science and software engineering \cite{neuhaus2006depth}, in particular the ACM digital library, IEEE Xplore, ScienceDirect and SpringerLink, thus providing a much wider coverage compared to those individual libraries. A recent study on the use of Google Scholar in software engineering reviews found it to be very effective, as it was able to retrieve $\sim$96\% of primary studies in other review studies \cite{yasin2020using}.
Similarly, it has been suggested that a regular \textit{Google Search} is sufficient to search for grey literature material online \cite{mahood2014searching,adams2016searching}.
\subsubsection{Searching Academic Publications}
We closely followed Kitchenham and Charters's guidelines \cite{kitchenham2007guidelines} to conduct a full systematic literature review.
The goal here is to identify and analyse primary studies relating to test flakiness. We defined a search strategy and search string that covers the terminology associated with flaky tests. The search string was tested and refined multiple times during our pilot runs to ensure coverage. We then defined a set of inclusion and exclusion criteria.
We included a quality assessment of the selection process to ensure that we include all relevant primary studies. We explain those steps in detail below.
We define the following criteria for our search:
\begin{enumerate}
\item [] \textbf{Search engine:} Google Scholar.
\item []\textbf{Search String:} ``flaky test'' OR ``test flakiness'' OR ``flaky tests'' OR ``nondeterministic tests'' OR ``non deterministic tests'' OR ``nondeterministic test'' OR ``non deterministic test''
\item [] \textbf{Search scope:} all academic articles published until 30 April 2022.
\end{enumerate}
In case an article appears in multiple venues (e.g., a conference paper that was also published on arXiv under a different title, or material from a thesis that was subsequently published in a journal or conference proceedings), we include only the peer-reviewed published version over the other available versions. This ensures that we include as much peer-reviewed material as possible.
We conducted this search over two iterations. The first iteration covers the period until 31 December 2020, while the second covers the period between 1 January 2021 and 30 April 2022. Results from the two searches were then combined.
\noindent Our review of academic articles follows these steps:
\begin{enumerate}
\item Search and retrieve relevant articles using the defined search terms using Google Scholar.
\item Read the title, abstract and, if needed, the full text (by one of the co-authors) to determine relevance.
\item Cross-validate a randomly selected set of articles by another co-author.
\item Apply inclusion and exclusion criteria.
\item Classify all included articles based on the classification we have for each question (details of those classifications are provided for each research question in Section \ref{sec:results}).
\end{enumerate}
\subsubsection{Searching for Grey Literature}
Here we followed the recommendations made in previous multivocal review studies \cite{garousi2018smell,Garousi2019Guidelines} and reported the results obtained only from the first 10 pages (10 items each) of the Google search. It was reported that relevant results usually only appear in the first few pages of the search \cite{Garousi2019Guidelines}. We observed that the results in the last five pages were less relevant compared to those that appeared in the first five.
For the grey literature search, we define the following criteria:
\begin{enumerate}
\item [] \textbf{Search engine:} Google Search.
\item []\textbf{Search String:} \textit{``flaky test'' OR ``test flakiness'' OR ``non-deterministic test''}.
\item [] \textbf{Search scope:} results that appear in the first 10 pages of the search.
Note that this search was conducted over two iterations: we first searched for material published up until 31 December 2020, and in the second iteration we searched for material published up to 30 April 2022 (removing duplicates found between the two searches).
\end{enumerate}
\noindent The grey literature review consists of the following steps:
\begin{enumerate}
\item Search and retrieve relevant results using the search terms in Google Search.
\item Read the title and full article (if needed) to determine relevance.
\item Cross-validate a randomly selected set of articles by another co-author.
\item Check external links and other external resources that have been referred to in the articles/posts. Add new results to the dataset.
\item Classify all posts based on the classification scheme.
\end{enumerate}
\subsubsection{Selection Criteria}
\noindent We selected articles based on the three following inclusion criteria:
\begin{itemize}
\item Studies that discuss test flakiness as the main topic of the article.
\item Studies that discuss test flakiness as an impact of using a specific technique or tool, or in an experiment.
\item Studies that discuss test flakiness as a limitation of a technique, tool or experiment.
\end{itemize}
\noindent We apply the following exclusion criteria:
\begin{enumerate}
\item [--] Articles only mentioning flakiness, but without substantial discussion on the subject.
\item [--] Articles that are not accessible electronically, or whose full text is not available for download\footnote{That is, the article is available neither through a known digital library such as the ACM Digital Library, IEEE Xplore, ScienceDirect and SpringerLink, nor publicly through an open-access repository such as arXiv or ResearchGate}.
\item [--] Studies on nondeterminism in hardware and embedded systems.
\item [--] Studies on nondeterminism in algorithms testing (e.g., when nondeterminism is intentionally introduced).
\item [--] Duplicate studies (e.g., reports of the same study published in different places or on different dates, or studies that appeared in both academic and grey literature searches).
\item [--] Secondary studies on test flakiness (previous review articles).
\item [--] Editorials, prefaces, books, news, tutorials and summaries of workshops and symposia.
\item [--] Multimedia material (videos, podcasts, etc.) and patents.
\item [--] Studies written in a language other than English.
\end{enumerate}
\noindent For the grey literature study, we also exclude the following:
\begin{enumerate}
\item [--] Academic articles, as those are covered by our academic literature search using Google Scholar.
\item [--] Tools description pages (such as GitHub pages) with little or no discussion about the causes of flakiness or its impact.
\item [--] Web pages that mention flaky tests with no substantial discussion (e.g., just provided a definition of flakiness).
\end{enumerate}
\subsubsection{Pilot Run}
\label{sec:pilot}
Before we finalized our search keywords and search strings, and defined our inclusion and exclusion criteria, we conducted a pilot run using a simplified search string to validate the study selection criteria, refine/confirm the search strategy and refine the classification scheme before conducting the full-scale review. The pilot run used a short, intentionally inclusive string (\textit{``flaky test'' OR ``test flakiness'' OR ``non-deterministic test''}) on both Google and Google Scholar. These keywords were drawn from two key influential publications (either used in the title or as keywords) that the authors were aware of, i.e., the highly cited work on the topic by Luo et al. \cite{luo2014empirical} and the well-known blog post by Martin Fowler \cite{fowler2011eradicating}. This was done for the period until 31 December 2020.
In the first iteration, we retrieved 10 academic articles and 10 grey literature results, and then in the second iteration we obtained another 10 academic articles (next 10 results) and 10 grey literature results.
We validated the results of this pilot run based on articles' relevance as well as our familiarity with the field. We validated the retrieved articles and classified all retrieved results using the defined classification scheme in order to answer the four research questions. We used this pilot run to improve and update our research questions and our classification scheme. We classified a total of 20 articles in each group (academic and grey literature).
With respect to relevance, we were able to identify 14 of the 20 academic articles found by the search as being already familiar to us, lending a degree of confidence that our search would at minimum find papers relevant to our research questions.
\subsection{Data Extraction and Classification}
\label{sec:data-extraction}
We extracted relevant data from all reviewed articles in order to answer the different research questions. Our search results (following the different steps explained above) are shown in Fig. \ref{fig:results_stats}.
As explained in Section \ref{sec:review-process}, we conducted our search over two iterations, covering two periods. The first period covers articles published up until 31 December 2020, while the second covers the period from 1 January 2021 until 30 April 2022.
In the first iteration we retrieved a total of 1092 results, with 992 articles obtained from the Google Scholar search and 100 grey literature articles obtained from Google Search (i.e., the first 10 pages). After filtering the relevant articles and applying the inclusion and exclusion criteria,
we ended up with a total of 408 articles to analyse (354 academic articles and 54 grey literature articles).
In the second iteration (from January 2021 until April 2022), we retrieved 330 academic articles from Google Scholar and 100 articles from Google Search (results from the first 10 pages). We removed the duplicates (e.g., results that might appear twice such as the same publication appeared in multiple publication venues, or grey literature article that appeared in the top 10 pages over the two iterations). For this iteration, after filtering the relevant articles, we ended up with 243 results (206 academic articles and 37 grey literature posts). Collectively, we identified a total of 560 academic and 91 grey literature articles for our analysis.
\begin{figure*}[h]
\centering
\resizebox{0.6\linewidth}{!}{
\includegraphics[width=\linewidth]{media/review-results.png}}
\caption{Results of the review process}
\label{fig:results_stats}
\end{figure*}
The analysis of articles was done by three coders (co-authors). We split the articles between the coders, where each coder read the articles and extracted data to answer our research questions. For each article, we first tried to understand the context of flakiness. We then looked for the following: 1) the discussed causes of flakiness (RQ1), 2) how flakiness is detected (methods, tools, etc.) (RQ2), 3) the noted impact of flakiness (RQ3), and 4) the approach used to respond to or deal with flaky tests (RQ4).
\subsection{Reliability Assessment}
We conducted a reliability check of our filtration and classification. We cross-validated a randomly selected sample of 88/992 academic articles and 49/100 grey literature articles, as obtained from the first iteration (giving a confidence level of 95\% and a confidence interval of 10). Two of the authors cross-validated those articles, with each classifying half of them (44 of the academic articles and 25 of the grey literature articles). In addition, a third co-author (who was not involved in the initial classification) cross-validated another 25 randomly selected academic/grey articles.
In those cross-validations, we reached an agreement level of $\geq$ 90\%.\\
We provide the full list of articles that we reviewed (both academic and grey) online: \url{https://docs.google.com/spreadsheets/d/1eQhEAUSMXzeiMatw-Sc8dqvftzLp8crC3-B9v5qHEuE}.
\section{Results}
\label{sec:results}
\subsection{Overview of the publications}
We first provide an overview of the timeline of publications on flaky tests in order to understand how the topic has been viewed and developed over the years. The timeline of publications is shown in Fig. \ref{fig:timeline}. Based on our search for academic articles, we found that there have been articles that discuss the issue of \emph{nondeterminism} in test outcomes dating back to 1994, with 34 articles found between 1994 and 2014. However, the number of articles has significantly increased since 2014.
There has been an exponential growth in publications addressing flaky tests in the past 6 years (between 2016 and 2022).
We attribute this increase to the rising attention to flaky tests in the research community since the publication of the first empirical study on the causes of flaky tests in Apache projects by Luo et al. in 2014 \citeS{S1}, which directly addressed the issue of flaky tests in great detail (in terms of common causes and fixes). Over 93\% of the articles were published after this study, with 41\% of them (229) published between January 2021 and April 2022 alone, indicating an increasing popularity over the years.\\
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]
{media/timeline.png}
\caption*{\scriptsize{* The 2021-2022 numbers include articles published between Jan 2021 and April 2022.}}
\caption{Timeline of articles published on flaky tests, including the focused articles}
\label{fig:timeline}
\end{figure}
In terms of publication types and venues, more than 40\% of these publications have been published in reputable and highly rated software engineering venues. Top publication venues include the premier software engineering conferences: the International Conference on Software Engineering (ICSE), the Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE) and the International Conference on Automated Software Engineering (ASE). Publications on flaky tests have also appeared in the main software testing conferences, the International Symposium on Software Testing and Analysis (ISSTA) and the International Conference on Software Testing, Verification and Validation (ICST). They also appear in the software maintenance conferences, the International Conference on Software Maintenance and Evolution (ICSME) and the International Conference on Software Analysis, Evolution and Reengineering (SANER). A few articles ($\sim$10\%) were published in premier software engineering journals, including Empirical Software Engineering (EMSE), the Journal of Systems and Software (JSS), IEEE Transactions on Software Engineering (TSE) and the Software Testing, Verification and Reliability (STVR) journal. The distribution of publications in key software engineering venues is shown in Fig. \ref{fig:venues}.
Regarding inclusion and exclusion, we note that we did not perform a quality assessment of articles based on venues or citation statistics. We focused on primary studies (excluding the three earlier reviews and other secondary studies). Furthermore, looking deeper into the focused studies provided insights into their quality.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]
{media/venue-distr.png}
\caption{Distribution of publications based on publication venues}
\label{fig:venues}
\end{figure}
In terms of programming languages, the largest share of the studies focused on Java (49\% of those studies), with only a few studies discussing flakiness in other languages such as Python, JavaScript and .NET, and 49 (14\%) of the studies using multiple languages (results listed in Table \ref{tab:languages}).
\begin{table}[h]
\centering
\caption{Flaky tests in terms of languages studied}
\begin{tabular}{lc}
\toprule
\textbf{Language} & \textbf{\# Articles} \\ \midrule
Java & 212 \\
Python & 25 \\
JavaScript & 11 \\
.NET languages & 6 \\
Other languages & 27 \\
Multiple languages & 58 \\
Not specified/unknown & 221 \\
\bottomrule
\end{tabular}
\label{tab:languages}
\end{table}
We classified all articles into three main categories: 1) studies focusing on test flakiness, where flakiness is the focal point (e.g., an empirical investigation into test flakiness due to a particular cause such as concurrency or order dependency), 2) studies that explain how test flakiness impacts tools/techniques (e.g., the impact of test flakiness on program repair or fault localization) or 3) studies that just mention or reference test flakiness (but it is not a focus of the study). A breakdown of the focus of the articles and the nature of the discussion on test flakiness in academic literature is shown in Table \ref{tab:focus}.
We observed that only 109 articles ($\sim$20\%) of all the collected studies focused on test flakiness as the main subject of the study. The majority of the reviewed articles (297, representing $\sim$53\%) just mentioned test flakiness as a related issue or as a threat to validity. The remaining 154 articles ($\sim$27\%) discussed flakiness in terms of its impact on a proposed technique or tool that is the subject of the study.
\begin{table}[!h]
\caption{Focus of academic articles}
\resizebox{\linewidth}{!}{
\begin{tabular}{llc}
\toprule
\textbf{Type} & \textbf{Description} & \multicolumn{1}{l}{\textbf{\# of articles}} \\ \toprule
Focused & \begin{tabular}[c]{@{}l@{}}Studies focusing on test flakiness\end{tabular} & 109 \\
Impact & \begin{tabular}[c]{@{}l@{}}Studies that explain how test flakiness impacts tools/techniques \end{tabular} & 154 \\
Referenced & \begin{tabular}[c]{@{}l@{}}Studies that just mention or reference test flakiness\end{tabular} & 297 \\ \bottomrule
\end{tabular}}
\label{tab:focus}
\end{table}
As for the grey literature results, all articles we found were published following the publication of Martin Fowler's influential blog post on test nondeterminism in early 2011 \cite{fowler2011eradicating}.
Similar to the recent increase in attention in the academic literature, we found that almost 50\% of the grey literature articles (26) were published between 2019 and 2020, indicating a growing interest in flaky tests and shedding light on the importance of the topic in practice.
In the following subsections, we present the results of the study by answering each of our four research questions in detail.
\input{Sections/rq1causes}
\input{Sections/rq2detection}
\input{Sections/datasets}
\input{Sections/rq3impact}
\input{Sections/rq4responses}
\section{Discussion}
\label{sec:discussion}
In this section, we discuss the results of the review and present possible challenges in detecting and managing flaky tests.
We also provide our own perspective on current limitations of existing approaches, and discuss potential future research directions.
\subsection{Flaky Tests in Research and Practice}
The problem of flaky tests has been the subject of wide discussion among researchers and practitioners. Dealing with flaky tests is a real issue that impacts developers and test engineers on a daily basis. It can undermine the validity of test suites and render them almost useless \citeG{G8,G74}. The topic of flaky tests has been a research focus, with a noticeable increase in the number of publications over the last few years (between 2017 and 2021). We observed that the issue of test flakiness is discussed slightly differently in the academic and grey literature. The majority of research articles mainly discuss the impact of flaky tests on different software engineering techniques and applications. Research that focuses solely on flaky tests mainly tackles new methods for detecting flaky tests, aiming to increase the speed (i.e., how fast flakiness can be manifested) and accuracy of flakiness detection.
Many of those studies have focused on specific causes of flakiness (either in terms of detection or fixes), namely those related to order dependency in test execution or to concurrency. There is generally a lack of studies that investigate the impact of other causes of test flakiness, such as those related to variation in the environment or in the network. This is an area that can be addressed by future tools designed specifically to detect test flakiness due to those factors. Our recent work targets this by designing a tool, \textit{saflate}, that is aimed at reducing test flakiness by sanitising failures induced by network connectivity problems \cite{dietrich2022flaky}.
On the other hand, the discussion in grey literature focused more on the general strategies that are followed in practice to deal with flaky tests once they are detected in the CI pipeline. Those are usually detected by checking if the test outcomes have changed between different runs (e.g., from \texttt{PASS} to \texttt{FAIL}). Several strategies that have been followed by software development teams are discussed in grey literature, especially around what to do with flaky tests once they have been identified. A notable approach is quarantining flaky tests in an isolated `staging' area before they are fixed \citeG{G4,S106}.
The gap between academic research and practice in the way flaky tests are viewed has also been discussed in some of the most recent articles. An experience report published by managers and engineers at Facebook \citeG{G127} explained how real-world applications can \textit{always} be flaky (e.g., due to the non-determinism of algorithms), arguing that we should focus not on when or if tests are flaky, but rather on how flaky those tests are. This supports Harman and O'Hearn's view \cite{harman2018start} that all tests should, by default, be considered flaky, which provides a defensive mechanism that can help in managing flaky tests in general.
\subsection{Identifying and Detecting Flaky Tests}
In RQ1, we surveyed the common causes of flaky tests, whether in the CUT or in the tests themselves. We observe that there are a variety of causes for flakiness, from the use of specific programming language features to the reliance on external resources. It is clear that there are common factors that are responsible for flaky test behaviours, regardless of the programming language used or the application domains. Factors like test order dependency, concurrency, randomness, network and reliance on external resources are common across almost all domains, and are responsible for a high proportion of flaky tests.
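To make the order-dependency category concrete, the following minimal sketch of an order-dependent test in Java with JUnit 5 (the class and test names are hypothetical choices of ours, not taken from any reviewed study) shows how shared mutable state makes a test's outcome depend on the execution order:
\begin{verbatim}
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class ParserTest {
    // Shared mutable state: the seed of the order dependency.
    static String separator;

    @Test
    void testConfigure() {
        separator = ",";  // writes state that testParse silently relies on
        assertEquals(",", separator);
    }

    @Test
    void testParse() {
        // Passes only if testConfigure has already run; under a different
        // (e.g., randomized or parallel) execution order, separator is
        // still null and this test fails with a NullPointerException.
        assertEquals(2, "a,b".split(separator).length);
    }
}
\end{verbatim}
Rerunning the suite under shuffled test orders is the typical way in which such tests are exposed, which is why execution order features prominently in the detection approaches discussed under RQ2.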
Beyond the list of causes noted in the first empirical study on this topic by Luo et al. \citeS{luo2014empirical}, we found evidence of a number of additional causes, namely flakiness due to algorithmic nondeterminism (related to randomness), variations of hardware and environment, and causes related to the use of ML applications (which are nondeterministic in nature).
Some causes identified in academic literature overlap, and causes can also be interconnected. For example, UI flakiness can in turn be due to a platform dependency (e.g., dependency on a specific browser) or to event races.
With this large number of causes of flaky tests, further in-depth investigation into the different causes is needed to understand how flaky tests are introduced into the code base, and to better understand the root causes of flaky tests in general. This also includes studies of test flakiness in the context of a variety of programming languages (as opposed to Java or Python, which most flakiness studies have covered).
Another point worth mentioning here is how \emph{flakiness} is defined in different studies -- in general, a test is considered flaky if it has a different outcome on different runs with the same input data. Academic literature refers to tests having binary outcomes, i.e., \emph{PASS} or \emph{FAIL}. In practice, however, tests can have multiple outcomes on execution (pass, fail, error or skip). For instance, tests may be skipped/ignored (potentially non-deterministically) or may not terminate (or may time out, depending on the configuration of tests or test runners). A more precise and consistent definition of the different variants of flakiness is needed.
\subsection{The Impact of and Response to Flaky Tests}
It is clear that flaky tests have a negative impact on the validity of test suites and on the quality of the software as a whole.
A few impact points have been discussed in both academic and grey literature. Notable areas that are impacted by flaky tests are test-dependent techniques, such as fault localization, program repair and test selection.
An important impact area that has not been widely acknowledged is how flaky tests affect developers. Although the impact on developers was mentioned in developer surveys \citeS{S8,S1019}, and in many grey literature articles (e.g., \citeG{G2,G8,G134}), such an impact has not been explicitly studied in more detail -- an area that should be explored further in the future.
In terms of responses to flaky tests, the most common approach appears to be to quarantine flaky tests once they are detected. The recommendation is to keep tests with flaky outcomes in a quarantine area separate from other ``healthy'' tests. This way, flaky tests are not forgotten, and the cause of flakiness can be investigated later to apply a suitable fix, while the non-flaky tests can still run without delaying development pipelines.
However, there remain some open questions about how to deal with quarantined tests, how long those tests should stay in the designated quarantine area, and how many tests can be quarantined at once. A strategy (that can be implemented in tools) for processing quarantined flaky tests and removing them from the quarantine area (i.e., de-quarantining) also needs further investigation.
One interesting area for future research is to study the long-term impact of flaky tests. For example, what is the impact of flaky tests on the validity of test suites if they are left unfixed or unchanged? Do a few flaky tests that are left untreated in the test suite have a wider impact on the presence of bugs as development progresses?
It is also interesting to see, when flaky tests are flagged and quarantined, how long it takes developers to fix those tests. This can be viewed as technical debt that will need to be paid back. Therefore, a study on whether this debt is actually paid back, and how long it takes, would be valuable.\\
\subsection{Implications on Research and Practice}
This study yields some actionable insights and opportunities for future research. We discuss those implications in the following:
\begin{enumerate}
\item The review clearly demonstrates that academic research on test flakiness focuses mostly on Java, with limited studies done in other popular languages\footnote{Based on Stack Overflow language popularity statistics \url{https://insights.stackoverflow.com/survey/2021\#technology-most-popular-technologies}} such as JavaScript and Python. The likely reasons are a combination of existing expertise, the availability of open-source datasets, and the availability of high-quality and low-cost (often free) program analysis tools. However, our grey literature review shows that the focus among practitioners is more on the ``big picture'', and flakiness has been discussed in the context of a variety of programming languages.
\item Different programming languages have different features, and it is not obvious how results observed in Java programs carry over to other languages. For instance, flakiness caused by test order dependencies and shared (memory) state is not possible in a pure functional language (like Haskell), and is at least less likely in a language that manages memory more actively to restrict aliasing (like Rust using ownership\footnote{\url{https://doc.rust-lang.org/book/ch04-00-understanding-ownership.html}}). In languages with different concurrency models, such as single-threaded languages (e.g., JavaScript), some flakiness caused by concurrency is less likely to occur. For instance, deadlocks are more common in multithreaded applications \cite{wang2017comprehensive}. Still, this does not mean that flakiness cannot occur due to concurrency, but it is likely to happen to a lesser extent than in multithreaded languages such as Java. Similarly, languages (like Java) that use a virtual machine decoupling the runtime from the operating system and hardware are less likely to produce flakiness due to variability in those platforms than low-level languages lacking such a feature, like C. Finally, languages with strong integrated dynamic/meta-programming features facilitate testing techniques like mocking, which, when used, may help avoid certain kinds of flakiness, for instance flakiness caused by network dependencies.
\item There seems to be an imbalance between academic and industry articles in how they respond to flaky tests. Industry responses have focused on processes to deal with flaky tests (such as quarantining strategies), while academic research has focused more on detecting causes (note that there are some recent studies on automatically repairing flakiness in ML projects and order-dependent tests). This is not unexpected; however, it may also indicate opportunities for future academic research to provide tools that can help automate quarantining (and de-quarantining). Furthermore, it appears that some industrial practices, such as rerunning failed tests until they pass, may require a deeper theoretical foundation. For instance, does a test that only passes after several reruns provide the same level of assurance as a test that always passes? The same question can be asked for entire test suites: what is the quality of a test suite that never passes in its entirety, but in which each individual test is observed to pass in some configuration?
\item Another question arises from this: how many reruns are required to assure (with a high level of confidence) that a test is not flaky? From what we observed in the studies that used a rerun approach to manifest flakiness, the number of reruns used differs from one study to another (with some studies noting 2 \citeS{S16}, 10 \citeS{S99} or even 100 \citeS{S6} reruns as baselines). A recent study on Python projects reported that $\sim$170 reruns are required to ensure a test is not flaky due to non-order-dependent reasons \cite{gruber2021empirical}.
We believe that the number of reruns required will depend largely on the cause of flakiness. Some rerun-based tools, such as rspec-retry\footnote{\url{https://github.com/NoRedInk/rspec-retry}} for Ruby or the \texttt{@RepeatedTest}\footnote{\url{https://junit.org/junit5/docs/5.0.1/api/org/junit/jupiter/api/RepeatedTest.html}} annotation in JUnit, provide an option to rerun tests a specified number of times, \textit{n} (set by the developer). However, it is unknown what a suitable threshold is for the number of reruns required for different types of flakiness.
Further empirical investigation to quantify the minimum number of reruns required to manifest flakiness (for the different causes and in different contexts) is needed. Alshammari et al. \citeS{S1008} took a step in this direction by rerunning tests in 24 Java projects 10,000 times to find out how many flaky tests can be found with different numbers of reruns. A minimal sketch of such a rerun-based check is shown after this list.
\end{enumerate}
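To make the rerun-based check discussed above concrete, the following is a minimal Java sketch (our illustration, not taken from any surveyed tool); the \texttt{test} supplier and the threshold \texttt{n} are hypothetical placeholders for a real test runner and a cause-specific rerun budget:
\begin{verbatim}
import java.util.HashSet;
import java.util.Set;
import java.util.function.Supplier;

public class RerunCheck {
    // Reruns a test n times; 'test' is a placeholder that executes one
    // test case and returns its outcome ("PASS", "FAIL", "ERROR", "SKIP").
    static boolean isFlaky(Supplier<String> test, int n) {
        Set<String> outcomes = new HashSet<>();
        for (int i = 0; i < n; i++) {
            outcomes.add(test.get());
        }
        // More than one distinct outcome across identical runs is the
        // operational definition of flakiness used in rerun-based tools.
        return outcomes.size() > 1;
    }
}
\end{verbatim}
The open question raised above is precisely how large \texttt{n} must be, for a given cause of flakiness, before a negative result from such a check provides real confidence.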
\section{Validity Threats}
\label{sec:threats}
We discuss a number of potential threats to the validity of the study below, and explain the steps taken to mitigate them. \\
\noindent\textbf{Incomplete or inappropriate selection of articles:} As with any systematic review, due to the use of an automatic search it is possible that we missed some articles that were either not covered by our search string or not captured by our search tool.
We mitigated this threat by first running and refining our search string multiple times. We piloted the search string on Google Scholar to check what it would return. We cross-validated this by checking whether the search string returned well-known, highly cited articles on test flakiness (e.g., \cite{luo2014empirical,memon2013automated,eck2019understanding}). We believe this iterative approach has improved our search string and reduced the risk of missing key articles.
There is also a chance that some related articles used terms other than those in our search string. If terms other than ``flaky'', ``flakiness'' or ``non-deterministic'' were used, then the possibility of missing those studies increases. To mitigate this limitation we repeatedly refined our search string and performed sequential testing in order to recognize and include as many relevant studies as possible.\\
\noindent\textbf{Manual analysis of articles:}
We read through each of the academic and grey literature articles in order to answer our research questions.
This was done manually, with at least one of the authors reading through the articles and another co-author verifying the overall results. This manual analysis could introduce bias due to multiple interpretations and/or oversight. We are aware that human interpretation introduces bias, and thus we attempted to account for it via cross-validation, involving at least two coders and cross-checking the results from the classification stage.\\
\noindent\textbf{Classification and reliability:} We have performed a number of classifications based on findings from different academic and grey literature articles to answer our four research questions.
We extracted information such as causes of flakiness (RQ1), detection methods and tools (RQ2), impact of flakiness (RQ3) and responses (RQ4). This information was obtained by reading through the articles, extracting the relevant information, and then classifying the articles; this was done by one of the authors, and another author then cross-validated the overall classification. We made sure that at least two of the co-authors checked each result and discussed any differences until 100\% agreement between the two was reached.\\
\section{Conclusion}
\label{sec:conclusion}
In this paper, we systematically studied how test flakiness has been addressed in academic and grey literature. We provide a comprehensive view of flaky tests, their common causes and their impact on other techniques/artefacts. We also studied the methods and tools used to detect and locate flaky tests, and the strategies followed in research and practice to respond to them.\\
This review covers 560 academic and 91 grey literature articles. The results show that most academic studies on test flakiness have focused on Java compared to other programming languages. In terms of common causes, we observed that flakiness due to test order dependency and concurrency has been studied more widely than other noted sources of flakiness. However, this depends mainly on the focus of the studies that reported those causes. For example, studies that used Android as their subject systems have focused mostly on flakiness in the UI (for which concurrency issues are attributed as the root cause).
Correspondingly, methods to detect flaky tests have focused on specific types of flaky tests, with the dynamic rerun-based approach noted as the main proposed method for flaky test detection. The intention is to provide approaches (either static or dynamic) that are less expensive to run, by accelerating ways to manifest flakiness or by running fewer tests.\\
This paper outlines some limitations in test flakiness research that should be addressed by researchers in the future.
\section{Introduction}
\label{sec:introduction}
Software testing is a standard method used to uncover defects. Developers use tests early during development to uncover software defects when corrective actions are relatively inexpensive. A test can only provide useful feedback if it has the same outcome (either pass or fail) for every execution with the same version of the code. Tests with non-deterministic outcomes (known as \textit{flaky tests}) may pass in some runs and fail in others. Such flaky behaviour is problematic as it leads to uncertainty in choosing corrective measures \cite{harman2018start}. Flaky tests also incur heavy costs in developers' time and other resources, particularly when test suites are large and development follows an agile methodology, requiring frequent regression testing on code changes to safeguard releases.
Test flakiness has been attracting more attention in recent years. In particular, there are several studies on the causes and impact of flaky tests in both open-source and proprietary software. In a study of open source projects, it was observed that 13\% of failed builds are due to flaky tests \cite{Labuschagne2017}. At Google, it was reported that around 16\% of their tests were flaky, and 1 in 7 of the tests written by their engineers occasionally fail in a way that is not caused by changes to the code or tests \cite{googleFlaky2016}. GitHub also reported that, in 2020, one in eleven commits (9\%) had at least one red build caused by a flaky test \cite{github2020reducing}.
Other industrial reports have shown that flaky tests present a real problem in practice that have a wider impact on product quality and delivery \cite{fowler2011eradicating,SandhuTesting2015,palmer2019}.
Studies of test flakiness have also been covered in the context of several programming languages including Java \cite{luo2014empirical}, Python \cite{gruber2021empirical} and, more recently, JavaScript \cite{Hashemi2022flakyJS}.
Awareness that more research on test flakiness is needed has increased in recent years \cite{harman2018start}. Currently, studies on test flakiness and its causes largely focus on specific sources of test flakiness, such as order-dependency \cite{gambi2018practical}, concurrency \cite{dong2021flaky} or UI-specific flakiness \cite{memon2013automated,romano2021empirical}.
Given that test flakiness is an issue in both research and practice, we deem it important to integrate knowledge about flaky tests from both academic and grey literature in order to provide insights into the state-of-the-practice.
In order to address this,
we performed a multivocal literature review on flaky tests.
A multivocal review is a form of a \textit{systematic literature review} \cite{kitchenham2007guidelines} which includes sources from both academic (formal) and grey literature \cite{Garousi2019Guidelines}. Such reviews in computer science and software engineering have become popular over the past few years \cite{Tom2013TD,garousi2018smell,Islam2019Security,Butijn2020Blockchains} as it is acknowledged that the majority of developers and practitioners do not publish their work or thoughts through peer-reviewed academic channels \cite{Garousi2016Multivocalreviews,Glass2006Creativity}, but rather in blogs, discussion boards and Q\&A sites \cite{Williams2019Grey}.
This research summarizes existing work and current thinking on test flakiness from both academic and grey literature.
We hope that this can help a reader to develop an in-depth understanding of common causes of test flakiness, methods used to detect flaky tests, strategies used to avoid and eliminate them, and the impact flaky tests have. We identify current challenges and suggest possible future directions for research in this area.
The remainder of the paper is structured as follows: Section \ref{sec:background} presents recent reviews and surveys on the topic. Our review methodology is explained in Section \ref{sec:design}. We present our results answering all four research questions in Section \ref{sec:results}, followed by a discussion of the results in Section \ref{sec:discussion}. Threats to validity are presented in Section \ref{sec:threats}, and finally we present our conclusion in Section \ref{sec:conclusion}.
\subsection{Causes of Test Flakiness (RQ1)}
\label{sec:causes}
We analysed causes of flaky tests, as noted in both academic and grey literature articles. We looked for the quoted reasons why a test is flaky and, in most cases, we note multiple (sometimes connected) causes as the reason for flakiness. We then grouped those causes into categories based on their overall nature. \\
The most widely discussed causes in the literature are those already identified in the empirical study on flaky tests by Luo et al. \citeS{S1}. The study provides a classification of causes resulting from the analysis of 201 commits that fix flaky tests across Apache projects, which are diverse in language and maturity. Their methodology is centred around examining commits that \textit{fix} flaky tests. In addition to classifying root causes of test flakiness into categories, they present approaches to manifest flakiness and strategies used to fix flaky tests. The classification consists of 10 categories that are the \textit{root causes} of flaky tests in the commits: \emph{async-wait}, \emph{concurrency}, \emph{test order dependency}, \emph{resource leak}, \emph{network}, \emph{time}, \emph{IO}, \emph{randomness}, \emph{floating point operations} and \emph{unordered collections}. Thorve et al. \citeS{S7} listed additional causes identified from a study of Android commits fixing flaky tests: \emph{dependency}, \emph{program logic}, and \emph{UI}. Dutta et al. \citeS{S14} noted subcategories of \emph{randomness}, and Eck et al. \citeS{S8} identified three additional causes from a survey of developers: \emph{timeout}, \emph{platform dependency} and \emph{too restrictive range}.
We mapped all causes found in all surveyed publications, and categorized them into the following major categories (based on their nature): \emph{concurrency}, \emph{test order dependency}, \emph{network}, \emph{randomness}, \emph{platform dependency}, \emph{external state/behaviour dependency}, \emph{hardware}, \emph{time} and \emph{other}. A summary of the causes we classified is provided in Table \ref{tab:causes} and discussed below.
\noindent\textbf{Concurrency.} This category covers flakiness resulting from concurrency-related bugs, such as race conditions, data races, atomicity violations or deadlocks.
\textbf{\emph{Async-wait}} is investigated as one of the major causes of flakiness under concurrency. This occurs when an application or test makes an asynchronous call and does not correctly wait for the result before proceeding.
This category accounts for nearly half of the studied flaky test fixing commits \citeS{S1}. Thorve et al. \citeS{S7} and Luo et al. \citeS{S1} classified async-wait related flakiness under concurrency. Lam et al. \citeS{S10,S6} reported async-wait as the main cause of flakiness in Microsoft projects.
Other articles cited async-wait in relation to root-cause identification \citeS{S6}, detection \citeS{S15} and analysis \citeS{S72}.
Luo et al. \citeS{S1} identified an additional subcategory, \textit{``bug in condition''}, for concurrency-related flakiness, where the guard that determines which thread can execute the code is either too restrictive or too permissive. Concurrency is also identified as a cause in articles on detection \citeS{S6,S15,S14,S7}. Another subcategory, identified from browser applications, is event races~\citeS{S1008}.
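As a hedged illustration of the async-wait pattern (a constructed Java example, not drawn from any surveyed project), the following test sleeps for a fixed interval instead of synchronising on the asynchronous result, so its outcome depends on scheduling:
\begin{verbatim}
import java.util.concurrent.CompletableFuture;

public class AsyncWaitExample {
    public static void main(String[] args) throws InterruptedException {
        // Simulated asynchronous work whose duration varies between runs.
        CompletableFuture<Integer> result =
            CompletableFuture.supplyAsync(() -> 42);

        // Flaky: a fixed sleep is used instead of waiting on the future.
        Thread.sleep(10);

        // The outcome depends on scheduling: the future may not be done.
        System.out.println(result.isDone() ? "PASS" : "FAIL");

        // A robust test would synchronise instead, e.g., result.join(),
        // or wait with an explicit timeout.
    }
}
\end{verbatim}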
\noindent\textbf{Test order dependency.} The test independence assumption implies that tests can be executed in any order and produce the expected outcomes. This is not the case in practice \citeS{S40}, as tests may exhibit different behaviour when executed in different orders. This is due to shared state, which can be either in memory or external (e.g., file system, database). Tests can expect a particular state before they exhibit the expected outcome, which can differ if the state is not set up correctly or reset. There can be multiple sources of test order dependency. Instances of shared state can take the form of explicit or implicit data dependencies in tests, or even bugs such as resource leaks or failure to clean up resources between tests. Luo et al. \citeS{S1} listed these as separate root causes: resource leaks and I/O. A \emph{resource leak} can be a source of test order dependency when the code under test (CUT) or the test code does not properly manage shared resources (e.g., obtaining a resource and not releasing it). Empirical studies that discuss resource-leak related flakiness include \citeS{S1} and \citeS{S10}, as well as other studies on root cause analysis, such as \citeS{S6} and \citeS{S210}, that find instances of flakiness in test code due to improper management of resources. Luo et al. \citeS{S1} identified I/O as a potential cause of flakiness. An example is code that opens and reads from a file and does not close it, leaving it to garbage collection to manage. If a test attempts to open the same file, it will only succeed if the garbage collector has processed the previous instance. In the study of Luo et al. \citeS{S1}, 12\% of flaky tests are due to order dependency. Articles that cite order dependency include those that propose detection methods \citeS{S147,S4}, and an experimental study on flakiness in generated test suites \citeS{S110}. A shared state can also arise due to \emph{incorrect/flaky API usage} in tests. Tests may intermittently fail if programmers use such APIs without accounting for this behaviour. Dutta et al. \citeS{S14} discussed this in their study of machine learning applications and cite an example where the underlying cause is the state shared between two tests that use the same API, with one of the tests not resetting the fixture before the second executes.
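The following constructed JUnit~4 example (ours, not from a surveyed project) sketches shared-state order dependency: the second test passes only if it runs before the first, because the shared static field is never reset between tests:
\begin{verbatim}
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class OrderDependentTests {
    // Shared in-memory state that is never reset between tests.
    static int counter = 0;

    @Test
    public void testIncrement() {
        counter++;
        assertEquals(1, counter); // passes only on a fresh counter
    }

    @Test
    public void testStartsAtZero() {
        // Flaky under order variation: fails whenever testIncrement
        // ran first in the same JVM, since the state is shared.
        assertEquals(0, counter);
    }
}
\end{verbatim}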
\noindent\textbf{Network.} Another common cause of flaky tests relates to network issues (connections, availability, and bandwidth). This has two subcategories: local and remote issues. Local issues pertain to managing resources such as sockets (e.g., contention with other programs for ports that are hard-coded in tests), and remote issues concern failures in connecting to remote resources. Mor{\'a}n et al. \citeS{S99} studied network bandwidth in localizing flakiness causes. In a study of Android projects \citeS{S7}, the network is identified as the cause of flakiness for 8\% of the studied flaky tests.
\noindent\textbf{Randomness.} Tests or the code under test may depend on randomness, which can result in flakiness if the test does not consider all possible random values that can be generated. This is listed as a main cause by Luo et al. \citeS{S1}. Dutta et al. \citeS{S14} identified subcategories of randomness in their investigation of flaky tests in probabilistic and machine learning applications. Such applications rely on machine learning frameworks that provide operations for inference and training, which are largely nondeterministic in nature, making tests challenging to write. The applications are written in Python, and the study covers applications that use the main ML frameworks for the language. The authors analysed 75 bugs/commits linked to flaky tests and obtained three cause subcategories, which they regard as subcategories of the \textit{randomness} category in \citeS{S1}: \textit{algorithmic nondeterminism} (the most common), \textit{incorrect/flaky API usage} and \textit{hardware}. They also present a technique to detect flaky tests caused by assertions over nondeterministic values; evaluating the technique on 20 projects, they found 11 previously unknown flaky tests. Two recurring patterns in ML applications are \emph{algorithmic non-determinism} and \emph{unsynchronized seeds}. In these applications, developers use small datasets and models as test input, expecting the results to converge to values within an expected range; assertions then check whether the inferred values are close to the expected ones. As there is a chance that the computed value falls outside the expected range, this may result in flaky outcomes. Tests in ML applications may also use multiple libraries that need sources of randomness, and flakiness can arise if different random number seeds are used across these modules or if the seeds are not set. We include a related category here, \emph{too restrictive ranges}, identified in \citeS{S8}: output values falling outside ranges or values in assertions determined at design time.
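The following constructed Java sketch illustrates the unfixed-seed and too-restrictive-range patterns described above; the tolerance of $0.05$ is an arbitrary value chosen for illustration:
\begin{verbatim}
import java.util.Random;

public class RandomnessExample {
    public static void main(String[] args) {
        // Flaky: no fixed seed; new Random(42) would make runs repeatable.
        Random rng = new Random();
        int n = 100; // small sample, as is common in ML-style tests
        double sum = 0;
        for (int i = 0; i < n; i++) {
            sum += rng.nextGaussian();
        }
        double mean = sum / n;

        // Too restrictive range: the sample mean of 100 standard normal
        // draws has standard deviation 0.1, so |mean| < 0.05 fails in a
        // large fraction of runs even though the code is correct.
        System.out.println(Math.abs(mean) < 0.05 ? "PASS" : "FAIL");
    }
}
\end{verbatim}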
\begin{landscape}
\begin{table}
\caption{Causes of flaky tests}
\label{tab:causes}
\resizebox{0.70\linewidth}{!}{
\begin{tabular}{@{}llp{10cm}l@{}}
\toprule
\textbf{Main category} & \textbf{Sub-category} & \textbf{Description} & \textbf{Example Articles} \\ \midrule
Concurrency & Synchronization & Asynchronous call in test (or CUT) without proper synchronization before proceeding & \citeS{S1}, \citeS{S7}, \citeS{S10}, \citeS{S6} \\
& & & \citeS{S15}, \citeS{S72} \\
& Event races & Event racing due to a single UI thread and async events triggering UI changes & \citeS{S16,S93} \\
& Bugs & Other concurrency bugs (deadlocks, atomicity violations, different threads interacting in a non-desirable manner) & \citeS{S1} \\
& Bug in condition & A condition that inaccurately guards what thread can execute the guarded code. & \citeS{S1} \\
\midrule\\
Test order dependency & Shared state & Tests having the same data dependencies can affect test outcome. & \citeS{S147}, \citeS{S4}, \citeS{S110}\\
& I/O & Local files & \citeS{S1,S357} \\
& Resource leaks & When an application does not properly manage the resources it acquires & \citeS{S1}, \citeS{S10}, \citeS{S6}, \citeS{S210} \\
\midrule\\
Network & Remote & Connection failure to remote host (latency, unavailability) & \citeS{S1,S357}\\
& Local & Bandwidth, local resource management issues (e.g. port collisions)& \\
\midrule\\
Randomness & Data & Input data or output from the CUT & \citeS{S6,S545} \\
& Randomness seed & If the seed is not fixed in either the CUT or test it may cause flakiness. & \citeS{S14} \\
& Stochastic algorithms & Probabilistic algorithms where the result is not always the same. & \citeS{S217}\\
& Too restrictive range & Valid output from the CUT are outside the assertion range. & \citeS{S8} \\
\midrule\\
Platform dependency & Hardware & Environment that the test executes in (development/test/CI or production) & \citeS{S1,S14,S548,S210} \\
& OS & Varying operating system & \citeS{S42,S99} \\
& Compiler & Difference in compiled code & \citeS{S1019} \\
& Runtime & e.g., languages with virtual runtimes (Java, C\#, etc.) & \citeS{S1} \\
& CI infra flakiness & Build failures due to infrastructure flakiness. & \citeS{S1007}\\
& Browser & A browser may render objects differently affecting tests. & \citeS{S42} \\
\midrule\\
External state/behaviour & Reliance on production service & Tests rely on production data that can change. & \\
dependency & & & \\
& Reliance on external resources & Databases, web services, shared memory, etc. & \citeS{S23,S39} \\
& API changes & Evolving REST APIs due to changing requirements & \\
& External resources & Relying on data from external resources (e.g., REST APIs, databases) & \citeS{S23,S713} \\
\midrule\\
Environmental dependencies & & Memory and performance & \citeS{S42} \\
\midrule\\
Hardware & Screen resolution & UI elements may render differently on different screen resolutions causing problems for UI tests & \\
& Hardware faults & & \citeS{S210} \\
\midrule\\
Time & Timeouts & Test case/test suite timeouts. & \citeS{S8} \\
& System date/time & Relying on system time can result in non-deterministic failure (e.g. time precision and changing UTC time) & \citeS{S6}, \citeS{S39} \\
\midrule\\
Other & Floating point operations & Use of floating point operations can result in non-deterministic cases & \citeS{S14}, \citeS{S1}\\
& UI & Incorrectly coding UI interactions & \citeS{S7} \\
& Program logic & Incorrect assumptions about APIs & \citeS{S7} \\
& Tests with manual steps & & \citeS{S1025} \\
& Code transformations & Random amplification/instrumentation can cause flaky tests & \citeS{S258} \\
\bottomrule
\end{tabular}}
\end{table}
\end{landscape}
\noindent\textbf{Platform dependency.} This causes flakiness when a test is designed to pass on a specific platform but unexpectedly fails when executed on another. A platform could be the hardware and OS, as well as any component of the software stack that test execution/compilation depends on. Tests with a platform dependency may fail on alternative platforms due to missing preconditions or even performance differences across them. The cause was initially described in Luo et al. \citeS{S1}, though it was not in the list of 10 main causes as it was a small category. It is discussed in more detail in \citeS{S8}. Thorve et al. \citeS{S7} reported that dependency flakiness in Android projects is due to hardware, OS version or third-party libraries; their study consisted of 29 Android projects containing 77 flakiness-related commits. We also include \emph{implementation dependency}, differences in compilation \citeS{S1019} and infrastructure flakiness \citeS{S1007} under this category. Infrastructure flakiness can also be due to issues in setting up the required infrastructure for test execution, which could include setting up virtual machines (VMs)/containers and downloading dependencies, and which can result in flakiness. Environmental dependency flakiness due to dynamic aspects (performance or resources) is also included in this category.
\noindent\textbf{Dependencies on external state/behaviour.} We include in this category flakiness due to changes in external dependencies, whether in state (e.g., reliance on external data from databases or obtained via REST APIs) or behaviour (changes or assumptions about the behaviour of third-party libraries). Thorve et al.~\citeS{S7} included this under platform dependency.
\noindent\textbf{Hardware.} Some ML applications/libraries may use specialized hardware, as discussed in \citeS{S14}. If the hardware produces nondeterministic results, this can cause flakiness. An example is where an accelerator is used that performs floating-point computations in parallel. The ordering of the computations can produce nondeterministic values, leading to flakiness when tests are involved. Note that this is distinct from platform dependency, which can also be at the hardware level, for instance, different processors or Android hardware.
\noindent\textbf{Time.} Variations in time are also a cause of test flakiness (e.g., midnight changes in the UTC time zone, daylight saving time, etc.), as are differences in time precision across platforms. Time is listed as a cause in root cause identification by Lam et al. \citeS{S6}. A new subcategory, \textit{timeouts}, is listed by developers in the survey reported in \citeS{S8}. Time precision across OSs, platforms and different time zones is listed under this category \citeS{S39}. Another time-related cause is that test cases may time out nondeterministically, e.g., failing to obtain prerequisites or not completing execution within the specified time due to flaky performance.
A similar cause is test suite timeouts, where no specific test case is responsible. Both of these causes were identified in the developer survey reported in \citeS{S8}.
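As a hedged sketch of time-related flakiness (constructed for illustration only), the following Java snippet compares the local date with the UTC date; the comparison fails whenever the test runs while the two zones are on different calendar days:
\begin{verbatim}
import java.time.LocalDate;
import java.time.ZoneId;

public class TimeFlakinessExample {
    public static void main(String[] args) {
        // Flaky: the local date and the UTC date differ for part of
        // every day (depending on the machine's time zone), so this
        // check fails nondeterministically across runs and machines.
        LocalDate local = LocalDate.now();
        LocalDate utc = LocalDate.now(ZoneId.of("UTC"));
        System.out.println(local.equals(utc) ? "PASS" : "FAIL");

        // A robust test would inject a fixed java.time.Clock instead
        // of reading the real system clock.
    }
}
\end{verbatim}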
\noindent\textbf{Other causes.} We include here causes listed in articles that may already have relationships with the major causal categories listed above. Thorve et al. \citeS{S7} listed \emph{program logic} as one of them. This category consists of cases where programmers have made incorrect assumptions about the code's behaviour, which results in tests that may exhibit flakiness. The authors cited an example where the code under test (CUT) may nondeterministically raise an I/O exception and the exception handling throws a runtime exception, causing the test to crash in that scenario. \emph{UI flakiness} can be caused by developers either not understanding UI behaviour or incorrectly coding UI interactions \citeS{S7}. It can also be caused by concurrency (e.g., event races or async-wait) or platform dependency (e.g., dependence on the availability of a display, or on a particular browser \citeS{S99}). \emph{Floating-point operations} can be a cause of flakiness as they can be non-deterministic due to non-associative addition, overflows and underflows, as described in \citeS{S1}. This is also discussed in the context of machine learning applications \citeS{S14}. Concurrency, hardware and platform dependency can all be sources of nondeterminism in floating-point operations.
Luo et al. \citeS{S1} identified \emph{unordered collections}, where variations in outcomes arise from a test's incorrect assumptions about an API. An example is sets, whose specifications can be underdetermined: code may assume behaviour, such as the iteration order of the collection, observed in a certain execution/implementation, which is not deterministic.
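A constructed Java sketch of the unordered-collections cause (ours, for illustration): the test assumes a particular iteration order for a \texttt{HashSet}, which the specification does not guarantee:
\begin{verbatim}
import java.util.HashSet;
import java.util.Set;

public class UnorderedCollectionExample {
    public static void main(String[] args) {
        Set<String> tags = new HashSet<>();
        tags.add("alpha");
        tags.add("beta");

        // Flaky assumption: HashSet iteration order is unspecified and
        // may differ across JDK versions or runtime configurations.
        String first = tags.iterator().next();
        System.out.println(first.equals("alpha") ? "PASS" : "FAIL");

        // Robust alternatives: assert membership rather than order, or
        // use an ordered collection such as TreeSet or LinkedHashSet.
    }
}
\end{verbatim}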
\subsubsection{Ontology of causes of flaky tests}
\label{sec:ontology}
Fig.~\ref{fig:ontologycause} illustrates the different causes of flakiness. The figure uses Web Ontology Language (OWL)~\cite{mcguinness2004owl} terminology such as classes, subclasses and relations. We identify classes for causes of flakiness and for flaky tests. Subclass relationships between classes are named `kindOf', and `causes' is the relation denoting causal relationships.
Note that not all identified causes are shown in the diagram. For instance, causes listed under the other category may be due to sources already shown in the diagram: UI flakiness can be due to platform dependency or environmental dependency. An example that demonstrates the complex causal nature of flakiness is in \citeS{S14}, where the cause of flakiness is a hardware accelerator for deep learning that performed fast parallel floating point computations. As different orderings of floating point operations can result in different outputs, this leads to test flakiness. In this case, the causes are a combination of \textit{hardware}, \textit{concurrency}, and \textit{floating point operations}.
Network uncertainty can be attributed to multiple reasons, for instance, connection failure and bandwidth variance. Stochastic algorithms exhibit randomness, and concurrency related flakiness can be due to concurrency bugs such as races and deadlocks. Finally, order dependency is due to improper management of resources (e.g. leaks and not cleaning up after I/O operations) or hidden state sharing that may manifest in flakiness.
There are a number of varying factors that underlie those causes. For instance, \textit{random seed variability} can cause flakiness related to randomness, and \textit{scheduling variability} causes concurrency-related flakiness. \textit{Test execution order variability}, which causes order-dependent test flakiness, and types of \textit{platform variability} (e.g., hardware and browser, which can for instance manifest in UI flakiness) are additional dimensions of variability.
\begin{qoutebox}{white}{}
\textbf{RQ1 summary.}
Numerous causes of flakiness have been identified in the literature, with factors related to concurrency, test order dependency, network availability and randomness being the most common causes of flaky test behaviour. Other factors relate to specific types of systems, such as \textit{algorithmic nondeterminism} and \textit{unsynchronised seeds} impacting testing in ML applications. There is also a causal relationship between some of these factors (i.e., they impact each other -- for example, UI flakiness is mostly due to concurrency issues).
\end{qoutebox}
\begin{figure*}[h]
\centering
\includegraphics[width=0.9\linewidth]{media/causes.png}
\caption{Relationships between the different causes of flaky tests}
\label{fig:ontologycause}
\end{figure*}
\subsection{Flaky Tests Detection (RQ2)}
\label{sec:results:detection}
One of the dimensions we studied is how flaky tests are identified and/or detected. In this section, we present methods used to detect and identify or locate causes of flakiness. We make a distinction between these three goals in our listing of techniques found in the reviewed literature.
We look at methods identified in both academic and grey literature. RQ2 is divided into two sub-questions; we answer each separately below.
\subsubsection{Methods Used to Detect Flaky Tests (RQ2.1)}
There are two distinctive approaches towards the detection of flaky tests: dynamic techniques that involve the execution of tests, and static techniques that rely only on analysing the test code without actually executing tests.
Figure~\ref{fig:taxonomydetection} depicts a broad overview of these strategies. Dynamic methods are based mostly on multiple test runs, often combined with techniques that perturb specific variability factors (e.g., environment, test execution order, event schedules or random seeds) to manifest flakiness quickly. There is one study on using program repair \citeS{S28} to induce test flakiness, and two studies use differential coverage to detect flakiness without resorting to reruns \citeS{S2}. Under static approaches, studies have employed machine learning (3 studies), model checking for implementation-dependent tests, and similarity-pattern techniques for identifying flaky tests. Only two studies leverage hybrid approaches (one for order-dependent tests and another for async-wait). \\
\noindent\textbf{Static methods}: Static approaches that do not execute tests are mostly classification-based, using machine learning techniques \citeS{S607,S118,S31}. Other static methods use pattern matching \citeS{S357} and association rule learning \citeS{SB1}. Model checking using Java PathFinder \cite{visser2003model} has also been used for detecting flakiness due to implementation-dependent tests \citeS{SB28}.\\
Ahmad et al. \citeS{S607} evaluated a number of machine learning methods for predicting flaky tests. They used projects from the iDFlakies dataset \citeS{S4}. There is also a suggestion that the evaluation covered another language (Python) besides the data from the original dataset (which is in Java), though this is not made clear, and the set of Python programs or tests is not listed. The study built on the work of Pinto et al. \citeS{S11}, an evaluation of five machine learning classifiers (Naive Bayes, Random Forest, Decision Tree, Support Vector Machine and Nearest Neighbour) for predicting flaky tests. In comparison to \citeS{S11}, the study of Ahmad et al. \citeS{S607} answers two additional research questions, regarding how the classifiers perform on another programming language and the predictive power of the classifiers. Another static technique, based on patterns in code, has also been used to predict flakiness \citeS{S357}.\\
\noindent\textbf{Dynamic methods:} Dynamic techniques to detect flakiness are built on executions of tests (single or multiple runs). Those techniques are centred around making reruns less expensive by accelerating ways to manifest flakiness, i.e., requiring fewer reruns or rerunning fewer tests. Methods to manifest flakiness include varying causal factors such as random number seeds \citeS{S14}, event order \citeS{S93}, environment (e.g., browser, display) \citeS{S99}, and test ordering \citeS{S147, S4}. Test code has also been varied using program repair \citeS{S28} to induce flakiness. Fewer tests are run by selecting them based on differential code coverage or on state dependencies. \\
\noindent\textbf{Hybrid methods:} Dynamic and static techniques are known to make different trade-offs between desirable attributes such as recall, precision and scalability \cite{ernst2003static}. As in other applications of program analysis, hybrid techniques have been proposed to combine the strength of different techniques, whilst avoiding their weaknesses.
One of the tools, FLAST \citeS{S118}, proposes a hybrid approach where the tool uses a static technique but suggests that dynamic analysis can be used to detect cases missed by the tool. Malm et al. \citeS{S72} proposed a hybrid analysis approach to detect delays used in tests that cause flakiness. Zhang et al. \citeS{S591} proposed a tool for dependent test detection, and they use static analysis to determine side-effect free methods, whose field accesses are ignored when determining inter-test dependence in the dynamic analysis. Some tools, stated earlier under static methods (e.g., \citeS{S607}), may need access to historic execution data for analysis or training.
\begin{figure*}[htp]
\centering
\includegraphics[width=\linewidth]{media/detection.png}
\caption{Taxonomy of detection methods}
\label{fig:taxonomydetection}
\end{figure*}
\subsubsection{Tools to Detect Flaky Tests (RQ2.2)}
Table~\ref{tab:detection} lists the tools that detect test flakiness, which are described in the literature. Most of the tools detect flakiness manifested in test outcomes.
The majority of the tools found in academic articles work on Java programs, with only three for Python and a single tool for JavaScript.
These tools can be grouped by the source of flakiness they target: UI, test order, concurrency and platform dependency (implementation dependency to a particular runtime). Some of these tools identify the cause of flakiness as well (which may already be a part of the tool's output if the source of flakiness they detect is closely associated with a cause: e.g., test execution order dependency arising from a shared state can be detected by executing tests under different orders).\\
\begin{landscape}
\begin{table*}[!b]
\caption{Flaky tests detection tools as reported in academic studies}
\centering
\label{tab:detection}
\resizebox{0.8\linewidth}{!}{
\begin{tabular}{lllllll}
\toprule
\textbf{Detection type} & \textbf{Category} & \textbf{Language} & \textbf{Method type} & \textbf{Method} & \textbf{Tool name} & \textbf{Article} \\
\midrule
Outcomes & Order & Java & Dynamic & Rerun (Vary orders) & - & \citeS{S1014} \\
Outcomes & Android & Java & Dynamic & Rerun (Vary event schedules) & FlakeScanner & \citeS{S1008} \\
Outcomes & General & Java & Dynamic & Rerun (twice) & - & \citeS{S1011} \\
Cause & Web & Java & Dynamic & Rerun (Vary environment) & FlakyLoc & \citeS{S99} \\
Location & General & - & Dynamic & Log analysis & RootFinder & \citeS{S6} \\
Outcomes & UI & Java & Dynamic & Rerun (Vary event schedules) & FlakeShovel & \citeS{S16} \\
Outcomes & General & Java & Hybrid & Machine learning & FlakeFlagger & \citeS{S1027} \\
Outcome & General & Mixed & Dynamic & Rerun (Environment) & - & \citeS{S27} \\
Outcomes & General & Java & Static & Machine learning & - & \citeS{S607} \\
Outcomes & ML & Python & Dynamic & Rerun (Vary random number seeds) & FLASH & \citeS{S14} \\
Outcomes & Concurrency & JavaScript & Dynamic & Rerun (Vary event order) & NodeRacer & \citeS{S93} \\
Outcomes & General & Java & Static & Machine learning (test code similarity) & FLAST & \citeS{S118} \\
Outcomes & General & Python & Dynamic & Rerun (Vary test code) & FITTER & \citeS{S28} \\
Outcomes & Concurrency & Java & Dynamic & Rerun (Add noise to environment) & Shaker & \citeS{S24} \\
Outcomes & General & Python & Dynamic & Test execution history & GreedyFlake & \citeS{S78} \\
Outcomes & General & Java & Dynamic & Rerun & iDFlakies & \citeS{S4} \\
Location & Assumptions & Java & Dynamic & Rerun (vary API implementation) & NonDex & \citeS{S152} \\
Cause & Order & Java & Dynamic & Rerun and delta debugging & iFixFlakies & \citeS{S135} \\
Outcomes & General & Java & Dynamic & Differential coverage & DeFlaker & \citeS{S2} \\
& & & & and test execution history & & \\
Cause, location & General & C++ / Java & Dynamic & Rerun & Flakiness & \citeS{S25} \\
& & & & & Debugger & \\
& UI & JavaScript & Dynamic & Machine learning (Bayesian network) & - & \citeS{S31}\\
Cause & Order & Java & Dynamic & - & PolDet & \citeS{S460}\\
Outcome & General & - & Static & Machine learning & Flakify & \citeS{S1022}\\
Cause, Outcome & IO/Concurrency/Network & - & Dynamic & Rerun in varied containers & - & \citeS{S19}\\
Cause, Outcome & - & - & Dynamic & Rerun in varied containers & TEDD & \citeS{S274}\\
Cause, Outcome & - & C & Static & Dependency analysis & - & \citeS{S780}\\
Outcomes & Order & Java & Dynamic & Rerun (Dynamic dataflow analysis) & PRADET & \citeS{S147} \\
Outcomes & Order & Java & Dynamic & Rerun (Vary order) & DTDetector & \citeS{S591} \\
Outcomes & Order & Java & Dynamic & Rerun (Dynamic dataflow analysis) & ElectricTest & \citeS{S391} \\
Outcome & Order and Async/Wait & Java & Static & Pattern matching & - & \citeS{S357}\\
Outcome & Order & Python & Dynamic & Rerun (varying orders) & iPFlakies & \citeS{S1031}\\
Outcome & - & Multilanguage & Dynamic & Machine learning & Fair & \citeS{S1050}\\
Outcome & Order & Java & Static & Model checking & PolDet (JPF) & \citeS{S1051}\\
Outcome & Nondeterminism & Java & Static & Type checking & Determinism Checker & \citeS{S1054}\\
\bottomrule
\end{tabular}}
\end{table*}
\end{landscape}
FlakyLoc \citeS{S99} does not detect flaky tests, but identifies causes for a given flaky test. The tool executes the known flaky test in different environment configurations. These configurations are composed of environment factors (i.e., memory sizes, CPU cores, browsers and screen resolutions) that are varied in each execution. The results are analysed using a spectrum-based localization technique \cite{wong2016survey}, which analyses the factors that cause flakiness and assigns a ranking and a suspiciousness value to determine the most likely factors. The tool was evaluated on a single flaky test from a Java web application (with several end-to-end flaky tests). The results for this particular test indicate that the technique is successfully able to rank the cause of flakiness (e.g., low screen resolution) for the test.
RootFinder \citeS{S6} identifies causes as well as the locations in code that cause flakiness. It can identify several causes (network, time, I/O, randomness, floating-point operations, test order dependency, unordered collections, concurrency). The tool adds instrumentation at API calls during test execution, which can log interesting values (time, context, return value) as well as add additional behaviour (e.g., adding a delay to identify causes involving concurrency and async-wait). Post-execution, the logs are analysed by evaluating predicates (e.g., whether the return value was the same at this point compared to previous times) at each point where they were logged. Predicates that evaluate to consistent values in passing and failing runs are likely to be useful in identifying the causes, as they can explain what was different between passing and failing runs.
DeFlaker \citeS{S2} detects flaky tests using differential coverage to avoid reruns (as rerun can be expensive). If a test has a different outcome compared to a previous run and the code covered by the test has not changed, then it can be determined to be flaky. The study also examines if a particular rerun strategy has an impact on flakiness detection. With Java projects, there can be many such strategies (e.g., five reruns in the same JVM, forking with each run in a new JVM, rebooting the machine and cleaning files generated by builds between runs).
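The following is a simplified sketch of the differential-coverage idea (our paraphrase of the principle, not DeFlaker's actual implementation); the inputs are hypothetical abstractions of the coverage and change data:
\begin{verbatim}
import java.util.Collections;
import java.util.Set;

public class DifferentialCoverageCheck {
    // outcomeChanged: the test's result differs from the previous run.
    // covered:        code elements covered by the test in this run.
    // changed:        code elements modified since the previous run.
    static boolean isLikelyFlaky(boolean outcomeChanged,
                                 Set<String> covered,
                                 Set<String> changed) {
        if (!outcomeChanged) {
            return false; // stable outcome: nothing to explain
        }
        // If the covered code and the changed code are disjoint, the new
        // outcome cannot be attributed to the change, so flag it as flaky.
        return Collections.disjoint(covered, changed);
    }
}
\end{verbatim}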
NodeRacer \citeS{S93}, Shaker \citeS{S24} and FlakeShovel \citeS{S16} specifically detect concurrency-related flakiness. NodeRacer analyses JavaScript programs and accelerates the manifestation of event races that can cause test flakiness. It uses instrumentation and builds a model consisting of a happens-after relation for callbacks. During the guided execution phase, this relation is used to explore postponing of events such that callback interleavings are realistic with regard to actual executions. Shaker is suggested to expose flakiness faster than plain rerun by adding noise to the environment in the form of tasks that stress the CPU and memory while the test suite is executed. FlakeShovel targets the same type of cause as NodeRacer by similarly exploring different yet feasible event execution orders, but only for GUI tests in Android applications.\\
A number of detection tools are built to detect order-dependent tests. iDFlakies \citeS{S4} reruns tests in randomized orders and classifies flaky tests into two types: order-dependent and non-order-dependent. In this category there are four more studies: DTDetector \citeS{S591}, ElectricTest \citeS{S391}, PolDet \citeS{S460}, and PRADET \citeS{S147}. DTDetector presents four algorithms to check for dependent tests whose dependence is manifested in test outcomes: reversal of test execution order, random test execution order, the exhaustive bounded algorithm (which executes bounded subsequences of the test suite instead of trying all permutations), and the dependence-aware bounded algorithm that only tests subsequences that have data dependencies. ElectricTest uses a more sophisticated check for data dependencies between tests: while DTDetector checks for writes/reads to/from static fields, ElectricTest checks for changes to any memory reachable from static fields. PRADET uses a similar technique to check for data dependencies, but it also refines the output by checking for manifest dependencies, i.e., data dependences that also influence flakiness in test outcomes. Wei et al. \citeS{S1014} used a systematic and probabilistic approach to explore the most effective orders for manifesting order-dependent flaky tests. Whereas tools such as PRADET and DTDetector explore randomized test orders, Wei et al. analyse the probability of randomized orders detecting flaky tests, and propose an algorithm that explores consecutive tests to find all order-dependent tests that depend on one test.
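In the spirit of these randomized-order tools, the following minimal Java sketch (a simplification, not any of the tools above) shuffles the test order repeatedly and flags tests whose outcomes differ between orders; \texttt{runInOrder} is a hypothetical hook into a test runner:
\begin{verbatim}
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class OrderShuffleDetector {
    // Hypothetical runner hook: executes the tests in the given order
    // and returns each test's outcome. This placeholder reports PASS;
    // a real implementation would invoke the actual test runner.
    static Map<String, String> runInOrder(List<String> order) {
        Map<String, String> outcomes = new HashMap<>();
        for (String t : order) {
            outcomes.put(t, "PASS");
        }
        return outcomes;
    }

    static Set<String> detect(List<String> tests, int rounds) {
        Map<String, String> baseline = runInOrder(tests);
        Set<String> suspects = new LinkedHashSet<>();
        for (int i = 0; i < rounds; i++) {
            List<String> shuffled = new ArrayList<>(tests);
            Collections.shuffle(shuffled);
            Map<String, String> outcomes = runInOrder(shuffled);
            for (String t : tests) {
                // A different outcome under a different order suggests
                // an order-dependent flaky test.
                if (!outcomes.get(t).equals(baseline.get(t))) {
                    suspects.add(t);
                }
            }
        }
        return suspects;
    }
}
\end{verbatim}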
Wei et al. \citeS{S1011} discussed a class of flakiness due to shared state, non-idempotent-outcome (NIO) tests, which are detected by executing the same test twice in the same VM.
NonDex \citeS{S152} is the only tool we found that detects flakiness caused by implementation dependency. The class of such dependencies it detects is limited to dependencies due to assumptions developers make about underdetermined APIs in the Java standard libraries, for instance the iteration order of data structures using hashing in the internal representation, such as Java's \texttt{HashMap}.
A number of studies discussed machine learning approaches for flakiness prediction. Pontillo et al. \citeS{S1004} studied test and production code factors that can be used to predict test flakiness with classifiers; their evaluation uses a logistic regression model. Haben et al. \citeS{S1005} reproduced a Java study \citeS{S11} with a set of Python programs to confirm the effectiveness of code vocabularies for predicting test flakiness. Camara et al. \citeS{S1012} is another replication of the same study, extending it with additional classifiers and datasets. Parry et al. \citeS{S1034} presented an evaluation of static and dynamic features that are more effective as predictors of flakiness than previous feature sets. Camara et al. \citeS{S1017} evaluated the use of test smells to predict flakiness.\\
\begin{qoutebox}{white}{}
\textbf{RQ2 summary.}
A number of methods have been proposed to detect flaky tests, including static, dynamic and hybrid methods. Most static approaches use machine learning. Rerun (in different forms) is the most common dynamic approach for detecting flaky tests. Approaches that use rerun focus on making flaky test detection less expensive by accelerating ways to manifest flakiness or by running fewer tests.
\end{qoutebox}
\begin{table*}[!b]
\centering
\caption{Detection tools as reported in grey literature}
\label{tab:detection-industry}
\resizebox{\linewidth}{!}{
\begin{tabular}{p{3cm}p{12cm}l}
\toprule
\textbf{Tool} & \textbf{Features} & \textbf{Articles}\\
\midrule
Flakybot & Determines test(s) are flaky before merging commits. Flakybot can be invoked on a pull request, and tests will be exercised quickly and results reported & \citeG{G2} \\
Azure DevOps Services & Feature that enables the detection of flaky tests (based on changes and through reruns) & \citeG{G6} \\
Scope & Helps identify flaky tests, requiring a single execution based on the commit diff & \citeG{G8} \\
Cypress & Automatically reruns (retries) a failed test prior to marking it as failed & \citeG{G9} \\
Gradle Enterprise & Considers a test flaky if it fails and then succeeds within the same Gradle task & \citeG{G22} \\
pytest-flakefinder \& pytest-rerunfailures & Rerun failing tests multiple times without having to restart pytest (in Python) & \citeG{G31} \\
pytest-random-order \& pytest-randomly & Randomise test order so that it can detect flakiness due to order dependency and expose tests with state problems & \citeG{G31} \\
BuildPulse & Detects and categorises flaky tests in the build by checking changes in test outcomes between builds (cross-language) & \citeG{G92} \\
rspec-retry & Ruby scripts that rerun flaky \texttt{RSpec} tests and obtain a success rate metric & \citeG{G35} \\
Quarantine & A tool that provides a run-time solution to diagnosing and disabling flaky tests and automates the workflow around test suite maintenance & \citeG{G36} \\
protractor-flake & Rerun failed tests to detect changes in test outcomes & \citeG{G50} \\
Shield34 & Designed to address Selenium flaky test issues & \citeG{G57} \\
Bazel & Build and auto testing tool, An option to mark tests as flaky, which will skip those marked tests & \citeG{G58,G71} \\
Flaky (pytest plugin) & Automatically rerunning failing tests & \citeG{G59,G67} \\
Capybara & Contains an option to prevent against race conditions & \citeG{G68} \\
Xunit.SkippableFact & Tests can be marked as SkippableFact, allowing control over test execution & \citeG{G70} \\
timecop & Ruby framework to test time-dependent tests & \citeG{G81,G96} \\
Athena & Identifies commits that make a test nondeterministically fail and notifies the author. Automatically quarantines flaky tests & \citeG{G108} \\
Datadog & Flaky test management through a visualisation of test outcomes; shows which tests are flaky & \citeG{G116} \\
CircleCI dashboard & The ``Test Insights'' dashboard provides information about all flaky tests, with an option to automate reruns of failed tests & \citeG{G122} \\
Flaky-test-extractor-maven-plugin & Maven plugin that filters out flaky tests from existing surefire reports. It generates additional XML files just for the flaky tests & \citeG{G140} \\
TargetedAutoRetry & A tool to retry just the steps which are most likely to cause issues with flakiness (such as Apps launch, race conditions candidates etc..) & \citeG{G213} \\
JUnit Surefire plugin & An option to rerun failing tests in the JUnit Surefire plugin (rerunFailingTestsCount) & \citeG{G192} \\
Test Failure Analytics & Gradle plugin that helps identify flaky tests between different builds & \citeG{G142} \\
Test Analyzer Service & An internal tool at Uber used to manage the state of unit tests and to disable flaky tests & \citeG{G149} \\
TestRecall & Test analysis tool that provides insights about test suites, including tracking flaky tests & \citeG{G202} \\
Katalon Studio & An option to retry all tests (or only failed ones) when the test suite finishes & \citeG{G203} \\
\bottomrule
\end{tabular}}
\end{table*}
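To make the rerun-based and order-randomization entries in Table~\ref{tab:detection-industry} concrete, the sketch below shows how two of the pytest plugins listed above are typically used. The failing test is a deliberately flaky toy example of ours, not taken from any reviewed article; the command-line flags are the documented ones for pytest-rerunfailures, and pytest-randomly shuffles test order automatically once installed.
\begin{verbatim}
# test_flaky_toy.py -- a deliberately flaky test, for illustration only
import random

def test_sometimes_fails():
    assert random.random() > 0.2   # fails roughly 20% of the time

# Rerun-based detection (pytest-rerunfailures):
#   pytest --reruns 5 --reruns-delay 1 test_flaky_toy.py
# Order randomization (pytest-randomly) is applied automatically when
# the plugin is installed; disable it with:  pytest -p no:randomly
\end{verbatim}
A test that passes only on some reruns, or only under some orderings, is flagged as flaky by these approaches.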
\subsection{Impact of Flaky Tests (RQ3)}
Next, we explore the wider view of the impact flaky tests have on different aspects of software engineering. In addressing this research question, we look at the impact of flaky tests as discussed in the articles we reviewed, and then combine the evidence noted in academic and grey literature. We discuss this in detail in the following two subsections.
\subsubsection{Impact Noted in Academic Research}
For each article we included in our review, we look at the context of flaky tests in the study. We classify the impact of flaky tests as reported in academic literature into the following three categories:
\begin{enumerate}
\item \textbf{Testing (including testing techniques):} the impact on the software testing process in general (e.g., impact on test coverage).
\item \textbf{Product quality:} impact on the software product itself, and its quality.
\item \textbf{Debugging and maintenance:} the impact on other software development and program analysis techniques.
\end{enumerate}
Figure~\ref{fig:taxonomyimpact} illustrates these categories and provides a general taxonomy of impact points as noted in the reviewed studies.
Table \ref{tab:impact_academic} provides a summary of the impact of flaky tests as noted in academic literature. We discuss some examples for each of the three categories below.\\
\begin{figure*}[!h]
\centering
\includegraphics[width=0.9\linewidth]{media/impact.png}
\caption{Taxonomy for impact of test flakiness}
\label{fig:taxonomyimpact}
\end{figure*}
\noindent \textbf{Impact on testing:} Many aspects of testing are affected by test flakiness. This includes automatic test generation \citeS{S330}
, test quality characteristics \citeS{S57}, and techniques or tasks involved in test debugging and maintenance \citeS{S430}.
A number of testing techniques are based on the assumption that tests have deterministic outcomes; when this assumption does not hold, they may not be reliable for their intended purposes. Test optimization techniques such as test suite reduction, test prioritization, test selection, and test parallelization all rely on this assumption. For instance, flakiness can manifest in order-dependent tests when test optimization is applied to suites containing such tests. Lam et al. \citeS{S40} studied the necessity of dependent-test-aware techniques to reduce flaky test failures, first investigating the impact of flaky tests on three regression testing techniques: test prioritization, test selection and test parallelization. Other testing techniques impacted are test amplification \citeS{S1156}, simulation testing \citeS{S1041} and manual testing \citeS{S1081}. \\
\noindent \textbf{Impact on product quality:} Several articles cite how test flakiness breaks builds \citeS{S336,S256}. Since testing drives automated builds, flakiness can break them and delay CI workflows. Zdun et al. \citeS{S429} highlighted how flaky tests can introduce noise into CI builds that can affect service deployment and operation (microservices and APIs in particular). B{\"o}hme \citeS{S57} discussed flakiness as one of the challenges for test assurance, i.e., executing tests as a means to increase confidence in the software. Product quality can also suffer from a lack of test stability, which is cited as an issue by Hirsch et al. \citeS{S197} in the context of a single Android application with many fragile UI tests. Several articles mention the cost of detecting flaky tests: Pinto et al.~\citeS{S11} pointed out that it can be costly to run detectors after each change, so organizations run them only on new or changed tests, which might not be the best approach as it reduces recall. Vassallo et al. \citeS{S556} identified retrying failures to deal with flakiness as a CI smell, as it has a negative impact on the development experience by slowing down progress and hiding bugs. Mascheroni et al. \citeS{S1043} proposed a model to improve continuous testing, presenting test reliability as a level in the improvement model and flaky tests as a main cause of test reliability issues; they suggest good practices to achieve this reliability. \\
Multiple articles also discuss how test flakiness can affect developers, leading to a negative impact on product quality. This includes the developer's perception of tests, and the effort required to respond to events arising from test flakiness (build failures in CI, localizing causes, fixing faulty tests). Koivuniemi \citeS{S750} mentioned uncertainty and frustration caused by developers attributing flaky failures to errors in the code where there are none. A survey by Eck et al. \citeS{S8} on developers' perception of flaky tests noted that flaky tests can have an impact on software projects, in particular on resource allocation and scheduling.\\
\noindent \textbf{Impact on debugging and maintenance:} Several techniques used in maintenance and debugging are known to be impacted by the presence of flaky tests. This includes all techniques that rely on tests, such as test-based program repair, crash reproduction, test amplification and fault localization, which can all be negatively impacted by flakiness. Martinez et al. \citeS{S252} reported a flaky test in a commonly used bug dataset, Defects4J, and how the repair system's effectiveness can be affected (if the flaky test fails after a repair, the system would conclude that the repair introduced a regression). Chen et al. \citeS{S962} explained that subpar-quality tests can affect their use for detecting performance regressions; flaky tests in particular may introduce noise and require multiple executions. Dorward et al. \citeS{S1049} proposed a more efficient approach for finding culprit commits in the presence of flaky tests, as bisection fails in this situation.
\subsubsection{Impact Noted in Grey Literature}
We also analysed the impact of flaky tests as found in grey literature articles. We checked whether there was any discussion of the impact of flaky tests on certain techniques, tools, products or processes. We classify the noted impact of flaky tests into the following three categories:
\begin{enumerate}
\item \textbf{Code-base and product:} the impact of flaky tests on the quality or the performance of the production code and the CUT.
\item \textbf{Process:} the impact on the development pipeline and the delivery of the final product.
\item \textbf{Developers:} the `social' impact of flaky tests on the developers/testers.
\end{enumerate}
\begin{table*}[]
\caption{Summary of the impact of flaky tests noted in academic literature}
\label{tab:impact_academic}
\resizebox{\linewidth}{!}{
\begin{tabular}{lll}
\toprule
\textbf{Impact Type} & \textbf{Impact} & \textbf{Reference} \\ \midrule
\textbf{Product quality} & Breaking builds & \citeS{S336,S256} \\
& Service deployment and operation & \citeS{S429} \\
& Test reliability & \citeS{S1043} \\
& Test assurance & \citeS{S57} \\
& Product quality & \citeS{S197}\\
& Costly to detect & \citeS{S15,S11,S143} \\
& Delays CI workflow & \citeS{S556,S136}\\
& Maintenance effort & \citeS{S430} \\
& Uncertainty and frustration & \citeS{S750} \\
& Trust in tools and perception & \citeS{S395,S159}\\ \midrule
\textbf{Testing} & Regression testing techniques & \citeS{S40} \\
& Simulation testing & \citeS{S1041} \\
& Test amplification & \citeS{S1156} \\
& Test suite/case reduction & \citeS{S662,S444} \\
& Mutation testing & \citeS{S174,S234} \\
& Manual testing & \citeS{S1081} \\
& Test minimization & \citeS{S526} \\
& Test coverage (ignored tests) & \citeS{S211} \\
& Test selection & \citeS{S207,S584} \\
& Patch quality & \citeS{S269} \\
& Test performance & \citeS{S704} \\
& Test suite efficiency & \citeS{S565} \\
& Test prioritization & \citeS{S73,S207} \\
& Regressions & \citeS{S110} \\
& Test suite diversity & \citeS{S73} \\
& Test generation & \citeS{S330} \\
& Differential testing & \citeS{S348} \\
& Test assurance & \citeS{S57} \\ \midrule
\textbf{Debugging and maintenance} & Program repair & \citeS{S520,S252,S1070}\\
& Determining culprit commits & \citeS{S1049} \\
& Performance analysis & \citeS{S962} \\
& Bug reproduction & \citeS{S229} \\
& Crash reproduction & \citeS{S762} \\
& Fault localization & \citeS{S17,S692} \\
\bottomrule
\end{tabular}}
\end{table*}
\begin{table*}[]
\caption{Summary of the impact of flaky tests as noted in grey literature}
\label{tab:impact_grey}
\resizebox{\linewidth}{!}{
\begin{tabular}{lll}
\toprule
\textbf{Impact Type} & \textbf{Impact} & \textbf{Reference} \\ \midrule
\textbf{Product} & Hard to debug & \citeG{G11,G52,G95} \\
& Hard to reproduce & \citeG{G11} \\
& Reduces test reliability & \citeG{G27,G103} \\
& Expensive to repair & \citeG{G114} \\
& \begin{tabular}[c]{@{}l@{}}Increase cost of testing as flaky \\ behaviour can spread to other tests\end{tabular} & \citeG{G8,G210} \\ \midrule
\textbf{Developers}& Losing trust in builds & \citeG{G74,G81,G114,G127,G203} \\
& Loss of productivity & \citeG{G8,G152,G165,G210} \\
& Time-consuming / wastes time & \citeG{G22,G95,G107,G134,G142,G144,G147,G149}\\
& Resource consuming & \citeG{G26,G30,G127} \\
& Demotivate/mislead developers & \citeG{G22,G134} \\
\midrule
\textbf{Delivery} & Affects the quality of shipped code & \citeG{G6,G129,G202} \\
& Slows down deployment pipeline & \citeG{G22,G95,G114,G142,G154} \\
& Slows down the development & \citeG{G45,G95,G98,G22,G110} \\
& Loses faith in tests catching bugs & \citeG{G30,G89}
\\
& Causes unstable deployment pipelines & \citeG{G35} \\
& Slows down development and testing processes & \citeG{G45,G110} \\
& Delays project release & \citeG{G107,G108,G213} \\
\bottomrule
\end{tabular}}
\end{table*}
A summary of the impact noted in the grey literature is shown in Table \ref{tab:impact_grey}. We discuss each of these three categories below.\\
\noindent \textbf{Impact on the code-base and product:}
Several grey literature articles have discussed the wider impact of flaky tests on the production code and on the final software product.
Among several issues reported by different developers, testers and managers, it was noted that the presence of flaky tests can significantly increase the cost of testing \citeG{G8,G36}, and makes failures in the CUT hard to debug and reproduce \citeG{G11,G52,G95}. In general, flaky tests can be very expensive to repair and often require time and resources to debug \citeG{G95,G114}. They can also make end-to-end testing useless \citeG{G74}, which can reduce test reliability \citeG{G27,G103}. One notable area that flaky tests compromise is coverage: if a test is flaky enough that it can fail even when retried, then its coverage is already considered lost \citeG{G129}.
Flaky tests can also spread and accumulate: unfixed flaky tests can lead to more flaky tests in the test suite \citeG{G7,G8}. Fowler describes them as a \textit{``virulent infection that can completely ruin your entire test suite''} \citeG{G4}.\\
Flaky tests can have serious implications in terms of time and resources required to identify and fix potential bugs in the CUT, and can directly impact production reliability \citeG{G36}. However, detecting and fixing flaky tests can help in finding underlying flaws and issues in the tested application and CUT that is otherwise much harder to detect \citeG{G95}.\\
\noindent \textbf{Impact on developers:}
We observed that several of the blog posts we analysed here are written by developers and discuss the impact of flaky tests on their productivity and confidence.
Developers noted that flaky tests can cause them to lose confidence in the `usefulness' of the test suite in general \citeG{G2}, and to lose trust in their builds \citeG{G74}. Flaky tests may also lead to ``collateral damage'' for developers: if they are left uncontrolled or unresolved, they can have a bigger impact and may ruin the value of an entire test suite \citeG{G8}.
They are also reported to be disruptive and counter-productive, wasting developers' time as they try to debug and fix those flaky tests \citeG{G95,G107,G26,G30}.\\
\begin{quote}
\textit{``The real cost of test flakiness is a lack of confidence in your tests... If you don’t have confidence in your tests, then you are in no better position than a team that has zero tests. Flaky tests will significantly impact your ability to confidently continuously deliver.''} (Spotify Engineering, \citeG{G2}).\\
\end{quote}
Another experience report from Microsoft explained the practices followed and tools used to manage flaky tests at Microsoft in order to boost developers' productivity:
\begin{quote}
\textit{``Flaky tests... negatively impact developers’ productivity by providing misleading signals about their recent changes ... developers may end up spending time investigating those failures, only to discover that the failures have nothing to do with their changes and may simply go away by rerunning the tests.'' (Engineering@Microsoft, \citeG{G134})
}\end{quote}
\noindent \textbf{Impact on delivery:}
Developers and managers also presented evidence of how flaky tests can delay development and have a wider impact on delivery (e.g., \citeG{G36}), mostly by slowing down development \citeG{G45,G95,G98} and delaying product releases \citeG{G107,G108}.
They can also reduce the value of an automated regression suite \citeG{G4} and lead organizations and testing teams to lose faith that their tests will actually find bugs \citeG{G30,G89}.
Some developers also noted that if flaky tests are left unchecked or untreated, they can render an entire test suite useless, as is the case in some organisations:
\begin{quote}
\emph{``We've talked to some organizations that reached 50\%+ flaky tests in their codebase, and now developers hardly ever write any tests and don’t bother looking at the results. Testing is no longer a useful tool to improve code quality within that organization.'' (Product Manager at Datadog, \citeG{G8})}\end{quote}
Flaky tests can also slow down the deployment pipeline, which can decrease confidence in the correctness of changes to the software \citeG{G22,G114}. They can even block deployment until spotted and resolved \citeG{G5}.\\
\begin{qoutebox}{white}{}
\textbf{RQ3 summary.}
The impact of flaky tests has been the subject of discussion in both academic and grey literature. Flaky tests are reported to have an impact on the products under development, on the quality of the CUT and the tests themselves, and on the delivery pipelines.
Techniques that rely on tests such as test-based program repair, crash reproduction and fault detection and localization can be negatively impacted by the presence of flaky tests.
\end{qoutebox}
\subsection{Responses to Flaky Tests (RQ4)}
The way that developers and teams respond to flaky tests has been discussed in detail in both academic and grey literature. However, the applied or recommended response differs from one study to another, as it depends on the causes of the flaky tests and on the methods used to detect them. Below we discuss the responses as noted in academic and grey literature, separately:
\subsubsection{Response Noted in Academic Literature}
We classify responses to flaky tests as follows:
\begin{itemize}
\item Modifying the test.
\item Modifying the program/code under test.
\item Process response.
\end{itemize}
We provide a general taxonomy of the responses to flaky tests as noted in the reviewed studies in Figure~\ref{fig:taxonomyresponse}.
\begin{figure*}[h]
\centering
\includegraphics[width=0.7\linewidth]{media/response.png}
\caption{Taxonomy of response strategies}
\label{fig:taxonomyresponse}
\end{figure*}
\begin{table*}[h]
\centering
\caption{Summary of the response strategies to flaky tests in academic literature.}
\resizebox{\linewidth}{!}{
\begin{tabular}{@{}lll@{}}
\toprule
\textbf{Strategy} & \textbf{Description} & \textbf{Articles} \\ \toprule
\textbf{Modify test} & \textbf{Change assumptions} & \\
& Fix assumptions about library API's & \citeS{S152} \\
& Automatically repair implementation-dependent tests & \citeS{S1013} \\
& Replace test & \citeS{S8} \\
& Merge dependent tests & \citeS{S1} \\
& \textbf{Change assertions} & \\
& Modify assertion bounds (e.g., to accommodate wider ranges of outputs) & \citeS{S1,S7,S8,S14,S1009,S1044} \\
& \textbf{Change fixture} & \\
& Removing shared dependency between tests & \citeS{S1} \\
& Global time as system variable & \citeS{S950} \\
& Setup/clean shared state between tests & \citeS{S1}\\
& Modify test parameters & \citeS{S14} \\
& Modify test fixture & \citeS{S9} \\
& Fix defective tests & \citeS{S446,S963,S1} \\
& Make behaviour deterministic & \citeS{S39} \\
& Change delay for async-wait & \citeS{S72,S1045} \\
& Concurrency-related fixes & \citeS{S8} \\
& \textbf{Change test-program interaction} & \\
& Mock use of environment/concurrency & \citeS{S179} \\ \midrule
\textbf{Modify program}
& Concurrency-related fixes & \citeS{S1} \\
& Replace dependencies & \citeS{S7,S14} \\
& Remove nondeterminism & \citeS{S8} \\ \midrule
\textbf{Process response} & Rerun tests & \citeS{S7,S269,S234} \\
& Ignore/Disable & \citeS{S8,S359,S143} \\
& Quarantine & \citeS{S106} \\
& Add annotation to mark test as flaky & \citeS{S14} \\
& Increase test time-outs & \citeS{S8} \\
& Reconfigure test environment (e.g., containerize or virtualize unit tests) & \citeS{SB29} \\
& Remove & \citeS{S164,S145,S39,S165,S71} \\
& Improve bots to detect flakiness & \citeS{S100} \\
& Responsibility of CI to deal with it & \citeS{S67} \\
& Prioritize tests & \citeS{S137,S1021} \\
\bottomrule
\end{tabular}}
\label{tab:responses_academic}
\end{table*}
A summary of the responses found in academic articles is presented in Table \ref{tab:responses_academic}. The three major strategies are to fix tests, to modify the CUT, or to put in place a mechanism to deal with flaky tests (e.g., retrying or quarantining tests). Berglund and Vateman \citeS{S39} listed some strategies for avoiding non-deterministic behaviour in tests: minimising variations in the testing environment, avoiding asynchronous implementations, testing in isolation, aiming for deterministic assertions and limiting the use of third-party dependencies. Other measures include mocking to reduce flakiness; for instance, EvoSuite \cite{fraser2011evosuite} uses mocking for this purpose. Zhu et al. \citeS{S179} proposed a tool for identifying and proposing mocks for unit tests. A wider list of specific fixes to the different types of flaky tests is provided in \cite{S1}. Shi et al. \citeS{S9} presented a tool, iFixFlakies, to fix order-dependent tests.
Fixes in the CUT are not discussed as much in academic articles. The closest mention in relation to this is in \citeS{S7}, which finds instances in flaky test fix commits where the CUT is improved and dependencies are changed to fix flakiness.
Another strategy, removing flaky tests, was also identified in \citeS{S7}. The study found that developers commented out flaky tests in 10 of 77 examined commits. Removing flaky tests is also a strategy cited in papers that discuss testing-related techniques \citeS{S164,S165}. Quarantining, ignoring or disabling flaky tests are also discussed as responses. Memon et al. \citeS{S137} detailed the approach at Google for dealing with flaky tests. They use multiple factors (e.g., more frequently modified files are more likely to cause faults) to prioritize the tests to rerun, rather than a simple test selection heuristic such as rerunning tests that failed recently, which is sensitive to flakiness.
A number of tools have been proposed recently for automatically repairing flaky tests. These can fix flakiness due to randomness in machine learning projects, order dependence, and implementation dependence.
Dutta et al. \citeS{S1009,S1044} conducted an empirical analysis of seeds in machine learning projects and proposed approaches to repair flaky tests due to randomness by tuning hyperparameters, fixing seeds and modifying assertion bounds. Zhang et al. \citeS{S1013} proposed a tool for fixing flaky tests that are caused by implementation dependencies of the type explored by NonDex \citeS{S152}. Wang et al. \citeS{S1031} proposed iPFlakies for Python, which fixes order-dependent tests that fail due to state polluted by other tests. This is related to their earlier tool, iFixFlakies, which repairs order-dependent tests in Java programs. The Python tool discovers existing tests or helper methods that clean the state before successfully rerunning the order-dependent test. ODRepair from Li et al. \citeS{S1018} uses automatic test generation, rather than existing code, to clean the state. Mondal et al. \citeS{S1058} proposed an approach to fixing flakiness due to parallelizing dependent tests by adding a test from the same class to correct the dependency failure.
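To illustrate the kind of order-dependent flakiness these repair tools target, consider the constructed toy example below (ours, not drawn from the studied subjects): a `polluter' test mutates shared state, a `victim' test then fails whenever it runs after the polluter, and the repair consists of inserting cleaner code that resets the state.
\begin{verbatim}
# Toy victim/polluter pair, for illustration only.
CACHE = {}                      # shared global state

def test_polluter():            # passes, but pollutes the shared state
    CACHE["user"] = "alice"
    assert CACHE["user"] == "alice"

def test_victim():              # flaky: fails if run after test_polluter
    assert "user" not in CACHE

def cleaner():                  # the kind of state reset such tools insert
    CACHE.clear()
\end{verbatim}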
\subsubsection{Response Noted in Grey Literature}
Here we look at the methods and strategies followed to deal with flaky tests as noted in the grey literature. We classified those strategies into the following categories:
\begin{enumerate}
\item \textbf{Quarantine:} keep flaky tests in a quarantined area, in a test suite separate from other `healthy' tests, in order to diagnose and then fix those tests.
\item \textbf{Fix immediately:} fix any flaky test as soon as it is found; to do so, developers first need to reproduce the flaky behaviour.
\item \textbf{Skip and ignore:} provide an option to developers to exclude flaky tests from the build and suppress the test failures, usually in the form of an annotation.
In some cases, especially when developers are fully aware of the flaky behaviour of the tests and the implications of those tests have been considered, they may decide to ignore those flaky tests and continue with the test run as planned.
\item \textbf{Remove:} remove any test that is flaky from the test suite once detected.
\end{enumerate}
\begin{table*}[!h]
\centering
\caption{Summary of the response strategies followed by some organisations to deal with flaky tests, as discussed in grey literature.}
\resizebox{\linewidth}{!}{
\begin{tabular}{@{}lll@{}}
\toprule
\textbf{Strategy} & \textbf{Description} & \textbf{Example} \\ \toprule
Quarantine & \begin{tabular}[c]{@{}l@{}}Keep flaky tests in a different test suite to \\ other healthy tests in a quarantined area.\end{tabular} & \begin{tabular}[c]{@{}l@{}}\citeG{G1,G4,G5,G36,G104,G106,G108}\\ \citeG{G35,G38,G67,G70,G79,G89,G111,G114} \\ \citeG{G149,G154,G164,G213,G134,G202} \end{tabular} \\ \midrule
\begin{tabular}[c]{@{}l@{}}Fix and replace immediately, \\ or remove if not fixed \end{tabular}& \begin{tabular}[c]{@{}l@{}} Test with flaky behaviour are given priority \\ and fixed/removed once detected. \end{tabular} & \citeG{G20,G101,G102,G37,G147,G189,G111} \\ \midrule
Label flaky tests & Leave it to developers to decide & \citeG{G22,G67,G129,G134,G202} \\ \midrule
Ignore/Skip & \begin{tabular}[c]{@{}l@{}}Provide an option to developers to ignore \\ flaky tests from the build (e.g., though the \\ use of annotations) and suppress the test failures.\end{tabular} & \citeG{G6,G8,G70,G129} \\
\bottomrule
\end{tabular} }
\label{tab:responses_grey}
\end{table*}
A summary of the response found in grey literature is shown in Table \ref{tab:responses_grey}. The most common strategy that has been discussed is to quarantine and then fix flaky tests. As explained by Fowler \citeG{G4}, this strategy indicates that developers should follow a number of steps once a flaky test has been identified: \textit{Quarantine} $\rightarrow$ \textit{Determine the cause} $\rightarrow$ \textit{Report/Document}
$\rightarrow$ \textit{Isolate and run locally} $\rightarrow$ \textit{Reproduce} $\rightarrow$ \textit{Decide (fix/ ignore)}.
This is the same strategy that Google (and many other organisations) has been employing to deal with any flaky tests detected in the pipelines \citeG{G1}. A report from Google explains that they use a tool that monitors all potentially flaky tests and automatically quarantines a test if its flakiness is found to be high. The quarantining works by removing ``\emph{the test from the critical path and files a bug for developers to reduce the flakiness. This prevents it from becoming a problem for developers, but could easily mask a real race condition or some other bug in the code being tested}'' \citeG{G1}. Other organizations also follow the same strategy, e.g., Flexport \citeG{G36} and Dropbox \citeG{G108}.
Flexport \citeG{G36} have even included a mechanism to automate the process of quarantining and skipping flaky tests. Their Ruby gem, Quarantine\footnote{\url{https://github.com/flexport/quarantine}}, maintains a list of flaky tests: it automatically ``detects flaky tests and disables them until they are proven reliable''.
Some developers and managers have suggested that all identified flaky tests should be labelled by their severity, which can be determined by the specific component they impact, how frequently the flaky behaviour occurs, or the flakiness rate of a given test.
One suggested approach is not to quarantine and treat all flaky tests equally, but to quantify the level of flakiness of each flaky test so that tests can be prioritised for fixing. A report from Facebook engineers proposed a statistical metric called the Probabilistic Flakiness Score (PFS), which aims to quantify flakiness by measuring test reliability \citeG{G127}. Using this metric, developers can \textit{``test the tests to measure and monitor their reliability, and thus be able to react quickly to any regressions in the quality of our test suite. PFS ... quantify the degree of flakiness for each individual test at Facebook and to monitor changes in its reliability over time. If we detect specific tests that became unreliable soon after they were created, we can direct engineers’ attention to repairing them.''} \citeG{G127}. GitHub reported a similar metrics-based approach to determining the level of flakiness of each flaky test. An impact score is given to each flaky test based on how many times it changed its outcome, as well as how many branches, developers, and deployments were affected by it. The higher the impact score, the more important the flaky test, and thus the higher the priority given to fixing it \citeG{G147}.
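As a simplified illustration of such outcome-based scoring (this sketch is ours; it is not the actual PFS or GitHub formula, neither of which is reproduced in the articles above), one can count how often a test's verdict flips across consecutive runs:
\begin{verbatim}
# Sketch: a simple flip-rate score over a test's run history.
def flip_rate(outcomes):
    """outcomes: list of booleans (True = pass), one per consecutive run."""
    if len(outcomes) < 2:
        return 0.0
    flips = sum(a != b for a, b in zip(outcomes, outcomes[1:]))
    return flips / (len(outcomes) - 1)

print(flip_rate([True] * 10))                       # 0.0 -> stable
print(flip_rate([True, False, True, False, True]))  # 1.0 -> highly flaky
\end{verbatim}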
At Spotify \citeG{G2}, engineers use Odeneye, a system that visualises an entire test suite running in CI and can point developers to tests whose outcomes differ across runs. Another tool used at Spotify is Flakybot\footnote{\url{https://www.flakybot.com}}, which is designed to help developers determine whether their tests are flaky before merging their code to the master/main branch. The tool can be self-invoked by a developer in a pull request; it will exercise all tests and provide a report of their success/failure and possible flakiness.
There are a number of issues to consider when quarantining flaky tests though, such as how many tests should be quarantined (having too many tests in quarantine can be counterproductive) and how long a test should stay in quarantine. Fowler \citeG{G4} suggested keeping no more than 8 tests in quarantine at one time, and not keeping tests there for long periods. It was also suggested to have a dashboard to track the progress of all flaky tests so that they are not forgotten \citeG{G8}, and to have an automated approach not only to quarantine flaky tests, but also to de-quarantine them once fixed or deliberately ignored \citeG{G202}.
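A minimal sketch of how quarantining can be wired into a test runner is shown below. This is our illustration in pytest, assuming a plain-text file of quarantined test ids; the industrial tools cited above are considerably more elaborate.
\begin{verbatim}
# conftest.py -- skip tests listed in a quarantine file (illustrative).
import pytest

QUARANTINE_FILE = "quarantine.txt"   # one pytest node id per line

def _load_quarantine():
    try:
        with open(QUARANTINE_FILE) as f:
            return {line.strip() for line in f if line.strip()}
    except FileNotFoundError:
        return set()

def pytest_collection_modifyitems(config, items):
    quarantined = _load_quarantine()
    mark = pytest.mark.skip(reason="quarantined: known flaky test")
    for item in items:
        if item.nodeid in quarantined:
            item.add_marker(mark)
\end{verbatim}
De-quarantining then amounts to removing a line from the file once the test has been fixed.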
Regarding the different causes of flaky tests, different strategies are recommended to deal with the specific sources of test flakiness. For example, to deal with flakiness due to state-dependent scenarios such as ``inconsistent assertion timing'' (i.e., state that is not consistent between test runs, which can cause tests to fail randomly), one solution is to ``construct tests so that you wait for the application to be in a consistent state before asserting'' \citeG{G36}. If the test depends on a specific test order (i.e., global state shared between tests, as one test may depend on the completion of another), an obvious solution is to ``reset the state between each test run and reduce the need for global state'' \citeG{G36}. Table \ref{tab:fixing-grey} provides a brief summary of strategies for fixing flaky tests due to the most common causes noted in grey literature articles; one of these fixes is illustrated with a short code sketch after the table.\\
\begin{table*}[]
\caption{Some fixing strategies for some common flaky tests noted in grey literature }
\label{tab:fixing-grey}
\resizebox{\linewidth}{!}{
\begin{tabular}{lll}
\toprule
\textbf{Cause of flakiness} & \textbf{Suggested fix} & \textbf{Example} \\ \toprule
Asynchronous wait & \begin{tabular}[c]{@{}l@{}}Wait for a specified period of time before checking if the action has been \\ successful (with callbacks and polling).\end{tabular} & \citeG{G7,G74} \\ \midrule
Inconsistent assertion timing & \begin{tabular}[c]{@{}l@{}}Construct tests so that you wait for the application to be in a consistent \\ state before asserting.\end{tabular} & \citeG{G7} \\ \midrule
Concurrency & \begin{tabular}[c]{@{}l@{}}Make tests more robust, so that they accept all valid results. \\ Avoid running tests in parallel.\end{tabular} & \begin{tabular}[c]{@{}l@{}}\citeG{G7}\\ \citeG{G74}\end{tabular} \\ \midrule
Order dependency & \begin{tabular}[c]{@{}l@{}}Run a test in a database transaction that’s rolled back once the test has \\ finished executing.\\ Clean up the environment (i.e., reset state) and prepare it before every \\ test (and reduce the need for global state in general). \\ Run tests in isolation. \\ Run tests in random order to find out if they are still flaky.\end{tabular} & \begin{tabular}[c]{@{}l@{}}\citeG{G7}\\ \\ \citeG{G7,G2,G75}\\ \\ \citeG{G5,G101}\\ \citeG{G96}\end{tabular} \\ \midrule
Time-dependent tests & \begin{tabular}[c]{@{}l@{}}Wrapping the system clock with routines that can be replaced with a \\ seeded value for testing. \\ Use a tool to control for time variables such as freeze time helper in\\ Ruby and Sinon.JS in JavaScript.\end{tabular} & \begin{tabular}[c]{@{}l@{}}\citeG{G5}\\ \\ \citeG{G95}\end{tabular} \\ \midrule
Randomization & Avoid the use of random seeds. & \citeG{G38} \\ \midrule
Environmental & \begin{tabular}[c]{@{}l@{}}Limit dependency on environments in the test.\\ Limit calls to external resources and build a mocking server for \\ tests.\end{tabular} & \citeG{G27, G81,G98} \\ \midrule
Leak global state & Run test in random order. & \citeG{G95} \\ \bottomrule
\end{tabular}}
\end{table*}
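To make one row of Table~\ref{tab:fixing-grey} concrete: for time-dependent tests, the cited articles suggest wrapping or freezing the clock (timecop in Ruby, Sinon.JS in JavaScript). The sketch below shows the same idea in Python, assuming the freezegun library is available; it is our illustration rather than an example from the reviewed articles.
\begin{verbatim}
# Sketch: fixing a time-dependent test by freezing the clock.
import datetime
from freezegun import freeze_time

def is_weekend(now=None):
    now = now or datetime.datetime.now()
    return now.weekday() >= 5

@freeze_time("2022-01-01")     # a Saturday: the test no longer depends
def test_is_weekend():         # on the day it happens to run
    assert is_weekend()
\end{verbatim}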
\begin{qoutebox}{white}{}
\textbf{RQ4 summary.}
Quarantining flaky tests (for a later investigation and fix) is a common strategy that is widely used in practice. It is now supported by many tools that integrate with modern CI tooling and can automatically detect changes in test outcomes to identify flaky tests. Understanding the main cause of the flaky behaviour is key to reproducing flakiness and identifying an appropriate fix, which remains a challenge.\end{qoutebox}
\subsection{Flaky Tests Datasets (RQ1 and RQ2)}
\label{sec:datasets}
Datasets used in flakiness-related studies can be divided into those used in empirical studies and those used in detection or cause-analysis studies; all are obtained from academic literature. Table~\ref{tab:flakdatasets} lists these datasets with the type of flakiness in the programs, the programming language, the number of flaky tests identified, and the total number of projects, along with the names of the tools (or an indication that the source is an empirical study).
As can be seen in Table \ref{tab:flakdatasets}, the dominant programming language is Java. There are a few studies in Python \citeS{S14}. Some of these datasets are used in multiple studies, for instance \citeS{S15} obtains its subjects from \citeS{S4}, and \citeS{S12} from \citeS{S2}.\footnote{A dataset on the relationship between test smells and flaky tests was largely used in multiple studies but recently was retracted \url{https://ieeexplore.ieee.org/document/8094404}.}
\begin{table*}[!b]
\centering
\caption{Datasets to Study Test Flakiness}
\label{tab:flakdatasets}
\resizebox{\linewidth}{!}{
\begin{tabular}{llllll}
\toprule
\textbf{Study} & \textbf{Flakiness type} & \textbf{Language} & \textbf{\# flaky tests} & \textbf{\# projects} & \textbf{Article} \\
\midrule
iDFlakies & Order dep/Other & Java & 422 & 694 & \citeS{S4} \\
DeFlaker & General & Java & 87 & 96 & \citeS{S2} \\
NonDex & Wrong assumptions & Java & 21 & 8 & \citeS{S152} \\
iFixFlakies & Order dependent & Java & 184 & 10 & \citeS{S135} \\
FLASH & Machine learning & Python & 11 & 20 & \citeS{S14} \\
Shaker & Concurrency & Java/Kotlin (Android) & 75 & 11 & \citeS{S24} \\
FlakeShovel & Concurrency & Java (Android) & 19 & 28 & \citeS{S16} \\
NodeRacer & Concurrency & JavaScript & 2 & 8 & \citeS{S93} \\
GreedyFlake & Flaky coverage & Python & -- & 3 & \citeS{S78} \\
Travis-Listener & Flaky builds & Mixed & -- & 22,345 & \citeS{S136} \\
RootFinder & General & .Net & 44 & 22 & \citeS{S6} \\
\bottomrule
\end{tabular}}
\end{table*}
\section{Acknowledgements}
This work is funded by Science for Technological Innovation National Science Challenge of New Zealand, grant number MAUX2004.
\bibliographystyleS{IEEEtran}
\bibliographyS{literature}
\Urlmuskip=0mu plus 1mu\relax
\bibliographystyleG{IEEEtran}
\bibliographyG{greyliterature}
\bibliographystyle{IEEEtran}
\section{Introduction}
Initial results from SuperKamiokande \cite{SuperK} appear
to confirm indications from IMB,~\cite{IMB} Kamiokande~\cite{Kam}
and Soudan~\cite{Soud} of an excess of $\nu_e$ relative to $\nu_\mu$
in the atmospheric neutrinos. One possible interpretation is
that neutrino flavor oscillations play a role. In a two--flavor
mixing scheme, for example, the probability that a neutrino
of flavor $i$ and energy $E_i$ retains its identity after
propagating a distance $L$ in vacuum is \cite{BoehmV}
\begin{equation}
P_{ii}\;=\;1\,-
\,\sin^22\theta\,\sin^2\left[1.27\,\Delta m^2(eV^2)\times L(km)\over
E_i(GeV)\right],
\end{equation}
where $\Delta m^2$ is the difference in mass squared of the
two neutrino mass eigenstates and $\theta$ is the mixing angle.
Therefore, to evaluate the manifestation of the mixing
in a detector that measures to some degree the direction
and energy of neutrino-induced events, one needs to know
the distributions of production heights of the neutrinos
as a function of energy and zenith angle. More complicated
mixing schemes~\cite{Foglietal} and effects of propagation
in matter~\cite{Parkeetal} still require this basic information
about the points of origin of the neutrinos.
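As a rough numerical illustration of why the production point matters (ours, with representative rather than fitted values: $\Delta m^2 = 3\times10^{-3}$~eV$^2$, maximal mixing and $E_\nu = 1$~GeV), the oscillation phase in the probability above is
$$
1.27\,\Delta m^2\,{L\over E_\nu}\;\approx\;0.06\quad (L\simeq 15\ {\rm km}),
\qquad
1.27\,\Delta m^2\,{L\over E_\nu}\;\approx\;50\quad (L\simeq 1.3\times10^4\ {\rm km}),
$$
so downward-going neutrinos arrive essentially unoscillated, while upward-going ones oscillate many times and the survival probability averages to $1-{1\over2}\sin^22\theta$. An uncertainty of tens of kilometers in the production height therefore matters most for trajectories near the horizontal.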
Information about the origin of the neutrinos is implicit in any
calculation of neutrino fluxes. Here we extract the relevant
information from the simulation of Ref. \cite{Agrawal},
which has been compared to several other calculations in
Ref.~\cite{GHKLMNS}.
The paper is organized in three sections. First we review the
simulation we are using to calculate production of neutrinos
in the atmosphere. Next we present the basic results of the
calculation. We discuss simple analytic approximations which
offer insight into the systematics of the results and compare
them to simulation results for zenith angles from the vertical to
horizontal. Finally, we provide some parametrizations, based
on the analytic approximations, that may be useful for practical
application of the results.
\section{Simulation}
The simulation was performed in the spirit of earlier
calculations of the atmospheric neutrino flux~\cite{Agrawal,BGS}.
The simulation code is one dimensional. In this approximation,
all secondaries are assumed to move in the direction of the
primary particles (except for a small fraction of low energy
secondaries with angles larger than $90^\circ$ to the beam,
which are discarded). The validity of this approximation has
been checked in Refs. \cite{Lee,conference}.
The primary cosmic ray flux and its composition follow the
parametrization used previously in the calculation of Agrawal
{\em et al.}~\cite{Agrawal}, which in the multi-GeV range falls
between the measurements of Refs.~\cite{Webber,Seo}. Incident
cosmic-ray nuclei are treated in the superposition approximation \cite{EGLS},
with cascades generated separately for protons and neutrons in
order to ensure the correct ratios of neutrinos and antineutrinos.
The fraction of neutrons is derived from the
fractions of nuclei heavier than hydrogen in the primary flux.
We consider three ranges of neutrino energies that correspond
approximately to the three major types of experimental events
in a detector the size of SuperKamiokande:
contained events; partially contained neutrino interactions
and stopping neutrino induced muons; and throughgoing muons.
The energy ranges are presented in two different ways:
\begin{itemize}
\item $0.3<E_\nu<2$~GeV; $2<E_\nu<20$~GeV; $E_\nu>20$~GeV and
\item $E>1,\;10$~and~$100$~GeV.
\end{itemize}
The integral form is more closely related to simple analytic
approximations that we use as the basis of parametrizations of
the results of Monte Carlo simulations.
Our results are obtained with the geomagnetic cutoffs for
Kamioka and for the epoch of solar minimum, which is applicable
to measurements performed currently ($\sim1994-99$).
Because of the high geomagnetic cutoffs at Kamioka, it is not
necessary to account precisely for the phase of the solar cycle.
To illustrate the potential influence of geomagnetic effects
at other locations we also tabulate some results for the much higher
geomagnetic latitude of the SNO experiment.
We have not included prompt neutrino production through
charm decay because it is totally negligible in the considered
energy ranges~\cite{ThunIng,Volkova}. All neutrinos are
generated either in pion and kaon decays or in muon decays.
The production heights are stored separately for neutrinos
from $\pi/K$ and from muon decays. The muon decay procedure
accounts for the muon energy loss during propagation in the atmosphere.
Technically the muon lifetime is sampled in the muon rest
frame and then the muon is propagated in the atmosphere with time
dilation proportional to its decreasing energy. Thus muons decay
on average sooner than they would have if one (incorrectly)
sampled from a decay distribution using their Lorentz factor
at production.
\section{Results}
Before presenting the results for neutrinos, we show
a comparison between measurements of GeV muons
at different altitudes in the atmosphere and our calculation made
with the same Monte Carlo code \cite{Circella}. This type of
balloon measurement provides the most direct
test of the validity of the cascade model and of
the treatment of the muon propagation in the atmosphere because
the muons and neutrinos have a common origin. The
comparison shown in Fig. 1 is with data of the MASS experiment \cite{MASS}
as discussed in Ref. \cite{Circella}.
Fig.~2 shows the height of production of neutrinos of energy
above 1 GeV for cos$(\theta)$ = 0.75. The graph gives $dN_\nu/dh$
(cm$^{-2}$s$^{-1}$sr$^{-1}$km$^{-1}$), where $h$ is the slant distance
from the neutrino production point to sea level.
Contributions from muon decay and from $\pi/K$ decay are shown separately for
$\nu_e + \bar{\nu}_e$ and for $\nu_\mu + \bar{\nu}_\mu$.
The overall flux of
$\nu_e+\bar{\nu}_e$ from $\pi/K$ decay is much lower because it
reflects primarily the contribution of $K^0_L$ decays, which is very low
in this energy range. The curves for electron and for muon
neutrinos from muon decay are nearly equal. They extend to lower
altitudes with a slope that depends on the average energy of
the parent muons. For higher energy neutrinos this slope is
significantly flatter as a consequence of the higher parent muon
energy and correspondingly longer muon decay length. For $E_\nu >$
20 GeV, most parent muons reach the ground (except in nearly
horizontal direction) and stop before decaying. As a consequence,
the height distributions for neutrinos from muon decay deep in the
atmosphere are nearly flat.
\subsection{Height distribution for neutrinos from $\pi/K$ decay}
\subsubsection{Analytic approximation}
It is instructive to look at a simple approximation
for the height of production of neutrinos from decay of pions.
In the approximation of an exponential atmosphere with scale height
$h_0$ and the approximation of Feynman scaling for the production
cross sections of pions in interactions of hadrons with nuclei
of the atmosphere, a straightforward solution of the equations
for propagation of hadrons through the atmosphere \cite{book}
gives \cite{Lipari}
\begin{equation}
{dF(>E_\nu)\over dX}\;=\;(1-r_\pi)^{-\gamma}\,{Z_{N\pi}\over\lambda_N}\,
e^{-X/\Lambda_N}\,{K\over\gamma(\gamma+1)}\,E_\nu^{-\gamma}\;
\equiv\;A \times E_\nu^{-\gamma}
\label{production}
\end{equation}
for the integral flux of neutrinos in the energy range
$E_\nu\ll \epsilon_\pi$ where reinteraction of pions in the
atmosphere can be neglected. There is a similar expression
for neutrinos from decay of kaons proportional to
$B_K\times Z_{NK}$. The meaning and approximate
values of the quantities in these equations are given in
Table~\ref{tab1}.
\begin{table}
\caption{Values of the parameters used in Eq.~\ref{production} that
correspond to a power law primary cosmic ray spectrum and to an
exponential atmosphere. $\gamma$ and $K$ are the spectral index
and the coefficient of the differential cosmic ray energy spectrum,
$dN/dE = K E^{-(\gamma+1)}$. $r_\pi \; (r_K)$ is $(m_\mu/m_\pi)^2\;
((m_\mu/m_K)^2)$. $Z_{N\pi}\;(Z_{NK})$ is the spectrum weighted
moment for pion (kaon) production by nucleons
($Z_{N\pi}\; =\; \int\,dx\,x^\gamma\,dN/dx$). $\Lambda_N$ and $\lambda_N$
are the attenuation and interaction lengths for nucleons. $B_K$ is the
branching ratio for $K \longrightarrow \mu$ decay. $X_0$ and $h_0$
are the total vertical thickness (in g/cm$^2$) and the scale height
(in km) for an exponential atmosphere.}
\begin{tabular}{cccccccccc}
$\gamma$ & $K$ & $\lambda_N$ & $\Lambda_N$ & $Z_{N\pi}$ &$B_K \times Z_{NK}$ &
$r_\pi$ & $r_K$ & $X_0$ & $h_0$ \\
& cm$^{-2}$s$^{-1}$sr$^{-1}$(GeV)$^\gamma$ & g/cm$^2$ & g/cm$^2$ & & &
& & g/cm$^2$ & km \\ \tableline
& & & & & & & & & \\
1.70 & 1.8 & 86 & 120 & 0.08 &0.0075 & 0.5731 & 0.0458 & 1030 & 6.4 \\
& & & & & & & & & \\ \tableline
\end{tabular}
\label{tab1}
\end{table}
In Eq.~\ref{production} $X$ is the slant depth in the atmosphere
at which the pion is produced and decays. We now convert this
into distance $\ell$ from the detector in the approximation of
an exponential atmosphere in which
\begin{equation}
X\;=\;{X_0\over \cos\theta}\,\exp\left[-{\ell\cos\theta \over h_0}\right].
\label{atmos}
\end{equation}
For $\theta<70^\circ$, curvature of the earth can be neglected
and $\cos\theta$ in Eq.~\ref{atmos} is the cosine of the zenith
angle to a good approximation. The effective values of $\cos\theta$
for larger angles are given below.
The corresponding approximate expression for the distribution
of production distances is
\begin{equation}
{dF(>E_\nu)\over d\ell}\;=\;{A X_0\over h_0}\, E_\nu^{-\gamma}
\exp\left[-{X\over \Lambda_N}\right]
\times\exp\left[-{\ell\cos\theta\over h_0}\right],
\label{nupi}
\end{equation}
where $X$ is to be evaluated as a function of $\ell$ from
Eq.~\ref{atmos}. Assuming a primary cosmic ray nucleon flux
with the normalization given in Table \ref{tab1} (and including
the small contribution from decay of kaons) the normalization
factor is $A X_0/h_0\; \simeq$ 0.020.
Taking parameters from Table~\ref{tab1} gives the most probable
distance of production as
\begin{equation}
\label{mode}
\ell_{max}\;\approx\;{h_0\over\cos\theta}\,\ln{X_0\over\Lambda_N\cos\theta},
\end{equation}
which is $\approx 15$~km for vertical neutrinos from decay of pions.
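For a quick numerical check of Eq.~\ref{mode}, the following minimal sketch (ours, not the simulation code; it assumes the exponential-atmosphere parameters of Table~\ref{tab1}) reproduces the numbers quoted above:
\begin{verbatim}
# Most probable production distance for an exponential atmosphere,
# with h0, X0 and Lambda_N taken from Table 1.
import math

h0, X0, Lambda_N = 6.4, 1030.0, 120.0   # km, g/cm^2, g/cm^2

def ell_max(cos_theta):
    return (h0 / cos_theta) * math.log(X0 / (Lambda_N * cos_theta))

print(ell_max(1.0))    # ~13.8 km for vertical neutrinos
print(ell_max(0.25))   # ~90 km; cf. the Monte Carlo values in Table 2
\end{verbatim}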
\subsubsection{Monte Carlo results}
Fig.~3 shows the distance distribution for neutrinos from $\pi/K$ decay
($ E_\nu >$ 1 GeV) for cos$(\theta)$ = 1.00, 0.75, 0.50, 0.25,
0.15 and 0.05. Here $\theta$ is the zenith angle of the
neutrino trajectory at the surface of the Earth.
For large zenith angles, the curvature of the Earth is significant,
and it is necessary to use effective values of
$\cos_{eff}(\theta)$ that represent the convolution
of the locations of neutrino production with
the local zenith angle as it decreases moving upward along the trajectory.
We treat $\cos_{eff}(\theta)$ as a free parameter
in fitting Eqs. \ref{nupi} and \ref{numu} to the Monte Carlo results.
The values are included in Table \ref{tab2}.
\begin{table}
\caption{ Comparison of analytic and Monte Carlo values of the effective
value of $\cos \theta$ . Column 1 shows the cosine of the zenith angle
$\theta$. Column 2 shows the most probable production height
for neutrinos from $\pi/K$ decay from the Monte Carlo calculation.
Column 3 gives the most probable height of production from Eq.~\ref{mode}
with $\cos_{eff}\theta$ from column 4.
Columns 4 \& 5 give the $\cos_{eff} \theta$ values that fit best the
calculated height of production distributions for neutrinos from $\pi/K$
and muon decay with $h_0$ = 6.50 km. Column 6 gives the normalization
coefficient $C_\mu$ needed to fit the distribution for neutrinos from
muon decay.}
\begin{tabular}{lccllc}
$\cos \theta$ & $\ell_{max}(MC)$ & $\ell_{max}$(Eq.~\ref{mode}) &
$\cos_{eff}^{K/\pi} \theta$ & $ \cos_{eff}^\mu \theta$ & $C_\mu$ \\
& (km) & (km) & & & \\
\tableline
& & & & & \\
1.00 & 13.8 & 14.0 & 1.00 & 1.00 & 0.69 \\
0.75 & 21.6 & 21.2 & 0.75 & 0.75 & 0.71 \\
0.50 & 38.4 & 37.0 & 0.50 & 0.50 & 0.77 \\
0.25 & 88.4 & 87.5 & 0.26 & 0.26 & 0.83 \\
0.15 & 155. & 157. & 0.164 & 0.168 & 1.00 \\
0.05 & 382. & 358. & 0.084 & 0.087 & 1.86 \\
& & & & & \\
\tableline
\end{tabular}
\label{tab2}
\end{table}
Up to $\cos\theta$ = 0.25 the agreement between the Monte Carlo
calculation and the analytic estimate is quite good.
Note that for nearly horizontal neutrinos the height distribution
from the Monte Carlo calculation is artificially narrow and irregular.
The atmospheric model used does not treat exactly the atmospheric
densities at vertical depths below a few g/cm$^2$. This introduces
a sharp cutoff in the height distribution for strongly inclined showers
and also decreases the width of the height distribution.
The height distribution of $\nu_e$ from $\pi/K$ decay has a similar shape
with much lower normalization, because only $K^0_L$ have a decay
mode with $\nu_e$'s ($K^0_{e3}$).
\subsection{Height distribution for neutrinos from muon decay}
\subsubsection{Analytic approximation}
To estimate the height of production for neutrinos from decay
of muons is more complicated because of the competition between
decay and energy loss for muons in the multi-GeV energy range.
One starts from the distribution of production points for muons,
which is similar to Eq. \ref{production} with different coefficients.
The resulting approximate expression \cite{Lipari}
for the distribution of production distances (differential in
the energy of the parent muons as well as the slant height of production) is
\begin{equation}
\label{numu}
{dN_\nu\over dE_\mu\,d\ell}\;=\;K B {\mu c^2\over E_\mu c\tau}\,\int_0^X\,
{dY\over \lambda_N}{e^{-Y/\Lambda_N}\left[{X \over Y}{E_\mu+\alpha(X-Y)\over
E_\mu}\right]^{-p}\over[E_\mu+\alpha(X-Y)]^{\gamma+1}},
\end{equation}
where $\tau$ is the muon lifetime,
$$
p\;=\;{h_0\over c\tau\cos\theta}{\mu c^2\over E_\mu+\alpha X}
$$
and
$$
B\; = \; {1\over\gamma+1}\,\left[{1-r_\pi^{(\gamma+1)}\over 1-r_\pi}\,Z_{N\pi}\;
+\;B_{K}\,{1-r_k^{(\gamma+1)}\over 1-r_K}\,Z_{NK}\right].
$$
At high altitude muon energy loss ($\alpha(X-Y)$) can
be neglected and the expression \ref{numu} is proportional to
slant depth $X$ given by Eq. \ref{atmos}. This expression
gives a good account of the high-altitude exponential falloff
of the neutrinos from muon decay.
An approximation that is adequate for fitting the distribution
for all distances (integrated over neutrino energy) is
\begin{equation}
\label{numuapprox}
{dN_\nu(>E_\nu)\over d\ell}\,\approx\,
{C_\mu\,K\,B\over (\gamma+1) (2 E_\nu)^{(\gamma+1)}}\,
{\mu c^2\over c\tau}\,{X\over\lambda_N}\,
\int_0^1 dz\,z^p\,\exp{(-{X\over\Lambda_N}z)}
\left[1\,+\,{\alpha X\over 2\,E_\nu}(1-z)\right]^{-(p+\gamma+1)} ,
\end{equation}
where $C_\mu$ is an overall normalization factor used to fit
the Monte Carlo results (see Table~\ref{tab2}).
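For orientation, a rough numerical sketch of the integral in Eq.~\ref{numuapprox} is given below. It is ours, gives the shape only (overall normalization omitted), and assumes illustrative constants: $\mu c^2 = 0.1057$~GeV, $c\tau = 0.6586$~km, muon energy loss $\alpha\approx 2\times10^{-3}$~GeV/(g/cm$^2$), and $E_\mu\approx 2E_\nu$ in the exponent $p$.
\begin{verbatim}
# Shape (arbitrary normalization) of the nu-from-muon-decay distribution
# versus slant distance ell, by midpoint-rule integration over z.
import math

h0, X0, Lambda_N, gamma = 6.4, 1030.0, 120.0, 1.70
mu_c2, c_tau, alpha = 0.1057, 0.6586, 2.0e-3

def shape(ell, E_nu=1.0, cos_t=1.0, steps=400):
    X = (X0 / cos_t) * math.exp(-ell * cos_t / h0)   # slant depth
    p = (h0 / (c_tau * cos_t)) * mu_c2 / (2.0 * E_nu + alpha * X)
    s, dz = 0.0, 1.0 / steps
    for i in range(steps):
        z = (i + 0.5) * dz
        s += dz * z**p * math.exp(-X * z / Lambda_N) * \
             (1.0 + alpha * X * (1.0 - z) / (2.0 * E_nu))**(-(p + gamma + 1.0))
    return X * s
\end{verbatim}
The exponential falloff at high altitude (small $X$) and the flattening deep in the atmosphere at high $E_\nu$ are both visible when this shape is plotted.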
\subsubsection{Monte Carlo results}
Fig.~4 shows the height of production distributions for muon
neutrinos of energy above 1 GeV from muon decay. The lines
are calculated according to Eq.~\ref{numuapprox} with values of
$\cos_{eff}\theta$ as given in Table~\ref{tab2}. To obtain the
fits shown in Fig.~4 the approximations of Eq. \ref{numuapprox}
have also been renormalized as indicated in Table~\ref{tab2}.
At high altitude the height of production distributions have the same
shape as the ones from neutrinos from $\pi/K$ decay, shifted to lower
altitudes by one muon decay length (6.24 km for 1 GeV muons).
At lower altitude the shapes are quite different. The production
height for neutrinos from muon decay extend to much lower altitude
because of the slow attenuation of the parent muon flux, an effect which
becomes more pronounced as the energy increases.
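(For reference, the muon decay length quoted above follows directly from $d_{dec}=\gamma c\tau_\mu \approx (E_\mu/0.1057~{\rm GeV})\times 0.659~{\rm km}\simeq 6.2$~km at $E_\mu = 1$~GeV, growing linearly with energy.)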
It is interesting to observe that at high zenith angles the yield
of neutrinos from (daughter) muon decay exceeds the yield of neutrinos
from the decay of the parent pions and kaons.
The reason is that muon neutrinos
from muon decay in flight have a spectrum extending almost to
$x = 1$ (where $x = E_{\nu}/E_\pi$), while the neutrinos from $\pi$
decay can only reach $E_\nu^{max} = E_\pi \times (1 - r_\pi)$
= $0.428\,E_\pi$. The corresponding $Z$--factors $Z_{\pi\mu \nu_\mu}$
and $Z_{\pi\nu_\mu}$ are 0.133 and 0.087 respectively, including
the effect of muon polarization in pion decay. The result is that
for large zenith angles, when almost all muons decay, the
$\nu_\mu$ yield from muon decay becomes slightly larger than
that from $\pi/K$ decay ($\sim 9/7$ for $\cos\theta$ = 0.05).
Fig.~5 compares the distributions of distance to production
for $\nu_\mu$
from muon decay with $E_\nu$ above 1, 10, and 100 GeV. At high
neutrino energy the muon decay length becomes comparable to or larger
than the total dimension of the atmosphere. The neutrino height of
production then becomes constant deep in the atmosphere.
The height distribution for $\nu_e$ from muon decay is analogous
to that of $\nu_\mu$. The only difference is the slightly lower
normalization, which reflects the ratio $Z_{\mu \nu_\mu}/Z_{\mu \nu_e}$
= 0.133/0.129 = 1.03 (including muon polarization).
\subsection{Height distribution in three energy bins}
In Table~\ref{tab3} we show the average height of production and
the contributions of $\pi/K$ and muon decays for neutrinos in the
three energy bins ($0.3 <\,E_\nu\,< 2$~GeV; $2 <\,E_\nu\,< 20$~GeV;
$E_\nu\,>\,20$~GeV), which roughly correspond to contained neutrino
events, semicontained events and stopping neutrino induced muons and
throughgoing neutrino induced muons. For each angle and neutrino flavor
Table~\ref{tab3} first gives the average height of production
(slant depth) in km and the width of the height of production
distribution. Then it gives the contribution (in \%) of $\pi/K$ decay,
$f_{\pi/K}$, and the corresponding $\langle h_{\pi/K} \rangle$ and
$\sigma h_{\pi/K}$, then the same quantities ($f_\mu$, $\langle h_\mu \rangle$,
$\sigma h_\mu$) for neutrinos from muon decay.
The calculation was done with the geomagnetic cutoffs of Kamioka,
except for the three lines ($\cos \theta$ = 1.00, 0.75 and 0.50, for
the lowest energy bin) that are also calculated for
the high geomagnetic latitude of SNO~\cite{SNO}. The numbers for
high geomagnetic latitude are slightly higher (2 -- 10 \%) for
both neutrino sources because of the contribution of low energy
protons. This difference becomes negligible at higher angles.
\begin{table}
\caption{ Production height (slant distance, km) of neutrinos for six
values of $\cos{\theta}$ and three neutrino energy ranges. The calculation
is for the geomagnetic location of Kamioka with three lines for the lowest
energy range calculated for Sudbury, Canada.}
\begin{small}
\begin{center}
\begin{tabular}{|r||rr|rrr|rrr||rr|rrr|rrr|}\hline
E, GeV & \multicolumn{8}{ c|}{$\nu_e + \bar{\nu}_e$} &
\multicolumn{8}{|c|}{$\nu_\mu + \bar{\nu}_\mu$} \\
$\cos{\theta}$ &h&$\sigma_h$&$f_{\pi/K}$&$h_{\pi/K}$&$\sigma h_{\pi/K}$
& $f_\mu$ & $h_\mu$ & $\sigma h_\mu$ &
h&$\sigma_h$&$f_{\pi/K}$&$h_{\pi/K}$&$\sigma h_{\pi/K}$
& $f_\mu$ & $h_\mu$ & $\sigma h_\mu$ \\
\hline \hline
0.3 -- 2. & & & & & & & & & & & & & & & &\\
1.00 & 14.0& 8.7& 1.3& 16.8& 8.3& 98.7& 14.0& 8.7&
15.9& 8.7& 57.4& 17.4& 8.4& 42.6& 14.1& 8.7\\
0.75 & 21.0& 11.9& 1.2& 25.6& 11.9& 98.8& 21.0& 11.9&
23.6& 11.8& 54.5& 25.6& 11.4& 45.5& 21.1& 11.9\\
0.50 & 37.7& 18.4& 1.0& 44.0& 17.2& 99.0& 37.6& 18.3&
41.0& 18.1& 51.3& 44.0& 17.2& 48.7& 37.8& 18.3\\
0.25 & 91.1& 32.0& 0.9&100.5& 29.9& 99.1& 90.9& 32.1&
95.6& 31.4& 48.7&100.2& 30.2& 51.3& 91.3& 32.0\\
0.15 &154.8& 38.1& 0.9&167.6& 35.1& 99.1&154.6& 38.2&
160.0& 37.3& 47.9&165.6& 35.3& 52.1&155.0& 38.0\\
0.05 &363.8& 56.5& 0.9&378.0& 52.8& 99.1&363.7& 56.5&
369.8& 55.0& 47.4&376.1& 53.1& 52.6&364.1& 56.3\\
SNO & & & & & & & & & & & & & & & &\\
1.00 & 15.4& 8.9& 0.8& 17.2& 8.6& 99.2& 15.4& 8.9&
16.9& 8.8& 53.8& 18.0& 8.5& 46.2& 15.5& 8.9\\
0.75 & 23.2& 12.1& 0.7& 25.6& 11.3& 99.3& 23.2& 12.1&
25.0& 11.9& 51.1& 26.5& 11.5& 48.9& 23.3& 12.1\\
0.50 & 40.6& 18.4& 0.6& 44.4& 17.3& 99.4& 40.6& 18.4&
43.0& 18.1& 48.3& 45.2& 17.4& 51.7& 40.8& 18.3\\
2. -- 20. & & & & & & & & & & & & & & & &\\
1.00 & 13.4& 9.1& 6.7& 17.9& 8.9& 93.3& 13.1& 9.0&
16.6& 9.0& 71.6& 18.0& 8.6& 28.4& 13.1& 8.9\\
0.75 & 19.6& 12.4& 5.4& 26.3& 11.6& 94.6& 19.3& 12.3&
24.1& 12.1& 67.0& 26.4& 11.4& 33.0& 19.4& 12.3\\
0.50 & 34.4& 19.6& 3.9& 44.8& 17.4& 96.1& 34.0& 19.5&
40.9& 19.1& 60.2& 45.3& 17.5& 39.8& 34.2& 19.3\\
0.25 & 81.9& 35.6& 2.9&102.8& 31.1& 97.1& 81.3& 35.6&
92.8& 34.6& 52.6&102.9& 30.5& 47.4& 81.7& 35.4\\
0.15 &139.5& 45.2& 2.5&168.6& 34.9& 97.5&138.8& 45.2&
154.3& 42.8& 49.2&169.1& 35.0& 50.8&139.9& 44.9\\
0.05 &338.9& 73.0& 2.2&380.6& 51.5& 97.8&338.0& 73.1&
359.0& 67.1& 46.2&381.6& 52.0& 53.8&339.5& 72.4\\
$>$ 20. & & & & & & & & & & & & & & & &\\
1.00 & 14.0& 9.3& 41.5& 17.7& 8.7& 58.5& 11.4& 8.9&
17.6& 8.9& 94.2& 17.9& 8.7& 5.8& 11.6& 9.1\\
0.75 & 20.0& 13.1& 33.1& 26.4& 11.7& 66.9& 16.8& 12.6&
25.8& 12.1& 91.9& 26.6& 11.7& 8.1& 16.6& 12.4\\
0.50 & 31.8& 20.3& 22.3& 44.8& 17.7& 77.7& 28.0& 19.4&
43.3& 18.9& 87.8& 45.4& 17.8& 12.2& 28.0& 19.4\\
0.25 & 70.3& 38.9& 7.8& 99.1& 27.2& 92.2& 67.2& 38.6&
94.9& 36.4& 79.6&103.8& 29.9& 20.4& 60.1& 39.5\\
0.15 &110.3& 54.7& 8.8&168.1& 35.0& 91.2&104.7& 53.0&
151.2& 49.4& 72.5&168.7& 34.6& 27.5&105.2& 52.7\\
0.05 &267.4&105.1& 5.4&382.1& 54.8& 94.6&260.8&103.5&
335.7& 94.2& 61.7&381.7& 51.1& 38.3&262.3&100.5\\
\label{tab3}
\end{tabular}
\end{center} \end{small} \end{table}
There are two obvious trends in the numbers in Table~\ref{tab3}.
The height of production for neutrinos from $\pi/K$ decay grows slightly
with the neutrino energy because higher energy mesons preferentially
decay (rather than interact) in the tenuous atmosphere at high altitude.
Neutrinos from muon decay, on the other hand, are generated at
lower altitude at high energy because of the increasing muon decay length.
This second feature is much stronger because of the proportionality
of decay length and muon energy.
The average heights of production also reflect the relative yields of the
two neutrino sources. For low energy $\nu_e (\bar{\nu}_e)$, for example,
the contribution of $K^0_{e3}$ is small, so the
average height of production is dominated by muon decay.
At higher energy the relative contribution of $K^0_{e3}$ grows, especially
at directions close to the vertical, and $\langle h \rangle$ becomes
intermediate between those of the two processes with correspondingly
larger width.
Generally the contribution from muon decay increases significantly
with the zenith angle since even 20 GeV muons easily decay in
cascades developing in nearly horizontal direction.
\section{Conclusions}
We have calculated the distribution of pathlengths of atmospheric
neutrinos for a range of angles and energies relevant for
current searches for neutrino oscillations with atmospheric neutrinos.
Accounting correctly for the pathlength will be particularly
important for neutrinos near the horizontal direction, where the
pathlength through the atmosphere of neutrinos from above the horizon
is of the same order of magnitude as the pathlength through the Earth
of neutrinos from below the horizon.
We have also given simple approximations that may be useful
in interpolating the tables and adapting the results for
different energy ranges and directions.
The influence of geomagnetic effects on the calculated
height-of-production distributions is not very strong. The difference
in the average production heights for neutrinos detected at Kamioka
and SNO is of order several per cent in directions relatively close
to the vertical. This difference diminishes with angle and becomes
totally negligible for upward going neutrinos, where the geomagnetic
cutoffs become approximately equal, being averaged over the
geomagnetic field of the opposite hemisphere.
The agreement of our calculation with the measured muon fluxes above
1 GeV/c as a function of the atmospheric depth serves as a check
on the validity of the results presented above.
\acknowledgments The authors express their gratitude to
W.~Gajewski, J.G.~Learned, H.~Sobel, and Y.~Suzuki for their
interest in the height of production problem that inspired us to
complete this research. This work is supported in part by the U.S.
Department of Energy under DE-FG02-91ER40626.A007.
\section{Introduction}
Please follow the steps outlined below when submitting your manuscript to
the IEEE Computer Society Press. This style guide now has several
important modifications (for example, you are no longer warned against the
use of sticky tape to attach your artwork to the paper), so all authors
should read this new version.
\subsection{Language}
All manuscripts must be in English.
\subsection{Dual submission}
Please refer to the author guidelines on the ICCV 2019 web page for a
discussion of the policy on dual submissions.
\subsection{Paper length}
Papers, excluding the references section,
must be no longer than eight pages in length. The references section
will not be included in the page count, and there is no limit on the
length of the references section. For example, a paper of eight pages
with two pages of references would have a total length of 10 pages.
Overlength papers will simply not be reviewed. This includes papers
where the margins and formatting are deemed to have been significantly
altered from those laid down by this style guide. Note that this
\LaTeX\ guide already sets figure captions and references in a smaller font.
The reason such papers will not be reviewed is that there is no provision for
supervised revisions of manuscripts. The reviewing process cannot determine
the suitability of the paper for presentation in eight pages if it is
reviewed in eleven.
\subsection{The ruler}
The \LaTeX\ style defines a printed ruler which should be present in the
version submitted for review. The ruler is provided in order that
reviewers may comment on particular lines in the paper without
circumlocution. If you are preparing a document using a non-\LaTeX\
document preparation system, please arrange for an equivalent ruler to
appear on the final output pages. The presence or absence of the ruler
should not change the appearance of any other content on the page. The
camera ready copy should not contain a ruler. (\LaTeX\ users may uncomment
the \verb'\iccvfinalcopy' command in the document preamble.) Reviewers:
note that the ruler measurements do not align well with lines in the paper
--- this turns out to be very difficult to do well when the paper contains
many figures and equations, and, when done, looks ugly. Just use fractional
references (e.g.\ this line is $095.5$), although in most cases one would
expect that the approximate location will be adequate.
\subsection{Mathematics}
Please number all of your sections and displayed equations. It is
important for readers to be able to refer to any particular equation. Just
because you didn't refer to it in the text doesn't mean some future reader
might not need to refer to it. It is cumbersome to have to use
circumlocutions like ``the equation second from the top of page 3 column
1''. (Note that the ruler will not be present in the final copy, so is not
an alternative to equation numbers). All authors will benefit from reading
Mermin's description of how to write mathematics:
\url{http://www.pamitc.org/documents/mermin.pdf}.
\subsection{Blind review}
Many authors misunderstand the concept of anonymizing for blind
review. Blind review does not mean that one must remove
citations to one's own work---in fact it is often impossible to
review a paper unless the previous citations are known and
available.
Blind review means that you do not use the words ``my'' or ``our''
when citing previous work. That is all. (But see below for
techreports.)
Saying ``this builds on the work of Lucy Smith [1]'' does not say
that you are Lucy Smith; it says that you are building on her
work. If you are Smith and Jones, do not say ``as we show in
[7]'', say ``as Smith and Jones show in [7]'' and at the end of the
paper, include reference 7 as you would any other cited work.
An example of a bad paper just asking to be rejected:
\begin{quote}
\begin{center}
An analysis of the frobnicatable foo filter.
\end{center}
In this paper we present a performance analysis of our
previous paper [1], and show it to be inferior to all
previously known methods. Why the previous paper was
accepted without this analysis is beyond me.
[1] Removed for blind review
\end{quote}
An example of an acceptable paper:
\begin{quote}
\begin{center}
An analysis of the frobnicatable foo filter.
\end{center}
In this paper we present a performance analysis of the
paper of Smith \emph{et al}\onedot [1], and show it to be inferior to
all previously known methods. Why the previous paper
was accepted without this analysis is beyond me.
[1] Smith, L and Jones, C. ``The frobnicatable foo
filter, a fundamental contribution to human knowledge''.
Nature 381(12), 1-213.
\end{quote}
If you are making a submission to another conference at the same time,
which covers similar or overlapping material, you may need to refer to that
submission in order to explain the differences, just as you would if you
had previously published related work. In such cases, include the
anonymized parallel submission~\cite{Authors14} as additional material and
cite it as
\begin{quote}
[1] Authors. ``The frobnicatable foo filter'', F\&G 2014 Submission ID 324,
Supplied as additional material {\tt fg324.pdf}.
\end{quote}
Finally, you may feel you need to tell the reader that more details can be
found elsewhere, and refer them to a technical report. For conference
submissions, the paper must stand on its own, and not {\em require} the
reviewer to go to a techreport for further details. Thus, you may say in
the body of the paper ``further details may be found
in~\cite{Authors14b}''. Then submit the techreport as additional material.
Again, you may not assume the reviewers will read this material.
Sometimes your paper is about a problem which you tested using a tool which
is widely known to be restricted to a single institution. For example,
let's say it's 1969, you have solved a key problem on the Apollo lander,
and you believe that the ICCV70 audience would like to hear about your
solution. The work is a development of your celebrated 1968 paper entitled
``Zero-g frobnication: How being the only people in the world with access to
the Apollo lander source code makes us a wow at parties'', by Zeus \emph{et al}\onedot.
You can handle this paper like any other. Don't write ``We show how to
improve our previous work [Anonymous, 1968]. This time we tested the
algorithm on a lunar lander [name of lander removed for blind review]''.
That would be silly, and would immediately identify the authors. Instead
write the following:
\begin{quotation}
\noindent
We describe a system for zero-g frobnication. This
system is new because it handles the following cases:
A, B. Previous systems [Zeus et al. 1968] didn't
handle case B properly. Ours handles it by including
a foo term in the bar integral.
...
The proposed system was integrated with the Apollo
lunar lander, and went all the way to the moon, don't
you know. It displayed the following behaviours
which show how well we solved cases A and B: ...
\end{quotation}
As you can see, the above text follows standard scientific convention,
reads better than the first version, and does not explicitly name you as
the authors. A reviewer might think it likely that the new paper was
written by Zeus \emph{et al}\onedot, but cannot make any decision based on that guess.
He or she would have to be sure that no other authors could have been
contracted to solve problem B.
\noindent
FAQ\medskip\\
{\bf Q:} Are acknowledgements OK?\\
{\bf A:} No. Leave them for the final copy.\medskip\\
{\bf Q:} How do I cite my results reported in open challenges?\\
{\bf A:} To conform with the double blind review policy, you can report results of other challenge participants together with your results in your paper. For your results, however, you should not identify yourself and should not mention your participation in the challenge. Instead present your results referring to the method proposed in your paper and draw conclusions based on the experimental comparison to other results.\medskip\\
\begin{figure}[t]
\begin{center}
\fbox{\rule{0pt}{2in} \rule{0.9\linewidth}{0pt}}
\end{center}
\caption{Example of caption. It is set in Roman so that mathematics
(always set in Roman: $B \sin A = A \sin B$) may be included without an
ugly clash.}
\label{fig:long}
\label{fig:onecol}
\end{figure}
\subsection{Miscellaneous}
\noindent
Compare the following:\\
\begin{tabular}{ll}
\verb'$conf_a$' & $conf_a$ \\
\verb'$\mathit{conf}_a$' & $\mathit{conf}_a$
\end{tabular}\\
See The \TeX book, p165.
The space after \eg, meaning ``for example'', should not be a
sentence-ending space. So \eg is correct, {\em e.g.} is not. The provided
\verb'\eg' macro takes care of this.
When citing a multi-author paper, you may save space by using ``et alia'',
shortened to ``\emph{et al}\onedot'' (not ``{\em et.\ al.}'' as ``{\em et}'' is a complete word.)
However, use it only when there are three or more authors. Thus, the
following is correct: ``
Frobnication has been trendy lately.
It was introduced by Alpher~\cite{Alpher02}, and subsequently developed by
Alpher and Fotheringham-Smythe~\cite{Alpher03}, and Alpher \emph{et al}\onedot~\cite{Alpher04}.''
This is incorrect: ``... subsequently developed by Alpher \emph{et al}\onedot~\cite{Alpher03} ...''
because reference~\cite{Alpher03} has just two authors. If you use the
\verb'\emph{et al}\onedot' macro provided, then you need not worry about double periods
when used at the end of a sentence as in Alpher \emph{et al}\onedot.
For this citation style, keep multiple citations in numerical (not
chronological) order, so prefer \cite{Alpher03,Alpher02,Authors14} to
\cite{Alpher02,Alpher03,Authors14}.
\begin{figure*}
\begin{center}
\fbox{\rule{0pt}{2in} \rule{.9\linewidth}{0pt}}
\end{center}
\caption{Example of a short caption, which should be centered.}
\label{fig:short}
\end{figure*}
\section{Formatting your paper}
All text must be in a two-column format. The total allowable width of the
text area is $6\frac78$ inches (17.5 cm) wide by $8\frac78$ inches (22.54
cm) high. Columns are to be $3\frac14$ inches (8.25 cm) wide, with a
$\frac{5}{16}$ inch (0.8 cm) space between them. The main title (on the
first page) should begin 1.0 inch (2.54 cm) from the top edge of the
page. The second and following pages should begin 1.0 inch (2.54 cm) from
the top edge. On all pages, the bottom margin should be 1-1/8 inches (2.86
cm) from the bottom edge of the page for $8.5 \times 11$-inch paper; for A4
paper, approximately 1-5/8 inches (4.13 cm) from the bottom edge of the
page.
\subsection{Margins and page numbering}
All printed material, including text, illustrations, and charts, must be kept
within a print area 6-7/8 inches (17.5 cm) wide by 8-7/8 inches (22.54 cm)
high.
Page numbers should appear in the footer, centered and 0.75 inches
from the bottom of the page, and should start at your assigned page
number rather than the 4321 in the example. To do this, find the line
(around line 23)
\begin{verbatim}
\setcounter{page}{4321}
\end{verbatim}
where the number 4321 is your assigned starting page.
Make sure the first page is numbered by commenting out the command
(around line 46) that leaves the first page empty
\begin{verbatim}
\end{verbatim}
\subsection{Type-style and fonts}
Wherever Times is specified, Times Roman may also be used. If neither is
available on your word processor, please use the font closest in
appearance to Times to which you have access.
MAIN TITLE. Center the title 1-3/8 inches (3.49 cm) from the top edge of
the first page. The title should be in Times 14-point, boldface type.
Capitalize the first letter of nouns, pronouns, verbs, adjectives, and
adverbs; do not capitalize articles, coordinate conjunctions, or
prepositions (unless the title begins with such a word). Leave two blank
lines after the title.
AUTHOR NAME(s) and AFFILIATION(s) are to be centered beneath the title
and printed in Times 12-point, non-boldface type. This information is to
be followed by two blank lines.
The ABSTRACT and MAIN TEXT are to be in a two-column format.
MAIN TEXT. Type main text in 10-point Times, single-spaced. Do NOT use
double-spacing. All paragraphs should be indented 1 pica (approx. 1/6
inch or 0.422 cm). Make sure your text is fully justified---that is,
flush left and flush right. Please do not place any additional blank
lines between paragraphs.
Figure and table captions should be 9-point Roman type as in
Figures~\ref{fig:onecol} and~\ref{fig:short}. Short captions should be centred.
\noindent Callouts should be 9-point Helvetica, non-boldface type.
Initially capitalize only the first word of section titles and first-,
second-, and third-order headings.
FIRST-ORDER HEADINGS. (For example, {\large \bf 1. Introduction})
should be Times 12-point boldface, initially capitalized, flush left,
with one blank line before, and one blank line after.
SECOND-ORDER HEADINGS. (For example, { \bf 1.1. Database elements})
should be Times 11-point boldface, initially capitalized, flush left,
with one blank line before, and one after. If you require a third-order
heading (we discourage it), use 10-point Times, boldface, initially
capitalized, flush left, preceded by one blank line, followed by a period
and your text on the same line.
\subsection{Footnotes}
Please use footnotes\footnote {This is what a footnote looks like. It
often distracts the reader from the main flow of the argument.} sparingly.
Indeed, try to avoid footnotes altogether and include necessary peripheral
observations in
the text (within parentheses, if you prefer, as in this sentence). If you
wish to use a footnote, place it at the bottom of the column on the page on
which it is referenced. Use Times 8-point type, single-spaced.
\subsection{References}
List and number all bibliographical references in 9-point Times,
single-spaced, at the end of your paper. When referenced in the text,
enclose the citation number in square brackets, for
example~\cite{Authors14}. Where appropriate, include the name(s) of
editors of referenced books.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|}
\hline
Method & Frobnability \\
\hline\hline
Theirs & Frumpy \\
Yours & Frobbly \\
Ours & Makes one's heart Frob\\
\hline
\end{tabular}
\end{center}
\caption{Results. Ours is better.}
\end{table}
\subsection{Illustrations, graphs, and photographs}
All graphics should be centered. Please ensure that any point you wish to
make is resolvable in a printed copy of the paper. Resize fonts in figures
to match the font in the body text, and choose line widths which render
effectively in print. Many readers (and reviewers), even of an electronic
copy, will choose to print your paper in order to read it. You cannot
insist that they do otherwise, and therefore must not assume that they can
zoom in to see tiny details on a graphic.
When placing figures in \LaTeX, it's almost always best to use
\verb+\includegraphics+, and to specify the figure width as a multiple of
the line width as in the example below
{\small\begin{verbatim}
\usepackage[dvips]{graphicx} ...
\includegraphics[width=0.8\linewidth]
{myfile.eps}
\end{verbatim}
}
\subsection{Color}
Please refer to the author guidelines on the ICCV 2019 web page for a discussion
of the use of color in your document.
\section{Final copy}
You must include your signed IEEE copyright release form when you submit
your finished paper. We MUST have this form before your paper can be
published in the proceedings.
{\small
\bibliographystyle{ieee}
\subsection{Evaluating unsupervised features}
We evaluate the quality of the features extracted from a convnet trained with DeeperCluster\xspace on YFCC100M by considering several standard transfer learning tasks, namely image classification, object detection and scene classification.
\begin{table}[t!]
\centering
\setlength{\tabcolsep}{1pt}
\begin{tabular}{@{}lc@{}c@{}c@{\hspace{0.2em}}c@{}c@{}c@{}c@{}}
\toprule
& &\phantom{ee}& \multicolumn{2}{c}{Classif.} &\phantom{ee}& \multicolumn{2}{c}{Detect.} \\
\cmidrule{4-5} \cmidrule{7-8}
Method & Data && \textsc{fc68} & \textsc{all} && \textsc{fc68} & \textsc{all} \\
\midrule
ImageNet labels & INet && $89.3^{\phantom{\dagger}}$ & $89.2^{\phantom{\dagger}}$ && $66.3^{\phantom{\dagger}}$ & $70.3^{\phantom{\dagger}}$ \\
Random & -- && $10.1^{\phantom{\dagger}}$ & $49.6^{\phantom{\dagger}}$ && $\phantom{0}5.4^{\phantom{\dagger}}$ & $55.6^{\phantom{\dagger}}$ \\
\midrule
\multicolumn{6}{l}{\textit{Unsupervised on curated data}}
\vspace{0.3em} \\
Larsson~\emph{et al}\onedot~\cite{larsson2017colorization} & INet+Pl. && -- & $77.2^\dagger$ && $49.2^{\phantom{\dagger}}$ & $59.7^{\phantom{\dagger}}$ \\
Wu~\emph{et al}\onedot~\cite{wu2018unsupervised} & INet && -- & -- && -- & $60.5^\dagger$ \\
Doersch~\emph{et al}\onedot~\cite{doersch2015unsupervised} & INet && $54.6^{\phantom{\dagger}}$ & $78.5^{\phantom{\dagger}}$ && $38.0^{{\phantom{\dagger}}}$ & $62.7^{{\phantom{\dagger}}}$ \\
Caron~\emph{et al}\onedot~\cite{caron2018deep} & INet && $78.5^{\phantom{\dagger}}$ & $82.5^{\phantom{\dagger}}$ && $58.7^{\phantom{\dagger}}$ & $65.9^\dagger$ \\
\midrule
\multicolumn{6}{l}{\textit{Unsupervised on non-curated data}}
\vspace{0.3em} \\
Mahendran~\emph{et al}\onedot~\cite{mahendran2018cross} & YFCCv && -- & $76.4^\dagger$ && -- & -- \\
Wang and Gupta~\cite{wang2015unsupervised} & YT8M && -- & -- && -- & $60.2^\dagger$ \\
Wang~\emph{et al}\onedot~\cite{wang2017transitive} & YT9M && $59.4^{\phantom{\dagger}}$ & $79.6^{\phantom{\dagger}}$ && $40.9^{\phantom{\dagger}}$ & $63.2^\dagger$ \\
\midrule
DeeperCluster\xspace & YFCC && $\mathbf{79.7}^{\phantom{\dagger}}$ & $\mathbf{84.3}^{\phantom{\dagger}}$ && $\mathbf{60.5}^{\phantom{\dagger}}$ & $\mathbf{67.8}^{\phantom{\dagger}}$ \\
\bottomrule
\end{tabular}
\caption{
Comparison of DeeperCluster\xspace to state-of-the-art unsupervised feature learning on classification and detection on \textsc{Pascal} VOC $2007$.
We separate methods trained on curated datasets from methods trained on non-curated datasets.
We selected hyper-parameters for each transfer task on the validation set, and then retrained on both training and validation sets.
We report results on the test set averaged over $5$ runs.
``YFCCv'' stands for the videos contained in the YFCC100M dataset.
$^\dagger$: numbers taken from the original papers.
}
\label{tab:voc}
\end{table}
\paragraph{Pascal VOC 2007~\cite{everingham2010pascal}.}
This dataset has small training and validation sets ($2.5$k images each), making it close to the setting of real applications where models trained using large computational resources are adapted to a new task with a small number of instances.
We report numbers on the classification and detection tasks with finetuning (``\textsc{all}'') or by only retraining the last three fully connected layers of the network (``\textsc{fc68}'').
The \textsc{fc68} setting gives a better measure of the quality of the evaluated features since fewer parameters are retrained.
For classification, we use the code of Caron~\emph{et al}\onedot~\cite{caron2018deep}\footnote{\scriptsize\url{github.com/facebookresearch/deepcluster}} and for detection, \texttt{fast-rcnn}~\cite{girshick2015fast}\footnote{\scriptsize\url{github.com/rbgirshick/py-faster-rcnn}}.
For classification, we train the models for $150k$ iterations, starting with a learning rate of $0.002$ decayed by a factor $10$ every $20k$ iterations, and we report results averaged over $10$ random crops.
For object detection, we train our network for $150k$ iterations, dividing the step-size by $10$ after the first $50k$ steps with an initial learning rate of $0.01$ (\textsc{fc68}) or $0.002$ (\textsc{all}) and a weight decay of $0.0001$.
Following Doersch~\emph{et al}\onedot~\cite{doersch2015unsupervised}, we use the multiscale configuration, with scales $[400,500,600,700]$ for training and $[400,500,600]$ for testing.
In Table~\ref{tab:voc}, we compare DeeperCluster\xspace with two sets of unsupervised methods that use a VGG-16 network: those trained on curated datasets and those trained on non-curated datasets.
Previous unsupervised methods that work on non-curated datasets with a VGG-16 use videos: YouTube8M (``YT8M''), YouTube9M (``YT9M'') or the videos from YFCC100M (``YFCCv'').
Our approach achieves state-of-the-art performance among all unsupervised methods that use a VGG-16 architecture, even those that use ImageNet as a training set.
The gap with a supervised network remains sizeable when we freeze the convolutions ($6\%$ for detection and $10\%$ for classification) but drops to less than $5\%$ for both tasks with finetuning.
\paragraph{Linear classifiers on ImageNet~\cite{deng2009imagenet} and Places205~\cite{zhou2014learning}.}
ImageNet (``INet'') and Places205 (``Pl.'') are two large scale image classification datasets: ImageNet's domain covers objects and animals ($1.3$M images) and Places205's domain covers indoor and outdoor scenes ($2.5$M images).
We train linear classifiers with a logistic loss on top of frozen convolutional layers at different depths.
To reduce the influence of the feature dimension on the comparison, we average-pool the features until their dimension is below $10$k~\cite{zhang2016colorful}.
This experiment probes the quality of the features extracted at each convolutional layer.
In Figure~\ref{fig:layers}, we observe that DeeperCluster\xspace matches the performance of a supervised network for all layers on Places205.
On ImageNet, it also matches supervised features up to the $4$th convolutional block; then the gap suddenly increases to around $20\%$.
This is not surprising, since the supervised features are trained on ImageNet itself, while ours are trained on YFCC100M.
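As an illustration of this probing protocol, the following is a minimal PyTorch sketch; the \texttt{backbone}, the data \texttt{loader} and the exact pooling schedule are placeholders, not our released evaluation code.
\begin{verbatim}
import torch
import torch.nn as nn

def pooled_features(backbone, images, max_dim=10000):
    # Frozen convolutional activations, average-pooled until the
    # flattened dimension falls below max_dim (here 10k).
    with torch.no_grad():
        feats = backbone(images)                # (B, C, H, W)
    while feats[0].numel() > max_dim and feats.shape[-1] > 1:
        feats = nn.functional.avg_pool2d(feats, kernel_size=2)
    return feats.flatten(start_dim=1)           # (B, D), D <= max_dim

def train_linear_probe(backbone, loader, num_classes, dim, epochs=10):
    # Linear classifier with a logistic loss on top of frozen features.
    clf = nn.Linear(dim, num_classes)
    opt = torch.optim.SGD(clf.parameters(), lr=0.01, momentum=0.9)
    backbone.eval()
    for _ in range(epochs):
        for images, labels in loader:
            loss = nn.functional.cross_entropy(
                clf(pooled_features(backbone, images)), labels)
            opt.zero_grad(); loss.backward(); opt.step()
    return clf
\end{verbatim}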
\begin{figure}[t]
\centering
\includegraphics[width=0.48\linewidth]{figure_linear.pdf}
\includegraphics[width=0.48\linewidth]{figure_linear_places.pdf}
\caption{
Accuracy of linear classifiers on ImageNet and Places205 using the activations from different layers as features.
We compare a VGG-16 trained with supervision on ImageNet to VGG-16 trained with either RotNet or DeeperCluster\xspace on YFCC100M.
Exact numbers are in Appendix.
}
\label{fig:layers}
\end{figure}
\subsection{Pre-training for ImageNet}
In the previous section, we observed that a VGG-16 trained on YFCC100M has low-level features similar to or better than those of the same network trained on ImageNet with supervision.
In this experiment, we want to check whether these low-level features pre-trained on YFCC100M without supervision can serve as a good initialization for fully-supervised ImageNet classification.
To this end, we pre-train a VGG-16 on YFCC100M using either DeeperCluster\xspace or RotNet.
The resulting weights are then used as initialization for the training of a network on ImageNet with supervision.
We merge the Sobel weights of the network pre-trained with DeeperCluster\xspace with the first convolutional layer during the initialization.
We then train the networks on ImageNet with mini-batch SGD for $100$ epochs, a learning rate of $0.1$, a weight decay of $0.0001$, a batch size of $256$ and dropout of $0.5$.
We reduce the learning rate by a factor of $0.2$ every $20$ epochs.
Note that this learning rate decay schedule slightly differs from the ImageNet classification PyTorch default implementation\footnote{\scriptsize\url{github.com/pytorch/examples/blob/master/imagenet/}} where they train for $90$ epochs and decay the learning rate by $0.1$ at epochs $30$ and $60$.
We give in Appendix the results with this default schedule (with unchanged conclusions).
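For concreteness, the schedule above can be expressed with a standard PyTorch step scheduler; this is only a sketch of the stated hyper-parameters (the momentum value and the stand-in module are assumptions), not our training script.
\begin{verbatim}
import torch

# Stand-in module; in our setting this would be the VGG-16,
# optionally initialized with DeeperCluster pre-trained weights.
model = torch.nn.Linear(10, 2)

# Stated hyper-parameters: lr 0.1, weight decay 0.0001
# (momentum 0.9 is an assumption).
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=0.0001)
# Reduce the learning rate by a factor of 0.2 every 20 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                            step_size=20, gamma=0.2)

for epoch in range(100):
    # ... one epoch of supervised ImageNet training here ...
    scheduler.step()
\end{verbatim}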
In Table~\ref{tab:pretrain}, we compare the performance of a network trained with a standard initialization (``Supervised'') to networks initialized with pre-trained weights obtained from either DeeperCluster\xspace (``Supervised + DeeperCluster\xspace pre-training'') or RotNet (``Supervised + RotNet pre-training'') on YFCC100M.
We see that our pre-training improves the performance of a supervised network by $+0.8\%$, leading to $74.9\%$ top-1 accuracy.
This means that our pre-training captures important statistics from YFCC100M that transfers well to ImageNet.
\begin{table}[t!]
\centering
\setlength{\tabcolsep}{3.5pt}
\begin{tabular}{@{}l c cc @{}}
\toprule
ImageNet &~~~&top-$1$ & top-$5$ \\
\midrule
Supervised (PyTorch documentation\footnote{\scriptsize\url{pytorch.org/docs/stable/torchvision/models}}) && $73.4$ & $91.5$ \\
Supervised (our code) && $74.1$ & $91.8$ \\
Supervised + RotNet pre-training && $74.5$ & $92.0$ \\
Supervised + DeeperCluster\xspace pre-training && $\mathbf{74.9}$ & $\mathbf{92.3}$ \\
\bottomrule
\end{tabular}
\caption{
Accuracy on the validation set of ImageNet classification for a supervised VGG-16 trained with different initializations:
we compare a network trained from a standard initialization to networks trained from pre-trained weights using either DeeperCluster\xspace or RotNet on YFCC100M.
}
\label{tab:pretrain}
\end{table}
\subsection{Model analysis}
In this final set of experiments, we analyze some components of our model.
Since DeeperCluster\xspace derives from RotNet and DeepCluster,
we first look at the difference between these methods and ours, when trained on curated and non-curated datasets.
We then report quantitative and qualitative evaluations of the clusters obtained with DeeperCluster\xspace.
\paragraph{Comparison with RotNet and DeepCluster.}
In Table~\ref{tab:linear}, we compare DeeperCluster\xspace with DeepCluster and RotNet when a linear classifier is trained on top of the last convolutional layer of a VGG-16 on several datasets.
For reference, we also report previously published numbers~\cite{wu2018unsupervised} with a VGG-$16$ architecture.
We average-pool the features of the last layer, resulting in $8192$-dimensional representations.
Our approach outperforms both RotNet and DeepCluster, even when they are trained on curated datasets (except for the ImageNet classification task, where DeepCluster trained on ImageNet yields the best performance).
More interestingly, we see that the quality of the dataset and its scale have little impact on RotNet, while they have a large impact on DeepCluster.
This confirms that self-supervised methods are more robust than clustering to a change of dataset distribution.
\begin{table}[t!]
\centering
\setlength{\tabcolsep}{3.5pt}
\begin{tabular}{@{}l c c c c@{}}
\toprule
Method & Data & ImageNet & Places & VOC2007 \\
\midrule
Supervised & ImageNet & $70.2$ & $45.9$ & $84.8$ \\
\midrule
Wu~\emph{et al}\onedot~\cite{wu2018unsupervised} & ImageNet & $39.2$ & $36.3$ & - \\
\midrule
RotNet & ImageNet & $32.7$ & $32.6$ & $60.9$ \\
DeepCluster & ImageNet & $\mathbf{48.4}$ & $37.9$ & $71.9$ \\
\midrule
RotNet & YFCC100M & $33.0$ & $35.5$ & $62.2$ \\
DeepCluster & YFCC100M & $34.1$ & $35.4$ & $63.9$ \\
\midrule
DeeperCluster\xspace & YFCC100M & $45.6$ & $\mathbf{42.1}$ & $\mathbf{73.0}$ \\
\bottomrule
\end{tabular}
\caption{
Comparison between DeeperCluster\xspace, RotNet and DeepCluster when pre-trained on curated and non-curated datasets.
We report the accuracy on several datasets of a linear classifier trained on top of features of the last convolutional layer.
All the methods use the same architecture.
DeepCluster does not scale to the full YFCC100M dataset; we thus train it on a random subset of $1.3$M images.
}
\label{tab:linear}
\end{table}
\begin{figure*}[t]
\centering
\includegraphics[height=0.19\linewidth]{figure_nmi_tag.pdf}
\includegraphics[height=0.19\linewidth]{figure_nmi_user.pdf}
\includegraphics[height=0.19\linewidth]{figure_nmi_gps.pdf}
\includegraphics[height=0.19\linewidth]{figure_nmi_camera.pdf}
\includegraphics[height=0.19\linewidth]{figure_nmi_imnet.pdf}
\caption{Normalized mutual information between our clustering and different sorts of metadata: hashtags, user IDs, geographic coordinates, and device types.
We also plot the NMI with an ImageNet classifier labeling.}
\label{fig:nmi}
\end{figure*}
\begin{figure*}[t!]
\centering
\begin{tabular}{ccccccc}
{tag: \itshape cat} & {tag: \itshape elephantparadelondon} & {tag: \itshape always} & {device: \itshape CanoScan}
\\
\includegraphics[width=0.225\linewidth]{cluster21642.jpeg}&
\includegraphics[width=0.225\linewidth]{cluster28828.jpeg}&
\includegraphics[width=0.225\linewidth]{cluster3551.jpeg}&
\includegraphics[width=0.225\linewidth]{cluster5841.jpeg}
\\
{GPS: ($43$, $10$)} & {GPS: ($-34$, $-151$)} & {GPS: ($64$, $-20$)} & {GPS: ($43$, $-104$)}
\\
\includegraphics[width=0.225\linewidth]{cluster30986.jpeg}&
\includegraphics[width=0.225\linewidth]{cluster380.jpeg}&
\includegraphics[width=0.225\linewidth]{cluster22739.jpeg}&
\includegraphics[width=0.225\linewidth]{cluster16433.jpeg}
\end{tabular}
\caption{We randomly select $9$ images per cluster and indicate the dominant cluster metadata.
The bottom row depicts clusters that are pure for GPS coordinates but impure for user IDs.
As expected, they turn out to correlate with tourist landmarks.
No metadata is used during training.
For copyright reasons, we provide in Appendix the photographer username for each image.
}
\label{fig:cluster}
\end{figure*}
\paragraph{Influence of dataset size and number of clusters.}
\label{sec:exp-k}
To measure the influence of the number of images on features, we train models with $1$M, $4$M, $20$M, and $96$M images and report their accuracy on the validation set of the Pascal VOC 2007 classification task (\textsc{fc68} setting).
We also train models on $20$M images with a number of clusters varying from $10$k to $160$k.
For the experiment with a total of $160$k clusters, we choose $m=2$, which, combined with the $4$ rotation classes, results in $8$ super-classes.
In Figure~\ref{fig:k}, we observe that the quality of our features improves when scaling both in terms of images and clusters.
Interestingly, between $4$M and $20$M YFCC100M images are needed to match the performance of our method when trained on ImageNet.
Increasing the number of images has a bigger impact than increasing the number of clusters.
Yet the improvement from more clusters is still significant, since it corresponds to a reduction of more than $10\%$ of the relative error w.r.t. the supervised model.
\paragraph{Quality of the clusters.}
In addition to features, our method provides a clustering of the input images.
We evaluate the quality of these clusters by measuring their correlation with existing partitions of the data.
In particular, YFCC100M comes with several types of metadata.
We consider hashtags, user IDs, camera types and GPS coordinates.
If an image has several hashtags, we pick as label the least frequent one in the total hashtag distribution.
We also measure the correlation of our clusters with labels predicted by a classifier trained on ImageNet categories.
We use a ResNet-$50$ network~\cite{he2016deep}, pre-trained on ImageNet, to classify the YFCC100M images and we select those for which the confidence in prediction is higher than $75\%$.
This evaluation omits a large amount of the data but gives some insight into the quality of our clustering for object classification.
In Figure~\ref{fig:nmi}, we show the evolution during training of the normalized mutual information (NMI) between our clustering and different metadata, and the predicted labels from ImageNet.
The higher the NMI, the more correlated our clusters are to the considered partition.
For reference, we compute the NMI for a clustering of RotNet features (as it corresponds to weights at initialization) and of a supervised model.
First, it is interesting to observe that our clustering improves over time for every type of metadata.
One important factor is that these metadata are correlated: a given user tends to take pictures in specific places, probably with a single camera, and to use a preferred fixed set of hashtags.
Yet these plots show that our model captures enough information in the input signal to predict these metadata at least as well as features trained with supervision.
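For concreteness, the NMI between a clustering and a hashtag partition, including the least-frequent-tag labeling described above, can be computed as in the sketch below; the toy data and function names are illustrative, not our evaluation code.
\begin{verbatim}
from collections import Counter
from sklearn.metrics import normalized_mutual_info_score

def least_frequent_tag(tags_per_image):
    # Label each image with its least frequent hashtag,
    # measured over the whole dataset.
    counts = Counter(t for tags in tags_per_image for t in tags)
    return [min(tags, key=counts.get) for tags in tags_per_image]

# Toy example: 6 images with hashtags and cluster assignments.
tags_per_image = [["cat"], ["cat", "pet"], ["pet"],
                  ["sea"], ["sea", "beach"], ["beach"]]
clusters = [0, 0, 0, 1, 1, 1]
labels = least_frequent_tag(tags_per_image)
print(normalized_mutual_info_score(labels, clusters))
\end{verbatim}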
We visually assess the consistency of our clusters in Figure~\ref{fig:cluster}.
We display $9$ random images from $8$ manually picked clusters.
The first two clusters contain a majority of images associated with a tag from the head (first cluster) and from the tail (second cluster) of the YFCC100M hashtag distribution.
Indeed, $418{,}538$ YFCC100M images are associated with the tag \textit{cat}, whereas only $384$ images contain the tag \textit{elephantparadelondon} ($0.0004\%$ of the dataset).
We also show a cluster for which the dominant hashtag does not correlate visually with the content of the cluster.
As already mentioned, this database is non-curated and contains images with no clear semantic content.
The dominant metadata of the last cluster in the top row is the device type \textit{CanoScan}: this cluster gathers drawings, and its images have mainly been digitized with a scanner.
Finally, the bottom row depicts clusters that are pure for GPS coordinates but impure for user IDs.
This results in clusters of images taken by many different users in the same place: tourist landmarks.
\section{Introduction}
\input{intro.tex}
\section{Related Work}
\input{related.tex}
\input{method.tex}
\section{Experiments}
\input{experiments.tex}
\section{Conclusion}
In this paper, we present an unsupervised approach specifically designed to deal with large amounts of non-curated data.
Our method is well-suited for distributed training, which allows training on large datasets of $96$M images.
With this amount of data, our approach surpasses unsupervised methods trained on curated datasets, which validates the potential of unsupervised learning in applications where annotations are scarce or curation is not trivial.
Finally, we show that unsupervised pre-training improves the performance of a network trained on ImageNet.
\paragraph{Acknowledgement.}
We thank Thomas Lucas, Matthijs Douze, Francisco Massa and the rest of Thoth and FAIR teams for their help and fruitful discussions.
We also thank the anonymous reviewers for their thoughtful feedback.
Julien Mairal was funded by the ERC grant number 714381 (SOLARIS project).
{\small
\bibliographystyle{ieee_fullname}
\section{Preliminaries}
In this work, we refer to the vector obtained at the penultimate layer of the convnet as a \emph{feature} or \emph{representation}.
We denote by $f_\theta$ the feature-extracting function, parametrized by a set of parameters $\theta$.
Given a set of images, our goal is then to learn a ``good'' mapping $f_{\theta^*}$.
By ``good'', we mean a function that produces general-purpose visual features that are useful on downstream tasks.
\subsection{Self-supervision}
In self-supervised learning, a pretext task is used to extract target labels directly from data~\cite{doersch2015unsupervised}.
These targets can take a variety of forms.
They can be categorical labels associated with a multiclass problem, as when predicting the transformation of an image~\cite{gidaris2018unsupervised,zhang2019aet} or the ordering of a set of patches~\cite{noroozi2016unsupervised}.
Or they can be continuous variables associated with a regression problem, as when predicting image color~\cite{zhang2016colorful} or surrounding patches~\cite{pathak2016context}.
In this work, we are interested in the former.
We suppose that we are given a set of $N$ images $\{x_1, \dots, x_N\}$ and we assign a pseudo-label $y_n$ in $\mathcal{Y}$ to each input $x_n$.
Given these pseudo-labels, we learn the parameters $\theta$ of the convnet jointly with a linear classifier $V$ to predict pseudo-labels by solving the problem
\begin{equation}
\label{eq:selfsup}
\min_{\theta, V} \frac{1}{N} \sum_{n=1}^N \ell( y_n, V f_\theta(x_n)),
\end{equation}
where $\ell$ is a loss function.
The pseudo-labels $y_n$ are fixed during the optimization and the quality of the learned features entirely depends on their relevance.
\paragraph{Rotation as self-supervision.}
Gidaris~\emph{et al}\onedot~\cite{gidaris2018unsupervised} have recently shown that good features can be obtained when training a convnet to discriminate between different image rotations.
In this work, we focus on their pretext task, \textit{RotNet}, since its performance on standard evaluation benchmarks is among the best in self-supervised learning.
This pretext task corresponds to a multiclass classification problem with four categories: rotations in $\{0\degree, 90\degree, 180\degree, 270\degree\}$.
Each input $x_n$ in Eq.~(\ref{eq:selfsup}) is randomly rotated and associated with a target~$y_n$ that represents the angle of the applied rotation.
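A minimal sketch of the corresponding input and target generation in PyTorch is given below; the tensor layout and sampling are illustrative assumptions.
\begin{verbatim}
import torch

def rotate_batch(images):
    # images: (B, C, H, W). Draw a rotation class y in {0,1,2,3},
    # i.e. {0, 90, 180, 270} degrees, for each image.
    y = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, k=int(k), dims=(1, 2))
                           for img, k in zip(images, y)])
    return rotated, y   # inputs x_n and pseudo-labels y_n of Eq. (1)

# Usage: x, y = rotate_batch(batch)
#        loss = cross_entropy(classifier(f_theta(x)), y)
\end{verbatim}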
\subsection{Deep clustering}
Clustering-based approaches for deep networks typically build target classes by clustering visual features produced by convnets.
As a consequence, the targets are updated during training along with the representations and are potentially different at each epoch.
In this context, we define a latent pseudo-label $z_n$ in $\mathcal{Z}$ for each image $n$ as well as a corresponding linear classifier $W$.
These clustering-based methods alternate between learning the parameters $\theta$ and $W$ and updating the pseudo-labels $z_n$.
Between two reassignments, the pseudo-labels $z_n$ are fixed, and the parameters and classifier are optimized by solving
\begin{equation}
\label{eq:mstep}
\min_{\theta, W} \frac{1}{N} \sum_{n=1}^N \ell( z_n, W f_\theta(x_n)),
\end{equation}
which is of the same form as Eq.~(\ref{eq:selfsup}).
Then, the pseudo-labels $z_n$ can be reassigned by minimizing an auxiliary loss function.
This loss sometimes coincides with Eq.~(\ref{eq:mstep})~\cite{bojanowski2017unsupervised,xie2016unsupervised}, but some works propose another objective~\cite{caron2018deep,yang2016joint}.
\paragraph{Updating the targets with $k$-means.}
In this work, we focus on the framework of Caron~\emph{et al}\onedot~\cite{caron2018deep}, \textit{DeepCluster}, where latent targets are obtained by clustering the activations with $k$-means.
More precisely, the targets $z_n$ are updated by solving the following optimization problem:
\begin{equation}\label{eq:clustering}
\min_{C\in\mathbb{R}^{d\times k}} \sum_{n=1}^N \left[\min_{z_n\in \{0,1\}^k ~\text{s.t.}~ z_n^\top {\mathbf 1} = 1} \| C z_n - f_\theta(x_n)\|_2^2 \right],
\end{equation}
where $C$ is the matrix whose columns are the centroids, $k$ is the number of centroids, and $z_n$ is a binary vector with a single non-zero entry.
This approach assumes that the number of clusters $k$ is known \emph{a priori}; in practice, we set it by validation on a downstream task (see Sec.~\ref{sec:exp-k}).
The latent targets are updated every $T$ epochs of stochastic gradient descent on the objective~(\ref{eq:mstep}).
Note that this alternating optimization scheme is prone to trivial solutions, and controlling how the optimization procedures of the two objectives interact is crucial.
Re-assigning empty clusters and sampling batches uniformly over the cluster assignments are workarounds that avoid trivial parametrizations~\cite{caron2018deep}.
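The reassignment step and both workarounds can be sketched as follows; we use scikit-learn's $k$-means as a stand-in for the clustering of the actual implementation, and the empty-cluster splitting heuristic shown here is one possible choice, not necessarily the one of~\cite{caron2018deep}.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def assign_pseudo_labels(features, k, seed=0):
    # k-means on the convnet features (Eq. 3): one label per image.
    z = KMeans(n_clusters=k, n_init=3,
               random_state=seed).fit_predict(features)
    # Workaround 1: fill an empty cluster by splitting the largest one.
    rng = np.random.default_rng(seed)
    for c in range(k):
        if not np.any(z == c):
            biggest = np.bincount(z, minlength=k).argmax()
            members = np.flatnonzero(z == biggest)
            z[rng.choice(members, size=len(members) // 2,
                         replace=False)] = c
    return z

def uniform_cluster_batch(z, batch_size, seed=0):
    # Workaround 2: sample images uniformly over clusters so that
    # large clusters do not dominate the batches.
    rng = np.random.default_rng(seed)
    groups = [np.flatnonzero(z == c) for c in np.unique(z)]
    return np.array([rng.choice(groups[rng.integers(len(groups))])
                     for _ in range(batch_size)])
\end{verbatim}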
\section{Method}
\label{sec:method}
In this section, we describe how we combine self-supervised learning with deep clustering in order to scale up to large numbers of images and targets.
\subsection{Combining self-supervision and clustering}
We assume that the inputs $x_1,\dots,x_N$ are rotated images, each associated with a target label $y_n$ encoding its rotation angle and a cluster assignment~$z_n$.
The cluster assignment changes during training along with the visual representations.
We denote by $\mathcal{Y}$ the set of possible rotation angles and by $\mathcal{Z}$, the set of possible cluster assignments.
A way of combining self-supervision with deep clustering is to add the losses defined in Eq.~(\ref{eq:selfsup}) and Eq.~(\ref{eq:mstep}).
However, summing these losses implicitly assumes that classifying rotations and cluster memberships are two independent tasks, which may limit the signal that can be captured.
Instead, we work with the Cartesian product space $\mathcal{Y} \times \mathcal{Z}$, which can potentially capture richer interactions between the two tasks.
We get the following optimization problem:
\begin{equation}
\label{eq:naive}
\min_{\theta, W} \frac{1}{N} \sum_{n=1}^N \ell( y_n \otimes z_n, W f_\theta(x_n)).
\end{equation}
Note that any clustering or self-supervised approach with a multiclass objective can be combined with this formulation.
For example, we could use a self-supervision task that captures information about tile permutations~\cite{noroozi2016unsupervised} or frame ordering in a video~\cite{wang2015unsupervised}.
However, this formulation does not scale with the number of combined targets, i.e., its complexity is $O(|\mathcal{Y}||\mathcal{Z}|)$.
This limits the use of a large number of clusters or of a self-supervised task with a large output space~\cite{zhang2019aet}.
In particular, if we want to capture information contained in the tail of the distribution of a non-curated dataset, we may need a large number of clusters.
We thus propose an approximation of our formulation based on a scalable hierarchical loss that is designed to suit distributed training.
\subsection{Scaling up to large numbers of targets}
Hierarchical losses are commonly used in language modeling where the goal is to predict a word out of a large vocabulary~\cite{brown1992class}.
Instead of making one decision over the full vocabulary, these approaches split the process into a hierarchy of decisions, each with a smaller output space.
For example, the vocabulary can be split into clusters of semantically similar words, and the hierarchical process would first select a cluster and then a word within this cluster.
Following this line of work, we partition the target labels into a $2$-level hierarchy where we first predict a super-class and then a sub-class among its associated target labels.
The first level is a partition of the images into $S$ super-classes and we denote by $y_n$ the super-class assignment vector in $\{0,1\}^S$ of the image $n$ and by $y_{ns}$ the $s$-th entry of $y_n$.
This super-class assignment is made with a linear classifier~$V$ on top of the features.
The second level of the hierarchy is obtained by partitioning \emph{within each super-class}.
We denote by $z^s_n$ the vector in $\{0,1\}^{k_s}$ of the assignment into~$k_s$ sub-classes for an image $n$ belonging to super-class $s$.
There are $S$ sub-class classifiers $W_1,\dots,W_S$, each predicting the sub-class memberships within a super-class $s$.
The parameters of the linear classifiers~$(V, W_1, \dots, W_S)$ and $\theta$ are jointly learned by minimizing the following loss function:
\begin{equation}\label{eq:sup}
\hspace{-5pt}\frac{1}{N} \sum_{n=1}^N \left[\ell\big(V f_\theta(x_n), y_n\big) {+}\sum_{s=1}^S y_{ns} \ell\left(W_s f_\theta(x_n) , z^s_n\right)\right],
\end{equation}
where $\ell$ is the negative log-softmax function. Note that an image that does not belong to the super-class $s$ does not belong to any of its $k_s$ sub-classes either.
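For clarity, the following is a minimal NumPy sketch of the two-level loss in Eq.~(\ref{eq:sup}); the classifier matrices and features are stand-ins, and this is not the training code used in the paper.
\begin{verbatim}
import numpy as np

def log_softmax(scores):
    s = scores - scores.max(axis=-1, keepdims=True)
    return s - np.log(np.exp(s).sum(axis=-1, keepdims=True))

def hierarchical_loss(F, y, z, V, Ws):
    """F: (N, d) features; y: (N,) super-class ids; z: (N,) sub-class
    ids within each super-class; V: (S, d); Ws: list of (k_s, d)."""
    N = len(F)
    # first term: negative log-softmax of the super-class prediction
    loss = -log_softmax(F @ V.T)[np.arange(N), y].mean()
    # second term: each image only contributes to the sub-class loss
    # of its own super-class (the y_ns indicator)
    for s, W in enumerate(Ws):
        idx = np.where(y == s)[0]
        if len(idx):
            lp = log_softmax(F[idx] @ W.T)
            loss -= lp[np.arange(len(idx)), z[idx]].sum() / N
    return loss
\end{verbatim}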
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{pull-figure-3.pdf}
\caption{DeeperCluster alternates between a hierarchical clustering of the features and learning the parameters of a convnet by predicting both the rotation angle and the cluster assignments in a single hierarchical loss.}
\vspace{-1em}
\label{fig:method}
\end{figure}
\paragraph{Choice of super-classes.}
A natural partition would be to define the super-classes based on the target labels from the self-supervised task and the sub-classes as the labels produced by clustering.
However, this would mean that each image of the entire dataset would be present in each super-class (with a different rotation), which does not take advantage of the hierarchical structure to use a larger number of clusters.
Instead, we split the dataset into $m$ sets by running $k$-means with $m$ centroids on the full dataset every $T$ epochs.
We then use the Cartesian product between the assignment to these $m$ clusters and the rotation angle classes to form the super-classes.
There are $4m$ super-classes, each associated with the subset of data belonging to the corresponding cluster ($N/m$ images if the clustering is perfectly balanced).
These subsets are then further split with $k$-means into $k$ subclasses.
This is equivalent to running a hierarchical $k$-means with rotation constraints on the full dataset to form our hierarchical loss.
We typically use $m=4$ and $k=80$k, leading to a total of $320$k different clusters split into $4$ subsets.
Our approach, ``DeeperCluster\xspace'', shares similarities with DeepCluster but is designed to scale to larger datasets.
We alternate between clustering the features of the non-rotated images and training the network to predict both the rotation applied to the input data and its cluster assignment amongst the clusters corresponding to this rotation (Figure~\ref{fig:method}).
\paragraph{Distributed training.}
Building the super-classes based on data splits lends itself to a distributed implementation that scales well in the number of images.
Specifically, when optimizing Eq.~(\ref{eq:sup}), we form as many distributed communication groups of $p$ GPUs as the number of super-classes, i.e., $G=4m$.
Different communication groups share the parameters $\theta$ and the super-class classifier $V$, while the parameters of the sub-class classifiers $W_1,\dots,W_S$ are only shared within a communication group.
Each communication group $s$ deals only with the subset of images and the rotation angle associated with the super-class $s$.
\paragraph{Distributed $k$-means.}
Every $T$ epochs, we recompute the super and sub-class assignments by running two consecutive $k$-means on the entire dataset.
This is achieved by first randomly splitting the dataset across different GPUs.
Each GPU is in charge of computing cluster assignments for its partition, whereas
centroids are updated across GPUs.
We reduce communication between GPUs by sharing only the number of assigned elements for each cluster and the sum of their features.
The new centroids are then computed from these statistics.
We observe empirically that $k$-means converges in $10$ iterations.
We cluster $96$M features of dimension $4096$ into $m=4$ clusters using $64$ GPUs ($1$ minute per iteration).
Then, we split this pool of GPUs into $4$ groups of $16$ GPUs.
Each group clusters around $23$M features into $80$k clusters ($4$ minutes per iteration).
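A minimal sketch of one such distributed $k$-means step is given below: each worker computes only per-cluster counts and feature sums on its shard, and a reduction over workers yields the new centroids. The collective communication is abstracted as a plain Python sum, and all names are illustrative.
\begin{verbatim}
import numpy as np

def local_stats(features, C):
    """Assign a shard of features to centroids C (k x d) and return
    the per-cluster counts and feature sums to be reduced across GPUs."""
    d2 = ((features[:, :, None] - C.T[None, :, :]) ** 2).sum(axis=1)
    z = d2.argmin(axis=1)
    k, d = C.shape
    counts = np.bincount(z, minlength=k).astype(float)
    sums = np.zeros((k, d))
    np.add.at(sums, z, features)  # scatter-add features to clusters
    return counts, sums

def reduce_centroids(stats):
    """stats: list of (counts, sums), one per worker (the all-reduce)."""
    counts = sum(c for c, _ in stats)
    sums = sum(s for _, s in stats)
    return sums / np.maximum(counts, 1.0)[:, None]
\end{verbatim}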
\subsection{Implementation details}
The loss in Eq.~(\ref{eq:sup}) is minimized with mini-batch stochastic gradient descent~\cite{bottou2012stochastic}.
Each mini-batch contains $3072$ instances distributed across $64$ GPUs, leading to $48$ instances per GPU per mini-batch~\cite{goyal2017accurate}.
We use dropout, weight decay, momentum and a constant learning rate of $0.1$.
We reassign clusters every $3$ epochs.
We use the Pascal VOC $2007$ classification task without finetuning as a downstream task to select hyper-parameters.
In order to speed up experimentation, we initialize the network with RotNet trained on YFCC100M.
Before clustering, we perform a whitening of the activations and $\ell_2$-normalize each of them.
We use standard data augmentations, i.e., cropping at random sizes and aspect ratios and horizontal flips~\cite{krizhevsky2012imagenet}.
We use the VGG-$16$ architecture~\cite{simonyan2014very} with batch normalization layers.
Following~\cite{bojanowski2017unsupervised,caron2018deep,paulin2015local}, we pre-process images with a Sobel filtering.
We train our models on the $96$M images from YFCC100M~\cite{thomee2015yfcc100m} that we managed to download.
We use this publicly available dataset for research purposes only.
\input{template_sierra.tex}
\usepackage{geometry}
\geometry{
a0paper,
left=0mm,
right=0mm,
top=0mm,
}
\usepackage{tikzscale}
\usepackage{booktabs}
\usepackage{algorithmic}
\usepackage{float}
\usepackage{subcaption}
\renewcommand{\baselinestretch}{1.1}
\def\emph{et al}\onedot{\emph{et al}.}
\begin{document}
\sffamily
\postertitle{Leveraging Large-Scale Uncurated Data for Unsupervised Learning of Visual Features} {Mathilde Caron, Piotr Bojanowski, Armand Joulin and Julien Mairal} {Facebook AI Research, INRIA}
\vspace{-1em}
\setlength\columnsep{-25pt}
\begin{multicols}{3}
\large
\begin{center}
\block[orangeinria][1]{Overview}
{
\begin{itemize}
\begin{Large}
\item \textcolor{orangeinria}{\textbf{Goal}}\\
Learning general-purpose visual features with convnets on large-scale unsupervised and uncurated datasets.
\item \textcolor{orangeinria}{\textbf{Motivation}}\\
\begin{itemize}
\item bridge the performance gap between unsupervised methods trained on curated data, which are costly to obtain, and massive raw datasets that are easily available;
\item new unsupervised approach which leverages self-supervision and clustering to capture complementary statistics from large-scale data.
\end{itemize}
\item \textcolor{orangeinria}{\textbf{Method}}\\
Our approach iterates between:
\begin{itemize}
\item hierarchical clustering of the features;
\item updating convnet weights by predicting both rotation angle and cluster assignment in a single hierarchical loss.
\end{itemize}
\item \textcolor{orangeinria}{\textbf{Results}}\\
Features pre-trained on $95$M images from YFCC100M with state-of-the-art performance on standard evaluation benchmarks with VGG-$16$.
\end{Large}
\vspace{0.5em}
\end{itemize}
}
\block[orangeinria][1]{Illustration of our approach}
{\includegraphics[width=1.\linewidth]{figures/pull-figure-3.pdf}}
\block{Method}
{
\begin{itemize}
\item A large set of unlabelled images~$\{ x_1, \ldots, x_N \}$, $x_i$ in $\mathbb{R}^{3 \times 224 \times 224}$.
\item $f_\theta$ is the convnet mapping (with $\theta$ the set of corresponding parameters).
\item We partition the target labels into a $2$-level hierarchy:
\begin{enumerate}
\item \textbf{Super-classes}: $y_n$ the super-class assignment vector in $\{0,1\}^S$ of the image $n$;
\item \textbf{Sub-classes}: partitioning \emph{within each super-class}. $z^s_n$ is the vector in $\{0,1\}^{k_s}$ of the assignment into~$k_s$ sub-classes for an image $n$ belonging to super-class $s$.
\end{enumerate}
\item Parameters of linear classifiers~$(V, W_1, \dots, W_S)$ and $\theta$ are learned by minimizing:
\begin{Large}
\[
\frac{1}{N} \sum_{n=1}^N \left[\ell\big(V f_\theta(x_n), y_n\big) {+}\sum_{s=1}^S y_{ns} \ell\left(W_s f_\theta(x_n) , z^s_n\right)\right],
\]
\end{Large}
where $\ell$ is the negative log-softmax function.
\end{itemize}
} \par
\block{Transfer learning to Pascal VOC 2007}{
\begin{tabular}{@{}lc@{}c@{}c@{\hspace{0.2em}}c@{}c@{}c@{}c@{}}
& &\phantom{ee}& \multicolumn{2}{c}{Classif.} &\phantom{ee}& \multicolumn{2}{c}{Detect.} \\
\cmidrule{4-5} \cmidrule{7-8}
Method & Data && \textsc{fc6-8} & \textsc{all} && \textsc{fc6-8} & \textsc{all} \\
\midrule
ImageNet labels & ImageNet && $89.3$ & $86.9$ && $57.0$ & $67.3$ \\
\midrule
\multicolumn{6}{l}{\textit{Unsupervised on curated data}}
\vspace{0.3em} \\
Larsson~\emph{et al}\onedot~\cite{larsson2017colorization} & ImageNet + Places && -- & $77.2$ && $45.6$ & $59.7$ \\
Doersch~\emph{et al}\onedot~\cite{doersch2015unsupervised} & ImageNet && $54.6$ & $78.5$ && $38.0$ & $62.7$ \\
Caron~\emph{et al}\onedot~\cite{caron2018deep} & ImageNet && $78.5$ & $82.3$ && $\mathbf{57.1}$ & $65.9$ \\
\midrule
\multicolumn{6}{l}{\textit{Unsupervised on uncurated data}}
\vspace{0.3em} \\
Mahendran~\emph{et al}\onedot~\cite{mahendran2018cross} & YFCC100M videos && -- & $76.4$ && -- & -- \\
Wang and Gupta~\cite{wang2015unsupervised} & Youtube8M && -- & -- && -- & $60.2$ \\
Wang~\emph{et al}\onedot~\cite{wang2017transitive} & Youtube9M && $59.4$ & $79.6$ && $40.9$ & $63.2$ \\
\midrule
Our method & YFCC100M && $\mathbf{79.9}$ & $\mathbf{83.8}$ && $56.9$ & $\mathbf{67.5}$ \\
\bottomrule
\end{tabular}
}
\block{Comparing with methods on YFCC100M}{
\begin{itemize}
\item[] We train logistic regressions on top of frozen convolutional layers at different depths.
\end{itemize}
\includegraphics[width=0.48\linewidth]{figures/figure_linear.pdf}
\includegraphics[width=0.48\linewidth]{figures/figure_linear_places.pdf}
} \par
\block{Amounts of images and clusters}
{
\begin{itemize}
\item[] We report validation mAP on the Pascal VOC classification task (\textsc{fc6-8} setting).
\end{itemize}
\includegraphics[width=0.495\linewidth]{figures/figure_size_dataset.pdf}
\includegraphics[width=0.495\linewidth]{figures/figure_k.pdf}
}
\block{Clustering quality}
{
\begin{itemize}
\item[] We display $9$ random images for clusters that are pure for a certain metadata attribute. The bottom row depicts clusters that are pure for GPS coordinates but impure for user IDs.
\end{itemize}
{
\begin{tabular}{ccccccc}
{tag: \itshape cat} & {tag: \small \itshape elephantparadelondon} & {tag: \itshape always} & {device: \itshape CanoScan}
\\
\includegraphics[width=0.24\linewidth]{images/cluster21642.jpeg}&
\includegraphics[width=0.24\linewidth]{images/cluster28828.jpeg}&
\includegraphics[width=0.24\linewidth]{images/cluster3551.jpeg}&
\includegraphics[width=0.24\linewidth]{images/cluster5841.jpeg}
\\
{GPS: ($43$, $10$)} & {GPS: ($-34$, $-151$)} & {GPS: ($64$, $-20$)} & {GPS: ($43$, $-104$)}
\\
\includegraphics[width=0.24\linewidth]{images/cluster30986.jpeg}&
\includegraphics[width=0.24\linewidth]{images/cluster380.jpeg}&
\includegraphics[width=0.24\linewidth]{images/cluster22739.jpeg}&
\includegraphics[width=0.24\linewidth]{images/cluster16433.jpeg}
\end{tabular}
}
} \par
\block{References}
{
{\small
\bibliographystyle{ieee}
\section*{\textbf{\LARGE{Appendix}}}
\begin{table*}[h]
\centering
\setlength{\tabcolsep}{3.5pt}
\begin{tabular}{@{}l ccccccccccccc@{}}
\toprule
Method & \footnotesize \texttt{conv1} & \footnotesize \texttt{conv2} & \footnotesize \texttt{conv3} & \footnotesize \texttt{conv4} & \footnotesize \texttt{conv5} & \footnotesize \texttt{conv6} & \footnotesize \texttt{conv7} & \footnotesize \texttt{conv8} & \footnotesize \texttt{conv9} & \footnotesize \texttt{conv10} & \footnotesize \texttt{conv11} & \footnotesize \texttt{conv12} & \footnotesize \texttt{conv13} \\
\midrule
\textit{ImageNet} \\
\midrule
Supervised & $7.8 $ & $12.3$ & $15.6$ & $21.4$ & $24.4$ & $24.1$ & $33.4$ & $41.1$ & $44.7$ & $49.6$ & $61.2$ & $66.0$ & $70.2$ \\
RotNet & $10.9$ & $15.7$ & $17.2$ & $21.0$ & $27.0$ & $26.6$ & $26.7$ & $33.5$ & $35.2$ & $33.5$ & $39.6$ & $38.2$ & $33.0$ \\
DeeperCluster\xspace & $7.4$ & $9.6$ & $14.9$ & $16.8$ & $26.1$ & $29.2$ & $34.2$ & $41.6$ & $43.4$ & $45.5$ & $49.0$ & $49.2$ & $45.6$ \\
\midrule
\textit{Places205} \\
\midrule
Supervised & $10.5$ & $16.4$ & $20.7$ & $24.7$ & $30.3$ & $31.3$ & $35.0$ & $38.1$ & $39.5$ & $40.8$ & $45.4$ & $45.3$ & $45.9$ \\
RotNet & $13.9$ & $19.1$ & $22.5$ & $24.8$ & $29.9$ & $30.8$ & $32.5$ & $35.3$ & $36.0$ & $36.1$ & $38.8$ & $37.9$ & $35.5$ \\
DeeperCluster\xspace & $12.7$ & $14.8$ & $21.2$ & $23.3$ & $30.5$ & $32.6$ & $34.8$ & $39.5$ & $40.8$ & $41.6$ & $44.0$ & $44.0$ & $42.1$ \\
\bottomrule
\end{tabular}
\caption{
Accuracy of linear classifiers on ImageNet and Places205 using the activations from different layers as features.
We train a linear classifier on top of frozen convolutional layers at different depths.
We compare a VGG-16 trained with supervision on ImageNet to VGG-16s trained with either RotNet or our approach on YFCC100M.
}
\label{tab:layers}
\end{table*}
\setcounter{section}{0}
\section{Evaluating unsupervised features}
Here we provide numbers from Figure~$2$ in Table~\ref{tab:layers}.
\section{YFCC100M and Imagenet label distribution}
The YFCC100M dataset contains social media data from the Flickr website.
The content of this dataset is very unbalanced, with a ``long-tail'' distribution of hashtags contrasting with the well-behaved label distribution of ImageNet as can be seen in Figure~\ref{fig:tag_dist}.
For example, \textit{guenon} and \textit{baseball} correspond to labels with $1300$ associated images in ImageNet, while there are respectively $226$ and $256,758$ images associated with these hashtags in YFCC100M.
\section{Pre-training for ImageNet}
\begin{figure}[t]
\centering
\includegraphics[scale=0.3]{figure_tag_dist.pdf}
\caption{
Comparison of the hashtag distribution in YFCC100M with the label distribution in ImageNet.
}
\label{fig:tag_dist}
\end{figure}
\begin{table}[h]
\centering
\setlength{\tabcolsep}{3.5pt}
\begin{tabular}{@{}l cc @{}}
\toprule
& \footnotesize PyTorch doc & \footnotesize Our \\
& \footnotesize hyperparam & \footnotesize hyperparam \\
\midrule
\small Supervised (PyTorch documentation\footnote{\url{pytorch.org/docs/stable/torchvision/models}}) & $73.4$ & - \\
\small Supervised (our code) & $73.3$ & $74.1$ \\
\small Supervised + RotNet pre-training & $73.7$ & $74.5$ \\
\small Supervised + DeeperCluster\xspace pre-training & $74.3$ & $74.9$ \\
\bottomrule
\end{tabular}
\caption{
Top-$1$ accuracy on validation set of a VGG-16 trained on ImageNet with supervision with different initializations.
We compare a network initialized randomly to networks pre-trained with our unsupervised method or with RotNet on YFCC100M.
}
\label{tab:pretrain90}
\end{table}
In Table~\ref{tab:pretrain90}, we compare the performance of a network trained with supervision on ImageNet with a standard initialization (``Supervised'') to one pre-trained with DeeperCluster\xspace (``Supervised + DeeperCluster\xspace pre-training'') and to one pre-trained with RotNet (``Supervised + RotNet pre-training'').
The convnet is finetuned on ImageNet with supervision with mini-batch SGD, following the hyperparameters of the ImageNet classification example implementation from the PyTorch documentation\footnote{\url{github.com/pytorch/examples/blob/master/imagenet/main.py}}.
In particular, we train for $90$ epochs (instead of $100$ epochs in Table~$3$ of the main paper).
We use a learning rate of $0.1$, a weight decay of $0.0001$, a batch size of $256$ and dropout of $0.5$.
We reduce the learning rate by a factor of $0.1$ at epochs $30$ and $60$ (instead of decaying the learning rate with a factor $0.2$ every $20$ epochs in Table~$3$ of the main paper).
This setting is unfair towards the supervised-from-scratch baseline: since we start the optimization from a good initialization, we reach convergence earlier.
Indeed, we observe that the gap between our pre-training and the baseline shrinks from $1.0$ to $0.8$ when evaluating at convergence instead of before convergence.
In contrast, the gap between the RotNet pre-training and the baseline remains the same: $0.4$.
\section{Model analysis}
\subsection{Instance retrieval}
\begin{table}[h]
\centering
\begin{tabular}{@{}lc c c@{}}
\toprule
Method & Pretraining & Oxford$5$K & Paris$6$K \\
\midrule
ImageNet labels & ImageNet & $72.4$ & $81.5$ \\
Random & - & $\phantom{0}6.9$ & $22.0$ \\
\midrule
Doersch~\emph{et al}\onedot~\cite{doersch2015unsupervised} & ImageNet & $35.4$ & $53.1$ \\
Wang~\emph{et al}\onedot~\cite{wang2017transitive} & Youtube $9$M & $42.3$ & $58.0$ \\
\midrule
RotNet & ImageNet & $48.2$ & $61.1$\\
DeepCluster & ImageNet & $\mathbf{61.1}$ & $\mathbf{74.9}$\\
\midrule
RotNet & YFCC100M & $46.5$ & $59.2$ \\
DeepCluster & YFCC100M & $57.2$ & $74.6$ \\
\midrule
DeeperCluster\xspace & YFCC100M & $55.8$ & $73.4$ \\
\bottomrule
\end{tabular}
\caption{
mAP on instance-level image retrieval on Oxford and Paris dataset.
We apply R-MAC with a resolution of $1024$ pixels and $3$ grid levels~\cite{tolias2015particular}.
We separate the methods using unsupervised ImageNet from the methods using non-curated datasets.
DeepCluster does not scale to the full YFCC100M dataset; we thus train it on a random subset of $1.3$M images.
}
\label{tab:retrieval}
\end{table}
Instance retrieval consists of retrieving from a corpus the most similar images to a given query.
We follow the experimental setting of Tolias~\emph{et al}\onedot~\cite{tolias2015particular}:
we apply R-MAC with a resolution of $1024$ pixels and $3$ grid levels and we report mAP on instance-level image retrieval on Oxford Buildings~\cite{philbin2007object} and Paris~\cite{philbin2008lost} datasets.
As described by Dosovitskiy~\emph{et al}\onedot~\cite{dosovitskiy2016discriminative}, class-level supervision induces invariance to semantic categories.
This property may not be beneficial for other computer vision tasks such as instance-level recognition.
For that reason, descriptor matching and instance retrieval are tasks for which unsupervised feature learning might provide performance improvements.
Moreover, these tasks constitute evaluations that do not require any additional training step, allowing a straightforward comparison across different methods.
We evaluate our method and compare it to previous work following the experimental setup proposed by Caron~\emph{et al}\onedot~\cite{caron2018deep}.
We report results for the instance retrieval task in Table~\ref{tab:retrieval}.
We observe that features trained with RotNet have significantly worse performance than DeepCluster both on Oxford$5$K and Paris$6$K.
This performance discrepancy means that properties acquired by classifying large rotations are not relevant to instance retrieval.
An explanation is that all images in Oxford$5$k and Paris$6$k have the same orientation as they picture buildings and landmarks.
As our method is a combination of the two paradigms, it suffers an important performance loss on Oxford$5$K, but is not affected much on Paris$6$k.
These results emphasize the importance of having a diverse set of benchmarks to evaluate the quality of features produced by unsupervised learning methods.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\linewidth]{figure_color.pdf}
\caption{Sorted standard deviations of clusters to their mean colors.
If the standard deviation of a cluster to its mean color is low, the images of this cluster have a similar colorization.}
\label{fig:color_std}
\end{figure}
\subsection{Influence of data pre-processing}
In this section we experiment with our method on raw RGB inputs.
We provide some insights into the reasons why Sobel filtering is crucial to obtain good performance with our method.
First, in Figure~\ref{fig:color_std}, we randomly select a subset of $3000$ clusters and sort them by standard deviation to their mean color.
If the standard deviation of a cluster to its mean color is low, it means that the images of this cluster tend to have a similar colorization.
Moreover, we show in Figure~\ref{fig:color_cluster} some clusters with a low standard deviation to the mean color.
We observe in Figure~\ref{fig:color_std} that the clustering on features learned with our method focuses more on color than the clustering on RotNet features.
Indeed, clustering by color and low-level information produces balanced clusters that can easily be predicted by a convnet.
Clustering by color is a solution to our formulation.
However, as we want to avoid an uninformative clustering essentially based on colors, we remove some part of the input information by feeding the network with the image gradients instead of the raw RGB image (see Figure~\ref{fig:preprocess}).
This greatly improves the performance of our features when evaluated on downstream tasks, as can be seen in Table~\ref{tab:rgb}.
We observe that the Sobel filter slightly improves RotNet features as well.
\begin{figure}[ht]
\centering
\begin{tabular}{cccc}
\includegraphics[width=0.2\linewidth]{color_153.jpeg}&
\includegraphics[width=0.2\linewidth]{color_171.jpeg}&
\includegraphics[width=0.2\linewidth]{color_437.jpeg}&
\includegraphics[width=0.2\linewidth]{color_319.jpeg}
\\
\\
\includegraphics[width=0.1\linewidth]{color153.jpeg}&
\includegraphics[width=0.1\linewidth]{color171.jpeg}&
\includegraphics[width=0.1\linewidth]{color437.jpeg}&
\includegraphics[width=0.1\linewidth]{color319.jpeg}
\end{tabular}
\caption{We show clusters with a uniform colorization across their images.
For each cluster, we show the mean color of the cluster.}
\label{fig:color_cluster}
\end{figure}
\begin{figure}[t]
\centering
\begin{tabular}{c cc}
RGB & \multicolumn{2}{c}{Sobel}\\
\midrule
\includegraphics[width=0.27\linewidth]{img2-RGB.jpeg}&
\includegraphics[width=0.27\linewidth]{img2-sobx.jpeg}&
\includegraphics[width=0.27\linewidth]{img2-soby.jpeg}
\\
\midrule
\includegraphics[width=0.27\linewidth]{img3-RGB.jpeg}&
\includegraphics[width=0.27\linewidth]{img3-sobx.jpeg}&
\includegraphics[width=0.27\linewidth]{img3-soby.jpeg}
\end{tabular}
\caption{Visualization of two images preprocessed with Sobel filter.
Sobel gives a $2$-channel output which at each point contains the vertical and horizontal derivative approximations.
Photographer usernames of these two YFCC100M RGB images are respectively \textit{booledozer} and \textit{nathalie.cone}.
}
\label{fig:preprocess}
\end{figure}
\begin{table}[t]
\centering
\setlength{\tabcolsep}{3.5pt}
\begin{tabular}{@{}l c c c@{}}
\toprule
Method & Data & RGB & Sobel \\
\midrule
RotNet & YFCC 1M & $69.8$ & $70.4$ \\
\midrule
DeeperCluster\xspace & YFCC 20M & $71.6$ & $76.1$ \\
\bottomrule
\end{tabular}
\caption{
Influence of applying Sobel filter or using raw RGB input on the features quality.
We report validation mAP on the Pascal VOC classification task (\textsc{fc6-8} setting).
}
\label{tab:rgb}
\end{table}
\section{Hyperparameters}
In this section, we detail our different hyperparameter choices.
Images are rescaled to $3 \times 224 \times 224$.
Note that for each network we choose the best performing hyperparameters by evaluating on Pascal VOC $2007$ classification task without finetuning.
\begin{itemize}
\item \textbf{RotNet YFCC100M}: we train with a total batch-size of $512$, a learning rate of $0.05$, weight decay of $0.00001$ and dropout of $0.3$.
\item \textbf{RotNet ImageNet}: we train with a total batch-size of $512$, a learning rate of $0.05$, weight decay of $0.00001$ and dropout of $0.3$.
\item \textbf{DeepCluster YFCC100M 1.3M images}: we train with a total batch-size of $256$, a learning rate of $0.05$, weight decay of $0.00001$ and dropout of $0.5$.
A Sobel filter is used in the preprocessing step. We cluster the features (PCA-reduced to $256$ dimensions, whitened and normalized) with $k$-means into $10{,}000$ clusters every $2$ epochs of training.
\item \textbf{DeeperCluster YFCC100M}: we train with a total batch-size of $3072$, a learning rate of $0.1$, weight decay of $0.00001$ and dropout of $0.5$.
A Sobel filter is used in the preprocessing step.
We cluster the whitened and normalized features (of dimension $4096$) of the non-rotated images with hierarchical $k$-means into $320{,}000$ clusters ($4$ clusterings of $80{,}000$ clusters each) every $3$ epochs of training.
\item \textbf{DeeperCluster ImageNet}: we train with a total batch-size of $748$, a learning rate of $0.1$, weight decay of $0.00001$ and dropout of $0.5$.
A Sobel filter is used in the preprocessing step. We cluster the whitened and normalized features (of dimension $4096$) of the non-rotated images with $k$-means into $10{,}000$ clusters every $5$ epochs of training.
\end{itemize}
For all methods, we use stochastic gradient descent with a momentum of $0.9$.
We stop training as soon as performance on Pascal VOC $2007$ classification task saturates.
We use PyTorch version 1.0 for all our experiments.
\section{Usernames of cluster visualization images}
For copyright reasons, we give here the Flickr user names of the images from Figure~$5$.
For each cluster, the user names are listed from left to right and from top to bottom.
Photographers of images in cluster \textit{cat} are sun\_summer, savasavasava, windy\_sydney, ironsalchicha, Chiang Kai Yen, habigu, Crackers93, rikkis\_refuge and rabidgamer.
Photographers of images in cluster \textit{elephantparadelondon} are Karen Roe, asw909, Matt From London, jorgeleria, Loz Flowers, Loz Flowers, Deck Accessory, Maxwell Hamilton and Melinda 26 Cristiano.
Photographers of images in cluster \textit{always} are troutproject, elandru, vlauria, Raymond Yee, tsupo543, masatsu, robotson, edgoubert and troutproject.
Photographers of images in cluster \textit{CanoScan} are what-i-found, what-i-found, allthepreciousthings, carbonated, what-i-found, what-i-found, what-i-found, what-i-found and what-i-found.
Photographers of images in cluster \textit{GPS: (43, 10)} are bloke, garysoccer1, macpalm, M A T T E O 1 2 3, coder11, Johan.dk, chrissmallwood, markomni and xiquinhosilva.
Photographers of images in cluster \textit{GPS: (-34, -151)} are asamiToku, Scott R Frost, BeauGiles, MEADEN, chaitanyakuber, mathias Straumann, jeroenvanlieshout, jamespia and Bastard Sheep.
Photographers of images in cluster \textit{GPS: (64, -20)} are arrygj, Bsivad, Powys Walker, Maria Grazia Dal Pra27, Sterling College, roundedbygravity, johnmcga, MuddyRavine and El coleccionista de instantes.
Photographers of images in cluster \textit{GPS: (43, -104)} are dodds, eric.terry.kc, Lodahln, wmamurphy, purza7, jfhatesmustard, Marcel B., Silly America and Liralen Li.
\section{Introduction}\label{Intro}
Dimension reduction is a fundamental concept in science and engineering
for feature extraction and data visualization. Exploring the properties of low-dimensional structures in high-dimensional spaces attracts
broad attention. Popular dimension reduction methods include principal component analysis (PCA) \cite{wold1987principal,vms2016gpca}, non-negative matrix factorization (NMF) \cite{sra2006gnmf}, and t-distributed stochastic neighbor embedding (t-SNE) \cite{maaten2008tsne}.
A main procedure in dimension reduction is to build a linear or nonlinear mapping from a high-dimensional space to a low-dimensional one, which keeps important properties of the high-dimensional space, such as the distance between any two points \cite{pham2013fast}.
The random projection (RP) is a widely used method for dimension reduction. It is well-known that the Johnson-Lindenstrauss (JL) transformation \cite{johnson1984extensions,dasgupta2003elementary} can nearly preserve the distance between two points after a random projection $f$, which is typically called the isometry property. The isometry property can be used to achieve nearest neighbor search in high-dimensional datasets \cite{kleinberg1997nns,ailon2006approximatefastjl}. It can also be applied to compressed sensing \cite{baraniuk2008simplerip,krahmer2011newjlrip}, where a sparse signal can be reconstructed from a linear random projection \cite{crt2006robust}.
The JL lemma \cite{johnson1984extensions} tells us that there exists a nearly
isometry mapping $f$, which maps high-dimensional datasets into a lower dimensional space.
Typically, a choice for the mapping $f$ is the linear random projection
\begin{equation}\label{RP}
f(\mathbf{x}) = \frac{1}{\sqrt{M}} \mathbf{Rx},
\end{equation}
where $\mathbf{x}\in \mathbb{R}^N$, and $\mathbf{R} \in \mathbb{R}^{M \times N}$ is a matrix whose entries are drawn from the Gaussian distribution with mean zero and variance one, denoted by $\mathcal{N}(0,1)$. We call this the Gaussian random projection (Gaussian RP). The storage of the matrix $\mathbf{R}$ in \eqref{RP} is $O(MN)$, and the cost of computing $\mathbf{Rx}$ in \eqref{RP} is also $O(MN)$. However, for large $M$ and $N$, this construction is computationally infeasible. To alleviate the difficulty, the sparse random projection
method \cite{achlioptas2003database} and the very sparse random projection method \cite{li2006very} were proposed, where the random projection is constructed from a sparse random matrix. Thus the storage and the computational cost can be reduced.
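As a simple illustration, the Gaussian RP of \eqref{RP} can be realized in a few lines of NumPy; the snippet below is only a sketch of the definition, with an empirical check of the near-isometry.
\begin{verbatim}
import numpy as np

def gaussian_rp(x, M, seed=0):
    # R has i.i.d. N(0,1) entries; storage and matvec cost are O(MN)
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((M, x.size))
    return R @ x / np.sqrt(M)

x = np.random.default_rng(1).standard_normal(10000)
y = gaussian_rp(x, M=200)
print(np.linalg.norm(y) ** 2 / np.linalg.norm(x) ** 2)  # close to 1
\end{verbatim}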
To be specific, Achlioptas \cite{achlioptas2003database} replaced the dense matrix $\mathbf{R}$
by a sparse matrix whose entries follow
\begin{equation} \label{sparse_distribution}
\mathbf{R}_{ij}=\sqrt{s} \cdot
\begin{cases}
+1, & \text{with probability} \, \frac{1}{2s}, \\
0, & \text{with probability} \,1-\frac{1}{s}, \\
-1, & \text{with probability} \,\frac{1}{2s}.
\end{cases}
\end{equation}
This means that the matrix is sampled at a rate of $1/s$. Note that, if $s=1$,
the corresponding distribution is called the
Rademacher distribution. When $s=3$, the cost of computing $\mathbf{Rx}$ in \eqref{RP} reduces to a third of the original one but is still $O(MN)$.
When $s=\sqrt{N}\gg3$, Li et al. \cite{li2006very} called this case {the very sparse random projection} (Very Sparse RP), which significantly speeds up the computation with little loss in accuracy. It is clear that the storage of the very sparse random projection is $O(M\sqrt{N})$. However, sparse random projections can typically distort a sparse vector \cite{ailon2006approximatefastjl}. To achieve a low-distortion embedding, Ailon and Chazelle \cite{ailon2009fastjl,ailon2006approximatefastjl} proposed the Fast Johnson-Lindenstrauss Transform (FJLT), where the preconditioning of a sparse projection matrix with a randomized Fourier transform is employed.
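The sparse matrix of \eqref{sparse_distribution} is equally simple to sample; the sketch below covers both the $s=3$ case and the very sparse case $s=\sqrt{N}$, and is meant as an illustration rather than an optimized implementation (a sparse storage format would be used in practice).
\begin{verbatim}
import numpy as np

def sparse_rp_matrix(M, N, s, seed=0):
    # entries are +sqrt(s) and -sqrt(s) with probability 1/(2s) each,
    # and 0 with probability 1 - 1/s
    rng = np.random.default_rng(seed)
    return rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)], size=(M, N),
                      p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])

N = 10000
R_sparse = sparse_rp_matrix(200, N, s=3)
R_very_sparse = sparse_rp_matrix(200, N, s=np.sqrt(N))
\end{verbatim}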
To reduce randomness and storage requirements, Sun et al. \cite{sun2018tensor} proposed the following format:
$\mathbf{R}=(\mathbf{R}_1\odot\cdots\odot\mathbf{R}_d)^{\text{T}}$,
where $\odot$ represents the Khatri-Rao product, $\mathbf{R}_i \in \mathbb{R}^{n_i \times M}$, and $N=\prod_{i=1}^{d} n_i$. Each $\mathbf{R}_i$ is a random matrix whose entries are i.i.d. random variables drawn from $\mathcal{N}(0,1)$. This transformation is called {the Gaussian
tensor random projection} (Gaussian TRP) throughout this paper. It is clear that the storage of the Gaussian TRP is $O(M\sum_{i=1}^{d}n_i)$, which is less than that of the Gaussian random projection (Gaussian RP). For example, when $N=n_1n_2=40000$, the storage of Gaussian TRP is only $1/20$ of that of Gaussian RP. Also, it has been shown that Gaussian TRP satisfies the properties of expected isometry with vanishing variance \cite{sun2018tensor}.
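The following is a minimal sketch of the Gaussian TRP, assuming NumPy and $d=2$: only the small factors $\mathbf{R}_1$ and $\mathbf{R}_2$ need to be stored, and the full matrix is formed here only for clarity.
\begin{verbatim}
import numpy as np

def khatri_rao(A, B):
    # column-wise Kronecker product: (n1*n2, M) from (n1, M) and (n2, M)
    n1, M = A.shape
    n2, _ = B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(n1 * n2, M)

rng = np.random.default_rng(0)
n1, n2, M = 200, 200, 50              # N = n1 * n2 = 40000
R1 = rng.standard_normal((n1, M))     # only (n1 + n2) * M numbers stored
R2 = rng.standard_normal((n2, M))
R = khatri_rao(R1, R2).T              # (M, N); never formed in practice
\end{verbatim}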
Recently, using matrix or tensor decomposition to reduce the storage of projection matrices is proposed in \cite{jinfaster,malik2020guarantees}. The main idea of these methods is to split the projection matrix into some small scale matrices or tensors.
In this work, we focus on the low rank tensor train representation to construct the random projection $f$. Tensor decompositions are
widely used for data compression \cite{Kolda2009Tensor,Acar2010Scalable,austin2016paralleltd,pham2013fast,ahle2020oblivious,tang2020rank,cui2021deep}.
The tensor train (TT) decomposition gives the following benefits---low rank TT-formats can provide compact representations of projection
matrices and efficient basic linear algebra operations such as matrix-by-vector products \cite{oseledets2011tensor}.
Based on these benefits, we propose a novel tensor train random projection (TTRP) method,
which requires significantly smaller storage and computational costs compared with existing methods (e.g., Gaussian TRP \cite{sun2018tensor}, Very Sparse RP \cite{li2006very} and Gaussian RP \cite{achlioptas2001database}).
While constructing projection matrices using tensor train (TT) and Canonical polyadic (CP) decompositions based on Gaussian random variables is proposed in \cite{rakhshan2020tensorized}, the main contributions of our work are threefold: first, our TTRP is conducted based on a rank-one TT-format, which significantly reduces the storage of projection matrices; second, we provide a novel construction procedure
for the rank-one TT-format in our TTRP based on i.i.d.\ Rademacher random variables;
third, we prove that our construction of TTRP is unbiased with bounded variance.
The rest of the paper is organized as follows. The tensor train format is introduced in section \ref{Talg}. Details of our TTRP approach are introduced in section \ref{TTRP}, where we prove that the approach is an expected isometric projection with bounded variance. In section \ref{Experm}, we demonstrate the efficiency of TTRP with datasets including synthetic data and MNIST. Finally, section \ref{Conclu} concludes the paper.
\section{Tensor train format}\label{Talg}
Let lowercase letters $(x)$, boldface lowercase letters ($\mathbf{x}$), boldface capital letters ($\mathbf{X}$), calligraphy letters $(\mathcal{X})$ be scalar, vector, matrix and tensor variables, respectively.
$\mathbf{x}(i)$ represents the element $i$ of a vector $\mathbf{x}$.
$\mathbf{X}(i,j)$ means the element $(i,j)$ of a matrix $\mathbf{X}$. The $i$-th row and $j$-th column of a matrix $\mathbf{X}$ is defined by $\mathbf{X}(i,:)$ and $\mathbf{X}(:,j)$, respectively. For a given $d$-th order tensor $\mathcal{X}$, $\mathcal{X}({i_1, i_2, \ldots, i_d})$ is its $({i_1, i_2, \ldots, i_d})$-th component. For a vector $\mathbf{x}\in \mathbb{R}^N$, we denote its $\ell^{p}$ norm as ${\Arrowvert\mathbf{x}\Arrowvert}_p=(\sum_{i=1}^{N}{\lvert\mathbf{x}(i)\rvert}^p)^{\frac{1}{p}}$, for any $p\geq 1$.
The Kronecker product of matrices $\mathbf{A}\in \mathbb{R}^{I \times J}$ and $\mathbf{B}\in \mathbb{R}^{K \times L}$ is denoted by $\mathbf{A}\otimes\mathbf{B}$ of which the result is a matrix of size $(IK)\times (JL)$ and defined by
\begin{equation*}
\mathbf{A}\otimes\mathbf{B}=\left[
\begin{array}{cccc}
\mathbf{A}(1,1)\mathbf{B} & \mathbf{A}(1,2)\mathbf{B} & \cdots &\mathbf{A}(1,J)\mathbf{B}\\
\mathbf{A}(2,1)\mathbf{B}& \mathbf{A}(2,2)\mathbf{B} & \cdots&\mathbf{A}(2,J)\mathbf{B}\\
\vdots & \vdots& \ddots & \vdots\\
\mathbf{A}(I,1)\mathbf{B}&\mathbf{A}(I,2)\mathbf{B}&\cdots&\mathbf{A}(I,J)\mathbf{B}
\end{array}
\right].
\end{equation*}
The Kronecker product conforms the following laws \cite{van2000ubiquitous}:
\begin{equation}\label{multiply}
(\mathbf{A}\mathbf{C})\otimes(\mathbf{B}\mathbf{D})=(\mathbf{A}\otimes\mathbf{B})(\mathbf{C}\otimes\mathbf{D}),
\end{equation}
\begin{equation}\label{distributive}
(\mathbf{A}+\mathbf{B})\otimes(\mathbf{C}+\mathbf{D})=\mathbf{A}\otimes\mathbf{C}+ \mathbf{A}\otimes\mathbf{D}+\mathbf{B}\otimes\mathbf{C}+\mathbf{B}\otimes\mathbf{D},
\end{equation}
\begin{equation}
\left(k\mathbf{A}\right)\otimes \mathbf{B}=\mathbf{A}\otimes \left(k \mathbf{B}\right)=k\left(\mathbf{A}\otimes\mathbf{B}\right).
\end{equation}
\subsection{Tensor train decomposition}
Tensor Train (TT) decomposition \cite{oseledets2011tensor} is a generalization of SVD decomposition from matrices to tensors. TT decomposition provides a compact representation for tensors, and allows for efficient application of linear algebra operations (discussed in section \ref{matrix-vector} and section \ref{basic}).
Given a $d$-th order tensor $\mathcal{G} \in \mathbb{R}^{n_1 \times \cdots \times n_d}$, the tensor train decomposition \cite{oseledets2011tensor} is
\begin{equation}\label{TTformat_ele}
\mathcal{G}({i_1, i_2, \ldots, i_d}) = \mathcal{G}_1(i_1) \mathcal{G}_2(i_2) \cdots \mathcal{G}_d(i_d),
\end{equation}
where
$\mathcal{G}_k \in \mathbb{R}^{r_{k-1}\times n_k \times r_k}$ are called TT-cores, $\mathcal{G}_k(i_k) \in \mathbb{R}^{r_{k-1} \times r_k}$ is a slice of $\mathcal{G}_k$, for $k=1,2,\ldots,d$, $i_k = 1, \ldots, n_k$, and the ``boundary condition'' is $r_0 = r_d = 1$. The tensor $\mathcal{G}$ is said to be in the TT-format if each element of $\mathcal{G}$ can be represented by \eqref{TTformat_ele}.
The vector $[r_0,r_1,r_2, \ldots, r_d]$ is referred to as TT-ranks. Let $\mathcal{G}_k(\alpha_{k-1}, i_k, \alpha_{k})$ represent the element of $\mathcal{G}_k(i_k)$ in the position $(\alpha_{k-1}, \alpha_k)$. In the index form, the decomposition (\ref{TTformat_ele}) is rewritten as the following TT-format
\begin{equation}\label{TTformat_core}
\mathcal{G}(i_1,i_2,\ldots,i_d) = \sum\limits_{\alpha_0,\cdots,\alpha_d}{\mathcal{G}_1(\alpha_0,i_1,\alpha_1)\mathcal{G}_2(\alpha_1,i_2,\alpha_2)\cdots \mathcal{G}_d(\alpha_{d-1},i_d,\alpha_d)}.
\end{equation}
Looking more closely at \eqref{TTformat_ele}, an element $\mathcal{G}(i_1, i_2, \ldots, i_d)$ is represented by a sequence of matrix-by-vector products.
Figure \ref{ttformat_fig} illustrates the tensor train decomposition. It can be seen that the key ingredient in the tensor train (TT) decomposition is the TT-ranks. The TT-format uses only $O(ndr^2)$ memory to represent $O(n^d)$ elements, where $n = \text{max} \ \{n_1, \ldots, n_d\}$ and $r = \text{max}\ \{r_0, r_1, \ldots, r_d\}$. Although the storage reduction is efficient only if the TT-ranks are small, tensors in data science and machine learning typically have low TT-ranks. Moreover, one can apply the TT-format to basic linear algebra operations, such as matrix-by-vector products, scalar multiplications, etc. This can reduce the computational cost significantly when the data have low rank structures (see \cite{oseledets2011tensor} for details).
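As a small illustration of \eqref{TTformat_ele}, extracting one element of a tensor in the TT-format amounts to a sequence of matrix-by-vector products over the core slices; the sketch below assumes NumPy and illustrative core shapes.
\begin{verbatim}
import numpy as np

def tt_element(cores, index):
    """cores[k] has shape (r_{k-1}, n_k, r_k); index = (i_1, ..., i_d).
    Returns G(i_1, ..., i_d) as the product G_1(i_1) ... G_d(i_d)."""
    v = np.ones((1, 1))
    for G, i in zip(cores, index):
        v = v @ G[:, i, :]            # multiply by the slice G_k(i_k)
    return v.item()                   # r_0 = r_d = 1, so this is a scalar

# a random 3rd-order TT tensor with ranks [1, 2, 3, 1]
rng = np.random.default_rng(0)
cores = [rng.standard_normal(s) for s in [(1, 4, 2), (2, 5, 3), (3, 6, 1)]]
print(tt_element(cores, (0, 1, 2)))
\end{verbatim}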
\begin{figure}
\centering
\includegraphics[width=.9\textwidth]{TT-format.png}
\caption{Tensor train format (TT-format): extract an element $\mathcal{G}({i_1, i_2, \ldots, i_d})$ via a sequence of matrix-by-vector products.}
\label{ttformat_fig}
\end{figure}
\subsection{Tensorizing matrix-by-vector products}\label{matrix-vector}
The tensor train format gives a compact representation of matrices and efficient computation for matrix-by-vector products.
We first review the TT-format of large matrices and vectors following \cite{oseledets2011tensor}. Defining two bijections $\nu: \mathbb{N} \mapsto \mathbb{N}^{d}$ and $\mu: \mathbb{N} \mapsto \mathbb{N}^{d}$, a pair of indices $(i, j) \in \mathbb{N}^{2}$ is mapped to a multi-index pair $(\nu (i), \mu (j)) = (i_1, i_2, \ldots, i_d, j_1, j_2, \ldots, j_d)$.
Then a matrix $\mathbf{R} \in \mathbb{R}^{M \times N}$ and a vector $\mathbf{x} \in \mathbb{R}^{N}$ can be
tensorized in the TT-format as follows.
Letting $M = \prod_{k=1}^d {m_k}$ and $N = \prod_{k=1}^d {n_k}$, an element $(i,j)$ of $\mathbf{R}$ can be written as
(see \cite{novikov2015tensorizing,oseledets2011tensor})
\begin{equation} \label{eq_tensorizeR}
\mathbf{R}(i,j) = \mathcal{R}(\nu (i), \mu (j)) = \mathcal{R}(i_1,\dots,i_d,j_1,\dots, j_d)= \mathcal{R}_1(i_1,j_1)\cdots\mathcal{R}_d(i_d,j_d),
\end{equation}
and an element $j$ of $\mathbf{x}$ can be written as
\begin{equation} \label{eq_tensorizeX}
\mathbf{x}(j) = \mathcal{X}(\mu (j)) = \mathcal{X}(j_1,\dots,j_d)=\mathcal{X}_1(j_1)\cdots\mathcal{X}_d(j_d),
\end{equation}
where $\mathcal{R}_k(i_k,j_k) \in \mathbb{R}^{r_{k-1} \times r_k},\,\mathcal{X}_k(j_k)\in \mathbb{R}^{\hat{r}_{k-1} \times \hat{r}_k},\,r_0=\hat{r}_0=r_d=\hat{r}_d=1$, for $k=1,\dots,d$, $(i_1,\dots i_d)$ enumerate the rows of $\mathbf{R}$,
and $(j_1,\dots, j_d)$ enumerate the columns of $\mathbf{R}$. We consider the matrix-by-vector product ($\mathbf{y}=\mathbf{R}\mathbf{x}$), and each element of $\mathbf{y}$ can be tensorized in the TT-format as
\begin{equation}\label{ycore}
\begin{aligned}
\mathbf{y}(i)=\mathcal{Y}(i_1,\dots,i_d)=&\sum_{j_1,\dots,j_d} \mathcal{R}(i_1,\dots,i_d,j_1,\dots, j_d)\mathcal{X}(j_1,\dots,j_d)\\
=&\sum_{j_1,\dots,j_d}\Big({\mathcal{R}_1(i_1,j_1)}\cdots\mathcal{R}_d(i_d,j_d)\Big)\Big(\mathcal{X}_1(j_1)\cdots\mathcal{X}_d(j_d)\Big)\\
=&\sum_{j_1,\dots,j_d}\underbrace{\Big(\mathcal{R}_1(i_1,j_1)\otimes \mathcal{X}_1(j_1)\Big)}_{O(r_0r_1\hat{r}_0\hat{r}_1)}\cdots\underbrace{\Big(\mathcal{R}_d(i_d,j_d)\otimes \mathcal{X}_d(j_d)\Big)}_{O(r_{d-1}r_d\hat{r}_{d-1}\hat{r}_d)}\\
=&\underbrace{\mathcal{Y}_1(i_1)}_{O(n_1r_0r_1\hat{r}_0\hat{r}_1)}\cdots\underbrace{\mathcal{Y}_d(i_d)}_{O(n_dr_{d-1}r_d\hat{r}_{d-1}\hat{r}_d)},
\end{aligned}
\end{equation}
where $\mathcal{Y}_k(i_k)=\sum_{j_k}\mathcal{R}_k(i_k,j_k)\otimes \mathcal{X}_k(j_k)\in \mathbb{R}^{r_{k-1}\hat{r}_{k-1}\times r_k\hat{r}_k}$, for $k=1,\dots,d$. The complexity of computing each TT-core $\mathcal{Y}_k \in \mathbb{R}^{r_{k-1}\hat{r}_{k-1}\times m_k\times r_k \hat{r}_k}$, is $O(m_kn_k r_{k-1}r_k \hat{r}_{k-1}\hat{r}_k)$ for $k=1,\dots,d$.
Assuming that the TT-cores of $\mathbf{x}$ are known, the total cost of the matrix-by-vector product ($\mathbf{y}=\mathbf{R}\mathbf{x}$) in the TT-format reduces significantly from the original complexity $O(MN)$ to $O(dmnr^2\hat{r}^2)$, where $m=\max\{m_1,m_2,\dots,m_d\}$, $n=\max\{n_1,n_2,\dots,n_d\}$, $r=\text{max}\ \{r_0, r_1, \ldots, r_d\}$, and $\hat{r}=\text{max}\ \{\hat{r}_0, \hat{r}_1, \ldots, \hat{r}_d\}$; here $N$ is typically large and $r$ is small. When $m_k=n_k$ and $r_k=\hat{r}_k$, for $k=1,\dots,d$, the cost of such a matrix-by-vector product in the TT-format is $O(dn^2r^4)$ \cite{oseledets2011tensor}. Note that, in the case that $r$ equals one, the cost of such a matrix-by-vector product in the TT-format is $O(dmn\hat{r}^2)$.
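The construction of the cores of $\mathbf{y}$ in \eqref{ycore} can be sketched as follows, assuming NumPy; the double loop is written for readability rather than speed.
\begin{verbatim}
import numpy as np

def tt_matvec_cores(R_cores, X_cores):
    """R_cores[k]: (r_{k-1}, m_k, n_k, r_k); X_cores[k]: (s_{k-1}, n_k, s_k).
    Returns the TT-cores of y = Rx, with
    Y_k(i_k) = sum_{j_k} R_k(i_k, j_k) kron X_k(j_k)."""
    Y_cores = []
    for R, X in zip(R_cores, X_cores):
        r0, m, n, r1 = R.shape
        s0, _, s1 = X.shape
        Y = np.zeros((r0 * s0, m, r1 * s1))
        for i in range(m):
            for j in range(n):
                Y[:, i, :] += np.kron(R[:, i, j, :], X[:, j, :])
        Y_cores.append(Y)
    return Y_cores
\end{verbatim}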
\subsection{Basic Operations in the TT-format}\label{basic}
In section \ref{matrix-vector}, the product of matrix $\mathbf{R}$ and vector $\mathbf{x}$ which are both in the TT-format, is conducted efficiently.
In the TT-format, many important operations can be readily implemented. For instance, computing the Euclidean distance between two vectors in the TT-format is more efficient with less storage than directly computing the Euclidean distance in standard matrix and vector formats. In the following, some important operations in the TT-format are discussed.
The subtraction of tensor $\mathcal{Y}\in\mathbb{R}^{m_1\times\cdots \times m_d }$ and tensor $\hat{\mathcal{Y}}\in\mathbb{R}^{m_1\times\cdots \times m_d }$ in the TT-format is
\begin{equation} \label{add}
\begin{aligned}
\mathcal{Z}(i_1,\dots,i_d)
&:=\mathcal{Y}(i_1,\dots,i_d)-\hat{\mathcal{Y}}(i_1,\dots,i_d)\\
&=\mathcal{Y}_1(i_1)\mathcal{Y}_2(i_2)\cdots\mathcal{Y}_d(i_d)-\hat{\mathcal{Y}}_1(i_1)\hat{\mathcal{Y}}_2(i_2)\cdots\hat{\mathcal{Y}}_d(i_d)\\
&=\mathcal{Z}_1(i_1)\mathcal{Z}_2(i_2)\cdots\mathcal{Z}_d(i_d),
\end{aligned}
\end{equation}
where
\begin{equation}
\mathcal{Z}_{k}\left(i_{k}\right)=\left(\begin{array}{cc}
\mathcal{Y}_{k}\left(i_{k}\right) & 0 \\
0 & \hat{\mathcal{Y}}_{k}\left(i_{k}\right)
\end{array}\right), \quad k=2, \ldots, d-1,
\end{equation}
and
\begin{equation}
\mathcal{Z}_{1}\left(i_{1}\right)=\left(\begin{array}{cc}
\mathcal{Y}_{1}\left(i_{1}\right) & -\hat{\mathcal{Y}}_{1}\left(i_{1}\right)
\end{array}\right), \quad \mathcal{Z}_{d}\left(i_{d}\right)=\left(\begin{array}{c}
\mathcal{Y}_{d}\left(i_{d}\right) \\
\hat{\mathcal{Y}}_{d}\left(i_{d}\right)
\end{array}\right),
\end{equation}
and TT-ranks of $\mathcal{Z}$ equal the sum of TT-ranks of $\mathcal{Y}$ and $\hat{\mathcal{Y}}$.
The dot product of
tensor $\mathcal{Y}$ and tensor $\hat{\mathcal{Y}}$ in the TT-format \cite{oseledets2011tensor} is
\begin{equation}\label{dot1}
\begin{aligned}
\langle\mathcal{Y}, \hat{\mathcal{Y}}\rangle &:=\sum_{i_{1}, \ldots, i_{d}} \mathcal{Y}\left(i_{1}, \ldots, i_{d}\right) \hat{\mathcal{Y}}\left(i_{1}, \ldots, i_{d}\right)\\
&=\sum_{i_{1}, \ldots, i_{d}}\Big(\mathcal{Y}_1(i_1)\mathcal{Y}_2(i_2)\cdots\mathcal{Y}_d(i_d)\Big)\Big(\hat{\mathcal{Y}}_1(i_1)\hat{\mathcal{Y}}_2(i_2)\cdots\hat{\mathcal{Y}}_d(i_d)\Big)\\
&=\sum_{i_{1}, \ldots, i_{d}}\Big(\mathcal{Y}_1\left(i_1\right)\mathcal{Y}_2\left(i_2\right)\cdots\mathcal{Y}_d(i_d)\Big)\otimes\Big(\hat{\mathcal{Y}}_1(i_1)\hat{\mathcal{Y}}_2(i_2)\cdots\hat{\mathcal{Y}}_d(i_d)\Big)\\
&=\left(\sum_{i_{1}}\mathcal{Y}_{1}\left(i_{1}\right) \otimes \hat{\mathcal{Y}}_{1}\left(i_{1}\right)\right)\left(\sum_{i_{2}}\mathcal{Y}_{2}\left(i_{2}\right) \otimes \hat{\mathcal{Y}}_{2}\left(i_{2}\right)\right) \ldots\left(\sum_{i_{d}}\mathcal{Y}_{d}\left(i_{d}\right) \otimes \hat{\mathcal{Y}}_{d}\left(i_{d}\right)\right)\\
&=\mathbf{V}_1\mathbf{V}_2\cdots\mathbf{V}_d,
\end{aligned}
\end{equation}
where \begin{equation}\label{dot2}
\mathbf{V}_k=\sum_{i_{k}}\mathcal{Y}_{k}\left(i_{k}\right) \otimes \hat{\mathcal{Y}}_{k}\left(i_{k}\right), \quad k=1, \ldots, d.
\end{equation}
Since $\mathbf{V}_1,\mathbf{V}_d$ are vectors and $\mathbf{V}_2,\dots,\mathbf{V}_{d-1}$ are matrices, we compute $\langle\mathcal{Y}, \hat{\mathcal{Y}}\rangle$ by a sequence of matrix-by-vector products:
\begin{align}
\mathbf{v_{1}}&=\mathbf{V}_{1}, \label{v1}\\
\mathbf{v_{k}}=\mathbf{v_{k-1}} \mathbf{V_k}=\mathbf{v_{k-1}} \sum_{i_{k}} \mathcal{Y}_{k}\left(i_{k}\right) \otimes \hat{\mathcal{Y}}_{k}\left(i_{k}\right)&=\sum_{i_{k}} \mathbf{p_{k}}\left(i_{k}\right), \quad k=2, \ldots, d, \label{second}
\end{align}
where
\begin{align}\label{first}
\mathbf{p_{k}}\left(i_{k}\right)=\mathbf{v_{k-1}}\left(\mathcal{Y}_{k}\left(i_{k}\right) \otimes \hat{\mathcal{Y}}_{k}\left(i_{k}\right)\right),
\end{align}
and we obtain
\begin{equation}
\langle\mathcal{Y}, \hat{\mathcal{Y}}\rangle=\mathbf{v_d}.
\end{equation}
For simplicity, we assume that the TT-ranks of $\mathcal{Y}$ are the same as those of $\hat{\mathcal{Y}}$.
In \eqref{first}, let $\mathbf{B}:=\mathcal{Y}_k(i_k)\in \mathbb{R}^{r\times r},\,\mathbf{C}:=\hat{\mathcal{Y}}_k(i_k)\in \mathbb{R}^{r\times r},\,\mathbf{x}:=\mathbf{v_{k-1}}\in \mathbb{R}^{1\times r^2},\,\mathbf{y}:= \mathbf{p_{k}\left(i_{k}\right)}\in \mathbb{R}^{1\times r^2}$, for $k=2,\dots,d-1$, and we use the reshaping Kronecker product expressions \cite{golub2013matrix} for \eqref{first}:
$$\mathbf{y}=\mathbf{x}(\mathbf{B}\otimes\mathbf{C})\quad \Longleftrightarrow \quad \mathbf{Y}=\mathbf{C}^{T}\mathbf{X}\mathbf{B},$$
where we reshape $\mathbf{x},\,\mathbf{y}$ into $\mathbf{X}=\left[\begin{array}{llll}\mathbf{x}_{1} & \mathbf{x}_{2} & \cdots & \mathbf{x}_{r}\end{array}\right] \in \mathbb{R}^{r \times r}$ and $\mathbf{Y}=\left[\begin{array}{llll}\mathbf{y}_{1} & \mathbf{y}_{2} & \cdots & \mathbf{y}_{r}\end{array}\right] \in \mathbb{R}^{r \times r}$, respectively. Note that the cost of computing $\mathbf{Y}=\mathbf{C}^{T}\mathbf{X}\mathbf{B}$ is $O(r^3)$, while disregarding the Kronecker structure of $\mathbf{y}=\mathbf{x}(\mathbf{B}\otimes\mathbf{C})$ leads to an $O(r^4)$ calculation. Hence the complexity of computing $\mathbf{p_{k}\left(i_{k}\right)}$ in \eqref{first} is $O(r^3)$, because of the efficient Kronecker product computation. Then the cost of computing $\mathbf{v_k}$ in \eqref{second} is $O(mr^3)$, and the total cost of the dot product $\langle\mathcal{Y}, \hat{\mathcal{Y}}\rangle$ is $O(dmr^3)$.
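The identity above can be checked numerically; the following snippet is a small sanity check, with column-major reshapes matching the column-stacking vectorization convention.
\begin{verbatim}
import numpy as np

# check y = x (B kron C)  <=>  Y = C^T X B, which costs O(r^3)
# instead of O(r^4)
r = 4
rng = np.random.default_rng(0)
B, C = rng.standard_normal((r, r)), rng.standard_normal((r, r))
x = rng.standard_normal(r * r)

y_naive = x @ np.kron(B, C)            # O(r^4)
X = x.reshape(r, r, order="F")         # stack x column-wise
Y = C.T @ X @ B                        # O(r^3)
assert np.allclose(y_naive, Y.reshape(-1, order="F"))
\end{verbatim}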
The Frobenius norm of a tensor $\mathcal{Y}$ is defined by
$$
\norm{\mathcal{Y}}{F}=\sqrt{\langle\mathcal{Y}, \mathcal{Y}\rangle}.
$$
Computing the distance between tensor $\mathcal{Y}$ and tensor $\hat{\mathcal{Y}}$ in the TT-format is computationally efficient by applying the dot product \eqref{dot1}--\eqref{dot2},
\begin{equation} \label{dis}
\norm{\mathcal{Y}-\hat{\mathcal{Y}}}{F}=\sqrt{\langle \mathcal{Y}-\hat{\mathcal{Y}}, \mathcal{Y}-\hat{\mathcal{Y}}\rangle}.
\end{equation}
The complexity of computing the distance is also $O(dmr^3)$. Algorithm \ref{dot} gives more details about computing \eqref{dis} based on the Frobenius norm $\norm{\mathcal{Y}-\hat{\mathcal{Y}}}{F}$.
\begin{algorithm}[H]
\caption{Distance based on Frobenius Norm $W:=\norm{\mathcal{Y}-\hat{\mathcal{Y}}}{F}=\sqrt{\langle \mathcal{Y}-\hat{\mathcal{Y}}, \mathcal{Y}-\hat{\mathcal{Y}}\rangle}$}
\label{dot}
\begin{algorithmic}[1]
\Require TT-cores $\mathcal{Y}_k$ of tensor $\mathcal{Y}$ and TT-cores $\hat{\mathcal{Y}}_k$ of tensor $\hat{\mathcal{Y}}$, for $k=1,\dots,d$.
\State Compute $\mathcal{Z}:=\mathcal{Y}-\hat{\mathcal{Y}}.$ $\qquad \qquad \qquad \qquad \qquad \qquad \qquad \triangleright \ O(mr)$ by \eqref{add}
\State Compute $\mathbf{v_1}:=\sum_{i_{1}}\mathcal{Z}_{1}\left(i_{1}\right) \otimes \mathcal{Z}_{1}\left(i_{1}\right)$.$\qquad \qquad \qquad \qquad \quad \triangleright \ O(mr^2)$ by \eqref{v1}
\For {$k = 2:d-1$}
\State Compute $\mathbf{p_{k}}\left(i_{k}\right)=\mathbf{v_{k-1}}\Big(\mathcal{Z}_{k}(i_{k}) \otimes \mathcal{Z}_{k}(i_{k})\Big)$. $\qquad \qquad \quad \triangleright \ O(r^3)$ by \eqref{first}
\State Compute $\mathbf{v_{k}}:=\sum_{i_{k}} \mathbf{p_{k}}\left(i_{k}\right)$. $\qquad \qquad \qquad \qquad \qquad \qquad \triangleright \ O(mr^3)$ by \eqref{second}
\EndFor
\State Compute $\mathbf{p_{d}}\left(i_{d}\right)=\mathbf{v_{d-1}}\Big(\mathcal{Z}_{d}(i_{d}) \otimes \mathcal{Z}_{d}\left(i_{d}\right)\Big)$. $\qquad \qquad \qquad \triangleright \ O(r^2)$ by \eqref{first}
\State Compute $\mathbf{v_{d}}:=\sum_{i_{d}} \mathbf{p_{d}}(i_{d})$. $\qquad \qquad \qquad \qquad \qquad \qquad \quad \triangleright \ O(mr^2)$ by \eqref{second}
\Ensure Distance $W:=\sqrt{\langle \mathcal{Y}-\hat{\mathcal{Y}}, \mathcal{Y}-\hat{\mathcal{Y}}\rangle}=\sqrt{\mathbf{v_d}}$.
\end{algorithmic}
\end{algorithm}
In summary, the subtraction of two tensors in the TT-format can be performed by merely merging their cores, instead of directly subtracting the two tensors in the standard tensor format. The dot product of two tensors in the TT-format can be achieved by a sequence of matrix-by-vector products. The cost of computing the distance between two tensors in the TT-format thus reduces from the original complexity $O(M)$
to $O(dmr^3)$, where $M=\prod_{i=1}^{d}m_i$ and $r\ll M$.
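The dot product \eqref{dot1}--\eqref{dot2}, which underlies Algorithm \ref{dot}, can be sketched as follows in NumPy; for readability this version forms the Kronecker products explicitly instead of using the $O(r^3)$ reshaping trick above.
\begin{verbatim}
import numpy as np

def tt_dot(Y_cores, Z_cores):
    """Y_cores[k], Z_cores[k]: TT-cores of shape (r_{k-1}, m_k, r_k)."""
    v = None
    for Y, Z in zip(Y_cores, Z_cores):
        # V_k = sum over i_k of Y_k(i_k) kron Z_k(i_k)
        V = sum(np.kron(Y[:, i, :], Z[:, i, :]) for i in range(Y.shape[1]))
        v = V if v is None else v @ V  # sequence of matrix-by-vector products
    return v.item()
\end{verbatim}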
\section{Tensor train random projection}\label{TTRP}
Due to the computational efficiency of TT-format discussed above,
we consider the TT-format to construct projection matrices. Our tensor train random projection is defined as follows.
\begin{definition}\label{defTTRP}
(Tensor Train Random Projection). For a data point $\mathbf{x} \in \mathbb{R}^{N}$, our tensor train random projection (TTRP) is
\begin{equation} \label{eq_ttfpfull}
f_{TTRP}(\mathbf{x}):= \frac{1}{\sqrt{M}}\mathbf{R} \mathbf{x},
\end{equation}
where the tensorized versions (through
the TT-format) of
$\mathbf{R}$ and $\mathbf{x}$ are denoted by $\mathcal{R}$ and
$\mathcal{X}$ (see \eqref{eq_tensorizeR}-\eqref{eq_tensorizeX}),
the corresponding TT-cores
are denoted by $\{\mathcal{R}_k \in \mathbb{R}^{r_{k-1}\times m_k\times n_k \times r_k}\}^d_{k=1}$ and $\{\mathcal{X}_k\in \mathbb{R}^{\hat{r}_{k-1}\times n_k \times \hat{r}_k}\}^d_{k=1}$ respectively, we set $r_0=r_1=\ldots=r_d=1$,
and $\mathbf{y}:=\mathbf{R}\mathbf{x}$ is specified by \eqref{ycore}.
\end{definition}
Note that our TTRP is based on the tensorized version of $\mathbf{R}$ with TT-ranks all equal to one,
which leads to significant computational efficiency and small storage costs, and comparisons for TTRP associated with different TT-ranks are conducted in section \ref{Experm}.
When $r_0=r_1=\ldots=r_d=1$,
all TT-cores $\mathcal{R}_i$, for $i=1,\dots,d$ in \eqref{eq_tensorizeR} become matrices and the cost of computing $\mathbf{Rx}$ in TTRP \eqref{eq_ttfpfull} is $O(dmn\hat{r}^2)$ (see section \ref{matrix-vector}), where $m=\max\{m_1,m_2,\dots,m_d\}$, $n=\max\{n_1,n_2,\dots,n_d\}$ and $\hat{r}=\max\{\hat{r}_0, \hat{r}_1, \ldots, \hat{r}_d\}$.
Moreover, from our analysis in the latter part of this section, we find that the Rademacher distribution introduced in section \ref{Intro} is an optimal choice for generating
the TT-cores of $\mathbf{R}$.
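A minimal sketch of TTRP with Rademacher cores is given below; with all TT-ranks of $\mathbf{R}$ equal to one, the tensorized $\mathbf{R}$ coincides with a Kronecker product of the small core matrices (assuming the index bijections of section \ref{matrix-vector} are lexicographic), which we form explicitly here only for illustration.
\begin{verbatim}
import numpy as np

def ttrp(x, m_dims, n_dims, seed=0):
    rng = np.random.default_rng(seed)
    # rank-one TT-cores: each R_k is an m_k x n_k Rademacher matrix
    cores = [rng.choice([-1.0, 1.0], size=(m, n))
             for m, n in zip(m_dims, n_dims)]
    R = cores[0]
    for Ck in cores[1:]:
        R = np.kron(R, Ck)    # rank-one TT matrix = Kronecker product
    M = int(np.prod(m_dims))
    return R @ x / np.sqrt(M)

x = np.random.default_rng(1).standard_normal(16 * 16)
y = ttrp(x, m_dims=[4, 4], n_dims=[16, 16])
print(np.linalg.norm(y) ** 2 / np.linalg.norm(x) ** 2)  # ~ 1 in expectation
\end{verbatim}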
In the following,
we prove that TTRP established by \eqref{eq_ttfpfull} is an expected isometric projection with bounded variance.
\begin{theorem}\label{lemma_mean}
Given a vector $\mathbf{x}\in\mathbb{R}^{\prod_{j=1}^{d} n_j}$, if $\mathbf{R}$ in \eqref{eq_ttfpfull} is composed of $d$ independent TT-cores $\mathcal{R}_1,\dots,\mathcal{R}_d$, whose entries are independent and identically distributed random variables with mean zero and variance one, then the following equation holds
\begin{equation*}
\mathbb{E}{\Arrowvert f_{TTRP}(\mathbf{x})\Arrowvert}^{2}_2={\Arrowvert \mathbf{x} \Arrowvert}^2_2.
\end{equation*}
\end{theorem}
\begin{proof}
Denoting $\mathbf{y}=\mathbf{R}\mathbf{x}$ gives
\begin{align}
\mathbb{E}{\Arrowvert f_{TTRP}(\mathbf{x})\Arrowvert}^2_2=\frac{1}{M}\mathbb{E}{\Arrowvert \mathbf{y} \Arrowvert}^2_2=\frac{1}{M}\mathbb{E}\left[\sum_{i=1}^M \mathbf{y}^2(i)\right]=\frac{1}{M}\mathbb{E}\left[\sum_{i_1,\dots,i_d} \mathcal{Y}^2(i_1,\dots,i_d)\right] . \label{yintial}
\end{align}
By the TT-format,
$\mathcal{Y}(i_1,\dots,i_d)=\mathcal{Y}_1(i_1)\cdots\mathcal{Y}_d(i_d)$, where $\mathcal{Y}_k(i_k)=\sum_{j_k}\mathcal{R}_k(i_k,j_k)\otimes \mathcal{X}_k(j_k)$, for $k=1,\dots,d$, it follows that
\begin{align}
\mathbb{E}\left[\mathcal{Y}^2(i_1,\dots,i_d)\right]&=\mathbb{E}\left[\left(\mathcal{Y}_1(i_1)\cdots\mathcal{Y}_d(i_d)\Big)\Big(\mathcal{Y}_1(i_1)\cdots\mathcal{Y}_d(i_d)\right)\right] \nonumber\\
&=\mathbb{E}\left[\Big(\mathcal{Y}_1(i_1)\cdots\mathcal{Y}_d(i_d)\Big)\otimes\Big(\mathcal{Y}_1(i_1)\cdots\mathcal{Y}_d(i_d)\Big)\right] \label{y2}\\
&=\mathbb{E}\left[\Big(\mathcal{Y}_1(i_1)\otimes\mathcal{Y}_1(i_1)\Big)\cdots\Big(\mathcal{Y}_d(i_d)\otimes\mathcal{Y}_d(i_d)\Big)\right] \label{y3}\\
&=\mathbb{E}\Big[\mathcal{Y}_1(i_1)\otimes\mathcal{Y}_1(i_1)\Big]\cdots\mathbb{E}\Big[\mathcal{Y}_d(i_d)\otimes\mathcal{Y}_d(i_d)\Big], \label{yy}
\end{align}
where \eqref{y3} is derived using \eqref{multiply} and \eqref{y2}, and then combining \eqref{y3}
and using the independence of TT-cores $\mathcal{R}_1,\dots,\mathcal{R}_d$ give \eqref{yy}.
The $k$-th term of the right hand side of \eqref{yy}, for $k=1,\dots,d$, can be computed by
\begin{align}
\mathbb{E}\Big[\mathcal{Y}_k(i_k)\otimes\mathcal{Y}_k(i_k)\Big] &=\mathbb{E}\Bigg[\Big[\sum_{j_k}\mathcal{R}_k(i_k,j_k)\otimes\mathcal{X}_k(j_k)\Big]\otimes\Big[\sum_{j_k}\mathcal{R}_k(i_k,j_k)\otimes\mathcal{X}_k(j_k)\Big]\Bigg] \label{yk1}\\
&=\mathbb{E}\Bigg[\Big[\sum_{j_k}\mathcal{R}_k(i_k,j_k)\mathcal{X}_k(j_k)\Big]\otimes\Big[\sum_{j_k}\mathcal{R}_k(i_k,j_k)\mathcal{X}_k(j_k)\Big]\Bigg]\label{yk2}\\
&=\sum_{j_k,j'_k}\mathbb{E}\Big[\mathcal{R}_k(i_k,j_k)\mathcal{R}_k(i_k,j'_k)\Big]\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j'_k) \label{yk3}\\
&=\sum_{j_k}\mathbb{E}\Big[\mathcal{R}^2_k(i_k,j_k)\Big]\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j_k) \label{yk4}\\
&=\sum_{j_k}\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j_k). \label{yk5}
\end{align}
Here, as the TT-ranks
of $\mathcal{R}$ are set to one, $\mathcal{R}_k(i_k,j_k)$ is a scalar, and \eqref{yk1} then
leads to \eqref{yk2}.
Using \eqref{distributive} and \eqref{yk2} gives \eqref{yk3}; then \eqref{yk4} follows from \eqref{yk3} by the assumption that $\mathbb{E}\Big[\mathcal{R}_k(i_k,j_k)\mathcal{R}_k(i_k,j'_k)\Big]=0$ for $j_k, j'_k=1,\dots,n_k$ with $j_k\neq j'_k$, and \eqref{yk5} follows from \eqref{yk4} by the assumption that $\mathbb{E}\Big[\mathcal{R}^2_k (i_k,j_k)\Big]=1$, for $k=1,\dots,d$.
Substituting \eqref{yk5} into \eqref{yy} gives
\begin{align}
\mathbb{E}\Big[\mathcal{Y}^2(i_1,\dots,i_d)\Big]&=\Bigg[\sum_{j_1}\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j_1)\Bigg]\cdots\Bigg[\sum_{j_d}\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j_d)\Bigg] \nonumber\\
&=\sum_{j_1,\dots,j_d}\Big[\mathcal{X}_1(j_1)\cdots\mathcal{X}_d(j_d)\Big]\otimes\Big[\mathcal{X}_1(j_1)\cdots\mathcal{X}_d(j_d)\Big] \nonumber\\
&=\sum_{j_1,\dots,j_d}\mathcal{X}^2(j_1,\dots,j_d) \nonumber\\
&=\norm{\mathbf{x}}{2}^2. \label{yresult}
\end{align}
Substituting \eqref{yresult} into \eqref{yintial}, we conclude that
\begin{align*}
\mathbb{E}{\Arrowvert f_{TTRP}(\mathbf{x})\Arrowvert}^2_2 &=\frac{1}{M}\mathbb{E}\Bigg[\sum_{i_1,\dots,i_d} \mathcal{Y}^2(i_1,\dots,i_d)\Bigg]\\
&=\frac{1}{M}\times M {\Arrowvert \mathbf{x} \Arrowvert}^2_2\\
&={\Arrowvert \mathbf{x} \Arrowvert}^2_2.
\end{align*}
\end{proof}
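The expected isometry in Theorem \ref{lemma_mean} can be checked numerically. The following sketch is an illustration of ours (not a reference implementation; all parameter values are for demonstration only); it uses the fact that with TT-ranks equal to one, $\mathbf{R}$ is the Kronecker product of its cores (see Appendix~\ref{appendix}), and averages ${\Arrowvert f_{TTRP}(\mathbf{x})\Arrowvert}^{2}_2$ over independent Rademacher realizations:
\begin{verbatim}
# Monte Carlo check of the expected isometry (illustrative sketch).
# With TT-ranks equal to one, R = R_1 (x) R_2 (x) ... (x) R_d (Appendix A).
import numpy as np

rng = np.random.default_rng(0)
m, n = (4, 3, 2), (10, 10, 10)              # M = 24, N = 1000
M, N = int(np.prod(m)), int(np.prod(n))
x = rng.standard_normal(N)

def f_ttrp(x):
    cores = [rng.choice([-1.0, 1.0], size=(mk, nk)) for mk, nk in zip(m, n)]
    R = cores[0]
    for C in cores[1:]:
        R = np.kron(R, C)                   # rank-one TT = Kronecker product
    return R @ x / np.sqrt(M)

sq = [np.sum(f_ttrp(x) ** 2) for _ in range(2000)]
print(np.mean(sq), np.sum(x ** 2))          # the two values should be close
\end{verbatim}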
\begin{theorem}\label{lemma_var}
Given a vector $\mathbf{x} \in \mathbb{R}^{\prod_{j=1}^{d} n_j}$, if $\mathbf{R}$ in \eqref{eq_ttfpfull} is composed of $d$ independent TT-cores $\mathcal{R}_1,\dots,\mathcal{R}_d$, whose entries are independent and identically distributed random variables with mean zero, variance one and the same fourth moment $\Delta$, and if $\mathcal{M}:=\max_{i=1,\dots,N} \ \lvert\mathbf{x}(i)\rvert,\,m=\max\{m_1,m_2,\dots,m_d\},\, n=\max\{n_1,n_2,\dots,n_d\}$, then
$$
\text{Var}\left({\Arrowvert f_{TTRP}(\mathbf{x})\Arrowvert}^{2}_2 \right) \leq \frac{1}{M}\Big(\Delta+n(m+2)-3\Big)^d N\mathcal{M}^4-{\Arrowvert \mathbf{x} \Arrowvert}^4_2.
$$
\end{theorem}
\begin{proof}
By the property of the variance and using Theorem \ref{lemma_mean},
\begin{align}
\text{Var}\left({\Arrowvert f_{TTRP}(\mathbf{x})\Arrowvert}^{2}_2\right)&=\mathbb{E}\Big[{\Arrowvert f_{TTRP}(\mathbf{x})\Arrowvert}^{4}_2\Big]-\Bigg[\mathbb{E}\Big[{\Arrowvert f_{TTRP}(\mathbf{x})\Arrowvert}^{2}_2\Big]\Bigg]^2\nonumber\\
&=\mathbb{E}\Big[\Arrowvert \frac{1}{\sqrt{M}}\mathbf{y}\Arrowvert^4_2\Big]-{\Arrowvert \mathbf{x} \Arrowvert}^4_2 \nonumber\\
&= \frac{1}{M^2}\mathbb{E}\Big[{\Arrowvert\mathbf{y}\Arrowvert}^{4}_2\Big]-{\Arrowvert \mathbf{x} \Arrowvert}^4_2 \label{var_right}\\
&=\frac{1}{M^2}\Bigg[\sum_{i=1}^M \mathbb{E}\Big[\mathbf{y}^4(i)\Big]+\sum_{i\neq j}\mathbb{E}\Big[\mathbf{y}^2(i){\mathbf{y}^2(j)}\Big]\Bigg]-{\Arrowvert \mathbf{x} \Arrowvert}^4_2, \label{square_right}
\end{align}
where we note that $\mathbb{E}[\mathbf{y}^2(i)\mathbf{y}^2(j)]\neq \mathbb{E}[\mathbf{y}^2(i)]\mathbb{E}[\mathbf{y}^2(j)]$ in general; a simple example can be found in Appendix~\ref{appendix}.
We compute the first term of the right hand side of \eqref{square_right},
\begin{align}
\mathbb{E}\Big[\mathbf{y}^4(i)\Big]
&=\mathbb{E}\Big[\mathcal{Y}(i_1,\dots,i_d)\otimes\mathcal{Y}(i_1,\dots,i_d)\otimes\mathcal{Y}(i_1,\dots,i_d)\otimes\mathcal{Y}(i_1,\dots,i_d)\Big] \label{yyyy1}\\
&=\mathbb{E}\Bigg[\Big[\mathcal{Y}_1(i_1)\otimes\mathcal{Y}_1(i_1)\otimes\mathcal{Y}_1(i_1)\otimes\mathcal{Y}_1(i_1)\Big]\cdots\Big[\mathcal{Y}_d(i_d)\otimes\mathcal{Y}_d(i_d)\otimes\mathcal{Y}_d(i_d)\otimes\mathcal{Y}_d(i_d)\Big]\Bigg] \label{yyyy2}\\
&=\mathbb{E}\Big[\mathcal{Y}_1(i_1)\otimes\mathcal{Y}_1(i_1)\otimes\mathcal{Y}_1(i_1)\otimes\mathcal{Y}_1(i_1)\Big]\cdots\mathbb{E}\Big[\mathcal{Y}_d(i_d)\otimes\mathcal{Y}_d(i_d)\otimes\mathcal{Y}_d(i_d)\otimes\mathcal{Y}_d(i_d)\Big] \label{yyyy3},
\end{align}
where $\mathbf{y}(i)=\mathcal{Y}(i_1,\dots,i_d)$, applying \eqref{multiply} to \eqref{yyyy1} yields \eqref{yyyy2}, and \eqref{yyyy3} is derived from \eqref{yyyy2} by the independence of the TT-cores $\{\mathcal{R}_k\}^d_{k=1}$.
Considering the $k$-th term of the right hand side of \eqref{yyyy3}, for $k=1,\dots,d$, we obtain that
\begin{align}
\mathbb{E}\Big[&\mathcal{Y}_k(i_k)\otimes\mathcal{Y}_k(i_k)\otimes\mathcal{Y}_k(i_k)\otimes\mathcal{Y}_k(i_k)\Big] \nonumber\\
=&\mathbb{E}\Bigg[\Big[\sum_{j_k}\mathcal{R}_k(i_k,j_k)\otimes\mathcal{X}_k(j_k)\Big]\otimes\Big[\sum_{j_k}\mathcal{R}_k(i_k,j_k)\otimes\mathcal{X}_k(j_k)\Big] \nonumber\\
&\otimes\Big[\sum_{j_k}\mathcal{R}_k(i_k,j_k)\otimes\mathcal{X}_k(j_k)\Big]\otimes\Big[\sum_{j_k}\mathcal{R}_k(i_k,j_k)\otimes\mathcal{X}_k(j_k)\Big]\Bigg] \label{yyyk1}\\
=&\mathbb{E}\Bigg[\Big[\sum_{j_k}\mathcal{R}_k(i_k,j_k)\mathcal{X}_k(j_k)\Big]\otimes\Big[\sum_{j_k}\mathcal{R}_k(i_k,j_k)\mathcal{X}_k(j_k)\Big] \nonumber\\
&\otimes\Big[\sum_{j_k}\mathcal{R}_k(i_k,j_k)\mathcal{X}_k(j_k)\Big]\otimes\Big[\sum_{j_k}\mathcal{R}_k(i_k,j_k)\mathcal{X}_k(j_k)\Big]\Bigg] \label{yyyk2}\\
=&\mathbb{E}\Big[\sum_{j_k}\mathcal{R}^4_k(i_k,j_k)\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j_k)\Big] \nonumber\\
&+\mathbb{E}\Big[\sum_{j_k\neq j'_k}\mathcal{R}^2_k(i_k,j_k)\mathcal{R}^2_k(i_k,j'_k)\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j'_k)\otimes\mathcal{X}_k(j'_k)\Big] \nonumber\\
&+\mathbb{E}\Big[\sum_{j_k\neq j'_k}\mathcal{R}^2_k(i_k,j_k)\mathcal{R}^2_k(i_k,j'_k)\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j'_k)\otimes\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j'_k)\Big] \nonumber\\
&+\mathbb{E}\Big[\sum_{j_k\neq j'_k}\mathcal{R}^2_k(i_k,j_k)\mathcal{R}^2_k(i_k,j'_k)\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j'_k)\otimes\mathcal{X}_k(j'_k)\otimes\mathcal{X}_k(j_k)\Big] \label{yyyk3}\\
=&\Delta\sum_{j_k}\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j_k)
+\sum_{j_k\neq j'_k}\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j'_k)\otimes\mathcal{X}_k(j'_k) \nonumber\\
&+\sum_{j_k\neq j'_k}\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j'_k)\otimes\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j'_k)
+\sum_{j_k\neq j'_k}\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j'_k)\otimes\mathcal{X}_k(j'_k)\otimes\mathcal{X}_k(j_k), \label{yyyk4}
\end{align}
where \eqref{yyyk2} is inferred from \eqref{yyyk1} because each $\mathcal{R}_k(i_k,j_k)$ is a scalar, \eqref{yyyk3} is obtained from \eqref{distributive} and the independence of the TT-cores $\{\mathcal{R}_k\}^d_{k=1}$, and \eqref{yyyk4} follows from the assumptions that the fourth moment is $\Delta=\mathbb{E}\Big[\mathcal{R}^4_k(i_k,j_k)\Big]$ and that $\mathbb{E}\Big[\mathcal{R}^2_k (i_k,j_k)\Big]=1$, for $k=1,\dots,d$.
Substituting \eqref{yyyk4} into \eqref{yyyy3}, it implies that
\begin{align}
\mathbb{E}&\Big[\mathcal{Y}^4(i_1,\dots,i_d)\Big] \nonumber\\
=&\Big[\Delta\sum_{j_1}\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j_1)+\sum_{j_1\neq j'_1}\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j'_1)\otimes\mathcal{X}_1(j'_1) \nonumber\\
&+\sum_{j_1\neq j'_1}\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j'_1)\otimes\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j'_1)+\sum_{j_1\neq j'_1}\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j'_1)\otimes\mathcal{X}_1(j'_1)\otimes\mathcal{X}_1(j_1)\Big] \nonumber \\
&\cdots\Big[\Delta\sum_{j_d}\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j_d)+\sum_{j_d\neq j'_d}\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j'_d)\otimes\mathcal{X}_d(j'_d) \nonumber\\
&+\sum_{j_d\neq j'_d}\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j'_d)\otimes\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j'_d)+\sum_{j_d\neq j'_d}\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j'_d)\otimes\mathcal{X}_d(j'_d)\otimes\mathcal{X}_d(j_d)\Big] \nonumber\\
\leq& \Delta^d \sum_{j_1,\dots,j_d}\Bigg[\Big[\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j_1)\Big]\cdots\Big[\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j_d)\Big]\Bigg] \nonumber\\
&+\Delta^{d-1}C_d^1\underset{k}{\max}\Bigg[\sum_{j_1,..,j_k\neq j'_k,\dots, j_d}\Big[\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j_1)\Big]\cdots \nonumber\\
&\Big[\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j'_k)\otimes\mathcal{X}_k(j'_k)\Big]
\cdots \Big[\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j_d)\Big]\Bigg] \nonumber\\
&+\Delta^{d-1}C_d^1\underset{k}{\max}\Bigg[\sum_{j_1,..,j_k\neq j'_k,\dots, j_d}\Big[\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j_1)\Big]\cdots \nonumber\\
&\Big[\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j'_k)\otimes\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j'_k)\Big]\cdots \Big[\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j_d)\Big]\Bigg] \nonumber\\
&+\Delta^{d-1}C_d^1\underset{k}{\max}\Bigg[\sum_{j_1,..,j_k\neq j'_k,\dots, j_d}\Big[\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j_1)\Big]\cdots \nonumber\\
&\Big[\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j'_k)\otimes\mathcal{X}_k(j'_k)\otimes\mathcal{X}_k(j_k)\Big]
\cdots \Big[\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j_d)\Big]\Bigg]+\cdots \label{computing_delta}\\
\leq& \Delta^d\sum_{j_1,\dots,j_d}\mathcal{X}^4(j_1,\dots,j_d)+ 3\Delta^{d-1}C_d^1\underset{k}{\max}\Bigg[\sum_{j_1,..,j_k\neq j'_k,\dots, j_d} \mathcal{X}(j_1,\dots,j_k,\dots,j_d)^2\mathcal{X}(j_1,\dots,j'_k,\dots,j_d)^2 \Bigg] + \cdots \label{delta_result}\\
\leq& \Delta^d{\Arrowvert \mathbf{x} \Arrowvert}^4_4+3(n-1)\Delta^{d-1}C^1_d N\mathcal{M}^4+3^2(n-1)^2\Delta^{d-2}C^2_d N\mathcal{M}^4+\cdots+3^d(n-1)^d N\mathcal{M}^4 \nonumber\\
\leq&\Big(\Delta+3(n-1)\Big)^d N\mathcal{M}^4, \label{yiresult}
\end{align}
where $\mathcal{M}$ and $n$ are as defined in the statement of the theorem, and \eqref{delta_result} is derived from \eqref{computing_delta} by \eqref{multiply}.
Similarly, the second term $\mathbb{E}\Big[\mathbf{y}^2(i)\mathbf{y}^2(j)\Big]$ of the right hand side of \eqref{square_right}, for $i\neq j,\,\nu(i)=(i_1,i_2,\dots,i_d)\neq \nu(j)=(i'_1,i'_2,\dots,i'_d)$, is obtained by
\begin{align}
\mathbb{E}&\Big[\mathbf{y}^2(i)\mathbf{y}^2(j)\Big] \nonumber\\
=&\mathbb{E}\Big[\mathcal{Y}_1(i_1)\otimes\mathcal{Y}_1(i_1)\otimes\mathcal{Y}_1(i'_1)\otimes\mathcal{Y}_1(i'_1)\Big]\cdots\mathbb{E}\Big[\mathcal{Y}_d(i_d)\otimes\mathcal{Y}_d(i_d)\otimes\mathcal{Y}_d(i'_d)\otimes\mathcal{Y}_d(i'_d)\Big]. \label{yiyj}
\end{align}
If $i_k\neq i'_k$, for $k=1,\dots,d$, then the $k$-th term of the right hand side of \eqref{yiyj} is computed by
\begin{align}
\mathbb{E}&\Big[\mathcal{Y}_k(i_k)\otimes\mathcal{Y}_k(i_k)\otimes\mathcal{Y}_k(i'_k)\otimes\mathcal{Y}_k(i'_k)\Big] \nonumber\\
=&\mathbb{E}\Bigg[\Big[\sum_{j_k}\mathcal{R}_k(i_k,j_k)\mathcal{X}_k(j_k)\Big]\otimes\Big[\sum_{j_k}\mathcal{R}_k(i_k,j_k)\mathcal{X}_k(j_k)\Big] \nonumber\\
&\otimes\Big[\sum_{j_k}\mathcal{R}_k(i'_k,j_k)\mathcal{X}_k(j_k)\Big]\otimes\Big[\sum_{j_k}\mathcal{R}_k(i'_k,j_k)\mathcal{X}_k(j_k)\Big]\Bigg]\\
=&\mathbb{E}\Big[\sum_{j_k}\mathcal{R}^2_k(i_k,j_k)\mathcal{R}^2_k(i'_k,j_k)\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j_k)\Big]\nonumber\\
&+\mathbb{E}\Big[\sum_{j_k\neq j'_k}\mathcal{R}^2_k(i_k,j_k)\mathcal{R}^2_k(i'_k,j'_k)\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j'_k)\otimes\mathcal{X}_k(j'_k)\Big]\\
=&\sum_{j_k}\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j_k)
+\sum_{j_k\neq j'_k}\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j'_k)\otimes\mathcal{X}_k(j'_k). \label{ykresult}
\end{align}
Supposing that $i_1=i'_1,\dots,i_k\neq i'_k,\dots,i_d=i'_d$ and substituting \eqref{yyyk4} and \eqref{ykresult} into \eqref{yiyj}, we obtain
\begin{align}
\mathbb{E}&\Big[\mathbf{y}^2(i)\mathbf{y}^2(j)\Big] \nonumber\\
=& \mathbb{E}\Big[\mathcal{Y}_1(i_1)\otimes\mathcal{Y}_1(i_1)\otimes\mathcal{Y}_1(i_1)\otimes\mathcal{Y}_1(i_1)\Big]\cdots \mathbb{E}\Big[\mathcal{Y}_k(i_k)\otimes\mathcal{Y}_k(i_k)\otimes\mathcal{Y}_k(i'_k)\otimes\mathcal{Y}_k(i'_k)\Big]\cdots \nonumber\\
&\mathbb{E}\Big[\mathcal{Y}_d(i_d)\otimes\mathcal{Y}_d(i_d)\otimes\mathcal{Y}_d(i_d)\otimes\mathcal{Y}_d(i_d)\Big] \nonumber\\
=&\Big[\Delta\sum_{j_1}\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j_1)
+\sum_{j_1\neq j'_1}\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j'_1)\otimes\mathcal{X}_1(j'_1) \nonumber\\
&+\sum_{j_1\neq j'_1}\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j'_1)\otimes\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j'_1)
+\sum_{j_1\neq j'_1}\mathcal{X}_1(j_1)\otimes\mathcal{X}_1(j'_1)\otimes\mathcal{X}_1(j'_1)\otimes\mathcal{X}_1(j_1)\Big]\nonumber\\
& \cdots\Big[\sum_{j_k}\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j_k)
+\sum_{j_k\neq j'_k}\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j_k)\otimes\mathcal{X}_k(j'_k)\otimes\mathcal{X}_k(j'_k)\Big]\nonumber\\
&\cdots\Big[\Delta\sum_{j_d}\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j_d)
+\sum_{j_d\neq j'_d}\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j'_d)\otimes\mathcal{X}_d(j'_d) \nonumber\\
&+\sum_{j_d\neq j'_d}\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j'_d)\otimes\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j'_d)
+\sum_{j_d\neq j'_d}\mathcal{X}_d(j_d)\otimes\mathcal{X}_d(j'_d)\otimes\mathcal{X}_d(j'_d)\otimes\mathcal{X}_d(j_d)\Big] \nonumber\\
\leq& n(\Delta+3(n-1))^{d-1} N\mathcal{M}^4. \label{yiyj1}
\end{align}
Similarly, if for $k\in S \subseteq \{1,\dots,d\},\,\lvert S\rvert=l$, $i_k\neq i'_k$, and for $k\in \overline{S}$, $i_k=i'_k$, then
\begin{align}
\mathbb{E}\Big[\mathbf{y}^2(i)\mathbf{y}^2(j)\Big]\leq n^l(\Delta+3(n-1))^{d-l} N\mathcal{M}^4. \label{yiyjl}
\end{align}
Hence, combining \eqref{yiyj1} and \eqref{yiyjl} gives
\begin{align}
\sum_{i\neq j}\mathbb{E}\Big[\mathbf{y}^2(i)\mathbf{y}^2(j)\Big]
\leq& M\Big[C^1_d(m-1)n(\Delta+3(n-1))^{d-1}+\cdots+C^l_d(m-1)^ln^l(\Delta+3(n-1))^{(d-l)} \nonumber\\
&+\cdots+C^d_d(m-1)^d n^d\Big]N\mathcal{M}^4, \label{totalyiyj}
\end{align}
where $m=\max\{m_1,m_2,\dots,m_d\}$.\\
Therefore, combining \eqref{yiresult} and \eqref{totalyiyj}, we deduce
\begin{align}
\mathbb{E}\Big[{\Arrowvert\mathbf{y}\Arrowvert}^{4}_2\Big]\leq& M\Big[(\Delta+3(n-1))^d+C^1_d(m-1)n(\Delta+3(n-1))^{d-1}+\cdots+C^d_d(m-1)^d n^d\Big]N\mathcal{M}^4 \nonumber\\
&=M\Big((m-1)n+\Delta+3(n-1)\Big)^d N \mathcal{M}^4 \nonumber\\
&=M\Big(\Delta+n(m+2)-3\Big)^d N \mathcal{M}^4. \label{y24}
\end{align}
In summary, substituting \eqref{y24} into \eqref{var_right} yields
\begin{align}
\text{Var}\Big({\Arrowvert f_{TTRP}(\mathbf{x})\Arrowvert}^{2}_2\Big)\leq&\frac{M\Big(\Delta+n(m+2)-3\Big)^d N \mathcal{M}^4}{M^2}-{\Arrowvert \mathbf{x} \Arrowvert}^4_2 \nonumber\\
=& \frac{1}{M}\Big(\Delta+n(m+2)-3\Big)^d N\mathcal{M}^4-{\Arrowvert \mathbf{x} \Arrowvert}^4_2. \label{upper_bound}
\end{align}
\end{proof}
One can see that the bound on the variance \eqref{upper_bound} decreases as $M$ increases, which is expected. When $M=m^d$ and $N=n^d$, we have
\begin{align}
\text{Var}\Big({\Arrowvert f_{TTRP}(\mathbf{x})\Arrowvert}^{2}_2\Big)&\leq \Big(\frac{\Delta+2n-3}{m}+n\Big)^d N\mathcal{M}^4-{\Arrowvert \mathbf{x} \Arrowvert}^4_2.
\label{upper_d}
\end{align}
As $m$ increases, the upper bound in \eqref{upper_d} tends to $N^2 \mathcal{M}^4-{\Arrowvert \mathbf{x} \Arrowvert}^4_2\geq0$, and this limiting bound vanishes if and only if $\lvert\mathbf{x}(1)\rvert=\lvert\mathbf{x}(2)\rvert=\dots=\lvert\mathbf{x}(N)\rvert$.
Also, the upper bound \eqref{upper_bound} is affected by the fourth moment $\Delta=\mathbb{E}\Big[\mathcal{R}^4_k(i_k,j_k)\Big]=\text{Var}\Big(\mathcal{R}^2_k(i_k,j_k)\Big)+\Big[\mathbb{E}[\mathcal{R}^2_k(i_k,j_k)]\Big]^2$. To keep the expected isometry, we need $\mathbb{E}[\mathcal{R}^2_k(i_k,j_k)]=1$.
Note that when the TT-cores follow the Rademacher distribution, i.e., $\text{Var}\Big(\mathcal{R}^2_k(i_k,j_k)\Big)=0$, the fourth moment $\Delta$ in \eqref{upper_bound} achieves its minimum. The Rademacher distribution is therefore an optimal choice for generating the TT-cores, and we set it as our default choice for constructing TTRP (Definition \ref{defTTRP}).
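As a back-of-the-envelope illustration (our own, with arbitrary sample dimensions), the variance bound \eqref{upper_bound} scales like $\big(\Delta+n(m+2)-3\big)^d$, and the Rademacher distribution gives $\Delta=1$ while the standard Gaussian gives $\Delta=3$:
\begin{verbatim}
# Comparing the factor (Delta + n(m+2) - 3)^d for Rademacher (Delta = 1)
# and standard Gaussian (Delta = 3) TT-cores.
m, n, d = 4, 10, 3
for name, delta in [("Rademacher", 1.0), ("Gaussian", 3.0)]:
    print(name, (delta + n * (m + 2) - 3) ** d)
\end{verbatim}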
\begin{proposition}\label{hyper}
(Hypercontractivity \cite{schudy2012concentration}) Consider a degree $q$ polynomial $f(Y)=$ $f\left(Y_{1}, \ldots, Y_{n}\right)$ of independent centered Gaussian or Rademacher random variables $Y_{1}, \ldots, Y_{n} .$ Then for any $\lambda>0$
\begin{equation*}
\mathbb{P}\left(\left\lvert f(Y)-\mathbb{E}\left[f(Y)\right]\right\rvert \geq \lambda\right) \leq e^{2} \cdot \exp{\left[-\left(\frac{\lambda^2}{K\cdot \text{Var}[f(Y)] }\right)^{\frac{1}{q}}\right]},
\end{equation*}
where $\text{Var}[f(Y)]$ is the variance of the random variable $f(Y)$ and $K>0$ is an absolute constant.
\end{proposition}
Proposition \ref{hyper} extends the Hanson--Wright inequality; its proof can be found in \cite{schudy2012concentration}.
\begin{proposition}\label{con}
Let $f_{TTRP}: \mathbb{R}^N \mapsto \mathbb{R}^{M}$ be the tensor train random projection defined by \eqref{eq_ttfpfull}. Suppose that for $i=1, \ldots, d$, all entries of TT-cores $\mathcal{R}_i$ are independent standard Gaussian or Rademacher random variables, with the same fourth moment $\Delta$, and let $\mathcal{M}:=\max_{i=1,\dots,N} \ \lvert \mathbf{x}(i)\rvert,\,m=\max\{m_1,m_2,\dots,m_d\},\, n=\max\{n_1,n_2,\dots,n_d\}$. For any $\mathbf{x} \in \mathbb{R}^{N}$, there exist absolute constants $C$ and $K>0$ such that the following claim holds
\begin{equation}
\mathbb{P} \left ( \left\lvert \norm{f_{TTRP}(\mathbf{x})}{2}^2 - \norm{\mathbf{x}}{2}^2 \right\rvert \geq \varepsilon \norm{\mathbf{x}}{2}^2 \right ) \leq C \exp{\left[-\left(\frac{M\cdot\varepsilon^2 }{K\cdot\left[\left(\Delta+n(m+2)-3\right)^d N-M\right]}\right)^\frac{1}{2d}\right]}. \label{prop2_ineq}
\end{equation}
\end{proposition}
\begin{proof}
According to Theorem \ref{lemma_mean}, $ \mathbb{E}{\Arrowvert f_{TTRP}(\mathbf{x})\Arrowvert}^{2}_2={\Arrowvert \mathbf{x} \Arrowvert}^2_2$. Since ${\Arrowvert f_{TTRP}(\mathbf{x})\Arrowvert}^{2}_2$ is a polynomial of degree $2d$ in independent standard Gaussian or Rademacher random variables, namely the entries of the TT-cores $\mathcal{R}_i$, for $i=1,\dots,d$, we apply Proposition \ref{hyper} and Theorem \ref{lemma_var} to obtain
\begin{align*}
\mathbb{P} \left ( \left\lvert \norm{f_{TTRP}(\mathbf{x})}{2}^2 - \norm{\mathbf{x}}{2}^2 \right\rvert \geq \varepsilon \norm{\mathbf{x}}{2}^2 \right ) &\leq e^2\cdot \exp{\left[-{\left(\frac{\varepsilon^2 \norm{\mathbf{x}}{2}^4}{K\cdot \text{Var}\left(\norm{f_{TTRP}(\mathbf{x})}{2}^2\right)}\right)}^\frac{1}{2d}\right]}\\
&\leq e^2\cdot \exp{\left[-\left(\frac{\varepsilon^2 }{K\cdot\left[\frac{1}{M}\left(\Delta+n(m+2)-3\right)^d N\frac{\mathcal{M}^4}{\norm{\mathbf{x}}{2}^4}-1\right]}\right)^\frac{1}{2d}\right]}\\
&\leq e^2\cdot \exp{\left[-\left(\frac{M\cdot\varepsilon^2 }{K\cdot\left[\left(\Delta+n(m+2)-3\right)^d N-M\right]}\right)^\frac{1}{2d}\right]}\\
&\leq C \exp{\left[-\left(\frac{M\cdot\varepsilon^2 }{K\cdot\left[\left(\Delta+n(m+2)-3\right)^d N-M\right]}\right)^\frac{1}{2d}\right] },
\end{align*}
where $\mathcal{M}=\max_{i=1,\dots,N} \ \lvert \mathbf{x}(i)\rvert$, and hence $\frac{\mathcal{M}^4}{\norm{\mathbf{x}}{2}^4}\leq 1$.
\end{proof}
We note that the upper bound in the concentration inequality \eqref{prop2_ineq} is not tight, as it involves the dimensionality of the datasets ($N$).
Deriving a tight bound for the corresponding concentration inequality that is independent of the dimensionality of the datasets is left for future work.
The procedure of TTRP is summarized in Algorithm \ref{alg_1}.
For the input of this algorithm, the TT-ranks of $\mathcal{R}$ (the tensorized version of the projection matrix $\mathbf{R}$ in \eqref{eq_ttfpfull}) are set to one, and from our above analysis, we generate entries of the corresponding TT-cores $\{\mathcal{R}_{k}\}^d_{k=1}$ through the Rademacher distribution.
For a given data point $\mathbf{x}$ in the TT-format, Algorithm \ref{alg_1} gives the TT-cores of the corresponding output, and each element of $f_{TTRP}(\mathbf{x})$ in \eqref{eq_ttfpfull} can be represented as:
$$f_{TTRP}(\mathbf{x})(i)=f_{TTRP}(\mathbf{x})(\nu(i))=f_{TTRP}(\mathbf{x})(i_1,\dots,i_d)=\frac{1}{\sqrt{M}}\mathcal{Y}_1(i_1)\cdots\mathcal{Y}_d(i_d),$$
where $\nu$ is a bijection from ${\mathbb{N}} $ to ${\mathbb{N}}^{d}$.
\begin{algorithm}[H]
\caption{Tensor train random projection}
\label{alg_1}
\begin{algorithmic}[1]
\Require TT-cores $\mathcal{R}_{k}\left(i_{k}, j_{k}\right)$ of $\mathbf{R}$, and TT-cores $\mathcal{X}_{k}$ of $\mathbf{x}$, for $k=1,\dots,d$.
\For {$k = 1:d$}
\For {$i_k=1:m_k$}
\State Compute $\mathcal{Y}_{k}\left(i_{k}\right)=\sum_{j_{k}=1}^{n_k}\Big(\mathcal{R}_{k}\left(i_{k}, j_{k}\right) \otimes \mathcal{X}_{k}\left(j_{k}\right)\Big)$. $\qquad \triangleright \ O(n\hat{r}^2)$ by \eqref{ycore}
\EndFor
\EndFor
\Ensure TT-cores $\frac{1}{\sqrt{M}}\mathcal{Y}_1$, $\mathcal{Y}_2,\dots,$ $\mathcal{Y}_d$.
\end{algorithmic}
\end{algorithm}
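A direct NumPy transcription of Algorithm~\ref{alg_1} may look as follows; this is a sketch under our storage conventions (each TT-core of $\mathbf{x}$ stored as an array of shape $(\hat{r}_{k-1}, n_k, \hat{r}_k)$, each rank-one core of $\mathbf{R}$ as an $m_k\times n_k$ matrix), not a reference implementation. Since $\mathcal{R}_k(i_k,j_k)$ is a scalar in the rank-one case, the update \eqref{ycore} reduces to a single contraction over the mode index:
\begin{verbatim}
# Sketch of Algorithm 1: rank-one TTRP applied to x given in TT-format.
import numpy as np

def ttrp_apply(X_cores, R_cores, M):
    # X_cores[k]: array (r_{k-1}, n_k, r_k); R_cores[k]: Rademacher (m_k, n_k).
    # Each contraction costs O(m n r^2), matching the count in the text.
    Y_cores = []
    for k, (Xk, Rk) in enumerate(zip(X_cores, R_cores)):
        # Y_k(i_k) = sum_j R_k(i_k, j) X_k(j):  Yk[a,i,b] = Rk[i,j] * Xk[a,j,b]
        Yk = np.einsum('ij,ajb->aib', Rk, Xk)
        Y_cores.append(Yk / np.sqrt(M) if k == 0 else Yk)
    return Y_cores
\end{verbatim}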
\section{Numerical experiments}\label{Experm}
We demonstrate the efficiency of TTRP using synthetic datasets and the MNIST dataset \cite{lecun2010mnist}.
The quality of isometry is a key factor in assessing the performance of random
projection methods,
which in our numerical studies is
estimated by the ratio of the pairwise distance
\begin{equation}\label{ratio}
\frac{2}{n_0(n_0-1)}\sum_{n_0\geq i > j}\frac{{\Arrowvert f_{TTRP}(\mathbf{x}^{(i)}) - f_{TTRP}(\mathbf{x}^{(j)}) \Arrowvert}_2}{{\Arrowvert \mathbf{x}^{(i)}-\mathbf{x}^{(j)}\Arrowvert}_2},
\end{equation}
where $n_0$ is the number of data points. Since the output of our TTRP procedure (see Algorithm \ref{alg_1}) is in the TT-format, it is efficient to apply TT-format operations to compute the pairwise distance of \eqref{ratio} through
Algorithm \ref{dot}. In order to obtain the average performance of isometry, we repeat numerical experiments 100 times (different realizations for TT-cores)
and estimate the mean and the variance for the ratio of the pairwise distance using these samples.
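For concreteness, the following sketch (ours; parameter values are illustrative) estimates the mean and the variance of \eqref{ratio} over independent realizations of the TT-cores. For brevity, it assembles $\mathbf{R}$ densely via Kronecker products rather than using the TT-format operations of Algorithm \ref{dot}:
\begin{verbatim}
# Estimating the mean/variance of the pairwise-distance ratio (sketch).
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
m, n, n0 = (4, 3, 2), (10, 10, 10), 10
M, N = int(np.prod(m)), int(np.prod(n))
data = rng.standard_normal((n0, N))

def project(batch):
    cores = [rng.choice([-1.0, 1.0], size=(mk, nk)) for mk, nk in zip(m, n)]
    R = cores[0]
    for C in cores[1:]:
        R = np.kron(R, C)
    return batch @ R.T / np.sqrt(M)

ratios = []
for _ in range(100):                         # 100 realizations, as in the text
    low = project(data)
    r = [np.linalg.norm(low[i] - low[j]) / np.linalg.norm(data[i] - data[j])
         for i, j in combinations(range(n0), 2)]
    ratios.append(np.mean(r))
print(np.mean(ratios), np.var(ratios))
\end{verbatim}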
The rest of this section is organized as follows. First,
through a synthetic dataset,
the effect of different TT-ranks of the tensorized version $\mathcal{R}$ of $\mathbf{R}$ in \eqref{eq_ttfpfull} is shown,
which leads to our motivation of setting the TT-ranks to be one.
After that, we focus on the situation
with TT-ranks equal to one, and test the effect of different TT-cores. Finally, based on both high-dimensional synthetic and MNIST datasets, our TTRP are compared with related projection methods, including Gaussian TRP \cite{sun2018tensor}, Very Sparse RP \cite{li2006very} and Gaussian RP \cite{achlioptas2001database}.
\subsection{Effect of different TT-ranks}\label{section_rank}
In Definition \ref{defTTRP}, we set the TT-ranks to be one.
To explain our motivation for this setting, we investigate the effect of different TT-ranks---we herein consider the situation where the TT-ranks take $r_0=r_d=1$ and $r_k=r$ for $k=1,\dots,d-1$,
with the rank $r\in \{1,2,\ldots\}$,
and we keep the other settings in Definition \ref{defTTRP} unchanged.
For comparison, two different distributions are considered to generate the TT-cores in this part---the Rademacher distribution (our default optimal choice) and the Gaussian distribution, and the corresponding tensor train projection is denoted by rank-$r$ TTRP and Gaussian TT (studied in detail in \cite{rakhshan2020tensorized}) respectively.
For rank-$r$ TTRP,
the entries of the TT-cores $\mathcal{R}_1$ and $\mathcal{R}_d$ are independently drawn from $\{1/r^{1/4}, -1/r^{1/4}\}$ with equal probability, and each entry of $\mathcal{R}_k$, $k=2,\dots,d-1$, is independently drawn from $\{1/r^{1/2}, -1/r^{1/2}\}$ with equal probability.
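In code, these rank-$r$ cores can be generated as follows (a hypothetical helper of ours; the scalings $r^{-1/4}$ and $r^{-1/2}$ make the entries of the assembled matrix $\mathbf{R}$ have mean zero and variance one):
\begin{verbatim}
# Generating rank-r Rademacher TT-cores (sketch); boundary cores are scaled
# by r^(-1/4) and interior cores by r^(-1/2).
import numpy as np

def rank_r_cores(m_dims, n_dims, r, rng):
    d = len(m_dims)
    ranks = [1] + [r] * (d - 1) + [1]
    cores = []
    for k in range(d):
        scale = r ** -0.25 if k in (0, d - 1) else r ** -0.5
        shape = (ranks[k], m_dims[k], n_dims[k], ranks[k + 1])
        cores.append(scale * rng.choice([-1.0, 1.0], size=shape))
    return cores
\end{verbatim}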
A synthetic dataset
with dimension $N=1000$ and size $n_0=10$ is generated,
where each entry of vectors (each vector is a sample in the synthetic dataset) is independently generated through $\mathcal{N}(0,1)$.
In this test problem, we set the reduced dimension to be
$M=24$, and the dimensions of the corresponding tensor representations are set to $m_1=4,\,m_2=3,\,m_3=2$ and $n_1=n_2=n_3=10$ ($M=m_1m_2m_3$ and $N=n_1n_2n_3$).
Figure \ref{rttrp} shows the ratio of the pairwise distance of the two projection methods (computed through \eqref{ratio}).
It can be seen that the estimated mean of the ratio of the pairwise distance for rank-$r$ TTRP is typically closer to one than that of Gaussian TT, i.e., rank-$r$ TTRP
has an advantage in keeping the pairwise distances.
Clearly, for a given rank in Figure \ref{rttrp}, the estimated variance of the pairwise distance of rank-$r$ TTRP is
smaller
than that of Gaussian TT.
Moreover, focusing on rank-$r$ TTRP, the results of both the mean and the variance are not significantly different for different TT-ranks. In order to reduce the storage, we only focus on the rank-one case (as in Definition \ref{defTTRP}) in the rest of this paper.
\begin{figure}
\centering
\subfloat[][Mean for the ratio of the pairwise distance ]{\includegraphics[width=.48\textwidth]{m_rank-eps-converted-to.pdf}}\quad
\subfloat[][Variance for the ratio of the pairwise distance]{\includegraphics[width=.48\textwidth]{v_rank-eps-converted-to.pdf}}
\caption{Effect of different ranks based on synthetic data ($M=24,\,N=1000,\,m_1=4,\,m_2=3,\,m_3=2,\,n_1=n_2=n_3=10$).}
\label{rttrp}
\end{figure}
\subsection{Effect of different TT-cores}\label{cores}
A synthetic dataset
is tested to assess the effect of different distributions for TT-cores, which consists of independent vectors $\mathbf{x}^{(1)},\dots,\mathbf{x}^{(10)},$ with dimension $N=2500$, whose elements are sampled from the standard Gaussian distribution.
The following three distributions
are investigated
to construct TTRP (see Definition \ref{defTTRP}), which include the Rademacher distribution (our default choice), the standard Gaussian distribution (studied in \cite{rakhshan2020tensorized}), and the $1/3$-sparse distribution (i.e., $s=3$ in \eqref{sparse_distribution}), while the corresponding projection methods
are denoted by TTRP-RD, TTRP-$\mathcal{N}(0,1)$, and TTRP-$1/3$-sparse, respectively.
For this test problem, three TT-cores are utilized, with $m_1=M/2,\,m_2=2,\,m_3=1$ and $n_1=25,\,n_2=10,\,n_3=10$.
Figure \ref{core} shows that the estimated mean of the ratio of the pairwise distance for TTRP-RD is
very close to one, and the estimated variance of TTRP-RD is
at least one order of magnitude smaller
than that of TTRP-$\mathcal{N}(0,1)$ and TTRP-$1/3$-sparse.
These results are consistent with Theorem \ref{lemma_var}. In the rest of this paper, we focus on our default choice
for TTRP---the TT-ranks are set to one, and each element of TT-cores is independently sampled through the Rademacher distribution.
\begin{figure}
\centering
\subfloat[][Mean for the ratio of the pairwise distance ]{\includegraphics[width=.48\textwidth]{m_core-eps-converted-to.pdf}}\quad
\subfloat[][Variance for the ratio of the pairwise distance]{\includegraphics[width=.48\textwidth]{v_core-eps-converted-to.pdf}}
\caption{Three test distributions for TT-cores based on synthetic data ($N=2500$).}
\label{core}
\end{figure}
\subsection{Comparison with Gaussian TRP, Very Sparse RP and Gaussian RP}
The storage of the projection matrix and
the cost of computing $\mathbf{Rx}$ (see \eqref{eq_ttfpfull})
of our TTRP (TT-ranks equal one),
Gaussian TRP \cite{sun2018tensor}, Very Sparse RP \cite{li2006very} and Gaussian RP \cite{achlioptas2001database}, are shown in Table \ref{storage},
where $\mathbf{R}\in \mathbb{R}^{M\times N},\,M=\prod_{i=1}^{d} m_i,\,N=\prod_{j=1}^{d} n_j,\,m=\max\{m_1,m_2,\dots,m_d\}$ and $n=\max\{n_1,n_2,\dots,n_d\}$.
Note that the matrix $\mathbf{R}$ in \eqref{eq_ttfpfull} is tensorized in the TT-format,
and TTRP is efficiently achieved by the matrix-by-vector products in the TT-format (see \eqref{ycore}).
From Table \ref{storage}, it is clear that our TTRP has the smallest storage cost and requires the smallest computational cost for computing $\mathbf{Rx}$.
\begin{table}
\caption{The comparison of the storage and the computational costs.}
\label{storage}
\centering
\begin{tabular}{@{}lllll@{}}
\toprule
& Gaussian RP & Very Sparse RP & Gaussian TRP & TTRP \\
\midrule
Storage cost & $O(MN) $ & $O(M\sqrt{N})$ & $O(dMn)$ & $O(dmn)$ \\
Computational cost & $O(MN)$ & $O(M\sqrt{N})$ & $O(MN)$ & $O(dmn\hat{r}^2)$ \\
\bottomrule
\end{tabular}
\end{table}
Two synthetic datasets with size $n_0=10$ are tested---the dimension of the first one is $N=2500$ and that of the second one is $N=10^4$;
each entry of the samples is independently generated through $\mathcal{N}(0,1)$.
For TTRP and Gaussian TRP, the dimensions of tensor representations are set to: for $N=2500$, we set $n_1=25,\,n_2=10,\,n_3=10,\,m_1=M/2,\,m_2=2,\,m_3=1$; for $N=10^4$, we set
$n_1=n_2=25,\,n_3=n_4=4,\,m_1=M/2,\,m_2=2,\,m_3=1,\,m_4=1$.
We again focus on the ratio of the pairwise distance (putting the outputs of different projection methods into \eqref{ratio}), and estimate the mean and the variance for the ratio of the pairwise distance through repeating numerical experiments 100 times (different realizations for constructing the random projections, e.g., different realizations of the Rademacher distribution for TTRP).
Figure \ref{s2500} shows that the performance of TTRP is very close to that of Very Sparse RP and Gaussian RP, while the variance for Gaussian TRP is larger than that for the other three projection methods. Moreover, the variance for TTRP basically decreases as the reduced dimension $M$ increases, which is consistent with Theorem \ref{lemma_var}. Furthermore, more details are given for the case with $M=24$ and $N=10^4$ in Table \ref{estorage1} and Table \ref{estorage2}, where the value of storage is the number of nonzero entries that need to be stored.
It turns out that TTRP, with smaller storage costs, achieves a competitive performance compared with Very Sparse RP and Gaussian RP. In addition, from Table \ref{estorage2}, for $d>2$, the variance of TTRP is clearly smaller than that of Gaussian TRP, and the
storage cost of TTRP is much smaller than that of Gaussian TRP.
\begin{figure}
\centering
\subfloat[][Mean, $N=2500$.]{\includegraphics[width=.48\textwidth]{m_2500-eps-converted-to.pdf}}\quad
\subfloat[][Variance, $N=2500$.]{\includegraphics[width=.48\textwidth]{v_2500-eps-converted-to.pdf}}
\\
\subfloat[][Mean, $N=10^4$.]{\includegraphics[width=.48\textwidth]{m_10000-eps-converted-to.pdf}}\quad
\subfloat[][Variance, $N=10^4$.]{\includegraphics[width=.48\textwidth]{v_10000-eps-converted-to.pdf}}
\caption{Mean and variance for the ratio of the pairwise distance, synthetic data.}
\label{s2500}
\end{figure}
\begin{table}
\caption{The comparison of mean and variance for the ratio of the pairwise distance, and storage, for Gaussian RP and Very Sparse RP ($M=24$ and $N=10^4$).}
\label{estorage1}
\centering
\begin{tabular}{c|c|c|c|c|c}
\hline
\multicolumn{3}{c|}{Gaussian RP} & \multicolumn{3}{|c}{Very Sparse RP}\\
\hline
mean&variance&storage&mean&variance&storage\\
\hline
0.9908&0.0032&240000&0.9963&0.0025&2400\\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{
The comparison of mean and variance for the ratio of the pairwise distance, and storage, for Gaussian TRP and TTRP ($M=24$ and $N=10^4$).}
\label{estorage2}
\centering
\begin{tabular}{c|c|c|c|c|c|c|c}
\hline
\multicolumn{2}{c|}{Dimensions for tensorization}&\multicolumn{3}{c|}{Gaussian TRP} & \multicolumn{3}{|c}{TTRP}\\
\hline
$[m_1,\ldots,m_d]$ & $[n_1,\dots,n_d]$ & mean & variance & storage & mean & variance & storage\\
\hline
[6,4] & [100,100] & 0.9908 & 0.0026 & 4800 & 0.9884 & 0.0026 & 1000\\
\hline
[4,3,2]&[25,20,20]& 0.9747 & 0.0062 & 1560 & 0.9846& 0.0028 & 200\\
\hline
[3,2,2,2] & [10,10,10,10] & 0.9811 & 0.0123 & 960 & 0.9851& 0.0035 & 90\\
\hline
\end{tabular}
\end{table}
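The storage entries of Table \ref{estorage2} can be cross-checked by a short computation; here we assume, consistent with the $O(dMn)$ entry in Table \ref{storage}, that Gaussian TRP stores $M\sum_k n_k$ numbers, while rank-one TTRP stores $\sum_k m_k n_k$ numbers:
\begin{verbatim}
# Reproducing the storage column of the table above (M = 24, N = 10^4).
cases = [([6, 4], [100, 100]),
         ([4, 3, 2], [25, 20, 20]),
         ([3, 2, 2, 2], [10, 10, 10, 10])]
M = 24
for m_dims, n_dims in cases:
    trp = M * sum(n_dims)                                  # 4800, 1560, 960
    ttrp = sum(mk * nk for mk, nk in zip(m_dims, n_dims))  # 1000, 200, 90
    print(m_dims, n_dims, trp, ttrp)
\end{verbatim}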
Next, the CPU times for projecting a data point using the four
methods (TTRP, Gaussian TRP, Very Sparse RP and Gaussian RP) are assessed. Here, we
set the reduced dimension $M=1000$, and test four cases with $N=10^4$, $N=10^5$, $N=2\times 10^5$ and $N=10^6$, respectively.
The dimensions of the tensorized output
are set to $m_1=m_2=m_3=10$ (such that $M=m_1m_2m_3$), and the dimensions of the corresponding tensor representations of the
original data points are set to:
for $N=10^4$,
$n_1=25,\,n_2=25,\,n_3=16$;
for $N=10^5$, $n_1=50,\,n_2=50,\,n_3=40$;
for $N=2\times 10^5$, $n_1=80,\,n_2=50,\,n_3=50$;
for $N=10^6$, $n_1=n_2=n_3=100$.
For each case, given a data point of which elements are sampled from the standard Gaussian distribution, the simulation of projecting it to the reduced dimensional space is repeated 100 times (different realizations for constructing the random projections), and the CPU time is defined to be the average time of these 100 simulations.
Figure \ref{fig_time_comparison} shows the CPU times, where the results are obtained in
MATLAB on a workstation with Intel(R) Xeon(R) Gold 6130 CPU. It is clear that the computational cost of our TTRP is much smaller than those of Gaussian TRP and Gaussian RP for different data dimension $N$. As the data dimension $N$ increases, the computational costs of Gaussian TRP and Gaussian RP grow rapidly, while the computational cost of our TTRP grows slowly.
When the data dimension is large (e.g., $N=10^6$ in Figure \ref{fig_time_comparison}),
the CPU time of TTRP becomes smaller than that of Very Sparse RP, which is consistent with the results in Table \ref{storage}.
\begin{figure}
\centering
\includegraphics[width=3.8in,height=2.6in]{time_comparison-eps-converted-to.pdf}
\caption{A comparison of CPU time for different random projections ($M=1000$).}
\label{fig_time_comparison}
\end{figure}
Finally, we validate the performance of our TTRP approach using the MNIST dataset \cite{lecun2010mnist}.
From MNIST, we randomly take $n_0=50$ data points, each of which is a vector with dimension $N = 784$. We consider two cases for the dimensions of tensor representations: in the first case, we set $m_1=M/2,\,m_2=2,\,n_1=196,\,n_2=4$,
and in the second case, we set $m_1=M/2,\,m_2=2,\,m_3=1,\,n_1=49,\,n_2=4,\,n_3=4$. Figure \ref{minist} shows the properties of isometry and bounded variance of different random projections on MNIST.
It can be seen that TTRP satisfies the isometry property with bounded variance.
It is clear that as the reduced dimension $M$ increases, the variances of the four methods decrease,
and the variance of our TTRP is close to that of Very Sparse RP.
\begin{figure}[!ht]
\centering
\subfloat[][Mean for the ratio of the pairwise distance ]{\includegraphics[width=.48\textwidth]{m_mnist-eps-converted-to.pdf}}\quad
\subfloat[][Variance for the ratio of the pairwise distance]{\includegraphics[width=.48\textwidth]{v_mnist-eps-converted-to.pdf}}
\caption{Isometry and variance quality for MNIST data ($N=784$).}
\label{minist}
\end{figure}
\section{Conclusion}\label{Conclu}
Random projection plays a fundamental role in
conducting dimension reduction for high-dimensional datasets, where pairwise distances need to be approximately preserved. With a focus on efficient tensorized computation, this paper develops a novel tensor train random projection (TTRP) method.
Based on our analysis for the bias and the variance,
TTRP is proven to be an expected isometric projection with bounded variance. From the analysis in Theorem \ref{lemma_var}, the Rademacher distribution is shown to be an optimal choice to generate the TT-cores of TTRP. For computational convenience, the TT-ranks of TTRP are set to one, while our numerical results show that different TT-ranks do not lead to significant differences in the mean and the variance of the ratio of the pairwise distance.
Our detailed numerical studies show that, compared with standard projection methods, our TTRP with the default setting (TT-ranks equal to one and TT-cores generated through the Rademacher distribution) requires
significantly smaller storage and computational costs to achieve a competitive performance.
From numerical results, we also find that our TTRP has smaller variances than tensor train random projection methods based on Gaussian distributions.
Even though we have proven the properties of the mean and the variance of TTRP and the numerical results show that TTRP is efficient,
the upper bound in the concentration inequality \eqref{prop2_ineq} involves the dimensionality of datasets ($N$), and our future work is to give a tight bound independent of the dimensionality of datasets for the concentration inequality.
\bmhead{Acknowledgments}
The authors thank Osman Asif Malik and Stephen Becker for helpful suggestions and
discussions.
This work is supported by the National Natural Science Foundation of China (No. 12071291), the Science and Technology Commission of Shanghai Municipality (No. 20JC1414300) and the Natural Science Foundation of Shanghai (No. 20ZR1436200).
\section*{Declarations}
The authors declare that they have no known competing financial interests or personal relationships that could have
appeared to influence the work reported in this paper.
\begin{appendices}
\section{Example for $\mathbb{E}[\mathbf{y}^2(i)\mathbf{y}^2(j)]\neq \mathbb{E}[\mathbf{y}^2(i)]\mathbb{E}[\mathbf{y}^2(j)],\,i\neq j$.}\label{appendix}
If all TT-ranks of tensorized matrix $\mathbf{R}$ in \eqref{eq_ttfpfull} are equal to one, then $\mathbf{R}$ is represented as a Kronecker product of $d$ matrices,
$$
\mathbf{R}=\mathbf{R}_1 \otimes \mathbf{R}_2\otimes\cdots \otimes \mathbf{R}_d,
$$
where $\mathbf{R}_i \in \mathbb{R}^{m_i\times n_i}$, for $i=1,2,\dots,d$, has i.i.d.~entries with mean zero and variance one. We consider $d=2$, $m_1=m_2=n_1=n_2=2$; then
$$
\mathbf{y}=\mathbf{R}\mathbf{x}=(\mathbf{R}_1 \otimes \mathbf{R}_2)\mathbf{x},
$$
where
\begin{equation*}
\mathbf{R}_1=\left[ \begin{array}{cc}
a_1 & a_2\\
b_1&b_2
\end{array}
\right],\,
\mathbf{R}_2=\left[ \begin{array}{cc}
c_1 & c_2\\
d_1&d_2
\end{array}
\right].
\end{equation*}
Hence
\begin{equation*}
\mathbf{y}=\left[\begin{array}{c}
\mathbf{y}(1) \\
\mathbf{y}(2)\\
\mathbf{y}(3) \\
\mathbf{y}(4)
\end{array}
\right]=\left[ \begin{array}{c}
a_1c_1x_1 +a_1c_2x_2+a_2c_1x_3+a_2c_2x_4\\
a_1d_1x_1+a_1d_2x_2+a_2d_1x_3+a_2d_2x_4\\
b_1c_1x_1+b_1c_2x_2+b_2c_1x_3+b_2c_2x_4\\
b_1d_1x_1+b_1d_2x_2+b_2d_1x_3+b_2d_2x_4
\end{array}
\right].
\end{equation*}
We compute the following,
\begin{align*}
\text{cov}&\Big(\mathbf{y}^2(1),\mathbf{y}^2(3)\Big)\\
=&\text{cov}\left(\left(a_1c_1x_1 +a_1c_2x_2+a_2c_1x_3+a_2c_2x_4\right)^2,\left( b_1c_1x_1+b_1c_2x_2+b_2c_1x_3+b_2c_2x_4\right)^2\right)\\
=&\text{cov}\left(a_1^2c_1^2x_1^2+a_1^2c_2^2x_2^2+a_2^2c_1^2x_3^2+a_2^2c_2^2x_4^2,b_1^2c_1^2x_1^2+b_1^2c_2^2x_2^2+b_2^2c_1^2x_3^2+b_2^2c_2^2x_4^2\right)\\
&+\text{cov}\left(2a^2_1c_1c_2x_1x_2+2a^2_2c_1c_2x_3x_4,2b^2_1c_1c_2x_1x_2+2b^2_2c_1c_2x_3x_4\right)\\
=&\left(x^2_1+x^2_3\right)^2\text{var}(c^2_1)+\left(x^2_2+x^2_4\right)^2\text{var}(c^2_2)+4\left(x_1x_2+x_3x_4\right)^2\text{var}(c_1c_2)\\
=&\left(x^2_1+x^2_3\right)^2\text{var}(c^2_1)+\left(x^2_2+x^2_4\right)^2\text{var}(c^2_2)+4\left(x_1x_2+x_3x_4\right)^2 > 0,
\end{align*}
then $\mathbb{E}\left[\mathbf{y}^2(1)\mathbf{y}^2(3)\right]\neq\mathbb{E}\left[\mathbf{y}^2(1)\right]\mathbb{E}\left[\mathbf{y}^2(3)\right]$.
Generally, for some $i\neq j$, $\mathbb{E}[\mathbf{y}^2(i)\mathbf{y}^2(j)]\neq \mathbb{E}[\mathbf{y}^2(i)]\mathbb{E}[\mathbf{y}^2(j)]$.
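This covariance can also be checked by a quick Monte Carlo sketch (ours; the sample size is arbitrary): for Rademacher entries, $\text{var}(c^2_1)=\text{var}(c^2_2)=0$, so the covariance above reduces to $4(x_1x_2+x_3x_4)^2$.
\begin{verbatim}
# Empirical check of cov(y(1)^2, y(3)^2) for d = 2 and Rademacher entries;
# for x = (1,2,3,4) the covariance is 4*(x1*x2 + x3*x4)^2 = 784.
import numpy as np

rng = np.random.default_rng(2)
x = np.array([1.0, 2.0, 3.0, 4.0])
samples = []
for _ in range(100000):
    R1 = rng.choice([-1.0, 1.0], size=(2, 2))
    R2 = rng.choice([-1.0, 1.0], size=(2, 2))
    y = np.kron(R1, R2) @ x
    samples.append((y[0] ** 2, y[2] ** 2))   # y(1) and y(3), zero-indexed
s = np.asarray(samples)
print(np.cov(s.T)[0, 1])                     # close to 784
\end{verbatim}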
\end{appendices}
\section{Introduction}
In this paper, we introduce a new non-displaceability criterion for Lagrangian
submanifolds of a given compact symplectic (four-)manifold. The criterion is
based on a Hamiltonian isotopy invariant of a Lagrangian
submanifold constructed using $J$-holomorphic disks \emph{of lowest area}
with boundary on the Lagrangian. We apply our criterion to prove
non-displaceability results for Lagrangian tori in $\C P^2$ and in other del
Pezzo surfaces, for which there is no clear alternative proof using
standard results in Floer theory.
\label{sec:intro}
We shall focus on dimension~four in line with our intended applications, although we do have a clear vision of a higher-dimensional setup to which our methods generalise---it is mentioned briefly in Subsection~\ref{subsec:higher_dim}.
\subsection{Challenges in Lagrangian rigidity} A classical question in symplectic
topology, originating from Arnold's conjectures and still inspiring numerous
advances in the field, is to understand whether two given Lagrangian
submanifolds $L_1$, $L_2$ are (Hamiltonian) non-displaceable, meaning that there
exists no Hamiltonian diffeomorphism that would map $L_1$ to a Lagrangian
disjoint from $L_2$. It is sometimes referred to as the {\it Lagrangian rigidity}
problem, and the main tool to approach it is Floer theory.
Historically, most applications of Floer theory were focused on monotone
(or exact) Lagrangians, as in those cases it is foundationally easier to set up,
and usually easier to compute.
More recent developments have given access to non-displaceability results
concerning non-monotone Lagrangians. One such development is called Floer
cohomology with bulk deformations, introduced by Fukaya, Oh, Ohta and Ono
\cite{FO3Book,FO311a}. Using bulk deformations, the same authors \cite{FO312}
found a {\it continuous} family of non-displaceable
Lagrangian tori $\hat{T}_a$ in $\C P^1\times \C P^1$, indexed by $a\in(0,1/2]$.
(When we say that a Lagrangian is
non-displaceable, we mean that it is non-displaceable from itself.) For some
other recent methods, see \cite{AM13, Bor13,Wo11}.
\begin{remark}
To be able to observe such ``rigidity for families'' phenomena, it is essential
to consider non-monotone Lagrangian submanifolds, as spaces of monotone ones up
to Hamiltonian isotopy are discrete on compact symplectic manifolds.
\end{remark}
It is easy to produce challenging instances of the displaceability problem which
known tools fail to answer. For example, consider the 2:1
cover $\C P^1\times \C P^1\to \C P^2$ branched along a conic curve. Taking the
images of the above mentioned tori under this cover, we get a
family of Lagrangian tori in the complex projective plane denoted by $T_a\subset
\C P^2$ and indexed by $a\in(0,1/2]$ --- see Section \ref{sec:T_a} and \cite[Section~3]{Wu15} for details.
It turns out that the tori $T_a \subset \C P^2$ have
trivial bulk-deformed Floer cohomology for any bulk class $\mathfrak{b} \in H^2(\C P^2,
\Lambda_0)$, as we check in Proposition~\ref{prop: Bulk CP^2}. While one can
show that the tori $T_a$ are displaceable when $a>1/3$, the following remains a
conjecture.
\begin{conjecture}
\label{con:T_a}
For each $a\in (0,1/3]$, the Lagrangian torus $T_a\subset
\C P^2$ is Hamiltonian non-displaceable.
\end{conjecture}
Motivated by this and similar problems, we introduce a new approach, called
low-area Floer theory, to solve rigidity problems concerning some non-monotone
Lagrangians.
Let us list some applications of this technique.
\begin{theorem}
\label{th:T_a}
For each $a\in (0,1/9]$, the torus $T_a\subset \C P^2$ is Hamiltonian
non-displaceable from the monotone Clifford torus $T_\mathit{Cl}\subset \C P^2$.
\end{theorem}
\begin{remark}
An interesting detail of the proof, originating from Lemma~\ref{lem:cotangent_bundles}(ii), is that we use $\mathbb{Z}/8$ coefficients for our Floer-theoretic invariants,
and it is impossible to use a field, or the group $\mathbb{Z}$, instead. To place this into context, recall
that conventional Floer cohomology over finite fields can detect non-displaceable
monotone Lagrangians unseen by characteristic zero fields: the simplest example
is $\R P^n\subset \C P^n$, see e.g.~\cite{FO314}; a more sophisticated example,
where the characteristic of the field to take is not so obvious, is the Chiang
Lagrangian studied by Evans and~Lekili \cite{EL15b}, see also J.~Smith~\cite{Smi15}.
However, there are no examples in conventional Floer
theory that would require working over a torsion group which is not a field.
\end{remark}
We can also show analogous results for some other del Pezzo surfaces. They are summarised below, and the precise formulations are contained in Theorems~\ref{th: Ta in PxP},~\ref{th: Ta in BlIII}.
\begin{theorem}
In $\mathbb{C} P^1\times \mathbb{C} P^1$ and in $\mathbb{C}P^2\# 3\overline{\mathbb{C}P^2}$ with a monotone symplectic form, there exists a one-parametric family of Lagrangian tori which are non-displaceable from the standard monotone toric fibre.
\end{theorem}
The next result exhibits a {\it two-parametric} family of non-displaceable
Lagrangian tori in symplectic del Pezzo surfaces. By symplectic del~Pezzo
surfaces we mean \emph{monotone} symplectic 4-manifolds, whose classification follows
from a series of works \cite{MD90,MD96,Ta95,Ta96,Ta00Book,OhtaOno96,OhtaOno97}.
Recall that their list consists of blowups of $\C P^2$ at $0\le k\le 8$ points,
and of $\C P^1\times\C P^1$. By the correspondence between monotone symplectic
4-manifolds and complex Fano surfaces we will omit the term ``symplectic'', and
call them del Pezzo surfaces from now on.
\begin{theorem}
\label{th:non_displ_spheres}
Let $X$ be a del Pezzo surface and $S,S'\subset X$ be a pair of Lagrangian spheres with homological intersection $[S]\cdot[S']=1$. Then, for some $0<a_0,b_0<1/2$, there exist two families of Lagrangian tori indexed by $a,b$:
$$T_a,\ T'_b\subset X,\quad a\in(0,a_0),\ b\in(0,b_0),$$
lying in a neighbourhood of the sphere $S$ resp.~$S'$, and such that $T_a$ is non-displaceable from $T_b'$ for all $a,b$ as above.
\end{theorem}
In our construction, any two different tori in the same family $\{T_a\}$ will be disjoint, and the same will hold for the $\{T_b'\}$.
Recall that pairs of once-intersecting Lagrangian spheres exist inside
$k$-blowups of $\C P^2$ when $k\ge 3$. For example, one can take Lagrangian
spheres with homology classes $[E_i]-[E_j]$ and $[E_j]-[E_k]$ where
$\{E_i,E_j,E_k\}$ are three distinct exceptional divisors, as explained in \cite{SeiL4D,Ev10}.
These spheres
can also be seen from the almost toric perspective \cite{Vi16}: on an
almost toric base diagram for the corresponding symplectic del Pezzo surface,
these Lagrangian spheres project onto the segment connecting two nodes on the same monodromy line. For homology reasons, blowups of $\C P^2$ at 2 or fewer points do not contain such pairs of spheres.
\begin{comment}
\begin{remark}
For $k=1$, $\ker \omega = \langle H - 3[E_1]
\rangle \subset H_2(\mathbb{C}P^2\# \overline{\mathbb{C}P^2},\mathbb{Z})$; where $\omega$ is the (up to scaling) monotone
symplectic form in $\mathbb{C}P^2\# \overline{\mathbb{C}P^2}$, $[E_1]$ is the class of the exceptional divisor and
$H$ is the class of the line. Because the image of the intersection form
restricted to $\ker \omega$ is $8\mathbb{Z}$, there is not a Lagrangian sphere in
$\mathbb{C}P^2\# \overline{\mathbb{C}P^2}$, since a Lagrangian sphere has self-intersection $2$. By
\cite[Lemma~2.3]{Ev10}, the only classes allowed to have Lagrangian Spheres are
of the form $\pm ([E_1] - [E_2])$, where $E_i$, $ i = 1,2$ are the exceptional
divisors. Hence, there is not a pair of Lagrangian spheres with
self-intersection $1$. \end{remark}
\end{comment}
\subsection{Lagrangian rigidity from low-area Floer theory} \label{subsec: DefOC_low}
We call a symplectic manifold $X$ monotone if $\omega$ and $c_1(X)$ give positively proportional classes in $H^2(X;\mathbb{R})$.
Similarly, a Lagrangian is called monotone if the symplectic area class and the Maslov class of $L$ are positively proportional in $H^2(X,L;\mathbb{R})$. It is quite common to use a definition where the proportionality is only required over $\pi_2(X)$ or $\pi_2(X,L)$; we stick to the homological version for convenience.
Floer theory for monotone Lagrangians has abundant algebraic structure, a
particular example of which are the open-closed and closed-open {\it string
maps}. There is a non-displace\-ability criterion for a pair of monotone
Lagrangians formulated in terms of these string maps; it is due to Biran and
Cornea and will be recalled later. Our main finding can be summarised as
follows: it is possible to define a low-area version of the string maps for {\it
non-monotone} Lagrangians, and prove a version of Biran-Cornea's theorem under
an additional assumption on the areas of the disks involved. This method can
prove non-displaceability in examples having no clear alternative proof by means
of conventional Floer theory for non-monotone Lagrangians. We shall focus on
dimension 4, and proceed to a precise statement of our theorem.
Fix a ring $Q$ of coefficients; it will be used for all (co)homologies when the
coefficients are omitted. (The coefficient ring does not have to include a
Novikov parameter in the way it is done in classical Floer theory for
non-monotone manifolds; rings like $\mathbb{Z}/k\mathbb{Z}$ are good enough for our purpose.)
Let $L,K\subset X$ be two orientable, not necessarily monotone, Lagrangian
surfaces in a compact symplectic four-manifold $X$.
Denote
\begin{equation}
\label{eq:defn_a_dim4}
a=\min\{\omega(u)>0\ | \ u\in H_2(X,L;\mathbb{Z}),\ \mu(u)=2\},
\end{equation}
assuming this minimum exists. This is the least positive area of topological Maslov index~2 cycles with boundary on $L$. (For example, we currently do not allow the above set of areas to have infimum equal to $0$.)
Also, denote by $A$ the next-to-the-least such area:
\begin{equation} \label{eq: def A}
A = \min \{ \omega(u) > a \ |\ u\in H_2(X,L;\mathbb{Z}),\ \mu(u)=2\}.
\end{equation}
We assume that the minimum exists, including the possibility $A = +\infty = \min \emptyset$; the latter is the case when $L$ is monotone.
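To illustrate the definitions: for the monotone Clifford torus $T_\mathit{Cl}\subset \C P^2$, normalised so that the area of a line equals $1$, every Maslov index~2 class in $H_2(\C P^2,T_\mathit{Cl};\mathbb{Z})$ has area $1/3$, so that $a=1/3$ and $A=+\infty$.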
Fix a tame almost complex structure $J$ and a point $p_L\in L$. Let
$\{D_i^L\}_i\subset (X,L)$ be the images of all $J$-holomorphic Maslov index~2
disks of area $a$ such that $p_L\in \partial D_i^L$ and \emph{whose boundary is
non-zero in $H_1(L;\mathbb{Z})$} (their number is finite, by Gromov compactness \cite{Gr85}).
Assume that
\begin{equation}
\label{eq:disk_low_area_cancel_bdies} \sum_i \partial [D_i^L]=0\in H_1(L)
\end{equation}
and the disks are regular with respect to $J$. Recall that by convention, the above
equality needs to hold over the chosen ring $Q$. Then let
$$
\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_L])\in H_2(X)
$$
be any element whose image under the map $ H_2(X)\to H_2(X,L) $ equals
$\sum_i [D^L_i]$. We call this class the {\it low-area string invariant of $L$.}
Observe that it is defined only up to the image
$$
H_2(L)\to H_2(X),
$$
i.e.~up to $[L]\in H_2(X)$, compare Remark \ref{rem:OC_def_upto}. In the cases of interest, we will have $[L]=0$; but regardless of this, we prove in Lemma~\ref{lem:OC_invt} that
$\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_L])$ is independent of the choices we made, up to $[L]$.
Finally, consider $K$ instead of $L$ and define the numbers $b$ and $B$ analogously
to $a$ and $A$, respectively. Let $p_K$ be a point on $K$.
\begin{theorem}
\label{th:CO}
Assume that Condition~(\ref{eq:disk_low_area_cancel_bdies}) holds for $L$ and $K$,
the minima $a$, $A$, $b$, $B$ exist, and $[L]=[K]=0$. Suppose that $a+b<\min(A,B)$
and the homological intersection number below is non-zero over $Q$: $$\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_L])\cdot \mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_K])\neq 0.$$ Then $L$ and $K$ are Hamiltonian non-displaceable from each other.
\end{theorem}
Above, the dot denotes the intersection pairing $H_2(X)\otimes H_2(X)\to Q$.
We refer to Subsection~\ref{subsec:non_displ_dim4} for a comparison with Biran-Cornea's theorem in the monotone setup,
and for a connection of $\mathcal{{O}{C}}_{\mathit{low}}^{(2)}$ with the classical open-closed string map.
Note that the above intersection number
is well-defined due to the hypothesis $[L]=[K]=0$.
\begin{remark} \label{rmk:L.K=0} The condition $[L]=[K]=0$ is in fact totally
non-restrictive, due to the following two ``lower index'' non-displaceability
criteria. First, if $[L] \cdot [K] \neq 0$, then $L$ and $K$ are
non-displaceable for topological reasons. Second, if $$[L]\cdot
\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_K])\neq 0\quad \text{ or }\quad [K]\cdot \mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_L])\neq 0,$$
one can show that $K$ and $L$ are non-displaceable by a variation on
Theorem~\ref{th:CO} whose proof can be carried analogously. Finally, if the
conditions of the previous two criteria fail, the intersection number
$\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_L])\cdot \mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_K])$ is well-defined, and the reader can check
that the proof of Theorem~\ref{th:CO} still applies.
\end{remark}
Our proof of Theorem~\ref{th:CO} uses the idea of gluing holomorphic disks into
annuli and running a cobordism argument by changing the conformal parameter of
these annuli. This argument has been previously used in Abouzaid's
split-generation criterion \cite{Ab10} and in Biran-Cornea's theorem
\cite[Section 8.2.1]{BC09B}. We follow the latter outline with several important
modifications involved. The condition $a+b<\min(A,B)$, which does not arise when
both Lagrangians are monotone ($A=B= +\infty$), is used in the proof when the
disks $D_i^L$ and $D_j^K$ are glued to an annulus of area $a+b$; the condition
makes sure higher-area Maslov index~2 disks on $L$ cannot bubble off this
annulus. This condition, for example, translates to $a<1/9$ in
Theorem~\ref{th:T_a}. Additionally, we explain how to achieve transversality for
the annuli by domain-dependent perturbations -- although arguments of
this type appeared before in the context of Floer theory \cite{SeiBook08,Ab10,She11,CM07,CW13},
we decided to explain this carefully.
\begin{remark}
Our proof only uses
classical transversality theory for holomorphic curves, as opposed to virtual
perturbations required to set up conventional Floer theory for non-monotone
Lagrangians; compare \cite{Cha15A}.
\end{remark}
\begin{remark}
There is another (weaker) widely used notion of monotonicity of $X$ or $L$,
where $\omega$ is required to be proportional to $c_1(X)$ resp.~Maslov class of
$L$ only when restricted to the image of $\pi_2(X)$ resp.~$\pi_2(X,L)$ under
the Hurewicz map. It is possible to use this definition throughout the paper; moreover, the numbers $a$ and $A$, see \eqref{eq:defn_a_dim4},
\eqref{eq: def A}, can be defined considering $u \in \pi_2(X,L)$. All theorems still hold as stated, except for small
adaptations in the statement of Lemma \ref{lem:cotangent_bundles}, e.g.
requiring $c_1(X)_{|\pi_2(X)} = k \omega_{|\pi_2(X)}$.
\end{remark}
Next, we shall need a technical improvement of our theorem. Fix a field $\k$, and choose an affine subspace
$$S_L\subset H_1(L;\k).$$
\begin{remark}
The field $\k$ and the ring $Q$ appearing earlier play independent roles in the proof, and need not be the same.
\end{remark}
Consider all affine subspaces parallel to $S_L$; they have the form $S_L+l$
where $l\in H_1(L;\k)$. For each such affine subspace, select all holomorphic
disks among the $\{D^L_i\}$ whose boundary homology class over $\k$ belongs to
that subspace and are non-zero. Also, assume that
the boundaries of the selected disks cancel over $Q$. This cancellation has to
happen in groups for each affine subspace of the form $S_L+l$. The stated
condition can be rewritten as follows:
\begin{equation}
\label{eq:disk_groups_cancel_bdies}
\sum_{\mathclap{D_i^L\, :\, [\partial D_i^L]\in S_L+l}}\ \ [\partial D_i^L]=0\in H_1(L;Q)\text{ for each }l\in H_1(L;\k).
\end{equation}
This condition is in general finer than the total cancellation of boundaries (\ref{eq:disk_low_area_cancel_bdies}), and coincides with (\ref{eq:disk_low_area_cancel_bdies}) when we choose $S_L=H_1(L;\k)$.
Under Condition (\ref{eq:disk_groups_cancel_bdies}), we can define $$\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_L],S_L)\in H_2(X)$$ to be any element whose image under $H_2(X)\to H_2(X,L)$ equals
$$
\sum_{D_i^L\, :\, [\partial D_i^L]\in S_L}[ D_i^L] \in H_2(X,L).
$$
Note that here we only use the disks whose boundary classes belong to the subspace $S_L\subset H_1(L;\k)$ and ignore the rest. Again, $\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_L],S_L)\in H_2(X)$ is well-defined up to $[L]$.
The same definitions can be repeated for another Lagrangian submanifold $K$.
\begin{theorem} \label{th:CO_groups} Let $L$ and $K$ be orientable Lagrangian
surfaces and $S_L \subset H_1(L;\k)$, $S_K \subset H_1(K;\k)$ affine subspaces. Assume
that Condition~(\ref{eq:disk_groups_cancel_bdies}) holds for $L, S_L$ and $K,
S_K$, the minima $a$, $A$, $b$, $B$ exist, and $[L]=[K]=0$. Suppose that $a+b<\min(A,B)$ and
the homological intersection number below is non-zero over $Q$:
$$
\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_L],S_L)\cdot \mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_K],S_K)\neq 0.
$$
Then $L$ and $K$ are
Hamiltonian non-displaceable.
\end{theorem}
We point out that, as in Remark~\ref{rmk:L.K=0}, the condition $[L]=[K]=0$ is in fact non-restrictive; however we will not use this observation.
When $L$ or $K$ is monotone, we shall drop the subscript {\it low} from our notation for $\mathcal{{O}{C}}_{\mathit{low}}^{(2)}(\cdot)$.
\subsection{Computing low-area string invariants}
\label{subsec:compute_low_area}
There is a natural setup for producing Lagrangian submanifolds whose least-area
holomorphic disks will be known. Let us start from an orientable
{\it monotone} Lagrangian $L\subset T^*M$ disjoint from the zero section, and
for which we know the holomorphic Maslov index~2 disks and therefore can compute
our string invariant. We are still restricting to the
4-dimensional setup, so that $\dim M=2$. Next, let us apply fibrewise scaling to
$L$ in order to get a family of monotone Lagrangians $L_a\subset T^*M$ indexed
by the parameter $a\in (0,+\infty)$; we choose the parameter $a$ to be equal to
the area of the Maslov index 2 disks with boundaries on $L_a$. (The scaling changes
the area but not the enumerative geometry of the holomorphic disks.) The next
lemma, explained in Section~\ref{sec:T_a}, follows from an explicit knowledge of
holomorphic disks; recall that we drop the {\it low} subscript from the string
invariants as we are in the monotone setup.
\begin{lemma}
\label{lem:cotangent_bundles}
\begin{enumerate}
\item[(i)]
There are monotone Lagrangian tori $\hat L_a\subset T^*S^2$, indexed by $a\in(0,+\infty)$ and called Chekanov-type tori, which bound Maslov index 2 disks of area $a$ and satisfy Condition~(\ref{eq:disk_low_area_cancel_bdies}) over $Q=\mathbb{Z}/4$, such that:
\begin{equation}
\label{eq:S2_OC}
\mathcal{{O}{C}}\c([p_{\hat L_a}])=2[S^2]\in H_2(T^*S^2;\mathbb{Z}/4).
\end{equation}
Moreover, there is a 1-dimensional affine subspace $S_{\hat L_a}=\langle \beta\rangle\subset H_1(\hat L_a;\mathbb{Z}/2)$
satisfying Condition~(\ref{eq:disk_groups_cancel_bdies}) over $\k=\mathbb{Z}/2$ and $Q=\mathbb{Z}/2$, such that:
\begin{equation}
\label{eq:S2_OC_groups}
\mathcal{{O}{C}}\c([p_{\hat L_a}],S_{\hat L_a})=[S^2]\in H_2(T^*S^2;\mathbb{Z}/2).
\end{equation}
\item[(ii)]
Similarly, there are monotone Lagrangian tori $L_a\subset T^*\R P^2$, indexed by $a\in(0,+\infty)$, which bound Maslov index 2 disks of area $a$ and satisfy Condition~(\ref{eq:disk_low_area_cancel_bdies}) over $Q=\mathbb{Z}/8$, such that:
\begin{equation}
\label{eq:RP2_OC}
\mathcal{{O}{C}}\c([p_{L_a}])=[4\R P^2]\in H_2(T^*\R P^2;\mathbb{Z}/8).
\end{equation}
\end{enumerate}
In both cases, the tori are pairwise disjoint; they are contained inside any given neighbourhood of the zero-section for small enough $a$.
\end{lemma}
\begin{remark}
Note that $\R P^2$ is non-orientable, so it only has a fundamental class over $\mathbb{Z}/2$;
however, the class $[4\R P^2]$ modulo $8$ also exists.
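Concretely, by universal coefficients $H_2(\R P^2;\mathbb{Z}/8)\cong \mathrm{Tor}(H_1(\R P^2;\mathbb{Z}),\mathbb{Z}/8)\cong\mathbb{Z}/2$, and $[4\R P^2]$ denotes the generator of this group: four times a fundamental chain of $\R P^2$ has boundary divisible by $8$, hence defines a cycle modulo $8$.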
\end{remark}
Now suppose $M$ itself admits a Lagrangian embedding
$M\to X$ into some \emph{monotone} symplectic manifold $X$.
(We do not require that $M\subset X$ be monotone.)
By the Weinstein neighbourhood theorem, this embedding extends to a symplectic embedding $i\colon\thinspace U\to X$ for a neighbourhood $U\subset T^*M$ of the zero-section. Possibly after passing to a smaller neighbourhood, we can assume that $U$ is convex. By construction, the Lagrangians $L_a$ will belong to $U$ for small enough $a$:
$$L_a\subset U,\ a\in (0,a_0).$$
(The precise value of $a_0$ depends on the size of the available Weinstein neighbourhood.) We define the Lagrangians
\begin{equation}
\label{eq:induced_Lags}
K_a=i(L_a)\subset X, \ a\in (0,a_0)
\end{equation}
which are generally {\it non-monotone} in $X$ (even if $M\subset X$ were monotone).
Consider the induced map
$
i_*\colon\thinspace H_2(T^*M)\to H_2(X).
$
The next lemma explains that, for sufficiently small $a$, the low-area string
invariants for the $K_a\subset X$ are the $i_*$-images of the ones for the
$L_a\subset T^*M$. We also quantify how small $a$ needs to be.
\begin{lemma}\label{lem:disks_in_nbhood}
In the above setup, suppose
that the image of the inclusion-induced map $H_1(L;\mathbb{Z})\to H_1(T^*M;\mathbb{Z})$ is
$N$-torsion, $N\in\mathbb{Z}$. Let $M\subset (X,\omega)$ be a Lagrangian
embedding into a monotone symplectic manifold $(X,\omega)$.
Assume that $\omega$ is scaled in such a way that the area class in
$H^2(X,M;\mathbb{R})$ is integral, and $c_1(X)=k\omega\in H^2(X;\mathbb{Z})$ for some $k\in \mathbb{N}$.
Assume that
$$
a<1/(k+N).
$$
\begin{enumerate}
\item[(i)] The number $a$ indexing the torus $K_a$ equals the number $a$
defined by Equation~(\ref{eq:defn_a_dim4}). The number $A$ defined by
Equation~(\ref{eq: def A}) exists (could be $A = +\infty$) and satisfies:
$$A\ge \frac {1-(k-N)a}{N}.$$
\item[(ii)] There is a tame almost complex structure on $X$ such that all
area-$a$ holomorphic Maslov index 2 disks in $X$ with boundary on $K_a$
belong to $i(U)$, and $i$ establishes a 1-1 correspondence between them and
the holomorphic disks in $T^*M$ with boundary on $L_a$.
\end{enumerate}
In particular, when (i) and (ii) apply and $L_a\subset T^*M$ satisfy Condition~(\ref{eq:disk_low_area_cancel_bdies}), the following identity holds in $H_2(X)$:
$$
i_*(\mathcal{{O}{C}}\c([p_{L_a}]))=\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_{K_a}]).
$$
Similarly, if
$L_a\subset T^*M$ satisfy Condition~(\ref{eq:disk_groups_cancel_bdies}) then:
$$
i_*(\mathcal{{O}{C}}\c([p_{L_a}],S_{L_a}))=\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_{K_a}],S_{K_a}),
$$
where $S_{K_a}=i_*(S_{L_a})\subset H_1(K_a;\k)$.
\end{lemma}
A proof is found in Section~\ref{sec:proof_OC}. To give a preview, part~(i) is
purely topological and part~(ii) follows from a neck-stretching argument.
\begin{remark} The above constructions and proofs
work for any Liouville domain taken instead of $T^*M$. For example, there is
another class of Liouville domains containing interesting monotone Lagrangian
tori: these domains are rational homology balls whose skeleta are the so-called Lagrangian
pinwheels. The embeddings of Lagrangian pinwheels in $\C P^2$ have been studied
in \cite{ES16}, and using such embeddings we can employ the above construction
and produce non-monotone tori in $\C P^2$ which are possibly
non-displaceable. In the language of almost toric fibrations on $\C P^2$ constructed in
\cite{Vi13}, these tori live above the line segments connecting the barycentre
of a moment triangle to one of the three nodes. See also Subsection~\ref{subsec:higher_dim} for a short discussion of higher dimensions.
\end{remark}
\subsection{Applications to non-displaceability}
Now that we have explicit calculations of the low-area string invariants available, we can start applying our main non-displaceability result. Our first application is to prove Theorem~\ref{th:non_displ_spheres}.
\begin{proof}[Proof of Theorem~\ref{th:non_displ_spheres}] Let $S\subset X$ be a
Lagrangian sphere in a del~Pezzo surface $X$ with an integral symplectic form.
For concreteness, we normalise the symplectic form to make it primitive integral
(it integrates to $1$ over some integral homology class).
Let us define the Lagrangian tori
$T_a = i(\hat L_a) \subset X$ as in (\ref{eq:induced_Lags}), using the monotone tori
$\hat L_a\subset T^*S^2$ which appeared in Lemma~\ref{lem:cotangent_bundles}(i),
and the Lagrangian embedding $S\subset X$. The tori $T_a$ are indexed by
$a\in(0,a_0)$ for some $a_0>0$. Define the tori $T_b'$ indexed by $b\in(0,b_0)$
analogously, using $S'$ instead of $S$.
After decreasing $a_0$ and $b_0$ if
required, we see that the conditions from Lemma~\ref{lem:disks_in_nbhood}(i,ii)
are satisfied. Here $N = 1$ and $k\in\{1,2,3\}$ depending on the del Pezzo surface,
by the normalization of our symplectic form. Therefore by Lemma~\ref{lem:disks_in_nbhood} and
Lemma~\ref{lem:cotangent_bundles}(i) we have over $\k=Q=\mathbb{Z}/2$:
$$
\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_{T_a}],S_{T_a})=[S]\in H_2(X;\mathbb{Z}/2),\quad
\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_{T_b'}],S_{T_b'})=[S']\in H_2(X;\mathbb{Z}/2)
$$
for the choices of $S_{T_a}\subset H_1(T_a;\mathbb{Z}/2)$ and $S_{T_b'}\subset H_1(T_b';\mathbb{Z}/2)$ coming from the
one in Lemma~\ref{lem:cotangent_bundles}. Now let us apply Theorem~\ref{th:CO_groups}. We can check the condition $a+b<\min(A,B)$ using
Lemma~\ref{lem:disks_in_nbhood}(i):
$$
a + b < \tfrac{2}{k+N} \le \min(\tfrac{1 - (k-N)a}{N},\tfrac{1 - (k-N)b}{N}) \le \min(A,B)
$$
whenever $a,b < \tfrac 1{k+N}=\tfrac1{k+1}$; indeed, since $N=1$, the bound $a<\tfrac1{k+1}$ gives $1-(k-1)a>1-\tfrac{k-1}{k+1}=\tfrac{2}{k+1}$, and similarly for $b$.
Finally,
$$
\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_{T_a}],S_{T_a})\cdot
\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_{T_b'}],S_{T_b'})=[S]\cdot[S']=1\in \mathbb{Z}/2.
$$
So Theorem~\ref{th:CO_groups} implies that $T_a$ is non-displaceable from $T_b'$, for small $a,b$.
\end{proof}
We will prove Theorem~\ref{th:T_a} in Section~\ref{sec:T_a}. In fact, we shall see that the tori $T_a\subset \C P^2$ appearing in Theorem~\ref{th:T_a} can be
obtained via Formula (\ref{eq:induced_Lags}) using the monotone tori
$L_a\subset T^*\R P^2$ from Lemma~\ref{lem:cotangent_bundles}(ii), and the
Lagrangian embedding $\R P^2\subset \C P^2$ described in Section~\ref{sec: dfn
Tori}. We could have taken this as a definition, but our actual
exposition in Section~\ref{sec:T_a} is different: we introduce the tori
$T_a\subset \C P^2$ in a more direct and conventional way, and subsequently use
the existing knowledge of holomorphic disks for them to prove
Lemma~\ref{lem:cotangent_bundles}(ii).
\subsection{Higher-dimensional versions}
\label{subsec:higher_dim}
One can state generalisations of Theorems~\ref{th:CO} and~\ref{th:CO_groups} to higher dimensions. We shall omit the precise statements and instead mention a major class of potential examples where the low-area string invariants are easy to define, and which should make the details clear. The setup is as in Subsection~\ref{subsec:compute_low_area}: one starts with a monotone Lagrangian submanifold $L$ in a Liouville domain $M$, rescaled so that $L$ stays close to the skeleton of $M$. Then one takes a symplectic embedding $M\subset X$ into some symplectic manifold $X$.
Consider the composite Lagrangian embedding $L\subset X$ which is not necessarily monotone. Provided that the symplectic form on $X$ is rational and the image of $H_1(L)\to H_1(M)$ is torsion, the classes in $H_2(X,L)$ whose symplectic area is below some threshold (depending on how close we scale $L$ to the skeleton of $M$) lie in the image of $H_2(M,L)$ and therefore \emph{behave as if $L$ were monotone} (meaning that their area is proportional to the Maslov index). This makes it possible to define low-area string invariants for $L\subset X$. An analogue of Lemma~\ref{lem:cotangent_bundles} can easily be established as well; the outcome is that the low-area string invariants of $L$ in $X$ are obtained as the composition
$$
\mathcal{{O}{C}}_{\mathit{low}}\colon\thinspace
HF_*(L)\xrightarrow{\mathcal{{O}{C}}}H_*(M)\to H_*(X),
$$
where $HF_*(L)$ is the Floer homology of $L$ in $M$, and $\mathcal{{O}{C}}$ is the classical monotone closed-open string map (see Section~\ref{sec:proof_OC} for references). In this setup, it is most convenient to \emph{define} $\mathcal{{O}{C}}_{\mathit{low}}$ as the above composition. Observe that this setup does not restrict to the use of Maslov index~2 disks, and also allows higher-index disks.
Let us provide a statement which is similar to Theorem~\ref{th:non_displ_spheres}.
\begin{theorem}[sketch]
Let $L\subset M$ and $K\subset N$ be monotone Lagrangian submanifolds in Liouville domains such that the images of $H_1(L)\to H_1(M)$ and $H_1(K)\to H_1(N)$ are torsion. Let $X$ be a monotone symplectic manifold, and $M,N\subset X$ be symplectic embeddings.
Suppose there are classes $\alpha\in HF_*(L)$, $\beta\in HF_*(K)$ such that the following pairing in $X$ is non-zero:
$$\mathcal{{O}{C}}_{\mathit{low}}(\alpha)\cdot \mathcal{{O}{C}}_{\mathit{low}}(\beta)\neq 0.$$
By the above pairing, we mean the composition of the intersection product with the projection: $H_*(X)\otimes H_*(X)\to H_*(X)\to H_0(X)$.
Let $L_a,K_b\subset X$ be the Lagrangians obtained by scaling $L,K$ towards the skeleta within their Liouville domains, and then embedding them into $X$ via $M,N\subset X$.
Then
$L_a,K_b\subset X$ are Hamiltonian non-displaceable from each other for sufficiently small $a,b$.\qed
\end{theorem}
We remark that the above theorem does not incorporate the modification we made in Theorem~\ref{th:CO}, namely to only consider disks with homologically non-trivial boundary; we also have not explored the possibility to re-organise disks in groups like in Theorem~\ref{th:OCCO_monot}.
The idea of proof is to establish a version of Lemma~\ref{lem:cotangent_bundles} saying that $\mathcal{{O}{C}}_{\mathit{low}}$ defined as the above composition coincides with low-area disk counts on the actual Lagrangian $L\subset X$.
The key observation is that
for any interval $I=(r,s)\subset \mathbb{R}$, one can scale $L$ sufficiently close to the skeleton so that all elements in $H_2(X,L)$ of Maslov index within $I$ \emph{and} sufficiently low area, come from $H_2(M,L)$. This can be improved to $(-\infty,s)$ for holomorphic disks by taking $r=-n$, because holomorphic disks of Maslov index $\mu<-n$ generically do not exist.
Given this, one shows that the proof of Biran-Cornea's theorem can be carried out using only low-area curves; this forces all the possible bubbling to happen in the monotone regime. The details are left to the reader.
Currently, there seems to be a lack of interesting computations of open-closed maps for monotone Lagrangians in higher-dimensional Liouville domains; this prevents us from providing concrete applications of the higher-dimensional story. We believe such applications will become available in the future.
\subsection*{Structure of the article}
In Section~\ref{sec:proof_OC} we prove Theorems~\ref{th:CO}
and~\ref{th:CO_groups}, discuss their connection with the monotone case and
some generalisations. We also prove Lemma~\ref{lem:disks_in_nbhood}.
In Section~\ref{sec:T_a} we prove Lemma~\ref{lem:cotangent_bundles},
Theorem~\ref{th:T_a} and the related theorem for $\C P^1\times\C P^1$ and $\mathbb{C}P^2\# 3\overline{\mathbb{C}P^2}$. Then we
explain why Floer theory with bulk deformation does not readily apply to
the~$T_a\subset \C P^2$.
\subsection*{Acknowledgements} We are grateful to Paul Biran, Georgios Dimitroglou Rizell, Kenji Fukaya, Yong-Geun~Oh, Kaoru~Ono,
and~Ivan~Smith for useful discussions and their interest in this work; we are also thankful to Denis Auroux, Dusa McDuff and the referees for helping us spot some mistakes in the previous version of the article, and suggesting exposition improvements.
Dmitry Tonkonog~is funded by the Cambridge Commonwealth, European and
International Trust, and acknowledges travel funds from King's College,
Cambridge. Part of this paper was written during a term spent at the
Mittag-Leffler Institute, Sweden, which provided an excellent research environment and financial support for the stay.
Renato Vianna is funded by the Herchel Smith Postdoctoral Fellowship from the
University of Cambridge.
\section{The non-displaceability theorem and its discussion}
\label{sec:proof_OC}
In this section we prove Theorems~\ref{th:CO} and~\ref{th:CO_groups}, and further discuss them.
We conclude by proving Lemma~\ref{lem:disks_in_nbhood}, which is somewhat unrelated to the rest of the section.
\subsection{The context from usual Floer theory}
\label{subsec:non_displ_dim4}
We start by explaining Biran-Cornea's non-displaceability criterion for monotone Lagrangians and its relationship with Theorems~\ref{th:CO} and~\ref{th:CO_groups}.
We assume that the reader is familiar with the language of {\it pearly trajectories} to be used here, and shall skip the proofs of some facts we mention if we do not use them later.
Recall that one way of defining the Floer cohomology $HF^*(L)$ of a monotone
Lagrangian $L\subset X$ uses the pearl complex of Biran and Cornea
\cite{BC07,BC09B, BC09A}; its differential counts pearly trajectories consisting of
certain configurations of Morse flowlines on $L$ interrupted by holomorphic
disks with boundary on $L$. A remark about conventions: Biran and Cornea write $QH^*(L)$ instead of $HF^*(L)$; we do not use the Novikov parameter, therefore the gradings are generally defined modulo 2.
Also recall that the basic fact that if $HF^*(L)\neq 0$ then $L$ is non-displaceable has no intrinsic proof within the language of pearly
trajectories. Instead, the proof uses the isomorphism relating $HF^*(L)$
to the (historically, more classical) version of Floer cohomology that uses Hamiltonian perturbations. Nevertheless, there is
a different non-displaceability statement whose proof is carried out completely
in the language of holomorphic disks. That statement employs an additional structure, namely the maps $$ \mathcal{{O}{C}}\colon\thinspace
HF^*(L)\to QH^*(X),\quad \mathcal{{C}{O}}\colon\thinspace QH^*(X)\to HF^*(L) $$ defined by counting
suitable pearly trajectories in the ambient symplectic manifold $X$. These maps
are frequently called the open-closed and the closed-open (string) map,
respectively; note that Biran and Cornea denote them by $i_{L}$, $j_{L}$. The statement we referred to above is the following one.
\begin{theorem}[{\cite[Theorem~2.4.1]{BC09A}}]
\label{th:OCCO_monot}
For two monotone Lagrangian submanifolds $L,K$ of a closed monotone symplectic
manifold $X$, suppose that the composition
\begin{equation}
\label{eq:OCCO}
HF^*(L)\xrightarrow{\mathcal{{O}{C}}} QH^*(X)\xrightarrow{\mathcal{{C}{O}}} HF^*(K)
\end{equation}
does not vanish. Then $L$ and $K$ are Hamiltonian non-displaceable. \qed
\end{theorem}
In this paper we restrict ourselves to dimension four, so let us first discuss
the monotone setting of Theorem~\ref{th:OCCO_monot} in this dimension. Recalling
that a del~Pezzo surface has $H_1(X)=0$, we see that there are three
possible ways for (\ref{eq:OCCO}) not to vanish.
To explain this, it is convenient to pass to chain level: let
$CF^*(L)$ be the Floer chain complex of $L$ (either the pearl complex or the Hamiltonian Floer complex) and $CF^*(X)$ be the Morse (or Hamiltonian Floer) chain complex of $X$, both equipped with Morse $\mathbb{Z}$-gradings. For the definition of the closed-open maps, see \cite{BC07,BC09A} in the pearl setup, and \cite{She13,Ritter2016} among others in the Hamiltonian Floer setup. Below we will stick to the setup with pearls.
First, we can consider the
topological part of (\ref{eq:OCCO}):
\begin{equation*}
CF^0(L)\xrightarrow[\mu=0]{\mathcal{{O}{C}}} CF^2(X)\xrightarrow[\mu=0]{\mathcal{{C}{O}}} CF^2(K).
\end{equation*}
In this case, as indicated by the $\mu=0$ labels, the relevant
string maps necessarily factor through $CF^2(X)$ and are topological,
i.e.~involve pearly trajectories containing only constant Maslov index 0 disks.
The composition above computes the homological intersection $[L]\cdot [K]$
inside $X$, where $[L],[K]\in H_2(X)$. If $[L]\cdot[K] \neq 0$, then $L$ and $K$ are topologically non-displaceable;
otherwise we proceed to the next possibility;
compare Remark \ref{rmk:L.K=0}.
Observe that we are using the cohomological grading convention: pearly trajectories of total
Maslov index $\mu$ contribute to the degree $-\mu$ part of $\mathcal{{C}{O}}$, and to the
degree $(\dim L-\mu)$ part of $\mathcal{{O}{C}}$ on cochain level.
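For instance, since $\dim L=\dim K=2$ here, the $\mu=2$ part of $\mathcal{{O}{C}}$ preserves degree while the $\mu=2$ part of $\mathcal{{C}{O}}$ lowers it by two, which matches the degrees in the factorisations displayed above and below.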
The second possibility for $\mathcal{{C}{O}}\circ \mathcal{{O}{C}}$ not to vanish is via the contribution
of pearly trajectories whose total Maslov index sums to two; the relevant parts
of the string maps factor as shown below. In the examples we are aiming at, we are going to have $[L]=[K]=0$, so
the $\mu=0$ parts below will vanish on homology level.
\begin{equation*}
\begin{array}{l}
CF^0(L)\xrightarrow[\mu=0]{\mathcal{{O}{C}}}
CF^2(X)\xrightarrow[\mu=2]{\mathcal{{C}{O}}}
CF^0(K),
\vspace{0.1cm}
\\
CF^2(L)\xrightarrow[\mu=2]{\mathcal{{O}{C}}}
CF^2(X)\xrightarrow[\mu=0]{\mathcal{{C}{O}}}
CF^2(K).
\end{array}
\end{equation*}
The remaining part of $\mathcal{{C}{O}}\circ\mathcal{{O}{C}}$ breaks as a sum of three compositions factoring as follows:
\begin{equation}
\label{eq:OCCO_diamond}
\xymatrix{
&
CF^0(X)\ar_-\cong^{\mu=0}[dr]
&
\\
CF^2(L)\ar^{\mu=2}[r]\ar^{\mu=4}[ur]\ar^-\cong_{\mu=0}[dr]
&
CF^2(X)\ar^{\mu=2}[r]
&
CF^0(K)
\\
&
CF^4(X)\ar_{\mu=4}[ur]
&
}
\end{equation}
The labels here indicate the total Maslov index of holomorphic disks present in
the corresponding pearly trajectories; this time the $\mu=0$ parts are
isomorphisms on homology. Therefore, to compute $\mathcal{{C}{O}}\circ\mathcal{{O}{C}}|_{HF^2(L)}$ we need to know the
Maslov index~4 disks. We wish to avoid this, keeping in mind that in our
examples we will know only the Maslov index 2 disks.
To this end, we perform the following trick to ``single out'' the Maslov index~2 disk contribution in the diagram above. Let us modify the
definition of $\mathcal{{C}{O}}$, $\mathcal{{O}{C}}$ by only considering $J$-holomorphic
disks whose boundary is non-zero in $H_1(L;\mathbb{Z})$ or $H_1(K;\mathbb{Z})$.
Denote this modified map by $\mathcal{{O}{C}}\c$ and consider the composition:
\begin{equation}
\label{eq:CO_comp_bdy_condition}
CF^2(L)\xrightarrow[\mu=2]{\mathcal{{O}{C}}\c}
CF^2(X)\xrightarrow[\mu=2]{\mathcal{{C}{O}}\c}
CF^0(K)
\end{equation}
Here the modified maps $\mathcal{{O}{C}}\c$, $\mathcal{{C}{O}}\c$ by definition count pearly trajectories
contributing to the middle row of (\ref{eq:OCCO_diamond}), i.e.~containing a
single disk, of Maslov index~2, with the additional condition that the boundary
of that disk is homologically non-trivial. The superscript $(2)$ reflects that
we are only considering Maslov index~2 trajectories, ignoring the Maslov index~0
and~4 ones; the condition about non-zero boundaries is not reflected by our
notation. We claim that if
composition~(\ref{eq:CO_comp_bdy_condition}) does not vanish, then $K,L$ are
non-displaceable.
This modified non-displaceability criterion we have just formulated is the specialisation of Theorem~\ref{th:CO} to the case when both Lagrangians are monotone. Indeed,
if both $L,K$ are monotone and $[L]\cdot [K]=0$, then
$$\mathcal{{O}{C}}\c([p_L])\cdot \mathcal{{O}{C}}\c([p_K])\neq 0$$ if and only if the composition (\ref{eq:CO_comp_bdy_condition}) is non-zero; compare Lemma~\ref{lem:configs_CO}.
A proof can also be traced using the original approach \cite[Theorem~8.1]{BC09B}---see the proof of Theorem~\ref{th:CO} in Subsection~\ref{subsec:Proof_ThCO} for further details.
Speaking of our second result, Theorem~\ref{th:CO_groups}, in the monotone case it corresponds to another refinement of Biran-Cornea's theorem which does not seem to have appeared in the literature. Note that this refinement is not achieved by deforming the Floer theories of $L$ and $K$ by local systems.
\begin{remark} Recall that, for a two-dimensional monotone Lagrangian
$L$ equipped with the trivial local system, we have
$$\sum_j [\partial D_j^L]=0$$ if and
only if $HF^*(L)\neq 0$, and in the latter case $HF^*(L)\cong H^*(L)$.
Indeed, $\sum_j [\partial D_j^L]$
computes the Poincar\'e dual of the Floer differential $d([p_L])$
where $[p_L]$ is the generator of $H^2(L)$;
if we pick a perfect Morse function on $L$, then $p_L$ is geometrically realised by its maximum.
If $d([p_L])=0$, then by duality the unit is not hit by the differential, hence $HF^*(L)\neq 0$. For a non-monotone $L$, the condition $\sum_i
[\partial D_i^L]=0$ from Equation~(\ref{eq:disk_low_area_cancel_bdies}) above is a natural low-area version of the non-vanishing of Floer cohomology.
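For example, by a standard computation the Clifford torus in $\C P^2$ bounds three families of Maslov index~2 holomorphic disks whose boundary classes sum to zero in $H_1$, in line with the non-vanishing of its Floer cohomology.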
\end{remark}
\begin{remark}
\label{rem:OC_def_upto}
Recall that $\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_L])\in H_2(X)$ is well-defined up to $[L]$ as explained in Section~\ref{sec:intro}.
This is analogous to a well known phenomenon in monotone Floer theory:
recall that there
is no canonical identification between $HF^*(L)$ and $H^*(L)$, even when they
are abstractly isomorphic \cite[Section~4.5]{BC09A}. In particular, $HF^*(L)$
is only $\mathbb{Z}/2$-graded and the element $[p_L]\in HF^*(L)$ corresponding to the
degree 2 generator of $H^2(L)$ is defined up to adding a multiple of the unit
$1_L\in HF^*(L)$. Recall that $\mathcal{{O}{C}}(1_L)$ is dual to $[L]\in H_2(X)$, and this
matches with the fact that $\mathcal{{O}{C}}([p_L])$, as well as $\mathcal{{O}{C}}\c([p_L])$, is defined
up to $[L]$. \end{remark}
\begin{remark}
Charette~\cite{Cha15B} defined quantum Reidemeister torsion for monotone
Lagrangians whose Floer cohomology vanishes. While it is possible that his
definition generalises to the non-monotone setting, making our tori
$T_a\subset \C P^2$ valid candidates as far as classical Floer theory is concerned,
it is shown in \cite[Corollary 4.1.2]{Cha15B} that quantum
Reidemeister torsion is always trivial for tori.
\end{remark}
\subsection{Proof of Theorem~\ref{th:CO}} \label{subsec:Proof_ThCO}
Our proof essentially follows~\cite[Theorem~8.1]{BC09B} with the following
differences: we check that certain unwanted bubbling, impossible in the monotone
case, does not occur in our setting given that $a+b< \min(A,B)$; we include an
argument which ``singles out'' the contribution of Maslov index~2 disks with
non-trivial boundary from that of Maslov index~4 disks; and relate the string
invariants $\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_L])$, $\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_K])$ defined in
Section~\ref{sec:intro} to the ones appearing more naturally in the pearly
trajectory setup. We assume that the reader is familiar with the setup of moduli spaces of pearly trajectories \cite{BC09B}.
\begin{remark}
We point out that \cite[Theorem 8.1]{BC09B} also appears as \cite[Theorem~2.4.1]{BC09A}, and in the latter reference the authors take a different approach to a proof based on superheaviness.
\end{remark}
\begin{lemma}
\label{lem:OC_invt}
Under the assumptions of Subsection \ref{subsec: DefOC_low}, the string invariants
\linebreak[4]
$\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_L])$ and $\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_L],S_L)$ are independent of
the choice of $J$ and the marked point $p_L$.
\end{lemma}
\begin{proof}
First, we claim that for a generic 1-parametric family of almost complex
structures, $L$ will not bound
holomorphic disks of Maslov index $\mu<0$. Indeed, for simple disks this follows
for index reasons (recall that $\dim X=4$); next, non-simple disks with $\mu<0$
must have an underlying simple disk with $\mu<0$ by the decomposition theorem of
Kwon and Oh \cite{KO00} and Lazzarini \cite{La00}, so the non-simple ones do not
occur as well.
Therefore, the only way disks with $\mu=2$ and area $a$ can bubble is into a
stable disk consisting of $\mu=0$ and $\mu=2$ disks; the latter $\mu=2$ disk
must have positive area less than $a$. However, such $\mu=2$ disks do not exist
by Condition~(\ref{eq:defn_a_dim4}). We conclude that Maslov index~2, area $a$
disks cannot bubble as we change $J$, and because the string invariants are
defined in terms of these disks, they indeed do not change.
A similar argument (by moving the point along a generic path) shows the independence of $p_L$.
\end{proof}
\begin{remark} \label{rmk:LemSimilarto}
In the monotone case, the fact that the counts of Maslov index~2 holomorphic disks
are invariant was first pointed out in \cite{ElPo93}. A rigorous proof uses the
works \cite{KO00,La00} as above, and is well known. Similar results for
(possibly) non-monotone Lagrangians in dimension~4 appear in \cite[Lemmas~2.2,
2.3]{Cha15A}.
\end{remark}
Suppose there exists a Hamiltonian diffeomorphism $\phi$ such that $\phi(L)\cap
K=\emptyset$, and rename $\phi(L)$ as $L$, so that $L\cap K=\emptyset$.
Pick generic metrics and Morse functions $f_1,f_2$ on $L,K$. We assume that the
functions $f_1,f_2$ are perfect (this simplifies the proof, but is not essential);
such functions exist because $L,K$ are two-dimensional and orientable. Consider the
moduli space $\mathcal{M}$ of configurations (``pearly trajectories'') of the three types
shown in Figure~\ref{fig:def_Moduli}, with the additional condition that the
total boundary homology classes of these configurations are non-zero both in
$H_1(L;\mathbb{Z})$ and $H_1(K;\mathbb{Z})$. (By writing ``total'' we mean that if the
configuration's boundary on a single Lagrangian has two components, their sum
must be non-zero.)
\begin{figure}[h]
\includegraphics{Fig_M_defn}
\caption{The moduli space $\mathcal{M}$ consists of pearly trajectories of these types.}
\label{fig:def_Moduli}
\end{figure}
Here are the details on the pearly trajectories from Figure~\ref{fig:def_Moduli} that we use to define $\mathcal{M}$:
\begin{itemize}
\item The Maslov index and the
area of each curve are prescribed in the figure;
\item The conformal parameter of each annulus is allowed to take
any value $R\in(0,+\infty)$. Recall that the domain of an annulus with conformal parameter $R$ can be realised as
$\{z\in \mathbb{C}:1\le |z|\le e^R\}$;
\item Every flowline has a time-length parameter $l$
that can take any value $l\in[0,+\infty)$.
The configurations with a contracted flowline (i.e.~one with $l=0$) correspond to interior points of $\mathcal{M}$, because gluing the disk to
the annulus is identified with $l$ becoming negative;
\item The annulus has two marked points, one on each boundary component, that are \emph{fixed in the domain}. This means that if we identify an annulus with conformal parameter $R$ with $\{z\in \mathbb{C}:1\le |z|\le e^R\}$, then the two marked points can be e.g.~$1$ and $e^R$;
\item The disks also have marked points as shown in the figure. Because the disks are considered up to reparametrisation, the marked points can also be assumed to be fixed in the domain;
\item The curves evaluate to the fixed points
$p_K\in K$, $p_L\in L$ at the marked points, and satisfy the Lagrangian boundary conditions, as shown in Figure~\ref{fig:def_Moduli};
\item The non-vanishing of total boundary homology classes (stated above) must be satisfied.
\end{itemize}
Recall that the Fredholm index of unparametrised holomorphic annuli without
marked points and with free conformal parameter equals the Maslov index.
Computing the rest of the indices, one shows that $\mathcal{M}$ is a smooth 1-dimensional
oriented manifold \cite[Section~8.2]{BC09B}, assuming $\mathcal{M}$ is regular. The
regularity of the annuli can be achieved by a small domain-dependent
perturbation of the $J$-holomorphic equation; we give a detailed discussion of it in the
next subsection. Now, we continue with the proof assuming the regularity of the annuli (and hence of $\mathcal{M}$, because the transversality for disks is classical).
The space $\mathcal{M}$ can be compactified by adding configurations with broken
flowlines as well as configurations corresponding to the conformal parameter $R$ of the annulus becoming $0$ or $+\infty$.
We describe each of the three types of configurations separately and determine
their signed count.
{\it (i)} The configurations with broken flowlines are shown in Figure~\ref{Fig:M_Morse_break}. As before, they are subject to the condition that the total boundary homology classes of the configuration are non-zero both in $H_1(L;\mathbb{Z})$ and $H_1(K;\mathbb{Z})$. The annuli
have a certain conformal parameter $R_0$ and the breaking is an index 1 critical
point of $f_i$ \cite[Section~8.2.1, Item (a)]{BC09B}.
\begin{figure}[h]
\includegraphics{Fig_M_Morse_break}
\caption{Configurations with broken flowlines, called type (i).}
\label{Fig:M_Morse_break}
\end{figure}
\noindent The count of the sub-configurations consisting of the disk and the
attached flowline vanishes: this is a Morse-theoretic restatement of
Condition~(\ref{eq:disk_low_area_cancel_bdies}) saying that $\sum_i
[\partial D_i^L]=\sum_j [\partial D_j^K]=0$. Hence (by the perfectness of the $f_i$) the count of
the whole configurations in Figure~\ref{Fig:M_Morse_break} also vanishes, at
least if we ignore the condition of non-zero total boundary. Separately, the
count of configurations in Figure~\ref{Fig:M_Morse_break} whose total boundary
homology class is zero either in $L$ or $K$, also vanishes. Indeed, suppose for
example that the $\omega=a$ disk in Figure~\ref{Fig:M_Morse_break} (left) has
boundary homology class $l \in H_1(L;\mathbb{Z})$ and the lower boundary of the annulus
has class $-l$; then the count of the configurations in the figure with that
disk and that annulus equals the homological intersection $(-l)\cdot l=0$,
since $L$ is an oriented surface. We
conclude that the count of configurations in the above figure whose total
boundary homology classes are {\it non-zero}, also vanishes.
{\it (ii)} The configurations with $R=0$ contain a curve whose domain is an
annulus with a contracted path connecting the two boundary components. The
singular point of this domain must be mapped to an intersection point $K\cap L$,
so these configurations do not exist if $K\cap L=\emptyset$ \cite[Section~8.2.1, Item (c)]{BC09B}.
{\it (iii)} The configurations with $R=+\infty$ correspond to an annulus
breaking into two disks, one with boundary on $K$ and the other with boundary on
$L$ \cite[Section~8.2.1, Item (d)]{BC09B}. One of the disks can be constant, and the possible configurations are shown
in Figure~\ref{fig:config_R_infty}.
\begin{figure}[h]
\includegraphics{Fig_M_R_infty}
\caption{The limiting configurations when $R=+\infty$, called type (iii).}
\label{fig:config_R_infty}
\end{figure}
In fact, there is another potential annulus breaking at $R=+\infty$ that we have
ignored: the one into a Maslov index 4 disk on one Lagrangian and a (necessarily
constant) Maslov index 0 disk on the other Lagrangian, see
Figure~\ref{fig:config_R_infty_unwanted}. These broken configurations cannot
arise from the configurations in $\mathcal{M}$ by the non-zero boundary condition imposed
on the elements of this moduli space. The fact that a Maslov index 0 disk has to
be constant is due to the generic choice of $J$.
\begin{figure}
\includegraphics{Fig_M_R_infty_unwanted}
\caption{The limiting configurations for $R=+\infty$ which are impossible by the non-zero boundary condition.}
\label{fig:config_R_infty_unwanted}
\end{figure}
\begin{lemma}
\label{lem:configs_CO}
The count of configurations in Figure~\ref{fig:config_R_infty} equals $\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_L])\cdot \mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_K])$ as defined in Section~\ref{sec:intro}.
\end{lemma}
\begin{proof} In the right-most configuration in
Figure~\ref{fig:config_R_infty}, forget the $\omega=b$ disk so that one endpoint
of the $\nabla f_1$-flowline becomes free; let $C^L$ be the singular 2-chain on
$L$ swept by these endpoints. In other words, for each disk $D_i^L$, consider the
closure of
$$\mathcal{C}_i^L = \{ \phi_l(x) \in L\ :\ x \in \partial D_i^L,\ \phi_l \ \text{is the time-}l \ \text{flow of}
\ \nabla f_1,\ l \ge 0 \}$$ oriented so that the component of $\partial \mathcal{C}_i^L$ corresponding
to $\partial D_i^L$ has the same orientation as $\partial D_i^L$. Then $C^L = \bigcup_i
\overline{\mathcal{C}_i^L}$.
We claim that $\partial C^L=\sum_{i}\partial
D_i^L$ on chain level. Indeed, the boundary $\partial C^L$ corresponds to zero-length
flowlines that sweep $\sum_{i}\partial D_i^L$, and to flowlines broken at an index
1 critical point of $f_1$, shown below:
\vspace{0.1cm}
\noindent
\hspace*{\fill}\includegraphics{Fig_C_Morse_break}
\hspace*{\fill}
\vspace{0.1cm}
\noindent
The endpoints of these configurations sweep the zero 1-chain. Indeed, we are given that
$\sum_{i}[\partial D_i^L]=0$, so the algebraic count of the appearing index~1 critical points represents a null-cohomologous Morse cocycle; therefore this count equals zero by the perfectness of $f_1$. It follows that $\partial C^L=\sum_{i}\partial
D_i^L$.
Similarly, define the 2-chain $C^K$ on $K$, $\partial C^K=\sum_j\partial D_j^K$, by
forgetting the $\omega=a$ disk in the second configuration of type (iii) above,
and repeating the construction. It follows that the homology class
$\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_L])$ from Subsection~\ref{subsec: DefOC_low} can be represented by the cycle
$$(\cup_i D_i^L)\cup C^L,$$ and
similarly $\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_K])$ can be represented by $(\cup_j D_j^K)\cup C^K$.
The intersection number of these representing cycles can be expanded into four chain-level
intersections:
$$
\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_L])\cdot \mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_K])=(\cup_i D_i^L)\cdot (\cup_j
D_j^K)+(\cup_i D_i^L)\cdot C^K+ C^L\cdot(\cup_j D_j^K)+C^L\cdot C^K.
$$
The last summand vanishes because $L\cap K=\emptyset$, and the other summands
correspond to the three configurations of type (iii) pictured earlier.
\end{proof}
\begin{remark}
Note that the equality between the intersection number $\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_L])\cdot
\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_K])$ and the count of the $R=+\infty$ boundary points of $\mathcal{M}$ holds
integrally, i.e.~with signs. This follows from the general set-up of
orientations of moduli spaces in Floer theory, which are consistent with taking
fibre products and subsequent gluings, see e.g.~\cite[Appendix C]{Ab10} for the case most relevant to us. For example, in our case the signed
intersection points between a pair of holomorphic disks can be seen as the
result of taking fibre product along evaluations at interior marked points;
therefore these intersection signs agree with the orientations on the moduli
space of the glued annuli.
\end{remark}
If the moduli space $\mathcal{M}$ is completed by the above configurations (i)--(iii),
it becomes compact. Indeed, by the condition $a+b< \min(A,B)$, Maslov index 2
disks on $L$ with area higher than $a$ cannot bubble. Disks of Maslov index
$\mu\ge 4$ cannot bubble (for finite $R$) on either Lagrangian because the rest
of the configuration would contain an annulus of Maslov index $\mu \le 0$
passing through a fixed point on the Lagrangian, and such configurations have
too low index to exist generically (the annuli can be equipped with a generic
domain-dependent perturbation of $J$, hence are regular). Similarly, holomorphic
disks of Maslov index $\mu\le 0$ cannot bubble as they do not exist for generic
perturbations of the initial almost complex structure $J$. (This is true for
simple disks by the index formula, and follows for non-simple ones from the
decomposition theorems \cite{La11,KO00}, as such disks must have an underlying
simple disk with $\mu\le 0$.) Side bubbles of Maslov index 2 disks (not
carrying a marked point with a $p_K$ or a $p_L$ constraint) cannot occur because
the remaining Maslov index 2 annulus, with both the $p_K$ and $p_L$ constraints,
would not exist generically. Finally, as usual, sphere bubbles cannot happen in a
1-dimensional moduli space because such bubbling is a codimension 2 phenomenon in the monotone case.
By the compactness of $\mathcal{M}$, the signed count of its boundary points (i)--(iii)
equals zero. We therefore conclude from Lemma~\ref{lem:configs_CO} and the
preceding discussion that $\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_L])\cdot \mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_K])=0$, which contradicts the
hypothesis of Theorem~\ref{th:CO}.\qed
\subsection{Transversality for the annuli}
\label{subsec:trans}
In the proof of
Theorem~\ref{th:CO}, we mentioned that in order to make the annuli appearing in the moduli space $\mathcal{M}$
regular, we need to use a domain-dependent perturbation of the $J$-holomorphic
equation on those annuli. We wish to make this detail explicit.
First, let us recall the moduli space of domains used to define $\mathcal{M}$, see Figure~\ref{fig:def_Moduli}, and its compactification. Recall that the annuli in $\mathcal{M}$ were allowed to have free conformal parameter
$$R\in(0,+\infty),$$
and the limiting cases $R=0$, $R=+\infty$ were included in the compactification of $\mathcal{M}$ as explained in the proof above.
\begin{figure}[h]
\includegraphics[]{Fig_Perturb_Annuli}
\caption{The annuli $A_R$ with various conformal parameters, and a compact disk $D=D_R$ supporting the domain-dependent almost complex structure $J_D$.}
\label{fig:perturb_annuli}
\end{figure}
Now, for each conformal parameter $R\in(0,+\infty)$, we pick some closed disk inside the corresponding annulus:
$$
D_R\subset A_R=\{1\le |z|\le e^R\}
$$
depending smoothly on $R$, and disjoint from the regions where the domains $A_R$ degenerate as $R\to+\infty$ or $R\to 0$. To see what the last condition means, recall that the annulus $A_R$ used for defining $\mathcal{M}$ has two fixed boundary marked points; we are assuming they are the points $1$ and $e^R$. With these marked points, the annulus has no holomorphic automorphisms. We can then uniformise the family of annuli $\{A_R\}_{R\in(0,\infty)}$ as shown in Figure~\ref{fig:perturb_annuli}. To do so, we start with an annulus of a fixed conformal parameter, say $R=1$. The annuli with $R<1$ are obtained from the $R=1$ annulus by performing a slit along a fixed line segment $C\subset A_{1}$ connecting the two boundary components. The annuli for $R>1$ are obtained by stretching the conformal structure of the $R=1$ annulus in a fixed neighbourhood of some core circle $S\subset A_{1}$. In this presentation, all annuli $A_{R}$ have a common piece of domain away from a neighbourhood of $C\cup S$, and it is notationally convenient to choose $D_R\subset A_R$ to be a fixed closed disk $D$ inside that common domain, for each $R$. See Figure~\ref{fig:perturb_annuli}.
Next, let $\mathcal{J}$ denote the space of compatible almost complex structures on $X$. Let $J\in \mathcal{J}$ be the almost complex structure we have been using in the proof of Theorem~\ref{th:CO}---namely, we are given that the relevant $J$-holomorphic disks are regular.
Now pick some domain-dependent almost complex structure
$$
J_{A_R}\in C^{\infty} (A_R,\mathcal{J}),\quad J_{A_R}\equiv J \text{ away from }D_R.
$$
Using the above presentation where all the $D_R$ are identified with a fixed disk $D$, it is convenient to take $J_{A_R}|_{D_R}$ to be the same domain-dependent almost complex structure
$$J_D\in C^\infty(D,\mathcal{J})$$ for all $R$,
such that $J_D\equiv J$ near $\partial D$.
In our (modified) definition of $\mathcal{M}$, we use the following domain-dependent perturbation of the $J$-holomorphic equation for each of the appearing annuli:
\begin{equation}
\label{eq:perturbed}
du+J_{A_R}(u)\circ du\circ j=0,\quad u\colon\thinspace A_R\to X
\end{equation}
Here $j$ is the complex structure on $A_R$.
Observe that
(\ref{eq:perturbed}) restricts to the usual $J$-holomorphic equation, with the
constant $J$, away from $D_R$. For this reason, the equation is $J$-holomorphic
in the neighbourhoods of the nodal points formed by: domain degenerations as
$R\to +\infty$ and $R\to 0$, and the side bubbling of holomorphic disks. The
standard gluing and compactness arguments work in this setting, compare
e.g.~Sheridan~\cite[Proof of Proposition~4.3]{She11}; therefore, the $R=0$ and
$R=+\infty$ compactifications of $\mathcal{M}$ from the proof of Theorem~\ref{th:CO} are
still valid in our perturbed case, as well as the fact that the flowline length
0 configurations in Figure~\ref{fig:def_Moduli}~(middle and right) are interior
points of $\mathcal{M}$. (This is a very hands-on version of the general notion of a
consistent choice of perturbation data in the setup of
Seidel~\cite{SeiBook08}, see also~\cite{She11}, except that we are not using a
Hamiltonian term in our equation.) We keep all disks appearing in $\mathcal{M}$ purely $J$-holomorphic, without any perturbation.
It is well known that the solutions to (\ref{eq:perturbed}) are regular for a Baire set of $J_{A_R}|_{D_R}$, or
equivalently for a Baire set of $J_D$'s \cite[Lemma~4.1, Corollary~4.4]{CM07}. The fact that a perturbation in a neighbourhood of a point is sufficient follows from the unique continuation principle for Cauchy-Riemann type equations and is contained in the statement of \cite[Lemma~4.1]{CM07}.
In particular, there is a sequence $J_{A_R,n}$ converging
to the constant $J$:
$$
J_{A_R,n}|_{D_R}\to J \quad \text{ in } C^{\infty}(D_R,\mathcal{J})
$$
such that the annuli solving (\ref{eq:perturbed}) with respect to $J_{A_R,n}$ are regular for each $n$.
The rest of the proof of Theorem~\ref{th:CO} requires one more minor modification. Looking
at Figure~\ref{fig:config_R_infty}~(left), one of the holomorphic disks now
carries a domain-dependent perturbation supported in the subdomain $D$ inherited
from the annulus, compare Figure~\ref{fig:perturb_annuli}~(right). (Which of the
two disks carries this perturbation depends on which side of the core circle
$D$ lay in the annulus.) We note that the disks in
Figure~\ref{fig:config_R_infty}~(middle and right) do not carry a perturbation,
as they arise as side bubbles from the annuli. We claim that for large enough
$n$, the count of the configurations in Figure~\ref{fig:config_R_infty}~(left)
where one of the disks carries the above perturbation equals the count
where the disks are purely $J$-holomorphic. Indeed, this follows from the fact
that $J_{D,n}\to J$, using continuity and the regularity of the unperturbed
$J$-holomorphic disks. This allows us to use Lemma~\ref{lem:configs_CO} and
conclude the proof.
\subsection{Proof of Theorem~\ref{th:CO_groups}}
This is a simple modification of the proof of Theorem~\ref{th:CO}, so we shall be brief. The idea is to redefine the moduli space $\mathcal{M}$ by considering only those configurations in Figure~\ref{fig:def_Moduli} whose total boundary homology classes in $H_1(L;\k)$ resp.~$H_1(K;\k)$ belong to the affine subspace $S_L$ resp.~$S_K$.
The only difference in the proof arises when we argue that configurations of type~(i) cancel, see Figure~\ref{Fig:M_Morse_break}. At that point of the above proof, we used Condition~(\ref{eq:disk_low_area_cancel_bdies}); now we need to use Condition~(\ref{eq:disk_groups_cancel_bdies}) instead.
Let us consider configurations as in the left part of Figure~\ref{Fig:M_Morse_break}. Assume that the area $b$ annulus in the figure has boundary homology class $l\in H_1(L;\k)$ on $L$. Then the area $a$ disk of the same configuration has boundary class belonging to the affine subspace $S_L-l\subset H_1(L;\k)$; this is true because the total boundary homology class has to lie in $S_L$. By a Morse-theoretic version of Condition~(\ref{eq:disk_groups_cancel_bdies}), the count of such area $a$ disks with the attached flowlines (asymptotic to index 1 critical points) vanishes. The rest of the proof goes without change.\qed
\subsection{Adjusting the area conditions}\label{subsec:wall_crossing}
We would like to point out that the area restrictions in Theorems~\ref{th:CO}
and~\ref{th:CO_groups} can be weakened at the expense of requiring one of the two
Lagrangians to be monotone. In the setup of Section~\ref{subsec: DefOC_low}, suppose that $K$ is monotone, so that $B = +\infty$ and $b$ is the
monotonicity constant of $X$, i.e. $\omega(\beta) = \frac{b}{2} \mu(\beta)$ for
$\beta \in H_2(X,K)$.
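(For example, with the normalisation $\omega(H)=1$ on $\C P^2$, where $c_1(X)=3\omega$, the monotonicity constant is $b=1/3$: Maslov index~2 disks with boundary on a monotone Lagrangian, such as the Clifford torus, have area $1/3$.)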
Below is a modified version of Theorem~\ref{th:CO}, which differs by the fact that the numbers $a,A$ become redefined compared to those from Section~\ref{subsec: DefOC_low}. We are still using a coefficient ring $Q$ for homology.
\begin{theorem}
\label{thm:alg_cancel}
Suppose $K,L\subset X$ are orientable Lagrangian surfaces, and $K$ is monotone.
Fix any tame almost complex structure $J$, and let $\mathcal{M}_C(pt)$ be the moduli space of holomorphic disks in the homology class $C$ through a fixed point in $L$. Define $A$ to be
\begin{equation}
A=\min\left\{\omega_0:
\sum_{
\begin{smallmatrix}
{C\in H_2(X,L):}
\\ \omega(C)=\omega_0,\
\mu(C)=2
\end{smallmatrix}
}\quad \sum_{u\in \mathcal{M}_C(pt)}[\partial u] \neq 0\in H_1(L)\right\}
\end{equation}
(This minimum exists by Gromov compactness.)
Let $a$ be any number less than $A$, and assume in addition that all holomorphic disks of area less than $a+b$ with boundary on $L$ are regular with respect to the chosen $J$. Define $\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_L])$ as in Section~\ref{subsec: DefOC_low} using holomorphic disks of area $a$.
If $a+b<A$, $[L]=[K]=0$, and
$$
\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_L])\cdot \mathcal{{O}{C}}\c([p_K])\neq 0,
$$
then $L$ and $K$ are Hamiltonian non-displaceable from each other.
\end{theorem}
The meaning of this modification is as follows. In Theorem~\ref{th:CO}, $a$ and $A$ were the first two positive areas of classes in $H_2(X,L)$; moreover the boundaries of area-$a$ disks had to cancel in order for the string invariant to be defined. Here the boundaries of area-$a$ disks still cancel, by the new definition of $A$ and because $a<A$.
Now we come to the difference: in the setup of Theorem~\ref{th:CO}, there existed no (topological) disks with areas between $a$ and $a+b$, while in the setup of Theorem~\ref{thm:alg_cancel}, such disks could exist (and be holomorphic of Maslov index 2), but their boundaries will still cancel by the assumption $a+b<A$. It turns out that this is sufficient to run the argument.
We can similarly modify Theorem~\ref{th:CO_groups}.
\begin{theorem}
\label{thm:alg_cancel_groups}
In the setup of the previous theorem,
additionally choose $S_L$ as in Section~\ref{sec:intro}. The statement of
Theorem~\ref{thm:alg_cancel} holds if we replace
$A$
by
\begin{equation}
A=\min\left\{\omega_0:
\sum_{
\begin{smallmatrix}
{C\in H_2(X,L):}\\
\omega(C)=\omega_0,\ \mu(C)=2,\\
\partial C\in S_L+l
\end{smallmatrix}
}\quad \sum_{u\in \mathcal{M}_C(pt)}[\partial u] \neq 0\in H_1(L) \text{ for some }l\in H_1(L)\right\}
\end{equation}
and $\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_L])$ by $\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_L],S_L)$.
\end{theorem}
\begin{proof}[Proof of Theorems~\ref{thm:alg_cancel},~\ref{thm:alg_cancel_groups}]
Given that $K$ is monotone, $\mathcal{{O}{C}}\c([p_K])$ is obviously
invariant under Hamiltonian isotopies of $K$. After this, the proofs of Theorems~\ref{th:CO} and~\ref{th:CO_groups} can be repeated using the fixed $J$ appearing in the hypothesis, with one obvious adjustment: the configurations in Figures~\ref{fig:def_Moduli} and~\ref{Fig:M_Morse_break} must allow disks of any area less than $a+b$ with boundary on $L$. The configurations in Figure~\ref{Fig:M_Morse_break} still cancel by hypothesis. Note that no new configurations of type~(iii) (see Figure~\ref{fig:config_R_infty}) need to be included.
In order to achieve the transversality of annuli by the method of Subsection~\ref{subsec:trans}, we choose domain-dependent perturbations $J_{A_R,n}$ that converge, as $n\to+\infty$, to the fixed $J$ appearing in the hypothesis.
\end{proof}
\subsection{Proof of Lemma~\ref{lem:disks_in_nbhood}}
We start with Part~(i) which is purely topological.
Assuming that $H_1(L;\mathbb{Z})\to H_1(T^*M;\mathbb{Z})$ is $N$-torsion, for any class $D\in H_2(X,T_a;\mathbb{Z})$ its $N$-multiple can be written in the following general form:
$$
ND=i_*(D')+D'',\quad D'\in H_2(T^*M,L_a;\mathbb{Z}),\quad D''\in H_2(X;\mathbb{Z}).
$$
Recall that $\omega=c_1/k\in H^2(X;\mathbb{R})$ is integral. Assuming $\mu(D)=2$, we compute:
$$
\begin{array}{lll}
\mu(ND)&=&\mu(D')+2c_1(D'')=2N,\\
\omega(ND)&=&a\cdot \mu(D')/2+c_1(D'')/k\\
&=&a(N-c_1(D''))+c_1(D'')/k\in \{aN+(1-ka)\mathbb{Z}\}.
\end{array}
$$
Above, we have used the fact that the $L_a$ are monotone in $T^*M$, and that $c_1(D'')$ is divisible by $k$.
Therefore,
$$
\omega(D)\in \{a+\textstyle\frac 1 N (1-ka)\mathbb{Z}\}.
$$
When $a<1/(k+N)$, the least positive number in the set $\{a+\textstyle\frac 1 N
(1-ka)\mathbb{Z}\}$ is $a$, and the next one is $A=a+\frac 1 N (1-ka)$. This proves
Lemma~\ref{lem:disks_in_nbhood}(i). Notice that area $a$ is achieved if and only
if $c_1(D'')=\omega(D'')=0$.
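For instance, in the case $\R P^2\subset \C P^2$ relevant to Theorem~\ref{th:T_a} (with the values $k=3$, $N=2$ used later), the possible areas of Maslov index~2 classes form the set $a+\frac{1-3a}{2}\mathbb{Z}$; for $a<1/5$ its least positive element is $a$ and the next one is $\frac{1-a}{2}$, whence $A\ge\frac{1-a}{2}$.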
To prove Lemma~\ref{lem:disks_in_nbhood}(ii), first notice that holomorphic
disks with boundary on $L_a\subset T^*M$ must be contained in $U\subset T^*M$ by
the maximum principle, for any almost complex structure cylindrical near $\partial
U$. Therefore to prove the desired 1-1 correspondence between the holomorphic
disks, it suffices to prove that for some almost complex $J$ on $X$, the
area-$a$ Maslov index 2 holomorphic disks with boundary on $T_a$ are contained
in $i(U)$. We claim that this is true for a $J$ which is sufficiently stretched
around $\partial i(U)$, in the sense of SFT neck-stretching.
\begin{figure}
\includegraphics[]{Fig_Neck_Stretching}
\caption{(a): holomorphic building which is the limit of a holomorphic disk, and its part $C$; (b): the area computation for $C$.}
\label{fig:neck_stretch}
\end{figure}
Pick the standard Liouville 1-form $
\theta$ on $i(U)$, and stretch $J$ using a cylindrical almost complex structure with respect to $\theta$ near $\partial U$.
The SFT compactness theorem \cite{CompSFT03} implies that disks not contained
in $i(U)$ converge in the neck-stretching limit to a holomorphic building, like
the one shown in Figure~\ref{fig:neck_stretch}(a). One part of the building is
a curve with boundary on $T_a$ and several punctures. Denote this curve by $C$.
It is contained in $i(U)$, and its punctures are asymptotic to Reeb orbits in
$\partial i(U)$ which we denote by $\{\gamma_j\}$.
Recall that above we have shown that the homology class of the original disk $D$ had the form $$D=i_*(D')/N+D''/N\in H_2(X,T_a;\mathbb{Q}),$$ where $D''$ is a closed 2-cycle and $\omega(D'')=0$.
Denote
$$ D_0=i_*(D')/N\in H_2(X,T_a;\mathbb{Q}).$$
Then $\omega( D_0)=a$ and $ D_0$ can be realised as a chain sitting inside $i(U)$ whose boundary in $T_a$ matches that of $C$ (or equivalently, of $D$).
Consider the chain $C\cup (-D_0)$, where $(-D_0)$ is the chain $D_0$ taken with the opposite orientation, see Figure~\ref{fig:neck_stretch}(b). Then:
$$\partial \left(C\cup (-D_0)\right)=\cup_j\gamma_j.$$
Below, the second equality
follows from the Stokes formula using $\omega = d\theta$, which can be applied because the whole chain is contained in
$i(U)$:
$$\omega(C)-a=\omega(C\cup (-D_0))=\textstyle \sum_j A(\gamma_j),$$
where
$$ A(\gamma_j) = \textstyle \int_{\gamma_j} \theta > 0,$$
since $\theta$ evaluates to $1$ on the Reeb vector field.
On the other hand, recall that $\omega(C)<a$ because $C$ is part of a
holomorphic building with total area $a$. This gives a contradiction. We
conclude that all area-$a$ Maslov index 2 holomorphic disks are contained in
$i(U)$ for a sufficiently neck-stretched $J$.\qed
\section{The tori $T_a$ are non-displaceable from the Clifford torus} \label{sec:T_a}
In this section we recall the definition of the tori $\hat{T}_a\subset
\C P^1\times\C P^1$ which were studied by Fukaya, Oh, Ohta and Ono \cite{FO312},
and the tori $T_a\subset \C P^2$ appearing in the introduction. We prove
Theorem~\ref{th:T_a} along with a similar result for the $\hat{T}_a \subset \C P^1
\times \C P^1$, and for an analogous family of tori in the 3-point blowup of $\C P^2$.
We also prove Lemma~\ref{lem:cotangent_bundles}, and check that Floer cohomology
with bulk deformations vanishes for the $T_a$.
\subsection{Definition of the tori.} \label{sec: dfn Tori}
We choose to define the tori $T_a$ as in \cite{Wu15}, using the {\it coupled
spin} system \cite[Example 6.2.4]{PR14} on $\C P^1\times\C P^1$. Consider
$\C P^1\times \C P^1$ as the configuration space of the double pendulum composed of two unit-length rods: the
endpoint of the first rod is attached to the origin $0\in\mathbb{R}^3$ around which the
rod can freely rotate; the second rod is attached to the other endpoint of the
first rod and can also freely rotate around it, see Figure~\ref{fig:pend}.
\begin{figure}[h]
\includegraphics{Fig_DoublePend}
\caption{The double pendulum defines two functions $\hat F,\hat G$ on $\C P^1\times \C P^1$.
}
\label{fig:pend}
\end{figure}
\noindent
Define two functions $\hat F,\hat G\colon\thinspace \C P^1\times\C P^1\to \mathbb{R}$ to be, respectively, the
$z$-coordinate of the free endpoint of the second rod, and its distance from the origin, normalised by $1/2$. In formulas,
$$
\begin{array}{l}
\C P^1 \times \C P^1=\{ x_1^2 + y_1^2 + z_1^2 = 1 \} \times \{ x_2^2 + y_2^2 + z_2^2 = 1 \}
\subset \mathbb{R}^6,
\\
\hat F(x_1,y_1,z_1,x_2,y_2,z_2) = \frac 1 2 (z_1 + z_2),
\\
\hat G(x_1,y_1,z_1,x_2,y_2,z_2) = \frac 1 2 \sqrt{(x_1 + x_2)^2 + (y_1 + y_2)^2 +(z_1 +
z_2)^2}.
\end{array}
$$
The function $\hat G$ is not smooth along the anti-diagonal Lagrangian sphere
$S^2_{\ad} = \{(x_1,y_1,z_1,x_2,y_2,z_2) \in \C P^1\times \C P^1; x_2 = -x_1, y_2 = -y_1, z_2= -z_1\}$
(corresponding to the folded pendulum), and away from it the
functions $\hat F$ and $\hat G$ Poisson commute. The image of the ``moment map''
$(\hat F,\hat G)\colon\thinspace \C P^1\times\C P^1\to \mathbb{R}^2$ is the triangle shown in Figure~\ref{fig:polyt}.
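As an aside (this illustration is ours, not part of the original construction), the commutation of $\hat F$ and $\hat G$ can be verified symbolically. The sketch below, in Python with SymPy, assumes the standard area-form Poisson bracket on each unit sphere, $\{f,g\}=\vec r\cdot(\nabla f\times\nabla g)$ up to overall normalisation, with vanishing cross-brackets between the two factors; since $\{\hat F,\hat G^2\}=2\hat G\,\{\hat F,\hat G\}$, it suffices to test $\hat G^2$, which avoids the square root.
\begin{verbatim}
import sympy as sp

x1, y1, z1, x2, y2, z2 = sp.symbols('x1 y1 z1 x2 y2 z2')
r1, r2 = sp.Matrix([x1, y1, z1]), sp.Matrix([x2, y2, z2])

def bracket(f, g):
    # {f,g} = r1.(grad_1 f x grad_1 g) + r2.(grad_2 f x grad_2 g)
    total = 0
    for r, coords in [(r1, (x1, y1, z1)), (r2, (x2, y2, z2))]:
        gf = sp.Matrix([sp.diff(f, c) for c in coords])
        gg = sp.Matrix([sp.diff(g, c) for c in coords])
        total += r.dot(gf.cross(gg))
    return sp.simplify(total)

F = (z1 + z2) / 2
G2 = ((x1+x2)**2 + (y1+y2)**2 + (z1+z2)**2) / 4  # \hat G squared

print(bracket(F, G2))  # prints 0, hence {F, G} = 0 away from G = 0
\end{verbatim}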
\begin{figure}[h]
\includegraphics{Fig_SemiToricPolyt}
\caption{The images of the ``moment maps'' on $\C P^1\times\C P^1$ and $\C P^2$, and the lines above which the tori $\hat{T}_a, T_a$ are located.
}
\label{fig:polyt}
\end{figure}
\begin{definition}
For $a\in(0,1)$, the Lagrangian torus $\hat{T}_a\subset \C P^1\times \C P^1$ is the pre-image of $(0,a)$
under the map $(\hat F,\hat G)$.
\end{definition}
The functions $(\hat F,\hat G)$ are invariant under the $\mathbb{Z}/2$-action on $\CP^1 \times \CP^1$ that
swaps the two $\C P^1$ factors. This involution defines a 2:1 cover $\C P^1\times
\C P^1\to \C P^2$ branched along the diagonal of $\C P^1\times\C P^1$, so the
functions $(\hat F,\hat G)$ descend to functions on $\C P^2$ which we denote by $(F,G)$;
the image of $(F, G/2)\colon\thinspace \C P^2\to \mathbb{R}^2$ is shown in Figure~\ref{fig:polyt}.
Note that the quotient of the Lagrangian sphere $S^2_{\ad}$ is $\R P^2 \subset \C P^2$.
Being branched, the 2:1 cover cannot be made symplectic, so it requires some
care to explain with respect to which symplectic form the tori $T_a\subset\C P^2$ are Lagrangian. One solution is to
consider $\C P^2$ as the symplectic cut \cite{Le95} of $T^*\R P^2$, as explained by
Wu~\cite{Wu15}. It is natural to take $(F,G/2)$, not $(F,G)$, as the ``moment
map'' on $\C P^2$.
We normalise the symplectic forms $\omega$ on $\C P^2$ and
$\hat{\omega}$ in $\CP^1 \times \CP^1$ so that
$\omega(H) = 1$ and $\hat{\omega}(H_1) = \hat{\omega}(H_2) = 1$, where $H =
[\C P^1]$ is the generator of $H_2(\C P^2)$, and $H_1 = [\{\mathrm{pt}\}\times \C P^1]$,
$H_2 = [\C P^1 \times \{\mathrm{pt}\}]$ in $H_2(\CP^1 \times \CP^1)$.
\begin{definition}
For $a \in (0,1)$, the Lagrangian torus $T_a \subset \C P^2$ is the pre-image of
$(0, a/2)$ under $(F,G/2)$,~i.e.~the image of $\hat{T}_a$ under the 2:1 branched cover
$\CP^1 \times \CP^1 \to \C P^2$.
\end{definition}
\begin{remark} \label{rem:def_Ta_Auroux} There is an alternative way to define
the tori $\hat{T}_a$ and $T_a$. It follows from the work of Gadbled \cite{Gad13}, see
also \cite{OU13}, that the above defined tori are Hamiltonian isotopic to the
so-called Chekanov-type tori introduced by Auroux \cite{Au07}:
$$
\begin{array}{cc} \hat{T}_a \cong \{ ([x:w],[y:z])\in \CP^1 \times \CP^1 \setminus
\{z=0\}\cup\{w=0\}: \ \frac{xy}{wz} \in \hat{\gamma}_a, \ \left|\frac{x}{w} \right| =
\left|\frac{y}{z} \right| \}, \\ T_a \cong \{ [x:y:z]\in \C P^2 \setminus
\{z=0\}: \ \frac{xy}{z^2} \in \gamma_a, \ \left|\frac{x}{z} \right| =
\left|\frac{y}{z} \right| \}, \end{array}
$$
where $\hat{\gamma}_a,\gamma_a\subset
\mathbb{C}$ are closed curves that enclose a domain not containing $0\in\mathbb{C}$. The area of
this domain is determined by $a$ and must be such that the areas of holomorphic
disks computed in \cite{Au07} match Table~\ref{tab: Disks}; see below. (Curves
that enclose domains of the same area not containing $0\in\mathbb{C}$ give rise to
Hamiltonian isotopic tori.) The advantage of this presentation is that the tori
$T_a$ are immediately seen to be Lagrangian. The tori $\hat{T}_{1/2}$ and $T_{1/3}$
are the monotone manifestations in $\CP^1 \times \CP^1$ and $\C P^2$ of the Chekanov torus \cite{Che96}.
A presentation of the monotone Chekanov tori similar to the above was described
in \cite{ElPo93}.
Yet another way of defining the
tori is by Biran's circle bundle construction~\cite{Bi01} over a monotone circle
in the symplectic sphere which is the preimage of the top side of the triangles
in Figure~\ref{fig:polyt}; see again \cite{OU13}. \end{remark}
\subsection{Holomorphic disks} We start by recalling the theorem of Fukaya, Oh,
Ohta and~Ono mentioned in the introduction.
\begin{theorem}[{\cite[Theorem 3.3]{FO312}}]
For $a \in (0, 1/2]$, the torus $\hat{T}_a\subset \C P^1\times \C P^1$ is non-displaceable.\qed
\end{theorem}
\begin{proposition} \label{prp: Probes}
Inside $\C P^1\times\C P^1$ and $\C P^2$,
all fibres corresponding to interior points of the ``moment polytopes'' shown in Figure~\ref{fig:polyt}, except for the tori $\hat{T}_a$ when $a\in(0,1/2]$, and $T_a$ when $a \in (0,1/3]$,
are displaceable.
\end{proposition}
\begin{proof} First, note that our model is toric in the complement of the
Lagrangians $S^2_{\ad} \subset \CP^1 \times \CP^1$ resp.~$\mathbb{R} P^2 \subset \C P^2$, represented
by the bottom vertex of Figure \ref{fig:polyt}. In fact, $\CP^1 \times \CP^1 \setminus S^2_{\ad}$ and $\C P^2 \setminus \mathbb{R} P^2$ can be identified with the following normal
bundles, respectively: $\mathcal{O}(2)$ over the diagonal in $\CP^1 \times \CP^1$, giving the maximum level set of
$\hat G$; and $\mathcal{O}(4)$ over the conic in $\C P^2$, giving the
maximum level set of $G/2$. Clearly, these spaces are toric.
Recall the method of probes due to McDuff~\cite{MD11} which is
a mechanism for displacing certain toric fibres. Horizontal probes displace all
the fibres except the $\hat{T}_a$ or $T_a$, $a\in (0,1)$. Vertical probes over the
segment $\{0\}\times (0,1/2]$ displace the $T_a$ for $a > 1/2$, and probes over
the segment $\{0\}\times (0,1]$ displace the $\hat{T}_a$ for $a > 1/2$. All the
displacements given by probes can be performed by a Hamiltonian compactly
supported in the complement of the Lagrangians $S^2_{\ad} \subset \CP^1 \times \CP^1$, respectively
$\mathbb{R} P^2 \subset \C P^2$. When $1/3 < a< 1/2$, the method of probes cannot
displace $T_a$.
The proof of this remaining
case is due to Georgios Dimitroglou Rizell (currently not in the literature),
who pointed out that for $a > 1/3$, the tori $T_a$, up to Hamiltonian isotopy,
can be seen to project onto the open segment $S$ connecting $(0,0)$ with
$(1/3,1/3)$ in the standard moment polytope of $\C P^2$ (using the description of
Remark \ref{rem:def_Ta_Auroux}, we may take $\gamma_a$ inside the disk of radius
1 for $a > 1/3$). But there is a Hamiltonian isotopy of $\C P^2$ that sends the
preimage of $S$ to the preimage of the open segment connecting $(0,1)$ with
$(1/3,1/3)$, which is disjoint from $S$; hence the preimages are disjoint. \end{proof}
The Maslov index 2 holomorphic disks for the tori $\hat{T}_a$ and $T_a$, with respect
to some choice of an almost complex structure for which the disks are regular,
were computed, respectively, by Fukaya, Oh, Ohta and~Ono \cite{FO312} and Wu
\cite{Wu15}. Their results can also be recovered using the alternative
presentation of the tori from Remark~\ref{rem:def_Ta_Auroux}. Namely, Chekanov
and Schlenk \cite{ChSch10} determined Maslov index 2 holomorphic disks for the
monotone Chekanov tori $T_{1/3} \subset \C P^2$ and $T_{1/2} \subset \CP^1 \times \CP^1$, and
the combinatorics of these disks stays the same for the Chekanov-type tori from
Remark~\ref{rem:def_Ta_Auroux} if one uses the standard complex structures on
$\C P^2$ and $\CP^1 \times \CP^1$ \cite[Proposition~5.8, Corollary~5.13]{Au07}.
We summarise these results in the statement below.
\begin{table}[h]
{\it
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{$\vphantom{\hat{T}_a}T_a\subset \C P^2$}
\\
\hline
Disk class& \# & Area & $\mathfrak{PO}$ term
\\
\hline
$H-2\beta-\alpha$ & 1 & $a$ & $t^az^{-2}w^{-1}$
\\
$H-2\beta$ & 2 & $a$ & $t^az^{-2}$
\\
$H-2\beta+\alpha$ & 1 & $a$ & $t^az^{-2}w$
\\
$\beta$ & 1 & $(1-a)/2$ & $t^{(1-a)/2}z$
\\
\hline
\multicolumn{4}{c}{}
\end{tabular}
~
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{$\hat{T}_a\subset \C P^1\times\C P^1$}
\\
\hline
Disk class& \# & Area & $\mathfrak{PO}$ term\\
\hline
$H_1-\beta-\alpha$ & 1 & $a$ & $t^az^{-1}w^{-1}$\\
$H_1-\beta$ & 1 & $a$ & $t^az^{-1}$ \\
$H_2-\beta$ & 1 & $a$ & $t^az^{-1}$\\
$H_2-\beta+\alpha$ & 1 & $a$ & $t^az^{-1}w$\\
$\beta$ & 1 & $1-a$ & $t^{1-a}z$\\
\hline
\end{tabular}
}
\smallskip
\caption{The homology classes of all Maslov index two $J$-holomorphic disks on the tori; the
number of such disks through a generic point on the torus; their areas; the corresponding monomials in the superpotential function: all for
some regular almost complex structure $J$. Here
$\alpha,\beta$ denote some fixed homology classes in $H_2(\C P^2,T_a)$ or
$H_2(\C P^1\times\C P^1,\hat{T}_a)$, and $\partial \alpha, \partial \beta$ generate $H_1(T_a, \mathbb{Z})$
or $H_1(\hat{T}_a, \mathbb{Z})$.}
\label{tab: Disks}
\end{table}
\begin{proposition}[\cite{Au07, ChSch10, FO312, Wu15}] \label{Prop: Disks T_a}
There exist almost complex structures on $\C P^2$ and $\CP^1 \times \CP^1$ for which the
enumerative geometry of Maslov index 2 holomorphic disks with boundary on $T_a$,
resp.~$\hat{T}_a$, is as shown in Table~\ref{tab: Disks}, and these disks are regular.
Here we are considering the standard spin structure on the tori to orient
the moduli spaces of disks. \qed
\end{proposition}
\begin{remark}
The fact that all disks contribute with positive signs follows from an argument
analogous to \cite[Proposition~8.2]{Cho04} --
see also \cite[Section~5.5]{Vi13} for a similar discussion.
\end{remark}
\subsection{Proof of Theorem \ref{th:T_a}}
We now have all the ingredients to prove Theorem~\ref{th:T_a} using
Theorem~\ref{th:CO}.
Take the almost complex structure $J$ from Proposition~\ref{Prop: Disks T_a}; then the parameter $a$ indexing the torus $T_a\subset\C P^2$ satisfies Equation~\eqref{eq:defn_a_dim4} whenever $a<1/3$.
Let $\{D_i\}_i\subset (\C P^2,T_a)$ be the images of all
$J$-holomorphic Maslov index 2 disks of area $a$ such that $p \in \partial D_i$, for
a fixed point $p \in T_a$.
We work over the coefficient ring $Q=\mathbb{Z} /8$. According to Table~\ref{tab: Disks},
$$
\sum_i \partial [D_i]= - 8\cdot \partial\beta = 0 \in H_1(T_a; \mathbb{Z}/8).
$$
Moreover, according to Table~\ref{tab: Disks} we have
\begin{equation} \label{eq: CP2OC_Ta}
\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_{T_a}]) = 4H \in H_2(\C P^2; \mathbb{Z}/8).
\end{equation}
Note that the next-to-least area $A$ from Equation~(\ref{eq: def A}) equals $A = (1-a)/2$.
Let us move to the Clifford torus. It is well known that
the monotone Clifford torus $T_{\mathit{Cl}}$ bounds three Maslov index 2
$J$-holomorphic disks passing through a generic point, belonging to classes of
the form $\beta_1$, $\beta_2$, $H - \beta_1 - \beta_2\in H_2(\C P^2,T_\mathit{Cl};\mathbb{Z})$
(and counting positively with respect to the standard spin structure on the
torus) \cite{CO06}, see also \cite[Proposition~5.5]{Au07}, and having area $b =
1/3$. So we obtain
$$
\mathcal{{O}{C}}\c([p_{T_\mathit{Cl}}]) = H \in H_2(\C P^2; \mathbb{Z}/8).
$$
\begin{proof}[Proof of Theorem \ref{th:T_a}]
Since
$$\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_{T_a}]) \cdot \mathcal{{O}{C}}\c([p_{T_\mathit{Cl}}]) = 4 \ne 0 \mod 8,$$ we are in a position to
apply Theorem \ref{th:CO}, provided that:
$$
a + b = a + 1/3 < A = \tfrac{1-a}{2}
$$
i.e.~$a < 1/9$. The case $a = 1/9$ follows by continuity.
\end{proof}
\begin{remark}
\label{rem:not_works_for_conj}
We are unable to prove that the tori $T_a$ are non-displaceable using Theorem~\ref{th:CO} because
$
\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_{T_a}])\cdot \mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_{T_a}]) = 16 \equiv 0\mod 8.
$
\end{remark}
\begin{remark}
\label{rem:not_works_over_Z}
It is instructive to see why the argument cannot be made to work over $\mathbb{C}$ or $\mathbb{Z}$. Then $\sum_i \partial [D_i]= - 8\cdot \partial\beta$ is non-zero, but this can be fixed by introducing a local system $\rho\colon\thinspace \pi_1(T_a)\to\mathbb{C}^\times$ taking $\alpha\mapsto -1$, $\beta\mapsto +1$. By definition, $\rho$ is multiplicative, so for example, $\rho(\alpha+\beta)=\rho(\alpha)\rho(\beta)$. Then $\sum_i\rho(\partial[D_i])\cdot \partial[D_i]$ equals
$$
-(-2\partial\beta-\partial\alpha)+2(-2\partial\beta)-(-2\partial\beta+\partial\alpha)=0\in H_1(T_a;\mathbb{C}).
$$
However, in this case $\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_{T_a}];\rho)=\sum_i\rho(\partial [D_i])[D_i]$
vanishes in $H_2(\C P^2;\mathbb{C})$, because the $H$-classes from Table~\ref{tab: Disks}
cancel in this sum. \end{remark}
\subsection{Similar theorems for $\CP^1 \times \CP^1$ and $\mathbb{C}P^2\# 3\overline{\mathbb{C}P^2}$}
Using our technique, we
can prove a similar non-displaceability result inside $\CP^1 \times \CP^1$, which is probably
less novel, and $\mathbb{C}P^2\# 3\overline{\mathbb{C}P^2}$, both endowed with a monotone symplectic form. We start
with $\CP^1 \times \CP^1$.
\begin{theorem}
\label{th: Ta in PxP}
For each $a\in (0,1/4]$, the torus $\hat{T}_a \subset \CP^1 \times \CP^1$ is Hamiltonian
non-displaceable from the monotone Clifford torus $T_\mathit{Cl} \subset \CP^1 \times \CP^1$.
\end{theorem}
\begin{remark} We believe this theorem can be obtained by a short elaboration
on~\cite{FO312}: for the bulk-deformation $\mathfrak{b}$ used in \cite{FO312}, there
should exist local systems (which in this context are weak bounding cochains
\cite[Appendix~1]{FO312}) on $\hat{T}_a$ and $T_\mathit{Cl}$ such that $HF^{\mathfrak{b}}(\hat{T}_a,T_\mathit{Cl})
\ne 0$, for $a \in (0, 1/2]$. Alternatively, in addition to
$HF^{\mathfrak{b}}(\hat{T}_a,\hat{T}_a)\neq 0$ as proved in \cite{FO312}, one can show that
$HF^{\mathfrak{b}}(T_\mathit{Cl},T_\mathit{Cl})\neq 0$ for some local system, and there should be a
bulk-deformed version of Theorem~\ref{th:OCCO_monot} using the unitality of the
string maps and the semi-simplicity of the deformed quantum cohomology
$QH^{\mathfrak{b}}(\C P^2)$. Our proof only works for $a\le 1/4$, but is based on much
simpler transversality foundations. \end{remark}
As a warm-up, let us try to apply Theorem~\ref{th:CO}; we shall work over $\mathbb{Z}/4$. By looking at Table~\ref{tab:
Disks}, we see that for $a < 1/2$ we have
\begin{equation} \label{eq: PxP:OC_Ta}
\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_{\hat{T}_a}]) = 2(H_1 + H_2) \in H_2(\CP^1 \times \CP^1; \mathbb{Z}/4),
\end{equation}
and $ A = 1 -a$.
One easily shows that
$$\mathcal{{O}{C}}\c([p_{\hat{T}_\mathit{Cl}}]) = H_1 + H_2 \in H_2(\CP^1 \times \CP^1; \mathbb{Z}/4),$$
since the Clifford torus bounds holomorphic Maslov index 2 disks of area $b =
1/2$, passing once through each point of $\hat{T}_\mathit{Cl}$, in classes of the form
$\beta_1$, $\beta_2$, $H_1 - \beta_1$, $H_2 - \beta_2$ (and counting
positively with respect to the standard spin structure on the torus)
\cite{CO06}, see also
\cite[Section~5.4]{Au07}. We cannot directly apply Theorem~\ref{th:CO} because
$$\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_{\hat{T}_a}]) \cdot \mathcal{{O}{C}}\c([p_{\hat{T}_\mathit{Cl}}]) = 4 \equiv 0 \mod 4.$$
Hence we need to use the more refined Theorem \ref{th:CO_groups}.
\begin{proof}[Proof of Theorem \ref{th: Ta in PxP}]
Let $S_{\hat{T}_\mathit{Cl}} \subset H_1(\hat{T}_\mathit{Cl}; \mathbb{Z}/2)$ be the linear space generated
by $[\partial \beta_2]$ and $S_{\hat{T}_a} \subset H_1(\hat{T}_a; \mathbb{Z}/2)$ generated by $\partial
\beta$; both satisfy Condition~\eqref{eq:disk_groups_cancel_bdies} over $\k=Q=\mathbb{Z}/2$. So we have:
\begin{equation} \label{eq: PxP:OC_Ta_S}
\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_{\hat{T}_a}],S_{\hat{T}_a}) = H_1 + H_2 \in H_2(\CP^1 \times \CP^1; \mathbb{Z}/2),
\end{equation}
\begin{equation}
\mathcal{{O}{C}}\c([p_{\hat{T}_\mathit{Cl}}],S_{\hat{T}_\mathit{Cl}}) = H_2 \in H_2(\CP^1 \times \CP^1; \mathbb{Z}/2),
\end{equation}
and hence,
$$\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_{\hat{T}_a}],S_{\hat{T}_a}) \cdot \mathcal{{O}{C}}\c([p_{\hat{T}_\mathit{Cl}}],S_{\hat{T}_\mathit{Cl}}) = 1 \neq 0 \mod 2.$$
Therefore by Theorem \ref{th:CO_groups}, $\hat{T}_a$ is non-displaceable from $\hat{T}_\mathit{Cl}$
provided that $ a + b = a + 1/2 < A = 1 - a$, i.e.~ $a < 1/4$.
\end{proof}
Next, we pass on to $\mathbb{C}P^2\# 3\overline{\mathbb{C}P^2}$ which we see as $\CP^1 \times \CP^1$ blown up at the two points
corresponding to the two top corners of the image of the ``moment map''
$(\hat F,\hat G)$, see Figure \ref{fig:polytBlIII}. If the blowup is of the correct
size then the resulting symplectic form on $\mathbb{C}P^2\# 3\overline{\mathbb{C}P^2}$ is monotone; see
\cite[Section 7]{Vi16b} for more details. We denote by $\bar{T}_a$ the tori in
$\mathbb{C}P^2\# 3\overline{\mathbb{C}P^2}$ coming from the $\hat{T}_a \subset \CP^1 \times \CP^1$, in particular, $\bar{T}_a =
L^{1/2}_{1 - a}$ in the notation of \cite[Section 7]{Vi16b}. We also denote by
$\bar{T}_{\mathit{Cl}}$ the monotone torus corresponding to the barycentre of the
standard moment polytope of $\mathbb{C}P^2\# 3\overline{\mathbb{C}P^2}$.
\begin{figure}[h]
\includegraphics{Fig_Bl3P2.pdf}
\caption{The images of the ``moment maps'' on $\mathbb{C}P^2\# 3\overline{\mathbb{C}P^2}$,
and the line above which the tori $\bar{T}_a$ are located.
}
\label{fig:polytBlIII}
\end{figure}
\begin{theorem}
\label{th: Ta in BlIII}
For each $a\in (0,1/4]$, the torus $\bar{T}_a \subset \mathbb{C}P^2\# 3\overline{\mathbb{C}P^2}$ is Hamiltonian
non-displaceable from the monotone Clifford torus $\bar{T}_\mathit{Cl} \subset \mathbb{C}P^2\# 3\overline{\mathbb{C}P^2}$.
\end{theorem}
\begin{proof}
Let $E_1$ and $E_2$ be the classes of the exceptional curves of the above blowups, so that
$$H_2(\mathbb{C}P^2\# 3\overline{\mathbb{C}P^2},\bar{T}_a) = \langle H_1,
H_2, E_1, E_2, \beta, \alpha \rangle.$$ Compared to Table~\ref{tab: Disks}, the torus $\bar{T}_a$
acquires two extra holomorphic disks of area $1/2$, with boundary in classes
$[\partial \alpha]$ and $-[\partial \alpha]$, and whose sum gives the class $H_1
+ H_2 - E_1 - E_2$, see \cite[Lemma 7.1]{Vi16b}.
We then use $S_{\bar{T}_a} \subset H_1(\bar{T}_a; \mathbb{Z}/2)$ generated by $\partial
\beta$ and $S_{\bar{T}_\mathit{Cl}} \subset H_1(\bar{T}_\mathit{Cl}; \mathbb{Z}/2)$ in a similar
fashion as in the proof of Theorem \ref{th: Ta in PxP}, so that $S_{\bar{T}_a}$,
$S_{\bar{T}_\mathit{Cl}}$ satisfy Condition~\eqref{eq:disk_groups_cancel_bdies} and
$\mathcal{{O}{C}}\c([p_{\bar{T}_\mathit{Cl}}],S_{\bar{T}_\mathit{Cl}}) = H_2$. Hence
$$\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_{\bar{T}_a}],S_{\bar{T}_a}) \cdot \mathcal{{O}{C}}\c([p_{\bar{T}_\mathit{Cl}}],S_{\bar{T}_\mathit{Cl}})
= (H_1+H_2)\cdot H_2 =1 \mod 2.$$
If one defines $A$ by (\ref{eq: def A}), then $A = b = 1/2$, so
Theorem~\ref{th:CO_groups} does not apply. However, we can use Theorem~\ref{thm:alg_cancel}.
Notice that the boundaries of both disks of area $1/2$ are equal to $\alpha$ over $\mathbb{Z}/2$, and there are two such disks so their count vanishes over $\mathbb{Z}/2$. Therefore in the setup of Theorem~\ref{thm:alg_cancel} we can take $A=
1-a$. So we get the desired non-displaceability result as long as $a+b < 1-a$, i.e.~$a
< 1/4$.
\end{proof}
\begin{remark}
We expect that Theorems~\ref{th: Ta in PxP},~\ref{th: Ta in BlIII} can be improved
so as to allow $a\in (0,1/2]$. Indeed, the tori appearing in those theorems are non-self-displaceable for $a\in(0,1/2]$: see \cite{FO312, FO311b} for the case of $\C P^1\times \C P^1$ and \cite{Vi16b} for the case of
$\mathbb{C}P^2\# 3\overline{\mathbb{C}P^2}$; and see the previous remark.
\end{remark}
\subsection{Proof of Lemma~\ref{lem:cotangent_bundles}} Starting with
$X=\C P^1\times \C P^1$ or $X=\C P^2$, remove the divisor $D\subset X$ given by the
preimage of the top side of the triangle in Figure~\ref{fig:polyt} under the
``moment map''. The complement $U$ is symplectomorphic to an open co-disk bundle
inside $T^*S^2$, respectively $T^*\R P^2$. The Lagrangian tori $\hat T_a$ resp.~$
T_a$ are monotone in $U$. Indeed, note that the only disk in $X$ passing through
the divisor $D$ is the one in class $\beta$ (Table \ref{tab: Disks}) -- this can
be seen in any presentation of the tori \cite{Au07,FO312,Wu15}, see again Remark
\ref{rem:def_Ta_Auroux}.
Monotonicity of the tori follows from noting that
$H_2(U,\hat{T}_a; \mathbb{Q})$, resp. $H_2(U,T_a; \mathbb{Q})$, is generated by the remaining Maslov
index 2 disks -- which all have the same area $a$ and whose boundaries generate
$H_1(\hat{T}_a; \mathbb{Q})$, resp. $H_1(T_a; \mathbb{Q})$ -- together with the Lagrangian
zero-section $S^2_{\ad}$ when $X = \CP^1 \times \CP^1$, which has Maslov index $0$ (recall that
$H_2(T^*\mathbb{R} P^2; \mathbb{Q}) = 0$). Actually these tori differ by scaling inside the
respective cotangent bundle. We denote these tori seen as sitting in the
cotangent bundles by $\hat L_a \subset T^*S^2$ resp.~$L_a\subset T^*\R P^2$.
These are the tori we take for Lemma~\ref{lem:cotangent_bundles}. In the
cotangent bundle, the tori can be scaled without constraint so we actually get a
family indexed by $a\in(0,+\infty)$ and not just $(0,1)$.
As we pointed out, the holomorphic disks of area $a$ from Table \ref{tab: Disks}
are precisely the ones which lie in the complement of $D\subset X$ \cite{FO312,
Wu15}, therefore they belong to $U$. Finally, the tori $\hat L_a$ and $L_a$
bound no holomorphic disks in $T^*S^2$ resp.~$T^*\R P^2$ other than the ones
contained inside $U$, by the maximum principle. Therefore we know all
holomorphic Maslov index 2 disks on these tori, and
Lemma~\ref{lem:cotangent_bundles} becomes a straightforward computation
that we have in fact already performed. Indeed, the disks used to compute
$\mathcal{{O}{C}}\c$ in $U$ (and hence in $T^*S^2$ resp.~$T^*\R P^2$) are
the same used to compute $\mathcal{{O}{C}}_{\mathit{low}}^{(2)}$ in $X$, i.e.,
$\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_{\hat{T}_a}]) =i_*\mathcal{{O}{C}}\c([p_{\hat L_a}])$, resp.
$\mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_{T_a}]) =i_*\mathcal{{O}{C}}\c([p_{L_a}])$.
\begin{remark}
Note that the disks in Table \ref{tab: Disks} were computed with respect to
the standard complex structure $J$. Moreover, the divisor $D$ corresponds to
the diagonal in $\CP^1 \times \CP^1$ and to a conic in $\C P^2$.
In particular, $J$ is cylindrical at infinity
for $X\setminus D$.
\end{remark}
Namely, as in the proof of Theorem \ref{th: Ta in PxP}, the holomorphic Maslov index 2 disks
with boundary on $\hat L_a \subset T^*S^2$ satisfy Condition
\eqref{eq:disk_low_area_cancel_bdies} over $\mathbb{Z}/4$, and Equation~\eqref{eq:S2_OC} from Lemma
\ref{lem:cotangent_bundles} follows immediately from \eqref{eq: PxP:OC_Ta}:
$$
\begin{array}{r}
i_*\mathcal{{O}{C}}\c([p_{\hat L_a}]) = \mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_{\hat{T}_a}]) = 2(H_1 + H_2) =
2(H_1 - H_2)
\\ = 2i_*[S^2] \in
H_2(\CP^1 \times \CP^1; \mathbb{Z}/4),
\end{array}
$$
and the injectivity of $i_*: H_2(U; \mathbb{Z}/4) \to H_2(\CP^1 \times \CP^1; \mathbb{Z}/4)$, where $i$ is the embedding of $U\subset X$.
Similarly, we can identify $S_{\hat L_a}$ with the $S_{\hat{T}_a}$ from the proof of
Theorem \ref{th: Ta in PxP}, which satisfies Condition
\eqref{eq:disk_groups_cancel_bdies} over $\k=Q=\mathbb{Z}/2$. Equation~\eqref{eq:S2_OC_groups} from
Lemma \ref{lem:cotangent_bundles} follows immediately from \eqref{eq:
PxP:OC_Ta_S}:
$$
\begin{array}{r}
i_*\mathcal{{O}{C}}\c([p_{\hat L_a}],S_{\hat L_a}) = \mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_{\hat{T}_a}],S_{\hat{T}_a}) = H_1 + H_2 = H_1 - H_2
\\
= i_*[S^2]
\in H_2(\CP^1 \times \CP^1; \mathbb{Z}/2),
\end{array}
$$
and the injectivity of $i_*: H_2(U; \mathbb{Z}/2) \to H_2(\CP^1 \times \CP^1; \mathbb{Z}/2)$.
Analogously, Lemma \ref{lem:cotangent_bundles}(ii)
is checked as in the proof of Theorem \ref{th:T_a}, in particular Equation~\eqref{eq:RP2_OC}
follows from \eqref{eq: CP2OC_Ta}:
$$
i_*\mathcal{{O}{C}}\c([p_{L_a}]) = \mathcal{{O}{C}}_{\mathit{low}}^{(2)}([p_{T_a}]) = 4H = i_*[4\mathbb{R} P^2] \in H_2(\C P^2; \mathbb{Z}/8).
$$
Indeed,
$i_*$ sends the generator $[4\mathbb{R} P^2]$ of $H_2(T^*\mathbb{R} P^2; \mathbb{Z} / 8) \cong \mathbb{Z} / 2$
to $4H \in H_2(\C P^2; \mathbb{Z}/8)$.
Finally, we note that these computations are actually valid for $a \in
(0,+\infty)$, as scaling monotone tori in a cotangent bundle does not change the
enumerative geometry of holomorphic disks. \qed
\subsection{The superpotentials} We conclude with an informal discussion of
the superpotentials of the tori we study, aimed at readers familiar with the
notions of the superpotential and bulk deformations. We refer to
\cite{Au07,FO310b,FO312,Wu15} for the definitions. The Landau-Ginzburg
superpotential (further called ``potential'') associated to a Lagrangian 2-torus
and an almost complex structure $J$ is a Laurent series in two variables which
combinatorially encodes the information about all $J$-holomorphic Maslov index 2 disks
through a point on $L$.
In the setting of Proposition~\ref{Prop: Disks T_a}, the potentials are given by
\begin{equation}\label{eq:Pot_CP2}
\mathfrak{PO}_{\C P^2} = t^{(1-a)/2}z + \frac{t^a}{z^2w} + 2\frac{t^a}{z^2} + \frac{t^aw}{z^2} = t^{(1-a)/2}z + t^a\frac{(1 +w)^2}{z^2w};
\end{equation}
\begin{equation}\label{eq:Pot_CP1}
\mathfrak{PO}_{\CP^1 \times \CP^1} = t^{1-a}z + \frac{t^a}{zw} + 2\frac{t^a}{z} + \frac{t^aw}{z} = t^{1-a}z + t^a\frac{(1 +
w)}{zw} + t^a\frac{(1 + w)}{z}.
\end{equation}
(These functions are sums of monomials corresponding to the disks as shown in Table~\ref{tab: Disks}.)
Here $t$ is the
formal parameter of the Novikov ring $\Lambda_0$ associated with a ground field $\k$, usually assumed to be of characteristic zero:
$$
\Lambda_0 = \{ \sum a_i t^{\lambda_i}\ | \ a_i \in \k, \ \lambda_i \in \mathbb{R}_{\ge 0}, \
\lambda_i \le \lambda_{i+1}, \ \lim_{i \to \infty} \lambda_i = \infty \}.
$$
Let
$\Lambda_\times$ be the group of elements of $\Lambda_0$ with nonzero constant term $a_0t^0$.
We can see $(\Lambda_\times)^2$ as the space of local systems $\pi_1(L)\to \Lambda_\times$ on a Lagrangian torus $L$, or \cite[Remark 5.1]{FO310b} as the space $\exp(H_1(L;\Lambda_0))$ of exponentials of elements in $H_1(L;\Lambda_0)$, the so-called bounding cochains from the works of Fukaya, Oh, Ohta and~Ono \cite{FO310,FO310b, FO3Book}.
In turn, the potential can be seen as a function $(\Lambda_\times)^2\to \Lambda_0$,
and its critical points
correspond to local systems
$\sigma\in (\Lambda_\times)^2$ such that $HF^*(L,\sigma)\neq 0$ \cite[Theorem 5.9]{FO310b}.
If the potential has no critical points, it can sometimes be fixed by introducing a bulk deformation $\mathfrak{b} \in H^{2k}(X; \Lambda_0)$ which deforms the function; critical points of the deformed potential correspond to local systems $\sigma\in (\Lambda_\times)^2$ such that $HF^\mathfrak{b}(L,\sigma)\neq 0$ \cite[Theorem~8.4]{FO310b}. This was the strategy of \cite{FO312} for proving that the tori $\hat{T}_a\subset\C P^1\times\C P^1$ are non-displaceable. When $\mathfrak{b}\in H^{2}(X;\Lambda_0)$, the deformed potential is still determined by Maslov index 2 disks (if $\dim X=2n>4$, this will be the case for $\mathfrak{b}\in H^{2n-2}(X;\Lambda_0)$), see e.g.~\cite[Theorem~8.2]{FO310b}. For bulk deformation classes in other degrees, the deformed potential will use disks of all Maslov indices, and its computation becomes out of reach.
In contrast to the $\hat{T}_a$, the potential for the tori $T_a$ does not acquire a critical point after we introduce a degree 2 bulk deformation class
$\mathfrak{b} \in H^2(\C P^2, \Lambda_0)$.
\begin{proposition} \label{prop: Bulk CP^2}
Unless $a=1/3$, for any bulk deformation class $\mathfrak{b} \in H^2(\C P^2, \Lambda_0)$, the deformed potential $\mathfrak{PO}^\mathfrak{b}$ for the torus $T_a\subset \C P^2$ has no critical point in $(\Lambda_\times)^2$.
\end{proposition}
\begin{proof}
Let $Q\subset\C P^2$ be the quadric which is the preimage of the top side of the triangle in Figure~\ref{fig:polyt}, so $[Q]=2H$.
Then $\mathfrak{b}$ must be Poincar\'e dual to $c \cdot [Q]$ for some $c \in \Lambda_0$. Among the holomorphic disks in Table~\ref{tab: Disks}, the only disk intersecting $Q$ is the $\beta$-disk intersecting it once \cite{Wu15}. Therefore the deformed potential
$$
\mathfrak{PO}^{\mathfrak{b}}_{\C P^2} = t^{(1-a)/2}e^c z + t^a\frac{(1 + w)^2}{z^2w}
$$
differs from the usual one by the factor $e^c$ in the monomial corresponding to the $\beta$-disk, compare \cite{FO312}. Its critical points are given by
$$
w = 1 , \ z^3 = 8t^{(3a-1)/2}e^{-c}.
$$
Unless $3a-1=0$, the $t^0$-term of $z$ has to vanish, so $z\notin\Lambda_\times$.
\end{proof}
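As a sanity check of the last two displayed equations (our addition, not part of the original proof), the critical-point equations can be recovered symbolically, treating $T=t^{(1-a)/2}e^c$ and $S=t^a$ as formal nonzero parameters; the second output line says precisely $z^3=8S/T=8t^{(3a-1)/2}e^{-c}$ at $w=1$.
\begin{verbatim}
import sympy as sp

z, w, S, T = sp.symbols('z w S T', nonzero=True)
W = T*z + S*(1 + w)**2 / (z**2 * w)  # the deformed potential

print(sp.factor(sp.diff(W, w)))               # S*(w - 1)*(w + 1)/(w**2*z**2)
print(sp.simplify(sp.diff(W, z).subs(w, 1)))  # T - 8*S/z**3
\end{verbatim}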
Keeping an informal attitude,
let us drop the monomial $t^{(1-a)/2}z$ from Equation~(\ref{eq:Pot_CP2}) of $\mathfrak{PO}_{\C P^2}$; denote the resulting function by $\mathfrak{PO}_{\C P^2,{\mathit{low}}}$.
For $a < 1/3$, it reflects the information about the least area holomorphic disks with boundary on~$T_a\subset\C P^2$,
\begin{equation}
\label{eq:trunc_po}
\mathfrak{PO}_{\C P^2,low} = t^a\frac{(1 + w)^2}{z^2w}.
\end{equation}
Now, this function has plenty of critical points. Over $\mathbb{C}$, it has the critical line $w=-1$, and if one works over $\mathbb{Z}/8$ then the point $(1,1)$ is also a critical point, reflecting the fact that the boundaries of the least area holomorphic Maslov index 2 disks on $T_a$ cancel modulo 8, with the trivial local system.
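The $\mathbb{Z}/8$ claim amounts to one derivative; a minimal symbolic check (again our addition) reads:
\begin{verbatim}
import sympy as sp

z, w, t, a = sp.symbols('z w t a', nonzero=True)
PO_low = t**a * (1 + w)**2 / (z**2 * w)

print(sp.diff(PO_low, z).subs({z: 1, w: 1}))  # -8*t**a, i.e. 0 mod 8
print(sp.diff(PO_low, w).subs({z: 1, w: 1}))  # 0
\end{verbatim}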
The potential \eqref{eq:trunc_po} becomes the usual potential for the monotone
tori $L_a\subset T^*\R P^2$ from Lemma~\ref{lem:cotangent_bundles}. The fact that
it has a critical point implies, this time by the standard machinery, that the
tori $L_a\subset T^*\R P^2$ are non-displaceable \cite[Theorem~2.3]{FO312}
(note Condition~6.1, Theorem~A.1 and Theorem~A.2 in \cite[Appendix~1]{FO312}), see also \cite{BC12,She13}.
The same is true of the $\hat L_a\subset T^*S^2$, which was already known from
\cite{AF08}, see also \cite[Appendix~2]{FO312}.
\section{Introduction}
Pion-nucleon scattering in the $J^{P}=1/2^{+}$ channel captures the information on the
excitations of the nucleon ($N=p, n$). The $N\pi$ scattering in $p$-wave is elastic only
below the inelastic threshold $m_N+2m_\pi$ for $N\pi\pi$. The main feature in this
channel at low energies is the so-called Roper resonance with $m_R=(1.41-1.45)~$GeV and
$\Gamma_R=(0.25-0.45)~$GeV \cite{pdg14} that was first introduced by L.D. Roper
\cite{Roper:1964zza} to describe the experimental $N\pi$ scattering. The resonance decays
to $N\pi$ in $p$-wave with a branching ratio $Br\simeq 55-75\%$ and to $N\pi\pi$ with
$Br\simeq 30-40\%$ (including $N(\pi\pi)^{I=0}_{s-wave}$, $\Delta\pi$ and $N\rho$), while
isospin-breaking and electromagnetic decays lead to a $Br$ well below one percent.
Phenomenological approaches that considered the $N^*(1440)$ resonance as a dominantly $qqq$
state, for example quark models \cite{Isgur:1978xj,Liu:1983us,Capstick:1986bm}, gave a
mass that is too high and a width that is too small in comparison to experiment. This led
to several suggestions on its nature and a large number of phenomenological studies. One
possibility is a dynamically generated Roper resonance where the coupled-channel
scattering $N\pi/N\sigma/ \Delta\pi$ describes the $N\pi$ experimental scattering data
without any excited $qqq$ core
\cite{Krehl:1999km,Schutz:1998jx,Liu:2016uzk,Matsuyama:2006rp}. The scenarios with
significant $qqqq\bar q$ Fock components \cite{Jaffe:2004zg,JuliaDiaz:2006av} and hybrids
$qqqG$ with gluon-excitations \cite{Golowich:1982kx,Kisslinger:1995yw} were also
explored. The excited $qqq$ core, where the interaction of quarks is supplemented by the
pion exchange, brings the mass closer to experiment
\cite{Glozman:1995fu,Glozman:1997ag}. A similar effect is found as a result of some other
mechanisms that accompany the $qqq$ core, for example a vibrating $\pi \sigma$
contribution \cite{Alberto:2001fy} or coupling to all allowed channels
\cite{Golli:2007sa}. These models are not directly based on QCD, while the effective
field theories contain a large number of low-energy constants that need to be determined
by other means. The rigorous Roy-Steiner approach is based on phase shift data and
dispersion relations implementing unitarity, analyticity and crossing symmetry; it leads
to $N\pi$ scattering amplitudes at energies $E\leq 1.38~$GeV that do not cover the whole
region of the Roper resonance \cite{Hoferichter:2015hva}. The implications of the
present simulation on various scenarios are discussed in Section \ref{sec:discussion}.
All previous lattice QCD simulations, except for \cite{Kiratidis:2016hda}, addressed excited states in
this channel using three-quark operators; this has conceptual issues for a strongly decaying resonance
where coupling to multi-hadron states is essential. In principle multi-hadron eigenstates can also arise
from the $qqq$ interpolators in a dynamical lattice QCD simulation, but in practical calculations their
coupling to the $qqq$ interpolators was too weak to have an effect. Another assumption of the simple operator approach is
that the energy of the first excited eigenstate is identified with the mass of $N^*(1440)$, which is a drastic
approximation for a wide resonance. The more rigorous L\"uscher approach
\cite{Luscher:1990ux,Luscher:1991cf}, assuming elastic scattering, predicts an eigenstate in the energy region within the resonance width (see Fig. \ref{fig:E_analytic}).
The masses of the Roper obtained in the recent dynamical lattice simulations
\cite{Liu:2014jua,Alexandrou:2013fsu,Alexandrou:2014mka,Engel:2013ig,Edwards:2011jj,Mahbub:2013ala,Roberts:2013ipa} using the $qqq$ approach are summarized in
\cite{Leinweber:2015kyz}. Extrapolating these to physical quark masses, where
$m_{u/d}\simeq m_{u/d}^{phys}$, the Roper mass was found above $1.65~$ GeV by all
dynamical studies except \cite{Liu:2014jua}, so most of the studies disfavour a low-lying
Roper $qqq$ core. The only dynamical study that observes a mass around $1.4~$GeV was done
by the $\chi$QCD collaboration \cite{Liu:2014jua}; it was based on fermions with
good chiral properties (domain-wall sea quarks and overlap valence quarks) and employed a
Sequential Empirical Bayesian (SEB) method to extract eigenenergies from a single
correlator. It is not yet finally settled
\cite{Leinweber:2015kyz,Liu:2014jua,Liu:2016rwa,chuan_lat16} whether the discrepancy of
\cite{Liu:2014jua} with other results is related to the chiral properties of quarks, use
of SEB or poor variety of interpolator spatial-widths in some studies\footnote{The
$\chi$QCD collaboration \cite{Liu:2016rwa} recently verified that SEB and variational
approach with wide smeared sources ($r\simeq 0.8~$fm) lead to compatible $E\simeq
1.9~$GeV for Wilson-clover fermions and $m_\pi\simeq 400~$MeV.}. Linear combinations of
operators with different spatial widths make it possible to form the radially-excited eigenstate with
a node in the radial wave function, which was found at $r\simeq 0.8~$fm in
\cite{Roberts:2013ipa,Roberts:2013oea,Liu:2014jua}.
An earlier quenched simulation \cite{Mathur:2003zf} based on $qqq$ interpolators
used overlap fermions and the SEB method to extract eigenenergies. The authors find a
crossover between first excited $1/2^+$ state and ground $1/2^-$ state as a function of
the quark mass, approaching the experimental situation. A more recent quenched
calculation \cite{Mahbub:2009aa} using FLIC fermions with improved chiral properties and
variational approach also reported a similar observation.
In continuum the $N^*(1440)$ is not an asymptotic state
but a strongly decaying resonance that manifests itself in the continuum of $N\pi$ and
$N\pi\pi$ states. The spectrum of those states becomes discrete on the finite
lattice of size $L$. For non-interacting $N$ and $\pi$ the periodic boundary conditions in
space constrain the momenta to multiples of $2\pi/L$. The interactions modify the
energies of these discrete multi-hadron states and possibly render additional eigenstates.
The multi-hadron states have never been established in the previous lattice simulations
of the Roper channel, although they should inevitably appear as eigenstates in dynamical
lattice QCD. In addition to being important representatives of the $N\pi$ and $N\pi\pi$
continuum, their energies and number in principle provide phase shifts for the scattering
of nucleons and pions. These, in turn, provide information on the Roper resonance that
resides in this channel. In the approximation when $N\pi$ is decoupled from other channels
the $N\pi$ phase shift and the scattering matrix are directly related to eigenenergies via
the L\"uscher method \cite{Luscher:1990ux,Luscher:1991cf}. The determination of the
scattering matrix for coupled two-hadron channels has been proposed in
\cite{Doring:2011vk,Hansen:2012tf} and was recently extracted from a lattice QCD
simulation \cite{Dudek:2014qha,Dudek:2016cru} for other cases. The presence of the
three-particle decay mode $N\pi\pi$ in the Roper channel, however, poses a significant
challenge to the rigorous treatment, as the scattering matrix for three-hadron decay has
not been extracted from the lattice yet, although impressive progress on the analytic side
has been made \cite{Hansen:2015zga}.
The purpose of the present paper is to determine the complete discrete spectrum for the
interacting system with $J^P=1/2^+$, including multi-hadron eigenstates. Zero total
momentum is considered since parity is a good quantum number in this case. In addition to
$qqq$ interpolating fields, we incorporate for the first time $N\pi$ in $p$-wave in
order to address their scattering. The $N\sigma$ in $s$-wave is also employed to account
for $N(\pi\pi)^{I=0}_{s-wave}$. We aim at the energy region below $1.65~$GeV, where the
Roper resonance is observed in experiment. In the absence of meson-meson and meson-baryon interactions one expects eigenstates dominated by $N(0)$, $N(0)\pi(0)\pi(0)$, $N(0)\sigma(0)$ and $N(1)\pi(-1)$ in our $N_f=2+1$
dynamical simulation for $m_\pi\simeq 156~$MeV and $L\simeq 2.9~$fm. The momenta in
units of $2\pi/L$ are given in parentheses. $N$ and $\pi$ in $N\pi$ need at least
momentum $2\pi/L$ to form the $p$-wave. The PACS-CS configurations \cite{Aoki:2008sm}
have favourable parameters since the non-interacting energy
$\sqrt{m_\pi^2+(2\pi/L)^2}+\sqrt{m_N^2+(2\pi/L)^2}\simeq 1.5~$GeV of $N(1)\pi(-1)$ falls
in the Roper region. The number of observed eigenstates and their energies will lead to
certain implications concerning the Roper resonance.
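As a numerical cross-check of the quoted non-interacting $N(1)\pi(-1)$ energy (our addition, not from the original text), the estimate follows from the relativistic dispersion relation with $m_\pi\simeq 0.156~$GeV, $m_N\simeq 0.95~$GeV, $L\simeq 2.9~$fm and $\hbar c = 197.327~$MeV\,fm:
\begin{verbatim}
import math

hbarc = 197.327                        # MeV fm
L = 2.9                                # fm
p = 2 * math.pi * hbarc / L / 1000.0   # one unit of momentum, in GeV
m_pi, m_N = 0.156, 0.95                # GeV

E = math.sqrt(m_pi**2 + p**2) + math.sqrt(m_N**2 + p**2)
print(f"p = {p:.3f} GeV, E = {E:.2f} GeV")  # p = 0.428 GeV, E = 1.50 GeV
\end{verbatim}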
In the approximation of elastic $N\pi$ scattering, decoupled from $N\pi\pi$, the
experimentally measured $N\pi$ phase shift predicts four eigenstates below $1.65~$GeV, as
argued in Section \ref{sec:discussion_elastic} and Figure \ref{fig:E_analytic}. Further
analytic guidance for this channel was recently presented in \cite{Liu:2016uzk}, where
the expected discrete lattice spectrum (for our $L$ and $m_\pi$) was calculated using a
Hamiltonian Effective Field Theory (HEFT) approach for three hypotheses concerning the
Roper state (Fig. \ref{fig:HEFT}). All scenarios involve channels
$N\pi/N\sigma/\Delta\pi$ (assuming stable $\sigma$ and $\Delta$) and are apt to reproduce
the experimental $N\pi$ phase shifts. The scenario which involves also a bare Roper $qqq$
core predicts four eigenstates in the region $E<1.7~$GeV of our interest, while the
scenario without Roper $qqq$ core predicts three eigenstates
\cite{Liu:2016uzk}.\footnote{This numbering omits the $\Delta(1) \pi(-1)$ and $N(1)\sigma(-1)$
eigenstates that are near $1.7~$GeV; these are not expected to be found in our study
since the corresponding interpolators are not included. Our notation implies projection of all operators to $J^P=\frac{1}{2}^+$.} The Roper resonance in the second case is
dynamically generated purely from the $N\pi/N\sigma/\Delta\pi$ channels, possibly
accompanied by the ground state nucleon $qqq$ core.
As already mentioned, our aim is to establish the expected low-lying multi-particle
states in the positive-parity nucleon channel. This has been already accomplished in the
negative-parity channel, where $N\pi$ scattering in $s$-wave was simulated in
\cite{Lang:2012db}. An exploratory study \cite{Verduci:2014csa} was done in a moving
frame, where both parities contribute to the same irreducible representation. The only
lattice simulation in the positive-parity channel that included (local) $qqqq\bar q$
interpolators in addition to $qqq$ was recently presented in \cite{Kiratidis:2016hda}.
No energy levels were found
between $m_N$ and $\simeq 2~$GeV for $m_\pi\simeq 411~$MeV. The levels related to
$N(1)\pi(-1)$ and $N(0)\sigma(0)$ were not observed, although they are expected below
$2~$GeV according to \cite{Liu:2016uzk}. This is possibly due to the local nature of the
employed $qqqq\bar q$ interpolators \cite{Kiratidis:2016hda}, which
seem to couple too weakly to multi-hadron states in practice.
This paper is organized as follows. Section \ref{sec:simulation} presents the ensemble,
methodology, interpolators and other technical details to determine the eigenenergies.
The resulting eigenenergies and overlaps are presented in Section \ref{sec:results},
together with a discussion on the extraction of the $N\pi$ phase shift. The physics
implications are drawn in Section \ref{sec:discussion} and an outlook is given in the
conclusions.
\section{Lattice setup}\label{sec:simulation}
\subsection{Gauge configurations}\label{sec:conf}
We perform a dynamical calculation on 197 gauge configurations generated by the
PACS-CS collaboration with $N_f=2+1$, lattice spacing $a=0.0907(13)~$fm, lattice
extension $V=32^3\times 64$, physical volume $L^3\simeq (2.9~$fm$)^3$ and
$\kappa_{u/d}=0.13781$ \cite{Aoki:2008sm}. The quark masses, $m_{u}=m_d$, are nearly
physical and correspond to $m_\pi=156(7)(2)~$MeV as estimated by PACS-CS
\cite{Aoki:2008sm}. Our own estimate leads to somewhat larger $m_\pi$ as detailed
below (we still refer to it as an ensemble with $m_\pi\simeq 156~$MeV). The quarks
are non-perturbatively improved Wilson-clover fermions, which do not respect exact
chiral symmetry (i.e., the Ginsparg-Wilson relation \cite{Ginsparg:1981bj}) at
non-zero lattice spacing $a$. Most of the previous simulations of the Roper channel
also employed Wilson-clover fermions, for example
\cite{Alexandrou:2013fsu,Alexandrou:2014mka,Edwards:2011jj,Mahbub:2013ala,Roberts:2013ipa}.
Closer inspection of this ensemble reveals that there are a few configurations responsible
for a strong fluctuation of the pion mass, which is listed in Table
\ref{tab:singlehadronmasses}. Removing one or four of the "bad" configurations changes the
pion mass by more than two standard deviations. The configuration-set "all" indicates the
full set of 197 gauge configurations, while "all-1" ("all-4") indicate a subset with 196
(193) configurations where one (four) configuration(s) leading to the strong fluctuations
in $m_\pi$ are removed\footnote{In the set
\texttt{RC32x64\_B1900Kud01378100Ks01364000C1715} configuration \texttt{jM000260} is
removed in "all-1", while \texttt{jM000260, hM001460, jM000840} and \texttt{jM000860} are
removed in "all-4".}.
\begin{table}[t]
\begin{ruledtabular}
\begin{tabular}{l c c }
config. set & $m_\pi $ [MeV] &$m_N$ [MeV] \\
\hline
all & $153.9 \pm 4.1 $ & $951\pm 19$ \\
all-1 & $163.9 \pm 2.4$ & $965 \pm13$ \\
all-4 & $164.4 \pm 2.1$ & $969 \pm 12 $ \\
\end{tabular}
\end{ruledtabular}
\caption{The single hadron masses obtained for the full ("all") set of configurations and
for the sets with one ("all-1") or four ("all-4") configurations omitted. Interpolators,
fit type and fit range are as in Table \ref{tab:singleH}. As discussed in the text our
final results are based on set "all-4". }\label{tab:singlehadronmasses}
\end{table}
We tested these three configuration-sets for a variety of hadron energies, and we find
that only $m_\pi$ varies outside the statistical error, while variations of masses for
other hadrons (mesons with light and/or heavy quarks and nucleon) are smaller than the
statistical errors. This also applies for the nucleon mass listed in Table
\ref{tab:singlehadronmasses}. The energies of the pions and other hadrons with non-zero
momentum also do not vary significantly with this choice.
The Roper resonance is known to be challenging as far as statistical errors are concerned,
especially for nearly physical quark masses. The error on the masses and energies is
somewhat bigger for the full set than on the reduced sets in some cases, for example
$m_\pi$ and $m_N$ in Table \ref{tab:singlehadronmasses}. Throughout this paper, we will
present results for the reduced configuration-set "all-4", unless specified differently.
The final spectrum was studied for all three configuration-sets, and we arrive at the same
conclusions for all of them.
\subsection{Determining eigenenergies}
We aim to determine the eigenenergies in the Roper channel, and we will also need the
energies of a single $\pi$ or $N$. Lattice computation of eigenenergies $E_n$
proceeds by calculating the correlation matrix $C(t)$ for a set of interpolating fields
$O_{i}$($\bar O_{i}$) that annihilate (create) the physics system of interest
\begin{align}
C_{ij}(t)&=\langle \Omega|O_i(t+t_{src})\bar O_j (t_{src})|\Omega\rangle \nonumber \\
&= \sum_n \langle \Omega | O_i|n\rangle \mathrm{e}^{-E_nt} \langle n|\bar O_j |\Omega\rangle \nonumber \\
&= \sum_n Z_i^n Z_j^{n*} \mathrm{e}^{-E_nt}
\label{C}
\end{align}
with overlaps $Z_i^n=\langle \Omega | O_i|n\rangle$. All our results are averaged over
all the source time slices $t_{src}=1,\dots,64$.
The $E_n$ and $Z_j^{n}$ are extracted from $C(t)$ via the generalized
eigenvalue method (GEVP) \cite{Michael:1985ne,Luscher:1985dn,Luscher:1990ck,Blossier:2009kd}
\begin{align}
\label{gevp}
C(t)u^{(n)}(t)&=\lambda^{(n)}(t)C(t_0)u^{(n)}(t)\;,\ \ \lambda^{(n)}(t)\propto \mathrm{e}^{-E_n t}
\end{align}
and we apply $t_0=2$ for all cases except for the single pion correlation where we choose $t_0=3$.
The large-time behavior of the eigenvalue $\lambda^{(n)}(t)$ provides $E_n$,
where specific fit forms will be mentioned case by case. The
\begin{equation}
Z_j^{n}(t)=\mathrm{e}^{E_n t/2} C_{jk}(t) u_k^{(n)}(t)/|C(t)^\frac{1}{2} u^{(n)}(t)|
\label{eq:Z}
\end{equation}
give the overlap factors in the plateau region.
For fitting $E_n$ from $\lambda^{(n)}(t)$ we usually employ a sum of two exponentials,
where the second exponential helps to parameterize the residual contamination from higher
energy states at small $t$ values. For the single pion ground state we have a large range
of $t$-values to fit and there we combine $\cosh[E_n(t-N_T/2)]$ also with such an
exponential. Correlated fits are used throughout. Single-elimination jackknife is used
for statistical analysis.
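To illustrate the workflow, a minimal sketch of the GEVP step of eqn.~(\ref{gevp}) in Python follows; the array layout is an assumption of ours, and jackknife resampling as well as the correlated two-exponential fits are omitted for brevity.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def gevp_effective_energies(C, t0):
    # C: array of shape [T, N, N]; C[t] is the Hermitian correlation
    # matrix at time slice t (assumed layout).
    T, N, _ = C.shape
    lam = np.full((T, N), np.nan)
    for t in range(t0 + 1, T):
        # generalized Hermitian eigenproblem C(t) u = lambda C(t0) u
        w, _ = eigh(C[t], C[t0])
        lam[t] = np.sort(w)[::-1]   # order eigenvalues at each t
    # effective energies a*E_n(t) = log(lambda_n(t) / lambda_n(t+1))
    with np.errstate(invalid='ignore', divide='ignore'):
        return np.log(lam[:-1] / lam[1:])
\end{verbatim}
In practice the eigenvalues would be tracked across $t$ via eigenvector overlaps rather than by simple sorting, and the energies would then be fitted as described above.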
\subsection{Quark smearing width and distillation}\label{sec:Dis}
The interpolating fields are built from the quark fields and we employ these with two
smearing widths illustrated in Fig. \ref{fig:smearing}. Linear combinations of operators
with different smearing widths provide more freedom to form the eigenstates with nodes
in the radial wave function. This is favourable for the Roper resonance
\cite{Roberts:2013ipa,Roberts:2013oea,Liu:2014jua}, which is a radial excitation within a
quark model.
Quark smearing is implemented using the so-called distillation method
\cite{Peardon:2009gh}. The method is versatile and enables us to compute all necessary
Wick-contractions, including terms with quark-annihilation. This is made possible by
pre-calculating the quark propagation from specific quark sources. The sources are
the lowest $k=1,\dots,N_v$ eigenvectors $v^{k}_{\mathbf{x}c}$ of the spatial lattice
Laplacian and $c$ is the color index. Smeared quarks are provided by
$q^c(\mathbf{x})\equiv \square_{\mathbf{x'}c',\mathbf{x}c} \; q_{point}^{c'}(\mathbf{x'})$
\cite{Peardon:2009gh} with the smearing operator
$\square_{\mathbf{x'}c',\mathbf{x}c}=\sum_{k=1}^{N_v}v^{k}_{\mathbf{x'}c'}v^{k\dagger}_{\mathbf{x}c}$. Different $N_v$ lead to different effective smearing widths.
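In terms of arrays, the smearing operator is simply a rank-$N_v$ projector; the following schematic sketch (ours, with assumed array shapes: $N_v$ eigenvectors indexed by site and color on one time slice) applies it to a point source.
\begin{verbatim}
import numpy as np

def smear(v, q_point):
    # v:       [N_v, V3, 3] complex, orthonormal Laplacian eigenvectors
    # q_point: [V3, 3] complex point quark field (one spin component)
    Nv, V3, Nc = v.shape
    vv = v.reshape(Nv, V3 * Nc)
    coeff = vv.conj() @ q_point.reshape(V3 * Nc)  # project onto modes
    return (vv.T @ coeff).reshape(V3, Nc)         # apply box projector

# narrower smearing: N_v = 48;  wider smearing: N_v = 24
\end{verbatim}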
In previous work we used stochastic distillation \cite{Morningstar:2011ka} on this
ensemble, which is less costly but renders noisier results. For the present project we
implemented the distillation\footnote{Sometimes referred to as the full distillation. }
with narrower ($n$) smearing $N_v=48$ and wider ($w$) smearing $N_v=24$, illustrated
in Fig. \ref{fig:smearing}. Two smearings are employed to enhance freedom in forming the
eigenstates with nodes. Most of the interpolators and results below are based on narrower
smearing which gives better signals in practice, although both widths are not very
different. The details of our implementation of the distillation method are collected
in \cite{Lang:2011mn} for another ensemble.
\begin{figure}[!htb]
\begin{center}
\includegraphics*[width=0.45\textwidth,clip]{fig_1.pdf}
\end{center}
\caption{ The profile $ \Psi(r)$ of the ``narrower" ($N_v=48$) and the ``wider" ($N_v=24$)
smeared quark, where $ \Psi(r)=\sum_{\mathbf{x},t}\sqrt{\mathrm{Tr}_c[~\square_{\mathbf{x,x+r}}(t)~
\square_{\mathbf{x,x+r}}(t)~]} $.}
\label{fig:smearing}
\end{figure}
\subsection{\texorpdfstring{Interpolators and energies of $\pi$ and $N$}{}}
Single particle energies are needed to determine reference energies of the non-interacting
(i.e., disregarding interaction between the mesons and baryons)
system, and also to examine phase shifts (see Subsection
\ref{subsec:scatteringanalysis}). The following $\pi$ and $N$ annihilation interpolators
are used to extract energies of the single hadrons with momenta ${\mathbf n}\,2\pi/ L$
(these are also used as building blocks for interpolators in the Roper channel):
\begin{align}
\label{pi}
\pi^+(\mathbf{n})& =\sum_{\mathbf x} \bar d({\mathbf x},t)\gamma_5 u({\mathbf x},t) \mathrm{e}^{i\mathbf{x\cdot n}\frac{2\pi}{L}} \\
\pi^0(\mathbf{n}) &=\tfrac{1}{\sqrt{2}}\sum_{\mathbf x} [\bar d({\mathbf x},t)\gamma_5 d({\mathbf x},t)-\bar u({\mathbf x},t)\gamma_5 u({\mathbf x},t)] \mathrm{e}^{i\mathbf{x\cdot n}\frac{2\pi}{L}}\nonumber
\end{align}
and
\begin{align}
\label{N}
& N^{i}_{m_s=1/2}(\mathbf{n})= {\cal N}^{i}_{\mu=1}(\mathbf{n})\;,\ N^{i}_{m_s=-1/2}(\mathbf{n})= {\cal N}^{i}_{\mu=2}(\mathbf{n}) \\
& {\cal N}^i_\mu(\mathbf{n})\!=\!\sum_{\mathbf{x}} \epsilon_{abc} [u^{aT}(\mathbf{x},t) \Gamma_2^i d^b (\mathbf{x},t)] ~[\Gamma_1^i q^c(\mathbf{x},t)]_{\mu}~\mathrm{e}^{i\mathbf{x\cdot n}\frac{2\pi}{L}}\nonumber\\
& i=1,2,3:\quad (\Gamma_1^i,\Gamma_2^i)=(\mathbf{1},C\gamma_5),~(\gamma_5,C),~(i\mathbf{1},C\gamma_t\gamma_5)\nonumber
\end{align}
Three standard choices for $\Gamma_{1,2}$ are used. The 3rd quark is $q=u$ for the
proton and $q=d$ for the neutron. Equation (\ref{N}) is in Dirac basis and the upper two
components ${\cal N}_{\mu=1,2}$ of the Dirac four spinor ${\cal N}_{\mu}$ are the ones
with positive parity at zero momentum. The spin component $m_s$ in $N_{m_s}$ is a good
quantum number for $\mathbf{p}=0$ or $\mathbf{p}\propto e_z$, which is employed to
determine energies in Table \ref{tab:singleH}. It is not a good quantum number for
general $\mathbf{p}$ and it denotes the spin component $m_s$ of the corresponding field at
rest. The ``non-canonical'' fields $N_{m_s}(\mathbf{n})$ (\ref{N}) built only from
upper-components have the desired transformation properties under rotation $R$ and
inversion $I$, which are necessary to build two-hadron operators \cite{Prelovsek:2016iyo}:
\begin{align}
\label{transf}
RN_{m_s}(\mathbf{n})R^\dagger \!\!&=\!\!\! \sum_{m_s'} D^{1/2}_{m_sm_s'}(R^\dagger) N_{m_s'} (R\mathbf{n}), \nonumber\\
R\pi(\mathbf{n})R^\dagger&=\pi(R\mathbf{n)}\nonumber\\
I N_{m_s}(\mathbf{n})I &= N_{m_s}(-\mathbf{n}),\nonumber\\
I\pi(\mathbf{n})I&=-\pi(-\mathbf{n})~.
\end{align}
Interpolators with narrower quark sources are used for the determination of the masses
and energies of $\pi$ and $N$. Those are collected in Table \ref{tab:singleH}, where they
are compared to energies $E^c$ expected in the continuum limit $a\to 0$.
\begin{table*}[t]
\begin{ruledtabular}
\begin{tabular}{c c c c c c c c}
hadron & $\mathbf{n}=\tfrac{\mathbf{p}L}{2\pi} $ &interpol. & fit range & fit type & $\chi^2$/dof & $E\,a$ (lat) &
$E^{c}a=a \sqrt{m^2+\mathbf{p}^2}$ \\
\hline
$\pi$ & (0,0,0) & $ \pi$ & 8-18 & cosh+exp, c & 0.99& $0.07558 \pm 0.00098$ & \\
$\pi$ & (0,0,1) & $ \pi$ & 6-20 & 2 exp, c & 1.91 & $0.2049 \pm 0.0023 $ &0.2104\\
\hline
$N$ & (0,0,0) & $N_n^{1,3}$ & 4-12 & 2 exp, c & 0.39& $0.4455 \pm 0.0056$ & \\
$N$ & (0,0,1) & $N_n^{1,3}$ & 4-12 & 2 exp, c & 0.54 & $0.4920 \pm 0.0072$ & 0.4864 \\
\end{tabular}
\end{ruledtabular}
\caption{ The energies of single hadrons $\pi$ and $N$ for two relevant momenta, based on configuration set "all-4". Energies in GeV are obtained by multiplying with $1/a\simeq 2.17~$GeV.
}\label{tab:singleH}
\end{table*}
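As a quick consistency check of the last column (our addition, not in the original): with $p\,a=2\pi/32$ for momentum $(0,0,1)$ on the $32^3$ lattice, the dispersion relation reproduces the pion entry exactly and the nucleon entry at the per-mille level; the tiny residue presumably reflects a slightly different mass input.
\begin{verbatim}
import math

pa = 2 * math.pi / 32   # one unit of momentum in lattice units
for name, ma in [("pi", 0.07558), ("N", 0.4455)]:
    print(name, round(math.sqrt(ma**2 + pa**2), 4))  # pi 0.2104, N 0.4869
\end{verbatim}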
\subsection{Interpolating fields for the Roper channel}
Our central task is to calculate the energies of the eigenstates $E_n$ with $J^P=1/2^+$
and total momentum zero, including multi-particle states. We want to cover the energy
range up to approximately $1.65~$GeV, which is relevant for the Roper region. The
operators with these quantum numbers have to be carefully constructed. Although $qqq$
interpolators in principle couple also to multi-hadron intermediate states in dynamical
QCD, the multi-hadron eigenstates are often not established in practice unless the
multi-hadron interpolators are also employed in the correlation matrix.
We apply 10 interpolators $O_{i=1,...,10}$ with $P=+$, $S=1/2$, $(I,I_3)=(1/2,1/2)$
and total momentum zero \cite{Prelovsek:2016iyo} ($P$ and $m_s$ are good continuum
quantum numbers in this case). For $m_s=1/2$, we have
\begin{align}
O_{1,2}^{N\pi}&=
-\sqrt{\tfrac{1}{3}} ~\bigl[p^{1,2}_{-\frac{1}{2}}(-e_x) \pi^0(e_x)-p^{1,2}_{-\frac{1}{2}}(e_x) \pi^0(-e_x)\nonumber\\
& \qquad \qquad-i p^{1,2}_{-\frac{1}{2}}(-e_y) \pi^0(e_y)+i p^{1,2}_{-\frac{1}{2}}(e_y) \pi^0(-e_y)\nonumber\\
& \qquad \qquad+ p^{1,2}_{\frac{1}{2}}(-e_z) \pi^0(e_z)-p^{1,2}_{\frac{1}{2}}(e_z) \pi^0(-e_z)\bigr]\nonumber\\
& \quad +\sqrt{\tfrac{2}{3}} ~\bigl[\{p\to n, \pi^0\to \pi^+\}\bigr] \quad [narrower] \nonumber\\
O_{3,4,5}^{N_w}&=p^{1,2,3}_{\frac{1}{2}}(0)\quad [wider]\nonumber\\
O_{6,7,8}^{N_n}&=p^{1,2,3}_{\frac{1}{2}}(0)\quad [narrower]\nonumber\\
O_{9,10}^{N\sigma}&=p^{1,2}_{\frac{1}{2}}(0) \sigma(0) \quad [narrower]
\label{O}
\end{align}
where these are the annihilation fields and
\begin{equation}
\label{sigma}
\sigma(0)=\tfrac{1}{\sqrt{2}}\sum_{\mathbf x} [\bar u({\mathbf x},t)u({\mathbf x},t)+\bar d({\mathbf x},t) d({\mathbf x},t)]~.
\end{equation}
The momenta of fields in units of $2\pi/L$ are given in parenthesis with $e_x$, $e_y$, and
$e_z$ denoting the unit vectors in $x, y$, and $z$ directions, while the lower index on
$N=p,n$ is $m_s$. All quarks have the same smearing width (narrower or wider in Fig.
\ref{fig:smearing}) within one interpolator. The $O^{N\pi}$ was constructed in
\cite{Prelovsek:2016iyo}, while factors with square-root are Clebsch-Gordan coefficients
related to isospin. For $m_s=-1/2$, $p_{1/2}$ and $n_{1/2}$ gets replaced by $p_{-1/2}$
and $n_{-1/2}$ in $O_{3-10}$, while $O_{1,2}$ becomes \cite{Prelovsek:2016iyo}
\begin{align}
O_{1,2}^{N\pi}&=
-\sqrt{\tfrac{1}{3}} ~\bigl[p^{1,2}_{\frac{1}{2}}(-e_x) \pi^0(e_x)-p^{1,2}_{\frac{1}{2}}(e_x) \pi^0(-e_x)\nonumber\\
& \qquad \qquad+i p^{1,2}_{\frac{1}{2}}(-e_y) \pi^0(e_y)-i p^{1,2}_{\frac{1}{2}}(e_y) \pi^0(-e_y)\nonumber\\
& \qquad \qquad- p^{1,2}_{-\frac{1}{2}}(-e_z) \pi^0(e_z)+p^{1,2}_{-\frac{1}{2}}(e_z) \pi^0(-e_z)\bigr]\nonumber\\
& \quad +\sqrt{\tfrac{2}{3}} ~\bigl[\{p\to n, \pi^0\to \pi^+\}\bigr] \quad [narrower]
\label{Osd}
\end{align}
The basis (\ref{O}) contains conventional $qqq$ fields as well as the most relevant multi-hadron
components. The non-interacting levels below $1.65~$GeV are $N(0)$, $N(1)\pi(-1)$,
$N(0)\pi(0)\pi(0)$ and, in the zero-width approximation, $N(0) \sigma(0)$. The $N(2)\pi(-2)$,
$N(1)\pi(-1)\pi(0)$ and others are at higher energies. Here $O^{N\pi}$ corresponds to $N(1)\pi(-1)$ in $p$-wave \cite{Prelovsek:2016iyo}. Our notation implies projection to $J^P=\frac{1}{2}^+$ for all
operators (e.g., $N(1)\sigma(-1)$ actually refers to $\sum_{\mu=1}^3 N(e_\mu)\sigma(-e_\mu)$).
Interpolators $N(n)\pi(-n)$ with $n\geq 2$ are not incorporated, so we do not expect to find those in the
spectrum. We implement only one type of $\sigma$ interpolator (\ref{sigma}) in $O^{N\sigma}$ and we
expect that this represents a possible superposition of $N\pi\pi$ and $N\sigma$.\footnote{The $\sigma$
channel itself was recently simulated with a number of interpolators in \cite{Briceno:2016mjc}.}
On the discrete lattice the continuum rotation symmetry group is reduced to the discrete
lattice double-cover group $O_h^2$. The states with the continuum quantum number
$J^P=1/2^+$ transform according to the $G_1^+$ irreducible representation on the
lattice. All operators (\ref{O}) indeed transform according to $G_1^+$
\begin{align}
RO^{m_s}_i(0)R^\dagger &= \sum_{m_s'} D^{1/2}_{m_sm_s'}(R^\dagger) O^{m_s'}_i (0),\ \nonumber \\
I O^{m_s}_i(0)I & = O_i^{m_s}(0),
\end{align}
as can be checked by using the transformations of individual fields $N$, $\pi$, $\sigma$
(eqn. \ref{pi}, \ref{N}, \ref{sigma}). The $N\pi$ operator with such transformation
properties was constructed using the projection, partial-wave and helicity methods
\cite{Prelovsek:2016iyo}, all leading to $O_{1,2}^{N\pi}$ in eqns. (\ref{O},\ref{Osd}).
The partial-wave method indicates that it describes $N\pi$ in $p$-wave.
We restrict our calculations to zero total momentum since parity is a good quantum number
in this case. The positive parity states with $J=1/2$ as well as $J\geq 7/2$ appear in
the relevant irreducible representation $G_1^+$ of $O_h^2$. The observed baryons with
$J\geq 7/2$ lie above $1.9~$GeV, therefore this does not present a complication for
the energy region of our interest. We do not consider the system with non-zero total
momenta since $1/2^+$ as well as $1/2^-$ (and others) appear in the same irreducible
representation \cite{Gockeler:2012yj}, which would be a significant complication
especially due to the negative parity states $N(1535)$ and $N(1650)$.
\subsection{Wick contractions for the Roper channel}
The $10\times 10$ correlation function $C_{ij}(t)$ (\ref{C}) for the Roper channel is
obtained after evaluating the Wick contractions for any pair of source $\bar{O}_j $ and
sink $O_i$. The number of Wick contractions involved in computing the correlation
functions between our interpolators (eqn. \ref{O}) is tabulated in Table \ref{tb:Wick}.
\begin{table}[h!] \begin{tabular}{ c| c c c }
$O_i\backslash O_j$ & $O^N$ & $O^{N\pi}$ & $O^{N\sigma}$ \\
\hline
$O^N$ & 2 & 4 & 7 \\
$O^{N\pi}$ & 4 & 19 & 19\\
$O^{N\sigma}$ & 7 & 19 & 33
\end{tabular}
\caption{Number of Wick contractions involved in computing correlation functions between
interpolators in eqn. (\ref{O}).}
\label{tb:Wick}
\end{table}
\vspace{0.2cm}
The $O^N\leftrightarrow O^N$ contractions have been widely used in the past.
\footnote{Footnote added after publication: the $N\pi$ contribution to correlators $O^N\leftrightarrow O^N$
with local operators has been determined via ChPT in \cite{Bar:2015zwa}.} The 19
Wick-contractions $O^{N\pi}\leftrightarrow O^{N\pi}$ and 4 Wick contractions
$O^{N}\leftrightarrow O^{N\pi}$ are the same as in the Appendix of \cite{Lang:2012db},
where the negative-parity channel was studied. The inclusion of $O^{N\sigma}$ introduces
additional $2\cdot 7 + 2\cdot 19+33$ Wick contractions, while the inclusion of three
hadron interpolators like $N\pi\pi$ would require many more. We evaluate all necessary
contractions in Table \ref{tb:Wick} using the distillation method \cite{Peardon:2009gh}
discussed in Section \ref{sec:Dis}.
Appendix \ref{sec:wick} illustrates how to handle the spin components in evaluating
$C(t)$, where one example of the Wick contraction $\langle\Omega|O^{N\pi}\bar
O^{N}|\Omega\rangle$ is considered.
\begin{figure*}[!htb]
\begin{center}
\includegraphics*[height=70mm,clip]{fig_2a.pdf}$\qquad$
\includegraphics*[height=78mm,clip]{fig_2b.pdf}
\end{center}
\caption{ The eigenenergies $E_n$ (a) and normalized overlaps $Z_i^n=\langle \Omega| O_i|n\rangle$ (b), which result from the correlation matrix (\ref{C}) based on the complete interpolator set (\ref{O_complete}). Left pane (a): the energies $E_n$ from lowest ($n=1$) to highest ($n=4$). The horizontal dashed lines represent the energies $m_N+2m_\pi$ and $E_{N(1)}+E_{\pi(-1)}$ of the expected multi-hadron states in the non-interacting limit. Right pane (b): the ratios of overlaps $Z_i^n$ with respect to the largest among $|Z_i^{m=1,...,5}|$; these ratios are independent of the normalization of $O_i$. The full and empty symbols correspond to positive and negative $Z_i^n$, respectively ($Z_i^n$ are almost real). Configuration set "all-4" is used. }
\label{fig:E_final}
\end{figure*}
\begin{figure}[!htb]
\begin{center}
\includegraphics*[width=0.35\textwidth,clip]{fig_3.pdf}
\end{center}
\caption{ The effective energies $E_n^{eff}(t)=\log[\lambda^{(n)}(t)/\lambda^{(n)}(t+1)]\to E_n$ of eigenvalues $\lambda^{(n)}$. These correspond to the energies of eigenstates $E_n$ in Fig. \ref{fig:E_final}a and Table \ref{tab:E_final}. It is based on the complete interpolator set (\ref{O_complete}) and configuration set "all-4". The fits of $\lambda^{(n)}(t)$ that render $E_n$ are also presented. Non-interacting energies of $N(0)\pi(0)\pi(0)$ and $N(1)\pi(-1)$ are shown with dashed lines. }
\label{fig:Eeff_final}
\end{figure}
\begin{table}[t]
\begin{ruledtabular}
\begin{tabular}{c c c c c }
eigenstate & fit & fit & $\chi^2$/dof & $E\,a$\\
$n$ & range & type & & \\
\hline
1 &4-12 & 2 exp, c&0.50 & $0.4427\pm 0.0055$ \\
2 & 4-12& 2 exp, c&1.04 & $0.6196\pm 0.0266$ \\
3 &4-10 & 2 exp, c&0.88 & $0.6873\pm 0.0195$ \\
4 &4-7 &1 exp, c &0.32 & $0.9527 \pm 0.0338$ \\
\end{tabular}
\end{ruledtabular}
\caption{ The final energies $E_n$ of eigenstates in the Roper channel, which correspond to
Fig. \ref{fig:E_final}a and effective masses in Fig. \ref{fig:Eeff_final}.
They are obtained from correlated fits based on complete interpolator
set (eqn. \ref{O_complete}) and configuration set "all-4". Energies
in GeV can be obtained by multiplying with $1/a\simeq 2.17~$GeV. }\label{tab:E_final}
\end{table}
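For readers unfamiliar with the fit types quoted in Tables \ref{tab:singleH} and \ref{tab:E_final}, which we read as one- or two-exponential correlated fits, the following Python sketch (ours, with synthetic data rather than our eigenvalues) illustrates a "2 exp, c" fit:
\begin{verbatim}
# Sketch (ours, synthetic data): correlated two-exponential fit of a
# GEVP eigenvalue, lambda(t) = A e^{-E t} (1 + B e^{-dE t}); chi^2 is
# built with the covariance of the mean, as in a correlated ("c") fit.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
ts = np.arange(4, 13)                        # fit range 4-12
lam = np.exp(-0.44 * ts) * (1 + 0.4 * np.exp(-0.4 * ts))
samples = lam * (1 + 0.01 * rng.standard_normal((200, ts.size)))

mean = samples.mean(axis=0)
cov = np.cov(samples, rowvar=False) / samples.shape[0]
W = np.linalg.cholesky(np.linalg.inv(cov))   # whitening: chi^2 = |W^T r|^2

def model(p, t):
    A, E, B, dE = p
    return A * np.exp(-E * t) * (1 + B * np.exp(-dE * t))

fit = least_squares(lambda p: W.T @ (model(p, ts) - mean),
                    x0=[1.0, 0.5, 0.5, 0.5])
print(fit.x[1], 2 * fit.cost / (ts.size - 4))  # E ~ 0.44, chi^2/dof
\end{verbatim}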
\begin{figure}[!htb]
\begin{center}
\includegraphics*[height=65mm,clip]{fig_4.pdf}
\end{center}
\caption{The energies of eigenstates $E_n$ for various choices of interpolator basis
(\ref{O}) used in correlation matrix (\ref{C},\ref{gevp}). The reference choice 1
representing the complete interpolator set $O_1^{N\pi},~
O^{N_w}_3,~O^{N_n}_{6,8},~O_{9}^{N\sigma}$ (\ref{O_complete}) is highlighted. One or
more interpolators are removed for other choices. The horizontal lines present
non-interacting energies of $N(0)\pi(0)\pi(0)$ and $N(1)\pi(-1)$. Results are based on
configuration set "all-4".}
\label{fig:E_interp_set}
\end{figure}
\begin{figure}[!htb]
\begin{center}
\includegraphics*[height=65truemm,clip]{fig_5.pdf}
\end{center}
\caption{The energies $E_n$ are determined on all 197 configurations ("all"), on 196
configurations ("all-1"), and on 193 configurations ("all-4"), as described in Section
\ref{sec:conf}. The values are based on the interpolator set
$O_1^{N\pi},~O^{N_n}_{6,8},~O_{9}^{N\sigma}$ which gives smaller statistical errors than
set (\ref{O_complete}) for "all" and "all-1". The horizontal lines present non-interacting
energies of $N(0)\pi(0)\pi(0)$ and $N(1)\pi(-1)$ for the corresponding configuration
sets. }
\label{fig:E_config_set}
\end{figure}
\section{Results} \label{sec:results}
\subsection{Energies and overlaps}
Our main results are the energies of the eigenstates in the $J^P=1/2^+$ channel, shown
in Fig. \ref{fig:E_final}a. These are based on the $5\times 5$ correlation matrix
(\ref{C}) for the subset of interpolators (\ref{O})
\begin{equation}
\label{O_complete}
\mathrm{complete\ interpolator\ set:}\ O_1^{N\pi},~ O^{N_w}_3,~O^{N_n}_{6,8},~O_{9}^{N\sigma}\;,
\end{equation}
which we refer to as the "complete set" since it contains all types of interpolators.
Adding other interpolators to this basis, notably $ O_{2,4,7,10}$, which include the
$N^{i=2}$ interpolator\footnote{It has been observed already earlier, e.g.
\cite{Brommel:2003jm}, that this interpolator shows no plateau behavior in the effective
energy.}, makes the eigenenergies noisier. The eigenenergies $E_n$ are obtained from
the fits of the eigenvalues $\lambda^{(n)}(t)$ (\ref{gevp}), with fit details in Table
\ref{tab:E_final}. The horizontal dashed lines represent the energies of the expected
multi-hadron states $m_N+2m_\pi$ and $E_{N(1)}+E_{\pi(-1)}$ in the non-interacting limit
(the individual hadron energies measured on our lattice and given in Table \ref{tab:singleH}
are used for this purpose throughout this work). The study of this channel with
almost physical pion mass is challenging as far as statistical errors are concerned.
This can be seen from the effective energies in Fig. \ref{fig:Eeff_final} which give
eigenenergies in the plateau region.
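The extraction chain behind Figs. \ref{fig:E_final} and \ref{fig:Eeff_final} is the standard variational one: solve the generalized eigenvalue problem (\ref{gevp}), $C(t)v^{(n)}=\lambda^{(n)}(t)C(t_0)v^{(n)}$, and form effective energies. A minimal sketch (our illustration with a synthetic two-state correlator, not the measured $5\times 5$ matrix):
\begin{verbatim}
# Sketch (ours): GEVP analysis of a correlation matrix and effective
# energies E_eff(t) = log[lambda(t)/lambda(t+1)].  Synthetic 2x2 data.
import numpy as np
from scipy.linalg import eigh

T, t0 = 16, 2
E_true = np.array([0.44, 0.62])          # hypothetical energies (lattice units)
Z = np.array([[1.0, 0.3], [0.2, 1.0]])   # hypothetical overlaps Z_i^n
C = np.array([Z @ np.diag(np.exp(-E_true * t)) @ Z.T for t in range(T)])

lam = np.array([np.sort(eigh(C[t], C[t0], eigvals_only=True))[::-1]
                for t in range(T)])      # generalized eigenvalues, descending
E_eff = np.log(lam[:-1] / lam[1:])
print(E_eff[6])                          # -> [0.44, 0.62] in the plateau
\end{verbatim}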
The ground state ($n=1$) in Fig. \ref{fig:E_final}a represents the nucleon. The
first-excited eigenstate ($n=2$) lies near $m_N+2m_\pi$ and appears to be close to
$N(0)\pi(0)\pi(0)$ in the non-interacting limit. The next eigenstate $n=3$ lies near the
non-interacting energy $E_{N(1)}+E_{\pi(-1)}$. It dominantly couples to $O^{N\pi}$ and we
relate it to $N(1)\pi(-1)$ in the non-interacting limit. Further support in favor of
this identification for levels $n=2,3$ will be given in the discussion of Figs.
\ref{fig:E_interp_set} and \ref{fig:E_config_set}. The most striking feature of the
spectrum is that there are only three eigenstates below $1.65~$GeV, while the other
eigenstates appear at higher energy.
The overlaps of these eigenstates with various operators are presented in Fig.
\ref{fig:E_final}b. The nucleon ground state $n=1$ couples well with all
interpolators that contain $N^1$. The operator $O^{N\pi}$ couples well with eigenstate
$n=3$, which gives further support that this state is related to $N(1)\pi(-1)$. The
operator $O^{N\sigma}$ couples best with the nucleon ground state, which is not surprising
due to the presence of the Wick contraction where the isosinglet $\sigma$
(\ref{sigma}) annihilates and the remaining $N^1$ couples to the nucleon. Interestingly,
the $O^{N\sigma}$ has similar couplings to the eigenstates $n=2$ and $n=3$, which are
related to $N(0)\pi(0)\pi(0)$ and $N(1)\pi(-1)$ in the non-interacting limit. One would
expect $|\langle \Omega| O^{N\sigma} |n=2\rangle| \gg |\langle \Omega|O^{N\sigma}
|n=3\rangle|$ if the channel $N\pi$ were decoupled from $N\sigma/N\pi\pi$. Our overlaps
$Z_{i=9}^{n=2,3}$ suggest that the channels are significantly coupled. The scenario
where the coupled-channel scattering might be crucial for the Roper resonance will be
discussed in Section \ref{sec:discussion}.
The features of the spectrum for various choices of the interpolator basis are
investigated in Fig. \ref{fig:E_interp_set}. The complete set (\ref{O_complete}) with all
types of interpolators is highlighted as choice 1. If the operator $O^{N\pi}$ is removed
(choice 3) the eigenstate with energy $\simeq E_{N(1)}+E_{\pi(1)}$ disappears, so the
$N\pi$ Fock component is important for this eigenstate. The eigenstate with energy $\simeq
m_N+2m_\pi$ disappears if $O^{N\sigma}$ is removed (choice 4), which suggests that this
eigenstate is dominated by $N(0)\pi(0)\pi(0)$, possibly mixed with $N(0)\sigma(0)$. Any interpolator individually
renders the nucleon as a ground state (choices 5,6,7).
All previous lattice simulations, except for \cite{Kiratidis:2016hda}, used just $qqq$
interpolators. This is represented by the choice 5, which renders the nucleon, while the
next state is above $1.65~$GeV; this result is in agreement with most of the previous
lattice results based on $qqq$ operators, discussed in the Introduction. No
interpolator basis renders more than three eigenstates below $1.65~$GeV.
The most striking feature of the spectra in Figs. \ref{fig:E_final} and
\ref{fig:E_interp_set} is the absence of any additional eigenstate in the energy region
where the Roper resonance resides in experiment. The eigenstates $n=2,3$ lie in this
energy region, but two eigenstates related to $N(0)\pi(0)\pi(0)$ and $N(1)\pi(-1)$ are
inevitably expected there in dynamical QCD, even in the absence of interactions between
hadrons.
A further indication that eigenstate $n=2$ is dominated by $N(0)\pi(0)\pi(0)$ is presented
in Fig. \ref{fig:E_config_set}, where the spectrum from all configurations is compared to
the spectrum based on configuration sets "all-4" (shown in other figures) and "all-1".
The horizontal dashed lines indicate non-interacting energies obtained from the
corresponding sets. Only the central value of $E_{2}$ and $m_N+2m_\pi$ visibly depend
on the configuration set. The variation of $m_N+2m_\pi$ is due to the variations of
$m_\pi$ pointed out in Section \ref{sec:conf}. The eigenstate $n=2$ appears to track the
threshold $m_N+2m_\pi$, which suggests that its Fock component $N(0)\pi(0)\pi(0)$ is
important. Note that the full configuration set gives larger statistical errors, as
illustrated via effective masses in Fig. \ref{fig:Eeff_set} of Appendix
\ref{app:effmasses}.
\subsection{Scattering phase shift}\label{subsec:scatteringanalysis}
In order to discuss the $N\pi$ phase shift, we consider the elastic approximation where
$N\pi$ scattering is decoupled from the $N\pi\pi$ channel. In this case, the $N\pi$ phase
shift $\delta$ can be determined from the eigenenergy $E$ of the interacting state
$N\pi$ via
L\"uscher's relation \cite{Luscher:1990ux,Luscher:1991cf}
\begin{equation}
\label{luscher}
\delta(p)=\mathrm{atan}\biggl[\frac{\sqrt{\pi} p L}{2\,Z_{00}(1;(\tfrac{pL}{2\pi})^2)}\biggr],\ E=E_{N(p)}+ E_{\pi(p)}
\end{equation}
where $E_{H(p)}=\sqrt{m_H^2+p^2}$ applies in the continuum limit. The eigenenergy $E$
($E_3$ from basis $O^{N\pi,N,N\sigma}$ or $E_2$ from $O^{N\pi,N}$) has sizable error
for this ensemble with close-to-physical pion mass. It lies close to the non-interacting
energy $E_{N(1)}+E_{\pi(1)}$, as can be seen in Figs. \ref{fig:E_final},
\ref{fig:Eeff_final} and \ref{fig:Eeff_set}. We find that the resulting energy shift
$\Delta E=E-E_{N(1)}-E_{\pi(1)}$ is consistent with zero within the errors.
This implies that the phase shift $\delta$ is zero (modulo $\pi$) within a large statistical error.
We verified this using a number of choices to extract $\Delta E$ and $\delta$. The
interpolator set $O^{N\pi,N}$ (rightmost column of Fig. \ref{fig:Eeff_set}), which imitates
elastic $N\pi$ scattering, served as the main choice and was compared to the other
sets. Correlated and uncorrelated fits of $E$ as well as $E_{N(1)}+E_{\pi(1)}$ were
explored for various fit-ranges. Further choices of dispersion relations $E_\pi(p)$ and
$E_N(p)$ that match lattice energies at $p=0,1$ in Table \ref{tab:singleH} (e.g.,
interpolation of $E^2$ linear in $p^2$) were investigated within the L\"uscher analysis
to arrive at the same conclusions.
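For illustration, eqn. (\ref{luscher}) can also be evaluated numerically. The sketch below (ours) computes $Z_{00}(1;q^2)$ by a cutoff-regularized lattice sum, which converges slowly but is adequate here, and maps a hypothetical eigenenergy to a phase shift using the single-hadron masses of Table \ref{tab:singleH} and $L/a=32$:
\begin{verbatim}
# Sketch (ours): Luscher analysis.  Z_00(1;q^2) is evaluated by summing
# over integer vectors n in a ball of radius N and subtracting the
# divergent part 4*pi*N (slow convergence, fine for illustration).
import numpy as np

def Z00(qsq, N=60):
    n = np.arange(-N, N + 1)
    nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
    n2 = (nx ** 2 + ny ** 2 + nz ** 2).astype(float)
    s = np.sum(1.0 / (n2[n2 <= N ** 2] - qsq))
    return (s - 4.0 * np.pi * N) / np.sqrt(4.0 * np.pi)

def delta_deg(E, mN, mpi, L):
    # two-body momentum from E = sqrt(mN^2+p^2) + sqrt(mpi^2+p^2)
    p = np.sqrt((E**2 - (mN + mpi)**2) * (E**2 - (mN - mpi)**2)) / (2 * E)
    q = p * L / (2 * np.pi)
    return np.degrees(np.arctan(np.sqrt(np.pi) * p * L
                                / (2 * Z00(q ** 2)))) % 180.0

# single-hadron masses in lattice units; hypothetical eigenenergy E
print(delta_deg(E=0.60, mN=0.4455, mpi=0.0756, L=32.0))
\end{verbatim}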
\begin{figure}[!tb]
\begin{center}
\includegraphics*[width=0.45\textwidth,clip]{fig_6.pdf}
\end{center}
\caption{ The experimental phase shift $\delta$ and inelasticity $1-\eta^2$ as extracted
by the GWU group \cite{Workman:2012hx} (solution WI08). The dot-dashed line is a smooth
interpolation that is used in Section \ref{sec:discussion_elastic}.}
\label{fig:P11_exp}
\end{figure}
\section{Discussion and interpretation} \label{sec:discussion}
Here we discuss the implications of our results, in particular that only three
eigenstates are found below $1.65$ GeV. These appear to be associated with
$N(0),~N(0)\pi(0)\pi(0)$ and $N(1)\pi(-1)$ in the non-interacting limit.
The experimental $N\pi$ scattering data for the amplitude $T=(\eta e^{2i\delta}-1)/(2i)$
for this ($P_{11}$) channel are shown in Fig. \ref{fig:P11_exp} \cite{Workman:2012hx}\footnote{ The experimental data comes from the GWU homepage {\tt
gwdac.phys.gwu.edu}}. The channel is complicated by the fact that $N\pi$ scattering is
not elastic above the $N\pi\pi$ threshold and the inelasticity is sizable already in the
energy region of the Roper resonance.
The presence of the $N\pi\pi$ channel prevents a rigorous investigation on the lattice at the
moment. While the three-body channels have been treated analytically, see for example
\cite{Hansen:2015zga,Hansen:2016ync}, the scattering parameters have not been determined
in any channel within lattice QCD up to now. For this reason we consider implications for
the lattice spectrum based on various simplified scenarios. By comparing our lattice
spectra to the predictions of these scenarios, certain conclusions on the Roper resonance
are drawn.
\begin{figure}[!htb]
\begin{center}
\includegraphics*[width=0.45\textwidth,clip]{fig_7a.pdf}\\~\\
\includegraphics*[width=0.45\textwidth,clip]{fig_7b.pdf}
\end{center}
\caption{ (a) Analytic prediction for the eigenenergies $E$ as a function of the lattice
size $L$, according to (\ref{luscher}). The $N\pi$ and $N\pi\pi$ are assumed to be
decoupled, and $N\pi\pi$ is non-interacting. The curves show: non-interacting $N\pi$ (red
dashed), interacting $N\pi$ based on experimental phase shift \cite{Workman:2012hx}
(orange dotted), $N\pi\pi$ threshold (blue dashed), proton mass (black), Roper mass (cyan
band). The experimental masses of hadrons are used. (b) Left: energy values from our
simulation. (b) Right: the full violet circles show the analytic predictions for the
energies at our $L=2.9~$fm based on the experimental phase shift data and elastic
approximation (same as violet circles in upper pane). We show only the energy region
$E<1.7~$GeV where we aim to extract the complete spectrum (there are additional
multi-hadron states in the shaded region and we did not incorporate interpolator fields
for those).
}
\label{fig:E_analytic}
\end{figure}
\subsection{\texorpdfstring{$N\pi$ scattering in elastic approximation}{}} \label{sec:discussion_elastic}
Let us examine what would be the lattice spectrum assuming experimental $N\pi$ phase
shift in the approximation when $N\pi$ is decoupled from the $N\pi\pi$ channel. In
addition we consider no interactions in the $N\pi\pi$ channel. The elastic phase shift
$\delta$ in Figure \ref{fig:P11_exp} allows one to obtain the discrete energies $E$ as a
function of the spatial lattice size $L$ via L\"uscher's equation (\ref{luscher}).
Figure \ref{fig:E_analytic}a shows the non-interacting levels for $N(0)$ (black),
$N(0)\pi(0)\pi(0)$ (blue), and $N(1)\pi(-1)$ (red). These are shifted by the interaction. Also
plotted are the eigenstates (orange) in the interacting $N\pi$ channel derived from the
experimental elastic phase shift with the help of eqn. (\ref{luscher}). The elastic scenario
should therefore render four eigenstates below 1.65 GeV at our $L\simeq 2.9~$fm,
indicated by the violet circles in Figures \ref{fig:E_analytic}a and
\ref{fig:E_analytic}b. Three non-interacting levels\footnote{These are the three intercepts
of the dashed curves with the vertical green line at $L=2.9~$fm.} below $1.65~$GeV turn into
four interacting levels (violet circles) at $L\simeq 2.9~$fm. The Roper resonance phase
shift passing $\pi/2$ is responsible for the extra level.
Our actual lattice data features only three eigenstates below $1.65~$GeV, and no extra
low-lying eigenstate is found. Comparison in Figure \ref{fig:E_analytic}b indicates that
the lattice data is qualitatively different from the prediction of the resonating $N\pi$
phase shift for the low-lying Roper resonance, assuming it is decoupled from $N\pi\pi$.
\subsection{\texorpdfstring{Scenarios with coupled $N\pi-N\sigma-\Delta\pi$ scattering}{}}
Our analysis does not show the resonance-related level. One reason could be that the Roper
resonance is a truly coupled-channel phenomenon, and one has to include further
interpolators like $\Delta \pi$, $N\rho$ and an explicit $N\pi\pi$ three-hadron
interpolator. The scattering of $N\pi-N\sigma-\Delta\pi$ in the Roper channel was
studied recently using Hamiltonian Effective Field Theory (HEFT) \cite{Liu:2016uzk}. The
$\sigma$ and $\Delta$ were assumed to be stable under the strong decay, which is a
(possibly serious) simplification. The free parameters were always fit to the experimental
$N\pi$ phase shift and describe the data well. Three models were discussed:
\begin{enumerate}
\item[I]
The three channels are coupled with a low-lying bare Roper operator of type $qqq$.
\item[II]
No bare baryon; the $N\pi$ phase shift is reproduced solely via coupled channels.
\item[III]
The three channels are coupled only to a bare nucleon.
\end{enumerate}
The resulting Hamiltonian was considered in a finite volume leading to discrete
eigenenergies for all three cases, plotted in Fig. \ref{fig:HEFT} for our parameters
$L=2.9~$fm and $m_\pi=156~$MeV \cite{Liu:2016uzk}.
In Fig. \ref{fig:HEFT} we compare our lattice spectra with the prediction for energies of $J^P=1/2^+$ states in three scenarios. The stars mark the
high-lying eigenstates $N(1)\sigma(-1)$, $\Delta(1)\pi(-1)$ and $N(2)\pi(-2)$
\cite{Liu:2016uzk}, which are not expected to be found in our study since we did not incorporate corresponding interpolators in
(\ref{O}). The squares denote predictions from the three scenarios that can be
qualitatively compared with our lattice spectra.
Our lattice levels below $1.7~$GeV disagree with model I, which is based on a bare Roper $qqq$
core, but are consistent with II and (preferred) III, which have no bare Roper $qqq$ core. In
those scenarios the Roper resonance is dynamically generated from the
$N\pi/N\sigma/\Delta\pi$ channels, coupled also to a bare nucleon core in case III.
Preference for interpretations II,III was reached also in other phenomenological
studies \cite{Krehl:1999km,Schutz:1998jx,Liu:2016uzk,Matsuyama:2006rp} and on the lattice
\cite{Kiratidis:2016hda}, for example.
\begin{figure}[!htb]
\begin{center}
\includegraphics*[width=0.49\textwidth,clip]{fig_8.pdf}
\end{center}
\caption{ Analytic predictions for the lattice spectra at $m_\pi=156~$MeV and $L=2.9~$fm from the Hamiltonian Effective Field theory. These are based on three scenarios concerning the Roper resonance \cite{Liu:2016uzk}. Our lattice spectrum is shown with circles on the left. Qualitative comparison between the energies represented by squares and circles can be made, as discussed in the main text. }
\label{fig:HEFT}
\end{figure}
\subsection{Hybrid baryon scenario}
Several authors, for example \cite{Golowich:1982kx,Kisslinger:1995yw}, have proposed that
the Roper resonance might be a hybrid baryon $qqqG$ with an excited gluon field. This
scenario predicts the longitudinal helicity amplitude $S_{1/2}$ to vanish
\cite{Li:1991yba}, which is not supported by the measurement \cite{Mokeev:2012vsa}.
Our lattice simulation cannot provide any conclusion regarding this scenario since we have
not incorporated interpolating fields of the hybrid type.
\subsection{Other possibilities for absence of the resonance related level}
Let us discuss other possible reasons for the missing resonance level in our results,
beyond the coupled-channel interpretation offered above.
We could be missing the eigenstate because we might have missed important coupling
operators. One such candidate might be a genuine pentaquark operator. A local five-quark
interpolator (with baryon-meson color structure) has been used in \cite{Kiratidis:2016hda},
where, however, no Roper signal was found either. The local pentaquark operator with color
structure $\epsilon_{abc} \bar q_a [qq]_b[qq]_c$ ($[qq]_c=\epsilon_{cde} q_dq_e$) can
be rewritten as a linear combination of local baryon-meson operators $BM=(\epsilon_{abc}
q_a q_b q_c) (\bar q_eq_e)$ by using
$\epsilon_{abc}\epsilon_{ade}=\delta_{bd}\delta_{ce}-\delta_{be}\delta_{cd}$.
Furthermore, the local baryon-meson operators are linear combinations of $B({\mathbf
p})M(-\mathbf{p})$. Among various terms, the $N(1)\pi(-1)$ and $N(0)\sigma(0)$ are the
essential ones for the explored energy region and those were incorporated in our basis
(\ref{O}). So, we expect that our simulation does incorporate the most essential
operators in the linear combination representing the genuine localized pentaquark
operator. It remains to be seen if structures with a significantly separated diquark
(such as proposed in \cite{Lebed:2015tna} for $P_c$) could also be probed by
baryon-meson operators like (\ref{O}).
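The color identity used in this rewriting is elementary but easy to check numerically; a short verification (ours):
\begin{verbatim}
# Sketch (ours): check eps_{abc} eps_{ade} = d_{bd} d_{ce} - d_{be} d_{cd}.
import numpy as np

eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0   # even / odd permutations

d = np.eye(3)
lhs = np.einsum('abc,ade->bcde', eps, eps)
rhs = np.einsum('bd,ce->bcde', d, d) - np.einsum('be,cd->bcde', d, d)
assert np.allclose(lhs, rhs)
\end{verbatim}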
It could also be that -- contrary to our expectation -- using operators with
different quark smearing widths is not sufficient to scan the $qqq$ radial excitations.
One might have to expand the interpolator set to include non-local interpolators
\cite{Edwards:2011jj} so as to have good overlap with radial excitations with non-trivial
nodal structures. There has been no study that involved the use of such operators along with
the baryon-meson operators, and within the single-hadron approach such operators do not
produce low-lying levels in the Roper energy range \cite{Edwards:2011jj}.
Finally, our results are obtained using fermions that do not obey exact chiral symmetry
at finite lattice spacing $a$, like in most of the previous simulations. It would be
desirable to verify our results using fermions that respect chiral symmetry at finite
$a$.
\section{Conclusion and outlook}
We have determined the spectrum of the $J^P=1/2^+$ and $I=1/2$ channel below 1.65 GeV,
where the Roper resonance appears in experiment. This lattice simulation has been
performed on the PACS-CS ensemble with $N_f=2+1$, $m_\pi\simeq 156$ MeV and $L=2.9~$fm.
Several interpolating fields of type $qqq$ ($N$) and $qqqq\bar q$ ($N\sigma$ in $s$-wave
and $N\pi$ in $p$-wave) were incorporated, and three eigenstates below $1.65~$GeV are
found. The energies, their overlaps to the interpolating fields and additional arguments
presented in the paper indicate that these are related to the states that correspond to
$N(0)$, $N(0)\pi(0)\pi(0)$ and $N(1)\pi(-1)$ in the non-interacting limit (momenta in
units of $2\pi/L$ are given in parentheses). This is the first simulation that finds the
expected multi-hadron states in this channel. However, the uncertainties on the extracted
energies are sizable and the extracted $N\pi$ phase shift is consistent with zero within a
large error.
One of our main results is that only three eigenstates lie below $1.65~$GeV, while the
fourth one lies already at about $1.8(1)~$GeV or higher. In contrast, the experimental
$N\pi$ phase shift implies four lattice energy levels below 1.65 GeV in the elastic
approximation when $N\pi$ is decoupled from $N\pi\pi$ and the latter channel is
non-interacting. Our results indicate that the low-lying Roper resonance does not arise on
the lattice within the elastic approximation of $N\pi$ scattering. This points to a
possibility of a dynamically generated resonance, where the coupling of $N\pi$ with
$N\pi\pi$ or other channels is essential for the existence of this resonance. This is
supported by comparable overlaps of the operator $O^{N\sigma}$ to the second and third
eigenstates.
We come to a similar conclusion if we compare our lattice spectrum to the HEFT predictions
for $N\pi /N\sigma /\Delta\pi$ scattering in three scenarios \cite{Liu:2016uzk}. The case
where these three channels are coupled with the low-lying bare Roper $qqq$ core is
disfavored. Our results favor the scenario where the Roper resonance arises solely as a
coupled channel phenomenon, without the Roper $qqq$ core.
Future steps towards a better understanding of this channel include simulations at
larger $m_\pi L$, decreasing the statistical error and employing $qqq$ or $qqqq\bar q$
operators with greater variety of spatially-extended structures. Simulating the system
at non-zero total momentum will give further information but will introduce additional
challenges: states of positive as well as negative parity contribute to the relevant
irreducible representations in this case. It would also be important to investigate the
spectrum based on fermions with exact chiral symmetry at finite lattice spacing.
Our results point towards the possibility that the Roper resonance is a coupled-channel
phenomenon. If this is the case, the rigorous treatment of this channel on the lattice
will be challenging. This is due to the three-hadron decay channel $N\pi\pi$ and the
fact that the three-hadron scattering matrix has not yet been extracted from lattice QCD
calculations. The simplified two-body approach to the coupled channels $N\sigma
/\Delta\pi$ (based on stable $\sigma$ and $\Delta$) cannot be compared quantitatively
to the lattice data at light $m_\pi$ where $\sigma$ and $\Delta$ are broad unstable
resonances. This is manifested also in our simulation, where $O^{N\sigma}$ operator
renders an eigenstate with $E\simeq m_N+2m_\pi$ and not $E\simeq m_N+m_\sigma$.
Pion-nucleon scattering has been the prime source of our present day knowledge on hadrons.
After decades of lattice QCD calculations we are now approaching the possibility of studying
that scattering process from first principles. This has turned out to be quite challenging,
and our contribution is only one step, with more to follow.
\acknowledgements
We thank the PACS-CS collaboration for providing the gauge configurations. We would
like to thank M. D{\"o}ring, L. Glozman, Keh-Fei Liu and D. Mohler for
valuable discussions. We are grateful to B. Golli, M. Rosina and S. \v Sirca for careful
reading of the manuscript and numerous valuable discussions and suggestions. This work is
supported in part by the Slovenian Research Agency ARRS, by the Austrian Science Fund
FWF:I1313-N27 and by the Deutsche Forschungsgemeinschaft Grant No. SFB/TRR 55. The
calculations were performed on computing clusters at the University of Graz (NAWI Graz)
and Ljubljana. S.P. acknowledges support from U.S. Department of Energy Contract No.
DE-AC05-06OR23177, under which Jefferson Science Associates, LLC, manages and operates
Jefferson Lab.
\section{Introduction}
Although global stabilization of dynamical systems is of importance in system theory and engineering \cite{Kh:02,An:02},
it is sometimes difficult or impossible to synthesize a globally stabilizing controller for certain linear and nonlinear systems \cite{Va:02}.
The reasons could be the poor controllability of the system, e.g., systems that have uncontrollable linearizations \cite{Di:11}
and systems that have fewer degrees of control freedom than the degrees of freedom to be controlled \cite{Gu:13,Gu:14},
the input/output constraints in practice, e.g., an unstable linear time-invariant system cannot be globally stabilized in the presence of
input saturations \cite{Bl:99}, time delay \cite{Sun:11,Sun:13}, and/or the involved large disturbances \cite{Ki:06}, etc.
Moreover, in many applications full stabilization, while possible, carries a high penalty due to the cost of the control and thus is not desirable.
Instead, minimizing a long-time average of the cost functional might be more realistic.
For instance, long-time-average cost analysis and control is often considered in irrigation, flood control, navigation, water supply, hydroelectric power, computer communication networks, and other applications \cite{Du:10,Bo:92}.
In addition, systems that include stochastic factors are often controlled in the sense of long-time average.
In \cite{Ro:83}, a summary of long-time-average cost problems for continuous-time Markov processes is given.
In \cite{Me:00}, the long-time-average control of a class of problems that arise in the modeling of semi-active suspension systems was considered,
where the cost includes a term based on the local time process of the diffusion.
Notice that the controller design methods proposed in \cite{Ro:83,Me:00} are highly dependent on the stochastic property of dynamical systems.
In certain cases, as, for example, turbulent flows of fluid, calculating the time averages is a big challenge even in the uncontrolled case. As a result, developing the control aimed at reducing the time-averaged cost for turbulent flows, for example by using the receding horizon technique, leads to controllers too complicated for practical implementation \cite{Bewley:01}.
To overcome this complexity, it was proposed \cite{Ph:14} to use an upper bound for the long-time average cost instead of the long-time average cost itself in cases when such an upper bound is easier to calculate. The idea is based on the hope that the control reducing an upper bound for a quantity will also reduce the quantity itself. Meanwhile, \cite{Ph:14} uses the sum of squares (SOS) decomposition of polynomials and semidefinite programming (SDP) and allows a trade-off between the quality of bound and the complexity of its calculation.
The SOS methods apply to systems defined by a polynomial vector field. Such systems may describe a wide variety of dynamics~\cite{Va:01} or approximate a system defined by an analytical vector field~\cite{Va:02}. A polynomial system can therefore yield a reliable model of a dynamical system globally or in larger regions than the linear approximation in the state-space~\cite{Va:03}. Recent results on SOS decomposition have transformed the verification of non-negativity of polynomials into SDP, hence providing promising algorithmic procedures for stability analysis of polynomial systems. However, using SOS techniques for optimal control, as for example in~\cite{Pr:02,Zh:07,Ma:10}, is subject to a generic difficulty: while the problem of optimizing the candidate Lyapunov function certifying the stability for a closed-loop system for a given controller and the problem of optimizing the controller for a given candidate Lyapunov function are reducible to an SDP and thus are tractable, the problem of simultaneously optimizing both the control and the Lyapunov function is non-convex. Iterative procedures were proposed for overcoming this difficulty~\cite{Zh:07,Zh:09,Ng:11}.
While optimization of an upper bound with control proposed in~\cite{Ph:14} does not involve a Lyapunov function, it does involve a similar tunable function, and it shares the same difficulty of non-convexity. In the present work we propose a polynomial type state feedback controller design scheme for the long-time average upper-bound control, where the controller takes the structure of an asymptotic series in a small-amplitude perturbation parameter.
By fully utilizing the smallness of the perturbation parameter, the resultant SOS optimization problems are solved in sequence,
thus avoiding the non-convexity in optimization. We apply it to an illustrative example and demonstrate that it does allow the long-time average cost to be reduced even without fully stabilizing the system. Notice the significant conceptual difference between our approach and the studies of control by small perturbations, often referred to as tiny feedback, see for example~\cite{tc:93}.
The paper is organized as follows. Section~\ref{seq:Background}
presents some preliminary introduction on SOS and its application in bound estimation of long-time average cost for uncontrolled systems. Section~\ref{seq:Problem Formulation}
gives the problem formulation. Bound optimization of the long-time average cost for controlled polynomial systems is considered in Section~\ref{seq:Bound}.
An illustrative example of a cylinder wake flow is addressed in Section~\ref{seq:Example}.
Section~\ref{seq:Conclusion}
concludes the work.
\section{Background}\label{seq:Background}
In this section SOS of polynomials and a recently-proposed method of obtaining rigorous bounds of long-time average cost via SOS for uncontrolled polynomial systems are introduced.
\subsection{SOS of polynomials}
SOS techniques have been frequently used in the stability analysis and controller design for all kinds of systems, e.g., constrained ordinary differential equation systems \cite{An:02}, hybrid systems \cite{An:05}, time-delay systems \cite{An:04}, and partial differential equation systems \cite{Pa:06,Yu:08,GC:11}. These techniques help to overcome the common drawback of approaches based on Lyapunov functions: before \cite{Pr:02}, there were no coherent and tractable computational methods for constructing Lyapunov functions.
A multivariate polynomial $f({\bf x})$ is a SOS, if there exist polynomials $f_1({\bf x}), \cdots, f_m({\bf x})$ such that
\[
f({\bf x})=\sum_{i=1}^mf_i^2({\bf x}).
\]
If $f({\bf x})$ is a SOS then $f({\bf x})\ge 0, \forall{\bf x} $. In the general multivariate case, however,
$f({\bf x})\ge 0~ \forall \bf x$ does not necessarily imply that $f({\bf x})$ is SOS. While being stricter, the condition that $f({\bf x})$ is SOS is much more computationally tractable than non-negativity \cite{Par:00}. At the same time, practical
experience indicates that in many cases replacing non-negativity with the SOS property leads to satisfactory results.
In the present paper we will utilize the existence of efficient numerical methods and software \cite{Pra:04,Lo:09} for solving the optimization problems of the following type: minimize the linear objective function
\begin{equation}
{\bf w}^T{\bf c}
\label{linear}
\end{equation}
where ${\bf w}$ is the vector of weighting coefficients for the linear objective function, and
${\bf c}$ is a vector formed from the (unknown) coefficients of the
polynomials $p_i({\bf x})$ for $i=1,2,\cdots, \hat{N}$ and SOS $p_i({\bf x})$ for $i=(\hat{N}+1),\cdots, {N}$,
such that
\begin{eqnarray}
&a_{0,j}({\bf x})+\sum_{i=1}^Np_i({\bf x})a_{i,j}({\bf x})=0, ~ j=1,2,\cdots, \hat{J}, \label{c5} \\
[1ex]
&a_{0,j}({\bf x})+\sum_{i=1}^Np_i({\bf x})a_{i,j}({\bf x}) \mbox{~are SOS,~} j=(\hat{J}+1),\cdots, {J}.~
\label{c6}
\end{eqnarray}
In (\ref{c5}) and (\ref{c6}), the $a_{i,j}({\bf x})$ are given scalar constant coefficient polynomials.
The lemma below, which provides a sufficient condition for testing inclusions of sets defined by polynomials, is frequently used for the feedback controller design
in Section~\ref{seq:Bound}. It is a particular case of the
Positivstellensatz
Theorem \cite{Po:99} and is a generalized ${\mathcal S}$-procedure \cite{Ta:06}.
\begin{lemma}
Consider two sets of ${\bf x}$,
\begin{eqnarray*}
{\mathcal S}_1&\eqdef& \left\{{\bf x}\in {\mathbb R}^n~|~h({\bf x})=0, f_1({\bf x})\ge 0, \cdots, f_r({\bf x})\ge 0\right\}, \\
[1ex]
{\mathcal S}_2&\eqdef& \left\{{\bf x}\in {\mathbb R}^n~|~f_0({\bf x})\ge 0\right\},
\end{eqnarray*}
where $f_i({\bf x}), i=0,\cdots, r$ and $h({\bf x})$ are scalar polynomial functions. The set containment ${\mathcal S}_1\subseteq {\mathcal S}_2$ holds if there exist a polynomial function $m({\bf x})$ and SOS polynomial functions $S_i({\bf x}), i=1,\cdots, r$ such that
\begin{eqnarray*}
f_0({\bf x})-\sum_{i=1}^rS_i({\bf x})f_i({\bf x})+m({\bf x})h({\bf x}) ~~\mbox{is SOS}.
\end{eqnarray*}
\end{lemma}
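As a concrete illustration of Lemma 1 (our toy example, in one variable), the containment $\{x~|~1-x^2\ge 0\}\subseteq\{x~|~2-x^2\ge 0\}$ is certified by a constant multiplier $S_1(x)=s_0\ge 0$ such that $(2-x^2)-s_0(1-x^2)$ is SOS; any $s_0\in[1,2]$ works. Numerically, with the SOS condition written as a Gram-matrix SDP (Python/cvxpy, assuming an SDP-capable solver such as SCS is available):
\begin{verbatim}
# Sketch (ours): certify {1-x^2>=0} subset of {2-x^2>=0} via Lemma 1.
# (2-s0) + (s0-1) x^2 = [1,x] Q [1,x]^T with Q >= 0 and s0 >= 0.
import cvxpy as cp

s0 = cp.Variable(nonneg=True)        # constant SOS multiplier
Q = cp.Variable((2, 2), PSD=True)    # Gram matrix on monomials [1, x]
constraints = [Q[0, 0] == 2 - s0,        # coefficient of 1
               Q[0, 1] + Q[1, 0] == 0,   # coefficient of x
               Q[1, 1] == s0 - 1]        # coefficient of x^2
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status, float(s0.value))  # feasible, s0 in [1, 2]
\end{verbatim}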
\subsection{Bound estimation of long-time average cost for uncontrolled systems}
For the convenience of the reader we outline here the method of obtaining bounds for long-time averages proposed in~\cite{Ph:14} and make some remarks on it.
Consider a system
\begin{eqnarray}
\dot{{\bf x}}={\bf f}({\bf x}),
\label{sys1}
\end{eqnarray}
where $\dot{{\bf x}}\eqdef d{\bf x}/dt$ and ${\bf f}({\bf x})$ is a vector of multivariate polynomials of the components of the state vector ${\bf x}\in{\mathbb R}^n$. The long-time average of a function of the state $\Phi({\bf x})$ is defined as
\[
\bar{\Phi}=\lim_{T\rightarrow \infty}\frac{1}{T}\int_0^T\Phi({{\bf x}}(t))\,dt,
\]
where ${\bf x}(t)$ is the solution of (\ref{sys1}).
Define a polynomial function of the system state, $V({\bf x})$, of degree $d_V$, whose coefficients are unknown decision variables.
The time derivative of $V$ along the trajectories of system (\ref{sys1}) is
\[
\dot{V}({\bf x}) = \dot{{\bf x}} \cdot \nabla_{{\bf x}} V({\bf x})={\bf f}({\bf x})\cdot \nabla_{{\bf x}} V({\bf x}).
\]
Consider the following quantity:
\[
H({\bf x}) \eqdef \dot{V}({\bf x}) + \Phi({\bf x})= {\bf f}({\bf x}) \cdot\nabla_{{\bf x}} V({\bf x}) + \Phi({\bf x}).
\]
The following result is from \cite{Ph:14}:
\begin{lemma}
For the system (\ref{sys1}), assume that the state ${\bf x}$ is bounded in $\mathcal{D}\subseteq {\mathbb R}^n$. Then, $ H({\bf x})\le C, \forall {\bf x}\in \mathcal{D}$ implies $\bar{\Phi}\le C$.
\label{lemma1}
\end{lemma}
Hence, an upper bound of $\bar{\Phi}$ can be obtained by minimizing $C$ over $V$ under the constraint $H({\bf x})\le C$,
which can be formulated as a SOS optimization problem in the form:
\begin{eqnarray}
&\displaystyle\min_{V}~C \label{SOS_1}\\
[1ex]
&\mbox{s.t.} ~~-\left( {\bf f}({\bf x}) \cdot\nabla_{{\bf x}} V({\bf x})+\Phi({\bf x})-C\right) \mbox{~is~SOS},
\label{SOS}
\end{eqnarray}
which is a special case of (\ref{linear}). A better bound might be obtained by removing the requirement for $V({\bf x})$ to be a polynomial and replacing (\ref{SOS}) with the requirement of non-negativity.
However, the resulting problem would be too difficult, since the classical algebraic-geometry problem
of verifying positive-definiteness of a general multi-variate polynomial is NP-hard \cite{An:02,An:05}.
Notice that while $V$ is similar to a Lyapunov function in a stability analysis, it is not required to be positive-definite. Notice also that a lower bound of any long-time average cost of the system (\ref{sys1}) can be analyzed in a similar way.
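To make the construction concrete, consider the scalar toy system $\dot x=x-x^3$ with $\Phi=x^2$ (our example, not from \cite{Ph:14}): trajectories from generic initial conditions approach $x=\pm 1$, so $\bar{\Phi}=1$. With the ansatz $V=ax^2$, constraint (\ref{SOS}) requires $C-(2a+1)x^2+2ax^4$ to be SOS, and the SDP below recovers the tight bound $C=1$ at $a=1/2$, where $-H=(1-x^2)^2$:
\begin{verbatim}
# Sketch (ours): bound minimization (SOS_1)-(SOS) for xdot = x - x^3,
# Phi = x^2, V = a x^2; SOS imposed via a Gram matrix on [1, x, x^2]
# (Python/cvxpy with an SDP solver such as SCS).
import cvxpy as cp

a, C = cp.Variable(), cp.Variable()
Q = cp.Variable((3, 3), PSD=True)   # -H(x) = [1,x,x^2] Q [1,x,x^2]^T
constraints = [Q[0, 0] == C,                           # x^0
               2 * Q[0, 1] == 0,                       # x^1
               2 * Q[0, 2] + Q[1, 1] == -(2 * a + 1),  # x^2
               2 * Q[1, 2] == 0,                       # x^3
               Q[2, 2] == 2 * a]                       # x^4
cp.Problem(cp.Minimize(C), constraints).solve()
print(float(C.value), float(a.value))  # ~1.0 and ~0.5: the bound is tight
\end{verbatim}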
\begin{remark}\label{RemarkOnBoundedness}
For many systems the boundedness of system state immediately follows from energy consideration. In general, if the system state is bounded, this can often be proven using the SOS approach.
It suffices to check whether there exists a large but bounded global attractor, denoted by $\mathcal{D}_1.$
As an example, let $\mathcal{D}_1=\{{\bf x}~|~0.5{\bf x}^T{\bf x}\le \beta\}$, where the constant $\beta$ is sufficiently large. Then, the global attraction property of the system towards $\mathcal{D}_1$ may be expressed as
\begin{eqnarray}
{\bf x}^T\dot{{\bf x}}={\bf x}^T{\bf f}({\bf x})\le -(0.5{\bf x}^T{\bf x}-\beta).
\label{SOS1}
\end{eqnarray}
Introducing a tunable polynomial $S({\bf x})$ satisfying $S({\bf x})\ge 0 ~\forall {\bf x}\in{\mathbb R}^n$, by Lemma~1, (\ref{SOS1}) can be relaxed to
\begin{eqnarray}
\left\{
\begin{array}{c}
-\left({\bf x}^T{\bf f}({\bf x})-S({\bf x})(0.5{\bf x}^T{\bf x}-\beta)\right)\mbox{~is ~SOS}, \\
S({\bf x})\mbox{~is ~SOS}.
\end{array}
\right.
\label{SOS2}
\end{eqnarray}
Minimization of the upper bound of a long-time average cost for systems that have an unbounded global attractor is usually meaningless, since the cost itself could be infinitely large.
\end{remark}
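For the same toy system $\dot x=x-x^3$, a condition of type (\ref{SOS1}) holds globally even without a multiplier: $x\dot x=x^2-x^4\le -(0.5x^2-\beta)$ amounts to $\beta-1.5x^2+x^4$ being non-negative, which is SOS for $\beta\ge 9/16$. The minimal such $\beta$ can be found by the same Gram-matrix SDP (our sketch):
\begin{verbatim}
# Sketch (ours): smallest beta with  beta - 1.5 x^2 + x^4  SOS, i.e. a
# certificate that D_1 = {0.5 x^2 <= beta} absorbs xdot = x - x^3.
import cvxpy as cp

beta = cp.Variable()
Q = cp.Variable((3, 3), PSD=True)   # Gram matrix on [1, x, x^2]
constraints = [Q[0, 0] == beta,
               2 * Q[0, 1] == 0,
               2 * Q[0, 2] + Q[1, 1] == -1.5,
               2 * Q[1, 2] == 0,
               Q[2, 2] == 1.0]
cp.Problem(cp.Minimize(beta), constraints).solve()
print(float(beta.value))            # ~0.5625 = 9/16, from (x^2 - 3/4)^2
\end{verbatim}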
\section{Problem Formulation}\label{seq:Problem Formulation}
Consider a polynomial system with a single input
\begin{eqnarray}
\dot{{\bf x}}={\bf f}({\bf x})+{\bf g}({\bf x}){\bf u}
\label{sys}
\end{eqnarray}
where ${\bf f}({\bf x}): {\mathbb R}^n\rightarrow {\mathbb R}^n$ and ${\bf g}({\bf x}): {\mathbb R}^n\rightarrow {\mathbb R}^{n\times m}$ are
polynomial functions of system state ${\bf x}$. The approach of this paper can easily be extended to multiple input systems.
The control ${\bf u}\in {\mathbb R}^m$, which is assumed to be a polynomial vector of the system state ${\bf x}$ with maximum degree $d_{{\bf u}}$,
is designed to minimize the upper bound of an average cost of the form:
\begin{eqnarray}
\bar{\Phi}=\lim_{T\rightarrow \infty}\frac{1}{T}\int_0^T\Phi({{\bf x}}(t),{\bf u}(t))\,dt,
\label{cost}
\end{eqnarray}
where ${{\bf x}}$ is the closed-loop solution of the system (\ref{sys}) with the control ${\bf u}$.
The continuous function $\Phi$ is a given non-negative polynomial cost in ${{\bf x}}$ and ${\bf u}$.
Similarly to (\ref{SOS_1})-(\ref{SOS}), we consider the following optimization problem:
\begin{eqnarray}
&\displaystyle \min_{{\bf u}, V} C \label{objective}\\
&{s.t.} -\left(({\bf f}({\bf x})+{\bf g}({\bf x}){\bf u})\cdot\nabla_{{\bf x}} V+\Phi({\bf x}, {\bf u})-C\right) \mbox{is SOS}.\quad
\label{optimization}
\end{eqnarray}
When it cannot be guaranteed that the closed-loop system state is bounded, SOS constraints (\ref{SOS2}) must be added to (\ref{optimization}) to make
our analysis rigorous.
Under the framework of SOS optimization, the main problem in solving (\ref{objective})-(\ref{optimization}) is the non-convexity of (\ref{optimization}), caused by the tunable control input ${\bf u}$ and decision function $V$ entering~(\ref{optimization}) nonlinearly.
Iterative methods \cite{Zh:07,Zh:09,Ng:11} may help to overcome this issue indirectly in the following way: first fix one subset of bilinear decision
variables and solve the resulting linear inequalities in the other decision variables; in the next step, the other bilinear decision variables are fixed and the procedure is repeated. For the particular long-time average cost control problem (\ref{objective})-(\ref{optimization}),
the non-convexity will be resolved in what follows by considering a so-called small-feedback controller.
In this way, iterative updating of the decision variables is avoided and replaced by solving a sequence of SOS optimization problems.
\section{Bound optimization of long-time average cost for controlled polynomial systems}\label{seq:Bound}
In this section a small-feedback controller is designed to reduce the upper bound of the long-time average cost (\ref{cost}) for the controlled polynomial system (\ref{sys}).
It is reasonable to hope that a controller reducing the upper bound for the time-averaged cost will also reduce the time-averaged cost itself \cite{Ph:14}.
\subsection{Basic formalism of the controller design}
We will look for a controller in the form
\begin{eqnarray}
{\bf u}({\bf x},\epsilon)=\sum_{i=1}^{\infty}\epsilon^i {\bf u}_i({\bf x}),
\label{cc}
\end{eqnarray}
where $\epsilon>0$ is a parameter, and ${\bf u}_i({\bf x}), i= 1,2,\cdots$ are polynomial vector functions of the system state ${\bf x}.$
In other words, we seek a family of controllers parameterised by $\epsilon$ in the form of a Taylor series in $\epsilon$. Notice that the expansion starts at the first-order term, so that $\epsilon=0$ gives the uncontrolled system.
To resolve the non-convexity problem of SOS optimization, we expand $V$ and $C$ in $\epsilon$:
\begin{eqnarray}
V({\bf x},\epsilon)&=&\sum_{i=0}^{\infty} \epsilon^i V_i({\bf x}), \label{LF0} \\
C(\epsilon)&=&\sum_{i=0}^{\infty}\epsilon^i C_i,
\label{LF1}
\end{eqnarray}
where $V_i$ and $C_i$ are the Taylor series coefficients for the tunable function and the bound, respectively, in the $i$th-order term of $\epsilon$.
Define
\begin{eqnarray}
F(V,u,C)\eqdef ({\bf f}({\bf x})+{\bf g}({\bf x}){\bf u})\cdot \nabla_{{\bf x}} V+\Phi({\bf x},{\bf u})-C.
\label{cc1}
\end{eqnarray}
Substituting (\ref{cc}), (\ref{LF0}), and (\ref{LF1}) into (\ref{cc1}), we have
\begin{eqnarray*}
F(V,u,C)&=& \left({\bf f}+{\bf g}\sum_{i=1}^{\infty}\epsilon^i{\bf u}_i\right)\cdot \sum_{i=0}^{\infty}\epsilon^i\nabla_{{\bf x}} V_i+\Phi\left({\bf x},\sum_{i=1}^{\infty}\epsilon^i{\bf u}_i\right)
-\sum_{i=0}^{\infty}\epsilon^i C_i.
\end{eqnarray*}
Noticing
\begin{eqnarray*}
\Phi\left({\bf x},\sum_{i=1}^{\infty}\epsilon^i{\bf u}_i\right)
=
\sum_{i=0}^{\infty}\epsilon^i\left(\sum_{k=0}^i\frac{1}{k!}\frac{\partial^k\Phi}{\partial {\bf u}^k}({\bf x},0)\frac{1}{i!}\frac{\partial^i\left({\bf u}^k\right)}{\partial \epsilon^i}({\bf x},0)\right),
\end{eqnarray*}
it follows that
\begin{eqnarray}
F(V,u,C)=\sum_{i=0}^{\infty}\epsilon^i F_i(V_0,\cdots,V_i, {\bf u}_{1},\cdots,{\bf u}_i, C_i),
\label{inq2}
\end{eqnarray}
where
\begin{eqnarray}
F_i={\bf f}\cdot \nabla_{{\bf x}}V_i+\sum_{j+l=i}{\bf g}{\bf u}_j\cdot\nabla_{{\bf x}}V_l
+\sum_{k=0}^i\frac{1}{k!}\frac{\partial^k\Phi}{\partial {\bf u}^k}({\bf x},0)\frac{1}{i!}\frac{\partial^i\left({\bf u}^k\right)}{\partial \epsilon^i}({\bf x},0)-C_i.
\label{new1}
\end{eqnarray}
In (\ref{new1}), $({\partial^k\Phi}/{\partial {\bf u}^k})({\bf x},0)$ denotes the $k$th partial derivative of $\Phi$ with respect to ${\bf u}$ at ${\bf u}=0$, and
$({\partial^i({\bf u}^k)}/{\partial \epsilon^i})({\bf x},0)$ denotes the $i$th partial derivative of ${\bf u}^k({\bf x},\epsilon)\eqdef[u_1^k({\bf x},\epsilon),\cdots, u_m^k({\bf x},\epsilon)]^T$ with respect to $\epsilon$
at $\epsilon=0$.
Expression (\ref{inq2}) becomes clearer when a specific cost function $\Phi$ is considered.
For instance, let $\Phi=\Phi_0({\bf x})+{\bf u}^T{\bf u}$.
Then,
\begin{eqnarray*}
F(V,{\bf u},C)
=F_0(V_0,C_0)+\epsilon F_1(V_0,V_1,{\bf u}_1,C_1)+\epsilon^2F_2(V_0,V_1,V_2, {\bf u}_1,{\bf u}_2,C_2)+O(\epsilon^3),
\end{eqnarray*}
where
\begin{eqnarray*}
F_0&=&{\bf f}\cdot \nabla_{{\bf x}} V_0+\Phi_0-C_0, \\
[1ex]
F_1&=& {\bf f}\cdot \nabla_{{\bf x}} V_1+{\bf g} {\bf u}_1\cdot \nabla_{{\bf x}} V_0-C_1,\\
[1ex]
F_2&=&{\bf f}\cdot \nabla_{{\bf x}} V_2+{\bf g} {\bf u}_1\cdot \nabla_{{\bf x}} V_1+{\bf g} {\bf u}_2\cdot \nabla_{{\bf x}} V_0+{\bf u}_1^T{\bf u}_1-C_2,
\end{eqnarray*}
and $O(\epsilon^3)$ denotes all the terms with order of $\epsilon$ being equal or greater than 3.
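These coefficient formulas are easy to verify symbolically; the sympy sketch below (ours, with scalar state and input for brevity) expands $F$ and compares the $\epsilon^0,\epsilon^1,\epsilon^2$ coefficients with the displayed $F_0, F_1, F_2$:
\begin{verbatim}
# Sketch (ours): check F0, F1, F2 for Phi = Phi0(x) + u^2, scalar case.
import sympy as sp

x, eps, C0, C1, C2 = sp.symbols('x eps C0 C1 C2')
f, g, Phi0 = (sp.Function(s)(x) for s in ('f', 'g', 'Phi0'))
u1, u2, V0, V1, V2 = (sp.Function(s)(x)
                      for s in ('u1', 'u2', 'V0', 'V1', 'V2'))

u = eps * u1 + eps**2 * u2
V = V0 + eps * V1 + eps**2 * V2
F = sp.expand((f + g * u) * sp.diff(V, x) + Phi0 + u**2
              - (C0 + eps * C1 + eps**2 * C2))

D = lambda h: sp.diff(h, x)
assert sp.simplify(F.coeff(eps, 0) - (f * D(V0) + Phi0 - C0)) == 0
assert sp.simplify(F.coeff(eps, 1)
                   - (f * D(V1) + g * u1 * D(V0) - C1)) == 0
assert sp.simplify(F.coeff(eps, 2) - (f * D(V2) + g * u1 * D(V1)
                                      + g * u2 * D(V0) + u1**2 - C2)) == 0
\end{verbatim}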
It is clear that $F(V,{\bf u},C)\le 0$ holds if $F_i\le 0, i=0,1,2,\cdots$, simultaneously, and the series (\ref{cc})-(\ref{LF1}) converge.
Notice that $F_i$ includes tunable functions $V_j, j\le i$, and ${\bf u}_k, k\le i-1$. For any non-negative integers $i_1, i_2$ satisfying $i_1<i_2$, the tunable variables in $F_{i_1}$ are always a subset of the tunable variables in $F_{i_2}$. Hence (\ref{objective})-(\ref{optimization}) can be solved as a sequence of convex optimization problems. When the inequality constraints $F_i\le 0$ are relaxed to SOS conditions, our idea can be summarized as follows.
\noindent\rule{16.5cm}{0.1pt}
{\bf\it The sequential steps to solve (\ref{objective})-(\ref{optimization}): {\bf A-I}} \\
\noindent\rule{16.5cm}{0.1pt}
\begin{itemize}
\item[(s0)] First minimize $C_0$ over $V_0$ under the constraint $F_0(V_0,C_0)\le 0$, or more conservatively,
\begin{eqnarray*}
O_0: ~~\min_{V_0} C_0, {~~s.t.~~} -F_0(V_0,C_0)\mbox{~~is SOS}.
\end{eqnarray*}
Denote the optimal $C_0$ by $C_{0,SOS}$ and the associated $V_0$ by $V_{0,SOS}$.
\item[(s1)] Now, let $V_0=V_{0,SOS}$ in $F_1$, and then minimize $C_1$ over $V_1$ and ${\bf u}_1$ under the constraint $F_1(V_{0,SOS},V_1,{\bf u}_1,C_1)\le 0$, or under the framework of SOS optimization,
\begin{eqnarray*}
O_1: ~~\min_{V_1,{\bf u}_1} C_1, {~s.t.~} -F_1(V_{0,SOS},V_1,{\bf u}_1,C_1)\mbox{~is~ SOS}.
\end{eqnarray*}
Using the generalized $\mathcal{S}$-procedure given in Lemma~1 and the fact that
\begin{eqnarray}
-F_0(V_{0,SOS},C_{0,SOS})\ge 0,
\label{condition1}
\end{eqnarray}
$O_1$ can be revised by incorporating one more tunable function $S_0({\bf x})$:
\begin{eqnarray*}
O_1': \begin{array}{c}
\min_{V_1,{\bf u}_1,S_0} C_1, \\
[1ex]
{~~s.t.~~} \left\{
\begin{array}{c}
-F_1(V_{0,SOS},V_1,{\bf u}_1,C_1)
+S_0({\bf x})F_0(V_{0,SOS},C_{0,SOS}) \mbox{~~is SOS}, \\
[1ex]
S_0({\bf x})\mbox{~~is~ SOS}.
\end{array}
\right.
\end{array}
\end{eqnarray*}
Denote the optimal $C_1$ by $C_{1,SOS}$ and the associated $V_1$ and ${\bf u}_1$ by $V_{1,SOS}$ and ${\bf u}_{1,SOS}$, respectively.
\item[(s2)] Further let $V_0=V_{0,SOS}$, $V_1=V_{1,SOS}$, and ${\bf u}_1={\bf u}_{1,SOS}$ in $F_2$, and then minimize $C_2$ over $V_2$ and ${\bf u}_2$ under the constraint $F_2(V_{0,SOS},V_{1,SOS},V_2, {\bf u}_{1,SOS}, {\bf u}_2,C_2)\le 0$. In a more tractable way, consider
\begin{eqnarray*}
O_2:
\begin{array}{c}
\displaystyle \min_{V_2,~ {\bf u}_2} C_2,~ {s.t.}\\
-F_2(V_{0,SOS},V_{1,SOS},V_2, {\bf u}_{1,SOS}, {\bf u}_2,C_2)\mbox{~is SOS}.
\end{array}
\end{eqnarray*}
Similarly as in (s1), noticing (\ref{condition1}) and
\begin{eqnarray*}
-F_1(V_{0,SOS},V_{1,SOS},{\bf u}_{1,SOS},C_{1,SOS})\ge 0,
\end{eqnarray*}
the SDP problem $O_2$ can be revised by the generalized $\mathcal{S}$-procedure to the following form:
\begin{eqnarray*}
O_2': \begin{array}{c}
\min_{V_2,{\bf u}_2,S_0,S_1} C_2, ~~~{s.t.} \\
[1ex]
\left\{
\begin{array}{c}
-F_2(V_{0,SOS},V_{1,SOS},V_2, {\bf u}_{1,SOS}, {\bf u}_2,C_2)
+S_0({\bf x})F_0(V_{0,SOS},C_{0,SOS}) \\
~ +S_1({\bf x})F_1(V_{0,SOS},V_{1,SOS}, {\bf u}_{1,SOS},C_{1,SOS})\mbox{~is SOS},\\
[1ex]
S_0({\bf x})\mbox{~~is SOS}, \\
[1ex]
S_1({\bf x})\mbox{~~is SOS}.
\end{array}
\right.
\end{array}
\end{eqnarray*}
Denote the optimal $C_2$ by $C_{2,SOS}$ and the associated $V_2$ and ${\bf u}_2$ by $V_{2,SOS}$ and ${\bf u}_{2,SOS}$, respectively.
Notice that $S_0({\bf x})$ here might differ from the tunable function $S_0({\bf x})$ in $O_1'$. Throughout this paper we will use the same notations for the tunable functions like $S_0$ and $S_1$ in various instances of the $\mathcal{S}$-procedure, to keep the notation simple.
\item[(s3)] The SOS-based controller design procedure is continued for higher-order terms.
\end{itemize}
\noindent\rule{16.5cm}{0.1pt}
Now, define three series
\begin{eqnarray}
C_{SOS}=\sum_{i=0}^{\infty}\epsilon^iC_{i,SOS}, ~~{\bf u}_{SOS}=\sum_{i=1}^{\infty}\epsilon^i {\bf u}_{i,SOS}, ~~V_{SOS}=\sum_{i=0}^{\infty}\epsilon^iV_{i,SOS}.
\label{series}
\end{eqnarray}
When all of them converge, the following statement will be true.
\begin{theorem}
By applying the state-feedback controller ${\bf u}={\bf u}_{SOS}$ for the system (\ref{sys}), if the trajectories of the closed-loop system are bounded
\footnote{In the context of long-time average cost controller design and analysis, it is actually enough to assume the boundedness of the global attractor of the system to ensure the existence of $C_{SOS}$.
}, then $C_{SOS}$ is an upper bound of the long-time average cost $\bar{\Phi}$.
\end{theorem}
{\it Proof}. Using the algorithm {\bf A-I}, we obtain
\begin{eqnarray*}
F_i(V_{0,SOS},\cdots,V_{i,SOS}, {\bf u}_{1,SOS},\cdots, {\bf u}_{i,SOS},C_{i,SOS})\le 0, \forall ~i.
\label{new2}
\end{eqnarray*}
Then, it follows that
\begin{eqnarray*}
\sum_{i=0}^{\infty}F_i(V_{0,SOS},\cdots,V_{i,SOS}, {\bf u}_{1,SOS},\cdots, {\bf u}_{i,SOS},C_{i,SOS})
=F(V_{SOS}, {\bf u}_{SOS}, C_{SOS})\le 0,
\end{eqnarray*}
where $C_{SOS}, {\bf u}_{SOS}, V_{SOS}$ are given in (\ref{series}).
By the same analysis as in the proof of Lemma~2 (see \cite{Ph:14}), we conclude that $\bar{\Phi}\le C_{SOS}$.
\rule{0.09in}{0.09in}
\begin{remark}
After specifying the structure of controller to be of the form (\ref{cc}), the non-convexity in solving the optimization problem (\ref{objective})-(\ref{optimization}) has been avoided by solving the linear SDPs $O_0, O_1', O_2', \cdots$ in sequence.
During the process, all the involved decision variables are optimized sequentially, but not iteratively as in other methods \cite{Zh:07,Zh:09,Ng:11}.
\end{remark}
\begin{remark}
The smallness of $\epsilon$ can be used to relax $O_1', O_2', \cdots$ further.
For instance, in $O_1'$, in order to prove
$
F_0+\epsilon F_1\le 0,
$
we prove $F_1(V_{0,SOS},V_1,{\bf u}_1,C_1)\le 0$ with the aid of the known constraint $F_0(V_{0,SOS},C_{0,SOS})\le 0$,
thus not using that $\epsilon$ is small.
In fact, when $\epsilon$ is small, for $F_0+\epsilon F_1$ to be negative $F_1$ has to be negative only for those ${\bf x}$ where $F_0({\bf x})$ is small,
and not for all ${\bf x}$ as required in $O_1'$.
Meanwhile, checking the convergence of the series (\ref{series}) would be challenging or even impractical.
These points will be addressed in what follows.
\end{remark}
\subsection{Design of small-feedback controller}
Next, the sequential design method {\bf A-I} is revised to utilize that $\epsilon\ll 1$.
\noindent\rule{16.5cm}{0.1pt}
{\bf\it The revised sequential steps to solve (\ref{objective})-(\ref{optimization}): {\bf A-II}}
\noindent\rule{16.5cm}{0.1pt}
\begin{itemize}
\item[(s0)] Same as in {\bf A-I}, first solve the SOS optimization problem $O_0$.
Denote the optimal $C_0$ by $C_{0,SOS}$ and the associated $V_0$ by $V_{0,SOS}$.
\item[(s1)] Let $V_0=V_{0,SOS}$ in $F_1$, and then consider the following SDP problem:
\begin{eqnarray*}
&O_1'': \begin{array}{c}
\displaystyle \min_{V_1,{\bf u}_1,S_0} C_1, \\
[1ex]
{~~s.t.~~}
\begin{array}{c}
-F_1(V_{0,SOS},V_1,{\bf u}_1,C_1)
+S_0({\bf x})F_0(V_{0,SOS},C_{0,SOS}) \mbox{~is SOS},
\end{array}
\end{array}
\end{eqnarray*}
where $S_0$ is any tunable polynomial function of ${\bf x}$ of fixed degree.
Denote the optimal $C_1$ by $C_{1,SOS}$ and the associated $V_1$ and ${\bf u}_1$ by $V_{1,SOS}$ and ${\bf u}_{1,SOS}$, respectively.
Unlike $O_1'$, here the non-negativity requirement of $S_0$ is not imposed.
This can be understood as imposing the non-negativity of $-F_1$ only for ${\bf x}$ such that $F_0(V_{0,SOS},C_{0,SOS})=0$.
\item[(s2)] Further let $V_0=V_{0,SOS}$, $V_1=V_{1,SOS}$, and ${\bf u}_1={\bf u}_{1,SOS}$ in $F_2$, and then consider
\begin{eqnarray*}
O_2'': \begin{array}{c}
\displaystyle \min_{V_2,{\bf u}_2,S_0,S_1} C_2, ~~~{s.t.} \\
[1ex]
\left\{
\begin{array}{c}
-F_2(V_{0,SOS},V_{1,SOS},V_2, {\bf u}_{1,SOS}, {\bf u}_2,C_2)
+S_0({\bf x})F_0(V_{0,SOS},C_{0,SOS}) \\
~+S_1({\bf x}) F_1(V_{0,SOS},V_{1,SOS},{\bf u}_{1,SOS},C_{1,SOS})
\mbox{~~is~ SOS},
\end{array}
\right.
\end{array}
\end{eqnarray*}
where $S_0$ and $S_1$ are any tunable polynomial functions of fixed degrees.
$S_0$ here does not need to be the same as in $O_1''$.
Denote the optimal $C_2$ by $C_{2,SOS}$ and the associated $V_2$ and ${\bf u}_2$ by $V_{2,SOS}$ and ${\bf u}_{2,SOS}$, respectively.
Similarly as in $O_1''$, here the non-negativity constraint is in effect imposed only where $F_0(V_{0,SOS},C_{0,SOS})=F_1(V_{0,SOS},V_{1,SOS}, {\bf u}_{1,SOS},C_{1,SOS})=0$.
\item[(s3)] The revised SOS-based controller design procedure is continued for higher-order terms.
\end{itemize}
\noindent\rule{16.5cm}{0.1pt}
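To make the bookkeeping of {\bf A-II} explicit, the following Python-style schematic sketches the data flow of steps (s0)--(s3). It is a minimal sketch only: \verb|solve_sos_min| is a hypothetical placeholder standing for one linear SDP/SOS solve (in practice performed with an SOS toolbox), and the strings merely name the objective and the SOS constraint of each step.
\begin{verbatim}
from typing import Dict, List, Tuple

def solve_sos_min(objective: str, constraint: str,
                  frozen: Dict[str, float]) -> Tuple[float, Dict[str, float]]:
    """Hypothetical placeholder for one linear SDP/SOS solve: minimise
    `objective` subject to `constraint` being SOS, with all previously
    found quantities `frozen`.  A real implementation would call an SOS
    toolbox here; we return dummy values to keep the sketch runnable."""
    return 0.0, {}

def a_two(n_terms: int) -> List[float]:
    frozen: Dict[str, float] = {}
    C: List[float] = []
    # (s0): min C_0  s.t.  -F_0(V_0, C_0) is SOS
    c0, sol = solve_sos_min("C0", "-F0", frozen)
    C.append(c0); frozen.update(sol)
    for i in range(1, n_terms):
        # (si): min C_i  s.t.  -F_i + S_0*F_0 + ... + S_{i-1}*F_{i-1} is SOS,
        # with V_j, u_j, C_j (j < i) frozen; the multipliers S_j are free
        # polynomials (no SOS constraint on them) -- the A-II relaxation.
        ci, sol = solve_sos_min(f"C{i}", f"-F{i} + sum_j S_j*F_j", frozen)
        C.append(ci); frozen.update(sol)
    return C   # coefficients C_i of the expansion (series)
\end{verbatim}
The essential point, reflected in the loop, is that step $i$ treats all quantities found at steps $0,\dots ,i-1$ as frozen data, so that every solve remains a convex, linear SDP.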
Since the constraints that the $S_i$ be SOS, imposed in {\bf A-I}, are removed in {\bf A-II}, the coefficients $C_{i,SOS}$ obtained in {\bf A-II} can be smaller than those obtained in {\bf A-I}. This advantage comes at a price: even if all the relevant series converge for a particular value of $\epsilon$, the procedure {\bf A-II} does not guarantee that the value $C_{SOS}$ given in (\ref{series}) is an upper bound for the time-averaged cost of the closed-loop system with the controller ${\bf u}_{SOS}$. We now have to consider (\ref{series}) as asymptotic expansions rather than Taylor series. Accordingly, we have to truncate the series and hope that the resulting controller will work for (sufficiently) small $\epsilon$
\footnote{It is worth noticing that the series truncation here does not mean that our controller design and analysis are conducted in a non-rigorous way.
The truncated controller is considered effective if it leads to a better (lower) bound of the long-time average cost.}. It is possible to prove that this is, indeed, the case.
For illustration, only the first-order truncation is considered.
\begin{theorem}
Consider the first-order small-feedback controller for the system (\ref{sys}),
\begin{eqnarray}
{\bf u}_{SOS}=\epsilon {\bf u}_{1,SOS}
\label{new3}
\end{eqnarray}
where $\epsilon>0$ is sufficiently small. Assume that the trajectories of the closed-loop system are bounded, and that $C_{1,SOS}<0$.
Then, $C_{\kappa,SOS}\eqdef C_{0,SOS}+\epsilon\kappa C_{1,SOS}, \kappa\in(0,1)$ is an upper bound of the long-time average cost $\bar{\Phi}$.
Clearly, $C_{\kappa,SOS}<C_{0,SOS}$.
\end{theorem}
{\it Proof}. Let $V_{SOS}=V_{0,SOS}+\epsilon V_{1,SOS}$.
By substituting $V=V_{SOS}, C=C_{\kappa, SOS}, {\bf u}={\bf u}_{SOS}$ in the constraint function $F(V,{\bf u},C)$ that is defined in (\ref{cc1}), the remaining task
is to seek small $\epsilon>0$ such that
\begin{eqnarray}
F(V_{SOS},{\bf u}_{SOS},C_{\kappa, SOS})\le 0.
\label{cc1_1}
\end{eqnarray}
Notice that
\begin{eqnarray}
F(V_{SOS},{\bf u}_{SOS},C_{\kappa,SOS})
=
F_0(V_{0,SOS},C_{0,SOS})
+\epsilon F_1(V_{0,SOS},V_{1,SOS},{\bf u}_{1,SOS},C_{1,SOS})
+\epsilon (1-\kappa)C_{1,SOS}+\epsilon^2 w({\bf x},\epsilon),
\label{boundx}
\end{eqnarray}
where
\begin{eqnarray*}
w({\bf x},\epsilon)={\bf g}{\bf u}_{1,SOS}\cdot \nabla_{{\bf x}}V_{1,SOS}
+\frac{1}{\epsilon^2}\left(\Phi({\bf x},\epsilon {\bf u}_{1,SOS})-\Phi({\bf x},0)-\epsilon\frac{\partial \Phi}{\partial {\bf u}}({\bf x},0){\bf u}_{1,SOS}\right),
\end{eqnarray*}
and $F_0$ and $F_1$, being polynomial in ${\bf x}$, possess all the continuity properties required by the proof.
Let $\mathcal{D}\subset{\mathbb R}^n$ be the phase-space domain of interest, in which the closed-loop trajectories are bounded.
Then,
\begin{eqnarray}
F_{1,max}\eqdef \max_{{\bf x}\in\mathcal{D}}F_1(V_{0,SOS},V_{1,SOS},{\bf u}_{1,SOS},C_{1,SOS})<\infty,
\label{bound1}
\end{eqnarray}
and $w({\bf x},\epsilon)$ is bounded for any ${\bf x}\in\mathcal{D}$ and any finite $\epsilon$ (the latter following from the standard mean-value-theorem-based formula for the Lagrange remainder). By (\ref{boundx}) and (\ref{bound1}),
\begin{eqnarray}
F(V_{SOS},{\bf u}_{SOS},C_{\kappa,SOS})\le
F_0(V_{0,SOS},C_{0,SOS})+\epsilon F_{1,max}+\epsilon (1-\kappa)C_{1,SOS}+O(\epsilon^2).
\label{bound1_ex}
\end{eqnarray}
Meanwhile, consider the two inequality constraints obtained by solving $O_0$ and $O_1''$:
\begin{eqnarray}
\left\{
\begin{array}{c}
F_0(V_{0,SOS},C_{0,SOS})\le 0, \\
[1ex]
F_1(V_{0,SOS},V_{1,SOS}, {\bf u}_{1,SOS},C_{1,SOS})\le 0
~~\forall {\bf x} ~~\mbox{such that}~ F_0(V_{0,SOS},C_{0,SOS})= 0.
\end{array}
\right.
\label{cons}
\end{eqnarray}
Define $\mathcal{D}_{\delta}\eqdef \left\{{\bf x}\in \mathcal{D} ~|~ \delta\le F_0(V_{0,SOS},C_{0,SOS})\le 0\right\}$ for a given constant $\delta\le 0$.
Clearly, $\mathcal{D}_{\delta}\rightarrow \mathcal{D}_{0}$ as $\delta\rightarrow 0$.
Further define
\begin{eqnarray}
F_{1,\delta}(\delta)\eqdef \max_{{\bf x}\in \mathcal{D}_{\delta}} F_1(V_{0,SOS},V_{1,SOS},{\bf u}_{1,SOS},C_{1,SOS}).
\label{bound2}
\end{eqnarray}
By the second constraint in (\ref{cons}), $\lim_{\delta\rightarrow 0}F_{1,\delta}(\delta)\le 0$.
Therefore, by continuity and the fact that $C_{1,SOS}<0$, for any $0<\kappa<1$ there exists a constant $\delta_{\kappa}<0$ such that
\begin{eqnarray}
F_1(V_{0,SOS},V_{1,SOS}, {\bf u}_{1,SOS},C_{1,SOS})\le F_{1,\delta_{\kappa}}<-\frac{1}{2}(1-\kappa)C_{1,SOS}, ~~\forall {\bf x}\in \mathcal{D}_{\delta_{\kappa}}.
\label{bound3}
\end{eqnarray}
In consequence, (\ref{boundx}), the first constraint in (\ref{cons}), and (\ref{bound3}) yield
\begin{eqnarray}
F(V_{SOS},{\bf u}_{SOS},C_{\kappa,SOS}) &\le&
F_0(V_{0,SOS},C_{0,SOS})+\epsilon F_{1,\delta_{\kappa}}+\epsilon (1-\kappa)C_{1,SOS}+O(\epsilon^2) \nonumber \\
&\le& \frac{\epsilon}{2} (1-\kappa)C_{1,SOS}+O(\epsilon^2)\le 0, ~~\forall {\bf x}\in \mathcal{D}_{\delta_{\kappa}},
\label{bound4}
\end{eqnarray}
for sufficiently small $\epsilon$.
Next, we prove (\ref{cc1_1}) for any ${\bf x}\in \mathcal{D}\setminus\mathcal{D}_{\delta_{\kappa}}$. By the definition of the set $\mathcal{D}_{\delta_{\kappa}}$, we have
\begin{eqnarray}
F_0(V_{0,SOS},C_{0,SOS})<\delta_{\kappa}<0, ~~\forall {\bf x}\in \mathcal{D}\setminus\mathcal{D}_{\delta_{\kappa}}.
\label{bound5}
\end{eqnarray}
Then, (\ref{bound1_ex}) and (\ref{bound5}) yield
\begin{eqnarray}
F(V_{SOS},{\bf u}_{SOS},C_{\kappa,SOS})\le
\delta_{\kappa}+\epsilon F_{1,max}+\epsilon (1-\kappa)C_{1,SOS}+O(\epsilon^2)\le \delta_{\kappa}+O(\epsilon)\le 0, ~~\forall {\bf x}\in \mathcal{D}\setminus\mathcal{D}_{\delta_{\kappa}},
\label{bound1_ex1}
\end{eqnarray}
if $\epsilon$ is sufficiently small.
Inequalities (\ref{bound4}) and (\ref{bound1_ex1}) imply that (\ref{cc1_1}) holds for all ${\bf x}\in \mathcal{D}$. The proof is complete.
\rule{0.09in}{0.09in}
In practice, once the form of the controller has been specified in (\ref{new3}), the upper bound $C$ and the corresponding $V$ can actually be obtained by solving the following optimization problem directly:
\begin{eqnarray*}
O_{\epsilon}: \begin{array}{c}
\displaystyle \min_{V,~\epsilon} C, ~~~ \\
[1ex]
{s.t.}~~-F(V,\epsilon {\bf u}_{1,SOS},C)\mbox{~is~ SOS}.
\end{array}
\end{eqnarray*}
This problem can be further relaxed by incorporating the known constraints (\ref{cons}).
In $O_{\epsilon}$, if $\epsilon$ is set as one of the tunable variables, the SOS optimization problem becomes non-convex again, complicating its solution. Alternatively, one can fix $\epsilon$ and investigate its effect on the upper bound of $\bar{\Phi}$ by trial and error. We will follow this route in Section~\ref{seq:Example}.
\section{Illustrative example}\label{seq:Example}
As an illustrative example we consider a system proposed in \cite{Ki:05} as a model for studying control of oscillatory vortex shedding behind a cylinder. The actuation was assumed to be achieved by a volume force applied in a compact support region downstream of the cylinder. The Karhunen-Lo\`{e}ve (KL) decomposition \cite{No:03} was used, and the first two KL modes and an additional shift mode were selected. For a Reynolds number of 100, the resulting low-order Galerkin model of the cylinder flow with control was given as follows
\begin{eqnarray}
\left[
\begin{array}{c}
\dot{a}_1 \\
\dot{a}_2 \\
\dot{a}_3
\end{array}
\right]&=&
\left[
\begin{array}{ccc}
\sigma_r & -\omega-\gamma a_3 & -\beta a_1 \\
\omega+\gamma a_3 & \sigma_r & -\beta a_2\\
\alpha a_1 & \alpha a_2 & -\sigma_3
\end{array}
\right]
\left[
\begin{array}{c}
{a}_1 \\
{a}_2 \\
{a}_3
\end{array}
\right]
+
\left[
\begin{array}{c}
g_1 \\
g_2 \\
0
\end{array}
\right]u,
\label{cf}
\end{eqnarray}
where $\sigma_r=0.05439, \sigma_3=0.05347, \alpha=0.02095, \beta=0.02116,$
$\gamma=-0.03504, \omega=0.9232, g_1=-0.15402$, and $g_2=0.046387$.
More details on deriving the reduced-order model (\ref{cf}) are given in~\cite{Ro:14}.
The system (\ref{cf}) possesses a unique equilibrium when $u=0$, which is at the origin.
Let $\Phi=1/2{\bf a}^T{\bf a}+u^2$, where ${\bf a}=[a_1 ~a_2 ~a_3]^T$.
The proposed algorithms {\bf A-I} and {\bf A-II} were applied to (\ref{cf}), with
the system state assumed to be available. In experiment, it could be estimated by designing a state observer with some sensed output measurement at a typical position~\cite{Ro:14}.
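Before comparing the algorithms, the uncontrolled value of $\bar{\Phi}$ can be reproduced by direct numerical integration of (\ref{cf}). The following Python sketch (ours; the horizon $T$ and the fraction of the trajectory discarded as transient are ad hoc choices) averages $\Phi$ with $u=0$ over the tail of a long trajectory, and approaches the limit-cycle value $\bar{\Phi}\approx 6.58$ discussed in the next subsection.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

# parameters of the reduced-order model (cf)
sr, s3 = 0.05439, 0.05347
al, be = 0.02095, 0.02116
ga, om = -0.03504, 0.9232

def rhs(t, a):                      # uncontrolled dynamics, u = 0
    a1, a2, a3 = a
    return [sr*a1 - (om + ga*a3)*a2 - be*a1*a3,
            (om + ga*a3)*a1 + sr*a2 - be*a2*a3,
            al*(a1**2 + a2**2) - s3*a3]

T = 2000.0
sol = solve_ivp(rhs, (0.0, T), [-0.3, -0.3, 0.3], max_step=0.05)
phi = 0.5*np.sum(sol.y**2, axis=0)  # Phi = a'a/2 since u = 0
m = sol.t > 0.75*T                  # keep only the tail
print(trapezoid(phi[m], sol.t[m]) / (sol.t[m][-1] - sol.t[m][0]))  # ~ 6.58
\end{verbatim}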
\subsection{Performance of algorithm {\bf A-I}}
The SDP problem $O_0$ is solved first. It corresponds to the uncontrolled system.
The minimal upper bound we could achieve was $C_{0,SOS}=6.59.$ It was obtained with
\begin{eqnarray*}
V_{0,SOS}=-96.63 a_3+14.01a_1^2+14.01a_2^2+14.15 a_3^2.
\end{eqnarray*}
Increasing the degree of $V_0$ cannot give a better bound because there exists a stable limit cycle in the phase space of (\ref{cf}), on which $a_1^2+a_2^2=6.560,$ and $ a_3=2.570$.
Since $\bar{\Phi}=1/2{\bf a}^T{\bf a}=6.584$ on the limit cycle, the minimal upper bound achieved by SOS optimization is tight in the sense
that the difference between $C_{0,SOS}$ and $\bar{\Phi}$ is less than the prescribed precision for $C$, $0.01$.
Solving the SDP problem $O_1$, where $V_1$ and $u_1$ are tunable functions, gave $C_{1,SOS}=0$.
Solving $O_1',$ with $V_1, u_1, S_0$ being tuning functions, gave the same result: $C_{1,SOS}=0$.
In both cases, increasing the degrees of the tuning functions did not reduce the upper bound.
The subsequent SOS optimization problems $O_i'$, $i=2,3$, also gave $C_{i,SOS}=0$.
Therefore, by (\ref{series}),
\begin{eqnarray*}
C_{SOS}=C_{0,SOS}+\epsilon C_{1,SOS}+\epsilon^2 C_{2,SOS}+O(\epsilon^3)
\approx C_{0,SOS}=6.59,
\end{eqnarray*}
implying that {\bf A-I} does not generate a control ensuring a better upper bound of $\bar{\Phi}$ than the bound obtained in the uncontrolled case.
\subsection{Performance of algorithm {\bf A-II}}
For the uncontrolled system, it was obtained in {\bf A-I} that $C_{0,SOS}=6.59$.
We first solve $O_1''$. Given the vectors of monomials in ${\bf x}$ without repeated elements \cite{Ch:09}, $Z_i$, $i=1,2,3$, define $V_1=P_1^TZ_1$, $u_1=P_2^TZ_2$, and $S_0=P_3^TZ_3$, where the parametric vectors $P_i$, $i=1,2,3$, consist of tunable coefficients. The degrees of $V_1, u_1$ and $S_0$ are specified by the maximum degrees of the monomials in $Z_i$, and are denoted by $d_{V_1}, d_{u_1}$, and $d_{S_0}$, respectively. Consider two subcases: $d_{V_1}=d_{u_1}=d_{S_0}=2$ and $d_{V_1}=d_{u_1}=d_{S_0}=4$.
For the former case, we have $C_{1,SOS}=-354$, induced by
\begin{eqnarray*}
u_{1,SOS,2}=45.37 a_1-28.47 a_2-142.76 a_2a_3+399.49 a_1a_3.
\end{eqnarray*}
For the latter case, we have $C_{1,SOS}=-1965$, induced by
\begin{eqnarray*}
u_{1,SOS,4}&=&233.08a_1-54.73a_2-67.61a_2a_3+218.56a_1a_3+717.28a_1^3+13.16a_1^2a_2 \\
&&+571.67a_1a_2^2-277.73a_2^3+466.61a_1a_3^2-141.41a_2a_3^2+230.53a_1^3a_3\\
&&+106.32a_1^2a_2a_3+220.19a_1a_2^2a_3-161.44a_2^3a_3+628.40a_1a_3^3-173.78a_2a_3^3.
\end{eqnarray*}
We then solve $O_{\epsilon}$ with a fixed $\epsilon$.
For simplicity we considered $u_{1,SOS}=u_{1,SOS,2}$ and $d_V\le 10$ only.
The upper-bound results for different $\epsilon$ are summarized in Fig. \ref{inf0}.
The long-time average cost $\bar{\Phi}$, which is obtained by direct numerical experiment, and the linear truncated bound $C_{0,SOS}+\epsilon C_{1,SOS}$ are also presented for comparison.
From Fig. \ref{inf0}, we can see the following.
Let $\epsilon_1= 1.267\times 10^{-2}$ and $\epsilon_2=7.416\times 10^{-2}$.
The small-feedback controller
\begin{eqnarray}
u=\epsilon u_{1,SOS,2}
\label{test}
\end{eqnarray}
reduces $\bar{\Phi},$ and the reduction in $\bar{\Phi}$ increases monotonically with $\epsilon$ when $0<\epsilon<\epsilon_2$. In particular, $\bar{\Phi}=0$ for $\epsilon_1 \le \epsilon< \epsilon_2,$ that is in this range of $\epsilon$ the controller fully stabilizes the system. When $\epsilon\ge \epsilon_2$, the controller makes the long-time average cost worse than in the uncontrolled case. The effect of $\epsilon$ on $\bar{\Phi}$ can be seen more clearly by investigating the qualitative properties of the closed-loop system. A simple check gives that when $0\le \epsilon< \epsilon_1$, the closed-loop system has a unique unstable equilibrium at the origin and a stable limit cycle, thus yielding a non-zero but finite $\bar{\Phi}$;
when $\epsilon_1 \le \epsilon< \epsilon_2$, the limit cycle disappears and the unique equilibrium becomes globally stable, so that $\bar{\Phi}$ vanishes;
when $\epsilon\ge \epsilon_2$ but is close to $\epsilon_2$, besides the equilibrium at the origin, there exist four additional non-zero equilibria, and as a result $\bar{\Phi}$ becomes large immediately. For instance, at the bifurcation point $\epsilon=\epsilon_2$, the non-zero equilibria of the closed-loop system are $(\pm 0.6988, \pm 2.362, 2.377)$ and $(\pm 0.7000, \pm 2.364, 2.382)$, resulting in $\bar{\Phi}=171.55$.
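These regimes are easy to check by direct simulation. The sketch below (ours; the horizon and the transient cut are again ad hoc choices) closes the loop with $u=\epsilon u_{1,SOS,2}$ and estimates $\bar{\Phi}$ for a few values of $\epsilon$, reproducing the reduction for small $\epsilon$, the stabilization for $\epsilon_{1}\le \epsilon <\epsilon_{2}$, and the deterioration beyond $\epsilon_{2}$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

sr, s3 = 0.05439, 0.05347
al, be = 0.02095, 0.02116
ga, om = -0.03504, 0.9232
g1, g2 = -0.15402, 0.046387

def u1(a1, a2, a3):   # the first-order feedback u_{1,SOS,2}
    return 45.37*a1 - 28.47*a2 - 142.76*a2*a3 + 399.49*a1*a3

def rhs(t, a, eps):
    a1, a2, a3 = a
    u = eps*u1(a1, a2, a3)
    return [sr*a1 - (om + ga*a3)*a2 - be*a1*a3 + g1*u,
            (om + ga*a3)*a1 + sr*a2 - be*a2*a3 + g2*u,
            al*(a1**2 + a2**2) - s3*a3]

def phi_bar(eps, T=3000.0):
    sol = solve_ivp(rhs, (0.0, T), [-0.3, -0.3, 0.3],
                    args=(eps,), max_step=0.05)
    a1, a2, a3 = sol.y
    phi = 0.5*(a1**2 + a2**2 + a3**2) + (eps*u1(a1, a2, a3))**2
    m = sol.t > 0.75*T
    return trapezoid(phi[m], sol.t[m]) / (sol.t[m][-1] - sol.t[m][0])

# 0 and 0.005 lie below eps_1; 0.02 lies in [eps_1, eps_2); 0.08 above eps_2
for eps in [0.0, 0.005, 0.02, 0.08]:
    print(eps, phi_bar(eps))
\end{verbatim}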
Solving $O_{\epsilon}, 0<\epsilon\le 8.7\times 10^{-4}$ yields a tight upper bound $C_{\epsilon, SOS}$ for $\bar{\Phi}$.
However, the obtained upper bound becomes non-tight when $\epsilon> 8.7\times 10^{-4}$.
The conservativeness of $C_{\epsilon, SOS}$ can be fully overcome by considering the additional relaxation constraint (\ref{cons})
for $ 8.7\times 10^{-4}<\epsilon\le 4\times 10^{-3}$, but it is only mitigated to a certain extent for larger $\epsilon$.
The two-term expansion $C_{0,SOS}+\epsilon C_{1,SOS}$ is only a linear approximation of $C_{SOS}$ in (\ref{series}).
Thus, as an upper bound of $\bar{\Phi}$, it behaves well when $\epsilon$ is very small, but it becomes conservative when $\epsilon$ is further increased, and meaningless for $\epsilon>-C_{0,SOS}/C_{1,SOS}=0.0186$.
In summary, for small $\epsilon$, the proposed small-feedback controller
yields a better bound of the long-time average cost than in the uncontrolled case. Further, the controller indeed reduces the long-time average cost itself.
\begin{figure}
\centerline{\includegraphics[trim=0mm 5mm 0mm 0mm,clip,width=10cm]{result1.eps}}
\centerline{$\epsilon$}
\caption{The long-time average cost $\bar{\Phi}$ and its upper bounds for different $\epsilon$.
$C_{0,SOS}, C_{1,SOS}, C_{\epsilon,SOS}, C_{\epsilon,SOS}'$ are obtained by solving $O_0, O_1'', O_{\epsilon}$, and $O_{\epsilon}$ with the relaxation (\ref{cons}), respectively.
}
\label{inf0}
\end{figure}
\begin{figure}
\centerline {\includegraphics[trim=0mm 10mm 0mm 0mm,clip,width=10cm]{ep20.eps}}
\centerline{$t$}
\caption{Control input profile.
}
\label{inf2}
\end{figure}
\begin{figure}
\centerline{\includegraphics[height=5cm,width=9cm]{ep30.eps}}
\caption{Closed-loop trajectory starting at ${\bf a}=[-0.3~-0.3~0.3]^T$.
Owing to the small-feedback control, the magnitude of the periodic oscillation has been reduced.
}
\label{inf3}
\end{figure}
Figs. \ref{inf2}-\ref{inf3} show more details of the control performance of the proposed controller (\ref{test}) with $\epsilon=8.7\times 10^{-4}$ and
the initial state ${\bf a}=[-0.3 ~-0.3 ~0.3]^T$.
\section{Conclusion}\label{seq:Conclusion}
Based on sum-of-squares decomposition of polynomials and semidefinite programming, a numerically tractable approach is presented for long-time average cost control of polynomial dynamical systems.
The obtained controller possesses a structure of small feedback, which is an asymptotic expansion in a small parameter, with all the coefficients being polynomials of the system state.
The derivation of the small-feedback controller is given in terms of the solvability conditions of state-dependent linear and bilinear inequalities.
The non-convexity in SOS optimization can be resolved by making full use of the smallness of the perturbation parameter while not using any iterative
algorithms.
The efficiency of the control scheme has been tested on a low-order model of cylinder wake flow stabilization problem.
In the next research phase, we will consider SOS-based long-time average cost control under modelling uncertainties and in the presence of noise, as well as direct numerical simulations of small-feedback control for actual fluid flows.
The two main contributions of the present paper are the proof of concept of using the upper bound of the long-time average cost as the objective of the control design, and the method of overcoming the non-convexity of the simultaneous optimization of the control law and the tunable function.
\section{Introduction}
The discovery of the Higgs boson in 2012 at the LHC has attested the success
of the standard model (SM) in describing the observed fermions and their
interactions. However, there exist many theoretical issues or open questions
that have no satisfactory answer. In particular, the observed flavour
pattern lacks a definitive explanation: the quark Yukawa coupling
matrices $Y_u$ and $Y_d$, which in the SM reproduce the six quark masses,
three mixing angles and a complex phase to account for CP violation
phenomena, are general complex matrices, not constrained by any gauge
symmetry.
Experimentally the flavour puzzle is very intricate. First, there is the
quark mass hierarchy in both sectors. Secondly, the mixing in the SM, encoded in the Cabibbo-Kobayashi-Maskawa (CKM) unitary matrix, turns out to be close to the identity matrix. If one also takes the lepton sector into
account, the hierarchy there is even more puzzling~\cite%
{Emmanuel-Costa:2015tca}. On the other hand, in the SM there is in general
no connection between the quark mass hierarchy and the CKM mixing pattern.
In fact, if one considers the Extreme Chiral Limit, where the quark masses
of the first two generations are set to zero, the mixing does not
necessarily vanish~\cite{Botella:2016krk}, and one concludes that the CKM
matrix~$V$ being close to the identity matrix has nothing to do with the
fact that the quark masses are hierarchical. Indeed, in order to have $%
V\approx \mathbf{1}$, one must have a definite alignment of the quark mass
matrices in the flavour space, and to explain this alignment, a flavour
symmetry or some other mechanism is required~\cite{Botella:2016krk}.
Among many attempts made in the literature to address the flavour puzzle,
extensions of the SM with new Higgs doublets are particularly motivating.
This is due to the fact that the number of Higgs doublets is not constrained by
the SM symmetry. Moreover, the addition of scalar doublets gives rise to new
Yukawa interactions and as a result provides a richer framework for approaching the theory of flavour. On the other hand, any new extension of the Higgs sector must be strongly constrained, since it naturally leads to
flavour changing neutral currents. At tree level, in the SM, all the flavour
changing transitions are mediated through charged weak currents and the
flavour mixing is controlled by the CKM matrix~\cite%
{Cabibbo:1963yz,Kobayashi:1973fv}. If new Higgs doublets are added, one
expects large FCNC effects already present at tree level. Such effects have
not been experimentally observed and they constrain severely any model with
extra Higgs doublets, unless a flavour symmetry suppresses or avoids large
FCNC~\cite{Branco:2011iw}.
Minimal flavour violating models~\cite%
{Joshipura:1990pi,Antaramian:1992ya,Hall:1993ca,Mantilla2017, Buras:2001,
Dambrosio:2002} are examples of multi-Higgs extensions where FCNC are
present at tree-level but their contributions to FCNC phenomena involve only
off-diagonal elements of the CKM matrix or their products. The first
consistent models of this kind were proposed by Branco, Grimus and Lavoura
(BGL)~\cite{Branco:1996bq}, and consisted of the SM with two Higgs doublets
together with the requirement of an additional discrete symmetry. BGL models
are compatible with lower neutral Higgs masses and FCNC's occur at tree
level, with the new interactions entirely determined in terms of the CKM
matrix elements.
The goal of this paper is to generalize the previous BGL models and to search systematically for patterns where a discrete flavour symmetry naturally leads to the alignment in flavour space of both quark sectors. Although the quark mass hierarchy does not arise from the symmetry, their combined effect is such that the CKM matrix is close to the identity and has the correct overall phenomenological features, determined by the quark mass hierarchy \cite{Branco:2011aa}. To do this we extend the SM with two extra Higgs doublets, to a total of three Higgs doublets $\phi _{a}$. We opt for a discrete symmetry in order to avoid the Goldstone bosons that would appear, for any global continuous symmetry, once spontaneous electroweak symmetry breaking occurs. For the sake of simplicity, we restrict our search to the family group $Z_{N}$, and demand that the
resulting up-quark mass matrix $M_{u}$ is diagonal. This is to say that, due
to the expected strong up-quark mass hierarchy, we only consider those cases
where the contribution of the up-quark mass matrix to quark mixing is
negligible.
If one assumes that all Higgs doublets acquire vacuum expectation values
with the same order of magnitude, then each Higgs doublet must couple to the
fermions with different strengths. Possibly one could obtain similar results
assuming that the vacuum expectation values (VEVs) of the Higgs have a
definite hierarchy instead of the couplings, but this is not considered
here. Combining this assumption with the symmetry, we obtain the correct
ordered hierarchical pattern if the coupling with $\phi _{3}$ gives the
strength of the third generation, the coupling with $\phi _{2}$ gives the
strength of the second generation and the coupling with $\phi _{1}$ gives
the strength of the first generation. Therefore, from our point of view, the three Higgs doublets are necessary to ensure that there exist three different coupling strengths, one for each generation, guaranteeing simultaneously a hierarchical mass spectrum and a CKM matrix with the correct overall phenomenological features, e.g. $\left\vert V_{cb}\right\vert^{2}+\left\vert V_{ub}\right\vert ^{2}=O(m_{s}/m_{b})^{2}$, denoted here by $V\approx \mathbf{1}$.
Indeed, our approach falls within the class of BGL models, in which the FCNC flavour structure is entirely determined by the CKM matrix. Through the symmetry, the suppression of the most dangerous FCNC's by combinations of CKM matrix elements and light quark masses is entirely natural.
The paper is organised as follows. In the next section, we present our model
and classify the patterns allowed by the discrete symmetry in combination with our assumptions.
In Sec. \ref{sec:num}, we give a brief numerical analysis of
the phenomenological output of our solutions. In Sec. \ref{sec:fcnc}, we
examine the suppression of scalar mediated FCNC in our framework for each
pattern. Finally, in Sec. \ref{sec:conc}, we present our conclusions.
\section{The Model}
\label{sec:model}
We extend the Higgs sector of the SM with two extra new scalar doublets,
yielding a total of three scalar doublets, as $\phi _{1}$, $\phi _{2}$, $%
\phi _{3}$. As it was mentioned in the introduction, the main idea for
having three Higgs doublets is to implement a discrete flavour symmetry,
that leads to the alignment of the flavour space of the quark sectors. The
quark mass hierarchy does not arise from the symmetry, but together with the
symmetry the effect of both is such that the CKM matrix is near to the
identity and has the correct overall phenomenological features, determined
by the quark mass hierarchy.
Let us start by considering the most general quark Yukawa coupling Lagrangian of our setup
\begin{equation}
-\mathcal{L}_{\text{Y}}=(\Omega _{a})_{ij}\,\overline{Q}_{Li}\ \widetilde{%
\phi }_{a}\ u_{R_{j}}+(\Gamma _{a})_{ij}\,\overline{Q}_{Li}\ \phi _{a}\
d_{R_{j}}+h.c., \label{eq:lag}
\end{equation}%
with $a=1,2,3$ labelling the Higgs doublets and $i,j$ the usual flavour indices identifying the generations of fermions. In the above Lagrangian,
one has three Yukawa coupling matrices $\Omega _{1}$, $\Omega _{2}$, $\Omega
_{3}$ for the up-quark sector and three Yukawa coupling matrices $\Gamma
_{1} $, $\Gamma _{2}$, $\Gamma _{3}$ for the down sector, corresponding to
each of the Higgs doublets $\phi _{1}$, $\phi _{2}$, $\phi _{3}$. Assuming
that only the neutral components of the three Higgs doublets acquire vacuum
expectation values (VEVs), the quark mass matrices $M_{u}$ and $M_{d}$ are then easily
generated as
\begin{subequations}
\label{eq:mass}
\begin{align}
M_{u}& =\Omega _{1}\left\langle \phi _{1}\right\rangle \,^{\ast }+\,\Omega
_{2}\left\langle \phi _{2}\right\rangle \,^{\ast }+\,\Omega
_{3}\,\left\langle \phi _{3}\right\rangle ^{\ast }, \label{eq:massup} \\
M_{d}& =\Gamma _{1}\left\langle \phi _{1}\right\rangle \,+\,\Gamma
_{2}\left\langle \phi _{2}\right\rangle \,+\,\Gamma _{3}\left\langle \phi
_{3}\right\rangle ,
\end{align}%
where VEVs $\langle \phi _{i}\rangle $ are parametrised as
\end{subequations}
\begin{equation}
\left\langle \phi _{1}\right\rangle =\frac{v_{1}}{\sqrt{2}},\quad
\left\langle \phi _{2}\right\rangle =\frac{v_{2}e^{i\alpha _{2}}}{\sqrt{2}}%
,\quad \left\langle \phi _{3}\right\rangle =\frac{v_{3}e^{i\alpha _{3}}}{%
\sqrt{2}},
\end{equation}%
with $v_{1}$, $v_{2}$ and $v_{3}$ being the VEV moduli and $\alpha _{2}$, $%
\alpha _{3}$ just complex phases. We have chosen the VEV of $\phi _{1}$ to
be real and positive, since this is always possible through a proper gauge
transformation. As stated, we assume that the moduli of VEVs $v_{i}$ are of
the same order of magnitude, i.e.,
\begin{equation}
v_{1}\sim v_{2}\sim v_{3}. \label{vs}
\end{equation}
Each of the $\phi _{a}$ couples to the quarks with couplings $(\Omega _{a})_{ij},(\Gamma _{a})_{ij}$, which we take to be of the same order of magnitude, unless some element vanishes by imposition of the flavour symmetry. In this sense, each $\phi _{a}$ and $(\Omega _{a},\Gamma _{a})$ will generate its own respective generation: i.e., our model is such that, by imposition of the flavour symmetry, $\phi _{3}$, $\Omega _{3}$, $\Gamma _{3}$ will generate $m_{t}$ and $m_{b}$, respectively; $\phi _{2}$, $\Omega _{2}$, $\Gamma _{2}$ will generate $m_{c}$ and $m_{s}$; and $\phi _{1}$, $\Omega _{1}$, $\Gamma _{1}$ will generate $m_{u}$ and $m_{d}$. Generically, we have
\begin{subequations}
\label{eq:hierarchy}
\begin{align}
v_{1}\left\vert (\Omega _{1})_{ij}\right\vert & \sim m_{u},\;v_{2}\left\vert
(\Omega _{2})_{ij}\right\vert \sim m_{c},\;v_{3}\left\vert (\Omega
_{3})_{ij}\right\vert \sim m_{t}, \\
v_{1}\left\vert (\Gamma _{1})_{ij}\right\vert & \sim m_{d},\;v_{2}\left\vert
(\Gamma _{2})_{ij}\right\vert \sim m_{s},\;v_{3}\left\vert (\Gamma
_{3})_{ij}\right\vert \sim m_{b},
\end{align}%
which together with Eq.~\eqref{vs} implies a definite hierarchy amongst the
non-vanishing Yukawa coupling matrix elements:
\end{subequations}
\begin{subequations}
\label{eq:hier}
\begin{align}
\left\vert (\Omega _{1})_{ij}\right\vert & \ll \left\vert (\Omega
_{2})_{ij}\right\vert \ll \left\vert (\Omega _{3})_{ij}\right\vert ,
\label{eq:hierup} \\[2mm]
\left\vert (\Gamma _{1})_{ij}\right\vert & <\left\vert (\Gamma
_{2})_{ij}\right\vert \ll \left\vert (\Gamma _{3})_{ij}\right\vert .
\end{align}
Next, we focus on the required textures for the Yukawa coupling matrices $%
\Omega _{a}$ and $\Gamma _{a}$ that naturally lead to a hierarchical quark mass spectrum and at the same time to a realistic CKM mixing matrix. These
textures must be reproduced by our choice of the flavour symmetry. As
referred in the introduction, we search for quark mass patterns where the
mass matrix $M_{u}$ is diagonal. Therefore, one derives from Eqs.~%
\eqref{eq:massup}, \eqref{eq:hierup} the following textures for $\Omega _{a}$
\end{subequations}
\begin{equation}
\Omega _{1}=%
\begin{pmatrix}
\mathsf{x} & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0%
\end{pmatrix}%
,\,\Omega _{2}=%
\begin{pmatrix}
0 & 0 & 0 \\
0 & \mathsf{x} & 0 \\
0 & 0 & 0%
\end{pmatrix}%
,\,\Omega _{3}=%
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & \mathsf{x}%
\end{pmatrix}%
. \label{eq:textureOs}
\end{equation}%
The entry $\mathsf{x}$ denotes a non-zero element. In this case, the up-quark
masses are given by $m_{u}=v_{1}\left\vert (\Omega _{1})_{11}\right\vert $, $%
m_{c}=v_{2}\left\vert (\Omega _{2})_{22}\right\vert $ and $%
m_{t}=v_{3}\left\vert (\Omega _{3})_{33}\right\vert $.
Generically, the down-quark Yukawa coupling matrices must have the following
indicative textures
\begin{equation}
\Gamma _{1}=%
\begin{pmatrix}
\boldsymbol{\mathsf{x}} & \boldsymbol{\mathsf{x}} & \boldsymbol{\mathsf{x}}
\\
\mathsf{x} & \mathsf{x} & \mathsf{x} \\
\mathsf{x} & \mathsf{x} & \mathsf{x}%
\end{pmatrix}%
,\,\Gamma _{2}=%
\begin{pmatrix}
0 & 0 & 0 \\
\boldsymbol{\mathsf{x}} & \boldsymbol{\mathsf{x}} & \boldsymbol{\mathsf{x}}
\\
\mathsf{x} & \mathsf{x} & \mathsf{x}%
\end{pmatrix}%
,\,\Gamma _{3}=%
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
\boldsymbol{\mathsf{x}} & \boldsymbol{\mathsf{x}} & \boldsymbol{\mathsf{x}}%
\end{pmatrix}%
. \label{eq:textureGs}
\end{equation}%
We mark rows with bold $\boldsymbol{\mathsf{x}}$ in order to indicate that at least one of the matrix elements within that row must be non-vanishing. Rows denoted with $\mathsf{x}$ may be set to zero without modifying the mass matrix hierarchy. These textures not only ensure that the mass spectrum hierarchy is respected, but also lead to the alignment in flavour space of both quark sectors \cite{Branco:2011aa} and to a CKM matrix $V\approx \mathbf{1}$. For instance, if the $(1,3)$ entry of $\Gamma _{2}$ were not vanishing, or comparatively very small, this would not necessarily spoil the scale of $m_{s}$, but it would dramatically change the predictions for the CKM mixing matrix.
In order to force the Yukawa coupling matrices $\Omega _{a}$ and $\Gamma
_{a} $ to have the indicative forms outlined in Eqs.~\eqref{eq:textureOs}
and~\eqref{eq:textureGs}, we introduce a global flavour symmetry. Since any
global continuous symmetry leads to the presence of massless Goldstone
bosons after the spontaneous electroweak breaking, one should instead
consider a discrete symmetry. Among many possible discrete symmetry
constructions, we restrict our searches to the case of cycle groups $Z_{N}$.
Thus, we demand that any quark or boson multiplet $\chi $ transforms
according to $Z_{N}$ as
\begin{equation}
\chi \rightarrow \chi ^{\prime }=e^{i\,\mathcal{Q}(\chi )\,\frac{2\pi }{N}%
}\chi ,
\end{equation}%
where $\mathcal{Q}(\chi )\in \{0,1,\dots ,N-1\}$ is the $Z_{N}$-charge
attributed for the multiplet $\chi $.
We have chosen the up-quark mass matrix $M_{u}$ to be diagonal. This
restricts the flavour symmetry $Z_{N}$. We have found that, in order to
ensure that all Higgs doublet charges are different, and to
have appropriate charges for fields ${Q_{L}}_{i}$ and ${u_{R}}_{i}$, we must
have $N\geq 7$. We simplify our analysis by fixing $N=7$ and choose:
\begin{subequations}
\label{eq:fix}
\begin{align}
\mathcal{Q}({Q_{L}}_{i})& =(0,1,-2), \\
\mathcal{Q}({u_{R}}_{i})& =(0,2,-4),
\end{align}%
In addition, we may also fix
\end{subequations}
\begin{equation}
\mathcal{Q}({Q_{L}}_{i})=\mathcal{Q}(\phi _{i}) \label{eq:fix1}
\end{equation}%
It turns out that these choices do not restrict the results, i.e. the
possible textures that one can have for the $\Gamma _{i}$ matrices. Other
choices would only imply that we reshuffle the charges of the multiplets.
With the purpose of enumerating the different possible textures for the $%
\Gamma _{i}$ matrices implementable in $Z_{7}$, we write down the charges of
the trilinears $\mathcal{Q}({\overline{Q}_{L}}_{i}\phi _{a}{d_{R}}_{j})$
corresponding to each $\phi _{a}$ as
\begin{subequations}
\begin{equation}
\mathcal{Q}({\overline{Q}_{L}}_{i}\phi _{1}{d_{R}}_{j})=%
\begin{pmatrix}
d_{1} & d_{2} & d_{3} \\
d_{1}-1 & d_{2}-1 & d_{3}-1 \\
d_{1}+2 & d_{2}+2 & d_{3}+2%
\end{pmatrix}%
,
\end{equation}%
\begin{equation}
\mathcal{Q}({\overline{Q}_{L}}_{i}\phi _{2}{d_{R}}_{j})=%
\begin{pmatrix}
d_{1}+1 & d_{2}+1 & d_{3}+1 \\
d_{1} & d_{2} & d_{3} \\
d_{1}+3 & d_{2}+3 & d_{3}+3%
\end{pmatrix}%
,
\end{equation}%
\begin{equation}
\mathcal{Q}({\overline{Q}_{L}}_{i}\phi _{3}{d_{R}}_{j})=%
\begin{pmatrix}
d_{1}-2 & d_{2}-2 & d_{3}-2 \\
d_{1}-3 & d_{2}-3 & d_{3}-3 \\
d_{1} & d_{2} & d_{3}%
\end{pmatrix}%
,
\end{equation}%
where $d_{i}\equiv \mathcal{Q}({d_{R}}_{i})$. One can check that, in order
to have viable solutions, one must vary the values of $d_{i}\in
\{0,1,-2,-3\} $.
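This charge bookkeeping is straightforward to automate. The following Python sketch (ours, purely illustrative) reconstructs the zero/non-zero masks of the $\Gamma _{a}$ matrices from the condition that the trilinear charge $-\mathcal{Q}({Q_{L}}_{i})+\mathcal{Q}(\phi _{a})+d_{j}$ vanishes mod $7$, and scans all $7^{3}$ choices of the $d_{i}$; a generically non-singular $M_{d}$ is only a necessary condition, since the patterns retained below are further required to accommodate CP violation and the hierarchy assignments.
\begin{verbatim}
import numpy as np
from itertools import product

QL   = [0, 1, -2]        # Z7 charges of the quark doublets, Eq. (eq:fix)
QPHI = [0, 1, -2]        # Higgs doublet charges, Eq. (eq:fix1)

def gamma_masks(d):
    """0/1 masks of Gamma_1,2,3: entry (i,j) is allowed iff the
    trilinear charge -Q(Q_Li) + Q(phi_a) + d_j vanishes mod 7."""
    return [np.array([[(-QL[i] + QPHI[a] + d[j]) % 7 == 0
                       for j in range(3)] for i in range(3)], dtype=int)
            for a in range(3)]

# Example: d = (0, 0, 1) reproduces the Gamma textures of Pattern II
for a, G in enumerate(gamma_masks((0, 0, 1)), start=1):
    print(f"Gamma_{a}:\n{G}")

# Scan all 7^3 charge triples, keeping those whose M_d mask is
# generically non-singular (random values on the allowed entries)
rng = np.random.default_rng(0)
viable = [d for d in product(range(7), repeat=3)
          if np.linalg.matrix_rank(sum(rng.normal(size=(3, 3)) * G
                                       for G in gamma_masks(d))) == 3]
print(len(viable), "of 343 charge triples give a non-singular M_d")
\end{verbatim}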
We summarise in Table \ref{tab:downTextures} all the allowed textures for the $\Gamma _{a}$ matrices and the resulting $M_{d}$ mass matrix texture, excluding all cases which are irrelevant, e.g. matrices that have too many texture zeros and are singular, or matrices that do not accommodate CP violation. It must be stressed that these are the textures obtained from the different charge configurations that one can possibly choose. Once one assumes a definite charge configuration, the entire texture, i.e. $M_{d}$ and $M_{u}$, and the respective phenomenology are fixed. As stated, the list of textures in Table~\ref{tab:downTextures} remains unchanged even if one chooses a set of charges other than in Eqs.~\eqref{eq:fix}, \eqref{eq:fix1}. Note that all patterns presented here are of the Minimal Flavour Violation (MFV) type \cite{Joshipura:1990pi,Antaramian:1992ya,Hall:1993ca,Mantilla2017, Buras:2001, Dambrosio:2002}.
Pattern~I in the table was already considered in Ref.~\cite{Botella:2009pq} in the context of $Z_{8}$. We discard Patterns~IV, VII and X because, contrary to our starting point, at least one of the three non-zero couplings to $\phi _{1}$ turns out to be of the same order as the larger couplings to $\phi _{2}$, in order to meet the phenomenological requirements of the CKM matrix.
Notice also, that the structure of other $M_{d}$'s cannot be trivially
obtained, e.g. from Pattern I, by a transformation of the right-handed down
quark fields.
Our symmetry model may be extended to the charged leptons and neutrinos, e.g. in the context of the type-I see-saw. Choosing for the lepton doublets ${L}_{i}$ the charges $\mathcal{Q}({L}_{i})=(0,-1,2)$, opposite to the Higgs doublets in Eq.~\eqref{eq:fix1}, and e.g. the charges $\mathcal{Q}({e_{R}}_{i})=(0,-2,4)$ for the right-handed fields ${e_{R}}_{i}$, we force the charged lepton mass matrix to be diagonal. Then, choosing $\mathcal{Q}({\nu _{R}}_{i})=(0,0,0)$ for the right-handed neutrinos ${\nu _{R}}_{i}$, we obtain for the neutrino Dirac mass matrix a pattern similar to Pattern I. Of course, in this case, the heavy right-handed neutrino Majorana mass matrix is totally arbitrary. For other patterns and charges, in particular for the right-handed neutrinos, we could introduce scalar singlets with suitable charges, which would then lead to definite heavy right-handed neutrino Majorana mass matrices.
Next, we address an important issue of the model, namely, whether accidental $U(1)$ symmetries may appear in the Yukawa sector or in the potential. One may wonder whether a continuous accidental $U(1)$ symmetry could arise once $Z_{7}$ is imposed at the Lagrangian level in Eq.~\eqref{eq:lag}. This is indeed the case: for all realizations of $Z_{7}$, a global $U(1)_{X}$ appears. However, any consistent global $U(1)_{X}$ must obey the anomaly-free conditions of global symmetries~\cite{Babu:1989ex}, which read, for the anomalies $SU(3)^{2}\times U(1)_{X}$, $SU(2)^{2}\times U(1)_{X}$ and $U(1)_{Y}^{2}\times U(1)_{X}$, as
\end{subequations}
\begin{subequations}
\begin{equation}
A_{3}\equiv \frac{1}{2}\sum_{i=1}^{3}\biggl(2X({Q_{L}}_{i})-X({u_{R}}_{i})-X(%
{d_{R}}_{i})\biggr)=0, \label{eq:A3}
\end{equation}%
\begin{equation}
A_{2}\equiv \frac{1}{2}\sum_{i=1}^{3}\biggl(3X({Q_{L}}_{i})+X({\ell _{L}}%
_{i})\biggr)=0, \label{eq:A2}
\end{equation}%
\begin{equation}
A_{1}\equiv \frac{1}{6}\sum_{i=1}^{3}\biggl(X({Q_{L}}_{i})+3X({\ell _{L}}%
_{i})-8X({u_{R}}_{i})-2X({d_{R}}_{i})-6X({e_{R}}_{i})\biggr)=0,
\end{equation}%
where $X(\chi )$ is the $U(1)_{X}$ charge of the fermion multiplet $\chi $.
We have properly shifted the $Z_{7}$-charges in Eq.~\eqref{eq:fix} and in Table \ref{tab:downTextures} so that $X(\chi )=\mathcal{Q}(\chi )$, apart from an overall $U(1)_{X}$ normalisation. In general, to test these conditions one needs to specify the transformation laws of all fermionic fields. Looking at Table~\ref{tab:downTextures}, we derive that all the cases, except the first one, corresponding to $d_{i}=(0,0,0)$, violate the condition given in Eq.~\eqref{eq:A3}, which depends only on coloured fermion multiplets. In the case $d_{i}=(0,0,0)$, if one assigns the charged-lepton charges as $X({\ell _{L}}_{i})=X({Q_{L}}_{i})$, one concludes that the condition given in Eq.~\eqref{eq:A2} is violated. One then concludes that the global $U(1)_{X}$ symmetry is anomalous and therefore only the discrete symmetry $Z_{7}$ persists.
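As a concrete illustration of this check (our own evaluation, using $X(\chi )=\mathcal{Q}(\chi )$ with the charges of Eq.~\eqref{eq:fix} and Table \ref{tab:downTextures}), pattern II has $X({d_{R}}_{i})=(0,0,1)$, so that
\begin{eqnarray*}
A_{3}=\frac{1}{2}\left[ 2\,(0+1-2)-(0+2-4)-(0+0+1)\right] =-\frac{1}{2}\neq 0,
\end{eqnarray*}
while pattern I, with $X({d_{R}}_{i})=(0,0,0)$, gives $A_{3}=0$ and instead violates $A_{2}$: taking $X({\ell _{L}}_{i})=X({Q_{L}}_{i})$, one finds $A_{2}=\frac{1}{2}\left[ 3\,(0+1-2)+(0+1-2)\right] =-2\neq 0$.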
We also comment on the scalar potential of our model. The most general
scalar potential with three scalars invariant under $Z_{7}$ reads as
\end{subequations}
\begin{equation}
V(\phi )=\sum_{i}\left[ -\mu _{i}^{2}\phi _{i}^{\dagger }\phi _{i}+\lambda
_{i}(\phi _{i}^{\dagger }\phi _{i})^{2}\right] +\sum_{i<j}\left[ C_{i}(\phi
_{i}^{\dagger }\phi _{i})(\phi _{j}^{\dagger }\phi _{j})+\,\bar{C}%
_{i}\left\vert \phi _{i}^{\dagger }\phi _{j}\right\vert ^{2}\right] ,
\label{eq:pot}
\end{equation}%
where the constants $\mu _{i}^{2}$, $\lambda _{i}$, $C_{i}$ and $\bar{C}_{i}$
are taken real for $i,j=1,2,3$. Analysing the potential above, one sees that
it gives rise to the accidental global continuous symmetry $\phi
_{i}\rightarrow e^{i\alpha _{i}}\phi _{i}$, for arbitrary $\alpha _{i}$,
which upon spontaneous symmetry breaking leads, at tree level, to a massless neutral scalar. Introducing soft-breaking terms like $m_{ij}^{2}\phi _{i}^{\dagger }\,\phi _{j}\,+\text{H.c.}$ removes the problem. Another possibility, without spoiling the $Z_{7}$ symmetry, is to add new scalar singlets, so that the coefficients $m_{ij}^{2}$ are effectively generated once the scalar singlets acquire VEVs.
\begin{table}[]
\caption{The table shows the viable configurations for the right-handed
down-quark fields ${d_R}_i$ and their corresponding $\Gamma _{1}$, $\Gamma _{2}$%
, $\Gamma_{3}$ and $M_{d}$ matrices. It is understood that, for each pattern
and coupling, the parameters denoted here by the same symbol are in fact
different, but of the same order of magnitude (or possibly smaller).
E.g. in pattern I, coupling $\Gamma_1$, the three $\protect\delta$'s stand for
$\protect\delta_1$, $\protect\delta_2$, $\protect\delta_3$. The same applies
to the $\protect\varepsilon$'s and $c$'s. For patterns IV, VII, and X, which
will be excluded, one of the couplings in $\Gamma _{1}$ turns out to be much larger. }
\label{tab:downTextures}\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Pattern & $\mathcal{Q}({d_R}_i)$ & $\Gamma_1$ & $\Gamma_2$ & $\Gamma_3$ & $%
M_d$ \\ \hline
I & $(0, 0, 0)$ & $%
\begin{pmatrix}
\delta & \delta & \delta \\
0 & 0 & 0 \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
\varepsilon & \varepsilon & \varepsilon \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
c & c & c%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\delta & \delta & \delta \\
\varepsilon & \varepsilon & \varepsilon \\
c & c & c%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
II & $(0, 0, 1)$ & $%
\begin{pmatrix}
\delta & \delta & 0 \\
0 & 0 & \delta \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
\varepsilon & \varepsilon & 0 \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
c & c & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\delta & \delta & 0 \\
\varepsilon & \varepsilon & \delta \\
c & c & 0%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
III & $(0, 0, -3)$ & $%
\begin{pmatrix}
\delta & \delta & 0 \\
0 & 0 & 0 \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
\varepsilon & \varepsilon & 0 \\
0 & 0 & \varepsilon%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
c & c & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\delta & \delta & 0 \\
\varepsilon & \varepsilon & 0 \\
c & c & \varepsilon%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
IV & $(0, 0, -2)$ & $%
\begin{pmatrix}
\delta & \delta & 0 \\
0 & 0 & 0 \\
0 & 0 & \varepsilon%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
\varepsilon & \varepsilon & 0 \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
c & c & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\delta & \delta & 0 \\
\varepsilon & \varepsilon & 0 \\
c & c & \varepsilon%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
V & $(0, 1, 0)$ & $%
\begin{pmatrix}
\delta & 0 & \delta \\
0 & \delta & 0 \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
\varepsilon & 0 & \varepsilon \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
c & 0 & c%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\delta & 0 & \delta \\
\varepsilon & \delta & \varepsilon \\
c & 0 & c%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
VI & $(0, -3, 0)$ & $%
\begin{pmatrix}
\delta & 0 & \delta \\
0 & 0 & 0 \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
\varepsilon & 0 & \varepsilon \\
0 & \varepsilon & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
c & 0 & c%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\delta & 0 & \delta \\
\varepsilon & 0 & \varepsilon \\
c & \varepsilon & c%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
VII & $(0, -2, 0)$ & $%
\begin{pmatrix}
\delta & 0 & \delta \\
0 & 0 & 0 \\
0 & \varepsilon & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
\varepsilon & 0 & \varepsilon \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
c & 0 & c%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\delta & 0 & \delta \\
\varepsilon & 0 & \varepsilon \\
c & \varepsilon & c%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
VIII & $(1, 0, 0)$ & $%
\begin{pmatrix}
0 & \delta & \delta \\
\delta & 0 & 0 \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
0 & \varepsilon & \varepsilon \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & c & c%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & \delta & \delta \\
\delta & \varepsilon & \varepsilon \\
0 & c & c%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
IX & $(-3, 0, 0)$ & $%
\begin{pmatrix}
0 & \delta & \delta \\
0 & 0 & 0 \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
0 & \varepsilon & \varepsilon \\
\varepsilon & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & c & c%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & \delta & \delta \\
0 & \varepsilon & \varepsilon \\
\varepsilon & c & c%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
X & $(-2, 0, 0)$ & $%
\begin{pmatrix}
0 & \delta & \delta \\
0 & 0 & 0 \\
\varepsilon & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
0 & \varepsilon & \varepsilon \\
0 & 0 & 0%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & c & c%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
0 & \delta & \delta \\
0 & \varepsilon & \varepsilon \\
\varepsilon & c & c%
\end{pmatrix}%
$\rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\ \hline
\end{tabular}%
\end{table}
\newpage
\section{Numerical analysis}
\label{sec:num}
In this section, we give the phenomenological predictions obtained from the patterns listed in Table~\ref{tab:downTextures}. Note that, although these patterns arise directly from the chosen discrete charge configuration of the quark fields, one may further perform a residual flavour transformation
of the right-handed down quark fields, resulting in an extra zero entry in $%
M_{d}$. Taking this into account, all the parameters in each pattern may be
uniquely expressed in terms of down quark masses and the CKM matrix elements
$V_{ij}$. This follows directly from the diagonalization equation of $M_{d}:$
\begin{equation}
V\ ^{\dagger }M_{d}\ W=diag(m_{d},m_{s},m_{b})\quad \Longrightarrow \quad
M_{d}=V\ diag(m_{d},m_{s},m_{b})\ W^{\dagger } \label{diag}
\end{equation}%
with $V$ being the CKM mixing matrix, since $M_{u}$ is diagonal. Because of the zero entries in $M_{d}$, it is easy to extract the right-handed diagonalization matrix $W$ completely in terms of the down quark masses and the $V_{ij}$. Thus, modulo the residual transformation of the right-handed down quark fields, all parameters in each pattern are fixed, i.e., uniquely expressed in terms of the down quark masses and the CKM matrix elements $V_{ij}$, as is the right-handed diagonalization matrix $W$ of $M_{d}$. More precisely, all matrix elements of $V$ are written in terms of the real Wolfenstein parameters $\lambda $, $A$, $\overline{\rho }$ and $\overline{\eta }$, defined in terms of rephasing invariant quantities as
\begin{subequations}
\begin{equation}
\lambda \equiv \frac{|V_{us}|}{\sqrt{|V_{us}|^{2}+|V_{ud}|^{2}}},\qquad
A\equiv \frac{1}{\lambda }\left\vert \frac{V_{cb}}{V_{us}}\right\vert \,,
\end{equation}%
\begin{equation}
\overline{\rho }+i\,\overline{\eta }\equiv -\frac{V_{ud}^{\phantom{\ast}%
}V_{ub}^{\ast }}{V_{cd}^{\phantom{\ast}}V_{cb}^{\ast }}
\end{equation}%
while the entries of $diag(m_{d},m_{s},m_{b})$ in Eq.~\eqref{diag} satisfy
\end{subequations}
\begin{equation}
\begin{array}{l}
\sqrt{\frac{m_{d}}{m_{s}}}=\sqrt{{\frac{k_{d}}{k_s}}}\ \lambda \\
\\
\frac{m_{s}}{m_{b}}=k_{s}\ \lambda ^{2}%
\end{array}%
\quad \Longrightarrow \quad
\begin{array}{l}
m_{d}=k_{d}\ \lambda ^{4}m_{b} \\
\\
m_{s}=k_{s}\ \lambda ^{2}m_{b}%
\end{array}
\label{ks}
\end{equation}%
where, phenomenologically, $k_{d}$ and $k_{s}$ are factors of order one (numerically, $k_{d}\approx 0.37$ and $k_{s}\approx 0.38$ for the central values quoted below).
Writing $W^{\dagger }$ in Eq.~\eqref{diag} as $W^{\dagger
}=(v_{1},v_{2},v_{3})$, with the $v_{i}$ vectors formed by the $i$-th column
of $W^{\dagger }$, we find e.g. for pattern II,
\begin{equation}
v_{3}=\frac{1}{n_{3}}\left(
\begin{array}{r}
\frac{m_{d}}{m_{b}}V_{11} \\
\frac{m_{s}}{m_{b}}V_{12} \\
V_{13}%
\end{array}%
\right) \times \left(
\begin{array}{r}
\frac{m_{d}}{m_{b}}V_{31} \\
\frac{m_{s}}{m_{b}}V_{32} \\
V_{33}%
\end{array}%
\right) \label{v3}
\end{equation}%
where $n_{3}$ is the norm of the vector obtained from the cross product of the two vectors. Taking into account the extra freedom of transformation
of the right-handed fields, we may choose $M_{31}^{d}=0$, corresponding to $%
c_{1}=0$ in Table~\ref{tab:downTextures}, and we conclude that%
\begin{equation}
v_{1}=\frac{1}{n_{1}}\left(
\begin{array}{r}
\frac{m_{d}}{m_{b}}V_{31} \\
\frac{m_{s}}{m_{b}}V_{32} \\
V_{33}%
\end{array}%
\right) \times v_{3}^{\ast } \label{v1}
\end{equation}%
Obviously, then $v_{2}=\frac{1}{n_{2}}v_{1}^{\ast }\times v_{3}^{\ast }$.
This process is replicated for all patterns. Thus, $V$ and $W$ are entirely expressed in terms of the Wolfenstein parameters and the $k_{d}$ and $k_{s}$ of Eq.~\eqref{ks}. These two matrices will later be used to compute the patterns of the FCNC's in Table \ref{tb:FCNCpatterns}. Indeed, in this way we find, e.g. for pattern II, in leading order,
\begin{equation}
M_{d}=m_{b}\ \left(
\begin{array}{ccc}
-k_{d}\ \lambda ^{3} & \left( \overline{\rho }-i\,\overline{\eta }\right) \
A\ \lambda ^{3} & 0 \\
-k_{d}\ \lambda ^{2} & A\ \lambda ^{2} & -k_{s}\ \lambda ^{3} \\
0 & 1 & 0%
\end{array}%
\right) \label{mdl}
\end{equation}%
which corresponds to the expected power series, in which the couplings in $\Gamma _{1}$ to the first Higgs $\phi _{1}$ are comparatively smaller than the couplings in $\Gamma _{2}$, and these in turn smaller than the couplings in $\Gamma _{3}$. Similar results are obtained for all patterns in Table~\ref{tab:downTextures}, except for patterns IV, VII and X: e.g. for pattern IV, we find that the coupling $(\Gamma _{1})_{33}$ is proportional to $\lambda $, which is too large and contradicts our initial assumption that all couplings in $\Gamma _{1}$ to the first Higgs $\phi _{1}$ must be smaller than the couplings in $\Gamma _{2}$ to the second Higgs $\phi _{2}$. Therefore, we exclude Patterns IV, VII and X.
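The construction of Eqs.~\eqref{diag}--\eqref{v1} is easy to verify numerically. The Python sketch below (ours; it uses the leading-order Wolfenstein form of $V$ and the central parameter values quoted next, so the smallest entries are only accurate to that order) builds $W^{\dagger }=(v_{1},v_{2},v_{3})$ for pattern II and checks that $M_{d}=V\,diag(m_{d},m_{s},m_{b})\,W^{\dagger }$ has the zeros of Eq.~\eqref{mdl}, with the non-zero magnitudes close to the pattern II entries of Table \ref{Yukawa_example}.
\begin{verbatim}
import numpy as np

lam, A, rho, eta = 0.2255, 0.818, 0.124, 0.354
md, ms, mb = 2.7e-3, 55e-3, 2.86     # GeV, running masses at M_Z

# CKM matrix, leading-order Wolfenstein parametrisation
V = np.array([[1 - lam**2/2, lam, A*lam**3*(rho - 1j*eta)],
              [-lam, 1 - lam**2/2, A*lam**2],
              [A*lam**3*(1 - rho - 1j*eta), -A*lam**2, 1]])

r1 = np.array([md/mb*V[0, 0], ms/mb*V[0, 1], V[0, 2]])  # scaled 1st row of V
r3 = np.array([md/mb*V[2, 0], ms/mb*V[2, 1], V[2, 2]])  # scaled 3rd row of V

v3 = np.cross(r1, r3); v3 /= np.linalg.norm(v3)            # Eq. (v3)
v1 = np.cross(r3, v3.conj()); v1 /= np.linalg.norm(v1)     # Eq. (v1), c_1 = 0
v2 = np.cross(v1.conj(), v3.conj()); v2 /= np.linalg.norm(v2)

Wd = np.column_stack([v1, v2, v3])              # this is W^dagger
Md = V @ np.diag([md, ms, mb]) @ Wd
print(np.round(np.abs(Md), 4))  # zeros at (1,3), (3,1), (3,3), cf. Eq. (mdl)
\end{verbatim}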
We give in Table {\ref{Yukawa_example}} a numerical example of a Yukawa
coupling configuration for each pattern. We use the following quark running
masses at the electroweak scale $M_{Z}$:
\begin{subequations}
\begin{align}
m_{u}& =1.3_{-0.2}^{+0.4}\,\text{MeV},\quad m_{d}=2.7\pm 0.3\,\text{MeV}%
,\quad m_{s}=55_{-3}^{+5}\,\text{MeV}, \\
m_{c}& =0.63\pm 0.03\,\text{GeV},\quad m_{b}=2.86_{-0.04}^{+0.05}\,\text{GeV}%
,\quad m_{t}=172.6\pm 1.5\,\text{GeV}.
\end{align}%
which were obtained from a renormalisation group evolution at four-loop level \cite{1674-1137-38-9-090001}. Taking into account all experimental constraints \cite{Charles:2015gya}, the Wolfenstein parameters read:
\end{subequations}
\begin{subequations}
\begin{align}
\lambda & =0.2255\pm 0.0006,\qquad A=0.818\pm 0.015, \\
\overline{\rho }& =0.124\pm 0.024,\qquad \overline{\eta }=0.354\pm 0.015.
\end{align}
\begin{table}[]
\caption{A numerical example of a Yukawa coupling configuration for each
pattern that gives the correct hierarchy among the quark masses and mixing.}{%
} {\label{Yukawa_example}}
\par
\begin{center}
\setlength{\tabcolsep}{0.5pc}
\resizebox{\textwidth}{!}{\begin{tabular}{|c|c|c|c|c|c|}
\hline
Pattern & $v_1Y_1$ & $v_2Y_2$ & $v_3Y_3$ & $M_d$ \\
\hline
I &
$\begin{pmatrix}
0.00277 & 0.0124 & 0.0101\,e^{1.907 \,i} \\
0 & 0 & 0 \\
0 & 0 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0 & 0.0537 & 0.119 \\
0 & 0 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 2.86
\end{pmatrix}$
&
$\begin{pmatrix}
0.00277 & 0.0124 & 0.0101\,e^{1.907 \,i} \\
0 & 0.0537 & 0.119 \\
0 & 0 & 2.86
\end{pmatrix}$
\\
\hline
II &
$\begin{pmatrix}
0.0123 & 0.0101\,e^{-1.235 \,i} & 0 \\
0 & 0 & 0.012 \\
0 & 0 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0.0524 & 0.119 & 0 \\
0 & 0 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 2.86 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0.0123 & 0.0101\,e^{-1.235 \,i} & 0 \\
0.0524 & 0.119 & 0.012 \\
0 & 2.86 & 0
\end{pmatrix}$
\\
\hline
III &
$\begin{pmatrix}
0.0127 & 0.0102\, e^{-1.253 \,i}& 0 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0.0523 & 0.120 & 0 \\
0 & 0 & 0.295
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 2.844 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0.0127 & 0.0102\, e^{-1.253 \,i} & 0 \\
0.0523 & 0.120 & 0 \\
0 & 2.844 & 0.295
\end{pmatrix}$
\\
\hline
V &
$\begin{pmatrix}
0.0127 & 0 & 0.0101\,e^{-1.234 \,i}\\
0 & 0.0117 & 0 \\
0 & 0 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0.0524 & 0 & 0.112 \\
0 & 0 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 2.86
\end{pmatrix}$
&
$\begin{pmatrix}
0.0127 & 0 &0.0101\,e^{-1.234 \,i} \\
0.0524 & 0.0117& 0.112 \\
0 & 0 & 2.86
\end{pmatrix}$
\\
\hline
VI &
$\begin{pmatrix}
0.0127 & 0 & 0.0102\,e^{-1.253 \,i} \\
0 & 0 & 0 \\
0 & 0 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0.0523 & 0 & 0.120 \\
0 & 0.295 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 2.844
\end{pmatrix}$
&
$\begin{pmatrix}
0.0127 & 0 & 0.0102\,e^{-1.253 \,i} \\
0.0523 & 0 & 0.120 \\
0 & 0.295 & 2.844
\end{pmatrix}$
\\
\hline
VIII &
$\begin{pmatrix}
0 &0.0127 & 0.0102\,e^{1.907 \,i} \\
0.0117 & 0 & 0 \\
0 & 0 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0 & 0.0524 & 0.119 \\
0 & 0 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 2.86
\end{pmatrix}$
&
$\begin{pmatrix}
0 &0.0127 & 0.0102\,e^{1.907 \,i} \\
0.0117 & 0.0524 & 0.119 \\
0 & 0 & 2.86
\end{pmatrix}$
\\
\hline
IX &
$\begin{pmatrix}
0 & 0.0127 & 0.0101\,e^{-1.253 \,i} \\
0 & 0 & 0 \\
0 & 0 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0 & 0.0523& 0.120 \\
0.295 & 0 & 0
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 &2.844
\end{pmatrix}$
&
$\begin{pmatrix}
0 & 0.0127 & 0.0101\,e^{-1.253 \,i} \\
0 & 0.0523& 0.120 \\
0.295& 0 &2.844
\end{pmatrix}$
\\
\hline
\end{tabular}}
\end{center}
\end{table}
\section{Predictions of flavour changing neutral currents}\label{sec:fcnc}
In the SM, flavour changing neutral currents (FCNC) are forbidden at tree
level, both in the gauge and the Higgs sectors. However, by extending the SM
field content, one obtains Higgs Flavour Violating Neutral Couplings \cite%
{Branco:2011iw}. In terms of the quark mass eigenstates, the Yukawa couplings
to the Higgs neutral fields are:
\end{subequations}
\begin{equation}
\begin{aligned} -\mathcal{L}_{\text{Neutral Yukawa}}= &\frac{H_0}{v}\left(
\overline{d_L}\, D_d \, d_R + \overline{u_L}\, D_u \, u_R \right) +
\frac{1}{v'} \overline{d_L} \, N^d_{1}\, \left( R_1 + i\, I_1 \right) \, d_R
\\ & + \frac{1}{v'} \overline{u_L} \, N^u_{1} \, \left( R_1 - i\, I_1
\right) \, u_R + \frac{1}{v''} \overline{d_L} \, N^d_{2}\, \left( R_2 + i\,
I_2 \right) \, d_R \\ &+ \frac{1}{v''} \overline{u_L} \, N^u_{2} \, \left(
R_2 - i\, I_2 \right) \, u_R + h.c. \end{aligned}
\end{equation}%
where the $N_{i}^{u,d}$ are the matrices which give the strength and the
flavour structure of the FCNC,
\begin{subequations}
\begin{align} \label{eq:FCNC}
& N_{1}^{d}=\frac{1}{\sqrt{2}}V^{\dagger }\,\left( v_{2}\Gamma
_{1}-v_{1}e^{i\,\alpha _{2}}\Gamma _{2}\right) \,W, \\
& N_{2}^{d}=\frac{1}{\sqrt{2}}V^{\dagger }\left( v_{1}\Gamma
_{1}+v_{2}e^{i\,\alpha _{2}}\Gamma _{2}-\frac{v_{1}^{2}+v_{2}^{2}}{v_{3}}%
e^{i\,\alpha _{3}}\Gamma _{3}\right) \,W, \\
& N_{1}^{u}=\frac{1}{\sqrt{2}}\left( v_{2}\Omega _{1}-v_{1}e^{-i\,\alpha
_{2}}\Omega _{2}\right) , \\
& N_{2}^{u}=\frac{1}{\sqrt{2}}\left( v_{1}\Omega _{1}+v_{2}e^{-i\,\alpha
_{2}}\Omega _{2}-\frac{v_{1}^{2}+v_{2}^{2}}{v_{3}}e^{-i\,\alpha _{3}}\Omega
_{3}\right) .
\end{align}%
Since in our case the $N_{i}^{u}$ are diagonal, there are no flavour violating terms in the up-sector. Therefore, the analysis of the FCNC reduces to the down-quark sector. One can use the expressions for the mass matrices in Eq.~\eqref{eq:mass} to simplify the Higgs mediated FCNC matrices for the down-sector:
\end{subequations}
\begin{subequations}
\label{eq:simplefcnc}
\begin{align}
N_{1}^{d}& =\frac{v_{2}}{v_{1}}D_{d}-\frac{v_{2}}{\sqrt{2}}\left( \frac{v_{2}%
}{v_{1}}+\frac{v_{1}}{v_{2}}\right) e^{i\alpha _{2}}\,V^{\dagger }\,\Gamma
_{2}\,W-\frac{v_{2}\,v_{3}}{v_{1}\sqrt{2}}e^{i\alpha _{3}}V^{\dagger
}\,\,\Gamma _{3}\,W \\[2mm]
N_{2}^{d}& =D_{d}-\frac{v^{2}}{v_{3}\sqrt{2}}e^{i\alpha _{3}}\,V^{\dagger
}\,\Gamma _{3}\,W
\end{align}
In order to satisfy the experimental constraints arising from $K^{0}-\overline{K^{0}}$, $B^{0}-\overline{B^{0}}$ and $D^{0}-\overline{D^{0}}$ mixing, the off-diagonal elements of the Yukawa interactions $N_{1}^{d}$ and $N_{2}^{d}$ must be highly suppressed \cite{Botella:2014ska,AndreasCrivellin2013}. For each of our 10 solutions in Table~\ref{tab:downTextures}, we summarize in Table {\ref{tb:FCNCpatterns}} the corresponding FCNC patterns, for $v_{1}=v_{2}=v_{3}$ and $\alpha _{2}=\alpha _{3}=0$. These patterns are of the BGL type, since in Eq.~\eqref{eq:simplefcnc} all matrices can be expressed in terms of the CKM mixing matrix elements and the down quark masses. As explained, to obtain these patterns we express the CKM matrix $V$ and the matrix $W$ in terms of the Wolfenstein parameters.
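The entries of Table {\ref{tb:FCNCpatterns}} can be cross-checked numerically. With $v_{1}=v_{2}=v_{3}$ and $\alpha _{2}=\alpha _{3}=0$, Eq.~\eqref{eq:FCNC} gives $N_{1}^{d}=V^{\dagger }(M_{1}-M_{2})W$ and $N_{2}^{d}=V^{\dagger }(M_{1}+M_{2}-2M_{3})W$, where $M_{a}$ denotes the contribution of $\phi _{a}$ to $M_{d}$. The Python sketch below (ours, purely illustrative; it continues the pattern II construction of Sec.~\ref{sec:num}, and the smallest entries are only accurate up to the leading-order Wolfenstein truncation) prints $|N_{1}^{d}-D_{d}|$ and $|N_{2}^{d}|$ in units of $m_{b}$, to be compared with the quoted powers of $\lambda $.
\begin{verbatim}
import numpy as np

lam, A, rho, eta = 0.2255, 0.818, 0.124, 0.354
md, ms, mb = 2.7e-3, 55e-3, 2.86

V = np.array([[1 - lam**2/2, lam, A*lam**3*(rho - 1j*eta)],
              [-lam, 1 - lam**2/2, A*lam**2],
              [A*lam**3*(1 - rho - 1j*eta), -A*lam**2, 1]])
r1 = np.array([md/mb*V[0, 0], ms/mb*V[0, 1], V[0, 2]])
r3 = np.array([md/mb*V[2, 0], ms/mb*V[2, 1], V[2, 2]])
v3 = np.cross(r1, r3); v3 /= np.linalg.norm(v3)
v1 = np.cross(r3, v3.conj()); v1 /= np.linalg.norm(v1)
v2 = np.cross(v1.conj(), v3.conj()); v2 /= np.linalg.norm(v2)
Wd = np.column_stack([v1, v2, v3])            # W^dagger
Md = V @ np.diag([md, ms, mb]) @ Wd
W = Wd.conj().T

# pattern II: which entries of M_d come from which Higgs doublet
m1 = np.array([[1, 1, 0], [0, 0, 1], [0, 0, 0]])   # phi_1
m2 = np.array([[0, 0, 0], [1, 1, 0], [0, 0, 0]])   # phi_2
m3 = np.array([[0, 0, 0], [0, 0, 0], [1, 1, 0]])   # phi_3
M1, M2, M3 = m1*Md, m2*Md, m3*Md

N1 = V.conj().T @ (M1 - M2) @ W          # N^d_1 for v1 = v2 = v3, alpha = 0
N2 = V.conj().T @ (M1 + M2 - 2*M3) @ W   # N^d_2, idem
Dd = np.diag([md, ms, mb])
print(np.round(np.abs(N1 - Dd)/mb, 6))   # compare with the lambda powers
print(np.round(np.abs(N2)/mb, 6))
\end{verbatim}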
\begin{table}[]
\caption{For all allowed patterns, we find that the matrices $N^d_1-D_d$ and
$N^d_2$ are proportional to the following patterns, where $\protect\lambda$
is the Cabibbo angle.}{\label{tb:FCNCpatterns}}
{\ } \setlength{\tabcolsep}{14pt} \centering
\begin{tabular}{|c|c|c|}
\hline
Pattern & $(N^d_1-D_d)\sim$ & $N^d_{2}\sim$ \\ \hline
I & $%
\begin{pmatrix}
\lambda^4 & \lambda^3 & \lambda^3 \\
\lambda^5 & \lambda^2 & \lambda^2 \\
\lambda^7 & \lambda^4 & 1%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\lambda^4 & \lambda^7 & \lambda^3 \\
\lambda^9 & \lambda^2 & \lambda^2 \\
\lambda^7 & \lambda^4 & 1%
\end{pmatrix}
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
II & $%
\begin{pmatrix}
\lambda^4 & \lambda^3 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda^5 & \lambda^4 & 1%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\lambda^4 & \lambda^7 & \lambda^3 \\
\lambda^9 & \lambda^2 & \lambda^2 \\
\lambda^7 & \lambda^4 & 1%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
III & $%
\begin{pmatrix}
\lambda^4 & \lambda^3 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda & \lambda^2 & 1%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\lambda^4 & \lambda^5 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda & \lambda^2 & 1%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
IV & $%
\begin{pmatrix}
\lambda^4 & \lambda^3 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda & \lambda^2 & 1%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\lambda^4 & \lambda^5 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda & \lambda^2 & 1%
\end{pmatrix}%
$\rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
V & $%
\begin{pmatrix}
\lambda^4 & \lambda^3 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda^5 & \lambda^4 & 1%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\lambda^4 & \lambda^7 & \lambda^3 \\
\lambda^7 & \lambda^2 & \lambda^2 \\
\lambda^5 & \lambda^4 & 1%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
VI & $%
\begin{pmatrix}
\lambda^4 & \lambda^3 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda & \lambda^2 & 1%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\lambda^4 & \lambda^5 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda & \lambda^2 & 1%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
VII & $%
\begin{pmatrix}
\lambda^4 & \lambda^3 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda & \lambda^2 & 1%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\lambda^4 & \lambda^5 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda & \lambda^2 & 1%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
VIII & $%
\begin{pmatrix}
\lambda^4 & \lambda^3 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda^5 & \lambda^4 & 1%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\lambda^4 & \lambda^7 & \lambda^3 \\
\lambda^7 & \lambda^2 & \lambda^2 \\
\lambda^5 & \lambda^4 & 1%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
IX & $%
\begin{pmatrix}
\lambda^4 & \lambda^3 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda & \lambda^2 & 1%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\lambda^4 & \lambda^5 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda & \lambda^2 & 1%
\end{pmatrix}%
$ \rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\
X & $%
\begin{pmatrix}
\lambda^4 & \lambda^3 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda & \lambda^2 & 1%
\end{pmatrix}%
$ & $%
\begin{pmatrix}
\lambda^4 & \lambda^5 & \lambda^3 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda & \lambda^2 & 1%
\end{pmatrix}%
$\rule{0pt}{0.9cm}\rule[-0.9cm]{0pt}{0pt} \\ \hline
\end{tabular}%
\end{table}
The tree level Higgs mediated $\Delta S=2$ amplitude must be suppressed.
This may always be achieved by choosing the masses of the flavour
violating neutral Higgs scalars sufficiently heavy. However, from the
experimental point of view, it would be interesting to have these masses as
low as possible. Therefore, we also estimate the lower bound on these
masses, by considering the contribution to $B^{0}-\overline{B^{0}}$ mixing.
We choose this mixing since, for our patterns, the $(3,1)$ entry of the
matrix $N_{1}^{d}$ is the least suppressed in certain cases and would require
very heavy flavour violating neutral Higgses. The relevant quantity is the
off-diagonal matrix element $M_{12}$, which connects the $B$ meson with the
corresponding antimeson. This matrix element receives
contributions \cite{Botella:2014ska} both from the SM box diagram and from a
tree-level diagram involving the FCNC:
\begin{equation}
M_{12}=M_{12}^{SM}+M_{12}^{NP},
\end{equation}%
where the New Physics (NP) short distance tree level contribution to the
meson--antimeson mixing is:
\begin{equation}
\begin{aligned}
M_{12}^{NP}= & \sum_{i=1}^{2} \frac{f_B^2 \, m_B}{96\, v^2 m^2_{R_i}} \left\{
\left( 1+ \left( \frac{m_B}{m_d+m_b} \right)^2 \right) \left(a^R_i\right)_{12}
- \left( 1+ 11 \left( \frac{m_B}{m_d+m_b} \right)^2 \right) \left(b^R_i\right)_{12}
\right\} \\
&+ \sum_{i=1}^{2} \frac{f_B^2 \, m_B}{96\, v^2 m^2_{I_i}} \left\{
\left( 1+ \left( \frac{m_B}{m_d+m_b} \right)^2 \right) \left(a^I_i\right)_{12}
- \left( 1+ 11 \left( \frac{m_B}{m_d+m_b} \right)^2 \right) \left(b^I_i\right)_{12}
\right\}
\end{aligned}
\end{equation}%
with $v^{2}=v_{1}^{2}+v_{2}^{2}+v_{3}^{2}$ and
\begin{equation}
\begin{array}{l}
\left( a_{i}^{R}\right) _{12}=\left[ \left( N_{i}^{d}\right) _{31}^{\ast
}+\left( N_{i}^{d}\right) _{13}\right] ^{2} \\
\left( a_{i}^{I}\right) _{12}=-\left[ \left( N_{i}^{d}\right) _{31}^{\ast
}-\left( N_{i}^{d}\right) _{13}\right] ^{2}%
\end{array}%
~,\qquad
\begin{array}{l}
\left( b_{i}^{R}\right) _{12}=\left[ \left( N_{i}^{d}\right) _{31}^{\ast
}-\left( N_{i}^{d}\right) _{13}\right] ^{2} \\
\left( b_{i}^{I}\right) _{12}=-\left[ \left( N_{i}^{d}\right) _{31}^{\ast
}+\left( N_{i}^{d}\right) _{13}\right] ^{2}%
\end{array}%
~,\qquad i=1,2 \label{ab}
\end{equation}%
In order to obtain a conservative estimate, we have extended the
original expression of \cite{Botella:2014ska} to the three Higgs case,
including all neutral Higgs mass eigenstates.
Adopting as input values the PDG experimental determinations of $f_{B}$, $%
m_{B}$ and $\Delta \,m_{B}$, and considering a common VEV for all Higgs
doublets, we impose the inequality $M_{12}^{NP}<\Delta m_{B}$. The following
plots show an estimate of the lower bound for the flavour-violating Higgs
masses for two different patterns. We plot two masses chosen from the set $%
\left( m_{1}^{R},m_{2}^{R},m_{1}^{I},m_{2}^{I}\right) $, while the other two
are varied over a wide range. In Fig.~\ref{fig:test}, we illustrate these
lower bounds for Pattern III, which are dominated by the $(3,1)$ entry of the
$N_{1}^{d}$ matrix, suppressed only by a factor of $\lambda $. For Pattern
VIII, in Fig.~\ref{fig:test1} we find the flavour violating neutral Higgses
to be much lighter and possibly accessible at the LHC.
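As a rough cross-check of these bounds, the following Python sketch (ours)
implements the expression for $M_{12}^{NP}$ above for a single neutral
scalar pair $(R_i,I_i)$ and scans degenerate masses against $\Delta m_{B}$;
the $(3,1)$ and $(1,3)$ entries of $N^{d}$ are placeholders of the size
suggested by the patterns, and the numerical inputs are indicative PDG-like
values.
\begin{verbatim}
import numpy as np

fB, mB, dmB = 0.19, 5.2796, 3.33e-13   # GeV, indicative values
md, mb, v = 4.7e-3, 4.18, 246.0        # GeV
r = (mB / (md + mb))**2

def M12NP(N31, N13, mR, mI):
    # one i-term of M12^NP; the full result sums i = 1, 2
    aR = (np.conj(N31) + N13)**2
    bR = (np.conj(N31) - N13)**2
    aI, bI = -bR, -aR
    pref = fB**2 * mB / (96.0 * v**2)
    return pref * (((1 + r)*aR - (1 + 11*r)*bR) / mR**2
                   + ((1 + r)*aI - (1 + 11*r)*bI) / mI**2)

lam = 0.225
N31, N13 = lam*mb, lam**3*mb            # placeholder entries
for m in (1, 5, 10, 50):                # degenerate masses, in TeV
    ok = abs(M12NP(N31, N13, 1e3*m, 1e3*m)) < dmB
    print(m, 'TeV:', 'allowed' if ok else 'excluded')
\end{verbatim}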
\section{Conclusions}
\label{sec:conc}
We have presented a model based on the SM with 3 Higgs doublets and an
additional discrete flavour symmetry. We have shown that there exist
discrete flavour symmetry configurations which lead to the alignment of the
quark sectors. By allowing each scalar field to couple to each quark
generation with a distinctive scale, one obtains the quark mass hierarchy,
and although this hierarchy does not arise from the symmetry, the combined
effect of both is such that the CKM matrix is close to the identity and has
the correct overall phenomenological features. In this context, we have
obtained 7 solutions fulfilling these requirements, with the additional
constraint of the up quark mass matrix being diagonal and real.
We have also investigated whether accidental $U(1)$ symmetries may appear in
the Yukawa sector or in the potential, in particular whether a continuous
accidental $U(1)$ symmetry could arise once the $Z_{7}$ is imposed at the
Lagrangian level. This was indeed the case; however, we have shown that the
anomaly-free conditions for global symmetries are violated. Thus, the global
$U(1)_{X}$ symmetry is anomalous and only the discrete symmetry $Z_{7}$
persists.
As new Higgs doublets are added in this model, one expects large FCNC
effects, already present at tree level. However, such effects have not been
experimentally observed. We show that, for certain specific implementations
of the flavour symmetry, it is possible to suppress the FCNC effects and to
ensure that the flavour violating neutral Higgses are light enough to be
detectable at the LHC. Indeed, in this respect, our model is a
generalization of the BGL models to the 3HDM, since the FCNC flavour
structure is entirely determined by the CKM matrix.
\begin{figure}[h!]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.\linewidth]{P3-a.png}
\caption{Estimate of the lower bound for the flavour-violating Higgs masses for $R_1$ and $I_1$.}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.\linewidth]{P3-b.png}
\caption{Estimate of the lower bound for the flavour-violating Higgs masses for $R_2$ and $I_2$.}
\label{fig:sub2}
\end{subfigure}
\caption{Lower bound for the flavour-violating Higgs masses for Pattern III.}
\label{fig:test}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.\linewidth]{P8-a-good.png}
\caption{Estimate of the lower bound for the flavour-violating Higgs masses for $R_1$ and $I_1$.}
\label{fig:sub3}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.\linewidth]{P8-b-good.png}
\caption{Estimate of the lower bound for the flavour-violating Higgs masses for $R_2$ and $I_2$.}
\label{fig:sub4}
\end{subfigure}
\caption{Lower bound for the flavour-violating Higgs masses for Pattern VIII.}
\label{fig:test1}
\end{figure}
\acknowledgments
This work is partially supported by Funda\c{c}\~{a}o para a Ci\^{e}ncia e a
Tecnologia (FCT, Portugal) through the projects CERN/FP/123580/2011,
PTDC/FIS-NUC/0548/2012, EXPL/FIS-NUC/0460/2013, and CFTP-FCT Unit 777
(PEst-OE/FIS/UI0777/2013) which are partially funded through POCTI (FEDER),
COMPETE, QREN and EU. The work of D.E.C. is also supported by Associa\c c\~
ao do Instituto Superior T\'ecnico para a Investiga\c c\~ao e
Desenvolvimento (IST-ID). N.R.A. is supported by the European Union's Horizon
2020 research and innovation programme under the Marie Sklodowska-Curie grant
agreement No 674896. N.R.A. is grateful to CFTP for the hospitality during
his stay in Lisbon.
\bibliographystyle{ieeetr}
\section{Introduction} \label{intro}
Black holes have proven to be an excellent testing ground for theories
of gravity. In particular, one of the recent exciting developments
in string theory has been the reproduction of many of the macroscopic
black hole properties from the microscopic D-brane picture---for reviews
and references, see e.g.~\cite{horrev,maldrev}. At the same time, one
of the current puzzles is the failed attempt in~\cite{dps} to
obtain, from a
microscopic calculation,
the macroscopic scattering of a D-string probe off a five-dimensional
supersymmetric black hole carrying the maximum three charges.
Specifically, the interaction that is quadratic in
the charges was reproduced exactly, but the cubic term, which was seen
in~\cite{us} to be a degeneration of a three point interaction, was not
at all reproduced by the microscopic calculation.
In~\cite{us}, the macroscopic scattering of an arbitrary number of
the triply-charged supersymmetric five-dimensional black holes was given.
A proposal
for a microscopic calculation, based on the just-mentioned observation
of the origin of the three-point interaction, was also given. In this paper,
scattering of supersymmetric four-dimensional black holes carrying four
charges
will be
discussed. The motivation here rests on the fact that these
non-singular
black holes can be made purely out of D-branes~\cite{kleb,vjlars,vj,fermald}.
(If a supersymmetric four-dimensional black hole has fewer than four charges,
then it will be singular at the horizon.)
In principle, this makes the microscopic structure more
transparent~\cite{kleb,vjlars}.
This is in distinction to the five-dimensional case,
where despite requiring only three charges for non-singularity at the horizon,
there is no U-dual basis in which the charges are pure D-brane; the
usual description is as collections of parallel 5-branes and strings, with
momentum
along the strings. The difference in four dimensions is due to the additional
internal direction allowing the conditions for preservation of a
supersymmetry to be satisfied by a more general brane configuration.
In section~\ref{construct} we construct and discuss the black hole solution.
The black hole solution that we use is actually familiar from the heterotic
string---see e.g.~\cite{ct,ct2}. We rederive the
solution in a way that makes explicit its Type II origin; this complements
the discussion in~\cite{cvj}.
In section~\ref{scatter} we give the effective action
that describes the scattering of several of these black holes, and give
a lengthy discussion of its U-dual generalization. In particular, the
U-dual formulation of the three-point function is rather technical.
While we only explicitly calculate the effective action for
black holes with four charges, we explain at the end of section~\ref{scatter} why
the U-duality invariant formula should hold for arbitrary supersymmetric
black holes including the black holes of~\cite{ct2} that carry five charges.
In section~\ref{conc} we conclude with a discussion of the scattering of
two black
holes.
\section{The Black Hole Solution} \label{construct}
In~\cite{vj}, black holes were constructed purely out of e.g.\ %
several D-4-branes,
intersecting at arbitrary $U(3)$ angles in the compact torus, and a D-0-brane.
In~\cite{kleb,vjlars,fermald},
the black holes were constructed out of e.g.\ orthogonally
intersecting D-3-branes. However, because we will be doing the macroscopic
calculation, it will be convenient to use neither of these
descriptions in this paper. Instead, we would like to find an NS-NS
description of the black holes, so we can use the formulas of~\cite{maharana,%
cvetic} for the dimensionally reduced supergravity lagrangian.
This can be obtained, for example, via the following series of dualities from
the D-3-brane configuration of~\cite{kleb,vjlars,fermald}:%
\footnote{It is also possible to obtain the NS-NS black hole from
the IIA NS-5-brane, D-6-brane and D-2-brane with momentum configuration
of~\cite{jmphd}.}
\begin{eqnarray} \label{dualities}
\mbox{\scriptsize
\begin{tabular}{ccc}
\begin{tabular}{c|llllllllll}
{IIB} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline
{D-3} & X & & & & X & X & X & & & \\
{D-3} & X & & & & & & X & X & X & \\
{D-3} & X & & & & X & & & & X & X \\
{D-3} & X & & & & & X & & X & & X
\end{tabular}
& {\Large $\stackrel{\mbox{\scriptsize T4,T5}}{\longrightarrow}$} &
\begin{tabular}{c|llllllllll}
{IIB} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline
{D-1} & X & & & & & & X & & & \\
{D-5} & X & & & & X & X & X & X & X & \\
{D-3} & X & & & & & X & & & X & X \\
{D-3} & X & & & & X & & & X & & X
\end{tabular}
\\
\end{tabular}} \nonumber \\ \mbox{ \scriptsize \begin{tabular}{ccc}
& & {\Large $\downarrow$} {S} \\
\begin{tabular}{c|llllllllll}
{IIB} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline
{NS-1}& X & & & & & & X & & & \\
{ETN} & & X & X & X & & & & & & X \\
{D-1} & X & & & & & & & & X & \\
{D-3} & X & & & & X & X & & X & &
\end{tabular}
& {\Large $\stackrel{\mbox{\scriptsize T5,T9}}{\longleftarrow}$} &
\begin{tabular}{c|llllllllll}
{IIB} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline
{NS-1} & X & & & & & & X & & & \\
{NS-5} & X & & & & X & X & X & X & X & \\
{D-3} & X & & & & & X & & & X & X \\
{D-3} & X & & & & X & & & X & & X
\end{tabular}
\\
\end{tabular} } \\ \mbox{\scriptsize \begin{tabular}{ccc}
{T8,T6} {\Large $\downarrow$} \\
\begin{tabular}{c|llllllllll}
{IIB} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline
{mom} & & & & & & & X & & & \\
{ETN} & & X & X & X & & & & & & X \\
{D-1} & X & & & & & & X & & & \\
{D-5} & X & & & & X & X & X & X & X &
\end{tabular}
& {\Large $\stackrel{\mbox{\scriptsize S}} \longrightarrow$} &
\begin{tabular}{c|llllllllll}
{IIB} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline
{mom} & & & & & & & X & & & \\
{ETN} & & X & X & X & & & & & & X \\
{NS-1} & X & & & & & & X & & & \\
{NS-5} & X & & & & X & X & X & X & X &
\end{tabular}
\end{tabular} \nonumber
}
\end{eqnarray}
Note that under the T-duality in the 6-direction (T6), the fundamental
string parallel to the 6-direction transformed into a unit of Kaluza-Klein
momentum. This is just the well-known momentum--winding exchange.
Recalling that the Kaluza-Klein monopole is essentially the product
of time and Euclidean Taub-NUT (ETN)~\cite{kkm1,kkm2}, and that
the NS-5-brane is the magnetic dual of the NS-string (c.f.\ equation~%
(\ref{hsoln10}) below)
the magnetic-dual of this phenomenon is the NS-5-brane--ETN transformation
under the T9 perpendicular to the NS-5-brane~\cite{town} (compare also
with~\cite{cvj}).
Now applying the harmonic function rule for
orthogonally intersecting branes in ten dimensions~\cite{tsh,gaunt,argurio}
gives (relabeling $9 \rightarrow 4$ and $6 \rightarrow 9$)
\begin{mathletters} \label{soln10}
\begin{eqnarray}
\label{gsoln10}
ds_{\mbox{\scriptsize str}}^2 & = & \psi_1^{-1} [-dt^2 + dx_9^2
+ \frac{Q_R}{r} (dt -dx_9)^2]
+ \psi_5 \psi_E^{-1} (dx_4 + Q_E (1-\cos \theta) d \phi)^2
\nonumber \\
& & + \psi_5 \psi_E (dr^2 + r^2 d \theta^2 + r^2 \sin^2 \theta d \phi^2)
+ dx_5^2 + dx_6^2 + dx_7^2 + dx_8^2, \\
\label{dilsoln10}
\varphi &=& \frac{1}{2} \ln (\psi_5 \psi_1^{-1}), \\
\label{hsoln10}
H &=& -Q_5 \sin \theta d \theta \wedge d\phi \wedge dx_4
+ \psi_1^{-2} \frac{d \psi_1}{d r} dt \wedge dr \wedge dx_9, \\
\label{psi1soln10}
\psi_1 &=& 1 + \frac{Q_1}{r}, \\
\label{psi5soln10}
\psi_5 &=& 1 + \frac{Q_5}{r}, \\
\label{psiEsoln10}
\psi_E &=& 1 + \frac{Q_E}{r}.
\end{eqnarray}
\end{mathletters}
Here we have postulated an obvious generalization of the harmonic function
rule to configurations involving the ETN; in particular, the ETN does
not contribute an overall conformal factor, in analogy to the Kaluza-Klein
momentum.
For notational simplicity, only the one-centred black hole has been written;
the generalization to the multi-black hole is almost obvious---see, e.g.~%
\cite{tseyt} for details on multi-centred ETN. We have also set the
string coupling constant $g=e^{\varphi_\infty}=1$, where the subscript
denotes evaluation at spatial infinity.
The $Q_{\alpha}$s are constants; see also equations~(\ref{normq1})--(%
\ref{normqE}).
It is readily verified that equation~%
(\ref{soln10}) satisfies the equations of motion of the (string-frame)
NS-NS IIB action
\begin{equation} \label{action10}
S = \frac{1}{16 \pi G_{10}}
\int d^{10}x \sqrt{-g}e^{-2 \varphi} [R + 4 (\nabla \varphi)^2
- \frac{1}{12} H^2],
\end{equation}
where $G_{10} = 8 \pi^6 g^2 \alpha'^4$ is the ten-dimensional Newton constant.
This action, of course, describes the universal sector of all the
string theories,
and, in fact, the solution of equation~(\ref{soln10}) is not new,
having been discussed in the context of the heterotic string in
e.g.~\cite{ct}.
Dimensional reduction on a $T^6$ now proceeds in the usual way~%
\cite{maharana}. Of course, the NS-5-brane and the ETN give rise
to magnetic charges in 4-dimensions; it is therefore convenient to
dualize the corresponding vectors, and write the theory in terms of the
magnetic vector potentials and field strengths for which the
Bianchi identity and equation of motion are interchanged, e.g.\ %
$d \tilde{A}^{(2)}_4 \equiv \tilde{F}^{(2)}_4 =
e^{-2 \varphi} G^{44} \star F^{(2)}_4$, using the notation of~%
\cite{maharana}, and tildes to denote magnetic quantities. The
four-dimensional, Einstein frame
action, and solution for multiple black holes carrying
four charges, are then,
\begin{eqnarray} \label{action4}
S &= & \frac{1}{16 \pi G_4}
\int d^4x \sqrt{-g} \left\{ R - 2 (\partial_\mu \phi)^2
- \frac{1}{4} (\partial_\mu \ln G_{44})^2 - \frac{1}{4} (\partial_\mu
\ln G_{99})^2 - \frac{1}{4} e^{2 \varphi} G^{-1}_{44} (\tilde{F}^{(1)4}_{
\mu \nu})^2 \right. \nonumber \\
& & \left. - \frac{1}{4} e^{-2 \varphi} G_{99} (F^{(1)9}_{\mu \nu})^2
- \frac{1}{4} e^{2 \varphi} G_{44} (\tilde{F}^{(2)}_{4 \mu \nu})^2
- \frac{1}{4} e^{-2 \varphi} G^{-1}_{99} (F^{(2)}_{9 \mu \nu})^2 \right\} ,
\end{eqnarray}
\begin{mathletters} \label{soln4}
\begin{eqnarray} \label{gsoln4}
ds_{\mbox{\scriptsize E}}^2 &=& -(\psi_1 \psi_5 \psi_R \psi_E)^{-\frac{1}{2}}
dt^2 + (\psi_1 \psi_5 \psi_R \psi_E)^{\frac{1}{2}} d \vec{x}^2, \\
\label{dil4}
\varphi & = & \ln (\psi_1^{-\frac{1}{4}} \psi_5^{\frac{1}{4}}
\psi_R^{-\frac{1}{4}} \psi_E^{\frac{1}{4}}), \\
\label{g444}
G_{44} & = & \psi_5 \psi_E^{-1}, \\
\label{g994}
G_{99} & = & \psi_1^{-1} \psi_R, \\
\label{A1soln4}
A^{(2)}_9 & = & \psi_{1}^{-1} dt, \\
\label{A5soln4}
\tilde{A}^{(2)}_4 & = & \psi_{5}^{-1} dt, \\
\label{ARsoln4}
A^{(1)9} & = & \psi_{R}^{-1} dt, \\
\label{AEsoln4}
\tilde{A}^{(1)4} & = & \psi_E^{-1} dt,
\end{eqnarray}
\end{mathletters}
where the $\psi_\alpha$s, $\alpha \in \{1,5,R,E\}$ are harmonic functions,
\begin{mathletters} \label{psiandQ}
\begin{eqnarray} \label{psi}
\psi_\alpha &= &1 + \sum_{a=1}^N \frac{Q_{\alpha a}}{r_a}, \\
\label{normq1}
Q_{1a} &=& \frac{4 G_4 R_9}{\alpha'} n_{1a},\\
\label{normq5}
Q_{5a} &=& \frac{\alpha'}{2 R_4} n_{5a}, \\
\label{normqR}
Q_{Ra} &=& \frac{4 G_4}{R_9} n_{Ra},\\
\label{normqE}
Q_{Ea} &=& \frac{R_4}{2} n_{Ea},
\end{eqnarray}
\end{mathletters}
where the $n_{\alpha a}$ are non-negative integers.
$N$ is the number of black holes, and $\vec{r}_a$ are their positions.
The radii of the internal circles are $R_4,\ldots,R_9$ and
the four-dimensional Newton constant is $G_4 = \frac{g^2 \alpha'^4}%
{8 R_4 R_5 R_6 R_7 R_8 R_9}$.%
\footnote{For details on deriving the quantization of the charges and
the value of the $D$-dimensional Newton constant, see e.g.~\cite{jmphd}. In
particular, we obtained the quantum of $Q_{E}$ by T-dualizing the
quantum of $Q_{5}$.}
\section{The Effective Action and U-duality} \label{scatter}
The Manton-type scattering calculation~\cite{man} proceeds exactly as
in~\cite{fe,shir,us}, so we leave out
all the details here. The result to ${\cal O}(\vec{v}^2)$
is
\begin{eqnarray} \label{result}
S_{\mbox{\scriptsize eff}} &=& \int dt \left\{ -\sum_a m_a
+ \frac{1}{2} \sum_a m_a \vec{v}_a^2
+ \frac{1}{2 l_p^2} \sum_{\alpha<\beta} \sum_{a,b} Q_{\alpha a} Q_{\beta b}
\frac{|\vec{v}_a - \vec{v}_b|^2}{r_{ab}} \right. \nonumber \\
& & + \left. \frac{1}{4 l_p^2}
\sum_{\stackrel{\mbox{\scriptsize $\alpha<\beta$}}
{\mbox{\scriptsize $\gamma \neq \alpha, \beta$}}}
\sum_{a,b,c} Q_{\alpha a} Q_{\beta b} Q_{\gamma c} |\vec{v}_a -
\vec{v}_b|^2 (\frac{1}{r_{ab} r_{ac}} + \frac{1}{r_{ab}r_{bc}} -
\frac{1}{r_{ac}r_{bc}}) \right. \nonumber \\
& & \left. + \frac{1}{2 l_p^2}
\sum_{\stackrel{\mbox{\scriptsize $\alpha<\beta; \gamma<\delta$}}
{\mbox{\scriptsize $\alpha, \beta, \gamma, \delta$ all distinct}}}
\sum_{a,b,c,d}
Q_{\alpha a} Q_{\beta b} Q_{\gamma c} Q_{\delta d}
|\vec{v}_a - \vec{v}_b|^2
\int d^3x \frac{\vec{r}_a \cdot \vec{r}_b}{4 \pi r_a^3 r_b^3 r_c r_d}
\right\},
\end{eqnarray}
where saturation of the Bogomol'nyi bound gives~\cite{jmphd}
\begin{equation} \label{bps}
m_a = \frac{1}{l_p^2} (Q_{1a} + Q_{5a} + Q_{Ra} + Q_{Ea}),
\end{equation}
and the four-dimensional Planck constant is $l_p = \sqrt{4 G_4} = \frac{%
g \alpha'^2}{\sqrt{2 R_4 \ldots R_9}}$.
Note that the
$a=b$ terms in the multiple sums clearly don't contribute. Furthermore,
in the triple sum, the singular terms for $a=c$ or $b=c$ cancel, and in
the quadruple sum, the integral converges, even when two or more of
the coordinates coincide.
When one of the charges, say $Q_{Ea}$, vanishes for every black hole,
then equation~(\ref{result}) reduces to the result of~\cite{us} when the
latter is reduced from five to four dimensions, as required by the
arguments of~\cite{myers2}.
We would now like to make equation~(\ref{result}) U-duality invariant.
The U-dual expression for the terms linear and quadratic in the charges
follow exactly as in~\cite{dps,us}. In particular the mass is
(in a sense elaborated below) already invariant, and the quadratic term
involves the masses and
contraction of two factors of the $E_{7(7)}$ charge vector
$q_{\Lambda a}$ with the inverse of the matrix of moduli, $({\cal M}_\infty^{
-1})^{\Lambda \Sigma}$ (see equation~(\ref{uresult}));
${\cal M}_{\Lambda \Sigma}$ is the matrix which
multiplies the kinetic term for the vector fields in the $E_{7(7)}$
invariant action.
The quartic term is clearly
proportional to the quartic invariant of the U-duality group~$E_{7(7)}$.
However, while in~\cite{us} the cubic term was proportional to the
cubic invariant of the five dimensional U-duality group~$E_{6(6)}$,
we cannot directly associate such an interpretation to it in this case,
since $E_{7(7)}$ has no cubic invariant, nor does $E_{6(6)}$ embed itself
into $E_{7(7)}$ in an intrinsically natural way.
Furthermore, it can be checked that we cannot use the invariants made out
of the matrix of moduli, the charge vectors and the masses to obtain the
cubic term; we can understand this because an expression involving
the matrix of moduli
could not reduce to the moduli independent $E_{6(6)}$ formula.%
\footnote{\label{overallM}
There will actually be overall matrix of moduli factors that
arise during the compactification from five to four dimensions.}
Instead, we recall,
following~\cite{kk,cvetic,feretal}, that there is an intrinsically natural way
of dissecting the $D=4, N=8$ central charge matrix. Specifically, we
note that for each black hole
the moduli-dependent central charge matrix $\bbox{\sf Z}_a$ can be
$SU(8) \subset E_{7(7)}$
rotated into the form
\begin{equation} \label{ccharge}
\bbox{\sf Z}_a =
\mbox{diag} \{z_{1a},z_{2a},z_{3a},z_{4a}\} \otimes
\left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right),
\end{equation}
with the $z_{\cdot a}$s the (possibly complex%
\footnote{But note that only the overall phase is invariant and not the
individual phases.}%
) ``eigenvalues''.
The largest eigenvalue, which we choose to be $z_{1a}$, is the mass of the
$a$th
black hole, by the BPS condition.
In fact, since
$z_{1a} = l_p^2 m_a$, it, and more technically an $SU(2) \subset
SU(8)$, is singled out. This was explained in~\cite{feretal} as the
$SU(2)$ corresponding to the supercharges (which transform linearly under
the $SU(8)$ automorphism) for which a complex linear combination annihilates
the state. This is just the statement that it corresponds,
by the BPS condition, to the
unbroken supersymmetry.
In the case at hand,~\cite{cvetic,kk}
\begin{mathletters} \label{defzs}
\begin{eqnarray}
\label{defz1}
z_{1a} &=& Q_{1a} + Q_{5a} + Q_{Ra} + Q_{Ea} = l_p^2 m_a, \\
\label{defz2}
z_{2a} &=& Q_{1a} - Q_{5a} + Q_{Ra} - Q_{Ea}, \\
\label{defz3}
z_{3a} &=& Q_{1a} + Q_{5a} - Q_{Ra} - Q_{Ea}, \\
\label{defz4}
z_{4a} &=& Q_{1a} - Q_{5a} - Q_{Ra} + Q_{Ea}.
\end{eqnarray}
\end{mathletters}
It is easily checked that
\begin{eqnarray} \label{cubicu}
\sum_{\alpha \neq \beta \neq \gamma} Q_{\alpha a} Q_{\beta b} Q_{\gamma c}
& = & \frac{1}{16} \left\{ z_{1a} \left(z_{1b} z_{1c} - \sum_{I=2}^4
z_{Ib} z^*_{Ic}\right) + (\mbox{5 perms}) \right\} \nonumber \\
& & + \frac{1}{8} \left\{ \mbox{Re} (z_{2a}
z_{3b} z_{4c}) + (\mbox{5 perms}) \right\},
\end{eqnarray}
where again we have taken into account the fact that for the more general
black holes, the $z_{\cdot a}$ are complex.
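For the real charges of equation~(\ref{defzs}), where the complex
conjugations are trivial, equation~(\ref{cubicu}) can be verified
symbolically; the following SymPy sketch (ours) does so, with the left-hand
sum running over pairwise-distinct $(\alpha,\beta,\gamma)$.
\begin{verbatim}
import itertools
import sympy as sp

Q = {(al, x): sp.Symbol('Q_%s%s' % (al, x), real=True)
     for al in '15RE' for x in 'abc'}

SIGNS = {1: (1, 1, 1, 1), 2: (1, -1, 1, -1),
         3: (1, 1, -1, -1), 4: (1, -1, -1, 1)}

def z(I, x):
    # "eigenvalues" of the central charge matrix, equation (defzs)
    return sum(s*Q[(al, x)] for s, al in zip(SIGNS[I], '15RE'))

lhs = sum(Q[(al, 'a')]*Q[(be, 'b')]*Q[(ga, 'c')]
          for al, be, ga in itertools.permutations('15RE', 3))

rhs = 0
for x, y, w in itertools.permutations('abc'):
    rhs += sp.Rational(1, 16)*z(1, x)*(z(1, y)*z(1, w)
           - sum(z(I, y)*z(I, w) for I in (2, 3, 4)))
    rhs += sp.Rational(1, 8)*z(2, x)*z(3, y)*z(4, w)

print(sp.expand(lhs - rhs))   # expected output: 0
\end{verbatim}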
Furthermore, if we set $\bbox{\sf Z}_a$ real and traceless, then we can
make contact with the real, traceless
five dimensional central charge matrix. Specifically,
in this case equation~(\ref{cubicu})
is equivalent (up to a proportionality constant)
to the 5-dimensional $E_{6(6)}$ symmetric invariant,
written in the form~\cite{cvetic}
$\sum_{I=1}^4 z_{Ia} z_{Ib} z_{Ic}$.
In other words, when we restrict to black holes with three charges (or
fewer), then we recover the five-dimensional U-duality invariant formula.
Of course, we still need to convert equation~(\ref{cubicu}) into
a ``U-duality'' invariant formula involving the charge vectors $q_{\Lambda a}$.
The formula won't be truly U-duality invariant because we are decomposing
$E_{7(7)} \supset SU(8) \supset SU(2) \times SU(6)$; it will
only be invariant under the subgroup.%
\footnote{
As the $SU(8)$ is the maximal compact subgroup of $E_{7(7)}$,
this is (almost) the maximal decomposition of $E_{7(7)}$ involving our $SU(2)$
factor. There is also a possible $U(1)$ factor; however, we have fixed the
$U(1)$ by demanding that $z_{1a}=l_p^2 m_a$, i.e.\ by fixing that $z_{1a}$ be
real.
}
The central charge matrix transforms linearly in the $\bbox{28} \oplus
\bbox{\overline{28}}$
of $SU(8)$;
under the above decomposition~\cite{pat,feretal},
\begin{equation} \label{decomp56}
\bbox{28} \oplus \bbox{\overline{28}} \rightarrow
(\bbox{1},\bbox{15})
\oplus (\bbox{1},\bbox{\overline{15}})
\oplus (\bbox{2},\bbox{6})
\oplus (\bbox{2},\bbox{\overline{6}})
\oplus (\bbox{1},\bbox{1})
\oplus (\bbox{1},\bbox{1})
.
\end{equation}
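(As a quick sanity check, the dimensions on the right-hand side indeed add up
to $56$, as the short computation below confirms.)
\begin{verbatim}
# dimension count of equation (decomp56):
# 28 + 28bar of SU(8) under SU(2) x SU(6)
dims = [1*15, 1*15, 2*6, 2*6, 1*1, 1*1]
assert sum(dims) == 56
\end{verbatim}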
Clearly, $z_{1a} \in (\bbox{1},\bbox{1})
$ and the other $z_{\cdot a} \in
(\bbox{1},\bbox{15})
$. So, the first term of equation~(\ref{cubicu})
is
\begin{mathletters} \label{justifyu}
\begin{equation} \label{firstjustu}
(\bbox{1},\bbox{1})
\otimes \{ [(\bbox{1},\bbox{1})]^2 +
(\bbox{1},\bbox{15})
\otimes (\bbox{1},\bbox{\overline{15}})
\},
\end{equation}
which indeed contains a singlet, as required.
The second term of equation~(\ref{cubicu}) is
\begin{equation} \label{secondjustu}
[(\bbox{1},\bbox{15})]^3 + [(\bbox{1},\bbox{\overline{15}})]^3,
\end{equation}
\end{mathletters}
and again
each term contains a singlet.%
\footnote{This follows since the $\bbox{15} \in SU(6)$ is an antisymmetric
product of two fundamentals. The antisymmetric product of six fundamentals
is clearly a singlet; this is the symmetric product of three~$\bbox{15}$s.
}
Note that this explains our choices of complex conjugation on
the right-hand side of equation~%
(\ref{cubicu}); any other polynomial choice that reduces to the left-%
hand side, and treats the $z_{\cdot a}$s and $z^*_{\cdot a}$s symmetrically%
\footnote{This is required since there is no invariant distinction between
the complex representations of $SU(6)$ and their complex conjugates.}%
,
would not be a singlet. Thus, we have arrived at
equation~(\ref{cubicu}) uniquely.
So, to write down a more invariant expression we decompose the
integer-valued $E_{7(7)}$ charge vector $q_{\Lambda a}$.
More precisely, since we were working with the
central charge matrix, which is moduli dependent, it is convenient
to raise the index using the matrix of moduli:
\begin{equation} \label{raiseq}
q^{\Lambda}_a \equiv ({\cal M}_\infty^{-\frac{1}{2}})^{\Lambda \Sigma}
q_{\Sigma a}.
\end{equation}
Then we can decompose $q^{\Lambda}_a$ as
$\{m_a, q^{A}_a, q^{\bar{A}}_a,\ldots\}$ where $m_a = l_p^{-2} z_{1 a}$
has been used for the $(\bbox{1},\bbox{1})$;
the index $A,\bar{A}=1,\ldots,15$ labels respectively
the $\bbox{15},
\bbox{\overline{15}} \in SU(6)$;
and the ellipses denote the representations that have not been included.
Then, we finally have the U-duality invariant version of
equation~(\ref{result}).
\begin{eqnarray} \label{uresult}
S_{\mbox{\scriptsize eff}} & = &
\int dt \left\{ - \sum_a m_a
+\frac{1}{2} \sum_a m_a \vec{v}_a^2
+\frac{1}{2}
\sum_{a<b} (l_p^2 m_a m_b - q_{\Lambda a} ({\cal M}_\infty^{-1})^{
\Lambda \Sigma}
q_{\Sigma b}) \frac{|\vec{v}_a - \vec{v}_b|^2}{
r_{ab}} \right. \nonumber \\
& & \left.
+\frac{3}{32} \sum_{a<b} \sum_c \left[l_p^4 m_a m_b m_c - \frac{l_p^2}{6}
(m_a q^{A}_{b} \delta_{A \bar{A}} q^{\bar{A}}_{c}
+ \mbox{5 perms})
+ l_p d_{(6)ABC}
q^{A}_{a} q^{B}_{b} q^{C}_{c} \right. \right. \nonumber \\
& & \left. \left. \hspace{1.5cm}
+ l_p d^*_{(6)\bar{A}\bar{B}\bar{C}}
q^{\bar{A}}_a q^{\bar{B}}_b q^{\bar{C}}_c \right]
|\vec{v}_a-\vec{v}_b|^2
\left[ \frac{1}{r_{ab} r_{ac}} +
\frac{1}{r_{ab} r_{bc}} - \frac{1}{r_{ac}r_{bc}} \right] \right.
\nonumber \\
& & \left.
+ \frac{l_p^2}{4} \sum_{a<b} \sum_{c,d} d^{\Lambda \Sigma \Gamma \Pi}
q_{\Lambda a} q_{\Sigma b} q_{\Gamma c} q_{\Pi d}
|\vec{v}_a - \vec{v}_b|^2 \int d^3x \frac{\vec{r}_a \cdot
\vec{r}_b}{4 \pi r_a^3 r_b^3 r_c r_d} \right\}.
\end{eqnarray}
Here, $d_{(6)ABC}$ is proportional to the symmetric cubic invariant for the
$\bbox{15}
\in SU(6)$, and $d^{\Lambda \Sigma \Gamma \Pi}$ is proportional to
the $E_{7(7)}$
cubic invariant.
Two final comments are required regarding the decomposition $E_{7(7)}
\supset SU(8) \supset SU(2) \times SU(6)$.
First, it appears that we have assumed that the central charge
matrices for the black holes can be simultaneously diagonalized (in
the sense of equation~(\ref{ccharge})).
However, all we really
need to assume is that they can be simultaneously block-diagonalized
into $SU(2)$ and $SU(6)$ subgroups; our final expression, equation~%
(\ref{uresult}) is $SU(6)$ invariant and so does not require that the
matrices be diagonal. That the block-diagonalization is possible is simply
the statement that the black holes preserve a common supersymmetry.
Incidentally, the block-diagonalization implies that the charges that
transform in the $(\bbox{2},\bbox{6})$ representation of $SU(2)\times SU(6)$
vanish; this is why there is no $(\bbox{1},\bbox{1})\otimes(\bbox{2},
\bbox{6})\otimes (\bbox{2},\bbox{\overline{6}})$ term in equation~%
(\ref{cubicu}) or~(\ref{uresult}).
Second, it
was implicitly assumed in the discussion
that the solution preserves exactly
$\frac{1}{8}$ of the supersymmetry. If the solution preserves more
supersymmetry---i.e. if more than one $z_{\cdot a} = l_p^2 m_a$---then
there is no longer a natural $SU(2) \subset SU(8)$ but rather a larger
subgroup that is selected. Nevertheless, it is easy from equation~%
(\ref{cubicu}) to see that no matter
how one chooses the $SU(2) \subset G$ (where, for a solution
preserving $\frac{1}{4}$ of the supersymmetry, $G=SU(4)$, for example)
one obtains the same answer for the cubic, namely zero, so there is no
ambiguity when there is more supersymmetry.
We now claim that equation~(\ref{uresult}), which was really only
derived for the special case of black holes with four charges, holds
for general (e.g.\ five charge) supersymmetric black hole configurations.
The forms of the two-point, three-point and four-point functions
have already been fixed uniquely by equation~(\ref{result}) and
the duality symmetries%
, so the only possible modification with the
additional charges, is the appearance of higher-point functions. Since
these higher-point functions vanish when only four charges are
non-zero, they must be proportional to all five separate charges, and hence
cannot be made out of invariants of order less than five. But the group
theory that led us to the ``U-duality'' invariant form for the cubic term
shows us that there are no such candidate invariants: all we can
work with
is the $(\bbox{1},\bbox{15})$
and its complex conjugate, and since
cubing one gives a singlet, and multiplying one by the other gives a singlet
we can never get a higher-order invariant. This leaves
the $E_{7(7)}$ invariants, and the only symmetric one is the quartic.
Thus, equation~(\ref{uresult}) is the general result.
\section{Discussion} \label{conc}
We have given the effective action to ${\cal O}(\vec{v}^2)$
for scattering of an arbitrary number of charged supersymmetric
four dimensional black holes. The U-duality invariant form required a
technical discussion of the
natural $SU(2)\times SU(6)$ decomposition of the four-dimensional
U-duality group $E_{7(7)}$. We would now like to give a slightly
more detailed discussion of the asymptotically flat
moduli space for two black holes. From equation~%
(\ref{uresult}), the moduli space is
\begin{mathletters} \label{mod2bh}
\begin{eqnarray}
\label{mod2bhmetric}
ds^2 &=& \frac{1}{2} f(\vec{r}) (dr^2 + r^2 d\Omega^2),
\end{eqnarray}
where
\begin{eqnarray}
\label{mod2bhf}
f(\vec{r}) &=& \mu + \frac{\Gamma_{II}}{r} + \frac{\Gamma_{III}}{r^2} +
\frac{\Gamma_{IV}}{r^3}, \\
\label{mod2bhg2}
\Gamma_{II} &=& l_p^2 M \mu - q_{\Lambda 1} ({\cal M}_\infty^{-1})^%
{\Lambda \Sigma} q_{\Sigma 2}, \\
\label{mod2bhg3}
\Gamma_{III} &=& \frac{3}{16} \left\{l_p^4 M^2 \mu
+ l_p d_{(6)ABC} q^A_1 q^B_2 (q^C_1 + q^C_2)
+ l_p d^*_{(6)\bar{A}\bar{B}\bar{C}} q^{\bar{A}}_1 q^{\bar{B}}_2
(q^{\bar{C}}_1 + q^{\bar{C}}_2)
\right. \nonumber \\ & & \left.
- \frac{l_p^2}{3} \left( M q^A_1 \delta_{A\bar{A}} q^{\bar{A}}_2
+ M q^A_2 \delta_{A\bar{A}} q^{\bar{A}}_1
+ m_1 q^A_2 \delta_{A\bar{A}} q^{\bar{A}}_2
+ m_2 q^A_1 \delta_{A\bar{A}} q^{\bar{A}}_1 \right) \right\}, \\
\label{mod2bhg4}
\Gamma_{IV} &=& \frac{l_p^2}{6} d^{\Lambda \Sigma \Gamma \Pi}
q_{\Lambda 1} q_{\Sigma 2} \left(q_{\Gamma 1} q_{\Pi 1} + q_{\Gamma 2}
q_{\Pi 2} \right)
\end{eqnarray}
and the total mass (reduced mass) is $M=m_1+m_2$
($\mu=\frac{m_1 m_2}{m_1+m_2}$); also the relative coordinate is $\vec{r}=
\vec{r_2}-\vec{r_1}$. In equation~(\ref{mod2bhmetric}), we have subtracted
away the centre of mass motion. We have also omitted the term
\begin{equation} \label{omitmod2bhf}
\frac{\pi^3 l_p^4}{4} d^{\Lambda \Sigma \Gamma \Pi}
q_{\Lambda 1} q_{\Sigma 1} q_{\Gamma 2} q_{\Pi 2} \delta^{(3)}(\vec{r}),
\end{equation}
\end{mathletters}
from equation~(\ref{mod2bhf}) because this contact interaction
only occurs at $r=0$ by which point
the moduli space approximation has presumably broken down.
We recall that the moduli space approximation is only valid for small
velocities
and neglects radiation; in particular, in~\cite{fe} it was argued that the
moduli
space approximation breaks down for $r \lesssim v_\infty^2 M$.
The coordinate singularity at $r=0$ of equation~%
(\ref{mod2bhmetric}) is removed, when $\Gamma_{IV} \neq 0$,
by performing the coordinate
transformation $\xi=-2 \frac{\alpha'^{\frac{3}{4}}}{\sqrt{r}}$.
One then finds that as $r \rightarrow 0$, ($\xi \rightarrow \infty$),
there is a second asymptotic region that is conical with deficit angle~%
$\pi$~\cite{fe}. For $\Gamma_{IV} = 0$ but $\Gamma_{III} \neq 0$,
the coordinate singularity at
$r=0$ is removed by the coordinate transformation $\xi = \ln \frac{r}{\sqrt{%
\alpha'}}$
to find that the
asymptotic region has topology $\relax{\rm I\kern-.18em R} \times S^2$.
If both $\Gamma_{IV} = 0$ and $\Gamma_{III} = 0$, then one removes the
$r=0$ coordinate singularity via $\xi = \sqrt{\sqrt{\alpha'} r}$ to again find
a second asymptotic region that is conical with deficit
angle~$\pi$~\cite{shir}.
Thus, geodesics which extend to $r=0$ enter this second
asymptotic region; i.e.\ the black holes coalesce~\cite{fe,shir,dps}.
(It might seem strange that, having just rejected the contact interaction
for being at $r=0$, we are exploring the $r \rightarrow 0$ behaviour.
The point is that we can use equations~(\ref{mod2bh}), even as
$r \rightarrow 0$ (but $r \neq 0$), by taking
$v_\infty \rightarrow 0$.)
By examining
the geodesic equation as in~\cite{dps}, we find that the
turning point $r_c$ in the black hole motion is real and positive when
the impact parameter $b$ obeys
\begin{eqnarray} \label{bc}
b^2 &>& b_c^2 \equiv \mbox{\small $- \frac{\Gamma_{II}^2}{12 \mu^2} +
\frac{\Gamma_{III}}{\mu}
$}\nonumber \\ && \mbox{\small $
+ \frac{{\Gamma_{II}^4 + 18\,\Gamma_{II}\,\Gamma_{IV}\,{{\mu}^2}}}{
{{12 \left( -\left( {\Gamma_{II}^6}\,{{\mu}^6}
\right) +
540\,{\Gamma_{II}^3}\,\Gamma_{IV}\,
{{\mu}^8} +
24\,\left( 243\,{\Gamma_{IV}^2}\,
{{\mu}^{10}} +
{\sqrt{3}}\,
{\sqrt{\Gamma_{IV}\,{{\mu}^{14}}\,
{{\left( -{\Gamma_{II}^3} +
27\,\Gamma_{IV}\,{{\mu}^2} \right) }^3
}}} \right) \right) }^{{\frac{1}{3}}}}}
$} \\ && \mbox{\small $
+ \frac{1}{12}
{{\left( -\frac{ {\Gamma_{II}^6}}{\mu^6} + 540\,
\frac{{\Gamma_{II}^3}\,
\Gamma_{IV}}{{\mu}^4} +
24\,\left( 243\,\frac{\Gamma_{IV}^2}{\mu^2} +
{\sqrt{3}}\,{\sqrt{\frac{\Gamma_{IV}}{{\mu}^{10}}\,
{{\left( -{\Gamma_{II}^3} +
27\,\Gamma_{IV}\,{{\mu}^2} \right) }^3
}}} \right) \right) }^{{\frac{1}{3}}}}$}\nonumber.
\end{eqnarray}
In particular there is coalescence for $b \leq b_c$. Na\"{\i}vely,
$b_c$ is only well-defined if either $\Gamma_{IV}=0$ or $\Gamma_{II}^3
\leq 27 \Gamma_{IV} \mu^2$;
in fact, one must merely be careful
about how one chooses the cube roots in equation~(\ref{bc}).
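A transparent way to evaluate $b_c$ numerically is to note that, assuming
(as the form of equation~(\ref{bc}) and the special cases discussed below
suggest) that $b_c^2$ is the minimum over $r>0$ of $r^2 f(r)/\mu$, one can
simply minimize; the following Python sketch (ours, with placeholder values
for the $\Gamma$s) implements this, and reproduces
$b_c^2=\Gamma_{III}/\mu$ when $\Gamma_{IV}=0$.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

mu, GII, GIII, GIV = 1.0, 0.75, 0.375, 0.1   # placeholder Gammas

def b_squared(r):
    # r^2 f(r)/mu; its minimum over r > 0 gives b_c^2
    return r**2 + GII*r/mu + GIII/mu + GIV/(mu*r)

res = minimize_scalar(b_squared, bounds=(1e-9, 1e3), method='bounded')
print('b_c =', np.sqrt(res.fun))
\end{verbatim}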
As equation~(\ref{bc}) is rather obscure, we point out some special cases.
If we consider the black holes of section~\ref{construct}, with
$Q_{1a} = Q_{5a} = Q_{Ra} = Q_{Ea} = l_p^2 \frac{m_a}{4}$, then we have the
Reissner-Nordstr\"{o}m black holes of~\cite{fe}%
\footnote{Equation~(\ref{bc}) corrects the polynomial equation
for $b_c$ that was given in the reference.}
for which the right-hand side of equation~(\ref{bc}) is real and positive,
and so there can be coalescence. Note that this is despite the
fact that for Reissner-Nordstr\"{o}m black holes,
$27 \Gamma_{IV} \mu^2 - \Gamma_{II}^3 < 0$.
For two Reissner-Nordstr\"{o}m black
holes of equal mass ($m_1 = m_2$), $b_c \approx 2.3660 \, l_p^2
\frac{M}{4}$, in agreement with~\cite{fe}.
If the black holes only carry three charges, then $\Gamma_{IV} = 0$;
we find $b_c = \sqrt{\frac{\Gamma_{III}}{\mu}}$.%
\footnote{To obtain this from equation~(\ref{bc}) requires
choosing $(-1)^\frac{1}{3} = \frac{1}{2} \pm i \frac{\sqrt{3}}{2}$.}
In fact, if the black holes carry fewer than three charges, so that also
$\Gamma_{III}=0$, then there is never coalescence (except for the obvious
case $b=0$); this is in agreement with the results of~\cite{shir}.
This is different from higher dimensions, where there is always a
critical, non-zero impact parameter below which there is coalescence~%
\cite{shir,us}.
\acknowledgments
I thank Vijay Balasubramanian, Eric Gimon, Gary Horowitz, John Pierre,
Joe Polchinski, Andy Strominger and Haisong Yang
for useful discussions. I also thank David Kaplan for
collaboration on an earlier, related paper.
I am grateful to Harald H. Soleng for making~\cite{cartan}
available, which was very useful in the computation.
The hospitality of the Physics Department at Harvard University is appreciated.
Financial support from NSERC and NSF is gratefully acknowledged;
this work was also supported in part by DOE Grant No. DOE-91ER40618.
\section*{Introduction}
Microbial Fuel Cells (MFCs) are devices that produce electricity from waste-water by utilizing microbial metabolic oxidation processes \cite{ieropoulos2013energy}. A MFC consists of a proton exchange membrane (PEM), an anode (negative half-cell) and a cathode (positive half-cell). Electricity is generated as a by-product of microbial metabolism, which results in electrons flowing from the bacterial cells onto the anode electrode, and then from the anode to the cathode electrode via an external electrical circuit; this produces a flow of electrical current. Positively charged ions that are products of the oxidation reactions, such as protons, also flow out of the bacterial cells, due to electroneutrality, and diffuse to the cathode through the PEM. Electrons, cations and the cathodic oxidizing agent of choice (e.g. oxygen) recombine to complete the reaction and close the circuit. Despite the fact that MFC technology has been the subject of research for at least three decades, real-world implementation or commercialization is still limited \cite{cheng2011electricity,santoro2015cathode,ortiz2016study,mardanpour2012single,ledezma2013mfc,chouler2016towards}.
In addition to the utilization of MFCs in waste-water treatment and power production, their usage in other fields has been suggested, such as sensing applications \cite{kumlanghan2007microbial,abrevaya2015analytical}. Nonetheless, an application that is rarely studied is the use of MFC-based configurations to realize computing units \cite{greenman2006microbial,greenman2006perfusion}. More specifically, the first approach \cite{greenman2006microbial} was to reproduce conventional binary logic gates using MFCs in order to examine the abilities of such a system, with the ultimate goal, envisaged by the same authors in \cite{greenman2006microbial}, of constructing non-silicon multi-valued logic processing units. In that study, designs of hydraulic and electrical interconnections are suggested for building three basic logic gates (AND, OR and NOT) that can be combined to assemble universal gates, hence circuits capable of universal computation.
Apart from basic logic gates, more complicated computational abilities were exhibited with the appropriate interconnection of a small number of MFCs, namely a simplified Pavlovian learning model \cite{greenman2006perfusion}. In this study, the symbiotic mix of natural biological cells, such as the anodophiles, and artificial systems, like electrodes, actuators, pumps and chemical solutions, was used to simulate a learning cycle. In detail, two signals representing the smell of food --- which activates the production of saliva --- and hearing the sound of a bell --- which does not independently activate the production of saliva --- were associated, in order for the sound of the bell to trigger the production of saliva by itself, i.e. automatically.
Despite the seemingly simplistic, binary computational tasks performed in both of the aforementioned studies \cite{greenman2006microbial,greenman2006perfusion}, the authors make notice of the high number of states that could be realized by MFC devices, in order to enable complex computing. In this respect, a configuration of MFCs is proposed here to mimic the computation dynamics of Cellular Automata (CAs). A novel development of MFCs, which includes the introduction of additional electrodes acting as poise/bias points --- or `pins' --- has been recently proposed [Patent filing number: GB1501570.4]. The invention introduces the principle of electrochemical redox (reduction--oxidation) bias via a third and/or fourth electrode, when an external power supply, or another MFC, is connected to this third or fourth pin and to the working anode or cathode electrode, respectively. This is, by default, an unconventional means of connection, since the potential difference (voltage) of the external MFC (also known as `driver') can bias the redox potential difference (voltage) of the anode or cathode half-cell, depending of course on how the connection is made. Such a system is naturally subject to polarization and is therefore limited to a time constant ($t$), based on materials, voltage levels and oxidation/reduction states. Consequently, the system is an ideal platform for pulse-width-modulation techniques. For the purposes of the current study, a third electrode is used to achieve poise and, thus, to have two MFCs behave like a CA cell.
CAs can be considered an idealization of a physical system in which space and time are discrete, and the physical quantities take only a finite set of values \cite{chopard2009cellular,mizas2008reconstruction}. A CA comprises identical cells in a regular grid that are characterized by their state. The state of each cell is updated by a uniform local rule depending on the states of the cells in its vicinity. CAs can conceptually be identified as \textit{general} and \textit{simple} \cite{sipper1995quasi}. The term \textit{general} refers to the fact that CAs can support universal computation and that the states and local updating rules of the cells are not limited to specific regulations. Moreover, the term \textit{simple} is justified by the plain outline of CAs --- cells are characterized by basic states with local interactions --- compared with other computing machines. Finally, CAs can be considered one of the most promising candidates for future computational architectures, tackling the bottleneck of the von Neumann architecture through the co-existence of computing and memory units in the same simple unit, the CA cell.
Specifically, a well known CA model, namely Conway's Game of Life (GoL), is studied, which encapsulates the ability of universal computation and construction \cite{adamatzky2010game}. However, the realization of GoL here does not limit the capabilities of possible configurations consisting of hydraulically and electrically linked MFCs. In fact, based on the local activity \cite{chua1998cnn} exhibited by MFCs, any CA local rule can be implemented in similar configurations. In addition to the advantages of energy independence and water purification offered by the proposed computational scheme when compared with other transducers of renewable energy sources, MFCs integrate both energy extracting mechanisms and computational units. Moreover, the ongoing miniaturization of MFCs \cite{chouler2016towards} will enable the production of smaller biological processing units. Finally, the amount of physicochemical parameters that can be externally manipulated and affect the performance and, thus, the outputs of the biofilms in MFCs is enormous \cite{greenman2006perfusion}. This fact can justify the utilization of MFCs for more complex computational schemes than the ones suggested up to this date.
\section*{Game of Life}
A Cellular Automaton (CA) consists of a regular grid of cells. Each cell can take one of $k$ different states, where $k \geq 2$, but only one at any given time. The grid can be $n$-dimensional ($n \geq 1$). The evolution of the cells takes place at discrete points in time. That means that the state of each cell in the grid changes only at discrete moments of time, namely at time steps $t$. The time step $t = 0$ is usually considered the initial step, at which no changes of the states of the cells occur.
For each cell, a set of cells called its neighborhood (usually including the cell itself) is defined relative to the specified cell. Regarding two-dimensional CAs, the two types of neighborhood most commonly considered are:
\begin{itemize}
\item \textit{von Neumann} neighborhood, which consists of the central cell, whose state is to be updated, and the four cells located to the north, south, east and west of the central cell
\item \textit{Moore} neighborhood, which consists of the same cells as the von Neumann neighborhood together with the four other cells adjacent to the central cell (the northwestern, northeastern, southeastern and southwestern cells).
\end{itemize}
The evolution of the cells demands the definition of a cell state, the neighboring cells as well as the local transition function:
\begin{itemize}
\item The local internal state of each cell of the CA:
\begin{equation}
C ( \vec{r},t) = \{ C_{1}( \vec{r},t), C_{2}( \vec{r},t),..., C_{m}( \vec{r},t)\}
\label{theory_state}
\end{equation}
\noindent at time step $t=0,1,2,\ldots$ is described by a set of variables associated with each position $\vec{r}$ of the grid.
\item The local transition function is defined as:
\begin{equation}
R=\{R_{1},R_{2},...,R_{m}\}
\label{theory_function}
\end{equation}
\noindent and determines the evolution during time of the internal state of each cell according to the following equation:
\begin{equation}
C_{j}( \vec{r},t+1) =R_{j} \left( C( \vec{r},t), C( \vec{r}+\vec{\delta}_{1},t),\ldots, C( \vec{r}+\vec{\delta}_{m},t) \right)
\label{theory_evolution}
\end{equation}
\noindent where $\vec{r}+\vec{\delta}_{k}$ designate the cells which belong to a given neighborhood of cell $\vec{r}$.
\end{itemize}
The state of cell $\vec{r}$, at time step ($t+1$), is computed according to $R$. $R$ is a function of the state of this cell at time step ($t$) and the states of the cells in its neighborhood at time step ($t$). In the above definition, the function $R$ is identical for all sites and it is applied simultaneously to each of them, leading to synchronous dynamics. It is important to notice that the rule is homogeneous, i.e. it does not depend explicitly on the cell position $\vec{r}$. However, spatial inhomogeneities can be introduced by having some cells' states $C_{j}(\vec{r})$ systematically at a fixed value, e.g. 1, in some given locations of the lattice, to mark particular cells for which a different rule applies. Furthermore, the new state at time $t+1$ is only a function of the previous state at time $t$. It is sometimes necessary to have a longer memory and introduce a dependence on the states at times $t-1$, $t-2$, \ldots, $t-k$. Such a situation is already included in the definition, if one keeps a copy of the previous state in the current state.
Conway's Game of Life (GoL) is a two-dimensional CA with binary states \cite{conway1970game} that has significantly contributed to the extensive attention CA theory has gained. The neighborhood considered is the \textit{Moore} neighborhood and the two states that each cell can adopt are \textit{alive} and \textit{dead} (or `1' and `0', respectively). The local transition rule uses the states of all nine cells in the neighborhood, during the directly preceding time step, to determine the new state of the central cell in the neighborhood. More specifically, the following transitions between states can occur:
\begin{enumerate}
\item When a cell is \textit{dead} at time \textit{t} and precisely three of the eight neighbors are \textit{alive}, the cell adopts the state \textit{alive} at time \textit{t}+1.
\item When a cell is \textit{alive} at time \textit{t} and none, one or more than three of the eight neighbors are \textit{alive}, the cell adopts the state \textit{dead} at time \textit{t}+1.
\end{enumerate}
Note that if neither of the two aforementioned cases is true, the local rule dictates that the cell retains its previous state. Denoting by $i$ and $j$ the dimension indices that establish the location of each cell in the grid and by $t$ the current time step, the transition rule can be expressed as:
\begin{equation}
C_{i,j}^{t+1}= \begin{cases}
0, & \text{if } (\sum_{k=-1,l=-1}^{k=1,l=1} C_{i+k,j+l}^{t})-C_{i,j}^{t} \leq 1 \text{ or} \\
& (\sum_{k=-1,l=-1}^{k=1,l=1} C_{i+k,j+l}^{t})-C_{i,j}^{t} \geq 4 \\
1, & \text{if } (\sum_{k=-1,l=-1}^{k=1,l=1} C_{i+k,j+l}^{t})-C_{i,j}^{t} = 3\\
C_{i,j}^{t} & \text{else }
\end{cases}
\label{golrule}
\end{equation}
Note that the \textit{Moore} neighborhood of cell $C_{i,j}$ consists of cells $C_{i+1,j}$, $C_{i,j+1}$, $C_{i-1,j}$, $C_{i,j-1}$, $C_{i+1,j+1}$, $C_{i+1,j-1}$, $C_{i-1,j+1}$, $C_{i-1,j-1}$ and the cell itself.
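As an illustration, the transition rule of equation~(\ref{golrule}) can be
implemented in a few lines; the following Python sketch (ours) assumes
periodic boundary conditions.
\begin{verbatim}
import numpy as np

def gol_step(C):
    # sum over the Moore neighbourhood minus the central cell
    S = sum(np.roll(np.roll(C, k, 0), l, 1)
            for k in (-1, 0, 1) for l in (-1, 0, 1)) - C
    nxt = C.copy()
    nxt[(S <= 1) | (S >= 4)] = 0   # under/over-population
    nxt[S == 3] = 1                # birth or survival
    return nxt                     # cells with S == 2 are unchanged

# example: a glider on an 8x8 grid
C = np.zeros((8, 8), dtype=int)
C[1, 2] = C[2, 3] = C[3, 1] = C[3, 2] = C[3, 3] = 1
print(gol_step(C))
\end{verbatim}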
It is suggested that the inherent complexity of GoL is due to the fact that its transition rule is non-monotonic and nonlinear \cite{adamatzky2010game,chua1998cnn,Rendell2002}. Moreover, the aforementioned rule is characterized as an outer totalistic rule, given that it depends only on the value of the central cell during the last time step and on the sum of the values of the cells in the outer \textit{Moore} neighborhood.
A continuous version of GoL, namely a discrete-time, continuous spatial automaton with the same behavior as GoL, has been also proposed \cite{maclennan1990continuous}. In continuous spatial automata, the cells and their states form a continuum. A behavior similar to GoL can be realized in a continuous field with a local rule implemented by a non-monotonic function of the population density in the neighborhood. Representing this function graphically in two dimensions, with the input (the population density) on the $x$-axis and the output (the next state of the cell) on the $y$-axis, and keeping in mind the rules of the original GoL, the continuous local rule function has to be monotonically increasing on the interval $(0,m)$ and monotonically decreasing on the interval $(m,1)$, for instance an inverted parabola; here $m$ is a value in the interval $(0,1)$, while the minimum and maximum values of the population density are $0$ and $1$, respectively. Note that the value $m=3/8$ provides the closest analogy to the standard, binary GoL.
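For concreteness, one arbitrary choice of such a non-monotonic local rule ---
a tent-shaped profile satisfying the stated monotonicity requirements, not
the specific function of \cite{maclennan1990continuous} --- is sketched
below.
\begin{verbatim}
def continuous_rule(density, m=3.0/8.0):
    # increasing on (0, m), decreasing on (m, 1), peak value 1 at m
    if density <= m:
        return density / m
    return (1.0 - density) / (1.0 - m)
\end{verbatim}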
Another study \cite{adachi2004game} has proposed a CA model with a local rule based on a continuously valued expression with three parameters. That model matches GoL when one of these parameters, namely the one defined as temperature $T$, is approximately zero and the other two have appropriate values. The upper limits of the temperature parameter were investigated, beyond which formations of GoL, like gliders, start to decay. Nonetheless, it is suggested that for higher values of the $T$ parameter, the model's behavior is increasingly biased towards chaos \cite{adamatzky2010game}.
Furthermore, the realization of GoL by Cellular Neural Networks (CNNs) has been suggested \cite{274337}. In that study, a two-layer single-step CNN template, a multi-step three-layer discrete-time CNN template with a threshold sigmoid, and a multi-step piecewise-linear discrete-time CNN template have been presented.
\section*{Proposed implementation}
Drawing inspiration from an implementation using MFCs to build logic gates \cite{greenman2006microbial}, we propose a new implementation to execute a popular example of CA, namely the aforementioned GoL. The instance of GoL is selected here as it is an excellent example of the fact that complex behaviors emerge from trivial local interactions of simple agents. Nonetheless, the application of the GoL rules to realize functions that can be translated as global computations \cite{adamatzky2010game} can define the limitations of the proposed configuration. The functionality of a MFC is affected by the voltage applied on the third electrode (see Fig. \ref{fig1}A) of the device, which introduces an electrochemical redox bias.
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig1A.pdf}
\caption{}
\label{fig1a}
\end{subfigure}
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{fig1B.pdf}
\caption{}
\label{fig1b}
\end{subfigure}
\caption{{\bf Schematic of the configuration of MFCs implementing a GoL CA cell.} A: Inputs and outputs of one MFC. B: The configuration of a MFC duet.}
\label{fig1}
\end{figure}
The scheme realizing one cell of the GoL CA consists of two MFCs, one primary and one secondary. The two MFCs are both hydraulically and electrically connected as shown in Fig. \ref{fig1}B. The secondary MFC is fed by a main/initial source of fuel, while the primary is fed by the effluent of the secondary. Both operate under continuous flow conditions. MFCs stacked in cascades have been shown to produce higher power and current densities when positioned higher up the cascade, thus fed directly by the fuel source, than when placed lower downstream \cite{ledezma2013mfc,walter2015microbial}. Consequently, setting a fuel source that provides a balanced but limited substrate concentration will enable only one of the MFCs to function.
The selection of which of the two MFCs will be functional is a result of their electrical interconnection. Both MFCs are equipped with a third electrode, independently. These electrodes are connected, via different resistances, with a single pin representing the electrical input of the CA cell implementation. Note that the different values of resistance separate the operation of the MFC GoL cell into three regions depending on the input voltage applied as explained below. The output power of the primary MFC is used to describe the state of the CA cell, i.e. its output.
The secondary MFC acts as a control unit, through the hydraulic link, for the primary one. The main fuel source is considered to provide a solution with limited carbon-energy, so that only one of the two MFCs will be able to fully process it with its anodophilic biofilm in order to produce electricity. When an appropriate bias is applied to the input port of the CA cell, and thus to the third electrode of the secondary MFC, the processes in the biofilm of the anode of the secondary MFC will be activated. Consequently, the biofilm will utilize the nutrients from the source, resulting in an effluent depleted of carbon-energy, which is used as the influent for the primary MFC. This means that the primary MFC will be unable to produce electrical energy and its power output will be low; hence the state of the proposed CA cell will be `0' (see Fig. \ref{fig2}C).
\begin{figure}[!tb]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{fig2A.pdf}
\caption{}
\label{fig2a}
\end{subfigure}
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{fig2B.pdf}
\caption{}
\label{fig2b}
\end{subfigure} \\
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{fig2C.pdf}
\caption{}
\label{fig2c}
\end{subfigure}
\caption{{\bf The three operation regions of the MFC-based scheme.} A: Low input bias resulting in state `0'. B: Intermediate input bias resulting in state `1'. C: High input bias resulting in state `0'.}
\label{fig2}
\end{figure}
On the other hand, when the bias applied to the third electrode of both MFCs is low enough (or zero), neither of them will be activated. Consequently, the power produced by the primary MFC will be low and the state of the proposed CA cell will be `0' (see Fig. \ref{fig2}A).
Finally, an intermediate value of bias applied to the CA cell input, i.e. to both MFCs' third electrodes, will allow the primary MFC to produce electricity. Provided a design with appropriately chosen values of the resistances connected to the third electrodes, the voltage drop over the resistance connected to the secondary MFC will be sufficiently large that the remaining bias cannot activate it and, thus, cannot cause the depletion of carbon-energy in its effluent. Note that the values of the resistances should satisfy $R_1 > R_2$, in order to set voltage drops that will activate only the primary MFC. The voltage drop over the resistance connected to the primary MFC will be small enough to allow its activation, and given that its influent will be carbon-energy replete (i.e. rich in metabolites), it will produce electrical current (state `1') (see Fig. \ref{fig2}B).
The power output ($P$) of each CA cell for the three regions of operation is expressed in relation to the applied input bias ($V$) as in Eq. \ref{eq1}. Note that a time index is included in the equation to conform with CA terminology. This index has a real physical analogue in the transient response required to establish a new steady state after the conditions applied to the biofilm change; that transient response has been identified as approximately four minutes \cite{greenman2006microbial}.
\begin{equation} \label{eq1}
P_{out}^{t+1} = \begin{cases}
P_{low}, & \text{for } V_{in}^{t} \leq V_{thr\_low}\\
P_{high}, & \text{for } V_{thr\_low} < V_{in}^{t} < V_{thr\_high}\\
P_{low}, & \text{for } V_{thr\_high} \leq V_{in}^{t}
\end{cases}
\end{equation}
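As a sketch, the three-region characteristic of Eq. \ref{eq1} can be written as follows, with the thresholds set to the 2 V and 7 V values of the equivalent circuit in Fig. \ref{fig4}B and the power levels as placeholders:
\begin{verbatim}
def mfc_cell_output(v_in, v_low=2.0, v_high=7.0, p_low=0.0, p_high=1.0):
    """Steady-state power of the MFC duet one transition time after
    the input bias v_in is applied (cf. Eq. 1)."""
    if v_low < v_in < v_high:    # intermediate bias: primary MFC active
        return p_high            # state '1'
    return p_low                 # under- or over-biased: state '0'
\end{verbatim}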
Each CA cell, realized by a duet of MFCs, is connected with its eight neighbors, as illustrated in Fig. \ref{fig3}, to form the \textit{Moore} neighborhood used in GoL. The current produced by each primary MFC is used to convey the information about the central cell's state to all of its neighbors.
\begin{figure}[!tb]
\centering
\includegraphics[width=0.5\textwidth]{fig3.pdf}
\caption{{\bf Electrical interconnection of neighboring CA cells.}}
\label{fig3}
\end{figure}
\section*{Electrical Circuit Equivalent}
An electrical circuit that has an equivalent behavior to the MFC configuration presented in the previous Section and, thus, implements the GoL cell state transition rule, is presented in Fig. \ref{fig4}A. However, there are significant differences. Firstly, the MFC logic can combine hydraulic and electrical links; thus, it can accommodate more complex computations with the same number of basic building units. As mentioned previously, there is an enormous number of physicochemical parameters that affect biofilms in MFCs. As a result, a MFC that is utilized as a computing unit has, apart from the electrical pins, also hydraulic inputs that affect its functionality. Note that there is no possible connection of two transistors that acts as an electrical counterpart of the proposed MFC duet configuration and, thus, in Fig. \ref{fig4}A three transistors are used. Moreover, MFC schemes do not require an external power supply, but a fuel source; a fuel that is inexpensive and abundant. Consequently, the proposed computing configurations, instead of consuming energy, are able to produce electrical energy, the level of which will depend on the requirements of the desired task; in other words, a more complicated computational task, requiring a higher number of MFCs, will naturally generate more power. In contrast, the transition response is faster in the electrical circuit.
\begin{figure}[!tb]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig4A.pdf}
\caption{}
\label{fig4a}
\end{subfigure}
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{fig4B.pdf}
\caption{}
\label{fig4b}
\end{subfigure}
\caption{{\bf Equivalent electrical circuit to the proposed MFC configuration.} A: Schematic of the equivalent circuit. B: Output of equivalent circuit.}
\label{fig4}
\end{figure}
The circuit consists of three transistors in total. Transistors \texttt{Q1}-\texttt{Q2} ensure a high output when the input voltage is higher than a lower threshold (i.e. 2V as depicted in Fig. \ref{fig4}B), while \texttt{Q3} results in a high output when the input voltage is lower than a higher threshold (i.e. 7V as depicted in Fig. \ref{fig4}B). Also, the interconnection of the collectors of transistors \texttt{Q2} and \texttt{Q3} forms a hard-wired {\sc and} gate. The voltage output (blue line) of the circuit as a function of a sinusoidal voltage input (black line) is illustrated in Fig. \ref{fig4}B. The behavior of the circuit imitates the behavior of a continuous GoL cell \cite{maclennan1990continuous}.
The functionality of the equivalent circuit for the three different states is presented in Fig. \ref{fig5} and can be compared with the functionality of the MFC configuration presented in Fig. \ref{fig2}.
\begin{figure}[!tbp]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{fig5A.pdf}
\caption{}
\label{fig5a}
\end{subfigure}
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{fig5B.pdf}
\caption{}
\label{fig5b}
\end{subfigure} \\
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{fig5C.pdf}
\caption{}
\label{fig5c}
\end{subfigure}
\caption{{\bf The three operation regions of the equivalent circuit.} A: Low input current resulting in state `0'. B: Intermediate input current resulting in state `1'. C: High input current resulting in state `0'.}
\label{fig5}
\end{figure}
In order to illustrate the functionality of the equivalent circuit under the rules of GoL, a grid of $3 \times 3$ cells is designed using the LTspice software as shown in Fig. \ref{fig6}. Although the grid is small, it is used for demonstration purposes, and its simplicity enhances readability and detailed comprehension of the proposed electronic circuits. Designing larger grids, i.e. of $n \times n$ cells, is a trivial and effortless procedure, owing to the well-known inherent characteristics of CA, like local interconnections, simplicity, uniformity and efficient area utilization. The grid is initialized using transistors \texttt{Q1}, \texttt{Q2} and \texttt{Q3} to set the inputs of cells \texttt{X4}, \texttt{X5} and \texttt{X6} to the voltage needed to trigger the `1' state in the next time step. Note in the results depicted in Fig. \ref{fig7} that the outputs of the central cell (\texttt{X5}) and the west cell (\texttt{X4}) are high (state `1') at $t=2$\,ms. Also, note that the state of the central cell remains `1', while the states of the west (\texttt{X4}) and the north (\texttt{X2}) cells oscillate between the `1' and `0' states, never being both in the same state at the same time step. Each cell in the grid presented in Fig. \ref{fig6} is comprised of the circuit depicted in Fig. \ref{fig4}A and a circuit adding some time delay between its input and its output. The time delay circuit in this example adds $1$\,ms from the moment the input changes to the cell's response, in order to realize the GoL rules in a synchronous manner and avoid the loss of signals. This delay is inherent to the MFC configuration, as the transition time between states is reported to be consistent, around four minutes \cite{greenman2006microbial}.
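The blinker dynamics of Fig. \ref{fig7} can be cross-checked with the gol_step sketch given earlier; a $5 \times 5$ grid is used here so that the assumed toroidal wrap does not disturb the oscillation:
\begin{verbatim}
import numpy as np

grid = np.zeros((5, 5), dtype=int)
grid[2, 1:4] = 1             # horizontal triple (cf. cells X4, X5, X6)
for t in range(4):
    print("t =", t)
    print(grid)
    grid = gol_step(grid)    # alternates horizontal/vertical each step
\end{verbatim}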
\begin{figure}[!tbp]
\centering
\includegraphics[width=0.9\textwidth]{fig6.pdf}
\caption{{\bf A $3 \times 3$ grid of equivalent circuit cells.}}
\label{fig6}
\end{figure}
\begin{figure}[!tbp]
\centering
\includegraphics[width=0.7\textwidth]{fig7.pdf}
\caption{{\bf The results of the grid initialized to oscillate as a GoL blinker.}}
\label{fig7}
\end{figure}
\section*{Conclusions}
The capabilities of MFCs in water purification, extraction of useful elements, sensory applications and energy production have been thoroughly studied. Another proposed application for MFCs is the execution of computational functions, which has been expressed as the construction of conventional logic gates, a Pavlovian learning model and, in this study, an implementation of a CA paradigm, namely GoL.
The ability to interconnect MFCs via hydraulic and electrical links and the multiple states that can be adopted by each MFC make the possible computing configurations more complex than conventional ones. Moreover, the MFC computing units are not limited by a power source; on the contrary, they are powered by sustainable, diverse and abundant fuel sources. A disadvantage of these systems is the long transition times experienced between steady states, which can reach up to four minutes; however, they have been reported to be consistent.
Here the design of a duet of MFCs interconnected hydraulically and electrically to form a unit that behaves like a cell of GoL was proposed. Namely, the effluent of one MFC is used as the influent of the other, the third electrodes of both are connected with the electrical input of the cell, while the anode of one of the MFCs is used as the output of the cell. Given that the proposed configuration consisting of two MFCs has the same behavior as a GoL CA cell, the realization of universal computation is possible.
An aspect of future work is the implementation of different CA local rules with configurations of real interconnected MFCs. Furthermore, the possibility of using conventional computing machines for the initialization of the CA grid and the exploitation of the outputs will be investigated, to design and realize a hybrid computing system.
\section*{Acknowledgments}
This work was supported by the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No. 686585.
\nolinenumbers
\section{Introduction}
The nature of the "central engine" of active galactic nuclei (AGN) is still an open question. However,
usually it is assumed that the nuclear activity is caused by the accretion of matter on a super-massive black hole
\cite[SMBH, see][]{re84,be85}.
The radiation from the accretion disc ionizes the surrounding gas, which forms the so-called broad line region (BLR), located very close to the central SMBH (r$<$0.1 pc). The BLR is very compact (several tens to several hundreds of light days), i.e. the dimensions of the BLR correspond to approximately 10$^{-4}$ arcsec in the nearest AGN, which is a great challenge to resolve with the current largest telescopes; e.g. so far only the BLR of the quasar 3C 273 has been resolved, using the GRAVITY interferometer \citep[][]{st18}. Therefore, spectroscopy and/or spectro-polarimetry can give valuable information about the BLR structure in a larger number of AGN \citep[see][]{af18}.
Over the past 40 years, the variability of broad emission lines and continuum in the majority of AGN has been detected.
Already from the pioneering works in the seventies \citep[see][]{ch73,bo78}
it became clear that the intensities of the broad emission
lines in AGN change with a time delay of 1-3 weeks with respect to the continuum change.
The time delay depends on the time of passage of light through the
BLR and on the BLR geometry and kinematics \citep[][]{bo82,bl82}.
This can be used for investigations of the BLR structure and dimension,
i.e. by finding correlations between changes in the continuum and broad-line fluxes
it is possible to "map" the BLR structure. This method is the so-called reverberation mapping method \citep[see][and references therein]{pe93}.
Additionally, one can explore the BLR evolution by studying the variability of the broad emission lines on a long time scale, i.e. study
the changes in the BLR physics, kinematics and geometry as a function of time. Finally, the BLR is supposed to be close to the
SMBH in AGN and may hold basic information about the formation and fueling of AGN; in particular, it can give information about the
central black hole mass in the case of BLR virialization.
Here we study the variability of the broad emission lines and continuum of active galaxy NGC 3516 on a long
time scale (1996--2018). NGC 3516 was one of the first observed active galaxies
that showed a variable flux in the broad lines \citep[][]{an68}.
The galaxy is a nearby (z$\sim$0.009), bright (V$\sim$12.5 mag, but also variable) object of the morphological type SB0. The optical spectrum
of the NGC 3516 nucleus was studied repeatedly \citep[see][and reference therein]{co73,ad75,bo77,os77,cr86,wa93}.
Strong variations in the intensity of the broad lines and optical Fe II lines were reported in several papers
\citep[][]{so68,an71,co73,bo77,co88,bo90}.
In NGC 3516 the contribution of the absorption spectrum of the galactic nucleus stellar component is very significant \citep[][]{cr85}, and in addition, NGC 3516 shows a strong intrinsic UV absorption, which is blueshifted \citep[][]{go99}.
The absorption line widths and ionization state are consistent with those expected in the narrow line region (NLR), i.e.
it seems that the UV absorption in NGC 3516 originates in the NLR \citep[see][]{go99}.
This significant contribution of the stellar population to the AGN continuum, estimated within an aperture of the size of 1.0$\arcsec\times 4.0\arcsec$, amounts to $\sim$70\% of the continuum flux in the H$\beta$ wavelength region \citep[][]{bo90}.
This makes NGC 3516 a potentially very interesting AGN, since there is a large
contribution of the circum-nuclear component and, in addition, in the center resides a low-luminosity AGN that emits broad hydrogen
lines which are strongly time-variable. The observed H$\alpha$/H$\beta$ ratio \citep[$\sim$5, see e.g.][]{de16} is larger than the theoretical Case B value, which is expected from the pure photoionization model \citep[][]{of06}. However, a special case of the photoionization model could explain the observed Balmer line ratio \citep[][]{de16}, as could some alternative approaches \citep[][]{po02}. In addition, the effects of collisional excitation and dust extinction could be responsible for such a large deviation of the Balmer decrement.
In 1990 the first intensive spectral and photometric optical monitoring, lasting 5 months, was performed as part of the LAG (Lovers of Active Galaxies) collaboration \citep[][]{wa93,wa94,on03}.
A large amplitude of variability of the broad lines and continuum, variable asymmetric line profiles, and a variable dip in the blue wing of H$\beta$ were detected, and time-lags were also estimated \citep[14 days for H$\alpha$ and 7 days for H$\beta$,][]{wa94}. In 2007 a high sampling rate,
6-month optical reverberation mapping campaign of NGC 3516, was undertaken at MDM Observatory with the support of observations at several telescopes \citep[][]{de10}.
They showed that the H$\beta$ emission region within the BLR of NGC 3516 has complex kinematics (with clear evidence
for outflowing, infalling, and virialized BLR components) and reported an updated time-delay of the broad H$\beta$ line (11.7 days). Additionally, the line shape investigation by \cite{po02} indicated the presence of a disc-like BLR which emits mostly in the line wings, and
another BLR component (so called intermediate BLR - IBLR) which emits narrower lines and contributes to the line core. Recently, \cite{de18} found that the time delay between the continuum and H$\beta$ is $\sim$4-8 days, that in combination with the measured root-mean-square (rms) profile of H$\beta$ width (around 2440 km s$^{-1}$) gives the central black hole mass of $\log(M/M_\odot)=7.63$.
Finally, NGC 3516 was simultaneously monitored in the X-ray and optical B-band in 2013--2014 \citep[][]{no16},
when the object was detected in its faint phase.
In this paper (Paper I), we present the results of the long-term photometric (B,V,R) and spectral
(in the H$\alpha$ and H$\beta$ wavelength band) monitoring of NGC 3516 during the period between 1996 and 2018, and discuss the broad line and continuum flux variability. In Paper II we are going to investigate the changes in the BLR, i.e. in the shape of broad lines.
The paper is organized as follows: in Section 2 we report on our observations and describe the data reduction; in
Section 3 we describe the performed data analysis, and in Section 4 we discuss our results; finally in Section 5 we outline our conclusions.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{lc_n3516_photo_nov2018.ps}
\caption{Photometric light curves in B, V, R filters, transformed into corresponding fluxes using the equations from \citet{di98}. The fluxes are given in units of
10$^{-15}$ erg cm$^{-2}$s$^{-1}$\AA$^{-1}$. Bottom two panels give the light curves of
color indexes B-V and V-R, in stellar magnitudes.} \label{fig1}
\end{figure}
\begin{table*}
\centering
\caption{ A sample of measured photometric magnitudes of NGC 3516. Columns are: (1): Number, (2): Modified Julian date (MJD), (3): Mean seeing in arcsec, and (4)-(6): BVR magnitudes and corresponding errors. The full table is available online as Supporting information.}
\label{tab1}
\begin{tabular}{cccccc}
\hline
N & MJD & Seeing & m$_{\rm B}$ $\pm \sigma$ & m$_{\rm V}$ $\pm \sigma$ & m$_{\rm R}$ $\pm \sigma$ \\
& 2400000+ & [arcsec] & & & \\
1 & 2 & 3 & 4 & 5 & 6 \\
\hline
1 & 52996.63 & 1.3 & 13.945$\pm$0.002 & 13.053$\pm$0.044 & 12.406$\pm$0.013 \\
2 & 53031.55 & 2.0 & 14.023$\pm$0.043 & 13.115$\pm$0.019 & 12.471$\pm$0.019 \\
3 & 53058.48 & 2.5 & 14.044$\pm$0.007 & 13.129$\pm$0.039 & 12.422$\pm$0.081 \\
4 & 53088.44 & 3.0 & 14.028$\pm$0.006 & 13.149$\pm$0.004 & 12.493$\pm$0.000 \\
5 & 53122.23 & 3.0 & 13.921$\pm$0.004 & 13.062$\pm$0.018 & 12.386$\pm$0.006 \\
6 & 53149.33 & 1.5 & 13.758$\pm$0.022 & 13.024$\pm$0.008 & 12.380$\pm$0.015 \\
7 & 53347.57 & 3.0 & 13.773$\pm$0.009 & 12.999$\pm$0.005 & 12.341$\pm$0.011 \\
8 & 53386.55 & 1.2 & 13.611$\pm$0.000 & 12.869$\pm$0.010 & 12.238$\pm$0.016 \\
9 & 53405.49 & 2.2 & 13.648$\pm$0.007 & 12.911$\pm$0.012 & 12.274$\pm$0.007 \\
10 & 53413.47 & 2.2 & 13.714$\pm$0.018 & 12.957$\pm$0.004 & 12.319$\pm$0.005 \\
\hline
\end{tabular}
\end{table*}
\section{Observations and data reduction}
\label{sec:obs}
Details about the observations, calibration and unification of the spectral data, and measurements of the spectral fluxes are reported in our previous works \citep[see][and references therein]{sh01,sh04,sh08,sh10,sh12,sh13, sh16,sh17}, and will not be repeated here. However, we give some basic information about photometric and spectral observations of NGC 3516 and data reduction.
\begin{table*}
\centering
\caption{Details of the spectroscopic observations. Columns are: (1): Observatory, (2): Code, (3): Telescope aperture and type of spectrograph. (4): Projected spectrograph
entrance apertures (slit width$\times$slit length in arcsec), and (5): Focus of the telescope.}
\label{tab2}
\begin{tabular}{lcccc}
\hline
Observatory & Code & Tel.aperture + equipment & Aperture [arcsec] & Focus \\
1 & 2 & 3 & 4 & 5\\
\hline
SAO (Russia) & L(N) & 6 m + Long slit & 2.0$\times$6.0 & Nasmith \\
SAO (Russia) & L(U) & 6 m + UAGS & 2.0$\times$6.0 & Prime \\
GHO (M\'exico)& GHO & 2.1 m + B\&C & 2.5$\times$6.0 & Cassegrain \\
SAO (Russia) & Z1 & 1 m + UAGS & 4.0$\times$19.8 & Cassegrain \\
SAO (Russia) & Z2K & 1 m + UAGS & 4.0$\times$9.45 & Cassegrain \\
\hline
\end{tabular}
\end{table*}
\begin{table*}
\centering
\caption{Spectroscopic observations log. Columns are: (1): Number, (2): UT date, (3): Modified Julian date (MJD), (4): Code, given in Table~\ref{tab2}, (5): Projected spectrograph entrance apertures, (6): Wavelength range covered, and (7): Mean seeing in arcsec. The full table is available online as Supporting information.}
\label{tab3}
\begin{tabular}{ccccccc}
\hline
N & UT-date & MJD & Code & Aperture &Sp.range & Seeing \\
& &2400000+ & & [arcsec] & [\AA] & [arcsec] \\
1 & 2 & 3 & 4 & 5 & 6 & 7\\
\hline
1 & 14.01.1996 & 50096.63 & Z1 & 4.0$\times$19.8& 3738-6901 & 4.0 \\
2 & 20.01.1996 & 50103.47 & L(N) & 4.0$\times$19.8& 5100-7300 & - \\
3 & 19.03.1996 & 50162.31 & L(N) & 2.0$\times$6.0 & 3702-5595 & 3.0 \\
4 & 20.03.1996 & 50163.35 & L(N) & 2.0$\times$6.0 & 3702-5595 & 4.5 \\
5 & 05.10.1997 & 50726.62 & L(N) & 2.0$\times$6.0 & 3845-6288 & 4.0 \\
6 & 07.10.1997 & 50728.64 & L(N) & 2.0$\times$6.0 & 3845-6289 & 4.0 \\
7 & 20.01.1998 & 50834.34 & L(N) & 2.0$\times$6.0 & 3838-6149 & 2.5 \\
8 & 28.01.1998 & 50842.44 & L(U) & 2.0$\times$6.0 & 4540-5348 & 2.8 \\
9 & 22.02.1998 & 50867.31 & L(N) & 2.0$\times$6.0 & 3837-6149 & 2.0 \\
10 & 07.05.1998 & 50940.53 & L(N) & 2.0$\times$6.0 & 3738-6149 & 3.0 \\
\hline
\end{tabular}
\end{table*}
\begin{table*}
\centering
\caption{Flux scale factors $\varphi$ and extended source correction G(g)
[in units of 10$^{-15} \rm erg \ cm^{-2} s^{-1}$\AA$^{-1}$] for the optical
spectra in the case of different telescopes. GHO(m) sample contains spectra with
spectral resolution of 15 \AA.}
\label{tab4}
\begin{tabular}{lcccc}
\hline
Sample & Years & Aperture& Scale factor& Extended source correction \\
& & (arcsec) & ($\varphi\pm\sigma$) & G(g) \\
\hline
GHO & 1999-2007 & 2.5$\times$6.0 & 1.000 & 0.000 \\
GHO(m) & 1999-2007 & 2.5$\times$6.0 & 1.020$\pm$0.085 & 0.000 \\
L(U,N) & 1999-2010 & 2.0$\times$6.0 & 1.230$\pm$0.049 & 1.42$\pm$1.18 \\
Z1 & 1999-2004 & 4.0$\times$19.8 & 1.350$\pm$0.110 & 6.58$\pm$0.73 \\
Z2K & 2003-2017 & 4.0$\times$9.45 & 1.319$\pm$0.072 & 5.92$\pm$2.45 \\
\hline
\end{tabular}
\end{table*}
\subsection{Photometric observations}
\label{sec:phot}
The photometry in the BVR filters of NGC 3516 was performed at the Special Astrophysical Observatory of the
Russian Academy of Science (SAO RAS) during the 1999 -- 2017 period (139 nights) with CCD-photometers of 1-m and 60-cm Zeiss telescopes.
The photometric system is similar to those of Johnson in B and V, and of Cousins in R spectral band \citep[][]{co76}.
The software developed at SAO RAS by \cite{vl93}
was used for the data reduction. Photometric standard stars from \cite{pe71},
in 1998--2003, and from \cite{do05},
in 2004--2017, were used. In Table \ref{tab1} the photometric BVR-magnitude data for the aperture of 10\arcsec are presented. In Figure \ref{fig1} we plot the light curves in the BVR bands and (B-V), (V-R) color indexes.
For the light curves (Fig. \ref{fig1}), the magnitudes [m(B), m(V),m(R)] were transformed into
fluxes F(B), F(V) and F(R) in units of 10$^{-15}$ erg cm$^{-2}$ s$^{-1}$ \AA$^{-1}$, using the equations from \cite{di98}.
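The conversion has the usual zero-point form; as a sketch, with representative Johnson--Cousins zero points (illustrative values only; the exact coefficients of \cite{di98} may differ slightly):
\begin{verbatim}
# Approximate zero points in erg cm^-2 s^-1 A^-1 (illustrative values).
F0 = {"B": 6.3e-9, "V": 3.6e-9, "R": 2.2e-9}

def mag_to_flux(m, band):
    """Magnitude -> flux density in units of 1e-15 erg cm^-2 s^-1 A^-1."""
    return F0[band] * 10.0 ** (-0.4 * m) / 1e-15

# First entry of Table 1: mag_to_flux(13.945, "B") gives ~17.
\end{verbatim}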
\begin{figure*}
\centering
\includegraphics[width=10cm, angle=90]{fig02.eps}
\caption{The image of the observed spectrum of NGC 3516 in 2017 (top panel), and from top to bottom (bottom panel): the extracted composite (aperture of 2\arcsec$\times$4\arcsec), host galaxy, and pure AGN spectrum, respectively.} \label{sp2017}
\end{figure*}
\subsection{Spectral observations}
\label{sec:spec}
Spectral monitoring of the galaxy NGC 3516 was carried out in 1996--2007 and 2014--2018
during $\sim$160 observing nights. Spectra were taken with the 6 m and 1 m telescopes
of the SAO RAS, Russia (1996--2018), and with the 2.1 m telescope of the Instituto Nacional de Astrof\'{\i}sica, \'{O}ptica y
Electr\'onica (INAOE) at the "Guillermo Haro
Observatory" (GHO) at Cananea, Sonora, Mexico (1998--2007). They were obtained with
long-slit spectrographs equipped with CCDs. The typical covered wavelength interval was
from $\sim$3700 \AA\ to 7700 \AA, the spectral resolution was between $\sim$(8--10) \AA \, or (12--15) \AA, and
the signal-to-noise (S/N) ratio was $\sim$40--50 in the continuum near the H$\beta$ and H$\alpha$ lines.
Spectrophotometric standard stars were observed every night. Table \ref{tab2} provides basic information on the telescopes and spectrographs used for the spectroscopic observations. The log of spectroscopic observations is given in Table \ref{tab3}. The spectrophotometric data reduction was carried out using either the software developed at SAO RAS \citep[][]{vl93} or the
IRAF package for the spectra obtained at GHO, and it included bias and flat-field corrections, cosmic ray removal, 2D wavelength linearization, sky spectrum subtraction, addition of the spectra for every night, and relative flux calibration based on spectrophotometric standard star observations. In the analysis, about 10\% of the spectra were discarded for several different reasons (e.g. high noise level,
badly corrected spectral sensitivity, poor spectral resolution $>$15 \AA, etc.). Thus, our final data set consists of
123 blue (covering H$\beta$) and 89 red (covering H$\alpha$) spectra, taken during 146 nights, which we use in further analysis.
Additionally, we observed NGC 3516 with the 6 m telescope and the SCORPIO-2 spectrograph on February 1, 2017, in the spectral range from 4820 \AA\ to 7060 \AA. The observations were done in spectro-polarimetric mode with the grating VPHG1800@590, giving a dispersion of
0.5 \AA\ per px with a spectral resolution of 4.5 \AA . The slit width was 2\arcsec,
and the height was 57\arcsec. The exposure time was 3600 s and seeing was 2.3\arcsec. The
observed spectrum is shown in Figure \ref{sp2017} (top panel), from which, due to its high quality, we
could extract the composite spectrum (aperture size of 2\arcsec$\times$4\arcsec, top spectrum, bottom panel), the
spectrum of the host galaxy (middle spectrum, bottom panel) and the
pure AGN spectrum (bottom spectrum, bottom panel). To subtract the host galaxy spectrum from the composite one,
we extracted the offset spectra from two regions, from -3\arcsec to -18\arcsec below and
from +3\arcsec to +18\arcsec above the center. The averaged spectrum (middle spectrum, bottom panel)
was subtracted from the composite spectrum, and we
obtained the spectrum of the pure AGN (bottom spectrum, bottom panel), which
shows the presence of weak broad components in the H$\alpha$ and
H$\beta$ lines. As can be seen from Figure \ref{sp2017}, both the narrow
and broad lines are present.
The above procedure for the host-galaxy subtraction, could be done only in the case of the latest high-quality spectrum. In order to test if there is a "hidden" broad-line component in the H$\alpha$ and H$\beta$ line profiles in all spectra from our campaign, we estimated the host-galaxy contribution using the spectral principal component analysis (PCA), a statistical method which is described in \citet{fr92,vb06}. We applied the PCA to the year-average spectra, obtained from those spectra covering the total wavelength range.
\citet{vb06} introduced the application of this
statistical method for spectral decomposition of a composite spectrum into a pure-host and pure-AGN part. The PCA uses eigenspectra of AGN and galaxies, whose linear combination can reproduce the observed spectrum \citep[see][etc.]{fr92,co95,yi04a,yi04b}.
An example of the PCA decomposition of the year-average observed spectrum (from 1997) to the host-galaxy and pure-AGN spectrum is shown in Figure \ref{pca}. The obtained host-galaxy spectra were subtracted from the observed year-average spectra in order to obtain the pure AGN component.
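Schematically, the decomposition amounts to a least-squares projection of the observed spectrum onto the two sets of eigenspectra; a toy sketch (assuming all spectra are resampled onto a common wavelength grid):
\begin{verbatim}
import numpy as np

def pca_decompose(spectrum, gal_eigen, qso_eigen):
    """Fit the observed spectrum as a linear combination of galaxy and
    QSO eigenspectra, each of shape (n_components, n_pixels)."""
    basis = np.vstack([gal_eigen, qso_eigen])
    coeff, *_ = np.linalg.lstsq(basis.T, spectrum, rcond=None)
    n_gal = gal_eigen.shape[0]
    host = coeff[:n_gal] @ gal_eigen     # pure-host part
    agn = coeff[n_gal:] @ qso_eigen      # pure-AGN part
    return host, agn
\end{verbatim}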
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{n3516_host.ps}
\caption{The PCA decomposition of the year-average spectrum in 1997.} \label{pca}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{n3516_spectra.ps}
\caption{Observed spectra in the minimum and maximum of activity during the monitored period (epoch of
observations denoted in the upper left corner).}
\label{fig-mm}
\end{figure}
\subsection{Absolute calibration (scaling) of the spectra}
\label{sec:cal}
Usually, for the absolute calibration of the spectra of AGN, fluxes in the narrow emission lines are used because it is assumed
that they are not variable on time scales of tens of years \citep[]{pe93}. All blue spectra of NGC 3516 were thus scaled to the constant
flux F([O III] $\lambda$4959+5007) = 4.47$\times$10$^{-13}$ erg cm$^{-2}$ s$^{-1}$. This value is obtained using the data of \cite{de10} for F([O III] $\lambda$5007) = 3.35$\times$10$^{-13}$ erg cm$^{-2}$ s$^{-1}$ and the flux ratio F([O III] $\lambda$5007)/F([O III] $\lambda$4959) = 3 \citep[see][]{di07}.
The scaling method of the blue spectra \citep[see][]{sh04}
allows us to obtain a homogeneous set of spectra with the same wavelength calibration and the same [O III] $\lambda$4959+5007 fluxes.
The spectra obtained using the SAO 1-m telescope with a resolution of $\sim$8-10 \AA\ (UAGS+CCD2K, Table \ref{tab2}) and the spectra of the 2.1 m telescope with a resolution of $\sim$12-15 \AA\ (Boller\&Chivens spectrograph + a grism of 150 l/mm) cover both the H$\alpha$ and H$\beta$ spectral bands. These spectra were scaled using the [O III] $\lambda$4959+5007 lines, and consequently, the red spectral band was automatically scaled to the flux of these lines.
The blue spectra of NGC 3516 in the wavelength region of $\sim$(3700--5800) \AA \, and with the spectral resolution of $\sim$8 \AA, taken with the 6 m and 1 m telescopes of SAO RAS and with the 2.1 m telescope of GHO\footnote{Code L(N), L(U), Z1 and GHO from Tables 2 and 3} were also scaled using the flux of the [O III] $\lambda$4959+5007 lines. The red spectra observed on the same night (or the next night) in the wavelength region (5600--7600) \AA \, were first scaled to the fluxes of the [S II] $\lambda$6717+6731 lines, and then the scaling was corrected using the overlapping continuum with the corresponding blue spectrum which was scaled to [O III] $\lambda$4959+5007. The [S II] $\lambda$6717+6731 total flux was determined from the scaled spectra covering the entire wavelength range. However, the accuracy of the scaling of the red region depends both on the accuracy of the determination of the [S II] line flux and on the slope of the continuum. In the spectra of NGC 3516, the fluxes in the [S II] $\lambda\lambda$6717,6731 lines are almost an order of magnitude smaller than the fluxes in the [O III] $\lambda\lambda$4959,5007 lines. Therefore, when scaling to the [S II] $\lambda$6717+6731 flux, the scaling accuracy varied within 2-10\%, depending on the quality of the spectrum.

To improve the accuracy of the scaling, we used overlapping sections of the continuum of the blue and red spectra recorded on the same or the next night. However, in this case the accuracy of the scaling procedure depends strongly on the determination of the continuum slope in the blue and red spectral bands, i.e. one has to carefully account for the spectral sensitivity of the equipment. This has been done by using the comparison stars. In poor photometric conditions (clouds, mist, etc.) the reduction can give a wrong spectral slope (fall or rise) and, consequently, the errors in the scaling procedure for the H$\alpha$ wavelength band can be larger. As a rule, the fluxes in the H$\alpha$ line and red continuum determined from the spectra scaled to the [S II] $\lambda$6717+6731 flux or using the overlapping sections of the continuum show little difference (less than 5\%), but in several red spectra ($\sim$6\% of them) the fluxes differ by up to 10\%. In the latter case, we used the flux from the average spectrum. Similarly, in the case of spectra that cover the whole wavelength range, which were scaled to [O III] $\lambda$4959+5007, for better precision we also scaled the spectra using the flux in [S II] $\lambda$6717+6731. Then we compared the fluxes in the H$\alpha$ line and the red continuum obtained from the two differently scaled spectra, and if there were differences of more than 5\%, the average flux was used. As was mentioned at the beginning of Section
2, more details on the scaling can be found in our previously published papers \citep[see e.g.][]{sh01,sh04,sh08,sh10,sh12,sh13,sh16,sh17}.
\begin{figure*}
\centering
\includegraphics[width=16cm]{lc_n3516_sp_ph_deRosa_nov2018.ps}
\caption{Light curves for the spectral lines and continuum fluxes, compared to
the photometry flux in the V filter, F(V,$\lambda$5500\AA) shown in top panel. Observations with different
telescopes are denoted with different symbols given in the second panel from the top. Also, observations reported by \citet{de18} are included.
The continuum flux is in units of
$10^{-15} \rm erg \, cm^{-2} s^{-1}$\AA$^{-1}$ and the line flux in units of $10^{-13} \rm erg \, cm^{-2} s^{-1}$.} \label{fig2}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\columnwidth]{F_cnt_hb.ps}
\includegraphics[width=\columnwidth]{F_cnt_ha.ps}
\caption{Continuum vs. line flux for H$\beta$ and H$\alpha$. Symbols and units are the same as in Figure \ref{fig2}.
The correlation coefficients $r$ and the corresponding null-hypothesis $P_0$ values are also given.} \label{fig3}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\columnwidth]{F_hab.ps}
\includegraphics[width=\columnwidth]{F_cnt_cnt.ps}
\caption{H$\alpha$ vs. H$\beta$ line flux (left) and red vs. blue continuum flux (right).
Symbols and units are the same as in Figure \ref{fig2}. The correlation coefficients $r$ and the corresponding null-hypothesis $P_0$ values are also given.} \label{fig4}
\end{figure*}
\subsection{Measurements of the spectral fluxes, their unification and errors}
\label{sec:flux}
Using the scaled spectra, we determined the fluxes in the blue and red continuum and in the broad emission lines for each data set
(i.e. data with a given aperture and telescope from Table \ref{tab2}).
The average flux in the continuum near the H$\beta$ line at the observed wavelength 5145 \AA\
($\sim$5100 \AA\ in the rest frame) was obtained by averaging flux in the spectral range of (5130--5160) \AA. The continuum near the H$\alpha$
line at the observed wavelength 6385 \AA\ ($\sim$6330 \AA\ in the rest frame), was measured by averaging fluxes in the spectral range of
(6370--6400) \AA. To measure the observed fluxes of
H$\alpha$ and H$\beta$, it is necessary to subtract the underlying continuum.
For this purpose, a linear continuum was fitted through 20 \AA\ wide windows
located at the observed wavelengths 4760 \AA\ and 5120 \AA\ for H$\beta$, and at 6390 \AA\
and 6820 \AA\ for H$\alpha$. After the continuum subtraction, we measured the line fluxes in the following
observed wavelength bands: from 4845 \AA\ to 4965 \AA\ for H$\beta$ and from 6490 \AA\ to 6750 \AA\ for H$\alpha$.
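A simplified sketch of these measurements for the H$\beta$ region (the 20 \AA\ continuum windows are assumed to be centred on the quoted wavelengths):
\begin{verbatim}
import numpy as np

def measure_hbeta(wave, flux):
    """Continuum flux near 5145 A and H-beta line flux (observed frame)."""
    cont5100 = flux[(wave >= 5130) & (wave <= 5160)].mean()
    # Linear pseudo-continuum through 20 A windows at 4760 A and 5120 A.
    w1 = (wave >= 4750) & (wave <= 4770)
    w2 = (wave >= 5110) & (wave <= 5130)
    x1, x2 = wave[w1].mean(), wave[w2].mean()
    y1, y2 = flux[w1].mean(), flux[w2].mean()
    pseudo = y1 + (y2 - y1) / (x2 - x1) * (wave - x1)
    band = (wave >= 4845) & (wave <= 4965)
    f_hb = np.trapz((flux - pseudo)[band], wave[band])
    return cont5100, f_hb
\end{verbatim}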
In order to investigate the long-term spectral variability of an AGN, it is necessary to gather a consistent and uniform data set.
Since observations were carried out using instruments of different apertures, it is necessary to correct the line and continuum
fluxes for these effects \citep[][]{pe83}. As reported in our previous papers
\citep[][]{sh01,sh04,sh08,sh10,sh12,sh13,sh16,sh17}
we determined a point-source correction factor ($\varphi$) and an aperture-dependent correction factor to account for the host galaxy contribution to the continuum (G(g)). We used the following expressions \citep[see][]{pe95}
\begin{eqnarray}
F({\rm line})_{\rm true} = \varphi \times F({\rm line})_{\rm obs} \\
F({\rm cont})_{\rm true} = \varphi \times F({\rm cont})_{\rm obs} - G(g)
\end{eqnarray}
where index "obs" denotes the observed flux, and "true" the aperture corrected flux. The spectra of the 2.1 m telescope at GHO
(INAOE, Mexico) within an aperture of 2.5\arcsec$\times$6.0\arcsec were adopted as standard (i.e. $\varphi$= 1.0, G(g)=0 by definition).
The correction factors $\varphi$ and G(g) are determined empirically by comparing pairs of simultaneous observations from each given
telescope data set (see Table \ref{tab4}) to those of the
standard data set \citep[as it was done in AGN Watch, see e.g.][]{pe94,pe98,pe02}.
The time intervals between observations
which were defined as quasi-simultaneous are typically 1-3 days.
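Applying the corrections is then straightforward; e.g. for the Z2K data set, with the factors of Table \ref{tab4} (a sketch):
\begin{verbatim}
PHI, G = 1.319, 5.92    # Table 4 factors for the Z2K data set

def to_standard_system(f_line_obs, f_cont_obs):
    """Aperture-correct observed fluxes onto the GHO standard system
    (line flux in 1e-13, continuum flux in 1e-15 cgs units)."""
    return PHI * f_line_obs, PHI * f_cont_obs - G
\end{verbatim}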
In Table \ref{tab5}, the fluxes for the continuum at the rest-frame wavelengths at 5100 \AA \,
and 6330 \AA, as well as the H$\beta$ and H$\alpha$ lines and their errors are given.
The mean errors of the continuum and line fluxes are in the interval between 3.3\% and 4.5\%. The error bars were estimated by comparing the measured fluxes from spectra obtained within time intervals shorter than 3 days. The flux errors listed in
Table \ref{tab5} were estimated using these mean errors.
\begin{table*}
\centering
\caption{The measured continuum and line fluxes, and their estimated errors.
Columns are: (1): Number of spectra, (2): UT-date, (3): Modified Julian Date (MJD),
(4): Blue continuum, (5): H$\beta$, (6): Red continuum, and (7): H$\alpha$. The line fluxes are in units of $10^{-13} \rm erg \, cm^{-2} s^{-1}$, and continuum fluxes in units of $10^{-15} \rm erg \, cm^{-2} s^{-1}$\AA$^{-1}$.
The full table is available online as Supporting information.}
\label{tab5}
\begin{tabular}{ccccccc}
\hline
N & UT-date & MJD & F${\rm 5100}\pm \sigma$ & F(H$\beta$)$\pm \sigma$ & F${\rm 6330}\pm \sigma$ & F(H$\alpha$)$\pm \sigma$ \\
1& 2 & 3 & 4&5& 6 & 7 \\
\hline
1 & 14.01.1996 & 50096.63 & 26.17$\pm$1.10 & 14.41$\pm$0.62 & - & - \\
2 & 20.01.1996 & 50103.47 & - & - & 26.71$\pm$0.88 & 64.82$\pm$2.92 \\
3 & 19.03.1996 & 50162.31 & 23.36$\pm$0.98 & 14.82$\pm$0.64 & 22.68$\pm$0.75 & 62.77$\pm$2.82 \\
4 & 20.03.1996 & 50163.35 & 26.07$\pm$1.10 & 14.53$\pm$0.62 & 25.15$\pm$0.83 & 61.22$\pm$2.75 \\
5 & 05.10.1997 & 50726.63 & 24.49$\pm$1.03 & 12.30$\pm$0.53 & 23.94$\pm$0.79 & 54.77$\pm$2.46 \\
6 & 07.10.1997 & 50728.64 & 24.07$\pm$1.01 & 11.31$\pm$0.49 & 24.20$\pm$0.80 & 54.55$\pm$2.45 \\
7 & 20.01.1998 & 50834.34 & 23.62$\pm$0.99 & 11.06$\pm$0.48 & 18.60$\pm$0.61 & 44.87$\pm$2.02 \\
8 & 28.01.1998 & 50842.44 & 22.69$\pm$0.95 & 12.45$\pm$0.54 & 19.50$\pm$0.64 & 45.49$\pm$2.05 \\
9 & 22.02.1998 & 50867.31 & 20.85$\pm$0.88 & 10.51$\pm$0.45 & - & - \\
10 & 07.05.1998 & 50940.53 & 20.92$\pm$0.88 & 10.09$\pm$0.43 & - & - \\
\hline
\end{tabular}
\end{table*}
\section{Data analysis and results}
\label{sec:data}
In this section we present our results. First we briefly analyze the photometric observations, and then the spectral observations, which contain the continuum and broad line variations.
\subsection{Photometric results}
\label{sec:photores}
Our photometric results are presented in Figure \ref{fig1}, where we show the observations in B, V, and R filters. As one can see in Figure \ref{fig1},
the photometric observations show the same variability in all three considered filters. Since there is a lack of data between 2008 and 2012, we cannot be sure that the maximum in the light curve was in 2007, but it seems that the flux was then close to its maximum. The minimum was in 2014, and also in the following years (2014--2018) there were no large changes in the photometric data.
The color B-V and V-R diagrams (Fig. \ref{fig1}, two bottom panels) show that in the high activity phase (2002--2008) the slope of the spectra from the blue to the red band was much steeper (bluer) than in the minimum phase (2012--2018), when the continuum was almost flat. It is also evident from Figure \ref{fig1} that with every increase in brightness (flux) the colors decreased (i.e. became bluer), which is expected in the case of AGN.
\begin{table}
\centering
\caption{Parameters of the continuum and line variations. Columns are: (1): Analyzed spectral feature, (2): Total number of spectra, (3): Mean flux, (4): Standard deviation, (5): Ratio of the maximal to minimal flux, (6): Variation amplitude (see text).
Continuum flux is in units of $10^{-15} \rm erg \, cm^{-2} s^{-1}$\AA$^{-1}$
and line flux in units of $10^{-13} \rm erg \ cm^{-2}s^{-1}$. }
\label{tab7}
\begin{tabular}{lccccc}
\hline
Feature & N & $F$(mean) & $\sigma$($F$) & $R$(max/min)& $F$(var)\\
1 & 2 & 3 & 4 & 5 & 6 \\
\hline
cont 6330 & 89 & 19.3 & 3.1 & 2.0 & 0.158 \\
cont 5100 & 122 & 19.0 & 4.1 & 2.7 & 0.167 \\
H$\alpha$ - total & 89 & 33.5 & 11.1 & 9.7 & 0.331 \\
H$\beta$ - total & 122 & 7.0 & 3.1 & 13.9 & 0.442 \\
\hline
\end{tabular}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=5.6cm]{ha_mean_8A.ps}
\includegraphics[width=5.6cm]{hb_mean_8A.ps}
\includegraphics[width=5.6cm]{hab_rms_8A.ps}
\caption{Mean and rms-profiles of H$\alpha$ (left) and H$\beta$ (middle) for spectra
with higher spectral resolution ($\sim$8\AA). The right panel represents comparison of the normalized H$\beta$ and H$\alpha$ rms-profiles.} \label{mean}
\end{figure*}
\subsection{Spectral results}
\label{sec:specres}
First we inspected the spectra obtained during the whole long-term period, finding that the maximum in the optical spectra
was in 2007, and the minimum (as also in the photometric observations) in 2014. As shown in Figure \ref{fig-mm}, we explored the optical spectra
in these two extreme epochs, finding that in the maximum the continuum is strong
and the broad lines are very prominent, showing a
typical Sy 1 spectrum. In the maximum phase there are Balmer lines
from H$\alpha$ to H$\delta$, and also very intense Fe II lines, especially
the Fe II feature between the H$\beta$ and H$\gamma$ lines (Fig. \ref{fig-mm}).
On the other hand, in the minimum phase (Fig. \ref{fig-mm}) the broad lines disappeared, and the spectrum of NGC 3516 is similar to a Sy 2 spectrum, without a strong continuum and broad lines. Additionally, it is interesting that, in contrast to a typical Sy 2 spectrum, in the composite spectrum during the minimum there is no narrow H$\beta$ line, which is probably absorbed. The absorption lines from the host galaxy are dominant, while both forbidden and permitted narrow emission lines are present.
Since we found an extreme difference between the NGC 3516 spectrum in the phases of minimum and maximum activity, we explored how much the result could be affected by some artificial effects (e.g. slit motion). Therefore, we repeated long-slit observations with the 6 m telescope of SAO RAS on February 1, 2017 (Fig. \ref{sp2017}), and we found that after subtracting the host galaxy contribution there is a very weak
H$\beta$ broad component, which cannot be seen in the composite spectra from our monitoring campaign.
The light curves of the broad line and corresponding continuum fluxes are shown in Figure \ref{fig2}, from which it can be seen that the active phase was more or less present during the whole monitored period, apart from the last several years. In Figure \ref{fig2}, we also plot the observed fluxes of H$\beta$ and the continuum at $\lambda$5100 \AA\ reported by \cite{de18}, which cover only a small part of the period spanned by our monitoring campaign. As can be seen in Figure \ref{fig2}, the observations of \citet{de18} fit our photometric and spectral observations very well.
In general it is expected that the line flux variation is well correlated with the continuum flux variation; however, in some well-known AGNs this is not the case, e.g. in NGC 4151 \citep[see][]{sh08} or Arp 102B \citep[see][]{sh13}. In addition, as noted above, NGC 3516 contains a low-luminosity AGN, and it is interesting to explore the response of the line flux to the continuum flux variability. To test this, we plot in Figure \ref{fig3} the flux of H$\beta$ (left panel) and H$\alpha$ (right panel) as a function of the continuum flux at 5100 \AA\ and 6330 \AA, respectively. Figure \ref{fig3} shows that there are good correlations between the lines and the corresponding continua (r=0.81 for H$\beta$, and r=0.79 for H$\alpha$); however, there is a large scatter, especially in the case of weak broad line fluxes. This is expected, since in the low activity phase the weakness of the broad emission lines is due to the lack of an ionizing continuum from the nucleus \citep[][]{ki18}.
On the other hand, a better correlation is obtained between the broad H$\alpha$ and H$\beta$ lines ($r=0.95$) and between the blue and red
continuum ($r=0.90$), which is shown in Figure \ref{fig4}. This is expected, and it confirms that the relative flux calibration of the blue and red spectra obtained from different telescopes was done correctly.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{GPALLOU.eps}
\caption{GP model fit (solid line) to the observed light curves (points with error bars),
which are denoted in each plot. The continuum flux is in units of
$10^{-15} \rm erg \, cm^{-2} s^{-1}$\AA$^{-1}$ and the line flux in units of $10^{-13} \rm erg \, cm^{-2} s^{-1}$.} \label{gp}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{ZDCFGPobsHa2.eps}
\includegraphics[width=\columnwidth]{ZDCFGPobsHb2.eps}
\caption{Cross correlation functions (ZDCF) for the H$\alpha$ (top) and H$\beta$ (bottom). The error bars show the ZDCF for observed and GP modeled curve. The vertical lines mark the obtained time lag for the observed (dashed–dotted) and GP modeled light curve (solid).} \label{ccf}
\end{figure}
\subsubsection{Variability of the emission lines and the optical continuum}
\label{sec:var}
As can be seen in Figures \ref{fig1} and \ref{fig2}, there is a large variability in the spectra during the monitored period.
To explore the level of variability we calculated the variation amplitude using the method given by \citet{ob98}, and we present it in Table \ref{tab7}.
The changes in the continuum were around a factor of two (2.7 times for $\lambda$5100 \AA\ and 2 times for
$\lambda$6330 \AA), which is usual for Sy 1 galaxies \citep[see e.g.][]{sh17}. However, the line fluxes changed by somewhat more than a factor of 10 (Tab. \ref{tab7}), which is expected in AGN which change their type, as e.g. in Fairall 9, where the broad line fluxes changed by more than an order of magnitude while the object changed its type from Sy 1 to Sy 1.9 \citep[][]{ko85}.
Additionally, there is a big change in the line profiles. We show the mean and rms-profiles of the broad H$\alpha$ and H$\beta$ lines in Figure \ref{mean}, from which it can be seen that the mean profiles of the H$\alpha$ and H$\beta$ lines show structures in the blue and red wings, like shoulders, which may indicate a complex BLR \citep[see][]{po02}. We constructed the mean and rms-profiles for both lines using only spectra with a resolution of 8 \AA, and found that the full width at half maximum (FWHM) of the mean H$\alpha$ is 3560 km s$^{-1}$ (the FWHM of the H$\alpha$ rms-profile is 4110 km s$^{-1}$), whereas the mean H$\beta$ seems to be broader, with a FWHM of 5120 km s$^{-1}$ (the FWHM of the H$\beta$ rms-profile is 4360$\pm$80 km s$^{-1}$). For the FWHM of the H$\beta$ rms-profile, which is later used for the black hole mass estimation, we estimated the uncertainty by making several measurements for different levels of the underlying continuum, taking the resulting average for the FWHM and the uncertainty to be 1$\sigma$. Both mean profiles and their rms show a red asymmetry, which may be caused by inflow or gravitational redshift \citep[see][]{jo16}, but other effects can also be present; e.g. it could imply outflow if the inward facing side of the BLR clouds is brighter than the outward facing side, as suggested by photoionization modeling. Both rms-profiles have the same shape (see Figure \ref{mean}, right panel), which indicates similar kinematics of both regions.
\subsection{Time-lag and periodicity analysis}
\label{sec:lag}
The time-lags between the light curves in the H$\alpha$ and H$\beta$ lines and the corresponding continuum bands are determined from the z-transformed discrete correlation function (ZDCF) analysis \citep[following the technique detailed in][]{al97,po14,sh16,sh17}.
Note that the line and continuum light curves are sampled at the same times, as assumed in \cite{ed88}.
Our long-term observations cover 22 years; however, since there is a large gap after the year 2007, in this analysis we used
only the part of the light curve up to the year 2007 (MJD 54500). In addition, we modeled Gaussian Process (GP) simulated light curves, which
are shown in Figure \ref{gp}, in order to obtain the time lags in the case of light curves with an increased sampling rate. The ZDCF analysis was applied to both the observed and the GP simulated light curves, and the resulting ZDCFs are presented in Figure \ref{ccf}.
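A minimal sketch of such GP modeling of an unevenly sampled light curve (the squared-exponential kernel and its hyperparameters are assumptions; any stationary kernel could be substituted):
\begin{verbatim}
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def gp_light_curve(t, f, ferr, t_grid):
    """Fit a GP to a light curve and evaluate the model mean and
    uncertainty on a denser time grid t_grid (times in days)."""
    kernel = 1.0 * RBF(length_scale=30.0) + WhiteKernel(noise_level=0.1)
    gp = GaussianProcessRegressor(kernel=kernel, alpha=ferr**2,
                                  normalize_y=True)
    gp.fit(t.reshape(-1, 1), f)
    return gp.predict(t_grid.reshape(-1, 1), return_std=True)
\end{verbatim}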
Time-lags with the corresponding ZDCFs are given in Table \ref{tab8}. We find a time-lag between the observed H$\alpha$
and continuum of $\tau_{\rm zdcf}=0.0^{+2.0}_{-2.0}$ days and a cross correlation coefficient of $r_{\rm zdcf}=0.69^{+0.07}_{-0.08}$.
Their GP counterparts exhibit a larger lag of $\tau_{\rm zdcf}=15.0^{+5.0}_{-0.0}$ days and similarly larger values
of $r_{\rm zdcf}=0.81^{+0.01}_{-0.01}$. In the case of H$\beta$ and its continuum, the time-lag for the observed
light curves is $\tau_{\rm zdcf}=9.7^{+20.3}_{-8.7}$ days, whose $r_{\rm zdcf}=0.79^{+0.05}_{-0.05}$ is slightly
larger than in the case of the observed H$\alpha$ (Fig. \ref{ccf}). Their GP counterparts
show the largest time-lag of $\tau_{\rm zdcf}=17.0^{+5.0}_{-0.0}$ days and $r_{\rm zdcf}=0.85^{+0.01}_{-0.01}$.
Results based on the GP light curve analysis suggest that the time-lag of H$\beta$
could be larger than that of H$\alpha$, with an upper limit of about 20 days.
In addition, the time lags were also calculated with modified versions of the Interpolated Cross-Correlation Function \citep[ICCF,][]{ga86}, as well as the Discrete Cross-Correlation Function \citep[DCCF,][]{ed88}, as explained in \cite{pa13}. Both methods produced almost the same time lags for both the H$\alpha$ and H$\beta$ lines.
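The essence of the ICCF can be sketched in a few lines (linear interpolation of the continuum onto lag-shifted line epochs; the peak or centroid of r(lag) then gives the delay):
\begin{verbatim}
import numpy as np

def iccf(t_cont, f_cont, t_line, f_line, lags):
    """Pearson r between the line curve and the lag-shifted,
    interpolated continuum, for each trial lag (in days)."""
    r = []
    for lag in lags:
        ts = t_line - lag
        ok = (ts >= t_cont.min()) & (ts <= t_cont.max())
        c = np.interp(ts[ok], t_cont, f_cont)
        r.append(np.corrcoef(c, f_line[ok])[0, 1])
    return np.array(r)
\end{verbatim}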
Finally, we generated two artificial light curves with a duration of 4920 days, starting from the power spectral density function, with a 30-day cadence and a 15-day time lag between them, added red noise, and applied the ZDCF method. The ZDCF was able to detect the 15-day time lag with a small uncertainty in both cases, with and without red noise. If we randomly extract 70 points from the artificial light curves with red noise and apply the ZDCF, the obtained time lag is consistent, i.e. $\tau_{\rm zdcf}=12.0^{+3.1}_{-27.1}$ ($r_{\rm zdcf}=0.96^{+0.01}_{-0.01}$). Therefore we concluded that the sampling rate influences the uncertainty more than the estimated time lags. We showed that all methods and tests give similar results for the time lags, and later in the text we will use the time lag obtained from the ZDCF method applied to the GP modeled light curves for the calculation of the mass of the SMBH.
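As an illustrative consistency check (with the virial factor $f$ left as an assumption), combining the GP-based H$\beta$ lag with the FWHM of the H$\beta$ rms-profile gives
\begin{equation*}
M \approx f\,\frac{c\,\tau\,{\rm FWHM}^2}{G} \approx 6.3\times10^{7}\,f
\left(\frac{\tau}{17\,{\rm d}}\right)
\left(\frac{\rm FWHM}{4360\,{\rm km\,s^{-1}}}\right)^{2} M_\odot ,
\end{equation*}
i.e. $\log(M/M_\odot)\approx 7.8$ for $f=1$, of the same order as the estimate of \citet{de18}.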
We note that in the case of long-term light curves, the "red noise problem" could affect the estimated time lags, in such a way that in addition to the variations on the reverberation timescale, there are longer term variations that bias the estimated lag towards larger values, as was noted by \citet{pe02} for NGC 5548, which is a binary black hole candidate \citep{bo16,li16}. In some cases, the problem of time lag estimates from "red noise" light curves has been mitigated with the Gaussian process regression \citep{ma10,pa11,te13}, which we also applied here. However, we cannot neglect the possibility that the true time lag may be somewhat smaller than our analysis indicates.
\begin{table}
\centering
\caption{The results of the ZDCF analysis. Columns are: (1): Analyzed light curves. (2): Number of points. (3): Time-lag in days from the ZDCF. (4): ZDCF correlation coefficient.}
\label{tab8}
\begin{tabular}{lccc}
\hline
Light curves & N & $\tau_{\rm zdcf}$ [days] & $r_{\rm zdcf}$ \\
1 & 2 & 3 & 4 \\
\hline
GP cnt vs H$\alpha$ & 3728 & 15.0$_{-0.0}^{+5.0}$ & 0.81$_{-0.01}^{+0.01}$ \\
GP cnt vs H$\beta$ & 3728 & 17.0$_{-0.0}^{+5.0}$ & 0.85$_{-0.01}^{+0.01}$ \\
OBS cnt vs H$\alpha$ & 50 & 0.0$_{-2.0}^{+2.0}$ & 0.69$_{-0.08}^{+0.07}$ \\
OBS cnt vs H$\beta$ & 63 & 9.7$_{-8.7}^{+20.3}$ & 0.79$_{-0.05}^{+0.05}$ \\
\hline
\end{tabular}
\end{table}
\subsubsection{Periodicity}
\label{sec:period}
In order to test for any meaningful signal in the light curves,
we calculated the Lomb-Scargle periodogram for the observed and GP light curves,
with a bootstrap analysis to assess its significance, as described in \cite{sh16,sh17}.
We tested whether a purely red noise model can produce periodic variability of the
light curves, obtaining random light curves from the Ornstein--Uhlenbeck (OU) process (red noise)
sampled at a regular time interval.
The periodogram analysis (Fig. \ref{period}) shows that there are no
significant periodic signals; the largest peak of each curve corresponds to the whole observed period.
However, in the H$\beta$ line one can see a peak at about $\sim$523 days, the continuum at $\lambda$5100 \AA\ has a peak at $\sim$698 days,
whereas the H$\alpha$ line has a peak at $\sim$515 days.
On the other hand, the GP light curves calculated from the observed light
curves do not exhibit any significant periodic signal.
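A sketch of the periodogram-plus-bootstrap procedure (using the astropy implementation; shuffling the fluxes destroys any coherent signal while preserving the sampling):
\begin{verbatim}
import numpy as np
from astropy.timeseries import LombScargle

def ls_peak_significance(t, f, n_boot=1000, seed=0):
    """Lomb-Scargle periodogram and a bootstrap p-value for the
    significance of its highest peak."""
    freq, power = LombScargle(t, f).autopower()
    rng = np.random.default_rng(seed)
    boot_max = [LombScargle(t, rng.permutation(f)).autopower()[1].max()
                for _ in range(n_boot)]
    p_value = np.mean(np.array(boot_max) >= power.max())
    return freq, power, p_value
\end{verbatim}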
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{lombngc3516.eps}
\caption{Lomb-Scargle periodogram of the observed light curves, red noise and GP
models.
The horizontal lines show the $1\%$ and $5\%$ significance levels for the
highest peak in the periodogram, determined by 1000 bootstrap resamplings.} \label{period}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=16cm]{year_host.ps}
\caption{Year-average host-galaxy and continuum subtracted spectra, obtained from those spectra covering the total wavelength range. All spectra are normalized to the [O III] $\lambda$5007 intensity and shifted for comparison.} \label{year}
\end{figure*}
\section{Discussion}
In this paper we investigated the long-term photometric and spectroscopic
variability of NGC 3516, observed with three telescopes in a monitoring campaign that lasted for 22
years (from 1996 to 2018). We find that the continuum flux changed by a factor of about two, while
the broad-line flux changed by more than a factor of ten. NGC 3516 changed its type of activity during the monitoring campaign, having a typical Sy 1 spectrum in the first period of the campaign, and changing from 2014 onwards to a spectrum without broad lines, similar to the spectrum of
Sy 2 galaxies, which is clearly visible in Figure \ref{year}, where we plot the year-average spectra corrected for the host galaxy and continuum.
We note that, as can be seen in Figure \ref{fig-mm}, the H$\beta$ narrow line also disappeared from the composite spectrum in 2014. It seems that the stellar H$\beta$ absorption in the low-state phase is so strong that the narrow emission was absorbed; this is supported by the host-galaxy corrected spectrum, in which the narrow H$\beta$ weakly reappears in 2014 (Fig.~\ref{year}). Such a low state was also observed in the X-rays: observations in 2013--2014 showed that the X-ray emission in this period was at a level of just 5\% of the average flux observed in the 1997--2002 period \citep[][]{no16}.
As we noted above, we performed observations in 2017 to obtain a high-resolution spectrum of the AGN in the minimum phase (see \S2). Figure \ref{sp2017} shows that after subtracting the host-galaxy spectrum, very weak broad emission lines remain (H$\alpha$, but also H$\beta$). We compare the broad line profiles of H$\beta$ and H$\alpha$ (Fig. \ref{broad2017}) and find that they are practically identical. H$\alpha$ and H$\beta$ also have the same FWHM, which is around
2000 km s$^{-1}$ (Fig. \ref{broad2017}). The FWHM in this period is around two times smaller than the FWHM of the averaged line profile, and both broad components are significantly blueshifted (by around 1000 km s$^{-1}$), which can indicate both an outflow in the minimum phase and disc emission \citep[][]{po02}. The shifted broad H$\beta$ and
H$\alpha$ with an extensive red wing can be created in an outflowing disc-like BLR
\citep[as it was discussed and presented in Figure 19 of][]{pop11}. The
investigation of the broad line profiles and consequently the model of the BLR
structure of NGC 3516 will be given in Paper II.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig13.eps}
\caption{The comparison of the broad H$\alpha$ and H$\beta$ profiles observed in
2017 with SCORPIO-2 spectrograph at the 6-m telescope, during the phase near minimal activity.} \label{broad2017}
\end{figure}
Additionally, we measure the narrow line ratios in the minimum phase and obtain
[O III]$\lambda$5007/H$\beta$=10.3 and [N II]$\lambda$6583/H$\alpha$=3.8, which
indicates strong shock-wave excitation in the narrow-line region,
possibly connected with gas outflowing along the edge of the
ionization cone \citep{af07}.
Variability in the profiles of the broad Balmer lines is high: as we noted above, the broad component almost disappeared in the last period of
the monitoring campaign (in 2014), and after the total minimum, low-flux broad lines started to appear again, as e.g. in 2017. Line variability in the UV spectral range was reported by
\cite{go99}, who found that high-ionization emission lines
(Ly$\alpha$ $\lambda$1215, C IV $\lambda$1549, N V $\lambda$1240, and He II $\lambda$1640) showed significant variations of the order of a factor of $\sim$2, similar to what
we find in the H$\beta$ and H$\alpha$ lines.
One of the most interesting facts is that NGC 3516 is a changing-look AGN (Fig. \ref{year}), and the nature of these objects can be different
\citep[see][etc.]{ma03,bi05,ki18,no18}. If an AGN changes its look from type 1 to type 1.9 or 2, this can be
explained via variable absorption of matter between the observer and
the accretion disc. In that case the obscuring material (e.g. dust clouds) should have a patchy distribution; the dynamical movement of dust clouds can then change the continuum (and broad line) emission, which affects the current classification.
On the other hand, any lack of accretion (which may be caused by different effects) could result in the lack of the ionizing continuum, and consequently in the lack of broad emission lines \citep[see][]{ki18,no18}. For example, Mrk 1018 changed from type 1.9 to 1 and returned back to 1.9 over a period of 40 years \citep[][]{ki18}. From our observations we could not see such a change in the
past, back to 1996, when we started our monitoring campaign. However, from the
comparison of the H$\beta$ profile observed in 1943 by \cite{sy43} and
in 1967 by \cite{and68}, it can be clearly seen that the broad H$\beta$
component was present in the observation from 1943 and was absent in the epoch
of 1967 \citep[see Fig. 3 in][]{and68}. Therefore we cannot exclude that
there is some repetition of the changing look of NGC 3516 (with some periodicity) that should be
investigated in the future. Additionally, there is something in common with
Mrk 1018, since both AGNs in their type 1 phase showed complex broad Balmer lines, which indicates more than one emission-line region \citep[][]{po02,ki18}. Finally, we note that NGC 3516 showed strong absorption in the UV lines
\citep[][]{go99}, as well as in the X-ray continuum \citep[see][etc.]{kr02,tu11,ho12}.
\subsection{Black hole mass determination}
The SMBH mass ($M_{\rm BH}$) of NGC 3516 can be estimated using the virial theorem \citep[see][]{pe14}:
\begin{equation}
M_{\rm BH}=f{\Delta V_{\rm FWHM}^2 R_{\rm BLR}\over G},
\end{equation}
where $\Delta V_{\rm FWHM}$ is the line-of-sight orbital velocity at the radius $R_{\rm BLR}$ of the BLR, which is estimated from the width
of the variable part of the H$\beta$ emission line, and $f$ is a factor that depends on the geometry and orientation of
the BLR. Different values are obtained for the scale factor $f$, depending on whether it was determined statistically \citep[see e.g.][]{on04,wo15} or by detailed modeling of the reverberation data \citep[see e.g.][]{pa14,gr17}.
Here we will use the recent result for the $f$ factor from \cite{wo15}, who obtained ${\rm log} f = 0.05 \pm0.12$ for H$\beta$ FWHM-based $M_{\rm BH}$ estimates. Taking into account that the dimension of the H$\beta$ BLR is $\sim$17 light days and that the
FWHM of the H$\beta$ rms-profile is 4360 km s$^{-1}$, we obtain that the central SMBH has a mass of (4.73$\pm$1.40)$\times10^7\ M_\odot$ ($\log(M[M_\odot])=7.67$). The uncertainties in the time lag and FWHM are propagated through to calculate the formal mass uncertainty.
Although we used the FWHM of the H$\beta$ line in our analysis, our result is in agreement with the estimates of other authors who calculated the mass based on the line dispersion: \cite{pe04} found a mass of (4.27$\pm$1.46)$\times 10^7M_\odot$ ($\log(M[M_\odot])=7.63$), \cite{de10} reported a mass of (3.17$^{+0.28}_{-0.42}$)$\times 10^7M_\odot$ ($\log(M[M_\odot])=7.50$), and the most recent finding of \cite{de18} is a mass of (4.27$\pm$1.35)$\times 10^7M_\odot$ ($\log(M[M_\odot])=7.63$).
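As a cross-check of the arithmetic, the virial formula can be evaluated with a short Python sketch (assuming the astropy package; the inputs are the values quoted above, and because the exact conventions entering $f$ and the velocity width are not fully spelled out here, the printed value should only be expected to match the quoted mass at the order-of-magnitude level):
\begin{verbatim}
from astropy import units as u
from astropy.constants import G, c

f = 10 ** 0.05                               # log f = 0.05 (see text)
fwhm = 4360 * u.km / u.s                     # H-beta rms FWHM (see text)
r_blr = (17 * u.day * c).decompose()         # 17 light days
m_bh = (f * fwhm**2 * r_blr / G).to(u.M_sun)
print(m_bh)                                  # a few 10^7 solar masses
\end{verbatim}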
\section{Conclusions}
Here we present the long-term (from 1996 to 2018) photometric and spectroscopic monitoring campaign for NGC 3516.
We analyze the observations in order to explore the long-term variability in the spectral characteristics of the object. NGC 3516 is known as a
variable object from the X-ray \citep[][]{no16} to the optical \citep{de10,de18} spectra. From our analysis of the long-term monitoring we can outline the
following conclusions:
\begin{enumerate}
\item During more than 20 years of monitoring, the range of continuum flux variations (blue at 5100 \AA\ and red at 6330 \AA) exceeded a factor of two, whereas the range of broad-line variations was of an order of magnitude.
This causes a huge change in the optical spectrum of NGC 3516, which during most of the monitored period has a typical Sy 1 spectrum, but from 2014, in the minimum of activity, the broad lines almost disappeared and NGC 3516 showed a spectrum typical for Sy 2 galaxies, apart from the narrow H$\beta$ line, which also disappeared from the composite spectrum after 2014 (Fig.~\ref{year}). This indicates strong absorption in this period. The spectrum did not change much in the following four years, until the end of the monitoring in 2018, when only weak broad H$\alpha$ and H$\beta$ components are present. These components have the same shape (peaks blueshifted by around 1000 km
s$^{-1}$ and a larger red wing) with a smaller FWHM (around 2000 km s$^{-1}$) than the averaged
broad line profiles (FWHM $\sim$ 4000--5000 km s$^{-1}$). This indicates
that the structure of the BLR has changed significantly.
\item During the main monitoring period (1996--2007), there is a good correlation between fluxes in the broad lines and the corresponding continuum ($r\sim0.8$). This indicates that the main mechanism for the formation of broad emission lines in the BLR is photoionization by the continuum from the nucleus. However, in the low-activity phase the broad-line fluxes are caused mainly by shock excitation as a result of an outflow, and not by photoionization from a pure AGN (see Fig. \ref{fig2} and discussion).
\item We find that the H$\beta$ BLR has a dimension of 17 light days. Using this dimension and the FWHM of the H$\beta$ rms-profile, we find that the mass of the central black hole is (4.73$\pm$1.40)$\times 10^7M_\odot$, which is in agreement with previous estimates \citep[][]{de10,de18}.
\item The mean and rms line profiles indicate a complex BLR, probably with two components \citep[see][]{po02}; however, we did not investigate the broad line profiles in more detail here, and we leave the investigation of the broad-line structure to Paper II.
\end{enumerate}
\section*{Acknowledgements}
The authors thank the anonymous referee for useful comments and suggestions.
This work was supported by: INTAS (grant N96-0328), RFBR (grants
N97-02-17625, N00-02-16272, N03-02-17123, 06-02-16843, N09-02-01136,
12-02-00857a, 12-02-01237a, N15-02-02101), CONACYT research grants 39560, 54480,
151494, and 280789 (M{\'e}xico), and the Ministry of Education and Science of the Republic of Serbia through the project
Astrophysical Spectroscopy of Extragalactic Objects (176001).
We especially thank Borisov N.V., Fathulin T., Fioktistova I., Moiseev A.,
Mikhailov V., and Vlasyuk V.V. for taking part in the observations.
\section{Introduction}
Devising Convolutional Neural Networks (CNNs) that can run efficiently on resource-constrained edge devices has become an important research area. There is a continued push to put increasingly more capabilities on-device for personal privacy, latency, and scalability of solutions. On these constrained devices, there is often extremely high demand for a limited amount of resources, including computation and memory, as well as power constraints to increase battery life. Along with this trend, there has also been greater ubiquity of custom chip-sets, Field Programmable Gate Arrays (FPGAs), and low-end processors that can be used to run CNNs, rather than traditional GPUs.
A common design choice is to reduce the FLOPs and parameters of a network by factorizing convolutional layers~\cite{howard2017mobilenets, sandler2018mobilenetv2, ma2018shufflenet,zhang2017shufflenet} into a depth-wise separable convolution that consists of two components: (1) \emph{spatial fusion}, where each spatial channel is convolved independently by a depth-wise convolution, and (2) \emph{channel fusion}, where all the spatial channels are linearly combined by $1 \times 1$ convolutions, known as pointwise convolutions. Inspecting the computational profile of these networks at inference time reveals that the computational burden of the spatial fusion is relatively negligible compared to that of the channel fusion\cite{howard2017mobilenets}. In this paper we focus on designing an efficient replacement for these pointwise convolutions.
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\linewidth]{figs/test3.png}
\caption{\footnotesize Replacing pointwise convolutions with BFT in state-of-the-art architectures results in significant accuracy gains in resource constrained settings.}
\label{fig:mobilenet_spectrum}
\end{figure}
We propose a set of principles to design a replacement for pointwise convolutions motivated by both efficiency and accuracy. The proposed principles are as follows: (1) \textit{full connectivity from every input to all outputs}: to allow outputs to use all available information, (2) \textit{large information bottleneck}: to increase representational power throughout the network, (3) \textit{low operation count}: to reduce the computational cost, (4) \textit{operation symmetry}: to allow operations to be stacked into dense matrix multiplications. In Section~\ref{sec:model}, we formally define these principles, and mathematically prove a lower-bound of $O(n\log n)$ operations to satisfy these principles. We propose a novel, lightweight convolutional building block based on the Butterfly Transform (BFT). We prove that BFT yields an asymptotically optimal FLOP count under these principles.
We show that BFT can be used as a drop-in replacement for pointwise convolutions in several state-of-the-art efficient CNNs. This significantly reduces the computational bottleneck for these networks. For example, replacing pointwise convolutions with BFT decreases the computational bottleneck of MobileNetV1 from $95\%$ to $60\%$, as shown in Figure \ref{fig:mobilenet_pie}. We empirically demonstrate that using BFT leads to significant increases in accuracy in constrained settings, including up to a $6.75\%$ absolute Top-1 gain for MobileNetV1, $4.4\%$ for ShuffleNet V2 and $5.4\%$ for MobileNetV3 on the ImageNet\cite{deng2009imagenet} dataset. There have been several efforts on using butterfly operations in neural networks \cite{eunn_rnn,dao2019learning,Munkhoeva_quadrature} but, to the best of our knowledge, our method outperforms all other structured matrix methods (Table \ref{tab:lowrank}) for replacing pointwise convolutions as well as state-of-the-art Neural Architecture Search (Table \ref{tab:search}) by a large margin at low FLOP ranges.
\section{Related Work}
\label{sec:related_work}
Deep neural networks suffer from intensive computations. Several approaches have been proposed to address efficient training and inference in deep neural networks.
\paragraph{Efficient CNN Architecture Designs:} Recent successes in visual recognition tasks, including object classification, detection, and segmentation, can be attributed to exploration of different CNN designs \cite{lecun1990handwritten,simonyan2014very,he2016deep, krizhevsky2012imagenet,szegedy2015going, huang2017densely}. To make these network designs more efficient, some methods have factorized convolutions into different steps, enforcing distinct focuses on spatial and channel fusion \cite{howard2017mobilenets, sandler2018mobilenetv2}. Further, other approaches extended the factorization schema with sparse structure either in channel fusion \cite{ma2018shufflenet,zhang2017shufflenet} or spatial fusion \cite{mehta2018espnetv2}. \cite{huang2017condensenet} forced more connections between the layers of the network but reduced the computation by designing smaller layers. Our method follows the same direction of designing a sparse structure on channel fusion that enables lower computation with a minimal loss in accuracy.
\paragraph{Structured Matrices:} There have been many methods which attempt to reduce the computation in CNNs, \cite{wen2016learning, li2018constrained, denton2014exploiting, jaderberg2014speeding} by exploiting the fact that CNNs are often extremely overparameterized. These models learn a CNN or fully connected layer by enforcing a linear transformation structure during the training process which has less parameters and computation than the original linear transform. Different kinds of structured matrices have been studied for compressing deep neural networks, including circulant matrices\cite{ding2017c}, toeplitz-like matrices\cite{toeplitz_small_footprint}, low rank matrices\cite{Sainath2013LowrankMF}, and fourier-related matrices\cite{ACDC_moczulski}. These structured matrices have been used for approximating kernels or replacing fully connected layers.
UGConv \cite{ugconv} has considered replacing one of the pointwise convolutions in the ShuffleNet structure with unitary group convolutions, while our Butterfly Transform is able to replace all of the pointwise convolutions.
The butterfly structure has been studied for a long time in linear algebra \cite{Parker95randombutterfly, Li2015ButterflyF} and neural network models \cite{bidirectional_bft}. Recently, it has received more attention from researchers who have used it in RNNs \cite{eunn_rnn}, kernel approximation\cite{Munkhoeva_quadrature, mathew_approx_hessian, choromanski-orthogmontcarlo} and fully connected layers\cite{dao2019learning}.
We have generalized butterfly structures to replace pointwise convolutions, and have significantly outperformed all known structured matrix methods for this task, as shown in Table \ref{tab:lowrank}.
\paragraph{Network pruning:} This line of work focuses on reducing the substantial redundancy in CNN parameters by pruning out either neurons or weights \cite{han2015deep, han2015learning, Wortsman_neurips19, BagherinezhadRF16}. Our method differs from these methods in that we enforce a predefined sparse channel structure from the start and do not change the structure of the network during training.
\paragraph{Quantization:} Another approach to improving the efficiency of deep networks is the low-bit representation of network weights and neurons using quantization \cite{soudry2014expectation,rastegari2016xnor,wu2016quantized,courbariaux2016binarized,zhou2016dorefa,hubara2016quantized,andri2018yodann}. These approaches use fewer bits (instead of 32-bit high-precision floating points) to represent weights and neurons for the standard training procedure of a network. In the case of extremely low bitwidth (1-bit), \cite{rastegari2016xnor} had to modify the training procedure to find the discrete binary values for the weights and the neurons in the network. Our method is orthogonal to this line of work, and these methods are complementary to our network.
\begin{figure*}[t!]
\centering
\includegraphics[width=1.0\textwidth]{figs/BFT.pdf}
\caption{\footnotesize \textbf{BFT Architecture:} This figure illustrates the graph structure of the proposed Butterfly Transform. The left figure shows the recursive procedure of the BFT that is applied to an input tensor and the right figure shows the expanded version of the recursive procedure as $\log n$ Butterfly Layers in the network. }
\label{fig:BFT}
\end{figure*}
\paragraph{Neural architecture search:} Recently, neural search methods, including reinforcement learning and genetic algorithms, have been proposed to automatically construct network architectures \cite{zoph2016neural,xie2017genetic,real2017large,zoph2018learning,tan2018mnasnet,liu2018progressive}. Recent search-based methods \cite{tan2018mnasnet,cai2018proxylessnas,wu2018fbnet, mobilenet_v3_dblp} use Inverted Residual Blocks \cite{sandler2018mobilenetv2} as a basic search block for automatic network design. The main computational bottleneck in most of these search-based methods is in the channel fusion, and our butterfly structure does not exist in any of their predefined blocks. Our efficient channel fusion can be augmented with these models to further improve their efficiency. Our experiments show that our proposed butterfly structure outperforms recent architecture-search-based models on small network design.
\section{Model}
\label{sec:model}
In this section, we outline the details of the proposed model. As discussed above, the main computational bottleneck in current efficient neural architecture design is in the channel fusion step, which is implemented with a pointwise convolution layer. The input to this layer is a tensor $\mathbf{X}$ of size $n_{\text{in}}\times h \times w$, where $n_{\text{in}}$ is the number of channels and $w$, $h$ are the width and height respectively. The size of the weight tensor $\mathbf{W}$ is $ n_{\text{out}} \times n_\text{in} \times 1 \times 1 $ and the output tensor $\mathbf{Y}$ is $n_\text{out} \times h \times w$. For the sake of simplicity, we assume $n = n_\text{in} = n_\text{out}$. The complexity of a pointwise convolution layer is $\mathcal{O}(n^2wh)$, and it is mainly determined by the number of channels $n$. We propose to use the \emph{Butterfly Transform} as a layer with $\mathcal{O}((n\log n) wh)$ complexity. This design is inspired by the Fast Fourier Transform (FFT) algorithm, which has been widely used in computational engines for a variety of applications; many optimized hardware/software designs exist for its key operations and are applicable to our method. In the following subsections we explain the problem formulation and the structure of our butterfly transform.
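As a quick sanity check of this complexity gap, the following sketch (with illustrative layer dimensions) compares the FLOP counts of a pointwise convolution and a base-2 BFT for one layer:
\begin{verbatim}
import numpy as np

n, h, w = 256, 14, 14
pointwise_flops = n * n * h * w              # O(n^2 h w)
bft_flops = 2 * n * int(np.log2(n)) * h * w  # O(n log n h w), base k = 2
print(pointwise_flops / bft_flops)           # n / (2 log2 n) = 16x here
\end{verbatim}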
\subsection{Pointwise Convolution as Matrix-Vector Products}
A pointwise convolution can be defined as a function $\mathcal{P}$ as follows:
\begin{equation}
\mathbf{Y} = \mathcal{P}(\mathbf{X}; \mathbf{W})
\label{eq:pointwiseconv}
\end{equation}
This can be written as a matrix product by reshaping the input tensor $\mathbf{X}$ to a 2-D matrix $\mathbf{\hat{X}}$ with size $n \times (hw)$ (each column vector in the $\mathbf{\hat{X}}$ corresponds to a spatial vector $\mathbf{X}[:,i,j]$) and reshaping the weight tensor to a 2-D matrix $\mathbf{\hat{W}}$ with size $n \times n$,
\begin{equation}
\mathbf{\hat{Y}} = \mathbf{\hat{W}}\mathbf{\hat{X}}
\end{equation}
where $\mathbf{\hat{Y}}$ is the matrix representation of the output tensor $\mathbf{Y}$. This can be seen as a linear transformation of the vectors in the columns of $\mathbf{\hat{X}}$ using $\mathbf{\hat{W}}$ as a transformation matrix. The linear transformation is a matrix-vector product and its complexity is $\mathcal{O}(n^2)$. By enforcing structure on this transformation matrix, one can reduce the complexity of the transformation. However, to be effective as a channel fusion transform, it is critical that this transformation respects the desirable characteristics detailed below.
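A minimal NumPy sketch of this reshaping (with arbitrarily chosen dimensions) makes the equivalence explicit:
\begin{verbatim}
import numpy as np

n, h, w = 8, 4, 4
X = np.random.randn(n, h, w)                 # input tensor
W = np.random.randn(n, n)                    # 1x1 convolution weights
X_hat = X.reshape(n, h * w)                  # columns are X[:, i, j]
Y_hat = W @ X_hat                            # O(n^2) per spatial position
Y = Y_hat.reshape(n, h, w)                   # output tensor
\end{verbatim}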
\paragraph{Fusion network design principles:}
1) \textit{full connectivity from every input to all outputs}: This condition allows every single output to have access to all available information in the inputs. 2) \textit{large information bottleneck}: The bottleneck size is defined as the minimum number of nodes in the network that if removed, the information flow from input channels to output channels would be completely cut off (i.e. there would be no path from any input channel to any output channel). The representational power of the network is bound by the bottleneck size. To ensure that information is not lost while passed through the channel fusion, we set the minimum bottleneck size to $n$. 3) \textit{low operation count}: The fewer operations, or equivalently edges in the graph, that there are, the less computation the fusion will take. Therefore we want to reduce the number of edges. 4) \textit{operation symmetry}: By enforcing that there is an equal out-degree in each layer, the operations can be stacked into dense matrix multiplications, which is in practice much faster for inference than sparse computation.
\textbf{\textit{Claim}:} A multi-layer network with these properties has at least $\mathcal{O}(n \log n)$ edges.
\textbf{\textit{Proof}:} Suppose the network has $m$ layers and there are $n_i$ nodes in the $i^\text{th}$ layer. Removing all the nodes in one layer disconnects inputs from outputs, so each layer is a bottleneck; since the bottleneck size must be at least $n$, we have $n_i \geq n$.
Now suppose that the out-degree of each node at layer $i$ is $d_i$. The number of nodes in layer $i$ that are reachable from a given input channel is at most $\prod_{j=0}^{i-1} d_j$. Because of the every-to-all connectivity, all of the $n$ nodes in the output layer are reachable. Therefore $\prod_{j=0}^{m-1} d_j \geq n$.
This implies that $\sum_{j=0}^{m-1} \log_2(d_j) \geq \log_2(n)$. The total number of edges will be:
$\sum_{j=0}^{m-1} n_j d_j \geq n\sum_{j=0}^{m-1} d_j \geq n \sum_{j=0}^{m-1} \log_2(d_j) \geq n\log_2n \blacksquare$
For example, a butterfly network with base $k=2$ has $\log_2 n$ layers of $n$ nodes each with out-degree $2$, hence exactly $2n\log_2 n$ edges, matching the bound up to a constant. In the following section we present a network structure that satisfies all the design principles for a fusion network.
\subsection{Butterfly Transform (BFT)}
As mentioned above, we can reduce the complexity of a matrix-vector product by enforcing structure on the matrix, and there are several ways to do so. Here we first explain how channel fusion is done through BFT, and then show that the family of structured matrices equivalent to this fusion leads to $\mathcal{O}(n\log n)$ complexity in operations and parameters while maintaining accuracy.
\paragraph{Channel Fusion through BFT:}
We want to fuse information among all channels, and we do so in sequential layers. In the first layer we partition the input channels into $k$ parts $\mathbf{x}_1,..,\mathbf{x}_k$ of size $\frac{n}{k}$ each. We also partition the output channels of this first layer into $k$ parts $\mathbf{y}_1,..,\mathbf{y}_k$ of size $\frac{n}{k}$ each.
We connect the elements of $\mathbf{x}_j$ to $\mathbf{y}_i$ with $\frac{n}{k}$ parallel edges, represented by the diagonal matrix $\mathbf{D}_{ij}$. After combining information this way, each $\mathbf{y}_i$ contains information from all channels; we then recursively fuse the information within each $\mathbf{y}_i$ in the next layers.
\paragraph{Butterfly Matrix:}
In matrix terms, $\mathbf{B}^{(n,k)}\in \real^{n\times n}$ denotes a butterfly matrix of order $n$ and base $k$, which is equivalent to the fusion process described above.
\begin{equation}
\mathbf{B}^{(n, k)} =
\begin{pmatrix}
\mathbf{M}^{(\frac{n}{k},k)}_{1} \mathbf{D}_{11} & \dots & \mathbf{M}^{(\frac{n}{k},k)}_{1} \mathbf{D}_{1k}\\
\vdots & \ddots & \vdots \\
\mathbf{M}^{(\frac{n}{k},k)}_{k} \mathbf{D}_{k1} & \dots & \mathbf{M}^{(\frac{n}{k},k)}_{k} \mathbf{D}_{kk}\\
\end{pmatrix}
\end{equation}
where $\mathbf{M}^{(\frac{n}{k},k)}_{i}$ is a butterfly matrix of order $\frac{n}{k}$ and base $k$, and $\mathbf{D}_{ij}$ is an arbitrary diagonal $\frac{n}{k} \times \frac{n}{k}$ matrix. The matrix-vector product between a butterfly matrix $\mathbf{B}^{(n,k)}$ and a vector $\mathbf{x}\in \real ^{n}$ is:
\begin{equation}
\mathbf{B}^{(n, k)}\mathbf{x} =
\begin{pmatrix}
\mathbf{M}^{(\frac{n}{k},k)}_{1} \mathbf{D}_{11} & \dots & \mathbf{M}^{(\frac{n}{k},k)}_{1} \mathbf{D}_{1k}\\
\vdots & \ddots & \vdots \\
\mathbf{M}^{(\frac{n}{k},k)}_{k} \mathbf{D}_{k1} & \dots & \mathbf{M}^{(\frac{n}{k},k)}_{k} \mathbf{D}_{kk}\\
\end{pmatrix}
\begin{pmatrix}
\mathbf{x}_1\\
\vdots\\
\mathbf{x}_k
\end{pmatrix}
\end{equation}
where $\mathbf{x}_i \in \real ^{\frac{n}{k}}$ is a subsection of $\mathbf{x}$ obtained by breaking $\mathbf{x}$ into $k$ equal-sized vectors. Therefore, the product can be simplified by factoring out $\mathbf{M}$ as follows:
\begin{equation}
\begin{matrix}
\mathbf{B}^{(n, k)}\mathbf{x} =
\begin{pmatrix}
\mathbf{M}^{(\frac{n}{k},k)}_1\sum_{j=1}^{k}{\mathbf{D}_{1j}\mathbf{x}_j}\\
\vdots\\
\mathbf{M}^{(\frac{n}{k},k)}_i\sum_{j=1}^{k}{\mathbf{D}_{ij}\mathbf{x}_j}\\
\vdots\\
\mathbf{M}^{(\frac{n}{k},k)}_k\sum_{j=1}^{k}{\mathbf{D}_{kj}\mathbf{x}_j}
\end{pmatrix} = \begin{pmatrix}
\mathbf{M}^{(\frac{n}{k},k)}_1\mathbf{y}_1\\
\vdots\\
\mathbf{M}^{(\frac{n}{k},k)}_i\mathbf{y}_i\\
\vdots\\
\mathbf{M}^{(\frac{n}{k},k)}_k\mathbf{y}_k
\end{pmatrix}
\end{matrix}
\label{eq:halfbutterfly}
\end{equation}
where $\mathbf{y}_i = \sum_{j=1}^{k}{\mathbf{D}_{ij}\mathbf{x}_j}$. Note that $\mathbf{M}^{(\frac{n}{k},k)}_i\mathbf{y}_i$ is a smaller product between a butterfly matrix of order $\frac{n}{k}$ and a vector of size $\frac{n}{k}$; therefore, we can use divide-and-conquer to recursively calculate the product $\mathbf{B}^{(n, k)}\mathbf{x}$. Let $T(n,k)$ be the computational complexity of the product between an $(n,k)$ butterfly matrix and an $n$-D vector. From equation \ref{eq:halfbutterfly}, the product can be calculated by $k$ products of butterfly matrices of order $\frac{n}{k}$, whose complexity is $kT({n}/{k},k)$. The complexity of calculating $\mathbf{y}_i$ for all $i \in \{1, \dots ,k\}$ is $\mathcal{O}(kn)$, therefore:
\begin{equation}
T(n,k) = kT({n}/{k},k)+\mathcal{O}(kn)\\
\end{equation}
\begin{equation}
T(n,k) = \mathcal{O}(k(n\log_{k} n))
\end{equation}
With a smaller choice of $k$ ($2\leq k \leq n$) we can achieve a lower complexity. Algorithm \ref{alg:effproduct} illustrates the recursive procedure of the butterfly transform when $k=2$.
\small
\newcommand{$i=0$ \KwTo $n$}{$i=0$ \KwTo $n$}
\SetKwFunction{BFT}{ButterflyTransform}%
\SetKwProg{Fn}{Function}{:}{}
\begin{algorithm}[!h]
\SetAlgoLined
\Fn(\tcc*[]{algorithm as a recursive function}){\BFT{W, X, n}}
{\KwData{W Weights containing $2n\log(n)$ numbers}
\KwData{X An input containing $n$ numbers}
\uIf{n == 1}{
\KwRet{[X]} \;
}
Make $D_{11}, D_{12}, D_{21}, D_{22}$ using first $2n$ numbers of $W$\;
Split the remaining $2n(\log(n)-1)$ numbers into two sequences $W_1, W_2$ of length $n(\log(n)-1)$ \;
Split $X$ to $X_1, X_2$\;
$y_1 \longleftarrow D_{11} X_1 + D_{12} X_2$\;
$y_2 \longleftarrow D_{21} X_1 + D_{22} X_2$\;
$My_1 \longleftarrow \BFT(W_1, y_1, n/2)$\;
$My_2 \longleftarrow \BFT(W_2, y_2, n/2)$\;
\KwRet{Concat($My_1, My_2$)}\;
}
\caption{Recursive Butterfly Transform}
\label{alg:effproduct}
\end{algorithm}
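The sketch below is an unoptimized NumPy reference following Algorithm \ref{alg:effproduct} for base $k=2$; the flat weight layout (first $2n$ entries for the four diagonals, the rest split between the two halves) is one possible choice:
\begin{verbatim}
import numpy as np

def bft(W, x):
    """Butterfly product B^(n,2) x; W holds 2 n log2(n) weights."""
    n = x.size
    if n == 1:
        return x
    m = n // 2
    d11, d12, d21, d22 = (W[i * m:(i + 1) * m] for i in range(4))
    y1 = d11 * x[:m] + d12 * x[m:]           # O(n) work on this level
    y2 = d21 * x[:m] + d22 * x[m:]
    rest = W[2 * n:]                         # weights of the two halves
    half = rest.size // 2
    return np.concatenate([bft(rest[:half], y1), bft(rest[half:], y2)])

n = 16
W = np.random.randn(2 * n * int(np.log2(n)))
y = bft(W, np.random.randn(n))
\end{verbatim}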
\begin{figure*}[!t]
\centering
\includegraphics[width= 1.0\textwidth]{figs/both.png}
\caption {\footnotesize \textbf{Distribution of FLOPs:} This figure shows that replacing the pointwise convolution with BFT reduces the size of the computational bottleneck. }
\label{fig:mobilenet_pie}
\end{figure*}
\subsection{Butterfly Neural Network}
The procedure explained in Algorithm \ref{alg:effproduct} can be represented by a butterfly graph similar to the FFT's graph. The butterfly network structure has been used for function representation \cite{li2018butterfly} and fast factorization for approximating linear transformations \cite{dao2019learning}. We adopt this graph as an architecture design for the layers of a neural network. Figure~\ref{fig:BFT} illustrates the architecture of a butterfly network of base $k=2$ applied to an input tensor of size $n\times h \times w$. The left figure shows the recursive structure of the BFT as a network. The right figure shows the constructed multi-layer network, which has $\log n$ Butterfly Layers (BFLayers). Note that the complexity of each Butterfly Layer is $\mathcal{O}(n)$ ($2n$ operations); therefore, the total complexity of the BFT architecture is $\mathcal{O}(n\log n)$.
Each Butterfly Layer can be augmented with batch norm and non-linearity functions (\textit{e.g.} ReLU, Sigmoid). In Section \ref{sec:ablation} we study the effect of different choices of these functions. We found that neither batch norm nor the nonlinear functions (ReLU and Sigmoid) are effective within BFLayers. Batch norm is not effective mainly because its complexity is the same as that of the BFLayer, $\mathcal{O}(n)$, so it doubles the computation of the entire transform; we therefore use batch norm only at the end of the transform. The non-linear activations ReLU and Sigmoid zero out almost half of the values in each BFLayer, and the multiplication of these values throughout the forward propagation destroys all the information.
The BFLayers can be internally connected with residual connections in different ways. In our experiments, we found that the best residual connection is the one that connects the input of the first BFLayer to the output of the last BFLayer.
The base of the BFT affects the shape and the number of FLOPs. We empirically found that base $k=4$ achieves the highest accuracy while having the same number of FLOPs as base $k=2$, as shown in Figure \ref{fig:butterfly_base}.
The butterfly network satisfies all the fusion network design principles: there exists exactly one path from every input channel to every output channel, the out-degree of each node in the graph is exactly $k$, the bottleneck size is $n$, and the number of edges is $\mathcal{O}(n \log n)$.
We use the BFT architecture as a replacement for the pointwise convolution layers ($1\times 1$ convs) in different CNN architectures, including MobileNetV1 \cite{howard2017mobilenets}, ShuffleNetV2 \cite{ma2018shufflenet} and MobileNetV3 \cite{mobilenet_v3_dblp}. Our experimental results show that, under the same number of FLOPs, the efficiency gained by BFT is more effective in terms of accuracy than the original model with a smaller channel rate. We show consistent accuracy improvements across several architecture settings.
Fusing channels with BFT instead of pointwise convolutions reduces the size of the computational bottleneck by a large margin. Figure \ref{fig:mobilenet_pie} illustrates the percentage of operations performed by each block type during a forward pass through the network. Note that when BFT is applied, the percentage of the depth-wise convolutions increases by $\sim 8\times$.
\section{Experiments}
\label{sec:experiments}
In this section, we demonstrate the performance of the proposed \sys on large-scale image classification tasks. To showcase the strength of our method in designing very small networks, we compare the performance of the Butterfly Transform with pointwise convolutions in three state-of-the-art efficient architectures: (1) MobileNetV1, (2) ShuffleNetV2, and (3) MobileNetV3. We compare our results with other types of structured matrices that allow $\mathcal{O}(n \log n)$ computation (e.g. low-rank and circulant transforms). We also show that our method outperforms state-of-the-art architecture search methods at low FLOP ranges.
\begin{table*}[!t]
\begin{small}
\begin{subtable}{.50 \linewidth}
\centering
\caption{}
\begin{tabular}{|l||l|l|l|}
\hline
Flops & ShuffleNetV2 & ShuffleNetV2\textbf{+BFT} & Gain\\ \hline \hline
14 M & 50.86 (14 M)* & 55.26 (14 M) & \textbf{4.40}\\ \hline
21 M & 55.21 (21 M)* & 57.83 (21 M) & \textbf{2.62}\\ \hline
40 M & \begin{tabular}[c]{@{}l@{}}59.70(41 M)*\\ 60.30 (41 M)\end{tabular} & 61.33 (41 M) & \begin{tabular}[c]{@{}l@{}}\textbf{1.63}\\\textbf{1.03}\end{tabular}\\ \hline
\end{tabular}
\label{tab:shufflenet_low_flop}
\caption{}
\begin{tabular}{|l||l|l|l|}
\hline
Flops & MobileNetV3 & MobileNetV3\textbf{+BFT} & Gain \\ \hline \hline
10-15 M & 49.8 (13 M) & 55.21 (15 M) & \textbf{5.41}\\ \hline
\end{tabular}
\label{tab:mobilenetv3_low_flop}
\end{subtable}
\begin{subtable}{.5\linewidth}
\centering
\caption{}
\begin{tabular}{|l||l|l|l|}
\hline
Flops &
MobileNet & MobileNet\textbf{+BFT} & Gain \\ \hline \hline
14 M & 41.50 (14 M) & 46.58 (14 M) & \textbf{5.08}\\ \hline
20 M & 45.50 (21 M) & 52.26 (23 M) & \textbf{6.76} \\ \hline
40 M & \begin{tabular}[c]{@{}l@{}}47.70 (34 M) \\ 50.60 (41 M)\\ \end{tabular}& 54.30 (35 M) & \begin{tabular}[c]{@{}l@{}} \textbf{6.60} \\ \textbf{3.70} \end{tabular} \\ \hline
50 M & 56.30 (49 M) & \begin{tabular}[c]{@{}l@{}}57.56 (51 M) \\ 58.35 (52 M)\\ \end{tabular} & \begin{tabular}[c]{@{}l@{}} \textbf{1.26} \\ \textbf{2.05} \end{tabular} \\ \hline
110 M & 61.70 (110 M) & 63.03 (112 M)& \textbf{1.33} \\ \hline
150 M & 63.30 (150 M) & 64.32 (150 M)& \textbf{1.02} \\ \hline
\end{tabular}
\label{tab:mobilenet_low_flop}
\end{subtable}
\end{small}
\label{tab:mobilenet_shufflenet}
\caption{\footnotesize These tables compare the accuracy of ShuffleNetV2, MobileNetV1 and MobileNetV3 when using standard pointwise convolution vs using BFTs}
\end{table*}
\subsection{Image Classification}
\subsubsection{Implementation and Dataset Details:}
Following standard practice, we evaluate the performance of Butterfly Transforms on the ImageNet dataset at different levels of complexity, ranging from 14 MFLOPs to 150 MFLOPs. The ImageNet classification dataset contains 1.2M training samples and 50K validation samples, uniformly distributed across 1000 classes.
For each architecture, we substitute pointwise convolutions with Butterfly Transforms. To keep the FLOP count similar between BFT and pointwise convolutions, we adjust the channel numbers in the base architectures (MobileNetV1, ShuffleNetV2, and MobileNetV3). For all architectures, we optimize our network by minimizing cross-entropy loss using SGD. Specific learning rate regimes are used for each architecture which can be found in the Appendix. Since BFT is sensitive to weight decay, we found that using little or no weight decay provides much better accuracy. We experimentally found (Figure \ref{fig:butterfly_base}) that butterfly base $k = 4$ performs the best. We also used a custom weight initialization for the internal weights of the Butterfly Transform which we outline below. More information and intuition on these hyper-parameters can be found in our ablation studies (Section \ref{sec:ablation}).
\paragraph{Weight initialization:}
Proper weight initialization is critical for convergence of neural networks, and if done improperly can lead to instability in training, and poor performance. This is especially true for Butterfly Transforms due to the amplifying effect of the multiplications within the layer, which can create extremely large or small values. A common technique for initializing pointwise convolutions is to initialize weights uniformly from the range $(-x, x)$ where $x = \sqrt{\frac{6}{n_{in}+n_{out}}}$, which is referred to as Xavier initialization~\cite{xavier-init}. We cannot simply apply this initialization to butterfly layers, since we are changing the internal structure.
We denote by $B^{(n,k)}_{u,v}$ the product of all the edges on the path from input node $u$ to output node $v$. We propose initializing the weights of the butterfly layers from a range $(-y,y)$, such that the products of all edges along paths, or equivalently the values in $B^{(n,k)}$, are initialized close to the range $(-x,x)$. To do this, we solve for a $y$ which makes the expectation of the absolute value of the elements of $B^{(n,k)}$ equal to the expectation of the absolute value of the weights under standard Xavier initialization, which is $x/2$. Let $e_1,..,e_{\log(n)}$ be the edges on the path $p$ from input node $u$ to output node $v$. We have the following:
\begin{equation}
E[|B^{(n,k)}_{u,v}|] = E[|\prod_{i=1}^{log(n)}{e_i}|] = \frac{x}{2}
\end{equation}
We initialize each $e_i$ in range $(-y, y)$ where
\begin{equation}
(\frac{y}{2})^{log(n)} = \frac{x}{2} \implies y = x^{\frac{1}{log(n)}} * 2^{\frac{log(n)-1}{log(n)}}.
\end{equation}
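A small sketch of this initialization rule (assuming base $k=2$, so each path has $\log_2 n$ edges; the channel counts are placeholders):
\begin{verbatim}
import numpy as np

def butterfly_init_bound(n_in, n_out, n):
    """Bound y so that products over log2(n) edges match Xavier's x/2."""
    x = np.sqrt(6.0 / (n_in + n_out))        # standard Xavier bound
    logn = np.log2(n)
    return x ** (1.0 / logn) * 2.0 ** ((logn - 1.0) / logn)

n = 64
y = butterfly_init_bound(n, n, n)
W = np.random.uniform(-y, y, size=2 * n * int(np.log2(n)))
\end{verbatim}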
\setlength{\intextsep}{5pt}%
\setlength{\columnsep}{7pt}%
\begin{table*}[!t]
\begin{small}
\label{tab:othercomp}
\begin{subtable}{.5\linewidth}
\centering
\caption{\textbf{BFT vs. Architecture Search}}
\begin{tabular}{|l|l|}
\hline
Model & Accuracy \\ \hline\hline
ShuffleNetV2+\textbf{BFT} (14 M) & \textbf{55.26} \\ \hline
MobileNetV3Small-224-0.5+\textbf{BFT} (15 M) & \textbf{55.21} \\ \hline
FBNet-96-0.35-1 (12.9 M) & 50.2 \\ \hline
FBNet-96-0.35-2 (13.7 M) & 51.9 \\ \hline
MNasNet (12.7 M) & 49.3 \\ \hline
MobileNetV3Small-224-0.35 (13 M) & 49.8 \\ \hline
MobileNetV3Small-128-1.0 (12 M) & 51.7 \\ \hline
\end{tabular}
\label{tab:search}
\end{subtable}%
\begin{subtable}{.5\linewidth}
\centering
\caption{\textbf{BFT vs. Other Structured Matrix Approaches}}
\begin{tabular}{|l|l|}
\hline
Model & Accuracy \\ \hline\hline
MobilenetV1+\textbf{BFT} (35 M) & \textbf{54.3} \\ \hline
MobilenetV1 (42 M) & 50.6 \\ \hline
MobilenetV1+Circulant* (42 M) & 35.68 \\ \hline
MobilenetV1+low-rank* (37 M) & 43.78 \\ \hline
MobilenetV1+BPBP (35 M) & 49.65 \\ \hline
MobilenetV1+Toeplitz* (37 M) & 40.09 \\ \hline
MobilenetV1+FastFood* (37 M) & 39.22 \\ \hline
\end{tabular}
\label{tab:lowrank}
\end{subtable}
\end{small}
\caption{\footnotesize These tables compare BFT with other efficient network design approaches. In Table (a), we show that ShuffleNetV2 + BFT outperforms state-of-the-art neural architecture search methods (MNasNet \cite{tan2018mnasnet}, FBNet\cite{wu2018fbnet}, MobilenetV3\cite{mobilenet_v3_dblp}). In Table (b), we show that BFT achieves significantly higher accuracy than other structured matrix approaches which can be used for channel fusion. The * denotes that this is our implementation.}
\end{table*}
\subsubsection{MobileNetV1 + BFT}
\begin{wrapfigure}{r}{3cm}
\centering
\includegraphics[]{figs/mobilenet_bft.png}
\caption {\footnotesize \\ \textbf{MobileNetV1+BFT Block}}
\label{fig:mobilenet_bft_block}
\end{wrapfigure}
To add BFT to MobileNetV1, for all MobileNetV1 blocks, which consist of a depthwise layer followed by a pointwise layer, we replace the pointwise convolution with our Butterfly Transform, as shown in Figure \ref{fig:mobilenet_bft_block}. We would like to emphasize that this means we replace \emph{all} pointwise convolutions in MobileNetV1 with BFT. In Figure \ref{fig:mobilenet_spectrum}, we show that a spectrum of MobileNetV1+BFT models outperforms a spectrum of MobileNetV1s from about 14M to 150M FLOPs within the same FLOP range. Our experiments with MobileNetV1+BFT include all combinations of width-multipliers 1.00 and 2.00, as well as input resolutions 128, 160, 192, and 224. We also add a width-multiplier 1.00 with input resolution 96 to cover the low FLOP range (14M). A full table of results can be found in the Appendix.
In Table \ref{tab:mobilenet_low_flop} we show that using BFT outperforms traditional MobileNets across the entire spectrum, and is especially effective in the low FLOP range; for example, using BFT results in an increase of 6.76\% in Top-1 accuracy at 23 MFLOPs. Note that MobileNetV1+BFT at 23 MFLOPs has much higher accuracy than MobileNetV1 at 41 MFLOPs, which means it achieves higher accuracy with almost half of the FLOPs. This was achieved without changing the architecture at all, other than replacing the pointwise convolutions, which means there are likely further gains possible by designing architectures with BFT in mind.
\subsubsection{ShuffleNetV2 + BFT}
We modify the ShuffleNet block to add BFT to ShuffleNetV2. In Table \ref{tab:shufflenet_low_flop} we show results for ShuffleNetV2+BFT versus the original ShuffleNetV2. We interpolated the number of output channels to build ShuffleNetV2-1.25+BFT, to be comparable in FLOPs with ShuffleNetV2-0.5. We compare these two methods for different input resolutions (128, 160, 224), which results in FLOPs ranging from 14M to 41M. ShuffleNetV2-1.25+BFT achieves about 1.6\% better accuracy than our implementation of ShuffleNetV2-0.5, which uses pointwise convolutions, and 1\% better accuracy than the reported numbers for ShuffleNetV2~\cite{ma2018shufflenet} at 41 MFLOPs.
\subsubsection{MobileNetV3 + BFT}
We follow a procedure very similar to that of MobileNetV1+BFT and simply replace all pointwise convolutions with Butterfly Transforms. We trained a MobileNetV3-Small+BFT with a network width of 0.5 and an input resolution of 224, which achieves $55.21\%$ Top-1 accuracy. This model outperforms MobileNetV3-Small with network width 0.35 and input resolution 224 at a similar FLOP range by about $5.4\%$ Top-1, as shown in Table \ref{tab:mobilenetv3_low_flop}. Due to resource constraints, we only trained one variant of MobileNetV3+BFT.
\subsubsection{Comparison with Neural Architecture Search}
Including BFT in ShuffleNetV2 allows us to achieve higher accuracy than the state-of-the-art architecture search methods MNasNet \cite{tan2018mnasnet}, FBNet \cite{wu2018fbnet}, and MobileNetV3 \cite{mobilenet_v3_dblp} in an extremely low resource setting ($\sim$14M FLOPs). These architecture search methods search a space of predefined building blocks, in which the most efficient block for channel fusion is the pointwise convolution. In Table \ref{tab:search}, we show that by simply replacing pointwise convolutions in ShuffleNetV2, we are able to outperform state-of-the-art architecture search methods in terms of Top-1 accuracy on ImageNet. We hope that this leads to future work where BFT is included as one of the building blocks in architecture search, since it provides an extremely low FLOP method for channel fusion.
\begin{figure*}[!t]
\centering
\begin{subfigure}[]{.3\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{figs/weight_decay_big.png}
\captionof{figure}{\textbf{Effect of weight-decay} }
\label{fig:weight_decay}
\end{subfigure}
\begin{subfigure}[]{.3 \textwidth}
\centering
\includegraphics[width=1.0\linewidth]{figs/activation_big.png}
\captionof{figure}{\textbf{Effect of activations} }
\label{fig:activation}
\end{subfigure}
\begin{subfigure}[]{.3 \textwidth}
\centering
\includegraphics[width=1.0\linewidth]{figs/Figure_3.png}
\captionof{figure}{\textbf{Effect of butterfly base} }
\label{fig:butterfly_base}
\end{subfigure}
\caption{\footnotesize Design choices for BFT: a) In BFT we should not enforce weight decay, because it significantly reduces the effect of input channels on output channels. b) Similarly, we should not apply the common non-linear activation functions. These functions zero out almost half of the values in the intermediate BFLayers, which leads to a catastrophic drop in the information flow from input channels to the output channels. c) Butterfly base determines the structure of BFT. Under $40 M$ FLOP budget base $k=4$ works the best.}
\end{figure*}
\subsubsection{Comparison with Structured Matrices }
To further illustrate the benefits of Butterfly Transforms, we compare them with other structured matrix methods which can be used to reduce the computational complexity of pointwise convolutions. In Table \ref{tab:lowrank} we show that BFT significantly outperforms all these other methods at a similar FLOP range. For comparability, we have extended all the other methods to be used as replacements for pointwise convolutions, if necessary. We then replaced all pointwise convolutions in MobileNetV1 for each of the methods and report Top-1 validation accuracy on ImageNet. Here we summarize these other methods:
\textbf{Circulant block:}
In this block, the matrix that represents the pointwise convolution is a circulant matrix. In a circulant matrix rows are cyclically shifted versions of one another \cite{ding2017c}. The product of this circulant matrix by a column can be efficiently computed in $\mathcal{O}(n \log(n))$ using the Fast Fourier Transform (FFT).
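For concreteness, the $\mathcal{O}(n\log n)$ circulant product is just a circular convolution computed with the FFT, as in this sketch (illustrative sizes):
\begin{verbatim}
import numpy as np

n = 8
col = np.random.randn(n)                     # first column of C
x = np.random.randn(n)
y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(x)).real  # C x, O(n log n)
\end{verbatim}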
\textbf{Low-rank matrix:}
In this block, the matrix that represents the pointwise convolution is the product of two rank-$\log(n)$ matrices ($W = UV^T$). The pointwise convolution can therefore be performed as two consecutive small matrix products, and the total complexity is $\mathcal{O}(n \log n)$.
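A corresponding sketch for the low-rank product (illustrative sizes; here $r$ stands in for the $\log(n)$ rank):
\begin{verbatim}
import numpy as np

n, r = 256, 8
U, V = np.random.randn(n, r), np.random.randn(n, r)
x = np.random.randn(n)
y = U @ (V.T @ x)                            # 2 n r ops instead of n^2
\end{verbatim}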
\textbf{Toeplitz Like:}
Toeplitz-like matrices were introduced in \cite{toeplitz_small_footprint} and have been shown to work well for kernel approximation. We used displacement rank $r=1$ in our experiments.
\textbf{Fastfood: }
This block was introduced in \cite{Fastfood41466} and used in Deep Fried ConvNets \cite{Yang2014DeepFC}, where fully connected layers are replaced with Fastfood. By unifying the batch, height and width dimensions, we can use a fully connected layer as a pointwise convolution.
\textbf{BPBP:}
This method uses the butterfly network structure for fast factorization to approximate linear transformations, such as the Discrete Fourier Transform (DFT) and the Hadamard transform \cite{dao2019learning}. We extend BPBP to work with pointwise convolutions using the trick explained for Fastfood above, and performed experiments on ImageNet.
\subsection{Ablation Study}
\label{sec:ablation}
Now we study the different elements of our BFT model. As mentioned earlier, residual connections and non-linear activations can be added to our BFLayers. Here we show the performance of these elements in isolation on the CIFAR-10 dataset using MobileNetV1 as the base network; the only exception is the butterfly-base experiment, which was performed on ImageNet.
\setlength{\intextsep}{5pt}%
\setlength{\columnsep}{7pt}%
\begin{wraptable}{o}{4.5cm}
\begin{tabular}{|l|l|}
\hline
Model & Accuracy \\ \hline\hline
No residual & 79.2 \\ \hline
Every-other-Layer & 81.12 \\ \hline
First-to-Last & \textbf{81.75} \\ \hline
\end{tabular}
\caption{\textbf{Residual connections}}
\label{tab:residual}
\end{wraptable}
\textbf{Residual connections:}
The networks obtained by replacing pointwise convolutions with the Butterfly Transform are very deep, and residual connections generally help when training deep networks. We experimented with three different ways of adding residual connections: (1) \emph{First-to-Last}, which connects the input of the first BFLayer to the output of the last BFLayer, (2) \emph{Every-other-Layer}, which connects every other BFLayer, and (3) \emph{No-residual}, where there are no residual connections. We found that First-to-Last is the most effective type of residual connection, as shown in Table \ref{tab:residual}.
\textbf{With/Without Non-Linearity:}
As studied by \cite{sandler2018mobilenetv2}, adding a non-linearity function like ReLU or Sigmoid to a narrow layer (with few channels) reduces the accuracy, because it sets half of the values of an internal layer to zero. In BFT, the effect of an input channel $i$ on an output channel $o$ is determined by the product of all the edges on the path between $i$ and $o$. Dropping any value along the path to zero destroys all the information transferred between the two nodes, and dropping half of the values of each internal layer destroys almost all the information in the entire layer. Because of this, we do not use any activation in the internal Butterfly Layers. Figure \ref{fig:activation} compares the learning curves of BFT models with and without non-linear activation functions.
\textbf{With/Without Weight-Decay:}
We found that BFT is very sensitive to weight decay. This is because in BFT there is only one path from an input channel $i$ to an output channel $o$, and the effect of $i$ on $o$ is determined by the product of all the intermediate edges along that path. Pushing all weight values towards zero significantly reduces the effect of $i$ on $o$; therefore, weight decay is very destructive in BFT. Figure \ref{fig:weight_decay} illustrates the learning curves with and without weight decay on BFT.
\textbf{Butterfly base:}
The parameter $k$ in $B^{(n,k)}$ determines the structure of the Butterfly Transform and has a significant impact on the accuracy of the model. The internal structure of the \sys contains $\log_k(n)$ layers. Because of this, very small values of $k$ lead to deeper internal structures, which can be more difficult to train. Larger values of $k$ are shallower but involve more computation, since each node in the internal layers of the \sys has an out-degree of $k$; this extra computation comes at the cost of more FLOPs.
We tested the values of $k = 2, 4, 8, n$ on MobileNetV1+BFT with an input resolution of 160x160 which results in $\sim40M$ FLOPs. When $k=n$, this is equivalent to a standard pointwise convolution. For a fair comparison, we made sure to hold FLOPs consistent across all our experiments by varying the number of channels, and tested all models with the same hyper-parameters on ImageNet. Our results in Figure \ref{fig:butterfly_base} show that $k=4$ significantly outperforms all other values of $k$.
Our intuition is that this setting allows the block to be trained easily, due to its shallowness, and that more computation than this is better spent elsewhere, such as in this case increasing the number of channels.
It is a likely possibility that there is a more optimal value for $k$, which varies throughout the model, rather than being fixed. We have also only performed this ablation study on a relatively low FLOP range ($40M$), so it might be the case that larger architectures perform better with a different value of $k$. There is lots of room for future exploration in this design choice.
\section{Drawbacks}
A weakness of our model is the increase in working memory when using BFT, since we must add substantially more channels to maintain the same number of FLOPs as the original network. For example, MobileNetV1-2.0+BFT has the same number of FLOPs as MobileNetV1-0.5, which means it will use about four times as much working memory. Note that the intermediate BFLayers can be computed in place, so they do not increase the amount of working memory needed. Due to the wider channels, GPU training time is also increased: in our implementation, in the forward pass we calculate $B^{(n,k)}$ from the current weights of the BFLayers, which is a bottleneck in training. A GPU implementation of the butterfly operations would greatly reduce training time.
\section{Conclusion and Future Work}
\label{sec:conclusion}
In this paper, we demonstrated how a family of efficient transformations, referred to as Butterfly Transforms, can replace pointwise convolutions in various neural architectures to reduce computation while maintaining accuracy. We explored many design decisions for this block, including residual connections, non-linearities, weight decay and the base of the \sys, and also introduced a new weight initialization, which allows us to significantly outperform all other structured matrix approaches for efficient channel fusion that we are aware of. We also provided a set of principles for fusion network design, and \sys exhibits all these properties.
As a drop-in replacement for pointwise convolutions in efficient Convolutional Neural Networks, we have shown that our method significantly increases accuracy of models, especially at the low FLOP range, and can enable new capabilities on resource constrained edge devices. It is worth noting that these neural architectures have not at all been optimized for \sys, and we hope that this work will lead to more research towards networks designed specifically with the Butterfly Transform in mind, whether through manual design or architecture search. \sys can also be extended to other domains, such as language and speech, as well as new types of architectures, such as Recurrent Neural Networks and Transformers.
We look forward to future inference implementations of Butterfly structures which will hopefully validate our hypothesis that this block can be implemented extremely efficiently, especially on embedded devices and FPGAs. Finally, one of the major challenges we faced was the large amount of time and GPU memory necessary to train \sys, and we believe there is a lot of room for optimizing training of this block as future work.
\section*{Acknowledgement}
We thank Aditya Kusupati, Carlo Del Mundo, Golnoosh Samei, Hessam Bagherinezhad, James Gabriel and Tim Dettmers for their help and valuable comments.
This work is in part supported by NSF IIS 1652052, IIS 17303166, DARPA N66001-19-2-4031, 67102239 and gifts from Allen Institute for Artificial Intelligence.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
A common self-referenced technique to measure the phase and amplitude of an ultrashort laser pulse is frequency-resolved optical
gating (FROG)~\cite{Trebino:1993}. The investigated pulse is split into two replicas with a relative
delay between them, which are guided into a nonlinear medium where second harmonic generation (SHG) takes place. The
upconverted light is measured with a spectrometer for varying delays. This 2d intensity map, the trace, encodes
all the information needed to retrieve the electric field of the investigated pulse. To invert the associated
nonlinear integral several solvers have been developed: approaches inspired by the Gerchberg-Saxton
algorithm~\cite{Gerchberg:1971} like~\cite{DeLong:1994,Kane:1998real,Sidorenko:2016}, and
least-squares solvers using generic optimisation or search methods \cite{Hyyti:2017,Geib:2019}.
In this paper we present a more specialised algorithm using numerical methods~\cite{Kelley:2018numerical,Kelley:2003solving}
that have been successfully applied to other nonlinear integral equations in physics like the Ornstein-Zernike
equations~\cite{Ornstein:1914accidental,Kelley:2004}, describing the direct correlation functions of molecules in liquids
and the Chandrasekhar H-equation~\cite{Chandrasekhar:1960radiative} arising in radiative transfer theory, naming two classical
examples. This is, first of all, Newton's method. Modern Jacobian-free Newton-Krylov methods~\cite{Knoll:2004jacobian},
variants of Newton's method, are the basis of large-scale nonlinear solvers like KINSOL, NOX, SNES~\cite{Collier:2020,Heroux:2005,Balay:2012}.
At second, homotopy continuation~\cite{Allgower:1993,Morgan:2009,Sommese:2005}, a technique to globalize Newton's method,
has proven to be reliable and efficient for computing all isolated solutions of polynomial systems and is the primary
computational method for polynomial solvers like Bertini and PHCpack~\cite{Bates:2013,Verschelde:2011}.
We combine the continuation method with techniques from stochastic optimisation
\cite{Pilanci:2017newton,Berahas:2020,Martinsson:2020}. While path tracking towards the
solution we frequently alternate the random matrices that are in any case necessary to reduce
the over-determined polynomial system to square form. For each path segment and its homotopy the matrix
is held fixed, such that the full solution path is partially continuous and partially stochastic.
This is, to the best of our knowledge, a novel method for finding real roots of polynomial systems and
optimal solutions of noisy polynomial systems.
For retrieval from realistic experimental data these methods alone would not be sufficient, because Newton's
method comes with certain smoothness assumptions that are problematic when noise is present. For that purpose we chose
an integral discretisation based on surface averages and Tikhonov-type regularisation.
The Tikhonov factor is adaptively decreased during the solution process to obtain a near-optimal amount of regularisation
at the solution, which can be refined using the L-curve method \cite{Hansen:1999}.
The structure of the paper is as follows:
Sec.~\ref{sec:intro}: notation, integral representation, discretisation.
Sec.~\ref{sec:setting_poly}: setting up the real polynomial system, real roots, gauge condition.
Sec.~\ref{sec:solver}: polynomial solver, squaring the system, Newton's method, homotopy continuation.
Sec.~\ref{sec:regul}: adaptive regularisation for noisy traces.
Sec.~\ref{sec:application}: application examples, convergence, practical concerns, L-curve method.
Sec.~\ref{sec:conclusion}: conclusion and outlook.
App.~\ref{app:A}: modifications for similar integrals.
App.~\ref{app:B}: higher order polynomials, splines.
\section{Notation, integral representation, discretisation} \label{sec:intro}
The nonlinear integral for SHG-FROG is defined as
\begin{equation} \label{eq:one}
I[E](\omega,\tau) := \left\lvert \int_{-\infty}^{+\infty} E(t) E(t-\tau) e^{-i\omega t} \,\text{d}t \right\rvert^2,
\end{equation}
where $E(t)$ is a complex function, the electric field of the pulse which we assume to be non-zero
on the interval $t\in[-1,1]$ and zero elsewhere \footnotemark, with time units such that this interval
has length 2.
The outcome of the FROG experiment is the FROG \textit{trace}
$I_\text{exp}(\omega,\tau) \approx I[E_\text{in}](\omega,\tau)$ of the pulse to
be investigated $E_\text{in}(t)$. We obtain $E_\text{in}(t)$ by solving the integral equation
\footnotetext{Defining $E(t)$ on a bounded domain enables clipping of long low-amplitude wings and zooming into the
region of interest on the trace, saving computational cost. A bounded domain can be used because $e^{i\omega t} $ is
removed from the integral below.}
\begin{equation} \label{eq:shg_frog}
I[E](\omega,\tau) - I_\text{exp}(\omega,\tau) = 0.
\end{equation}
We bring (\ref{eq:one}) into a form better suited for polynomial approximation by a Fourier
transform $\omega \rightarrow \sigma$, which removes the explicit $t,\omega$ dependence from the integrand,
\begin{eqnarray} \label{eq:Jshg}
J[E](\tau,\sigma) &=&
\int_{-\infty}^{+\infty} \left( \int_{-\infty}^{+\infty} E(t) E(t-\tau) e^{-i\omega t} \,\text{d}t
\, \int_{-\infty}^{+\infty} {\bar{E}}(s) {\bar{E}}(s-\tau) e^{ i\omega s} \,\text{d}s \right)
e^{i\omega \sigma} \,\text{d}\omega \,/\, ( 2\pi) \nonumber \\
J[E,{\bar{E}}](\tau,\sigma) &=& \int_{-\infty}^{+\infty} E(t) E(t-\tau) {\bar{E}}(t-\sigma) {\bar{E}}(t-\tau-\sigma) \,\text{d}t
\end{eqnarray}
In the first line we have split the absolute value in (\ref{eq:one}) into a complex integral and
its complex conjugate and then applied the relation $\int_{-\infty}^{+\infty} e^{i\omega (s-(t-\sigma))} \text{d}\omega = 2\pi \delta(s-(t-\sigma))$.
Here $J[E](\tau,\sigma)$ is the double-delay representation of the SHG-FROG integral, also denoted as $J[E,{\bar{E}}](\tau,\sigma)$
\footnote{Used in the following to denote that the integral is understood as a function of two independent field variables.},
and ${\bar{E}}(t)$ is the complex conjugate of $E(t)$.
For $E(t)$ non-zero on $t\in [-1,1]$ the trace $J[E](\tau,\sigma)$ is non-zero on $\tau,\sigma \in [-2,2]$.
In the following we consider only the first quadrant $\tau,\sigma \in [0,2]$, as the others are related through discrete symmetries.
We introduce two new functions for clarity of notation,
\begin{equation}
F_\tau(t) := E(t)E(t-\tau),\quad G_\sigma(t) := E(t){\bar{E}}(t-\sigma)
\end{equation}
so that
\begin{equation} \label{eq:Jauto}
J[E](\tau,\sigma) = \int_{-\infty}^{+\infty} F_\tau(t) {\bar{F}}_\tau(t-\sigma) \text{d}t\quad = \quad \int_{-\infty}^{+\infty} G_\sigma(t) G_\sigma(t-\tau) \text{d}t.
\end{equation}
For fixed $\tau=\text{const}$ the integral $J[E](\tau,\sigma)$ is a one-dimensional auto-correlation of the
function $F_\tau(t)$. We discretise the electric field with a piecewise constant function
(a polynomial of degree zero)\footnotemark, in the context of numerical integration often called the \emph{midpoint rule},
\footnotetext{It is possible to use piecewise linear or, more general polynomials or splines, see Appendix \ref{app:B}.}
\begin{equation} \label{eq:0spline}
E(t) =
\begin{cases}
0 & t < -1 \quad \text{or} \quad 1 < t \\
E_k & t \in [t_k,t_{k+1}], \quad k = 0,\dots, N-1
\end{cases}
\end{equation}
on a uniform $t$-grid $t_k = -1 + k\cdot h,\, k=0,\dots,N$ with $N$ intervals, where $h=2/N$
is the grid spacing. In the same way we define the grids along $\tau$ and $\sigma$: $\tau_i, \sigma_i = i\cdot h,\, i=0,\dots,N$.
If the delay is equal to an integer multiple of the grid spacing, thus, $\tau = \tau_i$, the product $F_{\tau_i}(t)$
is again piecewise constant, see Fig.~\ref{fig:exampleN4} (bottom left), which we abbreviate as $^{(i)}F_k = F_{\tau_i}(t_k) = E_k E_{k+i}$.
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\textwidth]{plots/fourPlots.png}
\caption{\footnotesize Illustrative example: $E(t)$ discretised with a piecewise constant function on $N=4$ intervals (top left).
Then the associated products $F_{\tau_i}(t)$ and $F_{\tau_i}(t)F_{\tau_i}(t-\sigma)$ (bottom) are piecewise constant as well
on small parallelograms, such that the nonlinear integral $J[E](\tau,\sigma)$ (top right) can be computed via list auto-correlations
along all grid segments (red lines). We consider $J$ only in the quadrant $(+,+)$, $\tau,\sigma\in[0,2]$, as the other quadrants
are linked through discrete symmetries. $J$ is non-zero only below the diagonal (dashed line) as $E(t)$ is non-zero on a
bounded domain $t\in[-1,1]$ by definition. $E(t)$ is generally complex and normalised such that $\text{max}\,|J[E](\tau,\sigma)| = 1$.}
\label{fig:exampleN4}
\end{figure}
Then the integrand of $J[E](\tau_i,\sigma)$, see Fig.~\ref{fig:exampleN4} (bottom right), is also piecewise constant
on small parallelograms\footnote{The integration boundaries depend on $\hat{\sigma}$; for the upper / lower triangles
they are $\int_{\hat{\sigma}-1}^1d\hat{t}$ / $\int_{-1}^{\hat{\sigma}-1}d\hat{t}$.} and the integration over $t$ splits into two sums of
$N$ sub-integrals for the $j$th $\sigma$ interval $\sigma \in [\sigma_j, \sigma_{j+1}]$
($j$th column in Fig.~\ref{fig:exampleN4} (bottom right))
\begin{eqnarray} \label{eq:list-auto}
J[E](\tau_i,\sigma) &=& h\int_{\hat{\sigma}-1}^1 d\hat{t}\sum_{k=1}^N\,^{(i)}F_k\, ^{(i)} {\bar{F}}_{k+j}
\,\, + \,\, h\int_{-1}^{\hat{\sigma}-1} d\hat{t}\sum_{k=1}^N\, ^{(i)}F_k\, ^{(i)} {\bar{F}}_{k+j+1}, \, \sigma \in [\sigma_j, \sigma_{j+1}], \\
J[E](\tau_i,\sigma) &=& h(2 - \hat{\sigma})\, \text{corr}( ^{(i)}F_k, {^{(i)} {\bar{F}}}_{k} )_j + h\,\hat{\sigma}\, \text{corr}( ^{(i)}F_k, {^{(i)} {\bar{F}}}_{k} )_{j+1}
\end{eqnarray}
where $\hat{\sigma} \in [0,2]$ and $\hat{t}\in[-1,1]$ are local coordinates on the intervals $[\sigma_j,\sigma_{j+1}]$, $[t_k,t_{k+1}]$.
The first sum is collecting all small upper triangles per column in Fig. \ref{fig:exampleN4} (bottom right) and the second the lower triangles.
The expression $\text{corr}( ^{(i)}F_k, {^{(i)} {\bar{F}}}_{k} )_j := \sum_{k=1}^N\,^{(i)}F_k\, ^{(i)} {\bar{F}}_{k+j}$, $i = 0,\dots,N-1$,
denotes the list auto-correlations of $^{(i)}F_k$, which can be computed with complexity $N\cdot (N\log N)$ \footnotemark.
\footnotetext{Alternatively, for the relatively small $N$ considered here, the direct method of computing the
correlation is more efficient than the FFT-based variant for $N<1000$, see Subsec.~``FFT versus Direct Convolution'' in \cite{SASPWEB:2011},
in particular when parallelised on thousands of cores.}
Eq.~(\ref{eq:list-auto}) and its equivalent for $G_\sigma(t)$ give the nonlinear integral along all
grid segments $[\tau_i,\tau_{i+1}]$, $[\sigma_j,\sigma_{j+1}],\, i,j = 0,\dots,N-1$, the red lines in Fig.~\ref{fig:exampleN4} (top right).
Now $N$ could be chosen such that the $\tau_i$ coincide with the data points of the experiment, assuming an equally-spaced
grid with $K$ points along $\tau$, and the integral equation could be solved similarly to what follows.
As a measured trace is normally noisy, the better approach is to set up a pixelwise instead of a pointwise representation
of the equation. Moreover, on a coarse-graining hierarchy of smaller grids $N_1<N_2< \dots < K$ the solver is faster and may
resolve long- and short-wavelength components successively.
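Before turning to the pixelwise representation, the list auto-correlations are simple enough to sketch explicitly. The following minimal Python/NumPy sketch (not the reference implementation, which is written in Fortran90 and Mathematica, see Sec.~\ref{sec:application}) computes the table of coefficients $\text{corr}( ^{(i)}F_k, {^{(i)} {\bar{F}}}_{k} )_j$ for a given discretised field; the helper names \texttt{list\_autocorr} and \texttt{corr\_table} are ours, and the weights of eq.~(\ref{eq:list-auto}) are left out.
\begin{verbatim}
# Minimal NumPy sketch (not the Fortran90/Mathematica reference code):
# the correlation table corr((i)F, (i)Fbar)_j with the paper's convention
# (i)F_k = E_k E_{k+i} and zero padding outside the support.
import numpy as np

def list_autocorr(F):
    # corr_j = sum_k F_k * conj(F)_{k+j}, j = 0..N
    N = len(F)
    Fp = np.concatenate([F, np.zeros(N, dtype=complex)])
    return np.array([np.dot(Fp[:N], np.conj(Fp[j:j + N]))
                     for j in range(N + 1)])

def corr_table(E):
    # one row per delay index i = 0..N-1
    N = len(E)
    Ep = np.concatenate([E, np.zeros(N, dtype=complex)])
    return np.array([list_autocorr(Ep[:N] * Ep[i:i + N])
                     for i in range(N)])

E = np.array([0.2 + 0.1j, 1.0, 0.8 - 0.3j, 0.1])  # toy field, N = 4
print(corr_table(E).round(3))
\end{verbatim}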
First, we pixelise the integral $J[E](\tau,\sigma)$. For a single pixel with grid coordinates
$(\tau_i, \sigma_j)$ (lower left corner) we linearly interpolate the values
from the left pixel boundary to the right boundary,
\begin{equation} \label{eq:lowerLeft}
J[E](\hat{\tau}, \hat{\sigma})_{ij}^\text{left right} = J[E](\tau_i, \hat{\sigma}) ( 1 - \hat{\tau}/2 ) + J[E](\tau_{i+1}, \hat{\sigma})\, \hat{\tau}/2,
\end{equation}
to approximate the integral inside the pixel.
As before, a hat denotes local pixel coordinates $\hat{\tau}, \hat{\sigma} \in [0,2]$.
Then we integrate $J[E](\hat{\tau}, \hat{\sigma})_{ij}^\text{left right}$ over the pixel surface
normalised by its area to obtain the dimensionless pixel average
\begin{eqnarray} \label{eq:corrLeftRight}
\langle J[E]_{ij}^\text{left right} \rangle := \int_0^2 \int_0^2 J[E](\hat{\tau}, \hat{\sigma})_{ij}^\text{left right}\,
d\hat{\tau} d\hat{\sigma} \,/ \int_0^2 \int_0^2 \, d\hat{\tau} d\hat{\sigma} =
\frac{1}{2} h\,\text{\LARGE(}\, \text{corr}( {^{(i )} F}_k, {^{(i )} {\bar{F}}}_{k} )_j + \\
\text{corr}( {^{(i+1)}F}_k, {^{(i+1)} {\bar{F}}}_{k} )_{j} +
\text{corr}( {^{(i+1)} F}_k, {^{(i+1)} {\bar{F}}}_{k} )_{j+1} + \text{corr}( {^{(i )}F}_k, {^{(i )} {\bar{F}}}_{k} )_{j+1} \,\text{\LARGE)}. \nonumber
\end{eqnarray}
The analogous construction can be applied to the bottom and top boundaries using the correlation coefficients \\
$\text{corr}( {^{(j)} G}_k, {^{(j)} G}_{k} )_i$, improving the accuracy\footnotemark of the approximation.
Then the total pixel average is
\begin{equation} \label{eq:JaveTot}
\langle J[E]_{ij} \rangle :=
\left( \langle J[E]_{ij}^\text{left right} \rangle +
\langle J[E]_{ij}^\text{bottom top} \rangle \right) / 2,
\end{equation}
i.e. the pixel average of the nonlinear integral is obtained by adding up the list correlation coefficients for
each corner of the pixel, weighted by $\frac{1}{2}\, h/2$.
\footnotetext{For most applications it is enough to set
$\langle J[E]_{ij} \rangle := \langle J[E]_{ij}^\text{left right} \rangle$ speeding up the
computations by a factor of two, though, sacrificing some accuracy. Note the swapping of indices for the
coefficients of $G$.}
Finally, the pixel averages of the Fourier-transformed measurement trace,
$I_\text{exp}(\omega,\tau) \rightarrow J_\text{exp}(\tau,\sigma) \rightarrow \langle J^\text{exp}_{ij} \rangle$,
have to be computed to set up the polynomial system (\ref{eq:double-delay}), where these values constitute the constant part.
They can be computed using the trapezoidal rule or simply by averaging all data points within a pixel.
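As a minimal sketch, the left-right pixel averages of eq.~(\ref{eq:corrLeftRight}) can be assembled from the correlation table of the previous sketch as follows; the bottom-top analogue entering eq.~(\ref{eq:JaveTot}), built from the $G$-coefficients with swapped indices, is constructed in the same way.
\begin{verbatim}
# Sketch of the left-right pixel averages <J>_ij of eq. (corrLeftRight),
# assuming corr_table(E) from the previous sketch is in scope.
import numpy as np

def pixel_averages_left_right(C, h):
    # C has shape (N, N+1); the row i = N vanishes since (N)F_k = 0
    N = C.shape[0]
    Cz = np.vstack([C, np.zeros((1, N + 1), dtype=complex)])
    A = np.zeros((N, N), dtype=complex)
    for i in range(N):
        for j in range(N):
            A[i, j] = 0.5 * h * (Cz[i, j] + Cz[i + 1, j]
                                 + Cz[i + 1, j + 1] + Cz[i, j + 1])
    return A

E = np.array([0.2 + 0.1j, 1.0, 0.8 - 0.3j, 0.1])
print(pixel_averages_left_right(corr_table(E), h=2 / len(E)).round(3))
\end{verbatim}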
\section{Setting up the real polynomial system, real roots, gauge condition} \label{sec:setting_poly}
The integral equation (\ref{eq:shg_frog}) in double-delay representation is now discretised,
\begin{equation} \label{eq:double-delay}
\langle J[E,\widetilde{E}]_{ij} \rangle - \langle J^\text{exp}_{ij} \rangle = 0,
\end{equation}
as a 4th order polynomial system in the $2N$ complex variables, the components $E_k, \widetilde{E}_k$. Here
we use a tilde to denote $\widetilde{E}$ as a new variable\footnotemark in place of ${\bar{E}}$, the complex conjugate of $E$
\footnotetext{This step may appear confusing at first sight, as we double the number of variables:
Newton's method requires the nonlinear function to be Lipschitz continuous to guarantee convergence,
which the operations of complex conjugation or taking the absolute value prevent, see for example
{1.9.1} in \cite{Kelley:2003solving}. Similar requirements, often overlooked,
come along with the gradient descent method when applied to least squares.}.
Note: the so-created polynomial system has, strictly speaking, no exact solution, as computing
the pixel averages of the nonlinear integral on the one hand and the pixel averages of the trace on the other
comes along with numerical and experimental errors limiting the accuracy. For the polynomial
solver introduced in Sec.~\ref{sec:solver} we employ methods from stochastic optimisation to retrieve
an optimal solution.
\begin{figure}[!t]
\centering
\includegraphics[width=1.0\textwidth]{plots/example-trace.png}
\caption{\footnotesize Illustrative example: synthetic measurement trace with $129\times129$ data points
on $\tau, \sigma \in [0,2]$. Coarse-grained data (only imaginary part shown) on $21\times21$ pixels
(40 data points per pixel) enables fast computation of approximants to initialise refined retrievals.
Every pixel of the lower triangular part ($21\times(21+1)/2$) is associated with one equation in (\ref{eq:ImJ}).
If $E^+$ and $E^-$ are real roots, then $\widetilde{E} \rightarrow {\bar{E}}$ and $\langle J[E^+,E^-]_{ij} \rangle^+$ and $\langle J[E^+,E^-]_{ij} \rangle^-$
are real and equivalent to $\text{Re}[ \langle J[E,{\bar{E}}]_{ij} \rangle]$, $\text{Im}[ \langle J[E,{\bar{E}}]_{ij} \rangle]$.
Note: the exponent $1/4$ is a convenience for data examination, as (\ref{eq:double-delay}) constitutes
a 4th order polynomial system in the components $E_k, \widetilde{E}_k$.}
\label{fig:example-trace}
\end{figure}
The two linear combinations
\begin{eqnarray}
E^+ &:=& ( E + \widetilde{E} )/2 \\
E^- &:=& ( E - \widetilde{E} )(-i/2)
\end{eqnarray}
serve as a new set of independent variables; in the following $ \langle J[E,\widetilde{E}]_{ij} \rangle \rightarrow \langle J[E^+,E^-]_{ij} \rangle$.
Clearly, if $E^+, E^-$ are found as real roots of the polynomial system (\ref{eq:double-delay}), we are dealing with a physical solution.
Then $E^+ \rightarrow \text{Re}(E)$, $E^- \rightarrow \text{Im}(E)$ are nothing but the real and
imaginary part of the electric field and $\widetilde{E} \rightarrow {\bar{E}}$.
In the same fashion we create a new polynomial system by introducing the linear combinations
$\langle J_{ij} \rangle^+ = ( \langle J_{ij} \rangle + \langle {\bar{J}}_{ij} \rangle )/2$ and
$\langle J_{ij} \rangle^- = ( \langle J_{ij} \rangle - \langle {\bar{J}}_{ij} \rangle )(-i/2)$
\begin{eqnarray}
\langle J[E^+,E^-]_{ij} \rangle^+ - \langle J^\text{exp}_{ij} \rangle^+ &=& 0 \label{eq:ReJ} \\
\langle J[E^+,E^-]_{ij} \rangle^- - \langle J^\text{exp}_{ij} \rangle^- &=& 0 \label{eq:ImJ}
\end{eqnarray}
such that the new system has real coefficients, and as long as $E^+$ and $E^-$ are real, eqs.~(\ref{eq:ReJ}) and (\ref{eq:ImJ}) are the real
and imaginary part of eq.~(\ref{eq:double-delay}). The reasons for these rearrangements are the following:
starting with a real initial iterate, Newton's method remains real and we can stick to real arithmetic,
which is about five times faster than using complex variables; more importantly, we are interested in finding
real roots.
For the integral (\ref{eq:Jauto}) the absolute phase as well as the time direction of the electric
field are not fixed: for any solution $E(t)$, the product $E(t)\cdot \exp( i\, \text{const})$ and $E(-t)$ are also
solutions. We fix the rotational symmetry by adding the following equation to the system (\ref{eq:ReJ}), (\ref{eq:ImJ})
\begin{eqnarray} \label{eq:null}
\int_{-\infty}^{+\infty} E^+(t) \,\text{d}t - \int_{-\infty}^{+\infty} E^-(t) \,\text{d}t &=& 0 \quad \Rightarrow \nonumber \\
2/N\, {\sum}_{k=1}^{N} ( E^+_k - E^-_k ) &=& 0 \label{eq:gauge}
\end{eqnarray}
which we call the \emph{null gauge condition}; it fixes the absolute complex phase but leaves the overall scaling
and shape of $E^+(t),\, E^-(t)$ free.
Moreover, it is a polynomial equation with real coefficients.
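For concreteness, a small sketch of the change of variables and the discretised gauge residual of eq.~(\ref{eq:gauge}); for a physical root, $\widetilde{E} = {\bar{E}}$ and the split returns exactly the real and imaginary part of $E$. The function names are illustrative.
\begin{verbatim}
# Sketch: variables (E, Etilde) -> (E+, E-) and the null gauge residual
# (2/N) sum_k (E+_k - E-_k) of eq. (gauge).
import numpy as np

def split_field(E, Etilde):
    Eplus = (E + Etilde) / 2
    Eminus = (E - Etilde) * (-0.5j)
    return Eplus, Eminus

def gauge_residual(Eplus, Eminus):
    return (2 / len(Eplus)) * np.sum(Eplus - Eminus)

E = np.array([0.2 + 0.1j, 1.0, 0.8 - 0.3j, 0.1])
Ep, Em = split_field(E, np.conj(E))   # physical root: Etilde = conj(E)
print(np.allclose(Ep, E.real), np.allclose(Em, E.imag))
print(gauge_residual(Ep, Em))
\end{verbatim}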
\section{Squaring the system, Newton's method, homotopy continuation} \label{sec:solver}
The systems (\ref{eq:double-delay}), (\ref{eq:ReJ}), (\ref{eq:ImJ}) each consist of $(N+1)N/2$ equations:
only the lower triangular part is non-zero, as $E(t)$ is zero beyond the domain $t \in [-1,1]$,
such that (\ref{eq:ReJ}) and (\ref{eq:ImJ}) together contribute $(N+1)N$ equations.
We denote the total system (\ref{eq:ReJ}), (\ref{eq:ImJ}), (\ref{eq:gauge}) with $(N+1)N + 1$ equations as
\begin{equation} \label{eq:FXC}
F(X) - C_1 = 0, \quad F(X) :=
\begin{pmatrix}
F_1(X_1, \dots, X_{2N}) \\ \vdots \\ F_{(N+1)N+1} (X_{1}, \dots, X_{2N})
\end{pmatrix}
\end{equation}
where $X = \{ E^+_k, E^-_k \}$ is the list of $2N$ variables and $F(X)$ is the $X$-dependent part of the set of equations
$F_k(X), k=1,\dots, (N+1)N$ for the lower triangular part of (\ref{eq:double-delay}) flattened to a list and, analogously,
the constant part of (\ref{eq:double-delay}) (pixel trace averages) is flattened to the list
$C_{1\, k}, k=1,\dots, (N+1)N$. The last equation
$F_{(N+1)N + 1}(X) - C_{1\, (N+1)N + 1} = 0$ is set to be the gauge condition (\ref{eq:gauge}).
The polynomial system (\ref{eq:FXC}) is overdetermined. Moreover, due to numerical and experimental
errors, it has no exact solution. We multiply\footnotemark the vector of equations with a random matrix $M$ of
dimensions such that the reduced system has as many equations as variables. Then the Jacobian of the
reduced system is well defined (square), and by alternating the random matrices stochastic optimisation can be incorporated.
\footnotetext{For Rademacher variables this operation is implemented without any multiplication:
a randomly chosen 50\% of all equations are added up and the sum of the remaining equations is subtracted to
obtain one new equation. Moreover, Rademacher variables do not rescale the noise.}
Here we choose $M$ with i.i.d.\ Rademacher random variables (taking values $\{ -1,+1 \}$
with probability $1/2$) to reduce the first $(N+1)N$ equations to $2N-1$ and attach the gauge condition as before
at the end.
We denote the reduced system as
\begin{equation} \label{eq:FXCM}
F^M(X) - C_1^M = 0.
\end{equation}
It contains all isolated roots of the original system, which is guaranteed by Bertini's
theorem, see for example \S{1.1.4} in~\cite{Bates:2013}, and additional ``spurious'' roots,
which are simple to detect as they do not solve (\ref{eq:FXC}). The situation here is similar to approaches
for solving nonlinear PDEs by means of discretising them to polynomial systems, see e.g. Chapter 17
of \cite{Bates:2013} or \cite{Allgower:2006}.
In most cases heuristics have to be used to decide whether a found solution actually corresponds
to a physical solution of the original PDE. And it is not clear, a priori, whether the discretisation contains
any solution, a single solution or infinitely many.
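A minimal sketch of the random squaring step behind eq.~(\ref{eq:FXCM}), under the assumption of a plain dense Rademacher matrix (the reference implementation avoids the explicit multiplication, see the footnote above); the reduction size and the seed are illustrative.
\begin{verbatim}
# Sketch of one random reduction: (N+1)N residuals -> 2N-1 Rademacher
# combinations plus the gauge equation; n_red and the seed are examples.
import numpy as np

def reduce_residuals(res_full, gauge_res, n_red, rng):
    M = rng.choice([-1.0, 1.0], size=(n_red, len(res_full)))  # Rademacher
    return np.concatenate([M @ res_full, [gauge_res]])

rng = np.random.default_rng(42)
res = np.ones(20)                  # stand-in for (N+1)N residuals, N = 4
print(reduce_residuals(res, 0.0, n_red=2 * 4 - 1, rng=rng))
\end{verbatim}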
A standard iterative technique for root finding of nonlinear equations is Newton's method.
Given an initial iterate $X_{n=0}$
\footnote{By abuse of notation, this is a variable vector of length $2N$. Here the index $n = 0$ denotes the $0$th Newton iteration.}
and a nearby root $X^*$ the function (\ref{eq:FXCM}) is linearised at $X_0$
\begin{equation} \label{eq:F-linear}
F^M(X_0) + F'^M(X_0)\Delta X - C_1^M = 0,
\end{equation}
which is solved for the Newton step $\Delta X$, and a step towards the root is taken,
\begin{equation} \label{eq:newton-it}
X_{n+1} = X_{n} + \Delta X.
\end{equation}
For a one-dimensional function $X_{n+1}$ is the point where the tangent at $X_n$ crosses the $X$ axis.
For vector-valued functions the derivative $F'^M(X_0)$, the Jacobian, is a square matrix and
eq.~(\ref{eq:F-linear}) a linear system for the unknown $\Delta X$.
The iteration (\ref{eq:newton-it}) is known to converge roughly quadratically towards the root, $e_{n+1} \sim e^2_{n}$,
if the function is Lipschitz continuous (which polynomial functions satisfy) and the Jacobian is nonsingular,
see for example {1.2.1} in \cite{Kelley:2003solving}.
Here $e_n = \lVert X^* - X_n \rVert$ is the error of the $n$th iteration, with $\lVert \cdot \rVert$ the standard
Euclidean norm on $R^{2N}$. The roughly quadratic convergence can be observed by monitoring the norm\footnotemark
$\lVert F(X_n) - C_1 \rVert$, often called the \emph{residual}.
\footnotetext{In the ultrafast optics community instead of the Euclidean norm, typically the rms error is used, often called FROG error or trace error.}
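In its plain form the iteration can be sketched as follows; the finite-difference Jacobian is only a stand-in for the analytic Jacobian of the reduced system, and the toy $2\times2$ system is, of course, not the FROG system.
\begin{verbatim}
# Generic Newton iteration of eqs. (F-linear), (newton-it); illustrative
# sketch with a finite-difference Jacobian and a toy 2x2 system.
import numpy as np

def fd_jacobian(f, x, eps=1e-7):
    f0, n = f(x), len(x)
    J = np.zeros((n, n))
    for k in range(n):
        xk = x.copy()
        xk[k] += eps
        J[:, k] = (f(xk) - f0) / eps
    return J

def newton(f, x0, tol=1e-12, max_it=50):
    x = np.array(x0, dtype=float)
    for _ in range(max_it):
        r = f(x)                    # residual F(x) - C
        if np.linalg.norm(r) < tol:
            break
        x = x + np.linalg.solve(fd_jacobian(f, x), -r)   # Newton step
    return x

f = lambda x: np.array([x[0]**2 - 2.0, x[0] * x[1] - 3.0])
print(newton(f, [1.0, 1.0]))        # -> [sqrt(2), 3/sqrt(2)]
\end{verbatim}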
For an arbitrarily chosen initial iterate $X_0$ there is, in general, no close enough root
for the iteration (\ref{eq:newton-it}) to converge; additional techniques are then required to
globalise Newton's method.
For that purpose, polynomial system solvers like Bertini and PHCpack
\cite{Bates:2013,Verschelde:2011} employ the \emph{continuation} method as their primary computational technique, where a homotopy is assembled,
\begin{equation} \label{eq:HXs}
H^M(X,s) := \left(F^M(X) - C_0^M\right) (1-s) + \left(F^M(X) - C_1^M\right) s, \quad \text{with} \quad H^M(X(s=0),0) = 0,
\end{equation}
which connects two polynomial systems, the start system (first term) at $s=0$ and the target system (second term) at $s=1$,
and all of their roots via smooth curves $X(s)$. Here
$s$ is the continuation parameter; in general a curve in the complex plane, in the
following real, $s\in[0,1]$.
An $X(s=0)$ is chosen freely (normally a Gaussian) and $C_0 := F(X(s=0))$ is computed in the forward direction.
It is guaranteed\footnotemark that, beginning at the solution $X(s=0)$ of the so-created
start system and following the curve $X(s)$, we arrive at a solution of the target system,
provided $H'^M(X(s),s)$ is nonsingular and $s$ follows an arbitrary complex curve beginning at $s=0$ and ending at $s=1$.
\footnotetext{Algebraic closedness can only be guaranteed for complex homotopies. Two real
roots can be generally connected via complex continuation paths, though, in general, not via real paths.
Moving along a random complex path we are not guaranteed to end up at a real solution; moving along
a random real path we are not guaranteed that this path is connected to the solution.}
Starting at $s=s_m$ with $H^M(X(s_m),s_m) = 0$ and taking a step $s_{m+1} = s_m + \Delta s$ with
$\Delta s$ small enough along the $X(s)$ curve, we can guarantee to be close enough
to a solution of $H^M(X,s_{m+1}) = 0$ when using the initial iterate $X(s_m)$. This path tracking,
see Fig.~\ref{fig:pathtracker} (top left), is usually done in a predictor-corrector scheme with adaptive
step size control. We use step size parameters as in PHCpack \cite{Verschelde:2011}, a predictor given by the local tangent,
and one Newton step as a corrector (reusing the Jacobian to compute the new local tangent).
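A stripped-down version of one predictor-corrector step for the linear-in-$s$ homotopy (\ref{eq:HXs}) may serve as an illustration; adaptive step size control, break-point detection and the reuse of the Jacobian for the next tangent are omitted, and \texttt{jac(X)} stands for the analytic Jacobian $F'^M(X)$.
\begin{verbatim}
# One tangent predictor plus one Newton corrector along
# H(X,s) = F(X) - (C0 + s*(C1 - C0)).
import numpy as np

def pc_step(F, jac, X, s, ds, C0, C1):
    tangent = np.linalg.solve(jac(X), C1 - C0)   # F'(X) dX/ds = C1 - C0
    Xp = X + ds * tangent                        # Euler predictor
    s_new = s + ds
    r = F(Xp) - (C0 + s_new * (C1 - C0))         # residual at s_new
    Xc = Xp + np.linalg.solve(jac(Xp), -r)       # single Newton corrector
    return Xc, s_new

F = lambda x: x**2                       # toy F acting componentwise
jac = lambda x: np.diag(2 * x)
X, s = np.array([1.0, 2.0]), 0.0         # start system: C0 = F(X(0))
C0, C1 = F(X), np.array([4.0, 1.0])
while s < 1.0:
    X, s = pc_step(F, jac, X, s, min(0.1, 1.0 - s), C0, C1)
print(X)                                 # close to sqrt(C1) = [2, 1]
\end{verbatim}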
\begin{figure}[!t]
\centering
\includegraphics[width=1.0\textwidth]{plots/path-tracker.png}
\caption{\footnotesize Solving polynomial system for $N=15$. Top left: Path tracking the solution curve consisting
of small path segments each corresponding to a single reduced system with fixed random matrix $M$. Top right: Global distance
to target trace of the full (colored) and reduced (gray) system decreases approximately exponentially (on each
segment linearly).
Bottom left: Local residual after predictor step (colored) and after corrector step (Newton step) (gray).
Bottom right: Relative number of up paths (increasing $\lVert \Delta C \rVert$) and valid paths
(successful Newton step) when doing trial steps at the end of each segment to find a new path (and new $M$)
along which $\lVert \Delta C \rVert$ decreases.}
\label{fig:pathtracker}
\end{figure}
The main obstacle to applying the continuation method here is that in the current form (\ref{eq:HXs})
the Jacobian $H'^M(X(s),s)$ is, in general, singular\footnotemark at a finite number of points along the real path.
Polynomial solvers normally circumnavigate those and take a random path through the complex plane
which also begins at $s=0$ and ends at $s=1$ but is less likely to hit a singular point,
see for example Sec.~2.1.2 in \cite{Bates:2013}.
This is easily accomplished by multiplying the first term in (\ref{eq:HXs}) with a random complex
constant $\gamma$. This so-called \emph{gamma trick} is not applicable here, as it would give
the homotopy complex coefficients, and the continuation path would, in general, end at
an undesired complex root of the target system.
\footnotetext{The structure of singular points for real homotopies like (\ref{eq:HXs}) has been fully characterised
\cite{Li:1993}: these singularities are quadratic turning points or \emph{simple folds}, where two real and two complex
conjugate solution branches meet, rotated by $\pi/2$ in the complex plane and touching at their turning point, the simple fold.
Both branches smoothly transit the turning point if an arc-length parameter is used instead of $s$,
or pseudo arc-length continuation \cite{Keller:1987lectures}. Then it is possible to follow the real curve through the
bifurcation point or, alternatively, jump onto the complex solution branch. Following the
real branch we simply return to a new real root of the start system; continuing on the complex branch
we either end up at a complex root (or its complex conjugate) of the target system or eventually flow into
another simple fold where a transition to another real branch is possible. We implemented pseudo arc-length
continuation. Unfortunately, after passing a simple fold, a complex branch is unlikely
to touch another simple fold and rather ends up at an undesired complex solution of the
target system. The same phenomenon has been observed in \cite{Henderson:1990} in the attempt of
bypassing these singular points towards real roots.}
We found a different solution: we track the continuation path of (\ref{eq:HXs}) beginning at $s=0$ and hold
the tracker at a \emph{break point} $s=s_b$
\begin{equation}
H^M(X,s_b) = F^M(X(s_b)) - C_b^M = 0, \quad \text{where} \quad C_b^M = C_0^M - s_b (C_0^M - C_1^M),
\end{equation}
whenever it comes close to a bifurcation point and also if the momentary distance to the target trace
$\lVert \Delta C(s) \rVert = \lVert F(s) - C_1 \rVert$ (the residual of the full system (\ref{eq:FXC})) increases.
Then we create a new randomly reduced system, using the intermediate
solution $X(s_b)$ as an initial iterate for the corresponding new homotopy beginning at $s=0$. In this manner we get a
collection of path segments $\{ s_{b_i} \}_{i=1,\dots}$ where $S=s_{b_1} + s_{b_2} + \dots$
is the total continuation time, see Fig. \ref{fig:pathtracker} (top left, colored segments), with
decreasing $\lVert \Delta C(S) \rVert$ (top right, colored).
The momentary distance to the reduced target trace (top right, gray) $\lVert \Delta C^M(S) \rVert=$ \\
$\lVert F^M(S) - C_1^M \rVert = \lVert C_{b_i}^M - C_1^M \rVert$,
decreases linearly for each path segment
\begin{eqnarray}
\lVert C_{b_i}^M - C_1^M \rVert > \lVert C_{b_{i+1}}^M - C^M_1 \rVert = (1 - s_{b_i}) \lVert C_{b_i}^M - C_1^M \rVert \quad \Rightarrow \\
\lVert \Delta C^M(S) \rVert \approx - \frac{d}{dS} \lVert \Delta C^M(S) \rVert \quad \Rightarrow \quad \lVert \Delta C^M(S) \rVert \approx \lVert \Delta C^M(S=0) \rVert e^{-S}
\end{eqnarray}
and as the length of each path segment is relatively small, $s_{b_i} \ll S$, globally the total error decrease
appears like an exponential decay in $S$.
Initially, the distance $\lVert \Delta C(S) \rVert$ decreases approximately linearly as well on each segment.
Since we are moving along smooth curves, when stepping along any newly created path with decreasing $\lVert \Delta C(S) \rVert$,
which we call a \emph{down path} as opposed to an \emph{up path}, it is likely that the next step is also decreasing.
The number of steps before reaching a break point ($\lVert \Delta C(S) \rVert$ increases) gets smaller
as $X(S)$ gets closer to the optimum, until no significant reduction
of the error is possible once the accuracy limit set by numerical errors or the noise floor is reached.
At every break point trial predictor-corrector steps are computed for newly created randomised systems until
a down path has been found. If the trial step succeeds (Newton's method converges), the path is
called a \emph{valid path}\footnotemark, which can be either an up or a down path.
\footnotetext{This automatically excludes all continuation paths for which the conditioning of the local Jacobian
is bad and those with high velocities / curvature. In practice, we first compute the full Jacobian and then try several
random projections. This way the cost of computing the full Jacobian at the beginning of every new
path segment is incurred only once.}
Clearly, the probability of finding an up path among all valid paths is an important quantity,
which we denote as $p_\text{up}$\footnotemark. We found that, for noisy systems, almost every valid path is a down path before
the accuracy limit is reached; then $p_\text{up}$ rises steeply, see Fig.~\ref{fig:pathtracker} (bottom right).
This intrinsic quantity is thus a practical and sensitive criterion to stop the solver by setting a threshold,
terminating once $p_\text{up} > p^\text{stop}_\text{up}$, with $p^\text{stop}_\text{up} = 50\% - 90\%$, for example.
\footnotetext{An efficient and still accurate enough method to estimate this quantity is, rather than computing many
trial steps for every path segment, to keep a running list of the last, say, 20 trials of preceding path segments.
The step size for computing trial steps has to be the same for each path segment over the whole path to make this
quantity comparable.}
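The termination heuristic can be sketched as a small running-window monitor; the window length of 20 follows the footnote above, while the class name and interface are our own.
\begin{verbatim}
# Sketch of the stopping criterion: running estimate of p_up from the
# last valid trial paths; stop once p_up exceeds p_up_stop.
from collections import deque

class UpPathMonitor:
    def __init__(self, window=20, p_up_stop=0.9):
        self.trials = deque(maxlen=window)
        self.p_up_stop = p_up_stop

    def record(self, is_up):          # one valid trial: up or down path
        self.trials.append(bool(is_up))

    def p_up(self):
        return sum(self.trials) / len(self.trials) if self.trials else 0.0

    def should_stop(self):
        return (len(self.trials) == self.trials.maxlen
                and self.p_up() > self.p_up_stop)

mon = UpPathMonitor()
for k in range(60):
    mon.record(is_up=(k > 25))        # mostly down paths, then up paths
    if mon.should_stop():
        print("stop at trial", k, "p_up =", mon.p_up())
        break
\end{verbatim}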
\section{Adaptive regularisation for noisy traces} \label{sec:regul}
In this section we show how to make the algorithm work for pulse retrieval from noisy experimental data.
One effect of computing the integral (\ref{eq:one}) in the forward direction, given $E$, is smoothing,
as we are dealing with an autocorrelation-like nonlinear integral.
Conversely, the inverse mapping acts as a high-pass filter with the undesirable tendency of noise amplification,
which is rather problematic when using Newton's method. We counteract this phenomenon by adding a regularisation term to the equations.
Tikhonov regularisation, or ridge regression, for solving ill-posed least-squares problems has a long history
in statistics, see for example \cite{Engl:1996regularization}.
The pixelwise smoothing of the trace (\ref{eq:JaveTot}) provides noise reduction and thereby implicit regularisation.
As the pixelisation is refined, however, steep local gradients arise when sampling a set of continuation paths, causing the path tracker
to reduce the step size drastically and leading to smaller and smaller path segments, until the smoothness
assumptions behind Newton's method no longer apply. Then more direct countermeasures are called for.
\begin{figure}[!t]
\centering
\includegraphics[width=1.0\textwidth]{plots/regul.png}
\caption{\footnotesize Solving the polynomial system including the Tikhonov-type penalty term. Top right: The size of
$\lambda$ is constant along each path segment (see Fig. \ref{fig:pathtracker}) and
adaptively decreased at the end of each segment if the threshold on the relative size of the regularisation term,
$\delta^\text{regul}_{\Delta C} < 20\%$, eq.~(\ref{eq:delR}), is crossed, for three different noise levels 0\% (green), 1\% (yellow), 2\% (blue).
Bottom left: The relative size of the penalty term $K$ grows on each segment and is globally adaptively reduced.
Top left: The distances to the target trace with (opaque colored) and without (full colored) penalty term are
simultaneously decreasing along each path segment, while their difference is increasing, because $F$ is deformed
slightly towards an improved match with the target trace $C_1$ within a class of curves with similar mean curvature.
Then, in the noiseless case (green) / noisy case (yellow, blue), $\delta^\text{regul}_{\Delta C}$ vanishes / settles where $\lambda$ is
near optimal.
Bottom right: Beginning at a zero-phase Gaussian $E(s=0)$, connected by a series of smooth intermediate solutions,
the path tracker continues towards the target pulse (dashed gray, only first 200 iterations shown).}
\label{fig:regul}
\end{figure}
Analogously to Tikhonov regularisation we add a penalty term $K(X,\lambda)$ with components $K_l(X,\lambda),\, l=1,\dots, (N+1)N$, to
the first $(N+1)N$ equations of the original system (\ref{eq:FXC}), which gives preference to solutions with smaller
norms, also known as $L_2$ regularisation
\footnote{For Tikhonov regularisation the penalty term, which is $\lVert X \rVert$, is added to the least-squares
problem. Roots of the first derivative of this sum with respect to $X$ correspond to regularised minima.}
\begin{equation} \label{eq:FKC}
F(X) + K(X,\lambda) - C_1 = 0, \quad K(X,\lambda) := \lambda\, M^i_{\text{reg}} \partial_i \lVert X \rVert^2 ,
\end{equation}
where $\lambda$ is the Tikhonov factor to scale the penalty term and $M^i_{\text{reg}}$ a shuffle matrix
which remains constant through out the path tracking and can be used for validation purposes via re-shuffling
and repeating the tracking.
For piecewise constant approximants (\ref{eq:0spline}) to $E(t)$ we get
\begin{eqnarray}
\partial_l \lVert X \rVert^2 &=& \partial_l \sum^{N-1}_{i=0} (X_{i+1} - X_i)^2 = -2\,(X_{l-1} - 2 X_l + X_{l+1}), \quad l = 1,\dots,N-1, \nonumber \\
&& \text{with} \quad X_{0} = 0,\ X_{N} = 0 \quad (\text{boundary condition}),
\end{eqnarray}
which is nothing but the 2nd order finite difference of $X$ on a three-point stencil (up to a factor), which means that
eq.~(\ref{eq:FKC}) gives preference to solutions with small mean curvature and implements the desired smoothing effect.
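A one-line sketch of the penalty, here written for a single real field component with the zero boundary values from the equation above; the shuffle matrix $M^i_{\text{reg}}$ and the scaling conventions are omitted.
\begin{verbatim}
# Sketch of the smoothing penalty of eq. (FKC) (without the shuffle
# matrix): lambda times the second difference of X, with X_0 = X_N = 0.
import numpy as np

def penalty(X, lam):
    Xp = np.concatenate([[0.0], X, [0.0]])    # boundary condition
    return lam * (Xp[:-2] - 2.0 * Xp[1:-1] + Xp[2:])

X = np.array([0.1, 0.5, 1.0, 0.4, 0.1])
print(penalty(X, lam=0.01))   # small for smooth X, large for rough X
\end{verbatim}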
As the penalty term alters the solution, as little regularisation as necessary is wanted at the target solution;
while path tracking, though, $\lambda$ can be larger, which is actually beneficial from a numerical point of view:
it improves the conditioning of the Jacobian, and smoother intermediate solutions enable longer path segments and larger steps.
By slowly decreasing\footnotemark $\lambda$, see Fig.~\ref{fig:regul} (top right), we can ensure that smooth initial data is
connected by a series of smooth intermediate solutions to the smooth target, Fig.~\ref{fig:regul} (bottom right).
\footnotetext{Throttling $\lambda$ too slowly during the path tracking, though, may cause an undesired prolongation of the path.}
Moving along any path segment and the related homotopy of (\ref{eq:FKC}), where we hold $\lambda$ constant, $F$ is deformed slightly towards
an improved match with the target trace $C_1$ within a class of curves of similar mean curvature. Therefore
the difference $\lVert F + K - C_1 \rVert \,-\, \lVert F - C_1 \rVert$ grows along each segment, while, of course,
both decrease simultaneously, see Fig.~\ref{fig:regul} (top left), until, to improve the match further, the amount of smoothness must
be reduced by decreasing $\lambda$ (top right). Eventually the match cannot be significantly improved by allowing
rougher curves; reducing $\lambda$ even further would cause $F$ to match the noise and lead to the afore-mentioned
shortening of path segments and step sizes due to steeper local gradients: wasted computational cost.
The relative difference
\begin{equation} \label{eq:delR}
\delta^\text{regul}_{\Delta C} := \left( \lVert F + K - C_1 \rVert - \lVert F - C_1 \rVert \right)\, / \, \lVert F + K - C_1 \rVert
\end{equation}
is thus an ideal candidate for setting a threshold to lower $\lambda$ from one path segment to the next.
Moreover, $\delta^\text{regul}_{\Delta C}$ is insensitive to details of the solution and noise model and vanishes if no noise is present.
For Fig.~\ref{fig:regul} the threshold was set at $\delta^\text{regul}_{\Delta C} < 20\%$.
When starting from very smooth initial data, like an initial Gaussian, in the coarse initial phase (first 100 iterations)
the size of the penalty term can grow undesirably, Fig.~\ref{fig:regul} (bottom left), before the above mechanism sets in, because
the intermediate solutions develop curvature. (For the same reason the relative size of the penalty term
grows along each path segment.)
This is prevented by adding another threshold for decreasing
$\lambda$, if $\lVert K \rVert \,/\, \lVert F + K \rVert$ rises above, say, $30\%$. Of course, if informed
initial data is at hand, like a solution from a coarser grid, this is not necessary.
As shown in the following section, this adaptation mechanism steers $\lambda$
towards near-optimality, or close enough for fine-tuning using, for example, the L-curve method or some other tool.
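As we read the two thresholds described above, the update of $\lambda$ at a break point can be sketched as follows; the shrink factor is an illustrative choice and not prescribed by the adaptation mechanism itself.
\begin{verbatim}
# Sketch of the adaptive schedule: at a break point lambda is lowered
# whenever delta of eq. (delR) or the relative penalty size crosses its
# threshold; the factor 0.5 is an illustrative choice.
import numpy as np

def update_lambda(lam, F, K, C1,
                  delta_max=0.20, size_max=0.30, shrink=0.5):
    dC_reg = np.linalg.norm(F + K - C1)
    delta = (dC_reg - np.linalg.norm(F - C1)) / dC_reg   # eq. (delR)
    size = np.linalg.norm(K) / np.linalg.norm(F + K)
    return lam * shrink if (delta > delta_max or size > size_max) else lam
\end{verbatim}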
\section{Application examples, convergence, practical concerns} \label{sec:application}
We implemented the algorithm as a hybrid code in Mathematica (prototyping, pre- and post-processing) and in Fortran90
(core routine, path tracker). All simulations were performed on an Intel Core i7-4790 CPU @ 3.6\,GHz with 4 cores and 8\,GB
RAM, running Linux, using OpenMP parallelisation and the Intel compiler.
As test cases we selected the pulse with index 42 (TBP2) from the database of 101 randomly generated pulses with time-bandwidth
product (TBP) equal to 2 that were used in \cite{Geib:2019} to profile their least-squares solver, another, less intricate,
test pulse (TBP1) created with the same generator with TBP = 1, and a third test case (A2908).
For other test cases we found the same universal convergence behaviour as shown in the following.
For every run a different seed is used to initialise the Xorshift random number generator ``xoshiro256+'' \cite{Vigna:2014}
for computing the random matrices $M$.
To show the applicability to realistic defective traces, Gaussian noise with
$\sigma_\text{noise} = 1\%,\, 2\%,\, 3\%$ is added to the synthetic trace,
$I_\text{exp}(\omega,\tau) = I[E_\text{in}](\omega,\tau) + \text{noise}$, before\footnotemark
Fourier transforming it to $J_\text{exp}(\tau,\sigma)$.
\footnotetext{The noise is chosen to have zero mean. If this is not the case, either a background subtraction of $I_\text{exp}(\omega,\tau)$
has to be performed or, equivalently, the zero mode after the Fourier transform has to be removed and interpolated for $J_\text{exp}(\tau,\sigma)$,
which we consider the cleaner choice.
The trace $J_\text{exp}(\tau,\sigma) \approx J[E_\text{in}](\tau,\sigma)$ is initially normalised to have its absolute value maximum equal to one.
Then every initial data $E$ should be scaled for the integral $J[E](\tau,\sigma)$ to have the same property.}
We study the effect of varying the regularisation reduction threshold $\delta^\text{regul}_{\Delta C} = 5\%,\, 25\%,\, 40\%$, see eq.~(\ref{eq:delR}) (controlling the decrease
of $\lambda$ while path tracking, see Sec.~\ref{sec:regul}), as well as the effect of varying the termination criterion
$p^\text{stop}_\text{up} = 50\%,\, 70\%,\, 90\%$ on the
error convergence while refining the pixelisation, i.e. increasing $N$. We measure the retrieval accuracy
or \emph{pulse error} $\epsilon$ as
\begin{equation}
\epsilon = \lVert E - E_\text{in} \rVert\, /\, \sqrt{N}\, /\, \text{max}(|E|)
\end{equation}
instead of using the relative error norm $ \lVert E - E_\text{in} \rVert / \lVert E \rVert$, to make results
comparable with the literature, in particular \cite{Geib:2019}. The scaling behaviour with $N$ is the same for both metrics.
To measure the (pixelwise) \emph{trace error} we use the relative Euclidean distance to the target trace,
$\lVert \Delta C \rVert\, /\, \lVert C \rVert$, as before. In the literature on ultrafast nonlinear optics
often the rms error, also called the FROG error, is used.
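Both metrics are straightforward to write down; a minimal sketch:
\begin{verbatim}
# Sketch of the two metrics: pulse error (equation above) and the
# relative Euclidean trace error used throughout this section.
import numpy as np

def pulse_error(E, E_in):
    return np.linalg.norm(E - E_in) / np.sqrt(len(E)) / np.max(np.abs(E))

def trace_error(C, C1):
    return np.linalg.norm(C - C1) / np.linalg.norm(C1)
\end{verbatim}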
\subsection{Initialisation, first example} \label{subsec:init}
As a first illustrative example, already discussed above, the retrieval of test pulse A2908 (dashed curves) is shown
in Fig.~\ref{fig:regul} (bottom right) for $N=25$ with
$\sigma_\text{noise} = 1\%,\, \delta^\text{regul}_{\Delta C} = 25\%,\, p^\text{stop}_\text{up} = 90\%$. Initial data was set to a Gaussian bell curve with zero phase,
where the initial width of the Gaussian is set to minimise the polynomial system including the penalty term (\ref{eq:FKC})
for some large enough initial value of $\lambda$.
The synthetic measurement trace with $129\times 129$ points was coarse-grained to $N\times N = 25\times 25$ pixels.
In Fig.~\ref{fig:regul} (top left) the convergence of the trace error while iterating along the solution path is
shown, as already discussed in the previous section.
When averaging over 100 retrievals we get mean trace error / evolution time / number of iterations of
$0.04/1\,s/200$, $0.02/2\,s/400$, $0.019/2.5\,s/500$; compare with Fig.~\ref{fig:regul} (top left).
\begin{figure}[!t]
\centering
\includegraphics[width=0.6\textwidth]{plots/timing.png}
\caption{\footnotesize Performance overview: retrieval time vs $N$ with $1\%$ additive noise and three different values of the
termination criterion $p^\text{stop}_\text{up}$, together with the resulting retrieval accuracy, number of iterations and
retrieval accuracy after applying additional refinement steps, all averaged over 10 runs. Initial data for one level
is the interpolated result from the next coarser level. The plot suggests choosing a smaller $p^\text{stop}_\text{up}$ while cascading
towards the final grid and doing refined retrievals there.}
\label{fig:timing}
\end{figure}
The null gauge (\ref{eq:null}) constrains the areas under the curves $E^+(t)$ and $E^-(t)$ to be equal locally,
though simultaneously different from point to point on the continuation path. The overall scaling of $E(t)$ and,
in particular, its shape are free to evolve,
such that initial and target pulse can be deformed into one another, connected through a collection of smooth intermediate solutions,
Fig.~\ref{fig:regul} (bottom right). In this example this is possible for any random path.
As mentioned in Sec.~\ref{sec:solver}, in general, for a given combination of initial data, target pulse and gauge condition
there is no guarantee that every random path connects, as long as real homotopies are employed.
Algebraic closedness can be guaranteed for complex homotopies, though they generally end at an unphysical complex solution.
For the test pulse TBP2 about 2\% of all random paths run into this situation\footnotemark when using zero-phase initial data.
\footnotetext{These situations can be identified using the L-curve method or the convergence rate, besides comparing
several retrievals.}
A possible solution that we could not try within the time of the project: there is another real gauge condition.
Setting the integral of $E^+(t)$ or of $E^-(t)$ alone equal to zero is again a unique but different null condition, fixing the absolute
phase of $E(t)$. Alternating between these two gauges while path tracking should be tried.
At the moment the following practical workaround can be used: first,
compute $k$ trial runs ($k\approx5$) with a zero-phase initial Gaussian on
a coarse grid $N=15$ to $N=25$, which takes about $0.3\,s$ to $1\,s$ per run.
Then the correct initialisation for a fine grid retrieval using this result as initial data has probability $1-0.10^k$,
if the coarse grid retrievals fail in, for example, $10\%$ of the cases.
\subsection{Retrieval timing, computational cost} \label{subsec:timing}
\begin{figure}[!t]
\centering
\includegraphics[width=0.95\textwidth]{plots/convergence.png}
\caption{\scriptsize Error convergence vs $N$: Average (full colored points) final pulse error $\epsilon$ (a,c,d) and trace error
$\lVert \Delta C \rVert / \lVert C \rVert$ (b) on 14 pixelisations for the test cases TBP1, TBP2 with and
without noise.
Two examples of the retrieved electric field on a coarse (e) and medium (f) pixelised grid for TBP1.
(g) vs (h): Comparison of our solver with the result of a least-squares solver without regularisation.
For more details see Sec. \ref{sec:convergence}. Note: here more intermediate grids than necessary are
used. Normally a cascade like $N \rightarrow 20 \rightarrow 40 \rightarrow 65$ is sufficient.
}
\label{fig:convergence}
\end{figure}
A more detailed overview of the timing, number of iterations and the influence of the termination criterion
of the implemented algorithm is shown in Fig.~\ref{fig:timing}. Every data point corresponds to an average over
10 runs. As a test case we used the pulse TBP1 with $\sigma_\text{noise} = 1\%$.
Initial data for one level is the interpolated result from the next coarser level. Then the number of iterations
before termination decreases approximately linearly with $N$ for noisy traces and is constant for noiseless cases
(not shown here). As mentioned before, a speedup by a factor of two is possible, as either of the summands in
eq.~(\ref{eq:JaveTot}) suffices to approximate the total pixel average. Here we use both terms.
A larger value of $p^\text{stop}_\text{up}$ results in
higher accuracy, but the additional cost usually does not justify the small improvement in $\epsilon$
at every intermediate level. Fig.~\ref{fig:timing} suggests using $p^\text{stop}_\text{up} = 50\%$ on coarser grids before reaching
the target level and doing refinement there, if required. If speed is a concern, the number of intermediate
levels can be optimised. For our purposes a single initial coarse grid and one fine grid were enough.
As an example, for $N_\text{initial} = 15,\, N_\text{target} = 51$ we got
$\epsilon_\text{target} \approx 0.015,\, \text{iterations} \approx 600,\, \text{retrieval time} \approx 30\,s$.
Another practical concern: if the investigated pulse has large low-amplitude wings, a large part of the computational
domain and cost is spent on this low-amplitude region, and clipping or zooming the experimental data to
the region of interest is advisable, setting the wings to zero at first. Then, if required, a
larger domain could be included in a follow-up retrieval using the result as initial data.
\subsection{Convergence, scaling behavior} \label{sec:convergence}
In Fig.~\ref{fig:convergence} (a-d) the convergence of pulse and trace error are studied
on 14 grid levels $N = 15,18,\dots,53$ with 6 retrievals per $N$ (opaque colored points)
and their averages (full colored points) for test pulses TBP1 and TBP2.
In (e-g) the input pulse shapes are shown (solid lines) and a retrieval result (points) on $N=15$ (e), $N=30$ (f), $N=51$ (g).
We set $p^\text{stop}_\text{up} = 90\%$ and do an additional centering and two refinement steps at every level, see Subsec.~\ref{subsec:refine}.
The retrieval accuracy $\epsilon$ without noise (gray dots and stars), Fig.~\ref{fig:convergence} (a), as
well as the trace error, Fig.~\ref{fig:convergence} (b), converge with $\sim 1/N^2$,
as the dominant numerical error stems from the interpolation of $J[E](\tau,\sigma)$ from boundary values
to the pixel interior (\ref{eq:lowerLeft}), which is of order $O(h^2)$.
This scaling behaviour is overlaid with a $\sim N$ increase of the (pixelwise) relative
trace error for noisy traces $J_\text{exp}$, as the noise suppression by pixel averaging weakens when averaging over
fewer data points per pixel area for larger $N$.
Still, $\epsilon$ may decrease while $\lVert \Delta C \rVert$ is increasing, as more details of the pulse
are being resolved when increasing $N$, until
$\epsilon$ becomes approximately constant at the accuracy limit, see Fig.~\ref{fig:convergence} (c):
for TBP1 with 1\% noise (solid circles) at about $\epsilon = 10^{-2},\, N=50$, for TBP2 with 1\% (empty circles)
at about $\epsilon = 10^{-1.9},\, N=65$.
Resolving more details beyond this point is possible if less noisy traces are input. The dependence of
the accuracy limit on the noise level is apparent in Fig.~\ref{fig:convergence} (a) (blue, yellow, green).
Analogously, for large enough $N$ the minimal trace error should only depend on the amount of noise and not on the
particular pulse. This is apparent in Fig.~\ref{fig:convergence} (b) (empty / filled gray and blue points).
To make the numerical integration error apparent when computing pixel surface averages from the trace,
$J_\text{exp}(\tau,\sigma) \rightarrow \langle J^\text{exp}_{ij} \rangle$, we chose two different resolutions,
$129\times 129$ points (gray dots) vs $257\times 257$ (gray stars), in Fig.~\ref{fig:convergence} (a,b).
The integration error is only relevant if the noise level is low enough to reach this high accuracy.
\footnotetext{We use the trapezoidal rule for numerical integration and first order interpolation if the
pixel boundaries do not automatically lie on the data grid. First order interpolation was used to preserve the
additive nature of the noise. For realistic traces, where other sources of error dominate, higher order
interpolation could be used.}
An interesting effect becomes apparent when varying the amount of regularisation through the threshold
$\delta^\text{regul}_{\Delta C} = 5\%, 25\%, 40\%$ (pink, violet, blue), see the zoom
in Fig.~\ref{fig:convergence} (b). Though the trace error is larger for larger $\lambda$ (violet, blue vs pink), the corresponding
pulse error, as shown in (d), is smaller (this does not apply to even larger $\lambda$).
This is a reminder that retrieval techniques / optimisation codes that only aim at minimising the trace error without regularisation
are prone to fitting the noise; the right balance between over-fitting and over-smoothing has to be found.
Consider this when comparing a retrieval of our algorithm with the result of the least-squares
solver (no regularisation\footnote{An unpublished version (private repository) of this solver including
regularisation is available by now.}) used in \cite{Geib:2019}, see Fig.~\ref{fig:convergence} (g) vs (h) \footnotemark.
\footnotetext{More grid points (parameters) do not necessarily imply higher resolution and accuracy.
An increase beyond the number of significant parameters (effective DOF) for least squares can
cause overfitting. In Fig.~\ref{fig:convergence} (g) for TBP2 with 1\% noise the pulse error did not improve beyond
$N=65,\, \epsilon = 0.013$, and similarly to TBP1 with 1\%, see Fig.~\ref{fig:convergence} (c), the accuracy limit is in both
cases at about $10^{-2}$. As the position of the pulse relative to the grid is not fixed, the result can be sampled at
arbitrary inter-grid locations, see the next section.}
As this effect on the trace error is relatively small, one could also express it differently: there are many possible
pulse shapes of varying smoothness which have approximately the same trace error but a rather different pulse error.
Through regularisation and coarse-graining an optimal pulse shape can be computed.
\subsection{Optimal $\lambda$, refined solution, oversampled solution} \label{subsec:refine}
\begin{figure}[!t]
\centering
\includegraphics[width=1.05\textwidth]{plots/Lcurve.png}
\caption{\footnotesize Left: Applying the L-curve method for 9 different values of $\lambda$, 10 retrievals each (opaque colored),
mean value (full colored). Noisy oscillations begin to increase the length $\lVert X \rVert$ of the pulse when decreasing $\lambda$,
while the trace error $\lVert \Delta C \rVert$ improves through over-fitting. Too smooth solutions (too large $\lambda$) show deviations
from the original solution and have larger trace errors. The optimal amount of regularisation lies in the corner of the L-curve, where
the pulse error $\epsilon$ is small.
Right: 15 randomly offset retrievals of pulse TBP2 (gray lines) on $N=65$ intervals. As the position of the pulse
is not fixed relative to the numerical grid, sampling at arbitrary inter-grid locations is possible.}
\label{fig:lcurve}
\end{figure}
The most natural choice to test and fine-tune the amount of regularisation for optimality in the presence of additive noise
would be a chi-square test while varying $\lambda$, because the pulse error is not available a priori.
Considering modern measurement devices and pulse retrieval setups, see for example \cite{Geib:2020}, for the problem at hand,
where (difficult to quantify) systematic errors and multiplicative noise are the most relevant sources of error, a goodness-of-fit
test of this type does not yet seem applicable to a measured trace.
A popular practical solution, which we recommend in this case, is the so-called \emph{L-curve} method \cite{Hansen:1999}, until more
sophisticated techniques are called for.
An application of the L-curve method for test case TBP1 with $\sigma_\text{noise} = 2\%,\, N = 36$ is shown in Fig.~\ref{fig:lcurve},
where for fixed $\lambda$ (adaptive decrease of $\lambda$ turned off) 10 retrievals (opaque colored points) and their average
(full colored points) are shown. For the smallest $\lambda$ the retrieved pulses have the smallest trace error (overfitting region)
but not the smallest pulse error. The length $\lVert X \rVert$ of each pulse is extended by noisy wiggling
about some smoother solution, which is reached with increasing $\lambda$.
In the corner of the L, the amount of regularisation is optimal and the pulse error is minimal.
Note that the overfitted solutions also reveal themselves by having the smallest spread (opaque colored points) in the
trace error, as they differ only by a new random sample of the reverse-amplified noise, all samples having on average
about the same length.
The implemented mechanism to adaptively decrease $\lambda$ while path tracking, Sec.~\ref{sec:regul}, steers
$\lambda$ towards optimality. Here are two examples: $\log_{10} \lambda_\text{final} = -1.8$ for $\delta^\text{regul}_{\Delta C} = 5\%$, and
$\log_{10} \lambda_\text{final} = -1.3$ for $\delta^\text{regul}_{\Delta C} = 25\%$, which is optimal.
As the ratio of up paths relative to all valid paths, $p_\text{up}$, is estimated from a finite sample, the number
of iterations before crossing the threshold set by $p^\text{stop}_\text{up}$ differs slightly. To ensure that the solver cannot improve
the solution within this threshold, we perturb the solution slightly by shifting it one (or more) grid points to the
left or right and use it as initial data for a new retrieval. This refinement step does not improve $\epsilon$
much if the threshold was already set high, $p^\text{stop}_\text{up} \approx 90\%$, as shown in Sec.~\ref{subsec:timing}, Fig.~\ref{fig:timing},
but yields a small improvement otherwise.
There is a translational symmetry which has not been discussed yet: $E(t) \rightarrow E(t+\delta t)$. This symmetry is,
strictly speaking, broken, as $E(t)$ is defined on a bounded domain $t\in[-1,1]$ and zero elsewhere. But as we are
dealing with finite accuracy solutions and as $E(t)$ models a physical light pulse with low-amplitude
wings, there are actually infinitely many similar solutions within a given error bound which differ only by a small
shift $\delta t$ of the pulse relative to the numerical grid. As a consequence, in particular on coarser grids, any solution
should be centered on the numerical grid before transferring the result.
As a benefit, on finer grids, if we re-process a solution shifted by some small random inter-grid distance $\delta t < h$
via interpolation, the result is a shifted solution, sampled on slightly different points, $E(t+\delta t)$.
With this technique the pulse can be sampled at arbitrary inter-grid locations, as shown
in Fig.~\ref{fig:lcurve} (right), where the above procedure was applied for $N=65$ for TBP2 with 1\% noise.
To be more precise, ``the'' over-sampled solution is rather an error band, as for noisy traces a spread of
near-optimal solutions within some error bound exists.
A small deviation from the original pulse is apparent when looking at its peak value; it is caused by the
discretisation error of $E(t)$ on $N=65$ intervals and should disappear as the grid is refined.
We did the same as above for the pulse TBP2 but with $\lambda = 10^{-1.7}$ (not shown). The average pulse error and
the oscillations in the error band were slightly smaller, though small deviations of the mean curve through
the error band from the original pulse are visible in some regions. This implies that care has to be taken when fine-tuning
$\lambda$ at the corner of the L-curve; rather choose solutions with slightly smaller $\lambda$ and slightly
bigger $\lVert X \rVert$ and average them to obtain a mean curve. This phenomenon is also apparent in
Fig.~\ref{fig:lcurve} (left, compare green, red, violet, brown): all four points have small, nearby errors,
the smallest for brown, which is, though, not the optimal choice.
\section{Conclusion and Outlook} \label{sec:conclusion}
An algorithm has been developed with applicability to real experimental data in mind,
aimed at common pulse retrieval schemes in ultrafast nonlinear optics like FROG and d-scan
\cite{Trebino:1993,Miranda:2012}.
The employed numerical techniques are borrowed from other fields of physics where nonlinear integral
equations of similar type appear, from polynomial system solvers, and from stochastic optimisation.
It has been shown how to implement Tikhonov-type regularisation into the polynomial equations, how to
adaptively decrease it while path tracking the solution and how to fine-tune $\lambda$ at the
solution when dealing with noisy, defective experimental data.
The system of equations was set up such that each equation corresponds to a pixel-surface average of
the original integral equation on a grid of pixels, rather than a pointwise representation.
This coarse-graining capability enables fast computations of approximants, noise suppression
and high accuracy retrievals on fine pixelisations.
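As a toy illustration of such a pixel-surface average (our sketch; the array sizes and data are made up, and a real trace is of course not random):
\begin{verbatim}
import numpy as np

fine = np.random.default_rng(0).random((128, 128))  # stand-in fine trace
p = 4                                               # pixel size
coarse = fine.reshape(128 // p, p, 128 // p, p).mean(axis=(1, 3))
print(coarse.shape)   # (32, 32): one averaged value per coarse pixel
\end{verbatim}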
The solution technique of altering random matrices along a collection of homotopy curves
constitutes a new method for path tracking real solutions of overdetermined polynomial systems.
Every movement along a path segment, corresponding to a new random reduction of the system to be solved,
deforms the momentary solution partly in the direction of the global solution and partly towards some random
perturbation. These perturbations appear to cancel when path tracking over a collection of many path segments.
Similarly, perturbations due to added noise seem to compensate each other.
We are not aware of a theoretical foundation of these partially stochastic homotopy paths and rely
on such heuristics and numerical evidence.
A link to the theoretical framework developed
for the Newton-sketch method \cite{Pilanci:2017newton,Berahas:2020}, or in Randomised Numerical
Linear Algebra \cite{Drineas2016:randnla,Martinsson:2020} or in other areas of stochastic optimisation
seems plausible.
If speed is a concern, there are three ways to accelerate the solver. The most immediate
is parallelising the list auto-correlations in (\ref{eq:lowerLeft}) on GPUs.
Secondly, it is worth trying Jacobian-free Newton-Krylov methods \cite{Knoll:2004jacobian}
or other Quasi-Newton methods \cite{Kelley:1995iterative,Kelley:2003solving} to replace Newton's method
in the algorithm, which could accelerate the computations by a factor of $N$.
Thirdly, when reducing the full system to random linear combinations, the Hadamard transform
is the method of choice for dimensional reduction \cite{Ailon:2009,Boutsidis:2013} in many other applications in
stochastic optimisation.
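A minimal sketch of one standard embodiment, the subsampled randomised Hadamard transform (our illustration with made-up sizes; a production version would use a fast in-place transform instead of the dense matrix):
\begin{verbatim}
import numpy as np
from scipy.linalg import hadamard

def srht(A, m, rng=np.random.default_rng(0)):
    # reduce the n rows of A to m << n nearly isometric combinations
    n = A.shape[0]                      # n must be a power of two
    H = hadamard(n) / np.sqrt(n)        # orthonormal Hadamard matrix
    d = rng.choice([-1.0, 1.0], n)      # random sign flips
    rows = rng.choice(n, m, replace=False)
    return np.sqrt(n / m) * (H @ (d[:, None] * A))[rows]

A = np.random.default_rng(1).standard_normal((64, 8))
print(srht(A, 16).shape)                # (16, 8), norms ~ preserved
\end{verbatim}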
For realistic FROG or d-scan traces an additional frequency-dependent function, multiplied with the trace, can be
introduced to model frequency-dependent systematic errors in the nonlinear medium
and the experimental setup which are otherwise neglected. This function could
be added to the list of unknowns and retrieved with the presented algorithm.
As mentioned before, for complex homotopies it is guaranteed that every random continuation path connects
a root of the start polynomial system with a solution of the target system, though, in general,
an unphysical complex root. For real homotopies several paths have to be tested, which can be done
quickly on coarse grids to initialise a fine grid retrieval. Alternatively, switching between two real
gauge conditions as explained in Subsec. \ref{subsec:init} is worth further investigation.
For refining a retrieved pulse further, the pointwise representation of the integral equation should be given
a second look, see eq. (\ref{eq:list-auto}), as the numerical integration errors could then be avoided. This could yield
a small gain in accuracy.
Finally, there are other similar phase retrieval and inversion problems in optics and nonlinear
optics \cite{Fienup:82,Mairesse:2005}, where an application of this algorithm seems plausible as soon as Quasi-Newton methods
are accessible.
\section*{Acknowledgement}
I would like to thank Günter Steinmeyer, Esmerando Escoto, Lorentz von Grafenstein from the MBI Berlin,
Peter Staudt from APE GmbH, as well as Carl T. Kelley, Michael H. Henderson, Jan Verschelde, Alex Townsend
for helpful comments and discussion.
In particular, I thank Nils C. Geib for carefully reading and commenting on the first version of this article.
Without the daily support of my wife Halina Hoppe and the kind hospitality of Heike and Ludwig Hoppe
this work would have been impossible during these difficult times. Thank you.
This work was financially supported by the European Union through the EFRE program
(IBB Berlin, ProFit grant, contract no. 10164801, project OptoScope) in
collaboration between the Max-Born Institute, Berlin and the company APE GmbH.
\bibliographystyle{unsrt}
| {'timestamp': '2020-10-09T02:15:25', 'yymm': '2010', 'arxiv_id': '2010.03930', 'language': 'en', 'url': 'https://arxiv.org/abs/2010.03930'} |
\section{Introduction}
The accessory parameters appeared first in the Riemann-Hilbert problem
asking for an ordinary differential equation whose solutions transform
according to a given monodromy group \cite{bolibrukh}. They reappear
in Liouville theory in the quest for an auxiliary differential
equation in which all elements of the monodromy group belong to
$SU(1,1)$. Such a request is the necessary and sufficient condition
for having a single valued conformal Liouville field. Their
determination also plays a crucial role in $2+1$ dimensional gravity
\cite{CMS2} in the presence of matter. This is also connected to the
Polyakov relation which relates such accessory parameters to the
variation of the on-shell action of Liouville theory under the change
in the position of the sources \cite{ZT1,ZT2,CMS1,CMS2,TZ}. They
appear again in the classical limit of the conformal blocks of the
quantum conformal theory
\cite{ZZ,HJP,FP,menottiAccessory,piatek,LLNZ,
menottiConformalBlocks,menottiTorusBlocks}.
In several developments it is important to establish the nature of the
dependence of such accessory parameters on the source positions and on
the moduli of the theory. To this end we have the result of Kra
\cite{kra}, who, in the case of the sphere topology in the presence only
of parabolic and finite order elliptic singularities, proved that such
a dependence is real analytic (not analytic). The technique used to
reach such a result was that of the fuchsian mapping, a method which
cannot be applied to the case of general elliptic singularities.
On the other hand, in the usual applications general elliptic, not
finite order elliptic, singularities appear. Finite order singularities
are those for which the source strength is given by $\eta_k
=(1-1/n)/2, n\in Z_+$ (see section 2).
In the case when only one independent accessory parameter is present,
like the sphere topology with four sources or the torus with one
source, it was proven that such accessory parameters are real analytic
functions of the source position or moduli, almost everywhere
(i.e. everywhere except for a zero measure set) in the source position
or moduli space
\cite{menottiAccessory,
menottiPreprint,menottiHigherGenus,menottiTorusBlocks}. The
qualification almost everywhere implies e.g. that we could not exclude
the presence of a number of cusps in the dependence of the accessory
parameters on the source positions, a phenomenon which may be expected
in the solution of a system of implicit equations.
This result was obtained by applying complex variety techniques to the
conditions which impose the $SU(1,1)$ nature of all the monodromies.
In \cite{menottiPreprint} an extension of such a technique was attempted
for the case of two independent accessory parameters, like the sphere
with five sources and the torus with two sources, but results were
obtained only under an irreducibility assumption.
The usual approach to the solution of the Liouville equation is the
variational approach. Such an approach was suggested by Poincar\'e in
\cite{poincare} but not pursued by him due to some difficulties in
proving the existence of the minimum of a certain functional. The
variational approach was developed with success by Lichtenstein
\cite{lichtenstein} and in a different context by Troyanov
\cite{troyanov} by writing the conformal field as the sum of a proper
background and a remainder. With such a splitting the problem is
reduced to the search of the minimum of a given functional. One proves
that such a minimum exists and solves the original problem
\cite{lichtenstein,troyanov,menottiExistence}. Poincar\'e in
\cite{poincare} pursued and solved the same problem by means of a
completely different procedure which became known as the Le
Roy-Poincar\'e continuation method \cite{leroy,mahwin}.
The idea is to write the solution of the Liouville equation as a power
series expansion in certain properly chosen parameters. Such a series
turns out to be uniformly convergent over all the complex plane or
Riemann surface.
This cannot be achieved in a single step. Once one has solved the
equation with one such parameter in a certain region, one uses the
obtained solution as the starting point of another series in another
parameter, and thus at the end one has the solution as a series of
series, each uniformly convergent.
The procedure is lengthier than the variational approach but has
the advantage that one can follow the dependence of each series on the
input, the input being the Lichtenstein background field.
Such a field, to be called $\beta$, is a real positive function smooth
everywhere except at the source positions, the singularity being
characterized by the nature and the strength of the sources; apart
from these requirements the choice of $\beta$ is free. Thus except at
the singularities $\beta$ is a smooth, say $C^\infty$, function. The
uniqueness theorem \cite{lichtenstein,menottiExistence} tells us that
the final result does not depend on the specific choice of $\beta$.
Simple smoothness would not be a good starting point for proving the
real analytic dependence of the result; on the other hand, as we
shall see, it is
possible to provide a background field $\beta$ satisfying all the
Lichtenstein requirements and real analytic in the moduli, except
obviously at the sources.
Starting from such a $\beta$ one sees that the zero order
approximation in the Poincar\'e procedure gives rise to a conformal
field which is real analytic in the position of the sources $z_k$ and
in the argument $z$, except at the source positions $z_k$. The problem
is to show that such real analyticity properties are inherited in all
power expansion procedures and finally by the conformal factor itself.
This is what is proved in this paper in the presence of any number of
elliptic singularities. The final outcome is that the
conformal factor depends in a real analytic way both on the argument $z$
of the field and on the source positions. Once this result is
established it is not difficult to express the accessory parameters in
terms of the conformal field and prove the real analytic dependence of
the accessory parameters themselves on the source positions.
The paper is structured as follows. In section \ref{lichtenstein} we
describe the Lichtenstein decomposition and provide a background field
$\beta$ which is real analytic everywhere except at the sources.
In section \ref{poincareprocedure} we give the Poincar\'e procedure
for the solution of the Liouville equation and in the following
section \ref{linearsection} we give the method of solution for a class
of linear inhomogeneous equations which appear in section
\ref{poincareprocedure}. In section \ref{inheritance} we prove how the
real analytic properties of the background field $\beta$ are inherited
in all the iteration process and finally by the solution i.e. the
Liouville field. In section \ref{realanalyticitysection}, using the
obtained result we prove the real analytic dependence of the accessory
parameters on the source positions for the sphere topology for any
number of general elliptic singularities. In section
\ref{torusanalyticity} we give the extension of the result to the
torus with any number of sources.
Finally in section \ref{conclusions} we discuss the perspectives
for the extension of the method to parabolic singularities and
higher genus. To make the paper more readable we have relegated to
four appendices the proofs of some technical results which are employed
in the text.
\section{The Lichtenstein decomposition}\label{lichtenstein}
The Liouville equation is
\begin{equation}
\Delta\phi=e^\phi
\end{equation}
with the boundary conditions at the elliptic singularities
\begin{equation}
\phi+2\eta_k \log|z-z_k|^2={\rm bounded},~~~~~~~\eta_k<\frac{1}{2}
\end{equation}
and at infinity
\begin{equation}
\phi+2\log|z|^2={\rm bounded}~~.
\end{equation}
The procedure starts by constructing a positive function $\beta$
everywhere smooth except at the sources where it obeys the
inequalities
\begin{equation}\label{inequality1}
0<\lambda_m<\beta |z-z_k|^{4\eta_k} <\lambda_M
\end{equation}
and for $|z|>\Omega$, $\Omega$ being the radius of a disk which
includes all singularities
\begin{equation}\label{inequality2}
0<\lambda_m<\beta |z|^4<\lambda_M ~.
\end{equation}
Note that $\int \beta(z) d^2z<\infty$. In addition $\beta$ will be normalized
as to have
\begin{equation}\label{sumrule}
-\sum_k2\eta_k+\frac{1}{4\pi}\int\beta(z)d^2z=-2
\end{equation}
for the sphere topology.
Apart from these requirements $\beta$ is free and due
to the uniqueness theorem the final result for the field $\phi$ does
not depend on the specific choice of $\beta$. On the other hand, as
discussed in the introduction, it will be useful to start from a
$\beta$ which is real analytic both in $z$ and in the source positions
$z_k$, except at the sources. One choice is
\begin{equation}\label{ourbeta}
\beta = c \prod_k\frac{[(z-z_k)(\bar z-\bar
z_k)]^{-2\eta_k}}{[1+z\bar z]^{-2\sigma+2}},~~~~~~~~\sigma=\sum_k \eta_k
\end{equation}
where the positive constant $c$ has to be chosen as to comply with
the sum rule (\ref{sumrule}).
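Indeed (a short check we record for completeness), near $z_k$ the factor
$[(z-z_k)(\bar z-\bar z_k)]^{-2\eta_k}$ in (\ref{ourbeta}) supplies exactly
the singularity required by (\ref{inequality1}) while the remaining factors
stay bounded away from $0$ and $\infty$, and for large $|z|$
\begin{equation}
\beta\, |z|^4= c\,(1+z\bar z)^{2\sigma-2}(z\bar z)^{2-2\sigma}
\prod_k\Big|1-\frac{z_k}{z}\Big|^{-4\eta_k}\rightarrow c
\end{equation}
so that (\ref{inequality2}) is satisfied as well.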
Picard inequalities
require the presence of at least three singularities, the case of
three singularities being soluble in terms of hypergeometric functions.
As is well known, by performing a projective
transformation we can set $z_1=0, z_2=1, z_3=i$.
We shall be interested in the dependence
of the accessory parameters on a given $z_k$ keeping the others fixed;
we shall call such a source position $z_4$.
Obviously (\ref{ourbeta}) is not the only choice, but it is
particularly simple. Varying the position $z_4$ around a given initial
position we shall need to vary $c$ in
order to keep (\ref{sumrule}) satisfied. It is easily seen that
such a $c$ depends on $z_4$ in a real analytic way (see Appendix A).
Given $\beta$ one constructs the function
\cite{lichtenstein,menottiExistence}
\begin{equation}\label{liouville}
\nu = \phi_1+\frac{1}{4\pi}\int\log|z-z'|^2\beta(z')d^2z'\equiv\phi_1+I
\end{equation}
with
\begin{equation}
\phi_1 =\sum_k(-2\eta_k) \log|z-z_k|^2
\end{equation}
and we define $u$ by
\begin{equation}
\phi = \nu+u~.
\end{equation}
With such a definition the Liouville equation becomes
\begin{equation}\label{liouville2}
\Delta u = e^\nu e^u-\beta \equiv \theta e^u-\beta~.
\end{equation}
The real analyticity of $\beta$ and $\theta$ needs a little discussion.
We recall that a real analytic function can be defined as the value
assumed by an analytic function of two variables $f(z,z^c)$ when $z^c$
assumes the value $\bar z$. Equivalently it can be defined as a
function of two real variables $x$ and $y$ which locally can be
expanded in a convergent power series
\begin{equation}
f(x+\delta x,y+\delta y) - f(x,y)= \sum_{m,n} a_{m,n} \delta x^m
\delta y^n~.
\end{equation}
In eq.(\ref{ourbeta}) we can write
\begin{equation}
[(z-z_k)(\bar z-\bar z_k)]^{-2\eta_k}=
[(x-x_k)^2+(y-y_k)^2]^{-2\eta_k}
\end{equation}
which around a point $x,y$ with $x\neq x_k$ and/or $y\neq y_k$
can be expanded in a power series, obviously with bounded convergence
radius. The function $\nu$ and consequently the function $\theta$
contain $\beta$ in the form
\begin{equation}
e^\nu = e ^{\phi_1+\frac{1}{4\pi}\int\log|z-z'|^2\beta(z')d^2z'}=
\prod_k [(z-z_k)(\bar z-\bar z_k)]^{-2\eta_k} ~e^I.
\end{equation}
As we shall keep all $z_k$ fixed except $z_4$ we shall write
\begin{equation}
I(z,z_4)=\frac{1}{4\pi}\int\log|z-z'|^2\beta(z',z_4)d^2z'
\end{equation}
The analytic properties of $I$ both in $z$ and $z_4$ are worked out
in Appendix B.
\section{The Poincar\'e procedure}\label{poincareprocedure}
After performing the decomposition of the field $\phi$ as
$\phi=u+\nu$ the Liouville equation becomes
\begin{equation}\label{originaleq}
\Delta u= \theta e^u-\beta~~~~~{\rm with}~~\theta=e^\nu\equiv r\beta
\end{equation}
and as a consequence of the inequalities (\ref{inequality1},
\ref{inequality2}) we have
\begin{equation}\label{r1rr2}
0<r_1<r<r_2
\end{equation}
for certain $r_1,r_2$.
Let $\alpha$ be the minimum
\begin{equation}
\alpha=\min\bigg(\frac{\beta}{\theta}\bigg)=\frac{1}{\max r}
\end{equation}
which due to (\ref{r1rr2}) is a positive number.
Then we can rewrite the equation as
\begin{equation}\label{rewritten}
\Delta u = \theta e^u - \alpha\theta -\beta(1-\alpha r)~.
\end{equation}
As a consequence of the choice for $\alpha$ we have $\psi\equiv
\beta(1-\alpha r)\geq 0$.
Convert the previous equation to
\begin{equation}\label{lambdaeq}
\Delta u = \theta e^u-\alpha\theta -\lambda\psi
\end{equation}
and write
\begin{equation}\label{nonlinearseries}
u=u_0+\lambda u_1+\lambda^2 u_2+\dots~.
\end{equation}
We have to solve the system
\begin{eqnarray}\label{nonlinearsystem}
&&\Delta u_0 = \theta (e^{u_0}-\alpha)\nonumber\\
&&\Delta u_1 = \theta e^{u_0}u_1-\psi\nonumber\\
&&\Delta u_2 = \theta e^{u_0}(u_2+w_2)\nonumber\\
&&\Delta u_3 = \theta e^{u_0}(u_3+w_3)\nonumber\\
&&\dots
\end{eqnarray}
where
\begin{equation}
w_2 = \frac{u_1^2}{2},~~~~w_3 = \frac{u_1^3}{6}+u_1u_2,~~~~
w_4 = \frac{u_1^4}{24}+\frac{u_1^2 u_2+u_2^2}{2}+u_1u_3,~~~~\dots
\end{equation}
are all polynomials with positive coefficients. We see that in
the $n$-th equation the $w_n$ is given in terms of $u_k$ with $k<n$
and thus each of the equations (\ref{nonlinearsystem}) is a linear
equation.
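Equivalently, $u_n+w_n$ is the coefficient of $\lambda^n$ in
$\exp(\sum_{k\geq 1}\lambda^k u_k)$, which allows one to generate the $w_n$
mechanically; the following small script (our illustration, using sympy)
reproduces the expressions above:
\begin{verbatim}
import sympy as sp

lam = sp.symbols('lambda')
u = sp.symbols('u1:5')                         # u1, u2, u3, u4
v = sum(lam**k * u[k - 1] for k in range(1, 5))
poly = sp.expand(sp.series(sp.exp(v), lam, 0, 5).removeO())
for n in range(2, 5):
    print('w_%d =' % n, poly.coeff(lam, n) - u[n - 1])
# w_2 = u1**2/2
# w_3 = u1**3/6 + u1*u2
# w_4 = u1**4/24 + u1**2*u2/2 + u1*u3 + u2**2/2
\end{verbatim}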
Thus the previous is a system of linear inhomogeneous differential
equations for the $u_k$. The first equation is solved by $u_0=\log
\alpha$. We shall see in the next section that each of the following
equations in (\ref{nonlinearsystem}) can be solved by iterated power
series expansion and that all the $u_k$ are bounded. From the
properties of the Laplacian $\Delta$ and eq.(\ref{nonlinearsystem}) we
have
\begin{eqnarray}\label{inequalities}
&&|u_1| \leq \max\bigg(\frac{\psi}{e^{u_0}\theta}\bigg)\nonumber\\
&&|u_2| \leq \max~|w_2|\nonumber\\
&&\dots\nonumber\\
&&|u_k| \leq \max~|w_k|\nonumber\\
&& \dots
\end{eqnarray}
If $\Delta u_k$ is finite at $z=z_{\rm max}$, the point where $|u_k|$
reaches its maximum, the above inequalities
follow from the well known properties of the Laplacian. At the
singular points it may happen that the Laplacian diverges, but the
inequalities (\ref{inequalities}) still hold. In fact, if the maximum
of $|u_k|$ is reached at the singular point $z_l$, with $u_k(z_l)>0$,
and the r.h.s. in eq.(\ref{nonlinearsystem}) is positive definite in a
neighborhood of $z_l$, then the circular average
\begin{eqnarray}
\frac{1}{2\pi}\int u_k(z_l+\rho e^{i\varphi})d\varphi \equiv \bar u(\rho)
\end{eqnarray}
has a positive definite source. Thus it is increasing with $\rho$,
which contradicts the fact that $u_k(z_l)$ is the maximum. The same
reasoning works also if at $z_l$ we have $u_k(z_l)<0$.
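Explicitly, averaging the Laplacian over circles around $z_l$ gives formally
\begin{equation}
\frac{1}{\rho}\frac{d}{d\rho}\Big(\rho\,\frac{d\bar u}{d\rho}\Big)=
\frac{1}{2\pi}\int_0^{2\pi}\Delta u_k(z_l+\rho e^{i\varphi})\,d\varphi\geq 0
\end{equation}
so that $\rho\,\bar u'(\rho)$ is non decreasing and, vanishing at $\rho=0$,
non negative.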
Using the above inequalities one proves (see Appendix D)
that the series (\ref{nonlinearseries}) converges for
\begin{equation}\label{lambda0bound}
|\lambda|<\frac{\alpha(\log 4-1)}
{\max|\frac{\psi}{\theta}|}
\end{equation}
and such convergence is uniform.
It is not difficult to show \cite{poincare} using the results of
Appendix C, that the convergent series satisfies the differential
equation (\ref{lambdaeq}) i.e. that one can exchange in
(\ref{lambdaeq}) the Laplacian with the summation operation.
Thus we are able to solve the equation
\begin{equation}
\Delta u = \theta e^u - \alpha\theta -\lambda_0\psi
\end{equation}
for
\begin{equation}
0<\lambda_0<\frac{\alpha(\log4-1)}{\max~|\frac{\psi}{\theta}|}~.
\end{equation}
If $\lambda_0$ can be taken equal to $1$ the problem is solved.
Otherwise one can extend the region of solubility of our equation by
solving the equation
\begin{equation}
\Delta u = \theta e^u - \theta \alpha-
\lambda_0\psi
-\lambda\psi\equiv
\theta e^u - \varphi-\lambda\psi~.
\end{equation}
Expanding as before in $\lambda$ one obtains
\begin{eqnarray}
&&\Delta u_0 = \theta e^{u_0}-\varphi\nonumber\\
&&\Delta u_1 = \theta e^{u_0}u_1-\psi\nonumber\\
&&\Delta u_2 = \theta e^{u_0}(u_2+w_2)\nonumber\\
&&\Delta u_3 = \theta e^{u_0}(u_3+w_3)\nonumber\\
&&\dots
\end{eqnarray}
From the first equation using $\varphi>0$ we have
\begin{equation}
\min~e^{u_0} >\min(\frac{\varphi}{\theta})
\end{equation}
and thus from the second
\begin{equation}
\max~ |u_1| \leq\max |\frac{\psi}{\theta e^{u_0}}|\leq
\max |\frac{\psi}{\theta}|\frac{1}{\min(\frac{\varphi}{\theta})}=
\max|\frac{\psi}{\theta}|\frac{1}{\min(\alpha+\lambda_0\frac{\psi}{\theta})}<
\frac{\max~|\frac{\psi}{\theta}|}{\alpha}~.
\end{equation}
Then following the procedure of the previous step we have convergence
for
\begin{equation}
|\lambda|<\frac{\alpha(\log 4-1)}{\max~|\frac{\psi}{\theta}|}~.
\end{equation}
This is the same bound as (\ref{lambda0bound}) and thus repeating such
extension procedure, in a finite number of steps we reach the solution
of the original equation (\ref{originaleq}). We shall call these
steps extension steps.
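The whole construction is easy to mimic numerically. The following toy
sketch (our illustration, not the setting of the paper: a one dimensional
periodic grid replaces the Riemann sphere, the data $\theta$, $\beta$ are
made up, and the linear steps are inverted directly instead of by the
series of the next section) chooses the data so that the bound
(\ref{lambda0bound}) already allows $\lambda=1$ and no extension step is
needed:
\begin{verbatim}
import numpy as np

N, L = 256, 2.0 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
h = L / N
theta = 1.0 + 0.3 * np.cos(x)            # theta = e^nu > 0
beta = theta * (1.0 + 0.1 * np.sin(x))   # keeps max|psi/theta| = 0.2
alpha = np.min(beta / theta)             # alpha = 1/max(r)
psi = beta - alpha * theta               # psi >= 0 by construction

D = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2
D[0, -1] = D[-1, 0] = 1.0 / h**2         # periodic Laplacian
A = D - np.diag(alpha * theta)           # invertible linear operator

u_terms = [np.full(N, np.log(alpha))]    # u_0 = log(alpha)
c = [np.ones(N)]                         # coefficients of e^{sum l^k u_k}
for n in range(1, 31):
    w_n = sum(j * u_terms[j] * c[n - j] for j in range(1, n)) / n
    u_n = np.linalg.solve(A, alpha * theta * w_n
                          - (psi if n == 1 else 0.0))
    u_terms.append(u_n)
    c.append(w_n + u_n)

u = sum(u_terms)                         # the series at lambda = 1
print(np.max(np.abs(D @ u - theta * np.exp(u) + beta)))  # small
\end{verbatim}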
\section{The equation $\Delta u = \eta u -\varphi$}
\label{linearsection}
In the previous section we met the problem of solving linear
equations in $u$ of the type
\begin{equation}\label{linearequation}
\Delta u = \theta e^U u -\varphi
\end{equation}
where $U$ is provided by the solution of a previous equation. Here we
give the procedure for obtaining the solution of the more general
equation
\begin{equation}\label{generallinearequation}
\Delta u = \eta u -\varphi
\end{equation}
where $\eta$ is positive and has
the same singularities as $\theta$ in the sense that $
0<c_1<\frac{\eta}{\theta}<c_2$ \cite{poincare}. We start by noticing
that due to the positivity of $\eta$, if $u$ and $v$ are two solutions of
(\ref{generallinearequation}) then we have
\begin{equation}
\int(u-v)\Delta (u-v)d^2z=-\int\nabla(u-v)\cdot\nabla(u-v)d^2z=
\int\eta(u-v)^2d^2z=0
\end{equation}
and, since a non-positive quantity here equals a non-negative one, both vanish, i.e. $u=v$. To construct the solution one considers the equation
\begin{equation}\label{lambdadiffeq}
\Delta u = \lambda\eta u -\varphi_0 -\lambda\psi
\end{equation}
with $\int\varphi_0 d^2z=0$ and writes the $u$ as
\begin{equation}\label{ulinearexp}
u = (u_0+c_0)+\lambda(u_1+c_1)+\lambda^2(u_2+c_2)+\dots
\end{equation}
and then we have
\begin{eqnarray}\label{linearsystem}
&&\Delta u_0 = -\varphi_0 \nonumber\\
&&\Delta u_1 = \eta (u_0+c_0)-\psi\nonumber\\
&&\Delta u_2 = \eta (u_1+c_1)\nonumber\\
&&\Delta u_3 = \eta (u_2+c_2)\nonumber\\
&&\dots
\end{eqnarray}
where the $u_k$ are simply given by
\begin{equation}
u_k=\frac{1}{4\pi}\int\log|z-z'|^2 s_k(z')d^2z'
\end{equation}
with $s_k$ the sources in eq.(\ref{linearsystem}). Due to the compactness
of the domain, i.e. the Riemann sphere, equations of the type $\Delta u=
s$ are soluble only if $\int s d^2z=0$.
The solutions of
$\Delta u = s$ are determined up to a constant, a fact which
has been explicitly taken into account in (\ref{ulinearexp}).
Then the $c_k$ are chosen as to have the integral of the r.h.s. of the
equations in (\ref{linearsystem}) equal to zero.
\begin{eqnarray}\label{cequations}
&&c_0\int\eta d^2z= \int \psi d^2z - \int\eta u_0 d^2z\nonumber\\
&&c_1\int\eta d^2z= - \int\eta u_1 d^2z\nonumber\\
&&c_2\int\eta d^2z= - \int\eta u_2 d^2z\nonumber\\
&& \dots
\end{eqnarray}
Thus we have $|c_k|<\max|u_k|$ for $k\geq 1$. On the other hand we
have from the inequality proven in Appendix B
\begin{equation}
\max|u_2|\leq B \max~|u_1+c_1|
\end{equation}
from which
\begin{equation}
\max~|u_2+c_2|\leq 2 \max |u_2| < 2B \max~|u_1+c_1|
\end{equation}
and similarly for any $k$. Thus the series converges uniformly for
$|\lambda|<\frac{1}{2B}$. Again one can easily prove \cite{poincare}
using the results of Appendix C, that one can exchange the summation
operation with the Laplacian and thus the series satisfies the
differential equation (\ref{lambdadiffeq}).
It is important to notice
that the convergence radius does not depend on $\varphi$.
Then, having chosen any $\lambda_1$, $0<\lambda_1< \frac{1}{2B}$, we can solve
for any $\varphi$
\begin{equation}\label{generallinear}
\Delta u =\lambda_1 \eta u -\varphi\equiv\lambda_1 \eta u -\varphi_0-\psi
\end{equation}
as the power expansion in $\lambda$ of
\begin{equation}
\Delta u =\lambda \eta u -\varphi_0 - \frac{\lambda}{\lambda_1}\psi
\end{equation}
converges for $\lambda=\lambda_1$.
Thus if $\frac{1}{2B}>1$ the problem is solved. Otherwise one can
extend the region of convergence in the following way.
Having chosen $\lambda_1 = \frac{1}{2B}-\varepsilon>0$,
we consider the equation
\begin{equation}
\Delta u=\lambda_1 \eta u+\lambda \eta u -\varphi~.
\end{equation}
We are already able to solve
\begin{equation}
\Delta u=\lambda_1\eta u -\varphi
\end{equation}
and thus we shall expand in $\lambda$
\begin{equation}
u=u_0+\lambda u_1+\lambda^2 u_2+\dots
\end{equation}
with
\begin{eqnarray}\label{seconditeration}
&&\Delta u_0=\lambda_1\eta u_0-\varphi\nonumber\\
&&\Delta u_1=\lambda_1\eta u_1+\eta u_0\nonumber\\
&&\Delta u_2=\lambda_1\eta u_2+\eta u_1\nonumber\\
&&\Delta u_3=\lambda_1\eta u_3+\eta u_2\nonumber\\
&&\dots
\end{eqnarray}
all of which are of the form (\ref{generallinear}) and thus we are
able to solve them.
To establish the convergence radius in $\lambda$ we use the fact
that in the solution of (\ref{seconditeration}) we have
\begin{equation}
|u_{k+1}|<\frac{1}{\lambda_1}\max|u_{k}|,~~~~~~~~k\geq 1
\end{equation}
and thus we have uniform convergence of the series in $\lambda$ for
$|\lambda|<\lambda_1$. We repeat now the procedure starting from
the equation
\begin{equation}
\Delta u=\lambda_1 \eta u+\lambda_2 \eta u +\lambda \eta u
-\varphi
\end{equation}
with $0<\lambda_2<\lambda_1$ which is solved again by expanding in
$ \lambda$. From the same
argument as before the convergence radius in $\lambda$ is
\begin{equation}
\lambda_1+\lambda_2
\end{equation}
which is larger than the previous radius $\lambda_1$, and
thus in a finite number of extension steps we are able to solve
\begin{equation}
\Delta u=(\lambda_1+\lambda_2+\dots+\lambda_n)\eta u -\varphi
\end{equation}
with $\lambda_1+\lambda_2+\dots+\lambda_n=1$ which is our original
equation (\ref{generallinearequation}).
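One way to see that a finite number of extension steps indeed suffices:
at the $j$-th step the series converges for
$|\lambda|<\Lambda_j\equiv\lambda_1+\dots+\lambda_j$, so that one may take
$\lambda_{j+1}$ as close to $\Lambda_j$ as wanted, e.g.
$\lambda_{j+1}=\Lambda_j-\varepsilon$, giving
\begin{equation}
\Lambda_{j+1}=2\Lambda_j-\varepsilon~;
\end{equation}
the admissible sum essentially doubles at every step and reaches $1$ in
$O(\log(1/\lambda_1))$ steps.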
\section{The inheritance of real analyticity}\label{inheritance}
Not to overburden the notation we shall write $f(z,z_4)$ for
$f_c(z,z^c,z_4,z_4^c)$ at $z^c=\bar z$ and $z_4^c=\bar z_4$ with
$\frac{\partial}{\partial z}=\frac{1}{2}(\frac{\partial}{\partial x}
-i\frac{\partial}{\partial y})$ and
$\frac{\partial}{\partial \bar z}=\frac{1}{2}(\frac{\partial}{\partial
x} +i\frac{\partial}{\partial y})$.
We need the detailed structure of the most important function which
appears in the iteration procedure i.e. of $\theta=\beta r$. We are
interested in the problem when $z_4$ varies in a domain $D_4$ around a
$z^0_4$, say $|z_4-z^0_4|<R_4$, which excludes all other singularities.
We choose $R_4$ equal to $1/4$ the minimal distance of $z_4^0$ from
the singularities $z_k$, $k\neq 4$. We know that $0<r_1<r<r_2$ where
the bounds $r_1$ and $r_2$ can be taken independent of $z_4$ for
$z_4\in D_4$. The function $\theta(z,z_4)$ is explicitly given by
\begin{equation}
\theta = \prod_k ((z-z_k)(\bar z-\bar z_k))^{-2\eta_k} e^{I(z,z_4)}
\end{equation}
where
\begin{equation}\label{I}
I(z,z_4)=\frac{1}{4\pi}\int\log|z-z'|^2 \beta(z',z_4)d^2z'~.
\end{equation}
In dealing with integrals of the type (\ref{I}) to avoid the
appearance of non integrable functions in performing the derivative
w.r.t. $z_4$ it is instrumental to isolate a disk ${\cal R}_1$ around
$z_4$ of radius $R_1$ that for $z_4\in D_4$ contains only the
singularity $z_4$ and not the other $z_k$.
For the function $u$ of $z$ and $z_4$ it is useful to write for
$|z-z_4|<R_1$, $u(z,z_4)=\hat u(\zeta,z_4)$ with $\zeta=z-z_4$ and
thus also $\hat\theta(\zeta,z_4)=\theta(z,z_4)$ for $|\zeta|<
R_1$. Thus for $|z-z_4|<R_1$ we shall have denoting with ${\cal
R}_{1c}$ the complement of ${\cal R}_1$
\begin{eqnarray}\label{exphatI}
&& \hat I(\zeta,z_4) =\frac{1}{4\pi}\int_{{\cal R}_1}\log|\zeta-\zeta'|^2
\hat\beta(\zeta',z_4)
d^2\zeta'\nonumber\\
&&+\frac{1}{4\pi}\int_{{\cal R}_{1c}}\log|\zeta+z_4-z'|^2
\beta(z',z_4) d^2z'~.
\end{eqnarray}
We shall also consider another disk centered at $z_4$ with
radius $R_2<R_1$ and write for $|z-z_4|>R_2$
\begin{eqnarray}\label{expI}
&& I(z,z_4) =\frac{1}{4\pi}\int_{{\cal R}_2}\log|z-z_4-\zeta'|^2
\hat\beta(\zeta',z_4)
d^2\zeta'\nonumber\\
&&+\frac{1}{4\pi}\int_{{\cal R}_{2c}}\log|z-z'|^2
\beta(z',z_4) d^2z'~.
\end{eqnarray}
In Appendix B it is proven that $I(z,z_4)$, eq.(\ref{I}), is continuous
and real analytic in $z$ for $z\neq z_k$, that $\hat I(\zeta,z_4)$,
eq.(\ref{exphatI}), is real analytic in $z_4$ for $z_4\in D_4$, and that
$I(z,z_4)$, eq.(\ref{expI}), is real analytic in $z_4\in D_4$.
\bigskip
The typical transformation we were confronted with in the previous
sections was
\begin{equation}\label{transformation}
u(z,z_4)=\frac{1}{4\pi}\int\log|z-z'|^2 \theta(z',z_4) s(z',z_4)d^2z'~.
\end{equation}
For $|z-z_4|<R_1$ we have
\begin{eqnarray}\label{transfin}
&& \hat u(\zeta,z_4) =
\frac{1}{4\pi}\int_{{\cal R}_1}\log|\zeta-\zeta'|^2
\hat\theta(\zeta',z_4) \hat s(\zeta',z_4)d^2\zeta'\nonumber\\
&&+ \frac{1}{4\pi}\int_{{\cal R}_{1c}}\log|\zeta+z_4-z'|^2
\theta(z',z_4) s(z',z_4)d^2z'
\end{eqnarray}
and for $|z-z_4|> R_2$ we have
\begin{eqnarray}\label{transfout}
&& u(z,z_4) =
\frac{1}{4\pi}\int_{{\cal R}_2}\log|z-\zeta'-z_4|^2
\hat\theta(\zeta',z_4) \hat s(\zeta',z_4)d^2\zeta'\nonumber\\
&&+ \frac{1}{4\pi}\int_{{\cal R}_{2c}}\log|z-z'|^2
\theta(z',z_4) s(z',z_4)d^2z'~.
\end{eqnarray}
We recall that we work under the condition
\begin{equation}\label{integral0}
\int\theta(z',z_4) s(z',z_4)d^2z'=0
\end{equation}
which can also be written as
\begin{equation}
\int_{\cal R}\hat\theta(\zeta',z_4) \hat s(\zeta',z_4)d^2\zeta'+
\int_{{\cal R}_c}\theta(z',z_4) s(z',z_4)d^2z'=0~.
\end{equation}
A consequence of relation (\ref{integral0}) is that we can work also
with
\begin{eqnarray}
&& \hat u(\zeta,z_4) =
\frac{1}{4\pi}\int_{{\cal R}_1}\log\big|1-\frac{\zeta'}{\zeta}\big|^2
\hat\theta(\zeta',z_4) \hat s(\zeta',z_4)d^2\zeta'\nonumber\\
&& + \frac{1}{4\pi}\int_{{\cal R}_{1c}}\log\big|1+\frac{z_4-z'}{\zeta}\big|^2
\theta(z',z_4) s(z',z_4)d^2z'~,
\end{eqnarray}
\begin{eqnarray}\label{u1out}
&& u(z,z_4) =
\frac{1}{4\pi}\int_{{\cal R}_2}\log\big|1-\frac{\zeta'+z_4}{z}\big|^2
\hat\theta(\zeta',z_4) \hat s(\zeta',z_4)d^2\zeta'\nonumber\\
&&+ \frac{1}{4\pi}\int_{{\cal R}_{2c}}\log\big|1-\frac{z'}{z}\big|^2
\theta(z',z_4) s(z',z_4)d^2z'~.
\end{eqnarray}
This last form is useful in investigating the behavior of
$u(z,z_4)$ at $z=\infty$.
\bigskip
We shall now show that some boundedness and real analyticity
properties of the source $s(z,z_4)$ are inherited by $u(z,z_4)$
through the transformation (\ref{transformation}). We shall always
work with $z_4\in D_4$ where $D_4$ was described at the beginning of
the present section and does not contain any other singularity $z_k$.
The real analyticity is proven by showing the existence of the complex
derivatives w.r.t. $z$ and $\bar z$ or w.r.t. $z_4$ and $\bar z_4$.
Due to the symmetry of the problem it is sufficient to prove
analyticity w.r.t. $z$ and $z_4$.
Properties of the source $s$ which are inherited by $u$ in the
transformation (\ref{transformation}) are
\bigskip
P1. $u$ is bounded and continuous in $z,z_4$, $z_4\in D_4$.

P2. $\hat u(\zeta,z_4)$ is analytic in $\zeta$ for $|\zeta|<R_1$,
$\zeta\neq 0$.

P3. $\hat u(\zeta,z_4)$ is analytic in $z_4$ with
$\frac{\partial \hat u(\zeta,z_4)}{\partial z_4}$ bounded for
$z_4\in D_4$, $|\zeta|<R_1$.

P4. $u(z,z_4)$ is analytic in $z$, for $|z-z_4|>R_2$, $z=\infty$
included, except at $z=z_k$.

P5. $u(z,z_4)$ is analytic in $z_4$ with
$\frac{\partial u(z,z_4)}{\partial z_4}$ bounded for
$z_4\in D_4$, $|z-z_4|>R_2$.
\bigskip
Thus we shall assume that the properties P1-P5 are satisfied by
$s(z,z_4)$ and prove that they are inherited by $u(z,z_4)$ of
eq.(\ref{transformation})
\bigskip
The inheritance of property P1 is a consequence of the inequality
proven in Appendix C.
The inheritance of properties P2 and P4 is proved by computing the
derivative w.r.t. $z$ using the method employed in Appendix B when
dealing with the derivative of $I(z,z_4)$ and using the analyticity
and boundedness of $s(z,z_4)$.
As for P3 we shall use the expression (\ref{transfin}) for $\hat
u$. $\hat\theta$ has the following structure
\begin{equation}\label{thetahat}
\hat\theta(\zeta,z_4) = (\zeta\bar\zeta)^{-2\eta_4} \prod_{k\neq 4}
(|\zeta+z_4-z_k|^2)^{-2\eta_k} e^{\hat I(\zeta,z_4)},~~~~~~~~|\zeta |<R_1~.
\end{equation}
With respect to the first term in (\ref{transfin}), in taking the
derivative w.r.t. $z_4$ one easily sees that the conditions are
satisfied for taking the derivative under the integral sign, for all
$\zeta$, provided $\hat s$ and
$\frac{\partial \hat s}{\partial z_4}$ be bounded in ${\cal R}_1\times
D_4$, i.e. properties P1 and P3.
In fact the derivative of the product in eq.(\ref{thetahat})
w.r.t. $z_4$ is regular and the derivative of the exponential boils
down to the derivative of $\hat I$ which we have shown in Appendix B
to be analytic in $z_4$ for all $\zeta$ in ${\cal R}_1$.
Then we have to differentiate $\hat s(\zeta,z_4)$ w.r.t. $z_4$. As
$\frac{\partial\hat s}{\partial z_4}$ is uniformly bounded in ${\cal
R}_1$, property P3, such differentiation under the integral sign is
legal.
In taking the derivative w.r.t. $z_4$ of the second term in
(\ref{transfin}) we must take into account the fact that the
integration region ${\cal R}_{1c}$ moves as $z_4$ varies. Then the
derivative of the second integral appearing in (\ref{transfin}) is
\begin{eqnarray}
&& \frac{1}{4\pi}\int_{{\cal R}_{1c}} \frac{\partial}{\partial z_4}
\bigg[\log|\zeta+z_4-z'|^2
\theta(z',z_4) s(z',z_4)\bigg] d^2z'\nonumber\\
&-& \frac{1}{8\pi i}\oint_{\partial {\cal R}_{1c}} \log|\zeta+z_4-z'|^2
\theta(z',z_4) s(z',z_4) d\bar z'~.
\end{eqnarray}
The logarithms in the above equation are not singular for
$\zeta\in{\cal R}_1$ which makes the differentiation under
the integral sign legal.
We come now to $u(z,z_4)$ with $|z-z_4|>R_2$, where we use
expression (\ref{transfout}). In taking the derivative w.r.t. $z_4$
the first integral does not present any problem, as $z\in {\cal R}_{2c}$
and $\zeta\in {\cal R}_2$ and thus the logarithm is non singular. The
second integral gives two contributions: the first is provided by
the derivative of $\theta s$, which gives rise to an integrand bounded
by an absolutely integrable function independent of $z_4$ for $z_4\in
D_4$; the second is a contour integral due to the motion of ${\cal R}_{2c}$ as $z_4$
varies.
We are left to examine the neighborhood of $z=\infty$ which, with the
behavior (\ref{inequality2}) for the $\beta$ at infinity and the
consequent behavior of the $\theta$, is a regular point. We have with
$\tilde u(x,z_4)=u(1/x,z_4)$ and $\tilde\theta(y,z_4)=\theta(1/y,z_4)$
and using (\ref{u1out})
\begin{equation}\label{tildeequation}
\tilde u(x,z_4)=\frac{1}{4\pi}\int\log|x-y|^2\frac{\tilde\theta(y,z_4)}
{(y\bar y)^2}\tilde s(y,z_4)d^2y-
\frac{1}{4\pi}\int\log|y|^2\frac{\tilde\theta(y,z_4)}
{(y\bar y)^2}\tilde s(y,z_4)d^2y~.
\end{equation}
Exploiting the analyticity of $\tilde\theta(y,z_4)/(y\bar y)^2$ for
$|y|<1/\Omega$ we have that (\ref{tildeequation}) has complex
derivative w.r.t. $x$ for $|x|<1/\Omega$ thus proving the analyticity
in $x$ of $\tilde u$ around $x=0$.
From the previous eq.(\ref{tildeequation}) we see that $\tilde
u(x,z_4)$ is analytic in the polydisk $|x|<1/\Omega$ and $z_4\in
D_4$. This assures not only that $u$ at infinity is bounded, a result
that we knew already from the treatment of sections
\ref{poincareprocedure}, \ref{linearsection} and Appendix C, but that
$\frac{\partial u}{\partial z_4}$ is uniformly bounded for all $z$
with $|z-z_4|>R_2$. Thus we have reproduced for $u$ the properties
P1-P5.
\bigskip
We have now to extend the properties P1-P5 to all the $u_k$ of
sections \ref{poincareprocedure} and \ref{linearsection} and to their
sum. First of all we notice that in solving equation
(\ref{lambdadiffeq}) the constants $c_k$ intervene. These are given
by eq.(\ref{cequations}) i.e. by the ratio of two integrals where the
one which appears at the denominator never vanishes. The real analytic
dependence on $z_4$ of the denominator $\int\eta~ d^2z$ is established
by the method provided in Appendix A, while the derivative of the
numerator is again computed by splitting the integration region as
${\cal R}\cup{{\cal R}_c}$ and using the fact that $u_k$ and
$\frac{\partial u_k}{\partial z_4}, \frac{\partial\hat u_k}{\partial
z_4}$ are bounded.
To establish the real analyticity of the sum of the series we shall
exploit the well known result that, given a sequence of analytic
functions $f_n$ defined in a domain $\Omega$ which converges to $f$
uniformly on every compact subset of $\Omega$, the limit $f$ is
analytic in $\Omega$ and the derivatives $f'_n$ converge
uniformly to $f'$ on every compact subset of $\Omega$.
We saw in sections \ref{poincareprocedure} and \ref{linearsection}
that in general more than one extension step is required to reach the
complete solutions of eqs.(\ref{originaleq}) and
(\ref{generallinearequation}) but these steps are always finite in
number. Let us consider first the case in which a single step is
sufficient.
Then we have explicitly
\begin{eqnarray}
&& \Delta u= \theta e^u-\beta\label{basic2}\\
&&u=\sum_{k=0}^\infty \lambda_1^k ~u_k\\
&& u_0=\log \alpha\\
&& \Delta u_1=\alpha\theta u_1 -\psi\\
&&\Delta u_k=\alpha\theta(u_k+w_k),~~~~k\geq 2\label{nlinear2}\\
&& u_k=\sum_{h=0}^\infty \lambda_2^h~ u_{k,h}~.
\end{eqnarray}
Now we climb back the above sequence. Starting from
\begin{eqnarray}\label{onesteplinear}
&&\Delta u_{1,0}=-\varphi_0\nonumber\\
&&\Delta u_{1,1}= \eta(u_{1,0}+c_{1,0})-\psi\nonumber\\
&&\Delta u_{1,2}= \eta (u_{1,1}+c_{1,1})\nonumber\\
&&\Delta u_{1,3}= \eta (u_{1,2}+c_{1,2})\nonumber\\
&&\dots
\end{eqnarray}
and applying the inheritance result proven above we have that
$u_{1,h}$ are bounded in $D_4$ together with $\frac{\partial
u_{1,h}}{\partial z_4}, \frac{\partial \hat u_{1,h}}{\partial z_4}$.
Since the convergence is uniform, the sum
$u_1=\sum_{h=0}^\infty \lambda_2^h~ u_{1,h}$ is real analytic and
bounded in $D_4$, and $\frac{\partial u_1}{\partial z_4},
\frac{\partial \hat u_1}{\partial z_4}$ are bounded in
$|z_4-z_4^0|<R_4-\frac{\varepsilon}{4}$. The function $u_2$ is
obtained by solving (\ref{nlinear2}), where we recall that the source
$w_2$ depends only on $u_1$ and $w_k$ depends only on the $u_r$
with $r<k$, in a polynomial and thus analytic way.
Repeating the previous reasoning for $u_2$ we have analyticity and
boundedness of $u_2$ and its derivative in
$|z_4-z_4^0|<R_4-\frac{\varepsilon}{4}-\frac{\varepsilon}{8}$ and thus
boundedness of $u=\sum_{k=0}^\infty \lambda_1^k u_k$ in
$|z_4-z_4^0|<R_4-\frac{\varepsilon}{2}$ with its derivative bounded in
$|z_4-z_4^0|<R_4-\varepsilon$. Similarly one extends the analyticity
of $\hat u_k$ in $\zeta$ for $\zeta\neq0$ and of $u_k$ in $z$,
$|z-z_4|>R_2$ for $z\neq z_k$ to the sum of the series.
In case the solution of eq.(\ref{basic2}) requires more
than one extension step, one repeats the same procedure for each
extension step, and the same for eq.(\ref{nlinear2}), keeping in mind
that such extension steps are always finite in number. Suppose
e.g. that the solution of eq.(\ref{basic2}) requires three
extension steps. Then we allocate for each step $\varepsilon/3$
instead of $\varepsilon$ and proceed as before. The same is done if
the intermediate linear equations require more than one extension
step. Here we employ the general result that given the equation
$\Delta u =\lambda \eta u - \phi_0-\lambda \psi$ if the sources
$\phi_0$ and $\psi$ are of the form $\eta s$ with $s$ having the
properties P1-P5 for $z_4\in D_4$, then the solution $u$ has the
properties P1-P5 for $|z_4-z_4^0|<R_4-\varepsilon$ for any
$\varepsilon>0$. Such a result is proven using exactly the treatment
of eq.(\ref{onesteplinear}) given above.
We recall now that the conformal field $\phi(z,z_4)$ is given in terms
of $u$ by
\begin{equation}
\phi(z,z_4)=u(z,z_4)+\nu(z,z_4)=u(z,z_4)-2\sum_k\eta_k\log|z-z_k|^2+ I(z,z_4)
\end{equation}
where the analytic properties of $I(z,z_4)$ have already been given in
Appendix B. Thus we conclude that $\phi(z,z_4)$ is real analytic in
$z_4$ and in $z$ for $|z_4-z_4^0|< R_4-\varepsilon$ and for $z\neq z_k$.
\section{The real analyticity of the accessory parameters}
\label{realanalyticitysection}
Consider now a singularity $z_k$ with $k\neq 4$ and a circle $C$
around it of radius such that no other singularity is contained in
it. Given the conformal factor $\phi=u+\nu$ we have that both $u$ and
$\nu$ are real analytic functions of $z$ and $z_4$ for $z$ in an
annulus containing $C$ and $z_4\in D_4$. The accessory parameter $b_k$
can be expressed in terms of $\phi$ as
\begin{equation}\label{contourintegral}
b_k=\frac{1}{i\pi}\oint_{z_k} Q dz
\end{equation}
where $Q$ is given by (see e.g. \cite{menottiHigherGenus})
\begin{equation}
Q= -e^{\frac{\phi}{2}}\frac{\partial^2 }{\partial z^2}e^{-\frac{\phi}{2}}
=\sum_k\frac{\eta_k(1-\eta_k)}{(z-z_k)^2}+
\sum_k\frac{b_k}{2(z-z_k)}~.
\end{equation}
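Only the simple pole contributes to the contour integral in
(\ref{contourintegral}) (the double pole integrates to zero), so that the
r.h.s. indeed returns $b_k$; a small numerical check with made-up
$\eta_k$ and $b_k$ (our illustration, not tied to an actual Liouville
solution):
\begin{verbatim}
import numpy as np

zk = np.array([0.0, 1.0, 1j, 0.4 + 0.7j])  # z_1=0, z_2=1, z_3=i, z_4
eta = np.array([0.2, 0.3, 0.25, 0.15])     # made-up strengths
b = np.array([0.1 - 0.2j, 0.3j, -0.15, 0.05 + 0.1j])  # made-up b_k

def Q(z):
    return np.sum(eta * (1 - eta) / (z - zk)**2 + b / (2 * (z - zk)))

k, R, M = 2, 0.2, 2000                     # small circle around z_3 = i
t = np.linspace(0.0, 2 * np.pi, M, endpoint=False)
z = zk[k] + R * np.exp(1j * t)
dz = 1j * R * np.exp(1j * t) * (2 * np.pi / M)
integral = sum(Q(zz) * dzz for zz, dzz in zip(z, dz))
print(integral / (1j * np.pi), b[k])       # the two numbers agree
\end{verbatim}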
Due to the analyticity of $\phi$ we can associate to any point of $C$ a
polydisk $D_z\times D_4$ where $\phi$ is real analytic. Due to the
compactness of $C$ we can extract a finite covering provided by
such polydisks.
It follows then that the integral (\ref{contourintegral}) is a real
analytic function of $z_4$ for $z_4\in D_4$. Thus we have that all
the accessory parameters $b_k$ with $k\neq 4$ are real analytic
functions of $z_4$. With regard to the accessory parameter $b_4$
we recall that due to the Fuchs relations \cite{menottiHigherGenus} it
is given in terms of the other $b_k$ and thus also $b_4$ is
real analytic in $z_4$. The reasoning obviously holds for the
dependence on the position of any singularity keeping the others
fixed, thus concluding the proof of the real analyticity on all source
positions.
We considered explicitly the case of the sphere topology with an
arbitrary number of elliptic singularities. This treatment extends the
results of \cite{menottiAccessory,menottiHigherGenus,menottiPreprint}
where it was found that in the case of the sphere with
four sources we had real analyticity almost everywhere. With the
almost everywhere attribute we could not exclude the occurrence of a
number of cusps in the dependence of the accessory parameters
on the position of the sources. Here we proved real analyticity
everywhere and for any number of sources and thus the occurrence of
cusps is excluded. Obviously the whole reasoning holds when the
positions of the singularities are all distinct. What happens when two
singularities meet has been studied only in special cases in
\cite{kra} and \cite{HS1,HS2}.
\section{Higher genus}\label{torusanalyticity}
For the case of the torus i.e. genus 1 we can follow the treatment of
\cite{menottiExistence}.
In this case we know the explicit form of
the Green function
\begin{equation}
G(z,z'|\tau)=\frac{1}{4\pi}\log[\theta_1(z-z'|\tau)\times c.c.]+
\frac{i}{4(\tau-\bar\tau)}(z-z'-\bar z+\bar z')^2
\end{equation}
where $\theta_1$ is the elliptic theta function \cite{batemanII}.
$G(z,z'|\tau)$ is a real analytic function in $z$, for $z\neq z'$,
and in $\tau$.
It satisfies
\begin{equation}
\Delta G(z,z'|\tau)=\delta^2(z-z') - \frac{2i}{\tau-\bar\tau}~.
\end{equation}
As for the $\beta$ we can construct it using the Weierstrass $\wp$
function.
\begin{equation}
\beta(z|\tau) = c\prod_k (\wp(z-z_k|\tau)\times c.c. )^{\eta_k}~.
\end{equation}
Using the freedom of $c$ we normalize the $\beta$ as to have
\begin{equation}
\int\beta(z|\tau)d^2z = 4\pi \sum_k 2\eta_k
\end{equation}
consistent with the topological restriction $\sum_k
2\eta_k>2(1-g)=0$. The $\nu$ is given by
\begin{equation}
\nu = 4\pi\sum_k(-2\eta_k)G(z,z_k|\tau)+
\int G(z,z'|\tau)\beta(z'|\tau)d^2z'
\end{equation}
with $\phi=u+\nu$.
We proceed now as in sections \ref{inheritance} and
\ref{realanalyticitysection} to obtain the real analytic dependence
of $\phi$ both on the position of the sources and on the modulus.
The $\phi(z)$ is translated to the
two sheet representation of the torus $\varphi(v,w)$ using
$v=\wp(z|\tau)$ and
\begin{equation}
\varphi(v,w) = \phi(z) + \log \big(\frac{d z}{d v}\times c.c.\big)
\end{equation}
where
\begin{equation}
w = \frac{\partial v}{\partial z}
=\wp'(z|\tau) =
\sqrt{4(v-v_1)(v-v_2)(v-v_3)}
\end{equation}
and thus
\begin{equation}
\varphi(v,w) = \phi(z) -\frac{1}{2}
\log[16(v-v_1)(\bar v-\bar v_1)
(v-v_2)(\bar v-\bar v_2)(v-v_3)(\bar v-\bar v_3)]~.
\end{equation}
Now we proceed as in section \ref{realanalyticitysection}
where now \cite{menottiHigherGenus} in the auxiliary equation we have
\begin{eqnarray}\label{Qtorus}
&&Q =\frac{3}{16}\bigg(\frac{1}{(v-v_1)^2}+\frac{1}{(v-v_2)^2}+
\frac{1}{(v-v_3)^2}\bigg) \nonumber\\
&& +\frac{b_1}{2(v-v_1)}+\frac{b_2}{2(v-v_2)}+
\frac{b_3}{2(v-v_3)}\nonumber\\
&& + \sum_{k>3} \eta_k(1-\eta_k)\frac{(w+w_k)^2}{4(v-v_k)^2w^2}+
\frac{b_k(w+w_k)}{4(v-v_k)w}~.
\end{eqnarray}
In eq.(\ref{Qtorus}) $w=\sqrt{4(v-v_1)(v-v_2)(v-v_3)}$ takes opposite
values on the two sheets and the factors $\frac{w+w_k}{2w}$ project the
singularities on the sheet to which they belong. We recall that the
accessory parameters $b_k$ are related by the Fuchs relations which in
the case of the torus are three in number \cite{menottiHigherGenus}
and thus the independent ones are as many as the sources. Then
proceeding as in the previous section we can extract by means of a
contour integral the real analytic dependence of the accessory parameters
on the source positions and on the modulus.
For higher genus we do not possess the explicit form of the Green
function and we have a representation of the analogue of the
Weierstrass $\wp$ function only for genus 2 \cite{komori}. Thus one
should employ more general arguments for the analyticity of the Green
function and for the expression of the $\beta$ function.
\section{Discussion and conclusions}\label{conclusions}
In the present paper we proved that on the sphere topology with any
number of elliptic singularities the accessory parameters are real
analytic functions of the source positions and the result has been
extended to the torus topology with any number of elliptic
singularities. This complements the result of Kra \cite{kra} where the
real analytic dependence was proven for parabolic and elliptic
singularities of finite order. Here the elliptic singularities are
completely general. The extension of the present treatment to the
case when one or more singularities are parabolic should be in
principle feasible even though more complicated. Poincar\'e
\cite{poincare} in fact applied with success the continuation method
also in presence of parabolic singularities but the treatment is far
lengthier. The reason is that integrals of the type
\begin{equation}
\int \log|z-z'|^2 f(z')d^2z'
\end{equation}
with $f(z')$ behaving like
\begin{equation}
\frac{1}{|z'-z_k|^2 \log^2|z'-z_k|^2}
\end{equation}
for $z'$ near $z_k$, diverge for $z\rightarrow z_k$, contrary to what
happens in the elliptic case. On the other hand it is proven in
\cite{poincare} that the solution of eq.(\ref{originaleq}) is finite
even at the parabolic singularities; in other words, even if each
term of the series diverges for $z\rightarrow z_k$, their sum converges
to a function which is finite for $z\rightarrow z_k$, a procedure which
requires a higher number of iteration steps. Some of these
intermediate steps employ $C^\infty$ but non analytic regularization.
This does not mean that the continuation method does not work for
parabolic singularities but simply that one should revisit the
procedure of \cite{poincare} keeping analyticity in the forefront.
The real analytic dependence of the accessory parameter on the sphere
with four sources elliptic and/or parabolic and of the torus with one
source was proven already in
\cite{menottiAccessory,menottiHigherGenus,menottiPreprint} almost
everywhere in the moduli space using analytic variety
techniques. Almost everywhere meant, e.g., that we could not exclude the
presence of a number of cusps in the dependence of the
parameter on the source position or moduli. The results of the present
paper remove such a possibility.
In section \ref{torusanalyticity} we extended the procedure to the torus
topology in a rather straightforward way and thus in presence of $n$
elliptic sources on the torus we have that the independent accessory
parameters, which are $n$ in number, depend in real analytic way on
the source positions and on the modulus.
For higher genus i.e. $g>1$ the best approach appears to be the use of
the representation of the Riemann surface using the fuchsian domains
in the upper half-plane. For carrying through the program, in the absence
of explicit forms of the Green function, one should establish its
analytic dependence on the moduli and also one should provide an
analytic $\beta$ satisfying the correct boundary conditions.
\bigskip
\section*{Appendix A}
In the text the problem arises to establish the
real analyticity of certain integrals. The problem can be dealt with
in two equivalent ways. The integral in question is a function of two
real variables $x,y$. To prove real analyticity we must show
that around a real point $x^0,y^0$ the function for real values of
$x,y$ is identical to the values taken by a holomorphic function
of two complex variables, call them $x^c,y^c$.
Alternatively one can use the complex variable $z$ and its complex
conjugate $\bar z$. Then proving the real analyticity of $f(z,\bar z)$
around $z^0,\bar z^0$ is equivalent to prove the analyticity of
$f(a,b)$ in $a$ and $b$ taken as independent variables. We shall use
this complex variable notation as it is simpler.
\bigskip
We prove here the real analyticity of $c(z_4)$ introduced in section
\ref{lichtenstein}. Write
\begin{equation}
f(z,z_4)=\frac{\prod_{k\neq 4}[(z-z_k)(\bar z-\bar z_k)]^{-2\eta_k}}
{[1+z\bar z]^{-2\sigma+2}}
[(z-z_4)(\bar z-\bar z_4)]^{-2\eta_4}\equiv g(z,\bar z) [(z-z_4)(\bar z-\bar z_4)]^{-2\eta_4}
\end{equation}
with $\sigma= \sum\eta_k$. In $g$ we ignored the dependence on
the $z_k$ with $k\neq 4$ as they will always be kept fixed.
In computing the derivative of $A$ defined by
\begin{equation}
A=\int\frac{\prod_{k\neq 4}[(z-z_k)(\bar z-\bar
z_k)]^{-2\eta_k}}{[1+z\bar z]^{-2\sigma+2}}
[(z-z_4)(\bar z-\bar z_4)]^{-2\eta_4} \frac{i}{2}dz\wedge d\bar z
\end{equation}
w.r.t. $z_4$ in order to avoid the occurrence
of non integrable singularities it is expedient to apply the
technique of writing
\begin{equation}
A= A_1+A_2
\end{equation}
where $A_1$ is the integral extended inside a disk of center $z_4$ and
radius $R$ excluding all other singularities and $A_2$ the integral
outside.
Then we have
\begin{equation}
A_1=
\int_{\cal R} (\zeta\bar\zeta)^{-2\eta_4} g(\zeta+z_4,\bar\zeta+\bar z_4)
\frac{i d\zeta\wedge d\bar \zeta}{2}
\end{equation}
which has derivative w.r.t. $z_4$
\begin{equation}\label{A1derivative}
\frac{\partial A_1}{\partial z_4}=
\int_{\cal R} (\zeta\bar\zeta)^{-2\eta_4}
\frac{\partial g(\zeta+z_4,\bar\zeta+\bar z_4)}{\partial z_4}
\frac{i d\zeta\wedge d\bar\zeta}{2}~.
\end{equation}
It is justified to take the derivative operation inside the integral
sign as due to the real analyticity of $g$ in $\cal R$
the integrand in (\ref{A1derivative}) can be bounded by a function
$b(\zeta,\bar\zeta)$ independent of $z_4$ whose integral over $\cal R$
is absolutely convergent, exploiting $-2\eta_4+1>0$.
For $A_2$ we have
\begin{equation}
A_2 = \int_{{\cal R}_c} f(z,z_4) i \frac{d z\wedge d \bar z}{2}
\end{equation}
where ${\cal R}_c$ is the complement of $\cal R$, whose derivative is
given by
\begin{equation}\label{A2derivative}
\frac{\partial A_2}{\partial z_4}= \int_{{\cal R}_c} \frac{\partial
f(z,z_4)}{\partial z_4 }\frac{i dz\wedge d\bar z}{2}
+\oint_{\partial {\cal R}_c} f(z,z_4) \frac{id\bar z}{2}~.
\end{equation}
Again it is legal to take the derivative operation inside the integral
sign in the first term of (\ref{A2derivative}) as we are working
outside $\cal R$ and the contour integral arises from the fact
that the domain ${\cal R}_c$ moves with $z_4$.
This contour term equals
\begin{equation}
i\oint f(z,z_4) \frac{dx}{2} + \oint f(z,z_4) \frac{dy}{2}~.
\end{equation}
We also have the complex conjugate equation, which gives rise to the
derivatives w.r.t. $x_4,y_4$. Thus $c(z_4)=C(x_4,y_4)$ is
real analytic.
For future developments we point out the uniform bound for $|z|>2
~{\max} |z_k|$
\begin{equation}\label{generalbound}
\beta=\frac{c(z_4)}{(1+z\bar z)^2}(1+\frac{1}{z\bar z})^{2\sigma}
\prod_k\bigg|1-\frac{z_k}{z}\bigg|^{-4\eta_k}
<\frac{{\rm const}}{(1+z\bar z)^2}(1+\frac{1}{z\bar z})^{2\sigma}
\prod_k\bigg|1\pm\frac{1}{2}\bigg|^{-4\eta_k}
\end{equation}
with $+$ or $-$ according to $\eta_k<0$ or $\eta_k>0$ and for $z_4\in
D_4$.
\section*{Appendix B}
In this Appendix we work out the analytic properties of
\begin{equation}\label{Iintegral}
I(z,z_4)=\frac{1}{4\pi}\int\log|z-z'|^2 \beta(z',z_4) d^2z'
\end{equation}
which are necessary to establish the properties of the function
$\theta(z,z_4)$.
We have
\begin{equation}
I(z,z_4)=\frac{1}{4\pi}\log (z\bar z)\int\beta(z',z_4) d^2z'+
\frac{1}{4\pi}\int\log|1-\frac{z'}{z}|^2 \beta(z',z_4) d^2z'.
\end{equation}
Let $\Omega$ be the radius of a disk which encloses all singularities
$z_k$; outside such a disk we have
\begin{equation}
\beta(z,z_4)<\frac{c}{(z\bar z)^2}
\end{equation}
with $c$ independent of $z_4$ for $z_4\in D_4$.
Moreover we choose $\Omega>1$. We have
\begin{eqnarray}
&& \int\log|1-\frac{z'}{z}|^2 \beta(z',z_4) d^2z'=\\
&& |z|^2\int_{|y|<\frac{1}{2}}\log|1-y|^2 \beta(zy,z_4) d^2y+
|z|^2\int_{|y|>\frac{1}{2}}\log|1-y|^2 \beta(zy,z_4) d^2y
\end{eqnarray}
First we examine the region $|z|>2\Omega$. The first integral
is less than
\begin{equation}
2\log 2~|z|^2\int_{|y|<\frac{1}{2}}\beta(zy,z_4)d^2y\leq
2\log 2\int\beta(z',z_4)d^2z'
\end{equation}
and the second, due to $\beta(z,z_4)<c/(z\bar z)^2$ is less than
\begin{equation}
\frac{c}{z\bar z}\int_{|y|>\frac{1}{2}}
\big|\log|1-y|^2\big|\frac{d^2y}{(y\bar y)^2}~.
\end{equation}
Thus for $|z|>2\Omega$ we have
\begin{eqnarray}
&& |I(z,z_4)|\leq\frac{1}{4\pi}\log(z\bar z)\int\beta(z',z_4) d^2z'
+ \frac{2 \log 2}{4\pi} ~\int\beta(z',z_4) d^2z'\\
&&+ \frac{c}{4\pi z\bar z}\int_{|y|>\frac{1}{2}}
\big|\log|1-y|^2\big|\frac{d^2y}{(y\bar y)^2}~.
\end{eqnarray}
For $|z|<2\Omega$ we isolate the singularities $z_k$ of $\beta$ by non
overlapping discs of radius $a<\frac{1}{4}$. In the complement
$\beta(z,z_4)$ is majorized by $c/(1+z\bar z)^2$ with $c$ independent
of $z_4$ for $z_4\in D_4$. We bound $|I|$ by the sum of two
terms, the first being
\begin{eqnarray}
&& \frac{c}{4\pi}\int_{|\zeta|>1} \log\zeta\bar\zeta\frac{1}{(1+(z+\zeta)
(\bar z+\bar\zeta))^2}d^2\zeta\leq\\
&& \frac{c}{4\pi}\int_{1<|\zeta|<2\Omega}\log\zeta\bar\zeta ~d^2\zeta+
\frac{c}{4\pi}\int_{|\zeta|>2\Omega}\log\zeta\bar\zeta
\frac{1}{(1+(|\zeta|-2\Omega)^2)^2}d^2\zeta
\end{eqnarray}
and the second is the contribution of $|\zeta|<1$, where
$\log\zeta\bar\zeta$ is negative
\begin{eqnarray}
-\frac{c}{4\pi}\int_{|\zeta|<1} \log\zeta\bar\zeta\frac{1}{(1+(z+\zeta)
(\bar z+\bar\zeta))^2}d^2\zeta\leq
-\frac{c}{4\pi}\int_{|\zeta|<1}\log\zeta\bar\zeta ~d^2\zeta~.
\end{eqnarray}
The singularity at $z'=z_k$ is dealt with by setting $\zeta=z-z_k$. The contribution
of the disk of radius $a$ is
\begin{equation}
4\pi I_a= \int_a\log|\zeta-\zeta'|^2\tilde\beta(\zeta',z_4)d^2\zeta'
\end{equation}
where $\tilde\beta(\zeta,z_4)=\beta(\zeta+z_k,z_4)$.
We have
\begin{equation}
4\pi I_a= \log|\zeta|^2 \int_{a} \tilde\beta(\zeta',z_4)d^2\zeta'+ \int_{a}
\log\bigg|1-\frac{\zeta'}{\zeta}\bigg|^2\tilde\beta(\zeta',z_4)
d^2\zeta'
\end{equation}
and thus for $|\zeta|>2 a$ we have
\begin{equation}
|4\pi I_a|\leq (\big|\log|\zeta|^2\big|+2\log 2) \int_{a}
\tilde\beta(\zeta',z_4)d^2\zeta'
\end{equation}
and, as we are working in the region $|z|<2\Omega$, the factor
$\big|\log|\zeta|^2\big|$ is bounded.
For $a<|\zeta|<2a$ as $\log|\zeta-\zeta'|^2$ is always
negative we have
\begin{equation}
|4\pi I_a|\leq -M\int_{a} \log|\zeta-\zeta'|^2
(\zeta'\bar\zeta')^{-2\eta_k} d^2\zeta' =
-\pi M\log(\zeta\bar\zeta)\frac{(a^2)^{1-2\eta_k}}{1-2\eta_k}
\end{equation}
where $M$ is such that
$\tilde\beta(\zeta',z_4)<M(\zeta'\bar\zeta')^{-2\eta_k}$ for $\zeta'$
in the disk of radius $a$ and $z_4\in D_4$.
Finally for $|\zeta|<a$
\begin{equation}
|4\pi I_a|\leq
\frac{\pi M}{1-2\eta_k}\bigg[-(a^2)^{1-2\eta_k}\log a^2
+\frac{(a^2)^{1-2\eta_k}-(\zeta\bar\zeta)^{1-2\eta_k}}
{1-2\eta_k}\bigg]~.
\end{equation}
We conclude that $I(z,z_4)$ for any $z$ and $z_4\in D_4$ is always finite
and bounded by
\begin{equation}\label{summaryI}
|I(z,z_4)|\leq\frac{1}{4\pi}\log (z\bar z+1)\int\beta(z,z_4)d^2z+ c_1
\end{equation}
with $c_1$ independent of $z_4$ for $z_4\in D_4$.
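As a purely illustrative numerical check of (\ref{summaryI}) (not part
of the original argument), the following Python sketch evaluates the
integral by a naive grid quadrature for a toy $\beta$; the values
$z_1=0.3$, $\eta_1=0.1$, $c(z_4)=1$ and the $O(1)$ slack standing for
$c_1$ are all assumptions of the sketch, not data from the text.
\begin{verbatim}
# Toy check of |I(z)| <= (1/4pi) log(|z|^2+1) * Int(beta) + c_1.
# Assumptions: one singularity z_1 = 0.3 with eta_1 = 0.1, c(z_4) = 1,
# and a naive uniform-grid quadrature; purely illustrative.
import numpy as np

eta1, z1 = 0.1, 0.3 + 0.0j

def beta(z):
    return np.abs(z - z1)**(-4*eta1) / (1 + np.abs(z)**2)**2

L, n = 20.0, 801
xs = np.linspace(-L, L, n)
X, Y = np.meshgrid(xs, xs)
Zp = X + 1j*Y
dA = (xs[1] - xs[0])**2
B = beta(Zp)
B[~np.isfinite(B)] = 0.0            # drop the (integrable) singular node
total = (B * dA).sum()              # approximates Int beta d^2z'

def I(z):
    w = np.log(np.abs(z - Zp)**2)
    w[~np.isfinite(w)] = 0.0
    return (w * B * dA).sum() / (4*np.pi)

for z in [0.5 + 0.5j, 3.0, 10.0j]:
    lhs = abs(I(z))
    rhs = np.log(abs(z)**2 + 1) * total / (4*np.pi) + 2.0   # c_1 ~ O(1)
    print(z, lhs, rhs, lhs <= rhs)
\end{verbatim}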
We now prove that $I(z,z_4)$ is analytic in $z$ for $z\neq z_k$.
Given a $z_0\neq z_k$, let us consider a disk $D$ of center $z_0$ and
radius $r$ such that the disk does not contain any singularity of
$\beta(z,z_4)$. By standard arguments one shows that the derivative
w.r.t. $z$ of the contribution of such disk to the integral
(\ref{Iintegral}) is
\begin{equation}\label{Dintegral}
\frac{1}{4\pi} \int_D\frac{\beta(z',z_4)}{z-z'}d^2z'~.
\end{equation}
The contribution of the complement $D_c$ of $D$ to the derivative is
\begin{equation}\label{Dcintegral}
\frac{1}{4\pi} \int_{D_c}\frac{\beta(z',z_4)}{z-z'}d^2z'
\end{equation}
as for $|z-z_0|<\frac{r}{2}$ we have that the integrand
is bounded by
\begin{equation}
\frac{\beta(z',z_4)}{|z'-z_0|-\frac{r}{2}}
\end{equation}
which is absolutely convergent and independent of $z$.
Thus $I(z,z_4)$ is analytic in $z$ for $z\neq z_k$ and its derivative
is given by the sum of (\ref{Dintegral}) and (\ref{Dcintegral}) i.e.
by (\ref{Dintegral}) with $D$ replaced by the whole $z$ plane.
In working out the derivative of $I(z,z_4)$ w.r.t. $z_4$, in order to
avoid non-integrable singularities we must isolate a disk ${\cal R}_1$
of fixed radius $R_1$ with center $z_4$ and excluding all other $z_k$.
Thus as given in section \ref{inheritance} we write for $|\zeta|<R_1$,
$\zeta=z-z_4$,
\begin{eqnarray}\label{IR1}
\hat I(\zeta,z_4)&=&\frac{1}{4\pi}\int_{{\cal R}_1}
\log|\zeta-\zeta'|^2 \hat\beta(\zeta',z_4)d^2\zeta'\nonumber\\
&+&\frac{1}{4\pi}\int_{{\cal R}_{1c}}
\log|\zeta+z_4-z'|^2 \beta(z',z_4)d^2z'~.
\end{eqnarray}
where ${\cal R}_{1c}$ is the complement of ${\cal R}_1$.
We notice that $\hat I(\zeta,z_4)$ does not depend on the specific
choice of the radius $R_1$ of the domain used in (\ref{IR1}) and in
(\ref{derivativeofhatI}) below to compute the derivative w.r.t. $z_4$,
provided that ${\cal R}_1$ does not contain any other singularity
except $z_4$.
Its derivative w.r.t. $z_4$ is
\begin{eqnarray}\label{derivativeofhatI}
\frac{\partial \hat I(\zeta,z_4)}{\partial z_4}
&=&\frac{1}{4\pi}\int_{{\cal R}_1}
\log|\zeta-\zeta'|^2 \frac{\partial \hat\beta(\zeta',z_4)}
{\partial z_4 }d^2\zeta'\nonumber\\
&+&\frac{1}{4\pi}\int_{{\cal R}_{1c}}
\frac{1}{\zeta+z_4-z'}\beta(z',z_4) d^2 z'\nonumber\\
&+&\frac{1}{4\pi}\int_{{\cal R}_{1c}} \log|\zeta+z_4-z'|^2
\frac{\partial \beta(z',z_4)}{\partial z_4}d^2 z'\nonumber\\
&+&\frac{i}{8\pi}\oint_{\partial {\cal R}_{1c}}
\log|\zeta+z_4-z'|^2 \beta(z',z_4) d\bar z'~.
\end{eqnarray}
The contour integral is the contribution of the dependence of the
domain ${\cal R}_{1c}$ on $z_4$. In the first term of
(\ref{derivativeofhatI}) taking the derivative under the integral sign
is legal because the integrand is of the form
$(\zeta\bar\zeta)^{-2\eta_4} \frac{\partial f(\zeta,z_4)}{\partial
z_4}$ and this expression can be majorized for $|\zeta|<R_1$ by a
function independent of $z_4$ for $z_4\in D_4$ whose integral is
absolutely convergent due to $-2\eta_4+1>0$. In the second
term the denominator $\zeta+z_4-z'$ never vanishes and we can apply
the same majorization. In the third term the $\log$ is not singular;
$\frac{\partial \beta(z',z_4)}{\partial z_4}$ has the singularity
$(|z'-z_4|^2)^{-2\eta_4}/(z'-z_4)$ which is non integrable for
$-4\eta_4 +1< 0$ but such a singularity lies outside the integration
region ${\cal R}_{1c}$. As for the contour integral, on it both the
logarithm and the $\beta$ are regular.
We then consider another disk ${\cal R}_2$, again centered at $z_4$ and
of radius $R_2<R_1$.
For $|z-z_4|>R_2$ we use the expression
\begin{eqnarray}
I(z,z_4)&=&\frac{1}{4\pi}\int_{{\cal R}_2}
\log|z-\zeta'-z_4|^2 \hat\beta(\zeta',z_4)d^2\zeta'\nonumber\\
&+& \frac{1}{4\pi}\int_{{\cal R}_{2c}}
\log|z-z'|^2 \beta(z',z_4)d^2z'~.
\end{eqnarray}
Its derivative w.r.t. $z_4$ is given by
\begin{eqnarray}\label{derivativeofI}
&&-\frac{1}{4\pi}\int_{{\cal R}_2}
\frac{1}{z-\zeta'-z_4} \hat\beta(\zeta',z_4)d^2\zeta'\nonumber\\
&& +\frac{1}{4\pi}\int_{{\cal R}_2}
\log|z-\zeta'-z_4|^2 \frac{\partial
\hat\beta(\zeta',z_4)}{\partial z_4} d^2\zeta'\nonumber\\
&& +\frac{1}{4\pi}\int_{{\cal R}_{2c}}
\log|z-z'|^2 \frac{\partial
\beta(z',z_4)}{\partial z_4}d^2 z'\nonumber\\
&& +\frac{i}{8\pi}\oint_{\partial {\cal R}_{2c}}
\log|z-z'|^2 \beta(z',z_4) d\bar z'~.
\end{eqnarray}
Regarding the first two terms in (\ref{derivativeofI}), as $z-\zeta'-z_4$
never vanishes in ${\cal R}_2$, it is legal to take the derivative under
the integral sign. In the third term the integrand can be majorized by
a $z_4$ independent integrable function. The last term is the
contribution of the moving integration region.
Thus we have analyticity of $I(z,z_4)$ for $|z-z_4|>R_2$, $z_4\in D_4$.
We conclude that $I(z,z_4)$ is everywhere finite, bounded by
(\ref{summaryI}), continuous in $z, z_4$. $I(z,z_4)$ is analytic for
$z\neq z_k$. $\hat I(\zeta,z_4)$ is analytic in $z_4$ for $z_4 \in
D_4$, $|\zeta|<R_1$, while $I(z,z_4)$ is analytic in $z_4$ for $z_4\in
D_4$, $|z-z_4|>R_2$.
In the text we shall also need some information about the behavior of
$I(z,z_4)$ and $\frac{\partial I(z,z_4)}{\partial z_4}$ for large $z$. We
already saw that $I(z,z_4)$ behaves at infinity like $(\sum_k 2\eta_k
-2)\log|z|^2 $ and from (\ref{derivativeofI}) we have the simple bound
$|\frac{\partial I(z,z_4)}{\partial z_4}|< {\rm const}~\log|z|^2$.
\section*{Appendix C}
The main tool used in the text is the solution of the
equation
\begin{equation}\label{sourceequation}
\Delta u(z,z_4) = \theta(z,z_4) s(z,z_4)
\end{equation}
under the condition on the source
\begin{equation}\label{zerointcondition}
\int \theta(z,z_4)s(z,z_4) d^2z=0~.
\end{equation}
The solution of eq.~(\ref{sourceequation}), apart from the addition of a
harmonic function, is
\begin{equation}\label{sourcesolution}
\frac{1}{4\pi}\int\log|z-z'|^2 \theta(z',z_4) s(z',z_4) d^2z'
\end{equation}
and as we are interested in bounded solutions the only freedom will be the
addition of a constant. The purpose here is to give a bound on
(\ref{sourcesolution}).
We recall that
\begin{equation}
\theta(z,z_4)=\prod_k [(z-z_k)(\bar z -\bar z_k)]^{-2\eta_k} ~e^{I(z,z_4)}=
\beta(z,z_4) ~r(z,z_4)
\end{equation}
with $0<r_1<r<r_2$.
The function $\theta(z,z_4)$ is positive with elliptic singularities
and bounded at infinity by $\frac{\rm const}{|z\bar z|^2}$ and we
shall give a bound on (\ref{sourcesolution}) in terms of
$\max|s(z,z_4)|$. As $r_1<r<r_2$, most of the techniques for proving
such a bound have already been worked out in the preceding Appendix B.
We have due to the condition
(\ref{zerointcondition}) and the bound (\ref{generalbound})
\begin{equation}\label{thetwoforms}
\int\log|z-z'|^2 \theta(z',z_4)s(z',z_4)d^2z'=
\int\log|1-\frac{z'}{z}|^2 \theta(z',z_4)s(z',z_4)d^2z'~.
\end{equation}
For $|z|>2\Omega$ we use the second form in eq.(\ref{thetwoforms})
replacing $s(z',z_4)$ with $\max |s(z',z_4)|$ and the logarithm by its
absolute value, and then proceeding as in the previous Appendix. It is
important to notice that now due to the condition
(\ref{zerointcondition}) the term $\log z\bar z$ which diverges at
infinity in eq.(\ref{summaryI}) is absent and thus we have boundedness
also at infinity. The region $|z|<2\Omega$ is treated exactly as in the
previous Appendix B. The result is that
\begin{equation}
\bigg|\frac{1}{4\pi} \int\log|z-z'|^2 \theta(z',z_4)s(z',z_4)d^2z'\bigg|
< B ~\max|s(z,z_4)|
\end{equation}
with $B$ independent of $z_4$ for $z_4\in D_4$.
\bigskip
\section*{Appendix D}
In this appendix we report the proof that the series (\ref{nonlinearseries})
converges with a non-zero convergence radius.
From the text we have for $k\geq 2$
\begin{equation}
\max|u_k|\leq\max|w_k|~.
\end{equation}
Given $\max|u_1| \equiv \gamma_1$ one considers the series
\begin{equation}
\nu = \lambda \gamma_1+\lambda^2 \gamma_2+\lambda^3 \gamma_3+\dots
\end{equation}
where (see section \ref{poincareprocedure})
\begin{equation}
\gamma_2= \frac{\gamma_1^2}{2},~
\gamma_3= \frac{\gamma_1^3}{6}+\gamma_1\gamma_2,~
\gamma_4 = \frac{\gamma_1^4}{24}+\frac{\gamma_1^2 \gamma_2+\gamma_2^2}{2}
+\gamma_1\gamma_3,~~~~\dots
\end{equation}
Obviously such a series of positive terms majorizes term by term
the series $\lambda u_1+\lambda^2 u_2+\lambda^3 u_3+\dots$. We want to find
the convergence radius of $\nu$.
For the function $\nu$ we have
\begin{eqnarray}
e^\nu = 1+\lambda \gamma_1&+&\lambda^2 \gamma_2+\lambda^3 \gamma_3
+\dots\nonumber \\
&+&\lambda^2 \gamma_2+\lambda^3 \gamma_3+\dots \nonumber\\
&=&1+2\nu-\lambda \gamma_1~.
\end{eqnarray}
Let us consider the implicit function defined by
\begin{equation}
1+2 \nu - e^\nu = \lambda \gamma_1~.
\end{equation}
As for $\nu=0$ we have $\lambda \gamma_1=0$ and
\begin{equation}\label{jacobian}
\frac{\partial(1+2\nu-e^\nu)}{\partial \nu}= 2-e^\nu
\end{equation}
which equals $1$ at $\nu=0$, the analytic implicit function theorem
assures us that $\nu$ is analytic in $\lambda$ in a finite disk
around $\lambda=0$, and thus the power expansion has a non-zero
radius of convergence. This suffices for the developments of the
present paper. Such radius of convergence $r_0$ can be computed
\cite{poincare} and is given by $r_0=(\log4-1)/\gamma_1$, corresponding
to the vanishing of (\ref{jacobian}).
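As a quick numerical illustration (an assumed helper, not part of the
proof), one can solve the implicit equation by Newton iteration and
watch the Jacobian (\ref{jacobian}) vanish as $\lambda \to r_0$, where
$\nu(r_0)=\log 2$:
\begin{verbatim}
# Solve 1 + 2*nu - exp(nu) = lam*gamma_1 by Newton iteration and watch
# the Jacobian 2 - exp(nu) vanish as lam -> r_0 = (log 4 - 1)/gamma_1.
import math

gamma1 = 1.0
r0 = (math.log(4.0) - 1.0) / gamma1

def nu_of(lam, tol=1e-12):
    nu = 0.0
    for _ in range(200):
        f = 1.0 + 2.0*nu - math.exp(nu) - lam*gamma1
        step = f / (2.0 - math.exp(nu))      # divide by the Jacobian
        nu -= step
        if abs(step) < tol:
            return nu
    raise RuntimeError("no convergence")

for frac in (0.5, 0.9, 0.99):
    nu = nu_of(frac * r0)
    print(frac, nu, 2.0 - math.exp(nu))      # Jacobian -> 0
print("nu at r0 should approach log 2 =", math.log(2.0))
\end{verbatim}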
\bigskip
\section{Introduction}
Instrumental variable methods are powerful tools for causal inference with unmeasured treatment-outcome confounding.
\citet{angrist1996identification} use potential outcomes to clarify the role of a binary instrumental variable in identifying causal effects. They show that the classic two-stage least squares estimator is consistent for the complier average causal effect under the monotonicity and exclusion restriction assumptions.
Measurement error, also called misclassification for discrete variables, is common in empirical research.
\citet{black2003measurement} study the return of a possibly misreported education status. \citet{boatman2017estimating} study the effect of a self-reported smoking status. In those settings, the treatments are endogenous and mismeasured. \citet{chalak2017instrumental} considers the measurement error of an instrumental variable. \citet{pierce2012effect} consider a continuous treatment and either a continuous or a binary outcome with measurement errors. The existing literature often relies on modeling assumptions \citep{schennach2007instrumental, pierce2012effect}, auxiliary information \citep{black2003measurement, kuroki2014measurement, chalak2017instrumental,boatman2017estimating}, or repeated measurements of the unobserved variables \citep{battistin2014misreported}.
With binary variables, we study all possible scenarios of measurement errors of the instrumental variable, treatment and outcome. Under non-differential measurement errors, we show that the measurement error of the instrumental variable does not result in bias, the measurement error of the treatment moves the estimate away from zero, and the measurement error of the outcome moves the estimate toward zero.
This differs from the result for the total effect \citep{bross1954misclassification} where measurement errors of the treatment and outcome both move the estimate toward zero.
For non-differential measurement errors, we focus on qualitative analysis and nonparametric bounds. For differential measurement errors, we focus on sensitivity analysis. In both cases, we do not impose modeling assumptions or require auxiliary information.
\section{Notation and assumptions for the instrumental variable estimation}
For unit $i$, let $Z_i$ denote the treatment assigned, $D_i$ the treatment received, and $Y_i$ the outcome. Assume that $ (Z_i,D_i, Y_i)$ are all binary taking values in $\{0,1\}$. We ignore pretreatment covariates without loss of generality, because all the results hold within strata of covariates. We use potential outcomes to define causal effects. Define the potential values of the treatment received and the outcome as $D_{zi}$ and $Y_{zi}$ if unit $i$ were assigned to treatment arm $z$ ($z=0, 1$). The observed values are $D_i = Z_iD_{1i} + (1-Z_i)D_{0i}$ and $Y_i = Z_iY_{1i} + (1 - Z_i)Y_{0i}$. \citet{angrist1996identification} classify the units into four latent strata based on the joint values of $(D_{1i}, D_{0i} ) $. They define $U_i=a$ if $(D_{1i}, D_{0i} ) =(1,1)$, $U_i=n$ if $(D_{1i}, D_{0i} ) =(0,0)$, $U_i=c$ if $(D_{1i}, D_{0i} ) =(1,0)$, and $U_i=d$ if $(D_{1i}, D_{0i} ) =(0,1)$. The stratum with $U_i=c$ consists of compliers.
For notational simplicity, we drop the subscript $i$. We invoke the following assumption for the instrumental variable model.
\begin{assumption}
\label{asm:iv}
Under the instrumental variable model,
(a) $Z \mbox{$\perp\!\!\!\perp$} (Y_1, Y_0, D_1, D_0)$, (b) $D_1 \geq D_0$, and (c) $\pr(Y_1=1 \mid U=u) =\pr(Y_0=1 \mid U=u)$ for $u=a$ and $n$.
\end{assumption}
Assumption~\ref{asm:iv}(a) holds in randomized experiments. Assumption~\ref{asm:iv}(b) means that the treatment assigned has a monotonic effect on the treatment received for all units, which rules out the latent stratum $U=d$. Assumption~\ref{asm:iv}(c) implies that the treatment assigned affects the outcome only through the treatment received, which is called the exclusion restriction.
Define $\textsc{RD}_{R\mid Q}= \pr(R=1 \mid Q=1)-\pr(R=1 \mid Q=0)$ as the risk difference of $Q$ on $R$. For example, $\textsc{RD}_{YD\mid (1-Z)}= \pr(Y=1,D=1 \mid Z=0)-\pr(Y=1,D=1 \mid Z=1)$. \citet{angrist1996identification} show that the complier average causal effect
\begin{eqnarray*}
\CACE
\equiv E(Y_1-Y_0\mid U=c)
=\frac{\pr(Y=1 \mid Z=1)-\pr(Y=1 \mid Z=0)}{\pr(D=1 \mid Z=1)-\pr(D=1 \mid Z=0)}
=\frac{\textsc{RD}_{Y\mid Z}}{\textsc{RD}_{D\mid Z}}
\end{eqnarray*}
can be identified by the ratio of the risk differences of $Z$ on $Y$ and $D$ if $\textsc{RD}_{D\mid Z}\neq 0.$
\section{Non-differential measurement errors}\label{sec::nondiffmeasure}
Let $(Z',D',Y')$ denote the possibly mismeasured values of $(Z,D,Y)$. Without the true variables, we use the naive estimator based on the observed variables to estimate $\CACE$:
\begin{eqnarray*}
\CACE'
\equiv \frac{\pr(Y'=1 \mid Z'=1)-\pr(Y'=1 \mid Z'=0)}{\pr(D'=1 \mid Z'=1)-\pr(D'=1 \mid Z'=0)}= \frac{\textsc{RD}_{Y'\mid Z'}}{\textsc{RD}_{D'\mid Z'}}.
\end{eqnarray*}
\begin{assumption}
\label{asm:nondif}
All measurement errors are non-differential:
$\pr(D' \mid D, Z',Z,Y,Y')=\pr(D' \mid D)$, $\pr(Y' \mid Y, Z,Z',D,D')=\pr(Y' \mid Y)$, and $\pr(Z' \mid Y, Y',Z,D,D')=\pr(Z' \mid Z) .$
\end{assumption}
Under Assumption \ref{asm:nondif}, the measurements of the variables do not depend on other variables conditional on the unobserved true variables. We use the sensitivities and specificities to characterize the non-differential measurement errors:
\begin{align*}
\textsc{SN}_D&=\pr(D'=1 \mid D=1),\quad & \textsc{SP}_D&=\pr(D'=0 \mid D=0),\quad & r_D &= \textsc{SN}_D+\textsc{SP}_D-1 \leq 1, \\
\textsc{SN}_Y&=\pr(Y'=1 \mid Y=1) , \quad &\textsc{SP}_Y&=\pr(Y'=0 \mid Y=0),\quad &r_Y &= \textsc{SN}_Y+\textsc{SP}_Y-1 \leq 1.
\end{align*}
Without measurement errors, $r_D = r_Y = 1.$
Assume $r_D >0$ and $r_Y >0$, which means that the observed variable is informative for the true variable, i.e., the observed variable is more likely to be 1 if the true variable is 1 rather than 0. We state a simple relationship between $\CACE$ and $\CACE'$.
\begin{theorem}
\label{thm:cace}
Under Assumptions~\ref{asm:iv} and~\ref{asm:nondif},
$\CACE = \CACE' \times r_D/r_Y$.
\end{theorem}
Theorem \ref{thm:cace} shows that measurement errors of $Z$, $D$ and $Y$ have different consequences. The measurement error of $Z$ does not bias the estimate. The measurement error of $D$ biases the estimate away from zero. The measurement error of $Y$ biases the estimate toward zero. In contrast, measurement errors of the treatment and outcome both bias the estimate toward zero in the total effect estimation \citep{bross1954misclassification}.
Moreover, the measurement errors of $D$ and $Y$ have mutually independent influences on the estimation of $\CACE$.
Theorem \ref{thm:cace} also shows that $\CACE$ and $\CACE'$ have the same sign when $r_D>0$ and $r_Y > 0$.
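As a minimal numerical sketch of Theorem~\ref{thm:cace} (all probabilities, sensitivities and specificities below are hypothetical, chosen only for illustration):
\begin{verbatim}
# Correct the naive estimator by r_D / r_Y (Theorem 1); toy inputs.
pY = {1: 0.40, 0: 0.30}   # pr(Y'=1 | Z'=z), hypothetical
pD = {1: 0.70, 0: 0.20}   # pr(D'=1 | Z'=z), hypothetical
cace_naive = (pY[1] - pY[0]) / (pD[1] - pD[0])

SN_D, SP_D = 0.90, 0.95   # assumed sensitivity/specificity of D'
SN_Y, SP_Y = 0.85, 0.90   # assumed sensitivity/specificity of Y'
r_D = SN_D + SP_D - 1
r_Y = SN_Y + SP_Y - 1

print(cace_naive, cace_naive * r_D / r_Y)   # CACE' and corrected CACE
\end{verbatim}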
\section{Bounds on $\CACE$ with non-differential measurement errors}
\label{sec::nondiff}
When $D$ or $Y$ is non-differentially mismeasured, we can identify $\CACE$ if we know $r_D$ and $r_Y$. Without knowing them, we cannot identify $\CACE$. Fortunately, the observed data still provide some information about $\CACE$. We can derive its sharp bounds based on the joint distribution of the observed data. We first introduce a lemma.
\begin{lemma}
\label{lem:bound}
Define $\textsc{SN}'_Z= \pr(Z=1\mid Z'=1)$ and $\textsc{SP}'_Z= \pr(Z=0\mid Z'=0)$.
Under Assumption 1, given the values of $(\textsc{SN}'_Z,\textsc{SP}'_Z,\textsc{SN}_D,\textsc{SP}_D,\textsc{SN}_Y,\textsc{SP}_Y)$, there is a one-to-one mapping between the set $\{\pr(Z=z), \pr(U=u),\pr(Y_z=1\mid U=u) : z=0,1; u=a,n,c\}$ and the set $\{ \pr(Z'=z', D' = d', Y'=y'): z', d', y' = 0,1 \}$.
\end{lemma}
Lemma~\ref{lem:bound} allows for simultaneous measurement errors of more than one element of $(Y, Z, D)$.
From Lemma~\ref{lem:bound}, given the sensitivities and specificities, we can recover the joint distribution of $(Y_z,U,Z)$ for $z=0,1$. Conversely, the conditions $\{ 0\leq \pr(Z=z) \leq 1, 0\leq \pr(U=u) \leq 1, 0\leq \pr(Y_z=1\mid U=u) \leq 1 : z=0,1;u=a,n,c\}$ induce sharp bounds on the sensitivities and specificities, which further induce sharp bounds on $\CACE$. This is a general strategy that we use to derive sharp bounds on $\CACE$.
First, we discuss the measurement error of $Y$.
\begin{theorem}
\label{thm:bound:Y}
Suppose that $\CACE '\geq 0$ and only $Y$ is mismeasured with $r_Y>0$.
Under Assumptions~\ref{asm:iv} and~\ref{asm:nondif}, the sharp bounds are
$\textsc{SN}_Y \geq M_Y$, $\textsc{SP}_Y \geq 1- N_Y$, and $\CACE ' \leq \CACE \leq \CACE '/(M_Y- N_Y)$,
where $M_Y$ and $N_Y$ are the maximum and minimum values of the set
\begin{eqnarray*}
\left \{\pr(Y'=1\mid D=0, Z=1),\pr(Y'=1 \mid D=1, Z=0), \frac{\textsc{RD}_{Y'D\mid Z}}{\textsc{RD}_{D\mid Z}}, \frac{\textsc{RD}_{Y'(1-D)\mid (1-Z)}}{\textsc{RD}_{D\mid Z}}\right\}.
\end{eqnarray*}
\end{theorem}
We can obtain the bounds under $\CACE ' <0$ by replacing $Y$ with $1-Y$ and $Y'$ with $1-Y'$ in Theorem \ref{thm:bound:Y}. Thus, we only consider $\CACE ' \geq 0$ in Theorem \ref{thm:bound:Y} and the theorems in later parts of the paper.
In Theorem \ref{thm:bound:Y}, the lower bounds on $\textsc{SN}_Y $ and $\textsc{SP}_Y$ must be smaller than or equal to $1$, i.e., $M_Y \leq 1$ and $1- N_Y \leq 1$. These two inequalities further imply the following corollary on the testable conditions of the instrumental variable model with the measurement error of $Y.$
\begin{corollary}
\label{cor:testY}
Suppose that only $Y$ is mismeasured with $r_Y>0$.
Under Assumptions~\ref{asm:iv} and~\ref{asm:nondif},
\begin{eqnarray*}
\pr(Y'=y,D=1 \mid Z=1) &\geq& \pr(Y'=y, D=1 \mid Z=0), \quad (y=0,1),\\
\pr(Y'=y,D=0 \mid Z=0) &\geq& \pr(Y'=y, D=0 \mid Z=1), \quad (y=0,1).
\end{eqnarray*}
\end{corollary}
The conditions in Corollary \ref{cor:testY} are all testable with observed data $(Z,D,Y')$, and they are the same under $\CACE ' \geq 0$ and $\CACE '<0$. \citet{balke1997bounds} derive the same conditions as in Corollary \ref{cor:testY} without the measurement error of $Y$. \citet{wang2017falsification} propose statistical tests for these conditions. From Corollary \ref{cor:testY}, the non-differential measurement error of $Y$ does not weaken the testable conditions of the binary instrumental variable model.
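In practice, a small helper of the following form (assumed here, not part of the paper or its supplement) can check the conditions of Corollary~\ref{cor:testY} from estimated joint probabilities $\pr(Y'=y, D=d\mid Z=z)$:
\begin{verbatim}
# Check the testable conditions of Corollary 1.
# p[(y, d, z)] = pr(Y'=y, D=d | Z=z); hypothetical values below.
def check_corollary1(p):
    ok = True
    for y in (0, 1):
        ok = ok and p[(y, 1, 1)] >= p[(y, 1, 0)]   # D = 1 conditions
        ok = ok and p[(y, 0, 0)] >= p[(y, 0, 1)]   # D = 0 conditions
    return ok

p = {(1, 1, 1): .30, (1, 0, 1): .25, (0, 1, 1): .20, (0, 0, 1): .25,
     (1, 1, 0): .15, (1, 0, 0): .30, (0, 1, 0): .15, (0, 0, 0): .40}
print(check_corollary1(p))   # True for this toy table
\end{verbatim}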
Second, we discuss the measurement error of $D$.
\begin{theorem}
\label{thm:bound:D}
Suppose that $\CACE ' \geq 0$ and only $D$ is mismeasured with $r_D>0$. Under
Assumptions~\ref{asm:iv} and~\ref{asm:nondif}, the sharp bounds are
$
M_D\leq \textsc{SN}_D \leq U_D ,$
$
1-N_D \leq \textsc{SP}_D \leq 1-V_D,
$
and
$
\CACE ' \times (M_D - N_D) \leq \CACE \leq \CACE ' \times (U_D - V_D),
$
where
\begin{eqnarray*}
M_D&=&
\max \left\{ \max_{z=0,1} \pr(D'=1\mid Z=z), \max_{y=0,1} \pr(D'=1 \mid Y=y,Z=1),\frac{\textsc{RD}_{(1-Y)D'\mid (1-Z)}}{\textsc{RD}_{Y\mid Z}}\right\} ,\\
N_D&=&
\min \left\{\min_{z=0,1} \pr (D'=1\mid Z=z),\min_{y=0,1} \pr (D'=1 \mid Y=y,Z=0),\frac{\textsc{RD}_{YD'\mid Z}}{\textsc{RD}_{Y\mid Z}}\right\} ,\\
U_D &=& \min\left \{1,\frac{\textsc{RD}_{YD'\mid Z}}{\textsc{RD}_{Y\mid Z}}\right\},\quad
V_D = \max\left \{0,\frac{\textsc{RD}_{(1-Y)D'\mid (1-Z)}}{\textsc{RD}_{Y\mid Z}}\right\} .
\end{eqnarray*}
\end{theorem}
With a mismeasured $D$, \citet{ura2018heterogeneous} derives sharp bounds with and without Assumption \ref{asm:nondif}, respectively. The former bounds are equivalent to ours, but the latter bounds are wider. In Theorem \ref{thm:bound:D}, the lower bounds on $\textsc{SN}_D$ and $\textsc{SP}_D$ must be smaller than or equal to their upper bounds. This further implies the following corollary on the testable conditions of the binary instrumental variable model with the measurement error of $D.$
\begin{corollary}
\label{cor:testD}
Suppose that $\CACE ' \geq 0$ and only $D$ is mismeasured with $r_D>0$.
Under Assumptions~\ref{asm:iv} and~\ref{asm:nondif},
\begin{eqnarray}
\pr(Y=1,D'=1 \mid Z=1) &\geq& \pr(Y=1, D'=1 \mid Z=0), \label{eq::testableD1} \\
\pr(Y=0,D'=0 \mid Z=0) &\geq &\pr(Y=0, D'=0 \mid Z=1), \nonumber \\
\pr(D'=1 \mid Y=y,Z=1) &\leq & \textsc{RD}_{YD'\mid Z} / \textsc{RD}_{Y\mid Z}, \quad \hspace{1.23cm}
(y=0,1), \nonumber \\
\pr(D'=1 \mid Y=y,Z=0) &\geq & \textsc{RD}_{(1-Y)D'\mid (1-Z)} / \textsc{RD}_{Y\mid Z} , \quad (y=0,1). \nonumber
\end{eqnarray}
\end{corollary}
We can obtain the conditions under $\CACE ' < 0$ by replacing $Y$ with $1-Y$.
In the Supplementary material, we show that the conditions in Corollary~\ref{cor:testD} are weaker than those in \citet{balke1997bounds}.
Thus, the non-differential measurement error of $D$ weakens the testable conditions of the binary instrumental variable model.
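For concreteness, the bounds of Theorem~\ref{thm:bound:D} are straightforward to compute from sample analogues; a sketch with illustrative inputs (not taken from any data set, and not necessarily mutually consistent):
\begin{verbatim}
# Sharp bounds of Theorem 3 for a non-differentially mismeasured D.
def theorem3_bounds(pD1_Z, pD1_YZ, rd_YD_Z, rd_1mYD_1mZ, rd_Y_Z,
                    cace_naive):
    # pD1_Z[z] = pr(D'=1 | Z=z); pD1_YZ[(y, z)] = pr(D'=1 | Y=y, Z=z)
    M_D = max(max(pD1_Z.values()),
              max(pD1_YZ[(y, 1)] for y in (0, 1)),
              rd_1mYD_1mZ / rd_Y_Z)
    N_D = min(min(pD1_Z.values()),
              min(pD1_YZ[(y, 0)] for y in (0, 1)),
              rd_YD_Z / rd_Y_Z)
    U_D = min(1.0, rd_YD_Z / rd_Y_Z)
    V_D = max(0.0, rd_1mYD_1mZ / rd_Y_Z)
    return cace_naive * (M_D - N_D), cace_naive * (U_D - V_D)

lo, hi = theorem3_bounds({1: 0.70, 0: 0.20},
                         {(0, 1): 0.60, (1, 1): 0.75,
                          (0, 0): 0.15, (1, 0): 0.25},
                         rd_YD_Z=0.09, rd_1mYD_1mZ=-0.02,
                         rd_Y_Z=0.10, cace_naive=0.20)
print(lo, hi)   # lower and upper bounds on CACE
\end{verbatim}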
It is complicated to obtain closed-form bounds under simultaneous measurement errors of more than one element of $(Z, D, Y)$. In those cases, we can numerically calculate the sharp bounds on $\CACE$ with details in the Supplementary material.
\section{Results under strong monotonicity}
\label{sec::mono}
Sometimes, units in the control group have no access to the treatment. This is called the one-sided noncompliance problem, formalized by the following assumption.
\begin{assumption}
\label{asm:str}
For every individual $i$,
$D_{0i}=0$.
\end{assumption}
Under strong monotonicity, we have only two strata with $U=c$ and $U=n$. Theorem \ref{thm:cace} still holds.
Moreover, strong monotonicity sharpens the bounds in \S \ref{sec::nondiff}.
First, we consider the measurement error of $Y.$ We have
\begin{eqnarray*}
\CACE ' =\left\{ \pr(Y'=1 \mid Z=1)-\pr(Y'=1 \mid Z=0) \right\} / \pr(D=1 \mid Z=1) , \quad \CACE =\CACE '/r_Y.
\end{eqnarray*}
\begin{theorem}
\label{thm:bound:str:Y}
Suppose that $\CACE ' \geq 0$ and only $Y$ is mismeasured with $r_Y>0$.
Under Assumptions~\ref{asm:iv}--\ref{asm:str}, the sharp bounds are
$\textsc{SP}_Y \geq 1- N_Y^{\textup{m}}$, $\textsc{SN}_Y \geq M_Y^{\textup{m}}$, and $\CACE ' \leq \CACE \leq \CACE '/ (M_Y^{\textup{m}} - N_Y^{\textup{m}})$, where
\begin{eqnarray*}
N_Y^{\textup{m}} &=& \min \{\pr(Y'=1\mid D=0,Z=1), \pr(Y'=1 \mid D=1,Z=1)-\CACE '\},\\
M_Y^{\textup{m}} &=& \max \{\pr(Y'=1\mid D=0,Z=1), \pr(Y'=1\mid D=1,Z=1)\}.
\end{eqnarray*}
\end{theorem}
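A one-line evaluation of these bounds (with hypothetical inputs) may clarify the statement of Theorem~\ref{thm:bound:str:Y}:
\begin{verbatim}
# Theorem 4 bounds under strong monotonicity; hypothetical inputs.
def theorem4_bounds(pY1_D0Z1, pY1_D1Z1, cace_naive):
    N_m = min(pY1_D0Z1, pY1_D1Z1 - cace_naive)
    M_m = max(pY1_D0Z1, pY1_D1Z1)
    return cace_naive, cace_naive / (M_m - N_m)

print(theorem4_bounds(pY1_D0Z1=0.20, pY1_D1Z1=0.45, cace_naive=0.10))
# -> (0.1, 0.4): CACE' <= CACE <= CACE'/(M - N)
\end{verbatim}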
Second, we consider the measurement error of $D.$ Subtle issues arise. When $D$ is mismeasured, $\pr(D'=0 \mid D=0, Z=0)=1$ is known, and $\pr(D'=1 \mid D=1, Z=0)$ is not well defined. Thus, Assumption \ref{asm:nondif} of non-differential measurement error is implausible. We need modifications. Define
\begin{eqnarray*}
&\textsc{SN}_D^1&= \pr(D'=1 \mid D=1, Z=1), \quad
\textsc{SP}_D^1= \pr(D'=0 \mid D=0, Z=1)
\end{eqnarray*}
as the sensitivity and specificity conditional on $Z=1$. We have
\begin{eqnarray*}
\CACE ' = \textsc{RD}_{Y\mid Z} / \left\{ \pr(D'=1 \mid Z=1)-(1-\textsc{SP}_D^1) \right\},\quad
\CACE = \CACE ' \times (\textsc{SN}_D^1+\textsc{SP}_D^1-1) .
\end{eqnarray*}
\begin{theorem}
\label{thm:bound:str:D}
Suppose that $\CACE ' \geq 0$, only $D$ is mismeasured, and
\begin{equation}
\pr(D'=1 \mid Y=1,Z=1) \geq \pr(D'=1 \mid Y=0,Z=1).
\label{eq::conditionD}
\end{equation}
Under Assumptions~\ref{asm:iv} and~\ref{asm:str}, the sharp bounds are
\begin{eqnarray*}
&&\textsc{SP}_D^1 \geq 1- \pr(D'=1 \mid Y=0,Z=1), \quad \textsc{SN}_D^1 \geq \pr(D'=1 \mid Y=1,Z=1),\\
&& \textsc{SN}_D^1 \leq \left\{ \pr(Y=1,D'=1 \mid Z=1)-(1-\textsc{SP}_D^1)\times \pr(Y=1\mid Z=0) \right\} / \textsc{RD}_{Y\mid Z} ,\\
&& \pr(D'=1 \mid Y=1, Z=1)\times \textsc{RD}_{Y\mid Z} / \pr(D'=1 \mid Z=1) \leq \CACE \leq 1 .
\end{eqnarray*}
\end{theorem}
Unlike Theorems~\ref{thm:bound:Y}--\ref{thm:bound:str:Y},
the upper bound on $ \textsc{SN}_D^1$ depends on $\textsc{SP}_D^1$ in Theorem~\ref{thm:bound:str:D}.
The condition in \eqref{eq::conditionD} is not necessary for obtaining the bounds, but it helps to simplify the expression of the bounds.
It holds in our applications in \S \ref{sec::illustration}. We give the bounds on $\CACE$ without \eqref{eq::conditionD} in the Supplementary material. The upper bound on $\CACE$ is not informative in Theorem~\ref{thm:bound:str:D}, but,
fortunately, we are more interested in the lower bound in this case.
It is complicated to obtain closed-form bounds under simultaneous measurement errors of more than one element of $(Z, D, Y)$. In those cases, we can numerically calculate the sharp bounds with more details in the Supplementary material.
\section{Sensitivity analysis formulas under differential measurement errors}
\label{sec::sensitivityanalysis}
Non-differential measurement error is not plausible in some cases.
\S \ref{sec::mono} shows that under strong monotonicity, the measurement error of $D$ cannot be non-differential because it depends on $Z$ in general. In this section, we consider differential measurement errors of $D$ and $Y$ without requiring strong monotonicity. We do not consider the differential measurement error of $Z$, because the measurement of $Z$ often precedes $(D, Y)$ and its measurement error is unlikely to depend on later variables.
We first consider the differential measurement error of $Y$.
\begin{theorem}
\label{thm:diff:Y}
Suppose that only $Y$ is mismeasured.
Define
\begin{align}
\textsc{SN}_Y^1&= \pr(Y'=1 \mid Y=1, Z=1), \quad &\textsc{SN}_Y^0&= \pr(Y'=1 \mid Y=1, Z=0), \label{eq::misY1} \\
\textsc{SP}_Y^1&= \pr(Y'=0 \mid Y=0, Z=1), \quad &\textsc{SP}_Y^0&= \pr(Y'=0 \mid Y=0, Z=0). \label{eq::misY2}
\end{align}
Under Assumption~\ref{asm:iv},
\begin{eqnarray*}
\CACE = \left\{ \frac{\pr(Y'=1 \mid Z=1)-(1-\textsc{SP}_Y^1)}{\textsc{SN}_Y^1+\textsc{SP}_Y^1-1}-\frac{\pr(Y'=1 \mid Z=0)-(1-\textsc{SP}_Y^0)}{\textsc{SN}_Y^0+\textsc{SP}_Y^0-1}\right\} \bigg/ \textsc{RD}_{D\mid Z}.
\end{eqnarray*}
\end{theorem}
Theorem \ref{thm:diff:Y} allows the measurement error of $Y$ to depend on $D$, but the formula of $\CACE$ only needs
the sensitivities and specificities in \eqref{eq::misY1} and \eqref{eq::misY2} conditional on $(Z, Y)$.
It is possible that $\CACE'$ is positive but $\CACE$ is negative. For example, if
$
\textsc{SN}_Y^1+\textsc{SP}_Y^1=\textsc{SN}_Y^0+\textsc{SP}_Y^0>1
$
and
$ \textsc{SP}_Y^0-\textsc{SP}_Y^1>\textsc{RD}_{Y'\mid Z},
$
then $\CACE$ and $\CACE'$ have different signs.
We then consider the differential measurement error of $D$.
\begin{theorem}
\label{thm:diff:D}
Suppose that only $D$ is mismeasured.
Define
\begin{align}
\textsc{SN}_D^1&= \pr(D'=1 \mid D=1, Z=1), \quad &\textsc{SN}_D^0&= \pr(D'=1 \mid D=1, Z=0), \label{eq::misD1}\\
\textsc{SP}_D^1&= \pr(D'=0 \mid D=0, Z=1), \quad &\textsc{SP}_D^0&= \pr(D'=0 \mid D=0, Z=0). \label{eq::misD2}
\end{align}
Under Assumption~\ref{asm:iv},
\begin{eqnarray*}
\CACE =\textsc{RD}_{Y\mid Z} \bigg /\left\{\frac{\pr(D'=1 \mid Z=1)-(1-\textsc{SP}_D^1)}{\textsc{SN}_D^1+\textsc{SP}_D^1-1}-\frac{\pr(D'=1 \mid Z=0)-(1-\textsc{SP}_D^0)}{\textsc{SN}_D^0+\textsc{SP}_D^0-1}\right\}.
\end{eqnarray*}
\end{theorem}
Theorem \ref{thm:diff:D} allows the measurement error of $D$ to depend on $Y$, but the formula of $\CACE$ only needs
the sensitivities and specificities \eqref{eq::misD1} and \eqref{eq::misD2} conditional on $Z$.
Similar to the discussion after Theorem \ref{thm:diff:Y}, it is possible that $\CACE'$ and $\CACE$ have different signs.
Based on Theorems~\ref{thm:diff:Y} and~\ref{thm:diff:D}, if we know or can consistently estimate the sensitivities and specificities in \eqref{eq::misY1}--\eqref{eq::misD2}, then we can consistently estimate $\CACE$; if we only know the ranges of the sensitivities and specificities, then we can obtain bounds on $\CACE$.
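A direct transcription of the two formulas (the sensitivity and specificity values below are user-supplied guesses, not estimates from any data set) could look as follows:
\begin{verbatim}
# Sensitivity-analysis formulas of Theorems 6 and 7; inputs are guesses.
def cace_diff_Y(pY1_Z1, pY1_Z0, rd_D_Z, SN1, SP1, SN0, SP0):
    t1 = (pY1_Z1 - (1 - SP1)) / (SN1 + SP1 - 1)
    t0 = (pY1_Z0 - (1 - SP0)) / (SN0 + SP0 - 1)
    return (t1 - t0) / rd_D_Z

def cace_diff_D(rd_Y_Z, pD1_Z1, pD1_Z0, SN1, SP1, SN0, SP0):
    t1 = (pD1_Z1 - (1 - SP1)) / (SN1 + SP1 - 1)
    t0 = (pD1_Z0 - (1 - SP0)) / (SN0 + SP0 - 1)
    return rd_Y_Z / (t1 - t0)

print(cace_diff_Y(0.40, 0.30, 0.50, SN1=0.9, SP1=0.95,
                  SN0=0.85, SP0=0.95))
print(cace_diff_D(0.05, 0.70, 0.20, SN1=0.9, SP1=0.95,
                  SN0=0.85, SP0=0.95))
\end{verbatim}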
For simultaneous differential measurement errors of $D$ and $Y$, the formula of $\CACE$ depends on too many sensitivity and specificity parameters. Thus we omit the discussion.
\section{Illustrations}\label{sec::illustration}
We give three examples and present the data in the Supplementary material.
\begin{example}\label{eg::1}
\citet{improve2014endovascular} assess the effectiveness of the emergency endovascular versus the open surgical repair strategies for patients with a clinical diagnosis of ruptured aortic aneurysm. Patients are randomized to either the emergency endovascular or the open repair strategy. The primary outcome is the survival status after 30 days. Let $Z$ be the treatment assigned, with $Z=1$ for the endovascular strategy and $Z=0$ for the open repair. Let $D$ be the treatment received.
Let $Y$ be the survival status, with $Y=1$ for dead, and $Y=0$ for alive.
If none of the variables are mismeasured, then the estimate of $\CACE $ is $0.131$ with 95\% confidence interval $(-0.036, 0.298)$ including $0$.
If only $Y$ is non-differentially mismeasured, then
$0.382 \leq \textsc{SP}_Y \leq 1$, $0.759 \leq \textsc{SN}_Y \leq 1$, $0.141 \leq r_Y \leq 1$, and thus
$0.131 \leq \CACE \leq 0.928$ from Theorem \ref{thm:bound:Y}.
If only $D$ is non-differentially mismeasured, then
$0.658 \leq \textsc{SN}_D \leq 1$, $0.908 \leq \textsc{SP}_D \leq 1$, $0.566 \leq r_D \leq 1$, and thus
$0.074 \leq \CACE \leq 0.131$ from Theorem \ref{thm:bound:D}.
\end{example}
\begin{example}\label{eg::2}
In \citet{hirano2000assessing}, physicians are randomly selected to receive a letter encouraging them to inoculate patients at risk for flu. The treatment is the actual flu shot, and the outcome is an indicator for flu-related hospital visits. However, some patients do not comply with their assignments. Let $Z$ be the indicator of encouragement to receive the flu shot, with $Z=1$ if the physician receives the encouragement letter, and $Z=0$ otherwise. Let $D$ be the treatment received.
Let $Y$ be the outcome, with $Y=0$ for a flu-related hospitalization during the winter, and $Y=1$ otherwise.
If none of the variables are mismeasured, then the estimate of $\CACE $ is $0.116$ with 95\% confidence interval $(-0.061, 0.293)$ including $0$.
If only $Y$ is non-differentially mismeasured, then from Theorem \ref{thm:bound:Y}, $\textsc{SP}_Y \geq 1.004 > 1$, and thus the assumptions of the instrumental variable do not hold.
If only $D$ is non-differentially mismeasured, then from Theorem \ref{thm:bound:D}, $\textsc{SN}_D \geq 8.676 > 1$, and thus the assumptions of the instrumental variable
do not hold either. We reject the testable condition \eqref{eq::testableD1} required by both Corollaries~\ref{cor:testY} and \ref{cor:testD} with $p$-value smaller than $10^{-9}$.
As a result, the non-differential measurement error of $D$ or $Y$ cannot explain the violation of the instrumental variable assumptions in this example.
\end{example}
\begin{example}\label{eg::3}
\citet{sommer1991estimating} study the effect of vitamin A supplements
on the infant mortality in Indonesia. The vitamin supplements are randomly assigned to
villages, but some of the individuals in villages assigned to the treatment group do not
receive them. Strong monotonicity holds, because the individuals assigned to the control group have no access to the supplements. Let $Y$ denote a binary outcome, with $Y=1$ if the infant survives to twelve months, and $Y=0$ otherwise. Let $Z$ denote the indicator of assignment to the supplements. Let $D$ denote the actual receipt of the supplements. If none of the variables are mismeasured, then the estimate of $\CACE $ is $0.003$ with 95\% confidence interval $(0.001, 0.005)$ excluding $0$.
If only $Y$ is non-differentially mismeasured, then $\textsc{SP}_Y \geq 0.014$, $\textsc{SN}_Y \geq 0.999$, and thus
$0.003 \leq \CACE \leq 0.252$ from Theorem \ref{thm:bound:str:Y}. The 95\% confidence interval is $(0.001,1)$.
If only $D$ is non-differentially mismeasured, then $\textsc{SP}^1_D \geq 0.739$, $\textsc{SN}^1_D \geq 0.802$, and thus
$0.003 \leq \CACE \leq 1$ from Theorem \ref{thm:bound:str:D}. The 95\% confidence interval is $(-1\times 10^{-5},1)$.
In the Supplementary material, we give the details for constructing confidence intervals for $\CACE$ based on its sharp bounds.
\end{example}
In Examples \ref{eg::1} and \ref{eg::3}, the upper bounds on $\CACE$ are too large to be informative, but fortunately, the lower bounds are of more interest in these applications.
\section{Discussion}\label{sec::discussion}
\subsection{Further comments on the measurement errors of $Z$}
If only $Z$ is mismeasured and the measurement error is non-differential, then $\textsc{RD}_{D\mid Z'}= r_Z' \times \textsc{RD}_{D\mid Z}$ where $r_Z' = \textsc{SN}'_Z+\textsc{SP}'_Z-1$ with $\textsc{SN}'_Z$ and $\textsc{SP}'_Z$ defined in Lemma \ref{lem:bound}. If $r_Z' $ and $\textsc{RD}_{D\mid Z}$ are both constants that do not shrink to zero as the sample size $n$ increases, then $\textsc{RD}_{D\mid Z'}$ does not shrink to zero either. In this case, measurement error of $Z$ does not cause the weak instrumental variable problem \citep{nelson1990distribution,staiger1997instrumental}. Theorem~\ref{thm:cace} shows that the non-differential measurement error of $Z$ does not affect the large-sample limit of the naive estimator. We further show in the Supplementary material that it does not affect the asymptotic variance of the naive estimator either.
Nevertheless, in finite samples, the measurement error of $Z$ does result in a smaller estimate of $\textsc{RD}_{D\mid Z'}$. If we consider the asymptotic regime that $ r_Z' = o(n^{-\alpha})$ for some $\alpha >0$, then it is possible to have the weak instrumental variable problem. In this case, we need tools that are tailored to weak instrumental variables \citep{nelson1990distribution,staiger1997instrumental}.
Practitioners sometimes dichotomize a continuous instrumental variable $Z$ into a binary one based on the median or other quantiles. The dichotomized variables based on other quantiles can be viewed as mismeasured versions of the dichotomized variable based on the median. However, these measurement errors are differential and thus our results in \S\ref{sec::nondiffmeasure} and \S\ref{sec::nondiff} are not applicable.
\subsection{Further comments on the measurement errors of $D$}
We discussed binary $D$. If we dichotomize a discrete $D \in \{0, 1, \ldots,J\}$ at $k$, i.e., $D'=1(D \geq k)$, then we can define two-stage least squares estimators based on $D$ and $D'$:
$$
\tau_{\text{2sls}} =
\frac{ E(Y\mid Z=1)-E(Y\mid Z=0) }{ E(D \mid Z=1)- E(D \mid Z=0) } ,\quad
\tau_{\text{2sls}} ' =
\frac{ E(Y\mid Z=1)-E(Y\mid Z=0) }{ E(D' \mid Z=1)- E(D' \mid Z=0) } .
$$
\citet{angrist1995two} show that $\tau_{\text{2sls}}$
is a weighted average of some subgroup causal effects.
Analogous to Theorem \ref{thm:cace}, we show in the Supplementary material that $ \tau_{\text{2sls}} = \tau_{\text{2sls}} ' \times w_k $, where
$w_k = \pr(D_1 \geq k>D_0) / \sum_{j=1}^J \pr(D_1 \geq j>D_0) \in [0,1]$ if Assumptions \ref{asm:iv}(a) and (b) hold. Therefore, the dichotomization biases the estimate away from zero.
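The identity behind this relation, namely $E(D\mid Z=1)-E(D\mid Z=0)=\sum_{j=1}^J \pr(D_1\geq j>D_0)$ together with $E(D'\mid Z=1)-E(D'\mid Z=0)=\pr(D_1\geq k>D_0)$, can be verified on a toy joint distribution of $(D_1,D_0)$ (assumed here purely for illustration):
\begin{verbatim}
# Toy check that tau_2sls / tau'_2sls = w_k for a dichotomized D.
import random

J, k = 3, 2
random.seed(1)
pairs = [(d1, d0) for d1 in range(J + 1) for d0 in range(d1 + 1)]
raw = [random.random() for _ in pairs]          # support on D1 >= D0
pmf = {p: r / sum(raw) for p, r in zip(pairs, raw)}

denom_D = sum(p * (d1 - d0) for (d1, d0), p in pmf.items())
denom_Dk = sum(p for (d1, d0), p in pmf.items() if d1 >= k > d0)
w_k = denom_Dk / sum(sum(p for (d1, d0), p in pmf.items()
                         if d1 >= j > d0)
                     for j in range(1, J + 1))
# the ratios agree since E(D1 - D0) = sum_j pr(D1 >= j > D0)
print(abs(denom_Dk / denom_D - w_k) < 1e-12)
\end{verbatim}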
\subsection{Further comments on the measurement errors of $Y$}
For a continuous outcome, it is common to assume that the measurement error of $Y$ is additive and non-differential, i.e., $Y'=Y+U$, where $U$ is the error term with mean zero. If the binary $Z$ and $D$ are non-differentially mismeasured as in Assumption \ref{asm:nondif}, then $\CACE = \CACE ' \times r_D$. In this case, the measurement error of $Y$ does not bias the estimate for $\CACE$.
\newpage
\section{Introduction}
In the Friedmann-Robertson-Walker solution for the structure of the
universe the geometry and future of the expansion uniquely depend on
the mean mass density, $\rho_0$, and a possible cosmological constant.
It is a statement of arithmetic\,\cite{oort} that at redshift zero
$\Omega_M \equiv\rho_0/\rho_c= M/L \times j/\rho_c$, where $M/L$ is
the average mass-to-light ratio of the universe and $\rho_c/j$ is the
closure mass-to-light ratio, with $j$ being the luminosity density of
the universe. Estimates of the value of $\Omega_M$ have a long
history with a substantial range of cited results\,\cite{bld}. Both
the ``Dicke coincidence'' and inflationary cosmology would suggest
that $\Omega_M=1$. The main thrust of our survey is to clearly
discriminate between $\Omega_M=1$ and the classical, possibly biased,
indicators that $\Omega_M\simeq 0.2$.
Rich galaxy clusters
are the largest collapsed regions in the universe and are ideal to
make an estimate of the cluster $M/L$ which can be corrected to the
value which should apply to the field as a whole. To use clusters to
estimate self-consistently the global $\Omega_M$ we must, as a
minimum, perform four operations.
\parskip=0pt\par\noindent\hangindent=3pc\hangafter=1 $\bullet$~{Measure the total gravitational mass within some radius.}
\parskip=0pt\par\noindent\hangindent=3pc\hangafter=1 $\bullet$~{Sum the luminosities of the visible galaxies within the same
radius.}
\parskip=0pt\par\noindent\hangindent=3pc\hangafter=1 $\bullet$~{Measure the field luminosity density at the cluster redshift.}
\parskip=0pt\par\noindent\hangindent=3pc\hangafter=1 $\bullet$~{Understand the
differential luminosity and density evolution between
the clusters and the field.}
\section{Survey Design}
The Canadian Network for Observational Cosmology (CNOC) designed
observations to make a conclusive measurement of $\Omega_M$ using
clusters\,\cite{yec}. The clusters are selected from the X-ray
surveys, primarily the Einstein Medium Sensitivity
Survey\,\cite{emss1,emss2,gl}, which has a well defined flux-volume
relation. The spectroscopic sample, roughly one in two on the average,
is drawn from a photometric sample which goes nearly 2 magnitudes
deeper, thereby allowing an accurate measurement of the selection
function. The sample contains 16 clusters spread from redshift 0.18
to 0.55, meaning that evolutionary effects are readily visible, and
any mistakes in differential corrections should be more readily
detectable. For each cluster, galaxies are sampled all the way from
cluster cores to the distant field. This allows testing the accuracy
of the virial mass estimator and the understanding of the differential
evolution process. We introduce some improvements to the classical
estimates of the velocity dispersion and virial radius estimators,
which have somewhat better statistical properties. A critical element
is to assess the errors in these measurements. The random errors are
relatively straightforward and are evaluated using either the
statistical jackknife or bootstrap methods\,\cite{et}, which follow
the entire complex chain of analysis from catalogue to result. The
data are designed to correct from the $M/L$ values of clusters to the
field $M/L$.
\section{Results}
We find\,\cite{profile} that $\Omega_M=0.19\pm0.06$ (in an
$\Omega_\Lambda=0$ cosmology), where the quoted uncertainty is the formal $1\sigma$ error. In
deriving this result we apply a variety of corrections and tests of
the assumptions. \parskip=0pt\par\noindent\hangindent=3pc\hangafter=1 $\bullet$~{The clusters have statistically identical
$M/L$ values, once corrected for evolution\,\cite{global}.}
\parskip=0pt\par\noindent\hangindent=3pc\hangafter=1 $\bullet$~{High luminosity cluster and field galaxies are evolving at a
comparable rate with redshift, approximately one magnitude per unit
redshift.} \parskip=0pt\par\noindent\hangindent=3pc\hangafter=1 $\bullet$~{Cluster galaxies have no excess star formation with
respect to the field\,\cite{a2390,balogh}. } \parskip=0pt\par\noindent\hangindent=3pc\hangafter=1 $\bullet$~{Cluster galaxies
are 0.1 and 0.3 magnitudes fainter than similar field
galaxies\,\cite{profile,schade_e,schade_d,lin}.} \parskip=0pt\par\noindent\hangindent=3pc\hangafter=1 $\bullet$~{The virial
mass overestimates the true mass of a cluster by about 15\%, which can
be attributed to the neglect of the surface term in the virial
equation\,\cite{profile}.} \parskip=0pt\par\noindent\hangindent=3pc\hangafter=1 $\bullet$~{There is no significant change of
$M/L$ with radius within the cluster\,\cite{profile}.} \parskip=0pt\par\noindent\hangindent=3pc\hangafter=1 $\bullet$~{The mass
field of the clusters is remarkably well described by the
NFW\,\cite{nfw} profile, both in shape and scale radius\,\cite{ave}.}
\parskip=0pt\par\noindent\hangindent=3pc\hangafter=1 $\bullet$~{The evolution of the number of clusters per unit volume is very
slow, in accord with the PS\,\cite{ps} predictions for a low density
universe\,\cite{s8}.} \parskip=0pt\par\noindent\hangindent=3pc\hangafter=1 $\bullet$~{The clusters have statistically identical
efficiencies of converting gas into stars, which is consistent
with the
value in the field\,\cite{omb}.} \par\noindent These results rule out
$\Omega_M=1$ in any component with a velocity dispersion less than
about 1000~\kms.
\section{$\Omega_\Lambda$ dependence}
The luminosity density, $j$, contains the cosmological volume element
which has a very strong cosmology dependence. The cosmological
dependence of $\Omega_e(z)$ can be illustrated by expanding the
cosmological terms to first order in the redshift, $z$,
\begin{equation}
\Omega_e(z) \simeq \Omega_M [1 + {3\over 4}(\Omega_M^i
-\Omega_M+2\Omega_\Lambda)z],
\label{eq:ao}
\end{equation}
where $\Omega_M$ and $\Omega_\Lambda$ are the true values and
$\Omega_M^i$ with $\Lambda=0$ is the cosmological model assumed for
the sake of the calculation\,\cite{lambda}. If there is a non-zero
$\Lambda$ then $\Omega_e(z)$ will vary with redshift. The available
data are the CNOC1 cluster $M/L$ values and the 3000 galaxies of the
preliminary CNOC2\,\cite{cnoc2_pre} field sample for $j$. To provide
a well defined $\Omega_e(z)$ we limit both the field and cluster
galaxy luminosities at $M_r^{k,e}\le -19.5$ mag, which provides a
volume limited sample over the entire redshift range. A crucial
advantage is of using high luminosity galaxies alone is that they are
known to have a low average star formation rate and evolve slowly with
redshift, hence their differential corrections are small, and
reasonably well determined\,\cite{profile}. The results are displayed
in Figure~1. The fairly narrow redshift range available does not
provide a very good limit on $\Omega_\Lambda$, although values
$\Omega_\Lambda>1.5$ are ruled out. The power of this error ellipse is
to use it in conjunction with other data, such as the SNIa results
which provide complementary constraints on the
$\Omega_M-\Omega_\Lambda$ pair.
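As a numerical reading of eq.~(\ref{eq:ao}) (toy parameter values, not fitted to the data), one can tabulate the apparent $\Omega_e(z)$ for a true $(\Omega_M,\Omega_\Lambda)$ analysed with a $\Lambda=0$ model $\Omega_M^i$:
\begin{verbatim}
# First-order drift of Omega_e(z); toy values, not fitted to the data.
def omega_e(z, om, ol, om_i):
    return om * (1 + 0.75 * (om_i - om + 2 * ol) * z)

for z in (0.0, 0.2, 0.4, 0.6):
    print(z, round(omega_e(z, om=0.2, ol=0.7, om_i=0.2), 3))
# with Omega_Lambda = 0.7 the apparent Omega_e grows linearly in z
\end{verbatim}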
\bigskip
\epsfysize 8truecm
\centerline{\epsfbox{chi2.ps}}
\noindent
{\footnotesize Figure 1: The CNOC1 cluster $M/L$ values combined with
the CNOC2 measurements of $j$ for $M_r^{k,e}\le -19.5$ mag galaxies,
gives an $\Omega_e(z)$ which leads to the plotted $\chi^2$ (68\% and
90\% confidence) contours.}
\medskip
The limit on the $\Omega_M-\Omega_\Lambda$ pair in Figure~1 has been
corrected for known systematic errors, which are redshift-independent
scale errors in luminosity and mass. The high luminosity galaxies in
both the cluster and field populations are evolving at a statistically
identical rate with redshift, which is close to passive evolution. If
the cluster galaxies are becoming more like the field with redshift
({\it i.~e.} the Butcher-Oemler effect, which is partially shared with
the field), so that they need less brightening to be corrected to the
field, then that would raise the estimated $\Omega_\Lambda$, although
the effect is so small that the correction would be
$\Delta\Omega_\Lambda\simeq0.3$ over this redshift interval. The
results are completely insensitive to galaxy merging that produces no
new stars. The data indicate that there is no excess star formation in
cluster galaxies over the observed redshift range, with galaxies
fading as they join the cluster\,\cite{profile,balogh}. The fact that
evolution of the high luminosity field galaxies is very slow and
consistent with pure luminosity evolution\,\cite{profile} (Lin, {\it et~al.}~\
in preparation) gives us confidence that the results are reasonably
well understood. It will be very useful to have data that
extends to both higher and lower redshift, which would allow a
measurement of $\Omega_\Lambda$ and better constraints on any
potential systematic errors.
\section*{References}
\newcommand{\sect}[1]{\setcounter{equation}{0}\section{#1}}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\newcommand{\begin{equation}}{\begin{equation}}
\newcommand{\end{equation}}{\end{equation}}
\newcommand{\begin{eqnarray}}{\begin{eqnarray}}
\newcommand{\end{eqnarray}}{\end{eqnarray}}
\newcommand{\nonumber}{\nonumber}
\newcommand{\underline{1}}{\underline{1}}
\newcommand{\underline{2}}{\underline{2}}
\newcommand{\underline{a}}{\underline{a}}
\newcommand{\underline{b}}{\underline{b}}
\newcommand{\underline{c}}{\underline{c}}
\newcommand{\underline{d}}{\underline{d}}
\def\cs#1{\footnote{{\bf Stefan:~}#1}}
\def \int\!\!{\rm d}^4x{\int\!\!{\rm d}^4x}
\def \int\!\!{\rm d}^8z{\int\!\!{\rm d}^8z}
\def \int\!\!{\rm d}^6z{\int\!\!{\rm d}^6z}
\def \int\!\!{\rm d}^6{\bar z}{\int\!\!{\rm d}^6{\bar z}}
\def \frac{E^{-1}}{R}{\frac{E^{-1}}{R}}
\def \frac{E^{-1}}{\bar R}{\frac{E^{-1}}{\bar R}}
\def (\cD^2 - 4 {\bar R}){(\cD^2 - 4 {\bar R})}
\def (\cDB^2 - 4 R){(\cDB^2 - 4 R)}
\newcommand{{\mathbb R}}{{\mathbb R}}
\newcommand{{\mathbb C}}{{\mathbb C}}
\newcommand{{\mathbb Q}}{{\mathbb Q}}
\newcommand{{\mathbb Z}}{{\mathbb Z}}
\newcommand{{\mathbb N}}{{\mathbb N}}
\def\dt#1{{\buildrel {\hbox{\LARGE .}} \over {#1}}}
\newcommand{\bm}[1]{\mbox{\boldmath$#1$}}
\def\double #1{#1{\hbox{\kern-2pt $#1$}}}
\begin{document}
\begin{titlepage}
\begin{flushright}
hep-th/0601177\\
January, 2006\\
\end{flushright}
\vspace{5mm}
\begin{center}
{\Large \bf
On compactified harmonic/projective superspace,\\
5D superconformal theories, and all that}
\end{center}
\begin{center}
{\large
Sergei M. Kuzenko\footnote{{[email protected]}}
} \\
\vspace{5mm}
\footnotesize{
{\it School of Physics M013, The University of Western Australia\\
35 Stirling Highway, Crawley W.A. 6009, Australia}}
~\\
\vspace{2mm}
\end{center}
\vspace{5mm}
\begin{abstract}
\baselineskip=14pt
\noindent
Within the supertwistor approach, we analyse
the superconformal structure of 4D $\cN=2$ compactified
harmonic/projective superspace. In the case of 5D
superconformal symmetry, we derive the superconformal Killing vectors
and related building blocks which emerge in the transformation laws
of primary superfields.
Various off-shell superconformal multiplets
are presented both in 5D harmonic and projective superspaces,
including the so-called tropical (vector) multiplet and polar
(hyper)multiplet. Families of superconformal actions are
described both in the 5D harmonic and projective superspace
settings. We also present examples of 5D superconformal theories
with gauged central charge.
\end{abstract}
\vfill
\end{titlepage}
\newpage
\setcounter{page}{1}
\renewcommand{\thefootnote}{\arabic{footnote}}
\setcounter{footnote}{0}
\sect{Introduction}
According to Nahm's classification \cite{Nahm},
superconformal algebras exist in space-time dimensions
${\rm D} \leq 6$. Among the dimensions included,
the case of ${\rm D}=5$ is truly
exceptional, for it allows the existence of the unique superconformal algebra
$F(4)$ \cite{Kac}.
This is in drastic contrast to
the other dimensions
which are known to be compatible with series of superconformal algebras
(say, 4D $\cN$-extended or 6D $(\cN,0) $ superconformal symmetry).
Even on formal grounds, the exceptional feature
of the five-dimensional case
is interesting enough
for studying in some depth the properties of 5D superconformal
theories. On the other hand, such rigid superconformal theories
are important prerequisites in the construction,
within the superconformal tensor calculus
\cite{Ohashi,Bergshoeff},
of 5D supergravity-matter dynamical systems which
are of primary importance, for example,
in the context of bulk-plus-brane scenarios.
The main motivation for the present work was the desire to develop
a systematic setting to build 5D superconformal theories, and clearly this
is hardly possible without employing superspace techniques.
The superconformal algebra in five dimensions
includes 5D simple (or $\cN=1$)
supersymmetry algebra\footnote{On historic grounds,
5D simple supersymmetry is often labeled $\cN=2$,
see e.g. \cite{Bergshoeff}.}
as its super Poincar\'e subalgebra.
As is well-known, for supersymmetric theories
in various dimensions with eight supercharges
(including the three important cases:
(i) 4D $\cN=2$, (ii) 5D $\cN=1$ and (iii) 6D $\cN= (1,0)$)
a powerful formalism to generate off-shell formulations is the {\it harmonic
superspace} approach that was originally developed
for the 4D $\cN=2$ supersymmetric Yang-Mills theories
and supergravity \cite{GIKOS,GIOS}.
There also exists a somewhat different, but related, formalism --
the so-called {\it projective superspace} approach
\cite{projective0,Siegel,projective,BS}, first introduced
soon after the harmonic superspace had appeared.
Developed originally for describing the general self-couplings
of 4D $\cN=2$ tensor multiplets, this approach has been extended
to include some other interesting multiplets.
Both the harmonic and projective approaches make use of
the same superspace, ${\mathbb R}^{D|8} \times S^2$,
which first emerged, for $D=4$, in a work of Rosly \cite{Rosly}
(see also \cite{RS})
who built on earlier ideas due to Witten \cite{Witten}.
In harmonic superspace, one deals with so-called Grassmann analytic
superfields required to be smooth tensor fields on $S^2$.
In projective superspace, one also deals with Grassmann analytic
superfields required, however, to be holomorphic on an open subset of $S^2$
(typically, the latter is chosen to be
$ {\mathbb C}^* = {\mathbb C} \setminus \{ 0 \}$
in the Riemann sphere realisation
$S^2 ={\mathbb C} \cup \{ \infty \}$).
In many respects, the harmonic and projective superspaces
are equivalent and complementary to each other \cite{Kuzenko},
although harmonic superspace is obviously more fundamental.
Keeping in mind potential applications to the brane-world physics,
the projective superspace setting seems to be more useful, since the 5D projective
supermultiplets \cite{KL} are easy to reduce to 4D $\cN=1$ superfields.
To our knowledge, no comprehensive discussion
of the superconformal group
and superconformal multiplets
in projective superspace has been given,
apart from the analysis of $SU(2)$ invariance in \cite{projective0}
and the semi-component consideration
of tensor multiplets in \cite{dWRV}.
On the contrary, a realisation of
the superconformal symmetry
in 4D $\cN=2$ harmonic superspace\footnote{See
\cite{ISZ} for an extension to six dimensions.}
has been known for
almost twenty years \cite{GIOS-conf,GIOS}.
But some nuances of this realisation still
appear to be quite mysterious
(at least to newcomers) and call for a different interpretation.
Specifically, one deals with
superfields depending on harmonic variables $u^\pm_i$
subject to the two constraints
\begin{equation}
u^{+i}\,u^-_i =1~, \qquad \overline{u^{+i}} =u^-_i~, \qquad \quad
i=\underline{1}, \underline{2}~
\label{1+2const}
\end{equation}
when describing general 4D $\cN=2$ super Poincar\'e
invariant theories in harmonic superspace
\cite{GIKOS,GIOS}.
In the case of superconformal theories, on the other hand,
only the first constraint in (\ref{1+2const}) has to be imposed
\cite{GIOS-conf,GIOS}.
Since any superconformal theory is, at the same time,
a super Poincar\'e invariant one,
some consistency issues
seem to arise, such as that
of the functional spaces used to describe the theory.
It is quite remarkable that these issues simply do not occur
if one pursues the twistor approach to harmonic superspace
\cite{Rosly2,LN,HH} (inspired by earlier constructions
due to Manin \cite{Manin}).
In such a setting, the constraints (\ref{1+2const})
can be shown to
appear only as possible `gauge conditions'
and therefore they have no intrinsic significance,
with the only structural condition being
$u^{+i}\,u^-_i \neq 0$.
In our opinion, the supertwistor construction sketched
in \cite{Rosly2,LN}
and further analysed in \cite{HH} is quite illuminating,
for it allows a unified treatment
of the harmonic and projective superspace formalisms.
That is why
it is reviewed and developed further
in the present paper.
Unlike \cite{Rosly2,HH} and standard texts on Penrose's twistor theory
\cite{twistors}, see e.g. \cite{WW}, we avoid considering
compactified complexified Minkowski space and its
super-extensions, for these concepts are not relevant
from the point of view of
superconformal model-building we are interested in.
Our 4D consideration is directly based on the use of
(conformally) compactified Minkowski space $S^1 \times S^3$
and its super-extensions.
Compactified Minkowski space is quite interesting in its own right
(see, e.g. \cite{GS}), and its universal covering space
(i) possesses a unique causal structure compatible
with conformal symmetry \cite{S},
and (ii) coincides with the boundary of five-dimensional
Anti-de Sitter space, which is crucial in the context
of the AdS/CFT duality \cite{AGMOO}.
In the case of 5D superconformal symmetry, one can also pursue
a supertwistor approach. However, since we are aiming at
future applications to brane-world physics,
a more pragmatic course is chosen here,
which is based on the introduction of
the relevant superconformal Killing vectors
and elaborating associated building blocks.
The concept of superconformal Killing vectors
\cite{Sohnius,Lang,BPT,Shizuya,BK,HH,West}, has proved
to be extremely useful for various studies of
superconformal theories in four and six
dimensions, see e.g. \cite{Osborn,Park,KT}.
This paper is organized as follows.
In section 2 we review the construction \cite{U}
of compactified
Minkowski space $\overline{\cM}{}^4 = S^1 \times S^3$
as the set of null two-planes
in the twistor space. In section
3 we discuss $\cN$-extended compactified
Minkowski superspace $\overline{\cM}{}^{4|4\cN}$
and introduce the corresponding superconformal
Killing vectors. In section 4 we develop different
aspects of 4D $\cN=2$ compactified
harmonic/projective superspace.
Section 5 is devoted to the 5D superconformal
formalism. Here we introduce the 5D superconformal
Killing vectors and related building blocks,
and also introduce several off-shell superconformal
multiplets, both in the harmonic and projective superspaces.
Section 6 introduces the zoo of 5D superconformal theories.
Three technical appendices are also included at the end of the paper.
In appendix A, a non-standard realisation for $S^2$ is given.
Appendix B is devoted to the projective
superspace action according to \cite{Siegel}.
Some aspects of the reduction \cite{KL}
from 5D projective supermultiplets to 4D $\cN=1,2$
superfields are collected in Appendix C.
\sect{Compactified Minkowski space}
\label{section:two}
We start by recalling a remarkable
realisation\footnote{This realisation
is known in the physics literature since the early 1960's
\cite{U,S,Tod,GS}, and it
can be related (see, e.g. \cite{S})
to the Weyl-Dirac construction \cite{W,D}
of compactified Minkowski space
$ S^1 \times S^3 / {\mathbb Z}_2$
as the set of straight lines through the origin of the cone in
${\mathbb R}^{4,2}$.
In the mathematics literature, its roots go back
to Cartan's classification of the irreducible homogeneous
bounded symmetric domains \cite{Cartan,Hua}.}
of compactified
Minkowski space $\overline{\cM}{}^4 = S^1 \times S^3$
as the set of null two-dimensional subspaces
in the twistor space\footnote{In the literature,
the term `twistor space' is often used for ${\mathbb C}P^3$.
In this paper we stick to the original
Penrose terminology \cite{twistors}.}
which is a copy of
${\mathbb C}^4$.
The twistor space is equipped with the scalar product
\begin{eqnarray}
\langle T, S \rangle = T^\dagger \,\Omega \, S~, \qquad
\Omega =\left(
\begin{array}{cc}
{\bf 1}_2 & 0\\
0 & -{\bf 1}_2
\end{array}
\right) ~,
\end{eqnarray}
for any twistors $T,S \in {\mathbb C}^4$.
By construction, this scalar product is invariant under the action
of the group $SU(2,2) $
to be identified with the conformal group.
The elements of $SU(2,2)$ will be represented by block
matrices
\begin{eqnarray}
g=\left(
\begin{array}{cc}
A & B\\
C & D
\end{array}
\right) \in SL(4,{\mathbb C}) ~, \qquad
g^\dagger \,\Omega \,g = \Omega~,
\label{SU(2,2)}
\end{eqnarray}
where $A,B,C$ and $D$ are $2\times 2$ matrices.
We will denote by $\overline{\cM}{}^4 $ the space of null
two-planes through the origin
in ${\mathbb C}^4$. Any such two-plane is generated
by two linearly independent twistors $T^\mu$, with $\mu=1,2$,
such that
\begin{equation}
\langle T^\mu, T^\nu \rangle = 0~, \qquad
\mu, \nu =1,2~.
\label{nullplane1}
\end{equation}
Obviously, the basis chosen, $\{T^\mu\}$, is defined only modulo
the equivalence relation
\begin{equation}
\{ T^\mu \}~ \sim ~ \{ \tilde{T}^\mu \} ~, \qquad
\tilde{T}^\mu = T^\nu\,R_\nu{}^\mu~,
\qquad R \in GL(2,{\mathbb C}) ~.
\label{nullplane2}
\end{equation}
Equivalently,
the space $\overline{\cM}{}^4 $ consists of
$4\times 2$ matrices of rank two,
\begin{eqnarray}
( T^\mu )=\left(
\begin{array}{c}
F\\ G
\end{array}
\right) ~, \qquad
F^\dagger \,F =G^\dagger \,G~,
\label{two-plane}
\end{eqnarray}
where $F$ and $G$ are $2\times 2$ matrices
defined modulo the equivalence relation
\begin{eqnarray}
\left(
\begin{array}{c}
F\\ G
\end{array}
\right) ~ \sim ~
\left(
\begin{array}{c}
F\, R\\ G\,R
\end{array}
\right) ~, \qquad R \in GL(2,{\mathbb C}) ~.
\end{eqnarray}
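The algebraic condition in (\ref{two-plane}) is just the null condition
(\ref{nullplane1}) written in block form. Indeed, a short computation gives
\begin{eqnarray}
\langle T^\mu , T^\nu \rangle = (T^\mu)^\dagger \,\Omega \, T^\nu
= \big( F^\dagger \,F - G^\dagger \,G \big)^{\mu \nu}~,
\nonumber
\end{eqnarray}
so the vanishing of all the inner products is equivalent to
$F^\dagger \,F =G^\dagger \,G$.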
In order for
the two twistors
$T^\mu $ in (\ref{two-plane})
to generate a two-plane, the $2\times 2$ matrices $F$ and $G$
must be non-singular,
\begin{equation}
\det F \neq 0~, \qquad \det G \neq 0~.
\label{non-singular}
\end{equation}
Indeed, let us suppose the opposite.
Then, the non-negative
Hermitian matrix $F^\dagger F$ has a zero eigenvalue.
Applying an equivalence transformation of the form
\begin{eqnarray}
\left(
\begin{array}{c}
F\\ G
\end{array}
\right) ~ \to ~
\left(
\begin{array}{c}
F\, \cV\\ G\,\cV
\end{array}
\right) ~, \qquad \cV \in U(2) ~,
\nonumber
\end{eqnarray}
under which
\begin{eqnarray}
F^\dagger \,F ~\to~ \cV^{-1} \Big(F^\dagger \,F \Big) \,\cV~, \qquad
G^\dagger \,G ~\to~ \cV^{-1} \Big(G^\dagger \,G \Big)\, \cV~,
\nonumber
\end{eqnarray}
we can arrive at the following situation
\begin{eqnarray}
F^\dagger \,F =G^\dagger \,G =
\left(
\begin{array}{cc}
0 & 0\\
0 & \lambda^2
\end{array}
\right)~, \qquad \lambda \in {\mathbb R} ~.
\nonumber
\end{eqnarray}
In terms of the twistors $T^\mu$,
the conditions obtained imply that $T^1 =0$ and $T^2 \neq 0$.
But this contradicts the assumption that the two vectors $T^\mu $ generate a two-plane.
Because of (\ref{non-singular}), we have
\begin{eqnarray}
\left(
\begin{array}{l}
F\\ G
\end{array}
\right) ~ \sim ~
\left(
\begin{array}{c}
h \\ {\bf 1}
\end{array}
\right) ~, \qquad h =F\,G^{-1} \in U(2) ~.
\end{eqnarray}
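The matrix $h$ so defined is indeed unitary, as a consequence of the
condition $F^\dagger \,F = G^\dagger \,G$:
\begin{eqnarray}
h^\dagger \, h = (G^{-1})^\dagger \, F^\dagger \, F \, G^{-1}
= (G^{-1})^\dagger \, G^\dagger \, G \, G^{-1} = {\bf 1}~.
\nonumber
\end{eqnarray}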
It is seen that the space $\overline{\cM}{}^4 $
can be identified with the group manifold
$U(2) = S^1 \times S^3$.
The conformal group acts by linear transformations on the twistor space:
associated with the group element (\ref{SU(2,2)}) is
the transformation $T \to g\, T$, for any twistor $T \in {\mathbb C}^4$.
This group representation
induces an action of $SU(2,2)$ on $\overline{\cM}{}^4 $.
It is defined as follows:
\begin{equation}
h ~\to ~g\cdot h = (A\,h +B ) \,(C\,h +D)^{-1} \in U(2) ~.
\end{equation}
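This rule is simply the linear twistor transformation rewritten in terms of
the representative chosen in (\ref{two-plane}): provided $C\,h+D$ is
non-singular,
\begin{eqnarray}
\left(
\begin{array}{c}
h \\ {\bf 1}
\end{array}
\right) ~\to ~ g\,
\left(
\begin{array}{c}
h \\ {\bf 1}
\end{array}
\right)
=
\left(
\begin{array}{c}
A\,h+B \\ C\,h+D
\end{array}
\right) ~\sim ~
\left(
\begin{array}{c}
(A\,h +B ) \,(C\,h +D)^{-1} \\ {\bf 1}
\end{array}
\right) ~,
\nonumber
\end{eqnarray}
upon applying the equivalence relation with $R=(C\,h+D)^{-1}$.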
One can readily see that $\overline{\cM}{}^4 $ is a homogeneous
space of the group $SU(2,2)$, and therefore
it can be represented as $\overline{\cM}{}^4 =SU(2,2) /H_{h_0}$,
where $H_{h_0} $ is the isotropy group at a fixed unitary matrix
$h_0 \in \overline{\cM}{}^4 $. With the choice
\begin{equation}
h_0 = - {\bf 1}~,
\end{equation}
a coset representative $s(h)\in SU(2,2)$ that
maps $h_0$ to $h \in \overline{\cM}{}^4 $
can be chosen as follows
(see, e.g. \cite{PS}):
\begin{eqnarray}
s(h)= (\det h)^{-1/4}
\left(
\begin{array}{cr}
-h ~ & 0 \\
0 ~& {\bf 1}
\end{array}
\right)~, \qquad
s(h) \cdot h_0 =h\in U(2)~.
\end{eqnarray}
The
isotropy group corresponding to $h_0$
consists of those
$SU(2,2)$ group elements (\ref{SU(2,2)}) which obey
the requirement
\begin{equation}
A+C = B+D~.
\label{stability}
\end{equation}
This subgroup proves to be isomorphic to
a group generated by the Lorentz transformations, dilatations and
special conformal transformations.
To visualise this,
it is useful to
implement a special similarity transformation for both the group
$SU(2,2)$ and the twistor space.
We introduce a special $4\times 4$ matrix $\Sigma$,
\begin{eqnarray}
\Sigma= \frac{1}{ \sqrt{2} }
\left(
\begin{array}{cr}
{\bf 1}_2 ~ & - {\bf 1}_2\\
{\bf 1}_2 ~& {\bf 1}_2
\end{array}
\right)~, \qquad \Sigma^\dagger \,\Sigma= {\bf 1}_4~,
\end{eqnarray}
and associate with it the following similarity transformation:
\begin{eqnarray}
g ~& \to & ~ {\bm g} = \Sigma \, g\, \Sigma^{-1} ~, \quad g \in SU(2,2)~;
\qquad
T ~ \to ~ {\bm T} = \Sigma \, T~, \quad T \in {\mathbb C}^4~.
\end{eqnarray}
The elements of $SU(2,2)$ are now represented by block
matrices
\begin{eqnarray}
{\bm g}=\left(
\begin{array}{cc}
{\bm A} & {\bm B}\\
{\bm C} & {\bm D}
\end{array}
\right) \in SL(4,{\mathbb C}) ~, \qquad
{\bm g}^\dagger \,{\bm \Omega} \, {\bm g} = {\bm \Omega}~,
\label{SU(2,2)-2}
\end{eqnarray}
where
\begin{eqnarray}
{\bm \Omega} = \Sigma \, \Omega\, \Sigma^{-1}
=
\left(
\begin{array}{cc}
0& {\bf 1}_2 \\
{\bf 1}_2 &0
\end{array}
\right) ~.
\end{eqnarray}
The $2\times 2$ matrices in (\ref{SU(2,2)-2}) are related to those
in (\ref{SU(2,2)}) as follows:
\begin{eqnarray}
{\bm A} &=& \frac12 ( A+D-B-C)~, \nonumber \\
{\bm B} &=& \frac12 ( A+B- C-D)~, \nonumber \\
{\bm C} &=& \frac12 ( A+C-B-D)~, \nonumber \\
{\bm D} &=& \frac12 ( A+B+C+D)~.
\end{eqnarray}
Now, by comparing these expressions with (\ref{stability}) it is seen
that the stability group $\Sigma H_{h_0} \Sigma^{-1}$
consists of upper block-triangular matrices,
\begin{equation}
{\bm C} =0~.
\label{stability2}
\end{equation}
When applied to $\overline{\cM}{}^4 $, the effect
of the similarity transformation\footnote{We follow the two-component spinor
notation of Wess and Bagger \cite{WB}.}
is
\begin{eqnarray}
\left(
\begin{array}{c}
h \\ {\bf 1}
\end{array}
\right) ~\to ~
\Sigma\, \left(
\begin{array}{c}
h \\ {\bf 1}
\end{array}
\right) = \frac{1 }{ \sqrt{2} }
\left(
\begin{array}{c}
h -{\bf 1} \\ h+{\bf 1}
\end{array}
\right) ~\sim ~
\left(
\begin{array}{c}
{\bf 1} \\ -{\rm i}\, \tilde{x}
\end{array}
\right) ~,
\qquad \tilde{x}
=x^m \,(\tilde{\sigma}_m)^{\dt \alpha \alpha}~,
\label{two-plane-mod}
\end{eqnarray}
where
\begin{eqnarray}
-{\rm i}\, \tilde{x} = \frac{ h+{\bf 1} }{ h-{\bf 1} }~,
\qquad \tilde{x}^\dagger
= \tilde{x} ~.
\label{inverseCayley}
\end{eqnarray}
The inverse expression for $h$ in terms of $\tilde{x}$
is given by the so-called Cayley transform:
\begin{equation}
-h = \frac{ {\bf 1} - {\rm i}\, \tilde{x} } { {\bf 1} + {\rm i}\, \tilde{x} } ~.
\label{Cayley}
\end{equation}
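As a consistency check, the Cayley transform maps Hermitian matrices to
unitary ones: for $\tilde{x}^\dagger = \tilde{x}$, all the factors below
are functions of $\tilde{x}$ and hence commute, so that
\begin{eqnarray}
h^\dagger \, h =
\frac{ {\bf 1} + {\rm i}\, \tilde{x} } { {\bf 1} - {\rm i}\, \tilde{x} } \;
\frac{ {\bf 1} - {\rm i}\, \tilde{x} } { {\bf 1} + {\rm i}\, \tilde{x} }
= {\bf 1}~.
\nonumber
\end{eqnarray}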
It is seen that
\begin{equation}
h_0 =-{\bf 1} \quad \longleftrightarrow \quad
\tilde{x}_0 =0~.
\end{equation}
Unlike the original twistor representation,
eqs. (\ref{two-plane}) and (\ref{non-singular}),
the $2\times 2$ matrices $h\pm{\bf 1}$
in (\ref{two-plane-mod}) may be singular at some points.
This means that the variables $x^m $ (\ref{inverseCayley})
are well-defined local coordinates in the open subset of $\overline{\cM}{}^4$
which is specified by $\det \,( h-{\bf 1} ) \neq 0$ and, as will become clear soon,
can be identified with the ordinary Minkowski space.
As follows from (\ref{two-plane-mod}), in terms of the
variables $x^m$
the conformal group acts by fractional linear transformations
\begin{equation}
-{\rm i}\, \tilde{x} ~\to ~-{\rm i}\, \tilde{x}' =
\Big({\bm C} - {\rm i}\, {\bm D}\,\tilde{x} \Big)
\Big({\bm A} - {\rm i}\, {\bm B}\,\tilde{x} \Big)^{-1}~.
\end{equation}
These transformations can be brought to a more familiar form
if one takes into account the explicit structure of the
elements of $SU(2,2)$:
\begin{eqnarray}
{\bm g} = {\rm e}^{\bm L}~, \quad
{\bm L} = \left(
\begin{array}{cc}
\omega_\alpha{}^\beta - \frac12 \,\tau \delta_\alpha{}^\beta \quad & -{\rm i} \,b_{\alpha \dt \beta}
\\
-{\rm i} \,a^{\dt \alpha \beta} \quad & -{\bar \omega}^{\dt \alpha}{}_{\dt \beta}
+ \frac12 \, \tau \delta^{\dt \alpha}{}_{\dt \beta} \\
\end{array}
\right)~,
\quad
{\bm L}^\dagger =- {\bm \Omega} \, {\bm L} \, {\bm \Omega}~.
\label{confmat}
\end{eqnarray}
Here the matrix elements correspond to a
Lorentz transformation $(\omega_\alpha{}^\beta,~{\bar \omega}^{\dt \alpha}{}_{\dt \beta})$,
translation $a^{\dt \alpha \beta}$, special conformal transformation
$ b_{\alpha \dt \beta}$ and dilatation $\tau$.
In accordance with (\ref{stability2}), the isotropy group at $x_0=0$
is spanned by the Lorentz transformations,
special conformal boosts and scale transformations.
\sect{Compactified Minkowski superspace}
\label{section:three}
The construction reviewed in the previous section
can be immediately generalised to the case of
$\cN$-extended conformal supersymmetry
\cite{Manin},
by making use of the supertwistor
space ${\mathbb C}^{4|\cN}$ introduced by Ferber \cite{Ferber},
with $\cN=1,2,3$ (the case $\cN=4$ is known to be somewhat special,
and will not be discussed here).
The supertwistor space is equipped with the scalar product
\begin{eqnarray}
\langle T, S \rangle = T^\dagger \,\Omega \, S~, \qquad
\Omega =\left(
\begin{array}{ccc}
{\bf 1}_2 & {}&0 \\
{} & -{\bf 1}_2 & {}\\
0 & {} & -{\bf 1}_{\cN}
\end{array}
\right) ~,
\end{eqnarray}
for any supertwistors $T,S \in {\mathbb C}^{4|\cN}$.
The $\cN$-extended superconformal group acting on the supertwistor
space is $SU(2,2|\cN) $. It is spanned by supermatrices of the form
\begin{eqnarray}
g \in SL(4|\cN ) ~, \qquad
g^\dagger \,\Omega \,g = \Omega~.
\label{SU(2,2|N)}
\end{eqnarray}
In complete analogy with the bosonic construction,
compactified Minkowski superspace $\overline{\cM}{}^{4|4\cN}$
is defined to be the space of null two-planes
through the origin in ${\mathbb C}^{4|\cN}$.
Given such a two-plane, it is generated by two supertwistors
$T^\mu$ such that (i) their bodies are linearly independent;
(ii) they obey equation (\ref{nullplane1}) and are defined modulo
the equivalence relation (\ref{nullplane2}).
Equivalently,
the space $\overline{\cM}^{4|4\cN} $ consists of
rank-two supermatrices of the form
\begin{eqnarray}
( T^\mu )=\left(
\begin{array}{c}
F\\ G \\
\Upsilon
\end{array}
\right) ~, \qquad
F^\dagger \,F =G^\dagger \,G +\Upsilon^\dagger \,\Upsilon~,
\label{super-two-plane}
\end{eqnarray}
defined modulo the equivalence relation
\begin{eqnarray}
\left(
\begin{array}{c}
F\\ G \\ \Upsilon
\end{array}
\right) ~ \sim ~
\left(
\begin{array}{c}
F\, R\\ G\,R \\ \Upsilon\,R
\end{array}
\right) ~, \qquad R \in GL(2,{\mathbb C}) ~.
\end{eqnarray}
Here $F$ and $G$ are $2\times 2$
bosonic matrices, and $\Upsilon$ is a $\cN \times 2$
fermionic matrix.
As in the bosonic case, we have
\begin{eqnarray}
\left(
\begin{array}{c}
F\\ G \\ \Upsilon
\end{array}
\right) ~ \sim ~
\left(
\begin{array}{c}
h \\ {\bf 1} \\ \Theta
\end{array}
\right) ~, \qquad
h^\dagger h = {\bf 1} + \Theta^\dagger \, \Theta ~.
\end{eqnarray}
Introduce the supermatrix
\begin{eqnarray}
\Sigma= \frac{1 }{ \sqrt{2} }
\left(
\begin{array}{crc}
{\bf 1}_2 ~ & - {\bf 1}_2 & 0\\
{\bf 1}_2 ~& {\bf 1}_2 & 0 \\
0 & 0 ~& \sqrt{2} \,{\bf 1}_{\cN}
\end{array}
\right)~, \qquad \Sigma^\dagger \,\Sigma= {\bf 1}_{4+\cN}~,
\end{eqnarray}
and associate with it the following similarity transformation:
\begin{eqnarray}
g ~& \to & ~ {\bm g} = \Sigma \, g\, \Sigma^{-1} ~, \quad g \in SU(2,2|\cN)~;
\qquad
T ~ \to ~ {\bm T} = \Sigma \, T~, \quad T \in {\mathbb C}^{4|\cN}~.
\label{sim2}
\end{eqnarray}
The supertwistor metric becomes
\begin{eqnarray}
{\bm \Omega} = \Sigma \, \Omega\, \Sigma^{-1}
=
\left(
\begin{array}{ccc}
0& {\bf 1}_2 &0\\
{\bf 1}_2 &0 &0\\
0 & 0& -{\bf 1}_{\cN}
\end{array}
\right) ~.
\end{eqnarray}
When implemented on the superspace $\overline{\cM}{}^{4|4\cN} $,
the similarity transformation results in
\begin{eqnarray}
\left(
\begin{array}{c}
h \\ {\bf 1} \\ \Theta
\end{array}
\right) ~\to ~
\Sigma\, \left(
\begin{array}{c}
h \\ {\bf 1} \\ \Theta
\end{array}
\right) = \frac{1 }{ \sqrt{2} }
\left(
\begin{array}{c}
h -{\bf 1} \\ h+{\bf 1} \\ \sqrt{2}\, \Theta
\end{array}
\right) ~\sim ~
\left(
\begin{array}{c}
{\bf 1} \\ -{\rm i}\, \tilde{x}_+
\\ 2 \,\theta
\end{array}
\right)
= \left(
\begin{array}{r}
\delta_\alpha{}^\beta \\ -{\rm i}\, \tilde{x}_+^{\dt \alpha \beta}
\\ 2 \,\theta_i{}^\beta
\end{array}
\right)
~,
\label{super-two-plane-mod}
\end{eqnarray}
where
\begin{eqnarray}
-{\rm i}\, \tilde{x}_+ = \frac{ h+{\bf 1} }{ h-{\bf 1} }~,
\qquad
\sqrt{2} \, \theta = \Theta \,( h-{\bf 1} )^{-1}~.
\end{eqnarray}
The bosonic $\tilde{x}_+$ and fermionic $\theta$ variables
obey the reality condition
\begin{equation}
\tilde{x}_+ -\tilde{x}_- =4{\rm i}\, \theta^\dagger \,\theta~,
\qquad \tilde{x}_- = (\tilde{x}_+)^\dagger~.
\label{chiral}
\end{equation}
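The reality condition (\ref{chiral}) follows directly from the above
definitions. A short computation gives
\begin{eqnarray}
\tilde{x}_+ -\tilde{x}_- &=&
{\rm i}\, \Big( (h+{\bf 1})(h-{\bf 1})^{-1}
+ (h^\dagger -{\bf 1})^{-1}(h^\dagger +{\bf 1}) \Big)
\nonumber \\
&=& 2{\rm i}\, (h^\dagger -{\bf 1})^{-1}
\big( h^\dagger h -{\bf 1} \big) (h-{\bf 1})^{-1}
= 4{\rm i}\, \theta^\dagger \,\theta~,
\nonumber
\end{eqnarray}
where we have used $h^\dagger h -{\bf 1} = \Theta^\dagger \,\Theta$
and $\sqrt{2} \, \theta = \Theta \,( h-{\bf 1} )^{-1}$.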
It is solved by
\begin{equation}
x_\pm^{\dt \alpha \beta} = x^{\dt \alpha \beta} \pm 2{\rm i} \,
{\bar \theta}^{\dt \alpha i} \theta^\beta_i ~,\qquad
{\bar \theta}^{\dt \alpha i} = \overline{ \theta^\alpha_i}~,
\qquad \tilde{x}^\dagger = \tilde{x}~,
\end{equation}
with $z^A = (x^a ,\theta^\alpha_i , {\bar \theta}_{\dt \alpha}^i)$ the coordinates
of $\cN$-extended flat global superspace ${\mathbb R}^{4|4\cN}$.
We therefore see that the supertwistors in
(\ref{super-two-plane-mod}) are parametrized
by the variables $x^a_+$ and $\theta^\alpha_i$ which are
the coordinates in the chiral subspace.
Since the superconformal group acts by linear
transformations on ${\mathbb C}^{4| \cN}$,
we can immediately conclude that
it acts by holomorphic transformations
on the chiral subspace.
To describe the action of $SU(2,2|\cN)$ on the chiral subspace,
let us consider a generic group element:
\begin{equation}
{\bm g} ={\rm e}^{\bm L}~, \quad
{\bm L} = \left(
\begin{array}{ccc}
\omega_\alpha{}^\beta - \sigma \delta_\alpha{}^\beta \quad & -{\rm i} \,b_{\alpha \dt \beta} \quad &
2\eta_\alpha{}^j \\
-{\rm i} \,a^{\dt \alpha \beta} \quad & -{\bar \omega}^{\dt \alpha}{}_{\dt \beta}
+ {\bar \sigma} \delta^{\dt \alpha}{}_{\dt \beta} \quad &
2{\bar \epsilon}^{\dt \alpha j} \\
2\epsilon_i{}^\beta \quad & 2{\bar \eta}_{i \dt \beta} \quad & \frac{2}{\cN}({\bar \sigma} - \sigma)\,
\delta_i{}^j + \Lambda_i{}^j
\end{array}
\right)~,
\label{su(2,2|n)}
\end{equation}
where
\begin{equation}
\sigma = \frac12 \left( \tau + {\rm i}\,
\frac{\cN}{\cN -4} \varphi \right)~,
\qquad
\Lambda^\dag = -\Lambda~, \qquad {\rm tr}\; \Lambda = 0~.
\end{equation}
Here the matrix elements, which
are not present in (\ref{confmat}), correspond to
a $Q$--supersymmetry $(\epsilon_i^\alpha,~ {\bar \epsilon}^{\dt \alpha i})$,
$S$--supersymmetry $(\eta_\alpha^i,~{\bar \eta}_{i \dt \alpha})$,
combined scale and chiral transformation $\sigma$,
and chiral $SU(\cN)$ transformation $\Lambda_i{}^j$.
Now, one can check that
the coordinates of the chiral subspace transform
as follows:
\begin{eqnarray}
\delta \tilde{x}_+ &=& \tilde{a} +(\sigma +{\bar \sigma})\, \tilde{x}_+
-{\bar \omega}\, \tilde{x}_+ -\tilde{x}_+ \,\omega
+\tilde{x}_+ \,b \,\tilde{x}_+
+4{\rm i}\, {\bar \epsilon} \, \theta - 4 \tilde{x}_+ \, \eta \, \theta ~,
\nonumber \\
\delta \theta &=& \epsilon + \frac{1}{\cN} \Big(
(\cN-2) \sigma + 2
{\bar \sigma}\Big)\, \theta - \theta\, \omega
+ \Lambda \, \theta +\theta \, b \, \tilde{x}_+
-{\rm i}\,{\bar \eta}\, \tilde{x}_+ - 4\,\theta \,\eta \, \theta~.
\label{chiraltra}
\end{eqnarray}
Expressions (\ref{chiraltra})
can be rewritten in a more compact form,
\begin{equation}
\delta x^a_+ = \xi^a_+ (x_+, \theta) ~, \qquad
\delta \theta^\alpha_i = \xi^\alpha_i (x_+, \theta) ~,
\end{equation}
where
\begin{equation}
\xi^a_+ = \xi^a + \frac{\rm i}{8} \,\xi_i \,\sigma^a \, {\bar \theta}^i~,
\qquad \overline{\xi^a} =\xi^a~.
\end{equation}
Here the parameters $\xi^a$ and $\xi^\alpha_i$ are components
of the superconformal Killing vector
\begin{equation}
\xi = {\overline \xi} = \xi^a (z) \,\pa_a + \xi^\alpha_i (z)\,D^i_\alpha
+ {\bar \xi}_{\dt \alpha}^i (z)\, {\bar D}^{\dt \alpha}_i~,
\end{equation}
which generates the infinitesimal transformation in the full superspace,
$z^A \to z^A + \xi \,z^A$, and is
defined to satisfy
\begin{equation}
[\xi \;,\; {\bar D}_i^\ad] \; \propto \; {\bar D}_j^\bd ~,
\end{equation}
and therefore
\begin{equation}
{\bar D}_i^{\dt \alpha } \xi^{\dt \beta \beta} = 4{\rm i} \, \ve^{\dt \alpha{}\dt \beta} \,\xi^\beta_i~.
\label{4Dmaster}
\end{equation}
All information about the superconformal algebra is encoded
in the superconformal Killing vectors.
${}$From eq. (\ref{4Dmaster}) it follows that
\begin{equation}
[\xi \;,\; D^i_\alpha ] = - (D^i_\alpha \xi^\beta_j) D^j_\beta
= {\tilde\omega}_\alpha{}^\beta D^i_\beta - \frac{1}{\cN}
\Big( (\cN-2) \tilde{\sigma} + 2 \overline{ \tilde{\sigma}} \Big) D^i_\alpha
- \tilde{\Lambda}_j{}^i \; D^j_\alpha \;.
\label{4Dmaster2}
\end{equation}
Here the parameters of `local' Lorentz $\tilde{\omega}$ and
scale--chiral $\tilde{\sigma}$ transformations are
\begin{equation}
\tilde{\omega}_{\alpha \beta}(z) = -\frac{1}{\cN}\;D^i_{(\alpha} \xi_{\beta)i}\;,
\qquad \tilde{\sigma} (z) = \frac{1}{\cN (\cN - 4)}
\left( \frac12 (\cN-2) D^i_\alpha \xi^\alpha_i -
{\bar D}^{\dt \alpha}_i {\bar \xi}_{\dt \alpha}^{ i} \right)
\label{lor,weyl}
\end{equation}
and turn out to be chiral
\begin{equation}
{\bar D}_{\dt \alpha i} \tilde{\omega}_{\alpha \beta}~=~ 0\;,
\qquad {\bar D}_{\dt \alpha {} i} \tilde{\sigma} ~=~0\;.
\end{equation}
The parameters $\tilde{\Lambda}_j{}^i$
\begin{equation}
\tilde{\Lambda}_j{}^i (z) = -\frac{\rm i}{32}\left(
[D^i_\alpha\;,{\bar D}_{\dt \alpha j}] - \frac{1}{\cN}
\delta_j{}^i [D^k_\alpha\;,{\bar D}_{\dt \alpha k}] \right)\xi^{\dt \alpha \alpha}~, \qquad
\tilde{\Lambda}^\dag = - \tilde{\Lambda}~, \qquad {\rm tr}\; \tilde{\Lambda} = 0
\label{lambda}
\end{equation}
correspond to `local' $SU(\cN )$ transformations.
One can readily check the identity
\begin{equation}
D^k_\alpha \tilde{\Lambda}_j{}^i = -2 \left( \delta^k_j D^i_\alpha
-\frac{1}{\cN} \delta^i_j D^k_\alpha \right) \tilde{\sigma}~.
\label{an1}
\end{equation}
\sect{Compactified harmonic/projective superspace}
\label{section:four}
${}$For Ferber's supertwistors used in the previous section,
a more appropriate name seems to be {\it even supertwistors}.
Being elements of ${\mathbb C}^{4|\cN}$, these objects have
four bosonic components and $\cN$ fermionic components.
One can also consider {\it odd supertwistors} \cite{LN}. By definition,
these are $4+\cN$ vector-columns such that their top four entries
are fermionic, and the remaining $\cN$ components are bosonic.
In other words, the odd supertwistors are elements of
${\mathbb C}^{\cN |4}$. It is natural to treat the even and odd supertwistors
as the even and odd elements, respectively,
of a supervector space\footnote{See, e.g. \cite{DeWitt,BK}
for reviews on supervector spaces.}
of dimension $(4|\cN )$
on which the superconformal group $SU(2,2|\cN) $ acts.
Both even and odd supertwistors should be used \cite{Rosly2,LN} in order
to define harmonic-like superspaces in extended supersymmetry.
Throughout this section, our consideration is restricted to the case $\cN=2$.
Then, $\tilde{\Lambda}^{ij} = \ve^{ik} \,\tilde{\Lambda}_k{}^{j}$ is symmetric,
$\tilde{\Lambda}^{ij}= \tilde{\Lambda}^{ji}$,
and eq. (\ref{an1}) implies
\begin{equation}
D^{(i}_\alpha \tilde{\Lambda}^{jk)} = {\bar D}^{(i}_{\dt \alpha} \tilde{\Lambda}^{jk)}= 0~.
\label{an2}
\end{equation}
\subsection{Projective realisation}
${}$ Following \cite{LN},
we accompany the two even null supertwistors $T^\mu$,
which occur in the construction of the compactified
$\cN=2 $ superspace $\overline{\cM}{}^{4|8} $,
by an odd supertwistor $\Xi$ with non-vanishing {\it body}
(in particular, the body of $ \langle \Xi, \Xi \rangle$ is non-zero).
These supertwistors are required to obey
\begin{equation}
\langle T^\mu, T^\nu \rangle = \langle T^\mu, \Xi \rangle =
0~, \qquad
\mu, \nu =1,2 ~,
\label{nullplane3}
\end{equation}
and are defined modulo the equivalence relation
\begin{eqnarray}
(\Xi, T^\mu)~\sim ~ (\Xi, T^\nu) \,
\left(
\begin{array}{cc}
c~ &0 \\
\rho_\nu~ & R_\nu{}^\mu
\end{array}
\right) ~,\qquad
\left(
\begin{array}{cc}
c~ &0 \\
\rho~ & R
\end{array}
\right) \in GL(1|2)~,
\end{eqnarray}
with $\rho_\nu$ anticommuting complex parameters.
The superspace obtained can be seen to be
$\overline{\cM}{}^{4|8} \times S^2$.
Indeed, using the above freedom in the definition
of $T^\mu$ and $\Xi$, we can choose them to be of the form
\begin{eqnarray}
T^\mu \sim
\left(
\begin{array}{c}
h \\ {\bf 1} \\ \Theta
\end{array}
\right) ~,
\qquad
\Xi \sim
\left(
\begin{array}{c}
0 \\ - \Theta^\dagger \,v \\ v
\end{array}
\right) ~,
\qquad
h^\dagger h = {\bf 1} + \Theta^\dagger \, \Theta ~,
\quad v \neq 0~.
\end{eqnarray}
Here
the non-zero two-vector $v \in {\mathbb C}^2$ is still defined
modulo re-scalings $v \to c\, v $, with $c \in {\mathbb C}^*$.
A natural name for the supermanifold obtained is
{\it projective superspace}.
\subsection{Harmonic realisation}
Now, we would like to present a somewhat different, but equivalent,
realisation for $\overline{\cM}{}^{4|8} \times S^2$
inspired by the exotic
realisation for the two-sphere described in Appendix A.
We will consider a space of quadruples $\{T^\mu, \Xi^+, \Xi^- \}$
consisting of two even supertwistors $T^\mu$ and
two odd supertwistors $\Xi^\pm$ such that (i) the bodies of
$T^\mu$ are linearly independent four-vectors;
(ii) the bodies of $\Xi^\pm$ are lineraly independent two-vectors.
These supertwistors
are further required to obey the relations
\begin{equation}
\langle T^\mu, T^\nu \rangle = \langle T^\mu, \Xi^+ \rangle =
\langle T^\mu, \Xi^- \rangle =
0~, \qquad
\mu, \nu =1,2 ~,
\label{nullplane4}
\end{equation}
and are defined modulo the equivalence relation
\begin{eqnarray}
(\Xi^-,\Xi^+, T^\mu)\sim (\Xi^-,\Xi^+, T^\nu) \,
\left(
\begin{array}{ccc}
a~& 0~& 0 \\
b~& c~ &0 \\
\rho^-_\nu~ & \rho^+_\nu ~&R_\nu{}^\mu
\end{array}
\right) ~,\quad
\left(
\begin{array}{lll}
a~& 0~& 0 \\
b~& c~ &0 \\
\rho^- ~ & \rho^+ ~&R
\end{array}
\right)
\in GL(2|2)~,
\end{eqnarray}
with $\rho^\pm_\nu$ anticommuting complex parameters.
Using the `gauge freedom' in the definition
of $T^\mu$ and $\Xi^\pm$, these supertwistors
can be chosen to have the form
\begin{eqnarray}
T^\mu \sim
\left(
\begin{array}{c}
h \\ {\bf 1} \\ \Theta
\end{array}
\right) ~,
\quad
\Xi^\pm \sim
\left(
\begin{array}{c}
0 \\ - \Theta^\dagger \,v^\pm \\ v^\pm
\end{array}
\right) ~,
\quad
h^\dagger h = {\bf 1} + \Theta^\dagger \, \Theta ~,
\quad
\det \, (v^- \,v^+) \neq 0~.
\end{eqnarray}
Here the `complex harmonics' $v^\pm$
are still defined modulo
arbitrary transformations of the form (\ref{equivalence2}).
Given a $2\times 2$ matrix ${\bm v}=
(v^-\, v^+ ) \in GL(2,{\mathbb C})$,
there always exists a lower triangular matrix $R$ such that
${\bm v} R \in SU(2)$. The latter implies that $v^-$ is uniquely
determined in terms of $v^+$, and therefore
the supermanifold under consideration is indeed
$\overline{\cM}{}^{4|8} \times S^2$.
In accordance with the construction given,
a natural name for this supermanifold is
{\it harmonic superspace}.
\subsection{Embedding of
$\bm{ {\mathbb R}^{4|8} \times S^2}$:
Harmonic realisation}
We can now analyse the structure of superconformal
transformations on the flat global superspace
$ {\mathbb R}^{4|8} \times S^2$
embedded in $\overline{\cM}{}^{4|8} \times S^2$.
Upon implementing the similarity transformation, eq. (\ref{sim2}),
we have
\begin{eqnarray}
({\bm T}^\mu ) \sim
\left(
\begin{array}{c}
{\bf 1} \\ -{\rm i}\, \tilde{x}_+
\\ 2 \,\theta
\end{array}
\right)
= \left(
\begin{array}{r}
\delta_\alpha{}^\beta \\ -{\rm i}\, \tilde{x}_+^{\dt \alpha \beta}
\\ 2 \,\theta_i{}^\beta
\end{array}
\right)
~, \qquad
{\bm \Xi}^\pm \sim
\left(
\begin{array}{c}
0 \\ 2{\bar \theta}^\pm
\\ u^\pm
\end{array}
\right)
= \left(
\begin{array}{c}
0 \\ 2{\bar \theta}^{\pm \dt \alpha }
\\ u^\pm_i
\end{array}
\right) ~.
\label{par}
\end{eqnarray}
with
\begin{eqnarray}
\det \Big(u_i{}^- \, u_i{}^+ \Big) =
u^{+i} \,u^-_i \neq 0~,
\qquad u^{+i} = \ve^{ij} \,u^+_j~.
\nonumber
\end{eqnarray}
Here the bosonic $x^m_+$ and fermionic $\theta^\alpha_i$ variables
are related to each other by the reality condition (\ref{chiral}).
The orthogonality conditions
$\langle {\bm T}^\mu, {\bm \Xi}^\pm \rangle = 0$ imply
\begin{equation}
{\bar \theta}^{+ \dt \alpha } = {\bar \theta}^{\dt \alpha i} \,u^+_i~,
\qquad
{\bar \theta}^{- \dt \alpha } = {\bar \theta}^{\dt \alpha i} \,u^-_i~.
\end{equation}
The complex harmonic variables $u^\pm_i$
in (\ref{par}) are still defined modulo arbitrary
transformations of the form
\begin{eqnarray}
\Big(u_i{}^- \, u_i{}^+ \Big) ~\to ~
\Big(u_i{}^- \, u_i{}^+ \Big)
\,R~,
\qquad
R= \left(
\begin{array}{cc}
a & 0\\
b & c
\end{array}
\right) \in GL(2,{\mathbb C})~.
\label{equivalence22}
\end{eqnarray}
The `gauge' freedom (\ref{equivalence22})
can be reduced by imposing the `gauge' condition
\begin{equation}
u^{+i} \,u^-_i =1~.
\label{unimod}
\end{equation}
It can be further reduced by choosing
the harmonics to obey the reality condition
\begin{equation}
u^{+i} =\overline{u^-_i} ~.
\label{real}
\end{equation}
Neither of the requirements (\ref{unimod}) and (\ref{real}) has
fundamental significance; they are merely possible gauge conditions.
It is worth pointing out that the reality condition
(\ref{real}) implies
$ \langle {\bm \Xi}^- , {\bm \Xi}^+ \rangle = 0$.
If both equations (\ref{unimod}) and (\ref{real}) hold,
then we have in addition
$ \langle {\bm \Xi}^+ , {\bm \Xi}^+ \rangle
= \langle {\bm \Xi}^- , {\bm \Xi}^- \rangle = -1$.
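The first of these statements is verified directly:
\begin{eqnarray}
\langle {\bm \Xi}^- , {\bm \Xi}^+ \rangle
= ({\bm \Xi}^-)^\dagger \, {\bm \Omega} \, {\bm \Xi}^+
= - \, \overline{u^-_i} \, u^+_i
= - u^{+i} \, u^+_i = 0~,
\nonumber
\end{eqnarray}
where the reality condition (\ref{real}) has been used in the third
equality, and the last one follows from the antisymmetry of $\ve^{ij}$.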
In what follows, the harmonics will be assumed
to obey eq. (\ref{unimod}) only.
As explained in the appendix, the gauge freedom
(\ref{equivalence22}) allows one to represent
any infinitesimal transformation of the harmonics
as follows:
\begin{eqnarray}
\delta u^-_i =0~, \qquad \delta u^+_i = \rho^{++}(u)\, u^-_i~,
\nonumber
\end{eqnarray}
for some parameter $\rho^{++}$ which is determined by
the transformation under consideration.
In the case of an infinitesimal superconformal transformation
(\ref{su(2,2|n)}),
one derives
\begin{eqnarray}
\delta u^-_i =0~, \qquad
\delta u^+_i = - \tilde{\Lambda}^{++}\, u^-_i~,
\qquad
\tilde{\Lambda}^{++} = \tilde{\Lambda}^{ij} \,u^+_i u^+_j~,
\label{deltau+}
\end{eqnarray}
with the parameter $ \tilde{\Lambda}^{ij} $ given by (\ref{lambda}).
Due to (\ref{an2}), we have (using the notation $D^\pm_\alpha =D^i_\alpha u^\pm_i$
and $ {\bar D}^\pm_{\dt \alpha} ={\bar D}^i_{\dt \alpha} u^\pm_i$)
\begin{equation}
D^+_\alpha \tilde{\Lambda}^{++} ={\bar D}^+_{\dt \alpha} \tilde{\Lambda}^{++} =0~,
\qquad D^{++} \tilde{\Lambda}^{++} =0~.
\label{L-anal}
\end{equation}
Here and below, we make use of the harmonic derivatives \cite{GIKOS}
\begin{eqnarray}
D^{++}=u^{+i}\frac{\partial}{\partial u^{- i}} ~,\qquad
D^{--}=u^{- i}\frac{\partial}{\partial u^{+ i}} ~.
\label{5}
\end{eqnarray}
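These derivatives preserve the condition (\ref{unimod}) and, together with
the harmonic $U(1)$ charge operator $D^0$, form an $su(2)$ algebra:
\begin{eqnarray}
D^{++} \big( u^{+i}\,u^-_i \big) = u^{+i}\,u^+_i = 0~, \qquad
[\, D^{++} \, , \, D^{--} \,]
= u^{+ i}\frac{\partial}{\partial u^{+ i}}
- u^{- i}\frac{\partial}{\partial u^{- i}}
\equiv D^0~.
\nonumber
\end{eqnarray}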
It is not difficult to express $\tilde{\Lambda}^{++} $ in terms
of the parameters in (\ref{su(2,2|n)}) and superspace coordinates:
\begin{equation}
\tilde{\Lambda}^{++} =\Lambda^{ij} \,u^+_i u^+_j +4 \, {\rm i}\,\theta^+ \,b \,{\bar \theta}^+
- ( \theta^+ \eta^+ -{\bar \theta}^+ {\bar \eta}^+ ) ~.
\end{equation}
The transformation (\ref{deltau+}) coincides with the one
originally given in
\cite{GIOS-conf}.
${}$For the superconformal variations
of $\theta^{+}_{ \alpha} $ and ${\bar \theta}^+_{\dt \alpha}$, one finds
\begin{eqnarray}
\delta \theta^{+}_{ \alpha} &=& \delta \theta^i_{ \alpha } \, u^+_i + \theta^i_{ \alpha }\, \delta u^+_i
= \xi^i_{ \alpha } \, u^+_i -
\tilde{\Lambda}^{++} \, \theta^i_{ \alpha } \, u^-_i ~,
\end{eqnarray}
and similarly for $\delta {\bar \theta}^{+}_{\dt \alpha}$. From eqs. (\ref{4Dmaster2})
and (\ref{L-anal}) one then deduces
\begin{equation}
D^+_\beta \, \delta \theta^{+}_{ \alpha } = {\bar D}^+_{\dt \beta} \, \delta \theta^{+}_{ \alpha}=0~,
\end{equation}
and similarly for $\delta {\bar \theta}^{+}_{\dt \alpha}$.
The superconformal variations $ \delta \theta^{+}_{ \alpha } $ and
$\delta {\bar \theta}^{+}_{ \dt \alpha}$ can be seen to coincide
with those originally given in \cite{GIOS-conf}.
One can also check that the superconformal variation of
the analytic bosonic coordinates
\begin{equation}
y^a = x^a - 2{\rm i}\, \theta^{(i}\sigma^a {\bar \theta}^{j)}u^+_i u^-_j~,
\qquad
D^+_\beta \, y^a = {\bar D}^+_{\dt \beta} \, y^a=0~,
\end{equation}
is analytic. This actually follows from the transformation
\begin{equation}
\delta D^+_\alpha \equiv
[ \xi - \tilde{\Lambda}^{++} D^{--} , D^+_{ \alpha} ]
= \tilde{\omega}_{ \alpha}{}^{ \beta}\, D_{ \beta}^+
- ( \tilde{\sigma} + \tilde{\Lambda}^{ij} \,u^+_i u^-_j ) \, D^+_{ \alpha}~,
\end{equation}
and similarly for $\delta {\bar D}^+_{\dt \alpha} $.
We conclude that the analytic subspace parametrized by
the variables
$$\zeta=( y^a,\theta^{+\alpha},{\bar\theta}^+_{\dt \alpha}, \,
u^+_i,u^-_j )~,
\qquad D^+_\beta \, \zeta = {\bar D}^+_{\dt \beta} \, \zeta=0~,
$$
is invariant under the superconformal group.
The superconformal variations of these coordinates
coincide with those given in \cite{GIOS-conf}.
No consistency clash occurs between
the $SU(2)$-type constraints (\ref{1+2const})
and the superconformal transformation law (\ref{deltau+}),
because the construction does not require
imposing either of the constraints (\ref{1+2const}).
Using eq. (\ref{an1}) one can show that
the following descendant
of the superconformal Killing vector
\begin{equation}
\Sigma = \tilde{\Lambda}^{ij} \,u^+_i u^-_j + \tilde{\sigma}
+\overline{ \tilde{\sigma} }
\end{equation}
possesses the properties
\begin{equation}
D^+_\beta \, \Sigma = {\bar D}^+_{\dt \beta} \, \Sigma=0~, \qquad
D^{++} \Sigma =\tilde{\Lambda}^{++}~.
\end{equation}
It turns out that the objects $\xi$, $\tilde{\Lambda}^{++}$ and $\Sigma$ determine
the superconformal transformations of primary analytic superfields
\cite{GIOS}.
\subsection{Embedding of $\bm{ {\mathbb R}^{4|8} \times S^2}$:
Projective realisation}
Now, let us try to exploit
the realisation of $S^2$ as the Riemann sphere ${\mathbb C}P^1$.
The superspace can be covered by two open sets -- the north chart
and the south chart -- that are specified by the
conditions: (i) $u^{+ \underline{1}} \neq 0$; and
(ii) $u^{+ \underline{2}} \neq 0$.
In the north chart, the gauge freedom (\ref{equivalence22})
can be completely fixed by choosing
\begin{eqnarray}
u^{+i} \sim (1, w) \equiv w^i ~, \quad && \quad u^+_i \sim (-w,1) = w_i~,
\quad \qquad \nonumber \\
u^{-i} \sim (0,-1) ~, \quad && \quad u^-_i \sim (1,0)~.
\label{projectivegaugeN}
\end{eqnarray}
Here $w$ is the complex coordinate parametrizing the north chart.
Then the transformation law (\ref{deltau+}) turns into
\begin{equation}
\delta w = \tilde{\Lambda}^{++}(w)~,
\qquad
\tilde{\Lambda}^{++} (w)= \tilde{\Lambda}^{ij} \,w^+_i w^+_j~.
\label{deltaw+}
\end{equation}
It is seen that the superconformal group acts by holomorphic
transformations.
The south chart is defined by
\begin{eqnarray}
u^{+i} \sim (y, 1) \equiv y^i~, \quad && \quad
u^+_i \sim (-1,y) =y_i ~, \nonumber \\
\quad \qquad
u^{-i} \sim (1,0) ~, \quad && \quad u^-_i \sim (0,1)~,
\end{eqnarray}
with $y$ the local complex coordinate. The transformation law (\ref{deltau+})
becomes
\begin{equation}
\delta y = -\tilde{\Lambda}^{++}(y)~,
\qquad
\tilde{\Lambda}^{++} (y)= \tilde{\Lambda}^{ij} \,y^+_i y^+_j~.
\label{deltay+}
\end{equation}
In the overlap of the north and south charts,
the corresponding complex coordinates
are related to each other in the standard way:
\begin{equation}
y= \frac{1}{w}~.
\end{equation}
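It is easy to see that the transformation laws (\ref{deltaw+}) and
(\ref{deltay+}) are consistent with this identification: the
representatives chosen in the two charts differ by the allowed rescaling
$y_i = w_i /w$, hence
$\tilde{\Lambda}^{++}(y) = w^{-2}\, \tilde{\Lambda}^{++}(w)$ and
\begin{eqnarray}
\delta y = - \frac{1}{w^2}\, \delta w
= - \frac{1}{w^2}\, \tilde{\Lambda}^{++}(w)
= - \tilde{\Lambda}^{++}(y)~.
\nonumber
\end{eqnarray}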
\sect{5D superconformal formalism}
\label{section:five}
As we have seen, modulo some global topological issues,
all information about the superconformal structures
in a superspace is encoded in the corresponding
superconformal Killing vectors.
In developing the 5D superconformal formalism below,
we will not pursue global aspects, and simply base
our consideration upon elaborating
the superconformal Killing vectors and related
concepts.
Our 5D notation and conventions follow \cite{KL}.
\subsection{5D superconformal Killing vectors}
In 5D simple
superspace ${\mathbb R}^{5|8}$ parametrized
by coordinates $ z^{\hat A} = (x^{\hat a}, \theta^{\hat \alpha}_i )$,
we introduce an infinitesimal coordinate transformation
\begin{equation}
z^{\hat A} ~\to ~ z'^{\hat A} = z^{\hat A} + \xi \, z^{\hat A}
\end{equation}
generated by a real vector field
\begin{equation}
\xi ={\bar \xi} = \xi^{\hat a} (z) \, \pa_{\hat a}
+ \xi^{\hat \alpha}_i (z) \, D_{\hat \alpha}^i ~,
\end{equation}
with $D_{\hat A} = ( \pa_{\hat a}, D_{\hat \alpha}^i ) $
the flat covariant derivatives.
The transformation is said to be superconformal if
$[\xi , D_{\hat \alpha}^i] \propto D_{\hat \beta}^j $,
or more precisely
\begin{equation}
[\xi , D_{\hat \alpha}^i] = -( D_{\hat \alpha}^i \, \xi^{\hat \beta}_j )\, D_{\hat \beta}^j~.
\label{master1}\end{equation}
The latter equation is equivalent to
\begin{equation}
D_{\hat \alpha}^i \xi^{\hat b} = 2{\rm i} \,(\Gamma^{\hat b})_{\hat \alpha}{}^{\hat \beta}\,
\xi^i_{\hat \beta}
= - 2{\rm i} \,(\Gamma^{\hat b})_{\hat \alpha \hat \beta}\,
\xi^{\hat \beta i} ~.
\label{master2}
\end{equation}
It follows from here
\begin{equation}
\ve^{ij} \,(\Gamma_{\hat a})_{\hat \alpha \hat \beta}\, \pa^{\hat a} \xi^{\hat b}
= (\Gamma^{\hat b})_{\hat \alpha \hat \gamma}\, D_{\hat \beta}^j \, \xi^{\hat \gamma i}
+ (\Gamma^{\hat b})_{\hat \beta \hat \gamma}\, D_{\hat \alpha}^i \, \xi^{\hat \gamma j}~.
\label{master3}
\end{equation}
This equation implies that $\xi^{\hat a}= \xi^{\hat a}(x,\theta) $ is
an ordinary conformal Killing vector,
\begin{equation}
\pa^{\hat a} \xi^{\hat b}+\pa^{\hat b} \xi^{\hat a}
=\frac{2}{5}\, \eta^{\hat a \hat b} \,
\pa_{\hat c} \, \xi^{\hat c}~,
\label{master4}
\end{equation}
depending parametrically on the Grassmann superspace coordinates,
\begin{eqnarray}
\xi^{\hat a}(x,\theta) &=& b^{\hat a} (\theta) + 2\sigma (\theta) \, x^{\hat a}
+ \omega^{\hat a}{}_{\hat b} (\theta) \,x^{\hat b}
+k^{\hat a} (\theta)\, x^2 -2 x^{\hat a} x_{\hat b}\, k^{\hat b}(\theta) ~,
\end{eqnarray}
with $\omega^{\hat a \hat b} =- \omega^{\hat b \hat a}$.
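As a quick check of (\ref{master4}), the special conformal part of the
Killing vector, $\xi^{\hat a} = k^{\hat a}(\theta) \, x^2
- 2 x^{\hat a} x_{\hat b}\, k^{\hat b}(\theta)$, yields
\begin{eqnarray}
\pa^{\hat a} \xi^{\hat b}+\pa^{\hat b} \xi^{\hat a}
= - 4\, \eta^{\hat a \hat b} \, x_{\hat c} \,k^{\hat c}~, \qquad
\pa_{\hat c} \, \xi^{\hat c} = - 10\, x_{\hat c}\, k^{\hat c}~,
\nonumber
\end{eqnarray}
in agreement with the factor $2/5$ appropriate to five dimensions.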
${}$From (\ref{master2}) one can derive a closed equation
on the vector components
$\xi_{\hat \beta \hat \gamma} = (\Gamma^{\hat b})_{\hat \beta \hat \gamma} \xi_{\hat b}$:
\begin{equation}
D^i_{( \hat \alpha}\, \xi_{\hat \beta ) \hat \gamma}
=-\frac{1}{5} \,D^{ \hat \delta i} \, \xi_{\hat \delta ( \hat \alpha} \, \ve_{\hat \beta ) \hat \gamma}~.
\end{equation}
One can also deduce closed equations on the spinor
components $ \xi^{\hat \alpha}_i $:
\begin{eqnarray}
D_{\hat \alpha}^{(i} \, \xi_{\hat \beta}^{ j) } &=&\frac{1}{ 4} \,
\ve_{\hat \alpha \hat \beta} \,D^{\hat \gamma (i }\, \xi_{\hat \gamma}^{ j)}~,
\label{master5} \\
(\Gamma^{\hat b})_{\hat \alpha \hat \beta} \,D^{\hat \alpha i} \xi^{\hat \beta }_i
&=&0~.
\label{master6}
\end{eqnarray}
At this stage it is useful to let harmonics
$u^\pm_i$, such that $u^{+i}u^-_i\neq 0$,
enter the scene for the first time.
With the definitions
$D^\pm_{\hat \alpha} = D^i_{\hat \alpha} \, u^\pm_i$ and
$\xi^\pm_{\hat \alpha} = \xi^i_{\hat \alpha} \, u^\pm_i$,
eq. (\ref{master5}) is equivalent to
\begin{equation}
D_{\hat \alpha}^{+} \xi_{\hat \beta}^{ + } =\frac{1}{4} \,
\ve_{\hat \alpha \hat \beta} \,D^{+ \hat \gamma } \xi_{\hat \gamma}^{ +}
\quad \Longrightarrow \quad
D_{\hat \alpha}^{+} D_{\hat \beta}^{+} \xi_{\hat \gamma}^{ + } =0~.
\end{equation}
The above results lead to
\begin{equation}
[\xi , D_{\hat \alpha}^i] = \tilde{\omega}_{\hat \alpha}{}^{\hat \beta}\, D_{\hat \beta}^i
-\tilde{\sigma} \, D_{\hat \alpha}^i - \tilde{\Lambda}_j{}^i D_{\hat \alpha}^j~,
\label{param}
\end{equation}
where
\begin{eqnarray}
\tilde{\omega}^{\hat \alpha \hat \beta} =-\frac12 \,D^{k (\hat \alpha} \xi^{ \hat \beta )}_k~,
\quad
\tilde{\sigma} = \frac{1}{8} D_{\hat \gamma}^k \xi^{\hat \gamma }_k~,
\quad \tilde{\Lambda}^{ij} = \frac{1}{ 4}
D_{\hat \gamma }^{( i} \xi^{j) \hat \gamma }~.
\end{eqnarray}
The parameters on the right of (\ref{param}) are related to each other
as follows
\begin{eqnarray}
D_{\hat \alpha}^i \tilde{\omega}_{\hat \beta \hat \gamma} &=&
2\Big( \ve_{\hat \alpha \hat \beta} \, D_{\hat \gamma}^i \tilde{\sigma}
+ \ve_{\hat \alpha \hat \gamma} \, D_{\hat \beta}^i \tilde{\sigma} \Big)~, \nonumber \\
D_{\hat \alpha}^i \tilde{\Lambda}^{jk} &=&
3\Big( \epsilon^{ik} \,D_{\hat \alpha}^j \tilde{\sigma} +
\epsilon^{ij} \,D_{\hat \alpha}^k \tilde{\sigma} \Big) ~.
\label{relations}
\end{eqnarray}
The superconformal transformation of the
superspace integration measure
involves
\begin{equation}
\pa_{\hat a} \,\xi^{\hat a} - D^i_{\hat \alpha} \,\xi^{\hat \alpha}_i
=2\tilde{\sigma}~.
\label{trmeasure1}
\end{equation}
\subsection{Primary superfields}
Here we give a few examples of 5D primary superfields,
without Lorentz indices.
Consider a completely symmetric iso-tensor
superfield $H^{i_1\dots i_n}= H^{(i_1\dots i_n)}$
with the superconformal transformation law
\begin{equation}
\delta H^{i_1\dots i_n}= -\xi \,H^{i_1\dots i_n}
-p \,\tilde{\sigma}\, H^{i_1\dots i_n}
-\tilde{\Lambda}_k{}^{(i_1} H^{i_2\dots i_n )k} ~,
\label{lin1}
\end{equation}
with $p$ a constant parameter equal to half
the conformal weight of $H^{i_1\dots i_n}$.
It turns out that this parameter is equal to
$3n$ if $H^{i_1\dots i_n}$ is
constrained by
\begin{equation}
D_{\hat \alpha}{}^{(j} H^{i_1\dots i_n)} =0 \quad
\longrightarrow \quad p=3n~.
\label{lin2}
\end{equation}
The vector multiplet strength transforms as
\begin{equation}
\delta W = - \xi\,W -2\tilde{\sigma} \,W~.
\label{vmfstransfo}
\end{equation}
The conformal weight of $W$ is uniquely fixed by
the Bianchi identity
\begin{equation}
D^{(i}_{\hat \alpha} D_{\hat \beta }^{j)} W
= \frac{1 }{ 4} \ve_{\hat \alpha \hat \beta} \,
D^{\hat \gamma (i} D_{\hat \gamma }^{j)} W~
\label{Bianchi1}
\end{equation}
obeyed by $W$.
\subsection{Analytic building blocks}
In what follows we make use of the harmonics $u^\pm_i$
subject to eq. (\ref{unimod}). As in the 4D $\cN=2$ case,
eq. (\ref{unimod})
has no intrinsic significance, with the only essential condition being
$(u^+u^-) \equiv u^{+i}u^-_i\neq 0$. Eq. (\ref{unimod}) is nevertheless
handy, for it allows one to get rid of numerous annoying factors
of $(u^+u^-)$.
Introduce
\begin{equation}
\Sigma = \tilde{\Lambda}^{ij} \,u^+_i u^-_j +3\tilde{\sigma}~,\qquad
\tilde{\Lambda}^{++} = D^{++} \Sigma
=\tilde{\Lambda}^{ij} \,u^+_i u^+_j~.
\end{equation}
It follows from (\ref{relations}) and the identity
$[ D^{++}, D^+_{\hat \alpha} ]=0$,
that $\Sigma$ and $\tilde{\Lambda}^{++} $ are analytic superfields,
\begin{equation}
D^+_{\hat \alpha} \Sigma =0~, \qquad
D^+_{\hat \alpha} \tilde{\Lambda}^{++} =0~.
\end{equation}
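In particular, the relation $\tilde{\Lambda}^{++} = D^{++} \Sigma$ is
immediate: $\tilde{\sigma}$ and $\tilde{\Lambda}^{ij}$ carry no harmonic
dependence, while $D^{++} u^-_j = u^+_j$ and $D^{++} u^+_i =0$, so that
\begin{eqnarray}
D^{++} \Sigma = \tilde{\Lambda}^{ij}\, u^+_i \,\big( D^{++} u^-_j \big)
= \tilde{\Lambda}^{ij}\, u^+_i u^+_j = \tilde{\Lambda}^{++}~.
\nonumber
\end{eqnarray}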
Representing
$\xi = \xi^{\hat a} \pa_{\hat a}
-\xi^{+\hat \alpha} D^-_{\hat \alpha}
+ \xi^{-\hat \alpha} D^+_{\hat \alpha}$,
one can now check that
\begin{equation}
[ \xi - \tilde{\Lambda}^{++} D^{--} \, , \, D^+_{\hat \alpha} ]
= \tilde{\omega}_{\hat \alpha}{}^{\hat \beta}\, D_{\hat \beta}^+
- (\Sigma - 2\tilde{\sigma} ) \, D^+_{\hat \alpha}~.
\end{equation}
This relation implies that the operator $ \xi - \tilde{\Lambda}^{++} D^{--} $
maps every analytic superfield into an analytic one.
It is worth pointing out that the superconformal transformation of
the analytic subspace measure involves
\begin{equation}
\pa_{\hat a} \xi^{\hat a} +D^-_{\hat \alpha} \xi^{+\hat \alpha}
-D^{--}\tilde{\Lambda}^{++} =2\Sigma~.
\end{equation}
\subsection{Harmonic superconformal multiplets}
We present here several superconformal multiplets
that are globally defined over the harmonic superspace.
Such a multiplet is described by a
smooth Grassmann analytic superfields $\Phi^{(n)}_\kappa (z,u^+,u^-)$,
\begin{equation}
D^+_{\hat \alpha} \Phi^{(n)}_\kappa =0~,
\end{equation}
which is endowed with the following superconformal transformation law
\begin{equation}
\delta \Phi^{(n)}_\kappa = - \Big( \xi - \tilde{\Lambda}^{++} D^{--} \Big) \, \Phi^{(n)}_\kappa
-\kappa \,\Sigma \, \Phi^{(n)}_\kappa ~.
\label{harmult1}
\end{equation}
The parameter $\kappa$ is related to the conformal weight of
$ \Phi^{(n)}_\kappa$. We will call $ \Phi^{(n)}_\kappa$
an analytic density of weight $\kappa$.
When $ n$ is even, one can define
real superfields,
$\breve{\Phi}^{(n)}_\kappa=\Phi^{(n)}_\kappa$,
with respect to the analyticity-preserving conjugation \cite{GIKOS,GIOS}
(also known as `smile-conjugation').
Let $V^{++}$ be a real analytic gauge potential describing
a $U(1)$ vector multiplet.
Its superconformal transformation is
\begin{equation}
\delta V^{++} = - \Big( \xi - \tilde{\Lambda}^{++} D^{--} \Big) \, V^{++}~.
\label{v++tr}
\end{equation}
Associated with the gauge potential is the field strength
\begin{eqnarray}
W = \frac{\rm i}{8} \int {\rm d}u \,
({\hat D}^-)^2 \, V^{++}~, \qquad
({\hat D}^\pm)^2=D^{\pm \hat \alpha} D^\pm_{\hat \alpha}
\label{W2}
\end{eqnarray}
which is known to be invariant under the gauge transformation
$\delta V^{++} = D^{++} \lambda $, where the gauge parameter
$\lambda$ is a real analytic superfield.
The superconformal transformation of $W$,
\begin{eqnarray}
\delta W = -\frac{\rm i}{8} \int {\rm d}u \,
({\hat D}^-)^2 \Big( \xi + (D^{--} \tilde{\Lambda}^{++}) \Big) \,
V^{++}~,
\end{eqnarray}
can be shown to coincide with (\ref{vmfstransfo}).
There are many ways to describe a hypermultiplet.
In particular, one can use
an analytic superfield $q^+ (z,u)$ and its smile-conjugate
$\breve{q}^+ (z,u)$ \cite{GIKOS,GIOS}. They transform
as follows:
\begin{equation}
\delta q^+ = - \Big( \xi - \tilde{\Lambda}^{++} D^{--} \Big) \, q^+
- \,\Sigma \, q^+ ~, \qquad
\delta \breve{q}^+ = - \Big( \xi - \tilde{\Lambda}^{++} D^{--} \Big) \, \breve{q}^+
- \,\Sigma \, \breve{q}^+ ~.
\label{q+-trlaw}
\end{equation}
One has $\kappa =n$ in (\ref{harmult1}), if
the superfield is annihilated by
$D^{++}$,
\begin{eqnarray}
&& D^+_{\hat \alpha} H^{(n)} = D^{++} H^{(n)} =0~
\quad \longrightarrow \quad
H^{(n)}(z,u) = H^{i_1\dots i_n} (z) \,u^+_{i_1} \dots u^+_{i_n} ~,
\nonumber \\
&& \qquad \delta H^{(n)} = - \Big( \xi - \tilde{\Lambda}^{++} D^{--} \Big) \, H^{(n)}
-n \,\Sigma \, H^{(n)}~.
\label{O(n)-harm}
\end{eqnarray}
Here the harmonic-independent superfield
$H^{i_1\dots i_n} $ transforms according to
(\ref{lin1}) with $p=3n$.
\subsection{Projective superconformal multiplets}
In the projective superspace approach,
one deals only with superfields
${\bm \phi}^{(n)} (z,u^+)$ obeying the constraints
\begin{eqnarray}
&& D^+_{\hat \alpha} {\bm \phi}^{(n)} = D^{++} {\bm \phi}^{(n)} =0~,
\qquad n\geq 0~.
\label{holom2}
\end{eqnarray}
Here the first constraint means that ${\bm \phi}^{(n)} $ is Grassmann analytic,
while the second constraint demands independence of $u^-$.
Unlike the harmonic superspace approach, however,
${\bm \phi}^{(n)} (z,u^+)$
is not required to be well-defined over the two-sphere, that is,
${\bm \phi}^{(n)}$ may have singularities (say, poles)
at some points of $S^2$. The presence of singularities
turns out to be harmless, since the projective-superspace action involves
a contour integral in $S^2$, see below.
We assume that ${\bm \phi}^{(n)} (z,u)$
is non-singular outside the north and south poles
of $ S^2$.
In the north chart,
we can represent
\begin{equation}
D^+_{\hat \alpha} = - u^{+\underline{1}}\, \nabla_{\hat \alpha} (w)~,
\qquad \nabla_{\hat \alpha} (w) = -D^i_{\hat \alpha} \, w_i~,
\qquad w_i = (-w, 1)~.
\label{nabla0}
\end{equation}
Then, the equations
(\ref{holom2})
are equivalent to
\begin{equation}
\phi (z, w) = \sum_{n=-\infty}^{+\infty} \phi_n (z) \,w^n~,
\qquad
\nabla_{\hat \alpha} (w) \, \phi(z,w)=0~,
\label{holom0}
\end{equation}
with the holomorphic superfield
$\phi (z, w) \propto {\bm \phi}^{(n)} (z,u^+)$.
These relations define a {\it projective multiplet}, following
the four-dimensional terminology \cite{projective}.
Associated with $\phi (z,w) $ is its smile-conjugate
\begin{eqnarray}
\breve{\phi} (z, w) = \sum_{n=-\infty}^{+\infty} (-1)^n \,
{\bar \phi}_{-n} (z) \,w^n~, \qquad
\nabla_{\hat \alpha} (w) \, \breve{\phi}(z,w)=0~,
\label{holom3}
\end{eqnarray}
which is also a projective multiplet.
If $\breve{\phi} (z, w) = {\phi} (z, w) $, the projective superfield
is called real.
Below we present several superconformal multiplets
as defined in the north chart. The corresponding transformation
laws involve the two analytic building blocks:
$$
\tilde{\Lambda}^{++} (w)= \tilde{\Lambda}^{ij} \,w^+_i w^+_j
= \tilde{\Lambda}^{\underline{1} \underline{1} }\, w^2 -2 \tilde{\Lambda}^{\underline{1} \underline{2}}\, w
+ \tilde{\Lambda}^{\underline{2} \underline{2}} ~,\quad
\Sigma (w) = \tilde{\Lambda}^{\underline{1} i} \,w_i +3 \tilde{\sigma}
= - \tilde{\Lambda}^{\underline{1} \underline{1}} \,w + \tilde{\Lambda}^{\underline{1} \underline{2}} +3 \tilde{\sigma}~.
$$
Similar structures occur in the south chart, that is
$$
\tilde{\Lambda}^{++} (y)= \tilde{\Lambda}^{ij} \,y^+_i y^+_j
= \tilde{\Lambda}^{\underline{1} \underline{1} } -2 \tilde{\Lambda}^{\underline{1} \underline{2}}\, y
+ \tilde{\Lambda}^{\underline{2} \underline{2}} \,y^2~,\quad
\Sigma (y) = \tilde{\Lambda}^{\underline{2} i} \,y_i +3 \tilde{\sigma}
= - \tilde{\Lambda}^{\underline{1} \underline{2}} + \tilde{\Lambda}^{\underline{2} \underline{2}}\, y +3 \tilde{\sigma}~.
$$
In the overlap of the two charts, we have
\begin{eqnarray}
\tilde{\Lambda}^{++} (y)&=& \frac{1}{w^2} \,\tilde{\Lambda}^{++} (w)
\quad \longrightarrow \quad
\tilde{\Lambda}^{++} (y)\,\pa_y =- \tilde{\Lambda}^{++} (w)\,\pa_w \nonumber \\
\Sigma(y) &=&
\Sigma (w) +\frac{1}{w} \,\tilde{\Lambda}^{++} (w)~.
\end{eqnarray}
To realise a massless vector multiplet,
one uses the so-called tropical multiplet
described by
\begin{equation}
V (z, w) = \sum_{n=-\infty}^{+\infty}
V_n (z) \,w^n~, \qquad
\bar{V}_n = (-1)^n \,V_{-n}~.
\label{tropical}
\end{equation}
Its superconformal transformation is
\begin{equation}
\delta V= - \Big( \xi + \tilde{\Lambda}^{++} (w)\,\pa_w \Big) \, V~.
\label{tropicaltransf}
\end{equation}
The field strength of the vector multiplet\footnote{A more
general form for the field strength (\ref{strength3}) is given
in Appendix B.} is
\begin{equation}
W(z) =- \frac{1}{ 16\pi {\rm i}} \oint {\rm d} w \,
(\hat{D}^-)^2 \, V(z,w)
=\frac{1}{ 4 \pi {\rm i}} \oint \frac{{\rm d} w}{w} \,
\cP (w) \, V(z,w) ~,
\label{strength3}
\end{equation}
where
\begin{eqnarray}
\cP(w) =\frac{1}{ 4w} \,
(\bar D_{\underline{1}})^2 + \pa_5 - \frac{w}{ 4} \, (D^{\underline{1}})^2~.
\label{Diamond}
\end{eqnarray}
The superconformal transformation of $W$
can be shown to coincide with (\ref{vmfstransfo}).
The field strength (\ref{strength3}) is invariant
under the gauge transformation
\begin{equation}
\delta V(z,w) = {\rm i}\Big( \breve{\lambda} (z,w)-\lambda (z,w) \Big)~,
\label{lambda4}
\end{equation}
with $\lambda(z,w)$ an arbitrary arctic multiplet,
see below.
To describe a massless off-shell hypermultiplet, one can use
the so-called arctic multiplet $\Upsilon (z, w)$:
\begin{equation}
{\bm q}^+ (z, u) = u^{+\underline{1}}\, \Upsilon (z, w) \sim
\Upsilon (z, w)~, \quad \qquad
\Upsilon (z, w) = \sum_{n=0}^{\infty} \Upsilon_n (z) w^n~.
\label{qsingular}
\end{equation}
The smile-conjugation of $ {\bm q}^+$ leads to the so-called
antarctic multiplet $\breve{\Upsilon} (z, w) $:
\begin{equation}
\breve{{\bm q}}^+ (z, u) = u^{+\underline{2}} \,\breve{\Upsilon} (z, w) \sim
w\, \breve{\Upsilon} (z, w) ~, \qquad \quad
\breve{\Upsilon} (z, w) = \sum_{n=0}^{\infty} (-1)^n {\bar \Upsilon}_n (z)
\frac{1}{w^n}\;.
\label{smileqsingular}
\end{equation}
Their superconformal transformations are
\begin{eqnarray}
\delta \Upsilon = - \Big( \xi &+& \tilde{\Lambda}^{++} (w)\,\pa_w \Big) \Upsilon
- \Sigma (w) \, \Upsilon ~, \nonumber \\
\delta \breve{\Upsilon} =
- \frac{1}{w}\Big( \xi &+& \tilde{\Lambda}^{++} (w) \,\pa_w \Big) (w\,\breve{\Upsilon} )
-\Sigma (w) \,\breve{\Upsilon} ~.
\label{polarsuperconf}
\end{eqnarray}
In the south chart, these transformations take the form
\begin{eqnarray}
\delta \Upsilon = - \frac{1}{y} \Big( \xi &-& \tilde{\Lambda}^{++} (y)\,\pa_y \Big) (y\,\Upsilon )
- \Sigma (y) \, \Upsilon ~, \nonumber \\
\delta \breve{\Upsilon} =
- \Big( \xi &-& \tilde{\Lambda}^{++} (y) \,\pa_y \Big) \breve{\Upsilon}
-\Sigma (y) \,\breve{\Upsilon} ~.
\end{eqnarray}
Both $\Upsilon(z,w)$ and $\breve{\Upsilon}(z,w)$ constitute
the so-called polar multiplet.
Since the product of two arctic superfields is again arctic,
from (\ref{polarsuperconf}) we obtain more general transformation
laws
\begin{eqnarray}
\delta \Upsilon_\kappa = - \Big( \xi &+& \tilde{\Lambda}^{++} (w)\,\pa_w \Big) \Upsilon_\kappa
- \kappa\,\Sigma (w) \, \Upsilon_\kappa ~, \nonumber \\
\delta \breve{\Upsilon}_\kappa =
- \frac{1}{w^\kappa}\Big( \xi &+& \tilde{\Lambda}^{++} (w) \,\pa_w \Big) (w^\kappa\,\breve{\Upsilon}_\kappa )
-\kappa\,\Sigma (w) \,\breve{\Upsilon}_\kappa ~,
\label{polarsuperconf-kappa}
\end{eqnarray}
for some parameter $\kappa$.
The case $\kappa=1$ corresponds to free hypermultiplet dynamics,
see below.
Since the product $U_\kappa = \breve{\Upsilon}_\kappa \, \Upsilon_\kappa $ is a tropical multiplet,
we obtain more general transformation laws than the one
defined by eq. (\ref{tropicaltransf}):
\begin{eqnarray}
\delta U_\kappa =
- \frac{1}{w^\kappa}\Big( \xi &+& \tilde{\Lambda}^{++} (w) \,\pa_w \Big) (w^\kappa\,U_\kappa )
-2\kappa\,\Sigma (w) \,U_\kappa ~.
\label{tropicaltransf-kappa}
\end{eqnarray}
${}$Finally, let us consider the projective-superspace reformulation
of the multiplets (\ref{O(n)-harm}) with an even superscript,
\begin{eqnarray}
H^{(2n)} (z,u) &=&
\big({\rm i}\, u^{+\underline{1}} u^{+\underline{2}}\big)^n H^{[2n]}(z,w) \sim
\big({\rm i}\, w\big)^n H^{[2n]}(z,w)~, \\
H^{[2n]}(z,w) &=&
\sum_{k=-n}^{n} H_k (z)\, w^k~,
\qquad {\bar H}_k = (-1)^k H_{-k} ~. \nonumber
\label{O(n)-proj}
\end{eqnarray}
The projective superfield $H^{[2n]}(z,w) $ is often called a real $O(2n)$
multiplet \cite{projective}.
Its superconformal transformation in the north chart is
\begin{eqnarray}
\delta H^{[2n]} &=&
- \frac{1}{w^n}\Big( \xi + \tilde{\Lambda}^{++} (w) \,\pa_w \Big) (w^n\, H^{[2n]} )
-2n \,\Sigma (w)\, H^{[2n]} ~.
\label{o2n}
\end{eqnarray}
In a similar way one can introduce complex $O(2n+1)$
multiplets. In what follows, we will use the same name
`$O(n)$ multiplet' for both harmonic multiplets (\ref{O(n)-harm})
and the projective ones just introduced.
Among the projective superconformal multiplets considered, it is only
the $O(n)$ multiplets which can be lifted to well-defined representations
of the superconformal group on a compactified 5D
harmonic superspace. The other multiplets realise
the superconformal algebra only.
\sect{5D superconformal theories}
\label{section:six}
With the tools developed,
we are prepared to construct 5D superconformal theories.
Superfield formulations for
5D $\cN=1$ rigid supersymmetric theories
were earlier elaborated in the harmonic \cite{Z,KL}
and projective \cite{KL} superspace settings.\footnote{In
the case of 6D $\cN=(1,0)$ rigid supersymmetric theories,
superfield formulations have been developed
in the conventional \cite{6Dstand},
harmonic \cite{6Dhar}
and projective \cite{6Dproj} superspace settings.}
\subsection{Models in harmonic superspace}
Let $\cL^{(+4)}$ be an analytic density of weight $+2$.
Its superconformal transformation is a total derivative,
\begin{eqnarray}
\delta \cL^{(+4)} &=& - \Big( \xi - \tilde{\Lambda}^{++} D^{--} \Big) \, \cL^{(+4)}
-2 \,\Sigma \, \cL^{(+4)} \nonumber \\
&=&-\pa_{\hat a} \Big( \xi^{\hat a} \, \cL^{(+4)}\Big)
-D^-_{\hat \alpha} \Big( \xi^{+ \hat \alpha} \, \cL^{(+4)}\Big)
+ D^{--} \Big( \tilde{\Lambda}^{++} \, \cL^{(+4)}\Big)~.
\end{eqnarray}
Therefore, such a superfield generates a superconformal invariant
of the form
\begin{equation}
\int {\rm d} \zeta^{(-4)} \, \cL^{(+4)} ~,
\end{equation}
where
\begin{equation}
\int {\rm d} \zeta^{(-4)}
:=
\int{\rm d} u
\int {\rm d}^5 x \,
(\hat{D}^-)^4 ~, \qquad
(\hat{D}^\pm)^4 = -\frac{1}{ 32} (\hat{D}^\pm)^2
\, (\hat{D}^\pm)^2~.
\end{equation}
This is the harmonic superspace action \cite{GIOS} as applied
to the five-dimensional case.
Let $V^{++}$ be the gauge potential of an Abelian
vector multiplet.
Given a real $O(2)$ multiplet $\cL^{++}$,
\begin{equation}
D^+_{\hat \alpha} \cL^{++} = D^{++} \cL^{++} =0~,\qquad
\delta \cL^{++} = - \Big( \xi - \tilde{\Lambda}^{++} D^{--} \Big) \, \cL^{++}
-2 \,\Sigma \, \cL^{++}~,
\label{tensor}
\end{equation}
we can generate the following superconformal invariant
\begin{equation}
\int {\rm d} \zeta^{(-4)} \, V^{++}\,\cL^{++} ~.
\end{equation}
Because of the constraint $D^{++} \cL^{++} =0$,
the integral is invariant under the vector multiplet
gauge transformation $\delta V^{++} =- D^{++} \lambda$,
with $\lambda $ a real analytic gauge parameter.
The field strength of the vector multiplet, $W$, is a primary superfield
with the transformation (\ref{vmfstransfo}).
Using $W$, one can construct
the following analytic superfield \cite{KL}
\begin{equation}
-{\rm i} \, G^{++} =
D^{+ \hat \alpha} W \, D^+_{\hat \alpha} W
+\frac12 \,W \, ({\hat D}^+)^2 W ~, \qquad
D^+_{\hat \alpha} G^{++}=D^{++}G^{++} =0 ~.
\label{G++}
\end{equation}
This superfield transforms as an analytic density of weight 2,
\begin{equation}
\delta G^{++} = - \Big( \xi - \tilde{\Lambda}^{++} D^{--} \Big) \, G^{++}
-2 \,\Sigma \, G^{++} ~.
\label{G++transf}
\end{equation}
In other words, $G^{++}$ is a real $O(2)$ multiplet.
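The properties quoted in (\ref{G++}) follow from the Bianchi identity
(\ref{Bianchi1}). Contracting the latter with $u^+_i u^+_j$ gives
\begin{eqnarray}
D^+_{\hat \alpha} D^+_{\hat \beta}\, W
= \frac{1}{4}\, \ve_{\hat \alpha \hat \beta}\, ({\hat D}^+)^2 W~,
\nonumber
\end{eqnarray}
which reduces the analyticity of $G^{++}$ to a short exercise in spinor
algebra, while $D^{++} G^{++} =0$ holds simply because $G^{++}$ involves
the harmonics only through $u^+_i$.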
As a result,
the supersymmetric Chern-Simons action\footnote{A different form
for this action was given in \cite{Z}.} \cite{KL}
\begin{equation}
S_{\rm CS} [V^{++}]= \frac{1}{12 }
\int {\rm d} \zeta^{(-4)} \, V^{++} \,
G^{++} ~
\label{CS2}
\end{equation}
is superconformally invariant.
Super Chern-Simons theory (\ref{CS2}) is quite remarkable
as compared with the superconformal models of a single vector
multiplet in four and six dimensions. In the 4D $\cN=2$ case,
the analogue of $G^{++}$ in (\ref{CS2}) is known to be
$D^{+\alpha} D^+_\alpha W= {\bar D}^+_{\dt \alpha} {\bar D}^{+\dt \alpha} {\bar W}$,
with $W$ the chiral field strength, and therefore the model is free.
In the 6D $\cN=(1,0)$ case, the analogue of $G^{++}$ in (\ref{CS2}) is
$(D^+)^4 D^-_{\hat \alpha} W^{-\hat \alpha}$, see \cite{ISZ} for more details,
and therefore the model is not only free but also involves higher derivatives.
It is only in five dimensions that the requirement of superconformal invariance
leads to a nontrivial dynamical system.
The model (\ref{CS2}) admits interesting generalisations.
In particular, given several Abelian vector multiplets $V^{++}_I$,
where $I=1,\dots, n$, the composite superfield (\ref{G++})
is generalised as follows:
\begin{eqnarray}
G^{++} ~\to~
G^{++}_{IJ} =G^{++}_{(IJ)}
&=&{\rm i}\,
\Big\{ D^{+ \hat \alpha} W_{I} \, D^+_{\hat \alpha} W_{J}
+\frac12 \,W_{(I} \, ({\hat D}^+)^2 W_{J)} \Big\}~, \nonumber \\
D^+_{\hat \alpha} G^{++}_{IJ}&=&D^{++}G^{++}_{IJ} =0 ~.
\end{eqnarray}
The gauge-invariant and superconformal action (\ref{CS2})
turns into
\begin{equation}
\tilde{S}_{\rm CS} = \frac{1}{12 }
\int {\rm d} \zeta^{(-4)} \, V^{++}_I \, c_{I ,JK}\,
G^{++}_{JK} ~,
\qquad
c_{I ,JK} =c_{I, KJ}~,
\label{CS3}
\end{equation}
for some constant parameters $c_{I ,JK} $.
One can also generalise the super Chern-Simons theory (\ref{CS2})
to the non-Abelian case.
In harmonic superspace, some superconformal transformation
laws are effectively independent (if properly understood)
of the dimension of space-time. As a result, some 4D $\cN=2$
superconformal theories can be trivially extended to five dimensions.
In particular, the model for a massless
$U(1)$ charged hypermultiplet \cite{GIKOS}
\begin{equation}
\label{q-hyper}
S_{\rm hyper}= - \int {\rm d} \zeta^{(-4)}\,
\breve{q}{}^+ \Big( D^{++} +{\rm i} \, e\, V^{++} \Big) \,q^+
\end{equation}
can be seen to be superconformal. This follows from
eqs. (\ref{v++tr}) and
(\ref{q+-trlaw}), in conjunction with the observation that
the transformation laws of $q^+$ and
$D^{++} q^+$ are identical.
The dynamical system $S_{\rm CS} + S_{\rm hyper}$ can be chosen
to describe the supergravity compensator sector (vector multiplet plus
hypermultiplet) when describing 5D simple supergravity within
the superconformal tensor calculus \cite{Ohashi,Bergshoeff}.
Then, the hypermultiplet charge $e$ is equivalent to the presence of
a non-vanishing cosmological constant, similar to the 4D $\cN=2$
case \cite{GIOS}.
Our next example is a naive 5D generalisation of the 4D $\cN=2$
improved tensor multiplet \cite{deWPV,LR,projective0}
which was described in the harmonic superspace approach in
\cite{GIO1,GIOS}.
Let us consider the action
\begin{eqnarray}
S_{\rm tensor} [H^{++}]
= \int {\rm d} \zeta^{(-4)} \,\cL^{(+4)} (H^{++}, u) ~,
\label{tensoraction1}
\end{eqnarray}
where
\begin{eqnarray}
\cL^{(+4)} (H^{++}, u) = \mu^3 \,
\Big( \frac{ \cH^{++} }{1 + \sqrt{ 1+ \cH^{++} \,c^{--} }}
\Big)^2~, \qquad
\cH^{++} = H^{++}
- c^{++} ~,
\end{eqnarray}
with $\mu$ a constant parameter of unit mass dimension,
and $c^{++}$ a space-time independent
holomorphic vector field on $S^2$,
\begin{equation}
c^{\pm \pm }(u) = c^{ij} \,u^\pm_i u^\pm_j ~, \qquad
c^{ij} c_{ij} =2~, \qquad c^{ij} = {\rm const}~.
\end{equation}
Here $H^{++}(z,u)$ is a real $O(2)$ multiplet
possessing the superconformal transformation law
(\ref{O(n)-harm}) with $n=2$.
The superconformal invariance of (\ref{tensoraction1})
can be proved in complete analogy to
the detailed consideration given in \cite{GIO1,GIOS}.
Now, let us couple the vector multiplet
to the real $O(2)$ multiplet
by putting forward the action
\begin{eqnarray}
S_{\rm vector-tensor}[V^{++},H^{++}]=
S_{\rm CS} [V^{++}]
+ \kappa \int {\rm d} \zeta^{(-4)} \, V^{++} \, H^{++}
+ S_{\rm tensor} [H^{++}]~,
\end{eqnarray}
with $\kappa$ a coupling constant.
This action is both gauge-invariant and superconformal.
It is a five-dimensional generalisation of the 4D $\cN=2$ model
for a massive tensor multiplet introduced in \cite{Kuz-ten}.
The dynamical system $S_{\rm vector-tensor}$ can be chosen
to describe the supergravity compensator sector (vector multiplet plus
tensor multiplet) when describing 5D simple supergravity within
the superconformal tensor calculus \cite{Ohashi,Bergshoeff}.
Then, the coupling constant $\kappa$ is equivalent to
a cosmological constant, similar to the 4D $\cN=2$
case \cite{BS}.
Finally, consider
the vector multiplet model
\begin{equation}
S_{\rm CS} [V^{++}]
+ S_{\rm tensor} [G^{++} / \mu^3]~,
\end{equation}
with $G^{++}$ the composite superfield (\ref{G++}).
The second term here turns out to be a unique superconformal
extension of the $F^4$-term, where $F$ is the field strength of the
component gauge field. In this respect, it is instructive
to recall its 4D $\cN=2$ analogue \cite{deWGR}
\begin{equation}
\int {\rm d}^4 x \,{\rm d}^8 \theta\,
\ln W \ln {\bar W} ~.
\end{equation}
The latter can be shown \cite{BKT} to be a unique $\cN=2$ superconformal
invariant in the family of actions
of the form $\int {\rm d}^4x \,{\rm d}^8 \theta \,H(W, {\bar W})$
introduced for the first time in \cite{Hen}.
In five space-time dimensions, if one looks for a superconformal invariant
of the form $\int {\rm d}^5x \,{\rm d}^8 \theta \,H(W)$, the general solution
is $H(W) \propto W$, as follows from (\ref{trmeasure1}) and (\ref{vmfstransfo}),
and this choice corresponds to a total derivative.
\subsection{Models in projective superspace}
Let $\cL (z,w) $ be an analytic superfield transforming
according to
(\ref{tropicaltransf-kappa})
with $\kappa=1$.
This transformation law can be rewritten as
\begin{eqnarray}
w\, \delta \cL &=&
- \Big( \xi + \tilde{\Lambda}^{++} \,\pa_w \Big) (w \, \cL )
-2 w\, \Sigma \, \cL \nonumber \\
&=& -\pa_{\hat a} \Big( \xi^{\hat a} \, w\, \cL \Big)
-D^-_{\hat \alpha} \Big( \xi^{+ \hat \alpha} \, w\, \cL \Big)
-\pa_w \Big( \tilde{\Lambda}^{++} \, w\,\cL \Big)~.
\label{o2}
\end{eqnarray}
Such a superfield turns out to generate a
superconformal invariant of the form
\begin{eqnarray}
I =
\oint
\frac{{\rm d} w}{2\pi {\rm i}} \,
\int {\rm d}^5 x \,
(\hat{D}^-)^4 \, w\,\cL (z,w)~,
\label{projac1}
\end{eqnarray}
where
$\oint {\rm d} w $ is a (model-dependent)
contour integral in ${\mathbb C}P^1$.
Indeed, it follows from (\ref{o2}) that this functional
does not change under the superconformal transformations.
Eq. (\ref{projac1})
generalises the projective superspace action \cite{projective0,Siegel}
to the five-dimensional case.
A more general form for this action, which
does not imply the projective gauge conditions (\ref{projectivegaugeN})
and is based on the construction in \cite{Siegel}, is given in Appendix B.
It is possible to bring the action (\ref{projac1}) to a somewhat simpler form
if one exploits the fact that $\cL$ is Grassmann analytic.
Using the considerations outlined in Appendix C gives
\begin{eqnarray}
\int {\rm d}^5 x \,
(\hat{D}^-)^4 \, \cL
=\frac{1}{w^2}
\int {\rm d}^5 x \,
D^4 \cL \Big|~, \qquad
D^4 = \frac{1}{16} (D^{\underline{1}})^2 ({\bar D}_{\underline{1}})^2 \Big|~.
\end{eqnarray}
Here $D^4$ is the Grassmann part of the integration measure of 4D $\cN=1$
superspace,
$\int {\rm d}^4 \theta = D^4$.
Then, functional (\ref{projac1}) turns into
\begin{eqnarray}
I= \oint \frac{ {\rm d} w}{2\pi {\rm i} w} \,
\int {\rm d}^5 x \,
D^4 \cL
= \oint \frac{ {\rm d} w}{2\pi {\rm i}w} \int {\rm d}^5 x \,{\rm d}^4 \theta \,
\cL ~.
\label{projac2}
\end{eqnarray}
Our first example is the tropical multiplet formulation
for the super Chern-Simons theory \cite{KL}
\begin{equation}
S_{\rm CS} = -
\frac{1}{12 }
\oint
\frac{{\rm d}w}{2\pi {\rm i}w}
\int {\rm d}^5 x \,
{\rm d}^4 \theta \,
V\,G ~,
\label{CS-proj}
\end{equation}
with the contour around the origin. Here
$G(w) $ is the composite $O(2) $ multiplet
(\ref{G++}) constructed out of the tropical gauge potential
$V(w)$,
\begin{equation}
G^{++}= ({\rm i} \,u^{+\underline{1}}u^{+\underline{2}}) \, G(w)
\sim {\rm i} \,w\,G(w)~,
\qquad G(w) = -\frac{1}{ w} \, \Psi+K+ w\, \bar \Psi~.
\label{sYMRed}
\end{equation}
The explicit expressions for the superfields
$\Psi$ and $K$
can be found in \cite{KL}.
The above consideration
and the transformation laws
(\ref{tropicaltransf}) and (\ref{G++transf}) imply that
the action (\ref{CS-proj}) is superconformal.
Next, let us generalise to five dimensions
the charged $\Upsilon$-hypermultiplet model of \cite{projective}:
\begin{equation}
S_{\rm hyper}=
\oint \frac{{\rm d}w}{2\pi {\rm i}w}
\int {\rm d}^5 x \,
{\rm d}^4 \theta \,
\breve{\Upsilon} \,{\rm e}^{ q \, V }\, \Upsilon ~,
\end{equation}
with $q$ the hypermultiplet charge, and
the integration contour around the origin.
This action is superconformal, in accordance
with the transformation laws (\ref{tropicaltransf})
and (\ref{polarsuperconf}).
It is also invariant under gauge transformations
\begin{equation}
\delta \Upsilon = {\rm i} \, q \,\lambda\, \Upsilon ~, \qquad
\delta V = {\rm i} ( \breve{\lambda}-\lambda )~,
\end{equation}
with $\lambda$ an arctic superfield.
Now, let us couple the vector multiplet to a real
$O(2)$ multiplet $H(w)$
\begin{equation}
H^{++}= ({\rm i} \,u^{+\underline{1}}u^{+\underline{2}}) \, H(w)
\sim {\rm i} \,w\,H(w)~,
\qquad H(w) = -\frac{1}{ w} \, \Phi+L + w\, \bar \Phi~.
\label{O(2)-components}
\end{equation}
We introduce the vector-tensor system
\begin{eqnarray}
S &=& -
\oint
\frac{{\rm d}w}{2\pi {\rm i}w}
\int {\rm d}^5 x \,
{\rm d}^4 \theta \,
V \Big\{ \frac{1}{12 }\, G
+\kappa \, H \Big\}
+ \mu^3 \oint
\frac{{\rm d}w}{2\pi {\rm i}w}
\int {\rm d}^5 x \,
{\rm d}^4 \theta \, H \, \ln H ~,
\label{vt-proj}
\end{eqnarray}
where the first term on the right involves a contour around the origin,
while the second comes with a contour turning clockwise and anticlockwise
around the roots of the quadratic equation
$w\, H(w)=0$. The second term in (\ref{vt-proj}) is
a minimal 5D extension of the 4D $\cN=2$ improved tensor multiplet
\cite{projective0}.
It should be pointed out that the component superfields
in (\ref{O(2)-components}) obey the constraints \cite{KL}
\begin{equation}
{\bar D}^{\dt \alpha} \, \Phi =0~,
\qquad
-\frac{1}{ 4} {\bar D}^2 \, L
= \pa_5\, \Phi~.
\end{equation}
It should also be remarked that the real linear superfield $L$
can always be dualised into a chiral scalar and its conjugate \cite{KL},
which generates a special chiral superpotential.
Given several $O(2) $ multiplets $H^I(w)$, where $I=1,\dots,n$,
superconformal dynamics is generated by the action
\begin{equation}
S=\oint
\frac{{\rm d}w}{2\pi {\rm i}w}
\int {\rm d}^5 x \,
{\rm d}^4 \theta \, \cF( H^I ) ~, \qquad I=1,\dots ,n~
\end{equation}
where $\cF (H) $ is a weakly homogeneous function
of first degree in the variables $H$,
\begin{equation}
\oint \frac{{\rm d}w}{2\pi {\rm i}w} \int {\rm d}^5 x \,
{\rm d}^4 \theta \,
\Big\{ H^I \, \frac{\pa \cF(H ) }{\pa H^I}
-\cF (H ) \Big\} =0~.
\end{equation}
This is completely analogous to the four-dimensional case
\cite{projective0,BS,dWRV} where the component structure
of such sigma-models has been studied in detail
\cite{deWKV}.
A great many superconformal models can be obtained if one
considers $\Upsilon$-hypermultiplet actions of the form
\begin{eqnarray}
S = \oint \frac{{\rm d}w}{2\pi {\rm i}w}
\int {\rm d}^5 x \,
{\rm d}^4 \theta \,
K \big( \Upsilon^I , \breve{\Upsilon}^{ \bar J} \big)~,
\qquad I,{\bar J} =1,\dots ,n~
\label{nact}
\end{eqnarray}
with the contour around the origin.
Let us first assume that the superconformal
transformations of all $\Upsilon$'s and $\breve{\Upsilon}$'s
have the form (\ref{polarsuperconf}).
Then, in accordance with general principles,
the action is superconformal if $K ( \Upsilon , \breve{\Upsilon} ) $ is
a weakly homogeneous function of first degree in the variables $\Upsilon$,
\begin{equation}
\oint \frac{{\rm d}w}{2\pi {\rm i}w} \int {\rm d}^5 x \,
{\rm d}^4 \theta \,
\Big\{ \Upsilon^I \, \frac{\pa K(\Upsilon, \breve{\Upsilon} ) }{\pa \Upsilon^I}
-K(\Upsilon, \breve{\Upsilon} ) \Big\} =0~.
\label{polar-homog}
\end{equation}
This homogeneity condition is compatible
with the K\"ahler invariance
\begin{equation}
K(\Upsilon, \breve{\Upsilon}) \quad \longrightarrow \quad K(\Upsilon, \breve{\Upsilon}) ~+~
\Lambda(\Upsilon) \,+\, {\bar \Lambda} (\breve{\Upsilon} )
\end{equation}
which the model (\ref{nact}) possesses \cite{Kuzenko,GK,KL}.
Unlike the $O(n)$ multiplets, the superconformal transformations
of $\Upsilon$ and $\breve{\Upsilon}$ are not fixed uniquely by the constraints,
as directly follows from (\ref{polarsuperconf-kappa}).
Therefore, one can consider superconformal sigma-models of the form
(\ref{nact}) in which the dynamical variables $\Upsilon$'s consist
of several subsets with different values for the weight $\kappa$ in
(\ref{polarsuperconf-kappa}), and then $K(\Upsilon, \breve{\Upsilon} )$
should obey weaker conditions than eq. (\ref{polar-homog}).
Such a situation occurs, for instance, if one starts with a gauged linear
sigma-model and then integrates out the gauge multiplet,
in the spirit of \cite{LR,dWRV}.
As an example, consider
\begin{equation}
S=
\oint \frac{{\rm d}w}{2\pi {\rm i}w}
\int {\rm d}^5 x \,
{\rm d}^4 \theta \,
\Big\{
\breve{\Upsilon}^\alpha \,\eta_{\alpha \beta} \, \Upsilon^\beta \,{\rm e}^{ V }
+ \breve{\Upsilon}^\mu \,\eta_{\mu \nu} \, \Upsilon^\nu \,{\rm e}^{ - V } \Big\} ~,
\end{equation}
where $\eta_{\alpha \beta} $ and $\eta_{\mu \nu}$ are constant
diagonal metrics,
$\alpha=1, \dots , m$ and $\mu =1, \dots , n$.
Integrating out the tropical multiplet gives
the gauge-invariant action
\begin{equation}
S= 2
\oint \frac{{\rm d}w}{2\pi {\rm i}w}
\int {\rm d}^5 x \,
{\rm d}^4 \theta \,
\sqrt{
\breve{\Upsilon}^\alpha \,\eta_{\alpha \beta} \, \Upsilon^\beta
\, \breve{\Upsilon}^\mu \,\eta_{\mu \nu} \, \Upsilon^\nu }~.
\end{equation}
The gauge freedom can be completely fixed by setting, say,
one of the superfields $\Upsilon^\nu$ to be unity.
Then, the action becomes
\begin{equation}
S= 2
\oint \frac{{\rm d}w}{2\pi {\rm i}w}
\int {\rm d}^5 x \,
{\rm d}^4 \theta \,
\sqrt{
\breve{\Upsilon}^\alpha \,\eta_{\alpha \beta} \, \Upsilon^\beta
\,( \eta_{nn} + \breve{\Upsilon}^{\underline \mu} \,
\eta_{\underline{\mu} \underline{\nu}} \,
\Upsilon^{\underline \nu}) }~,
\end{equation}
where $\underline{\mu}, \underline{\nu}=1,\dots,n-1.$
This action is still superconformal,
but now $ \Upsilon^\beta $ and $\Upsilon^{\underline \nu}$ transform
according to (\ref{polarsuperconf-kappa})
with $\kappa=2$ and $\kappa=0$, respectively.
Sigma-models (\ref{nact}) have an interesting geometric interpretation
if $K(\Phi, \bar \Phi )$ is the K\"ahler potential of a K\"ahler manifold
$\cM$ \cite{Kuzenko,GK,KL}.
Among the component superfields of
$\Upsilon (z,w) = \sum_{n=0}^{\infty} \Upsilon_n (z) \,w^n$,
the leading components
$\Phi = \Upsilon_0 | $ and $\Gamma = \Upsilon_1 |$
considered as 4D $\cN=1$ superfields,
are constrained:
\begin{equation}
{\bar D}^{\dt \alpha} \, \Phi =0~,
\qquad
-\frac{1}{ 4} {\bar D}^2 \, \Gamma
= \pa_5\, \Phi~.
\label{pm-constraints}
\end{equation}
The superfields $\Phi$ and $\Gamma$ can be regarded as
a complex coordinate of the K\"ahler
manifold and a tangent vector at the point $\Phi$ of the same manifold,
and therefore they parametrize the tangent bundle $T\cM$
of the K\"ahler manifold.
The other components, $\Upsilon_2, \Upsilon_3, \dots$,
are complex unconstrained superfields.
These superfields are auxiliary since they appear
in the action without derivatives.
The auxiliary superfields $\Upsilon_2, \Upsilon_3, \dots$, and their
conjugates, can
be eliminated with the aid of the
corresponding algebraic equations of motion
\begin{equation}
\oint {{\rm d} w} \,w^{n-1} \, \frac{\pa K(\Upsilon, \breve{\Upsilon}
) }{\pa \Upsilon^I} = 0~,
\qquad n \geq 2 ~.
\label{int}
\end{equation}
Their elimination can be carried out
using the ansatz
\begin{eqnarray}
\Upsilon^I_n = \sum_{p=0}^{\infty}
U^I{}_{J_1 \dots J_{n+p} \, \bar{L}_1 \dots \bar{L}_p} (\Phi, {\bar \Phi})\,
\Gamma^{J_1} \dots \Gamma^{J_{n+p}} \,
{\bar
\Gamma}^{ {\bar L}_1 } \dots {\bar \Gamma}^{ {\bar L}_p }~,
\qquad n\geq 2~.
\end{eqnarray}
It can be shown that the coefficient functions
$U$'s are uniquely determined by equations
(\ref{int}) in perturbation theory.
Upon elimination of the auxiliary superfields,
the action
(\ref{nact}) takes the form
\begin{eqnarray}
S
[\Phi, \bar \Phi, \Gamma, \bar \Gamma]
&=& \int {\rm d}^5 x \,
{\rm d}^4 \theta \,
\Big\{\,
K \big( \Phi, \bar{\Phi} \big) - g_{I \bar{J}} \big( \Phi, \bar{\Phi}
\big) \Gamma^I {\bar \Gamma}^{\bar{J}}
\nonumber\\
&&\qquad +
\sum_{p=2}^{\infty} \cR_{I_1 \cdots I_p {\bar J}_1 \cdots {\bar
J}_p } \big( \Phi, \bar{\Phi} \big) \Gamma^{I_1} \dots \Gamma^{I_p} {\bar
\Gamma}^{ {\bar J}_1 } \dots {\bar \Gamma}^{ {\bar J}_p }~\Big\}~,
\end{eqnarray}
where the tensors $\cR_{I_1 \cdots I_p {\bar J}_1 \cdots {\bar
J}_p }$ are functions of the Riemann curvature $R_{I {\bar
J} K {\bar L}} \big( \Phi, \bar{\Phi} \big) $ and its covariant
derivatives. Each term in the action contains equal powers
of $\Gamma$ and $\bar \Gamma$, since the original model (\ref{nact})
is invariant under rigid $U(1)$ transformations
\begin{equation}
\Upsilon(w) ~~ \mapsto ~~ \Upsilon({\rm e}^{{\rm i} \alpha} w)
\quad \Longleftrightarrow \quad
\Upsilon_n(z) ~~ \mapsto ~~ {\rm e}^{{\rm i} n \alpha} \Upsilon_n(z) ~.
\label{rfiber}
\end{equation}
The complex linear superfields $\Gamma^I$ can be dualised
into chiral superfields\footnote{This is accompanied
by the appearance of a special chiral superpotential \cite{KL}.}
$\Psi_I$ which can be interpreted as
a one-form at the point $\Phi \in \cM$ \cite{GK,KL}.
Upon elimination of $\Gamma$ and $\bar \Gamma$,
the action turns into $S[\Phi, \bar \Phi, \Psi, \bar \Psi]$.
Its target space is an open neighborhood of the zero section of
the cotangent bundle $T^*\cM$ of the K\"ahler manifold $\cM$.
Since supersymmetry requires this target space to be hyper-K\"ahler,
our consideration is in accord with recent mathematical results
\cite{cotangent} about the existence of hyper-K\"ahler
structures on cotangent bundles of K\"ahler manifolds.
\subsection{Models with intrinsic central charge}
We have so far considered only superconformal multiplets
without central charge. As is known, there is no clash
between superconformal symmetry
and the presence of a central charge provided the latter is gauged.
Here we sketch a 5D superspace setting for supersymmetric theories
with gauged central charge, which is a natural generalisation
of the 4D $\cN=2$ formulation \cite{DIKST}.
To start with, one introduces an Abelian vector multiplet, which
is destined to gauge the central charge $\Delta$, by defining
gauge-covariant derivatives
\begin{equation}
\cD_{\hat A} = ( \cD_{\hat a}, \cD_{\hat \alpha}^i )
= D_{\hat A} + \cV_{\hat A} (z)\, \Delta ~, \qquad
[\Delta , \cD_{\hat A} ]=0~.
\end{equation}
Here the gauge connection $ \cV_{\hat A} $
is inert under the central
charge transformations,
$[\Delta \,,\cV_{\hat A} ] =0$.
The gauge-covariant derivatives are required
to obey the algebra
\begin{eqnarray}
\{\cD^i_{\hat \alpha} \, , \cD^j_{\hat \beta} \} &= &-2{\rm i} \,
\ve^{ij}\,
\Big(
\cD_{\hat \alpha \hat \beta}
+ \ve_{\hat \alpha \hat \beta} \, \cW \,\Delta \Big)~,
\qquad \big[ \cD^i_{\hat \alpha} \, , \Delta \big] =0~,
\nonumber \\
\big[
\cD^i_{\hat \gamma}\,, \cD_{\hat \alpha \hat \beta} \big] &=&
{\rm i}\,
\ve_{\hat \alpha \hat \beta} \, \cD^i_{\hat \gamma}
\cW\,\Delta
+2{\rm i}\,\Big( \ve_{\hat \gamma \hat \alpha} \,\cD^i_{\hat \beta}
- \ve_{\hat \gamma \hat \beta} \,\cD^i_{\hat \alpha} \Big)\cW \,\Delta
~,
\label{SYM-algebra}
\end{eqnarray}
where the real field strength $\cW(z)$ obeys the Bianchi identity
(\ref{Bianchi1}). The field strength should possess a non-vanishing
expectation value, $\langle \cW \rangle \neq 0$,
corresponding to the case of rigid central charge.
By applying a harmonic-dependent gauge transformation,
one can choose a frame in which
\begin{equation}
\cD^+_{\hat \alpha} ~\to ~D^+_{\hat \alpha} ~,
\quad D^{++} ~\to ~ D^{++} +\cV^{++}\,\Delta~,
\quad D^{--} ~\to ~ D^{--} +\cV^{--}\,\Delta~,
\end{equation}
with $\cV^{++} $ a real analytic prepotential, see \cite{DIKST}
for more details.
This frame is called the $\lambda$-frame, and the original representation
is known as the $\tau$-frame \cite{GIKOS}.
To generate a supersymmetric action,
it is sufficient to construct a real superfield
$\cL^{(ij)}(z)$ with the properties
\begin{equation}
\cD^{(i}_{\hat \alpha} \cL^{jk)} =0~,
\end{equation}
which for $\cL^{++}(z,u) = \cL^{ij} (z) \, u^+_i u^+_j$ take the form
\begin{equation}
\cD^+_{\hat \alpha} \cL^{++} = 0~,
\qquad D^{++}\cL^{++} =0~.
\end{equation}
In the $\lambda$-frame, the latter properties become
\begin{equation}
D^+_{\hat \alpha} \tilde{\cL}^{++} = 0~,
\qquad (D^{++} + \cV^{++} \,\Delta) \tilde{\cL}^{++} =0~.
\end{equation}
Associated with $ \tilde{\cL}^{++}$ is
the supersymmetric action
\begin{equation}
\int {\rm d} \zeta^{(-4)} \, \cV^{++}\,
\tilde{ \cL}^{++}
\end{equation}
which is invariant under the central charge gauge transformations
$\delta \cV^{++} =- D^{++} \lambda $ and
$\delta \tilde{\cL}^{++} = \lambda \,\Delta \, \tilde{\cL}^{++} $,
with an arbitrary analytic parameter $\lambda$.
Let us give a few examples of off-shell supermultiplets
with intrinsic central charge. The simplest is
the 5D extension of the Fayet-Sohnius hypermultiplet.
It is described by an iso-spinor superfield
${\bm q}_i (z)$
and its conjugate ${\bar {\bm q}}^i (z)$
subject to the constraint
\begin{equation}
\cD^{(i}_{\hat \alpha} \, {\bm q}^{j)} =0~.
\label{FSh}
\end{equation}
This multiplet becomes on-shell if $\Delta = {\rm const}$.
With the notation ${\bm q}^+(z,u) ={\bm q}^{i} (z) \,u^+_i$,
the hypermultiplet dynamics is dictated by the Lagrangian
\begin{equation}
L^{++}_{\rm FS} = \frac12 \,
\breve{{\bm q}}^+
\stackrel{\longleftrightarrow}{ \Delta}
{\bm q}^+
-{\rm i}\, m\, \breve{{\bm q}}^+ {\bm q}^+~,
\label{FS-Lagrangian}
\end{equation}
with $m$ the hypermultiplet mass/charge.
This Lagrangian generates a superconformal theory.
Our second example is an off-shell gauge two-form multiplet
called in \cite{Ohashi} the massless tensor multiplet.
It is Poincar\'e dual to the 5D vector multiplet.
Similarly to the 4D $\cN=2$ vector-tensor multiplet \cite{DIKST},
it is described by a constrained real superfield $L(z) $ coupled
to the central charge vector multiplet.
By analogy with the four-dimensional case \cite{DIKST},
admissible constraints must obey some
nontrivial consistency conditions. In particular,
the harmonic-independence of $L$ (in the $\tau$-frame)
implies
\begin{eqnarray}
0=(\hat{\cD}^+)^2 (\hat{\cD}^+)^2D^{--} L &=& D^{--} (\hat{\cD}^+)^2(\hat{\cD}^+)^2 L
-4 \,\cD^{-\hat \alpha} \cD^+_{\hat \alpha} (\hat{\cD}^+)^2L
+8{\rm i}\, \cD^{\hat \alpha \hat \beta} \cD^+_{\hat \alpha} \cD^+_{\hat \beta} L \nonumber \\
& - &8{\rm i}\, \Delta \,\Big\{ L \,(\hat{\cD}^+)^2 \cW
+\cW \,(\hat{\cD}^+)^2 L +4\,\cD^{+\hat \alpha} \cW \, \cD^+_{\hat \alpha} L\Big\}~.
\label{consistency}
\end{eqnarray}
Let us assume that $L$ obeys the constraint
\begin{equation}
\cD^+_{\hat \alpha} \cD^+_{\hat \beta } L
= \frac{1}{4} \ve_{\hat \alpha \hat \beta} \,
({\hat \cD}^+)^2 L \quad
\Rightarrow \quad
\cD^+_{\hat \alpha} \cD_{\hat \beta }^+ \cD_{\hat \gamma }^+ L
= 0
\label{Bianchi2}
\end{equation}
which in the case $\Delta =0$ coincides with the Bianchi identity
for an Abelian vector multiplet. Then, eq. (\ref{consistency}) gives
\begin{equation}
\Delta \, \Big\{ L \,(\hat{\cD}^+)^2 \cW
+\cW \,(\hat{\cD}^+)^2 L +4\,\cD^{+\hat \alpha} \cW \, \cD^+_{\hat \alpha} L\Big\}
=0~.
\label{consistency2}
\end{equation}
The consistency condition is satisfied if $L$ is constrained as
\begin{equation}
(\hat{\cD}^+)^2 L =- \frac{1}{\cW}\,\Big\{
L \,(\hat{\cD}^+)^2 \cW
+4\,\cD^{+\hat \alpha} \cW \, \cD^+_{\hat \alpha} L\Big\}~.
\label{two-form-constraint1}
\end{equation}
The corresponding Lagrangian is
\begin{equation}
\cL^{++} = -\frac{\rm i}{4} \Big( \cD^{+ \hat \alpha} L\,\cD^+_{\hat \alpha} L
+\frac12 \,L\,(\hat{\cD}^+)^2L\Big)~.
\label{two-form-lagrang}
\end{equation}
The theory generated by this Lagrangian is superconformal.
Another solution to (\ref{consistency2}) describes a Chern-Simons
coupling of the two-form multiplet to an external Yang-Mills
supermultiplet:
\begin{eqnarray}
(\hat{\cD}^+)^2 L &=&- \frac{1}{\cW}\,\Big\{
L \,(\hat{\cD}^+)^2 \cW
+4\,\cD^{+\hat \alpha} \cW \, \cD^+_{\hat \alpha} L\Big\}
+ \frac{\rho}{\cW}\,{\mathbb G}^{++}~,
\label{two-form-constraint2}
\end{eqnarray}
where
\begin{eqnarray}
-{\rm i} \, {\mathbb G}^{++} &=& {\rm tr}\,
\Big( \cD^{+ \hat \alpha} {\mathbb W} \, \cD^+_{\hat \alpha} {\mathbb W}
+ \frac{1 }{ 4} \{ {\mathbb W} \,,
({\hat \cD}^+)^2 {\mathbb W} \} \Big)~.
\end{eqnarray}
Here $\rho$ is a coupling constant, and $\mathbb W$
is the gauge-covariant field strength of the Yang-Mills
supermultiplet, see \cite{KL} for more details.
As the corresponding supersymmetric Lagrangian
one can again choose $\cL^{++}$
given by eq. (\ref{two-form-lagrang}).
A plain dimensional reduction $5{\rm D} \to 4{\rm D}$
can be shown to reduce the constraints
(\ref{Bianchi2}) and (\ref{two-form-constraint2})
to those describing the so-called
linear vector-tensor multiplet\footnote{Ref. \cite{DIKST}
contains an extensive list of publications on
the linear and nonlinear vector-tensor multiplets
and their couplings.}
with Chern-Simons couplings
\cite{DIKST}.
\vskip.5cm
When this paper was ready for submission to the hep-th archive,
there appeared an interesting work \cite{BX}
in which 4D and 5D supersymmetric nonlinear
sigma models with eight supercharges were formulated in
$\cN=1$ superspace.
\noindent
{\bf Acknowledgements:}\\
It is a pleasure to thank Ian McArthur for reading the manuscript.
The author is grateful to the Max Planck Institute for Gravitational Physics
(Albert Einstein Institute) in Golm
and the Institute for Theoretical Physics at the
University of Heidelberg
for hospitality during the course of the work.
This work is supported
by the Australian Research Council and by a UWA research grant.
\begin{appendix}
\sect{Non-standard realisation for
$\bm{ S^2}$
}
Let us consider a quantum-mechanical spin-$1/2$ Hilbert space,
i.e. the complex space ${\mathbb C}^2$ endowed with
the standard positive definite scalar
product $\langle ~|~\rangle$ defined by
\begin{eqnarray}
\langle u|v\rangle = u^\dagger \,v ={\bar u}^i \,v_i~, \qquad
|u \rangle = (u_i) =\left(
\begin{array}{c}
u_1 \\
u_2
\end{array}
\right)~,
\qquad \langle u | = ({\bar u}^i) ~,
\quad {\bar u}^i =\overline{u_i}~.
\end{eqnarray}
The two-sphere $S^2$ can be identified with the space
of rays in ${\mathbb C}^2$. A ray is represented by a
normalized state,
\begin{equation}
|u^- \rangle = (u^-_i) ~,
\qquad
\langle u^- | u^- \rangle=1~,
\qquad \langle u^- | = (u^{+i}) ~,
\quad u^{+i} =\overline{u^-_i}~,
\end{equation}
defined modulo the
equivalence relation
\begin{equation}
u^-_i ~ \sim ~ {\rm e}^{ -{\rm i} \varphi } \, u^-_i~,
\qquad | {\rm e}^{-{\rm i} \varphi } |=1~.
\label{equivalence}
\end{equation}
Associated with $|u^- \rangle $ is
another normalized state $|u^+ \rangle $,
\begin{eqnarray}
|u^+ \rangle = (u^+_i) ~,
\qquad
u^+_i = \ve_{ij}\,u^{+j}~, \qquad
\langle u^+ | u^+ \rangle=1~,
\end{eqnarray}
which is orthogonal to $|u^- \rangle $,
\begin{equation}
\langle u^+ | u^- \rangle=0~.
\end{equation}
The states $|u^- \rangle $ and $|u^+ \rangle $
generate the unimodular unitary matrix
\begin{eqnarray}
{\bm u}=\Big( |u^- \rangle \, ,\, |u^+ \rangle \Big)
=({u_i}^-\,,\,{u_i}^+) \in SU(2)~.
\end{eqnarray}
In terms of this matrix,
the equivalence relation (\ref{equivalence}) becomes
\begin{eqnarray}
{\bm u} ~\sim ~ {\bm u}\,
\left(
\begin{array}{cc}
{\rm e}^{ -{\rm i} \varphi } & 0\\
0& {\rm e}^{ {\rm i} \varphi }
\end{array}
\right)~.
\end{eqnarray}
This gives
the well-known realisation
$S^2 = SU(2) /U(1)$.
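As an illustration, in one convenient convention (chosen here purely as an
example) each ray admits the representative
\begin{equation}
u^-_i = \Big( \cos\frac{\theta}{2} \, {\rm e}^{{\rm i} \varphi/2}\,,~
\sin\frac{\theta}{2} \, {\rm e}^{-{\rm i} \varphi/2} \Big)~,
\qquad 0\leq \theta \leq \pi~, \quad 0\leq \varphi < 2\pi~,
\end{equation}
which obeys $\langle u^- | u^- \rangle =1$. The residual phase freedom
(\ref{equivalence}) is precisely the $U(1)$ fibre, and $(\theta , \varphi)$
become, up to conventions, the standard spherical coordinates on $S^2$.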
The above unitary realisation for $S^2$ is ideal if
one is interested in the action of $SU(2)$, or its subgroups,
on the two-sphere.
But it is hardly convenient if one considers,
for instance, the action of $SL(2,{\mathbb C})$ on $S^2$.
There exists, however, a universal realisation.
Instead of dealing with the orthonormal basis
$( |u^- \rangle , |u^+ \rangle )$ introduced above,
one can work with an arbitrary basis for ${\mathbb C }^2$:
\begin{eqnarray}
{\bm v}=\Big( |v^- \rangle \, ,\, |v^+ \rangle \Big)
=({v_i}^-\,,\,{v_i}^+) \in GL(2,{\mathbb C})~,\qquad
\det {\bm v}=v^{+i}\,v^-_i~.
\end{eqnarray}
The two-sphere is then obtained by factorisation with respect to
the equivalence relation
\begin{eqnarray}
{\bm v} ~\sim ~ {\bm v}\,R~,
\qquad
R= \left(
\begin{array}{cc}
a & 0\\
b & c
\end{array}
\right) \in GL(2,{\mathbb C})~.
\label{equivalence2}
\end{eqnarray}
Given an arbitrary matrix ${\bm v} \in GL(2,{\mathbb C})$,
there always exists a lower triangular matrix $R$ such that
${\bm v} R \in SU(2)$,
and then we are back to the unitary realisation.
One can also consider an intermediate realisation for $S^2$
given in terms of unimodular matrices of the form
\begin{eqnarray}
{\bm w}=\Big( |w^- \rangle \, ,\, |w^+ \rangle \Big)
=({w_i}^-\,,\,{w_i}^+) \in SL(2,{\mathbb C}) \quad
\longleftrightarrow \quad w^{+i} w^-_i =1~,
\end{eqnarray}
and the matrix $R$ in (\ref{equivalence2}) should be
restricted to be unimodular.
The harmonics $w^\pm$ are complex in the sense that
$w^-_i$ and $w^{+i}$ are not related by complex conjugation.
Let us consider a left group transformation acting on $S^2$
\begin{eqnarray}
{\bm w} ~\to ~g\, {\bm w}= ({v_i}^-\,,\,{v_i}^+) \equiv {\bm v}~.
\end{eqnarray}
If $g$ is a ``small'' transformation, i.e. if it belongs
to a sufficiently small neighbourhood of the identity,
then there exists a matrix $R$ of the type (\ref{equivalence2})
such that
\begin{equation}
g\, {\bm w} \,R = ({w_i}^-\,,\,{\hat{w}_i}^+) \in SL(2,{\mathbb C}) ~.
\end{equation}
Since
$$
w^{+i} w^-_i =1~,\qquad \hat{w}^{+i} {w}^-_i =1~,
$$
for the transformed harmonic we thus obtain
\begin{equation}
\hat{w}^+_i = w^+_i + \rho^{++}(w) \,w^-_i ~.
\end{equation}
Indeed, decomposing $\hat{w}^+_i$ in the basis $(w^+_i , w^-_i)$ and using
$w^{-i} w^-_i =0$, the two normalisation conditions above fix the coefficient
of $w^+_i$ to be unity, while the coefficient of $w^-_i$, denoted
$\rho^{++}(w)$, remains unconstrained.
All information about the group transformation $g$
is now encoded in $ \rho^{++} $.
\sect{Projective superspace action}
${}$Following \cite{Siegel}, consider
\begin{eqnarray}
I= \frac{1}{2\pi {\rm i}} \oint
\frac{ u^+_i\,{\rm d} u^{+i}}{(u^+ u^-)^4} \,
\int {\rm d}^5 x \,
(\hat{D}^-)^4 \,\cL^{++} (z,u^+)~,
\label{projac3}
\end{eqnarray}
where the Lagrangian enjoys the properties
\begin{equation}
D^+_{\hat \alpha} \cL^{++} (z,u^+)=0~,
\qquad
\cL^{++} (z,c\,u^+) = c^2\, \cL^{++} (z,u^+)~,
\qquad c \in{\mathbb C}^*~.
\end{equation}
The functional (\ref{projac3})
is invariant under arbitrary projective transformations
(\ref{equivalence22}).
Choosing the projective gauge (\ref{projectivegaugeN})
gives the supersymmetric action (\ref{projac1}).
It is worth pointing out that
the vector multiplet field strength (\ref{strength3})
can be rewritten in the projective-invariant form
\begin{equation}
W(z) =- \frac{1}{ 16\pi {\rm i}} \oint
\frac{ u^+_i\,{\rm d} u^{+i}}{(u^+ u^-)^2} \,
(\hat{D}^-)^2 \, V(z,u^+)~,
\label{strengt4}
\end{equation}
where the gauge potential enjoys the properties
\begin{equation}
D^+_{\hat \alpha} V (z,u^+)=0~,
\qquad
V (z,c\,u^+) = V (z,u^+)~,\qquad c \in{\mathbb C}^*~.
\end{equation}
\sect{From 5D projective supermultiplets to
4D
$\bm{ \cN=1, \,2}$
superfields}
The conventional 5D simple superspace ${\mathbb R}^{5|8}$
is parametrized
by coordinates $ z^{\hat A} = (x^{\hat a}, \theta^{\hat \alpha}_i )$,
with $i = \underline{1} , \underline{2}$.
Any hypersurface $x^5 ={\rm const}$ in ${\mathbb R}^{5|8}$
can be identified with the 4D, $\cN=2$ superspace
${\mathbb R}^{4|8}$ parametrized by
$
z^{A} = (x^a, \theta^\alpha_i , {\bar \theta}_{\dt \alpha}^i)$,
where $(\theta^\alpha_i )^* = {\bar \theta}^{\dt \alpha i}$.
The Grassmann coordinates of ${\mathbb R}^{5|8}$ and
${\mathbb R}^{4|8}$
are related to each other as follows:
\begin{eqnarray}
\theta^{\hat \alpha}_i = ( \theta^\alpha_i , - {\bar \theta}_{\dt \alpha i})~,
\qquad
\theta_{\hat \alpha}^i =
\left(
\begin{array}{c}
\theta_\alpha^i \\
{\bar \theta}^{\dt \alpha i}
\end{array}
\right)~.
\end{eqnarray}
Interpreting $x^5$ as a central charge variable,
one can view ${\mathbb R}^{5|8}$ as a 4D, $\cN=2$
central charge superspace.
One can relate the 5D spinor covariant derivatives
(see \cite{KL} for more details)
\begin{eqnarray}
D^i_{\hat \alpha}
= \left(
\begin{array}{c}
D_\alpha^i \\
{\bar D}^{\dt \alpha i}
\end{array}
\right)
= \frac{\pa}{\pa \theta^{\hat \alpha}_i}
- {\rm i} \, (\Gamma^{\hat b} ){}_{\hat \alpha \hat \beta} \, \theta^{\hat \beta i}
\, \pa_{\hat b}
~,
\qquad
D^{\hat \alpha}_i =
(D^\alpha_i \,, \, -{\bar D}_{\dt \alpha i})
\label{con}
\end{eqnarray}
to the 4D, $\cN=2$ covariant derivatives
$D_A = (\pa_a , D^i_\alpha , {\bar D}^{\dt \alpha}_i )$
where
\begin{eqnarray}
D^i_\alpha &=& \frac{\pa}{\pa \theta^{\alpha}_i}
+ {\rm i} \,(\sigma^b )_{\alpha \bd} \, {\bar \theta}^{\dt \beta i}\, \pa_b
+
\theta^i_\alpha \, \pa_5 ~,
\quad
{\bar D}_{\dt \alpha i} =
- \frac{\pa}{\pa {\bar \theta}^{\dt \alpha i}}
- {\rm i} \,
\theta^\beta _i (\sigma^b )_{\beta \dt \alpha} \,\pa_b
-{\bar \theta}_{\dt \alpha i} \,
\pa_5 ~.
\label{4D-N2covder1}
\end{eqnarray}
These operators obey the anti-commutation relations
\begin{eqnarray}
\{D^i_{\alpha} \, , \, D^j_{ \beta} \} = 2 \,
\ve^{ij}\, \ve_{\alpha \beta} \,
\pa_5 ~,
\quad && \quad
\{{\bar D}_{\dt \alpha i} \, , \, {\bar D}_{\dt \beta j} \} = 2 \,
\ve_{ij}\, \ve_{\dt \alpha \dt \beta} \,
\pa_5 ~,
\nonumber \\
\{D^i_{\alpha} \, , \, \bar D_{ \dt \beta j} \} &=& -2{\rm i} \, \delta^i_j\,
(\sigma^c )_{\alpha \dt \beta} \,\pa_c ~,
\label{4D-N2covder2}
\end{eqnarray}
which correspond to the 4D, $\cN=2$ supersymmetry algebra with
the central charge $\pa_5$.
Consider a 5D projective superfield (\ref{holom0}).
Representing the differential operators
$\nabla_{\hat \alpha} (w)$, eq. (\ref{nabla0}), as
\begin{eqnarray}
\nabla_{\hat \alpha} (w)
= \left(
\begin{array}{c}
\nabla_\alpha (w) \\
{\bar \nabla}^{\dt \alpha} (w)
\end{array}
\right)~,
\quad \nabla_\alpha (w) \equiv w D^{\underline{1}}_\alpha - D^{\underline{2}}_\alpha ~,
\quad
{\bar \nabla}^{\dt \alpha} (w) \equiv {\bar D}^{\dt \alpha}_{ \underline{1}} +
w {\bar D}^{\dt \alpha}_{ \underline{2}}~,
\label{nabla}
\end{eqnarray}
the constraints (\ref{holom3})
can be rewritten in the component form
\begin{equation}
D^{\underline{2}}_\alpha \phi_n = D^{\underline{1}}_\alpha \phi_{n-1} ~,\qquad
{\bar D}^{\dt \alpha}_{\underline{2}} \phi_n = - {\bar D}^{\dt \alpha}_{ \underline{1}}
\phi_{n+1}~.
\label{pc2}
\end{equation}
The relations (\ref{pc2}) imply that the dependence
of the component superfields
$\phi_n$ on $\theta^\alpha_{\underline{2}}$ and ${\bar \theta}^{\underline{2}}_{\dt \alpha}$
is uniquely determined in terms
of their dependence on $\theta^\alpha_{\underline{1}}$
and ${\bar \theta}^{\underline{1}}_{\dt \alpha}$. In other words,
the projective superfields depend effectively
on half the Grassmann variables, which can be chosen
to be the spinor coordinates of 4D $\cN=1$ superspace
\begin{equation}
\theta^\alpha = \theta^\alpha_{\underline{1}} ~, \qquad {\bar \theta}_{\dt \alpha}=
{\bar \theta}_{\dt \alpha}^{\underline{1}}~.
\label{theta1}
\end{equation}
Then, one deals with reduced superfields
$\phi | $, $ D^{\underline{2}}_\alpha \phi|$, $ {\bar D}_{\underline{2}}^{\dt \alpha} \phi|, \dots$
(not all of which are independent in general)
and 4D $\cN=1$ spinor covariant derivatives $D_\alpha$
and ${\bar D}^{\dt \alpha}$ defined in the obvious way:
\begin{equation}
\phi| = \phi (x, \theta^\alpha_i, {\bar \theta}^i_{\dt \alpha})
\Big|_{ \theta_{\underline{2}} = {\bar \theta}^{\underline{2}}=0 }~,
\qquad D_\alpha = D^{\underline{1}}_\alpha \Big|_{\theta_{\underline{2}} ={\bar \theta}^{\underline{2}}=0} ~,
\qquad
{\bar D}^{\dt \alpha} = {\bar D}_{\underline{1}}^{\dt \alpha}
\Big|_{\theta_{\underline{2}} ={\bar \theta}^{\underline{2}}=0}~.
\label{N=1proj}
\end{equation}
\end{appendix}
\section{Introduction}\label{sec:intro}
In the past few years, subspace clustering~\cite{vidal2010tutorial,soltanolkotabi2012geometric} has been extensively studied and has established solid applications, for example, in computer vision~\cite{elhamifar2009sparse} and network topology inference~\cite{eriksson2011high}. Among the many subspace clustering algorithms which aim to obtain a structured representation that fits the underlying data, two prominent examples are Sparse Subspace Clustering~(SSC)~\cite{elhamifar2009sparse,soltanolkotabi2014robust} and Low-Rank Representation~(LRR)~\cite{liu2013robust}. Both of them utilize the idea of self-expressiveness, i.e., expressing each sample as a linear combination of the remaining samples. The difference is that SSC pursues a sparse solution while LRR prefers a low-rank structure.
In this paper, we are interested in the LRR method, which is shown to achieve state-of-the-art performance on a broad range of real-world problems~\cite{liu2013robust}. Recently, \cite{liu2014recovery} demonstrated that, when equipped with a proper dictionary, LRR can even handle the coherent data~--~a challenging issue in the literature~\cite{candes2009exact,candes2011robust} but commonly emerges in realistic datasets such as the Netflix.
Formally, the LRR problem we investigate here is formulated as follows~\cite{liu2013robust}:
\begin{align}
\label{eq:lrr}
\min_{\boldsymbol{X}, \boldsymbol{E}}\ \fractwo{\lambda_1} \fronorm{\boldsymbol{Z}-\boldsymbol{Y}\boldsymbol{X}-\boldsymbol{E}}^2 + \nuclearnorm{\boldsymbol{X}} + \lambda_2 \onenorm{\boldsymbol{E}}.
\end{align}
Here, $\boldsymbol{Z} = (\boldsymbol{z}_1, \boldsymbol{z}_2, \cdots, \boldsymbol{z}_n) \in \mathbb{R}^{p\times n}$ is the observation matrix with $n$ samples lying in the $p$-dimensional ambient space. The matrix $\boldsymbol{Y} \in \mathbb{R}^{p\times n}$ is a given dictionary, $\boldsymbol{E}$ is some possible sparse corruption and $\lambda_1$ and $\lambda_2$ are two tunable parameters. Typically, $\boldsymbol{Y}$ is chosen as the dataset $\boldsymbol{Z}$ itself. The program seeks a low-rank representation $\boldsymbol{X} \in \mathbb{R}^{n\times n}$ among all samples, each of which can be approximated by a linear combination of the atoms in the dictionary $\boldsymbol{Y}$.
While LRR is mathematically elegant, three issues immediately arise in the face of big data:
\begin{issue}[Memory cost of $\boldsymbol{X}$]\label{is:memX}
In the LRR formulation~\eqref{eq:lrr}, there is typically no sparsity assumption on $\boldsymbol{X}$. Hence, the memory footprint of $\boldsymbol{X}$ is proportional to $n^2$ which precludes most of the recently developed nuclear norm solvers~\cite{lin2010augmented,jaggi2010simple,avron2012efficient,hsieh2014nuclear}.
\end{issue}
\begin{issue}[Computational cost of $\nuclearnorm{\boldsymbol{X}}$]\label{is:compX}
Since the size of the nuclear norm regularized matrix $\boldsymbol{X}$ is $n \times n$, optimizing such problems can be computationally expensive even when $n$ is not too large~\cite{recht2010guaranteed}.
\end{issue}
\begin{issue}[Memory cost of $\boldsymbol{Y}$]\label{is:memY}
Since the dictionary size is $p\times n$, it is prohibitive to store the entire dictionary $\boldsymbol{Y}$ during optimization when manipulating a huge volume of data.
\end{issue}
To remedy these issues, especially the memory bottleneck, one potential way is to solve the problem in an online manner. That is, we sequentially reveal the samples $\boldsymbol{z}_1, \boldsymbol{z}_2, \cdots, \boldsymbol{z}_n$ and update the components in $\boldsymbol{X}$ and $\boldsymbol{E}$. Nevertheless, such a strategy appears difficult to execute due to the residual term in~\eqref{eq:lrr}. To be more precise, we note that each column of $\boldsymbol{X}$ contains the coefficients of a sample with respect to the {\em entire} dictionary $\boldsymbol{Y}$, e.g., $\boldsymbol{z}_1 \approx \boldsymbol{Y} \boldsymbol{x}_1 + \boldsymbol{e}_1$. This indicates that, without further techniques, we have to load the entire dictionary $\boldsymbol{Y}$ so as to update the columns of $\boldsymbol{X}$. Hence, for our purpose, we need to tackle a more serious challenge:
\begin{issue}[Partial realization of $\boldsymbol{Y}$]\label{is:partialY}
We are required to guarantee the optimality of the solution but can only access part of the atoms of $\boldsymbol{Y}$ in each iteration.
\end{issue}
\subsection{Related Works}
There is a vast body of work attempting to mitigate the memory and computational bottleneck of the nuclear norm regularizer. However, to the best of our knowledge, none of them can handle Issue~\ref{is:memY} and Issue~\ref{is:partialY} in the LRR problem.
One of the most popular ways to alleviate the huge memory cost is online implementation. \cite{feng2013online} devised an online algorithm for the Robust Principal Component Analysis~(RPCA) problem, which makes the memory cost independent of the sample size. Yet, compared to RPCA where the size of the nuclear norm regularized matrix is $p \times n$, that of LRR is $n\times n$~--~a worse and more challenging case. Moreover, their algorithm cannot address the partial dictionary issue that emerges in our case. It is also worth mentioning that~\cite{qiu2014recursive} established another online variant of RPCA. But since we are dealing with a different problem setting, i.e., the multiple subspaces regime, it is not clear how to extend their method to LRR.
To tackle the computational overhead, \cite{cai2010singular} considered singular value thresholding technique. However, it is not scalable to large problems since it calls singular value decomposition~(SVD) in each iteration. \cite{jaggi2010simple} utilized a sparse semi-definite programming solver to derive a simple yet efficient algorithm. Unfortunately, the memory requirement of their algorithm is proportional to the number of observed entries, making it impractical when the regularized matrix is large and dense (which is the case of LRR). \cite{avron2012efficient} combined stochastic subgradient and incremental SVD to boost efficiency. But for the LRR problem, the type of the loss function does not meet the requirements and thus, it is still not practical to use that algorithm in our case.
Another line in the literature explores a structured formulation of LRR beyond the low-rankness. For example, \cite{wang2013provable} provably showed that combining LRR with SSC can take advantages of both methods. \cite{liu2014recovery} demonstrated that LRR is able to cope with the intrinsic group structure of the data. Very recently, \cite{shen2016learning} argued that the vanilla LRR program does not fully characterize the nature of multiple subspaces, and presented several effective alternatives to LRR.
\subsection{Summary of Contributions}
In this paper, we propose a new algorithm called Online Low-Rank Subspace Clustering (OLRSC), which admits a low computational complexity. In contrast to existing solvers, OLRSC reduces the memory cost of LRR from $\mathcal{O}(n^2)$ to $\mathcal{O}(pd)$~($d < p \ll n$). This nice property makes OLRSC an appealing solution for large-scale subspace clustering problems. Furthermore, we prove that the sequence of solutions produced by OLRSC converges to a stationary point of the expected loss function asymptotically even though only one atom of $\boldsymbol{Y}$ is available at each iteration. In a nutshell, OLRSC resolves {\em all} practical issues of LRR and still promotes global low-rank structure~--~the merit of LRR.
\subsection{Roadmap}
The paper is organized as follows. In Section~\ref{sec:setup}, we reformulate the LRR program~\eqref{eq:lrr} in a way which is amenable for online optimization. Section~\ref{sec:alg} presents the algorithm that incrementally minimizes a surrogate function to the empirical loss. Along with that, we establish a theoretical guarantee in Section~\ref{sec:theory}. The experimental study in Section~\ref{sec:exp} confirms the efficacy and efficiency of our proposed algorithm. Finally, we conclude the work in Section~\ref{sec:conclusion} and the lengthy proof is deferred to the appendix.
\vspace{0.1in}
\noindent{\bf Notation.} \
We use bold lowercase letters, {e.g.} $\boldsymbol{v}$, to denote a column vector. The $\ell_2$ norm and $\ell_1$ norm of a vector $\boldsymbol{v}$ are denoted by $\twonorm{\boldsymbol{v}}$ and $\onenorm{\boldsymbol{v}}$ respectively. Bold capital letters such as $\boldsymbol{M}$ are used to denote a matrix, and its transpose is denoted by $\boldsymbol{M}^\top$. For an invertible matrix $\boldsymbol{M}$, we write its inverse as $\boldsymbol{M}^{-1}$. The capital letter $\boldsymbol{I}_r$ is reserved for identity matrix where $r$ indicates the size. The $j$th column of a matrix $\boldsymbol{M}$ is denoted by $\boldsymbol{m}_j$ if not specified. Three matrix norms will be used: $\nuclearnorm{\boldsymbol{M}}$ for the nuclear norm, i.e., the sum of the singular values, $\fronorm{\boldsymbol{M}}$ for the Frobenius norm and {$\onenorm{\boldsymbol{M}}$ for the $\ell_1$ norm of a matrix seen as a long vector.} The trace of a square matrix $\boldsymbol{M}$ is denoted by $\tr(\boldsymbol{M})$.
For an integer $n > 0$, we use $[n]$ to denote the integer set $\{1, 2, \cdots, n\}$.
\section{Problem Formulation}\label{sec:setup}
Our goal is to efficiently learn the representation matrix $\boldsymbol{X}$ and the corruption matrix $\boldsymbol{E}$ in an online manner so as to mitigate the issues mentioned in Section~\ref{sec:intro}. The first technique for our purpose is a {\em non-convex reformulation} of the nuclear norm. Assume that the rank of the global optimum $\boldsymbol{X}$ in~\eqref{eq:lrr} is at most $d$. Then a standard result in the literature (see, e.g.,~\cite{fazel2001rank}) shows that
\begin{equation}
\label{eq:nuclear reform}
\nuclearnorm{\boldsymbol{X}} = \min_{\boldsymbol{U}, \boldsymbol{V}, \boldsymbol{X}=\boldsymbol{U} \boldsymbol{V}^\top} \fractwo{1} \( \fronorm{\boldsymbol{U}}^2 + \fronorm{\boldsymbol{V}}^2 \),
\end{equation}
where $\boldsymbol{U} \in \mathbb{R}^{n\times d}$ and $\boldsymbol{V} \in \mathbb{R}^{n\times d}$. The minimum can be attained at, for example, $\boldsymbol{U} = \boldsymbol{U}_0 \boldsymbol{S}_0^{\frac{1}{2}}$ and $\boldsymbol{V} = \boldsymbol{V}_0 \boldsymbol{S}_0^{\frac{1}{2}}$ where $\boldsymbol{X} = \boldsymbol{U}_0 \boldsymbol{S}_0 \boldsymbol{V}_0^\top$ is the singular value decomposition.
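As a quick sanity check of~\eqref{eq:nuclear reform}, the following minimal numpy snippet (purely illustrative; the variable names are ours and it is not part of our algorithm) verifies that the SVD-based factorization attains the nuclear norm:
\begin{verbatim}
import numpy as np

# Verify ||X||_* = (||U||_F^2 + ||V||_F^2)/2 at U = U0*sqrt(S0),
# V = V0*sqrt(S0), where X = U0 S0 V0^T is the SVD of X.
rng = np.random.default_rng(0)
n, d = 50, 5
X = rng.standard_normal((n, d)) @ rng.standard_normal((d, n))

U0, s0, V0t = np.linalg.svd(X, full_matrices=False)
U = U0 * np.sqrt(s0)        # columns scaled by sqrt of singular values
V = V0t.T * np.sqrt(s0)

assert np.allclose(X, U @ V.T)
assert np.isclose(s0.sum(),
                  0.5 * (np.linalg.norm(U, 'fro')**2
                         + np.linalg.norm(V, 'fro')**2))
\end{verbatim}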
In this way, \eqref{eq:lrr} can be written as follows:
\begin{align}
\min_{\boldsymbol{U}, \boldsymbol{V}, \boldsymbol{E}}\ \fractwo{\lambda_1}\fronorm{\boldsymbol{Z}-\boldsymbol{Y}\boldsymbol{U}\boldsymbol{V}^\top-\boldsymbol{E}}^2 + \fractwo{1} \fronorm{\boldsymbol{U}}^2 + \fractwo{1}\fronorm{\boldsymbol{V}}^2 + \lambda_2 \onenorm{\boldsymbol{E}}.
\end{align}
Note that by this reformulation, updating the entries in $\boldsymbol{X}$ amounts to sequentially updating the rows of $\boldsymbol{U}$ and $\boldsymbol{V}$. Also note that this technique is utilized in~\cite{feng2013online} for online RPCA. Unfortunately, the sizes of $\boldsymbol{U}$ and $\boldsymbol{V}$ in our problem are both proportional to $n$ and the dictionary $\boldsymbol{Y}$ is only partially observed in each iteration, making the algorithm in~\cite{feng2013online} not applicable to LRR. Another challenge related to the online implementation is that all the rows of $\boldsymbol{U}$ are coupled together, as $\boldsymbol{U}$ is left multiplied by $\boldsymbol{Y}$ in the first term. This makes it difficult to sequentially update the rows of $\boldsymbol{U}$.
For the sake of decoupling the rows of $\boldsymbol{U}$, as the crux of our technique, we introduce an auxiliary variable $\boldsymbol{D} = \boldsymbol{Y}\boldsymbol{U}$, whose size is $p \times d$ (i.e., independent of the sample size $n$). Interestingly, in this way, we are approximating the term $\boldsymbol{Z} - \boldsymbol{E}$ with $\boldsymbol{D}\boldsymbol{V}^\top$, which provides an intuition on the role of $\boldsymbol{D}$: namely, $\boldsymbol{D}$ can be seen as a {\em basis dictionary} of the clean data, with $\boldsymbol{V}$ being the coefficients.
These key observations allow us to derive an equivalent reformulation of LRR~\eqref{eq:lrr}:
\begin{align}
\min_{\boldsymbol{D}, \boldsymbol{U}, \boldsymbol{V}, \boldsymbol{E}}\ \fractwo{\lambda_1}\fronorm{\boldsymbol{Z}-\boldsymbol{Y}\boldsymbol{U}\boldsymbol{V}^\top-\boldsymbol{E}}^2 + \fractwo{1} \( \fronorm{\boldsymbol{U}}^2 + \fronorm{\boldsymbol{V}}^2 \) + \lambda_2 \onenorm{\boldsymbol{E}},\quad \mathrm{s.t.}\ \ \boldsymbol{D} = \boldsymbol{Y}\boldsymbol{U}.
\end{align}
By penalizing the constraint in the objective, we obtain a {\em regularized} version of LRR on which our algorithm is based
\begin{align}
\label{eq:reg batch}
\min_{\boldsymbol{D}, \boldsymbol{U}, \boldsymbol{V}, \boldsymbol{E}}\ \fractwo{\lambda_1} \fronorm{ \boldsymbol{Z} - \boldsymbol{D}\boldsymbol{V}^\top -\boldsymbol{E} }^2 + \fractwo{1}\( \fronorm{\boldsymbol{U}}^2 + \fronorm{\boldsymbol{V}}^2 \) + \lambda_2 \onenorm{\boldsymbol{E}} + \fractwo{\lambda_3} \fronorm{\boldsymbol{D} - \boldsymbol{Y}\boldsymbol{U}}^2.
\end{align}
\begin{remark}[Superiority to LRR]
There are two advantages of \eqref{eq:reg batch} compared to~\eqref{eq:lrr}. First, it is amenable to online optimization. Second, it is more informative since it explicitly models the basis of the union of subspaces, hence yielding better subspace recovery and clustering (see Section~\ref{sec:exp}). This actually meets the core idea of~\cite{liu2014recovery}, but they {\em assumed} that $\boldsymbol{Y}$ contains the true subspaces whereas we {\em learn} the true subspaces.
\end{remark}
\begin{remark}[Parameter]
Note that $\lambda_3$ may be gradually increased until some maximum value is attained so as to enforce the equality constraint. In this way, \eqref{eq:reg batch} attains the same minimum as \eqref{eq:lrr}. Actually, the choice of $\lambda_3$ depends on how much information $\boldsymbol{Y}$ carries about the subspace basis. As mentioned above, $\boldsymbol{D}$ is the basis dictionary of the clean data and is in turn approximated by (or equal to) $\boldsymbol{Y}\boldsymbol{U}$. This suggests that the range of $\boldsymbol{D}$ is a subset of that of $\boldsymbol{Y}$. For the typical choice $\boldsymbol{Y} = \boldsymbol{Z}$, if $\boldsymbol{Z}$ is only slightly corrupted, we would like to pick a large value for $\lambda_3$.
\end{remark}
\begin{remark}[Connection to RPCA]
Due to our explicit modeling of the basis, we can unify LRR and RPCA as follows: for LRR, $\boldsymbol{D} \approx \boldsymbol{Y}\boldsymbol{U}$ (or $\boldsymbol{D} = \boldsymbol{Y}\boldsymbol{U}$ if $\lambda_3$ tends to infinity), while for RPCA, $\boldsymbol{D} = \boldsymbol{U}$. That is, ORPCA~\cite{feng2013online} considers the problem with $\boldsymbol{Y} = \boldsymbol{I}_p$, whose size is independent of $n$ and which can hence be kept in memory; this naturally resolves Issues~\ref{is:memY} and~\ref{is:partialY}. This is why RPCA can easily be implemented in an online fashion while LRR cannot.
\end{remark}
\begin{remark}[Connection to Dictionary Learning]
Generally speaking, LRR~\eqref{eq:lrr} can be seen as a coding algorithm, with the dictionary $\boldsymbol{Y}$ known in advance and $\boldsymbol{X}$ a desired structured code, while other popular algorithms such as dictionary learning~(DL)~\cite{mairal2010online} simultaneously optimize the dictionary and the sparse code. Interestingly, in view of~\eqref{eq:reg batch}, the link between LRR and DL becomes clearer in the sense that the difference lies in the way the dictionary is constrained. That is, for LRR we have $\boldsymbol{D} \approx \boldsymbol{Y}\boldsymbol{U}$ with $\boldsymbol{U}$ further regularized by the Frobenius norm, whereas for DL we have $\twonorm{\boldsymbol{d}_i} \leq 1$ for each column of $\boldsymbol{D}$.
\end{remark}
Let $\boldsymbol{z}_i$, $\boldsymbol{y}_i$, $\boldsymbol{e}_i$, $\boldsymbol{u}_i$, and $\boldsymbol{v}_i$ be the $i$th column of matrices $\boldsymbol{Z}$, $\boldsymbol{Y}$, $\boldsymbol{E}$, $\boldsymbol{U}^\top$ and $\boldsymbol{V}^\top$ respectively and define
\begin{align}
\label{eq:tl}
\tilde{\ell}(\boldsymbol{z}, \boldsymbol{D}, \boldsymbol{v}, \boldsymbol{e}) &\stackrel{\text{def}}{=} \fractwo{\lambda_1} \twonorm{ \boldsymbol{z} - \boldsymbol{D} \boldsymbol{v} - \boldsymbol{e} }^2 + \fractwo{1}\twonorm{\boldsymbol{v}}^2 + \lambda_2 \onenorm{\boldsymbol{e}},\\
\label{eq:ell}
\ell(\boldsymbol{z}, \boldsymbol{D}) &= \min_{\boldsymbol{v}, \boldsymbol{e}} \tilde{\ell}(\boldsymbol{z}, \boldsymbol{D}, \boldsymbol{v}, \boldsymbol{e}).
\end{align}
In addition, let
\begin{align}
\label{eq:th}
\th(\boldsymbol{Y}, \boldsymbol{D}, \boldsymbol{U}) &\stackrel{\text{def}}{=} \sum_{i=1}^n \fractwo{1} \twonorm{\boldsymbol{u}_i}^2 + \fractwo{\lambda_3} \fronorm{\boldsymbol{D} - \sum_{i=1}^{n} \boldsymbol{y}_i \boldsymbol{u}_i^\top}^2,\\
\label{eq:h}
h(\boldsymbol{Y}, \boldsymbol{D}) &= \min_{\boldsymbol{U}} \th(\boldsymbol{Y},\boldsymbol{D},\boldsymbol{U}).
\end{align}
Then \eqref{eq:reg batch} can be rewritten as:
\begin{align}\label{eq:tmp}
\min_{\boldsymbol{D}} \min_{\boldsymbol{U}, \boldsymbol{V}, \boldsymbol{E}}\ \sum_{i=1}^{n} \tilde{\ell}(\boldsymbol{z}_i, \boldsymbol{D}, \boldsymbol{v}_i, \boldsymbol{e}_i) + \th(\boldsymbol{Y}, \boldsymbol{D}, \boldsymbol{U}),
\end{align}
which amounts to minimizing the empirical loss function:
\begin{align}
\label{eq:f_n(D)}
f_n(\boldsymbol{D}) \stackrel{\text{def}}{=} \frac{1}{n}\sum_{i=1}^{n} \ell(\boldsymbol{z}_i, \boldsymbol{D}) + \frac{1}{n} h(\boldsymbol{Y}, \boldsymbol{D}).
\end{align}
In stochastic optimization, we are interested in analyzing the optimality of the obtained solution with respect to the expected loss function. To this end, we first derive the optimal solutions $\boldsymbol{U}^*$, $\boldsymbol{V}^*$ and $\boldsymbol{E}^*$ that minimize~\eqref{eq:tmp}, which renders a concrete form of the empirical loss function $f_n(\boldsymbol{D})$ and hence allows us to derive the expected loss.
Given $\boldsymbol{D}$, we need to compute the optimal solutions $\boldsymbol{U}^*$, $\boldsymbol{V}^*$ and $\boldsymbol{E}^*$ to evaluate the objective value of $f_n(\boldsymbol{D})$. What is of interest here is that the optimization procedure for $\boldsymbol{U}$ is totally different from that for $\boldsymbol{V}$ and $\boldsymbol{E}$. According to \eqref{eq:ell}, when $\boldsymbol{D}$ is given, each $\boldsymbol{v}_i^*$ and $\boldsymbol{e}_i^*$ can be solved for by only accessing the $i$th sample $\boldsymbol{z}_i$. However, the optimal $\boldsymbol{u}_i^*$ depends on the whole dictionary $\boldsymbol{Y}$, as the second term in $\th(\boldsymbol{Y}, \boldsymbol{D}, \boldsymbol{U})$ couples all the $\boldsymbol{u}_i$'s. Fortunately, it is possible to obtain a closed form solution for $\boldsymbol{U}^*$, which simplifies our analysis. To be more precise, the first order optimality condition for~\eqref{eq:h} gives
\begin{align}
\frac{\partial \th(\boldsymbol{Y}, \boldsymbol{D}, \boldsymbol{U})}{\partial \boldsymbol{U}} =\ \boldsymbol{U} + \lambda_3 (\boldsymbol{Y}^\top \boldsymbol{Y}\boldsymbol{U} - \boldsymbol{Y}^\top \boldsymbol{D}) = 0,
\end{align}
which implies
\begin{align}
\label{eq:U^*}
\boldsymbol{U}^* &= \({\lambda_3}^{-1}\boldsymbol{I}_n + \boldsymbol{Y}^\top \boldsymbol{Y}\)^{-1} \boldsymbol{Y}^\top \boldsymbol{D} \notag\\
&= \lambda_3 \sum_{j=0}^{+\infty} \(-\lambda_3 \boldsymbol{Y}^\top \boldsymbol{Y}\)^j \boldsymbol{Y}^\top \boldsymbol{D} \notag\\
&= \lambda_3 \boldsymbol{Y}^\top \Bigg[ \sum_{j=0}^{+\infty} \(-\lambda_3 \boldsymbol{Y} \boldsymbol{Y}^\top\)^j \Bigg] \boldsymbol{D} \notag\\
&=\boldsymbol{Y}^\top \({\lambda_3}^{-1}\boldsymbol{I}_p + \boldsymbol{Y}\bY^\top\)^{-1} \boldsymbol{D}.
\end{align}
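The series manipulation above is an instance of the push-through identity $({\lambda_3}^{-1}\boldsymbol{I}_n + \boldsymbol{Y}^\top \boldsymbol{Y})^{-1} \boldsymbol{Y}^\top = \boldsymbol{Y}^\top ({\lambda_3}^{-1}\boldsymbol{I}_p + \boldsymbol{Y}\bY^\top)^{-1}$, which trades an $n \times n$ inverse for a $p \times p$ one~--~crucial when $n \gg p$. A minimal numerical check (purely illustrative, with arbitrary small dimensions):
\begin{verbatim}
import numpy as np

# Check: (I_n/l3 + Y^T Y)^{-1} Y^T D == Y^T (I_p/l3 + Y Y^T)^{-1} D
rng = np.random.default_rng(1)
p, n, d, l3 = 8, 20, 3, 2.5
Y = rng.standard_normal((p, n))
D = rng.standard_normal((p, d))

lhs = np.linalg.solve(np.eye(n) / l3 + Y.T @ Y, Y.T @ D)  # n x n solve
rhs = Y.T @ np.linalg.solve(np.eye(p) / l3 + Y @ Y.T, D)  # p x p solve
assert np.allclose(lhs, rhs)
\end{verbatim}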
Likewise, another component $\boldsymbol{Y}\boldsymbol{U}^{*}$ in~\eqref{eq:th} can be derived as follows:
\begin{align}
\boldsymbol{Y}\boldsymbol{U}^{*} = \boldsymbol{D} - \frac{1}{n} \(\frac{1}{n} \boldsymbol{I}_p+\frac{\lambda_3}{n} \boldsymbol{N}_n \)^{-1}\boldsymbol{D},
\end{align}
where we denote
\begin{align}
\label{eq:N_n}
\boldsymbol{N}_n = \sum_{i=1}^{n} \boldsymbol{y}_i \boldsymbol{y}_i^\top.
\end{align}
Recall that $\boldsymbol{u}_i$ is the $i$th column of $\boldsymbol{U}^\top$. So for each $i \in [n]$, we immediately have
\begin{align}
\label{eq:actual u_t}
\boldsymbol{u}_i^* = \boldsymbol{D}^\top \(\frac{1}{\lambda_3}\boldsymbol{I}_p + \boldsymbol{N}_n\)^{-1} \boldsymbol{y}_i = \frac{1}{n} \boldsymbol{D}^\top \(\frac{1}{\lambda_3 n}\boldsymbol{I}_p + \frac{1}{n} \boldsymbol{N}_n\)^{-1} \boldsymbol{y}_i.
\end{align}
Plugging $\boldsymbol{U}^*$ and $\boldsymbol{Y}\boldsymbol{U}^{*}$ back into $\th(\boldsymbol{Y}, \boldsymbol{D}, \boldsymbol{U})$ gives
\begin{align}
h(\boldsymbol{Y}, \boldsymbol{D}) = \frac{1}{n^2} \sum_{i=1}^n \fractwo{1} \twonorm{ \boldsymbol{D}^\top \(\frac{1}{\lambda_3 n}\boldsymbol{I}_p + \frac{1}{n} \boldsymbol{N}_n\)^{-1} \boldsymbol{y}_i }^2 + \frac{\lambda_3}{2 n^2} \fronorm{ \(\frac{1}{n} \boldsymbol{I}_p +\frac{\lambda_3}{n} \boldsymbol{N}_n \)^{-1}\boldsymbol{D} }^2.
\end{align}
Now we derive the expected loss function, which is defined as the limit of the empirical loss function when $n$ tends to infinity. If we assume that all the samples are drawn independently and identically from some (unknown) distribution, we have
\begin{align}
\lim_{n \rightarrow \infty} \frac{1}{n} \sum_{i=1}^{n} \ell(\boldsymbol{z}_i, \boldsymbol{D}) = \mathbb{E}_{\boldsymbol{z}} [\ell(\boldsymbol{z}, \boldsymbol{D})].
\end{align}
If we further assume that the smallest singular value of $\frac{1}{n} \boldsymbol{N}_n$ is bounded away from zero~(which implies that $\boldsymbol{N}_n$ is invertible and the spectrum of $\boldsymbol{N}_n^{-1}$ is bounded from above), we have
\begin{align}
0 \leq \lim_{n \rightarrow \infty} \frac{1}{n} h(\boldsymbol{Y},\boldsymbol{D}) \leq \lim_{n \rightarrow \infty} \frac{1}{n^3} \sum_{i=1}^n \mathrm{C}_0 = 0.
\end{align}
Here $\mathrm{C}_0$ is some absolute constant, since $\boldsymbol{D}$ is fixed and the $\boldsymbol{y}_i$'s are bounded. Hence, it follows that
\begin{align}
\lim_{n \rightarrow \infty} \frac{1}{n} h(\boldsymbol{Y},\boldsymbol{D}) = 0.
\end{align}
Finally, the expected loss function is given by
\begin{align}
\label{eq:f(D)}
f(\boldsymbol{D}) \stackrel{\text{def}}{=} \lim_{n \rightarrow \infty} f_n(\boldsymbol{D}) = \mathbb{E}_{\boldsymbol{z}}\big[\ell(\boldsymbol{z}, \boldsymbol{D}) \big].
\end{align}
\section{Algorithm}\label{sec:alg}
Our OLRSC algorithm is summarized in Algorithm~\ref{alg:all}. Recall that OLRSC is an online implementation for solving \eqref{eq:f_n(D)}, which is derived from the regularized version of LRR~\eqref{eq:reg batch}. The main idea is to optimize the variables in an alternating manner. That is, at the $t$-th iteration, given the basis dictionary $\boldsymbol{D}_{t-1}$, we compute the optimal solutions $\{\boldsymbol{v}_t, \boldsymbol{e}_t\}$ by minimizing the objective function $\tilde{\ell}(\boldsymbol{z}_t, {\boldsymbol{D}}_{t-1}, \boldsymbol{v}, \boldsymbol{e})$ over $\boldsymbol{v}$ and $\boldsymbol{e}$. For $\boldsymbol{u}_t$, we need a more carefully designed paradigm, since a direct optimization involves loading the full dictionary $\boldsymbol{Y}$ (see~\eqref{eq:actual u_t}); we elaborate on the details below. Subsequently, we update the basis dictionary $\boldsymbol{D}_{t}$ by optimizing a surrogate function for the empirical loss $f_n(\boldsymbol{D})$. The algorithm maintains three additional accumulation matrices whose sizes are independent of $n$.
\begin{algorithm}[t]
\caption{Online Low-Rank Subspace Clustering}
\label{alg:all}
\begin{algorithmic}[1]
\REQUIRE $\boldsymbol{Z} \in \mathbb{R}^{p\times n}$ (observed samples), $\boldsymbol{Y} \in \mathbb{R}^{p\times n}$, parameters $\lambda_1$, $\lambda_2$ and $\lambda_3$, initial basis $\boldsymbol{D}_0 \in \mathbb{R}^{p\times d}$, zero matrices $\boldsymbol{M}_0 \in \mathbb{R}^{p\times d}$, $\boldsymbol{A}_0 \in \mathbb{R}^{d\times d}$ and $\boldsymbol{B}_0 \in \mathbb{R}^{p\times d}$.
\ENSURE Optimal basis $\boldsymbol{D}_n$.
\FOR{$t=1$ to $n$}
\STATE Access the $t$-th sample $\boldsymbol{z}_t$ and the $t$-th atom $\boldsymbol{y}_t$.
\STATE Compute the coefficient and noise:
\begin{align*}
& \{\boldsymbol{v}_t, \boldsymbol{e}_t\} = \argmin_{\boldsymbol{v}, \boldsymbol{e}} \tilde{\ell}({\boldsymbol{z}_t, \boldsymbol{D}_{t-1}, \boldsymbol{v}, \boldsymbol{e}}),\\
&\boldsymbol{u}_t = \argmin_{\boldsymbol{u}} \tilde{\ell}_2(\boldsymbol{y}_t, \boldsymbol{D}_{t-1}, \boldsymbol{M}_{t-1}, \boldsymbol{u}).
\end{align*}\vspace{-0.2in}
\STATE Update the accumulation matrices:
\begin{align*}
\boldsymbol{M}_t \leftarrow \boldsymbol{M}_{t-1} + \boldsymbol{y}_t \boldsymbol{u}_t^\top,\ \ \boldsymbol{A}_t \leftarrow \boldsymbol{A}_{t-1} + \boldsymbol{v}_t\boldsymbol{v}_t^\top,\ \ \boldsymbol{B}_t \leftarrow \boldsymbol{B}_{t-1} + (\boldsymbol{z}_t - \boldsymbol{e}_t)\boldsymbol{v}_t^\top.
\end{align*}\vspace{-0.2in}
\STATE Update the basis dictionary:
\begin{align*}
\boldsymbol{D}_t = \argmin_{\boldsymbol{D} } \frac{1}{t} \Bigg[ \frac{1}{2}\tr\(\boldsymbol{D}^\top \boldsymbol{D}(\lambda_1 \boldsymbol{A}_t + \lambda_3 \boldsymbol{I}_d)\) - \tr\(\boldsymbol{D}^\top (\lambda_1 \boldsymbol{B}_t + \lambda_3 \boldsymbol{M}_t)\) \Bigg].
\end{align*}
\ENDFOR
\end{algorithmic}
\end{algorithm}
\vspace{0.1in}
\noindent{\bf Solving $\{\boldsymbol{v}_t, \boldsymbol{e}_t\}$.} \
We observe that if $\boldsymbol{e}$ is fixed, we can optimize $\boldsymbol{v}$ in closed form:
\begin{align}\label{eq:v}
\boldsymbol{v} = \( \lambda_1^{-1}\boldsymbol{I}_d + \boldsymbol{D}_{t-1}^\top \boldsymbol{D}_{t-1}\)^{-1}\boldsymbol{D}_{t-1}^\top \(\boldsymbol{z}_t - \boldsymbol{e}\).
\end{align}
Conversely, given $\boldsymbol{v}$, the variable $\boldsymbol{e}$ is obtained via soft-thresholding~\cite{donoho1995denoising}:
\begin{align}\label{eq:e}
\boldsymbol{e} = \mathcal{S}_{\lambda_2 / \lambda_1}\(\boldsymbol{z}_t - \boldsymbol{D}_{t-1}\boldsymbol{v}\).
\end{align}
Thus, we utilize a block coordinate minimization algorithm to optimize $\boldsymbol{v}$ and $\boldsymbol{e}$, alternating between \eqref{eq:v} and \eqref{eq:e} until convergence, as sketched below.
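A minimal sketch of this alternating scheme (our own NumPy illustration; the fixed iteration count is an assumption, not a tuned value) is:
\begin{verbatim}
import numpy as np

def soft_threshold(x, tau):
    # Entrywise soft-thresholding operator S_tau
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def solve_v_e(z, D, lam1, lam2, n_iters=50):
    # Alternate between the ridge solution (eq:v) for v and
    # soft-thresholding (eq:e) for e.
    A = np.eye(D.shape[1]) / lam1 + D.T @ D   # fixed across iterations
    e = np.zeros_like(z)
    for _ in range(n_iters):
        v = np.linalg.solve(A, D.T @ (z - e))
        e = soft_threshold(z - D @ v, lam2 / lam1)
    return v, e
\end{verbatim}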
\vspace{0.1in}
\noindent{\bf Solving $\boldsymbol{u}_t$.} \
The closed form solution~\eqref{eq:actual u_t} tells us that it is impossible to derive an accurate estimate of $\boldsymbol{u}_t$ without the entire dictionary $\boldsymbol{Y}$. Thus, we have to ``approximately'' solve it during the online optimization procedure\footnote{Here, ``accurately'' and ``approximately'' refer to whether, given only $\boldsymbol{D}_{t-1}$, $\boldsymbol{z}_t$ and $\boldsymbol{y}_t$, we can obtain the same solution $\{\boldsymbol{v}_t, \boldsymbol{e}_t, \boldsymbol{u}_t\}$ as for the batch problem~\eqref{eq:f_n(D)}.}.
Our approximate process for solving $\th(\boldsymbol{Y}, \boldsymbol{D}, \boldsymbol{U})$~\eqref{eq:th} is motivated by the coordinate minimization method applied to $\th(\boldsymbol{Y}, \boldsymbol{D}, \boldsymbol{U})$. Conventionally, such a method starts with the initial guess $\boldsymbol{u}_i = \boldsymbol{0}$ for all $i \in [n]$ and updates the $\boldsymbol{u}_i$'s in a cyclic order, i.e., $\boldsymbol{u}_1$, $\boldsymbol{u}_2$, $\cdots$, $\boldsymbol{u}_n$, $\boldsymbol{u}_1$, $\cdots$. Consider the first pass, where we have already updated $\boldsymbol{u}_1$, $\boldsymbol{u}_2$, $\cdots$, $\boldsymbol{u}_{t-1}$ and are about to optimize over $\boldsymbol{u}_t$ for some $t > 0$. Since the initial values are zero, $\boldsymbol{u}_{t+1} = \boldsymbol{u}_{t+2} = \cdots = \boldsymbol{u}_n = \boldsymbol{0}$. Thereby, the optimal $\boldsymbol{u}_t$ is actually given by minimizing the following function:
\begin{align}
\label{eq:tl_2}
\tilde{\ell}_2(\boldsymbol{y}_t, \boldsymbol{D}, \boldsymbol{M}_{t-1}, \boldsymbol{u}) \stackrel{\text{def}}{=} \fractwo{1} \twonorm{\boldsymbol{u}}^2 + \fractwo{\lambda_3} \fronorm{\boldsymbol{D} - \boldsymbol{M}_{t-1} - \boldsymbol{y}_t \boldsymbol{u}^\top}^2,
\end{align}
where
\begin{align}
\boldsymbol{M}_{t-1} = \sum_{i=1}^{t-1}\boldsymbol{y}_i \boldsymbol{u}_i^\top.
\end{align}
We easily obtain the closed form solution to~\eqref{eq:tl_2} as follows:
\begin{align}
\label{eq:u_t}
\boldsymbol{u}_t = (\twonorm{\boldsymbol{y}_t}^2 + {1}/{\lambda_3})^{-1} (\boldsymbol{D} - \boldsymbol{M}_{t-1})^\top \boldsymbol{y}_t.
\end{align}
Now let us turn to the alternating minimization algorithm, where $\boldsymbol{D}$ is updated iteratively rather than fixed as in~\eqref{eq:u_t}. The above coordinate minimization process can be adapted to this scenario as in Algorithm~\ref{alg:all}: given $\boldsymbol{D}_{t-1}$, after revealing a new atom $\boldsymbol{y}_t$, we compute $\boldsymbol{u}_t$ by minimizing $\tilde{\ell}_2(\boldsymbol{y}_t, \boldsymbol{D}_{t-1}, \boldsymbol{M}_{t-1}, \boldsymbol{u})$, followed by updating $\boldsymbol{D}_t$. In this way, when the algorithm terminates, we have in essence run a one-pass update on the $\boldsymbol{u}_t$'s with simultaneous computation of the new basis dictionary.
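For concreteness, a sketch of this step (our own NumPy illustration) reads:
\begin{verbatim}
import numpy as np

def update_u_and_M(y_t, D_prev, M_prev, lam3):
    # Closed-form minimizer (eq:u_t) of l2~(y_t, D_{t-1}, M_{t-1}, u),
    # followed by the rank-one update of the accumulation matrix M.
    u_t = (D_prev - M_prev).T @ y_t / (y_t @ y_t + 1.0 / lam3)
    M_t = M_prev + np.outer(y_t, u_t)
    return u_t, M_t
\end{verbatim}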
\vspace{0.1in}
\noindent{\bf Solving $\boldsymbol{D}_t$.} \
As soon as the past filtration $\{ \boldsymbol{v}_i, \boldsymbol{e}_i, \boldsymbol{u}_i \}_{i=1}^t$ is available, we can compute a new iterate $\boldsymbol{D}_t$ by optimizing the surrogate function
\begin{align}
\label{eq:g_t(D)}
g_t(\boldsymbol{D}) \stackrel{\text{def}}{=} \frac{1}{t} \(\sum_{i=1}^{t} \tilde{\ell}(\boldsymbol{z}_i, \boldsymbol{D}, \boldsymbol{v}_i, \boldsymbol{e}_i) + \sum_{i=1}^{t} \fractwo{1} \twonorm{\boldsymbol{u}_i}^2 + \fractwo{\lambda_3} \fronorm{\boldsymbol{D} - \boldsymbol{M}_t}^2 \).
\end{align}
Expanding the first term, we find that $\boldsymbol{D}_t$ is given by
\begin{align}
\boldsymbol{D}_t =&\ \argmin_{\boldsymbol{D} } \frac{1}{t} \Bigg[ \frac{1}{2}\tr\(\boldsymbol{D}^\top \boldsymbol{D}(\lambda_1 \boldsymbol{A}_t + \lambda_3 \boldsymbol{I}_d)\) - \tr\(\boldsymbol{D}^\top (\lambda_1 \boldsymbol{B}_t + \lambda_3 \boldsymbol{M}_t)\) \Bigg]\notag\\
=&\ (\lambda_1 \boldsymbol{B}_t + \lambda_3 \boldsymbol{M}_t) (\lambda_1 \boldsymbol{A}_t + \lambda_3 \boldsymbol{I}_d)^{-1},
\end{align}
where $\boldsymbol{A}_t = \sum_{i=1}^{t} \boldsymbol{v}_i \boldsymbol{v}_i^\top$ and $\boldsymbol{B}_t = \sum_{i=1}^{t} (\boldsymbol{z}_i - \boldsymbol{e}_i)\boldsymbol{v}_i^\top$. We point out that the size of $\boldsymbol{A}_t$ is $d\times d$ and that of $\boldsymbol{B}_t$ is $p\times d$, i.e., independent of the sample size. In practice, as suggested in~\cite{mairal2010online}, one may apply a block coordinate descent approach to minimize over $\boldsymbol{D}$. Compared to the closed form solution given above, such an algorithm usually converges very quickly after a sufficient number of samples has been revealed. In fact, we observe that a one-pass update on the columns of $\boldsymbol{D}$ suffices to ensure favorable performance. See Algorithm~\ref{alg:D}.
\begin{algorithm}[h]
\caption{Solving $\boldsymbol{D}$}
\label{alg:D}
\begin{algorithmic}[1]
\REQUIRE $\boldsymbol{D} \in \mathbb{R}^{p\times d}$ in the previous iteration, accumulation matrix $\boldsymbol{M}$, $\boldsymbol{A}$ and $\boldsymbol{B}$, parameters $\lambda_1$ and $\lambda_3$.
\ENSURE Optimal $\boldsymbol{D}$ (updated).
\STATE Denote $\widehat{\boldsymbol{A}} = \lambda_1 \boldsymbol{A} + \lambda_3 \boldsymbol{I}$ and $\widehat{\boldsymbol{B}} = \lambda_1 \boldsymbol{B} + \lambda_3 \boldsymbol{M} $.
\REPEAT
\FOR{$j = 1$ to $d$}
\STATE Update the $j$th column of $\boldsymbol{D}$:
\begin{align*}
\boldsymbol{d}_j \leftarrow \boldsymbol{d}_j - \frac{1}{\widehat{A}_{jj}} \( \boldsymbol{D} \widehat{\boldsymbol{a}}_j - \widehat{\boldsymbol{b}}_j \)
\end{align*}
\ENDFOR
\UNTIL{convergence}
\end{algorithmic}
\end{algorithm}
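A direct transcription of Algorithm~\ref{alg:D} (our own NumPy sketch; a single pass is used, as discussed above) is:
\begin{verbatim}
import numpy as np

def update_D(D, M, A, B, lam1, lam3, n_passes=1):
    # Column-wise block coordinate descent on the quadratic
    # objective in D; one pass usually suffices in practice.
    d = D.shape[1]
    A_hat = lam1 * A + lam3 * np.eye(d)
    B_hat = lam1 * B + lam3 * M
    for _ in range(n_passes):
        for j in range(d):
            D[:, j] -= (D @ A_hat[:, j] - B_hat[:, j]) / A_hat[j, j]
    return D
\end{verbatim}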
\vspace{0.1in}
\noindent{\bf Memory Cost.} \
It is remarkable that the memory cost of Algorithm~\ref{alg:all} is $\mathcal{O}(pd)$. To see this, note that when solving $\boldsymbol{v}_t$ and $\boldsymbol{e}_t$, we load the basis $\boldsymbol{D}_{t-1}$ and a sample $\boldsymbol{z}_t$ into memory, which costs $\mathcal{O}(pd)$. To compute the optimal $\boldsymbol{u}_t$, we need access to $\boldsymbol{D}_{t-1}$ and $\boldsymbol{M}_{t-1} \in \mathbb{R}^{p\times d}$. Although we aim to minimize \eqref{eq:g_t(D)}, which seems to require all the past information, we actually only need to record $\boldsymbol{A}_t$, $\boldsymbol{B}_t$ and $\boldsymbol{M}_t$, whose sizes are at most $\mathcal{O}(pd)$ (since $d < p$).
\vspace{0.1in}
\noindent{\bf Computational Efficiency.} \
In addition to memory efficiency, we further clarify that the computation in each iteration is cheap. To compute $\{\boldsymbol{v}_t, \boldsymbol{e}_t\}$, one may utilize the block coordinate method in~\cite{richtarik2014iteration}, which enjoys linear convergence due to strong convexity. One may also apply stochastic variance reduced algorithms, which likewise ensure a geometric rate of convergence~\cite{xiao2014proximal,saga}. The update for $\boldsymbol{u}_t$ is a simple matrix--vector multiplication, which costs $\mathcal{O}(pd)$. The update of the accumulation matrices costs $\mathcal{O}(pd)$ and that of $\boldsymbol{D}_t$ costs $\mathcal{O}(pd^2)$.
\vspace{0.1in}
\noindent{\bf A Fully Online Scheme.} \
Now we have provided a way to (approximately) optimize the LRR problem~\eqref{eq:lrr} in an online fashion. A common optional post-processing step in the literature refines the segmentation accuracy, for example by applying spectral clustering to the representation matrix $\boldsymbol{X}$. In this case, one has to collect all the $\boldsymbol{u}_i$'s and $\boldsymbol{v}_i$'s to compute $\boldsymbol{X} = \boldsymbol{U} \boldsymbol{V}^\top$, which again increases the memory cost to $\mathcal{O}(n^2)$. Here, we suggest an alternative scheme that admits $\mathcal{O}(kd)$ memory usage, where $k$ is the number of subspaces. The idea is to run the well-known $k$-means algorithm on the $\boldsymbol{v}_i$'s. There are two notable advantages over spectral clustering. First, the $k$-means model can be updated in an online manner with $\mathcal{O}(kd)$ computation per iteration. Second, we observe that $\boldsymbol{v}_i$ is actually a robust feature for the $i$th sample. Combining online $k$-means with Algorithm~\ref{alg:all}, we obtain a fully online and efficient subspace clustering scheme whose memory cost is $\mathcal{O}(pd)$. For the reader's convenience, we summarize this pipeline in Algorithm~\ref{alg:fully}.
\begin{algorithm}[h]
\caption{Fully Online Pipeline for Low-Rank Subspace Clustering}
\label{alg:fully}
\begin{algorithmic}[1]
\REQUIRE $\boldsymbol{Z} \in \mathbb{R}^{p\times n}$ (observed samples), $\boldsymbol{Y} \in \mathbb{R}^{p\times n}$, parameters $\lambda_1$, $\lambda_2$ and $\lambda_3$, initial basis $\boldsymbol{D}_0 \in \mathbb{R}^{p\times d}$, zero matrices $\boldsymbol{M}_0 \in \mathbb{R}^{p\times d}$, $\boldsymbol{A}_0 \in \mathbb{R}^{d\times d}$ and $\boldsymbol{B}_0 \in \mathbb{R}^{p\times d}$, number of clusters $k$, initial centroids $\boldsymbol{C} \in \mathbb{R}^{d\times k}$.
\ENSURE Optimal basis $\boldsymbol{D}_n$, cluster centroids $\boldsymbol{C}$, cluster assignments $\{ o_1, o_2, \cdots, o_n \}$.
\STATE Initialize $r_1 = r_2 = \cdots = r_k = 0$.
\FOR{$t=1$ to $n$}
\STATE Access the $t$-th sample $\boldsymbol{z}_t$ and the $t$-th atom $\boldsymbol{y}_t$.
\STATE Compute $\{ \boldsymbol{v}_t, \boldsymbol{e}_t, \boldsymbol{u}_t, \boldsymbol{D}_t \}$ by Algorithm~\ref{alg:all}.
\STATE Compute $o_t = \argmin_{1 \leq j \leq k} \twonorm{\boldsymbol{v}_t - \boldsymbol{c}_j}$.
\STATE Update the $o_t$-th center:
\begin{align*}
&r_{o_t} \leftarrow r_{o_t} + 1,\\
&\boldsymbol{c}_{o_t} \leftarrow \frac{r_{o_t} - 1}{r_{o_t}} \boldsymbol{c}_{o_t} + \frac{1}{r_{o_t}} \boldsymbol{v}_t.
\end{align*}
\ENDFOR
\end{algorithmic}
\end{algorithm}
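The per-sample $k$-means step in Algorithm~\ref{alg:fully} amounts to a nearest-centroid assignment followed by a running-average update; a sketch (our own NumPy illustration, with $\boldsymbol{C} \in \mathbb{R}^{d\times k}$ stored column-wise) is:
\begin{verbatim}
import numpy as np

def assign_and_update(v_t, C, counts):
    # Assign v_t to its nearest centroid, then move that centroid
    # toward v_t with step size 1/r (online k-means).
    o_t = int(np.argmin(np.linalg.norm(C - v_t[:, None], axis=0)))
    counts[o_t] += 1
    C[:, o_t] += (v_t - C[:, o_t]) / counts[o_t]
    return o_t
\end{verbatim}
Note that $\boldsymbol{c}_{o_t} + (\boldsymbol{v}_t - \boldsymbol{c}_{o_t})/r_{o_t}$ is identical to the convex-combination update written in Algorithm~\ref{alg:fully}.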
\vspace{0.1in}
\noindent{\bf An Accurate Online Implementation.} \
Our strategy for solving $\boldsymbol{u}_t$ is based on an approximate routine which resolves Issue~\ref{is:partialY} while retaining low complexity. Yet, to tackle Issue~\ref{is:partialY}, another potential way is to eliminate the variable $\boldsymbol{u}_t$ altogether\footnote{We would like to thank the anonymous NIPS 2015 reviewer for pointing out this potential solution to the online algorithm. Here we explain why this alternative can be computationally expensive.}. Recall that the optimal solution $\boldsymbol{U}^*$ to~\eqref{eq:reg batch} for a given $\boldsymbol{D}$ is given by~\eqref{eq:U^*}. Plugging it back into~\eqref{eq:reg batch}, we obtain
\begin{align*}
& \fronorm{\boldsymbol{U}^*}^2 = \tr\( \boldsymbol{D} \boldsymbol{D}^\top \( \boldsymbol{Q}_n - {\lambda_3}^{-1} \boldsymbol{Q}^2_n \) \),\\
&\fronorm{\boldsymbol{D} - \boldsymbol{Y}\boldsymbol{U}^*}^2 = \fronorm{{\lambda_3}^{-1} \boldsymbol{Q}_n\boldsymbol{D}}^2,
\end{align*}
where
\begin{align*}
\boldsymbol{Q}_n = \( {\lambda_3}^{-1} \boldsymbol{I}_p + \boldsymbol{N}_n \)^{-1}.
\end{align*}
Here, $\boldsymbol{N}_n$ was given in~\eqref{eq:N_n}. Note that the size of $\boldsymbol{Q}_n$ is $p\times p$. Hence, if we incrementally compute the accumulation matrix $\boldsymbol{N}_t = \sum_{i=1}^{t} \boldsymbol{y}_i \boldsymbol{y}_i^\top$, we can update the variable $\boldsymbol{D}$ in an online fashion. Namely, at the $t$-th iteration, we re-define the surrogate function as follows:
\begin{align*}
g_t(\boldsymbol{D}) \stackrel{\text{def}}{=} \frac{1}{t} \Bigg[ \sum_{i=1}^{t} \tilde{\ell}(\boldsymbol{z}_i, \boldsymbol{D}, \boldsymbol{v}_i, \boldsymbol{e}_i) + \fractwo{\lambda_3} \fronorm{\frac{1}{\lambda_3} \boldsymbol{Q}_t\boldsymbol{D}}^2 + \fractwo{1} \tr\( \boldsymbol{D} \boldsymbol{D}^\top \( \boldsymbol{Q}_t - \frac{1}{\lambda_3} \boldsymbol{Q}^2_t \) \) \Bigg].
\end{align*}
Again, since optimizing over $\boldsymbol{D}$ with the $\tilde{\ell}(\boldsymbol{z}_i, \boldsymbol{D}, \boldsymbol{v}_i, \boldsymbol{e}_i)$ terms only requires recording $\boldsymbol{A}_t$ and $\boldsymbol{B}_t$, the memory cost remains independent of the sample size.
While this procedure is promising in that it avoids the approximate computation, its main shortcoming is that it requires the inverse of a $p\times p$ matrix in each iteration, and is hence inefficient. Moreover, as we will show in Theorem~\ref{thm:stationary}, although the $\boldsymbol{u}_t$'s are approximate solutions, we are still guaranteed the convergence of $\boldsymbol{D}_t$.
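To make the cost concrete, a sketch of one iteration of this exact variant (our own NumPy illustration) shows where the $\mathcal{O}(p^3)$ bottleneck arises:
\begin{verbatim}
import numpy as np

def exact_surrogate_terms(N_t, D, lam3):
    # Q_t = (lam3^{-1} I_p + N_t)^{-1} is a p x p inverse that must
    # be re-formed at every iteration -- an O(p^3) cost per sample.
    p = N_t.shape[0]
    Q = np.linalg.inv(np.eye(p) / lam3 + N_t)
    residual = np.linalg.norm(Q @ D / lam3, 'fro') ** 2
    trace_term = np.trace(D @ D.T @ (Q - Q @ Q / lam3))
    return residual, trace_term
\end{verbatim}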
\section{Theoretical Analysis}\label{sec:theory}
We make three assumptions underlying our analysis.
\begin{assumption}\label{as:z}
The observed data are generated i.i.d.\ from some distribution and there exist constants $\alpha_0$ and $\alpha_1$, such that the conditions $0 < \alpha_0 \leq \twonorm{\boldsymbol{z}} \leq \alpha_1$ and $ \alpha_0 \leq \twonorm{\boldsymbol{y}} \leq \alpha_1$ hold almost surely.
\end{assumption}
\begin{assumption}\label{as:N_t}
The smallest singular value of the matrix $\frac{1}{t}\boldsymbol{N}_t$ is bounded away from zero.
\end{assumption}
\begin{assumption}\label{as:g_t(D)}
The surrogate functions $g_t(\boldsymbol{D})$ are strongly convex for all $t \geq 0$.
\end{assumption}
Based on these assumptions, we establish the main theoretical result, justifying the validity of {Algorithm~\ref{alg:all}}.
\begin{theorem}
\label{thm:stationary}
Assume~\ref{as:z},~\ref{as:N_t} and~\ref{as:g_t(D)}. Let $\{\boldsymbol{D}_t\}_{t=1}^{\infty}$ be the sequence of optimal bases produced by Algorithm~\ref{alg:all}. Then, the sequence converges to a stationary point of the expected loss function $f(\boldsymbol{D})$ when $t$ goes to infinity.
\end{theorem}
Note that since the reformulation of the nuclear norm~\eqref{eq:nuclear reform} is non-convex, we can in general only guarantee that the solution is a stationary point~\cite{bertsekas1999nonlinear}. We also remark that OLRSC asymptotically fulfills the first order optimality condition of~\eqref{eq:lrr}. To see this, we follow the proof technique of Prop.~3 in~\cite{mardani2015subspace} and let $\boldsymbol{X} = \boldsymbol{U} \boldsymbol{V}^\top$, $W_1 = \boldsymbol{U}\bU^\top$, $W_2 = \boldsymbol{V}\bV^\top$, $\boldsymbol{M}_1 = \boldsymbol{M}_3 = 0.5\boldsymbol{I}$, $\boldsymbol{M}_2 = \boldsymbol{M}_4 = 0.5\lambda_1 \boldsymbol{Y}^\top(\boldsymbol{Y}\boldsymbol{X} + \boldsymbol{E} - \boldsymbol{Z})$. Due to our uniform bound~(Prop.~\ref{prop:bound:veABMDu}), the optimality condition is justified. See~\cite{mardani2015subspace} for details.
More interestingly, as mentioned in Section~\ref{sec:alg}, the solution~\eqref{eq:u_t} is not accurate in the sense that, for a given $\boldsymbol{D}$, it does not equal~\eqref{eq:actual u_t}. Yet, our theorem asserts that this does not drive $\{\boldsymbol{D}_t\}_{t\geq 0}$ away from a stationary point. The intuition underlying this phenomenon is that the expected loss function~\eqref{eq:f(D)} is determined only by $\ell(\boldsymbol{z}, \boldsymbol{D})$, which does not involve $\boldsymbol{u}_t$. What matters for $\boldsymbol{u}_t$ and $\boldsymbol{M}_t$ is their uniform boundedness and concentration, which are needed to establish convergence. Thanks to the carefully chosen function $\tilde{\ell}(\boldsymbol{z}, \boldsymbol{D}, \boldsymbol{M}, \boldsymbol{u})$ and the surrogate function $g_t(\boldsymbol{D})$, we are able to prove the desired property by mathematical induction, which is a crucial step in our proof.
In particular, we have the following lemma that facilitates our analysis:
\begin{lemma}
Assume~\ref{as:z},~\ref{as:N_t} and~\ref{as:g_t(D)}. Let $\{\boldsymbol{M}_t\}_{t\geq 0}$ be the sequence of matrices produced by Algorithm~\ref{alg:all}. Then, there exists some universal constant $\mathrm{C}_0$, such that for all $t \geq 0$, $\fronorm{\boldsymbol{M}_t} \leq \mathrm{C}_0$.
\end{lemma}
Due to the above lemma, the solution $\boldsymbol{D}_t$ is essentially determined by $\frac{1}{t}\boldsymbol{A}_t$ and $\frac{1}{t}\boldsymbol{B}_t$ when $t$ is large, since $\frac{1}{t}\boldsymbol{M}_t \rightarrow 0$. We also have a non-asymptotic rate for the numerical convergence of $\boldsymbol{D}_t$, namely $\twonorm{\boldsymbol{D}_t - \boldsymbol{D}_{t-1}} = \mathcal{O}(1/t)$. See Appendix~\ref{supp:sec:proof} for more details and a full proof.
\section{Experiments}\label{sec:exp}
Before presenting the empirical results, we first introduce the universal settings used throughout the section.
\vspace{0.1in}
\noindent{\bf Algorithms.} \
For the subspace recovery task, we compare our algorithm with state-of-the-art solvers including ORPCA~\cite{feng2013online}, LRR~\cite{liu2013robust} and PCP~\cite{candes2011robust}. For the subspace clustering task, we choose ORPCA, LRR and SSC~\cite{elhamifar2009sparse} as the competitive baselines. Recently, \cite{liu2014recovery} improved the vanilla LRR by utilizing a low-rank matrix for $\boldsymbol{Y}$. We denote this variant of LRR by LRR2 and, accordingly, our algorithm equipped with such a $\boldsymbol{Y}$ is denoted OLRSC2.
\vspace{0.1in}
\noindent{\bf Evaluation Metric.} \
We evaluate the fitness of the recovered subspace basis $\boldsymbol{D}$ (with each column normalized) against the ground truth $\boldsymbol{L}$ by the Expressed Variance (EV)~\cite{xu2010principal}:
\begin{align}
\text{EV}(\boldsymbol{D}, \boldsymbol{L}) \stackrel{\text{def}}{=} \frac{\tr(\boldsymbol{D}\bD^\top \boldsymbol{L}\bL^\top)}{\tr(\boldsymbol{L}\bL^\top)}.
\end{align}
The value of EV scales between 0 and 1, and a higher value means better recovery.
The performance of subspace clustering is measured by clustering accuracy, which also ranges in the interval $[0, 1]$, and a higher value indicates a more accurate clustering.
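Both metrics are straightforward to compute; for instance, EV is a one-liner (our own NumPy sketch):
\begin{verbatim}
import numpy as np

def expressed_variance(D, L):
    # EV(D, L) = tr(D D^T L L^T) / tr(L L^T); values close to 1
    # indicate that D spans the ground-truth subspaces well.
    return np.trace(D @ D.T @ L @ L.T) / np.trace(L @ L.T)
\end{verbatim}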
\vspace{0.1in}
\noindent{\bf Parameters.} \
We set $\lambda_1 = 1$, $\lambda_2 = {1}/{\sqrt{p}}$ and $\lambda_3 = \sqrt{t/p}$, where $t$ is the iteration counter. These settings are actually used in ORPCA. We follow the default parameter setting for the baselines.
\subsection{Subspace Recovery}\label{subsec:recovery}
\noindent{\bf Simulation Data.} \
We use 4 disjoint subspaces $\{\mathcal{S}_k \}_{k=1}^4 \subset \mathbb{R}^p$, whose bases are denoted by $\{ L_k \}_{k=1}^4 \in \mathbb{R}^{p \times d_k}$. The clean data matrix $\bar{\boldsymbol{Z}}_k \in \mathcal{S}_k$ is then produced by $\bar{\boldsymbol{Z}}_k = L_k R_k^\top$, where $R_k \in \mathbb{R}^{n_k \times d_k}$. The entries of the $L_k$'s and $R_k$'s are sampled i.i.d.\ from the normal distribution. Finally, the observed data matrix $\boldsymbol{Z}$ is generated by $\boldsymbol{Z} = \bar{\boldsymbol{Z}} + \boldsymbol{E}$, where $\bar{\boldsymbol{Z}}$ is the column-wise concatenation of the $\bar{\boldsymbol{Z}}_k$'s followed by a random permutation, and $\boldsymbol{E}$ is the sparse corruption, a $\rho$ fraction of whose entries are non-zero and drawn i.i.d.\ uniformly over $[-2, 2]$. We independently conduct each experiment 10 times and report the averaged results.
\vspace{0.1in}
\noindent{\bf Robustness.} \
We illustrate by simulation that OLRSC can effectively recover the underlying subspaces, confirming that $\boldsymbol{D}_t$ converges to the union of subspaces. For the two online algorithms, OLRSC and ORPCA, we compute the EV after revealing all the samples. We examine the performance under different intrinsic dimensions $d_k$ and corruption levels $\rho$. In detail, the $d_k$'s are varied from $0.01p$ to $0.1p$ with a step size of $0.01p$, and $\rho$ from 0 to 0.5 with a step size of 0.05.
\begin{figure}[h!]
\centering
\includegraphics[width=0.24\linewidth]{diff_n_4000_olrsc.eps}
\includegraphics[width=0.24\linewidth]{diff_n_4000_orpca.eps}
\includegraphics[width=0.24\linewidth]{diff_n_4000_lrr.eps}
\includegraphics[width=0.24\linewidth]{diff_n_4000_pcp.eps}
\caption{{\bf Subspace recovery under different intrinsic dimensions and corruptions.} Brighter is better. We set $p = 100$, $n_k = 1000$ and $d = 4d_k$. LRR and PCP are batch methods. OLRSC consistently outperforms ORPCA and even improves on the performance of LRR. Compared to PCP, OLRSC is competitive in most cases and degrades a little for highly corrupted data, possibly because the number of samples is not sufficient for its convergence.
}
\label{fig:diff_rank_rho}
\end{figure}
The results are presented in Figure~\ref{fig:diff_rank_rho}. The most intriguing observation is that OLRSC, an online algorithm, outperforms its batch counterpart LRR! Such improvement may come from the explicit modeling of the basis, which makes OLRSC more informative than LRR. Interestingly,~\cite{guo2014online} also observed that in some situations an online algorithm can outperform its batch counterpart. Fully understanding the rationale behind this phenomenon is an important direction for future research. Notably, OLRSC consistently beats ORPCA (an online version of PCP), which may be because OLRSC accounts for the fact that the data are produced by a union of small subspaces. While PCP works well in almost all scenarios, OLRSC degrades a little in difficult cases (high rank and corruption). This is not surprising, since Theorem~\ref{thm:stationary} is based on asymptotic analysis and hence we expect that OLRSC will converge to the true subspace after acquiring more samples.
\vspace{0.1in}
\noindent{\bf Convergence Rate.} \
Now we test on a large dataset to show that our algorithm usually converges to the true subspace faster than ORPCA. We plot the EV curve against the number of samples in Figure~\ref{fig:ev_samples}. Firstly, when equipped with a proper matrix $\boldsymbol{Y}$, OLRSC2 and LRR2 can always produce an exact recovery of the subspace as PCP does. When using the dataset itself for $\boldsymbol{Y}$, OLRSC still converges to a favorable point after revealing all the samples. Compared to ORPCA, OLRSC is more robust and converges much faster for hard cases (see, e.g., $\rho = 0.5$). Again, we note that in such hard cases, OLRSC outperforms LRR, which agrees with the observation in Figure~\ref{fig:diff_rank_rho}.
\begin{figure}[h!]
\centering
{\includegraphics[width=0.24\linewidth]{ev_samples_rho_0.01.eps}}
{\includegraphics[width=0.24\linewidth]{ev_samples_rho_0.3.eps}}
{\includegraphics[width=0.24\linewidth]{ev_samples_rho_0.5.eps}}
{\includegraphics[width=0.24\linewidth]{ev_samples_time.eps}}
\caption{{\bf Convergence rate and time complexity.} A higher EV means better subspace recovery. We set $p = 1000$, $n_k = 5000$, $d_k = 25$ and $d = 100$. OLRSC always converges to, or outperforms, its batch counterpart LRR. For hard cases, OLRSC converges much faster than ORPCA. Both PCP and LRR2 achieve the best EV value. When equipped with the same dictionary as LRR2, OLRSC2 also handles highly corrupted data well ($\rho=0.5$). Our methods are more efficient than all competitors except PCP when $\rho$ is small, possibly because PCP utilizes a highly optimized C++ toolkit while ours are written in Matlab.
}
\label{fig:ev_samples}
\end{figure}
\vspace{0.1in}
\noindent{\bf Computational Efficiency.} We also illustrate the time complexity of the algorithms in the last panel of Figure~\ref{fig:ev_samples}. In short, our algorithms (OLRSC and OLRSC2) admit the lowest computational cost in all cases. One may argue that PCP spends slightly less time than ours for small $\rho$ (0.01 and 0.1). However, we remark that PCP utilizes a highly optimized C++ toolkit to boost computation while our algorithms are {fully} written in Matlab. We believe that ours would run more efficiently if properly optimized by, {e.g.,} the blas routine. Another important message conveyed by the figure is that OLRSC is orders of magnitude more computationally efficient than the batch method LRR, while producing comparable or even better solutions.
\subsection{Subspace Clustering}\label{subsec:clustering}
\noindent{\bf Datasets.} \
We examine the performance of subspace clustering on 5 realistic databases shown in Table~\ref{tb:dataset}, which can be downloaded from the LibSVM website. For MNIST, we randomly select 20,000 samples to form MNIST-20K, since we find it time-consuming to run the batch methods on the entire database.
\begin{table}[h]
\centering
\caption{\bf Datasets for subspace clustering.}
\begin{tabular}{lccc}
\toprule
& \#classes & \#samples & \#features \\
\midrule
Mushrooms & 2 & 8,124 & 112 \\
DNA & 3 & 3,186 & 180 \\
Protein & 3 & 24,387 & 357 \\
USPS & 10 & 9,298 & 256 \\
MNIST-20K & 10 & 20,000 & 784 \\
\bottomrule
\end{tabular}
\label{tb:dataset}
\end{table}
\vspace{0.1in}
\noindent{\bf Standard Clustering Pipeline.} \
In order to focus on the solution quality of the different algorithms, we follow the standard pipeline, which feeds $\boldsymbol{X}$ to a spectral clustering algorithm~\cite{ng2002spectral}. To this end, we collect all the $\boldsymbol{u}$'s and $\boldsymbol{v}$'s produced by OLRSC to form the representation matrix $\boldsymbol{X}=\boldsymbol{U}\boldsymbol{V}^\top$. For ORPCA, we use $\boldsymbol{R}_0\boldsymbol{R}_0^\top$ as the similarity matrix~\cite{liu2013robust}, where $\boldsymbol{R}_0$ is the row space of $\boldsymbol{Z}_0=\boldsymbol{L}_0\boldsymbol{\Sigma}_0\boldsymbol{R}_0^\top$ and $\boldsymbol{Z}_0$ is the clean matrix recovered by ORPCA. We run our algorithm and ORPCA for 2 epochs so as to apply a backward correction to the coefficients ($\boldsymbol{U}$ and $\boldsymbol{V}$ in ours and $\boldsymbol{R}_0$ in ORPCA).
\vspace{0.1in}
\noindent{\bf Fully Online Pipeline.} \
As we discussed in Section~\ref{sec:alg}, the (optional) spectral clustering procedure needs the similarity matrix $\boldsymbol{X}$, making the memory cost proportional to $n^2$. To tackle this issue, we proposed a fully online scheme whose key idea is to perform $k$-means on $\boldsymbol{V}$. Here, we examine the efficacy of this variant, which we call OLRSC-F.
\begin{table}[h]
\caption{{\bf Clustering accuracy (\%) and computational time (seconds).} For each dataset, the first row indicates the accuracy and the second row the running time. For all the large-scale datasets, OLRSC (or OLRSC-F) has the highest clustering accuracy. Regarding the running time, our method spends a comparable amount of time to ORPCA~(the fastest solver) while dramatically improving the accuracy. Although SSC is slightly better than OLRSC on Protein, it consumes one hour while OLRSC takes 25 seconds.
}
\centering
\begin{tabular}{lccccc}
\toprule
& OLRSC &OLRSC-F & ORPCA & LRR & SSC \\
\midrule
Mush- & {85.09} & {\bf 89.36} & 65.26 & 58.44 & 54.16 \\
rooms & 8.78 & 8.78 & 8.30 & 46.82 & 32 min\\
\midrule
\multirow{2}{*}{DNA} & {67.11} & {\bf 83.08} & 53.11 & 44.01 & 52.23\\
& 2.58 & 2.58 & 2.09 & 23.67 & 3 min\\
\midrule
\multirow{2}{*}{Protein} & { 43.30} & 43.94 & 40.22 & 40.31 & {\bf 44.27}\\
& 24.66 & 24.66 & 22.90 & 921.58 & 65 min\\
\midrule
\multirow{2}{*}{USPS} & {65.95} & {\bf 70.29} & 55.70 & 52.98 & 47.58\\
& 33.93 & 33.93 & 27.01 & 257.25 & 50 min\\
\midrule
MNIST- & {\bf 57.74} & 55.50 & 54.10 & 55.23 & 43.91\\
20K & 129 & 129 & 121 & 32 min & 7 hours\\
\bottomrule
\end{tabular}
\label{tb:sc}
\end{table}
The results are recorded in Table~\ref{tb:sc}, where the time cost of spectral clustering or $k$-means is not included, so that we can focus on comparing the efficiency of the algorithms themselves. Also note that we use the dataset itself as the dictionary $\boldsymbol{Y}$, because we find that an alternative choice of $\boldsymbol{Y}$ does not help much on this task. OLRSC and ORPCA require an estimate of the true rank; here we use $5k$, where $k$ is the number of classes of a dataset. Our algorithm significantly outperforms the two state-of-the-art methods LRR and SSC in both accuracy and efficiency. One may argue that SSC is slightly better than OLRSC on Protein. Yet, it spends 1 hour while OLRSC costs only 25 seconds; hence, SSC is not practical. Compared to ORPCA, OLRSC always identifies more correct samples while consuming comparable running time. For example, on the USPS dataset, OLRSC achieves an accuracy of 65.95\% while that of ORPCA is 55.7\%. Regarding the running time, OLRSC uses only 7 seconds more than ORPCA~--~the same order of computational complexity, which agrees with the qualitative analysis in Section~\ref{sec:alg}.
\begin{table}[h]
\caption{{\bf Time cost (seconds) of spectral clustering and $k$-means.}}
\centering
\begin{tabular}{lccccc}
\toprule
& Mushrooms & DNA & Protein & USPS & MNIST-20K\\
\midrule
Spectral & 295 & 18 & 7567 & 482 & 4402\\
$k$-means & 2 & 6 & 5 & 19 & 91\\
\bottomrule
\end{tabular}
\label{tab:fully_online}
\end{table}
More interestingly, Table~\ref{tb:sc} shows that the $k$-means alternative (OLRSC-F) usually outperforms the spectral clustering pipeline. This suggests that, at least for {\em robust} subspace clustering formulations, the simple $k$-means paradigm suffices to guarantee an appealing result. We also report the running time of spectral clustering and $k$-means in Table~\ref{tab:fully_online}. As expected, since spectral clustering computes the SVD of an $n$-by-$n$ similarity matrix, it is quite slow; in fact, it sometimes dominates the running time of the whole pipeline. In contrast, $k$-means is extremely fast and scalable, as it can be implemented in an online fashion.
\subsection{Influence of $d$}
A key ingredient of our formulation is the factorization of the nuclear norm regularized matrix, which requires an estimate of the rank of $\boldsymbol{X}$ (see \eqref{eq:nuclear reform}). Here we examine the influence of the choice of $d$ (which serves as an upper bound on the true rank). We report both EV and clustering accuracy for different $d$ under a range of corruptions. The simulation data are generated as in Section~\ref{subsec:recovery}, and we set $p=200$, $n_k=1000$ and $d_k = 10$. Since the four subspaces are disjoint, the true rank is 40.
\begin{figure}[h]
\centering
{\includegraphics[width=0.3\linewidth]{diff_d_EV.eps}}
{\includegraphics[width=0.3\linewidth]{diff_d_ClusteringAccuracy.eps}}
\caption{{\bf Examine the influence of ${d}$.} We experiment on $d = \{ 2, 20, 40, 60, 80, 100, 120, 140, 160, 180 \}$. The true rank is 40. }
\label{fig:diff_d}
\end{figure}
From Figure~\ref{fig:diff_d}, we observe that our algorithm cannot recover the true subspace if $d$ is smaller than the true rank. On the other hand, when $d$ is sufficiently large (at least as large as the true rank), our algorithm can perfectly estimate the subspace. This agrees with the results in~\cite{burer2005local}, which state that as long as $d$ is large enough, any local minimum is a global optimum. We also illustrate the influence of $d$ on subspace clustering. Generally speaking, OLRSC can consistently identify the clusters of the data points if $d$ is sufficiently large. Interestingly, and in contrast to the subspace recovery task, here the requirement on $d$ seems to be slightly relaxed. In particular, we notice that even if we pick $d$ as 20 (smaller than the true rank), OLRSC still performs well. This relaxed requirement on $d$ may benefit from the fact that the spectral clustering step can correct some wrongly assigned points, as suggested by~\cite{soltanolkotabi2014robust}.
\section{Conclusion}\label{sec:conclusion}
In this paper, we have proposed an online algorithm termed OLRSC for subspace clustering, which dramatically reduces the memory cost of LRR from $\mathcal{O}(n^2)$ to $\mathcal{O}(pd)$~--~orders of magnitude more memory efficient. One of the key techniques in this work is an explicit basis modeling, which essentially renders the model more informative than LRR. Another important component is a non-convex reformulation of the nuclear norm. Combining these techniques allows OLRSC to simultaneously recover the union of the subspaces, identify possible corruptions and perform subspace clustering. We have also established a theoretical guarantee that the solutions produced by our algorithm converge to a stationary point of the expected loss function. Moreover, we have analyzed the time complexity and empirically demonstrated that our algorithm is computationally very efficient compared to competing baselines. Our extensive experimental study on synthetic and realistic datasets also illustrates the robustness of OLRSC. In a nutshell, OLRSC is appealing in all three respects: memory cost, computation and robustness.
\clearpage
\section{Introduction}
The Higgs boson mass in the Standard
Model (SM) is not stable against quantum corrections
and has quadratic divergences. Because the reduced Planck
scale is about 16 orders of magnitude larger than the electroweak (EW) scale,
huge fine-tuning is needed to obtain the EW-scale
Higgs boson mass; this is called the gauge hierarchy problem.
Supersymmetry is a symmetry between the bosonic and fermionic
states, and it naturally solves this
problem due to the cancellations between the bosonic and
fermionic quantum corrections.
In the Minimal Supersymmetric Standard Model (MSSM) with
$R$ parity under which the SM particles are even
while the supersymmetric particles (sparticles)
are odd, the $SU(3)_C\times SU(2)_L\times U(1)_Y$ gauge
couplings can be unified around $2\times 10^{16}$
GeV~\cite{Langacker:1991an}, the lightest supersymmetric
particle (LSP) such as the
neutralino can be a cold dark matter
candidate~\cite{Ellis:1983ew, Goldberg:1983nd},
and the EW precision constraints can be
evaded, etc. Especially, the gauge coupling unification strongly
suggests Grand Unified Theories (GUTs), which can explain
the SM fermion quantum numbers.
However, in the supersymmetric $SU(5)$ models,
there exist the doublet-triplet splitting problem and
dimension-five proton decay problem. Interestingly, these problems
can be solved elegantly in the flipped $SU(5)\times U(1)_X$
models via missing partner mechanism~\cite{smbarr, dimitri, AEHN-0}.
Previously, the flipped $SU(5)\times U(1)_X$ models have been
constructed systematically in the free fermionic string
constructions at the Kac-Moody level one~\cite{Antoniadis:1988tt, Lopez:1992kg}.
To solve
the little hierarchy problem between the traditional unification scale
and the string scale, two of us (TL and DVN) with Jiang have proposed the
testable flipped $SU(5)\times U(1)_X$ models, where the
TeV-scale vector-like particles are introduced~\cite{Jiang:2006hf}.
There is a two-step unification: the $SU(3)_C\times SU(2)_L$
gauge couplings are unified at the scale $M_{32}$ around
the usual GUT scale, and the $SU(5)\times U(1)_X$ gauge
couplings are unified at the final unification scale $M_{\cal{F}}$
around $5\times 10^{17}$ GeV~\cite{Jiang:2006hf}. Moreover,
such kind of models have been constructed locally from
the F-theory model building~\cite{Beasley:2008dc, Jiang:2009zza},
and are dubbed as ${\cal F}$-$SU(5)$~\cite{Jiang:2009zza}.
In particular, these models are very
interesting from the phenomenological point of view~\cite{Jiang:2009zza}:
the vector-like particles can be observed at the Large Hadron Collider (LHC),
proton decay is within the reach of the future
Hyper-Kamiokande~\cite{Nakamura:2003hk} and
Deep Underground Science and Engineering
Laboratory (DUSEL)~\cite{DUSEL} experiments~\cite{Li:2009fq, Li:2010dp},
the hybrid inflation can be naturally realized, and the
correct cosmic primordial density fluctuations can be
generated~\cite{Kyae:2005nv}.
With No-Scale boundary conditions
at $M_{\cal{F}}$~\cite{Cremmer:1983bf},
two of us (TL and DVN) with Maxin and Walker have
described an extraordinarily constrained ``golden point''~\cite{Li:2010ws}
and ``golden strip''~\cite{Li:2010mi} that satisfy all the latest
experimental constraints and has an imminently observable proton
decay rate~\cite{Li:2009fq}. Especially, the UV boundary condition $B_{\mu}=0$
gives a very strong constraint on the viable parameter space, where $B_{\mu}$
is the soft bilinear Higgs mass term in the MSSM.
In addition, exploiting a ``Super-No-Scale''
condition, we dynamically determined the universal gaugino mass
$M_{1/2}$ and the ratio of the Higgs Vacuum Expectation
Values (VEVs) $\tan\beta$.
Since $M_{1/2}$ is related to the modulus field of
the internal space in string models, we stabilized the modulus
dynamically~\cite{Li:2010uu, Li:2011dw}. Interestingly,
the sparticle spectra
generically have a light stop and gluino, which are lighter than all the
other squarks.
Thus, we can test such kinds of models at the LHC
by looking for the ultra high jet signals~\cite{Li:2011hr, Maxin:2011hy}.
Moreover, the complete
viable parameter space in no-scale $\cal{F}$-$SU(5)$ has been
studied by considering a set of ``bare minimal'' experimental
constraints~\cite{Li:2011xu}. For the other LHC and dark matter
phenomenological studies,
see Refs.~\cite{Li:2011in, Li:2011gh, Li:2011rp}.
It is well known that one of main LHC goals is to detect the SM or SM-like
Higgs boson. Recently, both the CMS~\cite{CMSLP} and ATLAS~\cite{ATLASLP} collaborations
have presented their combined searches for the SM Higgs boson, based on
the integrated luminosities between $1~{\rm fb}^{-1}$ and $2.3~{\rm fb}^{-1}$
depending on the search channels. For the light SM Higgs boson mass region
preferred by the EW precision data, they have excluded the SM Higgs boson
with mass larger than 145 GeV and 146 GeV, respectively.
In the no-scale $\cal{F}$-$SU(5)$,
the lightest CP-even Higgs boson mass is generically about 120 GeV if the
contributions from the vector-like particles are neglected~\cite{LMNW-P}.
Thus, the interesting question is whether the lightest CP-even Higgs boson
can have mass up to 146 GeV naturally if we include the contributions from
the additional vector-like particles.
In this paper, we consider five kinds of testable
flipped $SU(5)\times U(1)_X$ models from F-theory.
Two kinds of models only have vector-like particles around the TeV scale.
Because the gauge mediated supersymmetry breaking can be realized naturally in
the F-theory GUTs~\cite{Heckman:2008qt},
we also introduce vector-like particles
with mass around $10^{11}~{\rm GeV}$~\cite{Heckman:2008qt}, which can be considered
as messenger fields, in the other three kinds of models. We require that
the Yukawa couplings for the TeV-scale vector-like particles and
the third family of the SM fermions remain smaller than three from the EW scale to
the scale $M_{32}$, as required by the perturbative bound, {\it i.e.}, that the
squares of the Yukawa couplings are less than $4\pi$.
With the two-loop Renormalization Group
Equation (RGE) running for the gauge couplings and Yukawa couplings,
we obtain the maximal Yukawa couplings for
the TeV-scale vector-like particles. To calculate the lightest CP-even
Higgs boson mass upper bounds, we employ the Renormalization Group (RG) improved
effective Higgs potential approach, and consider the two-loop leading contributions
in the MSSM and one-loop contributions from the TeV-scale
vector-like particles. For simplicity, we assume that the
mixings both between the stops and between the TeV-scale vector-like scalars are
maximal. In general, we shall increase the lightest CP-even Higgs boson
mass upper bounds if we increase the supersymmetry breaking scale
or decrease the TeV-scale vector-like particle masses.
The numerical results for our five kinds of models are roughly the same.
For the TeV-scale vector-like particles and sparticles with masses
around 1~TeV, we show that the lightest
CP-even Higgs boson can have mass up to 146 GeV naturally.
This paper is organized as follows. In Section II, we briefly
review the testable flipped $SU(5)\times U(1)_X$ models from
F-theory and present five kinds of models. We calculate the
lightest CP-even Higgs boson mass upper bounds
in Section III. Section IV is our conclusion.
In Appendices, we present all the RGEs in five kinds of models.
\section{Testable Flipped $SU(5)\times U(1)_X$ Models from F-Theory}
We first briefly review the minimal flipped
$SU(5)$ model~\cite{smbarr, dimitri, AEHN-0}.
The gauge group for flipped $SU(5)$ model is
$SU(5)\times U(1)_{X}$, which can be embedded into $SO(10)$ model.
We define the generator $U(1)_{Y'}$ in $SU(5)$ as
\begin{eqnarray}
T_{\rm U(1)_{Y'}}={\rm diag} \left(-{1\over 3}, -{1\over 3}, -{1\over 3},
{1\over 2}, {1\over 2} \right).
\label{u1yp}
\end{eqnarray}
The hypercharge is given by
\begin{eqnarray}
Q_{Y} = {1\over 5} \left( Q_{X}-Q_{Y'} \right).
\label{ycharge}
\end{eqnarray}
There are three families of the SM fermions
whose quantum numbers under $SU(5)\times U(1)_{X}$ are
\begin{eqnarray}
F_i={\mathbf{(10, 1)}},~ {\bar f}_i={\mathbf{(\bar 5, -3)}},~
{\bar l}_i={\mathbf{(1, 5)}},
\label{smfermions}
\end{eqnarray}
where $i=1, 2, 3$. The SM particle assignments in $F_i$, ${\bar f}_i$
and ${\bar l}_i$ are
\begin{eqnarray}
F_i=(Q_i, D^c_i, N^c_i),~{\overline f}_i=(U^c_i, L_i),~{\overline l}_i=E^c_i~,~
\label{smparticles}
\end{eqnarray}
where $Q_i$ and $L_i$ are respectively the superfields of the left-handed
quark and lepton doublets, $U^c_i$, $D^c_i$, $E^c_i$ and $N^c_i$ are the
$CP$ conjugated superfields for the right-handed up-type quarks,
down-type quarks, leptons and neutrinos, respectively.
To generate the heavy right-handed neutrino masses, we introduce
three SM singlets $\phi_i$~\cite{Georgi:1979dq}.
To break the GUT and electroweak gauge symmetries, we introduce two pairs
of Higgs representations
\begin{eqnarray}
H={\mathbf{(10, 1)}},~{\overline{H}}={\mathbf{({\overline{10}}, -1)}},
~h={\mathbf{(5, -2)}},~{\overline h}={\mathbf{({\bar {5}}, 2)}}.
\label{Higgse1}
\end{eqnarray}
We label the states in the $H$ multiplet by the same symbols as in
the $F$ multiplet, and for ${\overline H}$ we just add ``bar'' above the fields.
Explicitly, the Higgs particles are
\begin{eqnarray}
H=(Q_H, D_H^c, N_H^c)~,~
{\overline{H}}= ({\overline{Q}}_{\overline{H}}, {\overline{D}}^c_{\overline{H}},
{\overline {N}}^c_{\overline H})~,~\,
\label{Higgse2}
\end{eqnarray}
\begin{eqnarray}
h=(D_h, D_h, D_h, H_d)~,~
{\overline h}=({\overline {D}}_{\overline h}, {\overline {D}}_{\overline h},
{\overline {D}}_{\overline h}, H_u)~,~\,
\label{Higgse3}
\end{eqnarray}
where $H_d$ and $H_u$ are one pair of Higgs doublets in the MSSM.
We also add one SM singlet $\Phi$.
To break the $SU(5)\times U(1)_{X}$ gauge symmetry down to the SM
gauge symmetry, we introduce the following Higgs superpotential at the GUT scale
\begin{eqnarray}
{\it W}_{\rm GUT}=\lambda_1 H H h + \lambda_2 {\overline H} {\overline H} {\overline
h} + \Phi ({\overline H} H-M_{\rm H}^2)~.~
\label{spgut}
\end{eqnarray}
There is only
one F-flat and D-flat direction, which can always be rotated along
the $N^c_H$ and ${\overline {N}}^c_{\overline H}$ directions. So, we obtain that
$\langle N^c_H\rangle=\langle{\overline {N}}^c_{\overline H}\rangle=M_{\rm H}$. In addition, the
superfields $H$ and ${\overline H}$ are eaten and acquire large masses via
the supersymmetric Higgs mechanism, except for $D_H^c$ and
${\overline {D}}^c_{\overline H}$. The superpotential $ \lambda_1 H H h$ and
$ \lambda_2 {\overline H} {\overline H} {\overline h}$ couple the $D_H^c$ and
${\overline {D}}^c_{\overline H}$ with the $D_h$ and ${\overline {D}}_{\overline h}$,
respectively, to form the massive eigenstates with masses
$2 \lambda_1 \langle N_H^c\rangle$ and $2 \lambda_2 \langle{\overline {N}}^c_{\overline H}\rangle$. So, we
naturally have the doublet-triplet splitting due to the missing
partner mechanism~\cite{AEHN-0}.
Because the triplets in $h$ and ${\overline h}$ only have
small mixing through the $\mu$ term, the Higgsino-exchange mediated
proton decay is negligible, {\it i.e.},
we do not have the dimension-5 proton
decay problem.
The SM fermion masses are from the following
superpotential
\begin{eqnarray}
{ W}_{\rm Yukawa} = y_{ij}^{D}
F_i F_j h + y_{ij}^{U \nu} F_i {\overline f}_j {\overline
h}+ y_{ij}^{E} {\overline l}_i {\overline f}_j h + \mu h {\overline h}
+ y_{ij}^{N} \phi_i {\overline H} F_j~,~\,
\label{potgut}
\end{eqnarray}
where $y_{ij}^{D}$, $y_{ij}^{U \nu}$, $y_{ij}^{E}$ and $y_{ij}^{N}$
are Yukawa couplings, and $\mu$ is the bilinear Higgs mass term.
After the $SU(5)\times U(1)_X$ gauge symmetry is broken down to the SM gauge
symmetry, the above superpotential gives
\begin{eqnarray}
{ W_{SSM}}&=&
y_{ij}^{D} D^c_i Q_j H_d+ y_{ji}^{U \nu} U^c_i Q_j H_u
+ y_{ij}^{E} E^c_i L_j H_d+ y_{ij}^{U \nu} N^c_i L_j H_u \nonumber \\
&& + \mu H_d H_u+ y_{ij}^{N}
\langle {\overline {N}}^c_{\overline H} \rangle \phi_i N^c_j
+ \cdots (\textrm{decoupled below $M_{GUT}$}).
\label{poten1}
\end{eqnarray}
Similar to the flipped $SU(5)\times U(1)_X$ models
with string-scale gauge coupling
unification~\cite{Jiang:2006hf, Lopez:1995cs},
we introduce vector-like particles which form complete
flipped $SU(5)\times U(1)_X$ multiplets.
The quantum numbers for these additional vector-like particles
under the $SU(5)\times U(1)_X$ gauge symmetry are
\begin{eqnarray}
&& XF ={\mathbf{(10, 1)}}~,~{\overline{XF}}={\mathbf{({\overline{10}}, -1)}}~,~\\
&& Xf={\mathbf{(5, 3)}}~,~{\overline{Xf}}={\mathbf{({\overline{5}}, -3)}}~,~\\
&& Xl={\mathbf{(1, -5)}}~,~{\overline{Xl}}={\mathbf{(1, 5)}}~,~\\
&& Xh={\mathbf{(5, -2)}}~,~{\overline{Xh}}={\mathbf{({\overline{5}}, 2)}}~,~ \\
&& XT ={\mathbf{(10, -4)}}~,~{\overline{XT}}={\mathbf{({\overline{10}}, 4)}}~.~\,
\end{eqnarray}
Moreover, the particle contents from the decompositions of
$XF$, ${\overline{XF}}$, $Xf$, ${\overline{Xf}}$,
$Xl$, ${\overline{Xl}}$, $Xh$, ${\overline{Xh}}$,
$XT$, and ${\overline{XT}}$, under the SM gauge
symmetry are
\begin{eqnarray}
&& XF = (XQ, XD^c, XN^c)~,~ {\overline{XF}}=(XQ^c, XD, XN)~,~\\
&& Xf=(XU, XL^c)~,~ {\overline{Xf}}= (XU^c, XL)~,~\\
&& Xl= XE~,~ {\overline{Xl}}= XE^c~,~\\
&& Xh=(XD, XL)~,~ {\overline{Xh}}= (XD^c, XL^c)~,~\\
&& XT = (XY, XU^c, XE)~,~ {\overline{XT}}=(XY^c, XU, XE^c)~.~\,
\end{eqnarray}
Under the $SU(3)_C \times SU(2)_L \times U(1)_Y$ gauge
symmetry, the quantum numbers for the extra vector-like
particles are
\begin{eqnarray}
&& XQ={\mathbf{(3, 2, {1\over 6})}}~,~
XQ^c={\mathbf{({\bar 3}, 2,-{1\over 6})}} ~,~\\
&& XU={\mathbf{({3},1, {2\over 3})}}~,~
XU^c={\mathbf{({\bar 3}, 1, -{2\over 3})}}~,~\\
&& XD={\mathbf{({3},1, -{1\over 3})}}~,~
XD^c={\mathbf{({\bar 3}, 1, {1\over 3})}}~,~\\
&& XL={\mathbf{({1}, 2,-{1\over 2})}}~,~
XL^c={\mathbf{(1, 2, {1\over 2})}}~,~\\
&& XE={\mathbf{({1}, 1, {-1})}}~,~
XE^c={\mathbf{({1}, 1, {1})}}~,~\\
&& XN={\mathbf{({1}, 1, {0})}}~,~
XN^c={\mathbf{({1}, 1, {0})}}~,~\\
&& XY={\mathbf{({3}, 2, -{5\over 6})}}~,~
XY^c={\mathbf{({\bar 3}, 2, {5\over 6})}} ~.~\
\end{eqnarray}
To separate the mass scales $M_{32}$ and $M_{\cal F}$ in our F-theory
flipped $SU(5)\times U(1)_X$ models,
we need to introduce sets of vector-like particles
around the TeV scale or intermediate scale whose contributions to the one-loop
beta functions satisfy $\Delta b_1 < \Delta b_2 = \Delta b_3$.
To avoid the Landau pole problem, we have shown that there are
only the following five possible sets of vector-like
particles, due to the quantization of the one-loop beta
functions~\cite{Jiang:2006hf}
\begin{eqnarray}
&& Z0: XF+{\overline{XF}}~;~\\
&& Z1: XF+{\overline{XF}}+Xl+{\overline{Xl}} ~;~\\
&& Z2: XF+{\overline{XF}}+Xf+{\overline{Xf}} ~;~\\
&& Z3: XF+{\overline{XF}} + Xl+{\overline{Xl}}
+Xh+{\overline{Xh}} ~;~\\
&& Z4: XF+{\overline{XF}}+Xh+{\overline{Xh}}~.~\,
\end{eqnarray}
We have systematically constructed flipped $SU(5)\times U(1)_X$ models with
generic sets of vector-like particles around the TeV scale and/or
around the intermediate scale from F-theory. In addition,
gauge mediated supersymmetry breaking can be realized naturally in
the F-theory GUTs~\cite{Heckman:2008qt}, and there may exist vector-like particles as
the messenger fields at the intermediate scale around $10^{11}$~GeV~\cite{Heckman:2008qt}.
Therefore, in this paper, we shall calculate the lightest CP-even Higgs boson
mass in five kinds of the flipped $SU(5)\times U(1)_X$ models
from F-theory: (i) In Model I, we introduce the $Z0$ set of
vector-like particles $(XF, ~{\overline{XF}})$ at the TeV scale,
and we shall add superheavy vector-like particles around $M_{32}$ so
that the $SU(5)\times U(1)_X$ unification scale is smaller than
the reduced Planck scale; (ii) In Model II, we introduce the
vector-like particles $(XF, ~{\overline{XF}})$ at the TeV scale
and the vector-like particles $(Xf, ~{\overline{Xf}})$ at the
intermediate scale which can be considered as the messenger fields;
(iii) In Model III, we introduce the
vector-like particles $(XF, ~{\overline{XF}})$ at the TeV scale
and the vector-like particles $(Xf, ~{\overline{Xf}})$
and $(Xl, ~{\overline{Xl}})$ at the
intermediate scale; (iv)
In Model IV, we introduce the $Z1$ set of
vector-like particles $(XF, ~{\overline{XF}})$ and
$(Xl, ~{\overline{Xl}})$ at the TeV scale;
(v) In Model V, we introduce the
vector-like particles $(XF, ~{\overline{XF}})$ and
$(Xl, ~{\overline{Xl}})$ at the TeV scale, and
the vector-like particles $(Xf, ~{\overline{Xf}})$ at the
intermediate scale.
In particular, we emphasize that
the vector-like particles at the intermediate scale in Models II, III, and V
will give us the generalized gauge mediated supersymmetry breaking
if they are the messenger fields~\cite{Li:2010hi}. By the way,
if we introduce the vector-like particles $(Xh, ~{\overline{Xh}})$ at
the intermediate scale which are the traditional messenger fields in gauge mediation,
the discussions are similar and the numerical results are almost the same.
Thus, we will not study such kind of models here.
For simplicity, we assume that the masses for the vector-like particles around
the TeV scale or the intermediate scale are universal. Also, we denote the
universal mass for the vector-like particles at the TeV
scale as $M_V$, and the universal mass for the vector-like particles at the
intermediate scale as $M_I$. With this convention, we present the
vector-like particle contents
of our five kinds of models in Table~\ref{Model-PC}.
In the following discussions, we shall choose $M_{I}=1.0\times 10^{11}$~GeV.
Moreover, we will assume
universal supersymmetry breaking at low energy and denote
the universal supersymmetry breaking scale as $M_S$.
\begin{table}[htb]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Models & Vector-Like Particles at $M_V$ & Vector-Like Particles at $M_{I}$ \\
\hline
\hline
~Model I~& ~ ($XF$, $\overline{XF}$)
& \\
\hline
~Model II~& ~ ($XF$, $\overline{XF}$)
& ~($Xf$, $\overline{Xf}$) ~ \\
\hline
~Model III~& ~ ($XF$, $\overline{XF}$)
& ~($Xf$, $\overline{Xf}$),~ ($Xl$, $\overline{Xl}$)\\
\hline
~Model IV~& ~ ($XF$, $\overline{XF}$),~ ($Xl$, $\overline{Xl}$)~
& \\
\hline
~Model V~& ~ ($XF$, $\overline{XF}$),~ ($Xl$, $\overline{Xl}$)~
& ~($Xf$, $\overline{Xf}$) ~ \\
\hline
\end{tabular}
\end{center}
\caption{The vector-like particle contents in Model I, Model II, Model III,
Model IV, and Model V. }
\label{Model-PC}
\end{table}
It is well known that a few percent fine-tuning is needed for
the lightest CP-even Higgs boson mass in the MSSM to be
larger than 114.4 GeV. In all the above five kinds of models,
we have the vector-like particles $XF$ and $\overline{XF}$ at
the TeV scale. Then we can introduce the
following Yukawa interaction terms between the MSSM Higgs fields
and these vector-like particles in the superpotential in the
flipped $SU(5)\times U(1)_X$ models:
\begin{eqnarray}
W &=& {1\over2} Y_{xd} XF XF h + {1\over2}
Y_{xu} \overline{XF} \overline{XF} \overline{h}~,~\,
\end{eqnarray}
where $Y_{xd}$ and $Y_{xu}$ are Yukawa couplings.
After the gauge symmetry $SU(5)\times U(1)_X$ is broken down
to the SM gauge symmetry, we have the following relevant
Yukawa coupling terms in the superpotential
\begin{eqnarray}
W &=& Y_{xd} XQ XD^c H_d +
Y_{xu} XQ^c XD H_u ~.~\,
\end{eqnarray}
To have the upper bounds on the lightest CP-even
Higgs boson mass, we first need to calculate the
upper bounds on the Yukawa couplings $Y_{xu}$ and $Y_{xd}$.
In this paper, employing the two-loop RGE running,
we will require that all
the Yukawa couplings including $Y_{xu}$ and $Y_{xd}$ are smaller than three
(perturbative bound)
below the $SU(3)_C \times SU(2)_L$ unification
scale $M_{32}$ for simplicity since $M_{32}$ is close to
the $SU(5) \times U(1)_X$ unification scale $M_{\cal F}$.
Another point is that above the scale $M_{32}$
there might exist additional superheavy threshold corrections,
and then the RGE running for the gauge couplings and Yukawa couplings
might be rather complicated. Moreover, we will
not give the two-loop RGEs in the SM and
the MSSM, which can be easily found in the literature, for example,
in the Refs.~\cite{Barger:1992ac, Martin:1993zk}.
We shall present the RGEs in the SM with vector-like particles, and
Models I to V in the Appendices A, B, C,
D, E, and F, respectively.
\section{The Lightest CP-Even Higgs Boson Mass}
In our calculations, we employ the RG improved
effective Higgs potential approach. The two-loop leading
contributions to the lightest CP-even Higgs boson mass $m_h$
in the MSSM are~\cite{Okada:1990vk,Carena:1995bx}
\begin{eqnarray}
[m_h^2]_{\mbox{MSSM}}&=&M_Z^2\cos^22\beta(1-\frac{3}{8\pi^2}\frac{m_t^2}{v^2}t)\nonumber\\
&&+\frac{3}{4\pi^2}\frac{m_t^4}{v^2}[t+\frac{1}{2}X_t
+\frac{1}{(4\pi)^2}(\frac{3}{2}\frac{m_t^2}{v^2}-32\pi\alpha_s)(X_tt+t^2)],
\end{eqnarray}
where $M_Z$ is the $Z$ boson mass, $m_t$ is the $\overline{MS}$ top
quark mass, $v$ is the SM Higgs VEV,
and $\alpha_s$ is the strong coupling constant. Also, $t$ and $X_t$ are
given as follows
\begin{eqnarray}
t=\mbox{log}\frac{M_S^2}{M_t^2},~~~X_t=\frac{2{\tilde
A}_t^2}{M_S^2}(1-\frac{{\tilde A}_t^2}{12M_S^2}),~~~{\tilde A}_{t}=A_{t}-\mu\cot\beta,
\end{eqnarray}
where $M_t$ is the top quark pole mass,
and $A_t$ denotes the trilinear soft term for the top quark Yukawa coupling term.
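For concreteness, the following short Python sketch evaluates this two-loop formula numerically; it is an illustration only, with the conventions $v\simeq 174.1$~GeV and $\alpha_s(M_Z)\simeq 0.118$ assumed as inputs of the sketch rather than taken from the text.
\begin{verbatim}
import numpy as np

# Two-loop leading MSSM contribution to m_h^2, as quoted above.
M_Z, M_t, m_t = 91.1876, 172.9, 163.645  # Z mass, top pole / MS-bar mass (GeV)
v, alpha_s = 174.1, 0.1184               # assumed conventions for this sketch

def mh2_mssm(M_S, tan_beta, X_t):
    t = np.log(M_S**2 / M_t**2)
    cos2b2 = np.cos(2.0 * np.arctan(tan_beta))**2
    tree = M_Z**2 * cos2b2 * (1.0 - 3.0/(8*np.pi**2) * m_t**2/v**2 * t)
    loop = 3.0/(4*np.pi**2) * m_t**4/v**2 * (
        t + 0.5*X_t
        + 1.0/(4*np.pi)**2 * (1.5*m_t**2/v**2 - 32*np.pi*alpha_s)
          * (X_t*t + t**2))
    return tree + loop

print(np.sqrt(mh2_mssm(800.0, 20.0, 6.0)))  # ~122 GeV at maximal stop mixing
\end{verbatim}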
Moreover, we use the RG-improved one-loop effective Higgs potential
approach to calculate the contributions to the lightest CP-even Higgs boson
mass from the vector-like
particles~\cite{Babu:2008ge,Martin:2009bg}. Such contributions in our models are
\begin{eqnarray}
\Delta m_h^2&=&-\frac{N_c}{8\pi^2}M_Z^2\cos^22\beta({\hat
Y}_{xu}^2+{\hat Y}_{xd}^2)t_V+\frac{N_cv^2}{4\pi^2}\times\{{\hat
Y}_{xu}^4[t_V+\frac{1}{2}X_{xu}]\nonumber\\
&&+{\hat Y}_{xu}^3{\hat
Y}_{xd}[-\frac{2M_S^2(2M_S^2+M_V^2)}{3(M_S^2+M_V^2)^2}-\frac{{\tilde
A}_{xu}(2{\tilde A}_{xu}+{\tilde
A}_{xd})}{3(M_S^2+M_V^2)}]\nonumber\\
&&+{\hat Y}_{xu}^2{\hat
Y}_{xd}^2[-\frac{M_S^4}{(M_S^2+M_V^2)^2}-\frac{({\tilde
A}_{xu}+{\tilde A}_{xd})^2}{3(M_S^2+M_V^2)}]\nonumber\\
&&+{\hat Y}_{xu}{\hat
Y}_{xd}^3[-\frac{2M_S^2(2M_S^2+M_V^2)}{3(M_S^2+M_V^2)^2}-\frac{{\tilde
A}_{xd}(2{\tilde A}_{xd}+{\tilde A}_{xu})}{3(M_S^2+M_V^2)}]+{\hat
Y}_{xd}^4[t_V+\frac{1}{2}X_{xd}]\},\label{Delta mhs}
\end{eqnarray}
where
\begin{eqnarray}
&&{\hat Y}_{xu}=Y_{xu}\sin\beta,~~~~~{\hat
Y}_{xd}=Y_{xd}\cos\beta,~~~~~
t_V=\mbox{log}\frac{M_S^2+M_V^2}{M_V^2},\nonumber\\
&&X_{xu}=-\frac{2M_S^2(5M_S^2+4M_V^2)-4(3M_S^2+2M_V^2){\tilde
A}_{xu}^2+{\tilde A}_{xu}^4}{6(M_V^2+M_S^2)^2},\nonumber\\
&&X_{xd}=-\frac{2M_S^2(5M_S^2+4M_V^2)-4(3M_S^2+2M_V^2){\tilde
A}_{xd}^2+{\tilde A}_{xd}^4}{6(M_V^2+M_S^2)^2},\nonumber\\
&&{\tilde A}_{xu}=A_{xu}-\mu\cot\beta,~~~~~~~{\tilde
A}_{xd}=A_{xd}-\mu\tan\beta,
\end{eqnarray}
where ${A}_{xu}$ and ${A}_{xd}$ denote the
supersymmetry breaking trilinear soft terms for
the superpotential Yukawa terms $Y_{xu} XQ^c XD H_u $
and $Y_{xd} XQ XD^c H_d$, respectively.
The third, fourth, fifth, and sixth terms in Eq.~(\ref{Delta mhs})
are suppressed by the inverses of
$\tan\beta$, $\tan^2\beta$, $\tan^3\beta$, and
$\tan^4\beta$, respectively. To attain the lightest CP-even Higgs boson mass
upper bounds, our numerical calculations show that we typically need $\tan\beta \sim 22$.
In particular, in order to increase the
lightest CP-even Higgs boson mass, we should choose a relatively large $Y_{xu}$ and
a small $Y_{xd}$~\cite{Babu:2008ge,Martin:2009bg}. Thus, for simplicity,
we only keep the first and second terms
in our calculations, {\it i.e.}, the first line of Eq.~(\ref{Delta mhs}).
In order to have larger corrections to the lightest CP-even Higgs
boson mass, we consider the maximal mixings $X_t$ and
$X_{xu} $ respectively for both the stops and the TeV-scale vector-like
scalars, {\it i.e.}, $X_t=6$ with ${\tilde A}_t^2=6M_S^2$, and
$X_{xu}=\frac{8}{3}+\frac{M_S^2(5M_S^2+4M_V^2)}{3(M_S^2+M_V^2)^2}$
with ${\tilde A}_{xu}^2=6M_S^2+4M_V^2$.
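As a cross-check of the numbers reported below, a minimal Python sketch of the first line of Eq.~(\ref{Delta mhs}) (with $Y_{xd}=0$, maximal $X_{xu}$, and $N_c=3$ assumed for the colored states $XQ$, $XD$) reads:
\begin{verbatim}
import numpy as np

N_c, M_Z, v = 3, 91.1876, 174.1   # N_c = 3 assumed for the colored XQ, XD

def delta_mh2(M_S, M_V, tan_beta, Y_xu):
    # first line of Eq. (Delta mhs), Y_xd = 0, maximal mixing X_xu
    beta = np.arctan(tan_beta)
    Yhat2 = (Y_xu * np.sin(beta))**2
    t_V = np.log((M_S**2 + M_V**2) / M_V**2)
    X_xu = 8.0/3.0 + M_S**2*(5*M_S**2 + 4*M_V**2) / (3*(M_S**2 + M_V**2)**2)
    return (-N_c/(8*np.pi**2) * M_Z**2 * np.cos(2*beta)**2 * Yhat2 * t_V
            + N_c*v**2/(4*np.pi**2) * Yhat2**2 * (t_V + 0.5*X_xu))

print(delta_mh2(800.0, 400.0, 20.0, 1.03))
# ~8.7e3 GeV^2; together with the ~1.5e4 GeV^2 MSSM piece of the previous
# sketch this gives m_h close to the quoted 153.5 GeV for M_V = 400 GeV
\end{verbatim}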
\begin{figure}[h]
\begin{center}
\includegraphics[width=6in]{Fig1}
\end{center}
\vspace{-1cm}\caption{(color online). The upper bounds on the lightest
CP-even Higgs boson mass versus $\tan\beta$ for our five kinds of models with
$Y_{xd}=0$, $M_S=800$~GeV, and $M_I=1.0\times 10^{11}$~GeV. The upper lines,
middle lines, and lower lines are for $M_V=400$~GeV, 1000~GeV,
and 2000~GeV, respectively. }
\label{varying-with-tanbeta-yxd=0}
\end{figure}
In this Section, we shall calculate the lightest CP-even Higgs boson mass in
our five kinds of models. The relevant parameters are the universal supersymmetry
breaking scale $M_S$, the light vector-like particle mass $M_V$, the
intermediate scale $M_I$, the
mixing terms $X_t$ and $X_{xu}$ respectively for the stops and TeV-scale vector-like
scalars, and the two new Yukawa couplings for TeV-scale vector-like particles
$Y_{xu}$ and $Y_{xd}$.
Because we consider low energy supersymmetry, we choose $M_S$ from
360~GeV to 2~TeV. In order
to increase the lightest CP-even Higgs boson mass, we need to choose small $M_V$ as well.
The experimental lower bound on $M_V$ is about
325~GeV~\cite{Graham:2009gy}, so we will choose $M_V$ from 360~GeV to 2~TeV.
In our numerical calculations, we will use the SM input parameters at scale
$M_Z$ from Particle Data Group~\cite{Nakamura:2010zzi}. In particular, we use the updated
top quark pole mass $M_t=172.9$~GeV, and the corresponding
$\overline{MS}$ top quark mass $m_t=163.645$~GeV~\cite{Nakamura:2010zzi}.
In this paper, we require that all the Yukawa couplings for both the TeV-scale
vector-like particles and the third family of SM fermions
are less than three (perturbative bound) from the
EW scale to the scale $M_{32}$. To obtain the upper bounds on the Yukawa couplings
$Y_{xu}$ and $Y_{xd}$ at low energy, we consider the two-loop RGE running
for both the SM gauge couplings and all the Yukawa couplings. The only exception is
that when $M_V<M_S$, we use the two-loop RGE running
for the SM gauge couplings and
one-loop RGE running for all the Yukawa couplings
from $M_V$ to $M_S$, see Appendix A for details. Because in this case
$M_V$ is still close
to $M_S$, such small effects are negligible. After we obtain the upper bounds on
$Y_{xu}$ and $Y_{xd}$,
we use the maximal $Y_{xu}$ to
calculate the upper bounds on the lightest CP-even Higgs boson mass with the maximal
mixings for stops and TeV-scale
vector-like scalars.
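To make the scanning procedure concrete, the following toy Python sketch bisects for the maximal boundary value of a coupling at $M_V$; a single one-loop equation with an illustrative coefficient $b=6$ stands in here for the coupled two-loop system of the Appendices.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

M_V, M_32, Y_MAX = 1.0e3, 1.0e16, 3.0     # GeV; perturbative bound Y < 3

def stays_perturbative(y0, b=6.0):
    # integrate dy/dlog(mu) = b y^3/(16 pi^2) from M_V up to M_32
    hit = lambda s, y: y[0] - Y_MAX
    hit.terminal = True
    sol = solve_ivp(lambda s, y: [b*y[0]**3/(16*np.pi**2)],
                    (np.log(M_V), np.log(M_32)), [y0], events=hit, rtol=1e-8)
    return sol.status == 0                # 0: reached M_32 below the bound

lo, hi = 0.0, Y_MAX
for _ in range(40):                       # bisect on the value at M_V
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if stays_perturbative(mid) else (lo, mid)
print(lo)                                 # ~0.65 for these toy inputs
\end{verbatim}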
\begin{figure}[t]
\begin{center}
\includegraphics[width=6in]{Fig2}
\end{center}
\vspace{-1cm}\caption{(color online). The upper bounds on the lightest
CP-even Higgs boson mass versus $\tan\beta$ for our five kinds of models with
$Y_{xd}(M_V)=Y_{xu}(M_V)$, $M_S=800$~GeV, and $M_I=1.0\times 10^{11}$~GeV. The upper lines,
middle lines, and lower lines are for $M_V=400$~GeV, 1000~GeV,
and 2000~GeV, respectively. }
\label{varying-with-tanbeta-yxd=yxu}
\end{figure}
First, we consider $Y_{xd}=0$, $M_S=800$~GeV, and $M_I=1.0\times 10^{11}$~GeV. We choose
three values for $M_V$: $M_V=400$~GeV, 1000~GeV,
and 2000~GeV. In Fig.~\ref{varying-with-tanbeta-yxd=0}, we present the upper
bounds on the lightest CP-even Higgs boson mass by varying $\tan\beta$ from
2 to 50. We find that for the same $M_V$, the upper bounds on the lightest
CP-even Higgs boson mass are almost the same for all five kinds of models;
the differences are less than 0.4 GeV. Because the
gauge couplings give negative contributions to the Yukawa coupling
RGEs, the maximal Yukawa coupling $Y_{xu}$ is slightly larger when
the vector-like particles contribute more to the gauge coupling
RGE running. Thus, ordered from the smallest to the largest upper bound on
the lightest CP-even Higgs boson mass, the models are Model I, Model IV,
Model II, Model III, and Model V. Also, the upper bounds on the lightest
CP-even Higgs boson mass decrease when we increase $M_V$, which is easy
to understand from a physics point of view. Moreover,
the maximal Yukawa couplings $Y_{xu}$ are about 0.96, 1.03, and 1.0
for $\tan\beta=2$, $\tan\beta \sim 23$, and $\tan\beta=50$, respectively.
In addition, for $M_V=400$~GeV and $\tan\beta \simeq 21$,
$M_V=1000$~GeV and $\tan\beta \simeq 23.5$, and $M_V=2000$~GeV and $\tan\beta \simeq 24.5$,
we obtain the lightest CP-even Higgs boson mass upper bounds
around 153.5~GeV, 141.6~GeV, and 136.8~GeV, respectively.
\begin{figure}[t]
\begin{center}
\includegraphics[width=6in]{Fig3}
\end{center}
\vspace{-1cm}\caption{(color online). The upper bounds on the lightest
CP-even Higgs boson mass versus $M_V$ for our five kinds of models with
$Y_{xd}=0$, $\tan\beta=20$, $M_S=800$~GeV, and $M_I=1.0\times 10^{11}$~GeV. }
\label{varying-with-M_V-yxd=0}
\end{figure}
Second, we consider $Y_{xd}=Y_{xu}$ at the scale $M_V$,
$M_S=800$~GeV, and $M_I=1.0\times 10^{11}$~GeV. We choose
three values for $M_V$: $M_V=400$~GeV, 1000~GeV,
and 2000~GeV. In Fig.~\ref{varying-with-tanbeta-yxd=yxu}, we present the upper
bounds on the lightest CP-even Higgs boson mass by varying $\tan\beta$ from
2 to 50. For $\tan\beta < 40$, we find that the lightest CP-even Higgs boson mass
upper bounds are almost the same as those in Fig.~\ref{varying-with-tanbeta-yxd=0}.
However, for $\tan\beta > 40$, we find that the lightest CP-even Higgs boson mass
upper bounds decrease fast when $\tan\beta$ increases. At $\tan\beta=50$,
the upper bounds on the lightest CP-even Higgs boson mass are smaller than
130 GeV for all our scenarios. The reason is the following:
for $\tan\beta<40$, it is the Yukawa couplings
$Y_{xu}$ and $Y_{t}$ that first reach the perturbative bound, while
for $\tan\beta>40$, it is the Yukawa couplings $Y_{xd}$, $Y_{b}$,
and especially $Y_{\tau}$, where
$Y_t$, $Y_{b}$ and $Y_{\tau}$ are the Yukawa couplings of the top quark,
bottom quark, and tau lepton, respectively. In particular, for $\tan\beta =50$,
the maximal Yukawa couplings $Y_{xd}=Y_{xu}$ are as small as 0.67
while they are about 1.025 for $\tan\beta<40$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=6in]{Fig4}
\end{center}
\vspace{-1cm}\caption{(color online). The upper bounds on the lightest
CP-even Higgs boson mass versus $M_S$ for our five kinds of models
with $Y_{xd}=0$, $\tan\beta=20$, and $M_I=1.0\times 10^{11}$~GeV. The upper lines,
middle lines, and lower lines are for $M_V=400$~GeV, 1000~GeV,
and 2000~GeV, respectively.}
\label{varying-with-M_S-yxd=0}
\end{figure}
Third, we consider $Y_{xd}=0$, $\tan\beta=20$, $M_S=800$~GeV, and $M_I=1.0\times 10^{11}$~GeV.
In Fig.~\ref{varying-with-M_V-yxd=0}, we present the upper bounds on
the lightest CP-even Higgs boson mass by varying $M_V$ from
360~GeV to 2~TeV. We can see that as
the value of $M_V$ increases from 360~GeV to 2~TeV, the upper bounds on the lightest
CP-even Higgs boson
mass decrease from 155~GeV to 137~GeV. In particular, to have the lightest
CP-even Higgs boson mass
upper bounds larger than 146 GeV, we find that $M_V$ must be smaller than about 700 GeV.
Moreover, the maximal Yukawa couplings $Y_{xu}$ vary
only a little bit, decreasing from about 1.029 to 1.016 for $M_V$
from 360~GeV to 2~TeV.
Fourth, we consider $Y_{xd}=0$, $\tan\beta=20$, and $M_I=1.0\times 10^{11}$~GeV. We choose
three values for $M_V$: $M_V=400$~GeV, 1000~GeV,
and 2000~GeV. In Fig.~\ref{varying-with-M_S-yxd=0}, we present the upper bounds on
the lightest CP-even Higgs boson mass by varying $M_S$ from 360~GeV to 2~TeV.
As the value of $M_S$ increases, the upper bounds on the lightest
CP-even Higgs boson mass increase from about 143~GeV to 162~GeV,
from about 136~GeV to 150~GeV, and from about 134~GeV to 141~GeV,
for $M_V=400$~GeV, 1000~GeV, and 2000~GeV, respectively.
In particular, to obtain lightest CP-even
Higgs boson mass upper bounds larger than 146 GeV,
we need $M_S$ larger than about $430$~GeV and 1260~GeV
for $M_V=400$~GeV and 1000~GeV, respectively.
Moreover, the maximal Yukawa couplings $Y_{xu}$
decrease from about 1.049 to 1.007 for $M_S$
from 360~GeV to 2~TeV.
\begin{figure}[t]
\begin{center}
\includegraphics[width=6in]{Fig5}
\end{center}
\vspace{-1cm}\caption{(color online). The upper bounds on the lightest
CP-even Higgs boson mass versus $X_{xu}$ in Model I with
$Y_{xd}=0$, $\tan\beta=20$, $M_S=800$~GeV,
$M_V=400$~GeV, 1000~GeV, and 2000~GeV, and $X_t=0, ~3, ~{\rm and} ~6$.}
\label{varying-with-X_xu-yxd=0}
\end{figure}
Fifth, we consider $Y_{xd}=0$, $\tan\beta=20$, and $M_S=800$~GeV. Also, we choose
three values for $M_V$: $M_V=400$~GeV, 1000~GeV,
and 2000~GeV, and three values for $X_t$:
$X_t=0$, 3, and 6. For simplicity, we only consider
Model I here. In Fig.~\ref{varying-with-X_xu-yxd=0},
we present the upper bounds on the lightest CP-even
Higgs boson mass by varying ${\tilde A}_{xu}$. As expected, the
behavior closely mirrors the dependence of the lightest CP-even Higgs boson
mass upper bounds on the stop
mixing $X_t$, which has been studied extensively
in Refs.~\cite{Espinosa:1999zm,Espinosa:2000df,Carena:2000dp,Heinemeyer:2004ms}.
\section{Conclusion}
We calculated the lightest CP-even Higgs boson mass in
five kinds of testable flipped $SU(5)\times U(1)_X$
models from F-theory. Two kinds of models have
vector-like particles only around the TeV scale, while the
other three kinds also have vector-like
particles at the intermediate scale, which serve as messenger
fields in gauge mediation. The Yukawa couplings
for the TeV-scale vector-like particles and the
third family of the SM fermions are required to be
smaller than three from the EW scale to
the scale $M_{32}$. With the two-loop RGE running
for both the gauge couplings and Yukawa couplings,
we obtained the maximal Yukawa couplings between
the TeV-scale vector-like particles and Higgs fields.
To calculate the lightest CP-even Higgs boson mass
upper bounds, we used the RG improved effective Higgs
potential approach, and considered the two-loop
leading contributions in the MSSM and one-loop
contributions from the TeV-scale vector-like particles.
For simplicity, we assumed maximal mixing both for
the stops and for the TeV-scale vector-like scalars.
The numerical results for these five kinds of
models are roughly the same. With $M_V$ and $M_S$ around
1~TeV, we showed that the lightest CP-even Higgs boson
mass can be close to 146 GeV naturally, which is the
upper bound from the current CMS and ATLAS
collaborations.
\begin{acknowledgments}
This research was supported in part
by the Natural Science Foundation of China
under grant numbers 10821504 and 11075194 (YH, TL, CT),
and by the DOE grant DE-FG03-95-Er-40917 (TL and DVN).
\end{acknowledgments}
| {'timestamp': '2011-09-13T02:01:58', 'yymm': '1109', 'arxiv_id': '1109.2329', 'language': 'en', 'url': 'https://arxiv.org/abs/1109.2329'} |
\section{Introduction}
Since their origin in the early $20$th century, the study of Hardy inequalities has been a vigorous field of research.
On the one hand, this stems from the great variety of applications Hardy inequalities have in analysis and mathematical physics, for example as uncertainty principles in quantum mechanics, \cite{Fr11}, for the solvability and growth control of differential equations \cite{AGG06} and in spectral graph theory \cite{Nag04}. On the other hand, they provide intriguing examples of functional inequalities where explicit sharp constants and asymptotics of minimizers or ground states can be studied. While most of the literature focuses on the continuum setting of differential operators, the inequality was originally phrased and proven by Hardy in the discrete setting; see \cite{KMP06} for a review of the history.
Classically, the Hardy weight $ w $ is expressed as the inverse of the distance function to some power, which is for example given by $ w(x)=\frac{(d-2)^2}{4}|x|^{-2} $ in the most familiar setting of $ {\mathbb R}^{d} $, $ d\ge 3 $. This weight is optimal in a precise sense and in particular the constant $ \tfrac{(d-2)^2 }{4}$ is sharp. In 2014, Devyver, Fraas, and Pinchover \cite{DFP} (see also \cite{DP}) presented a method---the so-called supersolution construction---to construct \emph{optimal} Hardy weights for general (not necessarily symmetric) positive second-order elliptic operators on non-compact Riemannian manifolds. This method was later extended in \cite{KePiPo2} to weighted graphs, where it was also shown that for the standard Laplacian on $ {\mathbb Z}^{d} $, $ d\ge 3 $, there is an optimal Hardy weight $ w $ which satisfies the continuum asymptotics $ w(x)\sim \frac{(d-2)^2}{4}|x|^{-2} $, improving the constant previously obtained by Kapitanski and Laptev \cite{KaL}. While the method of using supersolutions such as the Green's function together with the ground state transform to obtain Hardy inequalities seems to be folklore, see e.g. \cite{Fi00,FS08,FSW08,Gol14}, the novelty of the approach lies in the use of solutions with specific properties and in the proof of optimality. For further recent work on optimal decay of Hardy weights in the continuum, see \cite{BGGP,DPP,KP,PV}.
\subsection{Summary of main results about optimal Hardy weights}
In this paper we establish inverse-square behavior at large distances of optimal Hardy weights for elliptic operators on $ {\mathbb Z}^{d} $ with $d\geq 3$. Specifically, we prove \textit{inverse-square bounds on optimal Hardy weights} in the following settings.
\begin{itemize}
\item For general coefficients, upper bounds on annular averages and lower bounds on sectorial averages (Theorem~\ref{thm:mainspatial}).
\item For ergodic coefficients satisfying a logarithmic Sobolev inequality, pointwise almost-sure upper bounds (Theorem~\ref{thm:mainrandom}).
\item For weakly random i.i.d.\ coefficients, pointwise lower bounds in expectation (Theorem \ref{thm:mainrandompert}).
\end{itemize}
An upper bound on an optimal Hardy weight indicates the best possible decay behavior one can expect, while a lower bound corresponds to a concrete Hardy inequality that is useful in applications.
We comment on the fact that all of the bounds we prove here involve some kind of averaging or exclude a probability-zero set. One may naively hope for stronger purely deterministic bounds without averaging which are uniform in the underlying coefficients (or rather, only depend on their ellipticity ratio). However, such bounds are not to be expected because optimal Hardy weights are closely related to Green's function derivatives and from elliptic regularity theory \cite{A1,A2,deGiorgi,GT,LSW63,Moser,Nash} it is well-known that pointwise bounds of the expected scaling cannot hold with constants that are uniform in the coefficients; see Section \ref{sect:ERT}. Here, we address this problem by proving suitably weakened versions of inverse square behavior.
Our main focus lies on the treatment of the discrete case, since it is more interesting from a technical point of view. Analogous results in the continuum case do not seem to have appeared previously but can be derived in a parallel manner; see e.g.\ Section \ref{sect:continuum}.
\subsection{Green's function estimates and asymptotics}
On a technical level, we leverage and extend techniques developed in the wider context of homogenization theory for controlling Green's functions. This is a wide field; see \cite{AKM,BG,BGO1,BGO2,B,C,CKS,CN,DD,DGL,DGO,GNO,GO,KL,MO1,MO2} and references therein. Specifically, we draw on upper bounds proved by Marahrens-Otto \cite{MO1,MO2} and add some new results on lower bounds and asymptotics of Green's functions and their derivatives. More precisely:
\begin{itemize}
\item[(a)] We provide a general argument for lower bounds on sectorial averages (with the expected $|x|^{1-d}$-scaling) of Green's function derivatives, cf.\ Lemma \ref{lm:onedirection}. This is a simple one-page argument using only the Aronson-type bounds \eqref{eq:aronsondiscrete}, the sectorial geometry, and the Cauchy-Schwarz inequality.
\item[(b)] We provide a deterministic asymptotic expansion of the averaged (or ``annealed'') Green's function for weakly random i.i.d.\ coefficients, i.e., at ``small ellipticity contrast'' (Theorem \ref{thm:GFasymptotic}). To our knowledge, this is the first asymptotic formula of this kind. It draws on a recent novel approach to stochastic homogenization by Bourgain \cite{B}, which gives a precise description of the averaged Green's function, in the nearly sharp rendition from \cite{KL}. We combine these results with the Fourier-space techniques of Uchiyama \cite{Uch98}.
\item[(c)] Theorem \ref{thm:GFasymptotic} implies an analogous asymptotic expansion for Green's function derivatives, up to third order in $d=3$ and up to order $d+1$ for $d\geq 4$ (Corollary~\ref{cor:nablaGFasymptotic}). These bounds may become useful more widely, since especially lower bounds on Green's function derivatives are usually hard to come by. In fact, upper bounds on higher derivatives of order $>2$ were only proved recently \cite[Corollary~1.5]{KL}, also at small ellipticity contrast. For upper bounds on derivatives of the averaged Green's function up to second-order valid in more general contexts, see \cite{CGO,CN,DD}.
\item[(d)] We can combine Theorem~\ref{thm:GFasymptotic} with concentration results from \cite{MO1} to obtain the surprisingly universal formula \eqref{eq:randomasymptoticinformal} for the long-distance asymptotic of the random Green's function at small ellipticity contrast (Corollary~\ref{cor:randomGFasymptotic}). Statements of a similar flavor in related settings can be found in \cite{BG,BGO1,BGO2}.
\end{itemize}
\subsection{Organization of the paper}
The paper is organized as follows.
\begin{itemize}
\item In \textbf{Section \ref{sect:prelim}}, we recall the elliptic graph setting, review the definition of optimal Hardy weights and their graph-intrinsic supersolution construction \cite{KePiPo2}.
\item In \textbf{Sections \ref{sect:main1}} and \textbf{\ref{sect:main2}}, we state our main results on optimal Hardy weights. The former section contains the deterministic results with annular averages and the latter section contains the results for elliptic operators with ergodic coefficients.
\item In \textbf{Section \ref{sect:GF}}, we present some novel facts about discrete Green's functions. Here, they are mainly used to control optimal Hardy weights but they may turn out to be interesting in their own right.
\item The brief \textbf{Section \ref{sect:general}} contains rather general upper and lower bounds relating optimal Hardy weights and Green's functions on any elliptic graph.
\item In \textbf{Section \ref{sect:pfmainspatial}}, we prove the main results of \textbf{Section \ref{sect:main1}} with spatial averaging.
\item In \textbf{Section \ref{sect:pfmainrandom}}, we prove the main results of \textbf{Sections \ref{sect:main2}} and \textbf{\ref{sect:GF}} which are probabilistic.
\item In the short \textbf{Section \ref{sect:rellich}}, we apply the results to Rellich inequalities on elliptic graphs. Specifically, we use the techniques of \cite{KePiPo4} to derive the expected $|x|^{-4}$ scaling in a probabilistic sense.
\item The \textbf{Appendix} contains the proof of a non-standard but elementary fact about the free Green's function (it is nowhere locally constant) and a self-contained derivation of the leading term in Theorem \ref{thm:GFasymptotic} based on \cite[Theorem 1.1]{KL} and elementary tools from harmonic analysis.
\end{itemize}
Two open questions are discussed in Remarks \ref{rmk:T} and \ref{rmk:open}.
\section{Preliminaries}\label{sect:prelim}
\subsection{The elliptic graph setting}
Let $ X $ be a discrete countable set and let $b $ be a \emph{graph} over $ X $. That is $ b:X\times X\to[0,\infty) $ is a symmetric map with zero diagonal. We denote $ x\sim y $
whenever $ b(x,y)>0 $. The graph $ b $ is assumed to be \emph{connected}, that is, for any two vertices $ x,y\in X $ there is a path $ x=x_{0}\sim \ldots\sim x_{n}=y $.
We denote
by $ \deg $ the \emph{weighted vertex degree}
\begin{align*}
\deg(x)=\sum_{y\in X}b(x,y).
\end{align*}
We assume that the graph satisfies the following ellipticity condition for some $ E>0 $
\begin{align}\label{E}
b(x,y)\ge E\sum_{z\in X}b(x,z)
\end{align}
for all $ x\sim y $.
It is not hard to see that \eqref{E} implies that the graph has bounded combinatorial degree, i.e.,
\begin{align*}
\sup_{x\in X}\#\{y\in X\mid b(x,y)>0\} <\infty.
\end{align*}
Hence, $ \deg $ takes finite values and the graph is locally finite.
Let $ {L} $ be the operator acting on $ C(X) $, the real valued functions on $ X $, via
\begin{equation}\label{eq:Ldefn}
{L} f(x)=\sum_{\substack{y\in X}}b(x,y)(f(x)-f(y)).
\end{equation}
Restricting $ {L} $ to the compactly supported functions $ C_{c}(X) $, we obtain a symmetric operator on $ \ell^{2}(X) $ whose Friedrichs extension is, with slight abuse of notation, also denoted by $ L $, with domain $ D(L) $.
Note that $ L $ is bounded if and only if $ \deg $ is bounded and in this case $ D(L)=\ell^{2}(X) $, see e.g. \cite[Theorem~9.3]{HKLW}. The associated quadratic form
$$
{Q}(f)= \sum_{x, y\in X}b(x,y)(f(x)-f(y))^{2}
$$
is the Dirichlet energy of $f\in C(X)$ and for $ f\in D(L) $ we have
\begin{align*}
{Q}(f)=2\langle f,Lf\rangle.
\end{align*}
Denote by $ G: X\times X\to[0, \infty] $ the Green function of $L$ which is given by
\begin{align*}
G(x,y)=\lim_{{\alpha}\searrow 0}(L+{\alpha})^{-1}1_{x}(y)
\end{align*}
where $ 1_{x} $ is the characteristic function of the vertex $ x $.
If $ G(x,y) $ is finite for some $ x,y\in X $, then it is finite for all $ x,y \in X $ due to the connectedness of the graph. In this case the graph is called \emph{transient}. We say the Green's function is \emph{proper} if $ G(o,\cdot) $ is a proper function for some vertex $ o\in X $. (We recall that a function is proper if the preimages of compact sets are compact.)
In the following, whenever we consider elliptic operators on $ {\mathbb Z}^{d} $, we will always have that $ G(o,\cdot) $ is proper. Also, we will often denote $$ G(x)=G(o,x) $$
and choose $ o =0$ whenever $ X={\mathbb Z}^{d} $.
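For illustration, $ G $ can be approximated from this definition by truncating to a finite box; the following Python sketch (not used in any proof) does so for the standard weights $ b\equiv 1 $ on $ {\mathbb Z}^{3} $, with absorbing boundary so that no regularization $ {\alpha}>0 $ is needed.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n, d = 21, 3                               # box {0,...,20}^3, pole at center
N = n**d
idx = lambda x: np.ravel_multi_index(tuple(x), (n,)*d)
L = sp.lil_matrix((N, N))
for x in np.ndindex(*(n,)*d):
    i = idx(x)
    L[i, i] = 2*d                          # deg(x) = 2d on Z^d with b = 1
    for j in range(d):
        for s in (-1, 1):
            y = list(x); y[j] += s
            if 0 <= y[j] < n:              # edges leaving the box are absorbed
                L[i, idx(y)] = -1.0
o = idx((n//2,)*d)
e = np.zeros(N); e[o] = 1.0
G = spsolve(L.tocsr(), e)                  # solves L G = 1_o on the box
print(G[idx((n//2 + 5, n//2, n//2))])
# ~ 1/(4 pi 5) ~ 0.016, up to lattice and boundary truncation effects
\end{verbatim}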
\subsection{Optimal Hardy weights}
In this paper, we are interested in the large-distance behavior of optimal Hardy weights.
We say $ w:X\to [0,\infty) $ is a \emph{Hardy weight} for the graph $ b $ if all $ f\in C_{c}(X)$ satisfy the \emph{Hardy inequality} with weight $w$,
\begin{equation}\label{eq:hardy}
\begin{aligned}
{Q}(f)=\sum_{x, y\in X}b(x,y)(f(x)-f(y))^{2}\ge \sum_{x\in X}w(x)f(x)^{2}.
\end{aligned}
\end{equation}
This inequality can be extended to the extended Dirichlet space, i.e., the closure with respect to $ {Q}^{1/2} $ which becomes a norm as we assumed that the graph is transient, see e.g. \cite{FOT} for general Dirichlet forms or \cite[Theorem~B.2]{KLSW} in the graph case. In particular, the constant function $ 1 $ is not in the extended Dirichlet space of transient Dirichlet forms. If $ \deg $ is bounded, then $ L $ is bounded and $ \ell^{2}(X) $ is included in the extended Dirichlet space, see \cite{HKLW}.
A \emph{ground state} for an operator $ H $ is a non-trivial solution $ u\in C(X) $, $ u\ge 0 $, of $ Hu=0 $ such that for any supersolution $ v \ge 0$, i.e.\ $ Hv\ge0 $ outside of a compact set, with $ u\ge v\ge 0 $, there exists $ C\ge0 $ such that $ v\ge Cu $. The ground state of ${L}$ always exists in our situation of locally finite graphs by virtue of the Allegretto-Piepenbrink theorem, \cite[Theorem~3.1]{HK}.
The following definition of optimal Hardy weights was first made by Devyver, Fraas, and Pinchover \cite{DFP}, see also \cite{DP}, for elliptic operators in the continuum and it was later investigated in \cite{KePiPo2} for graphs.
\begin{definition}\label{defn:optimal}
We say that $w$ is an \textit{optimal Hardy weight} if it satisfies the following three conditions.
\begin{itemize}
\item[(a)] \textit{Criticality}: For every $\tilde w\geq w$ but $\tilde w\neq w$, the Hardy inequality fails.
\item[(b)] \textit{Null-criticality:} The formal Schr{\"o}dinger operator ${L}-w$ does not have a ground state in $\ell^2(X)$.
\item[(c)] \textit{Optimality near infinity:} For any $\lambda>0$ and any compact set $K$, the function $(1+\lambda)w$ fails to be a Hardy weight for compactly supported functions in $X\setminus K$.
\end{itemize}
\end{definition}
We note that (a) and (b) imply (c), as can be seen from the proof in \cite{KePiPo2}; this implication was shown in the continuum in \cite{KP}. Another answer to the question of how large a Hardy weight can be is given in \cite{KP} in the continuum case, in terms of integrability at infinity.
\subsection{Connection to the Green's function}
The results of \cite{KePiPo2} (building on \cite{DFP,DP}) derived optimal Hardy weights in an intrinsic way via superharmonic functions. More precisely, by \cite[Theorem~0.1]{KePiPo2}, the function $ w:X\to[0,\infty) $ defined by
\begin{align*}
w(x)=2\frac{{L} u^{\frac{1}{2}}(x)}{u^{\frac{1}{2}}(x)}
\end{align*}
is an optimal Hardy weight whenever $ u $ is a proper positive superharmonic function which is harmonic outside of a finite set and satisfies
the growth condition $ \sup_{x\sim y} u(x)/u(y)<\infty$. Now, $ G(o,\cdot)=G(\cdot) $ is a positive superharmonic function which is harmonic outside of $ \{o\} $. Then, \cite[Theorem~0.2]{KePiPo2} implies that
\begin{align}\label{eq:wG}
w_G(x)=2\frac{{L} G(x)^{\frac{1}{2}}}{G(x)^{\frac{1}{2}}}=1_{o}(x)+\frac{1}{G(x)}
\sum_{\substack{y\in X: \\y\sim x}}b(x,y)(G(x)^{\frac{1}{2}}-G(y)^{\frac{1}{2}})^{2}
\end{align}
is an optimal Hardy weight. The condition $ \sup_{x\sim y} G(x)/G(y)<\infty$ is satisfied by \eqref{E}, see Lemma~\ref{lm:basic} below, and, therefore, $ w_{G} $ is optimal in the sense of Definition~\ref{defn:optimal} whenever $ G $ is proper.
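Continuing the numerical sketch from the previous subsection ($ b\equiv 1 $ on $ {\mathbb Z}^{3} $, pole $ o $ at the center of the box), one can evaluate \eqref{eq:wG} directly and observe the expected order of magnitude $ \tfrac{(d-2)^{2}}{4}|x|^{-2} $ at moderate distances; edges leaving the box are simply dropped, which only affects sites next to the boundary.
\begin{verbatim}
import numpy as np

def hardy_weight(G, n, d=3):               # Eq. (eq:wG) with b = 1
    s = np.sqrt(G.reshape((n,)*d))
    w = np.zeros_like(s)
    for j in range(d):
        for sh in (1, -1):
            diff = s - np.roll(s, sh, axis=j)  # sqrt G(x) - sqrt G(y), y ~ x
            edge = [slice(None)]*d
            edge[j] = 0 if sh == 1 else n - 1
            diff[tuple(edge)] = 0.0            # discard wrapped-around entries
            w += diff**2
    w /= s**2
    w[(n//2,)*d] += 1.0                        # the 1_o(x) term
    return w.ravel()

w = hardy_weight(G, 21)                        # G from the previous sketch
x = np.ravel_multi_index((10 + 5, 10, 10), (21,)*3)
print(w[x] * 5.0**2)                           # of order (d-2)^2/4 = 1/4
\end{verbatim}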
We will make use of the facts discussed in this section as follows:
\begin{enumerate}
\item We can use bounds on the Green's function and its derivatives to obtain upper and lower bounds on $w_G$ via \eqref{eq:wG}.
\item We can use the optimality of the Hardy weight $w_G$ to conclude asymptotic bounds on all possible Hardy weights $w$.
\end{enumerate}
In previous works \cite{KePiPo2,KePiPo3}, this has been implemented for graphs where the Green's function is, at least asymptotically, exactly computable. Here we show for the first time that the Hardy weight scaling of $|x|^{-2}$ is robust in various senses for a wide class of elliptic graphs defined on ${\mathbb Z}^d$ whose Green's functions are no longer explicitly computable.
\subsection{From upper bounds on $w_G$ to asymptotic bounds on all Hardy weights}
In this short section, we make precise Step (2) mentioned above, i.e., how to go from upper bounds on $w_G$ to asymptotic bounds on all Hardy weights. Note that the Hardy inequality \eqref{eq:hardy} is stronger if the weight $w$ is large, so we are interested in limiting how large a general Hardy weight can be, given a bound on the special optimal weight $w_G$.
Let $ d $ be a metric on $ X $ and set $ \|x\|=d(x,o) $ for $ x\in X $, where $ o\in X $ is the vertex used in the definition of $ G(x)=G(o,x) $. Below, when we consider $ X={\mathbb Z}^{d} $, we will use the Euclidean norm $ |\cdot| $ for $ \|\cdot\| $
and $ o=0 $.
\begin{proposition}\label{prop:wGw} Assume that $b$ is a connected transient graph over $X$ which satisfies \eqref{E} and $ G $ is proper.
Assume that there exists a constant $C>0$ and a power $p>0$ such that
\begin{equation}\label{eq:wGrate}
w_G(x)\leq \frac{C}{1+\|x\|^{p}},\qquad x\in X.
\end{equation}
Suppose a function $\tilde w:X\to (0,\infty)$ satisfies
$$
\tilde w(x)\geq (1+{\lambda})\frac{C}{1+\|x\|^p},\qquad x\in X\setminus K,
$$
for some ${\lambda}>0$ and some compact set $K\subseteq X$. Then $\tilde w$ is not a Hardy weight.
\end{proposition}
In particular, Proposition \ref{prop:wGw} shows that the asymptotic decay rate of $w_G$ controls the asymptotics of all Hardy weights: Assuming we know that \eqref{eq:wGrate} holds, there cannot exist a Hardy weight $\tilde w$ satisfying $\tilde w(x)\gtrsim \|x\|^{{\varepsilon}-p}$ asymptotically (meaning outside of compacts), for any ${\varepsilon}>0$. Here, $\|x\|^{\varepsilon}$ can be replaced by any function that goes to infinity as $\|x\|\to\infty$, say $\log\log\|x\|$.
\begin{proof}[Proof of Proposition \ref{prop:wGw}]
By optimality at infinity of $w_G$, we know that for every ${\lambda}>0$ and every compact set $K$, there exists a function $f$ supported in $X\setminus K$ such that the Hardy inequality with weight $(1+{\lambda})w_G$ fails. By the assumed bound on $w_G$, this implies that the Hardy inequality also fails for any $\tilde w$ such that
$$
\tilde w(x)\geq \frac{C(1+{\lambda})}{\|x\|^{p}}\geq (1+{\lambda})w_G(x),\qquad \textnormal{ on } X\setminus K.
$$
This proves Proposition \ref{prop:wGw}.
\end{proof}
We remark that Proposition \ref{prop:wGw} only uses optimality near infinity of $w_G$, i.e., property (c) of Definition~\ref{defn:optimal}. Properties (a) and (b) also transfer information from optimal Hardy weights to general Hardy weights, but we focus on property (c) because there the mechanism is the most clear.
\section{Bounds on optimal Hardy weights with spatial averaging}\label{sect:main1}
In this section, we discuss our first class of main results, namely upper and lower bounds on the optimal Hardy weight $w_G$ for models on ${\mathbb Z}^d$, $d\geq 3$. These bounds translate to information on the best possible large-distance decay of any Hardy weight via Proposition~\ref{prop:wGw}.
\subsection{The benchmark: the free Laplacian on ${\mathbb Z}^d$}
We are interested in the long-distance behavior of the optimal Hardy weight $w_G$ from \eqref{eq:wG}. To understand this, we require asymptotic control on $G$ and $\nabla G$ (for a definition of $ \nabla G $ see the next subsection), which is made explicit in Proposition \ref{prop:wGbounds} below.
The benchmark is the case of the free Laplacian $ \Delta $ on ${\mathbb Z}^d$, $d\geq 3$,
which is given via the standard edge weights $ b(x,y)=1 $ if $ |x-y|=1 $ and zero otherwise. The Green's function $ G_{0} $ of $ \Delta $ is asymptotically explicitly computable and satisfies $G_0(x)\sim |x|^{2-d}$ and $|\nabla G_0(x)|\sim |x|^{1-d}$; see e.g.\ \cite{Spitzer,Uch98}, where $$ |x|=(|x_1|^{2}+\ldots+|x_d|^{2})^{1/2} $$ denotes the Euclidean norm. Using these asymptotic formulas, the optimal Hardy weight, including the sharp constant, is computed as
\begin{align}\label{eq:wG0}
w_{G_0}(x)={2}\frac{{\Delta}[G_0^{1/2}(x)]}{G_0^{1/2}(x)}=\frac{\big( d-2 \big)^2}{4} \,\frac{1}{|x|^2} + {\mathcal{O}}\Bigg( \frac{1}{|x|^3} \Bigg), \quad\,\, x\in {\mathbb Z}^d,
\end{align}
in \cite[Theorem~8.1]{KePiPo2}.
See \cite{KaL} for a similar estimate with a different constant.
Our goal in this paper is to show that the inverse square decay is robust, in various senses, within the class of elliptic graphs on ${\mathbb Z}^d$, whose Green's functions are no longer known to be explicitly computable.
\subsection{General elliptic operators on ${\mathbb Z}^d$}
\label{sect:ERT}
Let us introduce the setting of elliptic operators on ${\mathbb Z}^d$, $d\geq 3$. Let $\mathbb E^d$ denote the set of undirected edges $[x,y]$ with nearest neighbors $x,y\in {\mathbb Z}^d$, i.e. $ |x-y|=1 $. For ${\lambda}\in (0,1]$, we define the class of uniformly elliptic coefficient fields
\begin{equation}\label{eq:setting1}
\mathcal{A}_{{\lambda}}:=\setof{a:\mathbb E^d\to {\mathbb R}}{{\lambda} \leq a\leq {\lambda}^{-1}}.
\end{equation}
We will study the following class of examples. For $a\in \mathcal{A}_{{\lambda}}$, we set
\begin{equation}\label{eq:setting2}
X={\mathbb Z}^d,
\qquad b(x,x+e_j)=b(x+e_j,x)=a([x,x+e_j])
\end{equation}
for all $ x\in{\mathbb Z}^{d} $ and the standard basis $ e_{j} $, $1\leq j\leq d$. One can easily verify that the graph defined by \eqref{eq:setting2} satisfies the ellipticity condition \eqref{E} for some number $E=E(d,{\lambda})>0$.
We define the associated operator ${L}$ via \eqref{eq:Ldefn} and also denote by $ L $ the corresponding self-adjoint operator on $ \ell^{2}({\mathbb Z}^{d}) $. Equivalently, we may express this as $$ {L}=\nabla^* a \nabla $$ for appropriately defined $\nabla,\nabla^*$, cf. \cite{MO1},
i.e.,
\begin{align*}
\nabla :C({\mathbb Z}^{d})\to C({\mathbb Z}^{d})^{d}, \quad
\nabla f(x )=
\left(\begin{matrix}
f(x+e_{1})-f(x)\\
\vdots\\
f(x+e_{d})-f(x)
\end{matrix}\right),
\end{align*}
$ a $ is identified with the operator induced by the $ d\times d $ diagonal matrix
\begin{align*}
\left(\begin{matrix}
a([\cdot,\cdot+e_{1}])&&0\\
& \ddots&\\
0 &&a([\cdot,\cdot +e_{d}])
\end{matrix}\right)
\end{align*}
and $ \nabla ^{*} $ is the formal adjoint of $\nabla $ which acts as
\begin{align*}
\nabla^{*} :C({\mathbb Z}^{d})^{d}\to C({\mathbb Z}^{d}), \quad \nabla ^{*}
\left(\begin{matrix}
f_1(x)\\
\vdots \\
f_d(x)
\end{matrix}\right)= \sum_{j=1}^{d}(f_j(x-e_{j} )-f_j(x)).
\end{align*}
With slight abuse of notation, for $ f\in C({\mathbb Z}^{d}) $ and an edge $ e=[x,x+e_{j}] $ we also write
\begin{align*}
\nabla f(e)=f(x)-f(x+e_{j}).
\end{align*}
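As a quick consistency check of this factorization, the following Python sketch verifies numerically that $ \nabla^{*}a\nabla $ agrees with \eqref{eq:Ldefn}; a periodic box stands in for $ {\mathbb Z}^{d} $ and the i.i.d.\ coefficient field is an arbitrary illustrative choice.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 8, 3, 0.5
a = rng.uniform(lam, 1/lam, size=(d,) + (n,)*d)  # a[j, x] = a([x, x+e_j])
f = rng.standard_normal((n,)*d)

grad = lambda f: np.stack([np.roll(f, -1, axis=j) - f for j in range(d)])
def grad_star(g):                                # formal adjoint of grad
    return sum(np.roll(g[j], 1, axis=j) - g[j] for j in range(d))

Lf = grad_star(a * grad(f))
# direct evaluation of Eq. (eq:Ldefn): sum over y ~ x of b(x,y)(f(x) - f(y))
Lf_direct = sum(a[j]*(f - np.roll(f, -1, axis=j))
                + np.roll(a[j], 1, axis=j)*(f - np.roll(f, 1, axis=j))
                for j in range(d))
print(np.max(np.abs(Lf - Lf_direct)))            # ~ 1e-15
\end{verbatim}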
In order to estimate the optimal Hardy weights \eqref{eq:wG} of these graphs, the ideal situation would be to have upper and lower bounds on their Green's functions which agree with the behavior $G_0(x)\sim |x|^{2-d}$ and $|\nabla G_0(x)|\sim |x|^{1-d}$ of the free Green's function $G_0$ that we reviewed above. Deriving such bounds on $\nabla G$ and its continuum analog is an old problem in potential theory and probability theory with close connections to the elliptic regularity theory developed by de Giorgi, Nash, and Moser in the 1950s \cite{deGiorgi,Moser,Nash}.
On the one hand, the situation for $G$ itself is relatively pleasant. Based on Nash's parabolic comparison estimates to the free Laplacian, Littman, Stampacchia, and Weinberger \cite{LSW63} and Aronson \cite{A1,A2} proved that the $|x|^{2-d}$-decay is preserved uniformly in the class $\curly{A}_{{\lambda}}$, i.e., there exists a constant $C_{d,{\lambda}}>1$ such that
\begin{equation}\label{eq:aronson}
C^{-1}_{d,{\lambda}} |x-y|^{2-d}\leq G(x,y)\leq C_{d,{\lambda}} |x-y|^{2-d},\qquad x, y\in {\mathbb R}^d, x\neq y.
\end{equation}
We emphasize that $C_{d,{\lambda}}$ depends on the coefficient field $a$ only through the ellipticity parameter ${\lambda}$. To be precise, \eqref{eq:aronson} was proved in the continuum setting. For the extension to the discrete case, see \cite{DD,SC} and references therein. In the discrete case, we may use the fact that the Green's function is bounded on the diagonal to write
\begin{equation}\label{eq:aronsondiscrete}
C^{-1}_{d,{\lambda}} (1+|x-y|)^{2-d}\leq G(x,y)\leq C_{d,{\lambda}} (1+|x-y|)^{2-d},\qquad x,y\in{\mathbb Z}^d.
\end{equation}
We remark that the bounds in \eqref{eq:aronson} and \eqref{eq:aronsondiscrete} establish that the Green's function is proper. Because of its relevance in what follows, we record this fact in the following lemma.
\begin{lemma}Whenever $ a\in \mathcal{A}_{\lambda} $ for some $ \lambda \in(0,1]$, the Green's function $G(y,\cdot)$ is proper for any fixed $y\in {\mathbb Z}^d$.
\end{lemma}
On the other hand, bounds for $\nabla G$ are a more delicate matter: within the general elliptic framework defined above, one cannot expect a pointwise upper bound $|\nabla G(x)|\leq C|x|^{1-d}$ to hold with an $a$-uniform constant $C$. Indeed, the standard elliptic regularity theory \cite{deGiorgi,Nash,Moser} only yields
$
|\nabla G(x)|\leq C(d,{\lambda}) |x|^{2-d-{\alpha}(d,{\lambda})}
$
for some ${\alpha}(d,{\lambda})>0$, cf.\ \cite{MO1,MO2}. Moreover, it is well-known that in the continuum situation, an $a$-uniform bound of the form $|\nabla G(x,y)|\lesssim |x-y|^{1-d}$ cannot hold; explicit counterexamples can be constructed using the theory of quasi-conformal mappings; see p.\ 795 in \cite{GO} and p.\ 299 in \cite{GT}.
Still, there is hope to prove weaker (for example average-case) bounds on $\nabla G$ of the right order and making this precise is one of the main applications of homogenization theory \cite{AKM,B,C,CKS,CN,DD,DGL,DGO,GNO,GO,KL,MO1,MO2}. The main idea of our paper is to build on and in some cases extend the Green's function estimates connected to homogenization theory and to apply them to optimal Hardy weights.
\subsection{Upper and lower bounds for annular averages}
We follow the order laid out in the introduction and begin with the most general results which hold uniformly in the coefficient field $a\in \curly{A}_{{\lambda}}$. This comes at the price of a spatial average, either over a dyadic annulus (upper bound) or a dyadic sector (lower bound).
To define a discrete sector, we introduce the cone $\curly C^{j,{\alpha}}\subseteq {\mathbb R}^d$ in direction $j\in \{1,\ldots,d\}$ with opening angle parametrized by ${\alpha}\in (0,1)$, i.e.,
$$
\curly C^{j, {\alpha}}=\setof{x\in {\mathbb R}^d}{\scp{x}{e_j}>(1-{\alpha})|x|}.
$$
We then define the associated discrete sector of radial size $\ell>0$ on scale $R>0$ by
$$
\curly S^{j,{\alpha}}_{R,\ell}=\setof{x\in{\mathbb Z}^d\cap \curly C^{j,{\alpha}}}{R\leq |x|\leq \ell R}.
$$
\begin{theorem}[Inverse-square bounds on spatial averages]\label{thm:mainspatial}
Let $d\geq 3$ and ${\lambda}\in (0,1]$. There exist constants $c_{d,\lambda}>1$ and $\ell=\ell_d>1$ such that for every $a\in\curly{A}_{{\lambda}}$ and every radius $R\geq 1$, $j\in \{1,\ldots,d\}$ and ${\alpha}\in (0,1)$,
\begin{align}
\label{eq:mainspatialub}
R^{-d}\sum_{\substack{x\in{\mathbb Z}^d:\\ R\leq |x|\leq 2R}} w_G(x)
\leq& c_{d,{\lambda}} R^{-2}\\
\label{eq:mainspatiallb}
R^{-d}\sum_{x\in \curly S^{j,{\alpha}}_{R,\ell}} w_G(x)
\geq& c_{d,{\lambda}}^{-1} R^{-2}.
\end{align}
\end{theorem}
Note that the number of summands in both sums is of the order $R^d$, so up to a change in the constant $c_{d,{\lambda}}$, \eqref{eq:mainspatialub} and \eqref{eq:mainspatiallb} are indeed upper and lower bounds on normalized spatial averages. The proof is given in Section~\ref{sect:pfmainspatial}.
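The counting statement itself is elementary and easy to confirm numerically; the following short Python sketch (with the illustrative choices $ \ell=2 $, $ {\alpha}=1/2 $) tallies the lattice points of $ \curly S^{j,{\alpha}}_{R,\ell} $ for a few radii.
\begin{verbatim}
import numpy as np

d, j, alpha, ell = 3, 0, 0.5, 2.0     # illustrative parameters
for R in (8, 16, 32):
    r = int(np.ceil(ell*R))
    g = np.mgrid[-r:r+1, -r:r+1, -r:r+1]          # hard-coded d = 3
    norm = np.sqrt((g**2).sum(axis=0))
    sector = (g[j] > (1-alpha)*norm) & (R <= norm) & (norm <= ell*R)
    print(R, sector.sum(), sector.sum()/R**d)     # ratio roughly constant
\end{verbatim}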
We can use Theorem \ref{thm:mainspatial} to obtain matching upper and lower bounds on annular averages.
\begin{corollary}[Inverse-square behavior of annular averages]
\label{cor:ublb}
Let $d\geq 3$ and ${\lambda}\in (0,1]$. There exist constants $c_{d,\lambda},\ell_d>1$ so that for all $a\in\curly{A}_{{\lambda}}$ and all $R\geq 1$,
\begin{equation}
\frac{1}{c_{d,{\lambda}}} R^{-2}
\leq
R^{-d}\sum_{\substack{x \in {\mathbb Z}^d:\\ R\leq |x|\leq \ell_d R}} w_G(x)\leq c_{d,{\lambda}} R^{-2}.
\end{equation}
\end{corollary}
Corollary \ref{cor:ublb} says that the inverse square behavior of the optimal Hardy weight holds for completely general elliptic coefficients on ${\mathbb Z}^d$ after annular averaging.
\begin{proof}[Proof of Corollary \ref{cor:ublb}]
The lower bound follows from \eqref{eq:mainspatiallb} and $\curly S^{j,{\alpha}}_{R,\ell}\subseteq \{x:\, R\leq|x|\leq \ell_d R\}$.
The upper bound follows by covering the annulus $\{x:\, R\leq|x|\leq \ell_d R\}$ with finitely many dyadic annuli and using \eqref{eq:mainspatialub} on each dyadic annulus.
\end{proof}
\subsection{The continuum case}\label{sect:continuum}
While we focus on the discrete case, we briefly mention that our statements have continuum analogs which have not
appeared in the literature. Optimal Hardy weights were originally considered in the continuum in work of Devyver, Fraas, and Pinchover, \cite{DFP,DP}. The notion of optimality is analogous to Definition \ref{defn:optimal}; see Definition 2.1 in \cite{DFP}.
We consider an elliptic, divergence form operator
$$
P=\mathrm{div}(A\nabla)
$$
on ${\mathbb R}^d$ with a scalar function $A$ belonging to
$$
\curly{A}^{cont}_{\lambda}=\setof{\tilde A:{\mathbb R}^d\to{\mathbb R}}{ \textnormal{ H\"older continuous, }{\lambda}\leq \tilde A\leq 1 \textnormal{ a.e.}}
$$
for some ${\lambda}>0$. Moreover, \cite{DFP} mention that the assumption of H\"older continuity of $\tilde A$ can be relaxed to any assumption guaranteeing standard elliptic regularity theory, which in addition to ${\lambda}\leq \tilde A\leq 1$ means just measurability \cite{deGiorgi,Moser,Nash}.
A Hardy weight is defined as a non-zero function $W:{\mathbb R}^d\setminus\{0\}\to[0,\infty)$ verifying
$$
\scp{\varphi}{P\varphi}\geq \int_{{\mathbb R}^d\setminus\{0\}} W(x) |\varphi(x)|^2 \mathrm{d} x,\qquad \varphi \in C_0^\infty({\mathbb R}^d\setminus\{0\}).
$$
According to Theorem 2.2 in \cite{DFP}, an optimal Hardy weight can be defined in terms of the minimal positive Green's function with a pole at the origin, $G$, in the following way
\begin{equation}\label{eq:WG}
W_G(x)=\l|\nabla\log \sqrt{G(x)}\r|_A^2=\frac{|\nabla G(x)|_A^2}{4G(x)^2},\qquad \text{where } |\xi|_A^2=\xi\cdot A\xi.
\end{equation}
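For orientation, in the free case $A\equiv 1$, where $G(x)=c_d|x|^{2-d}$ with a dimensional constant $c_d>0$, formula \eqref{eq:WG} reduces to the classical Hardy weight:
\begin{equation*}
W_G(x)=\frac{|\nabla G(x)|^2}{4\,G(x)^2}=\frac{c_d^2(d-2)^2|x|^{2(1-d)}}{4\,c_d^2\,|x|^{2(2-d)}}=\frac{(d-2)^2}{4}\,\frac{1}{|x|^2},
\end{equation*}
which matches the sharp constant in the classical Hardy inequality on ${\mathbb R}^d$.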
One can prove analogous bounds on this continuum optimal Hardy weight. For the sake of brevity, we only state one result that establishes the inverse-square behavior with upper and lower bounds on annular averages, namely a continuum analog of Corollary \ref{cor:ublb}. For readers interested in continuum analogs of the later results in the perturbative probabilistic setting of Section \ref{sect:perturbativesetup}, we note that an extension of the results in \cite{KL} to the continuum setting is in preparation in \cite{DLP}.
\begin{proposition}[Inverse square behavior of annular averages --- continuum version]\label{prop:continuum}
Let $d\geq 3$ and ${\lambda}\in(0,1]$. There exist constants $c_{d,{\lambda}},\ell_d>1$ such that for every $A\in \curly{A}^{cont}_{\lambda}$ and every $R\geq 1$, it holds that
\begin{equation}
\frac{1}{c_{d,{\lambda}}} R^{-2}
\leq
R^{-d}\int\limits_{\substack{x\in{\mathbb R}^d\setminus\{0\}:\\
R\leq |x|\leq \ell_d R}} W_G(x) \mathrm{d} x \leq c_{d,{\lambda}} R^{-2}
\end{equation}
\end{proposition}
The proof is a straightforward continuum analog of the proof of Theorem \ref{thm:mainspatial}; see Section \ref{sect:continuumpf}.
\section{Bounds on optimal Hardy weights for ergodic coefficients}\label{sect:main2}
\subsection{Upper bound for ergodic coefficients}
We can remove the need for spatial averaging and prove pointwise bounds for elliptic operators with ergodic coefficients. Note that due to the random environment, the Green's function is not exactly computable in this setting.
We now describe the precise setup. Let $\mathbb P$ be a probability measure on the measure space $\curly{A}_{{\lambda}}$ endowed with the product topology. We write $\qmexp{\cdot}$ for the associated expectation value.
\begin{assumption}\label{ass:P}
We make the following two assumptions on $\mathbb P$.
\begin{enumerate}
\item $\mathbb P$ is stationary, i.e., for any $z\in{\mathbb Z}^d$, the translated coefficient field $a(\cdot+z)$ is also distributed according to $\mathbb P$.
\item $\mathbb P$ satisfies a logarithmic Sobolev inequality with constant $\rho>0$, i.e., for any random variable $\zeta:\curly{A}_{{\lambda}}\to {\mathbb R}$, it holds that
\begin{equation}\label{eq:LSI}
\qmexp{\zeta^2 \log\frac{\zeta^2}{\qmexp{\zeta^2}}}\leq \frac{1}{2\rho}\qmexp{\sum_{e\in \mathbb E^d} \l(\mathrm{osc}_{e}\zeta\r)^2},
\end{equation}
where the oscillation of $\zeta$ is a new random variable defined by
$$
\l(\mathrm{osc}_{e}\zeta\r)(a)=\sup_{\substack{\tilde a\in \curly{A}_{{\lambda}}:\\ \tilde a=a \mbox{\scriptsize { on }}\mathbb{E}^{d}\setminus\{e\}}} \zeta(\tilde a)-\inf_{\substack{\tilde a\in \curly{A}_{{\lambda}}:\\ \tilde a=a \mbox{\scriptsize { on }}\mathbb{E}^{d}\setminus\{e\}}} \zeta(\tilde a).
$$
\end{enumerate}
\end{assumption}
In a nutshell, Assumption \ref{ass:P} specifies that the probability measure $\mathbb P$ is stationary and ``sufficiently'' ergodic. Indeed, the logarithmic Sobolev inequality implies a spectral gap and thus mixing time estimates for the associated Glauber dynamics \cite{GNO}. The idea that Assumption \ref{ass:P} is useful in the context of Green's function estimates in stochastic homogenization goes back to an unpublished manuscript of Naddaf-Spencer. We quote a well-known result which immediately provides a large class of examples satisfying this assumption.
\begin{lemma}[The case of i.i.d.\ coefficients; see e.g.\ Lemma 1 in \cite{MO1}]
\label{lm:iid}
If the coefficients $\l(a(e)\r)_{e\in \mathbb E^d}$ are chosen in an independent and identically distributed (i.i.d.) way with values in $[{\lambda},1]$, then the induced measure $\mathbb P$ on $\curly{A}_{{\lambda}}$ satisfies Assumption~\ref{ass:P} (with $\rho=1/8$).
\end{lemma}
\begin{remark}
The logarithmic Sobolev inequality \eqref{eq:LSI} is a slightly weaker version of the more standard one in which the oscillation is replaced by the partial derivative $\frac{\partial \zeta}{\partial a(e)}$. The version \eqref{eq:LSI} has the slight advantage that the above lemma holds for the most general choices of i.i.d.\ coefficient fields \cite{MO1}.
\end{remark}
We are now ready to state the main result in the probabilistic setting, which provides a pointwise upper bound on the random function $w_G:\Omega\times{\mathbb Z}^d\to{\mathbb R}_+$ with probability $1$. Here $\Omega$ denotes the underlying probability space. We warn the reader that we will suppress the $\omega$-dependence of $w_G$ on occasion.
\begin{theorem}[Pointwise upper bound]\label{thm:mainrandom}
Let $d\geq 3$, and suppose Assumption \ref{ass:P} holds. For every $p\geq 1$, there exists $C_{d,{\lambda},p}>0$ so that
\begin{equation}\label{eq:wGannealed}
\qmexp{w_G(x)^p}\leq C_{d,{\lambda},p} (1+|x|)^{-2p}, \qquad x\in{\mathbb Z}^d.
\end{equation}
Moreover, for every ${\varepsilon}>0$, there exist finite sets $K_{\varepsilon}(\omega)\subseteq {\mathbb Z}^{d}$ such that
\begin{equation}\label{eq:wGprob1}
w_G(\omega,x)\leq (1+|x|)^{-2+{\varepsilon}},\qquad \omega\in \Omega,\, x\in{\mathbb Z}^d\setminus K_{{\varepsilon}}(\omega)
\end{equation}
holds with probability $1$.
\end{theorem}
The first estimate \eqref{eq:wGannealed} is a pointwise estimate for the ``annealed'' Hardy weight $\qmexp{w_G}$. The second part of the statement holds pointwise with probability $1$, at the modest price of an ${\varepsilon}$-loss in the exponent. The proof is given in Section~\ref{sect:pfmainrandom}.\medskip
Since $w_G$ is an optimal Hardy weight, this theorem yields information on the best-possible decay of an arbitrary Hardy weight via Proposition~\ref{prop:wGw}. For instance, we have the following corollary.
\begin{corollary}[Pointwise decay of optimal Hardy weights]\label{cor}
Let $d\geq 3$ and suppose Assumption \ref{ass:P} holds. Let $p>0$ and let $w:\Omega\times {\mathbb Z}^d\to{\mathbb R}_+$ be a function so that for some $C(\omega)>0$,
\begin{equation}\label{eq:cor}
w(\omega,x)\geq C(\omega) (1+|x|)^{-p},\qquad x\in{\mathbb Z}^d
\end{equation}
holds with positive probability. If $w$ is a Hardy weight, then $p\geq 2$.
\end{corollary}
\begin{proof}
We prove the contrapositive statement. Suppose that \eqref{eq:cor} holds for some $p<2$ with positive probability. We have
$$
\mathbb P\l(\inf_{x\in{\mathbb Z}^d}w(x)(1+|x|)^p>0\r)>0.
$$
Let ${\varepsilon}_0=\frac{2-p}{2}>0$ and apply \eqref{eq:wGprob1} of Theorem \ref{thm:mainrandom} to obtain
$$
\mathbb P\l(\sup_{x\in{\mathbb Z}^d}w_G(x)(1+|x|)^{2-{\varepsilon}_0}<\infty\r)=1.
$$
On the positive-probability event where both estimates hold, one can use $p<2-{\varepsilon}_0$ and apply Proposition \ref{prop:wGw} with $K$ taken to be an appropriate ball to conclude that $w$ is not a Hardy weight. This proves Corollary \ref{cor}.
\end{proof}
\subsection{The perturbative setting of small randomness}\label{sect:perturbativesetup}
While Theorem \ref{thm:mainrandom} gives a pointwise upper bound, the fact that the constant in \eqref{eq:wGprob1} depends on the random realization $\omega$ and that there is no matching lower bound leave something to be desired. In this last main result, we show that both of these restrictions can be remedied in a clean way in the perturbative setting of weak i.i.d.\ randomness (also known as small ellipticity contrast). For this, we build on a recent breakthrough of Bourgain \cite{B}, subsequently refined in \cite{KL}, which gives a precise description of the averaged Green's function; see also Section \ref{sect:GFasymptotics}.
We begin by reviewing the setup and main results of \cite{B,KL}.
We consider a family of mean-zero i.i.d.\ random variables $\{\omega_x\}_{x\in{\mathbb Z}^d}$ taking values in $ [-1,1] $. We let ${\delta}\in (0,1)$ and define an elliptic operator through \eqref{eq:setting2} with
\begin{equation}\label{eq:pertcoefficients}
a([x,x+e_j])=1+{\delta} \omega_x,\qquad j\in\{1,\ldots,d\}.
\end{equation}
Note that $a\in \curly{A}_{1-{\delta}}$ is elliptic. In \cite{KL}, on which this section is based, the model is also studied for non-isotropic $ a $.
In \cite{B}, it is shown that there exists $c_d>0$ so that for all ${\delta}\in (0,c_d)$, the averaged Green's function $\qmexp{G}$ arises itself as a Green's function of a translation-invariant operator $\curly{L}$, which is the harmonic mean of the original random operator. This ``parent operator'' has an explicit representation of the form
\begin{equation}\label{eq:harmonicmeanoperator}
\curly{L}={\Delta} +\nabla^* \mathbf{K}^{\delta} \nabla,
\end{equation}
where $ \Delta $ is the free Laplacian and $\mathbf{K}^{\delta}$ is a convolution operator-valued $d\times d$ matrix whose components satisfy the key decay estimate \cite[Theorem 1.1]{KL}
\begin{equation}\label{eq:Kdeltadecay}
|K^{\delta}_{j,k}(x-y)|\leq C_d {\delta}^2 (1+|x-y|)^{-3d+1/2},\qquad j,k\in \{1,\ldots,d\}.
\end{equation}
Equivalently, the operator $\curly{L}$ is a Fourier multiplier with the symbol $m: \mathbb T^d\to \mathbb C$, on the torus $\mathbb T^d=({\mathbb R}/2\pi {\mathbb Z})^d$, given by
\begin{equation}\label{eq:mthetadefn}
m(\theta)=2\sum_{j=1}^d (1-\cos\theta_j)+\sum_{1\leq j,k\leq d}
(e^{-i\theta_j}-1)\widehat{K^{\delta}_{j,k}}(\theta) (e^{i\theta_k}-1).
\end{equation}
The decay bound \eqref{eq:Kdeltadecay} implies that $$ \widehat{K^{\delta}_{j,k}}\in C^{2d-1}(\mathbb T^d) $$ for all $j,k\in \{1,\ldots,d\}$ and this regularity will be useful in the following.
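This can be read off directly from \eqref{eq:Kdeltadecay}: since
\begin{equation*}
\sum_{x\in{\mathbb Z}^d}|x|^{2d-1}\,\big|K^{\delta}_{j,k}(x)\big|\leq C_d{\delta}^2\sum_{x\in{\mathbb Z}^d}(1+|x|)^{2d-1-3d+\frac{1}{2}}=C_d{\delta}^2\sum_{x\in{\mathbb Z}^d}(1+|x|)^{-d-\frac{1}{2}}<\infty,
\end{equation*}
the Fourier series of $\widehat{K^{\delta}_{j,k}}$ may be differentiated term by term up to order $2d-1$, with an absolutely and uniformly convergent result.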
\subsection{Results in the perturbative setting}\label{sect:perturbativeresults}
The following theorem can be considered the matching lower bound to Theorem \ref{thm:mainrandom}.
\begin{theorem}[Pointwise lower bounds]\label{thm:mainrandompert}
\mbox{}
Let $d\geq 3$. There exists $c_d>0$ so that for all ${\delta}\in (0,c_d)$ and all $p\in (\frac{1}{2},\infty)$, there is a constant $C_{d,{\delta},p}>0$ so that the following holds.
\begin{enumerate}
\item[(i)] For $d\in \{3,4\}$, there is a radius $R_0>1$ so that
\begin{equation}\label{eq:mainrandompert1}
\langle w_G(x)^p\rangle \geq C_{d,{\delta},p} (1+|x|)^{-2p}, \qquad |x|\geq R_0.
\end{equation}
\item[(ii)] For $d\geq 5$,
\begin{equation}\label{eq:mainrandompert2}
\langle w_G(x)^p\rangle \geq C_{d,{\delta},p} (1+|x|)^{-2p}, \qquad x\in{\mathbb Z}^d.
\end{equation}
\end{enumerate}
\end{theorem}
Theorem \ref{thm:mainrandompert} is proved in Section \ref{sect:mainrandompertpf}. At the heart of the proof are novel Green's function asymptotics which are described in Section \ref{sect:GFasymptotics}.
We may combine Theorems \ref{thm:mainrandom} and \ref{thm:mainrandompert} to conclude that, e.g., for $p\geq 1$ and $d\geq 5$, there exist constants $C_{d,{\delta},p} ,C_{d,{\delta},p}' >0$ so that
\begin{equation}
C_{d,{\delta},p} (1+|x|)^{-2p}\leq \langle w_G(x)^p\rangle \leq C_{d,{\delta},p}' (1+|x|)^{-2p},\qquad x\in{\mathbb Z}^d.
\end{equation}
These results thus confirm the inverse-square scaling of the optimal Hardy weight $w_G$ in an averaged sense.
\begin{remark}
Note that in dimensions $d\in \{3,4\}$ we have a lower bound at large distances $|x|\geq R_0$ only, while in dimensions $d\geq 5$ we have a stronger, global lower bound, which is positive everywhere on ${\mathbb Z}^d$. Indeed, in general the optimal Hardy weight $w_G:{\mathbb Z}^d\to [0,\infty)$ may have zeros, a possibility we can exclude for $d\geq 5$ in a probabilistic sense with this result.
\end{remark}
Since we now have control over various moments of the pointwise optimal Hardy weight $w_G$, the second moment method can be used to obtain pointwise lower bounds with controlled positive probability.
\begin{corollary}[Pointwise lower bounds with positive probability]\label{cor:smm}
\mbox{}
Let $d\geq 3$. There exists $c_d>0$ so that for all ${\delta}\in (0,c_d)$, there are constants $c_{d,{\delta}},C_{d,{\delta}}>0$ so that the following holds.
\begin{enumerate}
\item[(i)] For $d\in \{3,4\}$, there is a radius $R_0>1$ so that
\begin{equation}\label{eq:corrandompert1}
\mathbb P\l(w_G(x)>C_{d,{\delta}} (1+|x|)^{-2}\r)\geq c_{d,{\delta}}>0, \qquad |x|\geq R_0.
\end{equation}
\item[(ii)] Let $d\geq 5$. Then
\begin{equation}\label{eq:corrandompert2}
\mathbb P\l(w_G(x)>C_{d,{\delta}} (1+|x|)^{-2}\r)\geq c_{d,{\delta}}>0,\qquad x\in{\mathbb Z}^d.
\end{equation}
\end{enumerate}
\end{corollary}
\begin{proof}[Proof of Corollary \ref{cor:smm}] We use the Paley-Zygmund inequality which says that for all $\theta\in(0,1)$
$$
\mathbb P\big(w_G(x)>\theta \qmexp{w_G(x)}\big)\geq (1-\theta)^2 \frac{\qmexp{w_G(x)}^2}{\qmexp{w_G(x)^2}}.
$$
We apply this with $\theta=\frac{1}{2}$. Then we use Theorem \ref{thm:mainrandompert} with $p=1$ to obtain a lower bound on $\qmexp{w_G(x)}$ and Theorem \ref{thm:mainrandom} with $p=2$ to obtain an upper bound on $\qmexp{w_G(x)^2}$. This proves Corollary \ref{cor:smm}.
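For instance, for $d\geq 5$ the resulting chain of estimates reads, explicitly,
$$
\mathbb P\Big(w_G(x)>\tfrac{1}{2}C_{d,{\delta},1}(1+|x|)^{-2}\Big)\geq \frac{1}{4}\frac{\qmexp{w_G(x)}^2}{\qmexp{w_G(x)^2}}\geq \frac{1}{4}\frac{C_{d,{\delta},1}^2(1+|x|)^{-4}}{C_{d,{\lambda},2}(1+|x|)^{-4}}=\frac{C_{d,{\delta},1}^2}{4C_{d,{\lambda},2}},
$$
with $C_{d,{\delta},1}$ the constant from \eqref{eq:mainrandompert2} for $p=1$ and $C_{d,{\lambda},2}$ the constant from \eqref{eq:wGannealed} for $p=2$.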
\end{proof}
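For concreteness, in the case $d\geq 5$ (and analogously for $d\in \{3,4\}$ with $|x|\geq R_0$), the two moment bounds combine to
$$
\mathbb P\Big(w_G(x)>\tfrac{1}{2}\qmexp{w_G(x)}\Big)\;\geq\;\frac{1}{4}\,\frac{\qmexp{w_G(x)}^2}{\qmexp{w_G(x)^2}}\;\geq\;\frac{1}{4}\,\frac{C_{d,{\delta},1}^2\,(1+|x|)^{-4}}{C_{d,{\delta},2}'\,(1+|x|)^{-4}}\;=\;\frac{C_{d,{\delta},1}^2}{4\,C_{d,{\delta},2}'},
$$
and the bound $\tfrac{1}{2}\qmexp{w_G(x)}\geq \tfrac{1}{2}C_{d,{\delta},1}(1+|x|)^{-2}$ supplies the threshold appearing in \eqref{eq:corrandompert2}.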
\begin{remark}[Probabilistic interpretation of $\curly{L}$]
\label{rmk:T}
In the setting of \cite{B,KL} described above, one may ask whether the operator $\curly{L}$ from \eqref{eq:harmonicmeanoperator} is again the generator of a random walk on ${\mathbb Z}^d$. More precisely, one can write $m(\theta)=4d\l(1-\hat T(\theta)\r)$ with the function $T:{\mathbb Z}^d\to{\mathbb R}$ defined by
\begin{equation}\label{eq:Tdefn}
\begin{aligned}
T(x)=\frac{1}{2}{\delta}_{x=0}+\frac{1}{4d}{\delta}_{|x|=1}+\frac{1}{4d}\sum_{j,k=1}^d \big(&-K^{\delta}_{j,k}(x)+K^{\delta}_{j,k}(x-e_j)\\
&+K^{\delta}_{j,k}(x-e_k)-K^{\delta}_{j,k}(x-e_j-e_k)\big).
\end{aligned}
\end{equation}
So, a curious question is the following: Is $T(x)\geq 0$ for all $x\in{\mathbb Z}^d$? If this is the case, it can be interpreted as the transition function of a random walk with generator $\mathcal L$ (up to a multiplicative factor $4d$). This would mean that probabilistic averages of solutions are themselves governed by a \textit{bona fide} diffusion process which may hold non-trivial information about the non-averaged processes. In our opinion, such a direct physical meaning of the operator $\mathcal L$ is not at all obvious and would be remarkable. We encountered this question when considering the Green's function asymptotics in Theorem \ref{thm:GFasymptotic}, which is a necessary tool for Theorem \ref{thm:mainrandompert} and would follow immediately from work of Uchiyama \cite{Uch98} if $T(x)\geq 0$. According to our initial investigation, identifying conditions under which $T(x)\geq 0$ holds appears to be connected to subtle questions concerning componentwise positivity of matrix inverses \cite{JS}.
\end{remark}
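While we do not pursue this question further here, it is straightforward to probe numerically once $\widehat{\mathbf{K}^{\delta}}$ is available on a grid. The following sketch is a minimal illustration under that assumption; the array \texttt{Khat} is a hypothetical placeholder (set to zero below, i.e.\ ${\delta}=0$, where $T$ reduces to the transition function of a lazy simple random walk and is trivially nonnegative).
\begin{verbatim}
import numpy as np

# Sketch: test T(x) >= 0 numerically, given samples of the Fourier transforms
# \hat{K}^delta_{j,k} on an N^d grid. Khat is a hypothetical placeholder array;
# Khat = 0 corresponds to delta = 0, i.e. the simple random walk.
d, N = 3, 32
k = 2 * np.pi * np.fft.fftfreq(N)
theta = np.meshgrid(*(d * [k]), indexing="ij")
Khat = np.zeros((d, d, *[N] * d), dtype=complex)  # placeholder for \hat{K}^delta

m = 2 * sum(1 - np.cos(theta[j]) for j in range(d))
for j in range(d):
    for l in range(d):
        m = m + (np.exp(-1j * theta[j]) - 1) * Khat[j, l] * (np.exp(1j * theta[l]) - 1)

That = 1 - m / (4 * d)              # \hat{T}(theta) = 1 - m(theta)/(4d)
T = np.fft.ifftn(That).real         # transition function on the discrete torus
print("min T =", T.min())           # question: is this >= 0 for small delta?
\end{verbatim}
For \texttt{Khat = 0}, the inverse transform returns exactly the weights $\tfrac12$ at the origin and $\tfrac{1}{4d}$ at the $2d$ neighbors, in agreement with \eqref{eq:Tdefn}.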
\section{Green's function estimates}
\label{sect:GF}
Our investigations of optimal Hardy weights make use of existing upper bounds on Green's function gradients, specifically those of Marahrens-Otto \cite{MO1,MO2}, but they also require us to prove some new lower bounds on Green's function gradients. Since such bounds are largely absent from the literature, we believe they may be of independent interest to readers concerned with homogenization theory.
We recall our notation $G(x)=G(x,0)$.
\subsection{Lower bound on sectorial averages of Green's function derivatives}\label{sect:onedirection}
The following lemma gives a general lower bound on sectorial averages of Green's function derivatives for all elliptic coefficient fields. To this end, recall the definition of a sector
$ \curly S^{j,{\alpha}}_{R,\ell}=\setof{x\in{\mathbb Z}^d}{\langle x,e_{j}\rangle>(1-{\alpha})|x|,\;R\leq |x|\leq \ell R} $ for $ j\in\{1,\ldots,d\} $ and $ R,\ell >0 $, $ {\alpha} \in(0,1)$.
\begin{lemma}[Lower bound on sectorial averages]\label{lm:onedirection}
Let $d\geq 3$. There exist constants $\ell=\ell_d>0$ and $C_{d,\lambda}>1$ such that for every $a\in\curly{A}_{{\lambda}}$, every $1\leq j\leq d$, every $ {\alpha}\in(0,1) $ and every radius $R\geq 1$,
$$
R^{-d}\sum_{x\in \curly S^{j,{\alpha}}_{R,\ell}} |G(x)-G(x+e_j)|^2 \geq C_{d,{\lambda}} R^{2-2d}.
$$
\end{lemma}
The one-page proof of this lemma relies only on Aronson's pointwise bounds \eqref{eq:aronsondiscrete}, the sectorial geometry, and the Cauchy-Schwarz inequality; see Section~\ref{sect:onedirectionpf} for the details. Lemma \ref{lm:onedirection} is used to prove the lower bound in Theorem \ref{thm:mainspatial}.
\subsection{Green's function asymptotics}\label{sect:GFasymptotics}
In this section, we discuss the second class of main results about asymptotics of Green's functions and of optimal Hardy weights in a perturbative model with weak randomness previously studied in \cite{B,DGL,KL}.
By Taylor expansion of the Fourier multiplier \eqref{eq:mthetadefn} at the origin, we find that the lowest order is quadratic of the form $\scp{\theta}{\mathbf Q \theta}$ with the $d\times d$ Hessian matrix
\begin{equation}\label{eq:Qdefn}
\mathbf Q=\mathrm{Hess}(m)(0)=\mathbf{I}_d+\widehat{\mathbf{K}^\delta}(0).
\end{equation}
The matrix $\mathbf Q$ plays an important role in characterizing the leading-order behavior at large distances. In \cite{KL} a detailed analysis of $ \mathbf{K}^\delta$ is carried out, which yields that
the $d\times d$ matrix $\widehat{\mathbf{K}^\delta}(0)$ is symmetric\footnote{The symmetry can be seen from the power series representation in equation (1.14) of \cite{KL}. Indeed, the so-called Helmholtz projection $\mathbf{K}={\nabla \Delta^{-1}\nabla^*}$ appearing therein has a kernel $\mathbf K(x-y)$ taking values in the real-valued $d\times d$ matrices and satisfying $\mathbf K^T(x)=\mathbf{K}(-x)$. Combining this with the invariance of the underlying probability distribution of $\{\omega_x\}_x$ under reflection yields the symmetry.} and satisfies \eqref{eq:Kdeltadecay} which in turn implies $$ \|\widehat{\mathbf{K}^\delta}(0)\|\leq C_d{\delta}^2. $$
This observation readily yields the following proposition which is used for our analysis below.
\begin{proposition}\label{prop:Ksymm}
The $d\times d$ matrix $\mathbf{Q}$ is symmetric and $\mathbf{Q}\geq 1-C_d {\delta}^2$. In particular, $\mathbf{Q}$ is positive definite for sufficiently small ${\delta}>0$.
\end{proposition}
From now on we assume that $ \delta $ is sufficiently small such that $ \mathbf{Q} $ is positive definite.
We introduce the modified spatial variable
\begin{equation}\label{eq:tildexdefn}
\tilde x = \sigma \mathbf{Q}^{-1/2} x,\qquad \textnormal{with } \sigma=(\det\mathbf{Q})^{1/(2d)},
\end{equation}
and the universal constant
\begin{equation}\label{eq:kappadefn}
\kappa_d=\frac{1}{2}\pi^{-d/2}\Gamma(d/2-1).
\end{equation}
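For orientation, in low dimensions this constant evaluates to $\kappa_3=\frac{1}{2}\pi^{-3/2}\,\Gamma(\tfrac12)=\frac{1}{2\pi}$ and $\kappa_4=\frac{1}{2}\pi^{-2}\,\Gamma(1)=\frac{1}{2\pi^2}$.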
For $d\geq 3$, we denote
\begin{equation}\label{eq:mddefn}
m_d=
\begin{cases}
3,\qquad &\textnormal{if } d=3,\\
d+1,\qquad &\textnormal{if } d\geq 4.
\end{cases}
\end{equation}
\begin{thm}[Averaged Green's function asymptotics in the perturbative setting]\label{thm:GFasymptotic}
For $d\geq 3$, there exists $c_d>0$ so that for all ${\delta}\in (0,c_d)$ the following holds. There exist polynomials $U_1,\ldots,U_{m_d}$ with $U_k$ having degree at most $3k$ so that
\begin{equation}\label{eq:GFasymptotic}
\qmexp{G(x)}=\frac{\kappa_d}{\sigma^2} |\tilde x|^{2-d}
+\sum_{k=1}^{m_d} U_k\l(\frac{\tilde x}{|\tilde x|}\r) |\tilde x|^{2-d-k} +o(|\tilde x|^{2-d-m_d}), \qquad \textnormal{as } |x|\to\infty.
\end{equation}
\end{thm}
Recall that $\tilde x =\sigma\mathbf{Q}^{-1/2} x$. We have
\begin{equation}\label{eqn:xtilde}
C_{d,{\delta}}^{-1}|x|\leq |\tilde x|\leq C_{d,{\delta}}|x|
\end{equation}
for an appropriate constant $C_{d,{\delta}}>1$ and $ \delta>0 $ small enough. The reason is that $\mathbf{Q}=\mathbf I_d +\widehat{\mathbf{K}^\delta}(0)$ is close to the identity by Proposition \ref{prop:Ksymm}. In particular, we have that $|x|\to\infty$ and $|\tilde x|\to\infty$ are equivalent and $o(|x|^{-k})=o(|\tilde x|^{-k})$.
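Quantitatively, Proposition \ref{prop:Ksymm} together with the bound $\|\widehat{\mathbf{K}^\delta}(0)\|\leq C_d{\delta}^2$ gives $(1-C_d{\delta}^2)\mathbf{I}_d\leq \mathbf{Q}\leq (1+C_d{\delta}^2)\mathbf{I}_d$, and hence
$$
\sigma\,(1+C_d{\delta}^2)^{-1/2}\,|x|\;\leq\; |\tilde x|=\sigma\,|\mathbf{Q}^{-1/2}x| \;\leq\; \sigma\,(1-C_d{\delta}^2)^{-1/2}\,|x|,
$$
so \eqref{eqn:xtilde} holds for any $C_{d,{\delta}}\geq\max\{\sigma(1-C_d{\delta}^2)^{-1/2},\,\sigma^{-1}(1+C_d{\delta}^2)^{1/2}\}$.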
The proof of Theorem \ref{thm:GFasymptotic} generalizes a delicate Fourier analysis developed by Uchiyama \cite{Uch98} in the probabilistic setting of random walks; see Section \ref{sect:GFasymptoticpf}. The key input is the decay estimate \eqref{eq:Kdeltadecay} which is the main result of \cite{KL}.
The polynomials $U_k$ are Fourier transforms of fractions of the form $\tfrac{P_{2d-2+k}(\xi)}{|\xi|^{2d}}$ where $P_{2d-2+k}$ is a homogeneous polynomial of degree $2d-2+k$. These polynomials can be explicitly computed as moments of the function $T$ defined in \eqref{eq:Tdefn}, cf.\ \cite{Uch98}. For instance, we have
$$
U_1(\omega)= \int_{\mathbb R^d} \frac{P_{2d-1}(\xi)}{|\xi|^{2d}} e^{-i\omega \cdot \xi}\mathrm{d} \xi,
\qquad
P_{2d-1}(\xi)
=-\frac{2i}{3\sigma^4 (2\pi)^d} |\xi|^{2d-4} \sum_{x\in{\mathbb Z}^d} T(x) (\xi\cdot x)^3.
$$
Here the Fourier transform is defined in the sense of tempered distributions on $\mathbb R^d\setminus\{0\}$ and the integral can be explicitly computed via Lemma 2.1 in \cite{Uch98}.
\begin{remark} For readers with a background in homogenization theory, we point out that the $U_k$, which depend on $T$ from \eqref{eq:Tdefn}, are therefore computable from the operator $\mathbf{K}^\delta$ alone. This operator has several interesting properties some of which are still under investigation: (a) It fully characterizes the law of the probability measure \cite[Proposition 1.8]{KL} and (b) its moments can be viewed as higher-order correctors in the context of homogenization theory with small ellipticity contrast \cite{DGL}. The latter is related to the so-called Bourgain-Spencer conjecture \cite{D,DGL,DLP}.
\end{remark}
\begin{remark}\label{rmk:firstterm}
For readers interested in the underlying Fourier theory and the emergence of the matrix $\mathbf{Q}$, we give a short self-contained derivation of the lowest-order asymptotics in Theorem \ref{thm:GFasymptotic} in Appendix \ref{sect:direct} based on \cite[Theorem 1.1]{KL}. The idea is to reduce it to the Green's function of the free Laplacian through Taylor expansion around the origin in Fourier space; basic harmonic analysis then controls the error terms.
\end{remark}
\subsection{Consequences of Theorem \ref{thm:GFasymptotic}}
In the discrete setting, pointwise asymptotics yield pointwise asymptotics of derivatives of one order less. Moreover, this procedure can be iterated for higher derivatives, with a loss of one asymptotic order per derivative. Since our expansion \eqref{eq:GFasymptotic} has $m_d+1$ terms, we can describe asymptotics of the derivatives $\qmexp{\nabla^\alpha G}$ with $|\alpha|\leq m_d$ up to order $m_d-|\alpha|$. (Recall that $m_d$ defined in \eqref{eq:mddefn} is $d+1$ for $d\geq 4$.)
We recall the notation of the derivative $\nabla$ along edges given by
$$
\nabla f([x,x+e_j]) = f(x+e_j)-f(x).
$$
We also note that by linearity $\qmexp{\nabla f}=\nabla \qmexp{f}$.
\begin{corollary}[Asymptotics of Green's function derivatives]\label{cor:nablaGFasymptotic}
Let $1\leq j\leq d$, $s\in\{ \pm 1\}$ and let $\alpha\in {\mathbb N}_0^d$ be a multi-index with $|{\alpha}|\leq m_d$. We make the same assumptions as in Theorem \ref{thm:GFasymptotic}.
Then
\begin{equation}\label{eq:nablaalphaGFasymptotic}
\begin{aligned}
\qmexp{\nabla^\alpha G([x,x+s e_j])}=&\nabla^\alpha\l(\frac{\kappa_d}{\sigma^2} |\tilde x|^{2-d}
+\sum_{k=1}^{m_d-|\alpha|} U_k\l(\frac{\tilde x}{|\tilde x|}\r) |\tilde x|^{2-d-k}\r)\\
&+o(|\tilde x|^{2-d-m_d}), \qquad \textnormal{as } |x|\to\infty.
\end{aligned}
\end{equation}
\end{corollary}
A few comments on this are in order. Recall notation \eqref{eq:tildexdefn} and the fact that each $U_k$ is a polynomial. By the mean value theorem, we can bound the discrete derivative by the corresponding continuum derivative, cf.\ \cite[Lemma, p.~6]{Don}. Together with Proposition \ref{prop:Ksymm}, this implies
\begin{equation}\label{eq:Uorder}
\nabla^\alpha U_k\l(\frac{\tilde x}{|\tilde x|}\r) |\tilde x|^{2-d-k}=O(|\tilde x|^{2-d-k-|\alpha|}).
\end{equation}
In other words, the different summands appearing in \eqref{eq:nablaalphaGFasymptotic} belong to successively smaller powers of $|\tilde x|$. Therefore, as mentioned above, \eqref{eq:nablaalphaGFasymptotic} is indeed an asymptotic expansion comprising $m_d-|\alpha|$ orders. For $|{\alpha}|=m_d$, \eqref{eq:nablaalphaGFasymptotic} reduces to
$$
\qmexp{\nabla^\alpha G([x,x+s e_j])}=\frac{\kappa_d}{\sigma^2} \nabla^\alpha |\tilde x|^{2-d}+o(|\tilde x|^{2-d-m_d}), \qquad \textnormal{as } |x|\to\infty.
$$
so it just manages to capture the leading order asymptotic of the $m_d$-th derivative.
The leading terms do not involve $U_k$ and are therefore particularly easy to compute. For example, we have the following leading-order gradient asymptotic
\begin{equation}\label{eq:nablaGFasymptotic}
\qmexp{\nabla G([x,x+s e_j])}=s\,(2-d)\,\frac{\kappa_d}{\sigma^2}\, |\tilde x|^{1-d}\, \frac{\scp{\tilde x}{\tilde e_j}}{|\tilde x|}
+{\mathcal{O}}(|\tilde x|^{-d}), \qquad \textnormal{as } |x|\to\infty.
\end{equation}
This particular result will be used to prove Theorem \ref{thm:mainrandompert} about Hardy weights.
As a second and final consequence of Theorem \ref{thm:GFasymptotic}, we use concentration bounds from \cite{MO1} to obtain surprisingly universal asymptotic information on the random Green's function $G$.
\begin{corollary}[Universal asymptotics of the random Green's function]\label{cor:randomGFasymptotic}
With the same assumptions as in Theorem \ref{thm:GFasymptotic}, let ${\varepsilon}>0$. There exist constants $C_{d,{\delta},{\varepsilon}},C_{d,{\delta},{\varepsilon}}'>0$ so that
\begin{align}
\label{eq:randomGFasymptotic}
\sup_{x\in{\mathbb Z}^d} \frac{\l|G(x)-\frac{\kappa_d}{\sigma^2} |\tilde x|^{2-d}\r|}{(1+|x|)^{1-d+{\varepsilon}}} &\leq C_{d,{\delta},{\varepsilon}}
\end{align}
holds with probability $1$.
\end{corollary}
Stated informally, Corollary \ref{cor:randomGFasymptotic} says that, with probability $1$,
\begin{equation}\label{eq:randomasymptoticinformal}
\begin{aligned}
G(x)=\frac{\kappa_d}{\sigma^2} |\tilde x|^{2-d}+{\mathcal{O}}(|x|^{1-d+{\varepsilon}}),\qquad \textnormal{as } |x|\to\infty.
\end{aligned}
\end{equation}
To our knowledge, Corollary \ref{cor:randomGFasymptotic} is the first result identifying such universal spatial asymptotics of a random Green's function.
Corollaries \ref{cor:nablaGFasymptotic} and \ref{cor:randomGFasymptotic}, as well as Formula \eqref{eq:nablaGFasymptotic}, are proved in Section \ref{sect:corproofs}.
\begin{remark}[An open question]
\label{rmk:open}
It would be very interesting to generalize Corollary \ref{cor:randomGFasymptotic} to describe the asymptotic of the gradient of the random Green's function $\nabla G([x,x+e_j])$. Such a result would have immediate consequences in the context of Hardy weights (by improving Theorem \ref{thm:mainrandompert} and Corollary \ref{cor:smm} on pointwise lower bounds), but also well beyond, because there are almost no workable lower bounds on random Green's function gradients in the literature as we mentioned in the discussion preceding Lemma~\ref{lm:onedirection}.
Since the averaged Green's function gradient is understood by Corollary \ref{cor:nablaGFasymptotic}, which as we mentioned can easily be refined, the main outstanding technical roadblock to such an extension is the lack of an analog of the concentration bound \eqref{eq:GFconcentration} for the gradient. This is due to the fact that the ``vertical derivative'' $\mathrm{osc}_e \nabla G([x,x+e_j],0)$ behaves like $\nabla\nabla G([x,x+e_j],e) \nabla G(e,0)$, an asymmetry that arises from differentiating the standard resolvent equation; see also formula (54) in \cite{MO1}.
\end{remark}
\section{General pointwise estimates}\label{sect:general}
In this short section, we present some convenient general pointwise estimates that are used in more specific situations later and are collected here for the convenience of the reader. We derive from the formula for $ w_{{G}} $ given by \eqref{eq:wG} upper and lower bounds which directly involve the discrete derivative $|G(x)-G(y)|$ for $x\sim y$ as summarized in the following proposition. These bounds hold for all elliptic graphs and without any averaging.
Recall that $E>0$ is a constant so that \eqref{E} holds.
\begin{proposition}[General pointwise bound]\label{prop:wGbounds}
Assume that $b$ is a connected transient graph over $X$ which satisfies \eqref{E}. Then,
\begin{align}
w_G(x)
\leq& 1_{o}(x)+\frac{1}{G(x)^2}
\sum_{\substack{y\in X}}b(x,y)(G(x)-G(y))^2,\\
w_G(x)
\geq& 1_{o}(x)+(1+E^{-1/2})^{-1}\frac{1}{G(x)^2}
\sum_{\substack{y\in X}}b(x,y)(G(x)-G(y))^2.
\end{align}
\end{proposition}
We mention in passing that these bounds yield rough decay estimates on $w_G$ that do not require any information on $\nabla G$, though we will not use these rough bounds in the following. Indeed, from ${Q}(f)=2\scp{f}{Lf}$ and the defining property of the Green's function we have that
$$
\sum_{x\in X} w_G(x) G(x)^2 \leq C G(o) <\infty,
$$
which for example implies a weak decay estimate on ${\mathbb Z}^3$ via the Aronson bound \eqref{eq:aronsondiscrete}.
The proof of Proposition \ref{prop:wGbounds} uses the following basic comparison property of the Green's function of strongly elliptic graphs with bounded combinatorial degree.
\begin{lemma}\label{lm:basic}
Let $ b $ be a connected transient graph over $ X $ which satisfies the ellipticity condition \eqref{E}. Then, for all $ x\sim y $
\begin{align*}
G(x)\geq E G(y)
\end{align*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lm:basic}]
Note that the Green's function is superharmonic, more specifically, it satisfies
\begin{align*}
{L} G(o,\cdot)=1_{o}.
\end{align*}
By virtue of the ellipticity condition \eqref{E} we have for all $ x\sim y $
\begin{align*}
G(x) \ge\frac{1}{\sum_{z\in X}b(x,z) } \sum_{z\in X}b(x,z) G(z)\ge \frac{b(x,y)}{\sum_{z\in X}b(x,z)}\, G(y)\ge E\, G(y),
\end{align*}
which proves Lemma \ref{lm:basic}.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:wGbounds}]
For the upper bound, we note that
$$
|G(x)^{1/2}-G(y)^{1/2}|=\frac{|G(x)-G(y)|}{G(x)^{1/2}+G(y)^{1/2}}
\leq
\frac{|G(x)-G(y)|}{G(x)^{1/2}}.
$$
For the lower bound, we use that by Lemma \ref{lm:basic}, we have $G(y)\leq E^{-1}G(x)$ and so
$$
|G(x)^{1/2}-G(y)^{1/2}|=\frac{|G(x)-G(y)|}{G(x)^{1/2}+G(y)^{1/2}} \geq (1+E^{-1/2})^{-1}\frac{|G(x)-G(y)|}{G(x)^{1/2}}.
$$
Squaring both sides and applying the resulting estimates to the formula \eqref{eq:wG} for $ w_{G} $ establishes Proposition \ref{prop:wGbounds}.
\end{proof}
\section{Proof of Theorem \ref{thm:mainspatial} on spatial averaging}
\label{sect:pfmainspatial}
\subsection{Proof of the upper bound \eqref{eq:mainspatialub}}
Let $a\in\curly{A}_{{\lambda}}$ and $R\geq 1$ be arbitrary and recall the setting given by \eqref{eq:setting2}. By the upper bound in Proposition \ref{prop:wGbounds}, the lower Aronson-type bound in \eqref{eq:aronsondiscrete} and $a\leq {\lambda}^{-1}$, we have
$$
\begin{aligned}
&\sum_{\substack{x\in{\mathbb Z}^d:\\ R\leq |x|\leq 2R}} w_G(x)\\
&\leq \sum_{\substack{x\in{\mathbb Z}^d:\\ R\leq |x|\leq 2R}} \frac{1}{G(x)^2}\sum_{\substack{j=1,\ldots,d\\ s=\pm1}}
a([x,x+se_j])\l|G(x)-G(x+se_j)\r|^2\\
& \leq {\lambda}^{-1} C_{d,{\lambda}}^2 R^{2(d-2)} \sum_{\substack{x\in{\mathbb Z}^d:\\ R\leq |x|\leq 2R}}\sum_{\substack{j=1,\ldots,d\\ s=\pm1}} |G(x)-G(x+se_j)|^2.
\end{aligned}
$$
To bound the annular averages of discrete derivatives of $G$, we employ \cite[Lemma~1.4]{MO2}. To verify the conditions, we recall that the class $\curly{A}_{\lambda}$ is related to the class used in \cite{MO2} (where the upper bound on the coefficient is equal to $1$ instead of ${\lambda}^{-1}$) by simple rescaling which only changes the ${\lambda}$-dependence of the constant below. In conclusion, we obtain the existence of a constant $\tilde C_{d,{\lambda}}>0$ such that for every $a\in\curly{A}_{{\lambda}}$ and every radius $R\geq 1$,
$$
R^{-d}\sum_{\substack{x\in{\mathbb Z}^d:\\ R\leq |x|\leq 2R}}\sum_{\substack{j=1,\ldots,d\\ s=\pm1}} |G(x)-G(x+se_j)|^2\leq \tilde C_{d,{\lambda}}R^{2(1-d)}.
$$
This yields
$$
\begin{aligned}
R^{-d}\sum_{\substack{x\in{\mathbb Z}^d:\\ R\leq |x|\leq 2R}} w_G(x)
\leq c_{d,{\lambda}} R^{-2},
\end{aligned}
$$
which is the desired upper bound \eqref{eq:mainspatialub}.\medskip
It now remains to prove the lower bound \eqref{eq:mainspatiallb}.
\subsection{Proof of Lemma~\ref{lm:onedirection}}
\label{sect:onedirectionpf}
The proof of the lower bound in Theorem \ref{thm:mainspatial} uses Lemma~\ref{lm:onedirection} which we prove now.
Let ${\alpha}\in (0,1)$. We consider the case $j=1$ without loss of generality. We let $\ell>0$ be a constant to be determined later. The idea of the proof is that, by using both parts of the Aronson-type bound \eqref{eq:aronsondiscrete}, we know that $G$ must have decayed by a definite amount between the interior and the exterior boundary of the sector.
In a preliminary step, we replace the sector
$$
\curly{S}^{1,{\alpha}}_{R,\ell}=\setof{x\in{\mathbb Z}^d}{\scp{x}{e_1}>(1-{\alpha})|x|,\;R\leq |x|\leq \ell R}
$$
by a cuboid. Observe that there are constants $c_{in}, c_{out},c_{orth}>0$ depending only on $d$, so that the cuboid
$$
\mathrm{Cub}=\setof{(x_1,\ldots,x_d)\in {\mathbb Z}^d}{c_{in} R\leq x_1\leq c_{out} \ell R \text{ and } |x_i|\leq c_{orth} {\alpha} R\mbox{ for all } i\ge 2}
$$
is contained in the sector $\curly{S}^{1,{\alpha}}_{R,\ell}$. Without loss of generality, we assume that $c_{in} R$ and $c_{out} \ell R$ are integers. Note further that for any fixed $\ell$, $\mathrm{Cub}$ has cardinality of order $R^d$ as $R\to\infty$. We conclude that it suffices to prove the claim with $\mathrm{Cub}$ in place of $\curly{S}^{1,{\alpha}}_{R,\ell}$.
We introduce the inner and outer faces of the cuboid
$$
\begin{aligned}
F_{in}=&\setof{x=(c_{in} R,x_2,\ldots,x_d)}{|x_i|\leq c_{orth} {\alpha} R\,\mbox{ for all } i\ge 2},\\
F_{out}=&\setof{x=(c_{out} \ell R,x_2,\ldots,x_d)}{|x_i|\leq c_{orth} {\alpha} R\,\mbox{ for all }i\ge 2}.
\end{aligned}
$$
By the two-sided Aronson-type bounds \eqref{eq:aronsondiscrete}, we have
$$
\begin{aligned}
G(x)\geq& C_{d,\lambda}^{-1} (1+|x|)^{2-d}\geq C'_{d,\lambda} c_{in}^{2-d} R^{2-d},\qquad x\in F_{in},\\
G(x)\leq& C_{d,\lambda} (1+|x|)^{2-d}\leq C_{d,\lambda} c_{out}^{2-d} (\ell R)^{2-d},\qquad x\in F_{out}.
\end{aligned}
$$
Given $x\in F_{in}$, note that the point $x+R(c_{out}\ell-c_{in})e_1\in F_{out}$. By choosing $\ell=\ell_d>0$ sufficiently large, depending on $d$ but not on $R$, these bounds imply that the Green's function differs by order $R^{2-d}$ between these two points. Namely, there exists $C''_{d,\lambda} >0$ so that
\begin{equation}\label{eq:glb}
G(x)-G(x+Rc e_1)\geq C''_{d,\lambda} R^{2-d}, \qquad x\in F_{in},
\end{equation}
where we introduced $c:=c_{out}\ell_d-c_{in}$.
Since $F_{in}$ contains an order of $(c_{orth}{\alpha} R)^{d-1}\sim R^{d-1}$ many sites, this implies
$$
\sum_{x\in F_{in}} \l(G(x)-G(x+Rce_1)\r)\geq C'''_{d,\lambda} R.
$$
By telescoping (recall that $cR\in{\mathbb Z}$) and the Cauchy-Schwarz inequality, we can upper bound the left-hand side as follows
\begin{equation}\label{eq:telescopic}
\begin{aligned}
&\sum_{x\in F_{in}} \l(G(x)-G(x+Rce_1)\r)\\
&=\sum_{x\in F_{in}} \sum_{n=1}^{cR}\l(G(x+(n-1)e_1)-G(x+ ne_1)\r)\\
&\leq C_{d} R^{d/2}\sqrt{\sum_{x\in F_{in}} \sum_{n=1}^{cR} |G(x+(n-1)e_1)-G(x+ ne_1)|^2}\\
&\leq C_{d} R^{d/2} \sqrt{\sum_{x\in \mathrm{Cub}} |G(x)-G(x+e_1)|^2}.
\end{aligned}
\end{equation}
We combine this with \eqref{eq:glb} to conclude
$$
C'''_{d,\lambda} R\leq C_{d}\, R^{d/2}\sqrt{\sum_{x\in \mathrm{Cub}} |G(x)-G(x+e_1)|^2}.
$$
Squaring and dividing by $R^{d}$ yields $R^{-d}\sum_{x\in \mathrm{Cub}} |G(x)-G(x+e_1)|^2\geq C_{d,{\lambda}}\, R^{2-2d}$. Since it suffices to prove the claim for $\mathrm{Cub}$, Lemma \ref{lm:onedirection} follows.
\qed
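As a quick numerical sanity check of the scaling (though not of the constants), one can evaluate the sectorial average for the model profile $g(x)=(1+|x|)^{2-d}$, which mimics $G$ up to the Aronson constants in \eqref{eq:aronsondiscrete}. The sketch below uses hypothetical cuboid parameters chosen purely for illustration; the printed values of $R^{2d-2}$ times the sectorial average should remain of order one.
\begin{verbatim}
import numpy as np

# Sanity check of the scaling in the lemma for d = 3, using the profile
# g(x) = (1+|x|)^(2-d) as a stand-in for G. We only check the order R^(2-2d)
# of the cuboid average, not the lambda-dependent constants.
d = 3
ell, a = 4, 0.5                      # hypothetical sector parameters
for R in (10, 20, 40, 80):
    xs = np.arange(R, ell * R)       # cuboid axis direction e_1
    ys = np.arange(-int(a * R), int(a * R) + 1)
    X, Y, Z = np.meshgrid(xs, ys, ys, indexing="ij")
    r  = np.sqrt(X**2 + Y**2 + Z**2)
    r1 = np.sqrt((X + 1)**2 + Y**2 + Z**2)
    g, g1 = (1 + r)**(2 - d), (1 + r1)**(2 - d)
    avg = ((g - g1)**2).sum() / R**d
    print(R, avg * R**(2 * d - 2))   # should stay bounded away from 0 and infinity
\end{verbatim}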
\subsection{Proof of the lower bound \eqref{eq:mainspatiallb}}
Recall the definition of a sector $ \curly{S}^{j,{\alpha}}_{R,\ell}=\setof{x\in{\mathbb Z}^d}{\scp{x}{e_j}>(1-{\alpha})|x|,\;R\leq |x|\leq \ell R} $ for $ j\in\{1,\ldots,d\} $, $ {\alpha}\in(0,1) $ and $ R,\ell>0 $.
By the lower bound in Proposition \ref{prop:wGbounds}, the upper Aronson-type bound \eqref{eq:aronsondiscrete} and $a\geq {\lambda}$, we have
$$
\begin{aligned}
\sum_{x\in\curly{S}^{j,{\alpha}}_{R,\ell}} w_G(x)
&\geq (1+E^{-1/2})^{-1}\sum_{x\in\curly{S}^{j,{\alpha}}_{R,\ell}} \frac{1}{G(x)^2}\sum_{\substack{i=1,\ldots,d\\ s=\pm1}} a([x,x+se_i])|G(x)-G(x+se_i)|^2\\
& \geq (1+E^{-1/2})^{-1}C_{d,{\lambda}}^{-2} \lambda\, (1+R)^{2(d-2)} \sum_{x\in\curly{S}^{j,{\alpha}}_{R,\ell}}\sum_{\substack{i=1,\ldots,d\\ s=\pm1}} |G(x)-G(x+se_i)|^2\\
& \geq C'_{d,{\lambda}} R^{2(d-2)} \sum_{x\in\curly{S}^{j,{\alpha}}_{R,\ell}} |G(x)-G(x+e_j)|^2.
\end{aligned}
$$
By Lemma \ref{lm:onedirection}, we conclude that
$$
R^{-d} \sum_{x\in\curly{S}^{j,{\alpha}}_{R,\ell}} w_G(x)
\geq C''_{d,{\lambda}} R^{-2},
$$
which is \eqref{eq:mainspatiallb}. This completes the proof of Theorem \ref{thm:mainspatial}.
\qed
\subsection{Proof of Proposition \ref{prop:continuum} on the continuum case}
\label{sect:continuumpf}
The proof follows along the same lines as the proofs of Theorem \ref{thm:mainspatial} and Corollary \ref{cor:ublb}. Here we summarize the necessary modifications.
In the continuum, Definition \eqref{eq:WG} is already in the general form achieved in the discrete setting by Proposition \ref{prop:wGbounds}. For the upper bound, we then use Aronson's bound \eqref{eq:aronson} in the original continuum version and note that Lemma 1.4 of \cite{MO2} is also proved in the continuum in Section 2 of \cite{MO2}; see also Lemma 2.1 in \cite{LSW63}. For the lower bound, when proving Lemma \ref{lm:onedirection} in the continuum, we argue analogously and replace the telescopic sum in \eqref{eq:telescopic} by an application of the fundamental theorem of calculus. The details are left to the reader.
\qed
\section{Proofs for random coefficients}
\label{sect:pfmainrandom}
In this section, we first prove Theorem \ref{thm:mainrandom} for general elliptic coefficients subject to Assumption \ref{ass:P}. Afterwards, we turn to the perturbative setting and prove Theorems \ref{thm:GFasymptotic} and \ref{thm:mainrandompert}, in that order.
\subsection{Proof of Theorem \ref{thm:mainrandom} on pointwise upper bounds}
We first prove the annealed estimate \eqref{eq:wGannealed}. Let $p\geq 1$. The case $ x=0 $ is clear by Proposition~\ref{prop:wGbounds}. For $x\neq 0$, by Proposition~\ref{prop:wGbounds} and the Aronson bound \eqref{eq:aronsondiscrete}, we have
\begin{equation}\label{eq:bigsum}
\begin{aligned}
\qmexp{w_G(x)^p}
&\leq
\left\langle\frac{1}{G(x)^{2p}}
\left(\sum_{\substack{j=1,\ldots,d\\ s=\pm1}} a([x,x+se_j])|G(x)-G(x+se_j)|^2\right)^{p}
\right\rangle\\
&
\leq
\left\langle
C_{d,{\lambda}}^{2p} \l(1+|x|\r)^{2p(d-2)}
\left(\sum_{\substack{j=1,\ldots,d\\ s=\pm1}} a([x,x+se_j])|G(x)-G(x+se_j)|^2\right)^{p}
\right\rangle\\
\end{aligned}
\end{equation}
To estimate the discrete derivative of $G$, we use $|a|\leq 1$ and \cite[Theorem 1]{MO1} which applies under Assumption \ref{ass:P} and gives
$$
\qmexp{|G(x)-G(x\pm e_j)|^{2p}}^{\frac{1}{2p}}\leq C'_{d,{\lambda},\rho,p} (1+|x|)^{1-d},
$$
for every $1\leq j\leq d$. Using this and the elementary inequality $\l(\sum_{j=1}^d b_j\r)^{p}\leq 2^{p-1} \sum_{j=1}^d b_j^p$, we find
$$
\begin{aligned}
\qmexp{w_G(x)^p}
\leq&
C_{d,{\lambda}}^{2p} 2^{p-1} \l(1+|x|\r)^{2p(d-2)}\sum_{\substack{i=1,\ldots,d\\ s=\pm1}} \qmexp{|G(x)-G(x+e_j)|^{2p}}\\
\leq & C_{d,{\lambda}}^{2p}2^{p}d(C'_{d,{\lambda},\rho,p})^{2p} (1+|x|)^{-2p}.
\end{aligned}
$$
This proves \eqref{eq:wGannealed} when $x\neq 0$.
It remains to prove the almost-sure estimate \eqref{eq:wGprob1}. Let ${\varepsilon}>0$ and set $p_{\varepsilon}=\frac{2d}{{\varepsilon}}$. By Markov's inequality with the power $p_{\varepsilon}$ and the estimate \eqref{eq:wGannealed} proven right above, we have
$$
\sum_{x\in {\mathbb Z}^d}\mathbb P\l(w_G(x)\geq (1+|x|)^{-2+{\varepsilon}}\r)
\leq
C_{d,{\varepsilon},{\lambda}}\sum_{x\in {\mathbb Z}^d}(1+|x|)^{(2-{\varepsilon})p_{\varepsilon}-2p_{\varepsilon}}=C_{d,{\varepsilon},{\lambda}}\sum_{x\in {\mathbb Z}^d}(1+|x|)^{-2d}<\infty.
$$
Now, the Borel-Cantelli lemma implies that $w_G(x)\geq (1+|x|)^{-2+{\varepsilon}}$ can occur for at most finitely many $x\in {\mathbb Z}^d$. This proves Theorem \ref{thm:mainrandom}.
\qed
\subsection{Proof of Theorem \ref{thm:GFasymptotic}}\label{sect:GFasymptoticpf}
Recall that the averaged Green's function $\qmexp{G}$ is the Green's function of the operator $ \mathcal{L} $ defined in \eqref{eq:harmonicmeanoperator}. Since the latter is a Fourier multiplier, as discussed in \cite[Appendix~A]{KL}, we have the Fourier-space representation
\begin{equation}\label{eq:GavgFourierrep}
\qmexp{G(x)}=\int_{\mathbb T^d} e^{ix\cdot \theta} \frac{1}{m(\theta)} \frac{\mathrm{d}^d \theta}{(2\pi)^d}
\end{equation}
with
$$
m(\theta)=2\sum_{j=1}^d (1-\cos\theta_j)+\sum_{1\leq j,k\leq d}
(e^{-i\theta_j}-1)\widehat{K^{\delta}_{j,k}}(\theta) (e^{i\theta_k}-1)
$$
and $\widehat{K^{\delta}_{j,k}}\in C^{2d-1}(\mathbb T^d)$ for all $j,k\in \{1,\ldots,d\}$ by \cite[Theorem 1.1]{KL}.
For $ \delta $ small enough $ m $ vanishes only at the origin $ \theta=0 $ by the bound on $ \mathbf K^{\delta} $, \eqref{eq:Kdeltadecay}. Thus, by Taylor expansion, $\frac{1}{m(\theta)}$ only has a quadratic singularity at the origin and so the integral is well-defined for $d\geq 3$.
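Before turning to the rigorous expansion, it may help to see \eqref{eq:GavgFourierrep} in action numerically. The following sketch, an illustration of our own and not part of the proof, treats the free case $\mathbf{K}^{\delta}=0$ in $d=3$, so that $m=m_0$ with $m_0(\theta)=2\sum_{j=1}^3(1-\cos\theta_j)$, and approximates the torus integral by an $N$-periodic FFT; removing the zero mode and the finite periodization produce errors that decay as $N\to\infty$ for $d\geq 3$. We only inspect the scaling $\qmexp{G(x)}\,|x|^{d-2}\to\textnormal{const}$, staying agnostic about the normalization of the limiting constant.
\begin{verbatim}
import numpy as np

# Sketch: evaluate <G(x)> = int_{T^3} e^{ix.theta}/m(theta) dtheta/(2pi)^3
# in the free case K^delta = 0 (so m = m0) on a discrete N-periodic torus.
N = 128
k = 2 * np.pi * np.fft.fftfreq(N)          # Fourier grid on the torus
t1, t2, t3 = np.meshgrid(k, k, k, indexing="ij")
m0 = 2 * ((1 - np.cos(t1)) + (1 - np.cos(t2)) + (1 - np.cos(t3)))
m0[0, 0, 0] = np.inf                       # remove the zero mode
G = np.fft.ifftn(1.0 / m0).real            # periodized Green's function

for r in (8, 16, 32):                      # check G(x) ~ const * |x|^(2-d)
    print(r, G[r, 0, 0] * r)               # printed ratios should stabilize
\end{verbatim}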
The goal is to perform asymptotic analysis of \eqref{eq:GavgFourierrep} as $|x|\to\infty$. This is a delicate stationary phase argument which has to take special care of the singularity at the origin in Fourier space as this becomes non-integrable after the last derivative is taken. The same problem was treated in detail by Uchiyama \cite{Uch98} in a probabilistic setting and we show below that his argument extends to our case. (This is perhaps not so surprising considering that there are only a few Fourier-theoretic arguments that leverage positivity conditions.)
To make contact with the probabilistic perspective, we denote
\begin{equation}\label{eq:mrewrite}
m(\theta)=4d\l(1-\hat T(\theta)\r)
\end{equation}
with $T:{\mathbb Z}^d\to{\mathbb R}$ given as in Remark \ref{rmk:T}, i.e.,
\begin{equation}\label{eq:Tdefn'}
\begin{aligned}
T(x)=\frac{1}{2}{\delta}_{x=0}+\frac{1}{4d}{\delta}_{|x|=1}+\frac{1}{4d}\sum_{j,k=1}^d \big(&-K^{\delta}_{j,k}(x)+K^{\delta}_{j,k}(x-e_j)\\
&+K^{\delta}_{j,k}(x-e_k)-K^{\delta}_{j,k}(x-e_j-e_k)\big).
\end{aligned}
\end{equation}
Note that we produced the term $\frac{1}{2}{\delta}_{x=0}$ by adding and subtracting a constant in \eqref{eq:mrewrite}. This is a common technical trick in the context of discrete random walks to remove periodicity, cf.\ \eqref{eq:ap} below.\medskip
\textit{Step 1.} As mentioned above, our goal is to extend \cite[Theorem 2]{Uch98} to our situation. In a first step, we verify the assumptions of that theorem with the exception of $T(x)\geq 0$. The function $T$ satisfies the following properties assumed in \cite{Uch98} for small $ \delta $. For all of these properties, the decay bound \eqref{eq:Kdeltadecay} from \cite[Theorem 1.1]{KL}, which reads $ |K^{\delta}_{j,k}(x)|\leq C_{d}\delta^{2} (1+|x|)^{-3d+1/2} $, is essential.
\begin{enumerate}[label=(\roman*)]
\item $T$ has zero mean. Indeed, by \eqref{eq:Kdeltadecay}, we can use Fubini and a change of variables to see
\begin{equation}
\label{eq:Uass1}
\begin{aligned}
\sum_{x\in{\mathbb Z}^d} xT(x)
=&\frac{1}{4d}\sum_{x\in{\mathbb Z}^d} x {\delta}_{|x|=1}\\
&+\frac{1}{4d}\sum_{j,k=1}^d \sum_{x\in{\mathbb Z}^d} K^{\delta}_{j,k}(x) \l(-x+(x+e_j)+(x+e_k)-(x+e_j+e_k)\r)\\
=&0.
\end{aligned}
\end{equation}
\item The smallest subgroup of ${\mathbb Z}^d$ generated by
\begin{equation}\label{eq:ap}
\setof{x\in{\mathbb Z}^d}{T(x)>0}
\end{equation}
is equal to ${\mathbb Z}^d$. This is a kind of aperiodicity property. To see that it holds, note that we can use the decay bound \eqref{eq:Kdeltadecay} to conclude that for all sufficiently small ${\delta}>0$, we have $T(0)>0$ and $T(\pm e_j)>0$ for $j=1,\ldots,d$.
\item The decay bound \eqref{eq:Kdeltadecay} also implies the summability of
\begin{equation}\label{eq:Uass2}
\begin{aligned}
&\sum_{x\in{\mathbb Z}^d} |T(x)||x|^{2+m_d} <\infty,\qquad &d=3 \textnormal{ or } d\geq 5,\\
&\sum_{x\in{\mathbb Z}^d} |T(x)||x|^{2+m_d} \ln |x| <\infty,\qquad &d=4,
\end{aligned}
\end{equation}
where $m_d$, which was defined in \eqref{eq:mddefn}, equals $2d-3$ for $ d\in\{3,4\} $ and $ d+1 $ for $ d\ge 5 $.
\end{enumerate}
Together (i)-(iii) verify the assumptions of Theorem 2 in \cite{Uch98} with $m$ equal to $m_d$ and $T$ called $p$ there, with the exception of non-negativity. \medskip
\textit{Step 2.} We confirm that the fact that $T$ may be negative does not pose any problems in the proof. This step uses Proposition \ref{prop:Ksymm} and the extension is applicable as long as $\mathbf Q$ is positive definite.
The proof of \cite[Theorem 2]{Uch98} is contained in Section 4 of that paper.
The proof makes use of general estimates on Fourier integrals taken from Sections 2 and 3 of \cite{Uch98} which do not depend on the non-negativity of $ p $. This concerns Lemma~2.1, Lemma~3.1 and Corollary 3.1 from \cite{Uch98}. These are used in Section~4
together with the absolute summability \eqref{eq:Uass2} to control the error terms. The only step where the loss of non-negativity requires a short argument is the proof of $c(\theta)^2+s(\theta)^2>0$, which is obtained on page 226 of \cite{Uch98} from positivity and aperiodicity. We now verify this condition in our context.
For $\theta\in\mathbb T^d$, we set
$$
c(\theta)=\sum_{x\in{\mathbb Z}^d} T(x)(1-\cos(\theta\cdot x)),\qquad s(\theta)=\sum_{x\in{\mathbb Z}^d} T(x)\sin(\theta\cdot x).
$$
\begin{lemma}\label{lm:cstheta}
For sufficiently small ${\delta}>0$, we have $c(\theta)^2+s(\theta)^2>0$ for $\theta\in\mathbb T^d\setminus\{0\}$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lm:cstheta}]
By Taylor expansion around $\theta=0$, we obtain
$$
\begin{aligned}
c(\theta)=&\frac{1}{2}\sum_{x\in{\mathbb Z}^d} T(x) (\theta\cdot x)^2+{\mathcal{O}}(\theta^4),\\
s(\theta)=&\sum_{x\in{\mathbb Z}^d} T(x) (\theta\cdot x)+{\mathcal{O}}(\theta^3),
\end{aligned}
$$
where the error terms are controlled by the decay bound \eqref{eq:Kdeltadecay} of $ \mathbf{K}^{\delta} $, which enters via the definition of $ T $ given in \eqref{eq:Tdefn}. By the definitions of $ \mathbf Q $, \eqref{eq:Qdefn}, and $ T $, \eqref{eq:Tdefn}, we have $$ \mathbf{Q}=\mathrm{Hess}(m)(0)=-4d\, \mathrm{Hess}(\hat T)(0). $$ Using this for $ c $, and the zero mean property \eqref{eq:Uass1} of $ T $ for $ s $, we obtain $$
c(\theta)=\frac{1}{8d}\scp{\theta}{\mathbf{Q}\theta}+{\mathcal{O}}(\theta^4),
\qquad s(\theta)={\mathcal{O}}(\theta^3).
$$
Now Proposition \ref{prop:Ksymm} says that for sufficiently small ${\delta}>0$ the matrix $\mathbf Q$ is positive definite, i.e.,
$$
\scp{\theta}{\mathbf{Q}\theta}\geq (1-{\delta}^2 C_d)\theta^2
$$
and therefore $c(\theta)^2+s(\theta)^2>0$ holds for all $|\theta|<r$ for some small $r>0$.
It remains to prove a lower bound over the set $K=\setof{\theta\in\mathbb T^d}{|\theta|\geq r}$. To this end, let $c_0(\theta),s_0(\theta)$ denote the analogs of $c(\theta),s(\theta)$ with ${\delta}=0$. Then we have $c_0(\theta)^2+s_0(\theta)^2>0$ for all $\theta\in K$ by the aperiodicity of the simple random walk. On the one hand, the continuous function $c_0(\theta)^2+s_0(\theta)^2$ takes its minimum on the compact set $K$; call it $\mu>0$. On the other hand, by \eqref{eq:Tdefn'} and \eqref{eq:Kdeltadecay}, we have
$$
\sup_{\theta\in K}|c(\theta)^2+s(\theta)^2-\l(c_0(\theta)^2+s_0(\theta)^2\r)|\leq {\delta}^2 C_{d}.
$$
Thus, choosing ${\delta}$ small enough that ${\delta}^2 C_{d}\leq \mu/2$, we conclude that $c(\theta)^2+s(\theta)^2>0$ holds on $K$ as well. This proves Lemma \ref{lm:cstheta}.
\end{proof}
\textit{Step 3.} We are now ready to complete the proof of Theorem \ref{thm:GFasymptotic}. Thanks to (i)-(iii), Lemma \ref{lm:cstheta} and the paragraph preceding it, the proof of Theorem 2 from \cite{Uch98} extends to our situation and yields an asymptotic expansion similar to \eqref{eq:GFasymptotic}. Namely, taking account of the rescaling by $4d$ that we introduced in \eqref{eq:mrewrite}, we have the asymptotic expansion
\begin{equation}\label{eq:Uchexpansion}
\qmexp{G(x)}=\frac{1}{4d}\frac{\kappa_d}{(\sigma')^2} |x'|^{2-d}
+\sum_{k=1}^{m_d} U_k\l(\frac{x'}{|x'|}\r) |x'|^{2-d-k} +o(|x'|^{2-d-m_d}), \quad \textnormal{as } |x|\to\infty,
\end{equation}
where $x'=\sigma'(\mathbf{Q}')^{-1/2} x$ and $\mathbf{Q}'$ is the $d\times d$ matrix generating the second-moment functional
$$
\scp{\theta}{\mathbf Q' \theta} = \sum_{x\in{\mathbb Z}^d} T(x) (x\cdot \theta)^2,\qquad \sigma'=(\det\mathbf Q')^{1/(2d)}.
$$
When we compare this with our claim \eqref{eq:GFasymptotic}, we see that the latter features $\tilde x =\sigma\mathbf{Q}^{-1/2} x$ instead, with the matrix $\mathbf{Q}$ defined in \eqref{eq:Qdefn}. These are related via
\begin{equation}\label{eq:Q'Qconnect}
\mathbf{Q}'=\frac{1}{4d}\mathbf Q.
\end{equation}
To see this, we use the Fourier representation and recall \eqref{eq:mrewrite} and \eqref{eq:mthetadefn} to find
$$
\begin{aligned}
\mathbf{Q}_{i,j}'
=\sum_{x\in{\mathbb Z}^d} T(x) x_i x_j
=- \l(\frac{\partial^2}{\partial \theta_i\partial \theta_j}\hat T\r)(0)
=\frac{1}{4d}\l(\frac{\partial^2}{\partial \theta_i\partial \theta_j}m\r)(0)
= \frac{1}{4d} \mathbf{Q}_{i,j}
\end{aligned}
$$
for all $i,j\in\{1,\ldots,d\}$.
Finally, we employ the identity \eqref{eq:Q'Qconnect} and its consequence $\sigma'=(4d)^{-1/2}\sigma$ in \eqref{eq:Uchexpansion}. Note that $x'=\sigma'(\mathbf{Q}')^{-1/2}x=\tilde x$, since $(\mathbf{Q}')^{-1/2}=(4d)^{1/2}\mathbf{Q}^{-1/2}$; in particular $|x'|=|\tilde x|$ and $\frac{x'}{|x'|}=\frac{\tilde x}{|\tilde x|}$. For the leading term, we note that $\frac{1}{4d(\sigma')^2}=\frac{1}{\sigma^2}$. Absorbing the remaining constant factors of $4d$ into $U_1,\ldots,U_{m_d}$ then yields \eqref{eq:GFasymptotic}. This proves Theorem \ref{thm:GFasymptotic}.
\qed\\
\subsection{Proofs of Corollaries \ref{cor:nablaGFasymptotic} and \ref{cor:randomGFasymptotic} on Green's function asymptotics}
\label{sect:corproofs}
\begin{proof}[Proof of Corollary \ref{cor:nablaGFasymptotic}]
Let $1\leq j\leq d$, $s\in\{ \pm 1\}$ and let $\alpha\in {\mathbb N}_0^d$ be a multi-index with $|\alpha|\leq m_d$. We apply Theorem \ref{thm:GFasymptotic}. Regarding the $U_k$ terms, \eqref{eq:Uorder} shows that
$$
\nabla^\alpha \sum_{k=m_d-|{\alpha}|+1}^{m_d} U_k\l(\frac{\tilde x}{|\tilde x|}\r) |\tilde x|^{2-d-k}\in O(|\tilde x|^{1-d-m_d})\subseteq o(|\tilde x|^{2-d-m_d}).
$$
Regarding the $o(|\tilde x|^{2-d-m_d})$ error term appearing in Theorem \ref{thm:GFasymptotic}, we note that
for $f\in o(|\tilde x|^{2-d-m_d})$, the triangle inequality implies $\nabla^\alpha f([x,x+se_j])
\in o(|\tilde x|^{2-d-m_d})$, so the error does not get worse under discrete differentiation. This proves Corollary~\ref{cor:nablaGFasymptotic}.
\end{proof}
\begin{proof}[Proof of Formula \eqref{eq:nablaGFasymptotic}]
We recall the notation \eqref{eq:tildexdefn}, i.e., $\tilde x=\sigma \mathbf{Q}^{-1/2} x$. We first use Corollary \ref{cor:nablaGFasymptotic} and the bound \eqref{eq:Uorder} for all $k\geq 1$ to find
\begin{align*}
\qmexp{\nabla G([x,x+s e_j])}=\frac{\kappa_d}{\sigma^2} ( |\tilde x+s\tilde e_j|^{2-d}- |\tilde x|^{2-d}) +{\mathcal{O}}(|\tilde x|^{-d}).
\end{align*}
To compute the leading term, we expand $(1+y)^q=1+qy+{\mathcal{O}}(y^2)$ to obtain
$$
\begin{aligned}
|\tilde x+s\tilde e_j|^{2-d}- |\tilde x|^{2-d}
&=|\tilde x|^{2-d}\l(\l(1+\frac{2\,\scpp{\tilde x}{s\tilde e_j}}{|\tilde x|^2}+\frac{|\tilde e_j|^2}{|\tilde x|^2}\r)^{\frac{2-d}{2}}-1\r)\\
&=(2-d)\,|\tilde x|^{1-d} \scpp{\frac{\tilde x}{|\tilde x|}}{s\tilde e_j}+{\mathcal{O}}(|\tilde x|^{-d}),\qquad \textnormal{as } |x|\to\infty,
\end{aligned}
$$
where we also made use of the equivalence of the norms $|x| $ and $ |\tilde x| $, cf.\ \eqref{eqn:xtilde}.
\end{proof}
\begin{proof}[Proof of Corollary \ref{cor:randomGFasymptotic}]
By Theorem \ref{thm:GFasymptotic}, the assertion \eqref{eq:randomGFasymptotic} follows once we show that
\begin{equation}\label{eq:randomGFasymptotic'}
\sup_{x\in{\mathbb Z}^d} \frac{\l|G(x)-\qmexp{G(x)}\r|}{(1+|x|)^{1-d+{\varepsilon}}} \leq C_{d,{\delta},{\varepsilon}}
\end{equation}
holds with probability $1$. Let $p\in (1,\infty)$. By Markov's inequality, we have
$$
\mathbb P\l(\l|G(x)-\qmexp{G(x)}\r|>(1+|x|)^{1-d+{\varepsilon}}\r)
\leq \frac{\qmexp{\l|G(x)-\qmexp{G(x)}\r|^{2p}}}{(1+|x|)^{2p(1-d+{\varepsilon})}}
$$
for every $x\in {\mathbb Z}^d$. We bound the numerator through the concentration bound from Corollary 1 of \cite{MO1}, which says that
\begin{equation}\label{eq:GFconcentration}
\qmexp{|G(x)-\qmexp{G(x)}|^{2p}}^{\frac{1}{p}}\leq C_{d,{\delta},p} (1+|x|)^{2-2d},\qquad x\in {\mathbb Z}^d.
\end{equation}
We note that this is applicable because the choice of coefficients \eqref{eq:pertcoefficients} satisfies the logarithmic Sobolev inequality by Lemma \ref{lm:iid}. We obtain
$$
\mathbb P\l(\l|G(x)-\qmexp{G(x)}\r|>(1+|x|)^{1-d+{\varepsilon}}\r)
\leq C_{d,{\delta},p} (1+|x|)^{-2p{\varepsilon}}.
$$
This is summable over $x\in {\mathbb Z}^d$ for the choice $p=\frac{d}{{\varepsilon}}$. Hence, the Borel-Cantelli lemma yields \eqref{eq:randomGFasymptotic'} and Corollary \ref{cor:randomGFasymptotic} is proven.
\end{proof}
\subsection{Proof of Theorem \ref{thm:mainrandompert} on pointwise lower bounds}
\label{sect:mainrandompertpf}
Let $p> \frac{1}{2}$. We set out to show the lower bound \eqref{eq:mainrandompert2} which reads as
\begin{align*}
\langle w_G(x)^p\rangle \geq C_{d,{\delta},p} (1+|x|)^{-2p}
\end{align*}
for $ x\in{\mathbb Z}^d $.
We first use the lower bound in Proposition \ref{prop:wGbounds}, the lower Aronson-type bound \eqref{eq:aronsondiscrete} and that $b$ is induced by $a\in \curly{A}_{1-{\delta}}$ to find
$$
\begin{aligned}
w_G(x)^p
\geq& (1+E^{-1/2})^{-p}\frac{1}{G(x)^{2p}}
\l(\sum_{\substack{y\in X}}b(x,y)(G(x)-G(y))^2\r)^p\\
\geq& (1-{\delta}) C_{d,{\delta}} (1+|x|)^{2p(d-2)}\l(\sum_{\substack{y\in{\mathbb Z}^d:\\ y\sim x}}(G(x)-G(y))^2\r)^p\\
\geq& C_{d,{\delta},p} (1+|x|)^{2p(d-2)}\sum_{\substack{y\in{\mathbb Z}^d:\\ y\sim x}}(G(x)-G(y))^{2p}.
\end{aligned}
$$
Next, we take the average $\qmexp{\cdot}$ of both sides and use Jensen's inequality with the convex function $z\mapsto z^{2p}$ to obtain
\begin{equation}\label{eq:wGplb}
\begin{aligned}
\qmexp{w_G(x)^p}
\geq& C_{d,{\delta},p} (1+|x|)^{2p(d-2)}\sum_{\substack{y\in{\mathbb Z}^d:\\ y\sim x}}\qmexp{(G(x)-G(y))^{2p}}\\
\geq& C_{d,{\delta},p} (1+|x|)^{2p(d-2)} \sum_{j=1}^d \sum_{s=\pm 1} (\qmexp{\nabla G([x,x+s e_j])})^{2p}.
\end{aligned}
\end{equation}
From Corollary \ref{cor:nablaGFasymptotic}, specifically \eqref{eq:nablaGFasymptotic}, together with the observation that every unit vector $\omega$ satisfies $\max_{1\leq j\leq d}|\scp{\omega}{\tilde e_j}|\geq c_{d,{\delta}}>0$ (the $\tilde e_j=\sigma\mathbf{Q}^{-1/2}e_j$ form a basis whose condition number is controlled by Proposition \ref{prop:Ksymm}), we obtain the lower bound
$$
\begin{aligned}
\qmexp{w_G(x)^p}
\geq C_{d,{\delta},p} (1+|x|)^{2p(d-2)} |\tilde x|^{2p(1-d)}, \qquad \textnormal{as } |x|\to\infty
\end{aligned}
$$
By \eqref{eqn:xtilde}, we have $|\tilde x|\leq C_{d,{\delta}} |x|$ and so
$$
\begin{aligned}
\qmexp{w_G(x)^p}
\geq C_{d,{\delta},p} (1+|x|)^{-2p}, \qquad \textnormal{as } |x|\to\infty.
\end{aligned}
$$
Let now $R_0>2$ be sufficiently large so that
\begin{equation}\label{eq:wGplbfar}
\begin{aligned}
\qmexp{w_G(x)^p}
\geq \frac{C_{d,{\delta},p}}{2} (1+|x|)^{-2p}
\end{aligned}
\end{equation}
for $ |x|\geq R_0 $.
This proves \eqref{eq:mainrandompert1} for $d\in \{3,4\}$. To prove the assertion \eqref{eq:mainrandompert2} for all $ x $ and $d\geq 5$, we need a lower bound on $\qmexp{w_G(x)^p}$ over the ball $B_{R_0}=\setof{x\in{\mathbb Z}^d}{|x|<R_0}$. In view of \eqref{eq:wGplb}, it suffices to prove that for every $x_0\in B_{R_0}$ there exists $C_{d,{\delta},x_0}>0$ so that
\begin{equation}\label{eq:positivePball}
\mathbb P\l(\sum_{\substack{y\in{\mathbb Z}^d:\\ y\sim x_0}}|G(x_0)-G(y)|>C_{d,{\delta},x_0}\r) >0.
\end{equation}
Given $\beta\in [-1,1]$, we write $G_\beta$ for the Green's function of $(1+{\delta}\beta)\Delta$, a simple rescaling of the well-understood Green's function $G_0$ of the free Laplacian.
We will now prove \eqref{eq:positivePball} by using that with positive probability $G$ is uniformly close to some $G_\beta$ on the entire ball $B_{R_0}$.
\begin{lemma}[Green's function comparison lemma]
\label{lm:GFcomparison}
Let $d\geq 5$ and $R_0$ be given as above. For any $\beta\in [-1,1]$ in the support of $ \omega_{0} $ and any ${\varepsilon}>0$, we have
\begin{equation}
\mathbb P\l(\sup_{x_1\in B_{2R_0}}|G(x_1)-G_\beta(x_1)|\leq {\varepsilon}\r)>0.
\end{equation}
\end{lemma}
We postpone the proof of Lemma \ref{lm:GFcomparison} for now and continue with the proof of \eqref{eq:positivePball} to conclude Theorem \ref{thm:mainrandompert}.
Let $d\geq 5$ and fix $x_0\in B_{R_0}$. According to Lemma \ref{lm:GFcomparison}, for any ${\varepsilon}>0$ there is a positive probability that
\begin{equation}\label{eq:GFreplacement}
\sum_{\substack{y\in{\mathbb Z}^d:\\ y\sim x_0}}|G(x_0)-G(y)|
\geq\sum_{\substack{y\in{\mathbb Z}^d:\\ y\sim x_0}}|G_\beta(x_0)-G_\beta(y)|-4d{\varepsilon}.
\end{equation}
Finally, we recall that $G_\beta=(1+\delta\beta)^{-1} G_0$ with $G_0$ the Green's function of the free Laplacian. Lemma \ref{lm:G0} implies that there exists $C_{d,R_0}$ so that
$$
\sum_{\substack{y\in{\mathbb Z}^d:\\ y\sim x_0}}|G_\beta(x_0)-G_\beta(y)|
>(1+\delta\beta)^{-1} C_{d,R_0}, \qquad x_0\in B_{R_0}.
$$
Taking ${\varepsilon}$ sufficiently small in \eqref{eq:GFreplacement} then yields \eqref{eq:positivePball}. Thus, we have shown \eqref{eq:wGplbfar} for $ d\ge 5 $ and all $ x $ and we have proven Theorem~\ref{thm:mainrandompert}.
\qed\\
It remains to prove Lemma \ref{lm:GFcomparison}.
\begin{proof}[Proof of Lemma \ref{lm:GFcomparison}]
We take $\beta\in [-1,1]$ to be any point in the support of the random variable $\omega_0$. By definition, this means that for all ${\varepsilon}>0$, there exists $p_{\varepsilon}>0$ so that
$$
\mathbb P(\omega_0\in (\beta-{\varepsilon},\beta+{\varepsilon}))\geq p_{\varepsilon}.
$$
Let ${\varepsilon}>0$ and define
$$ R_{\varepsilon}=\max\{4R_0,{\varepsilon}^{-2}\} .$$
Since $\{\omega_x\}_{x\in {\mathbb Z}^d}$ are independent and identically distributed, we can strengthen the above to the statement that there exists $\tilde p_{\varepsilon}>0$ so that
\begin{equation}\label{eq:betasupport}
\mathbb P\l(\omega_x\in (\beta-{\varepsilon},\beta+{\varepsilon}) \mbox{ for all } x\in B_{R_{\varepsilon}}\r)\geq \tilde p_{\varepsilon}.
\end{equation}
From now on, we fix a realization of $\{\omega_x\}_{x\in {\mathbb Z}^d}$ which we assume satisfies $\omega_x\in (\beta-{\varepsilon},\beta+{\varepsilon})$ for all $x\in B_{R_{\varepsilon}}$ with an ${\varepsilon}$ to be chosen later.
We now replace the variables $\omega_x$ one-by-one by $\beta$ via the resolvent identity. To make this precise, it is convenient to use edge variables following \cite[Section 7.3]{MO1}. Recall the original setup \eqref{eq:setting1} and \eqref{eq:setting2}. Let $F\subset \mathbb E^d$ and let $F'=F\cup \{e'\}$ for some fixed edge $e'\not\in F$. For ${\gamma}\in\{F,F'\}$ let $a^{\gamma}=\{a^{{\gamma}}(e)\}_{e\in \mathbb E^d}$ be two families of coefficients in $\mathcal A_{1-{\delta}}$ satisfying
$$
a^F(e')\neq a^{F'}(e') \quad \textnormal{ and } \quad a^F(e)= a^{F'}(e),\qquad e\neq e'.
$$
We write $G^{\gamma}$, ${\gamma}\in\{F,F'\}$, for the corresponding Green's functions. The resolvent identity gives
\begin{equation}\label{eq:resolventidentity}
G^F(x,0)-G^{F'}(x,0)=(a^F(e')- a^{F'}(e')) \nabla G^{F}(x,e') \nabla G^{F'}(e',0),\qquad x\in {\mathbb Z}^d,
\end{equation}
cf.\ Equation~(53) in \cite{MO1}.
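Although \eqref{eq:resolventidentity} is standard, it is reassuring (and a useful debugging tool) to verify it numerically on a finite toy model. The sketch below is our own illustration, not the setup of \cite{MO1}: it uses a weighted path graph with Dirichlet (grounded) boundary so that the Green's functions are plain matrix inverses, and compares absolute values since the overall sign depends on the conventions for $\nabla$ and the Laplacian.
\begin{verbatim}
import numpy as np

# Sketch: verify the one-edge resolvent identity on a finite path graph with
# Dirichlet boundary (a hypothetical toy stand-in for Z^d).
rng = np.random.default_rng(0)
n = 12                                       # interior vertices 0,...,n-1
edges = [(i, i + 1) for i in range(-1, n)]   # -1 and n are grounded boundary

def laplacian(a):
    # L = sum_e a(e) (1_x - 1_y)(1_x - 1_y)^T, restricted to interior vertices
    L = np.zeros((n, n))
    for w, (x, y) in zip(a, edges):
        for u in (x, y):
            if 0 <= u < n:
                L[u, u] += w
        if 0 <= x < n and 0 <= y < n:
            L[x, y] -= w
            L[y, x] -= w
    return L

aF = 1 + 0.3 * rng.uniform(-1, 1, len(edges))    # coefficients a^F
aFp = aF.copy()
ip = 5                                           # the distinguished edge e'
aFp[ip] = 1 + 0.3 * rng.uniform(-1, 1)           # a^{F'} differs only at e'

GF, GFp = np.linalg.inv(laplacian(aF)), np.linalg.inv(laplacian(aFp))
x, y = edges[ip]
gradF = GF[:, y] - GF[:, x]                      # nabla G^F(., e')
gradFp = GFp[y, :] - GFp[x, :]                   # nabla G^{F'}(e', .)
lhs = GF - GFp
rhs = (aF[ip] - aFp[ip]) * np.outer(gradF, gradFp)
print(np.allclose(np.abs(lhs), np.abs(rhs)))     # True, up to sign convention
\end{verbatim}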
Let $\{e_n\}_{n\geq 1}=\mathbb E^d$ be an enumeration of the edge set. We define
$$
F_N:=\bigcup_{1\leq n\leq N} \{e_n\},\qquad N\geq 0
$$
with the convention that $F_0=\emptyset$ and $F_\infty=\mathbb E^d$. We recall \eqref{eq:pertcoefficients} and note that every edge $e$ has a unique representation as $e=[x,x+e_j]$ with $x\in{\mathbb Z}^d$ and $j\in \{1,\ldots,d\}$. We use this to define the coefficients
\begin{equation}\label{eq:aedefn}
a^{F_N}(e)=
\begin{cases}
1+{\delta} \omega_x,\qquad &\textnormal{if } e=[x,x+e_j]\in F_N,\\
1+{\delta} \beta,\qquad &\textnormal{if } e\not\in F_N.
\end{cases}
\end{equation}
We can then use telescoping and the resolvent identity \eqref{eq:resolventidentity} to write, for every $x_1\in B_{2R_0}$,
\begin{equation}\label{eq:GminusGbeta}
\begin{aligned}
&G_\beta(x_1)-G(x_1)=G^{F_0}(x_1)-G^{F_\infty}(x_1)\\
&=\sum_{N\geq 0} G^{F_N}(x_1)-G^{F_{N+1}}(x_1)\\
&=\sum_{N\geq 0} (a^{F_N}(e_{N+1})- a^{F_{N+1}}(e_{N+1})) \nabla G^{F_N}(x_1,e_{N+1}) \nabla G^{F_{N+1}}(e_{N+1},0).
\end{aligned}
\end{equation}
We decompose the last sum in $N$ as follows. Let $\mathcal I\subset \mathbb N_0$ be the finite index set of non-negative integers $N\geq 0$ so that $e_{N+1}=[x,x+e_j]$ with $x\in B_{R_{\varepsilon}}$. On this set, we may apply \eqref{eq:betasupport} and use \eqref{eq:aedefn} to obtain
$$
|a^{F_N}(e_{N+1})- a^{F_{N+1}}(e_{N+1})|\leq {\varepsilon}.
$$
We combine this with the a priori estimate
\begin{equation}\label{eq:GFapriori}
|\nabla G^{F_N}(x_1,e_{N+1}) \nabla G^{F_{N+1}}(e_{N+1},0)|
\leq 4 \l(\sup_{N\geq 0} \sup_{x,y\in {\mathbb Z}^d} G^{F_N}(x,y)\r)^2
\leq C_{d,{\delta}},
\end{equation}
where the latter estimate holds by \eqref{eq:aronsondiscrete} and the fact that $a^{F_N}\in \curly{A}_{1-{\delta}}$ for all $N\geq 0$. Employing these estimates for $N\in\mathcal I$ in \eqref{eq:GminusGbeta} gives
\begin{equation}\label{eq:GimplementI}
\begin{aligned}
&|G(x_1)-G_\beta(x_1)|\\
&\leq{\varepsilon} | \mathcal I| C_{d,{\delta}}+(1+{\delta})^2\sum_{N\in \mathbb N_0\setminus \mathcal{I}}
|\nabla G^{F_N}(x_1,e_{N+1})| |\nabla G^{F_{N+1}}(e_{N+1},0)|,
\end{aligned}
\end{equation}
where $| \mathcal I|<\infty$ denotes the cardinality of $\mathcal I$.
To estimate the sum over $\mathbb N_0\setminus \mathcal{I}$, we first reparametrize it using the fact that there is a bijective map $N:B_{R_{\varepsilon}}^c \times \{1,\ldots,d\}\to \mathbb N_0\setminus \mathcal I$ so that $e_{N(x,j)+1}=[x,x+e_j]$:
$$
\begin{aligned}
&\sum_{N\in \mathbb N_0\setminus \mathcal{I}} |\nabla G^{F_N}(x_1,e_{N+1})| |\nabla G^{F_{N+1}}(e_{N+1},0)|\\
&=\sum_{x\in B_{R_{\varepsilon}}^c} \sum_{j=1}^d |\nabla G^{F_{N(x,j)}}(x_1,[x,x+e_j])| |\nabla G^{F_{N(x,j)+1}}([x,x+e_j],0)|\\
\end{aligned}
$$
Next, we refine the a priori estimate \eqref{eq:GFapriori} via \eqref{eq:aronsondiscrete} to
$$
\begin{aligned}
|\nabla G^{F_{N(x,j)}}(x_1,[x,x+e_j])| |\nabla G^{F_{N(x,j)+1}}([x,x+e_j],0)|\leq C_{d,{\delta}} |x_1-x|^{2-d} |x|^{2-d}.
\end{aligned}
$$
We observe that $|x-x_1|\geq |x|/2$ for all $ x\in B_{R_{{\varepsilon}}}^{c} $ because $x_1\in B_{2R_0}$ and $R_{\varepsilon}\geq 4R_0$. Hence $|x_1-x|^{2-d}\leq C |x|^{2-d}$.
Writing $|x|^{2(2-d)}=|x|^{4-d+1/2}\,|x|^{-d-1/2}\leq R_{\varepsilon}^{4-d+1/2}\,|x|^{-d-1/2}$ for $|x|\geq R_{\varepsilon}$ and $d\geq 5$, and using $ \sum_{x\in {\mathbb Z}^{d}\setminus\{0\}} |x|^{-d-1/2}\leq C_{d}<\infty$, we have shown that
$$
\sum_{x\in B_{R_{\varepsilon}}^c} \sum_{j=1}^d |\nabla G^{F_{N(x,j)}}(x_1,[x,x+e_j])| |\nabla G^{F_{N(x,j)+1}}([x,x+e_j],0)|
\leq C_{d,{\delta}} R_{\varepsilon}^{4-d+{1}/{2}}.
$$
Whenever $ d\ge5 $, we have $ R_{\varepsilon}^{4-d+{1}/{2}}\leq {\varepsilon} $ since $R_{\varepsilon}\geq {\varepsilon}^{-2}$.
In view of \eqref{eq:GimplementI}, this implies
$$
|G(x_1)-G_\beta(x_1)|\leq C_{d,{\delta},R_0} {\varepsilon}.
$$
Note that all estimates are uniform in $x_1\in B_{2R_0}$. Since the required assumption \eqref{eq:betasupport} holds with positive probability, this proves Lemma \ref{lm:GFcomparison}.
\end{proof}
\section{Application to Rellich inequalities}\label{sect:rellich}
In this short section, we explain the implications of the results from Section \ref{sect:perturbativeresults} for Rellich inequalities on graphs. The first Rellich inequality was presented by F.~Rellich at the ICM 1954 in Amsterdam \cite{R}. Very recently, \cite{KePiPo4} provided a general mechanism for generating Rellich inequalities from strictly positive Hardy weights on graphs. Adapting \cite[Corollary 4.1]{KePiPo4} to the case of elliptic operators on ${\mathbb Z}^d$ and subharmonic function $u=G$, we obtain the following result.
\begin{thm}[Rellich inequality on ${\mathbb Z}^d$, \cite{KePiPo4}]\label{thm:rellich}
Let $ a\in \mathcal{A}_{\lambda} $ for $ \lambda\in(0,1] $, $W={\mathrm {supp}\,} w_G\subseteq {\mathbb Z}^{d}$ and ${\alpha}\in (0,1)$. Then we have the Rellich inequality
\begin{equation}\label{eq:rellich}
\|\mathbf{1}_\varphi \Delta \varphi \|_{\frac{G^{\alpha}}{w_G}} \geq (1-{\gamma}) \|\varphi\|_{G^{\alpha} w_G}
\end{equation}
for all $ {\varphi}\in C_{c}(W) $, where $\mathbf{1}_{{\varphi}} $ is the characteristic function of the support of $ {\varphi} $ and
$$
{\gamma}=\l(\frac{1-(\lambda^{2}/2d)^{{\alpha}/2}}{1-(\lambda^{2}/2d)^{1/2}}\r)^2.
$$
\end{thm}
\begin{proof}
In view of \cite[Remark~3.4]{KePiPo4} the assumption of standard weights can be replaced by the ellipticity condition \eqref{E}, which is satisfied in our setting with $ E= \lambda^{2}/2d $. Precisely, in \cite[Theorem~3.3]{KePiPo4}, from which \cite[Corollary 4.1]{KePiPo4} is deduced, one can replace the constant $ D $ with $ 1/E $.
\end{proof}
For the free Laplacian on ${\mathbb Z}^d$, $d\geq 5$, and ${\alpha}=\tfrac{2}{d-2}$ the standard Green's function asymptotic shows that $\frac{G_0^{\alpha}}{w_{G_0}} \sim C$ and $G_0^{\alpha} w_{G_0}\sim C' |x|^{-4}$ \cite[Example 4.5]{KePiPo4}. This matches the $|x|^{-4}$-scaling of the original Rellich weight on ${\mathbb R}^d$ \cite{R}.
We now consider the analogous question in the weakly random setting of Section \ref{sect:perturbativesetup}. The probabilistic lower bounds established on $w_G$ allow us to derive, again in a probabilistic sense, the same scaling behavior for Rellich weights of general elliptic operators on ${\mathbb Z}^d$, $d\geq 5$, that was found for the free Laplacian.
\begin{proposition}[$|x|^{-4}$-scaling for Rellich weights]
For $d\geq 5$, let ${\alpha}=\tfrac{2}{d-2}$. Then, there exists $c_d>0$ so that for all ${\delta}\in (0,c_d)$ there exists $ C_{d,{\delta}} >0$ such that
$$
\begin{aligned}
\left\langle G(x)^{\alpha} w_G(x) \right\rangle \geq C_{d,{\delta}} (1+|x|)^{-4}
\end{aligned}
$$
and there exists $ c_{d,{\delta}}>0 $ such that
$$
\mathbb P\l(\frac{G(x)^{\alpha}}{w_G(x)}<C_{d,{\delta}}\r)\geq c_{d,{\delta}},\qquad x\in {\mathbb Z}^d.
$$
\end{proposition}
Note that the directions of the bounds are the right ones to be applicable in \eqref{eq:rellich}.
\begin{proof}
The first bound follows from \eqref{eq:aronsondiscrete} and Theorem \ref{thm:mainrandompert}.
The second bound follows from \eqref{eq:aronsondiscrete} and Corollary \ref{cor:smm}.
\end{proof}
\section*{Acknowledgments} The authors are grateful to Yehuda Pinchover for valuable comments. They would like to thank Peter Bella, Arianna Giunti, and Felix Otto for helpful comments on a draft version of the paper. MK acknowledges the financial support of the German Science Foundation. The authors would like to thank the organizers of the program ``Spectral Methods in Mathematical Physics'' held in 2019 at Institut Mittag-Leffler where this project was initiated.
\begin{appendix}
\section{The free Green's function is never locally constant}
The following lemma is used in the main text to derive lower bounds on Hardy weights near the origin. It is elementary, but does not appear to be completely standard. We learned of the argument through \cite{mathoverflow} and include it here for the convenience of the reader.
We recall that $G_0$ is the Green's function of the free Laplacian on ${\mathbb Z}^d$.
\begin{lemma}\label{lm:G0}
Let $d\geq 3$. Then
\begin{equation}
\sum_{\substack{y\in{\mathbb Z}^d:\\ y\sim x}}|G_0(x)-G_0(y)| >0,\qquad x\in{\mathbb Z}^d.
\end{equation}
\end{lemma}
\begin{proof}
We write $\mathbb{P}^0$ for the probability measure of the symmetric simple random walk $ S $ started at the origin. A random-walk representation of the Green's function is given in \cite{LL} via
$$
G_0(x)=C_d\sum_{n\geq 0} \mathbb{P}^0(S_{n}=x).
$$
For $ x=0 $, the claim follows since $G_0(0)>G_0(e_1)$: started from $e_1$, the walk visits the origin with probability strictly less than one by transience. Let now $ x\in {\mathbb Z}^{d}\setminus\{0\} $ and suppose, without loss of generality (after permuting and reflecting coordinates), that the first coordinate satisfies $x_1\geq 1$. Moreover, we assume that $\sum_{i=1}^d x_i$ is odd; the even case can be argued analogously. Observe that then
$$
\mathbb{P}^0(S_{n}=x+e_1)=\mathbb{P}^0(S_{n}=x-e_1)=0,\qquad n\geq 0 \textnormal{ odd},
$$
due to parity considerations.
Now we claim that if $\mathbb{P}^0(S_{n}=x+e_1)>0$, then
\begin{equation}\label{eq:G0claim}
\mathbb{P}^0(S_{n}=x+e_1)<\mathbb{P}^0(S_{n}=x-e_1),\qquad n\geq 0 \textnormal{ even}.
\end{equation}
This implies the assertion of the lemma after summation in $n$.
It remains to prove \eqref{eq:G0claim}. Given an index set $\mathcal I_n\subset \{1,\ldots,n\}$, let $A(\mathcal I_n)$ be the event that exactly the steps with indices in $\mathcal I_n$ occur in the $\pm e_1$ direction and the steps in $\{1,\ldots,n\}\setminus \mathcal I_n$ do not. By conditioning we have
$$
\mathbb{P}^0(S_{n}=x\pm e_1)=\sum_{\mathcal I_n\subset \{1,\ldots,n\}} \mathbb{P}^0(S_{n}=x\pm e_1\vert A(\mathcal I_n))\, \mathbb{P}^0(A(\mathcal I_n)).
$$
Therefore, it suffices to prove that if $\mathbb{P}^0(S_{n}=x+e_1\vert A(\mathcal I_n))>0$, then
\begin{align*}
\mathbb{P}^0(S_{n}=x+e_1\vert A(\mathcal I_n))<\mathbb{P}^0(S_{n}=x-e_1\vert A(\mathcal I_n)),\qquad n\geq 0 \textnormal{ even}.
\end{align*}
To see this, we use the fact that conditional on $A(\mathcal I_n)$, we have two independent random walks: one in the $\pm e_1$ direction of $\tilde n=|\mathcal I_n|$ steps and one in the remaining $d-1$ directions of $n-\tilde n$ steps. For any $y\in{\mathbb Z}^d$, we write $y=(y_1,P(y))$ with $P(y)\in {\mathbb Z}^{d-1}$. Note that $P(x+e_1)=P(x-e_1)=P(x)$. Hence, if $\mathbb{P}^0(S_{n}=x+e_1\vert A(\mathcal I_n))>0$, then
$$
\begin{aligned}
\mathbb{P}^0(S_{n}=x+e_1\vert A(\mathcal I_n))
&=\mathbb{P}^0(P(S_{n})=P(x) \vert A(\mathcal I_n)) \binom{\tilde n}{\tfrac{\tilde n+x_1+1}{2}} 2^{-\tilde n}\\
&<\mathbb{P}^0(P(S_{n})=P(x) \vert A(\mathcal I_n)) \binom{\tilde n}{\tfrac{\tilde n+x_1-1}{2}} 2^{-\tilde n}\\
&=\mathbb{P}^0(S_{n}=x-e_1\vert A(\mathcal I_n))
\end{aligned}
$$
where we used the elementary estimate
$$
\binom{\tilde n}{\tfrac{\tilde n+x_1+1}{2}}<\binom{\tilde n}{\tfrac{\tilde n+x_1-1}{2}},
$$
which holds because $\tilde n\geq x_1+1$ and $\tfrac{\tilde n+x_1-1}{2}\geq \tfrac{\tilde n}{2}$ (recall $x_1\geq 1$), so that $k\mapsto\binom{\tilde n}{k}$ is strictly decreasing on this range. This proves Lemma \ref{lm:G0}.
\end{proof}
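For small parameters, the strict inequality \eqref{eq:G0claim} can also be confirmed by exact dynamic programming. The sketch below (with hypothetical parameter choices; it uses $d=2$ for speed, which is legitimate since \eqref{eq:G0claim} concerns only the walk's $n$-step distribution and makes sense in any dimension) computes the distribution of the simple random walk by repeated convolution and compares $\mathbb P^0(S_n=x+e_1)$ with $\mathbb P^0(S_n=x-e_1)$.
\begin{verbatim}
import numpy as np

# Sketch: check P^0(S_n = x+e_1) < P^0(S_n = x-e_1) for x_1 >= 1 by exact
# convolution of the simple random walk on Z^2 (grid large enough that the
# walk cannot reach the boundary within nmax steps).
nmax, B = 20, 25
p = np.zeros((2 * B + 1, 2 * B + 1))
p[B, B] = 1.0                                # S_0 = 0 (origin at index (B, B))
x = (3, 2)                                   # test site with x_1 = 3 >= 1
for n in range(1, nmax + 1):
    q = np.zeros_like(p)
    q[1:, :] += p[:-1, :] / 4                # step +e_1
    q[:-1, :] += p[1:, :] / 4                # step -e_1
    q[:, 1:] += p[:, :-1] / 4                # step +e_2
    q[:, :-1] += p[:, 1:] / 4                # step -e_2
    p = q
    plus = p[B + x[0] + 1, B + x[1]]         # P(S_n = x + e_1)
    minus = p[B + x[0] - 1, B + x[1]]        # P(S_n = x - e_1)
    if plus > 0:
        assert plus < minus, (n, plus, minus)
print("inequality (G0claim) verified for n <=", nmax)
\end{verbatim}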
\section{Direct argument for the leading order in Theorem \ref{thm:GFasymptotic}}\label{sect:direct}
In this appendix, we give a self-contained proof of the lowest order asymptotic in Theorem \ref{thm:GFasymptotic}, i.e.,
\begin{equation}\label{eq:direct}
\qmexp{G(x)}=\frac{\kappa_d}{\sigma^2}|x'|^{2-d} +{\mathcal{O}}(|x|^{1-d}),\qquad \textnormal{as } |x|\to\infty.
\end{equation}
The approach is to use a Taylor expansion around the origin in Fourier space which is justified since $\widehat{K^{\delta}_{j,k}}\in C^{2d-1}(\mathbb T^d)$ by \cite[Theorem 1.1]{KL}. The leading term is reduced to the free Green's function by an appropriate substitution, whose leading order asymptotic we assume here. The error terms are controlled by adapting the dyadic decomposition argument in Appendix A of \cite{KL}.
\begin{proof}[Proof of \eqref{eq:direct}]
We recall Definition \eqref{eq:mthetadefn} of $m(\theta)$ and the fact that $\widehat{K^{\delta}_{j,k}}\in C^{2d-1}(\mathbb T^d)$ for all $j,k\in \{1,\ldots,d\}$. We first isolate the lowest, quadratic order of $m(\theta)$ by setting
$$
m(\theta)=m_0(\theta)+\tilde m(\theta),
\qquad \textnormal{where }
m_0(\theta)=\scpp{\theta}{\mathbf{Q}\theta}.
$$
Observe that $\tilde m\in C^{2d-1}(\mathbb T^d)$ satisfies
\begin{equation}\label{eq:tildembounds}
|D^{\alpha} \tilde m(\theta)|\leq C_{d}|\theta|^{3-|{\alpha}|},\qquad {\alpha}\in \mathbb N_0^d \textnormal{ with } |{\alpha}|\leq 2d-1.
\end{equation}
We can decompose
\begin{equation}\label{eq:1/mdecomp}
\frac{1}{m(\theta)} =\frac{1}{m_0(\theta)} -\frac{\tilde m(\theta)}{m_0(\theta)m(\theta)}.
\end{equation}
The next lemma then implies \eqref{eq:direct}.
\begin{lemma}\label{lm:GFsubleading}
For all ${\delta}\geq 0$ sufficiently small, we have
\begin{equation}\label{eq:G_hot}
\int_{\mathbb T^d} e^{ix\cdot \theta} \frac{\tilde m(\theta)}{m_0(\theta)m(\theta)}\frac{\mathrm{d}^d \theta}{(2\pi)^d}={\mathcal{O}}(|x|^{1-d}),\qquad \textnormal{as } |x|\to\infty.
\end{equation}
\begin{equation}\label{eq:G_lt}
\int_{\mathbb T^d} e^{ix\cdot \theta} \frac{1}{m_0(\theta)} \frac{\mathrm{d}^d \theta}{(2\pi)^d}
=\frac{\kappa_d}{\sigma^2} |x'|^{2-d} +{\mathcal{O}}(|x|^{1-d}),\qquad \textnormal{as } |x|\to\infty.
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lm:GFsubleading}]
We loosely follow Appendix A in \cite{KL}, where a similar problem is treated, and start with the proof of \eqref{eq:G_hot}. We define the function $ F $ on $ \mathbb{T}^{d} $ by
$$
F(\theta)=\frac{\tilde m(\theta)}{m_0(\theta)m(\theta)}
$$
and note that, due to \eqref{eq:tildembounds} and the quadratic vanishing order of $m(\theta)$ and $m_0(\theta)$ at the origin, we have
\begin{equation}\label{eq:DalFestimate}
|F(\theta)|\leq C_d |\theta|^{-1}.
\end{equation}
In essence, we aim to use integration by parts to transfer $d-1$ derivatives onto $F$. This leads to a singular Fourier integral, which we analyze through the following dyadic scale decomposition of the frequencies. Let $\varphi:[0,\infty)\to{\mathbb R}$ be a smooth cutoff function with $\varphi=1$ on $[0,2\pi]$ which is supported on $[0,4\pi]$. Define $\psi(r):=\varphi(r)-\varphi(2r)$ and $\psi_l(r):=\psi(2^l r)$ for all $l\geq 0$ and $ r\ge 0 $. This defines a partition of unity on the relevant frequencies: by telescoping,
$$
\sum_{l=0}^{L}\psi_l(r)=\varphi(r)-\varphi(2^{L+1}r)\xrightarrow{L\to\infty}\varphi(r),
$$
so that $\sum_{l\geq 0}\psi_l(r)=1$ whenever $0<r\leq 2\pi$. We decompose
\begin{equation}\label{eq:fldecompose}
\int_{\mathbb T^d} e^{ix\cdot \theta} F(\theta)\frac{\mathrm{d}^d \theta}{(2\pi)^d}
=\sum_{l\geq 0} f_l(x),
\end{equation}
where we rescaled and introduced
$$
f_l(x)=2^{-ld}\int_{\mathbb T^d} e^{i2^{-l}x\cdot \theta} F_l(\theta)\frac{\mathrm{d}^d \theta}{(2\pi)^d},\qquad F_l(\theta)=\psi(|\theta|) F(2^{-l}\theta).
$$
Note that \eqref{eq:DalFestimate} implies $|F_l(\theta)|\leq C_d 2^{l} |\theta|^{-1}$, so we can use the triangle inequality to obtain
\begin{equation}\label{eq:flestimate1}
|f_l(x)|\leq C_d 2^{-l(d-1)}.
\end{equation}
This is useful for $|x|2^{-l}\leq 1$, while for $|x|2^{-l}\geq 1$ it can be improved to
\begin{equation}\label{eq:flestimate2}
|f_l(x)|\leq C_d \frac{2^{-l(d-1)}}{(|x|2^{-l})^{d}}.
\end{equation}
To prove \eqref{eq:flestimate2}, we assume without loss of generality that $|x_1|=\max_{1\leq j\leq d}|x_j|$ and use $d$-fold integration by parts to write
\begin{equation}\label{eq:flibp}
f_l(x) =i^d \frac{2^{-ld}}{(x_1 2^{-l})^d}
\int_{\mathbb T^d} e^{i2^{-l}x\cdot \theta} \partial_{\theta_1}^d F_l(\theta)\frac{\mathrm{d}^d \theta}{(2\pi)^d}.
\end{equation}
Next, we observe that \eqref{eq:tildembounds} and the quadratic behavior of both $m_0$ and $m$ at the origin yield that $\|\partial_{\theta_1}^d F_l\|_\infty\leq C_d 2^{-l(d-1)}$; compare Lemma A.1 in \cite{KL}. Applying this bound to \eqref{eq:flibp} yields \eqref{eq:flestimate2}.
We use \eqref{eq:flestimate1} and \eqref{eq:flestimate2} and bound the resulting geometric series to find
$$
\begin{aligned}
\sum_{l\geq 0} |f_l(x)|
\leq C_d \sum_{l\geq \log_2 |x|} 2^{-l(d-1)}+C_d \sum_{0\leq l\leq \log_2 |x|}\frac{2^{-l(d-1)}}{(|x|2^{-l})^{d}}
\leq C_d |x|^{1-d}.
\end{aligned}
$$
In view of \eqref{eq:fldecompose}, this proves \eqref{eq:G_hot}.
Next, we turn to the proof of \eqref{eq:G_lt}. By Proposition \ref{prop:Ksymm}, the matrix $\mathbf{Q}$ is symmetric and positive definite. Hence, by substitution,
$$
\begin{aligned}
\int_{\mathbb T^d} e^{ix\cdot \theta} \frac{1}{m_0(\theta)} \frac{\mathrm{d}^d \theta}{(2\pi)^d}
=&\int_{\mathbb T^d} e^{ix\cdot \theta} \frac{1}{\scp{\mathbf{Q}^{1/2}\theta}{\mathbf{Q}^{1/2}\theta}} \frac{\mathrm{d}^d \theta}{(2\pi)^d}\\
=&\sigma^{-d}
\int_{\mathbb T^d} e^{i (\mathbf{Q}^{-1/2}x)\cdot \theta} \frac{1}{|\theta|^2} \frac{\mathrm{d}^d \theta}{(2\pi)^d}.
\end{aligned}
$$
For the volume element, we used that $\det(\mathbf{Q}^{1/2})= (\det\mathbf{Q})^{1/2}=\sigma^d$ since $\mathbf{Q}$ is symmetric.
By applying \eqref{eq:G_hot} with ${\delta}=0$, we obtain
$$
\begin{aligned}
\int_{\mathbb T^d} e^{i \mathbf{Q}^{-1/2}x\cdot \theta} \frac{1}{|\theta|^2} \frac{\mathrm{d}^d \theta}{(2\pi)^d}
=\int_{\mathbb T^d} e^{i \mathbf{Q}^{-1/2}x\cdot \theta} \frac{1}{2\sum_{j=1}^d (1-\cos\theta_j)} \frac{\mathrm{d}^d \theta}{(2\pi)^d}
+{\mathcal{O}}(|x|^{1-d}).
\end{aligned}
$$
We recognize the first integral as the Green's function of the free Laplacian on ${\mathbb Z}^d$ evaluated at the point $\mathbf Q^{-1/2}x$. The standard asymptotic formula for the free Laplacian, see e.g.\ \cite{Spitzer,Uch98}, gives
$$
\int_{\mathbb T^d} e^{ix\cdot \theta} \frac{1}{m_0(\theta)} \frac{\mathrm{d}^d \theta}{(2\pi)^d}
=\frac{\kappa_d}{\sigma^d} |\mathbf{Q}^{-1/2}x|^{2-d} +{\mathcal{O}}(|x|^{1-d})
=\frac{\kappa_d}{\sigma^2} |x'|^{2-d} +{\mathcal{O}}(|x|^{1-d}).
$$
This proves \eqref{eq:G_lt}.
\end{proof}
Combining \eqref{eq:1/mdecomp} from the beginning of the proof with \eqref{eq:GavgFourierrep}, \eqref{eq:G_lt} and \eqref{eq:G_hot} yields \eqref{eq:direct}, which finishes the proof.
\end{proof}
\end{appendix}
\bibliographystyle{amsplain}
\section{Introduction} \label{Introduction}
Social networks have become an integral part of our lives. These networks can be represented as graphs, with nodes being the entities (members) of the network and edges representing the associations between them.
As the size of these graphs increases, it becomes quite difficult for small enterprises and business units to store the graphs in-house. So, there is a desire to store such information on cloud servers.
In order to protect the privacy of individuals (as is now mandatory in the EU and other places), data is often anonymized before being stored on remote cloud servers. However, as pointed out by Backstrom \emph{et al.} \cite{BDK11}, anonymization does not imply privacy. By carefully studying the associations between members, a lot of information can be gleaned.
The data owner, therefore, has to store the data in encrypted form.
Trivially, the data owner can upload all data in encrypted form to the cloud. But then, whenever some query is made, the data owner has to download all the data, perform the necessary computations and re-upload the re-encrypted data. This is very inefficient and defeats the purpose of the cloud service.
Thus, we need to keep the data stored in the cloud in encrypted form in such a way that we can compute efficiently on the encrypted data.
Some basic queries for a graph are the neighbor query (given a vertex, return the set of vertices adjacent to it), the vertex degree query (given a vertex, return the number of adjacent vertices), the adjacency query (given two vertices, return whether there is an edge between them), etc. It is important that when an encrypted graph supports some other query, like the shortest distance query, it does not stop supporting these basic queries.
Liben-Nowell and Kleinberg~\cite{Liben-NowellK03} first defined the link prediction problem for social networks.
The link prediction problem asks whether, given a snapshot of a graph, we can predict which new interactions between members are most likely to occur in the near future.
For example, given a node $A$ at an instant, the link prediction problem tries to find the most likely node $B$ with which $A$ would like to connect at a later instant.
Different types of distance metrics are used to measure the likelihood of the formation of new links. These distances are called \emph{scores} (\cite{Liben-NowellK03}).
Liben-Nowell and Kleinberg, in \cite{Liben-NowellK03}, considered several metrics including common neighbors, Jaccard's coefficient, Adamic/Adar, preferential attachment, Katz$_\beta$ etc. For example, if $A$ and $B$ (with no edge between them) have a large number of common neighbors, they are more likely to be connected in the future. In this paper, for simplicity, we have considered the common-neighbors metric to predict the emergence of a link.
Though there is a large body of literature on link prediction, to the best of our knowledge the secure version of the problem has not been studied to date.
The \emph{secure link prediction (SLP)} problem asks to compute link prediction algorithms over secure, i.e., encrypted, data.
\medskip
\noindent
\textbf{Our Contribution}\quad
We introduce the notion of secure link prediction and present three constructions.
In particular, we ask and answer the question, ``Given a snapshot of a graph $G \equiv (V,E)$ ($V$ is the set of vertices and $E \subseteq V \times V$) at a given instant and a vertex $v\in V$, which is the most likely vertex $u$ such that $u$ is a neighbor of $v$ at a later instant and $vu \notin E$?'' The score-metric we consider is the number of common neighbors of the two vertices $v$ and $u$. This can be used to answer the question, ``Given a snapshot of a graph $G=(V,E)$ at a given instant and a vertex $v\in V$, which are the $k$ most likely neighbors of $v$ at a later instant such that none of these $k$ vertices is a neighbor of $v$ in $G$?''
Note that the data owner outsources an encrypted copy of the graph $G$ to the cloud and sends an encrypted vertex $v$ as a query. The cloud runs the secure link prediction algorithm and returns an encrypted result, from which the client can obtain the most likely neighbor of $v$.
The cloud knows neither the graph $G$ nor the queried vertex $v$.
It is to be noted that the client has much less computational and storage capacity than the servers. We propose three schemes ($\mathtt{SLP}$-$\mathtt{I}$, $\mathtt{SLP}$-$\mathtt{II}$ and $\mathtt{SLP}$-$\mathtt{III}$), in all of which the client takes the help of a proxy server, which makes obtaining query results efficient.
At a high level:
\begin{enumerate}
\item $\mathtt{SLP}$-$\mathtt{I}$: the most efficient scheme, with almost no computation at the client side; it leaks only the scores to the proxy server.
\item $\mathtt{SLP}$-$\mathtt{II}$: has a little more communication at the client side compared to $\mathtt{SLP}$-$\mathtt{I}$, but leaks the scores of only a subset of vertices to the proxy server.
\item $\mathtt{SLP}$-$\mathtt{III}$: a very efficient scheme with almost no computation and communication at the client side that leaks almost nothing to the proxy. This is achieved at the cost of extra computation and communication between the cloud and the proxy.
\end{enumerate}%
In all three schemes, the client leaks nothing to the cloud except the number of vertices in the graph.
We have designed the schemes in such a way that they support the link prediction query as well as basic queries. Each of the previous schemes on encrypted graphs is designed to support one specific query (for example, the shortest distance query, the focused subgraph query etc.). In contrast, we have designed more general schemes that support not only the link prediction query but also basic queries including the neighbor query, vertex degree query, adjacency query etc.
All our schemes have been shown to be adaptively secure in the real-ideal paradigm.
Further, we have analyzed the performance of the schemes in terms of storage requirement, computation cost and communication cost, and estimated the execution time of the schemes assuming benchmark implementations of the underlying cryptographic primitives.
We have implemented prototypes of the schemes $\mathtt{SLP}$-$\mathtt{I}$ and $\mathtt{SLP}$-$\mathtt{II}$ and measured their performance on different real-life datasets to study feasibility.
In the experiments, they take $12.15$s and $13.75$s, respectively, to encrypt a graph with $10^2$ vertices, and $8.87$s and $8.59$s to process a query on it.
\medskip
\noindent
\textbf{Organization}\quad
The rest of the paper is organized as follows.
Related work is discussed in Section~\ref{sec:RelatedWorks}.
Preliminaries and cryptographic tools are discussed in Section~\ref{sec:Preliminaries}.
Link prediction problem and its security are described in Section~\ref{sec:SLP}.
Section~\ref{sec:SLP1} describes our proposed scheme for $\mathtt{SLP}$-$\mathtt{I}$. Two improvements of $\mathtt{SLP}$-$\mathtt{I}$, $\mathtt{SLP}$-$\mathtt{II}$ and $\mathtt{SLP}$-$\mathtt{III}$, are discussed in Section~\ref{sec:SLP2} and Section~\ref{sec:SLP3} respectively.
In Section~\ref{sec:PerformanceAnalysis}, a comparative study of the complexities of our proposed schemes is given.
In Section~\ref{sec:ExperimentalEvaluation}, details of our implementation and experimental results are shown.
A variant of link prediction problem $\mathtt{SLP}_k$ is introduced in Section~\ref{sec:slpk}.
Finally, a summary of the paper and future research direction are given in Section~\ref{sec:Conclusion}.
\section{Related Work} \label{sec:RelatedWorks}
Graph algorithms are well studied when the graph is not encrypted. Since the need to outsource graph data in encrypted form is growing very fast, and encryption prevents those algorithms from working directly, further study is required to enable them.
There are only a few works that deal with queries on outsourced encrypted graphs.
Chase and Kamara~\cite{ChaseK10} introduced the notion of graph encryption while they were presenting structured encryption as a generalization of searchable symmetric encryption (SSE) proposed by Song \emph{et al.}~\cite{SongWP00}. They presented schemes for \emph{neighbor queries}, \emph{adjacency queries} and \emph{focused subgraph queries} on labeled graph-structured data.
In all of their proposed schemes, the graph is represented as an adjacency matrix and each entry is encrypted separately using symmetric-key encryption. The main idea of their scheme is that, given a vertex and the corresponding key, the scheme can return the adjacent vertices. However, complex queries require arithmetic operations (like addition, subtraction, division etc.) on the adjacency matrix, which makes the scheme unsuitable for them.
A parallel secure computation framework, \emph{GraphSC}, has been designed and implemented by Nayak \emph{et al.}~\cite{NayakWIWTS15}. This framework computes functions like histograms, PageRank, matrix factorization etc.
To run these algorithms, \emph{GraphSC} brought parallel programming paradigms to secure computation. The parallel and secure execution enables the algorithms to perform well even on large datasets. However, they adopt Path-ORAM~\cite{ccs/StefanovDSFRYD13}-based techniques, which are inefficient if the client has little computation power or does not have a very large RAM.
Sketch-based approximate shortest distance queries over encrypted graphs have been studied by Meng \emph{et al.}~\cite{MengKNK15}.
In the pre-processing stage, the client computes, for every vertex, a sketch that is useful for efficient shortest distance queries. Instead of encrypting the graph directly, they encrypt the pre-processed data. Thus, in their scheme, there is no way to recover information about the original graph.
Shen \emph{et al.}~\cite{ShenMZMDH18} introduced and studied cloud-based approximate \emph{constrained shortest distance queries} in encrypted graphs, which find the shortest distance subject to the constraint that the total cost does not exceed a given threshold.
Exact distances have been computed on dynamic encrypted graphs in \cite{SecGDB}. Similar to our paper, that work uses a proxy to reduce client-side computation and information leakage to the cloud.
In the scheme, adjacency lists are stored in an inverted index. However, in a single query, the scheme leaks all the nodes reachable from the queried vertex, which is a lot of information about the graph. For example, if the graph is complete, a single query reveals the whole graph.
A graph encryption scheme that supports top-$k$ nearest keyword search queries has been proposed by Liu \emph{et al.}~\cite{LiuZC17}. They built an encrypted index using order-preserving encryption for searching.
Together with lightweight symmetric key encryption schemes, homomorphic encryption is used to compute on encrypted data.
Besides,
Zheng \emph{et al.}~\cite{ZhengWLH15} proposed privacy-preserving link prediction in decentralized social networks. Their construction splits the link score into private and public parts and applies sparse logistic regression to find links based on the content of the users. However, the graph data is not considered to be encrypted in these privacy-preserving link prediction schemes.
In this paper, we outsource the graph in encrypted form. In most of the previous works, the schemes are designed to perform a single specific query, like the neighbor query (\cite{ChaseK10}), the shortest distance query (\cite{MengKNK15,ShenMZMDH18,SecGDB}) or focused subgraph queries (\cite{ChaseK10}). So, either it is hard to get information about the source graph (\cite{MengKNK15}, \cite{ShenMZMDH18}), as they do not support basic queries, or a single query leaks a lot of information (\cite{SecGDB}). One trivial approach is to take different schemes and use all of them together to support all required query types.
In this paper, our target is to support the link prediction query, while still being able to retrieve as much information about the graph as needed through basic queries, and to leak as little information as possible. To the best of our knowledge, the secure link prediction problem has not been studied before. We study the issues of the link prediction problem over encrypted outsourced data and give three possible solutions overcoming them.
\section{Preliminaries} \label{sec:Preliminaries}
Let $G = (V,E)$ be a graph and $A = (a_{ij}) _{N \times N}$ be its adjacency matrix where $N$ is the number of vertices. Let $\lambda$ be the security parameter.
The set of positive integers $\{1,2,\cdots,n\} $ is denoted by $[n]$.
By $ x \xleftarrow{\$} X $, we mean choosing a random element from the set $X$. $D\log$ denotes the discrete logarithm. $id: \{ 0,1\}^* \rightarrow \{ 0,1\}^ {\log N}$ gives the identifiers corresponding to the vertices.
A function $negl : \mathbb{N} \rightarrow \mathbb{R}$ is said to be \emph {negligible} if $\forall c \in \mathbb{N}$, $\exists N_c \in \mathbb{N}$ such that $\forall n> N_c,\ negl(n)<n^{-c} $.
A keyed permutation $\{0, 1\} ^* \times \{0, 1\} ^n \rightarrow \{0, 1\}^n$, computable in probabilistic polynomial time (PPT), is said to be a \emph{pseudorandom permutation (PRP)} if it is indistinguishable from a random permutation by any PPT adversary. We consider two PRPs, $F_{k_{perm}}$ and $\pi_s$, where $k_{perm}$ and $s$ are their keys (or seeds) respectively.
\subsection{Bilinear Maps} \label{ss:BilinearMaps}
Let $\mathbb{G}$ and $\mathbb{G}_1$ be two (multiplicative) cyclic groups of order $n$ and $g$ be a generator of $\mathbb{G}$. A map $e :\mathbb{G} \times \mathbb{G} \rightarrow \mathbb{G}_1$ is said to be an \emph{admissible non-degenerate bilinear map} if--
\begin{enumerate}
\item $\forall u,v \in \mathbb{G}$ and $\forall a, b \in \mathbb{Z}$, we have $e(u^a,v^b) = e(u,v)^{ab}$,
\item $ e(g,g) \neq 1$, and
\item $e$ can be computed efficiently.
\end{enumerate}
Our algorithms use the bilinear-map-based BGN encryption scheme \cite{2dnf}, which we discuss first.
\subsection{BGN Encryption Scheme} \label{ss:BGN}
Boneh \emph{et al.}~\cite{2dnf} proposed a homomorphic encryption scheme (henceforth referred to as the BGN encryption scheme) that allows an arbitrary number of additions and one multiplication.
The scheme consists of three algorithms: $\mathtt{Gen}()$, $\mathtt{Encrypt}()$ and $\mathtt{Decrypt}()$.
\begin{algorithm} \DontPrintSemicolon
\caption{$\mathtt{Gen}(1^\lambda)$} \label{algo:bgnGen}
$(q_1,\ q_2,\ \mathbb{G},\ \mathbb{G}_1,\ e) \gets \mathcal{G}(\lambda) $ \;
$n \gets q_1q_2$ \;
$g \xleftarrow{\$} \mathbb{G} $; $ r \xleftarrow{\$} [n] $ \;
$ u \gets g^r $; $ h \gets u^{q_2} $ \;
$ sk \gets q_1$; $ pk\gets (n,\ \mathbb{G},\ \mathbb{G}_1,\ e,\ g,\ h) $ \;
\Return $(pk,sk) $ \;
\end{algorithm}
\medskip
\noindent
{\bf Key generation:} This takes a security parameter $\lambda$ as input and outputs a public-private key pair $(pk,sk)$ (see Algo.~\ref{algo:bgnGen}). Here, $ pk = (n,\ \mathbb{G},\ \mathbb{G}_1,\ e,\ g,\ h) $ and $sk = q_1$. In $pk$, $e$ is a bilinear map from $\mathbb{G}\times \mathbb{G}$ to $\mathbb{G}_1$, where both $\mathbb{G}$ and $\mathbb{G}_1$ are groups of order $n$. Note that, given $\lambda$, $\mathcal{G}$ returns $(q_1,\ q_2,\ \mathbb{G},\ \mathbb{G}_1,\ e)$ (see \cite{2dnf}), where $q_1$ and $q_2$ are two large primes and $n=q_1q_2$.
\begin{figure}[!htb]
\centering
\begin{minipage}{.4\textwidth}
\begin{algorithm}[H] \DontPrintSemicolon
\caption{$\mathtt{Encrypt}_\mathbb{G}( pk, a )$} \label{algo:bgnEncryptG}
$ (n,\ \mathbb{G},\ \mathbb{G}_1,\ e,\ g,\ h) \gets pk$ \;
$r \xleftarrow{\$} [n]$ \;
$c \gets g^ah^r $ \;
\Return $c$ \;
\end{algorithm}
\end{minipage}%
\begin{minipage}{.09\textwidth}
\
\end{minipage}%
\begin{minipage}{0.51\textwidth}
\begin{algorithm}[H] \DontPrintSemicolon
\caption{$\mathtt{Decrypt}_\mathbb{G}(pk, sk,c )$} \label{algo:bgnDecryptG}
$ (n,\ \mathbb{G},\ \mathbb{G}_1,\ e,\ g,\ h) \gets pk$; $q_1 \gets sk$ \;
$c' \gets c^ {q_1}$; $\hat{g} = g^{q_1} $ \;
$s = D\log_{\hat{g}} {c'}$ \label{dlogComputeG} \;
\Return $s$ \;
\end{algorithm}
\end{minipage}
\end{figure}
\medskip
\noindent
{\bf Encryption:} An integer $a$ is encrypted in $\mathbb{G}$ using Algo.~\ref{algo:bgnEncryptG}. Let $a_1$ and $a_2$ be two integers that are encrypted in $\mathbb{G}$ as $c_1$ and $c_2$. Then the bilinear map value $e(c_1,c_2)$, which belongs to $\mathbb{G}_1$, is an encryption of $a_1a_2$. Note that an arbitrary number of plaintext additions is also possible in the group $\mathbb{G}_1$. If $g$ is a generator of the group $\mathbb{G}$, then $e(g,g)$ acts as a generator of the group $\mathbb{G}_1$. Thus, an integer $a$ can be encrypted in $\mathbb{G}_1$ in a similar manner (see Algo.~\ref{algo:bgnEncryptG1}).
\begin{figure}[!htb]
\centering
\begin{minipage}{.46\textwidth}
\begin{algorithm}[H] \DontPrintSemicolon
\caption{$\mathtt{Encrypt}_{\mathbb{G}_1}( pk, a )$} \label{algo:bgnEncryptG1}
$ (n,\ \mathbb{G},\ \mathbb{G}_1,\ e,\ g,\ h) \gets pk$ \;
$r\xleftarrow{\$} [n]$\;
$g_1 \gets e(g,g)$; $h_1 \gets e(g,h)$ \;
$c \gets (g_1)^a (h_1)^r $ \;
\Return $c$ \;
\end{algorithm}
\end{minipage}%
\begin{minipage}{.08\textwidth}
\
\end{minipage}%
\begin{minipage}{0.46\textwidth}
\begin{algorithm}[H] \DontPrintSemicolon
\caption{$\mathtt{Decrypt}_{\mathbb{G}_1}(pk, sk,c )$} \label{algo:bgnDecryptG1}
$ (n,\ \mathbb{G},\ \mathbb{G}_1,\ e,\ g,\ h) \gets pk$ \;
$q_1 \gets sk$ \;
$c' \gets c^ {q_1}$; $\hat{g}_1 = e(g,g)^{q_1} $ \;
$s = D\log_{\hat{g}_1} {c'}$ \label{dlogComputeG1} \;
\Return $s$ \;
\end{algorithm}
\end{minipage}
\end{figure}
\medskip \noindent
{\bf Decryption:} At the time of encryption, each entry is randomized. The secret key $q_1$ eliminates the randomization; then it is enough to compute the discrete logarithm $D\log$ of the result. Algo.~\ref{algo:bgnDecryptG} and Algo.~\ref{algo:bgnDecryptG1} describe the decryption in $\mathbb{G}$ and $\mathbb{G}_1$ respectively.
In the decryption algorithms, the $D\log$ computation can be done in expected time $O(\sqrt{n})$ using Pollard's lambda method \cite{MenezesOV96}. However, it can also be done in constant time using some extra storage (\cite{2dnf}).
Let $\mathtt{BGN}$ be an encryption scheme as described above. Then, it is a tuple of five algorithms ($\mathtt{Gen}$, $\mathtt{Encrypt}_\mathbb{G}$, $\mathtt{Decrypt}_\mathbb{G}$, $\mathtt{Encrypt}_{\mathbb{G}_1}$, $\mathtt{Decrypt}_{\mathbb{G}_1}$) as described in Algo.~\ref{algo:bgnGen}, \ref{algo:bgnEncryptG}, \ref{algo:bgnDecryptG}, \ref{algo:bgnEncryptG1} and \ref{algo:bgnDecryptG1} respectively.
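For later use, the homomorphic properties of $\mathtt{BGN}$ can be verified directly from these algorithms (a short derivation of ours in the notation above). For ciphertexts $c_1 = g^{a_1}h^{r_1}$ and $c_2 = g^{a_2}h^{r_2}$ in $\mathbb{G}$, writing $h = g^{\alpha q_2}$ for some integer $\alpha$ (indeed $h = g^{rq_2}$ in Algo.~\ref{algo:bgnGen}), bilinearity gives
\begin{equation*}
c_1c_2 = g^{a_1+a_2}\, h^{r_1+r_2},
\qquad
e(c_1,c_2) = e(g,g)^{a_1a_2}\, e(g,h)^{a_1r_2+a_2r_1+\alpha q_2 r_1r_2},
\end{equation*}
so the product of ciphertexts encrypts $a_1+a_2$ in $\mathbb{G}$, while the pairing of ciphertexts is a valid encryption of $a_1a_2$ in $\mathbb{G}_1$. Decryption succeeds because $h^{q_1} = g^{\alpha q_1q_2} = 1$, so raising a ciphertext to the power $q_1$ strips away all the randomization.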
\subsection{Garbled Circuit (GC)}\label{ss:SecureComputations}
Let us consider two parties, with inputs $x$ and $y$ respectively, who want to compute a function $f(x, y)$. A garbled circuit~\cite{Yao82b,LindellP09} allows them to compute $f(x,y)$ in such a way that neither party gets any `meaningful information' about the input of the other party, and no one other than the two parties is able to compute $f(x,y)$.
Kolesnikov and Schneider~\cite{KolesnikovS08} introduced an optimization of garbled circuits that allows XOR gates to be computed without communication or cryptographic operations \cite{SecGDB}. Kolesnikov \emph{et al.}~\cite{KolesnikovSS09} presented efficient GC constructions for several basic functions using the garbled circuit construction of \cite{KolesnikovS08}. In this paper, we use the garbled circuit blocks for the subtraction ($\mathtt{SUB}$), comparison ($\mathtt{COMP}$) and multiplexer ($\mathtt{MUX}$) functions from \cite{KolesnikovSS09}.
\section{The Secure Link Prediction (SLP) Problem} \label{sec:SLP}
Given $G = (V,E)$, let $N_{v}$ denote the set of vertices adjacent to $v\in V$.
Let $score(v,u)$ be a measure of how likely it is that the vertex $v$ will be connected to another vertex $u$ in the near future, where $vu \notin E$.
A variant of the \emph{link prediction} problem asks, given $v \in V$, for a vertex $u \in V$ ($vu \notin E$) such that $score(v,u)$ is the maximum of $\{{ score(v,u'): u' \in V \setminus( N_{v} \cup \{v\} ) }\}$, i.e.,
\begin{equation}
score(v,u) \geq score(v,u'), \qquad \forall u' \in V \setminus( N_{v} \cup \{v\} ).
\end{equation}
Thus, given a vertex $v$, we find the most likely vertex for it to connect with. There are various metrics to measure the score, like the number of common neighbors, Jaccard's coefficient, the Adamic/Adar metric etc.
In this paper, we consider $score(v,u)$ as the number of common nodes between $v$ and $u$ i.e., $score(v,u) = |N_{v} \cap N_{u}|$.
Let $A$ be the adjacency matrix of the graph $G$. If $i_v$ and $i_u$ are the rows corresponding to the vertices $v$ and $u$ respectively, then the score is the inner product of those rows, i.e., $score(v,u) = \sum_{k=1}^{N} A[i_v][k]\cdot A[i_u][k]$. In this paper, we use the BGN encryption scheme to securely compute this inner product.
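As a plaintext point of reference (an illustrative sketch of ours; the toy graph is arbitrary), the score computation is just a row inner product.
\begin{verbatim}
import numpy as np

# Adjacency matrix of a toy 5-vertex graph (symmetric, zero diagonal).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]])

def score(i_v, i_u):
    # Number of common neighbors = inner product of the two rows.
    return int(A[i_v] @ A[i_u])

# Most likely future neighbor of vertex 0: maximize the score over
# vertices that are neither 0 itself nor already adjacent to 0.
candidates = [u for u in range(len(A)) if u != 0 and A[0][u] == 0]
best = max(candidates, key=lambda u: score(0, u))
print(best, score(0, best))  # vertex 3, with 2 common neighbors
\end{verbatim}
The protocols below compute exactly this quantity, but on BGN-encrypted rows.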
\subsection{System Overview}
Here, we describe the system model considered for the link prediction problem and the goals we want to achieve.
\medskip
\noindent
{\bf System Model:}
In our model (see Fig.~\ref{fig:systemModel}), there are a client, a cloud server and a proxy server. Each of them communicates with the others to execute the protocol.
The \emph{client} is the data owner and is considered to be \emph{trusted}. It outsources the graph in encrypted form to the cloud server and generates link prediction queries. Given a vertex $v$, it queries for the vertex $u$ which is most likely to be connected with $v$ in the future.
\begin{figure}[htbp]
\centering
{\includegraphics[width=0.4\textwidth]{figures/systemModel.png}}
\caption{System model}
\label{fig:systemModel}
\end{figure}
The \emph{cloud server (CS)} holds the encrypted graph and computes over the encrypted data when the client issues a query. We assume that the cloud server is honest-but-curious: it is curious to learn from and analyze the encrypted data and queries; nevertheless, it is honest and follows the protocol.
The \emph{proxy server (PS)} helps the cloud server and the client to find the most likely vertex securely. It reduces computational overhead of the client by performing decryptions.
The proxy server, too, is assumed to be honest-but-curious.
All channels connecting the client, the cloud server and the proxy server are assumed to be secure. An adversary can eavesdrop on the channels but cannot tamper with messages sent along them. Moreover, we assume that the cloud and proxy servers do not collude.
The aim of this system model is to outsource as much computation as possible without leaking information about the data, assuming the client has very low computation power (e.g., a mobile device). A similar model for outsourcing computation was previously used by Wang et al.~\cite{SecGDB} for secure comparison. The assumption that the proxy and cloud servers do not collude is standard.
\medskip
\noindent
{\bf Design Goals:}
In this paper, under the assumption of the above system model, we aim at providing a solution for the secure link prediction problem. In our design, we want to achieve the following objectives.
\begin{enumerate}
\item \emph{Confidentiality:}
The cloud and proxy servers should not get any information about the graph structure i.e., the servers should not be able to construct a graph which is isomorphic to the source graph.
\item \emph{Efficiency:} In our model, the client is weak with respect to storage and computations. Since the cloud server has a large amount of storage and computation power, the client outsources the data to it.
\end{enumerate}
Moreover, the client should be able to efficiently perform the neighbor query, vertex degree query and adjacency query. These are basic queries that every graph encryption scheme should support. The scheme should leak as little information as possible.
\subsection{Secure Link Prediction Scheme} \label{ss:securityDefinitions}
In this section, we present the definition of a secure link prediction scheme for a graph $G$ and its security against adaptive chosen-query attacks.
\begin{definition}
A \emph{secure link prediction ($\mathtt{SLP}$) scheme} for a graph $G$ is a tuple $(\mathtt{KeyGen} $, $\mathtt{EncMatrix} $, $\mathtt{TrapdoorGen}$, $\mathtt{LPQuery}$, $\mathtt{FindMaxVertex})$ of algorithms as follows.
\begin{itemize}
\item $(\mathcal{PK}, \mathcal{SK})\gets \mathtt{KeyGen}(1^ \lambda):$ is a client-side PPT algorithm that takes $\lambda$ as a security parameter and outputs a public key $\mathcal{PK}$ and a secret key $\mathcal{SK}$.
\item $ T \gets \mathtt{EncMatrix}(G, \mathcal{SK, PK}): $ is a client-side PPT algorithm that takes a public key $\mathcal{PK}$, a secret key $\mathcal{SK}$ and a graph $G $ as inputs and outputs a structure $T$ that stores the encrypted adjacency matrix of $G$.
\item $\tau_v \gets \mathtt{TrapdoorGen}(v,\mathcal{SK} ):$ is a client-side PPT algorithm that takes a secret key $\mathcal{SK} $ and a vertex $v$ as inputs and outputs a query trapdoor $\tau _v$.
\item $ \hat{c} \gets \mathtt{LPQuery}(\tau _v, T):$ is a PPT algorithm run by the cloud server that takes a query trapdoor $\tau_v$ and the structure $T$ as inputs and outputs a list $\hat{c}$ of encrypted scores with respect to all vertices.
\item $i_{res} \gets \mathtt{FindMaxVertex} (pk,sk,\hat{c}):$ is a PPT algorithm run by the proxy server that takes $pk$, $sk$ and $\hat{c}$ as inputs and outputs the index $i_{res}$ of the vertex most likely to connect with the queried vertex.
\end{itemize}
\end{definition}
\noindent
{\bf Correctness:}
An $\mathtt{SLP}$ scheme is said to be correct if, $\forall \lambda \in \mathbb{N}$ and $\forall (\mathcal{PK}, \mathcal{SK})$ generated using $\mathtt{KeyGen}(1^\lambda)$, every query in any sequence of queries on $T$ outputs a correct vertex identifier, except with negligible probability.
\bigskip \noindent
{\bf Adaptive security:}
An $\mathtt{SLP}$ scheme should have two properties:
\begin{enumerate}
\item Given $T$, the cloud server should not learn any information about $G$, and
\item from a sequence of query trapdoors, the servers should learn nothing about the corresponding queried vertices.
\end{enumerate}
The security of an $\mathtt{SLP}$ scheme is defined in the real-ideal paradigm.
In the real scenario, the challenger $\mathcal{C}$ generates the keys. The adversary $\mathcal{A}$ generates a graph $G$ which it sends to $\mathcal{C}$. $\mathcal{C}$ encrypts the graph with its secret key and sends it to $\mathcal{A}$. Then, $q$ times, $\mathcal{A}$ adaptively chooses a query vertex based on the previous results and receives the corresponding trapdoor. Finally, $\mathcal{A}$ outputs a guess bit $b$.
In the ideal scenario, on receiving the graph $G$, the simulator $\mathcal{S}$ generates a simulated encrypted matrix. For each adaptive query of $\mathcal{A}$, $\mathcal{S}$ returns a simulated token. Finally, $\mathcal{A}$ outputs a guess bit $b'$.
The security definition (Definition~\ref{def:cqa2slp}) ensures that $\mathcal{A}$ cannot distinguish $\mathcal{C}$ from $\mathcal{S}$.
We have assumed that the communication channels between the client and the servers are secure. Since the CS and the PS do not collude, they do not share their collected information, so the simulator can treat the CS and the PS separately.
In our scheme, the proxy server has neither the encrypted data nor the trapdoors. During a query operation, it gets a set of scrambled scores of the queried vertex with the other vertices. So, we can consider only the cloud server as the adversary (see \cite{BoschPLLTWHJ14}). Let us define security as follows.
\begin{figure}[!htb]
\centering
\begin{minipage}{.45\textwidth}
\begin{algorithm}[H] \DontPrintSemicolon
\caption{ ${\textbf{Real}}^{\mathtt{SLP}}_ {\mathcal{A}}(\lambda)$} \label{algo1:gameReal}
$(\mathcal{PK}, \mathcal{SK})\gets \mathtt{KeyGen}(1^ \lambda)$ \;
$(G, st_{\mathcal{A}}) \gets \mathcal{A}_0(1^ \lambda)$ \;
$T \gets \mathtt{EncMatrix}(G, \mathcal{SK, PK})$ \;
$(v_1,st_{\mathcal{A}}) \gets \mathcal{A}_1(st_{\mathcal{A}},T) $ \;
$\tau_{v_1} \gets \mathtt{TrapdoorGen}(v_1,\mathcal{SK} )$ \;
\For { $2 \leq i \leq q$}{
$(v_i , st_{\mathcal{A}} ) \gets \mathcal{A}_i (st_{\mathcal{A}},T, \tau_{v_1}, \ldots , \tau_{v_{i-1}})$ \;
$\tau_{v_i} \gets \mathtt{TrapdoorGen}(v_i,\mathcal{SK} )$ \;
}
$\tau = (\tau_{v_1}, \tau_{v_2}, \ldots , \tau_{v_{q}})$ \;
$b \gets \mathcal{A}_{q+1} { (T,\tau, st_{\mathcal{A}})}$, where $b\in \{0,1\} $ \;
\Return $b$ \;
\end{algorithm}
\end{minipage}%
\begin{minipage}{.03\textwidth}
\
\end{minipage}%
\begin{minipage}{0.5\textwidth}
\begin{algorithm}[H] \DontPrintSemicolon
\caption{ ${\textbf{Ideal}}^{\mathtt{SLP}}_ {\mathcal{A}, \mathcal{S}}(\lambda)$} \label{algo1:gameIdeal}
$(G, st_{\mathcal{A}}) \gets \mathcal{A}_0(1^ \lambda)$ \;
$ (st_{\mathcal{S}},T) \gets \mathcal{S}_0(\mathcal{L}_{bld}(G) ) $ \;
$(v_1,st_{\mathcal{A}}) \gets \mathcal{A}_1(st_{\mathcal{A}},T) $ \;
$ (\tau_{v_1} , st_{\mathcal{S}}) \gets \mathcal{S}_1(st_{\mathcal{S}}, \mathcal{L}_{qry}(v_1)) $ \;
\For { $2 \leq i \leq q$}{
$(v_i , st_{\mathcal{A}} ) \gets \mathcal{A}_i (st_{\mathcal{A}},T, \tau_{v_1}, \ldots , \tau_{v_{i-1}})$ \;
$(\tau_{v_i} , st_{\mathcal{S}}) \gets \mathcal{S}_i (st_{\mathcal{S}}, \mathcal{L}_{qry}(v_1), \ldots, \mathcal{L}_{qry}(v_{i}) )$ \;
}
$\tau = (\tau_{v_1}, \tau_{v_2}, \ldots , \tau_{v_{q}})$ \;
$b' \gets \mathcal{A}_{q+1} { (T, \tau, st_{\mathcal{A}})}$, where $b' \in \{0,1\} $ \;
\Return $b'$ \;
\end{algorithm}
\end{minipage}
\end{figure}
\begin{definition}[Adaptive semantic security $(\mathtt{CQA2})$] \label{def:cqa2slp}
Let $\mathtt{SLP}$ = $(\mathtt{KeyGen} $, $\mathtt{EncMatrix} $, $\mathtt{TrapdoorGen}$, $\mathtt{LPQuery}$, $\mathtt{FindMaxVertex})$ be a secure link prediction scheme. Let $\mathcal{A}$ be a stateful adversary, $\mathcal{C}$ be a challenger, $\mathcal{S}$ be a stateful simulator and $\mathcal{L} = (\mathcal{L}_{bld} , \mathcal{L}_{qry} )$ be a stateful leakage algorithm. Let us consider the two games ${\textbf{Real}}^{\mathtt{SLP}}_ {\mathcal{A}}(\lambda)$ (see Algo.~\ref{algo1:gameReal}) and ${\textbf{Ideal}}^{\mathtt{SLP}}_ {\mathcal{A}, \mathcal{S}}(\lambda)$ (see Algo.~\ref{algo1:gameIdeal}).
The $\mathtt{SLP}$ is said to be adaptively semantically $\mathcal{L}$-secure against chosen-query attacks ($\mathtt{CQA2}$) if, $\forall$ PPT adversaries $\mathcal{A} = (\mathcal{A}_0, \mathcal{A}_1,\ldots, \mathcal{A}_{q+1})$, where $q=poly(\lambda)$, $\exists$ a PPT simulator $\mathcal{S} = (\mathcal{S}_0, \mathcal{S}_1, \ldots, \mathcal{S}_q)$, such that
\begin{equation}
|Pr[{\textbf{Real}}^{\mathtt{SLP}}_ \mathcal{A}(\lambda)=1] - Pr[{\textbf{Ideal}}^{\mathtt{SLP}} _ {\mathcal{A},\mathcal{S}}(\lambda)=1]| \leq negl(\lambda)
\end{equation}
\end{definition}
\subsection{Overview of our proposed schemes}
A graph can be encrypted in several representations, like an adjacency matrix, adjacency lists, an edge list etc. Each of them has advantages and disadvantages depending on the application. In our scheme, we have defined the score as the number of common neighbors, which can be calculated just by computing the inner product of the rows corresponding to the two vertices in question. The basic idea is that, given a vertex, to predict the most probable vertex to connect with, we compute its scores with all other vertices and sort them. However, calculating the inner products and sorting them at the cloud server are expensive operations, and no single scheme provides all of this functionality over encrypted data. So, we use the BGN homomorphic encryption scheme, which enables us to compute inner products on encrypted data. Choosing BGN also gives the client the power to issue not only the link prediction query but also the neighbor query, vertex degree query, adjacency query etc.
Besides the score computation, decrypting the scores and sorting them in encrypted form is non-trivial, keeping in mind that the client has low computation power.
So, we propose three schemes that perform score computation as well as sorting on encrypted data with the help of an honest-but-curious proxy server which does not collude with the cloud server. The three schemes exhibit a trade-off between the computation cost, the communication cost and the leakage incurred in finding the vertex most likely to connect with.
\section{Our Proposed Protocol for SLP}\label{sec:SLP1}
In this section, we propose an efficient scheme $\mathtt{SLP}$-$\mathtt{I}$ and analyze its security. The scheme is divided into three phases: key generation, data encryption and query.
The client first generates the required secret and public keys. Then it encrypts the adjacency matrix of the graph in a structure and uploads it to the CS. To query for a vertex, the client generates a query trapdoor and sends it to the CS. The CS computes encrypted scores (i.e., inner products of the row corresponding to the queried vertex with the rows of the other vertices) on the encrypted graph. The PS decrypts the scores, finds the vertex with the highest score and sends the result to the client.
\medskip
\noindent
{\bf Key Generation:} \label{ss4:keyGen}
In this phase, given a security parameter $\lambda$, the client chooses a bilinear map $e :\mathbb{G} \times \mathbb{G} \rightarrow \mathbb{G}_1$.
Then, the permutation key ${k_{perm}}$ is chosen at random for the PRP $F: \{0,1\}^{*}\times \{0,1\}^{\log N} \rightarrow \{0,1\}^{\log N} $, and the client executes $\mathtt{BGN.Gen}() $ to get $sk$ and $pk$.
After generating the private key $\mathcal{SK}$ and the public key $\mathcal{PK}$, the part $sk$ of $\mathcal{SK}$ is shared with the PS. This part of the key enables the PS to decrypt the (blinded) values it receives during queries. Key generation is described in Algo.~\ref{algo4:keyGen}.
\begin{algorithm} \DontPrintSemicolon
\caption{$\mathtt{KeyGen}(1^ \lambda)$} \label{algo4:keyGen}
$ k_{perm} \xleftarrow{\$} \{0,1\}^\lambda$ \;
$(pk,sk) \gets \mathtt{BGN.Gen}(1^\lambda) $ \;
$\mathcal{PK} \gets pk$;
$\mathcal{SK} \gets (sk, k_{perm} ) $ \;
\Return $ (\mathcal{PK}, \mathcal{SK})$ \;
\end{algorithm}
\medskip \noindent
{\bf Data Encryption:} In this phase, the client encrypts the adjacency matrix with its private key and uploads the encrypted matrix to the CS (see Algo.~\ref{algo4:EncMatrix}). Each entry $a_{ij}$ of the adjacency matrix $A$ of $G$ is encrypted using Algo.~\ref{algo:bgnEncryptG}. Let $ M = (m_{ij})_{N\times N}$ be the encrypted matrix. Each row of $M$ is stored in the structure $T$; the PRP $F$ gives the position in $T$ corresponding to each vertex. Finally, the structure $T$ is sent to the CS.
\begin{figure}[!htb]
\centering
\begin{minipage}{.5\textwidth}
\begin{algorithm}[H] \DontPrintSemicolon
\caption{$\mathtt{EncMatrixI}(A, \mathcal{SK, PK})$} \label{algo4:EncMatrix}
$(n,\mathbb{G}, \mathbb{G}_1, e, g, h) \gets \mathcal{PK}$ \;
$(q_1, k_{perm}) \gets \mathcal{SK}$ \;
\For {$i=1, j=1$ \KwTo $i=N, j=N$} {
$m_{ij} \gets \mathtt{BGN.Encrypt_{\mathbb{G}}}(\mathcal{PK}.pk, a_{ij}) $ \;
}
Construct a structure $T$ of size $N$. \;
\For {$i=1$ \KwTo $i=N$} {
$ind \gets F_{k_{perm}}(id(v_i))$ \;
$T[ind] \gets (m_{i1},m_{i2}, \ldots, m_{iN} ) $. \;
}
\Return $T$ \;
\end{algorithm}%
\begin{algorithm}[H] \DontPrintSemicolon
\caption{$\mathtt{TrapdoorGenI}(v, \mathcal{SK})$} \label{algo4:TrapdoorGen}
$(sk, k_{perm} ) \gets \mathcal{SK} $ \;
$ i' \gets F_{k_{perm}}(id(v)) $; $s \xleftarrow{\$} \{0,1\}^{\lambda}$ \;
$\tau_v \gets ( i' ,s)$ \;
\Return $\tau_v$
\end{algorithm}
\end{minipage}%
\begin{minipage}{.02\textwidth}
\
\end{minipage}%
\begin{minipage}{0.5\textwidth}
\begin{algorithm}[H] \DontPrintSemicolon
\caption{$\mathtt{LPQueryI}(\tau _v, T)$} \label{algo4:LPQuery}
$N \gets |T|$; $(i',s ) \gets \tau_v $ \;
$(m_{i' 1},m_{i' 2}, \ldots, m_{ i' N} ) \gets T[i'] $ \;
\For {$i = 1$ \KwTo $i = N$}{
$r \xleftarrow{\$} \{0,1\}^\lambda $ \;
\eIf {$i \neq i' $ }{
$(m_{i1},m_{i2}, \ldots, m_{iN} ) \gets T[{i}] $ \;
$c_{i} \gets e(g,h)^{r} .\prod ^{N} _{k=1} e(m_{i'k},m_{ik}) $ \label{encScoreComp} \;
}{
$c_{i'} \gets e(g,g)^{0}.e(g,h)^{r} $ \;
}
}
$\pi _s\gets$ permutation with key $s$. \;
$\hat{c} \gets (c_{\pi_s (1) },c_{\pi_s (2) }, \ldots, c_{\pi_s (N) })$ \;
$\hat{m} \gets (m_{\pi_s (1) },m_{\pi_s (2) }, \ldots, m_{\pi_s (N) })$,\\
where $m_{i } \gets m_{i'i}.h^{r_i}$, $r_i \xleftarrow{\$} \{0,1\}^\lambda $ \;
\Return ($ \hat{c} $, $\hat{m} $) to the PS \;
\end{algorithm}%
\end{minipage}
\end{figure}
\medskip \noindent
{\bf Query:}
In the query phase, the client sends a query trapdoor to the CS. The CS computes the encrypted scores with respect to the other vertices and sends them to the PS. The PS decrypts them and sends the identifier of the vertex with the highest score to the client.
To query for a vertex $v$, the client first chooses a secret key $s \xleftarrow{\$} \{0,1\}^{\lambda}$ for the PRP $\pi _s$ that is not known to the PS (see Algo.~\ref{algo4:TrapdoorGen}). Then it computes the position $ i' = F_{k_{perm}}(id(v)) $ and sends the trapdoor $\tau_v = ( i' ,s)$ to the CS.
On receiving $\tau_v $, the CS computes the encrypted scores $ (c_{1}, c_{2},\ldots , c_{N})$ and the blinded entries $(m_{1}, m_{2},\ldots , m_{N}) $ of the row corresponding to the queried vertex (see Algo.~\ref{algo4:LPQuery}). Using $\pi _s$, the CS shuffles the order of the encrypted scores and of the $m_i$'s. Finally, the CS sends the shuffled encrypted scores and the scrambled queried-row entries $ (m_{\pi _s(1) },m_{\pi_s (2) }, \ldots, m_{\pi_s (N) })$ to the PS.
\begin{algorithm} \DontPrintSemicolon
\caption{$\mathtt{FindMaxVertexI}(sk, \hat{c} ,\hat{m} )$} \label{algo4:FindMaxVertex}
$ (\bar{c}_{1 },\bar{c}_{2}, \ldots, \bar{c}_{N }) \gets \hat{c} $ \;
$ (\bar{m}_{1 },\bar{m}_{2}, \ldots, \bar{m}_{N }) \gets \hat{m}$ \;
\For {$i = 1$ \KwTo $i = N$} {
${s}_i \gets \mathtt{BGN.Decrypt}_{\mathbb{G}_1}(pk,sk,\bar{c}_i)$ \;
${a}_i \gets (\mathtt{BGN.Decrypt}_{\mathbb{G}}(pk, sk,\bar{m}_i)) \bmod 2$ \;
}
$i_{res} \gets i: (a_i=0) \wedge (s_i = \max \{s_j:j\in [N],\ a_j=0\}) $ \;
\Return $i_{res} $ to the client \;
\end{algorithm}
Since the PS has $sk$ ($=q_1$), it can decrypt all the $\bar{c}_i$s and $\bar{m}_i$s. It decrypts $\bar{m}_i$ first, and decrypts $\bar{c}_i$ only if the corresponding decrypted value of $\bar{m}_i$ is $0$. Then, it takes an index ${i_{res}}$ such that $s_{i_{res}}$ is the maximum among these decrypted scores and sends it to the client (see Algo.~\ref{algo4:FindMaxVertex}).
Finally, the client recovers the resulting vertex identifier $v_{res}$ as $v_{res} \gets \pi_s ^{-1} ({i_{res}}) $.
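To make the choreography concrete, here is a plaintext mirror of one query round (a sketch of ours; the BGN layer, the PRP $F$ and all blinding factors are stripped away, so it shows only who computes what):
\begin{verbatim}
import random

def slp1_round_plain(A, i_prime):
    """One SLP-I query for the vertex stored at row i_prime of A."""
    N = len(A)
    # Client: a fresh permutation standing in for the PRP pi_s.
    pi = list(range(N)); random.shuffle(pi)
    # CS: scores of the queried row with every row, then shuffling.
    scores = [0 if i == i_prime else
              sum(a * b for a, b in zip(A[i_prime], A[i]))
              for i in range(N)]
    c_hat = [scores[pi[i]] for i in range(N)]      # shuffled scores
    m_hat = [A[i_prime][pi[i]] for i in range(N)]  # shuffled row bits
    # PS: index of the maximum score among non-adjacent vertices.
    i_res = max((i for i in range(N) if m_hat[i] == 0),
                key=lambda i: c_hat[i])
    # Client: undo the shuffle to obtain the real row index.
    return pi[i_res]
\end{verbatim}
In the actual protocol, the scores and row bits travel in encrypted and blinded form, and the PS learns only the decrypted scores in shuffled order.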
\medskip \noindent
{\bf Correctness:}
For any two rows $T[i]$ and $T[j]$, if $c_{ij}$ is the encryption of the score $s_{ij}$, then $c_{ij} = e(g,h)^{r} \prod ^{N} _{k=1} e(m_{ik},m_{jk})$. Since ${e(g,g)^{q_1q_2}} =1$, we get $(c_{ij})^{q_1}= (e(g,g)^{q_1})^ {\sum ^{N} _{k=1} {a_{ik}a_{jk}}} = \hat{g} ^{s_{ij}}$, where $\hat{g} = e(g,g)^{q_1} $.
Thus, the $D\log$ of $(c_{ij})^{q_1}$ to the base $\hat{g}$ gives $s_{ij}$. If the powers of $\hat{g}$ are pre-computed, the score $s_{ij}$ can be found in constant time; alternatively, Pollard's lambda method \cite{MenezesOV96} can be used to compute this discrete logarithm.
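The constant-time lookup can be pictured as follows (a toy sketch of ours; $\mathbb{Z}_p^*$ with a small prime stands in for $\mathbb{G}_1$, and the modulus and generator are hypothetical, offering no security):
\begin{verbatim}
# Scores lie in {0, ..., N}, so a table of N+1 powers suffices.
p, g_hat, N = 1_000_003, 5, 100

# One-time precomputation: map g_hat^s -> s.
dlog_table = {pow(g_hat, s, p): s for s in range(N + 1)}

def recover_score(c_prime):
    # c_prime = (c_ij)^{q1} = g_hat^{s_ij}; recover s_ij by lookup.
    return dlog_table[c_prime]

assert recover_score(pow(g_hat, 42, p)) == 42
\end{verbatim}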
\subsection{Security Analysis}
In the security definition, a small amount of leakage is allowed.
The adversary knows the algorithms and possesses the encrypted data and the query trapdoors; only $\mathcal{SK}$ is unknown to it.
The leakage function $\mathcal{L}$ is a pair $(\mathcal{L}_{bld}, \mathcal{L}_{qry})$ (associated with $\mathtt{EncMatrix}$ and $\mathtt{LPQuery}$ respectively) where $\mathcal{L}_{bld}(G) = \{|T|\} $ and $\mathcal{L}_{qry}(v) = \{ \tau _{v} \}$.
\begin{theorem} \label{th:security1}
If $\mathtt{BGN}$ is semantically secure and $F$ is a PRP, then $\mathtt{SLP}$-$\mathtt{I}$ is $\mathcal{L} $-secure against adaptive chosen-query attacks.
\end{theorem}
\begin{proof}
The proof of security is based on the simulation-based $\mathtt{CQA2}$ security definition (see Definition~\ref{def:cqa2slp}).
Given the leakage $\mathcal{L}_{bld}$, the simulator $\mathcal{S}$ generates a randomized structure $ \widetilde{T}$ which simulates the structure $ {T} $ of the challenger $\mathcal{C}$.
Given a query trapdoor $\tau_{v}$, $\mathcal{S}$ returns a simulated trapdoor $\widetilde{\tau_{v}}$, maintaining consistency across the future queries of the adversary. To prove the theorem, it is enough to
show that the structures and trapdoors generated by $\mathcal{C}$ and $\mathcal{S}$ are indistinguishable to $\mathcal{A}$.
\begin{itemize}
\item (Simulating the structure $T$) $\mathcal{S}$ first generates $ (\mathcal{SK}, \mathcal{PK}) \gets \mathtt{BGN}.\mathtt{Gen}(1^{\lambda})$. Given $ \mathcal{L}_{bld} (A)$, $\mathcal{S}$ takes an empty structure $\widetilde{T}$ of size $|T|$. Finally, it takes $\widetilde{m_{ij}} \gets \mathtt{BGN}.\mathtt{Encrypt_\mathbb{G}}( \mathcal{PK}.pk,0^{\lambda}), \ (i, j )\in [N] \times [N]$ where $N = |T| $.
\item (Simulating query trapdoor $\tau_v$) $\mathcal{S}$ first takes an empty dictionary $Q$. Given $ \mathcal{L}_{qry}(v)$, $\mathcal{S}$ checks whether $v$ is present in $Q$. If not, it takes a random $\log N$-bit string $\widetilde{ \tau_v}$, stores it as $Q[v] = \widetilde{ \tau_v} $ and returns $\widetilde{ \tau_v} $. If $v$ has appeared before, it returns $Q[v]$.
\end{itemize}
Semantic security of $\mathtt{BGN}$ guarantees that $\widetilde{m_{ij}} $ and ${m_{ij}} $ are indistinguishable. Since $F$ is a PRP, $ \widetilde{ \tau_v}$ and ${ \tau_v}$ are indistinguishable. This completes the proof.
\end{proof}
\section{$\mathtt{SLP}$-$\mathtt{II}$ with less leakage} \label{sec:SLP2}
Though the $\mathtt{SLP}$-$\mathtt{I}$ scheme is efficient, it has a few disadvantages.
Firstly, in $\mathtt{SLP}$-$\mathtt{I}$, the numbers of common neighbors between the queried vertex and all other vertices are leaked to the PS, which provides it with partial knowledge of the graph.
Since the PS is semi-honest, we want to leak as little information as possible.
In this section, we propose another scheme, $\mathtt{SLP}$-$\mathtt{II}$, that hides most of the scores from the PS, which reduces the leakage.
Secondly, in $\mathtt{SLP}$-$\mathtt{I}$, the communication cost from the CS to the PS while processing a link prediction query is high; $\mathtt{SLP}$-$\mathtt{II}$ reduces this cost. We achieve these improvements by using extra storage of the size of the matrix $M$ and extra bandwidth of $O(N)$ from the PS to the client.
\subsection{Proposed Protocol}
In $\mathtt{SLP}$-$\mathtt{II}$, after computing the scores, the CS randomly increases the scores of the vertices adjacent to the queried vertex beyond the maximum possible score, i.e., the degree of the queried vertex. For example, let $s$ be a score held in the form $g_1^s$; then a random number $r$, greater than the degree, is added to it by computing $g_1^s\cdot g_1^r = g_1^{(s+r)}$.
The PS only decrypts the scores and sends the sorted list to the client. Since the degree is hidden from the PS but known to the client, the client can eliminate the vertices with score larger than the degree, i.e., exactly those adjacent to the queried vertex.
The algorithms are as follows.
\medskip
\noindent
{\bf Key Generation:} Same as Algo.~\ref{algo4:keyGen}.
\medskip \noindent
{\bf Data Encryption:} In $\mathtt{SLP}$-$\mathtt{II}$, data encryption is similar to Algo.~\ref{algo4:EncMatrix}.
Together with $M = (m_{ij})_{N \times N}$, another matrix $M' = (m'_{ij})_{N \times N} $ is generated by encrypting a matrix $B$ (see Algo.~\ref{algo2:EncMatrix}). Here $B = (b_{ij})_{N \times N}$, where $b_{ij} = t$ for some $t$ with $\deg{v_i}<t<N-\deg{v_i}$ if $v_i$ and $v_j$ are connected, and $b_{ij} = 0$ otherwise. Now, $m'_{ij} = e(g,g)^{b_{ij}}\cdot e(g,h)^{r_{ij}} $, where the notation is as usual. Finally, the matrices $M$ and $M'$ are uploaded to the CS in the structures $T$ and $T'$ respectively. The rows of $M$ and $M'$ corresponding to the vertex $v$ are stored in $T[F_{k_{perm}}(id(v))]$ and $T'[F_{k_{perm}}(id(v))]$ respectively.
Note that the entries of $M$ are in the group $\mathbb{G}$ whereas those of $M'$ are in $\mathbb{G}_1$.
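A plaintext sketch (ours) of how the mask matrix $B$ may be sampled; it assumes $\deg{v_i} \le N/2 - 1$ for every $i$, so that the interval for $t$ is non-empty:
\begin{verbatim}
import random

def build_B(A):
    """For an edge (i, j), pick b_ij = t with
    deg(v_i) < t < N - deg(v_i); otherwise b_ij = 0."""
    N = len(A)
    deg = [sum(row) for row in A]
    return [[random.randint(deg[i] + 1, N - deg[i] - 1) if A[i][j] else 0
             for j in range(N)]
            for i in range(N)]
\end{verbatim}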
\begin{figure}[!htb]
\centering
\begin{minipage}{.5\textwidth}
\begin{algorithm}[H] \DontPrintSemicolon
\caption{$\mathtt{EncMatrixII}(A, \mathcal{SK, PK})$} \label{algo2:EncMatrix}
$(n,\mathbb{G}, \mathbb{G}_1, e, g, h) \gets \mathcal{PK}$; $(q_1, k_{perm}) \gets \mathcal{SK}$ \;
Construct matrix $B$ from $A$ \;
\For {$i=1, j=1$ \KwTo $i=N, j=N$} {
$m_{ij} \gets \mathtt{BGN.Encrypt_{\mathbb{G}}}(\mathcal{PK}.pk, a_{ij}) $ \;
$m'_{ij} \gets \mathtt{BGN.Encrypt_{\mathbb{G}_1}}(\mathcal{PK}.pk, b_{ij}) $ \;
}
Construct structures $T$ and $T'$ of size $N$ \;
\For {$i=1$ \KwTo $i=N$} {
$ind_i \gets F_{k_{perm}}(id(v_i))$ \;
$T[ind_i] \gets (m_{i1},m_{i2}, \ldots, m_{iN} ) $ \;
$T'[ind_i] \gets (m'_{i1},m'_{i2}, \ldots, m'_{iN} ) $ \;
}
\Return $(T, T')$ \;
\end{algorithm}
\end{minipage}%
\begin{minipage}{.02\textwidth}
\
\end{minipage}%
\begin{minipage}{0.51\textwidth}
\begin{algorithm}[H] \DontPrintSemicolon
\caption{$\mathtt{LPQueryII}(\tau _v, T, T')$} \label{algo2:LPQuery}
$N \gets |T|$; $(i',s ) \gets \tau_v $ \;
$(m_{i' 1},m_{i' 2}, \ldots, m_{ i' N} ) \gets T[i'] $ \;
\For {$i = 1$ \KwTo $i = N$}{
$r \xleftarrow{\$} \{0,1\}^\lambda $ \;
\eIf {$i \neq i' $ }{
$(m_{i1},m_{i2}, \ldots, m_{iN} ) \gets T[{i}] $ \;
$c_{i} \gets e(g,h)^{r} .\prod ^{N} _{k=1} e(m_{i'k},m_{ik}) $ \label{encScoreComp2} \;
}{
$c_{i} \gets e(g,g)^{0}.e(g,h)^{r} $ \;
}
$c_i \gets c_i \cdot m'_{i'i}$ \;
}
$m \gets \prod_{i = 1}^{i = N} m_{i'i}$ \;
$\pi _s\gets$ permutation with key $s$. \;
$\hat{c} \gets (c_{\pi_s (1) },c_{\pi_s (2) }, \ldots, c_{\pi_s (N) })$ \;
\Return $ \hat{c} $ to the PS and $m$ to the client \;
\end{algorithm}%
\end{minipage}
\end{figure}
\medskip \noindent
{\bf Query:} As in the previous scheme, the client sends the query trapdoor $\tau _v = (i',s)$ to the CS for a vertex $v$. Let $(c_{1}, c_{2},\ldots , c_{N})$ be the encrypted scores computed in step \ref{encScoreComp2} of Algo.~\ref{algo2:LPQuery}. In addition, for each $i$, $c_i$ is updated as $c_i \gets c_i\cdot m'_{i'i}$. Then $ \hat{c} =(c_{\pi _s (1) },c_{\pi _s (2) }, \ldots, c_{\pi _s (N) }) $ is sent to the PS.
Instead of sending $\hat{m}$ to the PS, $m = \prod_{i = 1}^{N} m_{i'i}$ is sent to the client; $m$ is an encryption of the degree of the vertex $v$. The $\mathtt{SLP}$-$\mathtt{II}$ query is described in Algo.~\ref{algo2:LPQuery}.
The PS decrypts $\hat{c}$ as $s'_1, s'_2, \ldots, s'_N$ and sorts the values. Then, the PS sends $ (s'_{i_1}, i_1)$, $(s'_{i_2}, i_2)$, $\ldots$, $(s'_{i_N}, i_N) $ to the client, where the $s'_{i_j}$s are in decreasing order and the $i_j$s are their indices in $\hat{c}$ (see Algo.~\ref{algo2:FindMaxVertex}).
The client takes the first index ${i_{res}} = i_j$ such that $s'_{i_j} \leq \deg{v}$; it gets $\deg{v}$ by decrypting $m$. Finally, the client recovers the resulting vertex identifier $v_{res}$ as $v_{res} \gets \pi _s ^{-1} ({i_{res}}) $.
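The client-side filtering then amounts to the following (a plaintext sketch of ours with hypothetical values):
\begin{verbatim}
def pick_result(sorted_scores, deg_v):
    """sorted_scores: (s', index) pairs in decreasing order of s'.
    Entries of adjacent vertices were shifted above deg_v, so the
    first entry with s' <= deg_v carries the maximum true score."""
    for s_prime, idx in sorted_scores:
        if s_prime <= deg_v:
            return idx

# Example with deg(v) = 3: the entries 9 and 7 are masked scores of
# adjacent vertices and are skipped.
print(pick_result([(9, 4), (7, 1), (3, 2), (1, 0)], 3))  # -> 2
\end{verbatim}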
\begin{algorithm} \DontPrintSemicolon
\caption{$\mathtt{FindMaxVertexII}(sk, \hat{c} )$} \label{algo2:FindMaxVertex}
$ (\bar{c}_{1 },\bar{c}_{2}, \ldots, \bar{c}_{N }) \gets \hat{c} $ \;
\For {$i = 1$ \KwTo $i = N$} {
$s'_i \gets \mathtt{BGN.Decrypt _{\mathbb{G}_1}}(pk,sk,\bar{c}_i)$ \label{s_dashed} \;
}
Sorting the $s'_i$s in decreasing order yields ($ (s'_{i_1}, i_1)$, $(s'_{i_2}, i_2)$, $\ldots,(s'_{i_N}, i_N) $) \;
\Return ($ (s'_{i_1}, i_1)$, $(s'_{i_2}, i_2)$, $\ldots,(s'_{i_N}, i_N) $) \;
\end{algorithm}%
\medskip
\noindent
{\bf Correctness:} For all $i$, the decrypted entry $s'_{i}$ (line \ref{s_dashed}, Algo.~\ref{algo2:FindMaxVertex}) equals $s_i + b_{i'i}$, where $s_i$ is the actual score. Since $s_i \leq \deg{v}$, and since $b_{i'i} > \deg{v}$ when $v_{i'}$ and $v_{i}$ are connected while $b_{i'i}=0$ otherwise, we see that $s'_{i}$ exceeds $\deg {v}$ exactly when $v_{i'}$ and $v_{i}$ are connected. So, the client can eliminate these entries from the list.
\subsection{Security Analysis} $\mathtt{SLP}$-$\mathtt{II}$ does not leak any extra information to the CS compared to $\mathtt{SLP}$-$\mathtt{I}$. The leakage $ \mathcal{L} =(\mathcal{L}_{bld}, \mathcal{L}_{qry})$ is the same as in $\mathtt{SLP}$-$\mathtt{I}$.
\begin{theorem}
If $\mathtt{BGN}$ is semantically secure and $F$ is a PRP, then $\mathtt{SLP}$-$\mathtt{II}$ is $\mathcal{L} $-secure against adaptive chosen-query attacks.
\end{theorem}
\begin{proof}
As in the proof of Theorem~\ref{th:security1}, the simulator needs to simulate $ {T} $, $ {T'} $ and $\tau_v $. To simulate the structure $T'$, given $ \mathcal{L}_{bld} (A)$, $\mathcal{S}$ takes an empty structure $\widetilde{T}'$ of size $|T'|$ and sets $\widetilde{m'_{ij}} \gets \mathtt{BGN}.\mathtt{Encrypt_{\mathbb{G}_1}} ( \mathcal{PK}.pk, 0^{\lambda})$, $(i, j )\in [N] \times [N]$. The rest of the proof is similar to that of Theorem~\ref{th:security1}.
\end{proof}
\section{ $\mathtt{SLP}$ scheme using garbled circuit ($\mathtt{SLP}$-$\mathtt{III}$)} \label{sec:SLP3}
In $\mathtt{SLP}$-$\mathtt{II}$, the PS is still able to learn the scores with many vertices, and there is a considerable communication cost from the PS to the client. In this section, we propose $\mathtt{SLP}$-$\mathtt{III}$, in which the PS does not get any scores. Besides, the proxy needs to send only the result to the client, which reduces the communication overhead for the client.
\subsection{Protocol Description}
In $\mathtt{SLP}$-$\mathtt{III}$, after generating the keys, the client encrypts the adjacency matrix of the graph and uploads it to the CS. At the same time, it shares a part of its secret key with the PS. In the query phase, the CS computes the encrypted scores on receiving the query trapdoor from the client. However, it masks each score with a random number selected by itself before sending them to the PS. The PS decrypts the masked scores and evaluates a garbled circuit, constructed by the CS (as described in Section~\ref{ss:mgc}), to find the vertex with the maximum score.
Finally, the PS returns to the client the index corresponding to the vertex with the maximum score.
\medskip
\noindent
{\bf Key Generation:} Same as Algo.~\ref{algo4:keyGen}.
\medskip
\noindent
{\bf Data Encryption:} Same as Algo.~\ref{algo4:EncMatrix}.
\medskip
\noindent
{\bf Query:}
To query for a vertex $v$, the client generates a query trapdoor $\tau_v = (i',s)$ (see Algo.~\ref{algo4:TrapdoorGen}) and sends it to the CS.
On receiving $\tau _ v$, the CS computes the encrypted scores $(c_1, c_2,\ldots,c_N)$. It then considers the row $T[i'] = (m_{i'1}, m_{i'2},\ldots,m_{i'N})$ corresponding to the queried vertex. Then, with random $r_i$ and $r'_i$, it computes
$\bar{c}_i \gets c_{\pi _s (i)}\cdot \mathtt{BGN.Encrypt}_{\mathbb{G}_1}(\mathcal{PK}.pk, r_i)$ and $\bar{m}_i \gets m_{i'{\pi _s(i)}}\cdot \mathtt{BGN.Encrypt}_{\mathbb{G}}(\mathcal{PK}.pk, r'_i)$, for all $i$.
If the encrypted scores were sent directly, the PS could decrypt them, as it has the partial secret key $sk$. That is why the CS chooses the random $r_i$s and $r'_i$s to mask them.
\begin{algorithm}[H] \DontPrintSemicolon
\begin{multicols}{2}
\caption{$\mathtt{LPQueryIII}(\tau _v, T)$} \label{algo5:LPQuery}
$N \gets |T|$; $(i',s ) \gets \tau_v $ \;
$(m_{i' 1},m_{i' 2}, \ldots, m_{ i' N} ) \gets T[i'] $ \;
\For {$i = 1$ \KwTo $i = N$}{
\eIf {$i \neq i' $ }{
$(m_{i1},m_{i2}, \ldots, m_{iN} ) \gets T[{i}] $ \;
$c_{i} \gets \prod ^{N} _{k=1} e(m_{i'k},m_{ik}) $ \;
}{
$r \xleftarrow{\$} \{0,1\}^\lambda $ \;
$c_{i'} \gets e(g,g)^{0}.e(g,h)^{r} $ \;
}
}
$\pi _s \gets$ permutation with key $s$.\;
\For {$i = 1$ \KwTo $i = N$}{
$r_i, r'_i,x_i, x'_i \xleftarrow{\$} \{0,1\}^\lambda $ \;
$\bar{c}_i \gets c_{\pi _s (i)}.e(g,g)^{r_i}.e(g,h)^{x_i} $ \;
$\bar{m}_i \gets m_{i'{\pi _s(i)}}. g^{r'_i}.h^{x'_i}$ \;
}
$\hat{c} \gets (\bar{c}_{1 },\bar{c}_{2}, \ldots, \bar{c}_{N })$ \;
$\hat{m} \gets (\bar{m}_{1 },\bar{m}_{2}, \ldots, \bar{m}_{N })$ \;
Computes $MGC$ \;
\Return ($ \hat{c} $, $\hat{m} $, $MGC$) to PS \;
\end{multicols}
\end{algorithm}
To find the vertex with the highest score, the CS builds a garbled circuit $MGC$ (see Fig.~\ref{fig:maximalCircuit}) as described in Section~\ref{ss:mgc}.
The CS sends $\hat{c} = (\bar{c}_{1}, \bar{c}_{2}, \ldots, \bar{c}_{N})$ and $\hat{m} = (\bar{m}_{1}, \bar{m}_{2}, \ldots, \bar{m}_{N})$ to the PS, together with the garbled circuit $MGC$. The CS-side algorithm is described in Algo.~\ref{algo5:LPQuery}.
The PS receives $\hat{c}$ and $\hat{m}$. For all $i$, let $\bar{s}_i$ and $\bar{a}_i$ be the decryptions of $\bar{c}_i$ and $\bar{m}_i$, respectively (see Algo.~\ref{algo5:FindMaxVertex}). Then, the PS evaluates $MGC$. During the evaluation, the PS gives all $\bar{s}_i$s and $a_i$s and the corresponding indices $i$ as input, where $a_i= (\bar{a}_i \bmod 2)$. The CS gives the $r_i$s and $r''_i$s, where $r''_i = (r'_i \bmod 2)$, for all $i$ (see Section~\ref{ss:mgc}).
\begin{algorithm} \DontPrintSemicolon
\caption{$\mathtt{FindMaxVertexIII}(sk, \hat{c} ,\hat{m} , MGC)$} \label{algo5:FindMaxVertex}
$ (\bar{c}_{1 },\bar{c}_{2}, \ldots, \bar{c}_{N }) \gets \hat{c} $ \;
$ (\bar{m}_{1 },\bar{m}_{2}, \ldots, \bar{m}_{N }) \gets \hat{m}$ \;
\For {$i = 1$ \KwTo $i = N$} {
$\bar{s}_i \gets \mathtt{BGN.Decrypt}_{\mathbb{G}_1}(pk,sk,\bar{c}_i)$ \;
$\bar{a}_i \gets (\mathtt{BGN.Decrypt}_{\mathbb{G}}(pk,sk,\bar{m}_i))$ \;
$a_i \gets \bar{a}_i \bmod 2$ \;
}
Evaluates $MGC$ with $\bar{s}_i $ and $a_i$s as its inputs.\;
$i_{res} \gets $ output of the $MGC$ evaluation \;
\Return $i_{res} $ to the client \;
\end{algorithm}
From $MGC$, the PS gets an index $i_{res}$ which is sent to the client.
Finally, the client finds the resulting vertex identifier $v_{res}$ as $v_{res} \gets \pi_s ^{-1} ({i_{res}}) $.
\subsection{Maximum Garbled Circuit (MGC)} \label{ss:mgc}
We want minimal information to be leaked to both servers. Without knowledge of the values, finding the maximum is hard, since it is an iterative comparison process and would require several rounds of communication if only secure comparison were used. Building a maximum garbled circuit, however, allows the cloud and proxy servers to find the maximum without either of them learning the values.
Kolesnikov and Schneider~\cite{KolesnikovSS09} first presented a garbled circuit that computes the minimum of a set of distances. In their scheme, one party holds a set of points and the second party holds a single point. They used homomorphic encryption to compute the distances from the single point to the set of points and sorted them using the garbled circuit. However, \emph{each party knew the original values of its own points}.
In this paper, we introduce a novel maximum garbled circuit ($MGC$) with which one party computes the maximum of a set of numbers, \emph{without knowing their values}, with the help of another party, without leaking the values to it either.
Given a set of scores, $MGC$ outputs only the identity of the vertex with the maximum score.
\noindent
{\bf Computing vertex with max score:}
In $\mathtt{SLP}$-$\mathtt{III}$, the CS computes a garbled circuit $MGC$ (an example is shown in Fig.~\ref{fig:maximalCircuit}) for each query to find the identifier of the maximum-scored vertex. Before evaluating $MGC$, the PS obtains $ (\bar{s}_1, \bar{s}_2,\ldots, \bar{s}_N)$ and $ (a_1, a_2,\ldots, a_N)$ (Algo.~\ref{algo5:FindMaxVertex}). The CS keeps $ (r_1, r_2,\ldots, r_N)$ and $ (r''_1, r''_2,\ldots, r''_N)$, which are used as its inputs to $MGC$. During construction, it arranges the indices in the $MGC$ in such a way that $MGC$ outputs only the index of the resulting maximum score.
\begin{figure}[ht]
\centering
\includegraphics[width=0.49\textwidth]{figures/maximal.png}
\caption{Example of a Maximum circuit with $N = 7$}
\label{fig:maximalCircuit}
\end{figure}
$MGC$ is required to find the index corresponding to the maximum-scored vertex. The circuit is constructed layer by layer. The idea is to compare pairs of scores within each layer and pass the results to the next layer until the resulting vertex is found. If $|V|=N$, $MGC$ has $(\log N +1 )$ layers, numbered $0$ to $\log N$. The 0th layer consists of $N$ $\mathtt{NSS}$ blocks, and all remaining blocks are $\mathtt{Max}$ blocks. The $\mathtt{NSS}$ blocks feed the first layer and compute the scores securely without revealing them. Thus, each $\mathtt{NSS}$ block corresponds to one vertex. Each $\mathtt{Max}$ block computes the maximum of two scores and the corresponding index without learning them. An example of an $MGC$ for $N = 7$, built from $\mathtt{Max}$ blocks and $\mathtt{NSS}$ blocks, is shown in Fig.~\ref{fig:maximalCircuit}; an $MGC$ for any $N$ is constructed similarly.
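\medskip
\noindent
The layer structure can be illustrated in the clear (a minimal Python sketch of the pairwise-maximum tournament tree that the $\mathtt{Max}$ layers realize; garbling and the $\mathtt{NSS}$ unmasking are omitted, and the helper name is illustrative):
\begin{verbatim}
def max_tree(scores):
    # Layer 0 holds one (index, score) entry per vertex; each
    # subsequent layer halves the number of candidates by pairwise
    # comparisons, so ceil(log2 N) comparison layers suffice.
    layer = list(enumerate(scores))
    while len(layer) > 1:
        nxt = []
        for j in range(0, len(layer) - 1, 2):
            a, b = layer[j], layer[j + 1]
            nxt.append(a if a[1] >= b[1] else b)
        if len(layer) % 2 == 1:      # an odd element passes through
            nxt.append(layer[-1])
        layer = nxt
    return layer[0][0]               # only the index is output
\end{verbatim}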
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.24\textwidth}
\includegraphics[width=0.9\textwidth]{figures/max1.png}
\caption{$\mathtt{Max}_1$ block}
\label{fig:max1}
\end{subfigure}%
\begin{subfigure}[t]{0.24\textwidth}
\includegraphics[width=0.9\textwidth]{figures/max2.png}
\caption{$\mathtt{Max}_2$ block}
\label{fig:max2}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\includegraphics[width=0.9\textwidth]{figures/max3.png}
\caption{$\mathtt{Max}_3$ block }
\label{fig:max3}
\end{subfigure}%
\begin{subfigure}[t]{0.24\textwidth}
\includegraphics[width=0.9\textwidth]{figures/max4.png}
\caption{$\mathtt{Max}_4$ block}
\label{fig:max4}
\end{subfigure}%
\caption{Different max blocks used in $\mathtt{MAXIMUM}$ circuit}
\label{fig:maxBlocks}
\end{figure}
\noindent
{\bf Max blocks:}
There are four types of $\mathtt{Max}$ blocks used to compute the maximum: $\mathtt{Max_1}$, $\mathtt{Max_2}$, $\mathtt{Max_3}$ and $\mathtt{Max_4}$ (see Fig.~\ref{fig:maxBlocks}). The blocks are made different to handle the extreme cases. These blocks use $\mathtt{COMP}$ and $\mathtt{MUX}$ blocks (see Section~\ref{ss:SecureComputations}).
\noindent
{\bf NSS blocks:}
Each $\mathtt{NSS}$ block has four inputs: $\bar{s}_i$, $r_i$, $a_i$ and $r''_i$. The inputs $r_i$ and $r''_i$ come from the CS, while $\bar{s}_i$ and $a_i$ come from the PS. The block first subtracts $r_i$ from $\bar{s}_i$ using a $\mathtt{SUB}$ block to get the score $s_i$. Then, using a $\mathtt{SUB}'$ block, it computes the flag bit that tells whether the vertex is adjacent to the queried vertex.
A $\mathtt{MUL}$ block (see Fig~\ref{fig:mul}) is used in the $\mathtt{NSS}$ block, as shown in Fig.~\ref{fig:nss}, to set the score $s_i$ to zero if the vertex is adjacent and to keep it unchanged otherwise.
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[scale=0.12]{figures/nss.png}
\caption{$\mathtt{NSS}$ block}
\label{fig:nss}
\end{subfigure}
\begin{subfigure}[t]{0.44\textwidth}
\centering
\includegraphics[scale=0.23]{figures/mul.png}
\caption{$\mathtt{MUL}$ block}
\label{fig:mul}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[scale=0.22 ]{figures/sub2.png}
\caption{$\mathtt{SUB}'$ block}
\label{fig:sub2}
\end{subfigure}
\caption{A few circuit blocks}
\label{fig:fewBlocks}
\end{figure}
\noindent
{\bf Elimination of scores for adjacent vertices:}
It can be seen from the encryption that $\bar{s}_i = s_i + r_i$, where $s_i$ is the actual score corresponding to the $i$th row and $r_i$ randomizes the score. Each bit $r''_i$ indicates whether $r'_i$ is odd or even. On the other hand, each bit $a_i$ indicates whether the decrypted $\bar{a}_i$ is odd or even. Inequality of $r''_i$ and $a_i$ indicates that the vertex corresponding to the $i$th row is connected to the queried vertex. In that case, we set the score $s_i=0$.
The block $\mathtt{SUB}'$, in Fig.~\ref{fig:sub2}, outputs $1$ if they are equal and $0$ otherwise.
Since $(\bar{s}_i - r_i) $ gives the score, the $\mathtt{SUB}$ block (see Section~\ref{ss:SecureComputations}) is used in $MGC$ to compute the scores, where the PS supplies $\bar{s}_i$ and the CS supplies $r_i$.
Note that $\mathtt{SUB}'$ subtracts only one bit, which is very efficient.
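\medskip
\noindent
Putting these pieces together, the plaintext behaviour of one $\mathtt{NSS}$ block can be sketched as follows (a minimal Python sketch; variable names follow the text above):
\begin{verbatim}
def nss(s_bar_i, r_i, a_i, r_pp_i):
    s_i = s_bar_i - r_i                 # SUB: remove the CS's mask r_i
    keep = 1 if a_i == r_pp_i else 0    # SUB': 1 iff parities agree,
                                        # i.e., the vertex is NOT adjacent
    return s_i * keep                   # MUL: zero out adjacent vertices
\end{verbatim}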
\subsection{Security Analysis} In $\mathtt{SLP}$-$\mathtt{III}$, though the PS has almost no leakage, the CS has slightly more leakage than in $\mathtt{SLP}$-$\mathtt{I}$. This extra leakage occurs when it interacts with the PS through the OT protocol to provide the encodings corresponding to the inputs of the PS. Since OT is secure and does not leak any meaningful information, we can ignore this leakage. In $\mathtt{SLP}$-$\mathtt{III}$,
the leakage $ \mathcal{L} =(\mathcal{L}_{bld}, \mathcal{L}_{qry})$ is the same as in $\mathtt{SLP}$-$\mathtt{I}$.
\begin{theorem}
If $\mathtt{BGN}$ is semantically secure and $F$ is a PRP, then $\mathtt{SLP}$-$\mathtt{III}$ is $\mathcal{L} $-secure against adaptive chosen-query attacks.
\end{theorem}
\begin{proof}
The proof is the same as that of Theorem~\ref{th:security1}.
\end{proof}
\subsection{Basic Queries}
All three schemes support basic queries, which include the neighbor query, the vertex degree query, and the adjacency query.
\medskip \noindent{\bf Neighbor query:}
Given a vertex, the neighbor query returns the set of vertices adjacent to it.
It is to be noted that, since we have encrypted the adjacency matrix of the graph, it is enough for the client to get the decrypted row corresponding to the queried vertex.
To query the neighbors of a vertex $v$, the client generates $\tau_{v}= (i', s)$ as in Algo.~\ref{algo4:TrapdoorGen} and sends it to the CS. The CS permutes the entries of row $i'$ and sends the permuted row $\hat{m} \gets (m_{\pi_s (1) },m_{\pi_s (2) }, \ldots, m_{\pi_s (N) })$ to the PS. The PS decrypts them and sends the decrypted vector $(a_1, a_2, \ldots, a_N)$ to the client. The client applies the inverse permutation to the positions whose entries are 1. Here, the CS learns only the queried vertex and the PS learns only the degree of the vertex.
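\medskip
\noindent
The client-side recovery can be sketched as follows (a minimal Python sketch; \texttt{inv\_perm} is an illustrative, 0-indexed representation of $\pi_s^{-1}$):
\begin{verbatim}
def neighbors(decrypted_row, inv_perm):
    # Positions holding a 1 are mapped back through the inverse
    # permutation to obtain the neighbor identifiers.
    return [inv_perm[i] for i, a in enumerate(decrypted_row) if a == 1]
\end{verbatim}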
\medskip \noindent{\bf Vertex degree query:}
To query the degree of a vertex $v$, the client similarly sends $\tau_{v}= i'$ to the CS. The CS computes the encrypted degree as $m \gets \prod_{i = 1}^{N} m_{i'i}$ and sends $m$ to the proxy. The proxy decrypts $m$ and sends the result to the client.
The permutation key $s$ is not needed, since no row entries have to be permuted.
Here, the degree is leaked to the PS, which can be prevented by randomizing the result: the CS randomizes the encrypted degree and sends the randomization secret to the client, and the client obtains the degree by subtracting the randomization from the value returned by the PS.
Alternatively, this leakage can be avoided easily, without randomizing the encrypted degree, if the client performs the decryption itself.
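\medskip
\noindent
In plaintext terms, the randomized variant amounts to the following (a minimal Python sketch; homomorphically, multiplying the row ciphertexts corresponds to summing the underlying entries):
\begin{verbatim}
def masked_degree(row, r):       # what the PS ends up decrypting
    return sum(row) + r          # degree blinded by the CS's secret r

def client_degree(masked, r):    # client removes the blinding
    return masked - r
\end{verbatim}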
\medskip \noindent{\bf Adjacency Query:}
Given two vertices, the adjacency query (edge query) tells whether there is an edge between them. If the client wants to perform an adjacency query for the pair of vertices $v_1$ and $v_2$, it sends $(i'_1, i'_2)$ (as generated in Algo.~\ref{algo4:TrapdoorGen}) to the CS. The CS returns $m_{{i'_1}{ i'_2}}$. The client can either get the randomized result via the PS or decrypt $m_{{i'_1}{ i'_2}}$ by itself.
\section{Performance Analysis} \label{sec:PerformanceAnalysis}
In this section, we discuss the efficiency of our proposed schemes. The efficiency is measured in terms of computation and communication complexities, together with the storage requirements and the allowed leakages. A summary is given in Table~\ref{tab:comparison}.
Since there is no prior work on secure link prediction, we have not compared the complexities of our schemes with those of other encrypted computation schemes.
\subsection{Complexity analysis}
Let the graph be $G= (V,E)$ and $N = |V|$. Let each $\mathtt{BGN}$ encryption output a $\rho$-bit string. We describe the complexities below.
\medskip
\noindent {\bf Leakage Comparison:}\label{ss:LeakageComparison}
As Table~\ref{tab:comparison} shows, each scheme leaks the same amount of information to the CS, namely the number of vertices of the graph and the query trapdoors. However, none of the schemes leaks information about the edges of the graph to the CS.
In $\mathtt{SLP}$-$\mathtt{I} $, since the PS has the power to decrypt the scores, it gets to know $ S_{v} = \{score(v, u): u \in V\}$.
In contrast, $\mathtt{SLP} $-$ \mathtt{II}$ reveals only a subset $ S'_{v} $ of $ S_{v} $, and $\mathtt{SLP}$-$\mathtt{III}$ manages to hide all scores from the PS. $\mathtt{SLP}$-$\mathtt{I}$ cannot hide the scores from the PS, which results in the maximum leakage to the PS.
\begin{table}[!htbp]
\centering
\caption{Complexity Comparison Table} \label{tab:comparison}
\vspace{6pt}
\resizebox{\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|}\hline
Param & Entity & $\mathtt{SLP}$-$\mathtt{I}$ & $\mathtt{SLP}$-$\mathtt{II}$ & $\mathtt{SLP}$-$\mathtt{III}$ \\ \hline \hline
Leakage & CS & $|V|$, $\tau_{v_1},\tau_{v_2},\ldots$ & $|V|$, $\tau_{v_1},\tau_{v_2},\ldots$ & $|V|$, $\tau_{v_1},\tau_{v_2},\ldots$ \\ \cline{2-5}
& PS & $S_{v},i_{res}$ & $S'_{v},i_{res}$& $i_{res}$ \\
\hline
& client & $\lambda $ bits& $\lambda$ bits& $\lambda$ bits\\ \cline{2-5}
Storage & CS & $|V|^2\rho$ bits& $2|V|^2\rho$ bits& $|V|^2\rho$ bits\\ \cline{2-5}
& PS & $\rho $ bits& $\rho $ bits& $\rho$ bits\\
\hline
& client& $|V|^2(\mathsf{M}+\mathsf{A})$ & $|V|^2(\mathsf{M}+\mathsf{A}+\mathsf{M_1}+\mathsf{A_1})$ & $|V|^2(\mathsf{M}+\mathsf{A})$ \\ \cline{2-5}
Compu- & CS & $|V|^2$ $\mathsf{P}$ + $|V|$ $\mathsf{E}$ & $|V|^2$ $\mathsf{P}$ + & $|V|^2$ $\mathsf{P}$ + $4|V|$ $\mathsf{E}$ \\
tation& & + ($|V|^2+ |V|$) $\mathsf{M}$ & ($|V|^2+ 2|V|$) $\mathsf{M}$ & + ($|V|^2+ 3|V|$) $\mathsf{M}$ +\\
& & & & $MGC_{const}{(\log |V|,|V|)}$\\ \cline{2-5}
& PS & $|V|\log|V| (\mathsf{M+C}+\mathsf{M_1+C_1})$ & $|V| (\mathsf{M_1+C_1})$ + & $|V| (\mathsf{M+C}+\mathsf{M_1+C_1})$ + \\
& & + $|V|\log|V| \mathsf{C}$ & $|V| \log|V| \mathsf{C}$ & $MGC_{eval}{(\log |V|,|V|)}$ \\
\hline
& client$\rightarrow$CS & $|V|^2 \rho$ bits& $2 |V|^2 \rho$ bits & $|V|^2\rho$ bits \\ \cline{2-5}
Commu- & CS$\rightarrow$PS& $2|V|\rho$ bits & $|V|\rho$ bits & $2|V|\rho$ bits + $|V|OT ^{(\log |V| +1)}_{snd}$+ \\
nication& & & & $MGC_{size}{(\log |V|,|V|)}$ bits \\ \cline{2-5}
& PS$\rightarrow$CS& - & - & $|V|OT ^{(\log |V| +1)}_{rcv}$\\ \cline{2-5}
& PS$\rightarrow$client & $\log |V|$ bits&$2|V| \log |V|$ bits& $\log |V|$ bits\\ \hline
\end{tabular}
}
\begin{flushleft}
$S_{v}$ - Set of scores of $v$ with all other vertices,
$S'_{v} $- a subset of $ S_{v}$,
$\rho $- length of elements in $\mathbb{G}$ or $\mathbb{G}_1$,
$\mathsf{C}$- comparison in $\mathbb{G}$,
$\mathsf{C_1}$- comparison in $\mathbb{G}_1$,
$\mathsf{M}$- multiplication in $\mathbb{G}$,
$\mathsf{M_1}$- multiplication in $\mathbb{G}_1$,
$\mathsf{E}$- exponentiation in $\mathbb{G}$,
$\mathsf{E_1}$- exponentiation in $\mathbb{G}_1$,
$\mathsf{P}$- pairing/ bilinear map computation,
$MGC_{size}{(\log |V|,|V|)}$- size of $MGC$ with $|V|$ $\log |V|$-bit inputs,
$MGC_{const}{(\log |V|,|V|)}$- $MGC$ construction with $|V|$ $\log |V|$-bit inputs,
$MGC_{eval}{(\log |V|,|V|)}$- $MGC$ evaluation with $|V|$ $\log |V|$-bit inputs,
$OT ^{(\log |V| +1)}_{snd}$- information to send for $(\log |V|+1) $-bit $OT$,
$OT ^{(\log |V| +1)}_{rcv}$- information to receive for $(\log |V|+1) $-bit $OT$.
\end{flushleft}
\end{table}
\medskip
\noindent {\bf Storage Requirement:} \label{ss:StorageRequirement}
One of the major goals of a secure link prediction scheme is that the client should require very little storage. All our designed schemes have a very low storage requirement for the client: it has to store only a key of $\lambda$ bits. In all schemes, the PS stores only a part of the secret key, which is also $\lambda$ bits.
In $\mathtt{SLP}$-$\mathtt{I}$, the CS is required to store $|V|^2\rho$ bits for the structure $T$, while the PS stores only the secret key.
In reducing the leakage, $\mathtt{SLP}$-$\mathtt{II}$ doubles the CS storage. $\mathtt{SLP}$-$\mathtt{III}$, however, requires the same amount of storage as $\mathtt{SLP}$-$\mathtt{I}$.
\medskip
\noindent {\bf Computation Complexity:} \label{ss:ComputationComplexity}%
In all schemes, the client computes $|V|^2$ $\mathtt{BGN}$ encryptions to encrypt $A$, while $\mathtt{SLP}$-$\mathtt{II}$ additionally computes $|V|^2$ encryptions for $B$. To compute each of the $|V|$ encrypted scores, the CS requires $|V|$ bilinear map ($e$) computations and $|V|$ multiplications.
Additionally, $\mathtt{SLP}$-$\mathtt{I}$ randomizes the encrypted entries of the queried row, which requires $|V|$ exponentiations and $|V|$ multiplications. $\mathtt{SLP}$-$\mathtt{II}$ randomizes the encrypted scores, which requires $|V|$ multiplications, and computes the encrypted degree of the queried vertex, which requires another $|V|$ multiplications. Apart from the computation of the encrypted scores, in $\mathtt{SLP}$-$\mathtt{III}$, the CS computes a garbled circuit $MGC$.
In all schemes, the PS decrypts $|V|$ scores, each decryption requiring $\log|V|$ multiplications on average. To find the vertex with the maximum score, in $\mathtt{SLP}$-$\mathtt{I}$ and $\mathtt{SLP}$-$\mathtt{II}$, the PS compares $|V|$ numbers. The $|V|$ encrypted entries are decrypted by the PS in $\mathtt{SLP}$-$\mathtt{I}$ and $\mathtt{SLP}$-$\mathtt{III}$. In addition, the PS evaluates the garbled circuit $MGC$ in $\mathtt{SLP}$-$\mathtt{III}$.
\medskip
\noindent {\bf Communication Complexity:} \label{ss:CommunicationComplexity}
To upload the encrypted matrices, $\mathtt{SLP}$-$\mathtt{I}$ and $\mathtt{SLP}$-$\mathtt{III}$ require $ |V|^2\rho$ bits and $\mathtt{SLP}$-$\mathtt{II}$ requires $ 2|V|^2\rho$ bits of communication. To query, the client sends only the trapdoor, of size approximately $2\rho$ bits.
The CS sends $2|V|$ entries to the PS in the case of $\mathtt{SLP}$-$\mathtt{I}$ and $\mathtt{SLP}$-$\mathtt{III}$; for $\mathtt{SLP}$-$\mathtt{II}$, the CS sends only $|V|$ entries. Each of these entries is of $\rho$ bits. In addition, in $\mathtt{SLP}$-$\mathtt{III}$, the CS sends the garbled circuit $MGC$.
PS-to-CS communication happens only when the PS evaluates $MGC$.
For $\mathtt{SLP}$-$\mathtt{I}$ and $\mathtt{SLP}$-$\mathtt{III}$, the PS sends only $i_{res}$, which is of $\log |V|$ bits, to the client. In $\mathtt{SLP}$-$\mathtt{II}$, however, the PS sends $2|V| \log |V|$ bits to the client.
\medskip
\noindent {\bf Complexity for GC Computation:}
It can be observed that $\log |V|$-bit $\mathtt{SUB}$, $1$-bit $\mathtt{SUB}'$, $\log |V|$-bit $\mathtt{MUL}$, $\log |V|$-bit $\mathtt{COMP}$ and $\log |V|$-bit $\mathtt{MUX}$ blocks consist of ($4\log |V|$ XOR-gates and $\log |V|$ AND-gates), ($4$ XOR-gates and $1$ AND-gate), ($\log |V|$ AND-gates), ($3\log |V|$ XOR-gates and $\log |V|$ AND-gates) and ($2\log |V|$ XOR-gates and $\log |V|$ AND-gates) respectively. Thus, $\log |V|$-bit $\mathtt{NSS}$ and $\log |V|$-bit $\mathtt{Max}$ blocks consist of ($(4\log |V|+4)$ XOR-gates and $(2\log |V|+1)$ AND-gates) and ($7\log |V|$ XOR-gates and $3\log |V|$ AND-gates) respectively.
In our designed garbled circuit $MGC$, there are $(|V|-1)$ $\mathtt{Max}$ blocks and $|V|$ $\mathtt{NSS}$ blocks. Thus, $MGC$ requires $|V|(11\log |V|+4)$ XOR-gates and $|V|(5\log |V|+1)$ AND-gates. However, the PS receives $|V|(\log |V| +1)$ bits through OT for the first layer.
Thus, $MGC_{size}{(\log |V|,|V|)}$ is the size of $|V|(11\log |V|+4)$ XOR-gates and $|V|(5\log |V|+1)$ AND-gates, whereas
$MGC_{const}{(\log |V|,|V|)}$ and $MGC_{eval}{(\log |V|,|V|)}$ are the computational costs to construct and to evaluate it, respectively.
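\medskip
\noindent
These tallies are easy to check mechanically (a minimal Python sketch; the bit width $\ell$ is a parameter, with $\ell=\log|V|$ in the complexity table and $\ell=257$ in the estimate of Section~\ref{ComputationCost}):
\begin{verbatim}
import math

def mgc_gates(n, l=None):
    # XOR/AND totals for |V| = n vertices and l-bit inputs, from
    # |V| NSS blocks and (|V|-1) Max blocks as counted above.
    if l is None:
        l = math.ceil(math.log2(n))
    return n * (11 * l + 4), n * (5 * l + 1)

# mgc_gates(1000, 257) == (2831000, 1286000)
\end{verbatim}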
\section{Experimental Evaluation} \label{sec:ExperimentalEvaluation}
In this section, the experimental evaluations of our designed schemes, $\mathtt{SLP}$-$\mathtt{I}$ and $\mathtt{SLP}$-$\mathtt{II}$, are presented.
In our experiment, we have used a single machine for both the client and the server. All data has been assumed to reside in main memory. The machine has an Intel Core i7-4770 CPU with 8 cores operating at 3.40GHz. It is equipped with 8GB RAM and runs an Ubuntu 16.04 LTS 64-bit operating system. The open-source PBC~\cite{PBC} library has been used in our implementation to support $\mathtt{BGN}$. The code is available in the repository \cite{slpImp}.
\subsection{Datasets}
For our experiment, we have used real-world datasets, taken from the \emph{SNAP datasets}~\cite{snapnets}. The collection consists of various kinds of real-world network data, including social networks, citation networks, collaboration networks, web graphs, etc.
\begin{table}[ht]
\caption{Detail of the graph datasets} \label{tab:dataset} \centering
\begin{tabular}{l|r|r}
\textbf{Dataset Name} & \textbf{\#Nodes} & \textbf{\#Edges} \\ \hline \hline
bitcoin-alpha & 3,783 & 24,186 \\ \hline
ego-facebook & 4,039 & 88,234 \\ \hline
email-Enron & 36,692 & 183,831 \\ \hline
email-Eu-core & 1,005 & 25,571 \\ \hline
Wiki-Vote & 7,115 & 103,689 \\
\end{tabular}
\end{table}
For our experiment, we have considered the undirected graph datasets-
\emph{bitcoin-alpha},
\emph{ego-Facebook},
\emph{Email-Enron},
\emph{email-Eu-core} and
\emph{Wiki-Vote}.
The number of nodes and the edges of the graphs are shown in Table~\ref{tab:dataset}.
Instead of the full graphs, their subgraphs have been considered. For each subgraph, a fixed number of vertices, taken in order of identifier, has been chosen from the dataset, together with the edges joining them. For example, for a subgraph of 1000 vertices, the vertices with identifier $<1000$ have been taken.
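The extraction rule can be sketched as follows (a minimal Python sketch over an edge list; names are illustrative):
\begin{verbatim}
def extract_subgraph(edges, k):
    # Keep only the vertices with identifier < k and the edges
    # joining two such vertices.
    return [(u, v) for (u, v) in edges if u < k and v < k]
\end{verbatim}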
\subsection{Experiment Results}
In our experiment, five datasets have been taken. For each dataset, the experiment has been run on extracted subgraphs with 50 to 1000 vertices, in increments of 50. The number of edges in the subgraphs is shown in Fig.~\ref{fig:subgraphInfo}.
For the pairing, prime pairs of 128, 256, and 512 bits are taken.
\begin{figure}[htbp]
\centering
{\includegraphics[width=0.48\textwidth]{figures/edge_barChart.pdf}}
\caption{Number of vertices and edges of the subgraphs}
\label{fig:subgraphInfo}
\end{figure}
In our proposed schemes, the most expensive operation for the client is encrypting the matrix ($\mathtt{EncMatrix}$). For the cloud and the proxy, computing the scores ($\mathtt{LPQuery}$) and finding the maximum-scored vertex ($\mathtt{FindMaxVertex}$) are the most expensive operations, respectively. Hence, throughout this section, we mainly discuss these three operations.
Since, in the proposed protocols, encrypting each entry of the adjacency matrix is the main operation of the encryption phase, the number of edges does not affect the encryption time for either $\mathtt{SLP}$-$\mathtt{I}$ or $\mathtt{SLP}$-$\mathtt{II}$: in every SLP scheme, the number of operations is independent of the number of edges.
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.33\textwidth}
\includegraphics[width=0.98\textwidth]{figures/client_plot.pdf}
\caption{Encryption time taken\\ by the client }
\label{fig:compClient}
\end{subfigure}%
\begin{subfigure}[t]{0.33\textwidth}
\centering
\includegraphics[width=0.98\textwidth]{figures/cloud_plot.pdf}
\caption{Encrypted score computation times}
\label{fig:compCloud}
\end{subfigure}
\begin{subfigure}[t]{0.33\textwidth}
\includegraphics[width=0.98\textwidth]{figures/proxy_plot.pdf}
\caption{Score decryption and sorting times}
\label{fig:compProxy}
\end{subfigure}%
\caption{Comparison between $\mathtt{SLP}$-$\mathtt{I}$ and $\mathtt{SLP}$-$\mathtt{II}$ w.r.t. computation time when the primes are of 128 bits each}
\label{fig:compSLPs}
\end{figure}
Similarly, the time required by the cloud to compute the scores is independent of the number of edges and depends only on the number of entries in the adjacency matrix, i.e., $N^2$.
The time taken for each of the operations is shown in Fig.~\ref{fig:compSLPs}. In the figure, we have compared the times for both $\mathtt{SLP}$-$\mathtt{I}$ and $\mathtt{SLP}$-$\mathtt{II}$ taking 128-bit primes.
However, the time taken by the proxy to decrypt the scores depends on the number of vertices. In $\mathtt{SLP}$-$\mathtt{I}$, the proxy has to decrypt $|V|$ entries in $\mathbb{G}$ as well as $|V|$ scores in $\mathbb{G}_1$, whereas in $\mathtt{SLP}$-$\mathtt{II}$ it decrypts only $|V|$ scores in $\mathbb{G}_1$. So the proxy takes more time in $\mathtt{SLP}$-$\mathtt{I}$ than in $\mathtt{SLP}$-$\mathtt{II}$. This can be observed in Fig.~\ref{fig:compProxy}.
\begin{figure}[!htbp]
\centering
{\includegraphics[width=0.32\textwidth]{figures/comp_128_slp2_proxy.pdf}}
\caption{Time taken by the proxy in $\mathtt{SLP}$-$\mathtt{II}$ for different datasets considering 128-bit primes}
\label{fig:comp_128_slp2_proxy1}
\end{figure}
For a query in $\mathtt{SLP}$-$\mathtt{II}$, the proxy decrypts scores only for the vertices that are not adjacent to the queried vertex. So, only in this case does the computational time depend on the number of edges in the graph: as the edge density increases, the computational time tends to decrease. In Fig.~\ref{fig:comp_128_slp2_proxy1}, we compare the computational time taken by the proxy in $\mathtt{SLP}$-$\mathtt{II}$ for the different datasets.
In the above figures, we have considered only 128-bit primes. It can be observed from the experiment that the computational time depends on the security parameter: as we increase the size of the primes, the computational time grows rapidly. We have compared the change in computational time for the client, the cloud, and the proxy, for both $\mathtt{SLP}$-$\mathtt{I}$ and $\mathtt{SLP}$-$\mathtt{II}$ (see Fig.~\ref{fig:compTimeSLP1} and Fig.~\ref{fig:compTimeSLP2}, respectively).
In practice, since the security level is fixed in advance, keeping it as low as acceptable improves the performance.
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.33\textwidth}
\centering
\fbox{\includegraphics[width=0.9\textwidth]{figures/slp1_client1.pdf}}
\caption{Client time in $\mathtt{SLP}$-$\mathtt{I}$}
\label{fig:slp1_client1}
\end{subfigure}%
\begin{subfigure}[t]{0.33\textwidth}
\centering
\fbox{\includegraphics[width=0.9\textwidth]{figures/slp1_cloud1.pdf}}
\caption{Cloud time in $\mathtt{SLP}$-$\mathtt{I}$}
\label{fig:slp1_cloud1}
\end{subfigure}
\begin{subfigure}[t]{0.33\textwidth}
\centering
\fbox{\includegraphics[width=0.9\textwidth]{figures/slp1_proxy1.pdf}}
\caption{Proxy time in $\mathtt{SLP}$-$\mathtt{I}$}
\label{fig:slp1_proxy1}
\end{subfigure}%
\caption{Computational time in $\mathtt{SLP}$-$\mathtt{I}$ with 128, 256 and 512-bit primes}
\label{fig:compTimeSLP1}
\end{figure}
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.33\textwidth}
\centering
\fbox{\includegraphics[width=0.9\textwidth]{figures/slp2_client1.pdf}}
\caption{Client time in $\mathtt{SLP}$-$\mathtt{II}$}
\label{fig:slp2_client1}
\end{subfigure}%
\begin{subfigure}[t]{0.33\textwidth}
\centering
\fbox{\includegraphics[width=0.9\textwidth]{figures/slp2_cloud1.pdf}}
\caption{Cloud time in $\mathtt{SLP}$-$\mathtt{II}$}
\label{fig:slp2_cloud1}
\end{subfigure}
\begin{subfigure}[t]{0.33\textwidth}
\centering
\fbox{\includegraphics[width=0.9\textwidth]{figures/slp2_proxy1.pdf}}
\caption{Proxy time in $\mathtt{SLP}$-$\mathtt{II}$}
\label{fig:slp2_proxy1}
\end{subfigure}%
\caption{Computational time in $\mathtt{SLP}$-$\mathtt{II}$ with 128, 256 and 512-bit primes}
\label{fig:compTimeSLP2}
\end{figure}
\subsection{Estimation of computational cost in $\mathtt{SLP}$-$\mathtt{III}$ } \label{ComputationCost}
In the previous section, we presented the experimental results for $\mathtt{SLP}$-$\mathtt{I}$ and $\mathtt{SLP}$-$\mathtt{II}$. In this section, we estimate the computational cost of $\mathtt{SLP}$-$\mathtt{III}$.
The encryption algorithm of $\mathtt{SLP}$-$\mathtt{III}$ is the same as that of $\mathtt{SLP}$-$\mathtt{I}$, so both require the same encryption time for the same dataset.
To estimate query time, we have considered a random graph with $10^3$ vertices.
\medskip
\noindent
{\bf Query Time:} In $\mathtt{SLP}$-$\mathtt{III}$, the cloud computes the encrypted scores and the proxy decrypts the scores as well as the random numbers. The number of decryptions in each group is the same as in $\mathtt{SLP}$-$\mathtt{I}$. However, $\mathtt{SLP}$-$\mathtt{III}$ additionally requires a garbled-circuit computation. For this, $1000$ OTs at 128-bit ECC security are required, which take approximately $138\times1000$ ms $= 138$ s (\cite{AsharovL0Z13,NaorP01}). In addition, the PS evaluates the GC with $1000\times(11\times257+4)=2831000$ XOR-gates and $1000\times(5\times257+1)=1286000$ AND-gates. Assuming that the encryption used in the GC is AES (128-bit), the GC evaluation requires 2 AES decryptions per gate and the CS requires 8 encryptions per gate. As reported in \cite{benchmarkLink}, AES requires 0.57 cycles per byte. Thus, for evaluation on a single-core processor, the PS requires $(2\times(1286000\times256/8)\times0.57)$ cycles $= 46913280$ cycles, which take $(46913280/(2.5\times10^9))= 0.019$ s. Similarly, the CS requires 0.078 s to construct the GC.
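\medskip
\noindent
The back-of-the-envelope arithmetic above can be reproduced as follows (a minimal Python sketch; the constants are the ones quoted in the text and in \cite{benchmarkLink}):
\begin{verbatim}
AND_GATES = 1286000   # AND-gates of MGC for |V| = 1000, 257-bit inputs
CPB       = 0.57      # AES cycles per byte (cited benchmark)
CLOCK     = 2.5e9     # single-core 2.5 GHz processor

def gc_seconds(aes_ops_per_gate):
    cycles = aes_ops_per_gate * (AND_GATES * 256 / 8) * CPB
    return cycles / CLOCK

print(gc_seconds(2))  # PS evaluation:   ~0.019 s
print(gc_seconds(8))  # CS construction: ~0.075 s (quoted as 0.078 s)
\end{verbatim}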
The estimated costs are measured with respect to a single-core 2.5 GHz processor. In practice, however, the CS provides a large number of multi-core processors. Since all the computations can be performed in parallel, the query cost can be reduced dramatically: each of the above-mentioned costs $cost$ can be improved to roughly $cost/p$ with $p$ processors.
\section {Introduction to $\mathtt{SLP}_k$}\label{sec:slpk}
Let us define another variant of the secure link prediction problem, $\mathtt{SLP}_k$. Instead of returning the vertex with the highest score, an $\mathtt{SLP}_k$ scheme returns the indices of the $k$ top-scored vertices.
Let a graph $G = (V,E)$ be given. The \emph{top-$k$ link prediction problem} states that, given a vertex $v \in V$, it returns a set of vertices $\{u_1, u_2, \ldots, u_k \}$ such that each $score(v,u_i) $ is among the top-$k$ elements of $S_v$.
A top-$k$ link prediction scheme is said to be secure, i.e., to be a secure top-$k$ link prediction scheme ($\mathtt{SLP}_k$), if the servers do not obtain any meaningful information about $G$ from its encryption or from a sequence of queries.
Our proposed schemes $\mathtt{SLP}$-$\mathtt{I}$ and $\mathtt{SLP}$-$\mathtt{II}$ can be extended to support $\mathtt{SLP}_k$ queries. In $\mathtt{SLP}$-$\mathtt{I}$, the only change is that, instead of returning only the index of the vertex with the highest score, the proxy returns the indices of the top-$k$ highest scores to the client.
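\medskip
\noindent
A minimal sketch of the modified proxy step (Python; \texttt{sorted\_scores} is an illustrative name for the descending list of $(score, index)$ pairs the proxy already computes):
\begin{verbatim}
def top_k_indices(sorted_scores, k):
    # Return the first k indices instead of only the maximum one.
    return [i for (_, i) in sorted_scores[:k]]
\end{verbatim}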
\section{Conclusion} \label{sec:Conclusion}
In this paper, we have introduced the secure link prediction problem and discussed its security. We have presented three constructions of SLP.
The first proposed scheme, $\mathtt{SLP}$-$\mathtt{I}$, has the lowest computational time but the maximum leakage to the proxy. The second one, $\mathtt{SLP}$-$\mathtt{II}$, reduces the leakage by randomizing the scores; however, it suffers from a high communication cost from the proxy to the client. The third scheme, $\mathtt{SLP}$-$\mathtt{III}$, has minimal leakage to the proxy. Though the garbled circuit helps to reduce the leakage, it increases the communication and computational costs of the cloud and proxy servers.
The performance analysis shows that the schemes are practical. We have implemented prototypes of the first two schemes and measured their performance by experimenting with different real-life datasets. We have also estimated the cost of $\mathtt{SLP}$-$\mathtt{III}$. In the future, we want to build a library that supports multiple queries, including the neighbor query, edge query, degree query, link prediction query, etc.
It is to be noted that computation without privacy and security is far cheaper; the degradation in performance is the price of security.
Throughout the paper, we have considered unweighted graphs. As future work, the schemes can be extended to weighted graphs.
Moreover, we have initiated the secure link prediction problem considering only the number of common neighbors as the score metric. As future work, we will consider other metrics, such as the Jaccard coefficient, Adamic/Adar, preferential attachment, Katz$_\beta$, etc., and compare the efficiency of each.
\section*{Acknowledgments} We thank Gagandeep Singh and Sameep Mehta of IBM India research for their initial valuable comments on this work.
\section{Introduction}
\label{introduction}
Non-leptonic decays of hyperons (NLDH) can be understood by a
mechanism~\cite{LE-6129} due to non-perturbative flavor and parity mixings in
the physical hadrons and the intervention of the strong-interaction Yukawa
hamiltonian.
In Ref.~\cite{LE-6129} we have shown that this mechanism leads to the
predictions of the $|\Delta I|=1/2$ rule~\cite{marshak} for these decays, as
well as to numerical values of the so-called parity-conserving $B$ amplitudes
which are in good agreement with experiment.
If the new Yukawa coupling constants (YCC), which appear in the so-called
parity-violating $A$ amplitudes~\footnote{
We remind the reader that in the approach of a priori mixings in hadrons both
$A$ and $B$ amplitudes are actually parity and flavor conserving.
}
are assumed to have the same magnitudes as their ordinary counterparts, which appear in the $B$'s, then predictions for the $A$'s are obtained that also agree well with experiment.
All the YCC are constrained by their experimental values; the only free
parameters are the flavor and parity mixing angles $\sigma$, $\delta$, and
$\delta'$ that appear in physical hadrons.
We have referred to these angles as a priori mixing angles~\cite{LE-6129}, in
order to distinguish them from the perturbative ones, which must originate from
the intervention of the $W^\pm_\mu$ bosons.
In this paper we shall perform a detailed quantitative analysis, which was only
sketched in Ref.~\cite{LE-6129}.
Our main purpose will be not only to reproduce the experimental values of the
$A$'s and $B$'s but to establish as reliably as possible the values of $\delta$,
$\delta'$, and $\sigma$ in NLDH.
In Ref.~\cite{LE-6129} there was little space for this latter task.
Such values are crucial to be able to proceed with the research program
discussed in the last section of Ref.~\cite{LE-6129}, namely, once their
values are determined in some type of decays they will be useful to test
their expected universality-like property in another type of decays.
In Sec.~\ref{formulas} we shall reproduce the expressions predicted for the
$A$ and $B$ amplitudes by the a priori mixings in hadrons approach.
In Sec.~\ref{data} the available experimental evidence that will be used in our
analysis will be discussed.
In Sec.~\ref{b} we shall study the $B$ amplitudes and the determination of
the a priori mixing angle $\sigma$ that accompanies them.
The $A$ amplitudes and their angles $\delta$ and $\delta'$ will be discussed in
Sec.~\ref{a}.
The simultaneous determination of the three angles will be considered in
Sec.~\ref{ab}.
The violation of the $|\Delta I|=1/2$ rule due to the breaking of isospin
invariance and its implications upon the values of $\sigma$, $\delta$, and
$\delta'$ will be studied in Sec.~\ref{sb}.
The last section is reserved for discussions and conclusions.
\section{A Priori Mixing Expressions for the $A$ and $B$ amplitudes}
\label{formulas}
For the sake of completeness and to introduce our notation, we shall reproduce
the expressions for the so-called parity-violating and parity-conserving $A$
and $B$ amplitudes, respectively, obtained if a priori flavor
and parity mixings exist in physical hadrons and the transition operator is the
strong-interaction Yukawa hamiltonian $H_Y$, namely,
\[
A_1
=
\delta'
\sqrt{\frac{3}{2}} g^{{}^{p,sp}}_{{}_{n,p\pi^-}} +
\delta
(
g^{{}^{s,ss}}_{{}_{\Lambda,pK^-}} - g^{{}^{s,pp}}_{{}_{\Lambda,\Sigma^+\pi^-}}
)
,
\]
\[
A_2
=
-\frac{1}{\sqrt{2}}
[
-\delta'
\sqrt{3}g^{{}^{p,sp}}_{{}_{n,n\pi^0}} +
\delta
(
g^{{}^{s,ss}}_{{}_{\Lambda,n\bar{K}^0}} -
\sqrt{3} g^{{}^{s,pp}}_{{}_{\Lambda,\Lambda\pi^0}} -
g^{{}^{s,pp}}_{{}_{\Lambda,\Sigma^0\pi^0}}
)
]
,
\]
\[
A_3
=
\delta
(
g^{{}^{s,ss}}_{{}_{\Sigma^-,nK^-}} +
\sqrt{\frac{3}{2}} g^{{}^{s,pp}}_{{}_{\Sigma^-,\Lambda\pi^-}} +
\frac{1}{\sqrt{2}} g^{{}^{s,pp}}_{{}_{\Sigma^-,\Sigma^0\pi^-}}
)
,
\]
\begin{equation}
A_4
=
-\delta'
g^{{}^{p,sp}}_{{}_{p,n\pi^+}} +
\delta
(
\sqrt{\frac{3}{2}} g^{{}^{s,pp}}_{{}_{\Sigma^+,\Lambda\pi^+}} +
\frac{1}{\sqrt{2}} g^{{}^{s,pp}}_{{}_{\Sigma^+,\Sigma^0\pi^+}}
)
,
\label{aes}
\end{equation}
\[
A_5
=
- \delta'
g^{{}^{p,sp}}_{{}_{p,p\pi^0}} -
\delta
(
\frac{1}{\sqrt{2}} g^{{}^{s,ss}}_{{}_{\Sigma^+,p\bar{K}^0}} +
g^{{}^{s,pp}}_{{}_{\Sigma^+,\Sigma^+\pi^0}}
)
,
\]
\[
A_6
=
\delta'
g^{{}^{p,sp}}_{{}_{\Sigma^-,\Lambda\pi^-}} +
\delta
(
g^{{}^{s,ss}}_{{}_{\Xi^-,\Lambda K^-}} +
\sqrt{\frac{3}{2}} g^{{}^{s,pp}}_{{}_{\Xi^-,\Xi^0\pi^-}}
)
,
\]
\[
A_7
=
\frac{1}{\sqrt{2}}
[
\delta'
(
\sqrt{3} g^{{}^{p,sp}}_{{}_{\Lambda,\Lambda\pi^0}} +
g^{{}^{p,sp}}_{{}_{\Sigma^0,\Lambda\pi^0}}
)
+
\delta
(
- g^{{}^{s,ss}}_{{}_{\Xi^0,\Lambda\bar{K}^0}} +
\sqrt{3} g^{{}^{s,pp}}_{{}_{\Xi^0,\Xi^0\pi^0}}
)
]
,
\]
\noindent
and
\[
B_1
=
\sigma
(
- \sqrt{\frac{3}{2}} g_{{}_{n,p\pi^-}} +
g_{{}_{\Lambda,pK^-}} - g_{{}_{\Lambda,\Sigma^+\pi^-}}
)
,
\]
\[
B_2
=
-\frac{1}{\sqrt{2}}
\sigma
(
\sqrt{3}g_{{}_{n,n\pi^0}} +
g_{{}_{\Lambda,n\bar{K}^0}} -
\sqrt{3} g_{{}_{\Lambda,\Lambda\pi^0}} -
g_{{}_{\Lambda,\Sigma^0\pi^0}}
)
,
\]
\[
B_3
=
\sigma
(
g_{{}_{\Sigma^-,nK^-}} +
\sqrt{\frac{3}{2}} g_{{}_{\Sigma^-,\Lambda\pi^-}} +
\frac{1}{\sqrt{2}} g_{{}_{\Sigma^-,\Sigma^0\pi^-}}
)
,
\]
\begin{equation}
B_4
=
\sigma
(
g_{{}_{p,n\pi^+}} +
\sqrt{\frac{3}{2}} g_{{}_{\Sigma^+,\Lambda\pi^+}} +
\frac{1}{\sqrt{2}} g_{{}_{\Sigma^+,\Sigma^0\pi^+}}
)
,
\label{bes}
\end{equation}
\[
B_5
=
\sigma
(
g_{{}_{p,p\pi^0}} -
\frac{1}{\sqrt{2}} g_{{}_{\Sigma^+,p\bar{K}^0}} -
g_{{}_{\Sigma^+,\Sigma^+\pi^0}}
)
,
\]
\[
B_6
=
\sigma
(
- g_{{}_{\Sigma^-,\Lambda\pi^-}} +
g_{{}_{\Xi^-,\Lambda K^-}} +
\sqrt{\frac{3}{2}} g_{{}_{\Xi^-,\Xi^0\pi^-}}
)
,
\]
\[
B_7
=
\frac{1}{\sqrt{2}}
\sigma
(
- \sqrt{3} g_{{}_{\Lambda,\Lambda\pi^0}} -
g_{{}_{\Sigma^0,\Lambda\pi^0}} -
g_{{}_{\Xi^0,\Lambda\bar{K}^0}} +
\sqrt{3} g_{{}_{\Xi^0,\Xi^0\pi^0}}
)
.
\]
\noindent
The subindices $1,\dots,7$ correspond to
$\Lambda\rightarrow p\pi^-$,
$\Lambda\rightarrow n\pi^0$,
$\Sigma^-\rightarrow n\pi^-$,
$\Sigma^+\rightarrow n\pi^+$,
$\Sigma^+\rightarrow p\pi^0$,
$\Xi^-\rightarrow \Lambda\pi^-$,
and
$\Xi^0\rightarrow \Lambda\pi^0$,
respectively.
The YCC in the $B$'s are the ordinary ones, while the YCC in the $A$'s are new
ones.
In the latter, the upper indices serve as a reminder of the parities of the
parity eigenstates involved.
$\delta$, $\delta'$, and $\sigma$ are the mixing angles that appear in the a
priori mixed hadrons.
We remind the reader that these physical hadrons are obtained, given our
current inability to compute well with QCD, following the ansatz discussed in
Ref.~\cite{LE-6129}.
The a priori mixed hadrons thus obtained and used to obtain Eqs.~(\ref{aes})
and (\ref{bes}) are
\[
K^+_{ph} =
K^+_{0p} - \sigma \pi^+_{0p} - \delta' \pi^+_{0s}
+ \cdots
,
\]
\[
K^0_{ph} =
K^0_{0p} +
\frac{1}{\sqrt{2}} \sigma \pi^0_{0p} + \frac{1}{\sqrt{2}} \delta' \pi^0_{0s}
+ \cdots
,
\]
\begin{equation}
\pi^+_{ph} =
\pi^+_{0p} + \sigma K^+_{0p} - \delta K^+_{0s}
+ \cdots
,
\label{mph}
\end{equation}
\[
\pi^0_{ph} =
\pi^0_{0p} -
\frac{1}{\sqrt{2}} \sigma ( K^0_{0p} + \bar{K}^0_{0p} ) +
\frac{1}{\sqrt{2}} \delta ( K^0_{0s} - \bar{K}^0_{0s} )
+ \cdots
,
\]
\[
\pi^-_{ph} =
\pi^-_{0p} + \sigma K^-_{0p} + \delta K^-_{0s}
+ \cdots
,
\]
\[
\bar{K}^0_{ph} =
\bar{K}^0_{0p} + \frac{1}{\sqrt{2}} \sigma \pi^0_{0p} -
\frac{1}{\sqrt{2}} \delta'\pi^0_{0s}
+ \cdots
,
\]
\[
K^-_{ph} =
K^-_{0p} - \sigma \pi^-_{0p} + \delta' \pi^-_{0s}
+ \cdots
,
\]
\noindent
and,
\[
p_{ph} =
p_{0s} - \sigma \Sigma^+_{0s} - \delta \Sigma^+_{0p}
+ \cdots
,
\]
\[
n_{ph} =
n_{0s} +
\sigma ( \frac{1}{\sqrt{2}} \Sigma^0_{0s} + \sqrt{\frac{3}{2}} \Lambda_{0s}) +
\delta ( \frac{1}{\sqrt{2}} \Sigma^0_{0p} + \sqrt{\frac{3}{2}} \Lambda_{0p} )
+ \cdots
,
\]
\[
\Sigma^+_{ph} =
\Sigma^+_{0s} + \sigma p_{0s} - \delta' p_{0p}
+ \cdots
,
\]
\begin{equation}
\Sigma^0_{ph} =
\Sigma^0_{0s} +
\frac{1}{\sqrt{2}} \sigma ( \Xi^0_{0s}- n_{0s} ) +
\frac{1}{\sqrt{2}} \delta \Xi^0_{0p} + \frac{1}{\sqrt{2}} \delta' n_{0p}
+ \cdots
,
\label{bph}
\end{equation}
\[
\Sigma^-_{ph} = \Sigma^-_{0s} + \sigma \Xi^-_{0s} + \delta \Xi^-_{0p}
+ \cdots
,
\]
\[
\Lambda_{ph} =
\Lambda_{0s} +
\sqrt{\frac{3}{2}} \sigma ( \Xi^0_{0s}- n_{0s} ) +
\sqrt{\frac{3}{2}} \delta \Xi^0_{0p} +
\sqrt{\frac{3}{2}} \delta' n_{0p}
+ \cdots
,
\]
\[
\Xi^0_{ph} =
\Xi^0_{0s} -
\sigma
( \frac{1}{\sqrt{2}} \Sigma^0_{0s} + \sqrt{\frac{3}{2}} \Lambda_{0s} ) +
\delta'
( \frac{1}{\sqrt{2}} \Sigma^0_{0p} + \sqrt{\frac{3}{2}} \Lambda_{0p} )
+ \cdots
,
\]
\[
\Xi^-_{ph} =
\Xi^-_{0s} - \sigma \Sigma^-_{0s} + \delta' \Sigma^-_{0p}
+ \cdots
.
\]
\noindent
The dots stand for other mixings not used in obtaining Eqs.~(\ref{aes}) and
(\ref{bes}).
The subindices naught, $s$, and $p$ mean flavor, positive, and negative parity
eigenstates, respectively.
The main qualitative and already semi-quantitative result obtained with the
above expressions for the $A$'s and $B$'s is the prediction of the
$|\Delta I|=1/2$ rule for NLDH, when $H_Y$ is assumed to be an isospin $SU(2)$
invariant operator.
In this symmetry limit, as discussed in detail in Ref.~\cite{LE-6129}, one
obtains the equalities~\cite{dumbrajs}
\begin{equation}
A_2 = - \frac{1}{\sqrt{2}} A_1, \ \ \ \ \ \
A_5 = \frac{1}{\sqrt{2}} ( A_4 - A_3 ), \ \ \ \ \ \
A_7 = \frac{1}{\sqrt{2}} A_6,
\label{sla}
\end{equation}
\noindent
and
\begin{equation}
B_2 = - \frac{1}{\sqrt{2}} B_1, \ \ \ \ \ \
B_5 = \frac{1}{\sqrt{2}} ( B_4 - B_3 ), \ \ \ \ \ \
B_7 = \frac{1}{\sqrt{2}} B_6.
\label{slb}
\end{equation}
The $SU(2)$ symmetry limit of the YCC leads to the equalities
$g_{{}_{p,p\pi^0}}=-g_{{}_{n,n\pi^0}}=g_{{}_{p,n\pi^+}}/{\sqrt 2}
=g_{{}_{n,p\pi^-}}/{\sqrt 2}$,
$g_{{}_{\Sigma^+,\Lambda\pi^+}}=g_{{}_{\Sigma^0,\Lambda\pi^0}}
=g_{{}_{\Sigma^-,\Lambda\pi^-}}$,
$g_{{}_{\Lambda,\Sigma^+\pi^-}}=g_{{}_{\Lambda,\Sigma^0\pi^0}}$,
$g_{{}_{\Sigma^+,\Sigma^+\pi^0}}=-g_{{}_{\Sigma^+,\Sigma^0\pi^+}}
=g_{{}_{\Sigma^-,\Sigma^0\pi^-}}$,
$g_{{}_{\Sigma^0,pK^-}}=g_{{}_{\Sigma^-,nK^-}}/{\sqrt 2}
=g_{{}_{\Sigma^+,p\bar K^0}}/{\sqrt 2}$,
$g_{{}_{\Lambda,pK^-}}=g_{{}_{\Lambda,n\bar K^0}}$,
$g_{{}_{\Xi^0,\Xi^0\pi^0}}=g_{{}_{\Xi^-,\Xi^0\pi^-}}/{\sqrt 2}$,
$g_{{}_{\Xi^-,\Lambda K^-}}=-g_{{}_{\Xi^0,\Lambda \bar K^0}}$,
and
$g_{{}_{\Lambda,\Lambda \pi^0}}=0$.
It is these equalities that lead to Eqs.~(\ref{slb}) when they are used in
Eqs.~(\ref{bes}).
Similar relations are valid within each set of upper indices, e.\ g.,
$g^{{}^{p,sp}}_{{}_{p,p\pi^0}}=-g^{{}^{p,sp}}_{{}_{n,n\pi^0}}$, etc.\ when
$SU(2)$ symmetry is applied to the new YCC.
The equalities thus obtained lead to Eqs.~(\ref{sla}) when they are used in
Eqs.~(\ref{aes}).
In the next section we shall discuss the available experimental evidence on
NLDH, on the YCC, and the relevance of the signs of the $A$ and $B$ amplitudes.
\section{Experimental data and the signs of the $A$ and $B$ amplitudes}
\label{data}
The experimental data~\cite{pdg} on the seven NLDH we are concerned with here
come in the form of decay rates $\Gamma_i$ and spin asymmetries $\alpha_i$ and
$\gamma_i$ ($i=1,\dots,7$)~\footnote{
We find it convenient to use the $\gamma_i$-asymmetries, instead of the angle
$\phi_i$.
The experimental correlation pointed out in Ref.~\cite{pdg} has a
negligible effect in our analysis.
}.
These data are listed in Table~\ref{tablai}.
Absorbing certain kinematical and overall factors, these observables take the
particularly simple forms $\Gamma_i=S^2_i+P^2_i$,
$\alpha_i=2S_iP_i/(S^2_i+P^2_i)$, and $\gamma_i=(S^2_i-P^2_i)/(S^2_i+P^2_i)$.
We shall ignore final state interactions and assume $CP$-invariance; thus, each
amplitude is real.
$S_i$ are proportional to the $A_i$ and $P_i$ are proportional to the $B_i$.
It is also customary to quote experimental values for all the amplitudes.
This we do too in Table~\ref{tablai}.
However, the determination of the signs of the amplitudes requires a detailed
discussion.
In a plane whose cartesian axes correspond to $S_i$ and $P_i$, $\Gamma_i$
represents a circumference and $\alpha_i$ a hyperbola.
There are four intersections between these two curves.
These four solutions are such that one is equal to another one up to an overall
sign; so there are actually only two solutions up to such overall signs.
In addition one of the two solutions becomes the other one by interchanging the
magnitudes of $S_i$ with $P_i$ (or of $A_i$ with $B_i$).
The role of $\gamma_i$ is to determine the relative magnitudes between $S_i$
and $P_i$ (or between $A_i$ and $B_i$).
Their relative sign is fixed by $\alpha_i$.
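Explicitly, inverting the definitions above gives
\[
S^2_i = \frac{\Gamma_i}{2}\,(1+\gamma_i),
\qquad
P^2_i = \frac{\Gamma_i}{2}\,(1-\gamma_i),
\qquad
\mathrm{sign}(S_iP_i)=\mathrm{sign}(\alpha_i),
\]
so that $\Gamma_i$ and $\gamma_i$ determine $|S_i|$ and $|P_i|$, while
$\alpha_i$ fixes only the relative sign.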
Therefore, the relative sign and the relative magnitudes between $A_i$ and
$B_i$ are unique, but their overall signs cannot be experimentally
determined.
We have freedom to choose the overall signs, but Eqs.~(\ref{sla}) and (\ref{slb})
impose many restrictions, which are very important because they are predictions
independent of the particular values of the YCC and the a priori mixing angles.
They are part of the predictions of the $|\Delta I|=1/2$ rule.
For definiteness we shall assign the overall signs to the $B_i$ amplitudes.
Since there are seven $B_i$ amplitudes and two signs, we have $2^7$
possibilities.
However, the relative signs between $B_1$ and $B_2$, $B_4$ and $B_5$, and $B_6$
and $B_7$ are fixed by Eqs.~(\ref{slb}) and the fact that $|B_4|$,
$|B_5|\gg|B_3|$.
They are required to be negative, positive, and positive, respectively.
Our choice is then limited to $2^4=16$ possibilities.
There is still another limitation.
Eqs.~(\ref{sla}) and the fact that $|A_3|$, $|A_5|\gg|A_4|$ require that the
relative sign between $A_3$ and $A_5$ be opposite.
In addition, from Table~\ref{tablai}, we see that $\alpha_3<0$ and thus that
the relative sign between $B_3$ and $A_3$ must be negative.
Therefore, $B_3$ and $A_5$ must have the same sign and since $B_5$ has the same
sign as $B_4$, the relative sign between $B_3$ and $B_4$ is fixed to be the
same as the relative sign between $B_5$ and $A_5$, i.\ e., as the sign of
$\alpha_5$.
Since $\alpha_5<0$, the sign between $B_3$ and $B_4$ must be negative.
Clearly, we are left with only $2^3=8$ possibilities, out of the initial $2^7$.
These eight possibilities we shall apply to $B_1$, $B_3$, and $B_6$.
So, for example, if we choose $B_1>0$, $B_3<0$, and $B_6<0$, then we have fixed
$B_2<0$, $B_4>0$, $B_5>0$, and $B_7<0$.
Then, from the above discussion we also have $A_3>0$, $A_5<0$.
Knowing that $\alpha_1>0$, $\alpha_4>0$, and $\alpha_6<0$ we are forced to take
$A_1>0$, $A_4>0$, and $A_6>0$.
Finally the signs of $A_2<0$ and $A_7>0$ are fixed by Eqs.~(\ref{sla}).
Proceeding this way we can form the remaining seven choices.
All the sign possibilities are collected in Table~\ref{tablaii}.
Notice that since the relative signs between $A_1$ and $A_2$ and $B_1$ and $B_2$
are fixed by Eqs.~(\ref{sla}) and (\ref{slb}) (both negative), once the
relative sign between $A_1$ and $B_1$ is fixed experimentally by $\alpha_1$,
the relative sign between $A_2$ and $B_2$ is also fixed and it is fixed to be
the same as the sign of $\alpha_1$.
That is, irrespective of the above freedom to choose overall signs,
Eqs.~(\ref{sla}) and (\ref{slb}) predict that the sign of $\alpha_2$ must be
the same as that of $\alpha_1$.
Analogous remarks apply to $A_6$, $A_7$, $B_6$, and $B_7$:
Eqs.~(\ref{sla}) and (\ref{slb}) predict that the sign of $\alpha_7$ must be
the same as that of $\alpha_6$.
These predictions are very general, they are independent of the particular
values of $\delta$, $\delta'$, and $\sigma$ and of the particular isospin
symmetry-limit values of the YCC that may appear in Eqs.~(\ref{aes}) and
(\ref{bes}).
In Table~\ref{tablai}, we can verify that these two predictions are indeed
experimentally confirmed.
To close this section let us list in Table~\ref{tablaiii}, for easy later
reference, the experimental values of the ordinary YCC currently available.
Only the squares of five couplings are quoted in Ref.~\cite{dumbrajs}, but we
shall need two more, $g_{{}_{\Xi^0,\Xi^0\pi^0}}$ and
$g_{{}_{\Xi^-,\Lambda K^-}}$.
Also, their relative signs are important.
Inasmuch as strong-flavor $SU_3$, broken as it is, is a reliable symmetry, we
shall assume that the relative signs are fixed by this symmetry.
Along these lines, we shall then assume that $g_{{}_{\Xi^0,\Xi^0\pi^0}}$ and
$g_{{}_{\Xi^-,\Lambda K^-}}$ can be estimated by their $SU_3$ relationship, but
assign to them an error bar allowing for variations of some $30\%$ around such
values used as central values.
The values entered into Table~\ref{tablaiii} are normalized to the pion-nucleon
YCC (assumed to be positive).
The data of Table~\ref{tablai} should be expected to be very reliable; they
have been obtained through many experiments which have shown very acceptable
agreement with one another.
The experimental values of the YCC of Table~\ref{tablaiii} may not be so
stable.
They are model dependent to an extent which is difficult to assess and the
attempts to determine them have not always been free of controversy.
It should not be surprising that these data show in future determinations some
important changes.
However, it should be emphasized that they show reasonable consistency with
broken strong $SU_3$ symmetry at the expected 20--30$\%$ level.
It is probably this last remark that provides the best line of judgement in
their use.
\section{Determination of the $B$ amplitudes and the angle $\sigma$}
\label{b}
The predictions for the so-called parity conserving
amplitudes in the a priori mixed hadron approach, given in
Eqs.~(\ref{bes}), require that the several YCC that appear in them be
identified with the ordinary ones determined in strong-interaction physics.
They are not free parameters; they are constrained by the current
experimental values displayed in Table~\ref{tablaiii}.
In contrast, the angle $\sigma$ remains unconstrained.
We do not have any theoretical argument which could help us fix it or even
loosely bound it.
We must leave it as a free parameter and extract its value from the comparison
of Eqs.~(\ref{bes}) with their counterparts in Table~\ref{tablai}.
As discussed in the last section, the phases of the $B$'s cannot be determined,
but out of all the possible choices only the eight ones displayed in
Table~\ref{tablaii} turned out to be acceptable.
We cannot tell in advance which of these eight choices can be reproduced by
Eqs.~(\ref{bes}), so we must try them all.
It turns out that four of them are the best reproduced.
We collect in Table~\ref{tablaiv} all the predictions of Eqs.~(\ref{bes}) in
these four choices, along with the values of the YCC and the angle $\sigma$.
Let us discuss the results obtained.
The experimental values of the $B$'s are very well reproduced.
The YCC come out quite reasonably close to the values of Table~\ref{tablaiii}.
Most importantly, all the $SU_3$ signs are reproduced.
A very interesting feature is that the value of the only free parameter
$\sigma$ remains quite stable along the four choices of signs of the
experimental $B$'s.
The results obtained are good enough to conclude that a priori mixings in
hadrons not only yield the $|\Delta I|=1/2$ rule predictions for the parity
conserving amplitudes of NLDH, but also provide a very good framework for their
detailed description in terms of only one free parameter.
\section{
Determination of the $A$ amplitudes and the angles $\delta$ and $\delta'$
}
\label{a}
The predictions of a priori mixings in hadrons for the so-called parity
violating amplitudes $A$ are given in Eqs.~(\ref{aes}).
New YCC are involved in them and this is indicated by the indices $s$ and $p$
attached.
Although Eqs.~(\ref{aes}) provide a framework, we face a practical difficulty.
Due to our current inability to compute well with QCD, we are unable to obtain
the theoretical values of these new YCC and accordingly we must leave them as
free parameters in order to reproduce the experimental $A$'s of
Table~\ref{tablai}.
However, this is not good enough because there are more parameters than $A$
amplitudes.
If we try this latter way we simply learn nothing and we cannot determine the
angles $\delta$ and $\delta'$.
If we want to proceed, we must introduce constraints on these YCC by making
educated guesses.
Since QCD is assumed to be common to the positive and negative parity quarks of
the ansatz we have used for guidance, one may expect that the new YCC are
somehow related to the ordinary ones.
Specifically, one may reasonably expect that the magnitudes of the new YCC are
the same as the magnitudes of the ordinary ones.
Their signs may differ, however.
We shall impose these constraints on the new YCC.
Although this way they are no longer free parameters, we must still face many
possibilities, since there are two signs to be chosen for each one of the new
YCC.
Therefore, we should perform a systematic analysis allowing for each possible
choice of relative signs between the new and the ordinary sets of YCC.
This analysis presents no essential difficulty although it is a tedious one.
The results of this analysis are very interesting.
Not all of the choices are allowed.
As a matter of fact, most of them are ruled out, but still many of the choices
remain possible, one out of every five.
That is, out of the initial 256 possibilities only about 50 remain.
We shall not display them all but we shall mention their most important
features.
In each one of these 50 or so possibilities the $A$'s are always reasonably well
reproduced (with the eight overall signs of Table~\ref{tablaii} taken into
consideration) and the magnitudes obtained for the new YCC come out also close
to the experimental magnitudes of Table~\ref{tablaiii}, but the most important
result is that the values of the $\delta$ and $\delta'$ angles show a
remarkable systematics: they always come out in either one of two groups.
They either take values around
$\delta = 0.10\times 10^{-6}$
and
$\delta' = 0.04\times 10^{-6}$
or around
$\delta = 0.15\times 10^{-6}$
and
$\delta' = 0.30\times 10^{-6}$.
The main conclusion of this analysis is, then, that (i) the $A$'s can be
reproduced in detail and (ii) the possible values of $\delta$ and $\delta'$ are
reduced to only two sets.
Even if it is not necessary to display all of the many cases of the above
analysis, it is convenient to show some of them.
This we do in Table~\ref{tablav}.
The cases displayed will be quite relevant in what folows.
The predictions for the $A$'s should be compared with the corresponding
experimental ones in Table~\ref{tablaii} and the YCC should be compared with
the values in Table~\ref{tablaiii}.
One can see in these comparisons that the results obtained are quite acceptable
and, therefore, that Eqs.~(\ref{aes}) can describe the $A$ amplitudes fairly
well when the new YCC are constrained in their magnitudes by the values of
Table~\ref{tablaiii}.
One can also appreciate that $\delta$ and $\delta'$ are determined within two
sets of values, in accordance with their systematic behavior mentioned before.
\section{
Predictions for the experimental observables and simultaneous determination of
$\delta$, $\delta'$, and $\sigma$
}
\label{ab}
Predicting the $A$ and $B$ amplitudes separately is really an intermediate
step; one must go further and predict the complete collection of
experimental observables of Table~\ref{tablai}.
This represents a substantially more stringent test of Eqs.~(\ref{aes}) and
(\ref{bes}) as we shall presently see.
We have seen in the last sections that there seem to be many solutions for
Eqs.~(\ref{aes}) and (\ref{bes}) to describe NLDH.
These many solutions arise in the choices for the overall signs of the $A$'s and
$B$'s and are increased by the free relative signs between the new and the
ordinary YCC.
In addition, although the new YCC were constrained in their magnitudes by the
experimental values of Table~\ref{tablaiii}, the error bars allow small
differences between the magnitudes of YCC in going from $A$ to $B$ amplitudes.
This can be observed by comparing the corresponding entries in
Tables~\ref{tablaiv} and \ref{tablav}.
Therefore, the strict equality of the magnitudes of the new and ordinary YCC
can only be enforced by reproducing the complete set of experimental
observables.
This should then reduce appreciably the number of solutions found in the
previous sections.
We must then perform both of the systematic searches of Secs.~\ref{b} and
\ref{a} simultaneously while using all the data for the observables $\Gamma_i$,
$\alpha_i$, and $\gamma_i$ of Table~\ref{tablai}.
After performing this analysis, a very important result is obtained: the best
description of experimental data is reduced to three cases.
Most of the possibilities for the $B$'s and $A$'s are ruled out and a few
remain which are not too bad but are no longer as good as they appeared at
first.
These three cases are displayed in Tables~\ref{tablavi} and \ref{tablavii}.
In these three cases the experimental data are very well reproduced and the
YCC are also very well reproduced in the first two cases (from left to right
in Table~\ref{tablavii}), while in the third case (to the right of
Table~\ref{tablavii}) one can observe some variations which are not
negligible.
This last observation, however, must be taken with care, since as we remarked
at the end of Sec.~\ref{data}, the experimental values of Table~\ref{tablaiii}
may change in the future because the experimental determination of the YCC is
quite difficult.
With this in mind, we find that the third case is acceptable.
The stability of the three values obtained for
$\sigma$ must be pointed out.
It must also be pointed out that $\delta$ and $\delta'$ still fall into either
one of the two sets found in Sec.~\ref{a}.
The a priori mixing angles are fairly well determined whether one uses the
experimental amplitudes or the experimental observables.
\section{
Violations of $|\Delta I|=1/2$ rule predictions through $SU(2)$ symmetry
breaking
}
\label{sb}
It is well known that experimentally the predictions of the $|\Delta I|=1/2$
rule are not exact.
In the case of a priori mixings in hadrons the violations of these predictions
will come by the breaking of the $SU(2)$ strong-flavor symmetry, which was
introduced by assuming that the Yukawa hamiltonian was an $SU(2)$ scalar.
It is, therefore, necessary to explore the effect of such breaking.
In this section we shall let the YCC that appear in Eqs.~(\ref{aes}) and
(\ref{bes}) differ from their $SU(2)$ symmetry limit.
However, we shall allow for only small differences from this limit by
constraining such changes to remain at the few percent level.
Stronger variations will not be considered.
This analysis leads to Tables~\ref{tablaviii} and \ref{tablaix}.
In going through Tables~\ref{tablaviii} and \ref{tablaix} and after comparing
them with Tables~\ref{tablavi} and \ref{tablavii}, respectively, one can
observe that changes of a few percent from the $SU(2)$ symmetry limit of the
YCC allow the predictions of Eqs.~(\ref{aes}) and (\ref{bes}) to describe the
experimental data even better than before.
The YCC that go into these predictions come out very reasonable.
Also the variations observed in the third case (from left to right) in
Table~\ref{tablavii} are milder now.
The main effect of these small changes is seen in the error bars of the a priori
mixing angles, especially in the second and third cases.
We must conclude that the determination of the a priori mixing angles is
affected by $SU(2)$ breaking corrections and we should avoid overestimating the
values obtained for them in NLDH.
One must then be cautious and quote the values of $\sigma$, $\delta$, and
$\delta'$ affected by $SU(2)$ breaking, namely, those of Table~\ref{tablaix}.
\section{Discussions and conclusions}
\label{conclusions}
Throughout the last four sections we have performed a very detailed analysis of
the ability of the a priori mixings in hadrons to describe NLDH.
This description comes in four levels.
First, there are very general features which cover the predictions of the
$|\Delta I|=1/2$ rule.
They are independent of the values of the a priori mixing angles and of the
YCC and they will be violated only by $SU(2)$ symmetry breakings.
Since these breakings are very small, these predictions are quite accurate, as
is experimentally the case.
Second, the so-called parity-conserving $B$ amplitudes must
be described using the values of the YCC observed in the strong-interactions of
hyperons and mesons.
Only one free parameter remains.
The results obtained for the $B$ amplitudes are very good, as was discussed in
Sec.~\ref{b}.
No new assumptions had to be introduced.
Third, in order to describe the so-called parity-violating $A$
amplitudes one has to introduce new assumptions because new
YCC are involved.
It is reasonable to expect that these YCC are not completely independent of the
ordinary YCC.
We have introduced an educated guess based on the original motivation that led
to the ansatz used to guide the practical implementation of a priori mixings
in hadrons.
Since QCD is common to both the positive and negative parity quarks used in our
ansatz, one may expect that the magnitudes of the new and ordinary YCC be the
same.
This still leaves the relative signs open.
We performed a thorough study in Sec.~\ref{a}, showing that the $A$'s can be
well described.
The two free angles $\delta$ and $\delta'$ showed a remarkable stability and
always stayed close to either one of two sets of values.
Fourth, the very many possibilities allowed at the third level are very much
reduced when trying to cover simultaneously all of the available data on the
experimental observables in NLDH.
The best cases are reduced to three and were displayed in the tables of
Sec.~\ref{ab}.
Since it is well known that the predictions of the $|\Delta I|=1/2$ rule are
not exact experimentally and in order to complete our analysis, in
Sec.~\ref{sb} we allowed for the presence of small $SU(2)$ violations in the
YCC.
This exercise taught us that one should be somewhat cautious when
determining the a priori mixing angles, but otherwise these
small violations are seen to improve even more the agreement between
Eqs.~(\ref{aes}) and (\ref{bes}) and experiment.
There is another point whose discussion we wanted to leave to the end of this
paper.
We did not commit ourselves in any way about the relative signs between
the new and the ordinary YCC.
In this respect, there are several interesting observations we wish to make.
Again, since QCD is assumed to be common to the positive and negative parity
quarks of our ansatz, one could expect that the mechanism that assigns
strong-flavors to positive-parity hyperons and negative-parity mesons be common
to negative-parity baryons and positive-parity mesons.
That is, it could be possible that the latter hadrons come in $SU(3)$ octets
too, albeit different octets from those of the former ones.
If this were to be the case then one could expect that certain relative signs of
the new YCC be the same as the relative signs of the corresponding ordinary
YCC.
We can distinguish three groups of the new YCC according to the indices
$(p,sp)$, $(s,ss)$, and $(s,pp)$ in Eqs.~(\ref{aes}).
One could expect that the relative signs of the new YCC within each one of
these groups be the same as the relative signs between the corresponding
ordinary YCC of Table~\ref{tablaiii}.
One can observe that this is indeed the case in the third solution of
Sec.~\ref{ab}.
This can be taken as an indication that it could make sense to go beyond the
assumption of only equating the magnitudes of the new and old YCC.
To close this paper let us make some comments about the values of the a priori
mixing angles.
Perhaps the most striking result of our detailed analysis is the relative
stability of the values obtained for them throughout all the cases considered.
The best values we have obtained for them are those of Table~\ref{tablaix}.
There we have three sets of values which, even if not quite unique, are
nevertheless very close to one another.
In view of the last remarks, one might be inclined to prefer the set of the
second solution in this table, namely,
\begin{eqnarray}
\sigma & = & (4.9\pm 2.0)\times 10^{-6}
\nonumber \\
|\delta| & = & (0.22\pm 0.09)\times 10^{-6}
\label{ang} \\
|\delta'| & = & (0.26\pm 0.09)\times 10^{-6}
\nonumber
\end{eqnarray}
The overall sign of the new YCC can be reversed and the new overall sign can be
absorbed into $\delta$ and $\delta'$.
This can be done partially in the group of such constants that accompanies
$\delta$ or in the group that accompanies $\delta'$ or in both.
Because of this, we have determined only the absolute values of $\delta$ and
$\delta'$.
In order to emphasize this fact we have inserted absolute value bars on
$\delta$ and $\delta'$ in Eq.~(\ref{ang}).
The relevance of the values of the a priori mixing angles lies in that a
crucial test of the whole a priori mixings in hadrons scheme is that these
angles must show a universality-like property.
This is essential for this scheme to serve as a serious framework to describe
the enhancement phenomenon observed in non-leptonic, weak radiative, and
rare-mode weak decays of hadrons.
\acknowledgments
The authors wish to acknowledge partial support by CONACyT (M\'exico).
\section{Introduction\label{intro}}
Widespread deployment of mobile sensors is expected to revolutionize
our ability to monitor and control physical environments. However, for these networks
to reach their full range of applicability they must be capable of
operating in uncertain and unstructured environments. Realizing
the full potential of networked sensor systems will require the development of protocols that are fully
distributed and adaptive in the face of persistent faults and time-varying,
unpredictable environments.
\ao{Our goal in this paper is to initiate the study of cooperative multi-agent learning by distributed
networks operating in unknown and changing environments, subject to faults and failures of
communication links. While our focus here is on the basic problem of learning an unknown vector,
we hope to contribute to the development of a broad theory of cooperative, distributed learning in such environments, with
the ultimate aim of designing sensor network protocols capable of adaptability. }
\ao{We will study a simple, local protocol for learning a vector from intermittent
measurements and evaluate its performance in terms of the number of nodes and the (time-varying) network structure.} Our
direct motivation is the problem of tracking a direction from chemical gradients. A network of
mobile sensors needs to move in a direction $\mu$ (understood as a vector on the unit circle), which none
of the sensors initially knows; however, intermittently some sensors are able to obtain a sample of $\mu$. The sensors
can observe the velocity of neighboring sensors but, as the sensors move, the set of neighbors of each sensor changes; moreover,
new sensors occasionally join the network and current sensors sometimes permanently leave the network. The challenge
is to design a protocol by means of which the sensors can adapt their velocities based on the measurements
of $\mu$ and observations of the velocities of neighboring sensors so that every node's velocity converges to $\mu$ as
fast as possible. This challenge is further complicated by the fact that all estimates of $\mu$ as well as all
observations of the velocities of neighbors
are assumed to be noisy.
We will consider a natural generalization of the problem, \ao{wherein we abandon the constraint that $\mu$ lies on the
unit circle} and instead consider the problem of learning an arbitrary vector $\mu$ by a network of mobile nodes subject to time-varying (and unpredictable) inter-agent connectivity, and intermittent, noisy measurements. \ao{We will be interested in the speed at which
local, distributed protocols are able to drive every node's estimate of $\mu$ to the correct value. We will be especially concerned with identifying the salient
features of network topology that result in good (or poor) performance.}
\subsection{Cooperative multi-agent learning}
We begin by formally stating the problem for a fixed number of nodes. We consider $n$ autonomous nodes engaged in the task of learning a vector $\mu \in \mathbb{R}^l$. At each time $t=0,1,2,\ldots$ we denote by $G(t)=(V(t),E(t))$ the graph of inter-agent communications at time $t$:
two nodes are connected by an edge in $G(t)$ if and only if they are able to exchange messages at time $t$. Note that by definition the graph $G(t)$ is undirected. If $(i,j) \in G(t)$ then we will say that $i$ and $j$ are neighbors at time $t$. We will adopt the convention that $G(t)$ contains no self-loops. We will assume the graphs $G(t)$ satisfy a standard condition of uniform connectivity over a long-enough
time scale: namely, there exists some
constant positive integer $B$ (unknown to any of the nodes) such that the graph sequence $G(t)$ is $B$-connected, i.e. the graphs $(\{1,\ldots,n\}, \bigcup_{t=kB+1}^{(k+1)B} E(t) )$ are connected for each
integer $k \geq 0$. \ao{Intuitively, the uniform connectivity condition means that once we take all the edges that have appeared between times $kB$ and $(k+1)B$, the graph is connected}.
Each node maintains an estimate of $\mu$; we will denote the estimate of node $i$ at time $t$ by $v_i(t)$. At time $t$, node $i$ can update $v_i(t)$ as a function of the noise-corrupted estimates $v_j(t)$ of its neighbors. We will use $o_{ij}(t)$ to denote the noise-corrupted estimate of the offset $v_j(t)-v_i(t)$ available to neighbor $i$ at time $t$: \[ o_{ij}(t) = v_j(t) - v_i(t) + w_{ij}(t) \] Here $w_{ij}(t)$ is a zero-mean random vector every entry of which has variance $(\sigma')^2$, and all $w_{ij}(t)$ are assumed to be independent of each other, as well as all other
random variables in the problem (which we will define shortly). These updates may be the result of a wireless message exchange or may come about as a result of sensing by each node. Physically, each node is usually able to sense (with noise) the relative difference $v_j(t) - v_i(t)$, for example if $v_i(t)$ represent
velocities and measurements by the agents are made in their frame of reference. Alternatively, it may be that nodes are able to measure the absolute
quantities $v_j(t), v_i(t)$ and then $w_{ij}(t)$ is the sum of the noises in these two measurements.
Occasionally, some nodes have access to a noisy measurement \[ \mu_i(t) = \mu + w_i(t),\] where $w_i(t)$ is a zero-mean random vector every entry of which has variance $\sigma^2$; we assume all vectors $w_i(t)$ are independent of each other and of all $w_{ij}(t)$. In this case, node $i$ incorporates this measurement into its updated estimate $v_i(t+1)$. We will refer to a time $t$ when at least one node has a measurement as a {\em measurement time}. For the rest of the paper, we will be making an assumption of uniform measurement speed, namely that fewer than $T$ steps pass between successive measurement times; more precisely, letting $t_k$ be the times when at least one node makes a measurement, we will assume that $t_1 = 1$ and $|t_{k+1} - t_k| < T$ for all positive integers $k$.
It is useful to think of this formalization in terms of our motivating scenario, which is a collection of nodes - vehicles, UAVs, mobile
sensors, or underwater gliders - which need to learn and follow a direction. Updated information about the direction arrives from time to time as one or more of the nodes takes measurements, and the nodes need a protocol by which they update their velocities $v_i(t)$ based on the measurements and observations of the velocities of neighboring nodes.
This formalization also describes the scenario in which a moving group of animals must all learn which way to go based on intermittent samples of a preferred direction and social interactions with near neighbors. An example is collective migration where high costs associated with obtaining measurements of the migration route suggest that the majority of individuals rely on the more accessible observations of the relative motion of their near neighbors when they update their own velocities \cite{gc10}.
\subsection{Our results\label{section:controllaw}\label{section:results}} We now describe the protocol which we analyze for the remainder of
this paper. If at time $t$ node $i$ does not have a measurement of $\mu$, it nudges its velocity in the direction of its neighbors:
\begin{equation} \label{nonmeasuringupdate} v_i(t+1) = v_i(t) + \frac{\Delta(t)}{4} \sum_{j \in N_i(t)} \frac{\aor{o_{ij}(t)}}{\max(d_i(t),d_j(t))},
\end{equation} where $N_i(t)$ is the set of neighbors of node $i$ at time $t$, $d_i(t)$ is the cardinality of $N_i(t)$, and $\Delta(t)$ is a stepsize
which we will specify later.
On the other hand, if node $i$ does have a measurement $\mu_i(t)$, it updates as
\begin{equation} \label{measuringupdate} v_i(t+1) = v_i(t) + \frac{\Delta(t)}{4} \left( \mu_i(t) - v_i(t) \right) + \frac{\Delta(t)}{4} \sum_{j \in N_i(t)} \frac{\aor{o_{ij}(t)}}{\max(d_i(t), d_j(t))}. \end{equation}
Intuitively, each node seeks to align its estimate $v_i(t)$ with both the measurements it takes and estimates of neighboring nodes. As nodes align with one another, information from each measurement slowly propagates throughout the system.
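For concreteness, the following minimal sketch (in Python) simulates the updates of Eq.~(\ref{nonmeasuringupdate}) and Eq.~(\ref{measuringupdate}). The fixed cycle graph, the noise levels, the rule that a single node measures at each step, and the stepsize exponent are all illustrative choices, not part of the protocol itself.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, l = 20, 2
mu = np.array([1.0, -0.5])          # the unknown vector to be learned
sigma, sigma_p = 0.5, 0.1           # measurement and link noise levels
v = rng.normal(size=(n, l))         # arbitrary initial estimates

for t in range(1, 5001):
    step = 1.0 / (t + 1) ** 0.5     # Delta(t) = 1/(t+1)^(1-eps), eps = 1/2
    v_new = v.copy()
    for i in range(n):
        for j in ((i - 1) % n, (i + 1) % n):   # fixed cycle: d_i = d_j = 2
            o_ij = v[j] - v[i] + sigma_p * rng.normal(size=l)
            v_new[i] += (step / 4) * o_ij / 2  # max(d_i, d_j) = 2
        if i == t % n:                          # one node measures per step
            mu_i = mu + sigma * rng.normal(size=l)
            v_new[i] += (step / 4) * (mu_i - v[i])
    v = v_new

print("Z(t) =", np.sum((v - mu) ** 2))          # variance after 5000 steps
\end{verbatim}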
Our protocol is motivated by a number of recent advances within the literature on multi-agent consensus. On the one hand, the weights we accord to neighboring nodes are based on Metropolis weights (first introduced within the context of multi-agent control in \cite{bdx04}) and are chosen because they lead to a tractable Lyapunov analysis as in \cite{noot09}. On the other hand, we introduce a stepsize $\Delta(t)$ which we will later choose to decay to zero with $t$ at an appropriate speed by analogy with the recent work on multi-agent optimization
\cite{no09, sn11, yns12}.
\aor{ The use of a stepsize $\Delta(t)$ is crucial for the system to be able to successfully learn the unknown vector $\mu$ with this scheme. Intuitively, as $t$ gets large, the nodes should avoid overreacting by changing their
estimates in response to every new noisy sample. Rather, the influence of every new sample on the estimates $v_1(t), \ldots, v_n(t)$ should decay with $t$: the
more information the nodes have collected in the past, the less they should be inclined to revise their estimates in response to a new sample. This is accomplished by ensuring that the influence of each successive new sample decays with the stepsize $\Delta(t)$.}
Our protocol is also motivated by models used to analyze collective decision making and collective motion in animal groups \cite{c05,lsnscl12}. Our time varying stepsize rule is similar to models of context-dependent interaction in which individuals reduce their reliance on social cues when they are progressing towards their target \cite{tnc09}.
We now proceed to set the background for our main result, which bounds the rate at which the estimates $v_i(t)$ converge to $\mu$. We first state a proposition which assures us that the estimates $v_i(t)$ do indeed converge to $\mu$ almost surely.
\bigskip
\begin{proposition} \label{thm:conv} If the stepsize $\Delta(t)$ is nonnegative, nonincreasing and satisfies
\[ \sum_{t=1}^{\infty} \Delta(t) = +\infty, ~~~~~~~ \sum_{t=1}^{\infty} \Delta^2(t) < \infty, ~~~~~~~~~ \sup_{t \geq 1} \frac{\Delta(t)}{\Delta(t+c)} < \infty ~~~~ \mbox{ for any integer } c \] then for any initial values
$v_1(0), \ldots, v_n(0)$, we have that with probability $1$ \[ \lim_{t \rightarrow \infty} v_i(t) = \mu ~~~~ \mbox{ for all } i. \]
\end{proposition}
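For concreteness, the stepsize family used later in the paper illustrates these conditions: for
\[ \Delta(t) = \frac{1}{(t+1)^{1-\epsilon}}, ~~~~ \epsilon \in [0, 1/2), \]
the first condition holds since $1-\epsilon \leq 1$, the second since $2(1-\epsilon) > 1$, and the third since $\Delta(t)/\Delta(t+c) = \left( (t+c+1)/(t+1) \right)^{1-\epsilon} \rightarrow 1$ as $t \rightarrow \infty$.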
We remark that this proposition may be viewed as a generalization of earlier results on leader-following and learning, which achieved similar conclusions either without the
assumptions of noise, or on fixed graphs, or with the assumption of a fixed leader (see \cite{jlm, leader1, leader2, leader3, angelia-asu-learn, bfh, moura7} as well as the related \cite{hj, carli2}). Our protocol is very much in the spirit of this earlier literature. All the previous protocols (as well as ours) may be thought of as consensus protocols driven by inputs, and we note there are a number of other possible variations on this
theme which can accomplish the task of learning the unknown vector $\mu$.
\ao{Our main result in this paper is a strengthened version of \aor{Proposition} \ref{thm:conv} which provides quantitative bounds on the rate at which convergence to $\mu$ takes place. We are particularly interested in the scaling of the convergence time with the number of nodes and with the combinatorics of the interconnection graphs $G(t)$. We will adopt the natural measure of how far we are from convergence, namely} the sum of the squared distances from the final limit: \[ Z(\aor{t}) = \sum_{i=1}^n ||v_i(t) - \mu||_2^2. \] We will refer to $Z(t)$ as the variance at time $t$.
Before we state our main theorem, we introduce some notation. First, we define the notion of the lazy Metropolis walk on an undirected graph: this is the random walk which moves from $i$ to $j$ with probability
$1/(4\max(d(i),d(j)))$ whenever $i$ and $j$ are neighbors. Moreover, given a random walk on a graph, the hitting time from $i$ to $j$ is defined to be the expected time until
the walk visits $j$ starting from $i$. We will use $d_{\rm max}$ to refer to the largest degree of any node in the sequence $G(t)$ and $M$ to refer to the
largest number of nodes that have a measurement at any one time; clearly both $d_{\rm max}$ and $M$ are at most $n$. Finally, $\lceil x \rceil$ denotes the smallest integer which is at least $x$, and recall that $l$ is the dimension of $\mu$ and all $v_i(t)$. With this notation in place, we now state our main result.
\bigskip
\begin{theorem} \label{mainthm} Let the stepsize be $\Delta(t)=1/(t+1)^{1-\epsilon}$ for some $\epsilon \in (0,1)$. Suppose each of the graphs $G(t)$ is connected and let $\mathcal{H}$ be the largest hitting time from any node to any node in a lazy
Metropolis walk on any of the graphs $G(t)$. If $t$ satisfies the lower bound
\begin{equation} \label{transient} t \geq 2T \left[ \frac{288T \mathcal{H} }{\epsilon} \ln \left( \frac{96T \mathcal{H} }{\epsilon} \right) \right]^{1/\epsilon}, \end{equation}
then we have the following decay bound on the expected variance:
\begin{equation} \label{eq:connected}
E[ Z(t) ~|~ v(1) ] \leq 15 {\cal H} T l \frac{M \sigma^2 + n T(\sigma')^2}{(t/T-1)^{1 - \epsilon}} + Z(1) e^{-\frac{(t/T-1)^\epsilon - 2}{24 \mathcal{H} T\epsilon}}. \end{equation}
In the general case when each $G(t)$ is not necessarily connected but the sequence $G(t)$ is $B$-connected, we have that if $t$ satisfies the lower bound
\[ t \geq 2 \max(T,B) \left[ \frac{384 n^2 d_{\rm max} \left( 1 + \max(T,B) \right) }{\epsilon} \ln \left( \frac{128 n^2 d_{\rm max} \left( 1 + \max(T,B) \right) }{\epsilon} \right) \right]^{1/\epsilon} \] then
we have the following decay bound on the expected variance: \begin{equation} \label{eq:general} E[ Z(t) ~|~ v(1) ] \leq 2 n^{2} d_{\rm max} \max(T,B) l \frac{ M \sigma^2 + 2n ( 1 + \max(T,B)) (\sigma')^2}{\left( t/\max(T,B) \right)^{1 - \epsilon}} + Z(1) e^{- \frac{\left(t/\max(T,B) \right)^\epsilon - 2}{32n^2 d_{\rm max} \left( 1 + \max(T,B) \right) \epsilon}}. \end{equation}
\end{theorem}
Our theorem provides a quantitative bound on the convergence time of the repeated alignment process of Eq. (\ref{nonmeasuringupdate}) and Eq. (\ref{measuringupdate}). We believe this is the first time a convergence time result has been demonstrated in the setting of time-varying (not necessarily connected) graphs,
intermittent measurements by possibly different nodes, and noisy communications among nodes. The convergence
time expressions are somewhat unwieldy, and we pause now to discuss some of their features.
First, observe that the convergence times are a sum of two terms: the first which decays with $t$ as $O(1/t^{1-\epsilon})$ and the second which decays as
$O(e^{-t^{\epsilon}})$ (here $O$-notation hides all terms that do not depend on $t$). In the limit of large $t$, the second will be negligible and we may focus
our attention solely on the first. Thus our finding is that it is possible to achieve a nearly linear decay with time by picking a stepsize $1/(t+1)^{1-\epsilon}$ with
$\epsilon$ close to zero.
Moreover, examining Eq. (\ref{eq:general}), we find that for every choice of $\epsilon \in (0,1)$, the scaling with the number of nodes $n$ is polynomial. In addition, in analogy to some recent work on consensus \cite{noot09}, better convergence time bounds are available when the largest degree of any node is small. This is somewhat counter-intuitive since higher degrees are associated with improved connectivity. A plausible intuitive explanation for this mathematical
phenomenon is that low degrees ensure that the influence of new measurements on nodes does not get repeatedly diluted in the update process.
Furthermore, while it is possible to obtain a nearly linear decay with the number of iterations $t$ as we just noted, such a
choice blows up the bound on the transient period before the asymptotic decay bound kicks in. Every choice of $\epsilon$ then provides a trade-off between the transient size and the asymptotic rate of decay. This is to be contrasted with the usual situation in distributed optimization (see e.g., \cite{ram-angelia, sn11}) where a specific choice of stepsize usually
results in the best bounds.
Finally, in the case when all graphs are connected, the effect of network topology on the convergence time comes through the maximum hitting time $\mathcal{H}$ in all the
individual graphs $G(t)$. There are a variety of results on hitting times for various graphs which may be plugged into Theorem \ref{mainthm} to obtain precise
topology-dependent estimates. We first mention the general result that $\mathcal{H} = O(n^2)$ for the Metropolis chain on an arbitrary connected graph from
\cite{metropolis}. On a variety of reasonably connected graphs, hitting times are considerably smaller. A recent preprint \cite{luxburg} shows that for many graphs, hitting times are proportional to the inverse degrees. In a 2D or 3D grid, we have that ${\cal H} = \widetilde{O}(n)$ \cite{covertime}, where the notation $\widetilde{O}( f(n) )$ is the same as ordinary $O$-notation with the exception of hiding multiplicative factors which are polynomials in $\log n$.
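We remark in passing that $\mathcal{H}$ is straightforward to compute numerically for any given graph via the standard linear-system characterization of hitting times. The following sketch (in Python) does so for the lazy Metropolis walk; the cycle at the end is an illustrative example graph.
\begin{verbatim}
import numpy as np

def lazy_metropolis(adj):
    # P[i, j] = 1/(4 max(d_i, d_j)) for neighbors i != j; lazy self-loops.
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                P[i, j] = 1.0 / (4 * max(deg[i], deg[j]))
        P[i, i] = 1.0 - P[i].sum()
    return P

def max_hitting_time(P):
    # For each target j, solve h = 1 + Q h, where Q drops row/column j.
    n = P.shape[0]
    H = 0.0
    for j in range(n):
        keep = [i for i in range(n) if i != j]
        Q = P[np.ix_(keep, keep)]
        h = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
        H = max(H, h.max())
    return H

n = 10                                # example: cycle on 10 nodes
adj = np.zeros((n, n), dtype=int)
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1
print(max_hitting_time(lazy_metropolis(adj)))
\end{verbatim}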
We illustrate the convergence times of Theorem \ref{mainthm} with a concrete example. Suppose we have a collection of nodes interconnected in
(possibly time-varying) 2D grids with a single (possibly different) node sampling at every time. We are interested in how the time until $E[Z(t) ~|~ v(1)]$ falls below
$\delta$ scales with the number of nodes $n$ as well as with $\delta$. Let us assume that the dimension $l$ of the vector we are learning as well as the noise variance $\sigma^2$ are constants independent of the number of nodes. Choosing a step size $\Delta(t) = 1/\sqrt{t}$, we have that Theorem \ref{mainthm} implies that
variance $E[Z(t) ~|~ Z(0)]$ will fall below $\delta$ after $\widetilde{O}(n^2/\delta^2)$ steps of the protocol. The exact bound, with all the constants,
may be obtained from Eq. (\ref{eq:connected}) by plugging in the hitting time of the 2D grid \cite{covertime}. Moreover, the transient period until this exact bound applies (from Eq. (\ref{transient})) has length $\widetilde{O}(n^2)$. We can obtain a better asymptotic decay approaching $\widetilde{O}(n^2/\delta)$ by picking a more slowly decaying stepsize, at the expense of lengthening the transient period.
\subsection{Related work} We believe that our paper is the first to \ao{derive rigorous convergence time results for} the problem of cooperative multi-agent learning by a network \ao{subject to} unpredictable communication disruptions and intermittent measurements. The key features of our model are 1) its cooperative nature (many nodes working together) 2) its reliance only on distributed and local observations 3) the incorporation of time-varying communication restrictions.
Naturally, our work is not the first attempt to fuse learning algorithms with distributed control or multi-agent settings. \ao{Indeed, the study of learning in games is a classic subject which has attracted considerable attention within the last couple of decades due in part to its applications to multi-agent systems.} We refer the reader to the recent papers \cite{ma1, ma2, ma3, ma4, ma5, ma6, ma7, ma8, ma9, ma10, jadb-learn, jadb-more-learn} \ao{as well as the classic works \cite{ma11, ma13}} which study multi-agent learning in a game-theoretic context. \ao{Moreover, the related problem of distributed reinforcement learning has attracted some recent attention; we refer the reader to \cite{ma11, ma12, ma14}.} We mention especially the recent surveys \ao{\cite{ma15, ma16}}. \ao{Moreover, we note that much of the recent literature in distributed robotics has focused on distributed algorithms robust to faults and communication link failures. We refer the reader to the representative papers \cite{r1, r2}. }
Our work here is very much in the spirit of the recent literature on distributed filtering \cite{olfati1, olfati2, olfati3, rantzer, sper, sayed1, sayed2, sayed3, moura1, moura6, diffadpt} and
especially \cite{carli}. These works consider the problem of tracking a time-varying signal from local measurements by each node, which are then repeatedly combined through
a consensus-like iteration. The above-referenced papers consider a variety of schemes to this effect and obtain bounds on their performance, usually stated in
terms of solutions to certain Lyapunov equations, or in terms of eigenvalues of certain matrices on fixed graphs.
Our work is most closely related to a number of recent papers on distributed detection \cite{moura1, moura2, moura3, moura5, moura6, moura7, moura8, moura9} which seek to evaluate protocols for networked cooperative hypothesis testing and related problems. Like the previously mentioned work on distributed filtering, these papers use the idea of local iterations which are combined through a distributed consensus update, termed ``consensus plus innovations''; a similar idea is called ``diffusion adaptation'' in \cite{diffadpt}. This literature clarified a number of distinct phenomena in cooperative filtering and estimation; some of the contributions include working out tight bounds on error exponents for choosing the right hypothesis and other performance measures for a variety of settings (e.g., \cite{moura2, moura3, moura5}), as well as establishing a number of fundamental limits for distributed parameter estimation \cite{moura7}.
In this work, we consider the related (and often simpler) question of learning a static unknown vector. However,
we derive results which are considerably stronger compared to what is available in the previous literature, obtaining convergence rates in settings when the network is time-varying, measurements are intermittent, and communication is noisy. {\em Most importantly, we are able to explicitly bound the speed of convergence to the unknown vector $\mu$ in these unpredictable settings in terms of network size and combinatorial features of the networks.}
\subsection{Outline} We now outline the remainder of the paper. Section \ref{sec:conv}, which comprises most of our paper, contains the proof of Proposition \ref{thm:conv} as well as the main result, Theorem \ref{mainthm}. The proof is broken up into several distinct pieces since some steps
are essentially lengthy exercises in analysis. We begin in Section \ref{subsec:aclass} which contains some basic facts about symmetric substochastic
matrices which will be useful. The following Section \ref{subsec:decay} is devoted solely to analyzing a particular inequality. We will later show
that the expected variance satisfies this inequality and apply the decay bounds we derived in that section. We
then begin analyzing properties of our protocol in Section \ref{subsec:sieve}, before finally proving Proposition \ref{thm:conv} and Theorem \ref{mainthm} in Section \ref{sec:proof}. Finally, Section \ref{sec:simul} contains some simulations of our protocol and
Section \ref{sec:concl} concludes with a summary of our results and a list of several open problems.
\section{Proof of the main result\label{sec:conv}}
The purpose of this section is to prove Theorem \ref{mainthm}; we prove Proposition \ref{thm:conv} along the way.
We note that the first several subsections contain some basic results which
we will have occasion to use later; it is only in Section \ref{sec:proof} that we begin directly proving Theorem \ref{mainthm}.
We begin with some preliminary definitions.
\subsection{Definitions}
Given a nonnegative matrix $A \in \mathbb{R}^{n \times n}$, we will use $G(A)$ to
denote the graph whose edges correspond to the positive entries of $A$ in the following way: $G(A)$ is the directed graph on the vertices
$\{1,2, \ldots, n\}$ with edge set $\{ (i,j) ~|~ a_{ji}>0 \}$. Note that if $A$ is symmetric then the graph $G(A)$ will be \ao{undirected}. We will use the standard convention of ${\bf e}_i$ to mean
the $i$'th basis column vector and ${\bf 1}$ to mean the all-ones vector. Finally, we will use $r_i(A)$ to denote the row sum of the $i$'th row of $A^2$ and $R(A)={\rm diag}(r_1(A), \ldots,r_n(A))$. When the argument matrix $A$ is clear from context, we will simply write $r_i$ and $R$ for $r_i(A), R(A)$.
\subsection{A few preliminary lemmas\label{subsec:aclass}} In this subsection we prove a few lemmas which we will find useful in the proofs of our main theorem. Our first lemma gives a decomposition of a symmetric matrix and its immediate corollary provides a way to bound the change in norm arising from multiplication by a symmetric matrix. Similar statements were proved in \cite{bdx04},\cite{noot09}, and \cite{TN11}.
\smallskip
\begin{lemma} For any symmetric matrix $A$, \[ A^2 = \ao{R} - \sum_{k<l} [A^2]_{kl} ({\bf e}_k - {\bf e}_l) ({\bf e}_k - {\bf e}_l)^T.\] \label{lemma:decomposition} \end{lemma}
\begin{proof} Observe that each term $({\bf e}_k - {\bf e}_l) ({\bf e}_k - {\bf e}_l)^T$ in the sum on the right-hand side has row sums of zero, and consequently both sides of the above equation have identical row sums. Moreover, both sides of the above equation are symmetric. This implies it suffices to prove that all the $(i,j)$-entries of both sides
with $i<j$ are the same. But on both sides, the $(i,j)$'th element when $i<j$ is $[A^2]_{ij}$. \end{proof}
\smallskip
\ao{This lemma may be used to bound how much the norm of a vector changes after multiplication by a symmetric matrix.}
\smallskip
\begin{corollary} \label{sievebound} For any symmetric matrix $A$,
\[ ||Ax||_2^2 ~=~ ||x||_2^2 - \sum_{j=1}^n (1-r_j) x_j^2 \aor{-} \sum_{k<l} [A^2]_{kl}(x_k - x_l)^2. \] \end{corollary}
\begin{proof} By Lemma \ref{lemma:decomposition}, \begin{eqnarray*} ||Ax||_2^2 & = &
x^T A^2 x \\
& = & x^T R x - \sum_{k<l} [A^2]_{kl} x^T ({\bf e}_k - {\bf e}_l) ({\bf e}_k - {\bf e}_l)^T x \\
& = & \sum_{j=1}^n r_j x_j^2 - \sum_{k<l} [A^2]_{kl} (x_k - x_l)^2.
\end{eqnarray*} Thus the decrease in squared norm from $x$ to $Ax$ is
\[ ||x||_2^2 - ||Ax||_2^2 = \sum_{j=1}^n (1-r_j) x_j^2 + \sum_{k<l} [A^2]_{kl}(x_k - x_l)^2. \] \end{proof}
\bigskip We now introduce a measure of graph connectivity which we call the {\em sieve constant} of a graph, defined as follows. For a nonnegative, stochastic matrix $A \in \mathbb{R}^{n \times n}$, the sieve constant $\kappa(A)$ is defined as \[ \kappa(A) = \min_{m=1, \ldots, n} ~~\min_{||x||_2=1} ~~ x_m^2 + \sum_{k \neq l} a_{kl} (x_k - x_l)^2. \] For an undirected graph $G=(V,E)$, the sieve constant $\kappa(G)$ denotes the sieve constant of the Metropolis matrix, which is the stochastic matrix with
\begin{equation*}
a_{ij} =
\begin{cases} \frac{1}{\max(d_i, d_j)}, & \text{ if } (i,j) \in E \mbox{ and } i \neq j,\\
0, &\text{ if } (i,j) \notin E.
\end{cases}
\end{equation*} The diagonal entries $a_{ii}$ are then fixed by the requirement that each row sum to one. The sieve constant is, as far as we are aware, a novel graph parameter, not used in any previous work. Our name is due to the geometric picture inspired by the above optimization problem: one entry of the vector $x$ must be held close to zero while keeping it close to all the other entries, with $\kappa(A)$ measuring how much of the vector ``sieves'' through the gaps.
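Numerically, $\kappa(A)$ reduces to $n$ small eigenvalue problems: since $\sum_{k \neq l} a_{kl} (x_k - x_l)^2 = x^T L x$ for the weighted Laplacian $L$ built from the symmetrized off-diagonal entries of $A$, we have $\kappa(A) = \min_m \lambda_{\min}({\bf e}_m {\bf e}_m^T + L)$. The following sketch (in Python) evaluates this; the path-graph Metropolis matrix is an illustrative example.
\begin{verbatim}
import numpy as np

def sieve_constant(A):
    # kappa(A) = min over m of the smallest eigenvalue of e_m e_m^T + L,
    # where x^T L x = sum_{k != l} a_{kl} (x_k - x_l)^2.
    n = A.shape[0]
    W = A + A.T                        # (k,l) and (l,k) both enter the sum
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W     # weighted graph Laplacian
    kappa = np.inf
    for m in range(n):
        Q = L.copy()
        Q[m, m] += 1.0                 # the x_m^2 term
        kappa = min(kappa, np.linalg.eigvalsh(Q).min())
    return kappa

# Example: Metropolis matrix of the path graph 1-2-3.
A = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
print(sieve_constant(A))
\end{verbatim}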
\smallskip
The sieve constant will feature prominently in our proof of Theorem \ref{mainthm}; we will use it in conjunction with Corollary \ref{sievebound} to bound
how much the norm of a vector decreases after multiplication by a substochastic, symmetric matrix. We will require a bound on how small $\kappa(A)$ can be
in terms of the combinatorial features of the graph $G(A)$; such a bound is given by the following lemma.
\smallskip
\begin{lemma} \label{lemma:snonnegativity} $\kappa(A) \geq 0$. Moreover, if the smallest nonzero entry of the matrix $A$, which we will denote by $\eta$, satisfies $\eta \leq 1$ and if the graph $G(A)$ is weakly connected\footnote{A directed
graph is weakly connected if the undirected graph obtained by ignoring the orientations of the edges is connected.} we have $$\kappa(A) \geq \frac{\eta}{n D },$$ where $D$ is the (weakly-connected) diameter of $G(A)$.
\end{lemma}
\smallskip
\begin{proof} It is evident from the definition of $\kappa(A)$ that it is necessarily nonnegative. We will show that for any $m$,
\[ \min_{||x||_2=1} ~~~~~ x_m^2 + \sum_{(i,j) \in E} (x_i - x_j)^2 \geq \frac{1}{D n}
\] This then implies the lemma immediately from the definition of the sieve constant.
Indeed, we may suppose $m=1$ without loss of generality. Suppose the minimum in the above optimization problem is achieved by the vector $x$; let $Q$ be the index of the component of $x$ with
the largest absolute value; without loss of generality, we may suppose that the shortest path connecting $1$ and $Q$ is $1-2-\cdots -Q$ (we can simply relabel the nodes to make this true). Moreover, we may also assume $x_Q > 0$ (else, we can just replace $x$ with $-x$).
\ao{Now the assumptions that $||x||_2=1$, that
$x_Q$ is the largest component of $x$ in absolute value, and that $x_Q > 0$} imply that $x_Q \geq 1/\sqrt{n}$ or
\[ (x_1-0) + (x_2 - x_1) + \cdots + (x_Q - x_{Q-1}) \geq \frac{1}{\sqrt{n}}, \] and applying Cauchy-Schwarz
\[ Q ( x_1^2 + (x_2 - x_1)^2 + \cdots + (x_Q - x_{Q-1})^2) \geq \frac{1}{n}, \] or
\[ x_1^2 + (x_2 - x_1)^2 + \cdots + (x_Q- x_{Q-1})^2 \geq \frac{1}{Qn} \geq \frac{1}{ D n}. \] \end{proof}
\subsection{A decay inequality and its consequences\label{subsec:decay}} We continue here with some preliminary results which we will use in the course of proving Theorem \ref{mainthm}. The proof of that theorem will proceed by arguing that $a(t)= E[Z(t) ~|~ Z(0)]$ will satisfy the inequality
\begin{equation} \label{exampledecay} a(t_{k+1}) \leq \left( 1 - \frac{q}{t_{k+1}^{1-\epsilon}} \right) a({t_k}) + \frac{d}{t_k^{2 - 2 \epsilon}} \end{equation} for some increasing integer sequence $t_k$ and some positive constants $q,d$. We will not derive this inequality for $E[Z(t) ~|~ Z(0)]$ here; this will be done later, in Section \ref{sec:proof}. The current subsection is instead devoted to analyzing the consequences of the inequality, specifically deriving a bound on how fast $a(t_k)$ decays as a function of $q,d$ and the sequence $t_k$.
The only result from this subsection which will be used later is Corollary \ref{effdecay}; all the other lemmas proved here are merely steps on the way to the
proof of that corollary.
We begin with a lemma which bounds some of the products we will shortly encounter.
\smallskip
\begin{lemma} Suppose $q \in (0,1]$ and $\epsilon \in (0,1)$ and for $2 \leq a \leq b$ define \[ \Phi_q(a,b) = \prod_{t=a}^{b-1} \left( 1 - \frac{q}{t^{1-\epsilon}} \right). \] Moreover, we will adopt the convention that $\Phi_q(a,b)=1$ when $a=b$. Then we have
\[ \Phi_q(a,b) \leq e^{-q(b^{\epsilon} - a^{\epsilon})/\epsilon}. \] \label{phibound}
\end{lemma}
\smallskip
\begin{proof} Taking the logarithm of the definition of $\Phi_q(a,b)$, and using the inequality $\ln(1-x) \leq -x$,
\[ \ln \Phi_q(a,b) = \sum_{t=a}^{b-1} \ln \left( 1 - \frac{q}{t^{1-\epsilon}} \right) \leq - \sum_{t=a}^{b-1} \frac{q}{t^{1-\epsilon}}
\leq -q \frac{b^{\epsilon} - a^{\epsilon}}{\epsilon}, \] where, in the last inequality, we applied the standard technique of lower-bounding a
nonincreasing nonnegative sum by an integral.
\end{proof}
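As a quick numerical sanity check of this bound (the parameter values below are purely illustrative):
\begin{verbatim}
import numpy as np

q, eps, a, b = 0.3, 0.4, 2, 200             # illustrative parameters
t = np.arange(a, b)
lhs = np.prod(1.0 - q / t ** (1.0 - eps))   # Phi_q(a, b)
rhs = np.exp(-q * (b ** eps - a ** eps) / eps)
assert lhs <= rhs
print(lhs, "<=", rhs)
\end{verbatim}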
\smallskip
We next turn to a lemma which proves yet another bound we will need, namely a lower bound on $t$ such that
the inequality $t \geq \beta \ln t$ holds.
\smallskip
\begin{lemma} Suppose $\beta \geq 3$ and $t \geq 3 \beta \ln \beta$. Then $\beta \ln t \leq t$. \label{betabound}
\end{lemma}
\smallskip
\begin{proof} On the one hand, the inequality holds at $t= 3 \beta \ln \beta$:
\[ \beta \ln (3 \beta \ln \beta) = \beta \ln 3 + \beta \ln \beta + \beta \ln \ln \beta \leq 3 \beta \ln \beta. \] On the other hand,
the derivative of $t - \beta \ln t$ is nonnegative for $t \geq \beta$, so that the inequality continues to hold for all $t \geq 3 \beta \ln \beta$.
\end{proof}
\smallskip
Another useful bound is given in the following lemma.
\smallskip
\begin{lemma} Suppose $\epsilon \in (0,1)$ and $\alpha \leq b$. Then
\[ (b-\alpha)^{\epsilon} \leq b^{\epsilon} - \frac{\epsilon}{b^{1-\epsilon}} \alpha \] \label{epsilonpower}
\end{lemma}
\begin{proof} For $\epsilon \in (0,1)$ the map $x \mapsto x^{\epsilon}$ is concave on $[0,\infty)$, and therefore lies below its tangent line at $x=b$. Evaluating the tangent at $x = b - \alpha$,
\[ (b-\alpha)^{\epsilon} \leq b^{\epsilon} + \epsilon b^{\epsilon-1} \left( (b-\alpha) - b \right) = b^{\epsilon} - \frac{\epsilon}{b^{1-\epsilon}} \alpha. \] The cases $\epsilon = 0$ and $\epsilon = 1$ hold with equality.
\end{proof}
\smallskip
We now combine Lemma \ref{phibound} and Lemma \ref{epsilonpower} to obtain a convenient bound on $\Phi_q(a,b)$ whenever $a$ is not too close to $b$.
\smallskip
\begin{lemma} Suppose $q \in (0,1]$ and $\epsilon \in (0,1)$.
Then if $a$ satisfies $2 \leq a \leq b - \frac{2}{q} b^{1-\epsilon} \ln (b) $, we have \label{cubedecay} \[ \Phi_q(a,b) \leq \frac{1}{b^2} \]
\end{lemma}
\smallskip
\begin{proof} Indeed, observe that as a consequence of Lemma \ref{epsilonpower},
\[ b^{\epsilon} - a^{\epsilon} \geq b^{\epsilon} - (b - \frac{2}{q} b^{1-\epsilon} \ln (b))^{\epsilon} \geq b^{\epsilon} - ( b^{\epsilon} - \frac{\epsilon}{b^{1-\epsilon}} \frac{2}{q} b^{1-\epsilon} \ln b ) = \frac{2 \epsilon}{q} \ln b \] and
consequently \[ e^{-q \frac{b^\epsilon - a^\epsilon}{\epsilon}} \leq e^{-2 \ln b} = \frac{1}{b^2}. \] The claim now follows by Lemma \ref{phibound}.
\end{proof}
\smallskip
The previous lemma suggests that as long as the distance between $a$ and $b$ is at least $(2/q) b^{1-\epsilon} \ln b$, then $\Phi_q(a,b)$ will be small.
The following lemma provides a bound on how long it takes until the distance from $b/2$ to $b$ is at least this large.
\smallskip
\begin{lemma} Suppose $b \geq \left[ \frac{12}{q\epsilon} \ln \frac{4}{q\epsilon} \right]^{1/\epsilon}$, $q \in (0,1]$, and $\epsilon \in (0,1)$. Then $b - \frac{2}{q} b^{1-\epsilon} \ln(b) \geq b/2$.
\label{halflemma}
\end{lemma}
\begin{proof} Rearranging, we need to argue that
\[ b^{\epsilon} \geq \frac{4}{q} \ln b \] Setting $t = b^{\epsilon}$ this is equivalent to
\[ t \geq \frac{4}{q \epsilon} \ln t \] which, by Lemma \ref{betabound}, occurs if
\[ t \geq \frac{12}{q\epsilon} \ln \frac{4}{q\epsilon} \] or
\[ b \geq \left[ \frac{12}{q\epsilon} \ln \frac{4}{q\epsilon} \right]^{1/\epsilon}.\]
\end{proof}
\smallskip
For simplicity of presentation, we will henceforth adopt the notation \[ \alpha(q,\epsilon) = \left[ \frac{12}{q \epsilon} \ln \left( \frac{4}{q \epsilon} \right) \right]^{1/\epsilon}. \]
\smallskip
\smallskip With all these lemmas in place, we now turn to the main goal in this subsection, which is to analyze how a sequence satisfying
Eq. (\ref{exampledecay}) decays with time. Our next lemma does this for the special choice of $t_k=k$. The proof of this lemma relies on
all the results derived previously in this subsection.
\smallskip
\begin{lemma} \label{decay} Suppose \[ a({k+1}) \leq \left( 1 - \frac{q}{(k+1)^{1-\epsilon}} \right) a(k) + \frac{d}{k^{2 - 2 \epsilon}},\] where $q \in (0,1]$, $ \epsilon \in (0,1)$, and $d$
and the
initial condition $a(1)$ are all nonnegative.
Then for \[ k \geq \alpha(q,\epsilon) \] we have \[ a(k) \leq \frac{9d}{q} \frac{\ln k}{ k^{1 - \epsilon}} + a(1) e^{-q(k^\epsilon - 2)/\epsilon}. \]
\label{decay0}
\end{lemma}
\smallskip
\begin{proof} Let us adopt the shorthand $\phi(k) = d/k^{2-2\epsilon}$. We have that
\[ a(k) \leq \phi(k-1) + \phi(k-2) \Phi_q(k,k+1) + \phi(k-3) \Phi_q(k-1,k+1) + \cdots + \phi(1) \Phi_q(3,k+1) + a(1) \Phi_q(2,k+1). \] We will break this sum up into
three pieces:
\[ a(k) \leq \sum_{j=1}^{\lfloor (2/q) k^{1-\epsilon} \ln k \rfloor } \phi(k-j) \Phi_q(k+2-j,k+1) + \sum_{j= \lceil (2/q) k^{1-\epsilon} \ln k \rceil}^{k-1} \phi(k-j) \Phi_q(k+2-j,k+1) + a(1) \Phi_q(2,k+1) \] We will bound each piece separately.
The first piece can be bounded by using Lemma \ref{halflemma} to argue that each of the terms $\phi(k-j)$ is at most $d(1/(k/2))^{2 - 2 \epsilon}$, the quantity $\Phi_q(k-j+2,k+1)$ is upper bounded by $1$, and there are at most $(2/q) k^{1-\epsilon} \ln k$ terms in the sum. Consequently,
\[ \sum_{j=1}^{\lfloor (2/q) k^{1-\epsilon} \ln k \rfloor} \phi(k-j) \Phi_q(k+2-j,k+1) \leq \frac{2}{q} k^{1-\epsilon} \ln k \frac{d}{(k/2)^{2-2\epsilon}} \leq \frac{8d \ln k}{q k^{1-\epsilon}}. \] We bound the second piece by arguing that all the terms $\phi(k-j)$ are bounded above by $d$, whereas the sum of $\Phi_q(k+2-j,k+1)$ over
that range is at most $1/k$ due to Lemma \ref{cubedecay}. Thus
\[ \sum_{j= \lceil (2/q) k^{1-\epsilon} \ln k \rceil}^{k-1} \phi(k-j) \Phi_q(k+2-j,k+1) \leq \frac{d}{k}. \] Finally, the last term is bounded straightforwardly by Lemma \ref{phibound}. Putting these
three bounds together gives the statement of the current lemma.
\end{proof}
\smallskip Finally, we turn to the main result of this subsection, which is the extension of the previous lemma to the case of general sequences
$t_k$. The following result is the only one which we will have occasion to use later, and its proof proceeds by an appeal to Lemma \ref{decay0}.
\smallskip
\begin{corollary} Suppose \[ a(t_{k+1}) \leq \left( 1 - \frac{q}{t_{k+1}^{1-\epsilon}} \right) a(t_k) + \frac{d}{t_k^{2 - 2 \epsilon}},\] where $q \in (0,1]$, $d$ and the
initial condition $a(1)$ are nonnegative, $\epsilon \in (0,1)$ and $t_k$ is some increasing integer sequence satisfying $t_1=1$ and
\[ |t_{k+1} - t_k | < T ~~~~ \mbox{ for all nonnegative } k, \] where $T$ is some positive integer.
Then if
\[ k \geq \left[ \frac{12T}{q \epsilon} \ln \left( \frac{4T}{q \epsilon} \right) \right]^{1/\epsilon} ,\] we will have
\[ a(t_k) \leq \frac{9dT}{q k^{1 - \epsilon}} + a(1) e^{-q(k^\epsilon - 1)/(T\epsilon)} .\]\label{effdecay}
\end{corollary}
\begin{proof} Define $b(k) = a(t_k)$. Then
\[ b(k+1) \leq \left( 1 - \frac{q}{t_{k+1}^{1-\epsilon}} \right) b(k) + \frac{d}{t_k^{2 - 2 \epsilon}}. \] Since $ k \leq t_k \leq kT$, we have that
\[ b(k+1) \leq \left( 1 - \frac{q/T^{1-\epsilon}}{(k+1)^{1-\epsilon}} \right) b(k) + \frac{d}{k^{2 - 2 \epsilon}}. \] Applying Lemma \ref{decay0}, we get
that for
\[ k \geq \left[ \frac{18T}{q \epsilon} \ln \left( \frac{6T}{q\epsilon} \right) \right]^{1/\epsilon},\] we have
\begin{equation} \label{tkeq} b(k) \leq \frac{9dT}{qk^{1-\epsilon}} + b(1) e^{-q(k^{\epsilon}-2)/(\epsilon T)}. \end{equation}
The corollary now follows since $a(t_k)=b(k)$ and $t_1=1$.
\end{proof}
\subsection{Analysis of the learning protocol\label{subsec:sieve}}
With all the results of the previous subsections in place, we can now turn to the analysis of our protocol. We do not begin the actual proof of Theorem \ref{mainthm} in this subsection, but rather we derive some bounds on the decrease of $Z(t)$ at each step. It is in the next section, Section \ref{sec:proof}, that we will make use of these bounds to prove Theorem \ref{mainthm}.
For the remainder of Section \ref{subsec:sieve},
we will assume that $l=1$, i.e., $\mu$ and all $v_i(t)$ belong to $\mathbb{R}$. We will then define $v(t)$ to be the vector that stacks up $v_1(t), \ldots, v_n(t)$.
The following proposition describes a convenient way to write Eqs.~(\ref{nonmeasuringupdate}) and (\ref{measuringupdate}). We omit the proof (which is obvious).
\smallskip
\begin{proposition} \label{remark:eqrewrite} We can rewrite Eq. (\ref{nonmeasuringupdate}) and Eq. (\ref{measuringupdate}) as follows:
\begin{eqnarray*} y(t+1) & = & A(t) v(t) + b(t) \\
q(t+1) & = & (1 - \Delta(t)) v(t) + \Delta(t) y(t+1) \\
v(t+1) & = & q(t+1) + \Delta(t) r(t)\aor{ + \Delta(t) c(t)},
\end{eqnarray*} where:
\begin{enumerate}
\item If $i \neq j$ and $i,j$ are neighbors in $G(t)$, \[ a_{ij}(t) = \frac{1}{4 \max(d_i(t), d_j(t))}. \] \ao{However, if $i \neq j$ are not neighbors in
$G(t)$, then $a_{ij}(t)=0$.} As a consequence,
$A(t)$ is a symmetric matrix.
\item If node $i$ does not have a measurement of $\mu$ at time $t$, then \[ a_{ii}(t) = 1 - \frac{1}{4} \sum_{j \in N_i(t), ~~ j \neq i}
\frac{1}{\max(d_i(t),d_j(t))}. \] On the other hand, if node $i$ does have a measurement of $\mu$ at time $t$,
\[ a_{ii}(t) = \frac{3}{4} - \frac{1}{4} \sum_{j \in N_i(t), ~~ j \neq i}
\frac{1}{\max(d_i(t),d_j(t))} . \] Thus $A(t)$ is a diagonally dominant matrix and its graph is merely the
intercommunication graph at time $t$: $G(A(t)) = G(t)$. Moreover, if no node has a measurement at time $t$, $A(t)$ is stochastic.
\item If node $i$ does not have a measurement of $\mu$ at time $t$, then $b_i(t)=0$. If node $i$ does have a measurement of $\mu$
at time $t$, then $b_i(t) = (1/4) \mu$.
\item If node $i$ has a measurement of $\mu$ at time $t$, $r_i(t)$ is a random variable with mean zero and variance $\sigma^2/16$. Else,
$r_i(t)=0$. Each $r_i(t)$ is independent of all $v(t)$ \aor{and all other $r_j(t)$. Similarly, $c_i(t)$ is the random variable \[ c_i(t) = \frac{1}{4} \sum_{j \in N(i)} \frac{w_{ij}(t)}{\max( d_i(t), d_j(t) )}. \] Each $c_i(t)$ has mean zero, and the vectors $c(t), c(t')$ are independent whenever $t \neq t'$. Moreover, $c(t)$ and $r(t')$ are
independent for all $t,t'$. Finally, $E[c_i^2(t)] \leq (\sigma')^2/16$.}
\end{enumerate}
Putting it all together, we may write our update as
\[ v(t+1) = (1- \Delta(t)) v(t) + \Delta(t) A(t) v(t) + \Delta(t) b(t) + \Delta(t) r(t) + \Delta(t) c(t). \]
\end{proposition}
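To make this construction concrete, the following sketch (in Python) builds $A(t)$ and $b(t)$ for $l=1$ on a small example graph (an illustrative choice) and verifies two properties used below: $A(t)$ is symmetric, and $\mu {\bf 1} = A(t) \mu {\bf 1} + b(t)$.
\begin{verbatim}
import numpy as np

def build_A_b(adj, measuring, mu):
    # A(t), b(t) of the proposition for a single time step with l = 1.
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                A[i, j] = 1.0 / (4 * max(deg[i], deg[j]))
        A[i, i] = (0.75 if i in measuring else 1.0) - A[i].sum()
    b = np.array([0.25 * mu if i in measuring else 0.0 for i in range(n)])
    return A, b

# Example: 4-cycle with node 0 holding a measurement at this step.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]])
A, b = build_A_b(adj, {0}, mu=2.0)
assert np.allclose(A, A.T)                         # A(t) is symmetric
assert np.allclose(A @ np.full(4, 2.0) + b, 2.0)   # mu*1 = A mu*1 + b
\end{verbatim}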
Let us use $S(t)$ for the set of agents who have a measurement at time $t$. We use this notation in the next lemma, which
bounds the decrease in $Z(\aor{t})$ from time $t$ to $t+1$.
\smallskip
\begin{lemma} \label{lemma:measurementdecrease} If $\Delta(t) \in (0,1)$ then
\begin{eqnarray*}
E[Z(\aor{t+1}) ~ {\bf | }~ v(t), \aor{v(t-1), \ldots, v(1)}] & \leq & Z(\aor{t}) - \frac{\Delta(t)}{\ao{8}} \sum_{(k,l) \in E(t)} \frac{(v_k(t) - v_l(t))^2}{\max (d_k(t), d_l(t))} \\ && ~~~~~-\frac{\Delta(t)}{4} \sum_{k \in S(t)} (v_k(t) - \mu)^2 + \frac{\Delta(t)^2}{16} \aor{ \left( M \sigma^2 + n (\sigma')^2 \right)}
\end{eqnarray*} Recall that $E(t)$ is the set of undirected edges in the graph, so every pair of neighbors $(i,j)$ appears once in the above sum. Moreover, if $S(t)$ is nonempty, then
\[ E[Z(\aor{t+1}) ~ {\bf | }~ v(t), \aor{v(t-1), \ldots, v(1)}] \leq \left( 1 - \frac{1}{8} \Delta(t) \kappa \left[ G(t) \right] \right) Z(\aor{t}) + \frac{\Delta(t)^2}{16} \aor{ \left( M\sigma^2 + n(\sigma')^2 \right)}. \]
\end{lemma}
\begin{proof} Observe that, for any $t$, the vector $\mu {\bf 1}$ satisfies \[ \mu {\bf 1} = A(t) \mu {\bf 1} + b(t),\] and therefore,
\begin{equation} \label{yminusmu} y(t+1) - \mu {\bf 1} = A(t) (v(t) - \mu {\bf 1}).
\end{equation} We now apply Corollary \ref{sievebound} which involves the entries and row-sums of the matrix $A^2(t)$ which we lower-bound as follows. Because $A(t)$ is diagonally dominant \ao{and nonnegative}, we have that if $(k,l) \in E(t)$ then $$ [A^2(t)]_{kl} \geq [A(t)]_{kk} [A(t)]_{kl} \geq \frac{1}{2} \frac{1}{4 \max( d_k(t), d_l(t))} \geq \frac{1}{8 \max (d_k(t), d_l(t))}.$$ Moreover, if $k$ has a measurement of $\mu$ then the row
sum of the $k$'th row of $A$ \ao{equals} $3/4$, which implies that the $k$'th row sum of $A^2$ is also \ao{at most} $3/4$. Consequently, Corollary \ref{sievebound} implies
\begin{small} \begin{equation} ||y(t+1) - \mu {\bf 1}||_2^2 \leq Z(\aor{t}) - \frac{1}{8} \sum_{(k,l) \in E(t)} \frac{(v_k(t)-v_l(t))^2}{\max (d_k(t), d_l(t))} - \frac{1}{4 } \sum_{k \in S(t)} (v_k(t) - \mu)^2. \label{eq:subtractbound}
\end{equation} \end{small} Next, \ao{since $\Delta(t) \in (0,1)$ we can appeal to the convexity of the squared two-norm to obtain}
\begin{small} \begin{eqnarray*} ||q(t+1) - \mu {\bf 1}||_2^2 & \leq & Z(\aor{t}) - \frac{\Delta(t)}{8} \sum_{(k,l) \in E(t)} \frac{(v_k(t)-v_l(t))^2}{\max (d_k(t), d_l(t))} - \frac{\Delta(t)}{4} \sum_{k \in S(t)} (v_k(t) - \mu)^2.
\end{eqnarray*} \end{small} Since $E[r(t)]= 0, \aor{E[c(t)]=0}$ and $\aor{E[||r(t) + c(t)||_2^2] \leq (M\sigma^2 + n(\sigma')^2)/16}$ independently of \aor{all} $v(t)$, this immediately implies the first inequality in the statement of the lemma. The second inequality is then a straightforward consequence of the
definition of the sieve constant.
\end{proof}
\subsection{Completing the proof\label{sec:proof}} With all the lemmas of the previous subsections in place, we finally begin the proof of our main result,
Theorem \ref{mainthm}. Along the way, we will prove the basic convergence result of Proposition \ref{thm:conv}.
Our first observation in this section is that it suffices to prove it in the case when $l=1$ (i.e., when $\mu$ is a number). Indeed, observe that the update
equations Eq. (\ref{nonmeasuringupdate}) and (\ref{measuringupdate}) are separable in the entries of the vectors $v_i(t)$. Therefore, if Theorem \ref{mainthm} is proved under the assumption $l=1$, we may apply it to each component to obtain it for the general case. We will therefore assume without loss of generality that $l=1$ for the remainder of this paper.
Our first step is to prove the basic convergence result of Proposition \ref{thm:conv}. Our proof strategy is to repeatedly apply Lemma \ref{lemma:measurementdecrease} to bound the decrease in $Z(t)$ at each stage. This will yield a decrease rate for $Z(t)$ which will imply almost sure convergence to the correct $\mu$.
\bigskip
\begin{proof}[Proof of \aor{Proposition} \ref{thm:conv}] We first claim that there exists some constant $c>0$ such that if $t_k=k \max(T,B)$, then
\begin{equation} \label{constantdecay} E[Z(\aor{t_{k+1}}) ~|~ v(t_k)] \leq (1- c \Delta(t_{k+1})) Z(\aor{t_k}) + \max(T,B) \Delta(t_k)^2 \aor{( M \sigma^2 + n (\sigma')^2)}. \end{equation}
\ao{We postpone the proof of this claim for a few lines while we observe that,} as a consequence of our assumptions on $\Delta(t)$, we have the following three facts:
\[ \sum_{k=1}^{\infty} c \Delta(t_{k+1}) = +\infty, ~~~~~~~\sum_{k=1}^{\infty} \max(T,B) \Delta(t_k)^2 \aor{( M \sigma^2 + n (\sigma')^2)} < \infty \] \[ \lim_{\ao{k} \rightarrow \infty} \frac{ \max(T,B) \Delta(t_k)^2 \aor{( M \sigma^2 + n (\sigma')^2)}}{c \Delta(t_{k+1})} = 0. \] Moreover, eventually it is true that $c \Delta( t_{k+1} ) < 1$. \ao{Now as a consequence of these four facts, Lemma 10 from Chapter 2.2 of \cite{p87}} implies $\lim_{t \rightarrow \infty} Z(\aor{t})=0$ with probability $1$.
\ao{To conclude the proof, it remains to demonstrate} Eq. (\ref{constantdecay}). \aor{Applying Lemma \ref{lemma:measurementdecrease} at every time $t$ between $t_{k}$ and $t_{k+1}-1$ and iterating expectations, we obtain} \begin{small}
\begin{eqnarray} E[Z(\aor{t_{k+1}}) ~|~ v(t_k)] & \leq & Z(\aor{t_k}) - \sum_{m=t_k}^{t_{k+1}-1} \left( \frac{\Delta(m)}{8} \sum_{(k,l) \in E(m)} \frac{E[(v_k(m)- v_l(m))^2 ~|~ v(t_k)]}{\max (d_k(m), d_l(m))} \nonumber \right. \\ && \left. ~~~~~~~~+ \frac{\Delta(m)}{4 } \sum_{k \in S(m)} E [(v_k(m) - \mu)^2 ~|~ v(t_k)]
+ \Delta^2(m) \frac{\aor{ M \sigma^2 + n (\sigma')^2}}{16} \right). \label{decreaseineq}
\end{eqnarray} \end{small} Consequently, Eq. (\ref{constantdecay}) follows from the assertion
\[ \inf ~~ \frac{\sum_{m=t_k}^{t_{k+1}-1} \sum_{(k,l) \in E(m)} E[(v_k(m) - v_l(m))^2 ~|~ v(t_k)] + \sum_{k \in S(m)} E[(v_k(m) - \mu)^2 ~|~ v(t_k)] }{\sum_{i=1}^n (v_i(t_k) - \mu)^2} > 0 \] where the infimum is taken over all vectors $v(t_k)$ such that $v(t_k) \neq \mu {\bf 1}$ and over all possible sequences
of undirected communication graphs and measurements between time $t_k$ and $t_{k+1}-1$ satisfying the conditions of uniform connectivity and uniform measurement speed. Now since $E[X^2] \geq E[X]^2$, we have that \begin{eqnarray*} && \inf ~~ \frac{\sum_{m=t_k}^{t_{k+1}-1} \sum_{(k,l) \in E(m)} E[(v_k(m) - v_l(m))^2 ~|~ v(t_k)] + \sum_{k \in S(m)} E[(v_k(m) - \mu)^2 ~|~ v(t_k)] }{\sum_{i=1}^n (v_i(t_k) - \mu)^2} \\ && \geq \inf ~~ \frac{\sum_{m=t_k}^{t_{k+1}-1} \sum_{(k,l) \in E(m)} E[v_k(m) - v_l(m) ~|~ v(t_k)]^2 + \sum_{k \in S(m)} E[v_k(m) - \mu ~|~ v(t_k)]^2 }{\sum_{i=1}^n (v_i(t_k) - \mu)^2}. \end{eqnarray*} We will \ao{complete the proof by arguing} that this last infimum is positive.
Let us \ao{define} $z(t) = E[v(t) - \mu {\bf 1} ~|~ v(t_k)]$ for $t \geq t_k$. \ao{From Proposition \ref{remark:eqrewrite} and Eq. (\ref{yminusmu}), we can work out
the dynamics satisfied by the sequence $z(t)$ for $t \geq t_k$: \begin{eqnarray} z(t+1) & = & E[v(t+1) - \mu {\bf 1} ~|~ v(t_k)] \nonumber \\
& = & E[q(t+1) - \mu {\bf 1} ~|~ v(t_k)] \nonumber \\
& = & E[(1-\Delta(t))v(t) + \Delta(t) y(t+1) - \mu {\bf 1} ~|~ v(t_k)] \nonumber \\
& = & E[ (1-\Delta(t))(v(t) - \mu {\bf 1}) ~|~ v(t_k)] + E[\Delta(t) (y(t+1) - \mu {\bf 1}) ~|~ v(t_k) ] \nonumber \\
& = & E[ (1-\Delta(t))(v(t) - \mu {\bf 1}) ~|~ v(t_k)] + E[\Delta(t) A(t) (v(t) - \mu {\bf 1}) ~|~ v(t_k) ] \nonumber \\
& = & \left[ (1- \Delta(t)) I + \Delta(t) A(t) \right] z(t) \label{zequation}.\end{eqnarray}}
Clearly, we need to argue that \begin{equation} \label{zinf} \inf ~~ \frac{\sum_{m=t_k}^{t_{k+1}-1} \sum_{(k,l) \in E(m)} (z_k(m) - z_l(m))^2 + \sum_{k \in S(m)} z_k^2(m) }{\sum_{i=1}^n z_i^2(t_k)} > 0 \end{equation} where the infimum is taken over all sequences of undirected communication graphs satisfying the conditions of uniform connectivity and measurement speed and \ao{all nonzero} $z(t_k)$ (which in turn determines all the $z(t)$ with $t \geq t_k$ through Eq. (\ref{zequation})).
From Eq. (\ref{zequation}), we have that the expression within the infimum in Eq. (\ref{zinf}) is invariant under scaling of $z(t_k)$. So
we can conclude that, for any sequence of graphs $G(t)$ and measuring sets $S(t)$, the infimum is always achieved by some vector $z(t_k)$ with $||z(t_k)||_2=1$.
Now given a sequence of graphs $G(t)$ and a sequence of measuring sets $S(t)$, suppose $z(t_k)$ is a vector that achieves this infimum; let $S_+ \subset \{1, \ldots, n\}$ be the set of indices $i$ with $z_i(t_k) > 0$, $S_-$ be the set of indices $i$ with $z_i(t_k) < 0$, and $S_0$ be the set of indices with $z_i(t_k)=0$. Since $||z(t_k)||_2 = 1$ we have that at least one of $S_+, S_-$ is nonempty. Without loss of generality, suppose that $S_-$ is nonempty. Due to the conditions of uniform connectivity and uniform measurement speed, there is a first time $t' < t_{k+1}$ when at least one of the following two events happens: (i) some node $i' \in S_-$ is connected to a node $j' \in S_0 \cup S_+$ (ii) some node $i' \in S_-$ has a measurement of $\mu$.
In the former case, $z_{i'}(t')<0$ and $z_{j'}(t') \geq 0$ and consequently $(z_{i'}(t')-z_{j'}(t'))^2$ will be positive; in the latter case, $z_{i'}(t')<0$ and consequently
$z_{i'}^2(t')$ will be positive. In either case, the infimum of Eq. (\ref{zinf}) will be strictly positive.
\end{proof}
\smallskip Having established Proposition \ref{thm:conv}, we now turn to the proof of Theorem \ref{mainthm}. We will split the proof into several chunks. Recall
that Theorem \ref{mainthm} has two bounds: Eq. (\ref{eq:connected}) which holds when each graph $G(t)$ is connected and Eq. (\ref{eq:general}) which holds
in the more general case when the graph sequence $G(t)$ is $B$-connected. We begin by analyzing the first case. Our first lemma towards that end
provides an upper bound on the eigenvalues of the matrices $A(t)$ corresponding to connected $G(t)$.
\smallskip
\begin{lemma} Suppose each $G(t)$ is connected and at least one node makes a measurement at every time $t$. Then the largest eigenvalue of each matrix $A(t)$ satisfies $$\lambda_{\rm max}(A(t)) \leq 1 - \frac{1}{24 \mathcal{H}} ~~ \mbox{ for all } t,$$ \label{conneig}
\noindent where, recall, $\mathcal{H}$ is an upper bound on the hitting times in the lazy Metropolis walk on $G(t)$.
\end{lemma}
\smallskip
\begin{proof} Let us drop the argument $t$ and simply refer to $A(t)$ as $A$. Consider the iteration
\begin{equation} p(k+1) = A p(k). \label{aupdate} \end{equation} We argue that it has a probabilistic interpretation. Namely, let us transform the matrix $A$ into a stochastic matrix $A'$ in the following way: we introduce a new node $i'$ for every row $i$ of $A$ which has row sum less than $1$ and set
\[ [A']_{i,i'} = 1 - \sum_{j} [A]_{ij}, ~~~ [A']_{i',i'} = 1. \] Then $A'$ is a stochastic matrix, and moreover, observe that by construction $[A']_{i,i'}=1/4$ for every new node $i'$ that is introduced. Let us adopt the notation $\mathcal{I}$ to be the set of new nodes $i'$ added in this way, and we will use $\mathcal{N} = \{1, \ldots, n\}$ to refer to the original nodes.
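As a small illustration of this construction (with hypothetical numbers, chosen only so that one row is deficient): if
\[ A = \begin{pmatrix} 1/2 & 1/4 \\ 1/2 & 1/2 \end{pmatrix}, \]
whose first row sums to $3/4$, then we add a single new node $1'$ and obtain the stochastic matrix
\[ A' = \begin{pmatrix} 1/2 & 1/4 & 1/4 \\ 1/2 & 1/2 & 0 \\ 0 & 0 & 1 \end{pmatrix} \]
on the node set $\{1,2,1'\}$, in which the new node $1'$ is absorbing.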
We then have that if $p(0)$ is a stochastic vector (meaning it has nonnegative entries which sum to one), then $p(k)$ generated by Eq. (\ref{aupdate}) has the following interpretation: $p_j(k)$ is the probability that a random walk taking steps according to $A'$ is at node $j$ at time $k$. This is easily seen by induction:
clearly it holds at time $0$, and if it holds at time $k$, then it holds at time $k+1$ since $[A]_{ij} = [A']_{ij}$ if $i,j \in \mathcal{N}$ and $[A']_{i',j}=0$ for all $i' \in \mathcal{I}$ and $j \in \mathcal{N}$.
Next, we note that $\mathcal{I}$ is an absorbing set for the Markov chain with transition matrix $A'$, and moreover $||p(k)||_1$ is the probability that the
random walk starting at $p(0)$ is not absorbed in $\mathcal{I}$ by time $k$. Defining $T_i'$ to be the expected time until a random walk in $A'$ starting from $i$ is absorbed in the set $\mathcal{I}$, we have that by Markov's inequality,
\[ \left| \left|p \left( 2 \lceil \max_{i \in \mathcal{N}} T_i' \rceil \right) \right| \right|_1 \leq \frac{1}{2} \] for any stochastic $p(0)$. Thus for any nonnegative integer $k$,
\[ \left| \left|p \left( 2 k \lceil \max_{i \in \mathcal{N}} T_i' \rceil \right) \right| \right|_1 \leq \left( \frac{1}{2} \right)^k. \] Now by convexity of the $1$-norm, we have that in
fact for all initial vectors $p(0)$ (not necessarily stochastic ones),
\[ \left| \left| p \left( 2 k \lceil \max_{i \in \mathcal{N}} T_i' \rceil \right) \right| \right|_1 \leq \left( \frac{1}{2} \right)^k ||p(0)||_1 .\] By the Perron-Frobenius theorem, $\lambda_{\rm max}(A)$ is real and its corresponding eigenvector is real. Plugging it in for $p(0)$, we get that
\[ \lambda_{\rm max}^{2k \lceil \max_{i \in \mathcal{N}} T_i' \rceil} \leq \left( \frac{1}{2} \right)^k \]
or \begin{equation} \label{lambdap} \lambda_{\rm max} \leq \left( \frac{1}{2} \right)^{1/( 2 \lceil \max_{i \in \mathcal{N}} T_i' \rceil)} \leq 1 - \frac{1}{4 \lceil \max_{i \in \mathcal{N}} T_i' \rceil} \end{equation} where we used the inequality $(1/2)^x \leq 1 - x/2$ for all $x \in [0,1]$.
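For completeness, the last inequality is a standard convexity bound: $x \mapsto (1/2)^x = e^{-x \ln 2}$ is convex, and therefore lies below the chord joining its endpoint values on $[0,1]$,
\[ \left( \frac{1}{2} \right)^x \leq (1-x) \cdot 1 + x \cdot \frac{1}{2} = 1 - \frac{x}{2}, ~~~~ x \in [0,1]. \]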
It remains to rephrase this bound in terms of the hitting times in
the lazy Metropolis walk on $G$. We simply note that $[A']_{i,i'} = 1/4$ by construction, so
\[ \max_i T_i' \leq 4 ( \mathcal{H} + 1 ). \] Thus
\[ \lceil \max_i T_i' \rceil \leq 4 (\mathcal{H}+1) + 1 \leq 6 \mathcal{H}. \] Combining this bound with Eq. (\ref{lambdap}) proves the lemma.
\end{proof}
\smallskip
\ao{With this lemma in place, we can now proceed to prove the first half of Theorem \ref{mainthm}, namely the bound of Eq. (\ref{eq:connected}). }
\bigskip
\begin{proof}[Proof of Eq. (\ref{eq:connected})] We use Proposition \ref{remark:eqrewrite} to write a bound for $E[Z(t+1) ~|~ v(t)]$ in terms of the largest eigenvalue $\lambda_{\rm max}(A(t))$. Indeed, as we remarked previously in Eq. (\ref{yminusmu}),
\[ y(t+1) - \mu {\bf 1} = A(t) (v(t) - \mu {\bf 1} ) \] we therefore have that
\[ ||y(t+1) - \mu {\bf 1} ||_2^2 \leq \lambda_{\rm max}^2(A(t)) Z(t) \leq \lambda_{\rm max}(A(t)) Z(t) \] where we used the fact that $\lambda_{\rm max}(A(t)) \leq 1$ since $A(t)$ is a substochastic matrix. Next, we have
\[ ||q(t+1) - \mu {\bf 1}||_2^2 \leq \left( 1 - \Delta(t) \left( 1 - \lambda_{\rm max}(A(t)) \right) \right) Z(t) \] and finally
\[ E [ Z(t+1) ~|~ v(t) ] \leq \left[ 1 - \Delta(t) \left( 1 - \lambda_{\rm max}(A(t)) \right) \right] Z(t) + \Delta(t)^2 \frac{\aor{ M\sigma^2 + n(\sigma')^2}}{16}. \]
Let $t_k$ denote the times at which some node takes a measurement. Applying the above equation at times $t_k, t_{k}+1, \dots, t_{k+1}-1$ and using the eigenvalue bound of Lemma \ref{conneig} at time $t_{k}$ and the trivial bound $\lambda_{\rm max}(A(t)) \leq 1$ at times $t_{k}+1, \ldots, t_{k+1}-1$, we obtain
\begin{eqnarray*} E[Z(t_{k+1}) ~|~ v(t_k)] & \leq & \left( 1 - \frac{1}{24 \mathcal{H} t_k^{1-\epsilon}} \right) Z(t_k) + \frac{ M \sigma^2 + nT (\sigma')^2 }{16 t_k^{2 - 2 \epsilon}} \\
& \leq & \left( 1 - \frac{1}{24 \mathcal{H} t_{k+1}^{1-\epsilon}} \right) Z(t_k) + \frac{M \sigma^2 + nT(\sigma')^2}{16 t_k^{2 - 2 \epsilon}}.
\end{eqnarray*} Iterating expectations and applying Corollary \ref{effdecay}, we obtain that for
\begin{equation} \label{kbound} k \geq \left[ \frac{288 T \mathcal{H} }{\epsilon} \ln \left( \frac{96 T \mathcal{H} }{\epsilon} \right) \right]^{1/\epsilon},
\end{equation} we have
\[ E [ Z(t_k) ~|~ v(1) ] \leq \frac{(27/2) \, {\cal H} T \left( M\sigma^2 + nT(\sigma')^2 \right)}{k^{1 - \epsilon}} + Z(1) e^{-(k^\epsilon - 2)/(24 \mathcal{H} T\epsilon)}. \] Using the inequality $t_k \leq kT$,
\begin{equation} \label{expect-tk} E [ Z(t_k) ~|~ v(1) ] \leq 14 {\cal H} T \frac{ M \sigma^2 + n T(\sigma')^2}{(t_k/T)^{1 - \epsilon}} + Z(1) e^{-((t_k/T)^\epsilon - 2)/(24 \mathcal{H} T\epsilon)}.
\end{equation}
Finally, for any $t$ satisfying \begin{equation} \label{tlower} t \geq T + T \left[ \frac{288 T \mathcal{H} }{\epsilon} \ln \left( \frac{96 T \mathcal{H} }{\epsilon} \right) \right]^{1/\epsilon} \end{equation} there is some $t_k$ with $k$ satisfying Eq. (\ref{kbound}) within the last $T$ steps before $t$. We can therefore get an upper bound
on $E[Z(t) ~|~ v(1)]$ by applying Eq. (\ref{expect-tk}) to that last $t_k$, and noting that the expected increase from that $Z(t_k)$ to $Z(t)$ is bounded as
\[ E[Z(t) ~|~ v(1)] - E[Z(t_k) ~|~ v(1)] \leq \frac{nT (\sigma')^2}{t_k^{2-2\epsilon}} \leq \frac{n T (\sigma')^2}{t_k^{1-\epsilon}} \leq \frac{n T (\sigma')^2}{(t/T-1)^{1-\epsilon}}. \] This implies that for $t$ satisfying Eq. (\ref{tlower}), we have
\[ E[ Z(t) ~|~ v(1) ] \leq 15 {\cal H} T \frac{M \sigma^2 + n T(\sigma')^2}{(t/T-1)^{1 - \epsilon}} + Z(1) e^{-((t/T-1)^\epsilon - 2)/(24 \mathcal{H} T\epsilon)}. \] After some simple algebra, this is exactly the bound of the theorem.
\end{proof}
\bigskip
We now turn to the proof of the second part of Theorem \ref{mainthm}, namely Eq. (\ref{eq:general}). Its proof requires a certain inequality between quadratic forms in the vector $v(t)$ which we
separate into the following lemma.
\bigskip
\begin{lemma} Let $t_1 = 1$ and $t_k = (k-1) \max(T,B)$ for $k > 1$, and assume that the entries of the vector $v(t_k)$ satisfy \[ v_1(t_k) < v_2(t_k) < \cdots < v_n(t_k). \] Further, let us assume that none of the $v_i(t_k)$ equal $\mu$, and let us define $p_-$ to be the largest index such that $v_{p_-}(t_k)<\mu$ and $p_+$ to be the smallest index such that
$v_{p_+}(t_k)> \mu$. We then have \begin{eqnarray} \sum_{m=t_k}^{t_{k+1}-1} \sum_{(k,l) \in E(m)} E[(v_k(m)-v_l(m))^2 ~|~ v(t_k)] + \sum_{k \in S(m)} E[(v_k(m) - \mu)^2 ~|~ v(t_k)] & \geq & \nonumber\\ (v_{p_-}(t_k) - \mu)^2 + (v_{p_+}(t_k) - \mu)^2 + \sum_{i=1, \ldots, n-1, ~~ i \neq p_{-}} (v_i(t_k) - v_{i+1}(t_k))^2 && \label{eq:quantdecbound} \end{eqnarray} \label{lemma:quantdecbound}
\end{lemma}
\begin{proof} \ao{The proof parallels a portion of the proof of \aor{Proposition} \ref{thm:conv}. First, we change variables by defining $z(t)$ as \[ z(t) = E[ \aor{v}(t) - \mu {\bf 1} ~|~ v(t_k)] \] for $t \geq t_k$. We claim that
\begin{equation} \label{claimeq} \sum_{m=t_k}^{t_{k+1}-1} \sum_{(k,l) \in E(m)} (z_k(m)-z_l(m))^2 + \sum_{k \in S(m)} z_k^2(m) \geq z_{p_-}^2(t_k) + z_{p_+}^2(t_k) + \sum_{i=1, \ldots, n-1, ~~ i \neq p_-} (z_i(t_k) - z_{i+1}(t_k))^2. \end{equation} The claim immediately implies the lemma after application of the inequality $E[X^2] \geq E[X]^2$.}
\ao{Now we turn to the proof of the claim, \ao{which is similar to the proof of a lemma from \cite{noot09}.} We will associate to each term on the right-hand side of
Eq. (\ref{claimeq}) a term on the left-hand side of Eq. (\ref{claimeq}), and we will argue that each term on the left-hand side is at least as big as the sum of
all terms on the right-hand side associated with it. }
\ao{To describe this association, we first introduce some new notation. We denote the set of nodes $\{1, \ldots, l\}$ by $S_l$; its complement, the set $\{l+1, \ldots, n\}$, is then $S_l^c$. If $l \neq p_-$, we will abuse notation by saying that $S_l$ contains zero if $l \geq p_+$; else, we say that $S_l$ does not contain zero and $S_l^c$ contains zero. However, in the case of $l=p_-$, we will say that neither $S_{p_-}$ nor $S_{p_-}^c$ contains zero. We will say that $S_l$ ``is crossed by an edge'' at time $m$ if a node in $S_l$ is connected to a node in $S_l^c$ at time $m$. For $l \neq p_-$, we will say that $S_l$ is ``crossed by a measurement'' at time $m$ if a node in whichever of $S_l,S_l^c$ does not contain zero has a measurement at time $m$. We will say that $S_{p_-}$ is ``crossed by a measurement from the left'' at time $m$ if a node in $S_{p_-}$ has a measurement at time $m$; we will say that it is ``crossed by a measurement from the right'' at time $m$ if a node in $S_{p_-}^c$ has a measurement at time $m$. Note that the assumption of uniform connectivity means that every $S_l$ is crossed by an edge at some time $m \in \{ t_k, \ldots, t_{k+1}-1\}$. It may happen that $S_l$ is also crossed by measurements, but this is not required by the uniform measurement assumption. Nevertheless, the uniform measurement assumption implies that $S_{p_-}$ is crossed by a measurement at some time $m \in \{ t_k, \ldots, t_{k+1}-1\}$. Finally, we will say that $S_l$ is crossed at time $m$ if it is either crossed by an edge or crossed by a measurement (plainly or from the left or right).}
\ao{We next describe how we associate terms on the right-hand side of Eq. (\ref{claimeq}) with terms on the left-hand side of Eq. (\ref{claimeq}).
Suppose $l$ is any number in $1, \ldots, n-1$ except $p_{-}$; consider the {\em first} time $S_l$ is crossed; let this be time $m$. If the crossing is by an edge, then let $(i,j)$ be any edge which goes between $S_l$ and $S_l^c$ at time $m$. We will associate $(z_l(t_k)-z_{l+1}(t_k))^2$ \aor{on the right-hand side of Eq. (\ref{claimeq})} with $(z_i(m)-z_j(m))^2$ \aor{on the left-hand side of Eq. (\ref{claimeq})}; as a shorthand for this, we will say that we associate index $l$ with the edge $(i,j)$ at time $m$. On the other hand, if $S_l$ is crossed by a measurement\footnote{If $S_l$ is crossed both by an edge and by a measurement at time $m$, we will say it is crossed by an edge first; we keep this tie-breaking convention in favor of edges throughout the remainder of the proof.} at time $m$, let $i$ be a node in whichever of $S_l, S_l^c$ does not contain zero which has a measurement at time $m$. We associate $(z_l(t_k) - z_{l+1}(t_k))^2$ with $z_i^2(m)$; as a shorthand for this, we will say that we associate index $l$ with a measurement by $i$ at time $m$. }
\ao{Finally, we describe the associations for the terms $z_{p_-}^2(t_k)$ and $z_{p_+}^2(t_k)$, which are more intricate. Again, let us suppose that $S_{p_-}$ is crossed first at time $m$; if the crossing is by an edge, then we associate both these terms with any edge $(i,j)$ crossing $S_{p_-}$ at time $m$. If, however, $S_{p_-}$ is crossed first by a measurement from the left, then we associate $z_{p_-}^2(t_k)$ with $z_i^2(m)$, where $i$ is any node in $S_{p_-}$ having a measurement at time $m$. We then consider $u$, which is the first time $S_{p_-}$ is crossed by either an edge or a measurement from the right; if it is crossed by an edge, then we associate $z_{p_+}^2(t_k)$ with $(z_i(u)-z_j(u))^2$ for any edge $(i,j)$ going between $S_{p_-}$ and $S_{p_-}^c$ at time $u$; else, we associate it with $z_i^2(u)$ where $i$ is any node in $S_{p_-}^c$ having a measurement at time $u$. On the other hand, if $S_{p_-}$ is crossed first by a measurement from the right, then we flip the associations:
we associate $z_{p_+}^2(t_k)$ with $z_i^2(m)$, where $i$ is any node in $S_{p_-}^c$ having a measurement at time $m$. We then consider $u$, which is now the first time $S_{p_-}$ is crossed by either an edge or a measurement from the left. If $S_{p_-}$ is crossed by an edge first, then we associate $z_{p_-}^2(t_k)$ with $(z_i(u)-z_j(u))^2$ for any edge $(i,j)$ going between $S_{p_-}$ and $S_{p_-}^c$ at time $u$; else, we associate it with $z_i^2(u)$ where $i$ is any node in $S_{p_-}$ having a measurement at time $u$. }
\aor{It will be convenient for us to adopt the following shorthand: whenever we associate $z_{p_-}^2(t_k)$ with an edge or measurement, we will say that we associate the {\em border} $p_-$,
and likewise for $p_+$. Thus we will refer to the association of {\em indices} $l_1, \ldots, l_r$ and {\em borders} $p_-, p_+$ with the understanding that the former refer
to the terms $(z_{l_i}(t_k) - z_{l_i + 1}(t_k))^2$ while the latter refer to the terms $z_{p_-}^2(t_k)$ and $z_{p_+}^2(t_k)$.}
\ao{We now go on to prove that every term on the left-hand side of Eq. (\ref{claimeq}) is at least as big as the sum of all terms on the right-hand side of Eq. (\ref{claimeq}) associated with it. }
\ao{Let us first consider the terms $(z_i(m)-z_j(m))^2$ on the left-hand side of Eq. (\ref{claimeq}). Suppose the edge $(i,j)$ with $i<j$ at time $m$ was associated with indices $l_1 < l_2 < \cdots < l_r$.
\aor{It must be that $i \leq l_1$ while $j \geq l_{r}+1$.} The key observation is that if any $S_l$ has not been crossed before time $m$ then \[ \max_{i=1, \ldots, l} z_i(m) \leq z_l(t_k) \leq z_{l+1}(t_k) \leq \min_{i=l+1, \ldots, n} z_i(m). \] Consequently, \[ z_i(m) \leq z_{l_1}(t_k) \leq z_{l_1+1}(t_k) \leq z_{l_2}(t_k) \leq z_{l_2+1}(t_k) \leq \cdots \leq z_{l_r}(t_k) \leq z_{l_r+1}(t_k) \leq z_j(m) \] which implies that
\[ (z_i(m) - z_j(m))^2 \geq (z_{l_1+1}(t_k) - z_{l_1}(t_k))^2 + (z_{l_2+1}(t_k)-z_{l_2}(t_k))^2 + \cdots + (z_{l_r+ 1}(t_k)-z_{l_r}(t_k))^2. \]
This proves the statement in the case when the edge $(i,j)$ is associated with indices
$l_1 < l_2 < \cdots < l_r$.}
\ao{Suppose now that the edge $(i,j)$ is associated with indices $l_1 < l_2 < \cdots <l_r$ as well as both borders $p_-,p_+$. This happens when
every $S_{l_i}$ and $S_{p_-}$ is crossed for the first time by $(i,j)$, so that we can simply repeat the argument in the previous paragraph
to obtain \[ (z_i(m) - z_j(m))^2 \geq (z_{l_1+1}(t_k) - z_{l_1}(t_k))^2 + (z_{l_2+1}(t_k)-z_{l_2}(t_k))^2 + \cdots + (z_{l_r+ 1}(t_k)-z_{l_r}(t_k))^2 + (z_{p_-}(t_k) \aor{-} z_{p_+}(t_k))^2 \] which, since $(z_{p_-}(t_k) - z_{p_+}(t_k))^2 \geq z_{p_-}^2(t_k) + z_{p_+}^2(t_k)$ proves the statement in this case.}
\ao{Suppose now that the edge $(i,j)$ with $i<j$ at time $m$ is associated with indices $l_1 < l_2 < \cdots < l_r$ as well as the border $p_-$. \aor{From our
association rules, this can only happen in the following case:} every $S_{l_i}$ has not been crossed before time $m$, $S_{p_-}$ is being crossed by an edge at time $m$ and has been crossed from the right by a measurement but has not been crossed from the left before time $m$, nor has it been crossed by an edge before time $m$. Consequently, in addition to the inequalities $i \leq l_1, j \geq l_r+1$ we have the additional inequalities $i \leq p_-$ while $j \geq p_+$ (since $(i,j)$ crosses $S_{p_-}$). Because $S_{l_r}$ has not been crossed before and $S_{p_-}$ has not been crossed by an edge or measurement from the left before, we have \begin{eqnarray*}
z_i(m) & \leq & \min(z_{p_-}(t_k), z_{l_1}(t_k)) \\
z_j(m) & \geq & \max(0, z_{l_r+1}(t_k))
\end{eqnarray*} so that
\begin{eqnarray*} (z_i(m)-z_j(m))^2 \geq (z_{l_1+1}(t_k) - z_{l_1}(t_k))^2 + (z_{l_2+1}(t_k)-z_{l_2}(t_k))^2 + \cdots + (z_{l_r+ 1}(t_k)-z_{l_r}(t_k))^2 + z_{p_-}^2(t_k)
\end{eqnarray*} which proves the statement in this case.}
\ao{The proof when the edge $(i,j)$ is associated with indices $l_1 < \cdots < l_r$ and $z_{p_+}^2(t_k)$ is similar, and we omit it. Likewise, the cases when $(i,j)$ is associated with just one of the borders and no indices, or with both borders and no indices, are proved with an identical argument. Consequently, we have now proved
the desired statement for all the terms of the form $(z_i(m)-z_j(m))^2$.}
\ao{It remains to consider the terms $z_i^2(m)$. So let us suppose that the term $z_i^2(m)$ is associated with indices $l_1 < l_2 < \cdots < l_r$ as well as possibly one of the borders $p_-, p_+$. Note that due to the way we defined the associations it cannot be associated with them both. Moreover, again due to the
association rules, we will either have
$l_1 < \cdots < l_r < p_-$ or $p_+ \leq l_1 < \cdots < l_r$; let us assume it is the former as the proof in the latter case is similar. Since $S_{l_1}$ has
not been crossed before, we have that
\[ z_i(m) \leq z_{l_1}(t_k) \leq z_{l_1+1}(t_k) \leq z_{l_2}(t_k) \leq z_{l_2+1}(t_k) \leq \cdots \leq z_{l_r}(t_k) \leq z_{l_r+1}(t_k) \leq z_{p_-}(t_k) < 0 \]
and therefore
\[ z_i^2(m) \geq (z_{l_1+1}(t_k) - z_{l_1}(t_k))^2 + (z_{l_2+1}(t_k)-z_{l_2}(t_k))^2 + \cdots + (z_{l_r+ 1}(t_k)-z_{l_r}(t_k))^2 + (z_{p_-}(t_k)-0)^2 \] which proves the result in this case. Finally, the case when $z_i(m)$ is associated with just one of the borders is proved with an identical argument. This concludes the proof.}
\end{proof}
\smallskip
\ao{We are now finally able to prove the very last piece of Theorem \ref{mainthm}, namely
Eq. (\ref{eq:general}).}
\bigskip
\begin{proof}[Proof of Eq. (\ref{eq:general})] As in the statement of Lemma \ref{lemma:quantdecbound}, we choose $t_1=1$, and
$t_k = (k-1) \max(T,B)$ for $k > 1$. \ao{Observe that by a continuity argument Lemma \ref{lemma:quantdecbound} holds even with the strict inequalities between $v_i(t_k)$ replaced with nonstrict inequalities and without the assumption that none of the $v_i(t_k)$ are \aor{equal to $\mu$}. Moreover, using the inequality \[ (v_{p_-}(t_k) - \mu)^2 + (v_{p_+}(t_k) - \mu)^2 \geq \frac{(v_{p_-}(t_k) - \mu)^2 + (v_{p_+}(t_k) - \mu)^2 + (v_{p_-}(t_k) - v_{p_+}(t_k))^2}{4}, \] which follows from the elementary bound $(a-b)^2 \leq 2a^2 + 2b^2$, we have that Lemma \ref{lemma:quantdecbound} implies that} \[ \sum_{m=t_k}^{t_{k+1}-1} \sum_{(k,l) \in E(m)} E[ (v_k(m)-v_l(m))^2 ~|~ v(t_k)] + \sum_{k \in S(m)} E[ (v_k(m) - \mu)^2 ~|~ v(t_k)] \geq \frac{1}{4} \kappa(L_n) Z(t_k).\] Because $\Delta(t)$ is decreasing and the degree of any vertex at any time is at most $d_{\rm max}$, this in turn implies \begin{footnotesize}
\[ \sum_{m=t_k}^{t_{k+1}-1} \frac{\Delta(m)}{8} \sum_{(k,l) \in E(m)} \frac{E[ (v_k(m)-v_l(m))^2 ~|~ v(t_k)]}{\max (d_k(m), d_l(m))} + \frac{\Delta(m)}{4 } \sum_{k \in S(m)} E[ (v_k(m) - \mu)^2 ~|~ v(t_k)] \geq \frac{\Delta(t_{k+1})}{32 d_{\rm max}} \kappa(L_n) Z(t_k).\] \end{footnotesize}
Now appealing to Eq. (\ref{decreaseineq}), we have
\[ E[Z(t_{k+1}) ~|~ v(t_k) ] \leq \left(1 - \frac{\Delta(t_{k+1}) \kappa(L_n)}{32 d_{\rm max} } \right) Z(t_k) + \Delta(t_k)^2 \frac{\aor{ M \sigma^2 + n \max(T,B) (\sigma')^2}}{16}. \]
Now applying Corollary \ref{effdecay} as well as the bound $\kappa(L_n) \geq 1/n^2$ from Lemma \ref{lemma:snonnegativity}, we get that for
\begin{equation} \label{finalkbound} k \geq \left[ \frac{384 n^2 d_{\rm max} \left( 1 + \max(T,B) \right) }{\epsilon} \ln \left( \frac{128 n^2 d_{\rm max} \left( 1 + \max(T,B) \right) }{\epsilon} \right) \right]^{1/\epsilon} \end{equation} we will have
\[ E[ Z(t_k) ~|~ v(1) ] \leq (9/16) n^2 d_{\rm max} T \frac{ M \sigma^2 + n (1+\max(T,B)) (\sigma')^2}{k^{1 - \epsilon}} + Z(1) e^{-(k^\epsilon - 2)/(32n^2 d_{\rm max} \left( 1 + \max(T,B) \right) \epsilon)}. \] Now using $t_k = (k-1) \max(T,B) $ for $k >1$, we have
\[ E[ Z(t_k) ~|~ v(1) ] \leq n^2 d_{\rm max} \max(T,B) \frac{ M \sigma^2 + n \max(T,B) (\sigma')^2}{\left( 1 + t_k/\max(T,B) \right)^{1 - \epsilon}} + Z(1) e^{-( \left(1 + t_k/\max(T,B)\right)^\epsilon - 2)/(32n^2 d_{\rm max} \left( 1 + \max(T,B) \right) \epsilon)}. \]
For a general time $t$, we have that as long as
\[ t \geq \max(T,B) + \max(T,B) \left[ \frac{384 n^2 d_{\rm max} \left( 1 + \max(T,B) \right) }{\epsilon} \ln \left( \frac{128 n^2 d_{\rm max} \left( 1 + \max(T,B) \right) }{\epsilon} \right) \right]^{1/\epsilon} \] we have that there exists a $t_k \geq t - \max(T,B)$ with $k$ satisfying the lower bound of Eq. (\ref{finalkbound}). Moreover, the increase from this $E[Z(t_k) ~|~ v(1)]$ to $E[Z(t) ~|~ v(1)]$ is upper bounded by $n \max(T,B) (\sigma')^2/t_k^{2 - 2 \epsilon}$. Thus:
\[ E[ Z(t) ~|~ v(1) ] \leq 2 n^2 d_{\rm max} \max(T,B) \frac{ M \sigma^2 + n (1+\max(T,B)) (\sigma')^2}{\left( t/\max(T,B) \right)^{1 - \epsilon}} + Z(1) e^{-( \left(t/\max(T,B)\right)^\epsilon - 2)/(32n^2 d_{\rm max} \left( 1 + \max(T,B) \right) \epsilon)}. \] After some simple algebra, this is exactly what we sought to prove.
\end{proof}
\section{Simulations\label{sec:simul}} We report here on several simulations of our learning protocol. These simulations confirm the broad outlines of the bounds we have derived; the convergence to $\mu$ takes place at a rate broadly consistent with inverse polynomial decay in $t$ and the scaling with $n$ appears to be polynomial as well.
Figure \ref{threeplots} shows plots of the distance from $\mu$ for the complete graph, the line graph \ao{(with one of the endpoint nodes doing the sampling)}, and the star graph \ao{(with the center node doing the sampling)}, each on $40$ nodes. These are the three graphs in the left column of the figure. We caution that there is no reason to believe these charts capture the correct asymptotic behavior as $t \rightarrow \infty$. Intriguingly, the star graph and the complete graph appear to have very similar performance. By contrast, the performance of the line graph is an order of magnitude inferior to the performance of either of these; it takes the line graph on $40$ nodes on the order of $400,000$ iterations to reach roughly the same level of accuracy that the complete graph and star graph reach after about $10,000$ iterations.
Moreover, Figure \ref{threeplots} also shows the scaling with the number of nodes $n$ on the graphs in the right column of the figure. The graphs show the time until $||v(t) - \mu {\bf 1}||_{\infty}$ decreases below a certain threshold as a function of the number of nodes. Over the range shown, we see scaling that could plausibly be superlinear for the line graph, and essentially linear for both the complete graph and the star graph.
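For readers who wish to reproduce these experiments, the following is a minimal sketch of the simulation loop in Python for the line graph. It assumes the lazy Metropolis weights $1/(2\max(d_i,d_j))$, the stepsize $\Delta(t)=1/t^{1/4}$ used in our simulations, and a measuring node which receives a $1/4$ fraction of a fresh noisy measurement at each step (consistent with the deficit $[A']_{i,i'}=1/4$ noted in the proof of Lemma \ref{conneig}); all function names are ours, and the exact weighting in the protocol definition may differ.
\begin{verbatim}
import numpy as np

def lazy_metropolis(adj):
    # W[i,j] = 1/(2*max(d_i,d_j)) on edges; leftover mass on the diagonal
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (2.0 * max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

def simulate(n=40, steps=400000, mu=1.0, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    adj = np.zeros((n, n), dtype=bool)   # line graph; node 0 measures
    for i in range(n - 1):
        adj[i, i + 1] = adj[i + 1, i] = True
    A = lazy_metropolis(adj)
    A[0] *= 0.75                         # measuring row has deficit 1/4
    v = rng.uniform(0.0, 5.0, size=n)    # initial values in [0,5]
    for t in range(1, steps + 1):
        delta = t ** (-0.25)             # stepsize 1/t^{1/4}
        y = A @ v
        y[0] += 0.25 * (mu + sigma * rng.standard_normal())
        v = (1.0 - delta) * v + delta * y
    return np.abs(v - mu).max()          # ||v(t) - mu*1||_inf

print(simulate())
\end{verbatim}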
\begin{figure}[h]
\begin{center}$
\begin{array}{cc}
\includegraphics[width=2.5in]{complete1.eps} &
\includegraphics[width=2.5in]{complete2.eps} \\
\includegraphics[width=2.5in]{star1.eps} &
\includegraphics[width=2.5in]{star2.eps} \\
\includegraphics[width=2.5in]{line1.eps} &
\includegraphics[width=2.5in]{line2.eps}
\end{array}$
\end{center}
\caption{The graphs in the first column show $||v(t) - \mu {\bf 1}||_{\infty}$ as a function of the number of iterations in a network of $40$ nodes starting from a random vector with entries uniformly random in $[0,5]$. The graphs in the second column show how long it takes $||v(t) - \mu {\bf 1}||_{\infty}$ to shrink below $1/2$ as a function of the number of nodes; initial values are also uniformly random in $[0,5]$. The two graphs in the first row correspond to the complete graph; the two graphs in the middle row correspond to the star graph; the two graphs in the last row
correspond to the line graph. In each case, exactly one node is doing
the measurements; in the star graph it is the center vertex and in the line graph it is one of the endpoint vertices. Stepsize is chosen to be $1/t^{1/4}$ for all three simulations.} \label{threeplots}
\end{figure}
Finally, we include a simulation for the lollipop graph, defined to be a complete graph on $n/2$ vertices joined to a line graph on $n/2$ vertices. The lollipop graph often appears as an extremal graph for various random walk properties (see, for example, \cite{bw90}). \ao{The node at the end of the stem, i.e., the node which is furthest from the complete subgraph, is doing the sampling.} The scaling with the number of nodes is considerably worse than for the other graphs we have simulated here.
\begin{figure}[h]
\begin{center}$
\begin{array}{cc}
\includegraphics[width=2.5in]{lollipop1.eps} &
\includegraphics[width=2.5in]{lollipop2.eps}
\end{array}$
\end{center}
\caption{The plot on the left shows $||v(t) - \mu {\bf 1}||_{\infty}$ as a function of the number of iterations for the lollipop graph on $40$ nodes; the plot on the right shows the time until $||v(t) - \mu {\bf 1}||_{\infty}$ shrinks below $0.5$ as a function of the number of nodes $n$. In each case, exactly one node is performing the measurements, and it is the node farthest from the complete subgraph. The starting point is a random vector with entries in $[0,5]$ for both simulations, and the stepsize is $1/t^{1/4}$. } \label{lollipop}
\end{figure}
\ao{We close this section by emphasizing that the learning speed also depends on the precise location of the sampling node within the graph. While our results in this paper bound the worst-case performance over all choices of sampling node, it may very well be that by appropriately choosing the sensing nodes, better performance relative to our bounds and relative to these simulations can be achieved.}
\section{Conclusion\label{sec:concl}} We have proposed a model for cooperative learning by multi-agent systems facing time-varying connectivity and intermittent measurements. We have exhibited a protocol capable of learning an unknown vector from independent measurements in this setting and have proved quantitative bounds on its learning speed. Crucially, these bounds depend on the number of agents $n$ only polynomially, leading to reasonable scaling for our protocol. We note that the sieve constant of a graph, a new measure of connectivity we introduced, played a central role in our analysis. On sequences of
connected graphs, the largest hitting time turned out to be the most relevant combinatorial primitive.
Our research points to a number of intriguing open questions. Our results are for undirected graphs, and it is unclear whether there is a learning protocol which achieves similar bounds (i.e., a learning speed which depends only polynomially on $n$) on directed graphs. It appears that our bounds on the learning speed are loose by several orders of magnitude when compared to simulations, so the learning speeds we have presented in this paper could potentially be further improved. It is also possible that a different protocol provides a faster learning speed than the one we have presented here.
Finally, and most importantly, it is of interest to develop a general theory of decentralized learning capable of handling situations in which complex concepts need to be learned by a distributed network subject to time-varying connectivity and intermittent arrival of new information. Consider, for example, a group of UAVs all of which need to learn a new strategy to deal with an unforeseen situation, say, how to perform formation maintenance in the face \ao{of a particular pattern of turbulence}. Given that selected nodes can try different strategies, and given that nodes can observe the actions and the performance of neighboring nodes, is it possible for the entire network of nodes to collectively learn the best possible strategy? A theory of general-purpose decentralized learning, designed to parallel the theory of PAC (Probably Approximately Correct) learning in the centralized case, is warranted.
\section{Acknowledgements} An earlier version of this paper published in the CDC proceedings had an incorrect decay rate with $t$ in the main result. The authors are grateful to Sean Meyn for pointing out this error.
\theoremstyle{remark}
\newtheorem{rmk}[thm]{Remark}
\newcommand{{\mathbb{A}}}{{\mathbb{A}}}
\newcommand{{\mathbb{B}}}{{\mathbb{B}}}
\newcommand{{\mathbb{C}}}{{\mathbb{C}}}
\newcommand{{\mathbb{D}}}{{\mathbb{D}}}
\newcommand{{\mathbb{E}}}{{\mathbb{E}}}
\newcommand{{\mathbb{F}}}{{\mathbb{F}}}
\newcommand{{\mathbb{G}}}{{\mathbb{G}}}
\newcommand{{\mathbb{H}}}{{\mathbb{H}}}
\newcommand{{\mathbb{I}}}{{\mathbb{I}}}
\newcommand{{\mathbb{J}}}{{\mathbb{J}}}
\newcommand{{\mathbb{K}}}{{\mathbb{K}}}
\newcommand{{\mathbb{L}}}{{\mathbb{L}}}
\newcommand{{\mathbb{M}}}{{\mathbb{M}}}
\newcommand{{\mathbb{N}}}{{\mathbb{N}}}
\newcommand{{\mathbb{O}}}{{\mathbb{O}}}
\newcommand{{\mathbb{P}}}{{\mathbb{P}}}
\newcommand{{\mathbb{Q}}}{{\mathbb{Q}}}
\newcommand{{\mathbb{R}}}{{\mathbb{R}}}
\newcommand{{\mathbb{S}}}{{\mathbb{S}}}
\newcommand{{\mathbb{T}}}{{\mathbb{T}}}
\newcommand{{\mathbb{U}}}{{\mathbb{U}}}
\newcommand{{\mathbb{V}}}{{\mathbb{V}}}
\newcommand{{\mathbb{W}}}{{\mathbb{W}}}
\newcommand{{\mathbb{X}}}{{\mathbb{X}}}
\newcommand{{\mathbb{Y}}}{{\mathbb{Y}}}
\newcommand{{\mathbb{Z}}}{{\mathbb{Z}}}
\newcommand{{\mathcal A}}{{\mathcal A}}
\newcommand{{\mathcal B}}{{\mathcal B}}
\newcommand{{\mathcal C}}{{\mathcal C}}
\newcommand{{\mathcal D}}{{\mathcal D}}
\newcommand{{\mathcal E}}{{\mathcal E}}
\newcommand{{\mathcal F}}{{\mathcal F}}
\newcommand{{\mathcal G}}{{\mathcal G}}
\newcommand{{\mathcal H}}{{\mathcal H}}
\newcommand{{\mathcal I}}{{\mathcal I}}
\newcommand{{\mathcal J}}{{\mathcal J}}
\newcommand{{\mathcal K}}{{\mathcal K}}
\newcommand{{\mathcal L}}{{\mathcal L}}
\newcommand{{\mathcal M}}{{\mathcal M}}
\newcommand{{\mathcal N}}{{\mathcal N}}
\newcommand{{\mathcal O}}{{\mathcal O}}
\newcommand{{\mathcal P}}{{\mathcal P}}
\newcommand{{\mathcal Q}}{{\mathcal Q}}
\newcommand{{\mathcal R}}{{\mathcal R}}
\newcommand{{\mathcal S}}{{\mathcal S}}
\newcommand{{\mathcal T}}{{\mathcal T}}
\newcommand{{\mathcal U}}{{\mathcal U}}
\newcommand{{\mathcal V}}{{\mathcal V}}
\newcommand{{\mathcal W}}{{\mathcal W}}
\newcommand{{\mathcal X}}{{\mathcal X}}
\newcommand{{\mathcal Y}}{{\mathcal Y}}
\newcommand{{\mathcal Z}}{{\mathcal Z}}
\newcommand{{\mathfrak{a}}}{{\mathfrak{a}}}
\newcommand{{\mathfrak{b}}}{{\mathfrak{b}}}
\newcommand{{\mathfrak{c}}}{{\mathfrak{c}}}
\newcommand{{\mathfrak{d}}}{{\mathfrak{d}}}
\newcommand{{\mathfrak{e}}}{{\mathfrak{e}}}
\newcommand{{\mathfrak{F}}}{{\mathfrak{F}}}
\newcommand{{\mathfrak{g}}}{{\mathfrak{g}}}
\newcommand{{\mathfrak{h}}}{{\mathfrak{h}}}
\newcommand{{\mathfrak{i}}}{{\mathfrak{i}}}
\newcommand{{\mathfrak{j}}}{{\mathfrak{j}}}
\newcommand{{\mathfrak{k}}}{{\mathfrak{k}}}
\newcommand{{\mathfrak{l}}}{{\mathfrak{l}}}
\newcommand{{\mathfrak{m}}}{{\mathfrak{m}}}
\newcommand{{\mathfrak{M}}}{{\mathfrak{M}}}
\newcommand{{\mathfrak{n}}}{{\mathfrak{n}}}
\newcommand{{\mathfrak{o}}}{{\mathfrak{o}}}
\newcommand{{\mathfrak{p}}}{{\mathfrak{p}}}
\newcommand{{\mathfrak{q}}}{{\mathfrak{q}}}
\newcommand{{\mathfrak{r}}}{{\mathfrak{r}}}
\newcommand{{\mathfrak{s}}}{{\mathfrak{s}}}
\newcommand{{\mathfrak{t}}}{{\mathfrak{t}}}
\newcommand{{\mathfrak{u}}}{{\mathfrak{u}}}
\newcommand{{\mathfrak{v}}}{{\mathfrak{v}}}
\newcommand{{\mathfrak{w}}}{{\mathfrak{w}}}
\newcommand{{\mathfrak{x}}}{{\mathfrak{x}}}
\newcommand{{\mathfrak{y}}}{{\mathfrak{y}}}
\newcommand{{\mathfrak{z}}}{{\mathfrak{z}}}
\newcommand{{\ \longrightarrow\ }}{{\ \longrightarrow\ }}
\newcommand{{\ \longleftarrow\ }}{{\ \longleftarrow\ }}
\newcommand{\big\langle}{\big\langle}
\newcommand{\big\rangle}{\big\rangle}
\newcommand{\Big\langle}{\Big\langle}
\newcommand{\Big\rangle}{\Big\rangle}
\newcommand{{q \frac{d}{dq}}}{{q \frac{d}{dq}}}
\newcommand{{\mathsf{p}}}{{\mathsf{p}}}
\newcommand{{\mathrm{ch}}}{{\mathrm{ch}}}
\DeclareMathOperator{\Hilb}{Hilb}
\newcommand{{\mathrm{rk}}}{{\mathrm{rk}}}
\newcommand{\mathbin{\mkern-6mu\fatslash}}{\mathbin{\mkern-6mu\fatslash}}
\newcommand{\mathfrak{Coh}(S)}{\mathfrak{Coh}(S)}
\newcommand{\mathfrak{Coh}_{r}(S)}{\mathfrak{Coh}_{r}(S)}
\newcommand{\mathfrak{Coh}^{\sharp}_{r}(S)}{\mathfrak{Coh}^{\sharp}_{r}(S)}
\newcommand{\mathfrak{Coh}^{\sharp}(S)}{\mathfrak{Coh}^{\sharp}(S)}
\newcommand{\mathrm{Coh}}{\mathrm{Coh}}
\newcommand{\mathsf{Coh}}{\mathsf{Coh}}
\newcommand{{\mathrm{tr}}}{{\mathrm{tr}}}
\newcommand{{\mathcal{H} om}}{{\mathcal{H} om}}
\DeclareFontFamily{OT1}{rsfs}{}
\DeclareFontShape{OT1}{rsfs}{n}{it}{<-> rsfs10}{}
\DeclareMathAlphabet{\curly}{OT1}{rsfs}{n}{it}
\renewcommand\hom{\curly H\!om}
\newcommand\ext{\curly Ext}
\newcommand\Ext{\operatorname{Ext}}
\newcommand\Hom{\operatorname{Hom}}
\newcommand{\mathbb{P}}{\mathbb{P}}
\newcommand\Id{\operatorname{Id}}
\newcommand\Spec{\operatorname{Spec}}
\newcommand*\dd{\mathop{}\!\mathrm{d}}
\newcommand{{\overline M}}{{\overline M}}
\newcommand{\mathrm{td}}{\mathrm{td}}
\newcommand{\mathrm{CH}}{\mathrm{CH}}
\newcommand{\mathop{\rm Pic}\nolimits}{\mathop{\rm Pic}\nolimits}
\newcommand{\mathsf{PT}}{\mathsf{PT}}
\newcommand{\mathsf{DT}}{\mathsf{DT}}
\newcommand{\mathsf{GW}}{\mathsf{GW}}
\newcommand{{\mathrm{Sym}}}{{\mathrm{Sym}}}
\newcommand{\mathsf{Z}}{\mathsf{Z}}
\newcommand{{\mathrm{ev}}}{{\mathrm{ev}}}
\newcommand{{\mathsf{ev}}}{{\mathsf{ev}}}
\newcommand{\underline{\CC oh}}{\underline{{\mathcal C} oh}}
\newcommand\Tor{\operatorname{Tor}}
\newcommand\Map{\operatorname{Map}}
\newcommand{\mathrm{Eff}}{\mathrm{Eff}}
\newcommand{\mathsf{br}}{\mathsf{br}}
\newcommand{\mathrm{Aut}}{\mathrm{Aut}}
\newcommand{\mathbin{\!\!\pmb{\fatslash}}}{\mathbin{\!\!\pmb{\fatslash}}}
\newcommand{\mathfrak{Coh}(S)}{\mathfrak{Coh}(S)}
\newcommand{\mathfrak{Coh}^{r}(S)}{\mathfrak{Coh}^{r}(S)}
\newcommand{\mathfrak{C}}{\mathfrak{C}}
\newcommand{\mathrm{Def}}{\mathrm{Def}}
\newcommand{\mathfrak{C}^{\mathrm{tw}}_{g,N}}{\mathfrak{C}^{\mathrm{tw}}_{g,N}}
\newcommand{\mathfrak{M}^{\mathrm{tw}}_{g,N}}{\mathfrak{M}^{\mathrm{tw}}_{g,N}}
\newcommand{\mathrm{Ob}}{\mathrm{Ob}}
\newcommand{\mathsf{M}}{\mathsf{M}}
\begin{document}
\title[Gromov--Witten/Hurwitz wall-crossing]
{Gromov--Witten/Hurwitz wall-crossing}
\author{Denis Nesterov}
\address{University of Bonn, Institut f\"ur Mathematik}
\email{[email protected]}
\maketitle
\begin{abstract}
This article is the third in a series of three articles, the aim of which is to study various correspondences between four enumerative theories associated to a surface $S$: Gromov--Witten theory of $S^{[n]}$, orbifold Gromov--Witten theory of $[S^{(n)}]$, relative Gromov--Witten theory of $S \times C$ for a nodal curve $C$ and relative Donaldson--Thomas theory of $S \times C$.
In this article, we introduce a one-parameter stability condition, termed $\epsilon$-admissibility, for relative maps from nodal curves to $X\times C$. If $X$ is a point, $\epsilon$-admissibility interpolates between moduli spaces of stable maps to $C$ relative to some fixed points and moduli spaces of admissible covers with arbitrary ramifications over the same fixed points and simple ramifications elsewhere on $C$.
Using Zhou's calibrated tails, we prove wall-crossing formulas relating the invariants for different values of $\epsilon$. If $X$ is a surface $S$, we use this wall-crossing in conjunction with the author's quasimap wall-crossing to show that the relative Pandharipande--Thomas/Gromov--Witten correspondence of $S\times C$ and Ruan's extended crepant resolution conjecture for the pair $S^{[n]}$ and $[S^{(n)}]$ are equivalent up to explicit wall-crossings. We thereby prove the crepant resolution conjecture for 3-point genus-0 invariants in all classes when $S$ is a toric del Pezzo surface.
\end{abstract}
\setcounter{tocdepth}{1}
\tableofcontents
\section{Introduction}
\subsection{Overview}
Inspired by the theory of quasimaps to GIT quotients of \cite{CFKM}, a theory of quasimaps to moduli spaces of sheaves was introduced in \cite{N}. When applied to the Hilbert scheme of $n$ points $S^{[n]}$ of a surface $S$, moduli spaces of $\epsilon$-stable quasimaps interpolate between moduli spaces of stable maps to $S^{[n]}$ and Hilbert schemes of 1-dimensional subschemes of a relative geometry $S\times C_{g,N}/{\overline M}_{g,N}$
\begin{equation} \label{qmwall}
\xymatrix{
{\overline M}_{g,N}(S^{[n]},\beta) \ar@{<-->}[r]|-{\epsilon } &\mathrm{Hilb}_{n,\check{\beta}}(S\times C_{g,N}/{\overline M}_{g,N}),}
\end{equation}
where $C_{g,N} \rightarrow {\overline M}_{g,N}$ is the universal curve of a moduli space of stable marked curves.
This interpolation gives rise to wall-crossing formulas, which therefore relate Gromov--Witten theory of $S^{[n]}$ and relative Donaldson--Thomas theory of $S\times C_{g,N}/{\overline M}_{g,N}$. Together with the results of \cite{NK3}, the quasimap wall-crossing was used to prove various correspondences, among which is the wall-crossing part of the Igusa cusp form conjecture of \cite{OPa}. For more details, we refer to \cite{N,NK3}.
In this article we introduce a notion of $\epsilon$-admissibility, depending on a parameter $\epsilon \in {\mathbb{R}}_{\leq0}$, for maps
$$ P \rightarrow X\times C$$
relative to $X\times \mathbf x$, where $P$ is a nodal curve, $(C, \mathbf x)$ is a marked nodal curve and $X$ is a smooth projective variety.
As the value of $\epsilon$ varies, moduli spaces of $\epsilon$-admissible maps interpolate between moduli spaces of stable twisted maps to an orbifold symmetric product $[X^{(n)}]$ and moduli space of stable maps to a relative geometry $X\times C_{g,N}/{\overline M}_{g,N}$,
\[\xymatrix{
{\mathcal K}_{g,N}([X^{(n)}],\beta) \ar@{<-->}[r]|-{\epsilon } &{\overline M}_{\mathsf h}^{\bullet}(X\times C_{g,N}/{\overline M}_{g,N} (\gamma,n)),}
\]
such that the various discrete data on both sides, like genus or degree of a map, determine each other, as is explained in Section \ref{Relation1}.
Using Zhou's theory of calibrated tails from \cite{YZ}, we establish wall-crossing formulas which relate the associated invariants for different values of $\epsilon \in {\mathbb{R}}_{\leq0}$. This wall-crossing is completely analogous to the quasimap wall-crossing. The result is an equivalence of the orbifold Gromov--Witten theory of $[X^{(n)}]$ and the relative\footnote{By relative, we mean a theory with relative insertions.} Gromov--Witten theory of $X\times C_{g,N}/{\overline M}_{g,N}$ for an arbitrary smooth target $X$, which can be expressed in terms of a change of variables applied to certain generating series. The change of variables involves so-called \textit{I-functions}, which are defined via localised Gromov--Witten theory of $X\times \mathbb{P}^1$ with respect to the ${\mathbb{C}}^*$-action coming from the $\mathbb{P}^1$-factor.
This wall-crossing can be termed Gromov--Witten/Hurwitz \textsf{(GW/H)} wall-crossing, because if $X$ is a point, the moduli spaces of $\epsilon$-admissible maps interpolate between Gromov--Witten and Hurwitz spaces of a curve $C$.
Note that our terminology unintentionally resembles the terminology used in \cite{OP06}. However, we do not know whether the two phenomena are related.
\\
In conjunction with the quasimap wall-crossing of \cite{N}, \textsf{GW/H} wall-crossing establishes the square of theories for a smooth surface $S$, illustrated in Figure \ref{square}. The square relates the crepant resolution conjecture \textsf{(C.R.C.)}, proposed in \cite{YR} and refined in \cite{BG,CCIT}, and the Pandharipande--Thomas/Gromov--Witten correspondence \textsf{(PT/GW)},
proposed in \cite{MNOP1}. The square has some similarities with the Landau--Ginzburg/Calabi--Yau correspondence, as is explained in Section \ref{LG/CY}.
\vspace{1 in}
\begin{figure} [h!]
\scriptsize
\[
\begin{picture}(200,75)(-30,-50)
\thicklines
\put(25,-25){\line(1,0){30}}
\put(95,-25){\line(1,0){30}}
\put(25,-25){\line(0,1){40}}
\put(25,30){\makebox(0,0){\textsf{quasimap}}}
\put(25,20){\makebox(0,0){\textsf{wall-crossing}}}
\put(25,35){\line(0,1){40}}
\put(125,-25){\line(0,1){40}}
\put(125,30){\makebox(0,0){\textsf{GW/H}}}
\put(125,20){\makebox(0,0){\textsf{wall-crossing}}}
\put(25,75){\line(1,0){30}}
\put(95,75){\line(1,0){30}}
\put(125,35){\line(0,1){40}}
\put(140,85){\makebox(0,0){$\mathsf{GW}_{\mathrm{orb}}([S^{(n)}])$}}
\put(15,85){\makebox(0,0){$\mathsf{GW}(S^{[n]})$}}
\put(75,75){\makebox(0,0){\textsf{C.R.C.}}}
\put(5,-35){\makebox(0,0){$\mathsf{PT_{rel}}(S\times C_{g,N}/{\overline M}_{g,N})$}}
\put(150,-35){\makebox(0,0){$\mathsf{GW_{rel}}(S\times C_{g,N}/{\overline M}_{g,N})$}}
\put(75,-25){\makebox(0,0){\textsf{PT/GW}}}
\end{picture}
\]
\caption{The square}
\label{square}
\end{figure}
With the help of the square, we establish the following results:
\begin{itemize}
\item 3-point genus-0 crepant resolution conjecture in the sense of \cite{BG} for the pair $S^{[n]}$ and $[S^{(n)}]$ in all classes, if $S$ is a toric del Pezzo surface.
\item the geometric origin of $y=-e^{iu}$ in \textsf{PT/GW} through $\mathsf{C.R.C.}$
\end{itemize}
Moreover, a cycle-valued version of the wall-crossing should have applications in the theory of double ramification cycles of \cite{JPPZ}, comparison results for the TQFTs from \cite{Ca07} and \cite{BP}, etc. This will be addressed in future work.
Various instances of the vertical sides of the square were studied on the level of invariants in numerous articles, mainly for ${\mathbb{C}}^2$ and ${\mathcal A}_m$: \cite{OP10}, \cite{OP10b}, \cite{OP10c}, \cite{BP}, \cite{PT19}, \cite{Mau}, \cite{MO}, \cite{Che09}, etc. The wall-crossings, however, provide a geometric justification for these phenomena.
\subsection{$\epsilon$-stable quasimaps}
Let us now illustrate that the theory of quasimaps sheds light on a seemingly unrelated theme of admissible covers. A map from a nodal curve $C$, \[f\colon C \rightarrow S^{[n]},\]
is determined by its graph \[\Gamma_{f} \subset S\times C.\]
If the curve $C$ varies, the pair $(C, \Gamma_f)$ can degenerate in two ways:
\begin{itemize}
\item[(i)] the curve $C$ degenerates;
\item[(ii)] the graph $\Gamma_f$ degenerates.
\end{itemize}
By a degeneration of $\Gamma_f$ we mean that it becomes non-flat\footnote{A 1-dimensional subscheme $\Gamma \subset S\times C$ is a graph, if and only if it is flat.} over $C$ as a subscheme of $S\times C$, which is due to
\begin{itemize}
\item floating points;
\item non-dominant components.
\end{itemize}
Two types of degenerations of a pair $(C, \Gamma_f)$ are related. Gromov--Witten theory proposes that $C$ sprouts out a rational tail ($C$ degenerates), whenever non-flatness arises ($\Gamma_f$ degenerates). Donaldson--Thomas theory, on the other hand, allows non-flatness, since it is interested in arbitrary 1-dimensional subschemes, thereby restricting degenerations of $C$ to semistable ones (no rational tails).
A non-flat graph $\Gamma$ does not define a map to $S^{[n]}$, but it defines a quasimap to $S^{[n]}$. Hence the motto of quasimaps:
\[\textit{Trade rational tails for non-flat points and vice versa.}\]
The idea of $\epsilon$-stability is to allow both rational tails and non-flat points, restricting their degrees.
The moduli spaces involved in (\ref{qmwall}) are given by the extremal values of $\epsilon$.
\subsection{$\epsilon$-admissible maps}
The motto of $\mathsf{GW/H}$ wall-crossing is the following one:
\[\textit{Trade rational tails for branching points and vice versa.}\]
Let us explain what we mean by making an analogy with quasimaps. Let
\[f\colon P \rightarrow C\]
be an admissible cover with simple ramifications introduced in \cite[Chapter 4]{HM82}. If the curve $C$ varies, the pair $(C,f)$ can degenerate in two ways:
\begin{itemize}
\item[(i)] the curve $C$ degenerates;
\item[(ii)] the cover $f$ degenerates.
\end{itemize}
The degenerations of $f$ arise due to
\begin{itemize}
\item ramifications of higher order;
\item contracted components and singular points mapping to the smooth locus.
\end{itemize}
As previously, these two types of degenerations of a pair $(C,f)$ are related. Hurwitz theory of a varying curve $C$ proposes that $C$ sprouts out rational tails, whenever $f$ degenerates in the sense above. On the other hand, Gromov--Witten theory of a varying curve $C$ allows $f$ to degenerate and therefore restricts the degenerations of $C$ to semistable ones. The purpose of $\epsilon$-admissible maps is to interpolate between these Hurwitz and Gromov--Witten cases.
Let us now sketch the definition of $\epsilon$-admissibility. Let $f\colon P\rightarrow C$ be a degree-$n$ map between nodal curves, such that it is admissible at nodes (see \cite[Chapter 4]{HM82} for admissibility at the nodes) and $g(P)=\mathsf h$, $g(C)=g$. We allow $P$ to be disconnected, requiring that each connected component is mapped non-trivially to $C$. Following \cite{FP}, we define the branching divisor
\[\mathsf{br}(f) \in \mathrm{Div}(C),\]
it is an effective divisor which measures the degree of ramification away from nodes and the genera of contracted components of $f$. If $C$ is smooth, $\mathsf{br}(f)$ is given by associating to the 0-dimensional complex
\[f_*[f^*\Omega_C \rightarrow \Omega_P ]\]
its support weighted by Euler characteristics. Otherwise, we need to take
the part of the support which is contained in the smooth locus of $C$.
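For orientation, the numerical content of this definition is a Riemann--Hurwitz-type statement (a version of which is Lemma \ref{RHformula} below): for a degree-$n$ map $f\colon P \rightarrow C$ as above with $g(P)=\mathsf h$ and $g(C)=g$,
\[ \deg \mathsf{br}(f) = 2\mathsf h - 2 - n(2g-2), \]
so that a simple ramification point contributes multiplicity $1$ to $\mathsf{br}(f)$, while higher-order ramifications and contracted components concentrate more of the degree at a single point.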
\begin{rmk} To establish that the branching divisor behaves well in families for maps between singular curves, we have to go through an auxiliary (at least for the purposes of this work) notion of twisted $\epsilon$-admissible map, as is explained in Section \ref{sectionadm}. The construction of \textsf{br} for families in (\ref{globalbrtw}) and (\ref{globalbr}) is the only place where we use twisted maps.
\end{rmk}
Using the branching divisor $\mathsf{br}$, we can now define $\epsilon$-admissibility by the weighted stability of the pair $(C,\mathsf{br}(f))$, considered in \cite{Ha03}. A similar stability was considered in \cite{D}, where the source curve $P$ is allowed to have more degenerate singularities instead of contracted components. However, the moduli spaces of \cite{D} do not have a perfect obstruction theory.
\begin{defnn} Let $\epsilon \in {\mathbb{R}}_{\leq0}\cup \{-\infty\}$. A map $f$ is $\epsilon$-admissible if
\begin{itemize}
\item $\omega_C(e^{1/\epsilon}\cdot \mathsf{br}(f))$ is ample;
\item $\forall p \in C$, $\mathrm{mult}_p(\mathsf{br}(f))\leq e^{-1/\epsilon}$.
\end{itemize}
\end{defnn}
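Before identifying the extremal cases, let us record the limiting values of the two thresholds (a quick check, reading $e^{1/\epsilon}$ and $e^{-1/\epsilon}$ as their limits at the endpoints). At $\epsilon=-\infty$ we have
\[ e^{1/\epsilon}=e^{-1/\epsilon}=1, \]
so $\omega_C(\mathsf{br}(f))$ must be ample and every point of $C$ supports branching of multiplicity at most $1$, i.e. only simple ramification away from the nodes. As $\epsilon \rightarrow 0^-$ we have
\[ e^{1/\epsilon}\rightarrow 0, \qquad e^{-1/\epsilon}\rightarrow \infty, \]
so the first condition constrains only the curve $C$ itself, matching the semistability of the target in the description below, while the multiplicity condition becomes vacuous.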
One can readily verify that for $\epsilon=-\infty$, an $\epsilon$-admissible map is an admissible cover with simple ramifications. For $\epsilon=0$, an $\epsilon$-admissible map is a stable\footnote{When the target curve $C$ is singular, by a stable map we will mean a stable map which is admissible at nodes.} map such that the target curve $C$ is semistable. Hence $\epsilon$-admissibility provides an interpolation between the moduli space of admissible covers with simple ramifications, $Adm_{g,\mathsf h, n}$, and the moduli space of stable maps, ${\overline M}^{\bullet}_{\mathsf h}(C_g/{\overline M}_g,n)$,
\[\xymatrix{
Adm_{g,\mathsf h, n} \ar@{<-->}[r]|-{\epsilon } &{\overline M}^{\bullet}_{\mathsf h}(C_g/{\overline M}_g,n)}
\]
After introducing markings $\mathbf x=(x_1,\dots, x_N)$ on $C$ and requiring maps to be admissible over these markings, $\epsilon$-admissibility interpolates between admissible covers with arbitrary ramifications over markings and relative stable maps.
As explained in \cite{ACV}, sometimes it is more convenient to consider the normalisation of the moduli space of admissible covers, namely the moduli space of stable twisted maps to $BS_n$, denoted by ${\mathcal K}_{g,N}(BS_n,\mathsf h)$. For the purposes of enumerative geometry (virtual intersection theory of moduli spaces), the interpolation above can therefore equally be considered as the following one
\[\xymatrix{
{\mathcal K}_{g,N}(BS_n,\mathsf h) \ar@{<-->}[r]|-{\epsilon } &{\overline M}^{\bullet}_{\mathsf h}(C_{g,N}/{\overline M}_{g,N},n).}
\]
In fact, this point of view is more appropriate if one wants to make an analogy with quasimaps.
\subsection{Higher-dimensional case}We can upgrade the set-up even further by adding a map $f_{X}\colon P \rightarrow X$ for some target variety $X$. This leads us to the study of $\epsilon$-admissibility of the data
\[(P,C,\mathbf x, f_X \! \times \! f_C),\]
which can be represented as a correspondence
\[
\begin{tikzcd}[row sep=small, column sep = small]
P \arrow[r, "f_{X}"] \arrow{d}[swap]{f_{C}} & X & \\
(C,\mathbf{x}) & &
\end{tikzcd}
\]
In this case, $\epsilon$-admissibility also takes into account the degree of the components of $P$ with respect to the map $f_X$, see Definition \ref{epsilonadm}. If $X$ is a point, we recover the set-up discussed previously.
Let $\beta=(\gamma, \mathsf h) \in H_2(X,{\mathbb{Z}})\oplus {\mathbb{Z}}$ be an \textit{extended} degree\footnote{By a version of the Riemann--Hurwitz formula, Lemma \ref{RHformula}, the degree of the branching divisor $\mathsf{br}(f)=\mathsf m$ and the genus $\mathsf h$ determine each other; later we will use $\mathsf m$ instead of $\mathsf h$.}. For $\epsilon \in {\mathbb{R}}_{\leq0}\cup \{-\infty\}$, we then define \[Adm_{g,N}^{\epsilon}(X^{(n)},\beta)\] to be the moduli space of the data
\[(P,C,\mathbf x, f_X\times f_C),\] such that $g(P)=\mathsf h$; $g(C)=g$, $|\mathbf{x}|=N$ and the map $f_X \times f_C$ is of degree $(\gamma,n)$. The notation is slightly misleading, as $\epsilon$-admissible maps are not maps to $X^{(n)}$. However, it is justified by the analogy with quasimaps and is more natural with respect to our notions of degree of an $\epsilon$-admissible map (see also Section \ref{Relation1}).
As in the case where $X$ is a point, we obtain the following description of these moduli spaces for the extremal values of $\epsilon$,
\begin{align*}
&{\overline M}_{\mathsf h}^{\bullet}(X\times C_{g,N}/{\overline M}_{g,N} (\gamma,n))= Adm_{g,N}^{0}(X^{(n)},\beta),\\
&{\mathcal K}_{g,N}([X^{(n)}],\beta)\xrightarrow{\rho} Adm_{g,N}^{-\infty}(X^{(n)},\beta),
\end{align*}
such that the map $\rho$ is a virtual normalisation in the sense of the diagram (\ref{normalisation}), which makes the two spaces equivalent from the perspective of enumerative geometry. We therefore get an interpolation,
\[\xymatrix{
{\mathcal K}_{g,N}([X^{(n)}],\beta) \ar@{<-->}[r]|-{\epsilon } &{\overline M}_{\mathsf h}^{\bullet}(X\times C_{g,N}/{\overline M}_{g,N} (\gamma,n)),}
\]
which is completely analogous to (\ref{qmwall}).
\subsection{Wall-crossing} The invariants of ${\overline M}_{\mathsf h}^{\bullet}(X\times C_{g,N}/{\overline M}_{g,N} (\gamma,n))$ that can be related to the orbifold invariants of ${\mathcal K}_{g,N}([X^{(n)}],\beta)$ are the \textit{relative} GW invariants taken with respect to the markings of the target curve $C$. More precisely, for all $\epsilon$, there exist natural evaluation maps
\begin{align*}
{\mathrm{ev}}_i\colon& Adm_{g,N}^{\epsilon}(X^{(n)},\beta)\rightarrow \overline{{\mathcal I}}X^{(n)}, \ i=1,\dots, N.
\end{align*}
where $\overline{{\mathcal I}}X^{(n)}$ is a rigidified version of the inertia stack ${\mathcal I} X^{(n)}$. We define
\[\langle \tau_{m_{1}}(\gamma_{1}), \dots, \tau_{m_{N}}(\gamma_{N}) \rangle^{\epsilon}_{g,N,\beta}:= \int_{[ Adm_{g,N}^{\epsilon}(X^{(n)},\beta)]^{\mathrm{vir}}}\prod^{i=N}_{i=1}\psi_{i}^{m_{i}} {\mathrm{ev}}^{*}_{i}(\gamma_{i}),\]
where $\gamma_{1}, \dots, \gamma_{N}$ are classes in orbifold cohomology $H_{\mathrm{orb}}^{*}(X^{(n)})$ and $\psi_1, \dots, \psi_N$ are $\psi$-classes associated to the markings of the source curves. By Lemma \ref{invariantscomp}, these invariants specialise to orbifold GW invariants associated to a moduli space ${\mathcal K}_{g,N}([X^{(n)}],\beta)$ and relative GW invariants associated to a moduli space ${\overline M}_{\mathsf h}^{\bullet}(X\times C_{g,N}/{\overline M}_{g,N} (\gamma,n))$ for corresponding choices of $\epsilon$.
To relate invariants for different values of $\epsilon$, we also use the master space technique developed by Zhou in \cite{YZ} for the purposes of quasimaps. We establish the properness of the master space in our setting in Section \ref{master}, following the strategy of Zhou.
To state compactly the wall-crossing formula, we define
\[F^{\epsilon}_{g}(\mathbf{t}(z))=\sum^{\infty}_{N=0}\sum_{\beta}\frac{q^{\beta}}{N!}\langle \mathbf{t}(\psi_1), \dots, \mathbf{t}(\psi_N) \rangle^{\epsilon}_{g,N,\beta},\]
where $\mathbf{t}(z) \in H_{\mathrm{orb}}^{*}(X^{(n)},{\mathbb{Q}})[z]$ is a generic element, and the unstable terms are set to be zero. There exists an element
\[\mu(z) \in H_{\mathrm{orb}}^{*}(X^{(n)})[z]\otimes {\mathbb{Q}}[\![q^{\beta}]\!],\]
defined in Section \ref{graphspaceSym} as a truncation of an $I$-function. The $I$-function is in turn defined via the virtual localisation on the space of stable maps to $X\times \mathbb{P}^1$ relative to $X\times\{\infty\}$. The element $\mu(z)$ provides the change of variables, which relates generating series for extremal values of $\epsilon$.
\begin{thmn} For all $g\geq 1$, we have
\[F^{0}_{g}(\mathbf{t}(z))=F^{-\infty}_{g}(\mathbf{t}(z)+\mu(-z)).\]
For $g=0$, the same equation holds modulo constant and linear terms in $\mathbf{t}$.
\end{thmn}
The change of variables above is the consequence of a wall-crossing formula across each wall between extremal values of $\epsilon$, see Theorem \ref{wallcrossingSym}.
\subsection{Applications}
\subsubsection{The square} For a del Pezzo surface $S$, we compute the wall-crossing invariants in Section \ref{del Pezzo}. A computation of the analogous quasimap wall-crossing invariants is given in \cite[Proposition 6.10]{N}.
The wall-crossing invariants can easily be shown to satisfy \textsf{PT/GW}. Hence when both $\epsilon$-stable quasimap and \textsf{GW/H} wall-crossings are applied, \textsf{C.R.C.} becomes equivalent to \textsf{PT/GW}. For precise statements of both in this setting, we refer to Section \ref{qausiadm}. This is expressed in terms of the square of theories in Figure \ref{square}.
In \cite{PP}, \textsf{PT/GW} is established for $S\times \mathbb{P}^1$ relative to $S\times\{0,1,\infty\} \subset S\times \mathbb{P}^1$, if $S$ is toric. Together with \cite{PP}, the square therefore gives us the following result.
\begin{thmn}If $S$ is a toric del Pezzo surface, $g=0$ and $N=3$, then $\mathsf{C.R.C.}$ (in the sense of \cite{BG}) holds for $S^{[n]}$ for all $n\geq 1$ and in all classes.
\end{thmn}
Previously, the theorem above was established for $n = 2$ and $S = \mathbb{P}^2$ in \cite[Section 6]{W}; for an arbitrary $n$ and an arbitrary toric surface, but only for exceptional curve classes, in \cite{Che}; for an arbitrary $n$ and a simply connected $S$, but only for exceptional curve classes and in the sense of \cite{YR}, in \cite{LQ}. If $S = {\mathbb{C}}^2$, \textsf{C.R.C.} was proven for all genera and any number of markings on the level of cohomological field theories in \cite{PT19b}. If $S = {\mathcal A}_n$, it was proved in the genus-0 case and for any number of markings in \cite{CCIT} in the sense of \cite{CIR}. The crepant resolution conjecture has also been proven for resolutions other than those of Hilbert--Chow type; the list is too long to mention them all.
The theorem can also be restated as an isomorphism of quantum cohomologies,
\[QH_{\mathrm{orb}}^*(S^{(n)})\cong QH^*(S^{[n]}),\]
we refer to Section \ref{qcoh} for more details. The result is very appealing, because the underlying cohomologies with classical multiplications are not isomorphic for surfaces with $\mathrm{c}_1(S) \neq 0$, while the quantum cohomologies are. In particular, the classical multiplication on $H_{\mathrm{orb}}^*(S^{(n)})$ is a non-trivial quantum deformation of the classical multiplication on $H^*(S^{[n]})$.
We want to stress that $\textsf{C.R.C.}$ should be considered as a more fundamental correspondence than $\textsf{PT/GW}$, because it relates theories which are closer to each other. Moreover, as \cite{BG} points out, $\textsf{C.R.C.}$ explains the origin of the change of variables,
\begin{equation} \label{eiu}
y=-e^{iu},
\end{equation}
it arises due to the following features of $\textsf{C.R.C.}$,
\begin{itemize}
\item[(i)] analytic continuation of generating series from 0 to -1;
\item[(ii)] the factor $i$ in the identification of the cohomologies of $S^{[n]}$ and $S^{(n)}$;
\item[(iii)] the divisor equation in $\mathsf{GW}(S^{[n]})$;
\item[(iv)] failure of the divisor equation in $\mathsf{GW}_{\mathrm{orb}}([S^{(n)}])$.
\end{itemize}
More precisely, (i) is responsible for the minus sign in (\ref{eiu}); (iii) and (iv) are responsible for the exponential; (ii) is responsible for the factor $i$ in the exponent. A more conceptual view of \textsf{C.R.C.} is presented in the works of Iritani, e.g.\ \cite{I}.
\subsubsection{LG/CY vs C.R.C} \label{LG/CY} We will now draw certain similarities between \textsf{C.R.C.} and Landau-Ginzburg/Calabi-Yau correspondence (\textsf{LG/CY}). For all the details and notation on \textsf{LG/CY}, we refer to \cite{CIR}.
\textsf{LG/CY} consists of two types of correspondences: the A-model and the B-model correspondence. The B-model correspondence is the statement of equivalence of two categories, matrix factorisation categories and derived categories. The A-model correspondence is the statement of equality of generating series of certain curve-counting invariants after an analytic continuation and a change of variables. Moreover, there exists a whole family of enumerative theories depending on a stability parameter $\epsilon \in {\mathbb{R}}$. For $\epsilon \in {\mathbb{R}}_{>0}$ it gives the theory of GIT quasimaps, while for $\epsilon \in {\mathbb{R}}_{\leq 0}$ it gives FJRW (Fan--Jarvis--Ruan--Witten) theory. The GLSM (Gauged Linear Sigma Model) formalism, defined mathematically in \cite{HTY}, allows one to unify quasimaps and FJRW theory. The analytic continuation occurs when one crosses the wall at $\epsilon=0$.
In the case of \textsf{C.R.C.} we have a similar picture. The B-model correspondence is given by an equivalence of categories, $\mathrm{D}^b(S^{[n]})$ and $\mathrm{D}^b([S^{(n)}])$. The A-model correspondence is given by an analytic continuation of generating series and a subsequent change of variables, as stated in Section \ref{crepant}. There also exists a family of enumerative theories depending on a parameter $\epsilon \in {\mathbb{R}}$. For $\epsilon \in {\mathbb{R}}_{> 0}$, it is given by quasimaps to a moduli space of sheaves, while for $\epsilon \in {\mathbb{R}}_{\leq 0}$ it is given by $\epsilon$-admissible maps. It would be interesting to know if a unifying theory exists in this case (like GLSM in \textsf{LG/CY}).
\vspace{.1in}
\begin{table*}[h!]
\[
\begin{tabular}{ |c|c|c| }
\hline
& B-model & A-model \\ \hline
$\mathsf{LG/CY}$ & $\mathrm{D^b}(X_W)\cong \mathrm{MF}(W)$ & $\mathsf{GW}(X_W) \xleftarrow{\epsilon\leq 0}\mid_{0}\xrightarrow{\epsilon>0} \mathsf{FJRW}({\mathbb{C}}^{n},W)$ \\
\hline
$\mathsf{C.R.C.}$ & $\mathrm{D^b}(S^{[n]})\cong \mathrm{D^b}([S^{(n)}])$ & $\mathsf{GW}(S^{[n]}) \xleftarrow{\epsilon\leq0}\mid_{0}\xrightarrow{\epsilon>0} \mathsf{GW}_{\mathrm{orb}}([S^{(n)}])$ \\
\hline
\end{tabular}
\]
\caption{\textsf{LG/CY} vs \textsf{C.R.C}}
\end{table*}
The above comparison is not a mere observation about structural similarities of the two correspondences. In fact, both correspondences are instances of the same phenomenon. Namely, in both cases there should exist \textit{K\"ahler moduli spaces}, ${\mathcal M}_{\mathsf{LG/CY}}$ and ${\mathcal M}_{\mathsf{C.R.C.}}$, such that the two geometries in question correspond to two different cusps of these moduli spaces (e.g.\ $S^{[n]}$ and $[S^{(n)}]$ correspond to two different cusps of ${\mathcal M}_{\mathsf{C.R.C.}}$). B-models do not vary across these moduli spaces, hence the relevant categories are equivalent. On the other hand, A-models do vary, in the sense that there exist non-trivial global quantum $D$-modules, ${\mathcal D}_{\mathsf{LG/CY}}$ and ${\mathcal D}_{\mathsf{C.R.C.}}$, which specialise to the relevant enumerative invariants around the cusps. For more details on this point of view, we refer to \cite{CIR} in the case of $\mathsf{LG/CY}$, and to \cite{I2} in the case of $\mathsf{C.R.C.}$
\subsection{Acknowledgments}
First and foremost I would like to thank Georg Oberdieck for the supervision of my PhD. In particular, I am grateful to Georg for pointing out that ideas of quasimaps can be applied to orbifold symmetric products.
I also want to thank Daniel Huybrechts for reading some parts of the present work and Maximilian Schimpf for providing the formula for Hodge integrals.
A great intellectual debt is owed to Yang Zhou for his theory of calibrated tails, without which the wall-crossings would not be possible.
\subsection{Notation and conventions}
We work over the field of complex numbers ${\mathbb{C}}$. Given a variety $X$, by $[X^{(n)}]$ we denote the stacky symmetric product $[X^n/S_n]$, and by $X^{(n)}$ its coarse quotient. We denote the Hilbert scheme of $n$ points by $X^{[n]}$. For a partition $\mu$ of $n$, let $\ell(\mu)$ denote the length of $\mu$ and $\mathrm{age}(\mu)=n-\ell(\mu)$.
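For example, for the partition $\mu=(3,1,1)$ of $n=5$, we have $\ell(\mu)=3$ and $\mathrm{age}(\mu)=5-3=2$.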
For a possibly disconnected twisted curve ${\mathcal C}$ with the underlying coarse curve $C$, we define $g({\mathcal C}):=1-\chi({\mathcal O}_{\mathcal C})=1-\chi({\mathcal O}_C)$.
We set
$e_{{\mathbb{C}}^*}({\mathbb{C}}_{\mathrm{std}})=-z,$
where ${\mathbb{C}}_{\mathrm{std}}$ is the standard representation of ${\mathbb{C}}^*$ on a vector space ${\mathbb{C}}$.
Let $N$ be a semigroup and $\beta \in N$ be its generic element. By ${\mathbb{Q}}[\![ q^\beta ]\!]$ we will denote the (completed) semigroup algebra
${\mathbb{Q}}[\![ N]\!]$. In our case, $N$ will be various semigroups of effective curve classes.
\section{$\epsilon$-admissible maps} \label{sectionadm}
Let $X$ be a smooth projective variety, $({\mathcal C},\mathbf{x})$ be a twisted\footnote{By a twisted nodal curve we will always mean a balanced twisted nodal curve.} marked nodal curve and ${\mathcal P}$ be a possibly disconnected orbifold nodal curve.
\begin{defn} \label{twisted}
For a map
\[f=f_{X}\times f_{{\mathcal C}} \colon {\mathcal P} \rightarrow X\times {\mathcal C},\] the
data $({\mathcal P}, {\mathcal C}, \mathbf{x},f)$ is called a \textit{twisted pre-admissible} map, if
\begin{itemize}
\item[$\bullet$] $f_{{\mathcal C}}$ is \'etale over marked points and nodes;
\item[$\bullet$] $f_{{\mathcal C}}$ is representable;
\item[$\bullet$] $f$ is non-constant on each connected component.
\end{itemize}
\end{defn}
We will refer to ${\mathcal P}$ and ${\mathcal C}$ as \textit{source} and \textit{target} curves, respectively. Note that by all the conditions above, ${\mathcal P}$ itself must be a twisted nodal curve with orbifold points over nodes and marked points of ${\mathcal C}$.
\\
Consider now the following complex
\[Rf_{{\mathcal C}*} [f_{{\mathcal C}}^*{\mathbb{L}}_{{\mathcal C}} \rightarrow {\mathbb{L}}_{{\mathcal P}}] \in \mathrm{D}^b({\mathcal C}),\]
where the morphism $f_{{\mathcal C}}^*{\mathbb{L}}_{{\mathcal C}} \rightarrow {\mathbb{L}}_{{\mathcal P}}$ is the one naturally associated to the map $f_{{\mathcal C}}$. The complex is supported at finitely many points of the non-stacky smooth locus, which we call \textit{branching} points. They arise either from ramification points or from contracted components of the map $f_{{\mathcal C}}$. Following \cite{FP}, to the complex above,
we can associate an effective Cartier divisor
\[\mathsf{br}(f) \in \mathrm{Div}({\mathcal C})\]
by taking the support of the complex weighted by its Euler characteristics. This divisor will be referred to as the \textit{branching divisor}.
Let us give a more explicit expression for the branching divisor.
Let ${\mathcal P}_{\circ} \subseteq {\mathcal P}$ be the maximal subcurve of ${\mathcal P}$ which is contracted by the map $f_{{\mathcal C}}$. Let ${\mathcal P}_{\bullet} \subseteq {\mathcal P}$ be the complement of ${\mathcal P}_{\circ}$, i.e. the maximal subcurve which is not contracted by the map $f_{{\mathcal C}}$. By $\widetilde{\mathcal P}_{\bullet}$ we denote its normalisation at the nodes which are mapped into the regular locus of ${\mathcal C}$. Note that the restriction of $f_{{\mathcal C}}$ to $\widetilde {\mathcal P}_{\bullet}$ is a ramified cover, whose branching divisor is therefore given by its ramification points.
By $\widetilde{\mathcal P}_{\circ, i}$ we denote the connected components of the normalisation $\widetilde {\mathcal P}_{\circ}$ and by $p_i\in {\mathcal C}$ their images in ${\mathcal C}$. Finally, let $N \subset {\mathcal P}$ be the locus of nodal points which are mapped into the regular locus of ${\mathcal C}$. Following \cite[Lemma 10, Lemma 11]{FP}, the branching divisor $\mathsf{br}(f)$ can be expressed as follows.
\begin{lemma} \label{br} With the notation from above we have
\[ \mathsf{br}(f)=\mathsf{br}(f_{|\widetilde{\mathcal P}_{\bullet}}) + \sum_i(2g(\widetilde{\mathcal P}_{\circ, i})-2)[p_i]+2f_*(N).\]
\end{lemma}
\textit{Proof.} By the definition of twisted pre-admissibility, all the branching takes place away from orbifold points and nodes. Since branching of a map can be determined locally, we therefore can assume that both ${\mathcal C}$ and ${\mathcal P}$ are ordinary nodal curves $C$ and $P$.
Let
\[v\colon \widetilde{P} \rightarrow P\]
be the normalisation of $P$ at $N$, and let $\tilde f=f\circ v$. Recall that ${\mathbb{L}}_P \cong \Omega_P$. By composing the normalisation morphism ${\mathbb{L}}_{P}\rightarrow v_*v^*{\mathbb{L}}_P$ with the natural morphism $v_*v^* {\mathbb{L}}_{P} \rightarrow v_*{\mathbb{L}}_{\widetilde{P}}$, we obtain the following exact sequence
\begin{equation} \label{sequence}
0\rightarrow {\mathcal O}_{N} \rightarrow {\mathbb{L}}_P \rightarrow v_* {\mathbb{L}}_{\widetilde{P}} \rightarrow 0,
\end{equation}
which, in particular, implies that
\begin{equation} \label{chi}
\chi({\mathbb{L}}_P)=\chi(\omega_P).
\end{equation}
On the other hand, since $N$ is mapped to the regular locus of $C$ and ${\mathbb{L}}_C$ is locally free at regular points, we obtain
\begin{equation} \label{sequence2}
0 \rightarrow f^*{\mathbb{L}}_C \rightarrow v_*\tilde f^* {\mathbb{L}}_C \rightarrow {\mathcal O}_N \otimes f^*{\mathbb{L}}_C \rightarrow 0.
\end{equation}
With the sequences (\ref{sequence}) and (\ref{sequence2}), the proof of \cite[Lemma 10]{FP} in our setting is exactly the same. So is the proof of \cite[Lemma 11]{FP} with (\ref{chi}).
\qed
\begin{rmk}
The reason we use ${\mathbb{L}}_{{\mathcal C}}$ instead of $\omega_{{\mathcal C}}$ is that $\pi^*\omega_{C}\cong \omega_{{\mathcal C}}$, where $\pi\colon {\mathcal C} \rightarrow C$ is the projection to the coarse moduli space. Hence $\omega_{\mathcal C}$ does not see the non-\'etaleness of $\pi$. Moreover, it is unclear if a map $f^*_{{\mathcal C}}\omega_{{\mathcal C}} \rightarrow \omega_{{\mathcal P}}$ exists in general.
\end{rmk}
We fix $L \in \mathop{\rm Pic}\nolimits(X)$, an ample line bundle on $X$, such that for all effective curve classes $\gamma \in H_2(X,{\mathbb{Z}})$,
\[\deg(\gamma):=\gamma\cdot \mathrm{c}_1(L)\gg 0.\]
Let $({\mathcal P},{\mathcal C},\mathbf{x}, f)$
be a twisted pre-admissible map. For a point $p\in {\mathcal C}$, let
\[f^*L_p:=f_X^*L_{|f_{{\mathcal C}}^{-1}(p)},\]
we set $\deg(f^*L_p)=0$ if $f_{{\mathcal C}}^{-1}(p)$ is 0-dimensional. For a component ${\mathcal C}'\subseteq {\mathcal C}$, let
\[f^*L_{|{\mathcal C}'}:=f_X^*L_{|f_{{\mathcal C}}^{-1}({\mathcal C}')}.\]
Recall that a \textit{rational tail} of a curve ${\mathcal C}$ is a component isomorphic to $\mathbb{P}^1$ with one special point (a node or a marked point). A \textit{rational bridge} is a component isomorphic to $\mathbb{P}^1$ with two special points.
\begin{defn} \label{epsilonadm} Let $\epsilon \in {\mathbb{R}}_{\leq0}\cup \{-\infty\}$.
A twisted pre-admissible map $f$ is twisted $\epsilon$-admissible, if
\begin{itemize}
\item[$\mathbf{(i)}$] for all points $p \in {\mathcal C}$,
\[ \mathrm{mult}_p(\mathsf{br}(f))+\deg(f^*L_p)\leq e^{-1/\epsilon};\]
\item[$\mathbf{(ii)}$] for all rational tails $T \subseteq ({\mathcal C},\mathbf{x})$,
\[\deg(\mathsf{br}(f)_{|T})+\deg(f^*L_{|T})>e^{-1/\epsilon};\]
\item[$\mathbf{(iii)}$] for all rational bridges $B \subseteq ({\mathcal C},\mathbf{x})$,
\[\deg(\mathsf{br}(f)_{|B})+\deg(f^*L_{|B})>0;\]
\item[$(\mathbf{iv})$] \[|\mathrm{Aut}(f)|<\infty.\]
\end{itemize}
\end{defn}
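To get a feel for the bound $e^{-1/\epsilon}$, note that at $\epsilon=-\infty$ we have $e^{-1/\epsilon}=e^{0}=1$, while $e^{-1/\epsilon}\rightarrow +\infty$ as $\epsilon \rightarrow 0^{-}$. The condition $\mathbf{(i)}$ is therefore most restrictive at $\epsilon=-\infty$ and becomes vacuous as $\epsilon$ approaches $0$, in agreement with the two extremal moduli spaces of Section \ref{Relation1}.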
\begin{lemma} \label{open} The condition of twisted $\epsilon$-admissibility is an open condition.
\end{lemma}
\textit{Proof.} The conditions of twisted $\epsilon$-admissibility are constructible. Hence we can use the valuative criterion for openness. Given a discrete valuation ring $R$ with a fraction field $K$, we therefore need to show that if a pre-admissible map
\[({\mathcal P}, {\mathcal C}, \mathbf{x},f) \in
{\mathfrak{M}}(X\times\mathfrak{C}^{\mathrm{tw}}_{g,N}/{\mathfrak{M}}^{\mathrm{tw}}_{g,N}, (\gamma, n))(R)\]
is $\epsilon$-admissible at the closed fiber $\Spec {\mathbb{C}}$ of $\Spec R$, then it is $\epsilon$-admissible at the generic fiber. In fact, each of the conditions of $\epsilon$-admissibility is an open condition.
For example, let
\[T \subseteq ({\mathcal C} ,\mathbf x) \]
be a family of subcurves of $({\mathcal C} ,\mathbf x)$ such that the generic fiber $T_{|\Spec K}$ is a rational tail that does not satisfy the condition $\mathbf{(ii)}$. Then the central fiber $T_{|\Spec {\mathbb{C}}}$ of $T$ will be a tree of rational curves, whose rational tails do not satisfy the condition $(\mathbf{ii})$, because the degree of both $\mathsf{br}(f)$ and $f_X^*L$ can only decrease on rational tails of $T_{|\Spec {\mathbb{C}}}$. Note that we use that $\mathsf{br}(f)$ is defined for families of pre-admissible twisted maps to conclude that the degree of $\mathsf{br}(f)$ is constant in families.
Other conditions of $\epsilon$-admissibility can be shown to be open in a similar way.
\qed
\\
A family of twisted $\epsilon$-admissible maps over a base scheme $B$ is given by two families of twisted $B$-curves ${\mathcal P}$ and $({\mathcal C},\mathbf{x})$ and a $B$-map
\[f=f_{X}\times f_{{\mathcal C}} \colon {\mathcal P} \rightarrow X\times {\mathcal C},\]
whose fibers over geometric points of $B$ are twisted $\epsilon$-admissible maps. An isomorphism of two families
\[\Phi=(\phi_1, \phi_2)\colon ({\mathcal P},{\mathcal C},\mathbf{x}, f) \cong ({\mathcal P}',{\mathcal C}',\mathbf{x}', f')\]
is given by the data of isomorphisms of the source and target curves
\[(\phi_1, \phi_2) \in \mathrm{Isom}_B({\mathcal P},{\mathcal P}') \times \mathrm{Isom}_B(({\mathcal C},\mathbf{x}),({\mathcal C}',\mathbf{x}') ),\]
which commute with the maps $f$ and $f'$,
\[f' \circ \phi_1 \cong \phi_2 \circ f.\]
\begin{defn} Given an element
\[ \beta=(\gamma, \mathsf m) \in H_2(X,{\mathbb{Z}})\oplus {\mathbb{Z}},\]
we say that a twisted $\epsilon$-admissible map
is of degree $\beta$ to $X^{(n)}$, of genus $g$ with $N$ markings, if
\begin{itemize}
\item[$\bullet$] $f$ is of degree $(\gamma,n)$ and $\deg(\mathsf{br}(f))=\mathsf{m}$;
\item[$\bullet$] $g(C)=g$ and $|\mathbf{x}|=N$.
\end{itemize}
\end{defn}
We define
\begin{align*}
Adm_{g,N}^{\epsilon}(X^{(n)}, \beta)^{\mathrm{tw}} \colon (Sch/ {\mathbb{C}})^{\circ} &\rightarrow (Grpd) \\
B&\mapsto \{\text{families of twisted $\epsilon$-admissible maps over }B\}
\end{align*}
to be the moduli space of twisted $\epsilon$-admissible maps to $X^{(n)}$ of degree $\beta$ and genus $g$ with $N$ markings.
Recall that ${\mathbb{L}}_{{\mathcal C}}$ is a perfect complex, since ${\mathcal C}$ is l.c.i.; the same applies to ${\mathbb{L}}_{{\mathcal P}}$. Hence following \cite[Section 3.2]{FP} (see also \cite[Theorem 3.8]{D}), one can construct the universal branching-divisor morphism
\begin{equation} \label{globalbrtw}
\mathsf{br}: Adm_{g,N}^{\epsilon}(X^{(n)}, \beta)^{\mathrm{tw}} \rightarrow {\mathfrak{M}}_{g,N,\mathsf{m}}.
\end{equation}
The space ${\mathfrak{M}}_{g,N,\mathsf m}$ is an Artin stack which parametrises triples
\[(C,\mathbf{x},D),\]
where $(C, \mathbf{x})$ is a genus-$g$ curve with $N$ markings; $D$ is an effective divisor of degree $\mathsf m$ disjoint from the markings $\mathbf{x}$. An isomorphism of triples is an isomorphism of curves which preserves markings and divisors.
\\
There is another moduli space related to $Adm_{g,N}^{\epsilon}(X^{(n)}, \beta)^{\mathrm{tw}}$, which is obtained by associating to a twisted $\epsilon$-admissible map the corresponding map between the coarse moduli spaces of the twisted curves. This association defines the following map
\[p\colon Adm_{g,N}^{\epsilon}(X^{(n)}, \beta)^{\mathrm{tw}} \rightarrow {\mathfrak{M}}(X\times\mathfrak{C}_{g,N}/{\mathfrak{M}}_{g,N}, (\gamma, n)),\]
where ${\mathfrak{M}}(X\times\mathfrak{C}_{g,N}/{\mathfrak{M}}_{g,N}, (\gamma, n))$ is the relative moduli space of stable maps to the relative geometry
\[X\times\mathfrak{C}_{g,N} \rightarrow {\mathfrak{M}}_{g,N},\]
where $\mathfrak{C}_{g,N} \rightarrow {\mathfrak{M}}_{g,N}$ is the universal curve. By Lemma \ref{open}, the image of $p$ is open.
\begin{defn}
We denote the image of $p$ with its natural open-substack structure by $ Adm_{g,N}^{\epsilon}(X^{(n)}, \beta)$.
\end{defn}
The closed points of $ Adm_{g,N}^{\epsilon}(X^{(n)}, \beta)$ are relative stable maps with restricted branching away from marked points and nodes, to which we refer as $\epsilon$-\textit{admissible maps}. One can similarly define \textit{pre-admissible maps}. As in Definition \ref{twisted}, we denote the data of a pre-admissible map by
\[(P, C, \mathbf{x},f).\]
The moduli spaces $ Adm_{g,N}^{\epsilon}(X^{(n)}, \beta)$ will be the central objects of our study.
\begin{rmk} The difference between the moduli spaces
$ Adm_{g,N}^{\epsilon}(X^{(n)}, \beta)$ and $Adm_{g,N}^{\epsilon}(X^{(n)}, \beta)^{\mathrm{tw}}$ is the same as the one between admissible covers and twisted bundles of \cite{ACV}. We prefer to work with $ Adm_{g,N}^{\epsilon}(X^{(n)}, \beta)$, because it is more convenient to work with schemes than with stacks for the purposes of deformation theory and of analysis of the basic properties of the moduli spaces. Moreover, the enumerative geometries of these two moduli spaces are equivalent, at least for the relevant values of $\epsilon$. For more details, see Section \ref{Relation1} and Section \ref{Relation2}.
\end{rmk}
Since $\mathsf{br}(f)$ is supported away from stacky points, the branching-divisor map descends,
\begin{equation} \label{globalbr}
\mathsf{br}: Adm_{g,N}^{\epsilon}(X^{(n)}, \beta) \rightarrow {\mathfrak{M}}_{g,N,\mathsf m}.
\end{equation}
The moduli spaces $ Adm_{g,N}^{\epsilon}(X^{(n)}, \beta)$ also admit a disjoint-union decomposition
\begin{equation}\label{ramificationprofiles}
Adm_{g,N}^{\epsilon}(X^{(n)}, \beta)= \coprod_{\underline\mu} Adm_{g,N}^{\epsilon}(X^{(n)}, \beta,\underline{\mu}),
\end{equation}
where $\underline \mu=(\mu^1,\dots, \mu^N)$ is an $N$-tuple of ramification profiles of $f_{C}$ over the markings $\mathbf{x}$.
\\
The Riemann--Hurwitz formula extends to the case of pre-admissible maps.
\begin{lemma} \label{RHformula} If $f\colon P\rightarrow (C,\mathbf{x})$ is a degree-$n$ pre-admissible map with ramification profiles $\underline{\mu}=(\mu^1,\dots, \mu^N)$ at the markings $\mathbf x \subset C$, then
\[2g(P)-2=n\cdot (2g(C)-2)+\deg(\mathsf{br}(f))+ \sum_i\mathrm{age}(\mu^i).\]
\end{lemma}
\textit{Proof.} Using Lemma \ref{br} and the standard Riemann--Hurwitz formula, one can readily check that the above formula holds for pre-admissible maps. \qed
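For instance, for the double cover $z^2\colon \mathbb{P}^1 \rightarrow \mathbb{P}^1$ with no markings, both $0$ and $\infty$ are simple branching points, so $\deg(\mathsf{br}(f))=2$ and the formula reads
\[2\cdot 0-2=2\cdot(2\cdot 0-2)+2,\]
as expected.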
\subsection{Properness}
We now establish the properness of $ Adm_{g,N}^{\epsilon}(X^{(n)}, \beta)$, starting with the following result.
\begin{prop} The moduli spaces $ Adm_{g,N}^{\epsilon}(X^{(n)}, \beta)$ are quasi-separated Deligne-Mumford stacks of finite type.
\end{prop}
\textit{Proof.} By the $\epsilon$-admissibility condition,
the map $\mathsf{br}$ factors through a quasi-separated substack of finite type. Indeed, $(C,\mathbf{x},\mathsf{br}(f))$ is not stable (i.e.\ it has infinitely many automorphisms), if one of the following holds:
\begin{itemize}
\item[$\mathbf{(i)}$] there is a rational tail $T \subseteq (C,\mathbf{x})$, such that $\mathrm{supp}(\mathsf{br}(f)_{|T})$ is at most a point;
\item[$\mathbf{(ii)}$] there is a rational bridge $B \subseteq (C,\mathbf{x})$, such that $\mathrm{supp}(\mathsf{br}(f)_{|B})$ is empty.
\end{itemize}
Up to a change of coordinates, the restriction of $f_{C}$ to $T$ or $B$ must be of the form
\begin{equation} \label{constantmaps}
z^{\underline n}\colon (\sqcup^k \mathbb{P}^1)\sqcup_0 P' \rightarrow \mathbb{P}^1
\end{equation}
at each connected component of $P$ over $T$ or $B$. Let us clarify the notation of (\ref{constantmaps}). The curve $\sqcup^k \mathbb{P}^1$ is the disjoint union of $k$ distinct $\mathbb{P}^1$'s. A possibly disconnected marked nodal curve $(P',\mathbf p)$ is attached via the markings to the disjoint union $\sqcup^k \mathbb{P}^1$ at the points $0\in \mathbb{P}^1$ of each connected component; $P'$ is contracted to $0\in \mathbb{P}^1$ in the target $\mathbb{P}^1$; while on the $i$-th $\mathbb{P}^1$ of the disjoint union, the map is given by $z^{n_i}$ for $\underline n=(n_1, \dots, n_k)$.
The fact that the restriction of $f_{C}$ is given by a map of this form can be seen as follows. The condition $\mathbf{(i)}$ or $\mathbf{(ii)}$ implies that the restriction of $f_{C}$ to $T$ or $B$ has at most two\footnote{Remember that branching might also be present at the nodes.} branching points, which in turn implies that the source curve must be $\mathbb{P}^1$ by the Riemann--Hurwitz theorem. A map from $\mathbb{P}^1$ to itself with two ramification points is given by $z^m\colon \mathbb{P}^1 \rightarrow \mathbb{P}^1$ up to a change of coordinates. For a rational tail $T$, there might also be a contracted component $P'$ attached to the ramification point.
In the case of $\mathbf{(ii)}$, the $\epsilon$-admissibility condition then says that
\[\deg(f^*L_{|B})>0.\]
While in the case of $\mathbf{(i)}$,
\[\deg(\mathsf{br}(f)_{|T})=\mathrm{mult}_p(\mathsf{br}(f))\] for a unique point $p \in T$ which is not a node. Hence $\epsilon$-admissibility says that
\[\deg(f^*L_{|T})-\deg(f^*L_p)>0.\]
Since we fixed the class $\beta$, the conclusions above bound the number of components $T$ or $B$ by $\deg(\beta)$. Hence the image of $\mathsf{br}$ is contained in a quasi-compact substack of ${\mathfrak{M}}_{g,N,\mathsf m}$, which is therefore quasi-separated and of finite type, because ${\mathfrak{M}}_{g,N,\mathsf m}$ is quasi-separated and locally of finite type.
The branching-divisor map $\mathsf{br}$ is of finite type and quasi-separated, since the fibers of $\mathsf{br}$ are sub-loci of stable maps to $X\times C$ for some nodal curve $C$. The moduli space $ Adm_{g,N}^{\epsilon}(X^{(n)}, \beta)$ is of finite type and quasi-separated itself, because $\mathsf{br}$ is of finite type and quasi-separated and factors through a quasi-separated substack of finite type.
\qed
\begin{lemma} \label{contraction}
Let $(P,C,\mathbf{x}, f)$ be a pre-admissible map.
Let $(P',C',\mathbf{x}', f')$ be given by contraction of a rational tail $T\subseteq (C,\mathbf x)$ and stabilisation of the induced map
\[f\colon P \rightarrow X\times C'.\]
Let $p \in C'$ be the image of contraction of $T$. Then the following holds
\[\deg(\mathsf{br}(f)_{|T})+\deg(f^*L_{|T})=\mathrm{mult}_{p}(\mathsf{br}(f'))+\deg(f'^*L_{p}).\]
\end{lemma}
\textit{Proof.}
By Lemma \ref{RHformula},
\[2g(P_{|T})-2=-2d+\deg(\mathsf{br}(f)_{|T})+d-\ell(p),\]
where $d$ is the degree of the cover over $T$ and $\ell(p)$ is the number of points in the fiber above $p$,
from which it follows that
\begin{align*}
\deg(\mathsf{br}(f)_{|T})&=2g(P_{|T})-2+2d-d+\ell(p) \\
&=2g(P_{|T})-2+d+\ell(p).
\end{align*}
By Lemma \ref{br},
\begin{align*}
\mathrm{mult}_p(\mathsf{br}(f'))&=2g(P_{|T})-2+2\ell(p)+d-\ell(p)\\
&=2g(P_{|T})-2+d+\ell(p).
\end{align*}
It is also clear from the definitions that
\[\deg(f^*L_{|T})=\deg(f'^*L_p),\]
and the claim follows.
\qed
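To illustrate the lemma in the notation of the proof, suppose that $P_{|T}\cong \mathbb{P}^1$ covers $T$ via $z^2$ with the node of $T$ at $\infty$, and that $f_X$ is constant over $T$, so that the $L$-terms vanish. Then $d=2$, $\ell(p)=1$, $g(P_{|T})=0$, and both sides of the claimed equality are equal to $2g(P_{|T})-2+d+\ell(p)=1$, accounting for the unique interior simple branching point.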
\begin{defn} \label{modifiation}
Let $R$ be a discrete valuation ring and let $(P,C,\mathbf{x}, f)$ be a pre-admissible map over $\Spec R$. A \textit{modification} of $(P,C,\mathbf{x}, f)$ is a pre-admissible map
$(\widetilde{P},\widetilde{C},\widetilde{\mathbf{x}},\widetilde f)$ over $\Spec R'$ such that
\[(\widetilde{P},\widetilde{C},\widetilde{\mathbf{x}}, \widetilde{f})_{|\Spec K'} \cong (P,C,\mathbf{x}, f)_{|\Spec K'},\]
where $R'$ is a finite extension of $R$ with a fraction field $K'$.
\end{defn}
A modification of a family of curves $C$ over a discrete valuation ring is given by three operations:
\begin{itemize}
\item blow-ups of the central fiber of $C$;
\item contractions of rational tails and rational bridges in the central fiber of $C$;
\item base changes with respect to finite extensions of discrete valuation rings.
\end{itemize}
A modification of a pre-admissible map is therefore given by an appropriate choice of three operations above applied to both target and source curves, such that the map $f$ can be extended as well.
\begin{thm} \label{properness}
The moduli spaces $ Adm_{g,N}^{\epsilon}(X^{(n)}, \beta)$ are proper Deligne-Mumford stacks.
\end{thm}
\textit{Proof.} We will use the valuative criterion of properness for quasi-separated Deligne-Mumford stacks. Let
\[(P^{*},C^{*},\mathbf{x}^*, f^*) \in Adm_{g,N}^{\epsilon}(X^{(n)}, \beta)(K)\]
be a family of $\epsilon$-admissible maps over the fraction field $K$ of a discrete valuation ring $R$. The strategy of the proof is to separate $P^*$ into two components $P^*_{\circ}$ and $P^*_{\bullet}$, the subcurve contracted by $f^*_{C^*}$ and the non-contracted one, respectively (as was done for Lemma \ref{br}). We then take a limit of $f^*_{|P^*_{\bullet}}$ preserving it as a cover of the target curve, and a limit of $f^*_{|P^*_{\circ}}$ as a stable map. We then glue the two limits back and perform a series of modifications to get rid of points or rational components that do not satisfy $\epsilon$-admissibility.
\\
\textit{Existence, Step 1.} Let
\[(P^{*}_{\circ}, \mathbf{q}_\circ^*) \subseteq P^{*} \]
be the maximal subcurve contracted by $f^{*}_{C^*}$, where the markings $\mathbf{q}^{*}_{\circ}$ are given by the nodes of $P^{*}$ disconnecting $P^{*}_{\circ}$ from the rest of the curve. By
\[(P^{*}_{\bullet}, \mathbf{q}_{\bullet}^*) \subseteq P^{*} \]
we denote the complement of $P^{*}_{\circ}$ with similar markings.
Let
\[(\widetilde{P}^{*}_{\bullet}, \mathbf{t}^*, \mathbf{t}'^*)\]
be the normalisation of $P^{*}_{\bullet}$ at the nodes which are mapped by $f^{*}_{C^*}$ to the regular locus of $C^{*}$; the markings $\mathbf{t}^*$ and $\mathbf{t}'^*$ are given by the preimages of those nodes. The induced map
\[\tilde{f}^{*}_{\bullet,C^*}\colon \widetilde{P}^{*}_{\bullet} \rightarrow C^*\]
is an admissible cover. By properness of admissible covers, there exists, possibly after a finite base change\footnote{For this proof, if we take a finite extension $R\rightarrow R'$, we relabel $R'$ by $R$.}, an extension
\[((P_{\bullet}, \mathbf{q}_{\bullet}, \mathbf{t}, \mathbf{t}'), (C,\mathbf{x}), \tilde f_{\bullet, C})\in {\mathcal A} dm(R),\]
where ${\mathcal A} dm$ is the moduli space of stable admissible covers with fixed ramification profiles, such that both source and target curves are marked, and markings of the source curve are not allowed to map to the markings of the target curve. The ramification profiles are given by the ramification profiles of $\tilde{f}^{*}_{\bullet,C^*}$.
If necessary, we then take a finite base change and modify the central fibers of source and target curves to obtain a map
\[f_{\bullet}\colon P_{\bullet} \rightarrow X\times C,\]
such that $f_{\bullet,C}$ is still an admissible cover (possibly unstable)\footnote{The map $f_{\bullet}$ can be constructed differently. One can lift $\tilde{f}^{*}_{\bullet}\colon \widetilde{P}^{*}_{\bullet} \rightarrow X\times C^*$ to an element of the moduli of twisted stable map ${\mathcal K}_{g,N}([{\mathrm{Sym}}^n X])$ after passing from admissible covers to twisted stable maps and then take a limit there.}.
Now let
\[f_{\circ}\colon (P_{\circ},\mathbf{q}_{\circ}) \rightarrow X\times C\]
be the extension of
\[f^* \colon (P^*_{\circ},\mathbf{q}^*_{\circ}) \rightarrow X\times C\]
to $\Spec R$. It exists, possibly after a finite base change, by properness of the moduli space of stable marked maps. If necessary, we modify the curve $C$ to avoid contracted components mapping to the markings $\mathbf{x}$. If we do so, we modify $P_{\bullet}$ accordingly to make $f_{\bullet,C}$ an admissible cover (again, possibly unstable). We then glue back $P_{\circ}$ and $P_{\bullet}$ at the markings $(\mathbf{q}_{\circ},\mathbf{q}_{\bullet})$ and $(\mathbf{t},\mathbf{t}')$ to obtain a map
\[f\colon P \rightarrow X\times C.\]
Let
\begin{equation} \label{familyplus} (P,C,\mathbf{x}, f)
\end{equation}
be the corresponding pre-admissible map. We now perform a series of modifications of the map above to obtain an $\epsilon$-admissible map.
\\
\textit{Existence, Step 2.} Let us analyse $(P,C,\mathbf{x}, f)$ in relation to the conditions of $\epsilon$-admissibility.
\\
$\mathbf{(i)}$ Let $p_0 \in C_{|\Spec {\mathbb{C}}}$ be a point in the central fiber of $C$ that does not satisfy the condition $\mathbf{(i)}$ of $\epsilon$-admissibility. There must be a contracted component over $p_0$, because $f_{\bullet,C}$ was constructed as an admissible cover, preserving the ramification profiles. We then blow-up the family $C$ at the point $p_0 \in C$. The map $f_{C}$ lifts to a map $\tilde{f}_C$,
\[
\begin{tikzcd}[row sep=small, column sep = small]
& P \arrow[d,"f_C"] \arrow{dl}[swap]{\tilde{f}_C}& \\
\mathrm{Bl}_{p_0}C \arrow[r] & C&
\end{tikzcd}
\]
by the universal property of a blow-up, since the preimage of the point $p_0$ is a contracted curve (which is a Cartier divisor inside $P$). The map $f_X$ is left unchanged. Let $T \subset \mathrm{Bl}_{p_0}C$ be the exceptional curve, which is also a rational tail of the central fiber of $\mathrm{Bl}_{p_0}C$ attached at $p_0$ to $C_{|\Spec {\mathbb{C}}}$. By Lemma \ref{contraction}, we obtain that
\begin{equation}\label{eq1}
\deg(\mathsf{br}(\tilde{f})_{|T})+\deg(\tilde{f}^*L_{|T})=\mathrm{mult}_{p_0}(\mathsf{br}(f))+\deg(f^*L_{p_0})
\end{equation}
and, for all points $p \in T$,
\begin{equation} \label{eq2}
\mathrm{mult}_{p}(\mathsf{br}(\tilde f))+\deg(\tilde f^*L_{p})<\mathrm{mult}_{p_0}(\mathsf{br}(f))+\deg(f^*L_{p_0}).
\end{equation}
We repeat this process inductively for all points of the central fiber for which the part $\mathbf{(i)}$ of $\epsilon$-admissibility is not satisfied. By (\ref{eq1}) and (\ref{eq2}), this procedure will terminate and we will arrive at the map which satisfies the part $\mathbf{(i)}$ of $\epsilon$-admissibility. Moreover, the procedure does not create rational tails which do not satisfy the part $\mathbf{(ii)}$ of $\epsilon$-admissibility. \\
$\mathbf{(ii)}$ If a rational tail $T \subseteq (C_{|\Spec {\mathbb{C}}},\mathbf x_{|\Spec {\mathbb{C}}})$ does not satisfy the condition $\mathbf{(ii)}$ of $\epsilon$-admissibility, we contract it,
\[
\begin{tikzcd}[row sep=small, column sep = small]
& P \arrow{d}{\tilde{f}_C} \arrow{dl}[swap]{f_C} & \\
C \arrow[r] & \mathrm{Con}_{T} C&
\end{tikzcd}
\]
The map $f_X$ is left unchanged. Let $p_0 \in \mathrm{Con}_T C$ be the image of the contracted rational tail $T$. Since, by Lemma \ref{contraction}, \[\deg(\mathsf{br}(f)_{|T})+\deg(f^*L_{|T})=\mathrm{mult}_{p_0}(\mathsf{br}(\tilde f))+\deg(\tilde f^*L_{p_0}),\]
the central fiber satisfies the condition $\mathbf{(i)}$ of $\epsilon$-admissibility at the point $p_0 \in \mathrm{Con}_T C$. We repeat this process until we get rid of all rational tails that do not satisfy the condition $\mathbf{(ii)}$ of $\epsilon$-admissibility.
\\
$\mathbf{(iii)}$
By the construction of the family (\ref{familyplus}), all the rational bridges of $C$ satisfy the condition $\mathbf{(iii)}$ of $\epsilon$-admissibility.
\\
\textit{Uniqueness.} Assume we are given two families of $\epsilon$-admissible maps over $\Spec R$
\[(P_1,C_1,\mathbf{x}_1, f_1)\ \text{and} \ (P_2,C_2,\mathbf{x}_2, f_2),\]
which are isomorphic over $\Spec K$. Possibly after a finite base change, there exists a family of pre-admissible maps
\[(\widetilde{P},\widetilde{C}, \tilde{\mathbf{x}},\tilde{f})\]
which dominates both families in the sense that there exists a commutative square
\begin{equation}\label{dominatin}
\begin{tikzcd}[row sep=small, column sep = small]
\widetilde{P} \arrow[r, "\tilde{f}"] \arrow[d]& X\times \widetilde{C} \arrow[d] & \\
P_i \arrow[r,"f_i"]& X\times C_i &
\end{tikzcd}
\end{equation}
We take a minimal family $(\widetilde{P},\widetilde{C}, \tilde{\mathbf{x}},\tilde{f})$ with this property. The vertical maps are given by contractions of rational tails. Then, by the equality
\[\deg(\mathsf{br}(\tilde f)_{|T})+\deg(\tilde f^*L_{|T})=\mathrm{mult}_{p_0}(\mathsf{br}(f_i))+\deg(f_i^*L_{p_0})\]
of Lemma \ref{contraction}, those rational tails cannot satisfy the condition $(\mathbf{ii})$ of $\epsilon$-admissibility, since otherwise the image point of the contraction of such a rational tail would not satisfy the condition $(\mathbf{i})$ of $\epsilon$-admissibility. But the maps $(P_i,C_i,\mathbf{x}_i, f_i)$ are $\epsilon$-admissible by assumption. Hence the target curves are isomorphic. By separatedness of the moduli space of maps to a fixed target, it must be that
\[ (P_1,C_1,\mathbf{x}_1, f_1) \cong (\widetilde{P},\widetilde{C}, \tilde{\mathbf{x}},\tilde{f}) \cong (P_2,C_2,\mathbf{x}_2, f_2).\] \qed
\subsection{Obstruction theory}
The obstruction theory of $ Adm_{g,N}^{\epsilon}(X^{(n)}, \beta)$ is defined via the obstruction theory of relative maps in the spirit of \cite[Section 2.8]{GV} with the difference that we have a relative target geometry $X\times \mathfrak{C}_{g,N}/ {\mathfrak{M}}_{g,N}$. There exists a complex $E^{\bullet}$, which defines a perfect obstruction theory relative to ${\mathfrak{M}}_{h,N'} \times {\mathfrak{M}}_{g,N}$,
\[ \phi: E^{\bullet} \rightarrow {\mathbb{L}}_{ Adm_{g,N}^{\epsilon}(X^{(n)}, \beta)/{\mathfrak{M}}_{h,N'} \times {\mathfrak{M}}_{g,N}},\]
where ${\mathfrak{M}}_{h,N'}$ is the moduli space of source curves with markings at the fibers over marked points of the target curves; and $ {\mathfrak{M}}_{g,N}$ is the moduli space of target curves. More precisely, such a complex exists at each connected component $ Adm_{g,N}^{\epsilon}(X^{(n)}, \beta,\underline{\mu})$.
\begin{prop}
The morphism $\phi$ is a perfect obstruction theory.
\end{prop}
\textit{Proof.} This is a relative version of \cite[Section 2.8]{GV}.
\qed
\\
\subsection{Relation to other moduli spaces} \label{Relation1} Let us now relate the moduli spaces of $\epsilon$-admissible maps for the extremal values of $\epsilon \in {\mathbb{R}}_{\leq0}\cup\{-\infty\}$ to more familiar moduli spaces.
\subsubsection{$\epsilon=-\infty$}\label{comparing} In this case $e^{-1/\epsilon}=e^{0}=1$, so the first two conditions of Definition \ref{epsilonadm} become
\begin{itemize}
\item[$\mathbf{(i)}$] for all points $p \in C$,
\[ \mathrm{mult}_p(\mathsf{br}(f))+\deg(f^*L_p)\leq 1;\]
\item[$\mathbf{(ii)}$] for all rational tails $T \subseteq ({\mathcal C},\mathbf{x})$,
\[\deg(\mathsf{br}(f)_{|T})+\deg(f^*L_{|T})>1.\]
\end{itemize}
Since multiplicity and degree take only integer values, by Lemma \ref{br} and the choice of $L$, there is only one possibility for which the condition $\mathbf{(i)}$ is satisfied. Namely,
$f_{{\mathcal C}}$ does not contract any irreducible components and has only simple ramifications.
To unpack the condition $\mathbf{(ii)}$, recall that a non-constant ramified map from a smooth curve to $\mathbb{P}^1$ has at least two ramification points; it has precisely two ramification points, both simple, if and only if it is given by
\begin{equation} \label{simple}
z^{2}\colon \mathbb{P}^1 \rightarrow \mathbb{P}^1
\end{equation}
up to a change of coordinates. Hence, for a rational tail $T$, \[\deg(\mathsf{br}(f)_{|T})+\deg(f^*L_{|T})=1\]
if and only if $f_C$ restricts to $z^2$ over $T$ and $f_X$ is constant there. In this case, $|\mathrm{Aut}(f)|=\infty$. In the light of the condition $(\mathbf{iv})$ of $\epsilon$-admissibility, the condition $\mathbf{(ii)}$ is therefore automatically satisfied.
We obtain that the data of a $-\infty$-admissible map
$(P,C,\mathbf{x}, f)$ can be represented by the following correspondence
\[
\begin{tikzcd}[row sep=small, column sep = small]
P \arrow[r, "f_{X}"] \arrow{d}[swap]{f_{C}} & X & \\
(C,\mathbf{x}, \mathbf{p}) & &
\end{tikzcd}
\]
where $f_{C}$ is a degree-$n$ admissible cover with arbitrary ramifications over the markings $\mathbf{x}$ and with simple ramifications over the unordered markings $\mathbf{p}=\mathsf{br}(f)$, such that $|\mathrm{Aut}(f)| <\infty$.
Hence the moduli space $ Adm_{g,N}^{-\infty}(X^{(n)}, \beta)$ admits a projection from the moduli space of twisted stable maps with \textit{extended degree} (see \cite[Section 2.1]{BG} for the definition) to the orbifold $[X^{(n)}]$,
\begin{equation} \label{projection}
\rho \colon {\mathcal K}_{g,N}([X^{(n)}], \beta) \rightarrow Adm_{g,N}^{-\infty}(X^{(n)}, \beta),
\end{equation}
which is given by passing from twisted curves to their coarse moduli spaces. Indeed, an element of $ {\mathcal K}_{g,N}([X^{(n)}], \beta)$ is given by the data of
\[
\begin{tikzcd}[row sep=small, column sep = small]
{\mathcal P} \arrow[r, "f_{X}"] \arrow{d}[swap]{f_{{\mathcal C}}} & X & \\
({\mathcal C},\mathbf{x}, \mathbf{p}) & &
\end{tikzcd}
\]
where $f_{{\mathcal C}}$ is a representable degree-$n$ \'etale cover of the twisted marked curve $({\mathcal C},\mathbf{x}, \mathbf{p})$. The additional markings $\mathbf{p}$ are unordered; over these markings the map $f_{{\mathcal C}}$ must have simple ramifications after passing to coarse moduli spaces. The map $f_{X}$ has to be fixed by only finitely many automorphisms of the cover $f_{{\mathcal C}}$. Passing to coarse moduli spaces, the above data becomes the data of a $-\infty$-admissible map.
Moreover, the virtual fundamental classes are related by the push-forward, as is shown in the following lemma.
\begin{lemma} \label{fc}
\begin{equation*}
\rho_*[{\mathcal K}_{g,N}([X^{(n)}],\beta)]^{\mathrm{vir}}
= [ Adm_{g,N}^{-\infty}(X^{(n)}, \beta)]^{\mathrm{vir}}.
\end{equation*}
\end{lemma}
\textit{Proof.} Let $\mathfrak{K}_{g,N}(BS_n, \mathsf m)$ be the moduli stack of twisted maps to $BS_n$ (not necessarily stable) and $\mathfrak{Adm}_{g, \mathsf m, n, N}$ be the moduli stack of admissible covers (again, not necessarily stable). There exists the following pull-back diagram,
\begin{equation} \label{normalisation}
\begin{tikzcd}[row sep=small, column sep = small]
{\mathcal K}_{g,N}([X^{(n)}],\beta) \arrow[d,"\pi_2"] \arrow[r,"\rho"] & Adm_{g,N}^{-\infty}(X^{(n)}, \beta) \arrow[d,"\pi_2"] & \\
\mathfrak{K}_{g,N}(BS_n, \mathsf m) \arrow[r] & \mathfrak{Adm}_{g, \mathsf m, n, N}
\end{tikzcd}
\end{equation}
The bottom arrow is a normalisation map, therefore it is of degree 1. By \cite[Theorem 5.0.1]{Co}, we therefore obtain the claim for virtual fundamental classes given by the relative obstruction theories,
\begin{multline} \label{compatability}
\rho_*[{\mathcal K}_{g,N}([X^{(n)}],\beta)/\mathfrak{K}_{g,N}(BS_n, \mathsf m)]^{\mathrm{vir}} \\
= [ Adm_{g,N}^{-\infty}(X^{(n)}, \beta)/\mathfrak{Adm}_{g, \mathsf m, n, N}]^{\mathrm{vir}}.
\end{multline}
The moduli space $\mathfrak{K}_{g,N}(BS_n, \mathsf m)$ is smooth and $\mathfrak{Adm}_{g, \mathsf m, n, N}$ is a locally complete intersection (see \cite[Proposition 4.2.2]{ACV}), which implies that their naturally defined obstruction theories are given by cotangent complexes.
Using virtual pull-backs of \cite{Ma}, one can therefore express the virtual fundamental classes given by absolute perfect obstruction theories as follows
\begin{align*}[ Adm_{g,N}^{-\infty}(X^{(n)}, \beta)]^{\mathrm{vir}}&=(p\circ \pi_2)^![{\mathfrak{M}}_{g,N}] \\
&=\pi_2^{!}p^![{\mathfrak{M}}_{g,N}] \\
&=\pi_2^{!}[\mathfrak{Adm}_{g, \mathsf m, n, N}] \\
&=[ Adm_{g,N}^{-\infty}(X^{(n)}, \beta)/\mathfrak{Adm}_{g, \mathsf m, n, N}]^{\mathrm{vir}},
\end{align*}
where \[p\colon \mathfrak{Adm}_{g, \mathsf m, n, N} \rightarrow {\mathfrak{M}}_{g,N}\] is the natural projection; we used that $p^![{\mathfrak{M}}_{g,N}]=[\mathfrak{Adm}_{g, \mathsf m, n, N}]$, which is due to the fact that the obstruction theory is given by the cotangent complex. The same holds for $\mathfrak{K}_{g,N}(BS_n, \mathsf m)$, hence we obtain that
\begin{equation*}
\rho_*[{\mathcal K}_{g,N}([X^{(n)}],\beta)]^{\mathrm{vir}}
= [ Adm_{g,N}^{-\infty}(X^{(n)}, \beta)]^{\mathrm{vir}}.
\end{equation*}
\qed
\subsubsection{$\epsilon=0$} By the first two conditions of Definition \ref{epsilonadm} (note that $e^{-1/\epsilon}\rightarrow +\infty$ as $\epsilon \rightarrow 0^{-}$), the map $f_{C}$ can have arbitrary ramifications and contracted components of arbitrary genera (more precisely, the two are only restricted by $n$, $g$, $N$ and $\beta$). In conjunction with the other conditions of Definition \ref{epsilonadm}, we therefore obtain the following identification of moduli spaces
\begin{equation} \label{compatability2}
Adm_{g,N}^{0}(X^{(n)}, \beta) = {\overline M}^{\bullet}_{\mathsf{m}}(X\times C_{g,N}/{\overline M}_{g,N},(\gamma,n)),
\end{equation}
where the space on the right is the moduli space of relative stable maps with disconnected domains to the relative geometry \[X\times C_{g,N} \rightarrow {\overline M}_{g,N},\]
where $C_{g,N} \rightarrow {\overline M}_{g,N}$ is the universal curve and where the markings play the role of relative divisors. Instead of fixing the genus of source curves, we fix the degree $\mathsf{m}$ of the branching divisor. At each component $ Adm_{g,N}^{0}(X^{(n)}, \beta,\underline{\mu})$ of the decomposition (\ref{ramificationprofiles}), the genus of the source curve and the degree of the branching divisor are related by Lemma \ref{RHformula}.
The obstruction theories of the two moduli spaces are equal, since the obstruction theory of the space $ Adm_{g,N}^{0}(X^{(n)}, \beta)$ was defined via the obstruction theory of relative stable maps.
\subsection{Inertia stack} We would like to define evaluation maps from the moduli spaces $ Adm_{g,N}^{\epsilon}(X^{(n)},\beta)$ to a certain rigidification of the inertia stack ${\mathcal I} X^{(n)}$ of $[X^{(n)}]$; for this we need a few observations.
The inertia stack can be defined as follows
\[{\mathcal I} X^{(n)}=\coprod_{[g]}[X^{n,g}/C(g)],\]
where the disjoint union is taken over conjugacy classes $[g]$ of elements of $S_n$, $X^{n,g}$ is the fixed locus of $g$ acting on $X^n$, and $C(g)$ is the centraliser subgroup of $g$. Recall that conjugacy classes of elements of $S_n$ are in one-to-one correspondence with partitions $\mu$ of $n$. Let us express a partition $\mu$ in terms of its repeated parts and their multiplicities,
\[\mu=(\underbrace{ \eta_1, \dotsb, \eta_1}_{m_1}, \dotsb, \underbrace{\eta_s, \dotsb, \eta_s}_{m_s}).\]
We define
\begin{equation} \label{centraliser}
C(\mu):= \prod^{s}_{t=1} C_{\eta_t} \wr S_{m_t},
\end{equation}
where $C_{\eta_t}$ is the cyclic group of order $\eta_t$ and $\wr$ denotes the \textit{wreath product}, defined as
\[C_{\eta_t} \wr S_{m_t}:=C_{\eta_t}^{\Omega_t}\rtimes S_{m_t},\]
where $\Omega_t=\{1,\dots,m_t\}$; $S_{m_t}$ acts on $C_{\eta_t}^{\Omega_t}$ by permuting the factors. There exist two natural subgroups of $C(\mu)$
\begin{equation} \label{autmu}
\mathrm{Aut}(\mu):=\prod^{s}_{t=1} S_{m_t} \ \text{ and } \ N(\mu):=\prod^{s}_{t=1} C_{\eta_t}^{\Omega_t}
\end{equation}
as the notation suggests, $\mathrm{Aut}(\mu)$ coincides with the automorphism group of the partition $\mu$. The inclusion $\mathrm{Aut}(\mu) \hookrightarrow C(\mu)$ splits the following sequence on the right,
\begin{equation} \label{split}
1 \rightarrow N(\mu) \rightarrow C(\mu) \rightarrow \mathrm{Aut}(\mu) \rightarrow 1.
\end{equation}
Viewing a partition $\mu$ as a partially ordered\footnote{$\mu_i\geq \mu_j \iff j\geq i$.} set, we define $X^{\mu}$ as the self-product of $X$ over the set $\mu$. In particular,
\[X^{\mu}\cong X^{\ell (\mu)},\]
where $\ell(\mu)$ is the length of the partition $\mu$. The group $C(\mu)$ acts on $X^\mu$ as follows. The product of cyclic groups $C_{\eta_t}^{\Omega_t}$ acts trivially on the corresponding factors of $X^\mu$, while $S_{m_t}$ permutes the factors corresponding to the same part $\eta_t$. These actions are compatible with the wreath product.
Given an element $g\in S_n$ in a conjugacy class corresponding to a partition $\mu$, we have the following identifications
\[C(g) \cong C(\mu)\ \text{and} \ X^{n,g}\cong X^\mu,\]
such that the group actions match.
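For example, for the partition $\mu=(2,2,1)$ of $n=5$ we have $\eta_1=2$, $m_1=2$ and $\eta_2=1$, $m_2=1$, so that
\[C(\mu)=(C_2\wr S_2)\times (C_1\wr S_1), \quad |C(\mu)|=8,\]
which is indeed the order of the centraliser of $(1\,2)(3\,4)$ in $S_5$. Here $\mathrm{Aut}(\mu)\cong S_2$ and $N(\mu)\cong C_2\times C_2$, in accordance with the sequence (\ref{split}).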
In particular, with the notation introduced above the inertia stack can be re-expressed,
\begin{equation} \label{Inertia}
{\mathcal I} X^{(n)}=\coprod_{\mu}[X^{\mu}/C(\mu)],
\end{equation}
and by the splitting of (\ref{split}) we obtain that
\begin{equation} \label{rigid}
{\mathcal I} X^{(n)}=\coprod_{\mu}[X^{\mu}/\mathrm{Aut}(\mu)]\times BN(\mu).
\end{equation}
We thereby define a rigidified version of ${\mathcal I} X^{(n)}$,
\[\overline{{\mathcal I}}X^{(n)}:=\coprod_{\mu}[X^{\mu}/\mathrm{Aut}(\mu)].\]
Note, however, that this is not a rigidified inertia stack in the sense of \cite[Section 3.3]{AGV}; $\overline{{\mathcal I}}X^{(n)}$ is a further rigidification of ${\mathcal I} X^{(n)}$.
Recall that as a graded vector space, the orbifold cohomology is defined as follows
\[H^*_{\mathrm{orb}}(X^{(n)},{\mathbb{Q}}):=H^{*-2\,\mathrm{age}(\mu)}({\mathcal I} X^{(n)},{\mathbb{Q}}),\]
where the degree shift by $2\,\mathrm{age}(\mu)$ is applied componentwise with respect to the decomposition of ${\mathcal I} X^{(n)}$.
By (\ref{Inertia}), we therefore get that
\begin{equation} \label{orbifoldcoh}
H^*_{\mathrm{orb}}(X^{(n)},{\mathbb{Q}})=H^{*-2\mathrm{age}(\mu)}({\mathcal I} X^{(n)},{\mathbb{Q}})= H^{*-2\mathrm{age}(\mu)}(\overline{{\mathcal I}}X^{(n)},{\mathbb{Q}}).
\end{equation}
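For example, for $n=2$ there are two partitions, $(1,1)$ and $(2)$, with $\mathrm{age}((1,1))=0$ and $\mathrm{age}((2))=1$, so that $\overline{{\mathcal I}}X^{(2)}=[X^2/S_2]\sqcup X$ and
\[H^{k}_{\mathrm{orb}}(X^{(2)},{\mathbb{Q}})=H^{k}(X^{(2)},{\mathbb{Q}})\oplus H^{k-2}(X,{\mathbb{Q}}).\]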
\subsection{Invariants}
Let
$ \overrightarrow{Adm}_{g,N}^{\epsilon}(X^{(n)},\beta)$ be the moduli space obtained from $ Adm_{g,N}^{\epsilon}(X^{(n)},\beta)$ by putting the \textit{standard order}\footnote{We order the points in a fiber in accordance with their ramification degrees.} on the fibers of $f_{C}$ over the marked points of the target curve. The two moduli spaces are related as follows,
\begin{equation} \label{decomposition1}
\coprod_{\underline \mu} [ \overrightarrow{Adm}^{\epsilon}_{g,N}(X^{(n)}, \beta,\underline \mu)/\prod_i \mathrm{Aut}(\mu^i)]= Adm^{\epsilon}_{g,N}(X^{(n)}, \beta).
\end{equation}
There exist naturally defined
evaluation maps at marked points
\[{\mathrm{ev}}_{i}\colon \overrightarrow{Adm}_{g,N}^{\epsilon}(X^{(n)},\beta) \rightarrow \coprod_{\mu} X^{\mu}, \quad i=1, \dots, N.\]
By (\ref{autmu}), (\ref{Inertia}) and (\ref{decomposition1}) we can define evaluation maps
\begin{equation} \label{evaluation}
{\mathrm{ev}}_i \colon Adm_{g,N}^{\epsilon}(X^{(n)},\beta) \rightarrow \overline{{\mathcal I}} X^{(n)}, \quad i=1, \dots, N,
\end{equation}
as the composition
\begin{multline*}
Adm^{\epsilon}_{g,N}(X^{(n)}, \beta) = \coprod_{\underline \mu} [ \overrightarrow{Adm}_{g,N}^{\epsilon}(X^{(n)},\beta,\underline{\mu})/\prod_i \mathrm{Aut}(\mu^i)] \\
\xrightarrow{{\mathrm{ev}}_i} \coprod_{\mu}[X^{\mu}/\mathrm{Aut}(\mu)]
=\overline{{\mathcal I}}X^{(n)}.
\end{multline*}
For universal markings
\[s_{i}\colon Adm_{g,N}^{\epsilon}(X^{(n)},\beta)\rightarrow {\mathcal C}_{g,N}\]
to the universal \textit{target} curve
\[{\mathcal C}_{g,N} \rightarrow Adm_{g,N}^{\epsilon}(X^{(n)},\beta)\]
we also define cotangent line bundles as follows
\[{\mathcal L}_{i}: =s^{*}_{i}(\omega_{{\mathcal C}_{g,N} / Adm_{g,N}^{\epsilon}(X^{(n)},\beta)}), \quad i=1, \dots, N,\]
where $\omega_{{\mathcal C}_{g,N} / Adm_{g,N}^{\epsilon}(X^{(n)},\beta)}$ is the universal relative dualising sheaf. We denote
\[\psi_{i}:=\mathrm{c}_{1}({\mathcal L}_{i}).\]
With the above structures at hand, we can define $\epsilon$-admissible invariants.
\begin{defn} The \textit{descendent} $\epsilon$-\textit{admissible invariants} are
\[\langle \tau_{m_{1}}(\gamma_{1}), \dots, \tau_{m_{N}}(\gamma_{N}) \rangle^{\epsilon}_{g,\beta}:= \int_{[ Adm_{g,N}^{\epsilon}(X^{(n)},\beta)]^{\mathrm{vir}}}\prod^{N}_{i=1}\psi_{i}^{m_{i}} {\mathrm{ev}}^{*}_{i}(\gamma_{i}),\]
where $\gamma_{1}, \dots, \gamma_{N} \in H_{\mathrm{orb}}^{*}(X^{(n)})$ and $m_{1}, \dots, m_{N}$ are non-negative integers.
\end{defn}
\subsection{Relation to other invariants} \label{Relation2}
We will now explore how $\epsilon$-admissible invariants are related to the invariants associated to the spaces discussed in Section \ref{Relation1}.
\subsubsection{Classes} \label{Classes}
Let $\{\delta_1, \dots, \delta_{m_S}\}$ be an ordered basis of $H^*(X,{\mathbb{Q}})$. Let
\[\vec{\mu}=((\mu_1,\delta_{\ell_1}), \dots, (\mu_k,\delta_{\ell_k}))\]
be a cohomology-weighted partition of $n$ with the standard ordering, i.e.
\[(\mu_{i}, \delta_{\ell_i}) > (\mu_{i'},\delta_{\ell_{i'}}),\]
if $\mu_{i}>\mu_{i'}$, or if $\mu_{i}=\mu_{i'}$ and $\ell_{i}>\ell_{i'}$. The underlying partition will be denoted by $\mu$. For each $\vec\mu$, we consider a class
\[\delta_{\ell_1}\otimes\dots \otimes \delta_{\ell_k} \in H^*(X^\mu,{\mathbb{Q}}),\]
we then define
\[\lambda(\vec{\mu}):= \overline{\pi}_*(\delta_{\ell_1}\otimes\dots \otimes \delta_{\ell_k})\in H_{\mathrm{orb}}^{*}(X^{(n)}, {\mathbb{Q}}),\]
where
\[ \overline{\pi}\colon \coprod_{\mu}X^{\mu} \rightarrow \overline{{\mathcal I}} X^{(n)}\]
is the natural projection.
More explicitly, as an element of
\[H^*(X^{\mu},{\mathbb{Q}})^{\mathrm{Aut}(\mu)} \subseteq H_{\mathrm{orb}}^{*}(X^{(n)}, {\mathbb{Q}}),\] the class $\lambda(\vec{\mu})$ is given by the following formula
\[ \sum_{h\in \mathrm{Aut}(\mu)}h^*(\delta_{\ell_1}\otimes\dots \otimes \delta_{\ell_k}) \in H^*(X^{\mu},{\mathbb{Q}})^{\mathrm{Aut}(\mu)}.\]
The importance of these classes is due to the fact that they form a basis of $H_{\mathrm{orb}}^{*}(X^{(n)}, {\mathbb{Q}})$, see Proposition \ref{HilbSym}.
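For example, for $n=2$ and $\vec\mu=((1,\delta_{a}),(1,\delta_{b}))$ with $\delta_a$, $\delta_b$ of even degree, the formula above gives
\[\lambda(\vec\mu)=\delta_{a}\otimes\delta_{b}+\delta_{b}\otimes\delta_{a} \in H^*(X^{2},{\mathbb{Q}})^{S_2},\]
while for $\vec\mu=((2,\delta_{a}))$ the group $\mathrm{Aut}(\mu)$ is trivial and $\lambda(\vec\mu)=\delta_{a}\in H^*(X,{\mathbb{Q}})$, placed in cohomological degree shifted by $2\,\mathrm{age}((2))=2$.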
\subsubsection{Comparison} Given weighted partitions \[\vec \mu^i=((\mu^i_{1},\delta^{i}_1), \dots, (\mu^i_{k_{i}},\delta^i_{k_i})), \quad i=1,\dots N,\]
the relative Gromov--Witten descendent invariants associated to the moduli space ${\overline M}^{\bullet}_{\mathsf{m}}(X\times C_{g,N}/{\overline M}_{g,N},(\gamma,n))$ are usually\footnote{Note that sometimes the factor $1/|\mathrm{Aut}(\vec \mu)|$ is introduced, in this case we add such factor for all classes defined previously.} defined as
\[\int_{[{\overline M}^{\bullet}_{\mathsf{m}}(X\times C_{g,N}/{\overline M}_{g,N},(\gamma,n))]^{\mathrm{vir}}} \prod_{i=1}^N\psi_i^{m_i} \prod^{k_i}_{j=1} {\mathrm{ev}}_{i,j}^*\delta^i_{j},\]
such that the product is ordered according to the standard ordering of weighted partitions and
\[{\mathrm{ev}}_{i,j}\colon {\overline M}^{\bullet}_{\mathsf{m}}(X\times C_{g,N}/{\overline M}_{g,N},(\gamma,n)) \rightarrow X\]
are the evaluation maps defined by evaluating at the corresponding point of the fiber over a marked point.
In the case of ${\mathcal K}_{g,N}([X^{(n)}], \beta)$, we define evaluation maps as the composition
\[{\mathrm{ev}}_i\colon {\mathcal K}_{g,N}([X^{(n)}], \beta)\rightarrow {\mathcal I} X^{(n)} \rightarrow \overline{{\mathcal I}}X^{(n)}, \quad i=1,\dots N,\]
where we used (\ref{rigid}).
The next lemma concludes the comparison initiated in Section \ref{Relation1}. In what follows, by a $\psi$-class on ${\mathcal K}_{g,N}([X^{(n)}], \beta)$ we will mean a \textit{coarse} $\psi$-class. Orbifold $\psi$-classes are rational multiples of coarse ones.
\begin{lemma} \label{invariantscomp}
\begin{align*}
\langle \tau_{m_{1}}(\lambda(\vec{\mu}^1)), \dots, \tau_{m_{N}}(\lambda(\vec{\mu}^N) )\rangle^{0}_{g,\beta}&= \int_{[{\overline M}^{\bullet}_{\mathsf{m}}(X\times C_{g,N}/{\overline M}_{g,N},(\gamma,n))]^{\mathrm{vir}}} \prod_{i=1}^N\psi_i^{m_i} \prod^{k_i}_{j=1} {\mathrm{ev}}_{i,j}^*\delta^i_{j}\\
\langle \tau_{m_{1}}(\lambda(\vec{\mu}^1)), \dots, \tau_{m_{N}}(\lambda(\vec{\mu}^N) )\rangle^{-\infty}_{g,\beta}&= \int_{[{\mathcal K}_{g,N}([X^{(n)}], \beta)]^{\mathrm{vir}}} \prod_{i=1}^N \psi_i^{m_i} {\mathrm{ev}}_i^*\lambda(\vec{\mu}^i).
\end{align*}
\end{lemma}
\textit{Proof.}
In the light of our conventions, it is a straightforward application of projection and pullback-pushforward formulas. \qed
\section{Master space} \label{master}
\subsection{Definition of the master space}
The space ${\mathbb{R}}_{\leq 0} \cup \{-\infty \}$ of $\epsilon$-stabilities is divided into chambers, inside of which the moduli space $ Adm_{g,N}^{\epsilon}(X^{(n)},\beta)$ stays the same, and as $\epsilon$ crosses a wall between chambers, the moduli space changes discontinuously. Let $\epsilon_0 \in {\mathbb{R}}_{\leq 0} \cup \{-\infty \}$ be a wall, and $\epsilon_+$,
$\epsilon_-$ be some values that are close to $\epsilon_0$ and lie on either side of the wall, with $\epsilon_-<\epsilon_0<\epsilon_+$. We set
\[d_0=e^{-1/\epsilon_0}\ \text{ and } \ d:=\deg(\beta)=\mathsf {m}+\deg(\gamma).\]
From now on, we assume
\[2g-2+N+1/d_0\cdot \deg(\beta)\geq 0\]
and \[1/d_0\cdot \deg(\beta) >2,\]
if $(g,N)=(0,0)$.
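Note that since $\mathrm{mult}_p(\mathsf{br}(f))+\deg(f^*L_p)$ takes integer values, the conditions of Definition \ref{epsilonadm} can only change when $e^{-1/\epsilon}$ crosses an integer; the walls are therefore located at $\epsilon_0=-1/\log d_0$ for integer values of $d_0=e^{-1/\epsilon_0}$. For instance, $d_0=2$ corresponds to the wall at $\epsilon_0=-1/\log 2$.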
\begin{defn}
A pre-admissible map $(P,C,f, \mathbf{x})$ is called $\epsilon_0$\textit{-pre-admissible}, if
\begin{itemize}
\item[$\mathbf{(i)}$] for all points $p \in C$,
\[ \mathrm{mult}_p(\mathsf{br}(f))+\deg(f^*L_p)\leq e^{-1/\epsilon_0};\]
\item[$\mathbf{(ii)}$] for all rational tails $T \subseteq C$,
\[\deg(\mathsf{br}(f)_{|T})+\deg(f^*L_{|T})\geq e^{-1/\epsilon_0};\]
\item[$\mathbf{(iii)}$] for all rational bridges
$B \subseteq C$,
\[\deg(\mathsf{br}(f)_{|B})+\deg(f^*L_{|B})>0.\]
\end{itemize}
\end{defn}
We denote by $\mathfrak{Adm}^{\epsilon_0}_{g,N}(X^{(n)},\beta)$ the moduli space of $\epsilon_0$-pre-admissible maps. Let ${\mathfrak{M}}^{ss}_{g,N,d}$ be the moduli space of weighted semistable curves defined in \cite[Definition 2.1.2]{YZ}. There exists a map
\[ \mathfrak{Adm}^{\epsilon_0}_{g,N}(X^{(n)},\beta) \rightarrow {\mathfrak{M}}^{ss}_{g,N,d}\] \[(P,C,f, \mathbf{x}) \mapsto (C,\mathbf{x},\underline{d}),\]
where the value of $\underline{d}$ on a subcurve $C' \subseteq C$ is defined as follows
\[\underline{d}(C')=\deg(\mathsf{br} (f_{|C'}))+\deg(f^*L_{|C'}).\]
By $M\mathfrak{Adm}^{\epsilon_0}_{g,N}(X^{(n)},\beta)$ we denote the moduli space of $\epsilon_0$-pre-admissible maps with calibrated tails, defined as the fiber product
\[M\mathfrak{Adm}^{\epsilon_0}_{g,N}(X^{(n)},\beta)= \mathfrak{Adm}^{\epsilon_0}_{g,N}(X^{(n)},\beta)\times_{{\mathfrak{M}}^{ss}_{g,N,d}}M \widetilde{{\mathfrak{M}}}_{g,N,d},\]
where $M \widetilde{{\mathfrak{M}}}_{g,N,d}$ is the moduli space of curves with calibrated tails introduced in \cite[Definition 2.8.2]{YZ}, which is a projective bundle over the moduli space of curves with entangled tails, $\widetilde{{\mathfrak{M}}}_{g,N,d}$, see \cite[Section 2.2]{YZ}. The latter is constructed by induction on the integer $k$ by a sequence of blow-ups at the loci of curves with at least $k$ rational tails of degree $d_0$.
\begin{defn}
Given a pre-admissible map $(P,C,f, \mathbf{x})$, we say a rational tail $T \subseteq (C, \mathbf x)$ is of degree $d_0$, if
\[\deg(\mathsf{br} (f)_{|T})+\deg(f^*L_{|T})=d_0.\]
We say a branching point $p \in C$ is of degree $d_0$, if
\[ \mathrm{mult}_p(\mathsf{br}(f))+\deg(f^*L_p) =d_0.\]
\end{defn}
\begin{defn}
We say a rational tail $T \subseteq (C,\mathbf x)$ is \textit{constant}, if
\[|\mathrm{Aut}((P,C,f, \mathbf{x})_{|T})|=\infty.\]
\end{defn}
In other words, a rational tail $T \subseteq (C,\mathbf x)$ is constant, if at each connected component of $P_{|T}$, the map $f_{C|T}$ is equal to
\[z^{\underline n}\colon (\sqcup^k \mathbb{P}^1)\sqcup_0 P' \rightarrow \mathbb{P}^1\]
up to a change of coordinates. The notation is the same as in (\ref{constantmaps}).
\begin{defn} A $B$-family of $\epsilon_0$-pre-admissible maps with calibrated tails
\[(P,C, \mathbf{x},f, e, {\mathcal L}, v_{1}, v_{2})\]
is $\epsilon_0$-admissible if
\begin{itemize}
\item[1)] any constant tail is an entangled tail;
\item[2)] if the geometric fiber $C_b$ of $C$ has tails of degree $d_0$, then those rational tails contain all the degree-$d_0$ branching points;
\item[3)] if $v_1(b)=0$, then the fiber over $b$ is $\epsilon_-$-admissible;
\item[4)] if $v_2(b)=0$, then the fiber over $b$ is $\epsilon_+$-admissible.
\end{itemize}
\end{defn}
We denote by $M Adm^{\epsilon_0}_{g,N}(X^{(n)},\beta)$ the moduli space of genus-$g$, $N$-marked, $\epsilon_0$-admissible maps with calibrated tails.
\subsection{Obstruction theory}
The obstruction theory of $M Adm^{\epsilon_0}_{g,N}(X^{(n)},\beta)$ is defined in the same way as the one of $ Adm^{\epsilon}_{g,N}(X^{(n)},\beta)$. There exists a complex $E^{\bullet}$, which defines a perfect obstruction theory relative to ${\mathfrak{M}}_{h,N'} \times M\widetilde{{\mathfrak{M}}}_{g,N,d}$,
\[ \phi: E^{\bullet} \rightarrow {\mathbb{L}}_{M Adm^{\epsilon_0}_{g,N}(X^{(n)},\beta)/{\mathfrak{M}}_{h,N'} \times M\widetilde{{\mathfrak{M}}}_{g,N,d}}.\]
The fact that it is indeed a perfect obstruction theory is a relative version of \cite[Section 2.8]{GV}.
\subsection{Properness}
\begin{thm} The moduli space $M Adm^{\epsilon_0}_{g,N}(X^{(n)},\beta)$ is a quasi-separated Deligne-Mumford stack of finite type.
\end{thm}
\textit{Proof.} The proof is the same as in \cite[Proposition 4.1.11]{YZ}.
\qed
\\
We now deal with properness of $M Adm^{\epsilon_0}_{g,N}(X^{(n)},\beta)$ with the help of the valuative criteria of properness. We will follow the strategy of \cite[Section 5]{YZ}. Namely, let $R$ be a discrete valuation ring with fraction field $K$, and let
\[\xi^*=(P^*, C^*,\mathbf{x}^*,f^*, e^*, {\mathcal L}^*, v^*_1,v^*_2) \in M Adm^{\epsilon_0}_{g,N}(X^{(n)},\beta)(K) \] be a family of $\epsilon_0$-admissible maps with calibrated tails over $\Spec K$. We will classify all the possible $\epsilon_0$-pre-admissible extensions of $\xi^*$ to $R$ up to a finite base change. There will be a unique one which is $\epsilon_0$-admissible.
\subsubsection{$(g,N,d)\neq(0,1,d_0)$} Assume $(g,N,d)\neq(0,1,d_0)$ and that the underlying pre-admissible map of $\xi^*$ does not have rational tails of degree $d_0$. Let
\[\eta^*=(P^*, C^*, \mathbf{x}^*,f^*) \ \text{and} \ \lambda^*=(e^*, {\mathcal L}^*, v^*_1,v^*_2)\]
be the underlying pre-admissible map and the calibration data of $\xi^*$, respectively. Let
\[\xi_-=(\eta_-,\lambda_-) \in M\mathfrak{Adm}^{\epsilon_0}_{g,N}(X^{(n)},\beta)(R')\]
be a family over a degree-$r$ extension $R'$ of $R$, where the $\epsilon_-$-pre-admissible map
\[\eta_-= (P_-,C_-,\mathbf{x}_-,f_-)\]
is constructed according to the same procedure as (\ref{familyplus}). More precisely, we apply modifications of \textit{Step 2} with respect to $\epsilon_-$-stability; we leave the degree-$d_0$ branching points which are limits of degree-$d_0$ branching points of the generic fiber untouched. The family $\eta_-$ is the one closest to being an $\epsilon_-$-admissible limit of $\eta^*$. The calibration $\lambda_-$ is given by a unique
extension of $\lambda^*$ to the curve $C_-$, which exists by \cite[Lemma 5.1.1 (1)]{YZ}.
Let
\[\{p_i \mid i= 1, \dots, \ell \}\]
be an ordered set, consisting of nodes of degree-$d_0$ rational tails and degree-$d_0$ branching points of the central fiber
\[p_i\in C_{-|\Spec {\mathbb{C}}}\subset C_{-}.\]
We now define
\[b_i \in {\mathbb{R}}_{>0} \cup \{\infty\}, \ i=1, \dots, \ell \]
as follows.
Set $b_i$
to be $\infty$, if $p_i$ is a degree-$d_0$ branching point. If $p_i$ is a node of a rational tail, then we define $b_i$ via the singularity type of $C_-$ at $p_i$. Namely, if the family $C_-$ has an $A_{b-1}$-type singularity at $p_i$, we set $b_i=b/r$. \\
We now classify all $\epsilon_0$-pre-admissible modifications of $\xi_-$ in the sense of Definition \ref{modifiation}. By \cite[Lemma 5.1.1 (1)]{YZ}, it is enough to classify the modifications of $\eta_-$.
All the modifications of $\eta_-$ are given by blow-ups and blow-downs around the points $p_i$ after taking base-changes with respect to finite extensions of $R$. The result of these modifications will be a change of singularity type of $\eta_-$ around $p_i$. Hence the classification will depend on an array of rational numbers
\[\underline{\alpha}=(\alpha_1,\dots, \alpha_\ell)\in {\mathbb{Q}}_{\geq 0}^\ell,\] the numerators of which keep track of the singularity types around the points $p_i$, while the denominators are responsible for the degree of an extension of $R$. The precise statement is the following lemma.
\begin{lemma} \label{neq01d}
For each $\underline{\alpha}=(\alpha_1,\dots, \alpha_\ell)\in {\mathbb{Q}}_{\geq 0}^\ell$, such that $\underline{\alpha}\leq \underline{b}$, there exists an $\epsilon_0$-pre-admissible modification $\eta_{\underline{\alpha}}$ of $\eta_-$
with the following properties:
\begin{itemize}
\item up to a finite base change,
\[\eta_{\underline{\alpha}}\cong \eta_{\underline{\alpha}'} \iff \underline{\alpha}=\underline{\alpha}' ;\]
\item given an $\epsilon_0$-pre-admissible modification $\tilde{\eta}$ of $\eta_-$, there exists $\underline{\alpha}$ such that \[\tilde{\eta} \cong \eta_{\underline{\alpha}}\]
up to a finite base change.
\item the central fiber of $\eta_{\underline{\alpha}}$ is $\epsilon_-$-stable, if and only if $\underline{\alpha}=\underline{b}$.
\end{itemize}
\end{lemma}
\textit{Proof.}
Let us choose a fractional presentation of $(\alpha_1,\dots, \alpha_\ell)$ with a common denominator
\[(\alpha_1,\dots, \alpha_\ell)=(\frac{\alpha'_1}{rr'},\dots, \frac{\alpha'_\ell}{rr'}).\]
Take a base change of $\eta_-$ with respect to a degree-$r'$ extension of $R'$. We then construct $\eta_{\underline{\alpha}}$ by applying modifications of $\eta_-$ around each point $p_i$, the result of which is a family
\[\eta_{\alpha_i}=(P_{\alpha_i}, C_{\alpha_i}, \mathbf{x}_{\alpha_i},f_{\alpha_i}),\]
which is constructed as follows.
\\
\textit{Case 1.} If $p_i$ is a node of a degree-$d_0$ rational tail and $\alpha_i\neq 0$, we blow-up $C_-$ at $p_i$,
\[\mathrm{Bl}_{p_i}(C_-) \rightarrow C_-.\]
The map $f_{C_-}$ then defines a rational map
\[f_{C_-}\colon P_- \dashrightarrow \mathrm{Bl}_{p_i}(C_-).\]
We can eliminate the indeterminacies of the map above by blowing-up $P_-$ to obtain an everywhere-defined map
\[f_{\mathrm{Bl}_{p_i}(C_-)} \colon \widetilde{P}_- \rightarrow \mathrm{Bl}_{p_i}(C_-),\]
we take a minimal blow-up with this property.
The exceptional curve $E$ of $\mathrm{Bl}_{p_i}(C_-)$ is a chain of $r' b_i$ rational curves. The exceptional curve of $\widetilde{P}_-$ is a disjoint union $\sqcup E_j$, where each $E_j$ is a chain of $r b_i$ rational curves mapping to $E$ without contracted components. We blow-down all the rational curves but the $\alpha'_i$-th ones in both $E$ and $E_j$ for all $j$. The resulting families are $C_{\alpha_i}$ and $P_{\alpha_i}$, respectively. The family $C_{\alpha_i}$ has an $A_{\alpha_i'-1}$-type singularity at $p_i$. The marking $\mathbf{x}_-$ clearly extends to a marking $\mathbf{x}_{\alpha_i}$ of $C_{\alpha_i}$. The map $f_{\mathrm{Bl}_{p_i}(C_-)}$ descends to a map
\[f_{C_{\alpha_i}}\colon P_{\alpha_i} \rightarrow C_{\alpha_i}.\]
The map $f_{-,X}$ is carried along with all those modifications to a map
\[f_{\alpha_i,X} \colon P_{\alpha_i} \rightarrow X,\] because exceptional divisors are of degree $0$ with respect to $f_{-,X}$, hence the contraction of curves in the exceptional divisors does not introduce any indeterminacies. We thereby constructed the family $\eta_{\alpha_i}$.
\\
\textit{Case 2.} Assume now that $p_i$ is a node of a degree-$d_0$ rational tail, but $\alpha_i=0$. The family $C_{\alpha_i}$ is then given by the contraction of that degree-$d_0$ rational tail; it is smooth at $p_i$. The marking $\mathbf{x}_-$ extends to a marking $\mathbf{x}_{\alpha_i}$ of $C_{\alpha_i}$. The family $P_{\alpha_i}$ is set to be equal to $P_-$. The map $f_{\alpha_i}$ is the composition of the contraction and $f_-$.
\\
\textit{Case 3.} If $p_i$ is a branching point, we blow-up $C_-$ inductively $\alpha'_i$ times, starting with a blow-up at $p_i$ and then continuing with a blow-up at a point of the exceptional curve of the previous blow-up. We then blow-down all rational curves in the exceptional divisor but the last one. The resulting family is $C_{\alpha_i}$; it has an $A_{\alpha'_i}$-type singularity at $p_i$. The marking $\mathbf{x}_-$ extends to the marking $\mathbf{x}_{\alpha_i}$ of $C_{\alpha_i}$. The map $f_{C_-}$ then defines a rational map
\[f_{C_-} \colon P_- \dashrightarrow C_{\alpha_i}.\]
We set
\[f_{C_{\alpha_i}} \colon P_{\alpha_i} \rightarrow C_{\alpha_i}\]
to be the minimal resolution of indeterminacies of the rational map above. More specifically, $P_{\alpha_i}$ is obtained by successively blowing-up $P_-$ and blowing-down all the rational curves in the exceptional divisor but the last one. The map $f_{-,X}$ is carried along, as in \textit{Case 1}.
\\
It is not difficult to verify that the central fiber of $\eta_{\underline{\alpha}}$ is indeed $\epsilon_0$-pre-admissible. Up to a finite base change, the resulting family is uniquely determined by $\underline{\alpha}=(\alpha_1,\dots, \alpha_\ell)\in {\mathbb{Q}}_{\geq 0}^\ell$ and is independent of its fractional presentation, because of the singularity types at the points $p_i$ and the degree of the extension of $R$.
Consider now an arbitrary $\epsilon_0$-pre-admissible modification
\[\eta=(P,C,\mathbf{x},f)\] of $\eta_-$. Possibly after a finite base change, there exists a modification
\[\tilde \eta=(\widetilde{P}, \widetilde{C}, \tilde{\mathbf{x}},\tilde{f})\] that dominates both $\eta$ and $\eta_-$ in the sense of $(\ref{dominatin})$. We take a minimal modification with this property. The family $\tilde \eta$ is given by blow-ups of $C_-$ and $P_-$. By the assumption of minimality and $\epsilon_0$-pre-admissibility of $\eta$, these are blow-ups at $p_i$. By $\epsilon_0$-pre-admissibility of $\eta$, the projections
\[\widetilde C \rightarrow C \ \text{ and } \ \widetilde P \rightarrow P\]
are given by contractions of degree-$d_0$ rational tails or of rational components which do not satisfy $\epsilon_0$-pre-admissibility. These are exactly the operations described in \textit{Cases 1, 2, 3} of the proof. Uniqueness of maps follows from separatedness of the moduli space of maps to a fixed target. Hence we obtain that
\[\eta \cong \eta_{\underline{\alpha}}\]
for some $\underline{\alpha}=(\alpha_1,\dots, \alpha_\ell)\in {\mathbb{Q}}_{\geq 0}^\ell$, where $\underline{\alpha}$ is determined by the singularity types of $\eta$ at points $p_i$.
\qed
\subsubsection{$(g,N,d)=(0,1,d_0)$} We now assume that $(g,N,d)=(0,1,d_0)$. In this case the calibration bundle is the relative cotangent bundle along the unique marking. Moreover, there is no entanglement. Given a family of pre-admissible maps $(P,C,\mathbf{x}, f)$, we will denote the calibration bundle by $M_{C}$. Therefore the calibration data $\lambda$ is given just by a rational section $s$ of $M_{C}$.
Let
\[ \xi_-=(\eta_-,\lambda_-) \in M\mathfrak{Adm}^{\epsilon_0}_{0,1}(X^{(n)},\beta)(R')\]
be the family over a degree-$r$ extension $R'$ of $R$, such that $\eta_-$ is again given by (\ref{familyplus}), if there is no degree-$d_0$ branching point in $\eta^*$. Otherwise, let $\eta_-$ be any pre-admissible limit. The calibration data $\lambda_-$ is given by a rational section $s_-$ which is an extension of the section $s^*$ of $M_{C^*}$ to $M_{C_-}$.
Given a modification $\widetilde{\eta}$ of $\eta_-$ over a degree-$r'$ extension of $R'$, the section $s^*$ extends to a rational section $\tilde{s}$ of $M_{\widetilde{C}}$.
\begin{defn}
The order of the modification $\widetilde{\eta}$ is defined to be $\mathrm{ord}(\tilde{s})/r$ at the closed point of $\Spec R'$.
\end{defn}
We set $b=\mathrm{ord}(s_-)/r$ if there is no degree-$d_0$ branching point in the generic fiber of $\eta^*$. Otherwise we set $b=\infty$.
\begin{lemma} \label{01d}
For each $\alpha \in {\mathbb{Q}}$, such that $\alpha \leq b$, there exists an $\epsilon_0$-pre-admissible modification $\eta_{\alpha}$ of $\eta_-$ of order $\alpha$
with the following properties:
\begin{itemize}
\item up to a finite base change,
\[\eta_{\alpha}\cong \eta_{\alpha'} \iff \alpha=\alpha' ;\]
\item given an $\epsilon_0$-pre-admissible modification $\tilde{\eta}$ of $\eta_-$, there exists $\alpha$ such that \[\tilde{\eta} \cong \eta_{\alpha}\]
up to a finite base change.
\item the central fiber of $\eta_{\alpha}$ is $\epsilon_-$-stable, if and only if $\alpha=b$.
\end{itemize}
\end{lemma}
\textit{Proof.} Assume $\eta^*$ does not have a degree-$d_0$ branching point. We choose a fractional presentation $\alpha=\alpha'/rr'$. We take a base change of $\eta_-$ with respect to a degree-$r'$ extension of $R'$. We successively blow-up the central fiber $\alpha'$ times at the unique marking. We then blow-down all rational curves in the exceptional divisor but the last one. The resulting family with markings is $(C_{\alpha},\mathbf{x}_{\alpha})$. We do the same with the family $P_-$ at the points in the fiber over the marked point to obtain the family $P_{\alpha}$ and the map
\[f_{P_{\alpha}}\colon P_{\alpha} \rightarrow C_{\alpha},\]
the map $f_{-,X}$ being carried along with the blow-ups and blow-downs. The resulting family of $\epsilon_0$-pre-admissible maps is of order $\alpha$.
\\
Assume now that the generic fiber has a degree-$d_0$ branching point. We take a base change of $\eta_-$ with respect to a degree-$r'$ extension of $R'$. After choosing some trivialisation of $M_{C^*}$, we have that \[s^*=\pi^{r'a_-} \in K',\]
where $a_-$ is the order of vanishing of $s_-$ before the base-change and $\pi$ is some uniformiser of $R'$. Because of automorphisms of $\mathbb{P}^1$ which fix a branching point and a marked point, we have an isomorphism of $\epsilon_0$-pre-admissible maps with calibrated tails,
\[(\eta^*,s^*)\cong (\eta^*, \pi^{c} \cdot s^*) \]
for arbitrary $c\in {\mathbb{Z}}$. Hence we can multiply the section $s_-$ by $\pi^{\alpha'-r'a_-}$ to obtain a modification of order $\alpha$.
\\
The fact that these modifications classify all possible modifications follows from the same arguments as in the case $(g,N,d)\neq (0,1,d_0)$.
\qed
\begin{thm} The moduli space $M Adm^{\epsilon_0}_{g,N}(X^{(n)},\beta)$ is proper.
\end{thm}
\textit{Proof.} With the classifications of modifications of $\eta_-$ of Lemma \ref{neq01d} and Lemma \ref{01d}, the proof of properness follows from the same arguments as in \cite[Proposition 5.0.1]{YZ}.
\qed
\section{Wall-crossing} \label{masterSym}
\subsection{Graph space} \label{graphspaceSym}
For a class $\beta = (\gamma,\mathsf m)\in H_2(X,{\mathbb{Z}})\oplus {\mathbb{Z}}$ consider now
\[{\overline M}^{\bullet}_{\mathsf{m}}(X\times \mathbb{P}^1/X_{\infty}, (\gamma,n)),\]
the space of relative stable maps with disconnected domains of degree $(\gamma,n)$ to $X\times \mathbb{P}^1$ relative to
\[X_{\infty}:=X\times \{\infty \} \subset X\times \mathbb{P}^1.\]
We refer to this moduli space as the \textit{graph space}, as it plays the same role as the graph space in the quasimap wall-crossing. Note that we fix the degree of the branching divisor $\mathsf m$ instead of the genus $\mathsf h$; the two are related by Lemma \ref{RHformula}.
There is a standard ${\mathbb{C}}^*$-action on $\mathbb{P}^1$ given by
\[t[x,y]=[tx,y], \ t\in {\mathbb{C}}^*,\]
which induces a ${\mathbb{C}}^*$-action on ${\overline M}^{\bullet}_{\mathsf m}(X\times \mathbb{P}^1/X_{\infty}, (\gamma,n))$. Let
\[F_{\beta} \subset {\overline M}^{\bullet}_{\mathsf{m}}(X\times \mathbb{P}^1/X_{\infty}, (\gamma,n))^{{\mathbb{C}}^*}\]
be the distinguished ${\mathbb{C}}^*$-fixed component consisting of maps to $X\times \mathbb{P}^1$ (no expanded degenerations). Said differently, $F_{\beta}$ is the moduli space of maps which are admissible over $\infty \in \mathbb{P}^1$ and whose degree lies entirely over $0 \in \mathbb{P}^1$ in the form of a branching point. Other ${\mathbb{C}}^*$-fixed components admit exactly the same description as in the case of quasimaps in \cite[Section 6.1]{N}.
\\
The virtual fundamental class of $F_{\beta}$,
\[[F_{\beta}]^{\mathrm{vir}} \in A_*(F_{\beta}),\]
is defined via the fixed part of the perfect obstruction theory of
\[{\overline M}^{\bullet}_{\mathsf{m}}(X\times \mathbb{P}^1/X_{\infty}, (\gamma,n)).\] The virtual normal bundle $N_{F_{\beta}}^{\mathrm{vir}}$ is defined by the moving part of the obstruction theory. There exists an evaluation map
\[\mathsf{ev} \colon F_{\beta} \rightarrow \overline{{\mathcal I}} X^{(n)}\]
defined in the same way as (\ref{evaluation}).
\begin{defn}
We define an $I$-function to be
\[I(q,z)=1+\sum_{\beta\neq0}q^{\beta}{\mathsf{ev}}_{*} \left(\frac{[F_{\beta}]}{e_{{\mathbb{C}}^*}(N_{F_{\beta}}^{\mathrm{vir}})}\right) \in H_{\mathrm{orb}}^{*}(X^{(n)})[z^{\pm}]\otimes_{{\mathbb{Q}}}{\mathbb{Q}}[\![q^{\beta}]\!].\]
Let
\[\mu(z) \in H_{\mathrm{orb}}^{*}(X^{(n)})[z]\otimes_{{\mathbb{Q}}}{\mathbb{Q}}[\![q^{\beta}]\!]\]
be the truncation $[zI(q,z)-z]_+$ by taking only non-negative powers of $z$. Let
\[\mu_{\beta}(z) \in H_{\mathrm{orb}}^{*}(X^{(n)})[z]\]
be the coefficient of $\mu(z)$ at $q^{\beta}$.
\end{defn}
For later, it is also convenient to define
\[{\mathcal I}_{\beta}:=\frac{1}{e_{{\mathbb{C}}^*}(N^\mathrm{vir}_{F_{\beta}})} \in A^*(F_{\beta})[z^{\pm}].\]
\subsection{Wall-crossing formula} From now on, we assume that
\[2g-2+N+1/d_0\cdot\deg(\beta)>0;\]
for the case $(g,N,d)=(0,1,d_0)$ we refer to \cite[Section 6.4]{YZ}.
There exists a natural ${\mathbb{C}}^*$-action on the master space $M Adm^{\epsilon_0}_{g,N}(X^{(n)},\beta)$ given by
\[t \cdot (P, C,\mathbf{x},f, e, {\mathcal L}, v_1,v_2)= (P, C,\mathbf{x},f, e, {\mathcal L}, t \cdot v_1,v_2), \quad t \in {\mathbb{C}}^*.\]
By arguments presented in \cite[Section 6]{YZ}, the fixed locus admits the following expression \[M Adm^{\epsilon_0}_{g,N}(X^{(n)},\beta)^{{\mathbb{C}}^*}=F_+ \sqcup F_- \sqcup \coprod_{\vv \beta} F_{\vv{\beta}}.\]
We now explain the meaning of each term in the union above, giving a description of the virtual fundamental classes and virtual normal bundles.
\subsubsection{$F_+$} This is the simplest component,
\[
F_+= Adm^{\epsilon_+}_{g,N}(X^{(n)},\beta), \quad
N_{F_+}^{\mathrm{vir}}={\mathbb{M}}^{\vee}_+,\]
where ${\mathbb{M}}^{\vee}_+$ is the dual of the calibration bundle ${\mathbb{M}}_+$ on $ Adm^{\epsilon_+}_{g,N}(X^{(n)},\beta)$, with a trivial ${\mathbb{C}}^*$-action of weight $-1$, cf. \cite{YZ}. The obstruction theories also match, therefore
\[[F_+]^{\mathrm{vir}}=[ Adm^{\epsilon_+}_{g,N}(X^{(n)},\beta)]^{\mathrm{vir}}\]
with respect to the identification above.
\subsubsection{$F_-$} We define
\[\widetilde{Adm}^{\epsilon_-}_{g,N}(X^{(n)},\beta):= Adm^{\epsilon_-}_{g,N}(X^{(n)},\beta) \times_{{\mathfrak{M}}_{g,N,d}} \widetilde{{\mathfrak{M}}}_{g,N,d},\]
then
\[
F_-=\widetilde{Adm}^{\epsilon_-}_{g,N}(X^{(n)},\beta),\quad
N_{F_-}^{\mathrm{vir}}={\mathbb{M}}_- ,
\]
where, as previously, ${\mathbb{M}}_-$ is the calibration bundle on $\widetilde{Adm}^{\epsilon_-}_{g,N}(X^{(n)},\beta)$ with trivial ${\mathbb{C}}^*$-action of weight 1. The obstruction theories also match and
\[p_*[\widetilde{Adm}^{\epsilon_-}_{g,N}(X^{(n)},\beta)]^{\mathrm{vir}}=[ Adm^{\epsilon_-}_{g,N}(X^{(n)},\beta)]^{\mathrm{vir}},\]
where \[p \colon \widetilde{Adm}^{\epsilon_-}_{g,N}(X^{(n)},\beta) \rightarrow Adm^{\epsilon_-}_{g,N}(X^{(n)},\beta) \]
is the natural projection.
\subsubsection{$F_{\vec \beta}$} These are the wall-crossing components, which will be responsible for wall-crossing formulas. Let
\[\vv{\beta}=(\beta',\beta_1,\dots,\beta_k)\] be a $(k+1)$-tuple of classes in $H_2(X,{\mathbb{Z}}) \oplus {\mathbb{Z}}$, such that $\beta=\beta'+\beta_1+ \dots + \beta_k$ and $\deg(\beta_i)=d_0$. Then a component $F_{\vv{\beta}}$ is defined as follows
\begin{align*}
F_{\vv{\beta}}=\{ \xi \mid \ &\xi \text{ has exactly }k \text{ entangled tails,} \\
&\text{which are all fixed tails, of degree } \beta_1, \dots, \beta_k \}.
\end{align*}
Let
\[
\begin{tikzcd}[row sep=small, column sep = small]
{\mathcal E}_i \arrow[r] & F_{\vv{\beta}} \arrow[l, bend left=20,"p_i"] &i=1,\dots, k,
\end{tikzcd}
\]
be the universal $i$-th entangled rational tail with a marking $p_i$ given by the node. We define $\psi({\mathcal E}_i)$ to be the $\psi$-class associated to the marking $p_i$. Let
\[\widetilde{\mathrm{gl}}_k \colon \widetilde{{\mathfrak{M}}}_{g,N+k, d-kd_0}\times ({\mathfrak{M}}_{0,1,d_0})^k \rightarrow \widetilde{{\mathfrak{M}}}_{g,N,d}\]
be the gluing morphism, cf. \cite[Section 2.4]{YZ}.
Let
\[\mathfrak{D}_i \subset \widetilde{{\mathfrak{M}}}_{g,N,d}\]
be a divisor defined as the closure of the locus of curves with exactly $i+1$ entangled tails.
Finally, let
\[Y \rightarrow \widetilde{Adm}^{\epsilon_-}_{g,N}(X^{(n)},\beta')\] be the stack of $k$-th roots of ${\mathbb{M}}^{\vee}_-$.
\begin{prop} \label{isomophism}
There exists a canonical isomorphism
\[\widetilde{\mathrm{gl}}_k^*F_{\vv{\beta}} \cong Y \times_{(\overline{{\mathcal I}}X^{(n)})^k} \prod_{i=1}^{i=k} F_{ \beta_i}.\]
With respect to the identification above we have
\begin{align*}[F_{\vv{\beta}}]^{\mathrm{vir}}=&[Y]^{\mathrm{vir}} \times_{(\overline{{\mathcal I}}X^{(n)})^k} \prod_{i=1}^{i=k} [F_{ \beta_i}]^{\mathrm{vir}}, \\ \frac{1}{e_{{\mathbb{C}}^*}(N_{F_{\vv{\beta}}}^{\mathrm{vir}})}=&\frac{\prod^k_{i=1}(z/k+\psi({\mathcal E}_i) )}{-z/k-\psi({\mathcal E}_1)-\psi_{N+1}-\sum^{\infty}_{i=k}\mathfrak{D}_i} \cdot \prod^k_{i=1} {\mathcal I}_{\beta_i}(z/k+\psi({\mathcal E}_i)).
\end{align*}
\end{prop}
\textit{Proof.} See \cite[Lemma 6.5.6]{YZ}.
\qed
\begin{thm} \label{wallcrossingSym} Assuming $2g-2+N+1/d_0\cdot\deg(\beta)>0$, we have
\begin{multline*}
\langle \tau_{m_{1}}(\gamma_{1}), \dots, \tau_{m_{N}}(\gamma_{N}) \rangle^{\epsilon_{+}}_{g,\beta}-\langle \tau_{m_{1}}(\gamma_{1}), \dots, \tau_{m_{N}}(\gamma_{N}) \rangle^{\epsilon_{-}}_{g, \beta}\\
=\sum_{k\geq1}\sum_{\vec{\beta}}\frac{1}{k!}\int_{[ Adm_{g,N+k}^{\epsilon_-}(X^{(n)},\beta')]^{\mathrm{vir}}}\prod^{i=N}_{i=1}\psi_{i}^{m_{i}}{\mathrm{ev}}^{*}_{i}(\gamma_{i})\cdot \prod^{a=k}_{a=1}{\mathrm{ev}}^*_{N+a}\mu_{\beta_a}(z)|_{z=-\psi_{N+a}}
\end{multline*}
where $\vec{\beta}$ runs through all the $(k+1)$-tuples of effective curve classes
\[\vec{\beta}=(\beta', \beta_{1}, \dots, \beta_{k}),\]
such that $\beta=\beta'+ \beta_{1}+ \dots + \beta_{k}$ and $\deg(\beta_{i})=d_{0}$ for all $i=1, \dots, k$.
\end{thm}
\textit{Sketch of Proof.}
We will just explain the master-space technique. For all the details we refer to \cite[Section 6]{YZ}. By the virtual localisation formula we obtain
\begin{multline*} [M Adm^{\epsilon_0}_{g,N}(X^{(n)},\beta)]^{\mathrm{vir}} \\
= \left( \sum \iota_{F_\star *}\left( \frac{[F_\star]^{\mathrm{vir}}}{e_{{\mathbb{C}}^*}(N_{F_\star}^{\mathrm{vir}})} \right) \right)\in A_*^{{\mathbb{C}}^*}(M Adm^{\epsilon_0}_{g,N}(X^{(n)},\beta))\otimes_{{\mathbb{Q}}[z]}{\mathbb{Q}}(z),
\end{multline*}
where $F_{\star}$'s are the components of the ${\mathbb{C}}^*$-fixed locus of $M Adm^{\epsilon_0}_{g,N}(X^{(n)},\beta)$.
Let
\[\alpha=\prod^{i=N}_{i=1}\psi_{i}^{m_{i}}{\mathrm{ev}}^{*}_{i}(\gamma_{i})\in A^*(M Adm^{\epsilon_0}_{g,N}(X^{(n)},\beta))\]
be the class corresponding to descendent insertions.
After taking the residue\footnote{ i.e. by taking the coefficient of $1/z$ of both sides of the equality.} at $z=0$ of the above formula, capping with $\alpha$ and taking the degree of the class, we obtain the following equality
\begin{multline*}
\int_{[ Adm^{\epsilon_+}_{g,N}(X^{(n)},\beta)]^{\mathrm{vir}}} \alpha -\int_{[ Adm^{\epsilon_-}_{g,N}(X^{(n)},\beta)]^{\mathrm{vir}}} \alpha \\
=\deg \left( \alpha \cap \mathrm{Res}_{z=0} \left( \sum_{\vv{\beta}} \iota_{F_{\vv{\beta}}*}\left( \frac{[F_{\vv{\beta}}]^{\mathrm{vir}}}{e_{{\mathbb{C}}^*}(N_{F_{\vv{\beta}}}^{\mathrm{vir}})} \right)\right)\right),
\end{multline*}
where we used that there is no $1/z$-term in the decomposition of the class \[[M Adm^{\epsilon_0}_{g,N}(X^{(n)},\beta)]^{\mathrm{vir}} \in A_*^{{\mathbb{C}}^*}(M Adm^{\epsilon_0}_{g,N}(X^{(n)},\beta)),\]
and that
\[\frac{1}{e_{{\mathbb{C}}^*}({\mathbb{M}}_{\pm})}=\frac{1}{z}+O(1/z^2).\]The analysis of the residue on the right-hand side presented in \cite[Section 7]{YZ} applies to our case. The resulting formula is the one claimed in the statement of the theorem.\qed
\\
We define
\[F^{\epsilon}_{g}(\mathbf{t}(z))=\sum^{\infty}_{N=0}\sum_{\beta}\frac{q^{\beta}}{N!}\langle \mathbf{t}(\psi), \dots, \mathbf{t}(\psi) \rangle^{\epsilon}_{g,N,\beta},\]
where $\mathbf{t}(z) \in H_{\mathrm{orb}}^{*}(X^{(n)},{\mathbb{Q}})[z]$ is a generic element, and the unstable terms are set to be zero. By repeatedly applying Theorem \ref{wallcrossingSym} we obtain the following.
\begin{cor} \label{changeofvariable} For all $g\geq 1$ we have
\[F^{0}_{g}(\mathbf{t}(z))=F^{-\infty}_{g}(\mathbf{t}(z)+\mu(-z)).\]
For $g=0$, the same equation holds modulo constant and linear terms in $\mathbf{t}(z)$.
\end{cor}
For $g = 0$, the relation is true only modulo constant and linear terms in $\mathbf{t}(z)$, because the moduli space $Adm_{0,1}^{\epsilon_+}(X^{(n)}, \beta)$ is empty, if $e^{1/\epsilon_+} \cdot \deg(\beta) \leq 1$. In particular,
Theorem \ref{wallcrossingSym} does not hold. As for quasimaps, the wall-crossing
takes a different form in this case. More specifically, \cite[Theorem 6.6]{N} and \cite[Theorem 6.7]{N} apply verbatim to the case of $\epsilon$-admissible maps.
\section{Del Pezzo} \label{del Pezzo}
In this section we determine the $I$-function in the case $X=S$ is a del Pezzo surface. Firstly, consider the expansion
\[[zI(q,z)-z]_{+}=I_{1}(q)+(I_{0}(q)-1)z+I_{-1}(q)z^2+I_{-2}(q)z^3+\dots\]
We will show that, by the dimension constraint, the terms $I_{-k}$ vanish for all $k\geq 1$.
For this section we consider $H_{\mathrm{orb}}^{*}(X^{(n)})$ with its \textit{naive}\footnote{We grade it with the cohomological grading of $H^*({\mathcal I} S^{(n)},{\mathbb{Q}})$.} grading. Let $z$ be of cohomological degree $2$ in $H_{\mathrm{orb}}^{*}(X^{(n)})[z^{\pm}]$. The virtual dimension of ${\overline M}^{\bullet}_{\mathsf{m}}(X\times \mathbb{P}^1/X_{\infty}, (\gamma,n),\mu)$ is equal to
\[\int_{\beta}\mathrm{c}_1(S) + n + \ell(\mu).\]
Hence, by the virtual localisation, the classes involved in the definition of the $I$-function
\[{\mathsf{ev}}_{*}\left( \frac{[F_{\beta,\mu}]^{\mathrm{vir}}}{e_{{\mathbb{C}}^{*}}(N^{\mathrm{vir}})}\right) \in H^*(S^{\mu}/\mathrm{Aut}(\mu))[z^\pm] \subseteq H_{\mathrm{orb}}^{*}(S^{(n)})[z^{\pm}],\]
have naive cohomological degree equal to
\begin{equation} \label{quantity}
-2\left(\int_{\beta}\mathrm{c}_1(S) + n-\ell(\mu)\right).
\end{equation}
Since $S$ is a del Pezzo surface, the above quantity is non-positive, which implies that
\[I_{0}=1 \ \text{and} \ I_{-k}=0\]
for all $k\geq 1$, because cohomology is non-negatively graded. Moreover, the quantity (\ref{quantity}) is zero, if and only if
\[\mu=(1,\dots,1) \ \text{and} \ \beta=(0,\mathsf m).\]
Let us now study $F_{\beta,\mu}$ for these values of $\mu$ and $\beta$. It is more convenient to put an ordering on fibers over $\infty \in \mathbb{P}^1$, so let $\vec F_{\beta, \mu}$ be the resulting space. We will not give a full description of $\vec F_{\beta, \mu}$, even though it is simple. We will only be interested in one type of components of $\vec F_{\beta, \mu}$,
\begin{equation} \label{iota}
\iota_i\colon {\overline M}_{\mathsf h,p_i}\times S^n \hookrightarrow \vec F_{\beta, \mu},
\end{equation}
where ${\overline M}_{\mathsf h,p_i}$ is the moduli space of stable genus-$\mathsf h$ curves with \textit{one} marking labelled by $p_i$, $i=1,\dots, n$. The embedding $\iota_i$ is constructed as follows. Given a point
\[((C,p_i), x_1,\dots,x_n) \in {\overline M}_{\mathsf h,p_i}\times S^n,\]
let
\begin{equation} \label{rational}
(\tilde{P},p_1,\dots, p_n)=\coprod^{i=n}_{i=1} (\mathbb{P}^1,0)
\end{equation}
be an ordered disjoint union of $\mathbb{P}^1$ with markings at $0\in \mathbb{P}^1$. We define a curve $P$ by gluing $(\tilde{P},p_1,\dots, p_n)$ with $(C, p_i)$ at the marking with the same labelling. We define
\[f_{\mathbb{P}^1}\colon P \rightarrow \mathbb{P}^1\] to be the identity on the
$\mathbb{P}^1$'s and the contraction on $C$. We define
\[f_{S}\colon P \rightarrow S\] by contracting the $j$-th $\mathbb{P}^1$ in $P$, possibly with an attached curve, to the point $x_j \in S$. We thereby defined the inclusion
\[\iota_i((C,p_i), x_1,\dots,x_n)= (P, \mathbb{P}^1,0,f_{\mathbb{P}^1}\times f_S ).\]
By Lemma \ref{RHformula},
\begin{equation}\label{hm}
\mathsf h= \mathsf m/2,
\end{equation}
in particular, $\mathsf m$ is even. More generally, any connected component of $\vec F_{\beta, \mu}$ admits a similar description, with the difference that there might be more markings on a possibly disconnected $C$, by which it attaches to $\widetilde{P}$, i.e. $P$ has more nodes. These components are not relevant for our needs, as will be explained below.
\\
Let us now consider the virtual fundamental classes and the normal bundles of these components ${\overline M}_{\mathsf h,p_i}\times S^n$.
By standard arguments we obtain that
\[\iota^*_i\frac{[F_{\beta,\mu}]^{\mathrm{vir}}}{e_{{\mathbb{C}}^{*}}(N^{\mathrm{vir}})}=e(\pi^*_iT_S\otimes p^*{\mathbb{E}}^{\vee}_{\mathsf h})\cdot \frac {e({\mathbb{E}}^{\vee}z)}{z(z-\psi_1)},\]
where
$\pi_i\colon {\overline M}_{\mathsf h,p_i}\times S^n \rightarrow S$ is the projection to the $i$-th factor of $S^n$ and $p\colon {\overline M}_{\mathsf h,p_i}\times S^n \rightarrow {\overline M}_{\mathsf h,p_i} $ is the projection to ${\overline M}_{\mathsf h,p_i}$; ${\mathbb{E}}$ is the Hodge bundle on ${\overline M}_{\mathsf h,p_i}$.
For other components of $\vec F_{\beta, \mu}$, the equivariant Euler classes $e_{{\mathbb{C}}^{*}}(N^{\mathrm{vir}})$ acquire factors \[\frac{1}{z(z-\psi_i)}\] for each marked point. This makes them irrelevant for the purposes of determining the truncation of the $I$-function. We therefore have to determine the following classes
\[\pi_*\left( e(\pi^*_i T_S\otimes p^*{\mathbb{E}}^{\vee}_{\mathsf h})\cdot \frac {e({\mathbb{E}}^{\vee}z)}{z(z-\psi_1)}\right) \in H^{*}(S^n)[z^{\pm}],\]
where $\pi \colon {\overline M}_{\mathsf h,p_i}\times S^n \rightarrow S^n$ is the natural projection, which is identified with the evaluation map ${\mathsf{ev}}$ via the inclusion (\ref{iota}).
Let $\ell_1$ and $\ell_2$ be the Chern roots of $\pi_i^*T_S$. Then we can rewrite the class above as follows
\[\int_{{\overline M}_{\mathsf h,1}}\frac{{\mathbb{E}}^{\vee}(\ell_1)\cdot {\mathbb{E}}^{\vee}(\ell_2) \cdot {\mathbb{E}}^{\vee}(z)}{z(z-\psi_1)},\]
where
\[{\mathbb{E}}^\vee(z):=e({\mathbb{E}}^{\vee}z)= \sum^{j=\mathsf h}_{j=0}(-1)^{\mathsf h-j}\lambda_{\mathsf h-j}z^j,\]
and similarly for ${\mathbb{E}}^\vee(\ell_{1})$ and ${\mathbb{E}}^\vee(\ell_{2})$.
By putting these Hodge integrals into a generating series, we obtain their explicit form. Note that below we sum over the degree $\mathsf m$ of the branching divisor, which in this case is related to the genus $\mathsf h$ by (\ref{hm}). The result was kindly communicated to the author by Maximilian Schimpf.
\begin{prop}[Maximilian Schimpf] \label{Max}
\[1+\sum_{\mathsf h>0} u^{2\mathsf h} \int_{{\overline M}_{\mathsf h,1}}\frac{{\mathbb{E}}^{\vee}(\ell_1)\cdot {\mathbb{E}}^{\vee}(\ell_2) \cdot {\mathbb{E}}^{\vee}(z)}{z(z-\psi_1)}=\left( \frac{\mathrm{sin}(u/2)}{u/2}\right)^{\frac{\ell_1+\ell_2}{z}}\]
\end{prop}
\textit{Proof.} The claim follows from the results of \cite{FabP}. Firstly,
\begin{multline*}
1+\sum_{\mathsf h>0} u^{2\mathsf h} \int_{{\overline M}_{\mathsf h,1}}\frac{{\mathbb{E}}^{\vee}(\ell_1)\cdot {\mathbb{E}}^{\vee}(\ell_2) \cdot {\mathbb{E}}^{\vee}(z)}{z(z-\psi_1)}\\
=1+\sum_{\mathsf h>0} u^{2\mathsf h} \int_{{\overline M}_{\mathsf h,1}}\frac{{\mathbb{E}}^{\vee}(\ell_1/z)\cdot {\mathbb{E}}^{\vee}(\ell_2/z) \cdot {\mathbb{E}}^{\vee}(1)}{1-\psi_1}.
\end{multline*}
Now let \[a=\ell_1/z, \quad b=\ell_2/z\] and
\[F(a,b)=1+\sum_{\mathsf h>0} u^{2\mathsf h} \int_{{\overline M}_{\mathsf h,1}}\frac{{\mathbb{E}}^{\vee}(a)\cdot {\mathbb{E}}^{\vee}(b) \cdot {\mathbb{E}}^{\vee}(1)}{1-\psi_1}.\] By using virtual localisation on a moduli space of stable maps to $\mathbb{P}^1$, we obtain the following identities
\begin{align*}
&F(a,b)\cdot F(-a,-b)=1;\\
&F(a,b)\cdot F(-a,1-b)=F(0,1).
\end{align*}
These identities, together with the fact that $F(a,b)$ is symmetric in $a$ and $b$, imply that
\begin{equation} \label{identityf}
F(a,b)=F(0,1)^{a+b}
\end{equation}
for integer values of $a$ and $b$. Each coefficient of a power of $u$ in $F(a,b)$ is a polynomial in $a$ and $b$, hence the identity (\ref{identityf}) is in fact a functional identity.
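In more detail, the two identities combine as follows (a routine manipulation which we spell out for convenience). The first identity applied to the pair $(a,b-1)$ gives $F(-a,1-b)=F(a,b-1)^{-1}$, so the second identity becomes
\[F(a,b)=F(0,1)\cdot F(a,b-1).\]
Iterating in the second argument and using the symmetry of $F$ yields $F(a,b)=F(0,1)^{a+b}\,F(0,0)$ for non-negative integers $a$ and $b$, while the first identity at $a=b=0$ gives $F(0,0)^2=1$; since $F(0,0)=1+O(u^2)$, we get $F(0,0)=1$, and hence (\ref{identityf}).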
By the discussion in \cite[Section 2.2]{FP} and by \cite[Proposition 2]{FP}, we obtain that
\[F(0,1)= \frac{\mathrm{sin}(u/2)}{u/2},\] the claim now follows.
\qed
\\
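As a sanity check of Proposition \ref{Max} (our own verification, not taken from the references), consider the coefficient of $u^2$. On ${\overline M}_{1,1}$ one has ${\mathbb{E}}^{\vee}(x)=x-\lambda_1$, $\lambda_1^2=0$ and $\int_{{\overline M}_{1,1}}\psi_1=\int_{{\overline M}_{1,1}}\lambda_1=1/24$, so, in the notation of the proof,
\[\int_{{\overline M}_{1,1}}\frac{(a-\lambda_1)(b-\lambda_1)(1-\lambda_1)}{1-\psi_1}=\int_{{\overline M}_{1,1}}\big(ab\,\psi_1-(ab+a+b)\lambda_1\big)=-\frac{a+b}{24},\]
which agrees with the expansion
\[\left( \frac{\mathrm{sin}(u/2)}{u/2}\right)^{a+b}=1-(a+b)\frac{u^2}{24}+O(u^4).\]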
Using the commutativity of the following diagram
\[
\begin{tikzcd}[row sep=small, column sep = small]
\vec F_{\beta, \mu} \arrow[d] \arrow[r,"\vec{{\mathrm{ev}}}"]& \arrow[d,"\overline \pi"] S^n & \\
F_{\beta, \mu} \arrow[r,"{\mathrm{ev}}"]& {[S^{(n)}]} &
\end{tikzcd}
\]
and Proposition \ref{Max}, we obtain
\begin{equation} \label{I1} I_1(q)=\log\left(\frac{\mathrm{sin}(u/2)}{u/2}\right)\cdot \frac{1}{(d-1)!}\overline{\pi}_* (\mathrm{c}_1(S) \otimes \dots \otimes 1).
\end{equation}
For $2g-2+N\geq 0$ we define
\[\left\langle \gamma_1, \ldots, \gamma_N \right\rangle^{ \epsilon}_{g, \gamma}:=\sum_{\mathsf m} \langle \gamma_1, \dots, \gamma_N \rangle_{g,(\gamma,\mathsf m)}^{\epsilon}u^{\mathsf m},\]
setting invariants corresponding to unstable values of $g$,$N$ and $\beta$ to zero. By repeatedly applying Theorem \ref{wallcrossingSym}, we obtain that
\[\left\langle \gamma_1, \ldots, \gamma_N \right\rangle^{ 0}_{g, \gamma}=\sum_{k\geq 0}\frac{1}{k!}\left\langle \gamma_1, \ldots, \gamma_N, \underbrace{I_1(q),\dots, I_1(q)}_{k} \right\rangle^{-\infty}_{g, \gamma}.\]
Applying the divisor equation\footnote{One can readily verify that an appropriate form of the divisor equation holds for classes in $H^*(S^{(d)},{\mathbb{Q}}) \subseteq H_{\mathrm{orb}}^*(S^{(d)},{\mathbb{Q}})$.} and (\ref{I1}), we get the following corollary.
\begin{cor} \label{change} Assuming $2g-2+N\geq 0$,
\[\left\langle \gamma_1, \ldots, \gamma_N \right\rangle^{0}_{g, \gamma} = \left( \frac{\mathrm{sin}(u/2)}{u/2} \right)^{\gamma \cdot \mathrm{c}_1(S)} \cdot \left\langle \gamma_1, \ldots, \gamma_N \right\rangle^{-\infty}_{g, \gamma}.\]
\end{cor}
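In low orders in $u$, the correction factor of Corollary \ref{change} reads (by a direct expansion)
\[\left( \frac{\mathrm{sin}(u/2)}{u/2} \right)^{\gamma \cdot \mathrm{c}_1(S)}=1-\gamma\cdot \mathrm{c}_1(S)\,\frac{u^{2}}{24}+O(u^{4}),\]
so the two generating series agree modulo $u^2$; in particular, they coincide whenever $\gamma\cdot \mathrm{c}_1(S)=0$.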
\section{Crepant resolution conjecture} \label{crepant}
To a cohomology-weighted partition
\[\vec{\mu}=((\mu_1,\delta_{\ell_1}), \dots, (\mu_k,\delta_{\ell_k}))\]we can also associate a class in $H^*(S^{[n]},{\mathbb{Q}})$, using Nakajima operators,
\[\theta(\vec{\mu}):=\frac{1}{\prod_{i=1}^{k}\mu_i }P_{\delta_{\ell_1}}[\mu_1]\dotsb P_{\delta_{\ell_k}}[\mu_k]\cdot 1 \in H^*(S^{[n]},{\mathbb{Q}}),\]
where operators are ordered according to the standard ordering (see Subsection \ref{Classes}). For more details on these classes, we refer to \cite[Chapter 8]{N99}, or to \cite[Section 0.2]{Ob18} in a context more relevant to us.
\begin{prop}\label{HilbSym} There exists a graded isomorphism of vector spaces
\[L\colon H_{\mathrm{orb}}^{*}(S^{(n)},{\mathbb{C}})\simeq H^*(S^{[n]},{\mathbb{C}}),\]
\[ L(\lambda(\vec{\mu}))= (-i)^{\mathrm{age}(\mu)}\theta(\vec{\mu}). \]
\end{prop}
\textit{Proof.} See \cite[Proposition 3.5]{FG}.
\qed
\begin{rmk} The peculiar choice of the identification with a factor $(-i)^{\mathrm{age}(\mu)}$ is justified by the crepant resolution conjecture: this factor makes the invariants match on the nose. See the next section for more details.
\end{rmk}
\subsection{Quasimaps and admissible covers} \label{qausiadm} From now on we assume that $2g-2+N\geq 0$. Using \cite[Corollary 3.13]{N}, we obtain an identification
\begin{equation}\label{degreechern} H_2(S^{[n]},{\mathbb{Z}})\cong H_2(S,{\mathbb{Z}})\oplus {\mathbb{Z}}.
\end{equation}
In the language of (quasi-)maps, it corresponds to associating the Chern character to the graph of a (quasi-)map.
Given classes $\gamma_i \in H^*_{\mathrm{orb}}(S^{(n)},{\mathbb{C}})$, $i=1,\dots,N$, and a class \[(\gamma,\mathsf m) \in H_2(S,{\mathbb{Z}})\oplus {\mathbb{Z}},\]
for $\epsilon \in {\mathbb{R}}_{>0}\cup \{0^+,\infty\}$ we set
\[\langle \gamma_1, \dots, \gamma_N \rangle^{\epsilon}_{g,(\gamma,\mathsf{m})}:= {}^{\sharp}\langle L(\gamma_1), \dots, L(\gamma_N) \rangle^{\epsilon}_{g,(\gamma,\mathsf{m})} \in {\mathbb{C}},\]
where the invariants on the right-hand side are defined in \cite[Section 5.3]{N} and $L$ is defined in Proposition \ref{HilbSym}. We set
\[\left\langle \gamma_1, \ldots, \gamma_N \right\rangle^{ \epsilon}_{g, \gamma}:=\sum_{\mathsf m} \langle \gamma_1, \dots, \gamma_N \rangle_{g,(\gamma,\mathsf{m})}^{\epsilon}y^\mathsf{m}.\]
For $\epsilon=0^+$, these are the relative PT invariants of the relative geometry $S\times C_{g,N}\rightarrow {\overline M}_{g,N}$. The summation over $\mathsf{m}$ with respect to the identification (\ref{degreechern}) corresponds to the summation over ${\mathrm{ch}}_3$ of a subscheme.
\\
Using wall-crossings, we will now show the compatibility of \textsf{PT/GW} and \textsf{C.R.C.} Let us firstly spell out our conventions.
\begin{itemize}
\item We sum over the degree of the branching divisor instead of the genus of the source curve. Assuming the $\gamma_i$'s are homogeneous with respect to the age, the genus $\mathsf{h}$ and the degree $\mathsf{m}$ are related by Lemma \ref{RHformula},
\[2\mathsf{h}-2=-2n+\mathsf m + \sum \mathrm{age}(\gamma_i).\]
For $\epsilon \in {\mathbb{R}}_{\leq 0}\cup \{-\infty\}$, let
\['\left\langle \gamma_1, \dots, \gamma_N \right\rangle^{\epsilon}_{g, \gamma}:= \sum_{\mathsf h} \langle \gamma_1, \dots, \gamma_N \rangle_{g,(\gamma,\mathsf{h})}^{\epsilon}u^{2\mathsf{h}-2}\]
be the generating series where the summation is taken over the genus instead. Then the two generating series are related as follows
\['\left\langle \gamma_1, \dots, \gamma_N \right\rangle^{ \epsilon}_{g, \gamma} = u^{\sum \mathrm{age}(\gamma_i)-2n} \cdot \left\langle \gamma_1, \dots, \gamma_N \right\rangle^{ \epsilon}_{g, \gamma}.\]
\item We sum over Chern character ${\mathrm{ch}}_3$ instead of Euler characteristics $\chi$. For $\epsilon \in {\mathbb{R}}_{>0 }\cup \{0^+, \infty\}$, let
\['\left\langle \gamma_1, \dots, \gamma_N \right\rangle^{\epsilon}_{g, \gamma}:= \sum_{\mathsf \chi} {}^{\sharp}\langle \gamma_1, \dots, \gamma_N \rangle_{g,(\gamma, \chi)}^{\epsilon}y^\chi\]
be the generating series where the summation is taken over the Euler characteristic instead. Then, by the Hirzebruch--Riemann--Roch theorem, the two generating series are related as follows
\['\left\langle \gamma_1, \dots, \gamma_N \right\rangle^{\epsilon}_{g, \gamma} = y^{(1-g)n} \cdot \left\langle \gamma_1, \dots, \gamma_N \right\rangle^{\epsilon}_{g, \gamma}.\]
\item The identification of Proposition \ref{HilbSym} has a factor of $(-i)^{\mathrm{age}(\mu)}$.
\end{itemize}
Taking into account all the conventions above and Lemma \ref{invariantscomp}, we obtain that \cite[Conjectures 2R, 3R]{MNOP} can be reformulated\footnote{We take the liberty to extend the statement of the conjecture in \cite{MNOP} from a fixed curve to a moving one; and from one relative insertion to multiple ones.} as follows.
\\
\noindent
\textbf{PT/GW.} \textit{The generating series} $\left\langle \gamma_1, \dots, \gamma_N \right\rangle^{0^+}_{g, \gamma}(y)$ \textit{is the Taylor expansion of a rational function around $0$, such that under the change of variables} $y=-e^{iu}$,
\[(-y)^{-\gamma\cdot \mathrm c_1(S)/2}\cdot \left\langle \gamma_1, \dots, \gamma_N \right\rangle^{0^+}_{g, \gamma}(y)=(-iu)^{\gamma\cdot \mathrm c_1(S)} \cdot \left\langle \gamma_1, \dots, \gamma_N \right\rangle^{0}_{g, \gamma}(u).\]
\\
Assume now that $S$ is a del Pezzo surface. Let us apply our wall-crossing formulas. Using Corollary \ref{change}, we obtain
\begin{equation} \label{GW}
(-iu)^{\gamma\cdot \mathrm c_1(S)} \cdot \left\langle \gamma_1, \dots , \gamma_N \right\rangle^{- \infty}_{g, \gamma}=(e^{iu/2}-e^{-iu/2})^{\gamma \cdot \mathrm c_1(S)}\cdot \left\langle \gamma_1, \dots, \gamma_N \right\rangle^{0}_{g, \gamma}.
\end{equation}
Using \cite[Corollary 6.11]{N}, we obtain
\begin{equation} \label{DT}
(-y)^{-\gamma\cdot \mathrm c_1(S)/2}\cdot \left\langle \gamma_1, \dots, \gamma_N \right\rangle^{\infty}_{g, \gamma}=(y^{1/2}-y^{-1/2})^{\gamma \cdot \mathrm c_1(S)}\cdot \left\langle \gamma_1, \dots, \gamma_N \right\rangle^{0^+}_{g, \gamma}.
\end{equation}
Combining the two, we obtain the statement of \textsf{C.R.C.}
\\
\noindent
\textbf{C.R.C.} \textit{The generating series} $\left\langle \gamma_1, \dots, \gamma_N \right\rangle^{\infty}_{g, \gamma}(y)$\textit{ is a Taylor expansion of a rational function around 0, such that under the change of variables} $y=-e^{iu}$,
\[\left\langle \gamma_1, \dots, \gamma_N \right\rangle^{\infty}_{g, \gamma}(y) =\left\langle \gamma_1, \dots, \gamma_N \right\rangle^{-\infty}_{g, \gamma}(u).\]
By both wall-crossings, the statements of \textsf{PT/GW} and \textsf{C.R.C.} in the form presented above are equivalent.
\begin{cor} \label{equivalent}
\[\mathbf{PT/GW} \iff \mathbf{C.R.C.}\]
\end{cor}
\subsection{Quantum cohomology} \label{qcoh} Let $g=0, N=3$. This is a particularly nice case, firstly because these invariants collectively are known as \textit{quantum cohomology}. Secondly, the moduli space of genus-0 curves with 3 markings is a point. Hence the invariants $\left\langle \gamma_1, \gamma_2, \gamma_3 \right\rangle^{- \infty}_{0, \gamma}$ are relative GW invariants of $S\times \mathbb{P}^1$ relative to the vertical divisor $S_{0,1,\infty}$.
In \cite{PP}, \textsf{PT/GW} is established for $S\times \mathbb{P}^1$ relative to $S\times\{0,1,\infty\}$, if $S$ is toric. Corollary \ref{equivalent} then implies the following.
\begin{cor} \label{quantum} If $S$ is toric del Pezzo, $g=0$ and $N=3$, then
$\mathbf{C.R.C.}$ holds in all classes.
\end{cor}
The above result can also be stated as an isomorphism of quantum cohomologies with appropriate coefficient rings. Let
\begin{align*}QH^*(S^{[n]})& : = H^*(S^{[n]},{\mathbb{C}}) \otimes_{\mathbb{C}} {\mathbb{C}}[\![q^\gamma]\!](y) \\
QH_{\mathrm{orb}}^*(S^{(n)})&: = H_{\mathrm{orb}}^*(S^{(n)},{\mathbb{C}}) \otimes_{\mathbb{C}} {\mathbb{C}}[\![q^\gamma]\!](e^{iu})
\end{align*}
be quantum cohomologies, where ${\mathbb{C}}[\![q^\gamma]\!](y)$ and ${\mathbb{C}}[\![q^\gamma]\!](e^{iu})$ are rings of rational functions with coefficients in ${\mathbb{C}}[\![q^\gamma]\!]$ and in variables $y$ and $e^{iu}$, respectively. The latter we view as a subring of functions in the variable $u$. The quantum cohomologies are isomorphic by Corollary \ref{quantum},
\[QL\colon QH_{\mathrm{orb}}^*(S^{(n)})\cong QH^*(S^{[n]}),\]where $QL$ is given by a linear extension of $L$, defined in Proposition \ref{HilbSym}, from $H_{\mathrm{orb}}^*(S^{(n)},{\mathbb{C}})$ to $H_{\mathrm{orb}}^*(S^{(n)},{\mathbb{C}}) \otimes_{\mathbb{C}} {\mathbb{C}}[\![q^\gamma]\!]$ and by a change of variables $y=-e^{iu}$.
In particular,
\[QL(\alpha\cdot q^\gamma \cdot y^k)=(-1)^kL(\alpha)\cdot q^\gamma \cdot e^{iku}\]for an element $\alpha \in H_{\mathrm{orb}}^*(S^{(n)},{\mathbb{C}}) $. Ideally, one would also like to specialise
to $y = 0$ and $y = -1$, because in this way we recover the classical multiplications on $H_{\mathrm{orb}}^*(S^{(n)},{\mathbb{C}})$ and $H^*(S^{[n]},{\mathbb{C}})$, respectively. To do so, a more careful choice of
coefficients is needed: we have to take rational functions with no poles at $y= 0$ and $y = -1$.
\bibliographystyle{amsalpha}
\section{Introduction}
Solutions of general relativistic Liouville's equation ($grl$) in
a prescribed space-time
have been considered by some investigators.
Most authors have sought its solutions as functions of the
constants of motion, generated by Killing vectors of the space-time in
question. See for example Ehlers (1971), Ray and
Zimmerman (1977),
Mansouri and Rakei (1988), Ellis, Matraverse and Treciokas (1983), Maharaj and
Maartens (1985, 1987), Maharaj (1989), and Dehghani and Rezania (1996).
In applications to self gravitating stars and stellar systems, however, one
should combine Einstein's
field equations and $grl$. The resulting nonlinear equations can
be solved in certain approximations.
Two such methods are available;
the {\it post-Newtonian (pn) approximation} and the {\it weak-field} one.
In this paper we adopt the first approach to study a self gravitating
system embedded in an otherwise flat space-time.
In sect. 2, we derive the $grl$ in the $pn$ approximation.
In sect. 3 we seek static solutions of
the post-Newtonian Liouville's
equation ($pnl$). We find two integrals of $pnl$ that are the
$pn$
generalizations of the energy and angular momentum integrals of the
classical Liouville's equation.
Post-Newtonian polytropes, as simultaneous solutions of $pnl$ and Einstein's
equations,
are discussed and calculated in sect.
4. Section 5 is devoted to concluding remarks.
The main objective of this paper, however, is to set the stage for the second
paper in this series (Sobouti and Rezania, 1998). There, we study a class
of
non-static oscillatory solutions of $pnl$, different from the conventional $p$
and $g$ modes of the system. They are associated with oscillations in
the space-time metric, without disturbing the classical equilibrium of the
system. In this respect they might be akin to the so called {\it
gravitational wave} modes that some authors have advocated to exist in
relativistic systems. See, for example,
Andersson, Kokkotas and Schutz (1995), Baumgarte and Schmidt (1993),
Detweiler and Lindblom (1983, 1985),
Kojima (1988),
Kokkotas and Schutz (1986, 1992), Leaver (1985),
Leins, Nollert and Soffel (1993),
Lindblom, Mendell and Ipser (1997), and
Nollert and Schmidt (1992).
\section{Liouville's equation in post-Newtonian
approximation}
The one particle distribution function of a gas of collisionless particles
with identical mass $m$, in the restricted seven-dimensional phase space
\stepcounter{sub}
\begin{equation}
P(m):\;\;g_{\mu \nu} U^\mu U^\nu = -1
\end{equation}
satisfies $grl$:
\stepcounter{sub}
\begin{equation}
{\cal L}_Uf = (U^\mu \frac{\partial}{\partial x^\mu} - \Gamma_{\mu\nu}^i U^\mu
U^\nu \frac{\partial}{\partial U^i}) f(x^\mu,U^i) = 0,
\end{equation}
where $(x^\mu,U^i)$ is the set of configuration and velocity coordinates in
$P(m)$, $f(x^\mu,U^i)$ is a distribution function,
${\cal L}_U$
is Liouville's operator in the $(x^\mu,U^i)$ coordinates, and $\Gamma^i
_{\mu\nu}$ are Christoffel's symbols.
Greek indices run from 0 to 3 and Latin indices from
1 to 3. We use the convention
$c = 1$ except in numerical calculations of section 4.
The four-velocity of the particle and
its classical velocity are related as
\stepcounter{sub}
\begin{equation}
U^\mu = U^0 v^\mu;\;\;\;\; v^\mu = (1, v^i = dx^i/dt),
\end{equation}
where $U^0(x^\mu,v^i)$ is to be determined from Eq. (1).
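Explicitly, substituting Eq. (3) in Eq. (1) gives
\[U^0=\left(-g_{\mu\nu}v^\mu v^\nu\right)^{-1/2},\]
which, once the metric is expanded as in Eqs. (8) below, reduces to $U^0=1+\frac{1}{2}v^2-\phi+O(\bar{v}^4)$.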
In the $pn$ approximation, we need an expansion of
${\cal L}_U$ up to order $\bar{v}^4$, where $\bar{v}$ is a typical
Newtonian speed. To achieve this goal we transform
$(x^\mu,U^i)$ to $(x^\mu,v^i)$.
Liouville's operator transforms as
\stepcounter{sub}
\begin{equation}
{\cal L}_U = U^0 v^\mu (\frac{\partial}{\partial x^\mu} + \frac{\partial v^j}
{\partial x^\mu} \frac{\partial}{\partial v^j}) - \Gamma^i _{\mu\nu}U^{0^2}
v^\mu v^\nu \frac{\partial v^j}{\partial U^i} \frac{\partial}{\partial v^j},
\end{equation}
where $\partial v^j/ \partial x^\mu$ and $\partial v^j/ \partial U^i$ are
determined
from the inverse of the transformation matrix (see appendix A).
Thus,
\stepcounter{sub}
\stepcounter{subeqn}
\begin{eqnarray}
&&\frac{\partial v^j}{\partial x^\mu} = - \frac{U^0}{2Q} v^j
\frac{\partial g_{\alpha \beta}}{\partial x^\mu} v^\alpha v^\beta,\\
\stepcounter{subeqn}
&&\frac{\partial v^j}{\partial U^i} =
\;\; \frac{1}{Q} v^j (g_{0i} + g_{ik} v^k);\hspace{2cm}\;\;\mbox{for $i
\neq j$},\nonumber\\
&&\\
&&\hspace{.81cm}=-\frac{1}{Q} (U^{0^{-2}} + \sum_{k \neq i} v^k (g_{0 k}
+ g_{kl} v^l));\;\; \mbox{for $i = j$},\nonumber
\end{eqnarray}
where
\stepcounter{subeqn}
\begin{equation}
Q = U^0 (g_{0 0} + g_{0 l} v^l).
\end{equation}
Using Eqs.(5) in Eq.(4), one finds
\stepcounter{sub}
\stepcounter{subeqn}
\begin{equation}
{\cal L}_U f =U^{0} {\cal L}_v f=0,
\end{equation}
or
\stepcounter{subeqn}
\begin{equation}
{\cal L}_v f(x^\mu,v^i) = 0,
\end{equation}
where
\stepcounter{subeqn}
\begin{eqnarray}
&&{\cal L}_v= v^\mu (\frac{\partial}{\partial x^\mu} - \frac{U^0}{2Q} v^j
\frac{\partial g_{\alpha \beta}}{\partial x^\mu} v^\alpha
v^\beta\frac{\partial}{\partial v^j})
- \Gamma^i _{\mu\nu}v^\mu v^\nu\{\sum_{j\neq i} \frac{1}{Q} v^j (g_{0i}
+ g_{ik} v^k) \frac{\partial}{\partial v^j} \nonumber \\
&&\hspace{4cm} - \frac{1}{Q} (U^{0^{-2}} + \sum_{k \neq i} v^k (g_{0 k}
+ g_{kl} v^l)) \frac{\partial}{\partial v^i} \}.
\end{eqnarray}
We note that the post-Newtonian hydrodynamic equations are obtained from
integrations
of Eq. (6a) over the ${\bf v}$-space rather than Eq. (6b) (see appendix B).
We expand
${\cal L}_v$ up to the order $\bar{v}^4$. For this purpose, we need
expansions of Einstein's field equations, the metric tensor, and the affine
connections up to various orders.
Einstein's field equation with the harmonic coordinate
condition, $g^{\mu\nu}\Gamma^{\lambda}_{\mu\nu}=0$, yields (see
Weinberg 1972):
\stepcounter{sub}
\stepcounter{subeqn}
\begin{eqnarray}
&&\nabla^2\; ^2g_{00} = - 8\pi G\; ^0T^{00},\\
&&\nabla^2\; ^4g_{00} = \frac{\partial^2\; ^2g_{00}}{\partial t^2} +
\;^2g_{ij} \frac{\partial^2\; ^2g_{00}}{\partial x^i \partial x^j} -
(\frac{\partial\; ^2g_{00}}{\partial x^i})(\frac{\partial\; ^2g_{00}}
{\partial x^i})\nonumber\\
\stepcounter{subeqn}
&&\hspace{2cm}- 8\pi G (\;^2T^{00} - 2\; ^2g_{00}\; ^0T^{00} +\;
^2T^{ii}),\\
\stepcounter{subeqn}
&&\nabla^2\; ^3g_{i 0} = 16 \pi G\; ^1T^{i0},\\
\stepcounter{subeqn}
&&\nabla^2\; ^2g_{ij} = - 8 \pi G \delta_{ij}\; ^0T^{00}.
\end{eqnarray}
The symbols $^ng_{\mu\nu}$ and $^nT^{\mu\nu}$ denote the $n$th order terms in
${\bar v}$ in
the metric and in the energy-momentum tensors, respectively. Solutions of
these equations are
\stepcounter{sub}
\stepcounter{subeqn}
\begin{eqnarray}
&&^2g_{00} = - 2 \phi,\\
\stepcounter{subeqn}
&&^2g_{ij} = - 2 \delta_{ij} \phi, \\
\stepcounter{subeqn}
&&^3g_{i 0} = \eta_i, \\
\stepcounter{subeqn}
&&^4g_{00} = - 2 \phi^2 - 2 \psi,
\end{eqnarray}
where
\stepcounter{sub}
\stepcounter{subeqn}
\begin{eqnarray}
&&\phi({\bf x},t) = -G \int \frac{^0T^{00}({\bf x}',t)}{\vert{\bf x} -
{\bf x}' \vert} d^3 x', \\
\stepcounter{subeqn}
&&\eta^i ({\bf x},t) = -4 G \int \frac{^1T^{i 0} ({\bf x}',t)}{\vert {\bf x} -
{\bf x}' \vert} d^3 x',\\
\stepcounter{subeqn}
&&\psi({\bf x},t) = - \int \frac{d^3 x'}{\vert {\bf x} - {\bf x}' \vert}
(\frac{1}{4\pi} \frac{\partial^2 \phi({\bf x'},t)}{\partial t^2}
+ G\; ^2T^{00}({\bf x}',t)\nonumber\\
\stepcounter{subeqn}
&&\hspace{6cm}+G\;^2T^{ii}({\bf x'}, t)),
\end{eqnarray}
where bold characters denote three-vectors.
Substituting Eqs. (8) and (9) in (6c) gives
\stepcounter{sub}
\begin{eqnarray}
{\cal L}_v &= &{\cal L}^{cl}+{\cal L}^{pn} \nonumber\\
& =&\frac{\partial}{\partial t} + v^i
\frac{\partial}{\partial x^i} - \frac{\partial \phi}{\partial x^i}
\frac{\partial}{\partial v^i} \nonumber\\
&&- [(4\phi + {\bf v}^2) \frac{\partial \phi}{\partial x^i} -
\frac{\partial \phi}{\partial x^j} v^i v^j - v^i \frac{\partial
\phi}{\partial t} + \frac{\partial \psi}{\partial x^i} \nonumber\\
&&+ (\frac{\partial \eta_i}{\partial x^j} - \frac{\partial
\eta_j}{\partial x^i}) v^j+\frac{\partial\eta_i}{\partial t}]
\frac{\partial}{\partial v^i}
\end{eqnarray}
where ${\cal L}^{cl}$ and ${\cal L}^{pn}$ are the
classical Liouville
operator and its post-Newtonian correction,
respectively. Equation (6b) for the
distribution function $f(x^\mu,v^i)$ becomes
\stepcounter{sub}
\begin{equation}
({\cal L}^{cl}+{\cal L}^{pn}) f(x^\mu,v^i) = 0.
\end{equation}
The three scalar and vector potentials $\phi,\psi$ and
$\eta\hspace{-.2cm}\eta\hspace{-.2cm}\eta$
can now be given in terms of the distribution function.
The energy-momentum tensor in terms of $f(x^\mu, U^i)$ is
\stepcounter{sub}
\begin{equation}
T^{\mu\nu}(x^\lambda)=\int \frac{U^\mu U^\nu}{U^0} f(x^\lambda, U^i)
\sqrt{-g}d^3U,
\end{equation}
where $g=\det(g_{\mu\nu})$. For various orders of $T^{\mu\nu}$
one finds
\stepcounter{sub}
\stepcounter{subeqn}
\begin{eqnarray}
&&^0T^{00}(x^\lambda) = \int f(x^\lambda,v^i) d^3 v, \\
\stepcounter{subeqn}
&&^2T^{00}(x^\lambda) = \int (\frac{1}{2}v^2 + \phi(x^\lambda))
f(x^\lambda,v^i) d^3 v,\\
\stepcounter{subeqn}
&&^2T^{ij}(x^\lambda)= \int v^i v^j f(x^\lambda,v^i) d^3 v, \\
\stepcounter{subeqn}
&&^1T^{0i}(x^\lambda) = \int v^i f(x^\lambda,v^i) d^3 v.
\end{eqnarray}
Substituting Eqs. (13) in (9) gives
\stepcounter{sub}
\stepcounter{subeqn}
\begin{eqnarray}
&&\phi({\bf x},t) =-G \int \frac{f({\bf x'},t,{\bf v'})}{\vert {\bf
x} - {\bf x'} \vert} d \Gamma',\\
\stepcounter{subeqn}
&&\eta\hspace{-.2cm}\eta\hspace{-.2cm}\eta ({\bf x},t) = -4G \int \frac{{\bf
v'} f({\bf x'},t, {\bf v'})}{\vert {\bf x} - {\bf x'} \vert} d\Gamma',\\
&&\psi({\bf x},t) = \frac{G}{4 \pi} \int \frac{\partial^2
f({\bf x''},t,{\bf v''})/\partial t^2 }{
\vert {\bf x} - {\bf x'} \vert \vert {\bf
x'} - {\bf x''} \vert} d^3x'd\Gamma'' \nonumber\\
&&\hspace{2cm}- \frac{3}{2}G \int \frac{{\bf v'}^2 f({\bf x'},t, {\bf
v'})}{\vert {\bf x} - {\bf x'} \vert} d \Gamma'\nonumber\\
\stepcounter{subeqn}
&&\hspace{2cm}+ G^2 \int \frac{f({\bf x'},t,{\bf v'}) f({\bf
x''},t,{\bf v''})}{\vert {\bf x} - {\bf x'}
\vert \vert {\bf x'} - {\bf x''} \vert} d \Gamma' d
\Gamma'',
\end{eqnarray}
where $d\Gamma=d^3xd^3v$. Equations (11) and (14) complete the
$pn$ order of Liouville's equation for self gravitating systems embedded in
a flat space-time.
\section{Integrals of post-Newtonian Liouville's equation}
In an equilibrium state $f({\bf x}, {\bf v})$ is time-independent.
Macroscopic velocities along with the vector potential
$\eta\hspace{-.2cm}\eta\hspace{-.2cm}\eta$ vanish. Equations (10)
and (11) reduce to
\stepcounter{sub}
\begin{eqnarray}
&&({\cal L}^{cl}+{\cal L}^{pn})f({\bf x},{\bf v})=[(v^i\frac{\partial}
{\partial x^i}-
\frac{\partial\phi}{\partial x^i}\frac{\partial}{\partial v^i})\nonumber\\
&&\hspace{3cm}-(\frac{\partial\phi}{\partial x^i}(4\phi+v^2)-
\frac{\partial\phi}{\partial x^j}v^iv^j+\frac{\partial\psi}{\partial x^i})
\frac{\partial}{\partial v^i}]f=0.
\end{eqnarray}
One easily verifies that the following, a generalization of the classical
energy integral, is an integral of Eq. (15):
\stepcounter{sub}
\begin{eqnarray}
&&E=\frac{1}{2}v^2+\phi+2\phi^2+\psi + \mathrm{const}.
\end{eqnarray}
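Indeed, the check is immediate. With $\partial E/\partial x^i=(1+4\phi)\,\partial\phi/\partial x^i+\partial\psi/\partial x^i$ and $\partial E/\partial v^i=v^i$, Eq. (15) gives
\begin{eqnarray*}
({\cal L}^{cl}+{\cal L}^{pn})E&=&v^i\left[(1+4\phi)\frac{\partial\phi}{\partial x^i}+\frac{\partial\psi}{\partial x^i}\right]-\frac{\partial\phi}{\partial x^i}v^i\\
&&-\left[\frac{\partial\phi}{\partial x^i}(4\phi+v^2)-\frac{\partial\phi}{\partial x^j}v^iv^j+\frac{\partial\psi}{\partial x^i}\right]v^i=0,
\end{eqnarray*}
all terms cancelling identically.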
Furthermore, if $\phi({\bf x})$ and $\psi({\bf x})$ are spherically symmetric,
which actually is the
case for an isolated system in an asymptotically flat space-time,
the following generalizations of the angular momentum are also integrals of
Eq. (15)
\stepcounter{sub}
\begin{eqnarray}
&& l_i=\varepsilon_{ijk}x^jv^k\exp(-\phi),
\end{eqnarray}
where $\varepsilon_{ijk}$ is the Levi-Civita symbol. Static distribution
functions may be constructed as functions of $E$ and even functions of
$l_i$. The reason for the restriction
to even functions of $l_i$ is to ensure the vanishing of $\eta^i$,
the condition for the validity of Eq. (15).
\section{Polytropes in post-Newtonian approximation}
As in classical polytropes we
consider the distribution
function for a polytrope of index $n$ as
\stepcounter{sub}
\begin{eqnarray}
&&F_n(E)=\frac{\alpha_n}{4\pi\sqrt{2}}(-E)^{n-3/2}\;\;
\mbox{for}\;\; E< 0, \nonumber\\
&&\hspace{1.3cm}=0\hspace{3.00cm}\mbox{for}\;\; E\ge 0,
\end{eqnarray}
where $\alpha_n$ is a constant.
By Eqs. (13) the corresponding orders of the energy-momentum tensor are
\stepcounter{sub}
\stepcounter{subeqn}
\begin{eqnarray}
&&^0T^{00}_n=\alpha_n\beta_n(-U)^n,\\
\stepcounter{subeqn}
&&^2T^{00}_n=\alpha_n\beta_n\phi (-U)^n +\alpha_n\gamma_n
(-U)^{n+1},\\ \stepcounter{subeqn}
&&^2T^{ii}_n\;\;=\delta_{ij}\;^2T^{ij}=2\alpha_n\gamma_n (-U)^{n+1},\\
\stepcounter{subeqn}
&&^1T^{0i}_n=0,
\end{eqnarray}
where
\stepcounter{sub}
\begin{eqnarray}
&&\beta_n=\int^1_0(1-\eta)^{n-3/2}\eta^{1/2}d\eta=\Gamma(3/2)\Gamma(n-1
/2)/\Gamma(n+1),\\
\stepcounter{sub}
&&\gamma_n=\int^1_0(1-\eta)^{n-3/2}\eta^{3/2}d\eta=\Gamma(5/2)\Gamma(n-1/
2)/\Gamma(n+2),
\end{eqnarray}
and $U=\phi+2\phi^2+\psi$ is the gravitational potential to $pn$ order.
It is chosen to vanish at the
surface of the stellar configuration. With this choice, the escape velocity
$v_e=\sqrt{-2U}$ means escape to the boundary of the system rather
than to infinity. Einstein's equations, Eqs. (7), (8) and (9), lead to
\stepcounter{sub}
\begin{eqnarray}
&&\nabla^2\phi=4\pi G\; ^0T^{00}=
4\pi G\alpha_n\beta_n(-U)^n,\\
\stepcounter{sub}
&&\nabla^2\psi=4\pi G (^2T^{00}+^2T^{ii})
=4\pi G\alpha_n\beta_n\phi (-U)^n\nonumber\\
&&\hspace{1cm}+12\pi G\alpha_n\gamma_n(-U)^{n+1}.
\end{eqnarray}
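We note in passing that the integrals defining $\beta_n$ and $\gamma_n$ in
Eqs. (20) and (21) are Euler Beta functions, $B(3/2,\,n-1/2)$ and
$B(5/2,\,n-1/2)$. The following snippet (our own illustrative check, not
part of the original calculation; it assumes SciPy is available) verifies
the Gamma-function expressions numerically for $n=3$:
\begin{verbatim}
# Numerical check of Eqs. (20) and (21) for polytropic index n = 3.
from scipy.integrate import quad
from scipy.special import gamma

n = 3.0
beta_n  = quad(lambda e: (1 - e)**(n - 1.5) * e**0.5, 0, 1)[0]
gamma_n = quad(lambda e: (1 - e)**(n - 1.5) * e**1.5, 0, 1)[0]
print(beta_n,  gamma(1.5)*gamma(n - 0.5)/gamma(n + 1))  # both 0.19635
print(gamma_n, gamma(2.5)*gamma(n - 0.5)/gamma(n + 2))  # both 0.07363
\end{verbatim}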
Expanding $(-U)^n$ as
\stepcounter{sub}
\begin{equation}
(-U)^n=(-\phi)^n[1+n(2\phi+\frac{\psi}{\phi})+\cdots],
\end{equation}
and substituting it in Eqs. (22) and (23) gives
\stepcounter{sub}
\begin{eqnarray}
&&\nabla^2\phi=4\pi
G\alpha_n\beta_n[(-\phi)^n-2n(-\phi)^{n+1}-n(-\phi)^{n-1}\psi+\cdots],\\
\stepcounter{sub}
&&\nabla^2\psi=4\pi
G\alpha_n\beta_n
\{(3\frac{\gamma_n}{\beta_n}-1) (-\phi)^{n+1} \nonumber\\
&&\hspace{3cm}- [3(n+1)\frac{\gamma_n}{\beta_n}-n] [2(-\phi)^{n+2}+
(-\phi)^n\psi]+\cdots\}.
\end{eqnarray}
These equations can be solved numerically by an iterative scheme. We
introduce three
dimensionless quantities
\stepcounter{sub}
\stepcounter{subeqn}
\begin{eqnarray}
&&-\phi\equiv \lambda \theta,\\
\stepcounter{subeqn}
&&-\psi\equiv \lambda^2 \Theta,\\
\stepcounter{subeqn}
&&\hspace{.35cm}r\equiv a\zeta,
\end{eqnarray}
where, in terms of $\rho_c$, the central density,
$\lambda=(\rho_c/\alpha_n\beta_n)^{1/n}$ and
$a^{-2}=4\pi G\rho_c/\lambda$. Equations
(25) and (26) in various iteration orders reduce to
\stepcounter{sub}
\stepcounter{subeqn}
\begin{eqnarray}
&&\nabla_{\zeta}^2\theta_o + \theta_o^n=0,\\
\stepcounter{subeqn}
&&\nabla_{\zeta}^2\Theta_o +(3\frac{\gamma_n}{\beta_n}-1) \theta_o^{n+1}=0,\\
\stepcounter{subeqn}
&&\nabla_{\zeta}^2\theta_1+\theta_1^n = qn(2\theta_o^{n+1} - \theta_o^{n-1}
\Theta_o),\\
&&\nabla_{\zeta}^2\Theta_1+(3\frac{\gamma_n}{\beta_n}-1) \theta_1^{n+1}=
\nonumber\\
\stepcounter{subeqn}
&&\hspace{2cm}q [3(n+1)\frac{\gamma_n}{\beta_n}-n] (2\theta_o^{n+2}-
\theta_o^n\Theta_o),
\end{eqnarray}
where
$\nabla_{\zeta}^2= \frac{1}{\zeta^2}
\frac{d}{d\zeta}(\zeta^2\frac{d}{d\zeta})$.
The subscripts $o$ and $1$ refer to the zeroth and first orders of the
iteration. The dimensionless parameter $q$ is defined as
\stepcounter{sub}
\begin{equation}
q=\frac{4\pi G \rho_c a^2}{c^2}=\frac{R_s}{R} \frac{1}{2\zeta_1\mid\theta_o'
(\zeta_1)\mid},
\end{equation}
where $R_s$ is the Schwarzschild radius, $R=a\zeta_1$ is the radius of the
system, $\zeta_1$ is defined by $\theta_o(\zeta_1)=0$ and $c$ is the speed
of light.
The order of magnitude of $q$ varies from $10^{-5}$ for white dwarfs to
$10^{-1}$ for neutron stars. For future reference, let us also note that
\stepcounter{sub}
\begin{equation}
-U=\lambda[\theta_1 + q (\Theta_1 - 2\theta_1^2)].
\end{equation}
We use a fourth-order Runge--Kutta method to find
numerical solutions of the four coupled nonlinear differential equations (28).
At the center we adopt
\stepcounter{sub}
\begin{equation}
\theta_a(0)=1;\;\;\;\theta'_a(0)=\frac{d\theta_a}{d\zeta}\mid_0=0;\;\;
\;\;\; a=0,1.
\end{equation}
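For reference, the zeroth-iteration step can be sketched as follows (our
own illustration, not the authors' code): Eq. (28a) is the classical
Lane--Emden equation, integrated here by RK4 from the central conditions
(31); $\Theta_o$, $\theta_1$ and $\Theta_1$ follow the same pattern, with
the right-hand sides of Eqs. (28b--d) as source terms.
\begin{verbatim}
# Sketch: RK4 integration of Eq. (28a),
# theta'' + (2/zeta) theta' + theta^n = 0, theta(0)=1, theta'(0)=0.
import numpy as np

def lane_emden(n, h=1e-4, z0=1e-6):
    # Series start near the centre avoids the 2/zeta singularity.
    z = z0
    y = np.array([1.0 - z0**2/6.0, -z0/3.0])   # [theta, theta']
    f = lambda z, y: np.array([y[1], -2.0*y[1]/z - abs(y[0])**n])
    while y[0] > 0.0:
        k1 = f(z, y); k2 = f(z + h/2, y + h*k1/2)
        k3 = f(z + h/2, y + h*k2/2); k4 = f(z + h, y + h*k3)
        y = y + h*(k1 + 2*k2 + 2*k3 + k4)/6.0
        z += h
    return z, y[1]    # zeta_1 and theta'(zeta_1)

print(lane_emden(1))  # (3.14159..., -0.31831...): n = 1 row of Table 1
\end{verbatim}
With $\zeta_1$ and $\theta_o'(\zeta_1)$ in hand, $q$ follows from Eq. (29)
for a given $R_s/R$.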
In tables 1 and 2, we summarize the numerical results for the Newtonian and
post-Newtonian polytropes for different polytropic indices and $q$ values.
The $pn$ corrections tend to reduce the radius of the polytrope.
The
higher the polytropic index the smaller this radius. The same is true,
of course, for higher values of $q$.\\
\section{Concluding remarks}
As we discussed in section 1, some authors have argued that new modes of
oscillation exist in relativistic stellar systems. They believe that these
modes are generated by perturbations of the space-time metric and have no
analogue in Newtonian systems. They used general relativistic hydrodynamics
to distinguish them.
Although this approach is standard, it requires thermodynamic
concepts
that may be questionable in the relativistic regime. Hence, to avoid these
conceptual problems, we chose the general relativistic Liouville equation,
which is a purely dynamical theory. The combination of the Liouville and
Einstein equations enables one to study the behavior of relativistic
systems without ambiguity.
Therefore, in this paper, we used the $pn$ approximation to obtain the
Einstein-Liouville equation for a relativistic self-gravitating stellar
system. We found two integrals,
generalizations of the classical energy and angular momentum, that
satisfy the post-Newtonian Liouville equation in the equilibrium state.
These solutions enable one to
construct an equilibrium model for the system in the $pn$ approximation.
Polytropic models, the most familiar stellar models, are constructed in the
$pn$ approximation. In tables 1 and 2, we compared these models with their
Newtonian counterparts. The $pn$ corrections tend to reduce the radius of
the polytrope. The higher the polytropic index, the smaller this radius.
We introduced a parameter $q$, Eq. (29), to include the effect of the
central density of the system in the calculations. Increasing values of $q$
reduce the radius of the system. In the second paper (Sobouti and Rezania
1998), we study time-dependent oscillations of a relativistic system in the
$pn$ approximation.
\newpage
\setcounter{sub}{0}
\setcounter{subeqn}{0}
\renewcommand{\theequation}{A.\thesub\thesubeqn}
\noindent {\large{\bf Appendix A: Derivation of Eqs. (5)
}}\\ Consider a general coordinate transformation $(X, U)$ to
$(Y, V)$. The corresponding partial derivatives transform as
\[\left( \begin{array}{c}
\partial/\partial X\\ \partial/\partial U
\end{array} \right)\;= M
\left( \begin{array}{c}
\partial/\partial Y\\ \partial/\partial V
\end{array} \right)\;,\]
\stepcounter{sub}
\begin{equation}
\hspace{5.9cm}=
\left( \begin{array}{cc}
\partial Y/\partial X &\partial V/\partial
X\\
\partial Y/\partial U &\partial V/\partial U
\end{array}\right)
\left( \begin{array}{c}
\partial/\partial Y\\\partial/\partial V
\end{array} \right) ,
\end{equation}
where $M$ is the $7\times 7$ Jacobian matrix of transformation. Setting
$X=Y=x^{\mu}$, $V=v^i$ and $U=U^i$ for our problem, one finds
\stepcounter{sub}
\stepcounter{subeqn}
\begin{equation}
M=\left( \begin{array}{cc}
\partial x^{\mu}/\partial x^{\nu}&\partial v^i/\partial
x^{\nu}\\
\partial x^{\mu}/\partial U^j&\partial v^i/\partial U^j
\end{array}\right),
\end{equation}
and its inverse
\stepcounter{subeqn}
\begin{equation}
M^{-1}=\left( \begin{array}{cc}
\partial x^{\mu}/\partial x^{\nu}&\partial U^i/\partial
x^{\nu}\\
\partial x^{\mu}/\partial v^j&\partial U^i/\partial v^j
\end{array}\right).
\end{equation}
One easily finds
\stepcounter{sub}
\stepcounter{subeqn}
\begin{eqnarray}
&&\partial x^{\mu}/\partial x^{\nu}=\delta_{\mu\nu};\;\;\;\;\;\;\;
\partial x^{\mu}/\partial v^j=0,\\
\stepcounter{subeqn}
&&\partial U^i/\partial x^{\nu}=v^i\partial U^0/\partial
x^{\nu}
=\frac{{U^0}^3v^i}{2}\frac{\partial g_{\alpha\beta}}{\partial
x^{\nu}}v^{\alpha}v^{\beta},\\
\stepcounter{subeqn}
&&\partial U^i/\partial v^j=U^0\delta_{ij}+v^i \partial
U^0/\partial v^j
=U^0\delta_{ij}+{U^0}^3v^ig_{j\beta}v^{\beta}.
\end{eqnarray}
Inserting the latter in $M^{-1}$ and inverting the result one arrives at $M$
from which Eqs. (5) can be read out.\\
\newpage
\setcounter{sub}{0}
\renewcommand{\theequation}{B.\thesub\thesubeqn}
\noindent {\large{\bf Appendix B: Post-Newtonian hydrodynamics}}\\
The mathematical manipulations in the development of this work have been
taxing. To ensure that no error has crept into the calculations, we have
tried to infer the post-Newtonian hydrodynamical equations from the
post-Newtonian Liouville equation derived earlier. From Eq. (6a) one has
\stepcounter{sub}
\begin{eqnarray}
{\cal L}_U^{pn}f&&=U^0({\cal L}^{cl}+{\cal L}^{pn})f\nonumber\\
&&=[(1-\phi+\frac{1}{2}{\bf v}^2){\cal L}^{cl}+{\cal L}^{pn}]f,
\end{eqnarray}
where ${\cal L}^{cl}$ and ${\cal L}^{pn}$ are given by Eq. (10).
We integrate ${\cal L}_U^{pn}f$ over the ${\bf v}$-space:
\stepcounter{sub}
\begin{equation}
\int {\cal L}_U^{pn}fd^3v=\int [(1-\phi+\frac{1}{2}{\bf v}^2) {\cal
L}^{cl}+{\cal L}^{pn}]fd^3v.
\end{equation}
Using Eqs. (12) and (13), one finds the continuity equation
\stepcounter{sub}
\begin{eqnarray}
&&\frac{\partial}{\partial
t}(\;^0T^{00}+\;^2T^{00})+\frac{\partial}{\partial x^j}(\; ^1T^{0j}
+\; ^3T^{0j})-\;^0T^{00}\frac{\partial \phi}{\partial t} =0,
\end{eqnarray}
which is the $pn$ expansion of the continuity equation
\stepcounter{sub}
\begin{equation}
T^{0\nu}_{\;\;\;;\nu}=0.
\end{equation}
Next,
we multiply ${\cal L}_U^{pn}f$ by $v^i$ and integrate over the ${\bf
v}$-space:
\stepcounter{sub}
\begin{equation}
\int v^i {\cal L}_U^{pn}fd^3v=\int v^i[(1-\phi+\frac{1}{2}{\bf v}^2) {\cal
L}^{cl}+{\cal L}^{pn}]fd^3v.
\end{equation}
After some calculations one finds
\stepcounter{sub}
\begin{eqnarray}
&&\frac{\partial}{\partial
t}(\;^1T^{0i}+\;^3T^{0i})+\frac{\partial}{\partial x^j}(\; ^2T^{ij}
+\; ^4T^{ij})\nonumber\\
&&\;\;\;+\; ^0T^{00}[\frac{\partial}{\partial x^i}(\phi+2\phi^2+\psi)+
\frac{\partial
\eta_i}{\partial t}]+\; ^2T^{00}\frac{\partial
\phi}{\partial x^i}\nonumber\\
&&\;\;\;+\; ^1T^{0j}(\frac{\partial \eta_i}{\partial x^j}-\frac{\partial
\eta_j}{\partial x^i}-4\delta_{ij}\frac{\partial \phi}{\partial t})+ \;
^2T^{jk}
(\delta_{jk}\frac{\partial \phi}{\partial x^i}-4\delta_{ik}\frac{\partial \phi}
{\partial x^j})=0.
\end{eqnarray}
The latter is the correct $pn$ expansion of
\stepcounter{sub}
\begin{equation}
T^{i\nu}_{\;\;\;;\nu}=0;\;\;\;\;i=1,2,3.
\end{equation}
See Weinberg (1972). Q.E.D.
\vspace{2cm}\\
{\Large{\bf References:}}\\
\begin{itemize}
\item[ ] Andersson N., Kokkotas K. D., Schutz B. F., 1995, M. N. R. A. S.,
{\bf 274}, 1039
\item[ ] Baumgarte, T. W., Schmidt, B. G., 1993, Class. Quantum. Grav.,
{\bf 10}, 2067
\item[ ] Dehghani M. H., Rezania V., 1996, Astron. Astrophys., {\bf 305}, 379
\item[ ] Detweiler S. L., Lindblom L., 1985, Ap. J., {\bf 292}, 12
\item[ ] Ehlers J., 1977, in: Sachs R. K. (ed.) Proceedings of the international
summer school of
\indent Physics ``Enrico Fermi'', Course 47
\item[ ] Ellis G. R., Matraverse D. R., Treciokas R., 1983, Ann. Phys. {\bf
150}, 455
\item[ ] Ellis G. R., Matraverse D. R., Treciokas R., 1983, Ann. Phys. {\bf
150}, 487
\item[ ] Ellis G. R., Matraverse D. R., Treciokas R., 1983, Gen. Rel. Grav.
{\bf 15}, 931
\item[ ] Kojima, Y., 1988, Prog. Theor. Phys.,
{\bf 79}, 665
\item[ ] Kokkotas K. D., Schutz B. F., 1986, Gen. Rel. Grav., {\bf 18},
913
\item[ ] Kokkotas K. D., Schutz B. F., 1992, M. N. R. A. S., {\bf
255}, 119
\item[ ] Leaver, E. W., 1985, Proc. R. Soc. London A,
{\bf 402}, 285
\item[ ] Leins, M., Nollert, H. P., Soffel, M. H., 1993, Phys. Rev. D,
{\bf 48}, 3467
\item[ ] Lindblom L., Detweiler S. L., 1983, Ap. J. Suppl., {\bf 53}, 73
\item[ ] Lindblom, L., Mendell, G., Ipser, J. R., 1997, Phys. Rev. D,
{\bf 56}, 2118
\item[ ] Maartens R., Maharaj S. D., 1985, J. Math. Phys. {\bf 26}, 2869
\item[ ] Maharaj S. D., Maartens R., 1987, Gen. Rel. Grav. {\bf 19}, 499
\item[ ] Maharaj S. D., 1989, Nuovo Cimento 163B, No. 4, 413
\item[ ] Mansouri A., Rakei A., 1988, Class. Quantum. Grav. {\bf 5}, 321
\item[ ] Nollert, H. -P., Schmidt, B. G., 1992, Phys. Rev. D,
{\bf 45}, 2617
\item[ ] Ray J. R., Zimmerman J. C., 1977, Nuovo Cimento 42B, No. 2, 183
\item[ ] Sobouti Y., Rezania V., 1998, Astron. Astrophys., (Submitted to)
\item[ ] Weinberg S., 1972, Gravitation and Cosmology. John Wiley \& Sons, New
York
\end{itemize}
\newpage
\noindent Table 1. A comparison of the Newtonian and post-Newtonian polytropes
at certain selected radii for $n$=1, 2 and 3 and
different values of $q$.\vspace{1cm}\\
Table 2. Same as Table 1, for $n$=4 and 4.5.
\newpage
\begin{center}
Table 1.\vspace{.2cm}\\
\begin{tabular}{|c|c|c|c|c|c|}\hline
n &Polytropic & Newtonian &\multicolumn{3}{c|}{$pn$ polytrope,
$\theta+q(\Theta-2\theta^2)$}\\\cline{4-6}
&radius, $\zeta$ &polytrope, $\theta$& $q=10^{-5}$ & $q=10^{-3}$ &
$q=10^{-1}$ \\\hline & 0.0000000 & 1.00000 & 1.00000 & 1.00000 & 1.00000\\
& 1.0000000 & 0.84145 & 0.84145 & 0.84143 & 0.83936\\
& 2.0000000 & 0.45458 & 0.45458 & 0.45433 & 0.42949\\
1 & 2.9233000 & 0.07408 & 0.07407 & 0.07334 & 0.00000 \\
& 3.1388500 & 0.00087 & 0.00086 & 0.00000 & \\
& 3.1415500 & 0.00001 & 0.00000 & & \\
& 3.1415930 & 0.00000 & & & \\\hline
& 0.0000000 & 1.00000 & 1.00000 & 1.00000 & 1.00000\\
& 1.0000000 & 0.84868 & 0.84868 & 0.84863 & 0.84394\\
& 2.0000000 & 0.52989 & 0.52988 & 0.52945 & 0.48609\\
& 3.0000000 & 0.24188 & 0.24187 & 0.24031 & 0.13289\\
2 & 3.4737000 & 0.13904 & 0.13902 & 0.13770 & 0.00000 \\
& 4.3394800 & 0.00171 & 0.00169 & 0.00000 & \\
& 4.3527000 & 0.00002 & 0.00000 & & \\
& 4.3529000 & 0.00000 & & & \\\hline
& 0.0000000 & 1.00000 & 1.00000 & 1.00000 & 1.00000\\
& 1.0000000 & 0.85480 & 0.85480 & 0.85473 & 0.84773\\
& 2.0000000 & 0.58284 & 0.58283 & 0.58230 & 0.52894\\
& 3.0000000 & 0.35939 & 0.35938 & 0.35824 & 0.24016\\
& 4.0000000 & 0.20927 & 0.20925 & 0.20764 & 0.03226\\
3 & 4.1939500 & 0.18690 & 0.18688 & 0.18520 & 0.00000 \\
& 6.8435000 & 0.00228 & 0.00225 & 0.00000 & \\
& 6.8963000 & 0.00002 & 0.00000 & & \\
& 6.8967000 & 0.00000 & & & \\\hline
\end{tabular}
\end{center}
\newpage
\begin{center}
Table 2.\vspace{.2cm}\\
\begin{tabular}{|c|c|c|c|c|c|}\hline
n &Polytropic & Newtonian &\multicolumn{3}{c|}{$pn$ polytrope,
$\theta+q(\Theta-2\theta^2)$}\\\cline{4-6}
&radius, $\zeta$ &polytrope, $\theta$& $q=10^{-5}$ & $q=10^{-3}$ &
$q=10^{-1}$ \\\hline & 0.0000000 & 1.00000 & 1.00000 & 1.00000 & 1.00000\\
& 2.0000000 & 0.62294 & 0.62293 & 0.62235 & 0.56326\\
& 4.0000000 & 0.31804 & 0.31802 & 0.31645 & 0.14194\\
& 5.1541000 & 0.22574 & 0.22572 & 0.22383 & 0.00000 \\
& 8.0000000 & 0.10450 & 0.10448 & 0.10221 & \\
4 &12.0000000 & 0.02972 & 0.02970 & 0.02716 & \\
&14.0000000 & 0.00833 & 0.00830 & 0.00570 & \\
&14.6468000 & 0.00265 & 0.00262 & 0.00000 & \\
&14.9680000 & 0.00003 & 0.00000 & & \\
&14.9713400 & 0.00000 & & & \\\hline
& 0.0000000 & 1.00000 & 1.00000 & 1.00000 & 1.00000\\
& 2.0000000 & 0.63965 & 0.63964 & 0.63905 & 0.57857\\
& 4.0000000 & 0.36053 & 0.36651 & 0.35897 & 0.18628\\
& 5.7468600 & 0.24334 & 0.24332 & 0.24135 & 0.00000\\
4.5 & 8.0000000& 0.16173 & 0.16171 & 0.15946 & \\
&12.0000000 & 0.09015 & 0.09013 & 0.08766 & \\
&16.0000000 & 0.05402 & 0.05399 & 0.05141 & \\
&20.0000000 & 0.03230 & 0.03227 & 0.02962 & \\
&24.0000000 & 0.01782 & 0.01779 & 0.01510 & \\
&28.0000000 & 0.00747 & 0.00744 & 0.00472 & \\
&30.2689200 & 0.00282 & 0.00279 & 0.00000 &\\
&31.7792300 & 0.00004 & 0.00000 & & \\
&31.7878400 & 0.00000 & & & \\\hline
\end{tabular}
\end{center}
\end{document}
\section{Introduction}
\label{sec:introduction}
Reinforcement Learning (RL) is a powerful algorithmic paradigm for solving sequential decision-making problems and has achieved great success in various types of environments, e.g., mastering the game of Go \citep{silver2017mastering}, playing computer games \citep{mnih2013playing} and controlling power systems \citep{qiu2021multi,wang2021multi}. The majority of these successes depend on an implicit assumption that \textit{the testing environment is identical to the training environment}. However, this assumption is too strong for most realistic problems, for example, controlling a robot. There are several situations in robotics where mismatches may appear between training and testing environments: (1) \textit{Parameter Perturbations}: many environmental parameters, e.g., temperature or friction coefficients, can fluctuate after deployment and thus deviate from the training environment \citep{kober2013reinforcement}; (2) \textit{System Identification}: a transition function estimated from finite experience is biased compared with the real-world model \citep{schaal1996learning}; (3) \textit{Sim-to-Real}: a policy is learned in a simulated environment but deployed on real robots for reasons of safety and efficiency \citep{openai2019learning}. Hence, it is essential for practical applications to develop Robust RL algorithms that can adapt to unseen testing environments.
Robust Markov Decision Processes (Robust MDPs) \citep{iyengar2005robust, wiesemannrobust} are a common framework for analyzing the robustness of RL algorithms. Compared with regular MDPs, which have a single transition model $\mathcal{P}(s'|s,a)$, Robust MDPs consider an uncertainty set of transition models $\mathbb{P}=\{\mathcal{P}\}$ to better describe perturbations of the transitions. This formulation is sufficiently general to cover various scenarios in robot learning problems.
Robust RL aims to learn a robust policy under the worst-case scenario over all transition models $\mathcal{P}\in\mathbb{P}$. If the transition model $\mathcal{P}$ is viewed as an adversarial agent and the uncertainty set $\mathbb{P}$ as its action space, one can reformulate Robust RL as a zero-sum game \citep{ho2018fast}. In general, solving such a problem is NP-hard \citep{wiesemannrobust}. \citet{derman2021twice}, however, adopted the Legendre-Fenchel transform to avoid excessive mini-max computations by converting the minimization over the transition model into a regularization of the value function. This approach enjoys the additional advantage of offering more freedom to design novel regularizers for different types of transition uncertainty. Its extension to continuous state spaces (required, e.g., for controlling robots) has remained unclear, which directly motivates the work of this paper.
In this paper, we (1) extend the robustness-regularization duality method to continuous control tasks; (2) propose the \textbf{U}ncertainty \textbf{S}et \textbf{R}egularizer (USR) on existing RL frameworks for learning robust policies; (3) learn an adversarial uncertainty set through the value function when the actual uncertainty set is unknown in some scenarios; (4) evaluate USR on the Real-world Reinforcement Learning (RWRL) benchmark, showing improvements for a robust performance in perturbed testing environments with unknown uncertainty sets.
\section{Preliminaries}
\label{sec:preliminaries}
\subsection{Robust MDPs}
\label{sec:robust_mdp}
The mathematical framework of Robust MDPs \citep{iyengar2005robust, wiesemannrobust} extends regular MDPs in order to deal with uncertainty about the transition function.
A Robust MDP can be formulated as a 6-tuple $\langle \mathcal{S},\mathcal{A}, \mathbb{P}, r, \mu_0, \gamma \rangle$, where $\mathcal{S}, \mathcal{A}$ stand for the state and action space respectively, and $r(s,a): \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ stands for the reward function. Let $\Delta_{\mathcal{S}}$ and $\Delta_{\mathcal{A}}$ be the sets of probability measures on $\mathcal{S}$ and $\mathcal{A}$ respectively. The initial state is sampled from an initial distribution $\mu_0 \in \Delta_{\mathcal{S}}$, and future rewards are discounted by the discount factor $\gamma \in [0,1]$.
The most important concept in Robust MDPs is the uncertainty set $\mathbb{P}=\left\{ \mathcal{P}(s'|s,a): \mathcal{S} \times \mathcal{A} \to \Delta_{\mathcal{S}} \right\}$, which describes the admissible variations of the transition model, in contrast to the single stationary transition $\mathcal{P}$ of regular MDPs. Let $\Pi=\left\{ \pi(a|s): \mathcal{S} \to \Delta_{\mathcal{A}} \right\}$ be the policy space; the objective of Robust RL can then be formulated as a mini-max problem,
\begin{equation}
\label{eq:robust_objective}
J^* = \max_{\pi \in \Pi} \min_{\mathcal{P} \in \mathbb{P}} \mathbb{E}_{ \pi, \mathcal{P} } \left[ \sum_{t=0}^{+\infty} \gamma^t r(s_t, a_t) \right].
\end{equation}
\subsection{Robust Bellman equation}
\label{sec:robust_bellman}
While \citet{wiesemannrobust} has proved the NP-hardness of this mini-max problem for arbitrary uncertainty sets, most recent studies \citep{iyengar2005robust, ho2018fast, derman2021twice, nilim2004robustness, roy2017reinforcement, mankowitz2019robust, dermansoftrobust, wang2021online, grand-clement2021scalable} approximate it by assuming a rectangular structure on the uncertainty set, i.e., $\mathbb{P} = \mathop{\times}_{ (s,a) \in \mathcal{S} \times \mathcal{A} }^{} \mathbb{P}_{sa}$, where $\mathbb{P}_{sa} \subseteq \Delta_{\mathcal{S}}$ denotes the local uncertainty set of the transition at $(s, a)$. In other words, the variation of the transition is independent at every $(s, a)$ pair. Under the assumption of a rectangular uncertainty set, the robust action value function $Q^{\pi}(s, a)$ under policy $\pi$ must satisfy the following robust version of the Bellman equation \citep{bellman1966dynamic}:
\begin{equation}
\label{eq:bellman_equation}
\begin{split}
Q^{\pi}(s, a) &= r(s, a) + \min_{\mathcal{P}_{sa} \in \mathbb{P}_{sa} }\gamma \sum_{s', a'}\mathcal{P}_{sa}(s') \pi(a'|s') Q^{\pi}(s', a') \\
&= r(s, a) + \min_{\mathcal{P}_{sa} \in \mathbb{P}_{sa} }\gamma \sum_{s'}\mathcal{P}_{sa}(s') V^{\pi}(s').
\end{split}
\end{equation}
\citet{nilim2004robustness} have shown that the corresponding robust Bellman operator admits a unique fixed point, namely the robust action value function $Q^{\pi}(s, a)$ of Equation \ref{eq:bellman_equation}.
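As a toy illustration of Equation \ref{eq:bellman_equation} (our own sketch, not code from the cited works), the inner minimization can be solved exactly when the local uncertainty set is a finite collection of candidate distributions:
\begin{verbatim}
# Illustrative tabular backup for the robust Bellman equation.
import numpy as np

def robust_backup(r_sa, P_sa, V, gamma=0.9):
    # P_sa: (K, S) array of K candidate next-state distributions in
    # the (finite) local uncertainty set; V: (S,) state values.
    return r_sa + gamma * np.min(P_sa @ V)
\end{verbatim}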
\subsection{Robustness-regularization duality}
\label{sec:duality}
Solving the minimization problem on the RHS of Equation \ref{eq:bellman_equation} can be further simplified by the Legendre-Fenchel transform \citep{rockafellar1970convex}. For a function $f: X \to \mathbb{R}$, its convex conjugate is $f^{*}\left(x^{*}\right):=\sup \left\{\left\langle x^{*},x\right\rangle -f(x)~\colon ~x\in X\right\}$. Defining $\delta_{\mathbb{P}_{sa}}(\mathcal{P}_{sa}) =0$ if $\mathcal{P}_{sa} \in {\mathbb{P}_{sa}} $ and $+\infty$ otherwise, Equation \ref{eq:bellman_equation} can be transformed via its convex conjugate (see \citet{derman2021twice} for a detailed derivation),
\begin{equation}
\label{eq:bellman_equation_dual}
\begin{split}
Q^{\pi}(s, a) &= r(s, a) + \min_{\mathcal{P}_{sa} }\gamma \sum_{s'}\mathcal{P}_{sa}(s') V^{\pi}(s') + \delta_{\mathbb{P}_{sa}}(\mathcal{P}_{sa}) \\
&= r(s, a) - \delta^*_{\mathbb{P}_{sa}}(-V^{\pi}(\cdot)).
\end{split}
\end{equation}
The transformation implies that the robustness condition on transition can be equivalently expressed as a regularization constraint on the value function, referred to as the robustness-regularization duality. The duality can extensively reduce the cost of solving the minimization problem over infinite transition choices and thus is widely studied in the robust reinforcement learning research community \citep{husain2021regularized, eysenbach2021maximum, brekelmans2022your}.
As a special case, \citet{derman2021twice} considered an $L_2$ norm uncertainty set on transitions, i.e., $\mathbb{P}_{sa} = \{ \bar{\mathcal{P}}_{sa} + \alpha \tilde{\mathcal{P}}_{sa} : \| \tilde{\mathcal{P}}_{sa} \|_2 \le 1 \}$, where $\bar{\mathcal{P}}_{sa}$ is usually called the nominal transition model. It may represent prior knowledge about the transition model or a numerical model of the training environment. This uncertainty set allows the transition model to fluctuate around the nominal model with some degree $\alpha$. The corresponding Bellman equation in Equation \ref{eq:bellman_equation_dual} becomes $Q^{\pi}(s, a) = r(s, a) + \gamma \sum_{s'}\bar{\mathcal{P}}_{sa}(s') V^{\pi}(s') - \alpha\|V^{\pi}(\cdot)\|_2$. Similarly, the $L_1$ norm has also been used for the uncertainty set on transitions \citep{wang2021online}, i.e., $\mathbb{P}_{sa} = \{ \bar{\mathcal{P}}_{sa} + \alpha \tilde{\mathcal{P}}_{sa} : \| \tilde{\mathcal{P}}_{sa} \|_1 \le 1 \}$, and the Bellman equation takes the form $Q^{\pi}(s, a) = r(s, a) + \gamma \sum_{s'}\bar{\mathcal{P}}_{sa}(s') V^{\pi}(s') - \alpha \max_{s'}|V^{\pi}(s')|$. This robustness-regularization duality works well for finite state spaces, but is hard to extend directly to infinite state spaces, since both the $\|V^{\pi}(\cdot)\|_2$ and $\max_{s'}|V^{\pi}(s')|$ regularizers require computations on the infinite-dimensional vector $V^{\pi}(\cdot)$.
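For finite state spaces these penalties are available in closed form. The following tabular sketch (our own illustration, not code from the cited papers) shows value iteration with the $L_2$-regularized backup:
\begin{verbatim}
# Tabular robust value iteration with the L2-regularized backup
# Q = r + gamma * (P_bar V - alpha * ||V||_2); illustration only.
import numpy as np

def l2_robust_vi(P_bar, r, gamma=0.9, alpha=0.01, iters=500):
    # P_bar: (S, A, S) nominal transitions; r: (S, A) rewards.
    Q = np.zeros_like(r)
    for _ in range(iters):
        V = Q.max(axis=1)
        Q = r + gamma * (P_bar @ V - alpha * np.linalg.norm(V))
    return Q
\end{verbatim}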
\section{Uncertainty Set Regularized Robust Reinforcement Learning}
\label{sec:method}
Having introduced the robustness-regularization duality and the problems of extending it to continuous state spaces in Section \ref{sec:duality}, we will first present a novel extension to continuous state spaces with the uncertainty set defined on the parameter space of the transition function. We will then utilize this extension to derive a robust value iteration that can be directly plugged into existing RL algorithms. Furthermore, to deal with unknown uncertainty sets, we propose an adversarial uncertainty set and visualize it on a simple \textit{moving to target} task.
\subsection{Uncertainty Set Regularized Robust Value Iteration (USR-RVI) }
\label{sec:rvi}
For environments with continuous state spaces, the transition model $\mathcal{P}(s'|s,a)$ is usually represented as a parametric function $\mathcal{P}(s'|s,a;w)$, where $w$ denotes the parameters of the transition function. Instead of defining the uncertainty set on the distribution space, we directly impose a perturbation on $w$ within a set $\Omega_w$. Consequently, the robust objective function (Equation \ref{eq:robust_objective}) becomes $J^* = \max_{\pi \in \Pi} \min_{w \in \Omega_w} \mathbb{E}_{ \pi, \mathcal{P}(s'|s,a;w) } \left[ \sum_{t=0}^{+\infty} \gamma^t r(s_t, a_t) \right]$. We further assume the parameter $w$ fluctuates around a nominal parameter $\bar{w}$, such that $w=\bar{w} + \tilde{w}$, with $\bar{w}$ being a fixed parameter and $\tilde{w} \in \Omega_{\tilde{w}} =\{ w-\bar{w} | w \in \Omega_w\}$ being the perturbation part. Inspired by Equation \ref{eq:bellman_equation_dual}, we can derive a robust value iteration algorithm on the parametric space for continuous control problems as shown in Proposition \ref{prop:rvi}.
\begin{proposition}
Suppose the uncertainty set of $w$ is $\Omega_w$ (i.e., the uncertainty set of $\tilde{w}=w-\bar{w}$ is $\Omega_{\tilde{w}}$),
the robust value iteration on parametric space can be represented as follows:
\begin{equation}
\label{eq:rvi}
Q^{\pi}(s, a) = r(s, a) + \gamma \int_{s'} \mathcal{P}(s'|s,a; \bar{w}) V^{\pi}(s') ds' - \gamma \int_{s'} \delta^*_{\Omega_{\tilde{w}}} \left[-\nabla_{w}\mathcal{P}(s'|s,a; \bar{w}) V^{\pi}(s') \right] ds',
\end{equation}
where $\delta_{\Omega_w}(w)$ is the indicator function that equals $0$ if $w \in \Omega_w$ and $+\infty$ otherwise, and $\delta^*_{\Omega_w}(w')$ is the convex dual function of $\delta_{\Omega_w}(w)$.
\label{prop:rvi}
\end{proposition}
The complete derivation is given in Appendix \ref{sec:robust_value_iteration}. Intuitively, Proposition \ref{prop:rvi} shows that ensuring robustness on parameter $w$ can be transformed into an additional regularizer on value iteration that relates to the product of the state value function $V^{\pi}(s')$ and the derivative of the transition model $\nabla_{w}\mathcal{P}(s'|s,a;\bar{w})$.
Taking the $L_2$ uncertainty set (also used in \citet{derman2021twice}) as a special case, i.e., $\Omega_w = \{\bar{w} + \alpha \tilde{w} : \| \tilde{w} \|_2 \le 1 \}$, where $\bar{w}$ stands for the parameter of the nominal transition model $\mathcal{P}(s'|s,a;\bar{w})$, the robust value iteration in Proposition \ref{prop:rvi} becomes
\begin{equation}
Q^{\pi}(s, a) = r(s, a) + \gamma \int_{s'}\mathcal{P}(s'|s,a;\bar{w}) V^{\pi}(s')ds' - \alpha \int_{s'} \|\nabla_w\mathcal{P}(s'|s,a;\bar{w})V^{\pi}(s')\|_2 ds'.
\label{eq:l2_usr}
\end{equation}
Following the idea of Q-Learning \citep{suttonreinforcement}, a Bellman operator can be naturally derived from Equation \ref{eq:l2_usr} to learn a robust policy. The two integrals in Equation \ref{eq:l2_usr} can be calculated exactly or approximated by sampling methods. We will discuss this in detail in the following section.
\subsection{Uncertainty Set Regularized Robust Reinforcement Learning (USR-RRL)}
\label{sec:usr_rrl}
The USR-RVI proposed in the last section naturally applies to model-based RL, as model-based RL learns a point estimate of the transition model $\mathcal{P}(s'|s, a; \bar{w})$ by maximum likelihood approaches \citep{kegl2021modelbased}. Since calculating the derivatives in USR-RVI is computationally involved, we choose to construct a local transition model with only mean and covariance as parameters (cf. comments in Section \ref{sec:limitations}).
Specifically, given a triple (state $s$, action $a$, next state $x$) from the experience replay buffer, a local transition model $\mathcal{P}(s'|s,a;\bar{w})$ is constructed as a Gaussian distribution with mean $x$ and covariance $\Sigma$ (with $\Sigma$ being a hyperparameter); i.e., the nominal parameter $\bar{w}$ consists of $(x, \Sigma)$. With this local transition model, we have full knowledge of $\mathcal{P}(s'|s,a;\bar{w})$ and $\nabla_w\mathcal{P}(s'|s,a;\bar{w})$, which allows us to evaluate the RHS of Equation \ref{eq:l2_usr}.
To approximate the integrals in Equation \ref{eq:l2_usr}, we sample $M$ points $\{s_1, s_2, ..., s_M\}$ from the local transition model $\mathcal{P}(s'|s,a;\bar{w})$ and use them as a Monte Carlo estimate of the target action value function, $Q^{\pi}(s, a) \approx r(s, a) + \frac{\gamma}{M} \sum_{i=1}^M \left[ V^{\pi}(s_i) - \alpha \| \nabla_w\mathcal{P}(s_i|s,a;\bar{w})V^{\pi}(s_i)\|_2 / \mathcal{P}(s_i|s,a;\bar{w}) \right]$, where $\bar{w}=(x, \Sigma)$ and the division by $\mathcal{P}(s_i|s,a;\bar{w})$ is the importance weight for the second integral. With this approximated value iteration, the Bellman operator converges to the robust value of the current policy $\pi$. Policy improvement can then be applied to learn a more robust policy.
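The following sketch (our own illustration, assuming PyTorch; all names are ours) implements this estimator, using the log-derivative identity $\| \nabla_w\mathcal{P}\,V\|_2 / \mathcal{P} = |V|\,\|\nabla_w\log\mathcal{P}\|_2$ so that the gradient of the Gaussian density never has to be formed explicitly:
\begin{verbatim}
# Sketch (assumes PyTorch; all names are ours) of the sampled USR
# target with a Gaussian local model around the observed next state x.
# Uses ||grad_w P * V||_2 / P = |V| * ||grad_w log P||_2.
import torch

def usr_target(reward, x, value_fn, sigma=0.1, alpha=0.01,
               gamma=0.99, M=8):
    mean = x.clone().requires_grad_(True)      # nominal w = (x, Sigma)
    std = torch.full_like(x, sigma).requires_grad_(True)
    dist = torch.distributions.Normal(mean, std)
    target = reward
    for _ in range(M):
        s_i = dist.sample()                    # s_i ~ P(.|s,a;w_bar)
        v_i = value_fn(s_i).detach()
        log_p = dist.log_prob(s_i).sum()
        g = torch.autograd.grad(log_p, (mean, std))
        g_norm = torch.cat([t.flatten() for t in g]).norm()
        target = target + gamma / M * (v_i - alpha * v_i.abs() * g_norm)
    return target
\end{verbatim}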
\subsection{Adversarial uncertainty set}
\label{sec:adv}
\begin{wrapfigure}{r}{0.55\textwidth}
\begin{center}
\includegraphics[width=0.55\textwidth]{figures/adversarial_uncertainty_set.pdf}
\end{center}
\caption{Generation of adversarial uncertainty set.}
\label{fig:adv_uc}
\end{wrapfigure}
The method proposed in Section \ref{sec:usr_rrl} relies on prior knowledge of the uncertainty set in parameter space. The $L_p$ norm uncertainty set is the most widely used in the Robust RL and robust optimal control literature. However, such a fixed uncertainty set may not adapt well to various perturbation types: an $L_p$ ball that is too large can result in an over-conservative policy, while one that is too small may lead to a risky policy. In this section, we therefore learn an adversarial uncertainty set through the agent's value function.
The basic idea of the adversarial uncertainty set is to give a broader uncertainty range to parameters to which the value function is more sensitive, as measured by the derivative. An agent trained on such an adversarial uncertainty set can adapt more easily to various types of parameter perturbations. We generate the adversarial uncertainty set in 5 steps as illustrated in Figure \ref{fig:adv_uc}: (1) \textit{sample} the next state $s'$ according to the distribution $\mathcal{P}(\cdot|s,a;\bar{w})$, given the current state $s$ and action $a$; (2) \textit{forward} pass: calculate the state value $V(s')$ at the next state $s'$; (3) \textit{backward} pass: use the reparameterization trick \citep{kingma2014autoencoding} to compute the derivative $g(\bar{w}) = \nabla_{w}V(s';\bar{w})$; (4) \textit{normalize} the derivative, $d(\bar{w})= g(\bar{w})/[\sum_{i=1}^W g(\bar{w})_i^2]^{1/2}$; (5) \textit{generate} the adversarial uncertainty set $\Omega_w = \{ \bar{w} + \alpha \tilde{w} : \| \tilde{w} / d(\bar{w})\|_2 \le 1 \}$, where the division $\tilde{w} / d(\bar{w})$ is elementwise.
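These five steps can be written compactly as follows (a sketch assuming PyTorch and a reparameterized, differentiable transition model; variable names are ours):
\begin{verbatim}
# Sketch of the five steps in Figure 1 (assumes a differentiable,
# reparameterized transition model; names are ours).
import torch

def adversarial_scaling(w_bar, transition, value_fn, s, a):
    w = w_bar.clone().requires_grad_(True)
    s_next = transition(s, a, w)   # (1) sample s' (reparameterized)
    v = value_fn(s_next)           # (2) forward pass
    v.backward()                   # (3) backward pass: g = dV/dw
    d = w.grad / w.grad.norm()     # (4) normalize the derivative
    return d  # (5) Omega_w = {w_bar + a*w~ : ||w~ / d||_2 <= 1}
\end{verbatim}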
\subsection{Visualizing Adversarial Uncertainty Set}
\label{sec:vis_adv}
To further investigate the characteristics of the adversarial uncertainty set, we visualize it on a simple \textit{moving to target} task: controlling a particle to move towards a target point $e$ (Figure \ref{fig:uc_example}.a). The two-dimensional state $s$ gives the position of the agent, and the two-dimensional action $a=(a_1, a_2)$ (normalized so that $\|a\|_2=1$) controls the force in the two directions. The environmental parameter $w=(w_1, w_2)$ represents the contact friction in the two directions. The transition function is $s' \sim \mathcal{N}(s+(a_1w_1, a_2w_2), \Sigma)$, and the reward is the progress towards the target point minus a time cost: $r(s,a,s')=d(s,e)-d(s',e)-2$. The nominal value $\bar{w}=(1,1)$ (Figure \ref{fig:uc_example}.b) indicates equal friction factors in the two directions for the training environment. It is easy to see that the optimal action points towards the target, and the optimal value function is $V^*(s)=-d(s,e)$.
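For concreteness, the dynamics of this toy task can be sketched as follows (our own code; $\Sigma$ is taken as a small diagonal covariance):
\begin{verbatim}
# Sketch of the "moving to target" dynamics; names are ours.
import numpy as np

def step(s, a, w, e, rng, sigma=0.05):
    a = a / np.linalg.norm(a)              # normalize ||a||_2 = 1
    s_next = rng.normal(s + a * w, sigma)  # s' ~ N(s + (a1 w1, a2 w2), S)
    r = np.linalg.norm(s - e) - np.linalg.norm(s_next - e) - 2.0
    return s_next, r
\end{verbatim}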
We visualize the $L_2$, $L_1$ and adversarial uncertainty sets of the contact friction $w$ in Figure \ref{fig:uc_example}.(b,c,d) respectively, at the specific state $s=(4,3)$ with the optimal action $a^*=(-0.8, -0.6)$. Compared with the $L_2$ uncertainty set, the adversarial uncertainty set extends the perturbation range of the horizontal parameter, since the return is more sensitive to it. As a result, an agent trained to resist such an uncertainty set is expected to perform well on unseen perturbation types in general, which will be verified in the experiments of the next section.
\begin{figure*}[ht!]
\centering
\begin{subfigure}[b]{1.0\linewidth}
\centering \includegraphics[width=\textwidth]{figures/uc_example.pdf}
\end{subfigure}
\caption{An example of uncertainty set.}
\label{fig:uc_example}
\end{figure*}
\section{Experiments}
\label{sec:experiments}
In this section, we present experimental results on the Real-world Reinforcement Learning (RWRL) benchmark \citep{dulac-arnold2019challenges} to validate the effectiveness of the proposed USR in resisting perturbations of the environment.
\subsection{Experimental Setups}
\label{sec:exp_setup}
\paragraph{Task Description:} RWRL, whose back-end is the Mujoco physics engine \citep{todorov2012mujoco}, is a continuous control benchmark consisting of real-world challenges for RL algorithms (e.g., learning from limited samples, dealing with delays in system actuators, high-dimensional state and action spaces). Using this benchmark, we evaluate the robustness of the learned policy in physical environments with perturbations of the parameters of the state equations (dynamics). In more detail, we first train a policy through interaction with the nominal environment (i.e., the environment with no perturbations), and then test the policy in environments where relevant physical parameters are perturbed within a range.
In this paper, we conduct experiments on six tasks: \textit{cartpole\_balance}, \textit{cartpole\_swingup}, \textit{walker\_stand}, \textit{walker\_walk}, \textit{quadruped\_walk}, \textit{quadruped\_run}, with growing complexity of state and action spaces. More details on the task specifications are given in Appendix \ref{sec:rwrl}. The perturbed variables and their value ranges can be found in Table \ref{tab:rwrl_env} in the Appendix.
\paragraph{Evaluation metric:} A long-standing problem in Robust RL research is the lack of a standard metric for evaluating policy robustness. To address this, we define a new robustness evaluation metric, \textit{Robust-AUC}, as the area under the curve of the return with respect to a perturbed physical variable, in analogy to the regular AUC \citep{huang2005using}. More specifically, a trained policy $\pi$ is evaluated in environments where the perturbed variable $P$ takes values $v \in [v_{min}, v_{max}]$, running $N$ episodes per value to obtain the $\alpha$\%-quantile return $r$. These data are used to draw a parameter-return curve $C(v, r)$ describing the relationship between the returns $r$ and the perturbed values $v$. We define the relative area under this curve as Robust-AUC: $rv(\pi, P)=\frac{{\rm Area}(C(v, r))}{v_{max}-v_{min}},\ v \in [v_{min}, v_{max}]$. Compared to the vanilla AUC, Robust-AUC captures the correlation between returns and the perturbed physical variable, and thus sensitively reflects the response of a learned policy to unseen perturbations, i.e., its robustness. In our implementation, we set $N=100$ and $\alpha=10$ and uniformly collect $20$ values of $v$ to estimate the area under the curve.
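A minimal sketch of this metric (our own code; \verb+returns[i]+ holds the $N$ episode returns collected at perturbed value \verb+values[i]+):
\begin{verbatim}
# Sketch of Robust-AUC via the trapezoidal rule; names are ours.
import numpy as np

def robust_auc(values, returns, alpha=10):
    r = np.array([np.percentile(ep, alpha) for ep in returns])
    v = np.asarray(values)
    return np.trapz(r, v) / (v[-1] - v[0])
\end{verbatim}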
\paragraph{Baselines and Implementation of Proposed Methods:} We first compare USR with a standard version of Soft Actor Critic (SAC) \citep{haarnoja2018soft}, which represents the category of algorithms without regularizers (\textit{None Reg}). Another category of baselines directly imposes $L_p$ regularization on the parameters of the value function (\textit{L1 Reg}, \textit{L2 Reg}) \citep{liu2019regularization}, a common way to improve the generalization of function approximators that does not, however, take environmental perturbations into account. For the fixed uncertainty sets introduced in Section \ref{sec:usr_rrl}, we implement two types of uncertainty sets on transitions, \textit{L2 USR} and \textit{L1 USR}, which can be viewed as extensions of \citet{derman2021twice} and \citet{wang2021online} to continuous control tasks, respectively. Finally, we also evaluate the adversarial uncertainty set (Section \ref{sec:adv}), denoted as \textit{Adv USR}.
\paragraph{Training and Evaluation Protocol:} For a fair comparison, we use the same model specifications and hyperparameters for all algorithms in model-free and model-based RL respectively, except for the regularizer and its coefficient; the best coefficient for each setting is selected through validation. Detailed model specifications and hyperparameters are given in Appendix \ref{sec:hyperparameters}. Each algorithm is trained with 5 random seeds. During the test phase, for each environmental variable we uniformly sample 20 perturbed values $v$ in the range $[v_{min}, v_{max}]$. For each value $v$, we run 20 episodes per seed (5 seeds, 100 episodes in total) and take the $10\%$-quantile as the return $r$ for computing the Robust-AUC defined above.
\subsection{Main Results}
\label{sec:main_results}
We show the parameter-return curves for all algorithms in Figure \ref{fig:mfrl} and their Robust-AUC in Table \ref{tab:mfrl}. Due to page limits, the results for the other tasks are presented in the Appendix (Figure \ref{fig:mfrl_extra} and Table \ref{tab:mfrl_extra}). In addition to computing Robust-AUC under different perturbations, we also rank all algorithms and report the average rank as an overall robustness score for each task. Notably, \textit{L1 Reg} and \textit{L2 Reg} do not improve robustness, and even impair performance compared with the non-regularized agent on the simpler domains (\textit{cartpole} and \textit{walker}). In contrast, we observe that both \textit{L2 USR} and \textit{L1 USR} can outperform the default version under certain perturbations (e.g., \textit{L1 USR} in \textit{cartpole\_swingup} for pole\_length, \textit{L2 USR} in \textit{walker\_stand} for thigh\_length); they are, however, not effective in all scenarios. We argue that the underlying reason could be that a fixed shape of the uncertainty set cannot adapt to all perturbation cases. This is supported by the fact that \textit{Adv USR} achieves the best average rank over all perturbed scenarios, showing the best zero-shot generalization performance in continuous control tasks.
\begin{figure*}[ht!]
\centering
\begin{subfigure}[b]{1.0\linewidth}
\centering
\includegraphics[width=\textwidth]{figures/cartpole_swingup.pdf}
\caption{cartpole\_swingup}
\label{fig:cartpole_swingup}
\end{subfigure}
~
\begin{subfigure}[b]{1.0\linewidth}
\centering \includegraphics[width=\textwidth]{figures/walker_stand.pdf}
\caption{walker\_stand}
\label{fig:walker_stand}
\end{subfigure}
~
\begin{subfigure}[b]{1.0\linewidth}
\centering \includegraphics[width=\textwidth]{figures/quadruped_walk.pdf}
\caption{quadruped\_walk}
\label{fig:quadruped_walk}
\end{subfigure}
\caption{The parameter-return curve of model-free RL algorithms. All graphs are plotted with the $10\%$-quantile and 5\%-15\%-quantile shading. The vertical dashed line denotes the nominal value that all algorithms train on. Robust-AUC is illustrated after each label in the graph.}
\label{fig:mfrl}
\end{figure*}
\begin{table}[ht]
\caption{Robust-AUC of Model-free RL algorithms.}
\begin{center}
\scalebox{0.7}{
\begin{tabular}{llcccccc}
\toprule
\multirow{2}{*}{\textbf{Task Name}} &
\multirow{2}{*}{\textbf{Variables}} &
\multicolumn{6}{c}{\textbf{Algorithms}} \\
&& \textit{None Reg} & \textit{L1 Reg} & \textit{L2 Reg} & \textit{L1 USR} & \textit{L2 USR} & \textit{Adv USR} \\
\midrule \multirow{5}{*}{cartpole\_swingup}
& pole\_length & 393.41 & 319.73 & 368.16 & \textbf{444.93} & 424.48 & 430.91 \\
& pole\_mass & 155.25 & 96.85 & 131.35 & 175.28 & 159.61 & \textbf{193.13} \\
& joint\_damping & 137.20 & 140.16 & 165.01 & 164.21 & 169.88 & \textbf{170.39} \\
& slider\_damping & 783.76 & 775.73 & 797.59 & 793.55 & 781.02 & \textbf{819.32} \\
& average rank & 4.5 & 5.75 & 3.75 & 2.5 & 3.25 & \textbf{1.25} \\
\midrule \multirow{5}{*}{walker\_stand}
& thigh\_length & 487.02 & 461.95 & 497.38 & 488.71 & \textbf{511.16} & 505.88 \\
& torso\_length & 614.06 & 586.16 & 586.20 & 598.02 & 610.93 & \textbf{623.56} \\
& joint\_damping & \textbf{607.24} & 387.89 & 443.82 & 389.77 & 527.87 & 514.77 \\
& contact\_friction & 946.74 & \textbf{947.24} & 941.92 & 943.11 & 940.73 & 945.69 \\
& average rank & 2.50 & 4.75 & 4.25 & 4.25 & 3.00 & \textbf{2.25} \\
\midrule \multirow{5}{*}{quadruped\_walk}
& shin\_length & 492.55 & 406.77 & 503.13 & 540.39 & 564.60 & \textbf{571.85} \\
& torso\_density & 471.45 & 600.86 & 526.22 & 442.05 & 472.80 & \textbf{602.09} \\
& joint\_damping & 675.95 & 711.54 & \textbf{794.56} & 762.50 & 658.17 & 785.11 \\
& contact\_friction & 683.80 & 906.92 & 770.44 & 777.40 & 767.80 & \textbf{969.73} \\
& average rank & 5.25 & 3.50 & 3.00 & 3.75 & 4.25 & \textbf{1.25} \\
\bottomrule
\end{tabular}
}
\end{center}
\label{tab:mfrl}
\end{table}
\section{Related Work}
\label{sec:related_work}
Robust Reinforcement Learning (Robust RL) has recently become a popular topic \citep{iyengar2005robust, dulac-arnold2019challenges, kirk2022survey, moos2022robust}, due to its effectiveness in tackling perturbations. Besides the transition perturbations considered in this paper, there are other branches relating to action, state and reward perturbations, which we briefly discuss in the following paragraphs. Additionally, we discuss the relation of Robust RL to sim-to-real and Bayesian RL approaches, which are also important topics in robot learning.
\paragraph{Action Perturbation:} Early works in Robust RL concentrated on action space perturbations. \citet{pinto2017robust} first proposed an adversarial agent perturbing the action of the principal agent, training both alternately in a mini-max fashion. \citet{tessler2019action} later performed action perturbations with probability $\alpha$ to simulate abrupt interruptions in the real world. Afterwards, \citet{kamalaruban2020robust} analyzed this mini-max problem from a game-theoretic perspective and showed that an adversary with a mixed strategy converges to a mixed Nash equilibrium. Similarly, \citet{vinitsky2020robust} employed multiple adversarial agents to augment robustness, which can also be interpreted as a mixed strategy.
\paragraph{State Perturbation:} State perturbation changes the state from $s$ to $s_p$, and can thus mislead the agent's policy $\pi(a|s)$ \citep{pattanaik2017robust}. \citet{zhang2021robust, oikarinen2021robust} both assume an $L_p$-norm uncertainty set on the state space (inspired by adversarial attacks widely used in computer vision \citep{szegedy2014intriguing}) and propose an auxiliary loss that encourages the policy to resist such attacks. It is worth noting that state perturbation is a special case of transition perturbation: state perturbations with a fixed transition can be absorbed into the transition model, i.e., $\mathcal{P}(s'_p|s,a;\bar{w})=\mathcal{P}(s'|s,a;w)$, which is covered by the framework proposed in this paper.
\paragraph{Reward Perturbation:} The robustness-regularization duality has been widely studied for reward perturbations \citep{husain2021regularized, eysenbach2021maximum, brekelmans2022your}. One reason is that a policy regularizer corresponds closely to a perturbation of the reward function without requiring a rectangular uncertainty assumption. This, however, restricts the scope of these works, since reward perturbation can be shown to be a particular case of transition perturbation by augmenting the state with the reward value \citep{eysenbach2021maximum}. Moreover, the majority of these works analyze the robustness induced by existing regularizers, rather than deriving novel regularizers for robustness as we do here.
\paragraph{Sim-to-Real:} Sim-to-real is a key research topic in robot learning. Compared to the Robust RL problem, it aims to learn a robust policy in simulation that generalizes to real-world environments. Domain randomization is a common approach to ease this mismatch \citep{peng2018sim, openai2019learning}. However, \citet{mankowitz2019robust} demonstrated that it actually optimizes the average case of the environment rather than the worst case (as considered in our work), and may thus fail to perform robustly during testing. More recent active domain randomization methods \citep{mehta2020active} address this flaw by automatically selecting difficult environments during training. The idea of learning an adversarial uncertainty set considered in this paper can be seen as a strategy to actively search for more valuable environments for training.
\paragraph{Bayesian RL:} One commonality between Bayesian RL and Robust RL is that both maintain uncertainty over the environmental parameters (a posterior distribution $q(w)$ in Bayesian RL, an uncertainty set $\Omega_w$ in Robust RL). Uncertainty learned in Bayesian RL can benefit Robust RL in two ways: (1) Robust RL can define an uncertainty set $\Omega_w = \{w: q(w) > \alpha \}$ to learn a robust policy that tolerates model errors, which is attractive for offline RL and model-based RL; (2) a soft robust objective with respect to the distribution $q(w)$ can ease the conservative behaviors caused by the worst-case scenario \citep{dermansoftrobust}.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we adopt the robustness-regularization duality to design new regularizers for continuous control problems that improve the robustness and generalization of RL algorithms. Furthermore, to deal with unknown uncertainty sets, we design an adversarial uncertainty set based on the learned value function and incorporate it into a new regularizer. While further validation on real-world systems is necessary, the proposed method already shows great promise regarding generalization and robustness under environmental perturbations unseen during training, which makes it a valuable addition to RL for robot learning.
\section{Limitations}
\label{sec:limitations}
The main limitation of the proposed method is possibly the computational efficiency of the uncertainty set regularizer: since it depends on derivatives with respect to the model parameters, it is expensive to compute for environments with a large number of parameters. A workable remedy is to detect the important variables automatically and apply the regularization only to those dimensions. Besides, while USR performs well on complex simulated environments in this paper, it should be tested further on real-world robots in the future.
\clearpage
\acknowledgments{This research received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 953348 (ELO-X). The authors thank Jasper Hoffman and Baohe Zhang for the inspiring discussions.}
\section{Submission of papers to NeurIPS 2022}
Please read the instructions below carefully and follow them faithfully.
\subsection{Style}
Papers to be submitted to NeurIPS 2022 must be prepared according to the
instructions presented here. Papers may only be up to {\bf nine} pages long,
including figures. Additional pages \emph{containing only acknowledgments and
references} are allowed. Papers that exceed the page limit will not be
reviewed, or in any other way considered for presentation at the conference.
The margins in 2022 are the same as those in 2007, which allow for $\sim$$15\%$
more words in the paper compared to earlier years.
Authors are required to use the NeurIPS \LaTeX{} style files obtainable at the
NeurIPS website as indicated below. Please make sure you use the current files
and not previous versions. Tweaking the style files may be grounds for
rejection.
\subsection{Retrieval of style files}
The style files for NeurIPS and other conference information are available on
the World Wide Web at
\begin{center}
\url{http://www.neurips.cc/}
\end{center}
The file \verb+neurips_2022.pdf+ contains these instructions and illustrates the
various formatting requirements your NeurIPS paper must satisfy.
The only supported style file for NeurIPS 2022 is \verb+neurips_2022.sty+,
rewritten for \LaTeXe{}. \textbf{Previous style files for \LaTeX{} 2.09,
Microsoft Word, and RTF are no longer supported!}
The \LaTeX{} style file contains three optional arguments: \verb+final+, which
creates a camera-ready copy, \verb+preprint+, which creates a preprint for
submission to, e.g., arXiv, and \verb+nonatbib+, which will not load the
\verb+natbib+ package for you in case of package clash.
\paragraph{Preprint option}
If you wish to post a preprint of your work online, e.g., on arXiv, using the
NeurIPS style, please use the \verb+preprint+ option. This will create a
nonanonymized version of your work with the text ``Preprint. Work in progress.''
in the footer. This version may be distributed as you see fit. Please \textbf{do
not} use the \verb+final+ option, which should \textbf{only} be used for
papers accepted to NeurIPS.
At submission time, please omit the \verb+final+ and \verb+preprint+
options. This will anonymize your submission and add line numbers to aid
review. Please do \emph{not} refer to these line numbers in your paper as they
will be removed during generation of camera-ready copies.
The file \verb+neurips_2022.tex+ may be used as a ``shell'' for writing your
paper. All you have to do is replace the author, title, abstract, and text of
the paper with your own.
The formatting instructions contained in these style files are summarized in
Sections \ref{gen_inst}, \ref{headings}, and \ref{others} below.
\section{General formatting instructions}
\label{gen_inst}
The text must be confined within a rectangle 5.5~inches (33~picas) wide and
9~inches (54~picas) long. The left margin is 1.5~inch (9~picas). Use 10~point
type with a vertical spacing (leading) of 11~points. Times New Roman is the
preferred typeface throughout, and will be selected for you by default.
Paragraphs are separated by \nicefrac{1}{2}~line space (5.5 points), with no
indentation.
The paper title should be 17~point, initial caps/lower case, bold, centered
between two horizontal rules. The top rule should be 4~points thick and the
bottom rule should be 1~point thick. Allow \nicefrac{1}{4}~inch space above and
below the title to rules. All pages should start at 1~inch (6~picas) from the
top of the page.
For the final version, authors' names are set in boldface, and each name is
centered above the corresponding address. The lead author's name is to be listed
first (left-most), and the co-authors' names (if different address) are set to
follow. If there is only one co-author, list both author and co-author side by
side.
Please pay special attention to the instructions in Section \ref{others}
regarding figures, tables, acknowledgments, and references.
\section{Headings: first level}
\label{headings}
All headings should be lower case (except for first word and proper nouns),
flush left, and bold.
First-level headings should be in 12-point type.
\subsection{Headings: second level}
Second-level headings should be in 10-point type.
\subsubsection{Headings: third level}
Third-level headings should be in 10-point type.
\paragraph{Paragraphs}
There is also a \verb+\paragraph+ command available, which sets the heading in
bold, flush left, and inline with the text, with the heading followed by 1\,em
of space.
\section{Citations, figures, tables, references}
\label{others}
These instructions apply to everyone.
\subsection{Citations within the text}
The \verb+natbib+ package will be loaded for you by default. Citations may be
author/year or numeric, as long as you maintain internal consistency. As to the
format of the references themselves, any style is acceptable as long as it is
used consistently.
The documentation for \verb+natbib+ may be found at
\begin{center}
\url{http://mirrors.ctan.org/macros/latex/contrib/natbib/natnotes.pdf}
\end{center}
Of note is the command \verb+\citet+, which produces citations appropriate for
use in inline text. For example,
\begin{verbatim}
\citet{hasselmo} investigated\dots
\end{verbatim}
produces
\begin{quote}
Hasselmo, et al.\ (1995) investigated\dots
\end{quote}
If you wish to load the \verb+natbib+ package with options, you may add the
following before loading the \verb+neurips_2022+ package:
\begin{verbatim}
\PassOptionsToPackage{options}{natbib}
\end{verbatim}
If \verb+natbib+ clashes with another package you load, you can add the optional
argument \verb+nonatbib+ when loading the style file:
\begin{verbatim}
\usepackage[nonatbib]{neurips_2022}
\end{verbatim}
As submission is double blind, refer to your own published work in the third
person. That is, use ``In the previous work of Jones et al.\ [4],'' not ``In our
previous work [4].'' If you cite your other papers that are not widely available
(e.g., a journal paper under review), use anonymous author names in the
citation, e.g., an author of the form ``A.\ Anonymous.''
\subsection{Footnotes}
Footnotes should be used sparingly. If you do require a footnote, indicate
footnotes with a number\footnote{Sample of the first footnote.} in the
text. Place the footnotes at the bottom of the page on which they appear.
Precede the footnote with a horizontal rule of 2~inches (12~picas).
Note that footnotes are properly typeset \emph{after} punctuation
marks.\footnote{As in this example.}
\subsection{Figures}
\begin{figure}
\centering
\fbox{\rule[-.5cm]{0cm}{4cm} \rule[-.5cm]{4cm}{0cm}}
\caption{Sample figure caption.}
\end{figure}
All artwork must be neat, clean, and legible. Lines should be dark enough for
purposes of reproduction. The figure number and caption always appear after the
figure. Place one line space before the figure caption and one line space after
the figure. The figure caption should be lower case (except for first word and
proper nouns); figures are numbered consecutively.
You may use color figures. However, it is best for the figure captions and the
paper body to be legible if the paper is printed in either black/white or in
color.
\subsection{Tables}
All tables must be centered, neat, clean and legible. The table number and
title always appear before the table. See Table~\ref{sample-table}.
Place one line space before the table title, one line space after the
table title, and one line space after the table. The table title must
be lower case (except for first word and proper nouns); tables are
numbered consecutively.
Note that publication-quality tables \emph{do not contain vertical rules.} We
strongly suggest the use of the \verb+booktabs+ package, which allows for
typesetting high-quality, professional tables:
\begin{center}
\url{https://www.ctan.org/pkg/booktabs}
\end{center}
This package was used to typeset Table~\ref{sample-table}.
\begin{table}
\caption{Sample table title}
\label{sample-table}
\centering
\begin{tabular}{lll}
\toprule
\multicolumn{2}{c}{Part} \\
\cmidrule(r){1-2}
Name & Description & Size ($\mu$m) \\
\midrule
Dendrite & Input terminal & $\sim$100 \\
Axon & Output terminal & $\sim$10 \\
Soma & Cell body & up to $10^6$ \\
\bottomrule
\end{tabular}
\end{table}
\section{Final instructions}
Do not change any aspects of the formatting parameters in the style files. In
particular, do not modify the width or length of the rectangle the text should
fit into, and do not change font sizes (except perhaps in the
\textbf{References} section; see below). Please note that pages should be
numbered.
\section{Preparing PDF files}
Please prepare submission files with paper size ``US Letter,'' and not, for
example, ``A4.''
Fonts were the main cause of problems in the past years. Your PDF file must only
contain Type 1 or Embedded TrueType fonts. Here are a few instructions to
achieve this.
\begin{itemize}
\item You should directly generate PDF files using \verb+pdflatex+.
\item You can check which fonts a PDF file uses. In Acrobat Reader, select the
menu Files$>$Document Properties$>$Fonts and select Show All Fonts. You can
also use the program \verb+pdffonts+ which comes with \verb+xpdf+ and is
available out-of-the-box on most Linux machines.
\item The IEEE has recommendations for generating PDF files whose fonts are also
acceptable for NeurIPS. Please see
\url{http://www.emfield.org/icuwb2010/downloads/IEEE-PDF-SpecV32.pdf}
\item \verb+xfig+ "patterned" shapes are implemented with bitmap fonts. Use
"solid" shapes instead.
\item The \verb+\bbold+ package almost always uses bitmap fonts. You should use
the equivalent AMS Fonts:
\begin{verbatim}
\usepackage{amsfonts}
\end{verbatim}
followed by, e.g., \verb+\mathbb{R}+, \verb+\mathbb{N}+, or \verb+\mathbb{C}+
for $\mathbb{R}$, $\mathbb{N}$ or $\mathbb{C}$. You can also use the following
workaround for reals, natural and complex:
\begin{verbatim}
\newcommand{\RR}{I\!\!R}
\newcommand{\Nat}{I\!\!N}
\newcommand{\CC}{I\!\!\!\!C}
\end{verbatim}
Note that \verb+amsfonts+ is automatically loaded by the \verb+amssymb+ package.
\end{itemize}
If your file contains Type 3 fonts or non-embedded TrueType fonts, we will ask
you to fix it.
\subsection{Margins in \LaTeX{}}
Most of the margin problems come from figures positioned by hand using
\verb+\special+ or other commands. We suggest using the command
\verb+\includegraphics+ from the \verb+graphicx+ package. Always specify the
figure width as a multiple of the line width as in the example below:
\begin{verbatim}
\usepackage[pdftex]{graphicx} ...
\includegraphics[width=0.8\linewidth]{myfile.pdf}
\end{verbatim}
See Section 4.4 in the graphics bundle documentation
(\url{http://mirrors.ctan.org/macros/latex/required/graphics/grfguide.pdf}).
A number of width problems arise when \LaTeX{} cannot properly hyphenate a
line. Please give \LaTeX{} hyphenation hints using the \verb+\-+ command when
necessary.
\begin{ack}
Use unnumbered first level headings for the acknowledgments. All acknowledgments
go at the end of the paper before the list of references. Moreover, you are required to declare
funding (financial activities supporting the submitted work) and competing interests (related financial activities outside the submitted work).
More information about this disclosure can be found at: \url{https://neurips.cc/Conferences/2022/PaperInformation/FundingDisclosure}.
Do {\bf not} include this section in the anonymized submission, only in the final paper. You can use the \texttt{ack} environment provided in the style file to automatically hide this section in the anonymized submission.
\end{ack}
\section*{References}
References follow the acknowledgments. Use unnumbered first-level heading for
the references. Any choice of citation style is acceptable as long as you are
consistent. It is permissible to reduce the font size to \verb+small+ (9 point)
when listing the references.
Note that the Reference section does not count towards the page limit.
\medskip
{
\small
[1] Alexander, J.A.\ \& Mozer, M.C.\ (1995) Template-based algorithms for
connectionist rule extraction. In G.\ Tesauro, D.S.\ Touretzky and T.K.\ Leen
(eds.), {\it Advances in Neural Information Processing Systems 7},
pp.\ 609--616. Cambridge, MA: MIT Press.
[2] Bower, J.M.\ \& Beeman, D.\ (1995) {\it The Book of GENESIS: Exploring
Realistic Neural Models with the GEneral NEural SImulation System.} New York:
TELOS/Springer--Verlag.
[3] Hasselmo, M.E., Schnell, E.\ \& Barkai, E.\ (1995) Dynamics of learning and
recall at excitatory recurrent synapses and cholinergic modulation in rat
hippocampal region CA3. {\it Journal of Neuroscience} {\bf 15}(7):5249-5262.
}
\section*{Checklist}
The checklist follows the references. Please
read the checklist guidelines carefully for information on how to answer these
questions. For each question, change the default \answerTODO{} to \answerYes{},
\answerNo{}, or \answerNA{}. You are strongly encouraged to include a {\bf
justification to your answer}, either by referencing the appropriate section of
your paper or providing a brief inline description. For example:
\begin{itemize}
\item Did you include the license to the code and datasets? \answerYes{See Section~\ref{gen_inst}.}
\item Did you include the license to the code and datasets? \answerNo{The code and the data are proprietary.}
\item Did you include the license to the code and datasets? \answerNA{}
\end{itemize}
Please do not modify the questions and only use the provided macros for your
answers. Note that the Checklist section does not count towards the page
limit. In your paper, please delete this instructions block and only keep the
Checklist section heading above along with the questions/answers below.
\begin{enumerate}
\item For all authors...
\begin{enumerate}
\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
\answerTODO{}
\item Did you describe the limitations of your work?
\answerTODO{}
\item Did you discuss any potential negative societal impacts of your work?
\answerTODO{}
\item Have you read the ethics review guidelines and ensured that your paper conforms to them?
\answerTODO{}
\end{enumerate}
\item If you are including theoretical results...
\begin{enumerate}
\item Did you state the full set of assumptions of all theoretical results?
\answerTODO{}
\item Did you include complete proofs of all theoretical results?
\answerTODO{}
\end{enumerate}
\item If you ran experiments...
\begin{enumerate}
\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
\answerTODO{}
\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
\answerTODO{}
\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
\answerTODO{}
\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
\answerTODO{}
\end{enumerate}
\item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
\begin{enumerate}
\item If your work uses existing assets, did you cite the creators?
\answerTODO{}
\item Did you mention the license of the assets?
\answerTODO{}
\item Did you include any new assets either in the supplemental material or as a URL?
\answerTODO{}
\item Did you discuss whether and how consent was obtained from people whose data you're using/curating?
\answerTODO{}
\item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
\answerTODO{}
\end{enumerate}
\item If you used crowdsourcing or conducted research with human subjects...
\begin{enumerate}
\item Did you include the full text of instructions given to participants and screenshots, if applicable?
\answerTODO{}
\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
\answerTODO{}
\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
\answerTODO{}
\end{enumerate}
\end{enumerate}
\section{Introduction}
The convergence process of a neural network is heavily influenced by which optimizer is used during training.
An optimizer that can reach convergence quickly enables the model to achieve better generalization in fewer parameter updates, thereby reducing computational costs and allowing for longer and more complex experiments.
Modern optimizers, such as Adam \cite{kingma2014adam}, adjust the learning rate of parameters in order to increase the speed of convergence.
Adam does this by taking into consideration the exponential running mean of the previous gradients, maintaining a separate learning rate for each weight.
The inverse relation between batch size and learning rate has been shown before by~\cite{smith2017don}.
They demonstrate that increasing the batch size leads to a similar result as decreasing the learning rate.
Using a higher batch size comes with the benefit of better hardware utilization and enhanced parallelization.
Moreover, fewer update steps are required to achieve convergence, and the results are similar to those obtained by adjusting the learning rate.
Since it has been shown that one can keep the learning rate constant and modify the batch size instead, several studies have used this property to develop new training techniques.
These previous works have mainly focused on changing the batch size using schedulers \cite{devarakonda2017adabatch,khan2020adadiffgrad} or by analyzing data from the training history \cite{alfarraadaptive}.
Our proposed solution is to dynamically select which gradients are used to calculate the update for each layer, while also providing a method of adjusting the batch size at the end of each epoch. These gradients are chosen from the current batch which was forward passed through the model.
Weight updates are characterized by having two important components, their direction and magnitude.
For a fixed batch, changing the learning rate adjusts only the magnitude of the update; for a fixed learning rate, however, modifying which samples are included in the batch allows us to influence both the magnitude and the direction.
Previous works that changed the batch size to adjust the magnitude of the update gradient did not explore the possibility of adjusting its direction, which we consider to be more beneficial to the neural network's training process.
Therefore, our approach consists of selecting the sample gradients that are to be included in the update gradient in order to improve both its direction and magnitude.
The adjustment to the batch size is done in order to allow a qualitative selection process while also not wasting computational resources if only a small fraction of the sample gradients from each batch are included in the parameter update.
In Section \ref{other-works} we present the research done in other works which study the effect of batch size upon the convergence of the network.
In Section \ref{our-work} we introduce our optimization algorithm which selects the samples that are to be included in an update and modifies the batch size.
Next, in Section \ref{experiments} we discuss the experiments and the results obtained with our optimization algorithm, in Section \ref{limitations} we address limitations, and finally we conclude in Section \ref{conclusion}.
\section{Related work}
\label{other-works}
Improving model convergence by adjusting learning rates or adapting the batch size has been attempted before.
Previous works in literature studied the effect of the batch size on the convergence process of neural networks and possible ways in which the training speed can be improved with regards to batch size.
In \cite{li2014efficient} the authors analyze whether the convergence rate decreases with the increase in batch size and show that bigger batch sizes do not hinder convergence.
Since \cite{smith2017don} showed that similar results can be obtained by both approaches, dynamic adaptation of batches has garnered attention since in addition to model convergence, it also has the potential to address issues related to hardware utilization and computation parallelism.
Recent works by \cite{devarakonda2017adabatch} and \cite{khan2020adadiffgrad} propose schedulers that increase the batch size at set points during training, analogous to the popular technique of learning rate scheduling.
These methods bring the benefit of better convergence and hardware utilization but require an initial exploration to identify suitable milestones.
Alternatively, \cite{lederrey2021estimation} suggests increasing the batch size when the model hits a plateau which is less sensitive to hyperparameter choices but has the potential of waiting too long before making a change.
Other authors proposed that the batch size should be increased when a certain criterion is fulfilled, namely a reduction in the loss of the model \cite{liu2019accelerate} or based on the variance of the gradients for the current epoch \cite{balles2016coupling}, while \cite{gao2020balancing} suggests updating the batch size by a static amount when the convergence rate decreases below a certain threshold.
In \cite{alfarraadaptive}, authors propose a practical algorithm to approximate an optimal batch size by storing information regarding previously used gradients in order to decide a batch size for the current iteration.
Compared with previous research, our proposed method changes not only the batch size but also its composition, by using only a subset of the samples from the current batch to perform the update of the weights.
Another important distinction of our work is that we perform the sample selection separately for each layer of the network.
\section{Dynamic Batch Adaptation}
\label{our-work}
\begin{figure}
\centering
\begin{tikzpicture}[
node distance = 4mm,
MTRX/.style = {matrix of nodes,
nodes in empty cells,
nodes={draw, minimum size=7.0mm, anchor=center,
inner sep=0pt, outer sep=0pt},
column sep=-\pgflinewidth,
row sep=2mm},
MTRXSmall/.style = {matrix of nodes,font=\scriptsize,
nodes={draw, minimum size=4.5mm, anchor=center,
inner sep=0pt, outer sep=0pt},
column sep=-\pgflinewidth,
row sep=2mm},
]
\node[text width=1.5cm] at (-5.5,-1) {\small Candidate Gradients};
\matrix (m1) [MTRX,
row 1/.append,
label=above: Batch,
]
{
|[draw,fill={rgb,255:red,147; green,196; blue,125}]|$\boldsymbol{x_1}$ & |[draw,fill={rgb,255:red,147; green,196; blue,125}]|$\boldsymbol{x_2}$ & |[draw,fill={rgb,255:red,147; green,196; blue,125}]|$\boldsymbol{x_3}$ & |[draw,fill={rgb,255:red,147; green,196; blue,125}]|$\boldsymbol{x_4}$ & |[draw,fill={rgb,255:red,247; green,179; blue,107}]|$\boldsymbol{x_5}$ & |[draw,fill={rgb,255:red,247; green,179; blue,107}]|$\boldsymbol{x_6}$ & |[draw,fill={rgb,255:red,247; green,179; blue,107}]|$\boldsymbol{x_7}$ & |[draw,fill={rgb,255:red,247; green,179; blue,107}]|$\boldsymbol{x_8}$ & |[draw,fill={rgb,255:red,109; green,158; blue,235}]|$\boldsymbol{x_9}$ & |[draw,fill={rgb,255:red,109; green,158; blue,235}]|$\boldsymbol{x_{10}}$ & |[draw,fill={rgb,255:red,109; green,158; blue,235}]|$\boldsymbol{x_{11}}$ & |[draw,fill={rgb,255:red,109; green,158; blue,235}]|$\boldsymbol{x_{12}}$ & |[draw,fill={rgb,255:red,225; green,103; blue,102}]|$\boldsymbol{x_{13}}$ & |[draw,fill={rgb,255:red,225; green,103; blue,102}]|$\boldsymbol{x_{14}}$ & |[draw,fill={rgb,255:red,225; green,103; blue,102}]|$\boldsymbol{x_{15}}$ & |[draw,fill={rgb,255:red,225; green,103; blue,102}]|$\boldsymbol{x_{16}}$\\
};
\matrix (m2) [MTRXSmall, below left=1.0cm and -2.74cm of m1
]
{
|[draw,fill={rgb,255:red,147; green,196; blue,125}]|$\boldsymbol{x_1}$ & |[draw,fill={rgb,255:red,147; green,196; blue,125}]|$\boldsymbol{x_2}$ & |[draw,fill={rgb,255:red,147; green,196; blue,125}]|$\boldsymbol{x_3}$ & |[draw,fill={rgb,255:red,147; green,196; blue,125}]|$\boldsymbol{x_4}$ &
|[draw,fill={rgb,255:red,109; green,158; blue,235}]|$\boldsymbol{x_9}$ & |[draw,fill={rgb,255:red,109; green,158; blue,235}]|$\boldsymbol{x_{10}}$ & |[draw,fill={rgb,255:red,109; green,158; blue,235}]|$\boldsymbol{x_{11}}$ & |[draw,fill={rgb,255:red,109; green,158; blue,235}]|$\boldsymbol{x_{12}}$ \\
};
\matrix (m3) [MTRXSmall, right=of m2
]
{
|[draw,fill={rgb,255:red,247; green,179; blue,107}]|$\boldsymbol{x_5}$ & |[draw,fill={rgb,255:red,247; green,179; blue,107}]|$\boldsymbol{x_6}$ & |[draw,fill={rgb,255:red,247; green,179; blue,107}]|$\boldsymbol{x_7}$ & |[draw,fill={rgb,255:red,247; green,179; blue,107}]|$\boldsymbol{x_8}$ & |[draw,fill={rgb,255:red,109; green,158; blue,235}]|$\boldsymbol{x_9}$ & |[draw,fill={rgb,255:red,109; green,158; blue,235}]|$\boldsymbol{x_{10}}$ & |[draw,fill={rgb,255:red,109; green,158; blue,235}]|$\boldsymbol{x_{11}}$ & |[draw,fill={rgb,255:red,109; green,158; blue,235}]|$\boldsymbol{x_{12}}$ & |[draw,fill={rgb,255:red,225; green,103; blue,102}]|$\boldsymbol{x_{13}}$ & |[draw,fill={rgb,255:red,225; green,103; blue,102}]|$\boldsymbol{x_{14}}$ & |[draw,fill={rgb,255:red,225; green,103; blue,102}]|$\boldsymbol{x_{15}}$ & |[draw,fill={rgb,255:red,225; green,103; blue,102}]|$\boldsymbol{x_{16}}$\\
};
\matrix (m4) [MTRXSmall, right=0.75cm of m3
]
{
|[draw,fill={rgb,255:red,147; green,196; blue,125}]|$\boldsymbol{x_1}$ & |[draw,fill={rgb,255:red,147; green,196; blue,125}]|$\boldsymbol{x_2}$ & |[draw,fill={rgb,255:red,147; green,196; blue,125}]|$\boldsymbol{x_3}$ & |[draw,fill={rgb,255:red,147; green,196; blue,125}]|$\boldsymbol{x_4}$ \\
};
\matrix (m5) [MTRXSmall, below=3.0cm of m1
]
{
|[draw,fill={rgb,255:red,147; green,196; blue,125}]|$\boldsymbol{x_1}$ & |[draw,fill={rgb,255:red,147; green,196; blue,125}]|$\boldsymbol{x_2}$ & |[draw,fill={rgb,255:red,147; green,196; blue,125}]|$\boldsymbol{x_3}$ & |[draw,fill={rgb,255:red,147; green,196; blue,125}]|$\boldsymbol{x_4}$ & |[draw,fill={rgb,255:red,225; green,103; blue,102}]|$\boldsymbol{x_{13}}$ & |[draw,fill={rgb,255:red,225; green,103; blue,102}]|$\boldsymbol{x_{14}}$ & |[draw,fill={rgb,255:red,225; green,103; blue,102}]|$\boldsymbol{x_{15}}$ & |[draw,fill={rgb,255:red,225; green,103; blue,102}]|$\boldsymbol{x_{16}}$\\
};
\draw[->, -{Implies[]}, very thick] ($(m3.south) - (0.82em, 0)$) -- node [text width=3.5cm,midway,left ] {\small Select best candidate that minimizes the margin loss} ($(m5.north) - (0, 0)$) ;
\draw[->, -{Implies[]}, very thick] (m1.south) -- (m2.north);
\draw[->, -{Implies[]}, very thick] (m1.south) -- (m3.north);
\draw[->, -{Implies[]}, very thick] (m1.south) -- (m4.north);
\path (m3) -- (m4) node [font=\large, midway, sloped] {$\dots$};
\end{tikzpicture}
\caption{Gradient Subset Selection}
\label{fig:candidate_gradient}
\end{figure}
Our algorithm, in its general form, is a wrapper on top of any other optimizer and it works by processing the gradients for each layer before they are applied to the weights.
More exactly, for each layer we compute the gradient of every sample individually, and then select only a subset of them, chosen by minimizing a margin-based loss between a \textit{gradient metric} and a \textit{model metric}.
We use the term \textit{model metric} for an indicator of the model's current performance, which we measure by the model loss.
The \textit{gradient metric} is a function, such as a norm, calculated from a set of per-sample gradients for a given network layer.
However, instead of composing this set from individual samples, we group multiple of them into \textit{selection strides}, allowing for better performance and stability.
We define a \textit{selection stride} as a subset of $s$ consecutive samples within a batch.
In a batch of size $b$, samples $x_{0}$, $x_{1}$ \dots $x_{s-1}$ belong to the first \textit{selection stride}, samples $x_{s}$, $x_{s+1}$ \dots $x_{2s -1}$ belong to the second \textit{selection stride}, and so on.
The general formula for a selection stride is:
\begin{equation}
SelectionStride_{i} = \{x_{i \cdot s}, x_{i \cdot s + 1}, \dots, x_{(i + 1) \cdot s - 1}\}, \quad \forall i \in \left[0, \left\lceil \frac{b}{s} \right\rceil - 1 \right]
\end{equation}
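As a concrete illustration, the grouping can be sketched in a few lines of Python (a sketch only: the helper name \texttt{split\_strides} and the use of PyTorch tensors are illustrative assumptions, not our released implementation):
\begin{verbatim}
import torch

def split_strides(grad_samples, stride_size):
    # Split per-sample gradients (batch dimension first) into
    # consecutive selection strides of `stride_size` samples;
    # the last stride may be smaller than the others.
    return list(torch.split(grad_samples, stride_size, dim=0))

# Example: 16 per-sample gradients for a (64, 784) weight matrix
grads = torch.randn(16, 64, 784)
strides = split_strides(grads, stride_size=4)  # four strides of four samples
\end{verbatim}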
These \textit{selection strides} are used to compose candidate gradients and select the one that best minimizes the margin-based loss function.
This is exemplified in Fig.~\ref{fig:candidate_gradient}, where a batch of 16 samples is divided into 4 selection strides from which multiple candidate gradients are constructed and evaluated.
The candidate which is the best fit for our criterion is used in the weight update.
Dynamic Batch Adaptation (DBA) is applied for every batch in a training epoch, for any layer that has update gradients.
Notice that since every layer has distinct gradients, different gradient candidates can be selected for each of them.
The chosen \textit{selection strides} are averaged in order to compute the gradient on the current layer.
After the gradient has been replaced on each layer of the neural network, the update of the weights is performed as usual. A pseudo-code description of how DBA processes the per-sample gradients after they have been back-propagated can be seen in Algorithm~\ref{alg:step}.
At the end of each epoch, using information about how many samples were included into the actual update, the batch size for the next epoch is adjusted accordingly.
\begin{algorithm}
\caption{Optimizer step}\label{alg:step}
\DontPrintSemicolon
\SetKwFunction{FMain}{DBA::step}
\SetKwProg{Fn}{Procedure}{:}{}
\Fn{\FMain{$loss$}}{
$modelMetric \gets ModelMetric(loss)$ \\
\For{$layer \in model.layers$}{
$selectionStrides \gets SplitStrides(layer.gradSamples, strideSize)$ \\
$chosen \gets SelectionStrategy(selectionStrides)$ \\
$layer.gradient \gets Mean(chosen)$
}
optimizer.step($loss$)
}
\end{algorithm}
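For readers who prefer code over pseudo-code, the step above can be sketched in Python as follows (a sketch under stated assumptions: per-sample gradients are available as a \texttt{grad\_samples} attribute, and \texttt{split\_strides} and \texttt{selection\_strategy} are the helpers discussed in this section, not our released code):
\begin{verbatim}
import torch

def dba_step(model, optimizer, loss, stride_size, selection_strategy):
    # Replace each layer's update gradient by the mean of the chosen
    # selection strides, then let the wrapped optimizer take its step.
    model_metric = loss.item()  # the batch loss serves as model metric
    for p in model.parameters():
        if getattr(p, "grad_samples", None) is None:
            continue  # layer without per-sample gradients
        strides = split_strides(p.grad_samples, stride_size)
        chosen = selection_strategy(strides, model_metric)
        p.grad = torch.cat(chosen, dim=0).mean(dim=0)
    optimizer.step()
\end{verbatim}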
\subsection{Model metric}
Our intent with this metric is to introduce a mechanism that is tied to the model performance in order to adapt the gradient selection algorithm to the current state of the model.
This is desirable in order to handle the multiple stages of training.
In the beginning, the model should make large updates, while later in the training process, when the network has already learned a good set of weights, further updates should not make large changes.
We have experimented with using constant functions, however they offer no information about the current state of the model and are very sensitive to the chosen constant value.
The model metric we have used is the loss of the model.
We consider that other metrics can be developed to fulfil this role, but for the rest of the paper we will use the model loss as the \textit{model metric} for our algorithm.
The loss is computed for each batch; therefore, it offers us an approximation of the model's classification proficiency on the current samples.
\subsection{Gradient metric}
Using the \textit{model metric} calculated for a given batch loss, we are able to guide our selection by calculating the \textit{gradient metric} of candidate gradients.
Normally, the mean of the batch loss is back-propagated to every layer of the model.
However, for our method, we need to back-propagate gradients for each of the samples.
This technique has notably been extensively used in the field of Differential Privacy \cite{yousefpour2021opacus}.
After the initial gradient calculations, we iterate through each layer and select subsets of individual gradients that minimize our margin-based loss between \textit{model metric} and \textit{gradient metric}.
A subset of samples, represented by one or more \textit{selection strides} from the current batch, receives a score calculated using the \textit{gradient metric}.
In our experiments, we used two different \textit{gradient metrics}: the norm of the mean of the gradients for the selected samples, and the norm of the variance of the gradients for the selected samples. The \textit{gradient metric} uses one of these two formulations:
\begin{equation}
\label{grad-metric}
\text{Gradient Norm}=\norm{\frac{\sum_{j = 0}^{n - 1}\delta_{i_{j}} }{n}}_{2}, \qquad \text{Variance Norm}=\norm{\frac{\sum_{j = 0}^{n - 1}(\delta_{i_{j}} - \overline{\delta_{i}})^{2}}{n - 1}}_{2}
\end{equation}
where $\delta_{i_{j}}$ is the $j$th gradient sample from candidate gradient $i$, $\overline{\delta_{i}}$ is the mean of these samples, $n$ is the number of samples in the candidate, and the square in the variance norm is taken elementwise.
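Both metrics can be computed directly from a stack of per-sample gradients. The following Python sketch (assuming PyTorch tensors with the batch dimension first) mirrors Eq.~\ref{grad-metric}:
\begin{verbatim}
import torch

def gradient_norm(delta):
    # L2 norm of the mean of the per-sample gradients.
    return delta.mean(dim=0).norm(p=2).item()

def variance_norm(delta):
    # L2 norm of the elementwise sample variance of the gradients.
    n = delta.shape[0]
    var = ((delta - delta.mean(dim=0)) ** 2).sum(dim=0) / (n - 1)
    return var.norm(p=2).item()
\end{verbatim}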
We have observed that the norm of the gradient from a single sample is usually larger than the norm of the mean of the gradients of several samples.
Generally, the more sample gradients we average, the lower the norm of their mean, because their directions cancel each other out unless there is a consensus between all samples.
Initially, because the model has random weights, all samples generate a large loss and therefore gradients with large norms.
As the model learns from the data, the norms of the gradients decrease over time; hence, samples that are classified correctly by the model have a smaller corresponding norm than those that are classified incorrectly.
Moreover, the norms are larger for the final classification layer than for the intermediate ones, which is why we consider it important to compute a different update for each layer.
Using different samples for each layer can cause internal covariate shift \cite{ioffe2015batch} on models with more layers, because each layer is effectively trained on a slightly different distribution of samples, but we have not observed this phenomenon on the small model used for our experiments.
Noisy samples and outliers lead to gradients with larger norms because the model is not able to classify them well.
Therefore, our selection strategy would avoid \textit{selection strides} with noisy samples, or would select them only if their direction is canceled by the other samples and the negative effects are ameliorated.
We consider this to potentially be the primary reason for our good results in data-scarce environments, since noisy data has a greater impact on the model.
The norm of the variance of the gradients for the selected sample is the second indicator we used in our experiments to compute the \textit{gradient metric}.
This metric describes how close the direction of the gradients for each sample are from each other.
A set of gradients with similar directions would have a small variance, and therefore a small norm, while gradients with opposing directions would have a large variance and a large resulting norm.
Similarly, noisy samples increase the variance and the value of this metric, making it easier to identify them.
From our experiments we see that both the variance and the norm of the gradients produce similar results, but using the variance based metric tends to lead to slightly better ones.
\subsection{Margin-based loss}
Ideally, our algorithm would cause the model to make large update steps at the beginning of training, when the weights do not contain much information, and smaller steps towards the end of training, when we only want to fine-tune the weights without destroying the good feature representations the model has already learned.
For this reason, we do not directly minimize the \textit{gradient metric}, as doing so would enforce a constant behaviour across the entire training phase.
Hence, we minimize a function which takes as arguments the \textit{model metric} and the \textit{gradient metric}.
We experimented with multiple such functions, but we have decided to use a margin-based loss that minimizes the slope between the \textit{model metric} and the \textit{gradient metric}.
In order to do this we keep an exponential running mean for each of the two metrics, and we try to choose \textit{selection strides} such that the \textit{gradient metric} minimizes the slope as follows:
\begin{equation}
\label{argmin}
\arg\min_{X \subseteq S} \left| GradMetric(X) - \frac{ModelMetric}{ModelMetric_{exp}} \cdot GradMetric_{exp} \cdot \mu \right|, \quad \mu > 0
\end{equation}
where $S$ is the set of sample gradients from the \textit{selection strides} that form the current batch, $ModelMetric$ is the \textit{model metric} for the current batch, $ModelMetric_{exp}$ and $GradMetric_{exp}$ are the exponential running means of the \textit{model metric} and the \textit{gradient metric}, and $\mu$ is a parameter that allows us to control the slope, set to $1.0$ by default.
The intuition behind this function is that we want the model metric and gradient metric to behave similarly so that the magnitude of the update step is proportional to the model loss for the current batch.
For example, supposing that the \textit{model metric} increases for the current batch due to the presence of several instances that were not classified correctly, we would like the selected samples from the batch to have a similar increase in \textit{gradient metric} and consequently in the update step.
In the general case, the \textit{model metric} will decrease over time; therefore, we want our \textit{gradient metric} to decrease as well, adjusting to the change in overall model performance.
Another minimization function we tried, which achieved slightly worse results, is the absolute value of the difference between the two metrics: $|\alpha \cdot GradMetric(X) - ModelMetric|, \alpha > 0$. In our implementation, $\alpha$ is a scaling factor used to bring the two values to the same order of magnitude.
Minimizing this function follows the same principles as above; however, it is more unstable because it does not take the past evolution into account and is very sensitive to the choice of $\alpha$.
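A minimal Python sketch of the margin-based objective from Eq.~\ref{argmin} and of the exponential running means (hypothetical helper names; the smoothing factor of $0.9$ matches the value used in our experiments):
\begin{verbatim}
def margin_loss(grad_metric, model_metric,
                grad_metric_ema, model_metric_ema, mu=1.0):
    # Distance between a candidate's gradient metric and the target
    # implied by the running means of the two metrics.
    target = (model_metric / model_metric_ema) * grad_metric_ema * mu
    return abs(grad_metric - target)

def update_ema(ema, value, smoothing=0.9):
    # Exponential running mean used for both metrics.
    return smoothing * ema + (1.0 - smoothing) * value
\end{verbatim}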
\subsection{Selection strategy}
In Eq.~\ref{argmin} we try to select the best subset of \textit{selection strides} that minimizes our function.
However, there are $2^{|S|}$ possible subsets and a search through all of them is prohibitively costly.
Therefore, we have designed two greedy selection strategies.
While they have a lower complexity, the compromise is that they might not always find the best solution.
The first strategy is \textit{bottom up selection}, whose pseudo-code can be seen in Algorithm \ref{bottom-up}, while the second strategy is \textit{top down selection}, whose pseudo-code can be seen in Algorithm \ref{top-down}.
\begin{minipage}[t]{0.5\textwidth}
\begin{algorithm}[H]
\small
\caption{Bottom up selection}\label{bottom-up}
\DontPrintSemicolon
\SetKwFunction{FMain}{BottomUpSelection}
\SetKwProg{Fn}{Function}{:}{}
\Fn{\FMain{$selectionStrides$}}{
$continue \gets true$ \\
$best \gets \infty$ \\
$chosen \gets \emptyset $ \\
$m1 \gets ModelMetric()$ \\
$m2 \gets ModelMetricMean()$\\
\While{$continue$}{
$continue \gets false$ \\
\For{$stride \in selectionStrides$}{
$chosen \gets chosen \cup \{stride\}$ \\
$g1 \gets GradMetric(chosen)$ \\
$g2 \gets GradMetricMean(chosen)$\\
$d \gets F(m1, m2, g1, g2)$\\
\eIf{$d < best$}{
$best \gets d$\\
$selectionStrides \gets selectionStrides \setminus \{stride\}$\\
$continue \gets true$\\
}{
$chosen \gets chosen \setminus \{stride\}$
}
}
}
\KwRet $chosen$\;
}
\end{algorithm}
\end{minipage}
\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{algorithm}[H]
\small
\caption{Top down selection}\label{top-down}
\DontPrintSemicolon
\SetKwFunction{FMain}{TopDownSelection}
\SetKwProg{Fn}{Function}{:}{}
\Fn{\FMain{$selectionStrides$}}{
$continue \gets true$ \\
$best \gets \infty$ \\
$chosen \gets selectionStrides$ \\
$m1 \gets ModelMetric()$ \\
$m2 \gets ModelMetricMean()$\\
\While{$continue$}{
$continue \gets false$ \\
\For{$stride \in chosen$}{
$chosen \gets chosen \setminus \{stride\}$ \\
$g1 \gets GradMetric(chosen)$ \\
$g2 \gets GradMetricMean(chosen)$\\
$d \gets F(m1, m2, g1, g2)$\\
\eIf{$d < best$}{
$best \gets d$ \\
$continue \gets true$\\
}{
$chosen \gets chosen \cup \{stride\}$
}
}
}
\KwRet $chosen$\;
}
\end{algorithm}
\end{minipage}
Both strategies have a strong bias in the number of samples that will be included in an update.
\textit{Bottom up selection} usually selects few \textit{selection strides}, while \textit{top down selection} selects more.
As a consequence, training a model with \textit{top down selection} results in larger batch sizes than with \textit{bottom up selection}.
In order to combat this bias, when applying our algorithm for each layer of the neural network we randomly choose one of the two possible strategies with equal probability.
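As an illustration, a direct Python transcription of the bottom-up strategy could look as follows (\texttt{score} is an assumed callback that evaluates the margin-based objective on a candidate subset of strides):
\begin{verbatim}
def bottom_up_selection(strides, score):
    # Greedily grow the chosen set as long as adding a stride
    # strictly improves the margin-based score.
    best = float("inf")
    chosen, remaining = [], list(strides)
    improved = True
    while improved:
        improved = False
        for stride in list(remaining):
            candidate = chosen + [stride]
            d = score(candidate)
            if d < best:
                best, chosen = d, candidate
                remaining.remove(stride)
                improved = True
    return chosen
\end{verbatim}
The top-down variant is symmetric: it starts from the full set of strides and greedily removes the ones whose removal improves the score.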
\subsection{Updating the batch size}
At the end of each epoch our optimizer calculates a batch size suitable for the next epoch.
We record how many samples were selected to be used in updates by our algorithm, and retrieve the 50th percentile, $q$, of these values. Using this, we calculate the batch size for the next training epoch as:
\begin{equation}
\label{batch-change}
Next = Current +
\begin{cases}
delta, & q > 0.8 \cdot Current\\
-delta, & q < 0.2 \cdot Current\\
0, & \text{otherwise}
\end{cases}
\end{equation}
where $Next$ represents the next batch size, $Current$ the current batch size, and $delta$ a parameter with which we can control the difference between the current batch size and the next.
The batch size is also capped at both ends by a minimum and maximum value, also given as parameters.
The maximum batch size depends on hardware limitations and the size of the dataset, while the minimum batch size is needed to limit the maximum number of steps per epoch.
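A minimal Python sketch of this update rule (a hypothetical helper; the default values mirror the parameters used in our experiments):
\begin{verbatim}
import statistics

def next_batch_size(current, selected_counts,
                    delta=8, min_bs=32, max_bs=2048):
    # Median number of samples actually selected per step this epoch.
    q = statistics.median(selected_counts)
    if q > 0.8 * current:
        current += delta
    elif q < 0.2 * current:
        current -= delta
    # Cap the result between the configured minimum and maximum.
    return max(min_bs, min(max_bs, current))
\end{verbatim}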
Putting it all together, our main training loop can be seen described in Algorithm~\ref{alg:train}.
\begin{algorithm}
\caption{Training process}\label{alg:train}
$dba \gets DBA(model, optimizer, strideSize, minBatchSize, maxBatchSize)$ \\
\While{$epochs < maxEpochs$}{
$batches \gets SplitInBatches(data, batchSize)$ \\
\For{$batch \in batches$}{
$loss = model.fit(batch)$ \\
$dba.step(loss)$
}
$batchSize \gets dba.nextBatchSize()$
}
\end{algorithm}
\section{Experimental results}
\label{experiments}
\begin{table}
\caption{Results on MNIST using $x\%$ of the data}
\label{sample-table}
\centering
\begin{tabular}{lllll}
\toprule
\multicolumn{2}{c}{} &
\multicolumn{3}{c}{Test Accuracy \%} \\
\cmidrule(r){3-5}
Optimizer & Variant & 100\% of Data & 10\% of Data & 1\% of Data \\
\midrule
SGD & batch size=64 & $\textbf{98.164} \boldsymbol{\pm} \textbf{0.077}$ & $95.148 \pm 0.128$ & $87.628 \pm 0.339$\\
SGD & batch size=128 & $98.148 \pm 0.087$ & $94.990 \pm 0.076$ & $87.574 \pm 0.401$ \\
SGD & batch size=256 & $98.068 \pm 0.036$ & $94.834 \pm 0.137$ & $87.380 \pm 0.386$ \\
Adam & batch size=64 & $97.786 \pm 0.037$ & $95.262 \pm 0.108$ & $88.350 \pm 0.330$ \\
Adam & batch size=128 & $97.770 \pm 0.068$ & $95.480 \pm 0.093$ & $88.250 \pm 0.452$ \\
Adam & batch size=256 & $97.872 \pm 0.085$ & $95.466 \pm 0.180$ & $88.314 \pm 0.545$ \\
DBA+SGD \textbf{(ours)} & gradient norm & $98.122 \pm 0.074$ & $95.546 \pm 0.073$ & $96.772 \pm 0.420$ \\
DBA+SGD \textbf{(ours)} & variance norm & $98.140 \pm 0.032$ & $\textbf{97.422} \boldsymbol{\pm} \textbf{0.198}$ & $\textbf{97.830} \boldsymbol{\pm} \textbf{0.079}$ \\
DBA+Adam \textbf{(ours)} & gradient norm & $95.038 \pm 0.118$ & $93.730 \pm 0.233$ & $90.846 \pm 1.749$ \\
DBA+Adam \textbf{(ours)} & variance norm & $94.952 \pm 0.181$ & $93.864 \pm 0.214$ & $92.198 \pm 0.550$ \\
\bottomrule
\end{tabular}
\end{table}
We evaluated our method on random subsets of the MNIST Dataset \cite{lecun-mnisthandwrittendigit-2010}, ranging from $10$ samples per class to $6000$ samples per class which represents the entire dataset. The dataset is made available under the terms of the Creative Commons Attribution-Share Alike 3.0 license.
Each experiment was run five times using different random seeds, and no data augmentation techniques were applied across the experiments.
The model we used is a Multilayer Perceptron with a single hidden layer of $64$ neurons.
For Stochastic Gradient Descent (SGD), we use a learning rate of $0.01$, weight decay of $0.0005$ and a Nesterov momentum of $0.9$.
For Adam we use the same learning rate of $0.01$, while the beta coefficients used for computing the running averages of the gradient and its square are $0.9$ and $0.999$.
The term added to the denominator to improve numerical stability, $\epsilon$, is $1\mathrm{e}{-8}$, and we use the same weight decay of $0.0005$. For both of these optimizers, a learning rate scheduler reduces the learning rate by a factor of $0.5$ until a minimum learning rate of $1\mathrm{e}{-7}$ is reached. The scheduler is activated on a plateau, when the training loss does not decrease by more than $0.05\%$ within 25 epochs.
DBA is used as a wrapper for the previously mentioned optimizers.
The minimum and maximum allowed batch sizes are 32 and $\min(|Data|, 2048)$ respectively.
The \textit{selection stride} size is 16 for all the experiments, except for when we used 10 samples per class in which case we used \textit{selection strides} of size 10.
When computing our exponential running average for the \textit{gradient metric} and the \textit{model metric} we used a smoothing factor of $0.9$ and the coefficient $\mu$ for the margin-based loss used when computing the slope for our metrics is $1.0$.
We used a parameter to specify which of the two possible \textit{gradient metrics} mentioned in Eq.~\ref{grad-metric} is used.
The last parameter for DBA is the $delta$ we use to modify the batch size according to Eq.~\ref{batch-change}, which is set to $8$ across our experiments.
For the practical implementation we used PyTorch \cite{NEURIPS2019_9015} which is publicly available under a modified BSD license.
Due to the fact that the individual gradients in a batch are not easily available in the provided implementation, we used the open source library Opacus \cite{yousefpour2021opacus}, released under the Apache License 2.0, in order to recalculate and use them in our optimizer.
This brings considerable additional overhead; a better implementation that parallelizes our metric calculations and selection algorithm would increase performance, but a complete discussion of these technical issues is beyond the scope of this paper.
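For reference, obtaining per-sample gradients with Opacus typically follows the pattern below (a sketch: \texttt{net}, \texttt{criterion}, \texttt{inputs}, and \texttt{targets} are placeholders, and module paths may differ between Opacus versions):
\begin{verbatim}
from opacus.grad_sample import GradSampleModule

wrapped = GradSampleModule(net)  # wrap an ordinary nn.Module
loss = criterion(wrapped(inputs), targets)
loss.backward()                  # also populates p.grad_sample
for p in wrapped.parameters():
    per_sample = p.grad_sample   # shape: (batch_size, *p.shape)
\end{verbatim}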
The training was done using two separate GPUs (GTX 1050 Ti and RTX 2080 Ti), each experiment being run on only one of them at a time.
Results for our experiments can be seen in Table~\ref{sample-table}. We include the average maximum accuracy and standard deviation obtained on the test split after 5 runs using different seeds. Results are reported for models trained using 100\%, 10\% and 1\% of the training data. The baseline training runs using SGD and Adam were repeated for different batch sizes, namely 64, 128 and 256 while the DBA runs start from a default 128 batch size. We also report the results of DBA runs using either gradient norm or variance norm as the \textit{gradient metric}.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.90]
\begin{axis}
anchor=east,
xshift=-1.7cm,
x tick label style={font=\tiny},
y tick label style={font=\tiny},
symbolic x coords={SGD+bs=25,SGD+bs=50,SGD+bs=100},xtick=data]
\addplot+[forget plot,only marks, color={rgb,255:red,109; green,158; blue,235}]
plot[error bars/.cd, y dir=both, y explicit]
table[x=x,y=y,y error plus expr=\thisrow{y-max},y error minus expr=\thisrow{y-min}] {\sgddata};
\end{axis}
\begin{axis}
anchor=west,
x tick label style={font=\tiny},
y tick label style={font=\tiny},
symbolic x coords={Adam+bs=25,Adam+bs=50,Adam+bs=100},xtick=data]
\addplot+[forget plot,only marks, color={rgb,255:red,109; green,158; blue,235}]
plot[error bars/.cd, y dir=both, y explicit]
table[x=x,y=y,y error plus expr=\thisrow{y-max},y error minus expr=\thisrow{y-min}] {\adamdata};
\end{axis}
\end{tikzpicture}
\\
\begin{tikzpicture}[scale=0.90]
\begin{axis}
anchor=east,
xshift=-1.7cm,
x tick label style={font=\tiny},
y tick label style={font=\tiny},
symbolic x coords={DBA+SGD+VarianceNorm,DBA+SGD+GradientNorm},xtick=data]
\addplot+[forget plot,only marks, color={rgb,255:red,109; green,158; blue,235}]
plot[error bars/.cd, y dir=both, y explicit]
table[x=x,y=y,y error plus expr=\thisrow{y-max},y error minus expr=\thisrow{y-min}] {\dbasgddata};
\end{axis}
\begin{axis}
anchor=west,
x tick label style={font=\tiny},
y tick label style={font=\tiny},
symbolic x coords={DBA+Adam+VarianceNorm,DBA+Adam+GradientNorm},xtick=data]
\addplot+[forget plot,only marks, color={rgb,255:red,109; green,158; blue,235}]
plot[error bars/.cd, y dir=both, y explicit]
table[x=x,y=y,y error plus expr=\thisrow{y-max},y error minus expr=\thisrow{y-min}] {\dbaadamdata};
\end{axis}
\end{tikzpicture}
\caption{Test error rates (\%) on MNIST after training using 10 samples per class}
\label{fig:results_MNIST_10}
\end{figure}
The results show that DBA performs extremely well in data-scarce environments, compared to SGD and Adam.
Interestingly, we have found that, of the classic optimizers, Adam outperforms SGD when using fewer samples, suggesting that Adam's per-weight learning rate copes better with less data.
However, the results for DBA+Adam are worse than those for DBA+SGD, and we suspect that this is because our selection of per-sample gradients interferes with Adam's exponential running mean mechanism.
When training on 10\% of the data (600 samples per class), our method achieved a lower accuracy than when training on 1\% of the data (60 samples per class).
We were not able to identify the reason for this phenomenon, but we suspect that it is related to our choice of \textit{selection stride} size, which is $16$ and might not be suitable for the 10\% dataset.
Nonetheless, all the DBA+SGD combinations heavily outperform standard optimizers for the smaller subsets of MNIST, with a significant difference being observed when training using only 1\% (60 samples per class). In this scenario, one of our DBA+SGD runs using the variance norm managed to reach a test accuracy of 97.79\%, compared to the best performing baseline run (SGD with a batch size of 64), which reached a maximum test accuracy of 87.87\%.
It is also worth noting that even when DBA did not exceed certain baseline runs, it performed comparably while having the advantage of not needing a good batch size to be found beforehand.
Motivated by our method's performance on small datasets, we also conducted experiments in an extremely data-scarce scenario, namely using only 10 samples per class (100 samples total) of the MNIST dataset. We used the same parameters as mentioned above, except for the following:
\begin{itemize}
\item Baseline runs now used batch sizes of 25, 50, 100 instead of 64, 128, 256;
\item DBA runs use a starting batch size of 100 with a selection stride of 10 and minimum batch size of 20;
\item Note that the maximum batch size is 100;
\end{itemize}
We report the test error rates in Figure~\ref{fig:results_MNIST_10}. As can be seen, all DBA runs manage to significantly reduce the error rate compared to standard training. Notably, the lowest error rate achieved, 2.56\%, was the result of a model trained using DBA+SGD with the gradient norm as the \textit{gradient metric}.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\centering
\begin{axis}[anchor=east,
xshift=-1.7cm,
ylabel= Data Utilization Rate,
xlabel= Epochs,
ymin=0.0,
ymax=1.0,]
\addplot[line width=1.pt, color={rgb,255:red,109; green,158; blue,235}] table [x=Step, y=Value, col sep=comma, mark=none, smooth] {data/smooth_epoch_utilization.csv};
\end{axis}
\begin{axis}[anchor=west,
ylabel= "Real Epochs",
xlabel= Epochs,]
\addplot[line width=1.pt, color={rgb,255:red,109; green,158; blue,235}] table [x=Step, y=Value, col sep=comma, mark=none, smooth] {data/smooth_real_epoch_count.csv};
\end{axis}
\end{tikzpicture}
\caption{Epoch utilization rate and "Real Epochs"}
\label{fig:convergence_results}
\end{figure}
Models trained using DBA need to be trained on a relatively small volume of data to reach convergence; however, since our method discards many samples during training, this improvement in convergence is not reflected in the number of iterations used. In Figure~\ref{fig:convergence_results} we present the average ratio of the number of selected gradients to the batch size for one of our DBA+SGD runs on the complete dataset. The data utilization rate indicates that only $\sim56\%$ of the data passing through the model gets included in the update steps. This is cumulatively represented in the right-hand figure by a metric which we call "Real Epochs". It can be seen that after 150 epochs of training, the model only used $\sim83$ epochs worth of data for its updates.
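Both quantities in Figure~\ref{fig:convergence_results} follow from simple bookkeeping; a sketch with hypothetical variable names:
\begin{verbatim}
def epoch_utilization(selected_counts, batch_sizes):
    # Fraction of forwarded samples that entered the update steps.
    return sum(selected_counts) / sum(batch_sizes)

def real_epochs(per_epoch_utilization):
    # Cumulative "Real Epochs": epoch-equivalents of data actually
    # used in updates, accumulated over the training run.
    total, curve = 0.0, []
    for u in per_epoch_utilization:
        total += u
        curve.append(total)
    return curve
\end{verbatim}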
\section{Limitations}
\label{limitations}
Regarding this work's limitations, we note that since layers get trained on different subsets of the current batch, issues caused by internal covariate shift might present themselves when using DBA on deeper models, although this issue was not visible in our experiments.
Moreover, we note that DBA is computationally expensive, due to the fact that it requires an additional per-sample gradient calculation and our selection procedure is sequential in our current implementation.
This can be mitigated by deriving better selection procedures and a lower level implementation for both features.
Furthermore, we did not perform experiments on more difficult tasks or with deeper models, mainly because the generalization benefits of DBA seem to taper off when enough data is available.
Finally, due to limited access to computational power, we were not able to perform an extensive hyperparameter search.
\section{Conclusion}
\label{conclusion}
In this paper, we have introduced DBA, an algorithm that dynamically selects gradient samples for each layer to be included in weight updates.
The general consensus is that randomness in selecting samples plays a crucial role in ensuring a fast, stable and qualitative training process.
However, our results show that although we directly interfere with batch composition, we manage to match the accuracy of standard training and even significantly exceed it when training in data-scarce environments, such as using only 600 or 100 samples of the MNIST training dataset.
This indicates that the quality of the gradients we select has a significant impact on convergence speed and generalization capabilities. Moreover, our approach is model and optimizer agnostic, which means it has the potential of being applied in many use cases.
Future work should focus on developing more efficient ways to calculate per-sample gradients and designing selection methods in order to scale up such approaches to larger models.
{\small
\bibliographystyle{ieee_fullname}
| {'timestamp': '2022-08-02T02:32:36', 'yymm': '2208', 'arxiv_id': '2208.00815', 'language': 'en', 'url': 'https://arxiv.org/abs/2208.00815'} |
\section{Introduction} The Coq Proof
Assistant~\cite{the_coq_development_team_2019} is an Interactive Theorem Prover
in which one proves lemmas using tactic scripts. Individual tactics in these
scripts represent actions that transform the proof state of the lemma currently
being proved. A wide range of tactics exist, with a wide range of
sophistication. Basic tactics such as \texttt{apply \textit{lem}} and
\texttt{rewrite \textit{lem}} use an existing lemma \textit{lem} to perform one
specific inference or rewriting step while tactics like \verb|ring| and
\verb|tauto| implement entire decision procedures that are guaranteed to succeed
within specific domains. Finally, open-ended search procedures are implemented
by tactics like \verb|auto| and \verb|firstorder|. They can be used in almost
every domain but usually only work on simple proof states or need to be
calibrated carefully. Users are also encouraged to define new tactics that
represent basic steps, decision procedures, or specialized search procedures
within their specific mathematical domain.
When proving a lemma, the user's challenge is to observe the current proof state
and select the appropriate tactic and its arguments to be used. Often the user
makes this decision based on experience with previous proofs. If the current
proof state is similar to a previously encountered situation, then one can
expect that an effective tactic in that situation might also be effective now.
Hence, the user is continuously matching patterns of proof states in their mind
and selects the correct tactic based on these matches.
That is not the only task the user performs, however. When working on a
mathematical development, the user generally has two roles: (1) As a {\em
strategist}, the user comes up with appropriate lemmas and sometimes decides
on the main structure of complicated proofs. (2) As a {\em tactician}, the user
performs the long and somewhat mindless process of mental pattern matching on
proof states, applying corresponding tactics until the lemma is proved. Many of
the steps in the tactician's role will be considered as ``obvious'' by a
mathematician. Our system is meant to replicate the pattern matching process
performed in this role, alleviating the user from this burden. Hence, we have
aptly named it Tactician.
To perform its job, Tactician can learn from existing proofs, by looking at how
tactics modify the proof state. Then, when proving a new lemma, the user can ask
the system to recommend previously used tactics based on the current proof state
and even to complete the whole proof using a search procedure based on these
tactic recommendations.
In our previous publication, the underlying machine learning and proof search
techniques employed by Tactician and how suitable data is extracted from Coq are
described~\cite{blaauwbroek2020tactic}. It also contains an evaluation of
Tactician's current proof automation performance on Coq's standard library. We
will not repeat these details here. Instead, we will focus on the operational
aspects and description of Tactician when used as a working and research tool.
\Cref{sec:system-overview} gives a mostly non-technical overview of the system
suitable for casual Coq users. That includes Tactician's design principles, its
mode of operation, a concrete example and a discussion on using Tactician in
large projects. \Cref{sec:technical-implementation} briefly discusses some of
Tactician's technical implementation issues, and
\Cref{sec:tactician-as-a-machine-learning-platform} describes how Tactician can
be used as a machine learning platform. Finally, \Cref{sec:related-work}
compares Tactician to related work. Installation instructions of Tactician can
be found at the project's website \url{http://coq-tactician.github.io}. This
pre-print is an extended version of our CICM paper with the same
title~\cite{DBLP:conf/mkm/BlaauwbroekUG20}.
\section{System Overview}
\label{sec:system-overview}
In this section, we give a mostly non-technical overview of Tactician suitable
for casual Coq users. \Cref{sec:design-decisions} states the guiding design
principles of the project, and \Cref{sec:mode-of-operation} describes the
resulting user workflow. On the practical side, \Cref{sec:a-concrete-example}
gives a simple, concrete example of Tactician's usage, while
\Cref{sec:learning-in-the-large} discusses how to employ Tactician in large
projects.
\subsection{Design Principles}
\label{sec:design-decisions}
For our system, we start with the principal goal of learning from previous
proofs to aid the user with proving new lemmas. In Coq, there are essentially
two notions of proof: (1) proof terms expressed in the Gallina language (Coq's
version of the Calculus of Inductive
Constructions~\cite{DBLP:conf/tlca/Paulin-Mohring93}); (2) tactic proof scripts
written by the user that can then generate a Gallina term. In principle, it is
possible to use machine learning on both notions of proof. We have chosen to
learn from tactic proof scripts for two reasons:
\begin{enumerate}
\item Tactics scripts are a higher-level and forgiving environment,
which is more suitable for machine learning. A Gallina proof term must be
generated extremely precisely while a tactic script often still works after
minor local mutations have occurred. Gallina terms are also usually much
bigger than their corresponding tactic script because individual tactics can
represent large steps in a proof.
\item We acknowledge that automation systems within a proof assistant often
still need input from the user to fully prove a lemma. Working on the tactic
level allows the user to introduce domain-specific information to aid the
system. For example, one can write new tactics that represent decision
procedures and heuristics that solve problems Tactician could not otherwise
solve. One can teach Tactician about such new tactics merely by using them in
hand-written proofs a couple of times, after which the system will
automatically start to use them.
\end{enumerate}
Apart from the principal goal described above, the most important objective of
Tactician is to be usable and remain usable by actual Coq users. Hence, we
prioritize the system's ``look and feel'' over its hard performance numbers. To
achieve this usability, Tactician needs to be pleasant to all parties involved,
which we express in four basic ``friendliness'' tenets.
\begin{description}
\item[User Friendly] If the system is to be used by actual Coq users, it should
function as seamlessly as possible. After installation, it should be ready to
go with minimal configuration and without needing to spend countless hours
training a sophisticated machine learning model. Instead, there should be a
model that can learn on the fly, with a future possibility to add a more
sophisticated model that can be trained in batch once a development has been
finished. Finally, all interaction with the system should happen within the
standard Coq environment, regardless of which editor is used and without the
need to execute custom scripts.
\item[Installation Friendly] Ease of installation is essential to reach solid
user adoption. To facilitate this, the system should be implemented in OCaml
(Coq's implementation language), with no dependencies on machine learning
toolkits written in other languages like Python or Matlab. Compilation and
installation will then be just as easy as with a regular Coq release.
\item[Integration Friendly] The system should not be a fork of the main Coq
codebase that has to be maintained separately. A fork would deter users from
installing it and risk falling behind the upstream code. Instead, it should
function as a plugin that can be compiled separately and then loaded into Coq
by the user.
\item[Maintenance Friendly] We intend for Tactician not to become abandonware
after main development has ceased, and at least remain compatible with the
newest release of Coq. As a first step, the plugin should be entered into the
Coq Package Index~\cite{coq_package_index}, enabling continuous integration
with future versions of Coq. Additionally, assuming that Tactician becomes
wildly popular, we eventually intend for it to be absorbed into the main Coq
codebase.
\end{description}
\subsection{Mode of Operation}
\label{sec:mode-of-operation}
Analogously to Coq's mode of operation, Tactician can function both in
interactive mode and in compilation mode.
\subsubsection{Interactive Mode}
\subfile{tactician-interactive-overview}
We illustrate the interactive mode of operation of Tactician using the schematic
in \Cref{fig:tactician-interactive-overview}. When the user starts a new Coq
development file---say \texttt{X.v}---the first thing Tactician does is create
an (in-memory) empty tactic database \texttt{X} corresponding to this file. The
user then starts to prove lemmas as usual. Behind the scenes, every executed
tactic, e.g. $tactic_{a1}$, is saved into the database accompanied by the proof
states before and after tactic execution, in this case,
$\langle\Gamma_{a1}\vdash\sigma_1, tactic_{a1}, \Gamma_{a2}\vdash\sigma_2
\rangle$. The difference between these two states represents the action
performed by the tactic, while the state before the tactic represents the
context in which it was useful. By recording many such triples for a tactic, we
create a dataset representing an approximation of that tactic's semantic
meaning. The database is kept synchronized with the user's movement within the
document throughout the entire interactive session.
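To make the shape of the recorded data concrete, the following Python sketch
mimics such a triple store (purely illustrative: Tactician itself is
implemented in OCaml, and all names and the toy data below are our own):
\begin{minted}{python}
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ProofState:
    hypotheses: tuple  # (name, type) pairs forming the context
    goal: str

@dataclass
class TacticDB:
    # one entry per executed tactic: <state_before, tactic, state_after>
    triples: list = field(default_factory=list)

    def record(self, before, tactic, after):
        self.triples.append((before, tactic, after))

db = TacticDB()
db.record(ProofState((("ls", "list"),), "ls ++ [] = ls"),
          "induction ls",
          ProofState((), "[] ++ [] = []"))
\end{minted}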
After proving a few lemmas by hand, the user can start to reap the fruits of the
database. For this, the tactics \texttt{suggest} and \texttt{search} are
available. We illustrate their use in the schematic when ``Lemma z : $\omega$''
is being proven. The user first executes two normal tactics. After that, Coq's
proof state window displays a state for which the user is unsure what tactic to
use. This is where Tactician's tactics come in.
\begin{description}
\item[\texttt{suggest}] This tactic can be executed to ask Tactician for a list
of recommendations. The current proof state
$A\!:\gamma_1,B\!:\gamma_2,\ldots,Z\!:\gamma_n\vdash \omega_3$ is fed into the
pattern matching engine, which will perform a comparison with the states in
the tactic database. From this, an ordered list of recommendations is
generated and displayed in Coq's messages window, where the user can select a
tactic to execute.
\item[\texttt{search}] Alternatively, the system can be asked to \texttt{search}
for a complete proof. We start with the current proof state, which we rename
to $\Phi_1 \vdash \rho_1$ for clarity. Then a search tree is formed by
repeatedly running \texttt{suggest} on the proof state and executing the
suggested tactics. This tree can be traversed in various ways, finishing only
when a complete proof has been found.
If a proof is found, two things happen. (1) The Gallina proof term that is
found is immediately submitted to Coq's proof engine, after which the proof
can be closed with \texttt{Qed}. (2) Tactician generates a reconstruction
tactic \texttt{search failing $\langle \texttt{t}_{12}, \texttt{t}_{32},...
\rangle$} which is displayed to the user (see the bottom of the figure). The
purpose of this tactic is to provide a modification-resilient proof cache that
also functions when Tactician is not installed. Its semantics is to first try
to use the previously found list of tactics \texttt{$\langle \texttt{t}_{12},
\texttt{t}_{32},... \rangle$} to complete the proof immediately. ``Failing''
that (presumably due to changes in definitions or lemmas), a new search is
initiated to recover the proof. To use the cache, the user should copy it and
replace the original \texttt{search} invocation with it in the source file.
\end{description}
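To convey the essence of \texttt{search}, the sketch below performs a
depth-first traversal over suggested tactics. It is our own Python
simplification under our own naming; Tactician's actual traversal strategies
are described in \cite{blaauwbroek2020tactic}:
\begin{minted}{python}
def search(state, suggest, apply_tactic, max_depth=10):
    """Depth-first proof search guided by `suggest`.

    suggest(state): ranked list of candidate tactics for a proof state;
    apply_tactic(state, tactic): list of remaining subgoal states
        (the empty list means the goal is closed), or None on failure.
    Returns a flat list of tactics forming a proof, or None.
    """
    if max_depth == 0:
        return None
    for tactic in suggest(state):
        subgoals = apply_tactic(state, tactic)
        if subgoals is None:          # tactic failed; try the next suggestion
            continue
        trace = [tactic]
        for sub in subgoals:          # every remaining subgoal must be closed
            sub_trace = search(sub, suggest, apply_tactic, max_depth - 1)
            if sub_trace is None:
                break
            trace += sub_trace
        else:
            return trace              # all subgoals proven
    return None
\end{minted}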
\subsubsection{Compilation Mode}
\subfile{tactician-batch-overview}
This mode is visualized in \Cref{fig:tactician-batch-overview}. After the file
\texttt{X.v} has been finished, one might want to depend on it in other files.
This requires the file to be compiled into a binary \texttt{X.vo} file. The
compilation is performed using the command \texttt{coqc X.v}. Tactician is
integrated into this process. During compilation, the tactic database is rebuilt
in the same way as in interactive mode and then included in the \texttt{.vo}
file. When development \texttt{X.v} is then \texttt{Require}d by another
development file \texttt{Y.v}, the tactic database of \texttt{X.v} is
automatically inherited.
\subsection{A Concrete Example}
\label{sec:a-concrete-example}
We now give a simple example use-case based on lists. Starting with an empty
file, Tactician is immediately ready for action. We proceed as usual by giving a
standard inductive definition of lists of numbers with their corresponding
notation and a function for concatenation.
\begin{minted}{coq}
Inductive list :=
| nil : list
| cons : nat -> list -> list.
Notation "[]" := nil.
Notation "x :: ls" := (cons x ls).
Fixpoint concat ls₁ ls₂ :=
match ls₁ with
| [] => ls₂
| x::ls₁' => x::(ls₁' ++ ls₂)
end where "ls₁ ++ ls₂" := (concat ls₁ ls₂).
\end{minted}
We wish to prove some standard properties of concatenation. The first is a lemma
stating that the empty list \texttt{[]} is the right identity of concatenation
(the left identity is trivial).
\begin{minted}{coq}
Lemma concat_nil_r : ∀ ls, ls ++ [] = ls.
\end{minted}
With Tactician installed, we immediately have access to the new tactics
\texttt{suggest} and \texttt{search}. Neither tactic will produce a result when
used now since the system has not had a chance to learn from proofs yet.
Therefore, we will have to prove this lemma by hand.
\begin{minted}{coq}
Proof.
intros. induction ls.
- simpl. reflexivity.
- simpl. f_equal. apply IHls.
Qed.
\end{minted}
The system has immediately learned from this proof (it was even learning during
the proof) and is now ready to help us with a proof of the associativity of
concatenation.
\begin{minted}{coq}
Lemma concat_assoc :
  ∀ ls₁ ls₂ ls₃, (ls₁ ++ ls₂) ++ ls₃ = ls₁ ++ (ls₂ ++ ls₃).
\end{minted}
Now, if we execute \texttt{suggest}, it outputs the ordered list \texttt{intros,
simpl, f\_equal, reflexivity}. Indeed, using \texttt{intros} as our next
tactic is not unreasonable. We can repeatedly ask \texttt{suggest} for a
recommendation after every tactic we input, which sometimes gives us good
tactics and sometimes bad tactics. However, we can also eliminate the middle-man
and execute the \texttt{search} tactic, which immediately finds a proof.
\begin{minted}{coq}
Proof. search. Qed.
\end{minted}
To cache the proof that is found for the future, we can copy-paste the
reconstruction tactic that Tactician prints into the source file. This example
shows how the system can quickly learn from very little data and with minimal
effort from the user. Of course, this also scales to much bigger developments.
We continue our example for more advanced Coq users to showcase how Tactician
can learn to use custom domain-specific tactics. We begin by defining an
inductive property encoding that one list is a non-contiguous sublist of
another.
\begin{minted}{coq}
Inductive sublist : list -> list -> Prop :=
| sl_nil : sublist [] []
| sl_cons₁ ls₁ ls₂ n : sublist ls₁ ls₂ -> sublist ls₁ (n::ls₂)
| sl_cons₂ ls₁ ls₂ n : sublist ls₁ ls₂ -> sublist (n::ls₁) (n::ls₂).
\end{minted}
We now wish to prove that some lists have the sublist property. For example,
\texttt{sublist (9::3::[]) (4::7::9::3::[])}. Deciding this is not entirely
trivial, because it is not possible to judge from the head of the list whether
to apply \texttt{sl\_cons₁} or \texttt{sl\_cons₂}. Instead of manually writing
these proofs, we create a domain-specific, heuristic proving tactic that
automatically tries to find a proof.
\begin{minted}{coq}
Ltac solve_sublist := solve [match goal with
| |- sublist [] [] => apply sl_nil
| |- sublist (_::_) [] => fail
| |- sublist _ _ =>
(apply sl_cons₁ + apply sl_cons₂); solve_sublist
| |- _ => solve [auto]
end].
\end{minted}
This tactic looks at the current proof state and checks that the goal is of the
form \texttt{sublist ls₁ ls₂}. If the lists are empty, it can be finished using
\texttt{sl\_nil}. If \texttt{ls₂} is empty but \texttt{ls₁} nonempty, the
goal is unprovable, and the tactic fails. In all other cases, we initiate a
backtracking search where we try to apply either \texttt{sl\_cons₁} or
\texttt{sl\_cons₂} and recurse. Finally, we add a simple catch-all clause that
tries to prove any side-conditions using Coq's built-in \texttt{auto}. We now
teach Tactician about the existence of our previous lemmas and the
domain-specific tactic by defining some simple examples. Finally, we ask
Tactician to solve a more complicated, compound problem.
\begin{minted}{coq}
Lemma ex1 : sublist (9::3::[]) (4::7::9::3::[]).
Proof. solve_sublist. Qed.
Lemma ex2 : ∀ ls, 1::2::ls ++ [] = 1::2::ls.
Proof. intro. rewrite concat_nil_r. reflexivity. Qed.
Lemma dec2 : ∀ ls₁ ls₂, sublist ls₁ ls₂ ->
sublist (7::9::13::ls₁) (8::5::7::[] ++ 9::13::ls₂ ++ []).
Proof. search. Qed.
\end{minted}
The proof found by Tactician is \texttt{rewrite
concat\_nil\_r;intros;solve\_sublist}. It has automatically figured out that
it needs to introduce the hypothesis, normalize the list and then run the
domain-specific prover. This example is somewhat contrived but should illustrate
how the user can easily teach Tactician domain-specific knowledge.
\subsection{Learning and Proving in the Large}
\label{sec:learning-in-the-large}
The examples above are fun to play with and useful for demonstration purposes.
However, when using Tactician to develop real projects, three main issues need
to be taken care of, namely (1) instrumenting dependencies, (2) instrumenting
the standard library and, (3) reproducible builds. Below, we describe how to use
Tactician in complex projects and, in particular, how these issues are solved.
Tactician itself is a collection of easily installed Opam~\cite{opam} packages
distributed through the Coq Package Index~\cite{coq_package_index}. The package
\texttt{coq-tactician} provides the main functionality. It needs to be installed
to run the examples of \Cref{sec:a-concrete-example}. These examples have no
dependencies and make minimal use of Coq's standard library. All learning is
done within one file, making them a simple use-case. Things become more
complicated when one starts to use the standard library and libraries defined in
external dependencies. Although Tactician will keep working normally in those
situations, by default, it does not learn from proofs in these libraries. Hence,
Tactician's ability to synthesize proofs for lemmas concerning the domain of
those libraries will be severely limited.
The main question to remedy this situation is where Tactician should get its
learning data from. As explained in \Cref{sec:mode-of-operation}, Tactician
saves the tactic database of a library in the compiled \texttt{.vo} binaries.
This database becomes available as soon as the compiled library is loaded.
However, this only works if the library was compiled with Tactician enabled,
which is the case neither for Coq's standard library nor most external packages.
Hence, we need to instrument these libraries by recompiling them while Tactician
is loaded. Loading Tactician amounts to finding a way of convincing Coq to
pre-load the library \texttt{Tactician.Ltac1.Record} before starting the
compilation process.
\subsubsection{External Dependency Instrumentation}
An external dependency can be any collection of Coq source files together with
an arbitrary build system. Injecting the loading of
\texttt{Tactician.Ltac1.Record} into the build process automatically is not
possible in general. However, we do provide a simple solution for the most
common situation. Coq provides package developers with a utility called
\texttt{coq\_makefile} that automatically generates a Makefile to build and
install their files. This build system is usually packaged up with Opam to be
released on the Coq Package Index. Although this package index does not require
packages to use \texttt{coq\_makefile}, most do this in practice.
Makefiles generated by \texttt{coq\_makefile} are highly customizable through
environment variables. Tactician provides command-line utilities called
\texttt{tactician enable} and \texttt{tactician disable} that configure Opam to
automatically inject Tactician through these environment variables. When
building packages without Opam, the user can modify the environment by running
\texttt{eval \$(tactician inject)} before building. This solution will suffice
to instrument most packages created using \texttt{coq\_makefile}, as long as
authors do not customize the resulting build file too heavily. We will add
support for Coq's new Dune build system~\cite{dune} when it has stabilized. For
more stubborn packages, rather aggressive methods of injecting Tactician are
also possible but, in general, packages that circumvent instrumentation are
always possible. Therefore, we do not provide built-in solutions for those
cases.
\subsubsection{Standard Library Instrumentation}
In order to instrument Coq's standard library, it also needs to be recompiled
with \texttt{Tactician.Ltac1.Record} pre-loaded. We provide the Opam package
\texttt{coq-tactician-stdlib} for this purpose. This package does not contain
any code, but simply takes the source files of the installed standard library
and recompiles them. It then promptly commits the Cardinal Sin of Package
Management by overwriting the original binary \texttt{.vo} files of the standard
library. We defend this choice by noting that (1) the original files will be
backed up and restored when \texttt{coq-tactician-stdlib} is removed and (2) the
alternative of installing the recompiled standard library in a secondary
location is even worse. This choice would cause a rift in the user's local
ecosystem of packages, with some packages relying on the original standard
library and some on the recompiled one. Coq will refuse to load two packages
from the rivaling ecosystems citing ``incompatible assumptions over the standard
library,'' forever setting them apart.
Even with our choice of overwriting the standard library, an ecosystem rift
still occurs if packages depending on Coq already pre-existed. To resolve this,
Tactician ships with the command-line utility \texttt{tactician recompile} that
helps the user find and recompile these packages.
\subsubsection{Tactician Usage within Packages}
In order to use Tactician's \texttt{suggest} and \texttt{search} tactics, the
library \texttt{Tactician.Ltac1.Tactics} needs to be loaded. However, we
strongly advise against loading this library directly, for two reasons. (1) If a
development \texttt{X} that uses Tactician is submitted to the Coq Package Index
as \texttt{coq-x}, an explicit dependency on \texttt{coq-tactician} is needed.
This dependency can be undesirable due to users potentially being unwilling to
install Tactician. (2) It would undermine the build reproducibility of the
package. Even though \texttt{coq-tactician} would be installed as a dependency
when \texttt{coq-x} is installed, there is no way to ensure that Tactician has
instrumented the other dependencies of the package. Hence, it is likely that
Tactician will be operating with a smaller tactic database, reducing its ability
to prove lemmas automatically.
Instead, the package \texttt{coq-x} should depend on
\texttt{coq-tactician-dummy}. This package is extremely simple, containing one
30-line library called \texttt{Tactician.\allowbreak Ltac1Dummy}. It provides
alternative versions of Tactician tactics that act as dummies of the real
version. Tactics \texttt{suggest} and \texttt{search} will not perform any
action. However, tactic \texttt{search failing $\langle$...$\rangle$}, described
in \Cref{sec:mode-of-operation}, will still be able to complete a proof using
its cache (but without the ability to search for new proofs in case of failure).
A released package can thus only employ cached searches. This way, any build
will be reproducible.
During development, the real version of Tactician should be loaded to gain its
full power. Instead of loading it explicitly through a \texttt{Require} in
source files, we recommend that users load it through the \texttt{coqrc} file.
Coq will automatically process any vernacular defined in this file at startup.
The command-line utility \texttt{tactician enable} will assist in adding the
correct vernacular to the \texttt{coqrc} file.
\section{Technical Implementation}
\label{sec:technical-implementation}
In this section, we provide a peek behind the curtains of Tactician's technical
implementation and how it is integrated with Coq. A previous publication already
covers the following aspects of Tactician~\cite{blaauwbroek2020tactic}: (1) The
machine learning models used to suggest tactics; (2) an explanation of how data
extracted from Coq is decomposed and transformed for these models; and (3) the
search procedure to synthesize new proofs. These details are therefore omitted
here.
\subsection{Intercepting Tactics}
\label{sec:intercepting-tactics}
Tactician is implemented as a plugin that provides a new proof mode (tactic
language) to Coq. This proof mode contains precisely the same syntactical elements
as the Ltac1 tactic language~\cite{DBLP:conf/lpar/Delahaye00}. The purpose of
the proof mode is to intercept and decompose executed tactics and save them in
Tactician's database. After interception, the tactics are redirected back to the
regular Ltac1 engine. By loading the library \texttt{Tactician.Ltac1.\allowbreak
Record}, this proof mode is activated
instead of the regular Ltac1 language.
Note that Ltac1 is the most popular but by no means the only tactic language
for
Coq~\cite{DBLP:journals/pacmpl/KaiserZKRD18,DBLP:conf/esop/MalechaB16,DBLP:journals/jfrea/GonthierM10,pedrot2019ltac2}.
All these languages are compiled into a proof monad implemented on the OCaml
level~\cite{DBLP:journals/jlp/KirchnerM10}. It would be preferable to instrument
the proof monad directly as this would enable us to record tactics from all
languages at once. It appears that this is impossible, though, because the
structure of the monadic interface does not allow us to recover high-level
tactical units such as decision procedures, even when implemented as the most
general Free Monad. As such, Tactician only supports Ltac1 at the moment. In the
future, we intend to provide improved support for
SSreflect~\cite{DBLP:journals/jfrea/GonthierM10} and support for recording the
new Ltac2 language~\cite{pedrot2019ltac2}.
\subsection{State Synchronization}
\label{sec:state-synchronization}
When recording tactics in interactive mode, it is important to synchronize the
tactic database with the undo/redo actions of the user, both from a theoretical
and practical perspective. In theory, if a user undoes a proof step, this
represents a mistake made by the user, meaning that the recorded information in
the database is also a mistake. In practice, keeping such information will lead
to problems in compilation mode because the database will be smaller due to the
lack of undo/redo actions. Therefore \texttt{search}es that succeeded in
interactive mode may not succeed in compilation mode. Below, we explain how Coq
and Tactician deal with state synchronization.
Internally, Coq ships with a state manager that keeps track of all state
information generated when vernacular commands are executed. This information
includes, for example, definitions, proofs, and custom tactic definitions. This
data is automatically synchronized with the user's interactive movement through
the document, and saved to the binary \texttt{.vo} file during compilation. All
data structures registered with the state manager are expected to be persistent
(in the functional programming sense~\cite{DBLP:journals/jcss/DriscollSST89}).
The copy-on-write semantics of such data structures allow the state manager to
easily keep a history of previous versions and revert to them on demand.
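For intuition, the following minimal Python sketch (ours, not Coq's actual
state manager) shows why persistence makes history-keeping and reverting
cheap; a realistic implementation would share structure between versions
instead of copying whole dictionaries:
\begin{minted}{python}
class History:
    """Keep every version of a persistent map; reverting is just indexing."""
    def __init__(self):
        self.versions = [{}]                 # version 0: the empty state

    def update(self, key, value):
        new = dict(self.versions[-1])        # copy-on-write snapshot
        new[key] = value
        self.versions.append(new)

    def revert(self, version):
        del self.versions[version + 1:]      # undo back to `version`

h = History()
h.update("lemma1", "proved")
h.update("lemma2", "proved")
h.revert(1)                                  # lemma2's record is gone
\end{minted}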
For Tactician, registering data structures with the state manager to ensure
proper synchronization is awkward, because the state manager assumes that
tactics have no side-effects outside of modifications to the proof state. Hence,
any data registered with the state manager is discarded as soon as the current
proof has been finished. Tactician solves this by tricking Coq into thinking
that tactics are side-effecting vernacular commands, convincing it to re-execute
all tactics at \texttt{Qed} time to properly register the side-effects. However,
as a consequence, these tactics will modify the proof state a second time, at a
time when this is not intended. This is a likely source of future bugs for which
a permanent solution is yet to be found.
\section{Tactician as a Machine Learning Platform}
\label{sec:tactician-as-a-machine-learning-platform}
Apart from serving as a tool for end-users, Tactician also functions as a
machine learning platform. A simple OCaml interface to add a new learning model
to Tactician is provided. The learning task of the model is to predict a list of
tactics for a given proof state. When registering a new model, Tactician will
automatically take advantage of it during proof search. Our interface hides
Coq's internal complexities while being as general as possible. We encourage
everyone to implement their favorite learning technique and try to beat the
built-in model. Tactician's performance can easily be benchmarked on the
standard library and other packages. The signature of a machine learning model
is as follows:
\begin{minted}{ocaml}
type sentence = Node of string * sentence list
type proof_state =
{ hypotheses : (id * sentence) list
; goal : sentence }
type tactic
val tactic_sentence : tactic -> sentence
val local_variables : tactic -> id list
val substitute : tactic -> id_map -> tactic
module type TacticianModelType = sig
type t
val create : unit -> t
val add : t -> before:proof_state -> tactic -> after:proof_state -> t
val predict : t -> proof_state -> (float * tactic) list
end
val register_learner : string -> (module TacticianModelType) -> unit
\end{minted}
A \texttt{sentence} is a very general tree data type able to encode all of Coq's
internal syntax trees, such as those of terms and tactics. Node names of syntax
trees are converted into \texttt{string}s. This way, most semantic information
is preserved using a much simpler data type that is suitable for most machine
learning techniques. Proof states are encoded as a list of named hypothesis
sentences and a goal sentence. In this case, each sentence represents a Gallina term.
We abstract from some of Coq's proof state complexities such as the shelf and
the unification map.
Tactician represents \texttt{tactic}s as an abstract type that can be inspected
as a sentence using \texttt{tactic\_sentence}. Since the goal of this interface
is to \textit{predict} tactics but not \textit{synthesize} tactics, it is not
possible to modify them (this would seriously complicate the interface). There
is one exception. We provide a way to extract a list of variables that refer to
the local context of a proof. The local variables of a tactic can also be
updated using a simultaneous substitution. Such substitutions will allow for a
limited form of parameter prediction.
We think that local variable prediction is the only kind of parameter prediction
that makes sense in Tactician's context. The only other major classes of
parameters are global lemma names and complete Gallina terms. Predicting
complete terms is known to be very difficult and would unnecessarily complicate
the interface. Predicting names of global lemmas is possible, but does not
appear to be very useful because lemma names are almost always directly
associated with basic tactics like \texttt{apply lem} or \texttt{rewrite lem}.
Predicting parameters for these tactics is counter-productive because their
semantics are mostly dependent on the definition of the lemma. Hence, it is
better to view the incarnations of such tactics with different lemmas as
arguments as completely separate tactics.
Finally, implementing a learning model entails implementing the module type
\texttt{TacticianModelType} and registering it with Tactician. This module
requires an implementation of a database type \texttt{t}. For reasons explained
in \Cref{sec:state-synchronization}, this database needs to be persistent.
Tactician will \texttt{add} \texttt{tactic}s to the database, together with the
\texttt{proof\_state} before and after the tactic was applied. The machine
learning task of the model is to \texttt{predict} a weighted list of tactics
that are likely applicable to a previously unseen proof state.
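For concreteness, here is a deliberately naive Python analogue of such a model
(the OCaml signature above is the real interface; this Python rendering, and
the idea of predicting by global tactic frequency while ignoring the proof
state, are purely our own sketch). Note that \texttt{add} returns a fresh
model, respecting the persistence requirement:
\begin{minted}{python}
from collections import Counter

class FrequencyModel:
    """Toy online learner: rank tactics by global usage frequency."""
    def __init__(self, counts=None):
        self.counts = counts if counts is not None else Counter()

    def add(self, before, tactic, after):
        new = self.counts.copy()  # persistent update: return a new model
        new[tactic] += 1
        return FrequencyModel(new)

    def predict(self, proof_state):
        total = sum(self.counts.values()) or 1
        # weighted list of (score, tactic), most frequent first
        return [(n / total, t) for t, n in self.counts.most_common()]

model = FrequencyModel().add(None, "intros", None).add(None, "intros", None)
print(model.add(None, "reflexivity", None).predict(None))
\end{minted}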
The current interface only allows for models that support online learning
because database entries are \texttt{add}ed one by one in interactive mode. We
justify this by the user-friendliness requirements from
\Cref{sec:design-decisions}. However, we realize that together with the
persistence requirement, this places considerable limitations on the kind of
learning models that can be employed. In the future, we intend to support a
secondary interface that can be used to create offline models employing batch
learning on large Coq packages in their entirety.
\section{Case Study}
\label{sec:case-studies}
The overall performance of our tactical search on the full Coq Standard Library
is reported in a previous publication~\cite{blaauwbroek2020tactic}, which also
reports performance on various parts of the library. The best-performing version
of our learning model can prove 34.0\% of the library lemmas when using a 40s
time limit. Six different versions together prove 39.3\%. The union with all
CoqHammer methods achieves 56.7\%.\footnote{CoqHammer's eight methods prove
together 40.8\%, with the best proving 28.8\%.}
Here we show an example of a nontrivial proof found by Tactician. The system was
asked to automatically find the proof of the following lemma from the library
file
\texttt{Structures/GenericMinMax.v},\footnote{\url{https://coq.inria.fr/library/Coq.Structures.GenericMinMax.html}}
where facts about general definitions of \texttt{min} and \texttt{max} are
proved.
\begin{minted}{coq}
Lemma max_min_antimono f :
Proper (eq==>eq) f -> Proper (le==>flip le) f ->
forall x y, max (f x) (f y) == f (min x y).
\end{minted}
Tactician's learning model evaluated the following two lemmas as similar to what
has to be proven:
\begin{minted}{coq}
Lemma min_mono f :
(Proper (eq ==> eq) f) -> (Proper (le ==> le) f) ->
forall x y, min (f x) (f y) == f (min x y).
intros Eqf Lef x y.
destruct (min_spec x y) as [(H,E)|(H,E)]; rewrite E;
destruct (min_spec (f x) (f y)) as [(H',E')|(H',E')]; auto.
- assert (f x <= f y) by (apply Lef; order). order.
- assert (f y <= f x) by (apply Lef; order). order.
Qed.
Lemma min_max_modular n m p :
min n (max m (min n p)) == max (min n m) (min n p).
intros. rewrite <- min_max_distr.
destruct (min_spec n p) as [(C,E)|(C,E)]; rewrite E; auto with *.
destruct (max_spec m n) as [(C',E')|(C',E')]; rewrite E'.
- rewrite 2 min_l; try order. rewrite max_le_iff; right; order.
- rewrite 2 min_l; try order. rewrite max_le_iff; auto.
Qed.
\end{minted}
The trace through the proof search tree that resulted in a proof is as follows:
\begin{verbatim}
max_min_antimono .0.0.0.5.5.2.1.0.5.1.5.1
\end{verbatim}
This trace represents, for every choice point in the search tree, which of
\texttt{suggest}'s ranked suggestions was used to reach the proof. The proof
search went into depth 12 and the first three tactics used in the final proof
are those with the highest score as recommended by the learning model, which
most likely followed the proof of \texttt{min\_mono}. However, after that, it
had to diverge from that proof, using only the sixth-best ranked tactic twice in
a row. This nontrivial search continued for the next seven tactical steps,
combining mostly tactics used in the two lemmas and some other tactics. The
search finally yielded the following proof of \texttt{max\_min\_antimono}.
\begin{minted}{coq}
intros Eqf Lef x y. destruct (min_spec x y) as [(H, E)|(H, E)]. rewrite E.
destruct (max_spec (f x) (f y)) as [(H', E')| (H', E')].
assert (f y <= f x) by (apply Lef; order). order. auto. rewrite E.
destruct (max_spec (f x) (f y)) as [(H', E')| (H', E')]. auto.
assert (f x <= f y) by (apply Lef; order). order.
\end{minted}
Note that the original proof of the lemma is quite similar, but shorter and
without some redundant steps. Such redundant steps are known to occur in
systems like Tactician, for example in the
TacticToe~\cite{DBLP:conf/lpar/GauthierKU17} system
for HOL4~\cite{DBLP:conf/tphol/SlindN08}.
\begin{minted}{coq}
intros Eqf Lef x y. destruct (min_spec x y) as [(H,E)|(H,E)]; rewrite E;
destruct (max_spec (f x) (f y)) as [(H',E')|(H',E')]; auto.
- assert (f y <= f x) by (apply Lef; order). order.
- assert (f x <= f y) by (apply Lef; order). order.
\end{minted}
\section{Related Work}
\label{sec:related-work}
There exist quite a few machine learning systems for Coq and other interactive
theorem provers. The most significant distinguishing factor of Tactician from
other systems for Coq is its user-friendliness.
There are several other systems that are interesting, but rather challenging to
install and use for end-users. They often depend on
external tools such as machine learning toolkits and automatic theorem provers.
Some systems need a long time to train their machine learning
models---preferably on dedicated hardware. Those are often not geared towards
end-users at all but rather towards the Artificial Intelligence community.
Tactician takes its main inspiration from the
TacticToe~\cite{DBLP:conf/lpar/GauthierKU17} system for
HOL4~\cite{DBLP:conf/tphol/SlindN08} which learns tactics expressed in the
Standard ML language. Using this knowledge, it can then automatically search for
proofs by predicting tactics and their arguments. Our work is similar both in
doing a learning-guided tactic search and by its complete integration in the
underlying proof assistant without relying on external machine learning
libraries and specialized hardware.
Below is a short list of machine learning systems for the Coq theorem prover.
\begin{description}
\item[ML4PG] provides tactic suggestions by clustering together various
statistics extracted from interactive
proofs~\cite{DBLP:journals/corr/abs-1212-3618}. It is integrated with the
Proof General~\cite{DBLP:conf/tacas/Aspinall00} proof editor and requires
connections to Matlab or Weka.
\item[SEPIA] provides proof search using tactic predictions and is also
integrated with Proof General~\cite{DBLP:conf/cade/GransdenWR15}. Note,
however, that its proof search is only based on tactic traces and does not
make predictions based on the proof state.
\item[Gamepad] is a framework that integrates with the Coq codebase and allows
machine learning to be performed in Python~\cite{DBLP:conf/iclr/HuangDSS19}.
It uses recurrent neural networks to make tactic predictions and to evaluate
the quality of a proof state. The system is able to synthesize proofs in the
domain of algebraic rewriting. Gamepad is not geared towards end-users.
\item[CoqGym] extracts tactic and proof state information on a large scale and
uses it to construct a deep learning model capable of generating full tactic
scripts~\cite{DBLP:conf/icml/YangD19}. CoqGym's evaluation uses a time
limit of 600s, which is impractically high for Coq practitioners. Still, it is
significantly weaker than CoqHammer. A probable cause is the slowness of deep
neural networks which is common to most proving experiments geared towards the
deep learning community.
\item[Proverbot9001] is a proof search system for Coq based on a neural
architecture~\cite{DBLP:journals/corr/abs-1907-07794}. The system is evaluated
on the verified CompCert compiler~\cite{DBLP:journals/cacm/Leroy09}. It is
reported that Proverbot9001's architecture is a significant improvement over
CoqGym.
\item[CoqHammer] is a machine learning and proving tool in the general
\emph{hammers}~\cite{DBLP:journals/jfrea/BlanchetteKPU16,DBLP:journals/jar/KaliszykU14,DBLP:journals/jar/BlanchetteGKKU16,DBLP:journals/jar/KaliszykU15a}
category designed for Coq~\cite{DBLP:journals/jar/CzajkaK18}. Hammers
capitalize on the capabilities of automatic theorem
provers~\cite{DBLP:conf/lpar/Schulz13,DBLP:conf/tacas/MouraB08,DBLP:conf/cade/RiazanovV99}
to assist ITPs. To this end, learning-based premise selection is used to
translate an ITP problem to the ATP's target language (typically First Order
Logic). A proof found by the ATP can then be reconstructed in the proof
assistant. CoqHammer is a maintained system that is well-integrated into Coq
and only requires a connection to an ATP.
It has similar performance to Tactician but proves different lemmas, making
these systems complementary~\cite{blaauwbroek2020tactic}.
\end{description}
\section{Further Work and Conclusion}
\label{sec:conclusion}
We have presented Tactician, a seamless and interactive tactic learner and
prover for Coq. The machine learning perspective has been described in a
previous publication~\cite{blaauwbroek2020tactic}. We showed how Tactician is an
easy-to-use tool that can be employed by the user with minimal installation
effort. A clear approach to using the system in large developments has also been
outlined. With its current machine learning capabilities, we expect Tactician to
help users with their proving efforts significantly. Finally, we presented a
powerful machine learning interface that will allow researchers to bring their
advanced learning models to Coq users while being isolated from Coq's internal
complexities. We expect this to be of considerable utility to both the
artificial intelligence community and Coq users.
There are many future research directions. There is a never-ending quest to
improve the built-in learning model. With better features and stronger (but
still fast) learners such as boosted trees (used in
ENIGMA-NG~\cite{DBLP:conf/cade/ChvalovskyJ0U19})
we hope to push Tactician's performance
over the standard library towards 50\%. Apart from this, we expect to improve
support for SSreflect and to introduce support for the new Ltac2 language. In
the future, the machine learning interface will be expanded to allow for batch
learning. Additionally, we would like to incorporate the tactic history
(\textit{memory}) of the current lemma into the learning model, similar to
SEPIA. Short-term memory will allow Tactician to modify its suggestions based on
the tactics that were previously executed in the current proof.
\bibliographystyle{plainurl}
\section{Introduction}
\whizard\ is a multi-purpose event generator for collider
physics~\cite{Kilian:2007gr}. It is a very general framework for all
types of colliders, but with a special emphasis on the physics program
at lepton colliders, and has been used for many studies and design
reports for e.g. ILC, CLIC and
FCC-ee~\cite{Fujii:2015jha,deBlas:2018mhx,Baer:2013cma,Behnke:2013lya,Abada:2019zxq}.
Hard scattering process matrix elements are generated with \whizard's
intrinsic (tree-level) matrix element generator
\texttt{O'Mega}~\cite{Moretti:2001zz}, using the color-flow formalism
for QCD~\cite{Kilian:2012pz}. It supports all particles up to spin 2,
and also fermion-number violating
vertices~\cite{Ohl:2002jp,AguilarSaavedra:2005pw,Hagiwara:2005wg,Kalinowski:2008fk}.
\texttt{O'Mega} can write matrix-element code as compiled process code
(libraries) or as byte-code instructions in the form of a virtual
machine~\cite{Nejad:2014sqa}. The latter produces very small and
efficient matrix element instructions. The NLO automation will be discussed in
Sec.~\ref{sec:nlo}. \whizard\ comes with two different final- and
initial-state parton shower implementations, a $k_T$-ordered shower as
well as an analytic parton shower~\cite{Kilian:2011ka}. For LC
simulations, \whizard\ ships with the final \texttt{Pythia6}
version~\cite{Sjostrand:2006za} for shower and hadronization; it also
has a full-fledged interface to
\texttt{Pythia8}~\cite{Sjostrand:2014zea}. This is very handy as it
directly transfers data between the two event records of the
generators and allows \whizard\ to use all of \texttt{Pythia8}'s
machinery for matching and merging. \whizard\ also automatically
assigns underlying resonances to full off-shell processes and gives
the correct information of resonant shower systems to the parton
shower.
One of the special features of \whizard\ is its framework for the
support of lepton collider physics, including electron PDFs with
resummation of soft photons to all orders and hard-collinear photons
up to third order in $\alpha$, the generation of ISR photon $p_T$
spectra, sampling of lepton collider beam spectra~\cite{Ohl:1996fi},
proper simulation of polarized beams, crossing angles and
photon-induced background processes.
\whizard\ has a large number of hard-coded Beyond the Standard Model
(BSM) models. The newest development for new physics, especially
regarding completely general Lorentz tensor structures, will be
described in Sec.~\ref{sec:bsm}.
\section{New physics and technical features}
\subsection{Performance and integration, technical features}
\whizard\ has a very modular infrastructure that allows one to easily
exchange different components: there are several different phase-space
algorithms implemented, as well as several different Monte Carlo
integration options. Besides the traditional \texttt{VAMP}
integrator~\cite{Ohl:1998jn}, there is now a conceptually identical
implementation generalized to an MPI-based parallelization. In
contrast to event generation which can always be trivially
parallelized, adaptive phase space integration cannot so easily
parallelized, and is a major bottleneck for high-multiplicity tree-
and especially loop-level processes. This
\texttt{VAMP2} integrator~\cite{Brass:2018xbv} will now be further
improved with a dynamic load balancer that allows for non-blocking
communication between the different workers. The new setup will be
released in version 3.0$\alpha$, cf. below. Even without the load
balancer speed-ups between 10 and 100 are observed, depending on the
complexity of processes.
Further technical improvements are the finalization of the proper event
headers for the LCIO event interface for the LC software framework, as
well as the completion of the interface to \texttt{HepMC3}. Rescanning
of event files in order to recalculate hard matrix elements without
recalculating the phase space now also works with beam spectra and
structure functions. Alternative weights (squared matrix elements) can
now be written out not only in LHE and HepMC formats, but also to LCIO.
\subsection{Beyond the standard model physics}
\label{sec:bsm}
Besides the full SM samples for TESLA, ILC, CLIC and CEPC,
\whizard\ has been extensively used for BSM simulations where it
contains e.g. complete implementations of Little Higgs
models~\cite{Kilian:2003xt,Kilian:2004pp,Kilian:2006eh,Reuter:2012sd,Reuter:2013iya,Dercks:2018hgz}. Another
interesting feature is
\whizard's ability to calculate unitarity constraints for vector boson
scattering (VBS) and multi-boson processes and to deliver unitarized
\begin{table}
\begin{tiny}
\def1.05{1.05}
\begin{tabular}{l l l l}
\toprule{}%
Process & $\sigma^{\text{LO}}[{\ensuremath\rm pb}]$ & $\sigma^{\text{NLO}}[{\ensuremath\rm pb}]$ & $K$ \\
\midrule{}%
$pp\to jj$ & \tablelinepp{1.157}{2}{1.604}{7}{6} \\
\midrule{}%
$pp\to Z$ & \tablelinepp{4.2536}{3}{5.4067}{2}{4} \\
$pp\to Zj$ & \tablelinepp{7.207}{2}{9.720}{17}{3} \\
$pp\to Zjj$ & \tablelinepp{2.352}{8}{2.735}{9}{3} \\
\midrule{}%
$pp\to W^\pm$ & \tablelinepp{1.3750}{5}{1.7696}{9}{5} \\
$pp\to W^\pm j$ & \tablelinepp{2.043}{1}{2.845}{6}{4} \\
$pp\to W^\pm jj$ & \tablelinepp{6.798}{7}{7.93}{3}{3} \\
\midrule{}%
$pp\to ZZ$ & \tablelinepp{1.094}{2}{1.4192}{32}{1} \\
$pp\to ZZj$ & \tablelinepp{3.659}{2}{4.820}{11}{0} \\
$pp\to ZW^\pm$ & \tablelinepp{2.775}{2}{4.488}{4}{1} \\
$pp\to ZW^\pm j$ & \tablelinepp{1.604}{6}{2.103}{4}{1} \\
$pp \to W^+W^- (4f)$ & \tablelinepp{0.7349}{7}{1.027}{1}{2} \\
$pp \to W^+W^-j \;(4f)$ & \tablelinepp{2.868}{1}{3.733}{8}{1} \\
$pp \to W^+W^+jj$ & \tablelinepp{1.483}{4}{2.238}{6}{-1} \\
$pp \to W^-W^-jj$ & \tablelinepp{6.755}{4}{9.97}{3}{-1} \\
\midrule{}%
$pp \to W^+W^-W^\pm (4f)$ & \tablelinepp{1.309}{1}{2.117}{2}{-1} \\
$pp \to ZW^+W^-(4f)$ & \tablelinepp{0.966}{2}{1.682}{2}{-1} \\
$pp \to W^+W^-W^\pm Z (4f)$ & \tablelinepp{0.642}{2}{1.240}{2}{-3} \\
$pp \to W^\pm ZZZ$ & \tablelinepp{0.588}{2}{1.229}{2}{-5} \\
\midrule{}%
$pp\to \ttb$ & \tablelinepp{4.588}{2}{6.740}{9}{2} \\
$pp\to \ttb j$ & \tablelinepp{3.131}{3}{4.194}{9}{2} \\
$pp\to \ttb\ttb$ & \tablelinepp{4.511}{2}{9.070}{9}{-3} \\
$pp\to \ttb Z$ & \tablelinepp{5.281}{8}{7.639}{9}{-1} \\
& \\ & \\ & \\
\bottomrule{}
\end{tabular}
\quad%
\def1.05{1.05}
\begin{tabular}{l l l l}
\toprule{}%
Process & $\sigma^{\text{LO}}[{\ensuremath\rm fb}]$ & $\sigma^{\text{NLO}}[{\ensuremath\rm fb}]$ & $K$ \\
\midrule{}%
$e^+e^-\to jj$ & \tableline{622.73}{4}{639.41}{9}{} \\
$e^+e^-\to jjj$ & \tableline{342.4}{5}{318.6}{7}{} \\
$e^+e^-\to jjjj$ & \tableline{105.1}{4}{103.0}{6}{} \\
$e^+e^-\to jjjjj$ & \tableline{22.80}{2}{24.35}{15}{} \\
\midrule{}%
$e^+e^-\to\bbb$ & \tableline{92.32}{1}{94.78}{7}{} \\
$e^+e^-\to\bbb \bbb$ & \tableline{1.64}{2}{3.67}{4}{1} \\
$e^+e^-\to\ttb$ & \tableline{166.4}{1}{174.53}{6}{} \\
$e^+e^-\to\ttb j$ & \tableline{48.3}{2}{53.25}{6}{} \\
$e^+e^-\to\ttb jj$ & \tableline{8.612}{8}{10.46}{6}{} \\
$e^+e^-\to\ttb jjj$ & \tableline{1.040}{1}{1.414}{10}{} \\
$e^+e^-\to\ttb \ttb$ & \tableline{6.463}{2}{11.91}{2}{4} \\
$e^+e^-\to\ttb \ttb j$ & \tableline{2.722}{1}{5.250}{14}{5} \\
$e^+e^-\to\ttb \bbb$ & \tableline{0.186}{1}{0.293}{2}{} \\
\midrule{}%
$e^+e^-\to \ttb H$ & \tableline{2.022}{3}{1.912}{3}{} \\
$e^+e^-\to \ttb Hj$ & \tableline{0.2540}{9}{0.2664}{5}{} \\
$e^+e^-\to \ttb Hjj$ & \tableline{2.666}{4}{3.144}{9}{2} \\
$e^+e^-\to \ttb \gamma$ & \tableline{12.71}{4}{13.78}{4}{} \\
$e^+e^-\to \ttb Z$ & \tableline{4.64}{1}{4.94}{1}{}\\
$e^+e^-\to\ttb Z j$ & \tableline{0.610}{4}{0.6927}{14}{}\\
$e^+e^- \to\ttb Z jj$ & \tableline{6.233}{8}{8.201}{14}{2}\\
$e^+e^-\to\ttb W^\pm jj$ & \tableline{2.41}{1}{3.695}{9}{4}\\
$e^+e^-\to\ttb \gamma \gamma$ & \tableline{0.382}{3}{0.420}{3}{} \\
$e^+e^-\to \ttb \gamma Z$ & \tableline{0.220}{1}{0.240}{2}{}\\
$e^+e^-\to\ttb \gamma H$ & \tableline{9.748}{6}{9.58}{7}{2} \\
$e^+e^-\to \ttb Z Z$ & \tableline{3.756}{4}{4.005}{2}{2} \\
$e^+e^-\to \ttb W^+ W^-$ & \tableline{0.1370}{4}{0.1538}{4}{} \\
$e^+e^-\to \ttb H H$ & \tableline{1.367}{1}{1.218}{1}{2} \\
$e^+e^-\to \tth Z$ & \tableline{3.596}{1}{3.581}{2}{2}\\
\bottomrule{}
\end{tabular}
\end{tiny}
\caption{\label{tab:nlo_comp}
Selection of validated processes at LO and NLO QCD with
\whizard. $pp$ processes (left) are for 13 TeV, $e^+ e^-$ processes
(right) are for 1 TeV fixed beams. The scale is the scalar transverse energy,
$H_T$. Jets are clustered with the anti-$k_T$ algorithm and jet
radius $\Delta R = 0.5$, with cuts of $p_T > 30\,\text{GeV}$ for the Born
jets.}
\end{table}
amplitudes for SMEFT dim-6/dim-8 operators and simplified
models~\cite{Beyer:2006hx,Alboteanu:2008my,Kilian:2014zja,Kilian:2015opv,Fleper:2016frz,Brass:2018hfw},
while precision SM predictions for VBS
can be found in~\cite{Ballestrero:2018anz}. Ongoing work deals with
the automatic calculation of unitarity limits for multiple
(transversal) vector boson production both for hadron and
(high-energy) lepton colliders.
Nowadays, new physics models are almost exclusively included via
automated interfaces, e.g. to
\texttt{FeynRules}~\cite{Christensen:2010wz,Christensen:2008py}.
These explicit interfaces have now been superseded by \whizard's
implementation of its UFO~\cite{Degrande:2011ua}
interface. \whizard\ now (with the upcoming
versions 2.8.3 and 3.0$\alpha$) supports this completely including
spins 1/2, 3/2, 0, 1, 2, 3, 4, 5, automatic construction of 5-, 6-ary
and even higher vertices, fermion-number violating vertices,
four-fermion vertices (and higher), SLHA-type input files for BSM
models and customized propagators defined in the UFO files. This makes
the old interfaces to FeynRules and SARAH~\cite{Staub:2013tta}
deprecated; however, they will be kept for backwards compatibility.
\subsection{Next-to-leading order QCD automation}
\label{sec:nlo}
\whizard\ started first with hard-coded next-to-leading order (NLO)
projects regarding QED and electroweak corrections for SUSY
production~\cite{Kilian:2006cj,Robens:2008sa} and NLO QCD corrections for $pp \to
b\bar{b}b\bar{b}$~\cite{Binoth:2009rv,Greiner:2011mp}. Now,
\whizard\ is based on an automated implementation of the FKS
subtraction algorithm~\cite{Frixione:1995ms}. In this automated
implementation, only the virtual amplitudes are taken from external one-loop
providers (OLPs; there are interfaces to
\texttt{Openloops}~\cite{Cascioli:2011va,Buccioni:2019sur},
\texttt{Recola}~\cite{Actis:2016mpe} and
\texttt{GoSam}~\cite{Cullen:2014yla}), while subtraction terms are
automatically generated in \whizard.
The NLO QCD automation has been fully validated, as can be seen from
Table~\ref{tab:nlo_comp}. First applications of this automated
interface have been devoted to linear collider top physics in the
continuum~\cite{Nejad:2016bci} and in the threshold
region~\cite{Bach:2017ggt}. These examples also show NLO calculations
with factorized processes as well as NLO QCD decays. Recently, the
selection of heavy-flavor jets in the jet clustering (bottom and charm)
as well as a veto for them has been added, and also the possibility
for photon isolation to separate perturbative QCD from nonperturbative
effects in photon-jet fragmentation. The final validation is now being
completed; there are still a few open issues, especially regarding
ease of use, but an alpha version of \whizard\ 3.0 officially
releasing the NLO QCD automation is planned for March
2020. \whizard\ allows for a completely automatized POWHEG-type
matching (and damping)~\cite{Reuter:2016qbi} to the parton shower (for
final state showering). While the corresponding matching for
initial-state showering is being implemented, the work on NLO
electroweak corrections has been started and first total cross
sections for simple processes are already available. Next steps here
are the complete validation, as well as the proper matching to the
higher-order corrections for incoming electron PDFs. Also, the work
for other NLO matching schemes has started.
\subsection{Summary and Outlook}
This is a status report of the close-to-final release version
2.8.2/2.8.3 of the \whizard\ version 2 series, showing intense work on
the complete NLO QCD automation, the completion of
automatic generation of arbitrary Lorentz tensor representations and
the UFO interface, and many technical and convenience developments
driven by the upcoming 250 GeV full SM Monte Carlo mass production for
ILC with 2 ab${}^{-1}$ integrated luminosity.
\section*{Acknowledgments}
This work was funded by the Deutsche Forschungsgemeinschaft under
Germany's Excellence Strategy EXC 2121 ``Quantum Universe'' (project
390833306). JRR wants to thank the organizers for a fruitful and
interesting conference in Sendai, which was followed by an intense and
very productive LC generator group meeting at University of Tokyo.
\baselineskip15pt
\section{Introduction}
Gas dynamical processes are believed to play an important role in
the evolution of astrophysical systems on all length scales.
Smoothed particle hydrodynamics~(SPH) is a powerful gridless
particle method to solve complex fluid-dynamical problems. SPH
has a number of attractive features, such as its low numerical
diffusion in comparison to grid-based methods. A particularly
suitable class of applications for SPH is unbounded astrophysical
problems, especially shock propagation~(see, e.g., Liu \&
Liu 2003). In this paper, the basic principles of SPH are
presented, and simulations of isothermal and
adiabatic shocks are used to test the ability of this numerical
method to reproduce known analytic solutions.
The program is written in Fortran and is highly portable. This
package supports only calculations for isothermal and adiabatic
shock waves. It is possible to modify (and extend) the program
for a wide variety of applications ranging from astrophysics to
fluid mechanics. The program is written in modular form, in the
hope that it will provide a useful tool. I ask only that:
\begin{itemize}
\item If you publish results obtained using some parts of this
program, please consider acknowledging the source of the
package.
\item If you discover any errors in the program or
documentation, please promptly communicate them to the author.
\end{itemize}
\section{Formulation of Shock Waves}
An extremely important problem is the behavior of gases subjected
to compression waves. This happens very often in cases of
astrophysical interest. For example, a small region of gas
suddenly heated by the liberation of energy will expand into its
surroundings. The surroundings will be pushed and compressed.
Conservation of mass, momentum, and energy across a shock front
is given by the Rankine-Hugoniot conditions~(Dyson \& Williams
1997)
\begin{equation}\label{e:rh1}
\rho_1 v_1=\rho_2 v_2
\end{equation}
\begin{equation}
\rho_1 v_1^2+ K\rho_1^\gamma =\rho_2 v_2^2+ K\rho_2^\gamma
\end{equation}
\begin{equation}\label{e:rh3}
\frac{1}{2}v_1^2 + \frac{\gamma}{\gamma-1} K \rho_1^{\gamma-1}=
\frac{1}{2}v_2^2 + \frac{\gamma}{\gamma-1} K \rho_2^{\gamma-1} +Q
\end{equation}
where the equation of state, $p=K\rho^\gamma$, is used. In
adiabatic case, we have $Q=0$, and for isothermal shocks, we will
set $\gamma=1$.
We are interested in considering the collision of two gas sheets
with velocities $v_0$ in the rest frame of the laboratory. In this
reference frame, the post-shock will be at rest and the pre-shock
velocity is given by $v_1=v_0+v_2$, where $v_2$ is the shock front
velocity. Combining equations (\ref{e:rh1})-(\ref{e:rh3}), we have
\begin{equation}\label{e:v2}
v_2=a_0[-\frac{b}{2}+\sqrt{1+\frac{b^2}{4}+(\gamma-1)
(\frac{M_0^2}{2}-q)}]
\end{equation}
where $a_0\equiv \sqrt{\gamma K\rho_1^{\gamma-1}}$ is the sound speed,
$M_0\equiv v_0/a_0$ is the Mach number, $b$ and $q$ are defined as
\begin{equation}
b\equiv \frac{3-\gamma}{2}M_0+ \frac{\gamma-1}{M_0}q\quad ;\quad
q\equiv \frac{Q}{a_0^2}.
\end{equation}
Substituting (\ref{e:v2}) into equation (\ref{e:rh1}), density of
the post-shock is given by
\begin{equation}\label{e:den}
\rho_2=\rho_1\{1+\frac{M_0}{[-\frac{b}{2}+\sqrt{1+\frac{b^2}{4}+(\gamma-1)
(\frac{M_0^2}{2}-q)}]}\}.
\end{equation}
\section{SPH Equations}
Smoothed particle hydrodynamics was invented to simulate
nonaxisymmetric phenomena in astrophysics~(Lucy 1977, Gingold \&
Monaghan 1977). In this method, fluid is represented by $N$
discrete but extended/smoothed particles (i.e. Lagrangian sample
points). The particles are overlapping, so that all the physical
quantities involved can be treated as continues functions both in
space and time. Overlapping is represented by the kernel
function, $W_{ab} \equiv W(\textbf{r}_a-\textbf{r}_b,h_{ab})$,
where $h_{ab} \equiv (h_a+h_b)/2$ is the mean smoothing length of
two particles $a$ and $b$. The continuity, momentum and energy
equation of particle $a$ are~(Monaghan 1992)
\begin{equation}
\rho_a=\sum_b m_b W_{ab}
\end{equation}
\begin{equation}
\frac{d\textbf{v}_a}{dt}=-\sum_b m_b (\frac{p_a}{\rho_a}+
\frac{p_b}{\rho_b}+ \Pi_{ab}) \nabla_a W_{ab}
\end{equation}
\begin{equation}
\frac{du_a}{dt}=\frac{1}{2} \sum_b m_b (\frac{p_a}{\rho_a}+
\frac{p_b}{\rho_b}+ \Pi_{ab}) \textbf{v}_{ab} \cdot \nabla_a
W_{ab}
\end{equation}
where $\textbf{v}_{ab}\equiv \textbf{v}_a- \textbf{v}_b$ and
\begin{equation}
\Pi_{ab}=\cases{
\frac{\alpha v_{sig} \mu_{ab}
+\beta \mu_{ab}^2}{\bar{\rho}_{ab}}, &
if $\textbf{v}_{ab}.\textbf{r}_{ab}<0$,\cr
0 , & otherwise,}
\end{equation}
is the artificial viscosity between particles $a$ and $b$, where
$\bar{\rho}_{ab}= \frac{1}{2}(\rho_a+\rho_b)$ is an average
density, $\alpha\sim 1$ and $\beta\sim 2$ are the artificial
coefficients, and $\mu_{ab}$ is defined as its usual form
\begin{equation}
\mu_{ab}=-\frac{\textbf{v}_{ab}
\cdot\textbf{r}_{ab}}{\bar{h}_{ab}}
\frac{1}{r_{ab}^2/\bar{h}_{ab}^2+\eta^2}
\end{equation}
with $\eta\sim 0.1$ and $\bar{h}_{ab}= \frac{1}{2}(h_a+h_b)$. The
signal velocity, $v_{sig}$, is
\begin{equation}
v_{sig}=\frac{1}{2}(c_a+c_b)
\end{equation}
where $c_a$ and $c_b$ are the sound speed of particles. The SPH
equations are integrated using the smallest time-steps
\begin{equation}
\Delta t_a=C_{cour}\min[ \frac{h_a}{\mid \textbf{v}_a\mid},
(\frac{h_a}{\mid\textbf{a}_a\mid})^{0.5}, \frac{u_a}{\mid du_a/dt
\mid}, \frac{h_a}{\mid dh_a/dt \mid}, \frac{\rho_a}{\mid
d\rho_a/dt \mid}]
\end{equation}
where $C_{cour}\sim 0.25$ is the Courant number.
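To make the density summation concrete, the short Python illustration below
evaluates $\rho_a=\sum_b m_b W_{ab}$ in one dimension with the standard
cubic-spline kernel; the Fortran package uses its own kernel and neighbour
search, and all names here are our choices:
\begin{minted}{python}
import numpy as np

def w_cubic(r, h):
    """Standard 1D cubic-spline kernel (normalisation 2/(3h))."""
    q = np.abs(r) / h
    return (2.0 / (3.0 * h)) * np.where(
        q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))

def density(x, m, h):
    """rho_a = sum_b m_b W(x_a - x_b, h), equal smoothing lengths."""
    dx = x[:, None] - x[None, :]
    return (m[None, :] * w_cubic(dx, h)).sum(axis=1)

x = np.linspace(0.0, 1.0, 101)      # uniformly spaced particles, dx = 0.01
m = np.full_like(x, 0.01)           # mass chosen so that rho = m/dx = 1
print(density(x, m, h=0.02)[50])    # ~1.0 for a particle in the interior
\end{minted}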
\section{Results and Prospects}
The chosen physical scales for length and time are $[l]=3.0 \times
10^{16}\,{\rm m}$ and $[t]=3.0 \times 10^{13}\,{\rm s}$, respectively, so the
velocity unit is approximately $1\,{\rm km\,s^{-1}}$. The gravity constant
is set to $G= 1$, for which the calculated mass unit is $[m]=4.5
\times 10^{32}\,{\rm kg}$. We consider two equal one-dimensional
sheets with extension $x= 0.1\,[l]$, which have an initial uniform
density and temperature of $\sim 4.5\times 10^8\,{\rm m^{-3}}$ and $\sim
10\,{\rm K}$, respectively.
Particles with a positive x-coordinate are given an initial
negative velocity of Mach 5, and those with a negative
x-coordinate are given a Mach 5 velocity in the opposite
direction. In the adiabatic shock, with $M_0=5$, the post-shock
density must be $2.9$, which is obtained from the analytic solution
(\ref{e:den}) with $Q=0$ and $\gamma=2$. The results of the adiabatic
shock are shown in Figs.~1-3. In the isothermal shock, with $M_0=5$,
the post-shock density must be $26.9$, which is obtained from the
analytic solution Equ.~(\ref{e:den}) with $\gamma=1$. The results
of the isothermal shock are shown in Figs.~4-5. The algorithm of the
program is shown in Fig.~6.
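As a quick numerical cross-check of Equs.~(\ref{e:v2}) and (\ref{e:den}), the
short Python snippet below (ours, not part of the Fortran package) reproduces
the quoted post-shock densities:
\begin{minted}{python}
from math import sqrt

def post_shock_density(M0, gamma, q=0.0, rho1=1.0):
    """Post-shock density from the analytic solution, with velocities
    measured in units of the pre-shock sound speed a0."""
    b = 0.5 * (3.0 - gamma) * M0 + (gamma - 1.0) * q / M0
    v2 = -0.5 * b + sqrt(1.0 + 0.25 * b * b
                         + (gamma - 1.0) * (0.5 * M0**2 - q))
    return rho1 * (1.0 + M0 / v2)

print(post_shock_density(5.0, 2.0))  # adiabatic (Q=0, gamma=2): ~2.9
print(post_shock_density(5.0, 1.0))  # isothermal (gamma=1): ~26.9
\end{minted}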
\section{Introduction}\label{sec:intro}
Historically, humans have performed inconsistently in judgemental forecasting \cite{Makridakis2010,TetlockExp2017}, which incorporates subjective opinion and probability estimates into predictions \cite{Lawrence2006}. Yet, human judgement remains essential in cases where pure statistical methods are not applicable, e.g. where historical data alone is insufficient or for one-off, more `unknowable' events \cite{Petropoulos2016,Arvan2019,deBaets2020}. Judgemental forecasting is widely relied upon for decision-making \cite{Nikolopoulos2021}, in myriad fields from epidemiology to national security \cite{Nikolopoulos2015,Litsiou2019}. Effective tools to help humans improve their predictive capabilities thus have enormous potential for impact. Two recent global events -- the COVID-19 pandemic and the US withdrawal from Afghanistan -- underscore this by highlighting the human and financial cost of predictive deficiency. A multi-purpose system which could improve our ability to predict the incidence and impact of events
by as little as 5\%, could save millions of lives and be worth trillions of dollars per year \cite{TetlockGard2016}.
Research on judgemental forecasting (see \cite{Lawrence2006,Zellner2021} for overviews), including the recent, groundbreaking `Superforecasting Experiment' \cite{TetlockGard2016}, is instructive in establishing the desired properties for systems for supporting forecasting.
In addition to reaffirming the importance of fine-grained probabilistic reasoning \cite{Mellers2015}, this literature points to the benefits of some group techniques versus solo forecasting \cite{Landeta2011,Tetlock2014art}, of synthesising qualitative and quantitative information \cite{Lawrence2006}, of combating agents' irrationality \cite{Chang2016} and of high agent engagement with the forecasting challenge, e.g. robust debating \cite{Landeta2011} and frequent prediction updates \cite{Mellers2015}.
Meanwhile, \emph{computational argumentation} (see \cite{AImagazine17,handbook} for recent overviews) is a field of AI which involves reasoning with uncertainty and resolving conflicting information, e.g. in natural language debates. As such, it is an ideal candidate for aggregating the broad, polymorphous set of information involved in judgemental group forecasting. An extensive and growing literature is based on various argumentation frameworks -- rule-based systems for aggregating, representing and evaluating sets of arguments, such as those applied in the contexts of \emph{scheduling} \cite{Cyras_19}, \emph{fact checking} \cite{Kotonya_20} or in various instances of \emph{explainable AI} \cite{Cyras_21}.
Subsets of the requirements for forecasting systems are addressed by individual formalisms, e.g. \emph{probabilistic argumentation}
\cite{Dung2010,Thimm2012,Hunter2013,Fazzinga2018}
may effectively represent and analyse uncertain arguments about the future. However, we posit that a purpose-built argumentation framework for forecasting is essential to effectively utilise
argumentation's reasoning capabilities in this context.
\begin{figure*}
\includegraphics[width=\textwidth]{images/FAF_diagram.png}
\caption{The step-by-step process of a FAF over its lifetime.}
\label{fig:FAFdiag}
\end{figure*}
In this paper, we attempt to cross-fertilise these two as-yet-unconnected academic areas. We draw from the forecasting literature to inform the design of a new computational argumentation approach: \emph{Forecasting Argumentation Frameworks} (FAFs). FAFs
empower (human and artificial) agents to structure debates in real time and to deliver argumentation-based forecasting.
They offer an approach in the spirit of \emph{deliberative democracy} \cite{Bessette1980} to respond to a forecasting problem over time. The steps which underpin FAFs are depicted in Figure \ref{fig:FAFdiag} (referenced throughout) and can be described in simple terms as follows: a FAF is initialised with a time limit (for the overall forecasting process and for each iteration therein) and a pre-agreed `base-rate' forecast $\ensuremath{\mathcal{F}}$ (Stage 1), e.g. based on historical data.
Then, the forecast is revised by one or more (non-concurrent) debates, in the form of `update frameworks' (Stage 2), opened and resolved by participating agents (until the specified time limit is reached). Each update framework begins with a proposed revision to the current forecast (Stage 2a), and proceeds with a cycle of argumentation (Stage 2b) about the proposed forecast, voting on said argumentation, and forecasting. Forecasts deemed `irrational' with a view to agents' argumentation and voting are blocked. Finally, the rational forecasts are aggregated and the result replaces the current group forecast (Stage 2c). This process may be repeated over time in an indefinite number of update frameworks (thus continually revising the group forecast) until the (overall) time limit is reached.
The composite nature of this process enables the appraisal of new
information relevant to the forecasting question as and when it arrives. Rather than confronting an unbounded forecasting question with a diffuse set of possible debates open at once, all agents concentrate their argumentation on a single topic (a proposal) at any given time.
After giving the necessary background on forecasting and argumentation (§\ref{sec:background}), we formalise our \FT{update} framework\FT{s for Stage 2a} (§\ref{sec:fw}). We then give \FT{our} notion of rationality \FT{(Stage 2b)}, along with \FT{our} new method for \FT{aggregating rational forecasts (Stage 2c)} from a group of agents (§\ref{sec:forecasting}) \FT{and FAFs overall}. We explore the underlying properties of \FT{FAFs} (§\ref{sec:props}), before describing \AX{an experiment} with \FT{a prototype implementing} our approach (§\ref{sec:experiments}). Finally, we conclude and suggest potentially fruitful avenues for future work (§\ref{sec:conclusions}).
\section{Background}\label{sec:background}
\subsection{Forecasting}
Studies on the efficacy of judgemental forecasting have shown mixed results \cite{Makridakis2010,TetlockExp2017,Goodwin2019}. Limitations of the judgemental approach are a result of well-documented cognitive biases \cite{Kahneman2012}, irrationalities in human probabilistic reasoning which lead to distortion of forecasts. Manifold methodologies have been explored to improve judgemental forecasting accuracy with varying success \cite{Lawrence2006}. These methodologies include, but are not limited to, prediction intervals \cite{Lawrence1989}, decomposition \cite{MacGregorDonaldG1994Jdwd}, structured analogies \cite{Green2007,Nikolopoulos2015} and unaided judgement \cite{Litsiou2019}. Various group forecasting techniques have also been explored \cite{Linstone1975,Delbecq1986,Landeta2011}, although the risks of groupthink \cite{McNees1987} and the importance of maintaining the independence of each group member's individual forecast are well established \cite{Armstrong2001}. Recent advances in the field have been led by Tetlock and Mellers' superforecasting experiment \cite{TetlockGard2016}, which leveraged \AX{geopolitical} forecasting tournaments and a base of 5000 volunteer forecasters to identify individuals with consistently exceptional accuracy (top 2\%). The experiment\AR{'s} findings underline the effectiveness of group forecasting orientated around debating \cite{Tetlock2014art}, and demonstrate a specific cognitive-intellectual approach conducive to forecasting \cite{Mellers20151,Mellers2015}, but stop short of suggesting a concrete universal methodology for higher accuracy. Instead, Tetlock draws on his own work and previous literature to crystallise a broad set of methodological principles by which superforecasters abide \cite[pg.144]{TetlockGard2016}:
\begin{itemize}
\item \emph{Pragmatic}: not wedded to any idea or agenda;
\item \emph{Analytical}: capable of stepping back from the tip-of-your-nose perspective
and considering other views;
\item \emph{Dragonfly-eyed}: value diverse views and synthesise them into their own;
\item \emph{Probabilistic}: judge using many grades of maybe;
\item \emph{Thoughtful updaters}: when facts change, they change their minds;
\item \emph{Good intuitive psychologists}: aware of the value of checking thinking for cognitive and emotional biases.
\end{itemize}
Research subsequent to the superforecasting experiment has explored optimal forecasting tournament preparation \cite{penn_global_2021,Katsagounos2021} and extended Tetlock and Mellers' approach to answer broader, more time-distant questions \cite{georgetown}. It should be noted that there have been no recent advances on computational tool\AX{kits}
for the field similar to the one proposed in this paper.
\subsection{Computational Argumentation}
We posit that existing argumentation formalisms are not well suited for the aforementioned future-based arguments, which are necessarily semantically and structurally different from arguments about present or past concerns. Specifically, forecasting arguments are inherently probabilistic and must deal with the passage of time and its implications for the outcomes at hand.
Further, several other important characteristics can be drawn from the forecasting literature which render current argumentation formalisms unsuitable, e.g. the paramountcy of dealing with bias (in data and cognitive), forming granular conclusions, fostering group debate and the co-occurrence of qualitative and quantitative arguing.
Nonetheless,
several of these characteristics have been previously explored in argumentation and our formalisation draws from several existing approaches. First and foremost, it draws in spirit from abstract argumentation frameworks (AAFs) \cite{Dung1995}, in that
the arguments' inner contents are ignored and the focus is on the relationships between arguments.
However, we consider arguments of different types and \AX{an additional relation of} support (pro),
\AX{rather than} attack (con) alone as in \cite{Dung1995}.
Past work has also introduced probabilistic constraints into argumentation frameworks. {Probabilistic AAFs} (prAAFs) propose two divergent ways for modelling uncertainty in abstract argumentation using probabilities -- the constellation approach \cite{Dung2010,Li2012} and the epistemic approach \cite{Hunter2013,Hunter2014,Hunter2020}.
These formalisations use probability as a means to assess uncertainty over the validity of arguments (epistemic) or graph topology (constellation), but do not enable reasoning \emph{with} or \emph{about} probability, which is fundamental in forecasting.
In exploring temporality, \cite{Cobo2010} augment AAFs by providing each argument with a limited lifetime. Temporal constraints have been extended in \cite{Cobo2012} and \cite{Baron2014}. Elsewhere, \cite{Rago2017} have used argumentation to model irrationality or bias in
agents. Finally, a wide range of gradual evaluation methods have gone beyond traditional qualitative semantics by measuring arguments' acceptability on a scale (normally
[0,1]) \cite{Leite2011,Evripidou2012,Amgoud2017,Amgoud2018,Amgoud2016}.
Many of these approaches have been unified as Quantitative Bipolar Argumentation Frameworks (QBAFs) in \cite{Baroni2018}.
Amongst existing approaches, of special relevance in this paper are Quantitative Argumentation Debate (QuAD) frameworks \cite{Baroni2015},
i.e. 5-tuples ⟨$\mathcal{X}^a$, $\mathcal{X}^c$, $\mathcal{X}^p$, $\mathcal{R}$, $\ensuremath{\mathcal{\tau}}$⟩
where
$\mathcal{X}^a$ is a finite set of \emph{answer} arguments (to implicit \emph{issues}); $\mathcal{X}^c$ is a finite set of \emph{con} arguments;
$\mathcal{X}^p$ is a finite set of \emph{pro} arguments;
$\mathcal{X}^a$, $\mathcal{X}^c$ and $\mathcal{X}^p$ are pairwise disjoint; $\mathcal{R} \subseteq (\mathcal{X}^c \cup \mathcal{X}^p) \times (\mathcal{X}^a \cup \mathcal{X}^c \cup \mathcal{X}^p)$ is an acyclic binary relation;
$\ensuremath{\mathcal{\tau}}$ : $(\mathcal{X}^a \cup \mathcal{X}^c \cup \mathcal{X}^p) \rightarrow
[0,1]$ is a total function: $\ensuremath{\mathcal{\tau}}(a)$ is the \emph{base score} of $a$.
Here, attackers and supporters of arguments are determined by the pro and con arguments they are in relation with. Formally, for any $a\in\mathcal{X}^a \cup \mathcal{X}^c \cup \mathcal{X}^p$, the set of \emph{con} (\emph{pro}\AX{)} \emph{arguments} of $a$ is $\mathcal{R}^-(a) = \{b\in\mathcal{X}^c|(b,a)\in\mathcal{R}\}$
($\mathcal{R}^+(a) = \{b\in\mathcal{X}^p|(b,a)\in\mathcal{R}\}$, resp.).
Arguments in QuAD frameworks are scored by
the \emph{Discontinuity-Free QuAD} (DF-QuAD) algorithm \cite{Rago2016}, using the argument's intrinsic base score and the \emph{strengths} of its pro/con arguments. \FTn{Given that DF-QuAD is used to define our method (see Def.~\ref{def:conscore}), for completeness we define it formally here.} DF-QuAD's \emph{strength aggregation function}
is defined as $\Sigma : [0,1]^* \rightarrow [0,1]$, where for $\mathcal{S} = (v_1,\ldots,v_n) \in [0,1]^*$:
if $n=0$, $\Sigma(\mathcal{S}) = 0$;
if $n=1$, $\Sigma(\mathcal{S}) = v_1$;
if $n=2$, $\Sigma(\mathcal{S}) = f(v_1, v_2)$;
if $n>2$, $\Sigma(\mathcal{S}) = f(\Sigma(v_1,\ldots,v_{n-1}), v_n)$;
with the \emph{base function} $f : [0,1]\times[0,1] \rightarrow [0,1]$ defined, for $v_1, v_2 \in [0,1]$, as:
$f(v_1,v_2)=v_1+(1-v_1)\cdot v_2 = v_1 + v_2 - v_1\cdot v_2$.
After separate aggregation of the argument's pro/con descendants, the combination function $c : [0,1]\times[0,1]\times[0,1] \rightarrow [0,1]$ combines $v^-$ and $v^+$ with the argument's base score ($v^0$):
$c(v^0,v^-,v^+)=v^0-v^0\cdot\mid v^+ - v^-\mid$ if $v^-\geq v^+$, and
$c(v^0,v^-,v^+)=v^0+(1-v^0)\cdot\mid v^+ - v^-\mid$ if $v^-< v^+$.
The inputs for the combination function are provided by the \emph{score function} $\ensuremath{\mathcal{\sigma}} : \mathcal{X}^a\cup\mathcal{X}^c\cup\mathcal{X}^p \rightarrow [0,1]$, which gives the argument's strength, as follows: for any $\ensuremath{x} \in \mathcal{X}^a\cup\mathcal{X}^c\cup\mathcal{X}^p$:
$\ensuremath{\mathcal{\sigma}}(\ensuremath{x}) = c(\ensuremath{\mathcal{\tau}}(\ensuremath{x}),\Sigma(\ensuremath{\mathcal{\sigma}}(\mathcal{R}^-(\ensuremath{x}))),\Sigma(\ensuremath{\mathcal{\sigma}}(\mathcal{R}^+(\ensuremath{x}))))$,
where if $(\ensuremath{x}_1,\ldots,\ensuremath{x}_n)$ is an arbitrary permutation of the ($n \geq 0$) con arguments in $\mathcal{R}^-(\ensuremath{x})$, $\ensuremath{\mathcal{\sigma}}(\mathcal{R}^-(\ensuremath{x}))=(\ensuremath{\mathcal{\sigma}}(\ensuremath{x}_1),\ldots,\ensuremath{\mathcal{\sigma}}(\ensuremath{x}_n))$ (similarly for pro arguments).
Note that the DF-QuAD notion of $\ensuremath{\mathcal{\sigma}}$ can be applied to any argumentation framework where arguments are equipped with base scores and pro/con arguments. We will do so later, for our novel formalism.
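To make this concrete, the following is a minimal Python sketch of DF-QuAD scoring; the dictionary-based graph encoding and all names are our own illustrative choices, not prescribed by \cite{Rago2016}.
\begin{verbatim}
# A minimal sketch of DF-QuAD scoring (acyclic graphs assumed).
# base maps each argument to its base score in [0,1]; pro and con
# map each argument to the list of its pro/con children.

def aggregate(strengths):
    # Strength aggregation Sigma: fold of the base function f,
    # where f(v1, v2) = v1 + v2 - v1*v2.
    v = 0.0
    for s in strengths:
        v = v + s - v * s
    return v

def score(x, base, pro, con):
    # sigma(x) = c(tau(x), Sigma(con scores), Sigma(pro scores))
    v0 = base[x]
    v_minus = aggregate(score(y, base, pro, con) for y in con.get(x, []))
    v_plus = aggregate(score(y, base, pro, con) for y in pro.get(x, []))
    if v_minus >= v_plus:
        return v0 - v0 * abs(v_plus - v_minus)
    return v0 + (1 - v0) * abs(v_plus - v_minus)
\end{verbatim}
For instance, with base scores of 0.5 throughout, an argument with a single (unattacked) con child and no pro children scores $0.5 - 0.5 \cdot 0.5 = 0.25$.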
\section{Update \AX{F}rameworks}\label{sec:fw}
We begin by defining the individual components of our frameworks, starting with the fundamental notion of a
\emph{forecast}.
\FT{This} is a probability estimate for the positive outcome of a given (binary) question.
\begin{definition}
A \emph{forecast} $\ensuremath{\mathcal{F}}$ is the probability $P(\ensuremath{\mathcal{Q}}=true) \in [0,1]$ for a given \emph{forecasting question} $\ensuremath{\mathcal{Q}}$.
\end{definition}
\begin{example}
\label{FAFEx}
Consider the forecasting question $\ensuremath{\mathcal{Q}}$: \emph{`Will the Tokyo \AX{2020 Summer} Olympics be cancelled/postponed to another year?'}.
\AX{Here, the $true$ outcome amounts to the Olympics being cancelled/postponed (and $false$ to it taking place in 2020 as planned).}
Then, a forecast $\ensuremath{\mathcal{F}}$ may be $P(\ensuremath{\mathcal{Q}}=true)= 0.15$, which amounts to a 15\% probability of the Olympics \BIn{being cancelled/postponed}. \BI{Note that $\ensuremath{\mathcal{F}}$ may have been introduced as part of an update framework (herein described), or as an initial base rate at the outset of a FAF (Stage 1 in Figure \ref{fig:FAFdiag}).}
\end{example}
In the remainder, we will often drop $\ensuremath{\mathcal{Q}}$, implicitly assuming it is given, and use $P(true)$ to stand for $P(\ensuremath{\mathcal{Q}}=true)$.
In order to empower agents to reason about probabilities and thus support forecasting, we need, in addition to
\emph{pro/con} arguments as in QuAD frameworks, two new argument types:
\begin{itemize}
\item
\emph{proposal} arguments,
each about some forecast (and its underlying forecasting question); each proposal argument $\ensuremath{\mathcal{P}}$ has a \emph{forecast}
and, optionally, some supporting \emph{evidence}; and
\item \emph{amendment} arguments, which
\AX{suggest a modification to}
some forecast\AX{'s probability} by increasing or decreasing it, and are accordingly separated into
disjoint classes of \emph{increase} and \emph{decrease} (amendment) arguments.\footnote{Note that
we decline to include a third type of amendment argument
for arguing that $\ensuremath{\Forecast^\Proposal}$ is just right. This choice rests on the assumption that additional information always necessitates a change to $\ensuremath{\Forecast^\Proposal}$, however granular that change may be. This does not restrict individual agents arguing about $\ensuremath{\Forecast^\Proposal}$ from casting $\ensuremath{\Forecast^\Proposal}$ as their own final forecast. However, rather than cohering their argumentation around $\ensuremath{\Forecast^\Proposal}$, which we hypothesise would lead to high risk of groupthink~\cite{McNees1987}, agents are compelled to consider the impact of their amendment arguments on this more granular level.}
\end{itemize}
Note that amendment arguments are introduced specifically for arguing about a proposal argument, given that traditional QuAD pro/con
arguments are of limited use when the goal is to judge the acceptability of a probability, and that in forecasting agents must not only decide \emph{if} they agree/disagree but also \emph{how} they agree/disagree (i.e. whether they believe
the forecast is too low or too high considering, if available,
the evidence). Amendment arguments, with their increase and decrease classes, provide for this.
\begin{example}
\label{ProposalExample}
A proposal argument $\ensuremath{\mathcal{P}}$ in the Tokyo Olympics setting may comprise the forecast: \emph{\AX{`}There is a 75\% chance that the Olympics will be cancelled/postponed to another year'}. It may also include the evidence: \emph{`A new poll today shows that 80\% of the Japanese public want the Olympics to be cancelled. The Japanese government is likely to buckle under this pressure.'}
This argument may aim to prompt updating the earlier forecast in Example~\ref{FAFEx}.
A \emph{decrease} amendment argument may be $\ensuremath{\decarg_1}$: \emph{`The International Olympic Committee and the Japanese government will ignore the views of the Japanese public'}. An \emph{increase} amendment argument may be $\ensuremath{\incarg_1}$: \emph{`Japan's increasingly popular opposition parties will leverage this to make an even stronger case for cancellation'}.
\end{example}
Intuitively, a proposal argument
is the focal point of the argumentation. It typically suggests a new forecast to replace prior forecasts, argued on the basis of some new evidence (as in the earlier example). We will see that proposal arguments remain immutable through each debate (update framework), which takes place via amendment arguments and standard pro/con arguments.
Note that, wrt QuAD argument types, proposal arguments replace issues and amendment arguments replace answers, in that the former are driving the debates and the latter are the options up for debate.
Note also that amendment arguments merely state a direction wrt $\ensuremath{\Forecast^\Proposal}$ and do not contain any more information, such as \emph{how much} to alter $\ensuremath{\Forecast^\Proposal}$ by.
We will see that alteration can be determined by \emph{scoring} amendment arguments.
Proposal and amendment arguments, alongside pro/con arguments, form part of
our novel update frameworks \BI{(Stage 2 of Figure \ref{fig:FAFdiag})}, defined as follows:
\begin{definition} An \emph{update framework} is a nonad (9-tuple)
⟨$\ensuremath{\mathcal{P}}, \ensuremath{\mathcal{X}}, \ensuremath{\AmmArgs^-}, \ensuremath{\AmmArgs^+}, \ensuremath{\Rels^p}, \ensuremath{\Rels}, \ensuremath{\mathcal{A}}, \ensuremath{\mathcal{V}}, \ensuremath{\Forecast^\Agents}$⟩ such that:
\item[$\bullet$] $\ensuremath{\mathcal{P}}$ is a single proposal argument with \emph{forecast} $\ensuremath{\Forecast^\Proposal}$ and, optionally, \emph{evidence} $\mathcal{E}^\ensuremath{\mathcal{P}}$ for this forecast;
\item[$\bullet$] $\ensuremath{\mathcal{X}} = \ensuremath{\AmmArgs^\uparrow} \cup \ensuremath{\AmmArgs^\downarrow}$ is a finite set of \emph{amendment arguments} composed of subsets $\ensuremath{\AmmArgs^\uparrow}$ of \emph{increase} arguments and
$\ensuremath{\AmmArgs^\downarrow}$ of
\emph{decrease} arguments;
\item[$\bullet$] $\ensuremath{\AmmArgs^-}$ is a finite set
of \emph{con} arguments;
\item[$\bullet$] $\ensuremath{\AmmArgs^+}$ is a finite set
of \emph{pro} arguments;
\item[$\bullet$] the sets $\{\ensuremath{\mathcal{P}}\}$, $\ensuremath{\AmmArgs^\uparrow}$, $\ensuremath{\AmmArgs^\downarrow}$, $\ensuremath{\AmmArgs^-}$ and $\ensuremath{\AmmArgs^+}$ are pairwise disjoint;
\item[$\bullet$] $\ensuremath{\Rels^p}$ $\subseteq$ $\ensuremath{\mathcal{X}}$ $\times$ $\{\ensuremath{\mathcal{P}}\}$ is a directed acyclic
binary relation
between amendment arguments and the proposal argument (we may refer to this relation informally as `probabilistic');
\item[$\bullet$] $\ensuremath{\Rels}$ $\subseteq$ ($\ensuremath{\AmmArgs^-}$ $\cup$ $\ensuremath{\AmmArgs^+}$) $\times$ ($\ensuremath{\mathcal{X}}$ $\cup$ $\ensuremath{\AmmArgs^-}$ $\cup$ $\ensuremath{\AmmArgs^+}$) is a directed acyclic,
binary relation
\FTn{from} pro/con arguments
\FTn{to} amendment\FTn{/pro/con arguments} (we may refer to this relation informally as `argumentative');
\item[$\bullet$] $\ensuremath{\mathcal{A}} = \{ \ensuremath{a}_1, \ldots, \ensuremath{a}_n \}$ is a finite set of \emph{agents} ($n > 1$);
\item[$\bullet$] $\ensuremath{\mathcal{V}}$ : $\ensuremath{\mathcal{A}}$ $\times$ ($\ensuremath{\AmmArgs^-}$ $\cup$ $\ensuremath{\AmmArgs^+}$) $\rightarrow$ [0, 1] is a total function such that $\ensuremath{\mathcal{V}}(\ensuremath{a},\ensuremath{x})$ is the \emph{vote} of agent $\ensuremath{a}\in\ensuremath{\mathcal{A}}$ on argument $\ensuremath{x} \in \ensuremath{\AmmArgs^-} \cup \ensuremath{\AmmArgs^+}$; with an abuse of notation, we let $\ensuremath{\mathcal{V}}_\ensuremath{a}$ : ($\ensuremath{\AmmArgs^-}$ $\cup$ $\ensuremath{\AmmArgs^+}$) $\rightarrow [0, 1]$ represent the votes of a \emph{single} agent $\ensuremath{a}\in\ensuremath{\mathcal{A}}$, e.g. $\ensuremath{\mathcal{V}}_\ensuremath{a}(\ensuremath{x}) = \ensuremath{\mathcal{V}}(\ensuremath{a},\ensuremath{x})$;
\item[$\bullet$] $\ensuremath{\Forecast^\Agents} = \{ \ensuremath{\Forecast^\Agents}_{\ensuremath{a}_1}, \ldots, \ensuremath{\Forecast^\Agents}_{\ensuremath{a}_n} \}$ is such that $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_i}$, where $i \in \{ 1, \ldots, n \}$, is the \emph{forecast} of agent $\ensuremath{a}_i\in\ensuremath{\mathcal{A}}$.
\end{definition}
\BIn{Note that pro \AX{(}con\AX{)} arguments can be seen as supporting (attacking, resp.) other arguments via $\ensuremath{\mathcal{R}}$, as in the case of conventional QuAD frameworks~\cite{Baroni2015}.}
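For illustration only, an update framework can be encoded directly as a data structure; the Python sketch below mirrors the nonad above, with field names that are ours (and hypothetical) rather than part of the definition.
\begin{verbatim}
from dataclasses import dataclass, field
from typing import Dict, Optional, Set, Tuple

# An illustrative encoding of an update framework; arguments and
# agents are identified by plain strings.
@dataclass
class UpdateFramework:
    proposal_forecast: float               # forecast of P, in [0,1]
    proposal_evidence: Optional[str]       # optional evidence for P
    increase_args: Set[str]                # increase amendment arguments
    decrease_args: Set[str]                # decrease amendment arguments
    con_args: Set[str]                     # con arguments
    pro_args: Set[str]                     # pro arguments
    prob_relation: Set[Tuple[str, str]]    # R^p: amendment -> proposal
    arg_relation: Set[Tuple[str, str]]     # R: pro/con -> amend/pro/con
    votes: Dict[Tuple[str, str], float] = field(default_factory=dict)
    agent_forecasts: Dict[str, float] = field(default_factory=dict)
\end{verbatim}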
\begin{example}
\label{eg:tokyo}
A possible update framework
in our running setting may include $\ensuremath{\mathcal{P}}$ as in Example~\ref{ProposalExample} as well as (see Table \ref{table:tokyo}) $\ensuremath{\mathcal{X}}=\{\ensuremath{\decarg_1}, \ensuremath{\decarg_2}, \ensuremath{\incarg_1}\}$, $\ensuremath{\AmmArgs^-}=\{\ensuremath{\attarg_1}, \ensuremath{\attarg_2}, \ensuremath{\attarg_3}\}$, $\ensuremath{\AmmArgs^+}=\{\ensuremath{\supparg_1}, \ensuremath{\supparg_2}\}$, $\ensuremath{\Rels^p}=\{(\ensuremath{\decarg_1}, \ensuremath{\mathcal{P}})$, $(\ensuremath{\decarg_2}, \ensuremath{\mathcal{P}}), (\ensuremath{\incarg_1}, \ensuremath{\mathcal{P}})\}$, and $\ensuremath{\mathcal{R}}=\{(\ensuremath{\attarg_1}, \ensuremath{\decarg_1}), (\ensuremath{\attarg_2}, \ensuremath{\decarg_1}), (\ensuremath{\attarg_3}, \ensuremath{\incarg_1})$, $(\ensuremath{\supparg_1}, \ensuremath{\decarg_2}),$ $(\ensuremath{\supparg_2}, \ensuremath{\incarg_1})\}$. Figure \ref{fig:tokyo} gives a graphical representation of these arguments and relations.
\BIn{Assuming $\ensuremath{\mathcal{A}}=\{alice, bob, charlie\}$, $\ensuremath{\mathcal{V}}$ may be such that $\AX{\ensuremath{\mathcal{V}}_{alice}(\ensuremath{\attarg_1})} = 1$, $\AX{\ensuremath{\mathcal{V}}_{bob}(\ensuremath{\supparg_1})} = 0$, and so on.}
\end{example}
\begin{table}[t]
\begin{tabular}{p{0.7cm}p{6.7cm}}
\hline
Argument & Content \\ \hline
$\ensuremath{\mathcal{P}}$ &
`A new poll today shows that 80\% of the Japanese public want the Olympics to be cancelled owing to COVID-19, and the Japanese government is likely to buckle under this pressure ($\mathcal{E}^\ensuremath{\mathcal{P}})$. Thus, there is a 75\% chance that the Olympics will be cancelled/postponed to another year' ($\ensuremath{\Forecast^\Proposal}$). \\
$\ensuremath{\decarg_1}$ &
`The International Olympic Committee and the Japanese government will ignore the views of the Japanese public'. \\
$\ensuremath{\decarg_2}$ &
`This poll comes from an unreliable source.' \vspace{2mm}\\
$\ensuremath{\incarg_1}$ &
`Japan's increasingly popular opposition parties will leverage this to make an even stronger case for cancellation.' \\
$\ensuremath{\attarg_1}$ &
`The IOC is bluffing - people are dying, Japan is experiencing a strike. They will not go ahead with the games if there is a risk of mass death.' \\
$\ensuremath{\attarg_2}$ &
`The Japanese government may renege on its commitment to the IOC, and use legislative or immigration levers to block the event.' \\
$\ensuremath{\attarg_3}$ &
`Japan's government has sustained a high-approval rating in the last year and is strong enough to ward off opposition attacks.' \\
$\ensuremath{\supparg_1}$ &
`This pollster has a track record of failure on Japanese domestic issues.' \\
$\ensuremath{\supparg_2}$ &
`Rising anti-government sentiment on Japanese Twitter indicates that voters may be receptive to such arguments.' \\ \hline
\end{tabular}
\caption{Arguments in the update framework in Example~\ref{eg:tokyo}.}
\label{table:tokyo}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{images/FAF1.png}
\centering
\caption{\BIn{A graphical representation of arguments and relations in the update framework from Example~\ref{eg:tokyo}. Nodes represent proposal ($\ensuremath{\mathcal{P}}$), increase ($\uparrow$), decrease ($\downarrow$), pro ($+$) and con ($-$) arguments, while \FTn{dashed/solid} edges indicate, resp., the $\ensuremath{\Rels^p}$/$\ensuremath{\mathcal{R}}$ relations.}}
\label{fig:tokyo}
\end{figure}
Several considerations about update frameworks are in order.
Firstly, they represent `stratified' debates: graphically, they can be represented as trees with
the proposal argument as root, amendment arguments as children of the root, and pro/con arguments forming the lower layers, as shown in Figure \ref{fig:tokyo}.
This tree structure serves to focus argumentation towards the proposal (i.e. the probability and, if available, evidence) it puts forward.
Second, we have chosen to impose a `structure' on proposal arguments, whereby their forecast is distinct from their (optional) evidence. Here the forecast has special primacy over the evidence, because forecasts are the vital reference point and the drivers of debates in FAFs. They are, accordingly, both mandatory and required to `stand out' to participating agents. In the spirit of abstract argumentation \cite{Dung1995}, we nonetheless treat all arguments, including proposal arguments, as `abstract', and focus on relations between them rather than between their components. In practice, therefore, amendment arguments may relate to a proposal argument's forecast but also, if present, to its evidence. We opt for this abstract view on the assumption that the flexibility of this approach better suits judgemental forecasting, which has a diversity of use cases (e.g. including politics, economics and sport) where different argumentative approaches may be deployed (i.e. quantitative, qualitative, directly attacking amendment nodes or raising alternative points of view) and wherein forecasters may lack even a basic knowledge of argumentation.
We leave the study of structured variants of our framework (e.g. see overview in \cite{structArg}) to future work: these may consider finer-grained representations of all arguments in terms of different components, and finer-grained notions of relations between components, rather than full arguments. Third, in update frameworks, voting is restricted to pro/con arguments. Preventing agents from voting directly on amendment arguments mitigates the risk of arbitrary judgements: agents cannot make off-the-cuff estimations but can only express their beliefs via (pro/con) argumentation, thus
ensuring a more rigorous process of appraisal for the proposal and amendment arguments.
Note that rather than facilitating voting on arguments using a two-valued perspective (i.e. positive/negative)
or a three-valued perspective (i.e. positive/negative/neutral), $\ensuremath{\mathcal{V}}$ allows agents to cast more granular judgements of (pro/con) argument acceptability, the need for which has been highlighted in the literature \cite{Mellers2015}.
Finally, although we envisage that arguments of all types are put forward by agents during debates, we do not capture this mapping in update frameworks. Thus, we do not capture who put forward which arguments, but instead only use votes to encode and understand agents' views. This enables more nuanced reasoning and full engagement on the part of agents with alternative viewpoints (i.e. an agent may freely argue both for and against a point before taking an explicit view with their voting). Such conditions are essential in a healthy forecasting debate \cite{Landeta2011,Mellers2015}.
In the remainder of this paper, with an abuse of notation, we often use $\ensuremath{\Forecast^\Proposal}$ to denote, specifically, the probability advocated in $\ensuremath{\Forecast^\Proposal}$ (e.g. 0.75 in Example \ref{ProposalExample}).
\section{Aggregating Rational Forecasts}\label{sec:forecasting}
In this section we
formally introduce (in \AX{§}\ref{subsec:rationality}) our notion of rationality and discuss how it may be used to identify\BI{, and subsequently `block',} undesirable behaviour in forecasters. We then define (in \AX{§}\ref{subsec:aggregation}) a method for calculating a revised forecast \BI{(Stage 2c of Figure \ref{fig:FAFdiag})}, which aggregates the views of all agents in the update framework, whilst optimising their overall forecasting accuracy.
\subsection{Rationality}\label{subsec:rationality}
Characterising an agent’s view as irrational offers opportunities to refine the accuracy of their forecast (and thus the overall aggregated group forecast). Our definition of rationality is inspired by, but goes beyond, that of QuAD-V \cite{Rago2017}, which was introduced for the e-polling context. Whilst update frameworks eventually produce a single aggregated forecast on the basis of group deliberation, each agent is first evaluated for their rationality on an individual basis. Thus, as in QuAD-V, in order to define rationality for individual agents, we first reduce frameworks to \emph{delegate frameworks} for each agent, which are the restriction of update frameworks
to a single agent.
\begin{definition}
A \emph{delegate framework} for an agent $\ensuremath{a}$ is $\ensuremath{u}_{\ensuremath{a}} =$ ⟨$\ensuremath{\mathcal{P}}, \ensuremath{\mathcal{X}}, \ensuremath{\AmmArgs^-}, \ensuremath{\AmmArgs^+}, \ensuremath{\Rels^p}, \ensuremath{\Rels}, \ensuremath{a}, \ensuremath{\mathcal{V}}_{\ensuremath{a}}, \ensuremath{\Forecast^\Agents}_{\ensuremath{a}}$⟩.
\end{definition}
Note that all arguments in an update framework are included in each agent's delegate framework, but only the agent's votes and forecast are carried over.
Recognising the irrationality of an agent requires
comparing the agent's forecast against (an aggregation of) their opinions on the amendment arguments and, by extension, on the proposal argument.
To this end, we evaluate the different parts of the update framework as follows. We use the DF-QuAD algorithm \cite{Rago2016} to score each amendment argument for the agent, in the context of the pro/con arguments `linked' to it via $\ensuremath{\mathcal{R}}$ within the agent's delegate framework. We refer to the DF-QuAD score function as $\ensuremath{\mathcal{\sigma}}$.
This requires a choice of base scores for amendment arguments as well as pro/con arguments.
We assume the same base score $\ensuremath{\mathcal{\tau}}(\ensuremath{x})=0.5$ for all $\ensuremath{x} \in \ensuremath{\mathcal{X}}$;
in contrast, the base score of pro/con arguments is a result of the votes they received from the agent, in the spirit of QuAD-V \cite{Rago2017}.
The intuition behind assigning a neutral (0.5) base score for amendment arguments is that an agent's estimation of their strength from the outset would be susceptible to bias and inaccuracy.
Once each amendment argument has been scored (using $\ensuremath{\mathcal{\sigma}}$) for the agent, we aggregate the scores of all amendment arguments (for the same agent) to calculate the agent's \emph{confidence score} in the proposal argument (which underpins our rationality constraints), by weighing the mean strength of the increase amendment arguments against that of the decrease amendment arguments:
\begin{definition}\label{def:conscore}
Given a delegate framework $\ensuremath{u}_{\ensuremath{a}}$ = ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{a}$, $\ensuremath{\mathcal{V}}_{\ensuremath{a}}$, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}$⟩
, let
$\ensuremath{\AmmArgs^\uparrow} = \{ \ensuremath{\incarg_1}, \ensuremath{\incarg_2}, \ldots , \ensuremath{\arg^\uparrow}_i \}$ and $\ensuremath{\AmmArgs^\downarrow} = \{ \ensuremath{\decarg_1}, \ensuremath{\decarg_2}, \ldots , \ensuremath{\arg^\downarrow}_j \}$.
Then,
$\ensuremath{a}$'s \emph{confidence score} is as follows:
\begin{align}
&\text{if } i\neq0, j\neq0: \quad \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) = \frac{1}{i} \sum_{k=1}^{i} \ensuremath{\mathcal{\sigma}}(\ensuremath{\arg^\uparrow}_k) - \frac{1}{j} \sum_{l=1}^{j} \ensuremath{\mathcal{\sigma}}(\ensuremath{\arg^\downarrow}_l); \nonumber \\
&\text{if } i\neq0, j=0: \quad \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) = \frac{1}{i} \sum_{k=1}^{i} \ensuremath{\mathcal{\sigma}}(\ensuremath{\arg^\uparrow}_k); \nonumber \\
&\text{if } i=0, j\neq0: \quad \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) = - \frac{1}{j} \sum_{l=1}^{j} \ensuremath{\mathcal{\sigma}}(\ensuremath{\arg^\downarrow}_l); \nonumber \\
&\text{if } i=0, j=0: \quad \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) = 0. \nonumber
\end{align}
\end{definition}
Note that $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) \in [-1,1]$, which denotes the overall views of the agent on the forecast $\ensuremath{\Forecast^\Proposal}$ (i.e. as to whether it should be \emph{increased} or \emph{decreased}, and how far). A negative (positive) $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})$ indicates that an agent believes that $\ensuremath{\Forecast^\Proposal}$ should be amended down (up, resp.).
The size of $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})$ reflects the degree of the agent's certainty in either direction.
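As a sketch (with names of our own choosing), the confidence score can be computed as follows, assuming the DF-QuAD score function $\ensuremath{\mathcal{\sigma}}$ has already been evaluated on the agent's delegate framework.
\begin{verbatim}
# A sketch of the confidence score C_a(P) as defined above: the mean
# strength of the increase arguments minus that of the decrease
# arguments, with an empty set contributing 0.

def confidence(increase_args, decrease_args, sigma):
    up = sum(map(sigma, increase_args)) / len(increase_args) \
        if increase_args else 0.0
    down = sum(map(sigma, decrease_args)) / len(decrease_args) \
        if decrease_args else 0.0
    return up - down    # always in [-1, 1]
\end{verbatim}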
In turn, we can constrain an agent's forecast $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$ if it contradicts this belief
as follows.
\begin{definition}\label{def:irrationality}
Given a delegate framework $\ensuremath{u}_{\ensuremath{a}}$ = ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{a}$, $\ensuremath{\mathcal{V}}_{\ensuremath{a}}$, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}$⟩, $\ensuremath{a}$’s forecast $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$ is \emph{strictly rational} (wrt $\ensuremath{u}_{\ensuremath{a}}$) iff:
\begin{align}
&\text{if } \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) < 0 \text{ then } \ensuremath{\Forecast^\Agents}_\ensuremath{a} < \ensuremath{\Forecast^\Proposal}; \nonumber \\
&\text{if } \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) > 0 \text{ then } \ensuremath{\Forecast^\Agents}_\ensuremath{a} > \ensuremath{\Forecast^\Proposal}; \nonumber \\
&\mid\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})\mid \;\geq\; \frac{\mid\ensuremath{\Forecast^\Proposal} - \ensuremath{\Forecast^\Agents}_\ensuremath{a}\mid}{\ensuremath{\Forecast^\Proposal}}. \nonumber
\end{align}
\end{definition}
Hereafter, we refer to forecasts which violate the first two constraints as, resp., \emph{irrational increase} and \emph{irrational decrease} forecasts, and to forecasts which violate the final constraint as \emph{irrational scale} forecasts.
This definition of rationality preserves the integrity of the group forecast in two ways. First, it prevents agents from forecasting against their beliefs: an agent cannot increase $\ensuremath{\Forecast^\Proposal}$ if $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) < 0$, and cannot decrease $\ensuremath{\Forecast^\Proposal}$ if $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) > 0$. Second, it ensures that agents cannot make forecasts disproportionate to their confidence score: \emph{how far} an agent $\ensuremath{a}$ may deviate from the proposed forecast $\ensuremath{\Forecast^\Proposal}$ is restricted by $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})$, in that $\mid\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})\mid$ must be at least the relative change to $\ensuremath{\Forecast^\Proposal}$ denoted by the agent's forecast $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$.
Note that the \emph{irrational scale} constraint deals with just one direction of proportionality (i.e. providing only a maximum threshold for $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$'s deviation from $\ensuremath{\Forecast^\Proposal}$, but no minimum threshold). Here, we avoid bidirectional proportionality on the grounds that such a constraint would impose an arbitrary notion of arguments' `impact' on agents. An agent may have a very high $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})$, indicating
\FT{their} belief that $\ensuremath{\Forecast^\Proposal}$ is too low, but \AX{may}, we suggest, rationally choose to increase $\ensuremath{\Forecast^\Proposal}$ by only a small amount (e.g. if, despite
\FT{their} general agreement with the arguments,
\FT{they} believe the overall issue at stake in $\ensuremath{\mathcal{P}}$ to be minor or low impact to the overall forecasting question). Our definition of rationality, which relies on notions of argument strength derived from DF-QuAD, thus informs but does not wholly dictate agents' forecasting, affording them considerable freedom. We leave alternative, stricter definitions of rationality, which may derive from probabilistic conceptions of argument strength, to future work.
\begin{example}
Consider our running Tokyo Olympics example, with the same
arguments and relations from Example \ref{eg:tokyo} and an agent \BIn{$alice$} with a confidence score \BIn{$\ensuremath{\mathcal{C}}_{alice}(\ensuremath{\mathcal{P}}) = -0.5$}. From this we know that \BIn{$alice$} believes that the suggested
$\ensuremath{\Forecast^\Proposal}$ in the proposal argument $\ensuremath{\mathcal{P}}$
should be decreased.
Then, under our definition of rationality, \BIn{$alice$'s} forecast \BIn{$\ensuremath{\Forecast^\Agents}_{alice}$} is `rational' iff it decreases $\ensuremath{\Forecast^\Proposal}$ by no more than 50\% (i.e. iff $\ensuremath{\Forecast^\Agents}_{alice} \in [0.375, 0.75)$).
\end{example}
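The three constraints can be checked mechanically. The following sketch (names are ours; it assumes $\ensuremath{\Forecast^\Proposal} > 0$) returns the violated constraint, if any, for an agent with confidence score \texttt{c}, given proposal forecast \texttt{fp} and agent forecast \texttt{fa}.
\begin{verbatim}
# A sketch of the strict-rationality check: returns None if the
# forecast satisfies all three constraints, else the violation name.

def rationality_violation(c, fp, fa):
    if c < 0 and not fa < fp:
        return "irrational increase"
    if c > 0 and not fa > fp:
        return "irrational decrease"
    if abs(c) < abs(fp - fa) / fp:   # assumes fp > 0
        return "irrational scale"
    return None
\end{verbatim}
E.g., with $\ensuremath{\mathcal{C}}_{alice}(\ensuremath{\mathcal{P}})=-0.5$ and $\ensuremath{\Forecast^\Proposal}=0.75$ as above, any $\ensuremath{\Forecast^\Agents}_{alice} \in [0.375, 0.75)$ passes all three checks.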
If an agent's forecast $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$ violates these rationality constraints then \BI{it is `blocked'} and the agent is prompted to return to the argumentation graph. From here, they may carry out one or more of the following actions to render their forecast rational:
\begin{enumerate}[label=\alph*.]
\item Revise their forecast;
\item Revise their votes on arguments;
\item Add new arguments to the update framework (and vote on them).
\end{enumerate}
Whilst a) and b) occur on an agent-by-agent basis, confined to each delegate framework, c) affects the shared update framework and requires special consideration.
Each time new \AX{arguments}
are added to the shared graph, every agent must vote on
\AX{them}, even if they have already made a rational forecast. In certain cases, after an agent has voted on a new argument, it is possible that their rational forecast is made irrational. In this instance, the agent must resolve their irrationality via the steps above. In this way, the update framework can be refined on an iterative basis until the graph is no longer being modified and all agents' forecasts are rational. At this stage, the update framework has reached a stable state and the agents $\ensuremath{\mathcal{A}}$ are collectively rational:
\begin{definition} Given an update framework $\ensuremath{u}$ = ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{V}}$, $\ensuremath{\Forecast^\Agents}$⟩, $\ensuremath{\mathcal{A}}$ is \emph{collectively rational} (wrt \emph{u}) iff $\forall \ensuremath{a} \in \ensuremath{\mathcal{A}}$, $\ensuremath{a}$ is individually rational (wrt the
delegate framework $\ensuremath{u}_{\ensuremath{a}}$ = ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{a}$, $\ensuremath{\mathcal{V}}_{\ensuremath{a}}$, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}$⟩).
\end{definition}
When $\ensuremath{\mathcal{A}}$ is collectively rational, the
update framework $u$ becomes immutable and the aggregation (defined next)
\AX{produces} a group forecast $\ensuremath{\Forecast^g}$, which becomes the
new $\ensuremath{\mathcal{F}}$.
\subsection{Aggregating Forecasts}\label{subsec:aggregation}
After all the agents have made a rational forecast, an aggregation function is applied to produce one collective forecast. One advantage of forecasting debates vis-a-vis \AX{the} many other forms of debate is that a ground truth always exists -- an event either happens or does not. This means that, over time and after enough FAF instantiations, data on the forecasting success of different agents can be amassed. In turn, the relative historical performance of forecasting agents can inform the aggregation of group forecasts. In update frameworks, a weighted aggregation function based on Brier Scoring \cite{Brier1950} is used, such that more accurate forecasting agents have greater influence over the final forecast.
Brier Scores are a widely used criterion to measure the accuracy of probabilistic predictions, effectively gauging the distance between a forecaster's predictions and an outcome after it has(n't) happened, as follows.
\begin{definition} \label{def:bscore}
Given an agent $\ensuremath{a}$, a non-empty series of forecasts $\ensuremath{\Forecast^\Agents}_\ensuremath{a}(1), \ldots, \ensuremath{\Forecast^\Agents}_\ensuremath{a}(\ensuremath{\mathcal{N}}_{\ensuremath{a}})$ with corresponding actual outcomes $\ensuremath{\mathcal{O}}_1, \ldots,$ $\ensuremath{\mathcal{O}}_{\ensuremath{\mathcal{N}}_{\ensuremath{a}}} \in \{true, false\}$ (where $\ensuremath{\mathcal{N}}_{\ensuremath{a}}>0$ is the number of forecasts $\ensuremath{a}$ has made in a non-empty sequence of as many update frameworks), $\ensuremath{a}$'s Brier Score $\ensuremath{b}_{\ensuremath{a}} \in [0, 1]$ is as follows:
\begin{align}
\ensuremath{b}_{\ensuremath{a}} = \frac{1}{\ensuremath{\mathcal{N}}_{\ensuremath{a}}} \sum_{t=1}^{\ensuremath{\mathcal{N}}_{\ensuremath{a}}} (\ensuremath{\Forecast^\Agents}_\ensuremath{a}(t) - val(\ensuremath{\mathcal{O}}_t))^2 \nonumber
\end{align}
where $val(\ensuremath{\mathcal{O}}_t)=1$ if $\ensuremath{\mathcal{O}}_t=true$, and 0 otherwise.
\end{definition}
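As a sketch (with our own, hypothetical names), an agent's Brier Score can be computed as:
\begin{verbatim}
# A sketch of an agent's Brier Score: the mean squared error between
# the agent's forecasts and the realised outcomes (True/False).

def brier(forecasts, outcomes):
    assert len(forecasts) == len(outcomes) > 0
    return sum((f - (1.0 if o else 0.0)) ** 2
               for f, o in zip(forecasts, outcomes)) / len(forecasts)
\end{verbatim}
For example, a single forecast of 0.8 on a question that resolves $true$ yields $\ensuremath{b} = (0.8-1)^2 = 0.04$.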
A Brier Score $\ensuremath{b}$ is effectively the mean squared error used to gauge forecasting accuracy, where a low $\ensuremath{b}$ indicates high accuracy and high $\ensuremath{b}$ indicates low accuracy. This
can be used in the update framework's aggregation function via the weighted arithmetic mean as follows.
\AX{E}ach Brier Score is inverted to ensure that more (less, resp.) accurate forecasters have higher (lower, resp.) weighted influence\AX{s} on $\ensuremath{\Forecast^g}$:
\begin{definition}\label{def:group}
Given a set of agents $\ensuremath{\mathcal{A}} = \{\ensuremath{a}_1, \ldots,\ensuremath{a}_n\}$,
their corresponding set of Brier Scores $\ensuremath{b} = \{\ensuremath{b}_{\ensuremath{a}_1}, \ldots,\ensuremath{b}_{\ensuremath{a}_n}\}$ and
their forecasts $\ensuremath{\Forecast^\Agents} = \{\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_1}, \ldots,\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_n}\}$, and letting, for $i \!\!\in\!\! \{ 1, \ldots, n\}$, $w_{i} \!\!=\!\! 1-\ensuremath{b}_{\ensuremath{a}_i}$, the \emph{group forecast} $\ensuremath{\Forecast^g}$ is
as follows:
\begin{align}
&\text{if } \sum_{i=1}^{n}w_{i} \neq 0: &
&\ensuremath{\Forecast^g} = \frac{\sum_{i=1}^{n}w_{i}\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_i}}{\sum_{i=1}^{n}w_{i}}; \nonumber \\
&\text{otherwise}: &
&\ensuremath{\Forecast^g} = 0. \nonumber
\end{align}
\end{definition}
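A sketch of this weighted aggregation (again with hypothetical names) is given below; note that an agent with $\ensuremath{b}_{\ensuremath{a}}=1$ receives weight 0 and thus no influence.
\begin{verbatim}
# A sketch of the group forecast: the Brier-weighted mean of the
# agents' rational forecasts, with weights w_a = 1 - b_a.

def group_forecast(agent_forecasts, brier_scores):
    weights = {a: 1.0 - brier_scores[a] for a in agent_forecasts}
    total = sum(weights.values())
    if total == 0:
        return 0.0   # degenerate case: all agents totally inaccurate
    return sum(w * agent_forecasts[a]
               for a, w in weights.items()) / total
\end{verbatim}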
This group forecast could be `activated' after a fixed number of debates (with the mean average used prior), when sufficient data has been collected on the accuracy of participating agents, or after a single debate, in the context of our general
\emph{Forecasting Argumentation Frameworks}:
\begin{definition} A \emph{Forecasting Argumentation Framework} (FAF) is a triple ⟨$ \ensuremath{\mathcal{F}}, \ensuremath{\mathcal{U}}, \ensuremath{\mathcal{T}}$⟩ such that:
\item[$\bullet$] $\ensuremath{\mathcal{F}}$ is a \emph{forecast};
\item[$\bullet$] $\ensuremath{\mathcal{U}}$ is a finite, non-empty sequence of update frameworks with \ensuremath{\mathcal{F}}\ the forecast of the proposal argument in the first update framework in the sequence\AR{;} the forecast of each subsequent update framework is the group forecast of the previous update framework's agents' forecasts;
\item[$\bullet$] $\ensuremath{\mathcal{T}}$ is a preset time limit representing the lifetime of the FAF;
\item[$\bullet$] each agent's forecast wrt the agent's delegate framework drawn from each update framework is strictly rational.
\end{definition}
\begin{example}
\BIn{Consider our running Tokyo Olympics example: the overall FAF may be composed of $\ensuremath{\mathcal{F}} = 0.15$, update frameworks $\ensuremath{\mathcal{U}} = \{ u_1, u_2, u_3 \}$ and time limit $\ensuremath{\mathcal{T}}=14\ days$, where $u_3$ is the latest (and therefore the only open) update framework after, for example, four days.}
\end{example}
\AX{T}he superforecasting literature explores a range of forecast aggregation algorithms, e.g. extremizing algorithms \cite{Baron2014} and variations on logistic \AX{and} Fourier $L_2E$ regression \cite{Cross2018}, with considerable success. \AX{T}hese approaches \AX{aim} at ensuring that less certain \AX{or less} accurate forecasts have a lesser influence over the final aggregated forecast. We believe that FAFs apply a more intuitive algorithm, \AX{since} much of the `work' needed to bypass inaccurate and erroneous forecasting is \AX{expedited} via argumentation.
\section{Properties}\label{sec:props}
We now undertake a theoretical analysis of FAFs by considering mathematical properties they satisfy. Note that the properties of the DF-QuAD algorithm (see \cite{Rago2016}) hold (for the amendment and pro/con arguments) here. For brevity, we focus on novel properties unique to FAFs which relate to our new argument types. These properties focus on aggregated group forecasts wrt a debate (update framework). They imply the two broad and, we posit, desirable principles of \emph{balance} and \emph{unequal representation}.
We assume for this section a generic update framework $\ensuremath{u} = $ ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{V}}$, $\ensuremath{\Forecast^\Agents}$⟩ with group forecast $\ensuremath{\Forecast^g}$.
\paragraph{Balance.}
The intuition for these properties is that
differences between
$\ensuremath{\Forecast^g}$ and
$\ensuremath{\Forecast^\Proposal}$ correspond to imbalances between the
\emph{increase} and \emph{decrease} amendment arguments.
The first result states that
$\ensuremath{\Forecast^g}$ only differs from
$\ensuremath{\Forecast^\Proposal}$ if $\ensuremath{\Forecast^\Proposal}$ is the dialectical target of amendment arguments.
\begin{proposition} \label{prop:balance1}
If $\ensuremath{\mathcal{X}}\!\!=\!\!\emptyset$ ($\ensuremath{\AmmArgs^\downarrow}\!\!=\!\!\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\!\!=\!\!\emptyset$), then $\ensuremath{\Forecast^g}\!\!=\!\!\ensuremath{\Forecast^\Proposal}$.
\end{proposition}
\begin{proof}
\AX{If $\ensuremath{\AmmArgs^\downarrow}\!\!=\!\!\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\!\!=\!\!\emptyset$ then $\forall \ensuremath{a} \!\in\! \ensuremath{\mathcal{A}}$,
$\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})\!=\!0$ by Def.~\ref{def:conscore} and $\ensuremath{\Forecast^\Agents}_\ensuremath{a}=\ensuremath{\Forecast^\Proposal}$ by Def.~\ref{def:irrationality}.
Then, $\ensuremath{\Forecast^g}=\ensuremath{\Forecast^\Proposal}$ by Def.~\ref{def:group}.}
\end{proof}
\AX{T}his simple proposition conveys an important property for forecasting: for an agent to put forward a different forecast, amendment arguments must have been introduced.
\begin{example}
In the Olympics setting, the group of agents could only forecast higher or lower than the proposed forecast $\ensuremath{\Forecast^\Proposal}$ after the addition of at least one of \AX{the} amendment arguments $\ensuremath{\decarg_1}$, $\ensuremath{\decarg_2}$ or $\ensuremath{\incarg_1}$.
\end{example}
In the absence of
increase \FTn{(decrease)} amendment arguments, if there
are decrease \FTn{(increase, resp.)} amendment arguments, then
$\ensuremath{\Forecast^g}$ is not higher \FTn{(lower, resp.)} than $\ensuremath{\Forecast^\Proposal}$.
\begin{proposition}\label{prop:balance2}
If $\ensuremath{\AmmArgs^\downarrow}\neq\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}=\emptyset$, then $\ensuremath{\Forecast^g} \leq\ensuremath{\Forecast^\Proposal}$.
\FTn{\label{balance3prop}
If $\ensuremath{\AmmArgs^\downarrow}=\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\neq\emptyset$, then $\ensuremath{\Forecast^g}\geq\ensuremath{\Forecast^\Proposal}$.}
\end{proposition}
\begin{proof}
\AX{If $\ensuremath{\AmmArgs^\downarrow}\!\! \neq \!\!\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\!\!=\!\!\emptyset$ then $\forall \ensuremath{a} \!\in\! \ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})\!\leq\!0$ by Def.~\ref{def:conscore} and then $\ensuremath{\Forecast^\Agents}_\ensuremath{a}\!\leq\!\ensuremath{\Forecast^\Proposal}$ by Def.~\ref{def:irrationality}.
Then, by Def.~\ref{def:group}, $\ensuremath{\Forecast^g}\!\leq\!\ensuremath{\Forecast^\Proposal}$.
If $\ensuremath{\AmmArgs^\downarrow}\!\!=\!\!\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\!\!\neq\!\!\emptyset$ then $\forall \ensuremath{a} \!\in\! \ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})\!\geq\!0$ by Def.~\ref{def:conscore} and then
$\ensuremath{\Forecast^\Agents}_\ensuremath{a}\!\geq\!\ensuremath{\Forecast^\Proposal}$ by Def.~\ref{def:irrationality}. Then, by Def.~\ref{def:group}, $\ensuremath{\Forecast^g}\!\geq\!\ensuremath{\Forecast^\Proposal}$.}
\end{proof}
This proposition demonstrates that, if a decrease \BIn{(increase)} amendment argument has an effect on the proposal argument, it can only be as its name implies.
\begin{example}
\BIn{In the Olympics setting, the
agents could not forecast higher than the proposed forecast $\ensuremath{\Forecast^\Proposal}$ if either of the decrease amendment arguments $\ensuremath{\decarg_1}$ or $\ensuremath{\decarg_2}$
\AX{had} been added, but the increase argument $\ensuremath{\incarg_1}$
\AX{had} not. Likewise, \AX{the}
agents could not forecast lower than
$\ensuremath{\Forecast^\Proposal}$ if
$\ensuremath{\incarg_1}$
\AX{had} been added, but
\AX{neither} of
$\ensuremath{\decarg_1}$ or $\ensuremath{\decarg_2}$ \AX{had}.}
\end{example}
If
$\ensuremath{\Forecast^g}$ is lower \BIn{(higher)} than
$\ensuremath{\Forecast^\Proposal}$,
there is
at least one decrease \BIn{(increase, resp.)} argument.
\begin{proposition} \label{prop:balance4}
If $\ensuremath{\Forecast^g}<\ensuremath{\Forecast^\Proposal}$, then $\ensuremath{\AmmArgs^\downarrow}\neq\emptyset$. \BIn{If $\ensuremath{\Forecast^g}>\ensuremath{\Forecast^\Proposal}$, then $\ensuremath{\AmmArgs^\uparrow}\neq\emptyset$.}
\end{proposition}
\begin{proof}
\AX{
If $\ensuremath{\Forecast^g}<\ensuremath{\Forecast^\Proposal}$ then, by Def.~\ref{def:group}, $\exists \ensuremath{a} \in \ensuremath{\mathcal{A}}$ where $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}<\ensuremath{\Forecast^\Proposal}$, for which it holds from Def.~\ref{def:irrationality} that $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})<0$.
Then, irrespective of $\ensuremath{\AmmArgs^\uparrow}$, $\ensuremath{\AmmArgs^\downarrow}\neq\emptyset$. If $\ensuremath{\Forecast^g}>\ensuremath{\Forecast^\Proposal}$ then, by Def.~\ref{def:group}, $\exists \ensuremath{a} \in \ensuremath{\mathcal{A}}$ where $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}>\ensuremath{\Forecast^\Proposal}$, for which it holds from Def.~\ref{def:irrationality} that $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})>0$. Then, irrespective of \BIn{$\ensuremath{\AmmArgs^\downarrow}$, $\ensuremath{\AmmArgs^\uparrow}\neq\emptyset$}.
}
\end{proof}
We can see here that the only way an agent can decrease \BIn{(increase)} the forecast is
\FT{by adding} decrease \BIn{(increase, resp.)} arguments, ensuring the debate is structured as
\FT{intended}.
\begin{example}
\BIn{In the Olympics setting, the group of agents could only produce a group forecast $\ensuremath{\Forecast^g}$ lower than
$\ensuremath{\Forecast^\Proposal}$ due to the presence of
\emph{decrease} amendment arguments $\ensuremath{\decarg_1}$ or $\ensuremath{\decarg_2}$. Likewise, the group of agents could only produce a
$\ensuremath{\Forecast^g}$ higher than
$\ensuremath{\Forecast^\Proposal}$ due to the presence of
$\ensuremath{\incarg_1}$.}
\end{example}
\paragraph{Unequal representation.}
\AX{F}AFs exhibit instances of unequal representation in the final voting process. In formulating the following properties, we distinguish between two forms of unequal representation. First, \emph{dictatorship}, where a single agent dictates
$\ensuremath{\Forecast^g}$ with no input from other agents. Second, \emph{pure oligarchy}, where a group of agents dictates
$\ensuremath{\Forecast^g}$ with no input from other agents outside the group.
In the forecasting setting, these
properties are desirable as they guarantee higher accuracy
\AX{from} the group forecast $\ensuremath{\Forecast^g}$.
An agent with a forecasting record of \emph{some} accuracy exercises \emph{dictatorship} over the group forecast $\ensuremath{\Forecast^g}$, if the rest of the participating \AX{agents}
have a record of total inaccuracy.
\begin{proposition}\label{prop:dictatorship}
If $\ensuremath{a}_d\in\ensuremath{\mathcal{A}}$ has a Brier score $\ensuremath{b}_{\ensuremath{a}_d}<1$ and $\forall \ensuremath{a}_z\in\ensuremath{\mathcal{A}} \setminus \{\ensuremath{a}_d$\}, $\ensuremath{b}_{\ensuremath{a}_z} = 1$, then $\ensuremath{\Forecast^g}=\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_d}$.
\end{proposition}
\begin{proof}
\AX{
By Def.~\ref{def:group}: if $\ensuremath{b}_{\ensuremath{a}_z} \!\!\!=\!\! 1$ $\forall \ensuremath{a}_z\!\in\!\ensuremath{\mathcal{A}} \!\setminus\! \{\!\ensuremath{a}_d\!\}$, then $w_{\ensuremath{a}_z}\!\!\!=\!0$; and if $\ensuremath{b}_{\ensuremath{a}_d}\!\!<\!\!1$, then $w_{\ensuremath{a}_d}\!\!>\!\!0$. Then, again by Def.~\ref{def:group}, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_d}$ is weighted at 100\% and $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_z}$ is weighted at 0\%, so $\ensuremath{\Forecast^g}\!\!=\!\!\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_d}$.
}
\end{proof}
This proposition demonstrates how we will disregard agents with total inaccuracy, even in
\FT{the} extreme case where we allow one (more accurate) agent to dictate the forecast.
\begin{example}
\BIn{In the running example, if \AX{alice, bob and charlie have Brier scores of 0.5, 1 and 1, resp., bob's and charlie's forecasts have} no impact on $\ensuremath{\Forecast^g}$, whilst \AX{alice's} forecast becomes the group forecast $\ensuremath{\Forecast^g}$.}
\end{example}
A group of agents with a forecasting record of \emph{some} accuracy exercises \emph{pure oligarchy} over
$\ensuremath{\Forecast^g}$ if the rest of the
\AX{agents} all have a record of total inaccuracy.
\begin{proposition}\label{oligarchytotalprop}
Let $\ensuremath{\mathcal{A}} = \ensuremath{\mathcal{A}}_o \cup \ensuremath{\mathcal{A}}_z$ where $\ensuremath{\mathcal{A}}_o \cap \ensuremath{\mathcal{A}}_z = \emptyset$, $\ensuremath{b}_{\ensuremath{a}_o}<1$ $\forall \ensuremath{a}_o \in \ensuremath{\mathcal{A}}_o$ and $\ensuremath{b}_{\ensuremath{a}_z}=1$ $\forall \ensuremath{a}_z \in \ensuremath{\mathcal{A}}_z$. Then, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_o}$ is weighted at $>0\%$
and $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_z}$ is weighted at $0\%$.
\end{proposition}
\begin{proof}
\AX{
By Def.~\ref{def:group}: if $\ensuremath{b}_{\ensuremath{a}_z} = 1$ $\forall \ensuremath{a}_z\in\ensuremath{\mathcal{A}}_z$, then $w_{\ensuremath{a}_z}=0$; and if $\ensuremath{b}_{\ensuremath{a}_o}<1$ $\forall \ensuremath{a}_o\in\ensuremath{\mathcal{A}}_o$, then $w_{\ensuremath{a}_o}>0$. Then, again by Def.~\ref{def:group}, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_o}$ is weighted at $> 0\%$ and $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_z}$ is weighted at $0\%$.
}
\end{proof}
This proposition extends the behaviour from Proposition \ref{prop:dictatorship} to the (more desirable) case where fewer agents have a record of total inaccuracy.
\begin{example}
\BIn{In the running example, if \AX{alice, bob and charlie have Brier scores of 1, 0.2 and 0.6, resp., alice's forecast} has no impact on $\ensuremath{\Forecast^g}$, whilst \AX{bob and charlie's} aggregated forecast becomes the group forecast $\ensuremath{\Forecast^g}$.}
\end{example}
\section{Evaluation}\label{sec:experiments}
\BI{We conducted an experiment using a dataset obtained from the `Superforecasting' project, Good Judgment Inc \cite{GJInc}, to simulate four past forecasting debates in FAFs. This dataset contained 1770 datapoints (698 `forecasts' and 1072 `comments') posted by 242 anonymised users with a range of expertise. The original debates had occurred on the publicly available group forecasting platform, the Good Judgment Open (GJO)\footnote{https://www.gjopen.com/}, providing a suitable baseline against which to compare FAFs' accuracy.}
\BI{For the experiment, we used a
prototype implementation of FAFs in the form of the publicly available web platform called \emph{Arg\&Forecast} (see \cite{Irwin2022} for an introduction to the platform and an additional human
experiment with FAFs). Python's Gensim topic modelling library \cite{rehurek2011gensim} was used
to separate the datapoints for each debate into contextual-temporal groups that could form update frameworks.} In each update framework the proposal forecast was set to the mean average of forecasts made in the update framework window and each argument appeared only once. Gensim was further used to simulate voting, matching users to specific arguments they (dis)approved of. Some 4,700 votes
\AX{were then}
generated with a three-valued system (where votes were taken from
\{0,0.5,1\}) to ensure consistency: if a user voiced approval for an argument in the debate time window, their vote for the corresponding argument(s) was set to 1; disapproval for an argument led to a vote of 0, and (in the most common case) if a user did not mention an argument at all, their vote for the corresponding argument(s) defaulted to 0.5.
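The following Python fragment sketches this three-valued encoding; it is a simplified reconstruction (the \texttt{approvals} and \texttt{disapprovals} sets stand in for the Gensim-matched user--argument pairs), not the actual pipeline code:
\begin{verbatim}
def encode_votes(users, arguments, approvals, disapprovals):
    # returns votes[user][argument] in {0, 0.5, 1}
    votes = {}
    for u in users:
        votes[u] = {}
        for a in arguments:
            if (u, a) in approvals:       # voiced approval
                votes[u][a] = 1.0
            elif (u, a) in disapprovals:  # voiced disapproval
                votes[u][a] = 0.0
            else:                         # argument not mentioned
                votes[u][a] = 0.5
    return votes
\end{verbatim}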
With the views of all participating users wrt the proposal argument encoded in each update framework's votes, forecasts could then be simulated. If a forecast was irrational, violating any of the three constraints in Def.~\ref{def:irrationality} (referred to
\AX{in the following}
as \emph{increase}, \emph{decrease} and \emph{scale}, resp.), it was blocked and, to mimic real life use, an automatic `follow up' forecast was made. The `follow up' forecast would be the closest possible prediction (to their original choice) a user could make whilst remaining `rational'.
\BI{Note that evaluation of the aggregation function described in \AX{§}\ref{subsec:aggregation} was outside this experiment, since the past forecasting accuracy of the dataset's 242 anonymised users was unavailable. Instead, we used \AX{the} mean average whilst adopting the GJO's method for scoring the accuracy of a user and/or group over the lifetime of the question \cite{roesch_2015}. This meant calculating a daily forecast and daily Brier score
for each user, for every day of the question. After users made their first rational forecast, that forecast became their `daily forecast'
until it was updated with a new forecast. The average and range of daily Brier scores allowed reliable comparison between the (individual and aggregated) performance of the GJO and of the FAF implementation.}
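A simplified sketch (ours) of this scoring procedure, assuming day-indexed forecasts and a resolved outcome:
\begin{verbatim}
def daily_brier_scores(forecasts_by_day, n_days, outcome):
    # forecasts_by_day: {day: forecast} for days with updates;
    # outcome: 1.0 if the question resolved true, else 0.0
    scores, current = [], None
    for day in range(n_days):
        if day in forecasts_by_day:
            current = forecasts_by_day[day]
        if current is not None:  # skip days before first forecast
            scores.append((current - outcome) ** 2)
    return scores  # average/range give the reported metrics
\end{verbatim}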
\begin{table}[t]
\begin{tabular}{@{}llll@{}}
\toprule
Q & Group $\ensuremath{b}$ & $min(\ensuremath{b})$ & $max(\ensuremath{b})$ \\ \midrule
Q1 & 0.1013 (0.1187) & 0.0214 (0) & 0.4054 (1) \\
Q2 & 0.216 (0.1741) & 0 (0) & 0.3853 (1) \\
Q3 & 0.01206 (0.0227) & 0.0003 (0) & 0.0942 (0.8281) \\
Q4 & 0.5263 (0.5518) & 0 (0) & 0.71 (1) \\ \midrule
\textbf{All} & \textbf{0.2039 (0.217)} & \textbf{0 (0)} & \textbf{1 (1)} \\ \bottomrule
\end{tabular}
\caption{The accuracy of the platform group versus control (in parentheses), where \AX{`}Group $\ensuremath{b}$\AX{'} is the aggregated (mean) Brier score, `$min(\ensuremath{b})$' is the lowest individual Brier score and `$max(\ensuremath{b})$' is the highest individual Brier score. Q1-Q4 indicate the four simulated debates.}
\label{accuracyExp1}
\end{table}
\begin{table}[t]
\begin{tabular}{llllll}
\hline
\multirow{2}{*}{Q} &
\multirow{2}{*}{$\overline{\ensuremath{\mathcal{C}}}$} &
\multirow{2}{*}{Forecasts} &
\multicolumn{3}{c}{Irrational Forecasts} \\ \cline{4-6}
&
&
&
\multicolumn{1}{c}{\emph{Increase}
} \!\!\!\!
&
\multicolumn{1}{c}{\emph{Decrease}
} \!\!\!\!
&
\multicolumn{1}{c}{\emph{Scale}
}\!\! \!\!
\\ \hline
Q1 & -0.0418 & 366 & 63 & 101 & 170 \\
Q2 & 0.1827 & 84 & 11 & 15 & 34 \\
Q3 & -0.4393 & 164 & 53 & 0 & 86 \\
Q4 & 0.3664 & 84 & 4 & 19 & 15 \\ \hline
All & -0.0891 & 698 & 131 & 135 & 305 \\ \hline
\end{tabular}
\caption{Auxiliary results from \FT{the experiment}, where $\overline{\ensuremath{\mathcal{C}}}$ is the average confidence score, `Forecasts' is the number of forecasts made in each question and `Irrational Forecasts' is the number in each question which violated each constraint in §\ref{subsec:rationality}.}
\label{exp1auxinfo}
\end{table}
\paragraph{Results.}
As Table \ref{accuracyExp1} demonstrates, simulating forecasting debates from GJO in \emph{Arg\&Forecast} led to predictive accuracy improvements in three of the four debates. \BIn{This is reflected in these debates by a substantial reduction in Brier scores versus control.} The greatest accuracy improvement in absolute terms was in Q4, which saw a Brier score decrease of 0.0255. In relative terms, Brier score decreases ranged from 5\% (Q4) to 47\% (Q3). \BIn{The average Brier score decrease was 33\%, representing a significant improvement in forecasting accuracy across the board}. \BIn{Table \ref{exp1auxinfo} demonstrates how
\AX{our}
rationality constraints drove forward this improvement}. 82\% of forecasts made across the four debates were classified as irrational \BIn{and subsequently moderated with a rational `follow up' forecast}. Notably, there were more \emph{irrational scale} forecasts
than
\emph{irrational increase}
and \emph{irrational decrease} forecasts
combined. These results demonstrate how argumentation-based rationality constraints can play an active role in facilitating higher forecasting accuracy, signalling the early promise of FAFs.
\section{Conclusions}\label{sec:conclusions}
We have introduced the Forecasting Argumentation Framework (FAF), a multi-agent argumentation framework which supports forecasting debates and probability estimates. FAFs are composite argumentation frameworks, comprised of multiple non-concurrent update frameworks which themselves depend on three new argument types and a novel definition of rationality for the forecasting context. Our theoretical and empirical evaluation demonstrates the potential of FAFs, namely in increasing forecasting accuracy, holding intuitive
properties, identifying irrational
behaviour and driving higher engagement with the forecasting question (more arguments and responses, and more forecasts in the user study). These strengths align with requirements set out by previous research in the field of judgmental forecasting.
There \AX{is} a multitude of possible directions for future work. First, FAFs are equipped to deal only with two-valued outcomes but, given the prevalence of forecasting issues with multi-valued outcomes (e.g. `Who will win the next UK election?'), expanding their capability
would add value. Second, further work may focus on the rationality constraints,
e.g. by introducing additional parameters to adjust their strictness, or
\AX{by implementing}
alternative interpretations of rationality. Third,
future work could explore constraining agents' argumentation. This could involve using past Brier scores to limit the quantity or strength of agents' arguments and also
to give them greater leeway wrt the rationality constraints.
\FTn{Fourth,
our method relies upon
acyclic graphs: we believe that they are intuitive for users and note that all Good Judgment Open debates were acyclic; nonetheless, the inclusion of cyclic relations (e.g. to allow
\AX{con} arguments that attack each other) could expand the scope of the argumentative reasoning \AX{in FAFs}.
}
Finally, there is an immediate need for larger scale human
experiments.
\newpage
\section*{Acknowledgements}
The authors would like to thank Prof. Anthony Hunter for his helpful contributions to discussions in the build up to this work. \BIn{Special thanks, in addition, go to Prof. Philip E. Tetlock and the Good Judgment Project team for their warm cooperation and for providing datasets for the experiments.}
\AX{Finally, the authors would like to thank the anonymous reviewers and meta-reviewer for their suggestions, which led to a significantly improved paper.}
\bibliographystyle{kr}
\section{Introduction}\label{sec:intro}
Historically, humans have performed inconsistently in judgemental forecasting \cite{Makridakis2010,TetlockExp2017}, which incorporates subjective opinion and probability estimates to predictions \cite{Lawrence2006}. Yet, human judgement remains essential in cases where pure statistical methods are not applicable, e.g. where historical data alone is insufficient or for one-off, more `unknowable' events \cite{Petropoulos2016,Arvan2019,deBaets2020}. Judgemental forecasting is widely relied upon for decision-making \cite{Nikolopoulos2021}, in myriad fields from epidemiology to national security \cite{Nikolopoulos2015,Litsiou2019}. Effective tools to help humans improve their predictive capabilities thus have enormous potential for impact. Two recent global events -- the COVID-19 pandemic and the US withdrawal from Afghanistan -- underscore this by highlighting the human and financial cost of predictive deficiency. A multi-purpose system which could improve our ability to predict the incidence and impact of events
by as little as 5\% could save millions of lives and be worth trillions of dollars per year \cite{TetlockGard2016}.
Research on judgemental forecasting (see \cite{Lawrence2006,Zellner2021} for overviews), including the recent\AX{,}
groundbreaking `Superforecasting Experiment' \cite{TetlockGard2016}, is instructive in establishing the desired properties of systems for supporting forecasting.
In addition to reaffirming the importance of fine-grained probabilistic reasoning \cite{Mellers2015}, this literature points to the benefits of some group techniques versus solo forecasting \cite{Landeta2011,Tetlock2014art}, of synthesising qualitative and quantitative information \cite{Lawrence2006}, of combating agents' irrationality \cite{Chang2016} and of high agent engagement with the forecasting challenge, e.g. robust debating \cite{Landeta2011} and frequent prediction updates \cite{Mellers2015}.
Meanwhile, \emph{computational argumentation} (see \cite{AImagazine17,handbook} for recent overviews) is a field of AI which involves reasoning with uncertainty and resolving conflicting information, e.g. in natural language debates. As such, it is an ideal candidate for aggregating the broad, polymorphous set of information involved in judgemental group forecasting. An extensive and growing literature is based on various argumentation frameworks -- rule-based systems for aggregating, representing and evaluating sets of arguments, such as those applied in the contexts of \emph{scheduling} \cite{Cyras_19}, \emph{fact checking} \cite{Kotonya_20} or in various instances of \emph{explainable AI} \cite{Cyras_21}.
Subsets of the requirements for forecasting systems are addressed by individual formalisms, e.g. \emph{probabilistic argumentation}
\AX{\cite{Dung2010,Thimm2012,Hunter2013,Fazzinga2018}}
may effectively represent and analyse uncertain arguments about the future. However, we posit that a purpose-built argumentation framework for forecasting is essential to effectively utilise
argumentation's reasoning capabilities in this context.
\begin{figure*}
\includegraphics[width=\textwidth]{images/FAF_diagram.png}
\caption{The step-by-step process of a FAF over its lifetime.}
\label{fig:FAFdiag}
\end{figure*}
In this paper, we attempt to cross-fertilise these two as of yet unconnected academic areas. We draw from forecasting literature to inform the design of a new computational argumentation approach: \emph{Forecasting Argumentation Frameworks} (FAFs). FAFs
empower (human and artificial) agents to structure debates in real time and to deliver argumentation-based forecasting.
They offer an approach in the spirit of \emph{deliberative democracy} \cite{Bessette1980} to respond to a forecasting problem over time. The steps which underpin FAFs are depicted in Figure \ref{fig:FAFdiag} (referenced throughout) and can be described in simple terms \FT{as follows}: a FAF is initialised with a time limit \FT{(for the overall forecasting process and for each iteration therein)} and a pre-agreed `base-rate' forecast $\ensuremath{\mathcal{F}}$ (Stage 1), e.g. based on historical data.
\FT{Then,} the forecast is revised by one or more (non-concurrent) debates, \BI{in the form of `update frameworks' (Stage 2)}, opened and resolved by participating agents \FT{(}until
\FT{the} specified time limit is reached\FT{)}. Each update framework begins with a proposed revision to the current forecast (Stage 2a), and proceeds with a cycle of argumentation (Stage 2b) about the proposed forecast, voting on said argumentation and forecasting. Forecasts deemed `irrational' with a view to agents' argumentation and voting are blocked. Finally, the rational forecasts are aggregated and the result replaces the current group forecast (Stage 2c). This process may be repeated over time \BI{in an indefinite number of update frameworks} (thus continually \BI{revising} the group forecast) until the
\FT{(overall)} time limit is reached.
The composite nature of this process enables the appraisal of new
information relevant to the forecasting question as and when it arrives. Rather than confronting an unbounded forecasting question with a diffuse set of possible debates open at once, all agents concentrate their argumentation on a single topic (a proposal) at any given time.
After giving the necessary background on forecasting and argumentation (§\ref{sec:background}), we formalise our update frameworks for Stage 2a (§\ref{sec:fw}). We then give our notion of rationality (Stage 2b), along with our new method for aggregating rational forecasts (Stage 2c) from a group of agents (§\ref{sec:forecasting}), and FAFs overall. We explore the underlying properties of FAFs (§\ref{sec:props}), before describing an experiment with a prototype implementing our approach (§\ref{sec:experiments}). Finally, we conclude and suggest potentially fruitful avenues for future work (§\ref{sec:conclusions}).
\section{Background}\label{sec:background}
\subsection{Forecasting}
Studies on the efficacy of judgemental forecasting have shown mixed results \cite{Makridakis2010,TetlockExp2017,Goodwin2019}. Limitations of the judgemental approach are a result of well-documented cognitive biases \cite{Kahneman2012}, irrationalities in human probabilistic reasoning which lead to distortion of forecasts. Manifold methodologies have been explored to improve judgemental forecasting accuracy to varying success \cite{Lawrence2006}. These methodologies include, but are not limited to, prediction intervals \cite{Lawrence1989}, decomposition \cite{MacGregorDonaldG1994Jdwd}, structured analogies \cite{Green2007,Nikolopoulos2015} and unaided judgement \cite{Litsiou2019}. Various group forecasting techniques have also been explored \cite{Linstone1975,Delbecq1986,Landeta2011}, although the risks of groupthink \cite{McNees1987} and the importance of maintaining the independence of each group member's individual forecast are well established \cite{Armstrong2001}. Recent advances in the field have been led by Tetlock and Mellers' superforecasting experiment \cite{TetlockGard2016}, which leveraged \AX{geopolitical} forecasting tournaments and a base of 5000 volunteer forecasters to identify individuals with consistently exceptional accuracy (top 2\%). The experiment\AR{'s} findings underline the effectiveness of group forecasting orientated around debating \cite{Tetlock2014art}, and demonstrate a specific cognitive-intellectual approach conducive for forecasting \cite{Mellers20151,Mellers2015}, but stop short of suggesting a concrete universal methodology for higher accuracy. Instead, Tetlock draws on his own work and previous literature to crystallise a broad set of methodological principles by which superforecasters abide \cite[pg.144]{TetlockGard2016}:
\begin{itemize}
\item \emph{Pragmatic}: not wedded to any idea or agenda;
\item \emph{Analytical:} capable of stepping back from the tip-of-your-nose perspective
and considering other views;
\item \emph{Dragonfly-eyed:} value diverse views and synthesise them into their own;
\item \emph{Probabilistic:} judge using many grades of maybe;
\item \emph{Thoughtful updaters:} when facts change, they change their minds;
\item \emph{Good intuitive psychologists:} aware of the value of checking thinking for cognitive and emotional biases.
\end{itemize}
Subsequent research after the superforecasting experiment has included exploring further optimal forecasting tournament preparation \cite{penn_global_2021,Katsagounos2021} and extending Tetlock and Mellers' approach to answer broader, more time-distant questions \cite{georgetown}. It should be noted that there have been no recent advances on computational tool\AX{kits}
for the field similar to that proposed in this paper.
\subsection{Computational Argumentation}
We posit that existing argumentation formalisms are not well suited for the aforementioned future-based arguments, which are necessarily semantically and structurally different from arguments about present or past concerns. Specifically, forecasting arguments are inherently probabilistic and must deal with the passage of time and its implications for the outcomes at hand.
Further, several other important characteristics can be drawn from the forecasting literature which render current argumentation formalisms unsuitable, e.g. the paramountcy of dealing with bias (in data and cognitive), forming granular conclusions, fostering group debate and the co-occurrence of qualitative and quantitative arguing.
Nonetheless,
several of these characteristics have been previously explored in argumentation and our formalisation draws from several existing approaches. First and foremost, it draws in spirit from abstract argumentation frameworks (AAFs) \cite{Dung1995}, in that
the arguments' inner contents are ignored and the focus is on the relationships between arguments.
However, we consider arguments of different types and \AX{an additional relation of} support (pro),
\AX{rather than} attack (con) alone as in \cite{Dung1995}.
Past work has also introduced probabilistic constraints into argumentation frameworks. {Probabilistic AAFs} (prAAFs) propose two divergent ways for modelling uncertainty in abstract argumentation using probabilities -- the constellation approach \cite{Dung2010,Li2012} and the epistemic approach \cite{Hunter2013,Hunter2014,Hunter2020}.
These formalisations use probability as a means to assess uncertainty over the validity of arguments (epistemic) or graph topology (constellation), but do not enable reasoning \emph{with} or \emph{about} probability, which is fundamental in forecasting.
In exploring temporality, \cite{Cobo2010} augment AAFs by providing each argument with a limited lifetime. Temporal constraints have been extended in \cite{Cobo2012} and \cite{Baron2014}. Elsewhere, \cite{Rago2017} have used argumentation to model irrationality or bias in
agents. Finally, a wide range of gradual evaluation methods have gone beyond traditional qualitative semantics by measuring arguments' acceptability on a scale (normally
[0,1]) \cite{Leite2011,Evripidou2012,Amgoud2017,Amgoud2018,Amgoud2016}.
Many of these approaches have been unified as Quantitative Bipolar Argumentation Frameworks (QBAFs) in \cite{Baroni2018}.
Amongst existing approaches, of special relevance in this paper are Quantitative Argumentation Debate (QuAD) frameworks \cite{Baroni2015},
i.e. 5-tuples ⟨$\mathcal{X}^a$, $\mathcal{X}^c$, $\mathcal{X}^p$, $\mathcal{R}$, $\ensuremath{\mathcal{\tau}}$⟩
where
$\mathcal{X}^a$ is a finite set of \emph{answer} arguments (to implicit \emph{issues}); $\mathcal{X}^c$ is a finite set of \emph{con} arguments;
$\mathcal{X}^p$ is a finite set of \emph{pro} arguments;
$\mathcal{X}^a$, $\mathcal{X}^c$ and $\mathcal{X}^p$ are pairwise disjoint; $\mathcal{R} \subseteq (\mathcal{X}^c \cup \mathcal{X}^p) \times (\mathcal{X}^a \cup \mathcal{X}^c \cup \mathcal{X}^p)$ is an acyclic binary relation;
$\ensuremath{\mathcal{\tau}}$ : $(\mathcal{X}^a \cup \mathcal{X}^c \cup \mathcal{X}^p) \rightarrow [0,1]$ is a total function: $\ensuremath{\mathcal{\tau}}(a)$ is the \emph{base score} of $a$.
Here, attackers and supporters of arguments are determined by the pro and con arguments they are in relation with. Formally, for any $a\in\mathcal{X}^a \cup \mathcal{X}^c \cup \mathcal{X}^p$, the set of \emph{con} (\emph{pro}\AX{)} \emph{arguments} of $a$ is $\mathcal{R}^-(a) = \{b\in\mathcal{X}^c|(b,a)\in\mathcal{R}\}$
($\mathcal{R}^+(a) = \{b\in\mathcal{X}^p|(b,a)\in\mathcal{R}\}$, resp.).
Arguments in QuAD frameworks are scored by
the \emph{Discontinuity-Free QuAD} (DF-QuAD) algorithm \cite{Rago2016}, using the argument's intrinsic base score and the \emph{strengths} of its pro/con arguments. \FTn{Given that DF-QuAD is used to define our method (see Def.~\ref{def:conscore}), for completeness we define it formally here.} DF-QuAD's \emph{strength aggregation function}
is defined as $\Sigma : [0,1]^* \rightarrow [0,1]$, where for $\mathcal{S} = (v_1,\ldots,v_n) \in [0,1]^*$:
if $n=0$, $\Sigma(\mathcal{S}) = 0$;
if $n=1$, $\Sigma(\mathcal{S}) = v_1$;
if $n=2$, $\Sigma(\mathcal{S}) = f(v_1, v_2)$;
if $n>2$, $\Sigma(\mathcal{S}) = f(\Sigma(v_1,\ldots,v_{n-1}), v_n)$;
with the \emph{base function} $f : [0,1]\times [0,1] \rightarrow [0,1]$ defined, for $v_1, v_2 \in [0,1]$, as:
$f(v_1,v_2)=v_1+(1-v_1)\cdot v_2 = v_1 + v_2 - v_1\cdot v_2$.
After separate aggregation of the argument's pro/con descendants, the combination function $c : [0,1]\times[0,1]\times[0,1]\rightarrow[0,1]$ combines $v^-$ and $v^+$ with the argument's base score ($v^0$):
$c(v^0,v^-,v^+)=v^0-v^0\cdot |v^+ - v^-|$ if $v^-\geq v^+$, and
$c(v^0,v^-,v^+)=v^0+(1-v^0)\cdot |v^+ - v^-|$ if $v^-< v^+$.
The inputs for the combination function are provided by the \emph{score function}, $\ensuremath{\mathcal{\sigma}} : \mathcal{X}^a\cup\mathcal{X}^c\cup\mathcal{X}^p\rightarrow [0,1]$, which gives the argument's strength, as follows: for any $\ensuremath{x} \in \mathcal{X}^a\cup\mathcal{X}^c\cup\mathcal{X}^p$:
$\ensuremath{\mathcal{\sigma}}(\ensuremath{x}) = c(\ensuremath{\mathcal{\tau}}(\ensuremath{x}),\Sigma(\ensuremath{\mathcal{\sigma}}(\mathcal{R}^-(\ensuremath{x}))),\Sigma(\ensuremath{\mathcal{\sigma}}(\mathcal{R}^+(\ensuremath{x}))))$
where if $(\ensuremath{x}_1,\ldots,\ensuremath{x}_n)$ is an arbitrary permutation of the ($n \geq 0$) con arguments in $\mathcal{R}^-(\ensuremath{x})$, $\ensuremath{\mathcal{\sigma}}(\mathcal{R}^-(\ensuremath{x}))=(\ensuremath{\mathcal{\sigma}}(\ensuremath{x}_1),\ldots,\ensuremath{\mathcal{\sigma}}(\ensuremath{x}_n))$ (similarly for pro arguments).
Note that the DF-QuAD notion of $\ensuremath{\mathcal{\sigma}}$ can be applied to any argumentation framework where arguments are equipped with base scores and pro/con arguments. We will do so later, for our novel formalism.
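For concreteness, the DF-QuAD definitions above can be transcribed directly into code. The following Python sketch is illustrative only, with \texttt{base}, \texttt{cons} and \texttt{pros} as stand-ins for $\ensuremath{\mathcal{\tau}}$, $\mathcal{R}^-$ and $\mathcal{R}^+$:
\begin{verbatim}
from functools import reduce

def f(v1, v2):
    # base function: f(v1, v2) = v1 + v2 - v1*v2
    return v1 + v2 - v1 * v2

def aggregate(values):
    # strength aggregation Sigma; folding f from 0 matches
    # the case-by-case definition, since f(0, v) = v
    return reduce(f, values, 0.0)

def combine(v0, v_minus, v_plus):
    # combination function c: move the base score v0 towards
    # 0 or 1 by the gap between con and pro strengths
    if v_minus >= v_plus:
        return v0 - v0 * abs(v_plus - v_minus)
    return v0 + (1 - v0) * abs(v_plus - v_minus)

def score(x, base, cons, pros):
    # score function sigma, recursing over an acyclic graph
    vm = aggregate([score(y, base, cons, pros) for y in cons.get(x, [])])
    vp = aggregate([score(y, base, cons, pros) for y in pros.get(x, [])])
    return combine(base[x], vm, vp)
\end{verbatim}
Since the relations are acyclic, the recursion in \texttt{score} terminates at leaf arguments, whose strength is simply their base score.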
\section{Update \AX{F}rameworks}\label{sec:fw}
We begin by defining the individual components of our frameworks, starting with the fundamental notion of a
\emph{forecast}.
\FT{This} is a probability estimate for the positive outcome of a given (binary) question.
\begin{definition}
A \emph{forecast} $\ensuremath{\mathcal{F}}$ is the probability $P(\ensuremath{\mathcal{Q}}=true) \in [0,1]$ for a given \emph{forecasting question} $\ensuremath{\mathcal{Q}}$.
\end{definition}
\begin{example}
\label{FAFEx}
Consider the forecasting question $\ensuremath{\mathcal{Q}}$: \emph{`Will the Tokyo \AX{2020 Summer} Olympics be cancelled/postponed to another year?'}.
\AX{Here, the $true$ outcome amounts to the Olympics being cancelled/postponed (and $false$ to it taking place in 2020 as planned).}
Then, a forecast $\ensuremath{\mathcal{F}}$ may be $P(\ensuremath{\mathcal{Q}}=true)= 0.15$, which amounts to a 15\% probability of the Olympics \BIn{being cancelled/postponed}. \BI{Note that $\ensuremath{\mathcal{F}}$ may have been introduced as part of an update framework (herein described), or as an initial base rate at the outset of a FAF (Stage 1 in Figure \ref{fig:FAFdiag}).}
\end{example}
In the remainder, we will often drop $\ensuremath{\mathcal{Q}}$, implicitly assuming it is given, and use $P(true)$ to stand for $P(\ensuremath{\mathcal{Q}}=true)$.
In order to empower agents to reason about probabilities and thus support forecasting, we need, in addition to
\emph{pro/con} arguments as in QuAD frameworks, two new argument types:
\begin{itemize}
\item
\emph{proposal} arguments,
each about some forecast (and its underlying forecasting question); each proposal argument $\ensuremath{\mathcal{P}}$ has a \emph{forecast} and, optionally, some supporting \emph{evidence}; and
\item \emph{amendment} arguments, which
\AX{suggest a modification to}
some forecast\AX{'s probability} by increasing or decreasing it, and are accordingly separated into
disjoint classes of \emph{increase} and \emph{decrease} (amendment) arguments.\footnote{Note that
we decline to include a third type of amendment argument
for arguing that $\ensuremath{\Forecast^\Proposal}$ is just right. This choice rests on the assumption that additional information always necessitates a change to $\ensuremath{\Forecast^\Proposal}$, however granular that change may be. This does not restrict individual agents arguing about $\ensuremath{\Forecast^\Proposal}$ from casting $\ensuremath{\Forecast^\Proposal}$ as their own final forecast. However, rather than cohering their argumentation around $\ensuremath{\Forecast^\Proposal}$, which we hypothesise would lead to high risk of groupthink~\cite{McNees1987}, agents are compelled to consider the impact of their amendment arguments on this more granular level.}
\end{itemize}
Note that amendment arguments are introduced specifically for arguing about a proposal argument, given that traditional QuAD pro/con
arguments are of limited use when the goal is to judge the acceptability of a probability, and that in forecasting agents must not only decide \emph{if} they agree/disagree but also \emph{how} they agree/disagree (i.e. whether they believe
the forecast is too low or too high considering, if available,
the evidence). Amendment arguments, with their increase and decrease classes, provide for this.
\begin{example}
\label{ProposalExample}
A proposal argument $\ensuremath{\mathcal{P}}$ in the Tokyo Olympics setting may comprise the forecast: \emph{\AX{`}There is a 75\% chance that the Olympics will be cancelled/postponed to another year'}. It may also include the
evidence: \emph{`A new poll today shows that 80\% of the Japanese public want the Olympics to be cancelled. The Japanese government is likely to buckle under this pressure.'}
This argument may aim to prompt updating the earlier forecast in Example~\ref{FAFEx}.
A \emph{decrease} amendment argument may be $\ensuremath{\decarg_1}$: \emph{`The International Olympic Committee and the Japanese government will ignore the views of the Japanese public'}. An \emph{increase} amendment argument may be $\ensuremath{\incarg_1}$: \emph{`Japan's increasingly popular opposition parties will leverage this to make an even stronger case for cancellation'}.
\end{example}
Intuitively, a proposal argument
is the focal point of the argumentation. It typically suggests a new forecast to replace prior forecasts, argued on the basis of some new evidence (as in the earlier example). We will see that proposal arguments remain immutable through each debate (update framework), which takes place via amendment arguments and standard pro/con arguments.
Note that, wrt QuAD argument types, proposal arguments replace issues and amendment arguments replace answers, in that the former are driving the debates and the latter are the options up for debate.
Note also that amendment arguments merely state a direction wrt $\ensuremath{\Forecast^\Proposal}$ and do not contain any more information, such as \emph{how much} to alter $\ensuremath{\Forecast^\Proposal}$ by.
We will see that alteration can be determined by \emph{scoring} amendment arguments.
Proposal and amendment arguments, alongside pro/con arguments, form part of
our novel update frameworks \BI{(Stage 2 of Figure \ref{fig:FAFdiag})}, defined as follows:
\begin{definition} An \emph{update framework} is a nonad
⟨$\ensuremath{\mathcal{P}}, \ensuremath{\mathcal{X}}, \ensuremath{\AmmArgs^-}, \ensuremath{\AmmArgs^+}, \ensuremath{\Rels^p}, \ensuremath{\Rels}, \ensuremath{\mathcal{A}}, \ensuremath{\mathcal{V}}, \ensuremath{\Forecast^\Agents}$⟩ such that:
\item[$\bullet$] $\ensuremath{\mathcal{P}}$ is a single proposal argument
with \emph{forecast} $\ensuremath{\Forecast^\Proposal}$ and, optionally, \emph{evidence} $\mathcal{E}^\ensuremath{\mathcal{P}}$ for this forecast;
\item[$\bullet$] $\ensuremath{\mathcal{X}} = \ensuremath{\AmmArgs^\uparrow} \cup \ensuremath{\AmmArgs^\downarrow}$ is a finite set of \emph{amendment arguments} composed of subsets $\ensuremath{\AmmArgs^\uparrow}$ of \emph{increase} arguments and
$\ensuremath{\AmmArgs^\downarrow}$ of
\emph{decrease} arguments;
\item[$\bullet$] $\ensuremath{\AmmArgs^-}$ is a finite set
of \emph{con} arguments;
\item[$\bullet$] $\ensuremath{\AmmArgs^+}$ is a finite set
of \emph{pro} arguments;
\item[$\bullet$] the sets $\{\ensuremath{\mathcal{P}}\}$, $\ensuremath{\AmmArgs^\uparrow}$, $\ensuremath{\AmmArgs^\downarrow}$, $\ensuremath{\AmmArgs^-}$ and $\ensuremath{\AmmArgs^+}$ are pairwise disjoint;
\item[$\bullet$] $\ensuremath{\Rels^p}$ $\subseteq$ $\ensuremath{\mathcal{X}}$ $\times$ $\{\ensuremath{\mathcal{P}}\}$ is a directed acyclic
binary relation
between amendment arguments and the proposal argument (we may refer to this relation informally as `probabilistic');
\item[$\bullet$] $\ensuremath{\Rels}$ $\subseteq$ ($\ensuremath{\AmmArgs^-}$ $\cup$ $\ensuremath{\AmmArgs^+}$) $\times$ ($\ensuremath{\mathcal{X}}$ $\cup$ $\ensuremath{\AmmArgs^-}$ $\cup$ $\ensuremath{\AmmArgs^+}$) is a directed acyclic,
binary relation
\FTn{from} pro/con arguments
\FTn{to} amendment\FTn{/pro/con arguments} (we may refer to this relation informally as `argumentative');
\item[$\bullet$] $\ensuremath{\mathcal{A}} = \{ \ensuremath{a}_1, \ldots, \ensuremath{a}_n \}$ is a finite set of \emph{agents} ($n > 1$);
\item[$\bullet$] $\ensuremath{\mathcal{V}}$ : $\ensuremath{\mathcal{A}}$ $\times$ ($\ensuremath{\AmmArgs^-}$ $\cup$ $\ensuremath{\AmmArgs^+}$) $\rightarrow$ [0, 1] is a total function such that $\ensuremath{\mathcal{V}}(\ensuremath{a},\ensuremath{x})$ is the \emph{vote} of agent $\ensuremath{a}\in\ensuremath{\mathcal{A}}$ on argument $\ensuremath{x} \in \ensuremath{\AmmArgs^-} \cup \ensuremath{\AmmArgs^+}$; with an abuse of notation, we let $\ensuremath{\mathcal{V}}_\ensuremath{a}$ : ($\ensuremath{\AmmArgs^-}$ $\cup$ $\ensuremath{\AmmArgs^+}$) $\rightarrow [0, 1]$ represent the votes of a \emph{single} agent $\ensuremath{a}\in\ensuremath{\mathcal{A}}$, e.g. $\ensuremath{\mathcal{V}}_\ensuremath{a}(\ensuremath{x}) = \ensuremath{\mathcal{V}}(\ensuremath{a},\ensuremath{x})$;
\item[$\bullet$] $\ensuremath{\Forecast^\Agents} = \{ \ensuremath{\Forecast^\Agents}_{\ensuremath{a}_1}, \ldots, \ensuremath{\Forecast^\Agents}_{\ensuremath{a}_n} \}$ is such that $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_i}$, where $i \in \{ 1, \ldots, n \}$, is the \emph{forecast} of agent $\ensuremath{a}_i\in\ensuremath{\mathcal{A}}$.
\end{definition}
\BIn{Note that pro \AX{(}con\AX{)} arguments can be seen as supporting (attacking, resp.) other arguments via $\ensuremath{\mathcal{R}}$, as in the case of conventional QuAD frameworks~\cite{Baroni2015}.}
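For illustration, an update framework can be represented concretely as a plain container; the Python sketch below is ours (names and types are assumptions, with arguments as opaque identifiers):
\begin{verbatim}
from dataclasses import dataclass, field

@dataclass
class UpdateFramework:
    # illustrative container for the nonad (names are ours)
    proposal_forecast: float                       # F^P in [0,1]
    evidence: str = ""                             # optional E^P
    increase: set = field(default_factory=set)     # amendment args (up)
    decrease: set = field(default_factory=set)     # amendment args (down)
    cons: set = field(default_factory=set)         # con arguments
    pros: set = field(default_factory=set)         # pro arguments
    rel_p: set = field(default_factory=set)        # R^p: (amendment, P)
    rel: set = field(default_factory=set)          # R: (pro/con, target)
    agents: set = field(default_factory=set)       # A
    votes: dict = field(default_factory=dict)      # (agent, arg) -> [0,1]
    forecasts: dict = field(default_factory=dict)  # agent -> [0,1]
\end{verbatim}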
\begin{example}
\label{eg:tokyo}
A possible update framework in our running setting may include $\ensuremath{\mathcal{P}}$ as in Example~\ref{ProposalExample} as well as (see Table \ref{table:tokyo}) $\ensuremath{\mathcal{X}}=\{\ensuremath{\decarg_1}, \ensuremath{\decarg_2}, \ensuremath{\incarg_1}\}$, $\ensuremath{\AmmArgs^-}=\{\ensuremath{\attarg_1}, \ensuremath{\attarg_2}, \ensuremath{\attarg_3}\}$, $\ensuremath{\AmmArgs^+}=\{\ensuremath{\supparg_1}, \ensuremath{\supparg_2}\}$, $\ensuremath{\Rels^p}=\{(\ensuremath{\decarg_1}, \ensuremath{\mathcal{P}})$, $(\ensuremath{\decarg_2}, \ensuremath{\mathcal{P}}), (\ensuremath{\incarg_1}, \ensuremath{\mathcal{P}})\}$, and $\ensuremath{\mathcal{R}}=\{(\ensuremath{\attarg_1}, \ensuremath{\decarg_1}), (\ensuremath{\attarg_2}, \ensuremath{\decarg_1}), (\ensuremath{\attarg_3}, \ensuremath{\incarg_1})$, $(\ensuremath{\supparg_1}, \ensuremath{\decarg_2}),$ $(\ensuremath{\supparg_2}, \ensuremath{\incarg_1})\}$. Figure \ref{fig:tokyo} gives a graphical representation of
these arguments and relations.
\BIn{Assuming $\ensuremath{\mathcal{A}}=\{alice, bob, charlie\}$, $\ensuremath{\mathcal{V}}$ may be such that $\AX{\ensuremath{\mathcal{V}}_{alice}(\ensuremath{\attarg_1})} = 1$, $\AX{\ensuremath{\mathcal{V}}_{bob}(\ensuremath{\supparg_1})} = 0$, and so on.}
\end{example}
\begin{table}[t]
\begin{tabular}{p{0.7cm}p{6.7cm}}
\hline
&
Content \\ \hline
$\ensuremath{\mathcal{P}}$ &
`A new poll today shows that 80\% of the Japanese public want the Olympics to be cancelled owing to COVID-19, and the Japanese government is likely to buckle under this pressure ($\mathcal{E}^\ensuremath{\mathcal{P}})$. Thus, there is a 75\% chance that the Olympics will be cancelled/postponed to another year' ($\ensuremath{\Forecast^\Proposal}$). \\
$\ensuremath{\decarg_1}$ &
`The International Olympic Committee and the Japanese government will ignore the views of the Japanese public'. \\
$\ensuremath{\decarg_2}$ &
`This poll comes from an unreliable source.' \vspace{2mm}\\
$\ensuremath{\incarg_1}$ &
`Japan's increasingly popular opposition parties will leverage this to make an even stronger case for cancellation.' \\
$\ensuremath{\attarg_1}$ &
`The IOC is bluffing - people are dying, Japan is experiencing a strike. They will not go ahead with the games if there is a risk of mass death.' \\
$\ensuremath{\attarg_2}$ &
`The Japanese government may renege on its commitment to the IOC, and use legislative or immigration levers to block the event.' \\
$\ensuremath{\attarg_3}$ &
`Japan's government has sustained a high-approval rating in the last year and is strong enough to ward off opposition attacks.' \\
$\ensuremath{\supparg_1}$ &
`This pollster has a track record of failure on Japanese domestic issues.' \\
$\ensuremath{\supparg_2}$ &
`Rising anti-government sentiment on Japanese Twitter indicates that voters may be receptive to such arguments.' \\ \hline
\end{tabular}
\caption{Arguments in the update framework in Example~\ref{eg:tokyo}.}
\label{table:tokyo}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{images/FAF1.png}
\centering
\caption {\BIn{A graphical representation of arguments and relations in the update framework
from Example~\ref{eg:tokyo}. Nodes
represent proposal ($\ensuremath{\mathcal{P}}$), increase ($\uparrow$), decrease ($\downarrow$), pro ($+$) and con ($-$)
arguments, while \FTn{dashed/solid} edges indicate, resp., the $\ensuremath{\Rels^p}$/$\ensuremath{\mathcal{R}}$ relations.
}
}
\label{fig:tokyo}
\end{figure}
Several considerations about update frameworks are in order.
Firstly, they represent `stratified' debates: graphically, they can be represented as trees with
the proposal argument as root, amendment arguments as children of the root, and pro/con arguments forming the lower layers, as shown in Figure \ref{fig:tokyo}.
This tree structure serves to focus argumentation towards the proposal (i.e. the probability and, if available, evidence) it puts forward.
Second, we have chosen to impose a `structure' on proposal arguments, whereby their forecast is distinct from their (optional) evidence. Here the forecast has special primacy over the evidence, because forecasts are the vital reference point and the drivers of debates in FAFs. They are, accordingly, both mandatory and required to `stand out' to participating agents. In the spirit of abstract argumentation \cite{Dung1995}, we nonetheless treat all arguments, including proposal arguments, as `abstract', and focus on relations between them rather than between their components. In practice, therefore, amendment arguments may relate to a proposal argument's forecast but also, if present, to its evidence. We opt for this abstract view on the assumption that the flexibility of this approach better suits judgemental forecasting, which has a diversity of use cases (e.g. including politics, economics and sport) where different argumentative approaches may be deployed (i.e. quantitative, qualitative, directly attacking amendment nodes or raising alternative POVs) and wherein forecasters may lack even a basic knowledge of argumentation.
We leave the study of structured variants of our framework (e.g. see overview in \cite{structArg}) to future work: these may consider finer-grained representations of all arguments in terms of different components, and finer-grained notions of relations between components, rather than full arguments. Third, in update frameworks, voting is restricted to pro/con arguments. Preventing agents from voting directly on amendment arguments mitigates the risk of arbitrary judgements: agents cannot make off-the-cuff estimations but can only express their beliefs via (pro/con) argumentation, thus
ensuring a more rigorous process of appraisal for the proposal and amendment arguments.
Note that rather than facilitating voting on arguments using a two-valued perspective (i.e. positive/negative)
or a three-valued perspective (i.e. positive/negative/neutral), $\ensuremath{\mathcal{V}}$ allows agents to cast more granular judgements of (pro/con) argument acceptability, the need for which has been highlighted in the literature \cite{Mellers2015}.
Finally, although we envisage that arguments of all types are put forward by agents during debates, we do not capture this mapping in update frameworks. Thus, we do not capture who put forward which arguments, but instead only use votes to encode and understand agents' views. This enables more nuanced reasoning and full engagement on the part of agents with alternative viewpoints (i.e. an agent may freely argue both for and against a point before taking an explicit view with their voting). Such conditions are essential in a healthy forecasting debate \cite{Landeta2011,Mellers2015}.
In the remainder of this paper, with an abuse of notation, we often use $\ensuremath{\Forecast^\Proposal}$ to denote, specifically, the probability advocated in $\ensuremath{\Forecast^\Proposal}$ (e.g. 0.75 in Example \ref{ProposalExample}).
\section{Aggregating Rational Forecasts}\label{sec:forecasting}
In this section we
formally introduce (in \AX{§}\ref{subsec:rationality}) our notion of rationality and discuss how it may be used to identify\BI{, and subsequently `block',} undesirable behaviour in forecasters. We then define (in \AX{§}\ref{subsec:aggregation}) a method for calculating a revised forecast \BI{(Stage 2c of Figure \ref{fig:FAFdiag})}, which aggregates the views of all agents in the update framework, whilst optimising their overall forecasting accuracy.
\subsection{Rationality}\label{subsec:rationality}
Characterising an agent’s view as irrational offers opportunities to refine the accuracy of their forecast (and thus the overall aggregated group forecast). Our definition of rationality is inspired by, but goes beyond, that of QuAD-V \cite{Rago2017}, which was introduced for the e-polling context. Whilst update frameworks eventually produce a single aggregated forecast on the basis of group deliberation, each agent is first evaluated for their rationality on an individual basis. Thus, as in QuAD-V, in order to define rationality for individual agents, we first reduce frameworks to \emph{delegate frameworks} for each agent, which are the restriction of update frameworks
to a single agent.
\begin{definition}
A \emph{delegate framework} for an agent $\ensuremath{a}$ is $\ensuremath{u}_{\ensuremath{a}} =$ ⟨$\ensuremath{\mathcal{P}}, \ensuremath{\mathcal{X}}, \ensuremath{\AmmArgs^-}, \ensuremath{\AmmArgs^+}, \ensuremath{\Rels^p}, \ensuremath{\Rels}, \ensuremath{a}, \ensuremath{\mathcal{V}}_{\ensuremath{a}}, \ensuremath{\Forecast^\Agents}_{\ensuremath{a}}$⟩.
\end{definition}
Note that all arguments in an update framework are included in each agent's delegate framework, but only the agent's votes and forecast are carried over.
Recognising the irrationality of an agent requires
comparing the agent's forecast against (an aggregation of) their opinions on the amendment arguments and, by extension, on the proposal argument.
To this end, we evaluate the different parts of the update framework as follows. We use the DF-QuAD algorithm \cite{Rago2016} to score each amendment argument for the agent, in the context of the pro/con arguments `linked' to it via $\ensuremath{\mathcal{R}}$ in the agent's delegate framework. We refer to the DF-QuAD score function as $\ensuremath{\mathcal{\sigma}}$.
This requires a choice of base scores for amendment arguments as well as pro/con arguments.
We assume the same base score $\ensuremath{\mathcal{\tau}}(\ensuremath{x})=0.5$ for all $\ensuremath{x} \in \ensuremath{\mathcal{X}}$;
in contrast, the base score of pro/con arguments is a result of the votes they received from the agent, in the spirit of QuAD-V \cite{Rago2017}.
The intuition behind assigning a neutral (0.5) base score for amendment arguments is that an agent's estimation of their strength from the outset would be susceptible to bias and inaccuracy.
Once each amendment argument has been scored
(using $\ensuremath{\mathcal{\sigma}}$) for the agent, we aggregate the scores of all amendment arguments (for the same agent) to calculate the agent's \emph{confidence score} in the proposal argument (which underpins our rationality constraints), by weighing the mean strength of the increase amendment arguments against that of the decrease amendment arguments:
\begin{definition}\label{def:conscore}
Given a delegate framework $\ensuremath{u}_{\ensuremath{a}}$ = ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{a}$, $\ensuremath{\mathcal{V}}_{\ensuremath{a}}$, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}$⟩
, let
$\ensuremath{\AmmArgs^\uparrow} = \{ \ensuremath{\incarg_1}, \ensuremath{\incarg_2}, \ldots , \ensuremath{\arg^\uparrow}_i \}$ and $\ensuremath{\AmmArgs^\downarrow} = \{ \ensuremath{\decarg_1}, \ensuremath{\decarg_2}, \ldots , \ensuremath{\arg^\downarrow}_j \}$.
Then,
$\ensuremath{a}$'s \emph{confidence score} is as follows:
\begin{align}
&\text{if } i\neq0, j\neq0: \quad \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) = \frac{1}{i} \sum_{k=1}^{i} \ensuremath{\mathcal{\sigma}}(\ensuremath{\arg^\uparrow}_k) - \frac{1}{j} \sum_{l=1}^{j} \ensuremath{\mathcal{\sigma}}(\ensuremath{\arg^\downarrow}_l); \nonumber \\
&\text{if } i\neq0, j=0: \quad \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) = \frac{1}{i} \sum_{k=1}^{i} \ensuremath{\mathcal{\sigma}}(\ensuremath{\arg^\uparrow}_k); \nonumber \\
&\text{if } i=0, j\neq0: \quad \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) = - \frac{1}{j} \sum_{l=1}^{j} \ensuremath{\mathcal{\sigma}}(\ensuremath{\arg^\downarrow}_l); \nonumber \\
&\text{if } i=0, j=0: \quad \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) = 0. \nonumber
\end{align}
\end{definition}
Note that $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) \in [-1,1]$, which denotes the overall views of the agent on the forecast $\ensuremath{\Forecast^\Proposal}$ (i.e. as to whether it should be \emph{increased} or \emph{decreased}, and how far). A negative (positive) $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})$ indicates that an agent believes that $\ensuremath{\Forecast^\Proposal}$ should be amended down (up, resp.).
The size of $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})$ reflects the degree of the agent's certainty in either direction.
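For illustration, Def.~\ref{def:conscore} can be computed directly from the amendment arguments' DF-QuAD scores; a minimal sketch (ours), with \texttt{sigma} mapping each amendment argument to its score as in the earlier sketch:
\begin{verbatim}
def confidence(increase_args, decrease_args, sigma):
    # C_a(P): mean strength of increase arguments minus mean
    # strength of decrease arguments; an empty set contributes 0,
    # which covers all four cases of the definition
    up = [sigma[x] for x in increase_args]
    down = [sigma[x] for x in decrease_args]
    up_mean = sum(up) / len(up) if up else 0.0
    down_mean = sum(down) / len(down) if down else 0.0
    return up_mean - down_mean
\end{verbatim}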
In turn, we can constrain an agent's forecast $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$ if it contradicts this belief
as follows.
\begin{definition}\label{def:irrationality}
Given a delegate framework $\ensuremath{u}_{\ensuremath{a}}$ = ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{a}$, $\ensuremath{\mathcal{V}}_{\ensuremath{a}}$, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}$⟩, $\ensuremath{a}$'s forecast $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$ is \emph{strictly rational} (wrt $\ensuremath{u}_{\ensuremath{a}}$) iff:
\begin{align}
&\text{if } \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) < 0 \text{ then } \ensuremath{\Forecast^\Agents}_\ensuremath{a} < \ensuremath{\Forecast^\Proposal}; \nonumber \\
&\text{if } \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) > 0 \text{ then } \ensuremath{\Forecast^\Agents}_\ensuremath{a} > \ensuremath{\Forecast^\Proposal}; \nonumber \\
&|\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})| \geq \frac{|\ensuremath{\Forecast^\Proposal} - \ensuremath{\Forecast^\Agents}_\ensuremath{a}|}{\ensuremath{\Forecast^\Proposal}}. \nonumber
\end{align}
\end{definition}
Hereafter, we refer to forecasts which violate the first two constraints as, resp., \emph{irrational increase} and \emph{irrational decrease} forecasts, and to forecasts which violate the final constraint as \emph{irrational scale} forecasts.
This definition of rationality preserves the integrity of the group forecast in two ways. First, it prevents agents from forecasting against their beliefs: an agent cannot increase $\ensuremath{\Forecast^\Proposal}$ if $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) < 0$ and cannot decrease $\ensuremath{\Forecast^\Proposal}$ if $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) > 0$. Second, it ensures that agents cannot make forecasts disproportionate to their confidence score: \emph{how far} an agent $\ensuremath{a}$ deviates from $\ensuremath{\Forecast^\Proposal}$ is restricted by $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})$, in that $|\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})|$ must be greater than or equal to the relative change to $\ensuremath{\Forecast^\Proposal}$ denoted in their forecast $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$.
Note that the \emph{irrational scale} constraint deals with just one direction of proportionality (i.e. providing only a maximum threshold for $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$'s deviation from $\ensuremath{\Forecast^\Proposal}$, but no minimum threshold). Here, we avoid bidirectional proportionality on the grounds that such a constraint would impose an arbitrary notion of arguments' `impact' on agents. An agent may have a very high $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})$, indicating
\FT{their} belief that $\ensuremath{\Forecast^\Proposal}$ is too low, but \AX{may}, we suggest, rationally choose to increase $\ensuremath{\Forecast^\Proposal}$ by only a small amount (e.g. if, despite
\FT{their} general agreement with the arguments,
\FT{they} believe the overall issue at stake in $\ensuremath{\mathcal{P}}$ to be minor or low impact to the overall forecasting question). Our definition of rationality, which relies on notions of argument strength derived from DF-QuAD, thus informs but does not wholly dictate agents' forecasting, affording them considerable freedom. We leave alternative, stricter definitions of rationality, which may derive from probabilistic conceptions of argument strength, to future work.
\begin{example}
Consider our running Tokyo Olympics example, with the same
arguments and relations from Example \ref{eg:tokyo} and an agent \BIn{$alice$} with a confidence score \BIn{$\ensuremath{\mathcal{C}}_{alice}(\ensuremath{\mathcal{P}}) = -0.5$}. From this we know that \BIn{$alice$} believes that the suggested
$\ensuremath{\Forecast^\Proposal}$ in the proposal argument $\ensuremath{\mathcal{P}}$
should be decreased.
Then, under our definition of rationality, \BIn{$alice$'s} forecast \BIn{$\ensuremath{\Forecast^\Agents}_{alice}$} is `rational' if it decreases $\ensuremath{\Forecast^\Proposal}$ by up to 50\%.
\end{example}
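The three rationality constraints can be checked mechanically. A minimal sketch (ours) follows; note that the scale check presupposes $\ensuremath{\Forecast^\Proposal} > 0$:
\begin{verbatim}
def is_strictly_rational(c, f_proposal, f_agent):
    # c: confidence score C_a(P); f_proposal: proposal forecast
    # F^P; f_agent: the agent's forecast F^A_a
    if c < 0 and not f_agent < f_proposal:
        return False  # irrational increase
    if c > 0 and not f_agent > f_proposal:
        return False  # irrational decrease
    # irrational scale: relative deviation bounded by |c|
    return abs(c) >= abs(f_proposal - f_agent) / f_proposal
\end{verbatim}
For instance, with $\ensuremath{\mathcal{C}}_{alice}(\ensuremath{\mathcal{P}}) = -0.5$ and $\ensuremath{\Forecast^\Proposal} = 0.75$ as in the example above, exactly the forecasts in $[0.375, 0.75)$ pass all three checks.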
If an agent's forecast $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$ violates these rationality constraints then \BI{it is `blocked'} and the agent is prompted to return to the argumentation graph. From here, they may carry out one or more of the following actions to render their forecast rational:
\begin{enumerate}[label=\alph*.]
\item Revise their forecast;
\item Revise their votes on arguments;
\item Add new arguments to the update framework (and vote on them).
\end{enumerate}
Whilst a) and b) occur on an agent-by-agent basis, confined to each delegate framework, c) affects the shared update framework and requires special consideration.
Each time new \AX{arguments}
are added to the shared graph, every agent must vote on
\AX{them}, even if they have already made a rational forecast. In certain cases, after an agent has voted on a new argument, it is possible that their rational forecast is made irrational. In this instance, the agent must resolve their irrationality via the steps above. In this way, the update framework can be refined on an iterative basis until the graph is no longer being modified and all agents' forecasts are rational. At this stage, the update framework has reached a stable state and the agents $\ensuremath{\mathcal{A}}$ are collectively rational:
\begin{definition} Given an update framework $\ensuremath{u}$ = ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{V}}$, $\ensuremath{\Forecast^\Agents}$⟩, $\ensuremath{\mathcal{A}}$ is \emph{collectively rational} (wrt \emph{u}) iff $\forall \ensuremath{a} \in \ensuremath{\mathcal{A}}$, $\ensuremath{a}$ is individually rational (wrt the
delegate framework $\ensuremath{u}_{\ensuremath{a}}$ = ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{a}$, $\ensuremath{\mathcal{V}}_{\ensuremath{a}}$, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}$⟩).
\end{definition}
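The iterative refinement just described can be sketched as a simple fixpoint loop. The sketch below is ours and assumes two callbacks: \texttt{rational(fw, a)}, testing Def.~\ref{def:irrationality} for agent \texttt{a}, and \texttt{act(fw, a)}, which lets \texttt{a} revise their forecast or votes or add arguments, returning whether the shared graph changed:
\begin{verbatim}
def refine_until_stable(fw, rational, act):
    # repeat until no agent modifies the shared graph and every
    # agent's forecast is strictly rational (a stable state)
    while True:
        graph_changed = False
        for agent in fw.agents:
            if not rational(fw, agent):
                graph_changed |= act(fw, agent)
        if not graph_changed and all(rational(fw, a) for a in fw.agents):
            return fw  # agents are collectively rational
\end{verbatim}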
When $\ensuremath{\mathcal{A}}$ is collectively rational, the
update framework $u$ becomes immutable and the aggregation (defined next)
\AX{produces} a group forecast $\ensuremath{\Forecast^g}$, which becomes the
new $\ensuremath{\mathcal{F}}$.
\subsection{Aggregating Forecasts}\label{subsec:aggregation}
After all the agents have made a rational forecast, an aggregation function is applied to produce one collective forecast. One advantage of forecasting debates vis-a-vis
\AX{the} many other forms of debate is that a ground truth always exists -- an event either happens or does not. This means that, over time and after enough FAF instantiations, data on the forecasting success of different agents can be amassed. In turn, the relative historical performance of forecasting agents can inform the aggregation of group forecasts. In update frameworks, a weighted aggregation function based on Brier Scoring \cite{Brier1950} is used, such that more accurate forecasting agents have greater influence over the final forecast.
Brier Scores are a widely used criterion to measure the accuracy of probabilistic predictions, effectively gauging the distance between a forecaster's predictions and an outcome after it has(n't) happened, as follows.
\begin{definition} \label{def:bscore}
Given an agent $\ensuremath{a}$, a non-empty series of forecasts $\ensuremath{\Forecast^\Agents}_\ensuremath{a}(1), \ldots, \ensuremath{\Forecast^\Agents}_\ensuremath{a}(\ensuremath{\mathcal{N}}_{\ensuremath{a}})$ with corresponding actual outcomes $\ensuremath{\mathcal{O}}_1, \ldots,$ $\ensuremath{\mathcal{O}}_{\ensuremath{\mathcal{N}}_{\ensuremath{a}}} \in \{true, false\}$ (where $\ensuremath{\mathcal{N}}_{\ensuremath{a}}>0$ is the number of forecasts $\ensuremath{a}$ has made in a non-empty sequence of as many update frameworks), $\ensuremath{a}$'s Brier Score $\ensuremath{b}_{\ensuremath{a}} \in [0, 1]$ is as follows:
\begin{align}
\ensuremath{b}_{\ensuremath{a}} = \frac{1}{\ensuremath{\mathcal{N}}_{\ensuremath{a}}} \sum_{t=1}^{\ensuremath{\mathcal{N}}_{\ensuremath{a}}} (\ensuremath{\Forecast^\Agents}_\ensuremath{a}(t) - val(\ensuremath{\mathcal{O}}_t))^2 \nonumber
\end{align}
where $val(\ensuremath{\mathcal{O}}_t)=1$ if $\ensuremath{\mathcal{O}}_t=true$, and 0 otherwise.
\end{definition}
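In code, Def.~\ref{def:bscore} is simply a mean squared error; a minimal sketch (ours):
\begin{verbatim}
def brier_score(forecasts, outcomes):
    # forecasts: probabilities in [0,1]; outcomes: booleans
    assert forecasts and len(forecasts) == len(outcomes)
    return sum((f - (1.0 if o else 0.0)) ** 2
               for f, o in zip(forecasts, outcomes)) / len(forecasts)
\end{verbatim}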
A Brier Score $\ensuremath{b}$ is effectively the mean squared error used to gauge forecasting accuracy, where a low $\ensuremath{b}$ indicates high accuracy and high $\ensuremath{b}$ indicates low accuracy. This
can be used in the update framework's aggregation function via the weighted arithmetic mean as follows.
\AX{E}ach Brier Score is inverted to ensure that more (less, resp.) accurate forecasters have higher (lower, resp.) weighted influence\AX{s} on $\ensuremath{\Forecast^g}$:
\begin{definition}\label{def:group}
Given a set of agents $\ensuremath{\mathcal{A}} = \{\ensuremath{a}_1, \ldots,\ensuremath{a}_n\}$,
their corresponding set of Brier Scores $\ensuremath{b} = \{\ensuremath{b}_{\ensuremath{a}_1}, \ldots,\ensuremath{b}_{\ensuremath{a}_n}\}$ and
their forecasts $\ensuremath{\Forecast^\Agents} = \{\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_1}, \ldots,\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_n}\}$, and letting, for $i \!\!\in\!\! \{ 1, \ldots, n\}$, $w_{i} \!\!=\!\! 1-\ensuremath{b}_{\ensuremath{a}_i}$, the \emph{group forecast} $\ensuremath{\Forecast^g}$ is
as follows:
\begin{align}
&\text{if } \sum_{i=1}^{n}w_{i} \neq 0: &
&\ensuremath{\Forecast^g} = \frac{\sum_{i=1}^{n}w_{i}\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_i}}{\sum_{i=1}^{n}w_{i}}; \nonumber \\
&\text{otherwise}: &
&\ensuremath{\Forecast^g} = 0. \nonumber
\end{align}
\end{definition}
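Def.~\ref{def:group} then amounts to a weighted arithmetic mean with inverted Brier scores as weights; a minimal sketch (ours), complementing \texttt{brier\_score} above:
\begin{verbatim}
def group_forecast(forecasts, brier_scores):
    # weights are inverted Brier scores, so historically
    # accurate agents count for more
    weights = [1.0 - b for b in brier_scores]
    total = sum(weights)
    if total == 0:  # every agent is totally inaccurate
        return 0.0
    return sum(w * f for w, f in zip(weights, forecasts)) / total
\end{verbatim}
The zero-weight fallback corresponds to the degenerate case in which every agent has a record of total inaccuracy.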
This group forecast could be `activated' after a fixed number of debates (with the mean average used prior), when sufficient data has been collected on the accuracy of participating agents, or after a single debate, in the context of our general
\emph{Forecasting Argumentation Frameworks}:
\begin{definition} A \emph{Forecasting Argumentation Framework} (FAF) is a triple ⟨$ \ensuremath{\mathcal{F}}, \ensuremath{\mathcal{U}}, \ensuremath{\mathcal{T}}$⟩ such that:
\item[$\bullet$] $\ensuremath{\mathcal{F}}$ is a \emph{forecast};
\item[$\bullet$] $\ensuremath{\mathcal{U}}$ is a finite, non-empty sequence of update frameworks with \ensuremath{\mathcal{F}}\ the forecast of the proposal argument in the first update framework in the sequence\AR{;} the forecast of each subsequent update framework is the group forecast of the previous update framework's agents' forecasts;
\item[$\bullet$] $\ensuremath{\mathcal{T}}$ is a preset time limit representing the lifetime of the FAF;
\item[$\bullet$] each agent's forecast wrt the agent's delegate framework drawn from each update framework is strictly rational.
\end{definition}
\begin{example}
\BIn{Consider our running Tokyo Olympics example: the overall FAF may be composed of $\ensuremath{\mathcal{F}} = 0.15$, update frameworks $\ensuremath{\mathcal{U}} = \{ u_1, u_2, u_3 \}$ and time limit $\ensuremath{\mathcal{T}}=14\ days$, where $u_3$ is the latest (and therefore the only open) update framework after, for example, four days.}
\end{example}
\AX{T}he superforecasting literature explores a range of forecast aggregation algorithms, e.g. extremizing algorithms \cite{Baron2014} and variations on logistic \AX{and} Fourier $L_2E$ regression \cite{Cross2018}, with considerable success.
\AX{T}hese approaches
\AX{aim}
at ensuring that less certain
\AX{or less} accurate forecasts have a lesser influence over the final aggregated forecast. We believe that
FAFs apply a more intuitive algorithm
\AX{since} much of the `work' needed to bypass inaccurate and erroneous forecasting is
\AX{expedited}
via argumentation.
\section{Properties}\label{sec:props}
We now undertake a theoretical analysis of FAFs by considering mathematical properties they satisfy. Note that the properties of the DF-QuAD algorithm (see \cite{Rago2016}) hold (for the amendment and pro/con arguments) here. For brevity, we focus on novel properties unique to FAFs which relate to our new argument types. These properties concern aggregated group forecasts wrt a debate (update framework). They imply two broad, and we posit desirable, principles of \emph{balance} and \emph{unequal representation}.
We assume for this section a generic update framework $\ensuremath{u} = $ ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^\downarrow}$, $\ensuremath{\AmmArgs^\uparrow}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{V}}$, $\ensuremath{\Forecast^\Agents}$⟩ with group forecast $\ensuremath{\Forecast^g}$.
\paragraph{Balance.}
The intuition for these properties is that
differences between
$\ensuremath{\Forecast^g}$ and
$\ensuremath{\Forecast^\Proposal}$ correspond to imbalances between the
\emph{increase} and \emph{decrease} amendment arguments.
The first result states that
$\ensuremath{\Forecast^g}$ only differs from
$\ensuremath{\Forecast^\Proposal}$ if $\ensuremath{\Forecast^\Proposal}$ is the dialectical target of amendment arguments.
\begin{proposition} \label{prop:balance1}
If $\ensuremath{\mathcal{X}}\!\!=\!\!\emptyset$ ($\ensuremath{\AmmArgs^\downarrow}\!\!=\!\!\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\!\!=\!\!\emptyset$), then $\ensuremath{\Forecast^g}\!\!=\!\!\ensuremath{\Forecast^\Proposal}$.
\end{proposition}
\begin{proof}
If $\ensuremath{\AmmArgs^\downarrow}\!\!=\!\!\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\!\!=\!\!\emptyset$ then $\forall \ensuremath{a} \!\in\! \ensuremath{\mathcal{A}}$,
$\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})\!=\!0$ by Def.~\ref{def:conscore} and $\ensuremath{\Forecast^\Agents}_\ensuremath{a}=\ensuremath{\Forecast^\Proposal}$ by Def.~\ref{def:irrationality}.
Then, $\ensuremath{\Forecast^g}=\ensuremath{\Forecast^\Proposal}$ by Def.~\ref{def:group}.
\end{proof}
This simple proposition conveys an important property for forecasting: for an agent to put forward a different forecast, amendment arguments must have been introduced.
\begin{example}
In the Olympics setting, the group of agents could only forecast higher or lower than the proposed forecast $\ensuremath{\Forecast^\Proposal}$ after the addition of at least one of the amendment arguments $\ensuremath{\decarg_1}$, $\ensuremath{\decarg_2}$ or $\ensuremath{\incarg_1}$.
\end{example}
In the absence of increase (decrease) amendment arguments, if there are decrease (increase, resp.) amendment arguments, then $\ensuremath{\Forecast^g}$ is not higher (lower, resp.) than $\ensuremath{\Forecast^\Proposal}$.
\begin{proposition}\label{prop:balance2}
If $\ensuremath{\AmmArgs^\downarrow}\neq\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}=\emptyset$, then $\ensuremath{\Forecast^g} \leq\ensuremath{\Forecast^\Proposal}$.
\label{balance3prop}
If $\ensuremath{\AmmArgs^\downarrow}=\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\neq\emptyset$, then $\ensuremath{\Forecast^g}\geq\ensuremath{\Forecast^\Proposal}$.
\end{proposition}
\begin{proof}
If $\ensuremath{\AmmArgs^\downarrow}\!\! \neq \!\!\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\!\!=\!\!\emptyset$ then $\forall \ensuremath{a} \!\in\! \ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})\!\leq\!0$ by Def.~\ref{def:conscore} and then $\ensuremath{\Forecast^\Agents}_\ensuremath{a}\!\leq\!\ensuremath{\Forecast^\Proposal}$ by Def.~\ref{def:irrationality}.
Then, by Def.~\ref{def:group}, $\ensuremath{\Forecast^g}\!\leq\!\ensuremath{\Forecast^\Proposal}$.
If $\ensuremath{\AmmArgs^\downarrow}\!\!=\!\!\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\!\!\neq\!\!\emptyset$ then $\forall \ensuremath{a} \!\in\! \ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})\!\geq\!0$ by Def.~\ref{def:conscore} and then
$\ensuremath{\Forecast^\Agents}_\ensuremath{a}\!\geq\!\ensuremath{\Forecast^\Proposal}$ by Def.~\ref{def:irrationality}. Then, by Def.~\ref{def:group}, $\ensuremath{\Forecast^g}\!\geq\!\ensuremath{\Forecast^\Proposal}$.
\end{proof}
This proposition demonstrates that, if a decrease (increase) amendment argument has an effect on proposal arguments, it can only be as its name implies.
\begin{example}
In the Olympics setting, the agents could not forecast higher than the proposed forecast $\ensuremath{\Forecast^\Proposal}$ if either of the decrease amendment arguments $\ensuremath{\decarg_1}$ or $\ensuremath{\decarg_2}$ had been added, but the increase argument $\ensuremath{\incarg_1}$ had not. Likewise, the agents could not forecast lower than $\ensuremath{\Forecast^\Proposal}$ if $\ensuremath{\incarg_1}$ had been added, but neither of $\ensuremath{\decarg_1}$ or $\ensuremath{\decarg_2}$ had.
\end{example}
If $\ensuremath{\Forecast^g}$ is lower (higher) than $\ensuremath{\Forecast^\Proposal}$, there is at least one decrease (increase, resp.) argument.
\begin{proposition} \label{prop:balance4}
If $\ensuremath{\Forecast^g}<\ensuremath{\Forecast^\Proposal}$, then $\ensuremath{\AmmArgs^\downarrow}\neq\emptyset$. \BIn{If $\ensuremath{\Forecast^g}>\ensuremath{\Forecast^\Proposal}$, then $\ensuremath{\AmmArgs^\uparrow}\neq\emptyset$.}
\end{proposition}
\begin{proof}
If $\ensuremath{\Forecast^g}<\ensuremath{\Forecast^\Proposal}$ then, by Def.~\ref{def:group}, $\exists \ensuremath{a} \in \ensuremath{\mathcal{A}}$ where $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}<\ensuremath{\Forecast^\Proposal}$, for which it holds from Def.~\ref{def:irrationality} that $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})<0$.
Then, irrespective of $\ensuremath{\AmmArgs^\uparrow}$, $\ensuremath{\AmmArgs^\downarrow}\neq\emptyset$. If $\ensuremath{\Forecast^g}>\ensuremath{\Forecast^\Proposal}$ then, by Def.~\ref{def:group}, $\exists \ensuremath{a} \in \ensuremath{\mathcal{A}}$ where $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}>\ensuremath{\Forecast^\Proposal}$, for which it holds from Def.~\ref{def:irrationality} that $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})>0$. Then, irrespective of $\ensuremath{\AmmArgs^\downarrow}$, $\ensuremath{\AmmArgs^\uparrow}\neq\emptyset$.
\end{proof}
We can see here that the only way an agent can decrease (increase) the forecast is by adding decrease (increase, resp.) arguments, ensuring the debate is structured as intended.
\begin{example}
In the Olympics setting, the group of agents could only produce a group forecast $\ensuremath{\Forecast^g}$ lower than $\ensuremath{\Forecast^\Proposal}$ due to the presence of the \emph{decrease} amendment arguments $\ensuremath{\decarg_1}$ or $\ensuremath{\decarg_2}$. Likewise, the group of agents could only produce a $\ensuremath{\Forecast^g}$ higher than $\ensuremath{\Forecast^\Proposal}$ due to the presence of $\ensuremath{\incarg_1}$.
\end{example}
\paragraph{Unequal representation.}
FAFs exhibit instances of unequal representation in the final voting process. In formulating the following properties, we distinguish between two forms of unequal representation. First, \emph{dictatorship}, where a single agent dictates $\ensuremath{\Forecast^g}$ with no input from other agents. Second, \emph{pure oligarchy}, where a group of agents dictates $\ensuremath{\Forecast^g}$ with no input from agents outside the group. In the forecasting setting, these properties are desirable as they guarantee higher accuracy from the group forecast $\ensuremath{\Forecast^g}$.
An agent with a forecasting record of \emph{some} accuracy exercises \emph{dictatorship} over the group forecast $\ensuremath{\Forecast^g}$ if the rest of the participating agents have a record of total inaccuracy.
\begin{proposition}\label{prop:dictatorship}
If $\ensuremath{a}_d\in\ensuremath{\mathcal{A}}$ has a Brier score $\ensuremath{b}_{\ensuremath{a}_d}<1$ and $\forall \ensuremath{a}_z\in\ensuremath{\mathcal{A}} \setminus \{\ensuremath{a}_d$\}, $\ensuremath{b}_{\ensuremath{a}_z} = 1$, then $\ensuremath{\Forecast^g}=\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_d}$.
\end{proposition}
\begin{proof}
By Def.~\ref{def:group}: if $\ensuremath{b}_{\ensuremath{a}_z}\!=\!1$ $\forall \ensuremath{a}_z\!\in\!\ensuremath{\mathcal{A}} \!\setminus\! \{\ensuremath{a}_d\}$, then $w_{\ensuremath{a}_z}\!=\!0$; and if $\ensuremath{b}_{\ensuremath{a}_d}\!<\!1$, then $w_{\ensuremath{a}_d}\!>\!0$. Then, again by Def.~\ref{def:group}, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_d}$ is weighted at 100\% and $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_z}$ is weighted at 0\%, so $\ensuremath{\Forecast^g}\!=\!\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_d}$.
\end{proof}
This proposition demonstrates how we will disregard agents with total inaccuracy, even in the extreme case where we allow one (more accurate) agent to dictate the forecast.
\begin{example}
In the running example, if alice, bob and charlie have Brier scores of 0.5, 1 and 1, resp., bob's and charlie's forecasts have no impact on $\ensuremath{\Forecast^g}$, whilst alice's forecast becomes the group forecast $\ensuremath{\Forecast^g}$.
\end{example}
A group of agents with a forecasting record of \emph{some} accuracy exercises \emph{pure oligarchy} over $\ensuremath{\Forecast^g}$ if the rest of the agents all have a record of total inaccuracy.
\begin{proposition}\label{oligarchytotalprop}
Let $\ensuremath{\mathcal{A}} = \ensuremath{\mathcal{A}}_o \cup \ensuremath{\mathcal{A}}_z$, where $\ensuremath{\mathcal{A}}_o \cap \ensuremath{\mathcal{A}}_z = \emptyset$, $\ensuremath{b}_{\ensuremath{a}_o}<1$ $\forall \ensuremath{a}_o \in \ensuremath{\mathcal{A}}_o$ and $\ensuremath{b}_{\ensuremath{a}_z}=1$ $\forall \ensuremath{a}_z \in \ensuremath{\mathcal{A}}_z$. Then, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_o}$ is weighted at $>0\%$ and $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_z}$ is weighted at $0\%$.
\end{proposition}
\begin{proof}
By Def.~\ref{def:group}: if $\ensuremath{b}_{\ensuremath{a}_z} = 1$ $\forall \ensuremath{a}_z\in\ensuremath{\mathcal{A}}_z$, then $w_{\ensuremath{a}_z}=0$; and if $\ensuremath{b}_{\ensuremath{a}_o}<1$ $\forall \ensuremath{a}_o\in\ensuremath{\mathcal{A}}_o$, then $w_{\ensuremath{a}_o}>0$. Then, again by Def.~\ref{def:group}, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_o}$ is weighted at $> 0\%$ and $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_z}$ is weighted at $0\%$.
\end{proof}
This proposition extends the behaviour from Proposition \ref{prop:dictatorship} to the (more desirable) case where fewer agents have a record of total inaccuracy.
\begin{example}
In the running example, if alice, bob and charlie have Brier scores of 1, 0.2 and 0.6, resp., alice's forecast has no impact on $\ensuremath{\Forecast^g}$, whilst bob and charlie's aggregated forecast becomes the group forecast $\ensuremath{\Forecast^g}$.
\end{example}
\section{Evaluation}\label{sec:experiments}
We conducted an experiment using a dataset obtained from the `Superforecasting' project, Good Judgment Inc \cite{GJInc}, to simulate four past forecasting debates in FAFs. This dataset contained 1770 datapoints (698 `forecasts' and 1072 `comments') posted by 242 anonymised users with a range of expertise. The original debates had occurred on the publicly available group forecasting platform, the Good Judgment Open (GJO)\footnote{https://www.gjopen.com/}, providing a suitable baseline against which to compare FAFs' accuracy.
For the experiment, we used a prototype implementation of FAFs in the form of the publicly available web platform called \emph{Arg\&Forecast} (see \cite{Irwin2022} for an introduction to the platform and an additional human experiment with FAFs). Python's Gensim topic modelling library \cite{rehurek2011gensim} was used to separate the datapoints for each debate into contextual-temporal groups that could form update frameworks. In each update framework the proposal forecast was set to the mean of the forecasts made in the update framework window and each argument appeared only once. Gensim was further used to simulate voting, matching users to specific arguments they (dis)approved of. Some 4,700 votes were then generated with a three-valued system (where votes were taken from \{0,0.5,1\}) to ensure consistency: if a user voiced approval for an argument in the debate time window, their vote for the corresponding argument(s) was set to 1; disapproval for an argument led to a vote of 0; and (in the most common case) if a user did not mention an argument at all, their vote for the corresponding argument(s) defaulted to 0.5.
With the views of all participating users wrt the proposal argument encoded in each update framework's votes, forecasts could then be simulated. If a forecast was irrational, violating any of the three constraints in Def.~\ref{def:irrationality} (referred to in the following as \emph{increase}, \emph{decrease} and \emph{scale}, resp.), it was blocked and, to mimic real-life use, an automatic `follow up' forecast was made. The `follow up' forecast was the closest possible prediction (to their original choice) a user could make whilst remaining `rational'.
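As an illustration of the `follow up' step, the sketch below assumes a hypothetical helper \texttt{rational\_interval} (not part of our framework) that returns the admissible forecast range implied by the three constraints; the follow-up is then a simple clamp:
\begin{verbatim}
# Sketch only: rational_interval() is a hypothetical helper
# returning the admissible forecast range implied by the
# increase/decrease/scale constraints for a given agent.
def follow_up_forecast(original, lo, hi):
    # closest rational prediction to the original choice
    return min(max(original, lo), hi)

# lo, hi = rational_interval(agent, update_framework)
# follow_up_forecast(0.9, 0.1, 0.6)  # -> 0.6
\end{verbatim}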
Note that evaluation of the aggregation function described in §\ref{subsec:aggregation} was outside this experiment, since the past forecasting accuracy of the dataset's 242 anonymised users was unavailable. Instead, we used the unweighted mean whilst adopting the GJO's method for scoring the accuracy of a user and/or group over the lifetime of the question \cite{roesch_2015}. This meant calculating a daily forecast and a daily Brier score for each user, for every day of the question. After users made their first rational forecast, that forecast became their `daily forecast' until it was updated with a new forecast. The average and range of daily Brier scores allowed reliable comparison between the (individual and aggregated) performance of the GJO and the FAF implementation.
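Under these assumptions (one boolean outcome per question; the latest rational forecast carried forward day by day), the daily scoring admits the following minimal Python sketch (ours):
\begin{verbatim}
# GJO-style daily Brier scoring (our sketch).
def daily_brier(forecasts_by_day, n_days, outcome):
    val = 1.0 if outcome else 0.0
    current, scores = None, []
    for day in range(n_days):
        if day in forecasts_by_day:   # a new forecast this day
            current = forecasts_by_day[day]
        if current is not None:       # score from first forecast on
            scores.append((current - val) ** 2)
    return sum(scores) / len(scores) if scores else None
\end{verbatim}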
\begin{table}[t]
\begin{tabular}{@{}llll@{}}
\toprule
Q & Group $\ensuremath{b}$ & $min(\ensuremath{b})$ & $max(\ensuremath{b})$ \\ \midrule
Q1 & 0.1013 (0.1187) & 0.0214 (0) & 0.4054 (1) \\
Q2 & 0.216 (0.1741) & 0 (0) & 0.3853 (1) \\
Q3 & 0.01206 (0.0227) & 0.0003 (0) & 0.0942 (0.8281) \\
Q4 & 0.5263 (0.5518) & 0 (0) & 0.71 (1) \\ \midrule
\textbf{All} & \textbf{0.2039 (0.217)} & \textbf{0 (0)} & \textbf{1 (1)} \\ \bottomrule
\end{tabular}
\caption{The accuracy of the platform group versus control, where `Group $\ensuremath{b}$' is the aggregated (mean) Brier score, `$min(\ensuremath{b})$' is the lowest individual Brier score and `$max(\ensuremath{b})$' is the highest individual Brier score. Q1-Q4 indicate the four simulated debates.}
\label{accuracyExp1}
\end{table}
\begin{table}[t]
\begin{tabular}{llllll}
\hline
\multirow{2}{*}{Q} &
\multirow{2}{*}{$\overline{\ensuremath{\mathcal{C}}}$} &
\multirow{2}{*}{Forecasts} &
\multicolumn{3}{c}{Irrational Forecasts} \\ \cline{4-6}
& & &
\multicolumn{1}{c}{\emph{Increase}} &
\multicolumn{1}{c}{\emph{Decrease}} &
\multicolumn{1}{c}{\emph{Scale}} \\ \hline
Q1 & -0.0418 & 366 & 63 & 101 & 170 \\
Q2 & 0.1827 & 84 & 11 & 15 & 34 \\
Q3 & -0.4393 & 164 & 53 & 0 & 86 \\
Q4 & 0.3664 & 84 & 4 & 19 & 15 \\ \hline
All & -0.0891 & 698 & 131 & 135 & 305 \\ \hline
\end{tabular}
\caption{Auxiliary results from the experiment, where $\overline{\ensuremath{\mathcal{C}}}$ is the average confidence score, `Forecasts' is the number of forecasts made in each question and `Irrational Forecasts' is the number in each question which violated each constraint in §\ref{subsec:rationality}.}
\label{exp1auxinfo}
\end{table}
\paragraph{Results.}
As Table \ref{accuracyExp1} demonstrates, simulating forecasting debates from GJO in \emph{Arg\&Forecast} led to predictive accuracy improvements in three of the four debates. This is reflected in these debates by a substantial reduction in Brier scores versus control. The greatest accuracy improvement in absolute terms was in Q4, which saw a Brier score decrease of 0.0255. In relative terms, Brier score decreases ranged from 5\% (Q4) to 47\% (Q3), and the average Brier score decrease was 33\%, representing a significant improvement in forecasting accuracy across the board. Table \ref{exp1auxinfo} demonstrates how our rationality constraints drove forward this improvement: 82\% of forecasts made across the four debates were classified as irrational and subsequently moderated with a rational `follow up' forecast. Notably, there were more \emph{irrational scale} forecasts than \emph{irrational increase} and \emph{irrational decrease} forecasts combined. These results demonstrate how argumentation-based rationality constraints can play an active role in facilitating higher forecasting accuracy, signalling the early promise of FAFs.
\section{Conclusions}\label{sec:conclusions}
We have introduced the Forecasting Argumentation Framework (FAF), a multi-agent argumentation framework which supports forecasting debates and probability estimates. FAFs are composite argumentation frameworks, comprised of multiple non-concurrent update frameworks which themselves depend on three new argument types and a novel definition of rationality for the forecasting context. Our theoretical and empirical evaluation demonstrates the potential of FAFs, namely in increasing forecasting accuracy, holding intuitive
properties, identifying irrational
behaviour and driving higher engagement with the forecasting question (more arguments and responses, and more forecasts in the user study). These strengths align with requirements set out by previous research in the field of judgmental forecasting.
There is a multitude of possible directions for future work. First, FAFs are equipped to deal only with two-valued outcomes but, given the prevalence of forecasting issues with multi-valued outcomes (e.g. `Who will win the next UK election?'), expanding their capability would add value. Second, further work may focus on the rationality constraints, e.g. by introducing additional parameters to adjust their strictness, or by implementing alternative interpretations of rationality. Third, future work could explore constraining agents' argumentation. This could involve using past Brier scores to limit the quantity or strength of agents' arguments, and also to give them greater leeway wrt the rationality constraints. Fourth, our method relies upon acyclic graphs: we believe that they are intuitive for users and note that all Good Judgment Open debates were acyclic; nonetheless, the inclusion of cyclic relations (e.g. to allow con arguments that attack each other) could expand the scope of the argumentative reasoning in FAFs.
Finally, there is an immediate need for larger-scale human experiments.
\newpage
\section*{Acknowledgements}
The authors would like to thank Prof. Anthony Hunter for his helpful contributions to discussions in the build up to this work. Special thanks, in addition, go to Prof. Philip E. Tetlock and the Good Judgment Project team for their warm cooperation and for providing datasets for the experiments. Finally, the authors would like to thank the anonymous reviewers and meta-reviewer for their suggestions, which led to a significantly improved paper.
\bibliographystyle{kr}
\section{Introduction}
\label{intro}
Continuous variable (CV) systems are ubiquitous in quantum
information and communication protocols. Most of the CV
quantum information protocols are based on Gaussian states
as they are easy to prepare, manipulate and
measure~\cite{weedbrook-rmp-2012, adesso-2014}. One of the central
tasks in quantum information processing is the estimation of
quantum states which is formally called quantum state
tomography
(QST)~\cite{james-pra-2001,paris-2011,lvovsky-rmp}.
Generally, homodyne and heterodyne measurements are employed
in CV QST, which measure quadrature operators of a given
state~\cite{yuen-1982,yuen-83,vogel-pra-1989}. However,
with the recent development of experimental techniques in
photon-number-resolving-detectors
(PNRD)~\cite{silberhorn-prl-2016,josef-prl-2019}, the
possibility of carrying out QST via photon number
measurements has opened up. Cerf \textit{et al}.\, devised a scheme
using beam splitters and on-off detectors, where one can
obtain the trace and determinant of the covariance
matrix of a Gaussian
state~\cite{cerf-prl-2004,cerf-pra-2004}.
In a similar endeavor, Parthasarathy \textit{et al}.\, have
developed a theoretical scheme to determine the Gaussian
state by estimating its mean and covariance
matrix~\cite{rb-2015}.
Another important task in quantum information processing is
quantum process tomography (QPT), where we wish to
characterize quantum processes which in general are
completely positive maps. For CV systems, theoretical as
well as experimental studies for QPT have been undertaken
by several
authors~\cite{lobino-science,rahimi-njp-2011,anis-2012,
wang-pra-2013,cooper-njp-2015,
connor-reports-2015,jarom-pra-2015,rezakhani-pra-2017,
filip-reports-2017,dowling-pra-2018}. Lobino \textit{et al}.\, used
coherent state probes along with homodyne measurements to
characterize quantum processes~\cite{lobino-science}.
Similarly, Ghalaii \textit{et al}.\, have developed a coherent state
based QPT scheme via the measurement of normally ordered
moments that are measured using homodyne
detection~\cite{rezakhani-pra-2017}. In this direction,
Parthasarathy \textit{et al}.\, have utilized QST schemes based on
photon number measurements for Gaussian states, to
characterize the Gaussian channel~\cite{rb-2015}.
In this paper, we simplify the scheme given by Parthasarathy
\textit{et al}.\,~\cite{rb-2015} and describe an optimal scheme which
involves a minimum number of measurements and utilizes a smaller
number of optical elements for the QST of Gaussian states
based on PNRD. We employ this scheme to devise an optimal
scheme for Gaussian channel characterization.
An $n$-mode Gaussian state is completely specified by
its $2n$ first moments and its second-order moments arranged
in the form of a covariance matrix, which has $2 n^2+n$
parameters. Therefore, we require a total of $2 n^2 +3n$
parameters to completely determine an $n$-mode Gaussian
state. Our QST scheme based on photon number measurements is
optimal in the sense that we require exactly $2 n^2 + 3 n$ distinct
measurements to determine all the $2 n^2 + 3 n$ parameters
of the state. Next, we deploy this QST scheme
to estimate the output states, with coherent state probes as inputs,
for Gaussian channel characterization. An $n$-mode
Gaussian channel is described by a pair of $2n \times 2n$
real matrices $A$ and $B$ with $B=B^T \geq 0$ which satisfy
certain complete positivity and trace preserving
conditions~\cite{heinosaari-2010,
holevo-2012,parthasarathy-2015}. The matrices $A$ and $B$
together can be described by a total of $ 6n^2+n$
parameters. We show that we can characterize a Gaussian
quantum channel optimally, \textit{i}.\textit{e}., we require exactly $6n^2+n$
distinct measurements to determine all the $6n^2+n$ parameters of the
Gaussian channel. We compare the variance of transformed
number operators arising in the aforementioned QST scheme
which provides an insight into the efficiency of the scheme.
Finally, we relate the variance of transformed number
operators to the variance of quadrature operators.
In CV quantum key distribution (QKD) protocols, one
needs to send an intense local oscillator pulse for the
purpose of measurement, which in itself is an arduous task
and can give rise to security
loopholes~\cite{sarovar-prx-2015, bing-prx-2015}. Our scheme
based on PNRD does not require such an intense local oscillator
signal, and thus may turn out to be useful in CV-QKD
protocols.
The paper is organized as follows. In Sec.~\ref{cvsystem} we
give a detailed mathematical background about CV
systems. In Sec.~\ref{sec:gaussian} we provide our optimal
QST scheme based on PNRD for Gaussian states. Thereafter,
the tomography of the Gaussian channel has been dealt with in
Sec.~\ref{sec:channel} while in Sec.~\ref{sec:variance} we
compare the variance of different transformed number
operators appearing in the state tomography scheme. Finally
in Sec.~\ref{sec:conclusion} we draw conclusions from our
results and look at future aspects.
\section{CV system}
\label{cvsystem}
An $n$-mode continuous variable
quantum system is represented by $n$
pairs of Hermitian quadrature operators $\hat{q}_i,
\hat{p}_i$ ($i=1\,,\dots, n$)
which can be arranged in a column
vector
as~\cite{arvind1995,Braunstein,adesso-2007,weedbrook-rmp-2012,adesso-2014}
\begin{equation}\label{eq:columreal}
\bm{\hat{ \xi}} =(\hat{ \xi}_i)= (\hat{q_{1}},\,
\hat{p_{1}} \dots, \hat{q_{n}},
\, \hat{p_{n}})^{T}, \quad i = 1,2, \dots ,2n.
\end{equation}
The bosonic commutation relations between them read in compact form as ($\hbar$=1)
\begin{equation}\label{eq:ccr}
[\hat{\xi}_i, \hat{\xi}_j] = i \Omega_{ij}, \quad (i,j=1,2,...,2n),
\end{equation}
where $\Omega$ is the $2n \times 2n$ matrix given by
\begin{equation}
\Omega = \bigoplus_{k=1}^{n}\omega = \begin{pmatrix}
\omega & & \\
& \ddots& \\
& & \omega
\end{pmatrix}, \quad \omega = \begin{pmatrix}
0& 1\\
-1&0
\end{pmatrix}.
\end{equation}
The field annihilation and creation
operators $\hat{a}_i\, \text{and}\, {\hat{a}_i}
^{\dagger}$ ($i=1,2,\, \dots\, ,n$)
are related to the quadrature operators as
\begin{equation}\label{realtocom}
\hat{a}_i= \frac{1}{\sqrt{2}}(\hat{q}_i+i\hat{p}_i),
\quad \hat{a}^{\dagger}_i= \frac{1}{\sqrt{2}}(\hat{q}_i-i\hat{p}_i).
\end{equation}
The number operator for the $i^{\text{th}}$ mode and the total number operator
for the $n$-mode system can be expressed as
\begin{subequations}
\begin{align}\label{eq:generalcal}
\hat{N_i} = &\hat{a_i}^{\dagger}\hat{a_i} =
\frac{1}{2}\left( \hat{q_i}^2+\hat{p_i}^2 -1 \right), \\
\hat{N} = &\sum_{i=1}^{n}\hat{N_i}.
\end{align}
\end{subequations}
The Hilbert space $\mathcal{H}_i$ for the $i^{\text{th}}$ mode is spanned
by the eigenvectors $\vert n_i \rangle$ ($n_i=0,\,1, \dots ,\infty$) of
$\hat{N}_i=\hat{a}_i^{\dagger} \hat{a}_i$.
The combined Hilbert space $\mathcal{H}^{\otimes n} =
\otimes_{i=1}^{n}\mathcal{H}_i$
of the $n$-mode state is spanned by the product basis
vectors $ \vert n_1\dots n_i \dots n_n\rangle$ with $n_1,\,
\dots\,, n_i,\, \dots\,, n_n=0,\, 1, \dots ,\infty$.
The number $n_i$ corresponds to the photon number in the $i^{\text{th}}$ mode.
The irreducible action of the field operators $\hat{a}_i$ and
$\hat{a}^{\dagger}_i$
on $\mathcal{H}_i$ is dictated by
the commutation
relation Eq.~(\ref{eq:ccr}) and is given by
\begin{equation}
\begin{aligned}
\hat{a_i}|n_i\rangle =& \sqrt{n_i}|n_i-1\rangle \quad n_i
\geq 1,
\quad\hat{a_i}|0\rangle = 0,\\
\hat{a_i}^{\dagger}|n_i\rangle = &\sqrt{n_i+1}|n_i+1\rangle
\quad n_i \geq 0.
\end{aligned}
\end{equation}
We define the displacement operator acting on the $i^{\text{th}}$ mode
and the corresponding coherent state as:
\begin{eqnarray}
\hat{D}_i(q_i,p_i) &=& e^{i(p_i\hat{q}_i-q_i
\hat{p}_i)},\nonumber \\
\vert q_i, p_i\rangle_i &=& \hat{D}_i(q_i,p_i)|0\rangle_i.
\end{eqnarray}
Here $q_i$ and $p_i$ correspond to displacements along the $\hat{q}$- and $\hat{p}$-quadratures of the $i^{\text{th}}$ mode.
\subsection{Symplectic transformations}
The group $Sp(2n,\,\mathcal{R})$ is defined as
the group of linear homogeneous transformations specified by real
$2n \times 2n$ matrices $S$ acting on the quadrature
variables and preserving the canonical commutation relation
Eq.~(\ref{eq:ccr}):
\begin{equation}
\hat{\xi}_i \rightarrow
\hat{\xi}_i^{\prime} = S_{ij}\hat{\xi}_{j}, \quad\quad
S\Omega S^T = \Omega.
\end{equation}
The unitary representation of this group, known as the
metaplectic representation, turns out to be infinite dimensional:
for each $S \in Sp(2n,\, \mathcal{R})$ we have a unitary
$\mathcal{U}(S)$ acting on the Hilbert space. These unitary
transformations are generated by Hamiltonians which are quadratic
functions of the quadrature and field operators. Further, any
symplectic matrix $S \in Sp(2n, \,\mathcal{R})$ can be decomposed as
\begin{equation}
S = P \cdot T,
\end{equation}
where $P \in \Pi(n)$, a subset of $Sp(2n,\, \mathcal{R})$
defined as
\begin{equation}
\Pi(n) = \{ S \in Sp(2n, \,\mathcal{R})\,|\,S^T =S,\,\,
S>0\},
\end{equation}
and $T$ is an element of $K(X,Y)$, the maximal compact
subgroup of $Sp(2n,\, \mathcal{R})$, which is isomorphic to
the unitary group $U(n)$ in $n$ dimensions via $U = X+iY$. The action
of a $U(n)$ transformation on the annihilation and creation
operators is given as
\begin{equation}
\label{unitary}
\bm{ \hat{a}} \rightarrow U \bm{ \hat{a}}, \quad
\bm{ \hat{a}^{\dagger}} \rightarrow U^{*}\bm{ \hat{a}^{\dagger}},
\end{equation}
where $\bm{\hat{a}} =
(\hat{a}_1,\hat{a}_2,\dots,\hat{a}_n)^T$ and
$\bm{\hat{a}^{\dagger} }=
(\hat{a}_1^{\dagger},\hat{a}_2^{\dagger},\dots,\hat{a}_n^{\dagger})^T$.
The $2n\times 2n$ dimensional symplectic transformation
matrix $K(X,Y)$ acting on the Hermitian quadrature operators
can be easily obtained using Eqs.~(\ref{realtocom}) and
(\ref{unitary}).
We now list three basic symplectic operations which will be used later.
\par
\noindent{\bf Phase change operation\,:}
The symplectic transformation for phase change operation
acting on the quadrature operators $\hat{q}_i$, $\hat{p}_i$ is
\begin{equation}
R_i(\phi) = \begin{pmatrix}
\cos \phi & \sin \phi\\
-\sin \phi & \cos \phi
\end{pmatrix}.
\end{equation}
This operation corresponds to the $U(1)$ subgroup of $Sp(2, \mathcal{R})$;
its metaplectic representation is generated by
the Hamiltonian $H =\hat{a}^{\dagger}_i\hat{a}_i$ and its action on
the annihilation operator is
$\hat{a}_i\rightarrow e^{-i \phi} \hat{a}_i$.
\par
\noindent
{\bf Single mode squeezing operation\,:}
The symplectic transformation for the single-mode squeezing operator
acting on the quadrature operators $\hat{q}_i$ and $\hat{p}_i$
is written as
\begin{equation}
S_i(r) = \begin{pmatrix}
e^{-r} & 0 \\
0 & e^{r}
\end{pmatrix}.
\end{equation}
\par
\noindent
{\bf Beam splitter operation\,:}
For two-mode systems with
quadrature operators
$ \hat{\xi} = (\hat{q}_{i}, \,\hat{p}_{i},\, \hat{q}_{j},\,
\hat{p}_{j})^{T}$ the
beam splitter transformation
$B_{ij}(\theta)$
acts as follows
\begin{equation}\label{beamsplitter}
B_{ij}(\theta) = \begin{pmatrix}
\cos \theta \,\mathbb{1}_2& \sin \theta \,\mathbb{1}_2 \\
-\sin \theta \,\mathbb{1}_2& \cos \theta \,\mathbb{1}_2
\end{pmatrix},
\end{equation}
where $\mathbb{1}_2$ represents the $2 \times 2$ identity matrix
and the transmissivity $\tau$ is specified through $\theta$ via
the relation $\tau = \cos ^2 \theta$. For a balanced
(50:50) beam splitter, $\theta = \pi/4$.
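As a quick numerical sanity check (our sketch, not part of the scheme), the three operations can be constructed and the defining condition $S\Omega S^T=\Omega$ verified, e.g. in Python:
\begin{verbatim}
import numpy as np

def omega(n):                      # the symplectic form above
    w = np.array([[0., 1.], [-1., 0.]])
    return np.kron(np.eye(n), w)

def rot(phi):                      # phase change R(phi)
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, s], [-s, c]])

def squeeze(r):                    # single-mode squeezer S(r)
    return np.diag([np.exp(-r), np.exp(r)])

def bs(theta):                     # beam splitter B_ij(theta)
    c, s = np.cos(theta), np.sin(theta)
    return np.block([[c*np.eye(2), s*np.eye(2)],
                     [-s*np.eye(2), c*np.eye(2)]])

for S, n in [(rot(0.3), 1), (squeeze(0.5), 1), (bs(np.pi/4), 2)]:
    assert np.allclose(S @ omega(n) @ S.T, omega(n))
\end{verbatim}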
All three operations above are generated by quadratic
Hamiltonians. It turns out that while phase change and beam
splitter operations are compact and are generated by
photon-number-conserving Hamiltonians, squeezing operations are
non-compact and are generated by photon-number-non-conserving
Hamiltonians.
\subsection{Phase space description}
For a density
operator $\hat{\rho}$ of a quantum system the corresponding
Wigner distribution is defined as
\begin{equation}\label{eq:wigreal}
W(\bm{\xi}) = \frac{1}{{(2 \pi)}^{n}}\int \mathrm{d}^n \bm{q}^{\prime}\, \left\langle
\bm{q}-\frac{1}{2}
\bm{q}^{\prime}\right| \hat{\rho} \left|\bm{q}+\frac{1}{2}\bm{q}^{\prime}
\right\rangle \exp(i \bm{q}^{\prime T}\cdot \bm{p}),
where
$\bm{\xi} = (q_{1}, p_{1},\dots, q_{n},p_{n})^{T}$,
$\bm{q^{\prime}} \in \mathcal{R}^{n}$ and $\bm{q} = (q_1,
q_2, \dots, q_n)^T$,
$\bm{p} = (p_1, p_2, \dots, p_n)^T $.
Therefore, $W(\bm{\xi})$ depends upon
$2n$ real phase space variables.
For an $n$-mode system,
the first-order moments are defined as
\begin{equation}
\bm{d} = \langle \bm{\hat{\xi}} \rangle =
\text{Tr}[\hat{\rho}\bm{\hat{\xi}}],
\end{equation}
and the second order moments are best represented by the
real symmetric $2n\times2n$ covariance matrix defined as
\begin{equation}\label{eq:cov}
V = (V_{ij})=\frac{1}{2}\langle \{\Delta \hat{\xi}_i,\Delta
\hat{\xi}_j\} \rangle,
\end{equation}
where $\Delta \hat{\xi}_i = \hat{\xi}_i-\langle \hat{\xi}_i
\rangle$, and $\{\,, \, \}$ denotes anti-commutator.
The number of independent real parameters required to
specify the covariance matrix is $n(2n+1)$.
The uncertainty principle in
terms of covariance matrix reads
$V+\frac{i}{2}\Omega \geq 0$ which
implies that the covariance matrix is positive
definite \textit{i}.\textit{e}., $V>0$.
A state is called a Gaussian state if the corresponding
Wigner distribution is a Gaussian. Gaussian states are
completely determined by their first and second order
moments and thus we require a total of $2n+ n(2n+1) = 2 n^2
+3n$ parameters to completely determine an $n$-mode Gaussian
state.
For the special case of Gaussian states, Eq.~(\ref{eq:wigreal})
can be written as~\cite{weedbrook-rmp-2012}
\begin{equation}\label{eq:wignercovariance}
W(\bm{\xi}) = \frac{\exp[-(1/2)(\bm{\xi}-\bm{d})^TV^{-1}
(\bm{\xi}-\bm{d})]}{(2 \pi)^n \sqrt{\text{det}V}},
\end{equation}
where $V$ is the covariance matrix and $\bm{d}$
denotes the displacement of the Gaussian state in phase
space.
We now compute, using the phase space representation, averages of a few quantities that will be required later. The total number operator
\begin{equation}\label{symmetric}
\hat{N} = \sum_{i=1}^{n}\hat{N}_i =
\frac{1}{2}\sum_{i=1}^{n}\left( \hat{q}_i^2+\hat{p}_i^2 -1 \right)
\end{equation}
is symmetrically ordered in the $\hat{q}$ and $\hat{p}$
operators; therefore, the
average number of photons $\langle \hat{N} \rangle$ for an $n$-mode
Gaussian state can be readily computed using the Wigner
distribution as follows~\cite{rb-2015, manko-pra-1994}:
\begin{eqnarray}
\langle \hat{N} \rangle &=&\frac{1}{2}\sum_{i=1}^{n}\int
d^{2n} \bm{\xi}
\left( q_i^2+p_i^2 -1 \right) W(\bm{\xi}),\nonumber \\
&=&\frac{1}{2} \left[ \text{Tr}\left(
V-\frac{1}{2}\mathbb{1}_{2n}\right)+||\bm{d}||^2\right].
\label{avnumber}
\end{eqnarray}
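For illustration, Eq.~(\ref{avnumber}) can be evaluated numerically as follows (our sketch):
\begin{verbatim}
import numpy as np

def mean_photon_number(V, d):
    # <N> = [ Tr(V - I/2) + ||d||^2 ] / 2
    n = len(d) // 2
    return 0.5 * (np.trace(V - 0.5 * np.eye(2 * n)) + d @ d)

# vacuum check: V = I/2, d = 0 gives <N> = 0
assert np.isclose(mean_photon_number(0.5*np.eye(2), np.zeros(2)), 0.0)
\end{verbatim}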
Under a unitary transformation, while quantum states
transform in Schr\"odinger representation as
$\rho \rightarrow \,\mathcal{U} \rho
\,\mathcal{U}^{\dagger}$, in Heisenberg representation
the number operator transforms as, $\hat{N}
\rightarrow \,\mathcal{U}^{\dagger} \hat{N}
\,\mathcal{U}$.
Specifically, for a phase space displacement $\hat{D}(\bm{r})$, we have
\begin{equation}\label{disnumber1}
\langle \hat{D}(\bm{r})^\dagger \hat{N} \hat{D}(\bm{r})
\rangle =\frac{1}{2} \left[ \text{Tr}\left(
V-\frac{1}{2}\mathbb{1}_{2n}\right)+||\bm{d}+\bm{r}||^2\right],
\end{equation}
which simplifies by using Eq.~(\ref{avnumber}) to
\begin{equation}\label{diffdis}
\langle \hat{D}(\bm{r})^\dagger \hat{N} \hat{D}(\bm{r}) \rangle
-\langle \hat{N} \rangle = \frac{1}{2} \left( ||\bm{d}+\bm{r}||^2-||\bm{d}||^2 \right).
\end{equation}
For a homogeneous symplectic transformation $S$,
the density operator transforms via the metaplectic representation
$\mathcal{U}(S)$ as
$\rho \rightarrow \,\mathcal{U}(S) \rho
\,\mathcal{U}(S)^{\dagger}$.
The corresponding transformation of the displacement vector $\bm{d}$
and covariance matrix $V$ is given by~\cite{arvind1995}
\begin{equation}\label{transformation}
\bm{d}\rightarrow S \bm{d},\quad \text{and}\quad V\rightarrow SVS^T.
\end{equation}
Thus, we can easily evaluate the average of the number operator
after the state has undergone a metaplectic transformation
using Eqs.~(\ref{avnumber}) and (\ref{transformation}):
\begin{equation}
\begin{aligned}
\langle \hat{\mathcal{U}}(S)^\dagger \hat{N}
\hat{\mathcal{U}}(S) \rangle =
\frac{1}{2} \text{Tr}\left( VS^T
S-\frac{1}{2}\mathbb{1}_{2n}\right)+\frac{1}{2} \bm{d}^T S^T
S \bm{d}.
\end{aligned}
\end{equation}
Therefore,
\begin{equation}\label{diffsymplectic}
\begin{aligned}
\langle \hat{\mathcal{U}}(S)^\dagger \hat{N}
\hat{\mathcal{U}}(S) \rangle-\langle \hat{N} \rangle &=
\frac{1}{2} \text{Tr}\left[ V(S^T S-\mathbb{1}_{2n})\right]\\
&+\frac{1}{2} \bm{d}^T (S^T S-\mathbb{1}_{2n}) \bm{d}.
\end{aligned}
\end{equation}
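The left-hand side of Eq.~(\ref{diffsymplectic}) is precisely what a gated photon number measurement returns in excess of $\langle \hat{N} \rangle$; a minimal Python sketch (ours) reads:
\begin{verbatim}
import numpy as np

def delta_N(S, V, d):
    # Tr[V (S^T S - 1)]/2 + d^T (S^T S - 1) d / 2
    G = S.T @ S - np.eye(len(d))
    return 0.5 * np.trace(V @ G) + 0.5 * d @ G @ d
\end{verbatim}
For passive elements ($S^T S=\mathbb{1}_{2n}$, e.g. phase changes and beam splitters) the difference vanishes, as expected for photon-number-conserving operations.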
More mathematical details are available in~\cite{arvind1995}.
\section{Estimation of Gaussian states using photon number
measurements}
\label{sec:gaussian}
\begin{figure}[htbp]
\includegraphics[scale=1]{disgate.eps}
\caption{To estimate the mean of an $n$-mode Gaussian state,
the state is displaced along one of the $2n$ phase space
variables before performing photon number measurement on
each of the modes. In the figure, the displacement gate
$\hat{D}_i(1,0)$ is applied on the state, which displaces the
$\hat{q}$-quadrature of the $i^{\text{th}}$ mode by a unit
amount.}
\label{fig:disgate}
\end{figure}
In this section, we present a variant of the scheme
developed in~\cite{rb-2015}, where the authors devised a
scheme to estimate the mean and covariance matrix of a
Gaussian state using PNRD. In our scheme, which is optimal
and uses a minimal number of optical elements, photon number measurement
is performed on the original Gaussian state as well as on
transformed Gaussian states. These transformations or gates
consist of the displacement, phase rotation, single-mode
squeezing and beam splitter operations denoted by
$\hat{D}_i(q,p)$, $\mathcal{U}(R_i(\theta))$,
$\mathcal{U}(S_i(r))$, and $\mathcal{U} (B_{ij}(\theta))$,
respectively.
\subsection{Mean estimation}\label{sec:dis}
We first perform photon number measurement on the original
$n$-mode Gaussian state, giving us $\langle \hat{N} \rangle$.
Then we consider two different photon number measurements
after displacing one of the quadratures $\hat{q}_i$ or
$\hat{p}_i$ of the $i^{\text{th}}$ mode by a unit amount,
giving us $\langle \hat{D}_i(1,0)^\dagger \hat{N}
\hat{D}_i(1,0) \rangle$ and $\langle \hat{D}_i(0,1)^\dagger \hat{N}
\hat{D}_i(0,1) \rangle$.
(Figure~\ref{fig:disgate} depicts the displacement gate
$\hat{D}_i(1,0)$ acting on the $i^{\text{th}}$ mode
of the state.)
We therefore have
by using
Eq.~(\ref{diffdis}):
\begin{eqnarray}
\langle \hat{D}_i(1,0)^\dagger \hat{N} \hat{D}_i(1,0)
\rangle-\langle \hat{N} \rangle
&= &\frac{1}{2} \left( 1 +2 d_{q_i} \right),\nonumber \\
\langle \hat{D}_i(0,1)^\dagger \hat{N} \hat{D}_i(0,1)
\rangle-\langle \hat{N} \rangle
&= &\frac{1}{2} \left( 1 +2 d_{p_i} \right),
\end{eqnarray}
which can be rewritten as
\begin{eqnarray}
d_{q_i} &=& \langle \hat{D}_i(1,0)^\dagger \hat{N}
\hat{D}_i(1,0) \rangle-\langle \hat{N} \rangle
-\frac{1}{2}, \nonumber \\
d_{p_i} &=& \langle \hat{D}_i(0,1)^\dagger \hat{N}
\hat{D}_i(0,1) \rangle-\langle \hat{N} \rangle
-\frac{1}{2}.
\label{meaneq}
\end{eqnarray}
Thus, we can obtain the mean values of $\hat{q}_i$ and
$\hat{p}_i$-quadratures once the values of $\langle
\hat{D}_i(1,0)^\dagger \hat{N} \hat{D}_i(1,0) \rangle$, $
\langle \hat{D}_i(0,1)^\dagger \hat{N} \hat{D}_i(0,1)
\rangle$, and $\langle \hat{N}\rangle$ have been obtained.
These estimations involve measuring
averages and thus require us to repeat the measurement many
times.
Therefore, to obtain all the $2n$ elements of the mean $\bm{d}$
of the Gaussian state, we need to perform $2n$ photon number
measurements after displacing the state by a unit amount
along the $2n$ different phase-space variables, along with a
photon number measurement on the original state.
We also note that, once the mean $\bm{d}$ of the Gaussian
state has been obtained, $\text{Tr}(V)$ follows from
Eq.~(\ref{avnumber}):
\begin{equation}
\label{tracen}
\text{Tr}(V) = 2 \langle \hat{N} \rangle -||\bm{d}||^2+n.
\end{equation}
Thus, we are able to estimate $2n$ elements of mean
$\bm{d}$ of the Gaussian state and trace of the covariance
matrix $\text{Tr}(V)$ using a total of $2n+1$ photon number
measurements.
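The classical post-processing of this subsection is elementary; for concreteness, a Python sketch (ours) of Eqs.~(\ref{meaneq}) and (\ref{tracen}):
\begin{verbatim}
import numpy as np

def estimate_mean_and_trace(N_avg, N_disp, n):
    # N_avg:  <N> measured on the original state
    # N_disp: 2n measured values <D_k^dag N D_k>, one per
    #         unit displacement along each quadrature
    d = np.asarray(N_disp) - N_avg - 0.5   # Eq. (meaneq)
    trV = 2 * N_avg - d @ d + n            # Eq. (tracen)
    return d, trV
\end{verbatim}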
\subsection{Estimation of intra-mode covariance matrix}
\label{intramode}
\begin{figure}[htbp]
\includegraphics[scale=1]{symgate.eps}
\caption{To estimate the intra-mode covariance matrix, that is,
the covariance matrix of the individual modes, single mode
symplectic transformations are applied on the state before
performing photon number measurement on each of the modes.
In the figure, a phase shifter $\mathcal{U}(R_i(\phi))$ followed
by a squeezer $\mathcal{U}(S_i(r))$ is applied on the $i^{\text{th}}$
mode of the state.}
\label{fig:symgate}
\end{figure}
For convenience in representation, we express the covariance
matrix of the $n$-mode Gaussian state as follows:
\begin{equation}
\label{nmodecov}
V=\begin{pmatrix}
V_{1,1} & V_{1,2} & \cdots & V_{1,n} \\
V_{2,1} & \ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & V_{n-1,n} \\
V_{n,1} & \cdots & V_{n,n-1} &V_{n,n}
\end{pmatrix},
\end{equation}
where $V_{i,j}$ is a $2\times 2$ matrix.
Further, we represent the mean and covariance matrix of the marginal
state of mode $i$ (or intra-mode covariance matrix for mode $i$) as
\begin{equation}
d_i = \begin{pmatrix} d_{q_i}\\ d_{p_i}\end{pmatrix}, \quad
V_{i,i} = \begin{pmatrix} \sigma_{qq}&\sigma_{qp}\\
\sigma_{qp}&\sigma_{pp} \end{pmatrix}.
\end{equation}
To estimate the intra-mode covariance matrix,
consider the single-mode symplectic gate $P_i(r,\phi)$
consisting of a phase shifter followed by a squeezer acting on
the $i^{\text{th}}$ mode of the Gaussian state:
\begin{equation}
\label{phasesq}
P_i(r,\phi) = S_i(r) R_i(\phi) =
\begin{pmatrix}
e^{-r} & 0 \\
0 & e^r
\end{pmatrix}
\begin{pmatrix}
\cos \phi & \sin \phi \\
-\sin \phi & \cos \phi
\end{pmatrix}.
\end{equation}
The schematic representation of $P_i(r,\phi)$ is shown in
Fig.~\ref{fig:symgate}.
When $P_i(r,\phi)$ acts on the $i^{\text{th}}$ mode
of the Gaussian state, Eq.~(\ref{diffsymplectic}) reduces to
\begin{equation}
\label{phasesingle}
\begin{aligned}
\langle \hat{\mathcal{U}}(P_i)^\dagger \hat{N}
\hat{\mathcal{U}}(P_i) \rangle-\langle \hat{N} \rangle &=
\frac{1}{2} \text{Tr}\left[ V_{i,i}(P_i^T P_i-\mathbb{1}_{2})\right]\\
&+\frac{1}{2} d_i^T (P_i^T P_i-\mathbb{1}_{2}) d_i.
\end{aligned}
\end{equation}
Here
\begin{equation}
\begin{aligned}
P_i^T P_i=\begin{pmatrix}
e^{-2r}\cos^2 \phi+e^{2r}\sin^2 \phi&-\sinh 2r \sin 2\phi\\
-\sinh 2r \sin 2\phi&e^{-2r}\sin^2 \phi+e^{2r}\cos^2 \phi
\end{pmatrix}.
\end{aligned}
\end{equation}
For brevity, we write
\begin{equation}
\begin{aligned}
P_i^T P_i&-\mathbb{1}_{2}=\begin{pmatrix}
k_1&k_3\\
k_3&k_2
\end{pmatrix},
\end{aligned}
\end{equation}
and thus Eq.~(\ref{phasesingle}) simplifies as
\begin{equation}
\begin{aligned}
\langle \hat{\mathcal{U}}(P_i)^\dagger \hat{N} \hat{\mathcal{U}}(P_i)
\rangle-&\langle \hat{N} \rangle =
\frac{1}{2} \bigg[ k_1 \sigma_{qq} +k_2 \sigma_{pp}+2k_3\sigma_{qp} \\
&+ k_1 {d_{q_i}}^2 +k_2 {d_{p_i}}^2+2 k_3 d_{q_i}d_{p_i}
\bigg].
\end{aligned}
\end{equation}
Rearranging the above equation, we obtain
\begin{equation}
\begin{aligned}\label{singlegate}
k_1 \sigma_{qq} +k_2 \sigma_{pp}+ 2k_3\sigma_{qp}&=
2 \left( \langle \hat{\mathcal{U}}(P_i)^\dagger \hat{N}
\hat{\mathcal{U}}(P_i) \rangle
-\langle \hat{N} \rangle \right)\\
& -(k_1 {d_{q_i}}^2 +k_2 {d_{p_i}}^2+2 k_3 d_{q_i}d_{p_i}).
\end{aligned}
\end{equation}
Since $d_{q_i}$ and $d_{p_i}$ have already been obtained in
Sec.~\ref{sec:dis}~(Eq.~(\ref{meaneq})), the above equation contains three
unknown parameters $\sigma_{qq}$, $\sigma_{pp}$, and
$\sigma_{qp}$. We can determine these three unknowns by
performing three distinct photon number measurements for
appropriate combinations of the squeezing parameter $r$ and
the phase rotation angle $\phi$, as follows:
\begin{itemize}
\item[(i)]
For $e^{r} =\sqrt{2} $ and $\phi = 0$, we obtain
\begin{equation}\label{gate3}
-\frac{1}{2}\left( \sigma_{qq}-2\sigma_{pp}\right) = c_1.
\end{equation}
\item[(ii)]
For $e^{r} =\sqrt{3} $ and $\phi = 0$, we obtain
\begin{equation}\label{gate4}
-\frac{2}{3}\left(\sigma_{qq} -3 \sigma_{pp} \right) = c_2.
\end{equation}
\item[(iii)]
For $e^{r} =\sqrt{2} $ and $\phi = \pi/4$, we obtain
\begin{equation}\label{gate5}
\frac{1}{4} \left( \sigma_{qq} +\sigma_{pp} -6\sigma_{qp} \right) = c_3.
\end{equation}
\end{itemize}
Here $c_1$, $c_2$, and $c_3$ correspond to the right-hand side
(RHS) of Eq.~(\ref{singlegate}), which can be easily determined once
the photon number measurements have been performed.
Equations~(\ref{gate3}) and~(\ref{gate4}) can be solved to
yield the values of $\sigma_{qq}$ and $\sigma_{pp}$, which can be
put into Eq.~(\ref{gate5}) to obtain the value of $\sigma_{qp}$.
Thus $V_{i,i}$ can be completely determined by performing
three photon number measurements after applying the three
distinct single-mode symplectic gates of
Eqs.~(\ref{gate3})-(\ref{gate5}). To determine all $V_{i,i}$
($1\leq i \leq n-1 $), we require $3(n-1)$ measurements. For
$V_{n,n}$, we need to determine $\sigma_{qp}$ and only
one of $\sigma_{qq}$ or $\sigma_{pp}$, as $\text{Tr}(V)$ is already known.
Thus, a total of $3(n-1)+2=3n-1$ distinct
photon number measurements is required to determine all the
parameters of the intra-mode covariance matrix of a Gaussian
state.
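Solving Eqs.~(\ref{gate3})--(\ref{gate5}) amounts to a $3\times 3$ linear system; a minimal Python sketch (ours):
\begin{verbatim}
import numpy as np

def intra_mode_cov(c1, c2, c3):
    # rows: Eqs. (gate3), (gate4), (gate5) in the unknowns
    # (sigma_qq, sigma_pp, sigma_qp)
    A = np.array([[-1/2, 1.0,  0.0],
                  [-2/3, 2.0,  0.0],
                  [ 1/4, 1/4, -3/2]])
    sqq, spp, sqp = np.linalg.solve(A, [c1, c2, c3])
    return np.array([[sqq, sqp], [sqp, spp]])
\end{verbatim}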
\subsection{Estimation of the inter-mode correlation matrix}
\begin{figure}[htbp]
\includegraphics[scale=1]{twomode.eps}
\caption{To estimate the inter-mode correlations matrix, we
apply a two-mode symplectic gate on the state before
performing photon number measurement on each of the modes.
As shown in the figure, first a phase shifter
$\mathcal{U}(R_i(\phi))$ is applied on the $i^{\text{th}}$
mode of the state. This is followed by a balanced beam
splitter $\mathcal{U}(B_{ij}(\frac{\pi}{4}))$ acting on modes $i$
and $j$, and finally a squeezer $\mathcal{U}(S_i(r))$ is
applied on the $i^{\text{th}}$ mode of the state.}
\label{fig:twomode}
\end{figure}
To estimate the inter-mode correlation matrix, we perform
two-mode symplectic operations on the Gaussian state before
measuring the photon number distribution. We write the
covariance matrix of the reduced state of modes $i$ and $j$
in accord with Eq.~(\ref{nmodecov}) as
\begin{equation}
\begin{pmatrix} V_{i,i}& V_{i,j}\\V_{i,j}^T&V_{j,j}\end{pmatrix}.
\end{equation}
Here $i<j$ need not be successive modes. Since $V_{i,i}$ and
$V_{j,j}$ have already been determined in Sec.~\ref{intramode},
we need to determine only $V_{i,j}$.
We further take the matrix elements of $V_{i,j}$ to be
\begin{equation}
\label{intermodecorrealtion}
V_{i,j} = \begin{pmatrix} \gamma_{qq}&\gamma_{qp}\\
\gamma_{pq}&\gamma_{pp} \end{pmatrix}.
\end{equation}
The two-mode symplectic gate comprises a phase shifter
acting on the $i^{\text{th}}$ mode, followed by a balanced
beam splitter acting on modes $i$ and $j$, and finally a
squeezer acting on mode $i$.
We represent this mathematically as
\begin{equation}
\label{twomodesym}
\begin{aligned}
Q_{ij}&(r,\phi) =(S_i(r) \oplus
\mathbb{1}_2)B_{ij}\left(\frac{\pi}{4}\right) (R_i(\phi)
\oplus \mathbb{1}_2), \\
& = \begin{pmatrix}
S_i(r) & 0 \\
0& \mathbb{1}_2
\end{pmatrix}
\frac{1}{\sqrt{2}}\begin{pmatrix}
\mathbb{1}_2& \mathbb{1}_2 \\
-\mathbb{1}_2& \mathbb{1}_2
\end{pmatrix}
\begin{pmatrix}
R_i(\phi) & 0 \\
0& \mathbb{1}_2
\end{pmatrix}.
\end{aligned}
\end{equation}
The schematic representation of
$Q_{ij}(r,\phi)$ is illustrated in Fig.~\ref{fig:twomode}.
When the aforementioned gate $ Q_{ij}(r,\phi) $ acts on
modes $i$ and $j$ of the Gaussian state,
Eq.~(\ref{diffsymplectic}) reduces to
\begin{equation}\label{twomodegate}
\begin{aligned}
\langle \hat{\mathcal{U}}(Q_{ij})^\dagger \hat{N}
&\hat{\mathcal{U}}(Q_{ij}) \rangle-\langle \hat{N} \rangle\\
& = \frac{1}{2} \text{Tr}\bigg[ \begin{pmatrix} V_{i,i}&
V_{i,j}\\V_{i,j}^T&V_{j,j}\end{pmatrix}
\begin{pmatrix}K-\mathbb{1}_2& M\\M^T&L-\mathbb{1}_2\end{pmatrix}
\bigg]\\
&+\frac{1}{2} \begin{pmatrix} d_{q_i}
\\d_{p_i}\\d_{q_j}\\d_{p_j} \end{pmatrix}^T
\begin{pmatrix}K-\mathbb{1}_2&
M\\M^T&L-\mathbb{1}_2\end{pmatrix} \begin{pmatrix} d_{q_i}
\\d_{p_i}\\d_{q_j}\\d_{p_j} \end{pmatrix},
\end{aligned}
\end{equation}
where we have used
\begin{equation}
Q_{ij}^T Q_{ij}= \begin{pmatrix}K& M\\M^T&L\end{pmatrix}.
\end{equation}
Using the following simplification for trace
\begin{equation}
\begin{aligned}
\text{Tr}&\bigg[ \begin{pmatrix} V_{i,i}& V_{i,j}\\V_{i,j}^T&V_{j,j}\end{pmatrix}
\begin{pmatrix}K-\mathbb{1}_2& M\\M^T&L-\mathbb{1}_2\end{pmatrix}
\bigg]\\
&=\text{Tr}\left[ V_{i,i}( K-\mathbb{1}_2)+ V_{j,j}
(L-\mathbb{1}_2 )\right] + 2\text{Tr}\left[ V_{i,j} M^T
\right],
\end{aligned}
\end{equation}
Eq.~(\ref{twomodegate}) can be rearranged as
\begin{eqnarray}
&&\text{Tr}\left[ V_{i,j} M^T \right]=\langle
\hat{\mathcal{U}}(Q_{ij})^\dagger \hat{N}
\hat{\mathcal{U}}(Q_{ij}) \rangle-\langle \hat{N}
\rangle \nonumber \\
&&\quad\quad -\frac{1}{2} \text{Tr}\left[ V_{i,i}(
K-\mathbb{1}_2)+
V_{j,j} (L-\mathbb{1}_2 )\right]\nonumber \\
&&\quad\quad -\frac{1}{2} \begin{pmatrix} d_{q_i}
\\d_{p_i}\\d_{q_j}\\d_{p_j} \end{pmatrix}^T
\begin{pmatrix}K-\mathbb{1}_2&
M\\M^T&L-\mathbb{1}_2\end{pmatrix} \begin{pmatrix} d_{q_i}
\\d_{p_i}\\d_{q_j}\\d_{p_j} \end{pmatrix}.
\label{doublegate}
\end{eqnarray}
Various terms appearing on the RHS of the above equation,
for instance $V_{i,i}$, $V_{j,j}$,
$d_{q_i}$, $d_{p_i}$, $d_{q_j}$, $d_{p_j}$,
have already been determined.
Thus the four unknowns $\gamma_{qq}$, $\gamma_{pp}$,
$\gamma_{qp}$, $\gamma_{pq}$ appearing on the LHS of the
above equation can be determined by performing four
different photon number measurements for appropriate
combinations of the squeezing parameter $r$ and the phase rotation
angle $\phi$. Further, the LHS of Eq.~(\ref{doublegate})
can be expressed as follows:
\begin{eqnarray}
&&\text{Tr}\left[V_{i,j} M^T \right] =
\frac{1}{2}\bigg[ \left( e^{-2r}-1 \right) \cos \phi \, \gamma_{qq}
+\left( e^{2r}-1 \right) \cos \phi \, \gamma_{pp}\nonumber \\
&&\quad\quad\quad\quad+\left(1- e^{2r} \right) \sin \phi\, \gamma_{qp}
+\left( e^{-2r}-1 \right) \sin \phi \,\gamma_{pq}
\bigg].
\end{eqnarray}
We take the following four combinations of the squeezing
parameter $r$ and the phase rotation angle $\phi$ to determine
the four unknowns:
\begin{itemize}
\item[(i)]
For $e^{r} =\sqrt{2} $ and $\phi = 0$, we obtain
\begin{equation}\label{twomodeg1}
-\frac{1}{4}\left( \gamma_{qq}-2\gamma_{pp}\right) = d_1.
\end{equation}
\item[(ii)]
For $e^{r} =\sqrt{3} $ and $\phi = 0$, we obtain
\begin{equation}\label{twomodeg2}
-\frac{1}{3}\left(\gamma_{qq} -3 \gamma_{pp} \right) = d_2.
\end{equation}
\item[(iii)]
For $e^{r} =\sqrt{2} $ and $\phi = \pi/2$, we obtain
\begin{equation}\label{twomodeg3}
-\frac{1}{4}\left(2\gamma_{qp} + \gamma_{pq} \right) = d_3.
\end{equation}
\item[(iv)]
For $e^{r} =\sqrt{3} $ and $\phi = \pi/2$, we obtain
\begin{equation}\label{twomodeg4}
-\frac{1}{3}\left(3\gamma_{qp} + \gamma_{pq} \right) = d_4.
\end{equation}
\end{itemize}
Here $d_1$, $d_2$, $d_3$, and $d_4$ are the RHS of
Eq.~(\ref{doublegate}), which can be easily determined once
the photon number measurements have been performed.
Equations (\ref{twomodeg1}) and (\ref{twomodeg2}) can be
solved to yield the values of $\gamma_{qq}$ and $\gamma_{pp}$,
whereas Eqs.~(\ref{twomodeg3}) and (\ref{twomodeg4}) can be
solved to yield the values of $\gamma_{qp}$ and $\gamma_{pq}$.
Thus, we have used four distinct measurements to determine
the four parameters of $V_{i,j}$. The inter-mode
correlations of the Gaussian state thus require $4\times
n(n-1)/2 = 2n(n-1)$ measurements. So the total number of distinct
measurements required to determine all the $2 n^2 + 3 n$
parameters of the $n$-mode Gaussian state adds up to $2 n^2
+ 3 n$. The results are summarized in Table~\ref{table1}.
Thus, our tomography scheme for Gaussian states using photon
number measurements is optimal in the sense that we require
exactly the same number of distinct measurements as the
number of independent real parameters of the Gaussian state.
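As before, the post-processing is linear; a Python sketch (ours) recovering $V_{i,j}$ from Eqs.~(\ref{twomodeg1})--(\ref{twomodeg4}):
\begin{verbatim}
import numpy as np

def inter_mode_block(d1, d2, d3, d4):
    # Eqs. (twomodeg1)-(twomodeg2) in (gamma_qq, gamma_pp)
    gqq, gpp = np.linalg.solve([[-1/4, 1/2],
                                [-1/3, 1.0]], [d1, d2])
    # Eqs. (twomodeg3)-(twomodeg4) in (gamma_qp, gamma_pq)
    gqp, gpq = np.linalg.solve([[-1/2, -1/4],
                                [-1.0, -1/3]], [d3, d4])
    return np.array([[gqq, gqp], [gpq, gpp]])
\end{verbatim}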
\begin{table}[ht!]
\caption{\label{table1}
Tomography of an $n$-mode Gaussian state by photon number
measurements}
\begin{tabular}{ p{2.5cm} p{2cm} p{2cm} p{1.9cm} }
\doubleRule
Estimate type & Number of parameters & Gaussian operations &
Number of measurements \\
\doubleRule
Mean ($\bm{d}$)& $2n$& Displacement & $2n+1$\\
Intra-mode covariance ($V_{i,i}$)& $3n$& Phase
shifter, squeezer & $3n-1$\\
Inter-mode correlations ($V_{i,j})$& $2n(n-1)$&
Phase shifter, squeezer, beam splitter & $2n(n-1)$\\
\doubleRule
{\bf Total} &$\bm{2 n^2 + 3 n}$ & &$\bm{2
n^2 + 3 n}$ \\ \doubleRule
\end{tabular}
\end{table}
\section{Characterization of Gaussian channels}
\label{sec:channel}
\begin{figure}[htbp]
\includegraphics[scale=1]{channel.eps}
\caption{Scheme for a complete characterization of an
$n$-mode Gaussian channel. We send $2n$ distinct coherent state
probes through the channel and full or partial
state tomography is carried out on the output states. In the
figure, the displacement operator $\hat{D}_i(1,0)$ displaces the
$\hat{q}$-quadrature of the $i^{\text{th}}$ mode of an
$n$-mode vacuum state by a unit amount to give one of the
required probe states. Single- and two-mode gate operations
involved in state tomography and described in
Section~\ref{sec:gaussian} are indicated as ``Gates''.
\label{fig:channel} }
\end{figure}
In this section, we move on to the characterization of a
Gaussian channel using coherent state
probes~\cite{lobino-science,rahimi-njp-2011,wang-pra-2013}
by employing the tomography techniques developed in
Section~\ref{sec:gaussian}. Gaussian channels are defined
as those channels which transform Gaussian states into
Gaussian states~\cite{holevo-2012,parthasarathy-2015}. An
$n$-mode Gaussian channel is specified by a pair of $2n
\times 2n$ real matrices $A$ and $B$ with $B=B^T \geq
0$~\cite{heinosaari-2010}. The matrices $A$ and $B$ are
described by a total of $4 n^2 + 2n(2n+1)/2 = 6n^2+n$ real
parameters and satisfy complete positivity and trace
preserving condition
\begin{equation}
B+ i\Omega -i A \Omega A^T \geq 0.
\end{equation}
The action of the Gaussian channel on mean $\bm{d}$ and
covariance matrix $V$ of a Gaussian state is given by
\begin{equation}
\label{gtransform}
\bm{d} \rightarrow A \bm{d}, \quad V \rightarrow
AVA^T+\frac{1}{2}B.
\end{equation}
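For illustration, the channel action of Eq.~(\ref{gtransform}) and the complete positivity test are immediate to code (our sketch):
\begin{verbatim}
import numpy as np

def apply_channel(A, B, d, V):      # Eq. (gtransform)
    return A @ d, A @ V @ A.T + 0.5 * B

def is_cp(A, B, Omega, tol=1e-9):
    # B + i*Omega - i*A*Omega*A^T >= 0 (a Hermitian matrix)
    M = B + 1j * Omega - 1j * A @ Omega @ A.T
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))
\end{verbatim}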
Here again we follow the scheme proposed in \cite{rb-2015};
a schematic diagram of the scheme is shown in Fig.~\ref{fig:channel}.
We prepare $2n$ distinct coherent state probes by displacing the $n$-mode
vacuum state by a unit amount along each of the $2n$
different phase-space variables. These coherent state probes
are sent through the channel, and full or partial state tomography
using photon number measurements is carried out on the output
states. The information about the output state parameters
enables us to characterize the Gaussian channel. We now
describe the exact scheme in detail. For convenience, we
define a $2n$-dimensional column vector as
\begin{equation}
\bm{e}_j = (0,0,\dots,1,\dots,0,0)^T,
\end{equation}
with $1$ at the $j^{\text{th}}$ position. The first set
of $n$ coherent state probes is prepared by displacing the
$n$-mode vacuum state ($\bm{d}=0$, $V=\mathbb{1}_{2n}/2$) by
a unit amount along the $n$ different $\hat{q}$-quadratures.
For instance, application of the displacement operator
$\hat{D}_j(1,0)$ on the $j^{\text{th}}$ mode of the $n$-mode vacuum
state yields the coherent state
\begin{equation}
\begin{aligned}
|\bm{e}_{2 j-1}\rangle=\hat{D}_j(1,0) |\bm{0}\rangle,
\end{aligned}
\end{equation}
where $|\bm{0}\rangle$ denotes the $n$-mode vacuum state. The
mean and covariance
matrix of the coherent state $|\bm{e}_{2 j-1}\rangle$ are given by
\begin{equation}
\bm{d} = \bm{e}_{2 j-1}, \quad V= \frac{1}{2}\mathbb{1}_{2n}.
\end{equation}
This coherent state is sent through the Gaussian channel, and
the mean and covariance matrix of the probe state transform
according to Eq.~(\ref{gtransform}):
\begin{equation}
\label{outputstate}
\bm{d}_G=A\bm{e}_{2j-1}, \quad V_G=\frac{1}{2}(A A^T +B).
\end{equation}
Now we perform full state tomography on the output state
$\rho_G$ ($j=1$), which requires $2 n^2+3 n$ measurements.
This provides us with the matrix $A A^T +B$ and the first
column of the matrix $A$. For the remaining $n-1$ probe states ($2
\leq j \leq n$), we measure only the mean of the output
state $\rho_G$, which enables us to determine all the odd
columns of the matrix $A$.
However, as we noted in Sec.~\ref{sec:dis}, we need to
perform $2n+1$ measurements to obtain the $2n$ elements of the
mean vector $\bm{d}_G$.
This would make the required number of measurements exceed
the number of channel parameters for the
complete characterization of the Gaussian channel, rendering
the scheme non-optimal. However, as we can see from
Eq.~(\ref{outputstate}),
$\text{Tr}(V) =
\text{Tr}(A A^T +B)/2$ is the same for all probe states, as all
the output states have the same covariance
matrix, and it has already been obtained in
the process of tomography of the first output state ($j=1$).
We now show how this fact can be exploited to obtain the
value of $\langle \hat{N} \rangle$ for the other coherent
state probes, resulting in an optimal characterization
of the Gaussian channel.
We perform $2n$ measurements after displacing the output
state $\rho_G$ corresponding to the second coherent state probe
and obtain $2n$ equations as follows:
\begin{equation}
\label{channelm}
\begin{aligned}
d_{q_i} =& \langle \hat{D}_i(1,0)^\dagger \hat{N} \hat{D}_i(1,0)
\rangle-\langle \hat{N} \rangle - \frac{1}{2}, \quad 1 \leq i \leq n, \\
d_{p_i} =& \langle \hat{D}_i(0,1)^\dagger \hat{N} \hat{D}_i(0,1)
\rangle-\langle \hat{N} \rangle - \frac{1}{2}, \quad 1 \leq i \leq n.
\end{aligned}
\end{equation}
We substitute $d_{q_i}$ and $d_{p_i}$ ($1 \leq i \leq n$) in
Eq.~(\ref{tracen}) and obtain a quadratic
equation in $\langle \hat{N} \rangle$. After solving for
$\langle \hat{N} \rangle$, we substitute its value in
Eq.~(\ref{channelm}) to obtain $d_{q_i}$ and $d_{p_i}$ ($1
\leq i \leq n$).
Thus, for the remaining output
states $\rho_G$ ($2
\leq j \leq n$), only $2n$ measurements are required to
determine the mean vector $\bm{d}_G$, and no additional
measurements are needed.
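For concreteness, the following sketch (an illustration of ours)
implements this step for a single probe; it assumes that
Eq.~(\ref{tracen}) is the identity $\langle \hat{N} \rangle =
[\text{Tr}(V) + \|\bm{d}\|^2 - n]/2$, so that substituting the
expressions of Eq.~(\ref{channelm}) yields the quadratic equation
mentioned above. Both real roots are returned; the physical one is
selected by consistency with the recovered mean vector.

\begin{verbatim}
# Recover <N> from 2n displaced-number measurements
# m_i = <D_i^dag N D_i> and the known Tr(V), assuming
# <N> = (Tr V + |d|^2 - n)/2 and d_i = m_i - <N> - 1/2.
import numpy as np

def mean_photon_number(m, trV, n):
    m = np.asarray(m, dtype=float)       # length 2n
    a = 2 * n
    b = -2.0 * np.sum(m - 0.5) - 2.0
    c = np.sum((m - 0.5) ** 2) - n + trV
    roots = np.roots([a, b, c])
    return roots[np.isreal(roots)].real  # candidate values of <N>

# Coherent output d = (1, 0), V = I/2: m = (2, 1), Tr V = 1.
print(mean_photon_number([2.0, 1.0], trV=1.0, n=1))  # 2.5, 0.5
\end{verbatim}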
The second set of $n$ coherent state probes is prepared by
displacing the $n$-mode vacuum state by a unit amount along
the $n$ different $\hat{p}$-quadratures. For instance,
application of displacement operator $\hat{D}_j(0,1)$ on the
$j^{th}$ mode of the $n$-mode vacuum state yields the
coherent state
\begin{equation}
\begin{aligned}
|\bm{e}_{2 j}\rangle=\hat{D}_j(0,1) |\bm{0}\rangle.
\end{aligned}
\end{equation}
The mean and covariance matrix of the coherent state
$|\bm{e}_{2 j}\rangle$ are given by
\begin{equation}
\bm{d} = \bm{e}_{2 j}, \quad V= \frac{1}{2}\mathbb{1}_{2n}.
\end{equation}
This coherent state is sent through the Gaussian channel and
the mean and covariance matrix of the probe state transforms
according to Eq.~(\ref{gtransform}):
\begin{equation}
\bm{d}_G=A\bm{e}_{2j}, \quad V_G=\frac{1}{2}(A A^T +B).
\end{equation}
For all these $n$ output states $\rho_G$ ($1 \leq j \leq n$),
we measure only the mean, which enables us to determine
all the even columns of matrix $A$. This information
completely specifies matrix $A$, as the odd columns had already
been determined using the first set of $n$ $\hat{q}$-displaced
coherent state probes. This also enables us to obtain
matrix $B$, as the matrix $A A^T +B$ was already known from
the full state tomography on the first coherent state probe.
Thus, the total number of distinct measurements required sums to
$6n^2+n$, as shown in Table~\ref{table2}, which exactly
matches the number of parameters specifying a Gaussian channel.
In the scheme of
Parthasarathy \textit{et al}.\,~\cite{rb-2015}, $2n-1$ additional
measurements were required which we do not need,
leading to the optimality of our scheme.
We note that the scheme remains optimal even
when the coherent state probes have different mean values,
since $\text{Tr}(V)$ is the same for all the output states even in
this case.
\begin{table}[h]
\caption{\label{table2}
Tomography of an $n$-mode Gaussian channel }
\begin{tabular}{ p{1.8cm} p{2.7cm} c p{3cm}}
\doubleRule
Coherent state probe& Information obtained&
&Measurement number \\
\doubleRule
$\hat{q}$-displaced& Odd columns of $A$
\& $(A A^T +B)$
&& $2n^2+3n +(n-1) \times 2n$\\
$\hat{p}$-displaced & Even columns of $A$ && $n \times 2n$\\
\doubleRule
&{\bf Total} && $\bm{6n^2+n}$ \\ \doubleRule
\end{tabular}
\end{table}
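As a quick sanity check of the counts in Table~\ref{table2}, the
following lines verify the total for the first few mode numbers:

\begin{verbatim}
# Check: (2n^2 + 3n) + (n-1)*2n + n*2n == 6n^2 + n for all n.
for n in range(1, 11):
    assert (2*n**2 + 3*n) + (n - 1)*2*n + n*2*n == 6*n**2 + n
\end{verbatim}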
\section{Variance in photon number measurements}
\label{sec:variance}
In this section, we analyze the variance of the photon number
distribution of the original state and of the gate-transformed
states which we used for state and process estimation in
Sections~\ref{sec:gaussian}~\&~\ref{sec:channel}. This
study will give us an idea of the quality of our
estimates of the Gaussian states and channels.
To evaluate the variance of the photon number, we note that the
square of the number operator can easily be put in
symmetrically ordered form as follows:
\begin{eqnarray}
&&\hat{N}^2 =\frac{1}{4}\sum_{i,j=1}^{n} \left(
\hat{q_i}^2+\hat{p_i}^2 -1 \right)
\left( \hat{q_j}^2+\hat{p_j}^2 -1 \right) \nonumber \\
&&\{\hat{N}^2\}_{\rm sym}=f(\hat{q},\hat{p})=
\frac{1}{4}\sum_{\substack{i,j=1\\ i\ne j}}^{n} \left(
\hat{q_i}^2+\hat{p_i}^2 -1 \right)
\left( \hat{q_j}^2+\hat{p_j}^2 -1 \right)\nonumber \\
&&\!\!\!\!+\frac{1}{4}\sum_{i=1}^{n}\! \bigg[ \hat{q_i}^4+\hat{p_i}^4
-2\hat{q_i}^2-2\hat{p_i}^2 \nonumber
\!+\!\frac{1}{3}(\hat{q}_i^2\hat{p}_i^2+
\hat{q}_i\hat{p}_i\hat{q}_i\hat{p}_i+
\hat{q}_i\hat{p}_i^2\hat{q}_i)\!\bigg].
\nonumber \\
\end{eqnarray}
Thus
the average of $\hat{N}^2$ can be readily evaluated as
\begin{equation}
\begin{aligned}\label{av}
\langle \hat{N}^2 \rangle =\int \mathrm{d}^{2n} \bm{\xi} \; f(\bm{\xi})
W(\bm{\xi}).
\end{aligned}
\end{equation}
Using the above equation and Eq.~(\ref{avnumber}), the variance
of the number operator can be written in the elegant form~\cite{manko-pra-1994,rb-2015,Pierobon-pra-2019}
\begin{equation}
\begin{aligned}\label{ab1}
\text{Var} (\hat{N} ) =&\langle \hat{N}^2 \rangle- \langle
\hat{N} \rangle^2\\
=&\frac{1}{2} \text{Tr}\left[\left(
V-\frac{1}{2}\mathbb{1}_{2n}\right) \left(
V+\frac{1}{2}\mathbb{1}_{2n}\right) \right]+\bm{d}^T V \bm{d}.
\end{aligned}
\end{equation}
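As a simple numerical illustration of Eq.~(\ref{ab1}) (our
example, with hypothetical states), the sketch below evaluates
$\text{Var}(\hat{N})$ directly from $(\bm{d},V)$; for a coherent
state it reproduces the Poissonian relation
$\text{Var}(\hat{N})=\langle\hat{N}\rangle=\|\bm{d}\|^2/2$.

\begin{verbatim}
import numpy as np

def var_photon_number(d, V):
    I = np.eye(V.shape[0])          # V is 2n x 2n
    return (0.5 * np.trace((V - 0.5 * I) @ (V + 0.5 * I))
            + d @ V @ d)

# Vacuum (d = 0, V = I/2): Var(N) = 0, as for any Fock state.
print(var_photon_number(np.zeros(2), 0.5 * np.eye(2)))        # 0.0
# Coherent state d = (1, 0): Var(N) = |d|^2/2 = <N> (Poisson).
print(var_photon_number(np.array([1.0, 0.0]), 0.5 * np.eye(2)))
\end{verbatim}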
We first explore the mean and variance of the photon number of
a single-mode system to gain some insight. We consider a
single-mode Gaussian state with mean $\bm{d} = (u,u)^T$ and
covariance matrix
\begin{equation}
\label{single-mode}
V (\beta)= \frac{1}{2}(2 \mathcal{N} +1)R(\beta)S(2s)R(\beta)^T,
\end{equation}
where $\mathcal{N}$ is the thermal noise parameter, $s$ is the
squeezing, and $\beta$ is the phase shift angle.
The mean and variance of the number operator for the above state read
\begin{eqnarray}
&&\langle \hat{N} \rangle= \mathcal{N} \cosh 2s + \sinh^2 s +u^2,
\nonumber \\
&&\text{Var} (\hat{N}) =\left( \mathcal{N}+\frac{1}{2}
\right)^2 \cosh 4s
-\frac{1}{4} \nonumber \\
&&
\quad\quad\,+2 u^2 \left( \mathcal{N}+\frac{1}{2} \right)
\left(\cosh 2s + \sin 2 \beta \sinh 2s \right).
\end{eqnarray}
Here, both the mean and the variance depend on the displacement $u$ and
the squeezing $s$ of the state. However, the mean photon number is
independent of the phase shift angle $\beta$, while the variance
of the photon number depends on $\beta$.
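The mean can also be verified numerically; the hedged sketch
below assumes the conventions that $R(\beta)$ is a phase-space
rotation and $S(2s)=\mathrm{diag}(e^{2s},e^{-2s})$, together with
the identity $\langle\hat{N}\rangle=[\text{Tr}(V)+\|\bm{d}\|^2-n]/2$.

\begin{verbatim}
# Numerical check of <N> = Ncal*cosh(2s) + sinh(s)^2 + u^2.
import numpy as np

def V_single_mode(Ncal, s, beta):
    R = np.array([[np.cos(beta), -np.sin(beta)],
                  [np.sin(beta),  np.cos(beta)]])
    S = np.diag([np.exp(2 * s), np.exp(-2 * s)])
    return 0.5 * (2 * Ncal + 1) * R @ S @ R.T

Ncal, s, u, beta = 1.0, 0.3, 0.8, np.pi / 3
V, d = V_single_mode(Ncal, s, beta), np.array([u, u])
print(0.5 * (np.trace(V) + d @ d - 1))             # <N> from (d, V)
print(Ncal*np.cosh(2*s) + np.sinh(s)**2 + u**2)    # closed form
\end{verbatim}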
The variance of the displaced number operator is given by
\begin{equation}
\label{vardis}
\begin{aligned}
&\text{Var}
\left(\hat{D}(\bm{r})^{\dagger}\hat{N}\hat{D}(\bm{r})\right)
=
(\bm{d}+\bm{r})^T V (\bm{d}+\bm{r})\\
&\quad\quad+\frac{1}{2} \text{Tr}\left[\left(
V-\frac{1}{2}\mathbb{1}_{2n}\right) \left(
V+\frac{1}{2}\mathbb{1}_{2n}\right) \right].
\end{aligned}
\end{equation}
\begin{figure}[htbp]
\includegraphics[scale=1]{meanvar.eps}
\caption{[Colour online]
Mean and variance of the photon number for the single-mode
squeezed coherent thermal state~(\ref{single-mode}), plotted
with parameters $\beta = \pi/3$ and $\mathcal{N}=1$. In all
four panels, the black solid curve corresponds to $\hat{N}$,
while the red dashed curve corresponds to
$D^\dagger (1,0) \hat{N}D (1,0)$. $(a)$ Mean photon number
as a function of displacement $u$. $(b)$ Mean photon
number as a function of squeezing $s$. $(c)$ Variance of
photon number as a function of displacement $u$. $(d)$
Variance of photon number as a function of squeezing $s$.
\label{fig:meanvar1}
}
\end{figure}
In Fig.~\ref{fig:meanvar1}$(a)$, we plot $ \langle
\hat{N}\rangle$ and $\langle D^\dagger (1,0) \hat{N}D
(1,0)\rangle$ as a function of displacement parameter $u$
for a single-mode squeezed coherent thermal
state~(\ref{single-mode}). We see that while $\langle
D^\dagger (1,0) \hat{N}D (1,0)\rangle$ is larger than
$\langle \hat{N}\rangle$, the mean values of both
operators increase with the displacement parameter $u$.
Further, $\langle D^\dagger (1,0) \hat{N}D
(1,0)\rangle$ equals $\langle D^\dagger (0,1) \hat{N}D
(0,1)\rangle$ as can be seen from Eq.~(\ref{disnumber1}).
Similarly, Fig.~\ref{fig:meanvar1}$(b)$ shows that mean
values
$ \langle \hat{N}\rangle$ and $\langle D^\dagger (1,0) \hat{N}D
(1,0)\rangle$
increase with squeezing $s$. We plot the variances of
the operators $ \hat{N}$ and $D^\dagger (1,0) \hat{N}D (1,0)$
as a function of the displacement parameter $u$ in
Fig.~\ref{fig:meanvar1}$(c)$. We see that while the variance
of the operator $D^\dagger (1,0) \hat{N}D (1,0)$ is larger than
that of $ \hat{N}$, the variances of both
operators increase with the displacement parameter $u$.
Similarly, Fig.~\ref{fig:meanvar1}$(d)$ shows that the variances of
the operators $ \hat{N}$ and $D^\dagger (1,0) \hat{N}D
(1,0)$ increase with squeezing $s$.
The variance of the photon number
after a symplectic transformation $S$ of the state reads
\begin{equation}
\label{symvar}
\begin{aligned} &\text{Var}
\big(\mathcal{U}(S)^{\dagger}\hat{N}\mathcal{U}(S)\big)
= \bm{d}^T V \bm{d}\\ &\quad\quad+\frac{1}{2}
\text{Tr}\left[\left(
SVS^T-\frac{1}{2}\mathbb{1}_{2n}\right) \left(
SVS^T+\frac{1}{2}\mathbb{1}_{2n}\right) \right].\\
\end{aligned}
\end{equation}
Using this expression, we first compare the variance of the
number operator under the action of the $P_i(r,\phi)$
gate (Eq.~(\ref{phasesq})) for different values of the
parameters $r$ and $\phi$. In Fig.~\ref{fig:meanvar12}($a$), we
plot the variances of different $P_i(r,\phi)$ gate
transformed number operators as a function of displacement
$u$ for the single-mode squeezed coherent thermal
state~(\ref{single-mode}). We can see that the variances of
the different $P_i(r,\phi)$ gate transformed number operators
increase with the displacement $u$. While the variance of
$\mathcal{U}^\dagger(P)\hat{N}\mathcal{U}(P)$ with
$e^r=\sqrt{2}$, $\phi= \pi/4$ is always lower than the
variance of $\hat{N}$ and variance of
$\mathcal{U}^\dagger(P)\hat{N}\mathcal{U}(P)$ with
$e^r=\sqrt{3}$, $\phi= 0$ is always higher than the variance
of $\hat{N}$, variance of
$\mathcal{U}^\dagger(P)\hat{N}\mathcal{U}(P)$ with
$e^r=\sqrt{2}$, $\phi= 0$ crosses over the variance of
$\hat{N}$ at a certain value of the displacement $u$. We show
the variance of the photon number as a function of the squeezing
parameter $s$ in Fig.~\ref{fig:meanvar12}($b$). As we can see,
the variances of the different $P_i(r,\phi)$ gate transformed number
operators show a dependence on the squeezing $s$ similar to that
on the displacement $u$.
Now, to compare the variance of the photon number under the
action of the two-mode gates $Q_{ij}(r,\phi)$
(Eq.~(\ref{twomodesym})), we
consider a two-mode Gaussian state with mean $\bm{d} =
(u,u,u,u)^T$ and covariance matrix $V$
\begin{equation}\label{two-modestate}
V = B_{12}\left(\frac{\pi}{4}\right)[V(\beta_1) \oplus
V(\beta_2)]B_{12}\left(\frac{\pi}{4}\right)^T,
\end{equation}
where $V (\beta)$ is defined in Eq.~(\ref{single-mode}). We
use Eq.~(\ref{symvar}) to compute the variance of
$Q_{ij}(r,\phi)$ gate transformed number operator
corresponding to the above state.
\begin{figure}[htbp]
\includegraphics[scale=1]{onetwomodevar.eps}
\caption{[Colour online] $(a)$ Variance of photon number as
a function of displacement $u$ for single-mode
squeezed thermal state~(\ref{single-mode}).
$(b)$ Variance of photon number as
a function of squeezing $s$ for single-mode squeezed
thermal state~(\ref{single-mode}).
For both panels $(a)$ and $(b)$, the various curves correspond to
$\text{Var} (\mathcal{U}^\dagger(P)\hat{N}\mathcal{U}(P))$
with $e^r=\sqrt{2}$, $\phi= 0$ (Red dashed),
$e^r=\sqrt{3}$, $\phi= 0$ (Orange dotted), $e^r=\sqrt{2}$,
$\phi= \pi/4$ (Purple dot dashed), while Black solid curve
represents $\text{Var} (\hat{N})$, and parameter $\beta =\pi/3$.
$(c)$ Variance of photon number as a function of displacement
$u$ for two-mode squeezed thermal state~(\ref{two-modestate}).
$(d)$ Variance of photon number as a function of squeezing $s$
for two-mode squeezed thermal state~(\ref{two-modestate}).
For both panels $(c)$ and $(d)$, the various curves correspond to
$\text{Var} (\mathcal{U}^\dagger(Q)\hat{N}\mathcal{U}(Q))$ with
$e^r=\sqrt{2}$, $\phi= 0$ (Red dashed), $e^r=\sqrt{3}$, $\phi= 0$
(Orange dotted), $e^r=\sqrt{2}$, $\phi= \pi/2$ (Purple dot dashed),
$e^r=\sqrt{3}$, $\phi= \pi/2$ (Magenta large dashed) while Black
solid curve represents $\text{Var} (\hat{N})$. For all four panels,
the thermal parameter is $\mathcal{N}=1$. }
\label{fig:meanvar12}
\end{figure}
In Fig.~\ref{fig:meanvar12}$(c)$, we plot the variances of
different $Q_{ij}(r,\phi)$ gate transformed number operators
as a function of displacement $u$ for the two-mode squeezed
coherent thermal state~(\ref{two-modestate}). We can see
that the variances of the different $Q_{ij}(r,\phi)$ gate transformed
number operators increase with displacement. While the
variances of $\mathcal{U}^\dagger(Q)\hat{N}\mathcal{U}(Q)$
with $e^r=\sqrt{2}$, $\phi= 0$, and $e^r=\sqrt{3}$, $\phi=0$
always remain higher than the variance of $\hat{N}$, the
variances of $\mathcal{U}^\dagger(Q)\hat{N}\mathcal{U}(Q)$
with $e^r=\sqrt{2}$, $\phi= \pi/2$ and $e^r=\sqrt{3}$,
$\phi=\pi/2$ cross over the variance of $\hat{N}$ at a
certain value of the displacement parameter $u$. The variances
of the different $Q_{ij}(r,\phi)$ gate transformed number
operators as a function of squeezing $s$ are shown in
Fig.~\ref{fig:meanvar12}$(d)$. As we can see, the squeezing
dependence of the different variances exhibits a trend
similar to their dependence on displacement.
Now we wish to relate the variances of the transformed number
operators to the variances of the estimated Gaussian parameters. For
an $n$-mode system, the quadrature operators $\hat{q}_i$ and
$\hat{p}_i$ can be expressed as
\begin{eqnarray} \hat{q}_i
&=& \hat{D}_i(1,0)^\dagger \hat{N} \hat{D}_i(1,0) - \hat{N}
-\frac{1}{2}, \nonumber \\ \hat{p}_i &=&
\hat{D}_i(0,1)^\dagger \hat{N} \hat{D}_i(0,1) - \hat{N}
-\frac{1}{2}. \label{quadratur}
\end{eqnarray}
Averaging
the above equation yields Eq.~(\ref{meaneq}). Since
$\hat{N}$ and $\hat{D}_i(1,0)^\dagger \hat{N}
\hat{D}_i(1,0)$ are measured on different copies of the state,
the corresponding outcomes are uncorrelated and the
expressions for the variances of the quadrature estimates become
\begin{eqnarray}
\text{Var}(\hat{q}_i) &=&
\text{Var}(\hat{D}_i(1,0)^\dagger \hat{N} \hat{D}_i(1,0)) +
\text{Var}(\hat{N} ), \nonumber \\ \text{Var}(\hat{p}_i) &=&
\text{Var}(\hat{D}_i(0,1)^\dagger \hat{N} \hat{D}_i(0,1)) +
\text{Var}(\hat{N} ). \label{quadrature}
\end{eqnarray}
Thus, the variance of the $\hat{q}_i$ quadrature, which
quantifies the quality of the estimation of
$\hat{q}_i$, depends on both the displacement $u$ and the squeezing
$s$, as we can see from the above analysis. An optimization
of the parameters $q_i$ and $p_i$ appearing in the displacement
gate $D_i(q_i,p_i)$ is required in order to minimize
$\text{Var}(\hat{q}_i)$.
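The following sketch (ours) combines Eqs.~(\ref{ab1}),
(\ref{vardis}) and (\ref{quadrature}) to predict
$\text{Var}(\hat{q}_i)$ for a given probe output $(\bm{d},V)$;
the unit vector $\bm{e}$ selects the displacement direction.

\begin{verbatim}
import numpy as np

def var_quadrature_estimate(d, V, e):
    I = np.eye(V.shape[0])
    tr = 0.5 * np.trace((V - 0.5 * I) @ (V + 0.5 * I))
    var_N  = tr + d @ V @ d                # variance of N
    var_DN = tr + (d + e) @ V @ (d + e)    # variance of displaced N
    return var_DN + var_N                  # uncorrelated sum

V = 0.5 * np.eye(2)                        # single-mode example
print(var_quadrature_estimate(np.array([0.3, 0.0]), V,
                              np.array([1.0, 0.0])))
\end{verbatim}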
Similarly we can express $\hat{q}_i^2$ as
\begin{equation}
\hat{q}_i^2 =
6\bigg[\underbrace{\hat{\mathcal{U}}(P_i)^\dagger \hat{N}
\hat{\mathcal{U}}(P_i)}_{e^r=\sqrt{3},\phi=0}
-2\underbrace{\hat{\mathcal{U}}(P_i)^\dagger \hat{N}
\hat{\mathcal{U}}(P_i)}_{e^r=\sqrt{2},\phi=0} - \hat{N}
\bigg]. \end{equation}
Thus the variance of $\hat{q}_i^2$
can be written as
\begin{equation} \begin{aligned}
\text{Var}(\hat{q}_i^2) = 6\bigg[\underbrace{\text{Var}
(\hat{\mathcal{U}}(P_i)^\dagger \hat{N}
\hat{\mathcal{U}}(P_i))}_{e^r=\sqrt{3},\phi=0}
+&2\underbrace{\text{Var}(\hat{\mathcal{U}}(P_i)^\dagger
\hat{N} \hat{\mathcal{U}}(P_i))}_{e^r=\sqrt{2},\phi=0} \\ &+
\text{Var}(\hat{N}) \bigg]. \end{aligned}
\end{equation}
We see from the above analysis that the variance of
$\hat{q}_i^2$ also depends on both the
displacement $u$ and the squeezing $s$.
In this case too, a proper study of
the optimization of the $P_i(r,\phi)$ gate parameters for the
minimization of $\text{Var}(\hat{q}_i^2)$ is needed. Such
an analysis will be useful for the best estimation of the Gaussian
state parameters. Similarly, various intra-mode correlation
terms such as $\text{Var}(\hat{p}_i^2)$ and
$\text{Var}(\hat{q}_i \hat{p}_i)$, as well as various inter-mode
correlation terms such as $\text{Var}(\hat{q}_i \hat{q}_j)$
and $\text{Var}(\hat{q}_i \hat{p}_j)$, can be expressed in
terms of the variances of different transformed number operators.
\section{Concluding remarks}
\label{sec:conclusion}
In this work we presented a Gaussian state tomography and
Gaussian process tomography scheme based on photon number
measurements. While the work builds upon the proposal given
in~\cite{rb-2015}, the current proposal offers an optimal
solution to the problem with a smaller number of optical
elements, which renders the scheme more accessible to
experimentalists. After describing our optimal scheme for
Gaussian state tomography, we use it for the estimation of a
Gaussian channel in an optimal way, where a total of
$6n^2+n$ distinct measurements is required to determine the
$6n^2+n$ parameters specifying a Gaussian channel. Here we
have exploited the fact that $\text{Tr}(V)$ is the same for
all the output states corresponding to coherent state probes
with the same or different means. Full state tomography of the
first coherent state probe yields an estimation of
$\text{Tr}(V)$ which can be used to estimate $\langle
\hat{N}\rangle$ for each of the remaining coherent state
probes, thus making the scheme optimal. This in some sense
completes the problem of finding an optimal solution of the
Gaussian channel characterization posed in~\cite{rb-2015}.
It should be noted that our scheme is an improvement over
similar earlier schemes based on photon number measurements
and not over homodyne and heterodyne techniques which are
currently more prevalent. Similarly, the optimality is in
terms of the number of distinct experiments needed in the
scheme while each experiment will have to be repeated to obtain
the required average values. Having said so, it is worth
mentioning that there have been attempts to develop
homodyne measurement schemes using weak local oscillators
and PNRD~\cite{olivares-2017,walmsley-pra-2020} for use in
circumstances where the strong local oscillators essential to
the traditional homodyne scheme are not
desirable. Our scheme based on PNRD
is an advancement in this direction as it requires no local
oscillator. Homodyne and heterodyne schemes go beyond
Gaussian states, whereas our present scheme is aimed only at
the estimation of Gaussian states. In principle, PNRD based
tomography schemes that go beyond Gaussian states can be
devised; however, this aspect requires further investigation.
Finally, since PNRD measurements have become possible in
recent times, it is expected that in the coming years they
will become more practical and easier to implement.
The analysis of variance in photon number measurements of
the original and transformed states shows that the variance
increases with the mean of the state and with the squeezing
parameter. Thus, this scheme is well suited for states with
small mean values or small displacements and small values of
squeezing. Extending the scheme to states with large mean
values while retaining good estimation performance is under
consideration and will be reported elsewhere. While we
have chosen certain specific values of the gate parameters (see
Eq.~(\ref{gate3})) to extract information about the
parameters of the state, the effect of different values of the
gate parameters on the quality of the estimates and the
determination of optimal parameters that maximize the
performance of the scheme need further investigation. The
optimality of the procedure may have a relationship with
mutually unbiased bases for CV systems. Further analysis
of this aspect will require us to go beyond Gaussian states
and will be taken up elsewhere.
\section*{Acknowledgement}
R.S. acknowledges financial support from {\sf SERB MATRICS
MTR/2017/000431} and {\sf
DST/ICPS/QuST/Theme-2/2019/General} Project number {\sf
Q-90}. Arvind acknowledges the financial
support from {\sf DST/ICPS/QuST/Theme-1/2019/General} Project
number {\sf Q-68}.
| {'timestamp': '2020-07-27T02:05:26', 'yymm': '2004', 'arxiv_id': '2004.06649', 'language': 'en', 'url': 'https://arxiv.org/abs/2004.06649'} |
\section{Introduction}
\label{sec:intro}
Monaural music source separation has been the focus of many research efforts for over a decade.
This task aims at separating a music recording into several tracks where each track corresponds to a single instrument.
A related goal is to design
algorithms that can separate vocals and accompaniment, where all the instruments are considered as one source. Music source separation algorithms have been successfully used for predominant pitch tracking \cite{fan2016singing}, accompaniment generation for Karaoke systems \cite{tachibana2016real}, or singer identification~\cite{berenzweig2002using}.
Despite these advances, a system that can successfully generalize to different music datasets has thus far remained unachievable, due to the tremendous variability of music recordings, for example in terms of genre or types of instruments used.
Unsupervised methods, such as those based on computational auditory scene analysis (CASA) \cite{li2007separation}, source/filter modeling \cite{durrieu2010source}, or low-rank and sparse modeling \cite{huang2012singing}, have difficulty in capturing the dynamics of the vocals and instruments, while supervised methods, such as those based on non-negative matrix factorization (NMF) \cite{sprechmann2012real}, F0-based estimation \cite{hsu2010improvement}, or Bayesian modeling \cite{yang2014bayesian}, suffer from generalization and processing speed issues.
Recently, deep learning has found many successful applications in audio source separation. Conventional regression-based networks try to infer the source signals directly, often by inferring time-frequency (T-F) masks to be applied to the T-F representation of the mixture so as to recover the original sources. These mask-inference networks have been shown to produce superior results compared to the traditional approaches in singing voice separation \cite{huang2014singing}.
These networks are a natural choice when the sources can be characterized as belonging to distinct classes.
Another promising approach designed for more general situations is the so-called deep clustering framework \cite{hershey2016deep}. Deep clustering has been applied very successfully to the task of single-channel speaker-independent speech separation \cite{hershey2016deep}. Because it uses pair-wise affinities as the separation criterion, deep clustering can handle mixtures with multiple sources of the same type, and an arbitrary number of sources.
Such difficult conditions are endemic to music separation.
In this study, we explore the use of both deep clustering and conventional mask-inference networks to separate the singing voice from the accompaniment, grouping all the instruments as one source and the vocals as another. The singing voice separation task that we consider here is amenable to class based separation, and would not seem to require the extra flexibility in terms of source types and number of sources that deep clustering would provide. However, in addition to opening up the potential to apply to more general settings, the additional flexibility of deep clustering may have some benefits in terms of regularization. Whereas conventional mask-inference approaches only focus on increasing the separation between sources, the deep clustering objective also reduces within-source variance in the internal representation, which could be beneficial for generalization. In recent work it has been shown that forcing deep network activations to cluster well can improve the resulting test performance \cite{liao2016learning}.
To investigate these potential benefits, we develop a two-headed ``Chimera'' network with both a deep clustering head and a mask-inference head attached to the same network body. Each head has its own objective, but the whole hybrid network is trained in a joint fashion akin to multi-task training.
Our findings show that the addition of the deep clustering criterion greatly improves upon the performance of the mask-inference network.
\section{Model Description}
\label{sec:format}
\subsection{Deep clustering}
Deep clustering operates according to the assumption that the T-F representation of the mixed signal can be partitioned into multiple sets, depending on which source is dominant (i.e., its power is the largest among all sources) in a given bin. A deep clustering
network takes features of the acoustic signal as input, and assigns a $D$-dimensional embedding to each T-F bin. The network is trained to encourage the embeddings of T-F bins dominated by the same source to be similar to each other, and the embeddings of T-F bins dominated by different sources to be different. Note that the concept of ``source'' shall be defined according to the task at hand: for example, one speaker per source for speaker separation, all vocals in one source versus all other instruments in another source for singing voice separation, etc.
A T-F mask for separating each source can then be estimated by clustering the T-F embeddings \cite{isik2016single}.
The training target is derived from a label indicator matrix ${\bf Y} \in \mathbb R^{T F \times C}$, where $T$ denotes the number of frames, $F$ the number of frequency bins, and $C$ the number of sources in the input mixture ${\bf x}$, such that $Y_{i,j} = 1$ if T-F bin $i=(t,f)$ is dominated by source $j$, and $Y_{i,j} = 0$ otherwise. We can construct a binary affinity matrix ${\bf A} = {\bf Y}{\bf Y}^T$, which represents the assignment of the sources in a permutation independent way: $A_{i,j} = 1$ if $i$ and $j$ are dominated by the same source, and $A_{i,j} = 0$ if they are not. The network estimates an embedding matrix ${\bf V} \in \mathbb R^{T F \times D}$,
where $D$ is the embedding dimension. The corresponding estimated affinity matrix is then defined as ${\bf \hat{A}} = {\bf V}{\bf V}^T$. The cost function for the network is
\begin{equation}
\mathcal{L}_{\text{DC}} = ||{\bf \hat{A}} - {\bf A}||_F^2 = ||{\bf V}{\bf V}^T - {\bf Y}{\bf Y}^T||_F^2. \label{eq:dc}
\end{equation}
Although the matrices ${\bf A}$ and ${\bf \hat{A}}$ are typically very large, their low-rank structure can be exploited to decrease the computational complexity \cite{hershey2016deep}.
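For illustration, a minimal NumPy sketch of this low-rank
evaluation is given below; the shapes and random test data are our
own choices.

\begin{verbatim}
# Low-rank evaluation of the deep clustering loss:
# ||VV^T - YY^T||_F^2 = ||V^T V||_F^2 - 2||V^T Y||_F^2
#                       + ||Y^T Y||_F^2,
# so the TF x TF affinity matrices are never formed.
import numpy as np

def dc_loss(V, Y):
    # V: (T*F, D) embeddings; Y: (T*F, C) one-hot labels.
    return (np.linalg.norm(V.T @ V) ** 2
            - 2 * np.linalg.norm(V.T @ Y) ** 2
            + np.linalg.norm(Y.T @ Y) ** 2)

rng = np.random.default_rng(0)
V = rng.normal(size=(1000, 20))
V /= np.linalg.norm(V, axis=1, keepdims=True)  # unit-norm rows
Y = np.eye(2)[rng.integers(0, 2, size=1000)]   # C = 2 sources
print(dc_loss(V, Y))
\end{verbatim}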
At test time, a clustering algorithm such as K-means is applied to the embeddings $V$ to generate a cluster assignment matrix, which is used as a binary T-F mask applied to the mixture to estimate the T-F representation of each source.
\subsection{Multi-task learning and Chimera networks}
Whereas the deep clustering objective function has been shown to enable the training of neural networks for challenging source separation problems, a disadvantage of deep clustering is that the post-clustering process needed to generate the mask and recover the sources is not part of the original objective function.
On the other hand, for mask-inference networks, the objective function minimized during training is directly related to the signal recovery quality.
We seek to combine the benefits of both approaches in a strategy reminiscent of multi-task learning, except that here both approaches address the same separation task.
In \cite{hershey2016deep} and \cite{isik2016single}, the typical structure of a deep clustering network is to have multiple stacked recurrent layers (e.g., BLSTMs) yielding an $N$-dimensional vector at the top layer, followed by a fully-connected linear layer. For each frame $t$, this layer outputs a $D$-dimensional vector for each of the $F$ frequencies, resulting in an $F \times D$ representation ${\bf Z}_t$. To form the embeddings, ${\bf Z}$ then passes through a $\mathrm{tanh}$ non-linearity, and unit-length normalization independently for each T-F bin. Concatenating across time results in the $TF \times D$ embedding matrix ${\bf V}$ as used in Eq.~\ref{eq:dc}.
We extend this architecture in order to create a two-headed network, which we refer to as the ``Chimera'' network, with one head outputting embeddings as in a deep clustering network, and the other head outputting a soft mask, as in a mask-inference network.
The new mask-inference head is obtained starting with ${\bf Z}$, and passing it through $F$ fully-connected $D \times C$ mask estimation layers (e.g., softmax), one for each frequency, resulting in $C$ masks ${\bf M}^{(c)}$, one for each source.
The structure of the Chimera network is illustrated in Figure~\ref{fig:network_structure}.
\begin{figure}[t]
\centering
\includegraphics[width=7.5cm]{chimera_network_with_labels-crop.pdf}
\vspace{-.2cm}
\caption{Structure of the Chimera network.}
\vspace{-.3cm}
\label{fig:network_structure}
\end{figure}
The body of the network, up to the layer outputting ${\bf Z}$, can be trained with each head separately. For the deep clustering head, we use the objective $\mathcal{L}_{\text{DC}}$.
For the mask-inference head, we can use a classical magnitude spectrum approximation (MSA) objective \cite{huang2012singing,Weninger2014GlobalSIP12,erdogan2015phase}, defined as:
\begin{equation}
\mathcal{L}_{\text{MSA}}
= \sum_c ||{\bf R}^{(c)} - {\bf M}^{(c)} \odot {\bf S}||_2^2,
\end{equation}
where ${\bf R}^{(c)}$ denotes the magnitude of the T-F representation for the $c$-th clean source and ${\bf S}$ that of the mixture.
Although this objective function makes sense intuitively, one caveat is that the mixture magnitude ${\bf S}$ may be smaller than that of a given source ${\bf R}^{(c)}$ due to destructive interference. In this case, ${\bf M}^{(c)}$, which is between $0$ and $1$, cannot bring the estimate close to ${\bf R}^{(c)}$. As a remedy, we consider an alternative objective, denoted as masked magnitude spectrum approximation (mMSA), which approximates ${\bf R}^{(c)}$ as the output of a masking operation on the mixture using a reference mask ${\bf O}^{(c)}$, such that ${\bf O}^{(c)} \odot {\bf S}\approx {\bf R}^{(c)}$, for source $c$:
\begin{equation}
\mathcal{L}_{\text{mMSA}}
= \sum_c ||({\bf O}^{(c)} - {\bf M}^{(c)}) \odot {\bf S}||_2^2.
\end{equation}
Note that this is equivalent to a weighted mask approximation objective, using the mixture magnitude as the weights.
We can also define a global objective for the whole network as
\begin{equation}
\mathcal{L}_{\text{CHI}} = \alpha \frac{\mathcal{L}_{\text{DC}}}{T F} + (1 - \alpha) \mathcal{L}_{\text{MI}}
\end{equation}
where $\alpha \in [0,1]$ controls the relative importance of the two objectives, and the objective $\mathcal{L}_{\text{MI}}$ for the mask-inference head is either $\mathcal{L}_{\text{MSA}}$ or $\mathcal{L}_{\text{mMSA}}$. Note that here we divide $\mathcal{L}_{\text{DC}}$ by $T F$ because the deep clustering objective computes a pair-wise loss over the $(TF)^2$ pairs of T-F bins, while the spectrum approximation objective computes a loss over the $TF$ time-frequency bins.
For $\alpha=1$, only the deep clustering head gets trained together with the body, resulting in a deep clustering network. For $\alpha=0$, only the mask-inference head gets trained together with the body, resulting in a mask-inference network.
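A hedged NumPy sketch of the combined objective is shown below;
the array shapes are illustrative assumptions, and the MI head
here uses the mMSA variant.

\begin{verbatim}
# Sketch of L_CHI. Assumed shapes: V (T*F, D) embeddings,
# Y (T*F, C) labels, M and O (C, T, F) estimated/reference
# masks, S (T, F) mixture magnitude.
import numpy as np

def chimera_loss(V, Y, M, O, S, alpha):
    TF = V.shape[0]
    l_dc = (np.linalg.norm(V.T @ V) ** 2       # deep clustering
            - 2 * np.linalg.norm(V.T @ Y) ** 2 # head, low-rank form
            + np.linalg.norm(Y.T @ Y) ** 2)
    l_mi = np.sum(((O - M) * S[None]) ** 2)    # mMSA head
    return alpha * l_dc / TF + (1.0 - alpha) * l_mi
\end{verbatim}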
At test time, if both heads have been trained, either can be used. The mask-inference head directly outputs the T-F masks, while the deep clustering head outputs embeddings on which we perform clustering using, e.g., K-means.
\section{Evaluation and discussion}
\label{sec:typestyle}
\subsection{Datasets}
\label{sec:datasets}
For training and evaluation purposes, we built a remixed version of the DSD100 dataset for SiSEC \cite{DSD100}, which we refer to as DSD100-remix. For evaluation only, we also report results on two other datasets: the hidden iKala dataset for the MIREX submission, and the public iKala dataset for our newly proposed models.
The DSD100 dataset includes synthesized mixtures and the corresponding original sources from 100 professionally produced and mixed songs.
To build the training and validation sets of DSD100-remix, we use the DSD100 development set (50 songs). We design a simple energy-level-based detector \cite{ramirez2007voice} to remove silent parts in both the vocal and accompaniment tracks, so that the vocals and accompaniment fully overlap in the generated mixtures. After that, we downsample the tracks from 44.1~kHz to 16~kHz to reduce computational cost, and then randomly mix the vocals and accompaniment together at 0~dB SNR, creating a 15 h training set and a 0.5 h validation set.
We build the evaluation set of DSD100-remix from the DSD100 test set using a similar procedure, generating 50 pieces (one for each song) of fully-overlapped recordings with 30 seconds length each.
The input feature we use is calculated by the short-time Fourier transform (STFT) with a 512-point window size and a 128-point hop size. We use a 150-dimensional mel-filterbank to reduce the input feature dimension. The first-order delta of the mel-filterbank spectrogram is concatenated to the input feature.
We used the ideal binary mask calculated on the mel-filterbank spectrogram as the target $Y$ matrix.
\subsection{System architecture}
The Chimera network's body is comprised of 4 bi-directional long-short term memory (BLSTM) layers with 500 hidden units in each layer, followed by a linear fully-connected layer with a $D$-dimension vector output for each of the frame's $F=150$ T-F bins. Here, we use $D=20$ because it produced the best performance in a speech separation task~\cite{hershey2016deep}. In the mask-inference head, we set $C=2$ for the singing voice separation task, and use $\mathrm{softmax}$ as the non-linearity.
We use the rmsprop algorithm \cite{tieleman2012lecture} as optimizer and select the network with the lowest loss on the validation set.
At test time, we split the signal into fixed-length segments, on which we run the network independently. We also tried running the network on the full input feature sequence, as in \cite{hershey2016deep}, but this led to worse performance, probably due to the mismatch in context size between training and test time. The mask-inference head of the network directly generates T-F masks. For deep clustering, the masks are obtained by applying K-means on the embeddings for the whole signal.
We apply the mask for each source to the mel-filterbank spectrogram of the input, and recover the source using an inverse mel-filterbank transform and inverse-STFT with the mixture phase, followed by upsampling.
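As an illustration of the test-time clustering step, the
following sketch (using scikit-learn, an implementation choice of
ours) turns the embedding matrix into binary masks:

\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def masks_from_embeddings(V, C=2):
    # V: (T*F, D) embeddings; returns C binary masks of length T*F.
    labels = KMeans(n_clusters=C, n_init=10).fit_predict(V)
    return np.stack([(labels == c).astype(float) for c in range(C)])
\end{verbatim}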
\subsection{Results for the MIREX submission}
\label{sec:results}
We first report on the system submitted to the Singing Voice Separation task of the Music Information Retrieval Evaluation eXchange (MIREX 2016) \cite{singing2016results}. That system only contains the deep clustering part, which corresponds to $\alpha=1$ in the hybrid system. In the MIREX system, dropout layers with probability $0.2$ were added between each feed-forward connection, and sequence-wise batch normalization \cite{laurent2015batch} was applied in the input-to-hidden transformation in each BLSTM layer. Similarly to \cite{isik2016single}, we also applied a curriculum learning strategy \cite{bengio2009curriculum}, where we first train the network on segments of 100 frames, then train on segments of 500 frames. As distinguishing between vocals and accompaniment was part of the task, we used a crude rule-based approach: the mask for which the fraction of non-zero entries lying in the low frequency range ($<200$~Hz) is more than a half is used as the accompaniment mask, and the other as the vocals mask.
The hidden iKala dataset has been used as the evaluation dataset throughout MIREX 2014-2016, so we can report, as shown in Table~\ref{MIREX}, the results from the past three years, comparing the best two systems in each year's competition to our submitted system for 2016. The official MIREX results are reported in terms of global normalized SDR (GNSDR), global SIR (GSIR), global SAR (GSAR) \cite{singing2016mirex}.
Due to time limitations at the time of the MIREX submission, we submitted a system that we had trained using the DSD100-remix dataset described in Section~\ref{sec:datasets}. However, as mentioned in the MIREX description, the DSD100 dataset is different from both the hidden and public parts of the iKala dataset \cite{singing2016mirex}. Nonetheless, our system not only won first place in MIREX 2016 but also outperformed the best systems from past years, even without training on the better-matched public iKala dataset, showing the efficacy of deep clustering for robust music separation. Note that the hidden iKala dataset is unavailable to the public, and it is thus unfortunately impossible to evaluate here what the performance of our system would be when trained on the public iKala data.
\begin{table}[tbp]
\centering
\caption{Evaluation metrics for different systems in MIREX 2014-2016 on the hidden iKala dataset. \textit{V} denotes vocals and \textit{M} music.}\vspace{0.2cm}\label{MIREX}
\begin{tabular}{c|c|c|c|c|c|c}
\noalign{\hrule height 1.0pt}
& \multicolumn{2}{c|}{GNSDR} & \multicolumn{2}{c|}{GSIR} & \multicolumn{2}{c}{GSAR}\\
\hline
& V & M & V & M & V & M \\
\hline
\textbf{DC} &6.3&11.2&14.5&25.2&10.1&\phantom{1}7.3\\
\hline
MC2 \cite{singing2016results}&5.3&\phantom{1}9.7&10.5&19.8&11.2&\phantom{1}6.1 \\
MC3 \cite{singing2016results}&5.5&\phantom{1}9.8&10.8&19.6&11.2&\phantom{1}6.3\\
FJ1 \cite{fan2016singing}&6.8&10.1&13.3&11.2&11.5&10.0\\
FJ2 \cite{fan2016singing}&6.3&\phantom{1}9.9&13.7&11.7&10.6&\phantom{1}9.1\\
IIY1 \cite{ikemiya2016singing}&4.2&\phantom{1}7.8&15.5&12.4&\phantom{1}7.7&\phantom{1}5.4\\
IIY2 \cite{ikemiya2016singing}&4.5&\phantom{1}7.9&13.3&14.3&\phantom{1}8.6&\phantom{1}5.0\\
\noalign{\hrule height 1.0pt}
\end{tabular}
\end{table}
\subsection{Results for the proposed hybrid system}
\label{sec:hybrid_results}
We now turn to the results using the Chimera networks.
During the training phase, we use 100 frames of input features to form fixed duration segments. We train the Chimera network in three different regimes: a pure deep clustering regime ($\text{DC}$, $\alpha=1$), a pure mask-inference regime ($\text{MI}$, $\alpha=0$), and a hybrid regime ($\text{CHI}_\alpha$, $0<\alpha<1$). All networks are trained from random initialization, and no training tricks mentioned above for the MIREX system are added.
We report results on the DSD100-remix test set, which is matched to the training data, and the public iKala dataset, which is not.
By design, deep clustering provides one output for each source, but the ordering of the separated outputs is arbitrary. Therefore, the scores are computed using the best permutation between references and estimates at the file level.
Table \ref{tab:all} shows the results with the MSA objective in the MI head.
We compute the source-to-distortion ratio (SDR), defined as scale-invariant SNR \cite{isik2016single}, for each test example, and report the length-weighted average over each test set of the improvement of SDR in the estimate with respect to that in the mixture (SDRi).
As can be seen in the results, $\text{MI}$ performs competitively with $\text{DC}$ on DSD100-remix, however DC performs significantly better on the public iKala data. This shows the better generalization and robustness of the deep clustering method in cases where the test and training set are not matched. The best performance is achieved by $\text{CHI}_\alpha$-$\text{MI}$, the MI head of the Chimera network. Interestingly, the performance of the DC head does not change significantly for the values of $\alpha$ tested. This suggests that joint training with the deep clustering objective allows the body of the network to learn a more powerful representation than using the mask-inference objective alone; this representation is then best exploited by the mask-inference head thanks to its signal approximation objective.
\begin{table}[tbp]
\centering
\caption{SDRi (dB) on the DSD100-remix and the public iKala datasets. The suffix after $\text{CHI}_\alpha$ denotes which head of the Chimera network is used for generating the masks.
}\vspace{0.2cm}\label{tab:all}
\begin{tabular}{c|c|c|c|c}
\noalign{\hrule height 1.0pt}
& \multicolumn{2}{c|}{DSD100-remix} & \multicolumn{2}{c}{iKala}\\
\hline
& \hspace{.4cm}V\hspace{.4cm} & \hspace{.4cm}M\hspace{.4cm} & \hspace{.4cm}V\hspace{.4cm} & \hspace{.4cm}M\hspace{.4cm}\phantom{}\\
\noalign{\hrule height 1.0pt}
$\text{DC}$ &4.9&7.2& 6.1 & 10.0\\
\hline
$\text{MI}$ &4.8&6.7&5.2&\phantom{1}8.9 \\
\hline
$\text{CHI}_{0.1}$-DC &4.8&7.2&6.0&\phantom{1}9.7 \\
$\text{CHI}_{0.1}$-MI &\bf{5.5}&\bf{7.8}&\bf{6.4}&\bf{10.5} \\
$\text{CHI}_{0.5}$-DC &4.7&7.1&5.9&\phantom{1}9.9 \\
$\text{CHI}_{0.5}$-MI &\bf{5.5}&\bf{7.8}&6.3&\bf{10.5} \\
\noalign{\hrule height 1.0pt}
\end{tabular}
\end{table}
\begin{table}[tbp]
\centering
\caption{SDRi (dB) on the DSD100-remix and the public iKala datasets with various objectives in the MI head and embedding dimensions $D$.
}\vspace{0.2cm}\label{tab:obj}
\begin{tabular}{c|c|c|c|c|c}
\noalign{\hrule height 1.0pt}
\multicolumn{2}{c|}{} & \multicolumn{2}{c|}{DSD100-remix} & \multicolumn{2}{c}{iKala}\\
\hline
$\mathcal{L}_{\text{MI}}$& $D$ & \hspace{.4cm}V\hspace{.4cm} & \hspace{.4cm}M\hspace{.4cm} & \hspace{.4cm}V\hspace{.4cm} & \hspace{.4cm}M\hspace{.4cm}\phantom{}\\
\noalign{\hrule height 1.0pt}
$\text{\phantom{m}MSA}$ & 20 &\bf{5.5}&7.8&6.4&10.5 \\
$\text{mMSA}$& 20 & 5.4 & 7.8 & 6.5 & 10.7\\
$\text{mMSA}$& 10 & \bf{5.5} & \bf{7.9} & \bf{6.6} & \bf{10.8} \\
\noalign{\hrule height 1.0pt}
\end{tabular}
\end{table}
We now look at the influence of the objective used in the MI head. For the $\text{mMSA}$ objective, we use the Wiener-like mask \cite{erdogan2015phase}, since it is shown to have the best performance among oracle masks computed from source magnitudes. As shown in Table~\ref{tab:obj}, training a hybrid $\text{CHI}_{0.1}$ network using the $\text{mMSA}$ objective leads to slightly better MI performance overall compared to $\text{MSA}$.
We also consider varying the embedding dimension $D$, and find that reducing it from $D=20$ to $D=10$ leads to further improvements.
Because the output of the linear layer ${\bf Z}_t$ has dimension $F\times D$, decreasing $D$ also leaves room to increase the number of frequency bins $F$.
Table \ref{tab:feature} shows the results for various input features. We design various features by varying the sampling rate, the window/hop size in the STFT, and the dimension of the mel-frequency filterbanks. All networks are trained in the same hybrid regime as above, with the mMSA objective in the MI head and an embedding dimension $D=10$. For simplicity, we do not concatenate first-order deltas into the input feature. We can see from the results that a higher sampling rate, a larger STFT window size, and more mel-frequency bins result in better performance.
\begin{table}[tbp]
\centering
\caption{SDRi (dB) on the DSD100-remix and the public iKala datasets with various input features.
}\vspace{0.2cm}\label{tab:feature}
\begin{tabular}{c|c|c|c|c}
\noalign{\hrule height 1.0pt}
& \multicolumn{2}{c|}{DSD100-remix} & \multicolumn{2}{c}{iKala}\\
\hline
& \hspace{.3cm}V\hspace{.3cm} & \hspace{.3cm}M\hspace{.3cm} & \hspace{.3cm}V\hspace{.3cm} & \hspace{.3cm}M\hspace{.3cm}\phantom{}\\
\noalign{\hrule height 1.0pt}
$\text{16k-1024-256-mel150}$ & 5.5 & 7.9 & 6.6 & 10.6\\
$\text{16k-1024-256-mel200}$ & 5.5 & 7.9 & 6.9 & 10.9\\
$\text{22k-1024-256-mel200}$ & 5.9 & 7.9 & 7.2 & 10.7 \\
$\text{22k-2048-512-mel300}$ & \bf{6.1} & \bf{8.1} & \bf{7.4} & \bf{11.0} \\
\noalign{\hrule height 1.0pt}
\end{tabular}
\end{table}
\begin{figure}[ht]\vspace{0.0cm}
\centering
\includegraphics[width=\columnwidth]{specs.png}
\caption{Example of separation results for a 4-second excerpt from file $\text{45378\_chorus}$ in the public iKala dataset.}
\label{fig:spec}\vspace{0.0cm}
\end{figure}
\section{Conclusion}
In this paper, we investigated the effectiveness of a deep clustering model on the task of singing voice separation. Although deep clustering was originally designed for separating speech mixtures, we showed that this framework is also suitable for separating sources in music signals. Moreover, by jointly optimizing deep clustering with a classical mask-inference network, the new hybrid network outperformed both the plain deep clustering network and the mask-inference network. Experimental results confirmed the robustness of the hybrid approach in mismatched conditions.
\vspace{.2cm}
\noindent Audio examples are available at \cite{chimera_exp}.
\section{Acknowledgement}
The work of Yi Luo, Zhuo Chen, and Nima Mesgarani was funded by a grant from the National Institute of Health, NIDCD, DC014279, National Science Foundation CAREER Award, and the Pew Charitable Trusts.
\vfill\pagebreak
\bibliographystyle{IEEEbib}
| {'timestamp': '2017-06-16T02:07:36', 'yymm': '1611', 'arxiv_id': '1611.06265', 'language': 'en', 'url': 'https://arxiv.org/abs/1611.06265'} |
\section{Introduction}\label{sec:Intro}
We consider the problem of transmitting equiprobable messages over several uses of an additive white Gaussian noise (AWGN) channel. We consider different power restrictions at the transmitter: (i) equal power constraint (all the codewords in the transmission code have equal energy); (ii) maximal power constraint (the energy of all the codewords is below a certain threshold); and, (iii) average power constraint (while some codewords may violate the threshold, the power constraint is satisfied in average).
Given its practical importance, the AWGN channel under a power limitation has been widely studied in the literature.
In his seminal 1948 work~\cite{Shannon48}, Shannon established the capacity of this channel, which characterizes the highest transmission rate under which reliable communication is possible with arbitrarily long codewords. A more refined asymptotic analysis follows from the study of the reliability function, which characterizes the exponential dependence between the error probability and the length of the codewords for a certain transmission rate.
For the power-constrained AWGN channel, Shannon obtained the reliability function for rates close to the channel capacity~\cite{Shannon59}. Both the capacity~\cite{Shannon48} and the reliability function~\cite{Shannon59} of the AWGN channel do not depend on the specific power restriction considered at the transmitter.
We conclude that equal, maximal and average power constraints can be cast as asymptotically equivalent.\footnote{Note, however, that some asymptotic differences still exist. For instance, the strong-converse error exponent (relevant for rates above capacity) under equal and maximal power constraints is strictly positive, while it is equal to zero under an average-power constraint~\cite[Sec. 4.3]{PolThesis}.}
While the focus of the work by Shannon \cite{Shannon59} was on the reliability function, he also obtained both upper and lower bounds on the error probability of the best codebook with a certain blocklength $n$. The proof of these bounds is based on geometric arguments applied to codewords lying on the surface of an $n$-dimensional sphere~\cite[Eq.~(20)]{Shannon59} (i.e., satisfying an equal power constraint) and it is then extended to maximal and average power limitations~\cite[Sec.~XIII]{Shannon59}.
An alternative proof technique in the derivation of converse bounds to the error probability is based on hypothesis testing.
The channel coding error probability can be related to that of a surrogate binary hypothesis test between the distribution induced by the codebook and a carefully chosen auxiliary distribution~\cite{tit16a}. An application of this technique was used in~\cite{Shannon67I} to obtain the sphere-packing bound on the channel coding reliability function for general channels (see also \cite{Haroutunian68, sason2008, altug2014, Nakiboglu19-SP} for alternative derivations and refinements). To obtain the sphere-packing exponent, the hypothesis testing technique needs to be applied with a specific auxiliary distribution, denoted as the \textit{exponent-achieving output distribution} (analogously to the \textit{capacity-achieving output distribution} that follows from the channel capacity analysis).
Also using the hypothesis testing technique, Polyanskiy {\em et al.} obtained a fundamental lower bound to the error probability in the finite blocklength regime~\cite[Th. 27]{Pol10}. This result is usually referred to as \textit{meta-converse} since several converse bounds in the literature can be recovered from it. The standard application of the meta-converse bound for a specific channel requires the choice of the auxiliary distribution used in the hypothesis test.
To obtain more intuition in the structure of this auxiliary distribution, Polyanskiy analyzed in~\cite{Pol13} the properties of the solution to the minimax optimization problem in~\cite[Th. 27]{Pol10}. Exploiting the existing symmetries in the AWGN channel with an equal power constraint, \cite[Sec.~VI]{Pol13} shows that, for a certain non-product auxiliary distribution, the meta-converse bound coincides with Shannon lower bound~\cite[Eq.~(20)]{Shannon59}. Therefore, Shannon lower bound is still the tightest finite-length bound for the AWGN channel with an equal power constraint and this bound is often used as a benchmark for practical codes (see, e.g., \cite{lazic98, dolinar1998, Via99, shi2007, PolThesis, vazquez2016multiclass}).
While the choice for the auxiliary distribution in \cite[Sec.~VI]{Pol13} yields the tightest meta-converse bound, the resulting expression is still difficult to evaluate. For an auxiliary distribution equal to the capacity achieving output distribution, the meta-converse particularizes to \cite[Th. 41]{Pol10}. This bound is slightly weaker than Shannon's \cite[Eq.~(20)]{Shannon59} and can be extended to maximal and average power constraints using the techniques in~\cite[Sec.~XIII]{Shannon59} (see also \cite[Lem. 39]{Pol10}).
In this work, we complement the existing results with direct lower bounds on the error probability of codes for the AWGN channel under maximal and average power limitations at the transmitter. In particular, the main contributions in this article are the following:
\begin{enumerate}
\item We provide an exhaustive characterization of the error probability of a binary hypothesis test between two Gaussian distributions. The error probability of this test corresponds to the meta-converse bound for an equal power constraint and an auxiliary independent and identically distributed (i.i.d.) zero-mean Gaussian distribution (not necessarily capacity achieving).
\item Using this characterization, we optimize the meta-converse bound over input distributions satisfying maximal and average power constraints. The resulting hypothesis testing lower bound holds directly under a maximal power limitation. For an average power limitation, we obtain that the hypothesis testing bound holds directly if the codebook size is below a certain threshold and that it requires a simple transformation above this threshold.
\item We propose a saddlepoint expansion to estimate the error probability of a hypothesis test between two i.i.d. Gaussian distributions. This expansion yields a simple expression that can be used to evaluate \cite[Th. 41]{Pol10} and the bounds for maximal and average power constraints presented in this work.
\item We provide several numerical examples that compare the new bounds with previous results in the literature. We show that considering an exponent-achieving auxiliary distribution under equal, maximal and average power constraints yields tighter bounds in general.
\end{enumerate}
Given the difficulty of computing \cite[eq. (20)]{Shannon59} (see, e.g.,~\cite{Slepian63, Via99, Val04, Sas08}), the bounds proposed are not only tighter (for maximal and average power constraints) but also simpler to evaluate than the original lower bound by Shannon. While the results obtained are specific for the AWGN channel, the techniques used in this work can in principle be extended to other scenarios requiring the optimization of the meta-converse bound over input distributions.
The organization of the manuscript is as follows. \refS{model} presents the system model and a formal definition of the power constraints. This section also introduces the meta-converse bound that will be used in the remainder of the article. \refS{equal} compares Shannon lower bound with the meta-converse for the AWGN channel with an equal power constraint. This section provides a geometric interpretation of \cite[Th. 41]{Pol10} analogous to that presented in \cite{Shannon59}. Sections \ref{sec:maximal} and \ref{sec:average} introduce new bounds for maximal and average power constraints, respectively. The evaluation of the proposed bounds is studied in \refS{computation}. \refS{numerical} presents a numerical comparison of the bounds with previous results and studies the effect of considering capacity and exponent achieving auxiliary distributions.
Finally, \refS{discussion} concludes the article by discussing the main contributions of this work.
\section{System Model and Preliminaries}\label{sec:model}
We consider the problem of transmitting $M$ equiprobable messages over $n$ uses of an AWGN channel with noise power~$\sigma^2$. Specifically, we consider a channel $W \triangleq \Pyx$ which, for an input $\x=(x_1,x_2,\ldots,x_n)\in\Xc$ and output $\y=(y_1,y_2,\ldots,y_n)\in\Yc$, with $\Xc=\Yc=\RR^n$, has a probability density function (pdf)
\begin{align}\label{eqn:Gaussian-channel}
w(\y|\x) = \prod_{i=1}^{n} \varphi_{x_i,\sigma}(y_i),
\end{align}
where $\varphi_{\mu,\sigma}(\cdot)$ denotes the pdf of the Gaussian distribution,
\begin{align}
\varphi_{\mu,\sigma}(y)\triangleq\frac{1}{\sqrt{2 \pi}\sigma} e^{-\frac{(y-\mu)^2}{2\sigma^2}}.\label{eqn:Gaussian-pdf}
\end{align}
In our communications system, the source generates a message $v\in\{1,\ldots,M\}$ randomly with equal probability. This message is then mapped by the encoder to a codeword $\cc_v$ using a codebook~$\Cc \triangleq \bigl\{\cc_1, \ldots,\cc_M \bigr\}$, and $\x = \cc_v$ is transmitted over the channel. Then, based on the channel output $\y$, the decoder guesses the transmitted message $\hat{v}\in\{1,\ldots,M\}$. In the following we shall assume that maximum likelihood (ML) decoding is used at the receiver.\footnote{Since the ML decoder minimizes the error probability, lower bounds to ML decoding error probability also apply to other decoding schemes.}
We define the average error probability of a codebook $\Cc$ as
\begin{align}
\Pe(\Cc) \triangleq \Pr \{\hat{V} \neq V\},
\end{align}
where the underlying probability is induced by the chain of source, encoder, channel, and ML decoder.%
\footnote{All the results in this article are derived under the average error probability formalism.
For the maximal error probability, defined as
$\eps_{\max}(\Cc) \triangleq \max_{v\in\{1,\ldots,M\}} \Pr \{\hat{V} \neq V\,|\,V=v\}$,
it holds that $\eps_{\max}(\Cc) \geq \Pe(\Cc)$ and lower bounds on $\Pe(\Cc)$
also apply in this case.}
\subsection{Power constrained codebooks}
The focus of this work is on obtaining lower bounds to the error probability $\Pe(\Cc)$ for codebooks $\Cc \triangleq \bigl\{\cc_1, \ldots,\cc_M \bigr\}$ satisfying the following power constraints.
\begin{enumerate}
\item Equal power constraint:
\begin{align}
\Fc_{\text{e}}(n,M,\Upsilon) \triangleq
\Bigl\{ \Cc \;\big|\; \|\cc_i\|^2 = n \Upsilon,\quad i=1,\ldots,M \Bigr\}.
\end{align}
\item Maximal power constraint:
\begin{align}
\Fc_{\text{m}}(n,M,\Upsilon) \triangleq \Bigl\{ \Cc \;\big|\; \|\cc_i\|^2 \leq n\Upsilon,\quad i=1,\ldots,M \Bigr\}.
\end{align}
\item Average power constraint:
\begin{align}
\Fc_{\text{a}}(n,M,\Upsilon) \triangleq \Bigl\{ \Cc \;\big|\; \tfrac{1}{M}\sum\nolimits_{i=1}^M\|\cc_i\|^2 \leq n\Upsilon \Bigr\}.
\end{align}
\end{enumerate}
Clearly, $\Fc_{\text{e}} \subset \Fc_{\text{m}} \subset \Fc_{\text{a}}$. Then, any lower bound on the error probability of an average power constrained codebook will also hold in the maximal power constraint setting, and any bound for a maximal power constraint also holds under an equal power constraint.
The next result relates the minimum error probability in the three scenarios considered via simple inequalities.
For fixed $n,M,\Upsilon$, we define the minimum error probability under a power constraint $i \in \{\text{e}, \text{m}, \text{a}\}$ as
\begin{align}
\epsilon_i^{\star}(n,M,\Upsilon) \triangleq \min_{\Cc \in \Fc_{i}(n,M,\Upsilon)} \Pe(\Cc).
\end{align}
\begin{lemma}[{\hspace{-.1mm}\cite[Sec.~XIII]{Shannon59}, \cite[Lemma~65]{PolThesis}}]
\label{lem:relations}
For any $n,M,\Upsilon>0$, and $0<s<1$, the following inequalities hold:
\begin{align}
\quad \epsilon_{\text{e}}^{\star}(n,M,\Upsilon)
\ &\geq \ \epsilon_{\text{m}}^{\star}(n,M,\Upsilon)
\ \geq \epsilon_{\text{e}}^{\star}(n+1,M,\Upsilon),
\label{eqn:relations-1}\\
\epsilon_{\text{m}}^{\star}(n,M,\Upsilon)
\ &\geq \ \epsilon_{\text{a}}^{\star}(n,M,\Upsilon)
\ \geq \ s\epsilon_{\text{m}}^{\star}\Bigl(n, sM,\tfrac{\Upsilon}{1-s}\Bigr).
\label{eqn:relations-2}
\end{align}
\end{lemma}
\begin{remark}
The relations \refE{relations-1}-\refE{relations-2} were first proposed by Shannon in \cite[Sec.~XIII]{Shannon59}.
Nevertheless, there is a typo in the last equation of \cite[Sec.~XIII]{Shannon59}, which has been corrected in \refE{relations-2}.
In \cite[Sec.~XIII]{Shannon59}, Shannon states that ``The probability of error for the new code [satisfying the maximal power constraint] cannot exceed $1/{\alpha}$ times that of the original code [satisfying the average power constraint]'' (brackets added). While this reasoning is correct, in his notation this statement translates to $P_{\text{e opt}}' \leq \frac{1}{\alpha} P_{\text{e opt}}''$ and hence $P_{\text{e opt}}'' \geq \alpha P_{\text{e opt}}'$, which does not coincide with the last equation of \cite[Sec.~XIII]{Shannon59}.
These relations were rederived in \cite[Lemma~65]{PolThesis}, where the statement of the bound corresponding to \refE{relations-2} is correct.
\end{remark}
The relations from Lemma~\ref{lem:relations} show that lower and upper bounds on the error probability under a given power constraint can be adapted to other settings via simple transformations. If we focus on converse bounds, the analysis under an equal power constraint is usually simpler. However, since the maximal power constraint and average power constraint are more relevant in practice, the transformations from Lemma~\ref{lem:relations} are often used to adapt the results derived under an equal power constraint.
While the loss incurred by using these transformations becomes negligible in the asymptotic regime, it can have a relevant impact at finite blocklengths. In Sections \ref{sec:maximal} and \ref{sec:average} we will prove direct lower bounds for maximal and average power constraints, without resorting to the transformations from Lemma~\ref{lem:relations}.
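To make the use of these transformations concrete, the following sketch converts any numerical lower bound valid under the maximal power constraint into one valid under the average power constraint via the last inequality in \refE{relations-2}, optimizing the free parameter $s$ over a grid. Here \texttt{eps\_m\_lb} is a hypothetical placeholder for a bound such as those derived in Section~\ref{sec:maximal}; non-integer values of $sM$ are assumed to be handled by the bound itself.
\begin{verbatim}
import numpy as np

def eps_a_lb(eps_m_lb, n, M, upsilon, grid=999):
    # eps_a >= sup_{0<s<1} s * eps_m(n, s*M, Upsilon/(1-s));
    # eps_m_lb is a hypothetical stand-in for a maximal-power bound.
    best = 0.0
    for s in np.linspace(1e-3, 1.0 - 1e-3, grid):
        if s * M >= 2:                 # keep a non-trivial code size
            best = max(best,
                       s * eps_m_lb(n, s * M, upsilon / (1.0 - s)))
    return best
\end{verbatim}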
\subsection{Meta-converse bound}
\label{sec:metaconverse-bound}
In \cite{Pol10}, Polyanskiy {\em et al.} proved that the error probability of a binary hypothesis test with appropriately chosen parameters can be used to lower bound the error probability $\Pe(\Cc)$ of any code over a given channel $W$. In particular, \cite[Th. 27]{Pol10} shows that
\begin{align}
\Pe(\Cc)
&\geq \infp_{P\in\Pc} \sup_{Q} \left\{
\alpha_{\frac{1}{M}} \bigl(PW, P \times Q \bigr)\right\},\label{eqn:metaconverse}
\end{align}
where $\Pc$ is the set of distributions over the input alphabet $\Xc$ satisfying a certain constraint, $Q$ is an auxiliary distribution over the output alphabet $\Yc$ which is not allowed to depend on the input $\x$, and where $\alpha_{\beta}\left(PW, P \times Q\right)$ denotes the minimum type-I error for a maximum type-II error $\beta\in[0,1]$ in a binary hypothesis testing problem between the distributions $PW$ and $P \times Q$. Formally, for two distributions $A$ and $B$ defined over an alphabet $\Zc$, the minimum type-I error for a maximum type-II error $\beta\in[0,1]$ is given by
\begin{align}
\alpha_{\beta}(A, B)
\triangleq \inf_{\substack{0\leq T \leq 1:\\ \Ex_{B}[T(Z)] \leq \beta}} \Bigl\{ 1- \Ex_{A}[T(Z)] \Bigr\},\label{eqn:bht-alpha}
\end{align}
where $T:\Zc\to[0,1]$ and $\Ex_{P}[\cdot]$ denotes the expectation operator with respect to the random variable $Z\sim P$.
The bound \refE{metaconverse} is usually referred to as the \textit{meta-converse bound} since several converse bounds in the literature can be recovered from it via relaxation~\cite{Pol10}.
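For intuition, the trade-off \refE{bht-alpha} can be approximated by Monte Carlo for the pair of product Gaussian measures that will play a central role below. The sketch sets the Neyman-Pearson threshold from an empirical quantile of the log-likelihood ratio under the second measure and ignores the randomization at the threshold, which is immaterial here since the likelihood ratio is continuous; all parameter values are illustrative.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def log_lr(y, mu, sigma, theta):
    # log-likelihood ratio between phi_{mu,sigma}^n and phi_{0,theta}^n
    return (norm.logpdf(y, mu, sigma)
            - norm.logpdf(y, 0.0, theta)).sum(axis=1)

def alpha_beta_mc(beta, n, mu, sigma, theta, samples=200_000):
    yB = rng.normal(0.0, theta, (samples, n))   # null measure B
    yA = rng.normal(mu, sigma, (samples, n))    # alternative A
    # NP threshold: reject B when the LLR exceeds the (1-beta)-quantile
    t = np.quantile(log_lr(yB, mu, sigma, theta), 1.0 - beta)
    return np.mean(log_lr(yA, mu, sigma, theta) <= t)  # type-I error

print(alpha_beta_mc(beta=1.0/16, n=4, mu=1.0,
                    sigma=1.0, theta=np.sqrt(2.0)))
\end{verbatim}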
The results in this work are based on the following inequality chain, which always holds:
\begin{align}
\infp_{P\in\Pc} \sup_{Q} \left\{
\alpha_{\frac{1}{M}} \bigl(PW, P \times Q \bigr)\right\} &\geq \sup_{Q} \infp_{P\in\Pc} \left\{
\alpha_{\frac{1}{M}} \bigl(PW, P \times Q \bigr)\right\}\label{eqn:metaconverse-maxmin}\\
&\geq \infp_{P\in\Pc} \left\{
\alpha_{\frac{1}{M}} \bigl(PW, P \times Q \bigr)\right\}.\label{eqn:metaconverse-fixedQ}
\end{align}
Here, the first step follows from the max-min inequality, and the second is the result of fixing the auxiliary distribution $Q$.
The properties of the exact minimax solution to the optimizations in the left-hand side of \refE{metaconverse-maxmin} are studied in~\cite{Pol13}.
Under mild assumptions, \refE{metaconverse-maxmin} holds with equality and a saddle point exists~\cite[Sec.~V]{Pol13}. Therefore, in practice it is possible to fix the auxiliary distribution $Q$ in \refE{metaconverse} and still obtain tight lower bounds. However, the minimization needs to be carried out over all input probability distributions $P$ (not necessarily product) satisfying the constraint $P\in \Pc$.
In the following sections we consider the optimization of the meta-converse bound over input distributions for the AWGN channel under equal, maximal and average power constraints and a certain auxiliary distribution $Q$.
\section{Lower Bounds for Equal Power Constraints}\label{sec:equal}
In this section we briefly discuss the results from \cite{Shannon59}, \cite{Pol10} and \cite{Pol13}. The bounds presented here apply for codes $\Cc \in \Fc_{\text{e}}(n,M,\Upsilon)$ satisfying an equal power constraint, and they will be relevant in the sequel.
\subsection{Shannon cone-packing bound} \label{sec:equal-shannon}
Let $\theta$ be the half-angle of an $n$-dimensional cone with vertex at the origin and with axis going through the vector $\x= (1,\ldots,1)$. We let $\Phi_n(\theta, \bar\sigma^2)$ denote the probability that such a vector is moved outside this cone by the effect of i.i.d. Gaussian noise with variance $\bar\sigma^2$ in each dimension.
\begin{theorem}[{\hspace{-.1mm}\cite[Eq. (20)]{Shannon59}}]
Let $\theta_{n,M}$ denote the half-angle of a cone with solid angle equal to $\Omega_n/M$, where $\Omega_n$ is the surface of the $n$-dimensional hypersphere. Then, the error probability of an equal-power constrained code satisfies
\begin{align} \label{eqn:shannon_lower_bound}
\epsilon_{\text{e}}^{\star}(n,M,\Upsilon) \geq \Phi_n\biggl(\theta_{n,M},\frac{\sigma^2}{\Upsilon} \biggr).
\end{align}\label{thm:shannon_lower_bound}
\end{theorem}
The derivation of this bound follows from deforming the optimal decoding regions, which for codewords lying on the surface of a sphere correspond to pyramids, into cones of the same volume (see \cite[Fig.~1]{Shannon59}) and analyzing the resulting error probability.
Given this geometric interpretation, Theorem~\ref{thm:shannon_lower_bound} is often referred to as cone-packing bound. While the resulting expression is conceptually simple and accurate for low SNRs and relatively short codes~\cite{Sason06}, it is difficult to evaluate. Approximate and exact computation of this bound is treated, e.g., in~\cite{Val04, Sas08}.
\subsection{Meta-converse bound for the AWGN channel}
\label{sec:equal-metaconverse}
We consider now the meta-converse bound \refE{metaconverse} for the equal power constrained AWGN channel.
Solving the minimax optimization in the right-hand side of \refE{metaconverse}, this bound recovers \refE{shannon_lower_bound}~\cite[Sec. VI.F]{Pol13}.
Indeed, for an equal power constraint the codewords $\cc_i$ are restricted to lie on the surface of a sphere of squared radius $n \Upsilon$. The optimal decoder depends only on the direction of the received sequence, not on its norm, and it can operate over the equivalent channel from $\tilde\x = \frac{\x}{\sqrt{n\Upsilon}} \in\tilde\Xc $ to $\tilde{\y} = \frac{\y}{\|\y\|}\in\tilde\Yc$, where $\tilde\Xc=\tilde\Yc=\SS^{n-1}$ denotes the $(n-1)$-dimensional unit sphere centered at the origin.
Applying the meta-converse bound \refE{metaconverse} to the random map $W = P_{\tilde\Y|\tilde\X}$, we obtain that both the optimizing $P$ and $Q$ correspond to the uniform distributions on $\SS^{n-1}$~\cite[Sec. VI.F]{Pol13}.
Mapping this result back to the original channel $W = P_{\Y|\X}$ shows that the tightest bound that can be obtained from the meta-converse \refE{metaconverse} coincides with the Shannon cone-packing bound \refE{shannon_lower_bound}. As with \refE{shannon_lower_bound}, the resulting expression is difficult to evaluate.
As discussed in \refS{metaconverse-bound}, the meta-converse bound \refE{metaconverse} can be weakened by fixing the auxiliary distribution $Q$.
For any input distribution lying on the surface of an $n$-dimensional hypersphere of squared radius $n\Upsilon$ (equal power constraint) and an auxiliary distribution $Q$ that is invariant under rotations around the origin, it holds that~\cite[Lem.~29]{Pol10}
\begin{align}\label{eqn:mc-spherical-symmetry}
\alpha_{\frac{1}{M}} \bigl(PW, P \times Q \bigr)
= \alpha_{\frac{1}{M}} \bigl( \varphi^n_{\sqrt{\Upsilon},\sigma}, Q \bigr).
\end{align}
In \cite[Sec.~III.J.2]{Pol10}, Polyanskiy {\em et al.} fixed the auxiliary distribution $Q$ to be an i.i.d. Gaussian distribution with zero-mean and variance $\theta^2$, with pdf
\begin{align}\label{eqn:Q-theta-def}
q(\y) = \prod_{i=1}^{n} \varphi_{0,\theta}(y_i),
\end{align}
to obtain the following result.
\begin{theorem}[{\hspace{-.1mm}\cite[Th. 41]{Pol10}}]
Let $\theta^2=\Upsilon+\sigma^2$. The error probability of an equal-power constrained code satisfies
\begin{align} \label{eqn:PPV_lower_bound}
\epsilon_{\text{e}}^{\star}(n,M,\Upsilon) \geq \alpha_{\frac{1}{M}} \bigl( \varphi^n_{\sqrt{\Upsilon},\sigma}, \varphi^n_{0,\theta} \bigr).
\end{align}\label{thm:PPV_lower_bound}
\end{theorem}
This expression admits a parametric form involving two Marcum-$Q$ functions (see Proposition~\ref{prop:alpha-beta-marcumQ} in Appendix~\ref{apx:f-beta-gamma}). However, for fixed rate $R\triangleq\frac{1}{n}\log_2 M$, the term $\frac{1}{M} = 2^{-n R}$ decreases exponentially with the blocklength and traditional series expansions of the Marcum-$Q$ function fail even for moderate values of $n$.
Nevertheless, in contrast with the formulation in \refE{shannon_lower_bound}, the distributions appearing in \refE{PPV_lower_bound} are i.i.d., and Laplace methods can be used to evaluate this bound (this point will be treated in Section \ref{sec:computation}).
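For small and moderate $n$, the right-hand side of \refE{PPV_lower_bound} can be evaluated directly by exploiting the identity between the Marcum-$Q$ function and the noncentral chi-square distribution: $Q_m(a,b) = \Pr[X > b^2]$ for $X$ noncentral chi-square with $2m$ degrees of freedom and noncentrality $a^2$. The following sketch (illustrative; SciPy assumed) uses the parametric pair $\alpha(t)=Q_{\frac{n}{2}}\bigl(\sqrt{n\Upsilon}\sigma/\delta, t/\sigma\bigr)$, $\beta(t)=1-Q_{\frac{n}{2}}\bigl(\sqrt{n\Upsilon}\theta/\delta, t/\theta\bigr)$ with $\delta=\theta^2-\sigma^2$, consistent with \refE{PjY0-marcumQ}-\refE{PjY1-marcumQ} below.
\begin{verbatim}
import numpy as np
from scipy.stats import ncx2
from scipy.optimize import brentq

def marcum_q(m, a, b):
    # Q_m(a, b) via the noncentral chi-square survival function
    return ncx2.sf(b**2, df=2*m, nc=a**2)

def ppv_bound(n, M, upsilon, sigma2):
    sigma = np.sqrt(sigma2)
    theta = np.sqrt(upsilon + sigma2)  # theta^2 = Upsilon + sigma^2
    delta = theta**2 - sigma**2
    g = np.sqrt(n * upsilon)
    beta_t = lambda t: 1.0 - marcum_q(n/2, g*theta/delta, t/theta)
    # choose t so that beta(t) = 1/M, then report alpha(t); direct
    # evaluation is only reliable for small and moderate n
    t = brentq(lambda t: beta_t(t) - 1.0/M, 1e-9, 1e3)
    return marcum_q(n/2, g*sigma/delta, t/sigma)

print(ppv_bound(n=10, M=2**4, upsilon=1.0, sigma2=1.0))
\end{verbatim}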
\subsection{Geometric interpretation of Theorem~\ref{thm:PPV_lower_bound}}
The Shannon lower bound from Theorem~\ref{thm:shannon_lower_bound} corresponds to the probability that the additive Gaussian noise moves a given codeword out of the $n$-dimensional cone centered at the codeword that roughly covers $1/M$-th of the output space. We show next that the hypothesis-testing bound from Theorem~\ref{thm:PPV_lower_bound} admits an analogous geometric interpretation.
Let $\x = \bigl(\sqrt{\Upsilon},\ldots, \sqrt{\Upsilon} \bigr)$ and let $\theta>\sigma$.
For the hypothesis test on the right-hand side of \refE{PPV_lower_bound}, the condition
\begin{align}
\log
\frac{\varphi_{\sqrt{\Upsilon},\sigma}^n(\y) }{\varphi_{0,\theta}^n(\y)} &= n\log \frac{\theta}{\sigma} + \frac{\|\y\|^2}{2 \theta^2} - \frac{\|\y-\x\|^2}{2 \sigma^2} = t
\label{eqn:metaconverse-rho-boundary-1}
\end{align}
defines the boundary of the decision region induced by the optimal Neyman-Pearson test for some $-\infty < t < \infty$. We next study the shape of this region. To this end, we first write
\begin{align}
\frac{\|\y\|^2}{2 \theta^2} -\frac{\|\y-\x\|^2}{2 \sigma^2}
&= - \frac{\theta^2-\sigma^2}{2 \sigma^2 \theta^2}
\bigl(\|\y\|^2 - 2a \langle\x,\y\rangle + a\|\x\|^2\bigr)\label{eqn:y2-yx2}\\
&=- \frac{\theta^2\!-\!\sigma^2}{2 \sigma^2 \theta^2}
\bigl(\|\y-a\x\|^2 + (a\!-\! a^2) \|\x\|^2 \bigr),\label{eqn:metaconverse-rho-boundary-2}
\end{align}
where we defined $a = \frac{\theta^2}{\theta^2-\sigma^2}$, and where $\langle\x,\y\rangle = \x^T \y$ denotes the inner product between $\x$ and $\y$.
The boundary of the decision region induced by the optimal NP test, defined by \refE{metaconverse-rho-boundary-1}, corresponds to \refE{metaconverse-rho-boundary-2} being equal to $t-n\log\frac{\theta}{\sigma}$.
Using $\|\x\|^2 = n\Upsilon$ and setting $\theta^2=\Upsilon+\sigma^2$ as in Theorem~\ref{thm:PPV_lower_bound}, this yields
\begin{align}
\biggl\|\y-\Bigl(1+\frac{\sigma^2}{\Upsilon}\Bigr)\x\biggr\|^2 = r,
\label{eqn:metaconverse-rho-boundary-3}
\end{align}
where $r = n\sigma^2 \bigl(1+\frac{\sigma^2}{\Upsilon}\bigr) \bigl( 1 - \frac{2t}{n} + \log \bigl(1+\frac{\Upsilon}{\sigma^2}\bigr)\bigr)$.
\begin{figure}[t]%
\begin{center}
\begin{subfigure}[b]{.4\linewidth}
\centering\includegraphics[width=.8\linewidth]{figs/cone-packing}%
\caption{}
\end{subfigure}%
\begin{subfigure}[b]{.4\linewidth}
\centering\includegraphics[width=.8\linewidth]{figs/sphere-packing}%
\caption{}
\end{subfigure}
\caption{Induced regions by (a) the Shannon cone-packing bound in~\refE{shannon_lower_bound}, and (b) the hypothesis-testing bound in \refE{PPV_lower_bound}, for codewords ($\bullet$) located on the shell of the sphere with squared radius $n\Upsilon$.}\label{fig:regions-bounds}
\end{center}
\end{figure}%
The region inside the boundary \refE{metaconverse-rho-boundary-3} corresponds to an $n$-dimensional sphere centered at $\bigl(1+\frac{\sigma^2}{\Upsilon}\bigr)\x$ with squared radius $r$.
Then, we can describe the lower bound in Theorem~\ref{thm:PPV_lower_bound} as the probability that the additive Gaussian noise moves a given codeword $\x$ out of the $n$-dimensional sphere centered at $\bigl(1+\frac{\sigma^2}{\Upsilon}\bigr)\x$ that roughly covers $1/M$-th of the auxiliary measure $\varphi_{0,\theta}^n$.
The ``regions'' induced by the Shannon lower bound from Theorem~\ref{thm:shannon_lower_bound} correspond to cones, while those induced by the hypothesis-testing bound in Theorem~\ref{thm:PPV_lower_bound} correspond to spheres (see Fig.~\ref{fig:regions-bounds}).
Cones are close to the optimal ML decoding regions for codewords evenly distributed on the surface of an $n$-dimensional sphere with squared radius $n\Upsilon$.\footnote{Indeed, in $n=2$ dimensions the Shannon lower bound yields the exact error probability of an $M$-PSK constellation. See Section \ref{sec:constellations} for a numerical example.} On the other hand, ``spherical regions'' allow different configurations of the codewords. This fact suggests that the hypothesis-testing bound in Theorem~\ref{thm:PPV_lower_bound} may hold beyond the equal power constraint setting. This intuition is shown to be correct in the next sections.
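The spherical picture can also be checked numerically: choosing the radius in \refE{metaconverse-rho-boundary-3} so that the sphere carries mass $1/M$ under $\varphi^n_{0,\theta}$, the probability that the noise moves $\x$ outside the sphere reproduces the value of \refE{PPV_lower_bound}, since squared distances to the center are noncentral chi-square under both measures. A sketch (illustrative parameter values) follows.
\begin{verbatim}
import numpy as np
from scipy.stats import ncx2
from scipy.optimize import brentq

n, M, upsilon, sigma2 = 10, 2**4, 1.0, 1.0
theta2 = upsilon + sigma2
a = 1.0 + sigma2 / upsilon              # centre scaling of the sphere
c2 = a**2 * n * upsilon                 # squared norm of the centre
# choose r so that the Q-mass of the sphere equals 1/M
q_mass = lambda r: ncx2.cdf(r / theta2, df=n, nc=c2 / theta2)
r = brentq(lambda r: q_mass(r) - 1.0 / M, 1e-9, 1e5)
# probability that x + noise leaves the sphere, with x on the shell:
# ||x - centre||^2 = (a-1)^2 * n * Upsilon
d2 = (a - 1.0)**2 * n * upsilon
alpha = ncx2.sf(r / sigma2, df=n, nc=d2 / sigma2)
print("sphere bound:", alpha)   # agrees with the parametric evaluation
\end{verbatim}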
\section{Lower Bounds for Maximal Power Constraints}\label{sec:maximal}
We consider now the family of codes satisfying a maximal power limitation, $\Cc\in \Fc_{\text{m}}(n,M,\Upsilon)$. As discussed in \refS{model}, Theorems \ref{thm:shannon_lower_bound} and \ref{thm:PPV_lower_bound} can be extended to the maximal power constraint via Lemma~\ref{lem:relations}. Indeed, the second inequality in \refE{relations-1} can be slightly tightened to
\begin{align}\label{eqn:equal-to-maximal}
\epsilon_{\text{m}}^{\star}(n,M,\Upsilon) \geq \epsilon_{\text{e}}^{\star}\Bigl(n+1,M,\tfrac{n\Upsilon}{n+1}\Bigr).
\end{align}
The proof of Lemma~\ref{lem:relations} (see~\cite[Sec. XIII]{Shannon59} and \cite[Lem. 39]{Pol10}) is based on extending a maximal power constrained codebook of length $n$ by adding an extra coordinate. The energy of the new codewords of length $n+1$ is then normalized to $(n+1)\Upsilon$, so that the equal power constraint is satisfied. The proof of \refE{equal-to-maximal} follows the same lines, but normalizes the energy of the codewords to $n\Upsilon$ instead of $(n+1)\Upsilon$.
Applying \refE{equal-to-maximal} to Theorem~\ref{thm:shannon_lower_bound} we obtain the following result.
\begin{corollary}\label{cor:shannon_lower_bound}
Let $\theta_{n,M}$ denote the half-angle of a cone with solid angle equal to $\Omega_n/M$, where $\Omega_n$ is the surface of the $n$-dimensional hypersphere. Then,
\begin{align} \label{eqn:cor_shannon_lower_bound}
\epsilon_{\text{m}}^{\star}(n,M,\Upsilon) \geq \Phi_{n+1}\biggl(\theta_{n+1,M},\frac{(n+1)\sigma^2}{n\Upsilon} \biggr).
\end{align}
\end{corollary}
We now present an alternative lower bound to the error probability under maximal power constraint. To this end, we consider the weakening of the meta-converse in~\refE{metaconverse} obtained by fixing the auxiliary distribution $Q$ to be the zero-mean i.i.d. Gaussian distribution~\refE{Q-theta-def}.
\begin{theorem}[Converse, maximal power constraint]\label{thm:maximal_lower_bound}
Let $\theta \geq \sigma$, $n\geq 1$. Then,
\begin{align} \label{eqn:maximal_lower_bound}
\epsilon_{\text{m}}^{\star}(n,M,\Upsilon) \geq \alpha_{\frac{1}{M}} \bigl( \varphi^n_{\sqrt{\Upsilon},\sigma} , \varphi^n_{0,\theta} \bigr).
\end{align}
\end{theorem}
\begin{IEEEproof}
See Section~\ref{sec:maximal_lower_bound}.
\end{IEEEproof}
Setting $\theta^2=\Upsilon+\sigma^2$ in \refE{maximal_lower_bound}, we recover the bound from Theorem~\ref{thm:PPV_lower_bound}. We conclude that this lower bound also holds under a maximal power constraint and not only for the restriction to equal power codewords. This is not the case, however, for the Shannon cone-packing bound from Theorem~\ref{thm:shannon_lower_bound}, as we show with the following example.
We consider the problem of transmitting $M=16$ codewords over an additive Gaussian noise channel with $n=2$ dimensions. For $n=2$, Shannon cone-packing bound (SCPB) from Theorem~\ref{thm:shannon_lower_bound} coincides with the ML decoding error probability of a $M$-PSK constellation $\Cc_{M\text{-PSK}}$ satisfying the equal power constraint $\Upsilon$ (as $2$-dimensional cones are precisely the ML decoding regions of the $M$-PSK constellation). For instance, for a $2$-dimensional Gaussian channel with a signal-to-noise ratio (SNR) $\frac{\Upsilon}{\sigma^2}=10$ and $M=16$ codewords, we obtain SCPB $=\Pe(\Cc_{16\text{-PSK}}) \approx 0.38$. Let us now define a code $\Cc_{M\text{-APSK}}$ composed of the points of an $(M-1)$-PSK constellation and an additional codeword located at $\x = (0,0)$. While this code satisfies the maximal power constraint $\Upsilon$, its error probability violates SCPB for sufficiently large $M$.
Indeed, for the previous example, the modified codebook attains $\Pe(\Cc_{16\text{-APSK}}) \approx 0.34 < 0.38 \approx$ SCPB. We conclude that in general Theorem~\ref{thm:shannon_lower_bound} holds only under an equal power constraint.
Evaluating Corollary~\ref{cor:shannon_lower_bound} and Theorem~\ref{thm:maximal_lower_bound} with $\theta^2=\Upsilon+\sigma^2$ for this example yields approximately $0.08$ and $0.15$, respectively. We can see that the direct lower bound from Theorem~\ref{thm:maximal_lower_bound} is tighter than that from Corollary~\ref{cor:shannon_lower_bound}. For a more detailed discussion comparing the bounds under different power constraints, see~\refS{numerical}.
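The two error probabilities quoted above can be reproduced with a short Monte Carlo experiment (an illustrative sketch; minimum distance decoding is exactly ML here):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def pe_ml(codebook, sigma, trials=400_000):
    M = len(codebook)
    v = rng.integers(M, size=trials)
    y = codebook[v] + sigma * rng.standard_normal((trials, 2))
    d2 = ((y**2).sum(1, keepdims=True)
          - 2.0 * y @ codebook.T + (codebook**2).sum(1))
    return np.mean(d2.argmin(axis=1) != v)

def psk(m, radius):
    ang = 2.0 * np.pi * np.arange(m) / m
    return radius * np.column_stack([np.cos(ang), np.sin(ang)])

sigma2, upsilon = 1.0, 10.0             # SNR = Upsilon/sigma^2 = 10
r = np.sqrt(2.0 * upsilon)              # ||c||^2 = n*Upsilon, n = 2
c_psk = psk(16, r)                      # equal power constraint
c_apsk = np.vstack([psk(15, r), [[0.0, 0.0]]])  # maximal power
print("Pe(16-PSK)  ~", pe_ml(c_psk, np.sqrt(sigma2)))   # about 0.38
print("Pe(16-APSK) ~", pe_ml(c_apsk, np.sqrt(sigma2)))  # about 0.34
\end{verbatim}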
\subsection{Proof of Theorem~\ref{thm:maximal_lower_bound}}
\label{sec:maximal_lower_bound}
We consider the set of input distributions $\X\sim P$ satisfying the maximal power constraint
\begin{align}
\Pc_{\text{m}}(\Upsilon) \triangleq \Bigl\{ P \;\Big|\; \Pr\bigl[ \|\X\|^2 \leq n\Upsilon \bigr] =1 \Bigr\}.
\end{align}
Then, the meta-converse bound~\refE{metaconverse}
for some fixed $Q$ becomes
\begin{align}
\epsilon_{\text{m}}^{\star}(n,M,\Upsilon)
&\geq \inf_{P\in\Pc_{\text{m}}(\Upsilon)} \left\{\alpha_{\frac{1}{M}} \bigl(P W, P \times Q \bigr)\right\}.\label{eqn:metaconverse-maximal}
\end{align}
In order to make the minimization over $P$ tractable we shall use the following result.
\begin{lemma}\label{lem:alpha-split}
Let $\bigl\{P_{\lambda}\bigr\}$ be a family of probability measures defined over the input alphabet $\Xc$,
parametrized by $\lambda\in\RR$. Assume that the distributions $P_{\lambda}$ have pairwise disjoint supports
and that there exists a probability distribution $S$ over the parameter $\lambda$ such that
$P = \int P_{\lambda} S(\textrm{d} \lambda)$.
Then, the hypothesis testing error trade-off function satisfies
\begin{align}\label{eqn:alpha-split}
&\alpha_{\beta} \bigl(PW, P\times Q\bigr) = \min_{\substack{\{\beta_{\lambda}\}:\\\beta=\int \beta_{\lambda} S(\textrm{d} \lambda) }} \int \alpha_{\beta_{\lambda}} \bigl(P_{\lambda} W, P_{\lambda} \times Q \bigr) S(\textrm{d} \lambda).
\end{align}
\end{lemma}
\begin{IEEEproof}
This lemma is analogous to the second part of \cite[Lem. 25]{Pol13}.
Since we require the family $P_{\lambda}$ to be parametrized by a continuous $\lambda$, we include the proof for completeness.
First, we observe that $\alpha_{\beta} \bigl(PW, P\times Q\bigr)$ is jointly convex in $(\beta,P)$ \cite[Thm.~6]{Pol13}.
Let $\bigl\{\beta_{\lambda}\bigr\}$ and $\bigl\{P_{\lambda}\bigr\}$ satisfy
$\beta = \Ex_S[\beta_{\lambda}]=\int \beta_{\lambda} S(\textrm{d} \lambda)$ and $P=\Ex_S[P_{\lambda}]=\int P_{\lambda} S(\textrm{d} \lambda)$.
Then, using Jensen's inequality it follows that
\begin{align}
\Ex_S\bigl[ \alpha_{\beta_{\lambda}} \bigl(P_{\lambda} W, P_{\lambda} \times Q \bigr) \bigr]
&\geq \alpha_{\Ex_S[\beta_{\lambda}]} \bigl(\Ex_S[P_{\lambda}] W,\, \Ex_S[P_{\lambda}] \times Q \bigr)
= \alpha_{\beta} \bigl(P W, P \times Q \bigr). \label{eqn:alpha-split-1}
\end{align}
We conclude that the right-hand side of \refE{alpha-split} is an upper bound on
$\alpha_{\beta} \bigl(P W, P \times Q \bigr)$.
To prove the identity \refE{alpha-split}, it remains to show that there exists $\bigl\{\beta_{\lambda}\bigr\}$ such that
$\beta = \int \beta_{\lambda} S(\textrm{d} \lambda)$ and such that \refE{alpha-split-1} holds with equality.
We consider the Neyman-Pearson test for the original testing problem
$\alpha_{\beta} \bigl(P W, P \times Q \bigr)$, which is given by
\begin{align}\label{eqn:alpha-split-T}
T(\x,\y) = \openone\biggl[\log\frac{W(\y|\x)}{Q(\y)}>t'\biggr] + c \openone\biggl[\log\frac{W(\y|\x)}{Q(\y)} = t'\biggr]
\end{align}
for some $t'\geq 0$ and $c\in[0,1]$
such that $\beta = \int T(\x,\y) Q(\textrm{d} \y) P(\textrm{d} \x)$.
We apply this test to the testing problem between $P_{\lambda} W$ and $P_{\lambda} \times Q$
and obtain a type-I and type-II error probabilities
\begin{align}
\epsilon_{1}(\lambda) &= 1 -\int T(\x,\y) W(\textrm{d} \y|\x) P_{\lambda}(\textrm{d} \x),\\
\epsilon_{2}(\lambda) &= \int T(\x,\y) Q(\textrm{d} \y) P_{\lambda}(\textrm{d} \x).
\end{align}
For the choice $\beta_{\lambda} = \epsilon_{2}(\lambda)$,
the test \refE{alpha-split-T} is precisely the Neyman-Pearson test
of the problem
$\alpha_{\beta_{\lambda}} \bigl(P_{\lambda} W, P_{\lambda} \times Q\bigr)$.
Therefore,
\begin{align}
\Ex_S\bigl[ \alpha_{\beta_{\lambda}} \bigl(P_{\lambda} W, P_{\lambda} \times Q \bigr) \bigr]
&= \int \epsilon_{1}(\lambda) S(\textrm{d} \lambda)\\
&= 1 - \int T(\x,\y) W(\textrm{d} \y|\x) P_{\lambda}(\textrm{d} \x) S(\textrm{d} \lambda)\\
&= 1 - \int T(\x,\y) W(\textrm{d} \y|\x) P(\textrm{d} \x)\\
&= \alpha_{\beta} \bigl(P W, P \times Q \bigr).
\end{align}
Similarly, we can show that $\Ex_S\bigl[\beta_{\lambda}\bigr] = \Ex_S\bigl[\epsilon_{2}(\lambda)\bigr] = \beta$.
We conclude that this choice of $\bigl\{\beta_{\lambda}\bigr\}$ yields equality in \refE{alpha-split-1}.
Given the bound \refE{alpha-split-1}, it also attains the minimum in \refE{alpha-split} and the result follows.
\end{IEEEproof}
Lemma \ref{lem:alpha-split} asserts that it is possible to express a binary hypothesis test as a convex combination of disjoint sub-tests provided that the type-II error is optimally distributed among them.
For any $\gamma\geq 0$, we define the input set $\Sc_{\gamma} \triangleq \bigl\{ \x \,|\, \|\x\|^2=n\gamma \bigr\}$. In words, the set $\Sc_{\gamma}$ corresponds to the spherical shell centered at the origin that contains all input sequences with energy $n\gamma$. Note that, whenever $\gamma_1\neq\gamma_2$, the sets $\Sc_{\gamma_1}$ and $\Sc_{\gamma_2}$ are disjoint.
We now decompose the input distribution~$P$ based on the parameter $\gamma = \|\x\|^2/n$. To this end, we define the distribution $S(\gamma) \triangleq \Pr\{\X \in \Sc_{\gamma}\}$, and we let $P_{\gamma}$ be a distribution defined over $\Xc$ satisfying $P_{\gamma}(\x) = 0$ for any $\x\notin\Sc_{\gamma}$, and
\begin{align}\label{eqn:P_decomposition}
P(\x) = \int P_{\gamma}(\x) S(\textrm{d}\gamma).
\end{align}
This condition implies that $P_{\gamma}(\x) = \frac{P(\x)}{S(\gamma)} \openone[\x \in \Sc_{\gamma}]$ whenever $S(\gamma)>0$, where $\openone[\cdot]$ denotes the indicator function.
When $S(\gamma)=0$, then $P_{\gamma}$ can be an arbitrary distribution such that $P_{\gamma}(\x) = 0$ for any $\x\notin\Sc_{\gamma}$.
Given \refE{P_decomposition} and since the measures $P_{\gamma}$ have disjoint supports for different values of $\gamma$, the conditions in Lemma~\ref{lem:alpha-split} hold for $\lambda \leftrightarrow \gamma$. Then, using \refE{alpha-split}, we obtain that the right-hand side of \refE{metaconverse-maximal} with $Q$ given by \refE{Q-theta-def} becomes
\begin{align}
\inf_{P\in\Pc_{\text{m}}(\Upsilon)} \left\{\alpha_{\frac{1}{M}} \bigl(P W, P \times Q \bigr)\right\}
&= \inf_{\substack{\{S,\beta_{\gamma}\}:\,\gamma\leq\Upsilon,\\ \int \beta_{\gamma} S(\textrm{d}\gamma) = \frac{1}{M}}} \left\{ \int \alpha_{\beta_{\gamma}} \bigl(P_{\gamma} W, P_{\gamma} \times Q \bigr) S(\textrm{d}\gamma) \right\}
\label{eqn:metaconverse-split-1}\\
&= \inf_{\substack{\{S,\beta_{\gamma}\}:\,\gamma\leq\Upsilon,\\ \int \beta_{\gamma} S(\textrm{d}\gamma) = \frac{1}{M} }} \left\{ \int
\alpha_{\beta_{\gamma}} \bigl(\varphi_{\sqrt{\gamma},\sigma}^n, \varphi_{0,\theta}^n \bigr) S(\textrm{d}\gamma) \right\},
\label{eqn:metaconverse-split-2}
\end{align}
where \refE{metaconverse-split-2} follows from the spherical symmetry of each of the sub-tests in \refE{metaconverse-split-1}, cf.~\refE{mc-spherical-symmetry}, using the representative input $\x=(\sqrt{\gamma},\ldots,\sqrt{\gamma})\in\Sc_{\gamma}$.
We transformed the original optimization over the $n$-dimensional distribution $P$ in the left-hand side of \refE{metaconverse-split-1} into an optimization over a one-dimensional distribution $S$ and auxiliary function $\beta_{\gamma}$ in the right-hand side of \refE{metaconverse-split-2}. To obtain the lower bound in the theorem, we make use of the following properties of the function $\alpha_{\beta}\bigl(\varphi_{\sqrt{\gamma},\sigma}^n, \varphi_{0,\theta}^n \bigr)$.
\begin{lemma}\label{lem:alpha-decreasing-gamma}
Let $0 < \sigma \leq \theta$, with $\sigma,\theta\in\RR$ and $n \geq 1$. Then, the function
\begin{align}\label{eqn:f-def}
f(\beta,\gamma) \triangleq \alpha_{\beta} \bigl(\varphi_{\sqrt{\gamma},\sigma}^n, \varphi_{0,\theta}^n \bigr)
\end{align}
is non-increasing in~$\gamma$ for any fixed $\beta\in[0,1]$, and convex non-increasing in~$\beta$ for any fixed $\gamma > 0$.
\end{lemma}
\begin{IEEEproof}
The minimum type-I error $\alpha$ is a non-increasing convex function of the type-II error~$\beta$ (see, \textit{e.g.},~\cite[Sec. I]{Pol13}).
Since $f(\beta,\gamma)$ characterizes the trade-off between the type-I and type-II errors of a hypothesis test, for fixed $\gamma\geq 0$, $f(\beta,\gamma)$ is non-increasing and convex in $\beta\in[0,1]$.
To characterize the behavior of $f(\beta,\gamma)$ with respect to $\gamma$, in Appendix~\ref{apx:f-beta-gamma} we show that $f(\beta,\gamma)$ is differentiable and obtain the derivative of $f(\beta,\gamma)$ with respect to $\gamma$. In particular, it follows from \refE{partial-f-g} that
\begin{align}
\frac{\partial f(\beta,\gamma)}{\partial \gamma}
&= - \frac{n}{2\delta} \biggl(\frac{t\delta}{\sigma^2\sqrt{n \gamma}}\biggr)^{\frac{n}{2}} e^{-\frac{1}{2}\left( \frac{n\gamma\sigma^2}{\delta^2} + \frac{t^2}{\sigma^2}\right)} I_{\frac{n}{2}}\biggl(\frac{t\sqrt{n\gamma}}{\delta}\biggr), \label{eqn:partial-f-g-bis}
\end{align}
where $\delta = \theta^2-\sigma^2$, $t$ satisfies $\beta(\gamma,t) = \beta$ for $\beta(\gamma,t)$ defined in \refE{beta-marcumQ} and $I_{m}(\cdot)$ denotes the $m$-th order modified Bessel function of the first kind.
For any $\gamma\geq 0$ and $\beta\in[0,1]$, the parameter $t$ that follows from the identity $\beta(\gamma,t) = \beta$ is non-negative. Then, using that $e^{-x/2}\geq 0$ and since $x\geq 0$ implies $I_{m}(x)\geq 0$, we conclude that \refE{partial-f-g-bis} is non-positive for any $\delta = \theta^2 -\sigma^2 > 0$. As a result, the function $f(\beta,\gamma) = \alpha_{\beta} \bigl(\varphi_{\sqrt{\gamma},\sigma}^n, \varphi_{0,\theta}^n \bigr)$ is non-increasing in $\gamma$ for any fixed value of $\beta$, provided that the conditions of the lemma hold.
\end{IEEEproof}
According to Lemma~\ref{lem:alpha-decreasing-gamma}, for any $0 \leq \gamma\leq \Upsilon$, it holds that $\alpha_{\beta} \bigl(\varphi_{\sqrt{\gamma},\sigma}^n, \varphi_{0,\theta}^n \bigr) = f(\beta,\gamma) \geq f(\beta,\Upsilon)$. As any maximal power constrained input distribution $P\in\Pc_{\text{m}}(\Upsilon)$ satisfies $S(\gamma) = 0$ for $\gamma>\Upsilon$, it follows that
\begin{align}
\inf_{\substack{\{S,\beta_{\gamma}\}:\, \gamma\leq\Upsilon,\\ \int \beta_{\gamma} S(\textrm{d}\gamma) = \frac{1}{M} }} \left\{ \int
f\bigl(\beta_{\gamma},\gamma\bigr) S(\textrm{d}\gamma) \right\}
&\geq \inf_{\substack{\{S,\beta_{\gamma}\}:\, \gamma\leq\Upsilon,\\ \int \beta_{\gamma} S(\textrm{d}\gamma) = \frac{1}{M} }} \left\{ \int
f\bigl(\beta_{\gamma},\Upsilon\bigr) S(\textrm{d}\gamma) \right\}\\
&\geq f\bigl(\tfrac{1}{M},\Upsilon\bigr), \label{eqn:metaconverse-bound-opt}
\end{align}
where in \refE{metaconverse-bound-opt} we used that the function $f(\beta,\Upsilon)$ is convex with respect to $\beta$ (Lemma~\ref{lem:alpha-decreasing-gamma}); hence, by Jensen's inequality and using the constraint $ \int \beta_{\gamma} S(\textrm{d}\gamma) = \frac{1}{M}$, we obtain
\begin{align}
\int f\bigl(\beta_{\gamma},\Upsilon\bigr) S(\textrm{d}\gamma) \geq
f\bigl(\textstyle{\int \beta_\gamma S(\textrm{d}\gamma)},\Upsilon\bigr) = f\bigl(\tfrac{1}{M},\Upsilon\bigr).
\end{align}
Then, using \refE{metaconverse-maximal}, \refE{metaconverse-split-2} and \refE{metaconverse-bound-opt}, since $f\bigl(\frac{1}{M},\Upsilon\bigr) = \alpha_{\frac{1}{M}} \bigl(\varphi_{\sqrt{\Upsilon},\sigma}^n, \varphi_{0,\theta}^n \bigr)$, the result follows.
\section{Lower Bounds for Average Power Constraints}\label{sec:average}
In this section we study lower bounds to the error probability of codes satisfying an average power limitation. To this end, we first introduce some concepts of convex analysis.
The Legendre-Fenchel (LF) transform of a function $g$ is
\begin{align}
g^{*}(b) = \max_{a\in\Ac} \bigl\{\langle a,b \rangle - g(a)\bigr\},
\label{eqn:LF-transform}
\end{align}
where $\Ac$ is the domain of the function $g$ and $\langle a,b \rangle$ denotes the inner product between $a$ and $b$.
The function $g^*$ is usually referred to as the Fenchel conjugate (or convex conjugate) of $g$. If $g$ is a convex function with closed domain, applying the LF transform twice recovers the original function, \textit{i.e.}, $g^{**} = g$. If $g$ is not convex, applying the LF transform twice returns the lower convex envelope of $g$, which is the largest lower semi-continuous convex function majorized by $g$.
For our problem, with $f(\beta,\gamma)=\alpha_{\beta} \bigl(\varphi_{\sqrt{\gamma},\sigma}^n, \varphi_{0,\theta}^n \bigr)$ defined on the domain $\beta\in[0,1]$, $\gamma\geq 0$, we define
\begin{align}
\underline{f}(\beta,\gamma) &\triangleq f^{**}(\beta,\gamma),
\label{eqn:conv-f}
\end{align}
and note that $\underline{f}(\beta,\gamma)\leq f(\beta,\gamma)$.
The lower convex envelope \refE{conv-f} is a lower bound to the error probability in the average power constraint setting, as the next result shows.
\begin{theorem}[Converse, average power constraint]\label{thm:average_lower_bound}
Let $\theta \geq \sigma$, $n\geq 1$. Then,
\begin{align} \label{eqn:average_lower_bound}
\epsilon_{\text{a}}^{\star}(n,M,\Upsilon) \geq \underline{f}\bigl(\tfrac{1}{M},\Upsilon\bigr),
\end{align}
where $\underline{f}(\beta,\gamma)$ is the lower convex envelope \refE{conv-f} of $f(\beta,\gamma)=\alpha_{\beta} \bigl(\varphi_{\sqrt{\gamma},\sigma}^n, \varphi_{0,\theta}^n \bigr)$.
\end{theorem}
\begin{IEEEproof}
We start by considering the general meta-converse bound in~\refE{metaconverse}
where $\Pc$ is the set of distributions satisfying an average power constraint, i.e., $\Pc = \Pc_{\text{a}}(\Upsilon)$ with
\begin{align}
\Pc_{\text{a}}(\Upsilon) \triangleq \Bigl\{ P \;\Big|\; \Ex\bigl[ \|\X\|^2 \bigr] \leq n\Upsilon,\ \X\sim P \Bigr\}.\label{eqn:Pa_def}
\end{align}
Proceeding analogously as in \refE{metaconverse-split-1}-\refE{metaconverse-split-2}, but with the average power constraint $\int \gamma S(\textrm{d}\gamma) \leq \Upsilon$ instead of the maximal power constraint, it follows that
\begin{align}
&\inf_{P\in\Pc_{\text{a}}(\Upsilon)} \left\{\alpha_{\frac{1}{M}} \bigl(P W, P \times Q \bigr)\right\}
= \inf_{\substack{\{S,\beta_{\gamma}\}:\\
\int \gamma S(\textrm{d}\gamma) \leq \Upsilon\\
\int \beta_{\gamma} S(\textrm{d}\gamma) = \frac{1}{M}}} \left\{ \int
\alpha_{\beta_{\gamma}} \bigl(\varphi_{\sqrt{\gamma},\sigma}^n, \varphi_{0,\theta}^n \bigr) S(\textrm{d}\gamma) \right\}.
\label{eqn:metaconverse-split-2a}
\end{align}
The function $f(\beta,\gamma) = \alpha_{\beta} \bigl(\varphi_{\sqrt{\gamma},\sigma}^n, \varphi_{0,\theta}^n \bigr)$
is non-increasing in~$\gamma$ for any fixed $\beta\in[0,1]$ (see Lemma~\ref{lem:alpha-decreasing-gamma}).
Therefore,
\begin{align}
\inf_{\substack{\{S,\beta_{\gamma}\}:\\
\int \gamma S(\textrm{d}\gamma) \leq \Upsilon\\
\int \beta_{\gamma} S(\textrm{d}\gamma) = \frac{1}{M}}} \left\{ \int
f\bigl(\beta_{\gamma},\gamma\bigr) S(\textrm{d}\gamma) \right\}
&\geq
\inf_{\substack{\{S,\beta_{\gamma}\}:\\
\int \gamma S(\textrm{d}\gamma) = \Upsilon\\
\int \beta_{\gamma} S(\textrm{d}\gamma) = \frac{1}{M}}} \left\{ \int
f\bigl(\beta_{\gamma},\gamma\bigr) S(\textrm{d}\gamma) \right\}.
\label{eqn:metaconverse-split-3a}
\end{align}
That is, the average power constraint can be assumed to hold with equality: since $f$ is non-increasing in $\gamma$, restricting the constraint in this way does not increase the infimum.
Using \refE{metaconverse-split-3a} and since $f(\beta,\gamma)\geq\underline{f}(\beta,\gamma)$,
we lower-bound the right-hand side of \refE{metaconverse-split-2a} as
\begin{align}
\inf_{\substack{\{S,\beta_{\gamma}\}:\\
\int \gamma S(\textrm{d}\gamma) \leq \Upsilon\\
\int \beta_{\gamma} S(\textrm{d}\gamma) = \frac{1}{M}}} \left\{ \int
f\bigl(\beta_{\gamma},\gamma\bigr) S(\textrm{d}\gamma) \right\}
&\geq
\inf_{\substack{\{S,\beta_{\gamma}\}:\\
\int \gamma S(\textrm{d}\gamma) = \Upsilon\\
\int \beta_{\gamma} S(\textrm{d}\gamma) = \frac{1}{M}}} \left\{ \int
\underline{f}\bigl(\beta_{\gamma},\gamma\bigr) S(\textrm{d}\gamma) \right\}
\label{eqn:metaconverse-split-4a}\\
&\geq
\inf_{\substack{\{S,\beta_{\gamma}\}:\\
\int \gamma S(\textrm{d}\gamma) = \Upsilon\\
\int \beta_{\gamma} S(\textrm{d}\gamma) = \frac{1}{M}}}
\Bigl\{ \underline{f}\bigl(\textstyle\int\beta_{\gamma}S(\textrm{d}\gamma),\textstyle\int\gamma S(\textrm{d}\gamma)\bigr) \Bigr\}
\label{eqn:metaconverse-split-5a}\\
&=
\underline{f}\bigl(\tfrac{1}{M},\Upsilon\bigr),
\label{eqn:metaconverse-split-6a}
\end{align}
where \refE{metaconverse-split-5a} follows from applying Jensen's inequality, and
\refE{metaconverse-split-6a} holds since, given the constraints $\int\beta_{\gamma}S(\textrm{d}\gamma)=\frac{1}{M}$ and
$\int\gamma S(\textrm{d}\gamma)=\Upsilon$, the objective of the optimization does not depend on $\{S,\beta_{\gamma}\}$.
The bound \refE{conv-f} then follows from combining \refE{metaconverse},
\refE{metaconverse-split-2a} and the inequalities \refE{metaconverse-split-4a}-\refE{metaconverse-split-6a}.
\end{IEEEproof}
The function $\underline{f}(\beta,\gamma)$ can be evaluated numerically by considering a $2$-dimensional grid on the parameters $(\beta,\gamma)$, computing $f(\beta,\gamma)$ over this grid, and obtaining the corresponding convex envelope (a numerical sketch of this construction is given after the next result). Nevertheless, sometimes $\underline{f}(\beta,\gamma) = f(\beta,\gamma)$ and these steps can be avoided, as the next result shows.
\begin{lemma}\label{lem:envelope_equals_f}
Let $\sigma,\theta,\gamma>0$ and $n\geq 1$, be fixed parameters, and define $\delta \triangleq\theta^2-\sigma^2$. For $t\geq 0$, we define
\begin{align}
\!\xi_{1}(t) &\triangleq Q_{\frac{n}{2}}\left( \sqrt{n\gamma}\frac{\sigma}{\delta}, \frac{t}{\sigma} \right) - Q_{\frac{n}{2}}\left( 0, \sqrt{\bigl(\tfrac{t^2}{\sigma^2}-n\gamma\tfrac{\theta^2}{\delta^2} \bigr)_{+}} \right),\\
\!\xi_{2}(t) &\triangleq \frac{\theta^n}{\sigma^n} e^{\frac{1}{2}\left(\frac{n\gamma}{\delta}-\frac{\delta t^2}{\sigma^2\theta^2}\right)}
\biggl( Q_{\frac{n}{2}}\left( 0, \sqrt{\bigl(\tfrac{t^2}{\theta^2}-n\gamma\tfrac{\sigma^2}{\delta^2} \bigr)_{+}} \right)
-Q_{\frac{n}{2}}\left( \sqrt{n\gamma}\frac{\theta}{\delta}, \frac{t}{\theta} \right) \biggr),\\
\!\xi_{3}(t) &\triangleq \frac{n \gamma}{2\delta}\biggl(\frac{t\delta}{\sigma^2\sqrt{n\gamma}}\biggr)^{\frac{n}{2}}
e^{-\frac{1}{2}\left(n\gamma\frac{\sigma^2}{\delta^2}+\frac{t^2}{\sigma^2}\right)} I_{\frac{n}{2}}\Bigl(\sqrt{n\gamma} \frac{t}{\delta}\Bigr),\!
\end{align}
where $(a)_{+} = \max(0,a)$, $Q_m(a,b)$ is the Marcum $Q$-function and $I_{m}(\cdot)$ denotes the $m$-th order modified Bessel function of the first kind.
Let $t_0$ be the solution to the implicit equation
\begin{align} \label{eqn:avmc_t1_cond}
\xi_{1}(t_0) + \xi_{2}(t_0) + \xi_{3}(t_0) = 0.
\end{align}
Then, for any $\beta$ satisfying $\bigl(1- Q_{\frac{n}{2}}\bigl( \sqrt{n\gamma}{\theta}/{\delta},\, t_0/{\theta} \bigr) \bigr) \leq \beta\leq 1$, it holds that
\begin{align} \label{eqn:envelope_equals_f}
\underline{f}(\beta,\gamma) = f(\beta,\gamma).
\end{align}
\end{lemma}
\begin{IEEEproof}
See Appendix~\ref{apx:envelope_equals_f}.
\end{IEEEproof}
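Outside the region covered by Lemma~\ref{lem:envelope_equals_f}, the grid-based construction of $\underline{f}$ mentioned earlier can be sketched as follows: build the convex hull of the samples $\bigl(\beta,\gamma,f(\beta,\gamma)\bigr)$ and read the envelope off the lower facets. In the sketch, \texttt{f} is a placeholder for any numerical evaluation of $\alpha_{\beta}\bigl(\varphi^n_{\sqrt{\gamma},\sigma},\varphi^n_{0,\theta}\bigr)$ accepting array arguments; a logarithmic grid in $\beta$ is usually advisable.
\begin{verbatim}
import numpy as np
from scipy.spatial import ConvexHull

def lower_convex_envelope(betas, gammas, f):
    # f: vectorized placeholder for alpha_beta(...) on a grid
    B, G = np.meshgrid(betas, gammas, indexing="ij")
    pts = np.column_stack([B.ravel(), G.ravel(), f(B, G).ravel()])
    hull = ConvexHull(pts)
    # keep lower facets (outward normal pointing down); the envelope
    # is the maximum of their supporting planes, since a convex
    # piecewise-linear function is the maximum of its affine pieces
    low = hull.equations[hull.equations[:, 2] < -1e-12]

    def envelope(beta, gamma):
        z = -(low[:, 0]*beta + low[:, 1]*gamma + low[:, 3]) / low[:, 2]
        return z.max()

    return envelope
\end{verbatim}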
Combining Theorem~\ref{thm:average_lower_bound} and Lemma~\ref{lem:envelope_equals_f} we obtain a simple lower bound on the error probability of any code satisfying an average-power constraint, provided that its cardinality is below a certain threshold.
\begin{corollary}\label{cor:average_metaconverse_bound}
Let $\sigma,\theta>0$ and $n\geq 1$, be fixed parameters, and $\delta = \theta^2-\sigma^2$. Let $t_0$ be the solution to the implicit equation \refE{avmc_t1_cond} with $\gamma=\Upsilon$ and define
\begin{align} \label{eqn:avmc_barM_def}
\bar{M}_n \triangleq \left(1- Q_{\frac{n}{2}}\biggl( \frac{\sqrt{n\Upsilon}{\theta}}{\delta},\, \frac{t_0}{\theta} \biggr)\right)^{-1}.
\end{align}
Then, for any code $\Cc \in \Fc_{\text{a}}(n,M,\Upsilon)$ with cardinality $M\leq\bar{M}_n$,
\begin{align} \label{eqn:cor_average_metaconverse_bound}
\epsilon_{\text{a}}^{\star}(n,M,\Upsilon) \geq \alpha_{\frac{1}{M}} \bigl( \varphi^n_{\sqrt{\Upsilon},\sigma} , \varphi^n_{0,\theta} \bigr).
\end{align}
\end{corollary}
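A sketch of the computation of $\bar{M}_n$ follows: scan for a sign change of $\xi_1+\xi_2+\xi_3$, refine the root $t_0$ with Brent's method, and substitute it into \refE{avmc_barM_def}. The scan interval is an ad hoc choice for the stated parameters and may need adjustment; for large $n$ the prefactor of $\xi_2$ overflows and a logarithmic reformulation would be needed.
\begin{verbatim}
import numpy as np
from scipy.stats import ncx2, chi2
from scipy.special import iv
from scipy.optimize import brentq

def marcum_q(m, a, b):
    b = np.asarray(b, dtype=float)
    if a == 0.0:            # Q_m(0, b) is a central chi-square tail
        return chi2.sf(b**2, df=2*m)
    return ncx2.sf(b**2, df=2*m, nc=a**2)

def mbar(n, upsilon, sigma2, theta2):
    sg, th = np.sqrt(sigma2), np.sqrt(theta2)
    d, g = theta2 - sigma2, n * upsilon

    def xi_sum(t):
        t = np.asarray(t, dtype=float)
        x1 = (marcum_q(n/2, np.sqrt(g)*sg/d, t/sg)
              - marcum_q(n/2, 0.0, np.sqrt(
                  np.maximum(t**2/sigma2 - g*theta2/d**2, 0.0))))
        x2 = ((th/sg)**n * np.exp(0.5*(g/d - d*t**2/(sigma2*theta2)))
              * (marcum_q(n/2, 0.0, np.sqrt(
                    np.maximum(t**2/theta2 - g*sigma2/d**2, 0.0)))
                 - marcum_q(n/2, np.sqrt(g)*th/d, t/th)))
        x3 = (g/(2*d) * (t*d/(sigma2*np.sqrt(g)))**(n/2)
              * np.exp(-0.5*(g*sigma2/d**2 + t**2/sigma2))
              * iv(n/2, np.sqrt(g)*t/d))
        return x1 + x2 + x3

    ts = np.linspace(1e-2, 40.0, 4000)   # ad hoc sign-change scan
    idx = np.flatnonzero(np.diff(np.sign(xi_sum(ts))))[0]
    t0 = brentq(xi_sum, ts[idx], ts[idx + 1])
    return 1.0/(1.0 - marcum_q(n/2, np.sqrt(g)*th/d, t0/th))

n, upsilon, sigma2 = 20, 1.0, 1.0        # SNR = 0 dB
print("bar{M}_n ~", mbar(n, upsilon, sigma2, theta2=upsilon+sigma2))
\end{verbatim}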
\begin{figure}[t]%
\centering\input{plots/awgn-avcond-snr9-5-0dB.tikz}%
\caption{Condition from Corollary~\ref{cor:average_metaconverse_bound} for AWGN channels with SNR of $0$ dB, $5$ dB and $9$ dB compared with the channel capacity $C$.}\label{fig:AWGN-avcond}
\end{figure}%
\refFig{AWGN-avcond} shows the condition $M\leq\bar{M}_n$ in Corollary~\ref{cor:average_metaconverse_bound} as an upper bound on the transmission rate of the system, given by $R = \frac{1}{n}\log_2 M$. In this example, we let $\theta^2=\Upsilon+\sigma^2$ and consider three different values of the signal-to-noise ratio, SNR=$10\log_{10}\frac{\Upsilon}{\sigma^2}$.
The channel capacity $C=\frac{1}{2}\log_2\bigl(1+\frac{\Upsilon}{\sigma^2}\bigr)$ for each of the SNRs is also included for reference.
According to Corollary~\ref{cor:average_metaconverse_bound},
for any code satisfying an average power constraint $\Upsilon$ with blocklength $n$ and rate $R \leq \bar{R}_n \triangleq \frac{1}{n}\log_2 \bar{M}_n$, the lower bound \refE{cor_average_metaconverse_bound} holds.
Then, in \refF{AWGN-avcond} we can see that the condition in Corollary~\ref{cor:average_metaconverse_bound} holds except for rates very close to and above capacity (provided that the blocklength $n$ is sufficiently large).
For rates above the curves $\bar{R}_n$ shown in the figure, \refE{cor_average_metaconverse_bound} does not apply and the refined bound from Theorem~\ref{thm:average_lower_bound} needs to be used instead.
This observation agrees with previous results in the literature. Indeed, the asymptotic analysis of the right-hand side of \refE{cor_average_metaconverse_bound} yields a strong converse behavior for rates above capacity~\cite[Th. 74]{PolThesis}. Nevertheless, \cite[Th. 77]{PolThesis} shows that under an average power constraint no strong converse holds, and therefore the bound \refE{cor_average_metaconverse_bound} cannot hold in general.
\subsection{Optimal input distribution}
\label{sec:mc-implicit-distribution}
A careful analysis of the derivation of the bounds in Theorem~\ref{thm:average_lower_bound} and Corollary~\ref{cor:average_metaconverse_bound} shows that they
are indeed tight in the sense that, for the auxiliary distribution $Q$ given in \refE{Q-theta-def}, they correspond to the tightest meta-converse bound
\begin{align}
\inf_{P\in\Pc_{\text{a}}(\Upsilon)} \left\{\alpha_{\frac{1}{M}} \bigl(P W, P \times Q \bigr)\right\} = \underline{f}\bigl(\tfrac{1}{M},\Upsilon\bigr).
\label{eqn:mc-average-uf}
\end{align}
Moreover, the optimal input distribution in the left-hand side of \refE{mc-average-uf} is characterized by the convex envelope $ \underline{f}$.
To see this, note that the right-hand side of \refE{mc-average-uf} corresponds to the value of the convex envelope $\underline{f}$ at the point $\bigl(\tfrac{1}{M},\Upsilon\bigr)$.
Using Carath\'eodory's theorem, it follows that any point on $\underline{f}$ can be written as a convex combination of (at most) $4$ points on the graph of $f$.\footnote{For a $2$-dimensional function its epigraph is a $3$-dimensional set. Therefore, Carath\'eodory's theorem implies that at most $3+1$ points are needed to construct the convex hull of the epigraph, which corresponds to the convex envelope of the original function.}
Let us denote these $4$ points as $\bigl(\beta_i,\gamma_i\bigr)$, $i=1,\ldots,4$.
Then, for some $\lambda_i\geq 0$, $i=1,\ldots,4$, such that $\sum_{i=1}^4 \lambda_i=1$,
the following identities hold
\begin{align}
\underline{f}\bigl(\tfrac{1}{M},\Upsilon\bigr) = \sum_{i=1}^4 \lambda_i f\bigl(\beta_i,\gamma_i\bigr),\quad
\frac{1}{M} = \sum_{i=1}^4 \lambda_i \beta_i,\quad
\Upsilon = \sum_{i=1}^4 \lambda_i \gamma_i.
\end{align}
Let $S$ be the probability distribution that sets its mass points at $\gamma_i$, $i=1,\ldots,4$, with probabilities $S(\gamma_i) = \lambda_i$. Let $\beta_{\gamma_i} = \beta_i$, $i=1,\ldots,4$. This choice of $\{S,\beta_{\gamma}\}$ satisfies the constraints of the left-hand side of \refE{metaconverse-split-4a}. Moreover, for this choice of $\{S,\beta_{\gamma}\}$ the left-hand side of \refE{metaconverse-split-4a} becomes $\sum_{i=1}^4 \lambda_i f\bigl(\beta_i,\gamma_i\bigr) = \underline{f}\bigl(\tfrac{1}{M},\Upsilon\bigr)$ and therefore the inequality chain in \refE{metaconverse-split-4a}-\refE{metaconverse-split-6a} holds with equality.
Also, increasing the power limit $\Upsilon$ yields a strictly lower error probability, and therefore \refE{metaconverse-split-3a} holds with equality.
Then, using \refE{metaconverse-split-2a} we conclude that the identity \refE{mc-average-uf} holds.
To characterize the input distribution minimizing the left-hand side of \refE{mc-average-uf}, we recall
\refE{P_decomposition}. We conclude that the input distribution $P$ optimizing the left-hand side of \refE{metaconverse-split-2a} concentrates its mass on (at most) 4 spherical shells with squared radius $n\gamma_i$, $i=1,\ldots,4$. The probability of each of these shells is precisely $S(\gamma_i) = \lambda_i$ and it is uniformly distributed over the surface of each of the shells~\cite[Sec. VI.F]{Pol13}.
\begin{figure}[t]
\centering\input{plots/chull-n6-s1-t2.tikz}%
\caption{In gray, the level curves of $f(\beta,\gamma)$ for $n=6$, $\sigma^2=1$, $\theta^2=2$. The bold line corresponds to the boundary \refE{chull-boundary}. The dashed lines correspond to the convex combinations that yield the convex envelope $\underline{f}(\beta,\gamma)$ below the boundary.}\label{fig:chull-n6-s1-t2}%
\end{figure}%
The results above hold for a general function $f(\beta,\gamma)$ (provided that it is strictly decreasing in $\gamma$). The structure of the convex envelope of the function $f(\beta,\gamma)=\alpha_{\beta} \bigl(\varphi_{\sqrt{\gamma},\sigma}^n, \varphi_{0,\theta}^n \bigr)$ is implicit in the proof of Lemma~\ref{lem:envelope_equals_f} in Appendix~\ref{apx:envelope_equals_f}.
Let $(\beta_0, \gamma_0)$ denote the boundary described in Lemma~\ref{lem:envelope_equals_f}, i.e., the points $(\beta_0, \gamma_0)$ satisfying
\begin{align}
\beta_0 = 1- Q_{\frac{n}{2}}\bigl( \sqrt{n\gamma_0}{\theta}/{\delta},\, t_0/{\theta} \bigr), \label{eqn:chull-boundary}
\end{align}
where $t_0$ is the solution of \refE{avmc_t1_cond} for $\gamma=\gamma_0$. We consider the two regions:
\subsubsection{Above the boundary \refE{chull-boundary}} In this region $\gamma \geq 0$ and $\bigl(1- Q_{\frac{n}{2}}\bigl( \sqrt{n\gamma}{\theta}/{\delta},\, t_0/{\theta} \bigr) \bigr) \leq \beta\leq 1$ with $t_0$ the solution of \refE{avmc_t1_cond}.
It follows that $\underline{f} = f$ and therefore $\underline{f}$ is the convex combination of a single point of $f$.
\subsubsection{Below the boundary \refE{chull-boundary}} For $\gamma \geq 0$ and $0 \leq \beta < \bigl(1- Q_{\frac{n}{2}}\bigl( \sqrt{n\gamma}{\theta}/{\delta},\, t_0/{\theta} \bigr) \bigr)$, $\underline{f}$ and $f$ do not coincide.
Instead, the convex envelope $\underline{f}$ evaluated at $(\beta,\gamma)$ corresponds to a convex combination of the function $f$ at the points $(\beta_0,\gamma_0)$ and $(\bar\beta,\bar{\rho})$, where $(\beta_0,\gamma_0)$ satisfies \refE{chull-boundary}, $\bar{\rho}=0$ and $\bar\beta = 1- Q_{\frac{n}{2}}\bigl(0,\, \bar{t}_{\star}/{\theta} \bigr)$ for $\bar{t}_{\star}$ in \refE{first-order-opt-bart}.
An example of this property is shown in \refF{chull-n6-s1-t2}. This figure shows in gray the level lines of $f\bigl(\beta,\gamma\bigr)$; the bold line corresponds to the boundary \refE{chull-boundary}; and the dashed lines show the convex combinations between $(\beta_0,\gamma_0)$ and $(\bar\beta,0)$ for different values of $\gamma_0$.
Above the boundary, the convex envelope $\underline{f}$ coincides with $f$, and therefore the input distribution $P$ optimizing the left-hand side of \refE{mc-average-uf} is uniform over a spherical shell centered at the origin and with squared radius $n\Upsilon$.
Below the boundary, the convex envelope $\underline{f}$ corresponds to the convex combination of two points of $f$ and the optimal input distribution $P$ corresponds to a mass point at the origin ($\bar{\rho}=0$) and a spherical shell centered at the origin and with squared radius $n\gamma_0$, $\gamma_0 \geq \Upsilon$.
The probability mass of these two components of $P$ depends on the parameters of the system.
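For simulation purposes, the two-component structure just described is easy to sample: with some probability place the input at the origin, and otherwise draw it uniformly on the shell of squared radius $n\gamma_0$ (a normalized Gaussian vector is uniform on the sphere). In the sketch below, \texttt{p} and \texttt{gamma0} are placeholders for the convex-combination weights obtained from the envelope construction.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def sample_optimal_input(n, gamma0, p, size=1):
    # uniform on the shell of squared radius n*gamma0: normalize a
    # standard Gaussian vector and rescale
    x = rng.standard_normal((size, n))
    x *= np.sqrt(n * gamma0) / np.linalg.norm(x, axis=1, keepdims=True)
    on_shell = rng.random(size) < p    # with prob. p: on the shell,
    return np.where(on_shell[:, None], x, 0.0)  # else at the origin
\end{verbatim}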
\section{Computation of $f(\beta,\gamma)=\alpha_{\beta} \bigl(\varphi_{\sqrt{\gamma},\sigma}^n, \varphi_{0,\theta}^n \bigr)$}
\label{sec:computation}
In the previous sections we showed that both the function $f(\beta,\gamma)$ and its convex envelope $\underline{f}(\beta,\gamma)$ yield lower bounds to the error probability for Gaussian channels under different power constraints. In this section we provide several tools that can be used in the numerical evaluation of these functions.
\subsection{Exact computation of $f(\beta,\gamma)$}
Proposition~\ref{prop:alpha-beta-marcumQ} in Appendix~\ref{apx:f-beta-gamma} provides a parametric formulation of the function $f(\beta,\gamma)$. A non-parametric expression for $f(\beta,\gamma)$ can be obtained by combining Proposition~\ref{prop:alpha-beta-marcumQ} and \cite[Lem.~1]{tit19qp}.
\begin{theorem}[Non-parametric formulation]\label{thm:alpha-beta-opt}
Let $\sigma,\theta>0$ and $n\geq 1$, be fixed parameters.
Then, it holds that
\begin{align}
f(\beta,\gamma)=\max_{t\geq 0} \Biggl\{ Q_{\frac{n}{2}}\left(\sqrt{n\gamma}\frac{\sigma}{\delta},\frac{t}{\sigma} \right)
+ \frac{\theta^n}{\sigma^n} e^{\frac{1}{2}\left(\frac{n \gamma}{\delta}-\frac{\delta t^2}{\sigma^2\theta^2}\right)}\left( 1 - \beta -Q_{\frac{n}{2}}\left(\sqrt{n\gamma}\frac{\theta}{\delta},\frac{t}{\theta} \right) \right)\Biggr\}.\label{eqn:alpha-beta-opt}
\end{align}
\end{theorem}
\begin{IEEEproof}
According to \cite[Lem.~1]{tit19qp}, we obtain the following alternative expression for $\alpha_{\beta}\bigl(\varphi_{\sqrt{\gamma},\sigma}^n, \varphi_{0,\theta}^n \bigr)$,
\begin{align}
\alpha_{\beta}\bigl(\varphi_{\sqrt{\gamma},\sigma}^n, \varphi_{0,\theta}^n \bigr)
= \max_{t'} \Bigl\{ \Pr\bigl[ j(\Y_{0}) \leq t' \bigr]
\!+\!e^{t'}\bigl( \Pr\bigl[ j(\Y_{1}) > t'\bigr]
\!-\! \beta \bigr)\!\Bigr\},\hspace{-1mm}\label{eqn:HT-im-formulation}
\end{align}
where $\Y_0 \sim \varphi_{\sqrt{\gamma},\sigma}^n$ and $\Y_1 \sim \varphi_{0,\theta}^n$
and where $j(\y)$ denotes the log-likelihood ratio
\begin{align}
j(\y) &= \log \frac{\varphi_{\sqrt{\gamma},\sigma}^n(\y)}{\varphi_{0,\theta}^n (\y)}\\
&= n \log\frac{\theta}{\sigma} - \frac{1}{2} \sum_{i=1}^{n} \frac{\theta^2(y_i-\sqrt{\gamma})^2-\sigma^2 y_i^2}{\sigma^2 \theta^2}.
\end{align}
Following steps analogous to those in the proof of Proposition~\ref{prop:alpha-beta-marcumQ} in Appendix~\ref{apx:f-beta-gamma}, we obtain that
\begin{align}
\Pr\bigl[ j(\Y_{0}) \leq t' \bigr] &= Q_{\frac{n}{2}}\left(\sqrt{n\gamma}\frac{\sigma}{\delta},\frac{t}{\sigma} \right),\label{eqn:PjY0-marcumQ}\\
\Pr\bigl[ j(\Y_{1}) > t'\bigr] &= 1-Q_{\frac{n}{2}}\left(\sqrt{n\gamma}\frac{\theta}{\delta},\frac{t}{\theta} \right),\label{eqn:PjY1-marcumQ}
\end{align}
where $t'\leftrightarrow t$ are related according to \refE{change-of-variable-t}, i.e., $e^{t'} = \frac{\theta^n}{\sigma^n} e^{\frac{1}{2}\left(\frac{n \gamma}{\delta}-\frac{\delta t^2}{\sigma^2\theta^2}\right)}$. Then, the result follows from \refE{HT-im-formulation}, \refE{PjY0-marcumQ} and \refE{PjY1-marcumQ} via the change of variable $t'\leftrightarrow t$.
\end{IEEEproof}
The formulation in Theorem~\ref{thm:alpha-beta-opt} allows us to obtain simple lower bounds on $f(\beta,\gamma)$ by fixing the value of $t$ in \refE{alpha-beta-opt}, and Verd\'u-Han-type lower bounds by using that $Q_{\frac{n}{2}}\bigl(\sqrt{n\gamma}\frac{\theta}{\delta},\frac{t}{\theta} \bigr) \leq 1$, hence
\begin{align}
f(\beta,\gamma) \geq \max_{t\geq 0} \Biggl\{ Q_{\frac{n}{2}}\left(\sqrt{n\gamma}\frac{\sigma}{\delta},\frac{t}{\sigma} \right)
- \frac{\theta^n}{\sigma^n} e^{\frac{1}{2}\left(\frac{n \gamma}{\delta}-\frac{\delta t^2}{\sigma^2\theta^2}\right)}\beta\Biggr\}.
\end{align}
In order to evaluate \refE{alpha-beta-opt} in Theorem~\ref{thm:alpha-beta-opt} we need to solve a maximization over the scalar parameter $t\geq 0$ with an objective involving two Marcum-$Q$ functions. The computation of the bounds from Theorems \ref{thm:PPV_lower_bound}, \ref{thm:maximal_lower_bound} and \ref{thm:average_lower_bound} for a fixed rate $R\triangleq\frac{1}{n}\log_2 M$, implies that the parameter $\beta = 2^{-n R}$ decreases exponentially with the blocklength $n$. Then, traditional Taylor series expansions of the Marcum-$Q$ function fail to achieve the required precision even for moderate values of $n$. In this regime, the following expansion yields an accurate approximation of $f(\beta,\gamma)$.
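Before turning to that expansion, we note that for small and moderate $n$ the maximization in \refE{alpha-beta-opt} can be carried out directly. A sketch follows; the bounded scalar search is an ad hoc choice, and the exponential prefactor limits this direct approach to moderate $n$, which is precisely where the expansion of the next subsection takes over.
\begin{verbatim}
import numpy as np
from scipy.stats import ncx2
from scipy.optimize import minimize_scalar

def marcum_q(m, a, b):
    return ncx2.sf(b**2, df=2*m, nc=a**2)

def f_direct(beta, gamma, n, sigma2, theta2):
    s, th = np.sqrt(sigma2), np.sqrt(theta2)
    d, g = theta2 - sigma2, np.sqrt(n * gamma)

    def objective(t):
        pref = (th/s)**n * np.exp(0.5*(n*gamma/d
                                       - d*t**2/(sigma2*theta2)))
        return (marcum_q(n/2, g*s/d, t/s)
                + pref*(1.0 - beta - marcum_q(n/2, g*th/d, t/th)))

    res = minimize_scalar(lambda t: -objective(t),
                          bounds=(1e-9, 1e3), method="bounded")
    return objective(res.x)

n = 20
print(f_direct(beta=2.0**(-n*0.5), gamma=1.0, n=n,
               sigma2=1.0, theta2=2.0))
\end{verbatim}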
\subsection{Saddlepoint expansion of $f(\beta,\gamma)$}
\label{sec:saddlepoint}
We define the information density
\begin{align}
j(y) \triangleq \log \frac{\varphi_{\sqrt{\gamma},\sigma}(y)}{\varphi_{0,\theta} (y)} = \log\frac{\theta}{\sigma}
- \frac{1}{2} \frac{\theta^2(y-\sqrt{\gamma})^2-\sigma^2 y^2}{\sigma^2 \theta^2}, \label{eqn:j1-def}
\end{align}
and we consider the cumulant generating function of the random variable $j(Y)$, $Y \sim \varphi_{0,\theta}$, given by
\begin{align}
\kappa_{\gamma}(s)
&\triangleq \log \int_{-\infty}^{\infty}
\frac{\varphi_{\sqrt{\gamma},\sigma}(y)^s }{\varphi_{0,\theta}(y)^{s-1}} \diff y
= {\gamma} \frac{s(s-1)}{2\eta(s)}+\log\frac{\theta^{s}\sigma^{1-s}}{\sqrt{\eta(s)}},\label{eqn:d0kappa-Gaussian}
\end{align}
where we defined $\eta(s) \triangleq s\theta^2 +(1-s) \sigma^2$.
The first three derivatives of \refE{d0kappa-Gaussian} with respect to $s$ are, respectively,
\begin{align}
\kappa_{\gamma}'(s) &= {\gamma} \frac{s^2 \theta^2 - (1-s)^2 \sigma^2}{2\eta(s)^2} -\frac{\theta^2 - \sigma^2}{2 \eta(s)} + \log\frac{\theta}{\sigma}, \label{eqn:d1kappa-Gaussian}\\
\kappa_{\gamma}''(s) &= \gamma \frac{\theta^2\sigma^2}{\eta(s)^3}
+ \frac{(\theta^2 - \sigma^2)^2}{2\eta(s)^2},\label{eqn:d2kappa-Gaussian}\\
\kappa_{\gamma}'''(s) &= - \left( \gamma \frac{3\theta^2\sigma^2 (\theta^2 - \sigma^2)}{\eta(s)^4}
+ \frac{(\theta^2 - \sigma^2)^3}{2\eta(s)^3}\right).\label{eqn:d3kappa-Gaussian}
\end{align}
\begin{theorem}[Saddlepoint expansion]\label{thm:alpha-beta-sp-formulation}
Let $\sigma,\theta>0$ and $n\geq 1$, be fixed parameters. Then,
\begin{align}
f(\beta,\gamma) &= \max_{s} \Bigl\{ \bigl(a(s,\gamma)+b(s,\gamma)\bigr) e^{n(\kappa_{\gamma}(s)+(1-s)\kappa_{\gamma}'(s))}
+ \openone[s>1] + \bigl(\openone[s<0] - \beta\bigr) e^{n\kappa_{\gamma}'(s)}\!\Bigr\},\label{eqn:HT-sp-formulation}
\end{align}
where $\openone[\cdot]$ denotes the indicator function and, for $\lambda_{a}(s) \triangleq |1-s|\sqrt{n \kappa_{\gamma}''(s)}$ and $\lambda_{b}(s) \triangleq |s|\sqrt{n \kappa_{\gamma}''(s)}$,
\begin{align}
a(s,\gamma) &= \sgn(1-s) \left(\Psi\bigl(\lambda_a(s)\bigr) + \frac{n (s-1)^3}{6} \left(\frac{\lambda_a(s)^{-1}-\lambda_a(s)^{-3}}{\sqrt{2\pi}} - \Psi\bigl(\lambda_a(s)\bigr)\right) \kappa_{\gamma}'''(s)\right) + o\bigl(n^{-\frac{1}{2}}\bigr),\label{eqn:sp-an}\\
b(s,\gamma) &= \sgn(s) \left(\Psi\bigl(\lambda_b(s)\bigr) + \frac{n s^3}{6} \left(\frac{\lambda_b(s)^{-1}-\lambda_b(s)^{-3}}{\sqrt{2\pi}} - \Psi\bigl(\lambda_b(s)\bigr)\right) \kappa_{\gamma}'''(s)\right) + o\bigl(n^{-\frac{1}{2}}\bigr).\label{eqn:sp-bn}
\end{align}
Here, $\sgn(\cdot)$ denotes the sign function, defined as $\sgn(s)=-1$ for $s<0$ and $\sgn(s)=1$ otherwise; the function $\Psi(\lambda)$ is defined as $\Psi(\lambda) \triangleq \Qsf(|\lambda|) e^{\frac{\lambda^2}{2}}$ with $\Qsf(\cdot)$ the Gaussian $\Qsf$-function; and $o\bigl(g(n)\bigr)$ summarizes the terms that approach zero faster than $g(n)$, \textit{i.e.}, $\lim_{n\to \infty}\frac{o(g(n))}{g(n)}=0$.
\end{theorem}
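Dropping the $o\bigl(n^{-\frac{1}{2}}\bigr)$ terms in \refE{sp-an}-\refE{sp-bn} yields a directly computable approximation of $f(\beta,\gamma)$. The following self-contained sketch implements \refE{HT-sp-formulation} with $\kappa_{\gamma}$ and its derivatives transcribed from \refE{d0kappa-Gaussian}-\refE{d3kappa-Gaussian}; $\Psi$ is evaluated through the scaled complementary error function to avoid overflow, and the grid over $s$ (avoiding the singular points $s=0$ and $s=1$ of the $\lambda^{-1}$ terms) is an ad hoc choice.
\begin{verbatim}
import numpy as np
from scipy.special import erfcx

def kappa_derivs(s, gamma, sigma2, theta2):
    # kappa_gamma(s) and its first three derivatives in s
    d = theta2 - sigma2
    eta = s*theta2 + (1.0 - s)*sigma2
    k0 = (gamma*s*(s - 1.0)/(2.0*eta)
          + 0.5*(s*np.log(theta2) + (1.0 - s)*np.log(sigma2)
                 - np.log(eta)))
    k1 = (gamma*(s**2*theta2 - (1.0 - s)**2*sigma2)/(2.0*eta**2)
          - d/(2.0*eta) + 0.5*np.log(theta2/sigma2))
    k2 = gamma*theta2*sigma2/eta**3 + d**2/(2.0*eta**2)
    k3 = -(3.0*gamma*theta2*sigma2*d/eta**4 + d**3/(2.0*eta**3))
    return k0, k1, k2, k3

def psi(lam):
    # Psi(l) = Q(|l|) exp(l^2/2) = erfcx(|l|/sqrt(2)) / 2
    return 0.5*erfcx(np.abs(lam)/np.sqrt(2.0))

def f_saddlepoint(beta, gamma, n, sigma2, theta2):
    best = 0.0
    for s in np.linspace(-0.25, 1.25, 3001):
        if min(abs(s), abs(s - 1.0)) < 1e-3:  # skip singular points
            continue
        k0, k1, k2, k3 = kappa_derivs(s, gamma, sigma2, theta2)
        sq = np.sqrt(n*k2)

        def corr(u):                 # bracketed factor in a and b
            lam = abs(u)*sq
            return (psi(lam) + n*u**3/6.0
                    * ((1.0/lam - 1.0/lam**3)/np.sqrt(2.0*np.pi)
                       - psi(lam)) * k3)

        a = np.sign(1.0 - s)*corr(s - 1.0)
        b = np.sign(s)*corr(s)
        val = ((a + b)*np.exp(n*(k0 + (1.0 - s)*k1))
               + float(s > 1.0)
               + (float(s < 0.0) - beta)*np.exp(n*k1))
        best = max(best, val)
    return best

n, R = 200, 0.5                      # rate R = log2(M) / n
print(f_saddlepoint(beta=2.0**(-n*R), gamma=1.0, n=n,
                    sigma2=1.0, theta2=2.0))
\end{verbatim}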
\begin{IEEEproof}
The proof follows the lines of \cite[Th. 2]{isit18} with a more refined expansion of $a(s,\gamma)$ and $b(s,\gamma)$.
We consider a sequence of i.i.d. non-lattice real-valued random variables with positive variance, $\{Z_{\ell}\}_{\ell=1}^n$, and we define their sample mean as
\begin{align}
\bar{Z}_n \triangleq \frac{1}{n}\sum_{\ell=1}^n Z_{\ell}.
\end{align}
Let $\kappa_{Z}(s) \triangleq \log \Ex\bigl[e^{s Z_\ell}\bigr]$ denote the cumulant generating function of $Z_{\ell}$, and
let $\kappa_{Z}'(s)$, $\kappa_{Z}''(s)$ and $\kappa_{Z}'''(s)$ denote its 1st, 2nd and 3rd derivatives with respect to $s$, respectively.
We apply the tilting from \cite[Sec. XVI.7]{Feller71}
to the random variable $\bar{Z}_n$ and then use the
expansion \cite[Sec. XVI.4, Th. 1]{Feller71}. We obtain the following result (see \cite[Prop.~1, Part~1]{twc2020} for detailed derivation).
If there exists $s$ in the region of convergence of $\kappa_{Z}(s)$ such that $\kappa_{Z}'(s) = t$, with $t \geq \Ex\bigl[\bar{Z}_n\bigr]$,
the tail probability $\Pr\bigl[\bar{Z}_n \geq t\bigr]$ satisfies
\begin{align}
\Pr\bigl[\bar{Z}_n \geq t\bigr]
&= e^{n(\kappa_{Z}(s)-s \kappa_{Z}'(s))} \left(\Psi(\lambda_{Z,s}) + \frac{n s^3}{6} \left(\frac{\lambda_{Z,s}^{-1}-\lambda_{Z,s}^{-3}}{\sqrt{2\pi}} - \Psi(\lambda_{Z,s})\right) \kappa_{Z}'''(s) + o\bigl(n^{-\frac{1}{2}}\bigr)\right),\label{eqn:sp-expansion-1}
\end{align}
where $\lambda_{Z,s} \triangleq |s|\sqrt{n \kappa_{Z}''(s)}$ and the error term $o\bigl(n^{-\frac{1}{2}}\bigr)$ holds uniformly in $s$.
The expansion \refE{sp-expansion-1} requires that the threshold $t$ is above the average value $\Ex\bigl[\bar{Z}_n\bigr] = \Ex\bigl[{Z}_{\ell}\bigr]$. That is, the expansion \refE{sp-expansion-1} is only accurate for evaluating the tail of the probability distribution. Whenever $t < \Ex\bigl[\bar{Z}_n\bigr]$, we use that
\begin{align}
\Pr\bigl[\bar{Z}_n \geq t\bigr]
&= 1 - \Pr\bigl[\bar{Z}_n < t\bigr]
\label{eqn:PZn-1}\\
&= 1 - \Pr\bigl[-\bar{Z}_n > -t\bigr].
\label{eqn:PZn-2}
\end{align}
For non-lattice distributions, the expansion
\refE{sp-expansion-1} coincides with that of
$\Pr\bigl[\bar{Z}_n > t\bigr]$ and therefore,
it can be used to estimate $\Pr\bigl[-\bar{Z}_n > -t\bigr]$. Moreover, given the mapping $\kappa_{Z}'(s) = t$, it can be checked that the condition $t \geq \Ex\bigl[\bar{Z}_n\bigr]$ corresponds to $s\geq 0$. Similarly, for $t < \Ex\bigl[\bar{Z}_n\bigr]$ and the mapping $\kappa_{Z}'(s) = t$, we obtain $s<0$.
We now apply this expansion to the probability terms
$\Pr\bigl[ j(\Y_{0}) \leq t' \bigr]$ and $\Pr\bigl[ j(\Y_{1}) > t'\bigr]$ appearing in \refE{HT-im-formulation}.
To this end, we consider the random variables $Z_0 = -j(Y_0)$, $Y_0 \sim \varphi_{\sqrt{\gamma},\sigma}$, and $Z_1 = j(Y_1)$, $Y_1 \sim \varphi_{0,\theta}$, with $j(y)$ defined in \refE{j1-def}. The cumulant generating functions of the random variables $Z_0$ and $Z_1$ are, respectively, given by
\begin{align}
\kappa_{Z_0}(s) &=\log \Ex\left[\left(
\frac{\varphi_{\sqrt{\gamma},\sigma}(Y_0)}{\varphi_{0,\theta}(Y_0)}\right)^{-s}\right] = \kappa_{\gamma}(1-s),\label{eqn:HT-sp-kappa0}\\
\kappa_{Z_1}(s) &=\log \Ex\left[\left(\frac{\varphi_{\sqrt{\gamma},\sigma}(Y_1)}{\varphi_{0,\theta}(Y_1)}\right)^s\right] = \kappa_{\gamma}(s),\label{eqn:HT-sp-kappa1}
\end{align}
for $\kappa_{\gamma}(s)$ defined in \refE{d0kappa-Gaussian}. The fact that the cumulant generating functions $\kappa_{Z_0}(s)$ and $\kappa_{Z_1}(s)$ are shifted and mirrored versions of each other will allow us to simplify the resulting expression.
Now, we use \refE{sp-expansion-1} for $\bar{Z}_{0,n} \triangleq -\frac{1}{n} j(\Y_{0}) = -\frac{1}{n} \sum_{\ell=1}^n j(Y_{0,\ell})$, and apply \refE{PZn-1}-\refE{PZn-2} whenever $t < \Ex\bigl[\bar{Z}_{0,n}\bigr]$, to obtain
\begin{align}
\Pr\bigl[\bar{Z}_{0,n} \geq t\bigr]
&= \openone[\bar{s}< 0] + \sgn(\bar{s}) e^{n(\kappa_{Z_0}(\bar{s})-\bar{s} \kappa_{Z_0}'(\bar{s}))} \notag\\
&\qquad\qquad\times\left(\Psi(\lambda_{Z_0,\bar{s}}) + \frac{n \bar{s}^3}{6} \left(\frac{\lambda_{Z_0,\bar{s}}^{-1}-\lambda_{Z_0,\bar{s}}^{-3}}{\sqrt{2\pi}} - \Psi(\lambda_{Z_0,\bar{s}})\right) \kappa_{Z_0}'''(\bar{s}) + o\bigl(n^{-\frac{1}{2}}\bigr)\right),\label{eqn:sp-expansion-2}
\end{align}
where the value of $\bar{s}$ satisfies $\kappa_{Z_0}'(\bar{s})=t$.
We consider the change of variable $\bar{s} \leftrightarrow s$ with $s = 1-\bar{s} \Leftrightarrow \bar{s} = 1-s$.
For this change of variable, the identity \refE{HT-sp-kappa0} yields
\begin{align}
\kappa_{Z_0}(\bar{s}) &= \kappa_{\gamma}(1-\bar{s}) = \kappa_{\gamma}(s),\label{eqn:HT-sp-kappa0-0}\\
\kappa_{Z_0}'(\bar{s}) &= -\kappa_{\gamma}'(1-\bar{s}) = -\kappa_{\gamma}'(s),\label{eqn:HT-sp-kappa0-1}\\
\kappa_{Z_0}''(\bar{s}) &= \kappa_{\gamma}''(1-\bar{s}) = \kappa_{\gamma}''(s),\label{eqn:HT-sp-kappa0-2}\\
\kappa_{Z_0}'''(\bar{s}) &= -\kappa_{\gamma}'''(1-\bar{s}) = -\kappa_{\gamma}'''(s).\label{eqn:HT-sp-kappa0-3}
\end{align}
Then, using \refE{HT-sp-kappa0-0}-\refE{HT-sp-kappa0-3} in \refE{sp-expansion-2}, via the change of variable $\bar{s} \leftrightarrow s$ with $\bar{s} = 1-s$, we obtain that
\begin{align}
\Pr\bigl[ j(\Y_{0}) \leq t' \bigr]
= \openone[s>1] + a(s,\gamma) e^{n(\kappa_{\gamma}(s)+(1-s)\kappa_{\gamma}'(s))}\label{eqn:Pj0-sp}
\end{align}
where $s$ satisfies $\kappa_{\gamma}'(s)=t'/n$.
Proceeding analogously for $\bar{Z}_{1,n} = \frac{1}{n} j(\Y_{1}) = \frac{1}{n} \sum_{\ell=1}^n j(Y_{1,\ell})$, using \refE{sp-expansion-1} and \refE{HT-sp-kappa1}, we obtain
\begin{align}
\Pr\bigl[ j(\Y_{1}) > t' \bigr]
= \openone[s<0] + b(s,\gamma) e^{n(\kappa_{\gamma}(s)-s\kappa_{\gamma}'(s))}\label{eqn:Pj1-sp}
\end{align}
where $s$ satisfies again $\kappa_{\gamma}'(s)=t'/n$.
We replace \refE{Pj0-sp} and \refE{Pj1-sp} in \refE{HT-im-formulation} and change the optimization variable from $t'$ to $s$ according to the relation $t'= n\kappa_{\gamma}'(s)$.
Since $e^{t'}$ in \refE{HT-im-formulation} becomes $e^{n\kappa_{\gamma}'(s)}$ under this change, \refE{HT-im-formulation} turns into
\begin{align}
\alpha_{\beta}\bigl(\varphi_{\sqrt{\gamma},\sigma}^n, \varphi_{0,\theta}^n \bigr)
= \max_{s} \Bigl\{&\openone[s>1] + a(s,\gamma) e^{n(\kappa_{\gamma}(s)+(1-s)\kappa_{\gamma}'(s))}
\notag\\&
+ e^{n\kappa_{\gamma}'(s)} \bigl( \openone[s<0] + b(s,\gamma) e^{n(\kappa_{\gamma}(s)-s\kappa_{\gamma}'(s))}
- \beta \bigr)\Bigr\}.
\label{eqn:HT-sp-formulation-1}
\end{align}
The result then follows from \refE{HT-sp-formulation-1} by reorganizing terms.
\end{IEEEproof}
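Before turning to simpler approximations, it is instructive to see the expansion \refE{sp-expansion-1} at work in a toy case. The following minimal Python sketch (illustrative only; it is not part of the proof and the variable names are ours) compares \refE{sp-expansion-1} with the exact tail probability for i.i.d.\ unit-mean exponential variables, whose cumulant generating function $\kappa_Z(s)=-\log(1-s)$ and its derivatives are available in closed form:
\begin{verbatim}
# Sanity check of the tail expansion (sp-expansion-1) for i.i.d. Exp(1)
# variables; their sum is Erlang(n), so the exact tail is available.
import numpy as np
from scipy.special import erfcx
from scipy.stats import gamma

def Psi(lam):
    # Psi(lam) = Q(|lam|) exp(lam^2/2), via the scaled complementary
    # error function erfcx to avoid overflow for large |lam|
    return 0.5 * erfcx(np.abs(lam) / np.sqrt(2.0))

n, t = 50, 1.5                    # threshold above E[Z] = 1
s = 1.0 - 1.0 / t                 # saddlepoint: kappa'(s) = 1/(1-s) = t
kap = -np.log(1.0 - s)            # kappa(s)
d2, d3 = (1.0 - s)**-2, 2.0 * (1.0 - s)**-3
lam = s * np.sqrt(n * d2)
approx = np.exp(n * (kap - s * t)) * (
    Psi(lam) + n * s**3 / 6.0
    * ((1.0 / lam - lam**-3) / np.sqrt(2.0 * np.pi) - Psi(lam)) * d3)
exact = gamma.sf(n * t, a=n)      # Erlang(n) tail at n*t
print(exact, approx)              # the two values agree closely
\end{verbatim}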
The refined expressions from \refE{sp-an}-\refE{sp-bn} are needed to obtain an error term $o\bigl(n^{-\frac{1}{2}}\bigr)$ that is uniform in~$s$. For practical purposes, however, the function $f(\beta,\gamma)$ can be approximated using \refE{HT-sp-formulation} from Theorem \ref{thm:alpha-beta-sp-formulation} with $a(s,\gamma)$ and $b(s,\gamma)$ replaced by the simpler expressions
\begin{align}
\hat{a}(s,\gamma) &\triangleq \sgn(1-s) \Psi\bigl(\lambda_a(s)\bigr),\qquad
\hat{b}(s,\gamma) \triangleq \sgn(s) \Psi\bigl(\lambda_b(s)\bigr),
\end{align}
respectively. This approximation yields accurate results for blocklengths as short as $n=20$ (see \cite{isit18} for details),
while still providing an approximation error of order $o\bigl(n^{-\frac{1}{2}}\bigr)$ for values of $s$ satisfying $0 < s_0 \leq s \leq s_1 < 1$.
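As an illustration, the following Python sketch evaluates this simplified approximation of $f(\beta,\gamma)$ by a naive grid search over $s\in(0,1)$, where the indicator terms in \refE{HT-sp-formulation} vanish. The cumulant generating function and its derivatives are those of \refE{d0kappa-Gaussian}-\refE{d2kappa-Gaussian} for fixed $\theta$, and $\Psi$ is computed through the scaled complementary error function. This is a minimal sketch under these assumptions, not the code used to produce the figures in Section~\ref{sec:numerical}:
\begin{verbatim}
# Simplified saddlepoint approximation of f(beta, gamma) with a_hat, b_hat,
# via grid search over s in (0,1). Illustrative sketch only.
import numpy as np
from scipy.special import erfcx

def Psi(lam):
    # Psi(lam) = Q(|lam|) exp(lam^2/2) without overflow
    return 0.5 * erfcx(np.abs(lam) / np.sqrt(2.0))

def kappa_derivs(s, gamma, sigma2, theta2):
    # kappa_gamma(s) and its first two derivatives for fixed theta
    eta = s * theta2 + (1.0 - s) * sigma2
    k0 = gamma * s * (s - 1.0) / (2.0 * eta) + 0.5 * (
        s * np.log(theta2) + (1.0 - s) * np.log(sigma2) - np.log(eta))
    k1 = gamma * (s**2 * theta2 - (1.0 - s)**2 * sigma2) / (2.0 * eta**2) \
        - (theta2 - sigma2) / (2.0 * eta) + 0.5 * np.log(theta2 / sigma2)
    k2 = gamma * theta2 * sigma2 / eta**3 \
        + (theta2 - sigma2)**2 / (2.0 * eta**2)
    return k0, k1, k2

def f_hat(beta, gamma, n, sigma2, theta2,
          grid=np.linspace(1e-3, 1.0 - 1e-3, 999)):
    best = -np.inf
    for s in grid:
        k0, k1, k2 = kappa_derivs(s, gamma, sigma2, theta2)
        a_h = Psi((1.0 - s) * np.sqrt(n * k2))  # sgn(1-s) = +1 on (0,1)
        b_h = Psi(s * np.sqrt(n * k2))          # sgn(s)   = +1 on (0,1)
        best = max(best, (a_h + b_h) * np.exp(n * (k0 + (1.0 - s) * k1))
                   - beta * np.exp(n * k1))
    return best
\end{verbatim}
For instance, with $\sigma^2=1$ and the capacity-achieving variance, $f\bigl(\frac{1}{M},\Upsilon\bigr)$ is approximated by \texttt{f\_hat(1.0/M, Upsilon, n, 1.0, Upsilon + 1.0)}.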
\subsection{Exponent-achieving output distribution}
\label{sec:exponent-achieving}
Often, the variance of the auxiliary distribution in Theorems \ref{thm:PPV_lower_bound}, \ref{thm:maximal_lower_bound} and
\ref{thm:average_lower_bound} is chosen to be the variance of the capacity-achieving output distribution, $\theta^2=\Upsilon+\sigma^2$. While this choice of $\theta^2$ is adequate for rates approaching the capacity of the channel, it does not attain the sphere-packing exponent in general~\cite{Shannon67I}.
An auxiliary distribution that yields the right exponential behavior in these bounds is the so-called exponent-achieving output distribution.\footnote{If we restrict the auxiliary output distribution to be product, the exponent-achieving output distribution is unique in the sense that the resulting hypothesis-testing bound attains the sphere-packing exponent~\cite{Shannon67I}. Nevertheless, using other non-product auxiliary distributions can yield the right exponential behavior. One example is the optimal distribution in the meta-converse for the AWGN channel under an equal power constraint discussed in Section~\ref{sec:equal-metaconverse}, which recovers Shannon cone-packing bound with the sphere-packing exponent.}
The exponent-achieving output distribution for the power-constrained AWGN channel is precisely the zero-mean i.i.d. Gaussian distribution defined in \refE{Q-theta-def} with the variance given by $\theta^2 = \tilde\theta_s^2$ where~\cite[Sec. 6.2]{Nakiboglu19-Augustin}
\begin{align}\label{eqn:theta_s-def}
\tilde\theta_s^2 = \sigma^2 + \frac{\gamma}{2} - \frac{\sigma^2}{2 s} + \sqrt{\biggl(\frac{\gamma}{2}-\frac{\sigma^2}{2 s}\biggr)^2 + \gamma\sigma^2}.
\end{align}
Here, $\gamma=\Upsilon$ is the power constraint and the parameter $s$ is the result of optimizing the sphere-packing exponent for a given transmission rate. Specifically, the sphere-packing exponent can be computed as~\cite{Nakiboglu19-SP}
\begin{align}\label{eqn:Esp-def}
E_{\text{sp}}(R)
&\triangleq \sup_{s\in(0,1)} \biggl\{ \frac{1-s}{s} \bigl( C_{s,W,\Upsilon} - R \bigr) \biggr\},
\end{align}
where $R$ denotes the transmission rate $R = \frac{1}{n}\log{M}$ in nats/channel use and $C_{s,W,\Upsilon}$ is the so-called Augustin capacity~\cite{Nakiboglu19-Augustin}, which for the power-constrained AWGN channel is given by~\cite[Eq.~(126)]{Nakiboglu19-Augustin}
\begin{align}\label{eqn:CAug}
C_{s,W,\Upsilon} =
\begin{cases}
\frac{s\Upsilon}{2\tilde\eta(s)} + \frac{1}{s-1}\log\frac{\tilde\theta_s^s \sigma^{1-s}}{\sqrt{\tilde\eta(s)}}, & s\geq0, s\neq 1,\\
\frac{1}{2}\log\left(1+\frac{\Upsilon}{\sigma^2} \right), & s= 1,
\end{cases}
\end{align}
where $\tilde\eta(s) \triangleq s\tilde\theta_s^2 + (1-s) \sigma^2$.
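For concreteness, a direct numerical evaluation of \refE{theta_s-def}-\refE{CAug} and of the sphere-packing exponent \refE{Esp-def} can be sketched as follows, approximating the supremum over $s$ by a grid search (rates are in nats, as in \refE{Esp-def}; the function names are ours):
\begin{verbatim}
# Exponent-achieving variance (theta_s-def), Augustin capacity (CAug) and
# sphere-packing exponent (Esp-def). Illustrative sketch only.
import numpy as np

def theta_s2(s, gamma, sigma2):
    # exponent-achieving output variance, eqn (theta_s-def)
    c = gamma / 2.0 - sigma2 / (2.0 * s)
    return sigma2 + gamma / 2.0 - sigma2 / (2.0 * s) \
        + np.sqrt(c**2 + gamma * sigma2)

def augustin_capacity(s, gamma, sigma2):
    # C_{s,W,Upsilon} for the power-constrained AWGN channel, s != 1
    t2 = theta_s2(s, gamma, sigma2)
    eta = s * t2 + (1.0 - s) * sigma2
    log_term = 0.5 * (s * np.log(t2) + (1.0 - s) * np.log(sigma2)
                      - np.log(eta))
    return s * gamma / (2.0 * eta) + log_term / (s - 1.0)

def E_sp(R, gamma, sigma2, grid=np.linspace(1e-3, 1.0 - 1e-3, 999)):
    # sup over s in (0,1) of (1-s)/s * (C_{s,W,Upsilon} - R), R in nats
    return max((1.0 - s) / s * (augustin_capacity(s, gamma, sigma2) - R)
               for s in grid)
\end{verbatim}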
For transmission rates approaching the channel capacity, the optimal value of $s$ in \refE{Esp-def} tends to $1$, and therefore $\tilde\theta_s^2 \to \tilde\theta_1^2 = \Upsilon +\sigma^2$. Hence, $\varphi_{0,\theta}^n(\y)$ becomes the capacity-achieving output distribution used in \cite[Th. 41]{Pol10}.
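Indeed, setting $s=1$ in \refE{theta_s-def} with $\gamma=\Upsilon$ and completing the square under the root gives
\begin{align}
\tilde\theta_1^2 = \frac{\Upsilon+\sigma^2}{2} + \sqrt{\Bigl(\frac{\Upsilon-\sigma^2}{2}\Bigr)^2 + \Upsilon\sigma^2}
= \frac{\Upsilon+\sigma^2}{2} + \frac{\Upsilon+\sigma^2}{2} = \Upsilon+\sigma^2,
\end{align}
since $(\Upsilon-\sigma^2)^2 + 4\Upsilon\sigma^2 = (\Upsilon+\sigma^2)^2$.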
In principle, we can solve \refE{Esp-def} to determine the asymptotically optimal value of $s$, and then use it in the variance \refE{theta_s-def}. Nevertheless, the saddlepoint expansion from Theorem~\ref{thm:alpha-beta-sp-formulation} allows us to introduce a dependence of $\theta^2$ on the auxiliary parameter~$s$ without incurring extra computational cost, as the next result shows.
For $\tilde\eta(s) = s\tilde\theta_s^2 + (1-s) \sigma^2$ we define
\begin{align}
\tilde\kappa_{\gamma}(s)
&\triangleq {\gamma} \frac{s(s-1)}{2\tilde\eta(s)}+\log\frac{\tilde\theta_s^{s}\sigma^{1-s}}{\sqrt{\tilde\eta(s)}},\label{eqn:d0tkappa-Gaussian}\\
\tilde\kappa_{\gamma}^{(1)}(s) &\triangleq {\gamma} \frac{s^2 \tilde\theta_s^2 - (1-s)^2 \sigma^2}{2\tilde\eta(s)^2} -\frac{\tilde\theta_s^2 - \sigma^2}{2 \tilde\eta(s)} + \log\frac{\tilde\theta_s}{\sigma},
\label{eqn:d1tkappa-Gaussian}\\
\tilde\kappa_{\gamma}^{(2)}(s) &\triangleq \gamma \frac{\tilde\theta_s^2\sigma^2}{\tilde\eta(s)^3}
+ \frac{(\tilde\theta_s^2 - \sigma^2)^2}{2\tilde\eta(s)^2},\label{eqn:d2tkappa-Gaussian}\\
\tilde\kappa_{\gamma}^{(3)}(s) &\triangleq - \left( \gamma \frac{3\tilde\theta_s^2\sigma^2 (\tilde\theta_s^2 - \sigma^2)}{\tilde\eta(s)^4}
+ \frac{(\tilde\theta_s^2 - \sigma^2)^3}{2\tilde\eta(s)^3}\right).\label{eqn:d3tkappa-Gaussian}
\end{align}
The definitions \refE{d0tkappa-Gaussian}-\refE{d3tkappa-Gaussian} correspond to \refE{d0kappa-Gaussian}-\refE{d3kappa-Gaussian} after setting $\theta=\tilde\theta_s$.
Note that \refE{d1tkappa-Gaussian}-\refE{d3tkappa-Gaussian} do not coincide with the derivatives of \refE{d0tkappa-Gaussian} in general, since in obtaining \refE{d1kappa-Gaussian}-\refE{d3kappa-Gaussian} we have considered $\theta$ to be independent of $s$.
\begin{corollary}[Exponent-achieving saddlepoint expansion]\label{cor:alpha-beta-sp-exponent}
Let $\sigma>0$ and $n\geq 1$ be fixed parameters.
Then,
\begin{align}
\max_{\theta\geq\sigma} \Bigl\{
\alpha_{\beta} \bigl(\varphi_{\sqrt{\gamma},\sigma}^n, \varphi_{0,\theta}^n \bigr) \Bigr\} \geq \tilde{f}(\beta,\gamma),\label{eqn:alpha-beta-sp-exponent}
\end{align}
where the function $\tilde{f}(\beta,\gamma)$ is defined as
\begin{align}
\tilde{f}(\beta,\gamma) &\triangleq \max_{s} \Bigl\{ \bigl(\tilde{a}(s,\gamma) + \tilde{b}(s,\gamma)\bigr) e^{n(\tilde\kappa_{\gamma}(s)+(1-s)\tilde\kappa_{\gamma}^{(1)}(s))}
+ \openone[s>1] + \bigl(\openone[s<0] - \beta\bigr) e^{n\tilde\kappa_{\gamma}^{(1)}(s)}\!\Bigr\},\label{eqn:HT-sp-exponent}\\
\tilde{a}(s,\gamma) &= \sgn(1-s) \left(\Psi\bigl(\tilde\lambda_a(s)\bigr) + \frac{n (s-1)^3}{6} \left(\frac{\tilde\lambda_a(s)^{-1}-\tilde\lambda_a(s)^{-3}}{\sqrt{2\pi}} - \Psi\bigl(\tilde\lambda_a(s)\bigr)\right) \tilde\kappa_{\gamma}^{(3)}(s)\right) + o\bigl(n^{-\frac{1}{2}}\bigr),\label{eqn:sp-tan}\\
\tilde{b}(s,\gamma) &= \sgn(s) \left(\Psi\bigl(\tilde\lambda_b(s)\bigr) + \frac{n s^3}{6} \left(\frac{\tilde\lambda_b(s)^{-1}-\tilde\lambda_b(s)^{-3}}{\sqrt{2\pi}} - \Psi\bigl(\tilde\lambda_b(s)\bigr)\right) \tilde\kappa_{\gamma}^{(3)}(s)\right) + o\bigl(n^{-\frac{1}{2}}\bigr),\label{eqn:sp-tbn}
\end{align}
with $\tilde\lambda_{a}(s) \triangleq |1-s|\sqrt{n \tilde\kappa_{\gamma}^{(2)}(s)}$ and $\tilde\lambda_{b}(s) \triangleq |s|\sqrt{n \tilde\kappa_{\gamma}^{(2)}(s)}$.
\end{corollary}
\begin{IEEEproof}
Using \refE{HT-sp-formulation-1}, it follows that
\begin{align}
\max_{\theta\geq\sigma}&\Bigl\{
\alpha_{\beta} \bigl(\varphi_{\sqrt{\gamma},\sigma}^n, \varphi_{0,\theta}^n \bigr) \Bigr\}
\notag\\
&= \max_{\theta\geq\sigma} \max_{s} \Bigl\{ \bigl(a(s,\gamma)+b(s,\gamma)\bigr) e^{n(\kappa_{\gamma}(s)+(1-s)\kappa_{\gamma}'(s))}
+ \openone[s>1] + \bigl(\openone[s<0] - \beta\bigr) e^{n\kappa_{\gamma}'(s)}\!\Bigr\}\\
&= \max_{s} \max_{\theta\geq\sigma} \Bigl\{ \bigl(a(s,\gamma)+b(s,\gamma)\bigr) e^{n(\kappa_{\gamma}(s)+(1-s)\kappa_{\gamma}'(s))}
+ \openone[s>1] + \bigl(\openone[s<0] - \beta\bigr) e^{n\kappa_{\gamma}'(s)}\!\Bigr\}.
\end{align}
We can fix a value $\theta\geq\sigma$ and obtain a lower bound to $\max_{\theta\geq\sigma}\bigl\{\alpha_{\beta} \bigl(\varphi_{\sqrt{\gamma},\sigma}^n, \varphi_{0,\theta}^n \bigr) \bigr\}$.
Moreover, as the maximization over $\theta$ is inside the maximization over $s$, the chosen value for $\theta$ may depend on $s$. Then, letting $\theta=\tilde\theta_s$ in the inner maximization we obtain the desired result.
\end{IEEEproof}
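In practice, this choice amounts to re-evaluating the same saddlepoint expressions with a variance that varies along the grid of $s$. A minimal sketch, reusing the helpers \texttt{Psi} and \texttt{kappa\_derivs} from the earlier sketch of $f$ and \texttt{theta\_s2} from the sketch of \refE{theta_s-def}, and again with $\hat{a},\hat{b}$ in place of $\tilde{a},\tilde{b}$:
\begin{verbatim}
# Exponent-achieving version of the simplified saddlepoint approximation:
# identical grid search, but theta^2 = theta_s2(s) now depends on s. The
# expressions (d1tkappa)-(d3tkappa) are the fixed-theta derivatives
# evaluated at theta = theta_s, so kappa_derivs can be reused as is.
import numpy as np

def f_tilde(beta, gamma, n, sigma2,
            grid=np.linspace(1e-3, 1.0 - 1e-3, 999)):
    best = -np.inf
    for s in grid:
        t2 = theta_s2(s, gamma, sigma2)          # from (theta_s-def)
        k0, k1, k2 = kappa_derivs(s, gamma, sigma2, t2)
        val = (Psi((1.0 - s) * np.sqrt(n * k2)) + Psi(s * np.sqrt(n * k2))) \
            * np.exp(n * (k0 + (1.0 - s) * k1)) - beta * np.exp(n * k1)
        best = max(best, val)
    return best
\end{verbatim}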
According to Corollary~\ref{cor:alpha-beta-sp-exponent}, we can use $\tilde{f}\bigl(\frac{1}{M},\gamma\bigr)$ instead of $f\bigl(\frac{1}{M},\gamma\bigr)$ in Theorems \ref{thm:PPV_lower_bound}, \ref{thm:maximal_lower_bound} and Corollary~\ref{cor:average_metaconverse_bound}.
In Theorem~\ref{thm:average_lower_bound}, however, the convex envelope needs to be evaluated for a fixed variance $\theta^2$, which can be the capacity-achieving $\theta^2 = \Upsilon+\sigma^2$, the exponent-achieving $\theta^2=\tilde\theta_s^2$ for the asymptotic value of $s$ optimizing~\refE{Esp-def},
or $\theta^2=\tilde\theta_s^2$ for the value of $s$ optimizing \refE{HT-sp-exponent} for the system parameters under consideration.
\section{Numerical Examples}
\label{sec:numerical}
\begin{figure}[t]%
\vspace{-0.3mm}
\centering\input{plots/awgn-bounds-equal-n-R1.5-snr10dB.tikz}%
\caption{Upper and lower bounds to the channel coding error probability for an AWGN channel under an equal power constraint. System parameters: $\text{SNR}=10$~dB, rate $R = 1.5$ bits/channel use.}\vspace{-2mm}\label{fig:equal-AWGN-Pevsn-snr10dB}
\end{figure}%
\subsection{Comparison with previous results under the different power constraints}
\label{sec:numerical-2}
We consider the transmission of $M = 2^{nR}$ codewords over $n$ uses of an AWGN channel with $R=1.5$ bits/channel use and $\text{SNR}=10\log_{10}\frac{\Upsilon}{\sigma^2}=10$ dB. The channel capacity is $C = \frac{1}{2}\log_2\bigl(1+\frac{\Upsilon}{\sigma^2}\bigr) \approx 1.73$ bits/channel use under the three power constraints considered.
\begin{figure}[t]%
\vspace{-0.3mm}
\centering\input{plots/awgn-bounds-maximal-n-R1.5-snr10dB.tikz}%
\caption{Upper and lower bounds to the channel coding error probability for an AWGN channel under a maximal power constraint. System parameters: $\text{SNR}=10$~dB, rate $R = 1.5$ bits/channel use.}\vspace{-2mm}\label{fig:maximal-AWGN-Pevsn-snr10dB}
\end{figure}%
\subsubsection{Equal power constraint}
In \refF{equal-AWGN-Pevsn-snr10dB}, we compare the lower bounds discussed in \refS{equal} for the AWGN channel under an equal power constraint.
For reference, we include the achievability part of \cite[Eq.~(20)]{Shannon59}, which was derived for an equal power limitation and, being an achievability result, therefore also applies under maximal and average power constraints. We observe that Shannon cone-packing bound from Theorem~\ref{thm:shannon_lower_bound} is the tightest lower bound in this setting.
As discussed in \refS{equal-shannon}, by considering the optimal auxiliary (non-product) distribution, the meta-converse bound \refE{metaconverse} recovers the cone-packing bound and therefore, it coincides with the curve from Theorem~\ref{thm:shannon_lower_bound}.
The hypothesis testing bound with an auxiliary Gaussian distribution from Theorem~\ref{thm:PPV_lower_bound} is slightly weaker.
Since the rate of the system $R=1.5$ bits/channel use is relatively close to the channel capacity
$C \approx 1.73$ bits/channel use, there is not much gain in this example from considering an auxiliary distribution equal to the exponent-achieving output distribution with $\theta=\tilde\theta_s$ (see Section~\ref{sec:exponent-achieving}). Note, however, that both the capacity- and exponent-achieving output distributions are product distributions.
\subsubsection{Maximal power constraint} For the same system parameters as in the previous example, Fig.~\ref{fig:maximal-AWGN-Pevsn-snr10dB} compares different lower bounds derived under a maximal power constraint. In particular, we consider the combination of Theorem~\ref{thm:shannon_lower_bound} with \refE{relations-1}, the slightly sharper Corollary~\ref{cor:shannon_lower_bound} and the hypothesis testing bound from
Theorem~\ref{thm:maximal_lower_bound}
with $\theta=\tilde\theta_s$ as defined in \refE{theta_s-def}.
In the figure, we can see that the tightest bound in this setting corresponds to the hypothesis testing bound from
Theorem~\ref{thm:maximal_lower_bound}.
Applying the relation \refE{relations-1} to extend the cone-packing bound from Theorem~\ref{thm:shannon_lower_bound} to a maximal power constraint incurs a certain loss, which can be slightly reduced as shown in Corollary~\ref{cor:shannon_lower_bound}.
In the figure we observe that the three curves present the same asymptotic slope as they feature the same error exponent.
\begin{figure}[t]%
\vspace{-0.3mm}
\centering\input{plots/awgn-bounds-average-n-R1.5-snr10dB.tikz}%
\caption{Upper and lower bounds to the channel coding error probability for an AWGN channel under an average power constraint. System parameters: $\text{SNR}=10$~dB, rate $R = 1.5$ bits/channel use.}\vspace{-2mm}\label{fig:average-AWGN-Pevsn-snr10dB}
\end{figure}%
\subsubsection{Average power constraint}
We compare now the bounds for an average power constraint. In particular, we consider the combination of Theorem~\ref{thm:shannon_lower_bound} with \refE{relations-1}-\refE{relations-2},
the combination of Corollary~\ref{cor:shannon_lower_bound} with \refE{relations-2} and the hypothesis testing bound from
Theorem~\ref{thm:average_lower_bound}
with $\theta=\tilde\theta_s$ as defined in \refE{theta_s-def}.
For this set of system parameters, the condition in Corollary~\ref{cor:average_metaconverse_bound} is satisfied for all $n$, and the simplified bound \refE{cor_average_metaconverse_bound} can be used to evaluate Theorem~\ref{thm:average_lower_bound}.
The hypothesis testing bound is again the tightest bound, as shown in Fig.~\ref{fig:average-AWGN-Pevsn-snr10dB}.
The application of \refE{relations-2} to obtain bounds for an average power constraint incurs a large loss with respect to the corresponding direct hypothesis-testing bound.\footnote{All the bounds presented here hold under the average probability of error formalism. While the counterpart of \refE{relations-2} for maximal error probability is tighter (see~\cite[Lem.~65]{PolThesis}), the finite-length gap to Theorem~\ref{thm:average_lower_bound} is still significant.}
Since the condition in Corollary~\ref{cor:average_metaconverse_bound} is satisfied for all $n$, it follows that the bounds from Theorems~\ref{thm:PPV_lower_bound}, \ref{thm:maximal_lower_bound} and \ref{thm:average_lower_bound} coincide
in Figs.~\ref{fig:equal-AWGN-Pevsn-snr10dB}, \ref{fig:maximal-AWGN-Pevsn-snr10dB} and~\ref{fig:average-AWGN-Pevsn-snr10dB}.
While Shannon cone-packing bound is still the tightest in the equal power constraint setting, under both maximal and average power constraints the new hypothesis testing bounds yield tighter results.
Indeed, for an average power constraint the advantage of Theorem~\ref{thm:average_lower_bound} over previous results is significant in the finite blocklength regime, as shown in Fig.~\ref{fig:average-AWGN-Pevsn-snr10dB}.
\subsection{Exponent-achieving output distribution and numerical evaluation}
In the previous examples, the transmission rate was very close to the channel capacity. Therefore, using the exponent-achieving or the capacity-achieving output distribution in the hypothesis-testing bounds did not result in significant differences.
We consider now a power-constrained AWGN channel with $\text{SNR}=10\log_{10}\frac{\Upsilon}{\sigma^2}=5$~dB. The asymptotic capacity of this channel is $C\approx 1.03$ bits/channel use and its critical rate is $R_{\text{cr}}\approx 0.577$ bits/channel use.\footnote{The critical rate of a channel is defined as the point below which the sphere-packing exponent and the random-coding exponent start to diverge~\cite{Gall68}. For the power-constrained AWGN channel this point corresponds to the rate at which the maximum in \refE{Esp-def} is attained for $s=\frac{1}{2}.$}
\begin{figure}[t]%
\centering\input{plots/awgn-bounds-n-R058-snr5dB.tikz}%
\caption{Upper and lower bounds to the channel coding error probability over an AWGN channel with $\text{SNR} = 5$~dB and $R= 0.58$ bits/channel use.}\label{fig:AWGN-Pevsn-R058-snr5dB}
\end{figure}%
\refFig{AWGN-Pevsn-R058-snr5dB} shows several of the bounds studied in the previous examples for $M = \lceil 2^{nR} \rceil$ messages with $R=0.58$ bits/channel use.
This transmission rate is slightly above the critical rate of the channel. We first note that the gap between the achievability part of \cite[Eq.~(20)]{Shannon59} and the converse bounds is larger than in the examples from Figures~\ref{fig:equal-AWGN-Pevsn-snr10dB}, \ref{fig:maximal-AWGN-Pevsn-snr10dB} and \ref{fig:average-AWGN-Pevsn-snr10dB}.
For an equal power constraint, Shannon cone-packing bound from Theorem~\ref{thm:shannon_lower_bound} is still the tightest bound. Nevertheless, the gap between Theorem~\ref{thm:shannon_lower_bound} and the curve of Theorem~\ref{thm:average_lower_bound} with $\theta^2=\tilde\theta_s^2$ in \refE{theta_s-def} for the value of $s$ optimizing \refE{HT-sp-exponent} is very small, and the latter is valid under the three power constraints considered. In this example, Corollary~\ref{cor:shannon_lower_bound} and Corollary~\ref{cor:shannon_lower_bound} combined with \refE{relations-2} yield weaker and much weaker bounds, respectively.
An error exponent analysis of these bounds shows that the asymptotic slope of the hypothesis-testing lower bound from Theorem~\ref{thm:average_lower_bound} with $\tilde\theta_s^2$ coincides with that of Shannon cone-packing lower bound from Theorem~\ref{thm:shannon_lower_bound}, hence both curves are parallel. In contrast, the curve with $\theta^2 = \Upsilon +\sigma^2$ presents a (slightly) larger error exponent and hence this bound will diverge as $n$ grows and become increasingly weaker. By using the value $\theta^2 = \tilde\theta_s^2$, we obtain not only the sphere-packing exponent but also tighter finite-length bounds, as shown in the figure.
\begin{figure}[t]%
\centering\input{plots/awgn-bounds-n-Pe1e-6-snr5dB.tikz}%
\caption{Bounds to the transmission rate in an AWGN channel with $\text{SNR} = 5$~dB and error probability $\Pe = 10^{-6}$.}\label{fig:AWGN-Rvsn-snr5dB}
\end{figure}%
The observations from \refF{AWGN-Pevsn-R058-snr5dB} are complemented in \refF{AWGN-Rvsn-snr5dB}. This figure analyzes the highest transmission rate versus the blocklength for a given error probability $\Pe=10^{-6}$.
The bounds included in this figure are precisely those from \refF{AWGN-Pevsn-R058-snr5dB}. For reference, we have also included the asymptotic channel capacity $C$ and the condition from Corollary~\ref{cor:average_metaconverse_bound} as an upper bound on the transmission rate of the system $R \leq \bar{R}_n = \frac{1}{n}\log_2 \bar{M}_n$.
In this plot, the upper bounds from Theorem~\ref{thm:shannon_lower_bound}, Corollary~\ref{cor:shannon_lower_bound} and Theorem~\ref{thm:average_lower_bound} with $\theta^2=\tilde\theta_s^2$ are almost indistinguishable from each other. Note, however, that Theorem~\ref{thm:shannon_lower_bound} was derived for an equal power constraint, Corollary~\ref{cor:shannon_lower_bound} for a maximal power constraint and Theorem~\ref{thm:average_lower_bound} for an average power constraint.
Comparing these bounds with the achievability bound \cite[Eq.~(20)]{Shannon59}, we observe that the behavior of the transmission rate approaching capacity is precisely characterized for blocklengths $n\geq 30$.
The upper bound from Theorem~\ref{thm:average_lower_bound} with $\theta^2=\Upsilon+\sigma^2$ is slightly looser than the one based on the exponent-achieving output distribution, and Corollary~\ref{cor:shannon_lower_bound} combined with \refE{relations-2} yields a much weaker bound for average power constraints.
The condition from Corollary~\ref{cor:average_metaconverse_bound} shows that the hypothesis testing bound \refE{cor_average_metaconverse_bound} can be used to evaluate Theorem~\ref{thm:average_lower_bound} in the range of values of $n$ considered, as $\bar{R}_n = \frac{1}{n}\log_2 \bar{M}_n$ is well above the Theorem~\ref{thm:average_lower_bound} curves.
\begin{figure}[t]%
\centering\input{plots/awgn-bounds-n-R08-snr5dB.tikz}%
\caption{Lower bounds to the channel coding error probability over an AWGN channel with $\text{SNR} = 5$~dB and $R= 0.8$ bits/channel use.
The bounds from Theorem~\ref{thm:average_lower_bound} have been evaluated using Theorem~\ref{thm:alpha-beta-opt} (lines) and using Theorem~\ref{thm:alpha-beta-sp-formulation} disregarding the small-o terms (markers $\bullet$). }\label{fig:AWGN-Pevsn-R08-snr5dB}
\end{figure}%
To evaluate the accuracy of the saddlepoint expansion introduced in Theorem~\ref{thm:alpha-beta-sp-formulation}, we compare in \refF{AWGN-Pevsn-R08-snr5dB} the exact hypothesis-testing bound computed according to Theorem~\ref{thm:alpha-beta-opt} (lines) with the approximation that follows from disregarding the $o\bigl(n^{-\frac{1}{2}}\bigr)$ terms in Theorem~\ref{thm:alpha-beta-sp-formulation} (markers $\bullet$). We can see that, for both the capacity-achieving variance $\theta^2$ and the exponent-achieving variance $\tilde\theta_s^2$, the approximation is accurate for blocklengths as short as $n=10$. The approximation is expected to remain accurate for larger values of $n$, for which direct numerical evaluation of the Marcum-$Q$ functions becomes infeasible using traditional methods.
If we compare the curves of the cone-packing bound from Theorem~\ref{thm:shannon_lower_bound} and
the hypothesis testing bounds from Theorems~\ref{thm:PPV_lower_bound},~\ref{thm:maximal_lower_bound} and
~\ref{thm:average_lower_bound} in Figs.~\ref{fig:equal-AWGN-Pevsn-snr10dB}-\ref{fig:AWGN-Pevsn-R08-snr5dB}, we only observe a small difference. For practical purposes, it may thus be sufficient to use Theorem~\ref{thm:average_lower_bound} as a lower bound (since it was derived under an average power constraint, it applies under equal, maximal and average power limitations) and the achievability part of \cite[Eq. (20)]{Shannon59} as an upper bound (this bound was derived assuming an equal power constraint and, being an achievability result, it also applies under maximal and average power limitations). The saddlepoint approximation from Theorem~\ref{thm:alpha-beta-sp-formulation} is accurate for values of $n\geq 10$ and can be safely applied in the evaluation of Theorem~\ref{thm:average_lower_bound}.
\subsection{Constellation design for uncoded transmission ($n=2$)}
\label{sec:constellations}
In the last example of this section, we consider the problem of transmitting $M\geq 2$ codewords over $n=2$ uses of an AWGN channel with $\text{SNR}=10$~dB. This problem corresponds to finding the best constellation for an uncoded quadrature communication system.
\begin{figure}[t]%
\centering\input{plots/awgn-bounds-M-n2-snr10dB.tikz}%
\caption{Lower bounds to the channel coding error probability over an AWGN channel with $n=2$ and $\text{SNR}=10$~dB. Markers show the simulated error probability of a sequence of codes satisfying an equal ($\circ$), maximal ($\times$) and average ($\bullet$) power constraints. Vertical line corresponds to the boundary $M\leq \bar{M}$ from Corollary~\ref{cor:average_metaconverse_bound}. }\label{fig:AWGN-PevsM-n2-snr10dB}
\end{figure}%
Figure~\ref{fig:AWGN-PevsM-n2-snr10dB} depicts Shannon'59 lower bound from Theorem \ref{thm:shannon_lower_bound}, valid under an equal power constraint,
the bound from Theorem \ref{thm:maximal_lower_bound} for $\theta^2=\Upsilon+\sigma^2$, which is valid under a maximal power constraint,
and that from Theorem \ref{thm:average_lower_bound}, valid under an average power constraint. The vertical line shows the boundary of the region $M \leq \bar{M}$ defined in Corollary~\ref{cor:average_metaconverse_bound} where the bounds from Theorems \ref{thm:maximal_lower_bound} and \ref{thm:average_lower_bound} coincide. With markers, we show the simulated ML decoding error probability of a sequence of $M$-PSK (phase-shift keying) constellations satisfying the equal power constraint ($\circ$) and that of a sequence of $M$-APSK (amplitude-phase-shift keying) constellations satisfying maximal ($\times$) and average ($\bullet$) power constraints.\footnote{The parameters of the $M$-APSK constellations (number of rings, number of points, amplitude and phase of each ring) have been optimized to minimize the error probability $\Pe$ for each value of $M$. To this end, the constellation parameters are randomly chosen around their best known values, and only the constellations with lower error probability are used in the next iteration of the stochastic optimization algorithm.}
Since the ML decoding regions of an $M$-PSK constellation are precisely $2$-dimensional cones, Shannon'59 lower bound coincides with the corresponding simulated probability ($\circ$). However, Shannon'59 lower bound does not apply to $M$-APSK constellations satisfying maximal ($\times$) and average ($\bullet$) power constraints, as discussed in \refS{maximal}.
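The simulated curve for the equal power constraint can be reproduced with a few lines of code, since ML decoding in Gaussian noise reduces to minimum-distance decoding. The following Monte Carlo sketch simulates $M$-PSK over $n=2$ channel uses with $\sigma^2=1$ and $\|\x\|^2 = 2\Upsilon$ (illustrative only; the stochastic APSK optimization described in the footnote is not included):
\begin{verbatim}
# Monte Carlo ML (minimum-distance) decoding of M-PSK, n = 2, sigma^2 = 1
import numpy as np

rng = np.random.default_rng(0)

def psk_error_rate(M, snr_db, trials=100_000):
    snr = 10.0 ** (snr_db / 10.0)            # Upsilon / sigma^2
    phases = 2.0 * np.pi * np.arange(M) / M
    # codewords on the circle of squared radius n * Upsilon = 2 * snr
    const = np.sqrt(2.0 * snr) * np.stack(
        [np.cos(phases), np.sin(phases)], axis=1)
    sent = rng.integers(M, size=trials)
    y = const[sent] + rng.standard_normal((trials, 2))
    d2 = ((y[:, None, :] - const[None, :, :])**2).sum(axis=2)
    return np.mean(d2.argmin(axis=1) != sent)  # nearest-neighbor decision

print(psk_error_rate(16, snr_db=10.0))
\end{verbatim}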
We discuss now the results observed for codes satisfying maximal and average power constraints. We can see that while Theorem~\ref{thm:average_lower_bound} applies in both of these settings, this is not the case for Theorem~\ref{thm:maximal_lower_bound}, which in general only applies under a maximal power constraint. As stated in Corollary~\ref{cor:average_metaconverse_bound}, the bounds from Theorems~\ref{thm:maximal_lower_bound} and \ref{thm:average_lower_bound} coincide for $M\leq\bar{M} \approx 22.8$.
Above this point, the two bounds diverge, and we can see from the figure that the average power constrained code ($\bullet$) violates the bound from Theorem~\ref{thm:maximal_lower_bound} for $M>45$.
Analyzing the constellations that violate Theorem \ref{thm:maximal_lower_bound}, we observe that they present several symbols concentrated at the origin $(0,0)$.
As these symbols coincide, it is not possible to distinguish between them, and they will often yield a decoding error. However, since the symbol $(0,0)$ does not require any energy for its transmission, the average power available for the remaining constellation points is increased, and the code yields an overall smaller error probability. This effect was also observed in \cite[Sec. 4.3.3]{PolThesis}, where a code with several codewords concentrated at the origin was used to study the asymptotics of the error probability in an average power constrained AWGN channel.
Interestingly, this structure is also suggested by the input distribution that follows from the derivation of Theorem~\ref{thm:average_lower_bound}.
As discussed in \refS{mc-implicit-distribution}, the lower bound~\refE{average_lower_bound} in Theorem~\ref{thm:average_lower_bound} corresponds to the value of the convex envelope $\underline{f}$
at the point $\bigl(\tfrac{1}{M},\Upsilon\bigr)$.
Whenever $M>\bar{M}$, this convex envelope corresponds to a convex combination of the functions $f\bigl(\bar{\beta},0\bigr)$ and $f\bigl(\beta_0,\gamma_0\bigr)$ with $\gamma_0>\Upsilon$.
Therefore, the input distribution induced by Theorem~\ref{thm:average_lower_bound} is composed of a mass point at the origin and a uniform distribution over the spherical shell with squared radius $\gamma_0>\Upsilon$. While this distribution does not describe how the codewords of a good code are distributed over the space, it suggests that several codewords could be concentrated at $(0,0)$.
\section{Discussion}
\label{sec:discussion}
We studied the performance of block coding on an AWGN channel under different power limitations at the transmitter. In particular, we showed that the hypothesis-testing bound \cite[Th. 41]{Pol10}, which was originally derived under an equal power limitation, also holds under maximal power constraints (Theorem~\ref{thm:maximal_lower_bound}) and, for rates below a given threshold, under average power constraints (Corollary~\ref{cor:average_metaconverse_bound}). For rates close to and above capacity, we proposed a new bound using the convex envelope of the error probability of a certain binary hypothesis test (Theorem~\ref{thm:average_lower_bound}).
The performance bounds described above follow from the analysis of the meta-converse bound \cite[Th.~27]{Pol10}, which corresponds to the error probability of a surrogate hypothesis test between the distribution induced by the channel and a certain auxiliary distribution.
For the optimal auxiliary distribution and an equal power constraint, Polyanskiy showed in~\cite[Sec. VI.F]{Pol13} that the meta-converse bound recovers Shannon cone-packing bound \cite[Eq. (20)]{Shannon59}. In this work, however, we chose the auxiliary distribution to be an i.i.d. Gaussian distribution with zero mean and a certain variance.
If the variance is chosen to be capacity achieving, the resulting bound has a sub-optimal error exponent~\cite{Nakiboglu19-SP,Nakiboglu19-Augustin}.
Considering the variance of the exponent-achieving output distribution yields tighter finite-length bounds in general, which feature the sphere-packing exponent. Moreover, using the saddlepoint approximation from Theorem~\ref{thm:alpha-beta-sp-formulation}, it is possible to evaluate the bounds for the asymptotically optimal distribution without incurring extra computational cost (Corollary~\ref{cor:alpha-beta-sp-exponent}).
The numerical advantage of the new bounds compared to previous results in the literature is small for a maximal power constraint and significant for an average power constraint, as shown in Figures~\ref{fig:maximal-AWGN-Pevsn-snr10dB}-\ref{fig:AWGN-Rvsn-snr5dB}. Additionally, several of the theoretical contributions are of independent interest:
\begin{itemize}
\item We proposed a new geometric interpretation of \cite[Th. 41]{Pol10} which is analogous to the one in \cite{Shannon59}. The hypothesis testing bound \cite[Th. 41]{Pol10} can then be described as the probability of the noise moving a codeword $\x$ out of an $n$-dimensional sphere that roughly covers $1/M$-th of the output space. Interestingly, this sphere is not centered at the codeword $\x$ but at $\bigl(1+\frac{\sigma^2}{\Upsilon}\bigr)\x$.
\item This work addresses the optimization of the meta-converse bound over input distributions. While the results obtained are specific for an additive Gaussian noise channel, the techniques used can in principle be applied to more complicated channels, e.g., via the analysis of the saddlepoint expansion of the meta-converse~\cite{isit18}.
\item For an average power constraint and rates close to and above capacity, the input distribution that optimizes the meta-converse bound presents a mass point at the origin. This suggests that the optimal codes in this region must have several all-zero codewords (as occurs for the APSK constellations studied in \refS{constellations}) and motivates the fact that no strong converse exists for an average power limitation at the transmitter~\cite[Th. 77]{PolThesis}.
\item In Appendix~\ref{apx:f-beta-gamma}, we provide an exhaustive characterization of the error probability of a binary hypothesis test between two Gaussian distributions, which could be of interest in related problems.
\end{itemize}
In our derivations, we did not impose any structure on the codebooks beyond the corresponding power limitation. The results obtained are therefore general and do not require the codes to belong to a certain family, to use a specific modulation, or to satisfy minimum distance constraints. Nevertheless, the study of lower bounds for structured codes remains an active area of research (see, e.g.,~\cite{Sas08}). Tight lower bounds for BPSK modulations (or general $M$-PSK modulations) can be obtained from the meta-converse bound \cite[Th. 27]{Pol10} using the results from \cite{isit18}. Evaluation of the meta-converse bound for general modulations is still an open problem due to the combinatorial nature of the optimization over input distributions.
\section*{Acknowledgment}
Fruitful discussions with Bar{\i}\c{s} Nakibo\u{g}lu, Tobias Koch and David Morales-Jimenez are gratefully acknowledged.
\appendices
\section{Analysis of $f(\beta,\gamma) = \alpha_{\beta} \bigl(\varphi_{\sqrt{\gamma},\sigma}^n, \varphi_{0,\theta}^n \bigr)$}
\label{apx:f-beta-gamma}
\subsection{Parametric computation of $f(\beta,\gamma)$}
\label{apx:alpha-beta-marcumQ}
\begin{proposition}
\label{prop:alpha-beta-marcumQ}
Let $\sigma,\theta>0$ and $n\geq 1$ be fixed parameters, and define $\delta \triangleq \theta^2-\sigma^2$.
The trade-off between $\alpha$ and $\beta$ in $\alpha_{\beta} \bigl(\varphi_{\sqrt{\gamma},\sigma}^n, \varphi_{0,\theta}^n \bigr)$ admits the following parametric formulation as a function of the auxiliary parameter $t\geq 0$,
\begin{align}
\alpha(\gamma,t) &= Q_{\frac{n}{2}}\left(\sqrt{n\gamma}\frac{\sigma}{\delta},\frac{t}{\sigma} \right),\label{eqn:alpha-marcumQ}\\
\beta(\gamma,t) &= 1-Q_{\frac{n}{2}}\left(\sqrt{n\gamma}\frac{\theta}{\delta},\frac{t}{\theta} \right),
\label{eqn:beta-marcumQ}
\end{align}
where $Q_m(a,b)$ denotes the Marcum $Q$-function, defined in \refE{marcumQ-def}.
To compute $f(\beta,\gamma) = \alpha_{\beta} \bigl(\varphi_{\sqrt{\gamma},\sigma}^n, \varphi_{0,\theta}^n \bigr)$, let $t_{\star}$ satisfy $\beta(\gamma,t_{\star})=\beta$ according to \refE{beta-marcumQ}. Then, it holds that $f(\beta,\gamma) = \alpha(\gamma,t_{\star})$ according to \refE{alpha-marcumQ}.
\end{proposition}
\begin{IEEEproof}
The proof follows the lines of that of \cite[Th. 41]{Pol10}, and it is included here for completeness.\footnote{Note that the resulting trade-off \refE{alpha-marcumQ}-\refE{beta-marcumQ} is scale invariant provided that $\sigma^2$, $\theta^2$ and $\gamma$ are scaled by the same quantity. Therefore, Proposition~\ref{prop:alpha-beta-marcumQ} is not more general than \cite[Th. 41]{Pol10} by allowing $\sigma^2\neq 1$.}
Let $\sigma,\theta>0$ and $n\geq 1$ be fixed parameters. We define the log-likelihood ratio
\begin{align}
j(\y)
&\triangleq \log \frac{\varphi_{\sqrt{\gamma},\sigma}^n(\y)}{\varphi_{0,\theta}^n (\y)}\\
&= n \log\frac{\theta}{\sigma}
- \frac{1}{2} \sum_{i=1}^{n} \frac{\theta^2(y_i-\sqrt{\gamma})^2-\sigma^2 y_i^2}{\sigma^2 \theta^2}.
\label{eqn:jrho-def}
\end{align}
According to the Neyman-Pearson lemma, the trade-off $\alpha_{\beta} \bigl(\varphi_{\sqrt{\gamma},\sigma}^n, \varphi_{0,\theta}^n \bigr)$ admits the parametric form
\begin{align}
\alpha(t') = \Pr\bigl[ j(\Y_{0}) \leq t' \bigr],\label{eqn:alpha-jY}\\
\beta(t') = \Pr\bigl[ j(\Y_{1}) > t'\bigr],\label{eqn:beta-jY}
\end{align}
in terms of the auxiliary parameter $t'\in\RR$ and where $\Y_0 \sim \varphi_{\sqrt{\gamma},\sigma}^n$, $\Y_1 \sim \varphi_{0,\theta}^n$.
Using the change of variable $\z = (\y_0-\sqrt{\gamma})/\sigma$, we obtain that the distribution of the random variable $j(\Y_0)$, $\Y_0 \sim \varphi_{\sqrt{\gamma},\sigma}^n$ coincides with that of $j_0(\Z)$, $\Z \sim \varphi_{0,1}^n$, where
\begin{align}
j_{0}(\z) &\triangleq n \log\frac{\theta}{\sigma} +\frac{n}{2} \frac{\gamma}{\delta}-\frac{1}{2}\frac{\delta}{\theta^2} \sum_{i=1}^{n}\left(z_i-\frac{\sigma\sqrt{\gamma}}{\delta}\right)^2.\label{eqn:j0rho-def}
\end{align}
Analogously, if we define
\begin{align}
j_{1}(\z)
&\triangleq n \log\frac{\theta}{\sigma} +\frac{n}{2} \frac{\gamma}{\delta}-\frac{1}{2}\frac{\delta}{\sigma^2} \sum_{i=1}^{n}\left(z_i-\frac{\theta\sqrt{\gamma}}{\delta}\right)^2,\label{eqn:j1rho-def}
\end{align}
it follows that the distributions of $j(\Y_{1})$, $\Y_1 \sim \varphi_{0,\theta}^n$, and that of $j_1(\Z)$, $\Z \sim \varphi_{0,1}^n$ coincide.
Then, we may rewrite \refE{alpha-jY}-\refE{beta-jY} as
\begin{align}
\alpha(t') = \Pr\bigl[ j_{0}(\Z) \leq t' \bigr],\label{eqn:alpha-jZ}\\
\beta(t') = \Pr\bigl[ j_{1}(\Z) > t'\bigr],\label{eqn:beta-jZ}
\end{align}
where $\Z \sim \varphi_{0,1}^n$.
Given \refE{j0rho-def} and \refE{j1rho-def}, we conclude that $j_{0}(\Z)$ and $j_{1}(\Z)$ follow a (shifted and scaled) noncentral $\chi^2$ distribution with $n$ degrees of freedom and non-centrality parameters $n\gamma \sigma^2/\delta^2$ and $n\gamma \theta^2/\delta^2$, respectively.
Using \refE{j0rho-def} in \refE{alpha-jZ}, we obtain
\begin{align}
\alpha(t') &= \Pr\left[ n \log\frac{\theta}{\sigma} +\frac{n}{2} \frac{\gamma}{\delta}-\frac{1}{2}\frac{\delta}{\theta^2} \sum_{i=1}^{n}\left(Z_i-\frac{\sigma\sqrt{\gamma}}{\delta}\right)^2 \leq t' \right].\label{eqn:alpha-jZ-2}
\end{align}
We consider the change of variable $t'\leftrightarrow t$ such that
\begin{align}
t' = n\log\frac{\theta}{\sigma} + \frac{n}{2}\frac{\gamma}{\delta} - \frac{\delta t^2}{ 2\sigma^2\theta^2}.
\label{eqn:change-of-variable-t}
\end{align}
Using \refE{change-of-variable-t} in \refE{alpha-jZ-2} and
making the dependence on the parameter $\gamma$ explicit,
we obtain
\begin{align}
\alpha(\gamma,t) &= \Pr\left[ \sum_{i=1}^{n}\left(Z_i-\frac{\sigma\sqrt{\gamma}}{\delta}\right)^2 \geq \left(\frac{t}{\sigma}\right)^2 \right].\label{eqn:alpha-jZ-3}
\end{align}
Proceeding analogously for \refE{beta-jZ} yields
\begin{align}
\beta(\gamma,t) = \Pr\left[\sum_{i=1}^{n}\left(Z_i-\frac{\theta\sqrt{\gamma}}{\delta}\right)^2 < \left(\frac{t}{\theta}\right)^2 \right].\label{eqn:beta-jZ-3}
\end{align}
The cumulative distribution function of a noncentral $\chi^2$ distribution with $n$ degrees of freedom and non-centrality parameter $\nu$ can be written in terms of the generalized Marcum $Q$-function $Q_m(a,b)$ as~\cite{nuttall75}
\begin{align}\label{eqn:chi2-cdf}
F_{n,\nu}(x) = 1-Q_{\frac{n}{2}}\bigl(\sqrt{\nu},\sqrt{x}\bigr).
\end{align}
Noting that $F_{n,\nu}(x)$ is continuous, using \refE{chi2-cdf} in \refE{alpha-jZ-3} and \refE{beta-jZ-3}, we obtain the desired result.
\end{IEEEproof}
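Proposition~\ref{prop:alpha-beta-marcumQ} translates directly into a numerical procedure. By \refE{chi2-cdf}, $Q_{m}(a,b)$ is the upper tail of a noncentral $\chi^2$ distribution with $2m$ degrees of freedom and noncentrality $a^2$; moreover, $\beta(\gamma,t)$ is increasing in $t$, so $t_{\star}$ can be found by a bracketed root search. A minimal sketch (the function names are ours):
\begin{verbatim}
# f(beta, gamma) via the parametric form (alpha-marcumQ)-(beta-marcumQ),
# with the Marcum Q-function evaluated as a noncentral chi-square tail.
import numpy as np
from scipy.stats import ncx2
from scipy.optimize import brentq

def alpha_t(gamma, t, n, sigma2, theta2):
    delta = theta2 - sigma2
    return ncx2.sf(t**2 / sigma2, df=n, nc=n * gamma * sigma2 / delta**2)

def beta_t(gamma, t, n, sigma2, theta2):
    delta = theta2 - sigma2
    return ncx2.cdf(t**2 / theta2, df=n, nc=n * gamma * theta2 / delta**2)

def f(beta, gamma, n, sigma2, theta2, t_max=1e3):
    # beta_t is increasing in t: solve beta_t = beta on a bracket
    t_star = brentq(lambda t: beta_t(gamma, t, n, sigma2, theta2) - beta,
                    1e-9, t_max)
    return alpha_t(gamma, t_star, n, sigma2, theta2)
\end{verbatim}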
\subsection{Derivatives of $f(\beta,\gamma)$}
\label{apx:derivatives-f-beta-gamma}
Let $\sigma,\theta>0$ and $n\geq 1$ be fixed parameters, and define $\delta \triangleq \theta^2-\sigma^2$. To obtain the derivatives of $f(\beta,\gamma)$ with respect to $\beta$ and $\gamma$, we start from the parametric formulation from Proposition~\ref{prop:alpha-beta-marcumQ} and use the following auxiliary result.
For $a>0$ and $b>0$, the Marcum-$Q$ function is defined as
\begin{align}
Q_m(a,b)
\triangleq \int_{b}^{\infty} \frac{t^m}{a^{m-1}}
e^{-\frac{a^2+t^2}{2}} I_{m-1}(at) \diff t.
\label{eqn:marcumQ-def}
\end{align}
\begin{proposition}
\label{prop:derivatives-marcumQ}
The derivatives of $Q_m(a,b)$ with respect to its parameters $a>0$ and $b>0$ are given by
\begin{align}
\frac{\partial{Q_m(a,b)}}{\partial a} &=
\frac{b^m}{a^{m-1}}e^{-\frac{a^2+b^2}{2}} I_{m}(ab),\label{eqn:marcumQ-da}\\
\frac{\partial{Q_m(a,b)}}{\partial b} &=
- \frac{b^m}{a^{m-1}}e^{-\frac{a^2+b^2}{2}} I_{m-1}(ab),\label{eqn:marcumQ-db}
\end{align}
where $I_{m}(\cdot)$ denotes the $m$-th order modified Bessel function of the first kind.
\end{proposition}
\begin{IEEEproof}
The derivative \refE{marcumQ-db} follows since the variable $b$ appears only in the lower limit of the definite integral in \refE{marcumQ-def}; the derivative thus corresponds to the negative of the integrand evaluated at $t=b$.
To prove \refE{marcumQ-da}, let $n = m + \ell$ for some $\ell\in\ZZ^{+}$, and define
\begin{align}
\tilde{Q}^{(n)}_m(a,b) \triangleq 1-e^{-\frac{a^2+b^2}{2}}\sum_{r=m}^{n} \Bigl(\frac{b}{a}\Bigr)^{r} I_{r}(ab),
\end{align}
with partial derivative
\begin{align}
\!&\frac{\partial{\tilde{Q}^{(n)}_m(a,b)}}{\partial a}
= e^{-\frac{a^2+b^2}{2}} \sum_{r=m}^{n}\Bigl(\frac{b}{a}\Bigr)^{r}\biggl(\Bigl(a\!+\!\frac{r}{a}\Bigr) I_{r}(ab) - b I'_{r}(ab) \biggr).
\end{align}
Using the identity $I_{m}'(x) = \frac{m}{x} I_{m}(x) + I_{m+1}(x)$ \cite[Sec.~8.486]{Gradshteyn07} and canceling terms we obtain
\begin{align}
\!&\frac{\partial{\tilde{Q}^{(n)}_m(a,b)}}{\partial a}
= \frac{b^m}{a^{m-1}}e^{-\frac{a^2+b^2}{2}} I_{m}(ab)
- \frac{b^{n+1}}{a^{n}}e^{-\frac{a^2+b^2}{2}} I_{n+1}(ab).
\label{eqn:marcumQ-derivative-1}
\end{align}
We next show that the sequence \refE{marcumQ-derivative-1} converges uniformly to the right-hand side of
\refE{marcumQ-da}. Then, since the sequence of functions $\tilde{Q}^{(n)}_m(a,b)$ converges to $Q_m(a,b)$ as $n\to\infty$ \cite[eq. (4.63)]{Simon04}, the sequence \refE{marcumQ-derivative-1} must converge to ${\partial{Q_m(a,b)}}/{\partial a}$ \cite[Sec.~0.307]{Gradshteyn07} and the identity \refE{marcumQ-da} holds.
Indeed, using \cite[Sec.~8.431]{Gradshteyn07} it follows that, for $n\geq 2$
\begin{align}
\biggl(\frac{b}{a}\biggr)^{n+1} I_{n+1}(ab)
&= \frac{(b^2/2)^{n+1}}{\Gamma\bigl(n+\frac{3}{2}\bigr)\Gamma\bigl(\frac{1}{2}\bigr)} \int_{-1}^{1} \bigl(1-t^2\bigr)^{n+\frac{1}{2}} e^{abt} \diff t\\
&\leq \frac{(b^2/2)^{n+1} e^{ab}}{\Gamma\bigl(n+\frac{3}{2}\bigr)\Gamma\bigl(\frac{1}{2}\bigr)},
\label{eqn:marcumQ-derivative-2}
\end{align}
where in the last step we used that $e^{abt} \leq e^{ab}$ for $t\in[-1,1]$ and that $\int_{-1}^{1} \bigl(1-t^2\bigr)^{n+\frac{1}{2}} \diff t < 1$ for $n\geq 2$.
Then, from \refE{marcumQ-derivative-1} and \refE{marcumQ-derivative-2} we obtain
\begin{align}
\biggl|\frac{\partial{\tilde{Q}^{(n)}_m(a,b)}}{\partial a} - \frac{b^m}{a^{m-1}}e^{-\frac{a^2+b^2}{2}} I_{m}(ab)\biggr|
&= a e^{-\frac{a^2-2ab+b^2}{2}} \frac{(b^2/2)^{n+1}}{\Gamma\bigl(n+\frac{3}{2}\bigr)\Gamma\bigl(\frac{1}{2}\bigr)}
\end{align}
which vanishes as $n\to\infty$, uniformly in $0 < a \leq \bar{a}$ for any $\bar{a}<\infty$ and bounded $b$, since the growth of $\Gamma\bigl(n+\frac{3}{2}\bigr)$ is asymptotically faster than that of $(b^2/2)^{n+1}$.
\end{IEEEproof}
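Proposition~\ref{prop:derivatives-marcumQ} is easily validated numerically by comparing \refE{marcumQ-da}-\refE{marcumQ-db} against central finite differences of $Q_m(a,b)$, again evaluated through the noncentral $\chi^2$ tail as in \refE{chi2-cdf}. A quick check with arbitrary parameter values:
\begin{verbatim}
# Finite-difference check of the derivatives (marcumQ-da)-(marcumQ-db)
import numpy as np
from scipy.stats import ncx2
from scipy.special import iv   # modified Bessel function of the first kind

def marcum_q(m, a, b):
    return ncx2.sf(b**2, df=2.0 * m, nc=a**2)

m, a, b, h = 3.0, 1.3, 2.1, 1e-5
dQda = (marcum_q(m, a + h, b) - marcum_q(m, a - h, b)) / (2.0 * h)
dQdb = (marcum_q(m, a, b + h) - marcum_q(m, a, b - h)) / (2.0 * h)
pref = b**m / a**(m - 1.0) * np.exp(-(a**2 + b**2) / 2.0)
print(dQda, pref * iv(m, a * b))        # matches (marcumQ-da)
print(dQdb, -pref * iv(m - 1.0, a * b)) # matches (marcumQ-db)
\end{verbatim}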
Using the derivatives of the Marcum-$Q$ function \refE{marcumQ-da} and \refE{marcumQ-db}, we obtain that the derivatives of \refE{alpha-marcumQ} are
\begin{align}
\frac{\partial{\alpha(\gamma,t)}}{\partial \gamma} &=
\frac{1}{2} \frac{\sigma \sqrt{{n}/{\gamma}}}{\delta} \frac{b^{\frac{n}{2}}}{a^{\frac{n}{2}-1}}e^{-\frac{a^2+b^2}{2}} I_{\frac{n}{2}}(ab),\label{eqn:da_dg}\\
\frac{\partial{\alpha(\gamma,t)}}{\partial t} &= -\frac{1}{\sigma}
\frac{b^{\frac{n}{2}}}{a^{\frac{n}{2}-1}}e^{-\frac{a^2+b^2}{2}} I_{\frac{n}{2}-1}(ab),\label{eqn:da_dt}
\end{align}
with $a = \sqrt{n \gamma}\frac{\sigma}{\delta}$ and $b=\frac{t}{\sigma}$. Proceeding analogously, for the derivatives of \refE{beta-marcumQ} we obtain
\begin{align}
\frac{\partial{\beta(\gamma,t)}}{\partial \gamma} &=
- \frac{1}{2} \frac{\theta \sqrt{{n}/{\gamma}}}{\delta} \frac{\bar{b}^{\frac{n}{2}}}{\bar{a}^{\frac{n}{2}-1}}e^{-\frac{\bar{a}^2+\bar{b}^2}{2}} I_{\frac{n}{2}}(\bar{a}\bar{b}),
\label{eqn:db_dg}\\
\frac{\partial{\beta(\gamma,t)}}{\partial t} &= \frac{1}{\theta}
\frac{\bar{b}^{\frac{n}{2}}}{\bar{a}^{\frac{n}{2}-1}}e^{-\frac{\bar{a}^2+\bar{b}^2}{2}} I_{\frac{n}{2}-1}(\bar{a}\bar{b}),\label{eqn:db_dt}
\end{align}
where $\bar{a} = \sqrt{n\gamma}\frac{\theta}{\delta}$ and $\bar{b}=\frac{t}{\theta}$. Note that $ab=\bar{a}\bar{b}$, hence, $I_{\frac{n}{2}}(ab)=I_{\frac{n}{2}}(\bar{a}\bar{b})$ and $I_{\frac{n}{2}-1}(ab)=I_{\frac{n}{2}-1}(\bar{a}\bar{b})$.
We now proceed to obtain the derivatives of $f(\beta,\gamma)$ with respect to its parameters:
\subsubsection{Derivative $\partial f(\beta,\gamma)/\partial \gamma$ for fixed $\beta$}
Let $\beta \in[0,1]$ be fixed and let $t(\gamma)$ be such that $\beta\bigl(\gamma,t(\gamma)\bigr) = \beta$ from \refE{beta-marcumQ}. We apply the chain rule for total derivatives to write
\begin{align}
\frac{\partial \beta\bigl(\gamma,t(\gamma)\bigr)}{\partial \gamma} &= \biggl( \frac{\partial\beta(\gamma,t)}{\partial\gamma} + \frac{\partial \beta(\gamma,t)}{\partial t} \frac{\partial t(\gamma)}{\partial \gamma} \biggr) \bigg|_{t=t(\gamma)}. \label{eqn:total-derivatives-beta}
\end{align}
As $\beta\bigl(\gamma,t(\gamma)\bigr) = \beta$ is fixed, \refE{total-derivatives-beta} must be equal to $0$. Setting \refE{total-derivatives-beta} equal to $0$ and solving for $\frac{\partial t(\gamma)}{\partial \gamma}$ yields
\begin{align}
\frac{\partial t(\gamma)}{\partial \gamma}
= - \frac{{\frac{\partial}{\partial\gamma} \beta(\gamma,t)}}{{\frac{\partial}{\partial t} \beta(\gamma,t)}}
= \frac{\theta^2}{2\delta} \sqrt{\frac{n}{\gamma}}
\frac{I_{\frac{n}{2}}\Bigl(\sqrt{n\gamma}\frac{t}{\delta}\Bigr)}
{I_{\frac{n}{2}-1}\Bigl(\sqrt{n\gamma}\frac{t}{\delta}\Bigr)}, \label{eqn:partial-t-g}
\end{align}
where $t=t(\gamma)$, and where we used \refE{db_dg} and \refE{db_dt}. Note that we obtained an expression for $\frac{\partial t(\gamma)}{\partial \gamma}$ without computing $t(\gamma)$ explicitly, as doing so would require inverting \refE{beta-marcumQ}, which is not analytically tractable.
We apply now the chain rule for total derivatives to $\alpha\bigl(\gamma,t(\gamma)\bigr)$ to write
\begin{align}
\!\frac{\partial \alpha\bigl(\gamma,t(\gamma)\bigr)}{\partial \gamma} &= \biggl( \frac{\partial \alpha(\gamma,t)}{\partial\gamma} + \frac{\partial \alpha(\gamma,t)}{\partial t} \frac{\partial t(\gamma)}{\partial \gamma} \biggr) \bigg|_{t=t(\gamma)}. \label{eqn:total-derivatives-alpha-1}
\end{align}
Note that, for fixed $\beta$, $\frac{\partial f(\beta,\gamma)}{\partial \gamma} = \frac{\partial \alpha(\gamma,t(\gamma))}{\partial \gamma}$. Hence,
using \refE{da_dg}, \refE{da_dt} and \refE{partial-t-g} in \refE{total-derivatives-alpha-1} we finally obtain
\begin{align}
\frac{\partial f(\beta,\gamma)}{\partial \gamma}
&= - \frac{n}{2\delta} \biggl(\frac{t\delta}{\sigma^2\sqrt{n \gamma}}\biggr)^{\frac{n}{2}} e^{-\frac{1}{2}\left( \frac{n\gamma\sigma^2}{\delta^2} + \frac{t^2}{\sigma^2}\right)} I_{\frac{n}{2}}\biggl(\sqrt{n\gamma}\frac{t}{\delta}\biggr), \label{eqn:partial-f-g}
\end{align}
where $t$ satisfies $\beta(\gamma,t) = \beta$ with $\beta(\gamma,t)$ given in \refE{beta-marcumQ}.
\subsubsection{Derivative $\partial f(\beta,\gamma)/\partial \beta$ for fixed $\gamma$}
In this case we use \refE{da_dt} and \refE{db_dt} to obtain
\begin{align}
\frac{\partial f(\beta,\gamma)}{\partial \beta}
&= \frac
{\frac{\partial}{\partial t}\alpha(\gamma,t)}
{\frac{\partial}{\partial t} \beta(\gamma,t)}
= -\frac{\theta^n}{\sigma^n} e^{\frac{1}{2}\left(\frac{n\gamma}{\delta} - t^2\left(\frac{1}{\sigma^{2}}-\frac{1}{\theta^{2}}\right)\right)}
\label{eqn:partial-f-b}
\end{align}
where $t$ satisfies $\beta(\gamma,t) = \beta$ with $\beta(\gamma,t)$ given in \refE{beta-marcumQ}.
\subsubsection{Derivative $\partial^2 f(\beta,\gamma)/(\partial\beta \partial\gamma)$}
Taking the derivative of \refE{partial-f-b} with respect to $\gamma$ yields
\begin{align}
\frac{\partial^2 f(\beta,\gamma)}{\partial \beta \partial \gamma}
&= -\frac{\theta^n}{\sigma^n} e^{\frac{1}{2}\left(\frac{n\gamma}{\delta} - t^2\left(\frac{1}{\sigma^{2}}-\frac{1}{\theta^{2}}\right)\right)}
\left( \frac{n}{2\delta}- t \left(\frac{1}{\sigma^{2}}-\frac{1}{\theta^{2}}\right)
\frac{\partial t(\gamma)}{\partial \gamma} \right)
\label{eqn:partial-f-b-g}
\end{align}
where $t$ satisfies $\beta(\gamma,t) = \beta$ with $\beta(\gamma,t)$ given in \refE{beta-marcumQ}, and where $\frac{\partial t(\gamma)}{\partial \gamma}$ is given in \refE{partial-t-g}.
\subsubsection{Derivative ${\partial^2 f(\beta,\gamma)}/{(\partial \beta)^2}$}
Taking the derivative of \refE{partial-f-b} with respect to $\beta$ yields
\begin{align}
\frac{\partial^2 f(\beta,\gamma)}{(\partial \beta)^2}
&= t \frac{\theta^n}{\sigma^n} e^{\frac{1}{2}\left(\frac{n\gamma}{\delta} - t^2\left(\frac{1}{\sigma^{2}}-\frac{1}{\theta^{2}}\right)\right)}
\left(\frac{1}{\sigma^{2}}-\frac{1}{\theta^{2}}\right)
\frac{\partial t}{\partial \beta}
\label{eqn:partial-f-b2}
\end{align}
where $t$ satisfies $\beta(\gamma,t) = \beta$ with $\beta(\gamma,t)$ given in \refE{beta-marcumQ}, and where the term $\frac{\partial t}{\partial \beta}$ can be obtained from \refE{db_dt},
\begin{align}
\frac{\partial t}{\partial\beta} &=
\left(\frac{\partial{\beta(\gamma,t)}}{\partial t}\right)^{-1}
= \frac{\delta}{\sqrt{n\gamma}}
\left(\frac{\theta^2\sqrt{n\gamma}}{t\delta}\right)^{\frac{n}{2}}
e^{\frac{1}{2}\left( \frac{n\gamma\theta^2}{\delta^2} + \frac{t^2}{\theta^2}\right)}
\left(I_{\frac{n}{2}-1}\biggl(\sqrt{n\gamma}\frac{t}{\delta}\biggr)\right)^{-1}.\label{eqn:partial-t-b}
\end{align}
\subsubsection{Derivative ${\partial^2 f(\beta,\gamma)}/{(\partial \gamma)^2}$}
Taking the derivative of \refE{partial-f-g} with respect to $\gamma$, straightforward but tedious algebra yields
\begin{align}
\frac{\partial^2 f(\beta,\gamma)}{(\partial \gamma)^2}
&= - \frac{n}{ 4 \delta} \Bigl(\frac{t\delta}{\sigma^2\sqrt{n\gamma}}\Bigr)^{\frac{n}{2}} e^{-\frac{1}{2}\left( \frac{n\gamma\sigma^2}{\delta^2} + \frac{t^2}{\sigma^2}\right)} I_{\frac{n}{2}}\biggl(\sqrt{n\gamma}\frac{t}{\delta}\biggr)\notag\\
&\quad \times\left( \frac{n}{\delta} - \frac{n}{\gamma} + \sqrt{\frac{n}{\gamma}} \frac{t}{\delta}
\left(
\frac{I_{\frac{n}{2}-1}\Bigl(\sqrt{n\gamma}\frac{t}{\delta}\Bigr)}{I_{\frac{n}{2}}\Bigl(\sqrt{n\gamma}\frac{t}{\delta}\Bigr)}
- \frac{\theta^2}{\sigma^2}
\frac{I_{\frac{n}{2}}\Bigl(\sqrt{n\gamma}\frac{t}{\delta}\Bigr)}{I_{\frac{n}{2}-1}\Bigl(\sqrt{n\gamma}\frac{t}{\delta}\Bigr)}
\right)
\right), \label{eqn:partial-f-g2}
\end{align}
where $t$ satisfies $\beta(\gamma,t) = \beta$ with $\beta(\gamma,t)$ given in \refE{beta-marcumQ}.
Here, we used the identity
$I_{m}'(x) = I_{m-1}(x) - \frac{m}{x} I_{m}(x)$
\cite[Sec.~8.486]{Gradshteyn07}.
\subsection{Derivatives of $f(\beta,\gamma)$ at $\gamma=0$}
\label{apx:derivatives-f-beta-gamma-0}
The function $f(\beta,0)$ can be evaluated by setting $\gamma=0$ and using \refE{alpha-marcumQ}-\refE{beta-marcumQ}. However, the preceding expressions for the derivatives of $f(\beta,\gamma)$ often yield an indeterminacy in this case. This can be avoided by taking the limit as $\gamma \to 0$ and using that~\cite[Sec.~8.445]{Gradshteyn07}
\begin{align}
I_{m}(x) = \frac{\bigl(\frac{x}{2}\bigr)^m}{\Gamma\bigl(m+1\bigr)}
+ o(x^m), \label{eqn:Imx-asympt}
\end{align}
where $\Gamma(\cdot)$ denotes the gamma function and $o\bigl(g(x)\bigr)$ summarizes the terms that approach zero faster than $g(x)$, \textit{i.e.}, $\lim_{x\to 0}\frac{o(g(x))}{g(x)}=0$.
For example, using \refE{Imx-asympt} and $\frac{\Gamma(m+1)}
{\Gamma(m)} = m$ we obtain from \refE{partial-t-g} that
\begin{align}
\frac{\partial t(\gamma)}{\partial \gamma}\biggr|_{\gamma=0}
&= \frac{t}{2} \frac{\theta^2}{\delta^2}. \label{eqn:partial-t-g-0}
\end{align}
Proceeding analogously for the derivatives of $f(\beta,\gamma)$, it follows that
\begin{align}
\frac{\partial f(\beta,\gamma)}{\partial \gamma}\biggr|_{\gamma=0}
&= - \frac{1}{\delta} \frac{t_0^n}{\sigma^n} \frac{e^{-\frac{1}{2} \frac{t_0^2}{\sigma^2}}}{\Gamma\bigl(\tfrac{n}{2}\bigr) 2^{\frac{n}{2}}}, \label{eqn:partial-f-g-0}\\
\frac{\partial f(\beta,\gamma)}{\partial \beta}\biggr|_{\gamma=0}
&= -\frac{\theta^n}{\sigma^n} e^{-\frac{1}{2} t_0^2\left(\frac{1}{\sigma^{2}}-\frac{1}{\theta^{2}}\right)},
\label{eqn:partial-f-b-0}\\
\frac{\partial^2 f(\beta,\gamma)}{\partial\beta\partial\gamma}\biggr|_{\gamma=0}
&= -\frac{\theta^n}{\sigma^n}
\left( \frac{n}{2\delta}- \frac{t_0^2}{2\delta\sigma^{2}}\right)
e^{-\frac{1}{2} t_0^2\left(\frac{1}{\sigma^{2}}-\frac{1}{\theta^{2}}\right)},
\label{eqn:partial-f-b-g-0}\\
\frac{\partial^2 f(\beta,\gamma)}{(\partial \beta)^2}\biggr|_{\gamma=0}
&= \frac{\theta^n}{\sigma^n}
\biggl(\frac{\theta\sqrt{2}}{t_0}\biggr)^{n-2}
\frac{\delta}{\sigma^2}
\Gamma\bigl(\tfrac{n}{2}\bigr)
e^{-\frac{1}{2}t_0^2\frac{\delta - \sigma^2}{\theta^2\sigma^2}}, \label{eqn:partial-f-b2-0}\\
\frac{\partial^2 f(\beta,\gamma)}{(\partial \gamma)^2}\biggr|_{\gamma=0}
&= - \frac{n}{ 4 \delta} \frac{t_0^n}{\sigma^n 2^{\frac{n}{2}}}
\left(\frac{n}{\delta} + \left( \frac{n}{n+2} - \frac{\theta^2}{\sigma^2}\right) \frac{t_0^2}{\delta^2}\right)
\frac{e^{-\frac{1}{2}\frac{t_0^2}{\sigma^2}}}{\Gamma(\frac{n}{2}+1)},
\label{eqn:partial-f-g2-0}
\end{align}
where in all cases $t_0$ satisfies $\beta(0,t_0) = \beta$ with $\beta(\gamma,t)$ given in \refE{beta-marcumQ}.
To obtain \refE{partial-f-g2-0} from \refE{partial-f-g2} we used \refE{Imx-asympt} and the expansions
\begin{align}
\frac{I_{m-1}(x)}{I_{m}(x)} = \frac{2m}{x} + \frac{x}{2(m+1)} + o(x),\qquad
\frac{I_{m}(x)}{I_{m-1}(x)} = \frac{x}{2m} + o(x).
\end{align}
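These small-argument expansions are straightforward to verify numerically, e.g.,
\begin{verbatim}
# Check of the Bessel ratio expansions for small x
import numpy as np
from scipy.special import iv

m, x = 3.0, 1e-2
print(iv(m - 1, x) / iv(m, x), 2.0 * m / x + x / (2.0 * (m + 1.0)))
print(iv(m, x) / iv(m - 1, x), x / (2.0 * m))
\end{verbatim}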
\section{Proof of Lemma~\ref{lem:envelope_equals_f}}
\label{apx:envelope_equals_f}
We characterize the region where $f(\beta,\gamma)$ and its convex envelope $\underline{f}(\beta,\gamma)$ coincide using the following result.
\begin{proposition}\label{prop:convex-envelope}
Suppose $g$ is differentiable with gradient $\nabla g$. Let $\Ac$ denote the domain of $g$, and let $a_0\in\Ac$. If the inequality
\begin{align}
g(\bar a) \geq g(a_0) + \nabla g(a_0)^T (\bar a - a_0),
\label{eqn:prop-convex-envelope}
\end{align}
is satisfied for all $\bar a \in \Ac$, then, $g(a_0) = g^{**}(a_0)$ holds.
\end{proposition}
\begin{IEEEproof}
As $g^{**}$ is the lower convex envelope of $g$, then $g(a_0) \geq g^{**}(a_0)$ trivially. It remains to show that \refE{prop-convex-envelope} implies $g(a_0) \leq g^{**}(a_0)$. Fenchel's inequality~\cite[Sec.~3.3.2]{Boyd04} yields
\begin{align}
g^{**}(a_0) \geq \langle a_0,b \rangle - g^{*}(b),
\label{eqn:fenchel-ineq-0}
\end{align}
for any $b$ in the domain of $g^{*}$.
Setting $b = \nabla g(a_0)$ and using \refE{LF-transform}
in \refE{fenchel-ineq-0}, we obtain
\begin{align}
g^{**}(a_0) &\geq \nabla g(a_0)^T a_0 - \max_{\bar{a}\in\Ac} \bigl\{
\nabla g(a_0)^T\bar{a} - g(\bar a) \bigr\}
\label{eqn:fenchel-ineq-1}\\
&= \min_{\bar{a}\in\Ac} \bigl\{
\nabla g(a_0)^T(a_0-\bar{a}) + g(\bar a) \bigr\}
\label{eqn:fenchel-ineq-2}\\
&\geq \min_{\bar{a}\in\Ac} \bigl\{ g(a_0) \bigr\},
\label{eqn:fenchel-ineq-3}
\end{align}
where in the last step we used \refE{prop-convex-envelope} to lower bound $g(\bar a)$. Since the objective of \refE{fenchel-ineq-3} does not depend on $\bar{a}$, we conclude from \refE{fenchel-ineq-1}-\refE{fenchel-ineq-3} that $g(a_0) \leq g^{**}(a_0)$ and the result follows.
\end{IEEEproof}
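Proposition~\ref{prop:convex-envelope} is easy to test numerically. The sketch below (the nonconvex test function $g$ is an illustrative choice of ours, not the $f(\beta,\gamma)$ of this paper) checks the first-order condition \refE{prop-convex-envelope} on a grid and reports where $g$ must coincide with its convex envelope:
\begin{verbatim}
# Numerical test of the proposition for a 1-D nonconvex function g:
# mark the grid points a0 whose tangent line under-estimates g globally.
import numpy as np

def g(a):                      # illustrative nonconvex function on [0, 6]
    return np.exp(-a) + 0.4 * np.exp(-4.0 * (a - 2.0) ** 2)

a = np.linspace(0.0, 6.0, 1201)
gv = g(a)
dg = np.gradient(gv, a)        # numerical derivative

coincides = np.array([np.all(gv >= gv[i] + dg[i] * (a - a[i]) - 1e-3)
                      for i in range(a.size)])
print("g = g** on %d of %d grid points; condition fails for a in [%.2f, %.2f]"
      % (coincides.sum(), a.size, a[~coincides].min(), a[~coincides].max()))
\end{verbatim}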
\begin{figure}[t]%
\centering\input{plots/conv-analysis-n6-s1-t3-b0001.tikz}%
\caption{Example of Proposition~\ref{prop:convex-envelope} for the one-dimensional function $g(a) = f(\beta,a)$ with $\beta=0.001$, $n=6$, $\sigma^2=1$ and $\theta^2=3$, which is defined for $a\geq 0$.}\label{fig:convAnalysis}
\end{figure}%
\refFig{convAnalysis} shows an example of Proposition~\ref{prop:convex-envelope} for a certain one-dimensional function $g$. When $a_0=a_1$, the figure shows that \refE{first-order-condition} is violated as the dash-dotted line is above $g(\bar{a})$ (thin solid line) for small values of $\bar{a}$. Then, the (one-dimensional) convex envelope $g^{**}$ (thick solid line) is strictly smaller than the function $g$ at the point $a_0=a_1$. In contrast, for $a_0=a_2$ \refE{first-order-condition} is satisfied for all values of $\bar{a}\geq 0$. Therefore $g$ coincides with its convex envelope $g^{**}$ at $a_0 = a_2$. This is also true for any $a_0 > a_2$ (e.g., for $a_0 = a_3$), and therefore $g$ and its convex envelope $g^{**}$ coincide for any $a_0 \geq a_2$.
We apply Proposition~\ref{prop:convex-envelope} to the function $f(\beta,\gamma)$. We recall that $f(\beta,\gamma)$ is differentiable for $\beta\in[0,1]$ and $\gamma\geq 0$ with derivatives given in Appendix~\ref{apx:f-beta-gamma}. We define the gradients
\begin{align}
\nabla_{\beta} f(b,g) &\triangleq \frac{\partial f(\beta,\gamma)}{\partial \beta}\Big|_{\beta=b,\gamma=g},\\
\nabla_{\gamma} f(b,g) &\triangleq \frac{\partial f(\beta,\gamma)}{\partial \gamma}\Big|_{\beta=b,\gamma=g}.
\end{align}
According to Proposition~\ref{prop:convex-envelope}, the function $f(\beta_0,\gamma_0)$ and its convex envelope $\underline{f}(\beta_0,\gamma_0)$ coincide if
\begin{align}
f(\bar\beta,\bar\gamma) \;\geq\; f(\beta_0,\gamma_0) &+ (\bar\beta-\beta_0)\nabla_{\beta} f(\beta_0,\gamma_0)
+ (\bar\gamma-\gamma_0)\nabla_{\gamma} f(\beta_0,\gamma_0)
\label{eqn:first-order-condition}
\end{align}
is satisfied for all $\bar\beta\in[0,1]$ and $\bar\gamma\geq 0$.
This condition means that the first-order Taylor approximation of $f(\beta,\gamma)$ at $(\beta_0,\gamma_0)$ is a global under-estimator of the original function $f$.
The derivatives of $f(\beta,\gamma)$, given in Appendix~\ref{apx:f-beta-gamma}, imply that the function is decreasing in both parameters, convex with respect to $\beta\in[0,1]$, and jointly convex with respect to $(\beta,\gamma)$ except for a neighborhood near the axis $\gamma=0$. Using these properties, it can be shown that the condition \refE{first-order-condition} only needs to be verified along the axis $\bar\gamma=0$. For example, for the one-dimensional function $g$ in \refF{convAnalysis}, we can see that if the first-order condition is satisfied at $\bar{a}=0$, it is also satisfied for any $\bar{a}\geq 0$.
Then, we conclude that $f(\beta_0,\gamma_0) = \underline{f}(\beta_0,\gamma_0)$ if \refE{first-order-condition} holds for every $\bar\beta\in[0,1]$ and $\bar\gamma= 0$, i.e., if
\begin{align}
f(\beta_0,\gamma_0) - f(\bar\beta,0) &\geq (\beta_0-\bar\beta)\nabla_{\beta} f(\beta_0,\gamma_0)
+ \gamma_0 \nabla_{\gamma} f(\beta_0,\gamma_0).
\label{eqn:first-order-condition-1}
\end{align}
Let $\theta \geq \sigma > 0$, $n\geq 1$. Let $t_0$ be the value such that $\beta(\gamma_0,t_0) = \beta_0$ and let $\bar{t}$ satisfy $\beta(0,\bar{t}) = \bar\beta$, for $\beta(\gamma,t)$ defined in \refE{beta-marcumQ}.
Using \refE{alpha-marcumQ} and the derivatives \refE{partial-f-g} and \refE{partial-f-b} from Appendix~\ref{apx:f-beta-gamma}, we obtain the identities
\begin{align}
\!f(\beta_0,\gamma_0) - f(\bar\beta,0) &= Q_{\frac{n}{2}}\!\left(\!\sqrt{n\gamma_0}\frac{\sigma}{\delta},\frac{t_0}{\sigma}\!\right) - Q_{\frac{n}{2}}\!\left(\!0,\frac{\bar{t}}{\sigma}\!\right)\!,\!\label{eqn:first-order-term-1}\\
\nabla_{\beta} f(\beta_0,\gamma_0) &= -\frac{\theta^n}{\sigma^n} e^{\frac{1}{2}\left(\frac{n\gamma_0}{\delta} - t_0^2\left(\frac{1}{\sigma^{2}}-\frac{1}{\theta^{2}}\right)\right)},\label{eqn:first-order-term-2}\\
\nabla_{\gamma} f(\beta_0,\gamma_0) &= - \frac{n}{2\delta} \biggl(\frac{t_0\delta}{\sigma^2\sqrt{n \gamma_0}}\biggr)^{\frac{n}{2}}
e^{-\frac{1}{2}\left( n\gamma_0\frac{\sigma^2}{\delta^2} + \frac{t_0^2}{\sigma^2}\right)} I_{\frac{n}{2}}\biggl(\sqrt{n\gamma_0}\frac{t_0}{\delta}\biggr).\label{eqn:first-order-term-3}
\end{align}
As $\beta(\gamma_0,t_0) = \beta_0$ and $\beta(0,\bar{t}) = \bar\beta$, using \refE{beta-marcumQ}, it follows that
\begin{align}
\beta_0-\bar\beta = Q_{\frac{n}{2}}\left(0,\frac{\bar{t}}{\theta} \right) - Q_{\frac{n}{2}}\left(\sqrt{n\gamma_0}\frac{\theta}{\delta},\frac{t_0}{\theta} \right).\label{eqn:first-order-term-4}
\end{align}
Then, substituting \refE{first-order-term-1} and \refE{first-order-term-4} in \refE{first-order-condition-1} and reorganizing terms yields
\begin{align}
&Q_{\frac{n}{2}}\left(\sqrt{n\gamma_0}\frac{\sigma}{\delta},\frac{t_0}{\sigma} \right)
+ \nabla_{\beta} f(\beta_0,\gamma_0)
Q_{\frac{n}{2}}\left(\sqrt{n\gamma_0}\frac{\theta}{\delta},\frac{t_0}{\theta} \right)
- \gamma_0 \nabla_{\gamma} f(\beta_0,\gamma_0)
\,\geq\, h(\bar{t}),
\label{eqn:first-order-condition-2}
\end{align}
where $h(t)$ is given by
\begin{align}
h(t) \triangleq Q_{\frac{n}{2}}\left(0,\frac{t}{\sigma} \right) +\nabla_{\beta} f(\beta_0,\gamma_0)
Q_{\frac{n}{2}}\left(0,\frac{t}{\theta}\right).
\label{eqn:first-order-condition-ht}
\end{align}
The interval $\bar\beta\in[0,1]$ corresponds to $\bar{t} \geq 0$. We maximize \refE{first-order-condition-ht} over $t = \bar{t} \geq 0$ and verify the condition \refE{first-order-condition-2} only for this maximizing value.
To this end, we differentiate \refE{first-order-condition-ht} with respect to $t$, set the resulting expression equal to zero, and solve for $t$.
Using \refE{marcumQ-db} and \refE{Imx-asympt} it follows that
\begin{align}
\frac{\partial}{\partial b} Q_{m}\left(0,b \right)
= -\frac{b^{2m-1}}{2^{m-1}}\frac{e^{-\frac{b^2}{2}}}{\Gamma(m)}\label{eqn:marcumQ-db-a0}
\end{align}
and therefore
\begin{align}
\frac{\partial}{\partial t} h(t)
&=
- \frac{1}{\sigma} \frac{(t/\sigma)^{n-1}}{2^{\frac{n}{2}-1}} \frac{e^{-\frac{t^2}{2\sigma^2}}}{\Gamma(n/2)}
- \frac{\nabla_{\beta} f(\beta_0,\gamma_0)}{\theta} \frac{(t/\theta)^{n-1}}{2^{\frac{n}{2}-1}} \frac{e^{-\frac{t^2}{2\theta^2}}}{\Gamma(n/2)}\\
&=
- \frac{t^{n-1}}{\sigma^n 2^{\frac{n}{2}-1}} \frac{e^{-\frac{t^2}{2\sigma^2}}}{\Gamma(n/2)}
+ \frac{t^{n-1}}{\sigma^n 2^{\frac{n}{2}-1}} \frac{e^{-\frac{t^2}{2\theta^2}}}{\Gamma(n/2)} e^{\frac{1}{2}\left(\frac{n\gamma_0}{\delta} - t_0^2\left(\frac{1}{\sigma^{2}}-\frac{1}{\theta^{2}}\right)\right)}
\label{eqn:first-order-condition-dht}
\end{align}
where in the second step we used \refE{first-order-term-2}. Setting \refE{first-order-condition-dht} equal to zero, we obtain the roots $t=0$ and (after some algebra)
\begin{align}
t^2 = t_0^2 - n\gamma_0 \frac{\sigma^2\theta^2}{\delta^2}.
\label{eqn:first-order-condition-dht0}
\end{align}
By evaluating the second derivative of \refE{first-order-condition-ht}, it can be verified that \refE{first-order-condition-dht0} indeed corresponds to a maximum of $h(t)$.
Therefore, we conclude that the right-hand side of \refE{first-order-condition-2} is maximized for
\begin{align}
\bar{t}_{\star} = \sqrt{\bigl(t_0^2- n\gamma_0 {\sigma^2\theta^2}/{\delta^2}\bigr)_{+}}
\label{eqn:first-order-opt-bart}
\end{align}
where $(a)_{+} = \max(0,a)$, and the thresholding at zero follows from the constraint $\bar{t} \geq 0$.
Using \refE{first-order-term-2}, \refE{first-order-term-3} and \refE{first-order-opt-bart} in \refE{first-order-condition-2} we obtain the desired characterization for the region of interest.
For the statement of the result in Lemma~\ref{lem:envelope_equals_f}, we select the smallest $t_0$ that fulfills \refE{first-order-condition-2} (which satisfies the condition with equality) and use the simpler notation $(\beta,\gamma)$ instead of $(\beta_0,\gamma_0)$.
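The root \refE{first-order-condition-dht0} can also be cross-checked numerically. The sketch below uses the standard identity $Q_{m}(0,b)=\Pr\{\chi^2_{2m}>b^2\}$, reads $\delta=\theta^{2}-\sigma^{2}$ off the algebra leading to \refE{first-order-condition-dht0} (an inference of ours), and uses arbitrary parameter values:
\begin{verbatim}
# Cross-check the stationary point of h(t): compare the closed-form root
# with a grid search, using Q_{n/2}(0, b) = chi2.sf(b^2, n).
import numpy as np
from scipy.stats import chi2

n, s2, th2, g0, t0 = 6, 1.0, 3.0, 0.5, 3.0   # n, sigma^2, theta^2, gamma_0, t_0
delta = th2 - s2                             # delta = theta^2 - sigma^2 (assumed)
grad_beta = -(th2 / s2) ** (n / 2) * np.exp(
    0.5 * (n * g0 / delta - t0 ** 2 * (1 / s2 - 1 / th2)))

t = np.linspace(0.5, 6.0, 100001)            # avoid the trivial root t = 0
h = chi2.sf(t ** 2 / s2, n) + grad_beta * chi2.sf(t ** 2 / th2, n)
dh = np.gradient(h, t)
t_num = t[np.where(np.diff(np.sign(dh)) != 0)[0][0]]   # sign change of h'
t_closed = np.sqrt(t0 ** 2 - n * g0 * s2 * th2 / delta ** 2)
print(t_num, t_closed)                       # both approximately 2.598
\end{verbatim}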
\begin{figure}[t]
\centering\input{plots/detHessian-n6-s1-t2.tikz}%
\caption{Level curves of $\det \nabla^2 f(\beta,\gamma)$ for $n=6$, $\sigma^2=1$, $\theta^2=2$. The region where $\det \nabla^2 f(\beta,\gamma)<0$ is shaded in gray. The bold line corresponds to the points where $\beta = 1- Q_{\frac{n}{2}}\bigl( \sqrt{n\gamma}{\theta}/{\delta},\, t_0/{\theta} \bigr)$ as described in Lemma~\ref{lem:envelope_equals_f}.}\label{fig:detHessian}%
\end{figure}%
We emphasize that the condition for Lemma~\ref{lem:envelope_equals_f} derived in this appendix does not correspond to the region where $f(\beta,\gamma)$ is locally convex, but it precisely characterizes the region where $f(\beta,\gamma) = \underline{f}(\beta,\gamma)$. \refFig{detHessian} shows the difference between these two regions for a given set of parameters: the shaded area shows the points where $f(\beta,\gamma)$ is locally non-convex, while the bold line corresponds to the lower-boundary of the region where $f(\beta,\gamma) = \underline{f}(\beta,\gamma)$.
\bibliographystyle{IEEEtran}
| {'timestamp': '2020-08-19T02:22:23', 'yymm': '1907', 'arxiv_id': '1907.03163', 'language': 'en', 'url': 'https://arxiv.org/abs/1907.03163'} |
\section{Introduction}
In recent years, vehicular technology has attracted significant attention from the automotive and telecommunication industries, leading to the emergence of vehicle-to-everything (V2X) communications for improving road safety, traffic management services and driving comfort.
V2X supported by the sixth generation (6G) is envisioned to be a key enabler of future connected autonomous vehicles \cite{9779322}. Despite its transformative benefits for intelligent transportation systems, V2X still faces several technical issues, mainly related to performance and security.
The integration of sensing and communication (ISAC) has emerged very recently as a revolutionary element of 6G that could potentially help enable adaptive learning and intelligent decision-making in future V2X applications.
The combination of sensing and communication allows vehicles to perceive their surroundings better, predict manoeuvres from nearby users and make intelligent decisions, thus paving the way toward a safer transportation system \cite{9665433}.
Modern vehicles are augmented with various types of sensors, divided into exteroceptive sensors that observe the surrounding environment and proprioceptive sensors that observe the vehicle's internal states.
The former, such as GPS, Lidar, and cameras, serve to improve situational awareness, while the latter, such as steering, pedal, and wheel-speed sensors, serve to improve self-awareness.
While sensing the environment, vehicles can exchange messages that assist in improving situational- and self-awareness and in coordinating maneuvers with other vehicles.
Such messages, like the basic safety messages (BSMs) and cooperative awareness messages (CAMs), are composed of the transmitting vehicle's states, such as position and velocity, and the states of other vehicles in the vicinity. Vehicles might use their sensors, such as cameras and Lidar, to detect road users (e.g., pedestrians), and this information can be communicated to other road users via the V2X messages to improve the overall performance. However, the V2X communication links carrying those messages are inherently vulnerable to malicious attacks due to the open and shared nature of the wireless spectrum among vehicles and other cellular users \cite{8336901}. For instance, a jammer in the vicinity might alter the information to be communicated to nearby vehicles/users or can intentionally disrupt communication between a platoon of vehicles, making the legitimate signals unrecognizable for on-board units (OBUs) and/or road side units (RSUs), which endangers vehicular safety
\cite{8553649}.
In addition, the integrity of GPS signals and the correct acquisition of navigation data to compute position, velocity and time information are critical for the safe operation of V2X applications. However, since civil GPS receivers rely on unencrypted satellite signals, spoofers can easily replicate them, deceiving the GPS receiver into computing falsified positions \cite{9226611}.
Also, the long distance between satellites and terrestrial GPS receivers leads to an extremely weak signal that can be easily drowned out by a spoofer.
Thus, the vulnerability of GPS sensors to spoofing attacks poses a severe threat that might cause vehicles to go out of control or even be hijacked, endangering human life \cite{9881548}.
Therefore, GPS spoofing attacks and jamming interference need to be detected in real time to achieve secure vehicular communications, allowing vehicles to securely talk to each other and interact with the infrastructure (e.g., roadside terminals, base stations) \cite{9860410}.
Existing methods for GPS spoofing detection include GPS signal analysis methods and GPS message encryption methods \cite{9845684}. However, the former require a ground-truth source during the detection process, which is not always available. In contrast, the latter involve support from a secured infrastructure and advanced computing resources on GPS receivers, which hinders their adoption in V2X applications. On the other hand, existing methods for jammer detection in vehicular networks are based on analysing the packet drop rate, as in \cite{9484071}, making it difficult to detect an advanced jammer that manipulates the legitimate signal instead of disrupting it.
In this work, we propose a method to jointly detect GPS spoofing and jamming attacks in the V2X network. A coupled generalized dynamic Bayesian network (C-GDBN) is employed to learn the interaction between the RF signals received by the RSU from multiple vehicles and their corresponding trajectories. This integration of vehicles' positional information with vehicle-to-infrastructure (V2I) communications allows semantic learning while mapping RF signals to vehicles' trajectories, and it enables the RSU to jointly predict the RF signals it expects to receive from the vehicles, from which it can anticipate the expected trajectories.
The main contributions of this paper can be summarized as follows: \textit{i)} A joint GPS spoofing and jamming detection method is proposed for the V2X scenario, which is based on learning a generative interactive model as the C-GDBN. Such a model encodes the cross-correlation between the RF signals transmitted by multiple vehicles and their trajectories, where their semantic meaning is coupled stochastically at a high abstraction level. \textit{ii)} A cognitive RSU equipped with the acquired C-GDBN can predict and estimate vehicle positions based on real-time RF signals. This allows the RSU to evaluate whether both the RF signals and the vehicles' trajectories are evolving according to the dynamic rules encoded in the C-GDBN and, consequently, to identify the cause (i.e., a jammer attacking the V2I link or a spoofer attacking the satellite link) of the abnormal behaviour that occurred in the V2X environment. \textit{iii)} Extensive simulation results demonstrate that the proposed method accurately estimates the vehicles' trajectories from the predicted RF signals, effectively detects any abnormal behaviour and identifies the type of abnormality occurring with high detection probabilities.
To the best of our knowledge, this is the first work that studies the joint detection of jamming and spoofing in V2X systems.
\section{System model and problem formulation}
The system model depicted in Fig.~\ref{fig_SystemModel} includes a single-cell vehicular network consisting of a road side unit (RSU) located at $\mathrm{p}_{R}=[{x}_{R},{y}_{R}]$, a road side jammer (RSJ) located at $\mathrm{p}_{J}=[{x}_{J},{y}_{J}]$, a road side spoofer (RSS) located at $\mathrm{p}_{s}=[{x}_{s},{y}_{s}]$ and $N$ vehicles moving along a multi-lane road in an urban area. The time-varying position of the $n$-th vehicle is given by $\mathrm{p}_{n,t}=[{x}_{n,t},{y}_{n,t}]$, where $n \in \{1,\dots,N\}$. Among the $K$ orthogonal subchannels available for vehicle-to-infrastructure (V2I) communications, the RSU assigns one V2I link to each vehicle. Each vehicle exchanges messages composed of the vehicle's state (i.e., position and velocity) with the RSU through the $k$-th V2I link by transmitting a signal $\textrm{x}_{t,k}$ carrying those messages at each time instant $t$, where $k \in \{1,\dots,K\}$. We consider a reactive RSJ that aims to attack the V2I links by injecting intentional interference into the communication links between the vehicles and the RSU, altering the signals transmitted by the vehicles. In contrast, the RSS aims to mislead the vehicles by spoofing the GPS signal, causing them to register wrong GPS positions. The RSU aims to detect both the spoofer on the satellite link and the jammer on the multiple V2I links in order to take effective actions and protect the vehicular network.
The joint GPS spoofing and jamming detection problem can be formulated as the following ternary hypothesis test:
\begin{equation}
\begin{cases}
\mathcal{H}_{0}: \mathrm{z}_{t,k} = \mathrm{g}_{t,k}^{nR} \mathrm{x}_{t,k} + \mathrm{v}_{t,k}, \\
\mathcal{H}_{1}: \mathrm{z}_{t,k} = \mathrm{g}_{t,k}^{nR} \mathrm{x}_{t,k} + \mathrm{g}_{t,k}^{JR} \mathrm{x}_{t,k}^{j} + \mathrm{v}_{t,k}, \\
\mathcal{H}_{2}: \mathrm{z}_{t,k} = \mathrm{g}_{t,k}^{nR} \mathrm{x}_{t,k}^{*} + \mathrm{v}_{t,k},
\end{cases}
\end{equation}
where $\mathcal{H}_{0}$, $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$ denote three hypotheses corresponding to the absence of both jammer and spoofer, the presence of the jammer, and the presence of the spoofer, respectively. $\textrm{z}_{t,k}$ is the received signal at the RSU at $t$ over the $k$-th V2I link, $\textrm{g}_{t,k}^{nR}$ is the channel power gain from vehicle $n$ to the RSU formulated as: $\textrm{g}_{t,k}^{nR} = \alpha_{t,k}^{nR} \mathrm{h}_{t,k}^{nR}$, where $\alpha_{t,k}^{nR}$ is the large-scale fading including path-loss and shadowing modeled as \cite{8723178}: $\alpha_{t,k}^{nR}=G\beta d_{t,nR}^{-\gamma}$.
\begin{figure}[t!]
\centering
\includegraphics[height=5.3cm]{Figures/SystemModel_V1.pdf}
\caption{An illustration of the system model.}
\label{fig_SystemModel}
\end{figure}
$G$ is the pathloss constant, $\beta$ is a log normal shadow fading random variable, $d_{t,nR}=\sqrt{({x}_{n,t}-x_{R})^{2}+({y}_{n,t}-y_{R})^{2}}$ is the distance between the $n$-th vehicle and the RSU. $\gamma$ is the power decay exponent and
$\mathrm{h}_{t,k}$ is the small-scale fading component, distributed according to $\mathcal{CN}(0,1)$. In addition, $\mathrm{x}_{t,k}$ is the desired signal transmitted by the $n$-th vehicle, and $\mathrm{v}_{t,k}$ is additive white Gaussian noise with variance $\sigma_{n}^{2}$. $\mathrm{x}_{t,k}^{j}$ is the jamming signal, $\mathrm{x}_{t,k}^{*}$ is the spoofed signal (i.e., the signal that carries the bits related to the wrong GPS positions), and $\mathrm{g}_{t,k}^{JR} = \alpha_{t,k}^{JR} \mathrm{h}_{t,k}^{JR}$ is the channel power gain from the RSJ to the RSU, where $\alpha_{t,k}^{JR}=G\beta d_{t,JR}^{-\gamma}$ with $d_{t,JR}=\sqrt{({x}_{J}-x_{R})^{2}+({y}_{J}-y_{R})^{2}}$.
We assume that the channel state information (CSI) of V2I links is known and can be estimated at the RSU as in \cite{8345717}.
The RSU is equipped with an RF antenna which can track the vehicles' trajectories after decoding the received RF signals. RSU aims to learn the interaction between the RF signals received from multiple vehicles and their corresponding trajectories.
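To make the measurement model concrete, the following NumPy sketch generates one received sample under each hypothesis of (1) (the constants, distances and the use of $\alpha$ as an amplitude gain are illustrative simplifications of ours):
\begin{verbatim}
# Generate one received sample z_{t,k} under each hypothesis of (1).
# All constants are placeholders; alpha is applied as an amplitude gain.
import numpy as np

rng = np.random.default_rng(1)
G, beta, g_exp = 1e-3, 1.0, 3.76        # pathloss constant, shadowing, exponent
sigma_n = 0.01                          # noise standard deviation
d_nR, d_JR = 100.0, 150.0               # vehicle-RSU and jammer-RSU distances

def gain(d):                            # g = alpha * h with h ~ CN(0, 1)
    alpha = G * beta * d ** (-g_exp)
    h = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    return alpha * h

def qpsk():                             # unit-energy QPSK symbol
    return rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j])) / np.sqrt(2)

x, x_j, x_sp = qpsk(), qpsk(), qpsk()   # legitimate, jamming, spoofed symbols
v = sigma_n * (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)

z_H0 = gain(d_nR) * x + v                      # no attack
z_H1 = gain(d_nR) * x + gain(d_JR) * x_j + v   # jammer on the V2I link
z_H2 = gain(d_nR) * x_sp + v                   # spoofed GPS payload
\end{verbatim}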
\section{Proposed method for joint detection of GPS spoofing and jamming}
\subsection{Environment Representation}
The RSU is receiving RF signals from each vehicle and tracking its trajectory (which we refer to as GPS signal) by decoding and demodulating the received RF signals.
The generalized state-space model describing the evolution of the $i$-th signal at multiple levels comprises the following equations:
\begin{equation} \label{eq_discreteLevel}
\mathrm{\Tilde{S}_{t}}^{(i)} = \mathrm{f}(\mathrm{\Tilde{S}_{t-1}}^{(i)}) + \mathrm{\tilde{w}}_{t},
\end{equation}
\begin{equation} \label{eq_continuousLevel}
\mathrm{\Tilde{X}_{t}}^{(i)} = \mathrm{A} \mathrm{\Tilde{X}_{t-1}}^{(i)} + \mathrm{B} \mathrm{U}_{\mathrm{\Tilde{S}_{t}}^{(i)}} + \mathrm{\tilde{w}}_{t},
\end{equation}
\begin{equation} \label{eq_observationLevel}
\mathrm{\Tilde{Z}_{t}}^{(i)} = \mathrm{H} \mathrm{\Tilde{X}_{t}}^{(i)} + \mathrm{\tilde{v}}_{t},
\end{equation}
where $i \in \{$RF, GPS$\}$ indicates the type of signal received by the RSU. The transition system model defined in \eqref{eq_discreteLevel} explains the evolution of the discrete random variables $\mathrm{\Tilde{S}_{t}}^{(i)}$ representing the clusters of the RF (or GPS) signal dynamics, $\mathrm{f}(.)$ is a nonlinear function of its argument and the additive term $\mathrm{\tilde{w}}_{t}$ denotes the process noise. The dynamic model defined in \eqref{eq_continuousLevel} explains the evolution of the RF signal dynamics or of the motion dynamics of the $n$-th vehicle, where $\mathrm{\Tilde{X}_{t}}^{(i)}$ are hidden continuous variables generating the sensory signals, $\mathrm{A} \in \mathbb{R}^{2d\times 2d}$ and $\mathrm{B} \in \mathbb{R}^{2d\times 2d}$ are the dynamic and control matrices, respectively, and $\mathrm{U}_{\mathrm{\Tilde{S}_{t}}^{(i)}}$ is the control vector representing the dynamic rules of how the signals evolve with time. The measurement model defined in \eqref{eq_observationLevel} describes the dependence of the sensory signals $\mathrm{\Tilde{Z}_{t}}^{(i)}$ on the hidden states $\mathrm{\Tilde{X}_{t}}^{(i)}$, parametrized by the measurement matrix $\mathrm{H}$, where $d$ stands for the data dimensionality and $\mathrm{\tilde{v}}_{t}$ is a random noise.
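For intuition, a minimal simulation of this three-level hierarchy might look as follows (a sketch; the dimensions, matrices and noise levels are illustrative choices, not values learned from data):
\begin{verbatim}
# Simulate the hierarchy of the three levels: a Markov chain over clusters
# (discrete level) drives a linear-Gaussian model (continuous/observation).
import numpy as np

rng = np.random.default_rng(0)
M, d, T = 3, 2, 100                     # clusters, dimensionality, horizon
Pi = np.array([[.8, .1, .1],
               [.1, .8, .1],
               [.1, .1, .8]])           # cluster transition probabilities
U = np.array([[1., 0.], [0., 1.], [-1., -1.]])  # one control vector per cluster
A = B = H = np.eye(d)

S = np.zeros(T, dtype=int); X = np.zeros((T, d)); Z = np.zeros((T, d))
for t in range(1, T):
    S[t] = rng.choice(M, p=Pi[S[t - 1]])                       # discrete level
    X[t] = A @ X[t - 1] + B @ U[S[t]] + .05 * rng.standard_normal(d)  # dynamics
    Z[t] = H @ X[t] + .10 * rng.standard_normal(d)             # measurement
\end{verbatim}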
\subsection{Learning GDBN}
The hierarchical dynamic models defined in \eqref{eq_discreteLevel}, \eqref{eq_continuousLevel} and \eqref{eq_observationLevel} are structured in a Generalized Dynamic Bayesian Network (GDBN) \cite{9858012} as shown in Fig.~\ref{fig_GDBN_CGDBN}-(a) that provides a probabilistic graphical model expressing the conditional dependencies among random hidden variables and observable states. The generative process explaining how sensory signals have been generated can be factorized as:
\begin{equation} \label{eq_generative_process}
\begin{split}
\mathrm{P}(\mathrm{\tilde{Z}}_{t}^{(i)}, \mathrm{\tilde{X}}_{t}^{(i)}, \mathrm{\tilde{S}}_{t}^{(i)}) = \mathrm{P}(\mathrm{\tilde{S}}_{0}^{(i)}) \mathrm{P}(\mathrm{\tilde{X}}_{0}^{(i)}) \\ \bigg[ \prod_{t=1}^{\mathrm{T}} \mathrm{P}(\mathrm{\tilde{Z}}_{t}^{(i)}|\mathrm{\tilde{X}}_{t}^{(i)}) \mathrm{P}(\mathrm{\tilde{X}}_{t}^{(i)}|\mathrm{\tilde{X}}_{t-1}^{(i)}, \mathrm{\tilde{S}}_{t}^{(i)}) \mathrm{P}(\mathrm{\tilde{S}}_{t}^{(i)}|\mathrm{\tilde{S}}_{t-1}^{(i)}) \bigg],
\end{split}
\end{equation}
where $\mathrm{P}(\mathrm{\tilde{S}}_{0}^{(i)})$ and $\mathrm{P}(\mathrm{\tilde{X}}_{0}^{(i)})$ are initial prior distributions, $\mathrm{P}(\mathrm{\tilde{Z}}_{t}^{(i)}|\mathrm{\tilde{X}}_{t}^{(i)})$ is the likelihood, $\mathrm{P}(\mathrm{\tilde{X}}_{t}^{(i)}|\mathrm{\tilde{X}}_{t-1}^{(i)}, \mathrm{\tilde{S}}_{t}^{(i)})$ and $\mathrm{P}(\mathrm{\tilde{S}}_{t}^{(i)}|\mathrm{\tilde{S}}_{t-1}^{(i)})$ are the transition densities describing the temporal and hierarchical dynamics of the generalized state-space model.
The generative process defined in \eqref{eq_generative_process} indicates the cause-effect relationships that the model imposes on the random variables $\mathrm{\tilde{S}}_{t}^{(i)}$, $\mathrm{\tilde{X}}_{t}^{(i)}$ and $\mathrm{\tilde{Z}}_{t}^{(i)}$, forming a chain of causality describing how one state contributes to the production of another state, which is represented by the link $\mathrm{\tilde{S}}_{t}^{(i)} \rightarrow \mathrm{\tilde{X}}_{t}^{(i)} \rightarrow \mathrm{\tilde{Z}}_{t}^{(i)}$.
The RSU starts perceiving the environment using a static assumption about the evolution of the environmental states, namely that the sensory signals are only subject to random noise. Hence, the RSU predicts the RF signal (or the vehicle's trajectory) using the following simplified model:
$\mathrm{\tilde{X}}_{t}^{(i)} = \mathrm{A} \mathrm{\tilde{X}}_{t-1}^{(i)} + \mathrm{\tilde{w}}_{t}$,
that differs from \eqref{eq_continuousLevel} in the control vector $\mathrm{U}_{\mathrm{\Tilde{S}_{t}}^{(i)}}$ which is supposed to be null, i.e., $\mathrm{U}_{\mathrm{\Tilde{S}_{t}}^{(i)}} = 0$ as the dynamic rules explaining how the environmental states evolve with time are not discovered yet.
Those rules can be discovered by exploiting the generalized errors (GEs), i.e., the difference between predictions and observations. The GEs projected into the measurement space are calculated as:
$\tilde{\varepsilon}_{\mathrm{\tilde{Z}}_{t}^{(i)}}^{} = \mathrm{\tilde{Z}}_{t}^{(i)} - \mathrm{H} \mathrm{\tilde{X}}_{t}^{(i)}$.
Projecting $\tilde{\varepsilon}_{\mathrm{\tilde{Z}}_t}^{}$ back into the generalized state space can be done as follows:
\begin{equation}\label{GE_continuousLevel_initialModel}
\tilde{\varepsilon}_{\mathrm{\tilde{X}}_t}^{(i)} = \mathrm{H}^{-1}\tilde{\varepsilon}_{\mathrm{\tilde{Z}}_{t}^{(i)}}^{}=\mathrm{H}^{-1}(\mathrm{\tilde{Z}}_{t}^{(i)}-\mathrm{H}\mathrm{\tilde{X}}_{t}^{(i)}) = \mathrm{H}^{-1}\mathrm{\tilde{Z}}_{t}^{(i)} - \mathrm{\tilde{X}}_{t}^{(i)}.
\end{equation}
The GEs defined in \eqref{GE_continuousLevel_initialModel} can be grouped into discrete clusters in an unsupervised manner by employing the Growing Neural Gas (GNG). The latter produces a set of discrete variables (clusters) denoted by:
$\mathbf{\tilde{S}^{(i)}}=\{\mathrm{\tilde{S}}_{1}^{(i)},\mathrm{\tilde{S}}_{2}^{(i)},\dots,\mathrm{\tilde{S}}_{M_{i}}^{(i)}\}$,
where $M_{i}$ is the total number of clusters and each cluster $\mathrm{\tilde{S}}_{m}^{(i)} \in \mathbf{\tilde{S}^{(i)}}$ follows a Gaussian distribution composed of GEs with homogeneous properties, such that $\mathrm{\tilde{S}}_{m}^{(i)} \sim \mathcal{N}(\tilde{\mu}_{\mathrm{\tilde{S}}_{m}^{(i)}}=[\mu_{\tilde{S}_{m}^{(i)}}, \Dot{\mu}_{\tilde{S}_{m}^{(i)}}], \Sigma_{\mathrm{\tilde{S}}_{m}^{(i)}})$.
\begin{figure}[t!]
\begin{center}
\begin{minipage}[b]{.40\linewidth}
\centering
\includegraphics[width=2.5cm]{Figures/GDBN.pdf}
\\[-1.0mm]
{\scriptsize (a)}
\end{minipage}
\begin{minipage}[b]{.50\linewidth}
\centering
\includegraphics[width=5.0cm]{Figures/C_GDBN.pdf}
{\scriptsize (b)}
\end{minipage}
\caption{(a) The GDBN. (b) The coupled GDBN (C-GDBN) composed of two GDBNs representing the two signals received at the RSU where their discrete hidden variables are stochastically coupled.}
\label{fig_GDBN_CGDBN}
\end{center}
\end{figure}
The dynamic transitions of the sensory signals among the available clusters can be captured in a time-varying transition matrix ($\Pi_{\tau}$) by estimating the time-varying transition probabilities $\pi_{ij}=\mathrm{P}(\mathrm{\tilde{S}}_{t}^{(i)}=i|\mathrm{\tilde{S}}_{t-1}^{(i)}=j, \tau)$ where $\tau$ is the time spent in $\mathrm{\tilde{S}}_{t-1}^{(i)}=j$ before transition to $\mathrm{\tilde{S}}_{t}^{(i)}=i$.
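A minimal sketch of estimating such a transition matrix from a sequence of cluster labels follows (time-invariant case; the time-varying matrix $\Pi_{\tau}$ additionally conditions on the dwell time $\tau$, which is omitted here):
\begin{verbatim}
# Estimate transition probabilities from a sequence of GNG cluster labels.
# Row r of the result is the empirical distribution P(next | previous = r).
import numpy as np

def transition_matrix(labels, M):
    C = np.zeros((M, M))
    for prev, cur in zip(labels[:-1], labels[1:]):
        C[prev, cur] += 1.0
    return C / np.maximum(C.sum(axis=1, keepdims=True), 1.0)

labels = [0, 0, 1, 1, 2, 2, 0, 1, 2, 2]   # toy cluster assignments
print(transition_matrix(labels, M=3))
\end{verbatim}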
\subsection{Learning Coupled GDBN (C-GDBN)}
The learning procedure described in the previous section can be executed for each signal type, i.e., RF and GPS. After learning a separate GDBN model for each signal type, we analyse the interaction behaviour between the RF signal and the GPS signal received at the RSU by tracking the cluster firing among $\mathbf{\tilde{S}^{(1)}}$ and $\mathbf{\tilde{S}^{(2)}}$ during a certain experience. Such an interaction can be encoded in a coupled GDBN (C-GDBN), as shown in Fig.~\ref{fig_GDBN_CGDBN}-(b), composed of the two GDBNs representing the two signals, whose hidden variables at the discrete level are stochastically coupled (in $\mathrm{\tilde{C}}_{t}{=}[\mathrm{\tilde{S}}_{t}^{(1)},\mathrm{\tilde{S}}_{t}^{(2)}]$), as those variables are uncorrelated but have coupled means.
The interactive matrix $\Phi \in \mathbb{R}^{M_{1}\times M_{2}}$, which encodes the firing-cluster pattern and allows predicting the GPS signal from the RF signal, is defined as follows:
\begin{equation} \label{interactiveTM_fromRFtoGPS}
\Phi =
\begin{bmatrix}
\mathrm{P}(\mathrm{\Tilde{S}_{1}}^{(2)}|\mathrm{\Tilde{S}_{1}}^{(1)}) & \mathrm{P}(\mathrm{\Tilde{S}_{2}}^{(2)}|\mathrm{\Tilde{S}_{1}}^{(1)}) & \dots & \mathrm{P}(\mathrm{\Tilde{S}_{M_{2}}}^{(2)}|\mathrm{\Tilde{S}_{1}}^{(1)}) \\
\mathrm{P}(\mathrm{\Tilde{S}_{1}}^{(2)}|\mathrm{\Tilde{S}_{2}}^{(1)}) & \mathrm{P}(\mathrm{\Tilde{S}_{2}}^{(2)}|\mathrm{\Tilde{S}_{2}}^{(1)}) & \dots & \mathrm{P}(\mathrm{\Tilde{S}_{M_{2}}}^{(2)}|\mathrm{\Tilde{S}_{2}}^{(1)}) \\
\vdots & \vdots & \ddots & \vdots \\
\mathrm{P}(\mathrm{\Tilde{S}_{1}}^{(2)}|\mathrm{\Tilde{S}_{M_{1}}}^{(1)}) & \mathrm{P}(\mathrm{\Tilde{S}_{2}}^{(2)}|\mathrm{\Tilde{S}_{M_{1}}}^{(1)}) & \dots & \mathrm{P}(\mathrm{\Tilde{S}_{M_{2}}}^{(2)}|\mathrm{\Tilde{S}_{M_{1}}}^{(1)})
\end{bmatrix}.
\end{equation}
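A sketch of how $\Phi$ can be estimated from co-occurring cluster labels of the two signals (the toy labels below are placeholders):
\begin{verbatim}
# Estimate Phi[i, j] = P(S^(2) = j | S^(1) = i) from co-occurring labels
# of the RF-signal clusters (s1) and the trajectory clusters (s2).
import numpy as np

def interactive_matrix(s1, s2, M1, M2):
    Phi = np.zeros((M1, M2))
    for i, j in zip(s1, s2):
        Phi[i, j] += 1.0
    return Phi / np.maximum(Phi.sum(axis=1, keepdims=True), 1.0)

s1 = [0, 0, 1, 2, 1, 0, 2, 2]   # RF-signal clusters fired at t = 1, ..., 8
s2 = [1, 1, 0, 2, 0, 1, 2, 2]   # trajectory clusters fired at the same times
print(interactive_matrix(s1, s2, M1=3, M2=3))
\end{verbatim}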
\subsection{Joint Prediction and Perception}
The RSU starts predicting the RF signals it expects to receive from each vehicle based on a Modified Markov Jump Particle Filter (M-MJPF) \cite{9858012} that combines a particle filter (PF) and a Kalman filter (KF) to perform temporal and hierarchical predictions. Since the acquired C-GDBN allows predicting a certain signal's dynamic evolution based on another's evolution, it requires an interactive Bayesian filter capable of dealing with more complicated predictions. To this purpose, we propose to employ an interactive M-MJPF (IM-MJPF) on the C-GDBN. The IM-MJPF consists of a PF that propagates a set of $L$ equally weighted particles, such that $\{\mathrm{\tilde{S}}_{t,l}^{(1)}, \mathrm{W}_{t,l}^{(1)}\}{\sim}\{\pi(\mathrm{\tilde{S}}_{t}^{(1)}), \frac{1}{L}\}$, where $l \in \{1,\dots,L\}$ and $(.)^{(1)}$ denotes the RF signal type. In addition, the RSU relies on $\Phi$ defined in \eqref{interactiveTM_fromRFtoGPS} to predict $\mathrm{\tilde{S}}_{t}^{(2)}$, realizing the discrete cluster of the vehicle's trajectory starting from the predicted RF signal, according to: $\{\mathrm{\tilde{S}}_{t}^{(2)},\mathrm{W}_{t,l}^{(2)}\}{\sim} \{\Phi(\mathrm{\tilde{S}}_{t,l}^{(1)}){=}\mathrm{P}(.|\mathrm{\tilde{S}}_{t,l}^{(1)}), \mathrm{W}_{t,l}^{(2)}\}$. For each predicted discrete variable $\mathrm{\tilde{S}}_{t,l}^{(i)}$, a multiple KF is employed to predict multiple continuous variables, guided by the predictions at the higher level as declared in \eqref{eq_continuousLevel}, which can be represented probabilistically as $\mathrm{P}(\mathrm{\tilde{X}}_{t}^{(i)}|\mathrm{\tilde{X}}_{t-1}^{(i)}, \mathrm{\tilde{S}}_{t}^{(i)})$. The posterior probability that is used to evaluate expectations is given by:
\begin{multline} \label{piX}
\pi(\mathrm{\tilde{X}}_{t}^{(i)})=\mathrm{P}(\mathrm{\tilde{X}}_{t}^{(i)},\mathrm{\tilde{S}}_{t}^{(i)}|\mathrm{\tilde{Z}}_{t-1}^{(i)})= \\ \int \mathrm{P}(\mathrm{\tilde{X}}_{t}^{(i)}|\mathrm{\tilde{X}}_{t-1}^{(i)}, \mathrm{\tilde{S}}_{t}^{(i)}) \lambda(\mathrm{\tilde{X}}_{t-1}^{(i)})d\mathrm{\tilde{X}}_{t-1}^{(i)},
\end{multline}
where $\lambda(\mathrm{\tilde{X}}_{t-1}^{(i)}){=}\mathrm{P}(\mathrm{\tilde{Z}}_{t-1}^{(i)}|\mathrm{\tilde{X}}_{t-1}^{(i)})$.
The posterior distribution can be updated (thus representing the updated belief) after having seen the new evidence $\mathrm{\tilde{Z}}_{t}^{(i)}$ by exploiting the diagnostic message $\lambda(\mathrm{\tilde{X}}_{t}^{(i)})$ in the following form: $\mathrm{P}(\mathrm{\tilde{X}}_{t}^{(i)}, \mathrm{\tilde{S}}_{t}^{(i)}|\mathrm{\tilde{Z}}_{t}^{(i)}) {=} \pi(\mathrm{\tilde{X}}_{t}^{(i)})\lambda(\mathrm{\tilde{X}}_{t}^{(i)})$. Likewise, the belief in the discrete hidden variables can be updated according to: $\mathrm{W}_{t,l}^{(i)}{=}\mathrm{W}_{t,l}^{(i)}\lambda (\mathrm{\tilde{S}}_{t}^{(i)})$ where:
$\lambda (\mathrm{\tilde{S}}_{t}^{(i)}) {=} \lambda (\mathrm{\Tilde{X}}_{t}^{(i)})\mathrm{P}(\mathrm{\Tilde{X}}_{t}^{(i)}|\mathrm{\tilde{S}}_{t}^{(i)}) {=} \mathrm{P}(\mathrm{\tilde{Z}}_{t}^{(i)}|\mathrm{\Tilde{X}}_{t}^{(i)})\mathrm{P}(\mathrm{\Tilde{X}}_{t}^{(i)}|\mathrm{\tilde{S}}_{t}^{(i)})$.
\subsection{Joint GPS spoofing and jamming detection}
The RSU can evaluate the current situation and identify whether a V2I link is under attack or the satellite link is being spoofed based on multiple abnormality indicators produced by the IM-MJPF. The first indicator measures the similarity between the predicted RF signal and the observed one, and is defined as:
\begin{equation}\label{eq_CLA1}
\Upsilon_{\mathrm{\tilde{X}}_{t}^{(1)}} = -ln \bigg( \mathcal{BC} \big(\pi(\mathrm{\tilde{X}}_{t}^{(1)}),\lambda(\mathrm{\tilde{X}}_{t}^{(1)}) \big) \bigg),
\end{equation}
where $\mathcal{BC}(.){=}\int \sqrt{\pi(\mathrm{\tilde{X}}_{t}^{(1)})\lambda(\mathrm{\tilde{X}}_{t}^{(1)})}\,d\mathrm{\tilde{X}}_{t}^{(1)}$ is the Bhattacharyya coefficient.
The second indicator measures the similarity between the predicted GPS signal (predicted from the RF signal) and the one observed after decoding the RF signal, and is defined as:
\begin{equation}\label{eq_CLA2}
\Upsilon_{\mathrm{\tilde{X}}_{t}^{(2)}} = -ln \bigg( \mathcal{BC} \big(\pi(\mathrm{\tilde{X}}_{t}^{(2)}),\lambda(\mathrm{\tilde{X}}_{t}^{(2)}) \big) \bigg),
\end{equation}
where $\mathcal{BC}(.){=}\int \sqrt{\pi(\mathrm{\tilde{X}}_{t}^{(2)})\lambda(\mathrm{\tilde{X}}_{t}^{(2)})}\,d\mathrm{\tilde{X}}_{t}^{(2)}$.
The RSU can discriminate among different hypotheses to understand the current situation, namely whether a jammer is attacking the V2I link, a spoofer is attacking the link between the satellite and the vehicle, or both jammer and spoofer are absent, according to:
\begin{equation}
\begin{cases}
\mathcal{H}_{0}: \text{if} \ \ \Upsilon_{\mathrm{\tilde{X}}_{t}^{(1)}} < \xi_{1} \ \text{and} \ \Upsilon_{\mathrm{\tilde{X}}_{t}^{(2)}} < \xi_{2}, \\
\mathcal{H}_{1}: \text{if} \ \ \Upsilon_{\mathrm{\tilde{X}}_{t}^{(1)}} \geq \xi_{1} \ \text{and} \ \Upsilon_{\mathrm{\tilde{X}}_{t}^{(2)}} \geq \xi_{2}, \\
\mathcal{H}_{2}: \text{if} \ \ \Upsilon_{\mathrm{\tilde{X}}_{t}^{(1)}} < \xi_{1} \ \text{and} \ \Upsilon_{\mathrm{\tilde{X}}_{t}^{(2)}} \geq \xi_{2},
\end{cases}
\end{equation}
where $\xi_{1} = \mathbb{E}[\Bar{\Upsilon}_{\mathrm{\tilde{X}}_{t}^{(1)}}] + 3\sqrt{\mathbb{V}[\Bar{\Upsilon}_{\mathrm{\tilde{X}}_{t}^{(1)}}]}$, and $\xi_{2} = \mathbb{E}[\Bar{\Upsilon}_{\mathrm{\tilde{X}}_{t}^{(2)}}] + 3\sqrt{\mathbb{V}[\Bar{\Upsilon}_{\mathrm{\tilde{X}}_{t}^{(2)}}]}$. In $\xi_{1}$ and $\xi_{2}$, $\Bar{\Upsilon}_{\mathrm{\tilde{X}}_{t}^{(1)}}$ and $\Bar{\Upsilon}_{\mathrm{\tilde{X}}_{t}^{(2)}}$ stand for the abnormality signals during training (i.e., normal situation when jammer and spoofer are absent).
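For one-dimensional Gaussian beliefs the Bhattacharyya coefficient has a closed form, which the following sketch uses to compute the abnormality indicator and the mean-plus-$3\sigma$ training threshold (all numbers are illustrative):
\begin{verbatim}
# Abnormality indicator -ln BC for 1-D Gaussian prediction/likelihood pairs
# (closed-form BC for Gaussians), with the mean + 3*std training threshold.
import numpy as np

def abnormality(mu_p, s_p, mu_l, s_l):
    bc = np.sqrt(2 * s_p * s_l / (s_p**2 + s_l**2)) * np.exp(
        -(mu_p - mu_l)**2 / (4 * (s_p**2 + s_l**2)))
    return -np.log(bc)

rng = np.random.default_rng(2)
errs = 0.1 * rng.standard_normal(500)              # small errors: normal phase
train = np.array([abnormality(0.0, 1.0, e, 1.0) for e in errs])
xi = train.mean() + 3 * train.std()                # threshold as in the text
print("attack flagged:", abnormality(0.0, 1.0, 3.0, 1.0) >= xi)  # True
\end{verbatim}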
\subsection{Evaluation metrics}
In order to evaluate the performance of the proposed method in jointly detecting the jammer and the GPS spoofer, we adopt the jammer detection probability ($\mathrm{P}_{d}^{j}$) and the spoofer detection probability ($\mathrm{P}_{d}^{s}$), defined as:
\begin{equation}
\mathrm{P}_{d}^{j} = \mathrm{Pr}(\Upsilon_{\mathrm{\tilde{X}}_{t}^{(1)}}\geq \xi_{1}, \Upsilon_{\mathrm{\tilde{X}}_{t}^{(2)}} \geq \xi_{2}|\mathcal{H}_{1}),
\end{equation}
\begin{equation}
\mathrm{P}_{d}^{s} = \mathrm{Pr}(\Upsilon_{\mathrm{\tilde{X}}_{t}^{(1)}}< \xi_{1}, \Upsilon_{\mathrm{\tilde{X}}_{t}^{(2)}} \geq \xi_{2}|\mathcal{H}_{2}).
\end{equation}
Also, we evaluate the accuracy of the proposed method in predicting and estimating the vehicles' trajectories and the expected RF signals by adopting the root mean square error (RMSE) defined as:
\begin{equation}
RMSE = \sqrt{ \frac{1}{T} \sum_{t=1}^{T}\bigg( \mathrm{\tilde{Z}}_{t}^{(i)}-\mathrm{\tilde{X}}_{t}^{(i)} \bigg)^{2} },
\end{equation}
where $T$ is the total number of predictions.
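A toy sketch of how these metrics can be computed from the indicator traces (all arrays below are placeholders, not simulation outputs):
\begin{verbatim}
# Empirical evaluation: hypothesis decisions from the two indicators,
# detection probabilities, and RMSE (all arrays are placeholders).
import numpy as np

def decide(y1, y2, xi1, xi2):              # 0: H0, 1: jammer, 2: spoofer
    H = np.zeros(len(y1), dtype=int)
    H[(y1 >= xi1) & (y2 >= xi2)] = 1
    H[(y1 < xi1) & (y2 >= xi2)] = 2
    return H

def rmse(z, x):
    return np.sqrt(np.mean((np.asarray(z) - np.asarray(x)) ** 2))

y1 = np.array([0.1, 2.0, 0.2, 1.9]); y2 = np.array([0.1, 2.1, 1.8, 2.2])
truth = np.array([0, 1, 2, 1])             # ground-truth situations
H = decide(y1, y2, xi1=1.0, xi2=1.0)
print("Pd_j =", np.mean(H[truth == 1] == 1),
      " Pd_s =", np.mean(H[truth == 2] == 2),
      " rmse =", rmse([1, 2, 3], [1.1, 1.9, 3.2]))
\end{verbatim}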
\section{Simulation Results}
In this section, we evaluate the performance of the proposed method in jointly detecting the jammer and the spoofer using extensive simulations. We consider $\mathrm{N}=2$ vehicles interacting inside the environment and exchanging their states (i.e., position and velocity) with the RSU. The vehicles move along predefined trajectories performing various maneuvers, which are taken from the \textit{Lankershim} dataset proposed in \cite{5206559}. The dataset depicts a four-way intersection and includes about $19$ intersection maneuvers. The RSU assigns one subchannel realizing the V2I link to each vehicle, over which the vehicle's states are transmitted. The transmitted signal carrying the vehicle's state and the jamming signal are both QPSK modulated.
The simulation settings are: carrier frequency of $2$ GHz, BW${=}1.4$ MHz, cell radius of $500$ m, RSU antenna height and gain of $25$ m and $8$ dBi, receiver noise figure of $5$ dB, vehicle antenna height and gain of $1.5$ m and $3$ dBi, vehicle speed of $40$ km/h, V2I transmit power of $23$ dBm, jammer transmit power ranging from $20$ dBm to $40$ dBm, SNR of $20$ dB, path loss model ($128.1{+}37.6\log d$), log-normal shadowing with $8$ dB standard deviation and a fast fading channel following the Rayleigh distribution.
\begin{figure}[ht!]
\begin{center}
\begin{minipage}[b]{.55\linewidth}
\centering
\includegraphics[width=5.0cm]{Results/ObservedTrajectories_reference}
\\[-1.5mm]
{\scriptsize (a)}
\end{minipage}
\begin{minipage}[b]{.49\linewidth}
\centering
\includegraphics[width=4.9cm]{Results/ObservedRFsignal_Veh1_reference}
\\[-1.5mm]
{\scriptsize (b)}
\end{minipage}
\begin{minipage}[b]{.49\linewidth}
\centering
\includegraphics[width=4.9cm]{Results/ObservedRFsignal_Veh2_reference}
\\[-1.5mm]
{\scriptsize (c)}
\end{minipage}
\caption{An example visualizing the received RF signals from the two vehicles and the corresponding trajectories: (a) Vehicles' trajectories, (b) received RF signal from vehicle 1, (c) received RF signal from vehicle 2.}
\label{fig_receivedRFsignalandTrajectory}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\begin{minipage}[b]{.49\linewidth}
\centering
\includegraphics[width=4.5cm]{Results/clusters_trajectory_veh1}
\\[-1.5mm]
{\scriptsize (a)}
\end{minipage}
\begin{minipage}[b]{.49\linewidth}
\centering
\includegraphics[width=4.5cm]{Results/clusters_trajectory_veh2}
\\[-1.5mm]
{\scriptsize (b)}
\end{minipage}
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[width=4.5cm]{Results/clusters_RFsignal_veh1}
\\[-1.5mm]
{\scriptsize (c)}
\end{minipage}
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[width=4.5cm]{Results/clusters_RFsignal_veh2}
\\[-1.5mm]
{\scriptsize (d)}
\end{minipage}
\caption{GNG output after clustering the generalized errors obtained from different experiences: (a) clustered trajectory of vehicle 1, (b) clustered trajectory of vehicle 2, (c) clustered RF signal received from vehicle 1, (d) clustered RF signal received from vehicle 2.}
\label{fig_GNG_of_receivedRFsignalandTrajectory}
\end{center}
\end{figure}
The RSU aims to learn multiple interactive models (i.e., C-GDBN models) encoding the cross-relationship between the received RF signal from each vehicle and its corresponding trajectory. These models allow the RSU to predict the trajectory each vehicle will follow based on the received RF signal and to evaluate whether the V2I links are under jamming attacks or the satellite link is under spoofing. Note that the RSU receives only the RF signals from the two vehicles and obtains their positions after decoding those signals. Thus, the RSU should be able to evaluate whether the received RF signals are evolving according to the dynamic rules learned so far and whether the vehicles are following the expected (right) trajectories, in order to decide whether the V2I links are really under attack or the satellite link is under spoofing.
Fig.~\ref{fig_receivedRFsignalandTrajectory}-(a) illustrates an example of the interaction between the two vehicles performing a particular manoeuvre, and Fig.~\ref{fig_receivedRFsignalandTrajectory}-(b) and (c) show the RF signals received by the RSU from the two vehicles. At the beginning of the learning process, the RSU performs predictions according to the simplified model defined in \eqref{eq_continuousLevel} where $\mathrm{U}_{\mathrm{\Tilde{S}_{t}}^{(i)}} {=} 0$.
After obtaining the generalized errors as pointed out in \eqref{GE_continuousLevel_initialModel}, the RSU clusters those errors using the GNG to learn two GDBN models encoding the dynamic rules of how the RF signal and the GPS signal evolve with time, respectively, as shown in Fig.~\ref{fig_GNG_of_receivedRFsignalandTrajectory} and Fig.~\ref{fig_graphicalRep_transitionMatrices}. The RSU can couple the two GDBNs by learning the interactive transition matrix that is encoded in a C-GDBN, as shown in Fig.~\ref{fig_interactiveMatrices}.
\begin{figure}[t!]
\begin{center}
\begin{minipage}[b]{.49\linewidth}
\centering
\includegraphics[width=4.5cm]{Results/graphTransition_Trajectory_veh1}
\\[-1.5mm]
{\scriptsize (a)}
\end{minipage}
\begin{minipage}[b]{.49\linewidth}
\centering
\includegraphics[width=4.5cm]{Results/graphTransition_Trajectory_veh2}
\\[-1.5mm]
{\scriptsize (b)}
\end{minipage}
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[width=4.5cm]{Results/graphTransition_RFsignal_veh1}
\\[-1.5mm]
{\scriptsize (c)}
\end{minipage}
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[width=4.5cm]{Results/graphTransition_RFsignal_veh2}
\\[-1.5mm]
{\scriptsize (d)}
\end{minipage}
\caption{Graphical representation of the transition matrices (TM): (a) TM related to the trajectory of vehicle 1, (b) TM related to the trajectory of vehicle 2, (c) TM related to the RF signal received from vehicle 1, (d) TM related to the RF signal received from vehicle 2.}
\label{fig_graphicalRep_transitionMatrices}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\begin{minipage}[b]{.49\linewidth}
\centering
\includegraphics[width=3.8cm]{Results/interactiveMatrix_RFtoGPS_Neu5_veh1}
\\[-1.0mm]
{\scriptsize (a)}
\end{minipage}
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[width=3.8cm]{Results/interactiveMatrix_RFtoGPS_Neu25_veh1}
\\[-1.0mm]
{\scriptsize (b)}
\end{minipage}
\caption{Interactive transition matrix defined in \eqref{interactiveTM_fromRFtoGPS} using different configurations: (a) $\mathrm{M_{1}}=5$, $\mathrm{M_{2}}=5$, (b) $\mathrm{M_{1}}=25$, $\mathrm{M_{2}}=25$.}
\label{fig_interactiveMatrices}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\begin{minipage}[b]{.49\linewidth}
\centering
\includegraphics[width=4.9cm]{Results/RF_situation1_best_veh1}
\\[-1.5mm]
{\scriptsize (a)}
\end{minipage}
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[width=4.9cm]{Results/RF_situation1_worst_veh1}
\\[-1.5mm]
{\scriptsize (b)}
\end{minipage}
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[width=4.9cm]{Results/RF_situation1_best_veh2}
\\[-1.5mm]
{\scriptsize (c)}
\end{minipage}
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[width=4.9cm]{Results/RF_situation1_worst_veh2}
\\[-1.5mm]
{\scriptsize (d)}
\end{minipage}
\caption{An example visualizing the predicted and observed RF signals transmitted by the 2 vehicles using different configurations. Predicted RF signal from: (a) vehicle 1 using $\mathrm{M_{1}}{=}5$, $\mathrm{M_{2}}{=}5$, (b) vehicle 1 using $\mathrm{M_{1}}{=}25$, $\mathrm{M_{2}}{=}25$, (c) vehicle 2 using $\mathrm{M_{1}}{=}5$, $\mathrm{M_{2}}{=}5$, (d) vehicle 2 using $\mathrm{M_{1}}{=}25$, $\mathrm{M_{2}}{=}25$.}
\label{fig_situation1_PredictedRF}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\begin{minipage}[b]{.49\linewidth}
\centering
\includegraphics[width=4.8cm]{Results/GPSfromRF_situation1_best}
\\[-1.0mm]
{\scriptsize (a)}
\end{minipage}
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[width=4.8cm]{Results/GPSfromRF_situation1_worst}
\\[-1.0mm]
{\scriptsize (b)}
\end{minipage}
%
\caption{An example visualizing the predicted and observed trajectories of two vehicles interacting in the environment. (a) $\mathrm{M_{1}}{=}5$, $\mathrm{M_{2}}{=}5$, (b) $\mathrm{M_{1}}{=}25$, $\mathrm{M_{2}}{=}25$.}
\label{fig_situation1_VehiclesTrajectories}
\end{center}
\end{figure}
\begin{figure}[ht!]
\begin{center}
\begin{minipage}[b]{.49\linewidth}
\centering
\includegraphics[width=4.8cm]{Results/rmse_on_trajectory}
\\[-1.0mm]
{\scriptsize (a)}
\end{minipage}
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[width=4.8cm]{Results/rmse_on_RFSignal}
\\[-1.0mm]
{\scriptsize (b)}
\end{minipage}
\caption{The average RMSE after testing different experiences and examples of: (a) trajectories and (b) RF signals.}
\label{fig_rmse_onTraj_onSig}
\end{center}
\end{figure}
Fig.~\ref{fig_situation1_PredictedRF} illustrates an example comparing the predicted RF signals with the observed ones based on two different configurations used in learning the interactive matrix (as shown in Fig.~\ref{fig_interactiveMatrices}). Also, Fig.~\ref{fig_situation1_VehiclesTrajectories} compares the predicted and observed trajectories of the two vehicles using the two interactive matrices depicted in Fig.~\ref{fig_interactiveMatrices}. From Fig.~\ref{fig_situation1_PredictedRF} and Fig.~\ref{fig_situation1_VehiclesTrajectories} we can see that using an interactive matrix with fewer clusters yields better predictions than using one with more clusters. This can be validated by observing Fig.~\ref{fig_rmse_onTraj_onSig}, which illustrates the RMSE values versus different numbers of clusters for the two models representing the dynamics of the received RF signals and the vehicles' trajectories. It can be seen that as the number of clusters increases the RMSE increases, since adding more clusters decreases the firing probability, i.e., the probability of being in one of the $M_{2}$ clusters of the second model conditioned on being in a certain cluster of the first model.
Fig.~\ref{fig_exNormal_Spoofed_JammedTrajectories} illustrates an example of a vehicle's trajectory under the normal situation (i.e., jammer and spoofer absent), under jamming attacks and under spoofing attacks. The figure also shows the predicted trajectory, which should follow the same dynamic rules learned during the normal situation. After that, we implemented the IM-MJPF on the learned C-GDBN to perform multiple predictions, i.e., to predict the RF signal that the RSU expects to receive from a certain vehicle and the corresponding trajectory that the vehicle is supposed to follow. The IM-MJPF, through the comparison between multiple predictions and observations, produces multiple abnormality signals as defined in \eqref{eq_CLA1} and \eqref{eq_CLA2}, which are used to detect the jammer and the spoofer.
Fig.~\ref{fig_abnormalitySignals_JammerSpoofer} illustrates the multiple abnormality signals related to the example shown in Fig.~\ref{fig_exNormal_Spoofed_JammedTrajectories}. We can observe that the abnormality signals related to both the RF signal (Fig.~\ref{fig_abnormalitySignals_JammerSpoofer}-(a)) and the trajectory (Fig.~\ref{fig_abnormalitySignals_JammerSpoofer}-(b)) are below the threshold under normal situations. This proves that the RSU has learned the correct dynamic rules of how RF signals and trajectories evolve when the jammer and spoofer are absent (i.e., under normal situations). Also, by relying on the abnormality signals, the RSU can notice a high deviation of both the RF signal and the corresponding trajectory from what it has learned so far when jamming interference occurs. In contrast, under spoofing attacks the RSU notices a deviation only in the trajectory and not in the RF signal, since the spoofer affects only the positions without manipulating the RF signal. In addition, it is clear that the proposed method allows the RSU to identify the type of abnormality occurring and to explain the cause of the detected abnormality (i.e., to understand whether it was caused by a jammer attacking the V2I link or by a spoofer attacking the satellite link).
\begin{figure}[t!]
\centering
\includegraphics[width=6.5cm]{Results/trajectories_underJamming_andSpoofing}
\caption{Vehicle's trajectory under: normal situation, jamming and spoofing.}
\label{fig_exNormal_Spoofed_JammedTrajectories}
\end{figure}
\begin{figure}[t!]
\begin{center}
\begin{minipage}[b]{.92\linewidth}
\centering
\includegraphics[height=2.6cm]{Results/abnSignal_onRF}
\\[-1.5mm]
{\scriptsize (a)}
\end{minipage}
\begin{minipage}[b]{.92\linewidth}
\centering
\includegraphics[height=2.6cm]{Results/abnSignal_onGPS}
\\[-1.5mm]
{\scriptsize (b)}
\end{minipage}
%
\caption{Abnormality Signals related to the example shown in Fig.\ref{fig_exNormal_Spoofed_JammedTrajectories}: (a) abnormality indicators related to the RF signal, (b) abnormality indicators related to the trajectory.}
\label{fig_abnormalitySignals_JammerSpoofer}
\end{center}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[height=3.2cm]{Results/Detection_Probability_RFfromGPS_versusPj}
\caption{Detection probability ($\mathrm{P_{d}}$) versus jammer's power ($\mathrm{P_{J}}$) using different number of clusters $\mathrm{M}_{2}$.}
\label{fig_jammerDetectionProb}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[height=3.2cm]{Results/spoofingDetectionProbability_falseAlarm_versusM2}
\caption{Spoofing detection probability ($\mathrm{P}_{d}^{s}$) and spoofing false alarm ($\mathrm{P}_{f}^{s}$) versus the number of clusters $\mathrm{M}_{2}$.}
\label{fig_spooferDetectionProb}
\end{figure}
Fig.~\ref{fig_jammerDetectionProb} shows the overall performance of the proposed method in detecting the jammer, obtained by testing many situations and examples and by considering different jamming powers ranging from $20$ dBm to $40$ dBm. It can be seen that the proposed method is able to detect the jammer with high probabilities (near $1$) at both low and high jamming powers. The figure also compares the jammer detection performance for different numbers of clusters ($M_{2}$).
Fig.~\ref{fig_spooferDetectionProb} shows the overall performance of the proposed method in detecting the spoofer, obtained by testing different examples of driving maneuvers. It can be seen that the RSU is able to detect the spoofer with high detection probability and zero false alarms across different numbers of clusters.
\section{Conclusion}
A joint detection method for GPS spoofing and jamming attacks has been proposed. The method is based on learning a dynamic interactive model encoding the cross-correlation between the RF signals received from multiple vehicles and their corresponding trajectories. Simulation results show the high effectiveness of the proposed approach in jointly detecting GPS spoofing and jamming attacks.
Subsequent work will extend the system model to consider more than two vehicles with different channel conditions and various modulation schemes to evaluate the effectiveness of the proposed method.
\bibliographystyle{IEEEtran}
| {'timestamp': '2023-02-02T02:17:21', 'yymm': '2302', 'arxiv_id': '2302.00576', 'language': 'en', 'url': 'https://arxiv.org/abs/2302.00576'} |
\section{Introduction}
Canonical correlation analysis (CCA) is a powerful tool to integrate two data matrices \cite{klami2013bayesian,sun2008least,yang2017canonical,cai2016kernel,wang2017novel}, and it has been widely used in many diverse fields. Given two matrices $\bm{X}\in \mathbb{R}^{n\times p}$ and $\bm{Y}\in \mathbb{R}^{n\times q}$ from the same samples, CCA is used to find two canonical vectors $\bm{u}$ and $\bm{v}$ that maximize the correlation between $\bm{X}\bm{u}$ and $\bm{Y}\bm{v}$. However, in many real-world problems like those in bioinformatics \cite{witten2009penalized,mizutani2012relating,le2009sparse,fang2016joint,yoshida2017sparse}, the number of variables in each data matrix is usually much larger than the sample size. The classical CCA leads to non-sparse canonical vectors, which are difficult to interpret in biology. To conquer this issue, a large number of sparse CCA models \cite{witten2009penalized,mizutani2012relating,le2009sparse,fang2016joint,yoshida2017sparse,parkhomenko2009sparse,witten2009extensions,asteris2016simple,chu2013sparse,hardoon2011sparse,gao2015minimax} have been proposed that use regularized penalties (\emph{e.g.}, LASSO and the $L_0$-norm) to obtain sparse canonical vectors for variable selection. Parkhomenko \emph{et al.} \cite{parkhomenko2009sparse} first proposed a Sparse CCA (SCCA) model using the LASSO ($L_1$-norm) penalty for genomic data integration. $\mbox{L}\hat{e}$ Cao \emph{et al.} \cite{le2009sparse} further proposed a regularized CCA with the Elastic-Net penalty for a similar task. Witten \emph{et al.} \cite{witten2009penalized} proposed the penalized matrix decomposition (PMD) algorithm to solve the sparse CCA with two penalties, LASSO and Fused LASSO, to integrate DNA copy number and gene expression from the same samples/individuals. Furthermore, a large number of generalized LASSO regularized CCA models have been proposed to incorporate prior structural information of variables \cite{lin2013group,virtanen2011bayesian,chen2012structure,chen2012structured,du2016structured}. For example, Lin \emph{et al.} \cite{lin2013group} proposed a Group LASSO regularized CCA to explore the relationship between two types of genomic data sets. If we consider a pathway as a gene group, then these gene pathways form an overlapping group structure \cite{jacob2009group}. Chen \emph{et al.} \cite{chen2012structured} developed an overlapping group LASSO regularized CCA model to exploit such a group structure.
These existing sparse CCA models can find two sparse canonical vectors with a small subset of variables across all samples (Fig.~1(a)). However, many real data such as cancer genomic data show distinct heterogeneity \cite{dai2015breast,chen2016integrative}. Thus, the current CCA models fail to consider such heterogeneity and cannot directly identify a set of sample-specific correlated variables. To this end, we propose a novel sparse weighted CCA (SWCCA) model, where weights are used for regularizing different samples with a typical penalty (\emph{e.g.}, LASSO or the $L_0$-norm) (Fig.~1(b)). In this way, SWCCA can not only select two variable sets, but also select a sample set (Fig.~1(b)). We further adopt an efficient alternating iterative algorithm to solve the $L_0$- (or $L_1$-) regularized SWCCA model. We apply $L_0$-SWCCA and related methods to two simulated datasets and two real biological datasets to demonstrate its efficiency in capturing correlated variables across a subset of samples.
\begin{figure}[ht]
\centering
\includegraphics[width=1\linewidth]{figures/fig1.pdf}
\caption{Illustration of the difference between SWCCA and SCCA. (a) SCCA is used to extract two sparse canonical vectors ($\bm{u}$ and $\bm{v}$) to measure the association of two matrices; (b) SWCCA is used to extract two sparse canonical vectors with respect to a selected subset of samples. The weights ($\bm{w}$) are used for regularizing different samples in SWCCA. SWCCA can not only obtain two sparse canonical vectors, but also identify a set of samples based on the non-zero elements of $\bm{w}$.}
\end{figure}
\section{$\bm{L_0}$-regularized SWCCA}
Here, we assume that there are two data matrices $\bm{X}\in \mathbb{R}^{n\times p}$ ($n$ samples and $p$ variables) and $\bm{Y} \in \mathbb{R}^{n\times q}$ ($n$ samples and $q$ variables) across a same set of samples. The classical CCA seeks two components ($\bm{u}$ and $\bm{v}$) to maximize the correlation between linear combinations of variables from the two data matrices as Eq.(1).
\begin{equation}
\rho = \frac{\bm{u}^{\rm{T}} \bm{\Sigma}_{xy}\bm{v}}{\sqrt{(\bm{u}^{\rm{T}}\bm{\Sigma}_{x}\bm{u})(\bm{v}^{\rm{T}}\bm{\Sigma}_{y}\bm{v})}}
\end{equation}
If $\bm{X}$ and $\bm{Y}$ are centered, we obtain the empirical covariance matrices $\bm{\Sigma}_{xy} = \frac{1}{n}\bm{X}^{\rm{T}}\bm{Y}$, $\bm{\Sigma}_{x} = \frac{1}{n}\bm{X}^{\rm{T}}\bm{X}$ and $\bm{\Sigma}_{y} = \frac{1}{n}\bm{Y}^{\rm{T}}\bm{Y}$. Thus we have the following equivalent criterion as Eq.(2).
\begin{equation}
\rho = \frac{\bm{u}^{\rm{T}}\bm{X}^{\rm{T}}\bm{Y}\bm{v}} {\sqrt{(\bm{u}^{\rm{T}}\bm{X}^{\rm{T}}\bm{X}\bm{u})(\bm{v}^{\rm{T}}\bm{Y}^{\rm{T}}\bm{Y}\bm{v})}}
\end{equation}
Obviously, $\rho$ in (2) is invariant to the scaling of $\bm{u}$ and $\bm{v}$. Thus, maximizing criterion (2) is equivalent to solving the following constrained optimization problem, Eq.(3).
\begin{equation}
\begin{array}{rl}
\max_{\bm{u},\bm{v}} & \bm{u}^{\rm{T}}\bm{X}^{\rm{T}}\bm{Y}\bm{v} \\
\mbox{s.t.}& \bm{u}^{\rm{T}}\bm{X}^{\rm{T}}\bm{X}\bm{u} = 1, \bm{v}^{\rm{T}}\bm{Y}^{\rm{T}}\bm{Y}\bm{v} = 1
\end{array}
\end{equation}
Previous studies \cite{witten2009penalized,witten2009extensions} have shown that treating the covariance matrices ($\frac{1}{n}\bm{X}^{\rm{T}}\bm{X}$, $\frac{1}{n}\bm{Y}^{\rm{T}}\bm{Y}$) as diagonal can yield better results. For this reason, Asteris \emph{et al.} \cite{asteris2016simple} assume that $\bm{X}^{\rm{T}}\bm{X} = \bm{I}$ and $\bm{Y}^{\rm{T}}\bm{Y} = \bm{I}$, and the $L_0$-regularized Sparse CCA ($L_0$-SCCA) (also called ``diagonal penalized CCA'') model can be presented as Eq.(4).
\begin{equation}
\begin{array}{rl}
\max_{\bm{u},\bm{v}} & \bm{u}^{\rm{T}}\bm{X}^{\rm{T}}\bm{Y}\bm{v}\\
\mbox{s.t.}& \|\bm{u}\|_0\leq k_u, \|\bm{v}\|_0\leq k_v\\
& \bm{u}^{\rm{T}}\bm{u} = \bm{v}^{\rm{T}}\bm{v} = 1
\end{array}
\end{equation}
where $\|\bm{u}\|_0$ is the $L_0$-norm penalty function, which returns the number of non-zero entries of $\bm{u}$.
Asteris \emph{et al.}\cite{asteris2016simple} applied a projection strategy to solve $L_0$-SCCA. Let $\bm{A}=\bm{X}^{\rm{T}}\bm{Y}$; then the model of Eq.(4) is equivalent to the rank-one $L_0$-SVD model \cite{min2015novel}.
Let $\bm{a} = \bm{X}\bm{u}$ and $\bm{b} = \bm{Y}\bm{v}$; then the objective function is $\bm{u}^{\rm{T}}\bm{X}^{\rm{T}}\bm{Y}\bm{v} = \sum_{i=1}^n a_ib_i$. To account for the different contributions of the samples, we modify the objective function of Eq.(4) to $\sum_{i=1}^n w_i(a_ib_i)$ with $\bm{w}=[w_1,w_2,\cdots,w_n]^{\rm{T}}$. Thus, we obtain a new objective function as Eq.(5).
\begin{equation}
\sum_{i=1}^n w_i(a_ib_i) = \bm{u}^{\rm{T}}\bm{X}^{\rm{T}}\mbox{diag}(\bm{w})\bm{Y}\bm{v}
\end{equation}
Furthermore, we also force $\bm{w}$ to be sparse to select a limited number of samples. Finally, we propose an $L_0$-regularized SWCCA ($L_0$-SWCCA) model as Eq.(6).
\begin{equation}
\begin{array}{rl}
\max_{\bm{u},\bm{v},\bm{w}} & \bm{u}^{\rm{T}}\bm{X}^{\rm{T}}\mbox{diag}(\bm{w})\bm{Y}\bm{v}\\
\mbox{s.t.} & \|\bm{u}\|_0\leq k_u, \|\bm{v}\|_0\leq k_v, \|\bm{w}\|_0\leq k_w\\
& \bm{u}^{\rm{T}}\bm{u} = \bm{v}^{\rm{T}}\bm{v} = \bm{w}^{\rm{T}}\bm{w} = 1
\end{array}
\end{equation}
where $\mbox{diag}(\bm{w})$ is a diagonal matrix and $\mbox{diag}(\bm{w})_{ii} = w_i$. If $\mbox{diag}(\bm{w}) = \frac{1}{\sqrt{n}}\bm{I}$, then $L_0$-SWCCA reduces to $L_0$-SCCA.
\section{Optimization}
In this section, we design an alternating iterative algorithm to solve (6) by using a sparse projection strategy. We start with the sparse projection problem corresponding to the sub-problem of (6) with fixed $\bm{v}$ and $\bm{w}$ as Eq.(7).
\begin{equation}
\begin{array}{rl}
\max_{\bm{u}} & \bm{u}^{\rm{T}}\bm{z} \\
\mbox{s.t.}& \bm{u}^{\rm{T}}\bm{u} = 1, \|\bm{u}\|_0 \leq k
\end{array}
\end{equation}
For a given column vector $\bm{z} \in \mathbb{R}^{p \times 1}$ and $k\leq p$, we define a sparse projection operator $\Pi(\cdot,k)$ as Eq.(8).
\begin{eqnarray}
[\Pi(\bm{z},k)]_i =
\begin{cases}
z_i, &\mbox{if}~i \in \mbox{support}(\bm{z},k)\cr
0, &\mbox{otherwise}
\end{cases}
\end{eqnarray}
where $\mbox{support}(\bm{z},k)$ is defined as a set of indexes corresponding to the largest $k$ absolute values of $\bm{z}$. For example, if $\bm{z}=[-5, 3, 5, 2, -1]^{\rm{T}}$, then $\Pi(\bm{z},3) = [-5, 3, 5, 0, 0]^{\rm{T}}$.
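For concreteness, $\Pi(\cdot,k)$ can be implemented in a few lines of Python/NumPy (a sketch with our own naming, not code from any published implementation); \texttt{argpartition} provides the linear-time selection discussed in the complexity analysis below.
\begin{verbatim}
import numpy as np

def sparse_project(z, k):
    """Pi(z, k): keep the k largest-magnitude entries of z, zero the rest."""
    z = np.asarray(z, dtype=float)
    out = np.zeros_like(z)
    idx = np.argpartition(np.abs(z), len(z) - k)[-k:]  # linear-time selection
    out[idx] = z[idx]
    return out

print(sparse_project([-5, 3, 5, 2, -1], 3))  # [-5.  3.  5.  0.  0.]
\end{verbatim}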
{\bf Theorem 1}\quad The solution of problem (7) is
\begin{equation}
\bm{u}^* = \frac{\Pi(\bm{z},k)}{\|\Pi(\bm{z},k)\|_2}
\end{equation}
Note that $\|\cdot\|_2$ denotes the Euclidean norm. Theorem~1 can be proved by contradiction (we omit the proof here). Based on Theorem~1, we design an alternating iterative approach to solve Eq.(6).
i) Optimizing $\bm{u}$ with fixed $\bm{v}$ and $\bm{w}$.
Fix $\bm{v}$ and $\bm{w}$ in Eq.(6) and let $\bm{z}_u =\bm{X}^{\rm{T}}\mbox{diag}(\bm{w})\bm{Y}\bm{v}$; then Eq.(6) reduces to Eq.(10).
\begin{equation}
\begin{array}{rl}
\max_{\bm{u}} & \bm{u}^{\rm{T}}\bm{z}_u \\
\mbox{s.t.}& \bm{u}^{\rm{T}}\bm{u} = 1, \|\bm{u}\|_0 \leq k_u
\end{array}
\end{equation}
Based on Theorem~1, we obtain the update rule of $\bm{u}$ as Eq.(11).
\begin{equation}
\bm{u} \leftarrow \frac{\Pi(\bm{z}_u,k_u)}{\|\Pi(\bm{z}_u,k_u)\|_2}
\end{equation}
ii) Optimizing $\bm{v}$ with fixed $\bm{u}$ and $\bm{w}$.
Fix $\bm{u}$ and $\bm{w}$ in Eq.(6) and let $\bm{z}_v=\bm{Y}^{\rm{T}}\mbox{diag}(\bm{w})\bm{X}\bm{u}$; then Eq.(6) reduces to Eq.(12).
\begin{equation}
\begin{array}{rl}
\max_{\bm{v}} & \bm{v}^{\rm{T}}\bm{z}_v \\
\mbox{s.t.}& \bm{v}^{\rm{T}}\bm{v} = 1, \|\bm{v}\|_0 \leq k_v
\end{array}
\end{equation}
Similarly, we obtain the update rule of $\bm{v}$ as Eq.(13).
\begin{equation}
\bm{v} \leftarrow \frac{\Pi(\bm{z}_v,k_v)}{\|\Pi(\bm{z}_v,k_v)\|_2}
\end{equation}
iii) Optimizing $\bm{w}$ with fixed $\bm{u}$ and $\bm{v}$.
Fix $\bm{u}$ and $\bm{v}$ in Eq.(6); then Eq.(6) reduces to Eq.(14).
\begin{equation}
\begin{array}{rl}
\max_{\bm{w}} & \bm{u}^{\rm{T}}\bm{X}^{\rm{T}}\mbox{diag}(\bm{w})\bm{Y}\bm{v}\\
\mbox{s.t.}& \bm{w}^{\rm{T}}\bm{w} = 1, \|\bm{w}\|_0 \leq k_w
\end{array}
\end{equation}
Let $\bm{t}_1 = \bm{Xu}$, $\bm{t}_2 = \bm{Yv}$ and $\bm{z}_w = \bm{t}_1\odot \bm{t}_2$, where `$\odot$' denotes element-wise (Hadamard) multiplication, equivalent to `.*' in Matlab. Then we have
$\bm{u}^{\rm{T}}\bm{X}^{\rm{T}}\mbox{diag}(\bm{w})\bm{Y}\bm{v} =\bm{t}_1^{\rm{T}}\mbox{diag}(\bm{w})\bm{t}_2 = (\bm{t}_1\odot\bm{w})^{\rm{T}}\bm{t}_2 = \bm{w}^{\rm{T}}(\bm{t}_1\odot\bm{t}_2) = \bm{w}^{\rm{T}}\bm{z}_w$. Thus, problem (14) reduces to Eq.(15).
\begin{equation}
\begin{array}{rl}
\max_{\bm{w}} & \bm{w}^{\rm{T}}\bm{z}_w\\
\mbox{s.t.}& \bm{w}^{\rm{T}}\bm{w} = 1, \|\bm{w}\|_0 \leq k_w
\end{array}
\end{equation}
Similarly, we obtain the update rule of $\bm{w}$ as Eq.(16).
\begin{equation}
\bm{w} \leftarrow \frac{\Pi(\bm{z}_w,k_w)}{\|\Pi(\bm{z}_w,k_w)\|_2}
\end{equation}
Finally, combining (11), (13) and (16), we propose the alternating iterative algorithm summarized in Algorithm 1 to solve problem (6).
\begin{figure*}[htbp]
\centering
\includegraphics[width=1\linewidth]{figures/fig2.pdf}
\caption{Results of the synthetic data 1. (a), (f) and (e) denote the true $\bm{u}$, $\bm{v}$ and $\bm{w}$; (b), (g) and (j) denote the estimated $\bm{u}$, $\bm{v}$ and $\bm{w}$ by $L_0$-SWCCA; (c) and (h) denote the estimated $\bm{u}$ and $\bm{v}$ by $L_0$-SCCA; (d) and (i) denote the estimated $\bm{u}$ and $\bm{v}$ by PMD.}
\end{figure*}
\begin{algorithm}[h]
\caption{\textbf{$\bm{L_0}$-SWCCA}.} \label{alg:Framwork1}
\begin{algorithmic}[1]
\REQUIRE $\bm{X}\in \mathbb{R}^{n\times p}$, $\bm{Y}\in \mathbb{R}^{n\times q}$, $k_u$, $k_v$ and $k_w$.
\ENSURE $\bm{u}$, $\bm{v}$, and $\bm{w}$.
\STATE Initial $\bm{u}$, $\bm{v}$, and $\bm{w}$
\REPEAT
\STATE Update $\bm{u}$ according to Eq.(11)
\STATE Update $\bm{v}$ according to Eq.(13)
\STATE Update $\bm{w}$ according to Eq.(16)
\UNTIL convergence of $\bm{u}$, $\bm{v}$, and $\bm{w}$.
\end{algorithmic}
\end{algorithm}
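For illustration, the whole of Algorithm 1 fits into a short Python/NumPy sketch (the random initialization and variable names are our own choices; the stopping rules are discussed next):
\begin{verbatim}
import numpy as np

def l0_swcca(X, Y, ku, kv, kw, max_iter=1000, tol=1e-6, seed=0):
    """Sketch of Algorithm 1 (L0-SWCCA) via alternating sparse projections."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    q = Y.shape[1]
    def proj(z, k):  # Pi(z, k) / ||Pi(z, k)||_2, cf. Theorem 1
        out = np.zeros_like(z)
        idx = np.argpartition(np.abs(z), len(z) - k)[-k:]
        out[idx] = z[idx]
        return out / np.linalg.norm(out)
    u = proj(rng.standard_normal(p), ku)
    v = proj(rng.standard_normal(q), kv)
    w = proj(rng.standard_normal(n), kw)
    for _ in range(max_iter):
        un = proj(X.T @ (w * (Y @ v)), ku)    # Eq.(11)
        vn = proj(Y.T @ (w * (X @ un)), kv)   # Eq.(13)
        wn = proj((X @ un) * (Y @ vn), kw)    # Eq.(16)
        done = (np.sum((u - un)**2) < tol and np.sum((v - vn)**2) < tol
                and np.sum((w - wn)**2) < tol)
        u, v, w = un, vn, wn
        if done:
            break
    return u, v, w
\end{verbatim}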
{\bf Terminating Condition:} We can set different stopping conditions to control the iterations. For example, the update steps of $\bm{u}$, $\bm{v}$, and $\bm{w}$ are all smaller than a given threshold (\emph{i.e.}, $\|\bm{u}^k - \bm{u}^{k+1}\|_2^2<10^{-6}$, $\|\bm{v}^k - \bm{v}^{k+1}\|_2^2<10^{-6}$ and $\|\bm{w}^k - \bm{w}^{k+1}\|_2^2<10^{-6}$), or the maximum number of iterations reaches a given number (\emph{e.g.}, 1000), or the change of the objective value is less than a given threshold.
{\bf Computation Complexity:} The complexity of multiplying a $p\times n$ matrix by an $n \times q$ matrix is $\mathcal{O}(npq)$. To reduce the computational cost of $\bm{X}^{\rm{T}}\mbox{diag}(\bm{w})\bm{Y}\bm{v}$, we note that $\bm{X}^{\rm{T}}\mbox{diag}(\bm{w})\bm{Y}\bm{v} = \bm{X}^{\rm{T}}(\mbox{diag}(\bm{w})\bm{Y}\bm{v}) = \bm{X}^{\rm{T}}[(\bm{Y}\bm{v})\odot\bm{w}]$. Let $\bm{t}_1 = \bm{Y}\bm{v}$, $\bm{t}_2 = \bm{t}_1\odot\bm{w}$ and $\bm{t}_3 = \bm{X}^{\rm{T}}\bm{t}_2$. Thus, the complexity of computing $\bm{X}^{\rm{T}}\mbox{diag}(\bm{w})\bm{Y}\bm{v}$ is $\mathcal{O}(nq+n+np)$. Similarly, the complexity of $\bm{Y}^{\rm{T}}\mbox{diag}(\bm{w})\bm{X}\bm{u}$ is $\mathcal{O}(np+n+nq)$, and the complexity of $(\bm{Xu})\odot(\bm{Yv})$ is $\mathcal{O}(nq+np+n)$.
In Algorithm 1, we need to obtain the largest $k$ absolute values of a given vector $\bm{z}$ of size $p\times 1$ [\emph{i.e.}, $\Pi(\bm{z},k)$]. We adopt a linear time selection algorithm called Quick select (QS) algorithm to compute $\Pi(\bm{z},k)$, which applies a divide and conquer strategy, and the average time complexity of QS algorithm is $\mathcal{O}(p)$. Thus, the entire time complexity of Algorithm 1 is $\mathcal{O}(Tnp+Tnq)$, where $T$ is the number of iterations for convergence. In general, $T$ is a small number.
{\bf Convergence Analysis:} Similar to Theorem 1 in Ref. \cite{sun2015multi} (see also, \emph{e.g.}, \cite{bolte2014proximal}), one can prove that Algorithm 1 converges globally to a critical point (we omit the proof here).
\section{Experiments}
\subsection{Synthetic data 1}
Here we generate the first synthetic data matrices $\bm{X}$ and $\bm{Y}$ with $n = 50$, $p = 100$ and $q = 80$ using the following two steps:
Step 1: Generate two canonical vectors $\bm{u}$, $\bm{v}$ and a weight vector $\bm{w}$ as Eq.(17).
\begin{equation}
\begin{array}{rl}
\bm{u}~~= & [r(1,30),r(0,70)]^{\rm{T}}\\
\bm{v}~~= & [N(20),r(0,20),N(10),r(0,30)]^{\rm{T}}\\
\bm{w}~~= & [r(1,30),r(0,20)]^{\rm{T}}\\
\end{array}
\end{equation}
where $r(a,n)$ denotes a row vector of size $n$ whose elements are equal to $a$, and $N(m)$ denotes a row vector of size $m$ whose elements are randomly sampled from a standard normal distribution.
Step 2: Generate two input matrices $\bm{X}$ and $\bm{Y}$ as Eq.(18).
\begin{equation}
\begin{array}{rl}
\bm{X}~~= & \bm{w}\bm{u}^{\rm{T}} + \bm{\epsilon}_x \\
\bm{Y}~~= & \bm{w}\bm{v}^{\rm{T}} + \bm{\epsilon}_y
\end{array}
\end{equation}
where the elements of $\bm{\epsilon}_x$ and $\bm{\epsilon}_y$ are randomly sampled from a standard normal distribution.
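The two steps above can be reproduced, for instance, as follows (a Python/NumPy sketch; the random seed is arbitrary):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 50, 100, 80
u = np.concatenate([np.ones(30), np.zeros(70)])              # r(1,30), r(0,70)
v = np.concatenate([rng.standard_normal(20), np.zeros(20),
                    rng.standard_normal(10), np.zeros(30)])  # N(20),...,r(0,30)
w = np.concatenate([np.ones(30), np.zeros(20)])              # r(1,30), r(0,20)
X = np.outer(w, u) + rng.standard_normal((n, p))  # X = w u^T + eps_x
Y = np.outer(w, v) + rng.standard_normal((n, q))  # Y = w v^T + eps_y
\end{verbatim}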
We evaluate the performance of $L_0$-SWCCA on the above synthetic data and compare it with typical sparse CCA methods, including $L_0$-SCCA \cite{asteris2016simple} and PMD \cite{witten2009penalized} with $L_1$-penalty. For comparison, we set parameters $k_u = 30$, $k_v = 30$ and $k_w = 30$ for $L_0$-SWCCA; $k_u = 30$, $k_v = 30$ for $L_0$-SCCA; $c_1 = \frac{30}{100}\sqrt{p}$ and $c_2 = \frac{30}{80}\sqrt{q}$ for PMD. Note that $c_1 = c\sqrt{p}$ and $c_2 = c\sqrt{q}$ with $c \in (0, 1)$ are used in PMD to approximately control the sparsity proportion of the canonical vectors ($\bm{u}$ and $\bm{v}$).
The true and estimated patterns for $\bm{u}$, $\bm{v}$ and $\bm{w}$ in the synthetic data 1 are shown in Fig.2. Compared to PMD, $L_0$-SWCCA and $L_0$-SCCA do fairly well at identifying the local non-zero pattern of the underlying factors (\emph{i.e.}, $\bm{u}$ and $\bm{v}$). However, the two traditional SCCA methods ($L_0$-SCCA and PMD) do not recognize the differences between samples and thus cannot remove the noisy samples. Interestingly, $L_0$-SWCCA not only discovers the true patterns for $\bm{u}$ and $\bm{v}$ (Fig.2(b) and (g)), but also identifies the true non-zero characteristics of the samples ($\bm{w}$) (Fig.2(e)). Furthermore, to assess whether our approach is indeed able to find a greater correlation level between the two input matrices, we define the correlation criterion as Eq.(19).
\begin{equation}
\rho = \mbox{cor}((\bm{X}\bm{u})\odot\bm{w}, (\bm{Y}\bm{v})\odot\bm{w})
\end{equation}
where $\mbox{cor}(\cdot)$ calculates the correlation coefficient between two vectors. For comparison, we set $\bm{w} = [1, \cdots, 1]^{\rm{T}}$ for $L_0$-SCCA and PMD when computing the correlation criterion.
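In code, the criterion of Eq.(19) amounts to the following short sketch:
\begin{verbatim}
import numpy as np

def weighted_corr(X, Y, u, v, w):
    a, b = (X @ u) * w, (Y @ v) * w   # (Xu) . w and (Yv) . w
    return np.corrcoef(a, b)[0, 1]
\end{verbatim}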
$L_0$-SWCCA achieves the largest correlation ($\rho=0.96$) on the above synthetic data, compared to $L_0$-SCCA with $\rho=0.80$ and PMD with $\rho=0.87$. All results show that $L_0$-SWCCA is more effective at capturing the latent patterns of canonical vectors than the other methods.
\subsection{Synthetic data 2}
Here we use another way to generate synthetic data matrices $\bm{X}\in \mathbb{R}^{n\times p}$ and $\bm{Y}\in \mathbb{R}^{n\times q}$ with $n = 50$, $p = 100$, and $q = 80$. The following three steps are used to generate the second synthetic data matrices $\bm{X}$ and $\bm{Y}$:
Step 1: We first generate two zero matrices as Eq.(20).
\begin{equation}
\begin{array}{rl}
\bm{X}~~\leftarrow & matrix(0,nrow=n, ncol=p)\\
\bm{Y}~~\leftarrow & matrix(0,nrow=n, ncol=q)
\end{array}
\end{equation}
Step 2: We then update two sub-matrices of $\bm{X}$ and $\bm{Y}$ as Eq.(21).
\begin{equation}
\begin{array}{rl}
\bm{X}[1:30,1:50]~~\leftarrow & 1\\
\bm{Y}[1:30,1:40]~~\leftarrow & - 1
\end{array}
\end{equation}
Step 3: We add Gaussian noise to $\bm{X}$ and $\bm{Y}$ as Eq.(22).
\begin{equation}
\begin{array}{rl}
\bm{X}~~\leftarrow & \bm{X} + \bm{\epsilon}_x\\
\bm{Y}~~\leftarrow & \bm{Y} + \bm{\epsilon}_y
\end{array}
\end{equation}
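In Python/NumPy the three steps read, for example (a sketch):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, p, q = 50, 100, 80
X = np.zeros((n, p))              # Step 1
Y = np.zeros((n, q))
X[:30, :50] = 1.0                 # Step 2
Y[:30, :40] = -1.0
X += rng.standard_normal((n, p))  # Step 3
Y += rng.standard_normal((n, q))
\end{verbatim}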
\begin{figure*}[htbp]
\centering
\includegraphics[width=1\linewidth]{figures/fig4.pdf}
\caption{Results of the synthetic data 2. (a), (f) and (e) denote true $\bm{u}$, $\bm{v}$ and $\bm{w}$; (b), (g) and (j) denote estimated $\bm{u}$, $\bm{v}$ and $\bm{w}$ by $L_0$-SWCCA; (c) and (h) denote estimated $\bm{u}$ and $\bm{v}$ by $L_0$-SCCA; (d) and (i) denote estimated $\bm{u}$ and $\bm{v}$ by PMD.}
\end{figure*}
For simplicity and comparison, we can set true $\bm{u} = [r(1,50),~r(0,50)]^{\rm{T}}$, true $\bm{v} = [r(-1,40),~r(0,40)]^{\rm{T}}$ and true $\bm{w} = [r(1,30),r(0,20)]^{\rm{T}}$ to characterize the patterns of $\bm{X}$ and $\bm{Y}$ (Fig.3(a), (f) and (e)). Similarly, we also apply $L_0$-SWCCA, $L_0$-SCCA \cite{asteris2016simple} and PMD \cite{witten2009penalized} to the synthetic data 2. For comparison, we set parameters $k_u = 50$, $k_v = 40$ and $k_w = 30$ for $L_0$-SWCCA; $k_u = 50$, $k_v = 40$ for $L_0$-SCCA; $c_1 = \frac{50}{100}\sqrt{p}$ and $c_2 = \frac{40}{80}\sqrt{q}$ for PMD.
The true and estimated patterns for $\bm{u}$, $\bm{v}$ and $\bm{w}$ are shown in Fig.3. $L_0$-SWCCA and $L_0$-SCCA are superior to PMD at identifying the latent patterns of the canonical vectors $\bm{u}$ and $\bm{v}$ (Fig.3(b), (c), (d), (g), (h) and (i)). However, $L_0$-SCCA and PMD fail to remove the interfering samples. Compared to $L_0$-SCCA and PMD, $L_0$-SWCCA can clearly identify the true non-zero characteristics of the samples (Fig.3(e)). Similarly, we also compute the correlation criterion based on formula (19). We find that $L_0$-SWCCA achieves the largest correlation $\rho=0.97$, compared to $L_0$-SCCA with $\rho=0.93$ and PMD with $\rho=0.95$. All results show that our method is more effective at capturing the latent patterns of canonical vectors than the other ones.
\subsection{Breast cancer data}
We first consider a breast cancer dataset \cite{witten2009penalized,chin2006genomic} consisting of gene expression and DNA copy number variation data across 89 cancer samples. Specifically, the gene expression data $\bm{X}$ and the DNA copy number data $\bm{Y}$ are of size $n \times p$ and $n \times q$ with $n = 89$, $p = 19672$ and $q = 2149$. We apply SWCCA and related methods to this data to identify a gene set whose expression is strongly correlated with copy number changes of some genomic regions.
In PMD \cite{witten2009penalized}, we set $c_1 = c\sqrt{p}$ and $c_2 = c\sqrt{q}$, where $c \in (0, 1)$ is to approximately control the sparse ratio of canonical vectors. We ensure that the canonical vectors ($\bm{u}$ and $\bm{v}$) extracted by the three methods (PMD, $L_0$-SCCA, and $L_0$-SWCCA) have the same sparsity level for comparison. We first apply PMD in the breast cancer data to obtain two sparse canonical vectors $\bm{u}$ and $\bm{v}$ for each given $c \in (0,1)$. Then, we compute the number of nonzero elements in the above extracted $\bm{u}$ and $\bm{v}$, denoted as $N_u$ and $N_v$. Finally, we set $k_u = N_u$, $k_v = N_v$ in $L_0$-SCCA and $L_0$-SWCCA, and set $k_w = 53 \approx 0.6\times89$ in $L_0$-SWCCA to identify the sample loading $\bm{w}$ with sparse ratio $60\%$.
We adopt two criteria for comparison: the correlation level defined in formula~(19) and the objective value defined in Eq.(5). Here we consider different $c$ values (\emph{i.e.}, $0.1,0.2,0.3,0.4,0.5,0.6,0.7$) to control different sparsity ratios of the canonical vectors. We find that, compared to PMD and $L_0$-SCCA, $L_0$-SWCCA obtains a higher correlation level and objective value in all cases (Table 1). Since the breast cancer data do not contain any clinical information about the patients, it is very difficult to study the specific characteristics of the selected samples. To this end, we also apply our method to another biological dataset with more detailed clinical information.
{\tabcolsep=2.7pt \footnotesize
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\multicolumn{8}{c}{\bf Table 1.~Results on Correlation level (CL) and Objective}\\
\multicolumn{8}{c}{\bf value (OV) for different $c$ values.}\\
\hline
{\bf CL} & c=0.1 & c=0.2 & c=0.3 & c=0.4 & c=0.5 & c=0.6 & c=0.7\\
\hline
PMD & 0.77 & 0.81 & 0.85 & 0.86 & 0.86 & 0.85 & 0.83\\
\hline
$L_0$-SCCA& 0.88 & 0.84 & 0.86 & 0.85 & 0.84 & 0.82 & 0.81\\
\hline
$L_0$-SWCCA & 0.99 & 0.98 & 1.00 & 1.00 & 0.99 & 0.97 & 0.99\\
\hline
\hline
{\bf OV} & c=0.1 & c=0.2 & c=0.3 & c=0.4 & c=0.5 & c=0.6 & c=0.7\\
\hline
PMD &253 & 768 & 1390 & 2020 & 2589 & 3056 & 3392\\
\hline
$L_0$-SCCA &371 & 1066 & 1824 & 2467 & 3014 & 3360 & 3520\\
\hline
$L_0$-SWCCA &1570 & 3046 & 4960 & 6576 & 7860 & 10375 & 10692\\
\hline
\end{tabular}
\end{center}}
\subsection{TCGA BLCA data}
Recently, studying microRNA (miRNA)-gene regulatory relationships from matched miRNA and gene expression data has become a hot topic \cite{min2015novel,zhang2011novel}. Here, we apply SWCCA to the bladder urothelial carcinoma (BLCA) miRNA and gene expression data across 405 patients from TCGA (\url{https://cancergenome.nih.gov/}) to identify a subtype-specific miRNA-gene co-correlation module. To remove noisy miRNAs and genes, we first use the standard deviation to select the 200 miRNAs and the 5000 genes with the largest variance for further analysis. Finally, we obtain a miRNA expression matrix $\bm{X}\in \mathbb{R}^{405\times200}$, standardized for each miRNA, and a gene expression matrix $\bm{Y}\in \mathbb{R}^{405\times5000}$, standardized for each gene. We apply $L_0$-SWCCA to the BLCA data with $k_u = 10$, $k_v = 200$ and $k_w = 203$ to identify a miRNA set with 10 miRNAs, a gene set with 200 genes, and a sample set with 203 patients. We also apply PMD with $c_1=(10/200)\sqrt{p}$, $c_2=(200/5000)\sqrt{q}$ and $L_0$-SCCA with $k_u = 10$ and $k_v = 200$ to the BLCA data for comparison.
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{figures/fig6.pdf}
\caption{Kaplan-Meier survival analysis between the selected patients and the remaining patients based on the $\bm{w}$ estimated by $L_0$-SWCCA. The $p$-value is calculated by the log-rank test.}
\end{figure}
Similarly, $L_0$-SWCCA obtains a larger correlation level (CL) and objective value (OV) than the other methods [(CL, OV): (0.98, 1210) for $L_0$-SWCCA, (0.84, 346) for PMD, (0.86, 469) for $L_0$-SCCA]. More importantly, we also analyze the characteristics of the patients selected by $L_0$-SWCCA. We find that the survival time of the selected 203 patients differs significantly from that of the remaining 202 patients ($p$-value $=0.023$, Fig.4). These results imply that $L_0$-SWCCA can be used to discover BLCA subtype-specific miRNA-gene co-correlation modules.
Furthermore, we also assess whether the genes identified by $L_0$-SWCCA are biologically relevant to BLCA. DAVID (\url{https://david.ncifcrf.gov/}) is used to perform Gene Ontology (GO) biological process (BP) and KEGG pathway enrichment analysis. Several significantly enriched GO BPs and pathways related to BLCA are discovered, including \emph{GO:0008544:epidermis development} (B-H adjusted $p$-value $=$ 1.1E-12), \emph{hsa00591:Linoleic acid metabolism} (B-H adjusted $p$-value $=$ 4.8E-3), \emph{hsa00590:Arachidonic acid metabolism} (B-H adjusted $p$-value $=$ 3.6E-3) and \emph{hsa00601:Glycosphingolipid biosynthesis-lacto and neolacto series} (B-H adjusted $p$-value $=$ 2.6E-2). Finally, we also examine whether the miRNAs identified by $L_0$-SWCCA are associated with BLCA. Interestingly, among the 10 identified miRNAs, six (\emph{hsa-miR-200a-3p, hsa-miR-200b-5p, hsa-miR-200b-3p, hsa-miR-200a-5p, hsa-miR-200c-3p and hsa-miR-200c-5p}) belong to the miR-200 family. Notably, several studies \cite{wiklund2011coordinated,cheng2016mir} have reported that the miR-200 family plays key roles in BLCA. All these results imply that the miRNA-gene module identified by $L_0$-SWCCA may help us to find new therapeutic strategies for BLCA.
\section{Extensions}
\subsection{SWCCA with generalized penalties}
We first consider a general regularized SWCCA framework as Eq.(23).
\begin{equation}
\begin{array}{rl}
\max_{\bm{u},\bm{v},\bm{w}} & \bm{u}^{\rm{T}}\bm{X}^{\rm{T}}\mbox{diag}(\bm{w})\bm{Y}\bm{v} \\
& - \mathcal{R}_u(\bm{u}) - \mathcal{R}_v(\bm{v})-\mathcal{R}_w(\bm{w})\\
\mbox{s.t.} & \bm{u}^{\rm{T}}\bm{u} = 1,\bm{v}^{\rm{T}}\bm{v} = 1, \bm{w}^{\rm{T}}\bm{w} = 1
\end{array}
\end{equation}
where $\mathcal{R}_u(\cdot)$, $\mathcal{R}_v(\cdot)$, $\mathcal{R}_w(\cdot)$ are three regularization functions. Depending on the prior knowledge, we can use different sparsity-inducing penalties.
\subsubsection{LASSO regularized SWCCA}
If $\mathcal{R}_u(\bm{u})= \lambda_u\|\bm{u}\|_1$, $\mathcal{R}_v(\bm{v})= \lambda_v\|\bm{v}\|_1$, and $\mathcal{R}_w(\bm{w})= \lambda_w\|\bm{w}\|_1$, we obtain an $L_1$ (LASSO) regularized SWCCA ($L_1$-SWCCA).
Similar to the solution of Eq.(6), to solve $L_1$-SWCCA we only need to solve the following subproblem, Eq.(24).
\begin{equation}
\begin{array}{rl}
\min_{\bm{u}}&-\bm{u}^{\rm{T}}\bm{z} + \lambda_u\|\bm{u}\|_1\\
\mbox{s.t.}& \bm{u}^{\rm{T}}\bm{u} = 1
\end{array}
\end{equation}
where $\bm{z} = \bm{X}^{\rm{T}}\mbox{diag}(\bm{w})\bm{Y}\bm{v}$.
We first replace the constraint $\bm{u}^{\rm{T}}\bm{u} = 1$ with $\bm{u}^{\rm{T}}\bm{u} \leq 1$ and obtain the following problem, Eq.(25).
\begin{equation}
\begin{array}{rl}
\min_{\bm{u}}&-\bm{u}^{\rm{T}}\bm{z} + \lambda_u\|\bm{u}\|_1\\
\mbox{s.t.}& \bm{u}^{\rm{T}}\bm{u} \leq 1
\end{array}
\end{equation}
It is easy to see that problem (25) is equivalent to (24). Thus, we can obtain its Lagrangian form as Eq.(26).
\begin{equation}
\mathcal{L}(\bm{u},\lambda_u, \eta_u) = -\bm{u}^{\rm{T}}\bm{z} + \lambda_u\|\bm{u}\|_1 + \eta_u(\bm{u}^{\rm{T}}\bm{u}-1)
\end{equation}
Thus, we can use a coordinate descent method to minimize Eq.(26) and obtain the following update rule of $\bm{u}$ as Eq.(27).
\begin{equation}
\bm{u} = \frac{\mathcal{S}_{\lambda_u}(\bm{z})}{\|\mathcal{S}_{\lambda_u}(\bm{z})\|_2}
\end{equation}
where $\mathcal{S}_{\lambda_u}(\cdot)$ is the soft-thresholding operator with $\mathcal{S}_{\lambda_u}(z_i)=\mbox{sign}(z_i)\,(|z_i|-\lambda_u)_+$. Based on the above, an alternating iterative strategy can be used to solve $L_1$-SWCCA.
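A sketch of the resulting $\bm{u}$-update of Eq.(27) in Python/NumPy (function names are our own):
\begin{verbatim}
import numpy as np

def soft_threshold(z, lam):
    # S_lam(z)_i = sign(z_i) * max(|z_i| - lam, 0)
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def update_u_l1(X, Y, v, w, lam_u):
    z = X.T @ (w * (Y @ v))       # z = X^T diag(w) Y v
    s = soft_threshold(z, lam_u)
    return s / np.linalg.norm(s)  # Eq.(27)
\end{verbatim}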
\subsubsection{Group LASSO regularized SWCCA}
If $\mathcal{R}_u(\bm{u})= \lambda_u \sum_l\|\bm{u}^{(l)}\|_2$, $\mathcal{R}_v(\bm{v})= \lambda_v \sum_l\|\bm{v}^{(l)}\|_2$ and $\mathcal{R}_w(\bm{w})= \lambda_w \sum_l\|\bm{w}^{(l)}\|_2$, problem (23) reduces to the $L_{2,1}$-regularized SWCCA ($L_{2,1}$-SWCCA). Similarly, we need to solve the following projection problem, Eq.(28).
\begin{equation}
\begin{array}{rl}
\min_{\bm{u}}&-\bm{u}^{\rm{T}}\bm{z} + \lambda_u\sum\limits_l\|\bm{u}^{(l)}\|_2\\
\mbox{s.t.}& \bm{u}^{\rm{T}}\bm{u} \leq 1
\end{array}
\end{equation}
Thus, we obtain its Lagrangian form as Eq.(29).
\begin{equation}
\mathcal{L}(\bm{u},\lambda_u, \eta_u) = -\bm{u}^{\rm{T}}\bm{z} + \lambda_u\sum_l\|\bm{u}^{(l)}\|_2 + \eta_u(\bm{u}^{\rm{T}}\bm{u}-1)
\end{equation}
where $\bm{u}^{(l)}$ is the $l$th group of $\bm{u}$. We adopt a block-coordinate descent method \cite{tseng2001convergence} to solve it and obtain the learning rule of $\bm{u}^{(l)}$ ($l = 1,\cdots, L$) as Eq.(30).
\begin{eqnarray}
\bm{u}^{(l)}=
\begin{cases}
\frac{1}{2\eta_u }\bm{z}^{(l)} (1- \frac{\lambda_u}{\|\bm{z}^{(l)}\|_2} ), &\mbox{if}~\|\bm{z}^{(l)}\|_2 > \lambda_u,\cr
\bm{0}, &\mbox{otherwise}.
\end{cases}
\end{eqnarray}
By cyclically applying the above updates, we can minimize Eq.(29). Thus, an alternating iterative strategy can be used to solve $L_{2,1}$-SWCCA.
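A sketch of the block update of Eq.(30) follows; here the multiplier $\eta_u$ is absorbed by normalizing $\bm{u}$ to unit length after the group-wise thresholding, and \texttt{groups} (a list of index arrays) is our own interface choice:
\begin{verbatim}
import numpy as np

def update_u_group(z, groups, lam_u):
    u = np.zeros_like(z)
    for g in groups:                          # Eq.(30), group by group
        ng = np.linalg.norm(z[g])
        if ng > lam_u:
            u[g] = z[g] * (1.0 - lam_u / ng)  # up to the factor 1/(2 eta_u)
    nrm = np.linalg.norm(u)
    return u / nrm if nrm > 0 else u          # enforce u^T u = 1
\end{verbatim}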
\subsection{Multi-view sparse weighted CCA}
In various scientific fields, multi-view data (more than two views) are often available from multiple sources or diverse feature subsets. For example, multiple high-throughput molecular profiling datasets produced by omics technologies can be available for the same individuals in bioinformatics \cite{li2012identifying,min2015novel,sun2015multi}. Integrating these data together can significantly increase the power of pattern discovery and individual classification.
Here we extend SWCCA to the Multi-view SWCCA (MSWCCA) model for multi-view data analysis (Fig.5) as follows:
\begin{eqnarray*}
\begin{array}{rl}
\max_{\bm{u}_i, \bm{w}} & \bm{w}^{\rm{T}}\big[\bigodot\limits_{i=1}^M(\bm{X}_i\bm{u}_i)\big] - \sum\limits_{i=1}^M\mathcal{R}_{\bm{u}_i}(\bm{u}_i)-\mathcal{R}_w(\bm{w})\\
\mbox{s.t.} & \bm{w}^{\rm{T}}\bm{w} = 1, \bm{u}_i^{\rm{T}}\bm{u}_i = 1~\mbox{for}~i = 1,\cdots, M
\end{array}
\end{eqnarray*}
where $\bigodot_{i=1}^M(\bm{X}_i\bm{u}_i) = (\bm{X}_1\bm{u}_1)\odot(\bm{X}_2\bm{u}_2)\cdots\odot(\bm{X}_M\bm{u}_M)$. When $M=2$, we can see that $\bm{w}^{\rm{T}}[(\bm{X}_1\bm{u}_1)\odot(\bm{X}_2\bm{u}_2)] = \bm{u}_1^{\rm{T}}\bm{X}_1^{\rm{T}}\mbox{diag}(\bm{w})\bm{X}_2\bm{u}_2$, so it reduces to SWCCA. We can therefore solve MSWCCA in a similar manner as SWCCA.
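One natural way to carry the alternating scheme of Algorithm 1 over to MSWCCA is sketched below (our own sketch, not spelled out in the text, assuming $L_0$ constraints as in Eq.(6); \texttt{proj} is the normalized projection $\Pi(\cdot,k)/\|\Pi(\cdot,k)\|_2$ from Theorem 1):
\begin{verbatim}
import numpy as np

def mswcca_sweep(Xs, us, w, ks, kw, proj):
    """One sweep of alternating updates for L0-regularized MSWCCA."""
    n = Xs[0].shape[0]
    M = len(Xs)
    for i in range(M):
        rest = np.ones(n)
        for j in range(M):
            if j != i:
                rest *= Xs[j] @ us[j]   # elementwise product of other views
        us[i] = proj(Xs[i].T @ (w * rest), ks[i])
    prod_all = np.ones(n)
    for j in range(M):
        prod_all *= Xs[j] @ us[j]
    w = proj(prod_all, kw)
    return us, w
\end{verbatim}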
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{figures/fig7.pdf}
\caption{Illustration of the multi-view sparse weighted CCA designed for integrating multiple data matrices.}
\end{figure}
\section{Conclusion}
In this paper, we propose a sparse weighted CCA framework. Compared to SCCA, SWCCA can reveal variables that are strongly related to only a subset of samples. We develop an efficient alternating iterative algorithm to solve the $L_0$-regularized SWCCA. Our tests using both simulated and biological data show that SWCCA obtains more reasonable patterns than the typical SCCA. Moreover, the key idea of SWCCA is easy to adapt to other penalties such as LASSO and Group LASSO. Lastly, we extend SWCCA to MSWCCA for the multi-view situation with multiple data matrices.
\section*{Acknowledgment}
Shihua Zhang and Juan Liu are the corresponding authors of this paper. Wenwen Min would like to thank the support of National Center for Mathematics and Interdisciplinary Sciences, Academy of Mathematics and Systems Science, CAS during his visit.
\small{
\bibliographystyle{named}
\balance
\section{Introduction}
Several different methods to construct minimal polynomials of Salem numbers have been investigated in the literature (see e.g.\ \cite{BDGPS,boyd,grossmcmullen,smyth}). Various authors associate Salem numbers to Coxeter polynomials and use this relation in order to construct Salem numbers (cf.\ for instance \cite{cannwagr,floydplotnick,grohirmcm,lakatos,mckrowsmy}). In this paper we follow the very explicit approach of Gross et al.~\cite{grohirmcm} and provide precise information on the decomposition of Coxeter polynomials of certain star-like trees into irreducible factors, thereby giving estimates on the degree of the occurring Salem factor.
To be more precise, let $r, a_0,\ldots, a_r \in \NN$ such that $a_0\ge 2,\ldots, a_r \ge 2$. We consider the star-like tree
$T=T (a_0,\ldots, a_r)$ with $r+1$ arms of $a_0-1,\ldots, a_r-1$ edges, respectively. According to \cite[Lemma 5]{mckrowsmy}
the Coxeter polynomial of $T (a_0,\ldots, a_r)$ is given by
$$
R_{T (a_0,\ldots, a_r)} (z) =\prod_{i=0}^{r} \Bigl( \frac{z^{a_i}-1}{z-1}\Bigr)\Bigl(z+1- z \sum_{i=0}^{r} \frac{z^{a_i-1}-1}{z^{a_i}-1}\Bigr).
$$
Note that $R_T$ can be written as
\begin{equation}\label{eq:decomp}
R_T(z)=C(z)S(z),
\end{equation}
where $C$ is a product of cyclotomic polynomials and $S$ is the minimal polynomial of a Salem number or of a quadratic Pisot number. Indeed, by the results of \cite{pena}, the zeros of $R_T$ are either real and positive or have modulus 1. The decomposition \eqref{eq:decomp} now follows from \cite[Corollaries~7 and~9, together with the remark after the latter]{mckrowsmy}, as these results imply that $R_T$ has exactly one irrational real positive root of modulus greater than $1$.
For Coxeter polynomials corresponding to star-like trees with three arms we are able to say much more about the factors of the decomposition \eqref{eq:decomp}. In particular, we shall prove the following result.
\begin{theorem}\label{thm1}
Let $a_0, a_1, a_2 \in \ZZ$ such that $a_2 > a_1 >a_0>1$ and
$(a_0, a_1, a_2)\ne (2,3,t)$ for all $t\in \set{4, 5, 6}$. Further, let $T:=T (a_0, a_1, a_2)$ be the star-like tree with three arms of $a_0-1, a_1-1, a_2-1$ edges, and let $\lambda$ be its largest eigenvalue. Then $\tau >1$ defined by
$$
\sqrt {\tau}+ 1/\sqrt {\tau}= \lambda
$$
is a Salem or a quadratic Pisot root of the Coxeter polynomial $R_{T}$ of $T$.
If $S$ is the minimal polynomial of $\tau$ then we can write
\begin{equation}\label{eq:clear}
R_T(x)=S(x)C(x),
\end{equation}
where $C$ is a product of cyclotomic polynomials of orders bounded by $420 (a_2-a_1+a_0-1)$ whose roots have multiplicity bounded by an effectively computable constant $m(a_0,a_2-a_1)$. Thus
\begin{equation}\label{SalemLowerBound}
\deg S \ge \deg R_T - m(a_0,a_2-a_1)\sum_{k\le 420 (a_2-a_1+a_0-1)}\varphi(k),
\end{equation}
where $\varphi$ denotes Euler's $\varphi$-function.
\end{theorem}
\begin{remark}[Periodicity properties of cyclotomic factors]\label{rem:period}
Gross et al.~\cite{grohirmcm} study certain Coxeter polynomials and prove periodicity properties of their cyclotomic factors. Contrary to their case, our Coxeter polynomials $R_T$ do not have the same strong separability properties (cf.\ Lemma~\ref{lem:mult}). For this reason, we could not exhibit analogous results for $C(x)$; however, we obtain weaker periodicity properties in the following way.
In the setting of Theorem~\ref{thm1} assume that $a_0$ as well as $a_2-a_1$ are constant. For convenience set $S_{a_1}=R_{T(a_0,a_1,a_2)}$ and let $\zeta_k$ be a root of unity of order $k$. It follows from \eqref{expzdarst} below (see \eqref{pzdef} for the definition of $P$) that $S_{a_1}(\zeta_k)=0$ if and only if $S_{a_1+k}(\zeta_k)=0$, i.e., the
fact that the $k$-th cyclotomic polynomial divides $S_{a_1}$ depends only on the residue class of $a_1 \pmod k$.
Therefore, setting $K:=\lcm\{1,2,\ldots, 420 (a_2-a_1+a_0-1)\}$,
the set of all cyclotomic polynomials dividing $S_{a_1}$ is determined by the residue class of $a_1 \pmod K$.
If we determine the set $\{k \,:\, k \le 420 (a_2-a_1+a_0-1),\, S_{a_1}(\zeta_k) = 0 \}$ for all $a_1 \le K$, we thus know exactly which cyclotomic factor divides which of the polynomials $S_{a_1}$ for $a_1\in\mathbb{N}$. Obviously, this knowledge would allow one to improve the bound \eqref{SalemLowerBound}.
\end{remark}
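For illustration, the cyclotomic factors appearing in \eqref{eq:decomp} can be listed directly with a computer algebra system. The following Python/SymPy sketch (the parameters and the search bound are chosen for illustration only) prints, for two values of $a_1$ in the same residue class modulo $12$, the orders $k$ of the cyclotomic factors $\Phi_k$ dividing $P$:
\begin{verbatim}
import sympy as sp

z = sp.symbols('z')

def P_poly(a0, a1, a2):
    return sp.expand((z + 1)*(z**a0 - 1)*(z**a1 - 1)*(z**a2 - 1)
        - z*((z**(a0-1) - 1)*(z**a1 - 1)*(z**a2 - 1)
             + (z**(a1-1) - 1)*(z**a0 - 1)*(z**a2 - 1)
             + (z**(a2-1) - 1)*(z**a0 - 1)*(z**a1 - 1)))

# cyclotomic factors Phi_k dividing P for a0 = 3, a2 = a1 + 2
for a1 in (7, 19):  # 19 = 7 + 12, so factors with k | 12 reappear
    ks = [k for k in range(1, 61)
          if sp.rem(P_poly(3, a1, a1 + 2), sp.cyclotomic_poly(k, z), z) == 0]
    print(a1, ks)
\end{verbatim}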
\begin{remark}[Degrees of the Salem numbers]\label{rem:degree}
Theorem~\ref{thm1} enables us to exhibit Salem numbers of arbitrarily large degree.
Indeed, if $a_0$ and the difference $a_2-a_1$ are kept small and $a_1\to \infty$ then \eqref{SalemLowerBound} assures that $\deg S \to \infty$. We also mention here that Gross and McMullen \cite[Theorem~1.6]{grossmcmullen} showed that for any odd integer
$n\ge 3$ there exist infinitely many {\it unramified} Salem numbers of degree $2 n$;
recall that a Salem polynomial $f$ is said to be unramified if it satisfies $|f(-1)|=|f(1)|=1$.
The construction pursued in this work substantially differs from ours: it is proved that every unramified Salem polynomial arises from
an automorphism of an indefinite lattice.
\end{remark}
If two of the arms of the star-like tree under consideration get longer and longer, the associated Salem numbers converge to the $m$-bonacci number $\varphi_m$, where $m$ is the (fixed) length of the third arm. This is made precise in the following theorem.
\begin{theorem}\label{th:m}
Let $a_1 > a_0 \ge 2$ and $\eta \ge 1$ be given and set $a_2 = a_1+\eta$. Then, for $a_1\to\infty$, the Salem root $\tau(a_0, a_1, a_2)$ of the Coxeter polynomial associated with $T (a_0, a_1, a_2)$ converges to $\varphi_{a_0}$, where the degree of $\tau(a_0, a_1, a_2)$ is bounded from below by \eqref{SalemLowerBound}.
\end{theorem}
Besides that, we are able to give the following result which is valid for more general star-like trees.
\begin{theorem}\label{th:general}
Let $r\ge 1$, $a_r> \cdots > a_1 > a_0 \ge 2$, and choose $k\in\{1,\ldots,r-1\}$. Then, for fixed $a_0,\ldots,a_k$ and $a_{k+1},\ldots, a_r \to \infty$, the Salem root $\tau(a_0, \ldots , a_r)$ of the Coxeter polynomial associated with $T (a_0, \ldots, a_r)$ converges to the dominant Pisot root of
\begin{equation}\label{eq:qz}
Q(z)=(z+1-r+k)\prod_{i=0}^k(z^{a_i}-1)-z\sum_{i=0}^k(z^{a_i-1}-1)
\prod_{\begin{subarray}{c} j=0 \\ j\not=i \end{subarray}}^k (z^{a_j}-1).
\end{equation}
\end{theorem}
\section{Salem numbers generated by Coxeter polynomials of star-like trees}
For convenience, we introduce the polynomial
\begin{equation}\label{pzdef}
\begin{array}{rrl}
\displaystyle P(z)&:=&\displaystyle (z-1)^{r +1} R_{T (a_0,\ldots, a_r)} (z) \\[3pt]
&=&\displaystyle \Bigl( \prod_{i=0}^{r} (z^{a_i}-1)\Bigr) (z+1)- z \sum_{i=0}^{r} \Bigl( (z^{a_i-1}-1) \prod_{j=0, j\ne i}^{r} (z^{a_j}-1)\Bigr).
\end{array}
\end{equation}
Of course, like $R_T$, the polynomial $P$ can be decomposed as a product of a Salem (or quadratic Pisot) factor times a factor containing only cyclotomic polynomials.
Now, we concentrate on star-like trees with three arms, i.e., we assume that $r=2$.
\begin{lemma} \label{pblock}
Let $a_2 > a_1 > a_0$. Then for $T(a_0,a_1,a_2)$ the polynomial $P(z)$ reads
\begin{equation}\label{expzdarst}
P(z)= z^{a_1 + a_2} Q(z) + z^{a_1 + 1} R(z)+ S(z)
\end{equation}
with
\begin{eqnarray*}
Q(z) &=& z^{a_0+1} - 2 z^{a_0}+ 1, \\
R(z) &=& z^{a_2-a_1+a_0-1} - z^{a_2-a_1} + z^{a_0-1} -1,\\
S(z) &=& -z^{a_0 +1} + 2z -1.
\end{eqnarray*}
Moreover,
$$
\max\{ \deg(Q), \deg(R), \deg(S) \} = a_2-a_1+a_0-1, \quad \deg (P)= a_0 + a_1 +a_2 + 1,
$$
and the (naive) height of $P$ equals $2$.
\end{lemma}
\begin{proof}
This can easily be verified by direct computation.
\end{proof}
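For readers who wish to check the identity \eqref{expzdarst} themselves, the following Python/SymPy snippet (a sketch; the particular values of $a_0,a_1,a_2$ are chosen for illustration only) verifies it symbolically:
\begin{verbatim}
import sympy as sp

z = sp.symbols('z')
a0, a1, a2 = 4, 7, 11  # any a2 > a1 > a0 works here

P = ((z + 1)*(z**a0 - 1)*(z**a1 - 1)*(z**a2 - 1)
     - z*((z**(a0-1) - 1)*(z**a1 - 1)*(z**a2 - 1)
          + (z**(a1-1) - 1)*(z**a0 - 1)*(z**a2 - 1)
          + (z**(a2-1) - 1)*(z**a0 - 1)*(z**a1 - 1)))

Q = z**(a0+1) - 2*z**a0 + 1
R = z**(a2-a1+a0-1) - z**(a2-a1) + z**(a0-1) - 1
S = -z**(a0+1) + 2*z - 1

assert sp.expand(P - (z**(a1+a2)*Q + z**(a1+1)*R + S)) == 0
\end{verbatim}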
\begin{lemma}\label{lem:mult}
Let $a_2 > a_1 > a_0$ and let $P$ as in \eqref{pzdef} be associated to $T(a_0,a_1,a_2)$. Then there exists an effectively computable constant $m=m(a_0,a_2-a_1)$ which bounds the multiplicity of every root $z$ of $P$ with $|z|=1$.
\end{lemma}
\begin{proof}
Observe that $1$ is a root of $Q$, $R$, and $S$. Thus, for technical reasons, we work with $\tilde P(z)=P(z)/(z-1)$ and, defining $\tilde Q(z)$, $\tilde R(z)$, and $\tilde S(z)$ analogously, we write
\[
\tilde P(z)= z^{a_1 + a_2} \tilde Q(z) + z^{a_1 + 1} \tilde R(z)+ \tilde S(z).
\]
Our first goal is to bound the $n$-th derivatives $|\tilde P^{(n)}(z)|$ with $|z|=1 $ away from zero. To this end we define the quantities
\begin{align*}
\eta(a_0)&:=\min\{|\tilde Q(z)| \,:\, |z|=1 \} > 0, \\
E_n=E_n(a_0)&:=\max\{|\tilde Q^{(k)}(z)| \,:\, 1\le k\le n,\, |z|=1 \}, \\
F_0=F_0(a_0,a_2-a_1)&:=\max\{|\tilde R(z)| \,:\, |z|=1 \}, \\
F_n=F_n(a_0,a_2-a_1,n)&:=\max\{|\tilde R^{(k)}(z)| \,:\, 1\le k\le n,\, |z|=1 \}, \\
G_n=G(a_0,n)&:=\max\{|\tilde S^{(n)}(z)| \,:\, |z|=1 \}.
\end{align*}
For $n\ge 1$ one easily computes that (here $(x)_{(n)}=x(x-1)\cdots(x-n+1)$ denotes the falling factorial)
\begin{align*}
\tilde P^{(n)}(z) =& (a_1+a_2)_{(n)}\tilde Q(z)z^{a_1+a_2-n} +
(a_1+1)_{(n)}\tilde R(z)z^{a_1+1-n} \\
&+ \sum_{k=0}^{n-1}{n\choose k}(a_1+a_2)_{(k)}\tilde Q^{(n-k)}(z)z^{a_1+a_2-k} \\
&+ \sum_{k=0}^{n-1}{n\choose k}(a_1+1)_{(k)}\tilde R^{(n-k)}(z)z^{a_1+1-k}
+ \tilde S^{(n)}(z).
\end{align*}
Now for $|z|=1$ we estimate
\begin{align}
|\tilde P^{(n)}(z)| \ge & (a_1+a_2)_{(n)}\eta(a_0) - 2^{-n+1}(a_1+a_2)_{(n)}F_0 \nonumber \\
&- 2^{n-1}(a_1+a_2)_{(n-1)}E_n - 2^{n-1}(a_1+1)_{(n-1)}F_n - G_n\label{eq:betrag}\\
\ge& (a_1+a_2)_{(n)} \Big(\eta(a_0) - 2^{-n+1}F_0 - \frac{2^{n-1}(E_n+F_n)}{a_1+a_2-n+1} - \frac{G_n}{(a_1+a_2)_{(n)}}\Big). \nonumber
\end{align}
Now we fix $a_0$ and the difference $a_2-a_1$. Then we choose $n_0=n_0(a_0,a_2-a_1)$ such that
$$
\eta(a_0) - 2^{-n_0+1}F_0 >0.
$$
In view of \eqref{eq:betrag} there exists a constant $c=c(a_0,a_2-a_1)$ such that for all $a_1,a_2$ with $a_1+a_2 > c$ (with our fixed difference) we have $|\tilde P^{(n_0)}(z)| > 0$ for all $z$ with $|z|=1$. If, on the other hand, $a_1+a_2 \le c$, then we have $\deg \tilde P \le c+a_0$. Therefore, in any case, the multiplicity of a root of $\tilde P$ on the unit circle is bounded by $\max(n_0,c+a_0)$ and the result follows by taking $m=\max(n_0,c+a_0)+1$.
\end{proof}
The following lemma is a simple special case of Mann's theorem.
\begin{lemma}\label{abcz}
Let $a, b, c, p, q \in \ZZ$ such that $(p,q)\ne (0,0)$ and $a,b,c$ nonzero.
If $\zeta$ is a root of unity such that
$$a \zeta^p + b \zeta^q +c= 0$$
then the order of $\zeta$ divides $\; 6 \, \gcd (p, q).$
\end{lemma}
\begin{proof}
This is a special case of \cite[Theorem 1]{mann}.
\end{proof}
For subsequent use we recall some notation and facts (used in a similar context in \cite{grohirmcm}).
A {\em divisor} on the complex plane is a finite sum
$$ D=\sum_{j \in J} a_j \cdot z_j$$
where $a_j \in \ZZ\setminus \set{0} $ and $${\rm supp \,} (D):= \set{z_j \in \CC\;:\; j \in J}$$
is the support of $D$; $D$ is said to be {\em effective} if all its coefficients are positive.
\medskip
The set of all divisors on $\CC$ forms the abelian group ${\rm Div \,} (\CC)$, and the natural
evaluation map $\sigma: {\rm Div \,} (\CC) \to \CC $ is given by
$$
\sigma ( D) =\sum_{j \in J} a_j z_j\,.
$$
\medskip
A {\em polar rational polygon} (prp) is an effective divisor $D=\sum_{j \in J} a_j \cdot z_j$ such that each $z_j$ is a root of unity and $ \sigma ( D) =0\,.$ In this case the order ${\rm o } (D) $ is the cardinality of the subgroup of
$ \CC \setminus \set{0} $ generated by the roots of unity $\set{z_j / z_k \;:\; j,k\in J}$.
The prp $D$ is called {\em primitive} if there do not exist non-zero prp's $D'$ and $D''$ such that $D=D' + D''$.
In particular, the coefficients of $D', D''$ are positive, thus each prp can be expressed as a sum of primitive prp's.
\bigskip
Every polynomial $f \in \ZZ[X] \setminus \set{0}$ can be uniquely written in the form
\begin{equation}\label{dcf}
f=\sum_{j \in J} \varepsilon_j a_j X^j
\end{equation}
with $J \subseteq \set{0, \ldots, \deg (f)}$, $\varepsilon_j=\pm 1$ and $a_j > 0$. We call
$$\ell (f):= \Card (J)$$
the length of $f$. For $\zeta \in \CC$ with $f(\zeta)=0$ we define the effective divisor of $f$ (w.r.t. $\zeta$) by
$$D f (\zeta):= \sum_{j \in J} a_j (\varepsilon_j \zeta^j) \,.$$
\bigskip
\begin{proposition}\label{farc}
Let $a_2 > a_1 > a_0$ and let $P$ as in \eqref{pzdef} be associated to $T(a_0,a_1,a_2)$. If $\zeta$ is a root of unity such that $P(\zeta) = 0$ then
the order of $\zeta$ satisfies
$$
\mathrm{ord}(\zeta) \le 420 (a_2-a_1+a_0-1).
$$
\end{proposition}
\begin{proof}
We follow the proof of \cite[Theorem 2.1]{grohirmcm} and write the polynomials $Q, R, S$
in the form
$$Q (X) =\sum Q_j (X), \qquad R (X) =\sum R_j (X), \qquad S (X) =\sum S_j (X)$$
with finite sums of integer polynomials such that
$$D P (\zeta)= \sum_{j} D P_j (\zeta)= \sum_{j} \bigl( \zeta^{a_1 + a_2} D Q_j(\zeta) + \zeta^{a_1 + 1} D R_j(\zeta)+ DS_j(\zeta)\bigr)$$
is a decomposition of the divisor $ D P (\zeta)$ into primitive polar rational polygons $ D P_j (\zeta)$, thus for every $j$ the sum
$$\zeta^{a_1 + a_2} Q_j(\zeta) + \zeta^{a_1 + 1} R_j(\zeta)+ S_j(\zeta)$$
is the evaluation of the primitive prp $ D P_j (\zeta)$.
Observe that in view of Lemma \ref{pblock} the (naive) height of the polynomials $Q_j, R_j, S_j$ does not exceed $2$ since the coefficients cannot increase when performing the decomposition of a prp into primitive prp's.
\medskip
Case 1: \qquad $\max \set{\ell (Q_j), \ell (R_j), \ell (S_j)} > 1$ for some $j$
\medskip
Let us first assume $\ell (Q_j)> 1$ for some $j$. The ratio of any two roots of unity occurring in $D Q_j (\zeta)$
can be written in the form $\pm \zeta^e$ with $1 \le e \le \deg (Q_j) \le \deg (Q)$. Therefore we have
$$\frac{\mathrm{ord}(\zeta)}{2 \deg (Q) } \le {\rm o \,} (D P_j (\zeta)) .$$
By Mann's Theorem \cite{mann}, $ {\rm o \,} (D P_j (\zeta))$ is bounded by the product of primes
at most equal to
$$\ell (P_j)\le \ell (Q_j) +\ell (R_j) + \ell (S_j)\le \ell (Q) +\ell (R) + \ell (S)\le 3+4+3=10\,.$$
The product of the respective primes is at most $2\cdot 3 \cdot 5\cdot 7=210$. Therefore by Lemma \ref{pblock} we find
$$\frac{\mathrm{ord}(\zeta)}{2 (a_0+1)}=\frac{\mathrm{ord}(\zeta)}{2 \deg (Q) } \le 210,$$
which yields
$$\mathrm{ord}(\zeta) \le 420 \; (a_0+1).$$
Analogously, the other two cases yield
$$\mathrm{ord}(\zeta) \le 420 \; (a_2-a_1+a_0-1) \qquad \text{ or } \qquad\mathrm{ord}(\zeta)\le 420 \; (a_0+1),$$
and we conclude
\begin{equation}\label{resge1}
\mathrm{ord}(\zeta) \le 420 \; \max \set{a_0+1, \, a_2-a_1+a_0-1 } =420 \; (a_2-a_1+a_0-1).
\end{equation}
\medskip
Case 2: \qquad $\max \set{\ell (Q_j), \ell (R_j), \ell (S_j)} \le 1$ for all $j$
\medskip
In this case, $D P_j (\zeta)$ is either of the form
\begin{equation}\label{Pdouble}
D P_j (\zeta) = c_{j1}\zeta^{b_{j1}} + c_{j2}\zeta^{b_{j2}}
\end{equation}
or of the form
\begin{equation}\label{Ptriple}
D P_j (\zeta) = c_{j1}\zeta^{b_{j1}} + c_{j2}\zeta^{b_{j2}} + c_{j3}\zeta^{b_{j3}} ,
\end{equation}
where $c_{ji}\in \{-2,\ldots, 2\}$ by Lemma~\ref{lem:mult}. We distinguish two subcases.
\medskip
Case 2.1: There exists $j$ such that $D P_j (\zeta)$ is of the form \eqref{Ptriple}.
\medskip
In this situation $D P_j (\zeta)$ can be written more explicitly as
\begin{equation}\label{Pt1}
D P_j (\zeta) = c_{j1}\zeta^{a_1+a_2+\eta_1} + c_{j2}\zeta^{a_1+\eta_2} + c_{j3}\zeta^{\eta_{3}}
\end{equation}
or
\begin{equation}\label{Pt2}
D P_j (\zeta) = c_{j1}\zeta^{a_1+a_2+\eta_1} + c_{j2}\zeta^{a_2+\eta_2} + c_{j3}\zeta^{\eta_{3}},
\end{equation}
where $\eta_1\in\{0,a_0,a_0+1\}$, $\eta_2\in\{1,a_0\}$, and $\eta_3\in\{0,1,a_0+1\}$. If $D P_j (\zeta)$ is as in \eqref{Pt1} then $P_j (\zeta)=0$ implies that
\[
c_{j1}\zeta^{a_1+a_2+\eta_1-\eta_3} + c_{j2}\zeta^{a_1+\eta_2-\eta_3} + c_{j3} =0.
\]
Now, using Lemma~\ref{abcz} we gain
\[
\mathrm{ord}(\zeta) \mid 6 \mathrm{gcd}(a_1+a_2+\eta_1-\eta_3, a_1+\eta_2-\eta_3)
\]
which yields
\[
\mathrm{ord}(\zeta) \mid 6(a_2-a_1+\eta_1-2\eta_2+\eta_3),
\]
hence,
\begin{equation}\label{tripleresult}
\mathrm{ord}(\zeta) \le 6(2 a_0 + a_2 - a_1).
\end{equation}
If $D P_j (\zeta)$ is as in \eqref{Pt2}, by analogous arguments we again obtain \eqref{tripleresult}.
\medskip
Case 2.2: For all $j$ the divisor $D P_j (\zeta)$ is of the form \eqref{Pdouble}.
\medskip
In this case we have to form pairs of the 10 summands of $D P(\zeta)$ to obtain the divisors $D P_j (\zeta)$. As $\ell(R)=\ell(S)=4$ there must exist $j_1,j_2$ such that $\ell(R_{j_1})=0$ and $\ell(S_{j_2})=0$. In what follows, $c_{ij}\in\{-2,\ldots,2\}$, and $\eta_1,\eta_1'\in\{0,a_0,a_0+1\}$, $\eta_2,\eta_2'\in\{1,a_0\}$, and $\eta_3,\eta_3'\in\{0,1,a_0+1\}$.
Then $D P_{j_1} (\zeta)$ is of the form
\begin{equation}\label{pairj1}
D P_{j_1} (\zeta) = c_{j_11}\zeta^{a_1+a_2+\eta_1} + c_{j_13}\zeta^{\eta_{3}}
\end{equation}
which yields
\begin{equation*}\label{pairj1eq}
c_{j_11}\zeta^{a_1+a_2+\eta_1-\eta_3} + c_{j_13} = 0,
\end{equation*}
and, hence,
\begin{equation}\label{pairj1ord}
\mathrm{ord}(\zeta)\mid 2(a_1+a_2+\eta_1-\eta_3).
\end{equation}
For $D P_{j_2} (\zeta)$ we have two possibilities. Either we have
\begin{equation}\label{pairj21}
D P_{j_2} (\zeta) = c_{j_21}\zeta^{a_1+a_2+\eta'_1} + c_{j_22}\zeta^{a_1+\eta'_2} .
\end{equation}
This yields
\begin{equation*}\label{pairj21eq}
c_{j_21}\zeta^{a_2+\eta'_1-\eta_2'} + c_{j_22} = 0,
\end{equation*}
and, hence,
\begin{equation}\label{pairj21ord}
\mathrm{ord}(\zeta)\mid 2(a_2+\eta_1'-\eta_2').
\end{equation}
The second alternative for $D P_{j_2} (\zeta)$ reads
\begin{equation}\label{pairj22}
D P_{j_2} (\zeta) = c_{j_21}\zeta^{a_1+a_2+\eta'_1} + c_{j_22}\zeta^{a_2+\eta'_2}.
\end{equation}
This yields
\begin{equation*}\label{pairj22eq}
c_{j_21}\zeta^{a_1+\eta'_1-\eta_2'} + c_{j_22} = 0,
\end{equation*}
and, hence,
\begin{equation}\label{pairj22ord}
\mathrm{ord}(\zeta)\mid 2(a_1+\eta_1'-\eta_2').
\end{equation}
If $P_{j_2}$ is of the form \eqref{pairj21}, then \eqref{pairj1ord} and \eqref{pairj21ord} yield
that
\[
\mathrm{ord}(\zeta)\mid 2 \mathrm{gcd}(a_1+a_2+\eta_1-\eta_3, a_2+\eta_1'-\eta_2'),
\]
hence, $\mathrm{ord}(\zeta)\mid 2(a_1-a_2+\eta_1-2\eta_1'+2\eta_2'-\eta_3)$ and therefore
\begin{equation}\label{endres2}
\mathrm{ord}(\zeta)\le 2(a_2-a_1+3a_0+1).
\end{equation}
If $P_{j_2}$ is of the form \eqref{pairj22}, then \eqref{pairj1ord} and \eqref{pairj22ord} yield
\[
\mathrm{ord}(\zeta)\mid 2 \mathrm{gcd}(a_1+a_2+\eta_1-\eta_3, a_1+\eta_1'-\eta_2'),
\]
hence, \eqref{endres2} follows again.
Summing up, the proposition is proved by combining \eqref{resge1}, \eqref{tripleresult}, and \eqref{endres2}.
\end{proof}
Combining results from \cite{mckrowsmy} with our previous considerations we can now prove the main theorem of this paper.
\begin{proof}[Proof of Theorem~\ref{thm1}]
The fact that $\tau$ is either a Salem number or a quadratic Pisot number as well as the decomposition of $R_T$ given in \eqref{eq:clear} follows immediately from~\eqref{eq:decomp}. The bound on the orders of the roots of the cyclotomic polynomials is a consequence of Proposition~\ref{farc}. Together with Lemma~\ref{lem:mult} this proposition yields the estimate $\eqref{SalemLowerBound}$ on the degree of $S$, where the explicitly computable constant $m(a_0,a_2-a_1)$ is the one stated in Lemma~\ref{lem:mult}.
\end{proof}
\section{Convergence properties of Salem numbers generated by star-like trees}
In this section we prove Theorems~\ref{th:m} and~\ref{th:general}.
In the following proof of Theorem~\ref{th:m} we denote by $M_m(x)=x^m-x^{m-1}-\dots-x-1$ the minimal polynomial of the $m$-bonacci number $\varphi_m$.
\begin{proof}[Proof of Theorem~\ref{th:m}]
Note that $Q(x)=(x-1)M_{a_0}(x)$ holds. The theorem is proved if for each $\varepsilon > 0$ the polynomial $P(x)$ has a root $\zeta$ in the open ball $B_\varepsilon(\varphi_{a_0})$ for all sufficiently large $a_1$. We prove this by using Rouch\'e's Theorem. Let $\varepsilon > 0$ be sufficiently small and set $C_\varepsilon := \partial B_\varepsilon(\varphi_{a_0})$. Then $\delta := \min\{|M_{a_0}(x)|\,:\, x \in C_\varepsilon \} >0$. Thus, on $C_\varepsilon$ we have the following estimates.
\begin{align}
\label{est}
|x^{a_1+1}R(x)+S(x)| &< (\varphi_{a_0}+\varepsilon)^{a_1+1}4(\varphi_{a_0}+\varepsilon)^{\eta+a_0-1} + 4(\varphi_{a_0}+\varepsilon)^{a_0+1},
\\
|x^{a_1+a_2}Q(x)| &> (\varphi_{a_0}-\varepsilon)^{2a_1+\eta}(\varphi_{a_0}-1-\varepsilon)\delta.
\nonumber
\end{align}
Combining these two inequalities yields
\begin{equation}\label{P_est}
|P(x)| > (\varphi_{a_0}-\varepsilon)^{2a_1+\eta}(\varphi_{a_0}-1-\varepsilon)\delta -
4(\varphi_{a_0}+\varepsilon)^{a_1+a_0+\eta} - 4(\varphi_{a_0}+\varepsilon)^{a_0+1}.
\end{equation}
Since \eqref{est} and \eqref{P_est} imply that for sufficiently large $a_1$ we have
\[
|P(x)| > |x^{a_1+1}R(x)+S(x)| \qquad (x\in C_\varepsilon),
\]
Rouch\'e's Theorem yields that $P(x)$ and $x^{a_1+a_2}(x-1)M_{a_0}(x)$ have the same number of roots in $B_\varepsilon(\varphi_{a_0})$. Thus our assertion is proved.
\end{proof}
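The convergence stated in Theorem~\ref{th:m} is also easy to observe numerically. The following Python/NumPy sketch (parameter values are chosen purely for illustration) computes the largest real root of $P$ from \eqref{pzdef} for $a_0=3$, $a_2=a_1+1$ and growing $a_1$; the values approach the tribonacci number $\varphi_3\approx 1.8393$:
\begin{verbatim}
import numpy as np
from numpy.polynomial import polynomial as npp

def coeffs_P(arms):
    """Coefficients (low to high) of P(z) from (pzdef)."""
    def zpow_minus_1(a):
        c = np.zeros(a + 1); c[0], c[a] = -1.0, 1.0
        return c
    prod_all = np.array([1.0])
    for a in arms:
        prod_all = npp.polymul(prod_all, zpow_minus_1(a))
    res = npp.polymul(prod_all, np.array([1.0, 1.0]))  # (z+1) * product
    for i, a in enumerate(arms):
        term = zpow_minus_1(a - 1)
        for j, b in enumerate(arms):
            if j != i:
                term = npp.polymul(term, zpow_minus_1(b))
        res = npp.polysub(res, npp.polymul([0.0, 1.0], term))
    return res

for a1 in (5, 10, 20, 40):
    roots = npp.polyroots(coeffs_P((3, a1, a1 + 1)))
    tau = max(r.real for r in roots if abs(r.imag) < 1e-8)
    print(a1, tau)   # tends to phi_3 = 1.83928675...
\end{verbatim}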
To prove Theorem~\ref{th:general} we need the following auxiliary lemma.
\begin{lemma}\label{lem:aux}
Let $r\ge 1$, $a_r> \cdots > a_1 > a_0 \ge 2$, and choose $k\in\{1,\ldots,r-1\}$. If $P$ is as in \eqref{pzdef} and $Q$ as in \eqref{eq:qz} then, for fixed $a_0,\ldots,a_k$ and $a_{k+1},\ldots, a_r$ sufficiently large, we have
\[
\#\{\xi \in \mathbb{C}\;:\, P(\xi)=0,\, |\xi| > 1\} = \#\{\xi \in \mathbb{C}\;:\, Q(\xi)=0,\, |\xi| > 1\} = 1.
\]
\end{lemma}
\begin{proof}
First observe that
\begin{equation}\label{eq:PQesti}
P(z)= z^{a_{k+1} + \cdots + a_r}Q(z) + O(z^{a_{k+2} + \cdots + a_r+\eta})
\end{equation}
for some fixed constant $\eta \in \mathbb{N}$. Since by~\eqref{eq:decomp} the polynomial $P$ has exactly one (Salem) root outside the unit disk, it is sufficient to prove the first equality in the statement of the lemma.
We first show that $Q$ has at least one root $\xi$ with $|\xi| > 1$. This is certainly true for $k < r-2$ as in this case we have $|Q(0)|>1$. For $k\in \{r-2,r-1\}$ we see that
\[
Q^{(\ell)}(1)=0,\;\hbox{for} \; 0\le \ell < a_0+\cdots+a_k - 1, \quad Q^{(a_0+\cdots+a_k - 1)}(1)<0.
\]
As the leading coefficient of $Q$ is positive, this implies that $Q(\xi)=0$ for some $\xi >1$.
Now we show that $Q$ has at most one root $\xi$ with $|\xi| > 1$. Assume on the contrary that there exist two distinct roots $\xi_1,\xi_2\in\mathbb{C}$ of $Q$ outside the closed unit circle. Applying Rouch\'e's Theorem to $P(z)$ and $z^{a_{k+1}+\cdots+a_r}Q(z)$ shows by using \eqref{eq:PQesti} that also $P$ has two zeroes outside the closed unit circle which contradicts the fact that $P$ is a product of a Salem polynomial and cyclotomic polynomials, see~\eqref{eq:decomp}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{th:general}]
From Lemma~\ref{lem:aux} and \eqref{eq:qz} we derive that
\[
Q(z)=C(z)T(z)z^s,
\]
where $s\in\{0,1\}$ and $T$ is a Pisot or a Salem polynomial. To show the theorem we have to prove that $T$ is a Pisot polynomial. It suffices to show that $C(z)T(z)$ is not self-reciprocal.
We distinguish three cases. If $k < r-2$ then $|C(0)T(0)|>1$, hence, as this polynomial has leading coefficient $1$ it cannot be self-reciprocal.
Denote the $\ell$-th coefficient of the polynomial $f$ by $[z^\ell]f(z)$. If $k=r-2$ we have $[z]C(z)T(z)=2(-1)^{r-1}$ and $[z^{a_0+\cdots+a_{r-2}}]C(z)T(z)=-r$. As $k>0$ we have $r>2$ and again the polynomial cannot be self-reciprocal.
Finally for $k=r-1$ we have $[z]C(z)T(z)=0$ and $[z^{a_0+\cdots+a_{r-1}-1}]C(z)T(z)=-r$ which again excludes self-reciprocity of $C(z)T(z)$.
\end{proof}
\section{Concluding remarks}
In this note we have studied Coxeter polynomials of star-like trees with special emphasis on star-like trees with three arms. It would be nice to extend Theorem~\ref{th:m} to star-like trees $T(a_0,\ldots, a_r)$ with four and more arms in order to get lower estimates on the degrees of the Salem polynomials involved in Theorem~\ref{th:general}. In fact, the estimate on the maximal multiplicity of the irreducible factors of $R_T$ contained in Lemma~\ref{lem:mult} can be carried over to star-like trees with larger values of $r$. Concerning Proposition~\ref{farc}, the argument based on Mann's Theorem used in order to settle Case~1 of its proof can be extended to $T(a_0,\ldots, a_r)$, however, in the situation of Case~2 we were not able to prove that the orders of the occurring roots of unity are bounded by a reasonable bound. We expect that a generalization of this case requires new ideas.
\bigskip
{\bf Acknowledgment}.
The authors are indebted to the referee for insightful comments on the first version
of this paper.
\section{Introduction}
\label{intro}
The essential achievements in nanoscience and nanotechnology during the past
decade lead to great interest in studying nanoscale optical fields. The
phenomenon of surface plasmon amplification by stimulated emission of
radiation (spaser) was proposed in Ref.~\onlinecite{Bergman_Stockman} (see
also Refs.~\onlinecite{Stockman_review,Protsenko}). Spaser generates
coherent high-intensity fields of selected surface plasmon (SP) modes that
can be strongly localized on the nanoscale. The properties of localized
plasmons are reviewed in Refs.
\onlinecite{Agranovich,Klyuchnik,Zayats,Bozhevolnyi}. The spaser consists of
an active medium formed by two-level systems (semiconductor quantum dots
(QDs) or organic molecules) and a plasmon resonant nanosystem where the
surface plasmons are excited. The emitters transfer their excitation energy
by radiationless transitions through near fields to a resonant plasmon
nanosystem.
To date, theoretical and experimental studies have focused on
metal-based spasers, where surface plasmons are excited in
metallic nanostructures of different geometric shapes. A spaser
consisting of a nanosystem formed by a V-shaped silver
nanoinclusion embedded in a dielectric host with embedded PbS
and PbSe QDs was considered in Ref.~\onlinecite{Bergman_Stockman}.
A spaser formed by a silver spherical nanoshell on a
dielectric core with a radius of $10-20\ \mathrm{nm}$, and
surrounded by two dense monolayers of nanocrystal QDs was
considered in Ref.~\onlinecite{Stockman}. The SPs propagating along
the bottom of a groove (channel) in the metal surface were studied
in Ref.~\onlinecite{Lisyansky}. The SPs are assumed to be coherently
excited by a linear chain of QDs at the bottom of the channel. It
was shown that for realistic values of the system parameters, gain
can exceed loss, and plasmonic lasing can occur in
ring or linear channels in the silver surface
surrounded by a linear chain of CdSe QDs. In Refs.
\cite{Andrianov_ol,Andrianov_oe,Andrianov_prb1,Andrianov_prb2} the
spaser formed by the metal sphere surrounded by the two-level
quantum dot was studied theoretically. The spaser consisting of the
spherical gain core, containing two-level systems, coated with a
metal spherical plasmonic shell was theoretically analyzed in Ref.
\cite{Baranov}. The experimental study of the spaser formed by $44
\ \mathrm{nm}$ diameter nanoparticles with the gold spherical core surrounded
by dye-doped silica shell was performed in
Refs.~\cite{Noginov_2007,Noginov}. In this experiment the emitters
were formed by dye-doped silica shell instead of QDs. It was
demonstrated that a two-dimensional array of a certain class of
plasmonic resonators supporting coherent current excitations with
high quality factor can act as a planar source of spatially and
temporally coherent radiation~\cite{Zheludev}. This structure
consists of a gain medium slab supporting a regular array of
silver asymmetric split-ring resonators. The spaser formed by a
$55\ \mathrm{nm}$-thick gold film with nano-slits located on the silica
substrate surrounded by PbS QDs was experimentally studied in
Ref.~\onlinecite{Plum}. Room temperature spasing of surface plasmon
polaritons at $1.46\ \mathrm{\mu m}$ wavelength has been
demonstrated by sandwiching a gold-film plasmonic
waveguide between optically pumped InGaAs quantum-well gain media~\cite{Flynn}.
Since plasmons can be excited also in graphene, and damping in
graphene is much less than in
metals~\cite{Varlamov,Falkovsky_prb,Falkovsky_conf}, we propose to
use a graphene nanoribbon surrounded by semiconductor QDs as the
nanosystem for the spaser. Plasmons in graphene provide a suitable
alternative to plasmons in noble metals, because they exhibit much
tighter confinement and relatively long propagation distances, with
the advantage of being highly tunable via electrostatic
gating~\cite{Koppens}. Besides, the graphene-based spaser can work
in THz frequency regime. Recently there were many experimental and
theoretical studies devoted to graphene known by
unusual properties in its band structure~\cite{Castro_Neto_rmp,Das_Sarma_rmp
. The properties of plasmons in graphene were discussed in Refs.~\cit
{Hwang_Das_Sarma,Lozovik_u,Mikhailov,Geim_plasmons}. The electronic
properties of graphene nanoribbons depend strongly on their size and
geometry~\cite{Brey_Fertig_01,Brey_Fertig}. The frequency spectrum
of oblique terahertz plasmons in graphene nanoribbon arrays was obtained~\cit
{Popov}. Besides, graphene-based spaser seems to meet the new
technological needs, since it works at the infrared (IR)
frequencies, while the metal-based spaser works at the higher
frequencies. Let us mention that the graphene-based photonic two-
and one-dimensional crystals proposed in Refs.~\onlinecite{BBKKL,BK}
also can be used effectively as the frequency filters and waveguides
for the far infrared region of electromagnetic spectrum.
In this Paper we propose the graphene nanoribbon based spaser
consisting of a graphene nanoribbon surrounded by semiconductor QDs.
The QDs excited by the laser pumping nonradiatively transfer their
excitation to the SPs localized at the graphene nanoribbon, which
results in an increase of the intensity of the SP field. We calculate
the minimal population inversion, i.e., the difference between the
surface densities of QDs in the excited and ground states, needed for
net SP amplification, and study its dependence on the surface
plasmon wavevector and the graphene nanoribbon width at fixed temperature
for different doping and damping parameters of the armchair
graphene nanoribbon.
The paper is organized in the following way. In Sec.~\ref{dev} the minimal
population inversion for the graphene-based spaser is obtained. The
discussion of the results and conclusions follow in Sec.~\ref{disc}.
\section{Surface plasmon amplification}
\label{dev}
The system under consideration is a graphene nanoribbon, i.e.,
a stripe of graphene at $z=0$ in the $(x,y)$ plane that is
infinite in the $x$ direction and has width $W$ in the $y$ direction.
This stripe is surrounded by deposited dense monolayers of
nanocrystal quantum dots with the dielectric constant $\varepsilon
_{d}$ at $z<0$ and $z>0$. When the quantum dots are optically
pumped, resonant nonradiative energy transfer occurs, creating a
surface plasmon localized in the graphene nanoribbon. Our goal is to
show that amplification by QDs can exceed absorption of the surface
plasmon in the graphene nanoribbon. As a result, the intensity of the
surface plasmon field increases. In other words, the competition
between gain and loss of the surface plasmon field in the graphene
nanoribbon is resolved in favor of the gain.
Below we derive the expression for the minimal population inversion
per unit area, $N_{c}$, needed for the net amplification of SPs in a
doped graphene nanoribbon. It follows from the condition that, in the
regime of plasmon amplification, the rate $\partial \bar{U}/\partial t$ of
the transfer of the average energy of the QDs is greater than the
heat released per unit time, $\partial Q/\partial t$, due to the
absorption of the energy of the plasmon field in the graphene
nanoribbon.
Let us start from the Poynting theorem for the rate of the
transfer of the energy density from a region of space, $\partial \mathcal{W}/\partial t=-\mathrm{div}\vec{S}$, where $\vec{S}$ is the Poynting
vector and assume that the plasmon frequency equals the QD
transition frequency. On the other hand, the rate of the
transferred energy, related to the rates of the average energy of the
QDs and the heat released due to the absorption of the energy by the
graphene nanoribbon, can be presented as
\begin{eqnarray}
\label{wuq}
-\frac{\partial }{\partial t}\int \mathcal{W}\,dV=\frac{\partial \bar{U}}{\partial t}-\frac{\partial Q}{\partial t}\ ,
\end{eqnarray}
where $V$ is the volume of the system. Therefore, from the Poynting
theorem we have the following expression
\begin{eqnarray}
\label{S01}
\int \nabla \cdot \vec{S}\,dV=\frac{\partial \bar{U}}{\partial t}-\frac{\partial Q}{\partial t}\ .
\end{eqnarray}
Let us now consider each term on the right-hand side
of Eq.~(\ref{S01}) separately. The excitation
causing the generation of plasmons in the graphene nanoribbon comes
from the transitions in the QDs between the excited and ground
states. The average
energy $\bar{U}$ of the QDs characterized by the dipole moment is given by
\cite{Tamm}
\begin{eqnarray}
\label{uav1} \bar{U}=\frac{1}{2}\int \vec{P}\cdot \vec{E}dV\ ,
\end{eqnarray}
where $\vec{E}$ is the electric field of the graphene nanoribbon
plasmon, and $\vec{P}$ is the polarization of QDs, which is the
average total dipole moment per unit volume. When the
plasmon frequency $\omega $ equals the QD transition frequency, and
$\vec{E}\sim \exp (-i\omega t)$ and $\vec{P}\sim \exp (-i\omega t)$,
the relation between the polarization of QDs $\vec{P}$ and electric
field of the graphene nanoribbon plasmon $\vec{E}$ has the
form~\cite{Lisyansky}
\begin{eqnarray}
\label{Lis} \vec{P}=-ik\frac{\tau _{p}|\mu |^{2}n_{0}}{\hbar
}\vec{E}\ ,
\end{eqnarray}
where $k=9\times 10^{9}\ \mathrm{N\,m^{2}/C^{2}}$, $n_{0}$ is
the difference between the concentrations of quantum dots in the
excited and ground states, $\tau _{p}$ is the inverse line width,
and $\mu $ is the average off-diagonal element of the dipole moment
of a single QD.
Substituting Eq.~(\ref{Lis}) into Eq.~(\ref{uav1}), we obtain the rate of
the transfer of the average energy of the QDs
\begin{eqnarray} \label{uav11}
\frac{\partial \bar{U}}{\partial t} = \int \omega \,\mathrm{Im} \left(\vec{E}\cdot\vec{P}^{\ast}\right) dV = \omega k\frac{\tau _{p}|\mu |^{2}}{\hbar }\int n_{0}\, |\vec{E}|^{2}\, dV \ .
\end{eqnarray}
We assume that the distances between the quantum dots are small, so
their effect on a plasmon is the same as that of a continuous
(constant) gain distribution along the graphene nanoribbon. We
consider the two-dimensional graphene nanoribbon at $z=0$ and assume
it is infinite in $x$ direction, has the width $W$ in $y$ direction
and therefore, $n_{0}=N_{0}\eta (y,-W/2,W/2)\delta (z)$, where
$N_{0}$ is the difference between the numbers of the excited and
ground state quantum dots per unit area of the graphene nanoribbon,
and $\eta (y,-W/2,W/2)=1$ at $-W/2\leq y\leq W/2$, $\eta
(y,-W/2,W/2)=0$ at $y<-W/2$ and $y>W/2$. Then, taking into account
the above, we obtain from Eq.~(\ref{uav11})
\begin{eqnarray}\label{dudt}
\frac{\partial \bar{U}}{\partial t} &=&\omega k\frac{\tau _{p}|\mu |^{2}}{\hbar }\int_{-\infty }^{+\infty }dx\int_{-\infty }^{+\infty }dy\int_{-\infty
}^{+\infty }dz\,N_{0}\eta (y,-W/2,W/2)\delta (z)|\vec{E}(x,y,z)|^{2}
\nonumber \label{uav111} \\
&=&\omega k\frac{\tau _{p}|\mu |^{2}N_{0}}{\hbar }\int_{-W/2}^{+W/2}dy
\int_{-\infty }^{+\infty }dx\,|\vec{E}(x,y,0)|^{2}\ .
\end{eqnarray}
Taking into account the spatial dispersion of the dielectric
function in the graphene
nanoribbon~\cite{Brey_Fertig_01,Brey_Fertig}, we use the following
expression for the rate of the heat $\partial Q/\partial t$
released due to the absorption of the energy of the plasmon field in
the graphene nanoribbon~\cite{Agranovich,Landau}
\begin{eqnarray} \label{uav2}
\frac{\partial Q}{\partial t} &=& \int \omega \,\mathrm{Im}\,\varepsilon
(\omega,q_{x})\eta(y,-W/2,W/2)|\vec{E}|^{2}\, dV \nonumber \\
&=& \omega \,\mathrm{Im}\,\varepsilon(\omega,q_{x}) \int_{-\infty}^{+\infty} dx \int_{-W/2}^{+W/2} dy \int_{-\infty}^{+\infty} dz\, |\vec{E}(x,y,z)|^{2}\ ,
\end{eqnarray}
where $\mathrm{Im} \varepsilon(\omega,q_{x})$ is the imaginary part of the
dielectric function $\varepsilon(\omega,q_{x})$ of graphene nanoribbon given
by Eq.~(\ref{epsilon}).
The plasmons in a graphene nanoribbon are excited due to the
radiation caused by the transitions from the excited state to the
ground state of the QDs. Therefore, according to the conservation of
energy, the regime of the amplification of the plasmons in the
graphene nanoribbon is established if the rate of the transfer of
the average energy $\partial \bar{U}/\partial t$ of the QDs
given by Eq.~(\ref{dudt}) is greater than the heat
released rate $\partial Q/\partial t$ due to the absorption of the
energy of the plasmon field in the graphene nanoribbon:
\begin{eqnarray}
\label{ineq} \frac{\partial \bar{U}}{\partial t}>\frac{\partial
Q}{\partial t}\ .
\end{eqnarray}
Substituting Eqs.~(\ref{uav111}) and~(\ref{uav2}) into Eq.~(\ref{ineq}), we
get
\begin{eqnarray}
\label{comb}\omega k\frac{\tau _{p}|\mu |^{2}N_{0}}{\hbar }\int_{-W/2}^{+W/2}dy\int_{-\infty }^{+\infty }dx\,|\vec{E}(x,y,0)|^{2}>\omega \,\mathrm{Im}\,\varepsilon
(\omega ,q_{x})\int_{-\infty }^{+\infty }dx\int_{-W/2}^{+W/2}dy\int_{-\infty
}^{+\infty }dz|\vec{E}(x,y,z)|^{2}\ .
\end{eqnarray}
From Eq.~(\ref{comb}), one can obtain the condition for the
difference between the surface densities of the quantum dots in the
excited and ground states corresponding to the amplification of
plasmons:
\begin{eqnarray}
\label{N0} N_{0}>N_{c}=\frac{\mathrm{Im}\,\varepsilon (\omega
,q_{x})\int_{-\infty
}^{+\infty }dx\int_{-W/2}^{+W/2}dy\int_{-\infty }^{+\infty }dz\,|\vec{E}(x,y,z)|^{2}}{k\frac{\tau _{p}|\mu |^{2}}{\hbar }\int_{-\infty
}^{+\infty }dx\int_{-W/2}^{+W/2}dy\,|\vec{E}(x,y,0)|^{2}}\ ,
\end{eqnarray}
where $N_{c}$ is the critical density of the QDs required for the
amplification of the plasmons. The evaluation of the integrals in
Eq.~(\ref{N0}) requires knowledge of the electric field of a
plasmon in a graphene nanoribbon, which is derived in Appendix~\ref{ap.el}. Using
Eq.~(\ref{E}) for the electric field of a plasmon, we have:
\begin{eqnarray} \label{E0}
|\vec{E}(x,y,0)|^{2}=E_{0}^{2}\left( 2q_{x}^{2}\cos
^{2}(q_{y}y)+q_{y}^{2}\right) \ ,
\end{eqnarray}
\begin{eqnarray} \label{Exyz}
|\vec{E}(x,y,z)|^{2}=E_{0}^{2}e^{-2\alpha |z|}\left( 2q_{x}^{2}\cos
^{2}(q_{y}y)+q_{y}^{2}\right) \ ,
\end{eqnarray}
where $\alpha =\sqrt{q_{x}^{2}+q_{y}^{2}}$, and for the armchair nanoribbon
we have $q_{yn}=2\pi /(3a_{0})\left( (2M+1+n)/(2M+1)\right) $ at the width
$W=(3M+1)a_{0}$~\cite{Brey_Fertig_01}, where $a_{0}$ is the graphene
lattice constant and $M$ is an integer. We will use $n=1$.
Substituting Eqs.~(\ref{E0}) and~(\ref{Exyz}) into Eq.~(\ref{N0}),
we obtain
\begin{eqnarray}
\label{N01} N_{0}>N_{c}=\frac{2\hbar \mathrm{Im}\varepsilon (\omega
,q_{x})\int_{-\infty }^{+\infty
}dx\int_{-W/2}^{+W/2}dy\int_{0}^{+\infty }dze^{-2\alpha z}\left(
2q_{x}^{2}\cos ^{2}(q_{y}y)+q_{y}^{2}\right) }{k\tau _{p}|\mu
|^{2}\int_{-\infty }^{+\infty }dx\int_{-W/2}^{+W/2}dy\left(
2q_{x}^{2}\cos ^{2}(q_{y}y)+q_{y}^{2}\right) } \ .
\end{eqnarray}
Finally from Eq.~(\ref{N01}) we obtain
\begin{eqnarray}
\label{N02} N_{0}>N_{c}=\frac{\hbar \mathrm{Im}\varepsilon (\omega
,q_{x})}{\alpha k\tau _{p}|\mu |^{2}}\ .
\end{eqnarray}
Using Eqs.~(\ref{epsilon}) and (\ref{pi}) one can find $\mathrm{Im}\,\varepsilon (q_{x},\omega )$:
\begin{eqnarray} \label{ime}
\mathrm{Im}\varepsilon (q_{x},\omega
)=-\frac{V_{00}(q_{x})f_{1}(q_{x},\beta ,\mu )g_{s}v_{F}q_{x}\omega
\gamma }{\pi \hbar \left( \left( \omega
^{2}-v_{F}^{2}q_{x}^{2}\right) ^{2}+\omega ^{2}\gamma ^{2}\right) }
\ ,
\end{eqnarray}
where $v_{F}$ is the Fermi velocity of electrons in graphene. The
plasmon frequency $\omega$ can be obtained at $\gamma = 0$ from the
condition $\mathrm{Re}\, \varepsilon (q_{x},\omega) = 0$ using
Eqs.~(\ref{epsilon}) and~(\ref{pi}):
\begin{eqnarray} \label{pl}
\omega^{2} = v_{F}^{2}q_{x}^{2} - \frac{V_{0,0}(q_{x})f_{1}(q_{x},\beta ,\mu)g_{s}v_{F}q_{x}}{\pi\hbar} \ .
\end{eqnarray}
The critical density $N_{c}$ is calculated using Eq.~(\ref{N02}); it is
a function of the wave vector $q_{x}$,
the graphene nanoribbon width $W$, the temperature $T$, and the electron
concentration $n_{0}$ determined by the doping.
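As a rough numerical illustration of Eq.~(\ref{N02}), the sketch below
evaluates $N_{c}$ for QD parameters of the type used in Sec.~\ref{disc}.
The values of $\mathrm{Im}\,\varepsilon$, $q_{x}$ and $q_{y}$ are
placeholders only, since the actual $\mathrm{Im}\,\varepsilon$ must be
computed from Eq.~(\ref{ime}).
\begin{verbatim}
# Sketch of Eq. (N02): N_c = hbar*Im(eps)/(alpha*k*tau_p*|mu|^2).
# Im(eps), q_x and q_y are illustrative placeholders; the actual
# Im(eps) follows from Eq. (ime).
import numpy as np

hbar  = 1.0546e-34           # J s
k     = 9.0e9                # N m^2 / C^2
tau_p = 5.9e-15              # s, inverse line width (see text)
mu    = 19 * 3.33564e-30     # C m  (19 Debye)

q_x, q_y = 0.6e9, 0.8e9      # 1/m, placeholder wave vectors
alpha = np.hypot(q_x, q_y)   # 1/m, decay constant of the plasmon field
im_eps = 1.0e-4              # placeholder for Im eps(omega, q_x)

N_c = hbar * im_eps / (alpha * k * tau_p * mu**2)   # 1/m^2
print(f"N_c = {N_c*1e-12:.3g} per micron^2")
\end{verbatim}
With these placeholder numbers the estimate lands at a few tens of QDs per
$\mathrm{\mu m^{2}}$, i.e. on the scale discussed in Sec.~\ref{disc}.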
\section{Results and discussion}
\label{disc}
For our calculations we use the following parameters for the PbS and PbSe
QDs. Since the typical energy corresponding to the transition between the
ground and excited electron states for PbS and PbSe QDs synthesized with the
radii from $1 \ \mathrm{nm}$ to $8 \ \mathrm{nm}$ can be $0.7 \ \mathrm{eV}$,
we use $\tau_{p} \approx 5.9 \ \mathrm{fs}$, and $|\mu| = 1.9 \times
10^{-17} \ \mathrm{esu} = 19 \ \mathrm{Debye}$ ($1 \ \mathrm{Debye} =
10^{-18} \ \mathrm{esu}$, $1 \ \mathrm{Debye} = 3.33564 \times 10^{-30} \
\mathrm{C\cdot m}$)~\cite{Stockman_QD}. Let us mention that the typical
frequency corresponding to the transition between the ground and excited
electron states for PbS and PbSe QDs, which is $f \approx 170 \ \mathrm{THz}$,
matches the resonance with the plasmon frequency in the armchair
graphene nanoribbon~\cite{Brey_Fertig}. Therefore, PbS and PbSe QDs can be
used for the spaser considered here. The damping in graphene $\gamma =
\tau^{-1}$ determined by $\tau$ is assumed to be either $\tau = 1 \ \mathrm{ps}$,
$\tau = 10 \ \mathrm{ps}$, or $\tau = 20 \ \mathrm{ps}$~\cite{Neugebauer,Orlita,Dubinov,Emani}.
The dependence of the critical density of the QDs $N_{c}$ required for
the amplification of the signal on the wave vector $q_{x}$ for the
different doping electron densities $n_{0}$ at the fixed width of
the nanoribbon, temperature and dissipation time $\tau $
corresponding to the damping, obtained using Eq.~(\ref{N02}) is
presented in Fig.~\ref{Fig1}. According to Fig.~\ref{Fig1}, $N_{c}$
decreases as $q_{x}$ and $n_{0}$ increase. Let us mention that at
$q_{x}$ larger than $0.6\ \mathrm{nm^{-1}}$ there is almost no
difference between the values of $N_{c}$ corresponding to the
different doping electron densities $n_{0}$, and for large $q_{x}$
$N_{c} $ converges to approximately $18\ \mathrm{\mu m^{-2}}$. In
Fig. \ref{Fig2} the dependence of the critical density of the QDs
$N_{c}$ required for the amplification of the signal on the wave
vector $q_{x}$ for the different dissipation time corresponding to
the damping at the fixed width of the nanoribbon, temperature and
doping electron densities obtained using
Eq.~(\ref{N02}) is shown. As follows from Fig.~\ref{Fig2}, $N_{c}$ decreases as
$q_{x}$ and $\tau $ increase. This means that higher damping
corresponds to higher $N_{c}$. According to Fig.~\ref{Fig2},
starting with $q_{x}\approx 1.0\ \mathrm{nm^{-1}}$, $N_{c}$
depends very weakly on $q_{x}$, converging to some constant values
that depend on the value of $\tau $. The dependence of the critical
density of the QDs $N_{c}$ required for the amplification of the
signal on the width of the nanoribbon $W$ at the different wave
vector for the fixed dissipation time corresponding to the
damping,
temperature and doping electron densities obtained using
Eq.~(\ref{N02}) is displayed in Fig.~\ref{Fig3}. From Fig.~\ref{Fig3} we
can conclude that $N_{c}$ increases as $W$ increases and decreases
as $q_{x}$
increases. When $W$ increases, the values of $N_{c}$ depend more strongly on
$q_{x}$. The dependence of the critical density of the QDs $N_{c}$
required for the amplification of the signal on the frequency $f$
at the different dissipation time corresponding to the damping for
the fixed temperature and doping electron density obtained using
Eq.~(\ref{N02}) is shown in Fig.~\ref{Fig4}. As demonstrated in
Fig.~\ref{Fig4}, $N_{c}$ increases as $f$ and $\tau $ decrease. According
to Fig.~\ref{Fig4}, starting with $f \approx 140\ \mathrm{THz}$,
$N_{c}$ depends very weakly on frequency and converges to some
constant values that depend on the value of $\tau $. The dependence
of the plasmon frequency $f$ on the width of the nanoribbon $W$, for
the different wave vectors at the fixed dissipation time
corresponding to the damping,
temperature and doping electron density obtained using
Eq.~(\ref{pl}) is presented in Fig.~\ref{Fig5}. According to Fig.~\ref{Fig5},
the plasmon frequency $f$ increases as $q_{x}$ increases and the
width of the nanoribbon $W$ decreases. If the
imaginary part of the dielectric function in Eq.~(\ref{N02}) did not
depend on the width $W$, $N_{c}$ would not depend on
$W$. However, due to the complicated dependence of
$\mathrm{Im}\,\varepsilon (\omega ,q_{x})$ on $W$ through $V_{0,0}(q_{x})$
given by
Eq.~(\ref{V002}), this dependence exists. For the damping time we use
$\tau =5\ \mathrm{ns}$, $\tau =10\ \mathrm{ns}$, and $\tau =20\ \mathrm{ns}$,
because such damping for graphene was obtained in the experimental studies
\cite{Neugebauer,Orlita,Dubinov,Emani}. One can conclude from
Figs.~\ref{Fig2} and~\ref{Fig4} that $N_{c}$ decreases when the damping time
$\tau $ increases.
\begin{figure}[tbp]
\includegraphics[width=10cm]{Fig1.eps}
\caption{The dependence of the critical density of the QDs $N_{c}$
required for the amplification of the signal on the wave vector
$q_{x}$ for the different doping electron densities $n_{0}$ at the
fixed width of the nanoribbon $W$, temperature $T$ and dissipation
time $\protect\tau$ corresponding to the damping.} \label{Fig1}
\end{figure}
\begin{figure}[tbp]
\includegraphics[width=10cm]{Fig2.eps}
\caption{The dependence of the critical density of the QDs $N_{c}$
required for the amplification of the signal on the wave vector
$q_{x}$ for the different dissipation time $\protect\tau$
corresponding to the damping at the fixed width of the nanoribbon
$W$, temperature $T$ and doping electron density $n_{0}$.}
\label{Fig2}
\end{figure}
\begin{figure}[tbp]
\includegraphics[width=10cm]{Fig3.eps}
\caption{The dependence of the critical density of the QDs $N_{c}$
required for the amplification of the signal on the width of the
nanoribbon $W$ at
the different wave vector $q_{x}$ for the fixed dissipation time
$\protect\tau$ corresponding to the damping, temperature $T$ and doping electron
density $n_{0}$.}
\label{Fig3}
\end{figure}
\begin{figure}[tbp]
\includegraphics[width=10cm]{Fig4.eps}
\caption{The dependence of the critical density of the QDs $N_{c}$
required for the amplification of the signal on the frequency $f$
at the different dissipation time $\protect\tau$ corresponding to
the damping for the fixed width $W$, temperature $T$ and doping
electron density $n_{0}$.} \label{Fig4}
\end{figure}
\begin{figure}[tbp]
\includegraphics[width=10cm]{Fig5.eps}
\caption{The dependence of the plasmon frequency $f$ on the width of
the nanoribbon $W$, for the different wave vectors $q_{x}$ at the
fixed dissipation time $\tau$ corresponding to the damping,
temperature $T$ and doping electron density $n_{0}$.} \label{Fig5}
\end{figure}
Let us mention that we used the parameters for PbS and PbSe QDs to
calculate $N_{c}$ because, among the different materials for the QDs,
PbS and PbSe QDs demonstrate the lowest transition
frequency~\cite{Auxier}, which can be in resonance with the
plasmon in the graphene nanoribbon in the IR region of the spectrum.
According to Ref.~\onlinecite{Plum}, the transition frequency for
the QDs depends on the radius of the QDs. The PbS QDs with radii of
$2 \ \mathrm{nm}$ and $5 \ \mathrm{nm}$ have the transition
frequencies $231$ and $194 \ \mathrm{THz}$, respectively~\cite{Plum}. For our
calculations we use PbS and PbSe QDs synthesized with the radii up
to $8 \ \mathrm{nm}$, which can provide the transition frequency $f
\approx 170 \ \mathrm{THz}$~\cite{Stockman_QD}. Let us mention that,
by changing the radius of the QDs, we can tune the QD transition frequency
into resonance with the plasmon frequency in the graphene nanoribbon,
which is controlled by the wave vector $q_{x}$; therefore, we can
control $N_{c}$ by the radius of the QDs. The density of PbS QDs
with the diameter $3.2 \ \mathrm{nm}$ applied for the amplification
of plasmons in a gold film in the experiment~\cite{Plum} was $4\times 10^{6} \ \mathrm{\mu m^{-2}}$. According to Figs.~\ref{Fig1}-\ref{Fig4}, in the
graphene nanoribbon-based spaser it is possible to achieve
amplification with much lower densities of the PbS QDs than
in the gold film-based spaser.
Let us mention that in our calculations we take into account the
temporal and spatial dispersion of the dielectric function of the
graphene nanoribbon in the random phase approximation
\cite{Brey_Fertig_01,Brey_Fertig}. The effects of spatial dispersion
are very important for the properties of a spaser based on a flat
metal nanofilm~\cite{Larkin}. Taking into account the spatial
dispersion of the dielectric function of a metal surface in the
local random phase approximation allows one to conclude that the strong
interaction of a QD with unscreened metal electrons in the surface
nanolayer causes enhanced relaxation due to surface plasmon
excitation and Landau damping in a spaser based on a flat metal
nanofilm~\cite{Larkin}. We therefore expect that taking into account the
spatial dispersion of the dielectric function of the graphene nanoribbon
is also very important for calculating the minimal population inversion
needed for the net SP amplification in the graphene nanoribbon based
spaser.
The advantages of the graphene nanoribbon based spaser are a wide
frequency generation region, from THz up to IR, small damping
(and, therefore, a low pumping threshold), and the possibility of
control by a gate. While we perform our
calculations for IR radiation corresponding to the transition frequency of
$170\ \mathrm{THz}$, the graphene-based spaser can work at frequencies
much below this one, including the THz regime.
\acknowledgments
The authors are grateful to M.~I. Stockman for valuable discussions.
The work was supported by PSC CUNY under Grant No. 65572-00 43.
\section*{Introduction}
Quantum particles were long thought to fall into one of two classes: fermions or bosons.
The two classes are distinguished by the effect on the many-particle wavefunction of
exchanging any pair of such particles: for fermions, the wavefunction acquires a minus
sign, whereas for bosons it does not. It was later understood~\cite{LeinaasMyrheim}
that, while these are indeed the only two possibilities in three or more spatial dimensions,
the 2D case admits a remarkable generalization of the concept of bosons
and fermions known as anyons~\cite{Wilczek1,Wilczek2}.
Mathematically, it is useful to think of anyons in (2+1)-dimensional spacetime, where their exchanges are
viewed as ``braids"~\cite{Frohlich,Witten,NayakReview} in which the particles' worldlines
are wound around one another. In general, exchanging two anyons (i.e. performing a single braid)
can yield any phase between 0 (bosons) and $\pi$ (fermions)~\cite{LeinaasMyrheim}.
Even more strikingly, in certain scenarios the phase acquired after performing a series of exchanges can depend
on the order in which the exchanges occurred~\cite{Frohlich,Witten}. Aside from its fundamental scientific
importance, this ``non-Abelian" braiding of anyons has attracted substantial interest in the condensed
matter physics and quantum information communities as it can be used as a basis
for robust quantum information processing~\cite{Freedman,KitaevComputation,NayakReview}.
Anyons have been theoretically predicted to arise in a variety
of topological phases of matter, for example in fractional quantum Hall systems~\cite{Halperin,Arovas},
where they can exist as deconfined quasiparticle excitations, and in topological superconductors,
where Majorana bound states nucleate at topological defects such as domain walls and vortices.
Despite substantial effort and progress~\cite{Camino,An,Willett,Nakamura,Aasen,Karzig}, as of yet there has been no conclusive experimental evidence of anyonic braiding, Abelian or non-Abelian.
Perhaps counterintuitively, the non-Abelian phases acquired upon
braiding Majorana bound states in superconductors can be understood
from the viewpoint of noninteracting particles, wherein the
single-particle Schr\"odinger equation describes particles as
waves. In this picture, the braiding phases are geometric
phases~\cite{Pancharatnam,Berry1,Berry2} arising from
the adiabatic variation of the phase texture of the bound-state
wavefunctions as the vortices are wound around one another~\cite{Ivanov}.
Therefore, non-Abelian braiding in its simplest
incarnation can be viewed as a universal wave phenomenon accessible
beyond electronic systems. This implies that it can be realized
experimentally in the context of photonics, where a wide range of
topological phenomena have been predicted and observed recently
\cite{HaldaneRaghu, Soljacic2009, RechtsmanFloquet, Hafezi2013, RevModPhys.91.015006}. In fact, non-Abelian gauge fields have recently been observed in a photonic device~\cite{yang2019synthesis}, hinting at the possibility of photonic braiding. In general, photonic topological devices (as well as those in other bosonic systems) are expected to have entirely complementary applications to
their condensed matter analogues.
In this paper we report on the first measurement of the geometric phase arising from braiding topological defects in an array of photonic waveguides, fabricated using the femtosecond direct laser-writing technique \cite{Davis,Szameit2010}. Following the theoretical proposal of Ref.~\cite{Iadecola2016}, we realize topological defects as vortices in a vector field that encodes the displacements of each waveguide in the array. The vortices realized in our experiment bind localized topological modes whose single-particle wavefunctions are identical to those of Majorana bound states in a 2D topological superconductor.\footnote{Note however that the model we realize is in symmetry class BDI, while the topological superconductor is in class D~\cite{Chiu}; thus the two systems have important fundamental differences.} Consequently, at the noninteracting level the effect of braiding these vortices is the same as what is expected for Majorana bound states. We experimentally realize such vortices in the waveguide array and measure the effect of braiding one such vortex with a second that resides outside the array at an effectively infinite distance from the first. In order to eliminate the effect of dynamical phases, a 180$^\circ$ braiding operation is performed in two adjacent arrays, each with a vortex at its center. If the sense of rotation of the two arrays is the same (opposite), the relative phase at the core is found to be $0$ ($\pi$). This observation matches the theoretically predicted geometric phase, providing a clear signature of braiding.
We arrange the waveguides in a near-honeycomb lattice, with each waveguide displaced from its honeycomb position $\b{r}=(x,y)$ by an $\b{r}$- and $z$-dependent amount $\b{u}_{\b{r}}(z)$.
The diffraction of light through this waveguide array is governed by the paraxial wave equation
\begin{equation}
i\partial_{z}\psi(\textbf{r},z)=-\frac{1}{2k_{0}}\nabla^{2}_{\textbf{r}}\psi(\textbf{r},z)-\frac{k_{0}\Delta n(\textbf{r})}{n_{0}}\psi(\textbf{r},z),
\label{eq:propagation}
\end{equation}
where $\psi(\textbf{r},z)$ is the envelope function of the electric field $\textbf{E}(\textbf{r},z)=\psi(\textbf{r},z)e^{i(k_{0}z-\omega t)}\hat{x}$, $k_{0}=2\pi n_{0}/\lambda$ is the wavenumber within the medium, $\lambda$ is the wavelength of light, $\nabla^{2}_{\textbf{r}}$ is the Laplacian in the transverse $(x,y)$ plane, and $\omega=2\pi c/\lambda$. Here $n_{0}$ is the refractive index of the
ambient glass and $\Delta n$ is the refractive index relative to $n_{0},$ which acts as the potential in Eq.~\eqref{eq:propagation}.
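As an aside, Eq.~\eqref{eq:propagation} can be integrated directly with a
standard split-step Fourier scheme. The sketch below propagates a Gaussian
input along a single straight waveguide; all parameters are illustrative
assumptions (only the waveguide radii echo the values quoted later), and this
is a minimal sketch rather than the code used to design the arrays.
\begin{verbatim}
# Minimal split-step Fourier integrator for the paraxial equation.
import numpy as np

lam, n0, dn0 = 1.55e-6, 1.45, 7e-4       # wavelength, glass index, contrast
k0 = 2*np.pi*n0/lam
N, Lw = 256, 80e-6                       # grid points, transverse window
x = np.linspace(-Lw/2, Lw/2, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
kx = 2*np.pi*np.fft.fftfreq(N, d=Lw/N)
KX, KY = np.meshgrid(kx, kx, indexing="ij")

dn  = dn0*np.exp(-((X/5.35e-6)**2 + (Y/3.5e-6)**2))  # elliptical guide
psi = np.exp(-(X**2 + Y**2)/(6e-6)**2)               # input Gaussian beam

dz, nz = 10e-6, 1000                                 # 1 cm of propagation
diff = np.exp(-1j*(KX**2 + KY**2)*dz/(2*k0))         # diffraction step
pot  = np.exp(1j*k0*dn*dz/n0)                        # potential step
for _ in range(nz):
    psi = np.fft.ifft2(diff*np.fft.fft2(psi))*pot

core = dn > dn0/2
print("guided fraction:", (np.abs(psi)**2)[core].sum()/(np.abs(psi)**2).sum())
\end{verbatim}
The same scheme, with $\Delta n(\b{r},z)$ assembled from the displaced
waveguide positions, reproduces the tight-binding dynamics introduced below.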
Since the displacements are small compared to the lattice spacing $a$ ($|\b{u}| \leq 0.25\, a$) and vary slowly in the $z$-direction, the propagation of light through the waveguide array can also be described by a coupled-mode (i.e. tight-binding) equation:
\begin{equation}\label{eq_paraxial}
i \partial_z c_{\b{r}}(z) = \sum_{\langle \b{r}' \rangle} [ t + \delta t_{\b{r},\b{r}'}(z) ] c_{\b{r}'}(z),
\end{equation}
where $c_{\b{r}}(z)$ denotes the amplitude of light in waveguide $\b{r}$, and the $z$-dependent hopping modification $\delta t_{\b{r},\b{r}'}$ arises from the change in waveguide separation due to the displacements $\b{u}_{\b{r}}, \b{u}_{\b{r}'}$.
Identical to electrons in graphene, the two sublattices of the honeycomb lattice give rise to two energy bands in the spectrum, which touch at two gapless ``Dirac'' points at crystal momenta $\b{K}_{\pm} = (\pm 4\pi / (3\sqrt{3}a) , 0)$.
As shown in Fig.~\ref{fig1}, we choose the displacements $\b{u}_{\b{r}}=(u^x_{\b{r}},u^y_{\b{r}})$ corresponding to a Kekul\'e distortion~\cite{Hou,Iadecola2016} controlled by the complex order parameter $\Delta_\b{r}(z)$.
The magnitude of the order parameter determines the displacement amplitude, while its phase controls the displacement angle according to $\text{arg}(u^x_{\b{r}}+i u^y_{\b{r}}) = \b{K}_{+} \cdot \b{r} + \text{arg}(\Delta_\b{r})$.
When $\Delta_\b{r}(z)$ is constant in space, the Kekul\'e distortion couples the two Dirac cones, leading to a band gap proportional to $| \Delta |$ in the energy spectrum.
Our work concerns order parameters that vary both throughout the lattice and in the $z$-direction.
In particular, we focus on vortex configurations of the order parameter, $\Delta_{\b{r}} = | \Delta | \exp(i [\alpha + q_{v}\,\text{arg}(\b{r} - \b{r}_v)])$, where the phase of $\Delta_{\b{r}}$ winds by $2\pi q_{v}$ about the vortex center $\b{r}_v$ for a vortex of charge $q_{v}$.
For $q_{v}=\pm 1$, such vortices are known to bind a single mid-gap photonic mode, localized near the vortex core \cite{JackiwRossi,Hou,Iadecola2016}.
The presence of this mode is protected by the nontrivial topology of the vortex configuration, as well as the time-reversal and chiral symmetries\footnote{Here, chiral symmetry arises in the effective coupled mode equation due to the bipartite coupling between the two sublattices of the honeycomb lattice, similar to graphene \cite{Semenoff}.} of the system \cite{TeoKane}.
A system with multiple vortices carries a bound mode at each vortex.
Away from the vortex centers, the order parameter varies slowly compared to the lattice scale, and the system remains gapped.
This opens up the possibility of \emph{braiding} the vortex modes as functions of $z$: if braiding is executed adiabatically in $z$, light bound in a vortex mode will not disperse into the gapped bulk.
Braiding has two effects on a vortex: moving its center $\b{r}_v$, and altering the offset $\alpha$ in the local order-parameter phase.
The latter arises due to the inherent nonlocality of vortices---as two vortices braid, each one changes its location in the space-dependent phase field of the other.
The first vortex therefore experiences an effective offset $\alpha + q_{v,2}\,\text{arg}(\b{r}_{v,1} - \b{r}_{v,2})$, which increases or decreases depending on the handedness of the braid and the charge of the second vortex.
The effect of braiding on light trapped in the vortex mode is captured by the offset-dependence of the vortex-mode wavefunction $c^v_\b{r}(\b{r}_v,\alpha)$.
Specifically, the wavefunction is \emph{double-valued} in the offset,
\begin{equation}\label{eq_doublevalue}
c^v_\b{r}(\b{r}_v,\alpha + 2\pi) = -c^v_\b{r}(\b{r}_v,\alpha),
\end{equation}
signifying that light trapped in the vortex mode gains a geometric phase of $\pi$ (i.e. a minus sign) after a full $2\pi$ braid.
The double-valued form of the wavefunction is reminiscent of Majorana wavefunctions at vortices in $p + ip$ superconductors~\cite{Ivanov, ReadGreen} or at superconductor-TI interfaces~\cite{FuKane}.
Indeed, due to this double-valuedness, the photonic zero modes in our system will gain the \emph{same} non-Abelian geometric phases as Majorana zero modes in solid state systems upon braiding~\cite{Iadecola2016}.
Crucially, this phase is gained by the \emph{bosonic} wavefunction describing photons in our system---not the Majorana wavefunction describing electrons in a superconductor---leading to a distinct, reduced class of operations realizable via braiding~\cite{Iadecola2016}.
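Heuristically, this sign can be traced to a half-angle dependence of the
bound state on the order-parameter phase: for a continuum Jackiw--Rossi-type
solution the vortex-mode wavefunction acquires a factor $e^{i\alpha/2}$, so
the shift $\alpha\to\alpha+2\pi$ multiplies $c^v_\b{r}$ by $e^{i\pi}=-1$,
consistent with Eq.~\eqref{eq_doublevalue}.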
In this work, we provide a robust verification of the geometric phase $\pi$ gained by the photonic vortex modes under a $2\pi$ rotation of the order parameter $\alpha$.
We detect this phase by performing two `on-chip' interferometry experiments, in which we interfere light from two vortex modes that have undergone different rotations of $\alpha$.
In each experiment, we fabricate a waveguide array containing two disconnected Kekul\'e-distorted honeycomb lattices, which constitute the `left' and `right' arms of an interferometer [see Fig.~\ref{fig1}(A)].
The left and right lattices are initially identical and contain a single vortex in the Kekul\'e order parameter.
As $z$ increases, the offset $\alpha$ of each lattice's order parameter is monotonically increased, or decreased, by $\pi$.
The first experiment serves as a control, with $\alpha\to\alpha+\pi$ in an identical sense of rotation in both lattices.
In the language of braiding, this increase of $\alpha$ can be interpreted as braiding a vortex `at infinity' counterclockwise around each lattice by an angle $\pi$ [as indicated in Fig.~\ref{fig1}(B)].
Since the left and right lattices are identical throughout, light trapped in the left vortex mode should gain the same phase as light trapped in the right (both dynamical and geometric phases), and they should interfere constructively after braiding.
The second experiment serves to detect the geometric phase from a $2\pi$ rotation of $\alpha$.
Here the left lattice undergoes a rotation $\alpha\to\alpha+\pi$, while the right undergoes $\alpha\to\alpha-\pi$.
This is analogous to braiding a vortex at infinity $180^\circ$ counterclockwise about the left lattice, but clockwise about the right [\cite{Supplementary}, Movies S1 and S2].
Although the final left and right lattices are identical, the paths they have traversed differ by a $2\pi$ rotation of $\alpha$, and the relative phase gained by light propagating through the two vortex modes will differ according to the geometric phase of this process.
We note that chiral symmetry fixes the energy of the vortex mode to occur at precisely the middle of the band gap~\cite{JackiwRossi, Hou, Iadecola2016}, such that, despite their different $z$-evolutions, light in the left and right lattices is expected to gain the same dynamical phase during braiding.
The entire experimental waveguide array consists of three stages, occurring in succession as a function of $z$ [see Fig.~\ref{fig1}(A)].
The total length of the sample is 10 cm and the radii of the major and minor axes of the waveguides are 5.35 and 3.5 $\mu$m, respectively.
The first stage prepares the light to be coupled in-phase into the vortex modes of the left and right lattices.
This is done using an `on-chip' Y-junction beam-splitter~\cite{Izutsu}. We first precisely couple light from a tunable-wavelength laser through a lens-tipped fiber into a single waveguide at the input facet of the sample.
We then split this waveguide into two in a symmetric manner, such that both waveguides trap light of nearly equal amplitude and phase.
The two waveguides are subsequently guided to the center of the vortex cores of the left and right lattices.
The middle stage contains the braiding operation, and begins by abruptly initializing all other waveguides in each lattice.
Light traveling in the waveguides of the first stage enters each lattice through a single waveguide near each vortex core, setting the initial condition for the braiding process.
The portion of the input light that overlaps with the vortex mode remains localized in the mode, while the rest diffracts throughout the lattice.
After initialization, the waveguide displacements are smoothly varied in $z$ to produce the desired change in order parameter for the experiment being performed.
We emphasize that the individual waveguides move little during this process: each waveguide only undergoes rotation of its displacement angle from the undistorted honeycomb position.
In Fig.~\ref{fig2}, we show the experimental output of a waveguide array containing only these first two stages.
As expected, a high fraction of the light is localized near each vortex core, indicating that the initialization stage succeeded and the braiding process was performed sufficiently adiabatically.
We also observe a nonzero intensity in nearly all waveguides of both lattices, indicating that light not bound to the vortex modes has diffracted throughout each lattice.
To detect the phase of the vortex mode light after braiding, we add a third, interferometric stage to the waveguide array~\cite{Izutsu}, depicted in Fig.~\ref{fig1}(A).
The left and right lattices are abruptly terminated except for one waveguide each, chosen for its high overlap with the final vortex-mode wavefunction.
Light localized in the left and right vortex modes continues to propagate through the remaining two waveguides. The intensity and phase of these waveguides' light are proportional to those of the respective vortex modes after braiding.
In order to read out the waveguides' relative phase, we first split each waveguide symmetrically into two, similar to the first stage.
The two innermost waveguides of the left-right pair are then combined into a single waveguide, while the two outermost remain separate.
The waveguide combination performs the interferometry: if the two input arms are in phase, they will excite the symmetric bound mode of the combined waveguide; if they are out of phase, they will not overlap the bound mode and thus the light will diffract away.
The combined waveguide intensity $I_C$ thus indicates the intensity of in-phase light.
The outermost left and right waveguide intensities $I_L$ and $I_R$ measure the waveguide intensities before combination.
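A two-mode caricature of the readout makes the logic explicit: only the
projection of the incoming pair of amplitudes onto the symmetric bound mode
of the merged waveguide stays guided. The reduction to two modes and the
unit input amplitudes are idealizations.
\begin{verbatim}
# Toy model of the interferometric readout: project the two input
# amplitudes onto the symmetric bound mode of the combined waveguide.
import numpy as np

def combined_intensity(a_L, a_R):
    sym = np.array([1.0, 1.0])/np.sqrt(2)   # symmetric bound mode
    return abs(sym @ np.array([a_L, a_R]))**2

for phase, label in [(0.0, "same-sense braids"), (np.pi, "opposite braids")]:
    I_C = combined_intensity(1.0, np.exp(1j*phase))
    print(label, " eta =", I_C/(1.0 + 1.0))   # eta = I_C/(I_L + I_R)
\end{verbatim}
For equal input intensities this reproduces the ideal limits $\eta=1$
(in phase) and $\eta=0$ (out of phase) used below.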
Results of both experiments including the final stage are shown in Fig.~\ref{fig3}.
As anticipated, braiding the left and right vortices in the same direction results in a combined waveguide with higher intensity than the individual left and right waveguides, indicating constructive interference.
On the other hand, braiding the left and right vortices in opposite directions results in nearly zero combined waveguide intensity, indicating near-complete destructive interference and therefore a relative phase of $\pi$ between the left and right vortex modes.
To quantify these results, we define the \emph{contrast} $\eta$ as the ratio between the combined waveguide intensity and the sum of the individual left and right waveguide intensities, $\eta = I_C / (I_L + I_R)$.
In the ideal case, the contrast achieves the upper bound $1$ for perfect constructive interference and the lower bound $|I_L - I_R|/(I_L + I_R)$ for perfect destructive interference.
As plotted in Fig.~\ref{fig3}(A), the experimentally observed contrast is indeed near $1$ when braiding is performed in the same direction for both lattices,
and is close to zero (the lower bound for symmetric intensities, $I_L = I_R$) when it is performed in opposite directions.
We verify that this phase difference arises as a geometric, rather than a dynamical, phase by demonstrating that the interference is insensitive to the wavelength of light used.
The wavelength sets the `time' scale in the paraxial equation, Eq.~(\ref{eq_paraxial}), so that different-wavelength light acquires different dynamical phases during propagation~\cite{Guglielmon2018}.
If the relative phase between the left and right waveguides arose as a dynamical phase, we would expect it to oscillate as the wavelength of light was varied rather than being quantized at $\pi$.
Instead, as shown in Fig.~\ref{fig3}, we observe quite consistent, non-oscillatory values of $\eta$ for both experiments over a large range of wavelengths, 1450--1650 nm. The principal sources of error/noise are interference from imperfectly in-coupled light in the braiding stage as well as diffracted light in the interferometric stage. We note that the random error is significantly lower in the sample in which we observe the $\pi$ phase associated with braiding [red points in Fig.~\ref{fig3}(A)] compared to the `control' sample in which we do not [blue points in Fig.~\ref{fig3}(A)].
The measured $\eta$ values when the two vortices are braided in equal and opposite directions are 1.055$\,\pm\,$0.247 and 0.053$\,\pm\,$0.037, respectively.
This indicates that the relative phase is fixed to $\pi$ and supports its identification as a geometric phase.
In conclusion, we have used a photonic lattice of evanescently-coupled waveguides to directly measure the braiding of vortices. The $\pi$ phase that we observe here directly implies that non-Abelian braiding operations can be carried out using similar ideas in this and many other platforms that are governed by classical wave equations. We expect this work to motivate the exploration of braiding physics in interacting bosonic systems (via optical nonlinearity or mediation by Rydberg atoms, for example), as well as the search for applications beyond robust quantum information processing.
\bibliographystyle{Science}
\section{Introduction}
The equation of motion for a supernova remnant (SNR)
can be modeled by a single law of motion or multiple
laws of motion when the appropriate boundary
conditions are provided.
Examples of a single law of motion are:
the Sedov expansion
in
the presence of a circumstellar medium (CSM) with constant density
where the radius, $r$,
scales as $r \propto t^{0.4}$, see \cite{Sedov1959},
and the momentum conservation in the
framework of the thin layer approximation
with CSM at constant density
where $r \propto t^{0.25}$,
see \cite{Dyson1997}.
Examples of piece-wise solutions for an SNR can be found
in \cite{Dalgarno1987}: a first energy-conserving phase,
$r \propto t^{0.4}$, followed by a second adiabatic phase
where $r \propto t^{0.285}$.
At the same time it has been shown
that in the first ten years of SN~1993J
$r \propto t^{0.82}$, which means an observed exponent larger
than the previously suggested exponents,
see \cite{Zaninetti2011a}.
The previous analysis allows posing
a basic question: `Is it possible to find an analytical
solution for SNRs given the three observable astronomical
parameters: age, radius and velocity?'.
In order to answer the above question,
Section \ref{secprofiles} introduces three profiles for
the CSM,
Section \ref{secmotion} derives three
Pad\'e approximated laws of motion for SNRs,
and
Section \ref{secastro}
closes the derived equations of motion for four SNRs.
\section{Profiles of density}
This section introduces three density profiles
for the CSM:
an exponential profile,
a Gaussian profile,
and a self-gravitating profile of Lane--Emden type.
\label{secprofiles}
\subsection{The exponential profile}
This density is assumed to have the following
exponential dependence on $r$
in spherical coordinates:
\begin{equation}
\rho(r;r_0,b,\rho_0) =
\rho_0 \exp{(-\frac{(r-r_0)}{b})}
\quad ,
\label{profexponential}
\end{equation}
where $b$ represents the scale.
The piece-wise density is
\begin{equation}
\rho (r;r_0,b,\rho_0) = \left\{ \begin{array}{ll}
\rho_0 & \mbox {if $r \leq r_0 $ } \\
\rho_0 \exp{(-\frac{(r-r_0)}{b})} & \mbox {if $r > r_0 $ }
\end{array}
\right.
\label{profile_exponential}
\end{equation}
The total mass swept, $M(r;r_0,b,\rho_0) $,
in the interval $[0,r]$ is
\begin{eqnarray}
M(r;r_0,b,\rho_0) = \nonumber \\
\frac{4}{3}\,\rho_{{0}}\pi\,{r_{{0}}}^{3}
\nonumber \\
-4\,b \left( 2\,{b}^{2}+2\,br+{r}^{2}
\right) \rho_{{0}}{{\rm e}^{{\frac {r_{{0}}-r}{b}}}}\pi+4\,b \left( 2
\,{b}^{2}+2\,br_{{0}}+{r_{{0}}}^{2} \right) \rho_{{0}}\pi
\quad .
\end{eqnarray}
\subsection{The Gaussian profile}
This density has the
Gaussian dependence
\begin{equation}
\rho(r;r_0,b,\rho_0) =
\rho_0 \exp{(-\frac{1}{2}\frac{r^2}{b^2})}
\quad ,
\label{prof_gaussian}
\end{equation}
and the piece-wise density is
\begin{equation}
\rho (r;r_0,b,\rho_0) = \left\{ \begin{array}{ll}
\rho_0 & \mbox {if $r \leq r_0 $ } \\
\rho_0\exp{(-\frac{1}{2}\frac{r^2}{b^2})} & \mbox {if $r > r_0 $ }
\end{array}
\right.
\end{equation}
The total mass swept, $M(r;r_0,b,\rho_0) $,
in the interval $[0,r]$ is
\begin{eqnarray}
M(r;r_0,b,\rho_0) = \nonumber \\
\frac{4}{3}\,\rho_{{0}}\pi\,{r_{{0}}}^{3}+4\,\rho_{{0}}\pi\, \big ( -{{\rm e}^
{-\frac{1}{2}\,{\frac {{r}^{2}}{{b}^{2}}}}}r{b}^{2}+\frac{1}{2}\,{b}^{3}\sqrt {\pi}
\sqrt {2}{\rm erf} (\frac{1}{2}\,{\frac {\sqrt {2}r}{b}} ) \big )\nonumber \\
-
4\,\rho_{{0}}\pi\, \big ( -{{\rm e}^{-\frac{1}{2}\,{\frac {{r_{{0}}}^{2}}{{b}^
{2}}}}}r_{{0}}{b}^{2}+\frac{1}{2}\,{b}^{3}\sqrt {\pi}\sqrt {2}{\rm erf} (
\frac{1}{2}\,{\frac {\sqrt {2}r_{{0}}}{b}} ) \big )
\quad ,
\end{eqnarray}
where ${\rm erf}$ is the error function,
see \cite{NIST2010}.
\subsection{The Lane--Emden profile}
The Lane--Emden profile when $n=5$,
after \cite{Lane1870,Emden1907},
is
\begin{equation}
\rho(r;r_0,b,\rho_0) =\rho_0 {\frac {1}{{(1+ \frac{{r}^{2}}{3b^2})^{\frac{5}{2}}}} }
\label{profile_lane}
\quad ,
\end{equation}
and the piece-wise density is
\begin{equation}
\rho (r;r_0,b,\rho_0) = \left\{ \begin{array}{ll}
\rho_0 & \mbox {if $r \leq r_0 $ } \\
\rho_0 {\frac {1}{{(1+ \frac{{r}^{2}}{3b^2})^{\frac{5}{2}}}} } & \mbox {if $r > r_0 $ }
\end{array}
\right.
\end{equation}
The total mass swept, $M(r;r_0,b,\rho_0) $,
in the interval $[0,r]$ is
\begin{eqnarray}
M(r;r_0,b,\rho_0) = \nonumber \\
\frac{4}{3}\,\rho_{{0}}\pi\,{r_{{0}}}^{3}+4\,{\frac {{b}^{3}{r}^{3}\rho_{{0}}
\sqrt {3}\pi}{ \left( 3\,{b}^{2}+{r}^{2} \right) ^{\frac{3}{2}}}}-4\,{\frac {{
b}^{3}{r_{{0}}}^{3}\rho_{{0}}\sqrt {3}\pi}{ \left( 3\,{b}^{2}+{r_{{0}}
}^{2} \right) ^{\frac{3}{2}}}}
\quad .
\end{eqnarray}
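The closed forms above are straightforward to verify symbolically. A minimal
sympy sketch for the Lane--Emden case checks that the quoted
$M(r;r_0,b,\rho_0)$ has the correct shell-mass derivative and boundary value:
\begin{verbatim}
# Check of the Lane-Emden (n=5) swept mass: dM/dr = 4*pi*r^2*rho(r)
# and M(r0) = (4/3)*pi*rho0*r0^3.
import sympy as sp

r, r0, b, rho0 = sp.symbols("r r_0 b rho_0", positive=True)
rho = rho0/(1 + r**2/(3*b**2))**sp.Rational(5, 2)
M = (sp.Rational(4, 3)*sp.pi*rho0*r0**3
     + 4*sp.sqrt(3)*sp.pi*rho0*b**3*(
         r**3/(3*b**2 + r**2)**sp.Rational(3, 2)
         - r0**3/(3*b**2 + r0**2)**sp.Rational(3, 2)))

print(sp.simplify(sp.diff(M, r) - 4*sp.pi*r**2*rho))                    # -> 0
print(sp.simplify(M.subs(r, r0) - sp.Rational(4, 3)*sp.pi*rho0*r0**3))  # -> 0
\end{verbatim}
The exponential and Gaussian masses can be checked in the same way.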
\section{The equation of motion}
\label{secmotion}
The conservation of the momentum in
spherical coordinates
in the framework of the thin
layer approximation states that
\begin{equation}
M_0(r_0) \,v_0 = M(r)\,v
\quad ,
\end{equation}
where $M_0(r_0)$ and $M(r)$ are the masses swept at $r_0$ and $r$,
and $v_0$ and $v$ are the velocities of
the thin layer at $r_0$ and $r$.
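Note that the conservation law already fixes the dynamics,
$dr/dt = M_0(r_0)\,v_0/M(r)$, which can be integrated numerically for any of
the three profiles. A minimal sketch for the exponential profile follows,
with Tycho-like parameters anticipated from Table~\ref{tablesnrsexp};
$\rho_0$ cancels and is scaled out.
\begin{verbatim}
# Numerical integration of dr/dt = M(r0)*v0/M(r), exponential profile.
import numpy as np
from scipy.integrate import solve_ivp

r0, b = 1.203, 0.113          # pc (Tycho-like values)
v0 = 8.0e3/9.7968e5           # pc/yr (8000 km/s)
t0, t_age = 0.1, 442.0        # yr

def mass(r):
    # swept mass divided by 4*pi*rho0 (rho0 cancels in the velocity)
    return (r0**3/3 + b*(2*b**2 + 2*b*r0 + r0**2)
            - b*(2*b**2 + 2*b*r + r**2)*np.exp((r0 - r)/b))

sol = solve_ivp(lambda t, y: [mass(r0)*v0/mass(y[0])],
                (t0, t_age), [r0], rtol=1e-8)
print("r(442 yr) =", sol.y[0, -1], "pc")  # compare with the observed 3.7 pc
\end{verbatim}
The Pad\'e approximants derived below provide closed-form surrogates for
exactly this integration.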
\subsection{Motion with exponential profile}
Assuming an exponential profile as given by
Eq.~(\ref{profile_exponential})
the velocity is
\begin{equation}
\frac{dr}{dt} = \frac{NE}{DE}
\quad ,
\label{vel_exp}
\end{equation}
where
\begin{eqnarray}
NE =
{-{r_{{0}}}^{3}v_{{0}}}
\nonumber
\quad,
\end{eqnarray}
and
\begin{eqnarray}
DE=
6\,{{\rm e}^{{\frac {r_{{0}}-r}{b}}}}{b}^{3}+6\,{{\rm e}^{{\frac {r_{{0
}}-r}{b}}}}{b}^{2}r
\nonumber \\
+3\,{{\rm e}^{{\frac {r_{{0}}-r}{b}}}}b{r}^{2}-{r_{
{0}}}^{3}-3\,{r_{{0}}}^{2}b-6\,r_{{0}}{b}^{2}-6\,{b}^{3}
\quad .
\nonumber
\end{eqnarray}
In the above differential equation of the first order in $r$, the
variables can be separated and integration
gives the following non-linear equation:
\begin{eqnarray}
\frac {1}{{r_{{0}}}^{3}{\it v_0}}
\bigg (
18\,{{\rm e}^{{\frac {r_{{0}}-r}{b}}}}{b}^{4}+12\,{{\rm e}^{{\frac {r_
{{0}}-r}{b}}}}{b}^{3}r+3\,{{\rm e}^{{\frac {r_{{0}}-r}{b}}}}{b}^{2}{r}
^{2}-{r_{{0}}}^{4}-3\,{r_{{0}}}^{3}b
\nonumber\\
+{r_{{0}}}^{3}r-9\,{r_{{0}}}^{2}{b
}^{2}+3\,{r_{{0}}}^{2}br-18\,{b}^{3}r_{{0}}+6\,r_{{0}}{b}^{2}r-18\,{b}
^{4}+6\,{b}^{3}r
\bigg )
\nonumber \\
=\left( t-{\it t_0} \right)
\quad .
\label{eqn_nl_exp}
\end{eqnarray}
In this case it is not possible to find an analytical
solution for the radius, $r$,
as a function of time.
We therefore apply
the Pad\'e rational polynomial
approximation of degree 2 in the numerator
and degree 1 in the denominator about the point $r=r_0$
to the left-hand side of
Eq.~(\ref{eqn_nl_exp}):
\begin{equation}
\frac
{
- \left( r_{{0}}-r \right) \left( -5\,br-br_{{0}}-2\,rr_{{0}}+2\,{r_{
{0}}}^{2} \right)
}
{
2\,v_{{0}} \left( 2\,br-5\,br_{{0}}-rr_{{0}}+{r_{{0}}}^{2} \right)
}
= t-t_0
\quad .
\end{equation}
The resulting Pad\'e approximant for the radius $r_{2,1}$ is
\begin{eqnarray}
r_{2,1}=\frac{1}{2\,r_{{0}}+5\,b}
\Bigg ( r_{{0}}tv_{{0}}-r_{{0}}{\it t_0}\,v_{{0}}-2\,btv_{{0}}+2\,b{\it t_0}\,v_
{{0}}+2\,{r_{{0}}}^{2}+2\,r_{{0}}b
\nonumber \\
+ \biggl ( {4\,{b}^{2}{t}^{2}{v_{{0}}}^{
2}-8\,{b}^{2}t{\it t_0}\,{v_{{0}}}^{2}+4\,{b}^{2}{{\it t_0}}^{2}{v_{{0}}
}^{2}-4\,b{t}^{2}r_{{0}}{v_{{0}}}^{2}}
\nonumber\\
{
+8\,bt{\it t_0}\,r_{{0}}{v_{{0}}}^
{2}-4\,b{{\it t_0}}^{2}r_{{0}}{v_{{0}}}^{2}+{t}^{2}{r_{{0}}}^{2}{v_{{0}
}}^{2}-2\,t{\it t_0}\,{r_{{0}}}^{2}{v_{{0}}}^{2}+{{\it t_0}}^{2}{r_{{0}}
}^{2}{v_{{0}}}^{2}
}
\nonumber \\
{
+42\,{b}^{2}tr_{{0}}v_{{0}}
-42\,{b}^{2}{\it t_0}\,r_{
{0}}v_{{0}}+6\,bt{r_{{0}}}^{2}v_{{0}}-6\,b{\it t_0}\,{r_{{0}}}^{2}v_{{0
}}+9\,{r_{{0}}}^{2}{b}^{2}} \biggl )^{\frac{1}{2}}
\Bigg )
\quad ,
\label{rmotionexp}
\end{eqnarray}
and the velocity is
\begin{equation}
v_{2,1}=\frac{dr_{2,1}}{dt} =\frac{NVE}{DVE}
\quad ,
\label{vmotionexp}
\end{equation}
\begin{eqnarray}
NVE =
4\,v_{{0}} \Big \{ ( -b/2+1/4\,r_{{0}} )\times \nonumber \\
\sqrt {4\, (
b-\frac{1}{2}\,r_{{0}} ) ^{2} ( t-t_{{0}} ) ^{2}{v_{{0}}}^{2}
+42\, ( b+1/7\,r_{{0}} ) ( t-t_{{0}} ) br_{{0}}
v_{{0}}+9\,{r_{{0}}}^{2}{b}^{2}}+\nonumber \\ ( 3/4\,b+ ( t/4-1/4\,t_{{0
}} ) v_{{0}} ) {r_{{0}}}^{2}+{\frac {21\,r_{{0}}b}{4}
( v_{{0}} ( -{\frac {4\,t}{21}}+{\frac {4\,t_{{0}}}{21}}
) +b ) }
\nonumber \\
+{b}^{2}v_{{0}} ( t-t_{{0}} )
\Big \}
\quad ,
\end{eqnarray}
and
\begin{eqnarray}
DVE = \nonumber \\
\sqrt {4\, ( b-\frac{1}{2}\,r_{{0}} ) ^{2} ( t-t_{{0}}
) ^{2}{v_{{0}}}^{2}+42\, ( b+1/7\,r_{{0}} ) (
t-t_{{0}} ) br_{{0}}v_{{0}}+9\,{r_{{0}}}^{2}{b}^{2}} \times
\nonumber \\
( 2\,r
_{{0}}+5\,b )
\quad .
\end{eqnarray}
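The construction leading to Eq.~(\ref{rmotionexp}) can also be reproduced
numerically: Taylor-expand the left-hand side of Eq.~(\ref{eqn_nl_exp})
about $r=r_0$ and convert the coefficients into a degree-2/degree-1 rational
function. A sketch with illustrative (Tycho-like) parameter values:
\begin{verbatim}
# Build the [2/1] Pade approximant of the LHS of Eq. (eqn_nl_exp)
# in the variable x = r - r0, for given numerical parameters.
import sympy as sp
from scipy.interpolate import pade

r, x = sp.symbols("r x")
b, r0, v0 = 0.113, 1.203, 8.0e3/9.7968e5   # illustrative values

E = sp.exp((r0 - r)/b)
T = (18*E*b**4 + 12*E*b**3*r + 3*E*b**2*r**2 - r0**4 - 3*r0**3*b
     + r0**3*r - 9*r0**2*b**2 + 3*r0**2*b*r - 18*b**3*r0
     + 6*r0*b**2*r - 18*b**4 + 6*b**3*r)/(r0**3*v0)

series = sp.series(T.subs(r, r0 + x), x, 0, 4).removeO()
coeffs = [float(series.coeff(x, n)) for n in range(4)]
p, q = pade(coeffs, 1)   # numerator degree 2, denominator degree 1
print(p, q)              # t - t0 ~ p(x)/q(x)
\end{verbatim}
Solving the quadratic $p(x)=(t-t_0)\,q(x)$ for $x=r-r_0$ then recovers,
numerically, the closed form of Eq.~(\ref{rmotionexp}).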
\subsection{Motion with Gaussian profile}
Assuming a Gaussian profile as given by
Eq.~(\ref{prof_gaussian})
the velocity is
\begin{equation}
\frac{dr}{dt} = \frac{NG}{DG}
\quad ,
\label{vel_gaussian}
\end{equation}
where
\begin{eqnarray}
NG= -2\,{r_{{0}}}^{3}v_{{0}}
\end{eqnarray}
and
\begin{eqnarray}
DG=
-3\,{b}^{3}\sqrt {\pi}\sqrt {2}{\rm erf} \left(\frac{1}{2}\,{\frac {\sqrt {2}r
}{b}}\right)
\nonumber \\
+3\,{b}^{3}\sqrt {\pi}\sqrt {2}{\rm erf} \left(\frac{1}{2}\,{
\frac {\sqrt {2}r_{{0}}}{b}}\right)+6\,{{\rm e}^{-\frac{1}{2}\,{\frac {{r}^{2}
}{{b}^{2}}}}}r{b}^{2}
\nonumber \\
-6\,{{\rm e}^{-\frac{1}{2}\,{\frac {{r_{{0}}}^{2}}{{b}^{2
}}}}}r_{{0}}{b}^{2}-2\,{r_{{0}}}^{3}
\quad .
\end{eqnarray}
The appropriate non-linear equation is
\begin{eqnarray}
\frac{1}{2\,{r_{{0}}}^{3}v_{{0}}}
\bigg (
( -12\,{b}^{4}+6\,r_{{0}} ( r-r_{{0}} ) {b}^{2}
) {{\rm e}^{-\frac{1}{2}\,{\frac {{r_{{0}}}^{2}}{{b}^{2}}}}}+12\,{b}^{4
}{{\rm e}^{-\frac{1}{2}\,{\frac {{r}^{2}}{{b}^{2}}}}}
\nonumber \\
-3\,\sqrt {\pi}{\rm erf}
(\frac{1}{2}\,{\frac {\sqrt {2}r_{{0}}}{b}} )\sqrt {2}{b}^{3}r+3\,{b
}^{3}\sqrt {\pi}\sqrt {2}{\rm erf} (\frac{1}{2}\,{\frac {\sqrt {2}r}{b}}
)r
\nonumber \\
+2\,{r_{{0}}}^{3} ( r-r_{{0}} )
\bigg )
= t-t_0 \, .
\label{eqn_nl_gaussian}
\end{eqnarray}
The Pad\'e rational polynomial
approximation of degree 2 in the numerator
and degree 1 in the denominator
about $r=r_0$
for the left-hand side of the above equation gives
\begin{eqnarray}
\frac
{
1
}
{
2\,v_{{0}} \left( 2\,{b}^{2}r-5\,r_{{0}}{b}^{2}-r{r_{{0}}}^{2}+{r_{{0}
}}^{3} \right)
}
\Bigg (
- ( r-r_{{0}} ) \bigg ( 9\,{{\rm e}^{-\frac{1}{2}\,{\frac {{r_{{0}}
}^{2}}{{b}^{2}}}}}{b}^{2}r
\nonumber \\
-9\,{{\rm e}^{-\frac{1}{2}\,{\frac {{r_{{0}}}^{2}}{{
b}^{2}}}}}r_{{0}}{b}^{2}-4\,{b}^{2}r+10\,r_{{0}}{b}^{2}+2\,r{r_{{0}}}^
{2}-2\,{r_{{0}}}^{3} \bigg )
\Bigg )
= t-t_0 \, .
\end{eqnarray}
The resulting Pad\'e approximant for the radius $r_{2,1}$ is
\begin{eqnarray}
r_{2,1}=
\frac{1}
{
9\,{{\rm e}^{-\frac{1}{2}\,{\frac {{r_{{0}}}^{2}}{{b}^{2}}}}}{b}^{2}+2\,{r_{{0
}}}^{2}-4\,{b}^{2}
}
\Bigg \{
9\,{{\rm e}^{-\frac{1}{2}\,{\frac {{r_{{0}}}^{2}}{{b}^{2}}}}}r_{{0}}{b}^{2}-2
\,{b}^{2}tv_{{0}}
\nonumber \\
+2\,{b}^{2}{\it t_0}\,v_{{0}}+{r_{{0}}}^{2}tv_{{0}}-{r
_{{0}}}^{2}{\it t_0}\,v_{{0}}-7\,r_{{0}}{b}^{2}+2\,{r_{{0}}}^{3}
\nonumber \\
+ \bigg [
{54\,{b}^{4}r_{{0}}v_{{0}} \Big( t-{\it t_0} \Big) {{\rm e}^{-\frac{1}{2}\,{
\frac {{r_{{0}}}^{2}}{{b}^{2}}}}}
}
\nonumber \\
{
+4\, \Big( \Big( t-{\it t_0}
\Big) \Big( {b}^{2}-\frac{1}{2}\,{r_{{0}}}^{2} \Big) v_{{0}}-\frac{3}{2}\,r_{{0
}}{b}^{2} \Big) ^{2}}
\bigg ]^{\frac{1}{2}}
\Bigg \}
\label{rmotiongauss}
\, ,
\end{eqnarray}
and the velocity is
\begin{equation}
v_{2,1}=\frac{dr_{2,1}}{dt} =\frac{NVG}{DVG}
\quad ,
\label{vmotiongauss}
\end{equation}
\begin{eqnarray}
NVG =
- \Bigg ( -27\,{{\rm e}^{-\frac{1}{2}\,{\frac {{r_{{0}}}^{2}}{{b}^{2}}}}}r_{{0}
}{b}^{4}+ ( 2\,{b}^{2}-{r_{{0}}}^{2} ) ( v_{{0}}
( t-t_{{0}} ) {r_{{0}}}^{2}
\nonumber \\
+3\,r_{{0}}{b}^{2}
-2\,v_{{0}}{b
}^{2} ( t-t_{{0}} )
+ \bigg \{ {54\,{b}^{4}r_{{0}}v_{{0}}
( t-t_{{0}} ) {{\rm e}^{-\frac{1}{2}\,{\frac {{r_{{0}}}^{2}}{{b}^{
2}}}}}}
\nonumber \\
{
+4\, ( ( t-t_{{0}} ) ( {b}^{2}-\frac{1}{2}\,{r_{{0
}}}^{2} ) v_{{0}}-3/2\,r_{{0}}{b}^{2} ) ^{2}} )
\bigg \} ^{\frac{1}{2}} \Bigg ) v_{{0}}
\quad ,
\end{eqnarray}
and
\begin{eqnarray}
DVG =
\Bigg \{ {54\,{b}^{4}r_{{0}}v_{{0}} ( t-t_{{0}} ) {{\rm e}^{-1
/2\,{\frac {{r_{{0}}}^{2}}{{b}^{2}}}}}+4\, ( ( t-t_{{0}}
) ( {b}^{2}-\frac{1}{2}\,{r_{{0}}}^{2} ) v_{{0}}
}
\nonumber \\
{
-3/2\,r_{{0
}}{b}^{2} ) ^{2}}
\Bigg \}^{\frac{1}{2}}
( 9\,{{\rm e}^{-\frac{1}{2}\,{\frac {{r_{{0}}}^{2
}}{{b}^{2}}}}}{b}^{2}+2\,{r_{{0}}}^{2}-4\,{b}^{2} )
\quad .
\end{eqnarray}
\subsection{Motion with the Lane--Emden profile}
Assuming a Lane--Emden profile, $n=5$, as given by
Eq.~(\ref {profile_lane}),
the velocity is
\begin{equation}
\frac{dr}{dt} = \frac{NL}{DL}
\quad ,
\label{vel_lane}
\end{equation}
where
\begin{eqnarray}
NL = {r_{{0}}}^{3}v_{{0}} \left( 3\,{b}^{2}+{r}^{2} \right) ^{\frac{3}{2}} \left( 3
\,{b}^{2}+{r_{{0}}}^{2} \right) ^{\frac{3}{2}}
\end{eqnarray}
and
\begin{eqnarray}
DL = -3\, \left( 3\,{b}^{2}+{r}^{2} \right) ^{\frac{3}{2}}\sqrt {3}{r_{{0}}}^{3}{b}
^{3}+3\, \left( 3\,{b}^{2}+{r_{{0}}}^{2} \right) ^{\frac{3}{2}}\sqrt {3}{b}^{3
}{r}^{3}
\nonumber \\
+ \left( 3\,{b}^{2}+{r}^{2} \right) ^{\frac{3}{2}} \left( 3\,{b}^{2}+{
r_{{0}}}^{2} \right) ^{\frac{3}{2}}{r_{{0}}}^{3}
\, .
\end{eqnarray}
The corresponding non-linear equation is
\begin{eqnarray}
\frac {1}
{
{r_{{0}}}^{3}v_{{0}} \left( 3\,{b}^{2}+{r_{{0}}}^{2} \right) ^{\frac{3}{2}}
\sqrt {3\,{b}^{2}+{r}^{2}}
}
\times
\nonumber \\
\bigg (
54\, ( {b}^{2}+\frac{1}{3}\,{r_{{0}}}^{2} ) ( \frac{1}{18}\,{r_{{0}}}
^{3} ( r-r_{{0}} ) \sqrt {3\,{b}^{2}+{r}^{2}}
\nonumber \\
+{b}^{3}\sqrt
{3} ( {b}^{2}+\frac{1}{6}\,{r}^{2} ) ) \sqrt {3\,{b}^{2}+{r_
{{0}}}^{2}}-54\,\sqrt {3\,{b}^{2}+{r}^{2}}\sqrt {3}{b}^{3} ( {b}^
{4}
\nonumber \\
+\frac{1}{2}\,{b}^{2}{r_{{0}}}^{2}+\frac{1}{18}\,r{r_{{0}}}^{3} )
\bigg ) =t-t_0
\nonumber
\quad .
\end{eqnarray}
The Pad\'e rational polynomial
approximation of degree 2 in the numerator
and degree 1 in the denominator for the left-hand side of the above equation gives
\begin{eqnarray}
\frac
{
NP
}
{
2\, \left( 3\,{b}^{2}+{r_{{0}}}^{2} \right) ^{\frac{3}{2}}v_{{0}} \left( 2\,r{
b}^{2}-5\,{b}^{2}r_{{0}}-r{r_{{0}}}^{2} \right)
}
=t-t_0
\, ,
\end{eqnarray}
where
\begin{eqnarray}
NP =
-27\, ( r-r_{{0}} )
\Big ( \big ( -\frac{4}{9}\, ( r{b}^{2}-\frac{5}{2}\,{b}
^{2}r_{{0}}-\frac{1}{2}\,r{r_{{0}}}^{2} \big ) \times
\nonumber \\
\big ( {b}^{2}+\frac{1}{3}\,{r_{{0}}}
^{2} \big ) \sqrt {3\,{b}^{2}+{r_{{0}}}^{2}}+{b}^{5}\sqrt {3} \big (
r-r_{{0}} \big ) \Big )
\, .
\end{eqnarray}
The Pad\'e approximant for the radius is
\begin{eqnarray}
r_{2,1}=\frac{NR}{DR}
\label{rmotionlaneemden}
\end{eqnarray}
where
\begin{eqnarray}
NR= -18\, ( {b}^{2}+\frac{1}{3}\,{r_{{0}}}^{2} ) ^{2}{b}^{2}
( -\frac{1}{2}\,{r_{{0}}}^{3}-\frac{1}{2}\,v_{{0}} ( t-t_{{0}} ) {r_{{0}}}^{2}
\nonumber \\
+ \frac{7}{2} \,{b}^{2}r_{{0}}+{b}^{2}v_{{0}} ( t-t_{{0}} ) )
\sqrt {3\,{b}^{2}+{r_{{0}}}^{2}}+ ( 81\,{b}^{9}r_{{0}}+27\,{b}^{7
}{r_{{0}}}^{3} ) \sqrt {3}
\nonumber \\
+\sqrt {972}
\Bigg ( { ( {b}^{2}+\frac{1}{3}
\,{r_{{0}}}^{2} ) ^{4}{b}^{4} ( \frac{9}{2}\,\sqrt {3}r_{{0}}{b}^{5
}v_{{0}} ( t-t_{{0}} ) \sqrt {3\,{b}^{2}
+{r_{{0}}}^{2}}
}
\nonumber \\
{
+
\bigg ( -\frac{1}{2}\,{r_{{0}}}^{3}-\frac{1}{2}\,v_{{0}} ( t-t_{{0}} ) {r_{
{0}}}^{2}-\frac{3}{2}\,{b}^{2}r_{{0}}
}
\nonumber \\
{
+{b}^{2}v_{{0}} ( t-t_{{0}} )
) ^{2} ( {b}^{2}+\frac{1}{3}\,{r_{{0}}}^{2} ) \bigg ) } \Bigg )^{\frac{1}{2}}
\, ,
\end{eqnarray}
and
\begin{eqnarray}
DR=
{b}^{2} ( 3\,{b}^{2}+{r_{{0}}}^{2} ) \bigg ( 27\,{b}^{5}
\sqrt {3}-12\,{b}^{4}\sqrt {3\,{b}^{2}+{r_{{0}}}^{2}}
\nonumber \\
+2\,{b}^{2}{r_{{0
}}}^{2}\sqrt {3\,{b}^{2}+{r_{{0}}}^{2}}+2\,{r_{{0}}}^{4}\sqrt {3\,{b}^
{2}+{r_{{0}}}^{2}} \bigg )
\quad ,
\end{eqnarray}
and the velocity is
\begin{equation}
v_{2,1}=\frac{dr_{2,1}}{dt} =\frac{NVL}{DVL}
\quad ,
\label{vmotionlaneemden}
\end{equation}
where
\begin{eqnarray}
NVL =
-18\,\sqrt {3} ( 3\,{b}^{2}+{r_{{0}}}^{2} ) v_{{0}}
\bigg (
\bigg ( -243\, ( {b}^{2}+\frac{1}{3}\,{r_{{0}}}^{2} ) ^{2}{b}^{7}r_
{{0}}\sqrt {3}
\nonumber \\
+\sqrt {972}
{ \Bigg \{ ( {b}^{2}+\frac{1}{3}\,{r_{{0}}}^{2}
) ^{4}{b}^{4} ( 9/2\,\sqrt {3}r_{{0}}{b}^{5}v_{{0}}
( t-t_{{0}} ) \sqrt {3\,{b}^{2}+{r_{{0}}}^{2}}
}
\nonumber \\
{
+ ( {b}
^{2}+\frac{1}{3}\,{r_{{0}}}^{2} ) \bigg ( -\frac{1}{2}\,{r_{{0}}}^{3}-\frac{1}{2}\,v_{{0
}} ( t-t_{{0}} ) {r_{{0}}}^{2}-3/2\,{b}^{2}r_{{0}}
}
\nonumber \\
{
+{b}^{2}v
_{{0}} ( t-t_{{0}} ) ) ^{2} \bigg ) }
\Bigg \}^{\frac{1}{2}}
( 2\,{b}^
{2}-{r_{{0}}}^{2} ) \Bigg ) \sqrt {3\,{b}^{2}+{r_{{0}}}^{2}}
\nonumber \\
-
108\, ( {b}^{2}+\frac{1}{3}\,{r_{{0}}}^{2} ) ^{3}{b}^{2} ( -1/
2\,{r_{{0}}}^{3}-\frac{1}{2}\,v_{{0}} ( t-t_{{0}} ) {r_{{0}}}^{2}-3
/2\,{b}^{2}r_{{0}}
\nonumber \\
+{b}^{2}v_{{0}} ( t-t_{{0}} ) )
( {b}^{2}-\frac{1}{2}\,{r_{{0}}}^{2} )
\quad ,
\end{eqnarray}
and
\begin{eqnarray}
DVL =
18\,\sqrt {972}\sqrt {3}
\Bigg
\{
( {b}^{2}+\frac{1}{3}\,{r_{{0}}}^{2}
) ^{4}{b}^{4} ( 9/2\,\sqrt {3}r_{{0}}{b}^{5}v_{{0}}
\nonumber \\
( t-t_{{0}} ) \sqrt {3\,{b}^{2}+{r_{{0}}}^{2}}+ ( {b}
^{2}+\frac{1}{3}\,{r_{{0}}}^{2} ) ( -\frac{1}{2}\,{r_{{0}}}^{3}-\frac{1}{2}\,v_{{0
}} ( t-t_{{0}} ) {r_{{0}}}^{2}
\nonumber \\
-3/2\,{b}^{2}r_{{0}}+{b}^{2}v
_{{0}} ( t-t_{{0}} ) ) ^{2} )
\Bigg
\}
^{\frac{1}{2}}
(
( -12\,{b}^{4}+2\,{b}^{2}{r_{{0}}}^{2}+2\,{r_{{0}}}^{4} )
\sqrt {3\,{b}^{2}+{r_{{0}}}^{2}}
\nonumber \\
+27\,{b}^{5}\sqrt {3} )
\quad .
\end{eqnarray}
\section{Astrophysical Applications}
\label{secastro}
In the previous section, we derived three non-linear
equations of motion
and three Pad\'e approximated equations of motion.
We now check the reliability of the numerical and approximated
solutions on four SNRs: Tycho, see \cite{Williams2016},
Cas A, see \cite{Patnaude2009}, Cygnus loop, see \cite{Chiad2015},
and SN~1006, see \cite{Uchida2013}.
The three measurable astronomical parameters
are the time elapsed since the explosion in years, $t$,
the currently observed radius in pc, $r$,
and the present velocity of expansion in
km\,s$^{-1}$; see Table \ref{tablesnrs}.
\begin{table}[ht!]
\caption {
Observed astronomical parameters of SNRs
}
\label{tablesnrs}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Name & Age (yr) & Radius (pc) & Velocity (km\,s$^{-1}$) & References \\
\hline
Tycho & 442 & 3.7 & 5300 & Williams~et~al.~2016 \\
Cas~A & 328 & 2.5 & 4700 & Patnaude~and~Fesen~2009 \\
Cygnus~loop & 17000 & 24.25 & 250 & Chiad~et~al.~2015 \\
SN~1006 & 1000 & 10.19 & 3100 & Uchida~et~al.~2013 \\
\hline
\end{tabular}
\end{center}
\end{table}
The astrophysical units have not yet been specified:
pc for length and yr for time
are the units most commonly used by astronomers.
With these units, the initial velocity converts as
$v_0(\mathrm{km\,s^{-1}})= 9.7968 \times 10^{5}\, v_0(\mathrm{pc\,yr^{-1}})$.
The determination of the four unknown parameters, which are
$t_0$, $r_0$, $v_0$ and $b$,
can be obtained by equating the observed astronomical velocities
and radius with those obtained with the
Pad\'e rational polynomial, i.e.
\begin{eqnarray}
\label{eqnnl1}
r_{2,1}&= \mathrm{Radius\,(pc)},\\
\label{eqnnl2}
v_{2,1}&= \mathrm{Velocity\,(km\,s^{-1})}.
\end{eqnarray}
In order to reduce the unknown parameters from four to two, we
fix $v_0$ and $t_0$.
The two parameters $b$ and $r_0$ are found by solving the
two non-linear equations (\ref{eqnnl1})
and (\ref{eqnnl2}).
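For illustration, the two equations can be solved with a standard root
finder; the sketch below assumes hypothetical callables \texttt{r21} and
\texttt{v21} implementing Eqs.~(\ref{rmotionlaneemden}) and
(\ref{vmotionlaneemden}) (or their exponential and Gaussian analogues),
which are not reproduced here.
\begin{verbatim}
# Sketch of the parameter determination for one SNR (numbers for
# Tycho from the tables); r21 and v21 are hypothetical
# implementations of the Pade radius and velocity.
from scipy.optimize import fsolve

KM_S_PER_PC_YR = 9.7968e5               # unit conversion from the text
t0 = 0.1                                # yr, fixed by hand
v0 = 8000 / KM_S_PER_PC_YR              # pc/yr, fixed by hand
t_obs, r_obs = 442.0, 3.7               # yr, pc
v_obs = 5300 / KM_S_PER_PC_YR           # pc/yr

def residuals(params):
    b, r0 = params
    return [r21(b, r0, t_obs, t0, v0) - r_obs,
            v21(b, r0, t_obs, t0, v0) - v_obs]

b_fit, r0_fit = fsolve(residuals, x0=[0.1, 1.2])
\end{verbatim}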
The results for the three types of profiles here adopted
are reported in Tables
\ref{tablesnrsexp},
\ref{tablesnrsgauss}
and
\ref{tablesnrslaneemden}.
\begin{table}[ht!]
\caption {
Theoretical parameters of SNRs
for the Pad\'e approximated equation of motion
with an exponential profile.
}
\label{tablesnrsexp}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Name & $t_0$ (yr) & $r_0$ (pc) & $v_0$ (km\,s$^{-1}$) & $b$ (pc) & $\delta\,(\%)$ &
$\Delta\,v$ (km\,s$^{-1}$) \\
\hline
Tycho & 0.1 & 1.203 & 8000 & 0.113 & 5.893 & -1.35 \\
Cas~A & 1 & 0.819 & 8000 & 0.1 & 6.668 & -3.29 \\
Cygnus~loop & 10 & 12.27 & 3000 & 45.79 & 6.12 & -0.155 \\
SN~1006 & 1 & 5.49 & 3100 & 2.332 & 1.455 & -12.34 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[ht!]
\caption {
Theoretical parameters of SNRs
for the Pad\'e approximated equation of motion
with a Gaussian profile.
}
\label{tablesnrsgauss}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Name & $t_0$ (yr) & $r_0$ (pc) & $v_0$ (km\,s$^{-1}$) & $b$ (pc) & $\delta\,(\%)$ &
$\Delta\,v$ (km\,s$^{-1}$) \\
\hline
Tycho & 0.1 & 1.022 & 8000 & 0.561 & 8.517 & -10.469 \\
Cas~A & 1 & 0.741 & 7000 & 0.406 & 7.571 & -13.16 \\
Cygnus~loop & 10 & 11.92 & 3000 & 21.803 & 7.875 & -0.161 \\
SN~1006 & 1 & 5.049 & 10000 & 4.311 & 4.568 & -18.58 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[ht!]
\caption
{
Theoretical parameters of SNRs
for the Pad\'e approximated equation of motion
with a Lane--Emden profile.
}
\label{tablesnrslaneemden}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Name & $t_0$ (yr) & $r_0$ (pc) & $v_0$ (km\,s$^{-1}$) & $b$ (pc) & $\delta\,(\%)$ &
$\Delta\,v$ (km\,s$^{-1}$) \\
\hline
Tycho & 0.1 & 0.971 & 8000 & 0.502 & 3.27 & -14.83 \\
Cas~A & 1 & 0.635 & 8000 & 0.35 & 4.769 & -23.454 \\
Cygnus~loop & 10 & 11.91 & 3000 & 27.203 & 7.731 & -0.162 \\
SN~1006 & 1 & 5 & 10000 & 4.85 & 3.297 & -19.334 \\
\hline
\end{tabular}
\end{center}
\end{table}
The goodness of the approximation is evaluated
through the percentage error, $\delta$, which is
\begin{equation}
\delta = \frac{\big | r_{2,1} - r_E \big |}
{r_E} \times 100
\quad ,
\end{equation}
where $r_{2,1}$ is the
Pad\'e approximated radius and
$r_E$ is the exact solution, obtained by solving numerically
the non-linear equation of motion, e.g. Eq.~(\ref{eqn_nl_exp})
in the exponential case.
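In code, this diagnostic is a one-liner; the sketch below takes
illustrative arrays rather than actual solver output.
\begin{verbatim}
import numpy as np

def percent_error(r_pade, r_exact):
    # delta = |r_{2,1} - r_E| / r_E * 100, elementwise on a grid
    r_pade, r_exact = np.asarray(r_pade), np.asarray(r_exact)
    return np.abs(r_pade - r_exact) / r_exact * 100.0
\end{verbatim}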
The numerical values of $\delta$ are reported
in column 6 of
Tables
\ref{tablesnrsexp},
\ref{tablesnrsgauss}
and
\ref{tablesnrslaneemden}.
Another useful astrophysical quantity is the predicted decrease in velocity
over the next ten years, evaluated from the
Pad\'e approximated velocity, $v_{2,1}$,
see column 7 of
Tables
\ref{tablesnrsexp},
\ref{tablesnrsgauss}
and
\ref{tablesnrslaneemden}.
\section{Conclusions}
The expansion of an SNR can be modeled by the conservation
of momentum in the presence of a decreasing density:
here we analysed an exponential, a Gaussian
and a Lane--Emden profile.
The three equations of motion have complicated left-hand sides
but simple right-hand sides, viz., $(t-t_0)$.
The application of the
Pad\'e approximant to the left-hand side of the complicated equation of motion
allows finding three approximate laws of motion,
see Eqs~(\ref{rmotionexp}, \ref{rmotiongauss}, \ref{rmotionlaneemden}),
and three approximate velocities,
see Eqs~(\ref{vmotionexp}, \ref{vmotiongauss}, \ref{vmotionlaneemden}).
The astrophysical test is performed
on four SNRs assumed to be spherical,
and the four sets of parameters are
reported in Tables
\ref{tablesnrsexp},
\ref{tablesnrsgauss}
and
\ref{tablesnrslaneemden}.
The percentage error of the Pad\'e approximated
solutions for the radius is always less than
$10\%$ with respect to the exact numerical solution,
see column 6 of
the three last tables.
In order to produce an astrophysical prediction,
the theoretical decrease in velocity for the four SNRs here
analysed is evaluated,
see column 7 of
Tables
\ref{tablesnrsexp},
\ref{tablesnrsgauss}
and
\ref{tablesnrslaneemden}.
\noindent
{\bf REFERENCES}
\providecommand{\newblock}{}
\section{Introduction}
The Laplace-Beltrami operator is a fundamental and widely studied mathematical tool carrying a lot of intrinsic topological and geometric information about the Riemannian manifold on which it is defined.
Its various discretizations, through graph Laplacians, have inspired many applications in data analysis and machine learning and led to popular tools such as
Laplacian EigenMaps~\cite{belkin2003laplacian} for dimensionality reduction, spectral clustering~\cite{von2007tutorial}, or semi-supervised learning~\cite{belkin2004semi}, just to name a few.
During the last fifteen years, many efforts, leading to a vast literature, have been made to understand the convergence of graph Laplacian operators built on top of (random) finite samples to Laplace-Beltrami operators. For example, pointwise convergence results have been obtained in \cite{belkin2005towards} (see also \cite{belkin2008towards}) and \cite{hein2007graph}, and a (uniform) functional central limit theorem has been established in \cite{gine2006empirical}. Spectral convergence results have also been proved by \cite{belkin2007convergence} and \cite{von2008consistency}. More recently, \cite{ting2011analysis} analyzed the asymptotics of a large family of graph Laplacian operators by taking the diffusion process approach previously proposed in \cite{nadler2006diffusion}.
Graph Laplacians depend on scale or bandwidth parameters whose choice is often left to the user. Although many convergence results for various metrics have been established, little is known about how to rigorously and efficiently tune these parameters in practice. In this paper we address this problem in the case of the unnormalized graph Laplacian. More precisely, given a Riemannian manifold $M$ of known dimension $d$ and a function $f : M \to \mathbb R$, we consider the standard unnormalized graph Laplacian operator defined by
\[
\hat \Delta_h f(y) = \frac{1}{nh^{d+2}} \sum_i K\left( \frac{y-X_i}{h}\right) \left[ f(X_i) - f(y) \right], \qquad y \in M,\]
where $h$ is a bandwidth, $X_1,\ldots,X_n$ is a finite point cloud sampled on $M$ on which the values of $f$ can be computed, and $K$ is the Gaussian kernel: for $y \in \mathbb R^m$
\begin{equation}\label{gauss_k}
K(y) = \frac{1}{(4\pi)^{d/2}} e^{-\|y\|_m^2/4},
\end{equation}
where $\|y\|_m$ is the Euclidean norm in the ambient space $\mathbb R^m$.
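For concreteness, a direct NumPy transcription of this estimator (our own
sketch, not code taken from the references) reads:
\begin{verbatim}
import numpy as np

def hat_delta_h(X, f_vals, y, f_y, h, d):
    # Evaluate hat-Delta_h f at a point y.
    # X: (n, m) array of points on M; f_vals: f(X_i); f_y: f(y);
    # h: bandwidth; d: intrinsic dimension of M.
    n = X.shape[0]
    sq_dists = np.sum((X - y) ** 2, axis=1)    # ||y - X_i||_m^2
    K = (4.0 * np.pi) ** (-d / 2) \
        * np.exp(-sq_dists / (4.0 * h ** 2))
    return np.sum(K * (f_vals - f_y)) / (n * h ** (d + 2))
\end{verbatim}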
In this case, previous results (see for instance \cite{gine2006empirical}) typically say that the bandwidth parameter $h$ in $\hat \Delta_h$ should be taken of the order of $n^{-\frac{1}{d+2+\alpha}}$ for some $\alpha >0$, but in practice, for a given point cloud, these asymptotic results are not sufficient to choose $h$ efficiently.
In the context of neighbor graphs, \cite{ting2011analysis} proposes self-tuning graphs by choosing $h$ locally in terms of the distances to the $k$-nearest neighbors, but note that $k$ still needs to be chosen and, as far as we know, there is no guarantee for such a method to be rate-optimal. More recently a data-driven method for spectral clustering has been proposed in \cite{rieser2015topological}. Cross validation~\cite{arlot2010survey} is the standard approach for tuning parameters in statistics and machine learning. Nevertheless, the problem of choosing $h$ in $\hat \Delta_h $ is not easy to rewrite as a cross validation problem, in particular because there is no obvious contrast corresponding to the problem (see \cite{arlot2010survey}).
The so-called Lepski's method is another popular approach for selecting the smoothing parameter of an estimator. The method was introduced by Lepski \cite{lepskii1992asymptotically,lepskii1993asymptotically,lepski1992problems} for kernel estimators and local polynomials for various risks, and several improvements of the method have since been proposed, see~\cite{lepski1997optimal,goldenshluger2009structural,goldenshluger2008universal}. In this paper we adapt Lepski's method for selecting $h$ in the graph Laplacian estimator $\hat \Delta_h $. Our method is supported by mathematical guarantees: first we obtain an oracle inequality - see Theorem \ref{prop_oineq} - and second we obtain the correct rate of convergence - see Theorem \ref{cor_c} - already proved in the asymptotical studies of \cite{belkin2005towards} and \cite{gine2006empirical} for non data-driven choices of the bandwidth. Our approach follows the ideas recently proposed in \cite{lacour2015minimal}, but for the specific problem of Laplacian operators on smooth manifolds. In this first work on the data-driven estimation of the Laplace-Beltrami operator, we focus as in \cite{belkin2005towards} and \cite{gine2006empirical} on the pointwise estimation problem: we consider a smooth function $f$ on $M$ and the aim is to estimate $\Delta_{\mathrm P} f$ with respect to the $L^2$-norm $\| \cdot\|_{2,M}$ on $M \subset \mathbb R^m$. The data-driven method presented here may be adapted and generalized to other types of risks (uniform norms on functional families and convergence of the spectrum) and other types of graph Laplacian operators; this will be the subject of future work.
The paper is organized as follows: Lepski's method is introduced in Section \ref{sec:Lep}. The main results are stated in Section \ref{res} and a sketch of their proof is given in Section \ref{pf_mthm} (the complete proofs are given in the supplementary material). A numerical illustration and a discussion about the proposed method are given in Sections \ref{sec:exp} and \ref{sec:disc} respectively.
\section{Lepski's procedure for estimating the Laplace-Beltrami operator}
\label{sec:Lep}
All the Riemannian manifolds considered in the paper are smooth compact $d$-dimensional submanifolds (without boundary) of $\mathbb R^m$ endowed with the Riemannian metric induced by the Euclidean structure of $\mathbb R^m$. Recall that, given a compact $d$-dimensional smooth Riemannian manifold $M$ with volume measure $\mu$, its Laplace-Beltrami operator is the linear operator $\Delta$ defined on the space of smooth functions on $M$ as $\Delta(f) = - \operatorname{\mathrm{div}}( \nabla f)$ where $\nabla f$ is the gradient vector field and $\operatorname{\mathrm{div}}$ the divergence operator. In other words, using Stokes' formula, $\Delta$ is the unique linear operator satisfying
$$\int_M \| \nabla f \|^2 d\mu = \int_M \Delta(f) f d\mu.$$
Replacing the volume measure $\mu$ by a distribution $\mathrm P$ which is absolutely continuous with respect to $\mu$, the weighted Laplace-Beltrami operator $\Delta_{\mathrm P}$ is defined as
\begin{equation}\label{eqDL}
\Delta_{\mathrm P}f = \Delta f + \frac{1}{p} \langle \nabla p, \nabla f \rangle \, ,
\end{equation}
where $p$ is the density of $\mathrm P$ with respect to $\mu$. The reader may refer to classical textbooks such as, e.g., \cite{rosenberg1997laplacian} or \cite{grigoryan2009heat} for a general and detailed introduction to Laplace operators on manifolds.
In the following, we assume that we are given $n$ points $X_1,\dots,X_n$ sampled on $M$ according to the distribution $\mathrm P$. Given a smooth function $f$ on $M$, the aim is to estimate $\Delta_{\mathrm P} f$ by selecting an estimator in a given finite family of graph Laplacians $ (\hat \Delta_{h} f)_{h \in \mathcal H}$, where $\mathcal H$ is a finite family of bandwidth parameters.
Lepski's procedure is generally presented as a method for selecting bandwidth in an adaptive way.
More generally, this method can be seen as an estimator selection procedure.
\subsection{Lepski's procedure}
We first briefly explain the ideas behind Lepski's method. Consider a target quantity $s$, a collection of estimators $(\hat s_h)_{h\in\mathcal H}$ and a loss function $\ell(\cdot,\cdot)$. A standard objective when selecting $\hat s_h$ is to minimize the risk $\mathbb E \ell( s,\hat s_h)$ over the family of estimators. In most settings, the risk of an estimator can be decomposed into a bias part and a variance part. Of course, neither the risk, the bias nor the variance of an estimator is known in practice. However, in many cases the variance term can be controlled quite precisely.
Lepski's method requires that the variance of each estimator $\hat s_h$ can be tightly upper bounded by a quantity $v(h)$. In most cases, the bias can be written as $\ell (s,\bar s_h)$ where $\bar s_h$ corresponds to some (deterministic) averaged version of $\hat s_h$. It thus seems natural to estimate $\ell (s,\bar s_h)$ by $\ell (\hat s_{h'},\hat s_h)$ for some $h'$ smaller than $h$. The latter quantity incorporates some randomness while the bias does not. The idea is to remove the ``random part'' of the estimation by considering $\left[\ell (\hat s_{h'},\hat s_h) - v(h) - v(h')\right]_+$, where $[ \: ]_+$ denotes the positive part. The bias term is estimated by considering all pairs of estimators $(\hat s_{h},\hat s_{h'})$ through the quantity $\sup_{h'\leq h}\left[\ell (\hat s_{h'},\hat s_h) - v(h) - v(h')\right]_+$. Finally, the estimator minimizing the sum of the estimated bias and variance is selected, see~\cref{h_lepski} below.
In our setting, the control of the variance of the graph Laplacian estimators $\hat \Delta_h$ is not tight enough to directly apply the above described method. To overcome this issue, we use a more flexible version of Lepski's method that involves some multiplicative coefficients $a$ and $b$ introduced in the variance and bias terms.
More precisely, let $ V(h) = V_f(h)$ be an upper bound for $ \mathbb E[ \| (\mathbb E[\hat \Delta_h] - \hat \Delta_h)f \|_{2,M}^2]$. The bandwidth $\hat h$ selected by our Lepski-type procedure is defined by
\begin{equation}\label{h_lepski}
\hat h = \hat h_f= \mathrm{arg}\min_{h\in \mathcal H} \left\{ B(h) + b V(h) \right\}
\end{equation}
where
\begin{equation}\label{bh}
B(h) = B_f(h) = \max_{h'\leq h, \, h' \in \mathcal H} \left[ \|( \hat \Delta_{h'} - \hat \Delta_h )f\|_{2,M}^2 - a V(h') \right]_+
\end{equation}
with $0<a\leq b$. The calibration of the constants $a$ and $b$ in practice is beyond the scope of this paper, but we suggest a heuristic procedure inspired by \cite{lacour2015minimal} in \cref{sec:exp}.
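In pseudocode, the selection rule \eqref{h_lepski}--\eqref{bh} can be
sketched as follows, where \texttt{pred[h]} holds the values of
$\hat\Delta_h f$ on a fixed evaluation grid, \texttt{sq\_norm} approximates
$\|\cdot\|_{2,M}^2$ and \texttt{V} implements the variance bound; all three
are assumptions of the sketch.
\begin{verbatim}
def lepski_select(H, pred, sq_norm, V, a, b):
    # hat-h = argmin_h { B(h) + b V(h) } with
    # B(h) = max_{h' <= h} [ ||(D_h' - D_h) f||^2 - a V(h') ]_+
    H = sorted(H)
    crit = {}
    for h in H:
        B_h = max(max(sq_norm(pred[hp] - pred[h]) - a * V(hp), 0.0)
                  for hp in H if hp <= h)
        crit[h] = B_h + b * V(h)
    return min(crit, key=crit.get)
\end{verbatim}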
\subsection{Variance of the graph Laplacian for smooth functions}\label{ref_sec}
In order to control the variance term, we consider in this paper the set $\mathcal F$ of smooth functions $f: M \to \mathbb R$ that are uniformly bounded up to the third order. For some constant $C_{\mathcal F} >0$, let
\begin{equation}\label{def_Cf}
\mathcal F = \left\{ f \in \mathcal C^3(M,\mathbb R) \, , \, \| f^{(k)}\|_{\infty} \leq C_{\mathcal F} , \, {k=0,\dots,3} \right\} .
\end{equation}
Here, by $\| f^{(k)} \|_\infty \leq C_{\cal F}$ we mean that in any normal coordinate system all the partial derivatives of order $k$ of $f$ are bounded by $C_{\cal F}$.
We introduce some notation before giving the variance term for $f \in \mathcal F$. Define
\begin{align}
\label{def_Da}
D_{\alpha} & = \frac{1}{(4\pi)^{d}} \int_{\mathbb R^d} \left( \frac{C \| u\|_d^{\alpha+2}}{2} + C_1 \|u\|_d^{\alpha} \right)\ e^{-\|u\|_d^2/4} \, \mathrm du\\
\label{def_tDa}
\tilde D_{\alpha} & = \frac{1}{(4\pi)^{d/2}} \int_{\mathbb R^d} \left( \frac{C \| u\|_d^{\alpha+2}}{4} + C_1 \|u\|_d^{\alpha} \right)\ e^{-\|u\|_d^2/8} \, \mathrm du
\end{align}
where $\|\cdot\|_d$ is the Euclidean norm in $\mathbb R^d$ and where $C$ and $C_1$ are geometric constants that only depend on the metric structure of $M$ (see Lemma~\ref{lem_2.2} in the appendices). We also introduce the $d$-dimensional Gaussian kernel on $\mathbb R^d$:
\[
K_d(u) = \frac{1}{(4\pi)^{d/2}} e^{-\|u\|_d^2/4}, \qquad u \in \mathbb R^d
\]
and we denote by $\| \cdot\|_{p,d}$ the $L^p$-norm on $\mathbb R^d$.
The next proposition provides an explicit bound $V(h)$ on the variance term.
Let
\begin{equation}\label{od}
\omega_d = 3 \times 2^{d/2-1}
\end{equation}
and
\begin{equation}\label{ad}
\alpha_d(h) = h^2 \left( D_4+\frac{3}{2} \omega_d \ \| K_d\|_{2,d}^2 + \frac{2\mu(M)}{(4\pi)^{d}} \right)
+h^4 \frac{D_6}{4}.
\end{equation}
We first need to control the variance of $\hat \Delta_h f $ over $\mathcal F$. This is possible by considering Taylor--Young expansions of $f$ in normal coordinates.
For that purpose, for technical reasons following from Lemma~\ref{lem_2.2}, we constrain the parameter $h$ to satisfy the following inequality
\begin{equation}
\label{CondhmaxRayInj}
2 \sqrt{d+4} h \log(h^{-1})^{1/2} \leq \rho(M) ,
\end{equation}
where $\rho(M)$ is a geometric constant that only depends on the reach and the injectivity radius of $M$.
\begin{prop}\label{eq_v}
Given $h\in \mathcal H$ and for any $f \in \mathcal F$, we have
\[
V(h) := \frac{2 C_{\mathcal F}^2}{n h^{d+2}} \Big[ \omega_d \| K_d\|_{2,d}^2 + \alpha_d(h)\Big]
\geq \mathbb E[ \| (\mathbb E[\hat \Delta_h] - \hat \Delta_h)f \|_{2,M}^2].
\]
\end{prop}
\noindent
For the proof we refer to~\cref{proof1}.\\[1mm]
\section{Results}\label{res}
We now give the main result of the paper: an oracle inequality for the estimator $\hat \Delta_{\hat h}$, or in other words, a bound on the risk that shows that the performance of the estimator is almost
as good as it would be if we knew the risk of each estimator.
In particular it performs an (almost) optimal trade-off between the variance term $V(h)$ and
the approximation term
\begin{multline*}
D(h) = D_f(h) = \max\left\{ \| (p\Delta_{\mathrm P} - \mathbb E [\hat \Delta_{ h}])f \|_{2,M}, \, \sup_{h'\leq h} \| ( \mathbb E [\hat\Delta_{ h'}]- \mathbb E [\hat\Delta_h])f \|_{2,M} \right\}\\
\leq 2 \sup_{h'\leq h} \| (p\Delta_{\mathrm P} - \mathbb E [\hat \Delta_{ h'}])f \|_{2,M}.
\end{multline*}
\begin{thm}
\label{prop_oineq}
According to the notation introduced in the previous section,
let $\epsilon = \sqrt a/2 -1$ and
\[
\delta(h) = \sum_{h'\leq h} \max\left\{ \exp\left( -\frac{\min\{\epsilon^2,\epsilon\} \sqrt n }{24} \right), \, \exp\left( -\frac{\epsilon^2}{3}\gamma_d(h') \right) \right\}
\]
and
\[
\gamma_d(h) = \frac{1}{ h^{d} \|p\|_\infty } \left[ \frac{ \omega_d \ \|K_d\|_{2,d}^2 + \alpha_d(h)}{\left(\omega_d\ \| K_d\|_{1,d} + \beta_d(h)\right)^2} \right]
\]
where $\alpha_d $ is defined by \eqref{ad} and where
\begin{align}
\label{bd}
\beta_d(h) & = h \left(\omega_d \ \| K_d\|_{1,d} +\frac{2 \mu(M)}{(4\pi)^{d/2}} \right) +h^2\tilde D_3 + \frac{h^3 \tilde D_4}{2}.
\end{align}
Given $f \in \mathcal C^2(M,\mathbb R)$, with probability at least
$1-2\sum_{h \in \mathcal H}\delta(h)$,
\[
\| (p\Delta_{\mathrm P} - \hat \Delta_{\hat h}) f\|_{2,M} \leq \inf_{h \in \mathcal H} \left\{3 D(h) + (1+\sqrt2) \sqrt{b V(h)}\right\}.
\]
\end{thm}
\noindent
Broadly speaking, Theorem~\ref{prop_oineq} says that there exists an event of large probability for which the estimator selected by Lepski's method is almost as good as the best estimator in the collection. Note that the size of the bandwidth family $\mathcal H$ has an impact on the probability term $1-2\sum_{h \in \mathcal H}\delta(h)$. If $\mathcal H$ is not too large, an oracle inequality for the risk of $\hat \Delta_{\hat h} f$ can be easily deduced from the later result. Henceforth we assume that $f \in \mathcal F$. We first give a control on the approximation term $D(h)$.
\begin{prop}\label{prop_Dh}
Assume that the density $p$ is $\mathcal C^2$.
It holds that
\[
D(h) \leq \gamma \ C_{\mathcal F} h
\]
where $C_{\mathcal F}$ is defined in~\cref{def_Cf} and $\gamma>0$ is a constant depending on $M$, $\| p\|_{\infty}$, $\| p'\|_{\infty}$ and $\| p''\|_{\infty}$, where $\| p^{(k)} \|_\infty$ denotes the supremum of the absolute value of the partial derivatives of $p$ in any normal coordinate system.
\end{prop}
We consider the following grid of bandwidths:
\[
\mathcal H = \left\{ e^{-k}\ , \ \lceil \log\log(n)\rceil \leq k \leq \lfloor \log(n) \rfloor \right\}.
\]
Note that this choice ensures that Condition~\eqref{CondhmaxRayInj} is always satisfied for $n$ large enough.
The previous results lead to the pointwise rate of convergence of the graph Laplacian selected by Lepski's method:
\begin{thm}\label{cor_c}
Assume that the density $p$ is $\mathcal C^2$. For any $f \in \mathcal F$, we have
\begin{equation}
\label{cor_a1}
\mathbb E \left[ \| (p\Delta_{\mathrm P} - \hat \Delta_{\hat h}) f\|_{2,M} \right] \lesssim n^{-\frac{1}{d+4}}.
\end{equation}
\end{thm}
\section{Sketch of the proof of~\cref{prop_oineq}}\label{pf_mthm}
We observe that the following inequality holds
\begin{equation}\label{prop_base}
\|(p\Delta_{\mathrm P} - \hat \Delta_{\hat h})f \|_{2,M} \leq
D(h) + \|(\mathbb E[\hat \Delta_h] - \hat \Delta_h)f\|_{2,M} + \sqrt{2\left( B(h) + b V(h) \right)} .
\end{equation}
Indeed, for $h \in \mathcal H$,
\begin{align*}
\| (p\Delta_{\mathrm P} - \hat \Delta_{\hat h})f \|_{2,M}
& \leq \| (p\Delta_{\mathrm P} - \mathbb E[\hat \Delta_h] )f \|_{2,M} + \| (\mathbb E[\hat \Delta_h] - \hat \Delta_{ h})f \|_{2,M}
+ \| (\hat \Delta_h - \hat \Delta_{\hat h} )f \|_{2,M} \\
& \leq D(h) + \| (\mathbb E[\hat \Delta_h] - \hat \Delta_{ h} )f\|_{2,M}
+ \| (\hat \Delta_h - \hat \Delta_{\hat h} )f \|_{2,M} .
\end{align*}
\noindent
By definition of $B(h)$, for any $h'\leq h$,
\[
\| (\hat \Delta_{h'} - \hat \Delta_{h})f \|_{2,M}^2 \leq B(h) + a V(h')
\leq B( \max\{ h, h'\}) + a V( \min\{h,h'\}),
\]
so that,
according to the definition of $\hat h$ in~\cref{h_lepski} and recalling that $a\leq b$,
\[
\| (\hat \Delta_{\hat h} - \hat \Delta_{h})f \|_{2,M}^2 \leq 2 \left[ B(h) +a V(h) \right] \leq 2 \left[ B(h) + b V(h) \right]
\]
which proves~\cref{prop_base}.
\vskip2mm
\noindent
We are now going to bound the terms that appear in~\cref{prop_base}.
The bound for $D(h)$ is already given in~\cref{prop_Dh}, so that in the following we focus on
$B(h)$ and
$\|(\mathbb E[\hat \Delta_h] - \hat \Delta_h)f\|_{2,M}$.
More precisely the bounds we present in the next two propositions are
based on the following lemma from \cite{lacour2015minimal}.
\begin{lem}\label{lemma1}
Let $X_1, \dots, X_n$ be an i.i.d. sequence of variables.
Let $\widetilde{\mathcal S}$ be a countable set of functions and let $\eta(s) = \frac{1}{n} \sum_i \left[ g_s(X_i) - \mathbb E[g_s(X_i)] \right]$ for any $s \in \widetilde{\mathcal S}$.
Assume that there exist constants $ \theta$ and $v_g$ such that for any $s\in \widetilde{\mathcal S}$
\[
\| g_s\|_{\infty} \leq \theta \quad \text{and} \quad \mathrm{Var}[ g_s(X)]\leq v_g.
\]
Denote $ H = \mathbb E[ \sup_{s \in \widetilde{\mathcal S}} \eta(s) ]$. Then for any $\epsilon>0$ and any $H'\geq H$
\[
\mathbb P\left[ \sup_{s \in \widetilde{\mathcal S}} \eta(s) \geq (1+\epsilon)H' \right]
\leq \max \left\{ \exp\left( -\frac{\epsilon^2 n H'^2}{6v_g} \right), \, \exp\left( -\frac{\min\{\epsilon^2,\epsilon\} n H'}{24\ \theta} \right) \right\}.
\]
\end{lem}
\vskip2mm
\begin{prop}\label{main_prop}
Let $\epsilon = \frac{\sqrt a}{2}-1$.
Given $h \in \mathcal H$, define
\[
\delta_1(h) = \sum_{h'\leq h} \max\left\{ \exp\left( -\frac{\min\{\epsilon^2,\epsilon\} \sqrt n }{24} \right), \, \exp\left( -\frac{\epsilon^2}{3}\gamma_d(h') \right) \right\}.
\]
With probability at least $1-\delta_1(h)$
\[
B(h) \leq 2 D(h)^2.
\]
\end{prop}
\begin{prop}\label{main_p2}
Let $\tilde \epsilon = \sqrt a -1$.
Given $h \in \mathcal H$, define
\[
\delta_2(h) = \max\left\{ \exp\left( -\frac{\min\{\tilde \epsilon^2,\tilde \epsilon\} \sqrt n }{24} \right), \, \exp\left( -\frac{\tilde \epsilon^2}{3}\gamma_d(h) \right) \right\}.
\]
With probability at least $1-\delta_2(h) $
\[
\| (\mathbb E[\hat \Delta_h] - \hat \Delta_{ h})f \|_{2,M} \leq \sqrt{a V(h)}.
\]
\end{prop}
\noindent
Combining the above propositions with~\cref{prop_base}, we get that,
for any $h\in \mathcal H$, with probability at least $1-(\delta_1(h) + \delta_2(h) )$,
\begin{align*}
\| (p\Delta_{\mathrm P} - \hat \Delta_{\hat h}) f\|_{2,M}
& \leq D(h) + \sqrt{a V(h)} + \sqrt{4 D(h)^2 + 2b V(h)}\\
& \leq 3 D(h) + (1+\sqrt2) \sqrt{b V(h)}
\end{align*}
where we have used the fact that $a\leq b$.
Taking a union bound on $h \in \mathcal H$ we conclude the proof.
\section{Numerical illustration} \label{sec:exp}
In this section we illustrate the results of the previous section on a simple example. In~\cref{subsec:practical}, we describe a practical procedure when the data set $\mathbb{X}$ is sampled according to the uniform measure on $M$. A numerical illustration is given in Section~\ref{subsec:sphere} when $M$ is the unit $2$-dimensional sphere in $\mathbb{R}^3$.
\subsection{Practical application of Lepski's method} \label{subsec:practical}
Lepski's method presented in Section~\ref{sec:Lep} cannot be directly applied in practice for two reasons. First, we cannot compute the $L^2$-norm $\| \, \|_{2,M}$ on $M$, since the manifold $M$ is unknown. Second, the variance terms involved in Lepski's method are not completely explicit.
Regarding the first issue, we can approximate $\| \, \|_{2,M}$ by splitting the data into two samples: an estimation sample $\mathbb X_1$ for computing the estimators and a validation sample $\mathbb X_2$ for evaluating this norm. More precisely, given two estimators $ \hat \Delta_h f $ and $ \hat \Delta_{h'} f $ computed using $\mathbb X_1$, the quantity $ \| (\hat \Delta_h - \hat \Delta_{h'}) f \|_{2,M}^2 / \mu(M) $ is approximated by the averaged sum $\frac 1{n_2} \sum_{x \in \mathbb X_2} | \hat \Delta_h f(x) - \hat \Delta_{h'} f (x) |^2$, where $n_2$ is the number of points in $\mathbb X_2$. We use these approximations to evaluate the bias terms $B(h)$ defined by \eqref{bh}.
The second issue comes from the fact that the variance terms involved in Lepski's method depend on the metric properties of the manifold and on the sampling density, which are both unknown. These variance terms are thus only known up to a multiplicative constant. This situation contrasts with more standard frameworks for which a tight and explicit control on the variance terms can be proposed, as in \citep{lepskii1992asymptotically,lepskii1993asymptotically,lepski1992problems}. To address this second issue, we follow the calibration strategy recently proposed in \cite{lacour2015minimal} (see also \cite{lacour2016estimator}). In practice we remove all the multiplicative constants from $V(h)$: all these constants are passed into the terms $a$ and $b$. This means that we rewrite Lepski's method as follows:
\begin{equation*}
\hat h(a,b) = \mathrm{arg}\min_{h\in \mathcal H} \left\{ B(h) + b \frac{1}{n h ^4} \right\}
\end{equation*}
where
\[
B(h) = \max_{h'\leq h, \, h' \in \mathcal H} \left[ \|( \hat \Delta_{h'} - \hat \Delta_h )f\|_{2,M}^2 - a \frac{1}{n {h'} ^4} \right]_+ .
\]
We choose $a$ and $b$ according to the following heuristic:
\begin{enumerate}
\item Take $b=a$ and consider the sequence of selected models: $\hat h(a,a)$,
\item Starting from large values of $a$, decrease $a$ and find the location $a_0$ of the main {\it bandwidth jump} in the step function $a \mapsto \hat h(a,a)$,
\item Select the model $\hat h(a_0,2a_0)$.
\end{enumerate}
The justification of this calibration method is currently the subject of mathematical studies \citep{lacour2015minimal}. Note that a similar strategy, called ``slope heuristic'', has been proposed for calibrating $\ell_0$ penalties in various settings and is supported by strong mathematical results; see for instance \cite{birge2007minimal,arlot2009data,baudry2012slope}.
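A possible implementation of this heuristic (a sketch under the simplifying
conventions of this subsection: variance proxy $1/(nh^4)$, and a dictionary
\texttt{sqdiff} holding the empirical approximations of
$\|(\hat\Delta_{h'}-\hat\Delta_h)f\|_{2,M}^2$ computed on the validation
sample) is given below.
\begin{verbatim}
import numpy as np

def h_hat(a, b, H, sqdiff, n):
    H = sorted(H)
    def B(h):   # estimated bias with threshold a / (n h'^4)
        return max(max(sqdiff[hp, h] - a / (n * hp ** 4), 0.0)
                   for hp in H if hp <= h)
    return min(H, key=lambda h: B(h) + b / (n * h ** 4))

def calibrate(H, sqdiff, n, a_grid):
    a_grid = np.sort(np.asarray(a_grid))[::-1]   # large a first
    path = [h_hat(a, a, H, sqdiff, n) for a in a_grid]
    jumps = np.abs(np.diff(path))                # bandwidth jumps
    a0 = a_grid[int(np.argmax(jumps)) + 1]       # after the main jump
    return h_hat(a0, 2 * a0, H, sqdiff, n)
\end{verbatim}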
\subsection{Illustration on the sphere} \label{subsec:sphere}
In this section we illustrate the complete method on a simple example with data points generated uniformly on the sphere $\mathbb S^2$ in $\mathbb R^3$. In this case, the weighted Laplace-Beltrami operator is equal to the (non-weighted) Laplace-Beltrami operator on the sphere.
We consider the function $f(x,y,z) = (x^2 + y ^2 + z ) \sin x \cos x $.
The restriction of this function to the sphere has the following representation in spherical coordinates:
$$ \tilde f(\theta,\phi) = (\sin ^2 \phi + \cos \phi ) \sin( \sin \phi \cos \theta) \cos( \sin \phi \cos \theta) .$$
It is well known that the Laplace-Beltrami operator on the sphere satisfies (see Section~3 in \cite{grigoryan2009heat}):
$$
\Delta_{\mathcal S^2} u= \frac 1 {\sin^2 \phi} \frac{\partial^2 u }{\partial \theta^2 } + \frac{1}{\sin \phi } \frac{\partial }{\partial \phi } \left( \sin \phi \frac{\partial u }{\partial \phi} \right)
$$
for any smooth polar function $u$. This allows us to derive an analytic expression of $\Delta_{\mathcal S^2} \tilde f$.
We sample $n_1= 10^6$ points on the sphere for computing the graph Laplacians and we use $n_2=10^3$ points for approximating the norms $ \| (\hat \Delta_h - \hat \Delta_{h'}) \tilde f \|_{2,M}^2$. We compute the graph Laplacians for bandwidths in a grid $\mathcal H$ between 0.001 and 0.8 (see \cref{fig:GraphLap}). The risk of each graph Laplacian is estimated by a standard Monte Carlo procedure (see \cref{fig:risk}).
\begin{figure}[h]
\centering
\includegraphics[scale=0.3]{LaplacianEstimationIllustration.png}
\caption{Choosing $h$ is crucial for estimating $\Delta_{\mathcal S^2} \tilde f$: a small bandwidth overfits $\Delta_{\mathcal S^2} \tilde f$, whereas a large bandwidth leads to an almost constant approximation of $\Delta_{\mathcal S^2} \tilde f$.}
\label{fig:GraphLap}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.3]{RiskIllustration.png}
\caption{Estimation of the risk of each graph Laplacian operator: the oracle Laplacian is attained for approximately $h=0.15$.}
\label{fig:risk}
\end{figure}
Figure~\ref{fig:jump} illustrates the calibration method. On this picture, the $x$-axis corresponds to the values of $a$ and the $y$-axis represents the bandwidths. The blue step function represents the function $a \mapsto \hat h(a,a)$. The red step function gives the model selected by the rule $a \mapsto \hat h(a,2 a)$. Following the heuristics given in Section~\ref{subsec:practical}, one could take for this example the value $a_0 \approx 3.5$ (location of the bandwidth jump for the blue curve), which leads to selecting the model $\hat h(a_0,2a_0) \approx 0.2$ (red curve).
\begin{figure}[h]
\centering
\includegraphics[scale=0.3]{BandwidthJumpIllustration.png}
\caption{Bandwidth jump heuristic: find the location of the jump (blue curve) and deduce the selected bandwidth with the red curve.}
\label{fig:jump}
\end{figure}
\section{Discussion} \label{sec:disc}
This paper is a first attempt at a complete and well-founded data-driven method for inferring Laplace-Beltrami operators from data points. Our results suggest various extensions and raise some questions of interest.
Other versions of the graph Laplacian have been studied in the literature (see for instance \cite{hein2007graph,belkin2008towards}), for instance when the data are not sampled uniformly. It would be relevant to propose a bandwidth selection method for these alternative estimators as well.
From a practical point of view, as explained in~\cref{sec:exp}, there is a gap between the theory we obtain in the paper and what can be done in practice. To fill this gap, a first objective is to prove an oracle inequality in the spirit of Theorem~\ref{prop_oineq} for a bias term defined in terms of the empirical norms computed in practice. A second objective is to propose mathematically well-founded heuristics for the calibration of the parameters $a$ and $b$.
Tuning bandwidths for the estimation of the spectrum of the Laplace-Beltrami operator is a difficult but important problem in data analysis. We are currently working on the adaptation of our results to the case of operator norms and spectrum estimation.
\section*{Appendix: the geometric constants $C$ and $C_1$} \label{sec:geomConst}
The following classical lemma (see, e.g., \cite[Prop.~2.2 and Eq.~3.20]{gine2006empirical}) relates the constants $C$ and $C_1$ introduced in Equations \eqref{def_Da} and \eqref{def_tDa} to the geometric structure of $M$.
\begin{lem} \label{lem_2.2}
There exist constants $C, C_1 >0$ and a positive real number $r>0$ such that for any $x \in M$, and any $v \in T_x M$ such that $\| v \| \leq r$,
\begin{equation}\label{eq_310}
\left|\sqrt{\mathrm{det}(g_{ij})}(v) - 1 \right| \leq C_1 \| v\|_d^2
\qquad \text{and}
\qquad \frac{1}{2} \| v\|_d^2 \leq \| v\|_d^2 - C \| v\|_d^4
\leq \|\mathcal E_x(v)- x \|_m^2 \leq \| v\|_d^2
\end{equation}
where $\mathcal E_x: T_xM \to M$ is the exponential map and $(g_{ij})_{i,j \in \{1,\cdots, d\}}$ are the components of the metric tensor in any normal coordinate system around $x$.
\end{lem}
Although the proof of the lemma is beyond the scope of this paper, notice that one can indeed give explicit bounds on $r$ and $C$ in terms of the reach and injectivity radius of the submanifold $M$.
\subsubsection*{Acknowledgments} The authors are grateful to Pascal Massart for helpful discussions on Lepski's method and to Antonio Rieser for his careful reading and for pointing out a technical bug in a preliminary version of this work. This work was supported by the ANR project TopData ANR-13-BS01-0008 and ERC Gudhi No.~339025.
{\small
\bibliographystyle{alpha}
\section{Introduction}
\label{sec1}
Modern scientific research is characterized by a collection of massive and diverse data sets. One of the most important goals is to integrate these different data sets for making better predictions and statistical inferences. Given a target problem to solve, transfer learning \citep{Torrey10} aims at transferring the knowledge from different but related samples to improve the learning performance of the target problem. A typical example of transfer learning is that one can improve the accuracy of recognizing cars by using not only the labeled data for cars but some labeled data for trucks \citep{weiss2016survey}.
Besides classification, another relevant class of transfer learning problems is linear regression using auxiliary samples. In health-related studies, some biological or clinical outcomes are hard to obtain due to ethical or cost issues, in which case transfer learning can be leveraged to boost the prediction and estimation performance of these outcomes by gathering information from different but related biological outcomes.
Transfer learning has been applied to a range of medical and biological problems, including predictions of protein localization \citep{Mei11}, biological imaging diagnosis \citep{Shin16}, drug sensitivity prediction \citep{Turki17} and integrative analysis of ``multi-omics'' data; see, for instance, \citet{Sun16}, \citet{Hu19}, and \citet{Wang19}. It has also been applied to natural language processing \citep{Daume07} and recommendation systems \citep{Pan13} in the machine learning literature.
The application that motivated the research in this paper is to integrate gene expression data sets measured in different tissues to understand gene regulation, using the Genotype-Tissue Expression (GTEx) data (\url{https://gtexportal.org/}).
These datasets are typically high-dimensional with relatively small sample sizes. When studying the gene regulation relationships of a specific tissue or cell type, it is possible to borrow information from other tissues in order to enhance the learning accuracy. This motivates us to consider transfer learning in high-dimensional linear regression.
\subsection{Transfer Learning in High-dimensional Linear Regression}
\label{sec1-1}
Regression analysis is one of the most widely used statistical methods to understand the association of an outcome with a set of covariates. In many modern applications, the dimension of the covariates is usually very high as compared to the sample size. Typical examples include the genome-wide association and gene expression studies.
In this paper, we consider transfer learning in high-dimensional linear regression models. Formally, our target model can be written as
\begin{equation}
\label{m0}
y^{(0)}_i=(x_i^{(0)})^{\intercal}\beta+\eps^{(0)}_i,~i=1,\dots,n_0,
\end{equation}
where $((x_i^{(0)})^{\intercal}, y_i^{(0)}), ~i=1,\dots, n_0$, are independent samples, $\beta\in{\real}^p$ is the regression coefficient of interest, and $\eps_i^{(0)}$ are independently distributed random noises such that ${\mathbb{E}}[\eps_i^{(0)}|x_i^{(0)}]=0$. In the high-dimensional regime, where $p$ can be as large as or much larger than $n_0$, $\beta$ is often assumed to be sparse such that the number of nonzero elements of $\beta$, denoted by $s$, is much smaller than $p$.
In the context of transfer learning, we observe additional samples from $K$ auxiliary studies. That is, we observe $((x_i^{(k)})^{\intercal}, y_i^{(k)})$ generated from the auxiliary model
\begin{align}
y_i^{(k)}=(x_i^{(k)})^{\intercal}w^{(k)}+\eps^{(k)}_i,~i=1,\dots,n_k,~k=1,\dots, K,
\end{align}
where $w^{(k)}\in{\real}^p$ is the true coefficient vector for the $k$-th study, and $\eps^{(k)}_i$ are the random noises such that ${\mathbb{E}}[\eps^{(k)}_i|x_i^{(k)}]=0$.
The regression coefficients $w^{(k)}$ are unknown and different from our target $\beta$ in general. The number of auxiliary studies, $K$, is allowed to grow but practically $K$ may not be too large. We will study the estimation and prediction of target model (\ref{m0}) utilizing the primary data $((x_i^{(0)})^{\intercal}, y_i^{(0)}), ~i=1,\dots, n_0,$ as well as the data from $K$ auxiliary studies $((x_i^{(k)})^{\intercal}, y_i^{(k)}),~ i=1,\dots, n_k,~k=1,\dots, K$.
For useful information to be borrowed from the auxiliary samples, the target model and some of the auxiliary models need to possess a certain level of similarity. If an auxiliary model is ``similar'' to the target model, we say that this auxiliary sample/study is informative. In this work, we characterize the informative level of the $k$-th auxiliary study using the sparsity of the difference between $w^{(k)}$ and $\beta$. Let $\delta^{(k)}=\beta-w^{(k)}$ denote the contrast between $w^{(k)}$ and $\beta$. The set of informative auxiliary samples consists of those whose contrasts are sufficiently sparse:
\begin{equation}
\label{def-A}
\mathcal{A}_q=\{1\leq k\leq K: \|\delta^{(k)}\|_q\leq h\},
\end{equation}
for some $q\in[0,1]$. That is, the set $\mathcal{A}_q$, which contains the auxiliary studies whose contrast vectors have $\ell_q$-sparsity at most $h$, is called the {\it informative set}. It will be seen later that as long as $h$ is small relative to the sparsity of $\beta$, the studies in $\mathcal{A}_q$ can be useful in improving the prediction and estimation of $\beta$. In the case of $q=0$, the set $\mathcal{A}_q$ corresponds to the auxiliary samples whose contrast vectors have at most $h$ nonzero elements. We also consider approximate sparsity constraints $(q\in (0,1])$, which allow all of the coefficients to be nonzero but require their absolute magnitudes to decay at a relatively rapid rate.
For any $q\in[0,1]$, smaller $h$ implies that the auxiliary samples in $\mathcal{A}_q$ are more informative; a larger cardinality of $\mathcal{A}_q$ ($|\mathcal{A}_q|$) implies a larger number of informative auxiliary samples. Therefore, smaller $h$ and larger $|\mathcal{A}_q|$ are favorable. We allow $\mathcal{A}_q$ to be empty, in which case none of the auxiliary samples is informative. For the auxiliary samples outside of $\mathcal{A}_q$, we do not assume sparse $\delta^{(k)}$ and hence $w^{(k)}$ can be very different from $\beta$ for $k\notin \mathcal{A}_q$.
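For intuition, the informative set is straightforward to compute when the
contrasts are known (an oracle quantity, since in practice $\beta$ and the
$w^{(k)}$ are unknown); a small sketch for $q\in(0,1]$:
\begin{verbatim}
import numpy as np

def informative_set(beta, W, h, q=1.0):
    # indices k with ||beta - w^(k)||_q <= h, for q in (0, 1]
    return [k for k, w in enumerate(W)
            if np.sum(np.abs(beta - w) ** q) ** (1.0 / q) <= h]
\end{verbatim}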
There is a paucity of methods and fundamental theoretical results for high-dimensional linear regression in the transfer learning setting. In the case where the set of informative auxiliary samples $\mathcal{A}_q$ is known, there is a lack of rate-optimal estimation and prediction methods. A closely related topic is multi-task learning \citep{Ando05, LMT09}, where the goal is to simultaneously estimate multiple models using multiple response data. The multi-task learning considered in \citet{LMT09} estimates multiple high-dimensional sparse linear models under the assumption that the supports of all the regression coefficients are the same. The goal of transfer learning is however different, as one is only interested in estimating the target model, and this remains a largely unsolved problem.
\citet{CW19} studied the minimax and adaptive methods for nonparametric classification in the transfer learning setting under similarity assumptions on all the auxiliary samples to the target distribution \citep[Definition 5]{CW19}.
In the more challenging setting where the set $\mathcal{A}_q$ is unknown, as is typical in real applications, it is unclear how to avoid the effects of adversarial auxiliary samples. Additional challenges include the heterogeneity among the design matrices, which does not arise in conventional high-dimensional regression problems and hence requires novel proposals.
\subsection{Our Contributions}
In the setting where the informative set $\mathcal{A}_q$ is known, we propose a transfer learning algorithm, called Oracle Trans-Lasso, for estimation and prediction of the target regression vector and prove its minimax optimality under mild conditions. The result demonstrates a faster rate of convergence when $\mathcal{A}_q$ is non-empty and $h$ is sufficiently smaller than $s$, in which case the knowledge from the informative auxiliary samples can be optimally transferred to substantially help solve the regression problem under the target model.
In the more challenging setting where $\mathcal{A}_q$ is unknown a priori, we introduce a data-driven algorithm, called Trans-Lasso, to adapt to the unknown $\mathcal{A}_q$. The adaptation is achieved by aggregating a number of candidate estimators. The desirable properties of the aggregation method guarantee that the Trans-Lasso is not much worse than the best one among the candidate estimators. We carefully construct the candidate estimators and, leveraging the properties of aggregation, demonstrate the robustness and the efficiency of Trans-Lasso under mild conditions. In terms of robustness, the Trans-Lasso is guaranteed to be not much worse than the Lasso estimator using only the primary samples no matter how adversarial the auxiliary samples are. In terms of efficiency, the knowledge from a subset of the informative auxiliary samples can be transferred to the target problem under proper conditions. Furthermore, if the contrast vectors in the informative samples are sufficiently sparse, the Trans-Lasso estimator performs as if the informative set $\mathcal{A}_q$ is known.
When the distributions of the designs differ across samples, the effects of the heterogeneous designs are studied. The performance of the proposed algorithms is justified theoretically and numerically in various settings.
\subsection{Related Literature}
Methods for incorporating auxiliary information into statistical inference have received much recent interest. In this context, \cite{Tony19} and \cite{Xia2020GAP} studied two-sample large-scale multiple testing problems. \cite{Ban19} considered high-dimensional sparse estimation and \cite{Mao19} focused on matrix completion.
The auxiliary information in the aforementioned papers is given as some extra covariates, while here we have additional raw data, which are high-dimensional, and it is not trivial to find the best way to summarize the information. \citet{Bastani18} studied estimation and prediction in high-dimensional linear models with one informative auxiliary study, where the sample size of the auxiliary study is larger than the number of covariates. The present paper considers more general scenarios under weaker assumptions. Specifically, the sample sizes of the auxiliary samples can be smaller than the number of covariates and some auxiliary studies can be non-informative, which is more practical in applications.
The problem we study here is certainly related to the high-dimensional prediction and estimation in the conventional settings where only samples from the target model are available.
Several $\ell_1$ penalized or constrained minimization methods have been proposed for prediction and estimation for high-dimensional linear regression; see, for example, \citet{Lasso, FL01, Zou06, CT07, Zhang10}. The minimax optimal rates for estimation and prediction are studied in \citet{Raskutti11} and \citet{Ver12}.
\subsection{Organization and Notation}
The rest of this paper is organized as follows. Section \ref{sec2} focuses on the setting where the informative set $\mathcal{A}_q$ is known and with the sparsity in \eqref{def-A} measured in $\ell_1$-norm. A transfer learning algorithm is proposed for estimation and prediction of the target regression vector and its minimax optimality is established. In Section \ref{sec3}, we study the estimation and prediction of the target model when $\mathcal{A}_q$ is unknown for $q=1$. In Section \ref{sec4}, we justify the theoretical performance of our proposals under heterogeneous designs and extend our main algorithms to deal with $\ell_q$-sparse contrasts for $q\in[0,1)$. In Section \ref{sec-simu}, the numerical performance of the proposed methods is studied in various settings. In Section \ref{sec-data}, the proposed algorithms are applied to an analysis of a Genotype-Tissue Expression (GTEx) dataset to investigate the association of gene expression of one gene with other genes in a target tissue by leveraging data measured on other related tissues or cell types.
We finish this section with notation. Let $X^{(0)}\in{\real}^{n_0\times p}$ and $y^{(0)}\in{\real}^{n_0}$ denote the design matrix and the response vector for the primary data, respectively. Let $X^{(k)}\in{\real}^{n_k\times p}$ and $y^{(k)}\in{\real}^{n_k}$ denote the design matrix and the response vector for the $k$-th sample, respectively. For a class of matrices $R_l\in{\real}^{n_l\times p_0}$, $l\in \mathcal{L}$, we use $\{R_l\}_{l\in \mathcal{L}}$ to denote $R_l$, $l\in \mathcal{L}$. Let $n_{\mathcal{A}_q}=\sum_{k\in \mathcal{A}_q}n_k$.
For a generic semi-positive definite matrix $\Sigma\in{\real}^{m\times m}$, let $\Lambda_{\max}(\Sigma)$ and $\Lambda_{\min}(\Sigma)$ denote the largest and smallest eigenvalues of $\Sigma$, respectively. Let $\textrm{Tr}(\Sigma)$ denote the trace of $\Sigma$. Let $e_j$ be such that its $j$-th element is 1 and all other elements are zero. Let $a\vee b$ denote $\max\{a,b\}$ and $a\wedge b$ denote $\min\{a,b\}$. We use $c,c_0,c_1,\dots$ to denote generic constants which can be different in different statements. Let $a_n=O(b_n)$ and $a_n\lesssim b_n$ denote $|a_n/b_n|\leq c<\infty$ for some constant $c$ when $n$ is large enough. Let $a_n\asymp b_n$ denote $a_n/b_n\rightarrow c$ for some positive constant $c$ as $n\rightarrow \infty$. Let $a_n=O_P(b_n)$ and $a_n\lesssim_{{\mathbb{P}}} b_n$ denote ${\mathbb{P}}(|a_n/b_n|\leq c)\rightarrow 1$ for some constant $c<\infty$. Let $a_n=o_P(b_n)$ denote ${\mathbb{P}}(|a_n/b_n|>c)\rightarrow 0$ for any constant $c>0$.
\section{Estimation with Known Informative Auxiliary Samples}
\label{sec2}
In this section, we consider transfer learning for high-dimensional linear regression when the informative set $\mathcal{A}_q$ is known. We focus on the $\ell_1$-sparse characterization of the contrast vectors and leave the $\ell_q$-sparsity, $q\in[0,1)$, to Section \ref{sec4}. The notation $\mathcal{A}_1$ will be abbreviated as $\mathcal{A}$ in the sequel without special emphasis.
\subsection{Oracle Trans-Lasso Algorithm}
\label{sec2-1}
We propose a transfer learning algorithm, called {\it Oracle Trans-Lasso}, for estimation and prediction when $\mathcal{A}$ is known. As an overview, we first compute an initial estimator using the primary sample and all the informative auxiliary samples.
However, its probabilistic limit is biased from $\beta$ as $w^{(k)}\neq \beta$ in general. We then correct its bias using the primary data in the second step. Algorithm \ref{algo1} formally presents our proposed Oracle Trans-Lasso algorithm.
\vspace{0.1in}\begin{algorithm}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\SetAlgoLined
\Input{Primary data $(X^{(0)},y^{(0)})$ and informative auxiliary samples $\{X^{(k)},y^{(k)}\}_{k\in\mathcal{A}}$}
\Output{$\hat{\beta}$}
\underline{Step 1}. Compute
\begin{align}
\label{eq-hbeta-init}
\hat{w}^{\mathcal{A}}&=\mathop{\rm arg\, min}_{w\in{\real}^p} \Big\{\frac{1}{2(n_{\mathcal{A}}+n_0)}\sum_{k \in \mathcal{A}\cup\{0\}}\|y^{(k)}-X^{(k)}w\|_2^2+\lambda_w\|w\|_1\Big\}\;
\end{align}
for $\lambda_w=c_1\sqrt{\log p/(n_0+n_{\mathcal{A}})}$ with some constant $c_1$.
\underline{Step 2}. Let
\begin{equation}
\label{eq-hbeta}
\hat{\beta}=\hat{w}^{\mathcal{A}}+\hat{\delta}^{\mathcal{A}},
\end{equation}
where
\begin{equation}
\label{eq-hgam}
\hat{\delta}^{\mathcal{A}}=\mathop{\rm arg\, min}_{\delta\in{\real}^p} \left\{\frac{1}{2n_0}\|y^{(0)}-X^{(0)}(\hat{w}^{\mathcal{A}}+\delta)\|_2^2+\lambda_{\delta}\|\delta\|_1\right\}
\end{equation}
for $\lambda_{\delta}=c_2\sqrt{\log p/n_0}$ with some constant $c_2$.
\caption{\textbf{Oracle Trans-Lasso algorithm}} \label{algo1}
\end{algorithm}
In Step 1, $\hat{w}^{\mathcal{A}}$ is computed via the Lasso \citep{Lasso} using the primary sample and all the informative auxiliary samples. Its probabilistic limit is $w^{\mathcal{A}}$, which can be defined via the following moment condition
\[
{\mathbb{E}}\left[\sum_{k \in \mathcal{A}\cup\{0\}} (X^{(k)})^{\intercal}(y^{(k)}-X^{(k)}w^{\mathcal{A}})\right]=0.
\]
Denoting ${\mathbb{E}}[x_i^{(k)}(x_i^{(k)})^{\intercal}]=\Sigma^{(k)}$, $w^{\mathcal{A}}$ has the following explicit form:
\begin{align}
\label{eq-wA}
w^{\mathcal{A}}=\beta+\delta^{\mathcal{A}}
\end{align}
for $\delta^{\mathcal{A}}=\sum_{k \in \mathcal{A}}\alpha_k\delta^{(k)}$ and $\alpha_k=n_k/(n_{\mathcal{A}}+n_0)$, if $\Sigma^{(k)}=\Sigma^{(0)}$ for all $k\in \mathcal{A}$.
That is, the probabilistic limit of $\hat{w}^{\mathcal{A}}$, $w^{\mathcal{A}}$, has bias $\delta^{\mathcal{A}}$, which is a weighted average of $\delta^{(k)}$. Step 1 is related to the approach for high-dimensional misspecified models \citep{Bu15} and moment estimators. The estimator $\hat{w}^{\mathcal{A}}$ converges relatively fast as the sample size used in Step 1 is relatively large. Step 2 corrects the bias, $\delta^{\mathcal{A}}$, using the primary samples. In fact, $\delta^{\mathcal{A}}$ is a sparse high-dimensional vector whose $\ell_1$-norm is no larger than $h$.
Hence, the error of Step 2 is under control for a relatively small $h$.
The choice of the tuning parameters $\lambda_w$ and $\lambda_{\delta}$ will be further specified in Theorem \ref{thm0-l1}.
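For illustration, a minimal sketch of Algorithm \ref{algo1} with
off-the-shelf software (scikit-learn's \texttt{Lasso}; the constants
\texttt{c1} and \texttt{c2} below are placeholders rather than the
theoretical values) could read:
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Lasso

def oracle_trans_lasso(X0, y0, X_aux, y_aux, c1=1.0, c2=1.0):
    # X_aux, y_aux: lists of designs / responses for k in A
    XA = np.vstack([X0] + X_aux)
    yA = np.concatenate([y0] + y_aux)
    nA, p = XA.shape
    n0 = X0.shape[0]

    # Step 1: pooled Lasso over primary + informative samples
    lam_w = c1 * np.sqrt(np.log(p) / nA)
    w_hat = Lasso(alpha=lam_w, fit_intercept=False).fit(XA, yA).coef_

    # Step 2: bias correction via a Lasso on primary residuals
    lam_d = c2 * np.sqrt(np.log(p) / n0)
    resid = y0 - X0 @ w_hat
    delta_hat = Lasso(alpha=lam_d,
                      fit_intercept=False).fit(X0, resid).coef_
    return w_hat + delta_hat
\end{verbatim}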
\subsection{Theoretical Properties of Oracle Trans-Lasso}
Formally, the parameter space we consider can be written as
\begin{equation}
\label{eq-Thetaq}
\Theta_{q}(s,h)=\left\{(\beta,\delta^{(1)},\dots,\delta^{(K)} ): \|\beta\|_0\leq s, ~\max_{k \in \mathcal{A}_q} \|\delta^{(k)}\|_q\leq h\right\}
\end{equation}
for $\mathcal{A}_q\subseteq \{1,\dots, K\}$ and $q\in[0,1]$.
We study the rate of convergence for the Oracle Trans-Lasso algorithm under the following two conditions.
\begin{condition}
\label{cond1}{\rm
For each $k\in \mathcal{A}\cup\{0\}$, each row of $X^{(k)}$ is \textit{i.i.d.} Gaussian distributed with mean zero and covariance matrix $\Sigma$. The smallest and largest eigenvalues of $\Sigma$ are bounded away from zero and infinity, respectively.
}\end{condition}
\begin{condition}
\label{cond2}{\rm
For each $k\in\mathcal{A}\cup\{0\}$, the random noises $\eps^{(k)}_i$ are \textit{i.i.d.} sub-Gaussian with mean zero and variance $\sigma^2_k$. For some constant $C_0$, it holds that $\max_{k \in \mathcal{A}\cup\{0\}}{\mathbb{E}}[\exp\{t\eps^{(k)}_i\}]\leq \exp\{t^2C_0\}$ for all $t\in {\real}$ and $\max_{0\leq k\leq K} {\mathbb{E}}[(y_i^{(k)})^2]$ is bounded away from infinity.
}\end{condition}
Condition \ref{cond1} assumes random designs with Gaussian distribution. The Gaussian assumption provides convenience for bounding the restricted eigenvalues of sample Gram matrices. Moreover, the designs are identically distributed for $k\in \mathcal{A}\cup\{0\}$. This assumption is for simplifying some technical conditions and will be relaxed in Section \ref{sec4}. Without loss of generality, we also assume the design matrices are normalized such that $\|X^{(k)}_{.,j}\|_2^2=n_k$ and $\Sigma_{j,j}=1$ for all $1\leq j\leq p$, $k\in \mathcal{A}\cup\{0\}$. Condition \ref{cond2} assumes sub-Gaussian random noises for primary and informative auxiliary samples and the second moment of the response vector is finite. Conditions \ref{cond1} and \ref{cond2} put no assumptions on the non-informative auxiliary samples as they are not used in the Oracle Trans-Lasso algorithm.
In the next theorem, we prove the convergence rate of the Oracle Trans-Lasso.
\begin{theorem}[Convergence Rate of Oracle Trans-Lasso]
\label{thm0-l1}
\text{ }\text{ }
Assume that Condition \ref{cond1} and Condition \ref{cond2} hold true. We take
$\lambda_w=c_1\max_{k\in\mathcal{A}\cup\{0\}}\sqrt{{\mathbb{E}}[(y^{(k)}_i)^2]\log p/(n_{\mathcal{A}}+n_0)}$ and $\lambda_\delta=c_2\sqrt{\log p/n_0}$ for some sufficiently large constants $c_1$ and $c_2$ only depending on $C_0$.
If $s\log p/(n_{\mathcal{A}}+n_0)+h(\log p/n_0)^{1/2}=o((\log p/n_0)^{1/4})$, then it holds that
\begin{align}
&\sup_{\beta\in\Theta_1(s,h)}\frac{1}{n_0}\|X^{(0)}(\hat{\beta}-\beta)\|_2^2\vee\|\hat{\beta}-\beta\|_2^2\nonumber\\
&=O_P\left(\frac{s\log p}{n_{\mathcal{A}}+n_0}+\frac{s\log p}{n_0}\wedge h\sqrt{\frac{\log p}{n_0}}\wedge h^2\right) \label{l1-re1}.
\end{align}
\end{theorem}
Theorem \ref{thm0-l1} provides the convergence rate of $\hat{\beta}$ for any $\beta\in\Theta_1(s,h)$.
In the trivial case where $\mathcal{A}$ is empty, the right-hand side in (\ref{l1-re1}) is $O_P(s\log p/n_0)$, which is the convergence rate for the Lasso only using primary samples. When $\mathcal{A}$ is non-empty, the right-hand side of (\ref{l1-re1}) is sharper than $s\log p/n_0$ if $h\sqrt{\log p/n_0}\ll s$ and $n_{\mathcal{A}}\gg n_0$. That is, if the informative auxiliary samples have contrast vectors sufficiently sparser than $\beta$ and the total sample size is significantly larger than the primary sample size, then the knowledge from the auxiliary samples can significantly improve the learning performance of the target model. In practice, even if $n_{\mathcal{A}}$ is comparable to $n_0$, the Oracle Trans-Lasso can still improve the empirical performance as shown by some numerical experiments provided in Section \ref{sec-simu}.
The sample size requirement in Theorem \ref{thm0-l1} guarantees the lower restricted eigenvalues of the sample Gram matrices in Step 1 and Step 2 are bounded away from zero with high probability. The proof of Theorem \ref{thm0-l1} involves an error analysis of $\hat{w}^{\mathcal{A}}$ and that of $\hat{\delta}^{\mathcal{A}}$. While $w^{\mathcal{A}}$ may be neither $\ell_0$- nor $\ell_1$-sparse, it can be decomposed into an $\ell_0$-sparse component plus an $\ell_1$-sparse component as illustrated in (\ref{eq-wA}). Exploiting this sparse structure is a key step in proving Theorem \ref{thm0-l1}.
Regarding the choice of tuning parameters, $\lambda_w$ depends on the second moment of $y_i^{(k)}$, which can be consistently estimated by $\|y^{(k)}\|_2^2/n_k$. The other tuning parameter $\lambda_{\delta}$ depends on the noise levels, which can be estimated by the scaled Lasso \citep{scaled-lasso}. In practice, cross validation can be performed for selecting tuning parameters.
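For concreteness, these plug-in choices can be coded directly. The following is a minimal sketch in Python (the function name and default constants are ours, for illustration only); ${\mathbb{E}}[(y^{(k)}_i)^2]$ is replaced by its consistent estimate $\|y^{(k)}\|_2^2/n_k$ as suggested above.
\begin{verbatim}
import numpy as np

def tuning_parameters(y0, y_aux, p, c1=1.0, c2=1.0):
    # Plug-in lambda_w and lambda_delta; c1, c2 are user-chosen
    # constants (the theory pins them down only up to C_0).
    n0 = len(y0)
    n_A = sum(len(y) for y in y_aux)
    m2 = [np.mean(y ** 2) for y in [y0] + y_aux]  # estimates of E[y^2]
    lam_w = c1 * max(np.sqrt(m * np.log(p) / (n_A + n0)) for m in m2)
    lam_delta = c2 * np.sqrt(np.log(p) / n0)
    return lam_w, lam_delta
\end{verbatim}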
We now establish the minimax lower bound for estimating $\beta$ in the transfer learning setup, which shows the minimax optimality of the Oracle Trans-Lasso algorithm in $\Theta_1(s,h)$.
\begin{theorem}[Minimax lower bound for $q=1$]
\label{thm2-low}
Assume Condition \ref{cond1} and Condition \ref{cond2}. If $\max\{s\log p/(n_{\mathcal{A}}+n_0),~h(\log p/n_0)^{1/2}\}=o(1)$, then
{\small
\begin{align*}
&{\mathbb{P}}\left(\inf_{\hat{\beta}}\sup_{\beta\in \Theta_1(s,h)} \|\hat{\beta}-\beta\|_2^2\geq c_1\frac{s\log p}{n_{\mathcal{A}}+n_0}+ c_2 \frac{s\log p}{n_0}\wedge h\left(\frac{\log p}{n_0}\right)^{1/2}\wedge h^2\right)\geq \frac{1}{2}
\end{align*}
}
for some positive constants $c_1$ and $c_2$.
\end{theorem}
Theorem \ref{thm2-low} implies that $\hat{\beta}$ obtained by the Oracle Trans-Lasso algorithm is minimax rate optimal in $\Theta_1(s,h)$ under the conditions of Theorem \ref{thm0-l1}. To understand the lower bound, note that the term $s\log p/(n_{\mathcal{A}}+n_0)$ is the optimal convergence rate when $w^{(k)}=\beta$ for all $k \in \mathcal{A}$. This is an ideal case in which we have $n_{\mathcal{A}}+n_0$ \textit{i.i.d.} samples from the target model. The second term in the lower bound is the optimal convergence rate when $w^{(k)}=0$ for all $k \in \mathcal{A}$, i.e., the auxiliary samples are not helpful at all. Let $\mathcal{B}_q(r)=\{u\in {\real}^p:\|u\|_q\leq r\}$ denote the $\ell_q$-ball with radius $r$ centered at zero. In this case, the definition of $\Theta_1(s,h)$ implies that $\beta\in \mathcal{B}_0(s)\cap \mathcal{B}_1(h)$, and the second term in the lower bound is indeed the minimax optimal rate for estimation with $n_0$ \textit{i.i.d.} samples when $\beta\in \mathcal{B}_0(s)\cap \mathcal{B}_1(h)$ \citep{TS14}.
\section{Unknown Set of Informative Auxiliary Samples}
\label{sec3}
The Oracle Trans-Lasso algorithm is based on the knowledge of the informative set $\mathcal{A}$. In some applications, the informative set $\mathcal{A}$ is not given, which makes the transfer learning problem more challenging. In this section, we propose a data-driven method for estimation and prediction when $\mathcal{A}$ is unknown. The proposed algorithm is described in detail in Sections \ref{sec3-1} and \ref{sec3-2}. Its theoretical properties are studied in Section \ref{sec3-3}.
\subsection{The Trans-Lasso Algorithm}
\label{sec3-1}
Our proposed algorithm, called Trans-Lasso, consists of two main steps. First, we construct a collection of candidate estimators, where each of them is based on an estimate of $\mathcal{A}$. Second, we perform an aggregation step \citep{RT11, DRZ12, Dai18} on these candidate estimators. Under proper conditions, the aggregated estimator is guaranteed to be not much worse than the best candidate estimator under consideration in terms of prediction. For technical reasons, we need the candidate estimators and the sample for aggregation to be independent. Hence, we start with sample splitting.
We need some more notation. For a generic estimator $b$ of $\beta$, denote its sum of squared prediction errors by
\[
\widehat{Q}(\mathcal{I},b)=\sum_{i\in \mathcal{I}}\|y^{(0)}_i-(x^{(0)}_i)^{\intercal}b\|_2^2,
\]
where $\mathcal{I}$ is a subset of $\{1,\dots,n_0\}$.
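In code, $\widehat{Q}$ is a plain sum of squared residuals; a short Python sketch (function name ours) is:
\begin{verbatim}
import numpy as np

def Q_hat(I, b, X0, y0):
    # sum of squared prediction errors over the index set I
    r = y0[I] - X0[I] @ b
    return float(r @ r)
\end{verbatim}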
Let $\Lambda^{L+1}=\{\nu\in{\real}^{L+1}: \nu_l\geq 0,\sum_{l=0}^L\nu_l=1\}$ denote an $L$-dimensional simplex.
The Trans-Lasso algorithm is presented in Algorithm \ref{algo2}.
\begin{algorithm}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\SetAlgoLined
\Input{ Primary data $(X^{(0)},y^{(0)})$ and samples from $K$ auxiliary studies $\{X^{(k)},y^{(k)}\}_{k=1}^K$.}
\Output{$\hat{\beta}^{\hat{\theta}}$.}
\underline{Step 1}.
Let $\mathcal{I}$ be a random subset of $\{1,\dots,n_0\}$ such that $|\mathcal{I}|\approx n_0/2$. Let $\mathcal{I}^c=\{1,\dots,n_0\}\setminus \mathcal{I}$.
\underline{Step 2}.
Construct $L+1$ candidate sets of $\mathcal{A}$, $\big\{\widehat{G}_0,\widehat{G}_1,\dots,\widehat{G}_{L}\big\}$, such that $\widehat{G}_0=\emptyset$ and $\widehat{G}_1,\dots,\widehat{G}_{L}$ are based on (\ref{eq-hGl}) using $\left(X_{\mathcal{I},.}^{(0)},y_{\mathcal{I}}^{(0)}\right)$ and $\{X^{(k)},y^{(k)}\}_{k=1}^K$.
\underline{Step 3}. For each $0\leq l\leq L$, run the Oracle Trans-Lasso algorithm with primary sample $(X_{\mathcal{I},.}^{(0)},y_{\mathcal{I}}^{(0)})$ and auxiliary samples $\{X^{(k)},y^{(k)}\}_{k\in\widehat{G}_l}$. Denote the output as $\hat{\beta}(\widehat{G}_l)$ for $0\leq l\leq L$.
\underline{Step 4}.
Compute
{\small
\begin{align}
&\hat{\theta}=\label{theta-bma}\\
&\mathop{\rm arg\, min}_{\theta\in\Lambda^{L+1}}\left\{\widehat{Q}\big(\mathcal{I}^c,\sum_{l=0}^L\hat{\beta}(\widehat{G}_l)\theta_l\big)+\sum_{l=0}^L\theta_l \widehat{Q}(\mathcal{I}^c,\hat{\beta}(\widehat{G}_l))+\frac{2\lambda_{\theta}\log (L+1)}{n_0}\|\theta\|_1\right\}\nonumber
\end{align}
}
for some $\lambda_{\theta}>0$. Output
\begin{equation}
\label{hbeta-htheta}
\hat{\beta}^{\hat{\theta}}=\sum_{l=0}^L\hat{\theta}_{l}\hat{\beta}(\widehat{G}_l).
\end{equation}
\caption{\textbf{Trans-Lasso Algorithm}} \label{algo2}
\end{algorithm}
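To make the four steps concrete, the following self-contained Python sketch mirrors Algorithm \ref{algo2}. It assumes the two-step structure of the Oracle Trans-Lasso (a Lasso on the pooled samples followed by a bias-correcting Lasso on the primary sample) and, for brevity, replaces the Q-aggregation in (\ref{theta-bma}) by simple exponential weighting and keeps the tuning parameters fixed across candidates; all function names and defaults are ours, not part of any library. The vector \texttt{R\_hat} collects the estimated sparsity indices of Section \ref{sec3-2}, computed on the same split.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Lasso  # generic Lasso solver

def lasso(X, y, lam):
    # sklearn minimizes ||y - Xb||_2^2 / (2n) + lam * ||b||_1
    return Lasso(alpha=lam, fit_intercept=False,
                 max_iter=10000).fit(X, y).coef_

def oracle_trans_lasso(X0, y0, X_list, y_list, lam_w, lam_delta):
    # Step 1: Lasso on the pooled primary + auxiliary samples.
    Xp = np.vstack([X0] + X_list)
    yp = np.concatenate([y0] + y_list)
    w_hat = lasso(Xp, yp, lam_w)
    # Step 2: bias correction on the primary sample only.
    delta_hat = lasso(X0, y0 - X0 @ w_hat, lam_delta)
    return w_hat + delta_hat

def trans_lasso(X0, y0, X_list, y_list, lam_w, lam_delta, R_hat,
                temp=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n0 = len(y0)
    I = rng.choice(n0, n0 // 2, replace=False)   # Step 1: sample split
    Ic = np.setdiff1d(np.arange(n0), I)
    order = np.argsort(R_hat)                    # Step 2: candidate sets
    G = [np.array([], dtype=int)]
    G += [order[:l] for l in range(1, len(R_hat) + 1)]
    betas = [oracle_trans_lasso(X0[I], y0[I],    # Step 3
                                [X_list[k] for k in Gl],
                                [y_list[k] for k in Gl],
                                lam_w, lam_delta) for Gl in G]
    # Step 4 (simplified): exponential weights on held-out losses.
    losses = np.array([np.mean((y0[Ic] - X0[Ic] @ b) ** 2)
                       for b in betas])
    theta = np.exp(-temp * n0 * (losses - losses.min()))
    theta /= theta.sum()
    return sum(t * b for t, b in zip(theta, betas))
\end{verbatim}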
Steps 2 and 3 of the Trans-Lasso algorithm construct initial estimates of $\beta$, $\hat{\beta}(\widehat{G}_l)$. They are computed using the Oracle Trans-Lasso algorithm, treating each $\widehat{G}_l$ as the set of informative auxiliary samples. We construct $\widehat{G}_l$ to be a class of estimates of $\mathcal{A}$; the detailed procedure is provided in Section \ref{sec3-2}.
Step 4 is based on the Q-aggregation proposed in \citet{DRZ12} with a uniform prior and a simplified tuning parameter. Q-aggregation can be viewed as a weighted combination of least squares aggregation and exponential aggregation \citep{RT11}, and it has been shown to be rate optimal both in expectation and with high probability for model selection aggregation problems.
The framework of model selection aggregation is a good fit for the transfer learning task under consideration.
On one hand, it guarantees the robustness of Trans-Lasso in the following sense. Notice that $\hat{\beta}(\widehat{G}_0)$ corresponds to the Lasso estimator using only the primary samples, and it is always included in our dictionary. Hence, by the model selection aggregation property, the performance of $\hat{\beta}^{\hat{\theta}}$ is guaranteed to be not much worse than that of the original Lasso estimator under mild conditions. This shows that the performance of Trans-Lasso will not be ruined by adversarial auxiliary samples. Formal statements are provided in Section \ref{sec3-3}.
On the other hand, the gain of Trans-Lasso relates to the qualities of $\widehat{G}_1,\dots,\widehat{G}_L$.
If
\begin{equation}
\label{cond-agg}
{\mathbb{P}}\left(\widehat{G}_l\subseteq\mathcal{A}, ~\text{for some}~ 1\leq l\leq L\right)\rightarrow 1,
\end{equation}
i.e., $\widehat{G}_l$ is a nonempty subset of the informative set $\mathcal{A}$, then the model selection aggregation property implies that the performance of $\hat{\beta}^{\hat{\theta}}$ is not much worse than the performance of the Oracle Trans-Lasso with $\sum_{k\in \widehat{G}_l}n_k$ informative auxiliary samples.
Ideally, one would like to achieve $\widehat{G}_l=\mathcal{A}$ for some $1\leq l\leq L$ with high probability. However, this can require strong assumptions that may not hold in practical situations.
To motivate our construction of $\widehat{G}_l$, let us first point out a naive construction of candidate sets, which consists of $2^K$ candidates: all subsets of $\{1,\dots, K\}$, denoted by $\widehat{G}_1,\dots, \widehat{G}_{2^K}$.
Obviously, $\mathcal{A}$ is an element of this collection. However, the number of candidates is too large and can be computationally burdensome. Furthermore, the cost of aggregation can be significantly high, of order $K/n_0$, as will be seen in Lemma \ref{thm-ag1}.
In contrast, we would like to pursue a much smaller number of candidate sets such that the cost of aggregation is almost negligible and (\ref{cond-agg}) can be achieved under mild conditions.
We introduce our proposed construction of candidate sets in the next subsection.
\subsection{Constructing the Candidate Sets for Aggregation}
\label{sec3-2}
As illustrated in Section \ref{sec3-1}, the goal of Step 2 is to have a class of candidate sets, $\{\widehat{G}_0,\dots,\widehat{G}_{L}\}$, that satisfy (\ref{cond-agg}) under certain conditions.
Our idea is to exploit the sparsity patterns of the contrast vectors.
Specifically, recall that the definition of $\mathcal{A}$ implies that $\{\delta^{(k)}\}_{k \in \mathcal{A}}$ are sparser than $\{\delta^{(k)}\}_{k \in \mathcal{A}^c}$, where $\mathcal{A}^c=\{1,\dots,K\}\setminus \mathcal{A}$. This property motivates us to find a sparsity index $R^{(k)}$ and its estimator $\widehat{R}^{(k)}$ for each $1\leq k\leq K$ such that
\begin{align}
\label{cond-rk}
\max_{k \in \mathcal{A}^o}R^{(k)}< \min_{k \in \mathcal{A}^c}R^{(k)}\quad\text{and}\quad {\mathbb{P}}\left(\max_{k \in \mathcal{A}^o}\widehat{R}^{(k)}< \min_{k \in \mathcal{A}^c}\widehat{R}^{(k)}\right)\rightarrow 1,
\end{align}
where $\mathcal{A}^o$ is some subset of $\mathcal{A}$.
In words, the sparsity indices in $\mathcal{A}^o$ are no larger than the sparsity indices in $\mathcal{A}^c$ and so are their estimators with high probability. To utilize (\ref{cond-rk}), we can define the candidate sets as
\begin{align}
\label{eq-hGl}
\widehat{G}_l=\left\{1\leq k\leq K: \widehat{R}^{(k)}~ \text{is among the first $l$ smallest of all}\right\}
\end{align}
for $1\leq l\leq K$.
That is, $\widehat{G}_l$ is the set of auxiliary samples whose estimated sparsity indices are among the first $l$ smallest.
A direct consequence of (\ref{cond-rk}) and (\ref{eq-hGl}) is that ${\mathbb{P}}(\widehat{G}_{|\mathcal{A}^o|}=\mathcal{A}^o)\rightarrow 1$ and hence the desirable property (\ref{cond-agg}) is satisfied.
To achieve the largest gain in transfer learning, we would like to find proper sparsity indices such that (\ref{cond-rk}) holds for $|\mathcal{A}^o|$ as large as possible.
Notice that $\widehat{G}_{K}=\{1,\dots, K\}$ is always included as a candidate according to (\ref{eq-hGl}). Hence, in the special cases where all the auxiliary samples are informative or none of the auxiliary samples are informative, it holds that $\widehat{G}_{|\mathcal{A}|}=\mathcal{A}$ and the Trans-Lasso is not much worse than the Oracle Trans-Lasso.
The more challenging cases are $0<|\mathcal{A}|<K$.
As $\{\delta^{(k)}\}_{k\in\mathcal{A}^c}$ are not necessarily sparse, the estimation of $\delta^{(k)}$ or functions of $\delta^{(k)}$, $1\leq k\leq K$, is not trivial.
We consider using $R^{(k)}=\|\Sigma\delta^{(k)}\|_2^2$, which is a function of the population-level marginal statistics, as the oracle sparsity index for the $k$-th auxiliary sample. The advantage of $R^{(k)}$ is that it has a natural unbiased estimate without further assumptions.
Let us relate $R^{(k)}$ to the sparsity of $\delta^{(k)}$ using a Bayesian characterization of sparse vectors assuming $\Sigma^{(k)}=\Sigma$ for all $0\leq k\leq K$. If $\delta^{(k)}_j$ are \textit{i.i.d.} Laplacian distributed with mean zero and variance $\nu_k^2$ for each $k$, then it follows from the properties of Laplacian distribution \citep{LK15} that
\[
{\mathbb{E}}[\|\delta^{(k)}\|_1]=p\nu_k={\mathbb{E}}^{1/2}[\|\Sigma\delta^{(k)}\|^2_2]\frac{p}{\textrm{Tr}^{1/2}(\Sigma\Sigma)},
\]
where $\textrm{Tr}^{1/2}(\Sigma\Sigma)/p$ does not depend on $k$. Hence, the ranking of ${\mathbb{E}}[\|\Sigma\delta^{(k)}\|^2_2]$ across $k$ is the same as the ranking of ${\mathbb{E}}[\|\delta^{(k)}\|_1]$. As $\max_{k\in \mathcal{A}}\|\delta^{(k)}\|_1< \min_{k\in \mathcal{A}^c}\|\delta^{(k)}\|_1$, it is reasonable to expect $\max_{k\in \mathcal{A}}\|\Sigma\delta^{(k)}\|^2_2< \min_{k\in \mathcal{A}^c}\|\Sigma\delta^{(k)}\|^2_2$.
Obviously, the above derivation holds for many other zero mean prior distributions besides Laplacian. This illustrates our motivation for considering $R^{(k)}$ as the oracle sparsity index.
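The ranking heuristic can be checked by a small Monte Carlo experiment; the sketch below (the dimensions and the AR(1) covariance are arbitrary choices of ours) draws Laplace contrasts at several scales and compares the orderings of the two quantities.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
p, reps = 200, 500
idx = np.arange(p)
Sigma = 0.5 ** np.abs(np.subtract.outer(idx, idx))  # AR(1) covariance
S2 = Sigma @ Sigma
scales = [0.1, 0.3, 1.0]  # Laplace dispersions: sparse to dense

l1, quad = [], []
for b in scales:
    d = rng.laplace(0.0, b, size=(reps, p))
    l1.append(np.mean(np.abs(d).sum(axis=1)))                 # E||d||_1
    quad.append(np.mean(np.einsum('ri,ij,rj->r', d, S2, d)))  # E||Sig d||^2
print(np.argsort(l1), np.argsort(quad))  # identical orderings
\end{verbatim}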
We next introduce the estimated version, $\widehat{R}^{(k)}$, based on the primary data $\{(x_i^{(0)})^{\intercal},y_{i}^{(0)}\}_{i\in \mathcal{I}}$ (after sample splitting) and auxiliary samples $\{X^{(k)},y^{(k)}\}_{k=1}^K$.
We first perform SURE screening \citep{FL08} on the marginal statistics to reduce the effect of the random noise.
We summarize our proposal for Step 2 of the Trans-Lasso as follows (see Algorithm \ref{algo2.2}). Let $n_*=\min_{0\leq k\leq K}n_k$.
\begin{algorithm}[H]
\underline{Step 2.1}.
For $1\leq k\leq K$, compute the marginal statistics
\begin{align}
\widehat{\Delta}^{(k)}&=\frac{1}{n_k}\sum_{i=1}^{n_k}x_i^{(k)}y_i^{(k)}-\frac{1}{|\mathcal{I}|}\sum_{i\in \mathcal{I}}x_i^{(0)}y_i^{(0)}.\label{eq-hDeltak}
\end{align}
For each $k\in\{1,\dots, K\}$, let $\widehat{T}_k$ be obtained by SURE screening such that
\begin{align*}
& \widehat{T}_k=\left\{1\leq j\leq p:~|\widehat{\Delta}_j^{(k)}|~\text{is among the first}~t_*~\text{largest of all}\right\}
\end{align*}
for a fixed $t_*=n_*^{\alpha},~0\leq \alpha<1$.
\underline{Step 2.2}. Define the estimated sparsity index for the $k$-th auxiliary sample as
\begin{align}
\label{eq-hRk}
\widehat{R}^{(k)}=\left\|\widehat{\Delta}_{\widehat{T}_k}^{(k)}\right\|_2^2.
\end{align}
\underline{Step 2.3}.
Compute $\widehat{G}_l$ as in (\ref{eq-hGl}) for $l=1,\dots, L$.
\caption{\textbf{Step 2 of the Trans-Lasso Algorithm}} \label{algo2.2}
\end{algorithm}
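A compact implementation of Steps 2.1--2.3 involves only matrix--vector products and a sort; the Python sketch below (function names ours) takes the split primary data and all $K$ auxiliary samples.
\begin{verbatim}
import numpy as np

def sparsity_indices(X0_I, y0_I, X_list, y_list, alpha=0.75):
    # Steps 2.1-2.2: screened sparsity indices R_hat^{(k)}.
    d0 = X0_I.T @ y0_I / len(y0_I)
    n_star = min([len(y0_I)] + [len(y) for y in y_list])
    t_star = int(n_star ** alpha)
    R = np.empty(len(y_list))
    for k, (Xk, yk) in enumerate(zip(X_list, y_list)):
        delta_hat = Xk.T @ yk / len(yk) - d0            # marginal stats
        T_k = np.argsort(-np.abs(delta_hat))[:t_star]   # SURE screening
        R[k] = np.sum(delta_hat[T_k] ** 2)              # screened norm
    return R

def candidate_sets(R):
    # Step 2.3: G_hat_l collects the l smallest indices.
    order = np.argsort(R)
    return [order[:l] for l in range(1, len(R) + 1)]
\end{verbatim}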
One can see that $\widehat{\Delta}^{(k)}$ are empirical marginal statistics such that ${\mathbb{E}}[\widehat{\Delta}^{(k)}]=\Sigma\delta^{(k)}$ for $k\in\mathcal{A}$.
The set $\widehat{T}_k$ collects the indices of the $t_*$ largest (in absolute value) marginal statistics for the $k$-th sample.
The purpose of screening the marginal statistics is to reduce the magnitude of the noise. Notice that the un-screened version $\|\widehat{\Delta}^{(k)}\|_2^2$ is a sum of $p$ random variables and contains noise of order $p/(n_k\wedge n_0)$, which diverges quickly when $p$ is much larger than the sample sizes. By screening with $t_*$ of order $n_*^{\alpha}$, $\alpha<1$, the errors induced by the random noise are kept under control. In practice, the auxiliary samples with very small sample sizes can be removed from the analysis, as their contribution to the target problem is mild.
Desirable choices of $\widehat{T}_k$ should retain as much of the variation of $\Sigma\delta^{(k)}$ as possible.
Under proper conditions, SURE screening can consistently select a set of strong marginal statistics and hence is appropriate for the current purpose.
In Step 2.2, we compute $\widehat{R}^{(k)}$ based on the marginal statistics which are selected by SURE screening.
In practice, different choices of $t_*$ may lead to different realizations of $\widehat{G}_l$. One can compute multiple sets of $\{\widehat{R}^{(k)}\}_{k=1}^K$ with different $t_*$, which give multiple sets of $\{\widehat{G}_l\}_{l=1}^K$. It will be seen from Lemma \ref{thm-ag1} that a finite number of choices of $t_*$ does not affect the rate of convergence.
\subsection{Theoretical Properties of Trans-Lasso}
\label{sec3-3}
In this subsection, we derive the theoretical guarantees for the Trans-Lasso algorithm. We first establish the model selection aggregation type of results for the Trans-Lasso estimator $\hat{\beta}^{\hat{\theta}}$.
\begin{lemma}[Q-aggregation for Trans-Lasso]
\label{thm-ag1}
Assume that Condition \ref{cond1} and Condition \ref{cond2} hold true. Let $\hat{\theta}$ be computed with
$\lambda_{\theta}\geq 4\sigma_0^2$. With probability at least $1-t$, it holds that
\begin{align}
& \frac{1}{|\mathcal{I}^c|}\left\|X^{(0)}_{\mathcal{I}^c,.}(\hat{\beta}^{\hat{\theta}}-\beta)\right\|_2^2\leq \min_{0\leq l\leq L} \frac{1}{|\mathcal{I}^c|}\left\|X^{(0)}_{\mathcal{I}^c,.}(\hat{\beta}(\widehat{G}_l)-\beta)\right\|_2^2+\frac{\lambda_{\theta}\log (L/t)}{n_0}.\label{re0-ms}
\end{align}
If $\|\Sigma\|_2L\leq c_1n_0$ for some small enough constant $c_1$, then
\begin{align}
\label{re1-ms}
\left\|\hat{\beta}^{\hat{\theta}}-\beta\right\|_2^2\lesssim_{{\mathbb{P}}} \min_{0\leq l\leq L} \frac{1}{|\mathcal{I}^c|}\left\|X^{(0)}_{\mathcal{I}^c,.}(\hat{\beta}(\widehat{G}_l)-\beta)\right\|_2^2\vee \|\hat{\beta}(\widehat{G}_l)-\beta\|_2^2+\frac{\log L}{n_0}.
\end{align}
\end{lemma}
\begin{remark}
\label{re1}{\rm
Assume that Conditions \ref{cond1} and \ref{cond2} hold. Let $\hat{\theta}$ be obtained with
$\lambda_{\theta}\geq 4\sigma_0^2$. For any $L\geq 1$, it holds that
\[
\big\|\hat{\beta}^{\hat{\theta}}-\beta\big\|_2^2\lesssim_{{\mathbb{P}}} \min_{0\leq l\leq L} \frac{1}{|\mathcal{I}^c|}\left\|X^{(0)}_{\mathcal{I}^c,.}(\hat{\beta}(\widehat{G}_l)-\beta)\right\|_2^2\vee \|\hat{\beta}(\widehat{G}_l)-\beta\|_2^2+\sqrt{\frac{\log L}{n_0}}.
\]
}\end{remark}
Lemma \ref{thm-ag1} implies that the performance of $\hat{\beta}^{\hat{\theta}}$ only depends on the best candidate regardless of the performance of other candidates under mild conditions.
As commented before, this result guarantees the robustness and efficiency of Trans-Lasso, which can be formally stated as follows.
As the original Lasso is always in our dictionary, (\ref{re0-ms}) and (\ref{re1-ms}) imply that $\hat{\beta}^{\hat{\theta}}$ is not much worse than the Lasso in prediction and estimation. Formally, ``not much worse'' refers to the last term in (\ref{re0-ms}), which can be viewed as the cost of ``searching'' for the best candidate model within the dictionary and is of order $\log L/n_0$. This term is almost negligible, say, when $L=O(K)$, which corresponds to our constructed candidate estimators. This demonstrates the robustness of $\hat{\beta}^{\hat{\theta}}$ to adversarial auxiliary samples. Furthermore, if (\ref{cond-agg}) holds, then the prediction and estimation errors of the Trans-Lasso are comparable to those of the Oracle Trans-Lasso based on the auxiliary samples in $\mathcal{A}^o$.
The prediction error bound in (\ref{re0-ms}) follows from Corollary 3.1 in \citet{DRZ12}. However, aggregation methods do not in general have theoretical guarantees on the estimation error. Indeed, an estimator with an $\ell_2$-error guarantee is crucial for more challenging tasks, such as out-of-sample prediction and inference. For our transfer learning task, we show in (\ref{re1-ms}) that the estimation error is of the same order if the cardinality of the dictionary satisfies $L\leq cn_0$ for some small enough $c$. For our constructed dictionary, it suffices to require $K\leq cn_0$. In many practical applications, $K$ is relatively small compared to the sample sizes, and hence this assumption is not very strict.
In Remark \ref{re1}, we provide an upper bound on the estimation error which holds for arbitrarily large $L$ but is slower than (\ref{re1-ms}) in general.
In the following, we provide sufficient conditions such that the desirable property (\ref{cond-rk}) holds with $\widehat{R}^{(k)}$ defined in (\ref{eq-hRk}) and hence (\ref{cond-agg}) is satisfied.
For each $k \in\mathcal{A}^c$, define a set
\begin{align}
\label{eq-Hk}
H_k=\left\{1\leq j\leq p: |\Sigma^{(k)}_{j,.}w^{(k)}-\Sigma^{(0)}_{j,.}\beta|>n_*^{-\kappa}, ~\kappa<\alpha/2\right\}.
\end{align}
Recall that $\alpha$ is defined such that $t_*=n_*^\alpha$. In fact, $H_k$ is the set of ``strong'' marginal statistics that can be consistently selected into $\widehat{T}_k$ for each $k\in \mathcal{A}^c$.
We see that $\Sigma^{(k)}_{j,.}w^{(k)}-\Sigma^{(0)}_{j,.}\beta=\Sigma_{j,.}\delta^{(k)}$ if $\Sigma^{(k)}=\Sigma^{(0)}$ for $k\in\mathcal{A}^c$. The definition of $H_k$ in (\ref{eq-Hk}) allows for heterogeneous designs among the non-informative auxiliary samples.
\begin{condition}
\label{cond4}{\rm
(a) For each $k\in \mathcal{A}^c$, each row of $X^{(k)}$ is \textit{i.i.d.} Gaussian with mean zero and covariance matrix $\Sigma^{(k)}$. The largest eigenvalue of $\Sigma^{(k)}$ is bounded away from infinity for any $k\in \mathcal{A}^c$.
For each $k\in \mathcal{A}^c$, the random noises $\eps^{(k)}_i$ are \textit{i.i.d.} Gaussian with mean zero and variance $\sigma^2_k$.
(b) It holds that $\log p\vee \log K\leq c_1 \sqrt{n_*}$ for a small enough constant $c_1$. Moreover,
\begin{align}
\label{eq1-cond4b}
\min_{k \in\mathcal{A}^c}\sum_{j\in H_k}|\Sigma^{(k)}_{j,.}w^{(k)}-\Sigma^{(0)}_{j,.}\beta|^2\geq \frac{c_2\log p}{n_*^{1-\alpha}}
\end{align}
for some large enough constant $c_2>0$.
} \end{condition}
The Gaussian assumptions in Condition \ref{cond4}(a) guarantee the desirable properties of SURE screening for the non-informative auxiliary studies. In fact, the Gaussian assumption can be relaxed to sub-Gaussian random variables according to some recent studies \citep{SURE2}. For conciseness of the proof, we consider Gaussian distributed random variables.
Condition \ref{cond4}(b) first puts a constraint on the relative dimensions. It is trivially satisfied in the regime where $p\vee K\leq n_*^{\xi}$ for any finite $\xi>0$. The expression (\ref{eq1-cond4b}) requires that for each $k\in\mathcal{A}^c$, there exists a subset of strong marginal statistics whose squared sum is beyond some noise barrier. This condition is mild when $\alpha$ is chosen such that $\log p\ll n_*^{1-\alpha}$, and $\alpha=1/2$ is a natural choice in view of the first part of Condition \ref{cond4}(b). For instance, if $\min_{k\in \mathcal{A}^c}\|{\mathbb{E}}[\widehat{\Delta}^{(k)}]\|_{\infty}\geq c_0>0$, then (\ref{eq1-cond4b}) holds with any $\alpha\leq 1/2$. In words, a sufficient condition for (\ref{eq1-cond4b}) is that at least one marginal statistic in the $k$-th study is of constant order for each $k\in\mathcal{A}^c$. We see that a larger $n_*$ makes Condition \ref{cond4} weaker. As mentioned before, it is helpful to remove the auxiliary samples with very small sample sizes from the analysis.
In the next theorem, we demonstrate the theoretical properties of $\widehat{R}^{(k)}$ and provide a complete analysis of the Trans-Lasso algorithm. Let $\mathcal{A}^o$ be a subset of $\mathcal{A}$ such that
\[
\mathcal{A}^o=\left\{k\in \mathcal{A}:\|\Sigma^{(0)}\delta^{(k)}\|_2^2\leq c_1\min_{k \in\mathcal{A}^c}\sum_{j\in H_k}|\Sigma^{(k)}_{j,.}w^{(k)}-\Sigma^{(0)}_{j,.}\beta|^2\right\}
\]
for some $c_1<1$ and $H_k$ defined in (\ref{eq-Hk}). In general, one can see that the informative auxiliary samples with sparser $\delta^{(k)}$ are more likely to be included in $\mathcal{A}^o$. Specifically, the fact that $\max_{k\in \mathcal{A}}\|\Sigma^{(0)}\delta^{(k)}\|_2^2\leq \|\Sigma^{(0)}\|_2^2h^2$ implies $\mathcal{A}^o=\mathcal{A}$ when $h$ is sufficiently small. We will show (\ref{cond-rk}) for such $\mathcal{A}^o$ with $\widehat{R}^{(k)}$ defined in (\ref{eq-hRk}).
Let $n_{\mathcal{A}^o}=\sum_{k\in \mathcal{A}^o}n_k$.
\begin{theorem}[Convergence Rate of the Trans-Lasso]
\label{sec4-lem1}
Assume that the conditions of Theorem \ref{thm0-l1} and Condition \ref{cond4} hold. Then
\begin{align}
\label{cond-agg2}
{\mathbb{P}}\left(\max_{k \in \mathcal{A}^o}\widehat{R}^{(k)}< \min_{k \in \mathcal{A}^c}\widehat{R}^{(k)} \right)\rightarrow 1.
\end{align}
Let $\hat{\beta}^{\hat{\theta}}$ be computed using the Trans-Lasso algorithm with $\lambda_{\theta}\geq 4\sigma^2_0$. If $K\leq cn_0$ for a sufficiently small constant $c>0$, then
\begin{align}
& \frac{1}{|\mathcal{I}^c|}\left\|X^{(0)}_{\mathcal{I}^c,.}(\hat{\beta}^{\hat{\theta}}-\beta)\right\|_2^2\vee\left\|\hat{\beta}^{\hat{\theta}}-\beta\right\|_2^2\nonumber\\
&=O_P\left(\frac{s\log p}{n_{\mathcal{A}^o}+n_0}+\frac{s\log p}{n_0}\wedge h\sqrt{\frac{\log p}{n_0}}\wedge h^2+\frac{\log K}{n_0}\right).
\label{re3-agg}
\end{align}
\end{theorem}
\begin{remark}
\label{re3}{\rm
Under the conditions of Theorem \ref{sec4-lem1}, if
\|\Sigma^{(0)}\|_2^2h^2\leq c_0\min_{k \in\mathcal{A}^c}\sum_{j\in H_k}|\Sigma^{(k)}_{j,.}w^{(k)}-\Sigma^{(0)}_{j,.}\beta|^2~\text{for some}~c_0<1,
\]
then ${\mathbb{P}}\left(\max_{k \in \mathcal{A}}\widehat{R}^{(k)}< \min_{k \in \mathcal{A}^c}\widehat{R}^{(k)} \right)\rightarrow 1$ and
\begin{align*}
& \frac{1}{|\mathcal{I}^c|}\left\|X^{(0)}_{\mathcal{I}^c,.}(\hat{\beta}^{\hat{\theta}}-\beta)\right\|_2^2\vee\left\|\hat{\beta}^{\hat{\theta}}-\beta\right\|_2^2\\
&=O_P\left(\frac{s\log p}{n_{\mathcal{A}}+n_0}+\frac{s\log p}{n_0}\wedge h\sqrt{\frac{\log p}{n_0}}\wedge h^2+\frac{\log K}{n_0}\right).
\end{align*}
}\end{remark}
The result in (\ref{cond-agg2}) implies the estimated sparse indices in $\mathcal{A}^o$ and in $\mathcal{A}^c$ are separated with high probability. As illustrated before, a consequence of (\ref{cond-agg2}) is (\ref{cond-agg}) for the candidate sets $\widehat{G}_l$ defined in (\ref{eq-hGl}). Together with Theorem \ref{thm0-l1} and Lemma \ref{thm-ag1}, we arrive at (\ref{re3-agg}).
In Remark \ref{re3}, we develop a sufficient condition for $\mathcal{A}^o=\mathcal{A}$, which requires a sufficiently small $h$. Under this condition, the estimation and prediction errors of the Trans-Lasso are comparable to the case where $\mathcal{A}$ is known, i.e., adaptation to $\mathcal{A}$ is achieved. Remark \ref{re3} is a direct consequence of Theorem \ref{sec4-lem1} and the fact that $\max_{k\in\mathcal{A}}\|\Sigma^{(0)}\delta^{(k)}\|_2^2\leq \|\Sigma^{(0)}\|_2^2h^2$.
It is worth mentioning that Condition \ref{cond4} is only employed to show the gain of Trans-Lasso and the robustness property of Trans-Lasso holds without any conditions on the non-informative samples (Lemma \ref{thm-ag1}). In practice, missing a few informative auxiliary samples may not be a very serious concern. One can see that when $n_{\mathcal{A}^o}$ is large enough such that the first term on the right-hand side of (\ref{re3-agg}) no longer dominates, increasing the number of auxiliary samples will not improve the convergence rate. In contrast, it is more important to guarantee that the estimator is not affected by the adversarial auxiliary samples.
The empirical performance of Trans-Lasso is carefully studied in Section \ref{sec-simu}.
\section{Extensions to Heterogeneous Designs and $\ell_q$-sparse Contrasts}
\label{sec4}
In this section, we extend the algorithms and theoretical results developed in Sections \ref{sec2} and \ref{sec3}. Section \ref{sec4-1} considers the case where the design matrices are heterogeneous with different covariance structures, and Section \ref{sec4-2} generalizes the sparse contrasts from the $\ell_1$-constraint to the $\ell_q$-constraint for $q\in [0,1)$ and presents a rate-optimal estimator in this setting.
\subsection{Heterogeneous Designs}
\label{sec4-1}
The Oracle Trans-Lasso algorithm proposed in Section \ref{sec2} can be directly applied to the setting where the design matrices are heterogeneous.
To establish the theoretical guarantees in the heterogeneous case, we first introduce a relaxed version of Condition \ref{cond1} as follows.
\begin{condition}
\label{cond1b}{\rm
For each $k\in \mathcal{A}\cup\{0\}$, each row of $X^{(k)}$ is \textit{i.i.d.} Gaussian with mean zero and covariance matrix $\Sigma^{(k)}$. The smallest and largest eigenvalues of $\Sigma^{(k)}$ are bounded away from zero and infinity, respectively, for all $k\in \mathcal{A}\cup\{0\}$.
}\end{condition}
Define
\[
C_{\Sigma}=1+\max_{j\leq p}\max_{k \in \mathcal{A}}\Big\|e_j^{\intercal}\big(\Sigma^{(k)}-\Sigma^{(0)}\big)\Big(\sum_{k \in \mathcal{A}\cup \{0\}}\alpha_k\Sigma^{(k)}\Big)^{-1}\Big\|_1.
\]
The parameter $C_{\Sigma}$ characterizes the differences between $\Sigma^{(k)}$ and $\Sigma^{(0)}$ for $k\in\mathcal{A}$. Notice that $C_{\Sigma}$ is a constant if $\max_{1\leq j\leq p}\|e_j^{\intercal}(\Sigma^{(k)}-\Sigma^{(0)})\|_0\leq C<\infty$ for all $k \in \mathcal{A}$, where examples include block diagonal $\Sigma^{(k)}$ with constant block sizes or banded $\Sigma^{(k)}$ with constant bandwidths for $k \in \mathcal{A}$.
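As a quick numerical illustration (our own), $C_{\Sigma}$ can be evaluated directly once the covariances are specified; in this sketch we take the weights $\alpha_k$ to be the sample-size proportions, which is an assumption on the weights.
\begin{verbatim}
import numpy as np

def C_Sigma(Sigma0, Sigma_aux, n0, n_aux):
    # C_Sigma = 1 + max_j max_k ||e_j'(Sig_k - Sig_0) Sbar^{-1}||_1
    n_all = [n0] + list(n_aux)
    S_all = [Sigma0] + list(Sigma_aux)
    Sbar = sum(n * S for n, S in zip(n_all, S_all)) / sum(n_all)
    Sinv = np.linalg.inv(Sbar)
    worst = 0.0
    for Sk in Sigma_aux:
        M = (Sk - Sigma0) @ Sinv   # row j equals e_j'(...)Sbar^{-1}
        worst = max(worst, np.abs(M).sum(axis=1).max())
    return 1.0 + worst
\end{verbatim}
For instance, feeding banded matrices with a constant bandwidth keeps the row-wise $\ell_1$ norms, and hence $C_{\Sigma}$, bounded as $p$ grows.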
The following theorem characterizes the rate of convergence of the Oracle Trans-Lasso estimator in terms of $C_{\Sigma}$.
\begin{theorem}[Oracle Trans-Lasso with heterogeneous designs]
\label{thm1-l1}
Assume that Condition \ref{cond2} and Condition \ref{cond1b} hold true. We take $\lambda_w= \max_{k\in\mathcal{A}\cup\{0\}}c_1\sqrt{{\mathbb{E}}[(y^{(k)}_i)^2]\log p/(n_{\mathcal{A}}+n_0)}$ and $\lambda_\delta=c_2\sigma_0 \sqrt{\log p/n_0}$ for some sufficiently large constants $c_1$ and $c_2$ only depending on $C_0$.
If $s\log p/(n_{\mathcal{A}}+n_0)+C_{\Sigma}h(\log p/n_0)^{1/2}=o((\log p/n_0)^{1/4})$, then
\begin{align}
&\frac{1}{n_0}\|X^{(0)}(\hat{\beta}-\beta)\|_2^2\vee\|\hat{\beta}-\beta\|_2^2\nonumber\\
&=O_P\left(\frac{s\log p}{n_{\mathcal{A}}+n_0}+\frac{s\log p}{n_0}\wedge C_{\Sigma}h\sqrt{\frac{\log p}{n_0}}\wedge C_{\Sigma}^2h^2\right) \label{l1-re2}.
\end{align}
\end{theorem}
When $\mathcal{A}$ is non-empty, the right-hand side of (\ref{l1-re2}) is sharper than $s\log p/n_0$ if $n_{\mathcal{A}}\gg n_0$ and $C_{\Sigma}h\sqrt{\log p/n_0}\ll s$. We see that a small $C_{\Sigma}$ is favorable. This implies that the informative auxiliary samples should not only have sparse contrasts but also have Gram matrices similar to the primary one.
When $\mathcal{A}$ is unknown, we consider $\tilde{\mathcal{A}}^o$, a subset of $\mathcal{A}$ such that
\[
\tilde{\mathcal{A}}^o=\left\{k\in \mathcal{A}:\|\Sigma^{(k)}w^{(k)}-\Sigma^{(0)}\beta\|_2^2<c_1\min_{k \in\mathcal{A}^c}\sum_{j\in H_k}|\Sigma^{(k)}_{j,.}w^{(k)}-\Sigma^{(0)}_{j,.}\beta|^2\right\}
\]
for some $c_1<1$ and $H_k$ defined in (\ref{eq-Hk}). This is a generalization of $\mathcal{A}^o$ to the case of heterogeneous designs.
\begin{corollary}[Trans-Lasso with heterogeneous designs]
\label{cor1-l1}
Assume the conditions of Theorem \ref{thm1-l1} and Condition \ref{cond4}.
Let $\hat{\beta}^{\hat{\theta}}$ be computed via the Trans-Lasso algorithm with $\lambda_{\theta}\geq 4\sigma^2_0$. If $K\leq cn_0$ for a small enough constant $c$, then
\begin{align}
& \frac{1}{|\mathcal{I}^c|}\left\|X^{(0)}_{\mathcal{I}^c,.}(\hat{\beta}^{\hat{\theta}}-\beta)\right\|_2^2\vee \|\hat{\beta}^{\hat{\theta}}-\beta\|_2^2\nonumber\\
&=O_P\left(\frac{s\log p}{n_{\tilde{\mathcal{A}}^o}+n_0}+\frac{s\log p}{n_0}\wedge C_{\Sigma}h\sqrt{\frac{\log p}{n_0}}\wedge C_{\Sigma}^2h^2+\frac{\log K}{n_0}\right).
\label{re4-agg}
\end{align}
\end{corollary}
Corollary \ref{cor1-l1} provides an upper bound for the Trans-Lasso with heterogeneous designs. The numerical experiments for this setting are studied in Section \ref{sec-simu}.
\subsection{$\ell_q$-sparse Contrasts}
\label{sec4-2}
We have so far focused on the $\ell_1$-sparse characterization of the contrast vectors. The established framework can be extended to settings where the contrast vectors are characterized in terms of the $\ell_q$-norm for $q\in[0,1)$. We discuss the exactly sparse contrasts ($q=0$) here and leave the results for $q\in(0,1)$ to the Appendix. We first consider the case where $\mathcal{A}_0$ is known and present the proposed algorithm in Algorithm \ref{algo3}.
\vspace{0.1in}
\begin{algorithm}[H]
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\SetAlgoLined
\Input{Primary data $(X^{(0)},y^{(0)})$ and informative auxiliary samples $\{X^{(k)},y^{(k)}\}_{k\in\mathcal{A}_0}$}
\Output{$\hat{\beta}(\mathcal{A}_0)$}
\underline{Step 1}.
Estimate each $\delta^{(k)}$, $k\in\mathcal{A}_0$, via
{\small
\begin{align*}
&\hat{\delta}^{(k)}=\mathop{\rm arg\, min}_{\delta\in{\real}^p}\left\{\frac{1}{2}\delta^{\intercal}\widehat{\Sigma}^{\mathcal{A}_0}\delta-\delta^{\intercal}[(X^{(k)})^{\intercal}y^{(k)}/n_k-(X^{(0)})^{\intercal}y^{(0)}/n_0]+\lambda_k\|\delta\|_1\right\},\nonumber
\end{align*}
}
where $\widehat{\Sigma}^{\mathcal{A}_0}=\sum_{k\in \mathcal{A}_0\cup\{0\}}(X^{(k)})^{\intercal}X^{(k)}/(n_{\mathcal{A}_0}+n_0)$ and $\lambda_k>0$.
\underline{Step 2}. Compute
{\small
\begin{align*}
&\hat{\beta}(\mathcal{A}_0)=\mathop{\rm arg\, min}_{b\in{\real}^p} \Big\{\frac{1}{2(n_{\mathcal{A}_0}+n_0)}\sum_{k \in \mathcal{A}_0\cup\{0\}}\|y^{(k)}-X^{(k)}\hat{\delta}^{(k)}-X^{(k)}b\|_2^2+\lambda_{\beta}\|b\|_1\Big\}\;\nonumber
\end{align*}
}
for some $\lambda_{\beta}>0$.
\caption{\textbf{Oracle Trans-Lasso algorithm for $q=0$}}\label{algo3}
\end{algorithm}
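Step 1 minimizes a quadratic form plus an $\ell_1$ penalty given the pooled Gram matrix, which can be solved by proximal gradient descent. The following ISTA sketch (the step size rule and iteration count are our own choices) illustrates this.
\begin{verbatim}
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def step1_delta(Sigma_hat, u, lam, n_iter=1000):
    # ISTA for 0.5 * d' Sigma_hat d - d'u + lam * ||d||_1, where
    # u = X^{(k)'} y^{(k)} / n_k - X^{(0)'} y^{(0)} / n_0.
    step = 1.0 / np.linalg.norm(Sigma_hat, 2)  # 1 / Lipschitz constant
    d = np.zeros_like(u)
    for _ in range(n_iter):
        d = soft_threshold(d - step * (Sigma_hat @ d - u), step * lam)
    return d
\end{verbatim}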
In the above algorithm, we estimate each $\delta^{(k)}$ based on the following moment equation:
\begin{align}
\label{eq-m1}
{\mathbb{E}}[x^{(k)}_iy_i^{(k)}]- {\mathbb{E}}[x^{(0)}_iy_i^{(0)}]-\Sigma\delta^{(k)}=0,~k\in \mathcal{A}_0,
\end{align}
assuming that $\Sigma^{(k)}=\Sigma^{(0)}=\Sigma$ for all $k \in \mathcal{A}_0$. In the realization of Step 1, we replace $ {\mathbb{E}}[x^{(k)}_iy_i^{(k)}]$ and ${\mathbb{E}}[x^{(0)}_iy_i^{(0)}]$ by their unbiased sample versions, and the population Gram matrix $\Sigma$ by its unbiased estimator $\widehat{\Sigma}^{\mathcal{A}_0}$.
In Step 2, we use the ``bias-corrected'' samples to estimate $\beta$. In contrast to the Oracle Trans-Lasso proposed in Section \ref{sec2-1}, the contrast vectors are estimated individually in the above algorithm. This is because a weighted average of $\ell_1$-sparse contrasts remains $\ell_1$-sparse, whereas a weighted average of $\ell_0$-sparse contrasts need not be $\ell_0$-sparse. The computational cost of Step 1 can be relatively high if $\mathcal{A}_0$ is large. Moreover, Step 1 heavily relies on the homogeneous designs among the informative auxiliary samples.
In the next theorem, we prove the convergence rate of $\hat{\beta}(\mathcal{A}_0)$.
\begin{theorem}[Convergence rate of $\hat{\beta}(\mathcal{A}_0)$]
\label{thm1-l0}
Assume that Condition \ref{cond1} and Condition \ref{cond2} hold true. Suppose that
{\small
\begin{equation}
\label{ms-ss}
\frac{h\log p}{n_0}=o\Big(\big(\frac{\log p}{n_0+n_{\mathcal{A}_0}}\big)^{1/4}\Big),~ \frac{s\log p}{n_{\mathcal{A}_0}+n_0}=o(1),~\text{and}~~n_{\mathcal{A}_0}\gtrsim |\mathcal{A}_0|n_0.
\end{equation}
}
For $\hat{\beta}(\mathcal{A}_0)$ computed with $\lambda_k\geq c_1(\|y^{(k)}\|_2/n_k+ \|y^{(0)}\|_2/n_0)\sqrt{\log p}$ and \\ $\lambda_{\beta}=c_2\sqrt{\log p/(n_0+n_{\mathcal{A}_0})}$ for large enough constants $c_1$ and $c_2$ only depending on $C_0$,
{\small
\begin{align}
&\sup_{\beta\in\Theta_0(s,h)}\frac{1}{n_0+n_{\mathcal{A}_0}}\sum\limits_{k\in\mathcal{A}_0\cup\{0\}}\|X^{(k)}(\hat{\beta}(\mathcal{A}_0)-\beta)\|_2^2 \vee \|\hat{\beta}(\mathcal{A}_0)-\beta\|_2^2\nonumber\\
&=O_P\left( \frac{s\log p}{n_{\mathcal{A}_0}+n_0}+\frac{(h\wedge s)\log p}{n_0}\right)\label{l0-re1}.
\end{align}
}
\end{theorem}
We see from (\ref{l0-re1}) that $\hat{\beta}(\mathcal{A}_0)$ has a sharper convergence rate than the Lasso when $n_{\mathcal{A}_0}\gg n_0$ and $h\ll s$. The first two requirements in (\ref{ms-ss}) guarantee that the restricted eigenvalues of the sample Gram matrices are bounded away from zero. The last expression in (\ref{ms-ss}) requires that the average auxiliary sample size is asymptotically no smaller than the primary sample size $n_0$. This is a checkable condition in practice. When $|\mathcal{A}_0|$ is fixed, this condition is trivially satisfied. Indeed, if the auxiliary sample sizes are too small, there would not be much improvement from transfer learning.
We next establish the minimax lower bound for $\beta\in\Theta_0(s,h)$.
\begin{theorem}[Minimax lower bound for $q=0$]
\label{thm-mini2}
Assume Condition \ref{cond1} and Condition \ref{cond2}. Suppose that $\max\{h\log p/n_0, s\log p/(n_{\mathcal{A}_0}+n_0)\}=o(1)$.
There exist constants $c_1$ and $c_2$ such that
\begin{align*}
{\mathbb{P}}\left(\inf_{\hat{\beta}}\sup_{\Theta_0(s,h)} \|\hat{\beta}-\beta\|_2^2\geq c_1\frac{s\log p}{n_{\mathcal{A}_0}+n_0}+ c_2\frac{(h\wedge s)\log p}{n_0}\right)\geq \frac{1}{2}.
\end{align*}
\end{theorem}
Theorem \ref{thm-mini2} demonstrates the minimax optimality of $\hat{\beta}(\mathcal{A}_0)$ in $\Theta_0(s,h)$ under the conditions of Theorem \ref{thm1-l0}.
When $\mathcal{A}_0$ is unknown, one can consider the Trans-Lasso algorithm where the Oracle Trans-Lasso is replaced with the Oracle Trans-Lasso for $q=0$. We see that Lemma \ref{thm-ag1} still holds and a similar result as Theorem \ref{sec4-lem1} can be established with $\mathcal{A}$ replaced by $\mathcal{A}_0$. For the sake of conciseness, it is omitted here.
While $\ell_0$-sparsity is widely assumed and studied in the high-dimensional literature, $\ell_0$-sparse contrast vectors may not be a realistic assumption. First, an $\ell_0$-sparse $\delta^{(k)}$ implies that $\beta$ and $w^{(k)}$ have identical coefficients in most coordinates, which may be impractical. Second, a typical data preprocessing step is to standardize the data such that $\|y^{(k)}\|_2^2=n_k$. While standardization does not change the $\ell_0$-norm of $\beta$ or $w^{(k)}$, it can change the $\ell_0$-norm of $\delta^{(k)}$. Hence, this work focuses on $\ell_1$-sparse contrasts, which are more practical in applications.
\section{Simulation Studies}
\label{sec-simu}
In this section, we evaluate the empirical performance of our proposals and some other comparable methods in various numerical experiments. Specifically, we evaluate the performance of four methods: the original Lasso, the Oracle Trans-Lasso proposed in Section \ref{sec2-1}, the Trans-Lasso proposed in Section \ref{sec3-1}, and the Naive Trans-Lasso, which simply assumes $\mathcal{A}=\{1,\dots, K\}$ in the Oracle Trans-Lasso. The purpose of including the Naive Trans-Lasso is to understand the overall informative level of the auxiliary samples. In the Appendix, we report the performance of the estimated sparsity indices $\widehat{R}^{(k)}$ in achieving (\ref{cond-agg2}).
\subsection{Identity Covariance Matrix for the Designs}
\label{sec5-1}
We consider $p=500$, $n_0=150$, and $n_1,\dots,n_K=100$ for $K=20$. The covariates $x^{(k)}_i$ are \textit{i.i.d.} Gaussian with mean zero and identity covariance matrix for all $0\leq k\leq K$ and $\eps^{(k)}_i$ are \textit{i.i.d.} Gaussian with mean zero and variance one for all $0\leq k\leq K$. For the target parameter $\beta$, we set $s=16$, $\beta_{j}=0.3$ for $j\in \{1,\dots,s\}$, and $\beta_j=0$ otherwise. For the regression coefficients in auxiliary samples, we consider two configurations.
(i) Let
\begin{align*}
w^{(k)}_{j}=\beta_{j}-0.3\mathbbm{1}(j\in H_k).
\end{align*}
For a given $\mathcal{A}$, if $k \in \mathcal{A}$, we set $H_k$ to be a random subset of $[p]$ with $|H_k|=h\in\{2,6,12\}$ and if $k\notin \mathcal{A}$, we set $H_k$ to be a random subset of $[p]$ with $|H_k|=50$.
(ii)
For a given $\mathcal{A}$, if $k \in \mathcal{A}$, let $H_k$ be a random subset of $[p]$ with $|H_k|=p/2$ and let
\[
w^{(k)}_{j}=\beta_{j}+\xi_j\mathbbm{1}(j\in H_k), ~\text{where}~ \xi_j\sim_{i.i.d.} \text{Laplace}(0, 2h/p),
\]
where $h\in\{2,6,12\}$ and $\text{Laplace}(a, b)$ is Laplacian distribution with mean $a$ and dispersion $b$.
If $k\notin \mathcal{A}$, we set $H_k$ to be a random subset of $[p]$ with $|H_k|=p/2$ and let
\[
w^{(k)}_j=\beta_j+\xi_j\mathbbm{1}(j\in H_k),~\text{where}~\xi_j\sim_{i.i.d.} \text{Laplace}(0, 100/p).
\]
The setting (i) can be treated as either $\ell_0$- or $\ell_1$-sparse contrasts.
In practice, the true parameters are unknown and we use $\mathcal{A}$ to denote the set of auxiliary samples without distinguishing $\ell_0$- or $\ell_1$-sparsity. We consider $|\mathcal{A}|\in\{0,4,8,\dots,20\}$.
To avoid searching for tuning parameters, we use the raw data rather than the standardized data.
For the Lasso method, the tuning parameter is chosen to be $\sqrt{2\log p/n_0}$. For the Oracle Trans-Lasso, we set $\lambda_w=\sqrt{2\log p/(n_0+n_{\mathcal{A}})}$ and $\lambda_{\delta}=\sqrt{2\log p/n_0}$. The Naive Trans-Lasso is computed based on the Oracle Trans-Lasso with $\mathcal{A}=\{1,\dots, K\}$. For the Trans-Lasso, we set $\mathcal{I}=\{1,\dots,n_0/2\}$ in Step 1. The sets $\widehat{G}_0,\dots,\widehat{G}_L$ are computed based on SURE screening with $t_*=n_0^{3/4}$. For the Q-aggregation, we implement Algorithm 2 (GD-BMAX) in \citet{Dai18}, which solves a dual representation of the Q-aggregation. Using their notations, we set $\omega=\sqrt{2}$, $\nu=0.5$, and $t_k=1$ in GD-BMAX. We mention that, since this is for demonstration, the tuning parameters are not deliberately optimized. We then perform cross-fitting by using the first half of the samples for aggregation and the other half for constructing the dictionary. Our final estimator is the average of these two Trans-Lasso estimators.
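For reproducibility, the data generation for configuration (i) can be sketched as follows (a minimal version in Python; the random seed and the particular choices of $|\mathcal{A}|$ and $h$ are arbitrary).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2023)
p, n0, nk, K, s = 500, 150, 100, 20, 16
beta = np.zeros(p)
beta[:s] = 0.3

def make_study(n, w):
    X = rng.standard_normal((n, p))     # identity-covariance design
    y = X @ w + rng.standard_normal(n)  # unit-variance Gaussian noise
    return X, y

def w_config_i(informative, h):
    # configuration (i): w_j = beta_j - 0.3 * 1(j in H_k)
    H = rng.choice(p, h if informative else 50, replace=False)
    w = beta.copy()
    w[H] -= 0.3
    return w

A_size, h = 8, 6
aux = [make_study(nk, w_config_i(k < A_size, h)) for k in range(K)]
X0, y0 = make_study(n0, beta)
\end{verbatim}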
The sums of squared estimation errors (SSE) are reported in Figure \ref{fig1-simu}. As expected, the performance of the Lasso does not change as $|\mathcal{A}|$ changes. On the other hand, the three Trans-Lasso based algorithms all have estimation errors decreasing as $|\mathcal{A}|$ increases. As $h$ increases, the problem gets harder and the estimation errors of all three methods increase. In settings (i) and (ii), the performance of the Oracle Trans-Lasso and the Trans-Lasso are comparable on most occasions. When $h=12$ in setting (i), we see a relatively large gap between the SSEs of the Trans-Lasso and the Oracle Trans-Lasso. One main reason is that the proposed $\widehat{R}^{(k)}$ does not consistently separate informative auxiliary samples from the others in this case (Table \ref{table-rank} in the Appendix): the sparsity levels of $\delta^{(k)}$ are similar for $k\in\mathcal{A}$ and $k\notin \mathcal{A}$. In the other cases, the proposed $\widehat{R}^{(k)}$ separates informative auxiliary samples from the others reasonably well.
On the other hand, the Naive Trans-Lasso has worse performance than the Lasso when $|\mathcal{A}|$ is relatively small. This shows that the scenarios under consideration are hard in the sense that naive methods cannot adapt to the unknown $\mathcal{A}$ uniformly.
\begin{figure}
\makebox{\includegraphics[width=0.98\textwidth, height=4.5cm]{Ident-i.eps}}
\makebox{\includegraphics[width=0.98\textwidth, height=4.5cm]{Ident-ii.eps}}
\caption{\label{fig1-simu}Estimation errors of the Lasso, Naive Trans-Lasso, Oracle Trans-Lasso, and Trans-Lasso for the settings with identity covariance matrices. The two rows correspond to configurations (i) and (ii), respectively. Each point is summarized from 200 independent simulations.}
\end{figure}
\subsection{Homogeneous Designs among $\mathcal{A}\cup \{0\}$}
\label{sec5-2}
In this subsection, we consider $x^{(k)}_i$ as \textit{i.i.d.} Gaussian with mean zero and a Toeplitz covariance matrix whose first row is
\[\Sigma^{(k)}_{1,.}=(1,0.8,\dots,0.8^K, 0_{p-K-1})
\]
for $k\in \mathcal{A}\cup\{0\}$. For $k\notin \mathcal{A}\cup\{0\}$, $x^{(k)}_i$ are \textit{i.i.d.} Gaussian with mean zero and a Toeplitz covariance matrix whose first row is
\begin{equation}
\label{eq-Sigk1}
\Sigma^{(k)}_{1,.}=(1,\underbrace{1/(k+1),\dots,1/(k+1)}_{2k-1}, 0_{p-2k}).
\end{equation}
Other true parameters and the dimensions of the samples are set to be the same as in Section \ref{sec5-1}. From the results presented in Figure \ref{fig2-simu}, we see that the Trans-Lasso and Oracle Trans-Lasso have reliable performance when $\Sigma^{(k)}\neq \Sigma^{(k')}$ for $k\in\mathcal{A}\cup\{0\}$ and $k'\notin \mathcal{A}\cup\{0\}$. In Table \ref{table-rank} in the Appendix, we see that our proposed $\widehat{R}^{(k)}$ can separate informative auxiliary samples from others consistently in settings (i) and (ii).
On the other hand, we observe from Figure \ref{fig2-simu} that the SSE of the Trans-Lasso can be slightly below that of the Oracle Trans-Lasso when $0<|\mathcal{A}|< K$. There are two potential reasons. First, as a cross-fitting step is performed in the Trans-Lasso, the samples used for computing the Trans-Lasso differ from those used by the other methods. Second, our definition of $\mathcal{A}$ may not always be the best subset of auxiliary samples, i.e., the one giving the smallest estimation errors.
\begin{figure}
\makebox{\includegraphics[width=0.98\textwidth, height=4.5cm]{Homo-i.eps}}
\makebox{\includegraphics[width=0.98\textwidth, height=4.5cm]{Homo-ii.eps}}
\caption{\label{fig2-simu}Estimation errors of the Lasso, Naive Trans-Lasso, Oracle Trans-Lasso, and Trans-Lasso for the settings with homogeneous covariance matrices among $k\in\mathcal{A}\cup\{0\}$. The two rows correspond to configurations (i) and (ii), respectively. Each point is summarized from 200 independent simulations.}
\end{figure}
\subsection{Heterogeneous Designs}
We now consider $x^{(k)}_i$ as \textit{i.i.d.} Gaussian with mean zero and a Toeplitz covariance matrix whose first row is (\ref{eq-Sigk1}) for $k=1,\dots, K$. Moreover, $\Sigma^{(0)}=I_p$. Other parameters and the dimensions of the samples are set to be the same as in Section \ref{sec5-1}.
Figure \ref{simu-fig4} shows that the general patterns observed in previous subsections still hold. We observe a relatively large gap between the SSEs of the Oracle Trans-Lasso and Trans-Lasso in the scenario when $h=12$ in setting (i). Again, this is because $\widehat{R}^{(k)}$ can only separate a subset of $\mathcal{A}$ from $\mathcal{A}^c$. In other cases, the performance of Trans-Lasso is comparable to the Oracle Trans-Lasso.
\begin{figure}
\makebox{\includegraphics[width=0.98\textwidth, height=4.5cm]{Hetero-i.eps}}
\makebox{\includegraphics[width=0.98\textwidth, height=4.5cm]{Hetero-ii.eps}}
\caption{\label{simu-fig4}Estimation errors of the Lasso, Naive Trans-Lasso, Oracle Trans-Lasso, and Trans-Lasso for the settings with heterogeneous covariance matrices. The two rows correspond to configurations (i) and (ii), respectively. Each point is summarized from 200 independent simulations.}
\end{figure}
\section{Application to Genotype-Tissue Expression Data}
\label{sec-data}
In this section, we demonstrate the performance of our proposed transfer learning algorithm in analyzing the Genotype-Tissue Expression (GTEx) data (\url{https://gtexportal.org/}). Overall, the data sets measure gene expression levels from 49 tissues of 838 human donors, in total comprising 1,207,976 observations of 38,187 genes. In our analysis, we focus on genes that are related to central nervous systems, which were assembled as MODULE\underline{ }137 ( \url{https://www.gsea-msigdb.org/gsea/msigdb/cards/MODULE_137.html}).
This module includes a total of 545 genes and additional 1,632 genes that are significantly enriched in the same experiments as the genes of the module. A complete list of genes can be found at \url{http://robotics.stanford.edu/~erans/cancer/modules/module_137}.
\subsection{Data Analysis Method}
To demonstrate the replicability of our proposal, we consider multiple target genes and multiple target tissues and estimate their corresponding models one by one.
For an illustration of the computation process, we consider the gene \texttt{JAM2} (Junctional adhesion molecule B) as the response variable and treat the other genes in this module as covariates. \texttt{JAM2} is a protein coding gene on chromosome 21 that interacts with a variety of immune cell types and may play a role in lymphocyte homing to secondary lymphoid organs \citep{JAM2}. It is of biological interest to understand how other CNS genes can predict its expression levels in different tissues/cell types.
As an example, we consider the association between \texttt{JAM2} and other genes in a brain tissue as the target model and the association between \texttt{JAM2} and other genes in other tissues as the auxiliary models. As there are multiple brain tissues in the dataset, we treat each of them as the target one at a time. The list of target tissues can be found in Figure \ref{fig-data-1}. The minimum, average, and maximum of the primary sample sizes in these target tissues are 126, 177, and 237, respectively. The gene \texttt{JAM2} is expressed in 49 tissues in our dataset, and we use the 47 tissues with more than 120 measurements on \texttt{JAM2}. The average number of auxiliary samples for each target model is 14,837 over all the non-target tissues. The covariates used are the genes that are in the enriched MODULE\underline{ }137 and have no missing values in any of the 47 tissues. The final covariates include a total of 1,079 genes. The data are standardized before the analysis.
We compare the prediction performance of the Trans-Lasso with the Lasso. To understand the overall informative level of the auxiliary samples, we also compute the Naive Trans-Lasso, which treats all the auxiliary samples as informative. For evaluation, we split the target sample into 5 folds, use 4 folds to train the three algorithms, and use the remaining fold to test their prediction performance. We repeat this process 5 times, each with a different fold of test samples.
We mention that one individual can provide expression measurements in multiple tissues, and these measurements can hardly be independent. Although the dependence of the samples can reduce the efficiency of the estimation algorithms, using auxiliary samples may still be beneficial. However, one needs to choose proper tuning parameters. The tuning parameter for the Lasso and $\lambda_w$ in the Naive Trans-Lasso are chosen by 8-fold cross validation. The $\lambda_{\delta}$ in the Naive Trans-Lasso is set to be $\lambda_w\sqrt{\sum_{k=0}^K n_k/n_0}$. For our proposed Trans-Lasso, we use two-thirds of the training sample to construct the dictionary and one-third of the training sample for aggregation. The sparsity indices are computed in the same way as in our simulations. For computing each $\hat{\beta}(\widehat{G}_l)$, the $\lambda_w$ is chosen by 8-fold cross validation and $\lambda_{\delta}$ is set to be the corresponding $\lambda_w\sqrt{\sum_{k\in \widehat{G}_l} n_k/n_0}$. The tuning parameters in aggregation are chosen as in our simulations.
\subsection{Prediction Performance of the Trans-Lasso for \texttt{JAM2} Expression}
Figure \ref{fig-data-1} shows the errors of the Naive Trans-Lasso and the Trans-Lasso relative to the Lasso for predicting the expression of \texttt{JAM2} using other genes. The prediction errors on the raw scale are provided in the Appendix. We see that the Trans-Lasso algorithm always achieves the smallest prediction errors across different tissues. Its average gain is 17\% compared to the Lasso. This shows that our characterization of the similarity between the target model and a given auxiliary model is suitable for the current problem and that our proposed sparsity index for aggregation is effective in detecting good auxiliary samples. In tissues such as Amygdala and Nucleus accumbens basal ganglia, the Trans-Lasso achieves a relatively significant improvement and has more accurate predictions than the Naive Trans-Lasso. This implies that knowledge from the auxiliary tissues has been successfully transferred to these target tissues for modeling \texttt{JAM2}, even though not all the tissues are informative. In tissues such as Pituitary, the improvements of the Naive Trans-Lasso and the Trans-Lasso are mild. This implies that the regression model for \texttt{JAM2} in Pituitary is relatively distinct from the models in other tissues, so little knowledge can be transferred. Moreover, in tissues such as Frontal Cortex, the prediction performance of the Naive Trans-Lasso can be worse than that of the Lasso, whereas the Trans-Lasso still has the smallest prediction error. This again demonstrates the robustness of our proposal.
\begin{figure}[H]
\centering
\includegraphics[height=6cm, width=0.95\textwidth]{JAM2-pred2.eps}
\caption{ \label{fig-data-1}Prediction errors of the naive Trans-Lasso and Trans-Lasso relative to the Lasso evaluated via 5-fold cross validation for gene \texttt{JAM2} in multiple tissues.}
\end{figure}
\subsection{Prediction Performance of 25 Other Genes on Chromosome 21}
To demonstrate the replicability of our proposal, we also consider other genes on Chromosome 21 which are in Module\underline{ }137 as our target genes. We report the overall prediction performance for these 25 genes in Figure \ref{fig-data-2}. A complete list of these genes and some summary information can be found in the Appendix. Generally speaking, the Trans-Lasso has the best overall performance across all target tissues. Specifically, the improvement of the Trans-Lasso is significant in tissues including Cerebellar Hemisphere, Cortex, and Frontal Cortex. The Naive Trans-Lasso has accuracy comparable to the Lasso in most cases, which implies that the overall similarity between the auxiliary tissues and the target tissues is not very strong.
\begin{figure}
\centering
\makebox{\includegraphics[height=6cm, width=0.99\textwidth]{Chr21-pred2.eps}}
\caption{ \label{fig-data-2}Prediction errors of the naive Trans-Lasso and Trans-Lasso relative to the Lasso for the 25 genes on Chromosome 21 and in Module\underline{ }137, in multiple target tissues.}
\end{figure}
\section{Discussion}
\label{sec:discussion}
This paper studies high-dimensional linear regression in the presence of additional auxiliary samples, where the similarity of the target model and a given auxiliary model is characterized by the sparsity of their contrast vector. Transfer learning algorithms for estimation and prediction are developed. The results show that if the informative set is known, the proposed Oracle Trans-Lasso is minimax optimal over a range of parameter spaces and the accuracy for estimation and prediction can be improved.
\citet{Bastani18} considered the setting of a known informative set with one large-scale auxiliary study, which is a special case of our problem set-up. However, their upper bound analysis is not minimax rate optimal in the parameter space considered in this work.
Adaptation to the unknown informative set is also considered. It is shown that adaptation can be achieved by aggregating a collection of candidate estimators. Numerical experiments and real data applications support the theoretical findings.
Transfer learning for high-dimensional linear regression is an important problem with a wide range of potential applications. However, statistical theory and methods have not been well developed in the literature. Using our similarity characterization of the auxiliary studies, it is also interesting to study statistical inference such as constructing confidence intervals and hypothesis testing for high-dimensional linear regression with auxiliary samples.
In view of the results derived in this paper, one may expect weaker sample size conditions in the transfer learning setting than in the conventional case. It is interesting to provide a precise characterization and to develop a minimax optimal confidence interval in the transfer learning setting. On the other hand, different measurements of the similarity can be used when they are appropriate, which can lead to new methods and insights into the underlying structure of the transfer learning algorithm.
\section*{Acknowledgements}
This research was supported by NIH grants GM129781 and GM123056 and NSF Grant DMS-1712735.
\newcommand{\otimes}{\otimes}
\def \varkappa {\varkappa}
\def ~---\ {~---\ }
\def \nolinebreak {\nolinebreak}
\def\={\;=\;}
\def\begin{aligned}{\begin{aligned}}
\def\end{aligned}{\end{aligned}}
\newcommand{\longrightarrow}{\longrightarrow}
\newcommand{\Ker}{{\text{Ker }}}
\newcommand{\Spec}{{\text{Spec }}}
\newcommand{\mathrm{virt}}{\mathrm{virt}}
\newcommand{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}
\def \udot {{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}
\DeclareMathOperator{\ch}{ch}
\DeclareMathOperator{\ind}{ind}
\DeclareMathOperator{\Cl}{Cl}
\DeclareMathOperator{\FT}{FT}
\DeclareMathOperator{\Hom}{Hom}
\DeclareMathOperator{\End}{End}
\DeclareMathOperator{\Aut}{Aut}
\DeclareMathOperator{\RHom}{RHom}
\DeclareMathOperator{\Tr}{Tr}
\DeclareMathOperator{\Ext}{Ext}
\DeclareMathOperator{\Res}{Res}
\DeclareMathOperator{\rk}{rank}
\DeclareMathOperator{\rank}{rank}
\DeclareMathOperator{\mult}{mult}
\DeclareMathOperator{\codim}{codim}
\DeclareMathOperator{\vdim}{vdim}
\DeclareMathOperator{\supp}{supp}
\DeclareMathOperator{\spec}{spec}
\DeclareMathOperator{\sign}{sign}
\DeclareMathOperator{\Supp}{Supp}
\DeclareMathOperator{\Sing}{Sing}
\DeclareMathOperator{\ord}{ord}
\DeclareMathOperator{\Pic}{Pic}
\begin{document}
\allowdisplaybreaks
\renewcommand{\thefootnote}{$\star$}
\renewcommand{\PaperNumber}{005}
\FirstPageHeading
\ShortArticleName{Upper Bounds for Mutations of Potentials}
\ArticleName{Upper Bounds for Mutations of Potentials\footnote{This
paper is a contribution to the Special Issue ``Mirror Symmetry and Related Topics''. The full collection is available at \href{http://www.emis.de/journals/SIGMA/mirror_symmetry.html}{http://www.emis.de/journals/SIGMA/mirror\_{}symmetry.html}}}
\Author{John Alexander CRUZ MORALES~$^{\dag^1}$ and Sergey GALKIN~$^{\dag^2\dag^3\dag^4\dag^5}$}
\AuthorNameForHeading{J.A.~Cruz Morales and S.~Galkin}
\Address{$^{\dag^1}$ Department of Mathematics and Information Sciences, Tokyo Metropolitan University, \\
\hphantom{$^{\dag^1}$}~Minami-Ohsawa 1-1, Hachioji, Tokyo 192-0397, Japan}
\EmailDD{\href{mailto:[email protected]}{[email protected]}, \href{mailto:[email protected]}{[email protected]}}
\Address{$^{\dag^2}$~Kavli Institute for the Physics and Mathematics of the Universe, The University of Tokyo,\\
\hphantom{$^{\dag^2}$}~5-1-5 Kashiwanoha, Kashiwa, 277-8583, Japan}
\Address{$^{\dag^3}$~Independent University of Moscow, 11 Bolshoy Vlasyevskiy per., 119002, Moscow, Russia}
\Address{$^{\dag^4}$~Moscow Institute of Physics and Technology, 9 Institutskii per.,\\
\hphantom{$^{\dag^4}$}~Dolgoprudny, 141700, Moscow Region, Russia}
\EmailDD{\href{mailto:[email protected]}{[email protected]}}
\Address{$^{\dag^5}$ Universit\"at Wien, Fakult\"at f\"ur Mathematik, Garnisongasse 3/14, A-1090 Wien, Austria}
\ArticleDates{Received May 31, 2012, in f\/inal form January 16, 2013; Published online January 19, 2013}
\Abstract{In this note we provide a new, algebraic proof of the excessive Laurent phenomenon for mutations of potentials (in the sense of [Galkin~S., Usnich~A., {P}reprint IPMU 10-0100, 2010]) by introducing to this theory the analogue of the upper bounds from~[Berenstein~A., Fomin~S., Zelevinsky~A., \textit{Duke Math.~J.} \textbf{126} (2005), 1--52].}
\Keywords{cluster algebras; Laurent phenomenon; mutation of potentials; mirror symmetry}
\Classification{13F60; 14J33; 53D37}
\renewcommand{\thefootnote}{\arabic{footnote}}
\setcounter{footnote}{0}
\section{Introduction}
The idea of mutations of potentials was introduced in \cite{GU}
and the Laurent phenomenon was established in the two dimensional case
by means of birational geometry of surfaces.
More precisely, in op.\ cit.\ the authors considered
a toric surface $X$ with a rational function $W$ (a~potential),
and using certain special birational transformations (mutations),
they established the (excessive) Laurent phenomenon which roughly says that
if $W$ is a Laurent polynomial whose mutations are Laurent polynomials, then
all subsequent mutations of these polynomials are also Laurent polynomials
(see Theorem~\ref{ulemma} in Appendix~\ref{appendix2} for a precise statement of the excessive Laurent phenomenon
as established in~\cite{GU}).
The motivating examples of such potentials
come from the mirror images of special Lagrangian tori
on del Pezzo surfaces~\cite{FOOO}
and Auroux's wall-crossing formula relating invariants of dif\/ferent tori~\cite{AU}.
The cluster algebras theory of Fomin and Zelevinsky~\cite{FZ} provides
an inductive way to construct \emph{some} birational transformations
of $n$ variables as a consecutive composition of elementary ones (called elementary mutations)
with a choice of $N=n$ directions at each step.
The theory developed in \cite{GU} can be seen as an extension of the theory of cluster algebras~\cite{FZ} in which the number of directions of mutations $N$ is allowed to be (much) bigger than the number of variables $n$, but at least one function remains a Laurent polynomial after all mutations. So, it is natural to try to extend the machinery of the theory of cluster algebras to this new setup. The main goal of this paper is to take the f\/irst step in such an extension by introducing upper bounds (in the sense of~\cite{BFZ}) and establishing the excessive Laurent phenomenon~\cite{GU} in terms of them.
It is worth noticing that a further generalization can be done and in a forthcoming work~\cite{CG} we plan to study the quantization of the mutations of potentials and their upper bounds. Naturally, this quantization can be seen as an extension of the theory of quantum cluster algebras developed in~\cite{BZ,BZ1} and the theory of cluster ensembles in~\cite{FG}.
The upper bounds introduced in this paper can be described as a collection of regular functions that remain regular after one elementary mutation in any direction. Thus, we can establish the main result of this paper in the following terms (see Theorem~\ref{ublemma} for the exact formulation).
\begin{theorem*}[Laurent phenomenon in terms of the upper bounds]
The upper bounds are preserved by mutations.
\end{theorem*}
Aside from providing a new proof for the excessive Laurent phenomenon and the already mentioned generalization in the quantized setup, the algebraic approach that we are introducing here is helpful for tackling the following two problems:
\begin{enumerate}\itemsep=0pt
\item
Develop a higher dimensional theory (i.e.\ dimension higher than $2$) for the mutations of potentials. Some work in that direction is carried out in \cite{CGA}.
\item
Present an explicit construction to compactify Landau--Ginzburg models (Problem~44 of~\cite{GU}).
\end{enumerate}
In the present paper we do not deal with the above two problems (only a small comment on~2 will be made at the end of the paper). We plan to give a detailed discussion of them in~\cite{CG} too. We just want to mention that the new algebraic approach has interesting geometrical applications.
Some words about the organization of the text are in order. In Section~\ref{sec-def} we extend the theory developed in~\cite{GU} to lattices of arbitrary rank and general bilinear forms (i.e., we can consider even degenerate and not unimodular forms) and introduce the notion of upper bounds in order to establish our main theorem. In Section~\ref{section3} we actually establish the main theorem and present its proof when the rank of the lattice is two and the form is non-degenerate which is the case of interest for the geometrical setup of~\cite{GU}. In the last section some questions and future developments are proposed. For the sake of completeness of the presentation we include two appendices. In Appendix~\ref{sec-bfz} we review some def\/initions of~\cite{BFZ} and brief\/ly compare their theory with ours. Appendix~\ref{appendix2} is dedicated to presenting the Laurent phenomenon in terms of~\cite{GU}.
\section{Mutations of potentials and upper bounds} \label{sec-def}
Now we present an extension of the theory of mutations of potentials~\cite{GU}
(as formulated by the second author and Alexandr Usnich)
and introduce our modif\/ied def\/initions with the new def\/inition of upper bound.
Notice that a slightly dif\/ferent theory (which f\/its into the framework of this paper, but not \cite{GU})
is used in our software code\footnote{\url{http://member.ipmu.jp/sergey.galkin/degmir.gp}.}.
\subsection{Combinatorial data}
Let $(\cdot,\cdot) : L^* \times L \to \mathbb{Z}$
be the canonical pairing between
a pair of dual lattices
$L \simeq \mathbb{Z}^r$ and $L^* = \Hom(L,\mathbb{Z}) \simeq \mathbb{Z}^r$.
In what follows the lattice $L$ is endowed with a skew-symmetric bilinear integral form $\omega: L \times L \to \mathbb{Z}$
(we use the notation $\ang{v,v'} = \omega(v,v')$).
In the most important (both technically, and from the point of view of applications) case
$r = \rk L = 2$, we have $\Lambda^2 L \simeq \mathbb{Z}$,
so all integer skew-symmetric bilinear forms are integer multiples $\omega_k = k \omega_1$ ($k \in \mathbb{Z}$)
where a~generator~$\omega_1$ is f\/ixed by the choice of orientation on $L \otimes \mathbb{R}$
so that $\omega_1((1,0),(0,1)) = 1$.
We would occasionally use notations
$\ang{\cdot,\cdot}_1 = \omega_1(\cdot,\cdot)$
and
$\ang{\cdot,\cdot}_k = \omega_k(\cdot,\cdot)$.
The bilinear form $\omega$ gives rise to a map $i = i_{\omega} : L \to L^*$ that sends an element $v \in L$ into a linear form
$i_\omega(v) \in L^*$ such that $(i_\omega(v),v') = \omega(v,v')$ for any $v' \in L$.
The map $i_\omega$ is an isomorphism $\iff$ the form $\omega$ is non-degenerate and unimodular;
when~$\omega$ is non-degenerate but not unimodular the map $i_\omega$ identif\/ies the lattice $L$
with a full sublattice in $L^*$ of index~$\det \omega$;
f\/inally, if $\omega$ is degenerate then both the kernel and the cokernel of the map $i_\omega$ have positive rank.
We would like to have some functoriality,
so we consider a category whose objects are given by pairs $(L,\omega)$ of the lattice $L$
and a skew-symmetric bilinear form $\omega$,
and the morphisms $\Hom((L',\omega'),(L,\omega))$ are linear maps $f : L' \to L$
such that $\omega' = f^* \omega$,
i.e.\ $\omega(v_1,v_2) = \omega'(f(v_1),f(v_2))$ for all $v_1,v_2 \in L'$.
Any linear map $f : L' \to L$ def\/ines an adjoint $f^* : L^* \to L'^*$
and if it respects the bilinear forms,
then $i_{\omega'} = f^* i_{\omega} f$.
For a vector $u \in L$ we def\/ine
a \emph{symplectic reflection} $R_u$
and a \emph{piecewise linear mutation} $\mu_u$
to be the (piecewise) linear automorphisms of the set $L$ given by the formulae
\begin{gather*}
R_{\omega,u} (v) = v + \omega(u,v) u, \\
\mu_{\omega,u} v = v + \max(0,\omega(u,v)) u.
\end{gather*}
For any morphism $f \in \Hom((L',\omega'),(L,\omega))$ and any vector $u \in L'$ we have
$R_{\omega, fu} f = f R_{\omega',u}$ and
$\mu_{\omega, fu} f = f \mu_{\omega',u}$.
Indeed, $f \mu_u v = f (v + \max(0,\omega'(u,v)) u) = fv + \max(0,\omega'(u,v)) (fu) = fv + \max(0,\omega(fu,fv)) (fu) = \mu_{fu} (fv)$.
Note that
$R_{a \omega, bu} = R_{\omega,u}^{a b^2}$ for all $a,b \in \mathbb{Z}$
and
$\mu_{a \omega, bu} = \mu_{\omega,u}^{a b^2}$ for all $a,b \in \mathbb{Z}_+$.
However
$\mu_{\omega,-u} v = - \mu_u (-v) = v + \min(0,\omega(u,v)) u$,
hence
$\mu_{\omega,-u} \mu_{\omega,u} = R_{\omega,u}$.
Both $R_{\omega,u}$ and $\mu_{\omega,u}$ are invertible:
$R_{\omega,u}^{-1} v = R_{-\omega,u} v = v - \omega(u,v) u$,
$\mu_{\omega,u}^{-1} v = \mu_{-\omega,-u} v = R_{\omega,u}^{-1} \mu_{\omega, -u} v = v - \max(0,\omega(u,v)) u$.
Note that
$\mu_{\omega,-u}^{-1} v = R_{\omega,-u}^{-1} \mu_{\omega,u} v = R_{\omega,u}^{-1} \mu_{\omega,u}(v) = v - \min(0,\omega(u,v)) u$.
Therefore, changing $\max$ by $\min$ and $+$ by $-$, simultaneously,
corresponds to changing the form $\omega$ to the opposite $-\omega$.
Further we omit $\omega$ from the notations of $R_u$ and $\mu_u$ where the choice of the form is clear.
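As an illustration, the following short Python sketch (ours, not part of the software code mentioned above) implements $R_u$ and $\mu_u$ on $L = \mathbb{Z}^2$ with the form $\omega_k$ and spot-checks the relation $\mu_{\omega,-u}\,\mu_{\omega,u} = R_{\omega,u}$ on a few sample vectors.
\begin{verbatim}
# Symplectic reflection R_u and piecewise-linear mutation mu_u on
# L = Z^2 with omega_k((1,0),(0,1)) = k; spot-check of mu_{-u} mu_u = R_u.
def omega(k, u, v):
    return k * (u[0] * v[1] - u[1] * v[0])

def R(k, u, v):
    w = omega(k, u, v)
    return (v[0] + w * u[0], v[1] + w * u[1])

def mu(k, u, v):
    w = max(0, omega(k, u, v))
    return (v[0] + w * u[0], v[1] + w * u[1])

k, u = 1, (0, 1)
for v in [(1, 0), (2, 3), (-1, 4), (5, -2)]:
    assert mu(k, (-u[0], -u[1]), mu(k, u, v)) == R(k, u, v)
\end{verbatim}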
The underlying combinatorial gadget of our story
is a collection of $n$ vectors in $L$:
\begin{definition}
An \emph{exchange collection} $V$ is an element of $L^n$, i.e.\ an $n$-tuple $(v_1,\dots,v_n)$ of
vectors~$v_i \in L$.
Some~$v_i$ may coincide.
For a~vector $v$ its \emph{multiplicity} $m_V(v)$ in the exchange collection $V$ equals the number of vectors in $V$ that coincide with $v$:
$m_V (v) = \# \{ 1 \leqslant i \leqslant n : v_i = v\}$. We say that an exchange collection $V'$ is a \emph{subcollection} of exchange collection $V$ if
$m_{V'} \leqslant m_V$.
Equivalently, one may def\/ine an exchange collection $V$ by its (non-negative integer) multiplicity function
$m_V : L \to \mathbb{Z}_{\geqslant 0}$. In this case $n = \sum\limits_{v \in L} m_V(v)$.
\end{definition}
The exchange collections can be pushed forward by morphisms $f \in \Hom((L',\omega'),(L,\omega))$:
$(v_1,\dots,v_n) \in L'^n$ goes to $(f v_1,\dots, f v_n) \in L^n$.
This gives rise to a natural diagonal action of $\mathrm{Aut} (L,\omega) = \operatorname{Sp}(L,\omega)$
on $L^n$. This action commutes with the permuting action of~$S_n$.
A vector $n \in L$ is called \emph{primitive}
if it is nonzero and its coordinates are coprime, i.e.~$n$ does not belong to the sublattice $k L$ for any $k>1$,
in other words $n$ is not a multiple of another vector in~$L$.
We denote the set of all primitive vectors in~$L$ as~$L_1$.
Similarly one can def\/ine primitive vectors in the dual lattice~$L^*$.
Note that if $\det \omega \neq \pm{1}$
then $i_\omega(n)$ may be a non-primitive element of~$L^*$ even for primitive elements~$n \in L_1$.
\subsection{Birational transformations}
Consider the group ring $\mathbb{Z}[L^*]$~-- the ring of Laurent polynomials in $r$ variables.
Its spectrum $T = {\text{Spec }} \mathbb{Z}[L^*] \simeq \mathrm{G}_m^r(\mathbb{Z})$ is the $r$-dimensional torus over the integers,
in particular $T(\mathbb{C}) = \Hom(L^*,\mathbb{C}^*)$,
$L^* = \Hom(T,\mathrm{G}_m)$ is the lattice of characters of $T$
and
$L = \Hom(\mathrm{G}_m,T)$ is the lattice of $1$-parameter subgroups in~$T$.
Def\/ine \emph{the ambient field} $\mathbb{K} = \mathbb{K}_L = \mathcal{Q}(L^*)$ as the fraction f\/ield of $\mathbb{Z}[L^*]$
extended by all roots of unity ($\mathcal{Q} = \mathbb{Q}(\exp(2 \pi i \mathbb{Q}))$).
A vector $u \in L$ def\/ines a birational transformation of $\mathbb{K}_L$ (and its various subf\/ields and subrings) as follows
\begin{equation*}
\mu_{u,\omega} : \ X^m \to X^m \big(1+X^{i_\omega(u)}\big)^{(u,m)}.
\end{equation*}
If $f : T_1 \to T_2$ is a rational map between two tori, and $u : \mathrm{G}_m \to T_1$ is a one-parameter subgroup of $T_1$,
then its image $f u : \mathrm{G}_m \to T_2$ is not necessarily a one-parameter subgroup, but asymptotically behaves like one;
this def\/ines a \emph{tropicalization} map $T(f) : \Hom(\mathrm{G}_m,T_1) \to \Hom(\mathrm{G}_m, T_2)$. The tropicalization of the birational
map $\mu_{u,\omega} : T_1 \to T_2$ is the piecewise-linear map
$\mu_{u,\omega} : L_1 \to L_2$ def\/ined in the previous subsection.
One can easily see most of the relations of the previous subsection on the birational level.
For example, $\mu_{-u} \mu_u = \mu_u \mu_{-u} = R_u$ and $R_v \mu_u R_v^{-1} = \mu_{R_v u}$,
where $R_u$ is the homomorphism of the torus $T$ given by
$R_{u,\omega} : X^m \to X^{m+(u,m) i_\omega u}$. Also $R_{a u, b \omega} = R_{u,\omega}^{a^2 b}$ for any $a,b \in \mathbb{Z}$,
and $\mu_{a u,\omega} = (\mu_{u, a \omega})^a$ for any $a \in \mathbb{Z}$, however neither of them is a power of $\mu_{u,\omega}$.\footnote{Since $(1+x^a)$ is not a power of $(1+x)$.} In particular, $(\mu_{u,\omega})^{-1} = \mu_{-u,-\omega}$.
Note that if $M \subset L^*$ is some sublattice of $L^*$ that contains $i_\omega(u)$ then $\mu_u$ preserves the fraction
f\/ield of $\mathbb{Z}[M] \subset \mathbb{Z}[L^*]$.
For any morphism $f \in \Hom((L',\omega'), (L,\omega))$ and a vector $u \in L'$ we have a homomorphism
$f^* : \mathbb{Z}[L^*] \to \mathbb{Z}[L'^*]$ and two birational transformations
$\mu_u \in \mathrm{Aut} \mathbb{K}_{L'}, \mu_{fu} \in \mathrm{Aut} \mathbb{K}_L$ that commute:
$\mu_u f^* = f^* \mu_{fu}$.
\begin{remark} \label{fun1}
We have the following functoriality of the mutations with respect to the lattice~$L$:
let $L' \subset L$ be a sublattice of index $k$ in the lattice~$L$,
so $L^* = \Hom(L,\mathbb{Z})$ is a sublattice of index~$k$ in~$L'^* = \Hom(L',\mathbb{Z})$,
and assume that the vector $u$
lies in the sublattice $L'$.
Then the Abelian group $G = (L/L')$ of order $k$ acts on $\mathcal{Q}[L'^*]$,\footnote{An element $n$ in $L$ multiplies monomial $X^{m'}$ by the root of unity $\exp((2 \pi i) (n,m'))$, here $(n,m')$ is bilinear pairing between
$L$ and $L'^*$ with values in $\mathbb{Q}$ extended by linearity from the pairing $L' \otimes L'^* \to \mathbb{Z}$.}
and its ring of invariants is the subring $\mathcal{Q}[L^*]$,
so~$G$ acts on the torus $T' = {\text{Spec }} \mathcal{Q}[L'^*]$ and the torus $T = {\text{Spec }} \mathcal{Q}[L^*]$ is the quotient-torus $T = T' / G$,
let $\pi : T' \to T$ be the projection to the quotient.
The vector $u$ def\/ines the birational transformation~$\mu_{u,T}$ of the torus $T$ and the birational transformation~$\mu_{u,T'}$ of the torus~$T'$.
Then the mutation~$\mu_u$ commutes with the action of the group~$G$ and with the projections:
$\pi \mu_{u,T'} = \mu_{u,T} \pi$
and $g \mu_{u,T'} = \mu_{u,T'} g$
for any $g \in G$.
\end{remark}
\subsubsection{Rank two case}
Let us see the mutations explicitly in case $\rk L = 2$.
Let $e_1$, $e_2$ be a base of $L$ and $f_1$, $f_2$ be the dual base of~$L^*$,
so $(e_i,f_j) = \delta_{i,j}$. Also let $x_i = X^{f_i}$ be the respective monomials in $\mathbb{Z}[L^*]$.
For
the skew-symmetric bilinear form $\omega_k$ def\/ined by $\omega_k(e_1,e_2) = k$
and a vector $u = u_1 e_1 + u_2 e_2 \in L$
we have
$i_{\omega_k} (u_1 e_1 + u_2 e_2) = (-k u_2) f_1 + (k u_1) f_2$ and so
\begin{equation*}
\mu_{u,\omega_k} : \ (x_1,x_2) \to \big(x_1 \cdot \big(1+x_1^{-k u_2} x_2^{k u_1}\big)^{u_1}, x_2 \cdot \big(1+x_1^{-k u_2} x_2^{k u_1}\big)^{u_2}\big),
\end{equation*}
in particular the inverse map to $\mu_{u,\omega_1}$ is given by
$\mu_{-u,-\omega_1} : (x_1,x_2) \to (x_1 \cdot (1+x_1^{u_2} x_2^{-u_1})^{-u_1}, x_2 \cdot (1+x_1^{u_2} x_2^{-u_1})^{-u_2})$.
In particular, $\mu_{(0,1)}^* f = f\big(x_1,\frac{x_2}{1+x_1}\big)$.
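As a quick consistency check of the formulae above, the following \texttt{sympy} sketch (ours, not the software code mentioned earlier) implements $\mu_{u,\omega_k}$ as a substitution on $(x_1,x_2)$ and verif\/ies for a sample $u$ that $\mu_{-u,-\omega_1}$ inverts $\mu_{u,\omega_1}$, as stated above.
\begin{verbatim}
# Rank-two birational mutation mu_{u,omega_k} as a substitution on
# (x1, x2), and a check that mu_{-u,-omega_1} inverts mu_{u,omega_1}.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)

def mutate(u, k, f):
    u1, u2 = u
    factor = 1 + x1**(-k * u2) * x2**(k * u1)
    return sp.simplify(f.subs({x1: x1 * factor**u1,
                               x2: x2 * factor**u2}, simultaneous=True))

u, k = (1, 1), 1
g = mutate(u, k, x1 * x2)              # push x1*x2 through mu_{u,omega_1}
h = mutate((-u[0], -u[1]), -k, g)      # then through mu_{-u,-omega_1}
assert sp.simplify(h - x1 * x2) == 0   # the composition is the identity
\end{verbatim}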
For any matrix $A = \left(\begin{matrix} a & b \\ c & d\end{matrix} \right) \in \operatorname{SL}(2,\mathbb{Z}) = \operatorname{Sp}(L,\omega)$
there is a regular automorphism of the torus $t_A^* (x_1,x_2) = (x_1^a x_2^b, x_1^c x_2^d)$.
Conjugation by this automorphism acts on the set of mutations:
$\mu_{A u}^* = (t_A \mu_u t_A^{-1})^*$.
So any mutation commutes with an inf\/inite cyclic group given by the stabilizer of $u$ in $\operatorname{Sp}(L,\omega)$,
explicitly if $u=(0,1)$ then in coordinates $(x_1,x_2)$ and $(x_1,x_2' = x_1 x_2)$ the mutation $\mu_{(0,1)}$ is given by the same formula.
Also every mutation commutes with $1$-dimensional subtorus of~$T$,
in case of $u=(0,1)$ the action of the subtorus is given by $(x_1,x_2) \to (x_1, \alpha x_2)$.
\subsection{Mutations of exchange collections and seeds}
Let $L$ be a lattice equipped with a bilinear skew-symmetric form $\omega$.
A~\emph{cluster} $\mathbf{y} \in \mathbb{K}_L^m$ is a collection
$\mathbf{y} = (y_1,\dots,y_m)$
of $m$ rational functions
$y_i \in \mathbb{K}_L$.
We call $\mathbf{y}$ \emph{a base cluster} if $\mathbf{y} = (y_1,\dots,y_r)$
is a base of the ambient f\/ield $\mathbb{K}_L$.
A~$C$-seed (supported on $(L,\omega)$)
is a pair $(\mathbf{y},V)$ of a cluster $\mathbf{y} \in \mathbb{K}_L^m$ and an exchange collection
$V = (v_1,\dots,v_n) \in L^n$.
A~$V$-seed (supported on $(L,\omega)$)
is a pair $(W,V)$ of a rational function $W \in \mathbb{K}_L$ and
an exchange collection $V = (v_1,\dots,v_n) \in L^n$.
Given two exchange collections
$V'=(v_1',\dots,v_n') \in L'^n$
and
$V = (v_1,\dots,v_n) \in L^n$
we say that $V'$ is \emph{a mutation} of $V$
in the direction $1\leqslant j\leqslant n$ and denote it by $V' = \mu_j V$ if
under the given identif\/ication $s_j : L \simeq L'$
we have $v'_j = s_j(-v_j)$
and
$v'_i = s_j(\mu_{v_j} v_i)$ for $i\neq j$.
The mutation of a $C$-seed $(\mathbf{y},V)$ in the direction $1\leqslant j\leqslant n$
is a new $C$-seed $(\mathbf{y}_j,V_j)$ where
$V_j = \mu_j V$ is a mutation of the exchange collection,
and $\mathbf{y}_j = \mu_{v_j,\omega} \mathbf{y}$ where each variable is
transformed by the birational transformation $\mu_{v_j,\omega}$.
The identity $\mu_{-u} \mu_u = R_u$ implies that $\mu_j(\mu_j(V))$ and $V$ are
related by the $\operatorname{Sp}(L,\omega)$-trans\-for\-ma\-tion~$R_{v_j}$.
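In code (same toy conventions as in the sketch above, $L = \mathbb{Z}^2$ with the form $\omega_k$), one step of this mutation of exchange collections reads:
\begin{verbatim}
# Mutation mu_j of an exchange collection V in L = Z^2: the chosen
# vector flips sign, the others move by the piecewise-linear map mu_{v_j}.
def omega(k, u, v):
    return k * (u[0] * v[1] - u[1] * v[0])

def mutate_collection(V, j, k=1):
    u = V[j]
    Vp = []
    for i, v in enumerate(V):
        if i == j:
            Vp.append((-u[0], -u[1]))
        else:
            w = max(0, omega(k, u, v))
            Vp.append((v[0] + w * u[0], v[1] + w * u[1]))
    return Vp

V = [(1, 0), (0, 1), (-1, -1)]
print(mutate_collection(V, 0))   # [(-1, 0), (1, 1), (-1, -1)]
\end{verbatim}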
\subsection[Upper bounds and property $(V)$]{Upper bounds and property $\boldsymbol{(V)}$}
\begin{definition}[property $(V)$] \label{pv}
We say a $V$-seed $(W,V)$ satisf\/ies property $(V)$ if~$W$ is a Laurent polynomial and for all $v \in L$ the functions ${(\mu_{v}^*)}^{m_V(v)} W$ are also Laurent polynomials.
\end{definition}
In this paper we introduce the upper bound of an exchange collection.
\begin{definition}[upper bounds] \label{ub}
For a $C$-seed $\Sigma = (\mathbf{y},V)$
def\/ine its \emph{upper bound} $\mathcal{U}(\Sigma)$ to be the $\mathcal{Q}$-subalgebra of $\mathbb{K}_L$ given by
\begin{equation*}
\mathcal{U}(\Sigma) = \mathcal{Q}\big[\mathbf{y}^{\pm 1}\big] \cap \big(\cap_{v \in L} \mathcal{Q}\big[{(\mu_{v}^*)}^{m_V(v)} \mathbf{y}^{\pm 1}\big]\big).
\end{equation*}
In case $\mathbf{y}$ is a base cluster (by abuse of notation)
we denote $\mathcal{U}(\Sigma)$ just by~$\mathcal{U}(V)$.
\end{definition}
The upper bounds def\/ined here are a straightforward generalization of the upper bounds in~\cite{BFZ},
but they can also be thought of as the collection of all potentials satisfying property~$(V)$.
\begin{proposition}[relation between property $(V)$ and upper bounds] \label{vub1}
The upper bound $\mathcal{U}(V)$ of an exchange collection $V$ consists of all functions $W \in \mathbb{K}_L$
such that the $V$-seed $(W,V)$ satisfies property~$(V)$.
\end{proposition}
\begin{proposition} \label{prop-isom}
Any morphism $f : (L,\omega) \to (L',\omega')$
induces a dual morphism $f^* : L'^* \to L^*$,
a homomorphism of algebras $f^* : \mathbb{Z}[L'^*] \to \mathbb{Z}[L^*]$.
Assume that this homomorphism has no kernel\footnote{One can bypass this assumption by def\/ining the upper bound $\mathcal{U}(L,\omega;V)$ as a subalgebra in
some localization of $\mathbb{Z}[L^*]$ determined by the exchange collection $V$.}.
Then it induces a homomorphism of upper bounds $f^* : \mathcal{U}(L',\omega';f V) \to \mathcal{U}(L,\omega;V)$.
In particular, if $f$ is an isomorphism, then maps $f^*$ and $(f^{-1})^*$ establish the isomorphisms
between the upper bounds $f^* : \mathcal{U}(L',\omega';f V) \simeq \mathcal{U}(L,\omega;V)$.
\end{proposition}
\begin{proposition} \label{fun2}
Consider a seed $\Sigma = (L,\omega;v_1,\dots,v_n)$.
For a sublattice $L' \subset L$ that contains all vectors $v_i \in L' \subset L$
consider the seed $\Sigma' = (L',\omega_{|_{L'}};v_1,\dots,v_n)$.
By Remark~{\rm \ref{fun1}} there is a~natural action of
$G = L/L'$ on $\mathbb{K}_{L'}$
with $\mathbb{K}_L = \mathbb{K}_{L'}^G$.
Moreover,
the action of $G$ obviously preserves
the property of being a~Laurent polynomial
$($i.e.\ it preserves the subalgebras $\mathcal{Q}[L'^*])$,
and the mutations~$\mu_{v_i}$ commute with the $G$-action.
Thus the upper bound with respect to the overlattice $L$
is the subring of $G$-invariants of the upper bound with respect to the sublattice~$L'$:
$\mathcal{U}(\Sigma) = \mathcal{U}(\Sigma')^G = \mathcal{U}(\Sigma') \cap \mathcal{Q}[L^*]$.
\end{proposition}
\section{Laurent phenomenon}\label{section3}
In what follows we restrict ourselves to the case where $\rk L = 2$, the form $\omega$ is non-degenerate and the vectors of the exchange collection are primitive;
however, none of these conditions is essential.
The next theorem is the analogue of Theorem~1.5 in~\cite{BFZ}, presented here as Theorem~\ref{bfz-theorem}.
\begin{theorem}[Laurent phenomenon in terms of upper bounds] \label{ublemma}
Consider two $C$-seeds: $\Sigma = (L,\omega;v_1,\dots,v_n)$ and
$\Sigma' = (L',\omega';v_1',\dots,v_n')$.
If $\Sigma' = \mu_i \Sigma$ is a mutation of $\Sigma$ in direction $1\leqslant i\leqslant n$
then
the upper bounds for $\Sigma$ and $\Sigma'$ coincide: $\mathcal{U}(\Sigma) = \mu_{v_i}^* \mathcal{U}(\Sigma')$.
As a~corollary, if a seed~$\Sigma'$ is obtained from a seed $\Sigma$
by a sequence of mutations, then the upper bound $\mathcal{U}(\Sigma')$
equals the upper bound~$\mathcal{U}(\Sigma)$ under the identification of the ambient field
by the composition of the birational mutations.
\end{theorem}
By Proposition \ref{vub1} Theorem~\ref{ublemma} is equivalent to the next corollary,
which is easier to check in practice
and has almost the same consequences as the main theorem
of~\cite{GU}, presented here as Theorem~\ref{ulemma}.
\begin{corollary}[$V$-lemma] \label{vlemma}
If $V$-seeds $\Sigma$ and $\Sigma'$ are related by a mutation
then the seed $\Sigma$ satisfies property $(V)$ $\iff$
the seed $\Sigma'$ satisfies property~$(V)$.
\end{corollary}
In the rest of this section we prove Theorem~\ref{ublemma}.
Our proof is quite similar to that of~\cite{BFZ}\footnote{See Appendix~\ref{sec-bfz} and Remark~\ref{compare-bfz} for the detailed comparison.}:
a set-theoretic argument
reduces the problem to
exchange collections $V$ with a small number of vectors ($1$~or~$2$), without counting multiplicities.
When the collection~$V$ has only one vector
the equality of the upper bounds is obvious from the def\/initions.
When the exchange collection consists of two base vectors one can explicitly compute the upper bounds
and compare them. Finally, the case of two non-base non-collinear vectors follows by functoriality.
First of all, let us f\/ix the notation.
If the rank two lattice~$L$ is generated by a pair of vectors~$e_1$ and~$e_2$,
then the dual lattice $L^* = \Hom(L,\mathbb{Z})$ has the dual base $f_1$, $f_2$ determined by
$(f_i,e_j) = \delta_{i,j}$. The form $\omega$ is uniquely determined by its value
$k = \omega(e_1,e_2)$, and further we denote this isomorphism class of forms by~$\omega_k$.
We assume that $k\neq 0$, i.e.\ the form $\omega$ is
non-degenerate\footnote{If $k=0$ then $\omega=0$ and all mutations are trivial.};
by swapping~$e_1$ and~$e_2$ one can change $k$ to $-k$.
A base~$e_i$ of $L$ corresponds to a base $x_i = X^{f_i}$ of $\mathbb{Z}[L^*]$.
\begin{lemma} \label{l41a}
Let $V$ be an exchange collection in $(L,\omega)$ and $\Sigma = (L,\omega;V)$ be the respective seed.
\begin{enumerate}\itemsep=0pt
\item[$1.$]
If $V$ is empty, then obviously $\mathcal{U}(L,\omega;V) = \mathcal{Q}[L^*]$.
\item[$2.$]
Otherwise, let $V_\alpha$ be a set of exchange collections
such that for any $v\in L$ we have
$m_V(v) = \max_{\alpha} m_{V_\alpha}(v)$.
Then
\[ \mathcal{U}(V) = \cap_{\alpha} \mathcal{U}(V_\alpha) . \]
\item[$3.$]
In particular, if for a vector $v \in L$ we define $V_v = m_V(v) \times v$
to be an exchange collection that consists of a single vector $v$
with multiplicity $m_V(v)$ and $\Sigma_v = (L,\omega;V_v)$ be the respective seed,
then
$ \mathcal{U}(\Sigma) = \cap_{v \in L} (\mathcal{Q}[\mathbf{y}^\pm] \cap \mathcal{Q}[{(\mu_v^*)}^{m_V(v)} \mathbf{y}^\pm])
= \cap_{v \in L} \mathcal{U}(\Sigma_v)$.
In other words, the upper bound of a $C$-seed $\Sigma = (L,\omega;\mathbf{y},V)$ can be expressed as the intersection of
the upper bounds for its $1$-vector subseeds.
\item[$4.$]
Let $V$ consist of a vector $v_1$ with multiplicity $m_+ \geqslant 1$,
a vector $v_2 = - v_1$ with multiplicity \mbox{$m_- \geqslant 0$},
and vectors $v_k$ $(k\geqslant 3)$ that are non-collinear to $v_1$ with some multiplicities \mbox{$m_k \geqslant 0$}.
Consider exchange subcollections $V_0 = \{m_+\times v_1, m_-\times (-v_1)\}$
and $V_k = \{1\times v_1,$ $m_k \times v_k\}$ $(k\geqslant 3)$.
Then
\[ \mathcal{U}(V) = \mathcal{U}(V_0) \cap \mathcal{U}(V_3) \cap \mathcal{U}(V_4) \cap \cdots . \]
\item[$5.$]
Let $V' = \mu_1 V$ be an exchange collection obtained by mutation of~$V$ in~$v_1$;
it consists of vector $-v_1$ with multiplicity $m_- + 1\geqslant 1$,
vector $v_1$ with multiplicity $m_+ - 1 \geqslant 0$
and vectors $v_k' = \mu_{v_1} v_k$ $(k\geqslant 3)$ with multiplicities $m_k$.
Similarly to the previous step define $V_0' = \{(m_- + 1)\times (-v_1), (m_+ - 1)\times v_1\}$
and $V_k' = \{1\times (-v_1), m_k \times v_k'\}$ $(k \geqslant 3)$. Then
\[ \mathcal{U}(V') = \mathcal{U}(V_0') \cap \mathcal{U}(V_3') \cap \mathcal{U}(V_4') \cap \cdots. \]
\item[$6.$]
Hence, to prove Theorem~{\rm \ref{ublemma}} it is necessary and sufficient
to show that
\[ \mathcal{U}(V_0) = \mu_{v_1}^* \mathcal{U}(V_0')
\qquad \text{and} \qquad
\mathcal{U}(V_k) = \mu_{v_1}^* \mathcal{U}(V_k') \quad(\text{for all} \ k\geqslant 3 ). \]
We will prove these equalities in Proposition~{\rm \ref{l43c1}} and Lemma~{\rm \ref{l46}}.
\end{enumerate}
\end{lemma}
\begin{proposition} \label{isom2d}
Let $v_1,v_2 \in L$ be a pair of vectors $v_1 = a e_1 + b e_2$, $v_2 = c e_1 + d e_2$
such that $ad-bc = 1$.
Consider the lattice $L'$ with the base $e_1'$, $e_2'$
and the form $\omega'(e_1',e_2') = \omega(v_1,v_2)$;
let~$f_1'$,~$f_2'$ be the dual base of~$L'^*$.
Consider a map $m : L \to L'$ given by
$m(e_1) = d e_1' - b e_2', m(e_2) = -c e_1' + a e_2'$;
note that
$m(v_1) = m(a e_1 + b e_2) = e_1'$ and $m(v_2) = m(c e_1 + d e_2) = e_2'$.
The dual isomorphism $m^*: L'^* \to L^*$ is given by the transposed map
$m^* (f_1') = d f_1 - c f_2$ and $m^*(f_2') = -b f_1 + a f_2$.
Let $z_1 = X^{f_1'} = x_1^d x_2^{-c}$ and $z_2 = X^{f_2'} = x_1^{-b} x_2^a$.
Since the map $m^*$ is invertible, Proposition~{\rm \ref{prop-isom}} gives the equality
\[ \mathcal{U}(L,\omega;m_1\times v_1,m_2\times v_2) = \mathcal{U}\big(L',\omega';m_1\times e_1',m_2\times e_2'\big)\big|_{{z_1 = x_1^d x_2^{-c},\,z_2 = x_1^{-b} x_2^a}}. \]
\end{proposition}
\begin{lemma} \label{l42}
Assume a seed $\Sigma = (L,\omega;m_1\times v_1)$ consists of a unique vector $v_1$ with multiplicity $m_1 \geqslant 1$.
\begin{enumerate}\itemsep=0pt
\item[$1.$]
If $v_1 = e_2 = (0,1)$
then the upper bound $\mathcal{U}(\Sigma)$ consists of all Laurent polynomials $W$ of the form
$W = \sum_l c_l(x_1) x_2^l$
where $c_l \in \mathcal{Q}[x_1^\pm]$ and for $l \leqslant 0$ we have that $c_l$ is divisible by
$(1+x_1^k)^{-m_1 l}$.
Moreover, $\mathcal{U}(\Sigma) = \mathcal{Q}\big[x_1^\pm, x_2, \frac{1}{x_2^{\prime \prime}} = \frac{(1+x_1^k)^{m_1}}{x_2}\big]$.
\item[$2.$]
If $v_1 = a e_1 + b e_2 = (a,b)$ is an arbitrary primitive vector
then
$\mathcal{U}(\Sigma) = \mathcal{Q}\big[z^{\pm}, z_1, \frac{(1 + z^k)^{m_1}}{z_1}\big]$
where
$z = \frac{x_1^b}{x_2^a}$,
$z_1 = x_1^r x_2^s$
and $(r,s)\in\mathbb{Z}^2$ satisfies $ra+sb=1$.
\end{enumerate}
\end{lemma}
\begin{proof}
Recall that the mutation in the direction $e_2$ is given by
$x_1' = x_1$ and $x_2' = \frac{x_2}{1+x_1^k}$.
Assume we have a Laurent polynomial $W = \sum_{l \in \mathbb{Z}} c_l(x_1) x_2^l$.
Then $W$ can be expressed in terms of $x_1$ and $x_2'$ as
$W = \sum_l c_l(x_1) (1+x_1^k)^l (x_2')^l$.
This function is a Laurent polynomial in terms of $(x_1,x_2')$ $\iff$
$c_l(x_1) (1+x_1^k)^l$ is a Laurent polynomial of $x_1$ for all $l$. This is equivalent to
$c_l$ being divisible by $(1+x_1^k)^{-l}$ for $l \leqslant 0$.
Similarly if we do $m_1$ mutations then $x_2^{\prime \prime} = \frac{x_2}{(1+x_1^k)^{m_1}}$
and $W = \sum c_l (1+x_1^k)^{m_1 l} (x_2^{\prime \prime})^l$
so for $l\leqslant 0$ we have that $c_l$ is divisible by $(1+x_1^k)^{-m_1 l}$.
Let $c_l(x_1) = (1+x_1^k)^{-m_1l} c'_{-l}(x_1)$ for $l < 0$, where the $c'_l$ are also Laurent polynomials.
Denote $W_+ = \sum_{l \geqslant 0} c_l(x_1) x_2^l$ and $W_- = \sum_{l <0} c_l(x_1) x_2^l = \sum_{l > 0} c'_l(x_1) (x_2^{\prime \prime})^{-l}$.
Then obviously both $W_+$ and $W_-$ belong to $\mathcal{Q}\big[x_1^\pm,x_2,\frac{1}{x_2^{\prime \prime}}\big]$.
The reverse inclusion is straightforward.
Part (2) follows from Proposition~\ref{isom2d}.
\end{proof}
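The divisibility criterion of the lemma is easy to test mechanically; the following \texttt{sympy} sketch (ours) rewrites a candidate $W$ in the chart after $m_1$ mutations in $e_2$ and checks whether only monomials survive in the denominator.
\begin{verbatim}
# Test whether W(x1,x2) stays a Laurent polynomial after m1 mutations
# in e_2, by substituting x2 -> x2*(1+x1^k)^m1 (the chart (x1, x2'')).
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

def laurent_after_mutations(W, k, m1):
    Wm = sp.cancel(W.subs(x2, x2 * (1 + x1**k) ** m1))
    num, den = sp.fraction(Wm)
    # Laurent polynomial iff the denominator is a pure monomial
    return den.as_poly(x1, x2).is_monomial

W = x2 + (1 + x1) ** 2 / x2               # c_{-1} = (1+x1)^2, k = 1
print(laurent_after_mutations(W, 1, 2))   # True:  (1+x1)^2 | c_{-1}
print(laurent_after_mutations(W, 1, 3))   # False: (1+x1)^3 does not
\end{verbatim}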
\begin{proposition} \label{lhz2}
Let the exchange collection $V$ consist of a vector $v_1 = e_2 = (0,1)$ with multiplicity $m_1\geqslant 0$
and its inverse $v_2 = -v_1 = -e_2 = (0,-1)$ with multiplicity $m_2 \geqslant 0$.
\begin{enumerate}\itemsep=0pt
\item[$1.$]
The upper bound $\mathcal{U}(L,\omega;m_1\times e_2,m_2\times (-e_2))$
consists of all Laurent polynomials $W$ of the form
$W = \sum_l c_l(x_1) x_2^l$ where $c_l \in \mathcal{Q}[x_1^\pm]$ and for $l \leqslant 0$ we have that $c_l$ is divisible by $(1+x_1^k)^{-m_1l}$
and for $l \geqslant 0$ we have that $c_l$ is divisible by $(1+x_1^k)^{m_2l}$.
\item[$2.$]
$\mathcal{U}(L,\omega;m_1\times e_2,m_2\times(-e_2)) = \mathcal{Q}\big[x_1^\pm, x_2(1+x_1^k)^{m_2},\frac{(1+x_1^k)^{m_1}}{x_2}\big].
$
\end{enumerate}
\end{proposition}
\begin{proof}
The f\/irst statement is a straightforward corollary of Lemmas~\ref{l41a} and~\ref{l42}(1).
The proof of the second statement is similar to the end of the proof of Lemma~\ref{l42}(2):
separate the Laurent polynomial $W$ into positive and negative parts~$W_+$ and~$W_-$;
then both parts lie in the ring $\mathcal{Q}\big[x_1^\pm,x_2 (1+x_1^k)^{m_2},\frac{(1+x_1^k)^{m_1}}{x_2}\big]$.
\end{proof}
\begin{proposition} \label{l43c1}
Assume a seed $\Sigma$ consists of a vector $v_1 = (0,1)$ with multiplicity $m_1$
and its inverse $-v_1 = (0,-1)$ with multiplicity $m_2$.
Its mutation $\Sigma' = \mu_1(\Sigma)$ then consists of~$v_1$ and~$-v_1$ with respective multiplicities $m_1-1$ and $m_2+1$.
Then $\mathcal{U}(\Sigma) = \mathcal{U}(\Sigma')$.
\end{proposition}
\begin{proof}
By Proposition~\ref{lhz2} the upper bounds are expressed as:
$\mathcal{U}(\Sigma) = \mathcal{Q}\big[x_1^\pm,x_2 (1+x_1^k)^{m_2}$, $\frac{(1+x_1^k)^{m_1}}{x_2}\big]$,
$\mathcal{U}(\Sigma') = \mathcal{Q}\big[x_1'^\pm,\frac{(1+x_1'^k)^{m_1-1}}{x_2'},x_2' (1+x_1'^k)^{m_2+1}\big]$.
Since $x_1' = x_1$ and $x_2' = \frac{x_2}{1+x_1^k}$ we have the desired equality of the upper bounds.
\end{proof}
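For concreteness, here is a \texttt{sympy} spot-check (ours) of this computation for the sample values $k = 1$, $m_1 = 2$, $m_2 = 0$.
\begin{verbatim}
# Under x1' = x1, x2' = x2/(1+x1^k) the generators of U(Sigma') map
# to the generators of U(Sigma); checked for k = 1, m1 = 2, m2 = 0.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
k, m1, m2 = 1, 2, 0
x2p = x2 / (1 + x1**k)

gen1 = (1 + x1**k) ** (m1 - 1) / x2p      # becomes (1+x1^k)^m1 / x2
gen2 = x2p * (1 + x1**k) ** (m2 + 1)      # becomes x2*(1+x1^k)^m2
assert sp.cancel(gen1 - (1 + x1**k) ** m1 / x2) == 0
assert sp.cancel(gen2 - x2 * (1 + x1**k) ** m2) == 0
\end{verbatim}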
\begin{proposition} \label{lhz3}
Assume that the seed $\Sigma = (L,\omega;m_1\times v_1,m_2\times v_2)$
consists of vectors $v_1$ with multiplicity $m_1 \geqslant 0$
and $v_2$ with multiplicity $m_2 \geqslant 0$.
\begin{enumerate}\itemsep=0pt
\item[$1.$]
If $v_1 = e_1$ and $v_2 = e_2$,
then the upper bound $\mathcal{U}(\Sigma)$ equals
$\mathcal{Q}\big[x_1, x_2, \frac{(1 + x_2^k)^{m_1}}{x_1}, \frac{(1 + x_1^k)^{m_2}}{x_2}\big]$.
\item[$2.$]
If $v_1 = a e_1 + b e_2$ and $v_2 = c e_1 + d e_2$ with $ad-bc = 1$
then the upper bound $\mathcal{U}(\Sigma)$ equals
$\mathcal{Q}\big[z_1, z_2, \frac{(1 + z_2^k)^{m_1}}{z_1}, \frac{(1 + z_1^k)^{m_2}}{z_2}\big]$
with
$z_1 = x_1^dx_2^{-c}$
and
$z_2 = x_1^{-b}x_2^a$.
\end{enumerate}
\end{proposition}
\begin{proof}
For the f\/irst case,
by Lemmas \ref{l41a} and \ref{l42}
we have
$\mathcal{U}(\Sigma) = \mathcal{Q}\big[x_1^\pm,x_2,\frac{(1+x_1^k)^{m_2}}{x_2}\big] \cap \mathcal{Q}\big[x_2^\pm,x_1,\frac{(1+x_2^k)^{m_1}}{x_1}\big]$. If $m_1 = m_2 = 1$,
by Proposition~4.3 of~\cite{BFZ} (with $|b_{12}| = |b_{21}| = b = c = k$ and $q_1=q_2=r_1=r_2=1$)
this intersection equals
$\mathcal{Q}\big[x_1,x_2,\frac{1+x_2^k}{x_1},\frac{1+x_1^k}{x_2}\big]$.
Lemma~\ref{l42} covers cases with $m_1 = 0$ or $m_2 = 0$.
If $m_1$ and $m_2$ are greater than $1$,
the proof of Proposition~4.3 in~\cite{BFZ} can be easily modif\/ied to cover the case we need: since $x_2 \frac{1 + x_1^k}{x_2}(1 + x_1^k)^{m_{2}-1} = (1+ x_1^k)^{m_2}$ and $x_1 \frac{1 + x_2^k}{x_1}(1 + x_2^k)^{m_{1}-1} = (1+ x_2^k)^{m_1}$, the intersection equals $\mathcal{Q}\big[x_1,x_2,\frac{(1+x_2^k)^{m_1}}{x_1},\frac{(1+x_1^k)^{m_2}}{x_2}\big]$.
Part (2) follows from Proposition~\ref{isom2d}.
\end{proof}
\begin{lemma} \label{l46}
Let $\Sigma = (L,\omega;1 \times v_1, m_2 \times v_2)$ be a seed of two non-collinear vectors $v_1$ and $v_2$
with $m(v_1) = 1$ and $m(v_2) = m_2 \geqslant 0$
and
$\Sigma' = \Sigma_1 = (L'=L,\omega'=\omega; v_1' = -v_1, m_2 \times (v_2' = \mu_{v_1} v_2) )$
be the mutation of the seed $\Sigma$ in~$v_1$.
Then $\mathcal{U}(\Sigma) = \mu_{v_1}^* \mathcal{U}(\Sigma')$.\footnote{Denote $\Sigma_2 = \{(\mu_{v_2})^{m_2} v_1, -v_2 \}$~-- $m_2$-multiple mutation of $\Sigma$ in $v_2$,
and $\Sigma'_2 = \{ (\mu_{v_2'})^{m_2} v_1', -v_2' \}$~-- $m_2$-multiple mutation of $\Sigma'$ in $v_2'$.
We are going to prove that
$\mathcal{U}(\Sigma) = \mathcal{Q}[\mathbf{y}] \cap \mathcal{Q}[\mathbf{y}' = \mathbf{y_1}] \cap \mathcal{Q}[\mathbf{y_2}]$
equals
$\mathcal{U}(\Sigma') = \mathcal{Q}[\mathbf{y}] \cap \mathcal{Q}[{\mathbf{y}'} = \mathbf{y_1}] \cap \mathcal{Q}[\mathbf{y_2'}]$.}
\end{lemma}
\begin{proof}
First of all note that $\omega'(v_1',v_2') = -\omega(v_1,v_2)$, and since
$\mu_{-v_1} \mu_{v_1} = R_{v_1}$ it is suf\/f\/icient to consider only the case
$\omega(v_1,v_2) > 0$.
We f\/irst consider the case when $v_1 = e_1 = (1,0)$ and $v_2 = e_2 = (0,1)$;
denote $k = \omega(e_1,e_2) > 0$.
Let $e_1'$, $e_2'$ be the base of $L'$ that corresponds to~$e_1$,~$e_2$ under the natural
identif\/ication of $L' \simeq L$;
f\/inally consider a base $e_1''$, $e_2''$ of $L'$
given by $e_1'' = v_2' = k e_1' + e_2'$, $e_2'' = v_1' = -e_1'$.
Let $f_1'$, $f_2'$ and $f_1''$, $f_2''$ be the respective dual bases of~$L'^*$.
Thus we have one natural regular system of coordinates
$x_1 = X^{f_1}$, $x_2 = X^{f_2}$
on the torus $T = {\text{Spec }} \mathbb{Z}[L^*]$,
and two regular systems of coordinates
$x_1' = X^{f_1'}$, $x_2' = X^{f_2'}$; $x_1'' = X^{f_1''}$, $x_2'' = X^{f_2''}$
on the torus $T' = {\text{Spec }} \mathbb{Z}[L'^*]$.
Taking $a=k$, $b=1$, $c=-1$ and $d=0$
in Proposition~\ref{isom2d}
we have that $x_1'' = x_2'$ and $x_2'' = \frac{x_2'^k}{x_1'}$, since $x_2' = x_2$ and $x_1' = \frac{x_1x_2^k}{1+x_2^k}$ (they are the mutations of~$x_1$ and~$x_2$ with respect to $v_1$). Thus what we need to show is that the rings
$\mathcal{Q}\big[x_1,x_2,\frac{1+x_2^k}{x_1}, \frac{1+x_1^k}{x_2}\big]$
and $\mathcal{Q}\big[x_1,x_2, \frac{1+x_2^k}{x_1}, \frac{x_1^{k} + (1 + x_2^k)^k}{x_1^kx_2}\big]$ are equal.
We will f\/irst show that
$\frac{x_1^{k} + (1 + x_2^k)^k}{x_1^kx_2} \in \mathcal{Q}\big[x_1,x_2, \frac{1+x_2^k}{x_1}, \frac{1+x_1^k}{x_2}\big]$.
We have that $ \frac{x_1^{k} + (1 + x_2^k)^k}{x_1^kx_2} = \big(\frac{1 + x_1^k}{x_2}\big)\big(\frac{(1+x_2^k)^k}{x_1^k}\big) - \sum\limits_{j=1}^k \frac{k!}{j!(k-j)!} x_2^{kj-1}$. Clearly the expression on the right-hand side belongs to $\mathcal{Q}\big[x_1,x_2, \frac{1+x_2^k}{x_1}, \frac{1+x_1^k}{x_2}\big]$.
Now, we will show that $\frac{1+x_1^k}{x_2} \in \mathcal{Q}\big[x_1,x_2, \frac{1+x_2^k}{x_1}, \frac{x_1^k + (1 + x_2^k)^k}{x_1^kx_2}\big]$. We have that $\frac{1+x_1^k}{x_2} = x_1^k \frac{x_1^{k} + (1 + x_2^k)^k}{x_1^kx_2} - \sum\limits_{j=1}^k \frac{k!}{j!(k-j)!} x_2^{kj-1}$. Again, clearly the expression on the right-hand side belongs to $\mathcal{Q}\big[x_1,x_2, \frac{1+x_2^k}{x_1}, \frac{x_1^{k} + (1 + x_2^k)^k}{x_1^kx_2}\big]$. Thus, we have the equality between the rings. Similarly, if the multiplicity of $v_2$ is $m_2 > 1$, we have that $\mathcal{Q}\big[x_1,x_2,\frac{(1+x_2^k)}{x_1}, \frac{(1+x_1^k)^{m_2}}{x_2}\big] = \mathcal{Q}\big[x_1,x_2, \frac{1+x_2^k}{x_1}, \frac{(x_1^{k} + (1 + x_2^k)^k)^{m_2}}{x_1^{m_2k}x_2}\big]$.\footnote{The argument for showing the equality of these two rings is the same as for $m_2 = 1$, but the computations are slightly longer, so we omit them.}
If $v_1$ and $v_2$ form another basis of $\mathbb{Z}^2$ the result follows from
Proposition~\ref{isom2d}.
In case~$v_1$ and~$v_2$ are a pair of non-collinear vectors which are not a basis for~$\mathbb{Z}^2$,
consider the sublattice $L' \subset L$ generated by $e_1' = v_1$ and $e_2' = v_2$
with the form $\omega' = \omega|_{{L'}}$.
As we just saw, the upper bounds with respect to the sublattice coincide:
$\mathcal{U}(L',\omega';v_1, m_2\times v_2) = \mu_{v_1}^* \mathcal{U}(L',\omega';-v_1,m_2\times \mu_{v_1} v_2)$.
Now the statement follows from Proposition~\ref{fun2}.
\end{proof}
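The two ring-membership identities used in the proof are straightforward to verify symbolically; a \texttt{sympy} sketch (ours):
\begin{verbatim}
# Verify, for small k, the identities behind the equality of rings:
#  (x1^k+(1+x2^k)^k)/(x1^k*x2)
#    = ((1+x1^k)/x2)*((1+x2^k)^k/x1^k) - sum_j C(k,j)*x2^(k*j-1)
#  (1+x1^k)/x2
#    = x1^k*(x1^k+(1+x2^k)^k)/(x1^k*x2) - sum_j C(k,j)*x2^(k*j-1)
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

for k in range(1, 5):
    S = sum(sp.binomial(k, j) * x2**(k * j - 1) for j in range(1, k + 1))
    lhs1 = (x1**k + (1 + x2**k) ** k) / (x1**k * x2)
    rhs1 = ((1 + x1**k) / x2) * ((1 + x2**k) ** k / x1**k) - S
    lhs2 = (1 + x1**k) / x2
    rhs2 = x1**k * (x1**k + (1 + x2**k) ** k) / (x1**k * x2) - S
    assert sp.cancel(lhs1 - rhs1) == 0
    assert sp.cancel(lhs2 - rhs2) == 0
\end{verbatim}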
\begin{remark}
If a mutation of a Laurent polynomial with integer coef\/f\/icients
happens to be a Laurent polynomial, then its coef\/f\/icients are also integers.
Let $u \in L$ be a primitive vector
and $W, W' \in \mathcal{Q}[L^*]$ be a pair of Laurent polynomials with arbitrary coef\/f\/icients
such that $W = \mu_u^* W'$.
Then $W \in \mathbb{Z}[L^*] \iff W' \in \mathbb{Z}[L^*]$.
\end{remark}
\begin{proof}
Choose coordinates on $L$ so that $u = e_2$.
Assume $W$ has integer coef\/f\/icients.
By Lemma~\ref{l42}(1) $W = \sum c_l(x_1) x_2^l$
and $W' = \sum_{l\in\mathbb{Z}} c_l'(x_1) x_2'^l$
with $c_l' = c_l (1+x_1^k)^{-l}$ for all $l\in \mathbb{Z}$.
Clearly $W \in \mathbb{Z}[L^*] \iff$ all coef\/f\/icients of $W$ are integer $\iff$
for all $l\in \mathbb{Z}$ all coef\/f\/icients of $c_l$ are integer.
Since $(1+x_1^k) \in \mathbb{Q}[x_1^{\pm}]$ it is clear that $c_l' \in \mathbb{Q}[x_1^{\pm}]$.
Recall that for a Laurent polynomial $P \in \mathbb{Q}[x^{\pm}]$ its Gauss's content
$C(P) \in \mathbb{Q}$ is def\/ined as the greatest common divisor of all its coef\/f\/icients:
if $P = \sum a_i x^i$ then $C(P) = \gcd(a_i)$.
Clearly $C(P) \in \mathbb{Z} \iff P \in \mathbb{Z}[x^{\pm}]$.
Gauss's lemma says that $C(P \cdot P') = C(P) \cdot C(P')$.
Since $C(1+x_1^k) = \gcd(1,1) = 1$
we see that $C(c_l') = C(c_l) \cdot 1^{-l} = C(c_l)$,
hence $c_l' \in \mathbb{Z}[x_1^\pm] \iff c_l \in \mathbb{Z}[x_1^\pm]$.
\end{proof}
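A tiny \texttt{sympy} illustration (ours) of the content argument:
\begin{verbatim}
# Gauss's lemma C(P*P') = C(P)*C(P') on a sample, and C(1+x^k) = 1.
import sympy as sp

x = sp.symbols('x')

def content(p):
    return sp.gcd_list(sp.Poly(p, x).all_coeffs())

P = sp.Rational(3, 2) * x**2 + sp.Rational(9, 4)
Q = 2 * x + 6
assert content(sp.expand(P * Q)) == content(P) * content(Q)
assert content(1 + x**5) == 1
\end{verbatim}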
\section{Questions and future developments}
In the introduction it was pointed out that our def\/inition of upper bounds makes it plausible to consider a quantum version of mutations of potentials and the corresponding quantum Laurent phenomenon. On the other hand, in \cite{K} a non-commutative version of the Laurent phenomenon is discussed. Thus, we would like to ask:
\begin{question}
Is it possible to consider a non-commutative version of the Laurent phenomenon for mutation of potentials and develop a theory of upper bounds in this context?
\end{question}
In \cite{GU} the following problem (Problem 44) was proposed:
\begin{question} \label{ques2}
Construct a fiberwise-compact canonical mirror of a Fano variety as a~gluing of open charts given by $($all$)$ different toric degenerations.
\end{question}
Conjecture \ref{conj} (which will be proved in \cite{CG}) gives a partial answer to the above question.
\begin{conjecture} \label{conj}
For the $10$ potentials $W$ $($i.e., $(W_1,W_2,\dots,W_9,W_Q))$ listed in {\rm \cite{GU}} $($or rather the exchange collections $V$ $($resp.\ $V_1,\dots,V_9,V_Q))$ the upper bound $\mathcal{U}(V)$ is the algebra of polynomials in one variable. Moreover, this variable is $W$.
\end{conjecture}
Conjecture~\ref{conj} is useful for symplectic geometry as long as one knows two (non-trivial) properties of FOOO's potentials~$m_0$~\cite{FOOO} (here $W = m_0$):
\begin{enumerate}\itemsep=0pt
\item
$W$ is a Laurent polynomial (this is some kind of convergence/f\/initeness property).
\item
$W$ is transformed according to Auroux's wall-crossing formula~\cite{AU},
and more specif\/ically by the mutations described in Section~\ref{sec-def}. The directions of the mutations/walls are encoded by an exchange collection~$V$.
\end{enumerate}
We believe that once these two properties are known, one should be able to prove that some disc-counting potential equals some explicitly written $W$ (formally), without any actual disc counting. Needless to say, this is a speculative idea.
\section{Introduction \label{Introduction}}
From the string worldsheet perspective, two-dimensional non-linear $\sigma$-models with boundaries provide a rich arena to describe curved background geometries with D-brane configurations in string theory. These non-perturbative degrees of freedom are essential higher-dimensional objects on which open strings can end, and whose geometry is completely determined by the allowed worldsheet boundary conditions. The question of which boundary conditions are \textit{allowed} is decided by symmetry. In the case of string theory, e.g., they should preserve worldsheet conformal invariance. For a $\sigma$-model describing strings in curved backgrounds, answering this question is usually challenging and tractable only when the precise (boundary) CFT description is available.\\
\indent A simple but non-trivial example where one can make progress is provided by the Wess-Zumino-Witten model \cite{Witten:1983ar} describing strings in group manifolds supported by an NS-flux. The exact conformal invariance of this model is guaranteed by the existence of two holomorphic currents underlying two copies of an affine Kac-Moody current algebra and two copies of a Virasoro algebra. The inclusion of boundaries in the WZW model has been studied in a number of works \cite{Kato:1996nu,Alekseev:1998mc,Felder:1999ka,Stanciu:1999id} by identifying maximally symmetric gluing conditions on the holomorphic currents at the boundary preserving one copy of both the Kac-Moody and Virasoro algebra. Although the former is not necessary for conformal invariance, it leads to an elegant geometrical picture of the allowed D-brane configurations: they should wrap \textit{twisted conjugacy classes} of the group manifold. For example, in the $SU(2)_k$ WZW model one finds two D0-branes and a further $k-1$ D2-branes that are blown up to wrap the conjugacy classes described by $S^2 \subset S^3$ \cite{Alekseev:1998mc}.\\
\indent When the precise CFT formulation is not available, however, we will see in this note that an elegant D-brane picture can arise also in the context of $\sigma$-models with worldsheet integrability. Integrable stringy $\sigma$-models have attracted considerable attention since the observation of worldsheet integrability in the AdS$_5\times$S$^5$ superstring \cite{Bena:2003wd}. Classically, they are characterised by the existence of an infinite number of local or non-local conserved charges in involution, leading, in principle, to a dramatic simplicity and exact solvability. Including boundaries in the theory typically destroys conserved charges, through, e.g., the loss of translational invariance at the boundary. In this note, we will focus on \textit{allowed} boundary conditions that preserve the classical integrable structure, by demanding the conservation of a tower of non-local charges generated by a monodromy matrix. This method was introduced in \cite{Cherednik:1985vs,Sklyanin:1988yz} and further developed from a classical string point of view in \cite{Dekel:2011ja}.\\
\indent A suitable integrable $\sigma$-model that makes contact between the above methods is the integrable $\lambda$-deformed WZW model introduced by Sfetsos in \cite{Sfetsos:2013wia}. The deformation parameter $\lambda \in [0,1]$ interpolates between the WZW model at $\lambda = 0$ and the non-Abelian T-dual of the Principal Chiral Model (PCM) in a scaling limit $\lambda \rightarrow 1$. On ordinary Lie group manifolds, accommodating only bosonic field content, the deformation is marginally relevant \cite{Itsios:2014lca,Appadu:2015nfa}. However, significant evidence from both a worldsheet \cite{Hollowood:2014qma,Appadu:2015nfa} and target space \cite{Borsato:2016zcf,Chervonyi:2016ajp,Borsato:2016ose} perspective implies that, when applied to super-coset geometries, the $\lambda$-model is a truly marginal deformation introducing no Weyl anomaly. Hence, the deformation of the WZW group manifold can be thought of as a bosonic truncation of a genuine superstring theory. The question of establishing D-branes in this deformed geometry is therefore natural and has been pursued in the article \cite{Driezen:2018glg} on which this proceedings is based. We will see, by demanding integrability, that the geometrical picture of twisted conjugacy classes of the WZW point persists and naturally fits in the deformed geometry. The semi-classical flux quantisation will turn out to be consistently independent of the continuous $\lambda$-deformation parameter. Additionally, the $\lambda$-deformation allows one to track the behaviour of D-branes under generalised dualities \cite{Driezen:2018glg} --again a challenging question in general curved backgrounds-- by the non-Abelian T-dual scaling limit and Poisson-Lie T-duality to the integrable $\eta$-deformation of the PCM \cite{Vicedo:2015pna,Hoare:2015gda,Klimcik:2015gba}. Illustrated for the $G=SU(2)$ manifold, one finds under both dualities D2-branes transforming into space-filling D3-branes that can be shown to preserve the classical integrable structure of the dual theories.\\
We lay out in section \ref{s:bmm} the general procedure to construct \textit{integrable} boundary conditions of two-dimensional $\sigma$-models. We apply this method in section \ref{s:lambda} to the integrable $\lambda$-deformation where we first review the model's construction, then interpret the allowed integrable boundary conditions as twisted conjugacy classes (illustrated in the $G=SU(2)$ manifold) and discuss their behaviour under generalised T-dualities. We end with some conclusions and outlook directions in section \ref{s:concl}.
\section{The boundary monodromy method for integrable systems}\label{s:bmm}
The boundary monodromy method, introduced by Cherednik and Sklyanin in \cite{Cherednik:1985vs,Sklyanin:1988yz}, is a powerful tool to derive boundary conditions preserving the integrability property of two-dimensional integrable field theories. The method consists of demanding that a monodromy matrix constructed from a Lax connection generates an infinite tower of conserved \textit{non-local} charges when a boundary is present\footnote{To have a truly classically integrable (boundary) theory one should moreover show that these charges Poisson commute. We will not discuss this here, but see e.g. \cite{Mann:2006rh}.}. We will briefly review it here, following \cite{Dekel:2011ja,Driezen:2018glg}, as well as the case without boundaries to introduce notation.
Let us first consider the no-boundary case in a general two-dimensional field theory on a periodic or infinite line. We denote the coordinates by $(\tau,\sigma)$ by analogy with the closed string worldsheet theory. It is known that an infinite tower of conserved charges can be generated when the equations of motion of the theory can be represented by a zero-curvature condition of a so-called $\mathfrak{g}^{\mathbb{C}}$-valued Lax connection ${\cal L}(z)$ that depends on a generic \textit{spectral} parameter $z \in \mathbb{C}$ \cite{Zakharov:1973pp},
\begin{equation} \label{eq:zerocurvatureLax}
\begin{aligned}
d {\cal L}(z) + {\cal L}(z) \wedge {\cal L}(z) = 0 , \qquad \forall\, z \in \mathbb{C}.
\end{aligned}
\end{equation}
In this case the transport matrix defined by,
\begin{equation}\label{eq:transport}
\begin{aligned}
T^\Omega(b,a ; z) = \overleftarrow{P \exp} \left( - \int^b_a d\sigma\, \Omega [{\cal L}_\sigma (\tau,\sigma ;z ) ] \right) \in G^{\mathbb{C}},
\end{aligned}
\end{equation}
(with $\Omega : \mathfrak{g} \rightarrow \mathfrak{g}$ a constant Lie algebra automorphism included for generality) satisfies,
\begin{equation}\label{eq:TransportToTime}
\begin{aligned}
\partial_\tau T^\Omega (b,a ; z ) = T^\Omega (b,a ; z ) \Omega [{\cal L}_\tau (\tau , a ; z)] - \Omega [{\cal L}_\tau (\tau , b ; z)] T^\Omega (b,a ; z ).
\end{aligned}
\end{equation}
Indeed, under periodic boundary conditions $\sigma \sim \sigma + 2\pi$ or asymptotic fall-off boundary conditions, one can then show that the monodromy matrix $T(2\pi,0 ; z)$ (for $\Omega = \mathbf{1}$) generates conserved charges by,
\begin{equation}
\begin{aligned}
\partial_\tau \mathrm{Tr} T(2\pi, 0 ; z )^n = 0 , \qquad \forall\ n\in \mathbb{N} \;\; \mathrm{ and } \;\; \forall\ z\in \mathbb{C} .
\end{aligned}
\end{equation}
Hence, every value of $n$, or every term in the expansion of $\mathrm{Tr} T(2\pi, 0 ; z )$ in $z$, corresponds to a conserved charge.
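As a concrete illustration (ours, with an arbitrary toy connection), the path-ordered exponential in \eqref{eq:transport} can be approximated numerically by slicing the interval and multiplying matrix exponentials, with larger-$\sigma$ factors placed to the left:
\begin{verbatim}
# Discretised path-ordered exponential for the transport matrix:
# multiply exp(-h*L_sigma) factors with larger sigma to the left.
import numpy as np
from scipy.linalg import expm

def transport(L_sigma, a, b, steps=2000):
    h = (b - a) / steps
    T = np.eye(2, dtype=complex)
    for n in range(steps):
        s = a + (n + 0.5) * h
        T = expm(-h * L_sigma(s)) @ T
    return T

# Toy check: for a constant connection the product telescopes and
# transport(a,b) = exp(-(b-a)*M).
M = np.array([[0, 1], [-1, 0]], dtype=complex)
T = transport(lambda s: M, 0.0, np.pi)
assert np.allclose(T, expm(-np.pi * M), atol=1e-8)
\end{verbatim}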
When the two-dimensional theory is defined on a finite line $\sigma \in [0,\pi]$, describing by analogy an open string worldsheet theory, one can determine \textit{integrable} boundary conditions on the endpoints by demanding the production of conserved charges along the same lines as above. Reminiscent of the method of image charges, one can derive these by taking a copy of the finite-line theory and acting on it with the reflection $R: \sigma \rightarrow 2\pi - \sigma$. The boundary monodromy matrix $T_b (z)$ is then constructed by gluing the usual transport matrix $T(\pi, 0 ;z )$ in the original region to the transport matrix $T_R^\Omega (2\pi, \pi ; z)$ in the reflected region,
\begin{equation}
T_b (z) = T_R^\Omega (2\pi, \pi ;z) T(\pi, 0 ;z).
\end{equation}
Notice that in the reflected region we have included the possibility of a non-trivial automorphism $\Omega$ acting on the Lax connection in the path-ordered exponential as in \eqref{eq:transport}. By demanding that the time derivative of the boundary monodromy matrix is given by a commutator,
\begin{equation}\label{eq:BoundMonToTime}
\partial_\tau T_b (z) = \left[ T_b (z) , N(z) \right] \, ,
\end{equation}
for some matrix $N(z)$ one will indeed find that $\partial_\tau \mathrm{Tr} T_b(z)^{n} =0$ for any $n\in \mathbb{N}$ and $z\in\mathbb{C}$. Assuming\footnote{In general this strongly depends on the specific form of the Lax connection $\mathcal{L}(z)$
but the procedure described here can be easily adapted to other cases. },
\begin{equation}\label{eq:ReflectedTransport}
T^\Omega_R (2\pi , \pi ;z) = T^\Omega(0,\pi ; z_R)\, ,
\end{equation}
we find explicitly using \eqref{eq:TransportToTime} that the time derivative of the boundary monodromy matrix satisfies,
\begin{equation}
\begin{aligned}
\partial_\tau T_b (z) =\; & \left[ T^\Omega(0,\pi; z_R) \Omega[\mathcal{L}_\tau ( \tau, \pi; z_R)] - \Omega[\mathcal{L}_\tau ( \tau , 0 ;z_R)] T^\Omega(0,\pi ; z_R) \right] T(\pi , 0 ; z) \\
& + T^\Omega(0,\pi ; z_R) \left[ T(\pi ,0 ; z)\mathcal{L}_\tau ( \tau , 0 ;z) - \mathcal{L}_\tau ( \tau, \pi ; z) T(\pi,0;z) \right] \, .
\end{aligned}
\end{equation}
The condition \eqref{eq:BoundMonToTime} indeed holds when $N(z) =\mathcal{L}_\tau (\tau , 0;z)$ and when we require the following boundary conditions on both endpoints\footnote{In principle the boundary conditions can be different on each endpoint (see e.g.\ \cite{Driezen:2018glg}), which in the string theory application allows the open string to connect distinct D-brane configurations.},
\begin{equation}\label{eq:BoundCondLax1}
\mathcal{L}_\tau (\tau, 0 ;z ) = \Omega[ \mathcal{L}_\tau (\tau ,0 ; z_R) ] \, ,
\end{equation}
and similarly on $\sigma = \pi$. When studying a specific two-dimensional integrable model with a known Lax connection, and knowing its behaviour under the reflection $R$, one can now easily derive the \textit{integrable} boundary conditions on the field variables by eq.~\eqref{eq:BoundCondLax1}. Typically this will involve additional conditions on the automorphism $\Omega$ as we will see in the coming section.
\section{Applied to $\lambda$-deformations} \label{s:lambda}
We will now apply the boundary monodromy method to the (standard) $\lambda$-deformation introduced by Sfetsos in \cite{Sfetsos:2013wia}. The interest in this particular model is that it is a two-dimensional integrable field theory deforming the exactly conformal Wess-Zumino-Witten (WZW) model on group manifolds. We will therefore be able to relate the integrable boundary conditions of the $\lambda$-model to known results of stable D-brane configurations wrapping twisted conjugacy classes in the group manifold \cite{Alekseev:1998mc,Felder:1999ka,Stanciu:2000fz,Figueroa-OFarrill:2000lcd,Bachas:2000fr}.
\subsection{Construction of the $\lambda$-action}
Let us first briefly introduce the construction of $\lambda$-deformations of \cite{Sfetsos:2013wia}. One starts by doubling the degrees of freedom on a Lie group manifold $G$, by combining the WZW model on $G$ at level $k$ with the Principal Chiral Model (PCM) on $G$ with a coupling constant $\kappa^2$, i.e.,
\begin{equation}\label{eq:doubledaction}
\begin{aligned}
S_{k,\kappa^2}(g,\widetilde{g}) &= S_{\text{WZW},k}(g) + S_{\text{PCM},\kappa^2}(\widetilde{g}) ,\\
S_{\text{WZW,k}}(g) &= -\frac{k}{2\pi}\int_\Sigma d \sigma d\tau \langle g^{-1} \partial_+ g , g^{-1} \partial_- g \rangle - \frac{ k}{24\pi } \int_{M_3} \langle \bar g^{-1} d\bar g, [\bar g^{-1} d\bar g,\bar g^{-1} d\bar g] \rangle , \\
S_{\text{PCM},\kappa^2}(\widetilde{g}) &= - \frac{\kappa^2}{\pi} \int d\sigma d\tau \, \langle \widetilde{g}^{-1}\partial_+ \widetilde{g} , \widetilde{g}^{-1}\partial_- \widetilde{g} \rangle \, ,
\end{aligned}
\end{equation}
which are both realised through distinct group elements $g\in G$ and $\widetilde{g}\in G$ respectively\footnote{We have taken conventions on the worldsheet $\Sigma$ in which the two-dimensional metric is fixed as $\text{diag}(+1,-1)$, $\epsilon_{\tau\sigma} = 1$ and $\partial_\pm = \frac{1}{2}(\partial_\tau \pm \partial_\sigma)$. Moreover, we consider compact semi-simple Lie groups $G$ of which the generators $T_A$, $A \in \{1, \cdots, \text{dim}(G) \}$ of the Lie algebra $\mathfrak{g}$ are Hermitean and normalised with respect to the Cartan-Killing billinear form $\langle \cdot , \cdot \rangle$ as $\langle T_A, T_B \rangle = \frac{1}{x_R} \mathrm{Tr} (T_A T_B) = \delta_{AB}$ (with $x_R$ the index of the representation $R$). It is known that for compact groups the level $k$ should be integer quantised while for non-compact groups it can be free \cite{Witten:1983ar}.}. The fields $\bar{g}$ are an extension of $g$ into $M_3 \subset G$ such that $\partial M_3 = g(\Sigma)$. Altogether the doubled model \eqref{eq:doubledaction} has a global $G_L \times G_R$ symmetry. Next, one reduces back to $\text{dim}(G)$ degrees of freedom by gauging a subgroup acting as,
\begin{equation}
\begin{aligned}
G_L : \widetilde{g} \rightarrow h^{-1} \widetilde{g}, \qquad G_{\text{diag}}: g \rightarrow h^{-1} g h, \qquad \text{with} \; h \in G,
\end{aligned}
\end{equation}
using a common gauge field $A \rightarrow h^{-1} A h - h^{-1} d h$. Doing a minimal substitution on the PCM, by replacing $\partial_\pm \widetilde{g} \rightarrow D_\pm \widetilde{g} = \partial_\pm \widetilde{g} - A_\pm \widetilde{g}$ and replacing the WZW model by the $G/G$ gauged WZW model,
\begin{equation}
S_{\text{gWZW},k}(g,A) = S_{\text{WZW},k}(g) + \frac{k}{\pi} \int d\sigma d\tau \, \big[ \langle A_- , \partial_+ g g^{-1} \rangle - \langle A_+ , g^{-1}\partial_- g \rangle + \langle A_+ , g^{-1} A_- g \rangle - \langle A_+ , A_- \rangle \big] \, ,
\end{equation}
one finds the $\lambda$-deformation after fixing the gauge by $\widetilde{g} = \mathbf{1}$,
\begin{equation}
\begin{aligned}\label{eq:LambdaAction1}
S_{k,\lambda}(g, A) = S_{\text{WZW},k} (g)
- \frac{k}{ \pi} \int d\sigma d\tau \, \Big( \langle A_+ , (\lambda^{-1} - D_{g^{-1}}) A_- \rangle
- \langle A_- , \partial_+ g g^{-1} \rangle + \langle A_+ , g^{-1}\partial_- g \rangle \Big) \ .
\end{aligned}
\end{equation}
Here we have introduced the adjoint operator $D_g : \mathfrak{g} \rightarrow \mathfrak{g}$, $D_g (T_A) = g T_A g^{-1} = T_B (D_g){}^B{}_A$ with $g\in G$ and the parameter $\lambda$,
\begin{equation}
\lambda = \frac{k}{k+\kappa^2} .
\end{equation}
The gauge fields are now auxiliary and can be integrated out. Varying the action $S_{k,\lambda}(g,A) $ with respect to $A_{\pm}$ we find the constraints,
\begin{equation}\label{eq:GaugeConstraints}
A_+ = \left( \lambda^{-1} - D_g \right)^{-1} \partial_+ g g^{-1}\, , \qquad
A_- = -\left( \lambda^{-1} - D_{g^{-1}} \right)^{-1} g^{-1} \partial_- g\, .
\end{equation}
Substituting these into eq.~\eqref{eq:LambdaAction1} gives the large $k$ effective action,
\begin{equation}\label{eq:LambdaAction2}
\begin{aligned}
S_{k,\lambda}(g) = S_{\text{WZW},k}(g) - \frac{k }{\pi}\int \mathrm{d}\sigma \mathrm{d}\tau \, \langle \partial_+ g g^{-1} , \left( \lambda^{-1} - D_{g^{-1}} \right)^{-1} g^{-1}\partial_- g \rangle \ ,
\end{aligned}
\end{equation}
which is a deformation of the WZW theory, exact to all orders (``all-loop'') in $\lambda$, with a residual global symmetry $g \rightarrow g_0 g g_0^{-1}$, $g_0\in G$. Effectively, the $\lambda$-theory thus deforms the target space metric and Kalb-Ramond field of the WZW $\sigma$-model. In addition, the Gaussian elimination of the gauge fields in the path integral results in a non-constant dilaton profile,
\begin{equation}
\Phi = \Phi_0 - \frac{1}{2} \ln \det \left( \mathbf{1} - \lambda D_{g^{-1}} \right) ,
\end{equation}
with $\Phi_0$ constant.
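As a quick numerical cross-check of this dilaton profile (not part of the original derivation), one can evaluate $\det (\mathbf{1} - \lambda D_{g^{-1}})$ for $G=SU(2)$, where the adjoint action is a rotation of $\mathfrak{su}(2) \simeq \mathbb{R}^3$ by the rotation angle $\alpha$ of the group element, so that the determinant takes the closed form $(1-\lambda)(1-2\lambda\cos\alpha+\lambda^2)$. The following Python sketch, with illustrative normalisations, verifies this:
\begin{verbatim}
import numpy as np

# Pauli matrices
s = [np.array([[0, 1], [1, 0]], complex),
     np.array([[0, -1j], [1j, 0]], complex),
     np.array([[1, 0], [0, -1]], complex)]

def su2(alpha, n):
    """g = exp(-i alpha n.sigma/2): rotation by alpha about unit axis n."""
    n = np.asarray(n, float); n /= np.linalg.norm(n)
    nsig = sum(ni*si for ni, si in zip(n, s))
    return np.cos(alpha/2)*np.eye(2) - 1j*np.sin(alpha/2)*nsig

def adjoint(g):
    """Matrix of D_g, defined by g sigma_A g^{-1} = sigma_B (D_g)_{BA}."""
    gi = np.conj(g).T
    return np.array([[0.5*np.trace(s[B] @ g @ s[A] @ gi).real
                      for A in range(3)] for B in range(3)])

alpha, lam = 1.2, 0.3
g = su2(alpha, [0.3, 0.5, 0.8])
D = adjoint(np.conj(g).T)                      # D_{g^{-1}}
Phi = -0.5*np.log(np.linalg.det(np.eye(3) - lam*D))
closed = -0.5*np.log((1 - lam)*(1 - 2*lam*np.cos(alpha) + lam**2))
assert np.isclose(Phi, closed)                 # dilaton profile agrees
\end{verbatim}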
While the integrability of the $\lambda$-model (with periodic boundary conditions) was first shown in \cite{Sfetsos:2013wia} starting from the effective $\sigma$-model action \eqref{eq:LambdaAction2}, one can straightforwardly show it starting from \eqref{eq:LambdaAction1}, as done in \cite{Hollowood:2014rla}. The Lax connection ${\cal L}(z)$ representing the equations of motion of the field $g$ and satisfying the zero-curvature condition \eqref{eq:zerocurvatureLax} for all $z \in \mathbb{C}$ is,
\begin{equation} \label{eq:LambdaLax}
{\cal L}_\pm (z) = - \frac{2}{1 +\lambda} \frac{A_\pm}{1 \mp z} ,
\end{equation}
upon imposing the constraints \eqref{eq:GaugeConstraints}.
\subsection{Interpretation as (twisted) conjugacy classes}
To apply the boundary monodromy method to the $\lambda$-model we first need to consider the behaviour of the transport matrix $T_R^\Omega (2\pi, \pi ; z)$ under the reflection $R: \sigma \rightarrow 2\pi - \sigma$. For the $\lambda$-Lax \eqref{eq:LambdaLax} one finds that eq.\ \eqref{eq:ReflectedTransport} holds for $z_R = -z$. The resulting integrable boundary conditions \eqref{eq:BoundCondLax1} of the $\lambda$-model are then, after expanding order by order in the spectral parameter $z$,
\begin{equation}
\left. A_+ \right\vert_{\partial\Sigma} =\left. \Omega \left[ A_- \right] \right\vert_{\partial\Sigma},
\end{equation}
together with the requirement that $\Omega$ is an \textit{involutive} automorphism of the Lie algebra,
\begin{equation}
\Omega^2 = 1 .
\end{equation}
Moreover, to interpret the above boundary conditions as Dirichlet and (generalised) Neumann conditions, the automorphism $\Omega$ should be such that no energy-momentum flows through the boundary, i.e.\ the energy-momentum tensor must satisfy $T_{01}| = 0$, which turns out to require that $\Omega$ is metric-preserving in the sense of $\langle \Omega (T_A) , \Omega (T_B) \rangle = \langle T_A , T_B \rangle$.
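To make these two requirements concrete, the following Python sketch (purely illustrative, not part of the original analysis) checks for $\mathfrak{g}=\mathfrak{su}(2)$ that $\Omega = \mathrm{diag}(1,-1,-1)$, the adjoint action of a rotation by $\pi$ about the $1$-axis, is an involutive, metric-preserving Lie algebra automorphism:
\begin{verbatim}
import numpy as np

# su(2) structure constants: [T_A, T_B] = i eps_{ABC} T_C
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[1, 0, 2] = eps[2, 1, 0] = eps[0, 2, 1] = -1.0

Omega = np.diag([1.0, -1.0, -1.0])   # rotation by pi about the 1-axis

assert np.allclose(Omega @ Omega, np.eye(3))      # involutive
assert np.allclose(Omega.T @ Omega, np.eye(3))    # metric-preserving
# automorphism: Omega([T_A, T_B]) = [Omega T_A, Omega T_B]
lhs = np.einsum('abc,dc->abd', eps, Omega)
rhs = np.einsum('ea,fb,efd->abd', Omega, Omega, eps)
assert np.allclose(lhs, rhs)
\end{verbatim}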
Upon imposing the constraints \eqref{eq:GaugeConstraints}, the integrable boundary conditions are given by,
\begin{equation} \label{eq:intbc}
(\mathbf{1} - \lambda D_g)^{-1} \partial_+ g g^{-1} = - \Omega (\mathbf{1} - \lambda D_{g^{-1}})^{-1} g^{-1}\partial_- g .
\end{equation}
At the WZW conformal point ($\lambda = 0$) one consistently finds the (twisted) gluing conditions of \cite{Alekseev:1998mc,Felder:1999ka,Stanciu:2000fz} for the holomorphic Kac-Moody currents $J_+ = -k\partial_+ g g^{-1},\, J_- = kg^{-1} \partial_- g$ on the boundary,
\begin{equation} \label{eq:wzwgluing}
\lambda \rightarrow 0: \qquad J_+ = \Omega (J_- ) ,
\end{equation}
preserving precisely one copy of both the Kac-Moody current algebra (provided $\Omega$ is a Lie algebra automorphism, which is the case here) and the Virasoro algebra. Because only the latter property is necessary to preserve conformal invariance, the former led to the corresponding D-brane configurations being described as `maximally symmetric'. In \cite{Alekseev:1998mc,Felder:1999ka,Stanciu:2000fz} (see also \cite{Figueroa-OFarrill:1999cmq}) it was shown, starting from the Dirichlet conditions corresponding to eq.~\eqref{eq:wzwgluing}, that the D-brane worldvolumes wrap (twisted) conjugacy classes of the group $G$,
\begin{equation}
C_\omega (g) = \{ h g \omega (h^{-1} ) \, | \, \forall h\in G \} , \qquad \omega (e^{tX}) \equiv e^{t \Omega (X)} \in G , \;\; X\in \mathfrak{g},
\end{equation}
classified by the quotient of metric-preserving outer automorphisms $\omega \in \text{Out}_0 (G) = \text{Aut}_0(G)/\text{Inn}_0(G)$ \cite{Figueroa-OFarrill:1999cmq}. When $\omega \in \text{Inn}_0(G)$ is inner, i.e. $\omega(h) = \text{ad}_w (h) = w h w^{-1}$ for some $w\in G$, the twisted conjugacy class $C_\omega (g)$ is related to the ordinary conjugacy class $C_{\text{Id}}(g)$ by a (right) group translation,
\begin{equation}
C_{\text{ad}_w} (g) = C_{\text{Id}} (gw) w^{-1} ,
\end{equation}
which is a symmetry of the WZW model. The automorphisms $\omega$ are in principle not constrained any further here.
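The translation property is elementary; as a hedged numerical illustration, the following Python sketch checks for random $SU(2)$ elements that $h g \,\omega(h^{-1}) = \big(h (gw) h^{-1}\big) w^{-1}$ with $\omega = \text{ad}_w$:
\begin{verbatim}
import numpy as np

def random_su2(rng):
    """Random SU(2) element from a normalised quaternion."""
    q = rng.normal(size=4); q /= np.linalg.norm(q)
    return np.array([[q[0] + 1j*q[3], -q[1] + 1j*q[2]],
                     [q[1] + 1j*q[2],  q[0] - 1j*q[3]]])

rng = np.random.default_rng(0)
g, w, h = (random_su2(rng) for _ in range(3))
hi, wi = np.conj(h).T, np.conj(w).T      # inverse = dagger in SU(2)

left  = h @ g @ (w @ hi @ wi)            # element of C_{ad_w}(g)
right = (h @ (g @ w) @ hi) @ wi          # same element of C_Id(gw) w^{-1}
assert np.allclose(left, right)
\end{verbatim}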
For generic $\lambda$ it was shown in \cite{Driezen:2018glg} that the geometrical picture of the integrable boundary conditions \eqref{eq:intbc} as D-branes wrapping twisted conjugacy classes persists, thanks to neat cancellations of the $\lambda$-dependence. This is indeed expected, since the deformation affects only target space data such as the metric, while the worldvolumes are defined through the orthogonal decomposition of the tangent space with respect to the Dirichlet conditions, independently of the target space metric. However, integrability picks out only the automorphisms $\omega (e^{tX}) = e^{t\Omega(X)}$ that satisfy $\Omega^2 = 1$. Generic inner automorphisms, or group translations of the conjugacy classes, are thus excluded. Indeed, independent right group translations are not a symmetry of the $\lambda$-model, a fact that remarkably follows from demanding integrability.
\subsection{$G = SU(2)$ illustration}
To illustrate the above, we focus in this section on the case of the $G= SU(2)$ group manifold, for which $\text{Out}_0(SU(2)) = \text{Id}$ is trivial, so that one describes ordinary conjugacy classes\footnote{ For an analysis and explicit example of \textit{twisted} conjugacy classes, possible in $\lambda$-$SL(2,R)$, we refer to \cite{Driezen:2018glg}. Interestingly, in $G=SL(2,R)$ only the twisted conjugacy classes turn out to correspond to \textit{stable} D-brane configurations \cite{Bachas:2000fr}, telling us it is indeed worth including the possibility of twisted gluing conditions.}. We parametrise the group element $g\in SU(2)$ in Cartesian coordinates,
\begin{equation}
g = \begin{pmatrix}
X_0 + i X_3 & - X_1 + i X_2 \\ X_1 + i X_2 & X_0 - i X_3
\end{pmatrix} ,
\end{equation}
constrained to $\det g = X_0^2 + X_1^2 + X_2^2 + X_3^2 = 1$, making the embedding of $SU(2)$ as an $S^3$ in $\mathbb{R}^4$ apparent. The group elements in an ordinary conjugacy class $C(g)$ clearly have a constant trace, which here fixes the $X_0$ parameter to a constant value. The conjugacy classes are thus $S^2$-spheres of varying radius inside the $S^3$, as illustrated in figure \ref{fig:su2illustration}. When the $\lambda$-parameter is turned on and the target space metric $G$ gets deformed, the size (or radius) of the $S^2$-spheres changes according to the induced deformed metric $\left. G\right\vert_{S^2}$ on these branes. Figure \ref{fig:su2illustration} moreover makes clear that the rotational symmetry of the WZW background is lost in the deformation, which is a reflection of the $\Omega^2 = 1$ constraint coming from integrability preventing the existence of rotated branes. Indeed, the semi-classical analysis of the quadratic scalar fluctuations of the branes in \cite{Driezen:2018glg} moreover shows that the massless p-wave triplet present at $\lambda=0$ becomes massive for $\lambda \neq 0$.
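As an explicit (and again only illustrative) sanity check of this picture, one can verify numerically that the parametrisation above satisfies $\det g = 1$ and that conjugation leaves $X_0 = \tfrac12 \mathrm{tr}\, g$ invariant, so that an ordinary conjugacy class is indeed an $S^2$ at fixed $X_0$:
\begin{verbatim}
import numpy as np

def random_su2(rng):
    q = rng.normal(size=4); q /= np.linalg.norm(q)   # (X0, X1, X2, X3)
    return np.array([[q[0] + 1j*q[3], -q[1] + 1j*q[2]],
                     [q[1] + 1j*q[2],  q[0] - 1j*q[3]]])

rng = np.random.default_rng(1)
g = random_su2(rng)
assert np.isclose(np.linalg.det(g).real, 1.0)   # S^3: sum of X_i^2 = 1

X0 = 0.5*np.trace(g).real                       # X_0 = tr(g)/2
for _ in range(5):
    h = random_su2(rng)
    gc = h @ g @ np.conj(h).T
    # X_0 is invariant: the conjugacy class sits at fixed X_0
    assert np.isclose(0.5*np.trace(gc).real, X0)
\end{verbatim}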
\begin{figure}[H]
\centering
\includegraphics[scale=0.15]{LambdaSquashin2.pdf}
\caption{For illustrative purposes we portray here the $S^3 \simeq SU(2)$ group manifold in one dimension less. The green lines represent the $S^2$-branes or conjugacy classes in the $S^3$ that change size under the \textit{squashing} of the $S^3$ when the $\lambda$-deformation is turned on.}\label{fig:su2illustration}
\end{figure}
In both cases, there is a total of two static D0-branes (corresponding to the north and south poles) and $l$ static D2-branes (corresponding to the $S^2$'s). The number $l$ is integer quantised and equal to $l=k-1$, following from topological obstructions in formulating the WZ term in \eqref{eq:doubledaction} in the presence of a boundary. In the boundary case the WZ term should be altered to \cite{Klimcik:1996hp},
\begin{equation}\label{eq:wzbdy}
\int_{M_3} H \rightarrow \int_{M_3} H + \int_{D_2}
\omega ,
\end{equation}
with $\partial M_3 = g(\Sigma) + D_2$, $D_2 \subset g(\partial\Sigma)$ and $\omega$ a two-form on $D_2$ such that $H |_{D_2} = \mathrm{d}\omega$. To cancel ambiguities in the choice of $M_3$ in the path integral, recall that the closed string WZW theory on compact groups requires the level $k$ to be integer quantised. The open string WZW theory with \eqref{eq:wzbdy}, on the other hand, requires the D-branes to be localised at a discrete number of positions. In the case of $G=SU(2)$ the number of branes is then indeed fixed as $l = k-1 \in \mathbb{Z}$; we refer to \cite{Klimcik:1996hp,Stanciu:2000fz,Figueroa-OFarrill:2000lcd} for more details. Interestingly, this can be seen as a semi-classical stabilisation of the D2-branes, since their localised positions forbid them to shrink smoothly to zero size. When the deformation is turned on, \cite{Driezen:2018glg} shows that the \textit{continuous} $\lambda$-dependence precisely cancels in the topological conditions\footnote{Both the $H$-form and the $\omega$-form receive a $\lambda$-contribution, but these precisely cancel.} and so, consistently, also in the $U(1)$ flux quantisation. Again, indeed, a semi-classical analysis of the scalar fluctuations gives a massive s-wave with a mass independent of $\lambda$.
\subsection{Interplay with generalised T-dualities}
Another motivation to look at $\lambda$-deformations is their close connection to generalised T-dualities. The $\lambda \rightarrow 1$ scaling limit (obtained by taking $k \rightarrow \infty$) produces e.g.\ the non-Abelian T-dual of the PCM \cite{Sfetsos:2013wia}. On the other hand, for generic values of $\lambda \in [0,1]$ the model is Poisson-Lie T-dual \cite{Klimcik:1995ux,Klimcik:1995dy}, up to an additional analytical continuation, to an integrable deformation of the PCM \cite{Vicedo:2015pna,Hoare:2015gda,Klimcik:2015gba} known as the Yang-Baxter or $\eta$-deformation \cite{Klimcik:2008eq,Delduc:2013fga}, which has the action,
\begin{equation}
S_{t, \eta} (\widehat{g}) = \frac{1}{t} \int_\Sigma d\sigma d\tau\; \langle \partial_+ \widehat{g} \widehat{g}^{-1} , \left( \mathbf{1} - \eta {\cal R} \right)^{-1} \partial_- \widehat{g} \widehat{g}^{-1} \rangle ,
\end{equation}
where ${\cal R} : \mathfrak{g} \rightarrow \mathfrak{g}$ is an operator solving the modified classical Yang-Baxter equation.
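For concreteness, a standard non-split solution on $\mathfrak{su}(2)$ acts on the basis $T_a = \sigma_a/2$ as ${\cal R}\, T_1 = T_2$, ${\cal R}\, T_2 = -T_1$, ${\cal R}\, T_3 = 0$. The following Python sketch verifies, in one common convention (conventions for the modified classical Yang-Baxter equation vary), that $[{\cal R}X,{\cal R}Y] - {\cal R}\big([{\cal R}X,Y]+[X,{\cal R}Y]\big) = [X,Y]$ for random $X,Y$:
\begin{verbatim}
import numpy as np

s = [np.array([[0, 1], [1, 0]], complex)/2,     # T_a = sigma_a/2
     np.array([[0, -1j], [1j, 0]], complex)/2,
     np.array([[1, 0], [0, -1]], complex)/2]
Rmat = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])

def vec2mat(v):
    return sum(vi*Ti for vi, Ti in zip(v, s))

def Rop(X):
    v = np.array([2*np.trace(X @ Ti) for Ti in s])  # tr(T_a T_b) = d_ab/2
    return vec2mat(Rmat @ v)

def br(X, Y):
    return X @ Y - Y @ X

rng = np.random.default_rng(2)
for _ in range(5):
    X, Y = vec2mat(rng.normal(size=3)), vec2mat(rng.normal(size=3))
    lhs = br(Rop(X), Rop(Y)) - Rop(br(Rop(X), Y) + br(X, Rop(Y)))
    assert np.allclose(lhs, br(X, Y))   # non-split mCYBE satisfied
\end{verbatim}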
Tracking the behaviour of D-brane configurations under these generalised T-dualities is, in general, a challenging procedure due to the lack of well-defined boundary conditions in the curved background geometry. In the $\lambda$-deformation, however, integrability dictates precise boundary conditions [eq.~\eqref{eq:intbc}] given in terms of worldsheet derivatives of the phase-space variables. Together with the known canonical transformations of non-Abelian T-duality (NATD) \cite{Lozano:1995jx,Sfetsos:1996pm} and Poisson-Lie (PL) T-duality \cite{Sfetsos:1997pi}, this allows us to find the dual D-branes in, e.g., the $G= SU(2)$ case. Schematically we find \cite{Driezen:2018glg},
\begin{equation*}
\begin{aligned}
&\text{D2-brane in the NATD of the PCM } \quad &&\xrightarrow{\text{can.\ transf.\ }} \quad &&\text{D3-brane in the original PCM} \\
&\text{D2-brane in the $\lambda$-deformed WZW} \quad &&\xrightarrow{\text{can.\ transf.\ }} \quad &&\text{D3-brane in the $\eta$-deformed PCM}
\end{aligned}
\end{equation*}
so that in both cases the $S^2$-branes pop open to space-filling D3-branes. Remarkably, the boundary conditions obtained in this way in \cite{Driezen:2018glg} match precisely with the boundary conditions that follow from the boundary monodromy method in section \ref{s:bmm} when plugging in the Lax pair of the Principal Chiral Model \cite{Zakharov:1973pp},
\begin{equation}
{\cal L }_\pm (z) = \frac{1}{1\mp z} g^{-1} \partial_\pm g,
\end{equation}
or of the $\eta$-deformed PCM \cite{Klimcik:2008eq,Delduc:2013fga},
\begin{equation}
{\cal L }_\pm (\eta ; z) = \frac{1+\eta^2}{1\pm z} D_g \cdot \frac{1}{1\pm \eta {\cal R}} \cdot \partial_\pm g g^{-1} ,
\end{equation}
in \eqref{eq:BoundCondLax1} respectively.
\section{Conclusions and outlook}\label{s:concl}
In this overview note we have seen an efficient method to derive classical integrable boundary conditions in $\sigma$-models by demanding that the monodromy matrix of the Lax connection generates conserved charges even in the presence of boundaries. As emphasized in the introduction, this is generically challenging for stringy $\sigma$-models without precise CFT formulations. The boundary monodromy method, however, essentially only requires the knowledge of the $\sigma$-model Lax connection. \\
\indent In the context of $\lambda$-deformations the boundary monodromy method dictates integrable boundary conditions that are described elegantly by twisted conjugacy classes in the deformed target space. This geometrical picture is independent of the deformation parameter and, indeed, connects smoothly to the D-brane configurations dictated by CFT methods at the undeformed WZW point. In the $SU(2)$ illustration we have seen that the deformation changes the size of the D-branes and destroys their rotational symmetry in the deformed geometry. The latter (natural) observation ties in nicely with the constraining features of integrability. Additionally, we have seen that the flux quantisation consistently remains independent of the $\lambda$-parameter and enforces the branes to sit stabilised at localised positions. These conclusions are supported in \cite{Driezen:2018glg} by an analysis of scalar fluctuations of the D-branes. Finally, armed with precise integrable boundary conditions, one can track them under the generalised dualities present in $\lambda$-deformations. For $G=SU(2)$ we have seen a Dirichlet condition transform into a generalised Neumann condition which, to close the circle, turns out to follow from demanding integrability of the dual models as well.\\
Let us stress that the analysis so far has been purely classical. It remains an interesting question to understand the quantum description of the integrable boundary conditions in these $\lambda$-models. Here, the bulk $S$-matrix of \cite{Appadu:2017fff} should be supplemented by a boundary $K$-matrix describing particle reflections off the boundary and satisfying the boundary Yang-Baxter equation \cite{Cherednik:1985vs,Sklyanin:1988yz}. Since the $S$-matrix of \cite{Appadu:2017fff} was derived by mapping the quantum $\lambda$-model to a spin $k$ XXX Heisenberg spin chain, it would be appealing to interpret the boundary $K$-matrix in the corresponding open spin chain as well.\\
\indent Another appealing line of study, returning to the string point of view, is to consider integrable D-branes of $\lambda$-models in supercoset geometries \cite{Hollowood:2014qma} as here the deformation is expected to be truly marginal to all loops.
\section*{Acknowledgements}
I would like to thank the organisers of the ``Dualities and Generalised Geometries" session part of the Corfu Summer Institute 2018 schools and workshops for giving me the opportunity to speak and for a stimulating and interesting workshop. Additionally, I would like to thank my collaborators and advisers Daniel Thompson and Alexander Sevrin for the fruitful discussions we have had in the process of \cite{Driezen:2018glg} and Saskia Demulder for a careful read of the manuscript. Finally, I acknowledge also the support by the ``FWO-Vlaanderen'' through an aspirant fellowship and the project G006119N.
\bibliographystyle{/Users/sibylledriezen/Dropbox/PhD:Bibfile/JHEP}
| {'timestamp': '2019-04-09T02:07:57', 'yymm': '1904', 'arxiv_id': '1904.03390', 'language': 'en', 'url': 'https://arxiv.org/abs/1904.03390'} |
\section{Introduction}
The growth of ultra-thin Ag films on Pt(111) has received much attention in the past few decades~\cite{Davies1982, Paffett1985, Roder1993a, Hartel1993, Grossmann1996, Schuster1996, Becker1993}, driven mainly by complex surface alloying~\cite{Roder1993, Struber1993, Bendounan2012} resulting in the formation of stress-stabilized surface nanostructures~\cite{Zeppenfeld1994, Zeppenfeld1995, Tersoff1995, Jankowski2014}. The interest was further raised by the possibility to generate and tune novel chemical and physical properties by varying the stoichiometry at the surface and careful control of the Ag growth conditions~\cite{Jankowski2014, Diemant2015, Schuettler2015}, and by the formation of periodic dislocation networks~\cite{Brune1994, Ait-Mansour2012, Jankowski2014a, Hlawacek2015} used as nano-templates~\cite{Brune1998} for the growth of organic films~\cite{Ait-Mansour2006, Ait-Mansour2008, Ait-Mansour2009, Ait-Mansour2009a}.
Low-energy electron diffraction (LEED)~\cite{Paffett1985}, thermal energy atom scattering (TEAS)~\cite{Becker1993} and scanning tunnelling microscopy (STM)~\cite{Roder1993a} experiments revealed that at room temperature the first Ag layers grew through the formation of large pseudomorphic and thus strained Ag islands. This strain is caused by a lattice mismatch of about 4\% between the lattice constants of bulk Ag and Pt. An increase of the surface temperature above 550~K leads to irreversible disorder at the surface~\cite{Becker1993}. STM investigations~\cite{Roder1993} revealed that this disorder is caused by the formation of a surface confined alloy~\cite{Tersoff1995} composed of strained nanometre-sized Ag-rich structures embedded in the Pt surface~\cite{Roder1993}. The shape and size of these structures varied strongly with coverage~\cite{Zeppenfeld1994, Zeppenfeld1995, Jankowski2014}. After deposition of one monolayer~(ML) of Ag the surface was found to dealloy, leaving a pseudomorphic Ag layer. Further deposition of Ag induced the formation of a triangular dislocation network which allows relief of the surface strain~\cite{Brune1994}. The third layer was reported to be a pure silver layer~\cite{Ait-Mansour2012} with a propagation of a height undulation originating from the dislocation network at the buried interface~\cite{Ait-Mansour2012, Jankowski2014a, Hlawacek2015}.
Much important microscopic work on the growth of ultra-thin silver films on Pt(111) has been done since this system became a prototype of surface confined alloying. However, \textit{in situ} spatio-temporal information during deposition was still completely lacking. The present study fills this remarkable gap. As will be shown, the system exhibits remarkably rich dynamic behaviour, which provides important details of the (de-)alloying processes in this system. Our investigation reveals that during growth of the first layer at 750-800~K an AgPt surface alloy forms with areas exhibiting a different AgPt stoichiometry. Beyond about 0.5~ML of added Ag, the surface starts to dealloy, a process which becomes prominent near coalescence of the first layer islands. Further increase of the Ag coverage results in the formation of irregularly shaped vacancy islands which are gradually filled by Pt atoms expelled from the alloy phase. In the last stage, these segregated Pt atoms form compact clusters which are observed as black spots in low energy electron microscopy (LEEM) images. These areas have also been analysed \textit{ex~situ} with atomic force microscopy (AFM). The combination of these techniques leads to our identification of these black spots as heavily strained and possibly even amorphous Pt features. When the coverage reaches 1~ML we observe that the integral area of these clusters decreases, which is attributed to their partial dissolution caused by re-entrant alloying of the ultrathin Ag film. These ``amorphous'' Pt clusters are visible on the surface up to a coverage of several layers and are stable upon exposure to atmospheric conditions.
\section{Experimental}
The LEEM measurements were performed with an Elmitec LEEM~III with a base pressure better than $1 \times 10^{-10}$~mbar. The Pt(111) crystal used had a miscut angle of less than 0.1$^{\circ}$~\cite{Linke1985}. Surface cleaning was done by prolonged repetitive cycles of argon ion bombardment, annealing in oxygen at $2 \times 10^{-7}$~mbar at 800~K, and subsequent flashing to 1300~K in the absence of oxygen. The sample was heated by electron bombardment from the rear and the temperature was measured with a W3\%/Re-W25\%/Re thermocouple. An Omicron EFM-3 evaporator was used to deposit 99.995\% purity silver from a molybdenum crucible onto the sample. The coverage was calibrated on the basis of recorded LEEM movies, in which we could track the growth of the first pseudomorphic silver layer followed by the growth of the second layer, which propagates from step edges. One monolayer (ML) is defined as a layer with an atomic density equal to that of a Pt(111) layer. LEEM images were recorded in bright-field mode using an appropriate contrast aperture around the (00) diffracted beam in the focal plane. The low energy electron diffraction (LEED) patterns were recorded using the largest available aperture of 25~$\rm\mu$m in the illumination column.
The AFM measurements were done under ambient conditions with an Agilent~5100 AFM employing amplitude-modulation for recording height topography. A MikroMasch Al-black-coated NSC35 Si$_{3}$N$_{4}$ AFM tip with a tip radius of 8~nm was used in these measurements. The resonance frequency of this tip was 205~kHz and the nominal spring constant 8.9~N/m. For the measurements an amplitude set-point of 90\% was used and an oscillation amplitude in the range from 30 to 40~nm. The amplitude modulation imaging mode provides simultaneously topographic and phase images. The latter provides information on the local variation in energy dissipation involved in the contact between the tip and the sample. Various factors are known to influence this energy dissipation, among which are viscoelasticity, adhesion and chemical composition~\cite{Garcia2007}.
\section{Results and Discussion}
\subsection{An overview of initial growth and alloying of Ag on Pt(111); LEEM}
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.6]{leem_frames_alloying.eps}
\caption{LEEM images recorded in bright-field mode at coverages: (a)~0.275~ML, (b)~0.38~ML, (c)~0.5~ML, (d)~0.76~ML, (e)~1~ML, (f)~1.25~ML. The red arrows in (c) point at emerging ``black'' spots. The FoV is 2~$\rm{\mu}$m, the electron energy is 2~eV and the substrate temperature is 750~K. The small dark spots, best visible in (a), are LEEM channel plate defects.}
\label{fig:leem_frames_alloying}
\end{figure*}
Figure~\ref{fig:leem_frames_alloying} shows a sequence of LEEM images recorded during the growth of the first Ag layer on Pt(111) at 750~K. The brightness of the various features in these successively recorded images cannot be compared due to the digital enhancement of each individual image in order to always obtain the best contrast settings. An image with a 2~$\rm\mu$m field of view (FoV) of the initial Ag deposition on the Pt(111) surface is shown in Fig.~\ref{fig:leem_frames_alloying}(a). The results have been obtained with a deposition rate of $1.7\times10^{-3}$~ML/s. Figure~\ref{fig:leem_frames_alloying}(a) shows a large terrace in the centre, bordered by ascending monoatomic steps. Upon deposition the growth of a surface confined alloy is observed, starting from the ascending step edges~\cite{Roder1993a} and followed by the nucleation and growth of alloyed islands seen in Fig.~\ref{fig:leem_frames_alloying}(a)\=/(b). Beyond a Ag coverage of roughly 0.5~ML [Fig.~\ref{fig:leem_frames_alloying}(c)] the surface slowly dealloys~\cite{Zeppenfeld1994} and the islands start to coalesce. During the dealloying phase the surface is quite heterogeneous, especially, but not exclusively, on top of the largest islands. Darkish or ``black'' spots [marked by red arrows in Fig.~\ref{fig:leem_frames_alloying}(c)] develop first in the centre of the adatom islands and later at an enhanced rate during coalescence. The black dots reveal Pt segregation, on which we focus in this paper: not only the process itself but also its physical origin. At 0.75~ML, coalescence of the islands leads to the formation of elongated vacancy clusters [meandering ``lines'' in Fig.~\ref{fig:leem_frames_alloying}(d)]. With continuing Ag deposition these vacancy clusters are filled by expelled Pt atoms and at 1~ML [Fig.~\ref{fig:leem_frames_alloying}(e)] they take compact shapes, as we will detail further below. Completion of the monolayer leads to a decrease of the integral exposed area of the black dots. This shrinkage of the dots is accompanied by a reduced total brightness and indicates re-entrant alloying. The second layer starts to grow by step propagation [see Fig.~\ref{fig:leem_frames_alloying}(f)] and island nucleation in the centre of terraces.
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.75]{black_spots_area.eps}
\caption{Integrated fractional area of the black spots vs.\ coverage. The fractional area is calculated from the images recorded with a 4~$\rm{\mu}$m FoV.}
\label{fig:black_spots_area}
\end{figure*}
To illustrate the evolution of the Pt segregation we plot in Fig.~\ref{fig:black_spots_area} the fractional area of the black spots as a function of the coverage. The initial increase of the area, up to 0.8~ML, is caused by development of the black spots in the centres of the islands. Later, the steep increase, at around 0.85~ML, is related to the filling of the vacancies by Pt atoms at an enhanced rate, which is followed by a distinct decrease just before completion of the first layer. This decrease reveals the partial dissolution of the black spots due to the re-entrant alloying of the film when approaching completion of the first monolayer.
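For readers who wish to reproduce this type of analysis, a minimal Python sketch of the underlying thresholding procedure is given below; the file names, the threshold value and the handling of channel-plate defects are placeholders and do not reflect the actual analysis pipeline used here.
\begin{verbatim}
import numpy as np
from PIL import Image   # frame files below are hypothetical

def dark_fraction(filename, rel_threshold=0.25):
    """Fraction of the FoV covered by 'black' spots in one LEEM frame.
    In a real analysis the channel-plate defects should be masked."""
    frame = np.asarray(Image.open(filename).convert('F'))
    return float(np.mean(frame < rel_threshold*frame.mean()))

rate = 1.7e-3                               # ML/s, as quoted in the text
frames = [f"frame_{i:04d}.tif" for i in range(0, 600, 10)]
# coverages = [rate*10*i for i in range(len(frames))]
# areas = [dark_fraction(f) for f in frames]
\end{verbatim}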
\subsection{Mixing in initial sub-monolayer growth of Ag on Pt(111); LEEM}
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.75]{esther_initial.eps}
\caption{LEEM images recorded in bright-field mode: (a)~clean~Pt(111), (b)~0.3~ML, (c)~0.48~ML, (d)~0.7~ML. The FoV is 2~$\rm{\mu}$m, the electron energy is 2.0~eV and the substrate temperature is 750~K. The small dark spots, best visible in (a), are LEEM channel plate defects.}
\label{fig:esther_initial}
\end{figure*}
Figure~\ref{fig:esther_initial} shows representative snapshots of a movie taken during the initial growth of Ag/Pt(111). The brightness of the exposed layer decreases immediately after opening of the Ag evaporator shutter. This is indicative of an increase of the diffuse scattering due to alloying-related disorder~\cite{Becker1993} and will be discussed in more detail in the next section. It is also obvious that at first the material is accommodated at the ascending steps. These steps propagate downward toward the large terrace in the centre. Initially effectively all material, i.e., the excess Ag atoms and the Pt atoms expelled from the exposed surface, is sufficiently mobile to reach the pre-existing steps. During these initial stages the brightness of the surface is identical everywhere, indicating that the emerging surface undergoes alloying in which the Ag embedded in the surrounding Pt matrix~\cite{Roder1993,Zeppenfeld1994,Zeppenfeld1995} is homogeneously distributed. There is also no brightness variation on the narrower terraces near the outer skirts of the images. After an incubation time of about 180~s (0.3~ML) a few islands, visible in Fig.~\ref{fig:esther_initial}(b), nucleate in the central region of the large lower terrace. This provides another indication of a peculiar, alloying-related feature: where initially the mobility was sufficiently large for the atoms to reach the bordering steps, it becomes insufficient to reach the propagating steps, even though their separation has decreased. This is the first indication of a decreasing mobility of (Pt and Ag) ad-atoms on the alloying surface. Further evidence for this process is obtained from a comparison of Figs.~\ref{fig:esther_initial}(c)-(d).
\begin{figure*}[h!]
\centering
\includegraphics[scale=1.3]{esther_graph_islands.eps}
\caption{The island size distributions for submonolayer coverages of 0.30, 0.45 and 0.60 monolayers, presented in (a), (b) and (c), respectively.}
\label{fig:esther_graph_islands}
\end{figure*}
The number of islands increases up to very late stages of growth, as illustrated in Fig.~\ref{fig:esther_graph_islands}. This is unlike conventional nucleation and growth, where nucleation is essentially finished after deposition of 1\% of a monolayer and the emerging islands keep growing with a quite narrow size distribution until they coalesce~\cite{Venables2000}. In the present case nucleation is still active at $\approx$0.60~ML. Moreover, the island size distribution is extremely broad and the positional distribution is far from homogeneous. The smallest islands emerge near the atomic steps in the originally denuded zone.
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.75]{esther_graph_N_islands.eps}
\caption{Number of first layer islands as a function of deposition time. The deposition rate is $1.7\times10^{-3}$~ML/s.}
\label{fig:esther_graph_N_islands}
\end{figure*}
Another way of illustrating the late nucleation behaviour is depicted in Fig.~\ref{fig:esther_graph_N_islands}, where we plot the number of islands as a function of time. Progressive nucleation takes place, which leads to a peak in the island density at 64\% of a monolayer. A straightforward explanation for these combined observations is alloying in the two exposed levels. Due to the progressive heterogeneity of the top layer the effective diffusion length decreases continuously and dramatically. As a result, enhanced nucleation takes place. In a site-hopping framework one would conclude that the activation energy for diffusion increases by not less than a factor of 4. One safely arrives at the conclusion that initially the mobility of the ad-atoms decreases with increasing coverage. However, an attempt to nail this down to more definite numbers fails due to the complexity of the system. The diffusing species are both Ag atoms and expelled Pt atoms, and these move across a heterogeneous top layer with small Ag-rich patches embedded in the Pt matrix. Moreover, with increasing coverage the concentration of embedded Ag atoms increases, while in a site-hopping model the residence times on top of Ag-filled sites and of Pt-filled sites will most likely differ for diffusing Ag and Pt atoms. One cannot exclude either that exchange processes further slow down the diffusion rates. We note that the de-mixing occurring at higher coverage leads to a more homogeneous composition and, reversing the argument, to enhanced diffusion and thus the suppression of new nucleation events. During these late stages of monolayer growth the number of islands also decreases as a result of coalescence processes.
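A minimal order-of-magnitude sketch of the site-hopping argument, with purely illustrative numbers (a critical nucleus of size $i=1$ and the standard scaling $N \propto (F/D)^{1/3}$ of the island density~\cite{Venables2000} are assumed), reads:
\begin{verbatim}
import numpy as np

kB, T = 8.617e-5, 750.0      # Boltzmann constant (eV/K), temperature (K)
density_ratio = 10.0         # illustrative increase of island density

# With N ~ (F/D)^(1/3), the diffusion constant must drop by the cube:
D_drop = density_ratio**3
dE = kB*T*np.log(D_drop)     # corresponding extra diffusion barrier
print(f"barrier increase ~ {dE:.2f} eV at {T:.0f} K")
# With a typical terrace barrier of a few tenths of an eV, such an
# increase is compatible with the factor quoted in the text.
\end{verbatim}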
\subsection{De-alloying of the alloy above 0.5 ML ; LEEM}
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.75]{esther_spots_higher_ML.eps}
\caption{LEEM images recorded in bright-field mode: (a)~1.0~ML, (b)~1.68~ML. The FoV is 2~$\rm{\mu}$m, the electron energy is 2.0~eV and the substrate temperature is 750~K.}
\label{fig:esther_spots_higher_ML}
\end{figure*}
A striking feature is the emergence of black spots, as shown in Fig.~\ref{fig:esther_spots_higher_ML}(a)-(b). The black spots appear when approaching completion of the monolayer and are already emerging in Fig.~\ref{fig:esther_initial}(c). They persist after the growth of several monolayers, and are even visible after the completion of the eighth monolayer (not shown here). We have carefully tried to identify their origin. For this purpose we have varied the deposition rate between $1.7\times10^{-4}$ and $1.1\times10^{-2}$~ML/s. Both the number density of the black dots and their integrated area are at variance with the behaviour anticipated for contamination: both increase by a few tens of a percent, i.e., much less than the factor of 13 one might anticipate from the exposure time of the Pt surface layer, which is by far more reactive than Ag. Occasionally, we do observe some evidence of carbon impurities (not visible in the presented LEEM figures). However, their signature under under-focus and over-focus conditions differs completely from that of the black dots in Fig.~\ref{fig:esther_initial}~and~Fig.~\ref{fig:esther_spots_higher_ML}. We therefore conclude that the ``black spots'' are inherent to the Ag-Pt(111) de-mixing process. They appear to consist of disordered Pt-rich patches which reduce in integral size with increasing film thickness, but persist even after deposition of 8~ML.
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.75]{esther_area_plot.eps}
\caption{(a) The growth process of Fig.~\ref{fig:esther_initial} analysed in terms of the uncovered area (see text). (b) The mean bright-field intensity as a function of time. The exposed top layer of the substrate is represented by the squares, the exposed first layer by circles. The area taken by black spots is subtracted from the analysis. The break above 400~s is caused by the ill-defined morphology of the surface during rapid de-alloying, which causes large uncertainties in the performed analysis. The imaging electron energy is 2~eV.}
\label{fig:esther_area_plot}
\end{figure*}
Figure~\ref{fig:esther_area_plot}(a) shows the evolution of what naively would be called the uncovered area, i.e., the supposedly still exposed virgin Pt(111) surface. As we will see shortly this conjecture cannot be maintained. The area taken by the black dots is subtracted. Within the uncertainties originating from this subtraction and/or from lensing effects as a result of field irregularities due to e.g. work function differences, the ``exposed Pt'' area decreases linearly with time, indicative of the growth of a single layer. In this particular experiment it takes about 590~s to complete the first layer. More information is obtained from the averaged brightness of the layers evolving at the two exposed levels, as shown in Fig.~\ref{fig:esther_area_plot}(b). In this evaluation the black dots have been disregarded. The grey symbols refer to the original level of the clean Pt(111) surface, while the black circles represent the results for the exposed surface at a coverage that is one monolayer higher. As becomes immediately evident, the brightness decays strongly from the very start of the deposition. It passes through a pronounced minimum and subsequently increases again up to a maximum value after roughly 500~s~(0.85~ML) (as we explain below, this maximum brightness marks busy activity in de-mixing, segregation, stress relaxation and re-entrant mixing). It is also obvious that the average brightness of both exposed layers is very similar. Both observations are completely consistent with the formation of a mixed ad-layer, or the surface confined alloy discussed in Ref.~\cite{Roder1993,Zeppenfeld1995,Zeppenfeld1994}. Alloying proceeds during the deposition of the first half layer, while de-alloying occurs during deposition of the second half monolayer. The alloy can only persist in the vacuum exposed layer(s) due to the energetics involved. First the tensile stress in the Pt(111) surface is reduced by the incorporation of the larger Ag atoms~\cite{Tersoff1995}, while with increasing Ag content the stress finally becomes compressive, which leads to the exchange of first layer Pt with Ag embedded in the lower layer. In the initial phase the brightness decays due to increasing disorder caused by the continuing embedding of Ag patches in the Pt\=/matrix, while later on the de-mixing leads to less disorder: the continuing expulsion of Ag leads to larger Pt patches surrounding the embedded Ag.
\begin{figure*}[h!]
\centering
\includegraphics[scale=1.5]{esther_rims_plot.eps}
\caption{The width of the bright border around the large islands, measured for a number of islands from several deposition experiments. There is a strongly preferred width of about 52~nm (see text).}
\label{fig:esther_rims_plot}
\end{figure*}
The brightness across the level of the growing ad-layer is by no means homogeneously distributed during the growth beyond about 0.40 ML. A representative snapshot is shown in Fig.~\ref{fig:esther_initial}(c). Under the chosen imaging conditions the smaller islands are bright, the ones with ``intermediate'' sizes are slightly less bright, many of them have a distinct black dot, and some of them have a distinctly brighter rim when compared to their centres. This feature is even clearer for the largest islands, which all have a bright rim and a dimmer central part. We interpret these findings as follows: the brighter areas represent Ag(-rich) areas. The smaller islands which appear during late stages of the nucleation are constituted of Ag atoms which are still being continuously deposited and Ag atoms which are now expelled from the initial level, i.e., from the alloyed exposed parts of the substrate. During later stages of growth of the intermediately sized and largest islands, the edges also accommodate mostly Ag atoms for the same reason. The resulting bright rims appear to have a characteristic width. The results obtained for a number of islands from several deposition experiments are shown in Fig.~\ref{fig:esther_rims_plot}. The distribution of the widths of these bright rims/borders is somewhat skewed but has a pronounced maximum at about 52~nm. The fact that these widths show no distinct correlation with island size or deposition rate suggests a thermodynamic origin. We attribute this to the relaxation of stress: the compressed Ag-film can best (partly) relieve its stress near the descending step edges. The emergence of bright rims is responsible for the slightly higher brightness obtained for the highest layer around 300~-~400~s in Fig.~\ref{fig:esther_area_plot}(b).
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.75]{esther_last_stage.eps}
\caption{LEEM images recorded in bright-field mode: (a)~0.75~ML, (b)~0.82~ML, (c)~0.88~ML, (d)~0.95~ML. The FoV is 2~$\rm{\mu}$m, the electron energy is 2.0~eV and the substrate temperature is 750~K.}
\label{fig:esther_last_stage}
\end{figure*}
With increasing silver deposition the de-mixing of the surface confined Ag/Pt alloy continues. The expulsion of surplus Ag from the lower level, i.e., the original substrate level, is apparently facilitated by the advancing steps. However, the Pt atoms encapsulated on the intermediate and in particular on the large islands lack that opportunity and have difficulty escaping. Therefore, these centres remain relatively dark during a prolonged period. Upon proceeding de-mixing the larger islands also become brighter, while the segregation intensifies and dark grey spots evolve. On the largest islands, the dimmer inner parts are also Pt-rich and strong fluctuations occur, as evidenced in Fig.~\ref{fig:esther_last_stage}. The dark grey areas fluctuate strongly both in position and in brightness. Finally also the largest islands host dark grey spots [cf. Figure~\ref{fig:esther_initial}~(d) taken at a coverage of 0.7 ML]. We note that at the same time the brightness of the small islands is unaffected. Some of the dark grey areas evolve into dark spots, while others seem to dissolve again in the Ag-layer. Around a total Ag coverage of 0.85 ML, i.e., during the late stages of coalescence, an enormous bustle of activity takes place. The irregularly shaped vacancy clusters, which result from the continuing growth of the coalesced islands, are in the overwhelming majority filled by segregation of Pt into black dots [see Fig.~\ref{fig:esther_last_stage}~(c)-(d) and Fig.~\ref{fig:fig_03_bs}]. At the same time the brightness of the large islands has reached its maximum [see Fig.~\ref{fig:esther_area_plot}~(b)]. This late and strikingly active stage of the birth of new Pt-rich dots happens precisely in those areas where the step advancement ceases. The filling of the irregularly shaped vacancy islands with black Pt-dots goes along with a strong reduction of their integrated step length, since all dots are circular in shape. The energetically unfavourable long integrated step length apparently facilitates the Pt segregation, after which a much reduced domain border length, surrounding the circular black spots, remains. Both features are quantified and illustrated in Fig.~\ref{fig:fig_03_bs} and Fig.~\ref{fig:fig_04_bs}.
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.75]{fig_03.eps}
\caption{LEEM images recorded in bright-field mode at coverages: (a)~0.75~ML, (b)~0.79~ML, (c)~0.82~ML, (d)~0.85~ML, (e)~0.89~ML, (f)~0.92~ML, (g)~0.96~ML, (h)~1~ML. The electron energy is 1.9~eV.}
\label{fig:fig_03_bs}
\end{figure*}
We stress that we carefully looked in the recorded $\mu$LEED patterns during Ag deposition for indications of structures different from the pseudomorphic (1$\times$1) structure and did not observe any evidence for them. At the same time, as we will show in a moment, we observe relaxations of the surface lattice which reach a maximum at 0.85 ML (see Fig.~\ref{fig:leed_spots}), the coverage where the brightness of the islands reaches its maximum in Fig.~\ref{fig:esther_area_plot}~(b). We therefore must conclude that these black dots are probably amorphous Pt(-rich?) areas. Further support for this conclusion is provided by their circular shape: the Pt-rich dots are heavily strained in their Ag-rich environment and undergo strong, isotropic compressive stress. In a later stage some even coalesce and become larger, eventually resuming their circular shape. We observe that the decrease of the size of the vacancies goes along with a change of their shape to a more compact one. This shape evolution can be seen in Fig.~\ref{fig:fig_04_bs}, which shows the compactness of the vacancies as a function of coverage. The compactness $C$ is defined as:
\begin{equation*}
C = \frac{(A\cdot4\pi)^\frac{1}{2}}{P}
\end{equation*}
where $P$ is the perimeter of a vacancy and $A$ its area.
At 0.85~ML, presented in Fig.~\ref{fig:fig_03_bs}(d), the expelled Pt atoms start to segregate into the vacancies, which finally, at 1~ML [see~Fig.~\ref{fig:fig_03_bs}(h)], leads to their complete filling. We note two more things: first, with increasing compactness the length of the energetically less favourable edges decreases. Also, the step energy will be lower in contact with platinum than with a vacancy. Both features lead to a lowering of the total energy and thus facilitate the Pt segregation. In other words, under heavy compressive stress the exposed Pt atoms actually experience an energetic advantage by segregating into the compacting vacancies. We note that, as illustrated in Fig.~\ref{fig:fig_04_bs}, the compactness never seems to reach its ideal value of~1. We attribute this to difficulties inherent in setting a well defined discrimination level. Uncertainties related to field distortion effects~\cite{Nepijko2001} play a role, and a principal problem is caused by the digital nature of the image. With a finite pixel size one cannot define a line with infinite sharpness. As a result the determined $C$ will always be an underestimate of the real compactness.
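The pixelation bias is easily illustrated: the following Python sketch (illustrative only) computes $C$ for a rasterised disk, using a simple edge-count estimate of the perimeter, and indeed yields a value distinctly below the ideal value of 1.
\begin{verbatim}
import numpy as np

def compactness(mask):
    """C = sqrt(4 pi A)/P for a binary mask; P is estimated by
    counting exposed pixel edges (4-connectivity)."""
    m = mask.astype(int)
    A = m.sum()
    P = (np.abs(np.diff(m, axis=0)).sum()
         + np.abs(np.diff(m, axis=1)).sum()
         + m[0, :].sum() + m[-1, :].sum()
         + m[:, 0].sum() + m[:, -1].sum())
    return np.sqrt(4*np.pi*A)/P

n = 512
y, x = np.mgrid[:n, :n]
disk = (x - n/2)**2 + (y - n/2)**2 < (0.4*n)**2
print(compactness(disk))   # ~0.79 instead of the ideal 1 for a disk
\end{verbatim}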
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.75]{fig_04.eps}
\caption{Plot of the compactness of the vacancies as a function of coverage.}
\label{fig:fig_04_bs}
\end{figure*}
\subsection{LEED spots intensity variation and surface lattice relaxation}
\begin{figure*}[h!]
\centering
\includegraphics[scale=1]{leed_intensity.eps}
\caption{Intensity of the LEED spots as a function of coverage: (a) the (00) spot, (b) the (10) spot and (c) the (01) spot. Height of the LEED spots as a function of coverage: (e) the (00) spot, (f) the (10) spot and (g) the (01) spot. The blue line marks the completion of the first monolayer. Panel (d) shows the ratio of the (10) and (01) spot intensities as a function of coverage. Panel (h) shows a typical LEED pattern recorded from clean Pt(111) with the positions of the corresponding spots labelled. The electron energy is 44~eV, the sample temperature is 800~K.}
\label{fig:leed_intensity}
\end{figure*}
The surface lattice variation can be studied in detail from the evolution of the \textit{in~situ} $\mu$LEED patterns recorded as a function of Ag coverage. In the initial phase of Ag deposition, surface alloying induces a sharp decrease of the intensity of the Bragg spots~\cite{Becker1993,Jankowski2014a}, shown in Fig.~\ref{fig:leed_intensity}, but their position, corresponding to the Pt(111) lattice [Fig.~\ref{fig:leed_spots}], is not altered. At around 0.7 ML, where coalescence of islands and nucleation of well defined black spots in their centres is observed, the intensity of the (10) spots decreases to reach a local minimum at 0.85 ML. At this same coverage, the (01) spot intensity reaches a maximum and both first order spots broaden slightly towards the (00) spot, as shown in Fig.~\ref{fig:leed_spots}(c). We attribute this broadening to the relaxation of the surface lattice towards the 4\% larger Ag distances. This relaxation of the lattice is triggered by de-alloying of the surface and partial segregation of Pt into vacancy clusters, as discussed above. Moreover, the (10)/(01) spot intensity ratio, plotted as a function of coverage in Fig.~\ref{fig:leed_intensity}(d), reaches a minimum at exactly 0.85 ML. This is a direct result of a transient lowering of the threefold symmetry: the Ag atoms in the relaxed film occupy not only FCC sites but also HCP sites. The latter contribute to enhanced sixfold symmetry features.
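The expected magnitude of this relaxation follows directly from the bulk nearest-neighbour distances (literature values are assumed in the short sketch below): relaxing the surface lattice from the Pt towards the Ag spacing moves the first-order spots inward by roughly 4\%, consistent with the observed broadening towards the (00) spot.
\begin{verbatim}
a_Pt, a_Ag = 2.775, 2.889   # bulk nearest-neighbour distances (Angstrom)
# First-order spots sit at |g| ~ 2 pi/a, so a relaxation from the Pt
# towards the Ag spacing moves them towards (00) by:
shift = 1 - a_Pt/a_Ag
print(f"expected inward spot shift ~ {100*shift:.1f}%")   # ~3.9%
\end{verbatim}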
The first monolayer is pseudomorphic, as its observed p(1$\times$1) LEED pattern given in Fig.~\ref{fig:leed_spots}(e) corresponds to that of the Pt(111) lattice \cite{Paffett1985,Becker1993,Jankowski2014a}. We also mention that after reaching 0.85~ML the intensity of the (00) spot in Fig.~\ref{fig:leed_intensity}(a) decreases again, which is rationalized as re-entrant mixing of the top layer. This is in line with the partial dissolution of the black spots [cf. Fig.~\ref{fig:black_spots_area}] and the concomitant decrease of the brightness of the exposed layers [Fig.~\ref{fig:esther_area_plot}~(b)].
\begin{figure*}[h!]
\centering
\includegraphics[scale=1]{leed_spots.eps}
\caption{The (01) LEED spot recorded during growth of Ag on Pt(111) at 800~K: (a) clean Pt(111), (b) at 0.65~ML, (c) at 0.74~ML, (d) at 0.85~ML, (e) at 1~ML. The arrow in (a) points in the direction of the (00) spot. The electron energy was 44~eV. The intensity of the spots is presented on a logarithmic colour scale.}
\label{fig:leed_spots}
\end{figure*}
We now focus on the nature of the black spots and apply different techniques for their characterization. These include spatially resolved work function (WF) measurements with LEEM and \textit{ex situ} AFM measurements. The latter provide support for the conclusion that expelled Pt atoms constitute the black dots.
\subsection{Work function changes}
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.9]{WF_images.eps}
\caption{LEEM images recorded in bright-field mode at the indicated deposition times of silver (grey scale images) and the concomitant change in the WF relative to clean Pt(111) (colour scale images). The deposition time needed for completion of the first layer is around $\sim$30~min.}
\label{fig:WF_images}
\end{figure*}
LEEM offers the capability of recording spatial maps of relative changes of the surface WF (see also Ref.~\cite{Hlawacek2015} for more details) during deposition of thin films. Figure~\ref{fig:WF_images} shows a sequence of LEEM images and the corresponding spatial maps of the surface WF change as a function of deposition time. The atomically clean terraces of Pt(111) exhibit a constant value of the WF and only a 0.2~eV lower WF is seen at the step edges \cite{Besocke1977}. The initial deposition of Ag leads to a sharp decrease of the average WF, shown in Fig.~\ref{fig:fig_07_bs}. After 10~min of Ag deposition (at around $\sim$0.3~ML) the WF of the growing islands ($\rm{\Delta\phi=-0.79}$~eV) is 0.1~eV lower than the WF of the surrounding host terrace ($\rm{\Delta\phi=-0.69}$~eV). This contrast in WF is attributed to different levels of intermixing of Ag and Pt in the islands and their surroundings. The work function has also been measured above the black spots, assumed to be Pt(-rich) clusters. As shown in Fig.~\ref{fig:fig_07_bs}, the latter follows quite closely the WF variation of their environment at an offset of about -0.2~eV. The similarity in temporal dependence makes sense if one realizes that the Pt(-rich) clusters are small and long range interactions play a major role in determining the work function. This seems somewhat surprising at first sight, since the WF of clean and smooth Pt(111) (5.8~eV) is much larger than that of clean and smooth Ag(111) (4.56~eV)~\cite{Kawano2008}. However, the WF depends strongly on the surface orientation, with a tendency towards lower WF with decreasing coordination, i.e., $\rm{\phi_{110} < \phi_{100} < \phi_{111}}$. This holds even more for the amorphous dots we are most likely dealing with. Finite size effects may also lower the WF~\cite{Nepijko2001}. It is noted that field distortions around small objects make drawing clear conclusions hazardous. Therefore, we tentatively conclude that the case for Pt as the constituting material of the clusters remains undecided on the basis of the current WF data.
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.75]{fig_07.eps}
\caption{The average WF and the WF above the Pt clusters as a function of coverage.}
\label{fig:fig_07_bs}
\end{figure*}
A further increase of the coverage leads to a decrease of the average WF, which at coverages of 1 and 2~ML takes the values $\rm{\Delta\phi=-1.085}$~eV and $\rm{\Delta\phi=-1.33}$~eV, respectively. The rather constant value of the WF for films thicker than two layers indicates that at most minor changes in the topography and composition of the surface occur beyond a film thickness of two layers.
At a coverage of 3~ML the average WF saturates at $\rm{\Delta\phi=-1.37}$~eV. This saturation value is in the range reported by Hartel~et~al.~\cite{Hartel1993}~($-$1.2~eV) and by Paffett~et~al.~\cite{Paffett1985}~($-$1.5~eV), obtained at similar coverages. We note that the reported values were measured for the alloy formed after annealing the surface at 600~K. Schuster~et~al.~\cite{Schuster1996} showed that both temperature and time during alloying drastically influence the intermixing level of the Ag/Pt(111) surface alloy, which results in a different surface morphology leading to a variation in the surface WF. Moreover, the step density of the initial Pt(111) surface may have a decisive influence of several tenths of an eV on the measured value \cite{Poelsema1982}. In that respect, we can safely conclude that our data compare favourably with the macroscopic data available in the literature.
\subsection{Characterization of 3~ML thick film in ambient AFM}
To gain more insight into the composition and (if possible) structure of the black spots we applied AFM. For that purpose we prepared a 3-layer-thick Ag film on Pt(111) by deposition in UHV at 800~K, as monitored by LEEM. As evident from Fig.~\ref{fig:WF_images} and the discussion above, the surface predominantly consists of silver, but a substantial number of black spots is still observed. The AFM data have been taken in ambient atmosphere.
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.6]{fig_08.eps}
\caption{AFM images of 3~ML Ag deposited on Pt(111) at 800~K and subsequently cooled down to room temperature. The measurements were done under ambient conditions. (a)~Topography image, (b)~phase contrast image, (c)~typical line profiles through clusters obtained from the phase contrast (red line) and topography (black line) images; the profiles have been shifted vertically for the sake of clarity.}
\label{fig:fig_08_bs}
\end{figure*}
Figure~\ref{fig:fig_08_bs} shows AFM data of the surface of the 3~ML Ag/Pt(111) system grown in UHV (see above) and measured under ambient conditions. The topography image of the surface presented in Fig.~\ref{fig:fig_08_bs}(a) shows step contrast, which indicates that the surface is still flat. No pronounced geometric structures are observed. The phase contrast image presented in Fig.~\ref{fig:fig_08_bs}(b) reveals the presence of many small, $\sim$50~nm wide regions for which the measured phase shift is 10$^{\circ}$ lower than for the rest of the surface. We attribute these regions to the Pt clusters observed with LEEM, as the number density of the clusters obtained from the AFM and LEEM images is very similar (8-15 clusters/$\rm{\mu}$m$^{2}$). From the typical height profiles shown in Fig.~\ref{fig:fig_08_bs}(c), measured along the lines drawn in Fig.~\ref{fig:fig_08_bs}(a)~and~(b), we conclude that the Pt clusters are about one to two layers higher than the surrounding layer. The phase shift contrast in the recorded images can be attributed to many factors~\cite{Garcia2007}; nevertheless it indicates that the chemical composition of the clusters differs from that of the majority of the surface, which at 3~ML is mainly composed of Ag atoms. We stress that the correlation between the phase contrast and the height contrast is perfect. The black protrusions have a different chemical composition, and under the applied ambient conditions oxidized Pt is the natural candidate. Therefore, the conclusion that the black dots consist mainly of platinum atoms is straightforward. The fact that the clusters protrude from the surface also explains the difficulty of selecting a proper focus condition in LEEM for these objects. We believe that the footing of these clusters is still at the substrate and that an oxygen layer contributes to the height differences.
A careful look at Fig.~\ref{fig:fig_08_bs}(b) shows a fairly smooth edge on the upper terrace side of the ascending step edge. This is probably related to the initial step propagation growth mode~\cite{Jankowski2014b, Vroonhoven2005}. The area apparently has a slightly different composition and/or structure. We ascribe this feature to the different possibilities of dealing with stresses near a descending step.
\section{Conclusions}
The spatio-temporal \textit{in situ} information from LEEM on the growth of ultrathin Ag on Pt(111) at 750-800~K shows vivid and rich dynamical behaviour. We confirm alloying and subsequent de-alloying during the growth of the first monolayer at these elevated temperatures. Initially alloying occurs and nucleation processes are prolonged as a result of the increasingly reduced mobility of Ag and Pt ad-species on the alloying surface. New nucleation events are even observed at a coverage exceeding 70\% of a monolayer! The ad-islands have a wide size distribution and are quite heterogeneous, especially the larger ones. Beyond a coverage of 50\% of a monolayer a violent segregation of Pt towards the centre of the islands occurs and bright Ag-rims become apparent. De-alloying is fast during coalescence of the adatom islands. The remaining irregularly shaped vacancy islands are quickly filled by Pt(-rich) patches (black dots) and take a compact circular shape. These features are energetically favoured since the integral step length is decreased and the boundaries between Pt(-rich) and Ag(-rich) areas are minimized. A concomitant transient relaxation of the exposed layer is inferred from the diffraction profiles. Upon further completion of the monolayer the initial pseudomorphic structure is resumed, accompanied by re-entrant (partial) alloying, which continues in the second layer.
The black dots are attributed to stressed, possibly amorphous Pt(-rich) features. \textit{Ex situ} AFM experiments support this picture as the observed phase contrast images suggest chemical contrast. These Pt-features persist even after deposition of several monolayers and probably have a firm base at the Pt-substrate.
\section{Acknowledgments}
We want to thank Robin Berkelaar for acquiring the AFM data. This work is part of ECHO research program 700.58.026, which is financed by the Chemical Sciences Division of the Netherlands Organisation for Scientific Research (NWO).
\clearpage
\bibliographystyle{iopart-num}
\section{Introduction}
\label{sec:1}
A smooth flow $(\phi_t)$ generated by a smooth vector field $X$ on a
compact manifold~$M$ is called \emph{stable} if the range of the Lie
derivative $\mathcal L_X: C^\infty(M) \to C^\infty(M)$ is closed and
it is called \emph{cohomology-free} or \emph{rigid} if it is stable
and the range of the Lie derivative operator has codimension one. The
\emph{Katok (or Katok-Hurder) conjecture} \cite{K01}, \cite{K03},
\cite{H85} states that every cohomology-free vector field is smoothly
conjugate to a linear flow on a torus with Diophantine frequencies. It
is not hard to prove that all cohomology-free flows are volume
preserving and uniquely ergodic (see for instance \cite{Forni}).
We also recall that the Katok conjecture is equivalent to the
\emph{Greenfield-Wallach conjecture}~\cite{GW} that every globally
hypoelliptic vector field is smoothly conjugate to a Diophantine
linear flow (see \cite{Forni}). A smooth vector field $X$ is
called~\emph{globally hypoelliptic} if any $0$-dimensional current $U$
on $M$ is smooth under the condition that the current $\mathcal L_X U$
is smooth. Greenfield and Wallach in~\cite{GW} proved this conjecture
for homogeneous flows on \emph{compact} Lie groups. (The equivalence
of the Katok and Greenfield-Wallach conjectures was essentially proved
already in \cite{CC00} as noted by the third author of this paper.
The details of the proof can be found in~\cite{Forni}).
The best general result to date in the direction of a proof is the
joint paper of the third author \cite{HH} where it is proved that
every cohomology-free vector field has a factor smoothly conjugate to a
Diophantine linear flow on a torus of dimension equal to the first
Betti number of the manifold $M$. This result has been developed
independently by several authors \cite{Forni}, \cite{Kocsard},
\cite{Matsumoto} to give a complete proof of the conjecture in
dimension $3$ and by the first author in the joint paper
\cite{FlaminioPaternain} to prove that every cohomology-free flow can
be embedded continuously as a linear flow in a possibly non-separated
Abelian group.
From the definition, it is clear that there are two main mechanisms
which may prevent a smooth flow from being cohomology-free: it can
happen that the flow is not stable or it can happen that the closure
of its range has codimension higher than one. For instance, linear
flows on tori with \emph{Liouvillean} frequencies are not stable with
range dense in a closed subspace of dimension one (the subspace of
functions of zero average), while translation flows \cite{Forni97},
\cite{MMY}, horocycle flows \cite{FF1} and nilflows \cite{FF} are in
general stable but have range of countable codimension.
Until recently, basically no other examples were known of non-cohomology-free
smooth flows. In particular there was no example of
the first kind of phenomenon, that is, of flows which are not stable
with range closure of codimension one, except for flows smoothly
conjugate to \emph{linear} Liouvillean toral examples. In the past
couple of years several examples of this kind have been constructed by
Avila and collaborators. Avila and Kocsard \cite{AvilaKocsard} have
proved that every non-singular smooth flow on the $2$-torus with
irrational rotation number has range closed in the subspace of
functions of zero average with respect to the unique invariant
probability measure. Avila and Kocsard and Avila and Fayad have also
announced similar examples on certain higher dimensional compact
manifolds, not diffeomorphic to tori, which admit a non-singular
smooth circle action (hence Conjecture 6.1 of \cite{Forni} does not
hold). In all these examples the closure of the range of the Lie
derivative operator on the space of smooth functions has codimension one,
or equivalently, the space of all invariant distributions for the flow
has dimension one (hence it is generated by the unique invariant
probability measure). We recall that an \emph{invariant distribution}
for a flow is a distribution (in the sense of S.~Sobolev and
L.~Schwartz) such that its Lie derivative along the flow vanishes in
the sense of distributions.
The goal of this paper is to prove that examples of this kind do not
exist among \emph{homogeneous flows}, so that a non-toral homogeneous
flow always fails to be cohomology-free also because the closure of
its range has codimension higher than one. In fact, we prove that for
any homogeneous flow on a \emph{finite-volume} homogeneous manifold
$M$, except for the case of flows smoothly isomorphic to linear toral
flows, the closure of the range of the Lie derivative operator on the
space of smooth functions has countable codimension, or, in other
terms, the space of invariant distributions for the flow has countable
dimension. As a corollary we have a proof of the Katok and
Greenfield-Wallach conjectures for general homogeneous flows on
finite-volume homogeneous manifolds. Our main result can be stated as
follows.
\begin{theorem}
\label{thm:Main}
Let $G/D$ be a connected finite-volume homogeneous space. A homogeneous
flow $(G/D, g^\field{R})$ is either smoothly isomorphic to a linear flow on
a torus or it has countably many independent invariant
distributions.
\end{theorem}
An important feature of our argument is that in the case of
\emph{partially hyperbolic} flows we prove the stronger and more
general result that any partially hyperbolic flow on any compact
manifold, not necessarily homogeneous, has infinitely many distinct
minimal sets (see Theorem~\ref{expandingminimal}). In particular, we
have a proof of the Katok and Greenfield-Wallach conjectures in this
case. We are not able to generalize this result to the finite-volume
case. However, we can still prove that a partially hyperbolic
\emph{homogeneous} flow on a finite-volume manifold has countably many
ergodic probability measures (see Proposition~\ref{prop:5:2}).
In the non partially hyperbolic homogeneous case, that is, in the
quasi-unipotent case, by the Levi decomposition we are able to reduce
the problem to flows on semi-simple and solvable manifolds. The
semi-simple case is reduced to the case of $\mathrm S \mathrm L_2(\field{R})$ by an
application of the Jacobson--Morozov Lemma, which states that any
nilpotent element of a semi-simple Lie algebra can be embedded in an
$\sl_2(\field{R})$-triple. The solvable case can be reduced to the nilpotent
case for which our main result was already proved by the first two
authors in~\cite{FF}. In both these cases the construction of
invariant distributions is based on the theory of unitary
representations for the relevant Lie group (Bargmann's classification
for $\mathrm S \mathrm L(2,\field{R})$ and Kirillov's theory for nilpotent Lie groups).
The paper is organized as follows. In section~\ref{sec:2} we deal with
partially hyperbolic flows on compact manifolds. In
section~\ref{sec:3} we give the background on homogeneous flows that
allows us to reduce the analysis to the solvable and semi-simple
cases. A further reduction is to consider quasi-unipotent flows
(sect.~\ref{sec:4}) and partially hyperbolic flows (sect.~\ref{sec:5}) on finite-volume non-compact manifolds; then the main
theorem follows easily (sect.~\ref{sec:6}). Finally, in section~\ref{sec:7} we state
a general conjecture on the stability of homogeneous flows and a couple of more general related open problems.
\smallskip
{\bf Acknowledgments} Livio Flaminio was supported in part by
the Labex~CEMPI (ANR-11-LABX-07). Giovanni Forni was supported by NSF
grant DMS~1201534. Federico Rodriguez Hertz was supported by NSF
grant DMS~1201326. Livio Flaminio would also like to thank the Department
of Mathematics of the University of Maryland, College Park, for its hospitality
during the preparation of this paper.
\section{Partially hyperbolic flows on compact manifolds}
\label{sec:2}
The goal of this section is to prove the following theorem.
\begin{theorem}
\label{expandingminimal}
Let $M$ be a compact connected manifold, $\phi^t$, $t\in\field{R}$, a flow
on $M$ and assume $\phi^t$ leaves invariant a foliation $\mathcal F$
with smooth leaves and continuous tangent bundle, e.g.\ the unstable
foliation of a partially hyperbolic flow. Assume also that the flow
$\phi^t$ expands the norm of the vectors tangent to $\mathcal F$
uniformly. Then there are infinitely many different $\phi^t$-minimal
sets.
\end{theorem}
The existence of at least one non-trivial (i.e.\ different from the
whole manifold) minimal set goes back to G.~Margulis (see
\cite{Starkov}, \cite{MR96k:22022}, \cite{Kleinbock-Shah-Starkov})
and Dani \cite{MR794799},
\cite{MR870710}. A similar idea was already used by R.~Ma\~n\'e in
\cite{MR516217} and more recently by A.~Starkov \cite{Starkov}, and
F. \& J.~Rodriguez Hertz and R.~Ures \cite[Lemma A.4.2 (Keep-away
Lemma)]{MR2390288}, in different contexts.
\smallskip Theorem~\ref{expandingminimal} will follow almost
immediately from the next lemma.
\begin{lemma}
\label{induct}
Let $\phi^t$ be a flow as in Theorem~\ref{expandingminimal}. For
any $k$-tuple $p_1,\dots, p_k\in M$ of points in different orbits
and for any open set $W\subset M$, there exist $\epsilon>0$ and
$q\in W$ such that $d(\phi^t(q),p_i)\geq\epsilon$ for all $t\geq 0$
and for all $i=1,\dots,k $.
\end{lemma}
Let us show how Theorem~\ref{expandingminimal} follows from
Lemma~\ref{induct}.
\begin{proof}[Proof of Theorem~\ref{expandingminimal}]
Since $M$ is compact there is a minimal set $K$. Assume now by
induction that there are $k$ different minimal sets $K_1,\dots, K_k$; then
we will show that there is a minimal set $K_{k+1}$ disjoint from the
previous ones. Let $p_i\in K_i$, $i=1,\dots, k$ be $k$ points and
take $q$ and $\epsilon>0$ from Lemma~\ref{induct}. Since
$d(\phi^t(q),p_i)\geq\epsilon$ for any $t\geq 0$ and for $i=1,\dots,
k$ we have that for any $i=1,\dots, k$, $p_i\notin\omega(q)$, the
omega-limit set of $q$. Since the $K_i$'s are minimal, this implies
that $K_i\cap \omega(q)=\emptyset$ for $i=1,\dots, k$. Take now a
minimal subset of $\omega(q)$ and call it $K_{k+1}$.
\end{proof}
Let $F=T\mathcal F$ be the tangent bundle to the foliation with fiber
at $x\in M$ given by $F(x)=T_x\mathcal F(x)$. We denote by $d$ the
distance on $M$ induced by some Riemannian metric. Let $X$ be the
generator of the flow $\phi^t$. Let also
$E(x)=(F(x)\oplus\<X(x)\>)^{\perp}$ be the orthogonal bundle and
$\mathcal E_r(x)=\exp_x(B^E_{r}(x))$ be the image of the $r$ ball in
$E(x)$ by the exponential map. Let $f$ be the dimension of the
foliation $\mathcal F$ and $m$ the dimension of $M$. For $r \le r_0$
the disjoint union $\mathcal E:=\sqcup_{x\in M} \mathcal E_r(x)$ is an
$(m-f-1)$-dimensional continuous disc bundle over~$M$. Denote by
$d_{\mathcal F}$ and $d_{\mathcal E}$ the distances along the leaves
of $\mathcal F$ and $\mathcal E$, and let
\[
\mathcal F_r(x)=\{ y\in \mathcal F(x) \mid d_{\mathcal F}(y,x) \leq
r\} \subset \mathcal F(x)
\]
be the $f$-dimensional closed disc centered at $x$ and of radius $r>0$.
Clearly $d\le d_{\mathcal F}$ and $d\le d_{\mathcal E}$.
We may assume that the Riemannian metric on $M$ is adapted so that
$\mathcal F_r(x) \subset\phi^{-t} \mathcal F_r(\phi^t x)$ for all
$x\in M$ and $r,t\ge 0$. In fact if $g$ is a Riemannian metric such
that $\| \phi^{t}_* v \|_g \ge C \lambda^t \|v\|_g$ for all $v \in
F(x)$, all $x\in M$ and all $t\ge 0$, (where $\lambda >1$), then
setting $\hat g = \int^{T_0}_0 (\phi^t)^* g \, \hbox{d} t$, with $T_0=
- \log_\lambda (C/2)$, we have that, for all $v \in F(x)$ and $x\in
M$, the function $\| \phi^{t}_* v\|_{\hat g}$ is strictly increasing
with $t$.
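Indeed, with this choice of $T_0$, for all $v\in F(x)$, $x\in M$ and $t\ge 0$ we have $\| \phi^{t}_* v \|^2_{\hat g} = \int_0^{T_0} \| \phi^{s+t}_* v \|^2_{g} \, \hbox{d} s$, hence
\[
\frac{\hbox{d}}{\hbox{d} t}\, \| \phi^{t}_* v \|^2_{\hat g}
= \| \phi^{T_0+t}_* v \|^2_{g} - \| \phi^{t}_* v \|^2_{g}
\ge \big( C^2 \lambda^{2T_0} - 1 \big)\, \| \phi^{t}_* v \|^2_{g}
= 3\, \| \phi^{t}_* v \|^2_{g} > 0\,,
\]
where the inequality follows by applying the expansion estimate to the vector $\phi^{t}_* v \in F(\phi^t x)$ over the time $T_0$, and the last equality follows from $\lambda^{T_0} = 2/C$.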
We may choose $r_1< r_0$ such that if $r \le r_1$ then, for all $x\in
M$, we have $d_{\mathcal F}(y,z) \le 2 d(y,z)$ for any $y,z\in
\mathcal F_r(x)$ and $d_{\mathcal E}(y,z) \le 2 d(y,z)$ for any
$y,z\in \mathcal E_r(x)$.
For $x\in M$, let
\[
V_{\delta,r}(x)=\bigcup_{z\in \mathcal E_{\delta}(x)}\mathcal F_r(z).
\]
There exists $r_2 \le r_1$ such that for all $r,\delta\le r_2$
the set $V_{\delta, 4 r}(x)$ is homeomorphic to a disc of dimension
$(m-1)$ transverse to the flow.
\smallskip\noindent{\bf Normalization assumption}: After a constant
rescaling of $X$ we may assume that given any $x\in M$,
$z,y\in\mathcal F(x)$ and $t\geq 1$ we have $d_{\mathcal
F}(\phi^t(z),\phi^t(y))\geq 4\,d_{\mathcal F}(z,y)$. Henceforth, in
this section, we shall tacitly make this assumption.
\begin{proof}[Proof of Lemma~\ref{induct}]
Let $p_1,\dots, p_k\in M$ be points belonging to different orbits
and let $W\subset M$ be an open set. We shall find $r>0$ and a
point $x_0 \in W$ with $ \mathcal F_r(x_0)\subset W$ and then
construct, by induction, a sequence of points $x_n\in M$ and of
iterates $\tau_n\geq 1$ satisfying, for some $\delta >0$, the
following conditions
\begin{equation}
\label{eq:1}\tag{$A_{n+1}$}
\phi^{-\tau_n}(\mathcal F_r(x_{n+1}))\subset \mathcal
F_r(x_n), \qquad \text{\ for all\ } n\ge 0,
\end{equation}
and
\begin{equation}
\label{eq:2}\tag{$B_n$}
\phi^{T_n}x\in \mathcal
F_r(x_n)\implies \phi^t(x)\notin\hbox{$\bigcup_i$}V_{\delta,
2r}(p_i)\,, \quad \text{\ for all\ } t
\in [0, T_{n+1}).
\end{equation}
Here we have set $T_n:=\sum_{k=0}^{n-1}\tau_k$. Then defining $D_n:=
\phi^{-T_n}\mathcal F_r(x_n)$ we have $D_{n+1}\subset D_n\subset
\mathcal F_r(x_0)$ and any point $q\in\bigcap_n D_n\subset \mathcal
F_r(x_0)\subset W$ will satisfy the statement of the Lemma.
By the choice of an adapted metric we have
$$
d_{\mathcal F} (\phi^t (x),\phi^t(p)) \ge d_{\mathcal F}(x,p) \,,
\qquad \text{ for all } p\in M \text{ and all } x\in \mathcal F(p)
\,.
$$
This implies that for all $i=1, \dots ,k$ any $r>0$ and any $t\ge 0$
\begin{equation}
\label{eq:leaf_incl}
x\in \mathcal F_{4r}(p_i)\setminus\mathcal F_{2r}(p_i) \implies
\phi^t(x)\not \in \mathcal F_{2r}(\phi^t(p_i)).
\end{equation}
Hence there exists $\delta_0 < r_2$ such that for all $\delta <
\delta_0$ and all $r\le r_2$ we have:
\begin{enumerate}[(i)]
\item\label{item1} for all $i\in \{1,\dots ,k\}$,
\[
\phi^{[0,1]}\big( V_{\delta, 4r}(p_i) \setminus V_{\delta,
2r}(p_i)\big) \cap V_{\delta, r}(p_i)= \emptyset \,. \]
\end{enumerate}
The above assertion follows immediately by continuity if $p_i$ is
not periodic of minimal period less than or equal to $1$. If $p_i$ is
periodic of period less than or equal to $1$, then it follows by
continuity from formula~\eqref{eq:leaf_incl}.
As the orbits of $p_1, \dots, p_k$ are all distinct and the set $W$
is open, we may choose a point $x_0\in W$ and positive real numbers
$r,\delta< \delta_0$ so that the following conditions are also
satisfied:
\begin{enumerate}[(i)]
\setcounter{enumi}{1}
\item\label{item2} $\mathcal F_r(x_0)\subset W$;
\item\label{item3} for all $i, j \in \{1,\dots ,k\}$, with $i\not=
j$,
$$
\phi^{[0,1]} \big( V_{\delta, 4r}(p_i)\big) \cap \phi^{[0,1]} \big(
V_{\delta, 4r}(p_j)\big)= \phi^{[0,1]} \big( V_{\delta,
4r}(p_i)\big) \cap \bigcup_{t\in [0,1]} \mathcal F_r(\phi^tx_0)
=\emptyset\,.
$$
\end{enumerate}
If for all $t>0$ we have $ \mathcal F_r(\phi^t(x_0))\cap
\hbox{$\bigcup_i$}V_{\delta, r}(p_i) =\emptyset$, then $d(\phi^t(x_0),
p_i)> r$ for all $i=1,\dots, k$ and all $t>0$, proving the Lemma with
$q=x_0$ and $\epsilon =r$. Thus we may assume that
$$
\tau_0:=\inf\Big\{t>0\;:\;\mathcal F_r(\phi^t(x_0))\cap
\hbox{$\bigcup_i$}V_{\delta, r}(p_i) \neq\emptyset\Big\} < \infty
$$
and define
$$ \hat x_0 = \phi^{\tau_0}(x_0).
$$
The above condition~\eqref{item3} implies that $\tau_0\ge 1$, hence by
the normalisation assumption it follows that
\begin{equation}
\label{eq:3}
\mathcal F_{5r}(\hat x_0) \subset \phi^{\tau_0}\big(\mathcal F_r(x_0)\big).
\end{equation}
Assume, by induction, that points $x_k\in M$ and iterates $\tau_k\geq
1$ satisfying the conditions~({$A_n$}) and~\eqref{eq:2} have been
constructed for all $k\in \{0,\dots, n\}$, and assume that the point
$\hat x_n:= \phi^{\tau_n}(x_n)\in M$ is such that $\mathcal F_r(\hat
x_n)$ intersects non-trivially some disc $ V_{\delta,r}(p_i) $. Since
$V_{\delta, r}(p_i)$ is saturated by $\mathcal F$, it follows that
$\mathcal F_{2r}(\hat x_n)\cap \mathcal E_{\delta}(p_i)$ consists of a
unique point $z_n$ with $d_{\mathcal F}(z_n, \hat x_n) \le 2 r$; we
define $x_{n+1}\in \mathcal F(\hat x_n)$ as the point at distance $3r$
on the geodesic ray in $\mathcal F(\hat x_n)$ going from $z_n$ to $
\hat x_n$ (or any point on the geodesic ray issued from $z_n$ if $\hat
x_n=z_n$). Then we have
\begin{equation}
\label{eq:4}
\mathcal F_ r(x_{n+1})\subset \mathcal F_{4r}(
\hat x_n )\cap V_{\delta,4r}(p_i)\setminus
V_{\delta,2r}(p_i)\,.
\end{equation}
Since $\mathcal F_ r(x_{n+1})\subset V_{\delta, 4r}(p_i)$, for all
$t\in (0,1)$ we have
$$
\mathcal F_r(\phi^t x_{n+1}) \subset \phi^t \mathcal F_r( x_{n+1})
\subset \phi^{[0,1]}\big( V_{\delta,4r}(p_i) \setminus V_{\delta,
2r}(p_i)\big)\,.
$$
By the disjointness conditions~\eqref{item1} and~\eqref{item3}, it
follows that, for all $t\in (0, 1]$,
$$\mathcal F_r(\phi^t x_{n+1})\cap \bigcup_{i=1}^k V_{\delta,
r}(p_i) =\emptyset\,.$$
It follows that if we define
\[
\tau_{n+1}:=\inf \Big\{t>0\;:\;\mathcal F_r(\phi^t(x_{n+1}))\cap
\hbox{$\bigcup_iV_{\delta, r}(p_i)$}\neq\emptyset\Big\},\qquad \hat
x_{n+1}= \phi^{\tau_{n+1}}(x_{n+1})
\]
(assuming $\tau_{n+1}<+\infty$), then $\tau_{n+1}\ge 1$, and by the
normalisation assumption and by the inclusion in
formula~\eqref{eq:4} we have
\[
\mathcal F_{r}(x_{n+1}) \subset \mathcal F_{4r}(\hat x_{n})=
\mathcal F_{4r}(\phi^{\tau_{n}} x_{n}) \subset
\phi^{\tau_{n}}\big(\mathcal F_r(x_{n})\big)
\]
and by construction, having set $T_{n+2}:=\sum_{k=0}^{n+1}\tau_k$,
we also have
$$
x\in D_{n+1}:=\phi^{-T_{n+1}}(\mathcal F_r(x_{n+1}))\implies
\phi^t(x)\notin\hbox{$\bigcup_i$}V_{\delta, 2r}(p_i)\,, \quad \text{
for all } t \in [0, T_{n+2}).
$$
The inductive construction is thus completed. As we explained above
we have that~$(D_n)$ is a decreasing sequence of closed subsets of
$\mathcal F_r(x_0)$ and that any point $q\in \bigcap_n D_n$ satisfies
$\phi^t(q) \notin \bigcup_i V_{\delta, r}(p_i)$ for all $t\ge 0$.
The above inductive construction may fail if at some stage $n\geq 0 $
we have $\tau_n = +\infty$. In this case let $q$ be any point in
$\phi^{-T_n}(\mathcal F_r(x_n))$. Again such a point $q\in W$
satisfies the statement of the Lemma, hence the proof is completed.
\end{proof}
\section{Homogeneous flows}
\label{sec:3}
Henceforth $G$ will be a connected Lie group and $G/D$ a finite volume
space; this means that $D$ is a closed subgroup of $G$ and that $G/D$
has a finite $G$-invariant (smooth) measure. The group $D$ is called
the \emph{isotropy group} of the space $G/D$.
With $g^\field{R}$ we denote a one-parameter subgroup $(g^t)_{t\in \field{R}}$ of
$G$. The flow generated by this one-parameter subgroup on the finite
volume space $G/D$ will be denoted $(G/D, g^\field{R})$ or simply $g^\field{R}$.
\begin{remark}
\label{rem:3:1}
Our first observation is that, in proving Theorem~\ref{thm:Main}, we
may suppose that the flow~$g^\field{R}$ is ergodic on $G/D$ with respect to
a finite $G$-invariant measure. This is due to the fact that the
ergodic components of the flow~$g^\field{R}$ are closed subsets of~$G/D$
(see~\cite[Thm.\ 2.5]{Starkov}). Since $G/D$ is connected, either we
have infinitely many ergodic components, in which case
Theorem~\ref{thm:Main} follows, or the flow~$g^\field{R}$ is ergodic.
\end{remark}
Henceforth we shall assume that the flow~$g^\field{R}$ is ergodic. Whenever
convenient we may also assume that $G$ is simply connected by pulling
back the isotropy group $D$ to the universal cover of $G$.
Let $G=L\ltimes R$ be the Levi decomposition of a simply connected Lie
group $G$ and let $G_\infty$ be the smallest connected normal subgroup
of $G$ containing the Levi factor~$L$. Let $q: G\to L$ be the
projection onto the Levi factor. We shall use the following result.
\begin{theorem}[{\cite{MR0444835}, \cite{MR896893}, \cite[Lemma 9.4,
Thm.\ 9.5]{Starkov}}]
\label{thm:3:2}
If $G$ is a simply connected Lie group and the flow $(G/D, g^\field{R})$ on
the finite volume space $G/D$ is ergodic then
\begin{itemize}
\item The groups $R/R\cap D$ and $q(D)$ are closed in $G$ and in $L$
respectively. Thus $G/D$ factors onto $L/q(D)\approx G/RD$ with
fiber $R/R\cap D$. The semi-simple flow $(L/q(D), q(g^\field{R}))$ is
ergodic.
\item The solvable flow $(G/\overline{G_\infty D}, g^\field{R})$ is
ergodic.
\end{itemize}
\end{theorem}
By Theorem~\ref{thm:3:2} it is possible to reduce the analysis of the
general case to that of the semi-simple and solvable cases. In fact,
the following basic result holds.
\begin{lemma}\label{lem:3:3}
If the flow $(G/D, g^\field{R})$ smoothly projects onto a flow $(G_1/D_1,
g_1^\field{R})$, via an epimorphism $p: G\to G_1$ with $D \subset
p^{-1}(D_1)$, then the existence of countable many independent
invariant distributions for the flow $(G_1/D_1, g_1^\field{R})$ implies the
existence of countable many independent invariant distributions for
the flow $(G/D, g^\field{R})$.
\end{lemma}
\begin{proof} We are going to outline two different proofs.
\noindent {\it First Proof.\/} Let $\mu$ denote the invariant smooth
measure on $G/D$, and let $\mu_1$ be the projected measure on
$G_1/D_1$. For all $y\in G_1/D_1$, let $\mu_y$ denote the
conditional measure on the fiber $p^{-1} \{y\} \subset G/D$ of the
projection $p:G/D\to G_1/D_1$, which can be constructed as
follows. Let $\omega$ be a volume form associated to $\mu$ on $G/D$
and let $\omega_1$ be a volume form associated to $\mu_1$ on
$G_1/D_1$. There exists a smooth form $\nu$ on $G/D$ (of degree $d$
equal to the difference in dimensions between $G/D$ and $G_1/D_1$)
such that $\omega = \nu \wedge p^\ast \omega_1$. Note that the form
$\nu$ is not unique. (In fact, it is unique up to the addition of a
$d$-form $\eta$ such that $ \eta \wedge p^\ast
\omega_1=0$). However, the restriction of the form $\nu$ to the
fiber $p^{-1}\{y\}$ is a uniquely determined volume form on
$p^{-1}\{y\}$ which induces the conditional measure $\mu_y$, for all
$y\in G_1/D_1$.
Since the measure $\mu$ is $G$-invariant and the measure $\mu_1$ is
$G_1$-invariant, the family $\{ \mu_y \vert y\in G_1/D_1\}$ is
smooth and $G_1$-equivariant. For any smooth function $f\in
C^\infty(G/D)$ let $p_\# (f)\in C^\infty(G_1/D_1)$ be the smooth
function defined as
$$
p_\# (f) (y) = \frac{1}{ \mu_y(p^{-1}\{y\})} \int_{ p^{-1}\{y\} }
f(x) \, d\mu_y(x) \,, \quad \text{ for all } y\in G_1/D_1\,.
$$
The map $p_\# : C^\infty(G/D) \to C^\infty(G_1/D_1)$ is bounded
linear and surjective. In fact, it is a left inverse of the
pull-back map $p^\ast: C^\infty(G_1/D_1) \to C^\infty(G/D)$. For any
$g_1^\field{R}$-invariant distribution $\mathcal D\in \mathcal D'(G_1/D_1)$,
the following formula defines a $g^\field{R}$-invariant distribution
$p^\#\mathcal D \in \mathcal D'(G/D)$:
$$
p^\# \mathcal D (f) = \mathcal D \left( p_\# (f) \right) \,, \quad
\text{ for all } f \in C^\infty(G/D)\,.
$$
The map $p^\#: \mathcal D'(G_1/D_1) \to \mathcal D'(G/D)$ is bounded
linear and injective, since the map $p_\#$ is bounded linear and
surjective, and by construction it maps the subspace of
$g_1^\field{R}$-invariant distributions into the subspace of
$g^\field{R}$-invariant distributions.
\noindent {\it Second Proof.\/} For any $G_1$-irreducible component
H$ of the space $L^2(G_1/D_1)$, the pull-back $p^\ast(H)$ is a
$G$-irreducible component of the space $L^2(G/D)$. Since $p$ is an
epimorphism, the space $C^\infty(p^\ast (H)) \subset p^\ast(H)$ of
$G$-smooth vectors in $p^\ast(H)$ is equal to the pull-back $p^\ast
C^\infty(H)$ of the subspace $C^\infty(H)\subset H$ of $G_1$-smooth
vectors in $H$. Hence the push-forward map $p_\ast: C^\infty(p^\ast
(H)) \to C^\infty(H)$ is well defined. Assume that there exists a
$g_1^\field{R}$-invariant distribution which does not vanish on
$C^\infty(H)\subset C^\infty(G_1/D_1)$, then there exists a
$g_1^\field{R}$-invariant distribution $\mathcal D_H$ which does not vanish
on $C^\infty(H)$ but vanishes on $C^\infty(H^\perp)$ (which can be
defined by projection). Let then $p^\ast (\mathcal D_H)$ be the
distribution defined as follows:
$$
p^\ast (\mathcal D_H)(f) = \begin{cases} \mathcal D_H (p_\ast f)\,,
\quad &\text{\ if\ } f\in C^\infty(p^\ast (H)) \,;\\ 0\,, \quad
&\text{\ if\ } f\in C^\infty(p^\ast (H)^\perp) \,.
\end{cases}
$$
By construction, the distribution $p^\ast (\mathcal D_H)$ is
$g^\field{R}$-invariant. Thus the one-parameter subgroup $g^\field{R}$ on $G/D$ has
infinitely many independent invariant distributions whenever the
one-parameter subgroup $g_1^\field{R}$ on $G_1/D_1$ does.
\end{proof}
In dealing with solvable groups it is useful to recall the theorem by
Mostow (see~\cite[Theorem E.3]{Starkov}).
\begin{theorem}[Mostow]
\label{thm:mostow} If $G$ is a solvable Lie group, then $G/D$ is of
finite volume if and only if $G/D$ is compact.
\end{theorem}
When $G$ is semi-simple, in proving Theorem~\ref{thm:Main}, we may
suppose that $G$ has finite center and that the isotropy group $D$ is
a lattice. This is the consequence of the following proposition.
\begin{proposition}
\label{prop:lattice_red}
Let $G$ be a connected semi-simple group and let $G/D$ be a finite
volume space. If there exists an ergodic flow on $G/D$, then the
connected component of the identity of $D$ in $G$ is normal in
$G$. Hence we may assume that $G$ has finite center and that $D$ is
discrete.
\end{proposition}
\begin{proof}
We have a decomposition $G=K\cdot S$ of $G$ as the almost-direct
product of a compact semi-simple normal subgroup $K$ and of a
totally non-compact normal semi-simple group $S$. Let $p: G\to
K_1:=G/S$ be the projection of $G$ onto the semi-simple compact
connected group $K_1$. Let $\bar g^t$ be the flow generated by $\bar
X=p_* X$ on the connected, compact, Hausdorff space
$Y:=K_1/\overline{p(D)}$. As $Y$ is a homogeneous space of a compact
semi-simple group, the fundamental group of $Y$ is finite. The
closure of the one-parameter group $(\exp{t\bar X})_{t\in \field{R}}$ in
$K_1$ is a torus subgroup $T<K_1$; it follows that the closures of
the orbits of $\bar g^t$ on $Y$ are the compact tori $Tk
\,\overline{p(D)}$, ($k\in K_1$), homeomorphic to $T/T\cap k
\,\overline{p(D)}\, k^{-1}$.
Let $g^\field{R}$ be an ergodic flow on $G/D$, generated by $X\in \mathfrak
g_0$. Since $\bar g^t$ acts ergodically on $Y$, the action of $T$
on $Y$ is transitive. In this case we have $Y=T/T\cap
\overline{p(D)}$, and since the space $Y$ is a torus with finite
fundamental group, it is reduced to a point. It follows that $T<
\overline{p(D)}=K_1$. Thus $p(D)$ is dense in $K_1=G/S$ and $SD$ is
dense in $G$.
Let $\bar D^Z$ denote the Zariski closure of $\mathrm A\mathrm d(D)$ in $\mathrm A\mathrm d(G)$
(see~Remark 1.6 in \cite{MR591617}). By Borel Density Theorem
(see~\cite[Thm.~4.1, Cor.~4.2]{MR591617} and \cite{MR2158954}) the
hypothesis that $G/D$ is a finite volume space implies that $\bar
D^Z$ contains all hyperbolic elements and unipotent elements
in~$\mathrm A\mathrm d(G)$. As these elements generate $\mathrm A\mathrm d(S)$, we have $\mathrm A\mathrm d(S) <
\bar D^Z $, and the density of $SD$ in $G$ implies $\mathrm A\mathrm d(G)= \bar D^Z
$. Since the group of $\mathrm A\mathrm d(g)\in \mathrm A\mathrm d(G)$ such that $\mathrm A\mathrm d(g)
(\operatorname{Lie}(D)) = \operatorname{Lie}(D)$ is a Zariski-closed
subgroup of $\mathrm A\mathrm d(G)$ containing $\bar D^Z$, we obtain that the
identity component $D^0$ of~$D$ is a normal subgroup of~$G$ and
$G/D\approx (G/D^0)/(D/D^0)$. We have thus proved that we can
assume that $D$ is discrete. We can also assume that $G$ has finite
center since $D$ is a lattice in~$G$ and therefore it meets the
center of $G$ in a finite index subgroup of the center. This
concludes the proof.
\end{proof}
Our proof of Theorem~\ref{thm:Main} considers separately the cases of
quasi-unipotent and partially hyperbolic flows. We recall the
relevant definitions. \smallskip
Let $X$ be the generator of the one-parameter subgroup $g^\field{R}$ and
let~$\mathfrak g^\mu$ denote the generalised eigenspaces of eigenvalue
$\mu$ of $\mathrm a\mathrm d(X)$ on $\mathfrak g = \mathfrak g_0\otimes \field{C}$. The Lie
algebra $\mathfrak g$ is the direct sum of the $\mathfrak g^{\mu}$ and
we have $[\mathfrak g^\mu, \mathfrak g^\nu] \subset \mathfrak
g^{\mu+\nu}$. Let
\[
\mathfrak p^0 =\sum_{\Re \mu =0} \mathfrak g^\mu, \quad \mathfrak p^+
=\sum_{\Re \mu >0} \mathfrak g^\mu,\quad \mathfrak p^- =\sum_{\Re \mu
<0} \mathfrak g^\mu.
\]
\begin{definition}
\label{def:qu_ph}
A flow $g^\field{R}$ on $G/D$ is called \emph{quasi-unipotent} if
$\mathfrak g = \mathfrak p^0$ and it is \emph{partially hyperbolic}
otherwise. Thus the flow subgroup $g^\field{R}$ is quasi-unipotent or
partially hyperbolic according to whether the spectrum of the group
$\mathrm A\mathrm d(g^t)$ acting on $\mathfrak g$ is contained in $U(1)$ or not.
\end{definition}
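For example (a standard illustration), for $G= \mathrm S \mathrm L_2(\field{R})$ the one-parameter subgroup generated by $X=\left(\begin{smallmatrix} 0 & 1\\ 0 & 0 \end{smallmatrix}\right)$, which induces the horocycle flows, is quasi-unipotent, since $\mathrm a\mathrm d(X)$ is nilpotent, so that $\mathfrak g = \mathfrak g^0 = \mathfrak p^0$; the one-parameter subgroup generated by $X=\frac12 \left(\begin{smallmatrix} 1 & 0\\ 0 & -1 \end{smallmatrix}\right)$, which induces the geodesic flows, is partially hyperbolic, since $\mathrm a\mathrm d(X)$ has eigenvalues $1$, $0$, $-1$, so that $\mathfrak p^{\pm}\neq \{0\}$ and $\mathrm A\mathrm d(g^t)$ has eigenvalues $e^{t}, 1, e^{-t}$.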
\section{The quasi-unipotent case}
\label{sec:4}
We now assume that the flow $g^\field{R}$ on the finite volume space $G/D$ is
quasi-unipotent.
\subsection{The semi-simple quasi-unipotent case}
\label{sec:G-semisimple}
In this subsection we assume that the group $G$ is semi-simple (and
the one-parameter subgroup $g^\field{R}$ is quasi-unipotent).
\begin{definition}
An $\sl_2(\field{R})$ triple $(a, n^+, n^-)$ in a Lie algebra $\mathfrak
g_0$ is a triple satisfying the commutation relations
\[ [a, n^\pm]= \pm n^\pm, \quad [n^+,n^-]=a.
\]
\end{definition}
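For instance, with this normalisation of the brackets, an $\sl_2(\field{R})$ triple in $\sl_2(\field{R})$ itself is given by
\[
a = \frac12 \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad
n^{+} = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad
n^{-} = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},
\]
as a direct computation of the brackets shows.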
We recall the Jacobson--Morozov Lemma~\cite{MR559927}.
\begin{lemma}[Jacobson--Morozov Lemma]
\label{lem:Jacobson-Morozov}
Let $n^+$ be a nilpotent element in a semi-simple Lie algebra
$\mathfrak g_0$. Assume that $n^+$ commutes with a semi-simple
element $s\in \mathfrak g_0$. Then in $\mathfrak g_0$ we can find a
semi-simple element $a$ and a nilpotent element $n^-$ such that $(a,
n^+, n^-)$ is an $\sl_2(\field{R})$ triple commuting with $s$.
\end{lemma}
Given a unitary representation $(\pi, H)$ of a Lie group on a
Hilbert space $H$, we denote by $H^\infty$ the subspace of
$C^\infty$-vectors of $H$ endowed with the $C^\infty$ topology, and by
$(H^{\infty})'$ its topological dual.
\begin{lemma}[\cite{FF1}]
\label{lem:4:1}
Let $U^t$ be a unipotent subgroup of\/ $\hbox{\rm PSL}_2(\field{R})$. For each
non-trivial irreducible unitary representation $(\pi, H) $ of\/
$\hbox{\rm PSL}_2(\field{R})$ there exists a distribution, i.e.\ an element $D\in
(H^{\infty})'$, such that $U^t D = D$.
\end{lemma}
\begin{proposition}
\label{prop:4:2}
Let $G$ be semi-simple and let $D$ be a lattice in $G$. Then any
quasi-unipotent ergodic subgroup $g^\field{R}$ of $G$ admits infinitely
many $g^\field{R}$-invariant distributions on $C^\infty(G/D)$.
\end{proposition}
\begin{proof}
By Proposition~\ref{prop:lattice_red} we may assume that $G$ has
finite center and that $D$ is a lattice in $G$. By the Jordan
decomposition we can write $g^t= c^t \times u^t$ where $c^t$ is
semi-simple and $u^t$ is unipotent with $c^\field{R}$, $u^\field{R}$ commuting
one-parameter subgroups of~$G$. Since $c^\field{R}$ is semi-simple and
quasi-unipotent, its closure in $G$ is a torus $T$.
By the Jacobson--Morozov lemma we find an $\sl_2(\field{R})$ triple $(a,
n^+, n^-)$ commuting with $c^\field{R}$, hence with $T$. We let $(a^t, u^t,
v^t)$ be the corresponding one-parameter groups commuting with $T$
and let $S$ be the subgroup generated by these flows.
It is well known that the center $Z(S)$ of $S$ is finite and that,
consequently, there exists a maximal compact subgroup
of~$S$. Indeed, the adjoint representation $\mathrm A\mathrm d_G|S$ of $S$ on the
Lie algebra of $G$, as a finite dimensional representation of $S$,
factors through $SL(2, \field{R})$, a double cover of $S/Z(S)$. The kernel
of $\mathrm A\mathrm d_G|S$ is contained in $Z(G)$, because $G$ is connected. Since
$Z(S)$ is monogenic, we have that $Z(G)$ is a subgroup of index one
or two of $Z(G)Z(S)$. It follows that $Z(S)$ is finite.
Let $K< S$ denote a maximal compact subgroup of $S$. Let $
L^2(G/D;\field{C})$ be the space of complex valued $L^2$ functions on
$G/D$. The subspaces $H_0$ and $H$ of $ L^2(G/D;\field{C})$ formed
respectively by the $T_1=T \cdot K$ and $T_2=T \cdot Z(S)$ invariant
functions are infinite dimensional Hilbert spaces since the orbits
of the subgroups $T_1$ and $T_2$ in $G/D$ are compact and the space
$G/D$ is not a finite union of these orbits. Furthermore $H_0\subset
H$ and $H$ is also invariant under~$S$. We deduce that $H$
decomposes as a direct integral or direct sum of irreducible unitary
representations of $S$; these representations are trivial on the
center $Z(S)$ and therefore define irreducible unitary
representations of ${\mathrm P\mathrm S\mathrm L}_2(\field{R})$. Since each
irreducible unitary representation of ${\mathrm P\mathrm S\mathrm
L}_2(\field{R})$ contains at most one $K$-invariant vector and $H_0$ is
infinite dimensional, we have that the cardinality of the set of
unitary irreducible representations of ${\mathrm P\mathrm S\mathrm
L}_2(\field{R})$ appearing in $H$ is not finite. By the previous Lemma
each unitary irreducible representation $H_i$ of ${\mathrm P\mathrm
S\mathrm L}_2(\field{R})$ contained in $H$ contains a distribution
$D_i\in (H_i^\infty)'$ which is $u^t$ invariant; this distribution
is also $g^t$ invariant since the action of $c^t$ on $H$ is
trivial. Since the space $H^\infty$ coincides with the Fr\'echet
space of $C^\infty$ functions on $G/D$ which are $T \cdot Z(S)$
invariant, the proposition is proved.
\end{proof}
\subsection{The solvable quasi-unipotent case}
\label{sec:solv-eucl-case}
In this subsection we assume that the group $G=R$ is solvable and the
one-parameter subgroup $g^\field{R}$ is quasi-unipotent and ergodic on the
finite volume space $R/D$.
\smallskip We recall the following definition.
\begin{definition} A solvable group $R$ is called a \emph{class
$(\mathrm I)$ group} if, for every $g\in R$, the spectrum of
$\mathrm A\mathrm d(g)$ is contained in the unit circle~$U(1)=\{z \in \field{C} \mid
|z|=1\}$.
\end{definition}
It will also be useful to remark that if $R$ is solvable and $R/D$ is a
finite measure space, then we may assume that $R$ is simply connected
and that $D$ is a quasi-lattice (in the language of Auslander and
Mostow, the space $R/D$ is then a \emph{presentation}); in fact, if
$\tilde R$ is the universal covering group of $R$ and $\tilde D$ is
the pull-back of $D$ to $\tilde R$, then the connected component of
the identity $\tilde D_0$ of $\tilde D$ is simply connected
\cite[Thm. 3.4]{gorbatsevich1994lie}; hence $R/D\approx \tilde R
/\tilde D \approx R'/ D' $, with $R'=\tilde R/\tilde D_0$ solvable,
connected and simply connected and with $D'=\tilde D/\tilde D_0$ a
quasi-lattice.
We also recall the construction, originally due to Malcev and
generalised by Auslander \emph{et al.}, of the \emph{semi-simple} or
\emph{Malcev splitting} of a simply connected connected solvable group
$R$ (see \cite{MR0199308}, \cite{MR0486307}, \cite{MR0486308},
\cite{0453.22006}).
A solvable Lie group $G$ is \emph{split} if $G=N_G\rtimes T$ where
$N_G$ is the nilradical of $G$ and $T$ is an Abelian group acting on
$G$ faithfully by semi-simple automorphisms.
A \emph{semi-simple} or \emph{Malcev splitting} of a connected simply
connected solvable Lie group $R$ is a split exact sequence
\[
0 \to R \overset{m}{\to}M(R) \leftrightarrows T \to 0
\]
embedding $R$ into a split connected solvable Lie group $M(R)=
N_{M(R)} \rtimes T$ such that $M(R)= N_{M(R)} \cdot m(R)$; here
$N_{M(R)}$ and $T$ are as before. The image $m(R)$ of~$R$ is normal
and closed in $M(R)$ and it will be identified with $R$.
The semi-simple splitting of a connected simply connected solvable Lie
group $R$ is unique up to an automorphism fixing $R$.
Let $\operatorname{Aut}(\mathfrak r)\approx \operatorname{Aut}(R)$ be the
automorphism group of the Lie algebra $\mathfrak r$ of $R$. The
adjoint representation $\mathrm A\mathrm d$ maps the group $R$ to the solvable
subgroup $\mathrm A\mathrm d(R)<\operatorname{Aut}(\mathfrak r)$; since
$\operatorname{Aut}(\mathfrak r)$ is an algebraic group we may
consider the Zariski closure $\mathrm A\mathrm d(R)^*$ of~$\mathrm A\mathrm d(R)$. The group
$\mathrm A\mathrm d(R)^*$ is algebraic and solvable, since it is the Zariski closure
of the solvable group $\mathrm A\mathrm d(R)$. It follows that $\mathrm A\mathrm d(R)^*$ has a
Levi-Chevalley decomposition $\mathrm A\mathrm d(R)^* = U^*\rtimes T^*$, with $T^*$
an Abelian group of semi-simple automorphisms of $\mathfrak r$ and
$U^*$ the maximal subgroup of unipotent elements of $\mathrm A\mathrm d(R)^*$.
Let $T$ be the image of $\mathrm A\mathrm d(R)$ into $T^*$ by the natural projection
\hbox{$\mathrm A\mathrm d(R)^*\to T^*$}. Since $T$ is a group of automorphisms of
$R$, we may form the semi-direct product $M(R)= R\rtimes T$. By
definition we have a split sequence
$$
0 \to R \to M(R)\leftrightarrows T \to 0.
$$
It can be proved that $M(R)$ is a split connected solvable group
$N_{M(R)} \rtimes T$ and it is the semi-simple splitting of $R$
(see loc.\ cit.). We remark that the splitting $M(R)=N_{M(R)}
\rtimes T$ yields a new projection map $\tau:M(R)\to T$ by writing for
any $g\in M(R)$, $g = n\tau (g)$ with $n\in N_{M(R)}$ and $ \tau
(g)\in T$. Composing $\tau$ with the inclusion $R\to M(R)$ we obtain a
surjective homomorphism $\pi:R\to T$.
It is useful to recall a part of Mostow's structure theorem for
solvmanifolds, as reformulated by Auslander \cite[IV.3]{MR0486307}
and~\cite[p.~271]{MR0486308}:
\begin{theorem}[Mostow, Auslander]
\label{thm:mostow-auslander}
Let $D$ be a quasi-lattice in a simply connected, connected,
solvable Lie group $R$, and let $M(R)=R\rtimes T$ be the semi-simple
splitting of $R$. Then $T$ is a closed subgroup of
$\operatorname{Aut}(\mathfrak r)$ and the projection $\pi(D)$ of $D$
in $T$ is a lattice of $T$.
\end{theorem}
\begin{lemma}
\label{lem:4:3}
If the flow $g^\field{R}$ is ergodic and quasi-unipotent on the finite
volume solvmanifold $R/D$, then the group $R$ is of class (I).
\end{lemma}
\begin{proof}
We may assume $R$ connected and simply connected and $D$ a
quasi-lattice. Let $M(R)=N_{M(R)} \rtimes T = R\rtimes T$ be the
semi-simple splitting of $R$. Since the one-parameter subgroup
$g^\field{R}$ is quasi-unipotent, the closure of the projection $\pi(g^\field{R})$
in the semi-simple factor $T <\operatorname{Aut}(\mathfrak r)$ is a
compact torus $T' < T$. The surjective homomorphism $\pi:R\to T$
induces a continuous surjection $R/D \to T/\pi(D)$. By Mostow's
structure theorem, $R/D$ is compact, hence $T/\pi(D)$ is a compact
torus. The orbits of $T'$ in $T/\pi(D)$ are finitely covered by
$T'$. But $g^\field{R}$ acts ergodically on $R/D$, hence $T'$ acts
ergodically and minimally on $T/\pi(D)$. It follows that $T/\pi(D)$
consists of a single $T'$ orbit and, since $T', T$ are both
connected and $T'<T$ and $T' \to T'/\pi(D)$ is a finite cover, we
obtain that $T'=T$. Thus $T$ consists of quasi-unipotent
automorphisms of $\mathfrak r$, which implies that for all $g\in R$,
the automorphism $\mathrm A\mathrm d(g)$ is quasi-unipotent. Hence the group $R$ is
of class (I).
\end{proof}
The following theorem was first proved in \cite[Thm.\ 4.4]{MR0199308}
under the hypothesis that $D\cap N_{M(R)}= D$. This amounts to assuming
that $D$ is nilpotent, which is the case when $R/D$ supports a minimal
flow, as it is proved in \cite[Thm.~C]{MR0486308}. A simplification
of the latter proof under the hypothesis that $R/D$ carries an
er\-go\-dic flow appears in \cite[Theorem 7.1]{Starkov}.
\begin{theorem}[Auslander, Starkov]
\label{thm:auslander-starkov}
An ergodic flow on a class (I) compact solvable manifold is smoothly
isomorphic to a nilflow.
\end{theorem}
The first two authors have proved that the main theorem holds for
general nilflows, that is, that the following result holds:
\begin{theorem}[\cite{FF}]
\label{thm:flaminio-forni}
An ergodic nilflow which is not toral has countably many
independent invariant distributions.
\end{theorem}
In conclusion we have
\begin{proposition}
\label{prop:4:4}
An ergodic quasi-unipotent flow on a finite volume solvmanifold is
either smoothly conjugate to a linear toral flow or it admits
countably many independent invariant distributions.
\end{proposition}
\section{Partially hyperbolic homogeneous flows}
\label{sec:5}
In the non-compact, finite volume case, by applying results of
D.~Kleinbock and G.~Margulis we are able to generalize
Theorem~\ref{expandingminimal} to flows on \emph{semi-simple}
manifolds. We think that it is very likely that a general partially
hyperbolic flow on any finite volume manifold has infinitely many
different minimal sets, but we were not able to prove such a general
statement.
For the non-compact finite-volume case we recall the following result by
D.~Kleinbock and G.~Margulis \cite{MR96k:22022}
and its immediate corollary.
\begin{theorem}[Kleinbock and Margulis] \label{thm:kleinbock-margulis}
Let $G$ be a connected semi-simple Lie group of dimension $n$
without compact factors, $\Gamma$ an irreducible lattice in $G$. For
any partially hyperbolic homogeneous flow $g^\field{R}$ on $G/\Gamma$, for
any closed invariant set $Z\subset G/\Gamma$ of (Haar) measure zero
and for any nonempty open subset $W$ of $G/\Gamma$, we have that
$$\dim_H(\{x\in W\;|\;g^\field{R} x\;\mbox{is bounded and}\;\overline{g^\field{R} x}\cap Z=\emptyset\})=n.$$
Here $\dim_H$ denotes the Hausdorff dimension.
\end{theorem}
Observe that if the flow $g^\field{R}$ is ergodic, it is enough to assume
that the closed invariant set $Z \subset G/\Gamma$ is proper.
\begin{corollary}\label{cor:kleinbock-margulis}
Under the conditions of Theorem~\ref{thm:kleinbock-margulis} the
flow $(G/\Gamma,g^\field{R})$ has infinitely many different compact minimal
invariant sets.
\end{corollary}
\begin{proposition}\label{prop:5:2}
Let $G$ be a connected semi-simple Lie group and $G/D$ a finite
volume space. Assume that the flow $g^\field{R}$ on $G/D$ is partially
hyperbolic. Then the flow $(G/D,g^\field{R})$ has infinitely many
distinct compact minimal invariant sets, and consequently infinitely
many $g^\field{R}$-invariant and mutually singular ergodic measures.
\end{proposition}
\begin{proof}
Let $G=K\cdot S$ be the decomposition of $G$ as the almost-direct
product of a compact semi-simple subgroup $K$ and of a totally
non-compact semi-simple group $S$, with both $K$ and $S$ connected
normal subgroups. Since the flow $(G/D,g^\field{R})$ is partially
hyperbolic, $S$ is not trivial. Since $K$ is compact and normal,
$D'=DK=KD\subset G$ is a closed subgroup, and since $D\subset KD$,
$G/KD$ is of finite volume. Moreover, $(G/K)/DK\sim G/KD$ is of
finite volume and $G'=G/K\sim S/S\cap K$ is semi-simple without
compact factor with $D'\subset G'$ a closed subgroup with $G'/D'$ of
finite volume and a projection $p:G/D\to G'/D'$. Thus we may assume
that $G$ is totally non-compact, and by
Proposition~\ref{prop:lattice_red}, that the center of $G$ is finite
and that $D$ is a lattice, which we henceforth denote by $\Gamma$.
If $\Gamma$ is irreducible, then the statement follows immediately from
Corollary~\ref{cor:kleinbock-margulis}. Otherwise, let $G_i$, for
$i\in \{1,\dots, l\}$, be connected normal simple subgroups such
that $G=\prod_iG_i$, $G_i\cap G_j=\{e\}$ if $i\neq j$, and let
$\Gamma_i=\Gamma\cap G_i$ be an irreducible lattice in $G_i$, for
each $i\in \{1,\dots, l\}$, with $\Gamma_0=\prod_i \Gamma_i$ of
finite index in $\Gamma$. Observe that
$G/\Gamma_0\sim\prod_iG_i/\Gamma_i$. Let $p:G/\Gamma_0\to G/\Gamma$
be the finite-to-one covering and let $p_i:G/\Gamma_0\to
G_i/\Gamma_i$ be the projections onto the factors. Let $g_0^\field{R}$ be
the flow induced by the one-parameter group $g^\field{R}$ on $G/\Gamma_0$
and let $g_i^\field{R}$ be the projected flow on $G_i/\Gamma_i$, for all
$i\in \{1,\dots, l\}$. Since $\Gamma_i$ is an irreducible lattice in
$G_i$, whenever $g_i^\field{R}$ is partially hyperbolic we can apply
Corollary~\ref{cor:kleinbock-margulis}. Since $g_0^\field{R}$ is partially
hyperbolic there is at least one $j \in \{1, \dots, l\}$ such that
$g_j^\field{R}$ is partially hyperbolic. By
Corollary~\ref{cor:kleinbock-margulis}, the flow $g_j^\field{R}$ has a
countable family $\{K_n \vert n\in \field{N}\}$ of distinct minimal subsets
of $G_j/\Gamma_j$ such that each $K_n$ supports an invariant
probability measure $\eta_n$. For all $n\in \field{N}$, let us define $\hat
\mu_n:=\eta_n\times Leb$ on $G/\Gamma_0$. By construction the
measures $\hat \mu_n$ are invariant, for all $n\in \field{N}$, and have
mutually disjoint supports. Finally, since the map $p:G/\Gamma_0\to
G/\Gamma$ is finite-to-one, it follows that the family of sets
$\{p(K_n\times\prod_{i\neq j}G_i/\Gamma_i) \vert n\in \field{N}\}$ consists
of countably many disjoint closed sets supporting invariant measures
$ \mu_n:=p_*\hat\mu_n$. The proof of the Proposition is therefore
complete.
\end{proof}
\section{The general case}
\label{sec:6}
We may now prove our main theorem.
\begin{proof}[Proof of Theorem~\ref{thm:Main}]
By Remark~\ref{rem:3:1} we may suppose the flow ergodic. We may also
assume that $G$ is simply connected, by possibly pulling back $D$ to
the universal cover of $G$. Recall that by Theorem~\ref{thm:3:2} the
ergodic flow $(G/D,g^\field{R})$ projects onto the ergodic flow
$\big(L/\overline{q(D)}, q(g^\field{R})\big)$, where $L$ is the Levi factor
of $G$ and $q:G\to L$ the projection of $G$ onto this factor. Assume
that the finite measure space $\big(L/\overline{q(D)}\big)$ is not
trivial. Then the statement of the theorem follows from
Proposition~\ref{prop:4:2} if the flow $\big(L/\overline{q(D)},
q(g^\field{R})\big)$ is quasi-unipotent and by Proposition~\ref{prop:5:2}
if it is partially hyperbolic.
If the finite measure space $\big(L/\overline{q(D)}\big)$ is reduced
to a point, then, using again Theorem~\ref{thm:3:2}, we have $G/D
\approx R/R\cap D$, where $R$ is the radical of $G$. We obtain in
this way that our original flow is diffeomorphic to an ergodic flow
on a finite volume solvmanifold. By Mostow's theorem (see
Theorem~\ref{thm:mostow}), a finite volume solvmanifold is
compact. Hence the statement of the theorem follows from
Theorem~\ref{expandingminimal} if the projected flow is partially
hyperbolic and by Proposition~\ref{prop:4:4} if it is
quasi-unipotent. The proof is therefore complete.
\end{proof}
\section{Open problems}
\label{sec:7}
We conclude the paper by stating some (mostly well-known) open
problems and conjectures on the stability and the codimension of
smooth flows.
\begin{conjecture} (A. Katok)
Every homogeneous flow (on a compact
homogeneous space)
which fails to
be stable (in the sense that the range of the Lie derivative on the
space of smooth functions is not closed) projects onto a Liouvillean
linear flow on a torus. In this case, the flow is still stable on
the orthogonal complement of the subspace of toral functions (in
other words, the subspace of all functions with zero average along
each fiber of the projection).
\end{conjecture}
It is known that hyperbolic and partially hyperbolic, central
isometric (or more generally with uniform sub-exponential central
growth), accessible systems are stable (in the hyperbolic case it
follows by Livshitz theory; in the partially hyperbolic accessible
case see the work of A.~Wilkinson~\cite{Wilkinson} for accessible
partially hyperbolic maps and references therein). In the partially
hyperbolic non-accessible case, several examples are known to be
stable (see \cite{Veech} for toral automorphisms and \cite{Dolgopyat}
for group extensions of Anosov). In the unipotent case, it is proved
in \cite{FF1} that $SL(2,\field{R})$ unipotent flows (horocycles) on finite
volume homogeneous spaces are stable and in \cite{FF} that the above
conjecture holds for nilflows.
\begin{problem} Classify all compact manifolds which admit uniquely
ergodic flows with $(a)$ a unique invariant distribution (equal to
the unique invariant measure) up to normalization; $(b)$ a finite
dimensional space of invariant distributions.
\end{problem}
Examples of manifolds (and flows) of type $(a)$ have been found by
A.~Avila, B.~Fayad and A.~Kocsard \cite{AFK}. Note that the Katok
(Greenfield-Wallach) conjecture implies that in all non-toral examples
of type $(a)$ the flow cannot be stable. Recently A.~Avila and
A.~Kocsard \cite{private} have announced that they
have constructed maps on the two-torus having a space of invariant
distributions of arbitrary odd dimension. It is unclear whether
examples of this type can be stable:
\begin{problem} (M. Herman) Does there exist a stable flow
with finitely many invariant distributions which is not smoothly conjugate
to a Diophantine linear flow on a torus?
\end{problem}
The only known example which comes close to an affirmative answer to
this problem is given by generic area-preserving flows on compact
higher genus surfaces \cite{Forni97}, \cite{MMY}. Such flows are
generically stable and have a finite dimensional space of invariant
distributions in every finite differentiability class (but not in the
class of infinitely differentiable functions).
\section{Introduction}\label{sec:model.def}
A relatively simple formulation for the problem of a directed polymer in a random medium is the following \cite{Cook1989, Giacomin2007}. The lattice consists of $L$ planes in the transversal direction. In every plane there are $N$ points that are connected to all points of the previous plane and of the next one. For each edge $ij\,$, connecting the $t$-th plane to the $(t+1)$-th plane, a random energy $\xi_{ij} (t+1)$ is sampled from a common probability distribution $\xi$. With a slight abuse of notation we write $\xi$ for both the distribution and a random variable with distribution $\xi$. For $\omega = [\,\omega_1 \,,\ldots ,\omega_L \,]$ a standard random walk on $\mathbb{G}_N$, the complete graph on $N$ vertices, we define the energy $E_{\omega}\,$ of the directed path by summing the energies of the visited bonds
$$
E_{\omega} : = \sum_{s=1 }^{L} \xi_{\omega_s \, \omega_{s+1} } (s+1) \,.
$$
We define the probability measure $\mu_L$ on the space of all directed paths of length $L$ by
$$
\mu_L (\omega ):= Z_L (T)^{-1} \exp ( -E_{\omega} / T ) \, ,
$$
where $T$ is the temperature and $Z_L (T)$ is the partition function. The directed path $\big(\omega_i , i \big)_{i \geq 0}$ can be interpreted as a polymer chain living on $\mathbb{G}_N\times \N$, constrained to stretch in one direction and governed by the Hamiltonian $E_{\omega}$ through the Boltzmann weight $\exp ( - E_{\omega} / T)$\,.
We will be interested in the case where the random energies $\xi$ depend on $N$, the number of vertices of $\mathbb{G}_N$, and we will work at zero temperature. When $T=0$, we are faced with an optimization problem: computing the ground state energy of the model, {\it i.e.} the lowest energy over all possible walks.
In \cite{Cook1989} Cook and Derrida consider the particular case of zero temperature and $\xi$ distributed according to a Bernoulli of parameter $1/N^{1+r}$, with $r\geq 0$, which they call the percolation distribution; here the parameter is the probability of a zero-energy edge, that is, $\xi$ vanishes with probability $1/N^{1+r}$ and equals $1$ otherwise, consistently with (\ref{equa:defi.2.states}) below. Hence, the energy $E_\omega$ of a directed path of length $L$ is equal to the number of times $\xi_{\omega_s \, \omega_{s+1} } = 1$ along this path. Moreover, we can easily conclude that if the ground state energy of the polymers of length $L$ is $E_L$, then $E_{L+1} \leq E_{L}+1$.
For $N$ fixed, the ratio $E_L / L$ converges to a constant a.s. In \cite{Cook1989} the authors call this limit the ground state energy per unit of length and they derive the following asymptotics for it, as $N \to \infty$:
\begin{equation} \label{equa:bad.formula}
E = \Big(1+\lfloor 1/r \rfloor \Big)^{-1},
\end{equation}
where $\lfloor \cdot \rfloor$ denotes the integer part. Their statement is based on the observation that the typical number of sites on the $t$-th plane connected to the first plane by a path of zero energy is $N^{1-tr}$. Hence, if $N$ is large enough and $1-tr$ positive there is a path of zero energy (which is necessarily a ground state) from $0$ to $t$, whereas when $1-tr$ is negative there is no such path. Their argument, although informal, is correct, but the case where $1/r$ is an integer (the critical case) requires a more careful analysis. In this paper we formalize their argument and show that there is an additional term in (\ref{equa:bad.formula}) when $1/r$ is an integer.
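The first moment computation behind this heuristic is elementary. With the convention that a zero-energy edge occurs with probability $p_0 \sim \rho/N^{1+r}$ (in \cite{Cook1989}, $\rho=1$), there are $N^{t+1}$ directed paths with $t$ edges connecting the first plane to the $t$-th plane, and each of them has zero energy with probability $p_0^{\,t}$, so that the expected number of zero-energy paths is
\[
N^{t+1}\, p_0^{\,t} \;\sim\; \rho^{\,t}\, N^{1-tr}\,,
\]
which diverges when $tr<1$ and vanishes when $tr>1$; the critical case $t = 1/r \in \N$, where this expectation stays bounded, is precisely the one requiring the finer analysis.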
\medskip
In this paper, we choose to approach the polymer problem described above through the point of view of an interacting particle system. It consists of a constant number $N$ of particles evolving on the real line, initially at the positions $X_1(0), \ldots ,X_N(0)$. Then, given the positions $X_i(t)$ of the $N$ particles at time $t \in \N$, we define the positions at time $t + 1$ by:
\begin{equation} \label{definition.X.derrida.brunet}
X_i (t + 1) : = \max_{1\leq j \leq N} \big\{ X_j(t) + \xi_{j,i} (t + 1) \big\} ,
\end{equation}
where $\big\{ \xi_{i,j} (s) \, ; 1 \leq i, j \leq N\,, s \in \N \big\} $ are i.i.d. real random variables of common law $\xi$. The $N$ particles can also be seen as the fitnesses of a population under reproduction, mutation and selection, the population size being kept constant.
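The recursion (\ref{definition.X.derrida.brunet}) is also easy to simulate. The following minimal sketch (in Python, with the disorder left as a user-supplied sampler; the function names are ours and purely illustrative) makes the max-plus structure of the dynamics explicit.
\begin{verbatim}
import numpy as np

def step(X, xi):
    # One step of the dynamics: the new position of particle i is
    # the maximum over j of X_j(t) + xi_{j,i}(t+1).  Here X has
    # shape (N,) and xi has shape (N, N), xi[j, i] being the
    # displacement attached to the edge from j to i.
    return (X[:, None] + xi).max(axis=0)

def simulate(N, T, sample_xi):
    # Iterate the recursion for T steps, starting from X = 0;
    # sample_xi(N) must return an (N, N) array of i.i.d. variables
    # with common law xi.
    X = np.zeros(N)
    for _ in range(T):
        X = step(X, sample_xi(N))
    return X
\end{verbatim}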
Moving fronts are used to model several problems in biology and physics. They describe, for example, how the fitness of a gene propagates through a population. In physics they appear in non-equilibrium statistical mechanics and in the theory of disordered systems \cite{Derrida1988}.
We now explain how (\ref{definition.X.derrida.brunet}) is related to the polymer problem studied by Cook and Derrida in \cite{Cook1989}. One can check by induction that
$$
X_i(t)= \max \Big\{ X_{j_0}(0) + \sum_{s=1}^{t} \xi_{j_{s-1} j_{s}} (s); 1 \leq j_s \leq N, \ \forall s = 0, \ldots, t-1 \text{ and } j_t = i \Big\} \,.
$$
Then we take $X_j(0) = 0$ for all $1 \leq j \leq N$ and sample $-\xi_{ij}(t)$ as a Bernoulli random variable with $\P\big(\xi_{ij}(t) = 0\big) = 1/N^{1+r}$. From the above formula, $-X_i (t)$ corresponds to the ground state energy of the polymer conditioned to be on $i$ at $t$. Therefore, the ground state is obtained by taking the maximum over all possible positions.
\begin{defi}[Front Speed] Let $\phi \big(X(t)\big) = \displaystyle \max_{1 \leq i \leq N} \ \big\{ X_i(t) \big\}$. The front speed $v_N$ is defined as
\begin{equation}\label{equa.def.speed}
v_N : =\lim_{t \to \infty} \frac{\phi \big( X(t) \big)}{t} .
\end{equation}
For $N$ fixed, the limit (\ref{equa.def.speed}) exists and is constant a.s., see \cite{Comets2013} for more details and a rigorous proof.
\end{defi}
Hence, it is not difficult to see that the ground state energy per unit of length is equal to $-v_N$ as defined in (\ref{equa.def.speed}).
\medskip
This model was introduced by Brunet and Derrida in \cite{Brunet2004} to better understand the behavior of some noisy traveling-wave equations, that arise from microscopic stochastic models. By the selection mechanism, the particles remain grouped, they are essentially pulled by the leading ones, and the global motion is similar to a front propagation in reaction-diffusion equations with traveling waves. In \cite{Brunet2004}, Brunet and Derrida solve for a specific choice of the disorder ($\xi_{ij}$ are sampled from a Gumbel distribution) the microscopic dynamics and calculate exactly the velocity and diffusion constant. Comets, Quastel and Ramirez in \cite{Comets2013} prove that if $\xi$ is a small perturbation of the Gumbel distribution the expression in \cite{Brunet2006} for the velocity of the front remains sharp and that the empirical distribution function of particles converges to the Gumbel distribution as $N \to \infty$. They also study the case of bounded jumps, for which a completely different behavior is found and finite-size corrections are extremely small.
Traveling fronts pulled by the farmost particles are of physical interest and not so well understood; see \cite{Panja2004} for a survey from a physical perspective. It is conjectured that, for a large class of such models where the front is pulled by the farmost particles, the motion and the particle structure have universal features, depending mainly on the tail of the distribution \cite{Brunet2004, Brunet2006}. Recent results, confirming some of the conjectures, have been rigorously proved for different models of front propagation.
B\'erard and Gou\'er\'e \cite{Berard2010} consider the binary Branching Random Walk (BRW) under the effect of a selection (keeping the $N$ right-most particles). They show that, under some conditions on the tail of the step distribution, the asymptotic velocity converges at the unexpectedly slow rate $(\log N)^{-2}$. Couronn\'e and Gerin \cite{Couronne2011} study a particular case of BRW with selection where the corrections to the speed are extremely small. Maillard in \cite{Maillard2011} shows that there exists a killing barrier for the branching Brownian motion such that the population size stays almost constant. He also proves that the recentered position of this barrier converges to a L\'evy process as $N$ diverges. In the case where there are infinitely many competitors evolving on the line, called the Indy-500 model, quasi-stationary probability measures are superpositions of Poisson point processes \cite{Ruzmaikina2005}.
\bigskip
In the first part of this paper, we study the model presented in \cite{Cook1989} and described above. We consider the case where the distribution of the $\xi_{ij}$ depends on $N$ and is given by
\begin{align}\label{equa:defi.2.states}
\P &\big( \xi (N) =0 \big) = p_0(N) \sim \rho/N^{1+r} \\
\P & \big( \xi (N) = - 1 \big) = 1 - \P\big( \xi (N) =0 \big) \,, \nonumber
\end{align}
where $r>0$, $\rho>0$ and for sequences $a_N , b_N$ we write $a_N \sim b_N$ if $a_N / b_N \to 1\,$. We will often omit $N$ in the notation. Since $\xi$ is non-positive, the front moves backwards. As a consequence of the selection mechanism and of the features of $\xi$, all particles stay within distance one of the leaders, and when the front moves, {\it i.e.} $\phi\big(X(t)\big) = \phi\big(X(t-1)\big)-1$, all particles are at the same position. This particular behavior hides a renewal structure that will be used when computing the front speed.
The case $1/r \in \N$ is critical and the system displays a different behavior. For $N$ large enough, we show that at time $t= 1/r$ a Poissonian number of particles $X_i$ remain at zero. Then, on the $(1/r)$-th plane there is a finite (possibly zero) number of sites that can still be connected to the first plane through a path of zero energy, whereas when $1/r \not\in \N$ the typical number of such sites is of order $N^{1-tr}$. This difference of behavior leads to an additional term in (\ref{equa:bad.formula}), and the following Theorem holds.
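The order $N^{1-tr}$ can be guessed by a first moment computation: there are $N^{t+1}$ directed paths of length $t$, each of zero energy with probability $p_0^t$, so that
$$
\E \big[ \sharp \{ \text{zero-energy paths at time } t \} \big] = N \, (N p_0)^t \sim \rho^t N^{1-tr} \,.
$$
This heuristic is made rigorous in Subsection \ref{sec:number.particles.zero}.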
\begin{teo}\label{teo:derrida} Let $\xi$ be distributed according to (\ref{equa:defi.2.states}). Then the front speed $v_N$ satisfies
\begin{equation}
\lim_{N \to \infty} v_N = \left\{
\begin{array}{lcl}
-\big( \, 1+ \lfloor 1/r \rfloor \big)^{-1}, & \text{if} & 1/r \not\in \N \\
-\big( \, 1+ \lfloor 1/r \rfloor - e^{- \rho^{1/r}} \,\big)^{-1} , & \text{if} & 1/r \in \N \, ,
\end{array}
\right.
\end{equation}
In the case where $r=0$,
\begin{equation}
\lim_{N \to \infty} v_N = 0\,.
\end{equation}
\end{teo}
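To illustrate the two regimes, take $\rho = 1$. For $r = 2/5$ we have $1/r = 5/2 \not\in \N$ and $v_N \to -1/3$, while for $r = 1/2$ we have $1/r = 2 \in \N$ and $v_N \to -\big(3 - e^{-1}\big)^{-1} \approx -0.380$: at criticality the front is slightly faster in modulus, the correction coming from the positive limiting probability $e^{-\rho^{1/r}}$ that no leading particle survives at time $1/r$.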
In Section \ref{sec.front.speed.three.states} we consider the case where $\xi$ takes values in the lattice $\mathbb{Z}_0 = \{ l \in \mathbb{Z} ; l \leq 0 \}\,.$ Then we set for $i \in \N$
\begin{equation}\label{equa:defi.3.states}
p_i(N) = \P(\xi(N) =-i) \,,
\end{equation}
and assume that $p_0 \sim \rho / N^{1+r}$ where $r$ and $\rho$ are non-negative. Let
\begin{equation}\label{equa.defi.q.2}
q_2(N) : = \P\big( \xi (N) \leq -2 \big)=1-p_0-p_1\,.
\end{equation}
We also assume that for $i \geq 2 $
\begin{equation}\label{equa.defi.vartheta}
\frac{p_i(N)}{q_2(N)} = \P (\vartheta = -i)\,,
\end{equation}
where $ \vartheta $ is an integrable distribution on the lattice $\mathbb{Z}_{-2}$ that does not depend on $N$.
\medskip
As we explain in Section \ref{sec.conclusion.teo.speed.3.states.gen}, we can further generalize the model and consider $\xi$ distributed as
\begin{equation}\label{equa:defi.3.states.intro.0}
\xi = p_0(N) \delta_{\lambda_0} + p_{1}(N) \delta_{\lambda_1} +q_2(N) \vartheta (dx) \,,
\end{equation}
where $\vartheta (dx)$ is an integrable probability distribution on $]-\infty\,, \lambda_1 [$ and $\delta_{\lambda_i}$ denotes the point mass at $\lambda_i$. We assume that $\lambda_1 < \lambda_0$ and that $p_0(N) \sim \rho/N^{1+r}$ for some $r>0\,.$ Then, the velocity $v_N$ obeys the following asymptotics.
\begin{teo} \label{teo.speed.3.states.gen} Let $\xi$ be distributed according to (\ref{equa:defi.3.states.intro.0}). Assume that
$$
p_0(N) \sim \frac{\rho}{N^{1+r}}, \qquad \text{and} \quad \lim_{N \to \infty} q_2(N) = \theta \,,
$$
where $r>0$ and $ 0<\theta<1$. Then the front speed $v_N$ satisfies
\begin{equation}
\lim_{N \to \infty} v_N = \left\{
\begin{array}{lcl}
\lambda_0- (\lambda_0-\lambda_1)\big( \,1+ \lfloor 1/r \rfloor \big)^{-1}, & \mbox{if} & 1/r \not\in \N \\
\lambda_0 - (\lambda_0-\lambda_1)\big( \, \lfloor 1/r \rfloor + 1- 1/g(\theta) \, \big)^{-1} , & \mbox{if} & 1/r \in \N \,,
\end{array}
\right.
\end{equation}
where $g (\theta ) \geq 1 $ is a non-increasing function. The conclusion in the case $1/r \not\in \N$ still holds if $\xi$ satisfies the weaker assumption $q_2/( 1-p_0 ) \leq \theta' \,,$ for some $0< \theta'<1\,$.
\end{teo}
\medskip
The paper is organized as follows. In Subsection \ref{sec:number.particles.zero} we compute the typical number of leading particles, which corresponds to the number of paths of zero energy, and in Subsection \ref{subsec.front.speed} we calculate the limit of $v_N$ as $N\to \infty$, exhibiting in particular the additional term appearing in (\ref{equa:bad.formula}) in the critical case. In Subsection \ref{sec:jumping} we compute the typical number of leading particles when $\xi$ is distributed according to (\ref{equa:defi.3.states.intro.0}). Subsections \ref{bound} and \ref{sec.conv.integral} present some technical results and calculations. In Subsection \ref{sec:front.speed.3.states} we compute the front velocity and prove the discrete version of Theorem \ref{teo.speed.3.states.gen}. Finally, in Section \ref{sec.conclusion.teo.speed.3.states.gen} we sketch the proof of Theorem \ref{teo.speed.3.states.gen}.
\section{Front speed for the two-state percolation distribution}\label{sec.derrida.two}
As in \cite{Comets2013}, we consider the following stochastic process.
\begin{defi} Let $Z(t) := \big( Z_l (t) ; \, l= 0,1 \big)$ be defined as
\begin{equation}
Z_l( t ) = \sharp \big\{ i \,; 1 \leq i \leq N \,,\, X_i( t ) = \phi ( X(t - 1) ) - l \big\} \, ,
\end{equation}
where $\sharp $ denotes the number of elements of a set. Recall that $\phi ( X(t - 1) ) $ is the position of the front at $t-1$.
\end{defi}
Note that, due to the special features of the distribution (\ref{equa:defi.2.states}), $Z_0( t )$ is equal to the number of leaders if the front has not moved backwards between times $t-1$ and $t$, and to $0$ if the front moved. $Z$ is a homogeneous Markov chain on the set
$$
\Omega(N) = \bigl\{ \, x \in \{ 0, 1 , \ldots , N \}^2 \, ; x_0 + x_{1} = N \, \bigr\} \, ,
$$
where $x_i$ are the coordinates of $x$. It is obvious that $Z_0$ completely determines the vector $Z (t)\,$, since $Z_1(t) = N - Z_0( t )$. The transition probabilities of the Markov chain $Z_0(t)$ are given by the Binomial distributions
\begin{align}\label{equa:law.Z}
\P & \big( Z_0 ( t+1 ) = \cdot \mid Z(t) = x \big) \nonumber \\
& = \P \big( Z_0 ( t+1 ) = \cdot \mid Z_0( t ) = x_{0} \big) =\left\{
\begin{array}{lc}
\mathcal{B} \big( N, 1-(1-p_0)^{x_{0}} \big)( \cdot), & x_0 \geq 1 \\
\mathcal{B} \big( N, 1-(1-p_0)^{N} \big)( \cdot) , & x_0 = 0 \,.
\end{array}
\right.
\end{align}
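Although elementary, the one-dimensional chain (\ref{equa:law.Z}) is convenient for numerical experiments. The Python sketch below (our own illustration, not used in the proofs) estimates $\E_\oplus[\tau]$ by direct simulation; by the renewal argument of Subsection \ref{subsec.front.speed}, $-1/\E_\oplus[\tau]$ then approximates the front speed.
\begin{verbatim}
import numpy as np

def sample_tau(N, p0, rng):
    # Time for Z_0 to hit 0, started from the all-leaders state (x_0 = N).
    z0, t = N, 0
    while True:
        t += 1
        z0 = rng.binomial(N, 1.0 - (1.0 - p0) ** z0)
        if z0 == 0:
            return t

rng = np.random.default_rng(1)
N, r, rho, runs = 10**4, 0.5, 1.0, 2000   # critical choice: 1/r = 2
p0 = rho / N ** (1 + r)
taus = [sample_tau(N, p0, rng) for _ in range(runs)]
print("estimated E[tau]:", np.mean(taus), "=> speed ~", -1.0 / np.mean(taus))
\end{verbatim}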
We will often consider Markov chains with different starting distributions. For this purpose we introduce the notation $\P_\mu $ and $\E_\mu$ for probabilities and expectations given that the Markov chain initial position has distribution given by $\mu$. Often, the initial distribution will be concentrated at a single state $x$. We will then simply write $\P_x$ and $\E_x$ for $\P_{\delta_x}$ and $\E_{\delta_x}$.
In this Section, $\oplus $ denotes the configuration $(0,N) \in \Omega(N)\,$. Furthermore, we introduce the notation
\begin{equation}\label{eq:integer.part.r}
1/r = m + \eta \,,
\end{equation}
where $m$ stands for the integer part of $1/r$ and $\eta$ its fractional part.
\subsection{Number of Leading Particles} \label{sec:number.particles.zero}
In this Subsection, we show that under a suitable normalization and initial conditions the process $Z_0$ converges as $N$ goes to infinity.
Consider the random variable
\begin{equation} \label{eq:moving.backwards.stopping.time}
\tau = \inf \big\{t\,;\phi\big(X(t)\big) < \phi\big(X(t-1)\big) \,\big\} \,.
\end{equation}
Then, $\tau$ is a stopping time for the filtration $\mathcal{F}_t = \sigma \big( \xi_{ij} (s) \,;\, s \leq t \,,\, 1\leq i,j\leq N \big)$. It is not difficult to see that $\tau$ is also the first time when $Z_0$ visits zero
$$
\tau = \inf \big\{t\,; Z_0 (t) = 0 \big\} \,,
$$
and that $Z$ starts afresh from $\oplus\, $ when $t = \tau $ {\it i.e.} the distribution of $Z(\tau +t)$ is the same as the distribution of $Z(t)$ under $\P_\oplus$.
\begin{defi} \label{defi:Y}
Let $Y( t )$ be the number of leading particles at time $t$ if the front has not moved
\begin{equation}
Y( t ) := Z_{0} (t) 1_{\{t \leq \tau \}} \,.
\end{equation}
\end{defi}
Then, $Y$ is a homogeneous Markov chain with absorption at zero and transition probabilities given by the Binomial distributions
$$
\P \big(Y( t+1 ) = \cdot \mid Y( t ) = k \big)= \mathcal{B} \big( \, N, 1-( 1-p_0)^k \, \big) (\cdot) \,.
$$
The advantage of working with $Y$ rather than $Z_0$ is that the above formula holds even if $Y(t) = 0$.
\begin{prop}\label{prop:gen.function} Let $\xi$ be distributed according to (\ref{equa:defi.2.states}). For $k \in\{1,2,\ldots N \}$ denote by $G_{k}( s,t )$ the Laplace transform of $Y(t)$ under $\P_{k}$ at $s\in \R$. Then,
\begin{equation}\label{equa:prop.gen.function}
G_{k}(s,t): = \E_{k} [e^{s \, Y(t)}]= \exp \Big\{( e^s-1 )k \big(N p_0\big)^t \big(1 + o\big( 1\big) \big) \Big\} \, .
\end{equation}
\end{prop}
\emph{Proof.} Conditioning on $\mathcal{F}_{t-1}: = \{\xi_{ij} (s); s\leq t-1\} \,,$
\begin{align*}
\E_{k} [ e^{s \, Y( t )} ] & =\E_{k} \big[ \, \E[e^{s\,Y( t)} \mid Y(t-1)] \, \big] \\
& = \E_{k} \Big[ \Big( 1 + ( e^s-1 ) \big( 1- ( 1- p_0 )^{Y( t-1)} \big)\Big)^N \,\Big]\,.
\end{align*}
Since $p_0 \sim \rho/N^{1+r}$ with $r>0$ and $Y(t-1) \leq N $ we obtain by first order expansion that
$$
\Big( 1 + (e^s-1) \,\big(1- (1-p_0 )^{Y(t-1)}\big) \Big)^N = s_{(1)}(N)^{Y(t-1)} \, ,
$$
where $\displaystyle s_{(1)}(N)= \exp \big\{ (e^s - 1) \big( N p_0 + o (N p_0) \big) \big\} \, $ and $o ( N p_0 )$ converges to $0$ independently of $Y(t-1)$.
Repeating the argument,
$$
\E_{k} [ \ e^{s\,Y(t)} \ ]= \E_{k} [s_{(1)}(N)^{Y(t-1)}] = \E_{k} [s_{(2)}(N)^{Y( t-2)}] \, ,
$$
\vspace{0.2cm}
with $\displaystyle s_{(2)}(N)= \exp \big\{ (s_{(1)}(N)-1) \big( N p_0 + o ( N p_0 )\big) \big\} \, $.
Expanding $s_{(1)}(N)-1$,
\vspace{0.2cm}
\begin{align*}
s_{(1)}(N)-1 & = \exp \Big\{ (e^s - 1) \big( N\, p_0 + o ( N p_0 ) \big) \Big\} - 1 \\
& = (e^s-1) \big( N\, p_0 + o (N p_0) \big) \, .
\end{align*}
Hence, $s_{(2)}(N)= \exp\big\{(e^s-1) \big(N\, p_0\big)^2 + o \big( (N p_0)^2\big) \big\} \,$. We proceed recursively and obtain the expression
$$
\E_{k} [e^{s\,Y(t)}]= \exp \Big\{ k \,( e^s-1) \big(N\, p_0 \big)^t \big( 1 + o (1) \big) \Big\} \,,
$$
which proves the statement. \hfill $\Box$
\medskip
We point out that the case $k = N \, $ corresponds to $Z(0) = \oplus \, $. We now state two Corollaries of Equation (\ref{equa:prop.gen.function}).
\begin{cor} \label{cor:Y.big.t} Let $\xi$ be distributed according to (\ref{equa:defi.2.states}) and $k \in \{1, \ldots, N\}$. Then, for $t \geq m+1 $ ,
\begin{equation}\label{equa.cor:Y.big.t}
\P_{k} \big(Y(t) = 0\big) \geq 1 - \rho^{t} \, N^{1-t\,r} + o\big(N^{1-t\,r}\big) \, .
\end{equation}
\end{cor}
\emph{Proof.} Since $\P_{k} \big( Y(t) = 0\big) = \lim_{s \to - \infty } \E_{k} \big[ e^{s Y(t)} \big]$, Proposition \ref{prop:gen.function} implies that
$$
\P_{k} \big( Y(t) = 0\big) = \exp \Big\{- k \big(N p_0\big)^t \big(1 + o( 1) \big) \Big\} \geq \exp \Big\{ - N \big(N p_0\big)^t \big(1 + o( 1) \big) \Big\} \,.
$$
Then, we obtain (\ref{equa.cor:Y.big.t}) by first order expansion. \hfill $\Box$
\vspace{0.2cm}
\begin{cor} \label{cor:converg.Y.eta.random} Let $\xi$ be distributed according to (\ref{equa:defi.2.states}) with $\eta=0$, {\it i.e.} $r=1/m \,$. Assume that $\kappa (N)$ is a sequence of random variables in $\,\{1, \ldots, N\}\,$ that are independent of the $\xi_{ij}$ and that $\kappa(N)/N$ converges a.s. to a positive random variable $U$.
Then, under $\P_{\kappa(N)}$, $Y(m)$ converges in distribution to $Y_\infty$, a doubly stochastic Poisson random variable characterized by its Laplace transform
\begin{equation}
\E[ e^{s\,Y_\infty }]:= \E \big[ \exp \big\{ U \,(e^s-1)\rho^{m} \big \} \big].
\end{equation}
\end{cor}
\emph{Proof.} From (\ref{equa:prop.gen.function}), we have that
$$
\E [ e^{s Y(m)} \mid \kappa (N) ] = \exp \big\{ ( e^s-1 ) \, \kappa(N) \rho^{m} \, N^{-1} \big(1 + o(1) \big) \big\} \,.
$$
The term $o(1)$ converges to zero independently of the initial position $\kappa(N)\,$. Then, by dominated convergence, we obtain that
$$
\lim_{N \to \infty }\E [ e^{s Y(m)} ] = \E \big[ \exp \big\{ U \,(e^s-1)\rho^{m} \big\} \big] \,,
$$
which concludes the proof. \hfill $\Box$
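Letting $s \to -\infty$ in the Laplace transform above gives the limiting mass at zero,
$$
\P ( Y_\infty = 0 ) = \E \big[ e^{- U \rho^{m}} \big] \,;
$$
in particular, for the deterministic initial condition $\kappa(N) = N$, so that $U \equiv 1$, one recovers the value $e^{-\rho^m}$ that produces the additional term in Theorem \ref{teo:derrida}.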
\vspace{0.2cm}
We now prove a large deviation principle for $Y$. As in \cite{Dembo2010, DenHollender2008}, we denote by
\begin{equation}
\Lambda_{k,\,t} (s) := \lim_{N \to \infty} \frac{1}{k N^{-rt}} \log \E_{k } \big[ e^{s \, Y(t)} \big] \,,
\end{equation}
the cumulant generating function of $Y$ under $\P_k$. From (\ref{equa:prop.gen.function}) we see that $\Lambda_{k,\,t} (s) = (e^s-1)\rho^{t}\, $. Denoting by
\begin{equation}
\Lambda_{k,t}^* (x) := \sup_{s \in \R} \{ x s - \Lambda_{k, t} (s)\}\,,
\end{equation}
the Legendre transform of $\Lambda_{k,\,t}$, we have that
\begin{equation}\label{equa:rate.function.Y}
\Lambda_{k,\,t}^* (x) = \left\{
\begin{array}{ccl}
x ( \log x -\log \rho^t ) +\rho^t -x \,, & \mbox{ if } & x > 0 \\
\rho^t \,, & \mbox{ if } & x = 0 \\
\infty \,, & \mbox{ if } & x < 0 \,.\\
\end{array}
\right.
\end{equation}
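For $x > 0$ the supremum defining $\Lambda^*_{k,t}$ is attained at the stationary point of $s \mapsto xs - (e^s - 1)\rho^t$:
$$
x - \rho^t e^{s^*} = 0 \quad \Longleftrightarrow \quad s^* = \log ( x/\rho^t ) \,, \qquad \Lambda_{k,\,t}^* (x) = x \log ( x/\rho^t ) - x + \rho^t \,,
$$
the familiar Poisson rate function; for $x < 0$ letting $s \to -\infty$ shows that the supremum is infinite, while for $x = 0$ it equals $\rho^t$.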
\vspace{0.2cm}
\begin{prop}(Large Deviation Principle for $Y\,$)\label{prop:large.deviation.Y} Let $\xi$ be distributed according to (\ref{equa:defi.2.states}). For $t \leq m$, let $k(N) \leq N $ be a sequence of positive integers such that
$$
\lim k(N) \, N^{-r\,t} = \infty \,.
$$
Then, under $\P_{k(N)}$, $Y(t)/\big(k(N)N^{-r\,t}\big)$ satisfies a Large Deviation Principle with rate function given by $\Lambda^*_{k,t}$ as in (\ref{equa:rate.function.Y}) and speed $k(N)N^{-r\,t}$.
\end{prop}
\emph{Proof.} It is a direct application of the G\"artner-Ellis Theorem (see {\it e.g.} Theorem V.6 in \cite{DenHollender2008}). Since $\Lambda_{k,\,t}$ is finite and differentiable on the whole of $\R$, every point is an exposed point, so the large deviation lower bound holds with the infimum taken over all points. \hfill $\Box$
\vspace{0.2cm}
Our next Corollary formalizes the statement of Cook and Derrida in \cite{Cook1989}.
\begin{cor} \label{cor:fluct.Y.random} Let $\xi$ be distributed according to (\ref{equa:defi.2.states}) with $\eta>0$. Assume that $\kappa(N)$ is a sequence of random variables in $\,\{1, 2, \ldots, N\}\,$ independent of the $\xi_{ij}$ and that $\kappa(N)/N$ converges a.s. to a positive random variable $U$.
Then, under $\P_{\kappa(N)}$ for $t \leq m$
\begin{equation}
\lim_{N\to \infty } \P \Big( \ \Big| \,\frac{ Y(t) }{ \rho^t\, U\, N^{1-t r} } -1 \, \Big| \geq \varepsilon \ \Big) = 0 \, .
\end{equation}
The same statement holds for $\eta = 0$ and $t \leq m-1 $.
\end{cor}
\emph{Proof.} We first consider the case where $\kappa(N)$ is a deterministic sequence and $\kappa(N)/N \to u \,,$ with $0 < u \leq 1$. Then the conditions of Proposition \ref{prop:large.deviation.Y} are satisfied and $Y(t)/\big( \kappa(N)\,N^{-t\,r} \big) $ satisfies a Large Deviation Principle with rate function given by (\ref{equa:rate.function.Y}), whose only zero is at $\rho^t$. This implies the desired convergence.
The random case is solved by conditioning on $\kappa(N) = Y(0)$.
\begin{align*}
\P & \Big( \ \Big|\, \frac{ Y(t) }{ \rho^t U N^{1- t r} } -1 \, \Big| \geq \varepsilon \ \Big) \\
& = \int \P^{(2)}_{\kappa(N)(\omega_1)}\Big( \, \Big| \, \frac{ Y(t) }{ \rho^t U(\omega_1) N^{1- t r} } -1 \, \Big| \geq \varepsilon \, \Big) \P^{(1)} (\mathrm{d} \omega_1) \, ,
\end{align*}
where $\P^{(1)}$ is the distribution of $\kappa(N)$ and $ \P^{(2)}$ the law of $\xi_{ij}$'s. For $\P^{(1)}$ a.e. $\omega_1$
$$
\lim_{N\to \infty}\P^{(2)}_{\kappa(N)(\omega_1)}\Big( \ \Big|\, \frac{ Y(t) }{ \rho^t U(\omega_1) N^{1- tr} } -1 \, \Big| \geq \varepsilon \ \Big) = 0\,,
$$
and we conclude by dominated convergence. \hfill $\Box$
\vspace{.2cm}
In \cite{Cook1989} Cook and Derrida consider the particular case where $\rho =1$ in (\ref{equa:defi.2.states}). From Corollary \ref{cor:fluct.Y.random}, we see that $Y(t)/N^{1-rt}$ converges in probability to one. Since under $\P_N$, $Y(t)$ is equal to the number of paths with zero energy at time $t$, the typical number of such paths is $N^{1-rt}$.
\subsection{Front Speed}\label{subsec.front.speed}
In this Subsection, we give the exact asymptotic for the front speed, proving Theorem \ref{teo:derrida}. The front positions can be computed by counting the number of times $Z$ visits $\oplus$. Indeed, at a given time $t$ either the front moves backwards and $\phi\big( X(t) \big) = \phi\big( X(t-1) \big) - 1$ or it stays still and $\phi\big( X(t) \big) = \phi \big( X(t-1) \big)$. We obtain that
$$
\frac{-N_t}{t} = \frac{\phi\big( X(t) \big) }{t},
$$
where $N_t$ is the stochastic process that counts the number of times that $Z$ has visited $\oplus$ up to time $t$.
A classic result from renewal theory (see {\it e.g.} \cite{Durrett2010}) states that
\begin{equation}
\lim_{t \to \infty}\frac{N_t}{t} = \frac{1}{\E_\oplus[\tau]} \, .
\end{equation}
Hence, to determine the front velocity, it suffices to determine $\E_\oplus [ \tau ] \, . $
\begin{equation}\label{equa.integral.tau.sum.prob}
\E_\oplus [ \tau ] = \sum_{t = 0 }^{\infty} \P_\oplus ( \tau \geq t+1 ) =\sum_{t=0}^{\infty} \P_\oplus ( Y(t) \geq 1 ) \, .
\end{equation}
A consequence of Corollaries \ref{cor:Y.big.t}, \ref{cor:converg.Y.eta.random} and \ref{cor:fluct.Y.random} is that if $\xi$ is distributed according to (\ref{equa:defi.2.states}) with $\eta > 0$, then
$$
\lim_{N \to \infty} \P_\oplus \big( Y(t) \geq 1 \big) = \left\{
\begin{array}{lcl}
1\, , & \text{if} & t \leq m \, ; \\
0 \, , & \text{if} & t \geq m+1 \,.
\end{array}
\right.
$$
Whereas we have the following limits when $\eta = 0$
$$
\lim_{N \to \infty} \P_\oplus \big( Y(t) \geq 1 \big) = \left\{
\begin{array}{lcl}
1 \, , & \text{if} & t \leq m-1 \, ; \\
1-e^{-\rho^{m}} \, , & \text{if} & t = m \, ; \\
0 \, , & \text{if} & t \geq m+1 \, .\\
\end{array}
\right.
$$
Then, to finish the proof of Theorem \ref{teo:derrida}, it suffices to show that
\begin{equation}
\lim_{N\to\infty}\sum_{t\geq m+1} \P_\oplus ( Y( t) \geq 1 ) = 0\,.
\end{equation}
Since $Y$ is a homogeneous Markov chain we use the Markov property at time $m+1$ to obtain
$$
\sum_{t \geq m +1 } \P_\oplus \big( Y(t)\geq 1 \big) =\sum_{t=0}^{\infty} \sum_{k=1}^N \P_k \big( Y(t) \geq 1 \big) \,\P_\oplus \big( Y(m+1) = k \big) \, .
$$
It is not difficult to see that under $\P_k$, $Y$ is stochastically dominated by $Y$ under $\P_N$, which implies that $\P_k \big( Y(t) \geq 1 \big) \leq \P_N \big( Y(t) \geq 1 \big) \, $. Then, applying this inequality in the above expression, we get
\begin{equation}\label{equa:est.sum}
\sum_{t \geq m +1 } \P_\oplus \big( Y(t) \geq 1 \big) \leq \P_\oplus \big( Y(m+1) \geq 1 \big) \E_\oplus[ \tau ] \, .
\end{equation}
\begin{prop}\label{prop:bound.tau} Let $\xi$ be distributed according to (\ref{equa:defi.2.states}). Then, $\E_x[\tau]$ is bounded in $N$
\begin{equation}
\sup_{N \in \N }\sup_{x\in\Omega(N)} \{\,\E_x[\tau ]\,\} < \infty
\end{equation}
\end{prop}
\emph{Proof.} By Corollary \ref{cor:fluct.Y.random}, $\lim_{N \to \infty} \P_\oplus \big( \tau \geq m+2 \big) = 0 \,$. Therefore, there exists a constant $c_{\thec}<1$ \setcounter{aux}{\value{c}} such that for $N$ sufficiently large
$$
\P_\oplus ( \, \tau \geq m+2 \, ) \leq c_{\arabic{aux}} \, .
$$
Coupling the chains started from $\delta_x$ and $\delta_\oplus$ we obtain that $\P_{x}( \tau \geq m+2 ) \leq \P_{\oplus}( \tau \geq m+2 ) \, $ for every $x\in\Omega(N)$ and therefore
\begin{equation}\label{equa.prop:bound.tau}
\P_x ( \tau \geq m+2 ) \leq c_{\arabic{aux}} \, .
\end{equation}
Then, Proposition \ref{prop:bound.tau} follows as a consequence of the Markov property and (\ref{equa.prop:bound.tau}). In Subsection \ref{bound} we present an equivalent argument in full detail.\hfill $\Box$
\medskip
Applying Proposition \ref{prop:bound.tau} and Corollary \ref{cor:Y.big.t} in (\ref{equa:est.sum}), we conclude that
$$
\sum_{t \geq m +1 } \P_\oplus ( \, Y(t) \geq 1 \, ) = \mathcal{O} (\,N^{1 - (m+1) \, r }\,) \, .
$$
Hence, from (\ref{equa.integral.tau.sum.prob}) we obtain the limits
\begin{equation} \label{equa:expected.value.tau.2.states}
\lim_{N \to \infty}\E_\oplus [\tau] = \left\{
\begin{array}{lcl}
1+ m , & \mbox{if} & r \neq 1/m \\
1+ m - e^{- \rho^{m}} , & \mbox{if} & r = 1/m \, ,
\end{array}
\right.
\end{equation}
proving Theorem \ref{teo:derrida} in the case $r>0$.
To finish the proof of Theorem \ref{teo:derrida} it remains to study the case where $r=0$. For that we use a coupling argument. Up to the end of this Subsection we denote by $ \xi (r)$ a random variable distributed as
$$
\P( \xi (r) = 0 ) =1 - \P(\xi (r) = -1 ) \sim \rho/N^{1+r} \, .
$$
For $r>0$, the random variables $\xi(0)$ are stochastically larger than $\xi(r)$ for $N$ large enough. Denoting by $X_i^r (t)$ the stochastic process defined by $\xi(r)$ we construct the process in such a way that the following relation holds
$$
0 \geq \frac{\phi\big( X^0(t) \big) }{t} \geq \frac{\phi\big( X^r(t) \big) }{t}.
$$
From (\ref{equa:expected.value.tau.2.states}), if we choose $r$ such that $1/r$ is not an integer, we have the lower bound
$$
0 \geq v_N (0) \geq v_N (r) \to -\big( \ 1+ \lfloor 1/ r \rfloor \ \big)^{-1};
$$
whence taking $r$ to $0$, we have that $\lim v_N(0) = 0 $, which concludes the proof of Theorem \ref{teo:derrida}.
\section{Front speed for the infinitely many states percolation distribution} \label{sec.front.speed.three.states}
In this Section, we prove a discrete version of Theorem \ref{teo.speed.3.states.gen}. We consider the case where $\xi_{ij}$ is defined as in (\ref{equa:defi.3.states}).
{\bf Assumption (A).} The random variable $\xi$ distributed according to (\ref{equa:defi.3.states}) satisfies Assumption (A) if there exists a constant $0 <\theta < 1 $ such that
$$
\lim_{N \to \infty} q_2 = \theta\,,
$$
and if $\vartheta $ defined in (\ref{equa.defi.vartheta}) is integrable.
\vspace{.2cm}
In the non-critical case we do not need to assume the convergence of $q_2$; we prove Theorem \ref{teo.speed.3.states} under the following weaker condition.
{\bf Assumption (A').} The random variable $\xi$ distributed according to (\ref{equa:defi.3.states}) satisfies Assumption (A') if there exists a constant $0 <\theta' < 1$ such that for $N$ large enough
$$
\frac{q_2}{ ( 1-p_0 )} \leq \theta' \,,
$$
and if $\vartheta $ defined in (\ref{equa.defi.vartheta}) is integrable.
\vspace{.2cm}
We adapt the notation of the previous Section and let $Z(t) := ( Z_l (t)\, ; l \in \N )$ be defined as
$$
Z_l(t) := \sharp \big\{ j \,; 1 \leq j \leq N \,, X_j(t) = \phi\big( X(t - 1) \big) - l \big\}\,.
$$
Then, $Z$ is a homogeneous Markov chain on the set
$$
\Omega(N) := \bigg\{ x \in \{ 0, 1 , \ldots , N \}^{\N} \, ; \sum_{i=0}^{\infty} x_i = N \bigg\},
$$
where $x_i$ are the coordinates of $x$. If at time $t$ we have that $Z(t)=x\in \Omega(N)$, it means that for each $k \in \N$ there are $x_k$ particles at position $-k\,$ with respect to the leader. In this situation, suppose that $x_0 \geq 1 $. Then the probability that at time $t+1$ a given particle sits at position $-k$ (with respect to the leader at time $t$) is given by,
\begin{equation}\label{equa:mult.coef}
s_k (x) := \bigg( \sum_{i=1}^{\infty} p_i \bigg)^{x_{k-1}} \ldots \bigg( \sum_{i=k}^{\infty} p_i \bigg)^{x_{0}} - \bigg( \sum_{i=1}^{\infty} p_i \bigg)^{x_{k}} \ldots \bigg( \sum_{i=k+1}^{\infty} p_i \bigg)^{x_{0}}\,,
\end{equation}
where we define $x_{-1}=0 $. Similarly, the probability that a given particle remains at the level of the front at time $t+1$ is given by
$$
s_0(x) : = 1-\bigl( 1- p_0\bigr)^{x_0}\, .
$$
If $x_0 = 0$, we shift $( x_{0}, x_{1} ,\ldots )$ to get a nonzero first coordinate obtaining a vector $\tilde{x} \in \Omega(N)$ such that $\tilde{x}_0 \geq 1$. Then, one can check that
$$
s_k (x) = s_k (\tilde{x})\,.
$$
The transition probability of the Markov chain $Z$ is given by
\begin{equation}\label{equa.Z.mult.nom.inf.range}
\P \big( Z(t+1) = y \mid Z(t)=x \big) = \mathcal{M}\big( N;s(x) \big) (y)\,,
\end{equation}
where $s(x) = \big( s_0(x), s_1(x), \ldots \big)$ and $\mathcal{M}\big( N;s(x) \big)$ denotes a Multinomial distribution with infinitely many classes. We refer to \cite{Comets2013}, Section 6, for more details on the computations. It is clear that $Z_0(t)$ has the same transition probabilities as the process studied in the two-state model. In particular, the results proved in Subsection \ref{sec:number.particles.zero} hold with the obvious changes.
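To make (\ref{equa:mult.coef}) and (\ref{equa.Z.mult.nom.inf.range}) concrete, here is a Python sketch of one transition of $Z$ (our own illustration: the law of $\xi$ is truncated at a finite depth $L$, and the numerical values of $p$ are arbitrary).
\begin{verbatim}
import numpy as np

def transition_probs(x, p):
    # Single-particle probabilities s_k(x), truncated at depth L = len(p).
    # x: occupation numbers (x_0,...,x_{L-1}); p: (p_0,...,p_{L-1}), sum(p) = 1.
    L = len(p)
    tail = np.cumsum(p[::-1])[::-1]        # tail[j] = p_j + p_{j+1} + ...
    def P_le(k):                           # P(one particle lands at <= -k)
        return np.prod([tail[k - l] ** x[l] for l in range(k)])
    s = [1.0 - (1.0 - p[0]) ** x[0]]       # s_0(x)
    s += [P_le(k) - P_le(k + 1) for k in range(1, L - 1)]
    s.append(max(0.0, 1.0 - sum(s)))       # remaining mass, lumped at -(L-1)
    return np.array(s)

rng = np.random.default_rng(2)
N, L = 1000, 6
p = np.array([1e-4, 0.6, 0.3, 0.05, 0.03, 0.0])
p[-1] = 1.0 - p[:-1].sum()
x = np.zeros(L, dtype=int)
x[0] = N                                   # the configuration "oplus"
probs = transition_probs(x, p)
print(rng.multinomial(N, probs / probs.sum()))
\end{verbatim}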
For a stopping time $T$, we define recursively $T^{(0)} = 0$ and for $i \geq 1$
\begin{equation}\label{equa.defi.T^i}
T^{(i)} (\omega) : = T^{(i-1)} ( \omega ) + T \circ \Theta_{T^{(i-1)} ( \omega )} ( \omega ) \,,
\end{equation}
where $\Theta_t$ is the time-shift operator. We adopt the convention that $\inf \{ \, \emptyset \,\} = \infty$. Once more we denote by $\tau$ the stopping time defined as
\begin{equation}\label{equa.defi.tau.three.states}
\tau := \inf \big\{ t\,;\,\phi\big(X(t)\big) < \phi\big(X(t-1)\big) \big\} \,.
\end{equation}
In contrast with the previous Subsection, $\tau$ is not a renewal time for $Z$. Let $T_x$ be the first time that $ Z(t)$ visits $x$
\begin{equation}
T_{x}: = \inf \{ t \, ; Z(t) = x \} \,.
\end{equation}
We adapt the notation of Section \ref{sec.derrida.two} and define $\oplus := \big(N,0, \ldots \big) \in \Omega(N)$ and $\triangle : = (0,N,0,\ldots) \in \Omega(N)\,.$ Finally, we keep notation (\ref{eq:integer.part.r}) and let $m$ be the integer part of $1/r$ and $\eta$ its fractional part.
We now state the main result of the Section.
\begin{teo} \label{teo.speed.3.states} Assume that $\xi$ satisfies Assumption (A). Then
\begin{equation}
\lim_{N \to \infty} v_N = \left\{
\begin{array}{lcl}
-\big( \,1+ \lfloor 1/r \rfloor \big)^{-1}, & \text{if} & 1/r \not\in \N \\
- \big( \, \lfloor 1/r \rfloor + 1- 1/g(\theta) \, \big)^{-1} , & \text{if} & 1/r = m \in \N \,,
\end{array}
\right.
\end{equation}
where $g (\theta ) \geq 1 $ is a non-increasing function. The conclusion in the case $r \neq 1/m$ still holds if $\xi$ satisfies the weaker Assumption (A').
\end{teo}
\subsection{The Distribution of $Z(\tau)$}\label{sec:jumping}
In this Subsection we study the limit distribution of $Z(\tau)$ as $N \to \infty$. When $\eta>0$ the limit is similar to the one obtained in the two-state case.
\begin{prop} \label{prop:conv.prob.Xtau} Assume that $\xi$ satisfies Assumption (A') and that $\eta>0$. Then,
\begin{equation}
\lim_{N \to \infty } \P_\oplus \big( Z(\tau) = \triangle \big) = 1 \, .
\end{equation}
\end{prop}
\vspace{0.2cm}
The case $\eta = 0$ is critical. We show that $Z_1(\tau)/N$ converges in distribution and that the limit distribution is a functional of a Poisson random variable.
\begin{prop} \label{prop:conv.z.tau.under.oplus} Assume that $\xi$ satisfies Assumption (A) with $\eta=0$ . Then under $\P_\oplus$, $Z_0 (m)$ converges in distribution to $\Pi(\rho^m)$ a Poisson random variable with parameter $\rho^m$.
Moreover, there exists a function $\,G\,:\, \N \to [0 , 1]$ (see Definition \ref{equa:G}) such that
\begin{equation}
\bigg( \frac{Z_1(\tau)}{N}\,, \sum_{i=2}^{\infty} \frac{Z_i(\tau)}{N} \bigg) \stackrel{d}{\rightarrow} \Big(G\big(\Pi(\rho^m)\big), 1-G\big(\Pi(\rho^m)\big) \Big)\,.
\end{equation}
\end{prop}
Before analyzing the cases $\eta = 0 $ and $\eta >0$ separately, we prove a technical Lemma that holds in both cases. It can be interpreted as follows: if at time $t$ there are sufficiently many leading particles, then at $t+1$, with high probability, there is no particle at distance two or more from the leaders at $t$.
\begin{lem} \label{lem:techinical.multi} Assume that $\xi$ satisfies Assumption (A'). For $x = x(N) \in \Omega(N)$ such that
$$
\log N = o ( x_0 ) \, ,
$$
define $ s_{i}(x) $ as in (\ref{equa:mult.coef}) and let $\mathcal{M}\big( N;s(x) \big)$ be a Multinomial random variable with infinitely many classes as in (\ref{equa.Z.mult.nom.inf.range}). Then,
\begin{equation}
\lim_{N\to \infty}\P \bigg( \mathcal{M}\big( N;s(x) \big) \in \Big\{ y \in \Omega(N) \, ; \sum_{i=2}^{\infty} y_{i} = 0 \Big\} \, \bigg) = 1 \, .
\end{equation}
\end{lem}
\emph{Proof.} We can write
\begin{align*}
\P \bigg( & \mathcal{M}\big( N;s(x) \big) \in \Big\{ y \in \Omega(N) \, ; \sum_{i=2}^{\infty} y_{i} = 0 \Big\} \, \bigg) \\
& = \sum_{n=0}^{N} \P \bigg( \mathcal{M}\big( N;s(x) \big) \in \Big\{ y \in \Omega(N) \, ; y_{0} = n \,, y_1 =N-n \Big\} \, \bigg) \\
&= \sum_{n=0}^{N} \frac{N!}{n!(N-n)!}s_0(x)^n \, s_{1}(x)^{N-n}
\geq \big( 1 - \theta'^{\,x_0} \big)^N \, ,
\end{align*}
where the last inequality holds for $N$ large enough as a consequence of Assumption (A'). Since $\log N = o( x_0 )$ we obtain that $\big( 1 - \theta'^{\,x_0} \big)^N \to 1 $, proving the result. \hfill $\Box$
\vspace{0.2cm}
\subsubsection*{Case $\eta>0$}
We have already introduced all necessary tools to prove Proposition \ref{prop:conv.prob.Xtau}.
\emph{Proof of Proposition \ref{prop:conv.prob.Xtau}. } From Corollaries \ref{cor:Y.big.t} and \ref{cor:fluct.Y.random} we see that $\P_\oplus \big( \tau \neq m+1 \big) \to 0 \,.$ Then, it suffices to prove that $\P_\oplus \big( Z(m+1) = \triangle ; \tau = m+1 \big) \to 1 \,.$
$$
\P_\oplus \big( Z(\tau ) = \triangle \, ; \tau = m+1 \big)= \sum_{x \in \Omega(N)} \P_\oplus \big( Z(m+1) = \triangle \, ;Z(m) = x \, ; \tau = m+1 \big) \, .
$$
On the event $\{ \tau = m+1 \}$ we have $Z_0(m) \geq 1$, so it suffices to consider $x$ such that $x_0 \geq 1$. Fix $0<\varepsilon< \rho^{ m}\,$ and take $x\in \Omega(N)$ such that $| x_0/N^{r \, \eta} -\rho^{m} | < \varepsilon$. From (\ref{equa.Z.mult.nom.inf.range}),
\begin{align}\label{equa.proof.teo.Z.to.triangle}
\P_\oplus & \big( Z(m+1) = \triangle \mid Z (m) = x \big) \nonumber \\
&= \mathcal{M}\big( N;s(x) \big) ( \triangle ) = s_{1}(x)^N \nonumber\\
&= \Big( \big( 1- p_0\big)^{x_0} - \big(1- p_0\big)^{x_{1}} \big(1- p_0 - p_1\big)^{x_0}\Big)^N \nonumber \\
& \geq \big(1- p_0\big)^{x_0 N} \big(1 - \theta'^{\,x_0}\big)^N \, ,
\end{align}
where the last inequality is a consequence of Assumption (A'). Since $x_0 = \mathcal{O} (\, N^{\eta r} \,)$ we conclude after a first order expansion that the lower bound in (\ref{equa.proof.teo.Z.to.triangle}) converges to one. Moreover, the rate of convergence is bounded from below by
$$
\big(1- p_0\big)^{(\rho^m+\varepsilon ) N^{1+r \eta}} \big(1 - \theta'^{(\rho^m-\varepsilon ) N^{r \eta}}\big)^N \,,
$$
which converges to one. Then, by Proposition \ref{prop:large.deviation.Y} and Equation (\ref{equa.proof.teo.Z.to.triangle}), we see that
$$
\P_\oplus \big( Z(\tau) = \triangle \big) \geq \sum_{| x_0 / N^{r \eta} -\rho^{m} | < \varepsilon } \P_\oplus \big( Z(\tau ) = \triangle ; Z(m) = x ; \tau = m+1 \big) \\
$$
converges to one, proving the result. \hfill $\Box$
\subsubsection*{Case $\eta=0$} \label{sec:jumping.eta.zero}
In this paragraph, we prove Proposition \ref{prop:conv.z.tau.under.oplus} and also a generalization that allows us to compute the distribution of $Z_1(\tau^{(i)})$.
\begin{lem} \label{lem:mult.techinique.many.particles} Assume that $\xi$ satisfies Assumption (A') with $\eta=0$. Fix $ \, 0<a<b \, $ and denote by $\Omega_a^b(N)$ the subset of $\Omega(N)$ defined as
$$
\Omega_a^b(N):= \big\{ x \in \Omega(N) \, ; a N^{1/m} \leq x_0 \leq b N^{1/m} \big\}\,.
$$
Then the following limit holds
\begin{equation}
\lim_{N \to \infty} \,\,\, \sup_{x \in \Omega_a^b (N)} \P_x \big( Z(1) \neq \triangle \mid Z_0(1) = 0 \big) = 0 \,.
\end{equation}
\end{lem}
\emph{Proof.} It is not difficult to obtain the following inequality
$$
\P_x \big( Z(1) \neq \triangle \mid Z_0(1) = 0 \big) \leq \frac{ \P_x \Big( Z(1) \in \big\{ y \in \Omega(N) \, ; \sum_{i=2}^{\infty} y_{i} \neq 0 \big\} \Big) }{\P_x \big( Z_0(1) = 0 \big)} \, .
$$
From (\ref{equa.Z.mult.nom.inf.range}) we have that under $\P_x$, $Z(1)$ is distributed according to $\mathcal{M}\big(N,s(x)\big)$ . Then, as a consequence of Lemma \ref{lem:techinical.multi}
\begin{align*}
\P_x & \bigg( Z(1) \in \Big\{ y \in \Omega(N) \, ; \sum_{i=2}^{\infty} y_{i} \neq 0 \Big\} \bigg)\\
&= 1-\P \bigg( \mathcal{M}\big( N;s(x) \big) \in \Big\{ y \in \Omega(N) \, ; \sum_{i=2}^{\infty} y_{i} = 0 \Big\} \, \bigg) \to 0 \,.
\end{align*}
Moreover, the rate of decay is bounded from above by
$$
1- \big(1-\theta'^{a N^{1/m}}\big)^N \to 0 \,.
$$
To finish the proof it suffices to show that $ \P_x \big( Z_0(1) = 0 \big) $ is bounded away from zero. Indeed, under $\P_x$, $Z_0 (1)$ is distributed according to a Binomial random variable of parameters $N$ and $s_0(x)$.
From the hypotheses of the Lemma we have that
$$
s_0(x) \leq 1-(1-p_0)^{b\,N^{1/m}}\,.
$$
Coupling $Z_0(1)$ with a Binomial random variable $\mathcal{B}$ of parameters $N$ and $1-(1-p_0)^{b\,N^{1/m}}$, we conclude that
$$
\P_x \big( Z_0(1) = 0 \big) \geq \mathcal{B} \big( N, 1-(1-p_0)^{b\,N^{1/m}}\big)(0) \to e^{-\rho b} \,.
$$
\hfill $\Box$
\vspace{.2cm}
From Corollary \ref{cor:fluct.Y.random}, we see that under $\P_\oplus$, $Z_0 (m-1) /N^{1/m}$ converges in probability to $\rho^{m-1}$, as $N\to\infty$. Hence, from Lemma \ref{lem:mult.techinique.many.particles}, we conclude that
\begin{equation}\label{equation.convergence.Z(m)=0}
\lim_{N \to \infty } \P_\oplus \big( Z(\tau) = \triangle \,| \, Z_0(\,m\,) = 0 \big) = 1 \,.
\end{equation}
This is the first step in the proof of Proposition \ref{prop:conv.z.tau.under.oplus}. The second step is to study the conditional distribution of $Z(\tau)$ given $Z_0(m) = k$ for a positive integer $k$.
\begin{prop}\label{prop:conv.of.Z(1)} Assume that $\xi$ satisfies Assumption (A) with $\eta=0$. Let $ k $ be a nonzero integer and denote by $\Omega_k(N)$ the subset of $\Omega(N)$ defined as
$$
\Omega_k(N) : = \big\{ x \in \Omega(N) \, ; x_0 = k \big\} \,.
$$
Then, for $\varepsilon >0 $ the following limit holds
\begin{equation}
\lim_{N \to \infty} \, \sup_{ x \in \Omega_k(N) } \, \P_x \bigg( \,\, \bigg| \bigg ( \frac{Z_1 (1)}{N} \, , \frac{ \sum_{i\geq 2}Z_i (1)}{N} \bigg) - (1-\theta^k, \theta^k) \bigg| > \varepsilon \bigg) = 0 \, .
\end{equation}
\end{prop}
\emph{Proof.} From (\ref{equa.Z.mult.nom.inf.range}), we see that under $\P_x$, $Z(1)$ is distributed according to an infinite range Multinomial of parameters $N$ and $s(x)$. In particular, under $\P_x$ the triplet
\begin{center}
$
\big( Z_0 (1)\,, Z_1 (1) \, , \sum_{i\geq 2}Z_i (1) \big) \,,
$
\end{center}
is distributed according to a three-class Multinomial of parameters $N$ and $ \big( s_{0}(x) \,, s_{1}(x) \,, \sum_{i \geq 2} s_i(x) \big)$. If $\xi$ satisfies Assumption (A) and $x\in\Omega_k(N)$, we have that
\begin{equation}
\lim_{N\to \infty} s_0(x) =0 \,;\quad \lim_{N\to \infty} s_{1}(x) = 1-\theta^k \,; \quad \lim_{N\to \infty} \sum_{i \geq 2} s_{i}(x) = \theta^k \, .
\end{equation}
The rate of convergence is uniform on $x \in \Omega_k (N)$.
A three classes Multinomial random variable as above satisfies a large deviation principle (see {\it e.g.} \cite{Dembo2010, DenHollender2008}) and the rate function is given by
\begin{equation}\label{equa:mult:larg.dev}
\Lambda^*(y) = \left\{
\begin{array}{ccl}
y_{1} \log \Big( \frac{(\theta^k) y_{1} }{(1 -y_{1})( 1-\theta^k )} \Big) - \log \Big( \frac{\theta^k }{1 -y_{1}} \Big) & \mbox{ if } & y_{1} + y_{2} = 1 \, , \\
\infty & & \text{ otherwise. }
\end{array}
\right.
\end{equation}
The only zero of $\Lambda^*$ is at $y = ( 0, 1-\theta^k, \theta^k )$. This implies the convergence in probability
\begin{center}
$
\frac{1}{N}\big( Z_0 (1)\,, Z_1 (1) \, , \sum_{i\geq 2}Z_i (1) \big) \to ( 0, 1-\theta^k, \theta^k ) \,.
$
\end{center} \hfill $\Box$
\vspace{.2cm}
We now give the definition of the function $G$ appearing in Proposition \ref{prop:conv.z.tau.under.oplus}.
\begin{defi}
Let $G: \N \longrightarrow [0,1] $ be defined as
\begin{equation}\label{equa:G}
G(k) = \left\{
\begin{array}{ccl}
1 - \theta^k, & \mbox{ if } & \, k \geq 1 \\
1 , & \mbox{ if } & \, k =0 \\
\end{array}
\right.
\end{equation}
where $\theta$ is given by Assumption (A).
\end{defi}
\emph{Proof of Proposition \ref{prop:conv.z.tau.under.oplus}.\,} From Corollary \ref{cor:converg.Y.eta.random}, we have that under $\P_\oplus$, $Z_0 (m)$ converges in distribution to a Poisson random variable of parameter $\rho^m$. Hence, to prove Proposition \ref{prop:conv.z.tau.under.oplus} it suffices to show that
\begin{align*} \label{equa:teo.favorite.site.to.jump.eta.0}
\P_\oplus & \bigg( \,\, \bigg| \bigg ( \frac{Z_1 (\tau)}{N} \, , \frac{ \sum_{i\geq 2}Z_i (\tau)}{N} \bigg) - \Big( G \big( Z_0(m) \big), 1- G \big( Z_0(m) \big) \Big) \bigg| > \varepsilon \bigg) \\
&= \sum_{k= 0}^{N} \P_\oplus \bigg( \,\, \bigg| \bigg ( \frac{Z_1 (\tau)}{N} \, , \frac{ \sum_{i\geq 2}Z_i (\tau)}{N} \bigg) - \big( G(k), 1- G (k) \big) \bigg| > \varepsilon \,; Z_0(m) = k \bigg)\,
\end{align*}
converges to zero. From (\ref{equation.convergence.Z(m)=0}) and Proposition \ref{prop:conv.of.Z(1)}, we know that for each $k \in \N$ the terms in the above sum converge to zero. Then, from the tightness of $Z_0 (m)$ we obtain that the sum itself converges to zero, proving the result.\hfill $\Box$
\medskip
We finish the present Subsection by computing the limit distribution of $Z(\tau^{(i)})$ for $i \in \N$. We also prove the convergence of some related processes that will appear when calculating the front velocity in Subsection \ref{sec:front.speed.3.states}.
\begin{prop}\label{prop:chain.started.random.position} Assume that $\xi$ satisfies Assumption (A) and that $\eta=0$. Let $\kappa(N)$ be random variables taking values in $\Omega(N)$, such that $\kappa(N)$ and the $\xi_{ij}$ are independent for every $N$. Denoting by $\kappa_0(N)$ the first coordinate of $\kappa(N)$ we also assume that
\begin{equation}\label{nontrivial.cond}
\lim_{N \to \infty} \frac{\kappa_0(N)}{N} = U \, \, \, a.s.
\end{equation}
where $U$ is a positive random variable. Then, under $\P_{\kappa(N)}$, we have that
\begin{enumerate}
\item $Z_0( m )$ converges weakly to $V$, a doubly stochastic Poisson random variable whose distribution is determined by the Laplace transform
\begin{equation}
\E[ e^{s V} ] = \E \big[ \exp \big\{ ( e^s-1 )\,\rho^{m} \, U \big\} \big] \, .
\end{equation}
\item Furthermore, the joint convergence also holds
\begin{equation} \label{equa:conv.Z.lambda-1.Z.tau}
\bigg( Z_0( m), \frac{Z_{1} ( \tau )}{N} , \tau \bigg) \stackrel{d}{\longrightarrow } \big( V ,G(V), m+1_{\{V \neq 0\}} \big) \,.
\end{equation}
\end{enumerate}
\end{prop}
\emph{Proof.} Since $ \kappa_0(N)/ N \to U $ the hypotheses of Corollary \ref{cor:converg.Y.eta.random} are satisfied, implying the first statement of the Proposition. From Corollaries \ref{cor:Y.big.t} and \ref{cor:fluct.Y.random}, we see that $\P( m \leq \tau \leq m+1)$ converges to one. Since $\tau = m $ if and only if $Z_0(m) = 0$ we obtain the convergence in distribution
$$
\tau \stackrel{d}{\to} m+1_{\{V \neq 0\}} \,.
$$
Finally, to prove that $Z_{1} (\tau)/N$ converges to $G(V)$, we proceed as in Proposition \ref{prop:conv.z.tau.under.oplus} and show by dominated convergence that
$$
\lim_{N\to \infty} \E\Bigg[ \P_{\kappa(N)} \bigg( \,\, \bigg| \bigg ( \frac{Z_1 (\tau)}{N} \, , \frac{ \sum_{i\geq 2}Z_i (\tau)}{N} \bigg) - \big( G \big( Z_0(m) \big), 1- G \big( Z_0(m) \big) \big) \bigg| > \varepsilon \bigg) \Bigg] =0 \,.
$$
\hfill $\Box$
\vspace{.2cm}
As an application of Proposition \ref{prop:chain.started.random.position} we can calculate the distribution of $Z(\tau^{(2)})$. Indeed, by the Skorokhod representation theorem we may regard the convergence in Proposition \ref{prop:conv.z.tau.under.oplus} as the stronger a.s. convergence: we do not lose any generality, since we can construct a sequence of random variables (possibly on an enlarged probability space) $ \kappa(N) \stackrel{d}{=}Z(\tau)$ that converges a.s. Details about this construction can be found in \cite{Billingsley2009}. Passing to the appropriate product space we may also assume that the $\kappa$'s and the $\xi_{ij}$'s are independent. Then, by the strong Markov property, we obtain that
\begin{equation}
\P_{\kappa (N)} \big( Z \circ \Theta_{\tau} (t) \in \cdot \big) \stackrel{d}{=}\P_{Z(\tau)} \big(Z \circ \Theta_{\tau} (t) \in \cdot \big) \,,
\end{equation}
for $t \geq 0 \,$. Hence, under $\P_\oplus$ we obtain that
$$
\Big(Z_0( \tau + m ) \, , \, \frac{Z_{1} (\tau^{(2)})}{N} \,, \tau^{(2)} - \tau^{(1)} \Big) \stackrel{d}{\longrightarrow } \big ( V^{(2)} , G(V^{(2)}), m+1_{\{V^{(2)} \neq 0\}}\big) \,,
$$
where $V^{(2)}$ is a doubly stochastic Poisson variable directed by $V^{(1)}$, the limit in distribution of $Z_0(m)$. This method can be iterated to obtain the following result.
\begin{lem}\label{lem:joint.conv.finite.dem.proj} Assume that $\xi$ satisfies Assumption (A) with $\eta=0$. Let $\Delta \tau^{(i)}_N = \tau^{(i)} - \tau^{(i-1)} $. Then, under $\P_\oplus$,
\begin{equation}
\big\{ \big( Z_0 (\tau^{(i-1)} + m) \,, Z_{1}(\tau^{(i)})/N \, , \Delta \tau^{(i)}_N \big) \,; 1\leq i \leq l \big\} \,,
\end{equation}
converges weakly to
\begin{equation}
\Big\{ \, \Big( V^{(i)} , G( V^{(i)} ) \,, m+ 1_{\{ V^{(i)} \neq 0 \}} \Big) \,; 1\leq i \leq l \Big\} \,.
\end{equation}
where $l \in \N$ is fixed. The distributions of the $V^{(i)}$ are determined by
\begin{align}
\P & \big( V^{(i+1)} = n \, \mid \, V^{(j)} = t_j \,,\ j \leq i \big) \nonumber \\
& = \P \big( V^{(i+1)} = n \, \mid \, V^{(i)} =t_i \big) = e^{-G(t_i) \rho^{m}}\frac{( G(t_i) \rho^{m} )^n}{n!} \, ,
\end{align}
where $1 \leq i \leq l-1\,$, $n, t_j \in \N$ and $V^{(1)}$ is distributed according to a Poisson variable with parameter $\rho^{m}$.
\end{lem}
\emph{Proof.} It is a direct consequence of Proposition \ref{prop:chain.started.random.position} and an induction argument. \hfill $\Box$
\vspace{.2cm}
With very little effort we can state Lemma \ref{lem:joint.conv.finite.dem.proj} in a more general framework. We consider the space of real valued sequences $\R^\N$, on which we define the metric
$$
d(a,b) = \sum_{n=0}^{\infty} \frac{ | a_n - b_n | \wedge 1 }{2^n} \,.
$$
A complete description of this topological space can be found in \cite{Billingsley2009}. Since time is discrete, the following Proposition holds as a Corollary of Lemma \ref{lem:joint.conv.finite.dem.proj}.
\begin{prop}\label{prop:conv.joint.law.process} Assume that $\xi$ satisfies Assumption (A) with $\eta=0\,$. Then, under $\P_\oplus$ the process
\begin{equation}
\big\{ \, \big( Z_0(\tau^{(i-1)} +m) \, , Z_{1}( \tau^{(i)} )/N \, , \Delta \tau^{(i)} \big) ; i \in \N \big\}
\end{equation}
converges weakly in $\big( (\,\R^\N \,)^3 ,d \big)$. The limit distribution $\mathbb{W}_\theta$ is given by
\begin{equation}
\big\{ \, \big( V^{(i)}, G(V^{(i)}) , m+1_{\{ V^{(i)} \neq 0 \}} \big) ; i \in \N \big\}\,,
\end{equation}
where $V^{(i)}$ is a Markov chain with initial position at $0$ and transition matrix given by
\begin{equation}
\P \big( V^{(i+1)} = n \mid V^{(i)} = t \big) = e^{-G(t) \rho^{m}}\frac{(G(t) \rho^{m} )^n}{n!} \, ,
\end{equation}
that is, a Poisson distribution with parameter $\rho^m G(t)$.
\end{prop}
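The limit chain of Proposition \ref{prop:conv.joint.law.process} is explicit and easy to simulate. The Python sketch below (our own illustration; the parameter values are arbitrary) estimates $\E_0[\mathcal{T}_0]$, where $\mathcal{T}_0$ denotes the first $i \geq 1$ with $V^{(i)} = 0$; comparing Theorem \ref{teo.speed.3.states} with the results of Subsection \ref{sec.conv.integral} suggests the identification $g(\theta) = \E_0[\mathcal{T}_0]$, which the last printed quantity uses.
\begin{verbatim}
import numpy as np

def sample_T0(rho, m, theta, rng):
    # First i >= 1 with V^(i) = 0, where V^(0) = 0 and
    # V^(i+1) ~ Poisson(rho^m * G(V^(i))), with G(0) = 1, G(k) = 1 - theta^k.
    G = lambda k: 1.0 if k == 0 else 1.0 - theta ** k
    v, i = 0, 0
    while True:
        i += 1
        v = rng.poisson(rho ** m * G(v))
        if v == 0:
            return i

rng = np.random.default_rng(3)
rho, m, theta, runs = 1.0, 2, 0.5, 10**5
ET0 = np.mean([sample_T0(rho, m, theta, rng) for _ in range(runs)])
print("E[T_0] ~", ET0, "=> critical speed ~", -1.0 / (m + 1 - 1.0 / ET0))
\end{verbatim}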
\subsubsection*{Process convergence in the case $\eta>0$}
For the sake of completeness, we state the result in the case $\eta>0$. We omit the proof of the Proposition and leave the details to the reader.
\begin{prop}\label{prop:conv.joint.law.process.eta.0} Assume that $\xi$ satisfies Assumption (A') and that $\eta>0\,$. Then under $\P_\oplus$ the process
\begin{equation}
\big\{ \big( Z_{1}( \tau^{(i)})/N , \Delta \tau^{(i)} \big) ; i \in \N \big\}
\end{equation}
converges weakly in $\big( (\R^\N )^2 , d \big)$. The limit distribution is non-random, and concentrated on the sequence
\begin{equation}
\big\{ \, \big( a_i , b_i \big); \quad a_i = 1 \mbox{ and } b_i = m+1 \ \forall i \in \N \big\}\,.
\end{equation}
\end{prop}
\subsection{Uniform integrability and bounds for $T_\triangle$ } \label{bound}
In this Subsection, we show that if $\xi$ satisfies Assumption (A'), then $\E_x [ T_\triangle ]$ is bounded independently of the initial configuration $x$:
\begin{equation}\label{equa.aim.section.bound}
\sup_{N \in \N} \ \sup_{x \in \Omega(N)} \E_{x} [T_\triangle] < \infty \,.
\end{equation}
We prove (\ref{equa.aim.section.bound}) through the following steps.
\begin{enumerate}
\item There exists a set, which we denote by $\Xi $, such that for $N$ large enough and every starting point $x \in \Xi$ there is a positive probability of visiting $\triangle$ by time $m+1$
\begin{equation} \label{equa:first.step.x.in.Lambda}
\P_x \big( T_\triangle \leq m+1 \big) > c_{\thec} \,,
\end{equation}
\setcounter{aux1}{\value{c}} where $c_{\arabic{aux1}}>0$ does not depend on $x \in \Xi\,$;
\item For $N$ sufficiently large and every starting point $x \in \Omega(N)$ there is a positive probability of visiting $\Xi$ by time $m+1$
\begin{equation} \label{equa:second.step.T.Lambda}
\P_x \big( T_\Xi \leq m+1 \big)>c_{\thec} \, ,
\end{equation}
\setcounter{aux2}{\value{c}}where $T_{\Xi}:=\inf\{ t ; Z(t) \in \Xi \}$ and $c_{\arabic{aux2}} $ does not depend on $x \in \Omega(N)\,.$
\end{enumerate}
Before proving these two statements, we show that they indeed imply (\ref{equa.aim.section.bound}).
\begin{prop} \label{prop:bound.integral.T} Assume that $\xi$ satisfies Assumption (A'). Then
\begin{equation}\label{equa.prop:bound.integral.T}
\sup_{N \in \N}\sup_{x \in \Omega(N)} \E_{x} [T_\triangle] < K \,,
\end{equation}
where $K < \infty \,.$
\end{prop}
\emph{Proof.} If (\ref{equa:first.step.x.in.Lambda}) and (\ref{equa:second.step.T.Lambda}) hold, then for $N$ large enough and any starting point $x \in \Omega(N)$
\begin{align*}
\P_x & \big( T_\triangle \leq 2 m+2 \big) \\
& \geq \P_x \big( T_\triangle \leq 2 m+2; T_\Xi \leq m+1 \big) \\
& \geq \P_x \big( T_\triangle - T_\Xi \leq m+1 ; T_\Xi \leq m+1 \big) \\
&= \E_x \big[ \P_{Z(T_\Xi )} [T_\triangle \leq m+1] 1_{ T_\Xi \leq m+1} \big] \hspace{1.0cm} \mbox{ (Markov property)} \\
& \geq c_{\arabic{aux1}} \, c_{\arabic{aux2}} >0\,.
\end{align*}
Let $c_{\thec} = 1 - c_{\arabic{aux1}} \, c_{\arabic{aux2}} < 1\,.$ \setcounter{aux}{\value{c}} Then it is clear that $\sup_{y \in \Omega (N)} \P_y (T_\triangle \geq 2 m + 3 ) \leq c_{\arabic{aux}} \,$. For $i \in \N$, let $j$ be such that $(2 m+3)j \leq i <(2 m+3)(j+1)$. Using the Markov property $j$ times we obtain the upper bound
$$
\P_x (T_\triangle \geq i) \leq \biggl( \,\, \sup_{y \in \Omega(N)} \bigl\{ \P_y \big( T_\triangle \geq (2 m+3) \big) \bigr\} \biggr)^j\,.
$$
We now show that the expected value of $T_\triangle$ is bounded.
\begin{align*}
\E_x [T_\triangle] & = \sum_{i=0}^{\infty} \P_x(\,T_\triangle \geq i\,) \\
& \leq \sum_{j=0}^{\infty}(2 m+3) \sup_{y \in \Omega(N)} \bigl\{ \P_y \big( T_\triangle \geq (2 m+3) \big) \bigr\}^j \\
& \leq \sum_{j=0}^{\infty} (2 m+3) c_{\arabic{aux}}^{\,j} = \frac{(2 m+3)}{1-c_{\arabic{aux}}}.
\end{align*}
Therefore (\ref{equa.prop:bound.integral.T}) holds with $ K= (2 m+3) / ( 1-c_{\arabic{aux}} ) \,.$ \hfill $\Box$
\vspace{.2cm}
We now present the formal definition of $\Xi$.
\begin{defi} \label{defi:Lambda} For $x\in \Omega(N)$ define $I(x) = \inf\{ i \in \N ; x_i \geq 1 \}$. Then, $\Xi$ is the subset of $\Omega(N)$ defined as follows.
$$
\Xi := \bigl\{ x \in \Omega(N)\, ; x_{I(x)} \geq \alpha N \bigr\},
$$
where $0<\alpha<1- \theta'$ and $\theta'$ is given by Assumption (A'). Hence, if $Z(t) \in \Xi$ there are at least $ \alpha N$ leaders at time $t\,.$
\end{defi}
We prove (\ref{equa:first.step.x.in.Lambda}) and (\ref{equa:second.step.T.Lambda}) in the next two Lemmas.
\begin{lem} \label{lem:jump.good.set} Assume that $\xi$ satisfies Assumption (A'). Then, for $\Xi$ given by Definition \ref{defi:Lambda} there exists a positive constant $c_{\arabic{aux1}}$ such that for $N$ sufficiently large (\ref{equa:first.step.x.in.Lambda}) holds for all $x \in \Xi$.
\end{lem}
\emph{Proof.} Note that
\begin{align*}
\P_x \big( T_\triangle \leq m+1 \big) & \geq \P_x \big( Z(\tau) = \triangle \, ; \tau \leq m+1 \big) \\
& = \P_x ( Z(\tau) = \triangle ) - \P_x \big( Z(\tau) = \triangle \, ; \tau \geq m+2 \big) \, .
\end{align*}
From Corollary \ref{cor:Y.big.t}, the second term in the lower bound converges to zero as $N \to \infty$, and the rate of decay is uniform in $x \in \Omega(N)$. Hence it suffices to show that there exists a positive constant $c_{\thec}$ \setcounter{aux}{\value{c}} such that, uniformly in $x \in \Xi $,
\begin{equation}\label{lem:eq2}
\lim_{N \to \infty} \P_x ( \, Z(\tau) = \triangle \, ) \geq c_{\arabic{aux}} \,.
\end{equation}
To prove (\ref{lem:eq2}) we distinguish between the cases $\eta = 0$ and $\eta >0$. We start with the latter case $\eta>0\,.$ Let $Y(t)= Z_0 (t)1_{\{t \leq \tau\}} $ and denote by $Y_k$ the process started from $\delta_k$. Then, for $x \in \Xi$ we can couple the processes in such a way that
\begin{equation}\label{equa:sto.bounded}
Y_{\lfloor \alpha N \rfloor} (t) \leq Y_{x_{I(x)}} (t) \leq Y_N (t),
\end{equation}
where $x_{I(x)}$ is the number of leaders when $Z(0)= x$. From the proof of Corollary \ref{cor:fluct.Y.random} and (\ref{equa:sto.bounded}) we obtain
\begin{equation}\label{equa:lem:many:Y}
\lim_{N \to \infty}\P_{x} \big( ( \rho^{m} - \varepsilon ) \alpha N^{\eta r} \leq Z_0 (m) \leq ( \rho^{m} + \varepsilon ) N^{\eta r} \big) = 1 \, .
\end{equation}
Finally, applying the arguments of Lemma \ref{lem:techinical.multi},
$$
\lim_{N \to \infty} \P_x ( \, Z (m+1 ) = \triangle \, ) =1 \,.
$$
In particular, any $0 < c_{\arabic{aux}} < 1 $ satisfies (\ref{lem:eq2}) for $N$ sufficiently large.
The case where $\eta=0$ is similar but requires an additional step. Equation (\ref{equa:sto.bounded}) still holds, hence by the same arguments we obtain
$$
\lim_{N \to \infty} \P_{x} \Big( \, ( \, \rho^{m-1} - \varepsilon \, ) \alpha N^{1/m} \leq Z_0 (m-1) \leq ( \, \rho^{m-1} + \varepsilon \, ) N^{1/m} \ \Big) = 1 \, .
$$
From Lemma \ref{lem:mult.techinique.many.particles}, we see that $\P_x \big( Z(\tau) = \triangle \mid Z_0 (m) = 0 \big) \to 1\,, $ and from the coupling argument (\ref{equa:sto.bounded}) and Corollary \ref{cor:converg.Y.eta.random} we obtain the following limit
$$
\P_{x} \big( Z_0 (m) = 0 \big) \geq \P_{\oplus} \big( Z_0 (m) = 0 \big) \to e^{-\rho^m} \, .
$$
Then, any $c_{\arabic{aux}} $ smaller than $ e^{-\rho^m} $ satisfies (\ref{lem:eq2}) for $N$ sufficiently large, proving the statement. \hfill $\Box$
\medskip
\begin{lem} \label{lem:jumping.Lambda} Assume that $\xi$ satisfies Assumption (A'). Then, for $\Xi$ given by Definition \ref{defi:Lambda} there exists a positive constant $c_{\arabic{aux2}}$ such that for $N$ large enough (\ref{equa:second.step.T.Lambda}) holds for all $x \in \Omega(N)$.
\end{lem}
\emph{Proof.} Since $ \P_x ( \tau \geq m+2 )$ converges to zero uniformly in $x \in \Omega(N)$, it is sufficient to show that for $N$ sufficiently large
$$
\P_x \big( Z ( \tau ) \in \Xi \big) \geq c_{\thec}\,,
$$
\setcounter{aux}{\value{c}}where $c_{\arabic{aux}}>0$ does not depend on $x\in\Omega(N)\,$.
\begin{align}\label{inq:4}
\P_x &( Z ( \tau ) \in \Xi ) = \sum_{k = 1}^{\infty} \P_x \big( Z ( k ) \in \Xi ; \tau = k \big) \nonumber \\
& = \sum_{k = 1}^{\infty} \sum_{y \in \Omega(N)} \E_x \big[ \P_y ( Z ( 1 ) \in \Xi ; \tau = 1 ) 1_{ \{ Z(k-1)=y ;\, \tau \geq k \} } \big] \quad \mbox{(Markov property)} \nonumber \\
& \geq \inf_{y \in \Omega(N)} \big\{ \P_y ( Z ( 1 ) \in \Xi \mid \tau = 1 ) \big\} \sum_{k = 1}^{\infty} \sum_{y \in \Omega(N)} \E_x \Big[ \P_y(\tau=1) 1_{ \{ Z(k-1)=y ; \, \tau \geq k \} } \Big] \nonumber \\
& = \inf_{y \in \Omega(N)} \big\{ \P_y ( Z ( 1 ) \in \Xi \mid \tau = 1 ) \big\} \,.
\end{align}
Then, it suffices to show that the infimum in (\ref{inq:4}) is larger than $ c_{\arabic{aux}}$. Recall that under $\P_y$, $Z(1) $ is distributed according to $\mathcal{M}\big( N; s(y) \big)$, a Multinomial with infinitely many classes. Then, conditionally on $\{ \tau = 1 \}$, the probability that a given particle is at $-1$ is given by
$$
\frac{s_{1}(y) }{(1-s_0(y))} \, .
$$
The positions of the particles remain independent under the conditional probability and we conclude that
$$
\P_y \big(Z_1(1) = \cdot \mid \tau=1 \big) = \mathcal{B}\big(N; s_{1}(y)/(1-s_0(y)) \big)(\cdot) \,.
$$
Assume that $y_0 \geq 1$ (otherwise consider the shifted vector $\tilde{y}$). Then
\begin{align*}
\frac{s_{1}(y)}{1-s_0(y)} & \geq \frac{\bigl( 1- p_0 \bigr)^{y_0} - q_2^{y_{0}} }{\bigl( 1- p_0 \bigr)^{y_0}}\\
& \geq 1 - (\theta'\,)^{y_{0}} > \alpha ,
\end{align*}
where the lower bound holds for $N$ large enough as a consequence of Assumption (A') and the definition of $\alpha$. A large deviation argument allows us to conclude that for $\varepsilon$ small enough
$$
\P_y \big( Z(1) \in \Xi \mid Z_0(1) = 0 \big) \geq \P_y \big( Z_{1}(1) \geq ( \alpha + \varepsilon ) N \mid Z_0(1) = 0 \big) \to 1 \,.
$$
Then, the infimum in (\ref{inq:4}) is larger than any $c_{\arabic{aux}}<1$ for $N$ sufficiently large. This finishes the proof. \hfill $\Box$
\vspace{.2cm}
The next Corollary generalizes (\ref{equa.aim.section.bound}) to the later visiting times of $\triangle$.
\begin{cor} \label{cor:taui,bouded} Assume that $\xi$ satisfies Assumption (A'). Then, for every $i \in \N$, $\sup_{x \in \Omega }\E_x [T_\triangle^{(i)} ]$ and $\sup_{x \in \Omega }\E_x [\tau^{(i)} ]$ are bounded uniformly in $N$. In particular, under $\P_x$ the families of random variables $T_\triangle^{(i)}$ and $\tau^{(i)}$ are uniformly integrable.
\end{cor}
\emph{Proof.} Since $\tau^{(i)} \leq T_\triangle^{(i)}$, it suffices to prove the statements for $T_\triangle^{(i)}\,.$ To prove that the expectation is bounded we proceed inductively and apply the strong Markov property at time $T_\triangle^{(i-1)}\,.$ It is clear that
$$
\sup_{x \in \Omega }\E_x [T_\triangle^{(i)} ] \leq K^i \,,
$$
where $K$ is given by (\ref{equa.prop:bound.integral.T}). We now prove the uniform integrability. Applying the Markov property we obtain the upper bound
$$
\E_x [ T_\triangle^{(i)} \,;\, T_\triangle^{(i)} \geq l ] \leq \big( \sup_{x\in \Omega(N)} \E_{x} [ T_\triangle^{(i)} ] +l \, \big) \P_x ( T_\triangle^{(i)} \geq l )\,.
$$
It is not difficult to see that the right-hand side converges to zero as $l \to \infty$, finishing the proof.
\hfill $\Box$
\subsection{Convergence of Some Related Integrals}\label{sec.conv.integral}
To compute the front velocity in Subsection \ref{sec:front.speed.3.states}, we have to calculate two integrals $ \E_\oplus [T_\triangle]$ and $\E \big[ \phi \big( X ( T_\triangle ) \big) \big]$, where in the latter we assume that all particles start from zero. Hence
$$
\phi \big( X ( T_\triangle ) \big) = - \sum_{i=1}^{\infty} \min \{ l \in \N ; Z_l (\tau^{(i)}) \neq 0 \} 1_{\{\tau^{(i)} \leq T_\triangle \}} \,.
$$
In the next Lemma, we use for the first time the condition $\E[|\vartheta|] < \infty$ that appears in Assumption (A) and Assumption (A').
\begin{lem}\label{lem.int.conv.T.triangle.and.phi.X.eta>0} Assume that $\xi$ satisfies Assumption (A'). Then for every $x \in \Omega(N)$
\begin{equation}\label{equa.lem.int.conv.T.triangle.and.phi.X.eta>0}
\E_x \big[ \min \{ l \in \N ; Z_l (\tau) \neq 0 \} \big] = 1+o(1)\,.
\end{equation}
The term $o(1)$ converges to zero, as $N \to \infty$, independently of the initial condition $x \in \Omega(N)$.
\end{lem}
\emph{Proof.} To prove the Lemma, it suffices to show that the left-hand side in (\ref{equa.lem.int.conv.T.triangle.and.phi.X.eta>0}) is bounded from above by $1+o(1)$. By an argument similar to the one used in Lemma \ref{lem:jumping.Lambda} we obtain that
\begin{align}\label{equa.prova.lem.int.conv.T.triangle.and.phi.X.eta>0}
\E_x &\Big[ \min\{ l \in \N; Z_l (\tau) \neq 0 \} \Big] \nonumber \\
& \leq \sup_{y \in \Omega(N)} \E_y \Big[ \min \{l \in \N; Z_l (1) \neq 0 \} \mid \tau = 1 \Big] \nonumber \\
& =1 + \sup_{y \in \Omega(N)} \E_y \Big[ \min \{l \in \N; Z_l (1) \neq 0 \} 1_{ \{ \min\{l \in \N; Z_l (1) \neq 0 \} \geq 2 \} } \mid \tau = 1 \Big]\,.
\end{align}
Under the conditional probability, $Z(1)$ is a Multinomial with infinitely many classes and parameters $s_i(y)/ \big(1-s_0(y) \big)\,.$ Therefore, the probability that a given particle is at $-1$ is larger than $1-\theta'$, as a consequence of Assumption (A'). Moreover, the minimum is bounded from above by some $|\xi_{ij}|$: indeed, it suffices to choose $i$ such that $X_i(0)$ is a leader, in which case
$$
-\min \{l \in \N; Z_l (1) \neq 0 \} = \phi\big( X (1)\big) - X_i(0) \geq \xi_{ij}.
$$
Hence, we can give an upper bound for the right-hand side in (\ref{equa.prova.lem.int.conv.T.triangle.and.phi.X.eta>0}). In fact, for $y \in \Omega(N)$ we obtain that
\begin{align*}
\E_y & \Big[ \min \{ l \in \N; Z_l (1) \neq 0 \} 1_{ \{ \min \{l \in \N; Z_l (1) \neq 0 \} \geq 2 \} } \mid \tau = 1 \Big] \\
&\leq \E_y \Big[| \xi_{ij} | 1_{ \{ \min \{ l \in \N; Z_l (1) \neq 0 \} \geq 2 \} } \mid \tau = 1 \Big] \\
& = \E\big[ |\vartheta_{ij}| \big] \P_y \Big( \min \{ l \in \N; Z_l (1) \neq 0 \} \geq 2 \mid \tau= 1 \Big) \leq \E\big[ |\vartheta_{ij}| \big] (\theta')^N\,.
\end{align*}
This bound converges to zero independently of the initial position $y \in \Omega(N)$. \hfill $\Box$
With Lemma \ref{lem.int.conv.T.triangle.and.phi.X.eta>0} at hand, we prove the following result in the noncritical case.
\begin{prop}\label{prop.int.conv.T.triangle.and.phi.X.eta>0} Assume that $\xi$ satisfies Assumption (A') with $\eta>0\,$. Then
\begin{equation}
\lim_{N\to \infty} \E_\oplus [T_\triangle ] = m+1 \quad \text{and} \quad \lim_{N \to \infty }\E \big[ \phi \big(\,X ( T_\triangle ) \big) \big] = -1\,.
\end{equation}
\end{prop}
\emph{Proof.} The first limit is a direct consequence of the uniform integrability of $T_\triangle$ and $ \P_\oplus ( T_\triangle = m+1 ) \to 1 \,,$ as $N\to \infty\,$. We now prove the second statement.
\begin{align} \label{equa.prop.int.conv.T.triangle.and.phi.X.eta>0}
\E &\big[ \phi \big( X( T_\triangle ) \big) \big] \nonumber \\
& = -\sum_{i=1}^{\infty} \E_\oplus \bigg[ \min \{l \in \N; Z_l (\tau^{(i)}) \neq 0 \} 1_{\{ T_\triangle \geq \tau^{(i)} \}} \bigg] \nonumber \\
& = -\sum_{ i =1 }^{\infty} \sum_{y \in \Omega(N)} \E_\oplus \bigg[ \E_y \Big[ \min \{ l \in \N; Z_l (\tau^{(i)}) \neq 0 \} \Big] 1_{\{Z(\tau^{(i-1)})= y \,;\, T_\triangle \geq \tau^{(i)} \}} \bigg] \nonumber \\
& = \big( -1 +o(1) \big) \sum_{i=1}^{\infty} \P_\oplus (T_\triangle \geq \tau^{(i)}) \, .
\end{align}
The last equality in (\ref{equa.prop.int.conv.T.triangle.and.phi.X.eta>0}) is a consequence of Lemma \ref{lem.int.conv.T.triangle.and.phi.X.eta>0}. The sum in (\ref{equa.prop.int.conv.T.triangle.and.phi.X.eta>0}) converges to one; indeed, this is a consequence of the strong Markov property and the uniform integrability of $T_\triangle\,.$ Hence, the expression in (\ref{equa.prop.int.conv.T.triangle.and.phi.X.eta>0}) converges to $-1$, which finishes the proof of the Proposition. \hfill $\Box$
The critical case is more delicate and we prove the following result.
\begin{prop}\label{prop.int.conv.T.triangle.and.phi.X} Assume that $\xi$ satisfies Assumption (A) and that $\eta=0\,$. Then
\begin{equation}\label{equa.prop.int.conv.T.triangle.and.phi.X}
\lim_{N\to \infty} \E_\oplus [T_\triangle ] = (m+1)\E_0 [\mathcal{T}_0 ] - 1 \quad \text{and} \quad \lim_{N \to \infty }\E [\phi \big(X ( T_\triangle ) \big)] = - \E_0 [\mathcal{T}_0]\,,
\end{equation}
where $\mathcal{T}_0$ is the stopping time given by $\mathcal{T}_0 := \min\{ i \in \N; V^{(i)} = 0\}$, for $V^{(i)}$ a Markov chain defined as in Proposition \ref{prop:conv.joint.law.process}.
\end{prop}
\vspace{.2cm}
From Proposition \ref{prop:conv.joint.law.process}, $Z_1(\tau^{(i)})/N $ converges in distribution to $G(V^{(i)}) $ as $N$ goes to infinity. We would like to state that
$$
T_\triangle = \min \{ i \in \N; Z_1(\tau^{(i)})/N = 1 \} \stackrel{d}{\rightarrow} \min \{ i \in \N; G(V^{(i)}) = 1 \} \,.
$$
Nevertheless, the functional is not continuous, and the above convergence must be justified.
\begin{lem}\label{lem.conv.mathfrak.T} Assume that $\xi$ satisfies Assumption (A) with $\eta=0\,$. Then
$$
\min \{ i \in \N ; Z_0(\tau^{(i-1)} +m) = 0\} \stackrel{d}{\longrightarrow} \min \{ i \in \N ; V^{(i)} = 0 \} =\mathcal{T}_0 \,.
$$
\end{lem}
\emph{Proof.} The minimum becomes continuous when restricted to $\N^\N\,.$ Since $ Z_0(\tau^{(i-1)} +m)$ converges in distribution to $V^{(i)}$, we conclude that the minima also converge in distribution, which proves the result. We refer to \cite{Billingsley2009} for more details on convergence in distribution. \hfill $\Box$
\vspace{.2cm}
\begin{lem} \label{lem:law.convergence.T.triangle} Assume that $\xi$ satisfies Assumption (A) with $\eta=0\,$. Then $ T_\triangle $ converges in distribution to $\mathcal{T}_0 $ as $N \to \infty$. In particular,
$$
T_\triangle \stackrel{d}{\to} \min \{ i \in \N ; G(V^{(i)}) = 1 \} \,.
$$
\end{lem}
\emph{Proof.} From Proposition \ref{prop:chain.started.random.position}
$$
\min \{ i \in \N ; Z_0(\tau^{(i-1)} +m) = 0\} - \min \{ i \in \N ; Z_1(\tau^{(i)})/N =1 \} \,,
$$
converges in distribution to zero. Hence, from Lemma \ref{lem.conv.mathfrak.T} we obtain that $T_\triangle$ converges in distribution to $\mathcal{T}_0 $. The second statement follows from
$$
\min \{ i \in \N ; G(V^{(i)}) =1 \} = \min \{ i \in \N ; V^{(i)}= 0 \} = \mathcal{T}_0 \quad a.s.
$$ \hfill $\Box$
\vspace{.2cm}
\emph{Proof of Proposition \ref{prop.int.conv.T.triangle.and.phi.X}.} It is not hard to see that if $\min \{ i \in \N ; Z_1(\tau^{(i)})/N =1 \} = k $, then $T_\triangle = \tau^{(k)}$. So we can write
\begin{align*}
\E_\oplus [T_\triangle ] & = \sum_{k=1}^{\infty} \E_\oplus [ \tau^{(k)} \, 1_{\{T_\triangle = \tau^{(k)} \}} ] \\
& = \sum_{k=1}^{\infty} \E_\oplus \bigg[ \sum_{j=1}^k \big(\tau^{(j)} - \tau^{(j-1)} \big)\, 1_{\{ \min \{ i \in \N ;\, Z_1(\tau^{(i)})/N =1 \} = k \}} \bigg] \, .
\end{align*}
For a fixed $k$ the random variable $ \sum_{j=1}^k ( \tau^{(j)} - \tau^{(j-1)} ) \,1_{\{ \min \{ i \in \N ;\, Z_1(\tau^{(i)})/N = 1\} = k \}}$ converges in law to
$$
\sum_{j=1}^k (m + 1_{\{ V^{(j)} \neq 0 \}}\, )\, 1_{\{ \min \{ i \in \N ; G(V^{(i)}) =1 \} = k \}} = \big(( m+1 ) \, k - 1 \big) 1_{\{ \mathcal{T}_0 = k \}} \,.
$$
Since $\tau^{(k)}$ is uniformly integrable, the convergence also holds in $L^1$. From the uniform integrability of $T_\triangle$, we obtain the $L^1$ convergence of the sum, and the following limit holds.
\begin{align*}
\lim_{N\to \infty} \E_\oplus [T_\triangle ] & = \sum_{k=1}^{\infty} \big( (m+1) k -1 \big) \P_0 \big( \mathcal{T}_0 = k \big) \notag \\
& = (m+1) \E_0 \big[ \mathcal{T}_0 \big] -\sum_{k=1}^{\infty} \P_0 \big( \mathcal{T}_0 = k \big) \\
&= (m+1) \E_0 \big[ \mathcal{T}_0 \big] - 1\, .
\end{align*}
This proves the first statement of Proposition \ref{prop.int.conv.T.triangle.and.phi.X}. We now prove the second limit in (\ref{equa.prop.int.conv.T.triangle.and.phi.X}). From the proof of Proposition \ref{prop.int.conv.T.triangle.and.phi.X.eta>0} we obtain that
$$
\E_\oplus \big[ \phi \big( X( T_\triangle ) \big) \big] = -\big(1 +o(1) \big) \sum_{i=1}^{\infty} \P_\oplus \big( \tau^{(i)} \leq T_\triangle \big)\,.
$$
From the uniform integrability of $T_\triangle$ we obtain that $\sum_{i=1}^{\infty} \P_\oplus \big( \tau^{(i)} \leq T_\triangle \big) \to \E_0 \big[ \mathcal{T}_0 \big]$, which finishes the proof. \hfill $\Box$
\vspace{.2cm}
The transition matrix of $V^{(i)}$ depends on $G$ and, a fortiori, on $\theta$. A coupling argument shows that $E_0 [\mathcal{T}_0]$ is non-increasing in $\theta$. Nevertheless, we do not know how to calculate the integral explicitly. However, the asymptotic behavior as $\theta \to 0$ and as $\theta \to 1$ is easy to compute.
\begin{prop}\label{prop:conv.gamma.theta.to.0} Let $V^{(i)}$ be the Markov chain whose transition matrix is given in Proposition \ref{prop:conv.joint.law.process}. Then
$$
\lim_{\theta \to 0} E_0 [\mathcal{T}_0 ] = \exp \left( \rho^{m} \right) \, .
$$
\end{prop}
\emph{Proof.} We write $E_0 [ \mathcal{T}_0]= \sum_{k=0}^{\infty} P_0 ( \mathcal{T}_0 \geq k+1 ) \, .$ For $l \geq 1$ we have $1 \geq G(l) \geq G(1) = 1-\theta$, and
\begin{align*}
P_0 & (\mathcal{T}_0 \geq k+1 ) \\
& = \sum_{ l_1 = 1}^{\infty} e^{-\rho^{m}} \frac{(\rho^{m} )^{l_1}}{l_1 !} \ldots \sum_{ l_{k-1} = 1}^{\infty} e^{-\rho^{m} G(l_{k-2})} \frac{(\rho^{m} G(l_{k-2}) \,)^{l_{k-1}}}{l_{k-1} !} \big(1- e^{-\rho^{m} G(l_{k-1})} \big) \,.
\end{align*}
The last expression is bounded from above by $( 1- e^{-\rho^{m} } )^k$. Since $ G(l) \to 1 $ as $\theta \to 0 $ we can conclude by dominated convergence. \hfill $\Box$
\vspace{.2cm}
We point out here that the case $\theta \to 0$ corresponds to the two-state model studied in Section \ref{sec.derrida.two}. Informally, when $\theta$ is very small there is a high probability that $Z(\tau)$ starts afresh from $\triangle$. A similar computation can be done in the case where $\theta$ converges to one.
\begin{prop} Let $V^{(i)}$ be the Markov chain whose transition matrix is given in Proposition \ref{prop:conv.joint.law.process}. Then
$$
\lim_{\theta \to 1} E_0 [\mathcal{T}_0 ] = 2 - \exp \left( - \rho^{m} \right) \, .
$$
\end{prop}
\emph{Proof.} The proof follows the same lines as that of Proposition \ref{prop:conv.gamma.theta.to.0} and we leave the details to the reader. \hfill $\Box$
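Although we cannot compute $E_0 [\mathcal{T}_0]$ in closed form for intermediate $\theta$, it is straightforward to estimate it by Monte Carlo. The following Python sketch simulates the chain $V^{(i)}$ using the transition law implicit in the proofs above, namely $V^{(i)} \mid V^{(i-1)} = l \sim \mathrm{Poisson}\big(\rho^{m} G(l)\big)$ with $G(0)=1$. The particular map $G$ below is a hypothetical stand-in satisfying $G(1) = 1-\theta$ and $G(l) \to 1$ as $\theta \to 0$, and should be replaced by the actual $G$ of Proposition \ref{prop:conv.joint.law.process}; for small $\theta$ the estimate approaches the limit $\exp(\rho^m)$ computed above.
\begin{verbatim}
# Monte Carlo estimate of E_0[T_0] for the chain V^(i) with
# V^(i) | V^(i-1) = l ~ Poisson(rho^m * G(l)) and G(0) = 1.
# The map G below is a hypothetical stand-in; substitute the
# actual G of Proposition prop:conv.joint.law.process.
import numpy as np

rng = np.random.default_rng(0)

def G(l, theta):
    return 1.0 if l == 0 else 1.0 - theta ** l  # hypothetical choice

def sample_T0(rho, m, theta):
    v, i = 0, 0
    while True:
        i += 1
        v = rng.poisson(rho ** m * G(v, theta))
        if v == 0:
            return i

rho, m, theta, n = 0.5, 2, 0.01, 100_000
est = np.mean([sample_T0(rho, m, theta) for _ in range(n)])
print(est, np.exp(rho ** m))  # small theta: est approaches exp(rho^m)
\end{verbatim}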
\subsection{Front speed}\label{sec:front.speed.3.states}
As in Subsection \ref{subsec.front.speed}, we explore the renewal structure of $Z$ that starts afresh from $\triangle$. Let $N (t) = \max \{ i \ ; \ T_\triangle^{(i)} \leq t \} $. Then
$$
\phi \big( X(t)\big) = \sum_{i=1}^{N(t)} \Big[ \phi \big( X(T_\triangle^{(i+1)} ) \big) -\phi \big( X( T_\triangle^{(i)}) \big) \Big] + o (t) \, .
$$
Taking the limit, as $t \to \infty$, we have that
\begin{align} \label{equa:eta0.aim}
\lim_{t \to \infty }\frac{\phi \big( X(t) \big)}{t} & = \lim_{t \to \infty} \frac{1}{t}\sum_{i=1}^{N (t)} \Big[ \phi \big( X( T_\triangle^{(i+1)} ) \big) - \phi \big( X( T_\triangle^{(i)}) \big) \Big] \nonumber \\
& = \frac{\E \big[ \phi \big( X(T_\triangle ) \big) \big]}{\E_\oplus [T_\triangle ] } \qquad a.s.
\end{align}
The limit is a consequence of the ergodic theorem and the renewal structure. In Subsection \ref{sec.conv.integral}, we calculated the limits of the above expected values. We obtain that
$$
\lim_{N \to \infty} v_N = \left\{
\begin{array}{lcl}
-\big( 1+ \lfloor 1/r \rfloor \big)^{-1}, & \mbox{if} & 1/r \not\in \N \\
- \big( \lfloor 1/r \rfloor + 1- 1/E_0 [ \mathcal{T}_0] \big)^{-1} , & \mbox{if} & 1/r = m \in \N \,,
\end{array}
\right.
$$
which proves Theorem \ref{teo.speed.3.states} with $g(\theta) = E_0 [ \mathcal{T}_0]$.
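The limiting speed above is a closed-form function of $r$ and $g(\theta)$; for illustration, the following Python sketch is a direct numerical transcription, where \texttt{g\_theta} stands for $E_0 [\mathcal{T}_0]$ (which can be estimated as in the previous subsection).
\begin{verbatim}
# Direct transcription of the limiting front speed; g_theta = E_0[T_0].
from math import floor

def limiting_speed(r, g_theta=None):
    inv = 1.0 / r
    if abs(inv - round(inv)) > 1e-12:     # noncritical case: 1/r not in N
        return -1.0 / (1 + floor(inv))
    m = round(inv)                        # critical case: 1/r = m in N
    return -1.0 / (m + 1 - 1.0 / g_theta)

print(limiting_speed(0.3))               # -1/4
print(limiting_speed(0.5, g_theta=1.6))  # -1/(3 - 0.625)
\end{verbatim}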
\section{Conclusion and sketch of the proof of Theorem \ref{teo.speed.3.states.gen}} \label{sec.conclusion.teo.speed.3.states.gen}
Theorem \ref{teo.speed.3.states.gen} follows as a corollary of Theorem \ref{teo.speed.3.states}, proved in Section \ref{sec.front.speed.three.states}. We do not prove it in detail, but we give a sketch of the proof. The constants $\lambda_0$ and $\lambda_1 - \lambda_0$ appearing in Theorem \ref{teo.speed.3.states.gen} are justified by an affine transformation. Then, it remains to explain how we pass from the distribution over the lattice to the more general one. In the proof of Theorem \ref{teo.speed.3.states} we see that in the discrete case $\vartheta $ contributes to the position of the leaders only in rare events. Indeed, if there are $k$ leaders at time $t$, the position of the front is determined by $\vartheta$ at $t+1$ only in the case where $\xi_{ij} (t+1) \leq -2 $ for at least $N^k$ random variables. The probability of this event is of order $\theta^{N^{k}}$, as a consequence of Assumption (A). This behavior still holds in the general case. For a complete proof we refer to Theorem 1.3 of \cite{Comets2013}, which applies also to our case with the obvious changes.
\medskip
The position of the front depends basically on the tail distribution of $\xi$, which is determined by the point masses $\lambda_0$ and $\lambda_1$. The only case where $\vartheta $ could contribute to the position of the front on long time scales is the non-integrable case. Then the mechanism responsible for propagation is of a very different nature, and the front is no longer pulled by the leading edge. In the rare events when the front moves backwards by more than $\lambda_0 - \lambda_1$, the contribution of $\vartheta$ would be non-negligible, depending on its tail and the global front profile. This problem is still open and much harder to solve.
\section*{Acknowledgments}
\emph{I thank Francis Comets for suggesting this problem to me and for his guidance in my Ph.D.}
\nocite{*}
\bibliographystyle{plain}
\section{Introduction}
Polarization measurements from astrophysical objects are a key piece
of information to decipher the physics and geometry of regions that
emit the observed photons. The emission from $\gamma$-ray bursts
(GRBs) has been notoriously difficult to understand due to the
complexity of modeling their broadband, prompt $\gamma$-ray
emission. Recent results have provided evidence that the prompt
emission is the result of synchrotron radiation from electrons
accelerated to ultra-high energies via magnetic reconnection
\citep{Burgess:2014aa,Zhang:2016aa,Zhang:2018aa,Burgess:2018}. Measurements
of the optical polarization from a GRB's prompt emission have
similarly pointed to a synchrotron origin of the emission
\citep{Troja:2017aa}. However, spectral modeling of photospheric based
emission has also provided adequate fits to a subset of GRBs
\citep{Ryde:2010aa,Ahlgren:2015aa,Vianello:2018il}. Measurements of
polarization can break this degeneracy
\citep{Toma2009,Gill:2018aa}. Photospheric emission will typically
produce unpolarized emission, although a moderate polarization level is
possible in special circumstances \citep{Lundman:2018aa}, and predicts
very specific changes of the polarization angle
\citep{Lundman:2014aa}. On the other hand, synchrotron emission
naturally produces a range of polarized emission depending on the
structure of the magnetic field and outflow geometry
\citep{Waxman2003,Lyutikov:2003, JG2003}. Thus, being able to fit
synchrotron emission to the observed spectrum while simultaneously
detecting polarization provides a clear view of the true emission
process.
Several reports of polarization measurements have been produced by a
variety of instruments; an overview can be found in
\citep{Covino2016}. Of these measurements, those from non-dedicated
instruments such as BATSE and RHESSI suffer from
instrumental effects or poorly understood systematics
\citep{MCCONNELL20171}, making it impossible to draw conclusions based
on these results. Additionally, several measurements were performed
using data from two instruments onboard the INTEGRAL satellite, IBIS
and SPI. Several of the GRB polarization measurements performed by
these instruments do not suffer from obvious errors in the analysis
and allow us to constrain the polarization parameter space. However,
for several of these measurements systematic uncertainties also make
it difficult to draw conclusions \citep{McGlynn2007}. Furthermore as
stated in for example \citep{PEARCE201954}, a lack of on-ground
calibration of the instrument responses of both IBIS and SPI to
polarized beams creates additional doubt on the validity of
polarization results from these instruments within the community. This
indicates the importance of performing polarization measurements with
carefully calibrated and dedicated instrumentation. More recently,
the AstroSAT collaboration has reported preliminary polarization
analysis results of several GRBs on the arXiv e-Print archive
\citep{Chattopadhyay:2017aa}. The systematics and
procedures related to obtaining these measurements are not
immediately clear. The quoted error distributions contain unphysical
regions of parameter space (polarization degrees greater than 100\%)
and are thus questionable. Past measurements of polarization by the
first dedicated GRB polarimeter, GAP, provided hints of polarized
emission \citep{Yonetoku:2011ef}. The results presented there indicate
an overall low polarization potentially resulting from an evolution of
the polarization angle during the long multipulse GRB, something also
reported in \citep{G_tz_2009} for GRB 041219A. Measurements by COSI
provided an upper limit on the polarization degree
\citep{Lowell:2017bq}. The statistics of these measurements do not,
however, allow constraints on the emission mechanisms. Furthermore,
the techniques for all these measurements relied on background
subtraction. As both the background and signal counts are Poisson
distributed, subtraction is an invalid procedure that destroys
statistical information, thus all reported significances are
questionable.
The POLAR experiment \citep{Produit2018} on board the Chinese space
laboratory Tiangong-2 observed 55 GRBs and reported polarization
measurements for five of these GRBs \citep{POLAR2018}. Time-integrated
analysis of these GRBs resulted in strict upper limits on the
polarization degrees. The most likely polarization degrees found in
that analysis are non-zero but remain compatible with an unpolarized
emission, leading to the conclusion that GRBs are at most moderately
polarized. Using time-resolved analysis it was however found that the
polarization of GRB 170114A was most compatible with a constant
polarization degree of $\sim 28\%$ with a varying polarization
angle. Summing polarized fluxes with varying polarization degrees
produces an unpolarized flux. The detection of an evolution in
polarization angle within this single pulse GRB could explain the low
polarization degrees found for all five GRBs. The results presented in
\citep{POLAR2018} do not, however, allow for a detailed time-resolved
study of the remaining four GRBs, nor do they allow determination of
the nature of the evolution of the polarization angle in GRB 170114A.
Coincidentally, several of the GRBs observed by POLAR were
simultaneously observed by the \textit{Fermi}-GBM. In this paper, we present
a technically advanced modeling of the polarization and spectral data
simultaneously with data from both instruments. This allows the
incorporation of information contained in both data sets leading to
improved sensitivity and an altogether more robust analysis. This work
is organized as follows: The methodology and modeling is described in
Sections \ref{sec:method} and \ref{sec:synch} and the results are
interpreted in Section \ref{sec:results}.
\section{Data analysis and methodology}
\label{sec:method}
For the analysis herein, we have developed a new approach of
simultaneously fitting both the spectral data from POLAR and GBM along
with the POLAR scattering angle (SA) or polarization data
(the subset of POLAR data usable for polarization analysis
selected with cuts as defined in \citep{LI2018}). This simultaneous
fitting alleviates the need for approximate error propagation of the
spectral fits into the polarization analysis. Using the abstract data
modeling capabilities of
\texttt{3ML}\footnote{\url{https://threeml.readthedocs.io/en/latest/}}
\citep{Vianello:2015}, a framework was developed to directly model all
data simultaneously with a joint-likelihood in each dataset's
appropriate space. Below, we describe in detail each part of the methodology.
We focus on the analysis of GRB 170114A \citep{Veres:2017}, a
bright, single-pulse GRB lasting approximately 10 s, which allows
us to perform detailed time-resolved spectroscopy. The event
occurred on January 14$^{\text{th}}$ 2017 with an initially
estimated fluence in the 10--1000 keV band of $\sim 1.93 \cdot 10^{-5}$
erg cm$^{-2}$. The high peak flux of the GRB triggered an
autonomous repoint request for the \textit{Fermi} satellite;
however, no LAT detection of photons occurred.
\subsection{Location and temporal analysis}
Spectral and polarization analysis for both GBM and POLAR rely on
knowledge of the sky-position ($\delta$) of the GRB in question. As they are both all-sky surveyors, GBM and POLAR lack the ability to image GRBs
directly. However, using the BALROG technique \citep{Burgess:2018et},
we can use the spectral information obtained in the GBM data to locate
the GRB. Using a synchrotron photon model (see Section
\ref{sec:synch}), we were able to locate the GRB to RA=
$13.10 \pm 0.5$ deg, DEC=$-13.0 \pm 0.6 $ deg. Using this location,
spectral and polarization responses were generated for all data
types. We note that a standard GBM position \footnote{Data obtained
from \url{https://heasarc.gsfc.nasa.gov/FTP/fermi/data/gbm/bursts/}}
exists and, along with its uncertainties, was used for the
polarization results presented in \citep{POLAR2018}; however, the
standard localization technique has known systematics and now possesses
arbitrarily inflated error distributions
\citep{Connaughton:2015aa}. We find the BALROG-derived location much
more precise than that of the standard location analysis (see Fig.
\ref{fig:location}), allowing us to reduce the systematic errors included
in the polarization results presented in
\citep{POLAR2018}. Additionally, it has now been shown that BALROG
locations are systematically more accurate \citep{Berlato:2019}.
\begin{figure}
\centering
\includegraphics{location.pdf}
\caption{BALROG location (red posterior samples) of GRB 170114A
derived by fitting the peak of the emission for both the location
and spectrum simultaneously. The blue contours display the 1, 2,
and 3$\sigma$ contours of the standard GBM catalog location as obtained from the
Fermi Science Support Center (FSSC).}
\label{fig:location}
\end{figure}
The chief focus of this analysis is temporal variation in the
polarization parameters. We computed the minimum variability timescale
(MVT) \citep[see][ for details]{Vianello:2018il} on the POLAR SA light
curve. The MVT infers the minimum timescale above the
Poisson noise floor of which variability exists in the data. This
yields an MVT of $\sim0.3$ s (Fig. \ref{fig:mvt}). For completeness,
the MVTs for both the GBM and POLAR spectral light curves were
computed as well. Both analyses yield similar results. Therefore, we
were able to analyze data on this timescale without the concern of
summing over spectral evolution \citep{Burgess:2015iy}. However,
the raw polarization data do not allow for us to check for variability
in the polarization angle prior to fitting. Therefore, it is possible
that the angle could change on a timescale smaller than our selected
time-intervals. This could reduce the overall inferred polarization.
With the MVT determined, we utilized the Bayesian blocks algorithm
\citep{Scargle:2013aa} to objectively identify temporal bins for the
analysis. The SA light curve was utilized to perform the analysis. The
temporal bins created are on the order of the MVT. A total of nine
bins were selected and used for spectral and polarization analysis
(see Table \ref{tab:table1}).
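For reproducibility, this binning step can be sketched with the Bayesian blocks implementation available in \texttt{astropy}; our use of \texttt{astropy} here is an illustrative choice (any implementation of the algorithm of \citet{Scargle:2013aa} would do), and the event times below are synthetic placeholders rather than POLAR data.
\begin{verbatim}
# Sketch: objective time bins from event data via Bayesian blocks.
# The astropy implementation is an illustrative choice; the event
# times here are synthetic placeholders, not POLAR data.
import numpy as np
from astropy.stats import bayesian_blocks

rng = np.random.default_rng(1)
t = np.sort(rng.exponential(scale=3.0, size=5000))  # toy pulse
t = t[t < 20.0]

edges = bayesian_blocks(t, fitness='events', p0=0.01)
print(len(edges) - 1, 'blocks; edges:', edges)
\end{verbatim}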
\begin{figure}
\centering
\includegraphics{lightcurve.pdf}
\caption{Light curves of the POLAR polarization and spectral
data (the difference is explained in Appendix
\ref{sec:polar_response}) as well as data from two GBM detectors. The
green line is the fitted background model and the gray shaded
regions show the time-intervals used for the analysis. The
binning in the analysis region is derived via Bayesian
blocks. }
\label{fig:lightcurve}
\end{figure}
\begin{figure*}%
\centering
\begin{subfigure}[]%
{\includegraphics[width=3.4in]{polar_mvts_styled.pdf}}%
\end{subfigure}%
\begin{subfigure}[]%
{\includegraphics[width=3.4in]{gbm_mvts_styled.pdf}}%
\end{subfigure}%
\caption{Minimum variability timescales for the POLAR
    polarization data (left) and the GBM spectral data (right). The
    black line indicates the background power spectrum determined via
    Monte Carlo calculations, and the shaded regions indicate the
equivalent MVTs.}
\label{fig:mvt}
\end{figure*}
\subsection{Spectral analysis}
The standard $\gamma$-ray forward-folding approach to spectral fitting
is adopted, in which we have sky location ($\delta$) dependent responses
for both the GBM and POLAR detectors ($R_{\gamma}$) and fold the proposed
photon model ($n_{\gamma}$) solution through these responses to produce
detector count spectra ($n_{\mathrm{pha}}$). Thus,
\begin{equation}
\label{eq:2} n_{\mathrm{pha}}^{i,j} = \int \diff{\varepsilon^j} n_{\gamma}(\varepsilon,\bar{\psi}) R_{\gamma}^{i,j} \left(\delta \right)
\end{equation}
\noindent
for the $i^{\mathrm{th}}$ detector in the $j^{\mathrm{th}}$
pulse-height amplitude (PHA) channel, $\varepsilon$ is the latent photon
energy and $\bar{\psi}$ are a set of photon model parameters. Here,
$\delta$ is the sky location of the GRB. Both POLAR and GBM have Poisson-distributed
total observed counts, and their backgrounds are
determined by fitting polynomials in time to off-source regions of
the light curves. Thus, Gaussian-distributed background counts are
estimated by integrating these polynomial models over the source
interval of interest. The uncertainty on these estimated counts is
derived via standard Gaussian uncertainty propagation. This leads us
to use a Poisson-Gaussian likelihood\footnote{This is known as PGSTAT
in XSPEC.} for each detector for the spectral fitting.
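To make the forward-folding step concrete, the following minimal Python sketch evaluates Equation (\ref{eq:2}) for a toy power-law photon model and a toy response matrix; both are placeholders (not the actual POLAR or GBM responses), and the full PGSTAT profiling of the Gaussian background uncertainty is replaced here by simply plugging in the background estimate.
\begin{verbatim}
# Sketch of Eq. (2): integrate a photon model over energy bins and
# push it through a response matrix to predict PHA counts.  The
# response and model are toy placeholders, not POLAR/GBM responses.
import numpy as np

n_ene, n_pha = 140, 128
edges = np.geomspace(8.0, 900.0, n_ene + 1)      # keV

def photon_model(e, K=1.0, idx=-2.0):            # toy power law
    return K * (e / 100.0) ** idx

mid = 0.5 * (edges[1:] + edges[:-1])
flux = photon_model(mid) * np.diff(edges)        # midpoint rule

R = np.full((n_ene, n_pha), 1e-2)                # toy response
expected = flux @ R                              # counts per channel

def poisson_loglike(observed, expected, background):
    # the actual fits profile the Gaussian background uncertainty
    # (PGSTAT); here the background estimate is plugged in directly
    lam = np.clip(expected + background, 1e-30, None)
    return float(np.sum(observed * np.log(lam) - lam))
\end{verbatim}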
\subsection{Polarization analysis}
To enable joint fits of the spectra and the polarization, a
novel analysis technique was developed. Traditional polarization
analysis techniques, such as those employed in
\citep{Yonetoku:2011ef,Chattopadhyay:2017aa} as well as in
\citep{POLAR2018}, rely on fitting data to responses produced
for a specific spectrum. This method does not allow joint fits of both
the spectrum and polarization parameters, nor does it naturally
propagate systematic uncertainties from the spectral fits into
those of the polarization. Here, in order to model
the polarization signal seen in the data, we invoked a forward-folding
method similar in concept to our approach to spectral analysis. We
simulated polarized signals as a function of polarization angle, degree,
and energy to create a matrix of SA distributions (often
called modulation curves within the field of polarimetry) which can be
compared to the data via the likelihood in data space. For details on
the creation of the matrix see Appendix \ref{sec:polar_response}.
Mathematically,
\begin{equation}
\label{eq:1} n_{\theta}^{k} \left(\phi, \bar{p} \right) = \int \diff{\varepsilon^{j}} n_{\gamma} \left(\varepsilon; \bar{\psi} \right) R_{\theta}^{j,k} \left(\varepsilon, \phi, \bar{p} \right)
\end{equation}
\noindent
where $n_{\theta}$ are counts in SA bin $k$, and $R_{\theta}$
is the simulated response of the corresponding scattering bin. In
words, we convolved the photon spectrum over the $j^{\mathrm{th}}$
photon energy bin with the polarization response to properly weight
the number of counts observed in each SA bin. Figure
\ref{fig:demo} demonstrates how changes in polarization angle and
degree appear in the POLAR data space. Hence our need to fit the
photon spectrum simultaneously, which allows the uncertainties in
this weighting to be accounted for directly.
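A minimal sketch of Equation (\ref{eq:1}) is given below: a precomputed cube of simulated SA distributions, tabulated on a grid of $(\phi, \bar{p})$, is weighted by the binned photon flux to predict the modulation curve. The cube here is a toy stand-in for the simulated response of Appendix \ref{sec:polar_response}, and a nearest-grid-point lookup replaces the interpolation used in the actual analysis.
\begin{verbatim}
# Sketch of Eq. (1): weight a simulated polarization response cube
# R_theta[energy bin, SA bin], tabulated on a (phi, p) grid, by the
# binned photon flux.  The cube is a toy stand-in for the response
# of Appendix A; the real analysis interpolates on the grid.
import numpy as np

n_ene, n_sa = 140, 60
phis = np.linspace(0.0, 180.0, 37)
ps = np.linspace(0.0, 1.0, 11)
cube = np.random.default_rng(3).random((37, 11, n_ene, n_sa))

def modulation_curve(flux, phi, p):
    i = np.argmin(np.abs(phis - phi))   # nearest grid point in angle
    j = np.argmin(np.abs(ps - p))       # nearest grid point in degree
    return flux @ cube[i, j]            # counts per SA bin

curve = modulation_curve(np.ones(n_ene), phi=45.0, p=0.3)
\end{verbatim}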
\begin{figure}
\centering
\includegraphics[]{demo_model.pdf}
\caption[]{Folded POLAR count space for two polarization angles
and ten levels of polarization degree. The rates have been
artificially scaled between the different angles for visual
clarity. The green lines for both angles represent the polarization
degree $\bar{p}=0\%$, and the red lines $\bar{p}=100\%$. Thus, we can see how various sets of polarization parameters can be identified
in the data. The peaks with a $90^\circ$ periodicity are the result
of POLAR's square shape, while the visible modulation with a
$360^\circ$ period is a result of the incoming direction of the
photons with respect to the instrument's zenith. By forward-modeling
the instrument response, the systematics induced by geometrical
effects are properly accounted for.}
\label{fig:demo}
\end{figure}
The SAs observed by POLAR are measured as detector counts and are thus Poisson
distributed. The pollution of the source signal by background cannot
be handled by background subtraction as has been done in previous
work. Instead, a temporally off-source measurement of the background
polarization is made in order to model the background contribution to
the total measurement during the observation intervals. The background
measurement is Poisson distributed in each of the $k$ scattering
bins. Due to the temporal stability of the background, as presented in
\citep{POLAR2018}, we fit a polynomial in time to each of the $k$
scattering bins via an unbinned Poisson likelihood. This allowed us to
reduce the uncertainty of the background by leveraging the temporal
information. We were able to estimate the on-source background contribution
($b_{\theta}^k$) by integrating the polynomials over time and propagating
the temporal fit errors. This implies that the polarization likelihood
is also a Poisson-Gaussian likelihood just as with the spectral
data. We verified that our approach allowed us to identify the latent
polarization parameters via simulations in Appendix
\ref{sec:polar_assessment}. The count rates are corrected for the
proper exposure by computing the total dead-time fraction associated
with each interval. The method employed for dead-time calculation is
equivalent to that of \citet{POLAR2018}.
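The background estimation step can be sketched as follows: for a single scattering bin, a polynomial rate is fitted to off-source event times with an unbinned (inhomogeneous Poisson) likelihood and then integrated over the on-source interval. The event times and the polynomial order below are placeholders for the off-source POLAR data of that bin.
\begin{verbatim}
# Sketch: unbinned Poisson likelihood fit of a polynomial rate in
# one scattering bin; event times are placeholders for off-source
# POLAR data.
import numpy as np
from numpy.polynomial import polynomial as P
from scipy.optimize import minimize

t_ev = np.sort(np.random.default_rng(2).uniform(0.0, 100.0, 800))
t0, t1 = 0.0, 100.0                      # off-source window

def neg_loglike(coef):
    rate = P.polyval(t_ev, coef)
    if np.any(rate <= 0):
        return np.inf
    integ = P.polyval(t1, P.polyint(coef)) - P.polyval(t0, P.polyint(coef))
    return integ - np.sum(np.log(rate))  # -log L, unbinned Poisson

coef = minimize(neg_loglike, x0=[8.0, 0.0], method='Nelder-Mead').x

a, b = 40.0, 42.0                        # on-source interval
b_est = P.polyval(b, P.polyint(coef)) - P.polyval(a, P.polyint(coef))
\end{verbatim}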
\begin{figure}
\centering
\includegraphics{model.pdf}
\caption{Directed graph model describing the full likelihood of
our approach. Model parameters are shown in light blue, and the data in
dark blue. The graph shows how the latent parameters of the model
are connected to each other and eventually the data. It is
important to note that the latent photon model connects both sides
of the model. The position ($\delta$) is a fixed parameter. Here
$\bar{\psi}$ represents the set of spectral parameters. }
\label{fig:like}
\end{figure}
The full joint likelihood of the data is thus a product over the
spectral and polarization likelihoods, as detailed in Appendix
\ref{sec:likelihood} (see also Figure \ref{fig:like}). We re-emphasize
that the spectral model and polarization model communicate with each
other through the likelihood. This implies that the posterior density
of the model is fully propagated to both datasets without any
assumptions such as Gaussian error propagation. As is seen in the
following sections, the resulting parameter distributions can be
highly asymmetric.
In a perfect world where all instruments are cross-calibrated over the
full energy range, the instruments' various responses would predict
similar observed fluxes for each measurement. However, we allowed for a
normalization constant between GBM and POLAR to account for any
unmodeled discrepancies between the instruments. Both POLAR's
polarization and spectral data are scaled by these constants which are
unity when no correction is required\footnote{We could have easily
applied these constants to the GBM responses. Since they are
scalings, where they are applied is arbitrary.}. This constant scale
for the effective area by no means accounts for energy-dependent
calibration issues.
In order to obtain the posterior parameter distributions, we used {\tt
MULTINEST} \citep{Feroz:2009aa, Buchner:2014aa} to sample the model's
posterior. {\tt MULTINEST} utilizes nested sampling which is suitable
for the multimodal distributions we observe, as well as for the
non-linear model and the high dimensionality of our parameter space. For the
polarization parameters, we used uninformative priors of appropriate
scale. The effective area normalizations are given informative
(truncated Gaussian) priors centered at unity with a 10\% width. The
priors for the spectral modeling are discussed in Section
\ref{sec:synch}. We ran {\tt MULTINEST} with 1500 live points to
achieve a high number of samples for posterior inference. Model
comparison was not attempted and thus we did not use the marginal
likelihood calculations\footnote{Indeed, astrophysical models operate
in the $\mathcal{M}$-open probabilistic setting and marginal
likelihood is an $\mathcal{M}$-closed tool \citep{Vehtari:2018aa}.}.
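For orientation, a minimal sketch of this sampling step is given below, using the \texttt{pymultinest} front end to {\tt MULTINEST}. This interface is our illustrative assumption (the actual fits were driven through \texttt{3ML}), and the prior ranges and the likelihood are placeholders for the joint likelihood described above.
\begin{verbatim}
# Sketch of the nested-sampling step via pymultinest's `solve'
# interface (an illustrative assumption; the fits used 3ML).
import numpy as np
from pymultinest.solve import solve

ndim = 4  # e.g. (log10 B, p, pbar, phi); toy ranges below

def prior(cube):
    out = np.array(cube)
    out[0] = out[0] * 4.0 - 2.0       # log10 B, toy scale prior
    out[1] = 1.0 + 5.0 * out[1]       # p
    out[2] = out[2]                   # pbar in [0, 1]
    out[3] = 180.0 * out[3]           # phi in [0, 180) deg
    return out

def loglike(theta):
    return -0.5 * float(np.sum(theta ** 2))  # placeholder likelihood

result = solve(LogLikelihood=loglike, Prior=prior, n_dims=ndim,
               n_live_points=1500, outputfiles_basename='grb170114A_')
\end{verbatim}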
As stated, for both $\bar{p}$ and $\phi$, we used uninformative priors
in each parameters' domain. This is a valid choice for $\phi$, but we
note that an informative prior for the expected polarization from
synchrotron emission could be used as an assumption. However, as
discussed in Section \ref{sec:discussion}, the theoretical predictions
for GRB synchrotron models are not mature enough for us to assume such
a prior at the current time. Nevertheless, in our work we tested
Gaussian priors centered at moderate polarization and found that the
data allowed for this assumption. Moreover, we found that our
recovered $\phi$ was not affected by our choice of prior on $\bar{p}$.
\section{Synchrotron modeling}
\label{sec:synch}
With the recent finding that synchrotron emission can explain the
majority of single-pulse GRBs, we chose to model the time-resolved
photon spectrum with a physical synchrotron model. Following
\citet{Burgess:2018}, we set
\begin{equation}
\label{eq:3}
n_{\gamma} \left(\varepsilon ; K, B, p, \gamma_{\rm cool} \right) = \int_{0}^{t^{\prime}(\gamma_{\rm cool})} \int_{1}^{\gamma_{\rm max}} \diff{t} \diff{\gamma} \times n_e \left(\gamma; t \right) \Phi\left(\frac{\varepsilon}{\varepsilon_{\mathrm{crit}}(\gamma; B ) } \right)
,\end{equation}
\noindent
where $K$ is the arbitrary normalization of the flux, $B$ is the
magnetic field strength in Gauss, $p$ is the injection index of the
electrons, $\gamma_{\rm cool}$ is the energy to which an electron will cool
during a synchrotron cooling time,
\begin{equation}
\label{eq:4}
\Phi\left( w\right) = \int_{w}^{\infty} \diff{x} K_{5/3} \left(x \right)
\end{equation}
\noindent
and
\begin{equation}
\label{eq:5}
\varepsilon_{\mathrm{crit}} \left(\gamma ; B \right) = \frac{3}{2} \frac{B}{B_{\mathrm{crit}}} \gamma^2 \mathrm{.}
\end{equation}
\noindent
Here, $K_{5/3}$ is a Bessel function,
$B_{\mathrm{crit}} = 4.13 \cdot 10^{13}$ G, and $n_e$ is determined by
solving the cooling equation for electrons with the Chang and Cooper
method \citep{Chang:1970gk}. Mathematically,
\begin{equation}
\label{eq:6}
\frac{\partial}{\partial t} n_e \left(\gamma, t \right) = \frac{\partial}{\partial \gamma} \left[ \dot{\gamma}\left( \gamma; B \right) n_e \left(\gamma, t \right) \right] + Q(\gamma;\gamma_{\rm inj}, \gamma_{\rm max}, p)
,\end{equation}
\noindent
where the injected electrons are defined by a power law of index $p$
\begin{equation}
\label{eq:7}
Q\left(\gamma; \gamma_{\rm inj}, \gamma_{\rm max} ,p \right) \propto \gamma^{-p}\;\; \gamma_{\rm inj} \le \gamma \le \gamma_{\rm max}
,\end{equation}
\noindent
where $\gamma_{\rm inj}$ and $\gamma_{\rm max}$ are the minimum and maximum injected
electron energies respectively and the synchrotron cooling is
\begin{equation}
\label{eq:8}
\dot{\gamma}\left( \gamma ; B\right) = -\frac{\sigma_{\mathrm{T}} B^2 }{6 \pi m_e c} \gamma^2 \mathrm{.}
\end{equation}
\noindent
For our numerical calculations we created a 300-point grid,
logarithmically distributed in $\gamma$. The linear equations in the
implicit scheme form a tridiagonal matrix which is solved numerically
with standard methods. The method of \citet{Chang:1970gk} is
numerically stable and inexpensive, and has been shown to conserve
particle number in the absence of sources and sinks. Thus, we are able
to solve for the synchrotron emission spectrum quickly during each
iteration of the fit. The numeric code is implemented in \texttt{C++}
and interfaced with \texttt {Python} into \texttt{astromodels}
\citep{Vianello:2018b}.
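For illustration, a simplified variant of this scheme is sketched below: with cooling only (no diffusion term), the fully implicit update reduces to a bidiagonal system that can be solved by back-substitution from the top of the grid, whereas the full Chang--Cooper weighting yields a tridiagonal system. The magnetic field constant and injection parameters below are toy values, not fit results.
\begin{verbatim}
# Simplified implicit scheme for Eqs. (6)-(8): pure synchrotron
# cooling (advection toward low gamma) plus power-law injection on
# a logarithmic grid.  With cooling only the implicit update is
# bidiagonal and solved by back-substitution from the top; the full
# Chang & Cooper (1970) weighting gives a tridiagonal system.
import numpy as np

n_grid = 300
gamma = np.geomspace(1.0, 1e7, n_grid)
dg = np.diff(gamma)

C = 1e-9                       # sigma_T B^2 / (6 pi m_e c), toy value
A = C * gamma ** 2             # |gamma_dot|

g_inj, g_max, p = 1e5, 1e7, 3.5
Q = np.where((gamma >= g_inj) & (gamma <= g_max), gamma ** -p, 0.0)

def step(n_e, dt):
    # (n_i' - n_i)/dt = (A_{i+1} n_{i+1}' - A_i n_i')/dg_i + Q_i
    new = np.empty_like(n_e)
    new[-1] = (n_e[-1] + dt * Q[-1]) / (1.0 + dt * A[-1] / dg[-1])
    for i in range(n_grid - 2, -1, -1):
        rhs = n_e[i] + dt * Q[i] + dt * A[i + 1] * new[i + 1] / dg[i]
        new[i] = rhs / (1.0 + dt * A[i] / dg[i])
    return new

n_e = np.zeros(n_grid)
for _ in range(100):           # evolve for a toy cooling episode
    n_e = step(n_e, dt=1.0)
\end{verbatim}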
The overall emission is characterized by five parameters: $B$,
$\gamma_{\rm inj}$, $\gamma_{\rm cool}$, $\gamma_{\rm max}$, and $p$. However, a strong co-linearity
exists between $B$ and $\gamma_{\rm inj}$ as their combination sets the peak of
the photon spectrum. Thus, both parameters serve as an energy
scaling, which forces one of them to be fixed. We chose to set
$\gamma_{\rm inj}=10^5$ though the choice is arbitrary and does not affect our
results. It is therefore important to note that all parameters are
determined relatively, that is, the values of $\gamma_{\rm cool}$ and $\gamma_{\rm max}$ are
determined as ratios to $\gamma_{\rm inj}$. Similarly, the value of $B$ is only
meaningful when determining the characteristic energies of $\gamma_{\rm cool}$
and $\gamma_{\rm max}$ or $h \nu_{\rm cool}$ and $h \nu_{\rm max}$ respectively. In other words,
with our parameterization the spectra are scale free. The degeneracies
can be eliminated by specifying temporal and radial properties of the
GRB outflow which we have neglected in this analysis.
Ideally, we would fit for the full set of parameters in the
model. However, the already high dimensionality of the model does not
allow us to fit for the cooling regime of the model simultaneously
with the polarization due to computational time
constraints. Therefore, we first fit the spectral data alone to
determine the amount of cooling present in the data. All spectra were
found in the slow-cooling regime
\citep{Sari:1998aa,Beniamini:2013aa}. Thus, we fixed the ratio of
$\gamma_{\rm cool}$ to $\gamma_{\rm inj}$ during the full fits to the slow-cooling
regime. Tests revealed that the cooling had no impact on the recovered
polarization parameters. Additionally, the lack of high-energy data
(via the \textit{Fermi}-LAT) forces us to fix $\gamma_{\rm max}$ such that the
synchrotron cutoff is above the spectral window. We thus obtain a
three-parameter fit for the spectrum: $B$, $p$, and the arbitrary spectral
normalization ($K$). $B$ and $K$ are given uninformative scale priors
and $p$ a weakly-informative, Gaussian prior centered around
$p=3.5$. The effective area constants applied to the POLAR response
are given truncated Gaussian priors centered at unity with a width of
10\% to reflect our prior belief that the instruments are
well-calibrated to one another\footnote{This belief will be
conditioned on the data and thus can be modified.}.
\section{Results}
\label{sec:results}
In the following two sections, we present the results from the
combined polarization and spectral analysis separately. Corner plots
of the important (non-nuisance) parameter marginal distributions are
displayed in Appendix \ref{sec:params}.
\subsection{Polarization}
The POLAR polarization data are well described by our modeling of the
POLAR instrument. The scattering angle data show good agreement between
the data and the model as demonstrated in Figure
\ref{fig:polarization_counts}. In order to validate the model's
ability to generate the data, we performed posterior predictive
checks (PPCs) \citep{Betancourt:2018aa} of the polarization data for
all time intervals. For a subset of posterior samples chosen with
appropriate posterior probability, latent polarization and spectral
models were generated, and subsequent data quantities were sampled
from the likelihood. The model was able to sufficiently generate
replicated data similar to the observed (see Figure \ref{fig:ppc}) in
most cases. It is likely that minor deficiencies still exist in the
instrumental responses.
The polarization observed here is compatible with that
presented in \citep{POLAR2018} where an unpolarized flux was
excluded for this single pulse GRB with 99.7\% confidence. The
analysis presented here does, however, allow us to study the time
evolution in significantly more detail. This is because, unlike in
the study \citep{POLAR2018}, the polarization degree is not forced to be equal
over all the studied time intervals but is instead left as a free
parameter, while the number of studied time bins is increased from three
to nine. Despite this significant increase in free parameters
constraining measurements can still be performed. We observe no
polarization at the beginning of the pulse and moderate ($\sim 30$\%)
polarization as time proceeds. Interestingly, we observe a large
change in the polarization angle with time (see Figure
\ref{fig:polarization}). Although the time intervals used in this
study are different from those used in \citep{POLAR2018}, it can be
deduced that the polarization angles found here agree with those in
\citep{POLAR2018}. The end of the pulse has a relatively weak signal
and thus poorly identified polarization parameters. The 68\% credible
regions are listed in Table \ref{tab:table1}. Clearly, the level of
polarization during the peak of the emission is probabilistically
consistent with moderate, low, or even 0\% polarization during
several intervals, whereas at the beginning of the emission the
polarization is definitely low, even though the ratio of background to
total signal is high.
\begin{figure*}
\centering
\includegraphics{polarization_counts.pdf}
\caption{Net SA data for each fitted time interval
in our analysis. Superimposed are the posterior model predictions
from the fits. The data have been rebinned for visual clarity. The
SA presented here is measured within an arbitrary
local coordinate system of POLAR.}
\label{fig:polarization_counts}
\end{figure*}
\begin{figure*}
\centering
\includegraphics{ppc_red2.pdf}
\caption{Posterior predictive checks for the total polarization
count rate data. The dark to light blue shaded regions indicate
the 50$^{\mathrm{th}}$, 60$^{\mathrm{th}}$, 70$^{\mathrm{th}}$,
80$^{\mathrm{th}}$, and 90$^{\mathrm{th}}$ percentiles of the
replicated data, respectively. The observed
data are displayed in red. The estimated background count rates
are displayed in green.}
\label{fig:ppc}
\end{figure*}
We stress that it is not appropriate to perform model comparison on
nested model parameters, for example, comparing between zero polarization and
greater than zero polarization. This includes the use of Bayes factors
\citep{Chattopadhyay:2017aa} which are ill-defined for improper priors
and for comparing between discrete values of a continuous parameter
\citep{Gelman:2013}. Polarization is not a detected quantity, but a
parameter. Given that we have detected the GRB, it is important to
quote the credible regions of the polarization parameter rather than
perform model comparison between discrete values.
\begin{figure*}
\centering
\includegraphics[]{polarization_arrows.pdf}
\caption{Posterior polarization results. The radial coordinate
represents polarization degree and the angular coordinate the
polarization angle. The polarization angle here is transformed to
equatorial coordinates. The contours are for the
30$^{\mathrm{th}}$, 60$^{\mathrm{th}}$, and 90$^{\mathrm{th}}$
percentiles of the credible regions. The plots are reflected about
the periodic boundary of 180$^{\circ}$ for visual clarity. For the
last three time intervals, we do not display contours and instead
show the posterior samples as the parameters are poorly
identified. The arrows that point from the last to the current
position are meant as visual guides only. }
\label{fig:polarization}
\end{figure*}
\renewcommand{\arraystretch}{1.5}%
\begin{table*}
\centering
\caption{Parameters with their 68\% credible regions. Here $\bar{p}$ is the polarization degree (in \%), $\phi$ the polarization angle (in deg), $h\nu_{\mathrm{inj}}$ the spectral injection energy (in keV), and $p$ the power-law index (dimensionless). }
\label{tab:table1}
\begin{tabular}{ccccc}
\hline\hline
Time Interval & $\bar{p}$ & $\phi \; (\mathrm{deg})$ & $h\nu_{\mathrm{inj}}\; (\mathrm{keV})$ & $p$ \\
\hline
-0.2-1.4 & $13.21^{+6.10}_{-13.20}$ & $71.86^{+80.54}_{-49.87}$ & $362.46^{+59.34}_{-53.86}$ & $3.67^{+0.39}_{-0.57}$ \\
1.4-1.8 & $24.19^{+10.53}_{-23.25}$ & $61.12^{+29.47}_{-24.98}$ & $242.58^{+33.76}_{-31.49}$ & $3.91^{+0.42}_{-0.52}$ \\
1.8-2.4 & $30.10^{+16.37}_{-15.50}$ & $132.12^{+15.66}_{-15.57}$ & $268.89^{+24.96}_{-24.50}$ & $4.68^{+0.54}_{-0.55}$ \\
2.4-3.0 & $28.29^{+16.58}_{-20.44}$ & $155.09^{+15.82}_{-134.21}$ & $160.89^{+20.35}_{-17.59}$ & $3.52^{+0.25}_{-0.36}$ \\
3.0-3.6 & $28.62^{+12.04}_{-28.61}$ & $146.19^{+22.07}_{-113.64}$ & $110.83^{+18.42}_{-15.64}$ & $3.01^{+0.24}_{-0.26}$ \\
3.6-4.8 & $33.45^{+15.89}_{-26.39}$ & $38.89^{+21.08}_{-16.01}$ & $62.31^{+8.90}_{-7.97}$ & $2.67^{+0.10}_{-0.15}$ \\
4.8-6.6 & $38.26^{+15.56}_{-38.04}$ & $51.14^{+117.74}_{-40.09}$ & $103.97^{+15.64}_{-14.74}$ & $4.11^{+0.45}_{-0.59}$ \\
6.6-8.9 & $34.90^{+15.99}_{-34.86}$ & $66.94^{+66.44}_{-40.46}$ & $59.99^{+11.56}_{-10.32}$ & $3.75^{+0.38}_{- 0.46}$ \\
8.9-20.0 & $51.53^{+38.26}_{-26.99}$ & $46.18^{+110.32}_{-30.12}$ & $54.25^{+12.28}_{-10.73}$ & $3.83^{+0.46}_{-0.60}$ \\
\hline
\end{tabular}
\end{table*}
\subsection{Spectra}
The observed POLAR and GBM data agree in overall spectral shape and in the relative
normalization of the observed flux. Moreover, the spectral results
demonstrate that the synchrotron spectrum is a good, predictive
description of the spectral data as displayed in
Figure~\ref{fig:counts}. This both confirms that past synchrotron
studies relying on GBM data alone are reliable and demonstrates the
outstanding cross-calibration between GBM and POLAR.
\begin{figure*}
\centering
\includegraphics[]{counts.pdf}
\caption{Count spectra of POLAR and GBM from the joint
spectral and polarization fits. The shaded regions indicate the
2$\sigma$ credible regions of the fit. Data from a GBM NaI detector, a BGO
    detector, and POLAR are displayed in green, black, and red, respectively.}
\label{fig:counts}
\end{figure*}
\begin{figure}[h!]
\centering
\includegraphics{vfv.pdf}
\caption{ $\nu F_{\nu}$ spectra of the synchrotron fits scaled with
increasing time. The width of the curves represents the 1$\sigma$
credible regions of the model.}
\label{fig:spec}
\end{figure}
As noted above, it is not possible to disentangle the intrinsic
parameters of the synchrotron emission without further
assumptions. Therefore, we only quote the injection energy in Table
\ref{tab:table1}. The evolution of the spectrum is shown in Figure
\ref{fig:spec}. The temporal evolution of the $\nu F_{\nu}$ spectral peak
follows a broken power law. We find values between approximately three
and four for the electron power law
injection spectral index. These values are steeper than those of the
canonical index expected from shock acceleration \citep{Kirk:2000vr}.
It is possible that other physical spectral models also provide
acceptable, predictive, fits to the data. However, these models -- for example
subphotospheric dissipation -- have yet to demonstrate acceptable
spectral fits on a large sample of GRBs. Moreover, the numerical
schemes \citep{Peer:2005aa} required to compute the emission from
these models are more complex than that of our synchrotron modeling,
require far more computational time, and are not publicly available
for replication. Photospheric models also require special geometrical
setups to produce polarization. This makes them more predictive, and
indeed a pertinent set of models to test.
\section{Discussion}
\label{sec:discussion}
For the first time, the polarization and spectrum of GRB prompt
$\gamma$-ray emission have been fitted simultaneously. Furthermore, the
spectral data have been described with a physical synchrotron model
consistent with the spectral data of two very distinct
spectrometers. We argue that it is unlikely for the spectral and
polarization data to conspire to point toward an optically thin
synchrotron origin of the emission. However, the current predictive
power of GRB prompt emission polarization theory is not developed
enough for our measurements to definitively select synchrotron over
other emission mechanisms. Therefore, we speculatively leverage
previous spectral results that show that synchrotron emission is
the dominant mechanism in single-pulse GRBs.
\citet{Burgess:2018} argue that the observation of synchrotron
emission in GRBs invalidates the standard fireball model
\citep{Eichler:2000aa}. Similar predictions were made before they were
supported by data \citep[e.g.,][]{Zhang:2009aa}. These results allude
to a magnetically dominated jet acceleration mechanism possibly
resulting in comoving emission sites or mini-jets
\citep{Barniol-Duran:2016aa,Beniamini:2018aa}. These results were
arrived at considering spectral analysis alone. The moderate
polarization degree observed in this work requires the development
of temporal polarization predictions for these
models in order to fully interpret these measurements.
While our observations provide broad ranges for the observed
polarization degree, the changing polarization angle is easily
observed. Although an evolution of the polarization angle has been
reported before for multipulse GRBs using data from both the GAP and
IBIS instruments, \citep{Yonetoku:2011ef,G_tz_2009} this intrapulse
evolution has not been observed before. Figure
\ref{fig:polarization_fun} shows how both the peak of
the synchrotron spectrum and the polarization angle broadly track each
other in time. Detailed model predictions for the evolution of the
polarization angle during the GRB are not available. We are therefore
not able to interpret the change in angle and encourage the community
to develop detailed predictions which can be fitted to our data in the
future. With more predictive models, appropriate informative priors
can be adopted. Moreover, spectral parameters can be formulated in
terms of polarization parameters making the models stricter and the
data more useful. Thus, we are hopeful that such models will be
developed in the near future.
\begin{figure}
\centering
\includegraphics[]{evo_rp.pdf}
\caption{Temporal evolution of $h\nu_{\mathrm{inj}}$ and the
polarization angle which has been doubled for visual
clarity. $h\nu_{\mathrm{inj}}$ falls as the polarization angle
increases in time. The red and yellow areas are illustrative guides for
the evolution of the parameters. No fits were performed. }
\label{fig:polarization_fun}
\end{figure}
The combination of POLAR and GBM observations of GRBs enables
energy-dependent polarization measurements and is a project currently
under development. These measurements will allow us to decipher if
polarization increases around the peak of the photon spectrum which
would be a signature of synchrotron emission, or if the polarization
is higher at low energies as predicted by \citet{Lundman:2018aa}. We
encourage researchers to carry out further multimessenger studies and
missions to answer these questions.
\section{Software availability}
\label{sec:software}
The analysis software utilized in this study is primarily
\texttt{3ML} and \texttt{astromodels}. We have designed a generic,
preliminary, polarization likelihood for similar X-ray polarization
instruments both within
\texttt{3ML}\footnote{\url{https://github.com/giacomov/3ML/tree/master/threeML/utils/polarization}}
and
\texttt{astromodels}\footnote{\url{https://github.com/giacomov/astromodels/blob/master/astromodels/core/polarization.py}}. Additionally,
the POLAR
pipeline\footnote{\url{https://github.com/grburgess/polarpy}} we have
developed is fully designed to be easily modified for other
instruments with polarimetric data. We note that these software
distributions are preliminary, and we encourage the community to
participate in their development.
\begin{acknowledgements}
JMB acknowledges support from the Alexander von Humboldt
Foundation. MK acknowledges support by the Swiss National Science
Foundation and the European Cooperation in Science and
Technology. The authors are grateful to the \textit{Fermi}-GBM team
and HEASARC for public access to \textit{Fermi} data products. We
thank Damien B\'{e}gu\'{e}, Dimitrios Giannois, Thomas Siegert and
Ramandeep Gill for fruitful discussions.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
Vertex decomposable simplicial complexes are recursively defined
simplicial complexes that have been extensively studied in both
combinatorial algebraic topology and combinatorial commutative
algebra. This family of complexes, first defined by
Provan and Billera \cite{PB} for pure simplicial
complexes and later generalized to the non-pure case by
Bj\"orner and Wachs \cite{BW1997}, has many nice features.
For example, they are shellable and hence Cohen-Macaulay in the pure case.
Because of the Stanley-Reisner correspondence between
square-free monomial ideals and simplicial complexes, the
definition and properties of vertex decomposable simplicial complexes can be translated into
algebraic statements about square-free monomial ideals. For example, Moradi and Khosh-Ahang \cite[Definition 2.1]{MKA} introduced vertex splittable ideals, which are precisely the ideals of the Alexander duals of vertex decomposable simplicial complexes. As another example, which is directly relevant to this paper, Nagel and R\"omer \cite{NR}
showed that
if $I_\Delta$ is the square-free monomial
ideal associated to a vertex decomposable simplicial complex $\Delta$
via the Stanley-Reisner correspondence, then the ideal
$I_\Delta$ belongs
to the Gorenstein liaison class of a complete intersection, i.e.,
the ideal $I_\Delta$ is {\it glicci}.
Knutson, Miller, and Yong \cite{KMY} introduced the notion
of a {\it geometric vertex decomposition}, which is an ideal-theoretic generalization (beyond the square-free monomial ideal setting) of a vertex
decomposition of a simplicial complex. Building on this, Klein and Rajchgot \cite{KR} gave a recursive definition of a {\it geometrically vertex decomposable} ideal which is an ideal-theoretic generalization of a vertex decomposable simplicial complex. Indeed, when
specialized to square-free monomial ideals, those ideals that are geometrically vertex decomposable are precisely those square-free monomial ideals whose associated simplicial complexes are vertex decomposable. As shown by Klein and Rajchgot \cite[Theorem 4.4]{KR}, this definition captures some of the properties of
vertex decomposable simplicial complexes. For example,
a more general version of Nagel and R\"omer's result holds;
that is, a homogeneous
ideal that is geometrically vertex decomposable is also glicci.
Because geometrically vertex decomposable ideals are glicci, identifying such families allows us
to give further evidence to an important open question
in liaison theory: is every arithmetically Cohen-Macaulay
subscheme of $\mathbb{P}^n$ glicci (see
\cite[Question 1.6]{KMMNP})?
Since the definition of geometrically vertex decomposable ideals is
recent, there is a need to not only develop the corresponding theory
(e.g. which properties of Stanley-Reisner ideals of vertex decomposable simplicial complexes
also hold for geometrically vertex decomposable ideals?), but also to find families of concrete examples. There has
already been some work in these two directions.
Klein and Rajchgot \cite{KR} showed that
Schubert determinantal ideals, (homogeneous) ideals
coming from lower bound cluster algebras, and ideals defining equioriented
type A quiver loci are all geometrically vertex decomposable. Klein \cite{K}
used geometric vertex decomposability to
prove a conjecture of
Hamaker, Pechenik, and Weigandt \cite{HPW} on Gr\"obner bases
of Schubert determinantal ideals. Da Silva
and Harada have investigated the
geometric vertex decomposability of certain Hessenberg patch ideals which locally define regular nilpotent Hessenberg varieties \cite{DSH}.
We contribute to this program by
further developing the theory of geometric
vertex decomposibility, and show that many families
of toric ideals of graphs are geometrically vertex
decomposable.
Let $\mathbb{K}$ be an algebraically closed field of characteristic $0$.
If
$G = (V,E)$ is a finite simple graph
with vertex set $V = \{x_1,\ldots,x_m\}$ and edge
set $E = \{e_1,\ldots,e_n\}$, we can define
a ring homomorphism $\varphi:\mathbb{K}[e_1,\ldots,e_n] \rightarrow \mathbb{K}[x_1,\ldots,x_m]$ by letting
$\varphi(e_i) = x_kx_l$ where the edge
$e_i = \{x_k,x_l\}$. The {\it toric ideal of $G$}
is the ideal $I_G = {\rm ker}(\varphi)$.
The study of toric ideals of graphs is an active
area of research (e.g. see \cite{BOVT,CG,FHKVT,GHKKPVT,GM,OH,TT,V1}), so our
work also complements the recent developments in this area. What makes toric ideals of graphs amenable to our investigation of geometric vertex decomposability is that their (universal) Gr\"obner bases are fairly well-understood
(see Theorem \ref{generatordescription}) and can be related to the graph's structure.
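To make this concrete, $I_G$ can be computed by elimination: in $\mathbb{K}[x_1,\ldots,x_m,e_1,\ldots,e_n]$, the ideal generated by the binomials $e_i - \varphi(e_i)$ is intersected with $\mathbb{K}[e_1,\ldots,e_n]$ via a lexicographic Gr\"obner basis with the $x$-variables first. The following sketch carries this out in SymPy for the $4$-cycle; our use of SymPy here is purely illustrative (the computer experimentation mentioned below was done in Macaulay2 \cite{M2}).
\begin{verbatim}
# Sketch: compute I_G by elimination for G the 4-cycle with edges
# e1={x1,x2}, e2={x2,x3}, e3={x3,x4}, e4={x4,x1}.  A lex Groebner
# basis with the x-variables first eliminates them; the generators
# free of the x's generate I_G.  (SymPy is used for illustration;
# the experiments in this paper used Macaulay2.)
from sympy import symbols, groebner

x1, x2, x3, x4 = symbols('x1 x2 x3 x4')
e1, e2, e3, e4 = symbols('e1 e2 e3 e4')

gens = [e1 - x1*x2, e2 - x2*x3, e3 - x3*x4, e4 - x4*x1]
GB = groebner(gens, x1, x2, x3, x4, e1, e2, e3, e4, order='lex')

toric = [g for g in GB.exprs if not g.has(x1, x2, x3, x4)]
print(toric)  # expected: [e1*e3 - e2*e4]
\end{verbatim}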
Our first main result describes how
geometric vertex decomposability behaves over tensor products:
\begin{theorem}[Theorem \ref{tensorproduct}]
Let $I \subsetneq R =\mathbb{K}[x_1,\ldots,x_n]$ and
$J \subsetneq S=\mathbb{K}[y_1,\ldots,y_m]$ be proper ideals. Then $I$ and $J$ are geometrically
vertex decomposable if and only if $I+J$ is geometrically vertex decomposable in $R \otimes S =\mathbb{K}[x_1,\ldots,x_n,y_1,\ldots,y_m]$.
\end{theorem}
\noindent
Our result can be viewed as the ideal-theoretic version
of the fact that two simplicial complexes
$\Delta_1$ and $\Delta_2$ are vertex decomposable
if and only if
their join $\Delta_1 \star \Delta_2$ is vertex decomposable \cite[Proposition 2.4]{PB}. Moreover, this result allows
us to reduce our study of toric ideals of graphs
to the case that the graph $G$ is connected (Theorem \ref{connected}).
When we restrict to toric ideals of graphs, we
show that the graph operation of ``gluing'' an even length cycle
onto a graph preserves the geometric vertex decomposability property:
\begin{theorem} [Theorem \ref{gluetheorem}]
Let $G$ be a finite simple graph
with toric ideal $I_G$.
Let $H$ be obtained from $G$ by gluing a cycle of even length to $G$ along a single edge. If $I_G$ is geometrically vertex decomposable, then $I_H$ is also geometrically vertex decomposable.
\end{theorem}
This gluing operation and its connection to toric ideals of graphs first appeared in work of Favacchio, Hofscheier, Keiper and Van Tuyl \cite{FHKVT}. By repeatedly applying
this operation, we can construct many toric
ideals of graphs that are geometrically vertex decomposable and glicci.
Our gluing operation requires one to start with a graph
whose corresponding toric ideal is geometrically vertex decomposable. It
is therefore desirable to identify families of graphs whose
toric ideals have this property. Towards this end, we prove:
\begin{theorem}[Theorem \ref{thm: gvdBipartite}]\label{mainresult3}
Let $G$ be a finite simple graph with toric ideal $I_G$. If $G$ is bipartite, then $I_G$ is geometrically vertex decomposable.
\end{theorem}
\noindent
Our proof of Theorem \ref{mainresult3} relies on work of Constantinescu and Gorla \cite{CG}. For some families of
bipartite graphs, we give alternative proofs of the geometric
vertex decomposability property that exploit
the additional structure of the graph (see Theorem
\ref{families}). These families
are also used to illustrate that in certain cases, the recursive definition of geometric vertex decomposability easily lends itself to induction.
Based on our results and computer experimentation in Macaulay2 \cite{M2}, we propose
the following conjecture:
\begin{conjecture}[Conjecture \ref{mainconjecture}]\label{conjecture}
Let $G$ be a finite simple graph with toric ideal $I_G \subseteq \mathbb{K}[e_1,\ldots,e_n]$.
If $\init_{<}(I_G)$ is square-free with respect to a lexicographic monomial order $<$, then $I_G$ is geometrically vertex decomposable, and thus glicci.
\end{conjecture}
\noindent
We provide a framework to prove this conjecture. In fact, we show
that the conjecture
is true if one can prove that a particular family
of ideals is equidimensional (see Theorem \ref{framework}). As further evidence
for Conjecture \ref{conjecture}, we prove the following special case:
\begin{theorem}[Theorem \ref{quadratic_GVD}]
Let $I_G$ be the toric ideal of a finite simple graph $G$. Assume that $I_G$ has a universal Gr\"obner basis consisting entirely of quadratic binomials. Then $I_G$ is geometrically vertex decomposable.
\end{theorem}
Finally, we prove that additional collections of toric ideals of graphs are glicci (though not necessarily geometrically vertex decomposable). Our first result in this direction relies on a very general result of Migliore and Nagel \cite[Lemma 2.1]{MN} from the liaison literature.
\begin{theorem}[Corollary \ref{glue=glicci}]
Let $G$ be a finite simple graph and let $I_G\subseteq R = \mathbb{K}[e_1,\dots, e_n]$ be its toric ideal.
Let $H$ be obtained from $G$ by gluing a cycle of even length to $G$ along a single edge. If $R/I_G$ is Cohen-Macaulay, then $I_H$ is glicci.
\end{theorem}
We also show that many toric ideals of graphs which contain $4$-cycles are glicci. The following is a slightly weaker version of Corollary \ref{cor:square-freeDegenGlicciGraph}.
\begin{theorem}[Corollary \ref{cor:square-freeDegenGlicciGraph}]
Let $G$ be a finite simple graph and suppose there is an edge $y\in E(G)$ contained in a $4$-cycle. If the initial ideal $\init_< I_G$ is a square-free monomial ideal for some lexicographic monomial order with $y>e$ for all $e\in E(G)$ with $e\neq y$, then $I_G$ is glicci.
\end{theorem}
As a corollary to this theorem, we show that the toric ideal of any \emph{gap-free} graph which contains a $4$-cycle is glicci. For the definition of gap-free graph and this result, see the end of Section \ref{sec:someGlicci}.
\noindent{\bf Outline of the paper. }In the next section
we formally introduce geometrically vertex decomposable ideals, along with the required background and notation about Gr\"obner bases.
We also explain how geometrically vertex decomposable ideals
behave with respect to tensor products. In Section 3 we provide the
needed background on toric ideals of graphs, and we explain how
a particular graph operation preserves the geometric
vertex decomposability property. In Section 4, we focus on
the glicci property for toric ideals of graphs that can
be deduced from the results of Section 3 together with general results from the liaison theory literature. In Section 5 we
prove that toric ideals of bipartite graphs are geometrically vertex decomposable. In Section 6
we propose a conjecture on toric ideals with a square-free initial
ideal,
describe a framework to prove this conjecture, and illustrate this framework by proving that toric ideals of graphs which have quadratic universal Gr\"obner bases are geometrically vertex decomposable.
In Section 7, we provide an example of a toric ideal which is geometrically vertex decomposable, but which has no square-free initial ideal for any monomial order.
\noindent{\bf Remark on the field $\mathbb{K}$. }Many of the arguments in this paper are valid over any infinite field. Indeed, the liaison-theoretic setup in Sections 2 and 4 requires an infinite field but is characteristic-free. Similarly, toric ideals of graphs can be defined combinatorially, and since the coefficients of their generators are $\pm 1$, defining such ideals in positive characteristic does not pose any issues. Nevertheless, we assume that $\mathbb{K}$ throughout this paper is algebraically closed of characteristic zero since some of the references that we use make this assumption (e.g. \cite[Proposition 13.15]{Sturm}, which is needed in the proof of Theorem \ref{sqfree=>cm}).
\noindent
{\bf Acknowledgments.}
We thank Patricia Klein for some helpful conversations.
Cummings was partially supported by an NSERC USRA. Da Silva was partially supported by an NSERC postdoctoral fellowship.
Rajchgot's research is supported
by NSERC Discovery Grant 2017-05732.
Van Tuyl's research is supported by NSERC Discovery Grant 2019-05412.
\section{Geometrically vertex decomposable ideals}
In this paper $\mathbb{K}$ denotes
an algebraically closed field of
characteristic zero and $R = \mathbb{K}[x_1,\ldots,x_n]$
is the polynomial ring
in $n$ variables.
This section gives the required background on geometrically
vertex decomposable ideals, following \cite{KR}.
We also examine how geometric vertex
decomposability behaves over tensor products.
Fix a variable $y = x_i$ in $R$. For any
$f \in R$, we can write $f$ as $f = \sum_j \alpha_jy^j$, where each $\alpha_j$
is a polynomial only in the variables
$\{x_1,\ldots,\hat{x}_i,\ldots,x_n\}$. For $f\neq 0$, the
{\it initial $y$-form} of $f$, denoted ${\rm in}_y(f)$, is the
summand corresponding to the highest power of $y$ appearing
in $\sum_j \alpha_jy^j$ with non-zero coefficient. That is,
if $\alpha_d \neq 0$, but $\alpha_t =0$ for all $t > d$, then
${\rm in}_y(f) = \alpha_dy^d$. Note that if $y$ does not appear in any
term of $f$, then ${\rm in}_y(f) = f$. For any ideal $I$ of $R$,
we set ${\rm in}_y(I) = \langle {\rm in}_y(f) ~|~ f \in I \rangle$ to
be the ideal generated by all the initial $y$-forms in $I$. A monomial
order $<$ on $R$ is said to be {\it $y$-compatible} if the initial
term of $f$ satisfies
${\rm in}_<(f) = {\rm in}_<({\rm in}_y(f))$ for all $f \in R$. For
such an order, we have ${\rm in}_<(I) = {\rm in}_<({\rm in}_y(I))$, where ${\rm in}_<(I)$ is
the {\it initial ideal} of $I$ with respect to the
order $<$.
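As a small example, let $R = \mathbb{K}[x_1,x_2,x_3]$, let $y = x_1$, and let $f = x_1^2x_2 + x_1^2x_3 + x_1x_2x_3 + x_3$. Then ${\rm in}_y(f) = x_1^2(x_2+x_3)$. The lexicographic order with $x_1 > x_2 > x_3$ is $y$-compatible, and indeed ${\rm in}_<(f) = x_1^2x_2 = {\rm in}_<({\rm in}_y(f))$.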
Given an ideal $I$ and a $y$-compatible monomial order $<$, let
$\mathcal{G}(I) = \{ g_1,\ldots,g_m\}$ be a Gr\"obner basis of $I$
with respect to this monomial order. For $i=1,\ldots,m$, write $g_i$ as
$g_i = y^{d_i}q_i + r_i$, where $y$ does not divide any term of $q_i$; that is, ${\rm in}_y(g_i) = y^{d_i}q_i$. It can then be shown that
${\rm in}_y(I) = \langle y^{d_1}q_1,\ldots,y^{d_m}q_m \rangle$ (see
\cite[Theorem 2.1(a)]{KMY}).
Given this setup, we define two ideals:
$$C_{y,I} = \langle q_1,\ldots,q_m \rangle ~~\mbox{and}~~
N_{y,I} = \langle q_i ~|~ d_i = 0 \rangle.$$
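For a quick example, let $y = x_1$ and $I = \langle x_1x_2 - x_3x_4 \rangle \subseteq \mathbb{K}[x_1,x_2,x_3,x_4]$, and take the lexicographic order with $x_1 > x_2 > x_3 > x_4$, which is $y$-compatible. Since $I$ is principal, $\mathcal{G}(I) = \{x_1x_2 - x_3x_4\}$, so ${\rm in}_y(I) = \langle x_1x_2 \rangle$, $C_{y,I} = \langle x_2 \rangle$, and $N_{y,I} = \langle 0 \rangle$. Note that ${\rm in}_y(I) = \langle x_2 \rangle \cap \langle x_1 \rangle = C_{y,I} \cap (N_{y,I} + \langle y \rangle)$, an instance of the decomposition appearing in Definition \ref{gvd} below.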
Recall that an ideal $I$ is {\it unmixed} if
$\dim(R/I) = \dim(R/P)$ for
all associated primes $P \in {\rm Ass}_R(R/I)$. We now come to our
main definition:
\begin{definition}\label{gvd}
An ideal $I$ of $R = \mathbb{K}[x_1,\ldots,x_n]$ is {\it geometrically
vertex decomposable} if $I$ is unmixed and
\begin{enumerate}
\item $I = \langle 1 \rangle$, or $I$ is generated by
a (possibly empty) subset of variables of $R$, or
\item there is a variable $y = x_i$ in $R$ and a
$y$-compatible monomial order $<$ such that
$${\rm in}_y(I) = C_{y,I} \cap (N_{y,I} + \langle y \rangle),$$
and the contractions of the
ideals $C_{y,I}$ and $N_{y,I}$ to the ring
$\mathbb{K}[x_1,\ldots,\hat{x}_i,\ldots,x_n]$ are geometrically
vertex decomposable.
\end{enumerate}
We make the convention that the two ideals
$\langle 0 \rangle$ and $\langle 1 \rangle$ of the ring
$\mathbb{K}$ are also geometrically
vertex decomposable.
\end{definition}
\begin{remark}\label{gvdremark}
For any ideal $I$ of $R$, if there exists a variable $y = x_i$ in $R$ and a
$y$-compatible monomial order $<$ such that
${\rm in}_y(I) = C_{y,I} \cap (N_{y,I} + \langle y \rangle)$, then this
decomposition is called a {\it geometric vertex decomposition of $I$ with
respect to $y$}. This decomposition was first defined in
\cite{KMY}.
Consequently, Definition \ref{gvd} (2) says that there
is a variable $y$ such that $I$ has a geometric vertex decomposition with respect
to this variable.
We say that a geometric vertex decomposition is \textit{degenerate} if either $C_{y,I}=\langle 1\rangle$ or $\sqrt{C_{y,I}}=\sqrt{N_{y,I}}$ (see \cite[Section 2.2]{KR} for further details and results). Otherwise, we call a geometric vertex decomposition \textit{nondegenerate}.
\end{remark}
If elements in our Gr\"obner basis are
square-free in $y$, i.e., if ${\rm in}_y(g_i) = y^{d_i}q_i$ with
$d_i=0$ or $1$ for all $g_i \in \mathcal{G}(I)$, then
Knutson, Miller, and Yong note that we get the
geometric vertex decomposition of $I$ with respect
to $y$ for ``free":
\begin{lemma}[{\cite[Theorem 2.1 (a), (b)]{KMY}}]\label{square-freey}
Let $I$ be an ideal of $R$ and let $<$ be a $y$-compatible monomial order. Suppose that
$\mathcal{G}(I) = \{ g_1,\ldots,g_m\}$ is a Gr\"obner basis of $I$ with respect to $<$, and also suppose that
${\rm in}_y(g_i) = y^{d_i}q_i$ with $d_i=0$ or $1$ for all
$i$. Then
\begin{enumerate}
\item $\{q_1,\ldots,q_m \}$ is a Gr\"obner basis of $C_{y,I}$
and $\{q_i ~|~ d_i = 0 \}$ is a Gr\"obner basis of $N_{y,I}$.
\item ${\rm in}_y(I) = C_{y,I} \cap (N_{y,I} + \langle y \rangle)$, i.e.,
$I$ has a geometric vertex decomposition with respect to $y$.
\end{enumerate}
\end{lemma}
\begin{remark}
If $I$ is a square-free monomial ideal in $R$, then $I$ is geometrically
vertex decomposable if and only if the simplicial complex $\Delta$
associated with $I$ via the Stanley-Reisner correspondence is a
vertex decomposable simplicial complex; see \cite[Proposition 2.8]{KR} for more details. As a consequence, we can view Definition \ref{gvd} as a generalization of the notion of vertex decomposability. When $I$ is a square-free
monomial ideal with associated
simplicial complex $\Delta$, then $C_{y,I}$ is the Stanley-Reisner
ideal of the star of $y$, i.e., ${\rm star}_\Delta(y)
= \{F \in \Delta ~|~ F \cup \{y\} \in \Delta \}$ and $N_{y,I}+\langle y\rangle$ corresponds
to the deletion of $y$ from $\Delta$, that is,
${\rm del}_\Delta(y) = \{F \in \Delta ~|~
y \not\in F \}$ (see \cite[Remark 2.5]{KR}).
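For instance, if $I = \langle x_1x_2 \rangle \subseteq \mathbb{K}[x_1,x_2]$, then $\Delta$ consists of the two vertices $x_1$ and $x_2$. Taking $y = x_1$ gives $C_{y,I} = \langle x_2 \rangle$ and $N_{y,I} + \langle y \rangle = \langle x_1 \rangle$, which are the Stanley-Reisner ideals of ${\rm star}_\Delta(x_1) = \{\emptyset,\{x_1\}\}$ and ${\rm del}_\Delta(x_1) = \{\emptyset,\{x_2\}\}$, respectively.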
\end{remark}
If $I$ has a geometric vertex decomposition with respect to
a variable $y$, we can determine some additional
information about a reduced Gr\"obner
basis of $I$ with respect to any $y$-compatible monomial
order. In the following statement, $I$
is {\it square-free in $y$} if there is a generating
set $\{g_1,\ldots,g_s\}$ of $I$ such that no
term of $g_1,\ldots,g_s$ is divisible by $y^2$.
\begin{lemma}[{\cite[Lemma 2.6]{KR}}]\label{square-freeiny}
Suppose that the ideal $I$ of $R$ has a geometric vertex
decomposition with respect to the variable $y =x_i$.
Then $I$ is square-free in $y$. Moreover, for any
$y$-compatible term order, the reduced
Gr\"obner basis of $I$ with respect to this order
has the form $\{yq_1+r_1,\ldots,yq_k+r_k,h_1,\ldots,h_t\}$
where $y$ does not divide any term of $q_i,r_i,h_j$
for $i \in \{1,\ldots,k\}$ and $j\in\{1,\ldots,t\}$.
\end{lemma}
The following lemma and its proof helps to illustrate some
of the above ideas. Furthermore, since the
definition of geometrically vertex decomposable lends
itself to proof by induction, the following facts
are sometimes useful for the base cases of our induction.
\begin{lemma} \label{simplecases}
\begin{enumerate}
\item An ideal $I$ of $R = \mathbb{K}[x]$ is geometrically
vertex decomposable if and only if $I = \langle ax +b \rangle$ for
some $a,b \in \mathbb{K}$.
\item Let $f = c_1m_1+\cdots + c_sm_s$ be any
polynomial in $R = \mathbb{K}[x_1,\ldots,x_n]$ with $c_i \in \mathbb{K}$ and
$m_i$ a monomial. If each $m_i$ is square-free, then
$I = \langle f \rangle$ is geometrically vertex decomposable.
In particular, if $m$ is a square-free monomial, then
$\langle m \rangle$ is geometrically vertex decomposable.
\end{enumerate}
\end{lemma}
\begin{proof} (1)
$(\Leftarrow)$ If $a =0$, or $b =0$, or both $a=b=0$, the ideal
$I = \langle ax+b \rangle$ satisfies Definition \ref{gvd} (1). So, suppose
$a,b \neq 0$. The ideal $I$ is prime,
so it is unmixed. Since $x$ is the only
variable of $R$, and because
there is only one monomial order on this ring, it is easy to see
that this monomial order is $x$-compatible, and that $\{ax+b\}$ is
a Gr\"obner basis of $I$. So, $C_{x,I} = \langle a \rangle = \langle 1 \rangle$ and
$N_{x,I} = \langle 0 \rangle$. It is straightforward to check
that we have
a geometric vertex decomposition of $I$ with respect to $x$. Furthermore,
as ideals in $\mathbb{K}[\hat{x}] = \mathbb{K}$, $C_{x,I} = \langle 1 \rangle$ and
$N_{x,I} = \langle 0 \rangle$ are geometrically
vertex decomposable by definition. So, $I$ is geometrically vertex decomposable.
$(\Rightarrow)$ Since $R = \mathbb{K}[x]$ is a principal ideal domain,
$I = \langle f \rangle$ for some $f \in R$, i.e.,
$f = a_dx^d + \cdots + a_1x + a_0$ with $a_i \in \mathbb{K}$.
Since $I$ is geometrically vertex decomposable, and because
$x$ is the only variable of $R$, by Lemma \ref{square-freeiny},
the ideal $I$ is square-free in $x$. This fact then forces
$d \leq 1$, and thus $I = \langle a_1x+a_0\rangle$ as desired.
(2) We proceed by induction on the number of variables in
$R=\mathbb{K}[x_1,\ldots,x_n]$. The base case $n=1$ follows
from statement (1). Because $I = \langle f \rangle$ is
principal, $\{f\}$ is a Gr\"obner basis of $I$ with respect
to any monomial order. In particular, let
$>$ be the lexicographic
order on $R$ with $x_1 > \cdots > x_n$, and
assume $m_1 > \cdots > m_s$. Let $y$ be the largest variable
dividing $m_1$.
Then
we can write $f$ as $f = y(c_1m_1'+\cdots + c_im'_i)
+ c_{i+1}m_{i+1}+ \cdots + c_sm_s$ for some $i$ such that
$y$ does not divide $m_{i+1},\ldots,m_s$. Note that $>$ is
a $y$-compatible monomial order, and so
by Lemma \ref{square-freey} we
have ${\rm in}_y(I) = C_{y,I} \cap (N_{y,I}+\langle y \rangle)$
with $C_{y,I} = \langle c_1m'_1+\cdots + c_im'_i\rangle$ and
$N_{y,I} = \langle 0 \rangle$. The ideal $N_{y,I}$ is
geometrically vertex decomposable in $\mathbb{K}[x_1,\ldots,\hat{y},
\ldots,x_n]$ by definition, and $C_{y,I}$ is geometrically
vertex decomposable in the same ring by induction. Observe that
$I, C_{y,I}$ and $N_{y,I}$ are also unmixed since they are principal.
\end{proof}
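To see the induction in (2) at work, take $f = x_1x_2 + x_1x_3 - x_2x_4$ in $\mathbb{K}[x_1,x_2,x_3,x_4]$ with the lexicographic order $x_1 > x_2 > x_3 > x_4$. Here $y = x_1$ and $f = y(x_2+x_3) - x_2x_4$, so $C_{y,I} = \langle x_2+x_3 \rangle$ and $N_{y,I} = \langle 0 \rangle$, and the argument recurses on the principal ideal $\langle x_2+x_3 \rangle$ in $\mathbb{K}[x_2,x_3,x_4]$, whose generator again has only square-free terms.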
Theorem \ref{tensorproduct}, which is of independent interest, shows how we can treat ideals whose generators lie in different sets of variables. We require a lemma about Gr\"obner
bases in tensor products. For completeness, we give
a proof, although it follows easily from standard facts about Gr\"obner bases.
We first need to recall a characterization of
Gr\"obner bases using standard representations.
Fix a monomial order $<$ on $R =\mathbb{K}[x_1,\ldots,x_n]$.
Given $G = \{g_1,\ldots,g_s\}$ in $R$, we say $f$ {\it reduces to
zero modulo $G$} if
$f$ has a {\it standard representation}
$$f =f_1g_1+\cdots +f_sg_s ~~\mbox{with $f_i \in R$}$$
with ${\rm multidegree}(f) \geq
{\rm multidegree}(f_ig_i)$
for all $i$ with $f_ig_i \neq 0$. Here
$${\rm multidegree}(h) =
\max\{\alpha \in \mathbb{N}^n ~|~
\mbox{$x^\alpha$ is a term of $h$}\},$$
where we use the monomial order $<$ to order $\mathbb{N}^n$. We then have the following result.
\begin{theorem}[{\cite[Chapter 2.9, Theorem 3]{CLO}}]\label{gbchar}
Let $R =\mathbb{K}[x_1,\ldots,x_n]$ with fixed
monomial order $<$.
A basis $G = \{g_1,\ldots,g_s\}$ of an ideal $I$ in $R$
is a Gr\"obner basis for $I$ if and only if
each $S$-polynomial $S(g_i,g_j)$ reduces to zero modulo
$G$.
\end{theorem}
For the
lemma below, note that if $R =\mathbb{K}[x_1,\ldots,x_n]$
and $S = \mathbb{K}[y_1,\ldots,y_m]$, and if
$<$ is a monomial order on $R \otimes S :=R \otimes_{\mathbb{K}} S$, then
$<$ induces a monomial order $<_R$ on $R$ where
$m_1 <_R m_2$ if and only if $m_1 < m_2$, where
we view $m_1,m_2$ as monomials of both $R$ and $R \otimes S$.
Here, ``viewing $f\in R$ as an element of $R\otimes S$'' means writing $\varphi_R(f)$ as $f$, where $\varphi_R:R\rightarrow R\otimes S$ is the natural inclusion $f\mapsto f\otimes 1$. Similarly, we let $<_S$ denote the induced
monomial order on $S$.
\begin{lemma}\label{grobnertensorproduct}
Let $I \subseteq R =\mathbb{K}[x_1,\ldots,x_n]$ and
$J \subseteq S=\mathbb{K}[y_1,\ldots,y_m]$ be ideals.
For any monomial order $<$ on $R \otimes S$, there exists a
Gr\"obner basis of $I+J$ in $R \otimes S$ which has the form
$\mathcal{G}(I+J) = \mathcal{G}_1 \cup \mathcal{G}_2$,
where $\mathcal{G}_1$ is a Gr\"obner basis of
$I$ in $R$ with respect to $<_R$ but viewed as
elements of $R \otimes S$, and $\mathcal{G}_2$
is a Gr\"obner basis of $J$ in $S$ with
respect to $<_S$
but viewed as elements of $R \otimes S$.
\end{lemma}
\begin{proof}
Given $<$, select a Gr\"obner basis $\mathcal{G}_1$ of $I$ and $\mathcal{G}_2$ of $J$ with respect to the induced monomial orders $<_R$ and $<_S$ on $R$ and $S$ respectively. Since $\mathcal{G}_1$ generates $I$ and $\mathcal{G}_2$
generates $J$, the set $\mathcal{G}_1 \cup \mathcal{G}_2$ generates $I+J$ as an ideal of $R \otimes
S$. To prove that $\mathcal{G}_1 \cup \mathcal{G}_2$
is a Gr\"obner basis of $I+J$, by
Theorem \ref{gbchar} it suffices to show
that for any $g_i,g_j \in \mathcal{G}_1 \cup \mathcal{G}_2$, the
$S$-polynomial $S(g_i,g_j)$ reduces
to zero modulo this set.
If $g_i,g_j \in \mathcal{G}_1$, then since $g_i,g_j \in R$, and since $\mathcal{G}_1$ is a Gr\"obner basis
of $I$ in $R$, by Theorem \ref{gbchar}, the
$S$-polynomial $S(g_i,g_j)$ reduces to zero modulo
$\mathcal{G}_1$. But then in the larger ring
$R \otimes S$, the $S$-polynomial
$S(g_i,g_j)$ also reduces to zero modulo
$\mathcal{G}_1 \cup \mathcal{G}_2$. A similar result
holds if $g_i,g_j \in \mathcal{G}_2$.
So, suppose $g_i \in \mathcal{G}_1$ and $g_j \in \mathcal{G}_2$. Note that the leading monomial of
$g_i$ is only in the variables $\{x_1,\ldots,x_n\}$,
while the leading monomial of $g_j$ is only in
the variables $\{y_1,\ldots,y_m\}$. Consequently,
their leading monomials are relatively prime. Thus, by
\cite[Chapter 2.9, Proposition 4]{CLO}, the
$S$-polynomial $S(g_i,g_j)$ reduces to zero modulo
$\mathcal{G}_1 \cup \mathcal{G}_2$.
\end{proof}
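To make Lemma \ref{grobnertensorproduct} concrete, small instances can be checked in Macaulay2 \cite{M2}. The following sketch uses illustrative ideals of our own choosing (they do not appear elsewhere in this paper):
\begin{verbatim}
-- Groebner basis of I + J is a union of Groebner bases (Lemma above)
T = QQ[x_1,x_2,y_1,y_2, MonomialOrder => Lex];
I = ideal(x_1^2 - x_2);      -- involves only the x-variables
J = ideal(y_1*y_2 - y_2^2);  -- involves only the y-variables
gens gb (I + J)  -- the two generators together form a Groebner basis
\end{verbatim}
Here the leading terms of the two generators are relatively prime, which is exactly the mechanism used in the last paragraph of the proof.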
\begin{theorem}\label{tensorproduct}
Let $I \subsetneq R =\mathbb{K}[x_1,\ldots,x_n]$ and
$J \subsetneq S=\mathbb{K}[y_1,\ldots,y_m]$ be proper ideals. Then $I$ and $J$ are geometrically
vertex decomposable if and only if $I+J$ is geometrically vertex decomposable in $R \otimes S =\mathbb{K}[x_1,\ldots,x_n,y_1,\ldots,y_m]$.
\end{theorem}
\begin{proof}
First suppose that $I\subsetneq R$ and $J\subsetneq S$ are geometrically vertex decomposable. Since neither ideal
contains $1$, we have $I+J \neq \langle 1 \rangle$. By \cite[Corollary 2.8]{HNTT},
the set of associated primes of $(R \otimes S)/(I+J)
\cong R/I \otimes S/J$ satisfies
\begin{equation}\label{assprimes}
{\rm Ass}_{R \otimes S}(R/I \otimes S/J)
= \{P+Q ~|~ P \in {\rm Ass}_R(R/I) ~~
\mbox{and} ~ Q \in {\rm Ass}_S(S/J)\}.
\end{equation}
Thus any associated prime
$P+Q$ of $(R\otimes S)/(I+J)$ satisfies
\begin{eqnarray*}\dim((R\otimes S)/(P+Q)) &=
&\dim(R/P)+ \dim(S/Q) \\
&=& \dim(R/I)+\dim(S/J) \\
&=&
\dim((R \otimes S)/(I+J))
\end{eqnarray*}
where we are using the fact that $I$ and $J$
are unmixed for the second equality. So,
$I+J$ is also unmixed.
To see that $I+J\subseteq R\otimes S$ is geometrically vertex
decomposable, we proceed by induction on the number of variables $\ell = n+m$ in $R\otimes S$. The base case $\ell= 0$ is trivial.
Assume now that $\ell>0$.
If both $I$ and $J$ are generated by indeterminates, then $I+J$ is too and so is geometrically vertex decomposable. Thus, without loss of generality, suppose that $I$ is not generated by indeterminates (note that $I \neq \langle 1 \rangle$ by assumption).
Because $I$ is geometrically vertex decomposable in $R$,
there is a variable $y = x_i$ in $R$ such that $\text{in}_y (I) = C_{y,I}\cap (N_{y,I}+\langle y\rangle)$ is a geometric vertex decomposition and the contractions of $C_{y,I}$ and $N_{y,I}$ to $R' = \mathbb{K}[x_1,\ldots, \hat{y},\ldots, x_n]$ are geometrically vertex decomposable. Extend the $y$-compatible
monomial order $<$ on $R$ to a $y$-compatible
monomial order on $R \otimes S$ by taking any monomial
order on $S$, and let our new monomial
order $\prec$ be the product order of these two monomial orders (where
$x_i \succ y_j$ for all $i,j$).
If we write $K^e$ to denote the extension of an ideal
$K$ in $R$ into the ring $R \otimes S$, then
one checks that with respect to this new
$y$-compatible order
\begin{eqnarray*}
\text{in}_y(I+J) &=& (\text{in}_y (I))^e+ J = [C_{y,I}\cap (N_{y,I}+\langle y\rangle)]^e+J \\
&=& ((C_{y,I})^e+J)\cap ((N_{y,I})^e+J + \langle y\rangle).
\end{eqnarray*}
Using the identities
$$(C_{y,I})^e+J = C_{y,I+J} ~~\mbox{and}~~ (N_{y,I})^e +J
= N_{y,I+J}$$
(note that $\prec$ is being used to define $C_{y,I+J}$ and
$N_{y,I+J}$ and $<$ is being used to define $C_{y,I}$ and
$N_{y,I}$),
we have a geometric vertex decomposition of $I+J$
with respect to $y$ in $R \otimes S$:
$$ \text{in}_y(I+J) = C_{y,I+J} \cap (N_{y,I+J} + \langle y \rangle).$$
Now let $C'$ and $N'$ denote the contractions of $C_{y,I}$ and $N_{y,I}$ to $R'$. First assume that $C'$ and $N'$ are both proper ideals. Then, since $C'$ and $N'$ are geometrically vertex decomposable, we may apply induction to see that $C'+J$ and $N'+J$ in $R'\otimes S$ are geometrically vertex decomposable.
In particular, as $C'+J$ and $N'+J$ are the contractions of $(C_{y,I})^e+J$ and $(N_{y,I})^e+J$ to $R'\otimes S$, we have that $I+J$ is geometrically vertex decomposable by induction. If either $C'$ or $N'$ is
the ideal $\langle 1 \rangle$, the same would be true
for the contractions of $(C_{y,I})^e+J$ or $(N_{y,I})^e+J$ because
the contraction of $(C_{y,I})^e+J$, respectively $(N_{y,I})^e+J$,
contains $C'$, respectively $N'$. So $I+J$ is geometrically vertex
decomposable.
For the converse, we proceed by induction on the number of variables $\ell$ in $R\otimes S$. The base case is $\ell=0$, which is trivial.
So suppose $\ell>0$. We first show that $I$
is unmixed. Suppose that $I$ is not unmixed; that is, there are associated primes
$P_1$ and $P_2$ of ${\rm Ass}(R/I)$
such that $\dim(R/P_1) \neq \dim(R/P_2)$. For any
associated prime $Q$ of $S/J$, we know
by \eqref{assprimes} that $P_1+Q$
and $P_2+Q$ are associated primes of
$(R \otimes S)/(I+J)$. Since
$I+J$ is unmixed, we can
derive the contradiction
\begin{eqnarray*}
\dim((R\otimes S)/(I+J))& =& \dim((R \otimes S)/(P_1+Q)) \\
&=& \dim (R/P_1) + \dim (S/Q) \\
&\neq& \dim(R/P_2) + \dim(S/Q) \\
&=& \dim((R\otimes S)/(P_2+Q)) = \dim((R\otimes S)/(I+J)).
\end{eqnarray*}
So, $I$ is unmixed (the proof for $J$ is similar).
If $I+J$ is generated by indeterminates, then so are $I$ and $J$, hence they are geometrically vertex decomposable.
So, suppose that there is a variable $y$ in
$R \otimes S$ and a $y$-compatible monomial order
$<$ such that
$${\rm in}_y(I+J) = C_{y,I+J} \cap (N_{y,I+J} + \langle y \rangle).$$
Without loss of generality, assume that
$y \in \{x_1,\ldots,x_n\}$. So
$C_{y,I+J}$ and $N_{y,I+J}$ are geometrically
vertex decomposable in $\mathbb{K}[x_1,\ldots,\hat{y},\ldots,x_n,
y_1,\ldots,y_m]$.
By Lemma \ref{grobnertensorproduct}, we can construct a Gr\"obner basis $\mathcal{G}$ of $I+J$ with
respect to $<$ such that
$$\mathcal{G} = \{g_1,\ldots,g_s\} \cup \{h_1,\ldots,h_t\}$$
where $\{g_1,\ldots,g_s\}$ is a Gr\"obner basis
of $I$ with respect to the order $<_R$ in $R$, and $\{h_1,\ldots,h_t\}$ is a Gr\"obner basis of
$J$ with respect to $<_S$ in $S$. Since $y$ can only appear among the $g_i$'s,
we have
$$C_{y,I+J} = (C_{y,I}) +J~~\mbox{and}~~ N_{y,I+J} = (N_{y,I})+J$$
where $C_{y,I}$, respectively $N_{y,I}$, denote the
ideals constructed from the Gr\"obner basis $\{g_1,\ldots,g_s\}$ of $I$ in $R$ using the monomial
order $<_R$. Note that in $R$, $<_R$ is still $y$-compatible.
Since the ideals
$(C_{y,I})+J$ and $(N_{y,I})+J$ are geometrically
vertex decomposable in the ring $\mathbb{K}[x_1,\ldots,\hat{y},\ldots,x_n,
y_1,\ldots,y_m]$, by induction, $C_{y,I}$ and $N_{y,I}$
are geometrically vertex decomposable in
$\mathbb{K}[x_1,\ldots,\hat{y},\ldots,x_n]$ and $J$ is geometrically vertex decomposable in $S$.
To complete the proof, note that in $R$, we
have ${\rm in}_y(I) = C_{y,I} \cap (N_{y,I}+\langle y \rangle)$. Thus $I$ is also geometrically
vertex decomposable in $R$.
\end{proof}
\begin{remark}
If we weaken the hypotheses in Theorem \ref{tensorproduct}
to allow $I$ or $J$ to be $\langle 1 \rangle$, then only
one direction remains true. In particular, if $I$ and $J$
are geometrically vertex decomposable, then so is $I+J$. However, the
converse statement would no longer be true. To see why,
let $I = \langle 1 \rangle$ and let $J$ be any ideal which is not geometrically vertex decomposable. Then $I+J = \langle
1 \rangle$ is geometrically vertex decomposable
in $R \otimes S$, but we do not have that both $I$ and $J$ are
geometrically vertex decomposable.
\end{remark}
\begin{remark}\label{join}
Theorem \ref{tensorproduct} is an algebraic
generalization
of \cite[Proposition 2.4]{PB}, which showed that if $\Delta_1$ and $\Delta_2$ are simplicial complexes on disjoint sets of variables,
then the join $\Delta_1 \star \Delta_2$ is vertex
decomposable if and only if $\Delta_1$ and $\Delta_2$
are vertex decomposable.
\end{remark}
\begin{corollary}\label{monomialcor}
Let $I \subseteq R =\mathbb{K}[x_1,\ldots,x_n]$ be a square-free monomial ideal.
If $I$ is a complete intersection, then $I$ is geometrically vertex decomposable.
\end{corollary}
\begin{proof}
Suppose $I = \langle m_1,\ldots,m_t\rangle$, where $m_1,\ldots,m_t$ are the minimal square-free monomial generators. Because $I$ is a complete
intersection, the ideal is unmixed. Furthermore,
because $I$ is a complete
intersection, the support of each monomial is pairwise disjoint. So,
after a relabelling, we can assume that $m_1 = x_1x_2\cdots x_{a_1}$,
$m_2 = x_{a_1+1}\cdots x_{a_2}, \ldots, m_t = x_{a_{t-1}+1}\cdots x_{a_t}$.
Then
$$R/I \cong \mathbb{K}[x_1,\ldots,x_{a_1}]/\langle m_1 \rangle \otimes \cdots \otimes
\mathbb{K}[x_{a_{t-1}+1},\ldots,x_{a_t}]/\langle m_t \rangle \otimes \mathbb{K}[x_{a_t+1},\ldots,x_n].$$
By Lemma \ref{simplecases}, the ideals $\langle m_i \rangle$
are geometrically vertex decomposable for $i=1,\ldots,t$.
Now repeatedly apply Theorem \ref{tensorproduct}.
\end{proof}
\begin{remark}
Corollary \ref{monomialcor} can also be deduced via results from Stanley-Reisner
theory, which we sketch out.
One proceeds by induction on the number of generators of the complete intersection
$I$. If $I = \langle x_{1}\cdots x_k \rangle$
has one generator, then one can prove directly from the definition of
a vertex decomposable simplicial complex (e.g. see \cite{PB}), that
the simplicial complex associated with $I$, denoted by $\Delta = \Delta(I)$, is vertex
decomposable. For the induction step, note that if $I = \langle m_1,\ldots, m_t
\rangle$, then $I = I_1 + I_2 = \langle m_1,\ldots,m_{t-1}\rangle
+ \langle m_t\rangle$. If $\{w_1,\ldots,w_m\}$ are variables that
appear in the generator $m_t$ and $\{x_1,\ldots,x_\ell\}$ are the other variables,
then we have
$$R/I \cong \mathbb{K}[x_1,\ldots,x_\ell]/I_1 \otimes \mathbb{K}[w_1,\ldots, w_m]/I_2.$$
By induction, the simplicial complexes $\Delta_1$ and $\Delta_2$ defined by $I_1$ and $I_2$ are vertex decomposable. As noted in Remark \ref{join}, the join $\Delta_1 \star \Delta_2$ is also vertex decomposable. So, the ideal $I$ is a square-free monomial ideal whose associated simplicial complex is vertex decomposable. The result now
follows from
\cite[Theorem 4.4]{KR} which implies that the ideal $I$ is also
geometrically vertex decomposable.
\end{remark}
\section{Toric ideals of graphs}
This section initiates a study of the geometric
vertex decomposability of toric ideals of graphs. We have
subdivided this section into three parts: (a) a review
of the needed background on toric ideals, (b) an
analysis of the ideals $C_{y,I}$ and $N_{y,I}$ when
$I$ is the toric ideal of a graph,
and (c) an explanation of
how the graph operation of ``gluing'' a cycle to a graph preserves geometric vertex decomposability.
We will study some specific families of graphs whose toric ideals are geometrically vertex decomposable in Sections \ref{sec:bipartite} and \ref{section_square-free}.
\subsection{Toric ideals of graphs} We review the relevant
background on toric ideals of graphs.
Our main references for this material are \cite{Sturm,V}.
Let $G = (V(G),E(G))$ be a finite simple graph with
vertex set $V(G) =\{x_1,\ldots,x_n\}$ and edge
set $E(G) = \{e_1,\ldots,e_t\}$ where each $e_i = \{x_j,x_k\}$.
Let $\mathbb{K}[E(G)] = \mathbb{K}[e_1,\ldots,e_t]$ be a polynomial
ring, where we treat the $e_i$'s as indeterminates. Similarly,
let $\mathbb{K}[V(G)] = \mathbb{K}[x_1,\ldots,x_n]$. Consider the
$\mathbb{K}$-algebra homomorphism $\varphi_G:\mathbb{K}[E(G)] \rightarrow \mathbb{K}[V(G)]$ given
by
$$\varphi_G(e_i) = x_jx_k ~~\mbox{where $e_i = \{x_j,x_k\}$ for
all $i \in \{1,\ldots,t\}$}.$$
The {\it toric ideal of the graph $G$}, denoted $I_G$,
is the kernel of the homomorphism $\varphi_G$.
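For instance, the toric ideal of the $4$-cycle can be computed directly as the kernel of $\varphi_G$ in Macaulay2 \cite{M2}; the session below is only a sketch, with names of our own choosing:
\begin{verbatim}
R = QQ[e_1..e_4];   -- one variable per edge
S = QQ[x_1..x_4];   -- one variable per vertex
phiG = map(S, R, {x_1*x_2, x_2*x_3, x_3*x_4, x_4*x_1});
IG = ker phiG       -- ideal(e_1*e_3 - e_2*e_4), up to sign
\end{verbatim}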
While the generators of $I_G$ are defined implicitly,
these generators (and a Gr\"obner basis) of $I_G$ can be
described in terms of the graph $G$, specifically,
the walks in $G$. A {\it walk} of length $\ell$ is an alternating sequence of
vertices and edges $$\{x_{i_0},e_{i_1},x_{i_1},e_{i_2},\cdots,e_{i_\ell},
x_{i_{\ell}}\}$$
such that $e_{i_j} = \{x_{i_{j-1}},x_{i_j}\}$. The walk
is {\it closed} if $x_{i_\ell} = x_{i_0}$.
When the vertices are clear, we simply write the walk as $\{e_{i_1},\ldots,e_{i_\ell}\}$. It is straightforward
to check that every closed walk of even length,
say $\{e_{i_1},\ldots,e_{i_{2\ell}}\}$,
results in an element of $I_G$; indeed
$$\varphi_G(e_{i_1}e_{i_3}\cdots e_{i_{2\ell-1}} -
e_{i_2}e_{i_4}\cdots e_{i_{2\ell}}) =
x_{i_0}x_{i_1}\cdots x_{i_{2\ell-1}} - x_{i_1}x_{i_2}\cdots x_{i_{2\ell}} =0
$$
since $x_{i_{2\ell}}=x_{i_0}$. Note that
$e_{i_1}e_{i_3}\cdots e_{i_{2\ell-1}} -
e_{i_2}e_{i_4}\cdots e_{i_{2\ell}}$ is a binomial.
For any $\alpha = (a_1,\ldots,a_t) \in \mathbb{N}^t$, let
$e^\alpha = e_1^{a_1}e_2^{a_2}\cdots e_t^{a_t}$. A binomial
$e^\alpha-e^\beta \in I_G$ is {\it primitive} if there
is no other binomial $e^\gamma-e^\delta \in I_G$ such
that $e^\gamma|e^\alpha$ and $e^\delta|e^\beta$.
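For example, in the toric ideal of the $4$-cycle, the binomial $e_1e_3 - e_2e_4$ is primitive, whereas $(e_1e_3)^2 - (e_2e_4)^2 \in I_G$ is not, since $e_1e_3 \mid (e_1e_3)^2$ and $e_2e_4 \mid (e_2e_4)^2$.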
We can now describe generators and a universal Gr\"obner
basis of $I_G$.
\begin{theorem}\label{generatordescription}
Let $G$ be a finite simple graph.
\begin{enumerate}
\item {\cite[Proposition 10.1.5]{V}} The ideal
$I_G$ is generated by the set of binomials
$$ \{e_{i_1}e_{i_3}\cdots e_{i_{2\ell-1}} -
e_{i_2}e_{i_4}\cdots e_{i_{2\ell}} ~~|~~ \mbox{$\{e_{i_1},\ldots,e_{i_{2\ell}}\}$ is a closed even walk of $G$}\}.$$
\item {\cite[Proposition 10.1.9]{V}} The set of all primitive
binomials that also correspond to closed even walks in $G$
is a universal Gr\"obner basis of $I_G$.
\end{enumerate}
\end{theorem}
\noindent Going forward, we will write $\mathcal{U}(I_G)$ to denote
this universal Gr\"obner basis of $I_G$.
The next two results allow us to make some additional
assumptions on $G$ when studying $I_G$.
First, we can ignore leaves in $G$
when studying $I_G$.
Recall that the degree of a vertex $x \in V(G)$ is the number of edges $e \in E(G)$ that contain $x$. An edge $e= \{x,y\}$ is a {\it leaf} of $G$ if either $x$ or $y$ has degree one. In the statement below, if $e \in E(G)$, then
by $G\setminus e$ we mean the graph formed by removing the
edge $e$ from $G$; note $V(G \setminus e) = V(G)$.
We include a proof for completeness.
\begin{lemma}\label{removeleaves}
Let $G$ be a finite simple graph. If $e$ is a leaf of $G$, then $I_{G} = I_{G \setminus e}$.
\end{lemma}
\begin{proof}
For the containment $I_{G\setminus e} \subseteq I_G$,
observe that any closed even walk in $G\setminus e$ is also a closed
even walk in $G$.
For the reverse containment, if a closed even
walk $\{e_{i_1}, \ldots,e,\ldots,e_{i_{2\ell}}\}$ contains the leaf $e$, then $e$ must be repeated, i.e., $\{e_{i_1}, \ldots,e,e,\ldots,e_{i_{2\ell}}\}$. The corresponding
binomial $b_1-b_2$ is divisible by $e$,
i.e., $b_1-b_2 = e(b_1'-b_2') \in I_G$. But since $I_G$ is a prime binomial ideal, this forces $b'_1-b'_2 \in I_G$. Thus every minimal generator of $I_G$ corresponds
to a closed even walk that does not go through $e$,
and thus is an element of $I_{G\setminus e}$.
\end{proof}
A graph $G$ is {\it connected} if for any two pairs
of vertices in $G$, there is a walk in $G$ between these two
vertices. A connected component of $G$ is a subgraph of $G$ that is
connected, but it is not contained in any larger connected subgraph.
To study the geometric vertex decomposability of $I_G$, we may always assume
that $G$ is connected.
\begin{theorem}\label{connected}
Suppose that $G = H \sqcup K$ is the disjoint union of
two finite simple graphs. Then $I_G$ is geometrically vertex decomposable
in $\mathbb{K}[E(G)]$
if and only if $I_H$, respectively $I_K$, is geometrically
vertex decomposable in $\mathbb{K}[E(H)]$, respectively $\mathbb{K}[E(K)]$.
\end{theorem}
\begin{proof}
Apply Theorem \ref{tensorproduct}
to $I_G = I_H+I_K$ in $\mathbb{K}[E(G)] = \mathbb{K}[E(H)] \otimes \mathbb{K}[E(K)]$.
\end{proof}
The well-known result below gives a condition
for $\mathbb{K}[E(G)]/I_G$ to be Cohen-Macaulay.
\begin{theorem}\label{sqfree=>cm}
Let $G$ be a finite simple graph with toric ideal $I_G \subseteq \mathbb{K}[E(G)]$. Suppose
that there is a monomial order $<$ such that
${\rm in}_<(I_G)$ is a square-free monomial ideal.
Then $\mathbb{K}[E(G)]/I_G$ is Cohen-Macaulay.
\end{theorem}
\begin{proof}
If $\init_{<}(I_G)$ is a square-free monomial ideal,
then the toric ring $\mathbb{K}[E(G)]/I_G$ is normal by \cite[Proposition 13.15]{Sturm}. Thus, by a theorem of Hochster \cite{H}, $\mathbb{K}[E(G)]/I_G$ is Cohen-Macaulay.
\end{proof}
\subsection{Structure results about $N_{y,I}$ and $C_{y,I}$}
To study the geometric vertex decomposability of $I_G$,
we need access to both $N_{y,I_G}$ and $C_{y,I_G}$. While
determining $C_{y,I_G}$ in terms of $G$ will prove to be subtle, the ideal $N_{y,I_G}$ has a straightforward
description.
\begin{lemma}\label{linktoricidealgraph}
Let $G$ be a finite simple graph with toric ideal $I_G \subseteq \mathbb{K}[E(G)]$.
Let $<$ be any $y$-compatible monomial order with $y = e$ for some
edge $e$ of $G$. Then
$$N_{y,I_G} = I_{G\setminus e}.$$
In particular, a universal Gr\"obner basis
of $N_{y,I_G}$ consists of all the binomials in the universal
Gr\"obner basis $\mathcal{U}(I_G)$ of $I_G$ where neither term is divisible by $y$.
\end{lemma}
\begin{proof}
By Theorem \ref{generatordescription} (2), $I_G$ has a universal Gr\"obner basis $\mathcal{U}(I_G)$
of primitive binomials
associated to closed even walks of $G$.
Write this basis as $\mathcal{U}(I_G) = \{ y^{d_1}q_1 +r_1,\ldots, y^{d_k}q_k +r_k, g_1,\ldots, g_r \}$, where $d_i>0$ and where $y$ does not divide any term of $g_i$ or $q_i$. By definition, \[N_{y,I_G} = \langle g_1,\ldots, g_r \rangle.\]
In particular, $N_{y,I_G}$ is generated by primitive binomials in $\mathcal{U}(I_G)$ which do not include the variable $y$. These
primitive binomials correspond to closed even walks in $G$ which do not pass through the edge $e$. In particular, they are also closed even walks in $G\setminus e$, so $ \{ g_1,\ldots, g_r\}\subset \mathcal{U}(I_{G\setminus e})$, the universal Gr\"obner basis of $I_{G\setminus e}$ from Theorem \ref{generatordescription} (2).
To show the reverse containment $\mathcal{U}(I_{G\setminus e})\subseteq \{ g_1,\ldots, g_r\}$, suppose that there is some binomial $u-v\in \mathcal{U}(I_{G\setminus e})$ which is not in $\mathcal{U}(I_G)$. Then there would be some closed even walk of $G$ which is not primitive, but becomes primitive after deleting the edge $e$. For $u-v$ to not be primitive means that there is some primitive
binomial $u'-v' \in \mathcal{U}(I_G)$ such that $u' | u$ and $v'|v$. Since $y$ does not divide $u$ or $v$, we must have $u'-v' \in \mathcal{U}(I_{G\setminus e})$, a contradiction to $u-v$ being primitive. Therefore $\mathcal{U}(I_{G\setminus e})=\{ g_1,\ldots, g_r\}$. Since $\{g_1,\ldots,g_r\}$ generates $I_{G\setminus e}$, we have $I_{G\setminus e}= \langle g_1,\ldots,g_r \rangle
= N_{y,I_G} $, thus proving the result.
\end{proof}
It is more difficult to give a similar description for $C_{y,I_G}$. For example, $C_{y,I_G}$ may not be prime, and thus, it may not be the toric ideal of any graph.
If we make the extra assumption that the binomial generators in $\mathcal{U}(I_G)$ are
{\it doubly square-free} (i.e., each binomial is the difference of two square-free monomials), then it is possible to give a slightly more concrete description of $C_{y,I_G}$. We work out these details below.
Fix a variable $y$ in $\mathbb{K}[E(G)]$, and write the elements
of $\mathcal{U}(I_G)$ as $\{y^{d_1}q_1 +r_1,\ldots, y^{d_k}q_k +r_k, g_1,\ldots, g_r \}$, where $d_i>0$ and where $y$ does not divide $q_i$ or any term of $g_i$. Since we are assuming
the elements in $\mathcal{U}(I_G)$ are doubly square-free, we have $d_i = 1$
for $i=1,\ldots,k$ and $q_1,\ldots,q_k$ are square-free monomials.
Consequently
\[{\rm in}_y(I_G)=\langle yq_1,\ldots,yq_k, g_1,\ldots,g_r\rangle\]
is generated by doubly square-free binomials and square-free monomials.
Let $\bigcap_j Q_j$ be the primary decomposition of $\langle yq_1,
\ldots, yq_k \rangle$. Each $Q_j$ is an ideal generated by
variables since $\langle yq_1,\ldots,yq_k \rangle$ is a
square-free monomial ideal. Thus
$${\rm in}_y(I_G) = \left( \bigcap_j Q_j \right) + \langle g_1,\ldots,g_r
\rangle = \bigcap_j (Q_j+ \langle g_1,\ldots,g_r \rangle).$$
If there is a $g_l=u_l-v_l$ with either $u_l$ or $v_l\in Q_j$, then $Q_j +\langle g_1,\ldots,g_r\rangle$ can be further decomposed
into an intersection of ideals generated by variables and square-free binomials.
Continuing this process, we can
write ${\rm in}_y(I_G)=\bigcap_i P_i$, where
each $P_i = M_i+T_i$, with $M_i$ an ideal generated by a subset of the indeterminates $\{e_1,\ldots,e_t\}$, and $T_i$ an ideal generated by those binomials $g_l = u_l-v_l$ in $\mathcal{U}(I_G)$ with $u_l,v_l\notin M_i$. Again, we point
out that each binomial is a doubly square-free binomial
by our assumption on $\mathcal{U}(I_G)$. As the next result shows, the binomial
ideal $T_i$ is a toric ideal corresponding to
a subgraph of $G$.
\begin{theorem}\label{link_components}
Let $G$ be a finite simple graph with toric ideal $I_G \subseteq \mathbb{K}[E(G)]$, and suppose that the elements of
$\mathcal{U}(I_G)$ are doubly square-free. For a fixed
variable $y$ in $\mathbb{K}[E(G)]$, suppose that
$${\rm in}_y(I_G) = \bigcap_i P_i ~~\mbox{with
$P_i = M_i +T_i$},$$
using the notation as above. Let $E_i \subseteq E(G)$
be the set of edges that correspond to the variables in $M_i + \langle y \rangle$,
and let $G \setminus E_i$ be the graph $G$ with all the edges of
$E_i$ removed.
Then $T_i = I_{G\setminus E_i}$.
\end{theorem}
\begin{proof}
The generators of $T_i$ are those elements of $\mathcal{U}(I_G)$ whose terms are not divisible by any variable contained in $M_i + \langle y \rangle$. So a generator
of $T_i$ corresponds
to a primitive closed even walk that does not contain any
of the edges in $E_i$. Therefore, each generator of $T_i$ is a closed
even walk in $G \setminus E_i$, and thus
$T_i\subset I_{G\setminus E_i}$ by
Theorem \ref{generatordescription} (1). Conversely, suppose that $\Gamma \in \mathcal{U}(I_{G\setminus E_i})$. Then by Theorem \ref{generatordescription} (2), $\Gamma$ corresponds to some primitive closed even walk of $G$ not passing through any edge of $E_i$. These are exactly the generators in $T_i$.
\end{proof}
We now arrive at a primary decomposition of ${\rm in}_y(I_G)$.
\begin{corollary}\label{decomposition1}
Let $G$ be a finite simple graph with toric ideal $I_G \subseteq \mathbb{K}[E(G)]$, and suppose that the elements of
$\mathcal{U}(I_G)$ are doubly square-free.
For a fixed
variable $y$ in $\mathbb{K}[E(G)]$, suppose that
$${\rm in}_y(I_G) = \bigcap_i P_i,$$
using the notation as above.
Then each $P_i$ is a prime ideal, and after removing redundant components, this intersection defines a primary
decomposition of ${\rm in}_y(I_G)$.
\end{corollary}
\begin{proof}
By the previous result, $P_i = M_i + I_{G\setminus E_i}$ for every $i$.
So the fact that $P_i$ is a prime ideal
immediately follows from the fact that any toric ideal is prime, and that no cancellation occurs between variables in $M_i$ and elements of
$T_i = I_{G\setminus E_i}$.
\end{proof}
If the universal Gr\"obner basis $\mathcal{U}(I_G)$ consists of doubly square-free binomials, then choosing any variable $y=e$ defines a geometric vertex decomposition of $I_G$ with respect to $y$ by Lemma~\ref{square-freey}. Note that $\langle y \rangle$ appears in the primary decomposition of
$\langle yq_1,\ldots,yq_k\rangle$,
so one prime ideal in the decomposition of ${\rm in}_y(I_G)$ given in
Corollary \ref{decomposition1}
will always be $\langle y \rangle + \langle g_1,\ldots,g_r\rangle$. But this is exactly $\langle y\rangle +N_{y,I_G} = \langle y \rangle + I_{G\setminus e}$, by Lemma \ref{linktoricidealgraph}. As the next theorem
shows, if we omit this prime ideal, the remaining prime ideals form
a primary decomposition of $C_{y,I_G}$.
\begin{theorem}\label{toric_GVD}
Let $G$ be a finite simple graph with toric ideal $I_G \subseteq \mathbb{K}[E(G)]$, and suppose that the elements of
$\mathcal{U}(I_G)$ are doubly square-free.
Fix any variable $y=e$, where $e$ is an edge of $G$. Suppose that
after relabelling
the primary decomposition
${\rm in}_y(I_G)$ of Corollary \ref{decomposition1}
we have
\begin{equation}\label{square-freedecomp}{\rm in}_y(I_G)=\bigcap_{i=0}^d(M_i+I_{G\setminus E_i}) = (\langle y \rangle + I_{G\setminus e}) \cap
\bigcap_{i=1}^d
(M_i+I_{G\setminus E_i}).
\end{equation}
Then $$C_{y,I_G} = \bigcap_{i=1}^d(M_i+I_{G\setminus E_i}),$$ and this intersection is a primary decomposition of $C_{y,I_G}$. Furthermore, if $<$ is a $y$-compatible monomial order,
then \eqref{square-freedecomp} is a geometric vertex decomposition for $I_G$ with respect to $y$.
\end{theorem}
\begin{proof}
The fact about the geometric vertex decomposition follows from Lemma \ref{square-freey}.
Since $\mathcal{U}(I_G)$ contains doubly square-free binomials, we can write
\begin{eqnarray*}
{\rm in}_y(I_G) &=& \langle ym_1,\ldots,ym_k, g_1,\ldots,g_r\rangle
= \langle y,g_1,\ldots,g_r \rangle \cap \langle m_1,\ldots,m_k,g_1,\ldots,g_r \rangle
\end{eqnarray*}where $y$ does not divide any $m_i$ or any term of any $g_i$. By definition,
\[N_{y,I_G} = \langle g_1,\ldots, g_r \rangle \hspace{4mm}
~\mbox{and}~C_{y,I_G} = \langle m_1,\ldots, m_k, g_1,\ldots, g_r \rangle.\]
Applying the process described before Theorem \ref{link_components} to $\langle m_1,\ldots, m_k, g_1,\ldots, g_r \rangle$ proves the first claim.
\end{proof}
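\begin{example}
Let $G$ consist of two $4$-cycles glued along the edge $e_1 = \{v_1,v_2\}$: the first cycle has edges $e_1$, $e_2 = \{v_2,v_3\}$, $e_3 = \{v_3,v_4\}$, $e_4 = \{v_4,v_1\}$, and the second has edges $e_1$, $f_1 = \{v_2,v_5\}$, $f_2 = \{v_5,v_6\}$, $f_3 = \{v_6,v_1\}$. Since $G$ is bipartite, one can check that $\mathcal{U}(I_G)$ consists of the three doubly square-free cycle binomials
$$e_1e_3 - e_2e_4, \quad e_1f_2 - f_1f_3, \quad e_2e_4f_2 - e_3f_1f_3,$$
the last coming from the outer $6$-cycle. Take $y = e_1$. Then $N_{y,I_G} = \langle e_2e_4f_2 - e_3f_1f_3 \rangle = I_{G\setminus e_1}$, in agreement with Lemma \ref{linktoricidealgraph}, while
$$C_{y,I_G} = \langle e_3, f_2, e_2e_4f_2 - e_3f_1f_3 \rangle = \langle e_3, f_2 \rangle.$$
The decomposition \eqref{square-freedecomp} reads
$${\rm in}_y(I_G) = (\langle e_1 \rangle + I_{G\setminus e_1}) \cap \langle e_3, f_2 \rangle,$$
where $\langle e_3, f_2 \rangle = M_1 + I_{G\setminus E_1}$ with $E_1 = \{e_1,e_3,f_2\}$ and $I_{G\setminus E_1} = \langle 0 \rangle$, since removing these edges leaves a forest.
\end{example}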
\begin{remark}\label{structure_remark}
Let $M$ be a square-free monomial ideal and $I_H$ a toric ideal of a graph $H$ where the elements of $\mathcal{U}(I_H)$ are doubly square-free. The arguments presented above can be adapted to prove that $M+I_H$ has a primary decomposition into prime ideals of the form $M_i+T_i$ as in Theorem~\ref{link_components}.
\end{remark}
\subsection{Geometric vertex decomposability under graph operations}
Given a graph $G$ whose toric ideal $I_G$ is
geometrically vertex decomposable, it is natural
to ask if there are any graph operations we can
perform on $G$ to make a new graph $H$ so that
the associated toric ideal $I_H$ is also
geometrically vertex decomposable. We show that the operation
of ``gluing'' an even cycle onto a graph
$G$ is one such operation.
We make this more precise. Given a graph $G = (V(G),E(G))$ and
a subset $W \subseteq V(G)$, the {\it induced graph}
of $G$ on $W$,
denoted $G_W$, is the graph $G_W = (W,E(G_W))$ where
$E(G_W) = \{ e\in E(G) ~|~ e \subseteq W\}$. A graph $G$ is
a {\it cycle} (of length $n$) if $V(G) = \{x_1,\ldots,x_n\}$ and
$E(G) = \{\{x_1,x_2\},\{x_2,x_3\},\ldots,\{x_{n-1},x_n\},\{x_n,x_1\}\}.$
Following \cite[Construction 4.1]{FHKVT}, we define the {\it gluing}
of two graphs as follows. Let $G_1$ and $G_2$ be two graphs,
and suppose that $H_1 \subseteq G_1$ and $H_2 \subseteq G_2$ are
induced subgraphs of $G_1$ and $G_2$ that are isomorphic. If
$\varphi:H_1 \rightarrow H_2$ is the corresponding graph isomorphism,
we let $G_1 \cup_\varphi G_2$ denote the disjoint union $G_1 \sqcup G_2$
with the associated edges and vertices of $H_1 \cong H_2$ being identified.
We may say $G_1$ and $G_2$ are {\it glued along $H$} if both the induced
subgraphs $H_1 \cong H_2 \cong H$ and $\varphi$ are clear.
\begin{example} Figure \ref{fig:onecycle} (which is adapted from \cite{FHKVT})
shows
the gluing of a cycle $C$ of even length onto
a graph $G$ to make a new graph $H$. The labelling is included to help
illuminate the proof of the next theorem. In this figure, the cycle $C$ has
edges $f_1,f_2,\ldots,f_{2n}$. The edge $e$ is part of the graph $G$. We have
glued $C$ and $G$ along the edge $e \cong f_{2n}$.
\begin{figure}[!ht]
\centering
\begin{tikzpicture}[scale=0.45]
\draw[dotted] (6,0) circle (3cm) node{$G$};
\draw[thin,dashed] (4,1) -- (5,2);
\draw[thin,dashed] (4,-1) -- (5,-2);
\draw (4,-1) -- (3.3,-2.25) node[midway,left] {\footnotesize{$f_{2n-1}$}};
\draw (4,1) -- (4,-1) node[midway, right] {$e$};
\draw (4,1) -- (4,-1) node[midway,left] {\footnotesize{$f_{2n}$}};
\draw (4,1) -- (3.3,2.25) node[midway,left] {\footnotesize{$f_{1}$}};
\node at (0,0) {$C$};
\draw[loosely dotted] (3.5,2) edge[bend right=10] (2,3.5);
\draw[dashed] (1,4) -- (2,3.5);
\draw (-1,4) -- (1,4);
\draw[dashed] (-1,4) -- (-2,3.5);
\draw[loosely dotted] (-2,3.5) edge[bend right=10] (-3.5,2);
\draw[dashed] (-4,1) -- (-3.5,2);
\draw (-4,1) -- (-4,-1) node[midway,left] {\footnotesize{$f_i$}};
\draw[dashed](-4,-1) -- (-3.5,-2);
\draw[loosely dotted] (-2,-3.5) edge[bend right=-10] (-3.5,-2);
\draw[dashed] (-1,-4) -- (-2,-3.5);
\draw (-1,-4) -- (1,-4);
\draw[dashed] (1,-4) -- (2,-3.5);
\draw[loosely dotted] (3.5,-2) edge[bend right=-10] (2,-3.5);
\fill[fill=white,draw=black] (-4,1) circle (.1);
\fill[fill=white,draw=black] (-4,-1) circle (.1);
\fill[fill=white,draw=black] (4,1) circle (.1);
\fill[fill=white,draw=black] (4,-1) circle (.1);
\fill[fill=white,draw=black] (-1,4) circle (.1);
\fill[fill=white,draw=black] (1,4) circle (.1);
\fill[fill=white,draw=black] (-1,-4) circle(.1);
\fill[fill=white,draw=black] (1,-4) circle (.1);
\fill[fill=white,draw=black] (3.3,2.25) circle (.1);
\fill[fill=white,draw=black] (3.3,-2.25) circle (.1);
\end{tikzpicture}
\caption{Gluing an even cycle $C$ to a graph $G$ along an edge.}
\label{fig:onecycle}
\end{figure}
\end{example}
The geometric vertex decomposability property
is preserved when an even cycle is glued along an edge of a graph whose toric ideal
is geometrically vertex decomposable.
\begin{theorem}\label{gluetheorem}
Suppose that $G$ is a graph such that $I_G$ is
geometrically vertex decomposable
in $\mathbb{K}[E(G)]$. Let $H$ be the
graph obtained from $G$ by gluing a cycle of even length
onto an edge of $G$ (as in Figure \ref{fig:onecycle}). Then $I_H$ is
geometrically vertex decomposable in $\mathbb{K}[E(H)]$.
\end{theorem}
\begin{proof}
The ideal $I_H$ is clearly unmixed since $I_H$ is a prime ideal. Now
let $E(G) = \{e_1,\ldots,e_s\}$ denote the edges of $G$ and
let $E(C) = \{f_1,\ldots,f_{2n}\}$ denote the edges of the
even cycle $C$. Let $e$ be any edge of $G$, and after relabelling
the $f_i$'s
we can assume that $C$ is glued to $G$ along $f_{2n}$ and $e$ (see
Figure \ref{fig:onecycle}). Consequently,
$$E(H) = E(G) \cup \{f_1,\ldots,f_{2n-1}\}.$$
Let $e = f_{2n} = \{a,b\}$, and suppose that $a \in f_1$ and $b \in f_{2n-1}$, i.e.,
$a$ is the vertex that $f_1$ shares with $f_{2n}$, and $b$ is the vertex of $f_{2n-1}$
shared with $f_{2n}$.
By Theorem \ref{generatordescription} (2), a universal Gr\"obner
basis of $I_H$ is given by the primitive binomials that
correspond to even closed walks. Consider any
even closed walk that passes through $f_1$. It will have
the form $$(f_1,f_2,\ldots,f_{2n-1},e) ~~\mbox{or}~~
(f_1,f_2,\ldots,f_{2n-1},e_{j_1},\ldots,e_{j_{2k-1}})$$ for some
odd walk $(e_{j_1},\ldots,e_{j_{2k-1}})$ in $G$ that connects
the vertex $a$ of $f_1$ with the vertex $b$ of $f_{2n-1}$. Thus, any primitive binomial
involving the variable
$f_1$ has the form
$$f_1f_3\cdots f_{2n-1} - ef_2\cdots f_{2n-2}$$
or
$$f_1f_3\cdots f_{2n-1}e_{j_2}e_{j_4}\cdots e_{j_{2k-2}} -
f_2f_4\cdots f_{2n-2}e_{j_1}\cdots e_{j_{2k-1}}.$$
Let $y = f_1$ and let $<$ be a $y$-compatible monomial order, and consider the universal Gr\"obner basis of $I_H$ written
as $\mathcal{U}(I_H) = \{ y^{d_1}q_1 +r_1,\ldots, y^{d_k}q_k +r_k, g_1,\ldots, g_r \}$, where $d_i>0$ and where $y$ does not divide any term of $g_i$ or $q_i$.
Each $g_1,\ldots,g_r$ corresponds to a primitive closed even walk that
does not pass through $f_1$. Consequently, each
$g_i$ corresponds to a primitive closed even walk in $G$. Thus
$\langle g_1,\ldots,g_r \rangle = I_G$ (we abuse notation and write $I_G$ for the induced ideal $I_G\mathbb{K}[E(H)]$).
Additionally, by Lemma \ref{linktoricidealgraph} we have
$N_{y,I_H} =\langle g_1, \ldots,g_r \rangle = I_{H\setminus f_1}$.
But note that in $H \setminus f_1$, the edge $f_2$ is a leaf. Removing
$f_2$ from $(H \setminus f_1)$ makes $f_3$ a leaf, and so on. So, by repeatedly
applying Lemma \ref{removeleaves}, we have
$$N_{y,I_H} = \langle g_1,\ldots,g_r \rangle = I_{H\setminus f_1} = I_{(H\setminus f_1)\setminus f_2} =
\cdots = I_{(\cdots (H\setminus f_1) \cdots )\setminus f_{2n-1}} = I_G.$$
Since $f_1f_3\cdots f_{2n-1} - ef_2\cdots f_{2n-2}$ is a primitive binomial,
$f_3\cdots f_{2n-1} \in C_{y,I_H}$. Furthermore, by our discussion
above, any other primitive binomial containing a term divisible by $y =f_1$
has the form
$f_1f_3\cdots f_{2n-1}e_{j_2}e_{j_4}\cdots e_{j_{2k-2}} -
f_2f_4\cdots f_{2n-2}e_{j_1}\cdots e_{j_{2k-1}}$, and consequently
$f_3\cdots f_{2n-1}e_{j_2}\cdots e_{j_{2k-2}} \in C_{y,I_H}$. But this form
is divisible by $f_3 \cdots f_{2n-1}$, so
$$C_{y,I_H} = \langle f_3\cdots f_{2n-1}, g_1,\ldots,g_r\rangle
= \langle f_3 \cdots f_{2n-1} \rangle + I_G.$$
It is now straightforward to check that
$${\rm in}_y(I_H) = \langle f_1f_3\cdots f_{2n-1} \rangle+ I_G = C_{y,I_H} \cap (N_{y,I_H} + \langle y \rangle),$$
thus giving a geometric vertex decomposition of $I_H$ with respect to $y$. (We could also deduce this
from Lemma \ref{square-freey} since each $d_i =1$ in
our description of $\mathcal{U}(I_H)$ above.)
To complete the proof, the contraction of $N_{y,I_H}$
to $\mathbb{K}[f_2,\ldots,f_{2n-1},e_1,\ldots,e_s]$ satisfies
$$N_{y,I_H} = \langle 0 \rangle + I_G \subseteq \mathbb{K}[f_2,\ldots,f_{2n-1}]
\otimes \mathbb{K}[E(G)].$$
So $N_{y,I_H}$ is geometrically vertex decomposable by Theorem \ref{tensorproduct}
since $I_G$ is geometrically vertex decomposable in $\mathbb{K}[E(G)]$, and similarly
for $\langle 0 \rangle$ in $\mathbb{K}[f_2,\ldots,f_{2n-1}]$. The ideal
$C_{y,I_H}$ contracts to
$$C_{y,I_H} = \langle f_3\cdots f_{2n-1} \rangle + I_G \subseteq
\mathbb{K}[f_2,\ldots,f_{2n-1}] \otimes \mathbb{K}[E(G)].$$
Since $\langle f_3f_5\cdots f_{2n-1} \rangle \subseteq \mathbb{K}[f_2,\ldots,f_{2n-1}]$
is geometrically vertex decomposable by Lemma \ref{simplecases} (2), and $I_G$ is geometrically vertex decomposable in $\mathbb{K}[E(G)]$ by hypothesis,
the ideal
$C_{y,I_H}$ is geometrically vertex decomposable by again appealing to
Theorem \ref{tensorproduct}. Thus $I_H$ is geometrically
vertex decomposable, as desired.
\end{proof}
\begin{example}
Let $G$ be a cycle of even length, i.e., $G$ has edge set $\{e_1,\ldots,e_{2n}\}$
with $(e_1,\ldots,e_{2n})$ a closed even walk. The ideal
$I_G = \langle e_1e_3 \cdots e_{2n-1} - e_2e_4\cdots e_{2n} \rangle$ is geometrically vertex decomposable
by Lemma \ref{simplecases} (2).
By repeatedly
applying Theorem \ref{gluetheorem}, we can glue on even cycles to
make new graphs whose toric ideals are geometrically vertex decomposable. Note that
by Lemma \ref{removeleaves}, we can also add leaves (and leaves to leaves, and so on) without destroying
the geometric vertex decomposability property.
These constructions allow us to build many
graphs whose toric ideal is geometrically vertex decomposable.
As a specific example, the graph in Figure \ref{gluingexample}
\begin{figure}[!ht]
\centering
\begin{tikzpicture}[scale=0.45]
\draw (-4,0) -- (0,0);
\draw (0,0) -- (4,0);
\draw (4,0) -- (8,0);
\draw (8,0) -- (12,0);
\draw (12,0) -- (16,0);
\draw (-4,4) -- (0,4);
\draw (0,4) -- (4,4);
\draw (4,4) -- (8,4);
\draw (8,4) -- (12,4);
\draw (12,4) -- (16,4);
\draw (-4,0) -- (-4,4);
\draw (0,0) -- (0,4);
\draw (4,0) -- (4,4);
\draw (8,0) -- (8,4);
\draw (12,0) -- (12,4);
\draw (16,0) -- (16,4);
\draw (16,0) -- (18,3);
\draw (16,0) -- (19,0);
\draw (18,3) -- (20,4);
\draw (18,3) -- (20,3);
\draw (4,4) -- (2,6);
\draw (4,4) -- (6,6);
\draw (-6,2) -- (-4,4);
\fill[fill=white,draw=black] (-4,0) circle (.1);
\fill[fill=white,draw=black] (0,0) circle (.1);
\fill[fill=white,draw=black] (4,0) circle (.1);
\fill[fill=white,draw=black] (8,0) circle (.1);
\fill[fill=white,draw=black] (12,0) circle (.1);
\fill[fill=white,draw=black] (16,0) circle (.1);
\fill[fill=white,draw=black] (-4,4) circle (.1);
\fill[fill=white,draw=black] (0,4) circle (.1);
\fill[fill=white,draw=black] (4,4) circle (.1);
\fill[fill=white,draw=black] (8,4) circle (.1);
\fill[fill=white,draw=black] (12,4) circle (.1);
\fill[fill=white,draw=black] (16,4) circle (.1);
\fill[fill=white,draw=black] (18,3) circle (.1);
\fill[fill=white,draw=black] (20,4) circle (.1);
\fill[fill=white,draw=black] (20,3) circle (.1);
\fill[fill=white,draw=black] (19,0) circle (.1);
\fill[fill=white,draw=black] (2,6) circle (.1);
\fill[fill=white,draw=black] (6,6) circle (.1);
\fill[fill=white,draw=black] (-6,2) circle (.1);
\end{tikzpicture}
\caption{A graph whose toric ideal is geometrically vertex decomposable}
\label{gluingexample}
\end{figure}
is geometrically vertex decomposable
since we have repeatedly glued cycles of length four along edges,
and then added some leaves.
\end{example}
\section{Toric ideals of graphs and the glicci property}
In this section we recall some of the basics of Gorenstein liaison (Section \ref{sec:Gliaison}) and then show that some large classes of toric ideals of graphs are glicci (Section \ref{sec:someGlicci}).
This section is partly motivated by a result of Klein and Rajchgot \cite[Theorem 4.4]{KR}, which says that geometrically vertex decomposable ideals are
glicci. We note that while geometrically vertex decomposable ideals are glicci, glicci ideals need not be geometrically vertex decomposable. Indeed, we do not know if the toric ideals of graphs proven to be glicci in this section are also geometrically vertex decomposable. However, the results
of this section make use of the geometric vertex decomposition language
of Remark \ref{gvdremark}. For the remainder of the section, we will let $S=\mathbb{K}[x_0,\ldots,x_n]$ denote the graded polynomial ring with respect to the standard grading.
\subsection{Gorenstein liaison preliminaries}\label{sec:Gliaison}
We provide a quick review of the basics of Gorenstein liaison; our main
references for this material are \cite{MN1,MN}.
\begin{definition}
Suppose that $V_1,V_2,X$ are subschemes of $\mathbb{P}^n$ defined by saturated ideals $I_{V_1},I_{V_2}$ and $I_X$ of $S =\mathbb{K}[x_0,\ldots,x_n]$, respectively. Suppose also that $I_X\subset I_{V_1}\cap I_{V_2}$ and $I_{V_1}=I_X:I_{V_2}$ and $I_{V_2}=I_X:I_{V_1}$. We say that $V_1$ and $V_2$ are \textit{directly algebraically $G$-linked}
if $X$ is Gorenstein.
In this case we write
$V_1 \stackrel{X}{\sim} V_2$.
\end{definition}
We can now define an equivalence relation using the notion
of being directly algebraically $G$-linked.
\begin{definition}
Let $V_1,\ldots,V_k$ be subschemes of $\mathbb{P}^n$ defined by the
saturated ideals $I_{V_1},\ldots,I_{V_k}$. If there are
Gorenstein varieties $X_1,\ldots,X_{k-1}$ such that
$V_1 \stackrel{X_1}{\sim} V_2
\stackrel{X_2}{\sim} \cdots \stackrel{X_{k-1}}{\sim} V_k$,
then we say $V_1$ and $V_k$ are in the same {\it Gorenstein liaison class} (or
{\it $G$-liaison class}) and $V_1$ and $V_k$ are {\it $G$-linked}
in $k-1$ steps. If $V_k$ is a complete intersection, then
we say $V_1$ is in the {\it Gorenstein liaison class of a complete
intersection} or {\it glicci}.
\end{definition}
In what follows, we say a homogeneous saturated ideal $I$ is glicci if the subscheme defined by
$I$ is glicci.
\begin{example}
Consider the twisted cubic $V_1\subset \mathbb{P}^3$ with
\[I_{V_1} =
\langle xz-y^2, yw-z^2, xw-yz \rangle \subseteq \mathbb{K}[x,y,z,w]. \]
Choose two of these quadrics, and let $X$ be the subscheme they define. It is an exercise to check that $X$ is the union of $V_1$ and a line, which we denote by $V_2$. Therefore, $V_1 \stackrel{X}{\sim} V_2$. Furthermore, since $X$ is a complete intersection, and thus
Gorenstein, the twisted cubic and a line are directly $G$-linked.
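To make the exercise concrete, here is a minimal sketch of one such choice. Taking the quadrics $Q_1 = xz-y^2$ and $Q_2 = xw-yz$, one can verify that
\[
\langle Q_1, Q_2 \rangle = I_{V_1} \cap \langle x,y \rangle,
\]
so that $X$ is the union of the twisted cubic and the line $V_2 = V(x,y)$. For example, the containment $y(yw-z^2) \in \langle Q_1, Q_2\rangle$ follows from the identity $y(yw-z^2) = zQ_2 - wQ_1$.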
\end{example}
\begin{remark}
One of the main open questions in liaison theory asks if every arithmetically
Cohen-Macaulay subscheme of $\mathbb{P}^n$ is glicci (see \cite[Question 1.6]{KMMNP}).
\end{remark}
While it can be difficult in general to find a sequence of $G$-links between two varieties, there is a tool called an elementary $G$-biliaison which simplifies the process when it exists.
\begin{definition}
Let $S=\mathbb{K}[x_0,\ldots,x_n]$ with the standard grading. Let $C$ and $I$ be homogeneous, saturated, and unmixed ideals of $S$ such that ${\rm ht}(C)={\rm ht}(I)$. Suppose that there is some $d\in\mathbb{Z}$ and Cohen-Macaulay homogeneous ideal $N\subset C\cap I$ with ${\rm ht}(N)={\rm ht}(I)-1$ such that $I/N$ is
isomorphic to $[C/N](-d)$ as an $S/N$-module. If $N$ is generically Gorenstein, then $I$ is obtained from $C$ via an \textit{elementary $G$-biliaison of height $d$}.
\end{definition}
In fact, suppose that $V$ and $W$ are two subschemes of $\mathbb{P}^n$ such that $I_{V}$ and $I_{W}$ are homogeneous, saturated and unmixed ideals. If $I_{V}$ is obtained from $I_{W}$ by an elementary $G$-biliaison, then $V$ and $W$ are $G$-linked in two steps \cite[Theorem 3.5]{Ha}. Moreover, elementary
$G$-biliaisons preserve the Cohen-Macaulay property. This and other properties of $G$-linked varieties can be found in \cite{MN1}. Indeed, we will use the following:
\begin{lemma}\label{linkage_CM}\cite[Corollary 5.13]{MN1}
Let $I$ and $J$ be homogeneous, saturated ideals in $S$ and assume that $I$ and $J$ are directly $G$-linked. Then $S/I$ is Cohen-Macaulay if and only if $S/J$ is Cohen-Macaulay.
\end{lemma}
Migliore and Nagel have given a criterion for an ideal to be glicci.
\begin{theorem}\cite[Lemma 2.1]{MN}\label{MN_glicci}
Let $I\subset S$ be a homogeneous ideal such that $S/I$ is Cohen-Macaulay and
generically Gorenstein. If $f\in S$ is a homogeneous non-zero-divisor of $S/I$, then the ideal $I + \langle f \rangle \subset S$ is glicci.
\end{theorem}
Another criterion for an ideal to be glicci is geometric vertex decomposability. In fact, under suitable hypotheses, a geometric vertex decomposition gives rise to an elementary $G$-biliaison of height $1$.
\begin{lemma}\cite[Corollary 4.3]{KR}\label{GVD_linkage}
Let $I$ be a homogeneous, saturated, unmixed ideal of $S$ and $\text{in}_y I = C_{y,I} \cap (N_{y,I}+\langle y \rangle)$ a nondegenerate geometric vertex decomposition with respect to some variable $y = x_i$ of $S$. Assume that $N_{y,I}$ is Cohen--Macaulay and generically Gorenstein and that $C_{y,I}$ is also unmixed. Then $I$ is obtained from $C_{y,I}$ by an elementary $G$-biliaison of height $1$.
\end{lemma}
\begin{theorem}\cite[Theorem 4.4]{KR}\label{gvd=>glicci} If the saturated homogeneous ideal $I \subseteq S$ is geometrically vertex
decomposable, then $I$ is glicci.
\end{theorem}
As noted in the introduction of the paper, the previous result partially motivates
our interest in developing a deeper understanding of
geometrically vertex decomposable ideals.
\subsection{Some toric ideals of graphs which are glicci}\label{sec:someGlicci}
In this section we use Migliore and Nagel's result \cite[Lemma 2.1]{MN} (see Theorem \ref{MN_glicci} above) to show that some classes of toric ideals of graphs are glicci. We begin with a straightforward consequence of this theorem together with \cite[Theorem 3.7]{FHKVT}.
\begin{theorem}\label{glue=glicci}
Let $G$ be a finite simple graph such that $\mathbb{K}[E(G)]/I_G$ is Cohen-Macaulay. Let $H$ be the graph obtained by gluing an even cycle $C$ to $G$ along any edge. Then $I_H$ is glicci.
\end{theorem}
\begin{proof}
As in the proof of Theorem \ref{gluetheorem}, let $E(G) = \{e_1,\ldots, e_s\}$ denote the edges of $G$ and $E(C) = \{f_1,\ldots, f_{2n}\}$ denote the (consecutive) edges of the even cycle $C$. Assume that $C$ is glued to $G$ by identifying $f_{2n}$ with the edge $e\in E(G)$. Then $\mathbb{K}[E(H)] = \mathbb{K}[E(G)]\otimes \mathbb{K}[f_1,\dots,f_{2n-1}]$. For convenience, write $I_G$ for the induced ideal $I_G\mathbb{K}[E(H)]$.
Let $F = f_1f_3\cdots f_{2n-1}-f_2f_4\cdots f_{2n-2}e$ be the primitive binomial associated to the even cycle $C$. By \cite[Theorem 3.7]{FHKVT}, $I_H = I_G+\langle F\rangle$. As $I_G$ is prime, we have that $F$ is a homogeneous non-zero-divisor on $\mathbb{K}[E(H)]/I_G$ and $\mathbb{K}[E(H)]/I_G$ is generically Gorenstein. As $\mathbb{K}[E(H)]/I_G$ is Cohen-Macaulay by assumption, Theorem \ref{MN_glicci} implies that $I_H$ is glicci.
\end{proof}
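As a small instance of this result, let $G$ be a $4$-cycle with consecutive edges $e_1,e_2,e_3,e_4$, so that $I_G = \langle e_1e_3-e_2e_4 \rangle$ and $\mathbb{K}[E(G)]/I_G$ is Cohen-Macaulay, being a hypersurface ring. Gluing a second $4$-cycle $C$ with consecutive edges $f_1,f_2,f_3,f_4$ onto $G$ by identifying $f_4$ with $e_1$ produces a graph $H$ with
\[
I_H = \langle e_1e_3-e_2e_4,\ f_1f_3-f_2e_1 \rangle,
\]
and Theorem \ref{glue=glicci} shows that this ideal is glicci.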
We can combine a one step geometric vertex decomposition with Theorem \ref{MN_glicci} to see that many toric ideals of graphs which contain $4$-cycles are glicci. Our main theorem in this direction is Theorem \ref{thm:gapFreeGlicci}, which says that the toric ideal of a \emph{gap-free} graph containing a $4$-cycle is glicci. We begin with a general lemma which is not necessarily about toric ideals of graphs.
\begin{lemma}\label{MN-combined}
Let $S =\mathbb{K}[x_0,\ldots, x_n]$ with the standard grading, and $I\subset S$ be a homogeneous, saturated ideal such that $S/I$ is Cohen-Macaulay. Assume the following conditions are satisfied:
\begin{enumerate}
\item $I$ is square-free in $y$ with a nondegenerate geometric vertex decomposition \[\init_y(I) = C_{y,I}\cap (N_{y,I}+\langle y \rangle);\]
\item $I$ contains a homogeneous polynomial $Q$ of degree $2$ such that $y$ divides some term of $Q$; and
\item $S/N_{y,I}$ is Cohen-Macaulay and generically Gorenstein, and $C_{y,I}$ is radical.
\end{enumerate}
Then $I$ is glicci.
\end{lemma}
\begin{proof}
By assumption (1), we have a nondegenerate geometric vertex decomposition $\init_y(I) = C_{y,I}\cap (N_{y,I}+\langle y \rangle)$.
Since $S/I$ is Cohen-Macaulay, the ideal $I$ is unmixed, and we can conclude that $C_{y,I}$ is equidimensional by \cite[Lemma 2.8]{KR}. Since $C_{y,I}$ is also radical by assumption (3), $C_{y,I}$ must be unmixed. Furthermore, because $S/N_{y,I}$ is Cohen-Macaulay and generically Gorenstein by assumption (3), we may use Lemma \ref{GVD_linkage} to see that the geometric vertex decomposition gives rise to an elementary $G$-biliaison of height 1 from $I$ to $C_{y,I}$. Hence $S/I$ being Cohen-Macaulay implies that $S/C_{y,I}$ is too by Lemma \ref{linkage_CM}.
Let $<$ be a $y$-compatible monomial order. By assumptions (1) and (2), $I$ contains a degree $2$ form which can be written as $Q = yf+R$ where $y$ does not divide any term in $f$ or $R$.
Thus, $f\in C_{y,I}$.
Let $z = \init_<(f)$.
Since the geometric vertex decomposition $\init_y(I) = C_{y,I}\cap (N_{y,I}+\langle y \rangle)$ is nondegenerate, we have that $C_{y,I}\neq \langle 1\rangle$. Hence $C_{y,I}$ has a reduced Gr\"obner basis of the form
$\{f',t_1,\dots, t_s\}$ where $\init_<(f') = z$ and $z$ does not divide any term of any $t_i$, $1\leq i\leq s$. Let $C' = \langle t_1,\dots, t_s\rangle$ so that $C_{y,I} = \langle f'\rangle+ C'$. With this set-up, we see that $f'\neq 0$ is a non-zero-divisor on $S/C'$.
Let $S_{\hat{z}} = \mathbb{K}[x_0,\dots,\hat{z},\dots, x_n]$ denote the polynomial ring in all the variables of $S$ except $z$. Then $S/C_{y,I}\cong S_{\hat{z}}/ C'$.
Thus, $S_{\hat{z}}/C'$ (and hence $S/C'$ after extending $C'$ to $S$) is Cohen-Macaulay because $S/C_{y,I}$ is Cohen-Macaulay. Similarly, $C_{y,I}$ being radical implies that $C'$ (viewed in $S_{\hat{z}}$ or $S$) is radical. Thus, by \cite[Lemma 2.1]{MN} (see Theorem \ref{MN_glicci}), we conclude that $C_{y,I}$ is glicci.
By applying the elementary $G$-biliaison between $I$ and $C_{y,I}$ once more, we conclude that $I$ is also glicci.
\end{proof}
We will now apply Lemma \ref{MN-combined} to see that certain classes of toric ideals of graphs are glicci. In what follows, let $y = x_i$ be an indeterminate in $S = \mathbb{K}[x_1,\dots, x_n]$, which we identify with $\mathbb{K}[E(G)]$, and let $<$ be a $y$-compatible monomial order. Let $M^G_{y}$ be the ideal generated by all monomials $m\in S$ such that $ym-r\in \mathcal{U}(I_G)$ and $\init_< (ym-r) = ym$. Observe that $M^G_{y}$ does not depend on the choice of $y$-compatible monomial order. Furthermore, since $I_G$ is prime and $ym-r$ is primitive, $y$ cannot appear in both terms of the binomial. We will consider generalizations of $M^G_y$ in Section \ref{section_square-free}.
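To illustrate this ideal, suppose, say, that $G$ is a $4$-cycle with consecutive edges $e_1,e_2,e_3,e_4$, so that $\mathcal{U}(I_G) = \{e_1e_3-e_2e_4\}$. Taking $y = e_1$, any $y$-compatible order selects $\init_<(e_1e_3-e_2e_4) = e_1e_3$, and hence
\[
M^G_{y} = \langle e_3 \rangle.
\]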
\begin{theorem}\label{prop:glicciGraphs}
Let $G$ be a finite simple graph where $\mathbb{K}[E(G)]/I_G$ is Cohen-Macaulay.
Suppose that there exists an edge $y\in E(G)$ such that $y$ is contained in a 4-cycle of $G$, and a $y$-compatible monomial order $<_y$ such that $\init_y(I_G)$ is square-free in $y$.
Suppose also that $I_{G\setminus y}$ is Cohen-Macaulay
and $I_{G\setminus y}+M^G_{y}$ is radical.
Then $I_G$ is glicci.
\end{theorem}
\begin{proof}
We will show that the three assumptions of Lemma \ref{MN-combined} hold. Let $<$ be a $y$-compatible monomial order.
Since $I_G$ is square-free in $y$, there exists a geometric vertex decomposition
\[\init_y(I_G) = C_{y,I_G}\cap (N_{y,I_G}+\langle y \rangle)\] by Lemma \ref{square-freey}. Then $N_{y,I_G} = I_{G\setminus y}$ and $C_{y,I_G}=I_{G\setminus y}+M^G_{y}$.
Since $I_G$ is a toric ideal of a graph, and hence generated in degree $2$ or higher, we do not have that $C_{y,I_G} = \langle 1\rangle$. Furthermore, $I_G$ and $N_{y,I_G}$ are each the toric ideal of a graph, hence radical (and therefore saturated since $I_G$ is not the irrelevant ideal), and $C_{y,I_G}$ is radical by assumption. Moreover, since $y$ is contained in a $4$-cycle of $G$, the reduced Gr\"obner basis of $I_G$ involves $y$. Thus, by \cite[Proposition 2.4]{KR}, we conclude that the geometric vertex decomposition $\init_y(I_G) = C_{y,I_G}\cap (N_{y,I_G}+\langle y \rangle)$ is nondegenerate. Thus, assumption (1) of Lemma \ref{MN-combined} holds.
Assumption (2) of Lemma \ref{MN-combined} holds because there exists an edge $y\in E(G)$ such that $y$ is contained in a 4-cycle of $G$. Assumption (3) of Lemma \ref{MN-combined} holds by the assumption that $I_{G\setminus y}$ is Cohen-Macaulay and $I_{G\setminus y}+M^G_{y}$ is radical.
\end{proof}
Recall from Theorem \ref{sqfree=>cm} that if $I_G\subseteq \mathbb{K}[E(G)]$ is a toric ideal of a graph which has a square-free degeneration, then $\mathbb{K}[E(G)]/I_G$ is Cohen-Macaulay. We can use Theorem \ref{prop:glicciGraphs} to show that many toric ideals of graphs which have both square-free degenerations and $4$-cycles are glicci. Specifically, we have the following:
\begin{corollary}\label{cor:square-freeDegenGlicciGraph}
Let $G$ be a finite simple graph and suppose that there exists an edge $y\in E(G)$ such that $y$ is contained in a $4$-cycle of $G$. Suppose also that there exists some $y$-compatible monomial order $<$ such that $\init_< (I_G)$ is a square-free monomial ideal. Then $I_G$ is glicci.
\end{corollary}
\begin{proof}
Since $\init_< (I_G)$ is a square-free monomial ideal, we have that $\mathbb{K}[E(G)]/I_G$ is Cohen-Macaulay.
Furthermore, $I_G$ is square-free in $y$.
Let $\{yq_1+r_1,\dots, yq_s+ r_s, h_1,\dots, h_t\}$ be a reduced Gr\"obner basis for $I_G$, where $\init_<(yq_i+r_i) = yq_i$, so that each $yq_i$, $1\leq i\leq s$, and each $\init_< (h_j)$, $1\leq j\leq t$, are square-free monomials.
Consider the geometric vertex decomposition
\[
\init_y (I_G) = C_{y,I_G}\cap(N_{y,I_G}+\langle y\rangle).
\]
By \cite[Theorem 2.1]{KMY}, $\{h_1,\dots, h_t\}$ and $\{q_1,\dots, q_s, h_1,\dots, h_t\}$ are Gr\"obner bases for $N_{y,I_G}$ and $C_{y,I_G}$, respectively. Thus, $\init_< (N_{y,I_G})$ and $\init_< (C_{y,I_G})$ are square-free monomial ideals. Since $N_{y,I_G} = I_{G\setminus y}$ is a toric ideal of a graph, it follows that $I_{G\setminus y}$ is Cohen-Macaulay. Since $C_{y,I_G} = I_{G\setminus y}+M^G_y$, it follows that $I_{G\setminus y}+M^G_y$ is radical. Thus, the assumptions of Theorem \ref{prop:glicciGraphs} hold and we conclude that $I_G$ is glicci.
\end{proof}
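For instance, consider the complete bipartite graph $K_{2,3}$, with edges labelled, say, $a_i = \{x_1,y_i\}$ and $b_i = \{x_2,y_i\}$ for $1\leq i\leq 3$, so that
\[
I_{K_{2,3}} = \langle a_ib_j - a_jb_i ~|~ 1\leq i<j\leq 3 \rangle.
\]
Every edge lies in a $4$-cycle, and under the lexicographic order with $a_1>a_2>a_3>b_1>b_2>b_3$ (which is $a_1$-compatible) these binomials form a Gr\"obner basis, a standard fact about the $2\times 2$ minors of a generic $2\times 3$ matrix, with square-free initial ideal $\langle a_1b_2, a_1b_3, a_2b_3\rangle$. Corollary \ref{cor:square-freeDegenGlicciGraph} therefore shows that $I_{K_{2,3}}$ is glicci.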
We end by proving that the toric ideal of a gap-free graph containing a $4$-cycle is glicci.
A graph $G$ is {\it gap-free} if for any two edges
$e_1 = \{u,v\}$ and $e_2 = \{w,x\}$ with $\{u,v\} \cap \{w,x\}
= \emptyset$, there is an edge $e \in E(G)$ that
is adjacent to both $e_1$ and $e_2$, i.e., one of the edges
$\{u,w\}, \{u,x\}, \{v,w\}, \{v,x\}$ is also in $G$. Note that
the name for this family is not standardized; these
graphs are sometimes called $2K_2$-free, or $C_4$-free, among other
names (see D'Al\`i \cite{DAli} for more).
Note that $G$ has an induced $4$-cycle if and only if the graph complement $\bar{G}$ is not gap-free.
\begin{theorem}\label{thm:gapFreeGlicci}
Let $G$ be a gap-free graph such that the graph complement $\bar{G}$ is not gap-free. Then $I_G$ is glicci.
\end{theorem}
\begin{proof}
Since $\bar{G}$ is not gap-free, $G$ must contain a 4-cycle. Pick any variable $y$ belonging to this cycle. By \cite[Theorem 3.9]{DAli}, since $G$ is gap-free, there exists a $y$-compatible order $<_y$ such that $\init_{<_y}(I_G)$ is square-free (we can ensure this by choosing $<_\sigma$ in \cite[Theorem 3.9]{DAli} so that the vertices defining $y$ have the smallest weight). The result now follows from Corollary \ref{cor:square-freeDegenGlicciGraph}.
\end{proof}
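As a specific (and non-bipartite) instance, consider the wheel obtained from a $4$-cycle on $v_1,v_2,v_3,v_4$ by adding a vertex $c$ adjacent to all four $v_i$, so that
\[
E(G) = \{\{c,v_1\},\{c,v_2\},\{c,v_3\},\{c,v_4\},\{v_1,v_2\},\{v_2,v_3\},\{v_3,v_4\},\{v_4,v_1\}\}.
\]
A short case check shows that this graph is gap-free, while its outer $4$-cycle is induced, so $\bar{G}$ is not gap-free; hence its toric ideal is glicci by Theorem \ref{thm:gapFreeGlicci}.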
\section{Toric ideals of bipartite graphs}\label{sec:bipartite}
In this section, we show that toric ideals of bipartite graphs are geometrically vertex decomposable. In Section \ref{sect:generalCase}, we treat the general case, making use of results of Constantinescu and Gorla from \cite{CG}. Then, in Section \ref{sect:specialCases} we give alternate proofs of geometric vertex decomposability in special cases.
\subsection{Toric ideals of bipartite graphs are geometrically vertex decomposable}\label{sect:generalCase}
Recall that a simple graph $G$ is \emph{bipartite} if its vertex set $V(G) = V_1\sqcup V_2$ is a disjoint union of two sets $V_1$ and $V_2$, such that every edge in $G$ has one of its endpoints in $V_1$ and the other endpoint in $V_2$. The purpose of this subsection is to prove Theorem \ref{thm: gvdBipartite} below, which says that the toric ideal of a bipartite graph is geometrically vertex decomposable. We will make use of the results and ideas in Constantinescu and Gorla's paper \cite{CG} on Gorenstein liaison of toric ideals of bipartite graphs.
Let $G$ be a bipartite graph. Following \cite[Definition 2.2]{CG}, we say that a subset ${\bf e} = \{e_1,\dots, e_r\}\subseteq E(G)$ is a \emph{path ordered matching} of length $r$ if the vertices of $G$ can be relabelled such that $e_i = \{i,i+r\}$ and
\begin{enumerate}
\item $f_i = \{i,i+r+1\}\in E(G)$, for each $1\leq i\leq r-1$,
\item if $\{i,j+r\}\in E(G)$ and $1\leq i,j \leq r$, then $i\leq j$.
\end{enumerate}
The following straightforward lemma will be referenced later in the subsection.
\begin{lemma}\label{lem:techLemma2}
Let ${\bf e} = \{e_1,\dots, e_r\}$ be a path ordered matching. Then $\{e_1,\dots, e_{r-1}\}$ is a path ordered matching on $G\setminus e_r$.
\end{lemma}
Given a subset ${\bf e} \subseteq E(G)$,
let $M^G_{\bf e}$ be the set of all monomials $m$ such that there is some non-empty subset ${\bf \tilde{e}}\subseteq {\bf e}$ where $m\left(\prod_{e_i\in {\bf \tilde{e}}}e_i\right)-n$ is the binomial associated to a cycle in $G$.
Let
\begin{equation}\label{eq:IGe}
I^G_{\bf e} = I_{G\setminus {\bf e}}+\langle M^G_{\bf e}\rangle,
\end{equation}
and observe that when $\bf{e} = \emptyset$, $I^G_{\bf e} = I_G$.
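To illustrate these definitions, suppose, say, that $G$ is a $4$-cycle with consecutive edges $g_1,g_2,g_3,g_4$ and ${\bf e} = \{g_1\}$ (note that any single edge is a path ordered matching of length $1$). The only cycle binomial is $g_1g_3-g_2g_4$, so $M^G_{\bf e} = \{g_3\}$, and since $G\setminus g_1$ is a path, $I_{G\setminus {\bf e}} = \langle 0\rangle$. Hence
\[
I^G_{\bf e} = \langle g_3 \rangle.
\]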
Let $G$ be a bipartite graph and ${\bf e} = \{e_1,\dots, e_r\}$ a path ordered matching.
Let $\prec$ be a lexicographic monomial order on $\mathbb{K}[E(G)]$ with $e_r>e_{r-1}>\cdots>e_1$ and $e_1>f$ for all $f\in E(G)\setminus {\bf e}$.
Let $\mathcal{C}(G)$ denote the set of binomials associated to cycles in $G$. By \cite[Lemma 2.6]{CG}, $\mathcal{C}(G\setminus {\bf e})\cup M^G_{\bf e}$ is a Gr\"obner basis for $I^G_{\bf e}$ with respect to the term order $\prec$, and $\init_{\prec} (I^G_{\bf e})$ is a square-free monomial ideal.
\begin{remark}\label{rmk:gbBipartite}
Let $\widetilde{M}^G_{\bf e}$ be the set of monomials $m$ such that there is some non-empty subset ${\bf \tilde{e}}\subseteq {\bf e}$ where $m\left(\prod_{e_i\in {\bf \tilde{e}}}e_i\right)-n$ is the binomial associated to a cycle in $G$ and $n$ is not divisible by any $e_i\in {\bf e}$.
By \cite[Remark 2.7]{CG}, $\mathcal{C}(G\setminus {\bf e})\cup \widetilde{M}^G_{\bf e}$ is also a Gr\"obner basis for $I^G_{\bf e}$ with respect to $\prec$. Furthermore, observe that if $me_i\in \widetilde{M}^G_{\bf e}$ for some $e_i\in {\bf e}$, then $m$ is also an element of $\widetilde{M}^G_{\bf e}$. Hence, if we let $L^G_{\bf e}$ be the set of monomials in $\widetilde{M}^G_{\bf e}$ which are not divisible by any $e_i\in {\bf e}$, then $\mathcal{C}(G\setminus {\bf e})\cup L^G_{\bf e}$ is a Gr\"obner basis for $I^G_{\bf e}$ with respect to $\prec$.
\end{remark}
Using Remark \ref{rmk:gbBipartite}, we obtain the following lemma, which we will need when proving geometric vertex decomposability of toric ideals of bipartite graphs.
\begin{lemma}\label{lem:gbBipartite2}
Let $G$ be a bipartite graph and let ${\bf e} = \{e_1,\dots, e_r\}$, $r\geq 1$, be a path ordered matching on $G$, and let ${\bf e'} = \{e_1,\dots, e_{r-1}\}$. Let $\prec$ be a lexicographic monomial order on $\mathbb{K}[E(G)]$ with $e_r>e_{r-1}>\cdots>e_1$ and $e_1>f$ for all $f\in E(G)\setminus {\bf e}$.
The set $\mathcal{C}(G\setminus {\bf e'})\cup L^G_{\bf e'}$ is a Gr\"obner basis for $I^G_{\bf e'}$ with respect to $\prec$ and $\init_{\prec}(I^G_{\bf e'})$ is a square-free monomial ideal.
\end{lemma}
\begin{proof}
By Remark \ref{rmk:gbBipartite}, $\mathcal{G}:=\mathcal{C}(G\setminus {\bf e'})\cup L^G_{\bf e'}$ is a Gr\"obner basis for $I^G_{\bf e'}$ with respect to the lexicographic term order $e_{r-1}>e_{r-2}>\cdots >e_1>e_r$ and $e_r>f$ for all $f\in E(G)\setminus {\bf e}$.
Since none of $e_1,\dots, e_{r-1}$ appear in $\mathcal{G}$, we have that $\mathcal{G}$ is also a Gr\"obner basis for the lexicographic monomial order $\prec$. Furthermore, all terms of all elements in $\mathcal{G}$ are square-free, so $\init_{\prec} (I^G_{\bf e'})$ is a square-free monomial ideal.
\end{proof}
We now use Lemma \ref{lem:gbBipartite2} to obtain a geometric vertex decomposition of $I^G_{\bf e'}$:
\begin{proposition}\label{lem:bipartiteGVDgeneral}
Let $G$ be a bipartite graph and let ${\bf e} = \{e_1,\dots, e_r\}$ be a path ordered matching. Let ${\bf e'} = \{e_1,\dots, e_{r-1}\}$. Then there is a geometric vertex decomposition
\begin{equation}\label{eq:gvdEquation}
\init_{e_r}(I^G_{{\bf e'}}) = (I^{G\setminus e_r}_{\bf e'}+\langle e_r\rangle)\cap I^G_{\bf e}.
\end{equation}
\end{proposition}
\begin{proof}
Let $\prec$ be a lexicographic monomial order on $\mathbb{K}[E(G)]$ with $e_r>e_{r-1}>\cdots >e_1$ and $e_1>f$ for all $f\in E(G)\setminus {\bf e}$. This is an $e_r$-compatible monomial order.
By Lemma \ref{lem:gbBipartite2},
$\mathcal{C}(G\setminus {\bf e'})\cup L^G_{\bf e'}$ is a Gr\"obner basis for $I^G_{\bf e'}$ with respect to $\prec$, and $\mathcal{C}(G\setminus {\bf e'})\cup L^G_{\bf e'}$ are square-free in $e_r$. We can write:
\[
\mathcal{C}(G\setminus {\bf e'}) = \{e_rm_1-n_1, e_rm_2-n_2,\dots, e_rm_q-n_q, h_1,\dots, h_t\}, \text{ and }
\]
\[
L^G_{\bf e'} = \{e_ra_1,\dots, e_ra_u, b_1,\dots, b_v\}
\]
where $e_r$ does not divide any $m_\ell$, $n_\ell$, $1\leq \ell\leq q$, nor any term of $h_k$, $1\leq k\leq t$, nor any of the monomials $a_1,\dots, a_u, b_1,\dots, b_v$. We thus have the geometric vertex decomposition
\begin{align*}
\text{in}_{{e}_r}(I^G_{\bf e'}) &= (\langle h_1,\dots, h_t, b_1,\dots, b_v\rangle + \langle {e}_r\rangle) \cap \langle m_1,\dots, m_q, h_1,\dots, h_t, a_1,\dots, a_u, b_1,\dots, b_v\rangle\\
&= (\langle h_1,\dots, h_t, b_1,\dots, b_v\rangle + \langle {e}_r\rangle) \cap I^G_{\bf e}.
\end{align*}
It remains to show that $\langle h_1,\dots, h_t, b_1,\dots, b_v\rangle = I^{G\setminus e_r}_{\bf e'}$.
By Lemma \ref{lem:techLemma2}, ${\bf e'}$ is a path ordered matching on $G\setminus e_r$.
Thus, $I^{G\setminus e_r}_{\bf e'}$ is generated by \[\mathcal{C}((G\setminus e_r)\setminus {\bf e'})\cup L^{G\setminus e_r}_{\bf e'} = \mathcal{C}(G\setminus {\bf e})\cup L^{G\setminus e_r}_{\bf e'}.\]
Observe that $\{h_1,\dots, h_t \} = \mathcal{C}(G\setminus {\bf e})$.
Also, it follows from the definitions that $L^{G\setminus e_r}_{\bf e'}\subseteq \{b_1,\dots, b_v\}$. Thus, we have the inclusion $I^{G\setminus e_r}_{\bf e'}\subseteq \langle h_1,\dots, h_t, b_1,\dots, b_v\rangle$.
For the reverse inclusion, fix some $b_j$, $1\leq j\leq v$. Then there is some non-empty subset ${\widetilde{\bf e}}\subseteq {\bf e'}$ and a binomial $b_j(\prod_{e_i\in {\widetilde{\bf e}}}e_i)-n$ associated to a cycle in $G$.
If $e_r$ does not divide $n$ then $b_j\in M^{G\setminus e_r}_{\bf e'}$, and hence $b_j\in I^{G\setminus e_r}_{\bf e'}$. Otherwise, since ${\bf e}$ is also a path ordered matching, one can apply the proof of \cite[Remark 2.7]{CG} to find another cycle in $G$ which does not pass through $e_r$ and which gives rise to an element $c_j\in M^G_{\bf e}$ which divides $b_j$. Since the cycle does not pass through $e_r$, we have $c_j\in M^{G\setminus e_r}_{\bf e'}$. As $\mathcal{C}((G\setminus e_r)\setminus {\bf e'})\cup M^{G\setminus e_r}_{\bf e'}$ is a Gr\"obner basis for $I^{G\setminus e_r}_{\bf e'}$, we see that $c_j$, and hence $b_j$, is an element of $I^{G\setminus e_r}_{\bf e'}$. Thus, $\langle h_1,\dots, h_t, b_1,\dots, b_v\rangle\subseteq I^{G\setminus e_r}_{\bf e'}$.
\end{proof}
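To see this decomposition in a small case, return to the $4$-cycle with consecutive edges $g_1,g_2,g_3,g_4$, taking ${\bf e} = \{g_1\}$ and ${\bf e'} = \emptyset$. Then $I^G_{\bf e'} = I_G = \langle g_1g_3-g_2g_4 \rangle$, and equation \eqref{eq:gvdEquation} becomes
\[
\init_{g_1}(I_G) = \langle g_1g_3 \rangle = \big(\langle 0\rangle + \langle g_1\rangle\big) \cap \langle g_3\rangle = (I^{G\setminus g_1}_{\emptyset} + \langle g_1\rangle)\cap I^G_{\{g_1\}},
\]
which can be checked directly.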
We say that a path ordered matching ${\bf e} = \{e_1,\dots, e_r\}$ is \emph{right-extendable} if there is some edge $e_{r+1}\in E(G)$ such that $\{e_1,\dots,e_r,e_{r+1}\}$ is also a path ordered matching.
\begin{lemma}\label{lem:techLemma1}
Let $G$ be a bipartite graph with no leaves and let ${\bf e} = \{e_1,\dots, e_r\}$ be a path ordered matching which is not right-extendable. Then, $M^G_{\bf e}$ contains an indeterminate $x\in E(G)$ and ${\bf e}$ is a path ordered matching on $G\setminus x$. Furthermore, $I^G_{\bf e} = I^{G\setminus x}_{\bf e}+ \langle x\rangle$.
\end{lemma}
\begin{proof}
The proof is identical to the proof of \cite[Lemmas 2.12 and 2.13]{CG} upon replacing maximal path ordered matchings in \cite[Lemmas 2.12 and 2.13]{CG} with right-extendable path ordered matchings.
\end{proof}
\begin{lemma}\label{lem:techLemma3}
Let $G$ be a bipartite graph and let ${\bf e} = \{e_1,\dots, e_r\}$ be a path ordered matching.
Suppose that $G$ has a leaf $y$. Then:
\begin{enumerate}
\item if $y\notin {\bf e}$, then ${\bf e}$ is a path ordered matching in $G\setminus y$ and $I^G_{\bf e} = I^{G\setminus y}_{\bf e}$;
\item if $y\in {\bf e}$, then $y = e_1$ or $e_r$ and ${\bf e}\setminus y$ is a path ordered matching in $G\setminus y$ and $I^G_{\bf e} = I^{G\setminus y}_{{\bf e}\setminus y}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Since ${\bf e}$ is a path ordered matching, the vertices of $G$ can be labelled such that $e_i = \{i,i+r\}$, $1\leq i\leq r$. Let $f_i = \{i,i+r+1\}$, $1\leq i\leq r-1$ so that $$e_1,f_1,e_2,f_2,\dots,e_{r-1},f_{r-1}, e_r$$ is a path of consecutive edges in $G$.
Since $y$ is a leaf, we see that $y\notin \{f_1,\dots, f_{r-1}\}$. If $y\notin {\bf e}$, then each $e_i, f_i$ remains and ${\bf e}$ is a path ordered matching in $G\setminus y$.
Furthermore no cycle in $G$ passes through $y$, hence $I^G_{\bf e} = I^{G\setminus y}_{\bf e}$.
If $y\in {\bf e}$, then either $y = e_1$ or $y = e_r$. In either case, since each $f_i$ remains in $G\setminus y$, ${\bf e}\setminus y$ is still a path ordered matching in $G\setminus y$.
Since there is no cycle in $G$ which passes through $y$, we have $I^{G}_{\bf e} = I^G_{{\bf e}\setminus y} = I^{G\setminus y}_{{\bf e}\setminus y}$.
\end{proof}
We will need one more result from \cite{CG}:
\begin{theorem}\cite[Theorem 2.8]{CG}\label{bipartite_CM}
Let $G$ be a bipartite graph and ${\bf e} = \{e_1,\dots, e_r\}$ a path ordered matching. Then $\mathbb{K}[E(G)]/I^G_{\bf e}$ is Cohen-Macaulay.
\end{theorem}
We now adapt the proof of \cite[Corollary 2.15]{CG} on vertex decomposability of the simplicial complex associated to an initial ideal of $I^G_{\bf e}$ to prove the main theorem of this subsection.
\begin{theorem}\label{thm: gvdBipartite}
Let $G$ be a bipartite graph and ${\bf e} = \{e_1,\dots, e_r\}$ a path ordered matching. Then the ideal $I^G_{\bf e}$ is geometrically vertex decomposable. In particular, the toric ideal $I_G$ is geometrically vertex decomposable.
\end{theorem}
\begin{proof}
Let $R = \mathbb{K}[E(G)]$. By Theorem \ref{bipartite_CM}, each $R/I^G_{\bf e}$ is Cohen-Macaulay, hence unmixed.
We proceed by double induction on $|E(G)|$ and $s-r$ where ${\bf {\tilde{e}}} = \{{\tilde{e}}_1,\dots, \tilde{e}_s\}$ is a path ordered matching that is not right-extendable and is such that $\tilde{e}_1 = e_1,\dots, \tilde{e}_r = e_r$.
If $|E(G)|\leq 3$, then $I_G = \langle 0\rangle$ as there are no primitive closed even walks in $G$, so the result holds trivially.
If $G$ has a leaf, then by Lemma \ref{lem:techLemma3}, there is an edge $y$ and a path ordered matching ${\bf e'}$ in $G\setminus y$ such that $I^G_{\bf e} = I^{G\setminus y}_{{\bf e'}}$. By induction on the number of edges in the graph, $I^{G\setminus y}_{{\bf e'}}$ is geometrically vertex decomposable, hence so is $I^G_{\bf e}$.
So, assume that $G$ has no leaves. If $s-r = 0$, then ${\bf e}$ is not right-extendable.
Then, by Lemma \ref{lem:techLemma1}, there is an indeterminate $z\in M^G_{\bf e}$ such that
\[
I^G_{\bf e} = I^{G\setminus z}_{\bf e}+\langle z\rangle.
\]
By Lemma \ref{lem:techLemma1}, ${\bf e}$ is a path ordered matching on $G\setminus z$, so again by induction on the number of edges in the graph, we have that $I^{G\setminus z}_{\bf e}$ is geometrically vertex decomposable, hence so is $I^G_{\bf e}$.
Now suppose that ${\bf e}$ is right-extendable, so that $s-r>0$ and ${\bf e^*} = \{e_1,\dots, e_{r+1}\}$ is a path ordered matching. By Lemma \ref{lem:bipartiteGVDgeneral}, we have the geometric vertex decomposition
\begin{equation*}
\init_{e_{r+1}}(I^G_{{\bf e}}) = (I^{G\setminus e_{r+1}}_{\bf e}+\langle e_{r+1}\rangle)\cap I^G_{{\bf e^*}}.
\end{equation*}
By Lemma \ref{lem:techLemma2}, ${\bf e}$ is a path ordered matching on $G\setminus e_{r+1}$. So, by induction on the number of edges, $I^{G\setminus e_{r+1}}_{\bf e}$ is geometrically vertex decomposable. By induction on $s-r$, $I^G_{{\bf e^*}}$ is geometrically vertex decomposable. Hence, $I^G_{\bf e}$ is geometrically vertex decomposable.
\end{proof}
\subsection{Alternate proofs in special cases}\label{sect:specialCases}
In this section, we apply results from the literature to give alternate proofs of geometric vertex decomposability for some well-studied families of bipartite graphs. These
proofs illustrate that in some cases, we
can prove that a family of ideals is geometrically
vertex decomposable directly from the definition.
We define the relevant families
of graphs. A {\it Ferrers graph} is
a bipartite graph on the vertex set
$X = \{x_1,\ldots,x_n\}$ and $Y= \{y_1,\ldots,y_m\}$ such that
$\{x_n,y_1\}$ and $\{x_1,y_m\}$ are edges, and if $\{x_i,y_j\}$ is an edge, then so
are all the edges $\{x_k,y_l\}$ with
$1 \leq k \leq i$ and $1 \leq l \leq j$.
We associate a partition
$\lambda = (\lambda_1,\lambda_2,\ldots,\lambda_n)$ with $\lambda_1 \geq \lambda_2
\geq \cdots \geq \lambda_n$ to a Ferrers graph where $\lambda_i =
\deg x_i$. Some of the properties of
the toric ideals of these graphs
were studied by Corso and Nagel \cite{CN}. Following Corso and Nagel, we denote
a Ferrers graph as $T_\lambda$ where $\lambda$
denotes the associated partition.
As an example, consider the
partition $\lambda = (5,3,2,1)$ which can be
visualized as
\[
\begin{tabular}{cccccccc}
& $y_1$ & $y_2$ & $y_3$ & $y_4$ & $y_5$\\
$x_1$ & $\bullet$ & $\bullet$ & $\bullet$ & $\bullet$ & $\bullet$ \\
$x_2$ & $\bullet$ & $\bullet$ & $\bullet$ & & \\
$x_3$ & $\bullet$ & $\bullet$ & & &\\
$x_4$ & $\bullet$ & & & & \\
\end{tabular}
\]
We have labelled the rows with the $x_i$
vertices and the columns with the $y_j$ vertices. From this representation,
the graph $T_\lambda$ is the graph
on the vertex set $\{x_1,\ldots,x_4,y_1,\ldots,y_5\}$ where
$\{x_i,y_j\}$ is an edge if and only if
there is a dot in the row and column indexed
by $x_i$ and $y_j$ respectively. Figure \ref{fig_ferrers} gives the corresponding
bipartite graph $T_\lambda$ for $\lambda$.
\begin{figure}[!ht]
\centering
\begin{tikzpicture}[scale=0.45]
\draw (0,0) -- (0,5);
\draw (5,0) -- (0,5);
\draw (10,0) -- (0,5);
\draw (15,0) -- (0,5);
\draw (20,0) -- (0,5);
\draw (0,0) -- (5,5);
\draw (5,0) -- (5,5);
\draw (10,0) -- (5,5);
\draw (0,0) -- (10,5);
\draw (5,0) -- (10,5);
\draw (0,0) -- (15,5);
\fill[fill=white,draw=black] (0,0) circle (.1) node[below]{$y_1$};
\fill[fill=white,draw=black] (5,0) circle (.1) node[below]{$y_2$};
\fill[fill=white,draw=black] (10,0) circle (.1) node[below]{$y_3$};
\fill[fill=white,draw=black] (15,0) circle (.1) node[below]{$y_4$};
\fill[fill=white,draw=black] (20,0) circle (.1) node[below]{$y_5$};
\fill[fill=white,draw=black] (0,5) circle (.1) node[above]{$x_1$};
\fill[fill=white,draw=black] (5,5) circle (.1) node[above]{$x_2$};
\fill[fill=white,draw=black] (10,5) circle (.1) node[above]{$x_3$};
\fill[fill=white,draw=black] (15,5) circle (.1) node[above]{$x_4$};
\end{tikzpicture}
\caption{The graph $T_\lambda$ for $\lambda = (5,3,2,1)$}
\label{fig_ferrers}
\end{figure}
Next we consider the graphs studied in Galetto,
{\it et al.} \cite{GHKKPVT} as our second family of graphs. For integers $r \geq 3$ and $d \geq 2$,
we let $G_{r,d}$ be the graph with vertex set
$$V(G_{r,d}) = \{x_1,x_2,y_1,\ldots,y_d,z_1,\ldots,z_{2r-3}\}$$
and edge set
\begin{eqnarray*}
E(G_{r,d}) &= &\{\{x_i,y_j\} ~|~ 1 \leq i \leq 2,~~ 1 \leq j \leq d\} \cup \\
& & \{\{x_1,z_1\},\{z_1,z_2\},\{z_2,z_3\},\ldots,\{z_{2r-4},z_{2r-3}\},
\{z_{2r-3},x_2\}\}.
\end{eqnarray*}
Observe that $G_{r,d}$ is the graph formed by taking
the complete bipartite graph $K_{2,d}$ (defined below), and then joining the
two vertices of degree $d$ by a path of length $2r-2$.
As an example, see Figure \ref{fig_g35} for the graph $G_{3,5}$.
\begin{figure}[!ht]
\centering
\begin{tikzpicture}[scale=0.45]
\draw (0,0) -- (5,5);
\draw (0,0) -- (15,5);
\draw (5,0) -- (5,5);
\draw (5,0) -- (15,5);
\draw (10,0) -- (5,5);
\draw (10,0) -- (15,5);
\draw (15,0) -- (5,5);
\draw (15,0) -- (15,5);
\draw (20,0) -- (5,5);
\draw (9,4.4) node{$a_5$};
\draw (20,0) -- (15,5);
\draw (2.2,3.5) node{$a_1$};
\draw (2.4,1.5) node{$b_1$};
\draw (4.3,3) node{$a_2$};
\draw (6,3.1) node{$a_3$};
\draw (8,3.2) node{$a_4$};
\draw (6,1.2) node{$b_2$};
\draw (11.6,.8) node{$b_3$};
\draw (15.5,1) node{$b_4$};
\draw (20,1)node{$b_5$};
\draw (5,5) -- (5,8)node[midway, left] {$e_1$};
\draw (5,8) -- (10,10)node[midway, above] {$e_2$};
\draw (10,10) -- (15,8)node[midway, above] {$e_3$};
\draw (15,8) -- (15,5)node[midway, right] {$e_4$};
\fill[fill=white,draw=black] (0,0) circle (.1) node[below]{$y_1$};
\fill[fill=white,draw=black] (5,0) circle (.1) node[below]{$y_2$};
\fill[fill=white,draw=black] (10,0) circle (.1) node[below]{$y_3$};
\fill[fill=white,draw=black] (15,0) circle (.1) node[below]{$y_4$};
\fill[fill=white,draw=black] (20,0) circle (.1) node[below]{$y_5$};
\fill[fill=white,draw=black] (5,5) circle (.1) node[left]{$x_1$};
\fill[fill=white,draw=black] (15,5) circle (.1) node[right]{$x_2$};
\fill[fill=white,draw=black] (5,8) circle (.1) node[left]{$z_1$};
\fill[fill=white,draw=black] (15,8) circle (.1) node[right]{$z_3$};
\fill[fill=white,draw=black] (10,10) circle (.1) node[above] {$z_2$};
\end{tikzpicture}
\caption{The graph $G_{3,5}$}
\label{fig_g35}
\end{figure}
We label the edges so that $a_i = \{x_1,y_i\}$ and $b_i = \{x_2,y_i\}$ for $i=1,\ldots,d$,
and $e_1 = \{x_1,z_1\}$, $e_{2r-2} = \{z_{2r-3},x_2\}$ and
$e_{i+1} = \{z_i,z_{i+1}\}$ for $1 \leq i \leq 2r-4$.
Using the above labelling, we can describe the universal
Gr\"obner basis of $I_{G_{r,d}}$.
\begin{theorem}[{\cite[Corollary 3.3]{GHKKPVT}}]\label{universalGB}
Fix integers $r \geq 3$ and $d \geq 2$. A universal Gr\"obner basis for
$I_{G_{r,d}}$ is given by
$$\{a_ib_j - b_ia_j ~|~ 1\leq i < j\leq d\}\cup
\{a_ie_2e_4\cdots e_{2r-2} - b_ie_1e_3e_5 \cdots e_{2r-3} ~|~1\leq i \leq d \}.$$
\end{theorem}
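For instance, when $(r,d) = (3,2)$, the graph $G_{3,2}$ has edges $a_1,a_2,b_1,b_2,e_1,e_2,e_3,e_4$, and this universal Gr\"obner basis specializes to
\[
\{\, a_1b_2-b_1a_2,\ a_1e_2e_4-b_1e_1e_3,\ a_2e_2e_4-b_2e_1e_3 \,\}.
\]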
The next result provides many examples of toric ideals
which are geometrically vertex decomposable.
In the statement below,
the {\it complete bipartite graph}
$K_{n,m}$ is the graph with vertex
set $V = \{x_1,\ldots,x_n,y_1,\ldots,y_m\}$ and edge set $\{\{x_i,y_j\} ~|~ 1 \leq i \leq n,~~ 1 \leq j \leq m \}$.
\begin{theorem}\label{families}
The toric ideals of the following
families of graphs are geometrically
vertex decomposable:
\begin{enumerate}
\item $G$ is a cycle;
\item $G$ is a Ferrers graph $T_\lambda$
for any partition $\lambda$;
\item $G$ is a complete bipartite
graph $K_{n,m}$; and
\item $G$ is the graph $G_{r,d}$
for any $r\geq 3, d\geq 2$.
\end{enumerate}
\end{theorem}
\begin{proof}
(1) Suppose that $G$ is a cycle with $2n$ edges. Then $I_G = \langle e_1e_3\cdots e_{2n-1} - e_2e_4\cdots e_{2n} \rangle$, so the result follows from Lemma \ref{simplecases} (2). If $G$
is an odd cycle, then $I_G = \langle 0 \rangle$, and so it is geometrically
vertex decomposable by definition.
\noindent
(2) As shown in the proof of
\cite[Proposition 5.1]{CN}, the toric
ideal of $T_\lambda$ is generated by
the $2 \times 2$ minors of a one-sided ladder.
The ideal generated by the $2 \times 2$ minors of a one-sided
ladder is an example of a Schubert determinantal ideal (e.g., see \cite{KMY}). The conclusion now follows
from \cite[Proposition 5.2]{KR} which showed
that all Schubert determinantal ideals
are geometrically vertex decomposable.\footnote{It is not necessary to use the connection to Schubert determinantal ideals. Indeed, it is known from the ladder determinantal ideal literature that (mixed) ladder determinantal ideals from (two-sided) ladders possess initial ideals which are Stanley-Reisner ideals of vertex decomposable simplicial complexes (see \cite{GMN} and references therein). Then, an analogous proof to our proof of Theorem \ref{thm: gvdBipartite} can be given to show that these ideals are geometrically vertex decomposable.}
\noindent
(3) Apply the previous result using the partition
$\lambda = \underbrace{(m,m,\ldots,m)}_n$.
\noindent
(4)
Let $I = I_{G_{r,d}}$. Since it is a prime ideal, it is unmixed.
We first show that the statement holds when
$d = 2$, for any $r \geq 3$.
Let $y = a_2$, and consider the lexicographic order on
$\mathbb{K}[E(G_{r,d})] = \mathbb{K}[a_1,a_2,b_1,b_2,e_1,\ldots, e_{2r-2}]$
with
$a_2 > a_1 > b_2 > b_1 > e_{2r-2} >\cdots > e_1.$
This monomial order is $y$-compatible.
By using the universal Gr\"obner basis of
Theorem \ref{universalGB}, we have
\begin{eqnarray*}
C_{y,I} &=& \langle b_1,\,e_2e_4\cdots e_{2r-2},\,a_1e_2e_4\cdots e_{2r-2}
- b_1e_1e_3\cdots e_{2r-3} \rangle = \langle b_1,e_2e_4\cdots e_{2r-2} \rangle
\end{eqnarray*}
and $N_{y,I} = \langle a_1e_2e_4\cdots e_{2r-2} - b_1e_1e_3 \cdots e_{2r-3} \rangle.$ Note
that each binomial in $\mathcal{U}(I)$ is doubly
square-free, so we can use
Lemma \ref{square-freey} to deduce that
$${\rm in}_{y}(I) = C_{y,I} \cap (N_{y,I} + \langle y \rangle)$$ is a geometric vertex decomposition.
To complete
this case, note that $C_{y,I}$ is a monomial
complete intersection in $\mathbb{K}[a_1,b_1,b_2,e_1,\ldots,e_{2r-2}],$
so this ideal is geometrically
vertex decomposable by Corollary \ref{monomialcor}.
The ideal $N_{y,I}$ is a principal ideal
generated by $a_1e_2e_4\cdots e_{2r-2} - b_1e_1e_3 \cdots e_{2r-3}$, so it is geometrically vertex decomposable
by Lemma \ref{simplecases} (2). So, for
all $r \geq 3$, the toric ideal $I_{G_{r,2}}$ is
geometrically vertex decomposable.
We proceed by induction on $d$. Assume $d > 2$ and let
$r \geq 3$. Let $y = a_d$, and consider the lexicographic order on
$\mathbb{K}[E(G_{r,d})] = \mathbb{K}[a_1,\ldots,a_d,b_1,\ldots,b_d,e_1,\ldots, e_{2r-2}]$
with
$a_d > \cdots > a_1 > b_d > \cdots > b_1 > e_{2r-2} >\cdots > e_1.$
This monomial order is $y$-compatible.
By again appealing to Theorem \ref{universalGB}, we have
\begin{eqnarray*}
C_{y,I} &= &\langle b_1,\ldots,b_{d-1},
e_{2}e_4\cdots e_{2r-2} \rangle +\langle a_ib_j - b_ia_j ~|~ 1\leq i < j\leq d-1 \rangle + \\
&& \langle a_ie_2e_4\cdots e_{2r-2} - b_ie_1e_3e_5 \cdots e_{2r-3} ~|~1\leq i \leq d-1 \rangle \\
& = & \langle b_1,\ldots,b_{d-1},e_2e_4\cdots e_{2r-2} \rangle,
\end{eqnarray*}
where the last equality comes from removing redundant generators.
On the other hand, by Lemma \ref{linktoricidealgraph}, $N_{y,I} = I_K$ where $K = G_{r,d} \setminus a_d$. Note that in this graph, the
edge $b_d$ is a leaf, and consequently,
$N_{y,I} = I_{G_{r,d-1}}$ since $K \setminus b_d = G_{r,d-1}$.
We can again use Lemma \ref{square-freey} to deduce that
$${\rm in}_{y}(I) = C_{y,I} \cap (N_{y,I} + \langle y \rangle)$$ is a geometric vertex decomposition.
To complete the proof, note that in the ring $\mathbb{K}[a_1,\ldots,a_{d-1},b_1,\ldots,b_d,e_1,\ldots,
e_{2r-2}]$, the ideal $C_{y,I}$ is geometrically vertex decomposable by Corollary \ref{monomialcor} since this ideal is a complete intersection monomial ideal. Also, the ideal $N_{y,I} = I_{G_{r,d-1}}$ is geometrically vertex decomposable by induction. Thus, $I_{G_{r,d}}$ is geometrically vertex decomposable for all $d \geq 2$ and $r \geq 3$.
\end{proof}
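To see the induction step in the smallest case, take $(r,d) = (3,3)$ and $y = a_3$. The displayed formulas in the proof specialize to
\[
C_{y,I} = \langle b_1, b_2, e_2e_4 \rangle \qquad \text{and} \qquad N_{y,I} = I_{G_{3,2}},
\]
so the geometric vertex decomposition reduces $I_{G_{3,3}}$ to a monomial complete intersection and to the base case $I_{G_{3,2}}$.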
As we will see in the remainder of the paper, there are many non-bipartite graphs which have geometrically vertex decomposable toric ideals.
\section{Toric ideals with a square-free degeneration}\label{section_square-free}
As mentioned in the introduction, an important question in liaison
theory asks if every arithmetically Cohen-Macaulay subscheme of $\mathbb{P}^n$ is
glicci (e.g. see \cite[Question 1.6]{KMMNP}).
As shown by Klein and
Rajchgot (see Theorem \ref{gvd=>glicci}), if a homogeneous ideal
$I$ is a geometrically vertex decomposable ideal, then $I$ defines
an arithmetically Cohen-Macaulay subscheme, and furthermore, this scheme is glicci.
It is therefore natural to ask if every toric ideal $I_G$
of a finite graph $G$ that has the property that $\mathbb{K}[E(G)]/I_G$ is Cohen-Macaulay is
also geometrically vertex decomposable. If true, then
this would imply that the scheme defined by $I_G$ is glicci.
Instead of considering all toric ideals of graphs such that $\mathbb{K}[E(G)]/I_G$ is Cohen-Macaulay, we can restrict to ideals $I_G$ which possess a square-free Gr\"obner degeneration with respect to some monomial order $<$.
By Theorem \ref{sqfree=>cm}, $\mathbb{K}[E(G)]/I_G$ is
Cohen-Macaulay. Furthermore, if $\init_{<}(I_G)$ defines a vertex decomposable simplicial complex via the Stanley-Reisner correspondence, then $I_G$ would be geometrically
vertex decomposable with respect to a \textit{lexicographic} monomial order $<$ (see \cite[Proposition 2.14]{KR}).
We propose the conjecture below. Note that this conjecture
would imply that any toric ideal of a graph with a square-free initial ideal is
glicci.
\begin{conjecture}\label{mainconjecture}
Let $G$ be a finite simple graph with toric ideal $I_G \subseteq \mathbb{K}[E(G)]$.
If $\init_{<}(I_G)$ is square-free with respect to a lexicographic monomial order $<$, then $I_G$ is geometrically vertex decomposable.
\end{conjecture}
\noindent
By Theorem \ref{thm: gvdBipartite}, Conjecture \ref{mainconjecture} is true in the bipartite setting. In this section, we build a framework for proving Conjecture \ref{mainconjecture}. In particular, we reduce
Conjecture \ref{mainconjecture} to checking whether certain related ideals are equidimensional, and we prove Conjecture \ref{mainconjecture} for the case where the generators in the universal Gr\"obner basis $\mathcal{U}(I_G)$ are quadratic. As a final comment, in Section
7 we show that the converse of Conjecture \ref{mainconjecture}
will not be true by giving an example of a graph $G$ whose
toric ideal $I_G$ is geometrically vertex decomposable,
but $I_G$ has no initial ideal that is a square-free monomial
ideal.
\subsection{Framework for the conjecture}
Suppose that $G$ is a labelled graph with $n$ edges $e_1,\dots, e_n$ and toric ideal $I_G\subseteq \mathbb{K}[E(G)]$. Let $<_G$ be the lexicographic monomial order induced from the ordering of the edges coming from the labelling. That is, $e_1>e_2>\cdots >e_n$.
We define a class of ideals of the form $I^G_{E,F}$ such that $E\cup F=E_k=\{e_1,\ldots,e_k\}$ for some $0\leq k \leq n$ with
$E \cap F = \emptyset$. Here $E_0 = \emptyset$. Define
\[I^{G}_{E,F} := I_{G\setminus (E\cup F)} + M^{G}_{E,F}\]
where $I_{G\setminus (E \cup F)}$ is the toric ideal of the graph $G$ with
the edges $E\cup F$ removed, and
where $M^G_{E,F}$ is the ideal of $\mathbb{K}[e_1,\dots, e_n]$ generated by those monomials $m$ with $m\ell -p\in \mathcal{U}(I_G)$ such that:
\begin{enumerate}
\item $\init_{<_G}(m\ell -p) = m\ell$,
\item $\ell$ is a monomial only involving some non-empty subset of variables in $E$, and
\item no $f\in F$ divides $m\ell$ and no $e\in E$ divides $m$.
\end{enumerate}
If there are no monomials $m$ which satisfy conditions (1), (2), and (3), we set $M^G_{E,F} = \langle 0 \rangle$.
Therefore $M^G_{\emptyset,F} =\langle 0\rangle$ and $I^G_{\emptyset,F} = I_{G\setminus F}$ (which is generated by those primitive closed even walks in $G$ which do not pass through any edge of $F=E_k$). On the other hand, if there is an
$\ell -p \in \mathcal{U}(I_G)$ with ${\rm in}_{<_G}(\ell -p) = \ell$
where $\ell$ is a monomial only involving the variables in $E$, then we take
$m = 1$, and so $M_{E,F}^G = \langle 1 \rangle$.
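To make conditions (1), (2), and (3) concrete, suppose, say, that $G$ is a $4$-cycle with consecutively labelled edges $e_1,e_2,e_3,e_4$, so that $\mathcal{U}(I_G) = \{e_1e_3-e_2e_4\}$ and $\init_{<_G}(e_1e_3-e_2e_4) = e_1e_3$. For $k=1$ we obtain
\[
M^G_{\{e_1\},\emptyset} = \langle e_3 \rangle \qquad \text{and} \qquad M^G_{\emptyset,\{e_1\}} = \langle 0 \rangle,
\]
so that $I^G_{\{e_1\},\emptyset} = \langle e_3 \rangle$ and $I^G_{\emptyset,\{e_1\}} = I_{G\setminus e_1} = \langle 0 \rangle$.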
There is a natural set of generators for $I^G_{E,F}$ using the primitive closed even walks of $I_G$.
In particular, the ideal $I^G_{E,F}$ is generated by the
set
\[\mathcal{U}(I_{G\setminus (E\cup F)})\cup\mathcal{U}(M^G_{E,F}),\]
where $\mathcal{U}(I_{G\setminus (F\cup E)})$ is the set of binomials defined by primitive closed even walks of the graph $G\setminus (E \cup F)$, and $\mathcal{U}(M^G_{E,F})$ are those monomials $m$ appearing in a generator of $\mathcal{U}(I_G)$ and satisfying conditions (1), (2), and (3) above. Because $M^G_{E,F}$ is a monomial ideal,
its minimal generators form a universal Gr\"obner
basis, so our notation makes sense.
Going forward, we restrict our attention to the case where $\init_{<_G}(I_G)$ is square-free (this setting includes families of graphs like gap-free graphs \cite{DAli} for certain choices of $<_G$).
To illustrate some of the above ideas, we consider the
case that $E \cup F = E_1 = \{e_1\}$. This example also
highlights a connection
to the geometric vertex decomposition of
$I_G$ with respect to $e_1$.
\begin{example}\label{tree} Assume that $\init_{<_G}(I_G)$ is square-free. Then we can write \[\mathcal{U}(I_G) = \{e_1m_1-p_1,\ldots ,e_1m_r-p_r,t_1,\ldots, t_s\}\] where $e_1$ does not divide $m_i,p_i$ or any term of $t_i$. This set defines a universal Gr\"obner basis for $I_G = I^G_{\emptyset, \emptyset}$. Since $I_{G\setminus e_1}=\langle t_1,\ldots, t_s \rangle$ (by Lemma~\ref{linktoricidealgraph}), we can write
\begin{align*}
\init_{e_1}(I^G_{\emptyset,\emptyset})
&=\langle e_1m_1,\ldots, e_1m_r, t_1,\ldots, t_s \rangle\\
&= \langle e_1,t_1,\ldots, t_s \rangle\cap \langle m_1,\ldots, m_r, t_1,\ldots, t_s \rangle\\
&= (\langle e_1\rangle + I_{G\setminus e_1})\cap (M^G_{\{e_1\},\emptyset} + I_{G\setminus e_1})\\
&= (\langle e_1\rangle + I_{G\setminus e_1} + M^G_{\emptyset,\{e_1\}} )\cap I^G_{\{e_1\},\emptyset}\\
&= (\langle e_1\rangle + I^G_{\emptyset,\{e_1\}})\cap I^G_{\{e_1\},\emptyset}.
\end{align*}
Note that $I_{G\setminus e_1} = I_{G\setminus e_1}+M^G_{\emptyset, \{e_1\}}$ since $M^G_{\emptyset, \{e_1\}} = \langle 0 \rangle$.
Note that if we take $y=e_1$ and $I=I^G_{\emptyset,\emptyset}$, then we get $C_{y,I} = I^G_{\{e_1\},\emptyset}$ and $N_{y,I} = I^G_{\emptyset,\{e_1\}}$. That is, $y=e_1$ defines a geometric vertex decomposition of $I_G$. Therefore, when $E \cup F = E_1=\{e_1\}$,
either $e_1\in E$ or $e_1\in F$, and each case appears in the geometric vertex decomposition. \qed
\end{example}
If we continue the process by taking $\init_{e_2}(\cdot)$ of $I^G_{\{e_1\},\emptyset}$ and of $I^G_{\emptyset,\{e_1\}}$, we get four possible ideals $C_{y,I}$ and $N_{y,I}$, each corresponding to a possible distribution of $\{e_1,e_2\}$ into the disjoint sets $E$ and $F$ such that $E\cup F=E_2$. Figure \ref{relationofideals} shows the relationship
between the ideals $I_{E,F}^G$ for the cases $E\cup F = E_i$ for $i=0,\ldots,3$.
\begin{figure}
\begin{tikzpicture}[sibling distance=24em, scale=0.82]
\node {$I^G_{\emptyset, \emptyset}$}
child { node {$I^G_{\emptyset,\{e_1\}}$}[sibling distance=12em]
child { node {$I^G_{\emptyset, \{e_1,e_2\}}$}[sibling distance=6em]
child {node {$I^G_{\emptyset, \{e_1,e_2,e_3\}}$}}
child {node {$I^G_{\{e_3\}, \{e_1,e_2\}}$}}}
child { node {$I^G_{\{e_2\}, \{e_1\}}$}[sibling distance=6em]
child {node {$I^G_{\{e_2\}, \{e_1,e_3\}}$}}
child {node {$I^G_{\{e_2,e_3\}, \{e_1\}}$}}}}
child { node {$I^G_{\{e_1\},\emptyset}$}[sibling distance=12em]
child { node {$I^G_{\{e_1\},\{e_2\}}$}[sibling distance=6em]
child {node {$I^G_{\{e_1\},\{e_2,e_3\}}$}}
child {node {$I^G_{\{e_1,e_3\},\{e_2\}}$}}}
child { node {$I^G_{\{e_1,e_2\},\emptyset}$}[sibling distance=6em]
child {node {$I^G_{\{e_1,e_2\},\{e_3\}}$}}
child {node {$I^G_{\{e_1,e_2,e_3\},\emptyset}$}}}};
\end{tikzpicture}
\caption{The relation between the ideals $I^G_{E,F}$}
\label{relationofideals}
\end{figure}
One strategy to verify Conjecture \ref{mainconjecture} is
to prove the following three statements:
\begin{itemize}
\item[$(A)$] If $I=I^G_{E,F}$ with $E\cup F=E_{k-1}$, $I\neq \langle 0\rangle$, and $I\neq\langle 1\rangle$, then $y=e_k$ defines a geometric vertex decomposition. Furthermore, $N_{y,I}$ and $C_{y,I}$ must also be of the form $I^G_{E',F'}$ where $E'\cup F' = E_k$.
\item[$(B)$] If $E \cup F = E_n$, then $I^{G}_{E,F} = \langle 0 \rangle$ or $\langle 1 \rangle$.
\item[$(C)$] For any $E\cup F=E_k$, the ideal $I^G_{E,F}$ must be unmixed.
\end{itemize}
Indeed, the next theorem verifies that proving
$(A), (B)$, and $(C)$ suffices to show that $I_G$
is geometrically vertex decomposable.
\begin{theorem}\label{thingstocheck}
Let $G$ be a finite simple graph with toric ideal $I_G \subseteq \mathbb{K}[E(G)]$, and suppose that $\init_{<}(I_G)$ is square-free with respect to a lexicographic monomial order $<$.
If statements $(A)$, $(B)$, and $(C)$ are true, then $I_G$ is geometrically vertex decomposable.
\end{theorem}
\begin{proof}
Let $n$ be the number of edges of $G$. We show that
for all sets $E$ and $F$ such that $E \cup F = E_k$, the ideal $I_{E,F}^G$ is geometrically vertex decomposable, and in particular, $I_{\emptyset,\emptyset}^G = I_G$ is geometrically vertex decomposable. We do descending
induction on $|E \cup F|$. If $|E \cup F| = n$, then
$E \cup F = E_n$, and so by statement $(B),$ $I^G_{E,F} = \langle 0 \rangle$ or $\langle 1 \rangle$, both of which are
geometrically vertex decomposable by definition.
For the induction step, assume that all ideals of the form
$I^G_{E,F}$ with $E \cup F = E_\ell$ with $\ell \in \{k,\ldots,n\}$ are geometrically vertex decomposable.
Suppose that $E$ and $F$ are two sets such that $E \cup F = E_{k-1}$. The ideal $I^G_{E,F}$
is unmixed by
statement $(C)$. If $I^G_{E,F}$ is $\langle 0\rangle$ or $\langle 1\rangle$, then it is geometrically vertex decomposable by definition. Otherwise, by statement $(A)$,
the variable $y=e_k$ defines a geometric
vertex decomposition of $I =I_{E,F}^G$, i.e.,
$${\rm in}_{y}(I_{E,F}^G) = C_{y,I} \cap (N_{y,I} + \langle y \rangle).$$
Moreover, also by statement $(A)$, the ideals
$C_{y,I}$ and $N_{y,I}$ have the form $I^G_{E',F'}$ with
$E' \cup F'= E_k$. By induction, these two ideals
are geometrically vertex decomposable. So, $I^G_{E,F}$ is
geometrically vertex decomposable.
\end{proof}
We now show that $(A)$ and $(B)$ are always true. Thus,
to prove Conjecture \ref{mainconjecture}, one needs
to verify $(C)$. In fact, we will show that it is enough
to show that $\mathbb{K}[E(G)]/I^G_{E,F}$ is equidimensional for
all ideals of the form $I^G_{E,F}$.
We begin by proving that statement
$(A)$ holds if ${\rm in}_{<_G}(I_G)$ is a square-free
monomial ideal. In fact, we prove some additional
properties about the ideals $I^G_{E,F}$.
\begin{theorem}\label{CN_Grobner}
Let $I_G$ be the toric ideal of a finite simple graph $G$ such that $\init_{<_G}(I_G)$ is square-free.
For each $k \in \{1,\ldots,n\}$, let
$E,F$ be disjoint subsets of $\{e_1,\ldots,e_n\}$
such that $E \cup F = E_{k-1} = \{e_1,\ldots,e_{k-1}\}$.
Then
\begin{enumerate}
\item The natural generators $\mathcal{U}(I_{G\setminus (E \cup F)}) \cup \mathcal{U}(M^G_{E,F})$ of $I^G_{E,F}$ form
a Gr\"obner basis for $I^G_{E,F}$ with respect to
$<_G$. Furthermore, ${\rm in}_{<_G}(I^G_{E,F})$ is a square-free monomial ideal.
\item $I^G_{E,F}$ is a radical ideal.
\item The variable $y=e_k$ defines a geometric
vertex decomposition of $I^{G}_{E,F}$.
\item If $I = I^{G}_{E,F}$ and $y=e_k$, then
$C_{y,I} = I^G_{E \cup \{e_k\},F}$ and $N_{y,I} =
I^G_{E,F\cup\{e_k\}}$; in particular,
$$ \init_{e_k}(I^G_{E,F}) = I^G_{E\cup\{e_k\},F}\cap ( I^G_{E,F\cup \{e_k\}} +\langle e_k \rangle).$$
\end{enumerate}
\end{theorem}
\begin{proof}
$(1)$
We will proceed by induction on $|E\cup F| = r = k-1$. If $r=0$,
then $E \cup F = \emptyset$ and $I^G_{E,F} = I_G$. In
this case the natural generators are $\mathcal{U}(I_G) \cup \mathcal{U}(M^G_{\emptyset,\emptyset}) = \mathcal{U}(I_G)$,
and this set defines a universal Gr\"obner basis consisting of primitive closed even walks of $G$. Its initial ideal is square-free by the assumption on $<_G$.
Now suppose that $|E \cup F| = r \geq 1$ and assume the result holds for $r-1$. There are two cases to consider:
\vspace{.1cm}
\noindent\underline{Case 1}: Assume that $e_r\in E$. By induction, the natural generators
$$\mathcal{U}(I_{G\setminus({(E\setminus\{e_r\}) \cup F})}) \cup
\mathcal{U}(M^G_{E\setminus\{e_r\},F})$$ of
$I^G_{E\setminus \{e_r\},F}$ is a Gr\"obner basis with
respect to $<_G$ and has a square-free initial ideal with respect to $<_G$. For the computations that follow, we can restrict to a minimal Gr\"obner basis by removing elements of this generating set which do not
have a square-free lead term.
Since $e_r$ cannot divide both terms of a binomial defined by a primitive closed even walk, we must have that this minimal Gr\"obner basis is square-free in $y=e_r$ (any $e_r$ that appears in a binomial
must appear in the lead term by definition of $<_G$, because none of the generators of $I^G_{E\setminus \{e_r\},F}$ involve $e_1,\ldots,e_{r-1}$). Therefore, $I^G_{E\setminus \{e_r\},F}$ has a geometric vertex decomposition with respect to $y$ by Lemma \ref{square-freey} (2).
The ideal $C_{y,I^G_{E\setminus \{e_r\},F}}$ is therefore generated by:
\begin{itemize}
\item Binomials corresponding to primitive closed even walks not passing through any edge of $E_r$. That is, elements of $\mathcal{U}(I_{G\setminus E_r})$.
\item Monomials $m$ which appear as the coefficient of $e_r$ in $me_r -p\in \mathcal{U}(I_{G\setminus E_{r-1}})$.
\item Monomials $m$ which appear as the coefficient of $e_r$ in $\mathcal{U}(M^G_{E\setminus \{e_r\},F})$. In this case, $m$ is part of a binomial $me_r\prod\limits_{i\in \mathcal{I}} e_i-p \in \mathcal{U}(I_G)$, where $\mathcal{I}$
indexes a subset of $E\setminus \{e_r\}$.
\end{itemize}
\noindent The last two types of monomials are exactly those monomials defining $\mathcal{U}(M^G_{E,F})$. Therefore
\[C_{y,I^G_{E\setminus \{e_r\},F}} = I^G_{E,F}.\]
\noindent Furthermore, the generators listed above for $C_{y,I^G_{E\setminus \{e_r\},F}}$ are a Gr\"obner basis with respect to $<_G$ by Lemma \ref{square-freey} (1) and are a subset of the natural generators
of $I^G_{E,F}$. Its initial ideal is also square-free since we restricted to a minimal
Gr\"obner basis before computing $C_{y,I^G_{E\setminus \{e_r\},F}}$.
\vspace{.1cm}
\noindent\underline{Case 2}: Assume that $e_r\in F$. We argue similarly to Case 1 and omit the details. By induction $\mathcal{U}(I^G_{E,F\setminus \{e_r\}})$ is a Gr\"obner basis with respect to $<_G$ and defines a square-free initial ideal. We can once again restrict to a minimal Gr\"obner basis, both ensuring that all lead terms are square-free and that $y=e_r$ defines a geometric vertex decomposition. In this case, $N_{y,I^G_{E,F\setminus \{e_r\}}} = I^G_{E,F}$, and $\mathcal{U}(I^G_{E,F\setminus \{e_r\}})$ is a Gr\"obner basis by Lemma \ref{square-freey} (1) with respect to $<_G$. As in Case 1,
the initial ideal of $I^G_{E,F}$ is square-free with respect to this
monomial order since we restricted to a minimal Gr\"obner basis when
computing $N_{y,I^G_{E,F\setminus\{e_r\}}}$.
For statement $(2)$, the ideal $I^G_{E,F}$ is radical because it has a square-free degeneration. Statements $(3)$ and $(4)$ were shown as
part of the proof of statement $(1)$.
\end{proof}
We now verify that statement $(B)$ holds.
\begin{theorem} \label{statementb}
Let $I_G$ be the toric ideal of a finite simple graph $G$ such that $\init_{<_G}(I_G)$ is square-free. If $E \cup F = E_n$, then $I^G_{E,F} = \langle 0 \rangle$ or
$\langle 1 \rangle$.
\end{theorem}
\begin{proof}
Let $\mathcal{U}(I_G)$ be the universal Gr\"obner basis of $I_G$ defined in Theorem~\ref{generatordescription}. Since
$\init_{<_G}(I_G)$ is square-free, we can take a minimal Gr\"obner basis where
each lead term is square-free. We can write each element in our Gr\"obner basis
as a binomial of the form $m\ell -p$ with ${\rm in}_{<_G}(m\ell-p) = m\ell$ where
$\ell$ is a monomial only in the variables in $E$. Suppose that
there is a binomial $m\ell -p \in \mathcal{U}(I_G)$ such that $m\ell = \ell$, i.e.,
the lead term only involves variables in $E$. Then
$1 \in M^G_{E,F}$, and so $I^G_{E,F} = \langle 1 \rangle$, since the monomials
of $M^G_{E,F}$ form part of the generating set of $I^G_{E,F}$.
Otherwise, for every $m\ell -p \in \mathcal{U}(I_G)$, there is a variable $e_j \not\in E$ such that
$e_j|m$. Since $E \cup F = E_n$, we must have $e_j \in F$. But then $m$
is not in $M^G_{E,F}$ since it fails to satisfy condition $(3)$ of being
a monomial in $M^G_{E,F}$, and thus $M^G_{E,F} = \langle 0 \rangle$. Since
$G \setminus (E\cup F)$ is the graph $G$ with all of its
edges removed, $I_{G\setminus (E\cup F)} = \langle 0 \rangle$. Thus
$I^G_{E,F} = \langle 0 \rangle$.
\end{proof}
To prove Conjecture \ref{mainconjecture}, it remains to verify
statement $(C)$; that is, each ideal $I^G_{E,F}$ must be unmixed. This has proven difficult to show in general without specific restrictions on $G$. Nonetheless, the framework presented above leads to the next theorem
which reduces statement
$(C)$ to showing that $\mathbb{K}[E(G)]/I^{G}_{E,F}$ is equidimensional. Recall that a ring $R/I$ is {\it equidimensional} if $\dim(R/I) = \dim(R/P)$ for all minimal primes $P$ of $I$.
\begin{theorem}\label{framework}
Let $I_G$ be the toric ideal of a finite simple graph $G$ such that $\init_{<_G}(I_G)$ is square-free.
If $\mathbb{K}[E(G)]/I^G_{E,F}$ is equidimensional for every choice of $E,F,\ell$ such that $E\cup F=E_\ell$ and $0\leq \ell \leq n$, then $I_G$ is geometrically
vertex decomposable.
\end{theorem}
\begin{proof}
In light of Theorems \ref{thingstocheck}, \ref{CN_Grobner}, and \ref{statementb}, we only need to check that each $I^G_{E,F}$ is unmixed. However, by Theorem~\ref{CN_Grobner} (3), each ideal $I^G_{E,F}$ is radical, so being unmixed is equivalent to being equidimensional.
\end{proof}
\begin{remark}
The definition of $I^G_{E,F}$ is an extension of the setup of Constantinescu and Gorla in \cite{CG} and is also used in Section 5. It is designed to utilize known results about geometric vertex decomposition. In \cite{CG}, $G$ is a bipartite graph, and techniques from liaison theory are employed to prove that $I_G$ is glicci. Using a similar argument for general $G$, we can use \[\init_{<_G}(I^G_{E,F})= e_k\init_{<_G}(I^G_{E\cup\{e_k\},F}) + \init_{<_G}(I^G_{E,F\cup \{e_k\}}) \] to show that $\init_{<_G}(I^G_{E,F})$ can be obtained from $\init_{<_G}(I^G_{E\cup\{e_k\},F})$ via a Basic Double G-link (see \cite[Lemma 2.1 and Theorem 2.8]{CG}), and so $\init_{<_G}(I^G_{E,F})$ being Cohen-Macaulay implies that $\init_{<_G}(I^G_{E\cup\{e_k\},F})$ is too (see Lemma \ref{linkage_CM}). Through induction, we could then prove that some (but not all) of the $I^G_{E,F}$ in the tree following Example~\ref{tree} are Cohen-Macaulay.
On the other hand, to produce $G$-biliaisons as in \cite[Theorem 2.11]{CG}, we would need specialized information about the graph $G$, something which is not a straightforward extension of the bipartite case.
\end{remark}
\subsection{Proof of the conjecture in the quadratic case}\label{quadratic}
In the case that $\mathcal{U}(I_G)$ contains only quadratic binomials,
we are able to verify that Conjecture \ref{mainconjecture} is true,
that is, $I_G$ is geometrically vertex decomposable. We first
show that when $\mathcal{U}(I_G)$ contains only quadratic binomials, it
has the property that ${\rm in}_{<_G}(I_G)$ is a square-free
monomial ideal for any monomial order. In the statement
below, recall that a binomial $m_1-m_2$ is doubly square-free if both monomials
that make up the binomial are square-free.
\begin{lemma}\label{quadratic_square-free}
Suppose that $G$ is a graph such that $I_G$ has a universal
Gr\"obner basis $\mathcal{U}(I_G)$ of quadratic binomials.
Then these generators are doubly square-free.
\end{lemma}
\begin{proof}
By Theorem \ref{generatordescription},
a quadratic element of $\mathcal{U}(I_G)$ comes from a primitive closed walk of length
four of $G$. Since consecutive edges cannot be equal, all primitive walks of length four are actually cycles, so no edge is repeated,
or equivalently, the generator is doubly square-free.
\end{proof}
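For example, if $G$ is a $4$-cycle with edges $e_1 = \{x_1,x_2\}$, $e_2 = \{x_2,x_3\}$, $e_3 = \{x_3,x_4\}$ and $e_4 = \{x_4,x_1\}$, then the primitive closed walk around the cycle gives $\mathcal{U}(I_G) = \{e_1e_3 - e_2e_4\}$, and both monomials $e_1e_3$ and $e_2e_4$ are square-free.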
As noted in the previous subsection, to verify the conjecture
in this case, it suffices to show that $\mathbb{K}[E(G)]/I^G_{E,F}$ is equidimensional for all $E,F,\ell$ with $E \cup F = E_\ell$. In fact,
we will prove the stronger result that all of these rings are Cohen-Macaulay.
We start with the useful observation that the natural set of generators of $I^G_{E,F}$ actually defines a universal Gr\"obner basis for the ideal.
\begin{lemma}\label{universal_GEF}
Under the assumptions of Theorem~\ref{CN_Grobner}, $\mathcal{U}(I_{G\setminus E_\ell})\cup\mathcal{U}(M^G_{E,F})$ is a universal Gr\"obner basis of $I^G_{E,F}$.
\end{lemma}
\begin{proof}
We will proceed by induction on $|E\cup F|$. The result is clear when $|E\cup F|=0$. For the induction step, observe that $I^G_{E,F}$ is either $N_{y,I^G_{E,F\setminus y}}$ or $C_{y,I^G_{E\setminus y,F}}$ for some variable $y=e_i$. Suppose towards a contradiction that there is some monomial order $<$ on $\mathbb{K}[e_1,\ldots,\hat{y},\ldots,e_n]$ for which $\mathcal{U}(I^G_{E,F})$ is not a Gr\"obner basis. Extend $<$ to a monomial order $<_y$ on $\mathbb{K}[e_1,\ldots,e_n]$ which first chooses terms with the highest degree in $y$ and breaks ties using $<$. Clearly $<_y$ is a $y$-compatible order. By \cite[Theorem 2.1]{KMY}, $\mathcal{U}(I^G_{E,F})$ is a Gr\"obner basis with respect to $<_y$. But $<_y=<$ on $\mathbb{K}[e_1,\ldots,\hat{y},\ldots,e_n]$, a contradiction.
\end{proof}
\begin{lemma}\label{CMspecialcase}
Let $R=\mathbb{K}[E(G)]$, and suppose that $G$ is a finite simple graph such that $I_G$ has a universal Gr\"obner basis $\mathcal{U}(I_G)$ of quadratic binomials. Then $R/I^G_{E,F}$ is Cohen-Macaulay for every choice of $E,F$ and $\ell$ such that $E\cup F=E_\ell$.
\end{lemma}
\begin{proof}
Fix some $E$ and $F$ such that $E\cup F=E_\ell$. By definition $I^G_{E,F} = I_{G\setminus E_\ell} + M^G_{E,F}$. Since $\mathcal{U}(I_G)$ consists of quadratic binomials, $M^G_{E,F}$ is either $\langle 1\rangle, \langle 0\rangle,$ or $\langle e_{i_1},\ldots,e_{i_s}\rangle$ with $s>0$.
The statement of the lemma clearly holds if $M^G_{E,F}=\langle 1\rangle$. If $M^G_{E,F}=\langle 0\rangle$, then $I^G_{E,F} = I_{G\setminus E_\ell}$. In this case $I_{G\setminus E_\ell}$ is generated by quadratic primitive binomials and therefore possesses a square-free degeneration; by Theorem \ref{sqfree=>cm}, such toric ideals of graphs are Cohen-Macaulay. We are left with the case that $M^G_{E,F}$ is generated by $s$ indeterminates.
We first show that each $I^G_{E,F}$ is actually equal
to $\widetilde{I}^G_{E,F}:= I_{G\setminus (E_\ell\cup\{e_{i_1},\ldots, e_{i_s}\})}+M^G_{E,F}$. We certainly have $\widetilde{I}^G_{E,F} \subset I^G_{E,F}$. Let $<_{E,F}$ be the monomial order $e_{i_1}>\cdots >e_{i_s}$ and $e_{i_s}>f$ for all $f\in E(G)\setminus (E_\ell\cup\{e_{i_1},\ldots, e_{i_s}\})$. By Lemma \ref{universal_GEF}, $\mathcal{U}(I_{G\setminus E_\ell})\cup\mathcal{U}(M^G_{E,F})$ is a universal Gr\"obner basis for $I^G_{E,F}$. A similar statement holds for $\widetilde{I}^G_{E,F}$ since no variable of $\mathcal{U}(M^G_{E,F})$ is used to define $I_{G\setminus (E_\ell\cup\{e_{i_1},\ldots, e_{i_s}\})}$.
Clearly $\init_{<_{E,F}}(\widetilde{I}^G_{E,F}) \subset \init_{<_{E,F}}(I^G_{E,F})$. On the other hand, if there is some $u-v\in\mathcal{U}(I_{G\setminus E_\ell})$ where $u$ or $v$ is in the ideal $M^G_{E,F}$, then $\init_{<_{E,F}}(u-v)$ is a multiple of some $e_{i_j}$ for $j\in\{1,\ldots,s\}$. Therefore, $\init_{<_{E,F}}(\widetilde{I}^G_{E,F}) = \init_{<_{E,F}}(I^G_{E,F})$ which in turn implies that $\widetilde{I}^G_{E,F} = I^G_{E,F}$ (e.g. see \cite[Problem 2.8]{EH}).
Therefore, we can show that $R/I^G_{E,F}$ is Cohen-Macaulay by proving that $R/\widetilde{I}^G_{E,F}$ is. Recall that if a graded ring $S$ is Cohen-Macaulay and $x$ is a homogeneous non-zero-divisor of $S$, then $S/\langle x \rangle$ is also Cohen-Macaulay.
Now it is easy to see that $e_{i_1} + I_{G\setminus (E_\ell\cup\{e_{i_1},\ldots, e_{i_s}\})},\ldots, e_{i_s} + I_{G\setminus (E_\ell\cup\{e_{i_1},\ldots, e_{i_s}\})}$
is a regular sequence on $R/I_{G\setminus (E_\ell\cup\{e_{i_1},\ldots, e_{i_s}\})}$. This follows from the fact that $I_{G\setminus (E_\ell\cup\{e_{i_1},\ldots, e_{i_s}\})}$
is Cohen-Macaulay since it possesses a square-free degeneration, and from the fact that $\mathcal{U}(I_{G\setminus (E_\ell\cup\{e_{i_1},\ldots, e_{i_s}\})})$ is not
defined using the variables $\{e_{i_1},\ldots,e_{i_s}\}$.
\end{proof}
The previous lemma provides the unmixed condition needed to use Theorem~\ref{framework}. In summary, we have the following
result:
\begin{theorem}\label{quadratic_GVD}
Let $I_G$ be the toric ideal of a finite simple graph $G$
such that $\mathcal{U}(I_G)$ consists of quadratic binomials. Then $I_G$ is geometrically vertex decomposable and glicci.
\end{theorem}
\begin{proof}
By Lemma~\ref{quadratic_square-free}, any lexicographic order on the variables will determine a square-free degeneration of $I_G$. By Lemma~\ref{CMspecialcase} the rings $\mathbb{K}[E(G)]/I^G_{E,F}$ are Cohen-Macaulay for
all $E,F,$ and $\ell$ such that $E \cup F = E_\ell$.
In particular, all of these rings are equidimensional. Thus,
by Theorem~\ref{framework}, $I_G$ is geometrically vertex decomposable, and therefore glicci by Theorem \ref{gvd=>glicci}.
\end{proof}
\begin{remark}
Although the condition that $\mathcal{U}(I_G)$ consists of quadratic binomials is restrictive, it is worth noting that there are families of graphs for which this is true (e.g. certain bipartite graphs). See \cite[Theorem 1.2]{OH} for a characterization of when $I_G$ can be generated by quadratic binomials, and \cite[Proposition 1.3]{HNOS} for the case where the Gr\"obner basis is quadratic.
\end{remark}
\section{Toric ideals of graphs can be geometrically vertex decomposable but have no square-free degeneration}
In this short section, we show by example that there exist toric ideals of graphs which are geometrically vertex decomposable but have no square-free initial ideals. By \cite[Proposition 2.14]{KR},
this implies that there are toric ideals of graphs which are
geometrically vertex decomposable but not \emph{$<$-compatibly geometrically vertex decomposable} for any lexicographic monomial order $<$. We will make use of Macaulay2 \cite{M2}.
Indeed, consider the graph in Figure \ref{fig_twok4}, which
can be viewed as two $K_4$'s, the complete graph on four
vertices, joined at a vertex.
\begin{figure}[!ht]
\centering
\begin{tikzpicture}[scale=0.45]
\draw (0,5) -- (5,10)node[midway, above] {$e_2$};
\draw (0,5) -- (5,0) node[midway, left] {$e_1$};
\draw (0,5) -- (10,5);
\draw (4,5.4) node{$e_3$};
\draw (5,0) -- (5,10);
\draw (5.5,6) node{$e_4$};
\draw (5,0) -- (10,5) node[midway, below] {$e_5$};
\draw (10,5) -- (5,10)node[midway, above] {$e_6$};
\draw (10,5) -- (15,10)node[midway, above] {$e_8$};
\draw (10,5) -- (15,0)node[midway, left] {$e_7$};
\draw (10,5) -- (20,5);
\draw (14,5.4) node{$e_9$};
\draw (15,0) -- (15,10);
\draw (15.6,6) node{$e_{10}$};
\draw (15,0) -- (20,5) node[midway, below] {$e_{11}$};
\draw (20,5) -- (15,10)node[midway, above] {$e_{12}$};
\fill[fill=white,draw=black] (0,5) circle (.1) node[left]{$x_1$};
\fill[fill=white,draw=black] (5,0) circle (.1) node[below]{$x_2$};
\fill[fill=white,draw=black] (5,10) circle (.1) node[above]{$x_3$};
\fill[fill=white,draw=black] (10,5) circle (.1) node[below]{$x_4$};
\fill[fill=white,draw=black] (15,0) circle (.1) node[below]{$x_5$};
\fill[fill=white,draw=black] (15,10) circle (.1) node[above]{$x_6$};
\fill[fill=white,draw=black] (20,5) circle (.1) node[right]{$x_7$};
\end{tikzpicture}
\caption{Two $K_4$'s joined at a vertex}
\label{fig_twok4}
\end{figure}
The toric ideal of this graph $G$ in the polynomial ring $R = \mathbb{K}[e_1,\ldots,e_{12}]$ is given by the
ideal
\begin{eqnarray*}
I_G &=& \langle e_8e_{11}-e_7e_{12},e_9e_{10}-e_7e_{12},e_2e_5-e_1e_6,e_3e_4-e_1e_6,e_4e_8e_9-e_5e_6e_{12},\\
&& e_2e_8e_9-e_3e_6e_{12},e_1e_8e_9-e_3e_5e_{12},e_4e_7e_9-e_5e_6e_{11},e_2e_7e_9-e_3e_6e_{11},\\
&& e_1e_7e_9-e_3e_5e_{11},e_4e_7e_8-e_5e_6e_{10},e_2e_7e_8-e_3e_6e_{10},e_1e_7e_8-e_3e_5e_{10}\rangle.
\end{eqnarray*}
One can check in Macaulay2 \cite{M2,StatePoly} that there are no square-free initial ideals of $I_G$ using the following commands. We write $I$ instead of $I_G$ in these commands; here $I$ denotes the ideal displayed above, entered in the ring \texttt{R = QQ[e\_1..e\_12]} (working over $\mathbb{Q}$ for the computation).
\texttt{loadPackage "StatePolytope"}
\texttt{init = initialIdeals I;}
\texttt{all(apply(init, i -> isSquareFree monomialIdeal i), j -> not j)}
\noindent One also observes using the output \texttt{init} from the second line that there are $1350$ unique initial ideals
(e.g., using the command \texttt{\#init}). We will next need to check that $I_G$ is geometrically vertex decomposable. Note that the provided generators of the ideals below may not define a Gr\"obner basis for that ideal. As such, the reader will likely require the use of Macaulay2 to properly verify the details.
We will show $I_G$ is geometrically vertex decomposable, starting by decomposing with respect to $e_2$.
Using the lexicographic monomial order
where $e_2 > e_1 > e_3 > \cdots > e_{12}$, a
Gr\"obner basis of $I_G$ is given by
$$\begin{array}{l}
\{{e}_{8}{e}_{11}-{e}_{7}{e}_{12},
{e}_{9}{e}_{10}-{e}_{7}{e}_{12},
{e}_{3}{e}_{4}-{e}_{1}{e}_{6},
{e}_{4}{e}_{8}{e}_{9}-{e}_{5}{e}_{6}{e}_{12},
{e}_{1}{e}_{8}{e}_{9}-{e}_{3}{e}_{5}{e}_{12},
{e}_{4}{e}_{7}{e}_{9}-{e}_{5}{e}_{6}{e}_{11}, \\
{e}_{1}{e}_{7}{e}_{9}-{e}_{3}{e}_{5}{e}_{11},
{e}_{4}{e}_{7}{e}_{8}-{e}_{5}{e}_{6}{e}_{10},
{e}_{1}{e}_{7}{e}_{8}-{e}_{3}{e}_{5}{e}_{10},
{e}_{5}{e}_{6}{e}_{10}{e}_{11}-{e}_{4}{e}_{7}^{2}{e}_{12},
{e}_{3}{e}_{5}{e}_{10}{e}_{11}-{e}_{1}{e}_{7}^{2}{e}_{12},\\
{e}_{2}{e}_{5}-{e}_{1}{e}_{6},
{e}_{2}{e}_{8}{e}_{9}-{e}_{3}{e}_{6}{e}_{12},
{e}_{2}{e}_{7}{e}_{9}-{e}_{3}{e}_{6}{e}_{11},
{e}_{2}{e}_{7}{e}_{8}-{e}_{3}{e}_{6}{e}_{10},
{e}_{2}{e}_{7}^{2}{e}_{12}-{e}_{3}{e}_{6}{e}_{10}{e}_{11}\}.
\end{array}
$$
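This Gr\"obner basis can be reproduced with any computer algebra system; for instance, the following SymPy sketch (our own verification code, independent of Macaulay2) computes the reduced Gr\"obner basis for this lexicographic order, which should agree with the basis above up to the ordering of generators and overall signs. The computation may take a few seconds.
\begin{verbatim}
# Sketch: recompute the lex Groebner basis of I_G with SymPy,
# using the variable order e2 > e1 > e3 > ... > e12.
from sympy import symbols, groebner

e1, e2, e3, e4, e5, e6, e7, e8, e9, e10, e11, e12 = symbols('e1:13')
gens = [e8*e11 - e7*e12, e9*e10 - e7*e12, e2*e5 - e1*e6, e3*e4 - e1*e6,
        e4*e8*e9 - e5*e6*e12, e2*e8*e9 - e3*e6*e12, e1*e8*e9 - e3*e5*e12,
        e4*e7*e9 - e5*e6*e11, e2*e7*e9 - e3*e6*e11, e1*e7*e9 - e3*e5*e11,
        e4*e7*e8 - e5*e6*e10, e2*e7*e8 - e3*e6*e10, e1*e7*e8 - e3*e5*e10]
G = groebner(gens, e2, e1, e3, e4, e5, e6, e7, e8, e9, e10, e11, e12,
             order='lex')
for g in G.exprs:
    print(g)
\end{verbatim}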
From the above Gr\"obner basis, we have
\begin{eqnarray*}
\init_{e_2}(I_G) &=& \langle e_8e_{11}-e_7e_{12}, e_9e_{10}-e_7e_{12}, e_3e_4-e_1e_6, e_3e_8e_{10}-e_5e_6e_{12}, e_1e_8e_{10}-e_4e_5e_{12},\\
&& e_3e_7e_{10}-e_5e_6e_{11}, e_1e_7e_{10}-e_4e_5e_{11}, e_3e_7e_8-e_5e_6e_9, e_1e_7e_8-e_4e_5e_9, e_2e_7^2e_{12},\\
&& e_5e_6e_9e_{11}-e_3e_7^2e_{12},
e_4e_5e_9e_{11}-e_1e_7^2e_{12}, e_2e_5, e_2e_8e_{10}, e_2e_7e_{10}, e_2e_7e_8 \rangle
\end{eqnarray*}
\noindent and we can directly compute
\begin{align*}
C_{e_2,I_G}&= \langle e_5,e_8e_{11}-e_7e_{12},e_9e_{10}-e_7e_{12},e_8e_9,e_7e_9,e_7e_8,e_3e_4-e_1e_6\rangle, \\
N_{e_2,I_G} &= \langle e_8e_{11}-e_7e_{12},e_9e_{10}-e_7e_{12},e_3e_4-e_1e_6,e_4e_8e_9-e_5e_6e_{12}, e_1e_8e_9-e_3e_5e_{12},\\
& \hspace{6mm} e_4e_7e_9-e_5e_6e_{11},e_1e_7e_9-e_3e_5e_{11},e_4e_7e_8-e_5e_6e_{10},e_1e_7e_8-e_3e_5e_{10}\rangle.
\end{align*}
Note that we have removed redundant generators in
the above ideals.
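Continuing the previous snippet, the passage from a $y$-compatible Gr\"obner basis to $C_{y,I}$ and $N_{y,I}$ can be automated as follows (our own sketch; it assumes each basis element is written as $y^{d}q+r$ with $\deg_y r<d$ and $y\nmid q$, and it returns possibly redundant generating sets). Applied with $y=e_2$, it should recover the two ideals above up to redundant generators.
\begin{verbatim}
# Sketch: extract generators of C_{y,I} and N_{y,I} from a Groebner
# basis computed with a y-compatible order (possibly with redundancies).
from sympy import Poly

def cn_ideals(gb, y):
    C, N = [], []
    for g in gb:
        p = Poly(g, y)             # view g as a polynomial in y
        q = p.all_coeffs()[0]      # coefficient q of y^d in g = y^d*q + r
        C.append(q)
        if p.degree() == 0:        # g does not involve y
            N.append(g)
    return C, N

C_gens, N_gens = cn_ideals(G.exprs, e2)
\end{verbatim}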
We first check that $J:=C_{e_2,I_G}$ is geometrically vertex decomposable. We note that $J$ is unmixed and proceed to geometrically vertex decompose with respect to $e_{8}$.
In
the ring $\mathbb{K}[e_1,e_3,\ldots,e_{12}]$,
we use the lexicographic monomial order where
$e_8 > e_j$ for $j \neq 8$. From the resulting Gr\"obner basis,
which we do not display here, we find
\begin{align*}
\init_{e_8}(J) = \langle & e_5,e_9e_{10}-e_7e_{12},e_7e_9,e_3e_4-e_1e_6,e_8e_{11},e_8e_9,e_7e_8\rangle, \\
C_{e_{8},J} = \langle & e_{11},e_9,e_7,e_5,e_3e_4-e_1e_6\rangle, \\
N_{e_{8},J} = \langle & e_5,e_9e_{10}-e_7e_{12},e_7e_9,e_3e_4-e_1e_6 \rangle.
\end{align*}
\noindent The ideals $C_{e_8,J}$, $N_{e_8,J}$ are each unmixed, and it is not difficult to check that they are both geometrically vertex decomposable. This can be seen quickly with the use of Theorem \ref{tensorproduct} (by writing $N_{e_{8},J} = \langle e_5,e_3e_4-e_1e_6\rangle + \langle e_9e_{10}-e_7e_{12},e_7e_9\rangle$ for example).
Next we show that $K:=N_{e_2,I_G}$ is geometrically vertex decomposable. Notice that $K= I_{G\setminus e_2}$ by Lemma \ref{linktoricidealgraph}, and hence is unmixed. If we decompose with respect to $e_{5}$
(again using a lexicographic
monomial order where $e_5$ is the largest variable), we get
\begin{eqnarray*} \init_{e_{5}}(K) &=& \langle e_8e_{11}-e_7e_{12},e_9e_{10}-e_7e_{12},e_3e_4-e_1e_6, e_5e_6e_{12},\\
&& e_3e_5e_{12},e_5e_6e_{11},e_3e_5e_{11},e_5e_6e_{10},e_3e_5e_{10}\rangle.
\end{eqnarray*}
Again by Lemma \ref{linktoricidealgraph}, $N_{e_5,K}= I_{G\setminus\{e_2,e_5\}}$, and the resulting graph only contains primitive closed even walks of length four. Therefore by Theorem \ref{quadratic_GVD}, $N_{e_5,K}$ is geometrically vertex decomposable. On the other hand,
\[C_{e_5,K} = \langle e_6e_{12},e_3e_{12},e_8e_{11}-e_7e_{12},e_6e_{11},e_3e_{11},e_9e_{10}-e_7e_{12},e_6e_{10},e_3e_{10},e_3e_4-e_1e_6\rangle.\]
\noindent This ideal is unmixed, but we will need two more steps to show that $L:=C_{e_5,K}$ is geometrically vertex decomposable. We start with the edge $e_{12}$ and compute the ideals:
\begin{align*}
\init_{e_{12}}(L) &=\langle e_6e_{11},e_3e_{11},e_9e_{10}-e_8e_{11},e_6e_{10},e_3e_{10},e_3e_4-e_1e_6,e_7e_{12},e_6e_{12},e_3e_{12}\rangle,\\
N_{e_{12},L} &= \langle e_6e_{11},e_3e_{11},e_9e_{10}-e_8e_{11},e_6e_{10},e_3e_{10},e_3e_4-e_1e_6 \rangle,\\
C_{e_{12},L} &= \langle e_7,e_6,e_3,e_9e_{10}-e_8e_{11} \rangle.
\end{align*}
Both $N_{e_{12},L}$ and $C_{e_{12},L}$ are unmixed, and the latter is clearly geometrically vertex decomposable. Setting $M=N_{e_{12},L}$, we continue the process once more using the edge $e_{11}$ and compute the ideals:
\begin{align*}
\init_{e_{11}}(M) &=\langle e_6e_{10},e_3e_{10},e_3e_4-e_1e_6,e_8e_{11},e_6e_{11},e_3e_{11}\rangle,\\
N_{e_{11},M} &= \langle e_6e_{10},e_3e_{10},e_3e_4-e_1e_6 \rangle,\\
C_{e_{11},M} &= \langle e_8,e_6,e_3 \rangle.
\end{align*}
The ideal $C_{e_{11},M}$ is geometrically vertex decomposable by definition. It is an easy exercise to check that $N_{e_{11},M}$ is also geometrically vertex decomposable, completing the argument.
\begin{remark}
Let $I_G$ be a toric ideal of a graph and suppose that $\mathbb{K}[E(G)]/I_G$ is Cohen-Macaulay. Up until now, every such $I_G$ that we have considered has been geometrically vertex decomposable. We end the paper by pointing out that this is not always the case. Indeed, let $G$ be the graph given in Figure \ref{fig:2triangles}. Then $I_G = \langle e_1e_4^2e_6e_7 - e_2e_3e_5^2e_8 \rangle$, which is not geometrically vertex decomposable.
However, one can make a substitution of variables $y = e_4^2$ to obtain the related ideal $\langle ye_1e_6e_7 - e_2e_3e_5^2e_8 \rangle\subseteq \mathbb{K}[e_1,e_2,e_3,y,e_5,e_6,e_7,e_8]$, which is geometrically vertex decomposable (though it is no longer homogeneous). In future work, we will further explore this idea of geometric vertex decomposition allowing substitutions of variables, its connection to Gorenstein liaison, and its implications for toric ideals of graphs.
\begin{figure}[!ht]
\centering
\begin{tikzpicture}[scale=0.45]
\draw (0,3) -- (0,9)node[midway, left] {$e_1$};
\draw (4.5,6) -- (0,9) node[midway, above] {$e_3$};
\draw (0,3) -- (4.5,6) node[midway, below] {$e_2$};
\draw (4.5,6) -- (7.5,6) node[midway, above] {$e_4$};
\draw (7.5,6) -- (10.5,6) node[midway, above] {$e_5$};
\draw (10.5,6) -- (14,3) node[midway, below] {$e_6$};
\draw (10.5,6) -- (14,9) node[midway, above] {$e_7$};
\draw (14,3) -- (14,9) node[midway, right] {$e_8$};
\fill[fill=white,draw=black] (0,3) circle (.1) node[left]{$x_1$};
\fill[fill=white,draw=black] (0,9) circle (.1) node[left]{$x_2$};
\fill[fill=white,draw=black] (4.5,6) circle (.1) node[below]{$x_3$};
\fill[fill=white,draw=black] (7.5,6) circle (.1) node[below]{$x_4$};
\fill[fill=white,draw=black] (10.5,6) circle (.1) node[below]{$x_5$};
\fill[fill=white,draw=black] (14,3) circle (.1) node[right]{$x_6$};
\fill[fill=white,draw=black] (14,9) circle (.1) node[right]{$x_7$};
\end{tikzpicture}
\caption{Two triangles connected by a path of length two.}
\label{fig:2triangles}
\end{figure}
\end{remark}
\newpage
| {'timestamp': '2022-07-14T02:22:44', 'yymm': '2207', 'arxiv_id': '2207.06391', 'language': 'en', 'url': 'https://arxiv.org/abs/2207.06391'} |
\section{Introduction}
A Higgs-like particle has recently been discovered at the Large Hadron
Collider (LHC) \cite{Aad:2012tfa,Chatrchyan:2012ufa}.
One of the primary interests of particle physics is to understand the mechanism of
dynamical electroweak symmetry breaking.
One of the promising mechanisms to explain the Higgs particle is the
Hosotani mechanism~\cite{Hosotani:1983xw,Hosotani:1988bm}, which leads
to the gauge-Higgs unification.
In the Hosotani mechanism, the Higgs particle is interpreted as the
fluctuation of the extra-dimensional component of the gauge field when
adjoint fermions are introduced with a periodic boundary condition
(PBC), because a non-zero vacuum expectation value (VEV) of the
extra-dimensional component of the gauge field is then realized.
Recently, the same phenomenon has been observed in a different
context; for example, see Refs.~\cite{Nishimura:2009me,Cossu:2009sq,Cossu:2013ora}.
When adjoint fermions with PBC are introduced in Quantum
Chromodynamics (QCD) at finite temperature, some exotic phases
appear.
In such exotic phases, the traced fundamental Polyakov loop $\Phi$
can take a non-trivial value, which signals the spontaneous
gauge-symmetry breaking.
This corresponds to the realization of the Hosotani mechanism in $R^3 \times S^{1}$
space-time, as shown later.
Furthermore, we consider the flavor twisted boundary condition (FTBC) for
fundamental fermions.
The FTBC was considered in Refs.~\cite{Kouno:2012zz,Sakai:2012ika} to
investigate correlations between the breaking of the $Z_3$ and chiral
symmetries, because the $Z_3$ symmetry is not explicitly broken in the
case with FTBC even if we introduce fundamental fermions.
Standard fundamental fermions cannot lead to the spontaneous gauge
symmetry breaking, but fundamental fermions with FTBC can, as shown later.
The purpose of this talk is to explain how the recent lattice
simulations can be understood from the viewpoint of the Hosotani mechanism,
and to discuss the possibility of the spontaneous gauge symmetry breaking
induced by fundamental fermions.
This talk is based on the papers~\cite{Kashiwa:2013rmg,Kouno:2013mma}.
\section{Formalism}
In this study, we use the perturbative one-loop effective potential
~\cite{Gross:1980br,Weiss:1980rj}
on $R^{3}\times S^1$ for the gauge boson and fermions, where the imaginary-time
direction is the compactified dimension.
Firstly, we expand the $SU(N)$ gauge boson field as
\begin{align}
A_\mu &= \langle A_y \rangle + \tilde{A}_\mu,
\end{align}
where $y$ stands for a compact direction, $\langle A_y \rangle$
is the VEV and $\tilde{A}_\mu$ expresses the fluctuation part.
For later convenience, we rewrite it as
\begin{align}
\langle A_y \rangle &=\frac{2 \pi}{gL} q,
\end{align}
where $g$ is the gauge coupling constant, the color structure of $q$
is $\mathrm{diag}(q_1,q_2,...,q_{N})$,
and each component is understood as $(q_i)_{mod~1}$.
We note that the eigenvalues $q_{i}$
are invariant under all gauge transformations preserving the boundary conditions,
and thus we can easily observe spontaneous gauge symmetry breaking from
the values of $q_{i}$.
The gluon one-loop effective potential ${\cal V}_g$ can be expressed as
\begin{align}
{\cal V}_g
&= - \frac{2}{L^4 \pi^2} \sum_{i,j=1}^N \sum_{n=1}^{\infty}
\Bigl( 1 - \frac{1}{N} \delta_{ij} \Bigr)
\frac{\cos( 2 n \pi q_{ij})}{n^4},
\end{align}
where $q_{ij} = ( q_i - q_j )_{mod~1}$
and $N$ is the number of color degrees of freedom.
The perturbative one-loop effective potential for the massive fundamental quark
is expressed by using the modified Bessel function of the second kind $K_2(x)$ as
\begin{align}
{\cal V}_f^\phi(N_{f},m_f) &=
\frac{ 2 N_f m_{f}^{2}}{\pi^2L^2} \sum_{i=1}^N \sum_{n=1}^\infty
\frac{K_2 ( n m_{f} L )}{n^2}
\cos [2 \pi n (q_i + \phi)],
\end{align}
where $N_f$ and $m_{f}$ are the number of flavors and the mass of the fundamental fermions, respectively.
The perturbative one-loop effective potential for the massive adjoint
quark ${\cal V}_a^\phi$ is
\begin{align}
{\cal V}_a^\phi (N_{a}, m_{a}) &=
\frac{ 2 N_a m_{a}^{2}}{\pi^2L^2} \sum_{i,j=1}^N \sum_{n=1}^\infty
\Bigl( 1 - \frac{1}{N} \delta_{ij} \Bigr)
\frac{K_2 ( n m_{a} L )}{n^2}
\cos [2 \pi n (q_{ij} + \phi)],
\label{Re}
\end{align}
where $N_a$ and $m_{a}$ are the number of flavors and the mass of the adjoint fermions, respectively.
For the gauge theory with $N_{f}$ fundamental and $N_{a}$ adjoint fermions with
arbitrary boundary conditions,
the total perturbative one-loop effective potential becomes
\begin{equation}
{\cal V} = {\cal V}_{g}+{\cal V}_{f}^{\phi}(N_{f}, m_f)+{\cal V}_a^{\phi}(N_{a}, m_a).
\label{eq_ep_pert}
\end{equation}
This total one-loop effective potential contains eight parameters:
the compactification scale $L$, the number of colors $N$, the fermion
masses $m_{f}$, $m_{a}$, the numbers of flavors $N_{f}$, $N_{a}$, and the
boundary conditions $\phi$ for the two kinds of matter fields.
In this study, we keep $N=3$, and the phase diagram is then obtained in
the $1/L$-$m_{a}$ plane with fixed $m_{f}$, $N_{f}$, $N_{a}$ and $\phi$.
The reason we change $m_{a}$ while fixing $m_{f}$ is that the gauge-symmetry
phase diagram is more sensitive to the former than to the latter.
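As an illustration of how the phase structure is extracted from the potential of Eq.~(\ref{eq_ep_pert}), the following sketch (our own code, not part of the original analysis; the truncation of the winding sum at $n_{\rm max}=40$, the grid resolution and the ansatz $q_3=-(q_1+q_2)$ are our choices) evaluates $[{\cal V}_g+{\cal V}_a^{0}]L^4$ for $SU(3)$ with one PBC adjoint quark and locates its minimum in the $(q_1,q_2)$ plane.
\begin{verbatim}
# Sketch: scan [V_g + V_a] L^4 for SU(3) with one PBC adjoint quark
# (phi = 0) over (q1, q2) and locate the minimum; units with L = 1.
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind

def potential(Q1, Q2, mL, N=3, Na=1, phi=0.0, nmax=40):
    qvec = [Q1, Q2, -(Q1 + Q2)]          # traceless ansatz (our choice)
    V = np.zeros_like(Q1)
    for n in range(1, nmax + 1):
        for i in range(N):
            for j in range(N):
                w = 1.0 - (1.0 / N if i == j else 0.0)
                qij = qvec[i] - qvec[j]
                V -= (2.0 / np.pi**2) * w * np.cos(2*np.pi*n*qij) / n**4
                V += (2.0*Na*mL**2 / np.pi**2) * w * kv(2, n*mL) / n**2 \
                     * np.cos(2*np.pi*n*(qij + phi))
    return V

qs = np.linspace(0.0, 1.0, 201)
Q1, Q2 = np.meshgrid(qs, qs, indexing='ij')
for mL in (1.2, 1.6, 2.0, 3.0):
    V = potential(Q1, Q2, mL)
    i, j = np.unravel_index(np.argmin(V), V.shape)
    q = np.array([qs[i], qs[j], (-(qs[i] + qs[j])) % 1.0])
    Phi = np.exp(2j * np.pi * q).mean()
    print(f"mL = {mL}: q = {np.sort(q % 1.0)}, |Phi| = {abs(Phi):.3f}")
\end{verbatim}
The degeneracy pattern of the minimizing $q_i$ (all equal, two equal, or all distinct) distinguishes the deconfined, split and reconfined phases discussed in the next section.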
\section{ $SU(3)$ gauge theory with adjoint and fundamental
quarks~\cite{Kashiwa:2013rmg}}
Here, we consider the case of $(N_{f},N_{a})=(0,1)$ with PBC.
We note that this case has an exact $Z_3$ symmetry because the adjoint
quark does not break the symmetry.
Figure~\ref{Fig_p_gapm_2D} shows the effective potential
$[{\cal V}_g+{\cal V}_a^{0}(N_{a}, m_{a})]L^4$ as a function of $q_{1}$
with $q_{2}=0$ for $m L=1.2$, $1.6$, $2.0$ and $3.0$ from left to right
panels ($m \equiv m_{a}$).
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.23\textwidth]{2D_potential_gap_t10_m12.eps}
\includegraphics[width=0.23\textwidth]{2D_potential_gap_t10_m16p05.eps}
\includegraphics[width=0.23\textwidth]{2D_potential_gap_t10_m20p01.eps}
\includegraphics[width=0.23\textwidth]{2D_potential_gap_t10_m30.eps}
\end{center}
\caption{
The one-loop effective potential of $SU(3)$ gauge theory
with one flavor PBC adjoint quark as a function of $q_1$ with $q_2=0$
for $m L=1.2$ (reconfined),
$1.6$ (reconfined$\leftrightarrow$split),
$2.0$ (split$\leftrightarrow$deconfined) and $3.0$ (deconfined).
}
\label{Fig_p_gapm_2D}
\end{figure}
We can clearly see that there is a first-order phase transition in
the vicinity of $m L=1.6$.
This is a transition between the reconfined phase and the other gauge-broken
phase, which we call the split phase.
In Fig.~\ref{4d_phase_p}, we show the phase diagram in $L^{-1}$-$m$ plane
with $(N_{f}, N_{a})=(0,1)$ quark based on the perturbative one-loop
effective potential.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.55\textwidth]{phase_diagra_gap_4D.eps}
\end{center}
\caption{$L^{-1}$-$m$ phase diagram for $SU(3)$ gauge theory on
$R^{3}\times S^1$ with one PBC adjoint quark
based on one-loop effective potential.
The symbol D stands for deconfined ($SU(3)$),
S for split ($SU(2)\times U(1)$) and R for reconfined ($U(1)\times U(1)$) phases.}
\label{4d_phase_p}
\end{figure}
We note that, as $m$ appears as $m L$ in the potential,
we have linear scaling in the phase diagram.
Since we drop the non-perturbative effects in the gluon potential,
we cannot obtain the confined phase at small $L^{-1}$.
The order of three phases in Fig.~\ref{4d_phase_p}
(deconfined $SU(3)$ $\to$ split $SU(2)\times U(1)$ $\to$ reconfined $U(1)\times U(1)$
from small to large $L^{-1}$)
is consistent with that of the lattice simulations~\cite{Cossu:2009sq,Cossu:2013ora},
except for the confined phase.
All the critical lines in the figure are first-order.
In Fig.~\ref{Fig_pl_distribution} we show a schematic distribution
plot of $\Phi$ in the complex plane for each phase.
In the split phase, $\Phi$ has nonzero values but in a different manner from the deconfined phase.
In the reconfined phase, we have $\Phi=0$, although the vacuum breaks the gauge symmetry.
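These values can be checked directly: for representative holonomies realizing the three breaking patterns (the specific $q$ vectors below are our illustrative choices, not fitted values; their $Z_3$ images follow by shifting all $q_i$ by $1/3$), one finds $\Phi=1$, $\Phi=-1/3$ and $\Phi=0$, respectively.
\begin{verbatim}
# Sketch: Polyakov loop for representative holonomies in each phase
# (illustrative q vectors; not fitted values).
import numpy as np
vacua = {"deconfined": [0.0, 0.0, 0.0],   # SU(3) preserved
         "split":      [0.0, 0.5, 0.5],   # SU(2) x U(1): two q_i coincide
         "reconfined": [0.0, 1/3, 2/3]}   # U(1) x U(1): all q_i distinct
for name, q in vacua.items():
    Phi = np.exp(2j * np.pi * np.array(q)).mean()
    print(f"{name:11s} Phi = {Phi:.3f}  |Phi| = {abs(Phi):.3f}")
\end{verbatim}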
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.45\textwidth]{pl-distribution.eps}
\end{center}
\caption{
Schematic distribution plot of Polyakov loop $\Phi$ as a function of
$\mathrm{Re}~\Phi$ and $\mathrm{Im}~\Phi$ for $SU(3)$ gauge theory
with one flavor PBC adjoint quark.
}
\label{Fig_pl_distribution}
\end{figure}
From the above results, we can understand the lattice results~\cite{Cossu:2009sq}
in terms of the Hosotani mechanism, as shown in Fig.~\ref{PoD}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.65\textwidth]{PoD.eps}
\end{center}
\caption{Comparison between the distribution of the Polyakov loop $\Phi$ on the lattice \cite{Cossu:2009sq}
and that from the one-loop effective potential for $SU(3)$ gauge theory on
$R^{3}\times S^1$ with PBC adjoint quarks. Apart from the strong-coupling confined phase,
all of the specific behavior can be interpreted in terms of the phases found in our analytical calculations.}
\label{PoD}
\end{figure}
A schematic picture of the effect of the fundamental quarks on the phase
diagram is shown in Fig.~\ref{PoD2}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[bb=0 150 850 500, clip, width=0.9\textwidth]{PoD2.eps}
\end{center}
\caption{Predicted distribution of the Polyakov loop $\Phi$
based on the one-loop effective potential for $SU(3)$ gauge theory on
$R^{3}\times S^1$ with PBC adjoint and fundamental quarks.}
\label{PoD2}
\end{figure}
The reconfined phase is replaced by the pseudo-confined phase because
the $Z_3$ symmetry is explicitly broken by the fundamental quark
contributions,
but the gauge-symmetry breaking pattern is still the same.
\section{ $SU(3)$ gauge theory with FTBC fundamental
quarks~\cite{Kouno:2013mma}}
In this section, we consider the FTBC for fundamental fermions.
Details of the FTBC can be found in Refs.~\cite{Kouno:2012zz,Sakai:2012ika}.
Contour plots of the $SU(3)$ gauge theory with $N_{F,fund}=120$ FTBC
fundamental quarks for the gauge-symmetric and broken phases are shown
in Fig.~\ref{CP}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.35\textwidth]{Fig10a.eps}
\includegraphics[width=0.35\textwidth]{Fig10b.eps}
\end{center}
\caption{
Contour plot of $[ {\cal V}_{g} + {\cal V}_{f} ] L^4$
in the $q_1$-$q_2$ plane for the case of $N_{F,fund}=120$ FTBC fermions.
The upper panel corresponds to the $SU(3)$ deconfined phase and
the lower panel to the $SU(2)\times U(1)$ C-broken phase.}
\label{CP}
\end{figure}
Unlike the case of the standard fundamental quark, we can clearly see the existence
of spontaneous gauge symmetry breaking of the form $SU(3) \to SU(2) \times U(1)$.
The distribution plot of the fundamental Polyakov-loop is shown in Fig.~\ref{DP}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.6\textwidth]{Fig11.eps}
\end{center}
\caption{
Distribution of the Polyakov loop in the complex plane for the $SU(3)$ gauge theory
with $N_{F,fund}=120$ FTBC fermions.
Solid circles correspond to the deconfinement phase and open circles
to the gauge-symmetry-broken phase.
}
\label{DP}
\end{figure}
The phase diagram is shown in Fig.~\ref{PD} in the $L^{-1}$-$m$ plane.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.5\textwidth]{Fig12.eps}
\end{center}
\caption{The phase diagram in the $L^{-1}$-$m$ plane
for an $SU(3)$ gauge theory with $N_{F,fund}=120$ FTBC fermions.
The symbol D stands for the deconfinement phase
and GB for the $SU(2)\times U(1)$ gauge symmetry broken phase.
In the gauge-symmetry-broken phase, charge conjugation is also
spontaneously broken, which can be seen from the charge-conjugation pairs.}
\label{PD}
\end{figure}
In the case with FTBC fundamental fermions, there is no $U(1) \times
U(1)$ phase, but the $SU(2) \times U(1)$ phase still exists.
The $Z_3$ symmetry is not explicitly broken, just as for the adjoint
fermions, and lattice simulations are also possible.
Therefore, this system is very interesting for studying the
gauge symmetry breaking as well as the confinement-deconfinement transition.
| {'timestamp': '2013-11-21T02:02:00', 'yymm': '1311', 'arxiv_id': '1311.4918', 'language': 'en', 'url': 'https://arxiv.org/abs/1311.4918'} |
\section{Introduction}
Gravitational microlensing is arguably the most successful detection technique for finding extrasolar planets beyond the snowline. Moreover, microlensing planets are located several kiloparsecs away \citep{par06, ben96, wam97} and provide an independent measure of the planet abundance in our Galaxy \citep{sno04,gau02,gou10,sum10,sum11,cas12,suz16,tsa16}. Microlensing planets are detected because light from a distant source star is deflected by a foreground lens star. As a consequence, more light from the source reaches an observer on Earth and the source appears to be brighter. The timescale of the event is related to the total mass of the lens, but also depends on the relative proper motion as well as the distances to the lens and to the source. Some of these parameters can be constrained by analyzing light curves of microlensing events. Further insight can be gained if the distance to the source and its proper motion are known, which is typically achieved by a fit to multi-band photometry. Only $\sim1$ in a million stars in the Galactic bulge is sufficiently aligned with a lens star to be detected by the observer, as predicted in the seminal paper of \cite{pac86}, and the planet names reflect that most of the underlying microlensing events have been discovered by the OGLE \citep{uda94,uda15} and MOA teams \citep{bon04}.
The ESA Gaia mission \citep{gai16} is obtaining accurate parallaxes and proper motions of about 1.7 billion sources brighter than $G\approx21$. The second data release (Gaia DR2), including five-parameter astrometric solutions with parallaxes and proper motions, was released to the community on 25 April 2018 \citep{gai18a}. Comparing the list of 58 microlensing planets on NASA's Exoplanet Archive\footnote{\url{http://exoplanetarchive.ipac.caltech.edu} 25 April 2018} with the initial Gaia data release \citep{gai16b} reveals that 13 microlensing events can be positionally cross-matched and all of them can be found in Gaia DR2, albeit not with five-parameter astrometric solutions. In the following we describe which observable microlensing parameters are related to Gaia parameters and suggest a way to check if they comply with the findings of Gaia.
Most microlensing events follow a simple symmetric Paczy{\'n}ski light curve. In a co-linear lens-source-observer configuration the source star image appears as a ring with the angular Einstein radius
\begin{equation}
\theta_{\mathrm{E}} = \sqrt{\frac{4 G M_{\mathrm{L}}}{c^2} \left(D^{-1}_{\mathrm{L}}-D^{-1}_{\mathrm{S}}\right)}
\label{eq1}
\end{equation}
constraining the typical angular scale of the effect, where $D_{\mathrm{L}}$ denotes the distance from the observer to the deflecting lens of mass $M_{\mathrm{L}}$ and $D_{\mathrm{S}}$ denotes the distance to the source star.
The size of the Einstein radius is on the order of $\sim1\,\mathrm{mas}$ and thus cannot be resolved. The relative proper motion between lens and source
\begin{equation}
\mu_{\mathrm{rel}} = \mu_{\mathrm{S}} - \mu_{\mathrm{L}},
\label{eq2}
\end{equation}
changes the alignment of the lens and source stars. This causes the brightness to change accordingly since Einstein's deflection angle depends on the impact parameter and thus magnifies the source. The typical time-scale of the event can be expressed as
\begin{equation}
t_{\mathrm{E}} = \frac{\theta_{\mathrm{E}}}{\mu_{\mathrm{rel}}}.
\label{eq3}
\end{equation}
This timescale is called the Einstein time and can be obtained from a fit to the microlensing light curve. Usually, the source star is sufficiently bright to be seen when being magnified. If the source star is sufficiently bright for Gaia, one can expect to obtain a parallax constraining $D_{\mathrm{S}}$ and the proper motion $\mu_{\mathrm{S}}$, which leaves us with the task of finding a constraint on $\mu_{\mathrm{L}}$.
It should be stated that some microlensing light curves themselves provide a way of measuring $\mu_{\mathrm{rel}}$ by using finite source effects, namely the angular source star radius $\rho$ expressed in units of $\theta_{\mathrm{E}}$ so that
\begin{equation}
\theta_{\mathrm{E}} = \frac{\theta_{\mathrm{\star}}}{\rho},
\label{eq4}
\end{equation}
where $\theta_{\mathrm{\star}}$ is the angular size of the source star inferred from the source color and $\rho$ can be retrieved from a fit. Using Eq.~\ref{eq3} then leads to the relative proper motion. A direct comparison with Gaia DR2 is hard to achieve because the lens is usually too faint to be detected. Within the scope of this work, we could only check if the proper motion of the source or the lens star exceeds the expected distribution of $\mu_{\mathrm{L}}$ or $\mu_{\mathrm{S}}$ that can be obtained from Gaia DR2 itself.
Finally, an asymmetry in the light curve can lead to the detection of a parallax vector, which we will refer to as the microlensing parallax in order to distinguish it from the parallax measured by Gaia. Observing a microlensing event from different observatories, at different times of the year, and/or by using satellite observations \citep{gou94, gou00} introduces the microlensing parallax as a further fit parameter. It is related to the source and lens distances as well as the Einstein radius $\theta_{\mathrm{E}}$ through
\begin{equation}
\pi_{\mathrm{E}} = \frac{1}{\theta_{\mathrm{E}}} \left(\frac{\mathrm{AU}}{D_{\mathrm{L}}} - \frac{\mathrm{AU}}{D_{\mathrm{S}}}\right).
\label{eq5}
\end{equation}
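To make the relations of Eqs.~\ref{eq1}, \ref{eq3} and \ref{eq5} concrete, the following sketch (our own illustration; the lens mass, distances and proper motion are arbitrary fiducial values, not parameters of a real event) evaluates $\theta_{\mathrm{E}}$, $t_{\mathrm{E}}$ and $\pi_{\mathrm{E}}$ using $\kappa = 4G/(c^2\,\mathrm{AU}) \approx 8.144\,\mathrm{mas}\,\mathrm{M}_{\odot}^{-1}$, so that $\theta_{\mathrm{E}}^2=\kappa M_{\mathrm{L}}\left(\mathrm{AU}/D_{\mathrm{L}}-\mathrm{AU}/D_{\mathrm{S}}\right)$.
\begin{verbatim}
# Sketch with fiducial example values (not from any specific event):
# Einstein radius, Einstein time and microlensing parallax.
import numpy as np

KAPPA = 8.144                  # mas / M_sun, kappa = 4G/(c^2 AU)
M_L, D_L, D_S = 0.5, 4.0, 8.0  # M_sun, kpc, kpc (assumed values)
mu_rel = 7.0                   # mas/yr, typical relative proper motion

pi_rel = 1.0 / D_L - 1.0 / D_S            # relative parallax in mas
theta_E = np.sqrt(KAPPA * M_L * pi_rel)   # Eq. (1), in mas
t_E = theta_E / mu_rel * 365.25           # Eq. (3), in days
pi_E = pi_rel / theta_E                   # Eq. (5)
print(f"theta_E = {theta_E:.3f} mas, t_E = {t_E:.1f} d, pi_E = {pi_E:.3f}")
\end{verbatim}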
\section{Rationale of the comparison}
At first glance, the comparison of existing microlensing planets reported in the literature with Gaia DR2 data seems to be straightforward. One needs to cross-check if the source star is in the Gaia catalog, ensure that the reported brightness is consistent, and apply the corrected parameters to the reported physical parameter estimates. In practice, one faces several challenges as far as selecting a meaningful sample is concerned. Not all published planets come with reported uncertainties on all relevant parameters. In some cases, the parameter space is too complicated to provide an unambiguous set of fit parameters. Moreover, when uncertainties are reported, it is often not clear how the underlying parameter estimates are distributed and if the parameters for the best solution are consistent. In this work, we will follow a simple approach and rely on the best fit in a least-squares sense and use the corresponding parameters as a starting point.
\subsection{Initial selection of planets}
First we devise a filter criterion to determine which planets can be used for resampling the lens mass distribution. Out of 58 planets listed as confirmed in NASA's Exoplanet Archive, we only consider 53 due to a lack of reported values for $\theta_{\mathrm{E}}$. Therefore, the microlensing events MOA-2007-BLG-192L \citep{2008ApJ...684..663B} and OGLE-2016-BLG-0263L \citep{2017AJ....154..133H} are missing.
If applicable, we also compare $\mu_{\mathrm{rel}}$, $\pi_{\mathrm{E}}$, $t_{\mathrm{E}}$, $\theta_{\star}$ and $\rho_{\star}$ using Eqs.~\ref{eq1}, \ref{eq3}, \ref{eq4} and \ref{eq5} in order to assess whether they describe our event and to perform a consistency check so one can argue how compatible they are with the reported $\theta_{\mathrm{E}}$, $D_{\mathrm{L}}$ and $D_{\mathrm{S}}$. Unmentioned source distances are assumed to be $D_{\mathrm{S}}=8\,\mathrm{kpc}$. Moreover, the given mass ratio
\begin{equation}
q=\frac{M_{\mathrm{pl}}}{M_{\mathrm{host}}}
\end{equation}
is compared with the reported planet and host star masses. The planet is then obtained from
\begin{equation}
M_{\mathrm{pl}}=\frac{q}{q+1}\cdot\frac{\theta^2_{\mathrm{E}}}{\kappa}\cdot \left(\frac{\mathrm{AU}}{D_{\mathrm{L}}} - \frac{\mathrm{AU}}{D_{\mathrm{S}}}\right)^{-1}.
\label{eq6}
\end{equation}
Depending on the available parameters we determine the following quantities where applicable:
\begin{equation}
M_{\mathrm{pl}}=\frac{q}{q+1}\cdot\frac{\theta_{\mathrm{E}}}{\kappa\cdot\pi_{\mathrm{E}}},
\label{eq7}
\end{equation}
\begin{equation}
M_{\mathrm{pl}}=\frac{q}{q+1}\cdot\frac{(\mu_{\mathrm{rel}}\cdot t_{\mathrm{E}})^2}{\kappa}\cdot \left(\frac{\mathrm{AU}}{D_{\mathrm{L}}} - \frac{\mathrm{AU}}{D_{\mathrm{S}}}\right)^{-1},
\label{eq8}
\end{equation}
\begin{equation}
M_{\mathrm{pl}}=\frac{q}{q+1}\cdot\frac{\mu_{\mathrm{rel}}\cdot t_{\mathrm{E}}}{\kappa\cdot\pi_{\mathrm{E}}},
\label{eq9}
\end{equation}
\begin{equation}
M_{\mathrm{pl}}=\frac{q}{q+1}\cdot\frac{\theta^2_{\star}}{\kappa\cdot\rho^2}\cdot \left(\frac{\mathrm{AU}}{D_{\mathrm{L}}} - \frac{\mathrm{AU}}{D_{\mathrm{S}}}\right)^{-1},
\label{eq10}
\end{equation}
\begin{equation}
M_{\mathrm{pl}}=\frac{q}{q+1}\cdot\frac{\theta_{\star}}{\kappa\cdot\rho\cdot\pi_{\mathrm{E}}}.
\label{eq11}
\end{equation}
We accept a discrepancy of 20\,\% on $\theta_{\mathrm{E}}$ and 10\,\% on the other aforementioned parameter values as a tolerance. Any other preselection would drastically decrease the number of planets that can be considered. If these parameters are completely incompatible, we refrain from including them in our study. This applies to the events MOA-2010-BLG-328L \citep{2013ApJ...779...91F}, OGLE-2005-BLG-169L \citep{2006ApJ...644L..37G}, and OGLE-2013-BLG-0341L B \citep{2014Sci...345...46G}.
After passing these checks, all available parameters are used to calculate $M_{\mathrm{pl}}$ based on Eqs.~\ref{eq6} to~\ref{eq11}, where $\kappa=8.144 \frac{\mathrm{mas}}{\mathrm{M}_{\odot}}$. The results are compared with the reported planet masses, and the method with the smallest deviation is chosen. It turns out that in many cases applying Eq.~\ref{eq6} leads to the most reliable results as compared to the published planet mass, whereas the other approaches are only partially applicable.
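As an illustration of this cross-check, the sketch below (our own code, with example values rather than the parameters of a real event) evaluates the estimators of Eqs.~\ref{eq6} to~\ref{eq9}; by construction they must agree whenever $\theta_{\mathrm{E}}$, $t_{\mathrm{E}}$, $\pi_{\mathrm{E}}$ and the distances are mutually consistent, so a spread between them signals inconsistent reported parameters.
\begin{verbatim}
# Sketch (example values, not a real event): the planet-mass estimators
# of Eqs. (6)-(9) must agree for mutually consistent inputs.
KAPPA = 8.144                    # mas / M_sun
q = 1.0e-3                       # assumed planet-to-host mass ratio
D_L, D_S = 4.0, 8.0              # kpc (assumed)
theta_E = 0.713                  # mas (fiducial value from above)
mu_rel = 7.0                     # mas / yr
t_E = theta_E / mu_rel           # yr, Eq. (3)
pi_rel = 1.0 / D_L - 1.0 / D_S   # mas
pi_E = pi_rel / theta_E          # Eq. (5)
f = q / (q + 1.0)
m6 = f * theta_E**2 / (KAPPA * pi_rel)         # Eq. (6)
m7 = f * theta_E / (KAPPA * pi_E)              # Eq. (7)
m8 = f * (mu_rel * t_E)**2 / (KAPPA * pi_rel)  # Eq. (8)
m9 = f * (mu_rel * t_E) / (KAPPA * pi_E)       # Eq. (9)
print(f"M_pl = {m6:.3e}, {m7:.3e}, {m8:.3e}, {m9:.3e} M_sun")
\end{verbatim}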
For the actual comparison we need a five-parameter astrometric solution including parallaxes. That reduces our target list to 20 microlensing planets, and some are listed with negative parallaxes, which is not a surprise given that most microlensing events are located in crowded fields and the limiting magnitude is $G\approx18$. In addition to the positional cross-match we check if the reported I magnitudes are consistent with the blue and red Gaia magnitudes $G_{\mathrm{BP}}, G_{\mathrm{RP}}$, which have been transformed to the Johnson-Cousins I band using the relations of \cite{jor10}. Fig.~\ref{fig1} shows the distances to lens and source as well as the cross-matched Gaia DR2 candidates.
\section{Results}
We find that 9 cross-matched stars are within 0.5\,arcsec of the reported target position and within 0.2\,mag of the reported source magnitude. Two more events are positionally cross-matched within 0.5\,arcsec, but do not match the source magnitude. Both events are highly blended, but the inferred distance cannot confirm or rule out whether the lens is the blend.
There was only one cross-matched planetary event with consistent brightness that did not match the reported lens and source distance. \cite{2015ApJ...804...33S} report for OGLE-2011-BLG-0265L a source color $V-I$ of 3.2, which differs from the Gaia DR2 target. The Gaia target is likely not the blend, because the blend is reported to be $>20$\,mag in the I band. The cross-match separation is within 0.06\,arcsec. The magnitude of the relative proper motion of the Gaia DR2 target is 10\,mas/yr. Typical values for the relative proper motion are in the range of 2 to 8\,mas/yr, which by itself would not exclude the possibility of the Gaia DR2 target being the source. Since the event was discovered, the position should not have changed enough to affect the cross-match. We can also exclude that the deviation comes from the inferred distances, since a direct inversion of the parallax is similarly close. Due to the faintness of the event in Gaia ($G\approx 18.9$), we expect that the crowded field has affected the event.
As a side-remark, Gaia DR2 also reports duplicated sources as diagnostic information, which can indicate highly blended events before they occur. In the given selection, OGLE-2006-BLG-109L \citep{2010ApJ...713..837} and MOA-2008-BLG-379L \citep{2014ApJ...780..123S} were reported as duplicated sources.
\begin{figure}
\centering
\resizebox{.8\textwidth}{!}{\includegraphics{gaia_distance_revised3.png}}
\caption{Distances to the reported microlensing planets and their respective host stars are shown along with the corresponding cross-matched distances based on Gaia DR2 parallaxes. The sample is limited to microlensing events towards the Galactic center.}
\label{fig1}
\end{figure}
Considering different avenues of comparing parameters, we compare in Table~\ref{tab1} the reported lens and source distance with the distance of the nearest catalog entry based on the discovery paper\footnote{For some of the events revised or extended parameter estimates are available \citep{bat17, bat14, ben06, don09, ben15, ben10, bea16, tsa14}}. The reference distance and asymmetric uncertainties are based on the inferred distances provided by \cite{bai18} because some of the cross-matched targets are reported to have negative parallaxes. That increases our sample to 20 microlensing planets. The approach has already been tested on Gaia DR1 data \citep{bai15, ast16a, ast16b}. We report if $D_{\mathrm{S}}$ or $D_{\mathrm{L}}$ are within $2\,\sigma$ of the respective asymmetric uncertainties.
\begin{table}[h]
\caption{Comparison between the reported $D_{\mathrm{S}}, D_{\mathrm{L}}$ in the discovery paper and the inferred distance by \cite{bai18}. Converted Johnson-Cousins magnitudes and colors $I_{\mathrm{G}},(V-I)_{\mathrm{G}}$ are calculated based on \citep{jor10}. The first part of the table contains matches within 0.2\,mag of the published source star and a separation below 0.5\,arcsec. The second part contains events that cannot be matched to the source magnitude. The last part contains entries without Gaia colors and separations $>0.5$\,arcsec.}
\label{table:1}
\centering
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{c c c c c}
\hline\hline
Hostname & $\Delta D_{\mathrm{S}}$ & $\Delta D_{\mathrm{L}}$ & $I_{\mathrm{G}}$ & $(V-I)_{\mathrm{G}}$ \\
& $\in [\pm 2\sigma]$ & $\in [\pm 2\sigma]$ & $[\mathrm{mag}]$ & $[\mathrm{mag}]$\\% table heading
\hline
MOA-2009-BLG-266L & yes & yes & 15.9 & 1.7 \\
MOA-2011-BLG-028L & yes & yes & 15.3 & 1.8 \\
OGLE-2008-BLG-092L & yes & yes & 13.9 & 2.0 \\
OGLE-2011-BLG-0265L & no & no & 17.6 & 2.4 \\
OGLE-2012-BLG-0358L & yes & yes & 16.5 & 2.4 \\
OGLE-2013-BLG-0102L & yes & yes & 17.3 & 2.5 \\
OGLE-2015-BLG-0051L & yes & yes & 16.8 & 2.3 \\
OGLE-2016-BLG-0263L & yes & yes & 17.0 & 2.0 \\
OGLE-2017-BLG-1522L & yes & yes & 17.2 & 1.7 \\
\hline
OGLE-2011-BLG-0251L & yes & yes & 15.8 & 3.0 \\
MOA-2010-BLG-117L & yes & yes & 16.8 & 2.0 \\
\hline
MOA-2010-BLG-073L & yes & yes & 15.4 & 2.1 \\
MOA-2012-BLG-505L & yes & yes & 17.0 & 2.2 \\
OGLE-2006-BLG-109L & yes & no & 16.8 & 2.2 \\
OGLE-2007-BLG-368L & yes & yes & 16.0 & 2.0 \\
MOA-2008-BLG-379L & yes & yes & -- & -- \\
MOA-2012-BLG-006L & yes & yes & -- & -- \\
OGLE-2005-BLG-390L & yes & yes & -- & -- \\
OGLE-2012-BLG-0406L & yes & yes & -- & -- \\
OGLE-2017-BLG-0173L & yes & yes & -- & -- \\
\hline
\end{tabular}
\label{tab1}
\end{table}
\section{Conclusions}
\begin{figure}
\centering
\resizebox{.5\textwidth}{!}{\includegraphics{revised_galaxy_plot3.jpg}}
\caption{Positions of source stars and reported microlensing planets are shown in an artist's impression of our Galaxy along with cross-matched Gaia DR2 distances. Credit:
NASA/JPL-Caltech/ESO/R. Hurt}
\label{fig2}
\end{figure}
We have subjected the published microlensing planets to a first compatibility check with Gaia DR2. We have shown that 19 of 20 planetary events with a five-parameter astrometric solution are within $2\,\sigma$ of the reported source distance or the expected lens distance and 9 of them can be matched to the source magnitude within 0.2\,mag.
When revisiting the published parameters, we find that the lack of uncertainties, parameters and Monte Carlo samples makes a fair comparison difficult. We would like to note that we have also treated all events equally, but feature-rich and long events constraining source radius and parallax provide estimates that are independent of a Galactic model and by definition are more informative.
The lens and source star distances of microlensing events are likely to be constrained in a better way by using color-magnitude diagrams along with source and blend fluxes as fit parameters. For OGLE-2011-BLG-0265L, where the inferred Gaia distance is not compatible with source or lens, we assume that the corresponding Gaia DR2 catalog entry belongs to the source star.
Since we cannot reliably assign a Gaia source identifier to the lens, the source or the blend, the reported proper motion cannot be used for further constraining the lens parameters. For our selection of 20 events, the proper motion stays around $7\pm3\,\mathrm{mas}/\mathrm{yr}$ and does not require a more careful treatment of the observational epoch. The cross-matched targets for OGLE-2005-BLG-390L and OGLE-2005-BLG-265L have the highest reported proper motion (10.0 and 14.6\,mas/yr respectively).
Fig.~\ref{fig2} illustrates what the assumed microlensing planet distribution looks like. Green circles correspond to the source star positions, which are quite often constrained to 8\,kpc. We also want to highlight that the published lens distances hint at more lens stars closer to the Sun than one would expect. These events are more likely to enable the measurement of $\pi_{\mathrm{E}}$ and might be affected by a publication bias which makes it easier to unambiguously characterize them as a planet.
We expect that the distribution of parameters provided by Gaia DR2 will also contribute decisively to refining the most recent Galactic models \citep{pas18}, which can be used to assess the nature of microlensing events, as well as complementing high-resolution follow-up observations. To assess the inferred Gaia DR2 distance estimates, it might be useful to compare all single and binary microlensing events with corresponding microlensing parallax measurements in order to see how the inferred distances at several kiloparsecs are affected by crowded fields.
\section*{Acknowledgement}
This work has made use of data from the European Space Agency (ESA) mission
{\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia}
Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC
has been provided by national institutions, in particular the institutions
participating in the {\it Gaia} Multilateral Agreement.
This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program.
This work is based in part on services provided by the GAVO data center.
| {'timestamp': '2018-05-01T02:17:17', 'yymm': '1804', 'arxiv_id': '1804.10136', 'language': 'en', 'url': 'https://arxiv.org/abs/1804.10136'} |
\section{Introduction}
In the theory of motives, rigid tensor categories arise which have a faithful tensor functor
to a category of super vector spaces over a field of characteristic $0$, but which
are not known to be super Tannakian.
In view of the good properties of super Tannakian categories, this raises the questions
of whether for each such rigid tensor category $\mathcal C$ there is a super
Tannakian category $\mathcal C'$ which most closely approximates it,
and of how the objects and morphisms of $\mathcal C'$ are related to those of $\mathcal C$.
To describe the situation in more detail, we first fix some terminology.
By a tensor category we mean a symmetric monoidal category
whose hom-sets have structures of abelian group for which
the composition and tensor product are bilinear.
Such a category will be called rigid if every object is dualisable.
A tensor functor between tensor categories is an additive strong symmetric monoidal functor.
We call a tensor category \emph{pseudo-Tannakian} if it is essentially small and
has a faithful tensor functor to a category of super vector spaces over a field
of characteristic $0$.
If $\mathcal D$ is an abelian pseudo-Tannakian category, then $\End_{\mathcal D}(\mathbf 1)$ is a field
of characteristic $0$, and $\mathcal D$ is a super Tannakian category in the usual sense over this field.
We then say that $\mathcal D$ is super Tannakian.
By a \emph{super Tannakian hull} of a pseudo-Tannakian category $\mathcal C$ we mean a faithful tensor
functor $U:\mathcal C \to \mathcal C'$ with $\mathcal C'$ super Tannakian such that for every super Tannakian
category $\mathcal D$, composition with $U$ defines an equivalence from the groupoid of
faithful tensor functors $\mathcal C' \to \mathcal D$ to the groupoid of faithful tensor functors
$\mathcal C \to \mathcal D$.
Such a $\mathcal C'$ if it exists will be the required closest super Tannakian approximation to $\mathcal C$.
That a super Tannakian hull for every pseudo-Tannakian category $\mathcal C$
exists will be proved in Section~\ref{s:supTann}.
It can be described explicitly as follows.
Denote by $\widehat{\mathcal C}$ the category of additive functors from $\mathcal C^{\mathrm{op}}$ to
abelian groups.
Then $\widehat{\mathcal C}$ is abelian and we have a fully faithful additive Yoneda embedding
\begin{equation*}
\mathcal C \to \widehat{\mathcal C}.
\end{equation*}
The tensor structure of $\mathcal C$ induces a tensor structure on $\widehat{\mathcal C}$,
and the embedding has a canonical structure of tensor functor.
An object $M$ of $\widehat{\mathcal C}$ will be called a \emph{torsion object} if for each object
$B$ of $\mathcal C$ and element $b$ of $M(B)$ there exists a non-zero morphism $a:A \to \mathbf 1$ in $\mathcal C$
such that
\begin{equation*}
M(a \otimes B):M(B) \to M(A \otimes B)
\end{equation*}
sends $b$ to $0$.
The full subcategory $(\widehat{\mathcal C})_{\mathrm{tors}}$ of $\widehat{\mathcal C}$ consisting of
the torsion objects is a Serre subcategory,
and we may form the quotient
\begin{equation*}
\widetilde{\mathcal C} = \widehat{\mathcal C}/(\widehat{\mathcal C})_{\mathrm{tors}}.
\end{equation*}
It has a unique structure of tensor category such that the projection $\mathcal C \to \widetilde{\mathcal C}$
is a strict tensor functor.
Since $\mathcal C$ is rigid, the composite
\begin{equation*}
\mathcal C \to \widehat{\mathcal C} \to \widetilde{\mathcal C}
\end{equation*}
of the projection with the Yoneda embedding factors through the full tensor subcategory
$(\widetilde{\mathcal C})_{\mathrm{rig}}$ of $\widetilde{\mathcal C}$ consisting of the dualisable objects.
This factorisation
\begin{equation}\label{e:Tannhull}
\mathcal C \to (\widetilde{\mathcal C})_{\mathrm{rig}}
\end{equation}
is then the required super Tannakian hull (Theorem~\ref{t:Tannhull}).
We may factor any tensor functor essentially uniquely as a strict
tensor functor which is the identity on objects followed by a fully faithful tensor functor.
In the case of \eqref{e:Tannhull}, this factorisation is
\begin{equation*}
\mathcal C \to \mathcal C_\mathrm{fr} \to (\widetilde{\mathcal C})_{\mathrm{rig}}
\end{equation*}
where $\mathcal C_\mathrm{fr}$ is what will be called the \emph{fractional closure} of $\mathcal C$.
A morphism $C \to C'$ in $\mathcal C_\mathrm{fr}$ is an equivalence class of pairs $(h,f)$ with $0 \ne f:A \to A'$
and $h:A \otimes C \to A' \otimes C'$ morphisms in $\mathcal C$ such that the square
\eqref{e:propdef} below commutes, where $(h,f)$ and $(l,g)$ for $0 \ne g:B \to B'$ are equivalent
when the square \eqref{e:propequiv} below commutes.
If we denote the class of $(h,f)$ by $h/f$, then $\mathcal C \to \mathcal C_\mathrm{fr}$ sends $j$ to $j/1_{\mathbf 1}$.
It follows in particular that $\mathcal C$ can be embedded in a super Tannakian category if and only if
$\mathcal C$ is fractionally closed, i.e. $\mathcal C \to \mathcal C_\mathrm{fr}$ is an isomorphism, or equivalently for
every $(h,f)$ as above we have $h = f \otimes j$ for some $j:C \to C'$.
The endomorphism ring of $\mathbf 1$ in $\mathcal C_\mathrm{fr}$ is a field which will be denoted by $\kappa(\mathcal C)$.
It contains the field of fractions of the endomorphism ring of $\mathbf 1$ in $\mathcal C$,
but is in general strictly larger.
The tensor category $(\widetilde{\mathcal C})_{\mathrm{rig}}$ has a canonical structure
of super Tannakian category over $\kappa(\mathcal C)$.
Let $k'$ be a field of characteristic $0$.
If we write $k$ for the endomorphism ring of $\mathbf 1$ in $\mathcal C$,
then for any faithful tensor functor $T$ from $\mathcal C$ to super
$k'$\nobreakdash-\hspace{0pt} vector spaces, the $k'$\nobreakdash-\hspace{0pt} linear extension of $T$ to a functor from
$k' \otimes_k \mathcal C$ is faithful if and only if $\kappa(\mathcal C)$ is the field of fractions of $k$.
Any $T$ induces a homomorphism $\rho$ from $\kappa(\mathcal C)$ to $k'$,
and if $k'$ is algebraically closed, a $T$ inducing a given $\rho$ exists (Theorem~\ref{t:Tannequiv}) and
it is unique up to tensor isomorphism (Corollary~\ref{c:fibfununique}).
The essential point in proving that \eqref{e:Tannhull} has the
required universal property is to show that $(\widetilde{\mathcal C})_{\mathrm{rig}}$ is super
Tannakian.
This will be done by showing that $\widetilde{\mathcal C}$ is a category
of modules over a transitive affine super groupoid,
with $(\widetilde{\mathcal C})_{\mathrm{rig}}$ consisting of the ones of finite type
(Theorem~\ref{t:Tannequiv}).
Consider first the case where $\mathcal C$ has a faithful tensor functor to a category
of vector spaces over a field of characteristic $0$.
Then \cite[Lemma~3.4]{O} there is a product (not necessarily finite) $G$
of general linear groups over $\mathbf Q$ such that, after tensoring with $\mathbf Q$ and
passing to the pseudo-abelian hull, $\mathcal C$ is a category of $G$\nobreakdash-\hspace{0pt} equivariant vector bundles
over an integral affine $G$\nobreakdash-\hspace{0pt} scheme $X$.
It follows from this that $\widehat{\mathcal C}$ is the category of $G$\nobreakdash-\hspace{0pt} equivariant
quasi-coherent $\mathcal O_X$\nobreakdash-\hspace{0pt} modules.
If $G = 1$, then the torsion objects of $\widehat{\mathcal C}$ are the usual torsion sheaves,
$\widetilde{\mathcal C}$ is the category of vector spaces over the function field $\kappa(\mathcal C)$ of $X$ with
$(\widetilde{\mathcal C})_{\mathrm{rig}}$ the category of finite-dimensional ones,
and \eqref{e:Tannhull} is passage to the generic fibre.
On the other hand suppose that $G$ is arbitrary but that $X$ is of finite type.
Then \eqref{e:Tannhull} is given by pullback onto the ``generic orbit'' $X_0$ of $X$.
Explicitly, $X_0$ is the intersection of the non-empty open $G$\nobreakdash-\hspace{0pt} subschemes of $X$,
with $\widetilde{\mathcal C}$ the category of $G$\nobreakdash-\hspace{0pt} equivariant quasi-coherent $\mathcal O_{X_0}$\nobreakdash-\hspace{0pt} modules,
and $(\widetilde{\mathcal C})_{\mathrm{rig}}$ the category of $G$\nobreakdash-\hspace{0pt} equivariant vector bundles
over $X_0$.
The $\mathbf Q$\nobreakdash-\hspace{0pt} algebra of invariants of $H^0(X_0,\mathcal O_{X_0})$ under $G$ is an extension $k$ of $\mathbf Q$,
and $X_0$ is a homogeneous quasi-affine $G_k$\nobreakdash-\hspace{0pt} scheme of finite type.
Then $(\widetilde{\mathcal C})_{\mathrm{rig}}$ is a Tannakian category over $k$ with
category of ind-objects $\widetilde{\mathcal C}$, and $\kappa(\mathcal C) = k$.
For arbitrary $G$ and $X$ we may write $X$ as the filtered limit $\lim_\lambda X_\lambda$
of integral affine $G$\nobreakdash-\hspace{0pt} schemes $X_\lambda$ of finite type with dominant transition morphisms,
and the generic orbits form a filtered inverse system $(X_0{}_\lambda)$ of $G$\nobreakdash-\hspace{0pt} schemes.
It is not clear whether the limit $\lim_\lambda X_0{}_\lambda$ exists, but
the generic point of $X$ lies in the inverse image of each $X_0{}_\lambda$,
and passing to the generic fibre shows that
$(\widetilde{\mathcal C})_{\mathrm{rig}}$ is the category of representations of
a transitive affine groupoid in $\kappa(\mathcal C)$\nobreakdash-\hspace{0pt} schemes,
and hence is Tannakian over $\kappa(\mathcal C)$.
In the case of an arbitrary pseudo-Tannakian
category $\mathcal C$, it is natural to modify the above by taking for
$G$ a product of super general linear groups and for $X$ an appropriate
affine super $G$\nobreakdash-\hspace{0pt} scheme.
In this case we no longer have an explicit description of $\mathcal C$ as a category of
equivariant vector bundles or of $\widehat{\mathcal C}$ as a category of equivariant sheaves.
It can still however be shown (Theorem~\ref{t:Ftildeequiv})
that $\widetilde{\mathcal C}$ is a category of equivariant sheaves modulo torsion.
It is then possible to argue as above.
The paper is organised as follows.
After recalling some notation and terminology in Section~\ref{s:prelim},
the fractional closure of a tensor category is defined in Section~\ref{s:frac}.
Sections~\ref{s:rep} and \ref{s:free} deal with the connection between free rigid tensor categories
and categories of representations of super general linear groups over a field
of characteristic $0$.
In Sections~\ref{s:fun}--\ref{s:mod} the definitions and basic properties of the
categories $\widehat{\mathcal C}$ and their quotients $\widetilde{\mathcal C}$ modulo torsion are given,
culminating with the fact that for $\mathcal C$ pseudo-Tannakian, $\widetilde{\mathcal C}$ is a
category of equivariant sheaves modulo torsion over some super scheme.
Such categories of equivariant sheaves are studied in Section~\ref{s:equ},
and it is shown that modulo torsion they are categories of modules over
transitive affine super groupoids.
This result is applied in Section~\ref{s:supTann} to prove the existence
and basic properties of super Tannakian hulls.
Applications to motives and algebraic cycles will be given in a separate paper.
\section{Preliminaries}\label{s:prelim}
In this section we fix some notation and terminology for tensor categories and for
super groups and their representations.
By an \emph{additive} category we mean a category enriched over the category
$\mathrm{Ab}$ of abelian groups.
It is not assumed that direct sums or a zero object exist.
A \emph{tensor category} is an additive category equipped with a structure of symmetric
monoidal category for which the tensor product is bilinear.
A \emph{tensor functor} is an additive strong symmetric monoidal functor.
A monoidal natural isomorphism will also be called a tensor isomorphism.
It may always be assumed that the units of tensor categories are strict, and that they are strictly preserved
by tensor functors.
Let $k$ be a commutative ring.
By a \emph{$k$\nobreakdash-\hspace{0pt} linear category} we mean a category enriched over the category of $k$\nobreakdash-\hspace{0pt} modules.
If $\mathcal A$ is a cocomplete $k$\nobreakdash-\hspace{0pt} linear category and $V$ is a $k$\nobreakdash-\hspace{0pt} module, we have
for every object $M$ of $\mathcal A$ an object
\begin{equation*}
V \otimes_k M
\end{equation*}
of $\mathcal A$ which represents the functor $\Hom_k(V,\Hom_{\mathcal A}(M,-))$.
Its formation commutes with colimits of $k$\nobreakdash-\hspace{0pt} modules and in $\mathcal A$, and when $V = k$ it coincides
with $M$.
By a \emph{$k$\nobreakdash-\hspace{0pt} tensor category} we mean a $k$\nobreakdash-\hspace{0pt} linear category with a structure of symmetric monoidal
category with bilinear tensor product.
A $k$\nobreakdash-\hspace{0pt} tensor category is thus a tensor category for which $\End(\mathbf 1)$ has a structure of
$k$\nobreakdash-\hspace{0pt} algebra.
Similarly we define $k$\nobreakdash-\hspace{0pt} tensor functors.
Let $\mathcal A$ and $\mathcal A'$ be tensor categories.
As well as tensor functors from $\mathcal A$ to $\mathcal A'$, it will sometimes be necessary to consider
more generally lax tensor functors, which are defined as additive symmetric monoidal functors.
Explicitly, a \emph{lax tensor functor} from $\mathcal A$ to $\mathcal A'$ is an additive functor $H:\mathcal A \to \mathcal A'$ together
with a morphism $\mathbf 1 \to H(\mathbf 1)$ and morphisms
\begin{equation*}
H(M) \otimes H(N) \to H(M \otimes N),
\end{equation*}
natural in $M$ and $N$, which satisfy conditions similar to those for a tensor functor.
Any right adjoint to a tensor functor has a unique structure of lax tensor functor
for which the unit and counit of the adjunction are compatible with the tensor
and lax tensor structures.
By a \emph{tensor equivalence} we mean a tensor functor $\mathcal A \to \mathcal A'$ which has a quasi-inverse
$\mathcal A' \to \mathcal A$ in the $2$\nobreakdash-\hspace{0pt} category of tensor categories, tensor functors and tensor isomorphisms.
It is equivalent to require that the underlying additive functor be an equivalence.
When a lax tensor functor $H:\mathcal A \to \mathcal A'$ is said to be a tensor equivalence, this will always mean
that $H$ is a tensor functor which is an equivalence in the above sense.
It is in general \emph{not} sufficient for this merely that the underlying additive functor
of $H$ be an equivalence.
A \emph{dual} of an object $M$ in a tensor category $\mathcal A$ is an object $M^\vee$ of $\mathcal A$
together with a unit $\mathbf 1 \to M^\vee \otimes M$ and a counit $M \otimes M^\vee \to \mathbf 1$ satisfying
the usual triangular identities.
If such a dual exists it is unique up to unique isomorphism, and $M$ will be said to be dualisable.
Any tensor functor preserves dualisable objects.
For $M$ dualisable, the trace in $\End(\mathbf 1)$ of an endomorphism of $M$ is defined,
as is for example the contraction $L \to N$ of a morphism from $L \otimes M$ to $N \otimes M$.
We say that $\mathcal A$ is \emph{rigid} if every object of $\mathcal A$ is dualisable.
The full subcategory
\begin{equation*}
\mathcal A_\mathrm{rig}
\end{equation*}
of $\mathcal A$ consisting of the dualisable objects is a rigid tensor subcategory of $\mathcal A$ which
is closed under the formation of direct sums and direct summands.
We define in the usual way algebras and commutative algebras in a tensor category $\mathcal A$, and modules
over such algebras.
Suppose that $\mathcal A$ is abelian, with $\otimes$ right exact.
Let $R$ be a commutative algebra in $\mathcal A$.
Define the \emph{tensor product $M \otimes_R N$} of $R$\nobreakdash-\hspace{0pt} modules $M$ and $N$ in $\mathcal A$ as
the target of the universal morphism from $M \otimes N$ to an $R$\nobreakdash-\hspace{0pt} module in $\mathcal A$ which is an $R$\nobreakdash-\hspace{0pt} module
morphism for the actions of $R$ on $M \otimes N$ through either $M$ or $N$.
Explicitly, $M \otimes_R N$ is the coequaliser of the two morphisms from $M \otimes R \otimes N$ to $M \otimes N$
defined by the actions of $R$ on $M$ and $N$.
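In diagrammatic form, this coequaliser presentation reads
\begin{equation*}
M \otimes R \otimes N \rightrightarrows M \otimes N \to M \otimes_R N,
\end{equation*}
exactly as for modules over a commutative ring, with the two parallel morphisms induced by
the action of $R$ on $M$ and on $N$ respectively.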
We then have a structure of abelian tensor category on the category
\begin{equation*}
\MOD_{\mathcal A}(R)
\end{equation*}
of $R$\nobreakdash-\hspace{0pt} modules in $\mathcal A$, with unit $R$, tensor product $\otimes_R$, and constraints defined by the
universal property.
As usual we assume the tensor product is chosen so that the unit $R$ is strict.
We write
\begin{equation*}
\Mod_{\mathcal A}(R)
\end{equation*}
for the full rigid tensor subcategory $\MOD_{\mathcal A}(R)_{\mathrm{rig}}$ of $\MOD_{\mathcal A}(R)$.
Let $\mathcal A$ be an abelian tensor category with right exact tensor product,
and $R$ be a commutative algebra in $\mathcal A$.
Then we have a tensor functor $R \otimes -$ from $\mathcal A$ to $\MOD_{\mathcal A}(R)$.
If $\mathcal A'$ is an abelian tensor category with right exact tensor product,
then any lax tensor functor (resp.\ right exact tensor functor) $H$ from $\mathcal A$ to $\mathcal A'$
induces a lax tensor functor (resp.\ right exact tensor functor)
from $\MOD_{\mathcal A}(R)$ to $\MOD_{\mathcal A'}(H(R))$.
Let $k$ be a field of characteristic $0$.
The $k$\nobreakdash-\hspace{0pt} tensor category of super $k$\nobreakdash-\hspace{0pt} vector spaces will be written
\begin{equation*}
\MOD(k).
\end{equation*}
The full rigid tensor subcategory $\MOD(k)_\mathrm{rig}$ consists of finite-dimensional
super $k$\nobreakdash-\hspace{0pt} vector spaces, and will be written $\Mod(k)$.
If $X$ is a super $k$\nobreakdash-\hspace{0pt} scheme, we write
\begin{equation*}
\MOD(X)
\end{equation*}
for the $k$\nobreakdash-\hspace{0pt} tensor category of
quasi-coherent $\mathcal O_X$\nobreakdash-\hspace{0pt} modules.
The full rigid tensor subcategory $\MOD(X)_\mathrm{rig}$ of $\MOD(X)$ consists of
the vector bundles over $X$, i.e.\ the $\mathcal O_X$\nobreakdash-\hspace{0pt} modules locally isomorphic to $\mathcal O_X{}\!^{m|n}$,
and will be written $\Mod(X)$.
We denote by $\iota$ the canonical automorphism of order $2$ of the identity functor of the category
of super $k$\nobreakdash-\hspace{0pt} schemes.
If $X$ is a super $k$\nobreakdash-\hspace{0pt} scheme and $\mathcal V$ is a quasi-coherent $\mathcal O_X$\nobreakdash-\hspace{0pt} module, we denote by
$\iota_{\mathcal V}$ the canonical automorphism of $\mathcal V$ above $\iota_X$.
A super monoid scheme over $k$ will also be called a \emph{super $k$\nobreakdash-\hspace{0pt} monoid}.
By a \emph{super $k$\nobreakdash-\hspace{0pt} monoid with involution} we mean a pair $(M,\varepsilon)$ with $M$ a super
$k$\nobreakdash-\hspace{0pt} monoid and $\varepsilon$ a $k$\nobreakdash-\hspace{0pt} point of $M$ with $\varepsilon^2 = 1$
such that conjugation by $\varepsilon$ is $\iota_M$.
If $M$ is a super group scheme over $k$ we also speak of a \emph{super $k$\nobreakdash-\hspace{0pt} group} and
a \emph{super $k$\nobreakdash-\hspace{0pt} group with involution}.
Let $(M,\varepsilon)$ be a super $k$\nobreakdash-\hspace{0pt} monoid with involution.
By an \emph{$(M,\varepsilon)$\nobreakdash-\hspace{0pt} module} we mean an $M$\nobreakdash-\hspace{0pt} module $V$ for which
$\varepsilon$ acts as $(-1)^i$ on the summand $V_i$ of degree $i$,
and by a \emph{representation of $(M,\varepsilon)$} we mean a finite-dimensional
$(M,\varepsilon)$\nobreakdash-\hspace{0pt} module.
Every $M$\nobreakdash-\hspace{0pt} submodule and $M$\nobreakdash-\hspace{0pt} quotient module of an $(M,\varepsilon)$\nobreakdash-\hspace{0pt} module is an
$(M,\varepsilon)$\nobreakdash-\hspace{0pt} module.
Let $(G,\varepsilon)$ be a super $k$\nobreakdash-\hspace{0pt} group with involution.
The $k$\nobreakdash-\hspace{0pt} tensor category of $(G,\varepsilon)$\nobreakdash-\hspace{0pt} modules will be written as
\begin{equation*}
\MOD_{G,\varepsilon}(k).
\end{equation*}
The full rigid $k$\nobreakdash-\hspace{0pt} tensor subcategory $\MOD_{G,\varepsilon}(k)_\mathrm{rig}$ consists of the
representations of $(G,\varepsilon)$, and will be written as $\Mod_{G,\varepsilon}(k)$.
By a \emph{super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme} we mean a super $k$\nobreakdash-\hspace{0pt} scheme $X$
equipped with an action of $G$ such that $\varepsilon$ acts on $X$ as $\iota_X$.
If $X$ is a super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme, we write
\begin{equation*}
\MOD_{G,\varepsilon}(X)
\end{equation*}
for the $k$\nobreakdash-\hspace{0pt} tensor category of
$(G,\varepsilon)$\nobreakdash-\hspace{0pt} equivariant quasi-coherent $\mathcal O_X$\nobreakdash-\hspace{0pt} modules, i.e.\ those
$G$\nobreakdash-\hspace{0pt} equivariant quasi-coherent $\mathcal O_X$\nobreakdash-\hspace{0pt} modules $\mathcal V$ for which
$\varepsilon$ acts as $\iota_{\mathcal V}$.
The full rigid $k$\nobreakdash-\hspace{0pt} tensor subcategory $\MOD_{G,\varepsilon}(X)_\mathrm{rig}$ consists
of the $\mathcal V$ whose underlying $\mathcal O_X$\nobreakdash-\hspace{0pt} module is a vector bundle, and will be written
$\Mod_{G,\varepsilon}(X)$.
An algebra in $\MOD_{G,\varepsilon}(k)$ will also be called a \emph{$(G,\varepsilon)$\nobreakdash-\hspace{0pt} algebra}.
If $R$ is a commutative $(G,\varepsilon)$\nobreakdash-\hspace{0pt} algebra then $X = \Spec(R)$ is a
$(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme, and
\begin{equation*}
\MOD_{G,\varepsilon}(X) = \MOD_{\mathcal A}(R)
\end{equation*}
with $\mathcal A = \MOD_{G,\varepsilon}(k)$.
The full subcategory of the category of super $k$\nobreakdash-\hspace{0pt} schemes consisting of the
reduced $k$\nobreakdash-\hspace{0pt} schemes is coreflective, and the coreflector $X \mapsto X_\mathrm{red}$
preserves finite products.
Thus $X \mapsto X_\mathrm{red}$ sends super $k$\nobreakdash-\hspace{0pt} groups to $k$\nobreakdash-\hspace{0pt} groups,
and actions of a super $k$\nobreakdash-\hspace{0pt} group on a super $k$\nobreakdash-\hspace{0pt} scheme to actions
of a $k$\nobreakdash-\hspace{0pt} group on a $k$\nobreakdash-\hspace{0pt} scheme.
\section{Fractional closures}\label{s:frac}
In this section we define the fractional closure of a tensor category.
For tensor categories with one object,
identified with commutative rings, the fractional closure is the
total ring of fractions.
Let $\mathcal C$ be a tensor category.
A morphism $f$ in $\mathcal C$ will be called \emph{regular} if
$f \otimes g = 0$ implies $g = 0$ for every morphism $g$ in $\mathcal C$.
A morphism in $\mathcal C$ is regular if and only if its image in the pseudo-abelian hull of $\mathcal C$ is regular.
We say that $\mathcal C$ is \emph{integral} if $1_{\mathbf 1} \ne 0$ in $\mathcal C$ and the tensor product of two non-zero
morphisms in $\mathcal C$ is non-zero.
Equivalently $\mathcal C$ is integral if $1_{\mathbf 1} \ne 0$ in $\mathcal C$ and every non-zero morphism of
$\mathcal C$ is regular.
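For orientation, consider the case where $\mathcal C$ has a single object $\mathbf 1$ and is identified
with the commutative ring $R = \End(\mathbf 1)$, so that $\otimes$ on morphisms is the multiplication of $R$.
Then
\begin{equation*}
f \ \text{regular in} \ \mathcal C
\iff (fg = 0 \implies g = 0 \ \text{in} \ R)
\iff f \ \text{is a non-zero-divisor},
\end{equation*}
and $\mathcal C$ is integral if and only if $R$ is an integral domain.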
A tensor functor will be called \emph{regular} if it preserves regular morphisms.
Any faithful tensor functor reflects regular morphisms.
A tensor functor between integral tensor categories is regular if and only if it is faithful.
The embedding of a tensor category into its pseudo-abelian hull is regular.
If direct sums exist in $\mathcal C$, or if $\mathcal C$ is integral,
then a tensor functor $\mathcal C \to \mathcal C'$ is regular if and only if the tensor functor
it induces on pseudo-abelian hulls is regular.
Given a regular morphism $f:A \to A'$ and objects $C$ and $C'$ in $\mathcal C$,
denote by
\begin{equation*}
\mathcal C_f(C,C')
\end{equation*}
the subgroup of the hom-group $\mathcal C(A \otimes C,A' \otimes C')$ consisting of those
morphisms $h$ for which the square
\begin{equation}\label{e:propdef}
\begin{gathered}
\xymatrix{
A \otimes (A \otimes C) \ar_{f \otimes h}[d] \ar_{\sigma_{AAC}}^{\sim}[r] &
A \otimes (A \otimes C) \ar^{f \otimes h}[d] \\
A' \otimes (A' \otimes C') \ar_{\sigma_{A'A'C'}}^{\sim}[r] & A' \otimes (A' \otimes C')
}
\end{gathered}
\end{equation}
commutes, where we write
\begin{equation*}
\sigma_{ABC}:A \otimes (B \otimes C) \xrightarrow{\sim} B \otimes (A \otimes C)
\end{equation*}
for the symmetry.
We have
\begin{equation*}
\mathcal C_{1_{\mathbf 1}}(C,C') = \mathcal C(C,C')
\end{equation*}
for every $C$ and $C'$.
A morphism $f$ in $\mathcal C$ will be called \emph{strongly regular} if it is regular and
the embedding
\begin{equation}\label{e:fracclose}
f \otimes -:\mathcal C(C,C') \to \mathcal C_f(C,C')
\end{equation}
is an isomorphism for every pair of objects $C$ and $C'$ in $\mathcal C$.
A morphism in $\mathcal C$ is strongly regular if and only if its image in the pseudo-abelian hull
of $\mathcal C$ is strongly regular.
We say that $\mathcal C$ is \emph{fractionally closed} if every regular morphism in $\mathcal C$
is strongly regular.
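In the one-object case where $\mathcal C$ is a commutative ring $R$, the square \eqref{e:propdef}
commutes automatically, so that $\mathcal C_f(\mathbf 1,\mathbf 1) = R$ for every non-zero-divisor $f$,
and \eqref{e:fracclose} is the multiplication map
\begin{equation*}
f \cdot - : R \to R,
\end{equation*}
which is bijective exactly when $f$ is invertible.
Thus $R$, regarded as a tensor category, is fractionally closed if and only if every non-zero-divisor
of $R$ is invertible, i.e.\ $R$ coincides with its total ring of fractions.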
Let $C$ and $C'$ be objects of $\mathcal C$.
We define an equivalence relation on the set of pairs $(h,f)$ with
$f$ a regular morphism in $\mathcal C$ and $h$ in $\mathcal C_f(C,C')$ by calling
$(h,f)$ and $(l,g)$ with $f:A \to A'$ and $g:B \to B'$ equivalent if
the square
\begin{equation}\label{e:propequiv}
\begin{gathered}
\xymatrix{
B \otimes (A \otimes C) \ar_{g \otimes h}[d] \ar_{\sigma_{BAC}}^{\sim}[r] &
A \otimes (B \otimes C) \ar^{f \otimes l}[d] \\
B' \otimes (A' \otimes C') \ar_{\sigma_{B'A'C'}}^{\sim}[r] & A' \otimes (B' \otimes C')
}
\end{gathered}
\end{equation}
commutes.
That this relation is reflexive is clear from \eqref{e:propdef},
that it is symmetric from the fact that $\sigma_{ABC}$ is the inverse of $\sigma_{BAC}$,
and that it is transitive can be seen as follows:
given $(h_i,f_i)$ for $i = 1,2,3$, if we write $\mathcal D_i$ for the square expressing the equivalence
of the two $(h_j,f_j)$ for $j \ne i$, then $f_2 \otimes \mathcal D_2$ can be obtained by combining
$f_1 \otimes \mathcal D_1$, $f_3 \otimes \mathcal D_3$, and three commutative squares expressing the naturality
of the symmetries.
For a given regular $f$, the pairs $(h_1,f)$ and $(h_2,f)$ are equivalent if and only if $h_1 = h_2$.
Let $f:A \to A'$ be regular in $\mathcal C$.
If $j:B \to A$ and $j':A' \to B'$ are morphisms in $\mathcal C$ with $j' \circ f \circ j$
(and hence $j$ and $j'$) regular, then we have a homomorphism
\begin{equation}\label{e:propcomp}
\mathcal C_f(C,C') \to \mathcal C_{j' \circ f \circ j}(C,C')
\end{equation}
which sends $h$ to $(j' \otimes C') \circ h \circ (j \otimes C)$.
If $l:D \to D'$ is a regular morphism in $\mathcal C$, we have a homomorphism
\begin{equation}\label{e:proptens}
\mathcal C_f(C,C') \to \mathcal C_{l \otimes f}(C,C')
\end{equation}
which, modulo the appropriate associativities, sends $h$ to $l \otimes h$.
The image of $h$ under \eqref{e:propcomp} is the unique element $m$ of $\mathcal C_{j' \circ f \circ j}(C,C')$
with $(m,j' \circ f \circ j)$ equivalent to $(h,f)$,
and similarly for \eqref{e:proptens}.
Define a structure of preorder on the set $\Reg(\mathcal C)$ of regular morphisms of $\mathcal C$
by writing $f \le g$ for $f:A \to A'$ and $g:B \to B'$ if
there exist (necessarily regular) morphisms $l:D \to D'$, $j:B \to D \otimes A$ and
$j':D' \otimes A' \to B'$ in $\mathcal C$ such that
\begin{equation*}
g = j' \circ (l \otimes f) \circ j.
\end{equation*}
The preorder $\Reg(\mathcal C)$ is filtered: given elements $f_1$ and $f_2$, both $f_1$ and
(taking symmetries for $j$ and $j'$) $f_2$ satisfy $f_i \le f_2 \otimes f_1$.
It is essentially small if $\mathcal C$ is.
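In the one-object case where $\mathcal C$ is a commutative ring $R$, the relation $f \le g$ holds
exactly when $g = j'lfj$ for some $j$, $j'$, $l$ in $R$, i.e.\ when $f$ divides $g$; the colimit
\eqref{e:frachom} below then recovers the classical presentation of the total ring of fractions
of $R$ as the filtered colimit
\begin{equation*}
\colim_{f} R, \qquad \text{with transition map} \ l \cdot - : R \to R \ \text{for} \ g = lf,
\end{equation*}
indexed by the non-zero-divisors $f$ of $R$ ordered by divisibility.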
A tensor functor $\mathcal C \to \mathcal C'$ is regular if it sends every element
of some cofinal subset of $\Reg(\mathcal C)$ to a regular morphism in $\mathcal C'$.
Any regular tensor functor $\mathcal C \to \mathcal C'$ induces an order-preserving map
$\Reg(\mathcal C) \to \Reg(\mathcal C')$.
For $f \le g$ in $\Reg(\mathcal C)$ and objects $C$ and $C'$ of $\mathcal C$, we have by
\eqref{e:propcomp} and \eqref{e:proptens} a homomorphism
\begin{equation}\label{e:proptrans}
\mathcal C_f(C,C') \to \mathcal C_g(C,C'),
\end{equation}
necessarily injective, which sends $h$ in $\mathcal C_f(C,C')$ to the unique $m$ in
$\mathcal C_g(C,C')$ with $(m,g)$ equivalent to $(h,f)$.
We thus have a filtered system $(\mathcal C_f(C,C'))_{f \in \Reg(\mathcal C)}$.
If $f \le g$ in $\Reg(\mathcal C)$ and $g$ is strongly regular then $f$ is strongly regular.
Suppose that either finite direct sums exist in $\mathcal C$, or that $\mathcal C$ is integral.
Then the embedding $\mathcal C \to \mathcal C'$ of $\mathcal C$ into its pseudo-abelian hull $\mathcal C'$
induces a cofinal map $\Reg(\mathcal C) \to \Reg(\mathcal C')$.
Thus $\mathcal C$ is fractionally closed if and only if its pseudo-abelian hull is.
Suppose now that $\mathcal C$ is essentially small.
We define as follows a fractionally closed tensor category $\mathcal C_\mathrm{fr}$,
the \emph{fractional closure of $\mathcal C$}, together with a faithful, regular, strict tensor functor
\begin{equation*}
E_{\mathcal C}:\mathcal C \to \mathcal C_\mathrm{fr}.
\end{equation*}
The objects of $\mathcal C_\mathrm{fr}$ are those of $\mathcal C$, and the hom groups
are the filtered colimits
\begin{equation}\label{e:frachom}
\mathcal C_\mathrm{fr}(C,C') = \colim_{f \in \Reg(\mathcal C)} \mathcal C_f(C,C'),
\end{equation}
which exist because $\Reg(\mathcal C)$ is essentially small.
Thus $\mathcal C_\mathrm{fr}(C,C')$ is the set of equivalence classes of pairs $(h,f)$
with $f$ a regular morphism in $\mathcal C$ and $h$ an element of $\mathcal C_f(C,C')$.
We write $h/f$ for the class of $(h,f)$.
The identity in $\mathcal C_\mathrm{fr}(C,C)$ is $1_C/1_{\mathbf 1}$.
The composite of $h'/f'$ in $\mathcal C_\mathrm{fr}(C',C'')$ and $h/f$ in $\mathcal C_\mathrm{fr}(C,C')$ is defined as
$(h'{}\!_1 \circ h_1)/(f'{}\!_1 \circ f_1)$ for any $(h_1,f_1)$ equivalent to $(h,f)$
and $(h'{}\!_1,f'{}\!_1)$ equivalent to $(h',f')$ with $f'{}\!_1$ and $f_1$ composable:
for $f:A \to A'$ and $f':B' \to B''$ we may take for example $f_1 = B' \otimes f$ and
$f'{}\!_1 = f' \otimes A'$.
The bilinearity, identity and associativity properties of the composition in $\mathcal C_\mathrm{fr}$
follow from those for $\mathcal C$.
The functor $E_{\mathcal C}$ is the identity on objects, and on the hom group
$\mathcal C(C,C')$ it is the coprojection of \eqref{e:frachom} at $f = 1_{\mathbf 1}$, so that
\begin{equation*}
E_{\mathcal C}(j) = j/1_{\mathbf 1}
\end{equation*}
for any $j:C \to C'$.
The unit and tensor product of objects of $\mathcal C_\mathrm{fr}$ are those of $\mathcal C$,
and for $j$ in $\mathcal C_g(D,D')$ and $h$ in $\mathcal C_f(C,C')$ with $g:B \to B'$ and $f:A \to A'$,
the tensor product $(j/g) \otimes (h/f)$ is $l/(g \otimes f)$ with $l$ defined by
the commutative square
\begin{equation*}
\xymatrix{
(B \otimes D) \otimes (A \otimes C) \ar_{j \otimes h}[d] \ar^{\sim}_{\sigma_{BDAC}}[r] &
(B \otimes A) \otimes (D \otimes C) \ar^l[d] \\
(B' \otimes D') \otimes (A' \otimes C') \ar^{\sim}_{\sigma_{B'D'A'C'}}[r] &
(B' \otimes A') \otimes (D' \otimes C')
}
\end{equation*}
where $\sigma_{ABCD}:(A \otimes B) \otimes (C \otimes D) \xrightarrow{\sim} (A \otimes C) \otimes (B \otimes D)$
is the symmetry.
The associativities and symmetries of $\mathcal C_\mathrm{fr}$ are the images under $E_{\mathcal C}$
of those of $\mathcal C$, and the naturality and required compatibilities follow from those in $\mathcal C$.
For $f$ regular in $\mathcal C$, a morphism $h/f$ in $\mathcal C_\mathrm{fr}$ is regular if and only if
$h$ is regular in $\mathcal C$.
In particular $E_{\mathcal C}$ is regular.
To prove that $\mathcal C_\mathrm{fr}$ is fractionally closed, let $l:A \to A'$ and $g:B \to B'$
be regular morphisms in $\mathcal C$, and
\begin{equation*}
h:B \otimes (A \otimes C) \to B' \otimes (A' \otimes C')
\end{equation*}
be a morphism in $\mathcal C_g(A \otimes C,A' \otimes C')$ such that $h/g$ in
$\mathcal C_\mathrm{fr}(A \otimes C,A' \otimes C')$ lies in $(\mathcal C_\mathrm{fr})_{l/1_{\mathbf 1}}(C,C')$.
Then the morphism
\begin{equation*}
h':(B \otimes A) \otimes C \to (B' \otimes A') \otimes C'
\end{equation*}
that coincides modulo associativities with $h$ lies in $\mathcal C_{g \otimes l}(C,C')$, and
\begin{equation}\label{e:hgl}
h/g = (l/1_{\mathbf 1}) \otimes (h'/(g \otimes l))
\end{equation}
in $\mathcal C_\mathrm{fr}(A \otimes C,A' \otimes C')$.
If $g = 1_{\mathbf 1}$ then $h = h'$, and \eqref{e:hgl} becomes
\begin{equation*}
h/1_{\mathbf 1} = (l/1_{\mathbf 1}) \otimes (h/l).
\end{equation*}
The $h/1_{\mathbf 1}$ for $h$ in $\Reg(\mathcal C)$ are thus cofinal in $\Reg(\mathcal C_\mathrm{fr})$.
Hence it is enough to prove that $l/1_{\mathbf 1}$ is strongly regular in $\mathcal C_\mathrm{fr}$
for every $l$ in $\Reg(\mathcal C)$, which follows from \eqref{e:hgl}.
It is clear from the construction that $\mathcal C$ is fractionally closed
if and only if $E_{\mathcal C}$ is fully faithful if and only if $E_\mathcal C$ is an isomorphism
of tensor categories.
If $\mathcal C$ is integral, then $\mathcal C_\mathrm{fr}$ is integral.
Let $T:\mathcal C \to \mathcal D$ be a regular tensor functor.
If a tensor functor $T_1:\mathcal C_\mathrm{fr} \to \mathcal D$ with $T = T_1E_{\mathcal C}$ exists, it is unique,
and $T_1$ is then regular.
Explicitly such a $T_1$ coincides with $T$ on objects, the tensor structural isomorphisms
of $T_1$ are those of $T$, and if $f:A \to A'$ is regular in $\mathcal C$
and $h$ is in $\mathcal C_f(C,C')$, then the morphism
\begin{equation*}
T(h)':T(A) \otimes T(C) \to T(A') \otimes T(C')
\end{equation*}
in $\mathcal D$ that coincides modulo the tensor structural isomorphisms of $T$ with $T(h)$
lies in $\mathcal D_{T(f)}(T(C),T(C'))$, and $T_1(h/f)$ is the unique morphism with
\begin{equation}\label{e:Tfactor}
T(f) \otimes T_1(h/f) = T(h)'.
\end{equation}
Such a $T_1$ exists if $\mathcal D$ is fractionally closed, and more generally
if $T$ sends regular morphisms in $\mathcal C$ to strongly regular morphisms in $\mathcal D$.
If $T$ and $T'$ are regular tensor functors $\mathcal C \to \mathcal D$ with $T = T_1E_{\mathcal C}$
and $T' = T'{}\!_1E_{\mathcal C}$,
then any tensor isomorphism $T \xrightarrow{\sim} T'$ is at the same time a tensor isomorphism
$T_1 \xrightarrow{\sim} T'{}\!_1$.
Let $\mathcal C'$ be an essentially small tensor category.
Then for any regular tensor functor $T:\mathcal C \to \mathcal C'$ we have $E_{\mathcal C'}T = T_\mathrm{fr}E_{\mathcal C}$
for a unique tensor functor
\begin{equation}\label{e:FrT}
T_\mathrm{fr}:\mathcal C_\mathrm{fr} \to \mathcal C'{}\!_\mathrm{fr},
\end{equation}
and $T_\mathrm{fr}$ is regular.
If $\varphi:T \xrightarrow{\sim} T'$ is a tensor isomorphism, there is a unique tensor isomorphism
$\varphi_\mathrm{fr}:T_\mathrm{fr} \xrightarrow{\sim} T'{}\!_\mathrm{fr}$ with
$E_{\mathcal C'}\varphi = \varphi_\mathrm{fr}E_{\mathcal C}$.
If $T$ is faithful then $T_\mathrm{fr}$ is faithful, but in general $T$ fully faithful
does not imply $T_\mathrm{fr}$ fully faithful.
However if either finite direct sums exist in $\mathcal C$, or $\mathcal C$ is integral,
then $T_\mathrm{fr}$ is fully faithful for $T:\mathcal C \to \mathcal C'$ the embedding of $\mathcal C$ into
its pseudo-abelian hull, because $T$ then induces a cofinal map $\Reg(\mathcal C) \to \Reg(\mathcal C')$.
The commutative endomorphism ring $\mathcal C_\mathrm{fr}(\mathbf 1,\mathbf 1)$ will be important in what follows.
Explicitly, $\mathcal C_f(\mathbf 1,\mathbf 1)$ for $f:A \to A'$ regular is the subgroup of $\mathcal C(A,A')$
consisting of those $h:A \to A'$ for which
\begin{equation*}
h \otimes f = f \otimes h,
\end{equation*}
and $h/f = l/g$ in $\mathcal C_\mathrm{fr}(\mathbf 1,\mathbf 1)$ when
\begin{equation*}
h \otimes g = f \otimes l.
\end{equation*}
The addition in $\mathcal C_\mathrm{fr}(\mathbf 1,\mathbf 1)$ is given by
\begin{equation*}
h/f + h'/f' = (h \otimes f' + f \otimes h')/(f \otimes f')
\end{equation*}
and the product by
\begin{equation*}
(h/f)(h'/f') = (h \otimes h')/(f \otimes f').
\end{equation*}
If $h$ is regular, then $h/f$ in $\mathcal C_\mathrm{fr}(\mathbf 1,\mathbf 1)$ is invertible, with inverse $f/h$.
When $\mathcal C$ is an integral tensor category, $\mathcal C_\mathrm{fr}(\mathbf 1,\mathbf 1)$ is a field, and we then also write
\begin{equation*}
\kappa(\mathcal C)
\end{equation*}
for $\mathcal C_\mathrm{fr}(\mathbf 1,\mathbf 1)$.
Suppose that $\mathcal C$ is rigid.
If $f:A \to A'$ and $g:A'{}^\vee \otimes A \to \mathbf 1$ correspond to one another by duality,
then explicitly $g$ is obtained from
$f$ by composing the counit $A'{}^\vee \otimes A' \to \mathbf 1$ with $A'{}^\vee \otimes f$,
and modulo the appropriate associativity $f$ is obtained from $g$ by composing
$A' \otimes g$ with the tensor product of the unit $\mathbf 1 \to A' \otimes A'{}^\vee$ and $A$.
Thus $g$ is regular if and only if $f$ is, and $f$ and $g$ are then isomorphic in $\Reg(\mathcal C)$.
Hence by \eqref{e:proptrans} every morphism
in $\mathcal C_\mathrm{fr}$ can be written in the form $m/g$ for some regular $g:B \to \mathbf 1$,
and similarly in the form $m'/g'$ for some regular $g':\mathbf 1 \to B'$.
If $f$ is regular in $\mathcal C$, we have an isomorphism
\begin{equation*}
\mathcal C_f(C,C') \xrightarrow{\sim} \mathcal C_f(\mathbf 1,C' \otimes C^\vee)
\end{equation*}
for every $C$ and $C'$ in $\mathcal C$,
defined using the unit $\mathbf 1 \to C \otimes C^\vee$, and with inverse defined using the
counit $C^\vee \otimes C \to \mathbf 1$.
Such isomorphisms are compatible with homomorphisms of the form
\eqref{e:fracclose} and \eqref{e:proptrans}.
In particular $\mathcal C$ is fractionally closed if and only if the homomorphism
\begin{equation}\label{e:fraccloseI}
f \otimes - = - \circ f:\mathcal C(\mathbf 1,D) \to \mathcal C_f(\mathbf 1,D)
\end{equation}
is an isomorphism for every regular $f:A \to \mathbf 1$ and $D$ in $\mathcal C$.
\begin{lem}\label{l:abfracclose}
Every abelian rigid tensor category is fractionally closed.
\end{lem}
\begin{proof}
Let $f:A \to \mathbf 1$ be a regular morphism in an abelian rigid tensor category $\mathcal C$.
Then $f$ is an epimorphism in $\mathcal C$, because $f \otimes l = l \circ f$ for every
$l:\mathbf 1 \to D$ in $\mathcal C$.
If $h:A \to D$ lies in $\mathcal C_f(\mathbf 1,D)$ and $i:\Ker f \to A$ is the embedding, then
\begin{equation*}
(h \circ i) \otimes f = (h \otimes f) \circ (i \otimes A) =
(f \otimes h) \circ (i \otimes A) = 0.
\end{equation*}
Thus $h \circ i = 0$, so that $h = l \circ f$ for some $l:\mathbf 1 \to D$.
Hence \eqref{e:fraccloseI} is an isomorphism.
\end{proof}
Let $k$ be a commutative ring
and $\mathcal C_1$ and $\mathcal C_2$ be $k$\nobreakdash-\hspace{0pt} tensor categories.
For $i = 1,2$ we have a canonical $k$\nobreakdash-\hspace{0pt} tensor functor $I_i:\mathcal C_i \to \mathcal C_1 \otimes_k \mathcal C_2$
where $I_1$ sends $M_1$ to $(M_1,\mathbf 1)$ and $I_2$ sends $M_2$ to $(\mathbf 1,M_2)$.
If $\End_{\mathcal C_2}(\mathbf 1) = k$, then $I_1$ is fully faithful.
Given a $k$\nobreakdash-\hspace{0pt} tensor functor $T_i:\mathcal C_i \to \mathcal C$ for $i = 1,2$, there is a $k$\nobreakdash-\hspace{0pt} tensor
functor $T:\mathcal C_1 \otimes_k \mathcal C_2 \to \mathcal C$ with $TI_i = T_i$, and such a $T$
is unique up to unique tensor isomorphism $\varphi$ with $\varphi I_i$ the identity of $T_i$.
Indeed we may take for $T$ the composite of $\otimes:\mathcal C \otimes_k \mathcal C \to \mathcal C$ with $T_1 \otimes_k T_2$.
Similar considerations apply for the tensor product of any finite number of categories.
\begin{lem}\label{l:extfaith}
Let $k$ be a field, $\mathcal C$ and $\mathcal C'$ be integral $k$\nobreakdash-\hspace{0pt} tensor categories with $\mathcal C$
essentially small, and $T:\mathcal C \to \mathcal C'$ be a faithful $k$\nobreakdash-\hspace{0pt} tensor functor.
Suppose that $\kappa(\mathcal C) = k$.
Then the $k$\nobreakdash-\hspace{0pt} tensor functor $\mathcal C' \otimes_k \mathcal C \to \mathcal C'$
defined by $\Id_{\mathcal C'}$ and $T$ is faithful.
\end{lem}
\begin{proof}
Let $f'{}\!_1,f'{}\!_2,\dots,f'{}\!_n$ be elements of some hom-space of $\mathcal C'$ which are
linearly independent over $k$.
It is to be shown that if
\begin{equation}\label{e:tensrel}
f'{}\!_1 \otimes T(f_1) + f'{}\!_2 \otimes T(f_2) + \dots + f'{}\!_n \otimes T(f_n) = 0
\end{equation}
in $\mathcal C'$ for elements $f_1,f_2,\dots,f_n$ of some hom-space of $\mathcal C$, then $f_i = 0$ for each $i$.
We argue by induction on $n$.
The required result holds for $n = 1$ because $\mathcal C'$ is integral and $T$ is faithful.
Suppose that it holds for $n < r$.
If we had an equality \eqref{e:tensrel} with $n = r$ and $f_r \ne 0$, then tensoring with $T(f_r)$ would give
\[
\sum_{i=1}^r f'{}\!_i \otimes T(f_i \otimes f_r) = 0 = \sum_{i=1}^r f'{}\!_i \otimes T(f_r \otimes f_i),
\]
so that $\sum_{i=1}^{r-1} f'{}\!_i \otimes T(f_i \otimes f_r - f_r \otimes f_i) = 0$, and hence by induction
$f_i \otimes f_r = f_r \otimes f_i$ for $i < r$.
Since $\kappa(\mathcal C) = k$, this would imply $f_i = \alpha_i f_r$ for $i < r$ with $\alpha_i \in k$,
and hence by \eqref{e:tensrel}
\[
(\alpha_1 f'{}\!_1 + \dots + \alpha_{r-1} f'{}\!_{r-1} + f'{}\!_r) \otimes T(f_r) = 0,
\]
which is impossible by the case $n = 1$.
The required result thus holds for $n = r$.
\end{proof}
\begin{lem}\label{l:tensfaith}
Let $k$ be a field and $\mathcal C$ be an essentially small $k$\nobreakdash-\hspace{0pt} tensor category.
Then the following conditions are equivalent:
\begin{enumerate}
\renewcommand{\theenumi}{(\alph{enumi})}
\item\label{i:tensfaithkappa}
$\mathcal C$ is integral with $\kappa(\mathcal C) = k$;
\item\label{i:tensfaithprod}
$1_{\mathbf 1} \ne 0$ in $\mathcal C$ and the tensor product $\mathcal C \otimes_k \mathcal C \to \mathcal C$ is faithful.
\end{enumerate}
\end{lem}
\begin{proof}
\ref{i:tensfaithkappa} $\implies$ \ref{i:tensfaithprod}:
Apply Lemma~\ref{l:extfaith} with $\mathcal C' = \mathcal C$ and $T = \Id_{\mathcal C}$.
\ref{i:tensfaithprod} $\implies$ \ref{i:tensfaithkappa}:
Suppose that \ref{i:tensfaithprod} holds.
That $\mathcal C$ is integral is clear.
Let $h/f$ be an element of $\kappa(\mathcal C)$.
Then $f \otimes h = h \otimes f$.
Since $\mathcal C \otimes_k \mathcal C \to \mathcal C$ is faithful, it follows that $I_1(f) \otimes I_2(h) = I_1(h) \otimes I_2(f)$,
so that $h$ lies in the $1$\nobreakdash-\hspace{0pt} dimensional $k$\nobreakdash-\hspace{0pt} subspace generated by $f$, and $h/f$
lies in $k$.
Thus $\kappa(\mathcal C) = k$.
\end{proof}
\begin{lem}\label{l:ext}
Let $k$ be a field, $k'$ be an extension of $k$, and $\mathcal C$ be an essentially small integral
$k$\nobreakdash-\hspace{0pt} tensor category with $\kappa(\mathcal C) = k$.
\begin{enumerate}
\item\label{i:extint}
$k' \otimes_k \mathcal C$ is integral with $\kappa(k' \otimes_k \mathcal C) = k'$.
\item\label{i:extfaith}
If a $k$\nobreakdash-\hspace{0pt} tensor functor from $\mathcal C$ to an integral $k'$\nobreakdash-\hspace{0pt} tensor category $\mathcal C'$ is faithful,
then its extension to a $k'$\nobreakdash-\hspace{0pt} tensor functor from $k' \otimes_k \mathcal C$ to $\mathcal C'$ is faithful.
\end{enumerate}
\end{lem}
\begin{proof}
\ref{i:extint} follows from Lemma~\ref{l:tensfaith}, and \ref{i:extfaith} from Lemma~\ref{l:extfaith}
by composing with $I' \otimes_k \mathcal C$ where $I'$ is the embedding of $\mathbf 1$ into $\mathcal C'$.
\end{proof}
\section{Representations of the super general linear group}\label{s:rep}
This section contains the results on representations of the super general linear group
that will be required in the next section for the construction of certain quotients of
free rigid tensor categories by tensor ideals.
Let $k$ be a field of characteristic $0$.
We denote by $|V|$ the underlying $k$\nobreakdash-\hspace{0pt} vector space of a super $k$\nobreakdash-\hspace{0pt} vector space $V$.
Let $A$ be a commutative super $k$\nobreakdash-\hspace{0pt} algebra.
Then $|A|$ is a (not necessarily commutative) $k$\nobreakdash-\hspace{0pt} algebra.
If we regard $|A|$ as a right $|A|$\nobreakdash-\hspace{0pt} module over itself, then elements $a$ of $|A|$ may be identified with
the endomorphisms $a \cdot -$ of the right $|A|$\nobreakdash-\hspace{0pt} module $|A|$.
Given a $k$\nobreakdash-\hspace{0pt} linear map $f:|V| \to |V'|$, we have for every $a$ in $|A|$ a morphism
$f \otimes_k a:|V| \otimes_k |A| \to |V'| \otimes_k |A|$ of right $|A|$\nobreakdash-\hspace{0pt} modules,
which when $f$ and $a$ are homogeneous of the same degree underlies a morphism
\begin{equation*}
f \otimes_k a:V \otimes_k A \to V' \otimes_k A
\end{equation*}
of $A$\nobreakdash-\hspace{0pt} modules.
Given also $f':|V'| \to |V''|$ and $a'$ in $|A|$, we have
\begin{equation*}
(f' \otimes_k a') \circ (f \otimes_k a) = (f' \circ f) \otimes_k a'a.
\end{equation*}
If $V$, $V'$, $W$, $W'$ are super $k$\nobreakdash-\hspace{0pt} vector spaces concentrated in respective degrees $v$, $v'$, $w$, $w'$,
and if $a$ and $b$ in $|A|$ are homogeneous of respective degrees $v+v'$ and $w+w'$,
then for any $f:|V| \to |V'|$, $g:|W| \to |W'|$ we have
\begin{equation}\label{e:fagb}
(f \otimes_k a) \otimes_A (g \otimes_k b) = (f \otimes_k g) \otimes_k ((-1)^{(v+v')w'}ab),
\end{equation}
where we identify for example $(V \otimes_k A) \otimes_A (W \otimes_k A)$ with $(V \otimes_k W) \otimes_k A$
using the tensor structure of the functor $- \otimes_k A$.
We write $\mathrm M_m$ for the ring scheme over $k$ of endomorphisms of $k^m$, and
$\mathrm M_{m|n}$ for the super ring scheme over $k$ of endomorphisms of $k^{m|n}$.
Explicitly, a point of $\mathrm M_{m|n}$ in the commutative super $k$\nobreakdash-\hspace{0pt} algebra $A$ is an $A$\nobreakdash-\hspace{0pt} endomorphism
of the $A$\nobreakdash-\hspace{0pt} module $k^{m|n} \otimes_k A$.
Such a point may be identified with an endomorphism of degree $0$ of the right $|A|$\nobreakdash-\hspace{0pt} module
\begin{equation*}
|k^{m|n}| \otimes_k |A| = k^{m+n} \otimes_k |A|,
\end{equation*}
and hence with an $(m + n) \times (m + n)$ matrix with entries in the diagonal
$m \times m$ and $n \times n$ blocks in $A_0$ and entries in the two off-diagonal
blocks in $A_1$.
We denote by $E$ the standard representation of $\mathrm M_{m|n}$ on $k^{m|n}$, which assigns to a point
of $\mathrm M_{m|n}$ in $A$ its defining $A$\nobreakdash-\hspace{0pt} endomorphism of $k^{m|n} \otimes_k A$.
The point $\varepsilon_{m|n}$ of $\mathrm M_{m|n}(k)$ which is $1$ on the
diagonal $m \times m$ block, $-1$ on the diagonal $n \times n$ block, and $0$ on the off-diagonal blocks,
defines a structure $(\mathrm M_{m|n},\varepsilon_{m|n})$ of super $k$\nobreakdash-\hspace{0pt} monoid with involution on $\mathrm M_{m|n}$.
We usually write $\varepsilon_{m|n}$ simply as $\varepsilon$.
The standard representation $E$ of $\mathrm M_{m|n}$ is a representation of $(\mathrm M_{m|n},\varepsilon)$.
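For instance when $m = n = 1$, a point of $\mathrm M_{1|1}$ in $A$ is a matrix
\begin{equation*}
\begin{pmatrix} a & \beta \\ \gamma & d \end{pmatrix},
\qquad a, d \in A_0, \quad \beta, \gamma \in A_1,
\end{equation*}
and $\varepsilon_{1|1}$ is the matrix $\mathrm{diag}(1,-1)$; conjugation by $\varepsilon_{1|1}$ fixes
$a$ and $d$ and changes the signs of $\beta$ and $\gamma$, as required by the condition that it
coincide with $\iota_{\mathrm M_{1|1}}$.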
\begin{lem}\label{l:standend}
Every $\mathrm M_{m|n}$\nobreakdash-\hspace{0pt} endomorphism of $E^{\otimes d}$ is a $k$\nobreakdash-\hspace{0pt} linear combination of symmetries of $E^{\otimes d}$.
\end{lem}
\begin{proof}
For $i,j = 1,2, \dots ,m+n$, write $e_{ij}$ for the $k$\nobreakdash-\hspace{0pt} endomorphism of $|E| = k^{m+n}$
whose matrix has $(i,j)$th entry $1$ and all other entries $0$.
If $[\sigma]$ denotes the symmetry of $E^{\otimes d}$ defined by $\sigma$ in $\mathfrak{S}_d$,
then the $k$\nobreakdash-\hspace{0pt} endomorphisms of $|E|^{\otimes d}$ which commute with every symmetry of $E^{\otimes d}$
are $k$\nobreakdash-\hspace{0pt} linear combinations of those of the form
\begin{equation}\label{e:esigma}
\sum_{\sigma \in \mathfrak{S}_d} [\sigma] \circ
(e_{i_1j_1} \otimes_k \dots \otimes_k e_{i_dj_d})
\circ [\sigma^{-1}].
\end{equation}
The $k$\nobreakdash-\hspace{0pt} subalgebra $S$ of $\End_{\mathrm M_{m|n}}(E^{\otimes d})$ generated by the symmetries of
$E^{\otimes d}$ is a quotient of $k[\mathfrak{S}_d]$, and hence semisimple.
Thus $S$ is its own bicommutant in $\End_k(|E|^{\otimes d})$.
Hence it is enough to show that every $\mathrm M_{m|n}$\nobreakdash-\hspace{0pt} endomorphism of $E^{\otimes d}$ commutes
with every $k$\nobreakdash-\hspace{0pt} endomorphism of $|E|^{\otimes d}$ of the form \eqref{e:esigma}.
When $(i_r,j_r) = (i_s,j_s)$ in \eqref{e:esigma} for some $r \ne s$ with $(i_r,j_r)$ from the off-diagonal blocks,
\eqref{e:esigma} vanishes: if $\sigma_{rs}$ is the symmetry of $E^{\otimes d}$ that interchanges
the $r$th and $s$th factor $E$, then for any $\tau$ the terms of \eqref{e:esigma} with $\sigma = \tau$ and
$\sigma = \tau\sigma_{rs}$ cancel.
Let $h$ be an $\mathrm M_{m|n}$\nobreakdash-\hspace{0pt} endomorphism of $E^{\otimes d}$.
To show that $h$ commutes with \eqref{e:esigma}, we may suppose that $(i_r,j_r) \ne (i_s,j_s)$ in \eqref{e:esigma}
when $r \ne s$ and $(i_r,j_r)$ lies in the off-diagonal blocks.
Denote by $A$ the commutative super $k$\nobreakdash-\hspace{0pt} algebra freely generated by the elements $t_{ij}$ for $i,j = 1, \dots, m+n$
with $t_{ij}$ of degree $0$ for $(i,j)$ in the diagonal blocks and of degree $1$ for $(i,j)$
in the off-diagonal blocks.
The element of $\mathrm M_{m|n}(A)$ with matrix $(t_{ij})$ acts on $E \otimes_k A$ as the $A$\nobreakdash-\hspace{0pt} endomorphism
\begin{equation*}
\sum_{i,j = 1}^{m+n}e_{ij} \otimes_k t_{ij}
\end{equation*}
and hence on $E^{\otimes d} \otimes_k A$ as the tensor power
\begin{equation}\label{e:thetasum}
(\sum_{i,j = 1}^{m+n}e_{ij} \otimes_k t_{ij})^{\otimes d} =
\sum_{i_1,j_1,\dots, i_d,j_d = 1}^{m+n} \theta_{i_1,j_1, \dots ,i_d,j_d}
\end{equation}
over $A$ of $A$\nobreakdash-\hspace{0pt} endomorphisms, where
\begin{equation*}
\theta_{i_1,j_1, \dots ,i_d,j_d}
= (e_{i_1j_1} \otimes_k t_{i_1j_1}) \otimes_A \dots \otimes_A (e_{i_dj_d} \otimes_k t_{i_dj_d}).
\end{equation*}
Thus $h \otimes_k A$ commutes with \eqref{e:thetasum}.
Since $- \otimes_k A$ is a tensor functor, the symmetries of $E^{\otimes d} \otimes_k A$ are the
$[\sigma] \otimes_k A$, so that by their naturality
\begin{equation}\label{e:thetaconj}
([\sigma] \otimes_k A) \circ \theta_{i_1,j_1, \dots ,i_d,j_d} \circ ([\sigma^{-1}] \otimes_k A)
= \theta_{i_{\sigma^{-1}(1)},j_{\sigma^{-1}(1)}, \dots ,i_{\sigma^{-1}(d)},j_{\sigma^{-1}(d)}}
\end{equation}
for every $\sigma$.
Repeatedly applying \eqref{e:fagb} shows that
\begin{equation}\label{e:thetat}
\theta_{i_1,j_1, \dots ,i_d,j_d} =
(e_{i_1j_1} \otimes_k \dots \otimes_k e_{i_dj_d}) \otimes_k
(\delta t_{i_1j_1} \dots t_{i_dj_d})
\end{equation}
with $\delta = \pm 1$.
Now $t_{i_1j_1} \dots t_{i_dj_d}$ is non-zero if and only if the $(i_r,j_r)$ lying in the off-diagonal blocks
are distinct, and when this is so $t_{i'_1j'_1} \dots t_{i'_dj'_d}$ generates the same $1$\nobreakdash-\hspace{0pt} dimensional
subspace of the $k$\nobreakdash-\hspace{0pt} vector space $|A|$ as $t_{i_1j_1} \dots t_{i_dj_d}$ if and only if
\begin{equation*}
(i'_1,j'_1, \dots ,i'_d,j'_d) =
(i_{\sigma^{-1}(1)},j_{\sigma^{-1}(1)}, \dots ,i_{\sigma^{-1}(d)},j_{\sigma^{-1}(d)})
\end{equation*}
for some $\sigma \in \mathfrak{S}_d$.
Further the distinct $1$\nobreakdash-\hspace{0pt} dimensional subspaces generated in this way are linearly independent.
It thus follows from \eqref{e:thetaconj} and \eqref{e:thetat} that $h \otimes_k A$ commutes with
\begin{equation*}
\sum_{\sigma \in \mathfrak{S}_d}
([\sigma] \otimes_k A) \circ \theta_{i_1,j_1, \dots ,i_d,j_d} \circ ([\sigma^{-1}] \otimes_k A),
\end{equation*}
and hence from \eqref{e:thetat} that $h$ commutes with \eqref{e:esigma}.
\end{proof}
\begin{lem}\label{l:Mhomtens}
Let $(M,\varepsilon)$ and $(M',\varepsilon')$ be super $k$\nobreakdash-\hspace{0pt} monoids with involution,
$V$ be a representation of $(M,\varepsilon)$ and $V'$ a representation of $(M',\varepsilon')$,
and $W$ be an $(M,\varepsilon)$\nobreakdash-\hspace{0pt} module and $W'$ an $(M',\varepsilon')$\nobreakdash-\hspace{0pt} module.
Then the canonical $k$\nobreakdash-\hspace{0pt} linear map
\begin{equation*}
\Hom_M(V,W) \otimes_k \Hom_{M'}(V',W') \to
\Hom_{M \times_k M'}(V \otimes_k V',W \otimes_k W')
\end{equation*}
is an isomorphism.
\end{lem}
\begin{proof}
The canonical homomorphism is that induced by the canonical isomorphism
\begin{equation}\label{e:homtens}
\Hom_k(|V|,|W|) \otimes_k \Hom_k(|V'|,|W'|) \xrightarrow{\sim}
\Hom_k(|V| \otimes_k |V'|,|W| \otimes_k |W'|).
\end{equation}
It is thus enough to show that every $(M \times_k M')$\nobreakdash-\hspace{0pt} homomorphism $f$ from
$V \otimes_k V'$ to $W \otimes_k W'$
lies in the image under \eqref{e:homtens} of both
\begin{equation*}
\Hom_M(V,W) \otimes_k \Hom_k(V',W')
\end{equation*}
and the similar subspace defined using $M'$\nobreakdash-\hspace{0pt} homomorphisms.
In fact $f$ sends $V \otimes_k V'{}\!_i$ to $W \otimes_k W'{}\!_i$ because $(1,\varepsilon')$ acts on
them as $(-1)^i$.
Regarding $V \otimes_k V'$ and $W \otimes_k W'$ as $M$\nobreakdash-\hspace{0pt} modules by restricting to the factor $M$
thus gives the required result for $M$\nobreakdash-\hspace{0pt} homomorphisms.
The argument for $M'$\nobreakdash-\hspace{0pt} homomorphisms is similar.
\end{proof}
Let $\mathbf m = (m_\gamma)_{\gamma \in \Gamma}$ and $\mathbf n = (n_\gamma)_{\gamma \in \Gamma}$ be families of integers
$\ge 0$ indexed by the same set $\Gamma$.
We write
\begin{equation*}
\mathrm M_{\mathbf m|\mathbf n} = \prod_{\gamma \in \Gamma} \mathrm M_{m_\gamma|n_\gamma}.
\end{equation*}
If $\varepsilon_{\mathbf m|\mathbf n}$ is the $k$\nobreakdash-\hspace{0pt} point
$(\varepsilon_{m_\gamma|n_\gamma})_{\gamma \in \Gamma}$ of $\mathrm M_{\mathbf m|\mathbf n}$,
then $(\mathrm M_{\mathbf m|\mathbf n},\varepsilon_{\mathbf m|\mathbf n})$ is a super $k$\nobreakdash-\hspace{0pt} monoid with involution,
which we usually write simply $(\mathrm M_{\mathbf m|\mathbf n},\varepsilon)$.
We also write $E_\gamma$ for the representation of $(\mathrm M_{\mathbf m|\mathbf n},\varepsilon)$
given by inflation along the projection
$\mathrm M_{\mathbf m|\mathbf n} \to \mathrm M_{m_\gamma|n_\gamma}$ of the standard representation $E$ of
$(\mathrm M_{m_\gamma|n_\gamma},\varepsilon)$.
\begin{lem}\label{l:standss}
Let $\mathbf m$ and $\mathbf n$ be families of integers $\ge 0$ indexed by $\Gamma$.
Then the full subcategory of the category of representations of $(\mathrm M_{\mathbf m|\mathbf n},\varepsilon)$
consisting of the direct summands of direct sums of representations of the form $\bigotimes_{i=1}^r E_{\gamma_i}$
for $\gamma_i \in \Gamma$ is semisimple abelian.
\end{lem}
\begin{proof}
It is enough to show that the $k$\nobreakdash-\hspace{0pt} algebra $\End_{\mathrm M_{\mathbf m|\mathbf n}}(V)$ is semisimple for $V$ a direct sum of
representations $\bigotimes_i E_{\gamma_i}$.
The centre of $\mathrm M_{\mathbf m|\mathbf n}$ is $\prod_{\gamma \in \Gamma}\mathrm M_1$ with each factor diagonally embedded,
and the monoid of central characters is the free commutative monoid on $\Gamma$.
Since $\bigotimes_i E_{\gamma_i}$ has central character $\sum_i \gamma_i$, the hom-space between two
non-isomorphic representations of this form is $0$.
Thus we may suppose that $V = (\bigotimes_i E_{\gamma_i})^r$ for some $r$.
Since $\End_{\mathrm M_{\mathbf m|\mathbf n}}(W^r)$ is the tensor product over $k$ of $\End_{\mathrm M_{\mathbf m|\mathbf n}}(W)$
with the full matrix algebra $\mathrm M_r(k)$, we may suppose further that $r = 1$.
By Lemma~\ref{l:Mhomtens} we may suppose finally that $V = E_\gamma{}\!^{\otimes d}$ for some $\gamma$.
Since the group algebra $k[\mathfrak{S}_d]$ is semisimple, the required result then follows
from Lemma~\ref{l:standend}.
\end{proof}
\begin{lem}\label{l:subobj}
Let $\mathcal C$ be an abelian category and $\mathcal C_0$ be a strictly full abelian subcategory of $\mathcal C$
for which the embedding $\mathcal C_0 \to \mathcal C$ is left exact.
Suppose that every object of $\mathcal C$ is a subobject of an object of $\mathcal C_0$.
Then $\mathcal C_0 = \mathcal C$.
\end{lem}
\begin{proof}
Let $N$ be an object of $\mathcal C$.
By hypothesis, $N$ is a subobject of an object $N_0$ in $\mathcal C_0$,
and $N_0/N$ is a subobject of an object $M_0$ in $\mathcal C_0$.
Then $N$ is the kernel of the composite $N_0 \to N_0/N \to M_0$, and hence lies in $\mathcal C_0$.
\end{proof}
\begin{prop}\label{p:Msummand}
Let $\mathbf m$ and $\mathbf n$ be families of integers $\ge 0$ indexed by $\Gamma$.
\begin{enumerate}
\item\label{i:Msummandss}
The category of representations of $(\mathrm M_{\mathbf m|\mathbf n},\varepsilon)$ is semisimple abelian.
\item\label{i:Msummand}
Every representation of $(\mathrm M_{\mathbf m|\mathbf n},\varepsilon)$ is a direct summand of a direct sum of
representations of the form $\bigotimes_{i=1}^r E_{\gamma_i}$ for $\gamma_i \in \Gamma$.
\end{enumerate}
\end{prop}
\begin{proof}
By Lemma~\ref{l:standss} it is enough to prove \ref{i:Msummand}.
Let $V$ be a representation of $(\mathrm M_{\mathbf m|\mathbf n},\varepsilon)$.
Choosing an isomorphism of super $k$\nobreakdash-\hspace{0pt} vector spaces $V \xrightarrow{\sim} k^{t|u}$
gives an embedding
\[
0 \to V \to k^{t|u} \otimes_k k[\mathrm M_{\mathbf m|\mathbf n}]
\]
of $\mathrm M_{\mathbf m|\mathbf n}$\nobreakdash-\hspace{0pt} modules, where $\mathrm M_{\mathbf m|\mathbf n}$ acts trivially on $k^{t|u}$ and $k[\mathrm M_{\mathbf m|\mathbf n}]$
is the right regular $\mathrm M_{\mathbf m|\mathbf n}$\nobreakdash-\hspace{0pt} module.
By the isomorphism
\begin{equation*}
k[\mathrm M_{\mathbf m|\mathbf n}] \xrightarrow{\sim} \Sym(\coprod_{\gamma \in \Gamma} k^{m_\gamma|n_\gamma} \otimes_k E_\gamma)
\end{equation*}
of $\mathrm M_{\mathbf m|\mathbf n}$\nobreakdash-\hspace{0pt} modules, $k^{t|u} \otimes_k k[\mathrm M_{\mathbf m|\mathbf n}]$ is a coproduct of $\mathrm M_{\mathbf m|\mathbf n}$\nobreakdash-\hspace{0pt} modules
of the form either $\bigotimes_i E_{\gamma_i}$ or $k^{0|1} \otimes_k \bigotimes_i E_{\gamma_i}$.
There are no non-zero $\mathrm M_{\mathbf m|\mathbf n}$\nobreakdash-\hspace{0pt} homomorphisms from $V$ to $k^{0|1} \otimes_k \bigotimes_i E_{\gamma_i}$,
because $\varepsilon$ acts on the even part of the latter as $-1$ and on its odd part as $1$.
It follows that $V$ can be embedded in a direct sum of representations $\bigotimes_i E_{\gamma_i}$.
Taking for $\mathcal C$ in Lemma~\ref{l:subobj} the category of representations of $(\mathrm M_{\mathbf m|\mathbf n},\varepsilon)$,
for $\mathcal C_0$ the strictly full subcategory consisting of the direct summands of direct sums
of representations $\bigotimes_i E_{\gamma_i}$, and using Lemma~\ref{l:standss},
now gives the required result.
\end{proof}
A point of $\mathrm M_{m|n}$ in a commutative super $k$\nobreakdash-\hspace{0pt} algebra $A$ may be written uniquely as
$g + \alpha$, with $g$ in the diagonal blocks and $\alpha$ in the off-diagonal blocks.
Then $g$ is a point of $\mathrm M_m \times \mathrm M_n$ in $A_0$.
A point of $\mathrm M_{m|n}$ in $A$ is invertible if and only if the corresponding point in $A_{\mathrm{red}}$ is invertible:
to see this, reduce to the case where the point in $A_{\mathrm{red}}$ is the identity.
The functor that sends $A$ to the group of invertible elements of $\mathrm M_{m|n}(A)$ is thus represented by an affine open
super $k$\nobreakdash-\hspace{0pt} subgroup $\mathrm {GL}_{m|n}$ of $\mathrm M_{m|n}$ with $(\mathrm {GL}_{m|n})_{\mathrm{red}}$ the open
$k$\nobreakdash-\hspace{0pt} subgroup $\mathrm {GL}_m \times \mathrm {GL}_n$ of $(\mathrm M_{m|n})_{\mathrm{red}} = \mathrm M_m \times \mathrm M_n$.
The point $g + \alpha$ of $\mathrm M_{m|n}$ lies in $\mathrm {GL}_{m|n}$ if and only if the point $g$ of $\mathrm M_m \times \mathrm M_n$
lies in $\mathrm {GL}_m \times \mathrm {GL}_n$.
Write
\[
q \in k[\mathrm M_{m|n}]_0
\]
for the pullback along the projection $g + \alpha \mapsto g$ from $\mathrm M_{m|n}$ to $\mathrm M_m \times \mathrm M_n$
of the element of $k[\mathrm M_m \times \mathrm M_n]$ that sends $(h,j)$ to $\det(h)\det(j)$.
Then $\mathrm {GL}_{m|n}$ is the open super subscheme $(\mathrm M_{m|n})_q$ of $\mathrm M_{m|n}$ where $q$ is invertible, and
\begin{equation}\label{e:kGLkM}
k[\mathrm {GL}_{m|n}] = k[\mathrm M_{m|n}]_q
\end{equation}
is obtained from $k[\mathrm M_{m|n}]$ by inverting $q$.
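For instance when $m = n = 1$, writing $t_{ij}$ for the matrix coordinates generating $k[\mathrm M_{1|1}]$,
with $t_{11}$, $t_{22}$ even and $t_{12}$, $t_{21}$ odd, we have $q = t_{11}t_{22}$, and \eqref{e:kGLkM} reads
\begin{equation*}
k[\mathrm {GL}_{1|1}] = k[t_{11},t_{12},t_{21},t_{22}][(t_{11}t_{22})^{-1}],
\end{equation*}
so that a point $g + \alpha$ of $\mathrm M_{1|1}$ in $A$ lies in $\mathrm {GL}_{1|1}$ exactly when the product
of the two diagonal entries of $g$ is invertible in $A_0$.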
If $M$ is an affine super $k$\nobreakdash-\hspace{0pt} monoid and $V$ is an $M$\nobreakdash-\hspace{0pt} module with defining homomorphism
\begin{equation*}
\mu:V \to V \otimes_k k[M]
\end{equation*}
then the \emph{coefficient} of $V$ associated to a $v$ in $|V|$ and a $k$\nobreakdash-\hspace{0pt} linear map $\pi:|V| \to k$
is the element $(\pi \otimes_k k[M])(\mu(v))$ of $k[M]$.
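For example, if $V = E$ is the standard representation of $\mathrm M_{m|n}$, with homogeneous basis
$e_1,\dots,e_{m+n}$ of $|E| = k^{m+n}$ and with $t_{ij}$ the matrix coordinates generating $k[\mathrm M_{m|n}]$, then
\begin{equation*}
\mu(e_j) = \sum_{i=1}^{m+n} e_i \otimes t_{ij},
\end{equation*}
so the coefficient associated to $e_j$ and the $i$th coordinate projection $|E| \to k$ is $t_{ij}$,
and the coefficients of $E$ are precisely the $k$\nobreakdash-\hspace{0pt} linear combinations of the $t_{ij}$.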
By \eqref{e:kGLkM}, the embedding of $\mathrm {GL}_{m|n}$ into $\mathrm M_{m|n}$ defines an embedding
of $k[\mathrm M_{m|n}]$ into $k[\mathrm {GL}_{m|n}]$.
The category of $\mathrm M_{m|n}$\nobreakdash-\hspace{0pt} modules may thus be identified with the full subcategory of
the category of $\mathrm {GL}_{m|n}$\nobreakdash-\hspace{0pt} modules consisting of those $\mathrm {GL}_{m|n}$\nobreakdash-\hspace{0pt} modules for which
every coefficient lies in $k[\mathrm M_{m|n}]$.
The point $\varepsilon = \varepsilon_{m|n}$ of $\mathrm M_{m|n}(k)$ lies in $\mathrm {GL}_{m|n}(k)$, so that
we have a super $k$\nobreakdash-\hspace{0pt} group with involution $(\mathrm {GL}_{m|n},\varepsilon)$.
\begin{lem}\label{l:coeff}
For every integer $d \ge 0$ there exists a representation $W \ne 0$ of $(\mathrm M_{m|n},\varepsilon)$ such that
every coefficient of $W$ lies in $q^dk[\mathrm M_{m|n}]$.
\end{lem}
\begin{proof}
It is enough to prove that there exists a representation $W \ne 0$ of $(\mathrm {GL}_{m|n},\varepsilon)$
such that every coefficient of $W$ lies in $q^dk[\mathrm M_{m|n}] \subset k[\mathrm {GL}_{m|n}]$.
Let $r$ be an integer $\ge 0$, and $W_r$ be the $\mathrm {GL}_{m|n}$\nobreakdash-\hspace{0pt} submodule of the right regular $\mathrm {GL}_{m|n}$\nobreakdash-\hspace{0pt} module
$k[\mathrm {GL}_{m|n}]$ generated by $q^{2r}$.
Explicitly, if the comultiplication of $k[\mathrm {GL}_{m|n}]$ is the $k$\nobreakdash-\hspace{0pt} homomorphism
\[
\mu:k[\mathrm {GL}_{m|n}] \to k[\mathrm {GL}_{m|n}] \otimes_k k[\mathrm {GL}_{m|n}],
\]
then $W_r$ is the smallest super $k$\nobreakdash-\hspace{0pt} vector subspace $V$ of $k[\mathrm {GL}_{m|n}]$ such that
\begin{equation}\label{e:muqr}
\mu(q^{2r}) \in V \otimes_k k[\mathrm {GL}_{m|n}].
\end{equation}
Since left translation by $\varepsilon$ leaves $q^{2r}$ fixed,
$W_r$ is contained in the super $\mathrm {GL}_{m|n}$\nobreakdash-\hspace{0pt} subspace of $k[\mathrm {GL}_{m|n}]$
of invariants under left translation by $\varepsilon$.
Thus $W_r$ is a representation of $(\mathrm {GL}_{m|n},\varepsilon)$,
because conjugation of $\mathrm {GL}_{m|n}$ by $\varepsilon$ acts on $k[\mathrm {GL}_{m|n}]$ as $(-1)^i$ in degree $i$.
Let $g + \alpha$ and $g' + \alpha'$ be points of $\mathrm {GL}_{m|n}$ in a commutative super $k$\nobreakdash-\hspace{0pt} algebra $A$.
Then
\[
q^{2r}((g+\alpha)(g'+\alpha')) = q^{2r}(gg' + \alpha\alpha') =
q^{2r}(gg')q^{2r}(1 + (gg')^{-1}\alpha\alpha').
\]
Since $(gg')^{-1}$ is $q^{-1}(gg')a(gg')$ with the entries of $a(gg')$ polynomials
in those of $gg'$, we may write $q^{2r}(1 + (gg')^{-1}\alpha\alpha')$ as a sum of terms of the form
\begin{equation*}
q^{-l}(g)q^{-l}(g')p(g,g')\alpha_1 \dots \alpha_l\alpha'\!_1 \dots \alpha'\!_l,
\end{equation*}
where $p(g,g')$ is a polynomial in the entries of $g$ and $g'$,
and the $\alpha_i$ and $\alpha'\!_i$ are entries of $\alpha$ and $\alpha'$.
The product of the $\alpha_i$ or $\alpha'\!_i$ is $0$ if $l > 2mn$,
so that
\begin{equation*}
\mu(q^{2r}) \in q^{2r-2mn}k[\mathrm M_{m|n}] \otimes_k q^{2r-2mn}k[\mathrm M_{m|n}].
\end{equation*}
Thus \eqref{e:muqr} holds with $V = q^{2r-2mn}k[\mathrm M_{m|n}]$, so that
\begin{equation*}
W_r \subset q^{2r-2mn}k[\mathrm M_{m|n}].
\end{equation*}
Suppose $r \ge mn$.
Then applying $\mu$ and using the fact that
$\mu$ is a morphism of super $k$\nobreakdash-\hspace{0pt} algebras sending $k[\mathrm M_{m|n}]$ into
$k[\mathrm M_{m|n}] \otimes_k k[\mathrm M_{m|n}]$ shows that
\begin{equation*}
\mu(W_r) \subset q^{2r-4mn}k[\mathrm M_{m|n}] \otimes_k q^{2r-4mn}k[\mathrm M_{m|n}].
\end{equation*}
It follows that every coefficient of $W_r$ lies in $q^{2r-4mn}k[\mathrm M_{m|n}]$.
We may thus take $W = W_r$ for $r \ge 2mn + d/2$.
\end{proof}
Let $\mathbf m$ and $\mathbf n$ be families of integers $\ge 0$ indexed by $\Gamma$.
We write
\begin{equation*}
\mathrm {GL}_{\mathbf m|\mathbf n} = \prod_{\gamma \in \Gamma}\mathrm {GL}_{m_\gamma|n_\gamma}.
\end{equation*}
The $k$\nobreakdash-\hspace{0pt} point $\varepsilon = \varepsilon_{\mathbf m|\mathbf n}$ of $\mathrm M_{\mathbf m|\mathbf n}$ lies in $\mathrm {GL}_{\mathbf m|\mathbf n}$,
and we have an affine super $k$\nobreakdash-\hspace{0pt} group with involution $(\mathrm {GL}_{\mathbf m|\mathbf n},\varepsilon)$.
As with $\mathrm M_{m|n}$ and $\mathrm {GL}_{m|n}$, we may identify $\mathrm M_{\mathbf m|\mathbf n}$\nobreakdash-\hspace{0pt} modules with
$\mathrm {GL}_{\mathbf m|\mathbf n}$\nobreakdash-\hspace{0pt} modules whose coefficients lie in $k[\mathrm M_{\mathbf m|\mathbf n}]$.
\begin{thm}\label{t:VtensW}
For every representation $V$ of $(\mathrm {GL}_{\mathbf m|\mathbf n},\varepsilon)$ there exists a representation $W \ne 0$ of
$(\mathrm M_{\mathbf m|\mathbf n},\varepsilon)$ such that $V \otimes_k W$ is a representation of $(\mathrm M_{\mathbf m|\mathbf n},\varepsilon)$.
\end{thm}
\begin{proof}
Write $q_\gamma$ for the image of $q$ in $k[\mathrm M_{m_\gamma|n_\gamma}]$ under the embedding
of $k[\mathrm M_{m_\gamma|n_\gamma}]$ into $k[\mathrm M_{\mathbf m|\mathbf n}]$ defined by the projection from $\mathrm M_{\mathbf m|\mathbf n}$ to
$\mathrm M_{m_\gamma|n_\gamma}$.
The coefficients of $V$ generate a finite-dimensional $k$\nobreakdash-\hspace{0pt} vector subspace of $k[\mathrm {GL}_{\mathbf m|\mathbf n}]$, and hence lie in
\begin{equation*}
(\prod_{\gamma \in \Gamma_0}q_\gamma{}\!^{-d})k[\mathrm M_{\mathbf m|\mathbf n}]
\end{equation*}
for some finite subset $\Gamma_0$ of $\Gamma$ and $d \ge 0$.
Since the coefficients of $V \otimes_k W$ are linear combinations of products of those of $V$ and $W$,
it follows from Lemma~\ref{l:coeff} that we may take $W = \bigotimes_{\gamma \in \Gamma_0} W_\gamma$
with $W_\gamma \ne 0$ for $\gamma \in \Gamma_0$ an appropriate representation of the factor
$\mathrm M_{m_\gamma|n_\gamma}$ of $\mathrm M_{\mathbf m|\mathbf n}$.
\end{proof}
\begin{cor}\label{c:quotsub}
Let $\mathbf m$ and $\mathbf n$ be families of integers $\ge 0$ indexed by $\Gamma$.
Then every representation of $(\mathrm {GL}_{\mathbf m|\mathbf n},\varepsilon)$ is a
subrepresentation (resp.\ quotient representation)
of a direct sum of representations of the form
$\bigotimes_{i=1}^r E_{\gamma_i} \otimes_k \bigotimes_{j=1}^s E_{\gamma'{}\!_j}\!^{\vee}$ for
$\gamma_i, \gamma'{}\!_j \in \Gamma$.
\end{cor}
\begin{proof}
Let $V$ be a representation of $(\mathrm {GL}_{\mathbf m|\mathbf n},\varepsilon)$.
If $W \ne 0$ is as in Theorem~\ref{t:VtensW}, then by Proposition~\ref{p:Msummand},
both $W$ and $V \otimes_k W$ are direct summands of direct sums of representations
of the form $\bigotimes_i E_{\gamma_i}$.
Since $V \otimes \eta_W$ with $\eta_W:k \to W \otimes_k W^\vee$ the unit
embeds $V$ into $V \otimes_k W \otimes_k W^\vee$, the result for subrepresentations follows.
The result for quotients follows by taking duals.
\end{proof}
\section{Free rigid tensor categories}\label{s:free}
This section deals with the connection between free rigid tensor categories
and categories of representations of super general linear groups over a field
of characteristic $0$.
Let $k$ be a field of characteristic $0$.
Given $r \in k$ we denote by
\begin{equation*}
\mathcal F_r
\end{equation*}
the free rigid $k$\nobreakdash-\hspace{0pt} tensor category on a dualisable
object $N$ of rank $r$.
If $M$ is a dualisable object of rank $r$ in a $k$\nobreakdash-\hspace{0pt} tensor category $\mathcal C$,
there is then a $k$\nobreakdash-\hspace{0pt} tensor functor from $\mathcal F_r$ to $\mathcal C$ that sends $N$ to $M$,
and if $T_1$ and $T_2$ are tensor functors
from $\mathcal F_r$ to $\mathcal C$ and $\theta$ is an isomorphism from $T_1(N)$ to $T_2(N)$,
there is a unique tensor isomorphism $\varphi$ from $T_1$ to $T_2$ with $\varphi_N = \theta$.
The objects of $\mathcal F_r$ are tensor products of copies of $N$ and $N^\vee$.
The full tensor subcategory $\mathcal H$ of $\mathcal F_r$ consisting of tensor powers of $N$ is independent of
$r$ and is the free $k$\nobreakdash-\hspace{0pt} tensor category on $N$.
The symmetries of $N^{\otimes d}$ then form a basis for its endomorphism algebra, so that
\begin{equation*}
\End(N^{\otimes d}) = k[\mathfrak{S}_d],
\end{equation*}
while if $d_1 \ne d_2$ we have $\Hom(N^{\otimes d_1},N^{\otimes d_2}) = 0$.
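For example, $\End(N^{\otimes 2}) = k[\mathfrak{S}_2]$ has as basis the identity and the symmetry $\tau$ of $N^{\otimes 2}$,
and decomposes as the product of the two ideals generated by the complementary idempotents
$\tfrac{1}{2}(1 + \tau)$ and $\tfrac{1}{2}(1 - \tau)$.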
Dualising shows that a tensor ideal $\mathcal J$ of $\mathcal F_r$ is completely determined
by its restriction to $\mathcal H$, and hence
by the ideals $\mathcal J(N^{\otimes d},N^{\otimes d})$ of the $k$\nobreakdash-\hspace{0pt} algebras $\End(N^{\otimes d})$.
A sequence of ideals $\mathcal J^{(d)}$ of the $\End(N^{\otimes d})$ arises from a tensor ideal of
$\mathcal F_r$ if and only if $\mathcal J^{(d)} \otimes N$ is contained in $\mathcal J^{(d+1)}$
and contraction sends $\mathcal J^{(d+1)}$ into $\mathcal J^{(d)}$
for each $d$.
Write $c_\lambda$ for the Young symmetriser in $k[\mathfrak{S}_d]$ associated to
the partition $\lambda$ of $d$.
The two-sided ideal of $k[\mathfrak{S}_d]$ generated by $c_\lambda$ consists of those
elements that act trivially on all irreducible representations of $\mathfrak{S}_d$
other than the one associated to $\lambda$.
Every minimal two-sided ideal of $k[\mathfrak{S}_d]$ is of this form, for a unique $\lambda$.
If $\lambda'$ is a partition of $d' \ge d$ and $\mathfrak{S}_d$ is embedded in
$\mathfrak{S}_{d'}$ by an embedding of $[1,d]$ into $[1,d']$, then $c_{\lambda'}$ lies
in the two-sided ideal of $k[\mathfrak{S}_{d'}]$ generated by $c_\lambda$ if and only
if the diagram $[\lambda']$ contains $[\lambda]$ \cite[4.44]{FulHar}.
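Recall (see \cite{FulHar}) that $c_\lambda = a_\lambda b_\lambda$ for a choice of Young tableau of shape $\lambda$,
with $a_\lambda$ the sum of the permutations preserving each row of the tableau and $b_\lambda$ the signed
sum of those preserving each column.
For the tableau of shape $(2,1)$ with rows $\{1,2\}$ and $\{3\}$, for example,
\begin{equation*}
c_{(2,1)} = (1 + (12))(1 - (13)) = 1 + (12) - (13) - (12)(13);
\end{equation*}
different tableaux of the same shape give conjugate symmetrisers, which generate the same
two-sided ideal of $k[\mathfrak{S}_3]$.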
Suppose now that $r$ is an integer.
Let $m|n$ be a pair of integers $\ge 0$ with $m-n = r$.
Then there exists a $k$\nobreakdash-\hspace{0pt} tensor functor from $\mathcal F_r$ to the category
of super $k$\nobreakdash-\hspace{0pt} vector spaces, unique up to tensor isomorphism, that sends
$N$ to a super $k$\nobreakdash-\hspace{0pt} vector space of dimension $m|n$.
Its kernel is a tensor ideal
\begin{equation*}
\mathcal J_{m|n}
\end{equation*}
of $\mathcal F_r$.
Denote by $\lambda_{m|n}$ the partition of $(m+1)(n+1)$ such that $[\lambda_{m|n}]$ has $m+1$ columns
and $n+1$ rows.
Then \cite[1.9]{Del02}
\begin{equation*}
c_\lambda \in \mathcal J_{m|n}(N^{\otimes d},N^{\otimes d}) \subset \End(N^{\otimes d}) = k[\mathfrak{S}_d]
\end{equation*}
if and only if $[\lambda]$ contains $[\lambda_{m|n}]$,
so that $\mathcal J_{m|n}(N^{\otimes d},N^{\otimes d})$ is the two-sided ideal of $k[\mathfrak{S}_d]$
generated by $c_{\lambda_{m|n}}$, and $\mathcal J_{m|n}$ is the tensor ideal of $\mathcal F_r$
generated by $c_{\lambda_{m|n}}$.
In particular
\begin{equation}\label{e:Jmnstrincl}
\mathcal J_{m+1|n+1} \subsetneqq \mathcal J_{m|n}.
\end{equation}
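For example, $[\lambda_{1|1}]$ is the square diagram with $2$ rows and $2$ columns, so that
$\mathcal J_{1|1}$ is the tensor ideal of $\mathcal F_0$ generated by the Young symmetriser $c_{(2,2)}$
in $\End(N^{\otimes 4})$, and $\mathcal J_{2|2} \subsetneqq \mathcal J_{1|1}$ because $[\lambda_{2|2}]$
contains $[\lambda_{1|1}]$.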
To show that the $\mathcal J_{m|n}$ are the only tensor ideals of $\mathcal F_r$ other than $0$ or $\mathcal F_r$,
we need the following lemma.
\begin{lem}\label{l:rectangle}
Let $V$ be a super $k$\nobreakdash-\hspace{0pt} vector space of dimension $m|n$ and $\lambda$ be a partition of $d>0$.
Suppose that the endomorphism $f$ of $V^{\otimes d}$ induced by the Young
symmetriser $c_\lambda$ is $\ne 0$, but that each contraction of $f$ with respect to a factor
$V$ of $V^{\otimes d}$ is $0$.
Then $\lambda = \lambda_{m'|n'}$ for some $m'|n'$ with $m' - n' = m - n$.
\end{lem}
\begin{proof}
Since $f \ne 0$, the diagram $[\lambda]$ does not contain a box $(n+1,m+1)$.
Fix a basis $e^+_1, e^+_2, \dots , e^+_m$ of $V_0$ and $e^-_1, e^-_2, \dots , e^-_n$ of $V_1$.
The $e^+_r$ and $e^-_s$ define a basis of $|V|$ and hence of the tensor powers of $|V|$.
The boxes of $[\lambda]$ correspond to factors $V$ in $V^{\otimes d}$, and assignments to
each box of an $e^+_r$ or $e^-_s$ define basis elements of $|V|^{\otimes d}$.
Let $(1,l)$ be a box in the first row of $[\lambda]$ and $e$ be one of the $e^+_r$ or $e^-_s$.
Assign as follows to each box in $[\lambda]$ a basis element $e^+_r$ or $e^-_s$.
Write $n_0$ for the lesser of $n$ or the number of rows of $[\lambda]$.
To $(1,l)$ assign $e$.
To $(i,j) \ne (1,l)$ with $i \le n_0$ assign $e^-_i$.
To $(i,j) \ne (1,l)$ with $i > n_0$ assign $e^+_j$,
which is possible because $j \le m$ for $i > n$.
The diagram
\begin{equation*}
\begin{ytableau}
e^-_1 & e^-_1 & e^-_1 & e & e^-_1 & e^-_1 \\
e^-_2 & e^-_2 & e^-_2 & e^-_2 & e^-_2 \\
e^-_3 & e^-_3 & e^-_3 & e^-_3 \\
e^+_1 & e^+_2 & e^+_3 & e^+_4 \\
e^+_1 & e^+_2
\end{ytableau}
\end{equation*}
is an example with $l = 4$ and $n_0 = 3$.
This assignment defines a basis element of $|V|^{\otimes d}$.
If we write $f_{l;e}$ for the diagonal matrix entry of
$|f|:|V|^{\otimes d} \to |V|^{\otimes d}$
corresponding to this basis element, and $f'$ for the contraction of $f$ with respect to the factor
$V$ of $V^{\otimes d}$
corresponding to the box $(1,l)$, then
\begin{equation}\label{e:contractentry}
\sum_{r = 1}^m f_{l;e^+_r} - \sum_{s = 1}^n f_{l;e^-_s}
\end{equation}
is a diagonal matrix entry of $|f'|$.
It is thus enough to show that if \eqref{e:contractentry} is $0$ for every $l$ then $[\lambda]$ is rectangular,
with the number of its columns exceeding the number of its rows by $m - n$.
Write $\lambda = (\lambda_1, \dots ,\lambda_p)$ and $\lambda^t = (\lambda^t{}\!_1, \dots ,\lambda^t{}\!_q)$.
If $n = 0$ then \eqref{e:contractentry} is
\begin{equation*}
f_{l;e^+_l} = \lambda^t{}\!_1 ! \lambda^t{}\!_2 ! \dots \lambda^t{}\!_q ! \ne 0.
\end{equation*}
We may thus suppose that $n > 0$.
Then \eqref{e:contractentry} is
\begin{multline*}
C(-1 -\frac{n-\min\{n_0,\lambda^t{}\!_l\}}{\lambda_1} +
\frac{\max\{1,\lambda^t{}\!_l -n_0 +1\} - \delta_l}{\lambda_1}
+ \frac{m-1 + \delta_l}{\lambda_1}) = \\
= \frac{C}{\lambda_1}(m - n - \lambda_1 + \lambda^t{}\!_l)
\end{multline*}
with $C = \lambda_1! \dots \lambda_p!(\lambda^t{}\!_l - n_0)! \dots (\lambda^t{}\!_{q_0} - n_0)! \ne 0$
where $q_0$ is the largest $i$ with $\lambda^t{}\!_i \ge n_0$,
and with $\delta_l = 0$ when $l \le m$ and $\delta_l = 1$ when $l > m$.
Here the first term on the left corresponds to $f_{l;e^-_1}$, the second to $f_{l;e^-_s}$ for
$\min\{n_0,\lambda^t{}\!_l\} < s \le n$, the third to $f_{l;e^+_l}$ when $l \le n$, the fourth to
$f_{l;e^+_r}$ for $r \ne l$, and the $f_{l;e^-_s}$ for $1 < s \le \min\{n_0,\lambda^t{}\!_l\}$ are $0$.
Since $\lambda_1$ is the number of columns and $\lambda^t{}\!_l$ is the length of the $l$th column of
$[\lambda]$, the result follows.
\end{proof}
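For example, when $m = n$, the vanishing of \eqref{e:contractentry} for every $l$ forces
$\lambda^t{}\!_l = \lambda_1$ for every $l$, so that $[\lambda]$ is a square, in accordance with
the fact that $[\lambda_{m'|n'}]$ is square when $m' = n'$.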
\begin{lem}\label{l:idealJmn}
Let $r$ be an integer.
Then every tensor ideal of $\mathcal F_r$ other than $0$ or $\mathcal F_r$ is of the form $\mathcal J_{m|n}$
for a unique $m|n$ with $m-n = r$.
\end{lem}
\begin{proof}
The uniqueness is clear from \eqref{e:Jmnstrincl}.
Let $\mathcal J$ be a tensor ideal of $\mathcal F_r$ other than $0$ or $\mathcal F_r$.
Since $\mathcal J \ne 0$, there exists a $d>0$ and a partition $\lambda$ of $d$ such that
$c_\lambda$ in $\End(N^{\otimes d})$ lies in $\mathcal J$.
Hence $c_{\lambda_{m|n}}$ lies in $\mathcal J$ for $m,n$ sufficiently large, so that
\begin{equation}\label{e:Jmnincl}
\mathcal J_{m|n} \subset \mathcal J
\end{equation}
for some $m,n$ with $m-n =r$.
Let $(m_0,n_0)$ be the pair with $m_0 \ge 0$, $n_0 \ge 0$, $m_0-n_0=r$ and
$\mathcal J_{m_0|n_0}$ largest such that \eqref{e:Jmnincl} holds with $(m,n) = (m_0,n_0)$.
We show by induction on $d$ that
\begin{equation}\label{e:Jmnequ}
\mathcal J_{m_0|n_0}(N^{\otimes d},N^{\otimes d}) = \mathcal J(N^{\otimes d},N^{\otimes d})
\end{equation}
for each $d \ge 0$ so that $\mathcal J_{m_0|n_0} = \mathcal J$.
Since $\mathcal J \ne \mathcal F_r$, \eqref{e:Jmnequ} holds for $d = 0$.
Suppose that \eqref{e:Jmnequ} holds for $d= d_0$.
Let $\lambda$ be a partition of $d_0 + 1$ with $c_\lambda \in \mathcal J$.
Suppose that $c_\lambda \notin \mathcal J_{m_0|n_0}$.
If $V$ is a super $k$\nobreakdash-\hspace{0pt} vector space of dimension $m_0|n_0$ there is by definition
of $\mathcal J_{m_0|n_0}$ a tensor functor with kernel $\mathcal J_{m_0|n_0}$
from $\mathcal F_r$ to the category of super $k$\nobreakdash-\hspace{0pt} vector spaces which sends $N$ to $V$.
It sends $c_\lambda$ to the endomorphism $f$ of $V^{\otimes (d_0+1)}$
induced by $c_\lambda$.
Since $c_\lambda \notin \mathcal J_{m_0|n_0}$ we have $f \ne 0$.
By the induction hypothesis each contraction of $c_\lambda$ lies in $\mathcal J_{m_0|n_0}$,
and hence each contraction of $f$ is $0$.
Thus by Lemma~\ref{l:rectangle} $\lambda = \lambda_{m|n}$ for some
$m$ and $n$ with $m-n=r$.
The tensor ideal $\mathcal J_{m|n}$ of $\mathcal F_r$ generated by $c_\lambda$
is then contained in $\mathcal J$ and hence by definition of $(m_0,n_0)$ in $\mathcal J_{m_0|n_0}$,
contradicting the assumption that $c_\lambda \notin \mathcal J_{m_0|n_0}$.
Thus $c_\lambda \in \mathcal J$ implies $c_\lambda \in \mathcal J_{m_0|n_0}$ for every partition
$\lambda$ of $d_0+1$, so that \eqref{e:Jmnequ} holds for $d = d_0 + 1$.
\end{proof}
By Lemma~\ref{l:abfracclose} we have $\kappa(\Mod(k)) = k$.
Hence $\kappa(\mathcal F_r/\mathcal J_{m|n}) = k$ for every integer $r$ and $m|n$ with $m - n = r$,
because there is a faithful $k$\nobreakdash-\hspace{0pt} tensor functor from $\mathcal F_r/\mathcal J_{m|n}$ to $\Mod(k)$.
By \eqref{e:Jmnstrincl} and Lemma~\ref{l:idealJmn}, the intersection of the
$\mathcal J_{m|n} \subset \mathcal F_r$ is $0$.
It thus follows from Lemma~\ref{l:idealJmn} that $\kappa(\mathcal F_r/\mathcal J) = k$ for every
tensor ideal $\mathcal J \ne \mathcal F_r$ of $\mathcal F_r$.
Let $\mathbf r = (r_\gamma)_{\gamma \in \Gamma}$ be a family of elements of $k$.
We denote by $\mathcal F_{\mathbf r}$ the free $k$\nobreakdash-\hspace{0pt} tensor category on a family
$(N_\gamma)_{\gamma \in \Gamma}$ of dualisable
objects with $N_\gamma$ of rank $r_\gamma$.
If $(M_\gamma)_{\gamma \in \Gamma}$ is a family of dualisable objects with
$M_\gamma$ of rank $r_\gamma$ in a $k$\nobreakdash-\hspace{0pt} tensor category $\mathcal C$,
there is then a $k$\nobreakdash-\hspace{0pt} tensor functor from $\mathcal F_{\mathbf r}$ to $\mathcal C$ that sends
$N_\gamma$ to $M_\gamma$ for each $\gamma$,
and if $T_1$ and $T_2$ are tensor functors
from $\mathcal F_{\mathbf r}$ to $\mathcal C$ and $(\theta_\gamma)_{\gamma \in \Gamma}$ is a family
with $\theta_\gamma$ an isomorphism from
$T_1(N_\gamma)$ to $T_2(N_\gamma)$,
there is a unique tensor isomorphism $\varphi$ from $T_1$ to $T_2$ with
$\varphi_{N_\gamma} = \theta_\gamma$.
Any $\Gamma' \subset \Gamma$ defines a $k$\nobreakdash-\hspace{0pt} tensor functor $\mathcal F_{\mathbf r'} \to \mathcal F_{\mathbf r}$
with $\mathbf r' = (r_\gamma)_{\gamma \in \Gamma'}$ which sends $N_\gamma$ to $N_\gamma$
for $\gamma \in \Gamma'$.
The $k$\nobreakdash-\hspace{0pt} tensor functors $\mathcal F_{\mathbf r'} \to \mathcal F_{\mathbf r}$ and $\mathcal F_{\mathbf r''} \to \mathcal F_{\mathbf r}$
given by a decomposition $\Gamma = \Gamma' \amalg \Gamma''$ define as above
a $k$\nobreakdash-\hspace{0pt} tensor functor from $\mathcal F_{\mathbf r'} \otimes_k \mathcal F_{\mathbf r''}$ to $\mathcal F_{\mathbf r}$,
which by the universal properties is an equivalence.
Since $\End_{\mathcal F_{\mathbf r''}}(\mathbf 1) = k$, it follows that $\mathcal F_{\mathbf r'} \to \mathcal F_{\mathbf r}$
is fully faithful.
Thus we may identify $\mathcal F_{\mathbf r'}$ with the strictly full rigid $k$\nobreakdash-\hspace{0pt} tensor subcategory
of $\mathcal F_{\mathbf r}$ generated by the $N_\gamma$ for $\gamma \in \Gamma'$,
and $\mathcal F_{\mathbf r}$ is then the filtered union of the $\mathcal F_{\mathbf r'}$ for
$\Gamma' \subset \Gamma$ finite.
It also follows that if $\Gamma = \{1,2, \dots, t\}$ is finite, then the $k$\nobreakdash-\hspace{0pt} tensor functor
\begin{equation}\label{e:freetens}
\mathcal F_{r_1} \otimes_k \dots \otimes_k \mathcal F_{r_t} \to \mathcal F_{\mathbf r}
\end{equation}
defined by the embeddings $\mathcal F_{r_i} \to \mathcal F_{\mathbf r}$ is an equivalence.
Suppose that $\mathbf r = (r_\gamma)_{\gamma \in \Gamma}$ is a family of integers.
Let $\mathbf m|\mathbf n = (m_\gamma|n_\gamma)_{\gamma \in \Gamma}$ be a family of pairs
of integers $\ge 0$ with $\mathbf m - \mathbf n = \mathbf r$.
There exists a tensor functor from $\mathcal F_{\mathbf r}$ to the category
of super $k$\nobreakdash-\hspace{0pt} vector spaces, unique up to tensor isomorphism, that sends
$N_\gamma$ to a super $k$\nobreakdash-\hspace{0pt} vector space of dimension $m_\gamma|n_\gamma$.
Its kernel is a tensor ideal
\begin{equation*}
\mathcal J_{\mathbf m|\mathbf n}
\end{equation*}
of $\mathcal F_{\mathbf r}$.
A tensor ideal $\mathcal J$ in a tensor category $\mathcal C$ will be called \emph{prime} if $\mathcal C/\mathcal J$
is integral.
\begin{lem}\label{l:prime}
Let $\mathbf r = (r_\gamma)_{\gamma \in \Gamma}$ be a family of integers and $\mathcal J$ be
a prime tensor ideal of $\mathcal F_{\mathbf r}$.
For each $\gamma \in \Gamma$, denote by $\mathcal J_\gamma$ the restriction of $\mathcal J$ to
the full tensor subcategory $\mathcal F_{r_\gamma}$ of $\mathcal F_{\mathbf r}$ associated to $\gamma$.
\begin{enumerate}
\item\label{i:primegen}
$\mathcal J$ is the tensor ideal of $\mathcal F_{\mathbf r}$ generated by the $\mathcal J_\gamma$ for
$\gamma \in \Gamma$.
\item\label{i:primetens}
If $\Gamma = \{1,2, \dots, t\}$ is finite, then the $k$\nobreakdash-\hspace{0pt} tensor functor
\begin{equation*}
\mathcal F_{r_1}/\mathcal J_1 \otimes_k \dots \otimes_k \mathcal F_{r_t}/\mathcal J_t
\to \mathcal F_{\mathbf r}/\mathcal J
\end{equation*}
induced by the embeddings $\mathcal F_{r_i} \to \mathcal F_{\mathbf r}$ is an equivalence.
\item\label{i:primemn}
If $\mathcal J_\gamma \ne 0$ for each $\gamma \in \Gamma$, then $\mathcal J = \mathcal J_{\mathbf m|\mathbf n}$
for a unique $\mathbf m|\mathbf n$ with $\mathbf m - \mathbf n = \mathbf r$.
\end{enumerate}
\end{lem}
\begin{proof}
\ref{i:primetens}
The fullness and essential surjectivity follow from those of \eqref{e:freetens}.
Since $\kappa(\mathcal F_{r_i}/\mathcal J_i) = k$ and $\mathcal F_{r_i}/\mathcal J_i \to \mathcal F_{\mathbf r}/\mathcal J$ is faithful for each $i$,
the faithfulness follows inductively from Lemma~\ref{l:extfaith}.
\ref{i:primegen}
Since $\mathcal F_{\mathbf r}$ is the filtered union of its full tensor subcategories
$\mathcal F_{\mathbf r'}$ associated to finite subsets $\Gamma'$ of $\Gamma$,
and since the restriction
of $\mathcal J$ to each $\mathcal F_{\mathbf r'}$ is a prime tensor ideal of $\mathcal F_{\mathbf r'}$,
we may suppose that $\Gamma = \{1,2, \dots, t\}$ is finite.
Let $\mathcal I \subset \mathcal J$ be a tensor ideal of $\mathcal F_{\mathbf r}$ containing each $\mathcal J_i$.
By the fullness and essential surjectivity of \eqref{e:freetens}, the equivalence of
\ref{i:primetens} factors through an equivalence with target $\mathcal F_{\mathbf r}/\mathcal I$.
Thus $\mathcal I = \mathcal J$.
\ref{i:primemn}
By Lemma~\ref{l:idealJmn}, for each $\gamma$ we have $\mathcal J_\gamma = \mathcal J_{m_\gamma|n_\gamma}$
for a unique $m_\gamma|n_\gamma$ with $m_\gamma - n_\gamma = r_\gamma$.
Since the restriction to $\mathcal F_{r_\gamma}$ of $\mathcal J_{\mathbf m'|\mathbf n'}$
with $\mathbf m'|\mathbf n' = (m'{}\!_\gamma|n'{}\!_\gamma)_{\gamma \in \Gamma}$ is
$\mathcal J_{m'{}\!_\gamma|n'{}\!_\gamma}$, the required result thus follows from \ref{i:primegen}.
\end{proof}
Let $\mathbf m|\mathbf n = (m_\gamma|n_\gamma)_{\gamma \in \Gamma}$ be a family of pairs
of integers $\ge 0$.
We write
\begin{equation*}
\mathcal F_{\mathbf m|\mathbf n} = \mathcal F_{\mathbf m - \mathbf n}/\mathcal J_{\mathbf m|\mathbf n}.
\end{equation*}
There exists a $k$\nobreakdash-\hspace{0pt} tensor functor
\begin{equation}\label{e:freeGL}
\mathcal F_{\mathbf m|\mathbf n} \to \Mod_{\mathrm {GL}_{\mathbf m|\mathbf n},\varepsilon}(k),
\end{equation}
unique up to tensor isomorphism, which sends $N_\gamma$ to the standard
representation $E_\gamma$ of the factor $\mathrm {GL}_{m_\gamma|n_\gamma}$ at $\gamma$.
Indeed composing with the forgetful $k$\nobreakdash-\hspace{0pt} tensor functor from
$\Mod_{\mathrm {GL}_{\mathbf m|\mathbf n},\varepsilon}(k)$ to super $k$\nobreakdash-\hspace{0pt} vector spaces shows that the
$k$\nobreakdash-\hspace{0pt} tensor functor from $\mathcal F_{\mathbf m - \mathbf n}$ to $\Mod_{\mathrm {GL}_{\mathbf m|\mathbf n},\varepsilon}(k)$
that sends $N_\gamma$ to $E_\gamma$ has kernel $\mathcal J_{\mathbf m|\mathbf n}$.
\begin{lem}\label{l:freeGLff}
The $k$\nobreakdash-\hspace{0pt} tensor functor \eqref{e:freeGL} is fully faithful.
\end{lem}
\begin{proof}
Since the $k$\nobreakdash-\hspace{0pt} tensor functor $T:\mathcal F_{\mathbf m - \mathbf n} \to \Mod_{\mathrm {GL}_{\mathbf m|\mathbf n},\varepsilon}(k)$
with $T(N_\gamma) = E_\gamma$ has kernel $\mathcal J_{\mathbf m|\mathbf n}$,
it need only be shown that $T$ is full.
Writing $\mathcal F_{\mathbf m - \mathbf n}$ as the filtered union of its full
$k$\nobreakdash-\hspace{0pt} tensor subcategories $\mathcal F_{\mathbf m' - \mathbf n'}$ as $\mathbf m'|\mathbf n'$ runs over the
finite subfamilies of $\mathbf m|\mathbf n$, we may assume that $\Gamma$ is finite.
By the equivalence \eqref{e:freetens} and Lemma~\ref{l:Mhomtens}, we may further
assume that the family $\mathbf m|\mathbf n$ reduces to a single member $m|n$.
After dualising, it is then enough to show that
$\mathcal F_{m-n} \to \Mod_{\mathrm {GL}_{m|n},\varepsilon}(k)$ sending $N$ to $E$ is surjective
on hom spaces $\Hom(N^{\otimes r},N^{\otimes s})$.
When $r \ne s$ this is clear because $\Hom(E^{\otimes r},E^{\otimes s})$ is $0$,
and when $r = s$ it follows from Lemma~\ref{l:standend}.
\end{proof}
\begin{lem}\label{l:Fmnfracclose}
The $k$\nobreakdash-\hspace{0pt} tensor category $\mathcal F_{\mathbf m|\mathbf n}$ is fractionally closed.
\end{lem}
\begin{proof}
The $k$\nobreakdash-\hspace{0pt} tensor category $\Mod_{\mathrm {GL}_{\mathbf m|\mathbf n},\varepsilon}(k)$ is integral,
and by Lemma~\ref{l:abfracclose} it is fractionally closed.
Since \eqref{e:freeGL} is fully faithful by Lemma~\ref{l:freeGLff},
it is regular, and \eqref{e:fracclose}
is an isomorphism for $\mathcal C = \mathcal F_{\mathbf m|\mathbf n}$ and every $C,C'$ and regular $f$ in $\mathcal C$.
\end{proof}
\begin{lem}\label{l:rankinteger}
Let $\mathcal C$ be a $k$\nobreakdash-\hspace{0pt} tensor category with $\End_{\mathcal C}(\mathbf 1)$ reduced and indecomposable and
$M$ be a dualisable object in $\mathcal C$.
Suppose that for some $d$ there
exists an element $\ne 0$ of $k[\mathfrak{S}_d]$ which acts as $0$ on $M^{\otimes d}$.
Then the rank of $M$ is an integer.
\end{lem}
\begin{proof}
Since $k[\mathfrak{S}_d] \to \End_{\mathcal C}(M^{\otimes d})$ has non-zero kernel,
there exists a partition $\lambda$ of $d$ such that the primitive idempotent $e_\lambda$
associated to $\lambda$ acts as $0$ on $M^{\otimes d}$.
Increasing $d$ if necessary, we may suppose that there is an integer $m_0$ such that
$d = m_0{}\!^2$ and $\lambda = (m_0, m_0, \dots, m_0)$ is square with
$m_0$ rows and columns.
Let $\alpha$ be an element of $k[\mathfrak{S}_d]$.
Reducing to the case where $\alpha$ is an element of $\mathfrak{S}_d$ and successively contracting
shows that there is a polynomial $p_\alpha(t)$ in $k[t]$ with the following property:
If $N$ is a dualisable object of rank $r$ in a $k$\nobreakdash-\hspace{0pt} tensor category and $f$ is the endomorphism of
$N^{\otimes d}$ induced by $\alpha$, then
\begin{equation*}
\tr(f) = p_\alpha(r).
\end{equation*}
Further $p_\alpha(t)$ is unique: taking for $N$ a $k$\nobreakdash-\hspace{0pt} vector space of dimension $m$
shows that the value of $p_\alpha(t)$ is determined for $t$ any integer $m \ge 0$.
Let $V$ be a $k$\nobreakdash-\hspace{0pt} vector space of dimension $m$.
Then the trace $\tau(m)$ of the endomorphism of $V^{\otimes d}$ defined by $e_\lambda$
is the dimension of the $k$\nobreakdash-\hspace{0pt} vector space $S_\lambda V$ obtained by applying the Schur functor
$S_\lambda$ to $V$.
Thus by \cite[Theorem~6.3(1)]{FulHar}, if $m \ge 2m_0$
\begin{equation*}
\tau(m) = \prod_{1 \le i < j \le m}\frac{\lambda_i - \lambda_j + j - i}{j - i}
= \prod_{l > 0} \left( \frac{m_0+l}{l} \right)^{\rho(l)} = \prod_{l > 0} l^{\rho(l-m_0) - \rho(l)}
\end{equation*}
where $\rho(l) = \min \{l,m-l,m_0\}$ when $0 < l < m$, and $\rho(l) = 0$ otherwise.
Hence
\begin{equation*}
p_{e_\lambda}(t) = \prod_{-m_0 < i < m_0}(m_0-i)^{|i|-m_0}(t-i)^{m_0-|i|},
\end{equation*}
because then $p_{e_\lambda}(m) = \tau(m)$ for $m \ge 3m_0$.
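For example, for $m_0 = 2$, so that $\lambda = (2,2)$, we obtain
\begin{equation*}
p_{e_\lambda}(t) = \tfrac{1}{12}(t+1)t^2(t-1),
\end{equation*}
whose value $\tfrac{1}{12}m^2(m^2-1)$ at an integer $m \ge 0$ is the dimension of $S_{(2,2)}V$
for $V$ a $k$\nobreakdash-\hspace{0pt} vector space of dimension $m$.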
If $M$ has rank $r$ then $p_{e_\lambda}(r) = 0$, because $e_\lambda$ acts as $0$ on
$M^{\otimes d}$.
Since $\End(\mathbf 1)$ is reduced and indecomposable, the result follows.
\end{proof}
\begin{prop}\label{p:FmnCesssurj}
Let $\mathcal C$ be an essentially small, integral, rigid $k$\nobreakdash-\hspace{0pt} tensor category.
Suppose that for each object $M$ of $\mathcal C$ there exists a $d$ such that
some element $\ne 0$ of $k[\mathfrak{S}_d]$ acts as $0$ on $M^{\otimes d}$.
Then for some family $\mathbf m|\mathbf n$ of pairs of integers $\ge 0$ there exists a
faithful essentially surjective $k$\nobreakdash-\hspace{0pt} tensor functor from $\mathcal F_{\mathbf m|\mathbf n}$
to $\mathcal C$.
\end{prop}
\begin{proof}
Let $(M_\gamma)_{\gamma \in \Gamma}$ be a (small) family of objects of $\mathcal C$ with
every object of $\mathcal C$ isomorphic to some $M_\gamma$.
By Lemma~\ref{l:rankinteger}, the rank $r_\gamma$ of $M_\gamma$ is an integer.
If $\mathbf r = (r_\gamma)_{\gamma \in \Gamma}$
there exists an essentially surjective $k$\nobreakdash-\hspace{0pt} tensor functor
from $\mathcal F_{\mathbf r}$ to $\mathcal C$ which sends $N_\gamma$ to $M_\gamma$ for each $\gamma$.
Its kernel $\mathcal J$ is prime, and
by hypothesis the restriction of $\mathcal J$ to the full tensor subcategory
$\mathcal F_{r_\gamma}$ of $\mathcal F_{\mathbf r}$ associated to $\gamma$ is $\ne 0$ for each $\gamma$.
It thus follows from Lemma~\ref{l:prime}\ref{i:primemn} that $\mathcal J = \mathcal J_{\mathbf m|\mathbf n}$
for some $\mathbf m|\mathbf n$ with $\mathbf m - \mathbf n = \mathbf r$.
\end{proof}
\section{Functor categories}\label{s:fun}
In this section we describe how the usual additive Yoneda embedding of an additive category
extends to an embedding of a tensor category into an abelian tensor category.
For tensor categories with one object, identified with commutative rings,
this is the embedding into the tensor category of modules over the ring.
The main result is Theorem~\ref{t:Fhatequiv}, which is crucial for the geometric description
in Section~\ref{s:mod} of appropriate functor categories modulo torsion.
Let $\mathcal C$ be an essentially small additive category.
We denote by
\begin{equation*}
\widehat{\mathcal C}
\end{equation*}
the additive category of additive functors from $\mathcal C^\mathrm{op}$ to $\mathrm{Ab}$.
The category $\widehat{\mathcal C}$ is abelian, with (small) limits and colimits, computed argumentwise.
Finite limits in $\widehat{\mathcal C}$ commute with filtered colimits.
For every object $A$ in $\mathcal C$ we have an object
\begin{equation*}
h_A = \mathcal C(-,A)
\end{equation*}
in $\widehat{\mathcal C}$, and evaluation at $1_A$ gives the Yoneda isomorphism
\begin{equation}\label{e:Yonedaiso}
\widehat{\mathcal C}(h_A,M) \xrightarrow{\sim} M(A),
\end{equation}
natural in $A$ and $M$.
We have the fully faithful Yoneda embedding
\begin{equation}\label{e:Yoneda}
h_-:\mathcal C \to \widehat{\mathcal C}
\end{equation}
of additive categories, which preserves limits.
The $h_A$ for $A$ in a skeleton of $\mathcal C$ form a small set of generators for $\widehat{\mathcal C}$.
It follows that every object of $\widehat{\mathcal C}$ is the quotient of a small coproduct of objects $h_A$,
and hence that every object $M$ of $\widehat{\mathcal C}$ is a cokernel
\begin{equation}\label{e:pres}
\coprod_{\delta \in \Delta} h_{B_\delta} \to \coprod_{\gamma \in \Gamma} h_{A_\gamma} \to M \to 0
\end{equation}
for some small families $(A_\gamma)_{\gamma \in \Gamma}$ and $(B_\delta)_{\delta \in \Delta}$.
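For example, when $\mathcal C$ has a single object $A$, so that $\mathcal C$ may be identified with the ring
$R = \End_{\mathcal C}(A)$, the category $\widehat{\mathcal C}$ is the category of right $R$\nobreakdash-\hspace{0pt} modules,
$h_A$ is $R$ as a module over itself, and \eqref{e:pres} is a presentation of $M$ by free modules.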
When $\mathcal C$ has direct sums, the embedding $h_-:\mathcal C \to \widehat{\mathcal C}$ is dense,
i.e.\ every $M$ in $\widehat{\mathcal C}$ can be expressed as a canonical colimit $\colim_{(A,a),a \in M(A)}h_A$
over the comma category $h_-/M$.
Indeed if we write $M'$ for the colimit, then given $(A_i,a_i)$ and $f_i:B \to A_i$ in $h_{A_i}(B)$
such that the $M(f_i)(a_i)$ have sum $0$ in $M(B)$, the images of the $f_i$ in $M'(B)$ also
have sum $0$, because the $f_i$ define a morphism from $(B,0)$ to $(\bigoplus_iA_i,\sum_ia_i)$
in the comma category which sends $1_B$ in $h_B(B)$ to $(f_i)$ in
$h_{\bigoplus A_i}(B)$.
When $\mathcal C$ has finite colimits, an object of $\widehat{\mathcal C}$ is
a left exact functor $\mathcal C^\mathrm{op} \to \mathrm{Ab}$ if and only if it is a filtered colimit
of objects $h_A$.
Indeed the comma category $h_-/M$ is essentially small and for a left exact $M$ it is filtered.
Every $h_A$ is projective in $\widehat{\mathcal C}$.
Thus by \eqref{e:pres} $\widehat{\mathcal C}$ has enough projectives,
and an object in $\widehat{\mathcal C}$ is projective
if and only if it is a direct summand of a coproduct of objects $h_A$.
An object $M$ in a category with filtered colimits will be said to be \emph{of finite presentation}
(resp.\ \emph{of finite type}) if $\Hom(M,-)$ preserves filtered colimits (resp.\ filtered colimits
with coprojections monomorphisms).
In an abelian category, an object $M$ is projective of finite type if and only if it is projective
of finite presentation if and only if $\Hom(M,-)$ is cocontinuous.
It follows from \eqref{e:pres} that an object $M$ in $\widehat{\mathcal C}$ is of finite presentation
if and only if it is a cokernel
\begin{equation}\label{e:finpres}
\bigoplus_{j=1}^n h_{B_j} \to \bigoplus_{i=1}^m h_{A_i} \to M \to 0
\end{equation}
for some finite families $(A_i)$ and $(B_j)$ of objects of $\mathcal C$,
and that $M$ is of finite type if and only if it is a quotient
\begin{equation}\label{e:fintype}
\bigoplus_{i=1}^m h_{A_i} \to M \to 0
\end{equation}
for some finite family $(A_i)$ of objects of $\mathcal C$.
By \eqref{e:finpres},
every object of $\widehat{\mathcal C}$ is a filtered colimit of objects of finite presentation,
and by \eqref{e:fintype} every object of $\widehat{\mathcal C}$ is the filtered
colimit of its subobjects of finite type.
By \eqref{e:fintype}, an object of $\widehat{\mathcal C}$ is projective of finite type if
and only if it is a direct summand of a finite direct sum of objects $h_A$.
The full subcategory of $\widehat{\mathcal C}$ consisting of such objects is thus the pseudo-abelian hull of $\mathcal C$.
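In the one-object case above, with $\mathcal C$ identified with the ring $R$, the pseudo-abelian hull of
$\mathcal C$ is thus identified with the category of finitely generated projective right $R$\nobreakdash-\hspace{0pt} modules.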
The coproduct of the $ h_A$ for $A$ in a skeleton of $\mathcal C$ is a generator for $\widehat{\mathcal C}$.
The category $\widehat{\mathcal C}$ also has a cogenerator: if $L$ is a generator for
$\widehat{\mathcal C^\mathrm{op}}$ and if for $M$ an object in $\widehat{\mathcal C}$ we write $M^\dagger$ for the object
$\Hom_{\mathbf Z}(M(-),\mathbf Q/\mathbf Z)$ in $\widehat{\mathcal C^\mathrm{op}}$,
then $M^\dagger$ is a quotient of a small coproduct of copies of $L$,
so that $M^{\dagger\dagger}$ and hence $M$ is a subobject of a small product
of copies of $L^\dagger$.
Since $\widehat{\mathcal C}$ is complete and well-powered, it follows that any continuous functor from
$\widehat{\mathcal C}$ to $\mathrm{Ab}$ is representable.
If $\mathcal C$ has a structure of tensor category,
we define as follows a structure of tensor category on $\widehat{\mathcal C}$.
Given objects $L$, $M$ and $N$ in $\widehat{\mathcal C}$, call a family of biadditive maps
\begin{equation*}
M(A) \times N(B) \to L(A \otimes B)
\end{equation*}
which is natural in the objects $A$ and $B$ in $\mathcal C$ a \emph{bimorphism} from $(M,N)$ to $L$.
The tensor product $M \otimes N$ is then defined as the
target of the universal bimorphism from $(M,N)$,
which exists because the relevant functor is representable.
If we similarly define trimorphisms, then both $(M \otimes N) \otimes P$ and $M \otimes (N \otimes P)$
are the target of a universal trimorphism from $(M,N,P)$, and the associativity constraint is the
isomorphism between them defined by the universal property.
The symmetries of $\widehat{\mathcal C}$ are defined similarly, and the unit is $h_{\mathbf 1}$.
The required compatibilities hold by the universal property of the tensor product.
We may assume that the tensor product is chosen so that the unit $\mathbf 1 = h_{\mathbf 1}$ is strict.
The tensor product of $\widehat{\mathcal C}$ is cocontinuous.
The bimorphism from $(h_A,h_B)$ to $h_{A \otimes B}$ that sends $(f,g)$ in $h_A(A') \times h_B(B')$
to $f \otimes g$ in $h_{A \otimes B}(A' \otimes B')$ defines a structure
\begin{equation}\label{e:tensfun}
h_A \otimes h_B \xrightarrow{\sim} h_{A \otimes B}
\end{equation}
of tensor functor on the embedding \eqref{e:Yoneda}.
If $\mathcal C$ has finite colimits, then the full subcategory of $\widehat{\mathcal C}$ consisting
of the left exact functors is a tensor subcategory.
The image of $(a,b)$ under the component
\begin{equation*}
M(A) \times N(B) \to (M \otimes N)(A \otimes B)
\end{equation*}
of the universal bimorphism will be written $a \otimes b$.
Modulo \eqref{e:Yonedaiso} and \eqref{e:tensfun},
$a \otimes b$ is the tensor product of the morphisms $a$ and $b$ in $\widehat{\mathcal C}$.
Let $\mathcal C$ be an essentially small additive category, $\mathcal D$ be a cocomplete additive category,
and $T:\mathcal C \to \mathcal D$ be an additive functor.
Then the additive left Kan extension
\begin{equation*}
T^*:\widehat{\mathcal C} \to \mathcal D
\end{equation*}
of $T$ along $\mathcal C \to \widehat{\mathcal C}$ exists.
It is given by the coend formula
\begin{equation}\label{e:addKanT}
T^*(M) = \int^{A \in \mathcal C} M(A) \otimes_{\mathbf Z} T(A),
\end{equation}
and it is cocontinuous and is preserved by any cocontinuous functor from $\mathcal D$
to a cocomplete additive category.
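In the one-object case, with $\mathcal C$ identified with the ring $R$ and $\mathcal D = \mathrm{Ab}$,
an additive functor $T:\mathcal C \to \mathcal D$ is given by the left $R$\nobreakdash-\hspace{0pt} module $N = T(A)$,
and \eqref{e:addKanT} reduces to the classical formula
\begin{equation*}
T^*(M) = M \otimes_R N.
\end{equation*}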
Since $h_-$ is fully faithful, the universal natural transformation
from $T$ to $T^*h_-$ is a natural isomorphism
\begin{equation}\label{e:Th}
T \xrightarrow{\sim} T^*h_-.
\end{equation}
By cocontinuity of $h_-{}\!^*$ and \eqref{e:pres},
the canonical natural transformation from $h_-{}\!^*$ to $\Id_{\widehat{\mathcal C}}$
is an isomorphism.
Thus $\Id_{\widehat{\mathcal C}}$ is the additive left Kan extension
of $h_-$ along itself, with universal natural transformation the identity.
It follows that composition with $h_-$ defines an equivalence from cocontinuous
functors $\widehat{\mathcal C} \to \mathcal D$ to additive functors $\mathcal C \to \mathcal D$, with quasi-inverse
given by additive left Kan extension.
The functor $T^*$ has a right adjoint
\begin{equation*}
T_*:\mathcal D \to \widehat{\mathcal C}
\end{equation*}
with $T_*(N):\mathcal C^{\mathrm{op}} \to \mathrm{Ab}$ given by
\begin{equation}\label{e:Tstardef}
T_*(N) = \mathcal D(T(-),N)
\end{equation}
where the unit $\Id_{\widehat{\mathcal C}} \to T_*T^*$ corresponds under the universal property of
the additive left Kan extension $\Id_{\widehat{\mathcal C}}$ of $h_-$ along itself to the composite
\begin{equation}\label{e:unitdef}
h_- \to T_*T \xrightarrow{\sim} T_*T^*h_-
\end{equation}
in which the isomorphism is $T_*$ applied to \eqref{e:Th} and the first arrow has component
\begin{equation*}
h_A \to T_*T(A) = \mathcal D(T(-),T(A))
\end{equation*}
at $A$ defined by $1_{T(A)}$.
That $\Id_{\widehat{\mathcal C}} \to T_*T^*$ so defined induces an isomorphism
\begin{equation*}
\mathcal D(T^*(M),N) \xrightarrow{\sim} \widehat{\mathcal C}(M,T_*(N))
\end{equation*}
for $M$ in $\widehat{\mathcal C}$ and $N$ in $\mathcal D$ can be seen by reducing by cocontinuity
and \eqref{e:pres} to the case $M = h_A$, where we have isomorphisms
\begin{equation*}
\mathcal D(T^*(h_A),N) \xrightarrow{\sim} \mathcal D(T(A),N) \xrightarrow{\sim} \widehat{\mathcal C}(h_A,T_*(N))
\end{equation*}
induced by \eqref{e:unitdef}.
Let $\varphi:T \to T'$ be a natural transformation of additive functors $\mathcal C \to \mathcal D$.
Denote by
\begin{equation*}
\varphi^*:T^* \to T'{}^*
\end{equation*}
the unique natural transformation
such that $\varphi$ and $\varphi^*h_-$ are compatible with \eqref{e:Th} and the corresponding natural
isomorphism for $T'$.
The natural transformation
\begin{equation*}
\varphi_*:T'{}\!_* \to T_*
\end{equation*}
induced on right adjoints has component $\varphi_*{}_M$ at $M$ given by
\begin{equation*}
(\varphi_*{}_M)_A = \mathcal D(\varphi_A,M):\mathcal D(T'(A),M) \to \mathcal D(T(A),M).
\end{equation*}
This follows from the diagram
\begin{equation*}
\xymatrix{
h_A \ar [d] \ar[r] & T'{}\!_*T'(A) \ar[d]^{\varphi_*{}_{T'(A)}} \ar[r] & T'{}\!_*(M) \ar[d]^{\varphi_*{}_M} \\
T_*T(A) \ar[r]^{T_*(\varphi_A)} & T_*T'(A) \ar[r] & T_*(M)
}
\end{equation*}
defined by a morphism $T'(A) \to M$, where the right square commutes by naturality of $\varphi_*$ and
the commutativity of the left square can be seen starting from the compatibility of $T_*\varphi^*$ and
$\varphi_*T'{}^*$ with the units by evaluating at $h_A$ and using the isomorphisms of the form \eqref{e:Th}
and their compatibility with $\varphi$ and $\varphi^*h_-$.
Now let $\mathcal C$ be an essentially small tensor category, $\mathcal D$ be a cocomplete tensor category,
and $T:\mathcal C \to \mathcal D$ be a tensor functor.
By cocontinuity of $T^*:\widehat{\mathcal C} \to \mathcal D$, it follows either directly from \eqref{e:addKanT}
or after first replacing $\mathcal C$ by its additive hull from the density of
$h_-:\mathcal C \to \widehat{\mathcal C}$ that $T^*$ has a unique structure of
tensor functor such that \eqref{e:Th} is a tensor isomorphism.
We may suppose after replacing $T^*$ if necessary by an isomorphic functor that the component of
\eqref{e:Th} at $\mathbf 1$ is the identity, so that $T^*$ preserves the unit strictly.
There then exists a unique structure of lax tensor functor on $T_*:\mathcal D \to \widehat{\mathcal C}$ such that
the unit and counit for $T^*$ and $T_*$ are compatible with the lax tensor structures.
Explicitly, since $T^*$ and $T$ preserve $\mathbf 1$ strictly, the unit
\begin{equation*}
h_{\mathbf 1} \to T_*(\mathbf 1) = \mathcal D(T(-),\mathbf 1)
\end{equation*}
of $T_*$ is \eqref{e:unitdef} evaluated at $\mathbf 1$, and hence is defined by $1_{\mathbf 1} \in \mathcal D(\mathbf 1,\mathbf 1)$.
The lax tensor structure of $T_*$ is given using \eqref{e:Tstardef} by the biadditive maps
\begin{equation*}
\mathcal D(T(A),M) \times \mathcal D(T(B),N) \to \mathcal D(T(A \otimes B),M \otimes N)
\end{equation*}
which send $(a,b)$, modulo the isomorphism defining the tensor structure of $T$, to $a \otimes b$.
This can be seen from the diagram
\begin{equation*}
\xymatrix@R=2pc@C=3pc{
& h_A \otimes h_B \ar[dl] \ar[r] & T_*T(A) \otimes T_*T(B)
\ar[dl]!<1.5em,0ex> \ar[d] \ar[r]^-{\scriptscriptstyle{T_*(a) \otimes T_*(b)}} & T_*(M) \otimes T_*(N) \ar[d] \\
h_{A \otimes B} \ar[r] & T_*T(A \otimes B) \ar[r]^-{\sim} & T_*(T(A) \otimes T(B))
\ar[r]^-{\scriptscriptstyle{T_*(a \otimes b)}} & T_*(M \otimes N)
}
\end{equation*}
in which the parallelogram commutes because \eqref{e:unitdef}, and hence the first arrow of \eqref{e:unitdef},
is compatible with the tensor structures, the triangle commutes by definition of the tensor structure
of a composite functor, and the square commutes by naturality of the tensor structure of $T_*$.
Let $\varphi:T \to T'$ be a natural transformation of tensor functors $\mathcal C \to \mathcal D$ which is compatible with the
tensor structures.
Then by \eqref{e:pres}, the compatibility of the isomorphisms of the form \eqref{e:Th}
with $\varphi$ and $\varphi^*h_-$, and cocontinuity of $T^*$, the natural transformation $\varphi^*:T^* \to T'{}^*$
is compatible with the tensor structures.
It follows that $\varphi_*:T'{}\!_* \to T_*$ is compatible with the tensor structures.
It also follows that composition with $h_-$ defines an equivalence from
cocontinuous tensor functors $\widehat{\mathcal C} \to \mathcal D$ to tensor functors $\mathcal C \to \mathcal D$,
with quasi-inverse $T \mapsto T^*$.
Let $\mathcal A$ and $\mathcal A'$ be tensor categories, $H:\mathcal A \to \mathcal A'$ be a tensor functor,
and $H':\mathcal A' \to \mathcal A$ be a lax tensor functor right adjoint to $H$.
Given $M$ in $\mathcal A$ and $M'$ in $\mathcal A'$, we have a canonical morphism
\begin{equation}\label{e:proj}
H'(M') \otimes M \to H'(M' \otimes H(M))
\end{equation}
in $\mathcal A$, natural in $M$ and $M'$, given by the composite
\begin{equation}\label{e:projdef}
H'(M') \otimes M \to H'(M') \otimes H'H(M) \to H'(M' \otimes H(M))
\end{equation}
with the first arrow defined using the unit and the second by the lax tensor structure of $H'$.
It is adjoint to the morphism
\begin{equation}\label{e:projadj}
H(H'(M') \otimes M) \xrightarrow{\sim} HH'(M') \otimes H(M) \to M' \otimes H(M)
\end{equation}
defined using the tensor structure of $H$ and the counit.
If $M$ is dualisable then \eqref{e:proj} is an isomorphism, as follows from the top row of
the commutative diagram
\begin{equation*}
\xymatrix@C=1.5pc{
\mathcal A(L,H'(M') \otimes M) \ar[d]^{\wr} \ar[r] & \mathcal A'(H(L),HH'(M') \otimes H(M))
\ar[d] \ar[r] & \mathcal A'(H(L),M' \otimes H(M)) \ar[d]^{\wr} \\
\mathcal A(L \otimes M^{\vee},H'(M')) \ar[r] & \mathcal A'(H(L) \otimes H(M)^{\vee},HH'(M'))
\ar[r] & \mathcal A'(H(L) \otimes H(M)^{\vee},M')
}
\end{equation*}
with arrows natural in $L$, where the bottom row, modulo the tensor structure of $H$, is the adjunction isomorphism.
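In the one-object case of a homomorphism $R \to R'$ of commutative rings, with $H$ the base change
$- \otimes_R R'$ and $H'$ the restriction of scalars, \eqref{e:proj} is the classical projection formula
\begin{equation*}
N' \otimes_R M \to N' \otimes_{R'} (M \otimes_R R'),
\end{equation*}
an isomorphism for $M$ finitely generated and projective.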
Suppose $\mathcal A$ is abelian with $\otimes$ right exact.
For any commutative algebra $R'$ in $\mathcal A'$, the lax tensor structure of $H'$ defines on $H'(R')$ a
structure of commutative algebra in $\mathcal A$, and similarly for modules over $R'$ in $\mathcal A'$.
In particular, since $\mathbf 1$ has a unique structure of commutative algebra in $\mathcal A'$ and every $M'$ in $\mathcal A'$
has a unique structure of module over $\mathbf 1$, we have
a commutative algebra $H'(\mathbf 1)$ in $\mathcal A$ and a canonical structure of $H'(\mathbf 1)$\nobreakdash-\hspace{0pt} module on every $H'(M')$.
Thus $H'$ factors as a lax tensor functor
\begin{equation}\label{e:EilMoore}
\mathcal A' \to \MOD_{\mathcal A}(H'(\mathbf 1))
\end{equation}
followed by the forgetful functor.
The morphism of $H'(\mathbf 1)$\nobreakdash-\hspace{0pt} modules
\begin{equation}\label{e:EMunit}
H'(\mathbf 1) \otimes M \to H'H(M)
\end{equation}
corresponding to the unit $M \to H'H(M)$ in $\mathcal A$ is given by taking $M' = \mathbf 1$ in \eqref{e:proj},
and hence is an isomorphism for $M$ dualisable.
It is natural in $M$ and is compatible with the tensor structure of $H'(\mathbf 1) \otimes -$ and
the lax tensor structure of $H'H$.
The morphism
\begin{equation}\label{e:EMtens}
H'(M') \otimes_{H'(\mathbf 1)} H'(N') \to H'(M' \otimes N')
\end{equation}
defining the lax tensor structure of \eqref{e:EilMoore} is an isomorphism for $N' = H(M)$ with $M$ dualisable,
because the composite \eqref{e:projdef} defining \eqref{e:proj} then factors as an isomorphism from
$H'(M') \otimes M$ to $H'(M') \otimes_{H'(\mathbf 1)} H'H(M)$ followed by \eqref{e:EMtens}.
Similarly, the homomorphism
\begin{equation}\label{e:EMhom}
\mathcal A'(M',N') \to \Hom_{H'(\mathbf 1)}(H'(M'),H'(N'))
\end{equation}
defined by \eqref{e:EilMoore} is an isomorphism for $M' = H(M)$ with $M$ dualisable,
because the adjunction isomorphism from $\mathcal A'(H(M),N')$ to $\mathcal A(M,H'(N'))$ then factors as
\eqref{e:EMhom} followed by an isomorphism induced by the unit.
Let $\mathcal C$ and $\mathcal C'$ be essentially small additive categories and $F:\mathcal C \to \mathcal C'$ be an additive functor.
Then we write
\begin{equation*}
\widehat{F}:\widehat{\mathcal C} \to \widehat{\mathcal C'}
\end{equation*}
for $T^*$ with $T:\mathcal C \to \widehat{\mathcal C'}$ the composite of $h_-:\mathcal C' \to \widehat{\mathcal C'}$ with $F$, and
\begin{equation*}
F_{\wedge}:\widehat{\mathcal C'} \to \widehat{\mathcal C}
\end{equation*}
for $T_*$.
In this case, \eqref{e:Tstardef} becomes
\begin{equation*}
F_{\wedge}(N) = NF^\mathrm{op}
\end{equation*}
for $N:\mathcal C'{}^\mathrm{op} \to \mathrm{Ab}$.
Thus $F_\wedge$ is continuous and cocontinuous.
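For example, when $F$ is a homomorphism $R \to R'$ of rings, regarded as an additive functor between
one-object categories, $F_{\wedge}$ is the restriction of scalars along $F$ and $\widehat{F}$ is the
base change $- \otimes_R R'$.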
If $F$ is fully faithful, then $\widehat{F}$ is fully faithful, by \eqref{e:pres} and the fact that
$h_A$ for every $A$ in $\mathcal C$ or $\mathcal C'$ is projective of finite presentation.
Similarly we define $\widehat{\varphi}$ and $\varphi_{\wedge}$ for a natural transformation $\varphi$.
Given also an essentially small additive category $\mathcal C''$ and an additive functor $F':\mathcal C' \to \mathcal C''$,
we have $(F'F)_{\wedge} = F_{\wedge}F'{}\!_{\wedge}$.
Thus $\widehat{F'F}$ and $\widehat{F'}\widehat{F}$ are canonically isomorphic,
with composites of three or more isomorphisms satisfying the usual compatibilities.
Suppose that $\mathcal C$ and $\mathcal C'$ have structures of tensor category and $F$ has a structure of tensor functor.
Then as above $\widehat{F}$ has a structure of tensor functor and $F_{\wedge}$ of lax tensor functor,
and $\widehat{\varphi}$ and $\varphi_{\wedge}$ are compatible with the tensor structures if
the natural transformation $\varphi$ is.
Taking $\mathcal A = \widehat{\mathcal C}$, $\mathcal A' = \widehat{\mathcal C'}$, $H = \widehat{F}$,
and $H' = F_{\wedge}$ in \eqref{e:EilMoore} shows that $F_{\wedge}$ factors as a lax tensor functor
\begin{equation}\label{e:EilMoorehat}
\widehat{\mathcal C'} \to \MOD_{\widehat{\mathcal C}}(F_{\wedge}(\mathbf 1))
\end{equation}
followed by the forgetful functor.
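In the case where $F$ is a homomorphism $R \to R'$ of commutative rings, $F_{\wedge}(\mathbf 1)$ is the
$R$\nobreakdash-\hspace{0pt} algebra $R'$, and \eqref{e:EilMoorehat} is the usual identification of
$R'$\nobreakdash-\hspace{0pt} modules with $R'$\nobreakdash-\hspace{0pt} module objects in $\Mod(R)$.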
If $F'$ has a structure of tensor functor, the lax tensor functors $(F'F)_{\wedge}$ and
$F_{\wedge}F'{}\!_{\wedge}$ coincide, and the canonical isomorphism from $\widehat{F'F}$ to
$\widehat{F'}\widehat{F}$ is a tensor isomorphism.
\begin{lem}\label{l:EilMoore}
Let $\mathcal C$ and $\mathcal C'$ be essentially small tensor categories with $\mathcal C$ rigid, and $F:\mathcal C \to \mathcal C'$
be a tensor functor.
\begin{enumerate}
\item\label{i:EilMooretens}
The composite of \eqref{e:EilMoorehat} with $\widehat{F}:\widehat{\mathcal C} \to \widehat{\mathcal C'}$
is tensor isomorphic to
\begin{equation*}
F_{\wedge}(\mathbf 1) \otimes -:\widehat{\mathcal C} \to \MOD_{\widehat{\mathcal C}}(F_{\wedge}(\mathbf 1)).
\end{equation*}
\item\label{i:EilMooreequiv}
If $F$ is essentially surjective, then \eqref{e:EilMoorehat} is a tensor equivalence.
\end{enumerate}
\end{lem}
\begin{proof}
Write $\mathcal A = \widehat{\mathcal C}$, $\mathcal A' = \widehat{\mathcal C'}$, $H = \widehat{F}$ and $H' = F_{\wedge}$.
Then \eqref{e:EilMoore} is \eqref{e:EilMoorehat}, and \eqref{e:EMunit} is the component at
$M$ of a natural transformation, compatible with the tensor structures,
from $F_{\wedge}(\mathbf 1) \otimes -$ to the composite of \eqref{e:EilMoorehat} with $\widehat{F}$.
Further \eqref{e:EMunit} is an isomorphism for $M = h_A$ because $M$ is
then dualisable, and hence by \eqref{e:pres} and cocontinuity of $\otimes$, $H$ and $H'$, for every $M$.
This gives \ref{i:EilMooretens}.
Suppose that $F$ is essentially surjective.
Then \eqref{e:EMhom} is an isomorphism for $M' = h_{A'}$ and every $N'$, because
$h_{A'}$ is isomorphic to some $H(h_A)$ and $h_A$ is dualisable.
Thus by \eqref{e:pres} and cocontinuity of $H'$, \eqref{e:EMhom} is an isomorphism for every $M'$ and $N'$,
so that \eqref{e:EilMoorehat} is fully faithful.
Now \eqref{e:EilMoorehat} is cocontinuous, because $F_{\wedge}$ is.
Since by \ref{i:EilMooretens} the essential image of \eqref{e:EilMoorehat}
contains the $F_{\wedge}(\mathbf 1)$\nobreakdash-\hspace{0pt} modules $F_{\wedge}(\mathbf 1) \otimes h_A$,
it thus contains every $F_{\wedge}(\mathbf 1) \otimes M$, and hence every $F_{\wedge}(\mathbf 1)$\nobreakdash-\hspace{0pt} module.
Finally \eqref{e:EMunit} is an isomorphism for $M = \mathbf 1$,
and \eqref{e:EMtens} is an isomorphism for $N' = H(h_A)$, and hence for every $N'$.
This proves \ref{i:EilMooreequiv}.
\end{proof}
\begin{thm}\label{t:Fhatequiv}
Let $k$ be a field of characteristic $0$ and $\mathcal C$ be as in
Proposition~\textnormal{\ref{p:FmnCesssurj}}.
Then for some $\mathbf m|\mathbf n$ there exists a commutative algebra $R$ in $\widehat{\mathcal F_{\mathbf m|\mathbf n}}$ with
\begin{equation*}
R \otimes - :(\widehat{\mathcal F_{\mathbf m|\mathbf n}})_\mathrm{rig} \to \Mod_{\widehat{\mathcal F_{\mathbf m|\mathbf n}}}(R)
\end{equation*}
faithful such that $\widehat{\mathcal C}$ is $k$\nobreakdash-\hspace{0pt} tensor equivalent to $\MOD_{\widehat{\mathcal F_{\mathbf m|\mathbf n}}}(R)$.
\end{thm}
\begin{proof}
Lemma~\ref{l:EilMoore} with $F$ the $k$\nobreakdash-\hspace{0pt} tensor functor
$\mathcal F_{\mathbf m|\mathbf n} \to \mathcal C$ of Proposition~\ref{p:FmnCesssurj} gives a $k$\nobreakdash-\hspace{0pt} tensor equivalence
from $\widehat{\mathcal C}$ to $\MOD_{\widehat{\mathcal F_{\mathbf m|\mathbf n}}}(R)$ with $R = F_\wedge(\mathbf 1)$
whose composite with $\widehat{F}$ is tensor isomorphic to $R \otimes -$.
Since $F$ is faithful, the $k$\nobreakdash-\hspace{0pt} tensor functor from
$(\widehat{\mathcal F_{\mathbf m|\mathbf n}})_\mathrm{rig}$ to $(\widehat{\mathcal C})_\mathrm{rig}$ induced by
$\widehat{F}$ on pseudo-abelian hulls is faithful.
\end{proof}
\section{Torsion}\label{s:tor}
This section deals with the notion of torsion in appropriate abelian tensor
categories, which is fundamental for the construction of super Tannakian hulls.
Let $\mathcal A$ be an abelian category.
Recall that a full subcategory $\mathcal S$ of $\mathcal A$ containing $0$ is said to be a
Serre subcategory of $\mathcal A$ if for any short exact sequence in $\mathcal A$
\begin{equation}\label{e:sex}
0 \to M' \to M \to M'' \to 0,
\end{equation}
$M$ lies in $\mathcal S$ if and only if both $M'$ and $M''$ do.
Suppose that $\mathcal A$ is well-powered,
and let $\mathcal S$ be a Serre subcategory of $\mathcal A$.
Then (e.g.\ \cite[III \S~1]{Gab58}) the quotient $\mathcal A/\mathcal S$ of $\mathcal A$ by $\mathcal S$ is the abelian category
with objects those of $\mathcal A$, where
\begin{equation}\label{e:quotcolim}
(\mathcal A/\mathcal S)(M,N) = \colim_{M' \subset M, \, N' \subset N, \; M/M', N' \in \mathcal S}\mathcal A(M',N/N')
\end{equation}
with the colimit over an essentially small filtered category, and an evident composition.
We have an exact functor
\begin{equation}\label{e:quotproj}
\mathcal A \to \mathcal A/\mathcal S
\end{equation}
which is the identity on objects
and on the hom-group $\mathcal A(M,N)$ is the coprojection at $M' = M$, $N' = 0$ of \eqref{e:quotcolim}.
If $\mathcal A$ has coproducts and coproducts of monomorphisms in $\mathcal A$ are monomorphisms
(i.e.\ if $\mathcal A$ is an (AB4) category in the sense of Grothendieck)
and $\mathcal S$ is closed under formation of coproducts in $\mathcal A$, then $\mathcal A/\mathcal S$ has colimits
and \eqref{e:quotproj} preserves colimits:
it is enough to check that \eqref{e:quotproj} preserves coproducts,
and if we take $M = \coprod_{\alpha}M_{\alpha}$ in \eqref{e:quotcolim}, then every
$M' \subset M$ with $M/M'$ in $\mathcal S$ contains $\coprod_{\alpha}M'{}\!_{\alpha}$ with $M'{}\!_{\alpha}$
the kernel of $M_{\alpha} \to M/M'$ and hence $M_{\alpha}/M'{}\!_{\alpha}$ in $\mathcal S$.
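For example, the torsion groups form a Serre subcategory $\mathcal S$ of the category $\mathrm{Ab}$ of
abelian groups, closed under formation of coproducts, and $M \mapsto \mathbf Q \otimes_{\mathbf Z} M$
identifies $\mathrm{Ab}/\mathcal S$ with the category of $\mathbf Q$\nobreakdash-\hspace{0pt} vector spaces.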
An object of $\mathcal A$ has image $0$ in $\mathcal A/\mathcal S$ if and only if it lies in $\mathcal S$.
A morphism in $\mathcal A$ is an $\mathcal S$\nobreakdash-\hspace{0pt} isomorphism, i.e.\ has image in $\mathcal A/\mathcal S$ an isomorphism,
if and only if both its kernel and cokernel lie in $\mathcal S$.
A morphism in $\mathcal S$ is $\mathcal S$\nobreakdash-\hspace{0pt} trivial, i.e.\ has image $0$ in $\mathcal A/\mathcal S$, if and only if
its image lies in $\mathcal S$ if and only if both the embedding of its kernel and the projection onto
its cokernel are $\mathcal S$\nobreakdash-\hspace{0pt} isomorphisms.
Any exact functor from $\mathcal A$ to an abelian category $\mathcal A'$
which sends every object of $\mathcal S$ to $0$
factors uniquely through \eqref{e:quotproj}, and $\mathcal A/\mathcal S \to \mathcal A'$ is then exact.
Further $\mathcal A/\mathcal S \to \mathcal A'$ is faithful if and only if the objects of $\mathcal S$ are the
only ones in $\mathcal A$ sent to $0$ in $\mathcal A'$.
Let $\mathcal A'$ be an abelian category and $T:\mathcal A \to \mathcal A'$ be an additive functor which
sends every epimorphism with kernel in $\mathcal S$ and every monomorphism
with cokernel in $\mathcal S$ to an isomorphism, or equivalently which sends every $\mathcal S$\nobreakdash-\hspace{0pt} isomorphism to an isomorphism.
Then $T$ factors uniquely through \eqref{e:quotproj}:
the factorisation
\begin{equation*}
\overline{T}:\mathcal A/\mathcal S \to \mathcal A'
\end{equation*}
of $T$ coincides with $T$ on objects, and if $\overline{f}$ in
$(\mathcal A/\mathcal S)(M,N)$ is the image of $f$ in $\mathcal A(M',N/N')$ in the colimit \eqref{e:quotcolim},
with $e:M' \to M$ the embedding and $p:N \to N/N'$ the projection, then
\begin{equation}\label{e:Tbardef}
\overline{T}(\overline{f}) = T(p)^{-1} \circ T(f) \circ T(e)^{-1}.
\end{equation}
Further if $T$ is right (resp.\ left) exact then $\overline{T}$ is right (resp.\ left) exact:
to prove that for $T$ left exact $\overline{T}$ preserves the kernel $\overline{f}:L \to M$ of $\overline{g}:M \to N$,
we may suppose after replacing $L$ by $\Ker(g \circ f)$ that $g \circ f = 0$,
in which case $f$ induces an $\mathcal S$\nobreakdash-\hspace{0pt} isomorphism $L \to \Ker(g)$,
so that $T(f)$ is the kernel of $T(g)$.
If $\mathcal A$ is an (AB4) category and $T$ preserves colimits then $\overline{T}$ preserves colimits.
Given also $T':\mathcal A \to \mathcal A'$ satisfying similar conditions to $T$,
there exists for any natural transformation
$\varphi:T \to T'$ a unique $\overline{\varphi}:\overline{T} \to \overline{T'}$ compatible
with $\varphi$ and $\mathcal A \to \mathcal A/\mathcal S$.
Suppose that $\mathcal A$ has a structure of tensor category,
and that if $f$ is an $\mathcal S$\nobreakdash-\hspace{0pt} isomorphism in $\mathcal A$ then $f \otimes N$ is an $\mathcal S$\nobreakdash-\hspace{0pt} isomorphism
for any object $N$ of $\mathcal A$.
Then $\mathcal A/\mathcal S$ has a unique structure of tensor category such that \eqref{e:quotproj}
is a strict tensor functor.
Explicitly, the tensor product of $\mathcal A/\mathcal S$ coincides with that of $\mathcal A$ on objects,
with associativity constraints and symmetries the images under \eqref{e:quotproj} of those
of $\mathcal A$.
The composite of \eqref{e:quotproj} with the endofunctor $- \otimes N$ of $\mathcal A$
factors uniquely as the composite of an additive endofunctor $- \otimes N$ of $\mathcal A/\mathcal S$
with \eqref{e:quotproj}.
This endofunctor $- \otimes N$ of $\mathcal A/\mathcal S$ together with the similarly defined endofunctor
$M \otimes -$ define on $\mathcal A/\mathcal S$ a bifunctor:
the condition for bifunctoriality can be verified using \eqref{e:Tbardef}.
The naturality of the constraints for this bifunctor $- \otimes -$ on $\mathcal A/\mathcal S$ follows
from \eqref{e:Tbardef}, their required compatibilities
from those in $\mathcal A$,
and the compatibility of \eqref{e:quotproj} with
$- \otimes -$ in $\mathcal A$ and $\mathcal A/\mathcal S$ from its compatibility with each $M \otimes -$
and $- \otimes N$.
Let $\mathcal A$ and $\mathcal A'$ be abelian tensor categories and $T:\mathcal A \to \mathcal A'$ be a tensor functor.
Suppose that the above conditions on $T$ and the tensor product of $\mathcal A$ are satisfied.
Then $\overline{T}:\mathcal A/\mathcal S \to \mathcal A'$ has a unique structure of tensor functor
whose composite with the strict tensor functor \eqref{e:quotproj} is $T$.
Given similarly $T':\mathcal A \to \mathcal A'$, if $\varphi:T \to T'$ is compatible with the tensor structures,
then so is $\overline{\varphi}:\overline{T} \to \overline{T'}$.
Let $\mathcal A$ be a tensor category.
For any $M$ in $\mathcal A_{\mathrm{rig}}$, the endofunctor $M \otimes -$ is both right and left adjoint to
$M^\vee \otimes -$, and hence preserves limits and colimits.
If $\mathcal A$ is abelian and $\mathbf 1$ is projective in $\mathcal A$, then for $M$ in $\mathcal A_{\mathrm{rig}}$
the natural isomorphism
\begin{equation*}
\mathcal A(M,-) \xrightarrow{\sim} \mathcal A(\mathbf 1,M^\vee \otimes -)
\end{equation*}
shows that $M$ is projective in $\mathcal A$.
Similarly if $\mathcal A$ has filtered colimits and $\mathbf 1$ is of finite type (resp.\ of finite presentation)
in $\mathcal A$, then any $M$ in $\mathcal A_{\mathrm{rig}}$ is of finite type (resp.\ of finite presentation)
in $\mathcal A$.
A cocomplete abelian category $\mathcal A$ will be called a Grothendieck category
if filtered colimits are exact in $\mathcal A$.
It is equivalent to require (\cite[III~1.9]{Mit65},
\cite[\href{https://stacks.math.columbia.edu/tag/0032}{Tag 0032}]{stacks-project})
that for every filtered system $(A_\lambda)$ of subobjects $A_\lambda$ of an object $A$ of $\mathcal A$
we have
\begin{equation}\label{e:AB5}
(\cup A_\lambda) \cap B = \cup(A_\lambda \cap B)
\end{equation}
for every subobject $B$ of $A$.
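For example, for any essentially small additive category $\mathcal C$, the category $\widehat{\mathcal C}$
is a Grothendieck category, because limits and colimits in $\widehat{\mathcal C}$ are computed argumentwise
and filtered colimits are exact in $\mathrm{Ab}$.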
\begin{defn}
A Grothendieck tensor category $\mathcal A$ will be called \emph{well-dualled} if $\mathbf 1$ is of finite type
in $\mathcal A$ and $\otimes$ is cocontinuous in $\mathcal A$, and if $\mathcal A_{\mathrm{rig}}$ is essentially
small and its objects generate $\mathcal A$.
\end{defn}
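The following example, which will not be needed in what follows, may help to fix ideas.
For a commutative ring $R$, the category of $R$\nobreakdash-\hspace{0pt} modules with $\mathbf 1 = R$
is a well-dualled Grothendieck tensor category: the unit $R$ is of finite type, $\otimes_R$ is
cocontinuous, and the dualisable objects are exactly the finitely generated projective
$R$\nobreakdash-\hspace{0pt} modules $M$, with
\begin{equation*}
M^\vee = \Hom_R(M,R).
\end{equation*}
These form an essentially small full subcategory which generates the whole category because it contains $R$.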
\emph{For the rest of this section $\mathcal A$ will be a well-dualled Grothendieck tensor category.}
\medskip
Since $\mathcal A$ is an abelian category with a generator, it is well-powered.
An object of $\mathcal A$ is of finite type if and only if it is a quotient of some object in $\mathcal A_{\mathrm{rig}}$.
Every object of $\mathcal A$ is the filtered colimit of its subobjects of finite type.
If $M' \to M$ is an epimorphism in $\mathcal A$ with $M$ of finite type, there exists a $B$ in $\mathcal A_{\mathrm{rig}}$
and a morphism $B \to M'$ such that $B \to M' \to M$ is an epimorphism.
Every object of $\mathcal A$ is the quotient of a coproduct of objects in $\mathcal A_{\mathrm{rig}}$.
It follows for example that every object of $\mathcal A$ has a (left) resolution whose objects are such coproducts,
and that any short exact sequence in $\mathcal A$ has a resolution by a short exact sequence of complexes
whose objects are such coproducts.
Call an object $M$ of $\mathcal A$ \emph{flat} if the functor $M \otimes -$ on $\mathcal A$ is exact.
Any filtered colimit of flat objects is flat, and any object lying in $\mathcal A_{\mathrm{rig}}$ is flat.
Any object, or any short exact sequence, in $\mathcal A$ has a flat resolution.
\begin{lem}\label{l:Mflat}
For any object $N$ of $\mathcal A$, the functor $N \otimes -$ preserves those short exact sequences
\eqref{e:sex} in $\mathcal A$ with $M''$ flat.
\end{lem}
\begin{proof}
Let $P_\bullet$ be a flat resolution of $N$.
Then we have a short exact sequence
\begin{equation*}
0 \to P_\bullet \otimes M' \to P_\bullet \otimes M \to P_\bullet \otimes M'' \to 0
\end{equation*}
of complexes in $\mathcal A$ with homology in degree $0$ given by applying $N \otimes -$ to \eqref{e:sex},
where the homology of $P_\bullet \otimes M''$ is concentrated in degree $0$.
The long exact homology sequence thus gives what is required.
\end{proof}
\begin{lem}\label{l:balanced}
Let $P_\bullet$ be a flat resolution of an object $M$ and $Q_\bullet$ be a flat resolution of an object
$N$ in $\mathcal A$.
Then the homologies of any given degree of the complexes $M \otimes Q_\bullet$ and $P_\bullet \otimes N$
are isomorphic.
\end{lem}
\begin{proof}
The terms $E^2_{p0}$ of the two spectral sequences associated to the double complex $P_\bullet \otimes Q_\bullet$
are isomorphic to the homologies of degree $p$ of the complexes $M \otimes Q_\bullet$ and $P_\bullet \otimes N$,
while the terms $E^2_{pq}$ for $q>0$ are $0$.
The two spectral sequences thus both degenerate at $E^2$, with the terms $E^2_{p0}$ both isomorphic to the
homology in degree $p$ of the total complex associated to $P_\bullet \otimes Q_\bullet$.
\end{proof}
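Lemmas~\ref{l:Mflat} and~\ref{l:balanced} are the analogues in $\mathcal A$ of the classical
balancing of $\mathrm{Tor}$: writing $H_p(-)$ for the homology in degree $p$, the isomorphism
\begin{equation*}
H_p(P_\bullet \otimes N) \cong H_p(M \otimes Q_\bullet)
\end{equation*}
shows that $H_p(P_\bullet \otimes N)$ is, up to isomorphism, independent of the choice of the flat
resolution $P_\bullet$ of $M$, and symmetric in $M$ and $N$.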
An object $M$ of $\mathcal A$ will be called a \emph{torsion object} if for every morphism $b:B \to M$ with $B$ in
$\mathcal A_{\mathrm{rig}}$,
there exists a morphism $a:A \to \mathbf 1$ in $\mathcal A_{\mathrm{rig}}$ which is regular in $\mathcal A_{\mathrm{rig}}$ such that
\begin{equation*}
a \otimes b = 0:A \otimes B \to M.
\end{equation*}
We say that $M$ is \emph{torsion free} if for every $b:B \to M$ in $\mathcal A$ with $B$ in
$\mathcal A_{\mathrm{rig}}$ and $b \ne 0$ and every regular $a:A \to \mathbf 1$ in $\mathcal A_{\mathrm{rig}}$
we have $a \otimes b \ne 0$.
Equivalent conditions for an object to be torsion or torsion free can be obtained
by dualising, with for example $b$ replaced by a morphism
$\mathbf 1 \to B \otimes M$ and $a$ by a morphism $\mathbf 1 \to A$.
Any object of $\mathcal A_{\mathrm{rig}}$ is torsion free.
Since the objects of $\mathcal A_{\mathrm{rig}}$ generate $\mathcal A$,
the tensor product of a regular morphism in $\mathcal A_{\mathrm{rig}}$ with any non-zero
morphism $N \to M$ in $\mathcal A$ with $M$ torsion free is non-zero.
An object of $\mathcal A$ is torsion (resp.\ torsion free) if and only if each of its subobjects of finite type is torsion
(resp.\ torsion free).
An object $M$ of finite type in $\mathcal A$ is a torsion object
if and only if
\begin{equation*}
a \otimes M = 0:A \otimes M \to M
\end{equation*}
for some regular $a:A \to \mathbf 1$ in $\mathcal A_{\mathrm{rig}}$.
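For orientation, consider the category of abelian groups, with $\mathbf 1 = \mathbf Z$, where the
dualisable objects are the finitely generated free abelian groups.
The morphisms
\begin{equation*}
n:\mathbf Z \to \mathbf Z, \qquad n \ne 0,
\end{equation*}
are regular, and an abelian group of finite type is a torsion object in the above sense precisely
when it is annihilated by some $n \ne 0$, that is, when it is a torsion group in the classical sense.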
The full subcategory
\begin{equation*}
\mathcal A_{\mathrm{tors}}
\end{equation*}
of $\mathcal A$ consisting of the torsion objects is a Serre subcategory
which is closed under the formation of colimits and tensor product with any object of $\mathcal A$.
For any object $M$ of $\mathcal A$, the filtered colimit $M_{\mathrm{tors}}$ of its torsion subobjects
of finite type is the largest torsion subobject of $M$, and $M_{\mathrm{tf}} = M/M_{\mathrm{tors}}$
is the largest torsion free quotient of $M$.
\begin{lem}\label{l:adjtorspres}
Let $T:\mathcal A \to \mathcal A'$ be a cocontinuous tensor functor between well-dualled Grothendieck
tensor categories.
Suppose that the tensor functor $\mathcal A_{\mathrm{rig}} \to \mathcal A'{}\!_{\mathrm{rig}}$ induced by $T$
is regular.
Then $T$ preserves torsion objects.
If $T':\mathcal A' \to \mathcal A$ is a lax tensor functor right adjoint to $T$,
then $T'$ preserves torsion free objects.
\end{lem}
\begin{proof}
That $T(M)$ is a torsion object if $M$ is a torsion object is clear when $M$ is of finite type,
because then $a \otimes M = 0$ for some regular $a:A \to \mathbf 1$ in $\mathcal A_{\mathrm{rig}}$.
The general case then follows from the cocontinuity of $T$ by writing $M$ as the colimit of its
subobjects of finite type.
Let $N$ be a torsion free object of $\mathcal A'$.
Then $M \to T'(N)$ is $0$ for any torsion object $M$ of $\mathcal A$, because the morphism
$T(M) \to N$ corresponding to it under adjunction is $0$.
Thus $T'(N)$ is a torsion free object of $\mathcal A$.
\end{proof}
$\mathcal A$ has no non-zero torsion objects if and only if $\mathbf 1$ has no non-zero torsion quotients in $\mathcal A$.
Indeed for $M$ non-zero in $\mathcal A$ there exists a non-zero $A \to M$ with $A$ in $\mathcal A_{\mathrm{rig}}$,
and hence a non-zero $\mathbf 1 \to M \otimes A^\vee$, with image a torsion object if $M$ is.
\begin{lem}\label{l:regtorssub}
Let $J$ be a subobject of $\mathbf 1$ in $\mathcal A$.
Then $\mathbf 1/J$ is a torsion object if and only if there exists a regular morphism
$A \to \mathbf 1$ in $\mathcal A_{\mathrm{rig}}$ which factors through $J$.
\end{lem}
\begin{proof}
Write $p:\mathbf 1 \to \mathbf 1/J$ for the projection.
Then $\mathbf 1/J$ is a torsion object if and only if
$p \circ a = p \otimes a$ is $0$
for some regular $a:A \to \mathbf 1$ in $\mathcal A_{\mathrm{rig}}$.
\end{proof}
\begin{lem}\label{l:regtorscok}
A morphism $A \to \mathbf 1$ in $\mathcal A_{\mathrm{rig}}$ is regular if and only if its
cokernel in $\mathcal A$ is a torsion object.
\end{lem}
\begin{proof}
Let $a:A \to \mathbf 1$ be a morphism in $\mathcal A_{\mathrm{rig}}$ whose cokernel in $\mathcal A$
is a torsion object.
If $b:\mathbf 1 \to B$ is a morphism in $\mathcal A_{\mathrm{rig}}$ for which $a \otimes b = b \circ a$
is $0$, then $b$ is $0$ because it factors through the cokernel of $a$ and $B$ is torsion free.
This proves the ``if''.
The ``only if'' follows from Lemma~\ref{l:regtorssub}.
\end{proof}
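In the example of abelian groups, Lemma~\ref{l:regtorscok} reduces for morphisms
$\mathbf Z \to \mathbf Z$ to the observation that
\begin{equation*}
\Coker(n:\mathbf Z \to \mathbf Z) = \mathbf Z/n\mathbf Z
\end{equation*}
is a torsion group if and only if $n \ne 0$, i.e.\ if and only if $n:\mathbf Z \to \mathbf Z$ is regular.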
By an \emph{isomorphism up to torsion} in $\mathcal A$ we mean a morphism in $\mathcal A$ whose kernel and cokernel
are torsion objects, i.e.\ an $\mathcal S$\nobreakdash-\hspace{0pt} isomorphism in $\mathcal A$ for $\mathcal S$ the Serre subcategory of $\mathcal A$
consisting of the torsion objects.
\begin{lem}\label{l:tensisotors}
If $N$ is an object of $\mathcal A$ and $f$ is an isomorphism up to torsion in $\mathcal A$, then $N \otimes f$
is an isomorphism up to torsion in $\mathcal A$.
\end{lem}
\begin{proof}
Since $N \otimes -$ is right exact and sends torsion objects to torsion objects,
$N \otimes f$ is an isomorphism up to torsion for $f$ an epimorphism with torsion
kernel.
To show that $N \otimes f$ is an isomorphism up to torsion for $f$ a monomorphism
with torsion cokernel,
it is enough to show that a short exact sequence \eqref{e:sex} in $\mathcal A$ with $M''$ a torsion object
induces an exact sequence
\begin{equation}\label{e:torexact}
0 \to L \to N \otimes M' \to N \otimes M \to N \otimes M'' \to 0
\end{equation}
in $\mathcal A$ with $L$ a torsion object.
The short exact sequence \eqref{e:sex} has a flat resolution
\begin{equation*}
0 \to P'{}\!_\bullet \to P_\bullet \to P''{}\!_\bullet \to 0.
\end{equation*}
Tensoring with $N$, we obtain using Lemma~\ref{l:Mflat} a short exact sequence of complexes
\begin{equation*}
0 \to N \otimes P'{}\!_\bullet \to N \otimes P_\bullet \to N \otimes P''{}\!_\bullet \to 0.
\end{equation*}
Passing to the long exact homology sequence, we obtain an exact sequence \eqref{e:torexact}
with $L$ a quotient of the homology in degree $1$ of the complex $N \otimes P''{}\!_\bullet$.
By Lemma~\ref{l:balanced}, this homology is isomorphic to the homology in degree $1$ of
$Q_\bullet \otimes M''$ with $Q_\bullet$ a flat resolution of $N$, and hence is a torsion object.
\end{proof}
Since $\mathcal A$ is well-powered, we may form the quotient category $\mathcal A/\mathcal A_{\mathrm{tors}}$.
We write
\begin{equation*}
\overline{\mathcal A} = \mathcal A/\mathcal A_{\mathrm{tors}},
\end{equation*}
and denote by a bar the image in $\overline{\mathcal A}$ of an object or morphism of $\mathcal A$.
The projection $\mathcal A \to \overline{\mathcal A}$ is thus bijective on objects, with
\begin{equation}\label{e:torscolim}
\overline{\mathcal A}(\overline{M},\overline{N}) =
\colim_{M' \subset M, \; M/M' \; \textrm{torsion}} \mathcal A(M',N_{\mathrm{tf}})
\end{equation}
where the colimit runs over those subobjects $M'$ of $M$ for which $M/M'$ is torsion.
The category $\overline{\mathcal A}$ is abelian and cocomplete,
and the projection $\mathcal A \to \overline{\mathcal A}$
is exact and cocontinuous.
By Lemma~\ref{l:tensisotors}, $\overline{\mathcal A}$ has a unique structure of tensor category such that
the projection is a strict tensor functor.
Further $\otimes$ in $\overline{\mathcal A}$ is right exact and preserves coproducts, and hence is cocontinuous,
and the $\overline{M}$ for $M$ dualisable in $\mathcal A$ generate $\overline{\mathcal A}$.
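The following classical example, not needed in the sequel, illustrates the construction:
for $\mathcal A$ the category of abelian groups, the torsion objects are the classical torsion groups,
and $\overline{\mathcal A}$ is tensor equivalent to the category of
$\mathbf Q$\nobreakdash-\hspace{0pt} vector spaces, with the projection identified with
\begin{equation*}
M \mapsto \mathbf Q \otimes_{\mathbf Z} M.
\end{equation*}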
\begin{lem}\label{l:atens}
Let $B$ be an object in $\mathcal A_{\mathrm{rig}}$ and $N$ be a torsion free object in $\mathcal A$.
Then for any morphism $l:\overline{B} \to \overline{N}$ in $\overline{\mathcal A}$
there exists a regular morphism $a:A \to \mathbf 1$ in $\mathcal A_{\mathrm{rig}}$ and a
morphism $h :A \otimes B \to N$ in $\mathcal A$ such that $\overline{a} \otimes l = \overline{h}$.
\end{lem}
\begin{proof}
We may suppose after dualising that $B = \mathbf 1$.
Let $J$ be a subobject of $\mathbf 1$ in $\mathcal A$ with $\mathbf 1/J$ a torsion object
such that $l$ is the image in the colimit defining $\overline{\mathcal A}(\mathbf 1,\overline{N})$ of some
$j$ in $\mathcal A(J,N)$.
By Lemma~\ref{l:regtorssub}, there exists a regular morphism $a:A \to \mathbf 1$
in $\mathcal A$ which factors through a morphism $a_0:A \to J$.
If $h$ is $j \circ a_0$, then $\overline{h}$ is $l \circ \overline{a} = \overline{a} \otimes l$.
\end{proof}
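Informally, Lemma~\ref{l:atens} says that morphisms in $\overline{\mathcal A}$ from a dualisable object
to a torsion free object are obtained from morphisms of $\mathcal A$ by clearing a denominator,
in the same way as the fractions $h/f$ which appear in \eqref{e:projfactordef} below.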
Let $N$ be a torsion free object of $\mathcal A$.
Then for every object $M$ of $\mathcal A$ and subobject $M'$ of $M$ with $M/M'$
a torsion object, restriction from $M$ to $M'$ defines an injective homomorphism
\begin{equation}\label{e:torsfreeinj}
0 \to \mathcal A(M,N) \to \mathcal A(M',N).
\end{equation}
The transition homomorphisms in the colimit of \eqref{e:torscolim} are thus injective, so that
the projection $\mathcal A \to \overline{\mathcal A}$ is injective on hom-groups of $\mathcal A$ with torsion free target.
\begin{lem}\label{l:projreg}
The tensor functor $\mathcal A_{\mathrm{rig}} \to (\overline{\mathcal A})_{\mathrm{rig}}$ induced by the
projection $\mathcal A \to \overline{\mathcal A}$ is faithful and regular.
\end{lem}
\begin{proof}
The faithfulness follows from the above injectivity on hom groups with torsion free target.
Let $a$ be a regular morphism in $\mathcal A_{\mathrm{rig}}$.
We show that $\overline{a} \otimes j$ is non-zero for any non-zero morphism
$j:\overline{N} \to \overline{M}$ in $\overline{\mathcal A}$.
We may suppose that $M$ is torsion free in $\mathcal A$
and that $j = \overline{h}$ for some $h:N \to M$ in $\mathcal A$.
Then $h$ and hence $a \otimes h$ is non-zero, so that $\overline{a} \otimes \overline{h}$
is non-zero by the above injectivity.
\end{proof}
If $N$ is an object of $\mathcal A$, then every subobject of $\overline{N}$ lifts to a subobject of $N$.
This can be seen by reducing after taking inverse images along $N \to N_\mathrm{tf}$ to the
case where $N$ is torsion free, when by \eqref{e:torscolim} any subobject of $\overline{N}$
is of the form $\Img \overline{f}$ for some morphism $f:M' \to N$ in $\mathcal A$, and hence lifts to the subobject
$\Img f$ of $N$.
The set of subobjects of $N$ lifting a given subobject of $\overline{N}$ is directed,
and its colimit is the unique largest such subobject of $N$.
By assigning to each subobject of $\overline{N}$ the largest subobject of $N$ lifting it,
we obtain an order-preserving map from the set of subobjects of $\overline{N}$ to
the set of subobjects of $N$.
\begin{prop}\label{p:welldualled}
$\overline{\mathcal A}$ is a well-dualled Grothendieck tensor category.
\end{prop}
\begin{proof}
The cocompleteness of $\overline{\mathcal A}$ has been seen.
That the equalities of the form \eqref{e:AB5} hold in $\overline{\mathcal A}$
follows after lifting to $\mathcal A$ from the fact that they hold in $\mathcal A$ together
with the exactness and cocontinuity of the projection onto $\mathcal A$.
Thus $\overline{\mathcal A}$ is a Grothendieck category.
The cocontinuity of the tensor product of $\overline{\mathcal A}$ has been seen.
To see that $\mathbf 1$ is of finite type in $\overline{\mathcal A}$, it is enough since $\overline{\mathcal A}$
is a Grothendieck category to show that if $\mathbf 1$ in $\overline{\mathcal A}$ is the union
of a filtered system $(J_\lambda)$ of subobjects $J_\lambda$, then $J_\lambda = \mathbf 1$
for some $\lambda$.
If $(J_0{}_\lambda)$ is a lifting of $(J_\lambda)$ to a system of subobjects
of $\mathbf 1$ in $\mathcal A$, then by cocontinuity of the projection onto $\overline{\mathcal A}$,
the subobject
\begin{equation*}
J = \colim J_0{}_\lambda
\end{equation*}
of $\mathbf 1$ in $\mathcal A$ has torsion quotient $\mathbf 1/J$.
Thus by Lemma~\ref{l:regtorssub}, there exists a regular morphism
$A \to \mathbf 1$ in $\mathcal A_{\mathrm{rig}}$ which factors through $J$.
Then $A \to \mathbf 1$ factors through some $J_0{}_\lambda$, because $A$ is of finite type in $\mathcal A$.
Thus $\mathbf 1/J_0{}_\lambda$ is a torsion object in $\mathcal A$ by Lemma~\ref{l:regtorssub},
so that $J_\lambda = \mathbf 1$ in $\overline{\mathcal A}$.
Since $\mathbf 1$ is of finite type in $\overline{\mathcal A}$, so also is any dualisable object.
Thus since the $\overline{M}$ for $M$ dualisable in $\mathcal A$
generate $\overline{\mathcal A}$, every dualisable object in $\overline{\mathcal A}$ is a
quotient of such an $\overline{M}$.
It follows that $(\overline{\mathcal A})_{\mathrm{rig}}$ is essentially small.
Thus $\overline{\mathcal A}$ is well-dualled.
\end{proof}
\begin{lem}\label{l:regcofinal}
For every regular morphism $a:A \to \mathbf 1$ in $(\overline{\mathcal A})_{\mathrm{rig}}$ there exists
a regular morphism $a_0:A_0 \to \mathbf 1$ in $\mathcal A_{\mathrm{rig}}$ such that $\overline{a_0}$
factors through $a$.
\end{lem}
\begin{proof}
Since an epimorphism $p:\overline{B} \to A$ in $(\overline{\mathcal A})_{\mathrm{rig}}$ exists
with $B$ in $\mathcal A_{\mathrm{rig}}$,
and since $a \circ p$ is regular by Lemma~\ref{l:regtorscok}, we may after replacing $a$
by $a \circ p$ suppose that $A = \overline{B}$ with $B$ in $\mathcal A_{\mathrm{rig}}$.
By Lemma~\ref{l:atens}, there is a regular $c:C \to \mathbf 1$ in $\mathcal A_{\mathrm{rig}}$ with
\begin{equation*}
a \circ (\overline{c} \otimes \overline{B}) = \overline{c} \otimes a = \overline{a_0}
\end{equation*}
for some $a_0:C \otimes B \to \mathbf 1$.
By Lemma~\ref{l:projreg}, $\overline{c}$ and
hence $\overline{a_0}$ is regular in $(\overline{\mathcal A})_{\mathrm{rig}}$,
so that again by Lemma~\ref{l:projreg} $a_0$ is regular in $\mathcal A_{\mathrm{rig}}$.
\end{proof}
\begin{prop}\label{p:notors}
$\overline{\mathcal A}$ has no non-zero torsion objects.
\end{prop}
\begin{proof}
Let $N$ be a torsion free object in $\mathcal A$ with $\overline{N}$ a torsion object in
$\overline{\mathcal A}$,
and $b:B \to N$ be a morphism in $\mathcal A$ with $B$ in $\mathcal A_{\mathrm{rig}}$.
Then $a \otimes \overline{b} = 0$ for some regular $a:A \to \mathbf 1$ in
$(\overline{\mathcal A})_{\mathrm{rig}}$.
By Lemma~\ref{l:regcofinal}, we may suppose that $a = \overline{a_0}$ with
$a_0:A_0 \to \mathbf 1$ regular in $\mathcal A_{\mathrm{rig}}$.
Then $a_0 \otimes b = 0$ by Lemma~\ref{l:projreg}.
Thus $N$ is a torsion object, and $\overline{N} = 0$.
\end{proof}
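In particular the passage from $\mathcal A$ to $\overline{\mathcal A}$ is idempotent:
by Proposition~\ref{p:notors} the torsion objects of $\overline{\mathcal A}$ are zero,
so that the projection from $\overline{\mathcal A}$ onto its quotient modulo torsion is an equivalence.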
\begin{lem}\label{l:torsfrac}
Let $f:A \to \mathbf 1$ be a regular morphism in $\mathcal A_{\mathrm{rig}}$ and $h:A \to M$ be
a morphism in $\mathcal A$ with $M$ torsion free.
Then $f \otimes h = h \otimes f$ if and only if $\Ker h \supset \Ker f$.
\end{lem}
\begin{proof}
Write $J$ for the image of $f$ and $j:J \to \mathbf 1$ for the embedding.
Then
\begin{equation*}
f = j \circ f_0
\end{equation*}
with $f_0:A \to J$ the projection.
By Lemma~\ref{l:regtorscok} $j$ is an isomorphism up to torsion, so that by
Lemma~\ref{l:tensisotors} $j \otimes j$ is an isomorphism up to torsion.
In particular $\Ker (j \otimes j)$ is a torsion object.
Now
\begin{equation*}
(j \otimes j) \circ (1 - \sigma) = j \otimes j - j \otimes j = 0
\end{equation*}
with $\sigma:J \otimes J \xrightarrow{\sim} J \otimes J$ the symmetry.
The image of $1 - \sigma$ is thus a torsion object.
Suppose that $\Ker h \supset \Ker f$.
Then $h = h_0 \circ f_0$ for some $h_0:J \to M$,
and
\begin{equation*}
j \otimes h_0 - h_0 \otimes j = (j \otimes h_0) \circ (1 - \sigma) = 0,
\end{equation*}
because $M$ is torsion free so that $j \otimes h_0$ sends the image of
$1 - \sigma$ to $0$.
Composing with $f_0 \otimes f_0$ then shows that $f \otimes h = h \otimes f$.
Conversely suppose that $f \otimes h = h \otimes f$.
If $i:\Ker f \to A$ is the embedding, then
\begin{equation*}
(h \circ i) \otimes f = (h \otimes f) \circ (i \otimes A) =
(f \otimes h) \circ (i \otimes A) = 0.
\end{equation*}
Since $f$ is regular and $M$ is torsion free, it follows that $h \circ i = 0$.
\end{proof}
\begin{prop}\label{p:fracclos}
$(\overline{\mathcal A})_{\mathrm{rig}}$ is fractionally closed.
\end{prop}
\begin{proof}
Let $f:A \to \mathbf 1$ be a regular morphism in $(\overline{\mathcal A})_{\mathrm{rig}}$
and $h:A \to D$ be a morphism in $(\overline{\mathcal A})_{\mathrm{rig}}$ with $f \otimes h = h \otimes f$.
Since $\overline{\mathcal A}$ has no non-zero torsion objects by Proposition~\ref{p:notors},
$f$ is an epimorphism in $\overline{\mathcal A}$ by Lemma~\ref{l:regtorscok}.
Hence $h$ is of the form $h_0 \circ f = f \otimes h_0$ for some $h_0:\mathbf 1 \to D$
by Lemma~\ref{l:torsfrac}.
Thus \eqref{e:fraccloseI} with $\mathcal C = (\overline{\mathcal A})_{\mathrm{rig}}$ is surjective,
as required.
\end{proof}
By the universal property of
$(\mathcal A_{\mathrm{rig}})_\mathrm{fr}$ together with Lemma~\ref{l:projreg}
and Proposition~\ref{p:fracclos}, the tensor functor
$\mathcal A_{\mathrm{rig}} \to (\overline{\mathcal A})_{\mathrm{rig}}$ factors uniquely as
$E_{\mathcal A_{\mathrm{rig}}}:\mathcal A_{\mathrm{rig}} \to (\mathcal A_{\mathrm{rig}})_\mathrm{fr}$
followed by a tensor functor
\begin{equation}\label{e:projfactor}
(\mathcal A_{\mathrm{rig}})_\mathrm{fr} \to (\overline{\mathcal A})_{\mathrm{rig}}.
\end{equation}
Explicitly, \eqref{e:projfactor} is the strict tensor functor that
coincides with $\mathcal A_{\mathrm{rig}} \to (\overline{\mathcal A})_{\mathrm{rig}}$ on objects,
and sends the morphism $h/f$ to the unique morphism $l$ with
\begin{equation}\label{e:projfactordef}
\overline{f} \otimes l = \overline{h},
\end{equation}
as in \eqref{e:Tfactor}.
\begin{prop}\label{p:Frff}
The tensor functor \eqref{e:projfactor} is fully faithful.
\end{prop}
\begin{proof}
By \eqref{e:projfactordef}, the faithfulness follows from Lemma~\ref{l:projreg}
and the fullness from Lemma~\ref{l:atens}.
\end{proof}
Let $\mathcal C$ be an essentially small rigid tensor category.
Since $\mathbf 1$ is projective of finite type in $\widehat{\mathcal C}$, so also is
any dualisable object of $\widehat{\mathcal C}$.
Thus by \eqref{e:fintype}, $(\widehat{\mathcal C})_{\mathrm{rig}}$ is the pseudo-abelian hull
of $\mathcal C$ in $\widehat{\mathcal C}$.
It follows that $\widehat{\mathcal C}$ is a well-dualled Grothendieck tensor category.
We write $\widetilde{\mathcal C}$ for $\widehat{\mathcal C}$ modulo torsion:
\begin{equation*}
\widetilde{\mathcal C} = \overline{\widehat{\mathcal C}}.
\end{equation*}
The composite $\mathcal C \to \widetilde{\mathcal C}$ of the projection
$\widehat{\mathcal C} \to \widetilde{\mathcal C}$ with $h_-:\mathcal C \to \widehat{\mathcal C}$ factors
through a regular tensor functor $\mathcal C \to (\widetilde{\mathcal C})_\mathrm{rig}$,
which in turn factors uniquely through a tensor functor
\begin{equation}\label{e:Cprojfactor}
\mathcal C_\mathrm{fr} \to (\widetilde{\mathcal C})_\mathrm{rig}
\end{equation}
by Proposition~\ref{p:fracclos}.
\begin{cor}\label{c:FrCff}
Suppose that either direct sums exist in $\mathcal C$ or that $\mathcal C$ is integral.
Then the tensor functor \eqref{e:Cprojfactor} is fully faithful.
\end{cor}
\begin{proof}
Since the tensor functor $T:\mathcal C \to (\widehat{\mathcal C})_\mathrm{rig}$ defined by $h_-$ is the embedding of
$\mathcal C$ into its pseudo-abelian hull, \eqref{e:FrT} with $\mathcal C' = (\widehat{\mathcal C})_\mathrm{rig}$
is fully faithful.
The required result thus follows from Proposition~\ref{p:Frff}, because \eqref{e:Cprojfactor}
factors as $T_\mathrm{fr}$ followed by \eqref{e:projfactor} with $\mathcal A = \widehat{\mathcal C}$.
\end{proof}
Let $\mathcal C'$ be an essentially small rigid tensor category and $F:\mathcal C \to \mathcal C'$ be a tensor functor.
As in Section~\ref{s:fun}, we have a tensor functor $\widehat{F}:\widehat{\mathcal C} \to \widehat{\mathcal C'}$ and a
cocontinuous lax tensor functor $F_\wedge:\widehat{\mathcal C'} \to \widehat{\mathcal C}$ right adjoint to $\widehat{F}$.
By \eqref{e:pres} and the fact that \eqref{e:proj} is an isomorphism for $M$ dualisable, we have an isomorphism
\begin{equation}\label{e:projhat}
F_\wedge(M') \otimes M \xrightarrow{\sim} F_\wedge(M' \otimes \widehat{F}(M))
\end{equation}
natural in $M$ in $\widehat{\mathcal C}$ and $M'$ in $\widehat{\mathcal C'}$.
Similarly, with $\mathcal A = \widehat{\mathcal C}$, $\mathcal A' = \widehat{\mathcal C'}$, $H = \widehat{F}$ and $H' = F_\wedge$,
\eqref{e:EMtens} is an isomorphism for every $N'$ in the essential image of $\widehat{F}$,
and \eqref{e:EMhom} is an isomorphism for every $M'$ in the essential image of $\widehat{F}$.
\begin{lem}\label{l:isotors}
Let $\mathcal C$ and $\mathcal C'$ be essentially small tensor categories with $\mathcal C$ rigid,
and $F:\mathcal C \to \mathcal C'$ be a regular tensor functor.
Suppose that either direct sums exist in $\mathcal C$ or that $\mathcal C$ is integral.
Then $\widehat{F}:\widehat{\mathcal C} \to \widehat{\mathcal C'}$ sends isomorphisms up to torsion to isomorphisms up to torsion.
\end{lem}
\begin{proof}
The tensor functor $(\widehat{\mathcal C})_{\mathrm{rig}} \to (\widehat{\mathcal C'})_{\mathrm{rig}}$ induced by
$\widehat{F}$ between the pseudo-abelian hulls of $\mathcal C$ and $\mathcal C'$ is regular, because
$\Reg(\mathcal C) \to \Reg((\widehat{\mathcal C})_{\mathrm{rig}})$ is cofinal.
Since $\widehat{F}$ is cocontinuous, it sends epimorphisms with torsion kernel to epimorphisms with torsion kernel.
It remains to show that $\widehat{F}$ sends monomorphisms with torsion cokernel to morphisms
with torsion kernel and cokernel.
Suppose given a short exact sequence \eqref{e:sex} in $\widehat{\mathcal C}$ with $M''$ torsion.
Then
\begin{equation*}
\widehat{F}(M') \to \widehat{F}(M)
\end{equation*}
has torsion cokernel.
If $P'$ is its kernel, it is to be shown that for any $N'$ in $(\widehat{\mathcal C'})_{\mathrm{rig}}$
and morphism $b':\mathbf 1 \to N' \otimes P'$ there is a regular $a':\mathbf 1 \to L'$ in
$(\widehat{\mathcal C'})_{\mathrm{rig}}$ with $a' \otimes b' = 0$.
The morphism $M' \to M$ induces the horizontal arrows of a commutative square
\begin{equation*}
\xymatrix{
\widehat{\mathcal C'}(\mathbf 1, N' \otimes \widehat{F}(M')) \ar[r] & \widehat{\mathcal C'}(\mathbf 1,N' \otimes \widehat{F}(M)) \\
\widehat{\mathcal C}(\mathbf 1, F_\wedge(N') \otimes M') \ar^{\wr}[u] \ar[r] & \widehat{\mathcal C}(\mathbf 1, F_\wedge(N') \otimes M) \ar^{\wr}[u]
}
\end{equation*}
with the vertical isomorphisms induced by those of the form
\eqref{e:projhat} followed by the adjunction isomorphisms.
Since $N' \otimes -$ is exact,
we may identify $b'$ with an element in the kernel of the top arrow of the square.
It is enough to show that there is
a regular $a':\mathbf 1 \to L'$ in $(\widehat{\mathcal C'})_{\mathrm{rig}}$
such that $a' \otimes b'$ in $\widehat{\mathcal C'}(\mathbf 1, L' \otimes N' \otimes \widehat{F}(M'))$ is $0$.
If $b'$ is the image under the left isomorphism of $b$, then $b:\mathbf 1 \to F_\wedge(N') \otimes M'$ factors through
the kernel $P$ of
\begin{equation*}
F_\wedge(N') \otimes M' \to F_\wedge(N') \otimes M.
\end{equation*}
Since $P$ is a torsion object by Lemma~\ref{l:tensisotors}, there is a regular $a:\mathbf 1 \to L$ in
$(\widehat{\mathcal C})_{\mathrm{rig}}$ such that $a \otimes b = 0$.
Now by \eqref{e:projadj}, the left isomorphism of the square is given by applying $\widehat{F}$ and then using the counit
$\widehat{F}F_\wedge(N') \to N'$.
Thus $a' \otimes b' = 0$ with $a' = \widehat{F}(a)$.
\end{proof}
Let $\mathcal C$ and $\mathcal C'$ be essentially small rigid tensor categories.
Suppose that either direct sums exist in $\mathcal C$ or that $\mathcal C$ is integral.
Then it follows from Lemma~\ref{l:isotors} that for any
regular tensor functor $F:\mathcal C \to \mathcal C'$ there is a unique tensor functor
\begin{equation*}
\widetilde{F}:\widetilde{\mathcal C} \to \widetilde{\mathcal C'}
\end{equation*}
which is compatible with $\widehat{F}$ and the projections
$\widehat{\mathcal C} \to \widetilde{\mathcal C}$
and $\widehat{\mathcal C'} \to \widetilde{\mathcal C'}$.
Further for any tensor isomorphism $\varphi:F \xrightarrow{\sim} F'$ of regular tensor functors $\mathcal C \to \mathcal C'$
there is a unique tensor isomorphism
$\widetilde{\varphi}:\widetilde{F} \xrightarrow{\sim} \widetilde{F'}$
which is compatible with $\widehat{\varphi}$ and the projections.
There are the usual canonical tensor isomorphisms, satisfying the usual compatibilities,
for composable regular tensor functors.
\section{Modules}\label{s:mod}
In this section we study the behaviour of categories of modules in a tensor category
under passage to quotients modulo torsion.
The goal is to prove Theorem~\ref{t:Ftildeequiv}, which gives a geometric description
of the functor categories modulo torsion which will be used to define super Tannakian hulls.
Let $\mathcal A$ be a well-dualled Grothendieck tensor category and $R$ be a commutative algebra in $\mathcal A$.
Since the forgetful functor from $\MOD_{\mathcal A}(R)$ to $\mathcal A$ creates limits and colimits,
$\MOD_{\mathcal A}(R)$ is a Grothendieck category.
Further the unit $R$ is of finite type in $\MOD_{\mathcal A}(R)$ because $\Hom_R(R,-)$ is isomorphic
to $\mathcal A(\mathbf 1,-)$, and $\otimes_R$ is cocontinuous in $\MOD_{\mathcal A}(R)$ by its definition using a coequaliser.
The free $R$\nobreakdash-\hspace{0pt} modules $R \otimes M$ with $M$ dualisable form a set of generators for $\MOD_{\mathcal A}(R)$.
Thus any dualisable object $N$ of $\MOD_{\mathcal A}(R)$ is a quotient of $R \otimes M$ with $M$ dualisable,
because $N$ is of finite type in $\MOD_{\mathcal A}(R)$.
It follows that $\MOD_{\mathcal A}(R)_{\mathrm{rig}}$ is essentially small.
Thus $\MOD_{\mathcal A}(R)$ is a well-dualled Grothendieck tensor category.
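For orientation: the coequaliser defining $\otimes_R$ is
\begin{equation*}
M \otimes R \otimes N \rightrightarrows M \otimes N \to M \otimes_R N,
\end{equation*}
with the two parallel morphisms given by the action of $R$ on $M$ and on $N$.
When $\mathcal A$ is the category of modules over a commutative ring $k$ and $R$ is a commutative
$k$\nobreakdash-\hspace{0pt} algebra, this recovers the usual tensor product of
$R$\nobreakdash-\hspace{0pt} modules, and $\MOD_{\mathcal A}(R)$ is the usual category of
$R$\nobreakdash-\hspace{0pt} modules.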
\begin{lem}\label{l:regtorsfree}
The tensor functor $R \otimes -:\mathcal A_{\mathrm{rig}} \to \Mod_{\mathcal A}(R)$ is regular
if and only if $R$ is a torsion free object of $\mathcal A$.
\end{lem}
\begin{proof}
The tensor functor $R \otimes -$ is regular if and only if for every
regular $a:A \to \mathbf 1$ in $\mathcal A_{\mathrm{rig}}$ and non-zero
$b':R \otimes B \to R$ in $\Mod_{\mathcal A}(R)$ with $B$ in $\mathcal A_{\mathrm{rig}}$,
the morphism $(R \otimes a) \otimes_R b'$ is non-zero.
If $b'$ corresponds under the adjunction isomorphism
\begin{equation*}
\Hom_{\mathcal A}(-,R) \xrightarrow{\sim} \Hom_R(R \otimes -,R)
\end{equation*}
to $b:B \to R$ in $\mathcal A$,
then $(R \otimes a) \otimes_R b'$ corresponds, modulo the tensor structural
isomorphism of $R \otimes -$, to $a \otimes b$.
The result follows.
\end{proof}
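In the example of abelian groups, the necessity of the hypothesis in Lemma~\ref{l:regtorsfree}
can be seen with $R = \mathbf Z/n\mathbf Z$ for $n \ne 0$, which is not torsion free:
the free module functor sends the regular morphism $n:\mathbf Z \to \mathbf Z$ to
\begin{equation*}
R \otimes n = 0:R \to R,
\end{equation*}
so that $R \otimes -$ is not regular.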
\begin{lem}\label{l:Rextpres}
Suppose that $R$ is a torsion free object of $\mathcal A$.
Then the tensor functor $R \otimes -:\mathcal A \to \MOD_{\mathcal A}(R)$ preserves torsion objects and the
forgetful lax tensor functor $\MOD_{\mathcal A}(R) \to \mathcal A$ preserves torsion free objects and reflects
torsion objects.
\end{lem}
\begin{proof}
That $R \otimes -$ preserves torsion objects and the forgetful functor preserves
torsion free objects follows from Lemma~\ref{l:adjtorspres}
with $T = R \otimes -$ and $T'$ the forgetful functor together with Lemma~\ref{l:regtorsfree}.
If $M$ in $\MOD_{\mathcal A}(R)$ is a torsion object in $\mathcal A$, then $M$ is a torsion object in
$\MOD_{\mathcal A}(R)$ because it is a quotient of $R \otimes M$.
\end{proof}
The projection $\mathcal A \to \overline{\mathcal A}$ defines a canonical tensor functor
\begin{equation}\label{e:MODfun}
\MOD_{\mathcal A}(R) \to \MOD_{\overline{\mathcal A}}(\overline{R})
\end{equation}
which is compatible with the projection $\mathcal A \to \overline{\mathcal A}$ and with the forgetful
lax tensor functors to $\mathcal A$ and $\overline{\mathcal A}$.
It is exact and cocontinuous.
\begin{lem}\label{l:MODfunsurj}
The tensor functor \eqref{e:MODfun} is essentially surjective.
\end{lem}
\begin{proof}
Any $\overline{R}$\nobreakdash-\hspace{0pt} module is isomorphic to $\overline{M}/L$
for some free $R$\nobreakdash-\hspace{0pt} module $M$ and $\overline{R}$\nobreakdash-\hspace{0pt} submodule $L$ of $\overline{M}$.
There exists a subobject $N$ of $M$ in $\mathcal A$ with $\overline{N} = L$,
and the image $RN$ of $R \otimes N \to M$ is an $R$\nobreakdash-\hspace{0pt} submodule of $M$
with $\overline{RN} = L$.
We then have $\overline{M/RN} = \overline{M}/L$.
\end{proof}
\begin{lem}\label{l:barfac}
Suppose that $R$ is a torsion free object of $\mathcal A$.
Then for any morphism $a:M \to \overline{R}$ in $\Mod_{\overline{\mathcal A}}(\overline{R})$
there exists a morphism $a_0:M_0 \to R$ in $\Mod_{\mathcal A}(R)$ such that
$\overline{a_0} = a \circ p$ with
$p:\overline{M_0} \to M$ an epimorphism in
$\MOD_{\overline{\mathcal A}}(\overline{R})$.
\end{lem}
\begin{proof}
After composing with an appropriate epimorphism, we may suppose that
$M = \overline{R} \otimes \overline{B}$ with $B$ in $\mathcal A_{\mathrm{rig}}$.
If $b:\overline{B} \to \overline{R}$ in $\overline{\mathcal A}$
corresponds under adjunction to $a$,
we show that for some $b_0:A \to R$ in $\mathcal A$ with $A$ in $\mathcal A_{\mathrm{rig}}$
\begin{equation}\label{e:barfac}
\overline{b_0} = b \circ q
\end{equation}
with $q:\overline{A} \to \overline{B}$ an epimorphism in $\overline{\mathcal A}$.
We may then take for $a_0$ the morphism $R \otimes A \to R$
corresponding under adjunction to $b_0$, and $p = \overline{R} \otimes q$.
Since $R$ is torsion free, there exist by \eqref{e:torscolim} a subobject
$L$ of $B$ and a morphism $f:L \to R$ in $\mathcal A$ such that the embedding of $L$
induces an isomorphism
$i:\overline{L} \xrightarrow{\sim} \overline{B}$ with
\begin{equation*}
b \circ i = \overline{f}.
\end{equation*}
Let $r$ be an epimorphism to $L$ in $\mathcal A$ from a coproduct of objects $A_\lambda$
in $\mathcal A_{\mathrm{rig}}$.
Then $i \circ \overline{r}$ is an epimorphism in $\overline{\mathcal A}$.
Since $\overline{B}$ is of finite type in $\overline{\mathcal A}$,
there is a finite coproduct $A$ of the $A_\lambda$ such that
$i \circ \overline{r_0}$ with $r_0:A \to L$ the restriction of $r$ to $A$
is an epimorphism in $\overline{\mathcal A}$.
Then \eqref{e:barfac} holds with $b_0 = f \circ r_0$ and $q = i \circ \overline{r_0}$.
\end{proof}
\begin{lem}\label{l:Rmodbar}
Suppose that $R$ is a torsion free object of $\mathcal A$.
\begin{enumerate}
\item\label{i:Rmodbarreg}
The tensor functor
$\Mod_{\mathcal A}(R) \to \Mod_{\overline{\mathcal A}}(\overline{R})$ induced by \eqref{e:MODfun}
is faithful and regular.
\item\label{i:Rmodbarpres}
The tensor functor \eqref{e:MODfun} preserves and reflects torsion objects,
and preserves torsion free objects.
\end{enumerate}
\end{lem}
\begin{proof}
\ref{i:Rmodbarreg}
Since $R$ is torsion free, \eqref{e:torsfreeinj} shows using \eqref{e:torscolim}
that the projection $\mathcal A \to \overline{\mathcal A}$ is injective on hom groups with target $R$.
Thus \eqref{e:MODfun} is injective on hom groups
\begin{equation*}
\Hom_R(R \otimes M,R)
\end{equation*}
for every $M$ in $\mathcal A$.
Since every $R$\nobreakdash-\hspace{0pt} module is a quotient
of a free $R$\nobreakdash-\hspace{0pt} module, this gives the faithfulness.
For the regularity it is enough to show that if $c:C \to R$ is regular in $\Mod_{\mathcal A}(R)$ then
$\overline{c} \otimes_{\overline{R}} d$ is non-zero for every non-zero $d:D \to \overline{R}$
in $\Mod_{\overline{\mathcal A}}(\overline{R})$.
By Lemma~\ref{l:barfac}, we may after composing $d$ with an appropriate epimorphism of
$\overline{R}$\nobreakdash-\hspace{0pt} modules assume that $d = \overline{d_0}$ for some $d_0:D_0 \to R$ in
$\Mod_{\mathcal A}(R)$.
The required result then follows from the faithfulness.
\ref{i:Rmodbarpres}
That \eqref{e:MODfun} preserves torsion objects follows from \ref{i:Rmodbarreg}
and Lemma~\ref{l:adjtorspres} applied to \eqref{e:MODfun}.
It follows from \ref{i:Rmodbarreg} that
$\Mod_{\mathcal A}(R) \to \Mod_{\overline{\mathcal A}}(\overline{R})$ reflects regular morphisms.
Thus by Lemma~\ref{l:regtorscok} applied to $\MOD_{\overline{\mathcal A}}(\overline{R})$
and Lemma~\ref{l:barfac},
the composite of any regular morphism $a:A \to \overline{R}$ in
$\Mod_{\overline{\mathcal A}}(\overline{R})$
with an appropriate epimorphism in $\MOD_{\overline{\mathcal A}}(\overline{R})$ is of the form
$\overline{a_0}$ for some regular morphism $a_0:A_0 \to R$ in $\Mod_{\mathcal A}(R)$.
Now let $M$ be an $R$\nobreakdash-\hspace{0pt} module with $\overline{M}$ a torsion object in $\MOD_{\mathcal A}(R)$.
Then if $b:B \to M$ is a morphism in $\MOD_{\mathcal A}(R)$ with $B$ in $\Mod_{\mathcal A}(R)$,
there exists a regular morphism $a_0:A_0 \to R$ in $\Mod_{\mathcal A}(R)$ with
$\overline{a_0} \otimes_{\overline{R}} \overline{b} = 0$ and hence
$a_0 \otimes_R b = 0$ by \ref{i:Rmodbarreg}.
Thus \eqref{e:MODfun} reflects torsion objects.
Similarly \eqref{e:MODfun} preserves torsion free objects.
\end{proof}
Suppose that $R$ is a torsion free object of $\mathcal A$.
Then \eqref{e:MODfun} induces an exact tensor functor
\begin{equation}\label{e:MODfuntors}
\overline{\MOD_{\mathcal A}(R)} \to
\overline{\MOD_{\overline{\mathcal A}}(\overline{R})}
\end{equation}
because \eqref{e:MODfun} is an exact tensor functor which by
Lemma~\ref{l:Rmodbar}\ref{i:Rmodbarpres} preserves torsion objects.
\begin{prop}\label{p:torsequ}
Suppose that $R$ is a torsion free object of $\mathcal A$.
Then \eqref{e:MODfuntors} is a tensor equivalence.
\end{prop}
\begin{proof}
Since \eqref{e:MODfun} is essentially surjective by Lemma~\ref{l:MODfunsurj},
so also is \eqref{e:MODfuntors}.
To show that \eqref{e:MODfuntors} is fully faithful, it is enough by \eqref{e:torscolim}
to show for every pair of $R$\nobreakdash-\hspace{0pt} modules
$M$ and $N$ with $N$ torsion free in $\MOD_{\mathcal A}(R)$ that the homomorphism
\begin{equation}\label{e:torsequff}
\colim_{M' \subset M, \, M/M' \, \text{torsion}}\Hom_R(M',N) \to
\colim_{L \subset \overline{M}, \, \overline{M}/L \, \text{torsion}}\Hom_{\overline{R}}(L,\overline{N})
\end{equation}
is an isomorphism, where $M'$ runs over the $R$\nobreakdash-\hspace{0pt} submodules of $M$ with $M/M'$ a torsion $R$\nobreakdash-\hspace{0pt} module,
$\overline{N}$ is torsion free by Lemma~\ref{l:Rmodbar}\ref{i:Rmodbarpres},
$L$ runs over the $\overline{R}$\nobreakdash-\hspace{0pt} submodules of $\overline{M}$ with $\overline{M}/L$ a torsion
$\overline{R}$\nobreakdash-\hspace{0pt} module,
and the class of $i:M' \to N$ is sent to the class of $\overline{\imath}:\overline{M'} \to \overline{N}$.
Let $M'$ be an $R$\nobreakdash-\hspace{0pt} submodule of $M$ with $M/M'$ a torsion $R$\nobreakdash-\hspace{0pt} module, and
\begin{equation*}
i:M' \to N
\end{equation*}
be a morphism of $R$\nobreakdash-\hspace{0pt} modules whose class in the source of \eqref{e:torsequff} lies
in the kernel of \eqref{e:torsequff}.
Since the transition homomorphisms of the target of \eqref{e:torsequff} are injective
by the analogue of \eqref{e:torsfreeinj} for $\overline{R}$\nobreakdash-\hspace{0pt} modules,
$\overline{\imath}:\overline{M'} \to \overline{N}$ is $0$ in $\overline{\mathcal A}$.
Thus $\Img i$ is a torsion object in $\mathcal A$.
On the other hand, $\Img i$ is a torsion free object in $\mathcal A$, because it is a subobject of $N$
with $N$ torsion free in $\mathcal A$ by Lemma~\ref{l:Rextpres}.
Hence $\Img i = 0$ and $i = 0$.
Thus \eqref{e:torsequff} is injective.
Let $L$ be an $\overline{R}$\nobreakdash-\hspace{0pt} submodule of $\overline{M}$ with $\overline{M}/L$ a torsion
$\overline{R}$\nobreakdash-\hspace{0pt} module, and
\begin{equation*}
j:L \to \overline{N}
\end{equation*}
be a morphism of $\overline{R}$\nobreakdash-\hspace{0pt} modules.
Then $L$ lifts to a subobject $M_0$ of $M$ in $\mathcal A$.
Replacing if necessary $M_0$ by a smaller subobject $M_1$ of $M$ in $\mathcal A$ with $M_0/M_1$
a torsion object in $\mathcal A$, we may assume further that $j$ lifts to a morphism
\begin{equation*}
i_0:M_0 \to N
\end{equation*}
in $\mathcal A$.
The image $M' = RM_0$ of the morphism
\begin{equation*}
m:R \otimes M_0 \to M
\end{equation*}
of $R$\nobreakdash-\hspace{0pt} modules corresponding to the embedding $M_0 \to M$ is an $R$\nobreakdash-\hspace{0pt} submodule of $M$ above $L$.
By Lemma~\ref{l:Rmodbar}\ref{i:Rmodbarpres}, $M/M'$ is a torsion $R$\nobreakdash-\hspace{0pt} module because
$\overline{M/M'} = \overline{M}/L$ is a torsion $\overline{R}$\nobreakdash-\hspace{0pt} module.
Write
\begin{equation*}
i_1:R \otimes M_0 \to N
\end{equation*}
for the morphism of $R$\nobreakdash-\hspace{0pt} modules corresponding to the morphism $i_0:M_0 \to N$ in $\mathcal A$.
Since $i_0$ lies above the morphism $j$ of $\overline{R}$\nobreakdash-\hspace{0pt} modules,
we have $\overline{i_1(\Ker m)} = 0$ in $\overline{\mathcal A}$.
Hence $i_1(\Ker m)$ is a torsion object in $\mathcal A$.
On the other hand $i_1(\Ker m)$ is a subobject of $N$ in $\mathcal A$, and hence
by Lemma~\ref{l:Rextpres} is torsion free in $\mathcal A$.
Thus $i_1(\Ker m) = 0$, so that $i_1$ factors as
\begin{equation*}
R \otimes M_0 \to M' \xrightarrow{i} N
\end{equation*}
where the first arrow is the epimorphism defined by $m$
and $i$ is a morphism of $R$\nobreakdash-\hspace{0pt} modules.
Thus $M'$ and $i$ lie above $L$ and $j$, so that the image of
the class of $i$ under \eqref{e:torsequff} is the class of $j$.
Hence \eqref{e:torsequff} is surjective.
\end{proof}
Let $\mathcal C$ be a rigid tensor category, $\mathcal D$ be a cocomplete tensor category
with every object of $\mathcal D_\mathrm{rig}$ of finite
presentation in $\mathcal D$, and
\begin{equation*}
T:\mathcal C \to \mathcal D
\end{equation*}
be a fully faithful tensor functor.
As in Section~\ref{s:fun}, we have by additive Kan extension along $h_-:\mathcal C \to \widehat{\mathcal C}$
a tensor functor $T^*:\widehat{\mathcal C} \to \mathcal D$
with the universal natural transformation a tensor isomorphism \eqref{e:Th},
and there exists a lax tensor functor
$T_*:\mathcal D \to \widehat{\mathcal C}$ right adjoint to $T^*$.
By \eqref{e:pres}, a morphism
$j$ in $\widehat{\mathcal C}$ is an isomorphism if and only if $\widehat{\mathcal C}(h_A,j)$
is an isomorphism for every $A$.
Thus $T_*$ preserves filtered colimits:
take for $j$ the canonical morphism from $\colim T_*(V_\lambda)$ to
$T_*(\colim V_\lambda)$ and use the fact that the $h_A$ and the objects of
$\mathcal D_\mathrm{rig}$ are of finite presentation.
Similarly composing the unit for the adjunction with $h_-$ gives an isomorphism
\begin{equation}\label{e:unithiso}
h_- \xrightarrow{\sim} T_*T^*h_-,
\end{equation}
as can be seen by taking for $j$ the components of \eqref{e:unithiso} and using
the fact that by \eqref{e:Th} $T^*h_-$ is fully faithful.
Taking the component of \eqref{e:unithiso} at $\mathbf 1$ shows using the compatibility of \eqref{e:unithiso}
with the tensor structures that the unit
for $T$ is an isomorphism
\begin{equation}\label{e:Tunitiso}
\mathbf 1 \xrightarrow{\sim} T_*(\mathbf 1).
\end{equation}
Thus the forgetful lax tensor functor
$\MOD_{\widehat{\mathcal C}}(T_*(\mathbf 1)) \to \widehat{\mathcal C}$ is a tensor equivalence,
so that by \eqref{e:EMtens} with $H = T^*$ and $H' = T_*$ the tensor structural
morphism
\begin{equation}\label{e:Ttensiso}
T_*(V) \otimes T_*(W) \to T_*(V \otimes W)
\end{equation}
is an isomorphism for $W$ in the essential image of $T$.
Let $k$ be a field of characteristic $0$.
If $\mathbf m|\mathbf n$ is a family of pairs of integers $\ge 0$,
we may take $\mathcal C = \mathcal F_{\mathbf m|\mathbf n}$ and $\mathcal D = \MOD_{\mathrm {GL}_{\mathbf m|\mathbf n},\varepsilon}(k)$
above, and for
\begin{equation*}
T:\mathcal F_{\mathbf m|\mathbf n} \to \MOD_{\mathrm {GL}_{\mathbf m|\mathbf n},\varepsilon}(k)
\end{equation*}
the $k$\nobreakdash-\hspace{0pt} tensor functor obtained by composing the embedding of
$\Mod_{\mathrm {GL}_{\mathbf m|\mathbf n},\varepsilon}(k)$ into $\MOD_{\mathrm {GL}_{\mathbf m|\mathbf n},\varepsilon}(k)$
with \eqref{e:freeGL}:
that $T$ is fully faithful follows from Lemma~\ref{l:freeGLff}.
We define a lax tensor functor
\begin{equation}\label{e:Hdef}
\MOD_{\mathrm {GL}_{\mathbf m|\mathbf n},\varepsilon}(k) \to \widetilde{\mathcal F_{\mathbf m|\mathbf n}}
\end{equation}
by composing the projection $\widehat{\mathcal F_{\mathbf m|\mathbf n}} \to \widetilde{\mathcal F_{\mathbf m|\mathbf n}}$
with $T_*$.
\begin{prop}\label{p:MODGLFequiv}
The lax tensor functor \eqref{e:Hdef} is a tensor equivalence.
\end{prop}
\begin{proof}
Write $H$ for \eqref{e:Hdef}, $G$ for $\mathrm {GL}_{\mathbf m|\mathbf n}$ and $\mathcal F$ for $\mathcal F_{\mathbf m|\mathbf n}$.
By Lemma~\ref{l:freeGLff}, $\mathcal F$ is integral.
The functor $H$ is left exact and preserves filtered colimits.
By \eqref{e:Tunitiso}, the unit for the lax tensor functor $H$ is an isomorphism
\begin{equation}\label{e:Hunitiso}
\mathbf 1 \xrightarrow{\sim} H(k)
\end{equation}
and by \eqref{e:Ttensiso} the tensor structural morphism
\begin{equation}\label{e:Htensiso}
H(V) \otimes H(W) \to H(V \otimes_k W)
\end{equation}
of $H$ is an isomorphism for $W$ in the essential image of $T$.
By Lemma~\ref{l:Fmnfracclose} and Corollary~\ref{c:FrCff}, the composite
$\mathcal F \to \widetilde{\mathcal F}$ of the projection
$\widehat{\mathcal F} \to \widetilde{\mathcal F}$ with $h_-$ is fully faithful.
Thus by \eqref{e:Th} and \eqref{e:unithiso}, $HT$ is fully faithful.
The homomorphism
\begin{equation}\label{e:Hhomiso}
\Hom_G(V,W) \to \Hom_{\widetilde{\mathcal F}}(H(V),H(W))
\end{equation}
induced by $H$ is then an isomorphism for $V$ and $W$ in the essential image of $T$.
Given a short exact sequence
\begin{equation}\label{e:Vsex}
0 \to V' \to V \to V'' \to 0
\end{equation}
in $\Mod_{G,\varepsilon}(k)$, there exists by Theorem~\ref{t:VtensW} and
Proposition~\ref{p:Msummand}\ref{i:Msummandss} a representation $W_0 \ne 0$ of
$(\mathrm M_{\mathbf m|\mathbf n},\varepsilon)$ such that
\begin{equation*}
0 \to V' \otimes_k W_0 \to V \otimes_k W_0 \to V'' \otimes_k W_0 \to 0
\end{equation*}
is a split short exact sequence.
By Proposition~\ref{p:Msummand}\ref{i:Msummand},
$W_0$ lies in the pseudo-abelian hull of the essential image of $T$,
so that \eqref{e:Htensiso} is an isomorphism for $W = W_0$ and any $V$,
and \eqref{e:Hhomiso} is an isomorphism
for $V = W = W_0$.
The naturality of \eqref{e:Htensiso} then shows that applying $H$ to \eqref{e:Vsex} and tensoring
with $H(W_0)$ gives a split short exact sequence
\begin{equation*}
0 \to H(V') \otimes H(W_0) \to H(V) \otimes H(W_0) \to H(V'') \otimes H(W_0) \to 0.
\end{equation*}
Since \eqref{e:Htensiso} with $W_0$ or $W_0{}\!^\vee$ for $W$ is an isomorphism,
$H(W_0)$ is dualisable.
Hence $- \otimes H(W_0)$ is exact.
If $M$ is the cokernel of $H(V') \to H(V)$ and
\begin{equation*}
i:M \to H(V'')
\end{equation*}
is the unique factorisation of $H(V) \to H(V'')$ through $H(V) \to M$,
it follows that $i \otimes H(W_0)$ is an isomorphism.
Thus
\begin{equation*}
(\Ker i) \otimes H(W_0) = 0 = (\Coker i) \otimes H(W_0)
\end{equation*}
by exactness of $- \otimes H(W_0)$.
Now $H(W_0) \ne 0$ because \eqref{e:Hhomiso} is an isomorphism with $V = W = W_0$,
so that $1_{H(W_0)}$ is regular in the pseudo-abelian hull
$(\widetilde{\mathcal F})_\mathrm{rig}$ of the integral $k$\nobreakdash-\hspace{0pt} tensor
category $\mathcal F$.
Since $\widetilde{\mathcal F}$ has no non-zero torsion objects
by Proposition~\ref{p:notors} with $\mathcal A = \widehat{\mathcal F}$, it follows that
\begin{equation*}
\Ker i = 0 = \Coker i.
\end{equation*}
Thus $i$ is an isomorphism, so that the restriction of $H$ to
$\Mod_{G,\varepsilon}(k)$ is right exact.
Since $H$ preserves filtered colimits
and every morphism in $\MOD_{G,\varepsilon}(k)$ is a filtered colimit of morphisms
in $\Mod_{G,\varepsilon}(k)$,
it follows that $H$ is right exact, and hence exact and cocontinuous.
By Corollary~\ref{c:quotsub}, every object in $\Mod_{G,\varepsilon}(k)$ is a kernel
(resp.\ a cokernel) of a morphism in the pseudo-abelian hull of the essential image of $T$.
Thus \eqref{e:Htensiso} is an isomorphism for $W$ in $\Mod_{G,\varepsilon}(k)$ and every $V$.
In particular $H(V)$ is dualisable and hence of finite type for $V$ in
$\Mod_{G,\varepsilon}(k)$.
Similarly \eqref{e:Hhomiso} is an isomorphism for $V$ and $W$ in $\Mod_{G,\varepsilon}(k)$.
Since every object of $\MOD_{G,\varepsilon}(k)$ is the filtered colimit of its subobjects
in $\Mod_{G,\varepsilon}(k)$, it follows that \eqref{e:Htensiso} is an isomorphism for
every $V$ and $W$, and that \eqref{e:Hhomiso} is an isomorphism for $V$ in
$\Mod_{G,\varepsilon}(k)$ and every $W$, and hence for every $V$ and $W$.
That \eqref{e:Htensiso} and \eqref{e:Hhomiso} are isomorphisms for every $V$ and $W$ shows together with
\eqref{e:Hunitiso} that $H$ is a fully faithful tensor functor.
By \eqref{e:Th} and \eqref{e:unithiso} together with \eqref{e:pres}, every object of $\widehat{\mathcal F}$ is a
cokernel of a morphism between objects in the essential image of $T_*$.
Hence by exactness of the projection onto $\widetilde{\mathcal F}$,
every object of $\widetilde{\mathcal F}$
is a cokernel of a morphism between objects in the essential image of $H$.
The essential surjectivity of $H$ thus follows from its exactness and full faithfulness.
\end{proof}
\begin{thm}\label{t:Ftildeequiv}
Let $k$ be a field of characteristic $0$ and $\mathcal C$ be as in Proposition~\textnormal{\ref{p:FmnCesssurj}}.
Then for some $\mathbf m|\mathbf n$ there exists an affine super $(\mathrm {GL}_{\mathbf m|\mathbf n},\varepsilon)$\nobreakdash-\hspace{0pt} scheme $X$ with
$\Mod_{\mathrm {GL}_{\mathbf m|\mathbf n},\varepsilon}(X)$ integral such that $\widetilde{\mathcal C}$ is $k$\nobreakdash-\hspace{0pt} tensor
equivalent to $\overline{\MOD_{\mathrm {GL}_{\mathbf m|\mathbf n},\varepsilon}(X)}$.
\end{thm}
\begin{proof}
Let $\mathbf m|\mathbf n$ and $R$ be as in Theorem~\ref{t:Fhatequiv}.
Since $\mathcal C$ and $\mathcal F_{\mathbf m|\mathbf n}$ are integral, so are their pseudo-abelian hulls
$(\widehat{\mathcal C})_\mathrm{rig}$ and $(\widehat{\mathcal F_{\mathbf m|\mathbf n}})_\mathrm{rig}$,
and hence also $\Mod_{\widehat{\mathcal F_{\mathbf m|\mathbf n}}}(R)$.
Thus $R \otimes -$ from $(\widehat{\mathcal F_{\mathbf m|\mathbf n}})_\mathrm{rig}$ to
$\Mod_{\widehat{\mathcal F_{\mathbf m|\mathbf n}}}(R)$ is regular, so that by Lemma~\ref{l:regtorsfree}
$R$ is a torsion free object of $\widehat{\mathcal F_{\mathbf m|\mathbf n}}$.
By Propositions~\ref{p:torsequ} and \ref{p:MODGLFequiv}, if $R'$ is the image of
$\overline{R}$ under a quasi-inverse of the $k$\nobreakdash-\hspace{0pt} tensor equivalence \eqref{e:Hdef},
we have $k$\nobreakdash-\hspace{0pt} tensor equivalences
\begin{equation*}
\widetilde{\mathcal C} \to \overline{\MOD_{\widetilde{\mathcal F_{\mathbf m|\mathbf n}}}(\overline{R})} \to
\overline{\MOD_{\mathrm {GL}_{\mathbf m|\mathbf n},\varepsilon}(R')}.
\end{equation*}
Applying $(-)_\mathrm{rig}$ and using Lemma~\ref{l:projreg} shows that
$\Mod_{\mathrm {GL}_{\mathbf m|\mathbf n},\varepsilon}(R')$ is integral.
Thus we may take $X = \Spec(R')$.
\end{proof}
\section{Equivariant sheaves}\label{s:equ}
In this section we consider tensor categories of equivariant quasi-coherent sheaves
over a super scheme with an action of an affine super group over a field of characteristic $0$.
The main results, Theorems~\ref{t:transaff} and \ref{t:GKMODequiv},
show that, under appropriate restrictions, the quotient of such a tensor category
modulo torsion is tensor equivalent to the category of modules over a transitive affine groupoid.
Throughout this section, $k$ is a field of characteristic $0$ and $(G,\varepsilon)$
is an affine super $k$\nobreakdash-\hspace{0pt} group with involution.
A morphism $Y \to X$ of super $k$\nobreakdash-\hspace{0pt} schemes will be called \emph{quasi-compact}
if the inverse image of every quasi-compact open super subscheme of $X$ is quasi-compact
in $Y$, \emph{quasi-separated} if the diagonal $Y \to Y \times_X Y$ is quasi-compact,
and \emph{quasi-affine} if it is quasi-compact and the inverse image of any affine open
super subscheme of $X$ is isomorphic to an open super subscheme of an affine
super $k$\nobreakdash-\hspace{0pt} scheme.
A super $k$\nobreakdash-\hspace{0pt} scheme will be called quasi-compact (quasi-separated, quasi-affine) if its
structural morphism is.
Let $f:Y \to X$ be a quasi-compact and quasi-separated morphism of super $k$\nobreakdash-\hspace{0pt} schemes and $\mathcal V$ be a
quasi-coherent $\mathcal O_Y$\nobreakdash-\hspace{0pt} module.
Then $f_*\mathcal V$ is a quasi-coherent $\mathcal O_X$\nobreakdash-\hspace{0pt} module:
reduce first to the case where $X$ is affine, then by taking a finite affine open
cover of $Y$ to the case where $Y$ is quasi-affine, and finally to the case where $Y$ is affine.
Similarly the base change morphism for pullback of $f$ and $\mathcal V$ along a flat morphism $X' \to X$ of super schemes
is an isomorphism.
When $X = \Spec(k)$, the push forward $f_*\mathcal V$ of $\mathcal V$ may be identified with the
super $k$\nobreakdash-\hspace{0pt} vector space
\begin{equation*}
H^0(Y,\mathcal V)
\end{equation*}
of global sections of $\mathcal V$.
A morphism $f:Y \to X$ will be called \emph{super schematically dominant} if for
every open super subscheme $X'$ of $X$ the morphism $Y' \to X'$ induced by $f$ on the
inverse image $Y'$ of $X'$ in $Y$ factors through no closed super subscheme of $X'$
strictly contained in $X'$.
For $f$ quasi-compact and quasi-separated, it is equivalent to require that the
canonical morphism $\mathcal O_X \to f_*\mathcal O_Y$ in $\MOD(X)$ be a monomorphism.
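For instance, a morphism $\Spec(B) \to \Spec(A)$ of affine super $k$\nobreakdash-\hspace{0pt} schemes,
which is automatically quasi-compact and quasi-separated, is super schematically dominant if and only if
the corresponding homomorphism
\begin{equation*}
A \to B
\end{equation*}
of super $k$\nobreakdash-\hspace{0pt} algebras is injective.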
A super subscheme of $X$ will be called \emph{super schematically dense} if the embedding is
super schematically dominant.
Suppose that $f$ is quasi-affine.
The canonical morphism
\begin{equation*}
Y \to \Spec(f_*\mathcal O_Y)
\end{equation*}
is an open immersion.
For any quasi-coherent $\mathcal O_Y$\nobreakdash-\hspace{0pt} module $\mathcal V$ the counit $f^*f_*\mathcal V \to \mathcal V$ is an epimorphism:
reduce to the cases where $f$ is an open immersion or where $f$ is affine.
Let $f:Y \to X$ be a quasi-compact and quasi-separated morphism of super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} schemes
and $\mathcal V$ be a $(G,\varepsilon)$\nobreakdash-\hspace{0pt} equivariant quasi-coherent $\mathcal O_Y$\nobreakdash-\hspace{0pt} module.
Then $f_*\mathcal V$ has a
canonical structure of $(G,\varepsilon)$\nobreakdash-\hspace{0pt} equivariant $\mathcal O_X$\nobreakdash-\hspace{0pt} module.
We have a lax tensor functor $f_*$ from $\MOD_{G,\varepsilon}(Y)$ to $\MOD_{G,\varepsilon}(X)$
right adjoint to $f^*$, where the unit and counit for the adjunction have the same
components as those for the underlying modules.
Let $Y$ be a quasi-compact quasi-separated super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme.
Then $H^0(Y,\mathcal V)$ for $\mathcal V$ in $\MOD_{G,\varepsilon}(Y)$ has a canonical structure of
$(G,\varepsilon)$\nobreakdash-\hspace{0pt} module.
We denote by
\begin{equation*}
H^0_G(Y,\mathcal V)
\end{equation*}
the $k$\nobreakdash-\hspace{0pt} vector subspace of $H^0(Y,\mathcal V)$ of invariants under $G$.
It consists of those global sections $v$ of $\mathcal V$, necessarily lying in the even
part $\mathcal V_0$ of $\mathcal V$, such that the action of $G$ on $\mathcal V$ sends the pullback
of $v$ along the projection from $G \times_k Y$ to $Y$ to its pullback along
the action of $G$ on $Y$.
Thus
\begin{equation*}
H^0_G(Y,\mathcal V) = \Hom_{G,\mathcal O_Y}(\mathcal O_Y,\mathcal V).
\end{equation*}
The super $k$\nobreakdash-\hspace{0pt} algebra $H^0(Y,\mathcal O_Y)$ is a commutative $(G,\varepsilon)$\nobreakdash-\hspace{0pt} algebra
with the canonical morphism
\begin{equation*}
Y \to \Spec(H^0(Y,\mathcal O_Y))
\end{equation*}
a morphism of $(G,\varepsilon)$\nobreakdash-\hspace{0pt} schemes.
\begin{lem}\label{l:quotsub}
Let $X$ be a quasi-affine super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme.
\begin{enumerate}
\item\label{i:quotsubmod}
Every object of $\MOD_{G,\varepsilon}(X)$ is a quotient of one of the form $V \otimes_k \mathcal O_X$
for some $(G,\varepsilon)$\nobreakdash-\hspace{0pt} module $V$.
\item\label{i:quotsubrep}
Every object of $\Mod_{G,\varepsilon}(X)$ is a quotient (resp.\ subobject) of one of the form
$V \otimes_k \mathcal O_X$ for some representation $V$ of $(G,\varepsilon)$.
\end{enumerate}
\end{lem}
\begin{proof}
Let $\mathcal V$ be an object of $\MOD_{G,\varepsilon}(X)$ and $V$ be its push forward along the structural morphism of $X$.
Then the counit $V \otimes_k \mathcal O_X \to \mathcal V$ is an epimorphism in $\MOD(X)$,
and hence in $\MOD_{G,\varepsilon}(X)$.
This gives \ref{i:quotsubmod}.
If $\mathcal V$ lies in $\Mod_{G,\varepsilon}(X)$, writing
$V$ as the filtered colimit of its $(G,\varepsilon)$\nobreakdash-\hspace{0pt} submodules of finite type
and using the quasi-compactness of $X$ gives the quotient case of \ref{i:quotsubrep}.
The subobject case follows by taking duals.
\end{proof}
It follows from Lemma~\ref{l:quotsub}\ref{i:quotsubmod} that if $X$ is a quasi-affine
super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme then $\MOD_{G,\varepsilon}(X)$ is a well-dualled
Grothendieck tensor category.
\begin{lem}\label{l:domfaith}
Let $f:X' \to X$ be a morphism of super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} schemes, with $X$ quasi-affine and $X'$
quasi-compact and quasi-separated.
Then $f$ is super schematically dominant if and only if
$f^*:\Mod_{G,\varepsilon}(X) \to \Mod_{G,\varepsilon}(X')$ is faithful.
\end{lem}
\begin{proof}
Since $f$ is quasi-compact and quasi-separated, the unit $\mathcal O_X \to f_*\mathcal O_{X'}$ is a morphism in
$\MOD_{G,\varepsilon}(X)$.
By Lemma~\ref{l:quotsub}\ref{i:quotsubmod} applied to its kernel, it is a monomorphism
if and only if its composite with every non-zero morphism $V \otimes_k \mathcal O_X \to \mathcal O_X$
in $\Mod_{G,\varepsilon}(X)$ with $V$ in $\Mod_{G,\varepsilon}(k)$ is non-zero.
Such a composite corresponds under the adjunction between $f^*$ and $f_*$ to the pullback along $f$
of the given morphism $V \otimes_k \mathcal O_X \to \mathcal O_X$, while by
Lemma~\ref{l:quotsub}\ref{i:quotsubrep} and the dualisability of representations the faithfulness
of $f^*$ may be tested on morphisms of this form.
The lemma follows.
\end{proof}
A tensor category will be called \emph{reduced} if $f^{\otimes n} = 0$ implies $f = 0$
for every morphism $f$.
\begin{lem}\label{l:rednil}
Let $X$ be a quasi-affine super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme with $\Mod_{G,\varepsilon}(X)$ reduced.
Then there are no non-zero $(G,\varepsilon)$\nobreakdash-\hspace{0pt} ideals of $\mathcal O_X$ contained
in the nilradical of $\mathcal O_X$.
\end{lem}
\begin{proof}
Let $\mathcal J$ be a non-zero $(G,\varepsilon)$\nobreakdash-\hspace{0pt} ideal of $\mathcal O_X$.
By Lemma~\ref{l:quotsub}\ref{i:quotsubmod}, there is for some representation $V$ of $(G,\varepsilon)$ a non-zero
morphism $\varphi$ from $V \otimes_k \mathcal O_X$ to $\mathcal O_X$ which factors through $\mathcal J$.
Thus $\mathcal J$ cannot be contained in the nilradical of $\mathcal O_X$, because otherwise the quasi-compactness of $X$
would imply that $\varphi^{\otimes n} = 0$ for some $n$.
\end{proof}
\begin{lem}\label{l:pullepi}
Let $X$ be a quasi-affine super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme with $\Mod_{G,\varepsilon}(X)$ reduced,
and $j:Z \to X$ be a morphism of super $k$\nobreakdash-\hspace{0pt} schemes which factors through every
non-empty open super $G$\nobreakdash-\hspace{0pt} subscheme of $X$.
Then $j^*$ sends every non-zero morphism in $\MOD_{G,\varepsilon}(X)$ with target $\mathcal O_X$
to an epimorphism in $\MOD(Z)$.
\end{lem}
\begin{proof}
The image of a non-zero morphism $\mathcal V \to \mathcal O_X$ in $\MOD_{G,\varepsilon}(X)$ is a non-zero
$G$\nobreakdash-\hspace{0pt} ideal $\mathcal J$ of $\mathcal O_X$.
By Lemma~\ref{l:rednil}, the complement $Y$ of the closed super $G$\nobreakdash-\hspace{0pt} subscheme of $X$ defined
by $\mathcal J$ is a non-empty open $G$\nobreakdash-\hspace{0pt} subscheme of $X$.
Thus $j$ factors through $Y$, so that $j^*$ sends $\mathcal O_X/\mathcal J$ to $0$, and hence $\mathcal V \to \mathcal O_X$
to an epimorphism.
\end{proof}
Let $X$ be a quasi-compact and quasi-separated super $k$\nobreakdash-\hspace{0pt} scheme.
Then $H^0(X,-)$ preserves filtered colimits of quasi-coherent $\mathcal O_X$\nobreakdash-\hspace{0pt} modules:
reduce by taking a finite affine
open cover of $X$ first to the case where $X$ is quasi-affine and hence separated,
and then to the case where $X$ is affine.
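In the affine case $X = \Spec(R)$, writing $\widetilde{M}$ (a notation used for this remark only) for the
$\mathcal O_X$\nobreakdash-\hspace{0pt} module associated to an $R$\nobreakdash-\hspace{0pt} module $M$, the assertion is the tautology
\begin{equation*}
H^0(X,\colim_\lambda \widetilde{M_\lambda}) = \colim_\lambda M_\lambda = \colim_\lambda H^0(X,\widetilde{M_\lambda}).
\end{equation*}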
If $X$ is the limit of a filtered system $(X_\lambda)$ of quasi-compact and quasi-separated super $k$\nobreakdash-\hspace{0pt} schemes
with affine transition morphisms, pushing forward the structure sheaves onto some $X_{\lambda_0}$ shows that
$H^0(X,\mathcal O_X)$ is the colimit of the $H^0(X_\lambda,\mathcal O_{X_\lambda})$.
\begin{lem}\label{l:limopensub}
Let $X$ be a filtered limit $\lim_{\lambda \in \Lambda}X_\lambda$ of quasi-affine super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} schemes
with affine transition morphisms, and $Y$ be an open super $G$\nobreakdash-\hspace{0pt} subscheme of $X$.
Then for each point $y$ of $Y$ there exists for some $\lambda \in \Lambda$ a quasi-compact
open super $G$\nobreakdash-\hspace{0pt} subscheme
$Y'$ of $X_\lambda$ containing the image of $y$ in $X_\lambda$
such that the inverse image of $Y'$ in $X$ is contained in $Y$.
\end{lem}
\begin{proof}
We may suppose that $G$, $X$ and the $X_\lambda$ are reduced.
Since $X$ is quasi-affine, $y \in X_f \subset Y$ for some $f$ in $H^0(X,\mathcal O_X)$.
If $f_1, \dots , f_n$ is a basis of the $G$\nobreakdash-\hspace{0pt} submodule of $H^0(X,\mathcal O_X)$ generated by $f$, then
\begin{equation*}
y \in X_{f_1} \cup \dots \cup X_{f_n} \subset Y.
\end{equation*}
For some $\lambda \in \Lambda$, each $f_i$ comes from an $f'{}\!_i$ in
$H^0(X_\lambda,\mathcal O_{X_\lambda})$ with the $f'{}\!_i$ a
basis of a $G$\nobreakdash-\hspace{0pt} submodule of $H^0(X_\lambda,\mathcal O_{X_\lambda})$.
Then we may take $X_\lambda{}_{f'{}\!_1} \cup \dots \cup X_\lambda{}_{f'{}\!_n}$
for $Y'$.
\end{proof}
\begin{lem}\label{l:qafflim}
Let $X$ be a quasi-affine super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme.
Then $X$ is the limit of a filtered inverse system $(X_\lambda)_{\lambda \in \Lambda}$ of
quasi-affine super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} schemes of finite type with affine transition morphisms
and super schematically dominant projections.
If $X$ is affine then the $X_\lambda$ may be taken to be affine.
\end{lem}
\begin{proof}
If $X = \Spec(R)$ is affine, writing $R$ as the filtered colimit of its finitely generated
super $G$\nobreakdash-\hspace{0pt} subalgebras gives the required result, with the $X_\lambda$ affine.
In general, we may regard $X$ as an open super $G$\nobreakdash-\hspace{0pt} subscheme of the spectrum $Z$ of
$H^0(X,\mathcal O_X)$.
By the affine case, $Z$ is the limit of a filtered inverse system $(Z_\lambda)$ of affine
$(G,\varepsilon)$\nobreakdash-\hspace{0pt} schemes of finite type with super schematically dominant projections.
By quasi-compactness of $X$ and Lemma~\ref{l:limopensub}, there exists a $\lambda_0$
such that $X$ is the inverse image of an open super $G$\nobreakdash-\hspace{0pt} subscheme $X'$ of $Z_{\lambda_0}$.
Then the system $(X_\lambda)_{\lambda \ge \lambda_0}$ with $X_\lambda$ the inverse image
of $X'$ in $Z_\lambda$ has the required properties.
\end{proof}
\begin{lem}\label{l:schdense}
Let $X$ be a quasi-affine super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme with $\Mod_{G,\varepsilon}(X)$ integral.
Then any non-empty open super $G$\nobreakdash-\hspace{0pt} subscheme of $X$ is super schematically dense.
\end{lem}
\begin{proof}
Suppose first that $X$ is of finite type.
Let $U_1$ be a non-empty open super $G$\nobreakdash-\hspace{0pt} subscheme of $X$.
The open super subscheme $U_2$ of $X$ on the complement of the closure of $U_1$
is a super $G$\nobreakdash-\hspace{0pt} subscheme which is disjoint from $U_1$, and $U_1 \cup U_2$
is dense in $X$.
If $j_i:U_i \to X$ is the embedding, then the canonical morphism
\begin{equation*}
\mathcal O_X \to j_i{}_*\mathcal O_{U_i}
\end{equation*}
has kernel a quasi-coherent $G$\nobreakdash-\hspace{0pt} ideal $\mathcal J_i$ of $\mathcal O_X$.
Since $\mathcal J_1 \cap \mathcal J_2$ is $0$ on $U_1 \cup U_2$, it is contained in the
nilradical of $\mathcal O_X$, and hence is $0$ by Lemma~\ref{l:rednil}.
Thus $\mathcal J_1\mathcal J_2 = 0$.
Now $\mathcal J_2 \ne 0$ because $U_1$ is non-empty and $U_1 \cap U_2 = \emptyset$.
If $\mathcal J_1$ were $\ne 0$, then by Lemma~\ref{l:quotsub}\ref{i:quotsubmod} there would for each
$i$ be a non-zero morphism $V_i \otimes_k \mathcal O_X \to \mathcal O_X$ in $\Mod_{G,\varepsilon}(X)$
which factors through $\mathcal J_i$,
contradicting the integrality of $\Mod_{G,\varepsilon}(X)$.
Thus $\mathcal J_1 = 0$, and $U_1$ is super schematically dense in $X$.
To prove the general case, write $X$ as the limit of a filtered inverse system
$(X_\lambda)_{\lambda \in \Lambda}$ as in Lemma~\ref{l:qafflim}.
Let $U$ be a non-empty open super $G$\nobreakdash-\hspace{0pt} subscheme of $X$.
There exists by Lemma~\ref{l:limopensub} a $\lambda_0 \in \Lambda$ such that
\begin{equation*}
\emptyset \ne \pr_{\lambda_0}^{-1}(U_0) \subset U
\end{equation*}
for an open super $G$\nobreakdash-\hspace{0pt} subscheme $U_0$ of $X_{\lambda_0}$.
Now the projections $\pr_\lambda:X \to X_\lambda$ are super schematically dominant, so that by
Lemma~\ref{l:domfaith} the functors $\pr_\lambda{}\!^*$ are faithful and the
$\Mod_{G,\varepsilon}(X_\lambda)$ are therefore integral.
Thus by the case where $X$ is of finite type, the inverse image of $U_0$ in $X_\lambda$
is super schematically dense for each $\lambda \ge \lambda_0$.
Covering $X_{\lambda_0}$ with affine open subsets then shows that
$\pr_{\lambda_0}^{-1}(U_0)$ and hence $U$ is super schematically dense in $X$.
\end{proof}
Let $X$ be a super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme and $x$ be a point of $X$.
We write
\begin{equation*}
\omega_{X,x}:\Mod_{G,\varepsilon}(X) \to \Mod(\kappa(x))
\end{equation*}
for the $k$\nobreakdash-\hspace{0pt} tensor functor given by taking the fibre at $x$.
\begin{lem}\label{l:pointfaith}
Let $X$ be a quasi-affine super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme with $\Mod_{G,\varepsilon}(X)$ integral.
Then $X$ has a point which lies in every non-empty open super $G$\nobreakdash-\hspace{0pt} subscheme of $X$.
For any such point $x$, the functor $\omega_{X,x}$ from $\Mod_{G,\varepsilon}(X)$ to
$\Mod(\kappa(x))$ is faithful.
\end{lem}
\begin{proof}
Write $X$ as the limit of a filtered inverse system
$(X_\lambda)_{\lambda \in \Lambda}$ as in Lemma~\ref{l:qafflim}.
Let $\overline{k}$ be an algebraic closure of $k$, and for each $\lambda$
write $\overline{\mathcal M}_\lambda$ for the set of maximal points of
$(X_\lambda)_{\overline{k}}$.
The group
\begin{equation*}
\Gamma = G(\overline{k}) \rtimes \Gal(\overline{k}/k)
\end{equation*}
acts on $\overline{\mathcal M}_\lambda$,
and the quotient $\overline{\mathcal M}_\lambda/\Gal(\overline{k}/k)$ by the subgroup
$\Gal(\overline{k}/k)$ may be identified with the set $\mathcal M_\lambda$
of maximal points of $X_\lambda$.
Let $\overline{\mathcal M}_1$ be a non-empty $\Gamma$\nobreakdash-\hspace{0pt} subset of $\overline{\mathcal M}_\lambda$.
If $\overline{\mathcal M}_2$ is the complement of $\overline{\mathcal M}_1$ in $\overline{\mathcal M}_\lambda$,
then the complement of the closure of $\overline{\mathcal M}_2$ in $(X_\lambda)_{\overline{k}}$
is an open super subscheme $\overline{U}_1$ of $(X_\lambda)_{\overline{k}}$
which contains $\overline{\mathcal M}_1$ but no point of $\overline{\mathcal M}_2$.
Further $\overline{U}_1$ is stable under $\Gamma$, and hence descends to an open super
$G$\nobreakdash-\hspace{0pt} subscheme $U_1$ of $X_\lambda$ which contains the image $\mathcal M_1$ of $\overline{\mathcal M}_1$
in $\mathcal M_\lambda$ but no point of the image $\mathcal M_2$ of $\overline{\mathcal M}_2$.
Since $\Mod_{G,\varepsilon}(X_\lambda)$ is integral by Lemma~\ref{l:domfaith},
$U_1$ is dense in $X_\lambda$ by Lemma~\ref{l:schdense}.
Thus $\mathcal M_2$ and hence $\overline{\mathcal M}_2$ is empty, so that $\overline{\mathcal M}_1 = \overline{\mathcal M}_\lambda$.
This shows that $\Gamma$ acts transitively on $\overline{\mathcal M}_\lambda$.
Since $(X_\lambda)_{\overline{k}} \to (X_\mu)_{\overline{k}}$
is dominant and compatible with the action of $\Gamma$ for $\lambda \ge \mu$,
it follows that it sends $\overline{\mathcal M}_\lambda$ to $\overline{\mathcal M}_\mu$.
Hence $X_\lambda \to X_\mu$ sends $\mathcal M_\lambda$ to $\mathcal M_\mu$, and the $\mathcal M_\lambda$
form a filtered inverse system of finite non-empty subsets of the $X_\lambda$.
By Tychonoff's theorem, the set $\lim_\lambda \mathcal M_\lambda$ is non-empty.
Let $(x_\lambda)$ be an element.
Then there is a (unique) point $x$ of $X$ which lies above $x_\lambda$ in $X_\lambda$
for each $\lambda$.
By Lemma~\ref{l:limopensub}, any non-empty open super $G$\nobreakdash-\hspace{0pt} subscheme $U$ of $X$
contains the inverse image in $X$ of a non-empty open super $G$\nobreakdash-\hspace{0pt} subscheme $U_\lambda$
of some $X_\lambda$.
Since $U_\lambda$ contains $x_\lambda$ by Lemma~\ref{l:schdense},
it follows that $U$ contains $x$.
The final statement follows from Lemma~\ref{l:pullepi} by taking $Z = \Spec(\kappa(x))$.
\end{proof}
\begin{lem}\label{l:openGsubint}
Let $X$ be a quasi-affine super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme with $\Mod_{G,\varepsilon}(X)$ integral,
and $Y$ be a non-empty quasi-compact open super $G$\nobreakdash-\hspace{0pt} subscheme of $X$.
Then $\Mod_{G,\varepsilon}(Y)$ is integral, the restriction functor from
$\Mod_{G,\varepsilon}(X)$ to $\Mod_{G,\varepsilon}(Y)$ is faithful, and the induced homomorphism
from $\kappa(\Mod_{G,\varepsilon}(X))$ to $\kappa(\Mod_{G,\varepsilon}(Y))$
is an isomorphism.
\end{lem}
\begin{proof}
The faithfulness is clear from Lemmas~\ref{l:domfaith} and \ref{l:schdense}.
Write $j:Y \to X$ for the embedding.
By Lemma~\ref{l:quotsub}\ref{i:quotsubrep}, any morphism in $\Mod_{G,\varepsilon}(Y)$
with source $\mathcal O_Y$ may,
after composing with an appropriate monomorphism, be put into the form
\begin{equation*}
f:\mathcal O_Y \to j^*\mathcal V
\end{equation*}
with $\mathcal V$ in $\Mod_{G,\varepsilon}(X)$.
To prove the integrality and isomorphism statements,
it is then enough by the faithfulness to show that if $f \ne 0$ there exist
morphisms $h$ and $f' \ne 0$ in $\Mod_{G,\varepsilon}(X)$ such that
\begin{equation}\label{e:fjh}
f \otimes j^*(h) = j^*(f')
\end{equation}
We have $f = j^*(f_0)$, where $f_0:\mathcal O_X \to j_*j^*\mathcal V$ corresponds under adjunction to $f$.
Pulling back $f_0$ along the unit $\eta_{\mathcal V}$ gives a cartesian square
\begin{equation*}
\xymatrix{
\mathcal W_1 \ar_{u}[d] \ar^{f_1}[r] & \mathcal V \ar^{\eta_{\mathcal V}}[d] \\
\mathcal O_X \ar^{f_0}[r] & j_*j^*\mathcal V
}
\end{equation*}
in $\MOD_{G,\varepsilon}(X)$, with $j^*(\eta_{\mathcal V})$ the identity and hence $j^*(u)$ an isomorphism.
Then $j^*(f_1) \ne 0$ and hence $f_1 \ne 0$, so that by Lemma~\ref{l:quotsub}\ref{i:quotsubmod}
there is a $v:\mathcal W \to \mathcal W_1$ with $\mathcal W$ in $\Mod_{G,\varepsilon}(X)$ such that $f_1 \circ v \ne 0$ in
$\Mod_{G,\varepsilon}(X)$.
If $h = u \circ v$ and $f' = f_1 \circ v$, then \eqref{e:fjh} is satisfied because
$f \otimes j^*(h) = f \circ j^*(h)$.
\end{proof}
Let $X$ be a quasi-affine super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme
with $\Mod_{G,\varepsilon}(X)$ integral.
The endomorphism $k$\nobreakdash-\hspace{0pt} algebra of $\mathbf 1$ in $\Mod_{G,\varepsilon}(X)$
is given by
\begin{equation*}
\End_{G,\mathcal O_X}(\mathcal O_X) = H^0_G(X,\mathcal O_X).
\end{equation*}
For $Y$ as in Lemma~\ref{l:openGsubint} we then have a commutative square
\begin{equation*}
\xymatrix{
H^0_G(X,\mathcal O_X) \ar[d] \ar[r] & H^0_G(Y,\mathcal O_Y) \ar[d] \\
\kappa(\Mod_{G,\varepsilon}(X)) \ar^{\sim}[r] & \kappa(\Mod_{G,\varepsilon}(Y))
}
\end{equation*}
with the arrows injective.
As $Y$ varies, such squares form a filtered system with the left arrow fixed.
It can be seen as follows that the homomorphism
\begin{equation}\label{e:funfieldiso}
\colim_{Y \subset X, \; Y \ne \emptyset}H^0_G(Y,\mathcal O_Y) \to \kappa(\Mod_{G,\varepsilon}(X))
\end{equation}
it defines is an isomorphism, where the colimit is over the filtered system of non-empty quasi-compact open super
$G$\nobreakdash-\hspace{0pt} subschemes $Y$ of $X$.
Given $\alpha$ in $\kappa(\Mod_{G,\varepsilon}(X))$, it is to be shown that
there exists a $Y$ such that the image of $\alpha$ in
$\kappa(\Mod_{G,\varepsilon}(Y))$ is the image of an element of $H^0_G(Y,\mathcal O_Y)$.
We have $\alpha = h/f$ for
morphisms $f,h:\mathcal V \to \mathcal O_X$ in $\Mod_{G,\varepsilon}(X)$ with $f \ne 0$ and
$f \otimes h = h \otimes f$.
The closed super subscheme of $X$ defined by the ideal $\Img f$ of $\mathcal O_X$ is a $G$\nobreakdash-\hspace{0pt} subscheme,
and we may take for $Y$ its complement.
Indeed $Y$ so defined is quasi-compact and by Lemma~\ref{l:rednil} it is non-empty,
and since the restriction of $f$ to $Y$
is an epimorphism, the image of $\alpha$ in $\kappa(\Mod_{G,\varepsilon}(Y))$ has the required property
by Lemma~\ref{l:torsfrac} with $\mathcal A = \MOD_{G,\varepsilon}(Y)$.
\begin{lem}\label{l:redmono}
Let $f:X' \to X$ be a super schematically dominant morphism of super $k$\nobreakdash-\hspace{0pt} schemes of finite type
which induces an isomorphism from $f^{-1}(X_{\mathrm{red}})$ to $X_{\mathrm{red}}$.
Then $f$ is an isomorphism.
\end{lem}
\begin{proof}
Write $\mathcal N$ and $\mathcal N'$ for the nilradicals of $X$ and $X'$,
and
\[
\alpha:\mathcal O_X \to f_*\mathcal O_{X'}
\]
for the monomorphism induced by $f$.
Then $\mathcal N'$ contains the inverse image of $\mathcal N$ in $\mathcal O_{X'}$,
and hence coincides with it, because $f^{-1}(X_{\mathrm{red}})$ is reduced.
Thus $X'{}\!_{\mathrm{red}} = f^{-1}(X_{\mathrm{red}})$, so that
$f_{\mathrm{red}}:X'{}\!_{\mathrm{red}} \to X_{\mathrm{red}}$ is an isomorphism and $f$ is a
homeomorphism.
It remains to show that $\alpha$ is an isomorphism.
We show by induction on $n$ that for $n \ge 1$
\begin{align}
\label{e:fNfNn}
f_*\mathcal N' & = \alpha(\mathcal N) + (f_*\mathcal N')^n \\
\label{e:fOfNn}
f_*\mathcal O_{X'} & = \alpha(\mathcal O_X) + (f_*\mathcal N')^n.
\end{align}
Since $f_*\mathcal N'$ is a nilpotent ideal of $f_*\mathcal O_{X'}$, it will follow from \eqref{e:fOfNn} for $n$ large
that $f_*\mathcal O_{X'} = \alpha(\mathcal O_X)$, so that $\alpha$ is indeed an isomorphism.
That \eqref{e:fNfNn} holds for $n=1$ is immediate.
That \eqref{e:fOfNn} holds for $n=1$ follows from the fact that
since $f_{\mathrm{red}}$ is an isomorphism,
$\alpha$ induces an isomorphism from $\mathcal O_X/\mathcal N$ to $f_*\mathcal O_{X'}/f_*\mathcal N'$.
Suppose that \eqref{e:fNfNn} and \eqref{e:fOfNn} hold for $n=r$.
Then inserting \eqref{e:fOfNn} with $n=r$ into
\begin{equation*}
f_*\mathcal N' = \alpha(\mathcal N)f_*\mathcal O_{X'}
\end{equation*}
shows that \eqref{e:fNfNn} holds for $n = r+1$,
and inserting \eqref{e:fNfNn} with $n=r$ into \eqref{e:fOfNn} with $n=r$ shows that \eqref{e:fOfNn}
holds for $n = r+1$.
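For instance, since $\alpha(\mathcal N)\alpha(\mathcal O_X) = \alpha(\mathcal N)$ and
$\alpha(\mathcal N) \subset f_*\mathcal N'$, the first of these insertions may be spelled out as
\begin{equation*}
f_*\mathcal N' = \alpha(\mathcal N)\bigl(\alpha(\mathcal O_X) + (f_*\mathcal N')^r\bigr)
\subset \alpha(\mathcal N) + (f_*\mathcal N')^{r+1} \subset f_*\mathcal N',
\end{equation*}
so that all the inclusions are equalities.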
\end{proof}
By an equivalence relation on a super $k$\nobreakdash-\hspace{0pt} scheme $X$ we mean a super subscheme $E$
of $X \times_k X$ with faithfully flat quasi-compact projections $E \to X$ such that
$E(S)$ is an equivalence relation on the set $X(S)$ for any super $k$\nobreakdash-\hspace{0pt} scheme $S$.
By a quotient of $X$ by $E$ we mean a super $k$\nobreakdash-\hspace{0pt} scheme $Y$ together with a
faithfully flat quasi-compact $k$\nobreakdash-\hspace{0pt} morphism $X \to Y$ which coequalises the
projections $E \to X$, such that the square with two sides $X \to Y$ and the other two sides the projections
$E \to X$ is cartesian.
Such an $X \to Y$ is the coequaliser in the category of super local $k$\nobreakdash-\hspace{0pt} ringed spaces
of the projections $E \to X$, and hence is unique up to unique isomorphism when it exists.
We write $X/E$ for the quotient of $X$ by $E$ when it exists.
Formation of quotients is compatible with extension of scalars.
If $T$ is a super $k$\nobreakdash-\hspace{0pt} scheme, the quotient $(T \times_k X)/(T \times_k E)$ exists if $X/E$ does, and may be identified with $T \times_k (X/E)$.
Suppose that $X/E$ exists.
If $E'$ is an equivalence relation on $X$ coarser than $E$, and if
$E' \to X \times_k X$ is a closed immersion, then by faithfully flat descent
of super $k$\nobreakdash-\hspace{0pt} subschemes, $E$ descends along
\begin{equation*}
X \times_k X \to (X/E) \times_k (X/E)
\end{equation*}
to an equivalence relation $\overline{E'}$ on $X/E$.
The quotient $X/E'$ exists if and only if $(X/E)/\overline{E'}$ does, and they then coincide.
A super subscheme $T$ of a super $k$\nobreakdash-\hspace{0pt} scheme $X$ will be called a \emph{transversal} to an equivalence relation
$E$ on $X$ if the restriction of the first projection $\pr_1:E \to X$ to $\pr_2{}\!^{-1}(T)$ is an isomorphism.
With $X$ identified with $\pr_2{}\!^{-1}(T)$ by this isomorphism, the projection $X \to T$ is that onto the
quotient of $X$ by $E$.
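As a toy example, take $X = X_1 \times_k X_2$ with $X_2$ quasi-compact with a $k$\nobreakdash-\hspace{0pt} point $t$,
and for $E$ the kernel pair $X \times_{X_1} X$ of $\pr_1:X \to X_1$.
Then the super subscheme
\begin{equation*}
T = X_1 \times_k \{t\}
\end{equation*}
of $X$ is a transversal to $E$: the points of $\pr_2{}\!^{-1}(T)$ are the pairs $((x_1,x_2),(x_1,t))$,
on which $\pr_1$ is an isomorphism onto $X$, and the resulting projection $X \to T$ is $\pr_1$
followed by $x_1 \mapsto (x_1,t)$.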
For $G$ a super $k$\nobreakdash-\hspace{0pt} group and $H$ a super $k$\nobreakdash-\hspace{0pt} subgroup of $G$, we denote as usual by $G/H$ the quotient,
when it exists, of $G$ by the equivalence relation defined by right translation by $H$.
The left action of $G$ on itself by left translation defines a structure of super $G$\nobreakdash-\hspace{0pt} scheme on $G/H$.
If $X$ is a super $G$\nobreakdash-\hspace{0pt} scheme and $x$ is a $k$\nobreakdash-\hspace{0pt} point of $X$ which is fixed by $H$, there is a unique morphism
$G/H \to X$ of $G$\nobreakdash-\hspace{0pt} schemes which sends the image in $G/H$ of the identity of $G$ to $x$.
\begin{lem}\label{l:superrediso}
Let $X$ and $X'$ be smooth super $k$\nobreakdash-\hspace{0pt} schemes, $f$ be a $k$\nobreakdash-\hspace{0pt} morphism from $X'$ to $X$,
and $x'$ be a $k$\nobreakdash-\hspace{0pt} point of $X'$.
Suppose that $f$ induces an isomorphism from
$X'{}\!_{\mathrm{red}}$ to $X_{\mathrm{red}}$,
and an isomorphism from the tangent space of $X'$ at $x'$ to that of $X$ at $f(x')$.
Then there exists an open super subscheme $X_0$ of $X$ containing $f(x')$ such that $f$
induces an isomorphism from $f^{-1}(X_0)$ to $X_0$.
\end{lem}
\begin{proof}
It is enough to show that the canonical morphism $\mathcal O_X \to f_*\mathcal O_{X'}$ is an isomorphism
in some neighbourhood of $f(x')$.
We may suppose that $X$ and $X'$ are pure of the same dimension $m|n$.
Write $\mathcal N$ and $\mathcal N'$ for the nilradicals of $\mathcal O_X$ and $\mathcal O_{X'}$.
Both $\mathcal N^{n+1}$ and $\mathcal N'{}^{n+1}$ are $0$.
Since $f_*$ is exact because $f$ is a homeomorphism,
it is thus enough to show that for every $i$ the canonical morphism
\begin{equation*}
h_i:\mathcal N^i/\mathcal N^{i+1} \to f_*(\mathcal N'{}^i/\mathcal N'{}^{i+1})
\end{equation*}
is an isomorphism in a neighbourhood of $f(x')$.
Now $\mathcal N/\mathcal N^2$ is a locally free $\mathcal O_{X_\mathrm{red}}$\nobreakdash-\hspace{0pt} module and $\mathcal N'/\mathcal N'{}^2$ a locally free
$\mathcal O_{X'{}\!_\mathrm{red}}$\nobreakdash-\hspace{0pt} module of rank $0|n$,
and $\mathcal N^i/\mathcal N^{i+1}$ and $\mathcal N'{}^i/\mathcal N'{}^{i+1}$ are the $i$th symmetric powers of
$\mathcal N/\mathcal N^2$ and $\mathcal N'/\mathcal N'{}^2$ over $\mathcal O_{X_\mathrm{red}}$ and $\mathcal O_{X'{}\!_\mathrm{red}}$.
Since by hypothesis $f$ induces an isomorphism from $X'{}\!_\mathrm{red}$ to $X_\mathrm{red}$,
we may suppose that $i=1$.
The morphism of super $k$\nobreakdash-\hspace{0pt} vector spaces induced by $h_1$ at $f(x')$ is the dual of that
induced by $f$ on the degree $1$ part of the tangent spaces at $x'$ and $f(x')$, and hence is an isomorphism.
The required result follows.
\end{proof}
\begin{lem}
Let $G$ be an affine super $k$\nobreakdash-\hspace{0pt} group of finite type and $G_0$ be a closed super $k$\nobreakdash-\hspace{0pt} subgroup of $G$.
Then the quotient $G/G_0$ exists and is smooth over $k$.
Further the projection from $G$ to $G/G_0$ is smooth.
\end{lem}
\begin{proof}
Both $G$ and $G_0$ are smooth over $k$, of respective super dimensions $m|n$ and $m_0|n_0$, say.
There then exists a closed immersion $\mathbf A^{0|n} \to G$.
Composing the multiplication $G \times_k G \to G$ of $G$ with the product of the embeddings of $\mathbf A^{0|n}$
and $G_{\mathrm{red}}$ into $G$, we obtain a morphism
\begin{equation*}
\mathbf A^{0|n} \times_k G_{\mathrm{red}} \to G
\end{equation*}
of right $G_{\mathrm{red}}$\nobreakdash-\hspace{0pt} schemes.
It is an isomorphism, as can be seen by passing to an algebraic closure and using Lemma~\ref{l:superrediso}.
Thus
\begin{equation*}
\overline{G} = G/G_0{}_{\mathrm{red}} = \mathbf A^{0|n} \times_k (G_{\mathrm{red}}/G_0{}_{\mathrm{red}})
\end{equation*}
exists and is smooth over $k$ of dimension $(m-m_0)|n$, and $G \to \overline{G}$ is smooth
because it is the product of $\mathbf A^{0|n}$ with a smooth morphism.
The equivalence relation $E$ on $G$ defined by the right action of $G_0$ on $G$ then descends to an equivalence
relation $\overline{E}$ on $\overline{G}$.
It is enough to show that
$\overline{G}/\overline{E}$ exists and is smooth over $k$, and that the projection
$\overline{G} \to \overline{G}/\overline{E}$ is smooth.
Every open super subscheme of $\overline{G}$ is $\overline{E}$\nobreakdash-\hspace{0pt} saturated.
The isomorphism $G \times_k G_0 \xrightarrow{\sim} E$ that sends $(g,g_0)$ to $(g,gg_0)$ is compatible with the
first projections onto $G$, so that $E$ is smooth over $G$ and hence over $k$.
Since $E$ is the inverse image of $\overline{E}$ under the surjective smooth morphism
$G \times_k G \to \overline{G} \times_k \overline{G}$, it follows that $\overline{E}$
is smooth over $k$ of dimension $(m-m_0)|(n+n_0)$.
Further the projections $\overline{E} \to \overline{G}$ are smooth because their composites with
the surjective morphism $E \to \overline{E}$ factor as the composite of smooth morphisms
$G \to \overline{G}$ and $E \to G$.
Given $n' \le n$, there corresponds to each $n'$\nobreakdash-\hspace{0pt} dimensional $k$\nobreakdash-\hspace{0pt} vector subspace of $k^n$,
or equivalently to each $k$\nobreakdash-\hspace{0pt} point $t$ of the Grassmannian $\mathbf{Gr}(n',n)$, a
linearly embedded super subscheme $Z_t$ of $\mathbf A^{0|n}$ isomorphic to $\mathbf A^{0|n'}$.
Let $\mathcal S$ be a finite set of closed points of $G_{\mathrm{red}}/G_0{}_{\mathrm{red}}$.
We now show that there exists
a non-empty open subscheme $Y$ of $\mathbf{Gr}(n-n_0,n)$ with the following property:
for every $t$ in $Y(k)$ there is an open subscheme $U_t$ of $G_{\mathrm{red}}/G_0{}_{\mathrm{red}}$ containing
$\mathcal S$ such that $Z_t \times_k U_t$ is a transversal for the restriction
of $\overline{E}$ to $\mathbf A^{0|n} \times_k U_t$.
Suppose first that $k$ is algebraically closed.
Then closed points of $G_{\mathrm{red}}/G_0{}_{\mathrm{red}}$ may be identified with $k$\nobreakdash-\hspace{0pt} points,
and also with $k$\nobreakdash-\hspace{0pt} points of $\overline{G}$ or $\overline{E}$.
If $x$ is such a point, then $\overline{E}$ induces a linear equivalence relation on the tangent space
$V_x$ of $\overline{G}$, given by the tangent space $T_x$ of $\overline{E}$ at $x$, regarded
as a super $k$\nobreakdash-\hspace{0pt} vector subspace of $V_x \oplus V_x$.
Thus $T_x$ is the inverse image of a super subspace $W_x$ of $V_x$ of dimension $0|n_0$
under the subtraction homomorphism $V_x \oplus V_x \to V_x$.
Then $|W_x|$ is a subspace of dimension $n_0$ of the odd part
$k^n$ of $V_x$.
Now take for $Y$ the open subscheme of $\mathbf{Gr}(n-n_0,n)$ parametrising the $(n-n_0)$\nobreakdash-\hspace{0pt} dimensional
subspaces of $k^n$ complementary to $|W_x|$ at each $x$ in $\mathcal S$.
Then for $t$ in $Y(k)$, the morphism
\begin{equation*}
f:\pr_2{}\!^{-1}(Z_t \times (G_{\mathrm{red}}/G_0{}_{\mathrm{red}})) \to
\mathbf A^{0|n} \times (G_{\mathrm{red}}/G_0{}_{\mathrm{red}})
\end{equation*}
defined by restricting the projection $\pr_1:\overline{E} \to \overline{G}$ to the inverse image
along $\pr_2$ induces an isomorphism on the tangent space at each $x$ in $\mathcal S$.
The required $U_t$ thus exists by Lemma~\ref{l:superrediso} applied to $f$.
For arbitrary $k$ with algebraic closure $\overline{k}$,
applying the algebraically closed case with $k$, $G$, $G_0$, and $\mathcal S$ replaced by
$\overline{k}$, $G_{\overline{k}}$, $G_0{}_{\overline{k}}$, and the inverse image $\mathcal S'$ of $\mathcal S$
in $(G_{\mathrm{red}}/G_0{}_{\mathrm{red}})_{\overline{k}}$,
we obtain a non-empty open subscheme $Y'$ of $\mathbf{Gr}(n-n_0,n)_{\overline{k}}$ and for each
$\overline{k}$\nobreakdash-\hspace{0pt} point $t'$ of $Y'$ an open subscheme $U'{}\!_{t'}$ of
$(G_{\mathrm{red}}/G_0{}_{\mathrm{red}})_{\overline{k}}$ containing $\mathcal S'$, with properties
similar to the above.
Replacing $Y'$ by the intersection of its finite set of conjugates under $\Gal(\overline{k}/k)$,
we may assume that $Y' = Y_{\overline{k}}$ for some $Y$.
For $t$ in $Y(k)$, the intersection of the conjugates of $U'{}\!_t$
descends to the required $U_t$.
Since every non-empty open subscheme of a Grassmannian has a $k$\nobreakdash-\hspace{0pt} point,
taking sets $\mathcal S $ consisting of a single closed point shows that
$G_{\mathrm{red}}/G_0{}_{\mathrm{red}}$ may be covered by open subschemes $U$ such that the restriction
of $\overline{E}$ to $\mathbf A^{0|n} \times_k U$ has a transversal $Z \times_k U$ with $Z$
a closed super subscheme of $\mathbf A^{0|n}$ isomorphic to $\mathbf A^{0|n-n_0}$.
For such a $U$ and $Z$ and any open subscheme $U_0$ of $U$, the restriction of
$\overline{E}$ to $\mathbf A^{0|n} \times_k U_0$ then has a transversal $Z \times_k U_0$.
Thus the quotients $Z \times_k U$ of the $\mathbf A^{0|n} \times_k U$ patch together to give the required quotient
$\overline{G}/\overline{E}$.
The smoothness over $k$ of $\overline{G}/\overline{E}$ follows from that of the $Z \times_k U$,
and the smoothness of $\overline{G} \to \overline{G}/\overline{E}$ from the fact that the projections
$\mathbf A^{0|n} \times_k U \to Z \times_k U$ are isomorphic to pullbacks of $\pr_2:\overline{E} \to \overline{G}$.
\end{proof}
\begin{lem}\label{l:homogGsub}
Let $X$ be a quasi-affine super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme of finite type.
Suppose that $\Mod_{G,\varepsilon}(X)$ is integral with $\kappa(\Mod_{G,\varepsilon}(X)) = k$.
Then $X$ has a unique non-empty open homogeneous super $G$\nobreakdash-\hspace{0pt} subscheme $X_0$.
Every non-empty open super $G$\nobreakdash-\hspace{0pt} subscheme of $X$ contains $X_0$.
\end{lem}
\begin{proof}
It is enough to prove that a non-empty open homogeneous super $G$\nobreakdash-\hspace{0pt} subscheme $X_0$ of $X$ exists:
for every non-empty open super $G$\nobreakdash-\hspace{0pt} subscheme $Y$ of $X$ the open super $G$\nobreakdash-\hspace{0pt} subscheme
$X_0 \cap Y$ of $X_0$ will be non-empty by Lemma~\ref{l:pointfaith},
and the final statement and hence the uniqueness of $X_0$ will follow.
After replacing $G$ by a quotient, we may suppose that $G$ is of finite type.
Let $k'$ be an extension of $k$.
By Lemma~\ref{l:quotsub}\ref{i:quotsubrep} and Lemma~\ref{l:ext}\ref{i:extint}, the hypotheses
on $X$ hold with $X$ and $k$ replaced by $X_{k'}$ and $k'$.
Suppose that $X_{k'}$ has a non-empty open homogeneous super $G_{k'}$\nobreakdash-\hspace{0pt} subscheme $X'{}\!_0$.
Then by the uniqueness, for any extension $k''$ of $k$ the inverse images of $X'{}\!_0$ under the
$k$\nobreakdash-\hspace{0pt} morphisms $X_{k''} \to X_{k'}$ induced by any two $k$\nobreakdash-\hspace{0pt} homomorphisms $k' \to k''$ coincide.
Thus $X'{}\!_0$ descends to an open super subscheme $X_0$ of $X$, necessarily a
homogeneous super $G$\nobreakdash-\hspace{0pt} subscheme.
Let $x$ be a point of $X$.
If $x'$ is the $\kappa(x)$\nobreakdash-\hspace{0pt} rational point of $X_{\kappa(x)}$ above $x$,
then $\omega_{X,x}$ factors as
\[
\Mod_{G,\varepsilon}(X) \to \kappa(x) \otimes_k \Mod_{G,\varepsilon}(X) \to
\Mod_{G_{\kappa(x)},\varepsilon}(X_{\kappa(x)}) \xrightarrow{\omega_{X_{\kappa(x)},x'}} \Mod(\kappa(x))
\]
with the second arrow fully faithful.
By Lemma~\ref{l:pointfaith}, an $x$ exists with $\omega_{X,x}$ faithful.
The composite of the second two arrows is then faithful by
Lemma~\ref{l:ext}\ref{i:extfaith}, and hence $\omega_{X_{\kappa(x)},x'}$ is faithful
by Lemma~\ref{l:quotsub}\ref{i:quotsubrep}.
Replacing $k$, $X$ and $x$ by $\kappa(x)$, $X_{\kappa(x)}$ and $x'$, we may thus suppose that $x$
is $k$\nobreakdash-\hspace{0pt} rational and $\omega_{X,x}$ is faithful.
Write $G_0$ for the stabiliser of $x$ under $G$.
The unique morphism of $(G,\varepsilon)$\nobreakdash-\hspace{0pt} schemes
\[
\varphi:G/G_0 \to X
\]
that sends the base point $z$ of $G/G_0$ to $x$
defines a factorisation
\[
\Mod_{G,\varepsilon}(X) \xrightarrow{\varphi^*} \Mod_{G,\varepsilon}(G/G_0) \xrightarrow{\omega_{G/G_0,z}} \Mod(k)
\]
of $\omega_{X,x}$.
Thus $\varphi^*$ is faithful,
so that by Lemma~\ref{l:domfaith} $\varphi$ is super schematically dominant.
The stabiliser of the $k$\nobreakdash-\hspace{0pt} point $x$ of $X_{\mathrm{red}}$ under the action
of $G_{\mathrm{red}}$ is $G_0{}_{\mathrm{red}}$, and
\[
\varphi_{\mathrm{red}}:G_{\mathrm{red}}/G_0{}_{\mathrm{red}} = (G/G_0)_{\mathrm{red}} \to X_{\mathrm{red}}
\]
is the morphism of $G_{\mathrm{red}}$\nobreakdash-\hspace{0pt} schemes that sends the base point to $x$.
Since $\varphi_{\mathrm{red}}$ is dominant, it factors by homogeneity of $G_{\mathrm{red}}/G_0{}_{\mathrm{red}}$
as an isomorphism onto an open super $G_{\mathrm{red}}$\nobreakdash-\hspace{0pt} subscheme $X_1$ of $X_{\mathrm{red}}$
followed by the embedding.
Then
\[
X_1 = X_0{}_{\mathrm{red}}
\]
for an open super $G$\nobreakdash-\hspace{0pt} subscheme $X_0$ of $X$,
and $\varphi$ factors as
\[
\varphi_0:G/G_0 \to X_0
\]
followed by the embedding.
The morphism from $\varphi_0{}\!^{-1}(X_0{}_{\mathrm{red}})$ to $X_0{}_{\mathrm{red}}$
induced by $\varphi_0$ is a morphism of $G_{\mathrm{red}}$\nobreakdash-\hspace{0pt} schemes, and hence is an isomorphism
because $X_0{}_{\mathrm{red}} = X_1$ is homogeneous while the fibre $G_0/G_0$ of $\varphi_0$ above $x$
is $\Spec(k)$.
Since $\varphi_0$ is super schematically dominant,
it is thus an isomorphism by Lemma~\ref{l:redmono} with $f = \varphi_0$.
\end{proof}
Let $X$ be a super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme of finite type, $S$ be a $k$\nobreakdash-\hspace{0pt} scheme on which $G$ acts trivially,
$X \to S$ be a morphism of super $G$\nobreakdash-\hspace{0pt} schemes, and $s$ be a point of $S$.
Write $k'$ for $\kappa(s)$ and $X'$ for the fibre $X_s$ of $X$ above $s$.
Then $X'$ is a super $(G_{k'},\varepsilon)$\nobreakdash-\hspace{0pt} scheme of finite type.
If $U'$ is an open super $G_{k'}$\nobreakdash-\hspace{0pt} subscheme of $X'$,
then the reduced scheme on the complement $Z'$ of $U'$ is a closed
$(G_{k'})_\mathrm{red}$\nobreakdash-\hspace{0pt} subscheme, and hence a closed $G_\mathrm{red}$\nobreakdash-\hspace{0pt} subscheme, of $X'$.
The reduced subscheme of $X$ on the closure $Z$ of $Z'$ is then a closed $G_\mathrm{red}$\nobreakdash-\hspace{0pt} subscheme of $X$,
and its complement $U$ is an open super $G$\nobreakdash-\hspace{0pt} subscheme of $X$ with $U \cap X' = U'$.
Thus every open super $G_{k'}$\nobreakdash-\hspace{0pt} subscheme of $X'$ is the intersection of $X'$
with an open super $G$\nobreakdash-\hspace{0pt} subscheme of $X$.
Suppose now that $S$ is integral and that $s$ is its generic point.
Then $X'$ is the intersection of the family $(X_\lambda)_{\lambda \in \Lambda}$ of
inverse images under $X \to S$ of the non-empty open subschemes of $S$.
If we write $j_\lambda$ for the embedding $X_\lambda \to X$ and $j$ for $X' \to X$,
it can be seen as follows that the canonical homomorphism
\begin{equation*}
\colim_{\lambda \in \Lambda} H^0_G(X_\lambda,j_\lambda{}\!^*\mathcal V) \to H^0_G(X',j^*\mathcal V) = H^0_{G_{k'}}(X',j^*\mathcal V)
\end{equation*}
is an isomorphism for every $\mathcal V$ in $\MOD_{G,\varepsilon}(X)$.
It is enough to prove that the corresponding homomorphism with no group acting and with $H^0_G$
replaced by $H^0$ is an isomorphism: since $H^0_G$ is the equaliser of the two homomorphisms between
the $H^0$ of $X$ and of $G \times_k X$ given by pullback along the projection and the action, and since
filtered colimits of $k$\nobreakdash-\hspace{0pt} vector spaces commute with equalisers, applying the statement for
$H^0$ to both $X$ and $G \times_k X$ then gives the required result for $H^0_G$.
We may assume that the $j_\lambda$ are affine.
Covering $X$ with affine open sets, we reduce first to the case where $X$ is quasi-affine, and finally to the case
where $X$ is affine, which is clear.
By Lemmas~\ref{l:quotsub}\ref{i:quotsubrep} and \ref{l:openGsubint},
it follows that if $X$ is a quasi-affine $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme of finite type
with $\Mod_{G,\varepsilon}(X)$ integral,
then $\Mod_{G_{k'},\varepsilon}(X')$ is integral, the functor
\begin{equation*}
\Mod_{G,\varepsilon}(X) \to \Mod_{G,\varepsilon}(X') = \Mod_{G_{k'},\varepsilon}(X')
\end{equation*}
induced by $X' \to X$ is faithful, and the induced homomorphism
\begin{equation}\label{e:genfieldiso}
\kappa(\Mod_{G,\varepsilon}(X)) \to \kappa(\Mod_{G_{k'},\varepsilon}(X'))
\end{equation}
is an isomorphism.
\begin{lem}\label{l:smallequiv}
Let $X$ be a quasi-affine super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme of finite type
with $\Mod_{G,\varepsilon}(X)$ integral.
Denote by $A$ the $k$\nobreakdash-\hspace{0pt} algebra $H^0_G(X,\mathcal O_X)$,
and by $k'$ its field of fractions.
Then the following conditions are equivalent:
\begin{enumerate}
\renewcommand{\theenumi}{(\alph{enumi})}
\item\label{i:smallsub}
every non-empty open super $G$\nobreakdash-\hspace{0pt} subscheme of $X$ contains one of the form $X_f$ for some $f \ne 0$ in $A$;
\item\label{i:smallhomog}
the generic fibre of $X \to \Spec(A)$ is homogeneous as a super $G_{k'}$\nobreakdash-\hspace{0pt} scheme.
\end{enumerate}
When these conditions hold, the canonical homomorphism $k' \to \kappa(\Mod_{G,\varepsilon}(X))$ is an isomorphism.
\end{lem}
\begin{proof}
The generic fibre $X'$ of $X \to \Spec(A)$
is the intersection of the open super $G$\nobreakdash-\hspace{0pt} subschemes $X_f$ of $X$ for $f \ne 0$ in $A$.
Writing the push forward of $\mathcal O_{X_f}$ along the embedding $X_f \to X$ as the filtered colimit
of copies of $\mathcal O_X$ with transition morphisms given by powers of $f$ shows that
\begin{equation*}
H^0_G(X_f,\mathcal O_{X_f}) = A_f,
\end{equation*}
where we have used the fact that $H^0_G(X,-)$ commutes with filtered colimits
in $\MOD_{G,\varepsilon}(X)$, because $X$ is of finite type.
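Spelled out, with the push forward written as above and the arrows denoting multiplication by $f$,
the computation reads
\begin{equation*}
H^0_G(X_f,\mathcal O_{X_f}) = H^0_G\bigl(X,\colim(\mathcal O_X \xrightarrow{f} \mathcal O_X \xrightarrow{f} \cdots)\bigr)
= \colim(A \xrightarrow{f} A \xrightarrow{f} \cdots) = A_f.
\end{equation*}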
Suppose that \ref{i:smallsub} holds.
Then $k' \to \kappa(\Mod_{G,\varepsilon}(X))$ is an isomorphism because $\kappa(\Mod_{G,\varepsilon}(X))$
is by the isomorphism \eqref{e:funfieldiso} the filtered colimit of the $H^0_G(X_f,\mathcal O_{X_f})$ for $f \ne 0$.
Thus
\begin{equation*}
k' \to \kappa(\Mod_{G_{k'},\varepsilon}(X'))
\end{equation*}
is an isomorphism because \eqref{e:genfieldiso} is.
Further any non-empty open $G_{k'}$\nobreakdash-\hspace{0pt} subscheme of $X'$ coincides with $X'$, because as above
it contains one of the form $X_f \cap X' = X'$ for $f \ne 0$.
Thus \ref{i:smallhomog} holds by Lemma~\ref{l:homogGsub}.
Conversely suppose that \ref{i:smallhomog} holds.
Then every non-empty open super $G_{k'}$\nobreakdash-\hspace{0pt} subscheme of $X'$ coincides with $X'$.
Let $Y$ be a non-empty open super $G$\nobreakdash-\hspace{0pt} subscheme of $X$.
Then $Y \cap X'$ is non-empty by Lemma~\ref{l:pointfaith}, so that $Y$ contains $X'$.
Since $X'$ is the intersection of the $X_f$ for $f \ne 0$, there thus exists
for each point $z$ of the complement $Z$ of $Y$ an $f \ne 0$ for which $X_f$
does not contain $z$.
It follows that there exists an $f \ne 0$ such that $X_f$ does not contain any maximal point of $Z$.
Then $X_f \cap Z = \emptyset$, so that $Y$ contains $X_f$.
\end{proof}
\begin{lem}\label{l:smallexist}
Let $X$ be a quasi-affine super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme of finite type
with $\Mod_{G,\varepsilon}(X)$ integral.
Then $X$ has an open super $G$\nobreakdash-\hspace{0pt} subscheme $X_1$ such that the equivalent conditions of
Lemma~\textnormal{\ref{l:smallequiv}} are satisfied with $X$ replaced by $X_1$.
\end{lem}
\begin{proof}
Let $A$ and $k'$ be as in Lemma~\ref{l:smallequiv},
and write $k''$ for $\kappa(\Mod_{G,\varepsilon}(X))$.
By Lemma~\ref{l:pointfaith}, $\omega_{X,x}$ for some point $x$ of $X$
defines an embedding of $k''$
as a subextension of the extension $\kappa(x)$ of $k$.
Thus $k''$ is a finitely generated extension of $k$.
By Lemma~\ref{l:openGsubint}, we may after replacing $X$ by a non-empty open super $G$\nobreakdash-\hspace{0pt} subscheme
suppose that $A$ contains a set of generators for $k''$ over $k$.
Thus $k' = k''$.
The generic fibre $X'$ of $X \to \Spec(A)$ is a quasi-affine super $(G_{k'},\varepsilon)$\nobreakdash-\hspace{0pt} scheme
of finite type with $\Mod_{G_{k'},\varepsilon}(X')$ integral,
and since \eqref{e:genfieldiso} is an isomorphism we have $\kappa(\Mod_{G_{k'},\varepsilon}(X')) = k'$.
By Lemma~\ref{l:homogGsub}, $X'$ has thus a non-empty open homogeneous super $G_{k'}$\nobreakdash-\hspace{0pt} subscheme $X'{}\!_1$.
By the remarks preceding Lemma~\ref{l:smallequiv}, there is an open super $G$\nobreakdash-\hspace{0pt} subscheme $X_1$ of $X$
with $X_1 \cap X' = X'{}\!_1$.
If we write $A_1$ for $H^0_G(X_1,\mathcal O_{X_1})$, then by Lemma~\ref{l:openGsubint},
$A \to A_1$ is injective and $A_1$ has field of fractions $k'$.
Thus $X'{}\!_1$ is the generic fibre of $X_1 \to \Spec(A_1)$.
It follows that \ref{i:smallhomog} of Lemma~\ref{l:smallequiv} is satisfied with $X_1$ for $X$.
\end{proof}
Filtered limits exist in the category of local super $k$\nobreakdash-\hspace{0pt} ringed spaces: they are given by taking the
filtered limit of the underlying topological spaces, equipped with the filtered colimit of the
pullbacks of the structure sheaves.
Any filtered limit of super $k$\nobreakdash-\hspace{0pt} schemes with affine transition morphisms is a super $k$\nobreakdash-\hspace{0pt} scheme.
Since the forgetful functor from the category of super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} schemes to the category of
super $k$\nobreakdash-\hspace{0pt} schemes creates limits, any limit of super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} schemes which exists
has a canonical structure of super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme.
Let $X$ be a local super $k$\nobreakdash-\hspace{0pt} ringed space and $X_0$ be a topological subspace of $X$ with structure
sheaf $\mathcal O_{X_0}$ the restriction of $\mathcal O_X$ to $X_0$.
Then pullback of $\mathcal O_X$\nobreakdash-\hspace{0pt} modules along the embedding $j:X_0 \to X$
coincides with pullback of the underlying sheaves of abelian groups.
In particular $j$ is flat.
This applies in particular when $X_0$ is the intersection of a family of open
super $k$\nobreakdash-\hspace{0pt} ringed subspaces of $X$.
\begin{lem}\label{l:homog}
Let $X$ be a quasi-affine super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme of finite type
with $\Mod_{G,\varepsilon}(X)$ integral.
Then the intersection $X_0$ in the category of local super $k$\nobreakdash-\hspace{0pt} ringed spaces of the non-empty open
super $G$\nobreakdash-\hspace{0pt} subschemes of $X$ is a super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme.
If $k_0$ denotes the extension $\kappa(\Mod_{G,\varepsilon}(X))$ of $k$ and $X_0$ is given the
structure of $k_0$\nobreakdash-\hspace{0pt} scheme defined by the isomorphism \eqref{e:funfieldiso},
then $X_0$ is a non-empty quasi-affine homogeneous
super $(G_{k_0},\varepsilon)$\nobreakdash-\hspace{0pt} scheme of finite type over $k_0$.
\end{lem}
\begin{proof}
By Lemma~\ref{l:smallexist}, we may suppose after replacing $X$ by a non-empty open
super $G$\nobreakdash-\hspace{0pt} subscheme that the equivalent conditions \ref{i:smallsub} and \ref{i:smallhomog}
of Lemma~\ref{l:smallequiv} are satisfied.
With notation as in Lemma~\ref{l:smallequiv}, we then have $k' = k_0$,
and the required intersection is the generic fibre of $X \to \Spec(A)$ by \ref{i:smallsub},
while the final statement holds by \ref{i:smallhomog}.
\end{proof}
Let $X$ be the limit of a filtered inverse system $(X_\lambda)_{\lambda \in \Lambda}$ of
super $k$\nobreakdash-\hspace{0pt} schemes with affine transition morphisms.
The assignment
\begin{equation}\label{e:colimfunctor}
(\mathcal V_\lambda)_{\lambda \in \Lambda} \mapsto \colim_{\lambda \in \Lambda}
\pr_\lambda{}\!^*\mathcal V_\lambda
\end{equation}
defines a functor to $\MOD(X)$ from the category of systems
$(\mathcal V_\lambda)_{\lambda \in \Lambda}$ above $(X_\lambda)_{\lambda \in \Lambda}$
with $\mathcal V_\lambda$ in $\MOD(X_\lambda)$.
The functor \eqref{e:colimfunctor} is exact: we may suppose that $X = \Spec(R)$ and the
$X_\lambda = \Spec(R_\lambda)$ are affine, and if $\mathcal V_\lambda$ is the
$\mathcal O_{X_\lambda}$\nobreakdash-\hspace{0pt} module associated to the $R_\lambda$\nobreakdash-\hspace{0pt} module $V_\lambda$, we have an
isomorphism
\begin{equation*}
\colim_{\lambda \in \Lambda}V_\lambda \xrightarrow{\sim}
\colim_{\lambda \in \Lambda} R \otimes_{R_\lambda} V_\lambda
\end{equation*}
natural in $(V_\lambda)_{\lambda \in \Lambda}$.
Suppose that the $X_\lambda$ are quasi-compact and quasi-separated, and that
$\Lambda$ has an initial object $\lambda_0$.
Write $q_\lambda:X_\lambda \to X_{\lambda_0}$ for the transition morphism.
Then for $\mathcal V_0$ in $\Mod(X_{\lambda_0})$ and $\mathcal W_0$ in $\MOD(X_{\lambda_0})$,
the pullback functors define an isomorphism
\begin{equation}\label{e:colimHomVW}
\colim_\lambda\Hom_{\mathcal O_{X_\lambda}}(q_\lambda{}\!^*\mathcal V_0,q_\lambda{}\!^*\mathcal W_0)
\xrightarrow{\sim} \Hom_{\mathcal O_X}(\pr_{\lambda_0}{}\!^*\mathcal V_0,\pr_{\lambda_0}{}\!^*\mathcal W_0).
\end{equation}
This can be seen by reducing after taking a finite affine open cover of $X_{\lambda_0}$ to the case
where $X_{\lambda_0}$ is affine.
If further $(X_\lambda)_{\lambda \in \Lambda}$ is a system of $(G,\varepsilon)$\nobreakdash-\hspace{0pt} schemes and
$\mathcal V_0$ and $\mathcal W_0$ are equivariant $(G,\varepsilon)$\nobreakdash-\hspace{0pt} modules we have an isomorphism
\begin{equation}\label{e:colimHomGVW}
\colim_\lambda\Hom_{G,\mathcal O_{X_\lambda}}(q_\lambda{}\!^*\mathcal V_0,q_\lambda{}\!^*\mathcal W_0)
\xrightarrow{\sim} \Hom_{G,\mathcal O_X}(\pr_{\lambda_0}{}\!^*\mathcal V_0,\pr_{\lambda_0}{}\!^*\mathcal W_0).
\end{equation}
This follows from \eqref{e:colimHomVW} and the similar isomorphism for
$(G \times_k X_\lambda)_{\lambda \in \Lambda}$
because for example $\Hom_{G,\mathcal O_X}(\mathcal V,\mathcal W)$ is the equaliser of two
appropriately defined homomorphisms from $\Hom_{\mathcal O_X}(\mathcal V,\mathcal W)$ to
$\Hom_{\mathcal O_{G \times_k X}}(\pr_2{}\!^*\mathcal V,\alpha{}\!^*\mathcal W)$, where $\alpha$ is the action of $G$
on $X$.
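Explicitly, writing (for this remark only) $\theta_{\mathcal V}:\pr_2{}\!^*\mathcal V \xrightarrow{\sim} \alpha^*\mathcal V$
and $\theta_{\mathcal W}:\pr_2{}\!^*\mathcal W \xrightarrow{\sim} \alpha^*\mathcal W$ for the isomorphisms defining the
actions of $G$, the two homomorphisms may be taken to be
\begin{equation*}
u \mapsto \theta_{\mathcal W} \circ \pr_2{}\!^*(u)
\qquad\text{and}\qquad
u \mapsto \alpha^*(u) \circ \theta_{\mathcal V},
\end{equation*}
whose equaliser is precisely the subspace of $G$\nobreakdash-\hspace{0pt} equivariant morphisms.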
Similarly, taking an appropriate equaliser shows that for $\mathcal V$ in $\Mod_G(X)$
the functor $\Hom_{G,\mathcal O_X}(\mathcal V,-)$ preserves filtered colimits in $\MOD_G(X)$.
It follows that for $\mathcal V_0$ in $\Mod_G(X_{\lambda_0})$ and
$(\mathcal W_\lambda)_{\lambda \in \Lambda}$ a system above
$(X_\lambda)_{\lambda \in \Lambda}$ with $\mathcal W_\lambda$ in $\MOD_G(X_\lambda)$,
the pullback functors define an isomorphism
\begin{equation}\label{e:colimHomGVWl}
\colim_\lambda\Hom_{G,\mathcal O_{X_\lambda}}(q_\lambda{}\!^*\mathcal V_0,\mathcal W_\lambda)
\xrightarrow{\sim} \Hom_{G,\mathcal O_X}(\pr_{\lambda_0}{}\!^*\mathcal V_0,\colim_\lambda \pr_\lambda{}\!^*\mathcal W_\lambda).
\end{equation}
Indeed if $q_{\mu\lambda}:X_\mu \to X_\lambda$ is the transition morphism,
then since $\lambda \mapsto (\lambda,\lambda)$ is cofinal in the set of $(\lambda,\mu)$ with $\mu \ge \lambda$,
\eqref{e:colimHomGVWl} factors as an isomorphism to
\begin{equation*}
\colim_\lambda \colim_{\mu \ge \lambda}
\Hom_{G,\mathcal O_{X_\mu}}(q_\mu{}\!^*\mathcal V_0,q_{\mu\lambda}{}\!^*\mathcal W_\lambda)
\end{equation*}
followed by a colimit of isomorphisms of the form \eqref{e:colimHomGVW}.
Suppose now that the $X_\lambda$ are quasi-affine $(G,\varepsilon)$\nobreakdash-\hspace{0pt} schemes.
Let $\mathcal V$ be an object in $\MOD_{G,\varepsilon}(X)$.
By Lemma~\ref{l:quotsub}\ref{i:quotsubmod} we may write $\mathcal V$ as a cokernel
\begin{equation}\label{e:Vcoker}
V'{}\!_X \to V_X \to \mathcal V \to 0
\end{equation}
for some $V$ and $V'$ in $\MOD_{G,\varepsilon}(k)$.
By \eqref{e:colimHomGVW}, for each pair of subobjects $W$ of $V$ and $W'$ of $V'$
in $\Mod_{G,\varepsilon}(k)$ such that $V'{}\!_X \to V_X$ sends $W'{}\!_X$ into $W_X$,
there exist a $\lambda \in \Lambda$ and an $h:W'{}\!_{X_\lambda} \to W_{X_\lambda}$
in $\Mod_{G,\varepsilon}(X_\lambda)$ for which $\pr_\lambda{}\!^*(h)$ coincides modulo
the pullback isomorphisms with $W'{}\!_X \to W_X$.
The category $\Lambda'$ of quadruples $(W,W',\lambda,h)$ is then filtered,
$\Lambda' \to \Lambda$ given by $(W,W',\lambda,h) \mapsto \lambda$ is cofinal,
and $\mathcal V$ is the colimit over $\Lambda'$ of the $\pr_\lambda{}\!^*(\Coker h)$.
Thus $\mathcal V$ is isomorphic to an object in the image of
a functor \eqref{e:colimfunctor} with $\Lambda$ replaced by $\Lambda'$.
Similarly if $l$ is a morphism in $\MOD_{G,\varepsilon}(X)$, then starting from a
commutative diagram with exact rows of the form \eqref{e:Vcoker}
and right vertical arrow $l$ shows that
$l$ is isomorphic to a morphism in the image of a functor \eqref{e:colimfunctor}
for an appropriate $\Lambda$.
Let $X$ be a homogeneous super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme and $j:Z \to X$ be a morphism of
super $k$\nobreakdash-\hspace{0pt} schemes with $Z$ non-empty.
Then the functor $j^*$ from $\MOD_{G,\varepsilon}(X)$ to $\MOD(Z)$
is faithful and exact.
Indeed if $j_0$ and $j_1$ are the morphisms from $G \times_k Z$ to $X$
that send $(g,z)$ respectively to $j(z)$ and $gj(z)$, then the equivariant action of $G$
defines a natural isomorphism from $j_0{}\!^*$ to $j_1{}\!^*$,
and $j_1$ is faithfully flat while $j_0 = j \circ \pr_2$ with $\pr_2$ faithfully flat.
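Schematically, the above argument reads
\begin{equation*}
\pr_2{}\!^* \circ j^* = j_0{}\!^* \simeq j_1{}\!^*,
\end{equation*}
with $j_1{}\!^*$ faithful and exact because $j_1$ is faithfully flat; since $\pr_2$ is faithfully flat,
$\pr_2{}\!^*$ reflects exactness, and the faithfulness and exactness of $j^*$ follow.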
\begin{lem}\label{l:pointexact}
Let $X$ be a quasi-affine super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme with $\Mod_{G,\varepsilon}(X)$ integral,
and $x$ be a point of $X$ which lies in every non-empty open super $G$\nobreakdash-\hspace{0pt} subscheme of $X$.
Then the functor from $\MOD_{G,\varepsilon}(X)$ to $\MOD(\kappa(x))$ defined by passing to the fibre
at $x$ is exact.
\end{lem}
\begin{proof}
Suppose first that $X$ is of finite type.
Then $x$ is a point of the super scheme $X_0$ over $k_0$ of Lemma~\ref{l:homog},
and passage to the fibre at $x$ factors through pullback onto $X_0$.
Pullback onto $X_0$ is exact because the monomorphism $X_0 \to X$ is flat.
Since $X_0$ is a homogeneous $G_{k_0}$\nobreakdash-\hspace{0pt} scheme, passage to the fibre
at the point $x$ of $X_0$ is as above also exact.
To prove the general case, write $X$ as the limit of a filtered inverse system
$(X_\lambda)_{\lambda \in \Lambda}$ as in Lemma~\ref{l:qafflim}.
If $j:\Spec(\kappa(x)) \to X$ is the morphism defined by $x$, it is enough to show
that $j^*$ preserves the kernel of any morphism $l$ in $\MOD_{G,\varepsilon}(X)$.
As above, we may after replacing $\Lambda$ if necessary suppose that $l$ is the
image under \eqref{e:colimfunctor} of a morphism of systems
\begin{equation*}
(l_\lambda)_{\lambda \in \Lambda}:(\mathcal V'{}\!_\lambda)_{\lambda \in \Lambda}
\to (\mathcal V_\lambda)_{\lambda \in \Lambda}
\end{equation*}
above $(X_\lambda)_{\lambda \in \Lambda}$.
Since $j^*$ is cocontinuous, the left arrow of the commutative square
\begin{equation*}
\xymatrix{
j^*\Ker l \ar[r] & \Ker j^*(l) \\
\colim_\lambda j^*\pr_\lambda{}\!^* \Ker l_\lambda \ar[u] \ar[r] &
\colim_\lambda \Ker j^*\pr_\lambda{}\!^*(l_\lambda) \ar[u]
}
\end{equation*}
is an isomorphism by exactness of \eqref{e:colimfunctor},
and the right arrow by exactness of filtered colimits.
The bottom arrow is an isomorphism because
$j^*\pr_\lambda{}\!^* \simeq (\pr_\lambda \circ j)^*$ is exact
by the case where $X$ is of finite type.
Thus the top arrow is an isomorphism.
\end{proof}
Let $X$ be a super $k$\nobreakdash-\hspace{0pt} scheme.
If $\mathcal V$ and $\mathcal W$ are $\mathcal O_X$\nobreakdash-\hspace{0pt} modules, then the internal hom
\begin{equation*}
\underline{\Hom}_{\mathcal O_X}(\mathcal V,\mathcal W)
\end{equation*}
is the $\mathcal O_X$\nobreakdash-\hspace{0pt} module with sections of degree $i$ above the open subset $U$ of $X$
the $\mathcal O_U$\nobreakdash-\hspace{0pt} homomorphisms of degree $i$ from $\mathcal V|U$ to $\mathcal W|U$.
Its formation commutes with restriction to open super subschemes.
When $\mathcal V = \mathcal W$, the internal hom
\begin{equation*}
\underline{\End}_{\mathcal O_X}(\mathcal V) = \underline{\Hom}_{\mathcal O_X}(\mathcal V,\mathcal V)
\end{equation*}
is an $\mathcal O_X$\nobreakdash-\hspace{0pt} algebra, with unit
\begin{equation}\label{e:endunit}
\mathcal O_X \to \underline{\End}_{\mathcal O_X}(\mathcal V)
\end{equation}
given by $1_{\mathcal V}$ and composition by that of $\mathcal O_U$\nobreakdash-\hspace{0pt} endomorphisms of $\mathcal V|U$.
The kernel of \eqref{e:endunit} is the annihilator of $\mathcal V$, i.e.\ the ideal of $\mathcal O_X$
with sections above $U$ those of $\mathcal O_X$ that annihilate $\mathcal V|U$.
If $u:\mathcal U \to \mathcal O_X$ is a morphism of $\mathcal O_X$\nobreakdash-\hspace{0pt} modules, then
\begin{equation}\label{e:utensVzero}
u \otimes \mathcal V = 0:\mathcal U \otimes_{\mathcal O_X} \mathcal V \to \mathcal V
\end{equation}
if and only if $u$ factors through the annihilator of $\mathcal V$.
We may identify $\underline{\Hom}_{\mathcal O_X}(\mathcal O_X,\mathcal W)$ with $\mathcal W$, and
$\underline{\Hom}_{\mathcal O_X}(\mathcal O_X{}^{0|1},\mathcal W)$ with the $\mathcal O_X$\nobreakdash-\hspace{0pt} module $\Pi \mathcal W$
given by interchanging the parities of $\mathcal W$.
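Here, concretely, the parity swap is given degreewise by
\begin{equation*}
(\Pi\mathcal W)_0 = \mathcal W_1,
\qquad
(\Pi\mathcal W)_1 = \mathcal W_0.
\end{equation*}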
Let $h:X' \to X$ be a morphism of super $k$\nobreakdash-\hspace{0pt} schemes.
Then there is a morphism
\begin{equation}\label{e:inthomlower}
\underline{\Hom}_{\mathcal O_X}(\mathcal V,\mathcal W) \to h_*\underline{\Hom}_{\mathcal O_{X'}}(h^*\mathcal V,h^*\mathcal W),
\end{equation}
natural in $\mathcal V$ and $\mathcal W$, which sends $u$ from $\mathcal V|U$ to $\mathcal W|U$ to
$h^*(u)$ from $h^*\mathcal V|h^{-1}(U)$ to $h^*\mathcal W|h^{-1}(U)$.
By adjunction, we then have a morphism
\begin{equation}\label{e:inthomupper}
h^*\underline{\Hom}_{\mathcal O_X}(\mathcal V,\mathcal W) \to \underline{\Hom}_{\mathcal O_{X'}}(h^*\mathcal V,h^*\mathcal W)
\end{equation}
which is natural in $\mathcal V$ and $\mathcal W$.
It is an isomorphism for $\mathcal V = \mathcal O_X{}^{m|n}$ and any $\mathcal W$,
because when $\mathcal V = \mathcal O_X$, \eqref{e:inthomlower} is the unit $\mathcal W \to h_*h^*\mathcal W$ and hence
\eqref{e:inthomupper} is the identity of $h^*\mathcal W$,
and when $\mathcal V = \mathcal O_X{}^{0|1}$, \eqref{e:inthomlower} is the unit $\Pi\mathcal W \to h_*h^*\Pi\mathcal W$
and \eqref{e:inthomupper} is the identity of $h^*\Pi\mathcal W$.
If $\mathcal W$ is a quasi-coherent $\mathcal O_X$\nobreakdash-\hspace{0pt} module, and $\mathcal V$ is locally on $X$
a cokernel of a morphism of $\mathcal O_X$\nobreakdash-\hspace{0pt} modules $\mathcal O_X{}^{m'|n'} \to \mathcal O_X{}^{m|n}$,
then $\underline{\Hom}_{\mathcal O_X}(\mathcal V,\mathcal W)$ is a quasi-coherent $\mathcal O_X$\nobreakdash-\hspace{0pt} module,
and \eqref{e:inthomupper} is an isomorphism for $h$ flat.
Suppose now that $X$ has a structure of $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme.
Let $\mathcal V$ and $\mathcal W$ be objects of $\MOD_{G,\varepsilon}(X)$, with the underlying
$\mathcal O_X$\nobreakdash-\hspace{0pt} module of $\mathcal V$ locally on $X$ the cokernel
of a morphism $\mathcal O_X{}^{m'|n'} \to \mathcal O_X{}^{m|n}$.
We define a $(G,\varepsilon)$\nobreakdash-\hspace{0pt} equivariant structure on the quasi-coherent
$\mathcal O_X$\nobreakdash-\hspace{0pt} module $\underline{\Hom}_{\mathcal O_X}(\mathcal V,\mathcal W)$ as follows:
if $\alpha$ and $\beta$ are the isomorphisms between
the pullbacks respectively of $\mathcal V$ and $\mathcal W$ along the projection and the action
of $G$ from $G \times_k X$ to $X$ defining the actions of $G$ on $\mathcal V$ and $\mathcal W$,
then the action of $G$ on $\underline{\Hom}_{\mathcal O_X}(\mathcal V,\mathcal W)$ is given, modulo isomorphisms
of the form \eqref{e:inthomupper}, by $\Hom_{\mathcal O_{G \times_k X}}(\alpha^{-1},\beta)$.
If $\mathcal V = \mathcal W$, then \eqref{e:endunit} is a morphism in $\MOD_{G,\varepsilon}(X)$,
and the annihilator of $\mathcal V$ is a subobject of $\mathcal O_X$ in $\MOD_{G,\varepsilon}(X)$.
\begin{lem}\label{l:pointtors}
Let $X$ be a quasi-affine super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme with $\Mod_{G,\varepsilon}(X)$ integral,
and $x$ be a point of $X$ which lies in every non-empty open super $G$\nobreakdash-\hspace{0pt} subscheme of $X$.
Then an object of $\MOD_{G,\varepsilon}(X)$ is a torsion object if and only if
its fibre at $x$ is $0$.
\end{lem}
\begin{proof}
Write $j:\Spec(\kappa(x)) \to X$ for the morphism defined by $x$,
and let $\mathcal V$ be an object of $\MOD_{G,\varepsilon}(X)$.
If $\mathcal V$ is a torsion object, then $j^*\mathcal V$ is a torsion object of $\MOD(\kappa(x))$
by Lemmas~\ref{l:adjtorspres} and \ref{l:pointfaith}, so that $j^*\mathcal V = 0$.
Conversely suppose that $j^*\mathcal V = 0$.
By Lemma~\ref{l:pointexact}, $j^*$ is exact.
Hence $j^*\mathcal W = 0$ for every subobject $\mathcal W$ of $\mathcal V$.
By Lemma~\ref{l:quotsub}\ref{i:quotsubmod}, $\mathcal V$ is the filtered colimit
of subobjects which are quotients of objects $V_X$ with $V$ in $\Mod_{G,\varepsilon}(k)$.
To prove that $\mathcal V$ is a torsion object, we may thus suppose that $\mathcal V$ is such a quotient.
Then by Lemma~\ref{l:quotsub}\ref{i:quotsubmod}, $\mathcal V$ is a cokernel \eqref{e:Vcoker}
with $V$ in $\Mod_{G,\varepsilon}(k)$ and $V'$ in $\MOD_{G,\varepsilon}(k)$.
If we write $V'$ as the filtered colimit of its subobjects $W'$ in $\Mod_{G,\varepsilon}(k)$,
then by cocontinuity $j^*$ sends the cokernel of some $W'{}\!_X \to V_X$ to $0$.
Thus we may suppose that $\mathcal V$ is a cokernel \eqref{e:Vcoker} with both $V$ and $V'$
in $\Mod_{G,\varepsilon}(k)$.
Then $\underline{\End}_{\mathcal O_X}(\mathcal V)$ and the annihilator $\mathcal J$ of $\mathcal V$ exist in
$\MOD_{G,\varepsilon}(X)$.
If $h = j$ then \eqref{e:inthomupper} is an isomorphism because it
is an isomorphism with $V_X$ or $V'{}\!_X$ for $\mathcal V$ and $j^*$ is exact.
Thus $j^*\underline{\End}_{\mathcal O_X}(\mathcal V) = 0$.
By the exactness of $j^*$, it follows that $j^*\mathcal J \ne 0$ and hence $\mathcal J \ne 0$.
There is then by Lemma~\ref{l:quotsub}\ref{i:quotsubmod} a non-zero
$u:\mathcal U \to \mathcal O_X$ in $\Mod_{G,\varepsilon}(X)$ which factors through $\mathcal J$,
so that \eqref{e:utensVzero} holds.
Thus $\mathcal V$ is a torsion object.
\end{proof}
Let $f:X \to X'$ be a super schematically dominant morphism of quasi-affine
super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} schemes with $\Mod_{G,\varepsilon}(X)$ integral.
Then by Lemma~\ref{l:domfaith}, $f^*$ from $\Mod_{G,\varepsilon}(X')$ to $\Mod_{G,\varepsilon}(X)$ is faithful
and hence regular.
By Lemmas~\ref{l:pointfaith}, \ref{l:pointexact} and \ref{l:pointtors},
$f^*$ from $\MOD_{G,\varepsilon}(X')$ to $\MOD_{G,\varepsilon}(X)$ sends
isomorphisms up to torsion to isomorphisms up to torsion.
There is thus a unique tensor functor
\begin{equation*}
\overline{\MOD_{G,\varepsilon}(X')} \to \overline{\MOD_{G,\varepsilon}(X)}
\end{equation*}
compatible with $f^*$ and the projections.
If $f$ is \emph{affine}, it can also be seen as follows that $f^*$ sends isomorphisms
up to torsion to isomorphisms up to torsion.
We may identify $\MOD_{G,\varepsilon}(X)$ with the tensor category of $f_*\mathcal O_X$\nobreakdash-\hspace{0pt} modules
in $\MOD_{G,\varepsilon}(X')$, and $f^*$ with $f_*\mathcal O_X \otimes_{\mathcal O_{X'}} -$.
By Lemma~\ref{l:regtorsfree}, $f_*\mathcal O_X$ is torsion free in
$\MOD_{G,\varepsilon}(X')$, so that by Lemma~\ref{l:Rextpres} an $f_*\mathcal O_X$\nobreakdash-\hspace{0pt} module
is a torsion $f_*\mathcal O_X$\nobreakdash-\hspace{0pt} module if it is a torsion object of $\MOD_{G,\varepsilon}(X')$.
The required result now follows from Lemma~\ref{l:tensisotors}.
Let $X$ be a quasi-affine super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme with $\Mod_{G,\varepsilon}(X)$ integral,
and $k'$ be an extension of $k$.
By the isomorphism \eqref{e:funfieldiso} there is associated
to any $k'$\nobreakdash-\hspace{0pt} point of $X$ which lies in every
non-empty open super $G$\nobreakdash-\hspace{0pt} subscheme of $X$ a $k$\nobreakdash-\hspace{0pt} homomorphism
\begin{equation*}
\kappa(\Mod_{G,\varepsilon}(X)) \to k'.
\end{equation*}
This is the same $k$\nobreakdash-\hspace{0pt} homomorphism as that associated to the $k$\nobreakdash-\hspace{0pt} tensor functor
from $\Mod_{G,\varepsilon}(X)$ to $\Mod(k)$ which is faithful by Lemma~\ref{l:pointfaith}.
We write
\begin{equation*}
X(k')_\rho \subset X(k')
\end{equation*}
for the set of $k'$\nobreakdash-\hspace{0pt} points lying in every
non-empty open super $G$\nobreakdash-\hspace{0pt} subscheme of $X$ with associated $k$\nobreakdash-\hspace{0pt} homomorphism $\rho$.
The action of $G(k')$ on $X(k')$ sends $X(k')_\rho$ to itself.
The isomorphisms of the form \eqref{e:funfieldiso}, as well as passage to the associated $k$\nobreakdash-\hspace{0pt} homomorphism,
are compatible with super schematically dominant morphisms $X' \to X$ of quasi-affine super
$(G,\varepsilon)$\nobreakdash-\hspace{0pt} schemes
with $\Mod_{G,\varepsilon}(X')$ integral.
\begin{lem}\label{l:kbarpoints}
Let $X$ be a quasi-affine super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme with $\Mod_{G,\varepsilon}(X)$ integral,
$k'$ be an algebraically closed extension of $k$,
and $\rho$ be a $k$\nobreakdash-\hspace{0pt} homomorphism from $\kappa(\Mod_{G,\varepsilon}(X))$ to $k'$.
Then $X(k')_\rho$ is non-empty,
and $G(k')$ acts transitively on it.
\end{lem}
\begin{proof}
Write $X$ as the limit of a filtered inverse system $(X_\lambda)_{\lambda \in \Lambda}$
of quasi-affine $(G,\varepsilon)$\nobreakdash-\hspace{0pt} schemes of finite type as in Lemma~\ref{l:qafflim}.
For each $\lambda$ there is a $k$\nobreakdash-\hspace{0pt} quotient of $G$ of finite type through which $G$ acts on $X_\lambda$.
If $\Lambda'$ is the set of pairs $(\lambda,G')$ with $\lambda$ in $\Lambda$ and
$G'$ a $k$\nobreakdash-\hspace{0pt} quotient of $G$ of finite type
through which $G$ acts on $X_\lambda$, where $(\lambda,G') \le (\lambda',G'')$ when $\lambda \le \lambda'$
and $G \to G'$ factors through $G \to G''$, then $(\lambda,G') \mapsto \lambda$ from
$\Lambda'$ to $\Lambda$ is cofinal, as is $(\lambda,G') \mapsto G'$ from $\Lambda'$ to
$k$\nobreakdash-\hspace{0pt} quotients of $G$ of finite type with the reverse of its natural order.
Replacing $\Lambda$ by $\Lambda'$, we may assume that there exists an inverse system
$(G_\lambda)_{\lambda \in \Lambda}$
of $k$\nobreakdash-\hspace{0pt} quotients of $G$ of finite type with limit $G$ such that the action of $G$ on
$X_\lambda$ factors through $G_\lambda$.
Write $k_0$ and $k_\lambda$ for $\kappa(\Mod_{G,\varepsilon}(X))$ and
$\kappa(\Mod_{G,\varepsilon}(X_\lambda))$,
and $\rho_\lambda:k_\lambda \to k'$ for the composite of $\rho$ with $k_\lambda \to k_0$.
By Lemma~\ref{l:limopensub}, a $k'$\nobreakdash-\hspace{0pt} point $(x_\lambda)$ of $X$ lies in every non-empty
open super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} subscheme of $X$ if and only if
$x_\lambda$ lies in every non-empty open super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} subscheme of $X_\lambda$ for
each $\lambda$.
We thus have an equality
\begin{equation*}
X(k')_\rho = \lim_\lambda X_\lambda(k')_{\rho_\lambda}
\end{equation*}
of $G(k')$\nobreakdash-\hspace{0pt} sets,
because the isomorphisms of the form \eqref{e:funfieldiso} are compatible with the projections
and by isomorphisms of the form \eqref{e:colimHomGVW}, $k_0$ is the filtered
colimit of the $k_\lambda$.
By Lemma~\ref{l:homog}, the intersection $X_\lambda{}_0$ of the non-empty open super
$(G,\varepsilon)$\nobreakdash-\hspace{0pt} subschemes of $X_\lambda$, regarded as super $k_\lambda$\nobreakdash-\hspace{0pt} scheme
by means of the isomorphism \eqref{e:funfieldiso} with $X_\lambda$ for $X$, is a non-empty homogeneous
super $(G_{k_\lambda},\varepsilon)$\nobreakdash-\hspace{0pt} scheme.
If $X'{}\!_\lambda$ is obtained from $X_\lambda{}_0$ by extension of scalars along
$\rho_\lambda$, we then have an equality
\begin{equation*}
X_\lambda(k')_{\rho_\lambda} = X'{}\!_\lambda(k')_{k'}
\end{equation*}
of $G(k')$\nobreakdash-\hspace{0pt} sets, compatible with the transition maps, where
$X'{}\!_\lambda(k')_{k'}$ denotes the set of $k'$\nobreakdash-\hspace{0pt} points over $k'$.
It is thus to be shown that $\lim_\lambda X'{}\!_\lambda(k')_{k'}$ is non-empty, and that
$G(k') = G_{k'}(k')_{k'}$ acts transitively on it.
Since the action of $G$ on $X_\lambda$ factors through $G_\lambda$, the action of $G_{k'}$ on
$X'{}\!_\lambda$ factors through $G_{\lambda}{}_{k'}$, and $X'{}\!_\lambda$ is a non-empty
homogeneous super $G_{\lambda}{}_{k'}$\nobreakdash-\hspace{0pt} scheme.
If
\begin{equation*}
U_\lambda = (G_{\lambda}{}_{k'})_{\mathrm{red}},
\end{equation*}
then $U_\lambda(k')_{k'} = G_{\lambda}{}_{k'}(k')_{k'}$
acts transitively on the non-empty set $X'{}\!_\lambda(k')_{k'}$.
Further if $H$ is the stabiliser of the $k'$\nobreakdash-\hspace{0pt} point $z$ of $X'{}\!_\lambda$ over $k'$,
then the stabiliser of the element $z$ of $X'{}\!_\lambda(k')_{k'}$ is the group of
$k'$\nobreakdash-\hspace{0pt} points over $k'$ of the $k'$\nobreakdash-\hspace{0pt} subgroup $H_{\mathrm{red}}$ of $U_\lambda$.
The required result now follows from \cite[1.1.1]{O10} with $k'$ for $k$ and
$X'{}\!_\lambda(k')_{k'}$ for $X_\lambda$.
\end{proof}
Let $Z$ be a super $k$\nobreakdash-\hspace{0pt} scheme.
By a \emph{super groupoid over $Z$} we mean a super $k$\nobreakdash-\hspace{0pt} scheme $K$ together
with a source morphism $d_1$ and a target morphism $d_0$ from $K$ to $Z$, an identity morphism $s_0$ from $Z$ to $K$,
and a composition morphism $\circ$ from $K \times_{{}^{d_1}Z^{d_0}} K$ to $K$, such that $\circ$ is associative with
$s_0$ its left and right identity, and has inverses (necessarily unique).
The morphism
\begin{equation*}
(d_0,d_1):K \to Z \times_k Z
\end{equation*}
defines a structure of super scheme over $Z \times_k Z$ on $K$.
The super groupoid $K$ over $Z$ will be called \emph{affine} if it is affine
as super scheme over $Z \times_k Z$,
and \emph{transitive} if it is faithfully flat as a super scheme over $Z \times_k Z$.
By a \emph{super groupoid with involution over $Z$} we mean a pair $(K,\varepsilon)$ with
$K$ a super groupoid over $Z$ and $\varepsilon:Z \to K$ a lifting of $(\iota_Z,1_Z):Z \to Z \times_k Z$ to $K$
with $\varepsilon$ and $\varepsilon \circ \iota_Z$ inverse to one another,
such that conjugation by $\varepsilon$ acts as $\iota_K$ on $K$.
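For example, for any super $k$\nobreakdash-\hspace{0pt} scheme $Z$ the pair groupoid
\begin{equation*}
K = Z \times_k Z, \qquad d_0 = \pr_1, \quad d_1 = \pr_2, \quad s_0 = (1_Z,1_Z), \quad
(z_2,z_1) \circ (z_1,z_0) = (z_2,z_0)
\end{equation*}
is a super groupoid over $Z$ for which $(d_0,d_1)$ is the identity of $Z \times_k Z$,
so that it is both affine and transitive, and taking $\varepsilon = (\iota_Z,1_Z)$
gives a super groupoid with involution over $Z$.
This example plays no further role here; it merely illustrates the axioms in the simplest case.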
An \emph{action} of a super groupoid $K$ over $Z$ on a quasi-coherent $\mathcal O_Z$\nobreakdash-\hspace{0pt} module $\mathcal V$
is an isomorphism of $\mathcal O_K$\nobreakdash-\hspace{0pt} modules
\begin{equation*}
\alpha:d_1{}\!^*\mathcal V \xrightarrow{\sim} d_0{}\!^*\mathcal V
\end{equation*}
such that if $\alpha_v:\mathcal V_{z_0} \xrightarrow{\sim} \mathcal V_{z_1}$ is the fibre of $\alpha$ at the point $v$
of $K$ above $(z_1,z_0)$, then
\begin{equation*}
\alpha_{w \circ v} = \alpha_w \circ \alpha_v
\end{equation*}
for $w$ above $(z_2,z_1)$.
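In particular, taking $w = v = s_0(z)$ in the condition above shows that
$\alpha_{s_0(z)}{}^2 = \alpha_{s_0(z)}$, so that
\begin{equation*}
\alpha_{s_0(z)} = 1_{\mathcal V_z} \qquad \text{and} \qquad \alpha_{v^{-1}} = \alpha_v{}^{-1}
\end{equation*}
for every point $z$ of $Z$ and every point $v$ of $K$, the second equality by taking $w = v^{-1}$.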
Let $(K,\varepsilon)$ be a super groupoid with involution over $Z$.
We define a \emph{$(K,\varepsilon)$\nobreakdash-\hspace{0pt} module} as a quasi-coherent $\mathcal O_Z$\nobreakdash-\hspace{0pt} module $\mathcal V$ together with
an action $\alpha$ of $K$ on $\mathcal V$ such that
\begin{equation*}
\alpha_\varepsilon = \iota_{\mathcal V}:\mathcal V \xrightarrow{\sim} \iota_Z{}\!^*\mathcal V.
\end{equation*}
A morphism $\mathcal V \to \mathcal W$ of $(K,\varepsilon)$\nobreakdash-\hspace{0pt} modules is a morphism $f:\mathcal V \to \mathcal W$ of the
underlying $\mathcal O_Z$\nobreakdash-\hspace{0pt} modules for which the square formed by $d_1{}\!^*(f)$, $d_0{}\!^*(f)$, and the actions
commutes.
With tensor product that of the underlying $\mathcal O_Z$\nobreakdash-\hspace{0pt} modules, $(K,\varepsilon)$\nobreakdash-\hspace{0pt} modules form a tensor
category $\MOD_{K,\varepsilon}(Z)$.
A $(K,\varepsilon)$\nobreakdash-\hspace{0pt} module with underlying $\mathcal O_Z$\nobreakdash-\hspace{0pt} module a vector bundle over $Z$
will also be called a \emph{representation of $(K,\varepsilon)$}.
We write $\Mod_{K,\varepsilon}(Z)$ for the full rigid tensor subcategory of $\MOD_{K,\varepsilon}(Z)$
consisting of the representations of $(K,\varepsilon)$.
Suppose that $Z$ has a structure of super $k_1$\nobreakdash-\hspace{0pt} scheme for an extension $k_1$ of $k$.
If $K$ is a groupoid over $Z$ such that $(d_0,d_1)$ factors through the super subscheme $Z \times_{k_1} Z$ of
$Z \times_k Z$, then $K$ may be regarded as a groupoid in the category of super $k_1$\nobreakdash-\hspace{0pt} schemes.
We then say that $K$ is a groupoid over $Z/k_1$.
In that case any lifting $\varepsilon$ as above is at the same time a lifting of $(\iota_Z,1_Z):Z \to Z \times_{k_1} Z$,
and the category $\MOD_{K,\varepsilon}(Z)$ is the same whether $K$ is
regarded as a groupoid over $Z$ or over $Z/k_1$.
If $Z = \Spec(k')$ for an extension $k'$ of $k$,
then a groupoid over $Z$ will also be called a groupoid over $k'$, and a groupoid over $Z/k_1$
a groupoid over $k'/k_1$.
Let $h:Z' \to Z$ be a morphism of super $k$\nobreakdash-\hspace{0pt} schemes.
A morphism over $h$ from a super groupoid $K'$ over $Z'$ to a super groupoid $K$ over $Z$
is a morphism $l:K' \to K$ such that
$h$ and $l$ are compatible with the source, target, identity and composition for $K$ and $K'$.
When $Z' = Z$ and $h = 1_Z$ we speak of a morphism over $Z$.
The \emph{pullback of $K$ along $h$} is the super groupoid
\begin{equation*}
K \times_{Z \times_k Z} Z' \times_k Z'
\end{equation*}
over $Z'$ where the identity sends $z'$ to $(s_0(h(z')),z',z')$ and the composition sends
$((w,z'{}\!_2,z'{}\!_1),(v,z'{}\!_1,z'{}\!_0))$ to $(w \circ v,z'{}\!_2,z'{}\!_0)$.
Together with the projection to $K$, it is universal among super groupoids over $Z'$
equipped with a morphism to $K$ over $h$.
The pullback along $h$ of a super groupoid with involution $(K,\varepsilon)$ over $Z$
is defined as $(K',\varepsilon')$ where $K'$ is the pullback of $K$ along $h$ and
$\varepsilon'$ sends $z'$ to $(\varepsilon(h(z')),\iota_{Z'}(z'),z')$.
Let $(K,\varepsilon)$ be a super groupoid with involution over $Z$ and
$(K',\varepsilon')$ be a super groupoid with involution over $Z'$.
Given a morphism $l:K' \to K$ of super groupoids over $h:Z' \to Z$ with
$l \circ \varepsilon' = \varepsilon \circ h$,
we define as follows a tensor functor
\begin{equation*}
l^*:\MOD_{K,\varepsilon}(Z) \to \MOD_{K',\varepsilon'}(Z').
\end{equation*}
On the underlying $\mathcal O_Z$\nobreakdash-\hspace{0pt} modules, $l^*$ is $h^*$.
If $\alpha$ is the action of $K$ on $\mathcal V$, then modulo the pullback isomorphisms,
the action of $K'$ on $h^*\mathcal V$ is $l^*(\alpha)$.
When $K$ is a transitive affine groupoid over $Z$ and $(K',\varepsilon')$
is the pullback of $(K,\varepsilon)$ along $h$, it follows
from faithfully flat descent for quasi-coherent modules over a super $k$\nobreakdash-\hspace{0pt} scheme that
for $Z'$ non-empty, $l^*$ is an equivalence, and that it induces an equivalence from
$\Mod_{K,\varepsilon}(Z)$ to $\Mod_{K',\varepsilon'}(Z')$.
Let $X$ be a super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme.
Then we have a super groupoid with involution
\begin{equation*}
(G \times_k X,\varepsilon \times_k X)
\end{equation*}
over $X$, where $d_0$ is the action,
$d_1$ is the projection, the identity sends $x$ to $(1,x)$, and the composition
sends $((g',gx),(g,x))$ to $(g'g,x)$.
Further
\begin{equation*}
\MOD_{G,\varepsilon}(X) = \MOD_{G \times_k X,\varepsilon \times_k X}(X)
\end{equation*}
with a similar identification for $\Mod_{G,\varepsilon}(X)$ and $\Mod_{G \times_k X,\varepsilon \times_k X}(X)$.
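Concretely, for the action groupoid $G \times_k X$ an action $\alpha$ on $\mathcal V$
has fibre at the point $(g,x)$ an isomorphism
$\alpha_{(g,x)}:\mathcal V_x \xrightarrow{\sim} \mathcal V_{gx}$, and the cocycle condition reads
\begin{equation*}
\alpha_{(g'g,x)} = \alpha_{(g',gx)} \circ \alpha_{(g,x)},
\end{equation*}
the fibrewise form of a $G$\nobreakdash-\hspace{0pt} equivariant structure on $\mathcal V$.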
Let $X$ be a quasi-affine super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme, $Z$ be a super $k$\nobreakdash-\hspace{0pt} scheme
and $j:Z \to X$ be a $k$\nobreakdash-\hspace{0pt} morphism.
Write $X$ as the limit $\lim_\lambda X_\lambda$ with $(X_\lambda)_{\lambda \in \Lambda}$ as in
Lemma~\ref{l:qafflim}.
Denote by $(K,\varepsilon_0)$ the pullback of $(G \times_k X,\varepsilon \times_k X)$ along $j$ and by
$(K_\lambda,\varepsilon_\lambda)$ the pullback of $(G \times_k X_\lambda,\varepsilon \times_k X_\lambda)$
along $j_\lambda = \pr_\lambda \circ j$.
We have a commutative square
\begin{equation}\label{e:GXKlambda}
\begin{gathered}
\xymatrix{
G \times_k X \ar_{G \times_k \pr_\lambda}[d] & K \ar_-{l}[l] \ar^{q_\lambda}[d] \\
G \times_k X_\lambda & K_\lambda \ar_-{l_\lambda}[l]
}
\end{gathered}
\end{equation}
of groupoids, compatible with $\varepsilon \times_k X$, $\varepsilon \times_k X_\lambda$,
$\varepsilon_0$ and $\varepsilon_\lambda$, where $l$ and $l_\lambda$ are the projections and
$q_\lambda$ is the fibre product of
$G \times_k \pr_\lambda$ and $Z \times_k Z$ over $j_\lambda \times_k j_\lambda$.
Then
\begin{equation}\label{e:KlimKlambda}
K = \lim_\lambda K_\lambda
\end{equation}
in the category of groupoids over $Z$,
with projections $q_\lambda$.
\begin{thm}\label{t:transaff}
Let $X$ be a quasi-affine super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme with $\Mod_{G,\varepsilon}(X)$
integral, $Z$ be a non-empty super $k$\nobreakdash-\hspace{0pt} scheme and $j:Z \to X$ be a morphism of
super $k$\nobreakdash-\hspace{0pt} schemes which factors through every non-empty open super $G$\nobreakdash-\hspace{0pt} subscheme of $X$.
Then if $Z$ is given the structure of $\kappa(\Mod_{G,\varepsilon}(X))$\nobreakdash-\hspace{0pt} scheme defined by
the isomorphism \eqref{e:funfieldiso}, the pullback along $j$ of the super groupoid
$G \times_k X$ over $X/k$ is a
transitive affine super groupoid over $Z/\kappa(\Mod_{G,\varepsilon}(X))$.
\end{thm}
\begin{proof}
Write $k_0$ for $\kappa(\Mod_{G,\varepsilon}(X))$.
Suppose first that $X$ is of finite type.
Then by Lemma~\ref{l:homog} the intersection $X_0$ of the non-empty
open super $G$\nobreakdash-\hspace{0pt} subschemes of $X$ with its structure $k_0$\nobreakdash-\hspace{0pt} scheme
defined by the isomorphism \eqref{e:funfieldiso} is a homogeneous $G_{k_0}$\nobreakdash-\hspace{0pt} scheme.
Since the projection $e:X_0 \to X$ is a monomorphism of super $k$\nobreakdash-\hspace{0pt} schemes
underlying a morphism of super $G$\nobreakdash-\hspace{0pt} schemes, the pullback of $G \times_k X$ along $e$ is
\begin{equation*}
G \times_k X_0 = G_{k_0} \times_{k_0} X_0,
\end{equation*}
and hence is transitive affine over $X_0/k_0$.
The result follows, because $j$ factors through $X_0$.
To prove the general case, write $X$ as the limit of a filtered inverse system
$(X_\lambda)_{\lambda \in \Lambda}$ as above.
Let $K_\lambda$ be as above, and write $k_\lambda$ for
$\kappa(\Mod_{G,\varepsilon}(X_\lambda))$.
By the case where $X$ is of finite type, each $K_\lambda$ is transitive affine
over $X_\lambda/k_\lambda$.
We have
\begin{equation*}
k_0 = \colim_\lambda k_\lambda
\end{equation*}
by isomorphisms of the form \eqref{e:colimHomGVW}.
Thus $Z \times_{k_0} Z = \lim_\lambda Z \times_{k_\lambda} Z$.
The result now follows from \eqref{e:KlimKlambda}.
\end{proof}
Let $X$, $Z$ and $j$ be as in Theorem~\ref{t:transaff}.
Denote by $(K,\varepsilon_0)$ the pullback of $(G \times_k X,\varepsilon \times_k X)$ along $j$,
and by $l:K \to G \times_k X$ the projection.
Since pullback onto a point of $Z$
defines an equivalence on $\MOD_{K,\varepsilon_0}(Z)$,
it follows from Lemmas~\ref{l:pointexact} and \ref{l:pointtors} that the pullback tensor functor
\begin{equation*}
l^*:\MOD_{G,\varepsilon}(X) \to \MOD_{K,\varepsilon_0}(Z)
\end{equation*}
is exact, and that $\mathcal V$ is a torsion object in $\MOD_{G,\varepsilon}(X)$ if and only if
$l^*\mathcal V = 0$.
Thus $l^*$ factors uniquely through the projection onto $\overline{\MOD_{G,\varepsilon}(X)}$
as a $k$\nobreakdash-\hspace{0pt} tensor functor
\begin{equation}\label{e:MODbarK}
\overline{\MOD_{G,\varepsilon}(X)} \to \MOD_{K,\varepsilon_0}(Z),
\end{equation}
and \eqref{e:MODbarK} is faithful, exact and cocontinuous.
\begin{thm}\label{t:GKMODequiv}
The $k$\nobreakdash-\hspace{0pt} tensor functor \eqref{e:MODbarK} is an equivalence.
\end{thm}
\begin{proof}
Since \eqref{e:MODbarK} is faithful, it remains to prove that it is full
and essentially surjective.
Suppose first that $X$ is of finite type.
Write $X_0$ for the intersection of the non-empty open super $G$\nobreakdash-\hspace{0pt} subschemes of $X$
as in Lemma~\ref{l:homog},
and $e$ for the monomorphism $X_0 \to X$.
By Lemma~\ref{l:homog}, $l^*$ factors as
\begin{equation*}
e^*:\MOD_{G,\varepsilon}(X) \to \MOD_{G,\varepsilon}(X_0)
\end{equation*}
followed by a tensor equivalence.
Thus $e^*$ is exact, and $e^*\mathcal V = 0$ if and only if $\mathcal V$ is a torsion object.
Further $e^*$ factors as the projection onto $\overline{\MOD_{G,\varepsilon}(X)}$ followed
by a $k$\nobreakdash-\hspace{0pt} tensor functor
\begin{equation}\label{e:MODbarG}
\overline{\MOD_{G,\varepsilon}(X)} \to \MOD_{G,\varepsilon}(X_0),
\end{equation}
and \eqref{e:MODbarK} is \eqref{e:MODbarG} followed by a tensor equivalence.
Since $X_0$ is a topological subspace of $X$
with structure sheaf the restriction of $\mathcal O_X$,
the counit for $e^*$ and $e_*$ is an isomorphism
\begin{equation*}
e^*e_* \xrightarrow{\sim} \Id.
\end{equation*}
Thus \eqref{e:MODbarG} and \eqref{e:MODbarK} are essentially surjective, and surjective on
hom groups between objects of the form $\overline{e_*\mathcal V_0}$.
If $\eta$ is the unit, then $e^*\eta$ is an isomorphism by the triangular identity.
Thus $\eta_{\mathcal V}$ is an isomorphism up to torsion for every $\mathcal V$,
and $\overline{\eta_{\mathcal V}}$ is an isomorphism $\overline{\mathcal V} \xrightarrow{\sim} \overline{e_*e^*\mathcal V}$.
It follows that \eqref{e:MODbarG} and \eqref{e:MODbarK} are full.
To prove that \eqref{e:MODbarK} is full and essentially surjective for arbitrary $X$,
write $X$ as the limit of a filtered system $(X_\lambda)_{\lambda \in \Lambda}$ as in
Lemma~\ref{l:qafflim}, and let $K_\lambda$, $\varepsilon_\lambda$,
$l_\lambda$ and $q_\lambda$ be as in \eqref{e:GXKlambda}.
Let $\mathcal V$ and $\mathcal W$ be objects of $\MOD_{G,\varepsilon}(X)$.
To prove that \eqref{e:MODbarK} defines a surjection
\begin{equation}\label{e:HomVWbar}
\Hom(\overline{\mathcal V},\overline{\mathcal W}) \to \Hom_{K,\mathcal O_X}(l^*\mathcal V,l^*\mathcal W)
\end{equation}
we may suppose by \eqref{e:Vcoker} and the faithfulness and cocontinuity of
\eqref{e:MODbarK} that $\mathcal V = V_X$ with $V$ in $\Mod_{G,\varepsilon}(k)$.
Since $l^*(V_X)$ is of finite type in $\MOD_{K,\varepsilon_0}(Z)$,
and $\mathcal W$ is by Lemma~\ref{l:quotsub}\ref{i:quotsubmod} the filtered colimit of
its subobjects which are quotients of objects $W_X$ with $W$ in $\Mod_{G,\varepsilon}(k)$,
we may further suppose that $\mathcal W$ is such a quotient.
Then $\mathcal W$ is the cokernel of a morphism $f':W'{}\!_X \to W_X$ for some $W'$ in
$\MOD_{G,\varepsilon}(k)$.
Writing $W'$ as the filtered colimit of its subobjects $W''$ in $\Mod_{G,\varepsilon}(k)$
shows that $l^*\mathcal W$ is for some $W''$ the cokernel of
$l^*f''$ with
\begin{equation*}
f'':W''{}\!_X \to W_X
\end{equation*}
the restriction of $f'$ to $W''{}\!_X$.
If $p$ is the canonical morphism from $\Coker f''$ to $\mathcal W = \Coker f'$, then $l^*p$ is
an isomorphism, so that by Lemmas~\ref{l:pointexact} and \ref{l:pointtors}
$p$ is an isomorphism up to torsion and $\overline{p}$ is an isomorphism.
Thus we may suppose further that $\mathcal W = \Coker f''$.
By \eqref{e:colimHomGVW}, $f''$ descends to a morphism
\begin{equation*}
f_0:W''{}\!_{X_{\lambda_0}} \to W_{X_{\lambda_0}}
\end{equation*}
for some $\lambda_0 \in \Lambda$.
If we replace $\Lambda$ by $\lambda_0/\Lambda$, we may suppose finally by taking
$\mathcal V_0 = V_{X_{\lambda_0}}$ and $\mathcal W_0 = \Coker f_0$ that $\Lambda$ has an
initial object $\lambda_0$ and $\mathcal V = (\pr_{\lambda_0})^*\mathcal V_0$ and
$\mathcal W = (\pr_{\lambda_0})^*\mathcal W_0$ for $\mathcal V_0$ and $\mathcal W_0$ in $\MOD_{G,\varepsilon}(X_{\lambda_0})$
with $(l_{\lambda_0})^*\mathcal V_0$ and $(l_{\lambda_0})^*\mathcal W_0$ in
$\Mod_{K_{\lambda_0},\varepsilon_{\lambda_0}}(Z)$.
Let $h:l^*\mathcal V \to l^*\mathcal W$ be an element of the target of \eqref{e:HomVWbar}.
Then if $\mathcal V_\lambda$ and $\mathcal W_\lambda$ are the pullbacks of $\mathcal V_0$ and $\mathcal W_0$
along $X_\lambda \to X_{\lambda_0}$, restricting
to a non-empty affine open subset of $Z$ shows that there exists a $\lambda \in \Lambda$
and a morphism
\begin{equation*}
h_1:l_\lambda{}\!^*\mathcal V_\lambda \to l_\lambda{}\!^*\mathcal W_\lambda
\end{equation*}
such that $q_\lambda{}\!^*h_1$ coincides, modulo pullback isomorphisms, with $h$.
By the case where $X$ is of finite type, $h_1$ is the image of a morphism
$\overline{\mathcal V_\lambda} \to \overline{\mathcal W_\lambda}$.
Thus there exist isomorphisms up to torsion $s:\mathcal V'{}\!_\lambda \to \mathcal V_\lambda$
and $r:\mathcal W_\lambda \to \mathcal W'{}\!_\lambda$ and a morphism
$h_0:\mathcal V'{}\!_\lambda \to \mathcal W'{}\!_\lambda$ such that
$l_\lambda{}\!^*h_0 = l_\lambda{}\!^*r \circ h_1 \circ l_\lambda{}\!^*s$.
We then have a diagram
\begin{equation*}
\xymatrix{
l^*\mathcal V \ar_h[d] \ar^-{l^*a}[r] & l^*(\pr_\lambda)^*\mathcal V_\lambda \ar[d] &
l^*(\pr_\lambda)^*\mathcal V'{}\!_\lambda \ar_-{l^*(\pr_\lambda)^*s}[l]
\ar^{l^*(\pr_\lambda)^*h_0}[d] \\
l^*\mathcal W \ar_-{l^*b}[r] & l^*(\pr_\lambda)^*\mathcal W_\lambda \ar_-{l^*(\pr_\lambda)^*r}[r] &
l^*(\pr_\lambda)^*\mathcal W'{}\!_\lambda
}
\end{equation*}
where $a$ and $b$ are pullback isomorphisms, the middle arrow coincides modulo
pullback isomorphisms with $h$ and $q_\lambda{}\!^*h_1$,
and the right square commutes by naturality of the pullback
isomorphism $l^*(\pr_\lambda)^* \xrightarrow{\sim} q_\lambda{}\!^*l_\lambda{}\!^*$.
By Lemmas~\ref{l:pointexact} and \ref{l:pointtors}, $(\pr_\lambda)^*r$ and
$(\pr_\lambda)^*s$ are isomorphisms up to torsion.
The exterior of the diagram thus shows that $h$ is in the image of
\eqref{e:HomVWbar}.
This proves that \eqref{e:MODbarK} is full.
Let $\mathcal U$ be an object of $\Mod_{K,\varepsilon_0}(Z)$.
Restricting to a non-empty affine open subset of $Z$ shows that there exists
a $\lambda \in \Lambda$ such that $\mathcal U = q_\lambda{}\!^*\mathcal U_1$ for some $\mathcal U_1$
in $\Mod_{K_\lambda,\varepsilon_\lambda}(Z)$.
By the case where $X$ is of finite type, $\mathcal U_1$ is isomorphic to $l_\lambda{}\!^*\mathcal U_0$
for some $\mathcal U_0$ in $\MOD_{G,\varepsilon}(X_\lambda)$.
Then $\mathcal U$ is isomorphic to $q_\lambda{}\!^*l_\lambda{}\!^*\mathcal U_0$ and hence
to $l^*(\pr_\lambda)^*\mathcal U_0$.
Since every object of $\MOD_{K,\varepsilon_0}(Z)$ is the filtered colimit
of its subobjects in $\Mod_{K,\varepsilon_0}(Z)$, and since \eqref{e:MODbarK}
is fully faithful and cocontinuous, the essential surjectivity of \eqref{e:MODbarK}
follows.
\end{proof}
\section{Super Tannakian hulls}\label{s:supTann}
In this section we combine the results of the preceding two sections to construct a
super Tannakian hull for any pseudo-Tannakian category as the full tensor subcategory
of dualisable objects of a functor category modulo torsion.
Let $\mathcal C$ be an essentially small integral rigid tensor category.
Recall that $\widetilde{\mathcal C}$ is the quotient $\overline{\widehat{\mathcal C}}$
of $\widehat{\mathcal C}$ by the torsion objects.
The canonical tensor functor
$\mathcal C \to (\widetilde{\mathcal C})_\mathrm{rig}$ factors uniquely through the
faithful strict tensor functor
$E_{\mathcal C}$ as
\begin{equation}\label{e:canfac}
\mathcal C \xrightarrow{E_{\mathcal C}} \mathcal C_\mathrm{fr} \to (\widetilde{\mathcal C})_\mathrm{rig},
\end{equation}
and by Corollary~\ref{c:FrCff} the second arrow is fully faithful.
In particular $\mathcal C \to \widetilde{\mathcal C}$ defines an isomorphism
\begin{equation}\label{e:kappCEnd}
\kappa(\mathcal C) \xrightarrow{\sim} \End_{\widetilde{\mathcal C}}(\mathbf 1).
\end{equation}
Thus we may regard $\widetilde{\mathcal C}$ as a $\kappa(\mathcal C)$\nobreakdash-\hspace{0pt} tensor category.
Further $(\widetilde{\mathcal C})_\mathrm{rig}$
is integral because $\mathcal C_\mathrm{fr}$ is integral and any morphism $A \to \mathbf 1$
in $(\widetilde{\mathcal C})_\mathrm{rig}$,
after composing with an appropriate epimorphism $A' \to A$ in $\widetilde{\mathcal C}$,
lies in the image of the additive hull of $\mathcal C_\mathrm{fr}$.
An integral tensor category $\mathcal C$ will be said to be \emph{of characteristic $0$}
if the integral domain $\End_{\mathcal C}(\mathbf 1)$ is of characteristic $0$.
When this is so, the canonical tensor functor from $\mathcal C$ to $\mathbf Q \otimes_{\mathbf Z} \mathcal C$ is
faithful.
\begin{lem}\label{l:CCQequiv}
Let $\mathcal C$ be an essentially small, integral, rigid tensor category of characteristic $0$.
Then the tensor functor $\widetilde{\mathcal C} \to (\mathbf Q \otimes_{\mathbf Z} \mathcal C)^\sim$
induced by the canonical tensor functor $\mathcal C \to \mathbf Q \otimes_{\mathbf Z} \mathcal C$ is a tensor equivalence.
\end{lem}
\begin{proof}
Write $\mathcal C'$ for $\mathbf Q \otimes_{\mathbf Z} \mathcal C$ and $I:\mathcal C \to \mathcal C'$ for the canonical tensor functor.
We may identify the additive category $\widehat{\mathcal C'}$ with the category
of additive functors from $\mathcal C$ to $\mathbf Q$\nobreakdash-\hspace{0pt} vector spaces.
Composing with the extension of scalars functor $\mathbf Q \otimes_{\mathbf Z} -$ from abelian groups to
$\mathbf Q$\nobreakdash-\hspace{0pt} vector spaces then defines a functor $H:\widehat{\mathcal C} \to \widehat{\mathcal C'}$.
Further $H$ is cocontinuous and $Hh_- = h_-I$ is isomorphic to $\widehat{I}h_-$, so that
$H$ is isomorphic to $\widehat{I}$.
Since the forgetful functor from $\mathbf Q$\nobreakdash-\hspace{0pt} vector spaces to abelian groups is right adjoint
to $\mathbf Q \otimes_{\mathbf Z} -$, composing with it defines a forgetful functor
$H':\widehat{\mathcal C'} \to \widehat{\mathcal C}$ right adjoint to $H$.
The counit for the adjunction is an isomorphism, while the component $\eta_M:M \to H'H(M)$
of the unit at $M$ in $\widehat{\mathcal C}$ is given by the canonical homomorphisms
\begin{equation}\label{e:unitMAQ}
M(A) \to \mathbf Q \otimes_{\mathbf Z} M(A) = H'H(M)(A)
\end{equation}
for $A$ in $\mathcal C$.
For every $M$, the kernel and cokernel of $\eta_M$ are torsion objects in $\widehat{\mathcal C}$,
because the kernel and cokernel of \eqref{e:unitMAQ} are torsion abelian groups,
so that each of their elements is annihilated by some non-zero element of $\mathbf Z \subset \mathcal C(\mathbf 1,\mathbf 1)$.
Since the hom groups $\mathcal C(A,\mathbf 1)$ are torsion free and every element of $\mathcal C'(A,\mathbf 1)$
is a rational multiple of one in $\mathcal C(A,\mathbf 1)$, the torsion objects in $\widehat{\mathcal C'}$
are those which are torsion as objects of $\widehat{\mathcal C}$.
The forgetful functor $H'$ thus preserves torsion objects.
Similarly $H$ preserves torsion objects.
Since $H$ and $H'$ are exact, they thus define functors
$\overline{H}:\widetilde{\mathcal C} \to \widetilde{\mathcal C'}$ and
$\overline{H'}:\widetilde{\mathcal C'} \to \widetilde{\mathcal C}$.
Further $\overline{H}$ and $\overline{H'}$ are adjoint to one another with the counit of the adjunction
an isomorphism.
The unit, with components $\overline{\eta_M}$, is also an isomorphism because each
$\eta_M$ is an isomorphism up to torsion.
Thus $\overline{H}$ and hence $\widetilde{I}$ is an equivalence.
\end{proof}
\begin{defn}\label{d:pseudoTann}
By a \emph{pseudo-Tannakian category} we mean
an essentially small integral rigid tensor category of characteristic $0$ in which
each object $M$ has a tensor power $M^{\otimes n}$ on which some non-zero element of
$\mathbf Z[\mathfrak{S}_n]$ acts as $0$.
An abelian pseudo-Tannakian category will be called a \emph{super Tannakian category}.
\end{defn}
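For example, the category $\Mod(k)$ of finite-dimensional super vector spaces over a field
$k$ of characteristic $0$ is pseudo-Tannakian:
if $M$ has super dimension $m|n$ and $d = (m+1)(n+1)$, then the Young symmetriser in
$\mathbf Z[\mathfrak{S}_d]$ attached to the rectangular partition $\lambda$ with $n+1$ rows and
$m+1$ columns is non-zero and acts as $0$ on $M^{\otimes d}$, its image being a direct sum of
copies of $S^\lambda M$, which vanishes by the property of Schur functors recalled before
Lemma~\ref{l:supertensexact}.
In the purely even case $n = 0$ one may instead take the antisymmetriser
\begin{equation*}
\sum_{\sigma \in \mathfrak{S}_{m+1}} \operatorname{sgn}(\sigma)\,\sigma \in \mathbf Z[\mathfrak{S}_{m+1}],
\end{equation*}
which acts as $0$ on $M^{\otimes(m+1)}$ because $\Lambda^{m+1}M = 0$.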
By Lemma~\ref{l:abfracclose},
$\End(\mathbf 1)$ is a field of characteristic $0$ in any super Tannakian category.
A super Tannakian category in the sense of Definition~\ref{d:pseudoTann}
is thus the same as a tensor category $\mathcal C$ such that $\End_{\mathcal C}(\mathbf 1)$ is
a field of characteristic $0$ and $\mathcal C$ is a super Tannakian category over $\End_{\mathcal C}(\mathbf 1)$
in the usual sense.
If $\mathcal C$ is a pseudo-Tannakian category then $\mathcal C_\mathrm{fr}$ is a pseudo-Tannakian category.
By Lemma~\ref{l:abfracclose}, any super Tannakian category is fractionally closed.
\begin{thm}\label{t:Tannequiv}
Let $\mathcal C$ be a pseudo-Tannakian category
and $\overline{\kappa(\mathcal C)}$ be an algebraic closure of $\kappa(\mathcal C)$.
Then $\widetilde{\mathcal C}$ is $\kappa(\mathcal C)$\nobreakdash-\hspace{0pt} tensor equivalent to
$\MOD_{K,\varepsilon}(\overline{\kappa(\mathcal C)})$ for some
transitive affine super groupoid with involution
$(K,\varepsilon)$ over $\overline{\kappa(\mathcal C)}/\kappa(\mathcal C)$.
\end{thm}
\begin{proof}
We may suppose by Lemma~\ref{l:CCQequiv} that $\mathcal C$ is a $\mathbf Q$\nobreakdash-\hspace{0pt} tensor category.
By Theorem~\ref{t:Ftildeequiv} there exist an affine $\mathbf Q$\nobreakdash-\hspace{0pt} group
with involution $(G,\varepsilon)$ and an affine super $(G,\varepsilon)$\nobreakdash-\hspace{0pt} scheme $X$
with $\Mod_{G,\varepsilon}(X)$
integral such that we have a tensor equivalence
\begin{equation*}
\widetilde{\mathcal C} \to \overline{\MOD_{G,\varepsilon}(X)}.
\end{equation*}
It induces by \eqref{e:kappCEnd} and Proposition~\ref{p:Frff} an isomorphism
\begin{equation*}
i:\kappa(\mathcal C) \xrightarrow{\sim} \kappa(\Mod_{G,\varepsilon}(X)).
\end{equation*}
With $\rho:\kappa(\Mod_{G,\varepsilon}(X)) \to \overline{\kappa(\mathcal C)}$ the
composite of the embedding $\kappa(\mathcal C) \to \overline{\kappa(\mathcal C)}$ and $i^{-1}$,
there then exists by Lemma~\ref{l:kbarpoints} a $\overline{\kappa(\mathcal C)}$\nobreakdash-\hspace{0pt} point $z$ of $X$
lying in every non-empty open super $G$\nobreakdash-\hspace{0pt} subscheme of $X$, such that the homomorphism
from $\kappa(\Mod_{G,\varepsilon}(X))$ to $\overline{\kappa(\mathcal C)}$ defined by $z$
is $\rho$.
If $(K,\varepsilon_0)$ is the pullback of $(G \times_k X,\varepsilon \times_k X)$
along $z$,
then $K$ is transitive affine over $\overline{\kappa(\mathcal C)}/\kappa(\Mod_{G,\varepsilon}(X))$
by Theorem~\ref{t:transaff} with $Z = \Spec(\overline{\kappa(\mathcal C)})$ and $j = z$,
and pullback along $z$ defines by Theorem~\ref{t:GKMODequiv} a
$\kappa(\Mod_{G,\varepsilon}(X))$\nobreakdash-\hspace{0pt} tensor equivalence
\begin{equation*}
\overline{\MOD_{G,\varepsilon}(X)} \to \MOD_{K,\varepsilon_0}(\overline{\kappa(\mathcal C)}).
\end{equation*}
Regarding $K$ as a transitive affine groupoid over $\overline{\kappa(\mathcal C)}/\kappa(\mathcal C)$
using $i$ and writing $\varepsilon$ instead of $\varepsilon_0$
thus gives the required $\kappa(\mathcal C)$\nobreakdash-\hspace{0pt} tensor equivalence.
\end{proof}
Let $\mathcal C$ be a pseudo-Tannakian category.
By Theorem~\ref{t:Tannequiv},
the full subcategory $(\widetilde{\mathcal C})_\mathrm{rig}$
of $\widetilde{\mathcal C}$ is abelian and exactly embedded, and closed under the formation
of subquotients and extensions.
Further every object of $(\widetilde{\mathcal C})_\mathrm{rig}$ is of finite presentation
in $\widetilde{\mathcal C}$, and every object of $\widetilde{\mathcal C}$ is the filtered colimit
of its subobjects in $(\widetilde{\mathcal C})_\mathrm{rig}$.
Write $\mathcal C_0$ for the full subcategory of $(\widetilde{\mathcal C})_\mathrm{rig}$
whose objects are direct sums of those in the image of $\mathcal C \to \widetilde{\mathcal C}$.
Then the second arrow of \eqref{e:canfac} factors uniquely through
the embedding of $\mathcal C_0$ as
\begin{equation}\label{e:addhull}
\mathcal C_\mathrm{fr} \to \mathcal C_0 \to (\widetilde{\mathcal C})_\mathrm{rig}
\end{equation}
where the first arrow is the embedding of $\mathcal C_\mathrm{fr}$ into its additive hull.
Every object of $(\widetilde{\mathcal C})_\mathrm{rig}$ is a quotient of one
in $\mathcal C_0$.
It follows that every object of
$(\widetilde{\mathcal C})_\mathrm{rig}$ is the cokernel of a morphism in $\mathcal C_0$.
Taking duals then shows that every object of $(\widetilde{\mathcal C})_\mathrm{rig}$ is
the kernel of a morphism in $\mathcal C_0$.
\begin{cor}
A rigid tensor category is equivalent to a full tensor subcategory of a super Tannakian
category if and only if it is fractionally closed and pseudo-Tannakian.
\end{cor}
\begin{proof}
The ``only if'' is clear.
Conversely, if $\mathcal C$ is a fractionally closed pseudo-Tannakian category, then
$\mathcal C \to (\widetilde{\mathcal C})_\mathrm{rig}$ is fully faithful by \eqref{e:canfac}
with $(\widetilde{\mathcal C})_\mathrm{rig}$ super Tannakian by Theorem~\ref{t:Tannequiv}.
\end{proof}
\begin{lem}\label{l:Tannhullab}
Let $\mathcal C$ be a super Tannakian category.
Then the canonical tensor functor $\mathcal C \to (\widetilde{\mathcal C})_\mathrm{rig}$ is
a tensor equivalence.
\end{lem}
\begin{proof}
By Lemma~\ref{l:abfracclose}, $E_{\mathcal C}$ is an equivalence, so that by \eqref{e:canfac}
$\mathcal C \to (\widetilde{\mathcal C})_\mathrm{rig}$ is fully faithful with essential image
$\mathcal C_0$ as in \eqref{e:addhull}.
Thus any object of $(\widetilde{\mathcal C})_\mathrm{rig}$ is the kernel of a morphism
in the image of $\mathcal C \to (\widetilde{\mathcal C})_\mathrm{rig}$.
Since $\mathcal C \to \widehat{\mathcal C}$ is left exact, so also are $\mathcal C \to \widetilde{\mathcal C}$ and
$\mathcal C \to (\widetilde{\mathcal C})_\mathrm{rig}$.
The essential surjectivity follows.
\end{proof}
\begin{cor}\label{c:Tannequivab}
Let $\mathcal C$ be a super Tannakian category.
Denote by $k$ the field $\End_{\mathcal C}(\mathbf 1)$, and let $\overline{k}$ be an
algebraic closure of $k$.
Then $\mathcal C$ is $k$\nobreakdash-\hspace{0pt} tensor equivalent to
$\Mod_{K,\varepsilon}(\overline{k})$ for some
transitive affine super groupoid with involution
$(K,\varepsilon)$ over $\overline{k}/k$.
\end{cor}
\begin{proof}
Since $k = \kappa(\mathcal C)$ by Lemma~\ref{l:abfracclose},
the result follows from Theorem~\ref{t:Tannequiv} and Lemma~\ref{l:Tannhullab}.
\end{proof}
Recall that if $V$ is a super vector space over a field of characteristic $0$
of super dimension $m|n$, then $S^\lambda V = 0$
for a partition $\lambda$ if and only if $[\lambda]$ contains the rectangular diagram with
$n+1$ rows and $m+1$ columns.
The same therefore holds with $V$ replaced by a super linear map with
image of super dimension $m|n$.
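For instance, if $V$ has super dimension $1|1$, then $S^\lambda V = 0$ if and only if
$[\lambda]$ contains the $2 \times 2$ square, i.e.\ if and only if $\lambda_2 \ge 2$.
Thus
\begin{equation*}
S^{(2,2)}V = 0, \qquad \text{while} \qquad S^{(1^d)}V \ne 0 \quad \text{and} \quad
S^{(d)}V \ne 0 \quad \text{for all $d$,}
\end{equation*}
so that no exterior or symmetric power of $V$ vanishes, in contrast with the purely even case.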
\begin{lem}\label{l:supertensexact}
Any faithful tensor functor between super Tannakian categories is exact.
\end{lem}
\begin{proof}
By Corollary~\ref{c:Tannequivab}, it is enough to prove the exactness of any faithful
tensor functor $T$ between tensor categories of the form $\Mod_{K,\varepsilon}(k')$
for an extension $k'$ of a field $k$ of characteristic $0$
and a transitive affine groupoid with involution $(K,\varepsilon)$ over $k'/k$.
The property of Schur functors recalled above shows that
$T$ preserves super dimensions of representations,
and also monomorphisms and epimorphisms.
Since super dimensions are additive for short exact sequences of representations,
the result follows.
\end{proof}
\begin{lem}\label{l:Tannhullff}
Let $\mathcal C$ be a pseudo-Tannakian category and $\mathcal A$ be an abelian tensor category.
Then composition with the canonical tensor functor $\mathcal C \to (\widetilde{\mathcal C})_\mathrm{rig}$
defines a fully faithful functor from the groupoid of right exact regular tensor
functors $(\widetilde{\mathcal C})_\mathrm{rig} \to \mathcal A$ to the groupoid of regular tensor functors
$\mathcal C \to \mathcal A$.
\end{lem}
\begin{proof}
Composition with $E_{\mathcal C}$ defines a fully faithful functor from regular
tensor functors $\mathcal C_\mathrm{fr} \to \mathcal A$ to regular tensor functors $\mathcal C \to \mathcal A$,
and composition with the first arrow of \eqref{e:addhull} defines an equivalence
from tensor functors $\mathcal C_0 \to \mathcal A$ to tensor functors $\mathcal C_\mathrm{fr} \to \mathcal A$.
It is thus enough to show that given right exact tensor functors $T_1$ and $T_2$
from $(\widetilde{\mathcal C})_\mathrm{rig}$ to $\mathcal A$ and a tensor isomorphism
\begin{equation*}
\varphi_0:T_1|\mathcal C_0 \xrightarrow{\sim} T_2|\mathcal C_0,
\end{equation*}
there is a unique tensor isomorphism
$\varphi:T_1 \xrightarrow{\sim} T_2$ with $\varphi|\mathcal C_0 = \varphi_0$.
As above, every object $A$ of $(\widetilde{\mathcal C})_\mathrm{rig}$ is the target
of an epimorphism $p:B \to A$ in $(\widetilde{\mathcal C})_\mathrm{rig}$ with $B$ in $\mathcal C_0$,
and $p$ may be written as the cokernel of a morphism in $\mathcal C_0$.
Thus $\varphi$ is unique if it exists, because $\varphi_A$ must render the square
\begin{equation*}
\xymatrix{
T_1(B) \ar_{T_1(p)}[d] \ar^{\varphi_0{}_B}[r] & T_2(B) \ar^{T_2(p)}[d] \\
T_1(A) \ar^{\varphi_A}[r] & T_2(A)
}
\end{equation*}
in $\mathcal A$ commutative, with the left arrow an epimorphism.
That $\varphi_A$ as defined by the square exists and is an isomorphism can be seen
by writing $p$ as the cokernel of a morphism in $(\widetilde{\mathcal C})_\mathrm{rig}$
and using the naturality of $\varphi_0$.
It is independent of the choice of $p$, because two epimorphisms $B_1 \to A$ and $B_2 \to A$
both factor through the same epimorphism $B_1 \oplus B_2 \to A$.
That $\varphi_A = \varphi_0{}_A$ for $A$ in $\mathcal C_0$ is clear.
It remains to check that $\varphi_A$ is natural in $A$ and compatible with the
tensor product.
Let $a:A' \to A$ be a morphism in $(\widetilde{\mathcal C})_\mathrm{rig}$.
If $p:B \to A$
is an epimorphism with $B$ in $\mathcal C_0$,
composing the pullback of $p$ along $a$ with an appropriate epimorphism gives
a commutative square
\begin{equation*}
\xymatrix{
B' \ar_{p'}[d] \ar[r] & B \ar^{p}[d] \\
A' \ar^a[r] & A
}
\end{equation*}
in $(\widetilde{\mathcal C})_\mathrm{rig}$ with $p$ and $p'$ epimorphisms and $B$ and $B'$ in $\mathcal C_0$.
We obtain from it a cube with front and back faces given by applying $T_1$ and $T_2$,
left and right faces by the commutative diagrams defining $\varphi_{A'}$ and $\varphi_A$,
whose top face commutes by naturality of $\varphi_0$.
Since $T_1(p')$ is an epimorphism, the bottom face also commutes.
This gives the naturality of $\varphi$.
Similarly given $A$ and $A'$, we obtain from epimorphisms $p:B \to A$
and $p':B' \to A'$ with $B$ and $B'$ in $\mathcal C_0$ a cube where the front
and back faces commute by naturality of the tensor structural isomorphisms of
$T_1$ and $T_2$, the left and right faces by naturality of $\varphi$,
and the top face by compatibility of $\varphi_0$ with the tensor product.
The bottom face, expressing the compatibility of $\varphi$ with the tensor product,
thus also commutes, because $T_1(A)$ is dualisable and hence
$T_1(A) \otimes -$ is exact and $T_1(p) \otimes T_1(p')$ is an epimorphism.
\end{proof}
\begin{defn}
Let $\mathcal C$ be a pseudo-Tannakian category.
A faithful tensor functor $T:\mathcal C \to \mathcal C'$ will be called a \emph{super Tannakian hull of $\mathcal C$}
if for every super Tannakian category $\mathcal D$, composition with $T$ defines an equivalence
from the groupoid of faithful tensor functors $\mathcal C' \to \mathcal D$ to the groupoid of
faithful tensor functors $\mathcal C \to \mathcal D$.
\end{defn}
\begin{thm}\label{t:Tannhull}
For any pseudo-Tannakian category $\mathcal C$,
the canonical tensor functor $\mathcal C \to (\widetilde{\mathcal C})_\mathrm{rig}$ is
a super Tannakian hull of $\mathcal C$.
\end{thm}
\begin{proof}
Let $\mathcal D$ be a super Tannakian category.
By Lemma~\ref{l:supertensexact} and Lemma~\ref{l:Tannhullff} with $\mathcal A = \mathcal D$,
it is enough to show that every
faithful tensor functor $T:\mathcal C \to \mathcal D$ factors up to tensor isomorphism through
$\mathcal C \to (\widetilde{\mathcal C})_\mathrm{rig}$.
We have a diagram
\begin{equation*}
\xymatrix{
\mathcal C \ar^{T}[d] \ar[r] & \widehat{\mathcal C} \ar^{\widehat{T}}[d] \ar[r] & \widetilde{\mathcal C}
\ar^{\widetilde{T}}[d] \\
\mathcal D \ar[r] & \widehat{\mathcal D} \ar[r] & \widetilde{\mathcal D}
}
\end{equation*}
of tensor functors, where the horizontal arrows are the canonical ones,
the left square commutes up to tensor isomorphism, and the right square commutes.
By Lemma~\ref{l:Tannhullab}, the composite of the bottom two arrows is fully faithful
with essential image $(\widetilde{\mathcal D})_\mathrm{rig}$.
Since $\widetilde{T}$ sends $(\widetilde{\mathcal C})_\mathrm{rig}$ into $(\widetilde{\mathcal D})_\mathrm{rig}$,
the required factorisation follows.
\end{proof}
\begin{cor}\label{c:Tannhulleq}
Let $\mathcal C$ be a pseudo-Tannakian category, $\mathcal C'$ be a super Tannakian category,
and $T:\mathcal C \to \mathcal C'$ be a faithful tensor functor.
Denote by $T':\mathcal C_\mathrm{fr} \to \mathcal C'$ the tensor functor with $T = T'E_{\mathcal C}$.
Then the following conditions are equivalent:
\begin{enumerate}
\renewcommand{\theenumi}{(\alph{enumi})}
\item\label{i:Tannhulldef}
$T$ is a super Tannakian hull of $\mathcal C$.
\item\label{i:Tannhullffq}
$T'$ is fully faithful and every object of $\mathcal C'$ is a quotient of
a direct sum of objects in the essential image of $T'$.
\end{enumerate}
\end{cor}
\begin{proof}
Write $U:\mathcal C \to (\widetilde{\mathcal C})_\mathrm{rig}$ for the canonical tensor functor.
We may assume by Theorem~\ref{t:Tannhull} that $T = T''U$ for a faithful tensor functor
\begin{equation*}
T'':(\widetilde{\mathcal C})_\mathrm{rig} \to \mathcal C'.
\end{equation*}
It is then to be shown that \ref{i:Tannhullffq} holds if and only if $T''$ is an equivalence.
We have $T' = T''U'$ with $U'$ the second arrow of \eqref{e:canfac},
and it has been seen following \eqref{e:canfac} and \eqref{e:addhull} that
\ref{i:Tannhullffq} holds with $T$ and $T'$ replaced by $U$ and $U'$.
Thus if $T''$ is an equivalence then \ref{i:Tannhullffq} holds.
Conversely suppose that \ref{i:Tannhullffq} holds.
Then the restriction of $T''$ to the essential image of the fully faithful
functor $U'$ is fully faithful, and hence so also is the restriction of $T''$
to the additive hull $\mathcal C_0$ of this essential image in $(\widetilde{\mathcal C})_\mathrm{rig}$.
Since every object of $(\widetilde{\mathcal C})_\mathrm{rig}$ is a cokernel of a morphism
in $\mathcal C_0$, and since $T''$ is exact by Lemma~\ref{l:supertensexact},
considering morphisms in $(\widetilde{\mathcal C})_\mathrm{rig}$ with target $\mathbf 1$ shows that
$T''$ is fully faithful.
The essential image of $T''$ contains the additive hull of the essential image of $T'$.
Thus by the hypothesis on $T'$, every object of $\mathcal C'$ is a cokernel of a morphism
between objects in the essential image of $T''$.
The essential surjectivity of $T''$ then follows from its full faithfulness and exactness.
\end{proof}
\begin{cor}
Any faithful tensor functor from a fractionally closed rigid tensor category
to a category of super vector spaces over a field of characteristic $0$ is conservative.
\end{cor}
\begin{proof}
Let $\mathcal C$ be a fractionally closed rigid tensor category and $T:\mathcal C \to \Mod(k)$
be a faithful tensor functor with $k$ a field of characteristic $0$.
Then $\mathcal C$ is integral.
To prove that $T$ is conservative, we may after replacing $\mathcal C$ by a full rigid tensor
subcategory assume that $\mathcal C$ is essentially small, and
hence pseudo-Tannakian.
By Theorem~\ref{t:Tannhull},
$T$ factors up to tensor isomorphism as the canonical tensor functor
$U:\mathcal C \to (\widetilde{\mathcal C})_\mathrm{rig}$
followed by a faithful tensor functor $T'$.
By Lemma~\ref{l:supertensexact}, $T'$ is exact and hence conservative.
Since $U$ is fully faithful, $T$ is thus conservative.
\end{proof}
Let $k$ be a commutative ring, $k'$ be a commutative $k$\nobreakdash-\hspace{0pt} algebra, and $\mathcal A$ be a cocomplete
abelian $k$\nobreakdash-\hspace{0pt} tensor category with cocontinuous tensor product.
Then $k' \otimes_k \mathbf 1$ has a canonical structure of commutative algebra in $\mathcal A$,
and $k'$ acts on this algebra through its action on the factor $k'$.
Thus we have a cocomplete
abelian $k'$\nobreakdash-\hspace{0pt} tensor category $\MOD_{\mathcal A}(k' \otimes_k \mathbf 1)$ with cocontinuous tensor product.
If $k$ is a field of characteristic $0$ and $k'$ is an extension of $k$,
then for $\mathcal A = \MOD(X)$ with $X$ a super $k$\nobreakdash-\hspace{0pt} scheme
\begin{equation*}
\MOD_{\mathcal A}(k' \otimes_k \mathbf 1) = \MOD(X_{k'}),
\end{equation*}
and for $\mathcal A = \MOD_{K,\varepsilon}(X)$ with $(K,\varepsilon)$ a super groupoid with
involution over $X/k$
\begin{equation}\label{e:MODAMODK}
\MOD_{\mathcal A}(k' \otimes_k \mathbf 1) = \MOD_{K_{k'},\varepsilon}(X_{k'}).
\end{equation}
It thus follows from Theorem~\ref{t:Tannequiv} that if $\mathcal C$ is a pseudo-Tannakian
$k$\nobreakdash-\hspace{0pt} tensor category with $\kappa(\mathcal C) = k$, then $\Mod_{\widetilde{\mathcal C}}(k' \otimes_k \mathbf 1)$
is a super Tannakian category over $k'$.
\begin{cor}
Let $k$ be a field of characteristic $0$ and $k'$ be an extension of $k$.
Let $T:\mathcal C \to \mathcal C'$ be a faithful $k$\nobreakdash-\hspace{0pt} tensor functor with $\mathcal C$ pseudo-Tannakian
and $\mathcal C'$ super Tannakian, and $T':k' \otimes_k \mathcal C \to \Mod_{\widetilde{\mathcal C'}}(k' \otimes_k \mathbf 1)$
be the $k'$\nobreakdash-\hspace{0pt} tensor functor induced by $T$.
Suppose that $\kappa(\mathcal C) = k$.
Then $T$ is a super Tannakian hull of $\mathcal C$ if and only if $T'$ is a super Tannakian hull of
$k' \otimes_k \mathcal C$.
\end{cor}
\begin{proof}
By Lemma~\ref{l:ext}, $k' \otimes_k \mathcal C$ is an integral $k'$\nobreakdash-\hspace{0pt} tensor category
with $\kappa(k' \otimes_k \mathcal C) = k'$, and $T'$ is faithful.
If $T = T_1E_{\mathcal C}$ and $T' = T'{}\!_1E_{k' \otimes_k \mathcal C}$,
we have a commutative square
\begin{equation*}
\xymatrix{
(k' \otimes_k \mathcal C)_\mathrm{fr} \ar^-{T'{}\!_1}[r] & \Mod_{\widetilde{\mathcal C'}}(k' \otimes_k \mathbf 1) \\
k' \otimes_k \mathcal C_\mathrm{fr} \ar[u] \ar^-{k' \otimes_k T_1}[r] & k' \otimes_k \mathcal C' \ar[u]
}
\end{equation*}
with the left arrow defined by the factorisation of $\mathcal C \to (k' \otimes_k \mathcal C)_\mathrm{fr}$
through $\mathcal C_\mathrm{fr}$.
The right arrow of the square is fully faithful, and every object $M$ of
$\Mod_{\widetilde{\mathcal C'}}(k' \otimes_k \mathbf 1)$ is a quotient of an object in its image:
the $(k' \otimes_k \mathbf 1)$\nobreakdash-\hspace{0pt} module structure $k' \otimes_k M \to M$ of $M$ restricts to an epimorphism
$k' \otimes_k M_0 \to M$ for some subobject $M_0$ of $M$ in $\mathcal C'$.
Suppose that $T$ is a super Tannakian hull of $\mathcal C$.
Then by Corollary~\ref{c:Tannhulleq}, $T_1$ is fully faithful, and every object of $\mathcal C'$ is a quotient of
a direct sum of objects in its image.
Since $T'{}\!_1$ is faithful, it follows from the square that the same holds with $T_1$ replaced
by $T'{}\!_1$.
Thus by Corollary~\ref{c:Tannhulleq}, $T'$ is a super Tannakian hull of $k' \otimes_k \mathcal C$.
Conversely suppose that $T'$ is a super Tannakian hull of $k' \otimes_k \mathcal C$.
Let $U:\mathcal C \to \mathcal C''$ be a super Tannakian hull of $\mathcal C$.
Then $\mathcal C''$ has a unique structure of $k$\nobreakdash-\hspace{0pt} tensor category such that $U$ is a $k$\nobreakdash-\hspace{0pt} tensor
functor, and $T$ factors up to tensor isomorphism as $U$ followed by a $k$\nobreakdash-\hspace{0pt} tensor functor
$\mathcal C'' \to \mathcal C'$.
The induced $k'$\nobreakdash-\hspace{0pt} tensor functor
\begin{equation*}
\Mod_{\widetilde{\mathcal C''}}(k' \otimes_k \mathbf 1) \to \Mod_{\widetilde{\mathcal C'}}(k' \otimes_k \mathbf 1)
\end{equation*}
is an equivalence by the ``only if'' with $T$ replaced by $U$.
Its full faithfulness implies that of $\mathcal C'' \to \mathcal C'$.
Its essential surjectivity implies that every $k' \otimes_k M$ with $M$ in $\mathcal C'$ is a
quotient in $\Mod_{\widetilde{\mathcal C'}}(k' \otimes_k \mathbf 1)$ of $k' \otimes_k M'$ for some $M'$
in the image of $\mathcal C'' \to \mathcal C'$,
and hence by choosing an epimorphism $k' \to k$ of $k$\nobreakdash-\hspace{0pt} vector spaces that
$M$ is a quotient in $\mathcal C'$ of some $M'{}^n$.
It follows that $\mathcal C'' \to \mathcal C'$ is an equivalence, so that $T$ is a super Tannakian
hull of $\mathcal C$.
\end{proof}
Let $f$ be a morphism of super vector bundles over a super scheme $X$.
If the fibre $f_x$
of $f$ above a point $x$ of $X$ is an epimorphism of super vector spaces,
then $f$ is an epimorphism in $\MOD(X')$ for some open super subscheme $X'$ of $X$
containing $x$, by Nakayama's lemma applied to the morphism induced on stalks.
In particular if $f_x$ is an isomorphism, then $f$ is an isomorphism
in some neighbourhood of $x$.
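For instance (a minimal illustration), if $t$ is an even global section of $\mathcal O_X$
and $f$ is multiplication by $t$ on $\mathcal O_X$, then $f_x$ is an isomorphism if and only
if the image of $t$ in $\kappa(x)$ is non-zero, and in that case
\begin{equation*}
f = t\,(-):\mathcal O_{X'} \xrightarrow{\sim} \mathcal O_{X'}
\end{equation*}
on the open super subscheme $X'$ of $X$ where $t$ is invertible, which contains $x$.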
\begin{lem}\label{l:vecbunhom}
Let $X$ be a super $\mathbf Q$\nobreakdash-\hspace{0pt} scheme and $f$ be a morphism
of vector bundles over $X$.
For each point $x$ of $X$ and partition $\lambda$, suppose that $S^\lambda(f_x) = 0$
implies $S^\lambda(f) = 0$.
Then the kernel, cokernel and image of $f$ in $\MOD(X)$ are super vector bundles,
and the image has constant super rank.
\end{lem}
\begin{proof}
Since $S^\lambda(f) = 0$ implies $S^\lambda(f_x) = 0$,
it follows from the property of Schur functors recalled before Lemma~\ref{l:supertensexact}
that the super dimension $m|n$ of the image of $f_x$ is constant.
It is thus enough to show that every point $x$ of $X$ is contained in an open super subscheme $X'$
of $X$ such that the kernel, cokernel and image of the restriction $f':\mathcal V' \to \mathcal W'$ of
$f:\mathcal V \to \mathcal W$ above $X'$ are super vector bundles.
Suppose that $f_x$ has super rank $m|n$.
Then we may write $\mathcal V_x = V_1 \oplus V_2$ and $\mathcal W_x = W_1 \oplus W_2$
with $V_1$ and $W_1$ of super dimension $m|n$ so that the matrix entry
$V_1 \to W_1$ of $f_x$ is an isomorphism.
For sufficiently small $X'$, we have decompositions $\mathcal V' = \mathcal V'{}\!_1 \oplus \mathcal V'{}\!_2$
and $\mathcal W' = \mathcal W'{}\!_1 \oplus \mathcal W'{}\!_2$ such that $(\mathcal V'{}\!_i)_x = V_i$
and $(\mathcal W'{}\!_i)_x = W_i$,
with $\mathcal V'{}\!_1$ and $\mathcal W'{}\!_1$ of constant rank $m|n$.
The entry $f'{}\!_{11}:\mathcal V'{}\!_1 \to \mathcal W'{}\!_1$
of the matrix $(f'{}\!_{ij})$ of $f'$ is then an isomorphism.
We have
\begin{gather*}
\begin{pmatrix}
1 & 0 \\
- f'{}\!_{21} \circ f'{}\!_{11}{}\!^{-1} & 1
\end{pmatrix}
\begin{pmatrix}
f'{}\!_{11} & f'{}\!_{12} \\
f'{}\!_{21} & f'{}\!_{22}
\end{pmatrix}
\begin{pmatrix}
1 & - f'{}\!_{11}{}\!^{-1} \circ f'{}\!_{12} \\
0 & 1
\end{pmatrix}
=
\begin{pmatrix}
f'{}\!_{11} & 0 \\
0 & h
\end{pmatrix}
\end{gather*}
with $h = f'{}\!_{22} - f'{}\!_{21} \circ f'{}\!_{11}{}\!^{-1} \circ f'{}\!_{12}$.
After modifying appropriately the direct sum decompositions of $\mathcal V'$ and $\mathcal W'$,
we may thus assume that $f'$ is the direct sum
\begin{equation*}
f' = f'{}\!_{11} \oplus h:\mathcal V'{}\!_1 \oplus \mathcal V'{}\!_2 \to \mathcal W'{}\!_1 \oplus \mathcal W'{}\!_2
\end{equation*}
with $f'{}\!_{11}$ an isomorphism of vector bundles of constant rank $m|n$.
If $\lambda$ is the partition whose diagram is rectangular with
$n+1$ rows and $m+1$ columns, then $S^\lambda(f_x) = 0$, so that by hypothesis
$S^\lambda(f) = 0$ and $S^\lambda(f') = 0$.
Thus by the formula \cite[1.8]{Del02} for the decomposition
of the Schur functor of a direct sum we have
\begin{equation*}
0 =
S^\lambda(f'{}\!_{11} \oplus h) = \bigoplus_{|\mu| + |\nu| = (m+1)(n+1)}
(S^\mu(f'{}\!_{11}) \otimes_{\mathcal O_{X'}} S^\nu(h))^{[\lambda:\mu,\nu]}
\end{equation*}
with the multiplicity $[\lambda:\mu,\nu]$ given by the Littlewood--Richardson rule.
If $[\mu] \subset [\lambda]$ and $|\nu| = 1$, then $[\lambda:\mu,\nu] = 1$
\cite[1.5.1]{Del02}, and $S^\mu(f'{}\!_{11})$ is an isomorphism of non-zero
constant rank super vector bundles.
Thus $h = 0$.
\end{proof}
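To illustrate the last step, in the simplest non-trivial case $m|n = 1|0$ we have
$\lambda = (2)$, and the displayed decomposition reads
\begin{equation*}
0 = S^{(2)}(f'{}\!_{11} \oplus h) = S^{(2)}(f'{}\!_{11}) \oplus
(f'{}\!_{11} \otimes_{\mathcal O_{X'}} h) \oplus S^{(2)}(h);
\end{equation*}
since $f'{}\!_{11}$ is an isomorphism of line bundles, the vanishing of the cross term
$f'{}\!_{11} \otimes_{\mathcal O_{X'}} h$ already forces $h = 0$.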
\begin{defn}
Let $X$ be a super $\mathbf Q$\nobreakdash-\hspace{0pt} scheme.
A functor $T$ with target $\Mod(X)$ will be called \emph{pointwise faithful}
if for every point $x$ of $X$,
the composite of passage to the fibre at $x$ with $T$ is faithful.
\end{defn}
Let $T:\mathcal C \to \Mod(X)$ be a tensor functor.
If $T$ is pointwise faithful, then every morphism $f$ in the image of $T$ satisfies
the hypothesis of Lemma~\ref{l:vecbunhom}.
If $T$ is pointwise faithful and $h$ is a non-zero morphism in $\mathcal C$,
then $T(h)$ is strongly regular in $\Mod(X)$ and even in $\MOD(X)$, because
by Lemma~\ref{l:vecbunhom} it factors locally on $X$ as a retraction of super vector bundles
followed by a section.
Thus any pointwise faithful $T$ factors through a tensor functor from $\mathcal C_\mathrm{fr}$
to $\Mod(X)$, which is also pointwise faithful.
When $\mathcal C$ is rigid, $T$ is pointwise faithful if and only if it sends every non-zero
morphism with target $\mathbf 1$ in $\mathcal C$ to an epimorphism in $\MOD(X)$.
For $\mathcal C$ super Tannakian, it follows from Lemma~\ref{l:supertensexact}
that $T$ is pointwise faithful if and only if the composite of the embedding of
$\Mod(X)$ into $\MOD(X)$ with $T$ is exact.
Let $\mathcal A$ and $\mathcal A'$ be cocomplete abelian categories, and $\mathcal A_0$ be an essentially
small full abelian subcategory of $\mathcal A$.
Suppose that every object of $\mathcal A_0$ is of finite presentation in $\mathcal A$,
and that every object of $\mathcal A$ is a filtered colimit of objects in $\mathcal A_0$.
Then the embedding $\mathcal A_0 \to \mathcal A$ is exact, and every morphism in $\mathcal A$ is a filtered colimit
of morphisms in $\mathcal A_0$.
The embedding is also dense, because if $A$ in $\mathcal A$ is the filtered colimit
$\colim_{\lambda \in \Lambda} A_\lambda$ with the $A_\lambda$ in $\mathcal A_0$,
then the canonical functor $\Lambda \to \mathcal A_0/A$ is cofinal.
Let $H_0:\mathcal A_0 \to \mathcal A'$ be a right exact functor.
The additive left Kan extension $H:\mathcal A \to \mathcal A'$
of $H_0$ along the embedding of $\mathcal A_0$ into $\mathcal A$ exists, with the universal natural
transformation from $H_0$ to the restriction of $H$ to $\mathcal A_0$ an isomorphism.
It is given by the coend formula
\begin{equation*}
H = \int^{A_0 \in \mathcal A_0} \mathcal A(A_0,-) \otimes_{\mathbf Z} H_0(A_0),
\end{equation*}
and it is preserved by any cocontinuous functor $\mathcal A' \to \mathcal A''$.
By the formula, $H$ commutes with filtered colimits,
and the canonical natural transformation from the additive left Kan extension of the embedding
$\mathcal A_0 \to \mathcal A$ along itself to the identity of $\mathcal A$ is an isomorphism (i.e.\ $\mathcal A_0$
is additively dense in $\mathcal A$).
Thus $H$ is cocontinuous, and it is exact if $H_0$ is.
It follows that restriction from $\mathcal A$ to $\mathcal A_0$ defines an equivalence from cocontinuous
functors $\mathcal A \to \mathcal A'$ to right exact functors $\mathcal A_0 \to \mathcal A'$, with quasi-inverse
given by additive left Kan extension.
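Explicitly, since $H$ commutes with filtered colimits, for $A = \colim_{\lambda \in \Lambda} A_\lambda$
with the $A_\lambda$ in $\mathcal A_0$ we have
\begin{equation*}
H(A) \cong \colim_{\lambda \in \Lambda} H_0(A_\lambda) \cong \colim_{(A_0,a) \in \mathcal A_0/A} H_0(A_0),
\end{equation*}
the second isomorphism by the cofinality of the canonical functor $\Lambda \to \mathcal A_0/A$
noted above.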
Suppose now that $\mathcal A$ and $\mathcal A'$ have tensor structures with the tensor products cocontinuous,
and that $\mathcal A_0$ is a full tensor subcategory of $\mathcal A$.
If $H:\mathcal A \to \mathcal A'$ is a cocontinuous functor with restriction $H_0$ to $\mathcal A_0$,
then by density of $\mathcal A_0 \to \mathcal A$, any tensor structure
on $H_0$ can be extended uniquely to a tensor structure on $H$.
Also if $H':\mathcal A \to \mathcal A'$ is a tensor functor, then
any natural isomorphism $H \xrightarrow{\sim} H'$ whose restriction to $\mathcal A_0$ is a tensor isomorphism
is itself a tensor isomorphism.
Thus restriction from $\mathcal A$ to $\mathcal A_0$ defines an equivalence from the groupoid of
cocontinuous tensor functors $\mathcal A \to \mathcal A'$ to the groupoid of right exact tensor
functors $\mathcal A_0 \to \mathcal A'$.
The conditions on $\mathcal A$ and $\mathcal A_0$ are satisfied in particular when
$\mathcal A = \MOD_{K,\varepsilon}(X)$ and
$\mathcal A_0 = \Mod_{K,\varepsilon}(X)$ for $X$ a super scheme over a field $k$ of characteristic $0$
and $(K,\varepsilon)$ a transitive affine groupoid over $X/k$, and hence by Theorem~\ref{t:Tannequiv}
when $\mathcal A = \widetilde{\mathcal C}$ and $\mathcal A_0 = (\widetilde{\mathcal C})_{\mathrm{rig}}$ for $\mathcal C$ pseudo-Tannakian.
\begin{thm}\label{t:TannhullModX}
Let $\mathcal C$ be a pseudo-Tannakian category, $U:\mathcal C \to \mathcal C'$ be a super Tannakian
hull of $\mathcal C$, and $X$ be a super $\mathbf Q$\nobreakdash-\hspace{0pt} scheme.
Then composition with $U$ defines an equivalence from the groupoid of pointwise faithful tensor
functors $\mathcal C' \to \Mod(X)$ to the groupoid of pointwise faithful tensor functors $\mathcal C \to \Mod(X)$.
\end{thm}
\begin{proof}
We may suppose by Theorem~\ref{t:Tannhull} that $\mathcal C' = (\widetilde{\mathcal C})_\mathrm{rig}$ and
\begin{equation*}
U:\mathcal C \to (\widetilde{\mathcal C})_\mathrm{rig}
\end{equation*}
is the canonical tensor functor.
Since the composite of the embedding of $\Mod(X)$ into $\MOD(X)$ with any pointwise
faithful tensor functor $(\widetilde{\mathcal C})_\mathrm{rig} \to \Mod(X)$ is regular and exact,
the required full faithfulness follows from Lemma~\ref{l:Tannhullff} with $\mathcal A = \MOD(X)$.
Let $T:\mathcal C \to \Mod(X)$ be a pointwise faithful tensor functor.
To prove the essential surjectivity, it is to be shown that $T$ factors up to tensor
isomorphism through $U$.
The composite $T_1:\mathcal C \to \MOD(X)$ of the embedding into $\MOD(X)$ with $T$ gives by
additive left Kan extension a cocontinuous tensor functor
\begin{equation*}
T_1{}\!^*:\widehat{\mathcal C} \to \MOD(X)
\end{equation*}
with $T_1{}\!^*h_-$ tensor isomorphic to $T_1$.
We show that $T_1{}\!^*$ sends isomorphisms up to torsion to isomorphisms.
It will follow that $T_1{}\!^*$ factors through the projection
\begin{equation*}
P:\widehat{\mathcal C} \to \widetilde{\mathcal C},
\end{equation*}
and composing with $h_-$ will give the required factorisation of $T$.
Let $j:M \to N$ be an isomorphism up to torsion in $\widehat{\mathcal C}$.
Then for any subobject $N_0$ of $N$, the morphism
$j^{-1}(N_0) \to N_0$ induced by $j$ is an isomorphism up to torsion.
If $N_0$ is of finite type, and we write $j^{-1}(N_0)$ as the filtered colimit
of its subobjects $M_0$ of finite type, then by Theorem~\ref{t:Tannequiv}
and the exactness and cocontinuity of $P$, there is an $M_0$ such that the
morphism $P(M_0) \to P(N_0)$ induced by $P(j)$ is an isomorphism.
Thus $M_0 \to N_0$ induced by $j$ is an isomorphism up to torsion.
It follows that $j$ may be written as a filtered colimit of isomorphisms
up to torsion between objects of finite type.
To prove that $T_1{}\!^*(j)$ is an isomorphism, we thus reduce by cocontinuity
of $T_1{}\!^*$ to the case where $M$ and $N$ are of finite type.
The restriction of $T_1{}\!^*$ to the pseudo-abelian hull of $\mathcal C$ in $\widehat{\mathcal C}$ is
pointwise faithful, and hence by Lemma~\ref{l:vecbunhom} applied to identities,
it factors through the full subcategory of $\MOD(X)$ of vector bundles over $X$
of constant super rank.
Thus by right exactness of $T_1{}\!^*$ and Lemma~\ref{l:vecbunhom},
$T_1{}\!^*(M)$ is a vector bundle over $X$ of constant super rank for $M$ of finite
presentation in $\widehat{\mathcal C}$.
Suppose that $M$ is of finite type in $\widehat{\mathcal C}$.
Then $M$ is the colimit of a filtered system $(M_\lambda)$ of objects $M_\lambda$
of finite presentation in $\widehat{\mathcal C}$ with transition morphisms epimorphisms.
Thus $T_1{}\!^*(M)$ again is a super vector bundle over $X$ of constant super rank
by cocontinuity of $T_1{}\!^*$.
Write $F_x:\Mod(X) \to \Mod(\kappa(x))$ and
$F_{1x}:\MOD(X) \to \MOD(\kappa(x))$ for the tensor functors defined by
passing to the fibre at the point $x$ of $X$.
Since $T$ is pointwise faithful, $F_xT$ is faithful.
Then we have a diagram
\begin{equation*}
\xymatrix{
\widehat{\mathcal C} \ar^-{P}[r] & \widetilde{\mathcal C} \ar^-{L_x}[r] & \MOD(\kappa(x)) \\
\mathcal C \ar^{h_-}[u] \ar^-{U}[r] & (\widetilde{\mathcal C})_\mathrm{rig} \ar[u] \ar^-{H_x}[r] &
\Mod(\kappa(x)) \ar[u]
}
\end{equation*}
where the middle and right vertical arrows are the embeddings, the left square commutes
by definition, $H_x$ is given up to tensor isomorphism by factoring
$F_xT$ through $U$ and is right exact by Lemma~\ref{l:supertensexact},
and $L_x$ is given up to tensor isomorphism by requiring that it be cocontinuous
and that the right square commute up to tensor isomorphism.
Then $F_{1x} T_1{}\!^*h_-$ is tensor isomorphic to the bottom right leg of the diagram,
so that $L_xPh_-$ and $F_{1x} T_1{}\!^*h_-$ are tensor isomorphic.
There thus exists a tensor isomorphism
\begin{equation*}
L_xP \xrightarrow{\sim} F_{1x} T_1{}\!^*,
\end{equation*}
because $L_xP$ and $F_{1x} T_1{}\!^*$ are cocontinuous.
Now let $j:M \to N$ be an isomorphism up to torsion in $\widehat{\mathcal C}$ with $M$ and $N$
of finite type.
Then $L_x(P(j))$ and hence the fibre $F_{1x}(T_1{}\!^*(j))$ of $T_1{}\!^*(j)$ above $x$
is an isomorphism for every point $x$ of $X$.
Since $T_1{}\!^*(M)$ and $T_1{}\!^*(N)$ are super vector bundles over $X$, it follows that
$T_1{}\!^*(j)$ is an isomorphism, as required.
\end{proof}
Let $k$ be a field of characteristic $0$ and $(G,\varepsilon)$ be an affine super
$k$\nobreakdash-\hspace{0pt} group with involution.
Then the right action of $G \times_k G$ on $G$ for which $(g_1,g_2)$ sends $g$ to $g_1{}\!^{-1}gg_2$
defines a structure of $(G \times_k G,(\varepsilon,\varepsilon))$\nobreakdash-\hspace{0pt} module on $k[G]$.
For $i = 1,2$, pullback along the $i$th projection defines a $k$\nobreakdash-\hspace{0pt} tensor functor from
$(G,\varepsilon)$\nobreakdash-\hspace{0pt} modules to $(G \times_k G,(\varepsilon,\varepsilon))$\nobreakdash-\hspace{0pt} modules.
If $V$ is a $(G,\varepsilon)$\nobreakdash-\hspace{0pt} module, the action of $G$ on $V$ is a morphism
\begin{equation}\label{e:VGGact}
\pr_2{}\!^*V \to \pr_1{}\!^*V \otimes_k k[G]
\end{equation}
of $(G \times_k G,(\varepsilon,\varepsilon))$\nobreakdash-\hspace{0pt} modules.
Evaluation at the identity of $G$ defines a $k$\nobreakdash-\hspace{0pt} linear map left inverse
to \eqref{e:VGGact}.
If $G_i$ denotes the normal super $k$\nobreakdash-\hspace{0pt} subgroup of $G \times_k G$ given by embedding $G$ as
the $i$th factor, then the restriction of the left inverse to the
$(G \times_k G,(\varepsilon,\varepsilon))$\nobreakdash-\hspace{0pt} submodule
$(\pr_1{}\!^*V \otimes_k k[G])^{G_1}$ of invariants under $G_1$ is injective,
because a $G_1$\nobreakdash-\hspace{0pt} invariant section of $\pr_1{}\!^*V \otimes_k \mathcal O_G$ above $G$ is determined
by its value at the identity.
Thus \eqref{e:VGGact} factors through an isomorphism
\begin{equation}\label{e:VGGactinv}
\pr_2{}\!^*V \xrightarrow{\sim} (\pr_1{}\!^*V \otimes_k k[G])^{G_1}
\end{equation}
of $(G \times_k G,(\varepsilon,\varepsilon))$\nobreakdash-\hspace{0pt} modules.
The embedding of $\mathbf Z/2$ into $G$ that sends $1$ to $\varepsilon$ defines a super $k$\nobreakdash-\hspace{0pt} subgroup
with involution
\begin{equation}\label{e:Gprimedef}
(G',\varepsilon') = (G \times_k (\mathbf Z/2),(\varepsilon,1))
\end{equation}
of $(G \times_k G,(\varepsilon,\varepsilon))$.
Restriction from $G \times_k G$ to $G'$ then defines a structure of $(G',\varepsilon')$\nobreakdash-\hspace{0pt} module
on $k[G]$.
We may identify $\MOD_{G,\varepsilon}(k)$ with the full subcategory
of $\MOD_{G',\varepsilon'}(k)$ consisting of those
$(G',\varepsilon')$\nobreakdash-\hspace{0pt} modules on which the factor $\mathbf Z/2$ of $G'$ acts trivially,
and the category $\MOD_{\mathbf Z/2,1}(k)$ of super $k$\nobreakdash-\hspace{0pt} vector spaces with the
$(G',\varepsilon')$\nobreakdash-\hspace{0pt} modules on which $G$ acts trivially.
If we write $\Omega$ for the forgetful functor from $(G,\varepsilon)$\nobreakdash-\hspace{0pt} modules
to super $k$\nobreakdash-\hspace{0pt} vector spaces, then for any $(G,\varepsilon)$\nobreakdash-\hspace{0pt} module $V$ the
action of $G$ on $V$ is by \eqref{e:VGGact} a morphism
\begin{equation}\label{e:VGprimeact}
\Omega(V) \to V \otimes_k k[G]
\end{equation}
of $(G',\varepsilon')$\nobreakdash-\hspace{0pt} modules, and by \eqref{e:VGGactinv} it factors through an isomorphism
\begin{equation}\label{e:VGprimeactinv}
\Omega(V) \xrightarrow{\sim} (V \otimes_k k[G])^G
\end{equation}
of $(G',\varepsilon')$\nobreakdash-\hspace{0pt} modules.
The central embedding of the factor $\mathbf Z/2$ of $G'$ defines a $(\mathbf Z/2)$\nobreakdash-\hspace{0pt} grading
on $\MOD_{G',\varepsilon'}(k)$, so that $\MOD_{G',\varepsilon'}(k)$ is $k$\nobreakdash-\hspace{0pt} tensor equivalent
to the $k$\nobreakdash-\hspace{0pt} tensor category of $(\mathbf Z/2)$\nobreakdash-\hspace{0pt} graded objects of $\MOD_{G,\varepsilon}(k)$
with symmetry given by the Koszul rule.
Let $W$ be a $(G',\varepsilon')$\nobreakdash-\hspace{0pt} module.
For $V$ a representation of $(G,\varepsilon)$ we have
a morphism
\begin{equation}\label{e:WVG}
V^\vee \otimes_k (V \otimes_k W)^G \to W
\end{equation}
of $(G',\varepsilon')$\nobreakdash-\hspace{0pt} modules,
natural in $W$ and extranatural in $V$, defined by the embedding of the invariants under $G$
and the counit for $V^\vee$.
Thus we have a morphism
\begin{equation}\label{e:intWVG}
\int^{V \in \Mod_{G,\varepsilon}(k)} V^\vee \otimes_k (V \otimes_k W)^G \to W
\end{equation}
of $(G',\varepsilon')$\nobreakdash-\hspace{0pt} modules, natural in $W$.
If $W$ lies in $\MOD_{G,\varepsilon}(k)$ then $(V \otimes_k W)^G$ is the trivial
$(G',\varepsilon')$\nobreakdash-\hspace{0pt} module given by the $k$\nobreakdash-\hspace{0pt} vector space $\Hom_G(V^\vee,W)$,
and \eqref{e:WVG} is
\begin{equation*}
V^\vee \otimes_k \Hom_G(V^\vee,W) \to W
\end{equation*}
defined by evaluation.
For $W$ in $\Mod_{G,\varepsilon}(k)$, \eqref{e:intWVG} is thus an isomorphism, with
inverse the composite of the coprojection at $V = W^\vee$ and
\begin{equation*}
W \to W \otimes_k \Hom_G(W,W)
\end{equation*}
defined by $1_W$.
Since $ \Hom_G(V^\vee,-)$ commutes with filtered colimits, \eqref{e:intWVG}
is an isomorphism for any $W$ in $\MOD_{G,\varepsilon}(k)$.
It follows that \eqref{e:intWVG} is an isomorphism for $W$ the tensor product
of a $(G,\varepsilon)$\nobreakdash-\hspace{0pt} module with $k^{0|1}$, because the factor $k^{0|1}$ may
be taken outside $(-)^G$.
Hence \eqref{e:intWVG} is an isomorphism for an arbitrary $(G',\varepsilon')$\nobreakdash-\hspace{0pt} module $W$.
Taking $W = k[G]$ and using \eqref{e:VGprimeactinv} gives an isomorphism
\begin{equation}\label{e:Vomegaint}
\int^{V \in \Mod_{G,\varepsilon}(k)} V^\vee \otimes_k \Omega(V) \xrightarrow{\sim} k[G]
\end{equation}
of $(G',\varepsilon')$\nobreakdash-\hspace{0pt} modules, with component at $V$ obtained from \eqref{e:VGprimeact} by dualising.
Let $\mathcal C$ be an essentially small rigid tensor category, $X$ be a super $\mathbf Q$\nobreakdash-\hspace{0pt} scheme,
and $T_1$ and $T_2$ be tensor functors $\mathcal C \to \Mod(X)$.
It can be seen as follows that the functor on super schemes over $X$ that assigns to
$p:X' \to X$ the set $\Iso^\otimes(p^*T_1,p^*T_2)$ of tensor
isomorphisms from $p^*T_1$ to $p^*T_2$ is represented by an affine super scheme
\begin{equation*}
\underline{\Iso}^\otimes(T_1,T_2) = \Spec(\mathcal R)
\end{equation*}
over $X$.
The object $\mathcal R$ in $\MOD(X)$ is given by
\begin{equation}\label{e:Rint}
\mathcal R = \int^{C \in \mathcal C} T_2(C)^\vee \otimes_{\mathcal O_X} T_1(C).
\end{equation}
Its structure of commutative algebra is given by the morphisms
\begin{equation*}
(T_2(C)^\vee \otimes_{\mathcal O_X} T_1(C)) \otimes_{\mathcal O_X}
(T_2(C')^\vee \otimes_{\mathcal O_X} T_1(C')) \to
(T_2(C \otimes C')^\vee \otimes_{\mathcal O_X} T_1(C \otimes C'))
\end{equation*}
and $\mathcal O_X \to T_2(\mathbf 1)^\vee \otimes_{\mathcal O_X} T_1(\mathbf 1)$ defined using the tensor structures
of $T_1$ and $T_2$.
We have bijections
\begin{equation*}
\Hom_{\mathcal O_{X'}}(p^*\mathcal R,\mathcal O_{X'}) \xrightarrow{\sim}
\int_{C \in \mathcal C} \Hom_{\mathcal O_{X'}}(p^*T_1(C),p^*T_2(C))
\xrightarrow{\sim} \Nat(p^*T_1,p^*T_2)
\end{equation*}
natural in $p:X' \to X$, which when $p$ is the structural morphism $\pi$ of $\Spec(\mathcal R)$
send the canonical morphism $\pi^*\mathcal R \to \mathcal O_{\Spec(\mathcal R)}$ to
\begin{equation*}
\alpha:\pi^*T_1 \to \pi^*T_2
\end{equation*}
where $\alpha_C:\pi^*T_1(C) \to \pi^*T_2(C)$ corresponds to $T_1(C) \to \mathcal R \otimes_{\mathcal O_X} T_2(C)$
dual to the coprojection at $C$ of the coend defining $\mathcal R$.
Further $p^*\mathcal R \to \mathcal O_{X'}$ is a homomorphism of algebras if and only if the
corresponding natural transformation $p^*T_1 \to p^*T_2$ is compatible with the
tensor structures.
Thus we have a bijection
\begin{equation*}
\Hom_X(X',\Spec(\mathcal R)) \xrightarrow{\sim} \Iso^\otimes(p^*T_1,p^*T_2)
\end{equation*}
natural in $p:X' \to X$, which gives the required representation, with universal element $\alpha$.
The affine super scheme $\underline{\Iso}^\otimes(T_1,T_2)$ over $X$ is functorial in $T_1$ and $T_2$.
It is clear from the definition that we have a canonical isomorphism
\begin{equation}\label{e:Isopull}
\underline{\Iso}^\otimes(p^*T_1,p^*T_2) \xrightarrow{\sim} p^*\underline{\Iso}^\otimes(T_1,T_2)
\end{equation}
over $X'$ for every $p:X' \to X$.
If $U:\mathcal C \to \mathcal C'$ is a tensor functor with $\mathcal C'$ essentially small and rigid such that
$T_i = T'{}\!_iU$ for $i = 1,2$,
then composition with $U$ defines a morphism
\begin{equation}\label{e:IsoTannhull}
\underline{\Iso}^\otimes(T'{}\!_1,T'{}\!_2) \to \underline{\Iso}^\otimes(T_1,T_2)
\end{equation}
over $X$, which is an isomorphism if
$\Iso^\otimes(p^*T'{}\!_1,p^*T'{}\!_2) \to \Iso^\otimes(p^*T_1,p^*T_2)$ is bijective for every $p$.
Suppose that $X$ is non-empty and that $T_1$ and $T_2$ are pointwise faithful.
Then $\mathcal C$ is integral and pseudo-Tannakian, and $T_1$ and $T_2$ factor through $\mathcal C_\mathrm{fr}$.
Thus $T_1$ and $T_2$ induce homomorphisms from $\kappa(\mathcal C)$ to $H^0(X,\mathcal O_X)$.
The equaliser of the corresponding morphisms $X \to \Spec(\kappa(\mathcal C))$
is a closed super subscheme $Y$ of $X$,
and $\Iso^\otimes(p^*T_1,p^*T_2)$ is empty unless $p:X' \to X$ factors through $Y$.
Thus $\underline{\Iso}^\otimes(T_1,T_2)$ may be regarded as a scheme over $Y$.
The following theorem, together with \eqref{e:Isopull} with $p$ the embedding of $Y$,
shows that $\underline{\Iso}^\otimes(T_1,T_2)$ is faithfully
flat over $Y$.
\begin{thm}\label{t:Isofthfl}
Let $\mathcal C$ be an essentially small rigid tensor category, $X$ be a non-empty
super $\mathbf Q$\nobreakdash-\hspace{0pt} scheme,
and $T_1$ and $T_2$ be pointwise faithful tensor functors from $\mathcal C$ to $\Mod(X)$
which induce the same homomorphism from $\kappa(\mathcal C)$ to $H^0(X,\mathcal O_X)$.
Then $\underline{\Iso}^\otimes(T_1,T_2)$ is faithfully flat over $X$.
\end{thm}
\begin{proof}
With $T'{}\!_i$ the factorisation
of $T_i$ through $\mathcal C_\mathrm{fr}$, \eqref{e:IsoTannhull} is an isomorphism.
After replacing $\mathcal C$ by $\mathcal C_\mathrm{fr}$, we may thus suppose that $\End_{\mathcal C}(\mathbf 1)$
is a field $k$ of characteristic $0$ and that $\mathcal C$ is a $k$\nobreakdash-\hspace{0pt} tensor
category with $\kappa(\mathcal C) = k$.
The homomorphism from $k$ to $H^0(X,\mathcal O_X)$ induced by the $T_i$
then defines a structure of super $k$\nobreakdash-\hspace{0pt} scheme on $X$ such that the $T_i$ are
$k$\nobreakdash-\hspace{0pt} tensor functors.
Let $\overline{k}$ be an algebraic closure of $k$.
By \eqref{e:Isopull} with $p$ the projection $X_{\overline{k}} \to X$, we may
after replacing $X$ by $X_{\overline{k}}$ suppose that the structure of
super $k$\nobreakdash-\hspace{0pt} scheme on $X$ extends to a structure of $\overline{k}$\nobreakdash-\hspace{0pt} scheme.
Then with
\begin{equation*}
T'{}\!_i:\overline{k} \otimes_k \mathcal C \to \Mod(X)
\end{equation*}
the $\overline{k}$\nobreakdash-\hspace{0pt} tensor functor through which $T_i$ factors, \eqref{e:IsoTannhull}
is an isomorphism.
Since $\kappa(\mathcal C) = k$, it follows from Lemma~\ref{l:ext} that
$\overline{k} \otimes_k \mathcal C$ is integral with
$\kappa(\overline{k} \otimes_k \mathcal C) = \overline{k}$ and that the $T'{}\!_i$
are pointwise faithful.
Replacing $k$ by $\overline{k}$ and $\mathcal C$ by $\overline{k} \otimes_k \mathcal C$,
we may thus further suppose that $k$ is algebraically closed.
Let $U:\mathcal C \to \mathcal C'$ be a super Tannakian hull of $\mathcal C$.
Then by Theorem~\ref{t:TannhullModX}, \eqref{e:IsoTannhull} is an isomorphism with
$T_i$ replaced by a tensor isomorphic functor $T'{}\!_iU$.
Since $\kappa(\mathcal C') = k$ by Corollary~\ref{c:Tannhulleq}, we may after
replacing $\mathcal C$ by $\mathcal C'$ suppose that $\mathcal C$ is super Tannakian.
Thus by Corollary~\ref{c:Tannequivab} we may suppose finally that
\begin{equation*}
\mathcal C = \Mod_{G,\varepsilon}(k)
\end{equation*}
for an affine super $k$\nobreakdash-\hspace{0pt} group with involution $(G,\varepsilon)$.
Write $T_0$ for the forgetful functor
from $\Mod_{G,\varepsilon}(k)$ to $\Mod(k)$ followed by pullback along
the structural morphism of $X$.
Consider first the case where $T_1 = T_0$.
We may extend $T_2$ to a cocontinuous $k$\nobreakdash-\hspace{0pt} tensor functor from
$\MOD_{G,\varepsilon}(k)$ to $\MOD(X)$, because it is pointwise faithful and
hence as a functor to $\MOD(X)$ right exact.
With $(G',\varepsilon')$ as in \eqref{e:Gprimedef}, $\MOD_{G',\varepsilon'}(k)$
is the $k$\nobreakdash-\hspace{0pt} tensor category of $\mathbf Z/2$\nobreakdash-\hspace{0pt} graded objects of $\MOD_{G,\varepsilon}(k)$.
Thus $T_2$ extends further to a cocontinuous $k$\nobreakdash-\hspace{0pt} tensor functor
\begin{equation*}
T:\MOD_{G',\varepsilon'}(k) \to \MOD(X)
\end{equation*}
which sends $k^{0|1}$ to $\mathcal O_X{}\!^{0|1}$.
It follows from \eqref{e:Vomegaint} that $\mathcal R$ in \eqref{e:Rint} is isomorphic
to $T(k[G])$, because the $k$\nobreakdash-\hspace{0pt} tensor functor $V \mapsto T(\Omega(V))$ from
$\Mod_{G,\varepsilon}(k)$ to $\Mod(X)$ is tensor isomorphic to $T_0$.
Since the unit $k \to k[G]$ is non-zero and the functor from $\Mod_{G',\varepsilon'}(k)$
to $\Mod(X)$ induced by $T$ is pointwise faithful, writing $k[G]$ as the filtered colimit
of its subobjects in $\Mod_{G',\varepsilon'}(k)$ shows that the $\mathcal O_X$\nobreakdash-\hspace{0pt} module $\mathcal R$
is flat with non-zero fibres, as required.
The case where $T_1$ is arbitrary reduces to that where $T_1 = T_0$ by \eqref{e:Isopull}
with $p$ the faithfully flat structural morphism of $\underline{\Iso}^\otimes(T_0,T_1)$.
\end{proof}
Let $k$ be an algebraically closed field of characteristic $0$
and $G$ be an affine super $k$\nobreakdash-\hspace{0pt} group.
It can be seen as follows that if $Y$ is a super $G$\nobreakdash-\hspace{0pt} scheme such that $Y_{k'}$
is isomorphic to $G_{k'}$ acting on itself by left translation for some extension $k'$ of $k$,
then $Y$ has a $k$\nobreakdash-\hspace{0pt} point, and hence is isomorphic to $G$ acting on itself by left translation.
When $G$ is of finite type this is clear.
Replacing $G$ and $Y$ by $G_{\mathrm{red}}$ and $Y_{\mathrm{red}}$,
we may suppose that $G$ is an affine $k$\nobreakdash-\hspace{0pt} group.
Write $G$ as the filtered limit of its affine $k$\nobreakdash-\hspace{0pt} quotients $G_\lambda = G/H_\lambda$ of finite type.
If $Y = \Spec(R)$ and $Y_\lambda = \Spec(R^{H_\lambda})$, then $(Y_\lambda)_{k'}$ is
$(G_\lambda)_{k'}$\nobreakdash-\hspace{0pt} isomorphic to $(G_\lambda)_{k'}$.
Thus $Y_\lambda$ is $G_\lambda$\nobreakdash-\hspace{0pt} isomorphic to $G_\lambda$, so that
$G_\lambda(k)$ acts simply transitively on $Y_\lambda(k)$.
Since $Y = \lim_\lambda Y_\lambda$, it follows from \cite[Lemma~1.1.1]{O10} that $Y$ has a $k$\nobreakdash-\hspace{0pt} point.
\begin{cor}\label{c:fibfununique}
Let $\mathcal C$ be an essentially small rigid tensor category and $k$ be an algebraically closed
field of characteristic $0$.
Then any two faithful tensor functors from $\mathcal C$ to $\Mod(k)$
which induce the same homomorphism from $\kappa(\mathcal C)$ to $k$ are tensor isomorphic.
\end{cor}
\begin{proof}
Let $T_1$ and $T_2$ be faithful tensor functors from $\mathcal C$ to $\Mod(k)$
which induce the same homomorphism from $\kappa(\mathcal C)$ to $k$.
Write $G$ for the affine super $k$\nobreakdash-\hspace{0pt} group $\underline{\Iso}^\otimes(T_2,T_2)$, and
$Y$ for the super $G$\nobreakdash-\hspace{0pt} scheme $\underline{\Iso}^\otimes(T_1,T_2)$.
By Theorem~\ref{t:Isofthfl}, $Y$ is non-empty, and hence has a $k'$\nobreakdash-\hspace{0pt} point $y$
for some extension $k'$ of $k$.
Then $y$ defines a tensor isomorphism from $p^*T_1$ to $p^*T_2$ with $p^*$ extension
of scalars from $k$ to $k'$, and hence by \eqref{e:Isopull} a $G_{k'}$\nobreakdash-\hspace{0pt} isomorphism from
$G_{k'}$ to $Y_{k'}$.
By the above, $Y$ has thus a $k$\nobreakdash-\hspace{0pt} point.
\end{proof}
Let $\mathcal C$ be an essentially small rigid tensor category,
$X$ be a non-empty super $\mathbf Q$\nobreakdash-\hspace{0pt} scheme,
and $T:\mathcal C \to \Mod(X)$ be a pointwise faithful tensor functor.
Then if $X$ is regarded as a super $\kappa(\mathcal C)$\nobreakdash-\hspace{0pt} scheme using the homomorphism
$\kappa(\mathcal C) \to H^0(X,\mathcal O_X)$ defined by $T$,
we have by Theorem~\ref{t:Isofthfl} a transitive affine groupoid
\begin{equation*}
\underline{\Iso}^\otimes(T) = \underline{\Iso}^\otimes(\pr_1{}\!^*T,\pr_2{}\!^*T)
\end{equation*}
over $X/\kappa(\mathcal C)$, with the evident identities and composition,
and $C \mapsto \iota_{T(C)}$ defines an involution $\varepsilon_T$ of
$\underline{\Iso}^\otimes(T)$.
We have a canonical factorisation
\begin{equation*}
\mathcal C \xrightarrow{\Phi_T} \Mod_{\underline{\Iso}^\otimes(T),\varepsilon_T}(X) \to \Mod(X)
\end{equation*}
of $T$, where $\Phi_T$ is defined by the universal element for $\underline{\Iso}^\otimes(T)$
and the second arrow is the forgetful functor.
Given $U:\mathcal C\to \mathcal C'$ and $T'$ with $T = T'U$, composition with $U$ defines as in
\eqref{e:IsoTannhull} a morphism $\underline{\Iso}^\otimes(T') \to \underline{\Iso}^\otimes(T)$, and the square
\begin{equation}\label{e:ModIsosquare}
\begin{gathered}
\xymatrix{
\Mod_{\underline{\Iso}^\otimes(T)}(X) \ar[r] & \Mod_{\underline{\Iso}^\otimes(T')}(X) \\
\mathcal C \ar^{\Phi_T}[u] \ar^{U}[r] & \mathcal C' \ar_{\Phi_{T'}}[u]
}
\end{gathered}
\end{equation}
commutes.
For $p:X' \to X$ with $X'$ non-empty, we have
by \eqref{e:Isopull} a canonical isomorphism from $\underline{\Iso}^\otimes(p^*T)$ to the pullback of
$\underline{\Iso}^\otimes(T)$ along $p$.
Thus $\Phi_{p^*T}$ factors as
\begin{eqnarray}\label{e:ModIsofac}
\mathcal C \xrightarrow{\Phi_T} \Mod_{\underline{\Iso}^\otimes(T)}(X) \to \Mod_{\underline{\Iso}^\otimes(p^*T)}(X')
\end{eqnarray}
with the second arrow an equivalence.
Let $S$ be a super $\mathbf Q$\nobreakdash-\hspace{0pt} scheme and $(K,\varepsilon)$ be
a super groupoid with involution over $S$.
Write
\begin{equation*}
\omega_{K,\varepsilon}:\Mod_{K,\varepsilon}(S) \to \Mod(S)
\end{equation*}
for the forgetful tensor functor.
The action of $K$ defines a morphism
\begin{equation*}
\theta_{K,\varepsilon}:K \to \underline{\Iso}^\otimes(\omega_{K,\varepsilon})
\end{equation*}
of groupoids over $S$, compatible with the involutions.
Then $\Phi_{\omega_{K,\varepsilon}}$ followed by pullback along $\theta_{K,\varepsilon}$ is the
identity of $\Mod_{K,\varepsilon}(S)$.
Thus $\Phi_{\omega_{K,\varepsilon}}$ is an equivalence when $\theta_{K,\varepsilon}$
is an isomorphism.
Let $k$ be a commutative ring, $k'$ be a commutative $k$\nobreakdash-\hspace{0pt} algebra, and $\mathcal A$ be a cocomplete abelian
$k$\nobreakdash-\hspace{0pt} tensor category with cocontinuous tensor product.
Then we have a $k$\nobreakdash-\hspace{0pt} tensor functor
\begin{equation}\label{e:tenscatext}
k' \otimes_k -:\mathcal A \to \MOD_{\mathcal A}(k' \otimes_k \mathbf 1).
\end{equation}
If $\mathcal A'$ is a cocomplete abelian $k'$\nobreakdash-\hspace{0pt} tensor category with cocontinuous tensor product,
the canonical $k$\nobreakdash-\hspace{0pt} homomorphism from $k'$ to $\End_{\mathcal A'}(\mathbf 1)$ defines a morphism of algebras
from $k' \otimes_k \mathbf 1$ to $\mathbf 1$ in $\mathcal A'$.
Composition with \eqref{e:tenscatext} then defines an equivalence from the category of
cocontinuous $k'$\nobreakdash-\hspace{0pt} tensor functors $\MOD_{\mathcal A}(k' \otimes_k \mathbf 1) \to \mathcal A'$ to the category of
cocontinuous $k$\nobreakdash-\hspace{0pt} tensor functors $\mathcal A \to \mathcal A'$, with a quasi-inverse sending $H$ to
$H(-) \otimes_{k' \otimes_k \mathbf 1} \mathbf 1$.
\begin{lem}\label{l:thetaPhi}
Let $S$ be a non-empty super scheme over a field $k$ of characteristic $0$ and $(K,\varepsilon)$ be a
transitive affine super groupoid over $S/k$.
Then $\theta_{K,\varepsilon}$ is an isomorphism and $\Phi_{\omega_{K,\varepsilon}}$ is an equivalence.
\end{lem}
\begin{proof}
It is enough to prove that $\theta_{K,\varepsilon}$ is an isomorphism.
Suppose first that $S = \Spec(k)$.
Then $K$ is an affine super $k$\nobreakdash-\hspace{0pt} group $G$, and $\theta_{G,\varepsilon}$ is the morphism
associated to the isomorphism of super $k$\nobreakdash-\hspace{0pt} algebras underlying \eqref{e:Vomegaint}.
To prove the general case, let $k'$ be an extension of $k$ for which $S$ has a
$k'$\nobreakdash-\hspace{0pt} point $s$ over $k$.
Then the fibre of $(K,\varepsilon)$ above the $k'$\nobreakdash-\hspace{0pt} point $(s,s)$ of $S$
is an affine super $k'$\nobreakdash-\hspace{0pt} group $(K_{s,s},\varepsilon_s)$.
If we take $X = S$ in \eqref{e:MODAMODK}, then \eqref{e:tenscatext} is extension of scalars from
$\MOD_{K,\varepsilon}(S)$ to $\MOD_{K_{k'},\varepsilon_{k'}}(S_{k'})$.
Composing with the equivalence from $\MOD_{K_{k'},\varepsilon_{k'}}(S_{k'})$ to
$\MOD_{K_{s,s},\varepsilon_s}(k')$ defined by taking the fibre at the $k'$\nobreakdash-\hspace{0pt} point
of $S_{k'}$ over $k'$ induced by $s$ shows that for $\mathcal A'$ as above,
composition with restriction
\begin{equation*}
\MOD_{K,\varepsilon}(S) \to \MOD_{K_{s,s},\varepsilon_s}(k')
\end{equation*}
from $K$ to $K_{s,s}$ defines an equivalence from $k'$\nobreakdash-\hspace{0pt} tensor functors from
$\MOD_{K_{s,s},\varepsilon_s}(k')$ to $\mathcal A'$ to $k$\nobreakdash-\hspace{0pt} tensor functors from
$\MOD_{K,\varepsilon}(S)$ to $\mathcal A'$.
Restricting to categories of representations then shows that the same holds with
$\MOD$ replaced by $\Mod$.
By taking $\mathcal A'$ of the form $\MOD(X')$ for super $k'$\nobreakdash-\hspace{0pt} schemes $X'$, it follows that
the morphism
\begin{equation}\label{e:IsoomegaKfib}
\underline{\Iso}^\otimes(\omega_{K_{s,s},\varepsilon_s}) \to
\underline{\Iso}^\otimes(\omega_{K,\varepsilon})
\end{equation}
defined by the embedding $K_{s,s} \to K$ is an isomorphism
onto $\underline{\Iso}^\otimes(\omega_{K,\varepsilon})_{s,s}$.
We have a commutative square
\begin{equation*}
\begin{gathered}
\xymatrix{
K_{s,s} \ar[d] \ar^-{\theta_{K_{s,s},\varepsilon_s}}[r] &
\underline{\Iso}^\otimes(\omega_{K_{s,s},\varepsilon_s}) \ar[d] \\
K \ar^-{\theta_{K,\varepsilon}}[r] & \underline{\Iso}^\otimes(\omega_{K,\varepsilon})
}
\end{gathered}
\end{equation*}
where the left arrow is the embedding and the right arrow is \eqref{e:IsoomegaKfib}.
By the case where $S = \Spec(k)$ with $k'$ for $k$, the top arrow is an isomorphism.
Thus the bottom arrow is an isomorphism
because it is an isomorphism on fibres over $(s,s)$ and $K$ and
$\underline{\Iso}^\otimes(\omega_{K,\varepsilon})$ are transitive affine over $S/k$.
\end{proof}
\begin{thm}
Let $\mathcal C$ be a pseudo-Tannakian category, $X$ be a non-empty super $\mathbf Q$\nobreakdash-\hspace{0pt} scheme,
and $T:\mathcal C \to \Mod(X)$ be a pointwise faithful tensor functor.
Then the canonical tensor functor $\mathcal C \to \Mod_{\underline{\Iso}^\otimes(T),\varepsilon_T}(X)$
is a super Tannakian hull of $\mathcal C$.
\end{thm}
\begin{proof}
Let $U:\mathcal C \to \mathcal C'$ be a super Tannakian hull of $\mathcal C$.
Replacing $T$ by a tensor isomorphic functor, we may assume by Theorem~\ref{t:TannhullModX}
that $T$ factors as $T = T'U$.
Then by Theorem~\ref{t:TannhullModX}, $U$ induces an isomorphism from $\underline{\Iso}^\otimes(T')$ to
$\underline{\Iso}^\otimes(T)$, so that the top arrow of \eqref{e:ModIsosquare} is an isomorphism.
Replacing $T$ by $T'$, it is thus enough to show that when $\mathcal C$ is super Tannakian
$\Phi_T$ is an equivalence.
By Corollary~\ref{c:Tannequivab}, we may assume that $\mathcal C = \Mod_{K,\varepsilon}(k')$
for an algebraically closed extension $k'$ of a field $k$ of characteristic $0$
and transitive affine groupoid with involution $(K,\varepsilon)$ over $k'/k$.
We may suppose after replacing $k'$ by an extension that $X$ has a $k'$\nobreakdash-\hspace{0pt} point $x$ over $k$.
Then by \eqref{e:ModIsofac} with $p:X' \to X$ the inclusion of $x$ we may after replacing $X$
by $X'$ assume that $X = \Spec(k')$.
By Corollary~\ref{c:fibfununique} we may further assume that $T$ is the forgetful functor
$\omega_{K,\varepsilon}$.
That $\Phi_T$ is an equivalence then follows from Lemma~\ref{l:thetaPhi} with $S = \Spec(k')$.
\end{proof}
\section{Introduction}
In the seminal work on two-dimensional $N=(2,2)$ supersymmetric gauged linear sigma models \cite{Witten:1993yc}, Witten has introduced a powerful machinery to analyze both geometric and non-geometric phases of supersymmetric worldsheet theories. In particular the study of Abelian gauged linear sigma models --- which physically realize toric varieties in terms of $\mathbb{C}^*$~quotients in geometric invariant theory --- led to far-reaching developments in our understanding of the phase structure of string compactifications. For instance, gauged linear sigma model techniques continuously connect supersymmetric Landau--Ginzburg models of rational conformal field theories to semi-classical non-linear sigma models with geometric Calabi--Yau target spaces \cite{Witten:1993yc}. The identification of explicit rational conformal field theories in the moduli space of gauged linear sigma models has historically been an important ingredient in finding Calabi--Yau mirror pairs \cite{Greene:1990ud}, which resulted in the remarkable combinatorial construction of mirror manifolds for complete intersection Calabi--Yau manifolds in toric varieties \cite{Batyrev:1994hm,Batyrev:1994pg,Morrison:1995yh,Hori:2000kt} and led to a mirror theorem for these classes of mirror pairs \cite{MR1408320,MR1621573}.
While the use of Abelian gauged linear sigma models in string theory has become textbook knowledge,\footnote{See for example refs.~\cite{Greene:1996cy,Cox:2000vi,Hori:2003ic} and references therein.} the role of $N=(2,2)$ supersymmetric non-Abelian gauged linear sigma models for string compactifications is less systematically explored. Such non-Abelian gauge theories are of recent interest in their own right \cite{Doroud:2012xw,Benini:2012ui,Benini:2014mia,Gerchkovitz:2014gta}, and they allow us to describe a much broader class of Calabi--Yau geometries such as certain non-complete intersections in toric varieties \cite{Witten:1993yc,Witten:1993xi,Lerche:2001vj,Hori:2006dk,Donagi:2007hi,Hori:2011pd,Jockers:2012zr,Hori:2013gga,Sharpe:2012ji}.
Compared to their Abelian cousins, the non-Abelian gauged linear sigma models exhibit further interesting properties, such as strongly coupled phases, which sometimes can be mapped by duality to weakly coupled semi-classical phases \cite{Hori:2006dk,Hori:2011pd,Jockers:2012zr,Hori:2013gga}. A convincing argument to justify such dualities is given by identifying two sphere partition functions of such dual gauge theories \cite{Doroud:2012xw}.\footnote{For the Calabi--Yau phases examined in refs.~\cite{MR1775415,Hori:2006dk}, this approach is demonstrated in ref.~\cite{Jockers:2012dk}, where the two sphere partition function of the associated strongly coupled two-dimensional gauge theory is matched to its weakly coupled dual and semi-classical Calabi--Yau phase.} The two sphere partition function encodes the Zamolodchikov metric of the $N=(2,2)$ superconformal infrared fixed point of the renormalization group flow \cite{Doroud:2012xw,Benini:2012ui,Jockers:2012dk,Gomis:2012wy,Halverson:2013eua,Gerchkovitz:2014gta}, which is exact in the worldsheet coupling~$\alpha'$. As a result the two sphere partition function encodes both perturbative and non-perturbative worldsheet corrections, which in the semi-classical large volume regime are respectively identified with certain characteristic classes and genus zero Gromov--Witten invariants \cite{Jockers:2012dk,Halverson:2013qca}.
As these two-dimensional theories have four supercharges, these infrared strong-weak coupling dualities can be viewed as the two-dimensional analog of $N=1$ Seiberg dualities in four dimensions \cite{Seiberg:1994bz}, which --- due to recent progress on localizing supersymmetric gauge theories on curved spaces so as to compute partition functions on compact backgrounds \cite{Pestun:2007rz,Festuccia:2011ws} --- have passed similarly impressive consistency checks by matching partition functions of dual four-dimensional $N=1$ gauge theories \cite{Romelsberger:2005eg,Dolan:2008qi}.
An immediate consequence of connecting semi-classical Calabi--Yau phases through strong--weak coupling dualities is that seemingly distinct $N=(2,2)$ non-linear sigma models with Calabi--Yau threefold target spaces can emerge as two regimes in the same moduli space of the underlying common family of $N=(2,2)$ worldsheet theories. The first and most prominent example of this kind is furnished by the pair of the degree $14$ R\o{}dland Pfaffian Calabi--Yau threefold and a certain degree $42$ complete intersection Calabi--Yau threefold in the Grassmannian $\operatorname{Gr}(2,7)$ \cite{MR1775415},\footnote{Borisov and Libgober recently studied a generalization of this correspondence to higher dimensional Calabi--Yau varieties \cite{Borisov:2015vqa}.} which --- as shown in the important work by Hori and Tong \cite{Hori:2006dk} --- arise respectively as a weakly and a strongly coupled phase of a non-Abelian gauged linear sigma model. A direct consequence of this result is that the associated Calabi--Yau categories of B-branes must be equivalent,\footnote{Mathematically, the B-brane category of a geometric Calabi--Yau threefold phase is described by its derived category of bounded complexes of coherent sheaves \cite{Sharpe:1999qz,Douglas:2000gi,Diaconescu:2001ze}.} as is mathematically shown for the above example in refs.~\cite{MR2475813,Kuznetsov:2006arxiv,Addington:2014sla}. Hosono and Takagi present another interesting example of such an equivalence \cite{MR3166392}, which also fits into the framework of non-Abelian gauged linear sigma models \cite{Jockers:2012zr}. More generally, two-dimensional dualities of gauged linear sigma models connect phases of worldsheet theories in a common moduli space, which then implies equivalences of the associated B-brane categories. It should be stressed that the categorical equivalences that arise from strong--weak coupling dualities are clearly distinct from equivalences due to birational transformations studied in ref.~\cite{MR1949787} or other typical phase transitions in Abelian gauged linear sigma models --- such as, e.g., the Landau--Ginzburg and large volume correspondence for Calabi--Yau hypersurfaces in weighted projective spaces \cite{MR2641200,Herbst:2008jq}.\footnote{For non-birational phase transitions in Abelian gauged linear sigma models see refs.~\cite{Caldararu:2007tc,Addington:2012zv}.} As proposed in ref.~\cite{Caldararu:2007tc}, including these more general identifications of Calabi--Yau geometries on the level of their B-brane categories yields derived equivalences that sometimes realize particular instances of homological projective duality introduced by Kuznetsov \cite{MR2354207}.
In this work we study the phase structure of a certain class of non-Abelian gauged linear sigma models with symplectic gauge groups, to which we refer as the skew symplectic sigma models. We formulate a strong-weak coupling duality among skew symplectic sigma models and provide non-trivial consistency checks in support of our proposal. In particular, we study in detail two families of Calabi--Yau threefolds with a single K\"ahler modulus arising as weakly and strongly coupled phases in the moduli space of certain skew symplectic sigma models. Furthermore, we argue that these Calabi--Yau geometries are related by a strong-weak coupling duality of a dual pair of skew symplectic sigma models. Geometrically, this pair of threefolds gives rise to interesting examples of non-trivial derived equivalences. For our analysis the two sphere partition function furnishes an important tool, which we use to study the phase structure of skew symplectic sigma models and to confirm the proposed duality relation. In the analyzed large volume phases we extract from the two sphere partition function both perturbative and non-perturbative quantum corrections of the corresponding geometric large volume string compactifications.
As the studied gauged linear sigma models involve higher rank gauge groups, their two sphere partition functions arise from higher-dimensional Mellin--Barnes type integrals, which are technically challenging to evaluate. Therefore, we extend the technique of Zhdanov and Tsikh \cite{MR1631772} --- which transforms two-dimensional Mellin--Barnes type integrals into a suitable combinatorial sum of local Grothendieck residues --- to Mellin--Barnes type integrals of arbitrary dimension. While this generalization is essential to carry out the calculations in this work, we hope that these technical advances will prove useful in other contexts as well.
The organization of this work is as follows: In Section~\ref{sec:SSSM} we introduce the skew symplectic sigma models --- a certain class of two-dimensional $N=(2,2)$ non-Abelian gauged linear sigma models. We study their phase structure, which generically exhibits a weakly and a strongly coupled phase. Both phases become in the infrared non-linear sigma models with interesting Calabi--Yau threefold target spaces. These threefolds are projective varieties of the non-complete intersection type, and we analyze their geometric properties. Finally, the observed phase structure together with the established geometric properties of the semi-classical infrared non-linear sigma model regimes allows us to propose a non-trivial duality relation among skew symplectic sigma models. In Section~\ref{sec:ZS2}, we support our duality proposal by matching the two sphere partition functions for a pair of dual skew symplectic sigma models and for certain self-dual skew symplectic sigma models. While the Mellin--Barnes type integral expressions for the two sphere partition functions of the dual models look rather distinct --- e.g., even the dimensionality of the integration domains differs due to the different ranks of the gauge groups in dual skew symplectic sigma models --- we demonstrate that the dual two sphere partition functions are indeed identical. In Section~\ref{sec:derivedequiv} we discuss further implications of the phase structure of the skew symplectic sigma models with geometric Calabi--Yau threefold target space regimes. We argue that the skew symplectic sigma models give rise to an equivalence of topological B-brane spectra, which in the formulation of modern algebraic geometry conjectures a non-trivial equivalence between the derived categories of bounded complexes of coherent sheaves of the studied Calabi--Yau threefolds. Then we present our conclusions and outlook in Section~\ref{sec:con}. In Appendix~\ref{sec:MB} --- building upon previous work on two-dimensional Mellin--Barnes integrals by Zhdanov and Tsikh and using multidimensional residue calculus --- we present a systematic approach to evaluate Mellin--Barnes type integrals in arbitrary dimensions. These technical results are necessary to carry out the computations described in Section~\ref{sec:ZS2}. More generally, this method can be applied to systematically evaluate two sphere partition functions of two-dimensional $N=(2,2)$ gauged linear sigma models with higher rank gauge groups.
\section{The skew symplectic sigma model} \label{sec:SSSM}
For the Calabi--Yau threefolds studied in this work, we introduce a class of two-dimensional $N=(2,2)$ supersymmetric non-Abelian gauged linear sigma models based on the compact Lie group\footnote{Note that the compact Lie group $\operatorname{USp}(2k) = U(2k)\cap \operatorname{Sp}(2k,\mathbb{C})$ is often denoted by $\operatorname{Sp}(k)$. Here we use the first notation to indicate the dimension of the defining representation.}
\begin{equation} \label{eq:G}
G \,=\, \frac{U(1) \times \operatorname{USp}(2k)}{\mathbb{Z}_2} \ .
\end{equation}
Here the $\mathbb{Z}_2$ quotient is generated by the element $\left( e^{i \pi} , - \mathbf{1}_{2k\times 2k} \right)$ of order two in the center of $U(1) \times \operatorname{USp}(2k)$. Furthermore, the studied gauged linear sigma model possesses a vector $U(1)_V$ R-symmetry and an axial $U(1)_A$ R-symmetry, under which the $N=(2,2)$ supercharges transform as $(Q_\pm,\bar Q_\pm) \to (e^{i\alpha} Q_\pm,e^{-i\alpha} \bar Q_\pm)$ and $(Q_\pm,\bar Q_\pm) \to (e^{\pm i\beta} Q_\pm,e^{\mp i\beta} \bar Q_\pm)$, respectively. While the $U(1)_V$ R-symmetry is always preserved at the quantum level, the $U(1)_A$ R-symmetry is generically anomalous.
In addition to the vector multiplets $V_{U(1)}$ and $V_{\operatorname{USp}(2k)}$ of the gauge group $G$, the chiral matter fields of the studied non-Abelian gauged linear sigma model are listed in Table~\ref{tb:spec1}. Thus the entire non-Abelian gauged linear sigma model is determined by three positive integers $(k,m,n)$. Due to the skew-symmetric multiplicity labels of the chiral fields $P^{[ij]}$, we call these models the skew symplectic sigma models or in short the $SSSM_{k,m,n}$.
A generic gauge invariant superpotential of R-charge $+2$ for the skew symplectic sigma model reads
\begin{equation} \label{eq:W}
W\,=\, \operatorname{tr}\left[ P \left(A(\phi) + X^T \epsilon X \right) \right] + B(\phi) \cdot Q^T\epsilon X \ , \qquad
\epsilon=\begin{pmatrix} 0 & \mathbf{1}_{k\times k} \\ -\mathbf{1}_{k\times k} & 0 \end{pmatrix} \ ,
\end{equation}
in terms of $A(\phi) = \sum_a A^a \phi_a$ constructed out of $m$ skew-symmetric matrices $A^a_{[ij]}$ of dimension $n \times n$ and the column vector $B(\phi) = \sum_a B^a \phi_a$ constructed out of the vectors $B^{ai}$ of dimension $n$. Here the trace is taken over the symplectic multiplicity indices $i,j$ of the multiplets $P^{[ij]}$ and $X_i$. While this superpotential does not represent the most general form, for generic superpotential couplings it can always be cast into this form with the help of suitable field redefinitions.
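As a consistency check, the charge assignments of Table~\ref{tb:spec1} render each term of the superpotential~\eqref{eq:W} gauge invariant and of $U(1)_V$ R-charge $+2$. For instance, for the term $\operatorname{tr}\left[ P\, X^T \epsilon X \right]$ the $U(1)$ gauge charges and the R-charges add up to
\begin{equation*}
\underbrace{(-2)}_{P} + \underbrace{(+1)}_{X} + \underbrace{(+1)}_{X} \,=\, 0 \ , \qquad
\underbrace{(2-2\mathfrak{q})}_{P} + \underbrace{\mathfrak{q}}_{X} + \underbrace{\mathfrak{q}}_{X} \,=\, 2 \ ,
\end{equation*}
and analogously for the terms $\operatorname{tr}\left[ P A(\phi) \right]$ and $B(\phi)\cdot Q^T \epsilon X$.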
Due to the Abelian factor in the gauge group $G$, the skew symplectic sigma model allows for a Fayet--Iliopoulos term, which together with the theta angle generates the two-dimensional twisted superpotential \cite{Witten:1993yc}
\begin{equation} \label{eq:Wt}
\widetilde W \,=\, \frac{i \tilde{t}}{2\sqrt{2}} \Sigma_{U(1)} \ ,
\end{equation}
in terms of the twisted chiral field strength $\Sigma_{U(1)}=\frac{1}{\sqrt{2}}\overline{D}_+ D_- V_{U(1)}$ and the complexified Fayet--Iliopoulos coupling
\begin{equation}
\tilde{t} \,=\, i r + \frac{\theta}{2\pi} \ .
\end{equation}
In this note, the complexified Fayet--Iliopoulos coupling $\tilde{t}$ is the key player, as we study the phase structure of the skew symplectic sigma models as a function of this parameter $\tilde{t}$.
Note that the constructed non-Abelian gauged linear sigma model is somewhat related to the PAX and PAXY models studied in ref.~\cite{Jockers:2012zr}. Essentially, the unitary gauge group factor is replaced by the symplectic gauge group $\operatorname{USp}(2k)$, while the matter content exhibits a similar structure.
\begin{table}[t]
\centering
\hfil\vbox{
\offinterlineskip
\tabskip=0pt
\halign{\vrule height2.6ex depth1.4ex width1pt~#~\hfil\vrule&~\hfil~#~\hfil\vrule&~\hfil~#~\hfil\vrule&\hfil~#~\hfil\vrule height2.6ex depth1.4ex width 1pt\cr
\noalign{\hrule height 1pt}
\hfil chiral multiplets & multiplicity & $G$ representation & $U(1)_V$ R-charge \cr
\noalign{\hrule height 1pt}
\ $P^{[ij]}$, $1\le i < j \le n$ & $\binom{n}2$ & $\mathbf{1}_{-2}$ & $2-2\mathfrak{q}$ \cr
\noalign{\hrule}
\ $Q$ & $1$ & $\mathbf{2k}_{-3}$ & $2-3\mathfrak{q}$ \cr
\noalign{\hrule}
\ $\phi_a$, $a=1,\ldots,m$ & $m$ & $\mathbf{1}_{+2}$ & $2\mathfrak{q}$ \cr
\noalign{\hrule}
\ $X_i$, $i=1,\ldots,n$ & $n$ & $\mathbf{2k}_{+1}$ & $\mathfrak{q}$ \cr
\noalign{\hrule height 1pt}
}}\hfil
\caption{Listed are the matter fields of the skew symplectic sigma model{} together with their multiplicities, their representations of the gauge group $G$, and their $U(1)_V$ R-charges. The gauge group representations are labeled by representations of $\operatorname{USp}(2k)$ with a subscript for the $U(1)$ gauge charge. As the $U(1)_V$ R-symmetry is ambiguous up to transformations with respect to the $U(1)$ gauge symmetry, the $U(1)_V$ charges of the multiplets exhibit the real parameter $\mathfrak{q}$.}\label{tb:spec1}
\end{table}
\subsection{Infrared limit of the skew symplectic sigma model}
For the skew symplectic sigma model to flow to a non-trivial $N=(2,2)$ superconformal fixed point in the infrared, we follow the general strategy and require that the skew symplectic sigma model possesses, in addition to the vector $U(1)_V$ R-symmetry, a non-anomalous axial $U(1)_A$ R-symmetry. Then in the infrared limit these two R-symmetries yield the left- and right-moving $U(1)$~currents of the $N=(2,2)$ superconformal algebra. The axial anomaly $\partial_\mu j_A^\mu$ appears at one loop from the operator product of the axial $U(1)_A$ current and the gauge current. From the axial $U(1)_A$ R-charges of the fermions in the chiral multiplets we arrive at
\begin{equation} \label{eq:axialcurr}
\begin{aligned}
\partial_\mu j_A^\mu \,&\sim\, \underbrace{(-2)\binom{n}2}_{P_{[ij]}} + \underbrace{\left( - 3 \right) (2k)}_Q
+ \underbrace{(+2) m}_{\phi_a} + \underbrace{\left(+1 \right) (2kn)}_{X_i} \\
\,&=\, (n-3)(2k-n) + 2(m-n) \ .
\end{aligned}
\end{equation}
As argued in ref.~\cite{Witten:1993yc}, we find that the axial $U(1)_A$ R-symmetry is non-anomalous if the $U(1)$ gauge charges of the matter multiplets (weighted with the dimensions of their non-Abelian $\operatorname{USp}(2k)$ representations) add up to zero, and the condition for a conserved axial current $\partial_\mu j_A^\mu = 0$ reads
\begin{equation} \label{eq:axial}
m \,=\, \frac12 (n-3)(n-2k) + n \ .
\end{equation}
Furthermore, if the axial $U(1)_A$ R-symmetry is non-anomalous, we can also calculate the central charge $c$ of the $N=(2,2)$ superconformal field theory at the infrared fixed point. It arises at one loop from the operator product $\Gamma_{A-V}$ of the vector $U(1)_V$ and the axial $U(1)_A$ currents
\begin{equation}
\begin{aligned}
\Gamma_{A-V}(\mathfrak{q}) \,=\,
\underbrace{(2\mathfrak{q}-1) \binom{n}2}_{P_{[ij]}} &+\underbrace{(3\mathfrak{q}-1)(2k)}_Q
+ \underbrace{(-2\mathfrak{q}+1)m}_{\phi_a} \\
&\qquad+\underbrace{(-\mathfrak{q}+1)(2 k n)}_{X_i}
+\underbrace{(-1)(2k+1)k}_{V_{\operatorname{USp}(2k)}} +\underbrace{(-1)}_{V_{U(1)}} \ .
\end{aligned} \label{eq:mixed}
\end{equation}
Inserting the axial anomaly cancellation condition~\eqref{eq:axial}, we arrive at the central charge~$c$ of the infrared $N=(2,2)$ SCFT
\begin{equation} \label{eq:central}
\frac{c}{3} = \Gamma_{A-V}(\mathfrak{q}) = k(n-2k)-1 \ .
\end{equation}
Thus the two integers $k$ and $n$ together with the requirement of a non-anomalous axial $U(1)_A$ current determine the multiplicity label $m$ and therefore specify the associated skew symplectic sigma model with a controlled renormalization group flow to an $N=(2,2)$ superconformal fixed point.
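To illustrate these conditions, note that an infrared Calabi--Yau threefold fixed point requires the central charge $c=9$, i.e., $k(n-2k)=4$ by eq.~\eqref{eq:central}. The anomaly cancellation condition~\eqref{eq:axial} then fixes the remaining multiplicity label $m$; for instance,
\begin{equation*}
(k,n)=(1,6):\quad m \,=\, \tfrac12 \cdot 3 \cdot 4 + 6 \,=\, 12 \ , \qquad
(k,n)=(2,6):\quad m \,=\, \tfrac12 \cdot 3 \cdot 2 + 6 \,=\, 9 \ ,
\end{equation*}
which single out the models $SSSM_{1,12,6}$ and $SSSM_{2,9,6}$.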
\subsection{Phases of the skew symplectic sigma model}
Our first task is to analyze the scalar potential $U$ of the skew symplectic sigma model
\begin{equation}
\begin{aligned}
U \,=\, & \frac12 D_{U(1)}^2 + | \sigma_{U(1)} |^2
\left(8\!\!\!\sum_{1\le i<j\le n}\!\!\!|P^{[ij]}|^2+18\,Q^\dagger Q
+ 8\sum_{a=1}^m|\phi_a|^2+2\sum_{i=1}^n X^\dagger_i X_i \right)\\
&+\frac12 \operatorname{tr} D_{\operatorname{USp}(2k)}^2
+ 2 ( \sigma_{\operatorname{USp}(2k)} Q)^\dagger \sigma_{\operatorname{USp}(2k)} Q
+ 2 \sum_{i=1}^n ( \sigma_{\operatorname{USp}(2k)} X_i)^\dagger \sigma_{\operatorname{USp}(2k)} X_i\\
&+ \sum_{1\le i<j\le n} \left| F_{P^{[ij]}} \right|^2 + F_{Q}^\dagger F_Q + \sum_{a=1}^m \left|F_{\phi_a}\right|^2
+ \sum_{i=1}^n F_{X_i}^\dagger F_{X_i} \ ,
\end{aligned}
\end{equation}
which is a sum of non-negative contributions expressed in terms of the complex scalar fields $\sigma_{U(1)}$ and $\sigma_{\operatorname{USp}(2k)}$ and the auxiliary D-terms of the $N=(2,2)$ vector multiplets, and in terms of the complex scalar fields of the $N=(2,2)$ chiral multiplets and their auxiliary F-terms. For a stable supersymmetric vacuum --- i.e., for $U=0$ --- all non-negative D- and F-terms must vanish separately.
The D-terms of the vector multiplets $V_{U(1)}$ and $V_{\operatorname{USp}(2k)}$ become
\begin{equation} \label{eq:DU(1)}
D_{U(1)} \,=\, 2\sum_a |\phi_a|^2 + \sum_i X_i^\dagger X_i - 3\,Q^\dagger Q -2 \sum_{1\le i<j\le n} |P^{[ij]}|^2 - r \ ,
\end{equation}
including the Fayet--Iliopoulos parameter $r$, and
\begin{equation} \label{eq:DUSp}
D_{\operatorname{USp}(2k)}^A \,=\, \sum_i X_i^\dagger T^A X_i + Q^\dagger T^A Q \ , \quad A=1,\ldots, (2k+1)k \ ,
\end{equation}
where $T^A$ are the generators of the Lie algebra $\mathfrak{usp}(2k)$.\footnote{The semi-simple Lie algebra $\mathfrak{usp}(2k)=\mathfrak{u}(2k) \cap\mathfrak{sp}(2k,\mathbb{C})$ can be represented by the set of complex matrices $\left\{\begin{pmatrix} A & B \\ -B^\dagger & -A^T \end{pmatrix} \in \operatorname{Mat}(2k,\mathbb{C}) \,\middle|\,A + A^\dagger = 0 \,,\ B = B^T \right\}$.} Geometrically, the D-terms enjoy the interpretation as the moment map $\mu: V_\text{chiral} \to \mathfrak{g}^*$ with respect to the group action of the Lie group $G$ on the complex vector space $V_\text{chiral}$ of the chiral multiplets and its canonical $G$-invariant symplectic K\"ahler form.
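For instance, for $k=1$ the non-Abelian gauge group factor is $\operatorname{USp}(2)=SU(2)$, and the $(2k+1)k=3$ generators $T^A$ may be taken to be the standard Pauli matrices $\tau^A/2$, so that eq.~\eqref{eq:DUSp} comprises the familiar triplet of $SU(2)$ D-terms.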
The F-terms of the chiral multiplets are determined from the gradient of the superpotential
\begin{equation} \label{eq:Fterms}
\begin{aligned}
F_{P^{[ij]}} &= A_{[ij]}(\phi) + X_i^T\epsilon X_j \ , &
F_{Q}&= B(\phi)\cdot X \ , \\
F_{\phi_a}&=\operatorname{tr} \left[P A^a\right] +B^a \cdot Q^T \epsilon X\ , &
F_{X_i}&=2 \sum_{j} P^{[ij]} X_j + B^i(\phi) Q\ .
\end{aligned}
\end{equation}
\subsubsection{The skew symplectic sigma model phase $r\gg0$} \label{sec:Xphase}
Let us first analyze the skew symplectic sigma model in the regime $r\gg 0$ of the Fayet--Iliopoulos parameter $r$. Then setting the D-term~\eqref{eq:DU(1)} to zero enforces at least one of the scalar fields in the chiral multiplets $\phi_a$ or $X_i$ to have a non-vanishing expectation value. Next we consider the F-terms $F_{\phi_a}$ and $F_{X_i}$, which impose $m+2kn$ constraints on the $\binom{n}2+2k$ degrees of freedom arising from the fields $P^{[ij]}$ and $Q$. Assuming genericity of all these constraints, for $m\ge\frac12(n-1)(n-4k)$ we obtain mass terms for the fields $P^{[ij]}$ and $Q$, which set their scalar expectation values to zero. Note that the condition $m\ge\frac12(n-1)(n-4k)$ is automatically fulfilled for skew symplectic sigma models with conserved axial current~\eqref{eq:axial}, which are the models of our main interest.
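Indeed, inserting the anomaly cancellation condition~\eqref{eq:axial} yields
\begin{equation*}
m - \tfrac12 (n-1)(n-4k) \,=\, k(n+1) \,>\, 0 \ .
\end{equation*}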
With the field $Q$ set to zero, combining the constraints from the Abelian and the non-Abelian D-terms further requires that at least one scalar $\phi_a$ must be non-vanishing. As a result the F-term condition $F_{P^{[ij]}}=0$ equates the expectation values of the bilinears $\Lambda_{[ij]}:=X_i^T\epsilon X_j$ with $A_{[ij]}(\phi)$. However, not all bilinears $\Lambda_{[ij]}$ can acquire independent expectation values, because they are quadratic in the fields $X_i, i=1,\ldots,n$, which in turn are collectively represented by the $(2k)\times n$-matrix~$X$. Consequently, the skew-symmetric matrix $\Lambda = X^T \epsilon X$ and hence also $A(\phi)$ have at most rank $2k$. Conversely, together with the non-Abelian D-term constraint for $n\ge 2k$ any skew-symmetric matrix $\Lambda$ of rank $2k$ can --- up to a $\operatorname{USp}(2k)$ transformation --- uniquely be written as the bilinear form of a $(2k)\times n$-matrix $X$. Finally, multiplying the F-term $F_Q$ with $X_i$ implies the constraint $A(\phi) \cdot B(\phi)=0$. Therefore, altogether we obtain in the semi-classical $r\gg 0$ phase the target space geometry
\begin{equation} \label{eq:XSSSM}
\mathcal{X}_{k,m,n} \,=\, \left\{\, \phi \in \mathbb{P}^{m-1} \,\middle|\,
\operatorname{rk} A(\phi) \le 2k \ \text{and}\ A(\phi) \cdot B(\phi) = 0 \, \right\} \ ,
\end{equation}
in terms of the $n\times n$ skew-symmetric matrix $A(\phi)$ and the $n$-dimensional vector $B(\phi)$. We note that the target space variety $\mathcal{X}_{k,m,n}$ can alternatively be written as\footnote{We would like to thank Sergey Galkin for sharing this construction of Calabi--Yau threefolds with us, which among other places he first presented in a lecture at Tokyo University \cite{Galkin:2014Talk}. It pointed us towards the discovery of the skew symplectic sigma model s.}
\begin{equation}\label{eq:Xvariety}
\mathcal{X}_{k,m,n} \,=\, \left\{ [\phi,\tilde P] \in \mathbb{P}( V \oplus \Lambda^2 V^*) \, \middle| \,
\operatorname{rk} \tilde P \le 2k \ \text{and}\ \phi \in \ker\tilde P \right\}\,\cap\, \mathbb{P}(L) \ , \quad
V \simeq \mathbb{C}^n \ ,
\end{equation}
with the linear subspace $L$ of dimension $m$ in $V\oplus \Lambda^2 V^*$ given by
\begin{equation} \label{eq:L}
L \,=\, \left\{ \, \left(B(\phi), A(\phi)\right) \in V\oplus \Lambda^2 V^* \,\right\} \ .
\end{equation}
This construction for projective varieties has been put forward in refs.~\cite{Galkin:2014Talk,Galkin:2015InPrep}. The dimension of the non-complete intersection variety $\mathcal{X}_{k,m,n}$ is given by
\begin{equation} \label{eq:dimX}
\dim_\mathbb{C} \mathcal{X}_{k,m,n} \,=\, (m-1) - \frac12 (n-2k-1)(n-2k) - 2k \ ,
\end{equation}
where $\frac12(n-2k-1)(n-2k)$ yields the codimension of the rank condition imposed on the skew forms in $\Lambda^2 V^*$, while $2k$ takes into account the codimension of the kernel condition imposed on the vectors in $V$.
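For instance, for the anomaly-free models $SSSM_{1,12,6}$ and $SSSM_{2,9,6}$ singled out above, eq.~\eqref{eq:dimX} gives
\begin{equation*}
\dim_\mathbb{C} \mathcal{X}_{1,12,6} \,=\, 11 - \tfrac12 \cdot 3 \cdot 4 - 2 \,=\, 3 \ , \qquad
\dim_\mathbb{C} \mathcal{X}_{2,9,6} \,=\, 8 - \tfrac12 \cdot 1 \cdot 2 - 4 \,=\, 3 \ ,
\end{equation*}
so both models realize threefold target spaces, in accord with the central charge $c=9$ from eq.~\eqref{eq:central}.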
Typically the variety $\mathcal{X}_{k,m,n}$ is singular at those points where the rank of $\tilde P$ is reduced, i.e., $\operatorname{rk} \tilde P < 2k$. For generic choices of the projective subspace $\mathbb{P}(L)$ this occurs in codimension $\frac12 (n-2k+1)(n-2k+2) + 2k - 2$ (or higher). Hence, for
\begin{equation} \label{eq:redrk}
\dim_\mathbb{C} \mathcal{X}_{k,m,n}<2(n-2k)-1 \ ,
\end{equation}
the rank of $\tilde P$ is constant on the generic variety $\mathcal{X}_{k,m,n}$, namely $\operatorname{rk}\tilde P = 2k$, and the variety thus does not acquire any singularities from a reduction of the rank of $\tilde P$. Then the validity of the presented semi-classical analysis is guaranteed.
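As an example, for the model $SSSM_{1,12,6}$ the bound~\eqref{eq:redrk} reads $3<7$ and is met, so the generic variety $\mathcal{X}_{1,12,6}$ does not acquire singularities from a reduction of the rank of $\tilde P$. For $SSSM_{2,9,6}$ the bound reads $3<3$ and fails; this model is of the special type $n=2k+2$ discussed in the following.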
However, if the condition~\eqref{eq:redrk} is not met, it does not necessarily imply that the variety $\mathcal{X}_{k,m,n}$ acquires singularities in codimension $2(n-2k)-1$ where $\operatorname{rk}\tilde P<2k$. Note that for $n=2k+2$ the rank constraint in the definition~\eqref{eq:Xvariety} becomes redundant: the second constraint $\phi\in\ker \tilde P$ implies $\operatorname{rk} \tilde P \le n-1$, and since the rank of the even-dimensional skew-symmetric matrix $\tilde P$ is always even, it obeys $\operatorname{rk} \tilde P \le n-2$, i.e.,
\begin{equation} \label{eq:spec}
\mathcal{X}_{k,m,2k+2} \,=\, \left\{ [\phi,\tilde P] \in \mathbb{P}( V, \Lambda^2 V^*) \, \middle| \, \phi \in \ker \tilde P \right\}\cap \mathbb{P}(L) \ ,
\quad V \simeq \mathbb{C}^{2k+2} \ .
\end{equation}
This kernel condition imposes $2k+2$ bilinear constraints of the form $f_i = \sum_j \tilde P_{[ij]} \phi^j$, subject to the single relation $\sum_i \phi^i f_i = 0$ due to the skew-symmetry of $\tilde P$. Hence, we find for the dimension
\begin{equation}
\dim_\mathbb{C} \mathcal{X}_{k,m,2k+2} \,=\, (m-1) - (2k +1 ) = m - 2k -2 \ ,
\end{equation}
which is in agreement with eq.~\eqref{eq:dimX}. For a generic choice of the linear subspace $L$ in $V\oplus \Lambda^2 V^*$ the variety $\mathcal{X}_{k,m,2k+2}$ is indeed smooth.
Note that if the rank condition $\operatorname{rk}\tilde P=2k$ is saturated, the non-Abelian gauge group factor $\operatorname{USp}(2k)$ is spontaneously broken at all points in the target space variety. Then the discussed phase is weakly coupled, and we do not expect any strong coupling dynamics in the infrared. If, however, there are points in the target space variety $\mathcal{X}_{k,m,n}$ where the rank condition is not saturated, the non-Abelian gauge group $\operatorname{USp}(2k)$ is not entirely broken, and strong coupling dynamics in the infrared becomes important at such points \cite{Hori:2006dk,Hori:2011pd}. While this is not surprising at singular points of the target space varieties, such strong coupling effects are in principle even present for the models $SSSM_{k,m,2k+2}$, which as discussed naively give rise to the smooth target space varieties~$\mathcal{X}_{k,m,2k+2}$. For smooth target space varieties we heuristically expect that the infrared dynamics of the gauge theory --- as for instance described by its correlation functions --- varies continuously with respect to the scalar field expectation values, which realize coordinates on the target space. For this reason we suppose that, due to the smoothness of the target space, such strong coupling effects are not relevant in the infrared. We do not examine such strong coupling effects further here. Instead, by evaluating the two sphere partition function of such a model in Section~\ref{sec:ZS2}, we give a stronger indirect argument that the models $SSSM_{k,m,2k+2}$ reduce for $r\gg 0$ to semi-classical non-linear sigma models with the smooth target space varieties~$\mathcal{X}_{k,m,2k+2}$, even at the loci of reduced rank.
In this work we focus on those skew symplectic sigma model{}s with an expected semi-classical non-linear sigma model phase for $r \gg 0$, which by the above argument includes the models $SSSM_{k,m,2k+2}$.
\subsubsection{The skew symplectic sigma model{} phase $r\ll0$} \label{sec:strongphase}
Let us now turn to the regime $r\ll 0$ for the Fayet--Iliopoulos parameter $r$ of the skew symplectic sigma model. In this phase non-Abelian strong coupling effects become essential \cite{Hori:2015priv}, and the analysis can be tackled analogously to the strongly coupled gauged linear sigma model phases studied by Hori and Tong~\cite{Hori:2006dk}.\footnote{We are deeply grateful to Kentaro Hori for explaining in detail to us how to analyze this strong coupling phase.}
In this phase the Abelian D-term~\eqref{eq:DU(1)} ensures a non-vanishing expectation value for the chiral multiplets $P^{[ij]}$ or $Q$. For generic choices of the parameters $B^{ai}$, the F-term $F_{\phi_a}=0$ together with the non-Abelian D-term~\eqref{eq:DUSp} even implies that at least one chiral multiplet $P^{[ij]}$ must be non-zero. Thus the Abelian factor of the gauge group is broken, and we are left with the non-Abelian $\operatorname{USp}(2k)$ gauge theory, which we view as adiabatically fibered over the total space
\begin{equation}
\mathcal{V} \,=\, \operatorname{Tot}\left(\mathcal{O}(-1)^{\oplus m}\to \mathbb{P}^{{\binom{n}2}-1}\right) \ ,
\end{equation}
where the homogeneous coordinates of the base are given by the expectation values of the $\binom{n}2$ chiral fields $P^{[ij]}$ and the fibers by the $m$ chiral fields $\phi_a$.
As presented by Hori and Tong for $SU(k)$ gauge theories fibered over a base \cite{Hori:2006dk}, the strategy is now to solve the infrared dynamics of the $\operatorname{USp}(2k)$ gauge theory at each point of the variety $\mathcal{V}$. For any point $(\phi,P)\in\mathcal{V}$ the $n+1$ fundamental flavors $(X_i,Q)$ couple according to the F-terms $F_{X_i}$ and $F_{Q}$ to the skew-symmetric mass matrix
\begin{equation} \label{eq:Mmat}
M_{\phi,P} \,=\, \begin{pmatrix} P^{[ij]} & B^i(\phi) \\ - B^i(\phi)^T & 0 \end{pmatrix} \ ,
\end{equation}
such that the dimension of the kernel of $M_{\phi,P}$ determines the number of massless flavors $n_f$ at low energies, i.e.,
\begin{equation} \label{eq:nf}
n_f \,=\, \dim \ker M_{\phi,P} \,=\, n + 1 - \operatorname{rk} M_{\phi,P} \ .
\end{equation}
To this end we recall the structure of two-dimensional $N=(2,2)$ non-Abelian $\operatorname{USp}(2k)$ gauge theories with $n_f$ fundamental flavors as established by Hori~\cite{Hori:2011pd}. The theory is regular for $n_f$ odd, which means that the theory does not possess any non-compact Coulomb branches. Since the skew-symmetric matrix $M_{\phi,P}$ always has even rank, eq.~\eqref{eq:nf} shows that $n_f$ is odd precisely for even $n$; imposing regularity thus requires
\begin{equation}
n \in 2\mathbb{Z} \ ,
\end{equation}
which we assume to be the case in the remainder of this section. For $n_f < 2k+1$ the infrared theory does not have any normalizable supersymmetric ground state, for $n_f = 2k+1$ the theory becomes a free conformal field theory of mesons, and for $n_f \ge 2k+3$ the theory is non-trivial and can be described by a dual interacting symplectic theory with gauge group $\operatorname{USp}(2\tilde k)$, where $\tilde k = \frac12 (n_f-1)-k$.
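To illustrate this classification for a model relevant below, consider $SSSM_{1,12,6}$ with $k=1$ and $n=6$: since the skew-symmetric mass matrix $M_{\phi,P}$ has even rank, eq.~\eqref{eq:nf} allows for
\begin{equation}
\operatorname{rk} M_{\phi,P} \in \left\{0,2,4,6\right\}
\quad\Longrightarrow\quad
n_f \,=\, 7 - \operatorname{rk} M_{\phi,P} \in \left\{7,5,3,1\right\} \ .
\end{equation}
The stratum with $n_f=1<2k+1$ supports no normalizable supersymmetric ground state, $n_f=3=2k+1$ yields free mesons, and $n_f=5,7$ yield dual interacting $\operatorname{USp}(2\tilde k)$ theories with $\tilde k=1,2$, respectively.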
As a consequence the theory localizes at those points of $\mathcal{V}$ with $n_f\ge 2k+1$ \cite{Hori:2006dk,Hori:2015priv}, which translates into the constraint
\begin{equation} \label{eq:rkcon}
\operatorname{rk} M_{\phi,P} \le n-2k \ .
\end{equation}
First, we focus on the degeneration locus, where the above inequality is saturated, i.e., $(\phi,P)\in\mathcal{V}$ such that $\operatorname{rk} M_{\phi,P}= n-2k$. At such points we have $n_f = 2k +1$ and the low energy effective theory becomes a theory of mesons $\pi_{[AB]}$, $A,B=1,\ldots,n_f$, arising from the $n_f$ fundamentals in the kernel of $M_{\phi,P}$, together with the fields $T_\alpha$ and $N_\mu$ for tangential and normal fluctuations with respect to the point $(\phi,P)$ in $\mathcal{V}$, i.e.,
\begin{equation}
\begin{aligned}
&(\phi,P) \,\longrightarrow\, (\phi,P) + \sum_\alpha T_\alpha (\delta\phi^{\alpha},\delta P^\alpha) + \sum_\mu N_\mu (\delta\phi^\mu,\delta P^\mu) \ , \\
&\alpha = 1,\ldots,(m-1)+\frac12n(n-1)-k(2k+1) \ ,\quad \mu=1,\ldots,k(2k+1) \ .
\end{aligned}
\end{equation}
Here the multiplicities $k(2k+1)$ and $(m-1)+\frac12n(n-1)-k(2k+1)$ account for the normal and tangential directions to the point $(\phi,P)$ at the degeneration locus $\operatorname{rk} M_{\phi,P}=n-2k$; as a consistency check, they add up to $\dim_\mathbb{C}\mathcal{V}=(m-1)+\binom{n}2$. Then the effective superpotential of the resulting theory becomes
\begin{equation}
\begin{aligned}
W_\text{eff}(\pi,T,N) \,=\, N_\mu \operatorname{tr}\left( \pi \, M_{\delta\phi^\mu,\delta P^\mu} \right)
&+\left(T_\alpha \delta P^\alpha + N_\mu \delta P^\mu\right)^{[ij]} A^a_{[ij]} \phi_a \\
& \qquad\qquad + P^{[ij]} A^a_{[ij]}(T_\alpha \delta \phi^\alpha_a + N_\mu \delta \phi^\mu_a)\ ,
\end{aligned}
\end{equation}
with the effective F-terms
\begin{equation}
\begin{aligned}
&F_{\pi_{[AB]}} \,=\, N_\mu\,M_{\delta\phi^\mu,\delta P^\mu}^{[AB]} \ , \qquad
F_{T_\alpha} \,=\, (\delta P^\alpha A)^a \phi_a + P^{[ij]}(A\delta\phi)_{[ij]}^\alpha \ , \\
&F_{N_\mu} \,=\, \operatorname{tr}\left( \pi \, M_{\delta\phi^\mu,\delta P^\mu} \right) + (\delta P^\mu A)^a \phi_a + P^{[ij]}(A\delta\phi)_{[ij]}^\mu \ .
\end{aligned}
\end{equation}
First, we observe that at a generic point $(\phi,P)$ of the degeneration locus the mesons $\pi_{[AB]}$ --- arising from the massless eigenvectors of the mass matrix $M_{\phi,P}$ --- do not have a definite $U(1)_V$ R-charge, because they are general linear combinations of bilinears built from the fundamentals $X_i$ and $Q$ of $U(1)_V$ R-charge $\mathfrak{q}$ and $2-3\mathfrak{q}$, respectively. At such points the F-term constraint $F_{T_\alpha}=0$ is generically not fulfilled, but as there are as many constraints as dimensions of the degeneration locus, there could be discrete solutions for $(\phi,P)$ on the degeneration component. We do not expect any normalizable supersymmetric ground states from such points. For specific skew symplectic sigma models (with $n$ even) this expectation is indirectly confirmed in Section~\ref{sec:ZS2} by extracting the Witten index $(-1)^F$ --- or, strictly speaking, at least that the sum of all such supersymmetric discrete ground states does not contribute to the Witten index $(-1)^F$.
Therefore, we now look at a non-generic component of the degeneration locus with mesons of definite $U(1)_V$ R-charge. Let us consider a point $(\phi,P)$ on such a component such that
\begin{equation} \label{eq:Bconst}
B(\phi)\,=\,B(\delta\phi^\alpha)\,=\,0 \ .
\end{equation}
Then in the vicinity of the point $(\phi,P)$ on the degeneration locus the $2k+1$ massless mesons are given by
\begin{equation}
\tilde\phi_i \,=\, X_i \epsilon Q \quad \text{with} \quad \sum_j P^{[ij]}\tilde\phi_j = 0 \ ,
\end{equation}
with the definite $U(1)_V$ R-charge $2-2\mathfrak{q}$. Generically, the F-term $F_{T_\alpha}=0$ together with eq.~\eqref{eq:Bconst} requires that $\phi=0$ and that all tangential fluctuations $(\delta \phi^\alpha,\delta P^\alpha)$ to the degeneration locus take the non-generic form $(0,\delta P^\alpha)$. Finally, the F-terms $F_{\pi_{[AB]}}$ set $N_\mu$ to zero, whereas the F-terms $F_{N_\mu}$ yield the constraint
\begin{equation} \label{eq:Hyperii}
0 \,=\, A^{a}_{[ij]}P^{[ij]} + B^{ai} \tilde\phi_i \ .
\end{equation}
Thus, we find that the low energy theory of this degeneration locus yields a non-linear sigma model with target space variety
\begin{equation} \label{eq:Yvariety}
\begin{aligned}
&\mathcal{Y}_{k,m,n} \,=\, \left\{ [\tilde\phi,P] \in \mathbb{P}( V^*, \Lambda^2 V) \, \middle| \,
\operatorname{rk} P \le n-2 k \ \text{and}\ \tilde\phi \in \ker P \right\}\cap \mathbb{P}(L^\perp) \ , \\
&V^* \simeq \mathbb{C}^n \ , \quad \dim_\mathbb{C} L^\perp= \frac12n(n+1) - m \ , \quad n \text{ even} \ .
\end{aligned}
\end{equation}
The linear subspace $L^\perp$ of $V^*\oplus \Lambda^2 V$ is determined by eq.~\eqref{eq:Hyperii}, and it is dual to the linear subspace $L$ of $V \oplus \Lambda^2 V^*$ with $\dim_\mathbb{C} L = m$ defined in eq.~\eqref{eq:L}, which means that
\begin{equation} \label{eq:Lperp}
L^\perp \,=\, \left\{ v \in V^*\oplus \Lambda^2 V \, \middle| \, v(L) = 0 \, \right\} \ .
\end{equation}
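Spelled out in components (up to normalization conventions of the pairing), an element $v=(\tilde\phi,P) \in V^*\oplus\Lambda^2 V$ pairs with the generator $(B^a,A^a)$ of $L$ associated with the basis direction $\phi_a$ as
\begin{equation}
v\left(B^a,A^a\right) \,=\, B^{ai}\,\tilde\phi_i + A^a_{[ij]}\,P^{[ij]} \ , \qquad a=1,\ldots,m \ ,
\end{equation}
so the condition $v(L)=0$ indeed reproduces the $m$ linear constraints~\eqref{eq:Hyperii}.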
Let us briefly comment on the locus where the inequality $\operatorname{rk} P\le n-2k$ is not saturated. At these points the infrared non-Abelian gauge theory becomes non-trivial, and it can be described in terms of a dual interacting symplectic theory with gauge group $\operatorname{USp}(n-2k-\operatorname{rk} P)$ \cite{Hori:2011pd}. Generically, at those points the target space variety $\mathcal{Y}_{k,m,n}$ becomes singular, which indicates geometrically that the derivation of the semi-classical non-linear sigma model target space $\mathcal{Y}_{k,m,n}$ breaks down. However, just as in the phase $r\gg0$, we assume that the resulting target space variety $\mathcal{Y}_{k,m,n}$ furnishes the correct semi-classical non-linear sigma model description as long as the variety $\mathcal{Y}_{k,m,n}$ remains smooth --- even if there are points where the rank condition is not saturated. Borrowing the analysis of Section~\ref{sec:Xphase}, the latter phenomenon occurs in the phase $r\ll 0$ for the models $SSSM_{1,(k+1)(2k+3)-m,2k+2}$ with smooth target space varieties $\mathcal{Y}_{1,(k+1)(2k+3)-m,2k+2}$.
Furthermore, we remark that as the target spaces of the two phases $r\gg 0$ and $r\ll 0$ are of the same type, we arrive at the identification
\begin{equation} \label{eq:XYdual}
\mathcal{Y}_{k,m,n} \simeq \mathcal{X}_{\tilde k,\tilde m,n} \ , \quad
\tilde k = \frac{n}2-k \ , \quad \tilde m = \frac12n(n+1)-m \ , \quad n\text{ even} \ ,
\end{equation}
with the linear subspaces $L$ of $\mathcal{X}_{\tilde k,\tilde m,n}$ and $L^\perp$ of $\mathcal{Y}_{k,m,n}$ of dimension $\dim_\mathbb{C} L = \dim_{\mathbb{C}} L^\perp=\tilde m$ canonically identified. However, we should stress that the two varieties are realized rather differently in the two phases of the skew symplectic sigma model. While the target spaces~$\mathcal{X}_{k,m,n}$ arise in the weakly coupled phase in the regime $r\gg 0$ of the skew symplectic sigma model, the variety $\mathcal{Y}_{k,m,n}$ emerges due to strong coupling phenomena with mesonic degrees of freedom in the low energy regime of the two-dimensional gauge theory. Physically, the identification~\eqref{eq:XYdual} already points towards a non-trivial duality among skew symplectic sigma models. However, we postpone this aspect to Section~\ref{sec:duality} and first examine in detail the geometry of the discovered target spaces $\mathcal{X}_{k,m,n}$ and $\mathcal{Y}_{k,m,n}$ in Section~\ref{sec:Geom}.
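For later orientation, we note already here that for $n=6$ the identification~\eqref{eq:XYdual} gives $\tilde k = 3-k$ and $\tilde m = 21-m$, so that explicitly
\begin{equation}
\mathcal{Y}_{1,12,6} \,\simeq\, \mathcal{X}_{2,9,6} \ , \qquad \mathcal{Y}_{2,9,6} \,\simeq\, \mathcal{X}_{1,12,6} \ .
\end{equation}
These are the Calabi--Yau threefold examples studied in Section~\ref{sec:CYs}.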
\subsection{Geometric properties of target space varieties} \label{sec:Geom}
Our next task is to further study the target space varieties $\mathcal{X}_{k,m,n}$ and $\mathcal{Y}_{k,m,n}$. For ease of notation we focus on $\mathcal{X}_{k,m,n}$, as all results immediately carry over to $\mathcal{Y}_{k,m,n}$ due to the equivalence~\eqref{eq:XYdual}.
As in refs.~\cite{MR1827863,Jockers:2012zr}, in order to deduce geometric properties of the target space varieties $\mathcal{X}_{k,m,n}$ we introduce the incidence correspondence
\begin{equation} \label{eq:inc}
\widehat{\mathcal{X}}_{k,m,n} \,=\, \left\{ (x,p) \in \mathbb{P}^{m-1} \times \operatorname{Gr}(2k,V) \,\middle|\, \Lambda(x,p)=0 \ \text{and}\ B(x,p)=0 \right\} \ .
\end{equation}
Here $\Lambda(x,p)$ is a generic section of the bundle $\mathcal{R}$
\begin{equation}
\mathcal{R} \,=\, \frac{\mathcal{O}(1) \otimes \Lambda^2 V^* }{ \mathcal{O}(1) \otimes \Lambda^2 \mathcal{S} } \ ,
\end{equation}
in terms of the hyperplane line bundle $\mathcal{O}(1)$ of the projective space $\mathbb{P}^{m-1}$, and the universal subbundle $\mathcal{S}$ of rank $2k$ of the Grassmannian $\operatorname{Gr}(2k,V)$. Furthermore, $B(x,p)$ is a generic section of the rank $2k$ bundle $\mathcal{B}$
\begin{equation}
\mathcal{B} \,=\, \mathcal{O}(1) \otimes \mathcal{S}^* \ .
\end{equation}
Thus the incidence correspondence realizes the variety $\widehat{\mathcal{X}}_{k,m,n}$ as the transverse zero locus of a generic section of the bundle $\mathcal{R}\oplus\mathcal{B}$. Then the total Chern class of the varieties $\widehat{\mathcal{X}}_{k,m,n}$ is given by
\begin{equation} \label{eq:totalChern}
c(\widehat{\mathcal{X}}_{k,m,n}) \,=\, \frac{c(\mathbb{P}^{m-1})\,c(\operatorname{Gr}(2k,V))}{c(\mathcal{R})\,c(\mathcal{B})} \ .
\end{equation}
This formula can explicitly be evaluated as the tangent sheaf $\mathcal{T}_{\operatorname{Gr}(2k,V)}$ is canonically identified with $\operatorname{Hom}(\mathcal{S},\mathcal{Q})$ of the universal subbundle~$\mathcal{S}$ and the quotient bundle $\mathcal{Q}$, and the Chern class of the Grassmannian factor becomes $c(\operatorname{Gr}(2k,V))=c(\mathcal{S}^*\otimes \mathcal{Q})$.
The intersection numbers $\kappa_{\widehat{\mathcal{X}}_{k,m,n}}$ of the variety $\widehat{\mathcal{X}}_{k,m,n}$ for the (induced) hyperplane divisor $H$ and Schubert divisor $\sigma_1$ of $\mathbb{P}^{m-1}\times\operatorname{Gr}(2k,V)$ are defined as
\begin{equation}
\kappa_{\widehat{\mathcal{X}}_{k,m,n}}(H^r,\sigma_1^s) \,=\, \int_{\widehat{\mathcal{X}}_{k,m,n}} H^r \wedge \sigma_1^s \ , \quad r+s = \dim_\mathbb{C} \widehat{\mathcal{X}}_{k,m,n} \ .
\end{equation}
They can explicitly be computed from the top Chern classes of the bundles $\mathcal{R}$ and $\mathcal{B}$ according to
\begin{equation} \label{eq:inter}
\kappa_{\widehat{\mathcal{X}}_{k,m,n}}(H^r,\sigma_1^s) \,=\, \int_{\mathbb{P}^{m-1} \times \operatorname{Gr}(2k,V)} H^r\wedge \sigma_1^s
\wedge c_\text{top}(\mathcal{R})\wedge c_\text{top}(\mathcal{B}) \ .
\end{equation}
These formulas for the incidence variety $\widehat{\mathcal{X}}_{k,m,n}$ form our basic tools to study topological properties of the varieties $\mathcal{X}_{k,m,n}$ of interest, as detailed in the following.
For varieties $\mathcal{X}_{k,m,n}$ that fulfill the constant rank condition~\eqref{eq:redrk}, it is straightforward to demonstrate the isomorphism
\begin{equation} \label{eq:iso}
\mathcal{X}_{k,m,n} \,\simeq \,\widehat{\mathcal{X}}_{k,m,n} \quad \text{for} \quad k(2k-1)+\frac{n}2(n-4k+3)>m \ ,
\end{equation}
because the zeros of $\Lambda(x,p)$ yield the rank condition $\operatorname{rk} \tilde P = 2k$ in $\mathbb{P}^{m-1}$ in eq.~\eqref{eq:Xvariety} and assign to each point in $\mathbb{P}^{m-1}$ a unique $2k$-plane in the Grassmannian $\operatorname{Gr}(2k,V)$. The zeros of $B(x,p)$ give rise to the kernel condition $\phi\in \ker\tilde P$ in eq.~\eqref{eq:Xvariety}. Thus, we can readily use this isomorphism to determine the total Chern class for this class of varieties $\mathcal{X}_{k,m,n}$. In particular from eq.~\eqref{eq:totalChern} we find for the first Chern class
\begin{equation} \label{eq:c1}
\begin{aligned}
c_1(\mathcal{X}_{k,m,n})
&= c_1(\mathbb{P}^{m-1}) + c_1(\operatorname{Gr}(2k,V)) - c_1(\mathcal{R}) - c_1(\mathcal{B}) \\
&= m H + n \sigma_1 - \left( \binom{n}2H - \binom{2k}2H + (2k-1)\sigma_1 \right) - \left( 2kH + \sigma_1 \right) \\
&= \frac12 (n-3)(2k-n)H+(m-n)H \ .
\end{aligned}
\end{equation}
In the last step we have used the relation $k H - \sigma_1=0$, which holds on the level of cohomology for the pull-backs of $H$ and $\sigma_1$ to the variety $\mathcal{X}_{k,m,n}$ and can be argued for by techniques similar to those presented in refs.~\cite{MR974411,MR2560663}. Thus, we find that the axial anomaly~\eqref{eq:axialcurr} is proportional to the first Chern class of the target space. This is consistent, as the first Chern class is the obstruction to a conserved axial current in a semi-classical non-linear sigma model description \cite{Witten:1991zz}.
Moreover, the degree of the variety is deduced from the intersection formula \eqref{eq:inter} according to
\begin{equation}
\deg(\mathcal{X}_{k,m,n}) \,=\, \int_{\mathbb{P}^{m-1} \times \operatorname{Gr}(2k,V)} H^{\dim_\mathbb{C} \mathcal{X}_{k,m,n}}
\wedge c_\text{top}(\mathcal{R})\wedge c_\text{top}(\mathcal{B}) \ .
\end{equation}
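Before turning to the special class, let us record how the bound in eq.~\eqref{eq:iso} plays out for the two threefold examples: for $(k,m,n)=(1,12,6)$ one finds $k(2k-1)+\frac{n}2(n-4k+3) = 1 + 15 = 16 > 12 = m$, so the isomorphism~\eqref{eq:iso} applies, whereas for $(k,m,n)=(2,9,6)$ the left hand side equals $6+3=9=m$ and the bound fails.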
The isomorphism~\eqref{eq:iso} does not hold for the special class of varieties $\mathcal{X}_{k,m,2k+2}$, as the constant rank condition~\eqref{eq:redrk} is not fulfilled. The projection $\tilde\pi: \mathbb{P}^{m-1} \times \operatorname{Gr}(2k,V)\to \mathbb{P}^{m-1}$ induces the map $\pi: \widehat{\mathcal{X}}_{k,m,2k+2} \to \mathcal{X}_{k,m,2k+2}$, which restricts to an isomorphism on the Zariski open set $\operatorname{rk}\tilde P=2k$. The preimage $\pi^{-1}(\mathcal{I})$ of the subvariety~$\mathcal{I}$ of reduced rank $\operatorname{rk}\tilde P<2k$ yields the exceptional divisor $E_\mathcal{I}$, which at each point $p\in\mathcal{I}$ maps to a Schubert cycle in $\operatorname{Gr}(2k,V)$ of dimension greater than zero. Thus the map $\pi: \widehat{\mathcal{X}}_{k,m,2k+2} \to \mathcal{X}_{k,m,2k+2}$ realizes a resolution of $\mathcal{X}_{k,m,2k+2}$ along the ideal $\mathcal{I}$ with the exceptional locus $\pi^{-1}(\mathcal{I})$ in $\widehat{\mathcal{X}}_{k,m,2k+2}$, and the isomorphism~\eqref{eq:iso} generalizes to
\begin{equation} \label{eq:iso2}
\widehat{\mathcal{X}}_{k,m,2k+2} \simeq \operatorname{Bl}_\mathcal{I}\mathcal{X}_{k,m,2k+2} \ , \quad \mathcal{I} = \left\{ \operatorname{rk} \tilde P<2k \right\} \ ,
\end{equation}
with $\operatorname{Bl}_\mathcal{I}\mathcal{X}_{k,m,2k+2}$ the resolution along the ideal $\mathcal{I}$. Thus the varieties $\widehat{\mathcal{X}}_{k,m,2k+2}$ are not isomorphic but instead birational to $\mathcal{X}_{k,m,2k+2}$.
\subsubsection{Calabi--Yau threefolds $\mathcal{X}_{1,12,6}$ and $\mathcal{X}_{2,9,6}$} \label{sec:CYs}
The focus in this work is mainly on the conformal skew symplectic sigma models with a smooth semi-classical Calabi--Yau threefold target space phase. Thus, imposing $\dim_\mathbb{C} \mathcal{X}_{k,m,n}=3$ in eq.~\eqref{eq:dimX} and requiring $c_1(\mathcal{X}_{k,m,n})=0$, we find the two smooth Calabi--Yau threefolds $\mathcal{X}_{1,12,6}$ and $\mathcal{X}_{2,9,6}$.
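Indeed, for $(k,m,n)=(1,12,6)$ eq.~\eqref{eq:c1} evaluates to
\begin{equation}
c_1(\mathcal{X}_{1,12,6}) \,=\, \left[ \tfrac12\,(6-3)(2-6) + (12-6) \right] H \,=\, 0 \ ,
\end{equation}
while for $(k,m,n)=(2,9,6)$ formula~\eqref{eq:c1} does not directly apply, and the vanishing of the first Chern class is instead established below via the blow-up description.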
The first variety $\mathcal{X}_{1,12,6}$ fulfills the condition~\eqref{eq:redrk} and, as just seen, yields a Calabi--Yau threefold with vanishing first Chern class according to eq.~\eqref{eq:c1}. Using the isomorphism~\eqref{eq:iso}, it is then straightforward to calculate its topological data\footnote{Here $h^{1,1}(\mathcal{X}_{1,12,6})=1$ follows from arguments along the line of refs.~\cite{MR974411,MR2560663}, which then allows us to infer $h^{2,1}(\mathcal{X}_{1,12,6})$ from the Euler characteristic $\chi(\mathcal{X}_{1,12,6})$.}
\begin{equation} \label{eq:DataCY1}
\begin{aligned}
&\,h^{1,1}(\mathcal{X}_{1,12,6}) \,=\, 1 \ , &
&h^{2,1}(\mathcal{X}_{1,12,6}) \,=\, 52 \ , &
&\chi(\mathcal{X}_{1,12,6}) \,=\, -102 \ , \\
&\deg(\mathcal{X}_{1,12,6}) \,=\, 33 \ , &
&c_2(\mathcal{X}_{1,12,6}) \cdot H \,=\, 78 \ .
\end{aligned}
\end{equation}
The second variety $\mathcal{X}_{2,9,6}$ is of the special type $\mathcal{X}_{k,m,2k+2}$. Calculating the Euler characteristic $\chi(\mathcal{X}_{1,9,6})=33$ of the reduced rank locus $\mathcal{X}_{1,9,6} \subset \mathcal{X}_{2,9,6}$, we find $33$ (smooth) points $p_1, \ldots, p_{33}$ in $\mathcal{X}_{2,9,6}$ with reduced rank. Thus we find the isomorphism
\begin{equation}
\widehat{\mathcal{X}}_{2,9,6} \simeq \operatorname{Bl}_{\{p_1,\ldots,p_{33}\}}\mathcal{X}_{2,9,6} \ .
\end{equation}
The exceptional divisor $E = \bigcup_{i=1}^{33} \pi^{-1}(p_i)$ consists of $33$ exceptional planes $\mathbb{P}^2$. The divisor $E$ is linearly equivalent to $2 H-\sigma_1$, because the intersection numbers are $(2H-\sigma_1)^3 = 33$ and $(2H-\sigma_1)^2 H = (2H-\sigma_1) H^2 =0$. Furthermore, we find from eq.~\eqref{eq:totalChern} for the first Chern class
\begin{equation}
c_1(\widehat{\mathcal{X}}_{2,9,6}) = -4 H+ 2 \sigma_1 = -2 E \ .
\end{equation}
Due to the general relation $c_1(\widehat{\mathcal{X}}_{2,9,6}) = \pi^*c_1(\mathcal{X}_{2,9,6}) - 2 E$ for threefold varieties blown up at smooth points, we infer that $c_1(\mathcal{X}_{2,9,6})=0$ and $\mathcal{X}_{2,9,6}$ is a Calabi--Yau threefold. Finally, with $\chi(\mathcal{X}_{2,9,6}) = \chi(\widehat{\mathcal{X}}_{2,9,6}) + 33 (1 -\chi(\mathbb{P}^2))$ and $\deg(\mathcal{X}_{2,9,6})= \kappa_{\widehat{\mathcal{X}}_{2,9,6}}(H^3)$ and applying formulas~\eqref{eq:totalChern} and \eqref{eq:inter}, we arrive at the topological data for the Calabi--Yau threefold $\mathcal{X}_{2,9,6}$\footnote{Again, $h^{1,1}(\mathcal{X}_{2,9,6})=1$ determines $h^{2,1}(\mathcal{X}_{2,9,6})$ from $\chi(\mathcal{X}_{2,9,6})$.}
\begin{equation}
\begin{aligned}
&\,h^{1,1}(\mathcal{X}_{2,9,6}) \,=\, 1 \ , &
&h^{2,1}(\mathcal{X}_{2,9,6}) \,=\, 52 \ , &
&\chi(\mathcal{X}_{2,9,6}) \,=\, -102 \ , \\
&\deg(\mathcal{X}_{2,9,6}) \,=\, 21 \ , &
&c_2(\mathcal{X}_{2,9,6}) \cdot H \,=\, 66 \ .
\end{aligned} \label{eq:DataCY2}
\end{equation}
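As a small cross-check of these numbers: since each blown up point replaces a point by a $\mathbb{P}^2$ with $\chi(\mathbb{P}^2)=3$, the Euler characteristic of the resolution is
\begin{equation}
\chi(\widehat{\mathcal{X}}_{2,9,6}) \,=\, \chi(\mathcal{X}_{2,9,6}) + 33\left(\chi(\mathbb{P}^2)-1\right) \,=\, -102 + 66 \,=\, -36 \ ,
\end{equation}
which is then the value the incidence correspondence computation via eqs.~\eqref{eq:totalChern} and \eqref{eq:inter} must reproduce.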
It is intriguing to observe that the Hodge numbers of $\mathcal{X}_{1,12,6}$ and $\mathcal{X}_{2,9,6}$ (and thus their Euler characteristics) agree, while their degrees are distinct. This gives a first hint that these two Calabi--Yau manifolds are related by a non-trivial derived equivalence. Using R\o{}dland's argument \cite{MR1775415}, we observe that the two Calabi--Yau threefolds $\mathcal{X}_{1,12,6}$ and $\mathcal{X}_{2,9,6}$ cannot be birationally equivalent: if they were birational, their hyperplane classes $H$ would be related by a rational factor. But the ratio of the degrees, $\frac{21}{33}=\frac{7}{11}$ --- arising as the ratio of the third powers of the two hyperplane classes --- is not the third power of a rational number, so the two hyperplane classes are not related by a rational multiple.
From a different geometric construction, Miura analyzes the Calabi--Yau threefold $\mathcal{X}_{1,12,6}$ in ref.~\cite{Miura:2013arxiv}. Furthermore, by means of mirror symmetry he conjectures that in the quantum K\"ahler moduli space of $\mathcal{X}_{1,12,6}$ there emerges another Calabi--Yau threefold, whose invariants match with $\mathcal{X}_{2,9,6}$.\footnote{In refs.~\cite{MR2282973,Almkvist:2005arxiv,vanStraten:2012db,Hofmann:2013PhDThesis} certain Picard--Fuchs operators for one parameter Calabi--Yau threefolds are classified, and their geometric invariants are calculated at points of maximal unipotent monodromy~\cite{vanStraten:2012db,Hofmann:2013PhDThesis}. In this way --- without the knowledge of the underlying Calabi--Yau variety $\mathcal{X}_{1,12,6}$ --- the invariants~\eqref{eq:DataCY1} are determined in refs.~\cite{vanStraten:2012db,Hofmann:2013PhDThesis}.} The connection between Miura's construction and the geometric realization of the two varieties $\mathcal{X}_{1,12,6}$ and $\mathcal{X}_{2,9,6}$ as target spaces of the underlying skew symplectic sigma models is mathematically demonstrated in refs.~\cite{Galkin:2014Talk,Galkin:2015InPrep}. In the remainder of this paper we give strong evidence that the pair of Calabi--Yau threefolds $\mathcal{X}_{1,12,6}$ and $\mathcal{X}_{2,9,6}$ are actually related by duality. This provides a physics argument that the Calabi--Yau manifolds $\mathcal{X}_{1,12,6}$ and $\mathcal{X}_{2,9,6}$ are derived equivalent.
\subsection{Dualities among skew symplectic sigma models} \label{sec:duality}
The observed geometric correspondence~\eqref{eq:XYdual} among low energy effective target space varieties $\mathcal{X}_{k,m,n}$ and $\mathcal{Y}_{k,m,n}$ suggests a duality among the associated skew symplectic sigma models, i.e.,
\begin{equation} \label{eq:SSSMdual}
SSSM_{k,m,n} \,\simeq\, SSSM_{\tilde k,\tilde m,n} \ \text{for }n \text{ even}\,,\
\tilde k=\frac{n}2 - k, \ \tilde m= \frac12n(n+1)-m \ .
\end{equation}
Thus we propose that such pairs of skew symplectic sigma models --- which clearly have distinct degrees of freedom as ultraviolet $N=(2,2)$ gauged linear sigma models --- become two equivalent two-dimensional non-linear $N=(2,2)$ effective field theories at low energies.
While the match of effective target space varieties~\eqref{eq:XYdual} is already a strong indication, let us now collect further evidence for our proposal. Similarly to four-dimensional Seiberg dual gauge theories with four supercharges \cite{Seiberg:1994bz}, we check the 't~Hooft anomaly matching conditions for global symmetry currents of the skew symplectic sigma models. We first compare the anomaly~\eqref{eq:axialcurr} of the $U(1)_A$ axial current. Inserting the identification \eqref{eq:SSSMdual} for the labels of the dual skew symplectic sigma models, we find with eq.~\eqref{eq:axialcurr}
\begin{equation}
\partial_\mu j^\mu_{A, SSSM_{k,m,n} } \,=\, - \partial_\mu j^\mu_{A, SSSM_{\tilde k, \tilde m,n} } \ .
\end{equation}
Thus, the 't~Hooft anomaly matching condition for the $U(1)_A$ axial current is satisfied up to a relative sign in the $U(1)$ gauge charges between dual skew symplectic sigma models. An overall flip in sign of the $U(1)$ charges simultaneously reverses the sign of the associated Fayet--Iliopoulos parameter $r$. This change in sign is already implicit in the geometric correspondence~\eqref{eq:XYdual}, as it relates skew symplectic sigma model{} phases at $r\gg 0$ and $r\ll 0$, respectively.
Furthermore, due to eq.~\eqref{eq:mixed} the mixed axial--vector anomaly~$\Gamma_{A-V}$ obeys
\begin{equation}
\begin{aligned}
\Gamma_{A-V, SSSM_{k,m,n}}(\mathfrak{q}) \,=\, \Gamma_{A-V, SSSM_{\tilde k,\tilde m,n}}(\tilde{\mathfrak{q}}) \ , \qquad \tilde{\mathfrak{q}}\,=\,1-\mathfrak{q} \ .
\end{aligned}
\end{equation}
Since this mixed anomaly depends in general on an overall ambiguity~$\mathfrak{q}$ in the assignment of $U(1)_V$ R-charges in the spectrum of Table~\ref{tb:spec1}, the anomaly matching is not really a consistency check but instead fixes the relative assignment of $U(1)_V$ charges among dual skew symplectic sigma models. If, however, the axial anomaly vanishes, the mixed axial--vector anomaly becomes independent of $\mathfrak{q}$ and calculates the central charge of the infrared $N=(2,2)$ superconformal field theory according to \eqref{eq:central}. In this case we find
\begin{equation}
c_{SSSM_{k,m,n}} = c_{SSSM_{\tilde k,\tilde m,n}} \quad \text{for} \quad \partial_\mu j^\mu_A=0 \ .
\end{equation}
For this class of skew symplectic sigma models with vanishing $U(1)_A$ axial anomaly a further non-trivial 't~Hooft anomaly matching condition is thus fulfilled.
In summary, the analyzed 't~Hooft anomaly matching conditions allow us now to refine our duality proposal for skew symplectic sigma models as follows
\begin{equation} \label{eq:dualSSSM}
\begin{aligned}
&SSSM_{k,m,n}(r,\mathfrak{q}) \, \simeq \, SSSM_{\tilde k,\tilde m,n}(\tilde r,\tilde{\mathfrak{q}})\quad \text{for}\quad n\text{ even} \ , \\
&\tilde k=\frac{n}2 - k, \quad \tilde m= \frac12n(n+1)-m, \quad \tilde r=-r, \quad \tilde{\mathfrak{q}}\,=1-\mathfrak{q} \ .
\end{aligned}
\end{equation}
So far this is really a duality relation between families of skew symplectic sigma models. While the duality map for the Fayet--Iliopoulos parameter~$r$ is spelled out explicitly, the skew symplectic sigma models also depend on the coupling constants in the superpotential~\eqref{eq:W}. In order to match the coupling constants in the superpotential as well, the linear subspace $L_{k,m,n}\equiv L$ for the model $SSSM_{k,m,n}$ of eq.~\eqref{eq:L} must be identified with the linear subspace $L^\perp_{\tilde k,\tilde m,\tilde n}\equiv L^\perp$ for the model $SSSM_{\tilde k,\tilde m,\tilde n}$ as specified in eq.~\eqref{eq:Lperp}. Both $L$ and $L^\perp$ are entirely determined by the couplings $A_{[ij]}^a$ and $B^{ia}$ in the superpotential. The duality proposal is therefore further refined by identifying these couplings according to
\begin{equation} \label{eq:csdual}
V \oplus \Lambda^2V^* \,\supset\, L_{k,m,n}(A,B) \,\simeq \, L^\perp_{\tilde k,\tilde m,\tilde n}(\tilde A,\tilde B) \,\subset\, \tilde V^* \oplus \Lambda^2\tilde V \ , \quad
V \oplus \Lambda^2V^* \,\simeq\, \tilde V^* \oplus \Lambda^2\tilde V \ .
\end{equation}
Note that $\dim_\mathbb{C} L_{k,m,n}=\dim_\mathbb{C} L^\perp_{\tilde k,\tilde m,\tilde n}$, and the identification follows from comparing the phase structure for $r\gg0$ and $r\ll 0$ of the skew symplectic sigma models together with the geometric identification~\eqref{eq:XYdual}. This relationship does not uniquely fix the correspondence between the couplings $A_{[ij]}^a$ and $B^{ia}$ of $SSSM_{k,m,n}$ and $\tilde A_{[ij]}^a$ and $\tilde B^{ia}$ of $SSSM_{\tilde k,\tilde m,\tilde n}$. However, such ambiguities are not physically relevant, as they can be absorbed into field redefinitions of the chiral multiplets.
Up to here the discussed duality is based upon the identification of target spaces. However, the proposed duality falls into the class of two-dimensional Hori dualities \cite{Hori:2011pd}, which can be argued for in the following way \cite{Hori:2015priv}.\footnote{We would like to thank Kentaro Hori for sharing his duality argument with us.} The model $SSSM_{k,m,n}$ comes with $n_f=n+1$ fundamentals of $\operatorname{USp}(2k)$, which for odd $n_f$ and $n_f \ge 2k+3$ --- that is to say for even $n$ with $n\ge 2k+2$ --- is dual to $\operatorname{USp}(2\tilde k)$ for $\tilde k=\frac{n}2 - k$ and with $\binom{n_f}2$ singlets and $n_f$ fundamentals \cite{Hori:2011pd}. The singlets are the mesons of the original gauge theory
\begin{equation}
\tilde P_{[ij]} \,=\, X_i \epsilon X_j \ , \qquad \tilde\phi_i \,=\, Q \epsilon X_i \ .
\end{equation}
They couple to the fundamentals $\tilde X^i$ and $\tilde Q$ of $\operatorname{USp}(2\tilde k)$ through the superpotential $W=\operatorname{tr}\left( \tilde P \cdot \tilde X^T\tilde\epsilon\tilde X\right) + \tilde\phi \cdot \tilde Q^T\tilde\epsilon \tilde X$, where $\tilde\epsilon$ is the epsilon tensor of the dual gauge group $\operatorname{USp}(2\tilde k)$ and the trace is taken over flavor indices of the fundamentals $\tilde X^i$. Thus in summary, we arrive at the dual theory with the chiral fields $P^{[ij]}$, $\phi_a$, $\tilde P_{[ij]}$, $\tilde \phi_i$, $\tilde X^i$ and $\tilde Q$ transforming respectively in the representations $\mathbf{1}_{-2}$, $\mathbf{1}_{+2}$, $\mathbf{1}_{+2}$, $\mathbf{1}_{-2}$, $\mathbf{2\tilde k}_{-1}$ and $\mathbf{2\tilde k}_{+3}$ of the dual gauge group $\frac{U(1)\times\operatorname{USp}(2\tilde k)}{\mathbb{Z}_2}$. The dual superpotential reads
\begin{equation}
W \,=\, \operatorname{tr}\left[ P \left(A(\phi) + \tilde P \right) \right] + B(\phi) \cdot \tilde\phi
+\operatorname{tr}\left( \tilde P \cdot \tilde X^T\tilde\epsilon\tilde X\right) + \tilde\phi \cdot \tilde Q^T\tilde\epsilon \tilde X \ .
\end{equation}
Following ref.~\cite{Hori:2013gga} this theory can be further simplified by integrating out the chiral multiplets $\phi_a$ of multiplicity $m$, which yields the F-term constraints
\begin{equation} \label{eq:intout}
F_{\phi_a} \,=\, \operatorname{tr} \left( P \cdot A^a \right) + B^a \cdot \tilde \phi \,=\, 0 \ .
\end{equation}
Generically, the solutions to these conditions form a family of dimension $\tilde m = \binom{n+1}2-m$, which we parametrize by the chiral fields $\tilde\phi^a$, $a=1,\ldots,\tilde m$, as
\begin{equation}
P^{[ij]}=\tilde A^{[ij]}_a \tilde\phi^a \ , \qquad \tilde\phi_i=\tilde B_{ia}\tilde\phi^a \ ,
\end{equation}
such that eq.~\eqref{eq:intout} is fulfilled. Altogether, we obtain the simplified dual theory with gauge group $\frac{U(1)\times\operatorname{USp}(2\tilde k)}{\mathbb{Z}_2}$ and the chiral fields $\tilde P_{[ij]}$, $\tilde \phi^a$, $\tilde X^i$ and $\tilde Q$, which interact through the superpotential
\begin{equation}
W\,=\, \operatorname{tr}\left[ \tilde P \left(\tilde A(\tilde \phi) + \tilde X^T \tilde\epsilon \tilde X \right) \right] + \tilde B(\tilde\phi) \cdot \tilde Q^T\tilde\epsilon\tilde X \ .
\end{equation}
Up to a change of sign of the $U(1)$ gauge charges, this is just the skew symplectic sigma model{} with the chiral spectrum as in Table~\ref{tb:spec1} and the superpotential~\eqref{eq:W} for the integers $(\tilde k,\tilde m,n)$. This agrees with the proposed duality \eqref{eq:dualSSSM} for even~$n$.
Next we turn to explicit skew symplectic sigma models that give rise to $N=(2,2)$ superconformal field theories in the infrared. In this work we mainly focus on $N=(2,2)$ superconformal field theories in the context of compactifications of type~II string theories. The skew symplectic sigma models with central charges three, six, nine and twelve are of particular interest, as such theories describe the internal worldsheet theories of type~II string compactifications to eight, six, four and two space-time dimensions, respectively. The possible models in this range of central charges --- which possess a semi-classical geometric non-linear sigma model phase --- are summarized in Table~\ref{tb:models}.
\begin{table}[t]
\centering
\hfil\vbox{
\offinterlineskip
\tabskip=0pt
\halign{\vrule height2.2ex depth1ex width1pt~#~\hfil\vrule&\hfil~#~\hfil\vrule&~#~\hfil\vrule height2.2ex depth1ex width 1pt\cr
\noalign{\hrule height 1pt}
\hfil sigma model & IR central charge & smooth $r\gg 0$ target space phase\cr
\noalign{\hrule height 1pt}
$SSSM_{1,5,4}$ & $\frac{c}3=1$ & degree $5$ $T^2$ curve\cr
\noalign{\hrule}
$SSSM_{1,8,5}$ & $\frac{c}3=2$ & degree $12$ K3 surface\cr
\noalign{\hrule}
$SSSM_{1,12,6}$ & $\frac{c}3=3$ & degree $33$ CY 3-fold ($\chi=-102$)\cr
$SSSM_{2,9,6}$ & $\frac{c}3=3$ & degree $21$ CY 3-fold ($\chi=-102$) \cr
\noalign{\hrule}
$SSSM_{1,17,7}$ & $\frac{c}3=4$ & degree $98$ CY 4-fold ($\chi=672$)\cr
\noalign{\hrule height 1pt}
}}\hfil
\caption{Listed are the skew symplectic sigma models ($SSSM_{k,m,n}$) with vanishing axial $U(1)_A$ anomaly, infrared central charges $\frac{c}{3}=1,2,3,4$, and a smooth target space phase. They are associated to supersymmetric type~II string compactifications to $\left(10-2\cdot\frac{c}{3}\right)$ spacetime dimensions. The last column lists the geometric target space in the semi-classical non-linear sigma model regime $r\gg 0$.}\label{tb:models}
\end{table}
In particular the models $SSSM_{1,5,4}$, $SSSM_{1,12,6}$ and $SSSM_{2,9,6}$ with even~$n$ possess dual skew symplectic sigma model{} descriptions according to our proposal~\eqref{eq:dualSSSM}. The model $SSSM_{1,5,4}$ is self-dual with two equivalent geometric $T^2$ phases for $r\gg 0$ and $r\ll 0$, namely we find the self-duality relationship
\begin{equation} \label{eq:dualT2s}
SSSM_{1,5,4}(r,\mathfrak{q}) \,\simeq\, SSSM_{1,5,4}(-r,1-\mathfrak{q}) \ .
\end{equation}
The other two models, $SSSM_{1,12,6}$ and $SSSM_{2,9,6}$, give rise to dual families of $N=(2,2)$ superconformal field theories with central charge nine, that is to say
\begin{equation} \label{eq:dualpair}
SSSM_{1,12,6}(r,\mathfrak{q}) \,\simeq\, SSSM_{2,9,6}(-r,1-\mathfrak{q}) \ .
\end{equation}
Moreover, these models possess two geometric non-linear sigma model phases with Calabi--Yau threefold target spaces $\mathcal{X}_{1,12,6}$ and $\mathcal{Y}_{1,12,6}$, and $\mathcal{Y}_{2,9,6}$ and $\mathcal{X}_{2,9,6}$ for $r\gg 0$ and $r\ll 0$, respectively, as discussed in Section~\ref{sec:CYs}. They obey the geometric equivalences $\mathcal{X}_{1,12,6}\simeq\mathcal{Y}_{2,9,6}$ and $\mathcal{X}_{2,9,6}\simeq\mathcal{Y}_{1,12,6}$ according to eq.~\eqref{eq:XYdual}.
Finally, let us turn to the model $SSSM_{1,8,5}$ in Table~\ref{tb:models}. While a general and detailed analysis of the non-linear sigma model phase $r \ll 0$ for skew symplectic sigma models with $n$ odd is beyond the scope of this work, we arrive at a duality proposal for the specific model $SSSM_{1,8,5}$. The degree twelve K3~surface $\mathcal{X}_{1,8,5}$ of the geometric phase at $r\gg0$ is actually well-known in the mathematical literature \cite{MR1714828,MR2047679,Hosono:2014ty}.\footnote{We would like to thank Shinobu Hosono for pointing out and explaining to us the geometric properties and the relevance of the degree twelve K3~surface as discussed in the following.} There is a degree twelve K3 surface $\mathcal{X}_{\mathbb{S}_5}$, which is a linear section of codimension eight of either one of the two isomorphic connected components $\operatorname{OGr}^\pm(5,10)$ of the ten-dimensional orthogonal Grassmannian $\operatorname{OGr}(5,10)=O(10,\mathbb{C})/(O(5,\mathbb{C})\times O(5,\mathbb{C}))$ \cite{MR1714828,Hosono:2014ty}. The two isomorphic components $\operatorname{OGr}^\pm(5,10)$ are known as the ten-dimensional spinor varieties~$\mathbb{S}_5$ embedded in $\mathbb{P}(V)$ with $V\simeq \mathbb{C}^{16}$. The resulting degree twelve K3 surface is therefore given by
\begin{equation} \label{eq:K31}
\mathcal{X}_{\mathbb{S}_5} \,=\, \mathbb{S}_5 \cap \mathbb{P}(L) \,\subset\, \mathbb{P}(V) \ , \quad V \,\simeq\, \mathbb{C}^{16} \ , \quad
L \simeq\mathbb{C}^8 \ .
\end{equation}
In addition, there is a dual degree twelve K3 surface $\mathcal{Y}_{\mathbb{S}_5^*}$ based on the projective dual variety $\mathbb{S}_5^*\subset \mathbb{P}(V^*)$,\footnote{For details on projective dual varieties we refer the reader to the interesting review~\cite{MR2027446}.}
\begin{equation} \label{eq:K32}
\mathcal{Y}_{\mathbb{S}_5^*} \,=\, \mathbb{S}_5^* \cap \mathbb{P}(L^\perp) \,\subset\, \mathbb{P}(V^*) \ ,
\end{equation}
with the eight-dimensional linear section $L^\perp \subset V^*$ orthogonal to $L \subset V$. Since $\mathbb{S}_5^*$ is isomorphic to $\mathbb{S}_5$, the K3 surfaces $\mathcal{X}_{\mathbb{S}_5}$ and $\mathcal{Y}_{\mathbb{S}_5^*}$ are members of the same family of degree twelve polarized K3 surfaces. By embedding the spinor variety $\mathbb{S}_{5}$ into the projective space $\mathbb{P}^{15}$ with the Pl\"ucker embedding, an isomorphism between the polarized K3 surfaces $\mathcal{X}_{1,8,5}$ and $\mathcal{X}_{\mathbb{S}_5}$ is demonstrated for instance in ref.~\cite{MR2071808}. For us the important result is that both polarized K3 surfaces $\mathcal{X}_{\mathbb{S}_5}$ and $\mathcal{Y}_{\mathbb{S}_5^*}$ appear as phases in the same quantum K\"ahler moduli space \cite{MR2047679}. Therefore, it is natural to expect that the second variety $\mathcal{Y}_{\mathbb{S}_5^*}$ is identified with a geometric target space $\mathcal{Y}_{1,8,5}$ of $SSSM_{1,8,5}$ in the phase $r\ll 0$. With both varieties $\mathcal{X}_{1,8,5}$ and $\mathcal{Y}_{1,8,5}$ as members of the same family of degree twelve polarized K3 surfaces, we are led to the self-duality proposal
\begin{equation} \label{eq:K3sd}
SSSM_{1,8,5}(r,\mathfrak{q}) \,\simeq\, SSSM_{1,8,5}(-r,1-\mathfrak{q}) \ .
\end{equation}
Again a suitable identification of superpotential couplings is assumed to accommodate the relationship between the complex structure moduli of the two polarized K3~surfaces~\eqref{eq:K31} and \eqref{eq:K32}.
In the remainder of this work the dual pair $SSSM_{1,12,6}$ and $SSSM_{2,9,6}$ and the two self-dual models $SSSM_{1,5,4}$ and $SSSM_{1,8,5}$ of the skew symplectic sigma models listed in Table~\ref{tb:models} are our key players. In the following section we present additional evidence in support of our duality proposals.
\section{The two sphere partition function}\label{sec:ZS2}
In the previous section we have proposed a remarkable duality for the infrared theories of skew symplectic sigma models. As discussed for the models $SSSM_{1,12,6}$ and $SSSM_{2,9,6}$, the duality implies a correspondence between Calabi--Yau threefold target spaces spelled out in eq.~\eqref{eq:XYdual}. While our duality proposal has passed some non-trivial checks --- such as the agreement of target space geometries of dual pairs in a semi-classical low energy analysis and the realization of 't~Hooft anomaly matching conditions --- the aim of this section is to study the two sphere partition functions $Z_{S^2}$ of the skew symplectic sigma models in Table~\ref{tb:models}. For four-dimensional $N=1$ gauge theories, the comparison of partition functions provides an impressive consistency check for Seiberg duality at the quantum level~\cite{Romelsberger:2005eg,Dolan:2008qi}. Analogously, we here use the two sphere partition function as a similarly strong argument in support of our duality proposal.
Using novel localization techniques for partition functions of supersymmetric gauge theories on curved spaces \cite{Pestun:2007rz,Festuccia:2011ws}, the two sphere partition function $Z_{S^2}$ of two-dimensional $N=(2,2)$ gauge theories is explicitly calculated in refs.~\cite{Doroud:2012xw,Benini:2012ui}\footnote{Note that the partition function stated in refs.~\cite{Doroud:2012xw,Benini:2012ui} is more general, as it includes twisted mass parameters for twisted chiral multiplets. Such twisted masses, however, do not play a role here.}
\begin{equation} \label{eq:ZS2}
Z_{S^2}(\boldsymbol{r},\boldsymbol{\theta})
\,=\, \frac{1}{(2\pi)^{\dim\mathfrak{h}}|\mathcal{W}|} \sum_{\mathfrak{m} \in\Lambda_\mathfrak{m}} \int_{\mathfrak{h}}
\!\!d^{\dim\mathfrak{h}}\boldsymbol{\sigma}\,
Z_G(\mathfrak{m},\boldsymbol{\sigma})
Z_\text{matter}(\mathfrak{m},\boldsymbol{\sigma})
Z_\text{cl}(\mathfrak{m},\boldsymbol{\sigma},\boldsymbol{r},\boldsymbol{\theta}) \ .
\end{equation}
Here $\mathfrak{h}$ is the Cartan subalgebra of the Lie algebra~$\mathfrak{g}$ --- which decomposes into the Abelian Lie algebra~$\mathfrak{u}(1)^\ell$ and the simple Lie algebra~$\mathfrak{g}_s$ according to $\mathfrak{g} =\mathfrak{u}(1)^\ell \oplus \mathfrak{g}_s$ --- and $|\mathcal{W}|$ is the cardinality of the Weyl group~$\mathcal{W}$ of the gauge group $G$. The sum over $\mathfrak{m}$ is taken over the magnetic charge lattice $\Lambda_\mathfrak{m}\subset\mathfrak{h}$ of the gauge group~$G$ \cite{Goddard:1976qe}, which is the cocharacter lattice of the Cartan torus of the gauge group $G$,\footnote{The coweight lattice of the Lie algebra $\mathfrak{g}$ is a sublattice of the cocharacter lattice~$\Lambda_\mathfrak{m}$ of the Lie group $G$. The latter is sensitive to the global structure of $G$. For instance, the coweight lattice of $\mathfrak{so}(n)$ is an index two sublattice of the cocharacter lattice of $\operatorname{SO}(n)$, but coincides with the cocharacter lattice of its double cover $\operatorname{Spin}(n)$. Similarly, the coweight lattice of the gauge group $G$ of the skew symplectic sigma model{} is an index two sublattice of its cocharacter lattice due to the $\mathbb{Z}_2$ quotient in $G$.} while the integral over $\boldsymbol{\sigma}$ is performed over the Cartan subalgebra $\mathfrak{h}$. The vector-valued parameters $\boldsymbol{r}$ and $\boldsymbol{\theta}$ of dimension $\ell$ are the Fayet--Iliopoulos terms and the theta angles of the Abelian gauge group factors, formally residing in the annihilator of the Lie bracket, i.e., $\boldsymbol{r},\boldsymbol{\theta} \in \operatorname{Ann}([\mathfrak{g},\mathfrak{g}])\simeq\mathfrak{u}(1)^{*\,\ell} \subset\mathfrak{g}^*$. Finally, with the canonical pairing~$\langle\,\cdot\,,\,\cdot\,\rangle$ between the weight and the coroot lattice the factors in the integrand are given by\footnote{Compared to refs.~\cite{Doroud:2012xw,Benini:2012ui}, the additional factor $(-1)^{\langle \alpha,\mathfrak{m} \rangle}$ in $Z_G$ has been established in ref.~\cite{Hori:2013ika}.}
\begin{equation}
\begin{aligned}
Z_G(\mathfrak{m},\boldsymbol{\sigma})\,&=\,
\prod_{\alpha \in\Delta^+} (-1)^{\langle \alpha,\mathfrak{m} \rangle}\left( \frac14 \langle \alpha,\mathfrak{m} \rangle^2+ \langle \alpha,\boldsymbol{\sigma} \rangle^2\right) \ , \\
Z_\text{matter}(\mathfrak{m},\boldsymbol{\sigma})\,&=\,
\prod_{\rho_j\in\operatorname{Irrep}(\rho)} \prod_{\beta\in w(\rho_j)}
\frac{\Gamma(\tfrac12 \mathfrak{q}_j-i \langle\beta,\boldsymbol{\sigma}\rangle-\tfrac12\langle\beta,\mathfrak{m}\rangle)}
{\Gamma(1-\tfrac12 \mathfrak{q}_j+i \langle\beta,\boldsymbol{\sigma}\rangle-\tfrac12\langle\beta,\mathfrak{m}\rangle)} \ , \\[1ex]
Z_\text{cl}(\mathfrak{m},\boldsymbol{\sigma},\boldsymbol{r},\boldsymbol{\theta})\,&=\,
e^{-4\pi i \langle \boldsymbol{r},\boldsymbol{\sigma}\rangle - i \langle \boldsymbol{\theta},\mathfrak{m}\rangle} \ .
\end{aligned} \label{eq:ZS2comp}
\end{equation}
The gauge group contribution~$Z_G$ arises as a finite product over the positive roots $\Delta^+$ of the Lie algebra~$\mathfrak{g}$. The matter term $Z_\text{matter}$ is a nested finite product, where the outer product is taken over all irreducible representations $\operatorname{Irrep}(\rho)$ of the chiral matter spectrum and the inner product is over the weights~$w(\rho_j)$ of a given irreducible representation $\rho_j$ with $U(1)_V$ R-charge $\mathfrak{q}_j$.\footnote{For the chosen contour $\mathfrak{h}$ of the integral of the two sphere partition function, it is necessary to choose the $U(1)_V$ R-charges of all chiral multiplets to be bigger than zero, i.e., $\mathfrak{q}_j>0$ for all $j$.} Finally, the classical term $Z_\text{cl}$ depends on the Fayet--Iliopoulos parameters $\boldsymbol{r}$ and the theta angles $\boldsymbol{\theta}$ of the two-dimensional $N=(2,2)$ gauge theory.
Note that $Z_{S^2}$ is a function of the Fayet--Iliopoulos parameters and the theta angles of the $N=(2,2)$ gauge theory, but not of the superpotential couplings. In refs.~\cite{Jockers:2012dk,Gomis:2012wy,Halverson:2013eua,Gerchkovitz:2014gta} it is argued that $Z_{S^2}$ calculates the exponentiated sign-reversed K\"ahler potential of the Zamolodchikov metric for the marginal operators in the $(c,a)$ ring of the infrared $N=(2,2)$ superconformal field theory. As a consequence, in phases with a non-linear sigma model interpretation it determines the exact exponentiated sign-reversed K\"ahler potential of the quantum K\"ahler moduli space of the Calabi--Yau target space \cite{Jockers:2012dk}, which for a Calabi--Yau threefold $\mathcal{X}$ with a single K\"ahler modulus reads\footnote{As $h^{1,1}(\mathcal{X}_{1,12,6})=h^{1,1}(\mathcal{X}_{2,9,6})=1$, we only spell out the quantum K\"ahler potential for Calabi--Yau threefolds with a single K\"ahler modulus. For the general case we refer the reader to ref.~\cite{Jockers:2012dk}.}
\begin{equation}
\begin{aligned}
e^{-K(t)} \,=\, -\frac{i}{6}& \operatorname{deg}(\mathcal{X}) (t-\bar t)^3 + \frac{\zeta(3)}{4\pi^3}\chi(\mathcal{X}) \\
&- \sum_{d=1}^{+\infty} N_d(\mathcal{X}) \left( \frac1{4\pi^3} \operatorname{Li}_3(e^{2\pi i t d}) - \frac{i}{4\pi^2} (t-\bar t) \operatorname{Li}_2(e^{2\pi i t d})
+ \text{c.c.} \right) \ .
\end{aligned} \label{eq:expK}
\end{equation}
Here $\operatorname{Li}_k(x)=\sum_{n=1}^{+\infty}\frac{x^n}{n^k}$ is the polylogarithm, $t$ is the complexified K\"ahler coordinate, and $\operatorname{deg}(\mathcal{X})$, $\chi(\mathcal{X})$, and $N_d(\mathcal{X})$ are the degree, the Euler characteristic, and the degree $d$ integral genus zero Gromov--Witten invariants of the Calabi--Yau threefold $\mathcal{X}$, respectively. The real function~$K(t)$ is the exact K\"ahler potential of the Weil--Petersson metric of the quantum K\"ahler moduli space, expressed in terms of the flat quantum K\"ahler coordinate $t$ of the Calabi--Yau threefold $\mathcal{X}$; it is extracted from the gauge theory via the partition function correspondence and the IR--UV map \cite{Jockers:2012dk}
\begin{equation}\label{eq:ZtoK}
Z_{S^2}(r,\theta) \,=\, e^{-K(t)} \ , \quad q \,=\, e^{- r + i \theta} \ , \quad q\,=\,q(t) \ .
\end{equation}
That is to say, in matching the two sphere partition function $Z_{S^2}$ to the form~\eqref{eq:expK} both the IR--UV map $q(t)$ and the K\"ahler potential~$K(t)$ of a geometric phase are unambiguously determined, as explained in detail in ref.~\cite{Jockers:2012dk}.
In the following we calculate the two sphere partition function~$Z_{S^2}$ for both skew symplectic sigma models $SSSM_{1,12,6}$ and $SSSM_{2,9,6}$. In particular, we determine explicitly the quantum K\"ahler potential~\eqref{eq:expK} in the geometric phases of both skew symplectic sigma models, so as to support the $N=(2,2)$ gauge theory duality~\eqref{eq:dualpair} and the associated geometric correspondences~$\mathcal{X}_{1,12,6}\simeq \mathcal{Y}_{2,9,6}$ and $\mathcal{X}_{2,9,6}\simeq \mathcal{Y}_{1,12,6}$ arising from eq.~\eqref{eq:XYdual}.
From a computational point of view the two sphere partition function~$Z_{S^2}$ is expressed in terms of higher-dimensional Mellin--Barnes type integrals. While the one-dimensional case can be treated by standard residue calculus in complex analysis with one variable, Zhdanov and Tsikh show that two-dimensional Mellin--Barnes type integrals are determined by combinatorial sums of two-dimensional Grothendieck residues \cite{MR1631772}. While this technique is sufficient to evaluate the two-dimensional integrals in $Z_{S^2}$ for the model~$SSSM_{1,12,6}$, we need to evaluate three-dimensional integrals in $Z_{S^2}$ for the model~$SSSM_{2,9,6}$. Therefore, in Appendix~\ref{sec:MB} we revisit the result of Zhdanov and Tsikh and generalize it to arbitrary higher dimensions. The calculations below closely follow the general procedure described there.
\subsection{The model $SSSM_{1,12,6}$}\label{sec:M1General}
The skew symplectic sigma model{} $SSSM_{1,12,6}$ is determined by the gauge group
\begin{equation}
G \,=\, \frac{U(1) \times \operatorname{USp}(2)}{\mathbb{Z}_2} \,\simeq\, \frac{U(1) \times SU(2)}{\mathbb{Z}_2} \ ,
\end{equation}
and the chiral matter multiplets listed in Table \ref{tb:spec1}, which are the $SU(2)$~scalar multiplets $P^{[ij]}$ and $\phi_a$ of $U(1)$~charge $-2$ and $+2$, and the $SU(2)$ spin-$\frac12$ multiplets $Q$ and $X_i$ of $U(1)$~charge $-3$ and $+1$. These multiplets --- together with their multiplicities as specified by the range of their indices --- form the irreducible representations~$\operatorname{Irrep}(\rho)$ that appear in the expressions~\eqref{eq:ZS2} and \eqref{eq:ZS2comp} for the two sphere partition function $Z_{S^2}$.
We first discuss the relevant representation theory of the Lie algebra $\mathfrak{su}(2)$ of the non-Abelian part of the gauge group $G$. Its Cartan matrix reads
\begin{equation}
A_{\mathfrak{su}(2)} \,=\, \begin{pmatrix} 2 \end{pmatrix} \ .
\end{equation}
Hence, there is one fundamental weight $\omega_1$ generating the weight lattice $\Lambda_w$, and the single simple root is $\alpha_1 = 2 \omega_1$. The weight and root lattice are illustrated in Figure~\ref{fig:SU2Lattice}.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.65\textwidth]{SU2WeightLattice}
\caption{\label{fig:SU2Lattice}Weight and root lattice of $\mathfrak{su}(2)$, roots are represented as squares.}
\end{figure}
The simple root~$\alpha_1$ is the only positive root, $\left\{\alpha_1\right\}=\Delta^+$, and $\left\{\omega_1, -\omega_1 \right\}$ are the weights of the spin-$\frac12$ representation~$\mathbf{2}_s$. The scalar product of weights is determined by linearity from
\begin{equation}
\left\langle \omega_1, \omega_1 \right\rangle \,=\, \frac{1}{2} \ ,
\end{equation}
which is given by the symmetric quadratic form matrix of $\mathfrak{su}(2)$.
In order to evaluate the two sphere partition function $Z_{S^2}$, we need to carefully take into account the $\mathbb{Z}_2$ quotient in the definition of the gauge group~$G$. An irreducible representation $\rho$ of the double cover $U(1)\times SU(2)$ induces a projective representation $\hat\rho$ of $G$. However, $\hat\rho$ is an ordinary representation of $G$ only if the $\mathbb{Z}_2$ generator $(e^{i\pi},-\mathbf{1}_{2\times 2})$ in the center of $U(1)\times SU(2)$ acts trivially in $\rho$. Such representations correspond to those highest weight vectors $\lambda \in \Lambda_w$ of $\mathfrak{u}(1)\oplus\mathfrak{su}(2)$ with
\begin{equation} \label{eq:U1SU2w}
\lambda\,=\,\lambda_a q +\lambda_n \omega_1 \quad \text{for} \quad \lambda_a+\lambda_n \in 2\mathbb{Z} \ .
\end{equation}
Here the Abelian charge $q$ is canonically normalized as
\begin{equation}
\left\langle q, q \right\rangle \,=\, 1 \ .
\end{equation}
The restriction to the representations~\eqref{eq:U1SU2w} has important consequences for the evaluation of the partition function~$Z_{S^2}$. The localized BPS solutions contributing to $Z_{S^2}$ are classified by the magnetic quantum numbers $\mathfrak{m}$ of gauge flux configurations \cite{Doroud:2012xw,Benini:2012ui}. They obey the generalized Dirac quantization condition, which says that $\left\langle \lambda, \mathfrak{m} \right\rangle$ must be integral for all weights $\lambda$ in the representations of the gauge group~$G$ \cite{Goddard:1976qe}. This explicitly implies here
\begin{equation} \label{eq:M1P1SumRestrict}
\mathfrak{m} \,=\, \frac12 m_a q + m_n \omega_1 \ , \quad m_a+m_n \in 2\mathbb{Z} \ ,
\end{equation}
with a conventional factor $\frac12$ to avoid half-integral magnetic quantum numbers $m_a$. Analogously, we set
\begin{equation}
\boldsymbol{\sigma} \,=\, \frac12 \sigma_a q + \sigma_n \omega_1 \ , \qquad
\boldsymbol{r} = 2 r q \ , \qquad
\boldsymbol{\theta}=2 \theta q \quad \text{with} \quad \theta \sim \theta + 2 \pi \ .
\end{equation}
Note that due to the $\mathbb{Z}_2$ quotient in the definition of the gauge group, the electron of the $U(1)$ factor in the gauge group~$G$ --- which is a singlet of $SU(2)$ --- has charge two. Thus the Abelian background electric field contributing to the vacuum energy can be reduced by electron-positron pairs of charge $\pm 2$ \cite{Witten:1993yc}, and the periodicity of the theta angle $2\theta$ is $2\theta\sim2\theta+4\pi$, which implies the stated periodicity for $\theta$.
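As an explicit instance of the quantization condition, consider for example the multiplets $X_i$ with the weights $\lambda = q \pm \omega_1$. For a flux $\mathfrak{m} = \frac12 m_a q + m_n \omega_1$ the pairing evaluates to
\begin{equation}
\left\langle q \pm \omega_1, \tfrac12 m_a q + m_n \omega_1 \right\rangle \,=\, \frac{m_a}{2} \pm \frac{m_n}{2} \ ,
\end{equation}
which is integral precisely for $m_a+m_n \in 2\mathbb{Z}$, in accord with eq.~\eqref{eq:M1P1SumRestrict}.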
With the general expression for the two sphere partition function~\eqref{eq:ZS2} and taking into account the discussed magnetic charge lattice
\begin{equation}
\Lambda_\mathfrak{m} \,=\, \left\{ \, (m_a,m_n)\in\mathbb{Z}^2 \,\middle|\, m_a+m_n \in 2\mathbb{Z} \, \right\} \ ,
\end{equation}
we arrive at the two sphere partition function
\begin{equation} \label{eq:ZM1P1Raw}
Z_{S^2}^{1,12,6}(r,\theta) \,=\, \frac{1}{8\pi^2}\!\!\sum_{(m_a,m_n)\in\Lambda_\mathfrak{m}}\!\!
\int_{\mathbb{R}^2}\!\! d^2\boldsymbol{\sigma}\,
Z_G(m_n,\sigma_n)
Z_\text{matter}(\mathfrak{m},\boldsymbol{\sigma})
Z_{\text{cl}}(m_a,\sigma_a,r,\theta) \ ,
\end{equation}
with $d^2\boldsymbol{\sigma} = \frac12 d\sigma_a d\sigma_n$ and where
\begin{equation}
\begin{aligned}
&Z_G(m_n,\sigma_n)\, =\, (-1)^{m_n} \left(\frac{m_{n}^2}{4}+\sigma_{n}^2\right)\ , \qquad
Z_{\text{cl}}(m_a,\sigma_a,r,\theta) \,=\, e^{-4 \pi i r \,\sigma_{a}-i\theta \, m_{a}} \ , \\
&Z_\text{matter}(\mathfrak{m},\boldsymbol{\sigma}) \,=\,
Z_P(m_a,\sigma_a)^{15}\,Z_\phi(m_a,\sigma_a)^{12}\,Z_Q(\mathfrak{m},\boldsymbol{\sigma})\,
Z_X(\mathfrak{m},\boldsymbol{\sigma})^6 \ .
\end{aligned}
\end{equation}
The multiplicities of the irreducible representations of the matter multiplets are encoded by their respective powers in $Z_\text{matter}$, and they are individually given by
\begin{align*}
Z_P &= \frac{\Gamma \left(\frac{m_{a}}{2}-\mathfrak{q}+i \sigma_{a}+1\right)}{\Gamma
\left(\frac{m_{a}}{2}+\mathfrak{q}-i \sigma_{a}\right)} \ , \hspace{27mm}
Z_\phi = \frac{\Gamma \left(-\frac{m_{a}}{2}+\mathfrak{q}-i \sigma_{a}\right)}{\Gamma
\left(-\frac{m_{a}}{2}-\mathfrak{q}+i \sigma_{a}+1\right)} \ ,\\
Z_Q &=
\frac{\Gamma \left(\frac{3 m_{a}}{4}-\frac{m_{n}}{4}-\frac{3 \mathfrak{q}}{2}+\frac{3 i
\sigma_{a}}{2}-\frac{i \sigma_{n}}{2}+1\right)}{\Gamma \left(\frac{3
m_{a}}{4}-\frac{m_{n}}{4}+\frac{3 \mathfrak{q}}{2}-\frac{3 i \sigma_a}{2}+\frac{i \sigma_{n}}{2}\right)}
\cdot\frac{\Gamma \left(\frac{3
m_{a}}{4}+\frac{m_{n}}{4}-\frac{3 \mathfrak{q}}{2}+\frac{3 i \sigma_a}{2}+\frac{i \sigma_{n}}{2}+1\right)}{\Gamma \left(\frac{3
m_{a}}{4}+\frac{m_{n}}{4}+\frac{3 \mathfrak{q}}{2}-\frac{3 i \sigma_a}{2}-\frac{i \sigma_{n}}{2}\right)}
\ , \\
Z_X &=
\frac{\Gamma \left(-\frac{m_{a}}{4}-\frac{m_{n}}{4}+\frac{\mathfrak{q}}{2}-\frac{i
\sigma_{a}}{2}-\frac{i \sigma_{n}}{2}\right)}{\Gamma
\left(-\frac{m_{a}}{4}-\frac{m_{n}}{4}-\frac{\mathfrak{q}}{2}+\frac{i \sigma_a}{2}+\frac{i \sigma_{n}}{2}+1\right)}
\cdot\frac{\Gamma
\left(-\frac{m_{a}}{4}+\frac{m_{n}}{4}+\frac{\mathfrak{q}}{2}-\frac{i \sigma_a}{2}+\frac{i \sigma_{n}}{2}\right)}{\Gamma
\left(-\frac{m_{a}}{4}+\frac{m_{n}}{4}-\frac{\mathfrak{q}}{2}+\frac{i \sigma_a}{2}-\frac{i \sigma_{n}}{2}+1\right)}
\ .
\end{align*}
By the general structure of the two sphere partition function, the Weyl group $\mathcal{W}_{\mathfrak{su}(2)}\simeq \mathbb{Z}_2$ of $SU(2)$ acts as a symmetry on the integrand summed over the magnetic charge lattice $\Lambda_\mathfrak{m}$ through the action generated by
\begin{equation} \label{eq:Wsu(2)}
\mathfrak{s}: \left(m_n,\sigma_n\right) \longmapsto \left(-m_n,-\sigma_n\right) \ .
\end{equation}
Keeping track of this symmetry transformation proves useful in evaluating $Z_{S^2}^{1,12,6}$ in the following.
For consistency let us first confirm that the two sphere partition function~\eqref{eq:ZM1P1Raw} is actually real. Using the reflection formula
\begin{equation} \label{eq:GammaReflect}
\Gamma(x) \Gamma(1-x) = \frac{\pi}{\operatorname{sin}(\pi x)} \ ,
\end{equation}
and the generalized Dirac quantization condition~\eqref{eq:M1P1SumRestrict}, we establish with a few steps of algebra the conjugation formulas
\begin{equation}
\begin{aligned}
&Z_P(m_a,\sigma_a)^*=(-1)^{-m_a} Z_P(-m_a,-\sigma_a) \ ,
&&Z_\phi(m_a,\sigma_a)^*=(-1)^{m_a} Z_\phi(-m_a,-\sigma_a) \ , \\
&Z_Q(\mathfrak{m},\boldsymbol{\sigma})^*=(-1)^{-3m_a} Z_Q(-\mathfrak{m},-\boldsymbol{\sigma}) \ ,
&&Z_X(\mathfrak{m},\boldsymbol{\sigma})^*=(-1)^{m_a} Z_X(-\mathfrak{m},-\boldsymbol{\sigma}) \ .
\end{aligned}
\end{equation}
Taking into account the multiplicities of the individual chiral multiplets, we find $Z_\text{matter}(\mathfrak{m},\boldsymbol{\sigma})^*=Z_\text{matter}(-\mathfrak{m},-\boldsymbol{\sigma})$, since the sign factors combine to $(-1)^{(-15+12-3+6)\,m_a}=1$. This readily shows that after relabeling the summation and integration variables $(\mathfrak{m},\boldsymbol{\sigma})$ as $(-\mathfrak{m},-\boldsymbol{\sigma})$ the two sphere partition function $Z^{1,12,6}_{S^2}$ is real.
For the detailed analysis of $Z^{1,12,6}_{S^2}$, it is convenient to work with integration variables that diagonalize the arguments of two Gamma functions and simultaneously remove the parameter $\mathfrak{q}$ of the $U(1)_V$ R-symmetry from all Gamma functions. One particular choice of such a substitution reads
\begin{equation}
\sigma_a = -i\, (\mathfrak{q} + x_1 + x_2 ) \ , \qquad \sigma_n =- i\, (x_1-x_2) \ ,
\end{equation}
which results with $\boldsymbol{x}=(x_1,x_2)$ in
\begin{equation}
Z_{S^2}^{1,12,6}(r,\theta) \,=\, \frac{e^{-4\pi r \mathfrak{q}}}{8\pi^2}
\sum_{(m_a,m_n)\in\Lambda_\mathfrak{m}}
\int_{\gamma+i\mathbb{R}^2} \omega(\mathfrak{m},\boldsymbol{x}) \,dx_1 \wedge dx_2 \ , \quad
\gamma = -\left(\tfrac{\mathfrak{q}}2,\tfrac{\mathfrak{q}}2\right) \ .\label{eq:ZM1Integral}
\end{equation}
The integrand is given by
\begin{equation} \label{eq:ZModel1}
\omega(\mathfrak{m},\boldsymbol{x}) = Z'_G(m_n,\boldsymbol{x})
Z'_P(m_a,\boldsymbol{x})^{15}Z'_\phi(m_a,\boldsymbol{x})^{12}
Z'_Q(\mathfrak{m},\boldsymbol{x})Z'_X(\mathfrak{m},\boldsymbol{x})^6
Z'_\text{cl}(r,\theta,m_a,\boldsymbol{x}) \ ,
\end{equation}
where
\begin{align*}
Z'_{\text{cl}} &= e^{-4 \pi r \left(x_1+x_2\right) - i\,\theta \, m_a} \ ,
\hspace{30mm} Z'_G = (-1)^{m_n} \left(\frac{m_{n}^2}4-(x_1-x_2)^2\right) \ , \\
Z'_P &= \frac{\Gamma \left(1+ \frac{m_a}{2} +x_1+x_2 \right)}{\Gamma \left( \frac{m_a}{2} -x_1-x_2 \right)} \ ,
\hspace{21.5mm} Z'_\phi = \frac{\Gamma \left(-\frac{m_a}{2}-x_1-x_2\right)}{\Gamma \left(1-\frac{m_a}{2}+x_1+x_2\right)} \ , \\
Z'_Q &= \underbrace{\frac{\Gamma \left(1+ \frac{3m_a-m_n}{4}+x_1+2x_2 \right)}{\Gamma\left(\frac{3m_a-m_n}{4}-x_1-2x_2\right)}}_{Z'_{Q_1}} \cdot
\underbrace{\frac{\Gamma\left(1+\frac{3m_a+m_n}{4}+2x_1+x_2 \right)}{\Gamma \left(\frac{3m_a+m_n}{4}-2x_1-x_2 \right)}}_{Z'_{Q_2}} \ ,\\
Z'_X &= \underbrace{\frac{\Gamma \left(-\frac{m_a+m_n}{4}-x_1\right)}{\Gamma \left(1-\frac{m_a+m_n}{4}+x_1\right)}}_{Z'_{X_1}} \cdot
\underbrace{\frac{\Gamma \left(-\frac{m_a-m_n}{4} -x_2 \right)}{\Gamma \left(1-\frac{m_a-m_n}{4} +x_2 \right)}}_{Z'_{X_2}}\ .
\end{align*}
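As a quick consistency check of this substitution, the following short sympy sketch (in our own notation, with the symbol \texttt{q} standing for $\mathfrak{q}$; it is not part of any published code) verifies that the Gamma function arguments of $Z_P$ and of the first factor of $Z_Q$ indeed turn into those of $Z'_P$ and $Z'_{Q_1}$:
\begin{verbatim}
# Sketch: verify that sigma_a = -i(q + x1 + x2), sigma_n = -i(x1 - x2)
# maps the Gamma arguments of Z_P and Z_{Q_1} to their primed form.
import sympy as sp

q, x1, x2, ma, mn = sp.symbols('q x1 x2 m_a m_n')
sigma_a = -sp.I * (q + x1 + x2)
sigma_n = -sp.I * (x1 - x2)

# numerator argument of Z_P: m_a/2 - q + i*sigma_a + 1
arg_P = ma/2 - q + sp.I * sigma_a + 1
assert sp.expand(arg_P - (1 + ma/2 + x1 + x2)) == 0

# numerator argument of Z_{Q_1}:
# 3m_a/4 - m_n/4 - 3q/2 + 3i*sigma_a/2 - i*sigma_n/2 + 1
arg_Q1 = (3*ma/4 - mn/4 - 3*q/2
          + 3*sp.I*sigma_a/2 - sp.I*sigma_n/2 + 1)
assert sp.expand(arg_Q1 - (1 + (3*ma - mn)/4 + x1 + 2*x2)) == 0
\end{verbatim}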
Since the integral~\eqref{eq:ZM1Integral} is now of the same form as the Mellin--Barnes type integral~\eqref{eq:MBInt}, we proceed as in Appendix~\ref{sec:MB} and rewrite the integral as a sum of local Grothendieck residues. We also record that in terms of the new variables $\boldsymbol{x}$ the generator \eqref{eq:Wsu(2)} of the Weyl group $\mathcal{W}_{\mathfrak{su}(2)}$ acting on the integrand becomes
\begin{equation} \label{eq:Wsu(2)new}
\mathfrak{s}: \left(m_n,x_1,x_2\right) \longmapsto \left(-m_n,x_2,x_1\right) \ ,
\end{equation}
which induces the action on the signed volume form
\begin{equation} \label{eq:Wsu(2)vol}
\mathfrak{s}: dx_1 \wedge dx_2\longmapsto -dx_1 \wedge dx_2 \ .
\end{equation}
Firstly, we have to determine the divisors for the poles of the integrand $\omega(\mathfrak{m},\boldsymbol{x})$ in $\mathbb{R}^2 \subset\mathbb{C}^2$. Such poles arise from non-positive integral arguments of Gamma functions in the numerator. Taking into account cancellation between the poles and zeros --- arising from non-positive integral arguments of Gamma functions in the denominator --- we find the following divisors of poles in terms of the constrained integers $n_P, n_{Q_1},n_{Q_2},n_{X_1},n_{X_2}$
\begin{equation}\label{eq:DivisModel1}
\begin{aligned}
D^{n_P}_P\,&=\, x_1+x_2 + n_P + \tfrac{m_a}{2} +1 &&\text{ for } n_P\geq \operatorname{Max}\left[0,-m_a\right] \ ,\\
D^{n_{Q_1}}_{Q_1}\,&=\,x_1+2x_2 + n_{Q_1}+\tfrac{3m_a-m_n}{4}+1 &&\text{ for } n_{Q_1}\geq \operatorname{Max}\left[0,-\tfrac{3m_a-m_n}{2}\right] \ ,\\
D^{n_{Q_2}}_{Q_2}\,&=\,2x_1+x_2 + n_{Q_2}+\tfrac{3m_a+m_n}{4}+1 &&\text{ for } n_{Q_2}\geq \operatorname{Max}\left[0,-\tfrac{3m_a+m_n}{2}\right] \ ,\\
D^{n_{X_1}}_{X_1}\,&=\,x_1 - n_{X_1} +\tfrac{m_a+m_n}{4} &&\text{ for } n_{X_1}\geq \operatorname{Max}\left[0,\tfrac{m_a+m_n}{2}\right] \ , \\
D^{n_{X_2}}_{X_2}\,&=\,x_2 - n_{X_2} +\tfrac{m_a-m_n}{4} &&\text{ for } n_{X_2}\geq \operatorname{Max}\left[0,\tfrac{m_a-m_n}{2}\right].
\end{aligned}
\end{equation}
Note that $Z'_\phi$ does not yield a contribution, since all its poles are canceled by the denominator of $Z'_P$.
Secondly, to determine the critical line $\partial H$ introduced in eq.~\eqref{eq:MBHn2}, we identify $\boldsymbol{p}$ in eq.~\eqref{eq:MBw} as $\boldsymbol{p} =4 \pi r (1,1)\in\mathbb{R}^2$, and therefore find
\begin{equation} \label{eq:UntiltedLine}
\partial H = \left\{ \,(x_1,x_2) \in \mathbb{R}^2 \, \middle| \, x_1+x_2 = -\mathfrak{q}\, \right\} \ .
\end{equation}
Since this line is parallel to the divisors $D^{n_P}_P$, we apply the method of Appendix~\ref{sec:MBParallel} and introduce an additional exponential factor such that
\begin{equation} \label{eq:OmegaPrime}
\omega(\mathfrak{m},\boldsymbol{x}) \, \longrightarrow \,
\omega'(\mathfrak{m},\boldsymbol{x}) = \omega(\mathfrak{m},\boldsymbol{x}) \cdot e^{-4\pi r \varepsilon\,\boldsymbol{\delta}\cdot\boldsymbol{x}} \ ,
\qquad \boldsymbol{\delta}=(\delta_1,\delta_2) \ ,
\end{equation}
where we eventually take the limit $\varepsilon \to 0^+$ after carrying out the integral. Then the critical line $\partial H$ is modified to
\begin{equation}
\partial H' = \left\{ \, \boldsymbol{x} \in \mathbb{R}^2 \, \middle| \, \sum_{i=1}^2 \left(1+\varepsilon\,\delta_i\right)x_i = -\left(1+\varepsilon\,\sum_{i=1}^2\frac{\delta_i}{2}\right)\,\mathfrak{q} \, \right\} \ ,
\end{equation}
which for $\varepsilon$ small and $\delta_1 \ne \delta_2$ indeed removes the parallelism between $\partial H'$ and the divisors $D^{n_P}_P$.\footnote{Note that the perturbation $\boldsymbol{\delta}$ with $\delta_1 \ne \delta_2$ breaks the Weyl symmetry $\mathcal{W}_{\mathfrak{su}(2)}$.} The point $\gamma = -\left(\frac{\mathfrak{q}}2,\frac{\mathfrak{q}}2\right) \in \partial H'$ splits the critical line into the two rays
\begin{equation} \label{eq:M1TwoRays}
\partial H'_1 =\left\{ (x_1,x_2) \in \partial H' \, \middle| \, x_1>-\tfrac{\mathfrak{q}}{2} \right\} \ ,
\quad \partial H'_2 =\left\{ (x_1,x_2) \in \partial H' \, \middle| \, x_1<-\tfrac{\mathfrak{q}}{2} \right\}\ ,
\end{equation}
which according to the conventions in Appendix~\ref{sec:2dMB} correspond in the respective regimes to the rays
\begin{equation}
\partial H_+ \,=\,\begin{cases} \partial H'_1 & \text{for }r\gg0 \\ \partial H'_2 & \text{for }r\ll0 \end{cases} \ , \qquad
\partial H_- \,=\,\begin{cases} \partial H'_2 & \text{for }r\gg0 \\ \partial H'_1 & \text{for }r\ll0 \end{cases} \ .
\end{equation}
Thirdly, we calculate the intersection points of the divisors with $\partial H'$. Up to leading order as $\varepsilon \to 0^+$ they are given by
\begin{equation}\label{eq:M1qinwhichRay}
\begin{aligned}
q^{n_P}_P\,&=\,\left(\tfrac{1+\frac{m_a}{2}+n_P-\mathfrak{q}}{\varepsilon(\delta_{1}-\delta_{2})}, -\tfrac{1+\frac{m_a}{2}+n_P-\mathfrak{q}}{\varepsilon(\delta_{1}-\delta_{2})} \right)\quad \in\, \begin{cases} \partial H'_1 & \delta_1>\delta_2\\ \partial H'_2 & \delta_2>\delta_1 \end{cases} \ , \\
q^{n_{Q_1}}_{Q_1}\,&=\,\left(1+\tfrac{3 m_a-m_n}{4}+n_{Q_1}-2\mathfrak{q},-1-\tfrac{3 m_a-m_n}{4}-n_{Q_1}+\mathfrak{q}\right)\quad \in\, \partial H'_1 \ , \\
q^{n_{Q_2}}_{Q_2}\,&=\,\left(-1-\tfrac{3 m_a+m_n}{4}-n_{Q_2}+\mathfrak{q},1+\tfrac{3 m_a+m_n}{4}+n_{Q_2}-2\mathfrak{q}\right)\quad \in\, \partial H'_2 \ , \\
q^{n_{X_1}}_{X_1}\,&=\,\left(-\tfrac{m_a+m_n}{4}+n_{X_1},\tfrac{m_a+m_n}{4}-n_{X_1}-\mathfrak{q}\right)\quad \in\, \partial H'_1 \ ,\\
q^{n_{X_2}}_{X_2}\,&=\,\left(\tfrac{m_a-m_n}{4}-n_{X_2}-\mathfrak{q},-\tfrac{m_a-m_n}{4}+n_{X_2}\right)\quad \in\, \partial H'_2 \ ,
\end{aligned}
\end{equation}
where the respective constraints on the integers $n_P, n_{Q_1},n_{Q_2},n_{X_1},n_{X_2}$ and the condition~$0<\mathfrak{q}<\frac23$ determine the associated rays $\partial H'_1$ and $\partial H'_2$.
Now we are ready to determine those pairs of divisors associated to poles that appear in the sets $\Pi_\pm$, which are defined in eq.~\eqref{eq:P2pm} via the index set \eqref{eq:I2pm}. To this end we need to consider the two disjoint half-spaces $H'_1$ and $H'_2$ bounded by the critical line
\begin{equation}
\begin{aligned}
H'_1 &= \left\{\,\boldsymbol{x} \in \mathbb{R}^2 \, \middle| \, \sum_{i=1}^2 \left(1+\varepsilon\,\delta_{i}\right)x_i > -\left(1+\varepsilon\,\sum_{i=1}^2\frac{\delta_{i}}{2}\right)\,\mathfrak{q} \right\} \ ,\\
H'_2 &= \left\{\,\boldsymbol{x} \in \mathbb{R}^2 \, \middle| \, \sum_{i=1}^2 \left(1+\varepsilon\,\delta_{i}\right)x_i < -\left(1+\varepsilon\,\sum_{i=1}^2\frac{\delta_{i}}{2}\right)\,\mathfrak{q} \right\} \ ,
\end{aligned}
\end{equation}
such that the relevant half-space $H$ for the respective gauged linear sigma model phases is
\begin{equation}
H \,=\,\begin{cases} H'_1 & \text{for }r\gg0 \\ H'_2 & \text{for }r\ll0 \end{cases} \ .
\end{equation}
Then choosing the orientation of the intersections conveniently, we find
\begin{equation} \label{eq:PipmM1P1}
\Pi_+ \,=\,\begin{cases}
\left\{\, p^{\vec n_8}_{\vec\jmath_8} \,\right\} & \text{ for } r\gg 0 \ , \\[1.5ex]
\left\{\, p^{\vec n_i}_{\vec\jmath_i}\,\middle|\, i=1,2,3,4,5 \,\right\} & \text{ for } r\ll 0,\ \delta_1>\delta_2 \ , \\[1.5ex]
\left\{\, p^{\vec n_i}_{\vec\jmath_i}\,\middle|\, i=3,4,5,6,7 \,\right\} & \text{ for } r\ll 0,\ \delta_2>\delta_1 \ ,
\end{cases} \qquad
\Pi_- = \emptyset \ ,
\end{equation}
in terms of the (oriented) divisor intersections with the labels
\begin{equation}\label{eq:M1jLables}
\begin{aligned}
&(\vec\jmath_1,\vec n_1)\,=\, (Q_2,P; n_{Q_2},n_P) \ , &&(\vec\jmath_2,\vec n_2)\,=\, (X_2,P; n_{X_2},n_P) \ , \\
&(\vec\jmath_3,\vec n_3)\,=\, (Q_2,Q_1; n_{Q_2},n_{Q_1}) \ , &&(\vec\jmath_4,\vec n_4)\,=\, (X_2,Q_1; n_{X_2},n_{Q_1}) \ , \\
&(\vec\jmath_5,\vec n_5)\,=\, (Q_2,X_1; n_{Q_2},n_{X_1}) \ , &&(\vec\jmath_6,\vec n_6)\,=\, (P,Q_1; n_{P},n_{Q_1}) \ , \\
&(\vec\jmath_7,\vec n_7)\,=\, (P,X_1; n_{P},n_{X_1}) \ , &&(\vec\jmath_8,\vec n_8)\,=\, (X_1,X_2; n_{X_1},n_{X_2}) \ , \\
\end{aligned}
\end{equation}
and the pole loci
\begin{equation}
\begin{aligned}
p^{\vec n_1}_{\vec\jmath_1} \,&=\, \left(n_P - n_{Q_2} -\tfrac{m_a+m_n}{4} , -1+n_{Q_2}-2n_P -\tfrac{m_a-m_n}{4}\right) \ , \\
p^{\vec n_2}_{\vec\jmath_2} \,&=\,\left(-1-n_{X_2}-n_P -\tfrac{m_a+m_n}{4} , n_{X_2} -\tfrac{m_a-m_n}{4}\right) \ , \\
p^{\vec n_3}_{\vec\jmath_3} \,&=\, \left(\tfrac{-1+n_{Q_1}-2n_{Q_2}}{3}-\tfrac{m_a+m_n}{4},\tfrac{-1+n_{Q_2}-2n_{Q_1}}{3}-\tfrac{m_a-m_n}{4}\right) \ , \\
p^{\vec n_4}_{\vec\jmath_4} \,&=\,\left(-1-n_{Q_1}-2n_{X_2} -\tfrac{m_a+m_n}{4},n_{X_2} -\tfrac{m_a-m_n}{4}\right) \ , \\
p^{\vec n_5}_{\vec\jmath_5} \,&=\, \left(n_{X_1} -\tfrac{m_a+m_n}{4},-1-n_{Q_2}-2n_{X_1} -\tfrac{m_a-m_n}{4}\right) \ , \\
p^{\vec n_6}_{\vec\jmath_6} \,&=\, \left(-1+n_{Q_1}-2n_P-\tfrac{m_a+m_n}{4},n_P - n_{Q_1} -\tfrac{m_a-m_n}{4}\right)\ , \\
p^{\vec n_7}_{\vec\jmath_7} \,&=\, \left( n_{X_1} -\tfrac{m_a+m_n}{4},-1-n_P-n_{X_1}-\tfrac{m_a-m_n}{4}\right) \ , \\
p^{\vec n_8}_{\vec\jmath_8} \,&=\,\left( n_{X_1} -\tfrac{m_a+m_n}{4}, n_{X_2} -\tfrac{m_a-m_n}{4}\right) \ .
\end{aligned}\label{eq:PolesM1P1}
\end{equation}
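As a consistency check, one may verify symbolically that each locus solves its two defining divisor equations from eq.~\eqref{eq:DivisModel1}. A minimal sympy sketch (variable names ours) for three representative loci reads:
\begin{verbatim}
# Sketch: check that representative pole loci annihilate their divisors.
import sympy as sp

ma, mn, nP, nQ1, nQ2, nX1, nX2, x1, x2 = sp.symbols(
    'm_a m_n n_P n_Q1 n_Q2 n_X1 n_X2 x1 x2')

D_P  = x1 + x2 + nP + ma/2 + 1
D_Q1 = x1 + 2*x2 + nQ1 + (3*ma - mn)/4 + 1
D_Q2 = 2*x1 + x2 + nQ2 + (3*ma + mn)/4 + 1
D_X1 = x1 - nX1 + (ma + mn)/4
D_X2 = x2 - nX2 + (ma - mn)/4

def vanishes(D, p):
    return sp.expand(D.subs({x1: p[0], x2: p[1]})) == 0

p1 = (nP - nQ2 - (ma+mn)/4, -1 + nQ2 - 2*nP - (ma-mn)/4)   # (Q2,P)
p3 = ((-1 + nQ1 - 2*nQ2)/3 - (ma+mn)/4,
      (-1 + nQ2 - 2*nQ1)/3 - (ma-mn)/4)                     # (Q2,Q1)
p8 = (nX1 - (ma+mn)/4, nX2 - (ma-mn)/4)                     # (X1,X2)

assert vanishes(D_Q2, p1) and vanishes(D_P, p1)
assert vanishes(D_Q2, p3) and vanishes(D_Q1, p3)
assert vanishes(D_X1, p8) and vanishes(D_X2, p8)
\end{verbatim}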
This allows us to express the two sphere partition function according to eq.~\eqref{eq:MB2D} as
\begin{equation}
Z_{S^2}^{1,12,6}(r,\theta) \,=\,- \frac{e^{-4\pi r \mathfrak{q}}}{2} \,\lim_{\varepsilon\to0^+}\,\sum_{(m_a,m_n)\in\Lambda_\mathfrak{m}}
\, \sum_{\boldsymbol{x}\in\Pi_+}\operatorname{Res}_{\boldsymbol{x}} \omega'(\mathfrak{m},\boldsymbol{x}) \ ,\label{eq:ZM1}
\end{equation}
where $\Pi_+$ depends both on the sign of the parameter $r$ and the regulator $\boldsymbol{\delta}$.
For consistency, in the limit $\varepsilon \to 0^+$ the two sphere partition function~$Z_{S^2}^{1,12,6}$ must not depend on the perturbation $\boldsymbol{\delta}$. According to eq.~\eqref{eq:PipmM1P1}, this is obviously true in the phase $r\gg0$, whereas it is not manifest in the regime $r\ll0$. In the latter phase, it is a consequence of the Weyl transformation~\eqref{eq:Wsu(2)} generated by $\mathfrak{s}$, which acts upon the divisor intersections~\eqref{eq:PipmM1P1} as
\begin{equation}
\begin{aligned}
&(\vec\jmath_1,\vec n_1) \stackrel{\mathfrak{s}}{\longmapsto} (\vec\jmath_6,\vec n_6) \ ,
&&(\vec\jmath_2,\vec n_2) \stackrel{\mathfrak{s}}{\longmapsto} (\vec\jmath_7,\vec n_7) \ ,
&&(\vec\jmath_3,\vec n_3) \stackrel{\mathfrak{s}}{\longmapsto} (\vec\jmath_3,\vec n_3) \ , \\
&(\vec\jmath_4,\vec n_4) \stackrel{\mathfrak{s}}{\longmapsto} (\vec\jmath_5,\vec n_5) \ ,
&&(\vec\jmath_5,\vec n_5) \stackrel{\mathfrak{s}}{\longmapsto} (\vec\jmath_4,\vec n_4) \ ,
&&(\vec\jmath_6,\vec n_6) \stackrel{\mathfrak{s}}{\longmapsto} (\vec\jmath_1,\vec n_1) \ ,\\
&(\vec\jmath_7,\vec n_7) \stackrel{\mathfrak{s}}{\longmapsto} (\vec\jmath_2,\vec n_2) \ ,
&&(\vec\jmath_8,\vec n_8) \stackrel{\mathfrak{s}}{\longmapsto} (\vec\jmath_8,\vec n_8) \ .
\end{aligned}
\end{equation}
Here we have taken into account that the minus signs from the orientation-reversal of the ordered pair of intersecting divisors are compensated by the sign-reversal of the volume form in eq.~\eqref{eq:Wsu(2)vol}. These transformations together with eq.~\eqref{eq:PipmM1P1} demonstrate that the end result is indeed independent of the perturbation $\boldsymbol{\delta}$.
\subsubsection{The $SSSM_{1,12,6}$ phase $r\gg 0$}
In this subsection we specialize to the phase $r\gg0$. From eq.~\eqref{eq:PipmM1P1} the relevant set of poles $\Pi_+$ is seen to be independent of the regulator $\boldsymbol{\delta}$. We thus expect to manifestly recover the antisymmetry under $\mathfrak{s}$ in the limit $\varepsilon \to 0^+$. In fact, this limit will turn out to be trivial in the present case. In order to illustrate the mechanism, we shall however keep $\varepsilon >0$ and choose $\delta_{1} = 1/(2\pi r) >\delta_{2}=0$ for definiteness.
For a given $(m_a,m_n)\in\Lambda_{\mathfrak{m}}$ we have
\begin{equation}
\begin{aligned}
\Pi_+ &= \left\{\,p^{\vec n_8}_{\vec\jmath_8}\,\middle|\, n_{X_1}\geq \operatorname{Max}\left[0,\tfrac{m_a+m_n}{2}\right],\ n_{X_2}\geq \operatorname{Max}\left[0,\tfrac{m_a-m_n}{2}\right]\right\}\ ,\\
p^{\vec n_8}_{\vec\jmath_8} &= \left( n_{X_1} -\tfrac{m_a+m_n}{4}, n_{X_2} -\tfrac{m_a-m_n}{4}\right)\ .
\end{aligned}
\end{equation}
Upon changing variables to
\begin{equation}
a = n_{X_1}+n_{X_2}- m_a\ ,\quad b=n_{X_1}-\tfrac{m_a+m_n}{2}\ , \quad c = n_{X_1}+n_{X_2}\ , \quad d=n_{X_1} \ ,
\end{equation}
the sums in eq.~\eqref{eq:ZM1} symbolically simplify to
\begin{equation}
\sum_{(m_a,m_n)\in\Lambda_\mathfrak{m}} \, \sum_{\boldsymbol{x}\in\Pi_+} \quad \longrightarrow \quad \sum_{a=0}^\infty\, \sum_{c=0}^\infty \,\sum_{b=0}^a \,\sum_{d=0}^c \ .
\end{equation}
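The asserted simplification of the summation domain is easily probed by brute force; the sketch below (cutoff $B$ and function names ours) checks on a finite window that the change of variables respects the constraints and is invertible:
\begin{verbatim}
# Sketch: (m_a, m_n, n_X1, n_X2) -> (a, b, c, d) maps the summation
# domain onto { a, c >= 0, 0 <= b <= a, 0 <= d <= c } invertibly.
def forward(ma, mn, nX1, nX2):
    return (nX1 + nX2 - ma, nX1 - (ma + mn)//2, nX1 + nX2, nX1)

def backward(a, b, c, d):
    ma = c - a
    return (ma, 2*(d - b) - ma, d, c - d)

B = 6
domain = [(ma, mn, nX1, nX2)
          for ma in range(-B, B + 1) for mn in range(-B, B + 1)
          if (ma + mn) % 2 == 0
          for nX1 in range(max(0, (ma + mn)//2), B)
          for nX2 in range(max(0, (ma - mn)//2), B)]
for t in domain:
    a, b, c, d = forward(*t)
    assert a >= 0 and c >= 0 and 0 <= b <= a and 0 <= d <= c
    assert backward(a, b, c, d) == t
\end{verbatim}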
Introducing coordinates around the large volume point
\begin{equation}
z = e^{- 2\pi r + i \theta}, \quad \overline{z} =e^{- 2\pi r - i \theta}\ , \label{eq:zvar}
\end{equation}
the partition function takes the form
\begin{equation}
Z_{S^2,r\gg0}^{1,12,6}(r,\theta)=-\frac{\left(z \overline{z}\right)^\mathfrak{q}}{2} \lim_{\varepsilon \to 0^+} \operatorname{Res}_{\boldsymbol{x}=0} \left( e^{-2x_1\varepsilon} Z_{\text{sing}} \left|z^{x_1+x_2}\sum_{a=0}^\infty (-z)^a \sum_{b=0}^a Z_{\text{reg}} e^{-b\varepsilon}\right|^2\right), \label{eq:ZModel1Final}
\end{equation}
with
\begin{equation}\label{eq:ZModel1FinalZusatz}
\begin{aligned}
Z_{\text{sing}} &=\pi^7 \, \frac{\text{sin}\left[\pi (x_1+x_2)\right]^3\text{sin}\left[\pi (2x_1+x_2)\right]\text{sin}\left[\pi (x_1+2x_2)\right]}{\text{sin}\left(\pi x_1\right)^6\text{sin}\left(\pi x_2\right)^6}\ ,\\
Z_{\text{reg}} &=(a-2b-x_1+x_2) \frac{\Gamma (1+a+x_1+x_2)^3}{\Gamma(1+b+x_1)^6 \Gamma(1+a-b+x_2)^6}\\
&\, \qquad \quad\cdot \Gamma(1+a+b+2x_1+x_2)\Gamma(1+2a-b+x_1+2x_2)\ ,
\end{aligned}
\end{equation}
Here the residue is evaluated with respect to the oriented pair of divisors $(x_1,x_2)$, and complex conjugation is defined to not act on $x_1$ and $x_2$. Since for any given order in $z$ and $\overline{z}$ there are only finitely many terms, the limit $\varepsilon \to 0^+$ can safely be taken before summation; had the sum been infinite, it would have been regularized automatically by the factors at $\varepsilon>0$. Note that the transformation $\mathfrak{s}$ acts on eq.~\eqref{eq:ZModel1Final} by
\begin{equation}
\mathfrak{s}:\, (x_1,x_2,b,d) \longmapsto (x_2,x_1,a-b,c-d) \ ,
\end{equation}
under which $Z_{S^2,r\gg0}^{1,12,6}$ for $\varepsilon = 0$ indeed displays the Weyl symmetry.
From eq.~\eqref{eq:ZModel1Final} the partition function can now be evaluated to any fixed order in $z$ and $\overline{z}$. By exploiting the relation~\eqref{eq:ZtoK}, we want to read off geometric data of the associated infrared target Calabi-Yau threefold $\mathcal{X}_{1,12,6}$ from the partition function~$Z_{S^2,r\gg0}^{1,12,6}$. For this purpose it is sufficient to keep terms of order $0$ in $\overline{z}$ only, i.e., we can set $c = d = 0$. Keeping $a$ and $b$ still arbitrary, the fundamental period $\omega^{1,12,6}_{0,r\gg0}(z)$ is then found as the coefficient function of $\left(\operatorname{log}\, z\right)^3$. Fixing its normalization such that the expansion starts with $1$, we find
\begin{equation} \label{eq:M1P1FundPeriod}
\begin{aligned}
\omega^{1,12,6}_{0,r\gg0}(z)
& \,=\, \sum_{a=0}^\infty \sum_{b=0}^a \frac{a!^3(2a-b)!(a+b)!}{(a-b)!^6 b!^6}\bigg[ 1 + (2b-a) \left(H_{a+b}-6 H_{b}\right) \bigg] (-z)^a \\
& \,=\, 1+7 z + 199 z^2 +8\,359 z^3 +423\,751 z^4+23\,973\,757 z^5 + \ldots \, ,
\end{aligned}
\end{equation}
where $H_n$ is the $n$-th harmonic number. We then use the Euler characteristic $\chi(\mathcal{X}_{1,12,6})=-102$ calculated in eq.~\eqref{eq:DataCY1} and by a K\"ahler transformation\footnote{Here, this K\"ahler transformation is division by $8\pi^3(z\overline{z})^{\mathfrak{q}} \left|\omega^{1,12,6}_{0,r\gg0}(z)\right|^2$ with $\omega^{1,12,6}_{0,r\gg0}(z)$ as in eq.~\eqref{eq:M1P1FundPeriod}.} fix the overall normalization of $Z_{S^2,r\gg0}^{1,12,6}$ to match the canonical large volume form~\eqref{eq:expK}. Subsequently, we can read off the degree of $\mathcal{X}_{1,12,6}$ and the integral genus zero Gromov--Witten invariants $N_d$ at degree $d$
\begin{equation} \label{eq:GW1}
\begin{aligned}
\text{deg}(\mathcal{X}_{1,12,6})& = 33 \ , \\
N_d(\mathcal{X}_{1,12,6}) &=
\begin{cases}
252, \quad 1\,854, \quad 27\,156, \quad 567\,063, \quad 14\,514\,039, \\
424\,256\,409, \quad 13\,599\,543\,618, \quad 466\,563\,312\,360, \\
16\,861\,067\,232\,735, \quad 634\,912\,711\,612\,848, \quad \ldots
\end{cases}
\end{aligned}
\end{equation}
These results are in agreement with eq.~\eqref{eq:DataCY1} and the Gromov--Witten invariants determined in ref.~\cite{Miura:2013arxiv}, and they coincide with the calculated data obtained in the classification program of one parameter Picard--Fuchs operators \cite{vanStraten:2012db,Hofmann:2013PhDThesis}.
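As a further numerical cross-check, the series coefficients in eq.~\eqref{eq:M1P1FundPeriod} are reproduced by a few lines of exact rational arithmetic (a sketch in our own notation, not part of any published code):
\begin{verbatim}
# Sketch: regenerate the expansion 1 + 7z + 199z^2 + 8359z^3 + ...
from fractions import Fraction as F
from math import factorial as fac

def H(n):                       # n-th harmonic number
    return sum(F(1, j) for j in range(1, n + 1))

def coeff(a):                   # coefficient of z^a
    s = F(0)
    for b in range(a + 1):
        pref = F(fac(a)**3 * fac(2*a - b) * fac(a + b),
                 fac(a - b)**6 * fac(b)**6)
        s += pref * (1 + (2*b - a) * (H(a + b) - 6*H(b)))
    return (-1)**a * s

print([int(coeff(a)) for a in range(6)])
# -> [1, 7, 199, 8359, 423751, 23973757]
\end{verbatim}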
\subsubsection{The $SSSM_{1,12,6}$ phase $r\ll0$}\label{sec:Model1Strong}
We turn to the discussion of the phase $r\ll 0$. According to eq.~\eqref{eq:PipmM1P1}, the set $\Pi_+$ now depends on the sign of $\delta_{1}-\delta_{2}$. Therefore we cannot expect to obtain a result that in the limit $\varepsilon \to 0^+$ is manifestly symmetric with respect to the Weyl symmetry generated by $\mathfrak{s}$. An equivalent argument is that --- irrespective of which case is chosen --- the relevant pairs of divisors do not separate into full orbits under $\mathfrak{s}$, as has been the case for $r\gg0$. We could restore manifest antisymmetry by adding the two terms obtained for the two different choices. As was shown at the end of Section~\ref{sec:M1General}, these two terms are, however, equal anyway. Adding them would thus be mere cosmetics. We proceed by choosing $\delta_{1} = -\frac1{2\pi r} >\delta_{2}=0$.
As seen from eq.~\eqref{eq:PipmM1P1} there are now five sets of poles contributing to $\Pi_+$. Since these five sets might have non-trivial intersections, independently summing over them would wrongly count certain poles several times. In general this can be accounted for by dividing the contribution from each pole by the number of pairs it belongs to. We, however, proceed in a different way, which allows us to write the partition function in a more compact form. As we shall demonstrate,
\begin{equation} \label{eq:M1P2TwoSets}
\Pi_+ = \left\{\, p^{\vec n_i}_{\vec\jmath_i}\,\middle|\, i=1,2,3,4,5 \,\right\} = \Pi^1_+ \cup \Pi^2_+ \ ,
\end{equation}
in terms of the disjoint sets
\begin{equation}
\begin{aligned}
\Pi^1_+ &= \left\{\, p^{\vec n_1}_{\vec\jmath_1}\,\middle|\,n_{Q_2}\geq \operatorname{Max}\left[0,-\frac{3m_a+m_n}{2}\right],n_{P}\geq \operatorname{Max}\left[0,-m_a\right] \,\right\},\\
\Pi^2_+ &= \left\{\, p^{\vec n_3}_{\vec\jmath_3}\,\middle|\,n_{Q_2/Q_1}\geq \operatorname{Max}\left[0,-\frac{3m_a\pm m_n}{2}\right],n_{Q_1}+n_{Q_2} \notin 3 \mathbb{N}_0+1 \,\right\}.
\end{aligned}
\end{equation}
In order to show an inclusion of the type $\{p^{\vec n}_{\vec \jmath}\} \subset \{p^{\vec m}_{\vec k}\}$ we proceed in two steps:
\begin{enumerate}
\item For given $m_a$ and $m_n$, equate $p^{\vec n}_{\vec\jmath}=p^{\vec m}_{\vec k}$ and solve for $n_{k_1}$ and $n_{k_2}$.
\item Assume that $n_{\jmath_1}$ and $n_{\jmath_2}$ fulfill their constraints given in eq.~\eqref{eq:DivisModel1}. If $n_{k_1}$ and $n_{k_2}$ --- as given by the previously obtained equations --- fulfill their constraints as well, the desired results have been established.
\end{enumerate}
Let us illustrate this procedure by showing $\{p^{\vec{n}_2}_{\vec \jmath_2}\} \subset \{p^{\vec n_1}_{\vec \jmath_1}\}$ explicitly. Denoting $\vec{n}_2 = \left(\overline{n_{X_2}},\overline{n_P}\right)$ to distinguish the integers of divisors common to both pairs, we find
\begin{equation}
\begin{aligned}
\text{(i)}& \quad \left(\begin{array}{l}n_P - n_{Q_2} -\frac{m_a+m_n}{4} \\ -1+n_{Q_2}-2n_P -\frac{m_a-m_n}{4}\end{array}\right) = \left(\begin{array}{l} -1-\overline{n_{X_2}}-\overline{n_P} -\frac{m_a+m_n}{4} \\ \overline{n_{X_2}} -\frac{m_a-m_n}{4} \end{array}\right), \\[0.3em]
\Rightarrow \text{(ii)}& \quad \begin{array}{ll} n_P = \overline{n_P} \geq \operatorname{Max}\left[0,-m_a\right]\\[0.1em] n_{Q_2} = 1 + 2 \overline{n_P} + \overline{n_{X_2}} \geq 1 +\operatorname{Max}\left[0,-\frac{3m_a+m_n}{2}\right].
\end{array}
\end{aligned}\label{eq:inclusion}
\end{equation}
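This two-step procedure can also be probed numerically. The sketch below (window sizes ours) implements the solved relations of step (i) and confirms the constraints of step (ii) for a few lattice points:
\begin{verbatim}
# Sketch: {p^{n_2}} lies in {p^{n_1}} via n_P = nPbar and
# n_Q2 = 1 + 2*nPbar + nX2bar.
from fractions import Fraction as F
from itertools import product

def p1(nP, nQ2, ma, mn):
    return (nP - nQ2 - F(ma + mn, 4),
            -1 + nQ2 - 2*nP - F(ma - mn, 4))

def p2(nX2, nP, ma, mn):
    return (-1 - nX2 - nP - F(ma + mn, 4), nX2 - F(ma - mn, 4))

for ma, mn in [(0, 0), (2, 0), (-1, 1), (4, -20)]:
    lo_P, lo_X2 = max(0, -ma), max(0, (ma - mn)//2)
    lo_Q2 = max(0, -(3*ma + mn)//2)
    for nPbar, nX2bar in product(range(lo_P, lo_P + 5),
                                 range(lo_X2, lo_X2 + 5)):
        nP, nQ2 = nPbar, 1 + 2*nPbar + nX2bar       # step (i)
        assert p2(nX2bar, nPbar, ma, mn) == p1(nP, nQ2, ma, mn)
        assert nP >= lo_P and nQ2 >= lo_Q2          # step (ii)
\end{verbatim}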
Similar calculations show further inclusions and with these eq.~\eqref{eq:M1P2TwoSets} can be established. With eq.~\eqref{eq:ZM1} we then find
\begin{equation}\label{eq:ZM1P2}
\begin{aligned}
Z_{S^2,r\ll0}^{1,12,6}(r,\theta) \,=\,&- \frac{e^{-4\pi r \mathfrak{q}}}{2} \, \lim_{\varepsilon\to0^+}\,\sum_{(m_a,m_n)\in\Lambda_\mathfrak{m}}
\, \sum_{\boldsymbol{x}\in\Pi^1_+}\operatorname{Res}_{\boldsymbol{x}} \omega'(\mathfrak{m},\boldsymbol{x})\\
&- \frac{e^{-4\pi r \mathfrak{q}}}{2} \, \underbrace{\lim_{\varepsilon\to0^+}\,\sum_{(m_a,m_n)\in\Lambda_\mathfrak{m}}
\, \sum_{\boldsymbol{x}\in\Pi^2_+}\operatorname{Res}_{\boldsymbol{x}} \omega'(\mathfrak{m},\boldsymbol{x})}_{A}.
\end{aligned}
\end{equation}
Next we show that $A$ is equal to zero. It will prove useful to introduce the coordinates
\begin{equation}
w =z^{-1} = e^{2\pi r - i \theta}, \quad \overline{w} =(\overline{z})^{-1} = e^{2\pi r + i \theta}
\end{equation}
at this point already. In terms of the new variables
\begin{equation}\label{eq:M1P2QQVars1}
a =n_{Q_1}+n_{Q_2}+3m_a \ , \quad b=n_{Q_1}+\frac{3m_a-m_n}{2} \ , \quad c = n_{Q_1}+n_{Q_2} \ , \quad d = n_{Q_1} \ ,
\end{equation}
the summation --- including powers of $w$ and $\overline{w}$ --- simplifies according to
\begin{align}
\sum_{(m_a,m_n)\in\Lambda_\mathfrak{m}} \, \sum_{\boldsymbol{x}\in\Pi^2_+}\longrightarrow (w \overline{w})\sum_{a=0}^{\infty} w^{\frac{a-1}{3}}\,\sum_{c=0}^{\infty} \left(\overline{w}\right)^{\frac{c-1}{3}}\,\sum_{b=0}^{a}\,\sum_{d=0}^{c}\,_{\big| \substack{c-a \,\in\, 3\mathbb{Z}\\ c\,\notin\,3\mathbb{N}_0 +1}}. \label{eq:ZPowQQ}
\end{align}
As for eq.~\eqref{eq:ZModel1Final}, the limit $\varepsilon \to 0^+$ in $A$ can be taken before summation, since there are only finitely many terms for a given order of $w$ and $\overline{w}$. From the right-hand side of eq.~\eqref{eq:ZPowQQ} we see that the sums produce fractional powers of $w$ and $\overline{w}$ only. Indeed there are three cases
\begin{equation}\label{eq:ZPowQQ3Cases}
\begin{tabular}{lllll}
(1) & $a \in 3\mathbb{N}_0$ & $\Rightarrow $ & $c \in 3\mathbb{N}_0$, & powers of $w$ and $\overline{w}$ in $\mathbb{N}_0-\frac{1}{3}$ \ ,\\
(2) & $a \in 3\mathbb{N}_0+2$ & $\Rightarrow $ & $c \in 3\mathbb{N}_0+2$, & powers of $w$ and $\overline{w}$ in $\mathbb{N}_0+\frac{1}{3}$\ , \\
(3) & $a \in 3\mathbb{N}_0+1$ & $\Rightarrow $ & $c \in 3\mathbb{N}_0+1$, & powers of $w$ and $\overline{w}$ in $\mathbb{N}_0$ \ ,
\end{tabular}
\end{equation}
where the last case is excluded in eq.~\eqref{eq:ZPowQQ}. Due to the residual K\"ahler transformation that still needs to be fixed, it is not obvious from the present discussion which of these three cases corresponds to integer powers in the partition function. We will later on see that the powers within the first term in eq.~\eqref{eq:ZM1P2} are spaced by integers. All of them are --- as the terms in $A$ are as well --- multiplied by $\left(w\overline{w}\right)^{1-\mathfrak{q}}$. This common prefactor will be removed by the final K\"ahler transformation, such that the first two cases in eq.~\eqref{eq:ZPowQQ3Cases} correspond to fractional powers in the partition function. Precisely these two cases are realized in $A$, hence they independently have to sum to zero. We now show this cancellation by an explicit calculation.
In order to treat both cases simultaneously, we replace $a$ and $c$ introduced in eq.~\eqref{eq:M1P2QQVars1} with
\begin{equation} \label{eq:M1P2QQVars2}
n_a = \frac{a-2m}{3} \ ,\quad n_c = \frac{c-2m}{3} \quad \text{for }m=0,1 \ ,
\end{equation}
such that $m=0$ corresponds to the first and $m=1$ to the second case in eq.~\eqref{eq:ZPowQQ3Cases}. With this we have
\begin{align}
A &= -\sum_{m=0}^1 \left(w\overline{w}\right)^{\frac{2+2m}{3}}\operatorname{Res}_{\boldsymbol{x}=0}\left(Z_{\text{sing}} \left| \left(w\overline{w}\right)^{-\left(x_1+x_2\right)}\sum_{n_a=0}^\infty(-w)^{n_a} \sum_{b=0}^{3n_a+2m} Z_{\text{reg}} \right|^2 \right), \label{eq:M1P2A}
\end{align}
where
\small
$$
\begin{aligned}
Z_{\text{sing}} &= \frac{1}{\operatorname{sin}[\pi(2x_1+x_2)]\operatorname{sin}[\pi(x_1+2x_2)]}\cdot \frac{\operatorname{cos}\left[\pi\,\frac{1-8m+6x_1}{6}\right]^6\operatorname{cos}\left[\pi\,\frac{1+4m+6x_2}{6}\right]^6}{\pi^7 \operatorname{cos}\left[\pi\,\frac{1+4m-x_1-x_2}{6}\right]^3} \ ,\\
Z_{\text{reg}} &= \frac{(2b-2m-3n_a+x_1-x_2)\,\Gamma\left(\frac{1+4m}{3}-b+2n_a-x_1\right)^6\Gamma\left(\frac{1-2m}{3}+b-n_a-x_2\right)^6}{\Gamma\left(\frac{2+2m}{3}+n_a-x_1-x_2\right)^3\Gamma\left(1+b-x_1-2x_2\right)\Gamma\left(1+2m-b+3n_a-2x_1-x_2\right)} \ .
\end{aligned}
$$
\normalsize
Here, complex conjugation does not act on $x_1$ and $x_2$ and the residue is with respect to the oriented pair of divisors $(2x_1+x_2,x_1+2x_2)$. An explicit evaluation then gives
\begin{equation}
\begin{aligned}
A &= \frac{27\sqrt{3}}{512\pi^9}\sum_{m=0}^1 (-1)^{m+1}\Bigg|\sum_{n_a=0}^{\infty}\frac{(-1)^{n_a} w^{n_a+\frac{2+2m}{3}}}{\Gamma\left(\frac{2+2m}{3}+n_a\right)^3}\\
&\,\,\,\, \underbrace{\sum_{b=0}^{3n_a+2m}\frac{(2b-2m-3n_a)\Gamma\left(\frac{1+4m}{3}-b+2n_a\right)^6\Gamma\left(\frac{1-2m}{3}+b-n_a \right)^6}{\Gamma\left(1+b\right)\Gamma\left(1+2m-b+3n_a\right)}}_{=0}\Bigg|^2=0 \ .
\end{aligned}
\end{equation}
The inner sum vanishes due to antisymmetry of its summand under the transformation $b \longmapsto 3n_a+2m-b = a-b$, which is indeed nothing but $\mathfrak{s}$ expressed in terms of the variables introduced in eqs.~\eqref{eq:M1P2QQVars1} and \eqref{eq:M1P2QQVars2}. By this we have established
\begin{equation}\label{eq:ZM1P2_2}
Z_{S^2,r\ll0}^{1,12,6}(r,\theta) \,=\,-\frac{e^{-4\pi r \mathfrak{q}}}{2} \, \lim_{\varepsilon\to0^+}\,\sum_{(m_a,m_n)\in\Lambda_\mathfrak{m}}
\, \sum_{\boldsymbol{x}\in\Pi^1_+}\operatorname{Res}_{\boldsymbol{x}} \omega'(\mathfrak{m},\boldsymbol{x}) \ .
\end{equation}
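The vanishing of the inner sum established above is also easily confirmed numerically; a minimal floating point sketch (ours) reads:
\begin{verbatim}
# Sketch: the b-sum in the evaluation of A cancels pairwise under
# b -> 3*na + 2*m - b and therefore vanishes for every m and na.
from math import gamma

def terms(m, na):
    a = 3*na + 2*m
    return [(2*b - a)
            * gamma((1 + 4*m)/3 - b + 2*na)**6
            * gamma((1 - 2*m)/3 + b - na)**6
            / (gamma(1 + b) * gamma(1 + 2*m - b + 3*na))
            for b in range(a + 1)]

for m in (0, 1):
    for na in range(5):
        ts = terms(m, na)
        scale = sum(abs(t) for t in ts) or 1.0
        assert abs(sum(ts)) <= 1e-9 * scale
\end{verbatim}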
Similar to the previous calculations we introduce new variables
\begin{equation}\label{eq:M1P2SumVars}
a =n_{P}+m_a \ , \quad c =n_{P}\ ,\quad k= n_{Q_2}+\frac{3m_a+m_n}{2} \ ,\quad l = n_{Q_2}\ ,
\end{equation}
in terms of which the sums in eq.~\eqref{eq:ZM1P2_2} simplify. The partition function then takes the form
\begin{equation}
\begin{aligned}
Z_{S^2,r\ll0}^{1,12,6}(r,\theta) =&-\frac{\left(w \overline{w}\right)^{1-\mathfrak{q}}}{2} \lim_{\varepsilon \to 0^+}\\
& \operatorname{Res}_{\boldsymbol{x}=0} \left( e^{2x_1 \varepsilon} Z_{\text{sing}} \left|w^{-(x_1+x_2)}\sum_{a=0}^\infty w^a e^{\varepsilon a} \sum_{k=0}^\infty (-1)^k Z_{2} e^{-\varepsilon k}\right|^2\right)\text{,}
\end{aligned} \label{eq:ZM1P2Final}
\end{equation}
where
\begin{equation}
\begin{aligned}
Z_{\text{sing}} &= \frac{\operatorname{sin}(\pi x_1)^6\operatorname{sin}(\pi x_2)^6\operatorname{sin}\left[\pi \left(x_1+2x_2\right)\right]}{\pi^9 \operatorname{sin}\left[\pi (x_1+x_2)\right]^3\operatorname{sin}\left[\pi(2x_1+x_2)\right]}\\
Z_2 &= \left(1+3a-2k+x_1-x_2\right) \cdot \\
&\,\qquad \frac{\Gamma (-a+k-x_1)^6\Gamma (1+2a-k-x_2)^6\Gamma(-1-3a+k+x_1+2x_2)}{\Gamma(1+k-2x_1-x_2)\Gamma(1+a-x_1-x_2)^3}.
\end{aligned}\label{eq:ZM1P2FinalTerms}
\end{equation}
The residue is taken with respect to the ordering of divisors specified in eq.~\eqref{eq:M1jLables} and complex conjugation does not act on $x_1$ and $x_2$. At this point a few comments are in order:
\begin{enumerate}
\item For given powers of $w$ and $\overline{w}$ there are --- as opposed to eq.~\eqref{eq:ZModel1Final} --- infinitely many terms in eq.~\eqref{eq:ZM1P2Final}. At the technical level the appearance of this infinite sum can be understood from the following consideration: In eq.~\eqref{eq:ZM1P2_2} the partition function is given by summing over the residues at all poles stemming from the divisors $(D^{n_{Q_2}}_{Q_2},D^{n_P}_P)$. For fixed $n_P$ there are still infinitely many poles on the divisor $D^{n_P}_{P}$, enumerated by $n_{Q_2}$. Now recall that the line $\partial H$ --- given in eq.~\eqref{eq:UntiltedLine} --- by construction depends on the same linear combination of $x_1$ and $x_2$ that appears in the classical term $Z'_{\text{cl}}$, given after eq.~\eqref{eq:ZModel1}. Since this term gives the powers of $w$ and $\overline{w}$ once they have been introduced, their exponents are constant along lines parallel to $\partial H$, which is the case for $D^{n_P}_{P}$. Having tilted the critical line to $\partial H'$, the exponents of $w$ and $\overline{w}$ are no longer constant along $D^{n_P}_{P}$ but vary at order $\varepsilon$. This results in the factor $\operatorname{exp}(-\varepsilon k)$, which automatically regularizes the possibly divergent sum. Note that the factor $\operatorname{exp}(\varepsilon a)$ is unproblematic, as we do not require the sum over $a$ to converge.
\item The factor $Z_2$ in eq.~\eqref{eq:ZM1P2Final} is not regular at the origin for all values of $a$ and $k$. By construction every pole in the sum is located on the intersection of some $(D^{n_{Q_2}}_{Q_2},D^{n_P}_P)$. Depending on $a$ and $k$, the pole can, however, lie on additional divisors. This is the case if at least one of the Gamma functions in the numerator of $Z_2$ diverges at the origin. Intuitively this was to be expected due to inclusions of the type $\{p^{\vec n}_{\vec \jmath}\} \subset \{p^{\vec m}_{\vec k}\}$. Poles located on more than two divisors are treated as described in Appendix \ref{sec:MBDegenerate}.
\item As has already been argued at the beginning of this section, antisymmetry under $\mathfrak{s}$ is not manifest in eq.~\eqref{eq:ZM1P2Final}. In fact, the variables of summation introduced in eq.~\eqref{eq:M1P2SumVars} do not even transform amongst themselves. The variable $l = n_{Q_2}$, for example, is mapped to $n_{Q_1}$, which does not appear in eq.~\eqref{eq:M1P2SumVars} at all.
\end{enumerate}
From eq.~\eqref{eq:ZM1P2Final} the partition function can in principle be evaluated up to any fixed order in $w$ and $\overline{w}$, and using eq.~\eqref{eq:ZtoK} we can read off geometric data of the Calabi-Yau threefold~$\mathcal{Y}_{1,12,6}$. For this purpose it is sufficient to keep terms of order $0$ in $\overline{w}$, so we set $c=0$. For a given value of $a$, and by this a given power of $w$, it then depends on the values of $k$ and $l$ how many and which divisors intersect. Hence a distinction of cases is needed, which is, however, not a conceptual obstacle. For those cases in which $k$ and $l$ span an infinite range it is essential to calculate the residue without fixing their value, such that the infinitely many contributions can be summed up afterwards. This turns out to be computationally expensive. The sums would in fact be divergent without the regularization by $\operatorname{exp}(-\varepsilon k)$ and $\operatorname{exp}(-\varepsilon l)$. We observe that only finitely many terms contribute to the $(\operatorname{log}w)^3$ term at a given order, hence we are able to determine the exact fundamental period
\begin{equation} \label{eq:M1P2FundPeriod}
\begin{aligned}
\omega^{1,12,6}_{0,r\ll0}(w) &= \sum_{a=0}^\infty \sum_{n=0}^a (1+3a-2n)\cdot\frac{(2a-n)!^6}{n!\,(1+3a-n)!\,a!^3\,(a-n)!^6}\,(-w)^a \\
&= 1 - 11w +559 w^2 -42\,923 w^3+3\,996\,751 w^4 -416\,148\,761 w^5 +\ldots\ .
\end{aligned}
\end{equation}
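Again the first few coefficients are quickly cross-checked by exact rational arithmetic, in analogy to the sketch for the $r\gg0$ phase:
\begin{verbatim}
# Sketch: regenerate the expansion 1 - 11w + 559w^2 - 42923w^3 + ...
from fractions import Fraction as F
from math import factorial as fac

def coeff(a):                   # coefficient of w^a
    s = sum(F((1 + 3*a - 2*n) * fac(2*a - n)**6,
              fac(n) * fac(1 + 3*a - n) * fac(a)**3 * fac(a - n)**6)
            for n in range(a + 1))
    return (-1)**a * s

print([int(coeff(a)) for a in range(6)])
# -> [1, -11, 559, -42923, 3996751, -416148761]
\end{verbatim}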
Using $\chi\left(\mathcal{Y}_{1,12,6}\right)=-102$ we fix the overall normalization of the partition function such that it agrees with eq.~\eqref{eq:expK},\footnote{The required K\"ahler transformation is division by $8\pi^3(w\overline{w})^{1-\mathfrak{q}} \left|\omega^{1,12,6}_{0,r\ll0}(w)\right|^2$ with $\omega^{1,12,6}_{0,r\ll0}(w)$ as in eq.~\eqref{eq:M1P2FundPeriod}.} and then read off the degree as well as the leading integral genus zero Gromov--Witten invariants $N_d$
\begin{equation} \label{eq:X1126Inv}
\operatorname{deg}(\mathcal{Y}_{1,12,6}) \,=\, 21 \ , \qquad N_d(\mathcal{Y}_{1,12,6}) \,=\, 387, \ 4\,671, \quad \ldots \ .
\end{equation}
This is in agreement with eq.~\eqref{eq:DataCY2} and with the results in ref.~\cite{Miura:2013arxiv}, in which these Gromov--Witten invariants are determined by mirror symmetry for a conjectured second geometric Calabi--Yau threefold phase.
Note that this agreement also confirms that the strong coupling analysis performed in Section~\ref{sec:strongphase} is legitimate. In particular, we see that the Euler characteristic of the variety $\mathcal{Y}_{1,12,6}$ --- that is to say the Witten index $(-1)^F$ of the $SSSM_{1,12,6}$ --- is in accord with the two sphere partition function. Thus we justify in retrospect that --- at least for the specific skew symplectic sigma models discussed in this section --- the analyzed non-generic component \eqref{eq:Yvariety} of the degeneration locus is the only relevant contribution in the strong coupling phase $r\ll 0$.
\subsection{The model $SSSM_{2,9,6}$}
As the analysis of the skew symplectic sigma model{} $SSSM_{2,9,6}$ parallels in many aspects the discussion of the model $SSSM_{1,12,6}$ in Section~\ref{sec:M1General}, our presentation is less detailed here. In the following we focus on new features of the model $SSSM_{2,9,6}$.
The model $SSSM_{2,9,6}$ comes with the gauge group
\begin{equation}
G \,=\, \frac{U(1) \times \operatorname{USp}(4)}{\mathbb{Z}_2} \ ,
\end{equation}
and according to Table~\ref{tb:spec1} its irreducible representations are the $\operatorname{USp}(4)$ singlets $P_{[ij]}$ and $\phi_a$ with $U(1)$ charge $-2$ and $+2$ and the multiplets $Q$ and $X_i$ in the defining representation $\mathbf{4}$ of $\operatorname{USp}(4)$ and with $U(1)$ charge $-3$ and $+1$.\footnote{The Lie group $\operatorname{USp}(4)$ is isomorphic to $\operatorname{Spin}(5)$, and the defining representation~$\mathbf{4}$ of $\operatorname{USp}(4)$ is the spinor representation $\mathbf{4}_s$ of $\operatorname{Spin}(5)$.} Compared to the $SSSM_{1,12,6}$, the non-Abelian gauge group factor and the multiplicity of the singlet multiplets~$\phi_a$ are changed.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{USp4WeightLattice}
\caption{\label{fig:USp4WeightLattice}Depicted is the $\mathfrak{usp}(4)\simeq\mathfrak{so}(5)$ weight lattice generated by the fundamental weights $\omega_1$ and $\omega_2$. The weights of the defining representation~$\mathbf{4}$ are pictured as red circles and numerated for reference. The ten roots --- including two Cartan elements at the origin --- are represented by red squares, and $\alpha_1$ and $\alpha_2$ are the simple roots.}
\end{figure}
Let us recall the structure of the Lie algebra $\mathfrak{usp}(4)\simeq\mathfrak{so}(5)$, which is summarized in terms of the weight lattice in Figure~\ref{fig:USp4WeightLattice} and the Cartan matrix
\begin{equation}
A_{\mathfrak{usp}(4)} \,=\, \begin{pmatrix} 2 & -1 \\ -2 & 2 \end{pmatrix} \ .
\end{equation}
We identify the Dynkin labels~$(\lambda_1,\lambda_2)$ --- specifying the weight $\lambda_1\omega_1+\lambda_2\omega_2$ --- of the positive roots $\Delta^+$ as
\begin{equation}
\Delta^+ \,=\, \left\{\, (2,-1),\, (2,0),\, (0,1),\, (-2,2) \,\right\} \ ,
\end{equation}
and the Dynkin labels of the defining representation $\mathbf{4}$ as
\begin{equation}
w(\mathbf{4})\,=\, \left\{\, (1,0),\, (-1,1),\, (-1,0),\, (1,-1)\,\right\} \ .
\end{equation}
The pairing of the weight lattice is given by the symmetric quadratic form matrix of $\mathfrak{usp}(4)$, which yields
\begin{equation}
\langle \omega_1, \omega_1 \rangle \,=\, \langle \omega_1, \omega_2 \rangle \,=\, \frac12 \ , \quad
\langle \omega_2, \omega_2 \rangle \,=\, 1 \ .
\end{equation}
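These data are conveniently verified against the Cartan matrix and the weights of the $\mathbf{4}$; a short sympy sketch (ours):
\begin{verbatim}
# Sketch: the quadratic form reproduces the Cartan matrix of usp(4),
# and all weights of the defining representation have length 1/2.
import sympy as sp

G = sp.Matrix([[sp.Rational(1, 2), sp.Rational(1, 2)],
               [sp.Rational(1, 2), 1]])           # <omega_i, omega_j>
roots = [sp.Matrix([2, -1]), sp.Matrix([-2, 2])]  # simple roots

def pair(u, v):
    return (u.T * G * v)[0]

A = sp.Matrix(2, 2, lambda i, j:
              2 * pair(roots[i], roots[j]) / pair(roots[j], roots[j]))
assert A == sp.Matrix([[2, -1], [-2, 2]])

w4 = [sp.Matrix(v) for v in [(1, 0), (-1, 1), (-1, 0), (1, -1)]]
assert all(pair(w, w) == sp.Rational(1, 2) for w in w4)
\end{verbatim}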
As before, for the two sphere partition function we need to determine the magnetic quantum numbers $\mathfrak{m}$ with respect to the gauge group $G$. We first observe that the representations of the gauge group~$G$ are induced from those representations of the double cover~$U(1)\times\operatorname{USp}(4)$ corresponding to highest weight vectors $\lambda\in\Lambda_w$ of $\mathfrak{u}(1)\times\mathfrak{usp}(4)$ with
\begin{equation} \label{eq:hw2}
\lambda\,=\,\lambda_a q + \lambda_1 \omega_1 + \lambda_2 \omega_2 \quad
\text{for} \quad \lambda_a + \lambda_1 \in 2\mathbb{Z} \ .
\end{equation}
The Abelian charge $q$ is canonically normalized as
\begin{equation}
\langle q , q \rangle \,=\, 1 \ .
\end{equation}
Then we determine again the magnetic charge quanta $\mathfrak{m}$ by evaluating the generalized Dirac quantization condition \cite{Goddard:1976qe}. These are given by all those $\mathfrak{m}$ with $\langle \lambda , \mathfrak{m} \rangle\in\mathbb{Z}$ for all weights of the representations of the gauge group~$G$. For the highest weights~\eqref{eq:hw2}, we arrive at
\begin{equation}
\mathfrak{m} \,=\, \frac12 m_a q + m_1 \omega_1 + m_2 \omega_2 \quad\text{with}\quad m_1,m_a + m_2 \in 2\mathbb{Z} \ ,
\end{equation}
and the magnetic charge lattice
\begin{equation}
\Lambda_\mathfrak{m} \,=\, \left\{ \, (m_a,m_1,m_2)\in\mathbb{Z}^3 \,\middle|\, m_1, m_a+m_2 \in 2\mathbb{Z} \, \right\} \ .
\end{equation}
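The quantization condition is readily verified by brute force over a finite window of weights and fluxes (cutoff ours):
\begin{verbatim}
# Sketch: <lambda, m> is integral for all weights lambda with
# lambda_a + lambda_1 even and all fluxes m in Lambda_m.
from fractions import Fraction as F
from itertools import product

G = [[F(1, 2), F(1, 2)], [F(1, 2), F(1)]]   # <omega_i, omega_j>

def pairing(lam, m):
    val = F(lam[0] * m[0], 2)               # <lambda_a q, m_a q/2>
    for i in range(2):
        for j in range(2):
            val += lam[1 + i] * m[1 + j] * G[i][j]
    return val

B = 4
weights = [l for l in product(range(-B, B + 1), repeat=3)
           if (l[0] + l[1]) % 2 == 0]
fluxes = [m for m in product(range(-B, B + 1), repeat=3)
          if m[1] % 2 == 0 and (m[0] + m[2]) % 2 == 0]
assert all(pairing(l, m).denominator == 1
           for l in weights for m in fluxes)
\end{verbatim}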
Analogously to the other skew symplectic sigma model, we set
\begin{equation}
\boldsymbol{\sigma} \,=\, \frac12 \sigma_a q + \sigma_1 \omega_1+\sigma_2 \omega_2 \ , \qquad
\boldsymbol{r} = 2 r q \ , \qquad
\boldsymbol{\theta}=2 \theta q \quad \text{with} \quad \theta \sim \theta + 2 \pi \ .
\end{equation}
Now we are ready to spell out the two sphere partition function
\begin{equation} \label{eq:ZModel2Raw}
Z_{S^2}^{2,9,6}(r,\theta) \,=\, \frac{1}{64\pi^3}\!\!\sum_{(m_a,\vec m)\in\Lambda_\mathfrak{m}}\!\!
\int_{\mathbb{R}^3}\!\! d^3\boldsymbol{\sigma}\,
Z_G(\vec m,\vec\sigma)
Z_\text{matter}(\mathfrak{m},\boldsymbol{\sigma})
Z_{\text{cl}}(m_a,\sigma_a,r,\theta) \ ,
\end{equation}
where $\boldsymbol{\sigma}=(\sigma_a,\sigma_1,\sigma_2)=(\sigma_a,\vec\sigma)$, $\vec m=(m_1,m_2)$, $d^3\boldsymbol{\sigma} = \frac12 d\sigma_ad\sigma_1d\sigma_2$ and
\begin{equation}
\begin{aligned}
&Z_G(\vec m,\vec\sigma)\,=\,(-1)^{m_2}\cdot\frac{m_1^2 +4\sigma_1^2}{16}\cdot\frac{\left(m_1+m_2\right)^2+4\left(\sigma_1+\sigma_2\right)^2}{4}\\
&\qquad\qquad\qquad\cdot\frac{\left(\frac{m_1}{2}+m_2\right)^2+\left(\sigma_1+2\sigma_2\right)^2}{4}\cdot\frac{m_2^2 +4\sigma_2^2}{4} \ , \\[1ex]
&Z_{\text{cl}}(m_a,\sigma_a,r,\theta) \,=\, e^{-4 \pi i r \,\sigma_{a}-i\theta \, m_{a}} \ , \\[1ex]
&Z_\text{matter}(\mathfrak{m},\boldsymbol{\sigma}) \,=\,
Z_P(m_a,\sigma_a)^{15}\,Z_\phi(m_a,\sigma_a)^{9}\,Z_Q(\mathfrak{m},\boldsymbol{\sigma})\,
Z_X(\mathfrak{m},\boldsymbol{\sigma})^6 \ .
\end{aligned}
\end{equation}
The individual contributions to $Z_\text{matter}$ are given by
\small
$$
\begin{aligned}
Z_P &= \frac{\Gamma \left(\frac{m_{a}}{2}-\mathfrak{q}+i \sigma_{a}+1\right)}{\Gamma
\left(\frac{m_{a}}{2}+\mathfrak{q}-i \sigma_{a}\right)} \ , \hspace{33mm}
Z_\phi = \frac{\Gamma \left(-\frac{m_{a}}{2}+\mathfrak{q}-i \sigma_{a}\right)}{\Gamma \left(-\frac{m_{a}}{2}-\mathfrak{q}+i \sigma_{a}+1\right)} \ , \\
Z_Q &=\underbrace{\frac{\Gamma\left(\frac{3m_a}{4}-\frac{m_1}{4}-\frac{m_2}{4}-\frac{3\mathfrak{q}}{2}+\frac{3i\sigma_a}{2}-\frac{i\sigma_1}{2}-\frac{i\sigma_2}{2}+1\right)}{\Gamma\left(\frac{3m_a}{4}-\frac{m_1}{4}-\frac{m_2}{4}+\frac{3\mathfrak{q}}{2}-\frac{3i\sigma_a}{2}+\frac{i\sigma_1}{2}+\frac{i\sigma_2}{2}\right)}}_{Z_{Q_1}} \underbrace{\frac{\Gamma\left(\frac{3m_a}{4}-\frac{m_2}{4}-\frac{3\mathfrak{q}}{2}+\frac{3i\sigma_a}{2}-\frac{i\sigma_2}{2}+1\right)}{\Gamma\left(\frac{3m_a}{4}-\frac{m_2}{4}+\frac{3\mathfrak{q}}{2}-\frac{3i\sigma_a}{2}+\frac{i\sigma_2}{2}\right)}}_{Z_{Q_2}}\\
&\cdot\underbrace{\frac{\Gamma\left(\frac{3m_a}{4}+\frac{m_1}{4}+\frac{m_2}{4}-\frac{3\mathfrak{q}}{2}+\frac{3i\sigma_a}{2}+\frac{i\sigma_1}{2}+\frac{i\sigma_2}{2}+1\right)}{\Gamma\left(\frac{3m_a}{4}+\frac{m_1}{4}+\frac{m_2}{4}+\frac{3\mathfrak{q}}{2}-\frac{3i\sigma_a}{2}-\frac{i\sigma_1}{2}-\frac{i\sigma_2}{2}\right)}}_{Z_{Q_3}} \underbrace{\frac{\Gamma\left(\frac{3m_a}{4}+\frac{m_2}{4}-\frac{3\mathfrak{q}}{2}+\frac{3i\sigma_a}{2}+\frac{i\sigma_2}{2}+1\right)}{\Gamma\left(\frac{3m_a}{4}+\frac{m_2}{4}+\frac{3\mathfrak{q}}{2}-\frac{3i\sigma_a}{2}-\frac{i\sigma_2}{2}\right)}}_{Z_{Q_4}}\ ,\\
Z_X &=\underbrace{\frac{\Gamma\left(-\frac{m_a}{4}-\frac{m_1}{4}-\frac{m_2}{4}+\frac{\mathfrak{q}}{2}-\frac{i\sigma_a}{2}-\frac{i\sigma_1}{2}-\frac{i\sigma_2}{2}+1\right)}{\Gamma\left(-\frac{m_a}{4}-\frac{m_1}{4}-\frac{m_2}{4}-\frac{\mathfrak{q}}{2}+\frac{i\sigma_a}{2}+\frac{i\sigma_1}{2}+\frac{i\sigma_2}{2}\right)}}_{Z_{X_1}} \underbrace{\frac{\Gamma\left(-\frac{m_a}{4}-\frac{m_2}{4}+\frac{\mathfrak{q}}{2}-\frac{i\sigma_a}{2}-\frac{i\sigma_2}{2}+1\right)}{\Gamma\left(-\frac{m_a}{4}-\frac{m_2}{4}-\frac{\mathfrak{q}}{2}+\frac{i\sigma_a}{2}+\frac{i\sigma_2}{2}\right)}}_{Z_{X_2}}\\
&\cdot \underbrace{\frac{\Gamma\left(-\frac{m_a}{4}+\frac{m_1}{4}+\frac{m_2}{4}+\frac{\mathfrak{q}}{2}-\frac{i\sigma_a}{2}+\frac{i\sigma_1}{2}+\frac{i\sigma_2}{2}+1\right)}{\Gamma\left(-\frac{m_a}{4}+\frac{m_1}{4}+\frac{m_2}{4}-\frac{\mathfrak{q}}{2}+\frac{i\sigma_a}{2}-\frac{i\sigma_1}{2}-\frac{i\sigma_2}{2}\right)}}_{Z_{X_3}}\underbrace{\frac{\Gamma\left(-\frac{m_a}{4}+\frac{m_2}{4}+\frac{\mathfrak{q}}{2}-\frac{i\sigma_a}{2}+\frac{i\sigma_2}{2}+1\right)}{\Gamma\left(-\frac{m_a}{4}+\frac{m_2}{4}-\frac{\mathfrak{q}}{2}+\frac{i\sigma_a}{2}-\frac{i\sigma_2}{2}\right)}}_{Z_{X_4}}\ .
\end{aligned}
$$
\normalsize
Here the indices of the individual factors $Z_{Q_i}$ and $Z_{X_i}$ of $Z_Q$ and $Z_X$ number the weights of the defining representation~$\mathbf{4}$ as in Figure~\ref{fig:USp4WeightLattice}. Using the reflection formula~\eqref{eq:GammaReflect} for Gamma functions, we confirm that the defined two sphere partition function~$Z_{S^2}^{2,9,6}$ is indeed real.
Let us also observe that the integrand of the two sphere partition function is invariant with respect to the Weyl symmetry $\mathcal{W}_{\mathfrak{usp}(4)} \simeq S_2 \ltimes (\mathbb{Z}_2\times\mathbb{Z}_2) \simeq D_4$ --- i.e., the order eight dihedral group $D_4$ --- generated by
\begin{equation}
\begin{aligned}
&\mathfrak{s} : (m_1,m_2,\sigma_1,\sigma_2) \longmapsto (-m_1,m_1+m_2,-\sigma_1,\sigma_1+\sigma_2) \ , \\
&\mathfrak{r} : (m_1,m_2,\sigma_1,\sigma_2) \longmapsto (-m_1-2m_2,m_1+m_2,-\sigma_1-2\sigma_2,\sigma_1+\sigma_2) \ .
\end{aligned} \label{eq:WeylM2A}
\end{equation}
To further evaluate the partition function~$Z_{S^2}^{2,9,6}$, we substitute the integration variables $\boldsymbol{\sigma}=(\sigma_a,\vec\sigma)$ by $\boldsymbol{x}=(x_1,x_2,x_3)$ according to
\begin{equation} \label{eq:SubsModel2}
\sigma_a = i \left(x_1+x_2+x_3-\mathfrak{q}\right) \ , \quad \sigma_1 = 2i \left(x_1-x_2\right) \ ,
\quad \sigma_2 = -i \left(3x_1+x_2+x_3\right) \ .
\end{equation}
Then we arrive at
\begin{equation}
Z_{S^2}^{2,9,6}(r,\theta) = i\, \frac{e^{-4\pi r \mathfrak{q}}}{32\pi^3}\!\!\!
\sum_{(m_a,\vec m)\in\Lambda_\mathfrak{m}}\!\!\!
\int_{\gamma+i\mathbb{R}^3}\!\!\! \omega(\mathfrak{m},\boldsymbol{x}) \,dx_1 \wedge dx_2 \wedge dx_3 \ , \quad
\gamma = \left(-\tfrac{\mathfrak{q}}2,-\tfrac{\mathfrak{q}}2,2\mathfrak{q}\right) \ ,\label{eq:ZModel2Formal}
\end{equation}
with
\begin{equation} \label{eq:ZModel2}
\omega(\mathfrak{m},\boldsymbol{x})\,=\,
Z'_G(\vec m,\boldsymbol{x})Z'_P(m_a,\boldsymbol{x})^{15}Z'_\phi(m_a,\boldsymbol{x})^{9}
Z'_Q(\mathfrak{m},\boldsymbol{x})Z'_X(\mathfrak{m},\boldsymbol{x})^6
Z'_\text{cl}(r,\theta,m_a,\boldsymbol{x}) \ ,
\end{equation}
where $Z'_\bullet=\left.Z_\bullet\right|_{\boldsymbol{\sigma}=\boldsymbol{\sigma}(\boldsymbol{x})}$ for $\bullet=G,P,\phi,Q,X$, and
\begin{equation}
Z'_{\text{cl}}(r,\theta,m_a,\boldsymbol{x}) \,=\, e^{4 \pi r \left(x_1+x_2+x_3\right) - i\,\theta \, m_a} \ .
\end{equation}
The integral in eq.~\eqref{eq:ZModel2Formal} is now of the type~\eqref{eq:MBInt} and we proceed with its evaluation as in Appendix~\ref{sec:MB}. We record that in terms of the new variables $\boldsymbol{x}$ the generators~\eqref{eq:WeylM2A} of the Weyl group $\mathcal{W}_{\mathfrak{usp}(4)}$ take the form
\begin{equation}\label{eq:WeylM2B}
\begin{aligned}
&\mathfrak{s} : (m_1,m_2,x_1,x_2) \mapsto (-m_1,m_1+m_2,x_2,x_1) \, , \\
&\mathfrak{r} : (m_1,m_2,x_1,x_2,x_3) \mapsto (-m_1-2m_2,m_1+m_2,x_2,-2x_1-x_2-x_3,3x_1+x_2+2x_3) \, ,
\end{aligned}
\end{equation}
which induces the action on the signed volume form
\begin{equation} \label{eq:Wusp(4)vol}
\begin{aligned}
\mathfrak{s}: dx_1 \wedge dx_2 \wedge dx_3\longmapsto -dx_1 \wedge dx_2 \wedge dx_3\ , \\
\mathfrak{r}: dx_1 \wedge dx_2 \wedge dx_3\longmapsto +dx_1 \wedge dx_2 \wedge dx_3\ .
\end{aligned}
\end{equation}
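That the substitution~\eqref{eq:SubsModel2} indeed conjugates the generators~\eqref{eq:WeylM2A} into the form~\eqref{eq:WeylM2B} is quickly confirmed symbolically; a sympy sketch in our notation:
\begin{verbatim}
# Sketch: the x-space maps reproduce the sigma-space Weyl generators.
import sympy as sp

x1, x2, x3, q = sp.symbols('x1 x2 x3 q')

def sigma(y1, y2, y3):
    return (sp.I * (y1 + y2 + y3 - q),      # sigma_a
            2 * sp.I * (y1 - y2),           # sigma_1
            -sp.I * (3*y1 + y2 + y3))       # sigma_2

sa, s1, s2 = sigma(x1, x2, x3)

# s: x1 <-> x2 should give (sigma_a, -sigma_1, sigma_1 + sigma_2)
for lhs, rhs in zip(sigma(x2, x1, x3), (sa, -s1, s1 + s2)):
    assert sp.expand(lhs - rhs) == 0

# r: (x1,x2,x3) -> (x2, -2x1-x2-x3, 3x1+x2+2x3) should give
#    (sigma_a, -sigma_1 - 2*sigma_2, sigma_1 + sigma_2)
image = sigma(x2, -2*x1 - x2 - x3, 3*x1 + x2 + 2*x3)
for lhs, rhs in zip(image, (sa, -s1 - 2*s2, s1 + s2)):
    assert sp.expand(lhs - rhs) == 0
\end{verbatim}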
Firstly, we again determine the divisors for the poles of the integrand $\omega(\mathfrak{m},\boldsymbol{x})$. Taking into account cancellation between the poles and zeros, we find the following divisors of poles in terms of the constrained integers $n_P, n_{Q_1},n_{Q_2}, n_{Q_3},n_{Q_4},n_{X_1},n_{X_2},n_{X_3},n_{X_4}$
\small
\begin{equation}\label{eq:DivisModel2}
\begin{aligned}
D^{n_P}_{P} &= x_1+x_2+x_3-n_P -\tfrac{m_a}{2}-1 &&\text{ for } n_P \geq \operatorname{Max}\left[0,-m_a\right] \ ,\\
D^{n_{Q_1}}_{Q_1} &= 2x_1+3x_2+2x_3-\tfrac{4+4n_{Q_1}+3m_a-m_1-m_2}{4} &&\text{ for }n_{Q_1} \geq \operatorname{Max}\left[0,\tfrac{-3m_a+m_1+m_2}{2}\right]\ ,\\
D^{n_{Q_2}}_{Q_2} &= 3x_1+2x_2+2x_3-\tfrac{4+4n_{Q_2}+3m_a-m_2}{4} &&\text{ for } n_{Q_2} \geq \operatorname{Max}\left[0,\tfrac{-3m_a+m_2}{2}\right]\ ,\\
D^{n_{Q_3}}_{Q_3} &= x_1+x_3-\tfrac{4+4n_{Q_3}+3m_a+m_1+m_2}{4} &&\text{ for } n_{Q_3} \geq \operatorname{Max}\left[0,\tfrac{-3m_a-m_1-m_2}{2}\right]\ ,\\
D^{n_{Q_4}}_{Q_4} &= x_2+x_3-\tfrac{4+4n_{Q_4}+3m_a+m_2}{4} &&\text{ for } n_{Q_4} \geq \operatorname{Max}\left[0,\tfrac{-3m_a-m_2}{2}\right]\ ,\\
D^{n_{X_1}}_{X_1} &= x_2-\tfrac{4n_{X_1}-m_a-m_1-m_2}{4} &&\text{ for } n_{X_1} \geq \operatorname{Max}\left[0,\tfrac{m_a+m_1+m_2}{2}\right]\ ,\\
D^{n_{X_2}}_{X_2} &= x_1-\tfrac{4n_{X_2}-m_a-m_2}{4} &&\text{ for } n_{X_2} \geq \operatorname{Max}\left[0,\tfrac{m_a+m_2}{2}\right]\ ,\\
D^{n_{X_3}}_{X_3} &= x_1+2x_2+x_3-\tfrac{-4n_{X_3}+m_a-m_1-m_2}{4} &&\text{ for } n_{X_3} \geq \operatorname{Max}\left[0,\tfrac{m_a-m_1-m_2}{2}\right]\ ,\\
D^{n_{X_4}}_{X_4} &= 2x_1+x_2+x_3-\tfrac{-4n_{X_4}+m_a-m_2}{4} &&\text{ for } n_{X_4} \geq \operatorname{Max}\left[0,\tfrac{m_a-m_2}{2}\right]\ .
\end{aligned}
\end{equation}
\normalsize
There is no contribution from $Z'_\phi$ because all its poles are canceled by the denominator of $Z'_P$.
Secondly, we identify $\boldsymbol{p}$ in eq.~\eqref{eq:MBw} as $\boldsymbol{p}= -4\pi r(1,1,1)$ from which we find the critical plane $\partial H$ introduced in eq.~\eqref{eq:MBHHigher} to be
\begin{equation}
\partial H = \left\{ x \in \mathbb{R}^3 \,|\, x_1 + x_2 + x_3 =\mathfrak{q} \right\}. \label{eq:UntiltedHM2}
\end{equation}
Since this critical plane leads to degenerate simplices~\eqref{eq:MBSimplex}, it is necessary to apply the method of Appendix \ref{sec:MBParallel}. We thus introduce an additional factor such that
\begin{equation} \label{eq:OmegaPrimeM2}
\omega(\mathfrak{m},\boldsymbol{x}) \, \longrightarrow \,
\omega'(\mathfrak{m},\boldsymbol{x}) = \omega(\mathfrak{m},\boldsymbol{x}) \cdot e^{+4\pi r \varepsilon\,\boldsymbol{\delta}\cdot\boldsymbol{x}} \ ,
\qquad \boldsymbol{\delta}=(\delta_1,\delta_2,\delta_3) \ ,
\end{equation}
where the limit $\varepsilon \to 0^+$ is taken outside the integral. This modifies the critical plane to
\begin{equation}
\partial H' = \left\{ x \in \mathbb{R}^3 \,\bigg|\, \sum_{i=1}^3\left(1+\varepsilon\,\delta_{i}\right)x_i=\mathfrak{q}- \frac{\varepsilon\,\mathfrak{q}}{2}\left(\delta_{1}+\delta_{2}-4\delta_3\right) \right\}\ ,\label{eq:TiltedHM2}
\end{equation}
which for $\delta_1 \neq \delta_3$ and $\delta_2 \neq \delta_3$ indeed avoids degenerate simplices. With the two disjoint half-spaces bounded by the critical plane
\begin{equation}
\begin{aligned}
H'_1 &= \left\{ x \in \mathbb{R}^3 \,\bigg|\, \sum_{i=1}^3\left(1+\varepsilon\,\delta_{i}\right)x_i < \mathfrak{q}- \frac{\varepsilon\,\mathfrak{q}}{2}\left(\delta_{1}+\delta_{2}-4\delta_3\right) \right\}\ ,\\
H'_2 &= \left\{ x \in \mathbb{R}^3 \,\bigg|\, \sum_{i=1}^3\left(1+\varepsilon\,\delta_{i}\right)x_i > \mathfrak{q}- \frac{\varepsilon\,\mathfrak{q}}{2}\left(\delta_{1}+\delta_{2}-4\delta_3\right) \right\}\ ,
\end{aligned}\label{eq:HalfSpacesM2}
\end{equation}
the relevant half-spaces $H$ for the respective skew symplectic sigma model{} phases are
\begin{equation}
H \,=\,\begin{cases} H'_1 & \text{for }r\gg0 \\ H'_2 & \text{for }r\ll0 \end{cases} \ .
\end{equation}
Thirdly, we have to determine the set $\Pi$ of contributing poles that is introduced in eq.~\eqref{eq:MBpoles}. This is a straightforward but tedious calculation; for the sake of brevity we specialize to the case $\delta_3=0$ and $\delta_1, \delta_2 > 0$. Then we find
\begin{equation}
\Pi = \begin{cases} \left\{\, p^{\vec n_i}_{\vec\jmath_i}\, | \, i = 1,2,3,4 \,\right\} & \text{ for } r\gg 0 \ , \\[1.5ex]
\left\{\, p^{\vec n_i}_{\vec\jmath_i}\, | \, i=5, \ldots, 20 \,\right\} & \text{ for } r\ll 0 \ , \\[1.5ex]
\end{cases}\label{eq:M2Pi}
\end{equation}
in terms of the oriented divisor intersections with labels
\begin{equation}\label{eq:M2jLables}
\begin{aligned}
&(\vec\jmath_1,\vec n_1)\,=\, (X_3,X_1,X_4; n_{X_3},n_{X_1},n_{X_4}) \ , &&(\vec\jmath_2,\vec n_2)\,=\, (X_3,X_1,Q_4; n_{X_3},n_{X_1},n_{Q_4}) \ , \\
&(\vec\jmath_3,\vec n_3)\,=\, (X_3,X_2,X_4; n_{X_3},n_{X_2},n_{X_4}) \ , &&(\vec\jmath_4,\vec n_4)\,=\, (X_2,X_4,Q_3; n_{X_2},n_{X_4},n_{Q_3}) \ , \\
&(\vec\jmath_5,\vec n_5)\,=\, (P,X_1,X_2; n_{P},n_{X_1},n_{X_2}) \ , &&(\vec\jmath_6,\vec n_6)\,=\, (P,X_1,Q_2; n_{P},n_{X_1},n_{Q_2}) \ , \\
&(\vec\jmath_7,\vec n_7)\,=\, (P,Q_1,X_2; n_{P},n_{Q_1},n_{X_2}) \ , &&(\vec\jmath_8,\vec n_8)\,=\, (P,Q_1,Q_2; n_{P},n_{Q_1},n_{Q_2}) \ , \\
&(\vec\jmath_9,\vec n_9)\,=\, (X_1,X_2,Q_3; n_{X_1},n_{X_2},n_{Q_3}) \ , &&(\vec\jmath_{10},\vec n_{10})\,=\, (X_1,X_2,Q_4; n_{X_1},n_{X_2},n_{Q_4}) \ , \\
&(\vec\jmath_{11},\vec n_{11})\,=\, (X_1,Q_2,X_4; n_{X_1},n_{Q_2},n_{X_4}) \ , &&(\vec\jmath_{12},\vec n_{12})\,=\, (X_1,Q_2,Q_3; n_{X_1},n_{Q_2},n_{Q_3}) \ , \\
&(\vec\jmath_{13},\vec n_{13})\,=\, (X_1,Q_2,Q_4; n_{X_1},n_{Q_2},n_{Q_4}) \ , &&(\vec\jmath_{14},\vec n_{14})\,=\, (X_3,Q_1,X_2; n_{X_3},n_{Q_1},n_{X_2}) \ , \\
&(\vec\jmath_{15},\vec n_{15})\,=\, (X_3,Q_1,Q_2; n_{X_3},n_{Q_1},n_{Q_2}) \ , &&(\vec\jmath_{16},\vec n_{16})\,=\, (X_2,Q_3,Q_1; n_{X_2},n_{Q_3},n_{Q_1}) \ , \\
&(\vec\jmath_{17},\vec n_{17})\,=\, (X_2,Q_4,Q_1; n_{X_2},n_{Q_4},n_{Q_1}) \ , &&(\vec\jmath_{18},\vec n_{18})\,=\, (X_4,Q_1,Q_2; n_{X_4},n_{Q_1},n_{Q_2}) \ , \\
&(\vec\jmath_{19},\vec n_{19})\,=\, (Q_1,Q_2,Q_3; n_{Q_1},n_{Q_2},n_{Q_3}) \ , &&(\vec\jmath_{20},\vec n_{20})\,=\, (Q_1,Q_2,Q_4; n_{Q_1},n_{Q_2},n_{Q_4}) \ , \\
\end{aligned}
\end{equation}
and the pole loci given by
\begin{equation}
p^{\vec n_i}_{\vec\jmath_i} = \left(\,\bigcap_{k=1}^3 \,\{\,D^{(\vec n_i)_k}_{(\vec\jmath_i)_k} = 0 \, \} \right)\ .
\end{equation}
Moreover, the labels~\eqref{eq:M2jLables} have been chosen such that $\operatorname{sign}\Delta^{\vec n_i}_{\vec\jmath_i} = 1$ for all $i$. According to eq.~\eqref{eq:MBFormula} the partition function thus takes the form
\begin{equation}
Z_{S^2}^{2,9,6}(r,\theta) \,=\,\frac{e^{-4\pi r \mathfrak{q}}}{4} \,\lim_{\varepsilon\to0^+}\,\sum_{(m_a,\vec{m})\in\Lambda_\mathfrak{m}}
\, \sum_{\boldsymbol{x}\in\Pi}\operatorname{Res}_{\boldsymbol{x}} \omega'(\mathfrak{m},\boldsymbol{x}) \ ,\label{eq:ZM2}
\end{equation}
where the residue is with respect to the order of divisors specified in eq.~\eqref{eq:M2jLables}.
\subsubsection{The $SSSM_{2,9,6}$ phase $r\gg0$}
We now specialize to the phase $r \gg 0$, for which eq.~\eqref{eq:M2Pi} shows that there are four relevant pole loci
\footnotesize
$$
\begin{aligned}
p^{\vec n_1}_{\vec\jmath_1} &= \left(
n_{X_1}+n_{X_3}-n_{X_4}-\tfrac{m_a+m_2}{4} , n_{X_1}-\tfrac{m_a+m_1+m_2}{4}, n_{X_4}-2n_{X_3}-3n_{X_1}+\tfrac{4m_a+m_1+2m_2}{4} \right) \ , \\
p^{\vec n_2}_{\vec\jmath_2} &= \left(
-1-n_{X_1}+n_{X_3}-n_{Q_4}-\tfrac{m_a+m_2}{4},n_{X_1}-\tfrac{m_a+m_1+m_2}{4},1+ n_{Q_4}-n_{X_1}+\tfrac{4m_a+m_1+2m_2}{4}\right),\\
p^{\vec n_3}_{\vec\jmath_3} &= \left(
n_{X_2}-\tfrac{m_a+m_2}{4}, n_{X_2}+n_{X_4}-n_{X_3}-\tfrac{m_a+m_1+m_2}{4}, n_{X_3}-2n_{X_4}-3n_{X_2}+\tfrac{4m_a+m_1+2m_2}{4}
\right)\ ,\\
p^{\vec n_4}_{\vec\jmath_4} &= \left(
n_{X_2}-\tfrac{m_a+m_2}{4},-1-n_{X_2}-n_{X_4}-n_{Q_3}-\tfrac{m_a+m_1+m_2}{4},1+ n_{Q_3}-n_{X_2}+\tfrac{4m_a+m_1+2m_2}{4} \right)\ .
\end{aligned}
$$
\normalsize
By calculations similar to those in eq.~\eqref{eq:inclusion} it can be shown that
\begin{equation}
\Pi = \left\{\, p^{\vec n_i}_{\vec\jmath_i}\, | \, i = 1,2,3,4 \,\right\} = \Pi^1 \cup \Pi^2\, \quad \text{with}\quad \Pi^1 = \left\{\, p^{\vec n_1}_{\vec\jmath_1}\,\right\} \ ,\
\Pi^2 = \left\{\, p^{\vec n_3}_{\vec\jmath_3}\,\right\} \ .
\end{equation}
The sets $\Pi^1$ and $\Pi^2$ are, however, not disjoint. Taking care not to count poles in their intersection twice, the partition function reads
\begin{equation}
\begin{aligned}
Z_{S^2,r\gg0}^{2,9,6}(r,\theta) \,=\,&\frac{e^{-4\pi r \mathfrak{q}}}{4} \,\lim_{\varepsilon\to0^+}\,\sum_{(m_a,\vec{m})\in\Lambda_\mathfrak{m}}
\, \sum_{\boldsymbol{x}\in\Pi^1}\alpha(\boldsymbol{x})\operatorname{Res}_{\boldsymbol{x}} \omega'(\mathfrak{m},\boldsymbol{x})\\
+&\frac{e^{-4\pi r \mathfrak{q}}}{4} \,\lim_{\varepsilon\to0^+}\,\sum_{(m_a,\vec{m})\in\Lambda_\mathfrak{m}}
\, \sum_{\boldsymbol{x}\in\Pi^2}\alpha(\boldsymbol{x})\operatorname{Res}_{\boldsymbol{x}} \omega'(\mathfrak{m},\boldsymbol{x})\\
\text{with } \alpha(\boldsymbol{x})&= \begin{cases}\frac{1}{2} \quad \text{for } \boldsymbol{x} \in \Pi^1 \cap \Pi^2 \\ 1 \quad \text{otherwise}\end{cases}.
\end{aligned}\label{eq:ZModel22Terms}
\end{equation}
Further, the two terms in eq.~\eqref{eq:ZModel22Terms} are equal. This is due to the symmetry of the integrand of the two sphere partition function under the generator $\mathfrak{s}$ of Weyl transformations given in eq.~\eqref{eq:WeylM2A}, which acts on the relevant divisor intersections as
\begin{equation}
(\vec\jmath_1,\vec n_1) \stackrel{\mathfrak{s}}{\longmapsto} (\vec\jmath_3,\vec n_3) \ , \quad(\vec\jmath_3,\vec n_3) \stackrel{\mathfrak{s}}{\longmapsto} (\vec\jmath_1,\vec n_1) \ .
\end{equation}
Here we have taken into account that the minus sign from the orientation-reversal of the ordered triplet of divisors precisely cancels the sign-reversal obtained by transforming the signed volume form in eq.~\eqref{eq:Wusp(4)vol}. With this we find
\begin{equation}\label{eq:ZModel21Term}
\begin{aligned}
Z_{S^2,r\gg0}^{2,9,6}(r,\theta) \,=\,&\frac{e^{-4\pi r \mathfrak{q}}}{2} \,\lim_{\varepsilon\to0^+}\,\sum_{(m_a,\vec{m})\in\Lambda_\mathfrak{m}}
\, \sum_{\boldsymbol{x}\in\Pi^1}\alpha(\boldsymbol{x})\operatorname{Res}_{\boldsymbol{x}} \omega'(\mathfrak{m},\boldsymbol{x}) \ .
\end{aligned}
\end{equation}
Similarly to the previous calculations, we introduce the new variables
\begin{equation}
\begin{aligned}
&a = n_{X_1}+n_{X_3}-m_a\ , && b= n_{X_3} -\frac{m_a-m_1-m_2}{2}\ , && c = n_{X_1}+n_{X_3}\ , \\
&d = n_{X_3} \ , && k = n_{X_4}-\frac{m_a-m_2}{2} \ , && l = n_{X_4} \ ,
\end{aligned}
\end{equation}
in order to simplify the summation in eq.~\eqref{eq:ZModel21Term}. In terms of $z$ and $\overline{z}$ introduced in eq.~\eqref{eq:zvar}, and with $\delta_1 = \delta_2 = \frac1{2\pi r}$, the partition function is found to be
\begin{align}
Z_{S^2,r\gg 0}^{2,9,6}(r,\theta) \,=\,&\frac{(z\overline{z})^{\mathfrak{q}}}{2} \,\lim_{\varepsilon\to0^+} \operatorname{Res}_{\boldsymbol{x}=0}
\Bigg( e^{2(x_1+x_2)\varepsilon} Z_{1} \left(z\overline{z}\right)^{-\sum_{i} x_i}\sum_{a=0}^\infty z^a e^{2a\varepsilon}\sum_{c=0}^\infty (\overline{z})^c e^{2c\varepsilon}\notag \\
&\sum_{b=0}^a\sum_{d=0}^c\sum_{k=0}^\infty\sum_{l=0}^\infty\alpha\, Z_{(a,b,k)} Z_{(c,d,l)}e^{-(k+l+b+d)\varepsilon} \Bigg)\ , \label{eq:ZModel2FinalPhase1}
\end{align}
with
\begin{align*}
Z_1 &= \,\pi^2\frac{\operatorname{sin}\left(\pi x_1\right)^6\operatorname{sin}\left[\pi\left( x_1+x_2+x_3\right)\right]^6\operatorname{sin}\left[\pi\left( x_1+x_3\right)\right]\operatorname{sin}\left[\pi\left(x_2+x_3\right)\right]}{ \operatorname{sin}\left(\pi x_2\right)^6\operatorname{sin}\left[\pi\left(2 x_1+x_2+x_3\right)\right]^6\operatorname{sin}\left[\pi\left(x_1+2x_2+x_3\right)\right]^6}\\
&\, \hspace{2mm}\cdot\operatorname{sin}\left[\pi\left(3 x_1+2x_2+2x_3\right)\right]\operatorname{sin}\left[\pi\left(2x_1+3x_2+2x_3\right)\right]\ ,\\
Z_{(a,b,k)} &= \left(a-2k+3x_1+x_2+x_3\right)\left(b-k+x_1-x_2\right)\left(a-b-k+2x_1+2x_2+x_3\right) \\
&\, \hspace{2mm}\cdot \frac{\left(a-2b+x_1+3x_2+x_3\right)\Gamma\left(-a+k-x_1\right)^6\Gamma\left(1+a-x_1-x_2-x_3\right)^6}{\Gamma\left(1+a-b+x_2\right)^6\Gamma\left(1+b-x_1-2x_2-x_3\right)^6\Gamma\left(1+k-2x_1-x_2-x_3\right)^6}\\
&\, \hspace{2mm}\cdot \Gamma\left(1+a+b-2x_1-3x_2-2x_3\right)\Gamma\left(1+2a-b-x_1-x_3\right)\\
&\, \hspace{2mm}\cdot \Gamma\left(1+a+k-3x_1-2x_2-2x_3\right)\Gamma\left(1+2a-k-x_2-x_3\right)\ ,\\
\alpha &= \begin{cases}\frac{1}{2}\quad \text{for } k\leq a \text{ and } l\leq c\\1\quad \text{otherwise}\end{cases}\ .
\end{align*}
Note that the factor $\alpha$ is the same as the one defined in eq.~\eqref{eq:ZModel22Terms}; it prevents double counting of poles in $\Pi^1\cap \Pi^2$. Due to its appearance, eq.~\eqref{eq:ZModel2FinalPhase1} could not be written as compactly as the partition functions in eqs.~\eqref{eq:ZModel1Final} and \eqref{eq:ZM1P2Final}. However, eq.~\eqref{eq:ZModel2FinalPhase1} is still manifestly real, and its more involved form does not complicate the explicit evaluation. Moreover, in analogy to the factor $Z_2$ in eq.~\eqref{eq:ZM1P2FinalTerms}, the terms $Z_{(a,b,k)}$ and $Z_{(c,d,l)}$ are not regular at the origin for all values of the summation variables. Hence, some of the relevant poles are located on more than two divisors. Further, for a given order in $z$ and $\overline{z}$ there are two finite and two infinite sums. The infinite sums are automatically regularized by the exponential factors.
The explicit evaluation of eq.~\eqref{eq:ZModel2FinalPhase1} now proceeds in the same way as for the $r\ll0$ phase of $SSSM_{1,12,6}$. The infinite sums are here, however, observed to converge even for $\varepsilon = 0$. Moreover, contributions to the $\left(\operatorname{log}z\right)^3$ term are observed to stem only from poles in $\Pi^1\cap \Pi^2$. Since there are only finitely many such poles at a given order, we have been able to determine the exact fundamental period. Although its expression is too complicated to write down, its expansion has been checked to coincide with eq.~\eqref{eq:M1P2FundPeriod}. After fixing the overall normalization of the partition function by using $\chi\left(\mathcal{X}_{2,9,6}\right) = -102$,\footnote{The required K\"ahler transformation is division by $-16\pi^3(z\overline{z})^{\mathfrak{q}} \left|\omega^{1,12,6}_{0,r\ll0}(z)\right|^2$ with $\omega^{1,12,6}_{0,r\ll0}(z)$ as in eq.~\eqref{eq:M1P2FundPeriod}.} we read off
\begin{equation} \label{eq:X296Inv}
\operatorname{deg}(\mathcal{X}_{2,9,6}) \,=\, 21\ , \qquad N_d(\mathcal{X}_{2,9,6}) \,=\, 387, \ 4\,671, \ \ldots \ ,
\end{equation}
in agreement with eq.~\eqref{eq:DataCY2}. The extracted geometric data justifies the assumption made at the end of Section~\ref{sec:Xphase}, namely that the naive semi-classical derivation of the smooth target space variety $\mathcal{X}_{2,9,6}$ is correct even though it contains discrete points, where the non-Abelian gauge group $\operatorname{USp}(4)$ is not entirely broken.
Let us pause to stress the significance of this result. By calculating the two sphere partition functions of the two models $SSSM_{1,12,6}$ and $SSSM_{2,9,6}$, we find agreement among the invariants~\eqref{eq:X296Inv} and \eqref{eq:X1126Inv} in their respective phases $r\ll0$ and $r\gg0$.\footnote{In the same fashion we could check that the other two geometric phases of the $SSSM_{1,12,6}$ and $SSSM_{2,9,6}$ agree, that is to say $SSSM_{1,12,6}$ in the phase $r\gg0$ and $SSSM_{2,9,6}$ in the phase $r\ll 0$. However, by analytic continuation of the above result such a match is guaranteed as well.} On the level of the partition functions we find the equality
\begin{equation}
Z^{1,12,6}_{S^2}(r,\theta,\mathfrak{q}) \,=\, - \frac12 \,Z^{2,9,6}_{S^2}(-r,-\theta,1-\mathfrak{q}) \ ,
\end{equation}
which implies that the partition functions $Z^{1,12,6}_{S^2}$ and $Z^{2,9,6}_{S^2}$ differ only by a K\"ahler transformation resulting in the relative normalization factor $-\frac12$. This identity is highly non-trivial on the level of the two distinct integral representations~\eqref{eq:ZS2} of the two sphere partition function for the two skew symplectic sigma models $SSSM_{1,12,6}$ and $SSSM_{2,9,6}$, and thus furnishes strong evidence in support of our duality proposal~\eqref{eq:dualpair} in Section~\ref{sec:duality}.
\subsection{Other conformal skew symplectic sigma models}
Let us now briefly examine the remaining skew symplectic sigma models in Table~\ref{tb:models}. As has been argued in Section \ref{sec:duality}, the model $SSSM_{1,5,4}$ is self-dual with two equivalent geometric $T^2$ phases for $r\gg0$ and $r\ll0$ according to eq.~\eqref{eq:dualT2s}. Here we want to provide further evidence for this duality in terms of the two sphere partition function. While eq.~\eqref{eq:ZtoK} still holds, the sign-reversed exponentiated K\"ahler potential~\eqref{eq:expK} takes in flat coordinates the simple form
\begin{equation} \label{eq:ZTor}
e^{-K(t)} \,=\, i (t -\bar t)\ .
\end{equation}
It gives rise to the Zamolodchikov metric $ds^2= \frac{i\,dt d\bar t}{t-\bar t} $ of the Teichm\"uller space of the torus. While the Zamolodchikov metric does not contain any additional information about geometric invariants, we nevertheless expect that the two sphere partition function $Z_{S^2}^{1,5,4}$ takes, with the canonical K\"ahler gauge in the (singular) large volume phases, the asymptotic form
\begin{equation}
Z^{1,5,4}_{S^2}(z) \,=\, \frac{c}{2\pi i} \left| \omega^{1,5,4}_0(z) \right|^2 (\log z - \log\bar z) + \ldots \quad \text{for} \quad |z|\text{ small} \ ,
\end{equation}
with the fundamental period $\omega^{1,5,4}_0(z)$ and a real constant $c$. Thus, our aim is to calculate $\omega^{1,5,4}_0(z)$ with the two sphere partition function correspondence \cite{Jockers:2012dk}, so as to show that the fundamental periods agree in the two geometric toroidal large volume phases.
For K3 surfaces the quantum K\"ahler metric does not receive non-perturbative worldsheet instanton corrections and the sign-reversed exponentiated K\"ahler potential~\eqref{eq:expK} exhibits in flat coordinates the characteristic form \cite{Halverson:2013qca}
\begin{equation} \label{eq:ZK3}
e^{-K(t)} \,=\, \frac12 \kappa_{ab}(t^a -\bar t^a)(t^b-\bar t^b)\ ,
\end{equation}
where $t^a$ are the complexified K\"ahler moduli of the analyzed polarized K3~surface and $\kappa_{ab}$ are their intersection numbers. For the specific model $SSSM_{1,8,5}$ the large volume polarized K3~surfaces $\mathcal{X}_{\mathbb{S}_5}$ and $\mathcal{Y}_{\mathbb{S}^*_5}$ depend on a single K\"ahler modulus $t$ such that
\begin{equation}
e^{-K(t)} \,=\, \frac12 \deg(\mathcal{X}) (t -\bar t)^2 \ .
\end{equation}
However, the overall normalization can be changed by a K\"ahler transformation. Nevertheless, analogously to the toroidal model, the two sphere partition function in the canonical K\"ahler gauge takes the form
\begin{equation}
Z^{1,8,5}_{S^2}(z) \,=\, -\frac{c}{4\pi^2} \left| \omega^{1,8,5}_0(z) \right|^2 (\log z - \log\bar z)^2 + \ldots \quad \text{for} \quad |z|\text{ small} \ ,
\end{equation}
which we want to compare in the two proposed dual phases.
Let us now examine the remaining models in Table~\ref{tb:models}. The conformal skew symplectic sigma models with central charge three, six and twelve differ from $SSSM_{1,12,6}$ only in the integers $m$ and $n$, which determine the multiplicities of the irreducible representations of the matter multiplets. In particular, the gauge group and the set of different irreducible representations are unchanged. Since $\binom{n}2>m$ holds for all these models, poles arising from $Z_\phi$ are always canceled by the denominator of $Z_P$ --- as noted below eq.~\eqref{eq:DivisModel1} for $SSSM_{1,12,6}$. Therefore the discussion in Section~\ref{sec:M1General} directly carries over; only the powers of the respective terms in $Z_{\text{matter}}$ have to be changed. For the phase $r\gg0$ we find
\begin{equation}
Z_{S^2,r\gg0}^{1,m,n}(r,\theta)=(-1)^{m+\binom{n}2}\frac{\left(z \overline{z}\right)^\mathfrak{q}}{2} \operatorname{Res}_{\boldsymbol{x}=0} \left(Z_{\text{sing}} \left|z^{x_1+x_2}\sum_{a=0}^\infty (-1)^{a(1+n+m)} z^a \sum_{b=0}^a Z_{\text{reg}} \right|^2\right), \label{eq:ZGen1}
\end{equation}
with
\begin{equation}
\begin{aligned}
Z_{\text{sing}} &=\pi^{2(n-1)-\binom{n}2+m} \, \frac{\text{sin}\left[\pi (x_1+x_2)\right]^{\binom{n}2-m}\text{sin}\left[\pi (2x_1+x_2)\right]\text{sin}\left[\pi (x_1+2x_2)\right]}{\text{sin}\left(\pi x_1\right)^n\text{sin}\left(\pi x_2\right)^n}\ ,\\
Z_{\text{reg}} &=(a-2b-x_1+x_2) \frac{\Gamma (1+a+x_1+x_2)^{\binom{n}2-m}}{\Gamma(1+b+x_1)^n \Gamma(1+a-b+x_2)^n}\\
&\, \qquad \quad\cdot \Gamma(1+a+b+2x_1+x_2)\Gamma(1+2a-b+x_1+2x_2)\ .
\end{aligned}
\end{equation}
The residue is taken with respect to the oriented pair of divisors $(x_1,x_2)$. An explicit evaluation of eq.~\eqref{eq:ZGen1} then gives
\begin{equation}\label{eq:PeriodGen1}
\begin{aligned}
\omega^{1,m,n}_{0,r\gg0}(z) = \sum_{a=0}^\infty \sum_{b=0}^a (-1)^{a(1+m+n)}&\frac{a!^{\binom{n}2-m}(2a-b)!(a+b)!}{(a-b)!^n b!^n}\\
&\qquad\quad\cdot\bigg[1 +(2b-a)\left(H_{a+b}-n H_{b}\right) \bigg] z^a \ .
\end{aligned}
\end{equation}
Similarly, in the phase $r\ll0$ the partition function reads
\begin{equation}
\begin{aligned}
Z_{S^2,r\ll0}^{1,m,n}(r,\theta) =&(-1)^{n+1} \frac{\left(w \overline{w}\right)^{1-\mathfrak{q}}}{2} \lim_{\varepsilon \to 0^+}\\
& \operatorname{Res}_{\boldsymbol{x}=0} \left( e^{2x_1 \varepsilon} Z_{\text{sing}} \left|w^{-(x_1+x_2)}\sum_{a=0}^\infty (-1)^{a(m+n)} w^a e^{\varepsilon a} \sum_{k=0}^\infty (-1)^k Z_{2} e^{-\varepsilon k}\right|^2\right)\text{,}
\end{aligned} \label{eq:ZGen2}
\end{equation}
where
\begin{equation}
\begin{aligned}
Z_{\text{sing}} &= \frac{\operatorname{sin}(\pi x_1)^n\operatorname{sin}(\pi x_2)^n\operatorname{sin}\left[\pi \left(x_1+2x_2\right)\right]}{\pi^{2n-\binom{n}2+m} \operatorname{sin}\left[\pi (x_1+x_2)\right]^{\binom{n}2-m}\operatorname{sin}\left[\pi(2x_1+x_2)\right]}\ , \\
Z_2 &= \left(1+3a-2k+x_1-x_2\right) \cdot \\
&\,\qquad \frac{\Gamma (-a+k-x_1)^n\Gamma (1+2a-k-x_2)^n\Gamma(-1-3a+k+x_1+2x_2)}{\Gamma(1+k-2x_1-x_2)\Gamma(1+a-x_1-x_2)^{\binom{n}2-m}}\ .
\end{aligned}
\end{equation}
Here, the residue is taken with respect to the ordering of divisors specified in eq.~\eqref{eq:M1jLables}. In this phase, the fundamental period reads
\begin{equation}
\begin{aligned}
\omega^{1,m,n}_{0,r\ll0}(w) &= \sum_{a=0}^\infty \sum_{k=0}^a (-1)^{k\,n+a\left[n+\binom{n}2\right]}\frac{(1+3a-2k)(2a-k)!^n}{k! (1+3a-k)! a!^{\binom{n}2-m} (a-k)!^n} w^a \label{eq:PeriodGen2}\ .
\end{aligned}
\end{equation}
Let us now first specialize to the toroidal model $SSSM_{1,5,4}$, which we expect to be self-dual. Using the general expressions~\eqref{eq:PeriodGen1} and \eqref{eq:PeriodGen2} we in fact find
\begin{equation}
\begin{aligned}
\omega^{1,5,4}_{0,r\gg0}(z) &= 1-3z+19z^2-147z^3 +1\,251 z^4-11\,253z^5 + \ldots \ , \\
\omega^{1,5,4}_{0,r\ll0}(w) &= 1+3w+19w^2+147w^3 +1\,251 w^4+11\,253w^5 + \ldots \ ,
\end{aligned}
\end{equation}
i.e., in the expansion we observe the equality
\begin{equation}
\omega^{1,5,4}_{0,r\gg0}(z) = \omega^{1,5,4}_{0,r\ll0}(w) \quad \text{for } w = -z \ .
\end{equation}
From this we deduce that the two-sphere partition function satisfies the equality
\begin{equation}
Z^{1,5,4}_{S^2}(r,\theta,\mathfrak{q}) \,=\, Z^{1,5,4}_{S^2}(-r,\pi-\theta,1-\mathfrak{q}) \ ,
\end{equation}
which provides further evidence for self-duality of the model $SSSM_{1,5,4}$. On the level of the associated Picard--Fuchs operator
\begin{equation}
\begin{aligned}
\mathcal{L}^{1,5,4}_1&=\mathcal{L}^{1,5,4}(z) = \theta_z^2 + z\left(3+11\theta_z+11\theta_z^2\right) - z^2\left(1+\theta_z\right)^2, \quad \theta_z = z \frac{\partial}{\partial z}\ ,\\
\mathcal{L}^{1,5,4}_2& = w^2\mathcal{L}^{1,5,4}(w^{-1}),
\end{aligned}
\end{equation}
which annihilates the fundamental period, the self-duality also becomes apparent, as the Picard--Fuchs operator exhibits the symmetry
\begin{equation}
\mathcal{L}^{1,5,4}_1\, \omega^{1,5,4}_{0,r\gg0}(z) = 0 \ ,\qquad \mathcal{L}^{1,5,4}_2 \,\left( w \,\omega^{1,5,4}_{0,r\ll0}(w) \right) = 0 \ .
\end{equation}
This symmetry of the Picard--Fuchs operator becomes even more manifest after the shift
\begin{equation}
\widehat{\mathcal{L}}^{\,1,5,4}_1 =\widehat{\mathcal{L}}^{\,1,5,4}(z) = \left(\theta_z-\frac{1}{2}\right)^2+z\left(11\theta_z^2+\frac{1}{4}\right) -z^2\left(\theta_z+\frac{1}{2}\right)^2,
\end{equation}
which fulfills
\begin{equation}
\widehat{\mathcal{L}}^{\,1,5,4}_1 \,\left( z^{\frac{1}{2}}\,\omega^{1,5,4}_{0,r\gg0}(z) \right) = 0\ , \quad
\widehat{\mathcal{L}}^{\,1,5,4}_2 \, \left(w^{\frac{1}{2}}\,\omega^{1,5,4}_{0,r\ll0}(-w) \right)= 0\ ,
\end{equation}
with $\widehat{\mathcal{L}}^{\,1,5,4}_2 = w^2\widehat{\mathcal{L}}^{\,1,5,4}(-w^{-1}) = - \widehat{\mathcal{L}}^{\,1,5,4}_1$.
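As a consistency check, acting with $\widehat{\mathcal{L}}^{\,1,5,4}_1$ on $z^{1/2}\sum_n a_n z^n$ yields the three-term recurrence $n^2 a_n + \big(11(n-\tfrac12)^2+\tfrac14\big)a_{n-1} - (n-1)^2 a_{n-2} = 0$, which regenerates the expansion of $\omega^{1,5,4}_{0,r\gg0}$ found above. A minimal Python sketch of this check (the recurrence is our own reformulation of the annihilation property, stated here purely as an illustration) reads:
\begin{verbatim}
from fractions import Fraction

# three-term recurrence from the shifted Picard-Fuchs operator:
# n^2 a_n + (11*(n - 1/2)^2 + 1/4) a_{n-1} - (n - 1)^2 a_{n-2} = 0
a = [Fraction(1)]
for n in range(1, 6):
    prev2 = a[n - 2] if n >= 2 else Fraction(0)
    a.append((-(11 * Fraction(2*n - 1, 2)**2 + Fraction(1, 4)) * a[n - 1]
              + (n - 1)**2 * prev2) / n**2)
print([int(c) for c in a])  # [1, -3, 19, -147, 1251, -11253]
\end{verbatim}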
Next, we look at the model $SSSM_{1,8,5}$ describing a K3 surface, which is also expected to be self-dual according to eq.~\eqref{eq:K3sd}. In expansion the fundamental periods read
\begin{equation}
\begin{aligned}
\omega^{1,8,5}_{0,r\gg0}(z) &= 1-5z+73z^2-1\,445z^3 +33\,001 z^4-819\,005z^5 + \ldots \ , \\
\omega^{1,8,5}_{0,r\ll0}(w) &= 1-5w+73w^2-1\,445w^3 +33\,001 w^4-819\,005w^5 + \ldots \ ,
\end{aligned}
\end{equation}
from which we deduce
\begin{equation}
\begin{aligned}
\omega^{1,8,5}_{0,r\gg0}(z) &= \omega^{1,8,5}_{0,r\ll0}(w) \quad \text{for } w = z \ ,\\
Z^{1,8,5}_{S^2}(r,\theta,\mathfrak{q}) \,&=\, Z^{1,8,5}_{S^2}(-r,-\theta,1-\mathfrak{q}) \ ,
\end{aligned}
\end{equation}
in support of the self-duality. Similar to the previous case, the duality is also manifest in terms of the associated (shifted) Picard--Fuchs operator
\begin{equation}
\widehat{\mathcal{L}}^{\,1,8,5}_1 =\widehat{\mathcal{L}}^{\,1,8,5}(z) = \left( \theta_z - \frac{1}{2}\right)^3
+ \frac{1}{2}z\, \theta_z \left(68 \theta_z^2+3\right) + z^2\left(\theta_z + \frac{1}{2}\right)^3\ ,
\end{equation}
which fulfills
\begin{equation}
\widehat{\mathcal{L}}^{\,1,8,5}_1 \,\left( z^{\frac{1}{2}}\,\omega^{1,8,5}_{0,r\gg0}(z) \right) = 0\ , \quad
\widehat{\mathcal{L}}^{\,1,8,5}_2 \, \left(w^{\frac{1}{2}}\,\omega^{1,8,5}_{0,r\ll0}(w) \right)= 0\ ,
\end{equation}
with $\widehat{\mathcal{L}}^{\,1,8,5}_2 = w^2\widehat{\mathcal{L}}^{\,1,8,5}(w^{-1}) = - \widehat{\mathcal{L}}^{\,1,8,5}_1$.
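The corresponding recurrence for the K3 model, $n^3 a_n + \tfrac12\big(n-\tfrac12\big)\big(68(n-\tfrac12)^2+3\big)a_{n-1} + (n-1)^3 a_{n-2} = 0$, indeed regenerates the K3 expansion above; a short illustrative check in the same spirit as before is:
\begin{verbatim}
from fractions import Fraction

a = [Fraction(1)]
for n in range(1, 6):
    x = Fraction(2*n - 1, 2)                  # x = n - 1/2
    prev2 = a[n - 2] if n >= 2 else Fraction(0)
    a.append((-Fraction(1, 2) * x * (68 * x**2 + 3) * a[n - 1]
              - (n - 1)**3 * prev2) / n**3)
print([int(c) for c in a])  # [1, -5, 73, -1445, 33001, -819005]
\end{verbatim}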
Finally, we turn to the model $SSSM_{1,17,7}$ with central charge twelve. Geometric invariants of the large volume phases can be extracted from the two sphere partition function according to ref.~\cite{Honma:2013hma}. Here we do not further examine the geometric properties of the phases of this model, but instead just determine the respective fundamental periods
\begin{equation}
\begin{aligned}
\omega^{1,17,7}_{0,r\gg0}(z) &= 1+9z+469z^2+38\,601z^3 +4\,008\,501 z^4+ \ldots \ , \\
\omega^{1,17,7}_{0,r\ll0}(w) &= 1+21w+2\,989w^2+714\,549w^3 +217\,515\,501w^4+ \ldots \ .
\end{aligned}
\end{equation}
As expected for this model, we do not find any indication of a self-duality correspondence.
\section{A derived equivalence of Calabi--Yau manifolds} \label{sec:derivedequiv}
The proposed duality between the skew symplectic sigma models $SSSM_{1,12,6}$ and $SSSM_{2,9,6}$ --- together with the resulting identification of geometric large volume phases according to eq.~\eqref{eq:XYdual} --- implies that the Calabi--Yau threefolds $\mathcal{X}_{1,12,6}\simeq\mathcal{Y}_{2,9,6}$ and $\mathcal{X}_{2,9,6}\simeq\mathcal{Y}_{1,12,6}$ both occur as geometric phases in the quantum K\"ahler moduli space of the infrared worldsheet theory of the associated type~II string compactification. In this section we discuss the implications of our findings for the associated brane spectra.
First, we briefly summarize our findings concerning the target spaces $\mathcal{X}_{k,m,n}$ and $\mathcal{Y}_{k,m,n}$. In particular, we recall the explicit correspondence between their complex structure moduli. As formulated in eqs.~\eqref{eq:Xvariety} and \eqref{eq:Yvariety}, their respective complex structures are encoded in terms of their intersecting projective subspaces $\mathbb{P}(L) \subset \mathbb{P}(V,\Lambda^2V^*)$ and $\mathbb{P}(L^\perp) \subset \mathbb{P}(V^*,\Lambda^2V)$, which are related by the orthogonality condition~\eqref{eq:Lperp}.
The result is that for a given skew symplectic sigma model{} $SSSM_{k,m,n}$ --- with all the previously assumed genericity conditions and the constraints on the integers $(k,m,n)$ fulfilled --- we find for even $n$ that the geometric phases $\mathcal{X}_{k,m,n}$ and $\mathcal{Y}_{k,m,n}$ are realized in terms of the projective varieties
\begin{equation}
\begin{aligned}
\mathcal{X}_{k,m,n} \,&=\, \left\{ [\phi,\tilde P] \in \mathbb{P}( V, \Lambda^2 V^*) \, \middle| \,
\operatorname{rk} \tilde P \le 2k \ \text{and}\ \phi \in \ker \tilde P \right\}\cap \mathbb{P}(L) \ , \\
\mathcal{Y}_{k,m,n}\,&=\,
\left\{ [\tilde\phi,P] \in \mathbb{P}( V^*, \Lambda^2 V) \, \middle| \,
\operatorname{rk} P \le 2\tilde k \ \text{and}\ \tilde\phi \in \ker P \right\} \cap \mathbb{P}(L^\perp) \ ,
\end{aligned}
\end{equation}
with
\begin{equation}
\begin{aligned}
&n \text{ even } \ , \quad \tilde k= \frac{n}2 - k \ , \quad L(L^\perp) = 0 \ \text{ for } \
L\subset V\oplus \Lambda^2 V^*\ ,\ L^\perp \subset V^*\oplus \Lambda^2 V \ , \\
&\dim_\mathbb{C} V=n \ , \quad \dim_\mathbb{C} L = m \ , \quad \dim_\mathbb{C} L^\perp = \binom{n+1}2-m \ .
\end{aligned}
\end{equation}
These projective spaces are constructed analogously to refs.~\cite{Galkin:2014Talk,Galkin:2015InPrep}.
Our main interest in this section is the model $SSSM_{1,12,6}$ with the two Calabi--Yau threefold phases $\mathcal{X}_{1,12,6}$ and $ \mathcal{Y}_{1,12,6}$. In ref.~\cite{Miura:2013arxiv} Miura studies the Calabi--Yau threefold variety $\mathcal{X}_{1,12,6}$ with a different construction and anticipates the emergence of the Calabi--Yau variety $\mathcal{Y}_{1,12,6}$ in the same quantum K\"ahler moduli space by means of mirror symmetry. The above construction of both Calabi--Yau varieties and the relationship to Miura's Calabi--Yau threefold has already appeared in refs.~\cite{Galkin:2014Talk,Galkin:2015InPrep}. In this work this picture is confirmed, since the Calabi--Yau threefold~$\mathcal{X}_{1,12,6}$ arises as the weakly-coupled phase $r\gg0$ and $\mathcal{Y}_{1,12,6}$ as the strongly-coupled phase $r\ll0$ of the model~$SSSM_{1,12,6}$. According to the duality relation eqs.~\eqref{eq:XYdual} and \eqref{eq:dualpair}, the same large volume Calabi--Yau threefold phases occur in the dual model $SSSM_{2,9,6}$ just with the role of weakly- and strongly-coupled phases exchanged.
From the combined analysis of $SSSM_{1,12,6}$ and $SSSM_{2,9,6}$ we conclude that the two distinct geometric Calabi--Yau threefold phases $\mathcal{X}_{1,12,6}$ and $\mathcal{Y}_{1,12,6}$ are path connected in the quantum K\"ahler moduli space. As already argued at the end of Section~\ref{sec:CYs} these threefolds are not birational to each other.
The appearance of two geometric phases in the same connected quantum K\"ahler moduli space has an immediate consequence for the spectra of branes in the two geometric regimes. In the topological B-twisted string theory associated to the skew symplectic sigma models we capture the topological B-branes, which after imposing a suitable stability condition give rise to B-type BPS branes in the physical theory \cite{Douglas:2000gi,Douglas:2000ah,Aspinwall:2001dz}. While the notion of stability depends on the K\"ahler moduli of the theory, the spectrum of topological B-branes is insensitive to K\"ahler deformations.\footnote{The spectrum of topological B-branes, however, does depend on the complex structure moduli in a non-trivial way, which for instance becomes explicit in the study of open-closed deformation spaces, see e.g. refs.~\cite{Jockers:2008pe,Alim:2009rf,Alim:2009bx}.} Therefore, by continuity we can determine the category of topological B-branes at any point in the quantum K\"ahler moduli space. In particular, for any value of the (complexified) Fayet--Iliopoulos parameter the infrared worldsheet theory comes with the same spectrum of topological B-branes \cite{Herbst:2008jq}.
The categories of topological B-branes in the geometric Calabi--Yau threefold phases $\mathcal{X}_{1,12,6}$ and $\mathcal{Y}_{1,12,6}$ are mathematically described in terms of their derived categories of bounded complexes of coherent sheaves $\mathcal{D}^b(\mathcal{X}_{1,12,6})$ and $\mathcal{D}^b(\mathcal{Y}_{1,12,6})$ \cite{Douglas:2000gi,Diaconescu:2001ze}. As a consequence of the universality of the category of topological branes in the model $SSSM_{1,12,6}$, we propose a derived equivalence between the Calabi--Yau threefolds $\mathcal{X}_{1,12,6}$ and $\mathcal{Y}_{1,12,6}$, i.e.,
\begin{equation}
\mathcal{D}^b(\mathcal{X}_{1,12,6}) \, \simeq \, \mathcal{D}^b(\mathcal{Y}_{1,12,6}) \ , \qquad
\mathcal{D}^b(\mathcal{X}_{2,9,6}) \, \simeq \, \mathcal{D}^b(\mathcal{Y}_{2,9,6}) \ .
\end{equation}
Using mirror symmetry Miura also proposes the existence of such a derived equivalence \cite{Miura:2013arxiv}, which --- combined with the work by Galkin, Kuznetsov and Movshev \cite{Galkin:2014Talk,Galkin:2015InPrep} --- directly relates to our proposal.\footnote{Due to Orlov's theorem such a derived equivalence can be proven by finding an invertible Fourier--Mukai transformation between the derived categories of bounded complexes of coherent sheaves of the Calabi--Yau varieties \cite{MR1465519}.}
Finally, we briefly comment on the model $SSSM_{1,8,5}$ with the self-duality correspondence~\eqref{eq:K3sd} and with two geometric phases of degree twelve polarized K3~surfaces $\mathcal{X}_{\mathbb{S}_5}$ and $\mathcal{Y}_{\mathbb{S}_5^*}$ in eqs.~\eqref{eq:K31} and \eqref{eq:K32}. By the same arguments, their appearance in a connected component of the quantum K\"ahler moduli space of the model $SSSM_{1,8,5}$ implies the derived equivalence
\begin{equation}
\mathcal{D}^b(\mathcal{X}_{\mathbb{S}_5}) \, \simeq \, \mathcal{D}^b(\mathcal{Y}_{\mathbb{S}_5^*}) \ .
\end{equation}
This derived equivalence among polarized K3 surfaces of degree twelve is well-known in the literature~\cite{MR1714828,MR2047679,Hosono:2014ty}. Here, it supports the validity of the presented gauged linear sigma model arguments.
\section{Conclusions} \label{sec:con}
The aim of this work was to introduce and to study skew symplectic sigma models, which furnished a specific class of two-dimensional $N=(2,2)$ gauged linear sigma models based upon a non-Abelian symplectic gauge group factor. While part of our discussion even applied for an anomalous axial $U(1)_A$ R-symmetry, we mainly focused on skew symplectic sigma models with non-anomalous axial $U(1)_A$ R-symmetries. Using standard arguments~\cite{Witten:1993yc}, at low energies they described interesting $N=(2,2)$ superconformal field theories that were suitable for worldsheet theories of type~II string compactifications. In particular, we identified skew symplectic sigma models with two distinct and non-birational large volume Calabi--Yau threefold target spaces. The emergent Calabi--Yau spaces were projective varieties of the non-complete intersection type also discussed in refs.~\cite{Miura:2013arxiv,Galkin:2014Talk,Galkin:2015InPrep}. The phase structure of the gauged linear sigma models led towards a non-trivial duality proposal within the class of skew symplectic sigma models, for which strong coupling effects of the symplectic gauge group factor became important at low energies.
To support our claims we used the two sphere partition function correspondence so as to extract perturbative and non-perturbative geometric invariants --- namely the intersection numbers and genus zero Gromov--Witten invariants of the semi-classical target space phases. In this way, we were able to confirm the geometric properties of the studied Calabi--Yau threefold target space varieties. Furthermore, by finding agreement of the two sphere partition functions of skew symplectic sigma models, we presented strong evidence in favor of our duality proposal. As an aside, let us remark that the demonstrated match of the analytic expressions for dual two sphere partition functions was a result of rather non-trivial identities among nested infinite sums. Analyzing the observed identities and tracing in detail their emergence through gauge dualities on the level of two sphere partition functions could be rewarding both from a physics and a mathematics perspective.
In order to compute the two sphere partition functions of skew symplectic sigma models, we developed a systematic way to evaluate higher-dimensional Mellin--Barnes type integrals in terms of local Grothendieck residues. Our techniques were inspired by the interesting work of Zhdanov and Tsikh in ref.~\cite{MR1631772}, where a method to evaluate two-dimensional Mellin--Barnes type integrals is established. Our extension to higher dimensions admits a systematic series expansion of integrals arising from two sphere partition functions of two-dimensional $N=(2,2)$ gauge theories with higher rank gauge groups. However, our approach is also applicable in other contexts where Mellin--Barnes type integrals appear, such as the analytic continuation of periods arising from systems of Picard--Fuchs differential equations.
Interpreting the skew symplectic sigma models with an axial $U(1)_A$ anomaly-free R-symmetry as worldsheet theories for type~II strings, we arrived at an interesting equivalence between the derived category of the bounded complexes of coherent sheaves for a pair of Calabi--Yau threefolds. For the explicit pairs of derived equivalent Calabi--Yau threefolds their embedding in projective and dual projective spaces played a prominent role. This suggests that the proposed derived equivalence is actually a consequence of homological projective duality as proposed by Kuznetsov \cite{MR2354207} and as proven for the R\o{}dland example in ref.~\cite{Kuznetsov:2006arxiv}. As the two respective Calabi--Yau threefold phases were related by a non-trivial two-dimensional $N=(2,2)$ gauge duality, it would be interesting to see more generally if there is a non-trivial relationship between two-dimensional gauge theory dualities --- as established in refs.~\cite{Hori:2006dk,Hori:2011pd} --- and homological projective duality in algebraic geometry \cite{Kuznetsov:2006arxiv}. While the connection between geometric invariant theory --- realizing mathematically the target space geometries of gauged linear sigma models --- and homological projective duality is for instance studied in refs.~\cite{Ballard:2012rt,Ballard:2013fxa}, the phase structure arising from non-trivial two-dimensional gauge theory dualities seems to require a more general notion of geometric invariant theory quotients \cite{Addington:2012zv,Addington:2014sla,Segal:2014jua}.
\bigskip
\subsection*{Acknowledgments}
We would like to thank
Lev Borisov,
Stefano Cremonesi,
Sergey Galkin,
Shinobu Hosono,
Daniel Huybrechts,
Albrecht Klemm,
and
Eric Sharpe
for discussions and correspondences. In particular, we are indebted to Kentaro Hori, who resolved conceptual shortcomings in a preliminary draft of this work, and as a result the discussion in Section~\ref{sec:strongphase} crucially builds upon his input.
A.G.~is supported by the graduate school BCGS and the Studienstiftung des deutschen Volkes, and
H.J.~is supported by the DFG grant KL 2271/1-1.
\newpage
\section{Introduction}
Let $(U_t)$ be a one-dimensional time-homogeneous diffusion process satisfying the stochastic differential equation
\[ dU_t = \nu(U_t) dt + \sigma(U_t) dW_t, \qquad U_0 = u_0, \]
where $(W_t)$ denotes a Brownian motion. The aim of this note is to bound, from above and below, the transition probability density function for $(U_t)$, $p_U(t, u_0, w) := \frac{d}{dw} P_{U}(t,u_0,w)$, where
$P_{U}(t,u_0,w) := \Pr(U_t \leq w | U_0 = u_0)$. While the focus is on the one-dimensional case, the results are easily extended to some special cases in $\mathbb{R}^n$, $n \geq 2$ (see remark at the end of this section). Some simple bounds for the distribution function are also considered.
Except for a few special cases, the transition functions are unknown for general diffusion processes, so finding approximations to them is an important alternative approach. We use Girsanov's theorem and then a transformation of the Radon-Nikodym density of the type suggested in \cite{Baldi_etal_0802} to relate probabilities for a general diffusion $(U_t)$ to those of a `reference diffusion'. Using a reference diffusion with known transition functions, we are able to derive various bounds for the transition functions under mild conditions on the original process. The results have a simple form and are readily evaluated.
As an aside, the generator of the diffusion $(U_t)$ is given by
\[ Af(x) = \nu(x) \partf{f}{x} + \frac{1}{2} \sigma^2(x) \partfd{f}{x}{2}, \]
and the transition probability density function is the minimal fundamental solution to the parabolic equation
\[ \left(A - \partf{}{t}\right)u(t,x) = 0. \]
Thus the results presented here also bound solutions to certain types of parabolic partial differential equations.
Several papers on this topic are available in the literature, especially for bounding the transition probability density. Most recently, \cite{Qian_etal_0504} proposed upper and lower bounds for diffusions whose drift satisfies a linear growth constraint. This appears to be the first such paper to relax the assumption of a bounded drift term. The results in \cite{Qian_etal_0504} will be compared with those obtained in the current paper, although the former cannot be used for processes not satisfying the linear growth constraint. To the best of our knowledge, the bounds presented in the current paper are the only ones to relax this constraint, and they also appear to offer a general tightening of the bounds previously available. For further background on diffusions with bounded drift, see e.g.\ \cite{Qian_etal_1103} and references therein.
In addition, the same ideas allow us to obtain bounds for other functions related to the diffusions. This is not the focus of this note and is not discussed in great detail here, but as an example at the end of Section~\ref{sec:main_result} we consider the density of the process and its first crossing time. This has application in many areas, such as the pricing of financial barrier options. Bounds for other probabilities may be derived in the same manner.
Consider a one-dimensional time-homogeneous non-explosive diffusion $(U_t)$ governed by the stochastic differential equation (SDE)
\begin{align}
\label{eq:original_diff}
dU_t = \nu (U_t) dt + \sigma(U_t) dW_t,
\end{align}
where $(W_t)$ is a Brownian motion and $\sigma(y)$ is differentiable and non-zero inside the diffusion interval (that is, the smallest interval $I \subseteq \mathbb{R}$ such that $U_t \in I$ a.s.). As is well-known, one can transform the process to one with unit diffusion coefficient by letting
\begin{align}
\label{eq:fn_transform}
F(y) := \int_{y_0}^y \frac{1}{\sigma(u)} du
\end{align}
for some $y_0$ from the diffusion interval of $(U_t)$ and then considering $X_t := F(U_t)$ (see e.g.\ \cite{Rogers_xx85}, p.161). By It\^o's formula, $(X_t)$ will have unit diffusion coefficient and a drift coefficient $\mu(y)$ given by the composition
\[ \mu(y) := \left( \frac{\nu}{\sigma} - \frac{1}{2} \sigma'\right) \circ F^{-1}(y). \]
From here on we work with the transformed diffusion process $(X_t)$ governed by the SDE
\begin{align*}
dX_t = \mu(X_t) dt + dW_t, \qquad X_0 = F(U_0) =: x.
\end{align*}
Conditions mentioned throughout refer to the transformed process $(X_t)$ and its drift coefficient $\mu$.
We will consider the following two cases only (the results extend to diffusions with other diffusion intervals with one finite endpoint by employing appropriate transforms):
\begin{enumerate}
\item[] [A] The diffusion interval of $(X_t)$ is the whole real line $\mathbb{R}$.
\item[] [B] The diffusion interval of $(X_t)$ is $(0, \infty)$.
\end{enumerate}
For the diffusion $(X_t)$ we will need a reference diffusion $(Y_t)$ with certain characteristics. The reference diffusion must have the same diffusion interval as $(X_t)$ and a unit diffusion coefficient, so that Girsanov's theorem may be applied to $(X_t)$. To be of any practical use, the reference process must also have known transition functions. In case [A], we use the Brownian motion as the reference process, while in case [B] we use the Bessel process of an arbitrary dimension $d \geq 3$.
Recall the definition of the Bessel process $(R_t)$ of dimension $d = 3, 4, \ldots,$ starting at a point $x>0$. This process gives the Euclidean norm of the $d$-dimensional Brownian motion originating at $(x,0, \ldots, 0)$, that is,
\[ R_t = \sqrt{\bigl(x +W_t^{(1)}\bigr)^2 + \cdots + \bigl(W_t^{(d)}\bigr)^2}, \]
where the $\bigl(W_t^{(i)}\bigr)$ are independent standard Brownian motions, $i = 1, \ldots, d$. As is well known (see e.g.\ \cite{Revuz_etal_xx99}, p.445), $(R_t)$ satisfies the SDE
\begin{align*}
dR_t = \frac{d-1}{2} \frac{1}{R_t}dt + dW_t.
\end{align*}
Note that for non-integer values of $d$ the Bessel process of `dimension' $d$ is defined using the above SDE. The process has the transition density function
\begin{align*}
p_R(t,y,z) = z \left(\frac{z}{y}\right)^{\eta} t^{-1} e^{-(y^2 + z^2)/2t} {\cal I}_{\eta} \left(\frac{yz}{t} \right),
\end{align*}
where $\eta = d/2 -1$ and ${\cal I}_{\eta}(z)$ is the modified Bessel function of the first kind. For further information, see Chapter XI in \cite{Revuz_etal_xx99}.
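Since the bounds below are built on this reference density, it is convenient to have it available numerically. A small Python sketch (for illustration only; it assumes SciPy's \texttt{scipy.special.iv} for the modified Bessel function ${\cal I}_{\eta}$) implements $p_R$ and checks the normalization:
\begin{verbatim}
import numpy as np
from scipy.special import iv

def p_bessel(t, y, z, d):
    # transition density p_R(t, y, z) of the d-dimensional Bessel process
    eta = d / 2.0 - 1.0
    return (z * (z / y)**eta / t
            * np.exp(-(y**2 + z**2) / (2 * t)) * iv(eta, y * z / t))

# sanity check: the density in z should integrate to ~1
z = np.linspace(1e-6, 25.0, 200001)
dz = z[1] - z[0]
print(np.sum(p_bessel(1.0, 0.5, z, d=3.0)) * dz)  # ~1.0
\end{verbatim}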
We denote by $\Pr_x$ and $\mathbb{E}_x$ probabilities and expectations conditional on the process in question ($(X_t)$ or some other process, which will be obvious from the context) starting at $x$. We work with the natural filtration $\mathcal{F}_s := \sigma\left(X_u: u \leq s\right)$.
Finally, note that the present work can be easily extended to a class of $n$-dimensional diffusions for $n \geq 2$. If $(X_t)$ is an $n$-dimensional diffusion satisfying the SDE
\[ dX_t = \mu(X_t) dt + dW_t, \]
$(W_t)$ being an $n$-dimensional Brownian motion, then the majority of results can be extended assuming $\mu(\cdot)$ is curl-free. The extension is straightforward, and in this note we shall only concern ourselves with the one-dimensional case.
\section{Main Results}
\label{sec:main_result}
This section states and proves a result relating probabilities for the diffusion $(X_t)$ to expectations under an appropriate reference measure. In the case [A], the result may be known, and we state it here for completeness. The extension to case [B] is straightforward. We then apply this proposition to obtain bounds for transition densities and distributions.
\textbf{Relation to the Reference Process}
We define the functions $G(y)$ and $N(t)$ as follows, according to the diffusion interval of $(X_t)$:
\begin{enumerate}
\item[] [A] If the diffusion interval of $(X_t)$ is $\mathbb{R}$, then we define, for some fixed $y_0 \in \mathbb{R}$,
\begin{equation}
\begin{array}{rl}
G(y) \!\!\!&:= \displaystyle\int_{y_0}^{y} \mu(z) dz,\\
\vphantom{.}\\
N(t) \!\!\!&:= \displaystyle\int_0^t \left(\mu'(X_u) + \mu^2(X_u)\right) du.
\end{array}
\label{eq:bm_G}
\end{equation}
\item[] [B] If the diffusion interval of $(X_t)$ is $(0, \infty)$, then we define, for some fixed $d \geq 3$ (the dimension of the reference Bessel process) and $y_0 > 0$,
\begin{align*}
G(y) &:= \int_{y_0}^{y} \left(\mu(z) - \frac{d-1}{2z} \right) dz,\\
N(t) &:= \int_0^t \left( \mu'(X_u) - \frac{(d-1)(d-3)}{4X_u^2} + \mu^2(X_u) \right) du.
\end{align*}
\end{enumerate}
\begin{rem}
For diffusions on $(0, \infty)$, the choice of $d$ is arbitrary subject to $d \geq 3$. Therefore this choice can be used to optimise any bounds presented in the next subsection.
\end{rem}
\begin{prop}
\label{th:gen_int}
Assume that the drift coefficient $\mu$ of $(X_t)$ is absolutely continuous. Then, for any $A \in \mathcal{F}_t$,
\[ \Pr_x(A) = \hat{\Exp}_x\left[ e^{G(X_t) - G(x)} e^{-(1/2) N(t)}\,\mathbbm{1}_A \right], \]
where $\hat{\Exp}_x$ denotes expectation with respect to the law of the reference process.
\end{prop}
\begin{rem}
In terms of the original process $(U_t)$ defined in \eqref{eq:original_diff}, the condition of absolute continuity of $\mu(y)$ requires $\nu(z)$ and $\sigma'(z)$ to be absolutely continuous.
\end{rem}
\begin{proof}
The proof is a straightforward application of Girsanov's theorem and its idea is similar to the one used in \cite{Baldi_etal_0802}. We present the proof for case [A]; the proof for case [B] is completed similarly (see \cite{Downes_etal_xx08} for the general approach).
Define $\mathbb{Q}_x$ to be the reference measure such that under $\mathbb{Q}_x$, $X_0 = x$ and
\[ dX_s = d\widetilde{W}_s, \]
for a $\mathbb{Q}_x$ Brownian motion $(\widetilde{W}_s)$. Set
\begin{align*}
\zeta_s &:= \frac{d\Pr_x}{d\mathbb{Q}_x} = \exp \left\{ \int_0^s \mu (X_u) d\widetilde{W}_u - \frac{1}{2} \int_0^s \mu^2(X_u) du \right\},
\end{align*}
so by Girsanov's theorem under $\Pr_x$ we regain the original process $(X_s)$ satisfying
\[ dX_s = \mu(X_s) ds + dW_s, \]
for a $\Pr_x$ Brownian motion $(W_s)$. The regularity conditions allowing this application of Girsanov's theorem are satisfied (see e.g.\ Theorem~7.19 in \cite{Liptser_etal_xx01}), since under both $\Pr_x$ and $\mathbb{Q}_x$ the process $(X_s)$ is non-explosive and $\mu(y)$ is locally bounded, so we have, for any $t>0$,
\[ \Pr_x \left( \int_0^t \mu^2(X_s) ds < \infty \right) = \mathbb{Q}_x \left( \int_0^t \mu^2(X_s) ds < \infty \right) = 1. \]
We then have, under $\mathbb{Q}_x$, using It\^o's formula and \eqref{eq:bm_G},
\begin{align}
\label{eq:dG}
d G (X_s) &= \mu(X_s) dX_s + \frac{1}{2} \mu'(X_s) (dX_s)^2\notag\\
&= \mu(X_s) d\widetilde{W}_s + \frac{1}{2} \mu'(X_s) ds.
\end{align}
Note that in order to apply It\^o's formula, we only require $\mu$ to be absolutely continuous with Radon-Nikodym derivative $\mu'$ (see e.g.\ Theorem~19.5 in \cite{Kallenberg_xx97}). This also implies the above is defined uniquely only up to a set of Lebesgue measure zero, and we are free to assign an arbitrary value to $\mu'$ at points of discontinuity.
Rearranging \eqref{eq:dG} gives
\[ \int_0^s \mu (X_u) d\widetilde{W}_u = G(X_s) - G(X_0) - \frac{1}{2} \int_0^s \mu'(X_u) du. \]
Hence
\[ \zeta_s = \exp \left\{ G(X_s) - G(X_0) - \frac{1}{2} \int_0^s \left( \mu'(X_u) + \mu^2 (X_u) \right) du \right\}, \]
which together with
\begin{align*}
\Pr_x(A) = \mathbb{E}_x[\mathbbm{1}_A] = \int \mathbbm{1}_A \, d\Pr_x = \int \mathbbm{1}_A \zeta_t \, d\mathbb{Q}_x = \hat{\Exp}_x [ \zeta_t \mathbbm{1}_A ],
\end{align*}
completes the proof of the proposition.
\end{proof}
\textbf{Bounds for Transition Densities and Distributions}
Define $L$ and $M$ as follows, according to the diffusion interval of $(X_t)$:
\begin{enumerate}
\item[] [A] If the diffusion interval of $(X_t)$ is $\mathbb{R}$, then
\begin{align*}
L &:= \displaystyle \mbox{ess sup}\left(\mu'(y) + \mu^2(y)\right),\\
M &:= \displaystyle \mbox{ess inf}\left(\mu'(y) + \mu^2(y)\right),
\end{align*}
where the essential supremum/infimum is taken over $\mathbb{R}$.
\item[] [B] If the diffusion interval of $(X_t)$ is $(0, \infty)$, then, for some fixed $d \geq 3$ (the dimension of the reference Bessel process), we put
\begin{align*}
L &:= \displaystyle \mbox{ess sup} \left( \mu'(y) - \frac{(d-1)(d-3)}{4y^2} + \mu^2(y) \right),\\
M &:= \displaystyle \mbox{ess inf} \left( \mu'(y) - \frac{(d-1)(d-3)}{4y^2} + \mu^2(y) \right),
\end{align*}
where the essential supremum/infimum is taken over $(0, \infty)$.
\end{enumerate}
Note that in what follows, in the case [B], the dimension of the reference Bessel process may be chosen so as to optimise the particular bound. Recall also that $(Y_t)$ denotes the reference process (the Wiener process in case [A], the $d$-dimensional Bessel process in case [B]).
\begin{cor}
\label{cor:trans_dens}
The transition density of the diffusion $(X_t)$ is bounded according to
\begin{align}
\label{eq:trans_dens}
e^{-tL/2} \leq \frac{p_X(t,x,w)}{e^{G(w) - G(x)} p_Y(t,x,w)} \leq e^{-tM/2}.
\end{align}
\end{cor}
\begin{rem}
The bound is sharp: for a constant drift coefficient $\mu$, equalities hold in \eqref{eq:trans_dens}.
\end{rem}
\begin{proof}
Recall (see the proof of Proposition~\ref{th:gen_int}) that we only required $\mu$ to be absolutely continuous, and that its value on a set of Lebesgue measure zero is irrelevant. Hence $L$ (respectively $M$) gives an upper (lower) bound for the integrand in $N(t)$ for all paths. Applying Proposition~\ref{th:gen_int} with $A = \{X_t \in [w, w+h)\}$, $h>0$, gives
\begin{align*}
\inf_{w \leq y \leq w+h} e^{G(y) - G(x)} e^{-tL/2} \Pr_x(Y_t \in [w, &w+h)) \leq \Pr_x(X_t \in [w, w+h))\\
&\leq \sup_{w \leq y \leq w+h} e^{G(y) - G(x)} e^{-tM/2} \Pr_x(Y_t \in [w, w+h)).
\end{align*}
Taking the limits as $h \rightarrow 0$ gives the required result.
\end{proof}
In the case of finite $L$ and $M$ this immediately gives an asymptotic expression for the density $p_X(t,x,w)$ as $t \rightarrow 0$.
\begin{cor}
If $-\infty < L, M < \infty$, then, as $t \rightarrow 0$,
\[ p_X(t,x,w) \sim e^{G(w) - G(x)} p_Y(t,x,w), \]
uniformly in $x$, $w$.
\end{cor}
While the tightest bounds for the transition distribution are obtained by integrating the bounds for the density given above, this does not, in general, yield a simple closed form expression. We mention other, less tight bounds that are simple and are obtained by a further application of Proposition~\ref{th:gen_int}.
\begin{cor}
\label{cor:trans_dist}
The transition distribution function of the diffusion $(X_t)$ admits the following bound: for any $w \in \mathbb{R}$,
\begin{align*}
\inf_{ \ell \leq y \leq w} e^{G(y) - G(x)} e^{-tL/2} P_Y(t,x,w) \leq P_X(t,x,w)
\leq \sup_{\ell \leq y \leq w} e^{G(y) - G(x)} e^{-tM/2} P_Y(t,x,w),
\end{align*}
where $\ell$ is the lower bound of the diffusion interval.
\end{cor}
The assertion of the corollary immediately follows from that of Proposition~\ref{th:gen_int} with $A = \{X_t \leq w \}$.
By considering other events (e.g.\ $A = \{ X_t > w \}$), other similar bounds can be derived.
\textbf{Further Probabilities}
While the focus of this note is on bounds for the transition functions, Proposition \ref{th:gen_int} can be used to obtain other useful results. For example, consider
\[ \eta_X(t, x, y, w) := \frac{d}{dw} \Pr_x \left( \sup_{0 \leq s \leq t} X_s \geq y, X_t \leq w \right). \]
Such a function has applications in many areas, for example the pricing of barrier options in financial markets. Using ideas similar to those in the proof of Corollary~\ref{cor:trans_dens} immediately gives
\begin{cor}
\label{cor:other_probs}
For the diffusion $(X_t)$,
\[ e^{ -tL/2} \leq \frac{\eta_X(t,x,y,w)}{e^{G(w) - G(x)} \eta_Y(t,x,y,w)} \leq e^{ -tM/2}. \]
\end{cor}
Note that for such probabilities the bounds may be improved, if desired, by replacing $L$ and $M$ with appropriate constants on a case-by-case basis. For example, if we are considering the probability that our diffusion stays between two constant boundaries at the levels $c_1 < c_2$, then the supremum (for $L$) and infimum (for $M$) need only be taken over the range $c_1 \leq y \leq c_2$.
Other probabilities may be considered in a similar way.
\section{Numerical Results}
\label{sec:num_results}
Here we illustrate the precision of the results from the previous section for transition densities. Bounds from Corollary~\ref{cor:trans_dens} are compared with known transition density functions and previously available bounds for the Ornstein-Uhlenbeck process in the case [A]. For the case [B], we only compare the bounds obtained in the current paper with exact results, since there appears to be no other known bounds in the literature. We also construct a `truncated Ornstein-Uhlenbeck' process in order to compare our results with other bounds available in the literature. For the Ornstein-Uhlenbeck process we also consider an example to illustrate Corollary~\ref{cor:other_probs}.
\textbf{The Ornstein-Uhlenbeck Process}
We consider an Ornstein-Uhlenbeck process $(S_t)$, which satisfies the SDE
\[ dS_t = -S_t dt + dW_t. \]
This process has the transition density
\[ p_S(t,x,w) = \frac{ e^{t}}{\sqrt{\pi (e^{2 t}-1)}} \exp\left(\frac{\left(w e^{t} - x\right)^2}{1-e^{2 t}} \right), \]
see e.g.\ (1.0.6) in \cite{Borodin_etal_xx02}, p.522, and we begin by comparing this with the bound obtained in Corollary~\ref{cor:trans_dens}. Since $\mu(z) = -z$, we have
\[ M = -1, \qquad G(w) - G(x) = \frac{1}{2}(x^2 - w^2), \]
giving the estimate
\[ p_S(t,x,w) \leq e^{\frac{1}{2}(x^2 - w^2 + t)} p_W(t,x,w). \]
Clearly in this case the bound is tighter for smaller values of $|x|$ and $t$. Figure~\ref{fig:OU_dens_cent} displays a plot of the right-hand side of this bound together with the exact density for $x=0$ and $t=1,2$.
\begin{figure}[!htb]
\begin{centering}
\includegraphics[width = 0.90 \textwidth, height = 2.5 in]{ou_dens}
\caption{Transition density for an Ornstein-Uhlenbeck process, alongside its upper bound, with $x=0$. The left-hand side displays the functions for $t=1$, the right for $t=2$.}
\label{fig:OU_dens_cent}
\end{centering}
\end{figure}
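The comparison in the figure is easy to reproduce. A minimal Python sketch (illustrative only; \texttt{p\_upper} implements the bound above with $M=-1$ and $G(w)-G(x)=\frac12(x^2-w^2)$, and $p_W$ denotes the Gaussian heat kernel) confirms the inequality on a grid of $w$ values:
\begin{verbatim}
import numpy as np

def p_exact(t, x, w):
    # Ornstein-Uhlenbeck transition density, (1.0.6) in Borodin & Salminen
    return (np.exp(t) / np.sqrt(np.pi * (np.exp(2*t) - 1))
            * np.exp((w * np.exp(t) - x)**2 / (1 - np.exp(2*t))))

def p_upper(t, x, w):
    # bound of Corollary 2 with M = -1 and G(w) - G(x) = (x^2 - w^2)/2
    p_W = np.exp(-(w - x)**2 / (2*t)) / np.sqrt(2*np.pi*t)
    return np.exp(0.5 * (x**2 - w**2 + t)) * p_W

w = np.linspace(-4.0, 4.0, 801)
print(np.all(p_upper(1.0, 0.0, w) >= p_exact(1.0, 0.0, w)))  # True
\end{verbatim}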
To compare our results with other known bounds for transition functions, we look at the bound given by (3.3) in \cite{Qian_etal_0504} (which, to the best of the author's knowledge, is the only bound available for such a process). Figure~\ref{fig:OU_dens_comp} compares this bound with that obtained in Corollary~\ref{cor:trans_dens} and the exact transition density. The values $x=0$ and $t=1$ are used (for the bound in \cite{Qian_etal_0504}, $q=1.2$ seemed to give the best result; see \cite{Qian_etal_0504} for further information on notation). Note that the bound of \cite{Qian_etal_0504} is sharper for $w$ close to zero, but quickly grows to very large values as $|w|$ increases, and in general the bounds presented in this note offer a large improvement. This is typical for all values of $t$, with the effect becoming more pronounced as $t$ decreases. A meaningful lower bound for this process is unavailable by the methods of the present paper, since $L = -\infty$.
\begin{figure}[!htb]
\begin{centering}
\includegraphics[width = 0.45 \textwidth, height = 2.5 in]{ou_comp}
\caption{Transition density for an Ornstein-Uhlenbeck process (solid line) compared to bounds given in \cite{Qian_etal_0504} (dashed line) and Corollary~\ref{cor:trans_dens} (dotted line), for $x=0$ and $t=1$.}
\label{fig:OU_dens_comp}
\end{centering}
\end{figure}
For this example, we briefly look at the bound obtained in Corollary~\ref{cor:other_probs}. We have, see e.g.\ (1.1.8) in \cite{Borodin_etal_xx02}, p. 522,
\[ \eta_S(t,x,0,z) = \frac{1}{\sqrt{\pi (1 - e^{-2 t})}} \exp \left( -\frac{(|z| - x e^{-t})^2}{1 - e^{-2 t}} \right). \]
Figure~\ref{fig:ou_eta} compares this as a function of $t \in [0,1]$ with the bound obtained in Corollary~\ref{cor:other_probs},
\begin{align*}
\eta_S(t,x,0,z) &\leq \exp \left\{\frac{1}{2} (x^2 - z^2 + t) \right\} \eta_W(t,x,0,z)\\
&= \exp \left\{ \frac{1}{2} (x^2 - z^2 + t) \right\} \frac{1}{\sqrt{2 \pi t}} \exp \left\{ - \frac{1}{2t} (|z| - x)^2 \right\},
\end{align*}
where $\eta_W(t,x,0,z)$ is given by (1.1.8) on p. 154 of \cite{Borodin_etal_xx02}.
\begin{figure}[!htb]
\begin{centering}
\includegraphics[width = 0.45 \textwidth, height = 2.5 in]{sup_ou}
\caption{Bound for $\eta_S(t,x,0,z)$ compared with its true value, for $x = z = -0.5$.}
\label{fig:ou_eta}
\end{centering}
\end{figure}
\textbf{The Truncated Ornstein-Uhlenbeck Process}
Other density bounds available in the literature hold only for processes which have bounded drift. For completeness we compare one such bound with the results of this paper. We use the bound in \cite{Qian_etal_1103}, which is the most recent for bounded drift and seems to give the best results over a large domain. To use these results, however, we need a process with bounded drift. As such, we have chosen the `truncated Ornstein-Uhlenbeck' process, which we define as a process $(\overline{S}_t)$ satisfying the SDE
\[ d\overline{S}_t = \mu(\overline{S}_t) dt + dW_t, \]
where, for a fixed $c > 0$,
\[ \mu(z) =
\begin{cases}
c, & \qquad z < -c,\\
-z, & \qquad |z| \leq c,\\
-c, & \qquad z > c.
\end{cases}
\]
For this process we again have $M=-1$ and, assuming $|x| \leq c$,
\[ G(w) - G(x) =
\begin{cases}
\frac{1}{2}(c^2 + x^2) + cw, & \qquad w < -c,\\
\frac{1}{2}(x^2 - w^2), & \qquad |w| \leq c,\\
\frac{1}{2}(c^2 + x^2) - cw, & \qquad w > c.
\end{cases}
\]
Figure~\ref{fig:trunc_ou} displays the bounds from Corollary~\ref{cor:trans_dens} together with those in \cite{Qian_etal_1103} for different values of $c$ with $x=0$ and $t=1$. Smaller values of $c$ move the bounds closer together; however, for the given choice of $x$ and $t$ they do not touch until we use the (rather severe) truncation $c \approx 0.45$. In general the method outlined in this note provides a dramatic improvement. We have also plotted an estimate for the transition density using simulation. The simulation was performed using the predictor-corrector method (see e.g.\ \cite{Kloeden_etal_xx94} p.198), with $10^5$ simulations and $100$ time-steps.
\begin{figure}[!htb]
\begin{centering}
\includegraphics[width = 0.90 \textwidth, height = 2.5 in]{outrunc_dens}
\caption{Simulated density and bounds for the transition density of the truncated Ornstein-Uhlenbeck process $(\overline{S}_t)$. The solid lines give the simulated densities, the dotted lines the bound given in Corollary~\ref{cor:trans_dens} and the dashed lines the bounds from \cite{Qian_etal_1103}. Both graphs display the functions for $x=0$ and $t=1$, while the left graph displays them for $c=1$, the right for $c=2$.}
\label{fig:trunc_ou}
\end{centering}
\end{figure}
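A simulation of this kind takes only a few lines. The sketch below is illustrative only: it uses a plain Euler--Maruyama scheme instead of the predictor--corrector scheme employed for the figure, with an arbitrary fixed seed, and estimates the time-$t$ marginal of $(\overline{S}_t)$ by a histogram:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def mu(z, c):
    # truncated Ornstein-Uhlenbeck drift: -z clipped to [-c, c]
    return np.clip(-z, -c, c)

def marginal_histogram(c, x0=0.0, t=1.0, n_paths=10**5, n_steps=100):
    # Euler-Maruyama estimate of the density of S_t at time t
    dt = t / n_steps
    z = np.full(n_paths, x0)
    for _ in range(n_steps):
        z += mu(z, c) * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
    return np.histogram(z, bins=100, density=True)

hist, edges = marginal_histogram(c=1.0)
\end{verbatim}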
\textbf{A Diffusion on $(0, \infty)$}
Finally, we consider a process from the case [B]. The author believes this is the first paper to present a bound on transition densities without the linear growth constraint. The process $(V_t)$ satisfying the SDE
\begin{align}
\label{eq:feller_sde}
dV_t = (p V_t + q)dt + \sqrt{2 r V_t} dW_t
\end{align}
with $p$, $q \in \mathbb{R}$ and $r>0$, has a known transition density (see (26) in \cite{Giorno_etal_0686}). After applying the transform $Z_t = F(V_t)$, with $F(y) = \sqrt{\frac{2}{r}y}$ by \eqref{eq:fn_transform}, we obtain the process
\[ dZ_t = \mu(Z_t) dt + dW_t, \]
where
\[ \mu(y) = \frac{p}{2} y + \frac{1}{y}\left( \frac{q}{r} - \frac{1}{2} \right). \]
For $q > r$ this dominates the drift of a Bessel process of order $2q/r > 2$, so it is clearly a diffusion on $(0, \infty)$.
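The transformed drift can be confirmed symbolically. The following sketch (illustrative only; it assumes SymPy and fixes the integration constant in \eqref{eq:fn_transform} by taking $y_0=0$) carries out the transform and the composition defining $\mu$:
\begin{verbatim}
import sympy as sp

y, v, p, q, r = sp.symbols('y v p q r', positive=True)
nu = p*v + q                     # drift of (V_t)
sigma = sp.sqrt(2*r*v)           # diffusion coefficient of (V_t)

F = sp.integrate(1/sigma, v)     # F(v) = sqrt(2 v / r), taking y0 = 0
v_of_y = sp.solve(sp.Eq(F, y), v)[0]   # F^{-1}(y) = r y^2 / 2

mu = sp.simplify((nu/sigma - sp.Rational(1, 2)*sp.diff(sigma, v))
                 .subs(v, v_of_y))
print(sp.expand(mu))             # expect p*y/2 + q/(r*y) - 1/(2*y)
\end{verbatim}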
We take the values $q=2.5$, $r=1$ and $p=1$. Using these values, we have
\[ M = \inf_{0 < y < \infty} \left[ \frac{y^2}{4} + 2.5 + \frac{1}{y^2} \left(2 - \frac{(d-1)(d-3)}{4}\right) \right], \]
and
\begin{align*}
G(y) - G(x) = \frac{1}{4}(y^2 - x^2) + c \log \left( \frac{y}{x} \right),
\end{align*}
where $d$ is the order of the reference Bessel process and $c = 2 - (d-1)/2$.
It remains to choose the order of the reference Bessel process. It is not clear how to define the `best' order of the reference process for a range of $w$ values, as for fixed $t$ and $x$ the upper bound for $p_Z(t,x,w)$ is minimised for different values of $d$ depending on the value of $w$. In Figure~\ref{fig:rplus_dens} we have taken $t=x=0.5$ and used $d=4.7$; however, depending on the relevant criterion improvements can be made. Again, a meaningful lower bound for this process is unavailable by the methods of this paper, since $L= -\infty$.
\begin{figure}[!htb]
\begin{centering}
\includegraphics[width = 0.45 \textwidth, height = 2.5 in]{plus_dens}
\caption{Transition density for the diffusion \eqref{eq:feller_sde}, alongside its upper bound, with $x=0.5$ and $t=0.5$.}
\label{fig:rplus_dens}
\end{centering}
\end{figure}
\newpage
{\bf Acknowledgements:} This research was supported by the ARC Centre of Excellence for Mathematics and Statistics of Complex Systems. The author is grateful for many useful discussions with K. Borovkov which led to improvements in the paper.
\section{Introduction}
An intriguing open question in condensed matter physics, one with potential practical significance, is whether there exists a lower bound for the density of a crystal. In other words, how large can the average distance between nearest neighbor atoms be, before crystalline long-range order is suppressed by thermal and quantum fluctuations? It remains unclear whether a lower limit exists, or
how to approach it experimentally. We begin our discussion by rehashing a few basic facts about the crystalline phase of matter.
\\ \indent
Crystallization occurs at low temperature ($T$) in almost all known substances, as the state of lowest energy (ground state) is approached. Classically, the ground state is one
in which the potential energy of interaction among the constituent particles is minimized,
a condition that corresponds to an orderly arrangement of particles in regular, periodic lattices.
In most cases, quantum mechanics affects this fundamental conclusion only quantitatively, as zero-point motion of particles results in lower equilibrium densities and melting temperatures with respect to what one would predict classically; typically, these corrections are relatively small. Only in helium is the classical picture upended by quantum mechanics, as the fluid resists crystallization all the way down to temperature $T=0$ K, under the pressure of its own vapor.
\\ \indent
In many
condensed matter systems, the interaction among atoms
is dominated by pairwise contributions, whose ubiquitous features are a strong repulsion at interparticle distances less than a characteristic length $\sigma$, as well as an attractive part, which can be described by an effective energy well depth $\epsilon$. The dimensionless parameter $\Lambda=\hbar^2/[m\epsilon\sigma^2]$, where $m$ is the particle mass, quantifies the relative importance of quantum-mechanical effects in the ground state of the system. \\ \indent
It has been established for a specific model pair potential incorporating the above basic features, namely the
Lennard-Jones potential,
and for Bose statistics,
that the equilibrium phase is a crystal if $\Lambda < \Lambda_c \approx 0.15$; it is a (super)fluid if $0.15\lesssim \Lambda\lesssim 0.46$, while, for $\Lambda > 0.46$, the system only exists in the gas phase \cite{pnas}. The value of $\Lambda_c$ is estimated \cite{nosanow} to be slightly higher ($\approx$ 20 \%) for systems obeying Fermi statistics. For substances whose elementary constituents are relatively simple atoms or molecules, this result provides a useful, general criterion to assess their propensity to crystallize, as measured by their proximity in parameter space to a quantum phase transition to a fluid phase. It can be used to infer, at least semi-quantitatively, macroscopic properties of the crystal, such as its density and melting temperature, both of which generally decrease \cite{pnas} as $\Lambda\to\Lambda_c$.
\\ \indent
The two stable isotopes of helium have the highest known values of $\Lambda$, namely 0.24 (0.18) for $^3$He ($^4$He); for all other
naturally occurring substances, $\Lambda$ is considerably lower. The next highest value is that of molecular hydrogen (H$_2$), namely $\Lambda = 0.08$, quickly decreasing for heavier elements and compounds. At low temperature, H$_2$ forms one of the least dense crystals known, of density $\rho=0.0261$ \AA$^{-3}$ (mass density 0.086 gr/cc). The low mass of a H$_2$ molecule (half of that of a $^4$He atom) and its bosonic character (its total spin is $S=0$ in its ground state)
led to the speculation that liquid H$_2$ may turn superfluid at low temperature \cite{ginzburg}, just like $^4$He. In practice, no superfluid phase is observed (not even a metastable one), as molecular interactions, and specifically \cite{boninsegni18} the relatively large value of $\sigma$ ($\sim 3$ \AA) cause H$_2$ to crystallize at a temperature $T=13.8$\,K.
\\ \indent
Present consensus is
that H$_2$ is a non-superfluid, insulating crystal at low temperature, including in reduced dimensions \cite{2d,1d}; only small clusters of parahydrogen ($\sim 30$ molecules or less) are predicted \cite{sindzingre,mezz1,mezz2,2020} to turn superfluid at $T\sim 1$ K,
for which some experimental evidence has been reported \cite{grebenev}.
If, hypothetically, the mass of the molecules could be progressively reduced, while leaving the interaction unchanged, thus increasing $\Lambda$ from its value for H$_2$ all the way to $\Lambda_c$, several intriguing scenarios might arise, including a low temperature superfluid liquid phase, freezing into a low-density crystal at $T=0$, and even a supersolid phase, namely one enjoying at the same time crystalline order and superfluidity \cite{supersolid}. Obviously, the value of $\Lambda$ can also be modified by changing one or both of the interaction parameters ($\epsilon$ and $\sigma$) independently of the mass; in this work, however, we focus for simplicity on the effect of mass on the physics of the system.
\\ \indent
One potential way to tune the mass is via substitution of protons or
electrons with muons. For example, a molecule of muonium hydride (HMu)
differs from H$_2$ by the replacement of one of the two protons with an antimuon ($\mu^+$), whose mass is approximately 11\% of that of a proton.
Quantum chemistry calculations
have shown that it is very similar in size to H$_2$, and has the same quantum numbers in the ground state
\footnote{Specifically, the average distance between the proton and the muon in a HMu molecule is estimated at 0.8 \AA, whereas that between the two protons in a H$_2$ molecule is 0.74 \AA. See Refs.~\cite{suff,zhou}}.
It is therefore not inconceivable that the interaction between two HMu molecules might be quantitatively close to that between two H$_2$ molecules
(we further discuss this aspect below). In this case the value of the parameter $\Lambda$
is $\sim 0.14$, i.e., very close to $\Lambda_c$ for a Bose system. This leads to the speculation that this substance may crystallize into a highly quantal solid, displaying striking behavior compared to ordinary crystals.
In order to investigate such a scenario, and more specifically to gain insight into the effect of mass reduction, we studied theoretically the low temperature phase diagram of a hypothetical condensed phase of HMu, by means of first-principles computer simulations of a microscopic model derived from that of H$_2$.
\\ \indent
The main result is that the equilibrium phase of the system at low temperature is a crystal of very low density, about $5\%$ lower than the $T=0$ equilibrium density of {\em liquid} $^4$He. Despite the low density, however, such a crystal melts at a fairly high temperature, close to 9 K, i.e., only a few K lower than that of H$_2$. No superfluid phases are observed, either fluid or crystalline, as exchanges of indistinguishable particles, known to underlie superfluidity \cite{feynman}, are strongly suppressed in this system, much as they are in H$_2$, by the relatively large size of the hard core repulsion at short distances (i.e., $\sigma$). As a result, the behavior of this hypothetical system can be largely understood along classical lines.
This underscores once again the crucial role of exchanges of identical particles in destabilizing the classical picture, which is not qualitatively altered by zero-point motion of particles alone \cite{role}; it also reinforces the conclusion \cite{boninsegni18} reached elsewhere that it is the size of the hard core diameter of the intermolecular interaction that prevents superfluidity in H$_2$, {\em not} the depth $\epsilon$ of its attractive part.
\\ \indent
To complement our investigation, we also studied nanoscale HMu
clusters of varying sizes, comprising up to a few hundred molecules. We find the behavior of these clusters to be much closer to that of $^4$He (rather than H$_2$) clusters. For example, at $T=1$ K the structure of HMu clusters is liquid-like, and their superfluid response approaches 100\%, even for the largest cluster considered (200 HMu molecules). Thus, while mass reduction does not bring about substantial physical differences between the behavior of bulk HMu and that of H$_2$, it significantly differentiates the physics of nanoscale size clusters.
\section{Model}\label{mod}
We model the HMu molecules as point-like, identical particles of mass $m$ and spin $S=0$, thus obeying Bose statistics.
For the bulk studies, the system is enclosed in a cubic cell of volume $V=L^3$ with periodic boundary conditions in all three directions, giving a density $\rho=N/V$.
For the cluster studies, the $N$ particles are
placed in a supercell of large enough size to remove boundary effects.
The quantum-mechanical many-body Hamiltonian reads as follows:
\begin{eqnarray}\label{u}
\hat H = - \lambda \sum_{i}\nabla^2_{i}+\sum_{i<j}v(r_{ij})
\end{eqnarray}
where the first (second) sum runs over all particles (pairs of particles), $\lambda\equiv\hbar^2/2m=21.63$ K\AA$^{2}$ (reflecting the replacement of a proton with a $\mu^+$ in a H$_2$ molecule), $r_{ij}\equiv |{\bf r}_i-{\bf r}_j|$ and $v(r)$ denotes the pairwise interaction between two HMu molecules, which is assumed spherically symmetric.
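As an aside, the quantumness parameter of the Introduction follows directly from $\lambda$, since $\Lambda=\hbar^2/(m\epsilon\sigma^2)=2\lambda/(\epsilon\sigma^2)$. A minimal numerical sketch, where the Lennard-Jones-like H$_2$ parameters $\epsilon\approx 34.2$ K and $\sigma\approx 2.96$ \AA\ and the value $\lambda_{\rm H_2}=12.03$ K\AA$^2$ are indicative values we assume (not taken from Refs. \cite{SG,DJ,Szal}):
\begin{verbatim}
# Lambda = hbar^2/(m*eps*sigma^2) = 2*lam/(eps*sigma^2),
# with lam = hbar^2/(2m) in K*Angstrom^2, as in the Hamiltonian above.
eps, sigma = 34.2, 2.96          # K, Angstrom -- assumed H2-like values
for name, lam in [("H2", 12.03), ("HMu", 21.63)]:
    print(name, 2.0 * lam / (eps * sigma**2))
# -> H2 ~ 0.08, HMu ~ 0.14, consistent with the Introduction
\end{verbatim}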
\begin{figure}[h]
\centering
\includegraphics[width=2.0in]{com.pdf}
\caption{Geometry utilized in the calculation of the effective interaction between two HMu molecules. Shown is the line connecting the centers of mass of the two molecules, as well as the three angles describing their relative orientation, i.e., the two polar angles $\theta_1, \theta_2$, as well as the azimuthal angle $\phi$.}
\label{geom}
\end{figure}
\\ \indent
In order to decide on an adequate model potential to adopt in our calculation, we use as our starting point the H$_2$ intermolecular potential, for which a considerable amount of theoretical work has been carried out \cite{SG,DJ,Szal}. We consider here for definiteness the {\em ab initio} pair potential proposed in Ref. \onlinecite{Szal}.
The
most important effect of the replacement of one of the protons of the H$_2$ molecule with a $\mu^+$ is the shift of the center of mass of the molecule away from the midpoint, and toward the proton. We assume that this effect provides the leading order correction with respect to the H$_2$ intermolecular potential, and we set out to obtain a corrected version of the interaction for the new geometry, as illustrated in Fig.~\ref{geom}. We use the program provided in Ref. \cite{Szal} to generate a potential as a function of the distance between the midpoints and the angular configurations,
and then transform that to a potential as a function of the distance between the centers of mass and the angular configurations. Finally, we average over the angular configurations to obtain a one-dimensional isotropic potential. We take the distance between the proton and the $\mu^+$ to be that computed in Ref. \cite{zhou}, which differs only slightly from the distance between the two protons in the H$_2$ molecule.
\\ \indent
In Fig. \ref{fig:pot}, we compare the potential energy of interaction between two HMu molecules resulting from this procedure with that obtained for two H$_2$ molecules, i.e., with the center of mass at the midpoint. The comparison suggests that the displacement of the center of mass of the molecule results in a slight stiffening of the potential at short distance. Indeed, in the range of average interparticle separations explored in this work (namely the 3.5-3.7 \AA \ range), the differences between the interactions are minimal. Also shown in Fig. \ref{fig:pot} is the Silvera-Goldman potential \cite{SG}, which is arguably the most widely adopted in theoretical studies of the condensed phase of H$_2$, and has proven to afford a quantitatively accurate \cite{op} description of structure and energetics of the crystal. As one can see, it has a significantly smaller diameter and is considerably ``softer'' than the potential of Ref. \onlinecite{Szal}; the reason is that it incorporates, in an effective way, non-additive contributions (chiefly triplets), whose overall effect is to soften the repulsive core of the pair-wise part computed {\em ab initio}.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{pot}
\caption{The intermolecular interaction energy $E$ (in K) as a function of distance (\AA) between the centers of mass of the molecules, for
the H$_2$ (red, dashed) and HMu
(black, solid line) cases, obtained with the aid of the programs provided in Ref. \cite{Szal}. Also shown (blue, solid line) is the Silvera-Goldman potential.}
\label{fig:pot}
\end{figure}
\\ \indent
It therefore seems reasonable to utilize the Silvera-Goldman potential \cite{SG} to carry out our study, as the use of a quantitatively more accurate potential is not likely to affect the conclusions of our study in a significant way. Furthermore, because the majority of the theoretical studies of condensed H$_2$ have been carried out using the Silvera-Goldman potential, its use in this study allows us to assess the effect of mass alone.
\\ \indent
\section{Methodology}
The low temperature phase diagram of the thermodynamic system described by Eq. (\ref{u}) as a function of density and temperature has been studied in this work by means of first principles numerical simulations, based on the continuous-space Worm Algorithm \cite{worm1,worm2}. Since this technique is by now fairly well-established, and extensively described in the literature, we shall not review it here; we used a variant of the algorithm in which the number of particles $N$ is fixed \cite{mezz1,mezz2}.
Details of the simulation are standard; we made use of the fourth-order approximation for the short imaginary time ($\tau$) propagator (see, for instance, Ref. \onlinecite{jltp2}), and all of the results presented here are extrapolated to the $\tau\to 0$ limit. We generally found numerical estimates for structural and energetic properties of interest here, obtained with a value of the time step $\tau\sim 3.0\times 10^{-3}$ K$^{-1}$ to be indistinguishable from the extrapolated ones, within the statistical uncertainties of the calculation.
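Specifically, estimates computed at a few values of $\tau$ are fitted to the leading behavior expected for a fourth-order factorization, namely $A(\tau)\approx A(0)+a\,\tau^{4}$ (our reading of the standard procedure), and the extrapolated value $A(0)$ is quoted.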
The number $N$ of particles in a simulation depends on the setting. In the cluster calculations, $N$ can be chosen arbitrarily. For the bulk, the precise value of $N$ depends on the type of crystalline structure assumed; typically, $N$ was set to 128 for simulations of body-centered cubic (bcc) and face-centered cubic (fcc) structures, and to 216 for the hexagonal close-packed (hcp) structure. However, we also performed a few targeted simulations with twice as many particles, in order to gauge the quantitative importance of finite size effects.
\\ \indent
Physical quantities of interest for the bulk calculations include the energy per particle and pressure as a function of density and temperature, i.e., the thermodynamic equation of state in the low temperature limit. We estimated the contribution to the energy and the pressure arising from pairs of particles at distances greater than the largest distance allowed by the size of the simulation cell (i.e., $L/2$), by approximating the pair correlation function $g(r)$ with 1, for $r > L/2$. We have also computed the pair correlation function and the related static structure factor, in order to assess the presence of crystalline order, which can also be detected through visual inspection of the imaginary-time paths.
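Explicitly, this amounts to the standard tail estimates (a sketch, under the $g(r)\to 1$ assumption just stated),
\[
\frac{\Delta E}{N}\simeq \frac{\rho}{2}\int_{L/2}^{\infty}4\pi r^{2}\,v(r)\,dr,
\qquad
\Delta P \simeq -\frac{2\pi\rho^{2}}{3}\int_{L/2}^{\infty} r^{3}\,v'(r)\,dr.
\]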
\\ \indent
We probed for possible superfluid order through the direct calculation of the superfluid fraction using
the well-established winding number estimator \cite{pollock}. In order to assess the propensity of the system to develop a superfluid response, and its proximity to a superfluid transition, we also rely on a more indirect criterion, namely we monitor the frequency of cycles of permutations of identical particles involving a significant fraction of the particles in the system.
While there is no quantitative connection between permutation cycles and the superfluid fraction \cite{mezzacapo08}, a global superfluid phase requires exchanges of macroscopic numbers of particles (see, for instance, Ref. \onlinecite{feynman}).
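For reference, the three-dimensional winding-number estimator of Ref. \onlinecite{pollock} reads
\[
\frac{\rho_s}{\rho}=\frac{m\,\langle {\bf W}^{2}\rangle}{3\hbar^{2}\beta N},
\]
where ${\bf W}$ is the winding vector accumulated by the particle world lines and $\beta=1/(k_BT)$.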
\section{Results}
\subsection{Bulk}
We simulated crystalline phases of HMu assuming different structures, namely bcc, fcc and hcp. At low temperature, all of these crystals remain stable in the simulation. The energy difference between different crystal structures is typically small, of the order of the statistical errors of our calculation, i.e., a few tenths of a K. This is similar to what is observed in H$_2$ \cite{op}. Consequently, we did not attempt to establish what the actual equilibrium structure is, as this is not central to our investigation.
\\
\indent
As a general remark, we note that the estimates of the physical quantities in which we are interested remain unchanged below a temperature $T=2$ K. Thus, results for temperatures lower than 2 K can be considered ground state estimates.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{energy}
\caption{Energy per particle $e$ (in K) as a function of density $\rho$ (in \AA$^{-3}$), computed at temperature $T=1$ K. The solid line is a quadratic fit to the data. These energies are computed assuming a bcc solid structure.}
\label{fig:energy}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{sq.pdf}
\caption{Static structure factor computed at temperature $T=1$ K, for a hcp crystal of HMu of density $\rho=0.0209$ \AA$^{-3}$. The simulated system comprises $N=216$ molecules.}
\label{fig:sq}
\end{figure}
We begin by discussing the equation of state of the system in the $T\to 0$ limit.
Fig. \ref{fig:energy} shows the energy per HMu molecule computed as a function of density for a temperature $T=1$ K. The solid line is a quadratic fit to the data, whose fitting parameters yield the equilibrium density $\rho_0=0.02090(5)$ \AA$^{-3}$, corresponding to an average intermolecular distance 3.63 \AA, as well as the ground state energy $e_0=-45.75(2)$ K. The ground state energy is almost exactly \cite{op} one half of that of H$_2$, with a kinetic energy of 70.4(1) K, virtually identical to that of H$_2$ at equilibrium, i.e., at a 30\% higher density.
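For definiteness (a trivial algebraic step, spelled out here): writing the quadratic fit as $e(\rho)=a+b\rho+c\rho^{2}$, the quoted parameters follow as
\[
\rho_{0}=-\frac{b}{2c},\qquad e_{0}=a-\frac{b^{2}}{4c},
\]
or, equivalently, one may fit $e(\rho)=e_{0}+\tilde{c}\,(\rho-\rho_{0})^{2}$ directly.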
\\ \indent
The equilibrium density $\rho_0$ is lower than that of superfluid $^4$He, which is 0.02183 \AA$^{-3}$.
The system is in the crystalline phase, however,
as we can ascertain through the calculation of the static structure factor $S(q)$, shown in Fig. \ref{fig:sq}. The result shown in the figure pertains to a case in which particles are initially arranged into a hcp crystal. The sharp peaks occurring at values of $q$ corresponding to wave vectors of the reciprocal lattice signal long-range crystalline order.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{gr.pdf}
\caption{Pair correlation function $g(r)$ for HMu at density $\rho_0=0.0209$ \AA$^{-3}$ and $T=1$ K (squares). Also shown for comparison are the corresponding correlation functions for para-H$_2$ at $\rho=0.0261$ \AA$^{-3}$ (circles), and solid $^4$He at $\rho=0.0287$ \AA$^{-3}$ (triangles), at the same temperature. In all three cases the crystalline structure is hcp. Statistical errors are smaller than symbol sizes.}
\label{fig:gr}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{press.pdf}
\caption{Pressure of the HMu bcc crystal as a function of density at
$T=1$ K (circles). The line is a linear fit to the data. Statistical errors are smaller than symbol sizes.}
\label{fig:press}
\end{figure}
It is interesting to compare the structure of a HMu crystal to that of two other reference quantum solids, namely H$_2$ (more precisely, para-H$_2$) at its equilibrium density, namely $\rho=0.0261$ \AA$^{-3}$, and solid $^4$He near melting, at density $\rho=0.0287$ \AA$^{-3}$. The pair correlation functions shown in Fig. \ref{fig:gr} were computed for hcp crystals at temperature $T=1$ K (the results for H$_2$ and $^4$He were obtained in Ref. \onlinecite{marisa}). While the fact that the peaks appear at different distances reflects the difference in density, $^4$He being the densest and HMu the least dense of the three crystals, the most noticeable feature is that the peaks are much more pronounced in H$_2$ than in HMu and $^4$He, whose peak heights and widths are similar.
\\ \indent
Fig. \ref{fig:press} shows the pressure of a HMu crystal computed at $T=1$ K, as a function of density. In the relatively narrow density range considered, a linear fit is satisfactory, and allows us to compute the speed of sound $v = (m\rho\kappa)^{-1/2}$, where $\kappa = \rho^{-1}(\partial\rho/\partial P)$ is the compressibility. We obtained a speed of sound of $1300\pm100$ m/s at the equilibrium density.
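The conversion from the fitted slope to m/s involves only $\lambda$ and fundamental constants; a minimal sketch follows, in which the numerical value of the slope $\partial P/\partial\rho$ is a hypothetical placeholder (the actual fitted value is not quoted in the text).
\begin{verbatim}
import numpy as np
kB, hbar, ang = 1.380649e-23, 1.054572e-34, 1e-10   # SI units
lam = 21.63                              # hbar^2/(2m), K*Angstrom^2
m = hbar**2 / (2.0 * lam * kB * ang**2)  # HMu mass in kg (~1.12 amu)
slope = 230.0   # dP/drho in K: hypothetical placeholder, not the fitted value
v = np.sqrt(kB * slope / m)              # v = [(1/m) dP/drho]^(1/2)
print(m / 1.66054e-27, v)                # ~1.1 amu, ~1.3e3 m/s
\end{verbatim}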
\\ \indent
Next, we discuss the possible superfluid properties of the crystal, as well as of the fluid into which the crystal melts upon raising the temperature. There is no evidence that the crystalline phase of HMu may display a finite superfluid response in the $T\to 0$ limit. Indeed, as far as superfluidity is concerned, the behavior of this crystal is virtually identical to that of solid H$_2$. The main observation is that exchanges of identical particles, which underlie superfluidity, are strongly suppressed in HMu, much like they are in H$_2$, mainly due to the relatively large diameter of the repulsive core of the pairwise interaction. Indeed, exchanges are essentially non-existent in solid HMu at the lowest temperature considered here ($T=1$ K), much like in solid H$_2$; the reduction of the particle mass by a factor of two does not alter the physics of the system in this regard.
\\ \indent
As temperature is raised, we estimate the melting temperature to be around $T\sim 9$ K. We arrive at this conclusion by computing the pressure as a function of temperature for different densities (at and below $\rho_0$), and by verifying that the system retains solid order even at negative pressure, all the way up to $T=8$ K. No evidence is seen of any metastable, under-pressurized fluid phase; on lowering the density, the system eventually breaks down into solid clusters, just like H$_2$ \cite{2d}. Above 8 K, the pressure of the solid phase jumps and stable fluid phases appear at lower densities, signaling the occurrence of melting. These fluid phases do not display any superfluid properties; exchanges of two or three particles occur with a frequency of approximately 0.1\%. It is known that quantum-mechanical effects in H$_2$ contribute about one half of the Lindemann ratio at melting \cite{marisa}; we did not perform the same calculation here, but quantum mechanics should be even more important in HMu, on account of the lighter mass. However, qualitatively, melting appears to occur very much as in H$_2$.
Also, our results suggest that thermal expansion, which is negligible \cite{cabrillo} in solid H$_2$, is likely very small in this crystal.
\subsection{Clusters}
It is also interesting to study the physics of nanoscale size clusters of HMu, and compare their behavior to that of parahydrogen clusters, for which, as mentioned above, superfluid behavior is predicted at temperatures of the order of 1 K, if their size is approximately 30 molecules or less. Crystalline behavior emerges rather rapidly for parahydrogen clusters of greater size \cite{cqp}, with ``supersolid'' behavior occurring for specific clusters \cite{supercl}. This calculation is carried out with the same methodology adopted for the bulk phase of the system, the only difference being that the simulation cell is now taken large enough that a single cluster forms. In this respect, the behavior of HMu clusters is closer to that of parahydrogen than to that of $^4$He clusters, in that no external potential is required to keep them together (as in the case of $^4$He), at least at sufficiently low temperature ($\lesssim 4$ K).
\\ \indent
However, the reduced molecular mass makes the physics of HMu clusters both quantitatively and qualitatively different from that of parahydrogen clusters. The first observation is that the superfluid response is greatly enhanced; specifically, it is found that clusters with as many as $N=200$ molecules are close to 100\% superfluid at $T=1$ K, and remain at least 50\% superfluid up to $T\lesssim4$ K, at which point they begin to evaporate, i.e., they do not appear to stay together as normal clusters.
This is in stark contrast with parahydrogen clusters, where exchanges are rare in clusters of more than 30 molecules at this temperature, and they would involve at most $\sim 10$ particles. Consistently, the superfluid response is insignificant in parahydrogen clusters, and they stay together almost exclusively due to the potential energy of interaction.
Our results highlight the importance of the energetic contribution of quantum-mechanical exchanges in the stabilization of HMu clusters at low temperature. Indeed, at $T=1$ K we observe cycles of exchanges involving {\em all} of the particles in HMu clusters comprising as many as 200 molecules.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{profiles.pdf}
\caption{Radial density profiles (computed with respect to the center of mass) for HMu clusters comprising $N=30$, 100 and 200 molecules, at temperature $T=1$ K. At this temperature, all of these clusters are essentially 100\% superfluid. Also shown for comparison is the same quantity for a cluster of $N=30$ parahydrogen molecules, at the same temperature; this cluster has no significant superfluid response.}
\label{profiles}
\end{figure}
\\ \indent
Fig. \ref{profiles} shows density profiles computed with respect to the
center of mass
of the system for three different clusters of HMu, comprising $N=30$, 100 and 200 molecules. As mentioned above, all these clusters are essentially fully superfluid at this temperature. These density profiles are qualitatively similar, nearly featureless and reminiscent of those computed for $^4$He droplets \cite{sindzingre89}. The comparison with the density profile computed for a cluster of 30 parahydrogen molecules, also shown in Fig. \ref{profiles}, illustrates how the latter is much more compact and displays pronounced oscillations, which are indicative of a well-defined, solid-like shell structure. \\ \indent
As the number of molecules increases, clusters of HMu ought to evolve into solid-like objects, their structure approaching that of the bulk. The calculation of the radial superfluid density \cite{kwonls,mezzacapo08} suggests that crystallization begins to occur at the center of the cluster; for example, at $T=1$ K the largest HMu cluster studied here ($N=200$) displays a suppressed superfluid response inside a central core of approximately 5 \AA\ radius, in which crystalline order slowly begins to emerge, while the rest of the cluster is essentially entirely superfluid. We expect the non-superfluid core to grow with the size of the cluster. In other words, large clusters consist of a rigid, insulating core and a superfluid surface layer, with a rather clear demarcation between the two. This is qualitatively different from the behavior observed in parahydrogen clusters, in which crystallization occurs for much smaller sizes, and ``supersolid'' clusters simultaneously displaying solid-like and superfluid properties can be identified \cite{supercl}.
\section{Conclusions}
We have investigated the low temperature phase diagram of a bulk assembly of muonium hydride molecules, by means of first-principles quantum simulations. Our model assumes a pairwise, central interaction among HMu molecules which is identical to that of H$_2$ molecules. By mapping the H$_2$ intermolecular potential derived from {\em ab initio} calculations, we showed that this model provides a reasonable description of the HMu-HMu interaction.
It is certainly possible to carry out a more accurate determination of the pair potential, but
the main effect of the lower $\mu^+$ mass should be that of increasing the diameter of the repulsive core of the interaction, in turn suppressing exchanges even more. As illustrated in Ref. \onlinecite{role}, exchanges of identical particles play a crucial role in destabilizing the classical picture in a many-body system; when exchanges are suppressed, quantum zero point motion can only alter such a picture quantitatively, not qualitatively. Alternatively, this can be understood by the fact that $\Lambda$ is decreased
as the hard core radius is increased, making the system more classical. Obviously, regarding the interaction as spherically symmetric is also an approximation, but one that affords quantitatively accurate results for parahydrogen \cite{op}, for which the potential energy of interaction among molecules plays a quantitatively more important role in shaping the phase diagram of the condensed system.
\\ \indent
Perhaps the most significant observation of this study is the different physics of bulk and nanoscale size clusters of HMu. Bulk HMu is very similar to H$_2$; despite its very low density (lower than that of superfluid $^4$He), the equilibrium crystalline phase is stable below a temperature of about 9 K, much closer to the melting temperature of H$_2$ (13.8 K) than the mass difference may have led one to expect.
No evidence is observed of any superfluid phase, neither liquid nor crystalline, underscoring once again that, in order for a supersolid phase to be possible in continuous space, some physical mechanism is required to cause a ``softening'' of the repulsive part of the pair potential at short distances \cite{su}, even if only along one direction, as in the case of a dipolar interaction \cite{patterned}. On the other hand, clusters of HMu including up to a few hundred molecules display superfluid behavior similar to that of $^4$He clusters. This suggests that, as the value of the quantumness parameter $\Lambda$ approaches $\Lambda_c$ from below, one may observe nanoscale superfluidity in clusters of rather large size.
\\ \indent
We conclude by discussing the possible experimental realization of the system described in this work. The replacement of elementary constituents of matter, typically electrons, with other subatomic particles of the same charge, such as muons \cite{Egan}, has been discussed for a long time, and some experimental success has been reported. Even a bold scenario consisting of replacing {\em all} electrons \cite{wheeler} in atoms with muons (the so-called ``muonic matter'') has been considered; recently a long-lived ``pionic helium'' has been created \cite{pionic}. Thus,
it also seems plausible to replace a proton in a H$_2$ molecule with an antimuon; indeed, muonium chemistry has been an active area of research for several decades \cite{muonium}.
In order for ``muonium condensed matter'' to be feasible, a main challenge to overcome is the very short lifetime of the $\mu^+$, of the order of a $\mu$s.
\section*{Acknowledgements}
This work was supported by the Natural Sciences and Engineering Research Council of Canada, a Simons Investigator grant (DTS) and the Simons Collaboration on Ultra-Quantum Matter, which is a grant from the Simons Foundation (651440, DTS). Computing support of Compute Canada and of the Flatiron Institute is gratefully acknowledged. The Flatiron Institute is a division of the Simons Foundation.
\section{Introduction}
A current exciting challenge in particle physics is the explanation of the smallness of the neutrino masses through new physics at the TeV scale. In this regard, the inverse seesaw mechanism (ISS) \cite{ISS} became the paradigm of the successful TeV scale seesaw mechanism. Its minimal implementation requires the introduction to the electroweak standard model (SM) of two sets of three neutral fermion singlets, $N=(N_1\,,\,N_2\,,\,N_3)$ and $S=(S_1\,,\,S_2\,,\,S_3)$, composing the following mass terms in the flavor basis,
\begin{equation}
{\cal L}_{mass} \supset \bar \nu M_D N + \bar N M_N S + \frac{1}{2} \bar S^C \mu_N S + h.c.,
\label{masstermsoriginal}
\end{equation}
where $\nu=(\nu_1\,,\,\nu_2\,,\,\nu_3)$ is the set of standard neutrinos. In the basis $(\nu\,,\,N^C\,,\,S)$ the neutrino mass terms may be put in the following $9 \times 9$ matrix form,
\begin{equation}
M_\nu=
\begin{pmatrix}
0 & M_D & 0 \\
M^T_D & 0 & M_N\\
0 & M^T_N & \mu_N
\end{pmatrix}.
\label{ISSmatrix}
\end{equation}
In the regime $\mu_N \ll M_D < M_N$, the mechanism provides $m_\nu = M_D^T M_N^{-1}\mu_N (M_N^T)^{-1} M_D$ for the mass matrix of the standard neutrinos. Taking $M_D$ at the electroweak scale, $M_N$ at the TeV scale and $\mu_N$ at the keV scale, the mechanism provides standard neutrinos at the eV scale. The new set of fermion singlets $(N\,,\,S)$ develop masses at the $M_N$ scale and may be probed at the LHC.
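For orientation, parametrically for one generation,
\[
m_\nu \sim \Big(\frac{M_D}{M_N}\Big)^{2}\mu_N
\sim \Big(\frac{10^{2}\,\text{GeV}}{3\times 10^{3}\,\text{GeV}}\Big)^{2}\times 1\,\text{keV}\approx 1\,\text{eV},
\]
the numerical values being purely illustrative.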
The challenge concerning the ISS mechanism is to find scenarios that realize it. This means proposing models that generate the mass terms in Eq. (\ref{masstermsoriginal}). In this regard, as the ISS mechanism works at the TeV scale, it seems natural to look for realizations of the ISS mechanism in the framework of theories that we expect to manifest at the TeV scale \cite{ISSnonsusy1, ISSnonsusy2}, which is the case of supersymmetry (SUSY). Thus it seems interesting to look for scenarios that realize the ISS mechanism in the context of SUSY \cite{ISSSUSY1,ISSSUSY2,ISSSUSYR}.
We know already that a natural way of obtaining small neutrino masses in the context of the MSSM is to consider that R-parity, $R \equiv (-1)^{2S+3(B-L)}$, is violated through bilinear terms like $\mu_i \hat L_i \hat H_u$ in the superpotential \cite{standardRPV}. Thus we wonder if R-parity violation (RPV) is an interesting framework for the realization of the SUSYISS mechanism. For this, we implement the SUSYISS mechanism in a framework where R-parity and lepton number are violated explicitly but baryon number is conserved, in what we call the minimal realization of the SUSYISS mechanism, since the necessary set of superfields required to realize it is the original one, $\hat N^C_i$ and $\hat S_i$, only.
Moreover, it has been extensively discussed that the minimal supersymmetric standard model (MSSM) faces difficulties in accommodating a Higgs boson with a mass of 125 GeV, as discovered by ATLAS and CMS \cite{atlasCMS}, while respecting the principle of naturalness \cite{higgsmassanalysis}. This is so because, at tree level, the MSSM predicts a Higgs whose mass cannot exceed $91$ GeV. Thus robust loop corrections are necessary in order to lift this value to $125$ GeV. Consequently, stops with masses far above $1$ TeV are required. To accept this is to put the naturalness principle aside. We show that the SUSYISS mechanism developed here accommodates a $125$ GeV Higgs mass without resorting to robust loop corrections.
\section{The mechanism}
The supersymmetric version of the ISS (SUSYISS) mechanism\cite{ISSSUSY1} requires the assumption of two sets of three singlet superfields $\hat N^C_i\,,\, \hat S_i$ ($i=1,2,3$) composing, with the MSSM superfields, $\hat L^T_ i =(\hat \nu_i\,,\,\hat e_i)^T\,,\, \hat H_d^T=(\hat H^-_d \,,\,\hat H^0_d )^T\,,\, \hat H_u^T=(\hat H^+_u \,,\,\hat H^0_u)^T $, the following extra terms in the superpotential, $W \supset \hat L \hat H_u \hat N^C + \hat S M_N \hat N^C + \frac{1}{2} \hat S \mu_N \hat S$. A successful extension of the MSSM that realizes the SUSYISS mechanism must generate these terms. This would be an interesting result in particle physics since we would be providing an origin for the energy scales $M_N$ and $\mu_N$\cite{ISSSUSY2}.
The mechanism we propose here is minimal in the sense that it requires the addition to the MSSM of the two canonical singlet superfields $\hat N^C_i$ and $\hat S_i$, only. Moreover, we impose that the superpotential be invariant under the set of discrete symmetries $Z_3 \otimes Z_2$, with the fields transforming as follows: under $Z_3$,
\begin{equation}
(\hat S_i \,,\, \hat N^C_i\,,\, \hat e^C_i)\,\, \rightarrow \,\,w(\hat S_i \,,\, \hat N^C_i\,,\, \hat e^C_i), \,\,\,\,\hat L_i \,\,\rightarrow \,\, w^2 \hat L_i,
\label{z3}
\end{equation}
with $w=e^{i2\pi/3}$. Under $Z_2$ we have $\hat S_i \rightarrow - \hat S_i$, with all the remaining superfields transforming trivially under $Z_3\otimes Z_2$.
Thus the superpotential of the SUSYISS mechanism we propose here involves the following terms,
\small
\begin{eqnarray}
\hat{W}&=&\mu \hat{H}^{a}_{u}\hat{H}_{da} +
Y^{ij}_{u}\epsilon_{ab}\hat{Q}^{a}_{i}\hat{H}^{b}_{u}\hat{u}^{c}_{j} +
Y^{ij}_{d}\hat{Q}^{a}_{i}\hat{H}^{a}_{d}\hat{d}^{c}_{j} +
Y^{ij}_{e}\hat{L}^{a}_{i}\hat{H}^{a}_{d}\hat{e}^{c}_{j} \nonumber \\
&+&
Y^{ij}_{\nu}\epsilon_{ab}\hat{L}^{a}_i\hat{H}^{b}_{u}\hat{N}^{c}_{j} + \frac{1}{2}\lambda^{ijk}_{s}\hat{N}^{c}_{i}\hat{S}_{j}\hat{S}_{k} + \frac{1}{3}\lambda^{ijk}_{v}\hat{N}^{c}_{i}\hat{N}^{c}_{j}\hat{N}^{c}_{k},
\label{superpotential}
\end{eqnarray}
where $a\,,\,b$ are $SU(2)$ indices and $i$ and $j$ are generation indices. $\hat{Q}_{i}$, $\hat{u}^{c}_i$, $\hat{d}^{c}_{i}$ and $\hat{e}^{c}_{i}$ are the standard superfields of the MSSM. Note that the $Z_3 \otimes Z_2$ symmetry allows lepton number as well as R-parity to be explicitly violated in this model, by terms in the superpotential that involve the singlet superfields $\hat N^C_i$ and $\hat S_i$ only.
Now we make an important assumption. We assume that the scalars that compose the superfields $\hat N^C_i$ and $\hat S_i$ develop nonzero vacuum expectation values (VEVs), $\langle \tilde S_i \rangle = v_{S_i}$ and $\langle \tilde N^C_i \rangle = v_{N_i}$, respectively. This assumption provides the source of the canonical mass terms $M_N$ and $\mu_N$ of the SUSYISS mechanism. Note that, from the last two terms in the superpotential above, the VEV of the scalar $\tilde S$ becomes the source of the mass scale $M_N$, while the VEV of the scalar $\tilde N^C$ becomes the source of the mass scale $\mu_N$. In other words, the superpotential above, together with the assumption that the scalars $\tilde N^C_i$ and $\tilde S_i$ develop nonzero VEVs, has the required ingredients to realize the SUSYISS mechanism.
Another important point is the possible values that $v_{S_i}$ and $v_{N_i}$ may take. For this we have to obtain the potential of the model. The soft breaking sector plays an important role in the form of the potential.
The most general soft breaking sector of our interest involves the following terms,
\begin{eqnarray}
-\cal{ L}_{\mbox{soft}} &=& M^{2}_{Q_{ij}}\tilde{Q^a_i}^{*}\tilde{Q^a_j}
+ M^{2}_{u^{c}_{ij}}\tilde{u^c_i}^{*}\tilde{u^c_j} +
M^{2}_{d^{c}_{ij}}\tilde{d^c_i}^{*}\tilde{d^c_j} \nonumber \\
&+&
M^{2}_{L_{ij}}\tilde{L^a_i}^{*}\tilde{L^a_j} +
M^{2}_{e^{c}_{ij}}\tilde{e^c_i}^{*}\tilde{e^c_j} +
M^{2}_{h_{u}}H^{a*}_u H^{a}_u \nonumber \\
&+& M^{2}_{h_{d}}H^{a*}_d H^{a}_d + M^{2}_{\tilde N_i}
\tilde{N_i}^{*C}
\tilde{N}^{C}_i + M^{2}_{\tilde S_i}
\tilde{S^*_i} \tilde{S_i}
\nonumber \\
&-&
[\left(A_{u}Y_{u}\right)_{ij}\epsilon_{ab}\tilde{Q}^{a}_{i}H^{b}_{u}\tilde{u}^{c}_{j}
+ \left(A_{d}Y_{d}\right)_{ij}\tilde{Q}^{a}_{i}H^{a}_{d}\tilde{d}^{c}_{j}
\nonumber \\
&+&
\left(A_{e}Y_{e}\right)_{ij}\tilde{L}^{a}_{i}H^{a}_{d}\tilde{e}^{c}_{j} +
h.c.] - [B\mu H^{a}_u H^{a}_d + h.c.] \nonumber \\
&+& \frac{1}{2}
\left(M_{3}\tilde{\lambda}_{3}\tilde{\lambda}_{3}
+ M_{2}\tilde{\lambda}_{2}\tilde{\lambda}_{2} +
M_{1}\tilde{\lambda}_{1}\tilde{\lambda}_{1} + h.c.\right) \nonumber \\
&+& (A_{y}Y_{\nu})^{ij}
\epsilon_{ab}\tilde{L}^{a}_i H^{b}_{u}\tilde{N}^{*C}_j \nonumber \\
&+& [\frac{1}{2}
(A_{s}\lambda_{s})^{ijk}\tilde{N}^{*C}_i\tilde{S_j}\tilde{S_k} +
\frac{1}{3}(A_{v}\lambda_{v})^{ijk}\tilde{N}^{* C}_i\tilde{N}^{*C}_j
\tilde{N}^{*C}_k \nonumber \\ &+& h.c.].
\label{softterms}
\end{eqnarray}
Note that the last two trilinear terms violate lepton number explicitly, and the energy scale parameters $A_s$ and $A_v$ characterize such violation.
A common assumption in developing ISS mechanisms is to take the new neutral singlet fermions degenerate in masses and self-couplings. However, for our case here, it seems more convenient, instead of considering the degenerate case, to consider the case of only one generation of superfields. The extension to the case of three generations is straightforward and the results are practically the same.
The potential of the model is composed of the terms $V=V_{soft} + V_D + V_F$. The soft term, $V_{soft}$, is given above in Eq. (\ref{softterms}). The relevant contributions to $V_D$ are,
\begin{equation}
V_{D}= \frac{1}{8}(g^{2}+g^{\prime 2})(\tilde{\nu}\tilde{\nu}^* + H^0_d H^{0*}_d -
H^0_u H^{0*}_u)^2.
\label{Dterm}
\end{equation}
In what concerns the F-term, the relevant contributions are given by the following terms,
\begin{eqnarray}
V_{F} &=& \left|\frac{\partial \hat{W}}{\partial \hat{H^{0}_{u}}
}\right|^{2}_{H_{u}} + \left|\frac{\partial \hat{W}}{\partial
\hat{H^{0}_{d}} }\right|^{2}_{H_{d}} + \left|\frac{\partial \hat{W}}{\partial
\hat{\nu}} \right|^{2}_{\tilde{\nu}} + \left|\frac{\partial \hat{W}}{\partial
\hat{N^{C}} }\right|^{2}_{\tilde{N}} + \left|\frac{\partial
\hat{W}}{\partial \hat{S}_{L}}\right|^{2}_{\tilde{S}} \nonumber \\
&=& \mu^{2}\left|H^{0}_{u}\right|^{2} + \mu^{2}\left|H^{0}_{d}\right|^{2} +
Y^{2}_{\nu}\left|\tilde{N}^{C}\right|^{2}\left|\tilde{\nu}\right|^{2} + Y_{\nu}\mu
H^{0*}_{d}\tilde{N}^{C*}\tilde{\nu}
\nonumber \\
&+& Y^{2}_{\nu}\left|H^{0}_{u}\right|^{2}\left|\tilde{\nu}\right|^{2}
+ \frac{1}{4}\lambda^{2}_{s}\left|\tilde{S}\right|^{4} + 4
\lambda^{2}_{v}\left|\tilde{N}^{C}\right|^{4} +
\lambda^{2}_{s}\left|\tilde{N}^{C}\right|^{2}\left|\tilde{S}\right|^{2}\nonumber \\
&+&
\frac{Y_{\nu}\lambda_{s}H^{0}_{u}\tilde{\nu}\tilde{S}^{* 2}}{2} +
2Y_{\nu}\lambda_{v}\left|\tilde{N}^{C}\right|^{2}H^{0}_{u}\tilde{\nu} +
Y^{2}_{\nu}\left|\tilde{N}^{C}\right|^{2}\left|H^{0}_{u}\right|^{2} \nonumber \\
&+&
\lambda_{s}\lambda_{v}\left|\tilde{N}^{C}\right|^{2}\tilde{S}^{2} + h.c.
\label{Fterm}
\end{eqnarray}
With the potential of the model in hand, we are ready to obtain the set of constraint equations for the neutral scalars $H^0_u\,,\, H^0_d\,,\,\tilde \nu\,,\, \tilde S\,,\,\tilde N^C$,
\begin{eqnarray}
&& v_u\left( M^{2}_{h_u} + \mu^{2} +
\frac{1}{4}(g^{2}+g^{\prime 2})(v^{2}_{u}-v^{2}_{d}-v^2_\nu) +Y^2_\nu v^2_N +Y^2_\nu v^2_\nu \right) +\nonumber \\
&& -B\mu v_d+ \frac{1}{2}Y_{\nu}\lambda_{s}v_\nu v^{2}_{S}+Y_\nu A_y v_\nu v_N + 2Y_\nu\lambda_v v_\nu v^2_N =0,\nonumber
\\
&& v_d\left(M^{2}_{h_d} + \mu^{2} -
\frac{1}{4}(g^{2}+g^{\prime 2})(v^{2}_{u}-v^{2}_{d}-v^2_\nu) \right) -B\mu v_u+
Y_\nu \mu v_\nu v_N=0,\nonumber
\\
&& v_\nu \left(M^{2}_{\tilde \nu} +
\frac{1}{4}(g^{2}+g^{\prime 2})(v^2_\nu + v^{2}_{d}-v^{2}_{u})+Y^{2}_{\nu}v^{2}_{u} +
Y^{2}_{\nu}v^{2}_{N}\right) + \nonumber \\
&& + \frac{1}{2}\lambda_{s}Y_{\nu}v_{u}v^{2}_{S} + Y_\nu A_y v_u v_N + 2Y_\nu
\lambda_v v_u v^2_N + Y_\nu \mu v_d v_N =0, \nonumber \\
&& M^{2}_{\tilde S} + \lambda_{s}Y_{\nu}v_{u}v_\nu + \frac{1}{2}
\lambda^{2}_{s} v^{2}_{S} +\lambda_s A_s v_N + \lambda^2_s v^2_N +
2\lambda_s\lambda_v v^2_N=0,\nonumber
\\
&&v_{N}\left( M^{2}_{\tilde N} + Y^{2}_{\nu}v^{2}_{u} + \lambda^{2}_{s}v^{2}_{S} +
3\lambda_v A_v v_N + 8\lambda^2_v v^2_N + 4\lambda_{v}Y_{\nu} v_{u} v_\nu
+ 2\lambda_{v}\lambda_{s}
v^{2}_{S} + Y^{2}_{\nu}v^2_\nu \right) +\nonumber \\
&&+ Y_\nu v_\nu (A_{y}v_{u} + \mu v_{d})+\frac{1}{2} A_{s} \lambda_{s} v^{2}_{S}=0.
\label{constraints}
\end{eqnarray}
Let us first focus on the third relation in the equation above. Observe that the dominant term inside the parentheses is $M^{2}_{\tilde \nu}$. Outside the parentheses, assuming for the moment that $v_N < v_S$, the dominant term is $\frac{1}{2}\lambda_{s}Y_{\nu}v_{u}v^{2}_{S}$. In view of this, from the third relation above we have that,
\begin{equation}
v_\nu \approx -\frac{1}{2}
\frac{\lambda_{s}Y_{\nu}v_{u}v^{2}_{S}}{M^2_{\tilde \nu}}.
\label{vnu}
\end{equation}
For $M_{\tilde \nu} > v_S$, we have $v_\nu < v_{u , d, S} $, as expected.
Let us now focus on the fifth relation of Eq. (\ref{constraints}). The dominant term inside the parentheses is $M^{2}_{\tilde N}$, while outside the parentheses the dominant term is $\frac{1}{2} A_{s} \lambda_{s} v^{2}_{S}$. Thus the fifth relation provides,
\begin{equation}
v_N \approx -\frac{1}{2}\frac{ A_{s} \lambda_{s} v^{2}_{S}}{M^2_{\tilde N}} .
\label{smallseesaw}
\end{equation}
This expression for $v_N$ is similar to that for $v_\nu$ and suggests that $v_N$ is also small.
Let us now focus on the fourth relation. Taking $v_\nu\,,\,v_N \ll v_S$, we have that the dominant terms in that relation are,
\begin{equation}
M^{2}_{\tilde S} + \frac{1}{2}
\lambda^{2}_{s} v^{2}_{S} =0.
\label{vS}
\end{equation}
Note that $M_{\tilde S}$ dictates the value of $v_S$. As the neutral singlet scalar $\tilde S$ belongs to an extension of the MSSM, it is reasonable to expect that its soft mass term $M_{\tilde S}$ lies at the TeV scale. Consequently, $v_S$ must assume values around a TeV. As for the first and second relations, they control the standard VEVs $v_u$ and $v_d$.
Let us return to the expression for $v_N$ in Eq. (\ref{smallseesaw}). As the neutral singlet scalar $\tilde N$ also belongs to an extension of the MSSM, it is reasonable to expect that its soft mass term $M_{\tilde N}$ lies at the TeV scale, too. In this case the value of $v_N$ is dictated by the soft trilinear term $A_s$. Thus a small $v_N$ means a tiny $A_s$. As $A_s$ is a trilinear soft breaking term, it must be generated by some spontaneous SUSY breaking scheme. The problem is that we do not know how SUSY is spontaneously broken, so there is no way to infer exactly the value of $A_s$. Moreover, note that $A_s$ is a soft trilinear term involving only scalars that are neutral singlets under the MSSM gauge group, which makes its estimation even more complex. We argue here that it is somehow natural to expect such terms to be small.
For this we have to think in terms of spontaneous SUSY breaking schemes. For example, in the framework of gauge mediated supersymmetry breaking (GMSB), all soft trilinear terms are naturally suppressed, since they arise from loops. In our case the new singlets are sterile under the standard gauge group. The minimal scenario where such soft trilinear terms could arise would be one that involves the GMSB of the B-L gauge extension of the MSSM. To build such an extension and evaluate $A_s$ in such a scenario is beyond the scope of this paper. However, whatever the case, in the framework of the GMSB scheme $A_s$ must be naturally small and, consequently, $v_N$, too. At this point we call attention to the fact that the idea behind the ISS mechanism is that lepton number is explicitly violated at a low energy scale. This suggests that GMSB is the adequate spontaneous SUSY breaking scheme to be adopted in realizing the SUSYISS mechanism.
Let us discuss the case of gravity mediated supersymmetry breaking. As in the ISS mechanism lepton number is assumed to be explicitly violated at a low energy scale, it is expected that $v_N\,,\,v_S\,,\,v_\nu\,,\,A_s\,,\,A_v$ are all null at the GUT scale. Considering this, the authors of Ref. \cite{SUSYISSvalle} evaluated the running of soft trilinear terms involving scalar singlets from the GUT scale down, in a different realization of the SUSYISS model. As a result they obtained that these terms develop small values at the electroweak scale. Our case is somehow similar to that of Ref. \cite{SUSYISSvalle}, and it seems reasonable to expect that, in the general case of three generations, on performing such an evaluation of the running of the soft trilinear terms, our mechanism recovers the result of Ref. \cite{SUSYISSvalle}. As we are just presenting the idea by means of only one generation, such an evaluation of the running of $A_s$ is beyond the scope of this work.
Thus it seems reasonable to expect that, whatever the spontaneous SUSY breaking scheme adopted, the soft trilinear terms that violate lepton number explicitly and involve neutral singlet fields such as $\tilde S$ and $\tilde N$ have the tendency to develop small values. In what follows we assume that $A_s$ and $A_v$ lie at the keV scale.
There is still an issue to consider with respect to the scalar potential. As can be easily verified, the value of the potential at the origin of field space is zero. In order to guarantee that the electroweak symmetry will be broken, we need the potential in the minimum to be negative. Using the constraints in Eq. (\ref{constraints}) to eliminate the soft masses in the scalar potential, we have,
\begin{eqnarray}
\langle V \rangle_{mim} = &-&\frac{1}{8}\left(g^{2}+g'^{2}\right)\left(v^{2}_{\nu} + v^{2}_{d} -
v^{2}_{u}\right)^2 - Y^{2}_{\nu}\left( v^{2}_{\nu}v^{2}_{N} +
v^{2}_{\nu}v^{2}_{u} + v^{2}_{u}v^{2}_{N}\right) -
\lambda^{2}_{s}v^{2}_{S}v^{2}_{N} - \frac{1}{4}\lambda^{2}_{s}v^{4}_{S} - A_{y}Y_{\nu}v_{\nu}v_{N}v_{u} \nonumber
\\ &-&
\frac{1}{2}A_{s}\lambda_{s}v_{N}v^{2}_{S} - A_{v}\lambda_{v}v^{3}_{N}
-Y_{\nu}\lambda_{s}v_{\nu}v_{u}v^{2}_{S} -
4Y_{\nu}\lambda_{v}v_{\nu}v_{u}v^{2}_{N} -
2\lambda_{s}\lambda_{v}v^{2}_{N}v^{2}_{S} - 4\lambda^{2}_{v}v^{4}_{N} \nonumber
\\ &-& Y_{\nu}\mu v_{\nu}v_{N}v_{d}.
\end{eqnarray}
For the magnitudes of the VEVs discussed above, the dominant term is $-\frac{1}{4}\lambda^{2}_{s}v^{4}_{S}$, which is negative. For the case of one generation considered here, this is strong evidence of the stability of the potential.
After all these considerations, we are ready to go to the central part of this work, which is to develop the neutrino sector. For this we have, first, to obtain the mass matrix that involves the neutrinos. Due to the RPV, the gauginos and Higgsinos mix with the neutral fermions $\nu$, $N$ and $S$. Considering the basis $(\lambda_{0},\lambda_{3},\psi_{h^{0}_{u}}, \psi_{h^{0}_{d}},\nu,N^{c},S)$, we obtain the following mass matrix for these neutral fermions,
\begin{eqnarray}
\left(\begin{array}{ccccccc} M_{1} &
0 & \frac{g' v_{u}}{\sqrt{2}} & -\frac{g' v_{d}}{\sqrt{2}} & -\frac{g'
v_\nu}{\sqrt{2}} & 0 & 0 \\
0 & M_{2} & -\frac{g v_{u}}{\sqrt{2}} & \frac{g v_{d}}{\sqrt{2}} & \frac{g v_\nu}{\sqrt{2}} & 0 & 0\\
\frac{g' v_{u}}{\sqrt{2}} & -\frac{g v_{u}}{\sqrt{2}} & 0 & \mu & Y_{\nu} v_{N} & Y_{\nu} v_\nu & 0 \\
-\frac{g' v_{d}}{\sqrt{2}} & \frac{g v_{d}}{\sqrt{2}} & \mu & 0 & 0 & 0 & 0 \\
-\frac{g' v_\nu}{\sqrt{2}} & \frac{g v_\nu}{\sqrt{2}} & Y_\nu v_N & 0 & 0 & Y_{\nu} v_{u} & 0 \\
0 & 0 & Y_\nu v_\nu & 0 & Y_{\nu} v_{u} & 2 \lambda_{v} v_{N} & \lambda_{s} v_{S} \\
0 & 0 & 0 & 0 & 0 & \lambda_{s} v_{S} & \lambda_{s} v_{N}
\end{array}\right),
\label{generalneutrinomassmatrix}
\end{eqnarray}
where $M_{1}$ and $M_{2}$ are the standard soft breaking terms of the gauginos. We remark that, on considering the hierarchy $v_N< v_\nu < v_d< v_u<v_S$, the bottom right $3 \times 3$ block of this matrix, which involves only the neutrinos, decouples from the gaugino and Higgsino sector, leaving the neutrinos with the following mass matrix in the basis $(\nu,N^{c},S)$,
\begin{eqnarray}
M_\nu \approx \left(\begin{array}{ccc}
0 & Y_{\nu} v_{u} & 0 \\
Y_{\nu} v_{u} & 2 \lambda_{v} v_{N} & \lambda_{s} v_{S} \\
0 & \lambda_{s} v_{S} & \lambda_{s} v_{N}
\end{array}\right).
\label{neutrinomassmatrix}
\end{eqnarray}
For this decoupling to be effective we must have $v_\nu$ of order MeV or less. Diagonalization of this mass matrix implies that the lightest neutrino, which is predominantly the standard one, $\nu$, gets the following mass expression,
\begin{equation}
m_\nu \approx \frac{Y_{\nu}^2}{\lambda_s}\frac{ v_{u}^2}{ v_{S}^2} v_{N} .
\label{ISSmass}
\end{equation}
This is exactly the mass expression of the ISS mechanism. For $v_S$ around
TeV and $v_N$ around keV we obtain neutrinos at eV scale for $v_u$ at
electroweak scale. In the case of three generations the pattern of the neutrino
masses will be determined by $Y^{ij}_\nu$ .
To demonstrate the validity of these approximations we can compute the mass eigenvalues from the full matrix in (\ref{generalneutrinomassmatrix}). For typical values of the supersymmetric parameters and $v_S \sim 10$ TeV, $v_N \sim 10$ keV, $v_\nu \sim 1$ MeV and $Y_\nu \sim \lambda_s=0.21$, we have the following orders of magnitude for the mass eigenvalues: ($\sim$ TeV, $\sim$ TeV, $\sim 10^2$ GeV, $\sim 10^2$ GeV, $\sim 10^{-1}$ eV, $\sim$ TeV, $\sim$ TeV), where the lightest eigenstate is essentially the standard neutrino. This result is encouraging and indicates that RPV is an interesting framework to realize the SUSYISS mechanism.
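The decoupled $3\times 3$ block of Eq. (\ref{neutrinomassmatrix}) can also be diagonalized numerically; a minimal sketch follows, where $v_u \approx 174$ GeV and $\lambda_v \sim \lambda_s$ are our illustrative assumptions rather than fitted values.
\begin{verbatim}
import numpy as np
Y, lam_s, lam_v = 0.21, 0.21, 0.21           # lam_v ~ lam_s assumed
v_u, v_S, v_N = 174.0, 1.0e4, 1.0e-5         # GeV: v_S ~ 10 TeV, v_N ~ 10 keV
M = np.array([[0.0,    Y*v_u,        0.0      ],
              [Y*v_u,  2*lam_v*v_N,  lam_s*v_S],
              [0.0,    lam_s*v_S,    lam_s*v_N]])
print(np.sort(np.abs(np.linalg.eigvalsh(M))))  # one light state, two ~TeV states
print((Y**2/lam_s) * (v_u/v_S)**2 * v_N)       # Eq. (ISSmass) estimate, in GeV
\end{verbatim}
Both outputs agree at the level of a few times $10^{-10}$ GeV, i.e., a few times $10^{-1}$ eV, consistent with the order of magnitude quoted above.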
We end this section with a comparison of the SUSYISS mechanism developed here with the $\mu\nu$SSM of Ref. \cite{munucase}. That model resorts to R-parity violation to solve the $\mu$ problem. However, neutrino masses at the sub-eV scale require a considerable amount of fine tuning of the Yukawa couplings. We stress that, in spite of the fact that the SUSYISS model contains the particle content of the $\mu\nu$SSM, it unfortunately does not realize the $\mu\nu$SSM. This is so because, if we allowed a term like $\hat S \hat H_u \hat H_d$ in the superpotential in Eq. (\ref{superpotential}), the entries $\psi_{h^{0}_{d}} S$ and $\psi_{h^{0}_{u}} S$ in the mass matrix in Eq. (\ref{generalneutrinomassmatrix}) would develop robust values, which would jeopardize the realization of the ISS mechanism.
\section{The mass of the Higgs}
Now, let us focus on the scalar sector of the model. We restrict our interest to checking whether the model can accommodate a $125$ GeV Higgs mass without resorting to large loop contributions. For the case of one generation the model involves five neutral scalars, whose mass terms compose a $5\times 5$ mass matrix that we consider in the basis $(H_u\,,\,H_d\,, \tilde \nu \,,\, \tilde N \,,\, \tilde S)$. We do not show such a mass matrix here because of the complexity of its entries. Instead of dealing with a $5 \times 5$ mass matrix, which is very difficult to handle analytically, we make use of a result that says that an upper bound on the mass of the lightest scalar, which we identify with the Higgs, can be obtained by computing the eigenvalues of the $2 \times 2$ submatrix in the upper left corner of this $5 \times 5$ mass matrix \cite{upperbound}. This is a common procedure adopted in such cases, which gives us an idea of the potential of the model to generate the 125 GeV Higgs mass.
The dominant terms of this $2 \times 2$ submatrix are given by,
\begin{eqnarray}
M^{2}_{2\times 2}\approx \left(\begin{array}{cc} B\mu\cot(\beta)+
M^{2}_{Z}\sin^{2}(\beta)-\frac{Y_{\nu}\lambda_{s}v_\nu}{2v_u}
v^2_{S} & -B\mu-M^{2}_{Z}\sin(\beta)\cos(\beta)\\
-B\mu-M^{2}_{Z}\sin(\beta)\cos(\beta) & B\mu \tan(\beta) + M^{2}_{Z}\cos^{2}(\beta) \end{array}\right).
\label{2x2massmatrix}
\end{eqnarray}
We made use of the hierarchy among the VEVs, as discussed above, to obtain such a $2 \times 2$ submatrix. On diagonalizing this $2 \times 2$ submatrix we obtain the following upper bound on the mass of the Higgs,
\begin{equation}
m^{2}_{h}\leq
M^{2}_{Z}\cos^2(2\beta)-\frac{Y_{\nu}\lambda_{s}v_\nu}{2v_u}v^2_{S}.
\label{upperbound}
\end{equation}
Note also that Eq. (\ref{vS}) imposes that either $M^2_{\tilde S}$ or $v^2_{S}$ be negative; for a real VEV we take $M^2_{\tilde S}$ negative. In order for the second term in Eq. (\ref{upperbound}) to give a positive contribution to the Higgs mass, we take $Y_\nu$ and $\lambda_s$ with opposite signs.
What is remarkable about the mass expression in Eq. (\ref{upperbound}) is that the second term provides a robust correction to the Higgs mass while involving the very parameters that dictate the neutrino masses, namely the couplings $Y_\nu$ and $\lambda_s$ and the VEV $v_S$. This suggests an interesting connection between the neutrino and Higgs masses. For illustration, note that for $Y_\nu$ of the same order as $\lambda_s$, $v_\nu$ around a MeV, $v_u$ around $10^2$ GeV and $v_{S}$ of order tens of TeV, the second term provides a contribution of tens of GeV to the Higgs mass. This contribution is enough to alleviate the pressure on the stop masses and their mixing, keeping the principle of naturalness valid.
In order to check the range of values the stop mass and the $A_t$ term may take in this model, we add to $m^{2}_{h}$ given above the leading one-loop corrections coming from the MSSM stop sector \cite{loopcorrection},
\begin{equation}
\Delta m^{2}_{h}=
\frac{3m^{4}_{t}}{4\pi^{2}v^{2}}\left(\log\left(\frac{m^{2}_{s}}{m^{2}_{t}}\right)
+
\frac{X^{2}_{t}}{m^{2}_{s}}\left(1-\frac{X^{2}_{t}}{12m^{2}_{s}}\right)\right),
\label{Limh2}
\end{equation}
where $m_{t}=173.2$ GeV is the top mass, $v=\sqrt{v^2_u + v^2_d}=174$ GeV is the VEV
of the standard model, $X_{t}\equiv A_t-\mu\cot(\beta)$ is the stop mixing parameter, and $m_{s} \equiv
(m_{\tilde{t}_{1}}m_{\tilde{t}_{2}})^{1/2}$ is the SUSY scale (the scale of
superpartner masses), with $m_{\tilde{t}_{1,2}}$ the stop masses. In the analysis done below, we work with
degenerate stops and, in all plots, we take $v_\nu=1$ MeV and
$v_{S}=4 \times 10^4$ GeV.
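For concreteness, the following minimal Python sketch (our own illustration, not part of the formal analysis; all numerical inputs below are assumptions chosen only for orientation) evaluates the tree-level bound of Eq.~(\ref{upperbound}) together with the loop correction of Eq.~(\ref{Limh2}):
\begin{verbatim}
import math

MZ, MT, V = 91.1876, 173.2, 174.0  # GeV; the Z mass value is an assumption

def mh2_tree(tan_beta, Ynu, lam_s, v_nu, v_u, v_S):
    # Tree-level bound: MZ^2 cos^2(2b) - (Ynu*lam_s*v_nu/(2 v_u)) * v_S^2
    c2b = math.cos(2.0 * math.atan(tan_beta))
    return (MZ * c2b) ** 2 - (Ynu * lam_s * v_nu / (2.0 * v_u)) * v_S ** 2

def delta_mh2(m_s, X_t):
    # Leading one-loop stop correction of Eq. (Limh2)
    pref = 3.0 * MT ** 4 / (4.0 * math.pi ** 2 * V ** 2)
    return pref * (math.log(m_s ** 2 / MT ** 2)
                   + (X_t ** 2 / m_s ** 2) * (1.0 - X_t ** 2 / (12.0 * m_s ** 2)))

# Illustrative inputs only: tan(beta)=10, Y_nu=0.1, lambda_s=-0.1 (opposite
# signs), v_nu = 1 MeV, v_u ~ 173 GeV, v_S = 4e4 GeV, m_s = X_t = 1 TeV.
mh = math.sqrt(mh2_tree(10.0, 0.1, -0.1, 1.0e-3, 173.0, 4.0e4)
               + delta_mh2(1000.0, 1000.0))
print("m_h ~ %.1f GeV" % mh)
\end{verbatim}
Varying $m_s$ and $X_t$ in this sketch gives a quick way to explore the parameter ranges discussed below.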
Figure 1 shows possible values for the magnitudes of $Y_\nu$ and $\lambda_s$
that provide a Higgs with a mass of $125$ GeV. Note that the plot tells us that
such a mass requires $Y_\nu$ and $\lambda_s$ around
$10^{-1}$. This range of values for $Y_\nu$ and $\lambda_s$ provides, through
Eq.~(\ref{ISSmass}), $m_\nu \approx 0.1$ eV for $v_S=10$ TeV and $v_N=10$ keV.
Thus neutrino masses at the sub-eV scale are compatible with $m_h=125$ GeV effortlessly.
Figure 2 tells us that the model yields the desired Higgs mass for stop mass
and mixing parameters below the TeV scale. Finally, Figure 3 shows that a
Higgs mass of $125$ GeV is obtained for a broad range of values of $\tan(\beta)$.
Let us now briefly discuss some phenomenological aspects of the SUSYISS mechanism
developed here. First of all, observe that the aspects of RPV concerning the
mixing among neutralinos and neutrinos, as well as among charginos and charged
leptons, are dictated by the VEVs $v_\nu$ and $v_{N}$ and the couplings $Y_\nu$
and $\lambda_s$, which are all small. The squark sector is practically
unaffected. Thus, with respect to these sectors, the phenomenology of the
SUSYISS mechanism is practically the same as that of the supersymmetric
version of the ISS mechanism\cite{ISSSUSY1, ISSpheno}. The signature of the
SUSYISS mechanism developed here should manifest itself mainly in the scalar sector of
the model, due to the mixing of the neutral scalars with the sneutrinos, which
will generate Higgs decay channels with lepton flavor violation, $h \rightarrow
l_i l_j$.
As far as we know, this is the first time the ISS mechanism has been
developed in the framework of RPV. Thus many theoretical as well as
phenomenological aspects of the model proposed here must be addressed, such as
experimental constraints from RPV, accelerator physics, analysis of the
renormalization group equations, spontaneous SUSY breaking schemes, etc.,
which we postpone to a future paper\cite{future}. Moreover, needless to say, in
SUSY models with RPV the lightest supersymmetric particle is not stable, which
means that neither the neutralino nor the sneutrino is a candidate for dark
matter\cite{DMcandidate} any longer. We would also like to remark that,
because of the $Z_3$ symmetry used in the superpotential above, cosmological
domain wall problems are expected\cite{DWproblem}. However, the solution of this
problem in the NMSSM, as well as in the $\mu\nu$SSM\cite{munucase}, may be
applied to our case, too\cite{DWsolution}.
Finally, concerning the stability of the vacuum, we have to impose that the potential be bounded from below when the scalar fields become large in any direction of field space, and that the potential not present charge- and color-breaking minima. Concerning the latter condition, we do not have to worry about it here because the new scalar fields associated with the singlet superfields $\hat S$ and $\hat N^C$ are neutral under electric and color charges. Concerning the former issue, the worry arises because at large field values the quartic terms dominate the potential. Thus we have to guarantee that the potential is positive at large field values, which means we only have to worry about the quartic couplings. A negative value of $\lambda_s$ leads to two negative quartic terms. Considering this, on analyzing the potential above we did not find any direction in field space in which a negative $\lambda_s$ leads to a negative potential; every direction we found involves a set of conditions under which it is always possible to guarantee that the potential is positive at large field values\cite{casas}. Moreover, we took $\lambda_s$ negative for convenience. We may arrange things such that all couplings are positive: for example, on taking $\lambda_s$ positive, $v_\nu$ in Eq. (\ref{vnu}) becomes negative, which guarantees a positive contribution to the Higgs mass and that all quartic couplings are positive. However, a complete analysis of the stability of the potential is still necessary; this will be done in \cite{future}.
\section{ Conclusions}
In this work we proposed a realization of the SUSYISS model in the framework
of RPV. The main advantage of this framework is that it allows the realization
of the SUSYISS model with a minimal superfield content, where the
superfields $\hat S$ and $\hat N^C$ of the minimal implementation are sufficient
to realize the model. To grasp the important features of the SUSYISS mechanism, we
restricted our work to the case of one generation of superfields. As a nice
result, the canonical mass parameters $M_N$ and $\mu_N$ of the SUSYISS mechanism
are recognized as the VEVs of the scalars $\tilde S$ and $\tilde N$ that compose
the superfields $\hat S$ and $\hat N^C$. There is no way to fix the values of
the VEVs $v_S$ and $v_N$. However, it seems plausible that $v_S$ and $v_N$
develop values around the TeV and keV scales, respectively. Thus, we conclude that
RPV seems to be an interesting framework for the realization of the SUSYISS
mechanism. We recognize that in order to establish the model a lot of work still has
to be done. For example, we have to find the spontaneous SUSY breaking
scheme that best accommodates the mechanism, develop the phenomenology of
the model, and study its embedding in GUT schemes. We end by saying that the main
results of this work are that the model proposed here realizes the
SUSYISS mechanism minimally and provides a 125 GeV Higgs mass while respecting the naturalness principle.
\acknowledgments
This work was supported by Conselho Nacional de Pesquisa e
Desenvolvimento Cient\'{i}fico- CNPq (C.A.S.P and P.S.R.S ) and Coordena\c c\~ao de Aperfei\c coamento de Pessoal de N\'{i}vel Superior - CAPES (J.G.R).
\subsubsection{Auxiliary Multicut}
Given a graph $G$ and a function $f : V(G) \to [p]$, the \emph{cost} of $f$ in $G$, denoted $\mathfrak{c}_G(f)$ is defined as the number of edges $e = uv \in E(G)$ with $f(u) \neq f(v)$.
We use the following intermediate problem, which we call \textsc{Auxiliary Multicut}.
The input consists of a graph $G$, integers $k,p$, terminal sets $(T_i)_{i=1}^\tau$ (not necessarily disjoint), and a set $I \subseteq [\tau] \times [p]$.
The goal is to find a function $f \colon V(G) \to [p]$ of cost at most $k$ such that for every $(i,j) \in I$, there exists $v \in T_i$ with $f(v) = j$.
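To make the problem statement concrete, the following brute-force Python sketch (our own illustration; it is exponential in $|V(G)|$ and serves only as a reference specification, not as the algorithm of Theorem~\ref{thm:auxcut} below) checks all labelings:
\begin{verbatim}
from itertools import product

def cost(edges, f):
    # number of edges whose endpoints receive different labels
    return sum(1 for u, v in edges if f[u] != f[v])

def aux_multicut_brute(vertices, edges, k, p, T, I):
    # T: dict mapping i to the terminal set T_i; I: set of pairs (i, j)
    for labels in product(range(1, p + 1), repeat=len(vertices)):
        f = dict(zip(vertices, labels))
        if cost(edges, f) <= k and \
           all(any(f[v] == j for v in T[i]) for (i, j) in I):
            return f            # witness for a yes-instance
    return None                 # no-instance
\end{verbatim}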
\begin{theorem}\label{thm:auxcut}
\textsc{Auxiliary Multicut} on connected graphs $G$ can be solved in time $2^{\mathcal{O}((k+|I|) \log (k+|I|))} n^{\mathcal{O}(1)}$.
\end{theorem}
\begin{proof}
Let $(G,k,p,(T_i)_{i=1}^\tau,I^\circ)$ be an input to \textsc{Auxiliary Multicut}. Assume $G$ is connected and let $n = |V(G)|$.
Without loss of generality, we can assume that every set $T_i$ is nonempty and for every $j \in [p]$ there exists some $i \in [\tau]$ with $(i,j) \in I^\circ$.
Thus, the image of the sought function $f$ (henceforth called a \emph{solution}) needs to be equal to the whole of $[p]$.
Consequently, the connectivity of $G$ allows us to assume $p \leq k+1$, as otherwise the input instance is a no-instance: any such function $f$ would have cost larger than $k$.
We apply the algorithm of Theorem~\ref{thm:decomp} to $G$ and $k$,
obtaining a rooted compact
tree decomposition $(T,\beta)$ of $G$ whose every bag is $(k,k)$-edge-unbreakable
and every adhesion is of size at most $k$. The running time of this step
is $2^{\mathcal{O}(k \log k)} n^{\mathcal{O}(1)}$.
We perform a bottom-up dynamic programming algorithm on $(T,\beta)$.
For every node $t \in V(T)$, every subset $I \subseteq I^\circ$, and every
function $f^\sigma\colon \sigma(t) \to [p]$ we compute a value $M[t, I, f^\sigma] \in \{0, 1, 2, \ldots, k, +\infty\}$
with the following properties.
\begin{enumerate}[label={(\alph*)}]
\item If $M[t,I, f^\sigma] \neq +\infty$, then there exists a function $f : V(G_t) \to [p]$ such that:\label{i:ac:cmpl}
\begin{itemize}
\item $f|_{\sigma(t)} = f^\sigma$,
\item for every $(i,j) \in I$ there exists $w\in V(G_t) \cap T_i$ with $f(w) = j$,
\item the cost of $f$ in $G_t$ is at most $M[t,I,f^\sigma]$.
\end{itemize}
\item For every function $f \colon V(G) \to [p]$ that satisfies:\label{i:ac:sound}
\begin{itemize}
\item $f|_{\sigma(t)} = f^\sigma$,
\item for every $(i,j) \in I^\circ$ there exists $w \in T_i$ with $f(w) = j$, and,
furthermore, if $i \in I$, then there exists such $w$ in $T_i \cap V(G_t)$,
\item the cost of $f$ in $G$ is at most $k$,
\end{itemize}
the cost of $f|_{V(G_t)}$ in $G_t$ is at least $M[t,I,f^\sigma]$.
\end{enumerate}
We first observe that the table $M[\cdot]$ is sufficient for our purposes.
\begin{claim}\label{cl:ac:M}
There exists a solution if and only if
$M[r,I^\circ,\emptyset] \neq +\infty$ for the root $r$ of $T$.
\end{claim}
\begin{proof}
In one direction, if $f$ is a solution, then note that
$f$ satisfies all conditions of Point~\ref{i:ac:sound} for the cell
$M[r,I^\circ,\emptyset]$. Hence, by Point~\ref{i:ac:sound}, $M[r,I^\circ,\emptyset] < +\infty$.
In the other direction, let $M[r,I^\circ,\emptyset] < +\infty$ and
let $f$ be the function whose existence is asserted by Point~\ref{i:ac:cmpl}.
Then for every $(i,j) \in I^\circ$ there exists $w \in T_i$ with $f(w) = j$ and its cost in $G_r = G$ is at most $k$.
Consequently, $f$ is a solution.
\renewcommand{\qed}{\cqedsymbol}\end{proof}
Thus, to prove Theorem~\ref{thm:auxcut}, it suffices to show how to compute in time
$2^{\mathcal{O}((|I^\circ| + k) \log (k + |I^\circ|))} n^{\mathcal{O}(1)}$ entries $M[t,\cdot,\cdot]$ for a fixed node $t \in V(T)$, given
entries $M[s,\cdot,\cdot]$ for all children $s$ of $t$ in $T$. Since $|\sigma(t)| \leq k$ and $p \leq k+1$,
it suffices to focus on a computation of a single cell $M[t,I,f^\sigma]$.
Let $Z$ be the set of children of $t$ in~$T$.
In a single step of a dynamic programming algorithm, we would like to focus on finding
a function $f^\beta : \beta(t) \to [p]$ that extends $f^\sigma$
and read the best way to extend $f^\beta$ to subgraphs $G[\alpha(s)]$
for $s \in Z$ from the tables $M[s,\cdot,\cdot]$.
To this end, every adhesion $\sigma(s)$ for $s \in Z$ serves as a ``black-box'' that, given
$f_s^\sigma : \sigma(s) \to [p]$ and a request $I_s \subseteq I$, returns the minimum possible cost in $G_s$
of an extension $f_s$ of $f_s^\sigma$ that satisfies the request $I_s$ (i.e., for every $(i,j) \in I_s$ there exists $w \in T_i \cap V(G_s)$ with $f_s(w) = j$).
Within the same framework, one can think of edges $e \in E(G_t[\beta(t)])$ as ``mini-black-boxes''
that force us to pay $1$ if $f^\beta$ assigns different values to the endpoints of $e$
and vertices $w \in T_i$ as ``mini-black-boxes'' that allow us to ``score'' a pair $(i,j) \in I$
if we assign $f^\beta(w) = j$.
This motivates the following definition of a family $\mathcal{C}$ of \emph{constraints};
every child $s \in Z$, every edge $e \in E(G_t[\beta(t)])$, and every pair $((i,j),w)$ for $(i,j) \in I$, $w \in T_i \cap \beta(t)$ gives rise to a single constraint.
A constraint $\Gamma \in \mathcal{C}$ consists of:
\begin{itemize}
\item a set $X_\Gamma \subseteq \beta(t)$ of size at most $k$;
\item a function $M_\Gamma: 2^{I} \times [p]^{X_\Gamma} \to \{0,1,2,\ldots,k,+\infty\}$.
\end{itemize}
For a child $s \in Z$, we define a \emph{child constraint} $\Gamma(s)$ as:
\begin{itemize}
\item $X_{\Gamma(s)} = \sigma(s)$,
\item $M_{\Gamma(s)}(I_s,f^\sigma_s) = M[s,I_s,f^\sigma_s]$ for every $I_s \subseteq I$ and $f^\sigma_s : \sigma(s) \to [p]$.
\end{itemize}
For an edge $uv = e \in E(G_t[\beta(t)])$, we define an \emph{edge constraint} $\Gamma(e)$ as:
\begin{itemize}
\item $X_{\Gamma(e)} = e$,
\item $M_{\Gamma(e)}(\emptyset, f^\sigma_e) = 0$
if $f^\sigma_e(u) = f^\sigma_e(v)$,
$M_{\Gamma(e)}(\emptyset, f^\sigma_e) = 1$
if $f^\sigma_e(u) \neq f^\sigma_e(v)$,
and $M_{\Gamma(e)}(I_e, f^\sigma_e) = +\infty$ for every $I_e \neq \emptyset$ and $f^\sigma_e : e \to [p]$.
\end{itemize}
For a pair $((i,j),w)$ with $(i,j) \in I$ and $w \in \beta(t) \cap T_i$ we define a \emph{terminal constraint} $\Gamma(i,j,w)$ as:
\begin{itemize}
\item $X_{\Gamma(i,j,w)} = \{w\}$,
\item $M_{\Gamma(i,j,w)}(\emptyset, \cdot) = 0$, $M_{\Gamma(i,j,w)}(\{(i,j)\},f^\sigma_{i,j,w}) = 0$ if
$f^\sigma_{i,j,w}(w) = j$, and $M_{\Gamma(i,j,w)}(I_{i,j,w}, f^\sigma_{i,j,w}) = +\infty$ for every other $I_{i,j,w} \subseteq I$ and $f^\sigma_{i,j,w} : \{w\} \to [p]$.
\end{itemize}
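To illustrate these definitions, the following Python sketch (ours; dictionary keys that are absent stand for the value $+\infty$) builds the tables of an edge constraint and a terminal constraint:
\begin{verbatim}
def edge_constraint(p):
    # M_{Gamma(e)}: the request set must be empty; pay 1 iff the two
    # endpoint labels differ
    M = {}
    for fu in range(1, p + 1):
        for fv in range(1, p + 1):
            M[(frozenset(), (fu, fv))] = 0 if fu == fv else 1
    return M

def terminal_constraint(i, j, p):
    # M_{Gamma(i,j,w)}: the empty request is always free; the request
    # {(i,j)} is free exactly when w is labeled j; all else is +infinity
    M = {(frozenset(), fw): 0 for fw in range(1, p + 1)}
    M[(frozenset([(i, j)]), j)] = 0
    return M
\end{verbatim}
Child constraints are obtained analogously by copying the already computed table $M[s,\cdot,\cdot]$.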
A \emph{responsibility assignment} is a function $\rho$ that assigns to every constraint $\Gamma \in \mathcal{C}$ a subset $\rho(\Gamma) \subseteq I$ such that
the values $\rho(\Gamma)$ are pairwise disjoint.
For a function $f^\beta : \beta(t) \to [p]$ we define the \emph{majority value} $\mathfrak{j}(f^\beta)$ as the minimum $j \in [p]$ among values $j$
maximizing $|(f^\beta)^{-1}(j)|$ (that is, we take the smallest among the most common values of $f^\beta$).
A function $f^\beta : \beta(t) \to [p]$ is \emph{unbreakable-consistent} if either $|\beta(t)| \leq 3k$ or at most $k$ vertices of $\beta(t)$ are assigned values different
than $\mathfrak{j}(f^\beta)$.
A \emph{feasible function} is an unbreakable-consistent function $f^\beta : \beta(t) \to [p]$ that extends $f^\sigma$ on $\sigma(t)$.
Given a responsibility assignment $\rho$ and a feasible function $f^\beta$, their \emph{cost} is defined as
\begin{equation}\label{eq:ac:cost}
\mathfrak{c}(\rho,f^\beta) = \sum_{\Gamma \in \mathcal{C}} M_\Gamma(\rho(\Gamma), f^\beta|_{X_\Gamma}).
\end{equation}
We claim the following.
\begin{claim}\label{cl:ac:feas}
Assume that the values $M[s,\cdot,\cdot]$ satisfy Points~\ref{i:ac:cmpl} and~\ref{i:ac:sound} for every $s \in Z$.
Then the following assignment satisfies Points~\ref{i:ac:cmpl} and~\ref{i:ac:sound} for $t$:
\begin{equation}\label{cl:ac:Mdef}
M[t,I,f^\sigma] = \min \{ \mathfrak{c}(\rho, f^\beta)~|~\rho\mathrm{\ is\ a\ responsibility\ assignment,\ }f^\beta\mathrm{\ is\ a\ feasible\ function,\ }\bigcup_{\Gamma \in \mathcal{C}} \rho(\Gamma) = I \}.
\end{equation}
In the above, $M[t, I, f^\sigma] := +\infty$ if the right hand side exceeds $k$.
\end{claim}
\begin{proof}
Let $\rho^\ast$ and $f^{\beta,\ast}$ be values for which the minimum of the right hand side of~\eqref{cl:ac:Mdef} is attained.
Consider first Point~\ref{i:ac:cmpl} and assume $\rho^\ast$ and $f^{\beta,\ast}$ exist and $\mathfrak{c}(\rho^\ast, f^{\beta,\ast}) \leq k$.
In particular, from~\eqref{eq:ac:cost} we infer that for every $\Gamma \in \mathcal{C}$ we have $M_\Gamma(\rho^\ast(\Gamma), f^{\beta,\ast}|_{X_\Gamma}) \leq k$.
For every $s \in Z$ let $f_s$ be the function extending $f^{\beta,\ast}|_{\sigma(s)}$ whose existence is promised by Point~\ref{i:ac:cmpl} for the cell
$M[s, \rho^\ast(\Gamma(s)), f^{\beta,\ast}|_{\sigma(s)}] \leq k$.
We claim that
$$f := f^{\beta,\ast} \cup \bigcup_{s \in Z} f_s$$
satisfies the requirements for Point~\ref{i:ac:cmpl} for the cell $M[t, I, f^\sigma]$.
First, note that $f$ is well-defined as every $f_s$ agrees with $f^{\beta,\ast}$ on $\sigma(s)$.
Clearly, $f$ extends $f^\sigma$ as $f^{\beta,\ast}$ extends $f^\sigma$.
Second, fix $(i,j) \in I$; our goal is to show a vertex $w \in V(G_t) \cap T_i$ for which $f(w) = j$.
By~\eqref{cl:ac:Mdef}, we have $\bigcup_{\Gamma \in \mathcal{C}} \rho^\ast(\Gamma) = I$.
Hence, there exists $\Gamma \in \mathcal{C}$ with $(i,j) \in \rho^\ast(\Gamma)$.
Since $M_{\Gamma}(\rho^\ast(\Gamma), f^{\beta,\ast}|_{X_{\Gamma}}) \leq k$, $\Gamma$ is not an edge constraint $\Gamma(e)$.
If $\Gamma$ is a terminal constraint, $\Gamma = \Gamma(i,j,w)$, then from the definition of a terminal constraint we obtain that $f(w) = j$ and $w \in T_i$.
Finally, if $\Gamma$ is a child constraint, $\Gamma = \Gamma(s)$ for some $s \in Z$, then since $f_s$ is a function promised by Point~\ref{i:ac:cmpl} for the cell
$M[s, \rho^\ast(\Gamma(s)), f^{\beta,\ast}|_{\sigma(s)}]$ there exists $w \in V(G_s) \cap T_i$ with $f_s(w) = j$.
This concludes the proof that for every $(i,j) \in I$ there exists $w \in V(G_t) \cap T_i$ with $f(w) = j$.
Finally, let us compute the cost of $f$. Let $e = uv \in E(G_t)$ with $f(u) \neq f(v)$.
By the definition of the graphs $G_t$ and $G_s$, $s \in Z$, $e$ either belongs to $E(G_t[\beta(t)])$ or to exactly one of the subgraphs $G_s$, $s \in Z$.
In the first case, the constraint $\Gamma(e)$ contributes $1$ to $\mathfrak{c}(\rho^\ast, f^{\beta,\ast})$.
In the second case, for every $s \in Z$, the number of edges $e = uv \in E(G_s)$ with $f(u) \neq f(v)$ equals exactly the cost of $f_s$, which
is not larger than $M_{\Gamma(s)}(\rho^\ast(\Gamma(s)), f^{\beta,\ast}|_{\sigma(s)})$.
Point~\ref{i:ac:cmpl} for the cell $M[t, I, f^\sigma]$ follows.
Consider now Point~\ref{i:ac:sound}, and let $f$ be a function as in that point for the cell $M[t, I, f^\sigma]$.
Define responsibility assignment $\rho$ as follows. Start with $\rho(\Gamma) = \emptyset$ for every $\Gamma \in \mathcal{C}$.
For every $(i,j) \in I$, proceed as follows. By the properties of Point~\ref{i:ac:sound}, there exists $w \in V(G_t) \cap T_i$ with $f(w) = j$.
If $w \in \beta(t)$, then we insert $(i,j)$ into $\rho(\Gamma(i, j, w))$. Otherwise, $w \in V(G_s) \setminus \sigma(s)$ for some $s \in Z$; we insert then
$(i,j)$ into $\rho(\Gamma(s))$. Clearly, $\rho$ is a responsibility assignment and $\bigcup_{\Gamma \in \mathcal{C}} \rho(\Gamma) = I$.
We define $f^\beta = f|_{\beta(t)}$ and we claim that $f^\beta$ is a feasible function. Clearly, $f^\beta$ extends $f^\sigma$.
To show that $f^\beta$ is unbreakable-consistent, observe that the fact that $\beta(t)$ is $(k,k)$-edge-unbreakable in $G$, in conjunction with the assumption
that the cost of $f$ is at most $k$, implies that for every partition $[p] = J_1 \uplus J_2$ either $(f^\beta)^{-1}(J_1)$ or $(f^\beta)^{-1}(J_2)$ is of size at most $k$.
If $|(f^\beta)^{-1}(\mathfrak{j}(f^\beta))| > k$, then this implies that $|(f^\beta)^{-1}([p] \setminus \{\mathfrak{j}(f^\beta)\})| \leq k$, as desired.
Otherwise, we have $|(f^\beta)^{-1}(i)| \leq k$ for every $i \in [p]$, and unless $|\beta(t)| \leq 3k$ there exists a partition $[p] = J_1 \uplus J_2$ such that
$|(f^\beta)^{-1}(J_j)| > k$ for $j=1,2$, a contradiction. This proves that $f^\beta$ is unbreakable-consistent, and thus a feasible function.
To finish the proof of the claim, it suffices to show that for the above defined $\rho$ and $f^\beta$ the cost $\mathfrak{c}(\rho, f^\beta)$ is at most the cost of $f|_{V(G_t)}$ in $G_t$.
Note that the cost of $f|_{V(G_t)}$ in $G_t$ equals the cost of $f|_{\beta(t)}$ in $G_t[\beta(t)]$ plus the sum over all $s \in Z$ of the cost of $f|_{V(G_s)}$ in $G_s$.
Consider the types of constraints one by one.
First, consider a child constraint $\Gamma(s)$ for some $s \in Z$. By the definition of $\rho(\Gamma(s))$, $f$ satisfies the requirements for Point~\ref{i:ac:sound}
for the cell $M[s, \rho(\Gamma(s)), f|_{\sigma(s)}]$. Consequently, the cost of $f|_{V(G_s)}$ in $G_s$ is not smaller than
$M[s, \rho(\Gamma(s)), f|_{\sigma(s)}] = M_{\Gamma(s)}(\rho(\Gamma(s)), f|_{\sigma(s)})$.
The latter term is exactly the contribution of the constraint $\Gamma(s)$ to the cost $\mathfrak{c}(\rho,f^\beta)$.
Second, consider an edge $e = uv \in E(G_t[\beta(t)])$. Note that $\rho(\Gamma(e)) = \emptyset$ by definition while $M_{\Gamma(e)}[ \emptyset, f|_e]$ equals $1$
if $f(u) \neq f(v)$ and $0$ otherwise. Thus, the contribution of the edge $e$ towards the cost of $f|_{\beta(t)}$ in $G_t[\beta(t)]$ is the same as the contribution
of $\Gamma(e)$ towards $\mathfrak{c}(\rho,f^\beta)$.
Third, consider a terminal constraint $\Gamma(i,j, w)$. If $\rho(\Gamma(i,j, w)) = \emptyset$, then the contribution of this constraint to $\mathfrak{c}(\rho,f^\beta)$ is $0$.
Otherwise, we have $\rho(\Gamma(i,j, w)) = \{(i,j)\}$ and this can only happen if $f(w) = j$ and $w \in T_i$.
By definition, this implies $M_{\Gamma(i,j,w)}(\{(i,j)\}, f|_{\{w\}}) = 0$, and again the contribution of this constraint to $\mathfrak{c}(\rho,f^\beta)$ is $0$.
This concludes the proof that the cost of $f|_{V(G_t)}$ in $G_t$ is not smaller than $\mathfrak{c}(\rho, f^\beta)$ and concludes the proof of the claim.
\renewcommand{\qed}{\cqedsymbol}\end{proof}
By Claim~\ref{cl:ac:feas}, it suffices to minimize $\mathfrak{c}(\rho, f^\beta)$ over responsibility assignments $\rho$ and feasible functions $f^\beta$ such that $\bigcup_{\Gamma \in \mathcal{C}} \rho(\Gamma) = I$.
Assume that this minimum is at most $k$ and fix some minimizing arguments $(\rho^\ast, f^{\beta,\ast})$.
We say that a constraint $\Gamma$ is \emph{touched} if either $\rho^\ast(\Gamma) \neq \emptyset$ or $f^{\beta,\ast}|_{X_\Gamma}$ is not a constant function. We claim the following.
\begin{claim}\label{cl:ac:touched}
There are at most $|I|+k$ touched constraints.
\end{claim}
\begin{proof}
By the assumption that the values $\rho^\ast(\Gamma)$ are pairwise disjoint, there are at most $|I|$ constraints
with $\rho^\ast(\Gamma) \neq \emptyset$.
Fix a constraint $\Gamma$ such that $f^{\beta,\ast}|_{X_\Gamma}$ is not a constant function. To finish the proof of the claim
it suffices to show that $\Gamma$ contributes at least $1$ to the sum in~\eqref{eq:ac:cost}.
Clearly, $\Gamma$ is not a terminal constraint. If $\Gamma$ is an edge constraint, $\Gamma = \Gamma(e)$
for some $e = uv \in E(G_t[\beta(t)])$, then as $X_{\Gamma(e)} = e$ we have $f^{\beta,\ast}(u) \neq f^{\beta,\ast}(v)$.
Hence, $\Gamma$ contributes $1$ to the sum in~\eqref{eq:ac:cost}.
Finally, if $\Gamma = \Gamma(s)$ for some $s \in Z$, then $\Gamma$ contributes $M[s, \rho^\ast(\Gamma(s)), f^{\beta,\ast}|_{\sigma(s)}]$
to the sum in~\eqref{eq:ac:cost}. By Point~\ref{i:ac:cmpl}, there exists an extension $f_s$ of $f^{\beta,\ast}|_{\sigma(s)}$ to $V(G_s)$ of cost at most $M[s,\rho^\ast(\Gamma(s)),f^{\beta,\ast}|_{\sigma(s)}]$. By compactness and the fact that $f^{\beta,\ast}|_{\sigma(s)}$ is not a constant function, any such extension has a positive cost in $G_s$. This finishes the proof of the claim.
\renewcommand{\qed}{\cqedsymbol}\end{proof}
Let $A^\ast = (f^{\beta,\ast})^{-1}([p] \setminus \{\mathfrak{j}(f^{\beta,\ast})\})$. Since $f^{\beta,\ast}$ is unbreakable-consistent, we have $|A^\ast| \leq 3k$.
Let $B^\ast$ be the set of all vertices $v \in \beta(t) \setminus A^\ast$ for which there exists a touched constraint
$\Gamma \in \mathcal{C}$ with $v \in X_\Gamma$.
By Claim~\ref{cl:ac:touched}, we have that $|B^\ast| \leq k(k+|I|)$.
Our application of Lemma~\ref{lem:random2} is encapsulated in the following claim.
\begin{claim}\label{cl:ac:random}
In time $2^{\mathcal{O}((k+|I|) \log (k+|I|))} n^{\mathcal{O}(1)}$ one can generate a family $\mathcal{F}$
of pairs $(j,g)$ where $j \in [p]$ and $g \colon \beta(t) \to [p]$.
The family is of size $2^{\mathcal{O}((k+|I|) \log (k+|I|))} n$ and
there exists $(j,g) \in \mathcal{F}$ such that $j = \mathfrak{j}(f^{\beta,\ast})$ and $g$ agrees with $f^{\beta,\ast}$ on
$A^\ast \cup B^\ast$. Furthermore, for every element $(j,g) \in \mathcal{F}$, $g$ extends $f^\sigma$.
\end{claim}
\begin{proof}
First, we iterate over all choices of nonnegative integers $j \in [p]$ and $a_1,a_2, \ldots, a_{p}$
such that $\sum_{i=1}^{p} a_i \leq 3k+k(k+|I|)$ and $\sum_{i \in [p] \setminus \{j\}} a_i \leq 3k$.
Clearly, there are $2^{\mathcal{O}(k)}(|I|+k)$ options as $p \leq k+1$.
For a fixed choice of $j$ and $(a_i)_{i=1}^{p}$ we invoke Lemma~\ref{lem:random2}
for $U = \beta(t) \setminus \sigma(t)$ and integers $r = p$, $(a_i)_{i=1}^{p}$, obtaining a family
$\mathcal{F}'$. For every $g' \in \mathcal{F}'$, we insert $(j, g' \cup f^\sigma)$
into $\mathcal{F}$.
The bound on the size of the output family $\mathcal{F}$ follows from bound of Lemma~\ref{lem:random2}
and the inequality $\log^k n \leq 2^{\mathcal{O}(k \log k)} n$.
Finally, note that the promised pair $(j,g)$ will be generated for the choice of
$j = \mathfrak{j}(f^{\beta,\ast})$ and $a_i = |(f^{\beta,\ast})^{-1}(i) \cap (A^\ast \cup B^\ast)\setminus \sigma(t)|$ for every $i \in [p]$.
\renewcommand{\qed}{\cqedsymbol}\end{proof}
Invoke the algorithm of Claim~\ref{cl:ac:random}, obtaining a family $\mathcal{F}$. We say
that $(j,g) \in \mathcal{F}$ is \emph{lucky} if $f^{\beta,\ast}$ exists, $j = \mathfrak{j}(f^{\beta,\ast})$, and $f^{\beta,\ast}$
agrees with $g$ on $A^\ast \cup B^\ast$.
Consider now an auxiliary graph $H$ with $V(H) = \beta(t)$ and $uv \in E(H)$ if and only if
$u \neq v$ and there exists a constraint $\Gamma \in \mathcal{C}$ with $u,v\in X_\Gamma$.
By the definition of constraints $\Gamma(e)$, we have that $G_t[\beta(t)]$ is a subgraph of
$H$, but in $H$ we also turn all adhesions $\sigma(s)$ for $s \in Z$ into cliques.
For a pair $(j,g) \in \mathcal{F}$, we define $\Sigma(j,g) := g^{-1}([p] \setminus \{j\})$.
Observe the following.
\begin{claim}\label{cl:ac:stains}
For a lucky pair $(j,g)$,
every connected component of $H[\Sigma(j,g)]$ is either completely contained
in $(f^{\beta,\ast})^{-1}(j)$ or completely contained in $A^\ast$.
\end{claim}
\begin{proof}
Assume the contrary, and let $uv \in E(H)$ be an edge violating the condition. By symmetry, assume $f^{\beta,\ast}(u) \neq j$.
Then, since $(j,g)$ is lucky, we have that $u \in A^\ast$. Hence $v \notin A^\ast$, that is, $f^{\beta,\ast}(v) = j$.
By the definition of $H$, there exists a constraint $\Gamma \in \mathcal{C}$ with $u,v \in X_{\Gamma}$ (either $\Gamma = \Gamma(uv)$ or
$\Gamma = \Gamma(s)$ for some $s \in Z$ with $u,v \in \sigma(s)$). Consequently, $\Gamma$ is touched, and $v \in B^\ast$.
However, then from the fact that $(j,g)$ is lucky it follows that $g(v) = j$, a contradiction.
\renewcommand{\qed}{\cqedsymbol}\end{proof}
Claim~\ref{cl:ac:stains} motivates the following approach. We try every pair $(j,g) \in \mathcal{F}$,
and proceed under the assumption that $(j,g)$ is lucky.
We inspect connected components of $H[\Sigma(j,g)]$ one-by-one and either try to
set a candidate for function $f^{\beta,\ast}$ to be equal to $g$ or constantly equal $j$ on the component.
Claim~\ref{cl:ac:stains} asserts that one could construct $f^{\beta,\ast}$ in this manner.
The definition of $H$ implies that for every constraint $\Gamma \in \mathcal{C}$, the set $X_\Gamma$ intersects at most one component
of $H[\Sigma(j,g)]$. This gives significant
independence of the decisions, allowing us to execute a multidimensional knapsack-type dynamic
programming algorithm, as between different connected components of $H[\Sigma(j,g)]$ we only need to keep intermediate values of the
cost and the union of the values of the constructed responsibility assignment.
We proceed with formal arguments.
For every $(j,g) \in \mathcal{F}$, we proceed as follows.
Let $C_1,C_2,\ldots,C_\ell$ be the connected components of $H[\Sigma(j,g)]$ and for $1 \leq i \leq \ell$,
let $\mathcal{C}_i$ be the set of constraints $\Gamma \in \mathcal{C}$ with
$X_\Gamma \cap C_i \neq \emptyset$.
By the definition of $H$,
the sets $\mathcal{C}_i$ are pairwise disjoint.
Let $\mathcal{C}_0 = \mathcal{C} \setminus \bigcup_{i=1}^\ell \mathcal{C}_i$ be the remaining constraints, i.e., the constraints $\Gamma \in \mathcal{C}$ with $X_\Gamma \cap \Sigma(j,g) = \emptyset$.
It will be convenient for us to denote $C_0 = \emptyset$ to be a component accompanying
$\mathcal{C}_0$.
For $J \subseteq \{0,1,2,\ldots,\ell\}$, denote $g_J$ to be the function $g$ modified as follows:
for every $i \in \{0,1,\ldots,\ell\} \setminus J$ and every $v \in C_i$ we set $g_J(v) = j$.
Furthermore, let $J^\sigma \subseteq \{1,2,\ldots,\ell\}$ be the set of those indices $i$
for which $C_i \cap \sigma(t) \neq \emptyset$.
For $0 \leq i \leq \ell$, denote $\mathcal{C}_{\leq i} = \bigcup_{j \leq i} \mathcal{C}_j$.
Let $m = |\mathcal{C}|$. We order constraints in $\mathcal{C}$ as $\Gamma_1,\Gamma_2,\ldots,\Gamma_m$
according to which set $\mathcal{C}_i$ they belong to. That is, if $\Gamma_a \in \mathcal{C}_i$,
$\Gamma_b \in \mathcal{C}_j$ and $i < j$, then $a < b$.
For $0 \leq i \leq \ell$,
let $\overrightarrow{a}(i) = |\mathcal{C}_{\leq i}|$, that is, $\Gamma_{\overrightarrow{a}(i)} \in \mathcal{C}_{\leq i}$
but $\Gamma_{\overrightarrow{a}(i)+1}$ does not belong to $\mathcal{C}_{\leq i}$ (if it exists).
Furthermore, let $\overrightarrow{a}(-1) = 0$.
Let $0 \leq i \leq \ell$ and let $\overrightarrow{a}(i-1) \leq a \leq \overrightarrow{a}(i)$.
An \emph{$a$-partial responsibility assignment} $\rho$ is a responsibility assignment
defined on constraints $\Gamma_1, \ldots, \Gamma_a$ such that $\bigcup_{b=1}^a \rho(\Gamma_b) \subseteq I$.
For a set $J \subseteq \{0,1,\ldots,i\}$ containing $J^\sigma \cap \{0,1,\ldots,i\}$
and an $a$-partial responsibility assignment $\rho$, we define the cost of $J$ and $\rho$ as
$$\mathfrak{c}_{i,a}(J, \rho) = \sum_{b=1}^a M_{\Gamma_b}(\rho(\Gamma_b), g_J|_{X_{\Gamma_b}}).$$
Furthermore, let $k^\beta = k$ if $|\beta(t)| > 3k$ and $k^\beta = |\beta(t)|$ otherwise.
The goal of our dynamic programming algorithm is to compute, for every $-1 \leq i \leq \ell$, $0 \leq k^\bullet \leq k^\beta$,
and $I^\bullet \subseteq I$
a value $Q_i[k^\bullet, I^\bullet]$ that equals a minimum possible cost $\mathfrak{c}_{i,\overrightarrow{a}(i)}(J,\rho)$ over all $J \subseteq \{0,1,\ldots,i\}$ containing $J^\sigma \cap \{0,1,\ldots,i\}$ with $|\bigcup_{i' \in J} C_{i'}| \leq k^\bullet$,
and $\overrightarrow{a}(i)$-partial responsibility assignment $\rho$
with $\bigcup_{b=1}^{\overrightarrow{a}(i)} \rho(\Gamma_b) = I^\bullet$.
The next claim shows that it suffices.
\begin{claim}\label{cl:ac:dp}
We have $Q_\ell[k^\beta, I] \geq \mathfrak{c}(\rho^\ast, f^{\beta,\ast})$.
Furthermore, if $\mathfrak{c}(\rho^\ast,f^{\beta,\ast}) \leq k$ and $(j,g)$ is lucky, then
$Q_\ell[k^\beta,I] = \mathfrak{c}(\rho^\ast, f^{\beta,\ast})$.
\end{claim}
\begin{proof}
For the first claim, let $(J, \rho)$ be the witnessing arguments for the value $Q_\ell[k^\beta, I]$.
Note that $\rho$ is a responsibility assignment. Furthermore, $g_J$ is a feasible function: it extends $f^\sigma$
due to the requirement $J^\sigma \subseteq J$ and it is unbreakable-consistent
due to the requirement $|\bigcup_{i' \in J} C_{i'}| \leq k^\beta$.
The first claim follows from the minimality of $(\rho^\ast, f^{\beta,\ast})$.
For the second claim, by Claim~\ref{cl:ac:stains} we know that on every connected component $C_i$,
the function $f^{\beta,\ast}$ either equals $g$ or is constant at $j$.
Let $J^\ast$ be the set where the first option happens; note that $J^\sigma \subseteq J^\ast$
and $g_{J^\ast} = f^{\beta,\ast}$. Consequently, $\mathfrak{c}_{\ell,m}(J^\ast, \rho^\ast) = \mathfrak{c}(\rho^\ast, f^{\beta,\ast})$.
Finally, as $f^{\beta,\ast}$ is unbreakable-consistent, we have $|\bigcup_{i' \in J^\ast} C_{i'}| \leq k^\beta$.
The claim follows from the minimality of $Q_\ell[k^\beta, I]$.
\renewcommand{\qed}{\cqedsymbol}\end{proof}
For the initialization step of our dynamic programming algorithm, note that for $i=-1$
we have $\overrightarrow{a}(i)=0$ and the minimization for $Q_{-1}[\cdot,\cdot]$
takes into account only $J = \emptyset$ and $\rho=\emptyset$.
Fix now $0 \leq i \leq \ell$; we are to compute the values $Q_i[\cdot,\cdot]$.
To this end, we use another layer of dynamic programming.
For $\overrightarrow{a}(i-1) \leq a \leq \overrightarrow{a}(i)$, $0 \leq k^\bullet \leq k^\beta$, and $I^\bullet \subseteq I$
we define $Q_{i,a}^\in[k^\bullet, I^\bullet]$ to be a minimum possible cost
$\mathfrak{c}_{i,a}(J,\rho)$ over all $J \subseteq \{0,1,2,\ldots,i\}$ containing
$J^\sigma \cap \{0,1,2,\ldots,i\}$ satisfying that $i\in J$ and
$|\bigcup_{i' \in J} C_{i'}| \leq k^\bullet$, and all $a$-partial responsibility assignments
$\rho$ with $\bigcup_{b=1}^a \rho(\Gamma_b) = I^\bullet$.
Similarly we define $Q_{i,a}^{\notin}[k^\bullet, I^\bullet]$ with the requirement that $i \notin J$.
This definition is applicable only if $i \notin J^\sigma$; for indices $i \in J^\sigma$ we do not compute the
values $Q_{i,a}^{\notin}$.
Since $Q_{i,\overrightarrow{a}(i-1)}^\in$, $Q_{i,\overrightarrow{a}(i-1)}^{\notin}$, and $Q_{i-1}$ use all constraints up to $\overrightarrow{a}(i-1)$,
we have
$$Q_{i,\overrightarrow{a}(i-1)}^\in[k^\bullet + |C_i|, I^\bullet] = Q_{i,\overrightarrow{a}(i-1)}^{\notin}[k^\bullet, I^\bullet] = Q_{i-1}[k^\bullet, I^\bullet].$$
Here, and in subsequent formulae, we assume that the value of a cell $Q_{i,a}^\in[\cdot,\cdot]$
or $Q_i[\cdot,\cdot]$ equals $+\infty$ if the first argument is negative, as no feasible $J$ exists in that case.
The above formula serves as the initialization step for computing values $Q_{i,a}^\in[\cdot,\cdot]$
and $Q_{i,a}^{\notin}[\cdot,\cdot]$.
For a single computation step, fix $\overrightarrow{a}(i-1) < a \leq \overrightarrow{a}(i)$.
The definitions of $Q_{i,a}^\in[\cdot,\cdot]$ and $Q_{i,a-1}^\in[\cdot,\cdot]$ differ only in the requirement
to define $\rho(\Gamma_a)$. Furthermore, $X_{\Gamma_a}$ intersects only the component $C_i$
(if $i > 0$), while $i$ is required to be contained in $J$ in the definition
of both $Q_{i,a}^\in[\cdot,\cdot]$ and $Q_{i,a-1}^\in[\cdot,\cdot]$. Consequently,
$$Q_{i,a}^\in[k^\bullet, I^\bullet] = \min_{I' \subseteq I^\bullet} \left(Q_{i,a-1}^\in[k^\bullet, I^\bullet \setminus I'] + M_{\Gamma_a}(I', g|_{X_{\Gamma_a}})\right).$$
Similarly, for $i \notin J^\sigma$ we have
$$Q_{i,a}^{\notin}[k^\bullet, I^\bullet] = \min_{I' \subseteq I^\bullet} \left(Q_{i,a-1}^{\notin}[k^\bullet, I^\bullet \setminus I'] + M_{\Gamma_a}(I', X_{\Gamma_a} \times \{j\})\right).$$
Finally, we set for $i \in J^\sigma$:
$$Q_i[k^\bullet, I^\bullet] = Q_{i,\overrightarrow{a}(i)}^\in[k^\bullet, I^\bullet],$$
and for $i \notin J^\sigma$:
$$Q_i[k^\bullet, I^\bullet] = \min \left( Q_{i,\overrightarrow{a}(i)}^\in[k^\bullet, I^\bullet], Q_{i,\overrightarrow{a}(i)}^{\notin}[k^\bullet, I^\bullet]\right).$$
The above dynamic programming algorithm computes the values $Q_\ell[\cdot,\cdot]$ in time $3^{|I|} \cdot n^{\mathcal{O}(1)}$.
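The $3^{|I|}$ factor stems from enumerating, for every $I^\bullet \subseteq I$, all of its subsets $I'$. A hedged Python sketch of one update step of the above form, with request sets encoded as bitmasks and the labeling $g|_{X_{\Gamma_a}}$ already fixed inside \texttt{M\_gamma} (all names are ours), could look as follows:
\begin{verbatim}
import math

def dp_step(Q_prev, M_gamma, num_requests, kmax):
    # Q[k][mask] = min over submasks s of mask of
    #              Q_prev[k][mask ^ s] + M_gamma(s).
    # Enumerating submasks of all masks costs 3^{num_requests} in total.
    full = 1 << num_requests
    Q = [[math.inf] * full for _ in range(kmax + 1)]
    for k in range(kmax + 1):
        for mask in range(full):
            s = mask
            while True:
                Q[k][mask] = min(Q[k][mask],
                                 Q_prev[k][mask ^ s] + M_gamma(s))
                if s == 0:
                    break
                s = (s - 1) & mask
    return Q
\end{verbatim}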
By Claims~\ref{cl:ac:feas} and~\ref{cl:ac:dp}, we can take $M[t, I, f^\sigma]$
to be the minimum value of $Q_\ell[k^\beta, I]$ encountered over all choices of $(j,g) \in \mathcal{F}$.
Claim~\ref{cl:ac:M} shows that the above suffices to conclude the proof of Theorem~\ref{thm:auxcut}.
\end{proof}
\subsubsection{Steiner Cut}
\begin{proof}[Proof of Theorem~\ref{thm:steinercut}.]
Let $(G,T,p,k)$ be an input to \textsc{Steiner Cut} and let $n = |V(G)|$.
First, we observe that it is sufficient to solve \textsc{Steiner Cut} with the additional assumption that $G$ is connected.
Indeed, in the general case we can proceed as follows. First, we discard all connected components of $G$
that are disjoint with $T$.
Second, for every connected component $G'$ of $G$ and every $0 \leq k' \leq k$, $0 \leq p' \leq p$, we solve \textsc{Steiner Cut}
on the instance $(G', T \cap V(G'), p', k')$.
Finally, the results of these computations allow us to solve the input instance by a straightforward knapsack-type
dynamic programming algorithm.
Thus, we assume that $G$ is connected. In particular, we can assume that $p \leq k+1$ as otherwise the input
instance is a trivial no-instance.
With these assumptions, it is straightforward to observe that the input \textsc{Steiner Cut} instance $(G,T,p,k)$
is equivalent to the \textsc{Auxiliary Multicut} instance $(G,k,p,(T),\{1\} \times [p])$, that is, we set $\tau = 1$,
$T_1 = T$, and $I = \{1\} \times [p]$. Theorem~\ref{thm:steinercut} follows.
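In code, this reduction is a one-liner; a hedged Python sketch (ours, with instance encodings as in the earlier sketches) reads:
\begin{verbatim}
def steiner_cut_to_aux(G, T, p, k):
    # tau = 1, T_1 = T, I = {1} x [p]
    terminals = {1: set(T)}
    I = {(1, j) for j in range(1, p + 1)}
    return (G, k, p, terminals, I)
\end{verbatim}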
\end{proof}
\subsubsection{Steiner Multicut}
\begin{proof}[Proof of Theorem~\ref{thm:multicut}.]
Let $(G,(T_i)_{i=1}^t,k)$ be an input to \textsc{Steiner Multicut} and let $n = |V(G)|$.
First, we observe that it is sufficient to solve \textsc{Steiner Multicut} with the additional assumption that $G$ is connected.
Indeed, in the general case we can proceed as follows. First, we discard all sets $T_i$ that intersect more than one connected component of $G$, as such a set is already disconnected in $G$ and the corresponding constraint is satisfied by any function. Second, for every connected component $G'$ of $G$ and every $0 \leq k' \leq k$, we solve \textsc{Steiner Multicut}
on the graph $G'$, terminal sets $\{T_i~|~T_i \subseteq V(G')\}$, and budget $k'$.
Finally, the results of these computations allow us to solve the input instance by a straightforward knapsack-type
dynamic programming algorithm.
Thus, henceforth we assume that $G$ is connected.
In particular, any function $f : V(G) \to \mathbb{Z}$ of cost at most $k$ can attain at most $k+1$ distinct values. Let $p = k+1$.
If $G$ is connected, then the input \textsc{Steiner Multicut} instance $(G,(T_i)_{i=1}^t,k)$ is a yes-instance
if and only if there exists $f \colon V(G) \to [p]$ of cost at most $k$ such that for every $i \in [t]$, $f$ is not constant on $T_i$.
Thus, to reduce \textsc{Steiner Multicut} to \textsc{Auxiliary Multicut}, it suffices to guess, for every $i \in [t]$, two distinct values from $[p]$ that $f$ attains on $T_i$.
More formally, we iterate over all sequences $(M_i)_{i=1}^t$ such that for every $i \in [t]$ we have $M_i \subseteq [p]$
and $|M_i| = 2$. There are $2^{\mathcal{O}(t \log k)}$ such sequences.
For every sequence $(M_i)_{i=1}^t$ we define $I = \bigcup_{i=1}^t \{i\} \times M_i$ and invoke
the algorithm of Theorem~\ref{thm:auxcut} for the \textsc{Auxiliary Multicut} instance $(G,k,p,(T_i)_{i=1}^t,I)$.
By the discussion above, the input \textsc{Steiner Multicut} instance is a yes-instance if and only
if at least one of the generated \textsc{Auxiliary Multicut} instances is a yes-instance.
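A hedged sketch of this enumeration (ours; it yields one \textsc{Auxiliary Multicut} instance per guess) could be:
\begin{verbatim}
from itertools import combinations, product

def steiner_multicut_instances(G, Ts, k):
    # Ts: the terminal sets T_1, ..., T_t; guess M_i, a 2-element
    # subset of [p], for every i
    p = k + 1
    pairs = list(combinations(range(1, p + 1), 2))
    for Ms in product(pairs, repeat=len(Ts)):
        I = {(i + 1, j) for i, M in enumerate(Ms) for j in M}
        terminals = {i + 1: set(T) for i, T in enumerate(Ts)}
        yield (G, k, p, terminals, I)
\end{verbatim}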
Since in every call we have $|I| = 2t$, Theorem~\ref{thm:multicut} follows.
\end{proof}
\section{Conclusions}
In this paper we presented an algorithm that constructs a tree decomposition as in~\cite{minbisection-STOC}, but faster, with better parameter bounds, and arguably simpler.
This allowed us to improve the parametric factor in the running time bounds for a number of parameterized algorithms for cut problems to $2^{\mathcal{O}(k \log k)}$.
We conclude with a few open questions.
First, can we construct the decomposition as in this paper in time $2^{\mathcal{O}(k)} n^{\mathcal{O}(1)}$? The current techniques seem to fall short of this goal.
Second, the problem of finding a lean witness for a bag that is not $(k,k)$-unbreakable can be generalized to the following {\sc{Densest Subhypergraph}} problem:
given a hypergraph $H=(V,E)$, where the same hyperedge may appear multiple times (i.e. it is a multihypergraph), and an integer $k$,
one is asked to find a set $X\subseteq V$ consisting of $k$ vertices that maximizes the number of hyperedges contained in $X$.
A trivial approximation algorithm
is to just find the hyperedge of size at most $k$ that has the largest multiplicity and cover it; this achieves an approximation factor of $2^k$.
We conjecture that it is not possible to obtain a $2^{o(k)}$-approximation in FPT time for this problem.
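For concreteness, a hedged Python sketch of this trivial approximation (ours; the returned set may be padded arbitrarily to exactly $k$ vertices) is:
\begin{verbatim}
from collections import Counter

def trivial_densest(hyperedges, k):
    # Pick the hyperedge of size <= k with the largest multiplicity and
    # cover it; a k-set contains at most 2^k distinct hyperedges, which
    # yields the approximation factor 2^k.
    small = [frozenset(e) for e in hyperedges if len(e) <= k]
    if not small:
        return set()
    best_edge, _ = Counter(small).most_common(1)[0]
    return set(best_edge)
\end{verbatim}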
Third, we are not aware of any conditional lower bounds for \textsc{Minimum Bisection} that are even close to our $2^{\mathcal{O}(k \log k)} n^{\mathcal{O}(1)}$ upper bound.
To the best of our knowledge, even an algorithm with running time $2^{o(k)}\cdot n^{\mathcal{O}(1)}$ is not excluded under the Exponential Time Hypothesis using the known {\sc{NP}}-hardness reductions.
It would be interesting to understand to what extent the running time of our algorithm is optimal.
\section{Constructing a lean decomposition: Proof of Theorem~\ref{thm:decomp}}\label{s:lean}
\subsection{Refinement step}
Bellenbaum and Diestel~\cite{BellenbaumD02} defined an improvement step that, given a tree
decomposition and a lean witness, refines the decomposition so that it is in some sense
closer to being lean. We will use the same refinement step, but only for the special single bag case. Hence, in subsequent sections we
focus on finding a single bag lean witness in a current candidate tree decomposition.
The following observation is a direct consequence of Menger's theorem.
\begin{claim}
\label{claim:equiv}
For a tree decomposition $(T,\beta)$, a node $s \in T$, and subsets $Z_1,Z_2 \subseteq \beta(s)$ with $|Z_1|=|Z_2|$,
the following two conditions are equivalent:
\begin{itemize}
\item $(s,s,Z_1,Z_2)$ is a single bag lean witness for $(T,\beta)$,
\item there is a separation $(A_1,A_2)$ of $G$ and a set of vertex disjoint $Z_1-Z_2$ paths $\{P_x\}_{x \in X}$, where $X = A_1 \cap A_2$,
such that $Z_i \subseteq A_i$ for $i\in \{1,2\}$, $|X| < |Z_1|$, and $x \in V(P_x)$ for every $x \in X$.
\end{itemize}
Moreover, given a single bag lean witness $(s,s,Z_1,Z_2)$, one can find the above separation $(A_1,A_2)$ and set of paths $\{P_x\}_{x \in X}$ in polynomial time.
\end{claim}
The minimum order of a separation from the second point of Claim~\ref{claim:equiv} is called the {\em{order}}
of the single bag lean witness $(s,s,Z_1,Z_2)$.
To argue that the refinement process stops after a small number of steps, or that it stops
at all, we define a potential for a graph $G$, a tree decomposition $(T,\beta)$, and an integer $k$
as follows.
\begin{align}
\Phi^1_{G,k}(T, \beta) & = \sum_{t \in T} \max(0, |\beta(t)|-2k-1) \\
\Phi^2_{G,k}(T, \beta) & = \sum_{t \in T} (2^{\min(|\beta(t)|, 2k+1)}-1) \\
\Phi_{G,k}(T, \beta) &= 2^{2k+1} (n+1) \Phi^1_{G,k}(T, \beta) + \Phi^2_{G,k}(T, \beta).
\end{align}
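In code, the potential is straightforward to evaluate; the following minimal Python sketch (ours) computes $\Phi_{G,k}(T,\beta)$ from the multiset of bag sizes:
\begin{verbatim}
def potential(bag_sizes, n, k):
    # Phi^1, Phi^2, and Phi exactly as in the three displayed equations
    phi1 = sum(max(0, b - 2 * k - 1) for b in bag_sizes)
    phi2 = sum(2 ** min(b, 2 * k + 1) - 1 for b in bag_sizes)
    return 2 ** (2 * k + 1) * (n + 1) * phi1 + phi2
\end{verbatim}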
Note that if $T$ has at most $n+1$ bags, then $\Phi_{G,k}(T, \beta)$ is order-equivalent to
the lexicographical order of $(\Phi^1_{G,k}(T, \beta), \Phi^2_{G,k}(T, \beta))$.
Also note that this potential is different from the one used by Bellenbaum and Diestel in~\cite{BellenbaumD02}:
the potential of~\cite{BellenbaumD02} can be exponential in $n$ while being oblivious to the cut
size $k$.
The single refinement step we use is encapsulated in the following lemma, which essentially repeats
the refinement process of~\cite{BellenbaumD02}, with the analysis adapted to the new potential.
We emphasize that in this part, all considered tree decompositions are unrooted.
\begin{lemma}\label{lem:vertex-refine}
Assume we are given a graph $G$, an integer $k$, a tree decomposition $(T, \beta)$ of $G$
with every adhesion of size at most $k$, a node $s \in T$, and
a single bag lean witness $(s,s,Z_1,Z_2)$ with $|Z_1| \le k+1$.
Then one can in polynomial time compute a tree decomposition $(T',\beta')$ of $G$
with every adhesion of size at most $k$ and having at most $n+1$ bags
such that $\Phi_{G,k}(T,\beta) > \Phi_{G,k}(T',\beta')$.
\end{lemma}
\begin{proof}
Apply Claim~\ref{claim:equiv}, yielding a separation $(A_1,A_2)$ and a family $\{P_x\}_{x \in X}$ of vertex disjoint $Z_1-Z_2$ paths, where $X=A_1\cap A_2$.
Note that $k+1 \ge |Z_1| > |X|$, hence the order of $(A_1,A_2)$ is at most $k$.
For every $x \in X$, the path $P_x$ is split by $x$ into two subpaths $P_x^i$, $i=1,2$, each leading from $Z_i$ to $x$.
Note that $V(P_x^i) \setminus \{x\} \subseteq A_i \setminus A_{3-i}$ for $i=1,2$.
We construct a tree decomposition $(T',\beta')$ as follows.
First for every $i = 1,2$, we construct a decomposition $(T^i,\beta^i)$ of $G[A_i]$:
we start with $T^i$ being a copy of $T$, where a node $t \in V(T)$ corresponds
to a node $t^i \in V(T^i)$, and we set $\beta^i(t^i) := \beta(t) \cap A_i$ for every $t \in V(T)$.
Then for every $x \in X \setminus \beta(s)$ we take the node $t_x \in V(T)$ such that $x \in \beta(t_x)$ and $t_x$
is the closest to $s$ in $T$ among such nodes, and insert $x$ into every bag $\beta^i(t^i)$ for $t^i$
lying on the path between $s^i$ (inclusive) and $t_x^i$ (exclusive) in $T^i$.
Clearly, $(T^i,\beta^i)$ is a tree decomposition of $G[A_i]$ and $X \subseteq \beta^i(s^i)$.
We construct $(T^\circ,\beta^\circ)$ by taking $T^\circ$ to be the disjoint union of $T^1$ and $T^2$, with
the copies of the node $s$ connected by an edge $s^1s^2$, and $\beta^\circ := \beta^1 \cup \beta^2$.
Since $(A_1,A_2)$ is a separation and $X = A_1 \cap A_2$ is present in both bags $\beta^1(s^1)$, $\beta^2(s^2)$, we infer
that $(T^\circ,\beta^\circ)$ is a tree decomposition of $G$.
Finally, we apply the cleanup procedure to $(T^\circ,\beta^\circ)$, thus obtaining the final tree decomposition $(T',\beta')$ where $T'$ has at most $n$ edges.
From the properties of the separation $(A_1,A_2)$ we infer that $\beta^i(s^i) \subsetneq \beta(s)$ and
$\beta^i(s^i) \not\subseteq A_{3-i}$ for $i=1,2$. In particular, the edge $s^1s^2$ is not contracted during the cleanup
and in $(T',\beta')$ the adhesion of the edge $s^1s^2$ is exactly $X$, which is of size at most $k$.
Consider now a node $t^i$ in the tree decomposition $(T^i,\beta^i)$. The set $\beta^i(t^i) \setminus \beta(t)$
consists of some vertices of $X$, namely those vertices $x \in X \setminus \beta(s)$ for which $t$
lies on the path between $s$ (inclusive) and $t_x$ (exclusive).
On the other hand, by the properties of a tree decomposition, $\beta(t)$ contains at least one vertex of $V(P_x^{3-i})\setminus\{x\} \subseteq A_{3-i} \setminus A_i$.
This vertex is not present in $A_i$, hence it is also not present in $\beta^i(t^i)$.
We conclude that $|\beta^i(t^i)\setminus \beta(t)|\leq |\beta(t)\setminus \beta^i(t^i)|$, implying $|\beta^i(t^i)| \leq |\beta(t)|$.
The same argumentation can be applied to every edge $e^i \in E(T^i)$ and adhesion of this edge, showing that $|\sigma^i(e^i)|\leq |\sigma(e)|$.
We infer that every adhesion of $(T^\circ,\beta^\circ)$ (and thus also of $(T',\beta')$) is of size at most $k$.
We are left with analysing the potential decrease. We make an auxiliary claim.
\begin{claim}
The first part of the potential is non-increasing, i.e.,
$\Phi^1_{G,k}(T, \beta) \geq \Phi^1_{G,k}(T', \beta')$.
Furthermore, if $|\beta(t)| > \max(|\beta^1(t^1)|, |\beta^2(t^2)|)$ for some $t \in V(T)$
with $|\beta(t)|>2k+1$, then $\Phi_{G,k}^1(T, \beta) > \Phi_{G,k}^1(T', \beta')$.
\end{claim}
\begin{proof}
It suffices to show that this holds for $(T^\circ,\beta^\circ)$, as the cleanup operation does not increase any of the parts of the potential.
Fix $t \in V(T)$. We analyse the difference between the contribution to the potential
of $t$ in $(T,\beta)$ and the copies of $t$ in $(T^\circ,\beta^\circ)$.
First, we have already argued that $|\beta^i(t^i)| \leq |\beta(t)|$
for $i=1,2$. Consequently, if $|\beta(t)| \leq 2k+1$, then $|\beta^i(t^i)| \leq 2k+1$ for $i=1,2$
and the discussed contributions are all equal to~$0$.
Furthermore, if $|\beta^i(t^i)| \leq 2k+1$ for some $i=1,2$, then $\max(|\beta^{i}(t^{i})|-2k-1,0)=0$ and
$\max(|\beta(t)|-2k-1,0) \geq \max(|\beta^{3-i}(t^{3-i})|-2k-1,0)$ due to
$|\beta^{3-i}(t^{3-i})| \leq |\beta(t)|$, yielding the claim.
Otherwise, assume that $|\beta(t)| > 2k+1$ and $|\beta^i(t^i)| > 2k+1$ for $i=1,2$.
Note that $\beta^1(t^1) \cup \beta^2(t^2) \subseteq \beta(t) \cup X$
while $\beta^1(t^1) \cap \beta^2(t^2) \subseteq X$ and $|X| \leq k$.
Consequently,
\begin{align}
|\beta(t)|-2k-1 &\geq |\beta(t) \setminus A_2| + |\beta(t) \setminus A_1| - 2k-1\nonumber \\
&\geq |(\beta(t) \setminus A_2) \cup X| + |(\beta(t) \setminus A_1) \cup X| - 4k-1\label{eq:raccoon} \\
&> |\beta^1(t^1)| -2k-1 + |\beta^2(t^2)| -2k-1.\nonumber
\end{align}
We infer that for every bag $t \in V(T)$, the contribution of $t$ to the potential
$\Phi^1_{G,k}(T,\beta)$ is not smaller than the contribution of the two copies of $t$
in $\Phi^1_{G,k}(T^\circ,\beta^\circ)$.
Finally, assume that $|\beta(t)| > \max(|\beta^1(t^1)|, |\beta^2(t^2)|)$ and $|\beta(t)| > 2k+1$.
If $|\beta^i(t^i)| > 2k+1$ for $i=1,2$, then by~\eqref{eq:raccoon} the potential $\Phi^1$ decreases, so assume (w.l.o.g.) that $|\beta^2(t^2)| \leq 2k+1$.
Then $t^2$ contributes nothing to $\Phi^1_{G,k}(T^\circ, \beta^\circ)$, while
the contribution of $t^1$ to $\Phi^1_{G,k}(T^\circ, \beta^\circ)$ is strictly smaller than
the contribution of $t$ to $\Phi^1_{G,k}(T, \beta)$.
\renewcommand{\qed}{\cqedsymbol}\end{proof}
We proceed with the proof.
Note that, since both $T$ and $T'$ have at most $n+1$ vertices due to the cleanup step,
$\Phi^1_{G,k}(T, \beta) > \Phi^1_{G,k}(T', \beta')$ entails $\Phi_{G,k}(T, \beta) > \Phi_{G,k}(T', \beta')$.
Hence, in the remainder we assume that for every $t \in V(T)$ with $|\beta(t)|>2k+1$
we have $\max_{i=1, 2} |\beta^i(t^i)|=|\beta(t)|$.
We argue that in this case, one of $t^1$ and $t^2$ disappears in cleanup.
Let $t \in V(T)$ be arbitrary with $|\beta(t)|>2k+1$, and assume
w.l.o.g.\ that $|\beta(t)| = |\beta^1(t^1)|$.
This in particular implies that $s \neq t$.
Let $X' = \beta^1(t^1) \setminus \beta(t)$; note that $X' \subseteq X$.
Recall that for every $x \in X'$ the bag $\beta(t)$ contains at least one vertex of $V(P_x^2) \setminus \{x\} \subseteq A_2 \setminus A_1$ which is no longer present in $\beta^1(t^1)$.
Since $|\beta(t)| = |\beta^1(t^1)|$, $\beta(t)$
contains exactly one vertex $v_x$ of $V(P_x^2) \setminus \{x\}$ for every $x \in X'$
and
$$\beta^2(t^2) = (\beta(t) \cap X) \cup \{v_x\colon x \in X'\}.$$
Let $t_\circ$ be the neighbor of $t$ that lies on the path from $t$ to $s$ in $T$.
Fix $x \in X'$; we claim that $v_x \in \beta(t_\circ)$.
This is clear if $v_x \in Z_2$ is the endpoint of $P_x^2$, because then $v_x\in \beta(s)\cap \beta(t)\subseteq \beta(t_\circ)$. Otherwise, let $w_x$ be the neighbor of $v_x$ on $P_x^2$ in the direction of the endpoint in $Z_2$.
Since $v_x$ is the only vertex of $V(P_x^2) \setminus \{x\}$ in $\beta(t)$, while the endpoint of $P_x^2$ belonging to $Z_2$ lies in $\beta(s)$,
$w_x$ belongs to a bag at a node $t'$ of $T$ that lies on the path between $t$ and $s$ in $T$.
As $w_x\notin \beta(t)$ while $v_x$ and $w_x$ are adjacent, we infer that $v_x \in \beta(t_\circ)$, as desired.
Since $v_x\in A_2$, this entails $v_x\in \beta^2(t_\circ^2)$.
The argumentation of the previous paragraph shows that $\{v_x\colon x\in X'\}\subseteq \beta^2(t_\circ^2)$.
On the other hand, from the construction we directly have $\beta(t) \cap X \subseteq \beta^2(t_\circ^2)$.
Thus $\beta^2(t^2) \subseteq \beta^2(t_\circ^2)$ and $t^2$ disappears in the cleanup procedure.
Hence, its contribution to $\Phi_{G,k}(T', \beta')$ can be ignored.
Finally, we claim that under these conditions, $\Phi^2_{G,k}(T, \beta) > \Phi^2_{G,k}(T', \beta')$.
Take any $t \in V(T)$.
If either $t^1$ or $t^2$ disappears in $T'$, then the contribution of $t$ to $\Phi^2_{G,k}(T, \beta)$ is not smaller than the total contribution of $t^1$ and $t^2$ to $\Phi^2_{G,k}(T', \beta')$.
Hence, suppose that both $t^1$ and $t^2$ remain in $T'$.
Letting $\ell=|\beta(t)|$, by the assumption we have $\ell \leq 2k+1$ and the contribution
of $t$ to $\Phi^2_{G,k}(T,\beta)$ is equal to $2^\ell-1$. On the other hand, since $\max_{i=1, 2} |\beta^i(t^i)|<|\beta(t)|$,
the total contribution of $t^1$ and $t^2$ to $\Phi^2_{G,k}(T', \beta')$ is
at most $2 \cdot (2^{\ell-1}-1) = 2^\ell - 2$, which is strictly smaller than $2^{\ell}-1$.
Since (as noted) $|\beta^i(s^i)| < |\beta(s)|$ for $i=1,2$, both $s^1$ and $s^2$ remain in $T'$, so
it follows that $\Phi_{G,k}(T, \beta) > \Phi_{G,k}(T', \beta')$.
\end{proof}
\subsection{Finding lean witness}\label{s:vertex}
\input{decomposition-new}
\subsection{The algorithm}
With Lemma~\ref{lem:separations} established, we now complete the proof of
Theorem~\ref{thm:decomp}.
\begin{proof}[Proof of Theorem~\ref{thm:decomp}.]
First, we can assume that $G$ is connected, as otherwise we can compute a tree decomposition
for every component separately, and then glue them up in an arbitrary fashion.
We start with a naive unrooted tree decomposition $(T,\beta)$ that has a single node whose bag is the entire vertex set.
Then we iteratively improve it using Lemma~\ref{lem:vertex-refine},
until it satisfies the conditions of Theorem~\ref{thm:decomp}, except for compactness,
which we will handle in the end.
We maintain the invariant that every adhesion of $(T,\beta)$ is of size at most $k$.
At every step the potential $\Phi_{G,k}(T, \beta)$ will decrease, leading to at most
$\Phi_{G,k}(T,\beta)=\mathcal{O}(4^kn^2)$ steps of the algorithm.
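Before elaborating on a single step, the whole loop can be summarized by the following hedged Python-style sketch (ours; the helper procedures passed as arguments are placeholders for the algorithms of Lemmas~\ref{lem:separations}, \ref{lem:vertex-refine}, and~\ref{lem:compactification}):
\begin{verbatim}
def build_decomposition(G, k, initial, find_witness, refine, compactify):
    T = initial(G)                 # single bag containing all of V(G)
    while True:
        w = find_witness(T, G, k)  # Lemma lem:separations
        if w is None:              # every bag is (i,i)-unbreakable
            return compactify(T)   # Lemma lem:compactification
        T = refine(T, w)           # Lemma lem:vertex-refine; the
                                   # potential strictly decreases
\end{verbatim}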
Let us now elaborate on a single step of the algorithm.
There is one reason why $(T,\beta)$ may not satisfy
the conditions of Theorem~\ref{thm:decomp}: namely, it contains a bag that is not $(i,i)$-unbreakable for some $1 \leq i \leq k$.
Consider a bag $S := \beta(t)$ that is not $(i,i)$-unbreakable for some $1 \leq i \leq k$.
Since every adhesion of $(T,\beta)$ is of size at most $k$, we have
that for every connected component $D$ of $G-S$ it holds that $|N_G(D)| \leq k$.
Consequently, Lemma~\ref{lem:separations} allows us to find a single-bag lean witness of order
at most $k$ for the node $t$
in time $2^{\mathcal{O}(k \log k)}\cdot n^{\mathcal{O}(1)}$.
Suppose we uncovered a single-bag lean witness for the node $t$.
Then we may refine the decomposition by applying Lemma~\ref{lem:vertex-refine}, and proceed iteratively with the refined decomposition.
As asserted by Lemma~\ref{lem:vertex-refine}, the potential $\Phi_{G,k}(T,\beta)$ strictly decreases in each iteration,
while the number of edges in the decomposition is always upper bounded by $n$.
Observe that the potential $\Phi_{G,k}(T,\beta)$ is always upper bounded by $2^{\mathcal{O}(k)} \cdot n^{\mathcal{O}(1)}$
and every iteration can be executed in time $2^{\mathcal{O}(k\log k)}\cdot n^{\mathcal{O}(1)}$.
Hence, we conclude that the refinement process finishes within the claimed time complexity and outputs an unrooted tree decomposition $(T,\beta)$
that satisfies all the requirements of Theorem~\ref{thm:decomp}, except for being compact.
This can be remedied by applying the algorithm of Lemma~\ref{lem:compactification}.
Note that neither the unbreakability of bags nor the upper bound on the sizes of adhesions can deteriorate as a result of applying the algorithm of Lemma~\ref{lem:compactification},
as every bag (resp. every adhesion) of the obtained tree decomposition is a subset of a bag (resp. an adhesion) of the original one.
\end{proof}
\section{Introduction}
Since the pioneering work of Marx~\cite{Marx06},
the study of graph separation problems has been a large and lively subarea of parameterized complexity.
It led to the development of many interesting algorithmic techniques, including
important separators and shadow removal~\cite{ChenLLOR08,dir-mwc,LokshtanovM13,MarxR14,RazgonO09},
branching algorithms based on half-integral relaxations~\cite{mwc-a-lp,sylvain,Iwata17,magnus,IwataYY17},
matroid-based algorithms for preprocessing~\cite{ms2,ms1}, the treewidth reduction technique~\cite{MarxOR13},
and, what is the most relevant for this work, the framework of randomized contractions~\cite{randcontr,KT11}.
The work of Marx~\cite{Marx06} left a number of fundamental questions open, including the parameterized complexity
of the \textsc{$p$-Way Cut} problem: given a graph $G$ and integers $p$ and $k$, can one delete at most $k$
edges from $G$ to obtain a graph with at least $p$ connected components?
We remark that it is easy to reduce the problem to the case when $G$ is connected and $p \leq k+1$.
Marx proved $\mathsf{W}[1]$-hardness of the vertex-deletion variant of the problem, but the question about fixed-parameter tractability of the edge-deletion
variant remained open until Kawarabayashi and Thorup settled it in the affirmative in 2011~\cite{KT11}.
In their algorithm, Kawarabayashi and Thorup introduced a useful recursive scheme.
For a graph $G$, an \emph{edge cut} is a pair $A, B \subseteq V(G)$ such that $A \cup B = V(G)$ and $A \cap B = \emptyset$.
The {\em{order}} of an edge cut $(A,B)$ is $|E(A,B)|$.
Assume one discovers in the input graph $G$ an edge cut $(A,B)$ of order at most $k$ such that
both $|A|$ and $|B|$ are large; say $|A|,|B| > q$ for some parameter $q$ to be fixed later.
Then one can recurse on one of the sides, say $A$, in the following manner.
For every behavior of the problem on the set $N(B)$ --- in the context of \textsc{$p$-Way Cut},
for every assignment of the vertices of $N(B)$ into $p$ target components ---
one recurses on an annotated version of the problem to find a minimum-size partial solution in $G[A]$.
Since $|N(B)|$ is bounded by the order of the edge cut $(A,B)$, the number of behaviors
to consider is bounded by a function of $k$. Thus, if $q$ is larger than the number of behaviors times $k$ (which is still a function of $k$
only), there is an edge
that is \emph{not used} in any of the found minimum partial solutions. Such an edge can be safely contracted
and the algorithm is restarted.
It remains to show how to find such an edge cut $(A,B)$ and how the algorithm should work in the absence of such a cut.
Since the absence of such balanced cuts is a critical notion in this work, we make the following definitions
that take into account vertex cuts as well. Here, a pair $(A,B)$ of vertex subsets in a graph $G$ is a {\em{separation}}
if $A\cup B=V(G)$ and there is no edge with one endpoint in $A\setminus B$ and the other in $B\setminus A$; the {\em{order}} of the separation $(A,B)$ is $|A\cap B|$.
\begin{definition}[unbreakability]
Let $G$ be a graph.
A vertex subset $X \subseteq V(G)$ is \emph{$(q,k)$-unbreakable} if every separation $(A,B)$ of order at most $k$
satisfies $|A \cap X| \leq q$ or $|B \cap X| \leq q$.\footnote{Note that this definition of a
$(q,k)$-unbreakable set is stronger than the one used in~\cite{minbisection-STOC} (and in a previous
version of the present paper). In that definition, which we may call \emph{weakly $(q,k)$-unbreakable},
it is only required that $|(A \setminus B) \cap X| \leq q$ or $|(B \setminus A) \cap X| \leq q$.
To illustrate the difference, consider the case where $X=V(G)$. Then testing whether $X$ is
$(k,k)$-unbreakable reduces to testing whether $G$ is $(k+1)$-connected, whereas testing
whether $X$ is weakly $(k,k)$-unbreakable is as hard as the \textsc{Hall Set} problem, and
thus W[1]-hard~\cite{GaspersKOSS12}. Thus, the present version is more suitable for Theorem~\ref{thm:decomp}.}
A vertex subset $Y \subseteq V(G)$ is \emph{$(q,k)$-edge-unbreakable} if every edge cut $(A,B)$ of order at most $k$
satisfies $|A \cap Y| \leq q$ or $|B \cap Y| \leq q$.
\end{definition}
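As two extreme examples: if $G$ is a path on $n$ vertices, then $V(G)$ is not $(q,1)$-unbreakable for any $q < \lfloor n/2 \rfloor$, as witnessed by a separation of order $1$ splitting the path at its middle vertex; on the other hand, if $G$ is a clique, then $V(G)$ is $(k,k)$-unbreakable for every $k$, because in a clique every separation $(A,B)$ satisfies $A \subseteq B$ or $B \subseteq A$.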
\noindent Observe that every set that is $(q,k)$-unbreakable is also $(q,k)$-edge-unbreakable.
Hence, the leaves of the recursion of Kawarabayashi and Thorup deal with graphs $G$ where $V(G)$ is $(q,k)$-edge-unbreakable.
The algorithm of~\cite{KT11} employs involved arguments originating in the graph minor theory both to deal with this case
and to find the desired edge cut $(A,B)$ for recursion. These arguments, unfortunately, imply a large overhead in the
running time bound and are problem-specific.
A year later, Chitnis et al.~\cite{randcontr-FOCS,randcontr} replaced the arguments based on the graph minor theory with steps
based on \emph{color coding}: a simple yet powerful algorithmic technique introduced by Alon, Yuster, and Zwick in 1995~\cite{AlonYZ95}.
This approach is both arguably simpler and leads to better running time bounds.
Furthermore, the general methodology of~\cite{randcontr} --- dubbed \emph{randomized contractions} --- turns out to be more generic
and can be applied to solve such problems as \textsc{Unique Label Cover}, \textsc{Multiway Cut-Uncut}~\cite{randcontr},
or \textsc{Steiner Multicut}~\cite{BringmannHML16}.
All the abovementioned algorithms have running time bounds of the order of $2^{\mathrm{poly}(k)} \mathrm{poly}(n)$,
where both occurrences of $\mathrm{poly}$ hide quadratic or cubic polynomials.
Later, Lokshtanov et al.~\cite{LokshtanovR0Z18} showed how the idea of randomized contractions can be applied to give a reduction
for the {\sc{CMSO}} model-checking problem from general graphs to highly connected graphs.
While powerful, the randomized contractions technique seemed to be one step short of providing
a parameterized algorithm for the \textsc{Minimum Bisection} problem, which was an open problem at that time.
In this problem, given a graph $G$ and an integer $k$, one asks for an edge cut $(A,B)$ of order at most $k$
such that $|A|=|B|$.
The only step that fails is the recursive step itself: the number of possible behaviors of the problem on an edge cut
of small order is not bounded by any function of $k$, as the description of a behavior needs to include some indicator of the balance
between the numbers of vertices assigned to the sides $A$ and $B$.
This problem has been circumvented by a subset of the current authors in 2014~\cite{minbisection-STOC}
by replacing the recursive strategy with a dynamic programming algorithm on an appropriately constructed tree decomposition.
To properly describe the contribution, we need some more definitions.
A \emph{tree decomposition} of a graph $G$ is a pair $(T,\beta)$ where $T$ is a tree and $\beta$ is a mapping that assigns to every $t \in V(T)$
a set $\beta(t) \subseteq V(G)$, called a \emph{bag}, such that the following holds: (i) for every $e \in E(G)$ there exists $t \in V(T)$ with $e \subseteq \beta(t)$, and (ii)
for every $v \in V(G)$ the set $\beta^{-1}(v) := \{t \in V(T) \colon v \in \beta(t)\}$ induces a connected nonempty subgraph of $T$.
For a tree decomposition $(T,\beta)$ fix an edge $e = tt' \in E(T)$. The deletion of $e$ from $T$ splits $T$ into two trees $T_1$ and $T_2$, and naturally induces a separation $(A_1,A_2)$ in $G$
with $A_i := \bigcup_{t \in V(T_i)} \beta(t)$, which we henceforth call \emph{the separation associated with $e$}.
The set $\sigma_{T,\beta}(e) := A_1 \cap A_2 = \beta(t) \cap \beta(t')$ is called the \emph{adhesion} of $e$.
We suppress the subscript if the decomposition is clear from the context.
Some of our tree decompositions are rooted, that is, the tree $T$ in a tree decomposition $(T,\beta)$
is rooted at some node $r$.
For $s,t \in V(T)$ we say that \emph{$s$ is a descendant of $t$}
or that \emph{$t$ is an ancestor of $s$} if $t$ lies on the unique path from $s$ to the root;
note that a node is both an ancestor and a descendant of itself.
For a node $t$ that is not a root of~$T$, by $\sigma_{T,\beta}(t)$ we mean
the adhesion $\sigma_{T,\beta}(e)$ where $e$ is the edge connecting $t$ with its parent in $T$.
We extend this notation to $\sigma_{T,\beta}(r) = \emptyset$ for the root $r$.
Again, we omit the subscript if the decomposition is clear from the context.
We define the following functions for convenience:
\begin{align*}
\gamma(t) &= \bigcup_{s:\textrm{\ descendant\ of\ }t} \beta(s), &
\alpha(t) &= \gamma(t) \setminus \sigma(t), &
G_t &= G[\gamma(t)] - E(G[\sigma(t)]).
\end{align*}
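Intuitively, $\gamma(t)$ collects all vertices appearing in the subtree rooted at $t$, $\alpha(t)$ removes from it the adhesion towards the parent, and $G_t$ is the graph induced on $\gamma(t)$ with the edges inside the top adhesion $\sigma(t)$ removed, as these edges remain visible outside the subtree.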
We say that a rooted tree decomposition $(T,\beta)$ of $G$ is \emph{compact}
if for every node $t \in V(T)$ for which $\sigma(t) \neq \emptyset$ we have
that $G[\alpha(t)]$ is connected and $N_G(\alpha(t)) = \sigma(t)$.
The main technical contribution of~\cite{minbisection-STOC} is an algorithm that, given a graph $G$
and an integer $k$, computes a tree decomposition of $G$ with the following properties: (i) the size
of every adhesion is bounded by a function of $k$, and (ii) every bag of the decomposition is $(q,k)$-unbreakable.
In~\cite{minbisection-STOC}, the construction relied on involved arguments using the framework of important separators
and, in essence, also shadow removal. This led to bounds of the form $2^{\mathcal{O}(k)}$ on the obtained value of $q$ and on the sizes of adhesions.
The construction algorithm had a running time bound of $2^{\mathcal{O}(k^2)} n^2 m$.
\paragraph*{Our results}
The main technical contribution of this paper is an improved construction algorithm of a decomposition with the aforementioned properties.
\begin{theorem}\label{thm:decomp}\label{thm:decomposition}
Given an $n$-vertex graph $G$ and an integer $k$, one can in time $2^{\mathcal{O}(k \log k)} n^{\mathcal{O}(1)}$ compute a rooted compact tree decomposition $(T,\beta)$ of $G$
such that
\begin{enumerate}
\item every adhesion of $(T,\beta)$ is of size at most $k$;
\item every bag of $(T,\beta)$ is $(i,i)$-unbreakable in $G$ for every $1 \leq i \leq k$.
\end{enumerate}
\end{theorem}
\noindent Note that since every bag of the output decomposition $(T,\beta)$ is $(k,k)$-unbreakable, it is also $(k,k)$-edge-unbreakable.
The main highlights of Theorem~\ref{thm:decomp} are the improved dependency on $k$ in the running time bound
and the best possible bounds both for the unbreakability and for the adhesion sizes. These improvements have direct consequences for algorithmic applications:
they allow us to develop $2^{\mathcal{O}(k \log k)} n^{\mathcal{O}(1)}$-time parameterized algorithms for a number
of problems that ask for an edge cut of order at most $k$, with the most prominent one being \textsc{Minimum Bisection}.
That is, all applications mentioned below consider edge deletion problems and in their proofs we rely only on $(k,k)$-edge-unbreakability of the bags.
\begin{theorem}\label{thm:bisection}
\textsc{Minimum Bisection} can be solved in time
$2^{\mathcal{O}(k \log k)} n^{\mathcal{O}(1)}$.
\end{theorem}
This improves the parametric factor of the running time from $2^{\mathcal{O}(k^3)}$, provided in~\cite{minbisection-STOC}, to $2^{\mathcal{O}(k\log k)}$.
In our second application, the \textsc{Steiner Cut} problem, we are given an undirected graph $G$, a set $T \subseteq V(G)$ of terminals, and integers $k,p$.
The goal is to delete at most $k$ edges from $G$ so that the resulting graph has at least $p$ connected components, each containing
at least one terminal. This problem generalizes \textsc{$p$-Way Cut}, which corresponds to the case $T=V(G)$.
\begin{theorem}\label{thm:steinercut}
\textsc{Steiner Cut} can be solved in time $2^{\mathcal{O}(k \log k)} n^{\mathcal{O}(1)}$.
\end{theorem}
This improves the parametric factor of the running time from $2^{\mathcal{O}(k^2\log k)}$, provided in~\cite{randcontr}, to $2^{\mathcal{O}(k\log k)}$.
In the \textsc{Steiner Multicut}
problem we are given an undirected graph $G$, $t$ sets of terminals $T_1,T_2,\ldots,T_t$, each of size at most $p$,
and an integer $k$. The goal is to delete at most $k$ edges from $G$ so that every terminal set $T_i$ is separated:
for every $1 \leq i \leq t$, there does not exist a single connected component of the resulting graph that contains the
entire set $T_i$.
Note that for $p=2$, the problem becomes the classic \textsc{Edge Multicut} problem.
Bringmann et al.~\cite{BringmannHML16} showed an FPT algorithm for \textsc{Steiner Multicut} when parameterized by $k+t$.
We use our decomposition theorem to improve the exponential part of the running time of this algorithm.
\begin{theorem}\label{thm:multicut}
\textsc{Steiner Multicut} can be solved in time $2^{\mathcal{O}((t + k) \log (k + t))} n^{\mathcal{O}(1)}$.
\end{theorem}
This improves the parametric factor of the running time from $2^{\mathcal{O}(k^2t\log k)}$, provided in~\cite{BringmannHML16}, to $2^{\mathcal{O}((t+k)\log (t+k))}$.
\paragraph*{Our techniques}
Our starting point is the definition of a lean tree decomposition of Thomas~\cite{Thomas90}; we follow the formulation of~\cite{BellenbaumD02}.
\begin{definition}
A tree decomposition $(T,\beta)$ of a graph $G$ is called \emph{lean} if for every $t_1,t_2 \in V(T)$
and all sets $Z_1 \subseteq \beta(t_1)$ and $Z_2 \subseteq \beta(t_2)$ with $|Z_1| = |Z_2|$, either
$G$ contains $|Z_1|$ vertex-disjoint $Z_1-Z_2$ paths, or there exists an edge $e \in E(T)$ on the path from $t_1$ to $t_2$
such that $|\sigma(e)| < |Z_1|$.
\end{definition}
For a graph $G$ and a tree decomposition $(T,\beta)$ that is not lean, a quadruple
$(t_1,t_2,Z_1,Z_2)$ for which the above assertion is not true is called a \emph{lean witness}.
Note that it may happen that $t_1 = t_2$ or $Z_1 \cap Z_2 \neq \emptyset$.
A lean witness of the form $(s,s,Z_1,Z_2)$ is called a \emph{single bag lean witness}.
The \emph{order} of a lean witness is the minimum order of a separation $(A_1,A_2)$ such that $Z_i \subseteq A_i$ for $i=1,2$.
Bellenbaum and Diestel~\cite{BellenbaumD02} defined an improvement step that, given a tree
decomposition and a lean witness, refines the decomposition so that it is in some sense
closer to being lean.
Given a lean witness $(t_1,t_2,Z_1,Z_2)$, the refinement step finds a minimum order separation $(A_1,A_2)$ with $Z_i \subseteq A_i$ for $i=1,2$ and rearranges the tree decomposition so that $A_1 \cap A_2$ appears
as a new adhesion on some edge of the decomposition.
Bellenbaum and Diestel introduced a potential function, bounded exponentially in $n$, that decreases at every refinement step.
Thus, one can exhaustively apply the refinement step while a lean witness exists, obtaining (after possibly an exponential number of steps) a lean decomposition.
A simple but crucial observation connecting lean decompositions with the decomposition promised by Theorem~\ref{thm:decomposition} is that if a tree decomposition admits no single bag lean witness of order at most $k$,
then every bag is $(i,i)$-unbreakable for every $1 \leq i \leq k$.
By combining this with the fact that the refinement step applied to a lean witness of order $k'$ introduces one new adhesion of size $k'$ (and does not increase the size of other adhesions),
we obtain the following.
\begin{theorem}\label{thm:exist}
For every graph $G$ and integer $k$, there exists a tree decomposition $(T,\beta)$ of $G$ such that every adhesion of $(T,\beta)$ is of size at most $k$
and every bag is $(i,i)$-unbreakable for every $1 \leq i \leq k$.
\end{theorem}
\begin{proof}[Proof sketch.]
Start with a trivial tree decomposition $(T,\beta)$ that consists of a single bag $V(G)$.
As long as there exists a single bag lean witness of order at most $k$ in $(T,\beta)$, apply the refinement step of Bellenbaum and Diestel to it.
It now remains to observe that if any bag $\beta(t)$ for some $t \in V(T)$ were not $(i,i)$-unbreakable for some $1 \leq i \leq k$,
then the separation witnessing this would give rise to a single bag lean witness for $\beta(t)$.
\end{proof}
A naive implementation of the procedure of Theorem~\ref{thm:exist} runs in time exponential in $n$, while for any application in parameterized algorithms one needs an FPT algorithm with $k$ as the parameter.
To achieve this goal, one needs to overcome two obstacles.
First, the potential provided by Bellenbaum and Diestel gives only a bound exponential in $n$ on the number of needed refinement steps.
Fortunately, one can use the fact that we only refine using single bag witnesses of bounded order to provide a different potential, this time bounded by $2^{\mathcal{O}(k)}n^{\mathcal{O}(1)}$.
Second, one needs to efficiently (in FPT time) verify whether a bag is $(i,i)$-unbreakable for every $1 \leq i \leq k$ and, if not, find a corresponding single bag lean witness.
We design such an algorithm using color-coding: in time $2^{\mathcal{O}(k \log k)} n^{\mathcal{O}(1)}$ we can either
certify that all bags of a given tree decomposition are $(i,i)$-unbreakable for every $1 \leq i \leq k$
or produce a single bag lean witness of order at most $k$.
These ingredients lead to constructing a decomposition with guarantees as in Theorem~\ref{thm:decomposition}.
\medskip
All our applications (Theorems~\ref{thm:bisection},~\ref{thm:steinercut}, and~\ref{thm:multicut}) use the decomposition of Theorem~\ref{thm:decomposition} and follow well-paved ways
of~\cite{randcontr,minbisection-STOC} to perform bottom-up dynamic programming.
Let us briefly sketch this for the case of \textsc{Minimum Bisection}.
Let $(G,k)$ be a \textsc{Minimum Bisection} instance and let $(T,\beta)$ be a tree decomposition of $G$ provided by Theorem~\ref{thm:decomposition}.
The states of our dynamic programming algorithm are the straightforward ones: for every $t \in V(T)$, every $A^\sigma \subseteq \sigma(t)$, and every $0 \leq n^\circ \leq |\alpha(t)|$
we compute the value $M[t,A^\sigma,n^\circ]$, equal to the minimum order of an edge cut $(A,B)$ in $G_t$ such that $A \cap \sigma(t) = A^\sigma$ and $|A \setminus \sigma(t)| = n^\circ$.
Furthermore, we are not interested in cut orders larger than $k$, and we replace them with $+\infty$.
Using the unbreakability of $\beta(t)$ we can additionally require that either $A \cap \beta(t)$ or $B \cap \beta(t)$ is of size at most $k$.
In a single step of dynamic programming, one would like to populate $M[t,\cdot,\cdot]$ using the values $M[s,\cdot,\cdot]$ for all children $s$ of $t$ in $T$.
Fix a cell $M[t,A^\sigma,n^\circ]$. Intuitively, one would like to iterate over all partitions $\beta(t) = A^\beta \uplus B^\beta$ with $A^\beta \cap \sigma(t) = A^\sigma$
and, for fixed $(A^\beta,B^\beta)$, for every child $s$ of $t$ use the cells $M[s,\cdot,\cdot]$ to read the best way to extend $(A^\beta \cap \sigma(s), B^\beta \cap \sigma(s))$ to $\alpha(s)$.
However, $\beta(t)$ can be large, so we cannot iterate over all such partitions $(A^\beta,B^\beta)$.
Here, the properties of the decomposition $(T,\beta)$ come into play: since $\beta(t)$ is $(k,k)$-edge-unbreakable, and in the end we are looking for a solution to \textsc{Minimum Bisection} of order at most $k$,
we can focus only on partitions $(A^\beta,B^\beta)$ such that $|A^\beta| \leq k$ or $|B^\beta| \leq k$. While this still does not allow us to iterate over all such partitions, we can highlight important parts of them
by color coding, similarly as it is done in the leaves of the recursion in the randomized contractions framework~\cite{randcontr}.
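Schematically, and glossing over the double-counting of vertices and edges that the functions $\alpha$, $\sigma$, and $G_t$ are designed to handle, the transition for a fixed cell reads
\[
M[t,A^\sigma,n^\circ] \;=\; \min \Big( c_t(A^\beta,B^\beta) + \sum_{s} M\big[s,\, A^\beta \cap \sigma(s),\, n^\circ_s\big] \Big),
\]
where $c_t(A^\beta,B^\beta)$ is the number of edges of $G[\beta(t)]$ with one endpoint in $A^\beta$ and the other in $B^\beta$, the sum ranges over the children $s$ of $t$, and the minimum ranges over partitions $\beta(t) = A^\beta \uplus B^\beta$ with $A^\beta \cap \sigma(t) = A^\sigma$ and $|A^\beta| \leq k$ or $|B^\beta| \leq k$, as well as over nonnegative integers $n^\circ_s \leq |\alpha(s)|$ satisfying $|A^\beta \setminus \sigma(t)| + \sum_s n^\circ_s = n^\circ$.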
\section{Applications}\label{s:app}
\input{algo-intro}
\subsection{Minimum Bisection}\label{ss:bisection}
\input{algo-bisection}
\subsection{Steiner Cut and Steiner Multicut}\label{ss:steiner}
\input{algo-steiner}
\input{conclusions}
\section{Preliminaries}\label{s:prelim}
\paragraph*{Color coding toolbox}
Let $[n]=\{1,\ldots,n\}$ for a positive integer $n$.
Many of our proofs follow the same outline as the treatment of the high-connectivity phase of the randomized contractions technique~\cite{randcontr}.
As in~\cite{randcontr}, the color coding step is encapsulated in the following lemma:
\newcommand{\mathcal{F}}{\mathcal{F}}
\begin{lemma}[\cite{randcontr}]\label{lem:random}
Given a set $U$ of size $n$ and integers $0 \leq a,b \leq n$, one
can in time $2^{\mathcal{O}(\min(a,b) \log (a+b))} n \log n$
construct a family $\mathcal{F}$ of at most $2^{\mathcal{O}(\min(a,b) \log (a+b))} \log n$
subsets of $U$ such that the following holds:
for any sets $A,B \subseteq U$ with $A \cap B = \emptyset$, $|A|\leq a$, and $|B|\leq b$,
there exists a set $S \in \mathcal{F}$ with $A \subseteq S$ and $B \cap S = \emptyset$.
\end{lemma}
We also need the following more general version that can be obtained from Lemma~\ref{lem:random}
by a straightforward induction on $r$.
\begin{lemma}\label{lem:random2}
Given a set $U$ of size $n$ and integers $r \geq 1$, $0 \leq a_1,a_2,\ldots,a_r \leq n$
with $s = \sum_{i=1}^r a_i$, and $c = \max_{i=1}^r a_i$,
one can in time
$2^{\mathcal{O}((s-c) \cdot \log s)} n \log^{r-1} n$
construct a family $\mathcal{F}$ of at most $2^{\mathcal{O}((s-c) \cdot \log s)} \log^{r-1} n$
functions $f \colon U \to \{1,\ldots,r\}$ such that the following holds:
for any pairwise disjoint sets $A_1,A_2,\ldots,A_r \subseteq U$ such that $|A_i|\leq a_i$ for each
$i \in [r]$,
there exists a function $f \in \mathcal{F}$
with $A_i \subseteq f^{-1}(i)$ for every $i \in [r]$.
\end{lemma}
\paragraph*{Tree decompositions: cleanup and compactification}
We will use the following \emph{cleanup procedure} on a tree decomposition $(T,\beta)$ of a graph $G$: as long as there exists
an edge $st \in E(T)$ with $\beta(s) \subseteq \beta(t)$, contract the edge $st$ in $T$, keeping the name $t$ and the bag $\beta(t)$
at the resulting vertex. We shall say that a node $s$ and bag $\beta(s)$ \emph{disappears} in a cleanup step.
Clearly, the final result $(T',\beta')$ of a cleanup procedure is a tree decomposition of $G$ and every adhesion of $(T',\beta')$ is also an adhesion of $(T,\beta)$. Observe that $|E(T')| \leq |V(G)|$: if one roots $T'$ at an arbitrary vertex,
going from child to parent along every edge of $T'$, we forget at least one vertex of $G$, and every vertex can be forgotten only once.
It is well-known that every rooted tree decomposition can be refined to a compact one; see e.g.~\cite[Lemma 2.8]{BojanczykP16}.
For convenience, we provide a formulation of this fact suited for our needs;
a full proof can be found in Appendix~\ref{app:compactification}.
\begin{lemma}\label{lem:compactification}
Given a graph $G$ and its tree decomposition $(T,\beta)$, one can compute in polynomial time a compact tree decomposition $(\wh{T},\wh{\beta})$ of $G$ such that every bag of $(\wh{T},\wh{\beta})$
is a subset of some bag of $(T,\beta)$, and every adhesion of $(\wh{T},\wh{\beta})$ is a subset of some adhesion of $(T,\beta)$.
\end{lemma}
\section{Introduction}
In psychology, social sciences, and many other fields, researchers are usually interested in ``latent'' variables that cannot be measured directly, e.g., depression, anxiety, or intelligence. To get a grip on these latent concepts, one commonly-used strategy is to construct a measurement model for such a latent variable, in the sense that domain experts design multiple ``items'' or ``questions'' that are considered to be indicators of the latent variable. For exploring evidence of construct validity in theory-based instrument construction, confirmatory factor analysis (CFA) has been widely studied~\citep{joreskog1969general,castro2015likelihood,li2016confirmatory}. In CFA, researchers start with several hypothesised latent variable models that are then fitted to the data individually, after which the one that fits the data best is picked to explain the observed phenomenon. In this process, the fundamental task is to learn the parameters of a hypothesised model from observed data, which is the focus of this paper. For convenience, we simply refer to these hypothesised latent variable models as CFA models from now on.
The most common method for parameter estimation in CFA models is maximum likelihood (ML), because of its attractive statistical properties (consistency, asymptotic normality, and efficiency). The ML method, however, relies on the assumption that observed variables follow a multivariate normal distribution~\citep{joreskog1969general}. When the normality assumption is not deemed empirically
tenable, ML may not only reduce the accuracy of parameter estimates, but may also yield misleading conclusions drawn from empirical data~\citep{li2016confirmatory}. To this end, a robust version of ML was introduced for CFA models when the normality assumption is slightly or moderately violated~\citep{kaplan2008structural}, but this still requires the observations to be continuous. In the real world, the indicator data in questionnaires are usually measured on an ordinal scale (resulting in a collection of ordered categorical variables, or simply ordinal variables)~\citep{poon2012latent}, in which neither normality nor continuity is plausible~\citep{lubke2004applying}. In such cases, diagonally weighted least squares (DWLS in LISREL; WLSMV or robust WLS in M\textit{plus}) has been suggested to be superior to the ML method and is usually considered to be preferable to other methods~\citep{barendse2015using,li2016confirmatory}.
However, there are two major issues that the existing approaches do not consider. One is the mixture of continuous and ordinal data. As we mentioned above, ordinal variables are omnipresent in questionnaires, whereas sensor data are usually continuous. Therefore, a more realistic case in real applications is mixed continuous and ordinal data. A second important issue concerns missing values. In practice, all branches of experimental science are plagued by missing values~\citep{rja1987statistical}, e.g., due to failure of sensors or unwillingness to answer certain questions in a survey. A straightforward idea in this case is to combine missing-data techniques with existing parameter estimation approaches, e.g., performing listwise or pairwise deletion first on the original data and then applying DWLS to learn the parameters of a CFA model. However, such deletion methods are only consistent when the data are \textit{missing completely at random} (MCAR), which is a rather strong assumption~\citep{rubin1976inference}, and cannot transfer the sampling variability incurred by missing values to follow-up studies. The two modern missing data techniques, maximum likelihood and multiple imputation, are valid under a less restrictive assumption, \textit{missing at random} (MAR)~\citep{schafer2002missing}, but they require the data to be multivariate normal.
Therefore, there is a strong demand for an approach that is not only valid under MAR but also works for mixed continuous and ordinal data. For this purpose, we propose a novel Bayesian Gaussian copula factor (BGCF) approach, in which a Gibbs sampler is used to draw pseudo Gaussian data in a latent space restricted by the observed data (unrestricted if that value is missing) and draw posterior samples of parameters given the pseudo data, iteratively. We prove that this approach is consistent under MCAR and empirically show that it works quite well under MAR.
The rest of this paper is organized as follows. Section~\ref{sec:background} reviews background knowledge and related work. Section~\ref{sec:method} gives the definition of a Gaussian copula factor model and presents our novel inference procedure for this model. Section~\ref{sec:simulation} compares our BGCF approach with two alternative approaches on simulated data, and Section~\ref{sec:application} gives an illustration on the `Holzinger \& Swineford 1939' dataset. Section~\ref{sec:conclusion} concludes this paper and provides some discussion.
\section{Background} \label{sec:background}
This section reviews basic missingness mechanisms and related work on parameter estimation in CFA models.
\subsection{Missingness Mechanism}
Following~\citet{rubin1976inference}, let $\bm{Y} = (y_{ij}) \in \mathbb{R}^{n \times p}$ be a data matrix with the rows representing independent samples, and $ \bm{R} = (r_{ij}) \in \{0,1\}^{n \times p}$ be a matrix of indicators, where $r_{ij} = 1$ if $y_{ij}$ was observed and $r_{ij} = 0$ otherwise. $\bm{Y}$ consists of two parts, $\bm{Y}_{obs}$ and $\bm{Y}_{miss}$, representing observed and missing elements in $\bm{Y}$ respectively. When the missingness does not depend on the data, i.e., $P(\bm{R}|\bm{Y}, \theta) = P(\bm{R}|\theta)$ with $\theta$ denoting unknown parameters, the data are said to be \emph{missing completely at random} (MCAR), which is a special case of a more realistic assumption called \emph{missing at random} (MAR). MAR allows the dependency between missingness and observed values, i.e., $P(\bm{R}|\bm{Y}, \theta) = P(\bm{R}|\bm{Y}_{obs},\theta)$. For example, all people in a group are required to take a blood pressure test at time point 1, while only those whose values at time point 1 lie in the abnormal range need to take the test at time point 2. This results in some missing values at time point 2 that are MAR.
\subsection{Parameter Estimation in CFA Models}
When the observations follow a multivariate normal distribution, maximum likelihood (ML) is the most widely used method. It is equivalent to minimizing the discrepancy function $F_{\rm{ML}}$~\citep{joreskog1969general}:
\[
F_{\rm{ML}} = \ln\lvert\Sigma(\theta)\lvert + \tr[S\Sigma^{-1}(\theta)] - \ln\lvert S\lvert - p \:,
\]
where $\theta$ is the vector of model parameters, $\Sigma(\theta)$ is the model-implied covariance matrix, $S$ is the sample covariance matrix, and $p$ is the number of observed variables in the model. When the normality assumption is violated either slightly or moderately, robust ML (MLR) offers an alternative. Here parameter estimates are still obtained using the asymptotically unbiased ML estimator, but standard errors are statistically corrected to enhance the robustness of ML against departures from normality~\citep{kaplan2008structural,muthen2010mplus}. Another method for continuous nonnormal data is the so-called asymptotically distribution free method, which is a weighted least squares (WLS)
method using the inverse of the asymptotic covariance matrix of the sample variances and
covariances as a weight matrix~\citep{browne1984asymptotically}.
When the observed data are on ordinal scales, \citet{muthen1984general} proposed a three-stage approach. It assumes that a normal latent variable $x^*$ underlies an observed ordinal variable $x$, i.e.,
\[
x = m, \mbox{~if~} \tau_{m-1} < x^* < \tau_m \:,
\]
where $m$ $(=1,2,...,c)$ denotes the observed values of $x$, $\tau_m$ are thresholds $(-\infty=\tau_0 < \tau_1 < \tau_2 < ... < \tau_c = +\infty)$, and $c$ is the number of categories. The thresholds and polychoric correlations are estimated from the bivariate contingency table in the first two stages~\citep{olsson1979maximum,joreskog2005structural}. Parameter estimates and the associated standard errors are then obtained by minimizing the weighted least squares
fit function $F_{\rm{WLS}}$:
\[
F_{\rm{WLS}} = [s-\sigma(\theta)]^T\bm{W}^{-1}[s-\sigma(\theta)]\:,
\]
where $\theta$ is the vector of model parameters, $\sigma(\theta)$ is the model-implied vector containing the nonredundant vectorized elements of $\Sigma(\theta)$, $s$ is the vector
containing the estimated polychoric correlations, and the weight matrix $\bm{W}$ is the asymptotic covariance matrix of the
polychoric correlations. A mathematically simple form of the WLS estimator, the unweighted least squares (ULS), arises when the matrix $\bm{W}$ is replaced with the identity matrix $\bm{I}$. Another variant of WLS is the diagonally weighted least squares (DWLS), in which only the diagonal elements of $\bm{W}$ are used in the fit function~\citep{muthen1997robust,muthen2010mplus}, i.e.,
\[
F_{\rm{DWLS}} = [s-\sigma(\theta)]^T\bm{W}^{-1}_{\rm{D}}[s-\sigma(\theta)]\:,
\]
where $\bm{W}^{-1}_{\rm{D}} = \diag(\bm{W})$ is the diagonal weight matrix. Various recent simulation studies have shown that DWLS is favorable compared to WLS, ULS, as well as the ML-based methods for ordinal data~\citep{barendse2015using,li2016confirmatory}.
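As an aside, the threshold model of \citet{muthen1984general} above is easy to mimic in \textsf{R}; the following minimal sketch (with hypothetical thresholds, chosen for illustration only) discretizes a latent standard-normal variable into $c=4$ ordinal categories.
\begin{lstlisting}
set.seed(1)
x.star <- rnorm(1000)             # latent normal variable x*
tau <- qnorm(c(0.25, 0.5, 0.75))  # hypothetical thresholds tau_1 < tau_2 < tau_3
x <- cut(x.star, c(-Inf, tau, Inf), labels = FALSE)  # ordinal x with values 1,...,4
\end{lstlisting}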
\section{Method}
\label{sec:method}
In this section, we introduce the Gaussian copula factor model and propose a Bayesian inference procedure for this model. Then, we theoretically analyze the identifiability and prove the consistency of our procedure.
\subsection{Gaussian Copula Factor Model}\label{sec:model}
\begin{defi}[Gaussian Copula Factor Model] \label{def:GCFM}
Consider a latent random (factor) vector $\bm{\eta} = (\eta_1,\ldots,\eta_k)^T$, a response random vector $\bm{Z} = (Z_1,\ldots,Z_p)^T$ and an observed random vector $\bm{Y}=(Y_1,\ldots,Y_p)^T$, satisfying
\begin{gather}
\label{eq:GCFM_latent}
\bm{\eta} \sim \mathcal{N}(0,C), \\
\label{eq:GCFM_response}
\bm{Z} = \Lambda \bm{\eta} + \bm{\epsilon}, \\
\label{eq:GCFM_observe}
Y_j = F_j^{-1}\big(\Phi\big[Z_j/\sigma(Z_j)\big]\big), \forall j = 1,\ldots,p,
\end{gather}
with $C$ a correlation matrix over factors, $\Lambda = (\lambda_{ij})$ a $p \times k$ matrix of factor loadings ($k \leq p$), $\bm{\epsilon} \sim \mathcal{N}(0,D)$ residuals with $D = \diag (\sigma_1^2,\ldots,\sigma_p^2)$, $\sigma(Z_j)$ the standard deviation of $Z_j$, $\Phi(\cdot)$ the cumulative distribution function (CDF) of the standard Gaussian, and ${F_{j}}^{-1}(t) = \inf\{ x: F_{j}(x) \geq t\}$ the pseudo-inverse of a CDF $F_j(\cdot)$. Then this model is called a \emph{Gaussian copula factor model}.
\end{defi}
\begin{figure}[h]
\centering
\begin{tabular}{c}
\parbox[t]{0.5\textwidth}
\scalebox{0.9}{\centerline{\xymatrix @R=1.1em @C=1em{
*=<2.5em,2em>[F]{Y_1} & *=<3em,2em>[o][F]{Z_1} \ar[l] & & & & *=<3em,2em>[o][F]{Z_{5}} \ar[r] &
*=<2.5em,2em>[F]{Y_{5}} \\
& &
*=<3em,2em>[o][F]{\eta_1} \ar[ul] \ar@{<->}[rrdd] \ar@{<->}[dd] \ar@{<->}[rr]& &
*=<3em,2em>[o][F]{\eta_3} \ar[ur] \ar[r] \ar[rd] \ar@{<->}[dd] \ar@{<->}[lldd]&
*=<3em,2em>[o][F]{Z_{6}} \ar[r] &
*=<2.5em,2em>[F]{Y_{6}} \\
*=<2.5em,2em>[F]{Y_2}& *=<2.5em,2em>[o][F]{Z_2} \ar[l]
& & & & *=<3em,2em>[o][F]{Z_{7}} \ar[r] & *=<2.5em,2em>[F]{Y_{7}} \\
*=<2.5em,2em>[F]{Y_3}& *=<2.5em,2em>[o][F]{Z_3} \ar[l] &
*=<3em,2em>[o][F]{\eta_2} \ar[ld] \ar[lu] \ar[l] \ar@{<->}[rr] & &
*=<3em,2em>[o][F]{\eta_4} \ar[rd] \ar[r]&
*=<3em,2em>[o][F]{Z_{8}} \ar[r] &
*=<2.5em,2em>[F]{Y_{8}}
\\
*=<2.5em,2em>[F]{Y_4} &
*=<3em,2em>[o][F]{Z_{4}} \ar[l] & & & &
*=<3em,2em>[o][F]{Z_{9}} \ar[r] &
*=<2.5em,2em>[F]{Y_{9}} \\
}}}}
\end{tabular}
\caption{Gaussian copula factor model.}
\label{GaussianCopulaModelDemo}
\end{figure}
The model is also defined in~\citet{murray2013bayesian}, but the authors restrict the factors to be independent of each other while we allow for their interactions. Our model is a combination of a Gaussian factor model (from $\bm{\eta}$ to $\bm{Z}$) and a Gaussian copula model (from $\bm{Z}$ to $\bm{Y}$). The first part allows us to model the latent concepts that are measured by multiple indicators, and the second part provides a good way to model diverse types of variables (depending on $F_j(\cdot)$ in Equation~\ref{eq:GCFM_observe}, $Y_j$ can be either continuous or ordinal). Figure~\ref{GaussianCopulaModelDemo} shows an example of the model. Note that we allow the special case of a factor having a single indicator, e.g., $\eta_1 \rightarrow Z_1 \rightarrow Y_1$, because this allows us to incorporate other (explicit) variables (such as age and income) into our model. In this special case, we set $\lambda_{11} = 1$ and $\epsilon_1 = 0$, thus $Y_1 = F_1^{-1}(\Phi[\eta_1])$.
In the typical design for questionnaires, one tries to get a grip on a latent concept through a particular set of well-designed questions~\citep{martinez2006procedure,byrne2013structural}, which implies that a factor (latent concept) in our model is connected to multiple indicators (questions) while an indicator is only used to measure a single factor, as shown in Figure~\ref{GaussianCopulaModelDemo}. This kind of measurement model is called a \emph{pure measurement model} (Definition 8 in~\citet{silva2006learning}). Throughout this paper, we assume that all measurement models are pure, which indicates that there is only a single non-zero entry in each row of the factor loadings matrix $\Lambda$. This inductive bias about the sparsity pattern of $\Lambda$ is fully motivated by the typical design of a measurement model.
In what follows, we transform the Gaussian copula factor model into an equivalent model that is used for inference in the next subsection. We consider an integrated $(p + k)$-dimensional random vector $\bm{X} = (\bm{Z}^T, \bm{\eta}^T)^T$, which is still multivariate Gaussian, and obtain its covariance matrix
\begin{equation}
\label{eq:cov_X}
\Sigma = \begin{bmatrix}
\Lambda C \Lambda^T + D & \Lambda C \\
C \Lambda^T & C \\
\end{bmatrix} \:,
\end{equation}
and precision matrix
\begin{equation}
\label{eq:integratedPrecison}
\Omega = \Sigma^{-1} = \begin{bmatrix}
D^{-1} & -D^{-1} \Lambda \\
-\Lambda^T D^{-1} & C^{-1} + \Lambda^T D^{-1} \Lambda \\
\end{bmatrix} \:.
\end{equation}
Since $D$ is diagonal and $\Lambda$ only has one non-zero entry per row, $\Omega$ contains many intrinsic zeros. The sparsity pattern of such $\Omega = (\omega_{ij})$ can be represented by an undirected graph $G = (\bm{V}, \bm{E})$, where $(i,j) \not\in \bm{E}$ whenever $\omega_{ij} = 0$ by construction.
Then, a Gaussian copula factor model can be transformed into an equivalent model controlled by a single precision matrix $\Omega$, which in turn is constrained by $G$, i.e., $P(\bm{X}|C,\Lambda,D) = P(\bm{X}|\Omega_G)$.
\begin{defi}[$G$-Wishart Distribution]
Given an undirected graph $G = (\bm{V},\bm{E})$, a zero-constrained random matrix $\Omega$ has a $G$-Wishart distribution, if its density function is
$$
p(\Omega|G) = \frac{|\Omega|^{(\nu - 2)/2}}{I_G(\nu, \Psi)} \exp \bigg[-\frac{1}{2} \tr(\Psi \Omega)\bigg] \mathbbm{1}_{\Omega \in M^+(G)},
$$
with $M^+(G)$ the space of symmetric positive definite matrices with off-diagonal elements $\omega_{ij} = 0$ whenever $(i,j) \not\in \bm{E}$, $\nu$ the number of degrees of freedom, $\Psi$ a scale matrix, $I_G(\nu, \Psi)$ the normalizing constant, and $\mathbbm{1}$ the indicator function~\citep{roverato2002hyper}.
\end{defi}
The $G$-Wishart distribution is the conjugate prior of precision matrices $\Omega$ that are constrained by a graph $G$~\citep{roverato2002hyper}. That is, given the $G$-Wishart prior, i.e., $P(\Omega|G) = \Wish_G(\nu_0, \Psi_0)$ and data $\bm{X} = (\bm{x_1},\ldots,\bm{x_n})^T$ drawn from $\mathcal{N}(0,\Omega^{-1})$, the posterior for $\Omega$ is another $G$-Wishart distribution:
\begin{equation*}
\label{posteriorDistribution}
P(\Omega | G, \bm{X}) = \Wish_G (\nu_0 + n, \Psi_0 + \bm{X}^T \bm{X}).
\end{equation*}
When the graph $G$ is fully connected, the $G$-Wishart distribution reduces to a Wishart distribution~\citep{murphy2007conjugate}. Placing a $G$-Wishart prior on $\Omega$ is equivalent to placing an inverse-Wishart on $C$, a product of multivariate normals on $\Lambda$, and an inverse-gamma on the diagonal elements of $D$. With a diagonal scale matrix $\Psi_0$ and the number of degrees of freedom $\nu_0$ equal to the number of factors plus one, the implied marginal prior density of each interfactor correlation is uniform over $[-1,1]$~\citep{barnard2000modeling}.
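To illustrate the conjugacy, the following minimal \textsf{R} sketch draws one posterior sample of $\Omega$. It assumes the \textbf{BDgraph} package, whose \texttt{rgwish()} function samples from a $G$-Wishart distribution with degrees of freedom \texttt{b} and scale matrix \texttt{D}; the graph, prior, and data below are hypothetical.
\begin{lstlisting}
library(BDgraph)                 # assumed to provide rgwish()
set.seed(1)
adj <- matrix(0, 3, 3)           # adjacency matrix of G: the path 1 - 2 - 3
adj[1, 2] <- adj[2, 1] <- 1
adj[2, 3] <- adj[3, 2] <- 1
n <- 50
X <- matrix(rnorm(n * 3), n, 3)  # pseudo Gaussian data drawn under the prior graph
nu0 <- 4; Psi0 <- diag(3)        # prior degrees of freedom and scale matrix
Omega <- rgwish(n = 1, adj = adj, b = nu0 + n, D = Psi0 + t(X) %*% X)
\end{lstlisting}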
\subsection{Inference for Gaussian Copula Factor Model}
We first introduce the inference procedure for complete mixed data and incomplete Gaussian data respectively, based on which the procedure for mixed data with missing values is then derived. From this point on, we use $S$ to denote the correlation matrix over the response vector $\bm{Z}$.
\subsubsection{Mixed Data without Missing Values} \label{sec:infer_mixed_complete}
For a Gaussian copula model,~\citet{hoff2007extending} proposed a likelihood that only concerns the ranks among observations, which is derived as follows. Since the transformation $Y_j = F_j^{-1}\big(\Phi\big[Z_j\big]\big)$ is non-decreasing, observing $\bm{y}_j = (y_{1,j},\ldots,y_{n,j})^T$ implies a partial ordering on $\bm{z}_j = (z_{1,j},\ldots,z_{n,j})^T$, i.e., $\bm{z}_j$ lies in the space restricted by $\bm{y}_j$:
\[
\D(\bm{y}_j) = \{\bm{z}_j \in \mathbb{R}^n: y_{i,j} < y_{k,j} \Rightarrow z_{i,j} < z_{k,j}\} \:.
\]
Therefore, observing $\bm{Y}$ suggests that $\bm{Z}$ must be in
\[
\D(\bm{Y}) = \{\bm{Z} \in \mathbb{R}^{n \times p}: \bm{z}_j \in \D(\bm{y}_j), \forall j = 1,\ldots,p\} \:.
\]
Taking the occurrence of this event as the data, one can compute the following likelihood~\cite{hoff2007extending}
\begin{align*}
P(\bm{Z} \in \D(\bm{Y})|S,F_1,\ldots,F_p) = P(\bm{Z} \in \D(\bm{Y})|S).
\end{align*}
Following the same argumentation, the likelihood in our Gaussian copula factor model reads
\begin{equation*}
P(\bm{Z} \in \D(\bm{Y})|\bm{\eta},\Omega,F_1,\ldots,F_p) = P(\bm{Z} \in \D(\bm{Y})|\bm{\eta},\Omega), \:
\end{equation*}
which is independent of the margins $F_j$.
For the Gaussian copula factor model, inference for the precision matrix $\Omega$ of the vector $\bm{X} = (\bm{Z}^T, \bm{\eta}^T)^T$ can now proceed via construction of a Markov chain having its stationary distribution equal to $P(\bm{Z},\bm{\eta},\Omega|\bm{Z} \in \D(\bm{Y}),G)$, where we ignore the values for $\bm{\eta}$ and $\bm{Z}$ in our samples. The prior graph $G$ is uniquely determined by the sparsity pattern of the loading matrix $\Lambda = (\lambda_{ij})$ and the residual matrix $D$ (see Equation \ref{eq:integratedPrecison}), which in turn is uniquely decided by the pure measurement models. The Markov chain can be constructed by iterating the following three steps:
\begin{enumerate}
\item \textbf{Sample $\bm{Z}$}: $\bm{Z} \sim P(\bm{Z}|\bm{\eta},\bm{Z} \in \D(\bm{Y}),\Omega)$; \\
Since each coordinate $Z_j$ directly depends on only one factor, i.e., $\eta_q$ such that $\lambda_{jq} \neq 0$, we can sample each of them independently through
$ Z_j \sim P(Z_j|\eta_q,\bm{z}_j \in \D(\bm{y}_j),\Omega) $.
\item \textbf{Sample $\bm{\eta}$}: $\bm{\eta} \sim P(\bm{\eta}|\bm{Z},\Omega)$;
\item \textbf{Sample $\Omega$}: $\Omega \sim P(\Omega|\bm{Z},\bm{\eta},G)$.
\end{enumerate}
\subsubsection{Gaussian Data with Missing Values} \label{sec:infer_Gaussian}
Suppose that we have Gaussian data $\bm{Z}$ consisting of two parts, $\bm{Z}_{obs}$ and $\bm{Z}_{miss}$, denoting observed and missing values in $\bm{Z}$ respectively. The inference for the correlation matrix of $\bm{Z}$ in this case can be done via the so-called data augmentation technique that is also a Markov chain Monte Carlo procedure and has been proven to be consistent under MAR~\citep{schafer1997analysis}. This approach iterates the following two steps to impute missing values (Step 1) and draw correlation matrix samples from the posterior (Step 2):
\begin{enumerate}
\item $\bm{Z}_{miss} \sim P(\bm{Z}_{miss}|\bm{Z}_{obs},S)$ ;
\item $S \sim P(S|\bm{Z}_{obs},\bm{Z}_{miss})$.
\end{enumerate}
\subsubsection{Mixed Data with Missing Values}
For the most general case of mixed data with missing values, we combine the procedures of Sections~\ref{sec:infer_mixed_complete} and~\ref{sec:infer_Gaussian} into the following four-step inference procedure:
\begin{enumerate}
\item $\bm{Z}_{obs} \sim P(\bm{Z}_{obs}|\bm{\eta},\bm{Z}_{obs} \in \D(\bm{Y}_{obs}),\Omega)$;
\item $\bm{Z}_{miss} \sim P(\bm{Z}_{miss}|\bm{\eta},\bm{Z}_{obs},\Omega)$;
\item $\bm{\eta} \sim P(\bm{\eta}|\bm{Z}_{obs},\bm{Z}_{miss},\Omega)$;
\item $\Omega \sim P(\Omega|\bm{Z}_{obs},\bm{Z}_{miss},\bm{\eta},G)$.
\end{enumerate}
A Gibbs sampler that achieves this Markov chain is summarized in Algorithm~\ref{GS_GCFM} and implemented in \textsf{R}.\footnote{The code, including that used in the simulations and the real-world application, is available at \url{https://github.com/cuiruifei/CopulaFactorModel}.} Note that we put Step 1 and Step 2 together in the actual implementation since they share some common computations (lines 2 - 4). The difference between the two steps is that the values in Step 1 are drawn from a space restricted by the observed data (lines 5 - 13) while the values in Step 2 are drawn from an unrestricted space (lines 14 - 17). Another important point is that we need to re-center the data such that the mean of each coordinate of $\bm{Z}$ is zero (line 20). This is necessary for the algorithm to be sound because the mean may shift when missing values depend on the observed data (MAR).
\begin{algorithm}[!t]
\caption{Gibbs sampler for Gaussian copula factor model with missing values}
\label{GS_GCFM}
\begin{algorithmic}[1]
\REQUIRE Prior graph $G$, observed data $\bm{Y}$. \\
\# \textbf{Step 1} and \textbf{Step 2}:
\FOR{$j \in \{1,\ldots,p\}$}
\STATE $q=$ factor index of $Z_j$
\STATE $ a = \Sigma_{[j,q+p]} / \Sigma_{[q+p,q+p]}$
\STATE $\sigma_j^2 = \Sigma_{[j,j]}-a \times \Sigma_{[q+p,j]}$ \\
\# \textbf{Step 1}: $\bm{Z}_{obs} \sim P(\bm{Z}_{obs}|\bm{\eta},\bm{Z}_{obs} \in \D(\bm{Y}_{obs}),\Omega)$
\FOR{$y \in \unique \{y_{1,j},\ldots,y_{n,j}\}$}
\STATE $z_l = \max\{z_{i,j}:y_{i,j}<y\}$
\STATE $z_u = \min\{z_{i,j}:y<y_{i,j}\}$
\FOR{$i$ such that $\: y_{i,j} = y$}
\STATE $\mu_{i,j} = \bm{\eta}_{[i,q]} \times a$
\STATE $u_{i,j} \sim \mathcal{U}\big(\Phi\big[\frac{z_l-\mu_{i,j}}{\sigma_j}\big],\Phi\big[\frac{z_u-\mu_{i,j}}{\sigma_j}\big]\big)$
\STATE $z_{i,j} = \mu_{i,j} + \sigma_j \times \Phi^{-1}(u_{i,j})$
\ENDFOR
\ENDFOR \\
\# \textbf{Step 2}: $\bm{Z}_{miss} \sim P(\bm{Z}_{miss}|\bm{\eta},\bm{Z}_{obs},\Omega)$
\FOR{$i$ such that $y_{i,j} \in \bm{Y}_{miss}$}
\STATE $\mu_{i,j} = \bm{\eta}_{[i,q]} \times a$
\STATE $z_{i,j} \sim \mathcal{N}(\mu_{i,j}, \sigma_j^2)$
\ENDFOR
\ENDFOR
\STATE $\bm{Z} = (\bm{Z}_{obs}, \bm{Z}_{miss})$
\STATE $\bm{Z} = (\bm{Z}^T - \bm{\mu})^T$, with $\bm{\mu}$ the mean vector of $\bm{Z}$ \\
\# \textbf{Step 3}: $\bm{\eta} \sim P(\bm{\eta}|\bm{Z},\Omega)$
\STATE $A = \Sigma_{[\bm{\eta},\bm{Z}]}\Sigma_{[\bm{Z},\bm{Z}]}^{-1}$
\STATE $B = \Sigma_{[\bm{\eta},\bm{\eta}]}-A\Sigma_{[\bm{Z},\bm{\eta}]}$
\FOR{$i \in \{1,\ldots,n\}$}
\STATE $\bm{\mu}_i = (\bm{Z}_{[i,:]}A^T)^T$
\STATE $\bm{\eta}_{[i,:]} \sim \mathcal{N}(\bm{\mu}_i, B)$
\ENDFOR
\STATE $\bm{\eta}_{[:,j]} = \bm{\eta}_{[:,j]} \times \sign(\Cov{\bm{\eta}_{[:,j]}, \bm{Z}_{[:,f(j)]}}), \: \forall j$, where $f(j)$ is the index of the first indicator of $\eta_j$. \\
\# \textbf{Step 4}: $\Omega \sim P(\Omega|\bm{Z},\bm{\eta},G)$
\STATE $\bm{X} = (\bm{Z}, \bm{\eta})$
\STATE $\Omega \sim \Wish_G(\nu_0 + n, \Psi_0 + \bm{X}^T\bm{X})$
\STATE $\Sigma = \Omega^{-1}$
\STATE $\Sigma_{ij} = \Sigma_{ij}/\sqrt{\Sigma_{ii}\Sigma_{jj}}, \forall i,j$
\end{algorithmic}
\end{algorithm}
By iterating the steps in Algorithm~\ref{GS_GCFM}, we can draw correlation matrix samples over the integrated random vector $\bm{X}$, denoted by $\{\Sigma^{(1)},\ldots, \Sigma^{(m)}\}$. The mean over all the samples is a natural estimate of the true $\Sigma$, i.e.,
\begin{equation}
\hat{\Sigma} = \dfrac{1}{m}\sum_{i = 1}^{m} \Sigma^{(i)} \:.\label{eq:Sigma}
\end{equation}
Based on Equations (\ref{eq:cov_X}) and (\ref{eq:Sigma}), we obtain estimates of the parameters of interests:
\begin{align}
&\hat{C} = \hat{\Sigma}_{[\bm{\eta}, \bm{\eta}]}; \nonumber \\
&\hat{\Lambda} = \hat{\Sigma}_{[\bm{Z}, \bm{\eta}]} \hat{C}^{-1}\: ; \\
&\hat{D} = \hat{S} - \hat{\Lambda}\hat{C}\hat{\Lambda}^T, \mbox{~~with~~} \hat{S} = \hat{\Sigma}_{[\bm{Z}, \bm{Z}]} \nonumber \:.
\end{align}
We refer to this procedure as a \emph{Bayesian Gaussian copula factor approach} (BGCF).
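As a sanity check on these estimators, the following minimal \textsf{R} sketch constructs $\Sigma$ from hypothetical parameters via Equation~(\ref{eq:cov_X}) and recovers them exactly; the dimensions and parameter values are chosen for illustration only.
\begin{lstlisting}
p <- 4; k <- 2                      # hypothetical dimensions
Lambda <- matrix(0, p, k)           # pure measurement model:
Lambda[1:2, 1] <- 0.7               #   two indicators per factor
Lambda[3:4, 2] <- 0.7
C <- matrix(c(1, 0.3, 0.3, 1), 2)   # interfactor correlation matrix
D <- diag(0.51, p)                  # residual variances
Sigma <- rbind(cbind(Lambda %*% C %*% t(Lambda) + D, Lambda %*% C),
               cbind(C %*% t(Lambda), C))
eta <- (p + 1):(p + k)              # indices of the factor block
C.hat <- Sigma[eta, eta]
Lambda.hat <- Sigma[1:p, eta] %*% solve(C.hat)
D.hat <- Sigma[1:p, 1:p] - Lambda.hat %*% C.hat %*% t(Lambda.hat)
\end{lstlisting}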
\subsection{Theoretical Analysis}
\paragraph{Identifiability of $C$} Without additional constraints, $C$ is non-identifiable~\citep{anderson1956statistical}. More precisely, given a decomposable matrix $S = \Lambda C \Lambda^T + D$, we can always replace $\Lambda$ with $\Lambda U$ and $C$ with $U^{-1} C U^{-T}$ to obtain an equivalent decomposition $S = (\Lambda U)(U^{-1} C U^{-T})(U^T \Lambda ^T) + D$, where $U$ is a $k \times k$ invertible matrix. Since $\Lambda$ only has one non-zero entry per row in our model, $U$ can only be diagonal to ensure that $\Lambda U$ has the same sparsity pattern as $\Lambda$ (see Lemma~\ref{lemm:lambda} in Appendix). Thus, from the same $S$, we get a class of solutions for $C$, i.e., $U^{-1} C U^{-1}$, where $U$ can be any invertible diagonal matrix. In order to get a unique solution for $C$, we impose two sufficient identifying conditions: 1) restrict $C$ to be a correlation matrix; 2) force the first non-zero entry in each column of $\Lambda$ to be positive. See Lemma~\ref{lemm:identifiability} in Appendix for proof. Condition 1 is implemented via line 31 in Algorithm~\ref{GS_GCFM}. As for the second condition, we force the covariance between a factor and its first indicator to be positive (line 27), which is equivalent to Condition 2. Note that these conditions are not unique; one could choose one's favorite conditions to identify $C$, e.g., setting the first loading to 1 for each factor. The reason for our choice of conditions is to keep it consistent with our model definition where $C$ is a correlation matrix.
\paragraph{Identifiability of $\Lambda$ and $D$} Under the two conditions for identifying $C$, factor loadings $\Lambda$ and residual variances $D$ are also identified except for the case in which there exists one factor that is independent of all the others and this factor only has two indicators. For such a factor, we have 4 free parameters (2 loadings, 2 residuals) while we only have 3 available equations (2 variances, 1 covariance), which yields an underdetermined system. See Lemmas~\ref{lemm:identifiability_lambda} and~\ref{lemm:identifiability_D} in Appendix for detailed analysis. Once this happens, one could put additional constraints to guarantee a unique solution, e.g., by setting the variance of the first residual to zero. However, we would recommend to leave such an independent factor out (especially in association analysis) or study it separately from the other factors.
Under sufficient conditions for identifying $C$, $\Lambda$, and $D$, our BGCF approach is consistent even with MCAR missing values. This is shown in Theorem~\ref{thm:consistency}, whose proof is provided in Appendix.
\begin{thm}[Consistency of the BGCF Approach]
\label{thm:consistency}
Let $\bm{Y}_n=(\bm{y}_1,\ldots,\bm{y}_n)^T$ be independent observations drawn from a Gaussian copula factor model. If $\bm{Y}_n$ is complete (no missing data) or contains missing values that are missing completely at random, then
\begin{gather*}
\lim\limits_{n \to\infty} P\big(\hat{C}_n = C_0\big) = 1 \:, \\
\lim\limits_{n \to\infty} P\big(\hat{\Lambda}_n = \Lambda_0\big) = 1 \:, \\
\lim\limits_{n \to\infty} P\big(\hat{D}_n = D_0\big) = 1 \:,
\end{gather*}
where $\hat{C}_n$, $\hat{\Lambda}_n$, and $\hat{D}_n$ are parameters learned by BGCF, while $C_0$, $\Lambda_0$, and $D_0$ are the true ones.
\end{thm}
\section{Simulation Study} \label{sec:simulation}
In this section, we compare our BGCF approach with alternative approaches via simulations.
\subsection{Setup}
\paragraph{Model specification} Following typical simulation studies on CFA models in the literature~\citep{yang2010confirmatory,li2016confirmatory}, we consider a correlated 4-factor model in our study. Each factor is measured by 4 indicators, since~\citet{marsh1998more} concluded that the accuracy of parameter estimates appeared to be optimal when the number of indicators per factor was four and marginally improved as the number increased. The interfactor correlations (off-diagonal elements of the correlation matrix $C$ over factors) are randomly drawn from [0.2, 0.4], which is considered a reasonable and empirical range in the applied literature~\citep{li2016confirmatory}. For the ease of reproducibility, we construct our $C$ as follows.
\begin{minipage}{\linewidth}
\begin{lstlisting}
set.seed(12345)                            # for reproducibility
C <- matrix(runif(4^2, 0.2, 0.4), ncol=4)  # entries drawn uniformly from [0.2, 0.4]
C <- (C*lower.tri(C)) + t(C*lower.tri(C))  # symmetrize via the lower triangle
diag(C) <- 1                               # unit diagonal: C is a correlation matrix
\end{lstlisting}
\end{minipage}
In the majority of empirical research and simulation studies~\citep{distefano2002impact}, reported standardized factor loadings range from 0.4 to 0.9. To facilitate interpretability and, again, reproducibility, each factor loading is set to 0.7. Each corresponding residual variance is then automatically set to 0.51 under a standardized solution in the population model, as done in~\citet{li2016confirmatory}.
\paragraph{Data generation} Given the specified model, one can generate data in the response space (the $\bm{Z}$ in Definition~\ref{def:GCFM}) via Equations \eqref{eq:GCFM_latent} and \eqref{eq:GCFM_response}. When the observed data (the $\bm{Y}$ in Definition~\ref{def:GCFM}) are ordinal, we discretize the corresponding margins into the desired number of categories. When the observed data are nonparanormal, we set the $F_j(\cdot)$ in Equation~\eqref{eq:GCFM_observe} to the CDF of a $\chi^2$-distribution with degrees of freedom \textit{df}. The reason for choosing a $\chi^2$-distribution is that we can easily use \textit{df} to control the extent of non-normality: a higher \textit{df} implies a distribution closer to a Gaussian. To fill in a certain percentage $\beta$ of missing values (we only consider MAR), we follow the procedure in~\citet{kolar2012estimating}, i.e.,
for $j = 1,\ldots,\lfloor p/2 \rfloor$, $i = 1,\ldots,n$: $y_{i,2*j}$ is missing if $z_{i,2*j-1} < \Phi^{-1}(2*\beta)$.
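In \textsf{R}, this mechanism amounts to the following sketch, where \texttt{Y} holds the observed data and \texttt{Z} the corresponding responses (both hypothetical $n \times p$ matrices):
\begin{lstlisting}
make.MAR <- function(Y, Z, beta) {
  for (j in seq_len(floor(ncol(Y) / 2))) {
    Y[Z[, 2*j - 1] < qnorm(2*beta), 2*j] <- NA  # threshold Phi^{-1}(2*beta)
  }
  Y  # roughly a fraction beta of all entries is now missing
}
\end{lstlisting}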
\paragraph{Evaluation metrics} We use average relative bias (ARB) and root mean squared error (RMSE) to examine the parameter estimates, which are defined as
\begin{gather*}
\arb = \dfrac{1}{r}\sum_{i = 1}^{r} \dfrac{\hat{\theta_i} - \theta_i}{\theta_i}\:, \:\:
\rmse = \sqrt{\dfrac{1}{r}\sum_{i = 1}^{r} (\hat{\theta_i} - \theta_i)^2} \:,
\end{gather*}
where $\hat{\theta}_i$ and $\theta_i$ represent the estimated and true values, respectively. An ARB value less than 5\% is
interpreted as a \textit{trivial} bias, between 5\% and 10\% as a \textit{moderate} bias, and greater than 10\% as a \textit{substantial} bias~\citep{curran1996robustness}. Note that ARB describes an overall picture of average bias,
in which biases in the positive and the negative direction may cancel each other out. A smaller absolute value of ARB indicates better performance on average.
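Both metrics are one-liners in \textsf{R}; for concreteness:
\begin{lstlisting}
arb  <- function(est, true) mean((est - true) / true)
rmse <- function(est, true) sqrt(mean((est - true)^2))
\end{lstlisting}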
\subsection{Ordinal Data without Missing Values}
In this subsection, we consider ordinal complete data since this matches the assumptions of the diagonally weighted least squares (DWLS) method, in which we set the number of ordinal categories to be 4. We also incorporate the robust maximum likelihood (MLR) as an alternative approach, which was shown to be empirically tenable when the number of categories is more than 5~\citep{rhemtulla2012can,li2016confirmatory}. See Section~\ref{sec:background} for details of the two approaches.
Before conducting comparisons, we first check the convergence property of the Gibbs sampler used in our BGCF approach. Figure~\ref{fig:convergence} shows the RMSE of estimated interfactor correlations (left panel) and factor loadings (right panel) over 100 iterations for a randomly-drawn sample with sample size $n=500$. We see quite a good convergence of the Gibbs sampler, in which the burn-in period is only around 10. More experiments done for different numbers of categories and different random samples show that the burn-in is less than 20 on the whole across various conditions.
\begin{figure}[h]
\centering
\includegraphics[scale=0.54]{convergence}
\caption{Convergence property of our Gibbs sampler over 100 iterations. Left panel: RMSE of interfactor correlations; Right panel: RMSE of factor loadings.}
\label{fig:convergence}
\end{figure}
Now we evaluate the three approaches. Figure~\ref{fig:complete} shows the performance of BGCF, DWLS, and MLR over different sample sizes $n \in \{100, 200, 500, 1000\}$, providing the mean of ARB (left panel) and the mean of RMSE with 95\% confidence interval (right panel) over 100 experiments. From Figure~\ref{fig:corr_complete}, interfactor correlations are, on average, trivially biased (within the two dashed lines) for all three methods, which in turn give indistinguishable RMSE regardless of sample size. From Figure~\ref{fig:loadings_complete}, MLR moderately underestimates the factor loadings and performs worse than DWLS w.r.t. RMSE, especially for larger sample sizes, which confirms the conclusions of previous studies~\citep{barendse2015using,li2016confirmatory}. Most importantly, our BGCF approach outperforms DWLS in learning factor loadings, especially for small sample sizes, even though the experimental conditions entirely match the assumptions of DWLS.
\begin{figure}[t]
\centering
\subfloat[Interfactor Correlations]{\includegraphics[scale=0.7]{k4_complete_corr}\label{fig:corr_complete}}
\hfill
\subfloat[Factor Loadings]{\includegraphics[scale=0.7]{k4_complete_loadings}\label{fig:loadings_complete}}
\caption{Results obtained by the Bayesian Gaussian copula factor (BGCF) approach, the diagonally weighted least squares (DWLS), and the robust maximum likelihood (MLR) on complete ordinal data (4 categories) over different sample sizes, showing the mean of ARB (left panel) and the mean of RMSE with 95\% confidence interval (right panel) over 100 experiments for (a) interfactor correlations and (b) factor loadings, where dashed lines and dotted lines in left panels denote $\pm 5\%$ and $\pm 10\%$ bias respectively.}
\label{fig:complete}
\end{figure}
\subsection{Mixed Data with Missing Values}
In this subsection, we consider mixed nonparanormal and ordinal data with missing values, since some latent variables in real-world applications are measured by sensors that usually produce continuous but not necessarily Gaussian data. The 8 indicators of the first 2 factors (4 per factor) are transformed into a $\chi^2$-distribution with $df = 8$, which yields a slightly-nonnormal distribution (skewness is 1, excess kurtosis is 1.5)~\citep{li2016confirmatory}. The 8 indicators of the last 2 factors are discretized into ordinal with 4 categories.
One alternative approach in such cases is DWLS with pairwise-deletion (PD), in which heterogeneous correlations (Pearson correlations between numeric variables, polyserial correlations between numeric and ordinal variables, and polychoric correlations between ordinal variables) are first computed based on pairwise complete observations, and then DWLS is used to estimate model parameters. A second alternative concerns the full information maximum likelihood (FIML)~\citep{arbuckle1996full,rosseel2012lavaan}, which first applies an EM algorithm to impute missing values and then uses MLR to learn model parameters.
Figure~\ref{fig:incomplete} shows the performance of BGCF, DWLS with PD, and FIML for $n = 500$ over different percentages of missing values $\beta \in \{0\%, 10\%, 20\%, 30\%\}$. First, despite a good performance with complete data ($\beta = 0\%$), DWLS (with PD) deteriorates significantly with an increasing percentage of missing values, especially for factor loadings, while BGCF and FIML degrade much more gracefully. Second, our BGCF approach overall outperforms FIML: the two are indistinguishable for interfactor correlations, but BGCF is better for factor loadings.
\begin{figure}[t]
\centering
\subfloat[Interfactor Correlations]{\includegraphics[scale=0.7]{k4_incomplete_corr}\label{fig:corr_incomplete}}
\hfill
\subfloat[Factor Loadings]{\includegraphics[scale=0.7]{k4_incomplete_loadings}\label{fig:loadings_incomplete}}
\caption{Results for $n = 500$ obtained by BGCF, DWLS with pairwise-deletion, and the full information maximum likelihood (FIML) on mixed nonparanormal (\textit{df} = 8) and ordinal (4 categories) data with different percentages of missing values, for the same experiments as in Figure~\ref{fig:complete}.}
\label{fig:incomplete}
\end{figure}
Two more experiments are provided in Appendix. One concerns incomplete ordinal data with different numbers of categories, showing that BGCF is substantially favorable over DWLS (with PD) and FIML for learning factor loadings, which becomes more prominent with a smaller number of categories. Another one considers incomplete nonparanormal data with different extents of deviation from a Gaussian, which indicates that FIML is rather sensitive to the deviation and only performs well for a slightly-nonnormal distribution while the deviation has no influence on BGCF at all. See Appendix for more details.
\section{Application to Real-world Data} \label{sec:application}
In this section, we illustrate our approach on the `Holzinger \& Swineford 1939' dataset~\citep{holzinger1939study}, a classic dataset widely used in the literature and publicly available in the \textsf{R} package \textbf{lavaan}~\citep{rosseel2012lavaan}. The dataset consists of mental ability test scores of 301 students, of which we focus on 9 out of the original 26 tests, as done in~\citet{rosseel2012lavaan}. A latent variable model that is often proposed to explore these 9 variables is the correlated 3-factor model shown in Figure~\ref{fig:HS_path}, where we rename the observed variables to ``Y1, Y2, \ldots, Y9'' for simplicity in visualization and for consistency with our definition of observed variables (Definition~\ref{def:GCFM}). The interpretation of these variables is given in the following list.
\begin{itemize}
\item Y1: Visual perception;
\item Y2: Cubes;
\item Y3: Lozenges;
\item Y4: Paragraph comprehension;
\item Y5: Sentence completion;
\item Y6: Word meaning;
\item Y7: Speeded addition;
\item Y8: Speeded counting of dots;
\item Y9: Speeded discrimination of straight and curved capitals.
\end{itemize}
\begin{figure*}[t]
\centering
\begin{tabular}{c}
\parbox[t]{0.5\textwidth}
\scalebox{1.1}{\centerline{\xymatrix @R=2em @C=3em{
*=<2.5em,2em>[F]{Y_1} \ar@{<->}@(ul,ur)^{0.42} & *=<2.5em,2em>[F]{Y_2} \ar@{<->}@(ul,ur)^{0.83} & *=<2.5em,2em>[F]{Y_3} \ar@{<->}@(ul,ur)^{0.68} & & *=<2.5em,2em>[F]{Y_4} \ar@{<->}@(ru,rd)^{0.29}\\
& *=<4em,2em>[o][F]{visual} \ar[lu]|-{0.76} \ar[u]|-{0.41} \ar[ru]|-{0.57} \ar[rr]^{0.44} \ar[dd]^{0.47} & & *=<4em,2em>[o][F]{textual} \ar[ru]|-{0.84} \ar[r]|-{0.87} \ar[rd]|-{0.84} \ar[ll] \ar[lldd]^{0.28} & *=<2.5em,2em>[F]{Y_5} \ar@{<->}@(ru,rd)^{0.25} \\
*=<2.5em,2em>[F]{Y_7} \ar@{<->}@(lu,ld)_{0.67} & & & & *=<2.5em,2em>[F]{Y_6} \ar@{<->}@(ru,rd)^{0.30} \\
*=<2.5em,2em>[F]{Y_8} \ar@{<->}@(lu,ld)_{0.48} & *=<4em,2em>[o][F]{speed} \ar[lu]|-{0.58} \ar[l]|-{0.72} \ar[ld]|-{0.66} \ar[uu] \ar[rruu] & & & \\
*=<2.5em,2em>[F]{Y_9} \ar@{<->}@(lu,ld)_{0.57} & & & &
}}}}
\end{tabular}
\caption{Path diagram for the Holzinger \& Swineford data, in which latent variables are in ovals while observed variables are in squares, bidirected edges between latent variables denote correlation coefficients (interfactor correlations), directed edges denote factor loadings, and self-referring arrows denote residual variance, respectively. The edge weights in the graph are the model parameters learned by our BGCF approach.}
\label{fig:HS_path}
\end{figure*}
The summary of the 9 variables in this dataset is provided in Table~\ref{tab:summary_HS}, showing the number of unique values, skewness, and (excess) kurtosis for each variable. From the column of unique values, we notice that the data are approximately continuous. The averages of absolute skewness and absolute excess kurtosis over the 9 variables are around 0.40 and 0.54 respectively, which is considered to be slightly nonnormal~\citep{li2016confirmatory}. Therefore, we choose MLR as the alternative to be compared with our BGCF approach, since these conditions match the assumptions of MLR.
\begin{table}[h]
\caption{The number of unique values, skewness, and (excess) kurtosis of each variable in the `HolzingerSwineford1939' dataset.}
\label{tab:summary_HS}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{cccc}
\toprule
Variables & Unique Values & Skewness & Kurtosis \\
\midrule
Y1 & 35 & -0.26 & 0.33 \\
Y2 & 25 & 0.47 & 0.35 \\
Y3 & 35 & 0.39 & -0.89 \\
Y4 & 20 & 0.27 & 0.10 \\
Y5 & 25 & -0.35 & -0.54 \\
Y6 & 40 & 0.86 & 0.84 \\
Y7 & 97 & 0.25 & -0.29 \\
Y8 & 84 & 0.53 & 1.20 \\
Y9 & 129 & 0.20 & 0.31 \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
We run our Bayesian Gaussian copula factor approach on this dataset. The learned parameter estimates are shown in Figure~\ref{fig:HS_path}, in which interfactor correlations are on the bidirected edges, factor loadings are on the directed edges, and the residual variance of each variable is attached to the self-referring arrows. The parameters learned by the MLR approach are not shown here: since the ground truth is unknown, a direct comparison between the two sets of estimates would not be informative.
In order to compare the BGCF approach with MLR quantitatively, we consider answering the question: ``What is the value of $Y_j$ when we observe the values of the other variables, denoted by $\bm{Y}_{\backslash j}$, given the population model structure in Figure~\ref{fig:HS_path}?".
This is a regression problem, but with additional constraints imposed by the population model structure. The difference from a traditional regression problem is that we should learn the regression coefficients from the model-implied covariance matrix rather than from the sample covariance matrix over the observed variables.
\begin{itemize}
\item For MLR, we first learn the model parameters on the training set, from which we extract the linear regression intercept and coefficients of $Y_j$ on $\bm{Y}_{\backslash j}$. Then we predict the value of $Y_j$ based on the values of $\bm{Y}_{\backslash j}$. See Algorithm~\ref{alg:MLR_reg} for pseudo code of this procedure.
\item For BGCF, we first estimate the correlation matrix $\hat{S}$ over response variables (the $\bm{Z}$ in Definition~\ref{def:GCFM}) and the empirical CDF $\hat{F}_j$ of $Y_j$ on the training set. Then we draw latent Gaussian data $Z_j$ given $\hat{S}$ and $\bm{Y}_{\backslash j}$, i.e., from $P(Z_j |\hat{S}, \bm{Z}_{\backslash j} \in \mathcal{D}(\bm{Y}_{\backslash j}))$. Lastly, we obtain the value of $Y_j$ from $Z_j$ via $\hat{F}_j$, i.e., $Y_j = \hat{F}_j^{-1} \big(\Phi[Z_j]\big)$. See Algorithm~\ref{alg:BGCF_reg} for pseudo code of this procedure. Note that in the actual implementation we iterate the prediction stage (lines 7-8) multiple times to obtain multiple draws of $Y_j^{(new)}$; the average over these draws is then taken as the final predicted value of $Y_j^{(new)}$. This idea is quite similar to multiple imputation. A runnable sketch of both procedures is given after Algorithm~\ref{alg:BGCF_reg} below.
\end{itemize}
\begin{algorithm}[H]
\caption{Pseudo code of MLR for regression.}
\label{alg:MLR_reg}
\begin{algorithmic}[1]
\STATE \textbf{Input:} $\bm{Y}^{(train)}$ and $\bm{Y}_{\backslash j}^{(new)}$.
\STATE \textbf{Output:} $Y_j^{(new)}$.
\STATE \textbf{Training Stage:}
\STATE Fit the model using MLR on $\bm{Y}^{(train)}$;
\STATE Extract the model-implied covariance matrix from the fitted model, denoted by $\hat{S}$;
\STATE Extract regression coefficients $\bm{b}$ of $Y_j$ on $\bm{Y}_{\backslash j}$ from $\hat{S}$, that is, $\bm{b} = \hat{S}_{[\backslash j,\backslash j]}^{-1} \hat{S}_{[\backslash j,j]}$;
\STATE Obtain the regression intercept $b_0$, that is, \\ $b_0 = \E (Y_j^{(train)}) - \bm{b} \cdot \E (\bm{Y}_{\backslash j}^{(train)})$.
\STATE \textbf{Prediction Stage:}
\STATE $Y_j^{(new)} = b_0 + \bm{b} \cdot \bm{Y}_{\backslash j}^{(new)}$.
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[H]
\caption{Pseudo code of BGCF for regression.}
\label{alg:BGCF_reg}
\begin{algorithmic}[1]
\STATE \textbf{Input:} $\bm{Y}^{(train)}$ and $\bm{Y}_{\backslash j}^{(new)}$.
\STATE \textbf{Output:} $Y_j^{(new)}$.
\STATE \textbf{Training Stage:}
\STATE Apply BGCF to learn the correlation matrix over response variables, i.e., $\hat{S} = \hat{\Sigma}_{[\bm{Z}, \bm{Z}]}$;
\STATE Learn the empirical cumulative distribution function of $Y_j$, denoted by $\hat{F}_j$.
\STATE \textbf{Prediction Stage:}
\STATE Sample $Z_j^{(new)}$ from $P(Z_j^{(new)} |\hat{S}, \bm{Z}_{\backslash j} \in \mathcal{D}(\bm{Y}_{\backslash j}))$;
\STATE Obtain $Y_j^{(new)}$, i.e., $Y_j^{(new)} = \hat{F}_j^{-1} \big(\Phi[Z_j^{(new)}]\big)$.
\end{algorithmic}
\end{algorithm}
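To make the two prediction stages concrete, the following is a minimal Python sketch of the core linear algebra in Algorithms~\ref{alg:MLR_reg} and~\ref{alg:BGCF_reg}; it is not the implementation used in our experiments. It assumes that the model-implied (respectively BGCF-estimated) covariance/correlation matrix \texttt{S\_hat}, the vector of training means \texttt{mu\_hat}, latent Gaussian values \texttt{z\_rest} for the predictors, and the empirical quantile function \texttt{F\_j\_inverse} (standing for $\hat{F}_j^{-1}$) have already been computed; all names are hypothetical.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def regress_from_covariance(S_hat, mu_hat, j):
    # Algorithm 1, lines 6-7: coefficients and intercept of Y_j on the
    # remaining variables, read off from the model-implied covariance.
    S_hat, mu_hat = np.asarray(S_hat), np.asarray(mu_hat)
    idx = [i for i in range(S_hat.shape[0]) if i != j]
    b = np.linalg.solve(S_hat[np.ix_(idx, idx)],
                        S_hat[np.ix_(idx, [j])]).ravel()
    b0 = mu_hat[j] - b @ mu_hat[idx]
    return b0, b

def copula_predict(S_hat, z_rest, j, F_j_inverse, n_draws=100, seed=0):
    # Algorithm 2, lines 7-8: draw Z_j from the conditional Gaussian given
    # the latent values z_rest, push each draw through Phi and the
    # empirical quantile function, and average (multiple-imputation style).
    rng = np.random.default_rng(seed)
    S_hat = np.asarray(S_hat)
    idx = [i for i in range(S_hat.shape[0]) if i != j]
    w = np.linalg.solve(S_hat[np.ix_(idx, idx)],
                        S_hat[np.ix_(idx, [j])]).ravel()
    cond_mean = w @ np.asarray(z_rest)
    cond_var = S_hat[j, j] - w @ S_hat[np.ix_(idx, [j])].ravel()
    draws = rng.normal(cond_mean, np.sqrt(cond_var), size=n_draws)
    return np.mean([F_j_inverse(norm.cdf(z)) for z in draws])
\end{verbatim}
\noindent The intercept and coefficients in \texttt{regress\_from\_covariance} implement lines 6--7 of Algorithm~\ref{alg:MLR_reg}; the conditional mean and variance in \texttt{copula\_predict} are the standard Gaussian conditioning formulas applied to $\hat{S}$.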
\begin{figure*}
\centering
\includegraphics[scale = 0.9]{HS_MSE}
\caption{MSE obtained by BGCF and MLR when we take each $Y_j$ as outcome variable (the others as predictors) alternately, showing the mean over 100 experiments (10 times 10-fold cross validation) with error bars representing a standard error.}
\label{fig:HS_MSE}
\end{figure*}
The mean squared error (MSE) is used to evaluate the prediction accuracy, where we repeat 10-fold cross validation 10 times (thus 100 MSE estimates in total). Also, we take each $Y_j$ as the outcome variable alternately while treating the others as predictors (thus 9 tasks in total). Figure~\ref{fig:HS_MSE} provides the results of BGCF and MLR for all 9 tasks, showing the mean of MSE over the 100 estimates with one standard error represented by error bars. We see that BGCF outperforms MLR for Tasks 5 and 6, while the two perform indistinguishably for the other tasks. The advantage of BGCF over MLR is encouraging, considering that the experimental conditions match the assumptions of MLR. Further experiments (not shown), in which we make the data moderately or substantially nonnormal, suggest that BGCF is then significantly favorable over MLR, as expected.
\section{Summary and Discussion} \label{sec:conclusion}
In this paper, we proposed a novel Bayesian Gaussian copula factor (BGCF) approach for learning parameters of CFA models that can handle mixed continuous and ordinal data with missing values. We analyzed the separate identifiability of interfactor correlations $C$, factor loadings $\Lambda$, and residual variances $D$, since different researchers may care about different parameters. For instance, identifying $C$ is sufficient for researchers interested in learning causal relations among latent variables~\citep{silva2006bayesian,silva2006learning,cui2016copula}, with no need to worry about the additional conditions required to identify $\Lambda$ and $D$. Under sufficient identification conditions, we proved that our approach is consistent for MCAR data and empirically showed that it also works quite well for MAR data.
In the experiments, our approach outperforms DWLS even under the assumptions of DWLS. Apparently, the approximations inherent in DWLS, such as the use of the polychoric correlation and its asymptotic covariance, incur a small loss in accuracy compared to an integrated approach like BGCF. When the data follow a more complicated distribution and contain missing values, the advantage of BGCF over its competitors becomes more prominent. Another highlight of our approach is that the Gibbs sampler converges quite fast, with a rather short burn-in period. To further reduce the time complexity, a potential optimization of the sampling process is available~\citep{kalaitzis2013flexible}.
There are various generalizations of our inference approach. While our focus in this paper is on correlated $k$-factor models, it is straightforward to extend the current procedure to other classes of latent models that are often considered in CFA, such as bi-factor models and second-order models, by simply adjusting the sparsity structure of the prior graph $G$. Also, one may consider models with impure measurement indicators, e.g., a model with an indicator measuring multiple factors or a model with residual covariances~\citep{bollenstructural}, which can easily be handled by BGCF by changing the sparsity pattern of $\Lambda$ and $D$. Another line of future work is to analyze standard errors and confidence intervals, whereas this paper concentrates on the accuracy of parameter estimates. Our conjecture is that BGCF remains favorable because it naturally transfers the extra variability incurred by missing values to the posterior Gibbs samples: we indeed observed a growing variance of the posterior distribution with an increasing proportion of missing values in our simulations. On top of the posterior distribution, one could conduct further studies, e.g., causal discovery over latent factors~\citep{silva2006learning,cui2018learning}, regression analysis (as we did in Section~\ref{sec:application}), or other machine learning tasks.
\section*{Appendix A: Proof of Theorem 1}
\setcounter{thm}{0}
\begin{thm}[Consistency of the BGCF Approach]
\label{thm:consistency}
Let $\bm{Y}_n=(\bm{y}_1,\ldots,\bm{y}_n)^T$ be independent observations drawn from a Gaussian copula factor model. If $\bm{Y}_n$ is complete (no missing data) or contains missing values that are missing completely at random, then
\begin{gather*}
\lim\limits_{n \to\infty} P\big(\hat{C}_n = C_0\big) = 1 \:, \\
\lim\limits_{n \to\infty} P\big(\hat{\Lambda}_n = \Lambda_0\big) = 1 \:, \\
\lim\limits_{n \to\infty} P\big(\hat{D}_n = D_0\big) = 1 \:,
\end{gather*}
where $\hat{C}_n$, $\hat{\Lambda}_n$, and $\hat{D}_n$ are parameters learned by BGCF, while $C_0$, $\Lambda_0$, and $D_0$ are the true ones.
\end{thm}
\begin{proof}
If $S = \Lambda C \Lambda^T + D$ is the response vector's covariance matrix, then its correlation matrix is $\widetilde{S} = V^{-\frac{1}{2}} S V^{-\frac{1}{2}} = V^{-\frac{1}{2}} \Lambda C \Lambda^T V^{-\frac{1}{2}} + V^{-\frac{1}{2}} D V^{-\frac{1}{2}} = \widetilde{\Lambda} C \widetilde{\Lambda}^T + \widetilde{D}$, where $V$ is a diagonal matrix containing the diagonal entries of $S$. We make use of Theorem 1 from~\citet{murray2013bayesian} to show the consistency of $\widetilde{S}$. Our factor-analytic prior puts positive probability density almost everywhere on the set of correlation matrices that have a $k$-factor decomposition. Then, by applying Theorem 1 in~\citet{murray2013bayesian}, we obtain the consistency of the posterior distribution on the response vector's correlation matrix for complete data, i.e.,
\begin{equation}
\label{eq:consistency_S}
\lim_{n \rightarrow \infty} \Pi(\widetilde{S} \in \mathcal{V}(\widetilde{S}_0) | \bm{Z}_n \in \D(\bm{Y}_n) ) = 1 \; a.s. \; \forall \; \mathcal{V}(\widetilde{S}_0),
\end{equation}
where $\D(\bm{Y}_n)$ is the space restricted by the observed data, and $\mathcal{V}(\widetilde{S}_0)$ is a neighborhood of the true parameter $\widetilde{S}_0$. When the data contain values that are missing completely at random (MCAR), we can also directly obtain the consistency of $\widetilde{S}$ by again using Theorem 1 in~\citet{murray2013bayesian}, with the additional observation that the estimation of ordinary and polychoric/polyserial correlations from pairwise complete data remains consistent under MCAR. That is to say, the consistency shown in Equation~(\ref{eq:consistency_S}) also holds for data with MCAR missing values.
From this point on, to simplify notation, we will omit adding the tilde to refer to the rescaled matrices $\widetilde{S}$, $\widetilde{\Lambda}$, and $\widetilde{D}$.
Thus, $S$ from now on refers to the correlation matrix of the response vector. $\Lambda$ and $D$ refer to the scaled factor loadings and noise variance respectively.
The Gibbs sampler underlying the BGCF approach has the posterior of $\Sigma$ (the correlation matrix of the integrated vector $\bm{X}$) as its stationary distribution. $\Sigma$ contains
$S$, the correlation matrix of the response random vector, in the upper left block and $C$ in the lower right block. Here $C$ is the correlation matrix of the factors, which implicitly depends on the \textit{Gaussian copula factor model} from Definition 1 of the main paper via the formula $S = \Lambda C \Lambda^T + D$. In order to render this decomposition identifiable, we need to put constraints on $C$, $\Lambda$, $D$. Otherwise, we can always replace $\Lambda$ with $\Lambda U$ and $C$ with $U^{-1} C U^{-T}$, where $U$ is any $k \times k$ invertible matrix, to obtain the equivalent decomposition $S = (\Lambda U)(U^{-1} C U^{-T})(U^T \Lambda ^T) + D$. However, we have assumed that $\Lambda$ follows a particular sparsity structure in which there is only a single non-zero entry in each row. This assumption restricts the space of equivalent solutions, since any $\Lambda U$ has to follow the same sparsity structure as $\Lambda$. More explicitly, $\Lambda U$ maintains the same sparsity pattern if and only if $U$ is a diagonal matrix (Lemma~\ref{lemm:lambda}).
By decomposing $S$, we get a class of solutions for $C$ and $\Lambda$, i.e., $U^{-1} C U^{-1}$ and $\Lambda U$, where $U$ can be any invertible diagonal matrix (so that $U^{-T} = U^{-1}$). In order to get a unique solution for $C$, we impose two identifying conditions: 1) we restrict $C$ to be a correlation matrix; 2) we force the first non-zero entry in each column of $\Lambda$ to be positive. These conditions are sufficient for identifying $C$ uniquely (Lemma~\ref{lemm:identifiability}). We point out that these sufficient conditions are not unique: for example, one could replace them by restricting the first non-zero entry in each column of $\Lambda$ to be one. The reason for our choice of conditions is to keep it consistent with our model definition, where $C$ is a correlation matrix. Under the two conditions for identifying $C$, the factor loadings $\Lambda$ and residual variances $D$ are also identified, except for the case in which there exists a factor that is independent of all the others and has only two indicators. For such a factor, we have 4 free parameters (2 loadings, 2 residuals) but only 3 available equations (2 variances, 1 covariance), which yields an underdetermined system. Therefore, the identifiability of $\Lambda$ and $D$ relies on the assumption that a factor has either a single indicator or at least three indicators whenever it is independent of all the others. See Lemmas~\ref{lemm:identifiability_lambda} and~\ref{lemm:identifiability_D} for a detailed analysis.
Now, given the consistency of $S$ and the unique smooth map from $S$ to $C$, $\Lambda$, and $D$, we obtain the consistency of the posterior mean of the parameter $C$, $\Lambda$, and $D$, which concludes our proof.
\end{proof}
\begin{lemm}
\label{lemm:lambda}
If $\Lambda = (\lambda_{ij})$ is a $p \times k$ factor loading matrix with only a single non-zero entry for each row, then $\Lambda U$ will have the same sparsity pattern if and only if $U = (u_{ij})$ is diagonal.
\end{lemm}
\begin{proof}
($\Rightarrow$) We prove the direct statement by contradiction. We assume that $U$ has an off-diagonal entry that is not equal to zero. We arbitrarily choose that entry to be $u_{rs}, r, s \in \{1, 2, \ldots, k\}, r \neq s$. Due to the particular sparsity pattern we have chosen for $\Lambda$, there exists $q \in \{1, 2, \ldots, p\}$ such that $\lambda_{qr} \neq 0$ and $\lambda_{qs} = 0$, i.e., the unique factor corresponding to the response $Z_q$ is $\eta_r$. However, we have $(\Lambda U)_{qs} = \lambda_{qr} u_{rs} \neq 0$, which means $(\Lambda U)$ has a different sparsity pattern from $\Lambda$. We have reached a contradiction, therefore $U$ is diagonal.
($\Leftarrow$) If $U$ is diagonal, i.e., $U = \diag(u_1, u_2, \ldots, u_k)$, then $(\Lambda U)_{ij} = \lambda_{ij} u_j$. This means that $(\Lambda U)_{ij} = 0 \iff \lambda_{ij} u_j = 0 \iff \lambda_{ij} = 0$, so the sparsity pattern is preserved.
\end{proof}
\begin{lemm}[Identifiability of $C$] \label{lemm:identifiability}
Given the factor structure defined in Section 3 of the main paper, we can uniquely recover $C$ from $S = \Lambda C \Lambda^T + D$ if 1) we constrain $C$ to be a correlation matrix; 2) we force the first element in each column of $\Lambda$ to be positive.
\end{lemm}
\begin{proof}
Here we assume that the model has the stated factor structure, i.e., that there is some $\Lambda$, $C$, and $D$ such that $S = \Lambda C \Lambda^T + D$. We then show that our chosen restrictions are sufficient for identification using an argument similar to that in~\citet{anderson1956statistical}.
The decomposition $S = \Lambda C \Lambda^T + D$ constitutes a system of $\frac{p (p + 1)}{2}$ equations:
\begin{equation} \label{eqn:decomposition}
\begin{aligned}
s_{ii} & = \lambda_{if(i)}^2 + d_{ii} \\
s_{ij} & = c_{f(i)f(j)} \lambda_{if(i)} \lambda_{jf(j)} \: , \: i < j \:, \\
\end{aligned}
\end{equation}
where $S = (s_{ij}), \Lambda = (\lambda_{ij}), C = (c_{ij}), D = (d_{ij})$, and $f : \{1, 2, \ldots, p \} \to \{1, 2, \ldots, k \} $ is the map from a response variable to its corresponding factor. Looking at the equation system in~\eqref{eqn:decomposition}, we notice that each factor correlation term $c_{qr}, q \neq r$, appears only in the equations corresponding to response variables indexed by $i$ and $j$ such that $f(i) = q$ and $f(j) = r$ or vice versa. This suggests that we can restrict our analysis to submodels that include only two factors, by considering the submatrices of $S, \Lambda, C, D$ that only involve those two factors. To be more precise, the idea is to look only at the equations corresponding to the submatrix $S_{f^{-1}(q) f^{-1}(r)}$, where $f^{-1}(q)$ denotes the preimage of $q$ under $f$. Indeed, we will show that we can identify each individual correlation term corresponding to a pair of factors only by looking at these submatrices. Any information concerning the correlation term provided by the other equations is then redundant.
Let us then consider an arbitrary pair of factors in our model and the corresponding submatrices of $\Lambda$, $C$, $D$, and $S$. (The case of a single factor is trivial.) In order to simplify notation, we will also use $\Lambda$, $C$, $D$, and $S$ to refer to these submatrices. We also re-index the two factors involved to $\eta_1$ and $\eta_2$ for simplicity. In order to recover the correlation between a pair of factors from $S$, we have to analyze three separate cases to cover all the bases (see Figure~\ref{fig:GaussianCopulaModelDemo} for examples concerning each case):
\begin{enumerate}
\item The two factors are not correlated, i.e., $c_{12} = 0$. (There are no restrictions on the number of response variables that the factors can have.)
\item The two factors are correlated, i.e., $c_{12} \neq 0$, and each has a single response, which implies that $Z_1 = \eta_1$ and $Z_2 = \eta_2$.
\item The two factors are correlated, i.e., $c_{12} \neq 0$, but at least one of them has at least two responses.
\end{enumerate}
\begin{figure*}[!h]
\hspace{10pt}
\begin{tabular}{ccc}
\parbox[t]{0.33\textwidth}
\scalebox{0.6}{\centerline{\xymatrix @R=1em @C=2em{
& & & *=<3em,2em>[o][F]{Z_{2}} \\
*=<3em,2em>[o][F]{Z_1} & *=<3em,2em>[o][F]{\eta_1} \ar[l] &
*=<3em,2em>[o][F]{\eta_2} \ar[ur] \ar[rd] & \\
& & & *=<3em,2em>[o][F]{Z_{3}} \\
}}}}
\parbox[t]{0.33\textwidth}
\scalebox{0.6}{\centerline{\xymatrix @R=1em @C=2em{
& & & *=<3em,2em>[o][B]{} \\
*=<3em,2em>[o][F]{Z_1} & *=<3em,2em>[o][F]{\eta_1} \ar[l] \ar@{-}[r] &
*=<3em,2em>[o][F]{\eta_2} \ar[r] &
*=<3em,2em>[o][F]{Z_{2}} \\
& & & *=<3em,2em>[o][B]{} \\
}}}}
\parbox[t]{0.33\textwidth}
\scalebox{0.6}{\centerline{\xymatrix @R=1em @C=2em{
*=<3em,2em>[o][F]{Z_1} & & & *=<3em,2em>[o][F]{Z_{3}} \\
& *=<3em,2em>[o][F]{\eta_1} \ar[ul] \ar[dl] \ar@{-}[r] &
*=<3em,2em>[o][F]{\eta_2} \ar[ur] \ar[rd] &
\\
*=<3em,2em>[o][F]{Z_{2}} & & & *=<3em,2em>[o][F]{Z_{4}} \\
}}}}
\end{tabular}
\caption{Left panel: \textit{Case 1} ($c_{12} = 0$); Middle panel: \textit{Case 2} ($c_{12} \neq 0$ and only one response per factor); Right panel: \textit{Case 3} ($c_{12} \neq 0$ and at least one factor has multiple responses).}
\label{fig:GaussianCopulaModelDemo}
\end{figure*}
\textit{Case 1:} If the two factors are not correlated (see the example in the left panel of Figure~\ref{fig:GaussianCopulaModelDemo}), this fact will be reflected in the matrix $S$. More specifically, the off-diagonal blocks in $S$, which correspond to the covariance between the responses of one factor and the responses of the other factor, will be set to zero. If we notice this zero pattern in $S$, we can immediately determine that $c_{12} = 0$.
\textit{Case 2:} If the two factors are correlated and each factor has a single associated response (see the middle panel of Figure~\ref{fig:GaussianCopulaModelDemo}), the model reduces to a Gaussian copula model. Then we directly get $c_{12} = s_{12}$, since we have imposed the constraint $Z = \eta$ whenever $\eta$ has a single indicator $Z$.
\textit{Case 3:} If at least one of the factors (w.l.o.g., $\eta_{1}$) is allowed to have more than one response (see the example in the right panel of Figure~\ref{fig:GaussianCopulaModelDemo}), we arbitrarily choose two of these responses. We also require one response variable corresponding to the other factor ($\eta_{2}$). We use $\lambda_{i1}, \lambda_{j1}$, and $\lambda_{l2}$ to denote the loadings of these response variables, where $i, j, l \in \{1, 2, \ldots, p \}$. From Equation \eqref{eqn:decomposition} we have:
\begin{align*}
s_{ij} & = \lambda_{i1} \lambda_{j1} \\
s_{il} & = c_{12} \lambda_{i1} \lambda_{l2} \\
s_{jl} & = c_{12} \lambda_{j1} \lambda_{l2} \:.
\end{align*}
Since we are in the case in which $c_{12} \neq 0$, which automatically implies that $s_{jl} \neq 0$, we can divide the last two equations to obtain $\frac{s_{il}}{s_{jl}} = \frac{\lambda_{i1}}{\lambda_{j1}}$. We then multiply the result with the first equation to get $\frac{s_{ij} s_{il}}{s_{jl}} = \lambda_{i1}^2$. Without loss of generality, we can say that $\lambda_{i1}$ is the first entry in the first column of $\Lambda$, which means that $\lambda_{i1} > 0$. This means that we have uniquely recovered $\lambda_{i1}$ and $\lambda_{j1}$.
We can also assume without loss of generality that $\lambda_{l2}$ is the first entry in the second column of $\Lambda$, so $\lambda_{l2} > 0$. If $\eta_2$ has at least two responses, we use a similar argument to the one before to uniquely recover $\lambda_{l2}$. We can then use the above equations to get $c_{12}$. If $\eta_2$ has only one response, then $d_{ll} = 0$, which means that $s_{ll} = \lambda_{l2}^2$, so again $\lambda_{l2}$ is uniquely recoverable and we can obtain $c_{12}$ from the equations above.
Thus, we have shown that we can correctly determine $c_{qr}$ only from $S_{f^{-1}(q) f^{-1}(r)}$ in all three cases. By applying this approach to all pairs of factors, we can uniquely recover all pairwise correlations. This means that, given our constraints, we can uniquely identify $C$ from the decomposition of $S$.
\end{proof}
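As an illustration of the recovery formulas above, the following small Python sketch (with hypothetical toy parameters, not taken from our experiments) checks Case 3 numerically: $\eta_1$ has two responses, $\eta_2$ has a single response (so its residual variance is zero by the constraint $Z = \eta$), and the loadings and $c_{12}$ are recovered from $S$ alone. The same formula $\lambda_{11}^2 = s_{12}s_{13}/s_{23}$ also handles the three-indicator case appearing in Lemma~\ref{lemm:identifiability_lambda} below.
\begin{verbatim}
import numpy as np

Lam = np.array([[0.8, 0.0],    # Z_1 loads on eta_1
                [0.6, 0.0],    # Z_2 loads on eta_1
                [0.0, 0.9]])   # Z_3 is the single response of eta_2
C = np.array([[1.0, 0.4],
              [0.4, 1.0]])
D = np.diag([1 - 0.8**2, 1 - 0.6**2, 0.0])  # d_33 = 0 (single indicator)
S = Lam @ C @ Lam.T + D

lam_11 = np.sqrt(S[0, 1] * S[0, 2] / S[1, 2])  # s_ij s_il / s_jl
lam_21 = S[0, 1] / lam_11
lam_32 = np.sqrt(S[2, 2])                      # s_ll = lambda_{l2}^2
c_12 = S[0, 2] / (lam_11 * lam_32)
assert np.allclose([lam_11, lam_21, lam_32, c_12], [0.8, 0.6, 0.9, 0.4])
\end{verbatim}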
\begin{lemm}[Identifiability of $\Lambda$] \label{lemm:identifiability_lambda}
Given the factor structure defined in Section~\ref{sec:method} of the main paper, we can uniquely recover $\Lambda$ from $S = \Lambda C \Lambda^T + D$ if 1) we constrain $C$ to be a correlation matrix; 2) we force the first element in each column of $\Lambda$ to be positive; 3) when a factor is independent of all the others, it has either a single or at least three indicators.
\end{lemm}
\begin{proof}
Compared to identifying $C$, we need to consider another case in which there is only one factor or there exists one factor that is independent of all the others (the former can be treated as a special case of the latter). When such a factor only has a single indicator, e.g., $\eta_1$ in the left panel of Figure~\ref{fig:GaussianCopulaModelDemo}, we directly identify $d_{11} = 0$ because of the constraint $Z_1 = \eta_1$. When the factor has two indicators, e.g., $\eta_2$ in the left panel of Figure~\ref{fig:GaussianCopulaModelDemo}, we have four free parameters ($\lambda_{22}$, $\lambda_{32}$, $d_{22}$, and $d_{33}$) while we can only construct three equations from $S$ ($s_{22}$, $s_{33}$, and $s_{23}$), which cannot give us a unique solution. Now we turn to the three-indicator case, as shown in Figure~\ref{fig:Demo2}. From Equation \eqref{eqn:decomposition} we have:
\begin{align*}
s_{12} & = \lambda_{11} \lambda_{21} \\
s_{13} & =\lambda_{11} \lambda_{31} \\
s_{23} & = \lambda_{21} \lambda_{31} \:.
\end{align*}
We then have $\frac{s_{12}s_{13}}{s_{23}} = \lambda_{11}^2$, which, together with the constraint $\lambda_{11}>0$, has a unique solution for $\lambda_{11}$; from it we naturally obtain the solutions for $\lambda_{21}$ and $\lambda_{31}$. For the other cases, the proof follows the same line of reasoning as Lemma~\ref{lemm:identifiability}.
\begin{figure}[h]
\centering
\begin{tabular}{c}
\parbox[t]{0.5\textwidth}
\scalebox{0.7}{\centerline{\xymatrix @R=1em @C=2em{
& *=<3em,2em>[o][F]{Z_{1}} \\
*=<3em,2em>[o][F]{\eta_1} \ar[ur] \ar[r] \ar[rd] & *=<3em,2em>[o][F]{Z_{2}} \\
& *=<3em,2em>[o][F]{Z_{3}} \\
}}}}
\end{tabular}
\caption{A factor model with three indicators.}
\label{fig:Demo2}
\end{figure}
\end{proof}
\begin{lemm}[Identifiability of $D$] \label{lemm:identifiability_D}
Given the factor structure defined in Section~\ref{sec:method} of the main paper, we can uniquely recover $D$ from $S = \Lambda C \Lambda^T + D$ if 1) we constrain $C$ to be a correlation matrix; 2) when a factor is independent of all the others, it has either a single or at least three indicators.
\end{lemm}
\begin{proof}
We conduct our analysis case by case. For the case where a factor has a single indicator, we trivially set $d_{ii} = 0$. For the case in Figure~\ref{fig:Demo2}, it is straightforward to get $d_{11} = s_{11} - \lambda_{11}^2$ from $\frac{s_{12}s_{13}}{s_{23}} = \lambda_{11}^2$ (and similarly for $d_{22}$ and $d_{33}$). Another case we need to consider is Case 3 in Figure~\ref{fig:GaussianCopulaModelDemo}, where we have $\frac{s_{ij} s_{il}}{s_{jl}} = \lambda_{i1}^2$ (see the analysis in Lemma~\ref{lemm:identifiability}), from which we obtain $d_{ii} = s_{ii} - \lambda_{i1}^2$. By applying this approach to all single factors or pairs of factors, we can uniquely recover all elements of $D$.
\end{proof}
\section*{Appendix B: Extended Simulations}
This section continues the experiments in Section~\ref{sec:simulation} of the main paper, in order to check the influence of the number of categories for ordinal data and the extent of non-normality for nonparanormal data.
\begin{figure}[t]
\centering
\subfloat[Interfactor Correlations]{\includegraphics[scale=0.7]{k4_NumCat_corr}\label{fig:corr_cat}}
\hfill
\subfloat[Factor Loadings]{\includegraphics[scale=0.7]{k4_NumCat_loadings}\label{fig:loadings_cat}}
\caption{Results for $n = 500$ and $\beta = 10\%$ obtained by BGCF, DWLS with PD, and FIML on ordinal data with different numbers of categories, showing the mean of ARB (left panel) and the mean of RMSE with 95\% confidence interval (right panel) over 100 experiments for (a) interfactor correlations and (b) factor loadings, where dashed lines and dotted lines in left panels denote $\pm 5\%$ and $\pm 10\%$ bias respectively.}
\label{fig:num_cat}
\end{figure}
\subsection*{B1: Ordinal Data with Different Numbers of Categories}
In this subsection, we consider ordinal data with various numbers of categories $c \in \{2, 4, 6, 8\}$, where the sample size and the percentage of missing values are set to $n = 500$ and $\beta = 10\%$ respectively. Figure~\ref{fig:num_cat} shows the results obtained by BGCF (Bayesian Gaussian copula factor), DWLS (diagonally weighted least squares) with PD (pairwise deletion), and FIML (full information maximum likelihood), providing the mean of ARB (average relative bias) and the mean of RMSE (root mean squared error) with 95\% confidence interval over 100 experiments for (a) interfactor correlations and (b) factor loadings. In the case of two categories, FIML underestimates factor loadings dramatically and DWLS incurs a moderate bias, while BGCF gives only a trivial bias. With an increasing number of categories, FIML gets closer and closer to BGCF, but BGCF remains favorable.
\subsection*{B2: Nonparanormal Data with Different Extents of Non-normality}
In this subsection, we consider nonparanormal data, in which we use the degrees of freedom $df$ of a $\chi^2$-distribution to control the extent of non-normality (see Section 5.1 of the main paper for details). The sample size and the percentage of missing values are set to $n = 500$ and $\beta = 10\%$ respectively, while the degrees of freedom vary over $df \in \{2, 4, 6, 8\}$.
Figure~\ref{fig:nonpara} shows the results obtained by BGCF, DWLS with PD, and FIML, providing the mean of ARB (left panel) and the mean of RMSE with 95\% confidence interval (right panel) over 100 experiments for (a) interfactor correlations and (b) factor loadings. The major conclusion drawn here is that, while a nonparanormal transformation has no effect on our BGCF approach, FIML is quite sensitive to the extent of non-normality, especially for factor loadings.
\begin{figure}[H]
\centering
\subfloat[Interfactor Correlations]{\includegraphics[scale=0.7]{k4_nonnormal_corr}\label{fig:corr_nonpara}}
\hfill
\subfloat[Factor Loadings]{\includegraphics[scale=0.7]{k4_nonnormal_loadings}\label{fig:loadings_nonpara}}
\caption{Results for $n = 500$ and $\beta = 10\%$ obtained by BGCF, DWLS with PD, and FIML on nonparanormal data with different extents of non-normality, for the same experiments as in Figure~\ref{fig:num_cat}.}
\label{fig:nonpara}
\end{figure}
| {'timestamp': '2018-06-13T02:12:57', 'yymm': '1806', 'arxiv_id': '1806.04610', 'language': 'en', 'url': 'https://arxiv.org/abs/1806.04610'} |
\section{Introduction}
\subsection{Motivation}
A central problem in K\"ahler geometry, proposed by Calabi \cite{EC2} in the 1980's, is to find a canonical K\"ahler metric in a given cohomology class of a compact K\"ahler manifold. Calabi suggested looking for \textit{extremal K\"ahler metrics}, characterized by the property that the flow of the gradient of the scalar curvature preserves the complex structure \cite{EC}. Constant scalar curvature (cscK for short) and the much studied K\"ahler–Einstein metrics are particular examples of such metrics.
The existence of an extremal K\"ahler metric in a given K\"ahler class is conjecturally equivalent to a certain notion of \textit{stability}, through an extension of the Yau-Tian-Donaldson (YTD) conjecture, introduced in \cite{GGS2, GGS} for the polarized case and in \cite{RDJR} for a general K\"ahler class. This conjecture, its ramifications \cite{GGS2, GGS} and extensions \cite{VA7, DR, EI, AL, GGS2, GGS, ZSD} have generated tremendous efforts in K\"ahler geometry and have led to many interesting developments during the last decades.
The question has been settled in some special cases, especially on smooth toric varieties \cite{XC1, XC2, SKD, TH, CL, ZZ} where the relevant stability notion is expressed in terms of the convex affine geometry of the corresponding Delzant polytope, and is referred to as \emph{uniform K-stability} of the polytope. Other special cases include Fano manifolds (see e.g. \cite{BBJ, CSS, CSS2, CSS3, LTW, GT}), total spaces of projective line bundles over a cscK base \cite{VA8, VA3} and certain varieties with a large symmetry group \cite{TDD}. In general, though, the YTD conjecture is still open and it is expected that the relevant notion of stability would be the one of \emph{relative uniform K-stability}, see e.g. \cite{BBJ, RD, GGS2, GGS}.
In \cite{VA2}, the authors introduced a class of fiber bundles, called \textit{semi-simple rigid toric bundles}, which have isomorphic toric K\"ahler fibers. They are obtained from \textit{the generalized Calabi construction} \cite{VA2, VA3}, involving a product of polarized K\"ahler manifolds, a certain principal torus bundle and a given toric K\"ahler manifold. In this paper, we are interested in the special case of \textit{semi-simple principal toric fibrations}. Namely, this is the case when the base is a global product of cscK Hodge manifolds and there are no blow-downs, see \cite{VA3} or Remark \ref{remark-semi-simple} below. Examples include the total space of the projectivisation of a direct sum of holomorphic line bundles over a compact complex curve, as well as the $\mathbb{P}^1$-bundle constructions over a product of cscK Hodge manifolds originally used by Calabi \cite{EC2} and generalized in many subsequent works (see e.g.
\cite{VA8, DG2, ADH, ADH2, KS, YS, CTF}). On any semi-simple rigid toric fibration, the authors of \cite{VA2} introduced a class of K\"ahler metrics, called \textit{compatible K\"ahler metrics}. A K\"ahler class containing a compatible K\"ahler metric is referred to as a \textit{compatible K\"ahler class}. For any compatible K\"ahler metric on $M$, the momentum map of the toric K\"ahler fiber $(V, \omega_V, J_V, \mathbb{T})$ can be identified with the momentum map of the induced $\mathbb{T}$-action on $(M, J, \omega_M)$, so that the momentum image of $M$ is the Delzant polytope $P$ of $(V, \omega_V, J_V, \mathbb{T})$. In \cite{VA3}, the authors made the following conjecture:
\begin{conjecture}[\cite{VA3}]{\label{Conjecture}}
Let $(M,J,\omega,\mathbb{T})$ be a semi-simple rigid toric fibration and $P$ its associated Delzant polytope. Suppose $[\omega]$ is a compatible K\"ahler class. Then the following statements are equivalent:
\begin{enumerate}
\item $(M,J,[\omega])$ admits an extremal K\"ahler metric;
\item $(M,J,[\omega])$ admits a compatible extremal K\"ahler metric;
\item $P$ is weighted K-stable.
\end{enumerate}
\end{conjecture}
\noindent In the third assertion, the notion of \textit{weighted stability} is a weighted version of the notion of K-stability introduced in \cite{SKD} (see \S \ref{subsection-K-stability-and-proper}) for appropriate values of the weight functions, and asks for the positivity of a linear functional defined on the space of convex piece-wise linear functions which are not affine-linear over $P$.
\subsection{Main results} Our purpose in this paper is to solve Conjecture \ref{Conjecture} for semi-simple principal toric fibrations.
\begin{customthm}{1}{\label{theorem1}}
For $M$ a semi-simple principal toric fibration, Conjecture $\ref{Conjecture}$ is true if we replace condition $(3)$ with the notion of weighted uniform K-stability, see Definition $\ref{uniform-K-stable}$.
\end{customthm}
\noindent In this paper, we use normalized continuous convex functions which are \textit{smooth} in the interior of $P$ and the usual $L^1$-norm on $P$ in order to define \emph{uniform} (weighted) K-stability, see Definition \ref{uniform-K-stable}. By $C^0$ density and continuity, this is equivalent to the uniform (weighted) K-stability of $P$ defined in terms of normalized convex piece-wise linear functions and the $L^1$-norm.~\footnote{After the submission of the first version of our article on the arXiv, we were contacted by Yasufumi Nitta, who kindly shared with us his manuscript with Shunsuke Sato in which the authors establish, in the case of a polarized toric variety, the equivalence between various notions of uniform K-stability of $P$. In particular, their result gives strong evidence for, and establishes in a certain case, the equivalence between the uniform weighted K-stability and a suitable notion of weighted K-stability of $P$ in Conjecture \ref{Conjecture} $(3)$.}
We split Theorem \ref{theorem1} into two statements: Theorem \ref{theoremAA} and Theorem \ref{theoremBB}. Theorem \ref{theoremAA} corresponds to the statement "(1) $\Leftrightarrow$ (2)" in Conjecture \ref{Conjecture}.
\begin{customthm}{2}[Theorem \ref{theoremA}]{\label{theoremAA}}
Let $(M,J, \omega_M, \mathbb{T})$ be a semi-simple principal toric fibration with fiber $(V,J_V, \omega_V, \mathbb{T} )$. Then, the following statements are equivalent:
\begin{enumerate}
\item there exists an extremal K\"ahler metric in $(M,J,[\omega_M],\mathbb{T})$;
\item there exists a compatible extremal K\"ahler metric in $(M,J,[\omega_M],\mathbb{T})$;
\item there exists a weighted cscK metric in $(V,J_V,[\omega_V],\mathbb{T})$ for the weights defined in $(\ref{weights})$ below.
\end{enumerate}
\end{customthm}
In the third assertion, the notion of weighted cscK metric is in the sense of \cite{AL}, see \S\ref{section-scalv} for a precise definition. The equivalence $(2) \Leftrightarrow (3)$, established in \cite{VA2}, is recalled in \S \ref{section-rigidtoric}. The main idea behind the proof of $(1) \Rightarrow (2)$ is to use the fact \cite{XC1, XC2, WH} that the existence of an extremal K\"ahler metric implies a certain properness condition for the corresponding relative Mabuchi functional (see Theorem \ref{Chen--Cheng-existence} below for a precise statement), and then to show that the continuity path of \cite{XC5} can be run within the subspace of \emph{compatible} K\"ahler metrics in $[\omega_M]$. The deep results of \cite{XC1,XC2, WH} then yield the existence of an extremal K\"ahler metric in $[\omega_M]$ given by the generalized Calabi construction of \cite{VA2}.
Recently, building on the proof of Theorem \ref{theoremAA}, a similar statement was established in \cite{VA6} for a larger class of fibrations associated to a certain class of principal $\mathbb{T}$-bundles over products of cscK Hodge manifolds, whose fiber is an arbitrary compact K\"ahler manifold containing $\mathbb{T}$ in its reduced automorphism group.
Theorem \ref{theoremBB} below corresponds to the statement "(1) $\Leftrightarrow$ (3)" in Conjecture \ref{Conjecture} and provides a \emph{criterion} for verifying the equivalent conditions of Theorem \ref{theoremAA}, expressed in terms of the Delzant polytope of the fiber and data depending of the topology of $M$ and the compatible K\"ahler class.
\begin{customthm}{3}[Theorem \ref{theorem-B}]{\label{theoremBB}}
Let $(M,J, \omega_M, \mathbb{T})$ be a semi-simple principal toric fibration with fiber $(V,J_V, \omega_V, \mathbb{T})$ and denote by $P$ its associated Delzant polytope. Then there exists a weighted cscK metric in $[\omega_V]$ if and only if $P$ is weighted uniformly $K$-stable, for the weights defined in $(\ref{weights})$. In particular, the latter condition is necessary and sufficient for $[\omega_M]$ to admit an extremal K\"ahler metric.
\end{customthm}
\noindent The strategy of proof of the above result consists in considering the extremal K\"ahler metrics on the total space $(M,J,\mathbb{T})$ as weighted $(\v,\mathrm{w})$-cscK metrics on the corresponding toric fiber $(V,J_V, \omega_V, \mathbb{T})$ via Theorem \ref{theoremAA}.
We then use the Abreu--Guillemin formalism and a weighted adaptation of the results in \cite{CLS, SKD, ZZ} to establish the equivalence on $(V, \omega_V, J_V, \mathbb{T})$: in one direction, namely showing that existence implies that the polytope is weighted uniformly K-stable, the argument follows from a straightforward modification of the result in \cite{CLS}, which appears in \cite{LLS2}. For the other direction, we build on \cite{SKD, ZZ} to obtain in Proposition \ref{stable-equivaut-energy-propr-v} that the uniform weighted K-stability of the polytope implies a certain notion of coercivity of the weighted Mabuchi energy. We then show that the latter implies the properness of the Mabuchi energy of $M$, and we finally conclude by invoking again \cite{XC1,XC2, WH}.
Finally, we will be interested in a certain class of almost K\"ahler metrics on a toric manifold $(V,\omega,\mathbb{T})$. They are, by definition, almost K\"ahler metrics such that the orthogonal distribution to the $\mathbb{T}$-orbits
is involutive (see \cite{ML}) and we will refer to such metrics as \textit{involutive almost K\"ahler metrics}. The idea of studying such metrics comes from \cite{SKD} (see \cite{VA2} for the weighted case), where it was conjectured that the existence of a weighted involutive csc almost K\"ahler metric is equivalent to the existence of a weighted cscK metric.
\begin{iprop}[Proposition \ref{equivalence-almostcsck}]{\label{equivalence-almostcsck2}}
Let $(V, \omega,\mathbb{T})$ be a toric manifold associated to a Delzant polytope $P$. Then, for the weights defined in $(\ref{weights})$, the following statements are equivalent:
\begin{enumerate}
\item there exists a weighted cscK metric on $(V,\omega,\mathbb{T})$;
\item there exists an involutive weighted csc almost K\"ahler metric on $(V,\omega,\mathbb{T})$;
\item $P$ is weighted uniformly K-stable.
\end{enumerate}
\end{iprop}
As an application of the above result, we study the existence of extremal K\"ahler metrics on the projectivisation $\mathbb{P}(\mathcal{L}_0 \oplus \mathcal{L}_1 \oplus \mathcal{L}_2)$ of a direct sum of line bundles $\mathcal{L}_i$ over a compact complex curve $S_{\textnormal{\textbf{g}}}$ of genus $\textnormal{\textbf{g}}$. In \cite[Proposition 4]{VA3}, the authors established the existence of involutive weighted csc almost K\"ahler metrics on $\mathbb{P}(\mathcal{L}_0 \oplus \mathcal{L}_1 \oplus \mathcal{L}_2)$, depending on the degrees of the line bundles, the genus of the base and the K\"ahler class. Combining this with Proposition \ref{equivalence-almostcsck2}, we deduce the following:
\begin{icorollary}{\label{prop-ex}}
Let $M=\mathbb{P}( \mathcal{L}_0 \oplus \mathcal{L}_1 \oplus \mathcal{L}_2) \longrightarrow S_{\textnormal{\textbf{g}}}$ be a projective $\mathbb{P}^2$-bundle over a complex curve $S_{\textnormal{\textbf{g}}}$ of genus $\textnormal{\textbf{g}}$. If $\textnormal{\textbf{g}}=0,1$, then $M$ is a Calabi dream manifold, i.e. $M$ admits an extremal K\"ahler metric in each K\"ahler class. Furthermore, the extremal K\"ahler metrics are given by the generalized Calabi ansatz of \cite{VA2}.
\end{icorollary}
\noindent
We conclude by pointing out that when $\textnormal{\textbf{g}}=0$, the existence part of Corollary $\ref{prop-ex}$ was already obtained in \cite{EL}. We prove in addition that these extremal metrics are given by the Calabi ansatz of \cite{VA2}.
\subsection{Outline of the paper} Section \ref{section-scalv} is a brief summary of the notion of weighted $(\v,\mathrm{w})$-scalar curvature introduced by Lahdili \cite{AL}. In Section 3, we recall the construction of semi-simple principal toric fibrations and the key results established in \cite{VA2, VA3}. In Section \ref{section-distance}, we introduce weighted distances, weighted functionals and weighted differential operators. Section \ref{section-chencheng} gives a brief exposition of the existence result of \cite{XC1, XC2, WH}. We explain why their argument works equally well when the properness is relative to a maximal torus of the reduced group of automorphisms (and not only to a connected maximal compact subgroup). In Section \ref{section-theoremA}, our main result, Theorem \ref{theoremAA}, is stated and proved. In Section 7, we review the basic facts of toric K\"ahler geometry and give the proof of Theorem \ref{theoremBB}. In Section \ref{section-application}, we prove Corollary \ref{prop-ex}.
\section*{Acknowledgement}
This paper is part of my PhD thesis. I am very grateful to my advisors Vestislav Apostolov and Eveline Legendre for their immeasurable help and invaluable advice. I would also like to thank Abdellah Lahdili for his careful reading and constructive criticism of earlier versions, and Yasufumi Nitta for his interest and for sharing with me his manuscript with Shunsuke Sato. I am grateful to l'Université du Québec à Montréal and l'Université Toulouse III Paul Sabatier for their financial support.
\section{The $\mathrm{v}$-scalar curvature}{\label{section-scalv}}
In this section, we review briefly the notion of \textit{weighted $\mathrm{v}$-scalar curvature} introduced by Lahdili in \cite{AL}. Consider a smooth compact K\"ahler manifold $(M,J,\omega)$. We denote by $\mathrm{Aut}_{\mathrm{red}}(M)$ the reduced group of automorphisms whose Lie algebra $\mathfrak{h}_{\mathrm{red}}$ is given by the ideal of real holomorphic vector fields with zeros, see \cite{PG}. Let $\mathbb{T}$ be an $\ell$-dimensional real torus in $\mathrm{Aut}_{\mathrm{red}}(M)$ with Lie algebra $\mathfrak{t}$. Suppose $\omega_0$ is a $\mathbb{T}$-invariant K\"ahler form and consider the set of smooth $\mathbb{T}$-invariant K\"ahler potentials $\mathcal{K}(M,\omega_0)^{\mathbb{T}}$ relative to $\omega_0$. For $\varphi \in \mathcal{K}(M,\omega_0)^{\mathbb{T}}$ we denote by $\omega_{\varphi}=\omega_0+dd^c\varphi$ the corresponding K\"ahler metric. It is well known that the $\mathbb{T}$-action on $M$ is $\omega_{\varphi}$-Hamiltonian (see \cite{PG}) and we let $m_{\varphi} : M \longrightarrow \mathfrak{t}^{*}$ denote a $\omega_{\varphi}$-momentum map of $\mathbb{T}$. It is also known \cite{MA, VGSS, AL} that $P_{\varphi}:= m_{\varphi}(M)$ is a convex polytope in $\mathfrak{t}^{*}$ and we can normalize $m_{\varphi}$ by
\begin{equation}{\label{normalizing-moment-map}}
m_{\varphi}=m_{0} + d^c\varphi,
\end{equation}
\noindent in such a way that $P=P_{\varphi}$ is $\varphi$-independent, see \cite[Lemma 1]{AL}.
\begin{define}{\label{weighted-scalar}}
For $\mathrm{v}\in \mathcal{C}^{\infty}(P,\mathbb{R}_{>0})$ we define the (weighted) $\mathrm{v}$-scalar curvature of the K\"ahler metric $\omega_{\varphi}$, $\varphi \in \mathcal{K}(M,\omega_0)^{\mathbb{T}}$, to be
\begin{equation*}
Scal_{\mathrm{v}}(\omega_{\varphi}):=\mathrm{v}(m_{\varphi})Scal(\omega_{\varphi})+ 2 \Delta_{\omega_{\varphi}}\big(\mathrm{v}(m_{\varphi})\big) + \textnormal{Tr}\big(G_{\varphi} \circ (\textnormal{Hess}(\mathrm{v}) \circ m_{\varphi} )\big),
\end{equation*}
\noindent where $\Delta_{\omega_{\varphi}}$ is the Riemannian Laplacian associated to $g_{\varphi}:=\omega_{\varphi}(\cdot,J\cdot)$, $\textnormal{Hess}(\mathrm{v})$ is the Hessian of $\v$ viewed as a bilinear form on $\mathfrak{t}^*$, $G_{\varphi}$ is the bilinear form with smooth coefficients on $\mathfrak{t}$ given by the restriction of the Riemannian metric $g_{\varphi}$ to fundamental vector fields, and $Scal(\omega_{\varphi})$ is the scalar curvature of $(M,J,\omega_{\varphi})$.
\end{define}
In a basis $\boldsymbol{\xi}=(\xi_i)_{i=1 \cdots \ell}$ of $\mathfrak{t}$ we have
\begin{equation*}
\text{Tr}\big(G_{\varphi} \circ (\text{Hess}(\v) \circ m_{\varphi} )\big) = \sum_{1\leq i,j\leq \ell}\mathrm{v}_{,ij}(m_{\varphi})g_{\varphi}(\xi_i,\xi_j)
\end{equation*}
\noindent where $\mathrm{v}_{,ij}$ stands for the partial derivatives of $\v$ in the dual basis of $\boldsymbol{\xi}$.
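For instance (a sketch in the simplest case, with the notation of Definition \ref{weighted-scalar}): when $\ell = 1$ and $\boldsymbol{\xi}=(\xi)$, the trace term reduces to a single second derivative, so that
\begin{equation*}
Scal_{\mathrm{v}}(\omega_{\varphi})=\mathrm{v}(m_{\varphi})Scal(\omega_{\varphi})+ 2 \Delta_{\omega_{\varphi}}\big(\mathrm{v}(m_{\varphi})\big) + \mathrm{v}''(m_{\varphi})\, g_{\varphi}(\xi,\xi),
\end{equation*}
\noindent whereas for $\mathrm{v}\equiv 1$ (and any $\ell$) one recovers the usual scalar curvature, $Scal_{1}(\omega_{\varphi}) = Scal(\omega_{\varphi})$.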
\begin{define}{\label{define-scalv}}
Let $(M,J,\omega_0)$ be a compact K\"ahler manifold, $\mathbb{T}\subset \mathrm{Aut}_{\mathrm{red}}(M)$ a real torus
with normalized momentum image $P \subset \mathfrak{t}^*$ associated to $[\omega_0]$, and $\mathrm{v}\in \mathcal{C}^{\infty}(P, \mathbb{R}_{>0})$,
$\mathrm{w} \in \mathcal{C}^{\infty}(P, \mathbb{R})$. A $(\mathrm{v}, \mathrm{w})$-cscK metric is a $\mathbb{T}$-invariant K\"ahler metric satisfying
\begin{equation}{\label{weighted-cscK-metric}}
Scal_\v(\omega_{\varphi})=\mathrm{w}(m_{\varphi}).
\end{equation}
\end{define}
The motivation for studying (\ref{weighted-cscK-metric}) is that many natural geometric problems in K\"ahler geometry correspond to (\ref{weighted-cscK-metric}) for suitable choices of $\mathrm{v}$ and $\mathrm{w}$. For example, for $\mathbb{T}$ a maximal torus in $\mathrm{Aut}_{\mathrm{red}}(M)$, $\mathrm{v}\equiv1$ and $\mathrm{w}_{\textnormal{ext}}$ a certain affine-linear function on $\mathfrak{t}^*$, the $(1,\mathrm{w}_{\textnormal{ext}})$-cscK metrics are the extremal metrics in the sense of Calabi. Another example, which will be the one of main interest in this paper, is the existence theory of extremal K\"ahler metrics on a class of toric fibrations, which can be reduced to the study of $(\mathrm{v},\mathrm{w})$-cscK metrics on the toric fiber for suitable choices of $\mathrm{v}$ and $\mathrm{w}$. Weighted cscK metrics have been extensively studied and related to a notion
of $(\v, \mathrm{w})$-weighted K-stability, see for example \cite{VA7, VA6, VA5, EI, AL}.
\section{A class of toric fibrations}{\label{section-rigidtoric}}
\subsection{Semi-simple principal toric fibrations}{\label{subsection-torique-rigide}}
Let $\mathbb{T}$ be an $\ell$-dimensional torus. We denote by $\mathfrak{t}$ its Lie algebra and by $\Lambda \subset \mathfrak{t}$ the lattice of generators of circle subgroups, so that $\mathbb{T} = \mathfrak{t}/2\pi \Lambda$. Consider $ \pi_S : Q \longrightarrow (S,J_S)$ a principal $\mathbb{T}$-bundle over a $2d$-dimensional product of cscK Hodge manifolds $(S,J_S,\omega_S)=\prod_{a=1}^k (S_a,J_a,\omega_a)$.
Let $\theta \in \Omega^1(Q)\otimes \mathfrak{t}$ be a connection $1$-form with curvature
\begin{equation}{\label{first-chern}}
d \theta = \sum_{a=1}^k \pi_S^*\omega_a \otimes p_a, \qquad p_a \in \Lambda \subset \mathfrak{t}.
\end{equation}
\noindent The connection $1$-form $\theta$ gives rise to a \textit{horizontal} distribution $\mathcal{H}:=ann(\theta)$ and the tangent space splits as
\begin{equation*}
TQ= \mathcal{H} \oplus \mathfrak{t},
\end{equation*}
\noindent where, by definition, $d_q\pi_S : \mathcal{H}_q \overset{\cong}{\longrightarrow} T_{\pi_S(q)}S$ for all $q\in Q$. The complex structure $J_S$ acts on vector fields in $\mathcal{H}$ via the unique horizontal lift from $TS$ defined via $\theta$.
Now consider a $2\ell$-dimensional compact toric K\"ahler manifold $(V,J_V,\omega,\mathbb{T})$ with associated compact Delzant polytope $P$ \cite{TD}. We will consider various actions of $\mathbb{T}$ in this paper; in order to avoid confusion, we specify the space on which $\mathbb{T}$ acts as a subscript, e.g. $\mathbb{T}_Q$ acts on $Q$. The interior $P^0$ is the set of regular values of the momentum map $m_{\omega} : V \longrightarrow P \subset \mathfrak{t}^*$ of $(V,\omega,\mathbb{T})$ and $V^0:=m_{\omega} ^{-1}(P^0)$ is the open dense subset of points with regular $\mathbb{T}_V$-orbits. Introducing angular coordinates $t: V^0 \to \mathfrak{t} /2\pi \Lambda$ with respect to the K\"ahler structure $(J_V, \omega)$ (see e.g. \cite{MA3}), we identify
\begin{equation}{\label{identification-moment-angle}}
V^0 \cong \mathbb{T} \times P^0 \text{ } \text{ and } \text{ } T_xV^0\cong \mathfrak{t} \oplus \mathfrak{t}^*.
\end{equation}
\noindent for all $x \in V^0$. Notice that the first diffeomorphism is $\mathbb{T}$-equivariant.
We consider the $2n=2(\ell+d)$ dimensional smooth manifold
\begin{equation*}
M^0:=Q \times_{\mathbb{T}} V^0,
\end{equation*}
\noindent where the $\mathbb{T}_{Q\times V^0}$-action is given by $ \gamma \cdot (q, x) = (\gamma \cdot q, \gamma^{-1} \cdot x)$, $q \in Q$, $x \in V^0$, and $\gamma \in \mathbb{T}$. Using (\ref{identification-moment-angle}) we identify
\begin{equation}{\label{identification-M0}}
M^0 \cong Q \times P^0.
\end{equation}
\noindent We will still denote by $\pi_S : M^0 \longrightarrow S$ the projection. At the level of the tangent space we get
\begin{equation}{\label{splitting}}
TM^0= \mathcal{H} \oplus \mathcal{V},
\end{equation}
\noindent where, for all $x \in M^0$, $\mathcal{V}_x:= \ker d_x\pi_S \cong \mathfrak{t} \oplus \mathfrak{t}^* $ is referred to as the \textit{vertical space}. Since $V^0$ compactifies to $V$, the smooth manifold $M^0$ compactifies to a fiber bundle with fiber $V$
\begin{equation*}
M := \overline{M^0}= Q \times_{\mathbb{T}} V.
\end{equation*}
\noindent By construction, $M^0$ is an open dense subset of $M$ consisting of points with regular $\mathbb{T}_M$-orbits.
One can show that the almost complex structure $J_M:=J_S \oplus J_V$ on $M^0$ is integrable and extends to $M$, since $J_V$ extends to $V$. In other words, $M$ is a compactification of a principal $(\mathbb{C}^*)^{\ell}$-bundle $\pi_S : (M^0,J) \longrightarrow (S,J_S)$.
\subsection{Compatible K\"ahler metrics}
Following \cite{VA3}, we introduce a family of K\"ahler metrics \textit{compatible with the bundle structure}.
In momentum-angular coordinates $(m_{\omega},t)$, the K\"ahler form $\omega$ of $(V,J_V,\mathbb{T})$ is written on $V^0$ as
\begin{equation}{\label{toric_symplectic}}
\omega=\langle dm_{\omega} \wedge dt \rangle,
\end{equation}
\noindent where the angle bracket denotes the contraction of $\mathfrak{t}^*$ and $\mathfrak{t}$. By (\ref{identification-M0}), we can equivalently view $\theta$ as a $1$-form on $M^0= Q \times_{\mathbb{T}} V^0$, which satisfies $\theta(\xi^M)=\xi$ and $\theta(J\xi^M)=0$, where $\xi^M$ is the fundamental vector field defined by $\xi \in \mathfrak{t}$. Then $\langle dm_{\omega} \wedge \theta \rangle$ is well defined on $M^0$ and restricts to $\langle dm_{\omega} \wedge dt \rangle$ on the fibers. Thus, we define more generally
\begin{equation*}
\omega:=\langle dm_{\omega} \wedge \theta \rangle.
\end{equation*}
\noindent We choose the real constants $c_a$, $1\leq a \leq k$, such that the affine-linear functions $\langle p_a,m_{\omega} \rangle +c_a$ are positive on $P$, where, we recall, the elements $p_a \in \Lambda$ are defined by $(\ref{first-chern})$. We then define the 2-form on $M^0$
\begin{equation}{\label{metriccalabidata}}
\tilde{\omega}=\sum_{a=1}^k\left(\langle p_a,m_{\omega}\rangle +c_a\right)\omega_a + \langle dm_{\omega} \wedge \theta \rangle,
\end{equation}
\noindent which extends to a smooth K\"ahler form on $(M,J)$, since $\omega$ does on $(V,J_V)$. In the sequel, we fix the metrics $\omega_a$, the $1$-form $\theta$ and the constants $c_a$, noting that the $p_a \in \mathfrak{t}$ are topological constants of the bundle construction. The K\"ahler manifold $(M,J,\tilde{\omega},\mathbb{T})$ is then a fiber bundle over $S$, with fiber the toric K\"ahler manifold $(V, J_V, \omega, \mathbb{T})$, obtained from the principal $\mathbb{T}$-bundle $Q$. Following \cite{VA3}, we define:
\begin{define}
The K\"ahler manifold $(M,J,\tilde{\omega},\mathbb{T})$ constructed above is referred to as \textit{a semi-simple principal toric fibration} and the K\"ahler metric given explicitly on $M^0$ by $(\ref{metriccalabidata})$, is referred to as a \textit{compatible K\"ahler metric}. The corresponding K\"ahler classes on $(M,J)$ are called \textit{compatible K\"ahler classes} and, in the above set up, are parametrized by the real constant $c_a$.
\end{define}
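\begin{remark}
As an illustration (not needed in the sequel), the simplest semi-simple principal toric fibration is obtained by taking $k=1$, $\ell=1$, $V=\mathbb{CP}^1$ with its standard circle action, and $Q$ the unit circle bundle of a holomorphic line bundle $L$ over a cscK manifold $S$, so that $M=\mathbb{P}(\mathcal{O}\oplus L) \longrightarrow S$; the compatible K\"ahler metrics $(\ref{metriccalabidata})$ then reduce to the metrics given by the classical Calabi ansatz.
\end{remark}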
\begin{remark}{\label{remark-semi-simple}}Let $(M,J, \tilde{g}, \tilde{\omega})$ be a compact K\"ahler $2n$-manifold endowed with an effective isometric hamiltonian action of an $\ell$-torus $\mathbb{T} \subset \mathrm{Aut}_{\mathrm{red}}(M)$ with momentum map $m : M \rightarrow \mathfrak{t}^{*}$. Following \cite{VA3}, we say the action is \textit{rigid} if, for all $x \in M$, $R^{*}_x\tilde{g}$ depends only on $m(x)$, where $R_x : \mathbb{T} \rightarrow \mathbb{T} \cdot x$ is the orbit map. The action is said to be \textit{semi-simple rigid} if, moreover, for any regular value $x_0$ of the momentum map, the derivative with respect to $x$ of the family $\omega_S(x)$ of K\"ahler forms on the complex quotient $(S,J_{S})$ of $(M, J)$ is parallel and diagonalizable with respect to $\omega_{S}(x_0)$.
By the results of \cite{VA4, VA2, VA3}, semi-simple principal toric fibrations $(M, J, \tilde{\omega}, \mathbb{T})$ correspond to K\"ahler manifolds with a semi-simple rigid torus action such that the K\"ahler quotient $S$ is a global product of cscK manifolds and there are no blow-downs.
\end{remark}
\noindent The volume form of a compatible K\"ahler metric (\ref{metriccalabidata}) satisfies
\begin{equation}{\label{volume-form}}
\tilde{\omega}^{[n]}=\omega_S^{[d]}\wedge \mathrm{v}(m_{\omega}) \omega^{[\ell]}=\bigwedge_{a=1}^k \omega_a^{[d_a]}\wedge \mathrm{v}(m_{\omega}) \omega^{[\ell]}
\end{equation}
\noindent where $\mathrm{v}(m_{\omega}):=\prod_{a=1}^k\big(\langle p_a,m_{\omega} \rangle +c_a\big)^{d_a}$, $d_a$ is the complex dimension of $S_a$ and $\omega^{[i]}:=\frac{\omega^i}{i!}$ for $ 1 \leq i \leq n$. It follows from \cite{VA2} and \cite[Sect. 6]{AL} that the scalar curvature of a compatible metric is given by
\begin{equation}{\label{weighted-scalarcurv}}
Scal(\tilde{\omega})=\sum_{a =1}^k \frac{Scal_a}{\langle p_a,m_{\omega} \rangle +c_a} + \frac{1}{\mathrm{v}(m_{\omega})} Scal_{\v}(\omega),
\end{equation}
\noindent where $Scal_a$ is the constant scalar curvature of $(S_a,J_a,\omega_a)$ and $Scal_{\v}(\omega)$ is the $\v$-weighted scalar curvature of $(V,J_V,\omega, \mathbb{T})$, see Definition \ref{weighted-scalar}.
\subsection{The extremal vector field}{\label{section-extremal-vector-fields}} We now recall the definition of the extremal vector field on a general compact K\"ahler manifold $M$. To this end, we fix a maximal torus $T \subset \mathrm{Aut}_{\mathrm{red}}(M)$ and a K\"ahler class $[\tilde{\omega}_0]$. Given any $T$-invariant K\"ahler metric $\tilde{\omega} \in [\tilde{\omega}_0]$, we consider the $L^2_{\tilde{\omega}}$-orthogonal projection
\begin{equation*}
\Pi_{\tilde{\omega}} : L^2_{\tilde{\omega}} \longrightarrow P^T_{\tilde{\omega}}
\end{equation*}
\noindent where $P^T_{\tilde{\omega}}$ is the space of $\tilde{\omega}$-Killing potentials. Futaki and Mabuchi \cite{FM} showed that $\Pi_{\tilde{\omega}}\big(Scal(\tilde{\omega})\big)$ does not depend on the chosen $T$-invariant K\"ahler metric $\tilde{\omega}$ in $[\tilde{\omega}_0]$. Therefore, with respect to the normalized moment map $m_{\tilde{\omega}} : M \longrightarrow Lie(T)^*$, see (\ref{normalizing-moment-map}), one can write
\begin{equation}{\label{affin-extremal}}
\Pi_{\tilde \omega}\big(Scal(\tilde \omega)\big)=\langle \xi_{\textnormal{ext}}, m_{\tilde{\omega}} \rangle + c_{\textnormal{ext}} =:\ell_{\textnormal{ext}}(m_{\tilde{\omega}})
\end{equation}
\noindent where $\xi_{\textnormal{ext}} \in Lie(T)$, $c_{\textnormal{ext}} \in \mathbb{R}$ and $\ell_{\textnormal{ext}} \in \mathrm{Aff}\big(Lie(T)^*\big)$. See \cite[Lemma 1]{AL} for more details.
Assume now $(M,J,\tilde{\omega},\mathbb{T})$ is a semi-simple principal toric fibration. Then by \cite[Proposition 1]{VA3}, the extremal vector field is tangent to the fibers, i.e. $\xi_{\mathrm{ext}} \in \mathfrak{t}$.
\begin{prop}{\label{extremal-vector-fields}}
Let $(M,J)$ be a semi-simple principal toric fibration and let $T_S$ be a maximal torus in the isometry group of $g_S:=\sum_{a=1}^k g_a$, where $g_a$ is the Riemannian metric associated to $\omega_a$. Any compatible K\"ahler metric $\tilde{\omega}$ on $(M,J)$ is invariant under the action of a maximal torus $T \subset \mathrm{Aut}_{\mathrm{red}}(M)$ fitting in an exact sequence
\begin{equation}{\label{exact-sequence}}
Id \longrightarrow \mathbb{T}_M \longrightarrow T \longrightarrow T_S \longrightarrow Id,
\end{equation}
\noindent where $T_S$ is identified with a maximal torus in $\mathrm{Aut}_{\mathrm{red}}(S)$. Moreover, the extremal vector field $\xi_{\textnormal{ext}}$ belongs to the Lie algebra $\mathfrak{t}$ of $\mathbb{T}_M$.
\end{prop}
\noindent As shown in \cite{VA3}, we get from (\ref{weighted-scalarcurv}) and (\ref{affin-extremal}):
\begin{corollary}{\label{equivalence-scalar}}
A compatible K\"ahler metric $\tilde{\omega}$ on $(M,J)$ is extremal if and only if its corresponding toric K\"ahler metric $\omega$ on $(V,J_V)$ is $(\mathrm{v},\mathrm{w})$-cscK in the sense of Definition \ref{weighted-cscK-metric}, where the weights are given by
\begin{equation}{\label{weights}}
\begin{split}
\mathrm{v}(x) &= \prod_{a=1}^k \big(\langle p_a,x \rangle +c_a \big)^{d_a}\\
\mathrm{w}(x)&=\v(x)\left( \ell_{\textnormal{ext}}(x) - \sum_{a=1}^k \frac{Scal_a}{\langle p_a, x \rangle +c_a} \right),
\end{split}
\end{equation}
\noindent where $\ell_{\textnormal{ext}} \in \mathrm{Aff}(P)$ is defined in $(\ref{affin-extremal})$.
\end{corollary}
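\noindent To illustrate Corollary \ref{equivalence-scalar} in the simplest case $k=1$, $\ell=1$ described above (so that $M=\mathbb{P}(\mathcal{O}\oplus L) \rightarrow S$ with $S$ cscK of complex dimension $d$, $p_1=p \in \Lambda$ and $\mathfrak{t} \cong \mathbb{R}$), the weights $(\ref{weights})$ reduce to
\begin{equation*}
\mathrm{v}(x)=(px+c_1)^{d}, \qquad \mathrm{w}(x)=(px+c_1)^{d}\left(\ell_{\textnormal{ext}}(x)-\frac{Scal_1}{px+c_1}\right),
\end{equation*}
\noindent which are, up to normalization, the weight functions underlying the theory of admissible metrics on projective line bundles.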
\subsection{The space of functions}
Any $\mathbb{T}_M$-invariant smooth function on $M$ pulls back to a $\mathbb{T}_Q\times \mathbb{T}_V$-invariant function on $Q\times V,$ and therefore descends to a $\mathbb{T}_V$-invariant smooth function on $S\times V$ (see $\S$\ref{subsection-torique-rigide}). This gives rise to an isomorphism of Fréchet spaces
\begin{equation}{\label{identification-functio-cartesian}}
C^{\infty}(M)^{\mathbb{T}} \cong C^{\infty}(S \times V)^{\mathbb{T}_V},
\end{equation}
\noindent which we shall tacitly use throughout the paper. Moreover, by (\ref{identification-M0}) we get
\begin{equation*}
C^{\infty}(M^0)^{\mathbb{T}} \cong C^{\infty}(S \times P^0).
\end{equation*}
Given $f\in \mathcal{C}^{\infty}(M)^{\mathbb{T}}$, for any $s\in S$, we denote by $f_s \in \mathcal{C}^{\infty}(V)^{\mathbb{T}}$ the induced smooth function on $V$ with respect to the identification (\ref{identification-functio-cartesian}). Similarly, for any $x \in V$, we denote by $f_x \in \mathcal{C}^{\infty}(S)$ the induced smooth function on $S$. It follows that on $\mathcal{C}^{\infty}(M)^{\mathbb{T}}$, the differential operator $d$ splits as $d=d_S+d_V$, where $d_S$ and $d_V$ are the exterior derivatives on $S$ and $V$, respectively. We get
\begin{equation}{\label{function-from-fiber}}
\mathcal{C}^{\infty}(V)^{\mathbb{T}}\cong\{ f \in \mathcal{C}^{\infty}(M)^{\mathbb{T}} \text{ } | \text{ } d_Sf_x=0 \text{ }\forall x \in V\},
\end{equation}
\noindent showing that $\mathcal{C}^{\infty}(V)^{\mathbb{T}}$ is closed in $ \mathcal{C}^{\infty}(M)^{\mathbb{T}}$ for the Fréchet topology.
\subsection{The space of compatible potentials}{\label{section-compatible-potentials}} We fix a reference K\"ahler metric $\tilde{\omega}_0$ on $(M,J)$, its corresponding K\"ahler metric $\omega_0$ on $(V,J_V)$ and the weights $(\mathrm{v},\mathrm{w})$ given by (\ref{weights}). We denote by $\mathcal{K}(V, \omega_0)^{\mathbb{T}}$ the space of smooth K\"ahler potentials on $V$ relative to $\omega_0$ and by $\omega_{\varphi}=\omega_0+d_Vd_V^c\varphi$ the corresponding K\"ahler metric on $(V,J_V)$. Similarly, we denote by $\mathcal{K}(M,\tilde{\omega}_0)^{\mathbb{T}}$ the space of smooth $\mathbb{T}$-invariant K\"ahler potentials on $(M,J)$ relative to $\tilde{\omega}_0$ and by $\tilde{\omega}_{\varphi}=\tilde{\omega}_0+dd^c\varphi$ the corresponding K\"ahler metric. The following lemma is established in \cite[Lemma 7]{VA3}.
\begin{lemma}{\label{changeofmetric}}
Let $\omega_{\varphi}=\omega_0 +d_Vd^c_V \varphi$ be a $\mathbb{T}$-invariant K\"ahler metric on $(V,J_V)$ and denote by $m_{\varphi}$ the moment map which satisfies normalization (\ref{normalizing-moment-map}). Then the compatible K\"ahler metric $\tilde{\omega}_{\varphi}$ induced by $\omega_{\varphi}$ on $M$ is given by $\tilde{\omega}_{\varphi}=\tilde{\omega}_0+dd^c\varphi$, where $\varphi$ is seen as a smooth function on $M$ via $(\ref{identification-functio-cartesian})$.
\end{lemma}
It follows that $\mathcal{K}(V, \omega_0)^{\mathbb{T}}$ parametrizes the compatible K\"ahler metrics on $(M,J)$ given explicitly by $(\ref{metriccalabidata})$, and will be referred to as \textit{the space of compatible K\"ahler potentials}. From Proposition \ref{extremal-vector-fields} and Lemma \ref{changeofmetric} we deduce:
\begin{corollary}{\label{change-of-metric-cor}}
There is an embedding of Fréchet spaces $ \mathcal{K}(V,\omega_0)^{\mathbb{T}} \hookrightarrow \mathcal{K}(M,\tilde{\omega}_0)^{T}$.
\end{corollary}
\section{Weighted distance, functionals and operators}{\label{section-distance}}
\subsection{Weighted distance}
Thanks to the work of Mabuchi \cite{TM1, TM}, it is well known that $\mathcal{K}(M,\tilde{\omega}_0)^{\mathbb{T}}$ is an infinite-dimensional Riemannian manifold when equipped with the Mabuchi metric:
\begin{equation*}
\langle \dot{\varphi}_0, \dot{\varphi}_1 \rangle_{\varphi} = \int_M \dot{\varphi}_0 \dot{\varphi}_1 \tilde{\omega}_{\varphi}^{[n]} \text{ } \text{ } \forall \dot{\varphi}_0, \dot{\varphi}_1 \in T_{\varphi} \mathcal{K}(M, \tilde{\omega}_0)^{\mathbb{T}}.
\end{equation*}
\noindent Furthermore, a path $(\varphi_t)_{t\in[0,1]} \subset \mathcal{K}(M, \tilde{\omega}_0)^{\mathbb{T}} $ connecting two points is a smooth geodesic if and only if
\begin{equation}{\label{geodesic}}
\Ddot{\varphi}_t - |d\dot{\varphi}_t|^2_{\varphi_t}=0.
\end{equation}
The following result is proved in \cite[Lemma 5.6]{VA6} in the more general context of semi-simple principal fiber bundles and follows easily from the expression $(\ref{volume-form})$ of the volume form of a compatible K\"ahler metric.
\begin{lemma}{\label{equality-mabuchi-norm}}
Let $\varphi \in \mathcal{K}(V, \omega_0)^{\mathbb{T}}$ and $f \in T_{\varphi}\mathcal{K}(V, \omega_0)^{\mathbb{T}}$, also viewed as an element of $T_{\varphi}\mathcal{K}(M,\tilde{\omega}_0)^{\mathbb{T}}$. Then
\begin{equation*}
|df|^2_{\tilde{\omega}_{\varphi}}=|df|^2_{{\omega}_{\varphi}}.
\end{equation*}
\noindent In particular, $\mathcal{K}(V,\omega_0)^{\mathbb{T}}$ is a totally geodesic submanifold of $\mathcal{K}(M,\tilde{\omega}_0)^{\mathbb{T}}$ with respect to the Mabuchi metric.
\end{lemma}
In \cite{DG}, Guan showed the existence of a smooth geodesic between two $\mathbb{T}$-invariant K\"ahler potentials on a toric manifold. The same argument shows that any two elements $\varphi_0$, $\varphi_1 \in \mathcal{K}(V,\omega_0)^{\mathbb{T}} \subset \mathcal{K}(M,\tilde{\omega}_0)^{\mathbb{T}}$ are connected by a smooth geodesic.
\begin{remark}
On a general compact K\"ahler manifold $(M, J, \omega_0)$, Darvas \cite{DT} introduced the distance $d_1$ as
\begin{equation}{\label{distance-d1}}
d_1(\varphi_0,\varphi_1):=\inf_{\varphi_t} \int_0^1 \int_M | \dot{\varphi}_t| \,\omega^{[n]}_t\,dt,
\end{equation}
\noindent where $\omega_t^{[n]}$ is the volume form associated to the metric $\omega_t=\omega_0+dd^c\varphi_t$ and the infimum is taken over the space of smooth curves $\{\varphi_t\}_{t\in[0,1]} \subset \mathcal{K}(M,\omega_0)^{\mathbb{T}}$ joining $\varphi_0$ to $\varphi_1$. In the above formula, $\dot{\varphi}_t$ is the variation of $\varphi_t$ with respect to $t$. It is shown in \cite{TD} that $d_1(\varphi_0, \varphi_1)$ equals the length of the unique (weak) $C^{1, \bar 1}$-geodesic \cite{XC} joining $\varphi_0$ and $\varphi_1$.
\end{remark}
\begin{lemma}{\label{restriction-distance}}
The distance $d_1$ restricts (up to a positive multiplicative constant) to the distance $d^V_{1,\v}$ on $\mathcal{K}(V, \omega_0)^{\mathbb{T}}$, defined by
\begin{equation}{\label{distance-pondéré}}
d^V_{1,\mathrm{v}}(\varphi_0,\varphi_1):=\inf_{\varphi_t} \int_0^1 \int_V | \dot{\varphi}_t| \,\mathrm{v}(m_t)\,\omega_t^{[\ell]}\,dt.
\end{equation}
\end{lemma}
\noindent We refer to \cite[Corollary 5.5]{VA6} for the proof. It also follows directly from $(\ref{volume-form})$ and the smooth geodesic connectedness of $\mathcal{K}(V, \omega_0)^{\mathbb{T}}$.
\subsection{Weighted functionals}{\label{section-functionals}}
\noindent We consider the Mabuchi energy on $\mathcal{K}(M,\tilde{\omega}_0)^{\mathbb{T}}$ relative to $\mathbb{T}$, characterized by its variation
\begin{equation*}
d_{\varphi}\mathcal{M}^{\mathbb{T}}(\Dot{\varphi}) = - \int_M \Dot{\varphi} \Big(Scal(\tilde{\omega}_{\varphi}) - \Pi_{\tilde{\omega}_{\varphi}}\big(Scal(\tilde{\omega}_{\varphi})\big)\Big) \tilde{\omega}_{\varphi}^{[n]}, \text{ } \text{ }
\mathcal{M}^{\mathbb{T}}(0)=0.
\end{equation*}
\noindent When restricted to $\mathcal{K}(V, \omega_0)^{\mathbb{T}} \subset \mathcal{K}(M, \tilde \omega_0)^{\mathbb{T}}$, this functional is a special case of the weighted Mabuchi functional introduced in \cite{AL}. In our case, the following lemma, established in \cite{VA3}, follows directly from (\ref{weighted-scalarcurv}) and (\ref{weights}).
\begin{lemma}{\label{mabuchi-energy-restriction}}
The restriction of the Mabuchi relative energy $\mathcal{M}^{\mathbb{T}}$ to $\mathcal{K}(V, \omega_0)^{\mathbb{T}}$ is equal (up to a positive multiplicative constant) to the weighted Mabuchi energy, defined by
\begin{equation}{\label{definition-weighted-eneergy}}
d_{\varphi}\mathcal{M}_{\mathrm{v},\mathrm{w} }(\Dot{\varphi}) := - \int_V\big(Scal_{\mathrm{v}}(\omega_{\varphi}) - \mathrm{w}(m_{\varphi})\big)\Dot{\varphi}\,\omega_{\varphi}^{[\ell]}, \text{ } \text{ } \mathcal{M}_{\mathrm{v},\mathrm{w} }(0)=0,
\end{equation}
\noindent with weights $(\v,\mathrm{w})$ given by $(\ref{weights})$, and where $\varphi \in \mathcal{K}(V, \omega_0)^{\mathbb{T}}$ and $\dot{\varphi} \in T_{\varphi}\mathcal{K}(V, \omega_0)^{\mathbb{T}}$. In particular, compatible extremal K\"ahler metrics in $[\tilde{\omega}_0]$ are critical points of $\mathcal{M}_{\mathrm{v},\mathrm{w}} : \mathcal{K}(V,\omega_0)^{\mathbb{T}} \longrightarrow \mathbb{R}$.
\end{lemma}
The Aubin-Mabuchi functional $\mathcal{I} : \mathcal{K}(M,\tilde{\omega}_0)^{\mathbb{T}} \longrightarrow \mathbb{R}$ is defined by
\begin{equation*}\label{I-functionnal}
d_{\varphi}\mathcal{I}(\Dot{\varphi}) = \int_M\Dot{\varphi}\tilde{\omega}_{\varphi}^{[n]}, \text{ } \text{ } \mathcal{I}(0)=0,
\end{equation*}
\noindent for any $\dot{\varphi} \in T_{\varphi}\mathcal{K}(M,\tilde{\omega}_0)^{\mathbb{T}}$. By (\ref{volume-form}), its restriction to $\mathcal{K}(V,\omega_0)^{\mathbb{T}} \subset \mathcal{K}(M,\tilde{\omega}_0)^{\mathbb{T}}$ is equal (up to a positive multiplicative constant) to
\begin{equation}{\label{Ir-functionnal}}
d_{\varphi}\mathcal{I}_{\v}(\Dot{\varphi}) := \int_V\Dot{\varphi}\v(m_{\varphi})\omega_{\varphi}^{[\ell]}, \text{ } \mathcal{I}_{\v}(0)=0
\end{equation}
\noindent for any $\dot{\varphi} \in T_{\varphi}\mathcal{K}(V, \omega_0)^{\mathbb{T}}$. We define the space of $\mathcal{I}$-normalized relative K\"ahler potentials as
\begin{equation}{\label{normalized-potential}}
\mathring{\mathcal{K}}(M,\tilde{\omega}_0)^{\mathbb{T}}:=\mathcal{I}^{-1}(0) \subset \mathcal{K}(M,\tilde{\omega}_0)^{\mathbb{T}}.
\end{equation}
\noindent It is well known, see e.g. \cite[chapter 4]{PG}, that this space is totally geodesic in $\mathcal{K}(M,\tilde{\omega}_0)^{\mathbb{T}}$. Similarly, we define
\begin{equation}{\label{normalized-compatible-potential}}
\mathring{\mathcal{K}}_{\v}(V,\omega_0)^{\mathbb{T}}:=\mathcal{I}_{\v}^{-1}(0) \subset \mathcal{K}(V,\omega_0)^{\mathbb{T}}.
\end{equation}
\noindent It follows from $(\ref{Ir-functionnal})$ that we also have $\mathring{\mathcal{K}}_{\v}(V,\omega_0)^{\mathbb{T}} \subset \mathring{\mathcal{K}}(M,\tilde{\omega}_0)^{\mathbb{T}} $.
\subsection{Weighted differential operators}{\label{section-operator}}
Following \cite{VA3}, we introduce the $\v$-Laplacian of $(V,J_V,\omega)$, acting on smooth functions by
\begin{equation*}
\Delta^V_{\omega,\v}f:=\frac{1}{\v(m_{\omega})}\delta \big(\v(m_{\omega})d_Vf\big),
\end{equation*}
\noindent where $\delta$ is the formal adjoint of the differential $d_V$ with respect to $\omega^{[\ell]}$. This definition immediately implies that $\Delta^V_{\omega,\v}$ is self-adjoint with respect to $\v(m_{\omega})\omega^{[\ell]}$. Moreover, it follows from the computations in \cite[Lemma~8]{VA3} that $\Delta^V_{\omega,\v}$ can be alternatively expressed as
\begin{equation}{\label{expression-v-laplacien}}
\Delta^V_{\omega,\v}f= \Delta_{\omega}f - \sum_{a=1}^k \frac{d_ad_V^cf(p^V_a)}{\langle p_a,m_{\omega} \rangle + c_a}
\end{equation}
\noindent \noindent for any $f \in \mathcal{C}^{\infty}(V)$, where $\Delta_{\omega}$ is the Laplacian with respect to $\omega$ and $p_a^V$ is the fundamental vector field on $V$ defined by $p_a \in \mathfrak{t}$. As in \cite{AL}, we introduce the $\v$-weighted Lichnerowicz operator of $(V,J_V,\omega)$ defined on the smooth functions $f \in \mathcal{C}^{\infty}(V)$ to be
\begin{equation}{\label{define-lichne-pondéré}}
\mathbb{L}^V_{\omega,\v}f:=\frac{\delta \delta\big(\v(m_{\omega})(D^-d_Vf)\big)}{\v(m_{\omega})},
\end{equation}
\noindent where $D$ is the Levi-Civita connection of $\omega$, $D^-d_V$ denotes the $(2,0)+(0,2)$ part of $Dd_V$ and $\delta : \otimes^pT^*V \longrightarrow \otimes^{p-1}T^*V$ is defined in any local orthonormal frame $\{e_1,\dots, e_{2\ell} \}$ by
\begin{equation*}
\delta \psi := - \sum_{i=1}^{2\ell} e_i \lrcorner D_{e_i} \psi
\end{equation*}
\noindent where $\lrcorner$ denotes the interior product. The operator $\delta \delta$ is the formal adjoint of $D^-d_V$ with respect to $\omega^{[\ell]}$. Hence, the $\v$-weighted Lichnerowicz operator is self-adjoint with respect to the volume form $\v(m_{\omega})\omega^{[\ell]}$. Let $\tilde{\omega}$ be the compatible K\"ahler metric on $(M,J)$ corresponding to $\omega$. We denote by $\omega_S(x)$ the K\"ahler form on $(S,J_S)$ induced by $\tilde{\omega}$:
\begin{equation*}
\omega_S(x):= \sum_{a =1}^k\big(\langle p_a, m_{\omega}(x) \rangle +c_a \big) \omega_a.
\end{equation*}
\noindent The following is established in the proof of \cite[Lemma 8]{VA3}.
\begin{prop}{\label{decomposition-Lichne-prop}}
Let $f$ be a $\mathbb{T}$-invariant smooth function on $M$, seen as a $ \mathbb{T}_V$-invariant function on $V\times S$ via $(\ref{identification-functio-cartesian})$. We denote by $\Delta_{\tilde{\omega}}$ the Laplacian of $(M,J,\tilde{\omega})$ and by $\Delta_x^S$, respectively $\mathbb{L}^S_x$, the Laplacian, respectively the Lichnerowicz operator, of $(S,J_S,\omega_S(x))$. We then have
\begin{equation*}
\Delta_{\tilde{\omega}}f=\Delta^V_{\omega,\v}f_s + \Delta_x^Sf_x.
\end{equation*}
\noindent Furthermore, the corresponding Lichnerowicz operators $\mathbb{L}_{\tilde{\omega}}$, $\mathbb{L}^V_{\omega,\v}$ and $\mathbb{L}_x^S$ are related by
\begin{equation}{\label{decomposition-Lichne}}
\mathbb{L}_{\tilde{\omega}}f=\mathbb{L}^V_{\omega,\v}f_s+\mathbb{L}^S_xf_x+\Delta^S_x\big(\Delta^V_{\omega,\v}f_s\big)_x + \Delta^V_{\omega,\v}\big(\Delta_x^S f_x\big)_s + \sum_{a=1}^kQ_a(x)\Delta_{a}f_x,
\end{equation}
\noindent where $\Delta _a$ is the Laplacian with respect to $(S_a,J_a,\omega_a)$ and $Q_a(x)$ is a smooth function on $V$.
\end{prop}
\noindent We fix a compatible K\"ahler metric
\begin{equation*}
\tilde{\chi}=\sum_{a=1}^k (\langle p_a, m_{\chi} \rangle + c_{a,\alpha})\omega_a + \chi
\end{equation*}
\noindent corresponding to a K\"ahler metric $\chi$ on $(V,J_V)$, where $m_{\chi}$ is a moment map with respect to $\chi$ and $c_{a, \alpha}$ are constants depending on $\alpha:=[\tilde{\chi}]$ such that $\langle p_a, m_{\chi} \rangle + c_{a,\alpha}>0$.
Hashimoto introduced in \cite{YH} the operator $\mathbb{H}_{\tilde{\omega}}^{\tilde{\chi}} : \mathcal{C}^{\infty}(M)^{\mathbb{T}} \longrightarrow \mathcal{C}^{\infty}(M)^{\mathbb{T}}$ defined by
\begin{equation}{\label{Hashimoto-operator-definition}}
\mathbb{H}^{\tilde{\chi}}_{\tilde{\omega}}f:= g_{\tilde{\omega}}\big(\tilde{\chi}, dd^c f\big) + g_{\tilde{\omega}}\big(d \Lambda_{\tilde{\omega}} \tilde{ \chi}, df\big).
\end{equation}
\noindent According to \cite[Lemma~1]{YH}, $\mathbb{H}^{\tilde{\chi}}_{\tilde{\omega}}$ is a second order elliptic self-adjoint differential operator with respect to $\tilde{\omega}^{[n]}$. Furthermore, the kernel of $\mathbb{H}^{\tilde{\chi}}_{\tilde{\omega}}$ is the space of constant functions. We define the $\v$-weighted \textit{Hashimoto operator} $\mathbb{H}^{\chi}_{\omega,\v} : \mathcal{C}^{\infty}(V)^{\mathbb{T}} \rightarrow \mathcal{C}^{\infty}(V)^{\mathbb{T}}$ by
\begin{equation*}
\mathbb{H}^{\chi}_{\omega,\v}f:= g_{\omega}\big(\chi, d_Vd_V^c f \big) + g_{\omega}\big(d_V \Lambda_{\omega} \chi, d_Vf\big) + \frac{1}{\v(m_{\omega})}g_{\omega}\big(\chi,d_V\v(m_{\omega}) \wedge d_V^cf\big).
\end{equation*}
\begin{prop}{\label{decomposition-Hashimoto}}
Let $f\in \mathcal{C}^{\infty}(M)^{\mathbb{T}}$ be seen as a $\mathbb{T}_V$-invariant function on $V \times S$ via $(\ref{identification-functio-cartesian})$. The Hashimoto operator admits the following decomposition:
\begin{equation*}
\mathbb{H}^{\tilde{\chi}}_{\tilde{\omega}}f= \mathbb{H}_{\omega,\v}^{\chi}f_s + \sum_{a=1}^k R_a(x)\Delta_a f_x,
\end{equation*}
\noindent where $R_a(x)$ is a smooth function on $V$ depending on $\chi$ and $\alpha$.
\end{prop}
\begin{proof}
\noindent For simplicity, we denote by $m$ the moment map of $\omega$ and we set
\begin{equation*}
q(m):=\sum_{a=1}^k \frac{d_a \big( \langle p_a,m_{\chi} \rangle +c_{a,\alpha}\big) }{\langle p_a,m \rangle +c_{a}}.
\end{equation*}
Let $K \in \Gamma(TV)^{\mathbb{T}} \otimes \mathfrak{t}^*$ be the generator of the $\mathbb{T}_V$-action. By definition, $d_V^cf(K)$ is a smooth $\mathbb{T}_V$-invariant $\mathfrak{t}^*$-valued function on $V$ and induces a smooth $\mathbb{T}_M$-invariant $\mathfrak{t}^*$-valued function on $M$ via (\ref{identification-functio-cartesian}). It is shown in the proof of \cite[Lemma 8]{VA3} that, on $M^0$,
\begin{equation}{\label{decomposition-ddc}}
\begin{split}
dd^cf=& \langle d_V(d^c_Vf_s(K))_s \wedge \theta \rangle + \langle d_S(d^c_Vf_s(K))_x \wedge \theta \rangle \\
&+ \sum_{a=1}^k d^c_Vf_s(p_a^V) \omega_a + d_Sd^c_Sf_x + \langle d^c_S(d^c_Vf_s(K))_x,J\theta \rangle.
\end{split}
\end{equation}
\noindent First, we recall the general identity
\begin{equation}{\label{1er-term-produit}}
g_{\tilde{\omega}}(dd^cf,\tilde{\chi})\tilde{\omega}^{[n]}=-dd^cf \wedge \tilde{\chi} \wedge \tilde{\omega}^{[n-2]} - \Delta_{\tilde{\omega}}f \Lambda_{\tilde{\omega}}(\tilde{\chi}) \tilde{\omega}^{[n]}.
\end{equation}
\noindent From the expression of $\tilde{\chi}$ and $\tilde{\omega}$, we can see that
\begin{equation*}
\bigg( \langle d_S(d^c_Vf_s(K))_x \wedge \theta \rangle + \langle d^c_S(d^c_Vf_s(K))_x,J\theta \rangle\bigg) \wedge \tilde{\chi} \wedge \tilde{\omega}^{[n-2]}=0.
\end{equation*}
\noindent A straightforward computation gives
\begin{equation}{\label{trace-calcul-equation-nvx}}
\Lambda_{\tilde{\omega}}(\tilde{\chi})=\Lambda_{\omega}(\chi) + \sum_{a=1}^k \frac{d_a \big( \langle p_a,m_{\chi} \rangle +c_{a,\alpha}\big) }{\langle p_a,m \rangle +c_{a}}.
\end{equation}
\noindent From Proposition \ref{decomposition-Lichne-prop}, (\ref{trace-calcul-equation-nvx}) and (\ref{1er-term-produit}) we have
\begin{equation}
g_{\tilde{\omega}}(dd^cf_s,\tilde{\chi}) = g_{\omega}(d_Vd^c_Vf_s,\chi) + \sum_{a=1}^k\frac{d_a d_Vf_s(p_a^V) (\langle p_a,m_{\chi}\rangle +c_{a,\alpha})}{(\langle p_a,m\rangle +c_{a})^2}.
\end{equation}
\noindent Using (\ref{trace-calcul-equation-nvx}) we get
\begin{equation}{\label{enfin-hasimoto}}
g_{\tilde{\omega}}\big(d\Lambda_{\tilde{\omega}}(\tilde{\chi}),df_s\big)= g_{\omega}\big(d_V\Lambda_{\omega}(\chi),d_Vf_s\big)+ g_{\omega}\big(d_Vq(m), d_V^cf_s \big).
\end{equation}
\noindent To summarize, we have shown
\begin{equation}{\label{Hashimoto-1}}
\begin{split}
\mathbb{H}^{\tilde{\chi}}_{\tilde{\omega}}f_s=& g_{\omega}(d_Vd^c_Vf_s,\chi)+ g_{\omega}\big(d_V\Lambda_{\omega}(\chi),d_Vf_s\big)\\
&+ g_{\omega}\big(d_Vq(m) , d_V^cf_s \big) + \sum_{a =1}^k\frac{d_a d_Vf_s(p_a^V) (\langle p_a,m_{\chi}\rangle +c_{a,\alpha})}{(\langle p_a,m\rangle +c_{a})^2}.
\end{split}
\end{equation}
\noindent Using (\ref{weights}) we have
\begin{equation}{\label{terme-en-trop}}
\frac{1}{\v(m)} g_{\omega}\big(\chi,d_V\v(m) \wedge d_V^cf_s\big)= g_{\omega}\big(d_Vq(m) , d_V^cf_s \big) + \sum_{a =1}^k\frac{d_a d_Vf_s(p_a^V) (\langle p_a,m_{\chi}\rangle +c_{a,\alpha})}{(\langle p_a,m\rangle +c_{a})^2}.
\end{equation}
\noindent From $(\ref{Hashimoto-1})$ and $(\ref{terme-en-trop})$ we get
\begin{equation*}
\mathbb{H}^{\tilde{\chi}}_{\tilde{\omega}}f_s = \mathbb{H}^{\chi}_{\omega,\v}f_s.
\end{equation*}
\noindent The term $\mathbb{H}^{\tilde{\chi}}_{\tilde{\omega}}f_x$ is obtained via a similar computation.
\end{proof}
\begin{remark}{\label{remark-Hashimoto}}
Proposition \ref{decomposition-Hashimoto} implies in particular that the restriction of $\mathbb{H}^{\tilde{\chi}}_{\tilde{\omega}}$ to the Fréchet subspace $\mathcal{C}^{\infty}(V)^{\mathbb{T}} \subset \mathcal{C}^{\infty}(M)^{\mathbb{T}}$ coincides with $\mathbb{H}_{\omega,\v}^{\chi}$. It follows that $\mathbb{H}_{\omega,\v}^{\chi}$ is a second order elliptic operator, self-adjoint with respect to $\v(m_{\omega})\omega^{[\ell]}$.
\end{remark}
\section{An analytic criterion for the existence of extremal K\"ahler metrics}{\label{section-chencheng}}
In this section we recall the existence results of extremal K\"ahler metrics in a given K\"ahler class, proved by Chen--Cheng \cite{XC1, XC2} in the constant scalar curvature case and extended by He \cite{WH} to the extremal case.
We fix a compact K\"ahler manifold $(M,J,\omega_0)$, a maximal compact subgroup $K \subset \mathrm{Aut}_{\mathrm{red}}(M)$ and let $\xi_{\textnormal{ext}}$ denote the corresponding extremal vector field, as explained in \S \ref{section-extremal-vector-fields}. The identity component of the complex automorphism group acts on $\mathring{\mathcal{K}}(M,\omega_0)^{\mathbb{T}}$ (\ref{normalized-potential}) via the natural action on K\"ahler metrics in $[\omega_0]$. Let $G$ be the complexification of $K$; we introduce the distance $d_{1,G}$ relative to $G$:
\begin{equation}{\label{distance-relative}}
d_{1,G}(\varphi_1,\varphi_2):=\inf_{\gamma \in G}d_1(\varphi_1, \gamma \cdot \varphi_2),
\end{equation}
\noindent where $d_1$ is defined in (\ref{distance-d1}). We recall the following definition from \cite{DR}:
\begin{define}{\label{def-proper}} The relative Mabuchi energy $\mathcal{M}^K$ is said to be \textit{proper} with respect to $d_{1,G}$ if
\begin{itemize}
\item $\mathcal{M}^K$ is bounded from below on $\mathcal{K}(M, \omega_0)^K$;
\item for any sequence $\varphi_i \in \mathring{\mathcal{K}}(M,\omega_0)^{K}$, $d_{1,G}(0, \varphi_i) \rightarrow \infty$ implies that $\mathcal{M}^K(\varphi_i)\rightarrow \infty$.
\end{itemize}
\end{define}
\begin{theorem}{\label{Chen--Cheng-existence}}
Let $K$ be a maximal compact subgroup of $\mathrm{Aut}_{\mathrm{red}}(M)$ and $G$ its complexification. Then, the relative Mabuchi energy $\mathcal{M}^K$ is $d_{1,G}$-proper if and only if there exists an extremal K\"ahler metric in $(M,J,[\omega_0])$ with extremal vector field $\xi_{\textnormal{ext}}$.
Moreover, the same assertion holds when replacing $K$ by a maximal torus $T \subset \mathrm{Aut}_{\mathrm{red}}(M)$ and $G$ by its complexification $T^{\mathbb{C}}$.
\end{theorem}
The first assertion is established in \cite{WH}; the argument can be directly modified to obtain the second. Indeed, let $T \subset \mathrm{Aut}_{\mathrm{red}}(M)$ be a maximal torus contained in $K$. Since the extremal vector field $\xi_{\textnormal{ext}}$ is central in the Lie algebra of $K$, we can introduce the Mabuchi energy $\mathcal{M}^T$ relative to $T$, and $\mathcal{M}^T|_{\mathcal{K}(M,\omega_0)^K}=\mathcal{M}^K$. Since any $T$-orbit is contained in a $K$-orbit, $d_{1,T^{\mathbb{C}}}$-properness is stronger than $d_{1,G}$-properness. We conclude that the $d_{1, T^{\mathbb{C}}}$-properness of $\mathcal{M}^T$ implies the existence of a $T$-invariant extremal K\"ahler metric in $[\omega_0]$.
Conversely, suppose $[\omega_0]$ admits a $T$-invariant extremal K\"ahler metric. Then the proof of \cite[Theorem~3.7]{WH} yields the $d_{1,T^{\mathbb{C}}}$-properness of $\mathcal{M}^T$, provided one has the uniqueness of $T$-invariant extremal K\"ahler metrics modulo $T^{\mathbb{C}}$. Generalizing the results of Berman--Berndtsson \cite{BB} and Chen--Paun--Zeng \cite{CPZ}, Lahdili showed, in the more general context of $(\v,\mathrm{w})$-weighted metrics \cite[Theorem~2, Remark~2]{AL2}, that $T$-invariant extremal metrics are unique modulo the action of $T^{\mathbb{C}}$.
\section{An analytic criterion}{\label{section-theoremA}}
\noindent This section is devoted to the proof of the following result (we use the notation of \S\ref{section-rigidtoric}).
\begin{theorem}{\label{theoremA}}
Let $(M,J, \tilde{\omega}_0, \mathbb{T})$ be a semi-simple principal toric fibration with K\"ahler toric fiber $(V,J_V, \omega_0, \mathbb{T} )$ and let $(\v, \mathrm{w})$ be the corresponding weight functions defined in $(\ref{weights})$. Then, the following statements are equivalent:
\begin{enumerate}
\item there exists an extremal K\"ahler metric in $(M,J,[\tilde{\omega}_0],\mathbb{T})$;
\item there exists a compatible extremal K\"ahler metric in $(M,J,[\tilde{\omega}_0],\mathbb{T})$;
\item there exists a $(\v,\mathrm{w})$-cscK metric in $(V,J_V,[\omega_0],\mathbb{T})$.
\end{enumerate}
\end{theorem}
The equivalence $(2) \Leftrightarrow (3)$ is established in Corollary \ref{equivalence-scalar}, whereas the implication $(2)\Rightarrow(1)$ is clear. We focus on $(1) \Rightarrow (2)$.
We follow the argument of He \cite{WH} by restricting the continuity path of Chen \cite{XC5} to compatible K\"ahler metrics. We consider the continuity path for $ \varphi \in \mathcal{K}(V,\omega_0)^{\mathbb{T}} \subset \mathcal{K}(M,\tilde{\omega}_0)^{\mathbb{T}}$ given by
\begin{equation}{\label{continuity-path-weithed}}
t\big(Scal_{\v}(\omega_{\varphi})-\mathrm{w}(m_{\varphi})\big) = (1-t)\big(\Lambda_{\omega_{\varphi},\v}(\omega_0)-n\big), \text{ } \text{ } t\in[0,1].
\end{equation}
\noindent In the above formula
\begin{equation*}
\Lambda_{\omega_{\varphi},\v}(\omega_0):=\Lambda_{\omega_{\varphi}}(\omega_0) + \sum_{a=1}^k \frac{d_a \big( \langle p_a,m_0 \rangle +c_{a}\big) }{\langle p_a,m_{\varphi} \rangle +c_{a}}
\end{equation*}
\noindent is a smooth function on $V$, equal to $\Lambda_{\tilde{\omega}_{\varphi}}(\tilde{\omega}_0)$. By definition, a solution $\varphi_1$ at $t=1$ corresponds to a compatible extremal metric on $(M,J)$, or equivalently to a $(\v,\mathrm{w})$-cscK metric on $(V,J_V)$. For $t_1 \in (0,1]$, we define
\begin{equation}{\label{set-solution}}
S_{t_1}:=\{ t \in (0,t_1] \text{ }| \text{ } (\ref{continuity-path-weithed}) \text{ has a solution } \varphi_t \in \mathcal{K}(V,\omega_0)^{\mathbb{T}}\}.
\end{equation}
\noindent We need to show that $S_1$ is open, closed and non-empty.
\subsection{Openness}
\begin{prop}{\label{openess}}
$S_1$ is open and non-empty.
\end{prop}
\noindent For a compatible K\"ahler form $\tilde{\omega}$ on $(M,J)$ corresponding to a K\"ahler metric $\omega$ on $(V,J_V)$, we denote by $C^{\infty}(M,\tilde{\omega})^{\mathbb{T}}$ the space of $\mathbb{T}_M$-invariant smooth functions with zero mean value with respect to $\tilde{\omega}^{[n]}$ and by $\mathcal{C}_{\v}^{\infty}(V, \omega)^{\mathbb{T}} \subset C^{\infty}(M,\tilde{\omega})^{\mathbb{T}}$ the space of $\mathbb{T}_V$-invariant smooth functions with zero mean value with respect to $\v(m_{\omega})\omega^{[\ell]}$. The following is an adaptation of \cite[Lemma~3.2]{WH}.
\begin{lemma}{\label{prop-starting-point}}
$S_1$ is non-empty.
\end{lemma}
\begin{proof}
Let $\omega$ be a K\"ahler metric on $(V,J_V)$ and $\tilde{\omega}$ its associated compatible K\"ahler metric on $(M,J)$ via (\ref{metriccalabidata}). Since $\Delta_{\omega,\v}^V$ is self-adjoint with respect to $\v(m_{\omega})\omega^{[\ell]}$, it follows from the proof of Proposition \ref{operator-iso} below that
\begin{equation}{\label{iso-laplacian-poid}}
\Delta_{\omega,\v}^V : \mathcal{C}_{\v}^{\infty}(V, \omega)^{\mathbb{T}} \longrightarrow \mathcal{C}_{\v}^{\infty}(V, \omega)^{\mathbb{T}}
\end{equation}
\noindent is an isomorphism. Denote by $f \in C^{\infty}(M,\tilde{\omega})^{\mathbb{T}}$ the unique solution of
\begin{equation}{\label{poisson-solution}}
\Delta_{\tilde{\omega}} f = Scal_{\v}(\omega) - \mathrm{w}(m_{\omega}).
\end{equation}
\noindent By $(\ref{iso-laplacian-poid})$, $f \in \mathcal{C}_{\v}^{\infty}(V, \omega)^{\mathbb{T}}$. Now we choose $\tilde{\omega}:=\tilde{\omega}_0+dd^c\frac{f}{r}$. Since $f$ is a $\mathbb{T}_V$-invariant smooth function on $V$, $\tilde{\omega}$ is both K\"ahler and compatible for $r$ sufficiently large, by Lemma \ref{changeofmetric}. Then
\begin{equation*}
\Delta_{\tilde{\omega}}f = -r\,\Lambda_{\tilde{\omega}}\Big(dd^c\frac{f}{r}\Big) = r\, \Lambda_{\tilde{\omega}} \big(\tilde{\omega}_0 - \tilde{\omega}\big) = r\big( \Lambda_{\omega,\v}(\omega_0) - n \big),
\end{equation*}
\noindent where we used $\tilde{\omega}_0=\tilde{\omega}-dd^c\frac{f}{r}$, $\Lambda_{\tilde{\omega}}\tilde{\omega}=n$ and $\Lambda_{\tilde{\omega}}(\tilde{\omega}_0)=\Lambda_{\omega,\v}(\omega_0)$.
\noindent Now let us write $r=t_0^{-1}-1$ for $t_0 \in (0,1)$ sufficiently small. Then $(\omega,t_0)$ is a solution of $(\ref{continuity-path-weithed})$.
\end{proof}
Now we show that $S_1$ is open. We fix $(\omega_{t_0},t_0)$, the solution of (\ref{continuity-path-weithed}) given by Lemma~\ref{prop-starting-point}, and let $\tilde{\omega}_{t_0}=\tilde{\omega}_0 +dd^c\varphi_{t_0}$ be its associated compatible K\"ahler metric on $(M,J)$, with $\varphi_{t_0} \in \mathcal{K}(V,\omega_0)^{\mathbb{T}}$. Let $\pi : \mathcal{C}^{\infty}(M)^{\mathbb{T}} \longrightarrow \mathcal{C}^{\infty}(M,\tilde{\omega}_{t_0})^{\mathbb{T}}$ be the linear projection:
\begin{equation*}
\pi(f):= f - \frac{1}{\int_M \tilde{\omega}^{[n]}_{t_0}} \int_M f \tilde{\omega}^{[n]}_{t_0}.
\end{equation*}
\noindent We consider
\begin{equation*}
R : \mathring{\mathcal{K}}(M,\tilde{\omega}_0)^{\mathbb{T}} \times [0,1] \longrightarrow \mathcal{C}^{\infty}(M)^{\mathbb{T}},
\end{equation*}
\noindent defined by
\begin{equation*}
R(\varphi,t):= t\Big(Scal(\tilde{\omega}_{\varphi}) - \Pi_{\tilde{\omega}_{\varphi}}\big(Scal(\tilde{\omega}_{\varphi})\big)\Big) - (1-t)(\Lambda_{\tilde{\omega}_{\varphi}}\tilde{\omega}_0-n).
\end{equation*}
\noindent The linearization of the composition $\pi \circ R$ at $(\varphi_{t_0},t_0)$ is given by
\begin{equation}
D (\pi \circ R)(\varphi_{t_0},t_0)[f,s]= \pi \bigg(\mathcal{L}_{\tilde{\omega}_{t_0}}f+s\bigg(Scal(\tilde{\omega}_{t_0}) - \Pi_{\tilde{\omega}_{t_0}}\big(Scal(\tilde{\omega}_{t_0})\big) +\Lambda_{\tilde{\omega}_{t_0}}\tilde{\omega}_0-n \bigg)\bigg),
\end{equation}
\noindent where
\begin{equation*}
\mathcal{L}_{\tilde{\omega}_{t_0}}=-2t_0\mathbb{L}_{\tilde{\omega}_{t_0}}+(1-t_0)\mathbb{H}^{\tilde{\omega}_0}_{\tilde{\omega}_{t_0}}.
\end{equation*}
\noindent Above we used the notation
\begin{equation*}
\begin{split}
\mathbb{L}_{\tilde{\omega}_{t_0}}f:&=\delta \delta D^-df \\
&= \frac{1}{2}\Delta^2_{\tilde{\omega}_{t_0}}f+g_{\tilde{\omega}_{t_0}}\big(dd^cf,Ric(\tilde{\omega}_{t_0})\big) + \frac{1}{2}g_{\tilde{\omega}_{t_0}}\big(df,dScal(\tilde{\omega}_{t_0})\big),
\end{split}
\end{equation*}
\noindent where $D^-d$ and $\delta$ are introduced in $(\ref{define-lichne-pondéré})$ and $\mathbb{H}^{\tilde{\omega}_0}_{\tilde{\omega}_{t_0}}$ is introduced in (\ref{Hashimoto-operator-definition}). Since $\mathcal{L}_{\tilde{\omega}_{t_0}}$ is a self-adjoint operator with respect to $\tilde{\omega}_{t_0}^{[n]}$, we get
\begin{equation*}
D (\pi \circ R)(\varphi_{t_0},t_0)[f,s]=\mathcal{L}_{\tilde{\omega}_{t_0}}f.
\end{equation*}
\noindent By Proposition \ref{decomposition-Lichne-prop} and Proposition \ref{decomposition-Hashimoto}, the restriction of $\mathcal{L}_{\tilde{\omega}_{t_0}}$ to $\mathcal{C}^{\infty}(V)^{\mathbb{T}}$ is equal to $ \mathcal{L}^V_{\omega_{t_0},\v}$, where
\begin{equation}{\label{decomposition-our-operator-prop}}
\mathcal{L}^V_{\omega_{t_0},\v}:= -2t_0\mathbb{L}^V_{\omega_{t_0},\v}+(1-t_0)\mathbb{H}^{\omega_0}_{\omega_{t_0},\v}.
\end{equation}
\noindent In the above equality, $\omega_{t_0}$ is the K\"ahler metric on $(V,J_V)$ corresponding to $\tilde{\omega}_{t_0}$. By Proposition \ref{decomposition-Lichne-prop} and Proposition \ref{decomposition-Hashimoto} we obtain
\begin{equation}{\label{decomposition-notre-operator}}
\begin{split}
\mathcal{L}_{\tilde{\omega}_{t_0}}f=&\mathcal{L}^V_{\omega_{t_0},\v}f_s + t_0\mathbb{L}^S_xf_x+t_0\Delta^S_x\big(\Delta^V_{\omega_{t_0},\v}f_s\big) _x \\
&+ t_0\Delta^V_{\omega_{t_0},\v} \big(\Delta_x^S f_x\big) _s+\sum_{a=1}^kU_a(x)\Delta_{a}f_x
\end{split}
\end{equation}
\noindent for all $f \in \mathcal{C}^{\infty}(M)^{\mathbb{T}}$, where $U_a(x)$ is a smooth function on $V$. By \cite[Lemma 3.1]{WH} the operator $\mathcal{L}_{\tilde{\omega}_{t_0}}$ extends to an isomorphism between H\"older spaces
\begin{equation}{\label{operator-continuity-path}}
\mathcal{L}_{\tilde{\omega}_{t_0}} : \mathcal{C}^{4,\alpha}(M, \tilde{\omega}_{t_0})^{\mathbb{T}}\longrightarrow \mathcal{C}^{0,\alpha}(M, \tilde{\omega}_{t_0})^{\mathbb{T}},
\end{equation}
\noindent where $\mathcal{C}^{4,\alpha}(M, \tilde{\omega}_{t_0})^{\mathbb{T}}$ is the space of $\mathbb{T}_M$-invariant functions of class $\mathcal{C}^{4,\alpha}$ with zero mean value with respect to $\tilde{\omega}_{t_0}^{[n]}$, and similarly for $\mathcal{C}^{0,\alpha}(M, \tilde{\omega}_{t_0})^{\mathbb{T}}$. By (\ref{decomposition-our-operator-prop}), the restriction of the operator $\mathcal{L}_{\tilde{\omega}_{t_0}}$ to the space $\mathcal{C}^{4,\alpha}_{\v}(V,\omega_{t_0})^{\mathbb{T}}$ is equal to $\mathcal{L}_{\omega_{t_0},\v}^V$, where $\mathcal{C}^{4,\alpha}_{\v}(V,\omega_{t_0})^{\mathbb{T}}$ is the space of $\mathbb{T}_V$-invariant functions of class $\mathcal{C}^{4,\alpha}$ with zero mean value with respect to $\v(m_{t_0})\omega^{[\ell]}_{t_0}$.
\begin{prop}{\label{operator-iso}}
The operator $\mathcal{L}^V_{\omega_{t_0},\v} : \mathcal{C}^{4,\alpha}_{\v}(V,\omega_{t_0})^{\mathbb{T}} \longrightarrow \mathcal{C}^{0,\alpha}_{\v}(V,\omega_{t_0})^{\mathbb{T}}$ is an isomorphism.
\end{prop}
\begin{proof}
Since $\mathcal{L}^V_{\omega_{t_0},\v}$ is the restriction of an injective operator, it is enough to prove the surjectivity. We proceed analogously to the proof of \cite[Lemma 8]{VA3}.
We denote by $L^2_ {0,\v}(V)^{\mathbb{T}}$ the completion for the $L^2$-norm of $\mathcal{C}_{\v}^{0,\alpha}(V,\omega_{t_0})^{\mathbb{T}}$. We argue by contradiction. Assume $\mathcal{L}^V_{\omega_{t_0},\v} :\mathcal{C}_{\v}^{4,\alpha}(V,\omega_{t_0})^{\mathbb{T}} \rightarrow \mathcal{C}_{\v}^{0,\alpha}(V,\omega_{t_0})^{\mathbb{T}}$ is not surjective. Then, there exists a non-zero $\phi \in L^2_{0,\v}(V)^{\mathbb{T}}$ satisfying
\begin{equation}{\label{hypothese}}
\int_V \mathcal{L}^V_{\omega_{t_0},\v}(f) \phi \v(m_{t_0}) \omega_{t_0}^{[\ell]}=0
\end{equation}
\noindent for all $f \in \mathcal{C}_{\v}^{4,\alpha}(V,\omega_{t_0})^{\mathbb{T}}$. We claim it implies
\begin{equation}{\label{Hypothesis2}}
\int_M \mathcal{L}_{\tilde{\omega}_{t_0}}(f) \phi \tilde{\omega}_{t_0}^{[n]}=0
\end{equation}
\noindent for all $f \in \mathcal{C}^{4,\alpha}(M, \tilde{\omega}_{t_0})^{\mathbb{T}}$, which contradicts the surjectivity of $ \mathcal{L}_{\tilde{\omega}_{t_0}} : \mathcal{C}^{4,\alpha}(M, \tilde{\omega}_{t_0})^{\mathbb{T}} \longrightarrow \mathcal{C}^{0,\alpha}(M, \tilde{\omega}_{t_0})^{\mathbb{T}} $ established in \cite[Lemma 3.1]{WH}. Therefore, it is sufficient to show that $(\ref{hypothese})$ implies $(\ref{Hypothesis2})$. For this, we argue similarly to the proof of \cite[Lemma 8]{VA3}, using that the image of $\mathcal{L}^V_{\omega_{t_0},\v}$ is $L^2$-orthogonal to the subspace of constant functions with respect to $\v(m_{t_0})\omega^{[\ell]}_{t_0}$.
\end{proof}
\noindent By the Implicit Function Theorem applied to $\mathcal{L}^V_{\omega_{t_0},\v} : \mathcal{C}^{4,\alpha}_{\v}(V,\omega_{t_0})^{\mathbb{T}} \longrightarrow \mathcal{C}^{0,\alpha}_{\v}(V,\omega_{t_0})^{\mathbb{T}}$, we get solutions $\varphi_{t}$ of (\ref{continuity-path-weithed}) of regularity $\mathcal{C}^{4,\alpha}$ for all $t$ sufficiently close to $t_0$. By a well-known bootstrapping argument, any solution of (\ref{continuity-path-weithed}) of regularity $\mathcal{C}^{4,\alpha}$ is in fact smooth. This concludes the proof of Proposition \ref{openess}.
\subsection{Closedness}
\begin{prop}
$S_{1}$ is closed.
\end{prop}
\begin{proof}
By hypothesis, there exists an extremal K\"ahler metric in $[\tilde{\omega}_0]$. By Theorem \ref{Chen--Cheng-existence}, the relative Mabuchi energy $\mathcal{M}^{T}$ is $d_{1,T^{\mathbb{C}}}$-proper. Let $\{\varphi_i\}_{i \in \mathbb{N}} \subset \mathcal{K}(V,\omega_0)^{\mathbb{T}}$ be a sequence of solutions of (\ref{continuity-path-weithed}) given by Proposition \ref{openess} with $t_i \rightarrow t_1 < 1$. By Corollary \ref{change-of-metric-cor}, the sequence $\{\varphi_i\}_{i \in \mathbb{N}}$ lies in $\mathcal{K}(M,\tilde{\omega}_0)^{T}$. Consequently, the same argument as in \cite[Lemma~3.3]{WH} shows the existence of a smooth limit $\varphi_{t_1} \in \mathcal{K}(M,\tilde{\omega}_0)^{T}$. Moreover, it follows from (\ref{exact-sequence}) and (\ref{identification-functio-cartesian}) that $\mathcal{K}(V,\omega_0)^{\mathbb{T}}$ is closed in $\mathcal{K}(M,\tilde{\omega}_0)^{T}$ for the Fréchet topology. Then, the limit potential $\varphi_{t_1}$ belongs to $\mathcal{K}(V,\omega_0)^{\mathbb{T}}$. In particular, $\tilde{\omega}_{\varphi_{t_1}}$ is a compatible K\"ahler metric.
Let $\tilde{\varphi}_{t_i} \in \mathring{\mathcal{K}}_{\v}(V,\omega_0)^{\mathbb{T}}$ (see $(\ref{normalized-compatible-potential})$) be the solution of $(\ref{continuity-path-weithed})$ at $t_i$, for $t_i$ increasing to $1$. By Theorem \ref{Chen--Cheng-existence}, $\mathcal{M}^{T}$ is $d_{1,T^{\mathbb{C}}}$-proper. Then, by Corollary \ref{change-of-metric-cor}, we get a bound with respect to $d_{1,T^{\mathbb{C}}}$, that is
\begin{equation*}
\sup_{i\in \mathbb{N}}d_{1,T^{\mathbb{C}}}(0,\tilde{\varphi}_{t_i}) < \infty.
\end{equation*}
\noindent By definition of $d_{1,T^{\mathbb{C}}}$, there exist $\gamma_i \in T^{\mathbb{C}}$ and $\varphi_{t_i} \in \mathring{\mathcal{K}}(M,\tilde{\omega}_0)^{T}$ such that $\tilde{\omega}_{\varphi_{t_i}}=\gamma_i^*\tilde{\omega}_{\tilde{\varphi}_{t_i}}$, and
\begin{equation*}
\sup_{i\in \mathbb{N}}d_1(0,\varphi_{t_i}) < \infty.
\end{equation*}
By definition, $\gamma_i$ preserves $J$. Moreover, the form $\tilde{\omega}_{\varphi_{t_i}}$ is not compatible in general, since the connection form $\theta$ and the base K\"ahler metrics $\omega_{a}$ may change under the action of $\gamma_i$. However, by Proposition \ref{extremal-vector-fields}, the $T^{\mathbb{C}}$-action commutes with the $\mathbb{T}_M$-action. Then, for each $t_i$, the $\mathbb{T}_M$-action is still rigid and semi-simple (see Remark \ref{remark-semi-simple}). According to \cite{VA2}, $\tilde{\omega}_{\varphi_{t_i}}$ is given by the generalized Calabi ansatz, with a fixed stable quotient $S= \prod_{a=1}^k S_a$ with respect to the complexified action $\mathbb{T}_M^{\mathbb{C}}$. Thus, there exists a connection 1-form $\theta_{i}$ with curvature
\begin{equation*}
d\theta_{i}=\sum_{a=1}^k \pi_S^*(\omega_{a,i}) \otimes p_{a,i}, \text{ } \text{ } p_{a,i} \in \Lambda,
\end{equation*}
\noindent such that $\tilde{\omega}_{\varphi_i}$ is given by
\begin{equation*}
\tilde{\omega}_{\varphi_{t_i}}= \sum_{a=1}^k(\langle p_{a,i},m_{\varphi_{t_i}}\rangle +c_{a,i})\pi_S^*\omega_{a,i} + \langle dm_{\varphi_{t_i}} \wedge \theta_{i} \rangle,
\end{equation*}
\noindent where $m_{\varphi_{t_i}}$ is the moment map of $(M,J,\tilde{\omega}_{\varphi_{t_i}},\mathbb{T})$.
Since $\tilde{\omega}_{\varphi_{t_i}} \in [\tilde{\omega}_0]$, we have $c_{a,i}=c_a$ and $p_{a,i}=p_{a}$. By \cite[Theorem~3.5]{WH}, $\tilde{\omega}_{\varphi_{t_i}}$ converges smoothly to an extremal metric $\tilde{\omega}_{\varphi_{1}}$. Furthermore, by Proposition \ref{extremal-vector-fields}, the extremal vector field $\xi_{\mathrm{ext}}$ of $[\tilde{\omega}_0]$ relative to $T$ lies in the Lie algebra $\mathfrak{t}$ of $\mathbb{T}_M$. Then, by Corollary \ref{equivalence-scalar} and the smooth convergence of $\tilde{\omega}_{\varphi_{t_i}}$ to $\tilde{\omega}_{\varphi_1}$, we get
\begin{equation}{\label{extremal-presque}}
\langle m_1, \xi_{\mathrm{ext}} \rangle + c_{\mathrm{ext}}=\sum_{a =1}^k \frac{Scal(\omega_{a,1})}{\langle p_a,m_{1} \rangle +c_a} + \frac{1}{\mathrm{v}(m_{1})} Scal_{\v}(\omega_{1}),
\end{equation}
\noindent where $\omega_1$ is the K\"ahler metric on $(V,J_V)$ corresponding to $\tilde{\omega}_{\varphi_1}$. Taking the exterior differential $d_{S_a}$ along $S_a$ in (\ref{extremal-presque}), we get $d_{S_a}Scal(\omega_{a,1})=0$ for all $1\leq a \leq k$, i.e. $\omega_{a,1}$ has constant scalar curvature. Since $[\omega_{a,1}]=[\omega_a]$, it follows that $Scal(\omega_{a,1})=Scal_a$. By definition of $\mathrm{w} \in \mathcal{C}^{\infty}(P,\mathbb{R})$, we get
\begin{equation*}
Scal_{\v}(\omega_1)=\mathrm{w}(m_1).
\end{equation*}
\end{proof}
\begin{corollary}
In a compatible K\"ahler class, the extremal metrics are given by the Calabi ansatz of \cite{VA2}. Equivalently, in a compatible K\"ahler class, the extremal metrics are induced by $(\v,\mathrm{w})$-cscK metrics on $(V,J_V)$ via $(\ref{metriccalabidata})$, for a suitable connection $\theta$ and suitable K\"ahler metrics $\omega_a$.
\end{corollary}
\begin{proof}
Suppose there exists an extremal metric $\omega_{1}$ in $[\tilde{\omega}]$. By a result of Calabi \cite{EC}, $\omega_{1}$ is invariant under a maximal torus $T \subset \mathrm{Aut}_{\mathrm{red}}(M)$. Conjugating if necessary, we may assume that $\mathbb{T}_M \subset T$.
By Theorem \ref{theoremA}, there exists a compatible extremal metric $\omega_{2}$ in $[\tilde{\omega}]$. By Corollary \ref{change-of-metric-cor}, $\omega_{2}$ is $T$-invariant. Then, by uniqueness of extremal K\"ahler metrics invariant under a maximal torus of the reduced automorphism group \cite{BB, CPZ, AL}, there exists $\gamma \in T^{\mathbb{C}}$ such that $ \omega_{1}=\gamma^*\omega_{2}$. Since $\mathbb{T}_M \subset T$, the action of $\mathbb{T}$ on $(M,J, \omega_{1})$ is still rigid and semi-simple, see Remark \ref{remark-semi-simple}. Thus, according to \cite{VA2}, $\omega_{1}$ is given by the Calabi ansatz.
\end{proof}
\section{Weighted toric K-stability}{\label{section-toric}}
\subsection{Complex and symplectic points of view}{\label{subsection-complex-vs-stmplectic}}
In this subsection, we briefly recall the well-known correspondence between symplectic and K\"ahler potentials established over the years notably in \cite{VA4, VA2, SKD, VG}. We take the point of view of \cite{VA4, VA2}.
Let $(V,\omega_0,\mathbb{T})$ be a toric symplectic manifold classified by its \textit{labelled integral Delzant polytope} $(\textnormal{P,\textbf{L}})$ \cite{VA4, TD}, where $\textbf{L} =(L_j)_{j=1,\dots,k}$ is the collection of non-negative defining affine-linear functions for $P$, with $dL_j$ primitive elements of the lattice $\Lambda$ of circle subgroups of $\mathbb{T}$. Choose a K\"ahler structure $(g,J)$ on $(V,\omega_0,\mathbb{T})$ and denote by $(m_0,t_J)$ the associated momentum-angle coordinates, i.e. $m_0 : V \longrightarrow \mathfrak{t}^*$ is the moment map of $(V,\omega_0,\mathbb{T})$ and $t_J : V^0 \longrightarrow \mathfrak{t}/2\pi \Lambda $ are the angular coordinates (unique modulo an additive constant) depending on the complex structure $J$ (see \cite[Remark 3]{VA2}). These coordinates are symplectic, i.e. $\omega_0$ is given by (\ref{toric_symplectic}) in these momentum-angle coordinates. The K\"ahler structure $(g,J)$ is defined on $V^0$ by a smooth strictly convex function $u$ on $P^0$ via
\begin{equation}{\label{metric-toric2}}
g = \langle dm_0, \textbf{G}, dm_0 \rangle + \langle dt_J, \textbf{H},dt_J \rangle \text{ } \text{ and } \text{ } J dm_0= \langle \textbf{H} , dt_J \rangle,
\end{equation}
\noindent where $\textbf{G}:=\textnormal{Hess}(u)$ is a positive definite $S^2\mathfrak{t}$-valued function on $P^0$, $\textbf{H}$ is the $S^2\mathfrak{t}^*$-valued function on $P^0$ inverse to $\textbf{G}$ (when $\textbf{H} : \mathfrak{t} \longrightarrow \mathfrak{t}^*$ and $\textbf{G}: \mathfrak{t}^* \longrightarrow \mathfrak{t}$ are viewed as linear maps at each point of $P^0$), and $\langle \cdot , \cdot , \cdot \rangle$ denotes the pointwise contraction $\mathfrak{t}^* \times S^2 \mathfrak{t} \times \mathfrak{t}^* \rightarrow \mathbb{R}$ or its dual. It is shown in \cite[Lemma 3]{VA2} that, for two $\mathbb{T}$-invariant K\"ahler structures on $(V,\omega_0,\mathbb{T})$ given explicitly on $V^0$ by (\ref{metric-toric2}) with the same matrix $\textbf{H}$, there exists a $\mathbb{T}$-equivariant K\"ahler isomorphism between them.
Conversely, any smooth strictly convex function $u$ on $P^0$ defines a $\mathbb{T}$-invariant $\omega_0$-compatible K\"ahler structure on $V^0$ via $(\ref{metric-toric2})$. The following proposition, established in \cite{VA2}, gives a criterion for the metric to compactify.
\begin{prop}{\label{boudary}}
Let $(V,\omega, \mathbb{T})$ be a compact toric symplectic $2\ell$-manifold with momentum map $m_{\omega} : V \rightarrow P$ and $u$ be a smooth strictly convex function on $P^0$. Then the positive definite $S^2\mathfrak{t}^{*}$-valued function $\textnormal{\textbf{H}}:=\textnormal{Hess}(u)^{-1}$ on $P^0$ comes from a $\mathbb{T}$-invariant, $\omega$-compatible K\"ahler metric $g$ via $(\ref{metric-toric2})$ if and only if it satisfies the following conditions:
\begin{itemize}
\item \textnormal{[smoothness]} $\textnormal{\textbf{H}}$ is the restriction to $P^0$ of a smooth $S^2\mathfrak{t}^{*}$-valued function on $P$;
\item \textnormal{[boundary values]} for any point $y$ on the codimension one face $F_j \subset P$ with
inward normal $u_j$, we have
\begin{equation}{\label{boundary-condition-equation}}
\textnormal{\textbf{H}}_y(u_j , \cdot ) = 0 \text{ and } (d\textnormal{\textbf{H}})_y(u_j , u_j ) = 2u_j,
\end{equation}
\noindent where the differential $d\textnormal{\textbf{H}}$ is viewed as a smooth $S^2\mathfrak{t}^*\otimes \mathfrak{t}$-valued function on $P$;
\item \textnormal{[positivity]} for any point $y$ in the interior of a face $F \subseteq P$, $\textnormal{\textbf{H}}_y(\cdot,\cdot)$ is positive definite
when viewed as a smooth function with values in $S^2(\mathfrak{t}/\mathfrak{t}_F )^*$, where $\mathfrak{t}_F \subset \mathfrak{t}$ is the vector subspace spanned by the
inward normals $u_j$ to the codimension one faces of $P$ containing $F$.
\end{itemize}
\end{prop}
\begin{define}
Let $\mathcal{S}(\textnormal{P,\textbf{L}})$ be the space of smooth strictly convex functions on $P^0$ such that $\textnormal{\textbf{H}} =\textnormal{Hess}(u)^{-1}$ satisfies the conditions of Proposition $\ref{boudary}$.
\end{define}
Thus, there exists a bijection between $\mathbb{T}$-equivariant isometry classes of $\mathbb{T}$-invariant $\omega_0$-compatible K\"ahler structures and smooth functions $\textbf{H}= \textnormal{Hess}(u)^{-1}$, where $u \in \mathcal{S}(\textnormal{P,\textbf{L}})$.
We fix the Guillemin K\"ahler structure $J_0$ on $(V,\omega_0,\mathbb{T})$, its associated angular coordinates $t_0$ and symplectic potential $u_0$. The $\mathbb{T}$-action on $(V,J_0)$ extends to a holomorphic $\mathbb{T}^{\mathbb{C}}$-action. Choosing a point $x_0 \in V^0$, the holomorphic $\mathbb{T}^{\mathbb{C}}$-action gives rise to an identification $(V^0,J_0) \cong (\mathbb{C}^*)^{\ell} \cdot x_0$.
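\noindent We recall that, up to the addition of an affine-linear function, Guillemin's potential is given explicitly by
\begin{equation*}
u_0=\frac{1}{2}\sum_{j=1}^{k} L_j \log L_j,
\end{equation*}
\noindent see \cite{VG}.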
We denote by $(y_0,t_0)$ the exponential coordinates on $(\mathbb{C}^*)^{\ell}$ and we normalize $u_0$ in such a way that $m_0(x_0) \in P$ is its minimum. By \cite{VG}, we have on $V^0$:
\begin{equation}{\label{Lien-legendre}}
\omega_0 = dd_{J_0}^c \phi_{u_0}(y_0),
\end{equation}
\noindent where $\phi_{u_0}$ is the Legendre transform of $u_0$ and $d^c_{J_0}$ is the twisted differential on $(V,J_0)$. Now, consider another $\omega_0$-compatible K\"ahler structure $J_u$ defined by a symplectic potential $u \in \mathcal{S}(\textnormal{P,\textbf{L}})$ via (\ref{metric-toric2}) (and satisfying $d_{x_0} u=0$). By the same argument as before, on $V^0$:
\begin{equation}{\label{Lien-legendre2}}
\omega_0 = dd_{J_u}^c \phi_{u}(y_u),
\end{equation}
\noindent where $\phi_{u}$ is the Legendre transform of $u$ and $(y_u,t)$ are holomorphic coordinates with respect to $(V^0,J_u)$. Consider the biholomorphism $\Phi_u : (V^0,J_u) \cong (V^0,J_0)$ defined by $\Phi_u^*(t,y_0)=(t,y_u)$. Using the Guillemin boundary conditions of $u$ (see \cite{SKD2} or equivalently Proposition \ref{boudary}), it is shown in \cite{SKD2} that $\Phi_u$ extends to a biholomorphism between $(V,J_u)$ and $(V,J_0)$. Let $\omega_u:=\Phi_u^*(\omega_0)$ be the corresponding K\"ahler form on $(V,J_0)$ and $\varphi_u$ such that $\omega_u=\omega_0+dd^c_{J_0}\varphi_u$. We get from (\ref{Lien-legendre}) and (\ref{Lien-legendre2}) on $V^0$:
\begin{equation}
dd^c_{J_0} \varphi_u(y_0)=dd^c_{J_0}\big(\phi_u(y_0) - \phi_{u_0}(y_0)\big).
\end{equation}
\noindent It is argued in \cite{SKD2}, using again the boundary conditions of $u$, that if we set
\begin{equation}{\label{relation-potential0}}
\varphi_u(y_0)= (\phi_u - \phi_{u_0})(y_0)
\end{equation}
\noindent on $V^0$, then $\varphi_u(y_0)$ extends smoothly to $V$.
Conversely, using the dual Legendre transform, any $\mathbb{T}$-invariant K\"ahler potential $\varphi \in \mathcal{K}(V,\omega_0)^{\mathbb{T}}$ gives rise to a corresponding symplectic potential $u \in \mathcal{S}(\textnormal{P,\textbf{L}})$ through $(\ref{relation-potential0})$. Moreover, using (\ref{relation-potential0}) and the definition of the Legendre transform, it is shown in \cite{DG} that the corresponding paths $\varphi_{u_t} \in \mathcal{K}(V,\omega_0)^{\mathbb{T}}$ and $u_t \in \mathcal{S}(\textnormal{P,\textbf{L}})$ satisfy
\begin{equation}{\label{relation-potentials}}
\frac{d}{dt}u_t=-\frac{d}{dt}\varphi_{t}.
\end{equation}
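\noindent Combining $(\ref{geodesic})$ with $(\ref{relation-potentials})$, one recovers the observation, going back to \cite{SKD, DG}, that smooth geodesics in $\mathcal{K}(V,\omega_0)^{\mathbb{T}}$ correspond to affine paths of symplectic potentials,
\begin{equation*}
u_t=(1-t)\,u'+t\,u'', \qquad u',u'' \in \mathcal{S}(\textnormal{P,\textbf{L}}),
\end{equation*}
\noindent which makes the smooth geodesic connectedness of $\mathcal{K}(V,\omega_0)^{\mathbb{T}}$, used above, transparent in the toric setting.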
\subsection{Generalized Abreu's equation}
Thanks to Abreu \cite{MA2}, the scalar curvature $Scal(u)$ associated to a symplectic potential $u\in \mathcal{S}(\textnormal{P,\textbf{L}})$ is expressed by
\begin{equation}{\label{abreu}}
Scal(u)=-\sum^{\ell}_{i,j=1}(H^u_{ij})_{,ij},
\end{equation}
\noindent where the partial derivatives and the inverse Hessian $(H_{ij}^u)=\textnormal{Hess}(u)^{-1}$ of $u$ are taken in a fixed basis $\boldsymbol{\xi}^*$ of $\mathfrak{t}^*$.
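\noindent As a standard illustration of $(\ref{abreu})$ (a computation going back to \cite{MA2}, not needed in the sequel), take $V=\mathbb{CP}^1$ with $\ell=1$, $P=[0,1]$, $L_1=x$, $L_2=1-x$ and the Guillemin potential $u_0=\frac{1}{2}\big(x\log x +(1-x)\log(1-x)\big)$. Then
\begin{equation*}
H^{u_0}=(u_0'')^{-1}=2x(1-x), \qquad Scal(u_0)=-\big(2x(1-x)\big)''=4,
\end{equation*}
\noindent recovering the constant scalar curvature of the (suitably normalized) Fubini--Study metric.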
From \cite[Sect.~6]{AL} and the computation of \cite[Sect. 3]{VA5}, the $\v$-scalar curvature associated to a symplectic potential $u\in \mathcal{S}(\textnormal{P,\textbf{L}})$ and a positive weight function $\v$ is given by
\begin{equation}{\label{v-scalar-curv-toric}}
Scal_{\mathrm{v}}(u)= - \sum_{i,j=1}^{\ell}(\v H^u_{ij})_{,ij}.
\end{equation}
Let $\v \in \mathcal{C}^{\infty}(P,\mathbb{R}_{>0})$ and $\mathrm{w} \in \mathcal{C}^{\infty}(P,\mathbb{R})$. According to Definition \ref{define-scalv}, a K\"ahler structure $(J_u,g_u)$ on $(V,\omega_0,\mathbb{T})$, associated to a symplectic potential $u \in \mathcal{S}(\textnormal{P,\textbf{L}})$, is $(\v,\mathrm{w})$-cscK if and only if it satisfies
\begin{equation}{\label{equation-Abreu}}
- \sum_{i,j=1}^{\ell}(\v H^u_{ij})_{,ij}=\mathrm{w}.
\end{equation}
This formula generalizes the expression $(\ref{abreu})$ and is referred to as \textit{the generalized Abreu equation}. This equation has been studied for example in \cite{VA3, LLS, LLS2, LLS3}.
\subsection{Weighted Donaldson--Futaki invariant}
Following \cite{SKD, AL, LLS}, for $\v \in \mathcal{C}^{\infty}(P,\mathbb{R}_{>0})$ and $\mathrm{w} \in \mathcal{C}^{\infty}(P,\mathbb{R})$, we introduce the $(\v,\mathrm{w})$-Donaldson--Futaki invariant
\begin{equation}{\label{define-futaki}}
\mathcal{F}_{\v,\mathrm{w}}(f):= 2\int_{\partial P} f\v\, d\sigma - \int_P f \mathrm{w}\, dx,
\end{equation}
\noindent for all continuous functions $f$ on $P$, where $d\sigma$ is the measure induced on each face $F_j \subset \partial P$ by letting $dL_j \wedge d \sigma = -dx$, with $dx$ the Lebesgue measure on $P$.
\begin{convention}
The weights $\v>0$ and $\mathrm{w} \in C^{\infty}(P,\mathbb{R})$ satisfy
\begin{equation}{\label{annulation-Futaki}}
\mathcal{F}_{\v,\mathrm{w}}(f)=0
\end{equation}
\noindent for all $f$ affine-linear on $P$.
\end{convention}
Integration by parts (see e.g. \cite{SKD}) reveals that $(\ref{annulation-Futaki})$ is a necessary condition for the existence of a $(\v,\mathrm{w})$-cscK metric on $(V,\omega_0,\mathbb{T})$.
\begin{remark}
Notice that in the case of semi-simple principal toric fibrations, the weights given by $(\ref{weights})$ satisfy $(\ref{annulation-Futaki})$ above.
\end{remark}
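\noindent To illustrate the role of $(\ref{annulation-Futaki})$, consider the case $\ell=1$, $P=[0,1]$ (a sketch, not needed in the sequel). The generalized Abreu equation $(\ref{equation-Abreu})$ reads $-(\v H)''=\mathrm{w}$ and integrates to
\begin{equation*}
(\v H)(x)= 2\v(0)\,x-\int_0^x\!\!\int_0^s \mathrm{w}(r)\,dr\,ds,
\end{equation*}
\noindent where the two integration constants are determined by the boundary conditions $H(0)=0$ and $H'(0)=2$ of Proposition \ref{boudary}. The remaining boundary conditions $(\v H)(1)=0$ and $(\v H)'(1)=-2\v(1)$ then hold precisely when $\mathcal{F}_{\v,\mathrm{w}}(1)=0$ and $\mathcal{F}_{\v,\mathrm{w}}(x)=0$, so that $(\ref{annulation-Futaki})$ is exactly the solvability condition for the boundary value problem; the solution defines a genuine metric if and only if, in addition, $\v H>0$ on $(0,1)$.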
\subsection{Lower bound of the weighted Mabuchi energy}
The volume form $\omega_0^{[\ell]}$ on $V$ is pushed forward to the measure $dx$ on $P$ via the moment map $m_{0}$. Seen as a functional on $\mathcal{S}(\textnormal{P,\textbf{L}})$ via (\ref{relation-potentials}), the weighted Mabuchi energy $\mathcal{M}_{\v,\mathrm{w}}$ satisfies
\begin{equation*}
d_{u}\mathcal{M}_{\mathrm{v},\mathrm{w} }(\Dot{u}) = -\int_P\bigg( - \sum^{\ell}_{i,j=1}\big(\v H^u_{ij}\big)_{,ij} - \mathrm{w} \bigg)\Dot{u}dx.
\end{equation*}
\noindent From \cite[Lemma 6]{VA5} (see also Lemma \ref{IPP-lemma} below) we get
\begin{equation*}
d_{u}\mathcal{M}_{\mathrm{v},\mathrm{w} }(\Dot{u})= \mathcal{F}_{\v,\mathrm{w}}(\dot{u})+ \int_P \sum_{i,j=1}^{\ell} \v H_{ij}^u \dot{u}_{,ij}dx,
\end{equation*}
\noindent where $\mathcal{F}_{\v,\mathrm{w}}$ is the Donaldson--Futaki invariant defined in (\ref{define-futaki}). Using that $d \log \det \textbf{H} = \textnormal{tr}\big(\textbf{H}^{-1}d\textbf{H}\big)$, we get
\begin{equation*}
\mathcal{M}_{\v,\mathrm{w}}(u)=\mathcal{F}_{\v,\mathrm{w}}(u)- \int_P \v \log \det \big(\textnormal{Hess}(u)\,\textnormal{Hess}(u_0)^{-1}\big)\, dx.
\end{equation*}
We denote by $\mathcal{CV}^{\infty}(P)$ the set of continuous convex functions on $P$ which are smooth in the interior $P^0$. Using the same argument as \cite[Lemma 3.3.5]{SKD}, since $\v$ is smooth and $P$ is compact, we get:
\begin{lemma}{\label{IPP-lemma}}
Let $\textnormal{\textbf{H}}$ be any smooth $S^2\mathfrak{t}^*$-valued function on $P$ which satisfies the boundary condition $(\ref{bounday(condition-equation})$ of Proposition \ref{boudary}, but not necessarily the positivity
condition. For any $\v \in \mathcal{C}^{\infty}(P,\mathbb{R}_{>0})$ and $f \in \mathcal{CV}^{\infty}(P)$:
\begin{equation}\label{equation-IPP}
\int_P \sum_{i,j=1}^{\ell}\big(\v H_{ij}\big)f_{,ij} dx = \int_P \bigg(\sum_{i,j=1}^{\ell}\big(\v H_{ij}\big)_{,ij}\bigg)f dx + 2 \int_{\partial P} f \v d\sigma.
\end{equation}
\noindent In particular, $\int_P \sum_{i,j=1}^{\ell}\big(\v H_{ij}\big)f_{,ij}dx < \infty$.
\end{lemma}
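To see where the boundary term in $(\ref{equation-IPP})$ comes from, it is instructive to check the case $\ell=1$, $P=[0,1]$ (a sketch, assuming the standard boundary behaviour $H(0)=H(1)=0$, $H'(0)=2$, $H'(1)=-2$ from Proposition \ref{boudary}): integrating by parts twice,
\begin{equation*}
\int_0^1 \v H f''\, dx = \big[\v H f' - (\v H)' f\big]_0^1 + \int_0^1 (\v H)'' f\, dx = \int_0^1 (\v H)'' f\, dx + 2\v(0)f(0) + 2\v(1)f(1),
\end{equation*}
where the bracket is evaluated using $H(0)=H(1)=0$ and $(\v H)'=\v H'$ at the endpoints; the last two terms are precisely $2\int_{\partial P} f \v d\sigma$, with $d\sigma$ the counting measure on $\partial P=\{0,1\}$.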
\noindent The following result and proof are generalizations of \cite[Proposition 3.3.4]{SKD}.
\begin{prop}{\label{extension}}
Let $\v \in \mathcal{C}^{\infty}(P,\mathbb{R}_{>0})$ and $\mathrm{w}\in \mathcal{C}^{\infty}(P,\mathbb{R})$. The Mabuchi energy $\mathcal{M}_{\v,\mathrm{w}}$ extends to the set $\mathcal{CV}^{\infty}(P)$ as a functional with values in $(- \infty, + \infty]$. Moreover, if there exists $u\in \mathcal{S}(\textnormal{P,\textbf{L}})$ corresponding to a $(\v,\mathrm{w})$-cscK metric, i.e. which satisfies $(\ref{equation-Abreu})$, then $u$ realizes the minimum of $\mathcal{M}_{\v,\mathrm{w}}$ on $\mathcal{CV}^{\infty}(P)$.
\end{prop}
\begin{proof}
The linear term $\mathcal{F}_{\v,\mathrm{w}}$ is well-defined on $\mathcal{CV}^{\infty}(P)$. We then focus on the nonlinear term of $\mathcal{M}_{\v,\mathrm{w}}$. Let $u\in \mathcal{S}(\textnormal{P,\textbf{L}})$ and $h\in \mathcal{CV}^{\infty}(P)$. Suppose $\det \text{Hess}(h)\neq 0$. By convexity of the functional $-\log \det$ on the space of positive definite matrices, we get:
\begin{equation*}
- \log \det \text{Hess}(h)+ \log \det \text{Hess}(u) \geq - \text{Tr}\big(\text{Hess}(u)^{-1}\,\text{Hess}(f)\big),
\end{equation*}
\noindent where $f=h-u$. Turning this around and multiplying by $\v$, we obtain:
\begin{equation*}
\v \log \det \text{Hess}(h) \leq \v \log \det \text{Hess}(u) + \v \text{Tr}\big(\text{Hess}(u)^{-1}\,\text{Hess}(f)\big).
\end{equation*}
\noindent By linearity of (\ref{equation-IPP}), the identity still holds when $f$ is replaced by a difference of two functions in $\mathcal{CV}^{\infty}(P)$. In particular, this shows that $\v \text{Tr}\big(\text{Hess}(u)^{-1}\,\text{Hess}(f)\big)$ is integrable on $P$ and hence, by the previous inequality, $\v \log \det \text{Hess}(h)$ is integrable too. If $\det \text{Hess}(h)=0$, we define the value of $\mathcal{M}_{\v,\mathrm{w}}(h)$ to be $+\infty$. Then $\mathcal{M}_{\v,\mathrm{w}}$ is well-defined on $\mathcal{CV}^{\infty}(P)$. Suppose now that $u$ satisfies (\ref{equation-Abreu}) and let $f \in \mathcal{CV}^{\infty}(P)$. If $\det\textnormal{Hess}(f)=0$, then we trivially get $\mathcal{M}_{\v,\mathrm{w}}(u) \leq \mathcal{M}_{\v,\mathrm{w}}(f)$. Suppose then that $\det\textnormal{Hess}(f)\neq 0$ and consider the function $g(t)=\mathcal{M}_{\v,\mathrm{w}}\big(u+t(f-u)\big)$, which joins $\mathcal{M}_{\v,\mathrm{w}}(u)$ at $t=0$ to $\mathcal{M}_{\v,\mathrm{w}}(f)$ at $t=1$. By the convexity of $-\log\det$, $g$ is a convex function. Moreover, $g$ is differentiable at $t=0$ with
\begin{equation*}
g'(0)=-\bigintsss_P\left( -\sum^{\ell}_{i,j=1}(\v H^u_{ij})_{,ij} - \mathrm{w}\right)(f-u)\,dx,
\end{equation*}
\noindent which vanishes by the hypothesis on $u$. Then $\mathcal{M}_{\v,\mathrm{w}}(u)=g(0) \leq g(1)=\mathcal{M}_{\v,\mathrm{w}}(f)$ by the convexity of $g$.
\end{proof}
\subsection{Properness and $(\v,\mathrm{w})$-uniform K-stability}{\label{subsection-K-stability-and-proper}}
\noindent Following \cite{SKD, GS} (see also \cite[Chapter 3.6]{VA1}), we fix $x_0 \in P^0$ and consider the following normalization
\begin{equation}{\label{normalized-function-polytope}}
\mathcal{CV}^{\infty}_*(P):=\{ f \in \mathcal{CV}^{\infty}(P) \text{ } | \text{ } f(x) \geq f(x_0)=0 \}.
\end{equation}
\noindent Then, any $f\in \mathcal{CV}^{\infty}(P)$ can
be written uniquely as $f = f^*+f_0$, where $f_0$ is affine-linear and $f^*=\pi(f) \in \mathcal{CV}_*^{\infty}(P)$, with $\pi : \mathcal{CV}^{\infty}(P) \longrightarrow \mathcal{CV}^{\infty}_*(P)$ the induced linear projection.
\begin{define}{\label{uniform-K-stable}}
A Delzant polytope $(\textnormal{P,\textbf{L}})$ is $(\v,\mathrm{w})$-uniformly K-stable if there exists $\lambda > 0$ such that
\begin{eqnarray}{\label{uniform-equation}}
\mathcal{F}_{\v,\mathrm{w}}(f) \geq \lambda \| f^*\|_{1}
\end{eqnarray}
\noindent for all $f \in \mathcal{CV}^{\infty}(P)$, where $\| \cdot \|_{1}$ denotes the $L^1$-norm on $P$.
\end{define}
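Note that, since $f-f^*$ is affine-linear and $\mathcal{F}_{\v,\mathrm{w}}$ is linear and satisfies $(\ref{annulation-Futaki})$, we have $\mathcal{F}_{\v,\mathrm{w}}(f)=\mathcal{F}_{\v,\mathrm{w}}(f^*)$ for all $f \in \mathcal{CV}^{\infty}(P)$, so $(\ref{uniform-equation})$ is in effect a coercivity condition on the normalized part $f^*$ only.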
\begin{prop}{\label{stable-equivaut-energy-propr-v}}
Suppose $(\textnormal{P,\textbf{L}})$ is $(\v,\mathrm{w})$-uniformly K-stable. Then there exist $C>0$ and $D \in \mathbb{R} $ such that
\begin{equation*}
\mathcal{M}_{\v,\mathrm{w}}(u) \geq C \|u^*\|_{1} + D
\end{equation*}
\noindent for all $u \in \mathcal{S}(\textnormal{P,\textbf{L}})$.
\end{prop}
\begin{proof}
This result when $\v=1$ is due to \cite{SKD, ZZ}. The proof is an adaptation of the exposition in \cite{VA1}.
Let $u_0 \in \mathcal{S}(\textnormal{P,\textbf{L}})$ be the Guillemin potential. We consider $\mathcal{F}_{\v, \mathrm{w}_{0}}$ where $\mathrm{w}_{0}:=Scal_{\v}(u_0)$. For any $f\in \mathcal{CV}_*^{\infty}(P)$ there exists $C>0$, independent of $f$, such that
\begin{eqnarray*}
| \mathcal{F}_{\v,\mathrm{w}}(f) - \mathcal{F}_{\v,\mathrm{w}_0}(f) | \leq 2C \|f \|_{1}.
\end{eqnarray*}
\noindent Since $(\textnormal{P,\textbf{L}})$ is $(\v,\mathrm{w})$-uniformly K-stable we get
\begin{eqnarray*}
| \mathcal{F}_{\v,\mathrm{w}_0}(f) - \mathcal{F}_{\v,\mathrm{w}}(f) | \leq C_1 \mathcal{F}_{\v,\mathrm{w}}(f) - C\|f \|_{1},
\end{eqnarray*}
\noindent where $C_1$ is a positive constant depending on the constant $\lambda$ in (\ref{uniform-equation}). We deduce that
\begin{equation}{\label{inequality}}
\mathcal{F}_{\v,\mathrm{w}_0}(f) \leq \tilde{C}\mathcal{F}_{\v,\mathrm{w}}(f) - C\|f \|_{1},
\end{equation}
\noindent where $\tilde{C}:=C_1+1$.
\noindent By Proposition \ref{extension}, the Mabuchi energy extends to $\mathcal{CV}_*^{\infty}(P)$. Then, by (\ref{inequality}) and (\ref{annulation-Futaki}), for any $u\in\mathcal{S}(\textnormal{P,\textbf{L}})$,
\begin{eqnarray*}
\mathcal{M}_{\v,\mathrm{w}}(u) &=& \mathcal{F}_{\v,\mathrm{w}}(u^*)-\int_{P}\v \log\det\text{Hess}(u^*) \text{Hess}(u_0)^{-1} dx \\
&\geq& \tilde{C} \mathcal{F}_{\v,\mathrm{w}_{0}}(u^*) + C \| u^* \|_{1} -\int_{P}\v \log\det\text{Hess}(u^*) \text{Hess}(u_0)^{-1} dx \\
&=& \mathcal{M}_{\v,\mathrm{w}_{0}}(\tilde{C} u^*) + \int_{P}\v \log\det\text{Hess}(\tilde{C} u^*) \text{Hess}(u^*)^{-1} dx + C \| u^* \|_{1} \\
&= & \mathcal{M}_{\v,\mathrm{w}_{0}}(\tilde{C} u^*) + \ell \log \tilde{C} \int_P \v dx + C \| u^* \|_{1}.
\end{eqnarray*}
\noindent The Mabuchi energy $\mathcal{M}_{\v,\mathrm{w}_0}$ attains its minimum at the potential $u_0\in \mathcal{S}(\textnormal{P,\textbf{L}})$, which is a solution of
\begin{equation*}
Scal_{\v}(u_0)=\mathrm{w}_0.
\end{equation*}
\noindent In particular, $\mathcal{M}_{\v,\mathrm{w}_0}$ is bounded from below on $\mathcal{CV}^{\infty}(P)$ by Proposition $\ref{extension}$. Letting $D:=\inf_{\mathcal{CV}^{\infty}(P)} \mathcal{M}_{\v,\mathrm{w}_0}+\ell\log \tilde{C} \int_P \v dx$ we get the result.
\end{proof}
\subsection{Existence of a $(\v,\mathrm{w})$-cscK metric is equivalent to $(\v,\mathrm{w})$-uniform K-stability}
\noindent The following is established in \cite[Theorem 2.1]{LLS2} and is due to \cite{CLS} when $\v=1$.
\begin{prop}{\label{existence-implies-stable}}
Suppose there exists a $(\v,\mathrm{w})$-cscK metric in $(V,[\omega_0], \mathbb{T})$, i.e. $(\ref{equation-Abreu})$ admits a solution $u \in \mathcal{S}(\textnormal{P,\textbf{L}})$. Then $P$ is $(\v,\mathrm{w})$-uniformly K-stable.
\end{prop}
We now focus on the converse. We consider the space of normalized K\"ahler potentials $(\ref{normalized-compatible-potential})$ and normalized symplectic potentials
\begin{equation}{\label{normalized-smyplecic-potenial}}
\mathring{\mathcal{S}}_{\v}(\textnormal{P,\textbf{L}}):=\{ u \in \mathcal{S}(\textnormal{P,\textbf{L}}) \text{ } | \text{ }\int_P u \v dx = \int_P u_0 \v dx \}.
\end{equation}
\begin{lemma}{\label{correspondance}}
For any $\mathring{u}_t \in \mathring{\mathcal{S}}_{\v}(\textnormal{P,\textbf{L}})$, the corresponding K\"ahler potential $\varphi_t=\varphi_{\mathring{u}_t}$ obtained via $(\ref{relation-potential0})$ belongs to the space of normalized K\"ahler potentials $\mathring{\mathcal{K}}_{\v}(V,\omega_0)^{\mathbb{T}}$ defined in $(\ref{normalized-compatible-potential})$. Conversely, any path in $\mathring{\mathcal{K}}_{\v}(V,\omega_0)^{\mathbb{T}}$ comes from a path $\mathring{u}_t$ in $\mathring{\mathcal{S}}_{\v}(\textnormal{P,\textbf{L}})$.
\end{lemma}
\begin{proof}
By \cite[Lemma 2.4]{BW}, the functional $\mathcal{I}_{\v}$ defined in (\ref{Ir-functionnal}) is also characterized by its variation for general weights $\v \in \mathcal{C}^{\infty}(P,\mathbb{R}_{>0})$. Then, a path $\varphi_t \in \mathcal{K}_{\v}(V,\omega_0)^{\mathbb{T}}$ starting from $0$ belongs to $\mathring{\mathcal{K}}_{\v}(V,\omega_0)^{\mathbb{T}}$ if and only if
\begin{equation}{\label{annulation-ir}}
\int_V \dot{\varphi}_t \v(m_{\varphi_t})\omega_t^{[\ell]} =0
\end{equation}
\noindent for all $\dot{\varphi}_t \in T_{\varphi_t}\mathcal{K}_{\v}(V,\omega_0)^{\mathbb{T}}$. By pushing forward the measure $\omega_t^{[\ell]}$ via $m_{\omega_{\varphi_t}}$ and using $(\ref{relation-potentials})$ we get that $(\ref{annulation-ir})$ is equivalent to
\begin{equation*}
\int_P \dot{u}_t\v dx=0,
\end{equation*}
\noindent where $u_t$ is the path corresponding to $\varphi_t$ via $(\ref{relation-potential0})$. The conclusion follows from the convexity of $\mathcal{S}(\textnormal{P,\textbf{L}})$.
\end{proof}
\begin{theorem}{\label{theorem-B}}
Let $(M,J, \tilde{\omega}_0, \mathbb{T})$ be a semi-simple principal toric fibration with K\"ahler toric fiber $(V,J_V, \omega_0, \mathbb{T})$. Let $(\v,\mathrm{w})$ be the weights defined in $(\ref{weights})$ and denote by $P$ the Delzant polytope associated to $(V, \omega_0, \mathbb{T})$. Then there exists a $ (\v,\mathrm{w})$-weighted cscK metric in $[\omega_0]$ if and only if $P$ is $(\v,\mathrm{w})$-uniformly $K$-stable. In particular, the latter condition is necessary and sufficient for $[\tilde{\omega}_0]$ to admit an extremal K\"ahler metric.
\end{theorem}
\begin{proof}
Suppose there exists a $(\v,\mathrm{w})$-cscK metric in $[\omega_0]$. By Proposition \ref{existence-implies-stable}, $P$ is $(\v,\mathrm{w})$-uniformly K-stable.
Conversely, suppose $P$ is $(\v,\mathrm{w})$-uniformly K-stable. We are going to show that there are uniform positive constants $\tilde{A}$ and $\tilde{B}$ such that
\begin{equation}{\label{coercivity-conclu}}
\mathcal{M}_{\v, \mathrm{w}}(\varphi) \ge\tilde{ A} \inf_{\gamma \in \mathbb{T}^{\mathbb{C}}} d^V_{1,\v}(0, \gamma \cdot \varphi) - \tilde{B},
\end{equation}
where $d^V_{1,\v}$ is defined in Lemma \ref{restriction-distance} and $\mathcal{M}_{\v,\mathrm{w}}$ is the weighted Mabuchi energy of the K\"ahler toric fiber $(V,J_V,[\omega_0], \mathbb{T})$, see (\ref{definition-weighted-eneergy}). For all $\varphi \in \mathring{\mathcal{K}}(V,\omega_0)^{\mathbb{T}}$, there exists $\gamma \in \mathbb{T}^{\mathbb{C}}$ such that the symplectic potential $u_{\gamma \cdot \varphi}$
corresponding to $\gamma \cdot \varphi$ satisfies $d_{x_0}u_{\gamma \cdot \varphi}=0$. By Lemma $\ref{correspondance}$ and the inequality in \cite[(66)]{VA1}, we have
\begin{equation}{\label{coercivity-conclu2}}
d^V_{1,\mathbb{T}^{\mathbb{C}}}(0,\gamma \cdot \varphi) \le A \int_{P}|u_{\varphi}^* - u_0^*| dx \le A\|u_{\varphi}^*\|_{1} + B,
\end{equation}
\noindent for some uniform constants $A>0$ and $ B>0$, where $d_{1,\mathbb{T}^{\mathbb{C}}}^V$ is the $d_1$ distance relative to $\mathbb{T}^{\mathbb{C}}$ (see $(\ref{distance-relative})$) on $\mathcal{K}(V,\omega_0)^{\mathbb{T}}$, $u_{\varphi}\in S(\textnormal{P,\textbf{L}})$ is the symplectic potential corresponding to $\varphi$ and $u^*_{\varphi}$ is its normalization in $S(\textnormal{P,\textbf{L}}) \cap \mathcal{CV}^{\infty}_*(P)$, see (\ref{normalized-function-polytope}). Since $\v>0$ on $P$, the weighted distance satisfies $d_{1,\v}^V \le C\, d_1^V$ for some $C>0$. Then (\ref{coercivity-conclu}) follows from (\ref{coercivity-conclu2}) and Proposition \ref{stable-equivaut-energy-propr-v}.
Let $T$ be the maximal torus in $\mathrm{Aut}_{\mathrm{red}}(M)$ containing $\mathbb{T}_M$ and satisfying $(\ref{exact-sequence})$. By Lemma \ref{mabuchi-energy-restriction}, and our choice of normalization $(\ref{normalized-compatible-potential})$, the Mabuchi energy $\mathcal{M}^{T}$ restricts to $\mathcal{M}_{\v,\mathrm{w}}$ on $ \mathring{\mathcal{K}}_{\v}(V,\omega_0)^{\mathbb{T}}$. We denote by $d_{1,T^{\mathbb{C}}}$ the $d_1$ distance relative to $T^{\mathbb{C}}$ on $\mathcal{K}(M,\tilde{\omega}_0)^T$. Since any $\mathbb{T}^{\mathbb{C}}_M$-orbit lies in a $T^{\mathbb{C}}$-orbit, by (\ref{coercivity-conclu}) and Lemma \ref{restriction-distance}, $\mathcal{M}^{T}$ is $d_{1,T^{\mathbb{C}}}$-proper on $\mathring{\mathcal{K}}_{\v}(V,\omega_0)^{\mathbb{T}}$ in the sense of Definition \ref{def-proper}.
In the proof of Theorem \ref{theoremA} ``$(1) \Rightarrow (2)$'', we have used the $d_{1,T^{\mathbb{C}}}$-properness only on sequences included in $\mathcal{K}(V,\omega_0)^{\mathbb{T}}$. This allows us to obtain the existence of a $(\v, \mathrm{w})$-cscK metric by the same argument.
The last assertion follows from Theorem \ref{theoremA}.
\end{proof}
\section{Applications}{\label{section-application}}
\subsection{Almost K\"ahler metrics}{\label{subsection-toric}}
As observed in \cite{SKD}, for fixed angular coordinates $dt_0$ with respect to a reference K\"ahler structure $J_0$, one can use (\ref{metric-toric2}) to define a $\mathbb{T}$-invariant \textit{almost-K\"ahler} metric on $V$, as soon as $\textbf{H}$ satisfies the smoothness, boundary value and positivity conditions of Proposition \ref{boudary}, even if the inverse matrix $\textbf{G}:= \textbf{H}^{-1}$ is not necessarily the Hessian of a smooth function. We shall refer to such almost K\"ahler metrics as \textit{involutive}. One can further use such involutive AK metrics on $V$ to build a compatible metric $\tilde g_{\textbf{H}}$ on $M$, by the formula
\begin{equation*}
\tilde{g}_{\textbf{H}}=\sum_{a =1}^k\big(\langle p_a, m \rangle +c_a\big)g_a + \langle dm, \textbf{G} , d m \rangle + \langle \theta,\textbf{H}, \theta \rangle.
\end{equation*}
It is shown in \cite{VA3} that $\tilde g_{\textbf{H}}$ is extremal AK on $M$ in the sense of \cite{ML} (i.e. the hermitian scalar curvature of $\tilde g_{\textbf{H}}$ is a Killing potential) if and only if $\textbf{H}$ satisfies the equation
\begin{equation}{\label{acscK-equation}}
- \sum_{i,j}\big(\v H_{ij}\big)_{,ij}=\mathrm{w},
\end{equation}
\noindent for $(\v,\mathrm{w})$ defined in $(\ref{weights})$. We shall more generally consider involutive AK metrics satisfying the equation $(\ref{acscK-equation})$ for weight functions $\v>0$ and $\mathrm{w}$. For such AK metrics we say that $(g_{\textbf{H}},J_{\textbf{H}})$ is an \textit{involutive $(\v,\mathrm{w})$-csc almost K\"ahler metric} on $(V,\omega,\mathbb{T})$.
The point of considering involutive $(\v,\mathrm{w})$-csc almost K\"ahler metrics is that $(\ref{acscK-equation})$ is a linear underdetermined PDE for the smooth coefficients of $\textbf{H}$ (which would therefore admit infinitely many solutions if we drop the positivity assumption on $\textbf{H}$), and in some special cases it is easier to solve explicitly, as demonstrated in \cite{VA3}. On the other hand, it was observed in \cite{SKD} (see \cite{VA3} for the weighted case) that the existence of a $(\v,\mathrm{w})$-csc almost K\"ahler metric implies that $\mathcal{F}_{\v, \mathrm{w}}(f) \ge 0$ with equality iff $f=0$, and it was conjectured that the existence of a $(\v,\mathrm{w})$-csc almost K\"ahler metric is equivalent to the existence of a $(\v,\mathrm{w})$-cscK metric on $(V, \omega,\mathbb{T})$. E. Legendre \cite{EL} observed that the existence of an involutive extremal almost K\"ahler metric implies the a priori stronger uniform stability of $P$, and thus confirmed the conjecture in the case where $\v=1$ and $\mathrm{w}=\ell_{\textnormal{ext}}$, see $\S \ref{section-extremal-vector-fields}$. Our additional observation is that the same arguments as in the proof of Proposition $\ref{existence-implies-stable}$ show the following.
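For instance (a sketch of the case $\ell=1$, $P=[0,1]$, not treated explicitly above), equation $(\ref{acscK-equation})$ reads $-(\v H)''=\mathrm{w}$ and integrates in closed form:
\begin{equation*}
\v(x)H(x)= a x + b - \int_0^x (x-s)\mathrm{w}(s)\, ds,
\end{equation*}
where two of the four boundary conditions on $H$ fix the constants $a,b$, the remaining two impose linear constraints relating $\v$ and $\mathrm{w}$ of the type appearing in Convention $(\ref{annulation-Futaki})$, and only the positivity of $H$ on the interior is left to check.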
\begin{prop}
Let $(V, \omega,\mathbb{T})$ be a toric manifold associated to Delzant polytope $P$. Let $\v \in \mathcal{C}^{\infty}(P,\mathbb{R}_{>0})$ and $\mathrm{w} \in \mathcal{C}^{\infty}(P,\mathbb{R})$ such that $\mathrm{w}$ satisfies $(\ref{annulation-Futaki})$. Suppose there exists an involutive $(\v,\mathrm{w})$-csc almost K\"ahler metric on $(V, \omega,\mathbb{T})$, i.e. there exists $\textbf{H}$ satisfying the smoothness, boundary value and positivity conditions of Proposition \ref{boudary} and equation $(\ref{acscK-equation})$. Then $P$ is $(\v,\mathrm{w})$-uniformly K-stable.
\end{prop}
Combining this result (for the special weights $(\v, \mathrm{w})$ associated to a semi-simple principal toric fibration via $(\ref{weights})$) with Theorem \ref{theorem-B}, we deduce:
\begin{prop}{\label{equivalence-almostcsck}}
Let $(V, \omega,\mathbb{T})$ be a toric manifold associated to Delzant polytope $P$. Let $(\v,\mathrm{w})$ be weights defined in $(\ref{weights})$. Then the following statements are equivalent:
\begin{enumerate}
\item there exists a $(\v,\mathrm{w})$-cscK metric on $(V,\omega,\mathbb{T})$;
\item there exists an involutive $(\v,\mathrm{w})$-csc almost K\"ahler metric on $(V,\omega,\mathbb{T})$;
\item $P$ is $(\v,\mathrm{w})$-uniformly K-stable in the sense of Definition \ref{uniform-K-stable}.
\end{enumerate}
\end{prop}
\subsection{Proof of Corollary \ref{prop-ex}}
Let $(S,J_S)$ be a compact complex curve of genus $\textbf{g}$ and $\mathcal{L}_i \longrightarrow S$ a holomorphic line bundle, $i=0,1,2$. We consider $(M,J):=\mathbb{P}(\mathcal{L}_0\oplus \mathcal{L}_1 \oplus \mathcal{L}_2)$. Since the biholomorphism class of $M$ is invariant under tensoring $\mathcal{L}_0\oplus \mathcal{L}_1 \oplus \mathcal{L}_2$ with a line bundle, we can suppose without loss of generality that $(M,J)=\mathbb{P}(\mathcal{O}\oplus \mathcal{L}_1\oplus \mathcal{L}_2)$, where $\mathcal{O}\longrightarrow S$ is the trivial line bundle and $\textit{\textbf{p}}_i:=\deg\big(\mathcal{L}_i\big) \geq0$, $i=1,2$. Suppose $\textit{\textbf{p}}_1=\textit{\textbf{p}}_2=0$; then by a result of Fujiki \cite{AF}, $(M,J)$ admits an extremal metric (see also \cite[Remark 2]{VA3}) in every K\"ahler class. If $\textit{\textbf{p}}_2 = \textit{\textbf{p}}_1 > 0$, or $\textit{\textbf{p}}_2 > \textit{\textbf{p}}_1 = 0$ and $\textbf{g}=0,1$, there exists an extremal metric in every K\"ahler class by \cite[Theorem 6.2]{VA8}. We then suppose $\textit{\textbf{p}}_2>\textit{\textbf{p}}_1>0$.
Suppose $S$ is of genus $\textbf{g}=0$, i.e. $(S,J_S)=\mathbb{CP}^1$. Then $M$ is a toric variety. Using the existence of an extremal almost K\"ahler metric of involutive type compatible with any K\"ahler metric on $M$ (see \cite[Proposition 4]{VA3}) and the Yau--Tian--Donaldson correspondence on toric manifolds, it is shown in \cite{EL} that there exists an extremal K\"ahler metric in every K\"ahler class of $\mathbb{P}(\mathcal{O}\oplus \mathcal{L}_1 \oplus \mathcal{L}_2) \longrightarrow \mathbb{CP}^1$. Observe that by applying Proposition \ref{equivalence-almostcsck} and Theorem \ref{theoremA} we obtain that these extremal metrics are given by the Calabi ansatz of \cite{VA2}.
Now suppose that $(S,J_S)$ is an elliptic curve, i.e. $\textbf{g}=1$. The complex manifold $(M,J)$ is not toric. However, it is shown in \cite{VA3} that $(M,J)$ is a semi-simple principal toric fibration. By the Leray--Hirsch Theorem, $H^2(M,\mathbb{R})$ is of dimension $2$. In particular, up to scaling, any K\"ahler class on $(M,J)$ is compatible. It is shown in \cite[Proposition 4]{VA8} that $M$ admits an extremal almost K\"ahler metric in any compatible K\"ahler class. Then, using Proposition \ref{equivalence-almostcsck} and Theorem \ref{theorem-B}, we conclude that there exists an extremal K\"ahler metric in every K\"ahler class. Furthermore, by Theorem \ref{theoremA}, the extremal K\"ahler metrics are of the form (\ref{metriccalabidata}), i.e. are given by the generalized Calabi ansatz.
\begin{remark}
We conclude by pointing out that if $\textnormal{\textbf{g}} \geq 2$, it is shown in \cite[Theorem 2]{VA3} that there exists an extremal K\"ahler metric in sufficiently \textit{small} compatible K\"ahler classes. On the other hand, by \cite[Proposition 2]{VA3}, if $\textnormal{\textbf{g}} > 2$ and $\textit{\textbf{p}}_1,\textit{\textbf{p}}_2$ satisfy $2(\textnormal{\textbf{g}}-1)>\textit{\textbf{p}}_1+\textit{\textbf{p}}_2$, there is no extremal K\"ahler metric in sufficiently \textit{big} K\"ahler classes.
\end{remark}
\section{Introduction}
\label{1}
When analysing process behaviour, one of the early choices one has to make is between a linear and a branching view of time. In branching-time semantics, the choices a process has for proceeding from a particular state are taken into account when defining a notion of process equivalence (with bisimulation being the typical such equivalence), whereas in linear-time semantics such choices are abstracted away and the emphasis is on the individual executions that a process is able to exhibit. From a system verification perspective, one often chooses the linear-time view, as this not only leads to simpler specification logics and associated verification techniques, but also meets the practical need to verify all possible system executions.
While the theory of coalgebras has, from the outset, been able to provide a uniform account of various bisimulation-like observational equivalences (and later, of various simulation-like behavioural preorders), it has so far not been equally successful in giving a generic account of the linear-time behaviour of a state in a system whose type incorporates a notion of branching. For example, the generic trace theory of \cite{HasuoJS07} only applies to systems modelled as coalgebras of type ${\mathsf{T}} \circ F$, with the monad ${\mathsf{T}} : {\mathsf{Set}} \to {\mathsf{Set}}$ specifying a branching type (e.g.~non-deterministic or probabilistic), and the endofunctor $F : {\mathsf{Set}} \to {\mathsf{Set}}$ defining the structure of individual transitions (e.g.~labelled transitions or successful termination). The approach in loc.\,cit.~is complemented by that of \cite{JacobsSS12}, where traces are derived using a determinisation procedure similar to the one for non-deterministic automata. The latter approach applies to systems modelled as coalgebras of type $G \circ {\mathsf{T}}$, where again a monad ${\mathsf{T}} : {\mathsf{Set}} \to {\mathsf{Set}}$ is used to model branching behaviour, and an endofunctor $G$ specifies the transition structure. Neither of these approaches is able to account for potentially infinite traces, as typically employed in model-based formal verification. This limitation is partly addressed in \cite{cirstea-11}, but again, this only applies to coalgebras of type ${\mathsf{T}} \circ F$, albeit with more flexibility in the underlying category (which in particular allows a measure-theoretic account of infinite traces in probabilistic systems). Finally, none of the above-mentioned approaches exploits the compositionality that is intrinsic to the coalgebraic approach. In particular, coalgebras of type $G \circ {\mathsf{T}} \circ F$ (of which systems with both inputs and outputs are an example, see Example~\ref{input-output}) can not be accounted for by any of the existing approaches. This paper presents an attempt to address the above limitations concerning the types of coalgebras and the nature of traces that can be accounted for, by providing a \emph{uniform} and \emph{compositional} treatment of (possibly infinite) linear-time behaviour in systems with branching.
In our view, one of the reasons for only a partial success in developing a fully general coalgebraic theory of traces is the long-term aspiration within the coalgebra community to obtain a uniform characterisation of trace equivalence via a finality argument, in much the same way as is done for bisimulation (in the presence of a final coalgebra). This encountered difficulties, as a suitable category for carrying out such an argument proved difficult to find in the general case. In this paper, we tackle the problem of getting a handle on the linear-time behaviour of a state in a coalgebra with branching from a different angle: we do not attempt to directly define a notion of trace equivalence between two states (e.g.~via finality in some category), but focus on \emph{testing} whether a state is able to exhibit a particular trace, and on measuring the extent of this ability. This ``measuring'' relates to the type of branching present in the system, and instantiates to familiar concepts such as the probability of exhibiting a given trace in probabilistic systems, the minimal cost of exhibiting a given trace in weighted computations, and simply the ability to exhibit a trace in non-deterministic systems.
The technical tool for achieving this goal is a generalisation of the notions of relation and relation lifting \cite{HermidaJ98}, which lie at the heart of the definition of coalgebraic bisimulation. Specifically, we employ relations valued in a partial semiring, and a corresponding generalised version of relation lifting. Our approach applies to coalgebras whose type is obtained as the composition of several endofunctors on ${\mathsf{Set}}$: one of these is a monad ${\mathsf{T}}$ that accounts for the presence of branching in the system, while the remaining endofunctors, assumed here to be polynomial, jointly determine the notion of linear-time behaviour. This strictly subsumes the types of systems considered in earlier work on coalgebraic traces \cite{HasuoJS07,cirstea-11,JacobsSS12}, while also providing compositionality in the system type.
Our main contribution, presented in Section~\ref{linear-time}, is a \emph{uniform} and \emph{compositional} account of linear-time behaviour in state-based systems with branching. A by-product of our work is an extension of the study of additive monads carried out in \cite{Kock2011,CoumansJ2011} to what we call \emph{partially additive monads} (Section~\ref{semiring}). Our approach can be summarised as follows:
\begin{itemize}
\item We move from two-valued to multi-valued relations, with the universe of truth values being induced by the choice of monad for modelling branching. This instantiates to relations valued in the interval $[0,1]$ in the case of probabilistic branching, the set $\mathbb N^\infty = \mathbb N \cup \{\infty\}$ in the case of weighted computations, and simply $\{\bot,\top\}$ in the case of non-deterministic branching. This reflects our view that the notion of truth used to reason about the observable behaviour of a system should be dependent on the branching behaviour present in that system. Such a dependency is also expected to result in temporal logics that are more natural and more expressive, and at the same time have a conceptually simpler semantics. In deriving a suitable structure on the universe of truth values, we generalise results on additive monads \cite{Kock2011,CoumansJ2011} to \emph{partially additive monads}. This allows us to incorporate probabilistic branching under our approach. We show that for a commutative, partially additive monad ${\mathsf{T}}$ on ${\mathsf{Set}}$, the set ${\mathsf{T}} 1$ carries a partial semiring structure with an induced preorder, which in turn makes ${\mathsf{T}} 1$ an appropriate choice of universe of truth values.
\item We generalise and adapt the notion of relation lifting used in the definition of coalgebraic bisimulation, in order to (i) support multi-valued relations, and (ii) abstract away branching. Specifically, we make use of the partial semiring structure carried by the universe of truth values to generalise relation lifting of polynomial endofunctors to multi-valued relations, and employ a canonical \emph{extension lifting} induced by the monad ${\mathsf{T}}$ to capture a move from branching to linear time. The use of this extension lifting allows us to make formal the idea of testing whether, and to what extent, a state in a coalgebra with branching can exhibit a particular \emph{linear-time} behaviour. Our approach resembles the idea employed by partition refinement algorithms for computing bisimulation on labelled transition systems with finite state spaces \cite{KS90}. There, one starts from a single partition of the state space, with all states related to each other, and repeatedly refines it through stepwise unfolding of the transition structure, until a fixpoint is reached. Similarly, we start by assuming that a state in a system with branching can exhibit any linear-time behaviour, and moreover, assign the maximum possible value to each pair consisting of a state and a linear-time behaviour. We then repeatedly refine the values associated to such pairs, through stepwise unfolding of the coalgebraic structure.
\end{itemize}
The present work is closely related to our earlier work on maximal traces and path-based logics \cite{cirstea-11}, which described a game-theoretic approach to testing if a system with non-deterministic branching is able to exhibit a particular trace. Here we consider arbitrary branching types, and while we do not emphasise the game-theoretic aspect, our use of greatest fixpoints has a very similar thrust.
\paragraph{Acknowledgements} Several fruitful discussions with participants at the 2012 Dagstuhl Seminar on Coalgebraic Logics helped refine the ideas presented here. Our use of relation lifting was inspired by the recent work on coinductive predicates \cite{Hasuo12}, itself based on the seminal work in \cite{HermidaJ98} on the use of predicate and relation lifting in the formalisation of induction and coinduction principles. Last but not least, the comments received from the anonymous reviewers contributed to improving the presentation of this work and to identifying new directions for future work.
\section{Preliminaries}
\subsection{Relation Lifting}
\label{rel-lifting}
The concepts of \emph{predicate lifting} and \emph{relation lifting}, to our knowledge first introduced in \cite{HermidaJ98}, are by now standard tools in the study of coalgebraic models, used e.g.~to provide an alternative definition of the notion of bisimulation (see e.g.~\cite{JacobsBook}), or to describe the semantics of coalgebraic modal logics \cite{Pattinson03,Moss99}. While these concepts are very general, their use so far usually restricts this generality by viewing both predicates and relations as sub-objects in some category (possibly carrying additional structure). In this paper, we make use of the full generality of these concepts, and move from the standard view of relations as subsets to a setting where relations are valuations into a universe of truth values. This section recalls the definition of relation lifting in the standard setting where relations are given by monomorphic spans.
Throughout this section (only), {${\mathsf{Rel}}$} denotes the category whose objects are binary relations $(R,\langle r_1,r_2 \rangle)$ with $\langle r_1,r_2 \rangle : R \to X \times Y$ a monomorphic span, and whose arrows from $(R,\langle r_1,r_2 \rangle)$ to $(R',\langle r_1',r_2' \rangle)$ are given by pairs of functions $(f : X \to X'\,,\, g : Y \to Y')$ ~s.t.~ $(f \times g) \circ \langle r_1,r_2 \rangle$ factors through $\langle r_1',r_2' \rangle$:
\[\UseComputerModernTips\xymatrix{
R \ar@{-->}[d] \ar@{>->}[r]^-{\langle r_1,r_2 \rangle} & X \times Y \ar[d]^-{f \times g} \\
R' \ar@{>->}[r]^-{\langle r_1',r_2' \rangle} & X' \times Y' }\]
In this setting, the \emph{relation lifting of a functor $F : {\mathsf{Set}} \to {\mathsf{Set}}$} is defined as a functor ${\mathsf{Rel}}(F) : {\mathsf{Rel}} \to {\mathsf{Rel}}$ taking a relation $\langle r_1,r_2 \rangle : R \to X \times Y$ to the relation ${\mathsf{Rel}}(F)(R)$ obtained via the unique epi-mono factorisation of the span $\langle F(r_1),F(r_2) \rangle : F(R) \to F(X) \times F(Y)$:
\[\UseComputerModernTips\xymatrix{
R \ar@{>->}[d]_-{\langle r_1,r_2 \rangle} & F(R) \ar[d]_-{\langle F(r_1),F(r_2) \rangle} \ar@{->>}[r] & {\mathsf{Rel}}(F)(R)\ar@{>->} [dl]\\
X \times Y & F(X) \times F(Y)
}\]
It follows easily that this construction is functorial, and in particular preserves the order $\le$ between relations on the same objects given by $(R,\langle r_1,r_2 \rangle) \le (S,\langle s_1,s_2 \rangle)$ if and only if $\langle r_1,r_2 \rangle$ factors through $\langle s_1,s_2 \rangle$:
\[\UseComputerModernTips\xymatrix@+0.3pc{
R \ar@{>-->}[r] \ar@{>->}@<-1.5ex>[rr]_-{\langle r_1,r_2 \rangle} & S \ar@{>->}[r]^-{\langle s_1,s_2 \rangle} & X \times Y}\]
An alternative definition of ${\mathsf{Rel}}(F)$ for $F$ a \emph{polynomial functor} (i.e.~constructed from the identity and constant functors using \emph{finite} products and set-indexed coproducts) can be given by induction on the structure of $F$. We refer the reader to \cite[Section~3.1]{JacobsBook} for details of this definition. An extension of this definition to a more general notion of relation will be given in Section~\ref{gen-rel-lifting}.
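To illustrate the inductive definition: for $F = A \times {\mathsf{Id}}$ with $A$ a set of labels, ${\mathsf{Rel}}(F)$ takes a relation $R \subseteq X \times Y$ to the relation $\{((a,x),(a,y)) \mid a \in A,\, (x,y) \in R\} \subseteq (A \times X) \times (A \times Y)$.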
\subsection{Coalgebras}
\label{section-coalgebras}
We model state-based, dynamical systems as coalgebras over the category of sets. Given a functor $F : {\mathbb{C}} \to {\mathbb{C}}$ on an arbitrary category, an \emph{$F$-coalgebra} is given by a pair $(C,\gamma)$ with $C$ an object of ${\mathbb{C}}$, used to model the state space, and $\gamma : C \to F C$ a morphism in ${\mathbb{C}}$, describing the one-step evolution of the system states. Then, a canonical notion of observational equivalence between the states of two $F$-coalgebras is provided by the notion of bisimulation. Of the many definitions of bisimulation, all equivalent under the assumption that $F$ preserves weak pullbacks (see \cite{JacobsBook} for a detailed account), we recall the one based on relation lifting. This applies to coalgebras over the category of sets (as described below), but also more generally to categories with logical factorisation systems (as described in \cite{JacobsBook}). According to this definition, an \emph{$F$-bisimulation} between coalgebras $(C,\gamma)$ and $(D,\delta)$ over ${\mathsf{Set}}$ is a ${\mathsf{Rel}}(F)$-coalgebra:
\[\UseComputerModernTips\xymatrix{
R \ar@{-->}[r] \ar@{>->}[d] & {\mathsf{Rel}}(F)(R)\ar@{>->}[d]\\
X \times Y \ar[r]_-{\gamma \times \delta} & F(X) \times F(Y)
}\]
In the remainder of this section we sketch a coalgebraic generalisation of a well-known partition refinement algorithm for computing \emph{bisimilarity} (i.e.~the largest bisimulation) on finite-state labelled transition systems \cite{KS90}. For an arbitrary endofunctor $F : {\mathsf{Set}} \to {\mathsf{Set}}$ and two finite-state $F$-coalgebras $(C,\gamma)$ and $(D,\delta)$, the generalised algorithm iteratively computes relations $\simeq_i \,\,\subseteq\,\, C \times D$ with $i = 0,1, \ldots$ as follows:
\begin{itemize}
\item $\simeq_0 \,=\, C \times D$
\item $\simeq_{i+1} \,=\, (\gamma \times \delta)^*({\mathsf{Rel}}(F)(\simeq_i))$ for $i = 0,1,\ldots$
\end{itemize}
where $(\gamma \times \delta)^*$ takes a relation $R \subseteq F C \times F D$ to the relation $\{(c,d) \in C \times D \mid (\gamma(c),\delta(d)) \in R \}$. Thus, in the initial approximation $\simeq_0$ of the bisimilarity relation, all states are related, whereas at step $i+1$ two states are related if and only if their one-step observations are suitably related using the relation $\simeq_i$. Bisimilarity between the coalgebras $(C,\gamma)$ and $(D,\delta)$ thus arises as the greatest fixpoint of a monotone operator on the complete lattice of relations between $C$ and $D$, which takes a relation $R \subseteq C \times D$ to the relation $(\gamma \times \delta)^*({\mathsf{Rel}}(F)(R))$. A similar characterisation of bisimilarity exists for coalgebras with infinite state spaces, but in this case the fixpoint can not, in general, be reached in a finite number of steps.
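As a concrete illustration (ours, not part of the cited algorithm's presentation), the following Haskell sketch implements this iteration for finite labelled transition systems, viewed as coalgebras $\gamma : C \to {\mathcal{P}}(A \times C)$, with the relation lifting computed in the familiar Egli--Milner style and a single coalgebra related to itself:
\begin{verbatim}
-- Partition refinement for finite labelled transition systems,
-- i.e. coalgebras  gamma : C -> P(A x C);  state spaces and
-- transition sets are represented by (finite) lists.
type Coalg a s = s -> [(a, s)]

-- One refinement step: keep (c,d) iff their one-step observations
-- are related by the Egli-Milner lifting of the current relation.
step :: (Eq a, Eq s) => Coalg a s -> [(s, s)] -> [(s, s)]
step gamma rel =
  [ (c, d)
  | (c, d) <- rel
  , all (\(x, c') -> any (\(y, d') -> x == y && (c', d') `elem` rel)
                         (gamma d)) (gamma c)
  , all (\(y, d') -> any (\(x, c') -> x == y && (c', d') `elem` rel)
                         (gamma c)) (gamma d) ]

-- Greatest fixpoint: start from the full relation and iterate the
-- (monotone) refinement step until nothing changes.
bisimilarity :: (Eq a, Eq s) => [s] -> Coalg a s -> [(s, s)]
bisimilarity states gamma = go [ (c, d) | c <- states, d <- states ]
  where go rel = let rel' = step gamma rel
                 in if rel' == rel then rel else go rel'
\end{verbatim}
Since each step only removes pairs from a finite relation, the iteration terminates, returning the largest bisimulation on the given coalgebra.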
The above greatest fixpoint characterisation of bisimilarity is generalised and adapted in Section~\ref{linear-time}, in order to characterise the extent to which a state in a coalgebra with branching can exhibit a linear-time behaviour. There, the two coalgebras in question have different types: the former has branching behaviour and is used to model the system of interest, whereas the latter has linear behaviour only and describes the domain of possible traces.
\subsection{Monads}
In what follows, we use monads $({\mathsf{T}},\eta,\mu)$ on ${\mathsf{Set}}$ (where $\eta : {\mathsf{Id}} \Rightarrow {\mathsf{T}}$ and $\mu : {\mathsf{T}} \circ {\mathsf{T}} \Rightarrow {\mathsf{T}}$ are the \emph{unit} and \emph{multiplication} of ${\mathsf{T}}$) to capture branching in coalgebraic types. Moreover, we assume that these monads are \emph{strong} and \emph{commutative}, i.e.~they come equipped with a \emph{strength map} ${\mathsf{st}}_{X,Y} : X \times {\mathsf{T}} Y \to {\mathsf{T}}(X \times Y)$ as well as a \emph{double strength map} ${\mathsf{dst}}_{X,Y} : {\mathsf{T}} X \times {\mathsf{T}} Y \to {\mathsf{T}}(X \times Y)$ for each choice of sets $X,Y$; these maps are natural in $X$ and $Y$, and satisfy coherence conditions w.r.t.~the unit and multiplication of ${\mathsf{T}}$. We also make direct use of the \emph{swapped strength map} ${\mathsf{st}}'_{X,Y} : {\mathsf{T}} X \times Y \to {\mathsf{T}}(X \times Y)$, obtained from the strength via the \emph{twist map} ${\mathsf{tw}}_{X,Y} : X \times Y \to Y \times X$:
\[\UseComputerModernTips\xymatrix@+0.3pc{{\mathsf{T}} X \times Y \ar[r]^-{{\mathsf{tw}}_{{\mathsf{T}} X,Y}} & Y \times {\mathsf{T}} X \ar[r]^-{{\mathsf{st}}_{Y,X}} & {\mathsf{T}}(Y \times X) \ar[r]^-{{\mathsf{T}} {\mathsf{tw}}_{Y,X}} & {\mathsf{T}}(X \times Y)}\]
\begin{example}
\label{example-monads}
As examples of monads, we consider:
\begin{enumerate}
\item the \emph{powerset monad} ${\mathcal{P}} : {\mathsf{Set}} \to {\mathsf{Set}}$, modelling nondeterministic computations, with unit given by singletons and multiplication given by unions. Its strength and double strength are given by
\begin{align*}
{\mathsf{st}}_{X,Y}(x,V) = \{x\} \times V & &
{\mathsf{dst}}_{X,Y}(U,V) = U \times V
\end{align*}
for $x\in X$, $U \in {\mathcal{P}} X$ and $V \in {\mathcal{P}} Y$,
\item the \emph{semiring monad} ${\mathsf{T}}_S : {\mathsf{Set}} \to {\mathsf{Set}}$ with $(S,+,0,\bullet,1)$ a semiring, given by
\[{\mathsf{T}}_S(X) = \{ f : X \to S \mid \sup(f) \text{ is finite} \}\]
with $\sup(f) = \{ x \in X \mid f(x) \ne 0\}$ the \emph{support} of $f$. Its unit and multiplication are given by
\begin{align*}
\eta_X(x)(y) = \begin{cases}1 & \text{ if } y = x \\
0 & \text{ otherwise} \end{cases} & & \mu_X(f)(x) = \sum\limits_{g \in \sup(f)} f(g) \bullet g(x)
\end{align*}
while its strength and double strength are given by
\begin{align*}
{\mathsf{st}}_{X,Y}(x,g)(z,y) = \begin{cases} g(y) & \text{ if } z = x \\
0 & \text{ otherwise}
\end{cases} & & {\mathsf{dst}}_{X,Y}(f,g)(z,y) = f(z) \bullet g(y)
\end{align*}
for $x \in X$, $f\in {\mathsf{T}}_S(X)$, $g \in {\mathsf{T}}_S(Y)$, $z \in X$ and $y \in Y$.
As a concrete example, we will consider the semiring $W = (\mathbb N^\infty,\min,\infty,+,0)$, and use ${\mathsf{T}}_W$ to model weighted computations.
\item the \emph{sub-probability distribution monad} ${\mathcal{S}} : {\mathsf{Set}} \to {\mathsf{Set}}$, modelling probabilistic computations, with unit given by the Dirac distributions (i.e.~$\eta_{X}(x) = (x \mapsto 1)$), and multiplication given by $\mu_X(\Phi)(x) = \sum\limits_{\varphi \in \sup(\Phi)} \Phi(\varphi) * \varphi(x)$, with $*$ denoting multiplication on $[0,1]$. Its strength and double strength are given by
\begin{align*}
{\mathsf{st}}_{X,Y}(x,\psi)(z,y) = \begin{cases}\psi(y) & \text{if } z = x\\ 0 & \text{otherwise}
\end{cases} & &
{\mathsf{dst}}_{X,Y}(\varphi,\psi)(z,y) = \varphi(z) * \psi(y)
\end{align*}
for $x \in X$, $\varphi \in {\mathcal{S}}(X)$, $\psi \in {\mathcal{S}}(Y)$, $z \in X$ and $y \in Y$.
\end{enumerate}
\end{example}
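To make the weighted case concrete, the following Haskell sketch (our rendering, with association lists standing in for finitely supported functions, assumed to have pairwise distinct keys) implements ${\mathsf{T}}_W$ for the tropical semiring $W$:
\begin{verbatim}
import Data.List (nub)

-- The tropical semiring W = (N u {inf}, min, inf, +, 0).
data W = Fin Integer | Inf deriving (Eq, Show)

wPlus :: W -> W -> W              -- semiring addition: min
wPlus Inf y = y
wPlus x Inf = x
wPlus (Fin m) (Fin n) = Fin (min m n)

wTimes :: W -> W -> W             -- semiring multiplication: +
wTimes Inf _ = Inf
wTimes _ Inf = Inf
wTimes (Fin m) (Fin n) = Fin (m + n)

-- T_W X: finitely supported maps X -> W, as association lists
-- (absent keys carry weight Inf, i.e. lie outside the support).
newtype TW x = TW [(x, W)] deriving Show

etaW :: x -> TW x                 -- unit: weight 0 on x
etaW x = TW [(x, Fin 0)]

-- muW f (x) = min over g of ( f(g) + g(x) ), cf. the formula above.
muW :: Eq x => TW (TW x) -> TW x
muW (TW fs) = TW [ (x, foldr wPlus Inf [ w | (y, w) <- ps, y == x ])
                 | x <- nub (map fst ps) ]
  where ps = [ (x, wTimes v w) | (TW g, v) <- fs, (x, w) <- g ]
\end{verbatim}
Reading weights as costs, \texttt{muW} flattens a weighted set of weighted sets by adding costs along each path and keeping the cheapest way of reaching each element.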
\section{From Partially Additive, Commutative Monads to Partial Commutative Semirings with Order}
\label{semiring}
Later in this paper we will consider coalgebras whose type is given by the composition of several endofunctors on ${\mathsf{Set}}$, one of which is a commutative monad ${\mathsf{T}} : {\mathsf{Set}} \to {\mathsf{Set}}$ accounting for the presence of branching in the systems of interest. This section extends results in \cite{Kock2011,CoumansJ2011} to show how to derive a universe of truth values from such a monad. The assumption of loc.\,cit.~concerning the \emph{additivity} of the monad under consideration is here weakened to \emph{partial additivity} (see Definition~\ref{additive}); this allows us to incorporate the sub-probability distribution monad (which is not additive) into our framework. Specifically, we show that any commutative, partially additive monad ${\mathsf{T}} : {\mathsf{Set}} \to {\mathsf{Set}}$ induces a partial commutative semiring structure on the set ${\mathsf{T}} 1$, with $1=\{*\}$ a final object in ${\mathsf{Set}}$. We recall that a \emph{commutative semiring} consists of a set $S$ carrying two commutative monoid structures $(+,0)$ and $(\bullet, 1)$, with the latter distributing over the former: $s \bullet 0 = 0$ and $s \bullet (t + u) = s \bullet t + s \bullet u$ for all $s,t,u \in S$. A \emph{partial commutative semiring} is defined similarly, except that $+$ is a partial operation subject to the condition that whenever $t + u$ is defined, so is $s \bullet t + s \bullet u$, and moreover $s \bullet (t + u) = s \bullet t + s \bullet u$. The relevance of a partial commutative semiring structure on the set of truth values will become clear in Sections~\ref{gen-rel-lifting} and \ref{linear-time}.
It follows from results in \cite{CoumansJ2011} that any commutative monad $({\mathsf{T}},\eta,\mu)$ on ${\mathsf{Set}}$ induces a commutative monoid $({\mathsf{T}}(1),\bullet,\eta_1(*))$, with multiplication $\bullet : {\mathsf{T}}(1) \times {\mathsf{T}}(1) \to {\mathsf{T}}(1)$ given by the composition
\[\UseComputerModernTips\xymatrix@+1pc{{\mathsf{T}} (1) \times {\mathsf{T}} (1) \ar[r]^-{{\mathsf{dst}}_{1,1}} & {\mathsf{T}}(1 \times 1) \ar[r]^-{{\mathsf{T}} \pi_2} & {\mathsf{T}} (1)}\]
Alternatively, this multiplication can be defined as the composition
\[\UseComputerModernTips\xymatrix{{\mathsf{T}}(1) \times {\mathsf{T}}(1) \ar[r]^-{{\mathsf{st}}'_{1,1}} & {\mathsf{T}}(1 \times {\mathsf{T}}(1)) \ar[r]^-{T \pi_2} & {\mathsf{T}}^2(1) \ar[r]^-{\mu_1} & {\mathsf{T}}(1)}\]
or as
\[\UseComputerModernTips\xymatrix{{\mathsf{T}}(1) \times {\mathsf{T}}(1) \ar[r]^-{{\mathsf{st}}_{1,1}} & {\mathsf{T}}({\mathsf{T}}(1) \times 1) \ar[r]^-{T \pi_1} & {\mathsf{T}}^2(1) \ar[r]^-{\mu_1} & {\mathsf{T}}(1)}\]
(While the previous two definitions coincide for commutative monads, this is not the case in general.)
\begin{remark}
\label{actions}
The following maps define left and right actions of $({\mathsf{T}}(1),\bullet)$ on ${\mathsf{T}}( X)$:
\[\UseComputerModernTips\xymatrix{
{\mathsf{T}}(1) \times {\mathsf{T}}(X) \ar[r]^-{{\mathsf{dst}}_{1,X}} & {\mathsf{T}}(1 \times X) \ar[r]^-{{\mathsf{T}} \pi_2} & {\mathsf{T}} (X)
} \qquad \qquad \UseComputerModernTips\xymatrix{
{\mathsf{T}}(X) \times {\mathsf{T}}(1) \ar[r]^-{{\mathsf{dst}}_{X,1}} & {\mathsf{T}}(X \times 1) \ar[r]^-{{\mathsf{T}} \pi_1} & {\mathsf{T}} (X)
}\]
\end{remark}
On the other hand, any monad ${\mathsf{T}} : {\mathsf{Set}} \to {\mathsf{Set}}$ with {$ {\mathsf{T}} \emptyset = 1$} is such that, for any $X$, ${\mathsf{T}} X$ has a \emph{zero element} $0 \in {\mathsf{T}} X$, obtained as $({\mathsf{T}}!_X)(*)$. This yields a \emph{zero map $0 : Y \to {\mathsf{T}} X$} for any $X,Y$, obtained as the composition
\[\UseComputerModernTips\xymatrix{Y \ar[r]^-{!_Y} & {\mathsf{T}} \emptyset \ar[r]^-{T !_X} & {\mathsf{T}} X}\]
with the maps $!_Y : Y \to {\mathsf{T}} \emptyset$ and $!_X : \emptyset \to X$ arising by finality and initiality, respectively. Now consider the following map:
\begin{equation}
\label{map}
\UseComputerModernTips\xymatrix@+4pc{T(X+Y) \ar[r]^-{\langle \mu_X \circ {\mathsf{T}} p_1,\mu_Y \circ {\mathsf{T}} p_2 \rangle} & {\mathsf{T}} X \times {\mathsf{T}} Y}
\end{equation}
where $p_1 = [\eta_X,0] : X + Y \to {\mathsf{T}} X$ and $p_2 = [0,\eta_Y] : X + Y \to {\mathsf{T}} Y$.
\begin{definition}
\label{additive}
A monad ${\mathsf{T}} : {\mathsf{Set}} \to {\mathsf{Set}}$ is called \emph{additive}\footnote{Additive monads were studied in \cite{Kock2011,CoumansJ2011}.} (\emph{partially additive}) if\, ${\mathsf{T}} \emptyset = 1$ and the map in (\ref{map}) is an isomorphism (respectively monomorphism).
\end{definition}
The (partial) inverse of the map $\langle \mu_X \circ {\mathsf{T}} p_1,\mu_Y \circ {\mathsf{T}} p_2 \rangle$ can be used to define a (partial) addition on the set ${\mathsf{T}} X$, given by ${\mathsf{T}}[1_X,1_X] \circ q_{X,X}$, where $q_{X,X} : {\mathsf{T}} X \times {\mathsf{T}} X \to {\mathsf{T}}(X+X)$ is the (partial) left inverse of $\langle \mu_X \circ {\mathsf{T}} p_1,\mu_Y \circ {\mathsf{T}} p_2 \rangle$:
\[\UseComputerModernTips\xymatrix@+1pc{{\mathsf{T}} X & {\mathsf{T}}(X+X) \ar@<+1ex>[rr]^-{\langle \mu_X \circ {\mathsf{T}} p_1,\mu_Y \circ {\mathsf{T}} p_2 \rangle} \ar[l]_-{{\mathsf{T}}[1_X,1_X]} & & {\mathsf{T}} X \times {\mathsf{T}} X \ar@<+1ex>@{-->}[ll]^-{q_{X,X}} \ar@<+1ex>@/^3ex/[lll]^-{{+}}}\]
That is, $a + b$ is defined if and only if $(a,b) \in \Im(\langle \mu_X \circ {\mathsf{T}} p_1,\mu_Y \circ {\mathsf{T}} p_2 \rangle)$\,\footnote{A similar, but \emph{total}, addition operation is defined in \cite{Kock2011,CoumansJ2011} for additive monads.}.
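For instance, for the sub-probability monad ${\mathcal{S}}$, the map in (\ref{map}) sends $\varphi \in {\mathcal{S}}(X+Y)$ to the pair of restrictions $(\varphi|_X,\varphi|_Y)$; it is injective but not surjective, as a pair $(\varphi_1,\varphi_2)$ lies in its image precisely when $\sum_{x}\varphi_1(x) + \sum_{y}\varphi_2(y) \le 1$. Thus ${\mathcal{S}}$ is partially additive but not additive, and the induced partial addition on ${\mathcal{S}} X$ is the pointwise addition of sub-distributions, defined whenever the total masses add up to at most $1$.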
\cite[Section~5.2]{CoumansJ2011} explores the connection between additive, commutative monads and commutative semirings. The next result provides a generalisation to partially additive, commutative monads and partial commutative semirings.
The proof of Proposition~\ref{prop-partial} is a slight adaptation of the corresponding proofs in \cite[Section~5.2]{CoumansJ2011}.
\begin{proposition}
\label{prop-partial}
Let ${\mathsf{T}}$ be a commutative, (partially) additive monad. Then:
\begin{enumerate}
\item $({\mathsf{T}} 1,\bullet,\eta_1(*))$ is a commutative monoid.
\item $({\mathsf{T}} X,0,+)$ is a (partial) commutative monoid, for each set $X$.
\item\label{3} $({\mathsf{T}} 1,0,+,\bullet,\eta_1(*))$ is a (partial) commutative semiring.
\end{enumerate}
\end{proposition}
\begin{proof}[Proof (Sketch)]
The commutativity of the following diagram lies at the heart of the proof of item \ref{3}:
\[\UseComputerModernTips\xymatrix{
{\mathsf{T}} 1 \times {\mathsf{T}} 1 \ar[dd]_-{\bullet} & & {\mathsf{T}} (1+1) \times {\mathsf{T}} 1 \ar[ll]_-{{\mathsf{T}}[1_X,1_X] \times 1_{{\mathsf{T}} 1}} \ar[dd]_-{a_{{\mathsf{T}}(1+1)}} \ar@<+1ex>[r]^-{\delta \times 1_{{\mathsf{T}} 1} }& ({\mathsf{T}} 1 \times {\mathsf{T}} 1) \times {\mathsf{T}} 1 \ar@<+1ex>@{-->}[l]^-{q_{1,1} \times 1_{{\mathsf{T}} 1}} \ar[d]^-{\langle \pi_1 \times \pi_2,\pi_2 \times \pi_2 \rangle} \\
& & & ({\mathsf{T}} 1 \times {\mathsf{T}} 1) \times ({\mathsf{T}} 1 \times {\mathsf{T}} 1) \ar[d]^-{\bullet \,\times\, \bullet}\\
{\mathsf{T}} 1 & & {\mathsf{T}}(1 + 1) \ar@<+1ex>[r]^-{\delta} \ar[ll]^-{{\mathsf{T}}[1_X,1_X]} & {\mathsf{T}} 1 \times {\mathsf{T}} 1\ar@<+1ex>@{-->}[l]^-{q_{1,1}}
}\]
where $a_{{\mathsf{T}} X} : {\mathsf{T}} X \times {\mathsf{T}} 1 \to {\mathsf{T}} X$ is the right action from Remark~\ref{actions}, and $\delta$ is the map $\langle \mu_1 \circ {\mathsf{T}} p_1,\mu_1 \circ {\mathsf{T}} p_2 \rangle$ used in the definition of $+$ on ${\mathsf{T}} 1$. The composition $\bullet \circ ({\mathsf{T}}[1_X,1_X] \times 1_{{\mathsf{T}} 1}) \circ (q_{1,1} \times 1_{{\mathsf{T}} 1})$ captures the computation of $(a + b) \bullet c$, whereas the composition ${\mathsf{T}}[1_X,1_X] \circ q_{1,1} \circ (\bullet \times \bullet) \circ \langle \pi_1 \times \pi_2,\pi_2 \times \pi_2 \rangle$ captures the computation $a \bullet c + b \bullet c$, with $a,b,c \in {\mathsf{T}} 1$. The fact that $\delta$ commutes with the strength map (by (iv) of \cite[Lemma~15]{CoumansJ2011}), together with $a_{{\mathsf{T}}(1+1)}$ and $\bullet$ being essentially given by the double strength maps ${\mathsf{dst}}_{1+1,1}$ and ${\mathsf{dst}}_{1,1}$, yields $(\bullet \times \bullet) \circ \langle \pi_1 \times \pi_2,\pi_2 \times \pi_2 \rangle \circ (\delta \times 1_{{\mathsf{T}} 1}) = \delta \circ a_{{\mathsf{T}}(1+1)}$, that is, commutativity (via the plain arrows) of the right side of the above diagram. This immediately results in $a \bullet c + b \bullet c$ being defined whenever $a + b$ is defined, and hence in the commutativity of the right side of the diagram also via the dashed arrows. This, combined with the commutativity of the left side of the diagram (which is simply naturality of the right action $a$), gives $(a + b) \bullet c = a \bullet c + b \bullet c$ whenever $a+b$ is defined.
\end{proof}
\begin{example}
\label{example-semirings}
For the monads in Example~\ref{example-monads}, one obtains the commutative semirings $(\{\bot,\top\},\vee,\bot,\wedge,\top)$ when {${\mathsf{T}} = {\mathcal{P}}$}, $({\mathbb N}^\infty,\min,\infty,+,0)$ when {${\mathsf{T}} = {\T}_W$}\,\footnote{This is sometimes called the \emph{tropical semiring}.}, and the partial commutative semiring $([0,1],+,0,*,1)$ when ${\mathsf{T}} = {\mathcal{S}}$ (where in the latter case $a + b$ is defined if and only if $a + b \le 1$).
\end{example}
\section{Generalised Relations and Relation Lifting}
\label{gen-rel-lifting}
This section introduces generalised relations valued in a partial commutative semiring, and shows how to lift polynomial endofunctors on ${\mathsf{Set}}$ to the category of generalised relations. We begin by fixing a partial commutative semiring $(S,+,0,\bullet,1)$, and noting that the partial monoid $(S,+,0)$ can be used to define a preorder relation on $S$ as follows:
\[x \sqsubseteq y ~~~\text{if and only if}~~~ \text{there exists } z \in S \text{ such that }x + z = y\]
for $x,y \in S$. It is then straightforward to show (using the definition of a partial commutative semiring) that the preorder $\sqsubseteq$ has $0 \in S$ as bottom element, and is preserved by $\bullet$ in each argument. Proper (i.e.~not partial) semirings where the preorder $\sqsubseteq$ is a partial order are called \emph{naturally ordered} \cite{EsikK07}. We here extend this terminology to partial semirings.
\begin{example}
\label{example-orders}
For the monads in Example~\ref{example-monads}, the preorders associated to the induced partial semirings (see Example~\ref{example-semirings}) are all partial orders: $\le$ on $\{\bot,\top\}$ for ${\mathsf{T}} = {{\mathcal{P}}}$, $\le$ on $[0,1]$ for ${\mathsf{T}} = {{\mathcal{S}}}$, and $\ge$ on ${\mathbb N}^\infty$ for ${\mathsf{T}} = {\T}_W$.
\end{example}
We let ${\mathsf{Rel}}$ denote the category\footnote{To keep notation simple, the dependency on $S$ is left implicit.} with objects given by triples $(X,Y,R)$, where $R : X \times Y \to S$ is a function defining a \emph{multi-valued relation} (or \emph{$S$-relation}), and with arrows from $(X,Y,R)$ to $(X',Y',R')$ given by pairs of functions $(f,g)$ as below, such that $R \sqsubseteq R' \circ (f \times g)$:
\[\UseComputerModernTips\xymatrix{X \times Y \ar@{}[dr]|-{\sqsubseteq}\ar[r]^-{f \times g} \ar[d]_-{R} & X' \times Y' \ar[d]^-{R'}\\ S \ar@{=}[r] & S}\]
Here, the order $\sqsubseteq$ on $S$ has been extended pointwise to $S$-relations with the same carrier.
We write ${\mathsf{Rel}}_{X,Y}$ for the \emph{fibre over $(X,Y)$}, that is, the full subcategory of ${\mathsf{Rel}}$ whose objects are $S$-relations over $X \times Y$ and whose arrows are given by $(1_X,1_Y)$. It is straightforward to check that the functor $q : {\mathsf{Rel}} \to {\mathsf{Set}} \times {\mathsf{Set}}$ taking $(X,Y,R)$ to $(X,Y)$ defines a fibration: the reindexing functor $(f,g)^* : {\mathsf{Rel}}_{X',Y'} \to {\mathsf{Rel}}_{X,Y}$ takes $R' : X' \times Y' \to S$ to $R' \circ (f \times g) : X \times Y \to S$.
We now proceed to generalising relation lifting to $S$-relations.
\begin{definition}
\label{def-gen-rel-lifting}
Let $F : {\mathsf{Set}} \to {\mathsf{Set}}$. A \emph{relation lifting of $F$} is a functor\footnote{Given the definition of the fibration $q$, such a functor is automatically a morphism of fibrations.} $\Gamma : {\mathsf{Rel}} \to {\mathsf{Rel}}$ such that $q \circ \Gamma = (F \times F) \circ q$:
\[\UseComputerModernTips\xymatrix{
{\mathsf{Rel}} \ar[d]_-{q} \ar[r]^-{\Gamma} & {\mathsf{Rel}} \ar[d]^-{q} \\
{\mathsf{Set}} \times {\mathsf{Set}} \ar[r]_-{F \times F} & {\mathsf{Set}} \times {\mathsf{Set}}}\]
\end{definition}
We immediately note a fundamental difference compared to standard relation lifting as defined in Section~\ref{rel-lifting}. While in the case of standard relations each functor admits exactly one lifting, Definition~\ref{def-gen-rel-lifting} implies neither the existence nor the uniqueness of a lifting. We defer the study of a canonical lifting (similar to ${\mathsf{Rel}}(F)$ in the case of standard relations) to future work, and show how to define a relation lifting of $F$ in the case when $F$ is a polynomial functor. To this end, we make the additional assumption that the unit $1$ of the semiring multiplication is a top element (which we also write as $\top$) for the preorder $\sqsubseteq$. Recall that $\sqsubseteq$ also has a bottom element (which we will sometimes denote by $\bot$), given by the unit $0$ of the (partial) semiring addition. The definition of the relation lifting of a polynomial functor $F$ is by structural induction on $F$ and makes use of the semiring structure on $S$:
\begin{itemize}
\item If $F = {\mathsf{Id}}$, ${\mathsf{Rel}}(F)$ takes an $S$-relation to itself.
\item If $F = C$, ${\mathsf{Rel}}(F)$ takes an $S$-relation to the equality relation ${\mathsf{Eq}}_C : C \times C \to S$ given by
\[{\mathsf{Eq}}_C(c,c') ~=~ \begin{cases} \top \text{ if }c = c' \\
\bot \text{ otherwise} \end{cases}\]
\item If $F = F_1 \times F_2$, ${\mathsf{Rel}}(F)$ takes an $S$-relation $R : X \times Y \to S$ to:
\[\!\!\!\!\!\!\UseComputerModernTips\[email protected]{(F_1 X \times F_2 X) \times (F_1 Y \times F_2 Y) \ar[rrr]^-{\langle \pi_1 \times \pi_1,\pi_2 \times \pi_2 \rangle} & & & (F_1 X \times F_1 Y) \times (F_2 X \times F_2 Y) \ar[rrrrr]^-{{\mathsf{Rel}}(F_1)(R) \times {\mathsf{Rel}}(F_2)(R)} & & & & & S \times S \ar[r]^-{\bullet} & S}\]
The functoriality of this definition follows from the preservation of $\sqsubseteq$ by $\bullet$ (see Section~\ref{semiring}).
\item if $F = F_1 + F_2$, ${\mathsf{Rel}}(F)(R) : (F_1 X + F_2 X) \times (F_1 Y + F_2 Y ) \to S$ is defined by case analysis:
\begin{align*}
{\mathsf{Rel}}(F)(R)(\iota_i(u),\iota_j(v)) & ~=~ \begin{cases} {\mathsf{Rel}}(F_i)(R)(u,v) & \text{ if } i = j\\
\bot & \text{ otherwise} \end{cases}
\end{align*}
for $i,j \in \{1,2\}$, $u \in F_i X$ and $v \in F_j Y$. This definition generalises straightforwardly from binary to set-indexed coproducts.
\end{itemize}
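The inductive clauses above are directly implementable. The following Haskell sketch (ours; it deep-embeds elements of a polynomial functor with binary products and coproducts as shape trees, and is parametric in the semiring of truth values, of which only the multiplicative monoid and the bottom element are used) computes ${\mathsf{Rel}}(F)(R)$ by structural recursion:
\begin{verbatim}
-- Truth values: the lifting only uses the multiplicative monoid
-- (one, .*.) together with the bottom element zero.
class Semiring s where
  zero, one :: s
  (.*.)     :: s -> s -> s

-- Deep embedding of elements of F X, for F polynomial with
-- binary products/coproducts and constants drawn from a type c.
data Val c x
  = VVar x                     -- the identity functor
  | VConst c                   -- a constant functor
  | VPair (Val c x) (Val c x)  -- F1 x F2
  | VInl (Val c x)             -- left injection into F1 + F2
  | VInr (Val c x)             -- right injection into F1 + F2

type SRel s x y = x -> y -> s

-- Rel(F)(R), following the inductive clauses above.
lift :: (Semiring s, Eq c) => SRel s x y -> Val c x -> Val c y -> s
lift r (VVar x)    (VVar y)      = r x y
lift _ (VConst c)  (VConst c')   = if c == c' then one else zero
lift r (VPair a b) (VPair a' b') = lift r a a' .*. lift r b b'
lift r (VInl u)    (VInl v)      = lift r u v
lift r (VInr u)    (VInr v)      = lift r u v
lift _ _           _             = zero   -- mismatched injections
\end{verbatim}
Instantiating \texttt{Semiring} with $(\{\bot,\top\},\vee,\bot,\wedge,\top)$, $([0,1],+,0,*,1)$ or the tropical semiring recovers the three cases spelled out in the example below.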
\begin{remark}
A more general definition of relation lifting, which applies to arbitrary functors on ${\mathsf{Set}}$, is outside the scope of this paper. We note in passing that such a relation lifting could be defined by starting from a \emph{generalised predicate lifting} $\delta : F \circ {\mathsf{P}}_0 \Rightarrow {\mathsf{P}}_0 \circ F$ for the functor $F$, similar to the predicate liftings used in the work on coalgebraic modal logic \cite{Pattinson03}. Here, the contravariant functor ${\mathsf{P}}_0 : {\mathsf{Set}} \to {\mathsf{Set}}^{\mathsf{op}}$ takes a set $X$ to the hom-set ${\mathsf{Set}}(X,S)$. Future work will also investigate the relevance of the results in \cite{Ghani2011,Ghani2012} to a general definition of relation lifting in our setting. Specifically, the work in loc.\,cit.~shows how to construct truth-preserving predicate liftings and equality-preserving relation liftings for arbitrary functors on the base category of a \emph{Lawvere fibration}, to the total category of that fibration.
\end{remark}
For the remainder of this paper, we take $(S,+,0,\bullet,1)$ to be the partial semiring derived in Section~\ref{semiring} from a commutative, partially additive monad ${\mathsf{T}}$, and we view $S$ as the set of truth values.
In the case of the powerset monad, this corresponds to the standard view of relations as subsets, whereas in the case of the sub-probability distribution monad, this results in relations given by valuations in the interval $[0,1]$.
\begin{example}
Let $F: {\mathsf{Set}} \to {\mathsf{Set}}$ be given by $F X = 1 + A \times X$, with $A$ a set (of labels), and let $(S,+,0,\bullet,1)$ be the partial semiring with carrier ${\mathsf{T}} 1$ defined in Section~\ref{semiring}.
\begin{itemize}
\item For ${\mathsf{T}} = {\mathcal{P}}$, ${\mathsf{Rel}}(F)$ takes a (standard) relation $R \subseteq X \times Y$ to the relation
\[\{(\iota_1(*),\iota_1(*))\} \cup \{((a,x),(a,y)) \mid a \in A, (x,y) \in R \}\]
\item For ${\mathsf{T}} = {\mathcal{S}}$, ${\mathsf{Rel}}(F)$ takes $R : X \times Y \to [0,1]$ to the relation $R' : F X \times F Y \to [0,1]$ given by
\[ R'(\iota_1(*),\iota_1(*)) = 1 ~\qquad~
R'((a,x),(a,y)) = R(x,y) ~\qquad~
R'(u,v) = 0 ~\text{ in all other cases}
\]
\item For ${\mathsf{T}} = {\T}_W$, ${\mathsf{Rel}}(F)$ takes $R : X \times Y \to \mathbb N^\infty$ to the relation $R' : F X \times F Y \to \mathbb N^\infty$ given by
\[
R'(\iota_1(*),\iota_1(*)) = 0 ~\qquad~
R'((a,x),(a,y)) = R(x,y) ~\qquad~
R'(u,v) = \infty ~\text{ in all other cases}
\]
\end{itemize}
\end{example}
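The three instances above can be rendered uniformly in code. The following Python sketch is our own illustration, not part of the formal development: the encodings of $\iota_1(*)$ as the string \texttt{'*'} and of $\iota_2(a,x)$ as a pair are hypothetical conventions, and the lifting is parametrised by the top and bottom elements of the semiring.
\begin{verbatim}
# A sketch of Rel(F) for F X = 1 + A x X, parametrised by the
# semiring's top and bottom elements. '*' encodes iota_1(*),
# a pair (a, x) encodes iota_2(a, x).
def lift(R, top, bot):
    # R: function taking a pair (x, y) to a semiring value
    def lifted(u, v):
        if u == '*' and v == '*':
            return top                  # Eq on the 1-summand
        if isinstance(u, tuple) and isinstance(v, tuple):
            (a, x), (b, y) = u, v
            # Eq(A)(a,b) is top or bot; since top is the unit of the
            # multiplication, the product case reduces to R(x, y)
            return R(x, y) if a == b else bot
        return bot                      # different coproduct injections
    return lifted

# The three instances:
#   T = P:    lift(R, True, False)
#   T = S:    lift(R, 1.0, 0.0)
#   T = T_W:  lift(R, 0, float('inf'))
\end{verbatim}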
\section{From Bisimulation to Traces}
\label{linear-time}
Throughout this section we fix a commutative, partially additive monad ${\mathsf{T}} : {\mathsf{Set}} \to {\mathsf{Set}}$ and assume, as in the previous section, that the natural preorder $\sqsubseteq$ induced by the partial commutative semiring obtained in Section~\ref{semiring} has the multiplication unit $\eta_1(*) \in {\mathsf{T}} 1$ as top element. Furthermore, we assume that this preorder is an \emph{$\omega^{{\mathsf{op}}}$-chain complete} partial order, where $\omega^{{\mathsf{op}}}$-chain completeness amounts to any decreasing chain $x_1 \sqsupseteq x_2 \sqsupseteq \ldots$ having a greatest lower bound $\sqcap_{i \in \omega} x_i$. These assumptions are clearly satisfied by the orders in Example~\ref{example-orders}.
We now show how the liftings of polynomial functors to the category of generalised relations valued in the partial semiring ${\mathsf{T}} 1$ (as defined in Section~\ref{gen-rel-lifting}), combined with so-called \emph{extension liftings} which arise canonically from the monad ${\mathsf{T}}$, can be used to give an account of the linear-time behaviour of a state in a coalgebra with branching. The type of such a coalgebra can be any composition involving polynomial endofunctors and the branching monad ${\mathsf{T}}$, although compositions of type ${\mathsf{T}} \circ F$, $G \circ {\mathsf{T}}$ and $G \circ {\mathsf{T}} \circ F$ with $F$ and $G$ polynomial endofunctors are particularly emphasised in what follows.
We begin with some informal motivation. When ${\mathsf{Rel}}$ is the standard category of binary relations, recall from Section~\ref{section-coalgebras} that an $F$-bisimulation is simply a ${\mathsf{Rel}}(F)$-coalgebra, and that the largest $F$-bisimulation between two $F$-coalgebras $(C,\gamma)$ and $(D,\delta)$ can be obtained as the greatest fixpoint of the monotone operator on ${\mathsf{Rel}}_{C \times D}$ which takes a relation $R$ to the relation $(\gamma \times \delta)^*( {\mathsf{Rel}}(F)(R))$. Generalising the notion of $F$-bisimulation from standard relations to ${\mathsf{T}} 1$-relations makes little sense when the systems of interest are $F$-coalgebras. However, when considering say, coalgebras of type ${\mathsf{T}} \circ F$, it turns out that liftings of $F$ to the category of ${\mathsf{T}} 1$-relations (as defined in Section~\ref{gen-rel-lifting}) can be used to describe the \emph{linear-time behaviour} of states in such a coalgebra, when combined with suitable liftings of ${\mathsf{T}}$ to the same category of relations. To see why, let us consider labelled transition systems viewed as coalgebras of type ${\mathcal{P}}(1 + A \times {\mathsf{Id}})$. In such a coalgebra $\gamma : C \to {\mathcal{P}}(1 + A \times C)$, explicit termination is modelled via transitions $c \to \iota_1(*)$, whereas deadlock (absence of a transition) is modelled as $\gamma(c) = \emptyset$. In this case, ${\mathsf{Rel}}({\mathcal{P}}) \circ {\mathsf{Rel}}(1 + A \times {\mathsf{Id}})$ is naturally isomorphic to ${\mathsf{Rel}}({\mathcal{P}}(1 + A \times {\mathsf{Id}}))$\,\footnote{A similar observation holds more generally for ${\mathcal{P}} \circ F$ with $F$ a polynomial endofunctor. In general, only a natural transformation ${\mathsf{Rel}}(F \circ G) \Rightarrow {\mathsf{Rel}}(F) \circ {\mathsf{Rel}}(G)$ exists, see \cite[Exercise~4.4.6]{JacobsBook}.}, and takes a relation $R \subseteq X \times Y$ to the relation $R' \subseteq {\mathcal{P}}(1 + A \times X) \times {\mathcal{P}}(1 + A \times Y)$ given by
\[(U,V) \in R' ~~~\text{ if and only if }~~~ \begin{cases}
\text{if } \iota_1(*) \in U \text{ then } \iota_1(*) \in V , \text{ and conversely}\\
\text{if } (a,x) \in U \text{ then there exists } (a,y) \in V \text{ with } (x,y) \in R, \text{ and conversely}
\end{cases}
\]
Thus, the largest ${\mathcal{P}}(1 + A \times {\mathsf{Id}})$-bisimulation between two coalgebras $(C,\gamma)$ and $(D,\delta)$ can be computed as the greatest fixpoint of the operator on ${\mathsf{Rel}}_{C,D}$ obtained as the composition
\begin{equation}
\label{el}
\UseComputerModernTips\xymatrix@+0.6pc{
R \subseteq C \times D \ar@{|->}[r]^-{{\mathsf{Rel}}(F)} & R_1 \subseteq F C \times F D \ar@{|->}[r]^-{{\mathsf{Rel}}({\mathcal{P}})} & R_2 \subseteq {\mathcal{P}}(F C) \times {\mathcal{P}} (F D) \ar@{|->}[r]^-{(\gamma \times \delta)^*} & R'\subseteq C \times D}
\end{equation}
where $F = 1 + A \times {\mathsf{Id}}$. Note first that ${\mathsf{Rel}}({\mathcal{P}})$ (defined in Section~\ref{rel-lifting} for an arbitrary endofunctor on ${\mathsf{Set}}$) takes a relation $R \subseteq X \times Y$ to the relation $R' \subseteq {\mathcal{P}}(X) \times {\mathcal{P}}(Y)$ given by
\[(U,V) \in R' \text{ ~if and only if~ for all } x \in U \text{ there exists } y \in V \text{ with } (x,y) \in R, \text{ and conversely}\]
Now consider the effect of replacing ${\mathsf{Rel}}({\mathcal{P}})$ in (\ref{el}) with the lifting $L : {\mathsf{Rel}} \to {\mathsf{Rel}}$ that takes a relation $R \subseteq X \times Y$ to the relation $R' \subseteq {\mathcal{P}}(X) \times Y$ given by
\[(U,y) \in R' \text{ ~if and only if~ there exists } x \in U \text{ with } (x,y) \in R\]
To do so, we must change the type of the coalgebra $(D,\delta)$ from ${\mathcal{P}} \circ F$ to just $F$. A closer look at the resulting operator on ${\mathsf{Rel}}_{C,D}$ reveals that it can be used to test for the existence of a matching trace: each state of the $F$-coalgebra $(D,\delta)$ can be assigned a \emph{maximal trace}, i.e.~an element of the final $F$-coalgebra, by finality. In particular, when $F = 1 + A \times {\mathsf{Id}}$, maximal traces are either finite or infinite sequences of elements of $A$. Thus, the greatest fixpoint of the newly defined operator on ${\mathsf{Rel}}_{C,D}$ corresponds to the relation on $C \times D$ given by
\begin{eqnarray*}c \ni_{\mathsf{tr}} d \text{ ~if and only if~ there exists a sequence of choices of transitions starting from } c \in C \text{ that leads to}\\
\qquad \qquad \quad \text{ exactly the same maximal trace (element of $A^* \cup A^\omega)$ as the single trace of } d \in D
\end{eqnarray*}
This relation models the ability of the state $c$ to exhibit the same trace as that of $d$.
The remainder of this section formalises the above intuitions, and generalises them to arbitrary monads ${\mathsf{T}}$ and polynomial endofunctors $F$, as well as to arbitrary compositions involving the monad ${\mathsf{T}}$ and polynomial endofunctors. We begin by restricting attention to coalgebras of type ${\mathsf{T}} \circ F$, with the monad ${\mathsf{T}}$ capturing branching and the endofunctor $F$ describing the structure of individual transitions. In this case it is natural to view the elements of the final $F$-coalgebra as possible \emph{linear-time} observable behaviours of states in ${\mathsf{T}} \circ F$-coalgebras. Similarly to the above discussion, we let $(C,\gamma)$ and $(D,\delta)$ denote a ${\mathsf{T}} \circ F$-coalgebra and respectively an $F$-coalgebra. The lifting of $F$ to ${\mathsf{T}} 1$-relations will be used as part of an operator on ${\mathsf{Rel}}_{C,D}$. In order to generalise the lifting $L$ above to arbitrary monads ${\mathsf{T}}$, we recall the following result from \cite{Kock12}, which assumes a strong monad ${\mathsf{T}}$ on a cartesian closed category.
\begin{proposition}[{\cite[Proposition~4.1]{Kock12}}]
\label{prop-kock}
Let $(B,\beta)$ be a ${\mathsf{T}}$-algebra. For any $f : X \times Y \to B$, there exists a unique $1$-linear $\overline{f} : {\mathsf{T}} X \times Y \to B$ making the following triangle commute:
\[\UseComputerModernTips\xymatrix{
{\mathsf{T}} X \times Y \ar[r]^-{\overline{f}} & B \\
X \times Y \ar[u]^-{\eta_X \times 1_Y} \ar[ur]_-{f}
}\]
\end{proposition}
In the above, \emph{$1$-linearity} is linearity in the first variable. More precisely, for ${\mathsf{T}}$-algebras $(A,\alpha)$ and $(B,\beta)$, a map $f : A \times Y \to B$ is called \emph{$1$-linear} if the following diagram commutes:
\[\UseComputerModernTips\xymatrix{
{\mathsf{T}}(A) \times Y \ar[r]^-{{\mathsf{st}}'_{A,Y}} \ar[d]_-{\alpha \times 1_Y} & {\mathsf{T}}(A \times Y) \ar[r]^-{{\mathsf{T}}(f)} & {\mathsf{T}}(B) \ar[d]^-{\beta}\\
A \times Y \ar[rr]_-{f} & & B
}\]
Clearly $1$-linearity should be expected of the lifting $L(R) : {\mathsf{T}} X \times Y \to \T1$ of a relation $R : X \times Y \to {\mathsf{T}} 1$, as this amounts to $L(R)$ commuting with the ${\mathsf{T}}$-algebra structures $({\mathsf{T}} X,\mu_X)$ and $({\mathsf{T}} 1,\mu_1)$. Given this, the diagram of Proposition~\ref{prop-kock} forces the following definition of the extension lifting.
\begin{definition}
The \emph{extension lifting} $L_{\mathsf{T}} : {\mathsf{Rel}} \to {\mathsf{Rel}}$ is the functor taking a relation $R : X \times Y \to \T1$ to its unique $1$-linear extension $\overline{R} : {\mathsf{T}} X \times Y \to \T1$.
\end{definition}
\begin{remark}
It follows from \cite{Kock12} that a direct definition of the relation $\overline{R} : {\mathsf{T}} X \times Y \to {\mathsf{T}} 1$ is as the composition
\[\UseComputerModernTips\xymatrix@+0.3pc{
{\mathsf{T}} X \times Y \ar[r]^-{{\mathsf{st}}'_{X,Y}} & {\mathsf{T}}(X \times Y) \ar[r]^-{{\mathsf{T}} (R)} & {\mathsf{T}}^2 1 \ar[r]^-{\mu_1} & {\mathsf{T}} 1
}\]
This also yields functoriality of $L_{\mathsf{T}}$, which follows from the functoriality of its restriction to each fibre category ${\mathsf{Rel}}_{X,Y}$, as proved next.
\end{remark}
\begin{proposition}
\label{prop-functoriality}
The mapping $R \in {\mathsf{Rel}}_{X,Y} \mapsto \overline{R} \in {\mathsf{Rel}}_{{\mathsf{T}} X,Y}$ is functorial.
\end{proposition}
\begin{proof}[Proof (Sketch)]
Let $R,R' \in {\mathsf{Rel}}_{X,Y}$ be such that $R \sqsubseteq R'$. Hence, there exists $S \in {\mathsf{Rel}}_{X,Y}$ such that $R + S = R'$ (pointwise). To show that $\overline{R} \sqsubseteq \overline{R'}$, it suffices to show that $\mu_1 \circ {\mathsf{T}}(R) \sqsubseteq \mu_1 \circ {\mathsf{T}}(R')$ (pointwise). To this end, we note that commutativity of the map $\delta$ with the monad multiplication, proved in \cite[Lemma~15\,(iii)]{CoumansJ2011} and captured by the commutativity of the lower diagram below (via the plain arrows)
\[\UseComputerModernTips\xymatrix{
{\mathsf{T}}^2 1 \ar[r]^-{\mu_1} & {\mathsf{T}} 1\\
{\mathsf{T}}^2(1 + 1) \ar[r]^-{\mu_{1 + 1}} \ar@<+1ex>[d]^-{{\mathsf{T}} \delta} \ar[u]^-{{\mathsf{T}}^2 !} & {\mathsf{T}}(1 + 1) \ar@<-1ex>[dd]_-{\delta} \ar[u]^-{{\mathsf{T}} !}\\
{\mathsf{T}}({\mathsf{T}} 1 \times {\mathsf{T}} 1) \ar[d]^-{\langle {\mathsf{T}} \pi_1,{\mathsf{T}} \pi_2 \rangle} \ar@<+1ex>@{-->}[u]^-{{\mathsf{T}} q_{1,1}}\\
{\mathsf{T}}^2 1 \times {\mathsf{T}}^2 1 \ar[r]_-{\mu_1 \times \mu_1} & {\mathsf{T}} 1 \times {\mathsf{T}} 1 \ar@<-1ex>@{-->}[uu]_-{q_{1,1}}
}\]
also yields commutativity of the whole diagram (via the dashed arrows). This formalises the commutativity of $+$ (defined as ${\mathsf{T}} ! \circ q_{1,1}$) with the monad multiplication. Now pre-composing this commutative diagram (dashed arrows) with the map
\[\UseComputerModernTips\xymatrix{
{\mathsf{T}}(X \times Y) \ar[r] & {\mathsf{T}}({\mathsf{T}} 1 \times {\mathsf{T}} 1)
}\]
given by the image under ${\mathsf{T}}$ of the map $(x,y) \mapsto \langle R(x,y),S(x,y) \rangle$ yields
\[(\mu_1 \circ {\mathsf{T}}(R)) + (\mu_1 \circ {\mathsf{T}}(S)) = \mu_1 \circ {\mathsf{T}}(R+S) = \mu_1 \circ {\mathsf{T}} R'\]
and therefore, using the definition of $\sqsubseteq$, $\mu_1 \circ {\mathsf{T}}(R) \sqsubseteq \mu_1 \circ {\mathsf{T}}(R')$. This concludes the proof.
\end{proof}
Thus, $L_{\mathsf{T}}$ is a functor making the following diagram commute:
\[\UseComputerModernTips\xymatrix{
{\mathsf{Rel}} \ar[d]_-{q} \ar[r]^-{L_{\mathsf{T}}} & {\mathsf{Rel}} \ar[d]^-{q} \\
{\mathsf{Set}} \times {\mathsf{Set}} \ar[r]_-{{\mathsf{T}} \times {\mathsf{Id}}} & {\mathsf{Set}} \times {\mathsf{Set}}}\]
We are finally ready to give an alternative account of maximal traces of ${\mathsf{T}} \circ F$-coalgebras.
\begin{definition}
\label{max-trace-map}
Let $(C,\gamma)$ denote a ${\mathsf{T}} \circ F$-coalgebra, and let $(Z,\zeta)$ denote the final $F$-coalgebra. The \emph{maximal trace map ${\mathsf{tr}}_\gamma : C \to ({\mathsf{T}} 1)^Z$ of $\gamma$} is the exponential transpose of the greatest fixpoint $R : C \times Z \to \T1$ of the operator ${\mathcal{O}} : {\mathsf{Rel}}_{C,Z} \to {\mathsf{Rel}}_{C,Z}$ given by the composition
\[\UseComputerModernTips\xymatrix@+0.3pc{
{\mathsf{Rel}}_{C,Z} \ar[r]^-{{\mathsf{Rel}}(F)} & {\mathsf{Rel}}_{F C,F Z} \ar[r]^-{L_{\mathsf{T}}} & {\mathsf{Rel}}_{{\mathsf{T}}(F C), F Z} \ar[r]^-{(\gamma \times \zeta)^*} & {\mathsf{Rel}}_{C,Z}
}\]
\end{definition}
The above definition appeals to the existence of least fixpoints in chain-complete partial orders, as formalised in the following fixpoint theorem from \cite{Priestley2002}.
\begin{theorem}[{\cite[8.22]{Priestley2002}}]
Let $P$ be a complete partial order and let ${\mathcal{O}} : P \to P$ be order-preserving. Then ${\mathcal{O}}$ has a least fixpoint.
\end{theorem}
Definition~\ref{max-trace-map} makes use of this result applied to the \emph{dual} of the order $\sqsubseteq$. Our assumption that $\sqsubseteq$ is $\omega^{\mathsf{op}}$-chain complete makes the dual order a complete partial order. Monotonicity of the operator in Definition~\ref{max-trace-map} is an immediate consequence of the functoriality of ${\mathsf{Rel}}(F)$, $L_{\mathsf{T}}$ and $(\gamma \times \zeta)^*$.
\cite{Priestley2002} also gives a construction for the least fixpoint of an order-preserving operator on a complete partial order, which involves taking a limit over an ordinal-indexed chain. Instantiating this construction to the dual of the order $\sqsubseteq$ yields an ordinal-indexed sequence of relations $(R_\alpha)$, where:
\begin{itemize}
\item $R_0 = \top$ (i.e.~the relation on $C \times Z$ given by $(c,z) \mapsto 1$),
\item $R_{\alpha+1} = {\mathcal{O}}(R_\alpha)$,
\item $R_\alpha = \sqcap_{\beta < \alpha} R_{\beta}$, if $\alpha$ is a limit ordinal.
\end{itemize}
\begin{remark}
While in the case ${\mathsf{T}} = {\mathcal{P}}$, restricting to finite-state coalgebras $(C,\gamma)$ and $(D,\delta)$ results in the above sequence of relations stabilising in a finite number of steps, for ${\mathsf{T}} = {\mathcal{S}}$ or ${\mathsf{T}} = {\T}_W$ this is not in general the case. However, for probabilistic or weighted computations, an approximation of the greatest fixpoint may be sufficient for verification purposes, since a threshold can be provided as part of a verification task.
\end{remark}
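To make the finite stabilisation concrete, the following Python sketch (our own illustration, with the same hypothetical encodings as in the sketch of Section~\ref{gen-rel-lifting}: \texttt{'*'} for $\iota_1(*)$, a pair for $\iota_2(a,x)$) iterates the operator ${\mathcal{O}}$ of Definition~\ref{max-trace-map} for ${\mathsf{T}} = {\mathcal{P}}$ and $F = 1 + A \times {\mathsf{Id}}$, starting from the top relation; a finite $F$-coalgebra \texttt{delta} stands in for $(Z,\zeta)$, as justified by the following remark.
\begin{verbatim}
# gamma: P(1 + A x Id)-coalgebra, mapping a state to a set of
# elements '*' or (a, next); delta: F-coalgebra, mapping a state
# to a single such element (a finitely presented trace).
def lift_F(R, u, v):                # Rel(F)(R) on 1 + A x Id
    if u == '*' and v == '*':
        return True
    return (isinstance(u, tuple) and isinstance(v, tuple)
            and u[0] == v[0] and (u[1], v[1]) in R)

def O(R, gamma, delta):             # (gamma x delta)^* . L_P . Rel(F)
    return {(c, d) for c in gamma for d in delta
            if any(lift_F(R, u, delta[d]) for u in gamma[c])}

def gfp(gamma, delta):
    R = {(c, d) for c in gamma for d in delta}   # R_0 = top
    while True:
        R_next = O(R, gamma, delta)
        if R_next == R:
            return R
        R = R_next

# Example: a state that may terminate or loop on 'a', matched
# against the single infinite trace aaa...
gamma = {'c': {'*', ('a', 'c')}}
delta = {'z': ('a', 'z')}
assert ('c', 'z') in gfp(gamma, delta)
\end{verbatim}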
\begin{remark}
By replacing the $F$-coalgebra $(Z,\zeta)$ by $(I,\alpha^{-1})$ with $(I,\alpha)$ an \emph{initial} $F$-algebra, one obtains an alternative account of \emph{finite} traces of states in ${\mathsf{T}} \circ F$-coalgebras, with the \emph{finite trace map} ${\mathsf{ftr}}_\gamma : C \to ({\mathsf{T}} 1)^I$ of a ${\mathsf{T}}\circ F$-coalgebra $(C,\gamma)$ being obtained via the greatest fixpoint of essentially the same operator ${\mathcal{O}}$, but this time on ${\mathsf{Rel}}_{C,I}$. In fact, one can use any $F$-coalgebra in place of $(Z,\zeta)$, and for a specific verification task, a coalgebra with a finite state space, encoding a given linear-time behaviour, might be all that is required.
\end{remark}
\begin{remark}
The choice of functor $F$ directly impacts the notion of linear-time behaviour. For example, by regarding labelled transition systems as coalgebras of type ${\mathcal{P}}(A \times {\mathsf{Id}})$ instead of ${\mathcal{P}}(1 + A \times {\mathsf{Id}})$ (i.e.~not modelling successful termination explicitly), finite traces are no longer accounted for -- the elements of the final $F$-coalgebra are given by infinite sequences of elements of $A$. This should not be regarded as a drawback; in fact, it illustrates the flexibility of our approach.
\end{remark}
\begin{example}
Let $F$ denote an arbitrary polynomial functor (e.g.~$1 + A \times {\mathsf{Id}}$).
\begin{itemize}
\item For ${\mathsf{T}} = {\mathcal{P}}$, the extension lifting $L_{\mathcal{P}} : {\mathsf{Rel}} \to {\mathsf{Rel}}$ takes a (standard) relation $R \subseteq X \times Y$ to the relation $L_{\mathcal{P}}(R) \subseteq {\mathcal{P}} (X) \times Y$ given by
\[(U,y) \in L_{\mathcal{P}}(R) \text{ ~if and only if~ there exists } x \in U \text{ with } (x,y) \in R\]
As a result, the greatest fixpoint of ${\mathcal{O}}$ relates a state $c$ in a ${\mathcal{P}} \circ F$-coalgebra $(C,\gamma)$ with a state $z$ of the final $F$-coalgebra if and only if there exists a sequence of choices in the unfolding of $\gamma$ starting from $c$, that results in an $F$-behaviour bisimilar to $z$. This was made more precise in \cite{cirstea-11}, where infinite two-player games were developed for verifying whether a state of a ${\mathcal{P}} \circ F$-coalgebra has a certain maximal trace (element of the final $F$-coalgebra).
\item For ${\mathsf{T}} = {\mathcal{S}}$, the extension lifting $L_{\mathcal{S}} : {\mathsf{Rel}} \to {\mathsf{Rel}}$ takes a valuation $R : X \times Y \to [0,1]$ to the valuation $L_{\mathcal{S}}(R) : {\mathcal{S}}(X) \times Y \to [0,1]$ given by
\[L_{\mathcal{S}}(R)(\varphi,y) = \sum\limits_{x \in \mathrm{supp}(\varphi)} \varphi(x) \cdot R(x,y)\]
Thus, the greatest fixpoint of ${\mathcal{O}}$ yields, for each state in a ${\mathcal{S}} \circ F$-coalgebra and each potential maximal trace $z$, the probability of this trace being exhibited. As computing these probabilities amounts to multiplying infinitely-many probability values, the probability of an infinite trace will often turn out to be $0$ (unless from some point in the unfolding of a particular state, probability values of $1$ are associated to the individual transitions that match a particular infinite trace). This may appear as a deficiency of our framework, and one could argue that a measure-theoretic approach, whereby a probability measure is derived from the probabilities of finite prefixes of infinite traces, would be more appropriate. Future work will investigate the need for a measure-theoretic approach. At this point, we simply point out that in a future extension of the present approach to linear-time logics (where individual maximal traces are to be replaced by linear-time temporal logic formulas), this deficiency is expected to disappear.
\item For ${\mathsf{T}} = {\mathsf{T}}_W$, the extension lifting $L_W : {\mathsf{Rel}} \to {\mathsf{Rel}}$ takes a \emph{weighted relation} $R : X \times Y \to W$ to the relation $L_W(R) : {\mathsf{T}}_W(X) \times Y \to W$ given by
\[L_W(R)(f,y) = \min_{x \in \mathrm{supp}(f)} (f(x) + R(x,y))\]
for $f : X \to W$ and $y \in Y$. Thus, the greatest fixpoint of ${\mathcal{O}}$ maps a pair $(c,z)$, with $c$ a state in a ${\mathsf{T}}_W \circ F$-coalgebra and $z$ a maximal trace, to the \emph{cost} (computed via the $\min$ function) of exhibiting that trace. The case of weighted computations is somewhat different from our other two examples of branching types, in that the computation of the fixpoint starts from a relation that maps each pair of states $(c,z)$ to the value $0 \in \mathbb N^\infty$ (the top element for $\sqsubseteq$), and refines this down (w.r.t.~the $\sqsubseteq$ order) through stepwise unfolding of the coalgebra structures $\gamma$ and $\zeta$.
\end{itemize}
\end{example}
The approach presented above also applies to coalgebras of type $G \circ {\mathsf{T}}$ with $G$ a polynomial endofunctor, and more generally to coalgebras whose type is obtained as the composition of polynomial endofunctors and the monad ${\mathsf{T}}$, with possibly several occurrences of ${\mathsf{T}}$ in this composition. In the case of $G \circ {\mathsf{T}}$-coalgebras, instantiating our approach yields different results from the extension semantics proposed in \cite{JacobsSS12}. Specifically, the instantiation involves taking $(Z,\zeta)$ to be a final $G$-coalgebra and $(C,\gamma)$ to be an arbitrary $G \circ {\mathsf{T}}$-coalgebra, and considering the monotone operator on ${\mathsf{Rel}}_{C,Z}$ given by the composition
\begin{equation}
\label{o1}
\UseComputerModernTips\xymatrix@+0.3pc{
{\mathsf{Rel}}_{C,Z} \ar[r]^-{L_{\mathsf{T}}} & {\mathsf{Rel}}_{{\mathsf{T}} C,Z} \ar[r]^-{{\mathsf{Rel}}(G)} & {\mathsf{Rel}}_{G({\mathsf{T}} C), G Z} \ar[r]^-{(\gamma \times \zeta)^*} & {\mathsf{Rel}}_{C,Z}
}
\end{equation}
The following example illustrates the difference between our approach and that of \cite{JacobsSS12}.
\begin{example}
For $G = 2 \times {\mathsf{Id}}^A$ with $A$ a finite alphabet and ${\mathsf{T}} = {\mathcal{P}}$, $G \circ {\mathsf{T}}$-coalgebras are non-deterministic automata, whereas the elements of the final $G$-coalgebra are given by functions $z : A^* \to 2$ and correspond to languages over $A$. In this case, the greatest fixpoint of the operator in (\ref{o1}) maps a pair $(c,z)$, with $c$ a state of the automaton and $z$ a language over $A$, to $\top$ if and only if there exists a sequence of choices in the unfolding of the automaton starting from $c$ that results in a deterministic automaton which accepts the language denoted by $z$. Taking the union over all $z$ such that $(c,z)$ is mapped to $\top$ now gives the language accepted by the non-deterministic automaton with $c$ as initial state, but only under the assumption that for each $a \in A$, an $a$-labelled transition exists from any state of the automaton. This example points to the need to further generalise our approach, so that in particular it can also be applied to pairs consisting of a $G \circ {\mathsf{T}}$-coalgebra and a $G'$-coalgebra, with $G'$ different from $G$. This would involve considering relation liftings for pairs of (polynomial) endofunctors. We conjecture that taking $G$ and ${\mathsf{T}}$ as above and $G' = 1 + A \times {\mathsf{Id}}$ would allow us to recover the notion of acceptance of a finite word over $A$ by a non-deterministic automaton.
\end{example}
Finally, we sketch the general case of coalgebras whose type is obtained as the composition of several endofunctors on ${\mathsf{Set}}$, one of which is a monad ${\mathsf{T}}$ that accounts for the presence of branching in the system, while the remaining endofunctors are polynomial and jointly determine the notion of linear-time behaviour. For simplicity of presentation, we only consider coalgebras of type $G \circ {\mathsf{T}} \circ F$, with the final $G \circ F$-coalgebra $(Z,\zeta)$ providing the domain of possible linear-time behaviours.
\begin{definition}
\label{linear-time-beh}
The \emph{linear-time behaviour} of a state in a coalgebra $(C,\gamma)$ of type $G \circ {{\mathsf{T}}} \circ F$ is the greatest fixpoint of an operator ${\mathcal{O}}$ on ${\mathsf{Rel}}_{C,Z}$ defined by the composition:
\begin{equation}
\label{o2}
\UseComputerModernTips\xymatrix@+0.3pc{
{\mathsf{Rel}}_{C,Z} \ar[r]^-{{\mathsf{Rel}}(F)} & {\mathsf{Rel}}_{F C,F Z} \ar[r]^-{L_{\mathsf{T}}} & {\mathsf{Rel}}_{{\mathsf{T}}(F C), F Z} \ar[r]^-{{\mathsf{Rel}}(G)} & {\mathsf{Rel}}_{G({\mathsf{T}} F C), G F Z} \ar[r]^-{(\gamma \times \zeta)^*} & {\mathsf{Rel}}_{C,Z}
}
\end{equation}
\end{definition}
The greatest fixpoint of ${\mathcal{O}}$ measures the extent to which a state in a $G \circ {\mathsf{T}} \circ F$-coalgebra can exhibit a given linear behaviour (element of the final $G \circ F$-coalgebra). Definition~\ref{linear-time-beh} generalises straightforwardly to coalgebraic types given by arbitrary compositions of polynomial endofunctors and the monad ${\mathsf{T}}$, with the extension lifting $L_{\mathsf{T}}$ being used once for each occurrence of ${\mathsf{T}}$ in such a composition.
\begin{example}
\label{input-output}
Coalgebras of type $G \circ {\mathsf{T}} \circ F$, where $G = (1 + {\mathsf{Id}})^A$ and $F = {\mathsf{Id}} \times B$, model systems with branching, with both inputs (from a finite set $A$) and outputs (in a set $B$). In this case, the possible linear behaviours are given by special trees, with both finite and infinite branches, whose edges are labelled by elements of $A$ (from each node, one outgoing edge for each $a \in A$), and whose nodes (with the exception of the root) are either labelled by $* \in 1$ (for leaves) or by an element of $B$ (for non-leaves). The linear-time behaviour of a state in a $G \circ {\mathsf{T}} \circ F$-coalgebra is then given by:
\begin{itemize}
\item the set of trees that can be exhibited from that state, when ${{\mathsf{T}} = {\mathcal{P}}}$,
\item the probability of exhibiting each tree (with the probabilities corresponding to different branches being \emph{multiplied} when computing this probability), when ${{\mathsf{T}} = {\mathcal{S}}}$, and
\item the minimum cost of exhibiting each tree (with the costs of different branches being \emph{added} when computing this cost), when ${{\mathsf{T}} = {\T}_W}$.
\end{itemize}
\end{example}
The precise connection between our approach and earlier work in \cite{HasuoJS07,cirstea-11,JacobsSS12} is yet to be explored. In particular, our assumptions are different from those of loc.\,cit., for example in \cite{HasuoJS07} the DCPO$_\bot$-enrichedness of the Kleisli category of ${\mathsf{T}}$ is required.
\begin{remark}
Our approach does not \emph{directly} apply to the probability distribution monad (defined similarly to the sub-probability distribution monad, but with probabilities adding up to exactly $1$), as this monad does not satisfy the condition ${\mathsf{T}} \emptyset = 1$ of Definition~\ref{additive}. However, systems where branching is described using probability distributions can still be dealt with, by regarding all probability distributions as sub-probability distributions.
\end{remark}
In the remainder of this section, we briefly explore the usefulness of an operator similar to ${\mathcal{O}}$, which employs a similar extension lifting arising from the \emph{double strength} of the monad ${\mathsf{T}}$. We begin by noting that a result similar to Proposition~\ref{prop-kock} is proved in \cite{Kock12} for a commutative monad on a cartesian closed category.
\begin{proposition}[{\cite[Proposition~9.3]{Kock12}}]
Let $(B,\beta)$ be a ${\mathsf{T}}$-algebra. Then any $f : X \times Y \to B$ extends uniquely along $\eta_X \times \eta_Y$ to a bilinear $\tilde{f} : {\mathsf{T}} X \times {\mathsf{T}} Y \to B$, making the following triangle commute:
\[\UseComputerModernTips\xymatrix{
{\mathsf{T}} X \times {\mathsf{T}} Y \ar[r]^-{\tilde{f}} & B \\
X \times Y \ar[u]^-{\eta_X \times \eta_Y} \ar[ur]_-{f}
}\]
\end{proposition}
Here, bilinearity amounts to linearity in each argument.
\begin{definition}
For a commutative monad ${\mathsf{T}} : {\mathsf{Set}} \to {\mathsf{Set}}$, the \emph{double extension lifting} $L_{\mathsf{T}}' : {\mathsf{Rel}} \to {\mathsf{Rel}}$ is the functor taking a relation $R : X \times Y \to \T1$ to its unique bilinear extension $\tilde{R} : {\mathsf{T}} X \times {\mathsf{T}} Y \to \T1$.
\end{definition}
\begin{remark}
\label{alt-lifting}
An alternative definition of $L_{\mathsf{T}}'$ is as the composition of $L_{\mathsf{T}}$ with a dual lifting, which takes a relation $R : X \times Y \to {\mathsf{T}} 1$ to its unique $2$-linear extension $\overline{R} : X \times {\mathsf{T}} Y \to {\mathsf{T}} 1$.
\end{remark}
\begin{remark}
Again, it can be shown that a direct definition of the relation $\tilde{R} : {\mathsf{T}} X \times {\mathsf{T}} Y \to {\mathsf{T}} 1$ is as the composition
\[\UseComputerModernTips\xymatrix@+0.3pc{
{\mathsf{T}} X \times {\mathsf{T}} Y \ar[r]^-{{\mathsf{dst}}_{X,Y}} & {\mathsf{T}}(X \times Y) \ar[r]^-{{\mathsf{T}} (R)} & {\mathsf{T}}^2 1 \ar[r]^-{\mu_1} & {\mathsf{T}} 1
}\]
\end{remark}
\begin{proposition}
\label{prop-functoriality-dual}
The mapping $R \in {\mathsf{Rel}}_{X,Y} \mapsto \overline{R} \in {\mathsf{Rel}}_{X,{\mathsf{T}} Y}$ is functorial.
\end{proposition}
We now fix \emph{two} ${\mathsf{T}} \circ F$-coalgebras $(C,\gamma)$ and $(D,\delta)$ and explore the greatest fixpoint of the operator ${\mathcal{O}}' : {\mathsf{Rel}}_{C,D} \to {\mathsf{Rel}}_{C,D}$ defined by the composition
\[\UseComputerModernTips\xymatrix@+0.3pc{
{\mathsf{Rel}}_{C,D} \ar[r]^-{{\mathsf{Rel}}(F)} & {\mathsf{Rel}}_{F C,F D} \ar[r]^-{L_{\mathsf{T}}'} & {\mathsf{Rel}}_{{\mathsf{T}}(F C), {\mathsf{T}}(F D)} \ar[r]^-{(\gamma \times \delta)^*} & {\mathsf{Rel}}_{C,D}
}\]
As before, the operator ${\mathcal{O}}'$ is monotone and therefore admits a greatest fixpoint. We argue that this fixpoint also yields useful information regarding the linear-time behaviour of states in ${\mathsf{T}} \circ F$-coalgebras. Moreover, this generalises to coalgebras whose types are arbitrary compositions of polynomial functors and the branching monad ${\mathsf{T}}$. This is expected to be of relevance when extending the linear-time view presented here to linear-time logics and associated formal verification techniques. The connection to formal verification constitutes work in progress, but the following examples motivate our claim that the lifting $L_{\mathsf{T}}'$ is worth further exploration.
\begin{example}Let $F : {\mathsf{Set}} \to {\mathsf{Set}}$ be a polynomial endofunctor, describing some linear-time behaviour.
\begin{enumerate}
\item For non-deterministic systems (i.e.~${\mathcal{P}} \circ F$-coalgebras), the greatest fixpoint of ${\mathcal{O}}'$ relates two states if and only if they admit a common maximal trace.
\item For probabilistic systems (i.e.~${\mathcal{S}} \circ F$-coalgebras), the greatest fixpoint of ${\mathcal{O}}'$ measures the probability of two states exhibiting the same maximal trace.
\item For weighted systems (i.e.~${\mathsf{T}}_W \circ F$-coalgebras), the greatest fixpoint of ${\mathcal{O}}'$ measures the \emph{joint} minimal cost of two states exhibiting the same maximal trace. To see this, note that the lifting $L_W' : {\mathsf{Rel}} \to {\mathsf{Rel}}$ takes a weighted relation $R : X \times Y \to W$ to the relation $L_W'(R) : {\mathsf{T}}_W(X) \times {\mathsf{T}}_W(Y) \to W$ given by
\[L_W'(R)(f,g) = \min_{x \in \mathrm{supp}(f),y\in \mathrm{supp}(g)} (f(x) + g(y) + R(x,y))\]
\end{enumerate}
\end{example}
\section{Conclusions and Future Work}
We have provided a general and uniform account of the linear-time behaviour of a state in a coalgebra whose type incorporates some notion of branching (captured by a monad on ${\mathsf{Set}}$). Our approach is compositional, and so far applies to notions of linear behaviour specified by \emph{polynomial} endofunctors on ${\mathsf{Set}}$. The key ingredient of our approach is the notion of extension lifting, which allows the branching behaviour of a state to be abstracted away in a coinductive fashion.
Immediate future work will attempt to exploit the results of \cite{Ghani2011,Ghani2012} in order to define generalised relation liftings for \emph{arbitrary} endofunctors on ${\mathsf{Set}}$, and to extend our approach to other base categories. The work in loc.\,cit.~could also provide an alternative description for the greatest fixpoint used in Definition~\ref{linear-time-beh}.
The present work constitutes a stepping stone towards a coalgebraic approach to the formal verification of linear-time properties. This will employ linear-time coalgebraic temporal logics for the specification of system properties, and automata-based techniques for the verification of these properties, as outlined in \cite{Cirstea11} for the case of non-deterministic systems.
| {'timestamp': '2013-09-05T02:02:38', 'yymm': '1309', 'arxiv_id': '1309.0891', 'language': 'en', 'url': 'https://arxiv.org/abs/1309.0891'} |
\section{Introduction:}
We consider Einstein's equation of the general theory of relativity for a fluid with heat flow, having the following energy-momentum tensor
\be T^{\alpha\beta} = (\rho + p) v^\alpha v^\beta - pg^{\alpha\beta} + q^\alpha v^\beta + q^\beta v^\alpha,\ee
where, $p$ and $\rho$ are the isotropic pressure and matter density of the fluid respectively, $q_\alpha$ is the heat flux in the radial direction, and $v_\alpha$ is the velocity vector. In the co-moving coordinate system, $v^\alpha = \delta_0^\alpha$, $v_\alpha v^\alpha = -1$ and $q_\alpha v^\alpha = 0$, along with the generalized Robertson-Walker line element
\be ds^2 = A^2 dt^2 - B^2 (dr^2 + r^2 d\theta^2 + r^2 \sin^2\theta d\phi^2),\ee
where $A$ and $B$ are functions of $r$ and $t$. The components of Einstein's equation $R_{\alpha\beta} - {1\over 2} g_{\alpha\beta} R = 8\pi G T_{\alpha\beta}$ were reduced by Bergmann \cite{1}, employing a technique formulated by Glass \cite{2}, to the following single equation,
\be A'' + 2{F'\over F} A' - {F''\over F} A = 0. \ee
In the above, prime denotes differentiation with respect to $x = r^2$, and $F = B^{-1}$. Clearly, one physically relevant assumption is required in order to solve the above differential equation containing a pair of variables $A$ and $F = B^{-1}$. However, a physically meaningful assumption on the metric coefficients $A$ and/or $B$ is not obvious. Bergmann \cite{1} therefore obtained a simple solution under the choice $A = 1$. In this paper, we opt for more general solutions. It is important to mention that once the forms of $A$ and $B$ are known, it is quite trivial to compute the radial component of the heat flow, which is given by,
\be q = \left({4r\over G B^2}\right) \left({\dot B\over AB}\right)',\ee
where, $G$ is the Newtonian gravitational constant.
\section{Generating solutions:}
\textbf{Case:1.} ~~ $A'' = 0$.\\
\noindent
Under this choice, one obtains
\be A' = Q(t);~~~~~ \mathrm{and}~~~~~ A(x, t) = Q(t) x + P(t).\ee
Thus equation (3) reads as,
\be 2 Q F' - Q x F'' - PF'' = 0.\ee
Integrating the above equation and thereafter dividing throughout by $(Qx + P)^4$, one obtains
\be \left({F\over Q x + P}\right)' + {h(t)\over (Q x + P)^4} = 0.\ee
Further integration yields,
\be F = {h\over 3 Q} + (Q x + P)^3 L,\ee
and thus,
\be B(x, t) = F^{-1} = \left[{h\over 3 Q} + (Q x + P)^3 L\right]^{-1},\ee
where, $h,~ Q,~ P,~ L$ are all functions of time. Equations (5) and (9) may be used to find the explicit form of the radial component of the heat flow $q$, in view of the expression (4).\\
\noindent
\textbf{Case:2.}~~ $A'' \ne 0$.\\
\noindent
Under this choice, $F' \ne 0$, as may be seen from equation (3) and thus one can express $A$ as,
\be A = A(F, t);~~~A' = A_F F';~~~A'' = A_{FF} F'^2 + A_F F'',\ee
where, suffix stands for derivative. So, equation (3) in this case reduces to
\be {A_{FF} + 2{A_F\over F}\over A_F - {A\over F}} d F + {dF'\over F'} = 0.\ee
Integrating the above equation one obtains,
\be \int \left[{A_{FF} + 2{A_F\over F}\over A_F - {A\over F}}\right] d F + \ln{F'} = \ln{\alpha(t)},\ee
or,
\be \exp{\int \left[{A_{FF} + 2{A_F\over F}\over A_F - {A\over F}}~d F\right]} = \alpha(t) {dx\over dF}.\ee
Integrating yet again one obtains,
\be \int\left[\exp{\int \left({A_{FF} + 2{A_F\over F}\over A_F - {A\over F}}~d F\right)} \right] dF = \alpha(t) x + \beta(t).\ee
Therefore, if $A$ is given as a function of $F$ and $t$, then the above integral can be evaluated and hence the solutions may be obtained. Nevertheless, for a particular case, simple solutions may be obtained as follows.\\
Let us consider $F'' = m F$, where $m$ is a function of time alone. So equation (3) may be written as,
\be {U''\over U} = 2{F''\over F} = \pm k^2,~~\mathrm{i.e.}~~ U'' = \pm k^2 U,\ee
where $U = A F$, and $k$ is a function of time. Solutions of the above equation may now easily be found, as given in (16) below:
\be \begin{split} & U = C_1 e^{kx} + D_1 e^{-kx}, ~~~\mathrm{where}~m ~\mathrm{is~ positive,}~2m = k^2,\\&
U = C_1 \cos{(kx)} + D_1 \sin{(kx)}, ~~~\mathrm{where}~m ~\mathrm{is~ negative,}~ 2m = - k^2,\\&
U = f(t) x + g(t), ~~~\mathrm{where}~m=0.\end{split}\ee
\noindent
Subcase-I: $2m = k^2$:\\
\noindent
When $m > 0$, equation (15) may be solved to obtain
\be F = C_2 e^{kx\over \sqrt 2} + D_2 e^{-{kx\over \sqrt 2}}.\ee
Now, since $AF = U$ and $B = F^{-1}$,
\be A = {C_1 e^{kx} + D_1 e^{-kx}\over C_2 e^{kx\over \sqrt 2} + D_2 e^{-{kx\over \sqrt 2}}}; ~~~~B = {1\over C_2 e^{kx\over \sqrt 2} + D_2 e^{-{kx\over \sqrt 2}}},\ee
where, $C_1$, $C_2$, $D_1$, $D_2$ and $k$ are all functions of time. Solution (18) may be used to evaluate $q$ from expression (4).\\
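As a consistency check, the following SymPy sketch (our own; it assumes SymPy is available and treats the time-dependent functions $k$, $C_1$, $D_1$, $C_2$, $D_2$ as constants, i.e.\ it works at a fixed instant of time) verifies that the solution (18), with $F$ as in (17), satisfies equation (3):
\begin{verbatim}
# With U'' = k^2 U and F'' = (k^2/2) F, the function A = U/F must
# satisfy A'' + 2(F'/F)A' - (F''/F)A = 0, where ' = d/dx.
import sympy as sp

x, k, C1, D1, C2, D2 = sp.symbols('x k C_1 D_1 C_2 D_2', positive=True)

U = C1*sp.exp(k*x) + D1*sp.exp(-k*x)
F = C2*sp.exp(k*x/sp.sqrt(2)) + D2*sp.exp(-k*x/sp.sqrt(2))
A = U/F

eq3 = sp.diff(A, x, 2) + 2*(sp.diff(F, x)/F)*sp.diff(A, x) \
      - (sp.diff(F, x, 2)/F)*A
print(sp.simplify(eq3))  # expected output: 0
\end{verbatim}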
\noindent
Subcase-II: $2m = -k^2$:\\
\noindent
When $m < 0$, equation (15) may be solved to obtain
\be F = C_3 \cos{\left(kx\over \sqrt 2\right)} + D_3 \sin{\left(kx\over \sqrt 2\right)},\ee
where, $C_3$ and $D_3$ are functions of time. As before, one can find $A$ and $B$ as,
\be A = {C_1 \cos{\left(kx\right)} + D_1 \sin{\left(kx\right)}\over C_3 \cos{\left(kx\over \sqrt 2\right)} + D_3 \sin{\left(kx\over \sqrt 2\right)}};~~~~B = {1\over C_3 \cos{\left(kx\over \sqrt 2\right)} + D_3 \sin{\left(kx\over \sqrt 2\right)}},\ee
and hence $q$ may be evaluated as well, from the expression (4).\\
\noindent
Subcase-III: $m = 0$:\\
\noindent
In this case $k^2 = 0$, and so equation (15) may be solved to obtain,
\be F = k(t) x + C(t),\ee
which, when substituted into equation (3), yields
\be kx {d\over dx}\left({dA\over dx}\right) + C{d\over dx}\left({dA\over dx}\right) + 2k \left({dA\over dx}\right) = 0.\ee
Integration yields,
\be A = {f(t) x + g(t) \over k(t) x + C(t)};~~~~~ B = F^{-1} = {1\over k(t) x + C(t)}.\ee
Equation (23) may be used to find the expression for $q$ from equation (4).
\section{Conclusion:}
In summary, the present paper gives the complete set of cosmological solutions of Einstein's equation with heat flow, which was reduced by Bergmann to equation (3), either explicitly or implicitly. For $A'' = 0$, solutions have been obtained explicitly and are presented in (5) and (9). For $A'' \ne 0$, on the contrary, solutions are given implicitly by (14). However, some explicit solutions can be obtained for $F'' = +{1\over 2} k^2 F$ as presented in equation (18), $F'' = - {1\over 2} k^2 F$ as in (20), and $F'' = 0$, as revealed in equation (23). \\
It has already been stated that the solution of equation (3) gives the solution of Einstein's equation for the metric (2) and the energy-momentum tensor (1), where $B = F^{-1}$ and $q$ is the heat flow given by equation (4). Having obtained these solutions, it remains to be shown that these are physically acceptable. Certain energy conditions have to be satisfied, particularly that the energy density is positive everywhere. \\
\noindent
\textbf{Acknowledgement:} The authors would like to thank Dr. A. Banerjee for bringing to their attention the work of Bergmann.
| {'timestamp': '2019-11-19T02:08:45', 'yymm': '1911', 'arxiv_id': '1911.07058', 'language': 'en', 'url': 'https://arxiv.org/abs/1911.07058'} |
\section{Introduction}
\IEEEPARstart{H}{idden} Markov models are very popular for modeling and simulating processes when one does not observe...
\section{System Model}
Let $(S_n, X_n)$ be a two-component process, where $(S_n)$ is the unobservable component and $(X_n)$ the observable one, $n \in \{1,2,\ldots, N\}$, $N\in\set{N}$; $(S_n)$ ``controls'' the equation coefficients of $(X_n)$. Let $(S_n)$ be a stationary Markov chain with $M$ discrete states and transition matrix $\|p_{i,j}\|,\,p_{i,j}=\Pr(S_n = j \mid S_{n-1} = i)$. The process $(X_n)$ is described by an autoregressive model of order $p$:
\begin{equation}\label{observe_process}
X_n = \mu(S_n) + \sum\limits_{i=1}^p a_i(S_n)(X_{n-i} - \mu(S_n)) + b(S_n)\xi_n,
\end{equation}
where $\{\xi_n\}$ are i.i.d. random variables with the standard normal distribution, and $\mu, a_i, b \in\set{R}$ are coefficients controlled by the process $(S_n)$.
As a quality measure for our methods we use the mean risk $E(L(S_n, \hat{S}_n))$ with a simple loss function~$L$:
\begin{equation}\label{simple_loss_function}
L(S_n, \hat{S}_n) =
\begin{cases}
1, &S_n \ne \hat{S}_n,\\
0, &S_n = \hat{S}_n,
\end{cases}
\end{equation}
where $\hat{S}_n = \hat{S}_n(X_1^n)$ is an estimator of $S_n$ and $X_1^n = (X_1, X_2,\ldots,X_n)$.
As is known, for this risk with the loss function~(\ref{simple_loss_function}) the optimal estimator is
\begin{equation}\label{argmax_prob}
\hat{S}_n = \underset{m\in\{1,\ldots, M\}}{\operatorname{argmax}}\Pr(S_n = m \mid X_{1}^{n}),
\end{equation}
where $\Pr(S_n = m \mid X_{1}^{n})$ is the posterior probability with respect to the $\sigma$-algebra generated by the r.v. $X_1^n$. Its realization will be denoted by
\begin{equation}
P(S_n = m \mid X_1^n = x_1^n) = P(S_n = m \mid x_1^n),
\end{equation}
where we will write $x_1^n$ instead of $X_1^n = x_1^n$.
\subsection{Basic equations}
In this paper we consider methods of filtering and prediction in the case of unknown parameters (the transition matrix) of the process $(S_n)$ and known parameters (the equation coefficients in~\eqref{observe_process}) of the process $(X_n)$. For comparison with a standard, we also consider optimal filtering and prediction, where all parameters are known.
Filtering is the problem of estimating $S_n$ from $X_1^n$. Therefore, the basic filtering equations
\begin{multline}\label{filtering_basic_1}
P(S_n=m \mid x_1^n)\\
= \frac{f(x_n\mid S_n=m, x_1^{n-1})}{f(x_n \mid x_1^{n-1})}P(S_n=m\mid x_1^{n-1}),
\end{multline}
\begin{multline}\label{filtering_basic_2}
f(x_n \mid x_1^{n-1})\\
= \sum\limits_{m = 1}^{M} f(x_n\mid S_n=m, x_1^{n-1}) P(S_n=m\mid x_1^{n-1}),
\end{multline}
can be obtained from the total probability formula. Since the coefficients in~\eqref{observe_process} are known and $\xi_n \sim \mathcal{N}(0, 1)$, we have
\begin{multline}
f(x_n\mid S_n=m, x_1^{n-1})\\
= f(x_n\mid S_n=m, x_{n-p}^{n-1}) = f_{m}(x_n),\label{cond_dens_norm_1}
\end{multline}
where
\begin{multline}
f_{m}(x_n)\\
= \phi\Big(x_n; \mu(m) + \sum\limits_{i=1}^p a_i(m)(x_{n-i} - \mu(m)), b^2(m)\Big)\label{density_f_s_n}
\end{multline}
with normal probability density function
\begin{gather}\label{norm_pdf}
\phi(x;\mu,\sigma^2) = \frac{1}{\sqrt{2 \pi}\sigma}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right),
\end{gather}
where $x, \mu \in \set{R}$, $\sigma \in\set{R}^{+}$.
\section{Optimal Filtering}
In optimal filtering, all parameters are known. We use~\eqref{cond_dens_norm_1}, knowing the coefficients in~\eqref{observe_process}, and calculate $P(S_n=m\mid x_1^{n-1})$ in~\eqref{filtering_basic_1}, knowing the transition matrix:
\begin{gather}
P(S_n=m\mid x_1^{n-1}) = \sum\limits_{i=1}^{M}p_{i, m} P(S_{n-1}=i \mid x_1^{n-1}). \label{markov_chain_transition}
\end{gather}
Then~\eqref{filtering_basic_1} is transformed into the evaluation equation~\cite{dobrovidov_2012}
\begin{gather*}
P(S_n=m\mid x_1^n) = \frac{f_{m}(x_n)\sum\limits_{i=1}^{M}p_{i, m} P(S_{n-1}=i \mid x_1^{n-1})}{\sum\limits_{j = 1}^{M} f_j(x_n)\sum\limits_{i=1}^{M}p_{i, j} P(S_{n-1}=i \mid x_1^{n-1})},\label{opt_filtering_basic_1}
\end{gather*}
which will be considered as the optimal standard.
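For illustration, the evaluation equation admits the following direct implementation; this is a sketch of our own (the array conventions and names are hypothetical), with \texttt{post\_prev} holding the posterior $P(S_{n-1}=i \mid x_1^{n-1})$, \texttt{P} the transition matrix, and \texttt{x\_prev} the vector $(x_{n-1},\ldots,x_{n-p})$.
\begin{verbatim}
import numpy as np

def norm_pdf(x, mu, sigma2):
    # the normal density phi(x; mu, sigma^2)
    return np.exp(-(x - mu)**2/(2.0*sigma2))/np.sqrt(2.0*np.pi*sigma2)

def regime_density(x_n, x_prev, mu, a, b):
    # f_m(x_n): conditional density of X_n under regime m,
    # with a = (a_1, ..., a_p)
    mean = mu + np.dot(a, x_prev - mu)
    return norm_pdf(x_n, mean, b**2)

def optimal_filter_step(post_prev, P, x_n, x_prev, mus, As, bs):
    u = post_prev @ P   # P(S_n = m | x_1^{n-1}) via the transition matrix
    f = np.array([regime_density(x_n, x_prev, mus[m], As[m], bs[m])
                  for m in range(len(mus))])
    post = f * u        # numerator of the evaluation equation
    return post / post.sum()
\end{verbatim}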
\section{Non-parametric Filtering}
\subsection{Reducing to optimization problem}
In this section, the transition matrix $\|p_{i, j}\|$ is assumed unknown, therefore we cannot use equation~\eqref{markov_chain_transition}. To overcome this uncertainty we include formula~\eqref{cond_dens_norm_1} in equations~\eqref{filtering_basic_1},~\eqref{filtering_basic_2} and obtain
\begin{gather}
P(S_n=m\mid x_1^n) = \frac{f_m(x_n)}{f(x_n \mid x_1^{n-1})}u_n(m),\label{filtering_basic_u_1}\\
f(x_n \mid x_1^{n-1}) = \sum\limits_{m = 1}^{M} f_m(x_n)u_n(m),\label{filtering_basic_u_2}
\end{gather}
where
\begin{gather*}
u_n(m) = P(S_n=m \mid x_1^{n-1}),\quad \forall m=1, \ldots, M
\end{gather*}
are new variables, which do not depend on $x_n$ and
\begin{gather*}
\sum\limits_{m=1}^M u_n(m) = 1,\quad u_n(m) \ge 0,\quad \forall m=1, \ldots, M.
\end{gather*}
To calculate~\eqref{filtering_basic_u_1} and~\eqref{filtering_basic_u_2} it is necessary to find all $u_n(m)$. We need to make an assumption: we suppose that the process $(S_n, X_n)$ is $\alpha$-mixing; then
\begin{gather*}
f(x_n \mid x_1^{n-1}) \approx f(x_n \mid x_{n-\tau}^{n-1}),\quad \tau\in\{1,2,\ldots, n-1\},
\end{gather*}
and estimate the density $f(x_n \mid x_{n-\tau}^{n-1})$ using kernel density estimation, denoting this estimator by $\hat{f}(x_n \mid x_{n-\tau}^{n-1})$.
Let us introduce the vector $\mathbf{u}_n = (u_n(1), u_n(2), \ldots, u_n(M))$ with unknown elements $u_n(m)$, $m=1,\ldots, M$. Then for calculating $\mathbf{u}_n$ we propose the following estimator
\begin{multline}\label{estimator_u_n}
\hat{\mathbf{u}}_n\\
=\underset{\mathbf{u} \in \mathrm{\Delta_M}}{\operatorname{argmin\,}}\int\limits_{-\infty}^{+\infty}|\hat{f}(z_n \mid x_{n-\tau}^{n-1})- \sum\limits_{m=1}^M f_m(z_n) u_m|^2 dz_n,
\end{multline}
where
\begin{multline*}
\mathrm{\Delta}_M = \Big\{(t_1, t_2, \ldots, t_M) \in \set{R}^{M}\\
\mid \sum\limits_{i=1}^M t_i = 1, t_i \ge 0, \forall i \in\{1,2,\ldots, M\}\Big\}
\end{multline*}
is the probability simplex. Let us rewrite the estimator $\hat{\mathbf{u}}_n$ in more detail:
\begin{gather*}
\hat{\mathbf{u}}_n=\underset{\mathbf{u} \in \mathrm{\Delta_M}}{\operatorname{argmin\,}}I_1 - 2I_2 + I_3,
\end{gather*}
where
\begin{align*}
I_1 &= \int\limits_{-\infty}^{+\infty}\hat{f}^2(z_n \mid x_{n-\tau}^{n-1})dz_n,\\
I_2 &= \int\limits_{-\infty}^{+\infty}\sum\limits_{m=1}^M\hat{f}(z_n \mid x_{n-\tau}^{n-1})f_m(z_n) u_m dz_n,\\
I_3 &= \int\limits_{-\infty}^{+\infty}\sum\limits_{i=1}^M\sum\limits_{j=1}^M f_i(z_n) f_j(z_n) u_i u_j dz_n.
\end{align*}
Since $I_1$ does not depend on $\mathbf{u}$, we drop it; transforming $I_2$ and $I_3$, the estimator $\hat{\mathbf{u}}_n$ admits the representation
\begin{align}\label{optimization_problem_2}
\hat{\mathbf{u}}_n&=\underset{\mathbf{u} \in \mathrm{\Delta_M}}{\operatorname{argmin\,}}I_3- 2I_2\notag\\
&=\underset{\mathbf{u} \in \mathrm{\Delta_M}}{\operatorname{argmin\,}}\sum\limits_{i=1}^M\sum\limits_{j=1}^M c_{ij} u_i u_j - 2 \sum\limits_{m=1}^M c_m u_m,
\end{align}
where
\begin{align}
c_{ij} &= \int\limits_{-\infty}^{+\infty}f_i(z_n) f_j(z_n)dz_n,\label{c_ij}\\
c_{m} &= \int\limits_{-\infty}^{+\infty}\hat{f}(z_n \mid x_{n-\tau}^{n-1})f_m(z_n)dz_n. \label{c_m}
\end{align}
To solve the optimization problem~\eqref{optimization_problem_2} it is first necessary to calculate the coefficients~\eqref{c_ij} and~\eqref{c_m}, which we obtain using kernel density estimators, introduced in the following subsection.
\subsection{Kernel density estimators}
In the general case, the kernel density estimator of a density $f$ is
\begin{gather}\label{kde_estimator}
\hat{f}(\mathbf{y; H}) = \frac{1}{N}\sum\limits_{i=1}^{N}K_{\mathbf{H}}(\mathbf{y} - \mathbf{Y}_i),
\end{gather}
where $\mathbf{y}=(y_1, y_2, \ldots, y_d)^{T}$ is the argument and $\mathbf{Y}_i = (Y_{i1}, Y_{i2}, \ldots, Y_{id})^{T}$, $i=1,2, \ldots, N$, are drawn from the density $f$; $K_{\mathbf{H}}(\mathbf{y}) = |\mathbf{H}|^{-1/2}K(\mathbf{H}^{-1/2}\mathbf{y})$, where $K(\mathbf{y})$ is the multivariate kernel, which is a probability density function; $\mathbf{H}\in\mathcal{H}$ is the bandwidth matrix and $\mathcal{H}$ is the set of $d\times d$ symmetric, positive-definite matrices. We propose to use unbiased cross-validation (UCV) to find $\mathbf{H}$ (the univariate case was proposed in~\cite{rudemo_1982},~\cite{bowman_1984} and the multivariate one in~\cite{sain_1994},~\cite{duong_2005}). This popular and relevant method aims to estimate
\begin{gather*}
\mathrm{ISE} (\mathbf{H}) = \int\limits_{\set{R}^d}\left(\hat{f}(\mathbf{y; H}) - f(\mathbf{y})\right)^2d\mathbf{y}
\end{gather*}
and then minimize resulting function
\begin{multline}
\mathrm{UCV}(\mathbf{H})\\
= \frac{1}{N(N-1)}\sum\limits_{i=1}^{N}\sum\limits_{\begin{smallmatrix}j = 1,\\ j\neq i\end{smallmatrix}}^{N}(K_\mathbf{H}*K_\mathbf{H} - 2K_\mathbf{H})(\mathbf{Y}_i-\mathbf{Y}_j)\\
+ \frac{1}{N}R(K)|\mathbf{H}|^{-1/2},\label{ucv}
\end{multline}
\begin{gather*}
R(K)=\int\limits_{\set{R}^d}K(\mathbf{y})^2d\mathbf{y},
\end{gather*}
where $*$ denotes a convolution. Then the estimator of $\mathbf{H}$ is
\begin{gather}\label{ucv_rule}
\mathbf{H}_{\mathrm{UCV}} = \underset{\mathbf{H} \in \mathcal{H}}{\operatorname{argmin\,}} \mathrm{UCV(\mathbf{H})}.
\end{gather}
We propose to generate the components $Y_{ik}$ of the vector $\mathbf{Y}_i$ from the univariate sample $x_1, x_2, \ldots, x_n$ according to the rule
\begin{gather*}
Y_{ik}= x_{(i-1)l + k},\ k=1,2,\ldots, d
\end{gather*}
where $l\in\set{N}$ controls the stochastic dependence between the vectors $\mathbf{Y}_i$ (the larger $l$, the weaker the dependence). We then simplify the computation of the estimator~\eqref{kde_estimator} and of the function~\eqref{ucv}. To this end we:
\begin{itemize}
\item use the normal kernel, i.e.~we set $K$ equal to the $d$-variate normal density $\phi$ with zero mean vector and identity covariance matrix;
\item use scalar $h^2$ multiple of identity $d\times d$ matrix ($\mathbf{I}_d$) for bandwidth matrix: $$\mathbf{H} = h^2 \mathbf{I}_{d}.$$
\end{itemize}
Then the estimator~\eqref{kde_estimator} becomes
\begin{multline}\label{kde_estimator_h}
\hat{f}(\mathbf{y}; h)\\
= \frac{1}{N(2\pi)^{d/2}h^d}\sum\limits_{i=1}^{N}\exp\left(-\frac{\sum\limits_{j=1}^d (y_j - x_{(i-1)l+j}) ^ 2}{2h^2}\right),
\end{multline}
with $N = 1+ \lfloor\frac{n - d}{l}\rfloor$ and the estimator of $h$ is
\begin{gather}\label{estimator_of_h}
\hat{h}=\underset{h > 0}{\operatorname{argmin\,}}\mathrm{UCV}(h),
\end{gather}
\begin{multline*}
\mathrm{UCV}(h)\\
=\frac{1}{N(N-1)(2\pi)^{d/2}h^d}\sum\limits_{i=1}^{N}\sum\limits_{\begin{smallmatrix}j = 1,\\ j\neq i\end{smallmatrix}}^{N}\frac{1}{2^{d/2}}e^{-\frac{\Delta x_{ij}}{4h^2}}-2e^{-\frac{\Delta x_{ij}}{2h^2}}\\ + \frac{1}{N(4\pi)^{d/2}h^d},
\end{multline*}
\begin{gather*}
\Delta x_{ij} = \sum\limits_{k=1}^d \left(x_{(i-1)l+k} - x_{(j-1)l+k}\right) ^ 2.
\end{gather*}
Computing the minimum analytically is a challenge, so numerical calculation is popular. The function $\mathrm{UCV}(h)$ often has multiple local minima; therefore, the more reliable way to find $\hat{h}$ is a brute-force search, which is, however, very slow. In~\cite{hall_1991} it was shown that spurious local minima are more likely at too small values of $h$, so we propose to use golden section search between 0 and $h^{+}$, where
\begin{gather*}
h^{+} = \left(\frac{4}{N(d + 2)}\right)^{\frac{1}{d + 4}}\underset{k\in\{1,\ldots, d\}}{\max\,}\hat{\sigma}_k,
\end{gather*}
where $\hat{\sigma}_k$ is the sample standard deviation of the $k$-th elements of the $\mathbf{Y}_i$. The parameter $h^{+}$ is an oversmoothed bandwidth. If the matrix $\mathbf{H}$ were unconstrained, then
\begin{gather*}
\mathbf{H}^{+}=\left(\frac{4}{N(d + 2)}\right)^{\frac{1}{d + 4}}\mathbf{S},
\end{gather*}
where $\mathbf{S}$ is the sample covariance matrix of the $\mathbf{Y}_i$. The matrix $\mathbf{H}^{+}$ is an oversmoothed bandwidth in most cases; the latter estimator was proposed in~\cite{terrell_1990}. To calculate $\mathbf{H}_{\mathrm{UCV}}$ with unconstrained $\mathbf{H}$, one may use a quasi-Newton minimization algorithm as in~\cite{duong_2005}.
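For the scalar rule~\eqref{estimator_of_h}, a minimal sketch (our own, assuming NumPy and SciPy are available; a bounded scalar minimiser is used as a stand-in for the golden section search described above) reads:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def ucv(h, Y):
    # Y: (N, d) array of the vectors Y_i built from the sample
    N, d = Y.shape
    dx = np.sum((Y[:, None, :] - Y[None, :, :])**2, axis=-1)
    mask = ~np.eye(N, dtype=bool)
    s = (2.0**(-d/2.0)*np.exp(-dx/(4*h**2))
         - 2.0*np.exp(-dx/(2*h**2)))[mask].sum()
    return (s/(N*(N - 1)*(2*np.pi)**(d/2.0)*h**d)
            + 1.0/(N*(4*np.pi)**(d/2.0)*h**d))

def ucv_bandwidth(Y):
    N, d = Y.shape
    h_plus = (4.0/(N*(d + 2)))**(1.0/(d + 4))*Y.std(axis=0, ddof=1).max()
    res = minimize_scalar(ucv, bounds=(1e-6*h_plus, h_plus),
                          method='bounded', args=(Y,))
    return res.x
\end{verbatim}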
\subsection{Calculation of coefficients $c_{ij}$ and $c_m$}
For calculating the unknown coefficients $c_{ij}$ and $c_{m}$ in~\eqref{optimization_problem_2} we use formulas~\eqref{c_ij} and~\eqref{c_m}. Observe that for the normal probability density function~\eqref{norm_pdf} the identity
\begin{multline*}
\int\limits_{-\infty}^{+\infty} \phi(x; \mu_1, \sigma_1^2) \phi(x; \mu_2, \sigma_2^2) dx\\
= \phi(\mu_1; \mu_2, \sigma_1 ^ 2 + \sigma_2 ^ 2) = \phi(\mu_2; \mu_1, \sigma_1 ^ 2 + \sigma_2 ^ 2)
\end{multline*}
holds; therefore, using it and~\eqref{density_f_s_n}, we have
\begin{multline}\label{c_ij_finish}
c_{ij} = \int\limits_{-\infty}^{+\infty}\phi\Big(z_n; \mu(i) + \sum\limits_{k=1}^p a_k(i)(x_{n-k} - \mu(i)), b^2(i)\Big)\\
\cdot \phi\Big(z_n; \mu(j) + \sum\limits_{k=1}^p a_k(j)(x_{n-k} - \mu(j)), b^2(j)\Big)dz_n\\
=\phi\Big(\mu(i) + \sum\limits_{k=1}^p a_k(i)(x_{n-k} - \mu(i));\\
\mu(j) + \sum\limits_{k=1}^p a_k(j)(x_{n-k} - \mu(j)), b^2(i) + b^2(j)\Big),
\end{multline}
note also that $c_{ij} = c_{ji} > 0$. For calculating $c_{m}$ we estimate the conditional density $\hat{f}(z_n \mid x_{n-\tau}^{n-1})$ by applying~\eqref{kde_estimator_h}:
\begin{multline*}
\hat{f}(z_n \mid x_{n-\tau}^{n-1}) = \frac{\hat{f}(z_n, x_{n-\tau}^{n-1})}{\int\limits_{-\infty}^{+\infty} \hat{f}(z_n, x_{n-\tau}^{n-1})dz_n}\\
=\sum\limits_{i=1}^{N}\beta_{ni}(\tau)\phi(z_n; x_{(i-1)l + \tau + 1}, h^2),
\end{multline*}
\begin{gather*}
\beta_{ni}(\tau) = \frac{\exp\left(-\frac{\sum\limits_{j=-\tau}^{-1}(x_{n+j} - x_{(i-1)l+j + \tau + 1}) ^ 2}{2h^2}\right)}{\sum\limits_{k=1}^{N}\exp\left(-\frac{\sum\limits_{j=-\tau}^{-1}(x_{n+j} - x_{(k-1)l+j + \tau + 1}) ^ 2}{2h^2}\right)},
\end{gather*}
where $N = 1+ \lfloor\frac{n - 1 - d}{l}\rfloor$ and the bandwidth $h$ is estimated by~\eqref{estimator_of_h}. Note that $\beta_{ni}(\tau)$ does not depend on $z_n$. We then substitute the latter estimator into~\eqref{c_m} and obtain
\begin{multline}\label{c_m_finish}
c_{m} = \int\limits_{-\infty}^{+\infty}\hat{f}(z_n \mid x_{n-\tau}^{n-1})f_m(z_n)dz_n \\
=\int\limits_{-\infty}^{+\infty}\sum\limits_{i=1}^{N}\beta_{ni}(\tau)\phi(z_n; x_{(i-1)l + \tau + 1}, h^2)\\
\cdot\phi\Big(z_n; \mu(m) + \sum\limits_{k=1}^p a_k(m)(x_{n-k} - \mu(m)), b^2(m)\Big)dz_n\\
=\sum\limits_{i=1}^{N}\beta_{ni}(\tau)\int\limits_{-\infty}^{+\infty}\phi(z_n; x_{(i-1)l + \tau + 1}, h^2)\\
\cdot\phi\Big(z_n; \mu(m) + \sum\limits_{k=1}^p a_k(m)(x_{n-k} - \mu(m)), b^2(m)\Big)dz_n\\
=\sum\limits_{i=1}^{N}\beta_{ni}(\tau)\phi\Big(x_{(i-1)l + \tau + 1};\\
\mu(m) + \sum\limits_{k=1}^p a_k(m)(x_{n-k} - \mu(m)), h^2 + b^2(m)\Big),
\end{multline}
and we note that $c_m > 0$.
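For illustration, a sketch of the resulting computations in the AR(1) case $p = 1$ (our own code; the zero-based array convention \texttt{x[0]}${}=x_1$ and the function names are hypothetical):
\begin{verbatim}
import numpy as np

def norm_pdf(x, mu, s2):
    return np.exp(-(x - mu)**2/(2.0*s2))/np.sqrt(2.0*np.pi*s2)

def coefficients(x, n, tau, l, h, mu, a, b):
    # x: sample with x[0] = x_1; mu, a, b: length-M arrays of the
    # regime parameters; AR order p = 1 for brevity.
    M = len(mu)
    mean = mu + a*(x[n - 2] - mu)    # conditional means; x_{n-1} = x[n-2]
    b2 = b**2
    # c_ij via the Gaussian product identity
    C = norm_pdf(mean[:, None], mean[None, :], b2[:, None] + b2[None, :])
    # windows x_{(i-1)l+1}, ..., x_{(i-1)l+tau+1}, i = 1, ..., N
    N = 1 + (n - 2 - tau)//l
    W = np.stack([x[i*l : i*l + tau + 1] for i in range(N)])
    # kernel weights beta_{ni}(tau) against x_{n-tau}, ..., x_{n-1}
    d2 = np.sum((W[:, :tau] - x[n - 1 - tau : n - 1])**2, axis=1)
    beta = np.exp(-d2/(2.0*h**2))
    beta /= beta.sum()
    # c_m: beta-weighted Gaussian evaluations at the window endpoints
    c = np.array([np.sum(beta*norm_pdf(W[:, tau], mean[m], h**2 + b2[m]))
                  for m in range(M)])
    return C, c
\end{verbatim}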
\subsection{Solution of optimization problem}
In the previous subsections we reduced the main problem to the optimization problem
\begin{gather*}
\hat{\mathbf{u}}_n=\underset{\mathbf{u} \in \mathrm{\Delta_M}}{\operatorname{argmin\,}}F_n(\mathbf{u}),\\
F_n(\mathbf{u})=\sum\limits_{i=1}^M\sum\limits_{j=1}^M c_{ij} u_i u_j - 2 \sum\limits_{m=1}^M c_m u_m,
\end{gather*}
where the coefficients $c_{ij}$ and $c_m$ were calculated in~\eqref{c_ij_finish} and~\eqref{c_m_finish}. Let us determine the type of this optimization problem. The set $\mathrm{\Delta}_M$ is convex, and the Hessian matrix of the function $F_n(\mathbf{u})$ is
\begin{gather*}
\mathcal{L}''_{\mathbf{u}} = 2\cdot\begin{pmatrix}
c_{11} & c_{12} & \ldots & c_{1M}\\
c_{21} & c_{22} & \ldots & c_{2M}\\
\vdots & \vdots & \ddots & \vdots\\
c_{M1} & c_{M2} & \ldots & c_{MM}
\end{pmatrix}.
\end{gather*}
If $\mathcal{L}''_{\mathbf{u}}$ is positive definite, then $F_n(\mathbf{u})$ is convex, and thus we have a convex optimization problem. In this case we propose to use the Karush--Kuhn--Tucker (KKT) conditions~\cite{kuhn_1951},~\cite{boyd_2004}, because:
\begin{itemize}
\item our case is special, since there is an opportunity to solve the KKT conditions analytically;
\item for convex optimization the KKT conditions, which in general are only necessary, are also sufficient;
\end{itemize}
otherwise one may apply methods of quadratic programming. We also remark that $\mathcal{L}''_{\mathbf{u}}$ depends neither on the variables $u_i$ nor on the coefficients $c_m$, which means that the preceding kernel density estimators have no influence on the type of optimization.
Let us consider the KKT conditions; the Lagrangian is
\begin{gather*}
\mathcal{L} = \lambda_0 F_n(\mathbf{u}) + \sum\limits_{i=1}^{M}\lambda_{i} (-u_i) + \lambda_{M+1}\left(\sum\limits_{i=1}^M u_i - 1\right),
\end{gather*}
where $\lambda^* = (\lambda_0^*, \lambda_1^*, \ldots, \lambda_{M+1}^*)\in\set{R}^{M+2}$. We need to find $\lambda^*$ and $\mathbf{u}^*$ such that the stationarity condition
\begin{gather*}
\mathcal{L}'_{u_i} = 2\lambda_0^*\left(\sum\limits_{j=1}^M c_{ij} u_j^* - c_i\right) - \lambda_{i}^* + \lambda_{M+1}^* = 0,\\
\forall i=1, \ldots, M
\end{gather*}
primal feasibility
\begin{gather*}
-u_i^* \le 0,\quad \forall i=1, \ldots, M\,\\
\sum\limits_{i=1}^M u_i^* - 1 = 0,
\end{gather*}
dual feasibility
\begin{gather*}
\lambda_{i}^* \ge 0,\quad \forall i=1, \ldots, M\
\end{gather*}
complementary slackness
\begin{gather*}
\lambda_{i}^* u_i^* = 0,\quad \forall i=1, \ldots, M
\end{gather*}
hold. To check that the gradients of the constraints are linearly independent at $\mathbf{u}^*$, let $\lambda_0^*=0$; the KKT conditions then lead to the system
\begin{gather*}
\begin{cases}
\lambda_1^* = \lambda_{2}^*=\ldots=\lambda_{M+1}^*,\\
\lambda_{i}^* u_i^* = 0,\ \lambda_{i}^* \ge 0, &\forall i=1, \ldots, M\\
\sum\limits_{i=1}^M u_i^* = 1,\ u_i^* \ge 0, &\forall i=1, \ldots, M
\end{cases}
\end{gather*}
which can be solved only with $\lambda^* = \vec{0}$, which means that the gradients of the constraints are linearly independent for any $\mathbf{u}^*$.
The vector $\lambda^*$ is defined only up to a positive multiplier $\alpha > 0$, so we set $\lambda_0^*=1/2$; the KKT conditions then lead to the system
\begin{gather*}
\mathbf{C}\cdot\vec{\rho} = \mathbf{c},
\end{gather*}
where
\begin{gather*}
\mathbf{C} = \begin{pmatrix}
c_{11} & c_{12} & \cdots & c_{1M} & -1 & 0 &\cdots &0 & 1\\
c_{21} & c_{22} & \cdots & c_{2M} & 0 & -1 & \cdots &0 & 1\\
\vdots & \vdots & \ddots & \vdots & \vdots& \vdots & \ddots & \vdots & \vdots\\
c_{M1} & c_{M2} & \cdots & c_{MM} & 0 & 0&\cdots & -1 & 1\\
1 & 1 & \cdots & 1 & 0 & 0 & \cdots & 0 & 0
\end{pmatrix},\\
\vec{\rho} = \begin{pmatrix}
u_1^*\\
\vdots\\
u_M^*\\
\lambda_1^*\\
\vdots\\
\lambda_{M+1}^*
\end{pmatrix},\ \mathbf{c} = \begin{pmatrix}
c_1\\
c_2\\
\vdots\\
c_M\\
1
\end{pmatrix},\\
\lambda_{i}^* u_i^* = 0,\ \lambda_{i}^* \ge 0,\ u_{i}^* \ge 0, \quad \forall i=1, \ldots, M.
\end{gather*}
To solve the last system it is necessary to consider all combinations of the pairs $(u_i^*, \lambda_i^*),\ i=1, \ldots, M$, in which either $u_i^*$ or $\lambda_i^*$ equals 0 (but not both). The total number of combinations is $2^M$. If $u_i^* = 0$, then the $i$-th column of the matrix $\mathbf{C}$ and the $i$-th row of $\vec{\rho}$ are removed; otherwise $\lambda_i^*=0$ and the $(M + i)$-th column of $\mathbf{C}$ and the $(M + i)$-th row of $\vec{\rho}$ are removed. After choosing the zero element in each pair $(u_i^*, \lambda_i^*)$, the matrix $\mathbf{C}$ is reduced to an $(M + 1)\times (M + 1)$ matrix $\mathbf{C}_r$ and $\vec{\rho}$ to an $(M + 1)\times 1$ vector $\vec{\rho}_r$. Therefore, for each combination it is necessary to calculate
\begin{gather*}
\vec{\rho}_r = \mathbf{C}_r^{-1}\cdot\mathbf{c}.
\end{gather*}
If the first $M$ elements of $\vec{\rho}_r$ are non-negative, then the obtained $\mathbf{u}^*$ is a solution ($\hat{\mathbf{u}}_n$) of the optimization problem and there is no need to compute $\vec{\rho}_r$ for the next combination, because in convex optimization a local minimum is a global minimum.
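The following sketch implements this enumeration in Python (NumPy); it is a minimal illustration rather than a production routine. The matrix \texttt{C} of the coefficients $c_{ij}$ and the vector \texttt{cvec} $=(c_1,\dots,c_M,1)^T$ are assumed to be precomputed from~\eqref{c_ij_finish} and~\eqref{c_m_finish}.
\begin{verbatim}
import itertools
import numpy as np

def solve_kkt(C, cvec):
    # columns of Cfull: u_1..u_M, lambda_1..lambda_M, lambda_{M+1}
    M = C.shape[0]
    top = np.hstack([C, -np.eye(M), np.ones((M, 1))])
    bottom = np.hstack([np.ones((1, M)), np.zeros((1, M + 1))])
    Cfull = np.vstack([top, bottom])
    for active in itertools.product([0, 1], repeat=M):
        # active[i] == 1 means u_i* = 0 (drop column i),
        # otherwise lambda_i* = 0 (drop column M + i)
        drop = {i for i in range(M) if active[i]} | \
               {M + i for i in range(M) if not active[i]}
        keep = [j for j in range(2 * M + 1) if j not in drop]
        try:
            rho = np.linalg.solve(Cfull[:, keep], cvec)  # C_r^{-1} c
        except np.linalg.LinAlgError:
            continue
        kept_u = [j for j in keep if j < M]
        u = np.zeros(M)
        u[kept_u] = rho[:len(kept_u)]
        lam = rho[len(kept_u):]
        if np.all(u >= -1e-12) and np.all(lam[:-1] >= -1e-12):
            return u  # a local minimum of a convex problem is global
    return None
\end{verbatim}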
As a result, we substitute the estimator $\hat{\mathbf{u}}_n$ into~\eqref{filtering_basic_u_1} and~\eqref{filtering_basic_u_2}, and the problem of non-parametric filtering is solved.
\section{One-step Ahead Prediction}
We now consider one-step ahead prediction. As for filtering, we minimize the mean risk $E(L(S_n, \hat{S}_n))$ with the simple loss function~\eqref{simple_loss_function}. Therefore the optimal estimator of $S_n$ is
\begin{gather*}
\hat{S}_n = \underset{m\in\{1,\ldots, M\}}{\operatorname{argmax}}\Pr(S_n = m \mid X_{1}^{n-1}).
\end{gather*}
We remark that the probability $\Pr(S_n = m \mid X_{1}^{n-1})$ has already been obtained within the considered filtering approaches: for optimal prediction it is given in~\eqref{markov_chain_transition} and for non-parametric prediction in~\eqref{optimization_problem_2}. This means that we in fact first solve the problem of one-step ahead prediction and only then the filtering problem.
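For illustration, here is a minimal sketch of the predictor, assuming the filtered probabilities $\Pr(S_{n-1}=i \mid X_1^{n-1})$ from the previous step are already available in a vector \texttt{p\_prev} (states are $0$-based inside the code and shifted back to $1$-based on return):
\begin{verbatim}
import numpy as np

def predict_next(p_prev, P):
    # P is the transition matrix ||p_{i,j}||
    p_next = p_prev @ P            # Pr(S_n = m | X_1^{n-1})
    return int(np.argmax(p_next)) + 1, p_next
\end{verbatim}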
\section{Example}
Let the Markov chain $(S_n)$ have 3 states ($M=3$) and the transition matrix
\begin{gather}
\|p_{i, j}\| = \begin{pmatrix}
0.8 & 0.1 & 0.1\\
0.05 & 0.9 & 0.05\\
0.1 & 0.05 & 0.85
\end{pmatrix}.
\end{gather}
The sample size $n$ varies from 500 to 600. The observable process $(X_n)$ is simulated as an AR($2$) model with parameters $\mu\in\{0, 0.5, 1\}$, $a_1\in\{0.3, 0.2, 0.1\}$, $a_2\in\{0.2, 0.3, 0.4\}$, $b\in\{0.1, 0.2, 0.1\}$ (one value per state). Also we take $\tau=2$ and $l=1$. The results are presented in \figurename~\ref{example_figures}, and the sample mean errors over 50 repeated experiments are given in Table~\ref{example_errors}.
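A sketch of the simulation of this example is given below (Python, NumPy). The initial state distribution and the initialization $x_1=x_2=0$ are not specified in the text and are our assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.80, 0.10, 0.10],
              [0.05, 0.90, 0.05],
              [0.10, 0.05, 0.85]])
mu = [0.0, 0.5, 1.0]; a1 = [0.3, 0.2, 0.1]
a2 = [0.2, 0.3, 0.4]; b = [0.1, 0.2, 0.1]   # one value per state

n = 600
s = np.zeros(n, dtype=int); x = np.zeros(n)
s[0] = rng.integers(3)                       # assumed uniform start
for t in range(1, n):
    s[t] = rng.choice(3, p=P[s[t - 1]])
for t in range(2, n):
    m = s[t]
    x[t] = (mu[m] + a1[m] * (x[t - 1] - mu[m])
            + a2[m] * (x[t - 2] - mu[m])
            + b[m] * rng.standard_normal())
\end{verbatim}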
\begin{table}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{Sample Mean Errors}
\label{example_errors}
\centering
\begin{tabular}{ccc}
\hline
& Filtering error, \% & Prediction error, \%\\
\hline
Optimal & 16.4 & 26.6\\
Non-parametric & 22.7 & 37.6\\
\hline
\end{tabular}
\end{table}
\begin{figure}[!t]
\centering
\includegraphics[scale=1]{fig_1.eps}
\caption{From top to bottom: 1 --- unobservable $s_n$; 2 --- observable $x_n$; 3, 4 --- optimal and non-parametric filtering; 5, 6 --- optimal and non-parametric prediction.}
\label{example_figures}
\end{figure}
\section{Conclusion}
Preparing...
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
Let $\mathbb{B}_{N}$ be the unit ball in $\mathbb{C}^{N}$ and let $S_N$ denote the unit sphere. Let $H(\mathbb{B}_{N})$ be
the space of all holomorphic functions on $\mathbb{B}_{N}$. Denote by $S(\mathbb{B}_{N})$ the set of all holomorphic self-maps of $\mathbb{B}_{N}$. The Dirichlet space $\mathcal{D}(\mathbb{B}_{N})$ is defined as
$$
\mathcal{D}(\mathbb{B}_{N})=\{f\in H(\mathbb{B}_{N}):|f(0)|^2+\int_{\mathbb{B}_{N}}|\nabla f(z)|^2d\nu_N(z) < \infty \},
$$
where $d \nu_N(z)$ is the normalized volume measure on $\mathbb{B}_{N}$. The Hardy space $H^2(\mathbb{B}_{N})$ is defined as
$$
H^2(\mathbb{B}_{N})=\{f\in H(\mathbb{B}_{N}):\sup_{0 < r < 1}\int_{S_{N}}|f(r\zeta)|^2d\sigma(\zeta) < \infty \},
$$
where $d\sigma(\zeta)$ is the normalized surface measure on $S_N$.
Recall that for any analytic self-map $\varphi\in S(\mathbb{B}_{N})$ and any analytic function $\psi\in H(\mathbb{B}_{N})$, the weighted composition operator is given by
$$W_{\psi,\varphi}f=\psi\cdot f\circ\varphi.$$
If $\psi\equiv1$, we get the composition operator $C_\varphi$.
In the past five decades, the study of (weighted) composition operators has attracted the attention of many researchers. It is very interesting to explore how the function-theoretic behavior of $\varphi$ affects the properties of $C_\varphi$ on various holomorphic function spaces. For general information about composition operators, we refer the reader to the book \cite{CM1}.
An anti-linear operator $C$ on a complex Hilbert space $\mathcal{H}$ is called a conjugation if it satisfies the following conditions:
\begin{itemize}
\item[(i)] involution, i.e. $C^2=I$.
\item[(ii)] isometric, i.e. $ \langle Cx, Cy \rangle = \langle y, x \rangle$ for all $x,y\in\mathcal{H}$.
\end{itemize}
A bounded linear operator $T$ on $\mathcal{H}$ is called complex symmetric if there exists a conjugation $C$ such that $TC=CT^*$ $(CTC=T^*)$. We also say that $T$ is a $C$-symmetric operator.
The general study of complex symmetric operators on $\mathcal{H}$ derives from the work of Garcia and Putinar \cite{GP}, where the authors showed that every normal operator is complex symmetric. Since then, many significant results about complex symmetric (weighted) composition operators have been obtained. In \cite{GW}, Garcia and Wogen proved that if an operator $T$ is algebraic of order $2$, then $T$ is complex symmetric; hence $C_{\varphi_a}$ is complex symmetric when $\varphi_a$ is an
involutive automorphism. Furthermore, an explicit conjugation operator $J_a=JW_a$ on $H^2(\mathbb{B}_{N})$ such that $C_{\varphi_a}=J_aC_{\varphi_a}^*J_a$ was given in \cite{N2}. Very recently, in \cite{JK} Jung et al. studied which combinations of weights $\psi$ and maps of the disk $\varphi$ give rise to complex symmetric weighted composition operators with respect to classical conjugation $Jf(z)=\overline{f(\overline{z})}$. In \cite{GZ1}, Gao and Zhou gave a complete description of complex symmetric composition operators on $H^2(\mathbb{D})$ whose symbols are linear fractional.
Since every conjugation can be written as the product of a $J$-symmetric unitary operator $U$ and the conjugation $J$, Fatehi \cite{F} found all unitary weighted composition operators which are $J$-symmetric and considered complex symmetric weighted composition operators with the special conjugation $W_{k_a,\varphi_a}J$ on $H^2(\mathbb{D})$. Moreover, a criterion for the complex symmetric structure of $W_{\psi,\varphi}$ on $H_\gamma(\mathbb{D})$ (with reproducing kernels $K_w^{\gamma}=(1-\overline{w}z)^{-\gamma}$, where $\gamma\in \mathbb{N}$) was discovered in \cite{LK}. In \cite{ZY}, Yuan and Zhou characterized the adjoints of linear fractional composition operators $C_\varphi$ acting on $\mathcal{D}(\mathbb{B}_{N})$. In \cite{LNN}, the authors showed that no nontrivial normal weighted composition operator exists on the Dirichlet space of the unit disk when $\varphi$ is linear fractional with a fixed point $p\in\mathbb{D}$.
Motivated by this research, we attempt to generalize part of the discussion to other spaces, such as $\mathcal{D}(\mathbb{B}_{N})$ and
$H^2(\mathbb{B}_{N})$. The rest of the paper is organized as follows: First, we recall some fundamental definitions and theorems related to our results. Then we examine the question ``Which weighted composition operators on $\mathcal{D}(\mathbb{B}_{N})$ are complex symmetric?". We prove that $W_{\psi,\varphi}$ is $J$-symmetric ($JC_{Uz}$-symmetric) if and only if $W_{\psi,\varphi}$ is a multiple of the corresponding complex symmetric composition operator $C_\varphi$. In addition, we also show that $C_\varphi$ is $J$-symmetric if and only if $\varphi(z) = \varphi'(0)z$ for $z\in\mathbb{B}_{N}$ and $\varphi'(0)$ is a symmetric matrix with $||\varphi'(0)||\leq1$. We then provide characterizations of Hermitian weighted composition operators on $\mathcal{D}(\mathbb{B}_{N})$. Moreover, we study when a $J$-symmetric weighted composition operator is unitary or Hermitian. By providing some sufficient conditions for weighted composition operators to be both unitary and $J$-symmetric, we obtain some new examples of complex symmetric weighted composition operators on $H^2(\mathbb{B}_{N})$. Finally, we discuss the normality of complex symmetric weighted composition operators on $H^2(\mathbb{B}_{N})$.
\section{Preliminaries}
\subsection{Linear fractional map}
\begin{defn} A linear fractional map $\varphi$ of $\mathbb{C}^N$ is a map of the form
$$\varphi(z)=\frac{Az+B}{\langle z,C\rangle+D},$$
where $A = (a_{j,k})$ is an $N \times N$ matrix, $B=(b_j)$ and $C =(c_j)$ are $N$-column vectors, $D$ is a complex number, and $\langle \cdot , \cdot\rangle$ indicates the usual Euclidean inner product in $\mathbb{C}^N$. If $\varphi(\mathbb{B}_{N}) \subseteq \mathbb{B}_{N}$, then $\varphi$ is said to be a linear fractional self-map of $\mathbb{B}_{N}$, denoted by $\varphi\in \textrm{LFT} (\mathbb{B}_{N})$.
In this paper, we identify $N \times N$ matrices with linear transformations of $\mathbb{C}^{N}$ via the standard basis of $\mathbb{C}^{N}$.
\end{defn}
\begin{defn}
If $\varphi(z)=\frac{Az+B}{\langle z, C \rangle + D}$ is a linear fractional map, the matrix
$$
m_{\varphi}=\left(
\begin{array}{ccc}
A & B\\
C^*& D\\
\end{array}
\right)
$$
will be called a matrix associated with $\varphi$. If $\varphi(z)\in$ LFT$(\mathbb{B}_{N})$, the adjoint map $\sigma=\sigma_\varphi$ is defined by
$$\sigma(z)=\frac{A^*z-C}{\langle z, -B \rangle + D^*}$$
and the associated matrix of $\sigma$ is
$$
m_{\sigma}=\left(
\begin{array}{ccc}
A^* & -C\\
-B^*& D^*\\
\end{array}
\right),
$$
where $A^* = (\overline{a_{ji}})$ denotes the conjugate transpose matrix of $A$.
\end{defn}
\begin{thm}{\rm ([\citealp{CM2}, Theorem 4])}\label{Th2.7}
If the matrix
$$
m_{\varphi}=\left(
\begin{array}{ccc}
A & B\\
C^*& D\\
\end{array}
\right)
$$
is a multiple of an isometry on the Kre$\breve{{\i}}$n space with
$$
J=\left(
\begin{array}{ccc}
I & 0\\
0& -1\\
\end{array}
\right)
,$$
then $\varphi(z)=\frac{Az + B}{\langle z, C \rangle + D}$ maps the unit ball $\mathbb{B}_N$ onto itself. Conversely, if $\varphi(z)=\frac{Az+B}{\langle z, C \rangle + D}$ is a linear fractional map of the unit ball onto itself, then $m_\varphi$ is a multiple of an isometry.
\end{thm}
Fix a vector $a\in\mathbb{B}_N$, we denote by $\varphi_a: \mathbb{B}_N\rightarrow\mathbb{B}_N$ the linear fractional map
$$\varphi_a(z)=\frac{a-P_az-s_aQ_az}{1-\langle z, a \rangle},$$
where $s_a=\sqrt{1-|a|^2}$, $P_a$ be the orthogonal projection of $\mathbb{C}^n$ onto the complex line generated by $a$ and $Q_a=I-P_a$, or equivalently,
$$\varphi_a(z)=\frac{a-Tz}{1-\langle z, a \rangle},$$
where $T$ is a self-adjoint map depending on $a$. We use Aut$(\mathbb{B}_N)$ to denote the set of all automorphisms of $\mathbb{B}_N$.
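As a numerical sanity check, which is not needed in the sequel, the following Python sketch verifies for hypothetical points $a, z\in\mathbb{B}_2$ that $\varphi_a$ is an involution with $\varphi_a(0)=a$ and $\varphi_a(a)=0$ (note that \texttt{np.vdot(a, z)} computes $\langle z, a\rangle$):
\begin{verbatim}
import numpy as np

def phi_a(a, z):
    # phi_a(z) = (a - P_a z - s_a Q_a z) / (1 - <z, a>)
    s = np.sqrt(1 - np.vdot(a, a).real)
    Pa = np.outer(a, a.conj()) / np.vdot(a, a).real
    Qa = np.eye(len(a)) - Pa
    return (a - Pa @ z - s * (Qa @ z)) / (1 - np.vdot(a, z))

a = np.array([0.3 + 0.1j, -0.2j])              # hypothetical a in B_2
z = np.array([0.1, 0.25 - 0.1j])
print(np.allclose(phi_a(a, np.zeros(2)), a))   # phi_a(0) = a
print(np.allclose(phi_a(a, a), 0))             # phi_a(a) = 0
print(np.allclose(phi_a(a, phi_a(a, z)), z))   # involution
\end{verbatim}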
\subsection{Spaces and weighted composition operator}
In the Dirichlet space $\mathcal{D}(\mathbb{B}_N)$, evaluation at $w$ in the unit ball is given by $f(w)=\langle f, K_w \rangle$ where
$$K_w(z)=1+\ln\frac{1}{1 - \langle z, w \rangle} \ \ \ \textrm{and} \ \ \ ||K_w||^2= 1 + \ln \frac{1}{{1-|w|^2}}.$$
Let $k_w$ be the normalization of $K_w$; then
$k_w(z)=(1-\ln(1-|w|^2))^{-\frac{1}{2}}\left(1-\ln(1-\langle z, w \rangle)\right)$.
And in $H^2(\mathbb{B}_{N})$ the kernel for evaluation at $w$ is given by
$$K_w(z)=\frac{1}{(1-\langle z, w \rangle)^N} \ \ \ \textrm{and} \ \ \ ||K_w||^2=(1-|w|^2)^{-N},$$
then $k_w=\frac{(1-|w|^2)^{\frac{N}{2}}}{(1-\langle z, w \rangle)^N}$.
Next we list some fundamental properties of bounded weighted
composition operators.
\begin{Prop}{\rm ([\citealp{GZ2}, Remark 2.4])}\label{prop2.4}
If $\varphi$ is an automorphism of $\mathbb{B}_N$ and $\psi\in A(\mathbb{B}_N)$, where $A(\mathbb{B}_N)$ denotes the set of functions that are holomorphic on $\mathbb{B}_N$ and continuous up to the boundary $S_N$, then
$$W_{\psi,\varphi}=M_\psi C_\varphi.$$
\end{Prop}
\begin{Prop}{\rm ([\citealp{GZ2}, Proposition 2.5])}\label{prop2.5}
Suppose that $W_{\psi,\varphi}$ is bounded on $\mathcal{D}(\mathbb{B}_N)$ (resp.\ $H^2(\mathbb{B}_N)$); then we have
$$
W_{\psi,\varphi}^*K_w(z)=\overline{\psi(w)}K_{\varphi(w)}(z)
$$
for all $z,w\in \mathbb{B}_N$. In particular, since $C_\varphi=W_{1,\varphi}$, we get $C_\varphi^*K_w(z)=K_{\varphi(w)}(z)$.
\end{Prop}
\begin{thm}{\rm ([\citealp{CM2}, Theorem 16])}\label{Th2.6}
Suppose $\varphi(z)=\frac{Az+B}{\langle z, C \rangle + D}$ is a linear fractional map of $\mathbb{B}_N$ into itself for which $C_\varphi$ is a bounded operator on $\mathcal{H}$. Let $\sigma(z)=\frac{A^*z-C}{\langle z,-B \rangle + D^*}$ be the adjoint mapping.
Then $C_\sigma $ is a bounded operator on $\mathcal{H}$, $g(z)=(\langle z, -B\rangle + D^*)^{-r}$ and
$h(z)= (\langle z, C \rangle + D)^r$ are in $H^\infty(\mathbb{B}_N)$, and
$$
C_\varphi^*=T_gC_\sigma T_h^*.
$$
\end{thm}
\subsection{Some others notations}
In this section, we first recall the first partial derivative reproducing kernel on $\mathcal{D}(\mathbb{B}_N)$.
Let $K_a^{D_1}, K_a^{D_2},\ldots, K_a^{D_n}$ denote the kernels for the first partial derivatives at $a$, that is
$$
\langle f,K_a^{D_j}\rangle=\frac{\partial f}{\partial z_j}(a), \ \ \ \ \ \ \ \ j=1,2,\ldots,n.
$$
It can be shown that
\begin{align}\label{Eq2.1}
K_a^{D_j } = \frac{z_j} {1-\langle z,a \rangle}, \ \ \ \ \ \ \ \ \ j=1,2,\ldots,n.
\end{align}
Continuing with the kernels for the first partial derivatives at $a$, we have
\begin{align*}
\langle f , W_{\psi,\varphi}^*K_a^{D_k} \rangle &=\langle W_{\psi,\varphi}f,K_a^{D_k}\rangle\\
&=\frac{\partial\psi(a)}{\partial z_k}f(\varphi(a))+\psi(a)\sum_{j=1}^{n}\frac{\partial f(\varphi(a))}{\partial \varphi_j(z)}\frac{\partial\varphi_j}{\partial z_k}(a)\\
&=\langle f,\overline{\frac{\partial\psi(a)}{\partial z_k}}K_{\varphi(a)}+\overline{\psi(a)}\sum_{j=1}^{n} \overline{\frac{\partial\varphi_j(a)}{\partial z_k}}K_{\varphi(a)}^{D_j} \rangle
\end{align*}
for any $k=1,2,\ldots,n$ and $f\in \mathcal{D}(\mathbb{B}_{N}).$
Thus, we have
\begin{align}\label{Eq2.2}
W_{\psi,\varphi}^*K_a^{D_k} = \overline{\frac{\partial\psi(a)}{\partial z_k}}K_{\varphi(a)}+\overline{\psi(a)}\sum_{j=1}^{n} \overline{\frac{\partial\varphi_j(a)}{\partial z_k}}K_{\varphi(a)}^{D_j}
\end{align}
for any $k=1,2,\ldots,n$ and $f\in \mathcal{D}(\mathbb{B}_{N}).$
In the same way, we will write
$\langle f, K_a^{D_{i,j}} \rangle = \frac{\partial^2 f}{\partial z_i \partial z_j}(a)$ and
\begin{align}\label{Eq2.3}
K_a^{D_{i,j}}= \frac{z_i z_j} {(1-\langle z,a \rangle)^2}
\end{align}
for $i,j=1,2,\ldots,n.$
Also, we find that
\begin{align}\nonumber
\langle f , W_{\psi,\varphi}^* K_a^{D_{11}} \rangle \nonumber
&= \langle W_{\psi,\varphi}f , K_a^{D_{11}} \rangle\\ \nonumber
&=\frac{\partial(\frac{\partial \psi(z)} {\partial z_1} f(\varphi(z)) + \psi(z)\sum\limits_{j=1}^{n}\frac{\partial f(\varphi(z))}{\partial z_j}\frac{\partial \varphi_j}{\partial z_1})} {\partial z_1}\Big{|}_{z=a}\\ \nonumber
&=\{\frac{\partial^2 \psi(z)}{\partial z_1 ^2} f(\varphi(z)) + 2 \frac{\partial \psi(z)}{\partial z_1}\sum\limits_{j=1}^{n}\frac{\partial f(\varphi(z))} {\partial z_j} \frac{\partial \varphi_j}{\partial z_1} \\ \nonumber
&+ \psi(z) \frac{\partial (\sum\limits_{j=1}^{n}\frac{\partial f(\varphi(z))} {\partial z_j} \frac{\partial \varphi_j}{\partial z_1}) }{\partial z_1}\}\Big{|}_{z=a}\\ \nonumber
&=\frac{\partial^2 \psi(a)}{\partial z_1 ^2} f(\varphi(a)) + 2 \frac{\partial \psi(a)}{\partial z_1}\sum\limits_{j=1}^{n}\frac{\partial f(\varphi(a))} {\partial z_j} \frac{\partial \varphi_j(a)}{\partial z_1}\\ \nonumber
&+ \psi(a) \sum\limits_{j=1}^{n} \frac{\partial^2 \varphi_j (a)}{ \partial z_1^2} \frac{ \partial f(\varphi(a))}{ \partial z_j}
+ \psi(a)\frac{ \partial \varphi_1(a)}{ \partial z_1} (\sum\limits_{j=1}^{n} \frac{\partial^2 f(\varphi(a))}{ \partial z_1 \partial z_j}\frac{\partial \varphi_j(a)}{\partial z_1}) \\ \nonumber
& +\ldots + \psi(a) \frac { \partial \varphi_n(a)} { \partial z_1} (\sum\limits_{j=1}^{n} \frac{\partial^2 f(\varphi(a))}{ \partial z_n \partial z_j} \frac{\partial \varphi_j(a)}{\partial z_1})\\ \nonumber
&=\langle f, \overline{\frac{\partial^2 \psi(a)}{\partial z_1 ^2}} K_{\varphi(a)} + 2\overline{\frac{\partial \psi(a)}{\partial z_1}}\sum\limits_{j=1}^{n}\overline{ \frac{\partial \varphi_j(a)}{\partial z_1}} K_{\varphi(a)}^{D_j}\\ \nonumber
&+ \overline{\psi(a)} \sum\limits_{j=1}^{n} \overline{\frac{\partial^2 \varphi_j (a)}{ \partial z_1^2}} K_{\varphi(a)}^{D_j}
+ \overline{\psi(a)} \overline{ \frac{ \partial \varphi_1(a)} { \partial z_1} }(\sum\limits_{j=1}^{n} \overline{\frac{\partial \varphi_j(a)}{\partial z_1}} K_{\varphi(a)}^{D_{1j}})\\ \nonumber
& + \ldots + \overline{\psi(a)} \overline{ \frac{ \partial \varphi_n(a)} { \partial z_1} }(\sum\limits_{j=1}^{n} \overline{\frac{\partial \varphi_j(a)}{\partial z_1}} K_{\varphi(a)}^{D_{nj}}) \rangle.
\end{align}
Therefore,
\begin{align}
W_{\psi,\varphi}^* K_a^{D_{11}} \nonumber
&=\overline{\frac{\partial^2 \psi(a)}{\partial z_1 ^2}} K_{\varphi(a)} + 2\overline{\frac{\partial \psi(a)}{\partial z_1}}\sum\limits_{j=1}^{n}\overline{ \frac{\partial \varphi_j(a)}{\partial z_1}} K_{\varphi(a)}^{D_j}\\ \nonumber
& + \overline{\psi(a)} \sum\limits_{j=1}^{n} \overline{\frac{\partial^2 \varphi_j (a)}{ \partial z_1^2}} K_{\varphi(a)}^{D_j}
+ \overline{\psi(a)} \overline{ \frac{ \partial \varphi_1(a)} { \partial z_1} }(\sum\limits_{j=1}^{n} \overline{\frac{\partial \varphi_j(a)}{\partial z_1}} K_{\varphi(a)}^{D_{1j}}) \\\label{Eq2.4}
& + \ldots + \overline{\psi(a)} \overline{ \frac{ \partial \varphi_n(a)} { \partial z_1} }(\sum\limits_{j=1}^{n} \overline{\frac{\partial \varphi_j(a)}{\partial z_1}} K_{\varphi(a)}^{D_{nj}}).
\end{align}
\section{Complex symmetric composition operators on $\mathcal{D}(\mathbb{B}_N)$}
\subsection{Complex symmetric composition operators on $\mathcal{D}$}
Let us start by characterizing composition operators on the Dirichlet space of the unit disk which are complex symmetric with respect to the conjugation $Jf(z)=\overline{f(\bar{z})}$. First we give the following theorem, which limits the kinds
of maps that can induce complex symmetric composition operators on $\mathcal{D}(\mathbb{D})$.
\begin{thm}
Let $\varphi$ be an analytic self-map of the unit disk. Suppose that $C_\varphi$ is a complex symmetric operator on $\mathcal{D}(\mathbb{D})$, then $\varphi$ has a fixed point in the unit disk.
\end{thm}
\textit{Proof.}
Since $C_\varphi$ is complex symmetric with non-empty point spectrum, [\citealp{N1}, Proposition 3.1] shows that $C_\varphi$ is not hypercyclic. By [\citealp{YR}, Theorem 1.1], we obtain that $\varphi$ has a fixed point in the unit disk.
\qed
\begin{thm}
Let $\varphi$ be an analytic self-map of the unit disk. Then $C_\varphi$ is $J$-symmetric on $\mathcal{D}$ if and only if $C_\varphi$ is normal.
\end{thm}
{\textit{Proof.}}
Since $C_\varphi$ is $J$-symmetric, it follows from [\citealp{GH}, Proposition 2.4] that $\varphi(z)=az$ for some $|a|\leqslant1$, and so $C_\varphi$ is normal.
For the converse, suppose $C_\varphi$ is normal on $\mathcal{D}$, by [\citealp{CM1}, Theorem 8.2], we conclude that $\varphi(z)=az$ with $|a|\leqslant1$. An easy calculation gives $C_\varphi JK_w(z)=JC_\varphi^*K_w(z)$, for all $z,w \in \mathbb{D}$ and hence $C_\varphi$ is $J$-symmetric.
\qed
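The ``easy calculation'' can also be spot-checked numerically. In the sketch below, the symbol value $a$ and the test points $w$, $z$ are arbitrary hypothetical numbers in the disk, and $Jf(z)=\overline{f(\overline{z})}$:
\begin{verbatim}
import numpy as np

K = lambda w, z: 1 - np.log(1 - z * np.conj(w))   # Dirichlet kernel K_w(z)
J = lambda f: (lambda z: np.conj(f(np.conj(z))))  # Jf(z) = conj(f(conj(z)))

a, w, z = 0.4 + 0.3j, 0.2 - 0.5j, -0.1 + 0.6j
lhs = J(lambda u: K(w, u))(a * z)     # (C_phi J K_w)(z) with phi(z) = a z
rhs = J(lambda u: K(a * w, u))(z)     # (J C_phi^* K_w)(z) = (J K_{aw})(z)
print(np.isclose(lhs, rhs))           # True
\end{verbatim}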
\begin{rem}
Indeed, every normal operator is complex symmetric, so it is natural to ask: ``Is there any complex symmetric but non-normal composition operator $C_{\varphi}$ whose symbol is neither a constant nor an involution?"
\end{rem}
\subsection{Complex symmetric weighted composition operators on $\mathcal{D}(\mathbb{B}_{N})$}
Following this idea, one is interested in determining whether $J$-symmetry of a composition operator is equivalent to its normality in dimension greater than 1. Now, we begin with the theorem that gives a necessary and sufficient condition for the weighted composition operator $W_{\psi,\varphi}$ to be $J$-symmetric.
\begin{thm}\label{Th3.4}
Let $\varphi$ be an analytic self-map of $\mathbb{B}_N$ and $\psi$ be an analytic function on $\mathbb{B}_{N}$ for which $W_{\psi,\varphi}$ is bounded on $\mathcal{D}(\mathbb{B}_{N})$. Then $W_{\psi,\varphi}$ is complex symmetric with conjugation $J$ if and only if $\psi(z)=c$ and $\varphi(z)=\varphi'(0)z,$ where $c$ is a constant and $\varphi'(0)$ is a symmetric matrix with $||\varphi'(0)||\leq1$.
\end{thm}
\textit{Proof.}
If $W_{\psi,\varphi}$ is complex symmetric with conjugation $J$, then we have
$$W_{\psi,\varphi}JK_w(z)=JW_{\psi,\varphi}^*K_w(z)$$
for all $z,w \in \mathbb{B}_N$, which implies that
\begin{equation} \label{Eq3.001}
\psi(z)(1+\ln\frac{1}{1-\langle \varphi(z),\overline{w}\rangle})=\psi(w)(1+\ln\frac{1}{1-\langle z, \overline{\varphi(w)} \rangle}).
\end{equation}
Putting $w=0$ in Equation (\ref{Eq3.001}), then we have
\begin{equation} \label{Eq3.002}
\psi(z)=\psi(0)+\psi(0)\ln\frac{1}{1-\langle z, \overline{\varphi(0)}\rangle}.
\end{equation}
Substituting the formula for $\psi(z)$ into Equation (\ref{Eq3.001}), we obtain
\begin{align}\nonumber
&(1-\ln(1-\langle z, \overline{\varphi(0)}\rangle))(1-\ln(1-\langle \varphi(z), \overline{w}\rangle))\\ \label{Eq3.003}
&=(1-\ln(1-\langle w, \overline{\varphi(0)}\rangle))(1-\ln(1-\langle z, \overline{\varphi(w)}\rangle)).
\end{align}
Taking the partial derivative with respect to $w_1$ on both sides of Equation (\ref{Eq3.003}), we get
\begin{align} \nonumber
&(1-\ln(1-\langle z, \overline{\varphi(0)}\rangle))\frac{\varphi_1(z)}{1-\langle\varphi(z),\overline{w}\rangle}\\ \nonumber
&=(1-\ln(1-\langle w, \overline{\varphi(0)}\rangle))\frac{\frac{\partial\varphi_1(w)}{\partial w_1}z_1+\ldots+\frac{\partial\varphi_n(w)}{\partial w_1}z_n}{1-\langle z, \overline{\varphi(w)}\rangle}\\ \nonumber
&+(1-\ln(1-\langle z, \overline{\varphi(w)}\rangle))\frac{\varphi_1(0)}{1-\langle w, \overline{\varphi(0)}\rangle}.
\end{align}
Setting $w=0$ in the above equation, we have
\begin{align} \nonumber
\varphi_1(z)=\varphi_1(0) + \frac{\frac{\partial\varphi_1(0)}{\partial w_1}z_1+\ldots+\frac{\partial\varphi_n(0)}{\partial w_1}z_n}{(1-\langle z, \overline{\varphi(0)}\rangle)(1-\ln(1-\langle z, \overline{\varphi(0)}\rangle))}.
\end{align}
Similarly, we get
$$\varphi_k(z)=\varphi_k(0)+\frac{\frac{\partial\varphi_1(0)}{\partial w_k}z_1+\ldots+\frac{\partial\varphi_n(0)}{\partial w_k}z_n}{(1-\langle z, \overline{\varphi(0)}\rangle)(1-\ln(1-\langle z, \overline{\varphi(0)}\rangle))}$$
for $k=1,2,\ldots,n$. So we have
\begin{align}\label{Eq3.004}
\varphi(z)=\varphi(0)+\frac{\varphi'(0)^Tz}{(1-\langle z, \overline{\varphi(0)}\rangle)(1-\ln(1-\langle z, \overline{\varphi(0)}\rangle))},
\end{align}
here $\varphi'(0)^T$ denote the transpose matrix of $\varphi'(0)$.
Next, we claim that $\varphi(0)=0$ when $W_{\psi,\varphi}$ is complex symmetric with conjugation $J$. Note that
\begin{equation} \label{Eq3.005}
W_{\psi,\varphi}J \left(
\begin{array}{ccc}
K_a^{D_{11}}\\
\vdots\\
K_a^{D_{nn}}
\end{array} \right) = JW_{\psi,\varphi}^*\left(
\begin{array}{ccc}
K_a^{D_{11}}\\
\vdots\\
K_a^{D_{nn}}
\end{array}\right).
\end{equation}
Putting $a=0$ in Equation (\ref{Eq3.005}), and by Equation (\ref{Eq2.3}), we have
\begin{equation} \label{Eq3.006}
W_{\psi,\varphi} \left(
\begin{array}{ccc}
z_1^2\\
\vdots\\
z_n^2
\end{array}\right)=JW_{\psi,\varphi}^*\left(
\begin{array}{ccc}
K_0^{D_{11}}\\
\vdots\\
K_0^{D_{nn}}
\end{array}\right).
\end{equation}
It follows from Equation (\ref{Eq3.006}) that
\begin{equation} \label{Eq3.007}
\psi(z) \varphi_1^2(z) = J W_{\psi,\varphi}^* K_0^{D_{11}}(z).
\end{equation}
Since we have obtained precise formulas for $\psi(z)$ and $\varphi(z)$ when $W_{\psi,\varphi}$ is complex symmetric with conjugation $J$, by Equations (\ref{Eq3.002}) and (\ref{Eq3.004}) we get
\begin{align} \nonumber
&\psi(z) \varphi_1^2(z)\\ \nonumber
&=\psi(0)\varphi_1^2(0) ( 1 + \ln \frac {1} {1-\langle z, \overline{\varphi(0)} \rangle} ) + 2 \psi(0) \varphi_1(0) \sum\limits_{j=1}^{n} \frac{ \partial \varphi_j(0)} {\partial z_1} \frac{z_j} {1-\langle z, \overline{\varphi(0)} \rangle }+\\ \nonumber
& \psi(0) \frac{ \partial \varphi_1(0)} {\partial z_1} \sum\limits_{j=1}^{n} \frac{ \partial \varphi_j(0)} {\partial z_1} \frac{z_1 z_j} {(1-\langle z, \overline{\varphi(0)} \rangle )^2 (1- \ln (1-\langle z, \overline{\varphi(0)} \rangle)) } + \ldots +\\ \label{Eq3.008}
&\psi(0) \frac{ \partial \varphi_n(0)} {\partial z_1} \sum\limits_{j=1}^{n} \frac{ \partial \varphi_j(0)} {\partial z_1} \frac{z_n z_j} {( 1 - \langle z, \overline{\varphi(0)} \rangle )^2 (1- \ln (1-\langle z, \overline{\varphi(0)} \rangle)) }.
\end{align}
On the other hand, by Equation (\ref{Eq2.4}), we have
\begin{align} \nonumber
&JW_{\psi,\varphi}^* K_0^{D_{11} }(z)\\ \nonumber
&=\frac{\partial^2 \psi(0)}{\partial z_1 ^2} K_{\overline{\varphi(0)}}(z) + 2\frac{\partial \psi(0)}{\partial z_1}\sum\limits_{j=1}^{n} \frac{\partial \varphi_j(0)}{\partial z_1} K_{\overline{\varphi(0)}}^{D_j}(z) + \psi(0) \sum\limits_{j=1}^{n} \frac{\partial^2 \varphi_j (0)}{ \partial z_1^2} K_{\overline{\varphi(0)}}^{D_j} (z) + \\ \label{Eq3.009}
& \psi(0) \frac{ \partial \varphi_1(0)} { \partial z_1} (\sum\limits_{j=1}^{n} \frac{\partial \varphi_j(0)}{\partial z_1} K_{\overline{\varphi(0)}}^{D_{1j}}(z) ) +
\ldots + \psi(0) \frac{ \partial \varphi_n(0)} { \partial z_1} (\sum\limits_{j=1}^{n} \frac{\partial \varphi_j(0)}{\partial z_1} K_{\overline{\varphi(0)}}^{D_{nj}}(z) ).
\end{align}
By Equation (\ref{Eq3.002}), we have
\begin{align}\nonumber
\frac{\partial \psi(z)}{\partial z_1} = \psi(0) \frac{\varphi_1(0)}{1-\langle z, \overline{\varphi(0)} \rangle}
\end{align}
and
\begin{align}\nonumber
\frac{\partial^2 \psi(z)}{\partial z_1^2} = \psi(0) \frac{\varphi_1^2(0)}{(1-\langle z, \overline{\varphi(0)} \rangle)^2}.
\end{align}
Thus,
\begin{align} \label{Eq3.0010}
\frac{\partial \psi(0)}{\partial z_1} = \psi(0)\varphi_1(0)
\end{align}
and
\begin{align} \label{Eq3.0011}
\frac{\partial^2 \psi(0)}{\partial z_1^2} = \psi(0)\varphi_1(0)^2.
\end{align}
Then by Equation (\ref{Eq3.004}), a calculation gives
\begin{align}\nonumber
&\frac {\partial^2 \varphi_1(z)} {\partial z_1^2} \\ \nonumber
&= \frac{-(\frac{\partial\varphi_1(0)}{\partial z_1}z_1+\ldots+\frac{\partial\varphi_n(0)}{\partial z_1}z_n ) \frac{\partial^2 F(z) }{ \partial z_1^2}F(z)^2} { {F(z)}^4}-\\ \nonumber
&\frac{\{\frac{\partial\varphi_1(0)}{\partial z_1} F(z)
-{(\frac{\partial\varphi_1(0)}{\partial z_1}z_1 + \ldots + \frac{\partial\varphi_n(0)}{\partial z_1}z_n) \frac{ \partial F(z)}{ \partial z_1}\}2F(z)\frac{\partial F(z)}{ \partial z_1}}} { {F(z)}^4 },
\end{align}
where $F(z) = (1-\langle z, \overline{\varphi(0)}\rangle)(1-\ln(1-\langle z, \overline{\varphi(0)}\rangle))$. Since $\frac{\partial F(0)}{ \partial z_1}=0$, we have
\begin{align}\label{Eq3.0012}
\frac{\partial^2 \varphi_1(0)}{\partial z_1^2} = 0.
\end{align}
Similarly, we have
\begin{align}\label{Eq3.0013}
\frac{\partial^2 \varphi_j(0)}{\partial z_1^2} = 0
\end{align}
for $j=1,2,\ldots, n.$
Putting all our information together and returning to the Equation (\ref{Eq3.009}), we get
\begin{align} \nonumber
&JW_{\psi,\varphi}^* K_0^{D_{11}}(z)\\ \nonumber
&=\psi(0)\varphi_1(0)^2 (1 + \ln \frac {1} {1-\langle z, \overline{\varphi(0)} \rangle}) + 2 \psi(0) \varphi_1(0) \sum\limits_{j=1}^{n} \frac{ \partial \varphi_j(0)} {\partial z_1} \frac{z_j} {1-\langle z, \overline{\varphi(0)} \rangle }\\ \nonumber
& + \psi(0) \frac{ \partial \varphi_1(0)} {\partial z_1} \sum\limits_{j=1}^{n} \frac{ \partial \varphi_j(0)} {\partial z_1} \frac{z_1 z_j} {(1-\langle z, \overline{\varphi(0)} \rangle )^2 } + \ldots +\\ \label{Eq3.0014}
&\psi(0) \frac{ \partial \varphi_n(0)} {\partial z_1} \sum\limits_{j=1}^{n} \frac{ \partial \varphi_j(0)} {\partial z_1} \frac{z_n z_j} {( 1 - \langle z, \overline{\varphi(0)}\rangle )^2 }.
\end{align}
Combining Equation (\ref{Eq3.008}) and Equation (\ref{Eq3.0014}), we have
\begin{align} \label{Eq3.0015}
\varphi(0)=0.
\end{align}
Finally, from Equation (\ref{Eq3.002}), Equation (\ref{Eq3.004}) and Equation (\ref{Eq3.0015}) we easily deduce that
$$ \psi(z) = \psi(0) = c \ \ \ \ \ \textrm{and} \ \ \ \ \ \varphi(z) = \varphi'(0)z $$
where $c$ is a constant and $\varphi'(0)$ is a symmetric matrix with $||\varphi'(0)||\leq1$. Indeed, notice that if $C_{\varphi'(0)z} $ is $J$-symmetric, it is easy to show that $\varphi'(0)=\overline{\varphi'(0)}^*$, that is, $\varphi'(0)$ is a symmetric matrix.
The converse is clear.
\qed
\begin{cor}
Let $\varphi$ be an analytic self-map of $\mathbb{B}_N$, if $C_\varphi$ is bounded on $\mathcal{D}(\mathbb{B}_{N})$, then $C_\varphi$ is $J$-symmetric on $\mathcal{D}(\mathbb{B}_{N})$ if and only if $\varphi(z)=\varphi'(0)z$
for $z\in\mathbb{B}_{N}$ and $\varphi'(0)$ is a symmetric matrix with $||\varphi'(0)||\leq1$.
\end{cor}
Next we consider the unitary composition operator $C_\varphi$ on $\mathcal{D}(\mathbb{B}_{N})$. Then we can use the unitary composition operator to construct another conjugation operator on $\mathcal{D}(\mathbb{B}_{N})$.
\begin{Lemma}{\rm ([\citealp{ZY}, Theorem 4.1])}
Let $\varphi$ be an analytic self-map of $\mathbb{B}_{N}$. Then $C_\varphi$ is unitary on $\mathcal{D}(\mathbb{B}_{N})$ if and only if $\varphi(z)=Uz$ where $U$ is a unitary matrix.
\end{Lemma}
\begin{Prop} If $U$ is a unitary symmetric matrix, then $JC_{Uz}$ is a conjugation.
\end{Prop}
Using similar proof of Theorem \ref{Th3.4}, we also easily prove the following theorem.
\begin{thm}
Let $\varphi$ be an analytic self-map of $\mathbb{B}_N$ and $\psi$ be an analytic function on $\mathbb{B}_{N}$ for which $W_{\psi,\varphi}$ is bounded on $\mathcal{D}(\mathbb{B}_{N})$. Then $W_{\psi,\varphi}$ is complex symmetric with conjugation $JC_{Uz}$ if and only if $\psi(z)=c$ and $\varphi(z)=\varphi'(0)\overline{U}z,$
where $c$ is a constant, $\varphi'(0)$ is a symmetric matrix with $||\varphi'(0)||\leq1$ and $\varphi'(0) \overline{U}= \overline{U}\varphi'(0)$.
\end{thm}
\textit{Proof.}
Suppose $W_{\psi,\varphi}$ is complex symmetric with the conjugation $JC_{Uz}$; then
$$JC_{Uz}W_{\psi,\varphi}JC_{Uz}=(W_{\psi,\varphi})^*$$
which means that
$$JC_{Uz}W_{\psi,\varphi}=(C_{Uz}W_{\psi,\varphi})^*J.$$
It follows from Theorem \ref{Th3.4} that
$$\psi(Uz)=\psi(0)$$
and
$$\varphi(Uz)=\varphi'(0)z .$$
Therefore, replacing $Uz$ by $z$, we have
$$\psi(z)=c \ \ \ \ \ \textrm{and} \ \ \ \ \ \ \varphi(z)=\varphi'(0)\overline{U}z,$$
where $c$ is a constant, $\varphi'(0)$ is a symmetric matrix with $||\varphi'(0)||\leq1$ and $\varphi'(0)\overline{U} = \overline{U}\varphi'(0)$. In fact, notice that if $C_\varphi$ is $JC_{Uz}$-symmetric, it is easy to check that $\varphi'(0)\overline{U} = \overline{U}\varphi'(0)$.
The converse direction follows readily from a simple calculation, so we omit the proof.
\qed
\subsection{Hermitian weighted composition operators on $\mathcal{D}(\mathbb{B}_{N})$}
In this section, we will determine the functions $\psi$ and $\varphi$ for which $W_{\psi,\varphi}$ is a bounded Hermitian weighted composition operator. Not surprisingly, we will prove that no nontrivial Hermitian weighted composition operator exists on $\mathcal{D}(\mathbb{B}_{N})$.
\begin{thm}
Let $\varphi$ be an analytic self-map of $\mathbb{B}_N$ and $\psi$ be an analytic function on $\mathbb{B}_{N}$ for which $W_{\psi,\varphi}$ is bounded on $\mathcal{D}(\mathbb{B}_{N})$. Then $W_{\psi,\varphi}$ is a Hermitian weighted composition operator on $\mathcal{D}(\mathbb{B}_{N})$ if and only if
$$\psi(z)=c \ \ \ \textrm{and}\ \ \ \varphi(z)=\varphi'(0)z$$
where $c$ is a real constant and $\varphi'(0)$ is a Hermitian matrix.
\end{thm}
\textit{Proof.}
Since $W_{\psi,\varphi}$ is a bounded Hermitian weighted composition operator on $\mathcal{D}(\mathbb{B}_{N})$, we have
$$W_{\psi,\varphi}K_w(z)=W_{\psi,\varphi}^*K_w(z)$$
for all $z,w\in\mathbb{B}_{N}$, that is
$$\psi(z)K_w(\varphi(z))=\overline{\psi(w)}K_{\varphi(w)}(z)$$
for all $z,w\in\mathbb{B}_{N}$. Thus
\begin{equation} \label{Eq3.02}
\psi(z)(1+\ln\frac{1}{1-\langle\varphi(z),w\rangle})=\overline{\psi(w)}(1+\ln\frac{1}{1-\langle z,\varphi(w)\rangle}).
\end{equation}
Letting $w=0$ in Equation (\ref{Eq3.02}) gives
\begin{equation} \label{Eq3.03}
\psi(z)=\overline{\psi(0)}(1+\ln\frac{1}{1-\langle z,\varphi(0)\rangle})
\end{equation}
for all $z\in\mathbb{B}_{N}$. Putting $z=0$, we get $\psi(0)=\overline{\psi(0)}$, i.e. $\psi(0)$ is a real number.
Substituting the formula for $\psi(z)$ into Equation (\ref{Eq3.02}), we obtain
\begin{align}\nonumber
&(1-\ln(1-\langle z,\varphi(0)\rangle))(1-\ln(1-\langle\varphi(z),w\rangle)\\\nonumber
&=(1-\ln(1-\langle\varphi(0),w\rangle))(1-\ln(1-\langle z,\varphi(w)\rangle)).
\end{align}
Taking the partial derivative with respect to $\overline{w_1}$, we obtain
\begin{align} \nonumber
&(1-\ln(1-\langle z,\varphi(0)\rangle))\frac{\varphi_1(z)}{1-\langle\varphi(z),w\rangle}\\ \nonumber
&=(1-\ln(1-\langle\varphi(0),w\rangle))\frac{\frac{\partial\overline{\varphi_1(w)}}{\partial \overline{w_1}}z_1+\ldots+\frac{\partial\overline{\varphi_n(w)}}{\partial {\overline{w_1}}}z_n}{1-\langle z,\varphi(w)\rangle}\\ \nonumber
&+(1-\ln(1-\langle z,\varphi(w)\rangle))\frac{\varphi_1(0)}{1-\langle\varphi(0),w\rangle}.
\end{align}
Putting $w=0$ in the above equation, we get
$$\varphi_1(z)=\varphi_1(0)+\frac{\frac{\partial\overline{\varphi_1(0)}}{\partial \overline{w_1}}z_1+\ldots+\frac{\partial\overline{\varphi_n(0)}}{\partial \overline{w_1}}z_n}{(1-\langle z,\varphi(0)\rangle)(1-\ln(1-\langle z,\varphi(0)\rangle))}.$$
Similarly, we get
$$\varphi_k(z)=\varphi_k(0)+\frac{\frac{\partial\overline{\varphi_1(0)}}{\partial \overline{w_k}}z_1+\ldots+\frac{\partial\overline{\varphi_n(0)}}{\partial \overline{w_k}}z_n}{(1-\langle z,\varphi(0)\rangle)(1-\ln(1-\langle z,\varphi(0)\rangle))}$$
for $k=1,2,\ldots,n$. Therefore we obtain
\begin{equation}\label{Eq3.04}
\varphi(z)=\varphi(0)+\frac{\varphi'(0)^*z}{(1-\langle z,\varphi(0)\rangle)(1-\ln(1-\langle z,\varphi(0)\rangle))}.
\end{equation}
Taking the derivative with respect to $z$ and putting $z=0$, we have
$$\varphi'(0)=\varphi'(0)^*,$$
that is, $\varphi'(0)$ is a Hermitian matrix.
Furthermore, we obtain $\varphi(0)=0$; the proof is similar to that of Theorem \ref{Th3.4}, so we omit the details. It follows from Equation (\ref{Eq3.03}) and Equation (\ref{Eq3.04}) that
$$
\psi(z)=c \ \ \ \textrm{and} \ \ \ \varphi(z)=\varphi'(0)z
$$
where $c$ is a real constant and $\varphi'(0)$ is a Hermitian matrix.
Conversely, suppose $\psi(z)=c$ and $\varphi(z)=\varphi'(0)z,$ where $c$ is a real constant and $\varphi'(0)$ is a Hermitian matrix. Then it is easy to see that
$$W_{\psi,\varphi}^*K_w(z)=W_{\psi,\varphi}K_w(z)$$
for all $z,w\in\mathbb{B}_{N}$, which completes the proof of the theorem.
\qed
\begin{cor}
Let $\varphi$ be an analytic self-map of $\mathbb{B}_N$, if $C_\varphi$ is bounded on $\mathcal{D}(\mathbb{B}_{N})$, then $C_\varphi$ is Hermitian on $\mathcal{D}(\mathbb{B}_{N})$ if and only if
$$\varphi(z)=\varphi'(0)z$$
for $z\in\mathbb{B}_{N}$ and some Hermitian matrix $\varphi'(0)$ with $||\varphi'(0)||\leq1$.
\end{cor}
\section{Complex symmetric weighted composition operators on $H^2(\mathbb{B}_{N})$}
In this section, we begin with some results about complex symmetric weighted composition operators with respect to the conjugation $J$. Then we give some nontrivial sufficient conditions for weighted composition operators to be unitary and $J$-symmetric. Based upon this, we obtain more new examples of complex symmetric weighted composition operators on $H^2(\mathbb{B}_{N})$. Finally, we characterize the normality of $C_{Uz}J$-symmetric weighted composition operators.
\subsection{ Unitary and Hermitian complex symmetric weighted composition operators on $H^2(\mathbb{B}_{N})$}
We first point out when the class of $J$-symmetric weighted composition operator is unitary or Hermitian.
\begin{thm}
Let $\psi(z)=\frac{a_1}{(1-\langle z,\overline{a_{0}}\rangle)^{N}}$, and $\varphi(z)=\frac{a_0-Az}{1-\langle z,\overline{a_{0}}\rangle}$, where $a_{1}\neq0, a_{0}\in{\mathbb{B}}_{N}$ and $A$ is a symmetric matrix such that $\varphi$ is a self-map of ${\mathbb{B}}_{N}$. Then $W_{\psi,\varphi}$ is unitary if and only if
$$a_1=\lambda(1-|a_{0}|^2)^{\frac{N}{2}}, \ \overline{A}A-\overline{a_{0}}a^{T}_{0}=(1-|a_{0}|^2)I_{N} \ \textrm{and }\ A\overline{a_{0}}-{a_{0}}=0,$$
where $a_{0}\in \mathbb{B}_{N}-\{0\}$.
In particular, if $a_0=0$, $W_{\psi,\varphi}$ is unitary if and only if $\psi(z)=\lambda$ for some $\lambda\in\mathbb{C}$ with $|\lambda|=1$ and $\varphi(z)=Az$ where $A$ is a unitary and symmetric matrix.
\end{thm}
\textit{Proof.}
First recall that the adjoint of $\varphi$ is defined by
$$ \sigma(z)=\frac{\overline{a_0}-A^{*}z}{1-\langle z,a_{0}\rangle}, \ \ z\in \mathbb{B}_{N}.$$
Our hypotheses show that $W_{\psi,\varphi}$ is $J$-symmetric (see [\citealp{WY}, Theorem 3.1]); then by [\citealp{L}, Corollary 3.6], $W_{\psi,\varphi}$ is a unitary operator if and only if $\psi(z)=\lambda k_{\varphi^{-1}(0)}^{N}(z)$ for some complex number $\lambda$ with $|\lambda|=1$ and $\varphi(z)$ is an automorphism of ${\mathbb{B}}_{N}$. Therefore $\varphi^{-1}(0)=\sigma(0)=\overline{a_{0}}=\overline{\varphi(0)}$ and $a_1=\lambda(1-|a_{0}|^2)^{\frac{N}{2}}$. Since $\varphi$ is an analytic self-map of $\mathbb{B}_{N}$ and $\varphi$ is one-to-one (see also [\citealp{WY}, Theorem 3.1]), due to Theorem \ref{Th2.7}, we have $\varphi\in$ Aut$(\mathbb{B}_{N})$ if and only if
$$
m_{\varphi}=\left(
\begin{array}{ccc}
-A & a_0\\
-\overline{a_0}^*& 1\\
\end{array}
\right)
$$
is a non-zero multiple of Kre$\breve{{\i}}$n isometry on Kre$\breve{{\i}}$n space $\mathbb{C}^{N+1}$. So we have
$$
|k|^2\left(\begin{array}{cc}
-A^*&-\overline{a_0}\\
a_0^*&1\\
\end{array} \right)
\left( \begin{array}{cc}
I_N& 0\\
0&-1\\
\end{array} \right)
\left( \begin{array}{cc}
-A & a_0\\
-\overline{a_0}^*& 1\\
\end{array}
\right)=
\left( \begin{array}{cc}
I_N & 0\\
0& -1\\
\end{array}
\right).
$$
Then we can obtain from direct computations that
\begin{equation} \label{4.1}
|k|^2(A^*A-\overline{a}_0 \overline{a}_0^*)=I_N,
\end{equation}
\begin{equation} \label{4.2}
|k|^2(-A^*a_0+\overline{a}_0)=0,
\end{equation}
\begin{equation} \label{4.3}
|k|^2(a_0^*a_0-1)=-1.
\end{equation}
If $a_0=0$ in Equation (\ref{4.3}), we get
$$\psi(z)=a_1=\lambda$$
for some $\lambda\in\mathbb{C}$ with $|\lambda|=1$ and
$$A^*A=AA^*=I_N.$$
Thus we have
$\varphi(z)=Az$, where $A$ is a symmetric unitary matrix.
If $a_0\neq0$ in Equation (\ref{4.3}), we get $|k|^2=\frac{1}{1-|a_0|^2}$, then substituting the expression of $|k|^2$ into Equation (\ref{4.1}) and (\ref{4.2}) we have
$$\overline{A}A-\overline{a}_0a_0^{T}=(1-|a_0|^2)I_N, \ \ \ A\overline{a}_0-a_0=0.$$
Combining these two cases, we have our conclusion.
\qed
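As a sanity check of these conditions, consider the assumed example $a_0=b$ real and $A=T$, where $T$ is the self-adjoint map attached to $b$; the following sketch verifies both identities numerically (the value of $b$ is a hypothetical point of the real ball):
\begin{verbatim}
import numpy as np

b = np.array([0.3, 0.4])                      # hypothetical real a_0 = b
s = np.sqrt(1 - b @ b)
T = s * np.eye(2) + (1 - s) * np.outer(b, b) / (b @ b)
A, a0 = T, b                                  # assumed example: A = T
lhs = A.conj() @ A - np.outer(a0.conj(), a0)
print(np.allclose(lhs, (1 - a0 @ a0) * np.eye(2)))  # first condition
print(np.allclose(A @ a0.conj(), a0))               # second condition
\end{verbatim}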
\begin{thm}
Let $\psi(z)=\frac{a_1}{(1-\langle z,\overline{a_{0}}\rangle)^{N}}$, and $\varphi(z)=\frac{a_0-Az}{1-\langle z,\overline{a_{0}}\rangle}$, where $a_{1}\neq0, \ a_{0}\in{\mathbb{B}}_{N}$ and $A$ is a symmetric matrix such that $\varphi$ is a self-map of ${\mathbb{B}}_{N}$. Then $W_{\psi,\varphi}$ is Hermitian if and only if $a_1$ is a real number, $a_0$ is a real vector, and $A$ is a real matrix.
\end{thm}
\textit{Proof.}
We know from [\citealp{WY}, Theorem 3.1] that $W_{\psi,\varphi}$ is complex symmetric with conjugation $J$, and note that $W_{\psi,\varphi}$ is Hermitian (i.e. $W_{\psi,\varphi}=W_{\psi,\varphi}^*$) if and only if
$$W_{\psi,\varphi} JK_w(z)=JW_{\psi,\varphi} K_w(z)$$
for any $z,w\in \mathbb{B}_N$, which implies
\begin{equation} \label{Eq4.4}
\frac{a_1}{(1-\langle z,\overline{a_{0}}\rangle)^{N}(1-\langle \varphi(z),\overline{w}\rangle)^N}=\frac{\overline{a_1}}{(1-\langle\overline{a_{0}},\overline{z}\rangle)^{N}(1-\langle w,\varphi(\overline{z})\rangle)^N}.
\end{equation}
Putting $w=0$ in Equation (\ref{Eq4.4}) we have
$$\frac{a_1}{(1-\langle z,\overline{a_{0}}\rangle)^{N}}=\frac{\overline{a_1}}{(1-\langle\overline{a_{0}},\overline{z}\rangle)^{N}}.$$
Thus, $a_1$ is a real number and $a_0$ is a real vector.
Combining these with Equation (\ref{Eq4.4}) we get
$$\frac{1}{(1-\langle \varphi(z),\overline{w}\rangle)^N}=\frac{1}{(1-\langle w,\varphi(\overline{z})\rangle)^N}.$$
Hence, we have $\varphi(z)=\overline{\varphi(\overline {z})}$, that is
$$\frac{a_0-Az}{1-\langle z,a_0\rangle}=\frac{a_0-\overline{A}z}{1-\langle z,a_0\rangle}.$$
It follows that $A$ is a real matrix, which completes the proof.
\qed
\subsection{ Some new examples of complex symmetric weighted composition operators on $H^2(\mathbb{B}_{N})$}
In this section, we will give some new examples of complex symmetric weighted composition operators on $H^2(\mathbb{B}_{N})$. For this purpose, we first present some sufficient conditions for weighted composition operators to be both unitary and $J$-symmetric.
\begin{Prop}\label{Prop4.2}
Let $\Phi$ be an analytic self-map of $\mathbb{B}_{N}$ and $\Psi$ be an analytic function. If $\Psi(z)=\lambda$,\ $\Phi(z)=Uz$, where $|\lambda|=1$ and $U$ is a unitary and symmetric matrix, or $\Psi(z)=\mu\frac{(1-|a|^2)^{\frac{N}{2}}}{(1-\langle z,a\rangle)^{N}}$,\ $\Phi(z)=U\frac{a-Tz}{1-\langle z,a\rangle}$, where $\mu \in \mathbb{C}$, $|\mu|=1$, $Ua=\overline{a}$, $U$
is a symmetric unitary matrix, and $T$ is a self-adjoint map depending on $a$. Then the weighted composition operator $W_{\Psi,\Phi}$ is unitary and $J$-symmetric.
\end{Prop}
\textit{Proof.}
Clearly if $\Psi(z)=\lambda$ and $\Phi(z)=Uz$, where $|\lambda|=1$, $U$ is a unitary and symmetric matrix, then $W_{\Psi,\Phi}$ is unitary and $J$-symmetric.
Suppose that
$$\Psi(z)=\mu\frac{(1-|a|^2)^{\frac{N}{2}}}{(1-\langle z,a\rangle)^{N}},\ \ \ \ \ \Phi(z)=U\frac{a-Tz}{1-\langle z,a\rangle},$$
where $|\mu|=1$, $Ua=\overline{a}$. From [\citealp{L}, Corollary 3.6], we see that $W_{\Psi,\Phi}$ is unitary.
Notice that $Ua=\overline{a}$, so we have
\begin{align} \nonumber
UT&=\sqrt{1-|a|^2}U+(1 - \sqrt{1-|a|^2}) \frac{Ua\overline{a}^T}{|a|^2}\\
&=\sqrt{1-|a|^2}U+(1 - \sqrt{1-|a|^2}) \frac{\overline{a}a^*}{|a|^2}.\label{Eq4.05}
\end{align}
Since $U$ is a symmetric unitary matrix, we can then use Equation (\ref{Eq4.05}) to obtain $UT$ is a symmetric matrix. Then [\citealp{WY}, Theorem 3.1] implies that $W_{\Psi,\Phi}$ is a $J$-symmetric weighted composition operator.
\qed
\begin{pro}
Give a sufficient and necessary condition for a weighted composition operator to be both unitary and $J$-symmetric. Note that the case of $H^2(\mathbb{D})$ was solved by Fatehi \cite{F}.
\end{pro}
Before beginning any proofs, we present examples simple enough that unitary and $J$-symmetric weighted composition operators $W_{\Psi,\Phi}$ can be written down concretely.
\begin{ex}
\begin{itemize}
\item[(i)] Let $\Phi$ be an analytic self-map of $\mathbb{B}_{N}$ and $\Psi$ be an analytic function. If $\Psi(z)=\mu\frac{(1-|a|^2)^{\frac{N}{2}}}{(1-\langle z,a\rangle)^{N}}$,\ $\Phi(z)=\frac{a-Tz}{1-\langle z,a\rangle}$, where $\mu \in \mathbb{C}$, $|\mu|=1$ and $a\in\mathbb{R}^N \cap \mathbb{B}_N $, then the weighted composition operator $W_{\Psi,\Phi}$ is unitary and $J$-symmetric.
\item[(ii)] Let $\Phi$ be an analytic self-map of $\mathbb{B}_{N}$ and $\Psi$ be an analytic function. If $\Psi(z)=\mu\frac{(1-|a|^2)^{\frac{N}{2}}}{(1-\langle z,a\rangle)^{N}}$,\ $\Phi(z)=U\frac{a-Tz}{1-\langle z,a\rangle}$, where $\mu \in \mathbb{C}$, $|\mu|=1$, $ a=(a_j) $ with $ a_j \neq 0$ for $j=1,2,\ldots,n$ and
$$
U=\left(
\begin{array}{cccc}
e^{-2i\arg a_1} & & & \\
 & e^{-2i\arg a_2} & & \\
 & & \ddots & \\
 & & & e^{-2i\arg a_n}
\end{array}
\right),
$$
then the weighted composition operator $W_{\Psi,\Phi}$ is unitary and $J$-symmetric.
\end{itemize}
\end{ex}
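The matrix $U$ of part (ii) is easy to check directly; in the sketch below the vector $a$ is a hypothetical point of $\mathbb{B}_3$ with non-zero coordinates:
\begin{verbatim}
import numpy as np

a = np.array([0.2 + 0.1j, -0.3 + 0.2j, 0.1 - 0.4j])
U = np.diag(np.exp(-2j * np.angle(a)))   # U = diag(e^{-2i arg a_j})
print(np.allclose(U @ a, a.conj()))            # U a = conj(a)
print(np.allclose(U @ U.conj().T, np.eye(3)))  # U is unitary
print(np.allclose(U, U.T))                     # U is symmetric
\end{verbatim}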
Next, we will use unitary and $J$-symmetric weighted composition operators to construct some special conjugations.
\begin{cor}
If $\Psi$ and $\Phi$ satisfy the conditions in Proposition \ref{Prop4.2}, then $W_{\Psi,\Phi}J$ is a conjugation.
\end{cor}
\textit{ Proof. }
The conclusion follows directly from [\citealp{GPP}, Lemma 3.2].
\qed
Next we present a sufficient and necessary condition for weighted composition operators to be $W_{\Psi,\Phi}J$-symmetric, where $\Psi$ and $\Phi$ satisfy the conditions in Proposition \ref{Prop4.2}.
Most of them are direct extensions of results in \cite{F}, so we omit the details.
\begin{thm}\label{Th4.5}
Let $\psi(z)=\frac{a_1}{(1-\langle z,\overline{a_{0}}\rangle)^{N}}$, and $\varphi(z)=\frac{a_0-Az}{1-\langle z,\overline{a_{0}}\rangle}$, where $a_{1}\neq0, a_{0}\in{\mathbb{B}}_{N}$ and $A$ is a symmetric matrix such that $\varphi$ is a self-map of ${\mathbb{B}}_{N}$.
\begin{itemize}
\item[(i)] For $a\neq0$, the weighted composition operator $W_{\widetilde{\psi},\widetilde{\varphi}}$ is complex symmetric with conjugation $W_{\Psi,\Phi}J$ if and only if $\widetilde{\psi}=\Psi\cdot\psi\circ\Phi$, $\widetilde{\varphi}=\varphi\circ\Phi$.
\item[(ii)] If $U$ is a unitary symmetric matrix, the weighted composition operator $W_{\widetilde{\psi},\widetilde{\varphi}}$ is complex symmetric with conjugation $C_{Uz}J$ if and only if
$\widetilde{\psi}(z)=\psi(Uz)$ and $\widetilde{\varphi}(z)=\varphi(Uz).$
\end{itemize}
\end{thm}
Due to a result of [\citealp{NST}, Theorem 2.10], Narayan, Sievewright, and Thompson gave some examples of linear fractional, non-automorphic maps that induce complex symmetric composition operators on $H^2(\mathbb{D})$. The ideas in \cite{NST} may be adapted to determine when $C_{Az+c}$ is $JW_{\psi_{b},\varphi_{b}}$-symmetric on $H^2(\mathbb{B}_{N})$, where $\psi_{b}(z)= k_b^N(z)=\frac{(1-|b|^2)^{\frac{N}{2}}}{(1-\langle z,b\rangle)^N}$,\ $\varphi_{b}(z)=\frac{b-Tz}{1-\langle z,b\rangle}$ with $b=(I-A)^{-1}c $, and $T = \sqrt{1-|b|^2}I + (1 - \sqrt{1-|b|^2}) \frac{bb^T}{|b|^2}$.
However, the result can fail when $N > 1$; we do have the following restricted version.
\begin{Lemma}\label{lem2.7}
Let $\gamma >0$ be given. Suppose $\varphi$ is a linear fractional map of $\mathbb{B}_{N}$ into itself. Then $C_\varphi$ is bounded on $H_\gamma$.
\end{Lemma}
\begin{Lemma}
Let $\sigma(z)=Az+c$ be a linear fractional map of $\mathbb{B}_{N}$ into $\mathbb{B}_{N}$. Then $C_\sigma$ is bounded on $H^2(\mathbb{B}_{N})$ and we have
$$C_\sigma^*=T_\psi C_\varphi,$$
where $\psi(z)=\frac{1}{(1+\langle z,-c\rangle)^N}$, $\varphi(z)=\frac{A^*z}{1+\langle z,-c\rangle}.$
\end{Lemma}
\textit{Proof.}
We know from Lemma \ref{lem2.7} that $C_\sigma$ is bounded on $H^2(\mathbb{B}_{N})$; then, by the adjoint formula on $H^2(\mathbb{B}_{N})$ given in Theorem \ref{Th2.6}, the result follows from a simple computation.
\qed
\begin{thm}\label{Th4.8}
Let $A=(a_{i,j})$ be an $N\times N$ symmetric matrix such that 1 is not an eigenvalue of $A$, and let $c=(c_i)$ be an $N$-column vector. Let $b=(I-A)^{-1} c\in \mathbb{B}_{N}\cap \mathbb{R}^{N}$, and suppose $Ab=\lambda b$ for some $\lambda\in\mathbb{C}$. Let $\sigma(z)=Az+c$; then $C_{\sigma}$ is $JW_{\psi_{b},\varphi_{b}}$-symmetric.
\end{thm}
\textit{Proof.}
We need to prove that
\begin{equation} \label{Eq4.5}
C_\sigma JW_{\psi_{b},\varphi_{b}}K_w(z)=JW_{\psi_{b},\varphi_{b}}C_\sigma^*K_w(z)
\end{equation}
for all $z, w\in\mathbb{B}_{N} $, where $C_{\sigma}^*=M_\psi C_\varphi$ with $\psi(z)=\frac{1}{(1+\langle z,-c\rangle)^N}$ and $\varphi(z)=\frac{A^*z}{1+\langle z,-c\rangle}$, $\psi_{b} (z)=k_b^N(z)=\frac{(1-|b|^2)^{\frac{N}{2}}}{(1-\langle z,b\rangle)^N}$, and $\varphi_{b}(z)=\frac{b-Tz}{1-\langle z,b\rangle}$, where $T$ is a self-adjoint operator depending on $b$.
For the left side of Equation (\ref{Eq4.5}), we have
\begin{align*}
&C_{Az+c}JW_{\psi_{b},\varphi_{b}}K_w(z)\\
&=C_{Az+c}J\frac{(1-|b|^2)^{\frac{N}{2}}}{(1-\langle z,b\rangle)^N}K_w(\frac{b-Tz}{1-\langle z,b\rangle})\\
&=\frac{(1-|b|^2)^{\frac{N}{2}}}{(1-\langle b,\overline{Az+c}\rangle)^N}\overline{K_w(\frac{b-T(\overline{Az+c})}{1-\langle\overline{Az+c},b\rangle})}\\
&=\frac{(1-|b|^2)^{\frac{N}{2}}}{(1-\langle b,\overline{Az+c}\rangle)^N}\frac{1}{(1-\langle w,\frac{b-T(\overline{Az+c})}{1-\langle\overline{Az+c},\overline{b}\rangle}\rangle)^N}\\ &=\frac{(1-|b|^2)^{\frac{N}{2}}}{(1-\langle b,\overline{Az+c}\rangle-\langle w,b-T\overline{Az}-T\overline{c}\rangle)^N}.
\end{align*}
For the right side of Equation (\ref{Eq4.5}), we have
\begin{align*}
&JW_{\psi_{b},\varphi_{b}}C_{Az+c}^*K_w(z)\\
&=J\frac{(1-|b|^2)^{\frac{N}{2}}}{(1-\langle z,b\rangle)^N}C_\frac{b-Tz}{1-\langle z,b\rangle} \frac{1}{(1+\langle z,-c\rangle)^N} K_w \Big(\frac{A^*z}{1+\langle z,-c\rangle} \Big) \\
&=\frac{(1-|b|^2)^{\frac{N}{2}}}{(1-\langle b,\overline{z}\rangle)^N} \frac{1}{(1+\langle-c,\frac{b-T\overline{z}}{1-\langle\overline{z},b\rangle}\rangle)^N} \frac{1}{(1-\langle w,\frac{A^*b-A^*T\overline{z}}{1-\langle\overline{z},b\rangle+\langle b-T\overline{z},-c\rangle}\rangle)^N} \\
&=\frac{(1-|b|^2)^{\frac{N}{2}}}{(1-\langle b,\overline{z}\rangle+\langle-c,b\rangle+\langle c,T\overline{z}\rangle-\langle Aw,b\rangle+\langle Aw,T\overline{z}\rangle)^N}.
\end{align*}
To see Equation (\ref{Eq4.5}) holds, by the previous calculations, it is enough to show that
\begin{equation} \label{Eq4.6}
-z^TA^Tb - b^Tw + z^TA^TTw + c^TTw = -z^Tb + z^TTc - b^TAw + z^TTAw.
\end{equation}
Since $T = \sqrt{1-|b|^2}I + (1 - \sqrt{1-|b|^2}) \frac{bb^T}{|b|^2}$ and $Ab = \lambda b$ for some $\lambda \in \mathbb{C}$, we have $c = kb$ with $k = 1 - \lambda$ and
\begin{align}
Tc &= \sqrt{1-|b|^2}c + \frac{(1 - \sqrt{1-|b|^2})bb^T}{|b|^2}kb \nonumber \\
&= \sqrt{1-|b|^2}kb + (1 - \sqrt{1-|b|^2})kb \nonumber \\
&= kb = c .\label{Eq4.7}
\end{align}
Now, if we can verify that $A$ and $T$ commute, i.e.
\begin{equation} \label{Eq4.8}
AT = TA,
\end{equation}
then Equations (\ref{Eq4.6}), (\ref{Eq4.7}) and (\ref{Eq4.8}) immediately imply that Equation (\ref{Eq4.5}) holds.
Indeed, we just need to prove $Abb^T = bb^TA$ holds. Since
$$
Abb^T = A \frac{c}{k} \Big(\frac{c}{k}\Big)^T = \Big(\frac{c}{k}- c\Big) \frac{c^T}{k} = \frac{cc^T}{k^2} - \frac{cc^T}{k}
$$
and
$$
bb^TA = \frac{c}{k} \Big(\frac{c}{k}\Big)^TA = \frac{c}{k} \Big(\frac{Ac}{k}\Big)^T = \frac{c}{k} \Big(\frac{c}{k} - c\Big)^T = \frac{cc^T}{k^2} - \frac{cc^T}{k},
$$
where we use the fact that $A$ is a symmetric matrix, so we have $Abb^T = bb^TA$, as desired.
\qed
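The two identities $Tc=c$ and $AT=TA$ used above can be verified numerically on an assumed example in which $A$ is constructed so that $b$ is an eigenvector; the numbers below are hypothetical and satisfy the hypotheses of Theorem \ref{Th4.8}:
\begin{verbatim}
import numpy as np

b = np.array([0.3, 0.2, 0.1])                 # real, inside the ball
Pb = np.outer(b, b) / (b @ b)
lam, nu = 0.5, 0.25                           # eigenvalues, neither = 1
A = lam * Pb + nu * (np.eye(3) - Pb)          # symmetric, A b = lam b
c = (np.eye(3) - A) @ b                       # so b = (I - A)^{-1} c
s = np.sqrt(1 - b @ b)
T = s * np.eye(3) + (1 - s) * Pb
print(np.allclose(T @ c, c))                  # Tc = c
print(np.allclose(A @ T, T @ A))              # AT = TA
\end{verbatim}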
\subsection{Normality of complex symmetric weighted composition operators on $H^2(\mathbb{B}_{N})$}
We turn next to the problem of identifying when the class of complex symmetric weighted composition operators mentioned in Theorem \ref{Th4.5} (ii) is normal.
\begin{thm}
Let $\psi(z)=\frac{a_1}{(1-\langle Uz,\overline{a_{0}}\rangle)^{N}}$, and $\varphi(z)=\frac{a_0-AUz}{1-\langle Uz,\overline{a_{0}}\rangle}$, where $a_{1}\neq0, a_{0}\in{\mathbb{B}}_{N}$, $A$ is a symmetric matrix and $U$ is a unitary symmetric matrix such that $\varphi$ is a self-map of ${\mathbb{B}}_{N}$. Then $W_{\psi,\varphi}$ is normal if and only if
$$\overline{UA}AU-\overline{Ua_0}a_0^TU=A\overline{A}-a_0{a_0^*}$$
and
$$\overline{UA}a_0 - \overline{Ua_0} = A\overline{a_0} - a_0.$$
\end{thm}
\textit{Proof.}
Since $\varphi(z)=\frac{a_0-AUz}{1-\langle Uz,\overline{a_{0}}\rangle}$, the associated matrix of $\varphi$ is
$$
m_{\varphi}=\left(
\begin{array}{ccc}
-AU &a_0\\
{-a_0^T}U& 1\\
\end{array}
\right)
$$
and the adjoint map $\sigma=\sigma_\varphi$ is defined by
$$\sigma(z)=\frac{(-AU)^*z+\overline{Ua_0}}{1+\langle z,-a_{0}\rangle}$$
with the associated matrix of $\sigma$ is
$$
m_{\sigma}=\left(
\begin{array}{ccc}
-(AU)^* &\overline{Ua_0}\\
{-a_0}^*& 1\\
\end{array}
\right).
$$
Therefore we have $|\varphi(0)|=|a_0|=|Ua_0|=|\sigma(0)|$ and $ \psi(z)=a_1K_{\overline{Ua_0}}(z)=a_1K_{\sigma(0)}(z)$. By [\citealp{L}, Proposition 4.6], we see that $W_{\psi,\varphi}$ is normal if and only if $\varphi\circ\sigma=\sigma\circ\varphi$. Since all non-zero multiples of $m_{\varphi\circ\sigma}$ give the same linear fractional map, $W_{\psi,\varphi}$ is normal if and only if $m_{\varphi\circ\sigma}=k m_{\sigma\circ\varphi}$ for some $k\neq0$.
A calculation shows that
\begin{equation}\label{Eq4.10}
\overline{UA}AU-\overline{Ua_0}a_0^TU=k(A\overline{A}-a_0{a_0^*}),
\end{equation}
\begin{equation}\label{Eq4.11}
-\overline{UA}a_0+\overline{Ua_0}=k(-A\overline{a_0}+a_0),
\end{equation}
\begin{equation}\label{Eq4.12}
-\overline{a_0}^Ta_0+1=k(-a_0^T\overline{a_0}+1).
\end{equation}
Because $|a_0|=|\varphi(0)|\neq 1$, by Equation (\ref{Eq4.12}), we have $k=1$. Then combining this with Equation (\ref{Eq4.10}) and (\ref{Eq4.11}), we have $W_{\psi,\varphi}$ is normal if and only if $\overline{UA}AU-\overline{Ua_0}a_0^TU=A\overline{A}-a_0{a_0^*}$ and $\overline{UA}a_0 - \overline{Ua_0} = A\overline{a_0} - a_0$, as desired.
\qed
\section{Introduction}\label{Introduccion}
The braid group of $n$ strings, ${\mathbb B}_n$, is defined by generators
and relations as follows:
$$
{\mathbb B}_{n}=<\tau_{1}, \dots,\tau_{n-1}>_{/\sim}
$$
$$
\sim=\{ \tau_k\tau_j=\tau_j\tau_k, \textrm{ if }|k- j|>1; \ \
\tau_k\tau_{k+1}\tau_k=\tau_{k+1}\tau_{k}\tau_{k+1} \ \ 1\leq k\leq n-2\ \}
$$
We will consider finite dimensional complex representations of
${\mathbb B}_n$; that is pairs $(\phi,V)$ where
$$
\phi : {\mathbb B}_n\rightarrow \Aut(V)
$$
is a morphism of groups and $V$ is a complex vector space of
finite dimension.
In this paper, we will construct a family of finite dimensional complex representations of ${\mathbb B}_n$ that contains the standard representations. Moreover, we will give necessary conditions for a member of this family to be irreducible. In this way, we can find explicit families of irreducible representations. In particular, we will define a subfamily of irreducible representations $(\phi_m, V_m)$, $1\leq m<n$, where $\dim V_m=\left( \begin{smallmatrix} n\\\noalign{\medskip}m \end{smallmatrix}\right)$ and the corank of $\phi_m$ is equal to $\frac{2(n-2)!}{(m-1)!(n-m-1)!}$.
This family of representations can be useful in the progress of the classification of the irreducible representations of ${\mathbb B}_n$. As far as we know, there are only a few contributions in this direction; some known results are the following. Formanek classified all the irreducible representations of ${\mathbb B}_n$ of dimension lower than $n$ \cite{F}. Sysoeva did it for dimension equal to $n$ \cite{S}. Larsen and Rowell gave some results for unitary representations of ${\mathbb B}_n$ of dimensions that are multiples of $n$; in particular, they proved that there are no irreducible representations of dimension $n+1$. Levaillant determined when the Lawrence--Krammer representation is irreducible and when it is reducible \cite{L}.
\section{Construction and Principal Theorems}
In this section, we will construct a family of representations of ${\mathbb B}_n$ that we believe to be new,
and we will obtain a subfamily of irreducible representations.
We choose $n$ non-negative integers $z_1, z_2, \dots, z_n$, not necessarily different. Let $X$ be the set of all
the possible $n$-tuples obtained by permutation of the coordinates of the fixed $n$-tuple $(z_1, z_2, \dots, z_n)$. For example, if the $z_i$ are all different, then the cardinality of $X$ is $n!$. Explicitly, if $n=3$,
$$
X=\{(z_1, z_2, z_3), (z_1, z_3, z_2), (z_2, z_1, z_3), (z_2, z_3, z_1), (z_3, z_1, z_2), (z_3, z_2, z_1)\}
$$
Or if $z_1=z_2=1$ and $z_i=0$ for all $i=3, \dots, n$, then the cardinality of $X$ is $\left( \begin{smallmatrix}
n\\\noalign{\medskip}2 \end{smallmatrix}\right)=\frac{n(n-1)}{2}$. Explicitly, for $n=3$
$$
X=\{(1,1,0), (1,0,1), (0,1,1)\}
$$
Let $V$ be a complex vector space with orthonormal basis $\beta=\{v_x : x\in X\}$. Then the dimension of $V$ is
the cardinality of $X$.
We define $\phi:{\mathbb B}_n \rightarrow \Aut (V)$, such that
$$
\phi(\tau_k)(v_x)= q_{x_k, x_{k+1}} v_{\sigma_k(x)}
$$
where $q_{x_k, x_{k+1}}$ is a non-zero complex number that depends on $x=(x_1, \dots, x_n)$ only through its entries in the places $k$ and $k+1$; and
$$
\sigma_k(x_1, \dots, x_n)=(x_1, \dots,x_{k-1}, x_{k+1}, x_k, x_{k+2}, \dots, x_n)
$$
With this notation, we have the following theorem.
\begin{theorem}
$(\phi, V)$ is a representation of the braid group ${\mathbb B}_n$.
\end{theorem}
\begin{proof}
We need to check that the operators $\phi(\tau_k)$ satisfy the relations of the braid group. For $j\neq k-1, k, k+1$ we have that
$$
\phi(\tau_k)\phi(\tau_j)(v_x)= \phi(\tau_k)(q_{x_j, x_{j+1}} v_{\sigma_j(x)})=q_{x_j, x_{j+1}} q_{x_k, x_{k+1}}
v_{\sigma_k \sigma_j(x)}
$$
On the other hand
$$
\phi(\tau_j)\phi(\tau_k)(v_x)= \phi(\tau_j)(q_{x_k, x_{k+1}} v_{\sigma_k(x)})=q_{x_k, x_{k+1}} q_{x_j, x_{j+1}}
v_{\sigma_j \sigma_k(x)}
$$
As $\sigma_k \sigma_j(x)=\sigma_j \sigma_k(x)$ if $|j-k|>1$, we conclude that $\phi(\tau_k)\phi(\tau_j)=\phi(\tau_j)\phi(\tau_k)$ whenever $|j-k|>1$.
In the same way, we have
$$
\begin{aligned}
\phi(\tau_k)\phi(\tau_{k+1})\phi(\tau_k)(v_x)&= \phi(\tau_k)\phi(\tau_{k+1})(q_{x_k, x_{k+1}}
v_{\sigma_k(x)})\\
&=\phi(\tau_k)(q_{x_k, x_{k+1}} q_{x_{k}, x_{k+2}} v_{\sigma_{k+1} \sigma_k(x)})\\
&= q_{x_k, x_{k+1}} q_{x_{k}, x_{k+2}} q_{x_{k+1}, x_{k+2}} v_{\sigma_k \sigma_{k+1} \sigma_k(x)}
\end{aligned}
$$
Similarly,
$$
\begin{aligned}
\phi(\tau_{k+1})\phi(\tau_k)\phi(\tau_{k+1})(v_x)&= \phi(\tau_{k+1})\phi(\tau_k)(q_{x_{k+1}, x_{k+2}}
v_{\sigma_{k+1}(x)})\\
&=\phi(\tau_{k+1})(q_{x_{k+1}, x_{k+2}} q_{x_{k}, x_{k+2}} v_{\sigma_k \sigma_{k+1}(x)})\\
&= q_{x_{k+1}, x_{k+2}} q_{x_{k}, x_{k+2}} q_{x_k, x_{k+1}} v_{\sigma_{k+1} \sigma_k \sigma_{k+1}(x)}
\end{aligned}
$$
As $\sigma_k \sigma_{k+1} \sigma_k(x)=\sigma_{k+1} \sigma_k \sigma_{k+1}(x)$, for all $k$ and $x\in X$, then
$\phi(\tau_k)\phi(\tau_{k+1})\phi(\tau_k)=\phi(\tau_{k+1})\phi(\tau_k)\phi(\tau_{k+1})$ for all $k$.
\end{proof}
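To make the construction concrete, the following sketch (ours; the tuple and the value of $t$ are arbitrary) builds the matrices $\phi(\tau_k)$ in the basis $\beta$ for a given tuple $(z_1, \dots, z_n)$ and a given choice of the coefficients $q_{x_k, x_{k+1}}$, and verifies the braid relations numerically.
\begin{verbatim}
import itertools
import numpy as np

def build_rep(z_tuple, q):
    # X = distinct permutations of z_tuple, sorted (lexicographic order);
    # phi(tau_{k+1}) v_x = q(x_k, x_{k+1}) v_{sigma_{k+1}(x)}
    X = sorted(set(itertools.permutations(z_tuple)))
    index = {x: i for i, x in enumerate(X)}
    mats = []
    for k in range(len(z_tuple) - 1):
        M = np.zeros((len(X), len(X)), dtype=complex)
        for x in X:
            y = list(x)
            y[k], y[k + 1] = y[k + 1], y[k]     # sigma swaps places k, k+1
            M[index[tuple(y)], index[x]] = q(x[k], x[k + 1])
        mats.append(M)
    return X, mats

t = 0.7   # arbitrary
X, mats = build_rep((1, 1, 0, 0, 0), lambda a, b: 1.0 if a == b else t)

for j in range(len(mats)):            # far commutativity
    for k in range(len(mats)):
        if abs(j - k) > 1:
            assert np.allclose(mats[j] @ mats[k], mats[k] @ mats[j])
for k in range(len(mats) - 1):        # tau_k tau_{k+1} tau_k relation
    A, B = mats[k], mats[k + 1]
    assert np.allclose(A @ B @ A, B @ A @ B)
print(len(X))                         # 10, the dimension for n = 5, m = 2
\end{verbatim}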
As $\beta$ is an orthonormal basis, we have that,
$$
<\phi(\tau_k)v_y, v_x> = <q_{y_k, y_{k+1}}v_{\sigma_k(y)}, v_x> = <v_y, \overline{q_{x_{k+1}, x_k}} v_{\sigma_k(x)}>
$$
then,
$$
(\phi(\tau_k))^*(v_x)= \overline{q_{x_{k+1}, x_k}} v_{\sigma_k(x)}
$$
therefore, $\phi(\tau_k)$ is self-adjoint if and only if $q_{x_{k+1}, x_k}=\overline{q_{x_k, x_{k+1}}}$ for all $x\in
X$. In particular, if $x_k=x_{k+1}$ then $q_{x_k, x_{k+1}}$ is a real number. In the same way, $\phi(\tau_k)$ is
unitary if and only if $|q_{x_k, x_{k+1}}|^2=1$ for all $x\in X$.
Now, we will give a subfamily of irreducible representations.
\begin{theorem} \label{irreducible}
If $\phi(\tau_k)$ is a self-adjoint operator for all
$k$, and for any pair $x,y\in X$, there exists $j$, $1\leq j\leq n-1$, such that $|q_{x_j, x_{j+1}}|^2\neq |q_{y_j, y_{j+1}}|^2$,
then $(\phi, V)$ is an irreducible representation of the
braid group ${\mathbb B}_n$.
\end{theorem}
\begin{proof}
Let $W\subset V$ be a non-zero invariant subspace. It is enough to prove that $W$
contains one of the basis vectors
$v_x$. Indeed, given $y\in X$, there exists a permutation $\sigma$ of the coordinates of $x$ that sends $x$ to $y$, because the elements of $X$ are the $n$-tuples obtained by permuting the coordinates of the fixed $n$-tuple $(z_1, \dots, z_n)$. Suppose that $\sigma=\sigma_{i_1} \cdots \sigma_{i_l}$; then $\tau:=\tau_{i_1} \cdots \tau_{i_l}$ satisfies $\phi(\tau)(v_x)=\lambda v_y$ for some non-zero complex number $\lambda$. Then $W$ contains $v_y$ and therefore $W$ contains the basis
$\beta=\{v_x : x\in X\}$.
As $\phi(\tau_k)$ is a self-adjoint operator, it commutes with $P_W$, the orthogonal projection onto the subspace $W$; therefore, $(\phi(\tau_k))^2$ commutes with $P_W$. On the other hand, note that $(\phi(\tau_k))^2(v_x)=|q_{x_k, x_{k+1}}|^2 v_x$, hence $(\phi(\tau_k))^2$ is diagonal in the basis $\beta=\{v_x : x\in X\}$. Then the matrix of $P_W$ has at least the same block structure as $(\phi(\tau_k))^2$ for all $k$, $1\leq k \leq n-1$.
If for some $k$, the matrix of $(\phi(\tau_k))^2$ has one block of
size $1\times 1$, then the matrix of $P_W$ has one block of size
$1 \times 1$. In other words, there exists $x\in X$ such that $v_x$ is an
eigenvector. If the eigenvalue associated to $v_x$ is non-zero,
then $v_x\in W$.
It remains to see that the matrix of $(\phi(\tau_k))^2$ has all its blocks of size $1\times 1$. By hypothesis, for each pair of vectors $v_x$ and $v_y$ in the basis $\beta$, there exists $k$, $1\leq k \leq n-1$, such that $|q_{x_k, x_{k+1}}|^2\neq |q_{y_k, y_{k+1}}|^2$.
Fix any order on $X$ and let $x$ and $y$ be the first and second elements of $X$. Then there exists $k$ such that $v_x$ and $v_y$ are eigenvectors of $(\phi(\tau_k))^2$ with different eigenvalues, hence $(\phi(\tau_k))^2$ has its first block of size $1\times 1$. As $(\phi(\tau_j))^2$ commutes with $(\phi(\tau_k))^2$ for all $j$, $(\phi(\tau_j))^2$ also has this property.
By induction, suppose that for all $j$ the matrix $(\phi(\tau_j))^2$ has its first $r-1$ blocks of size $1\times 1$. Let $x'$ and $y'$ be the $r$-th and $(r+1)$-th elements of $X$; then there exists $k'$ such that $v_{x'}$ and $v_{y'}$ are eigenvectors of $(\phi(\tau_{k'}))^2$ with different eigenvalues. Hence $(\phi(\tau_{k'}))^2$ has its $r$-th block of size $1\times 1$, and so does $(\phi(\tau_j))^2$ for all $j$, because it commutes with $(\phi(\tau_{k'}))^2$. We thus obtain that all the blocks are of size $1\times 1$.
\end{proof}
Note that if the numbers $q_{x_k, x_{k+1}}$ are all equal and $|X|>1$, then $\phi$ is not irreducible because the
subspace $W$, generated by the vector $v=\sum_{x\in X} v_x$, is an invariant subspace.
\subsection{Examples}
We are going to compute some explicit examples of this family of representations. We will show that the
standard representation (\cite{S}, \cite{TYM}) is a member of this family.
\subsubsection{\it{Standard Representation}}
Let $z_1=1$ and $z_j=0$ for all $j=2, \dots, n$. Then the cardinality of $X$ is $n$ and $\dim V=n$ too. For each
$x\in X$, let $q_{x_k, x_{k+1}}=1 +(t-1) x_{k+1}$, where $t\neq 0,1$ is a complex number. Therefore $\phi:{\mathbb B}_n
\rightarrow \Aut(V)$, given by $\phi(\tau_k)v_x=q_{x_k, x_{k+1}} v_{\sigma_k(x)}$, is equivalent to the standard
representation $\rho$, given by
$$
\rho(\tau_k)= \left(\begin{array}{ccccccc}
\ 1 &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ \\
\ \ &\ 1 &\ \ &\ \ &\ \ &\ \ &\ \ \\
\ \ &\ \ &\ \ddots &\ \ &\ \ &\ \ &\ \ \\
\ \ &\ \ &\ \ &\ 0 &\ t &\ \ &\ \ \\
\ \ &\ \ &\ \ &\ 1 &\ 0 &\ \ &\ \ \\
\ \ &\ \ &\ \ &\ \ &\ \ &\ \ddots &\ \ \\
\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ 1
\end{array}\right)
$$ where $t$ is in the place $(k, k+1)$.
In fact, if $\{\beta_j: j=1, \dots, n\}$ is the canonical basis of ${\mathbb C}^n$, and if $x_j$ is the element of $X$
with $1$ in the place $j$ and zero elsewhere, define
$$
\begin{array}{rcl}
\alpha :{\mathbb C}^n & \rightarrow &V \\
\beta_j & \mapsto &v_{x_j}
\end{array}
$$
Then $\alpha (\rho(\tau_k) (\beta_j))=\phi(\tau_k)(\alpha(\beta_j))$ for all $j=1, \dots, n$. Hence the
representations are equivalent.
\subsubsection{Example}
Let $z_1, \dots, z_n\in \{0, 1\}$, such that $z_1=z_2=\dots =z_m=1$ and $z_{m+1}=\dots =z_n=0$. Then the cardinality
of $X$ is $\left( \begin{smallmatrix} n\\\noalign{\medskip}m \end{smallmatrix}\right)=\frac{n!}{m! (n-m)!}$.
If $V_m$ is the vector space with basis $\beta_m=\{v_x : x\in X\}$, then $\dim V_m=\frac{n!}{m! (n-m)!}$.
For each $x:=(x_1, \dots, x_n)\in X$, let
$$q_{x_k, x_{k+1}}=\left
\{\begin{array}{ll}
1 &\text{ if } x_k=x_{k+1} \\
t &\text{ if } x_k\neq x_{k+1}
\end{array}\right.
$$ where $t$ is a real number, $t\neq 0, 1, -1$.
We define $\phi_m: {\mathbb B}_n\rightarrow \Aut (V_m)$, given by
$$
\phi_m(\tau_k)v_x=q_{x_k, x_{k+1}} v_{\sigma_k(x)}
$$
For example, fixing the lexicographic order on $X$, if $n=5$ and $m=3$, then $\dim V_m=10$ and the ordered basis is
$$
\begin{aligned}
\beta:=\{&v_{(0,0,1,1,1)}, v_{(0,1,0,1,1)}, v_{(0,1,1,0,1)},v_{(0,1,1,1,0)}, v_{(1,0,0,1,1)}, \\
&v_{(1,0,1,0,1)},v_{(1,0,1,1,0)},v_{(1,1,0,0,1)}, v_{(1,1,0,1,0)}, v_{(1,1,1,0,0)}\}
\end{aligned}
$$
and the matrices in this basis are
$$
\phi_3(\tau_1)= \left(\begin{array}{cccccccccc}
\ 1 &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ \\
\ \ &\ 0 &\ \ &\ \ &\ t &\ \ &\ \ &\ \ &\ \ &\ \ \\
\ \ &\ 0 &\ 0 &\ \ &\ 0 &\ t &\ \ &\ \ &\ \ &\ \ \\
\ \ &\ 0 &\ 0 &\ 0 &\ 0 &\ 0 &\ t &\ \ &\ \ &\ \ \\
\ \ &\ t &\ 0 &\ 0 &\ 0 &\ 0 &\ 0 &\ \ &\ \ &\ \ \\
\ \ &\ \ &\ t &\ 0 &\ \ &\ 0 &\ 0 &\ \ &\ \ &\ \ \\
\ \ &\ \ &\ \ &\ t &\ \ &\ \ &\ 0 &\ \ &\ \ &\ \ \\
\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ 1 &\ \ &\ \ \\
\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ 1 &\ \ \\
\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ 1
\end{array}\right)
$$
$$
\phi_3(\tau_2)= \left(\begin{array}{cccccccccc}
\ 0 &\ t &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ \\
\ t &\ 0 &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ \\
\ \ &\ \ &\ 1 &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ \\
\ \ &\ \ &\ \ &\ 1 &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ \\
\ \ &\ \ &\ \ &\ \ &\ 1 &\ \ &\ \ &\ \ &\ \ &\ \ \\
\ \ &\ \ &\ \ &\ \ &\ \ &\ 0 &\ \ &\ t &\ \ &\ \ \\
\ \ &\ \ &\ \ &\ \ &\ \ &\ 0 &\ 0 &\ 0 &\ t &\ \ \\
\ \ &\ \ &\ \ &\ \ &\ \ &\ t &\ 0 &\ 0 &\ 0 &\ \ \\
\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ t &\ \ &\ 0 &\ \ \\
\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ 1
\end{array}\right)
$$
$$
\phi_3(\tau_3)= \left(\begin{array}{cccccccccc}
\ 1 &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ \\
\ \ &\ 0 &\ t &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ \\
\ \ &\ t &\ 0 &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ \\
\ \ &\ \ &\ \ &1 \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ \\
\ \ &\ \ &\ \ &\ \ &\ 0 &\ t &\ \ &\ \ &\ \ &\ \ \\
\ \ &\ \ &\ \ &\ \ &\ t &\ 0 &\ \ &\ \ &\ \ &\ \ \\
\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ 1 &\ \ &\ \ &\ \ \\
\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ 1 &\ \ &\ \ \\
\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ 0 &\ t \\
\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ t &\ 0
\end{array}\right)
$$
$$
\phi_3(\tau_4)= \left(\begin{array}{cccccccccc}
\ 1 &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ &\ \ &\ \ \\
\ \ &\ 1 &\ \ &\ \ &\ \ &\ \ &\ \ &\ &\ \ &\ \ \\
\ \ &\ \ &\ 0 &\ t &\ \ &\ \ &\ \ &\ &\ \ &\ \ \\
\ \ &\ \ &\ t &\ 0 &\ \ &\ \ &\ \ &\ &\ \ &\ \ \\
\ \ &\ \ &\ \ &\ \ &\ 1 &\ \ &\ \ &\ &\ \ &\ \ \\
\ \ &\ \ &\ \ &\ \ &\ \ &\ 0 &\ \ t &\ &\ \ &\ \ \\
\ \ &\ \ &\ \ &\ \ &\ \ &\ t &\ \ 0 &\ &\ \ &\ \ \\
\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &0 &\ t &\ \ \\
\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &t &\ 0 &\ \ \\
\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ \ &\ &\ 1
\end{array}\right)
$$
With this notation, we have the following results,
\begin{theorem}
Let $n>2$, then $(\phi_m, V_m)$ is an irreducible representation of ${\mathbb B}_n$, for all $1\leq m<n$.
\end{theorem}
\begin{proof}
We analyze two cases, $n\neq 2m$ and $n=2m$. Suppose first that $n\neq 2m$. Let $x\neq y\in X$; then there exists $j$, $1\leq j\leq n$, such that $x_j\neq y_j$. If $j>1$, we may suppose that $x_{j-1}=y_{j-1}$; then $q_{x_{j-1}, x_j}\neq q_{y_{j-1}, y_j}$ and therefore $|q_{x_{j-1}, x_j}|^2\neq |q_{y_{j-1}, y_j}|^2$. If $j=1$ and $n\neq 2m$, there exists $l=2, \dots, n$ such that $x_{l-1}\neq y_{l-1}$ and $x_l=y_l$, so $|q_{x_{l-1}, x_l}|^2\neq |q_{y_{l-1}, y_l}|^2$. Then, by Theorem \ref{irreducible}, $\phi_m$ is an irreducible representation.
Note that if $n=2m$, then $x_0=(1, \dots,1,0,\dots,0)$ and $y_0=(0, \dots, 0,1,\dots, 1)$ satisfy $x_0\neq y_0$ but $q_{x_{j-1}, x_j}= q_{y_{j-1}, y_j}$ for all $j$, so we cannot use Theorem \ref{irreducible}. But in the proof of that theorem we only really use that $x$ and $y$ are consecutive in some order, and in the lexicographic order $x_0$ and $y_0$ are not consecutive. In general, for each $x\in X$ there exists $y_x\in X$ such that $q_{x_j, x_{j+1}}= q_{(y_x)_j, (y_x)_{j+1}}$ for all $j=1, \dots, n-1$: we define $y_x$ by changing in $x$ the zeros to ones and the ones to zeros. For example, if $x=(1,0,0,1,0,1)$, then $y_x=(0,1,1,0,1,0)$. However, only $x=(0, 1, \dots, 1, 0, \dots,0)$ satisfies that $y_x$ is consecutive to $x$. Therefore $P_W$, the projection onto the invariant subspace $W$, has all its blocks of size $1\times 1$, except for the $2\times 2$ block associated to $\{v_x, v_{y_x}\}$. If some $1\times 1$ block of $P_W$ is non-zero, then $W$ contains some basis vector $v_{x'}$ of $\beta_m$, and hence $W=V_m$.
In the other case, $W$ is contained in the span of $\{v_x,v_{y_x}\}$. If equality holds, then $v_x\in W$ and $W=V_m$. Suppose then that $W$ is generated by $v=a v_x+ b v_{y_x}$, with $a,b\neq 0$. But $\phi_m(\tau_1) v= t(a v_{\sigma_1(x)} + b v_{\sigma_1(y_x)})$, with $\sigma_1(x)\neq x$, $\sigma_1(y_x)\neq y_x$ and $\sigma_1(x)\neq y_x$ (if $n>2$). Therefore $\phi_m(\tau_1)v\neq \lambda v$ for all $\lambda \in {\mathbb C}$, which contradicts the invariance of $W$.
\end{proof}
The {\it{corank }} of a finite dimensional representation $\phi$ of ${\mathbb B}_n$ is the rank of $(\phi(\tau_k)-1)$.
This number does not depend on $k$ because all the $\tau_k$ are conjugate to each other (see p. 655 of \cite{C}).
\begin{theorem}
If $n>2$ and $1\leq m <n$, then $(\phi_m, V_m)$ is an irreducible representation
of dimension $\left( \begin{smallmatrix} n\\\noalign{\medskip}m \end{smallmatrix}\right)$ and corank $\frac{2(n-2)!}{(m-1)!(n-m-1)!}$.
\end{theorem}
\begin{proof}
By the previous theorem, $(\phi_m, V_m)$ is an irreducible representation. The dimension of $\phi_m$ is the cardinality of $X$, so
$$
\dim V_m= \left( \begin{smallmatrix} n\\\noalign{\medskip}m \end{smallmatrix}\right)=\frac{n!}{m! (n-m)!}
$$
We now compute the corank of $\phi_m$. Let $x\in X$ be such that $\sigma_k(x)= x$; then $x_k=x_{k+1}$ and $q_{x_k, x_{k+1}}=1$, so $\phi_m(\tau_k)(v_x) =v_x$. Hence the corank of $\phi_m$ is equal to the cardinality of $Y=\{x\in X : \sigma_k(x)\neq x\}$, which equals the cardinality of $X$ minus the cardinality of $\{x\in X : x_k=x_{k+1}=0 \text{ or } x_k=x_{k+1}=1 \}$. Therefore
$$
\begin{aligned}
cork(\phi_m)=rk(\phi_m(\tau_k) -1)&=\frac{n!}{m!(n-m)!}-\frac{(n-2)!}{m! (n-m-2)!}-\frac{(n-2)!}{(m-2)!(n-m)!}\\
&=\frac{2(n-2)!}{(m-1)!(n-m-1)!}
\end{aligned}
$$
\end{proof}
In the example $n=5$ and $m=3$, we have $cork(\phi_3)=6$.
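This count is easy to confirm numerically; a minimal sketch (ours, with an arbitrary admissible value of $t$):
\begin{verbatim}
import itertools
import numpy as np

n, m, t = 5, 3, 0.7
X = sorted(set(itertools.permutations([1] * m + [0] * (n - m))))
index = {x: i for i, x in enumerate(X)}

# phi_3(tau_1) in the lexicographically ordered basis beta
M = np.zeros((len(X), len(X)))
for x in X:
    y = list(x)
    y[0], y[1] = y[1], y[0]
    M[index[tuple(y)], index[x]] = 1.0 if x[0] == x[1] else t

print(np.linalg.matrix_rank(M - np.eye(len(X))))   # 6
\end{verbatim}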
Note that if $m=1$, the dimension of $\phi_m$ is $n$ and the corank is $2$. Therefore $\phi_1$ is equivalent to the standard representation, because the latter is the unique irreducible representation of ${\mathbb B}_n$ of dimension $n$ \cite{S}.
\section*{Acknowledgment}
The authors thank Aroldo Kaplan for his helpful comments.
\section{Introduction}
In the 1980s, studying the effect of the volume capture of relativistic protons into the channeling regime, Taratin and Vorobiev [4, 5] demonstrated the possibility of volume reflection, i.e., the coherent small-angle scattering of particles at angle $\theta \lesssim 2\theta_L$ ($\theta_L$
is the Lindhard critical angle) to the side opposite to the bending of the
crystal. Recent experiments reported in [1–3] confirm
the presence of this effect for 1-, 70-, and 400-GeV proton beams in a Si crystal. The conclusions made in [4,
5] were based primarily on the numerical simulation. In
view of this circumstance, the aim of this work is to
derive analytical expressions for the deflection function
of relativistic particles. At first sight, the perturbation
theory in the potential can be applied at relativistic
energies and weak crystal potential [$U(r)\approx 10$--$100$~eV]. However, the relativistic generalization of the known classical formula for small-angle scattering in
the central field [6],
\[
\chi = -b\int^{\infty}_{b}\frac{d\phi}{d r} \frac{ d r}{\sqrt{r^2-b^2}},
\]
where $b$ is the impact parameter and $\phi(r)=\frac{2U(r) E}{p_{\infty}^2 c^2}$, $U(r), E, p_{\infty}$ are the centrally
symmetric continuous potential of bent planes, total
energy, and particle momentum at infinity, respectively,
is inapplicable for the entire range of impact parameters. Indeed, the above formula is the first nonzero term of the expansion of the classical deflection function
\begin{eqnarray}
\chi(b)=\pi-2b\int^{\infty}_{r_{o}}\frac{d r }{r \sqrt{r^2[1-\phi(r)]-b^2}},
\label{deflection_function}
\end{eqnarray}
in the power series in the “effective” interaction potential $\phi(r)$. The crystal interaction potential $U(r)$ is the
sum of the potentials of individual bent planes concentrically located in the radial direction with period $d$. It has no singularities (i.e., is bounded in magnitude) and
is localized in a narrow ring region at distances $R-N d <r< R$
(where the crystal thickness $N d \ll R$ and $N$
is the number of planes). In this region, $U(r)>0$ and $U(r)< 0$ for the positively and negatively charged scattered particles, respectively. Beyond the ring region, it
vanishes rapidly. The perturbation theory in the interaction potential is obviously well applicable if the impact
parameter satisfies the inequality $b < (R-Nd)$. In this
case, the scattering area localized in the potential range
is far from the turning point $r_o$ determined from the
relation $b=r_o \sqrt{1-\phi(r_o)}$ and the root singularity of the
turning point does not contribute to integral (1). In the
general case, it can be shown [7, 8] that the condition of
the convergence of the power series of $\phi$
is a monotonic
increase of the function $u(r) = r\sqrt{1-\phi(r)}$ (i.e., $u'(r) > 0$) in the $r$
regions substantial for integral (1). Such a
monotonicity is achieved if the energy and momentum
of the relativistic particle satisfy the inequality
\begin{eqnarray}
\frac{p_{\infty}^2c^2}{2E}> U(r)+\frac{r}{2}\, U'(r).
\label{OrbittingCondition}
\end{eqnarray}
The derivation of this condition is omitted, because it was given in the Appendix of [8]; note, however, that the nonrelativistic case was considered in that work. Taking into account that the inequality $U(r)\ll r U'(r)$ is satisfied at large distances $r\approx R$, relation (2) transforms into the known Tsyganov criterion [9] [$R < \frac{p_{\infty}^2c^2}{E U'(r)}$] for the disappearance of channeling in a strongly bent crystal. Thus, the perturbation theory is obviously applicable only for
strongly bent crystals, where channeling is absent. It is
interesting that the nonrelativistic variant [8] of condition (2) corresponds to the criterion of the absence of
the so-called spiral scattering appearing in small-energy chemical reactions [10].
For this reason, the exact solution of the problem of
the relativistic scattering on the model potential of the
periodic system of rectangular rings
\begin{eqnarray}
U(r)= U_{o} \left\{
\begin{array}{ll}
1,& R-i d-a < r < R-i d, \\
0,& R-(i+1)d < r < R-i d-a,\\
0,& r < R-Nd, r > R,
\end{array} \right.
\label{Potential_M}
\end{eqnarray}
where $i=0,1,...,(N-1)$, and $a$ and $d$ are the thickness of a single plane and the interplanar distance, respectively, is considered below. With the use of the representation
\begin{eqnarray}
\frac{\pi}{2}=b\int^{\infty}_{r_{o}}\frac{d r }{r \sqrt{r^2[1-\phi(r_{o})]-b^2}},
\end{eqnarray}
particle deflection function (1) can be rewritten in the
form
\begin{eqnarray}
2b\int^{\infty}_{r_{o}}\frac{d r }{r}(\frac{1}{ \sqrt{r^2[1-\phi(r_o)]-b^2}}-\frac{1}{ \sqrt{r^2[1-\phi(r)]-b^2}}).
\label{deflection_functionM1}
\end{eqnarray}
Let us consider the range of small impact parameters,
$b < R-Nd$, when particles penetrate into the inner
region of the potential ring and twice intersect each
plane. This geometry is not used in the experiments,
because it requires very thin crystals. However, such an
approach to the problem makes it possible to easily calculate the exact deflection function for potential (3) for all impact parameters and, moreover, to demonstrate an interesting property of the scattering on potentials with an empty core, a property that seemingly does not attract particular attention. In this case, the turning point is located in the inner region of the potential ring, where $\phi(r_o)=0$, and, as is easily seen from the equation $b=r_o \sqrt{1-\phi(r_o)}$
, the shortest distance from the
center is $r_o=b$ (see Figs. 1C and 1D).
\begin{figure}
\centering
\includegraphics{ScatteringRing3.eps}
\caption{ Scattering of the charged particles on the ring potential: (A) reflection, $\chi>0$, and (B) refraction, $\chi<0$. The
effect of the empty core for the (C) positively and (D) negatively charged relativistic particles.}
\label{fig:ScatteringRing}
\end{figure}
The deflection function for such impact parameters
has the form
\begin{eqnarray}
\chi(b)=2b\int^{\infty}_{b}\frac{d r }{r}(\frac{1}{ \sqrt{r^2-b^2}}-\frac{1}{ \sqrt{r^2[1-\phi(r)]-b^2}}).
\label{deflection_functionM2}
\end{eqnarray}
Let us consider an arbitrary positive centrally symmetric potential $\phi(r)\geq 0$. Then, the inequality $\frac{1}{ \sqrt{r^2-b^2}} \leq \frac{1}{ \sqrt{r^2[1-\phi(r)]-b^2}}$
is always valid in the integrand in Eq. (6). As immediately follows from this inequality,
the scattering angles for impact parameters $0 < b \lesssim (R-Nd)$ are always negative, $\chi < 0$, for any form of the
positive potential $\phi(r)$; i.e., an arbitrary positive potential is attractive. On the contrary, an arbitrary negative potential, $\phi(r)\leq 0$, is repulsive. This seeming paradox
is called the empty core effect. It is schematically
shown in Figs. 1C and 1D and in Figs. 2A–2D for the scattering on the rectangular potential and can be
treated in various ways, one of which is as follows. The
integral action of the positive potential deflects a particle passing through it to the left from the initial direction (see Fig. 1A); i.e., the particle is reflected. The
potential in the core is absent and, according to the conservation laws, the absolute value of the momentum
and angular momentum should coincide with the
respective initial values. Therefore, the particle trajectory in the core should touch a circle with the radius
equal to the impact parameter. This is possible only if
the particle intersecting the inner boundary of the
potential is deviated to the right from the initial direction. The total deflection angle is equal to the doubled
angle to the turning point. Therefore, the total rotation
is clockwise; i.e., the particle is attracted to the center.
The opposite situation occurs for the negative potential.
Let us introduce the convenient notation
\begin{eqnarray}
\Phi = 1-\phi_o, \; \; \phi_o=\frac{2U_{o} E}{p_{\infty}^2 c^2},\; \;
\hat{r}=\frac{r}{R}, \; \;\hat{b}=\frac{b}{R}, \; \; \hat{a}=\frac{a}{R}, \nonumber\\
\hat{d}=\frac{d}{R}, \; \; \hat{b}_{i} =\frac{\hat{b}}{1-i \hat{d}},\; \; \hat{b}_{a} =\frac{\hat{b}}{1-\hat{a}}, \; \; \hat{b}_{ia}=\frac{\hat{b}}{1-\hat{a}-i \hat{d}}.
\label{definitions}
\end{eqnarray}
For potential (3), the scattering problem is solved
exactly, and deflection function (6) for $b < R - Nd$ is represented in the form of the sum,
\begin{eqnarray}
\chi(b)=2 \alpha(b)=2 \sum_{i=0}^{N-1} \alpha_i(b),
\label{Deflection_General_Crystal}
\end{eqnarray}
of the integrals over the regions filled with the potential:
\begin{eqnarray}
\alpha_i(b)=\hat{b}\int^{1-i\hat{d}}_{1-i\hat{d}-\hat{a}}\frac{d \hat{r} }{\hat{r}}(\frac{1}{ \sqrt{\hat{r}^2-\hat{b}^2}}-\frac{1}{ \sqrt{\Phi \hat{r}^2-\hat{b}^2}}).
\label{Deflection_i_Ring}
\end{eqnarray}
Integral (9) is easily calculated and, most importantly, has an analytic continuation valid for any impact parameter:
\begin{eqnarray}
\alpha_{i}(b) =\arcsin\Bigl(\frac{\hat{b}_{i}(\sqrt{1-\hat{b}^2_{i}}-\sqrt{\Phi-\hat{b}^2_{i}})}{\sqrt{\Phi}}\Bigr)-\nonumber\\ \arcsin\Bigl(\frac{\hat{b}_{ia}(\sqrt{1-\hat{b}_{ia}^2}-\sqrt{\Phi-\hat{b}_{ia}^2})}{\sqrt{\Phi}}\Bigr).
\label{DeflectionGen_i}
\end{eqnarray}
Note that only the real parts of the deflection function
are meaningful. Therefore, if the impact parameter $\hat{b}$ is
such that any root in Eq. (10) is imaginary, it should be
rejected. In what follows, an approximation that is not strictly necessary, but that significantly simplifies the form and use of all of the expressions, is accepted; all formulas below can also be derived without it. Expanding the arcsines in Eq. (10) for small arguments and setting $\Phi \approx 1$ in the denominators, Eq. (10) can be represented in the form
\begin{eqnarray}
\alpha_{i}(b) =\hat{b}_{i}(\sqrt{1-\hat{b}^2_{i}}-\sqrt{\Phi-\hat{b}^2_{i}})-\nonumber\\ \hat{b}_{ia}(\sqrt{1-\hat{b}_{ia}^2}-\sqrt{\Phi-\hat{b}_{ia}^2}).
\label{DeflectionGen_i_app}
\end{eqnarray}
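Equations (8)--(11), together with the rule of rejecting imaginary roots, are straightforward to evaluate numerically. A possible implementation (our sketch; the parameter values are illustrative, not physical, and are chosen only to make the structure of the curves visible):
\begin{verbatim}
import numpy as np

def rsqrt(v):
    # a radical that becomes imaginary is rejected (set to zero),
    # implementing the rule stated after Eq. (10)
    return np.sqrt(v) if v >= 0.0 else 0.0

def alpha_i(b_hat, i, a_hat, d_hat, Phi):
    # term i of the sum (8), from the analytic continuation (10)
    bi = b_hat / (1.0 - i * d_hat)
    bia = b_hat / (1.0 - a_hat - i * d_hat)
    def term(u):
        s = u * (rsqrt(1.0 - u * u) - rsqrt(Phi - u * u)) / np.sqrt(Phi)
        return np.arcsin(np.clip(s, -1.0, 1.0))
    return term(bi) - term(bia)

def deflection(b_hat, N, a_hat, d_hat, phi_o):
    # chi(b) = 2 * sum_i alpha_i(b); positive potential: Phi < 1
    Phi = 1.0 - phi_o
    return 2.0 * sum(alpha_i(b_hat, i, a_hat, d_hat, Phi) for i in range(N))

bs = np.linspace(0.0, 1.05, 2001)
chi = [deflection(b, N=5, a_hat=0.004, d_hat=0.02, phi_o=0.01) for b in bs]
\end{verbatim}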
Let us consider the deflection function given by
Eq. (10) for one ring ($N = 1$ and $i = 0$) in more detail:
\begin{eqnarray}
\alpha(b) =\arcsin\Bigl(\frac{\hat{b}(\sqrt{1-\hat{b}^2}-\sqrt{\Phi-\hat{b}^2})}{\sqrt{\Phi}}\Bigr)-\nonumber\\ \arcsin\Bigl(\frac{\hat{b}_{a}(\sqrt{1-\hat{b}_{a}^2}-\sqrt{\Phi-\hat{b}_{a}^2})}{\sqrt{\Phi}}\Bigr).
\label{DeflectionFunction_Ring_One}
\end{eqnarray}
If the inner ring radius is equal to zero, then $\hat{a}=1$ and the last two radicals in Eq. (12) become imaginary. According to the above rule, they should be rejected. The resulting exact function describes the scattering of the
particles on a rectangular cylindrical disc ($U_o >0 $) or
well ($U_o <0 $) [6] (see Fig. 2).
\begin{figure}
\centering
\includegraphics{DeflectionFunctionF.eps}
\caption{Deflection function $\alpha(\hat{b})$ of the relativistic particle: for positively charged particles at (A) $\hat{a} > \phi_o/2$ and (B) $\hat{a} < \phi_o/2$, and for negatively charged particles at (C) $\hat{a} > |\phi_o|/2$ and (D) $\hat{a} < |\phi_o|/2$. The dotted lines correspond to the scattering on (A and B) a disc ($U_o > 0$) and (C and D) a well ($U_o < 0$). The region of the empty core effect for positively charged particles, $0 < \hat{b} <\sqrt{\Phi}(1-\hat{a})$, is smaller than the inner radius $(1-\hat{a})$.}
\label{fig:DeflectionFunctionF}
\end{figure}
Furthermore, let us analyze the range of impact parameters in the ring region $(1-N\hat{d})\leq \hat{b}\leq 1$, where $\hat{b}_{i} \cong 1$ and $\hat{b}_{ia} \cong 1$ owing to the smallness $N\hat{d}\ll 1$, so these prefactors in Eq. (11) can be omitted. In this approximation,
Eq. (12) has the form
\begin{eqnarray}
\alpha(b) =(\sqrt{1-\hat{b}^2}-\sqrt{\Phi-\hat{b}^2})-\nonumber\\ (\sqrt{1-\hat{b}_{a}^2}-\sqrt{\Phi-\hat{b}_{a}^2}).
\label{DeflectionFunction_Ring_app}
\end{eqnarray}
For various $\phi_o$ values ($\phi_o$ is the square of the Lindhard critical angle, taken with the sign of the charge of the scattered particle), there are two sequences of critical points that the impact parameter $\hat{b}$ can pass through when increasing from 0 to 1. For the positive potential $U_o > 0$ ($\Phi < 1$), the critical points form the sequence $\sqrt{\Phi} (1-\hat{a})< \sqrt{\Phi} < 1$. Thus, the deflection function for positively charged particles on one ring has the form
\begin{equation}
\alpha(\hat{b})_{+}=
\label{DeflectionFunction_Ring_v1}
\end{equation}
\begin{eqnarray}
\left \{
\begin{array}{ll}
(\sqrt{1-\hat{b}^2}-\sqrt{\Phi-\hat{b}^2})-(\sqrt{1-\hat{b}_{a}^2}-\sqrt{\Phi-\hat{b}_{a}^2}),& \\ for \; \; 0 < \hat{b} <\sqrt{\Phi}(1-\hat{a});\nonumber\\
\sqrt{1-\hat{b}^2}-\sqrt{\Phi-\hat{b}^2}, \; \;for \; \;\sqrt{\Phi}(1-\hat{a}) < \hat{b}< \sqrt{\Phi};\\
\sqrt{1-\hat{b}^2},\; \;for\; \; \sqrt{\Phi} < \hat{b}< 1;\\
0, \; \;for \; \;1 < \hat{b}.
\end{array} \right.
\end{eqnarray}
For the negative potential $U_o < 0$ ($\Phi > 1$), the critical points form another sequence, $(1-\hat{a})<(1-\hat{a})\sqrt{\Phi}<1$. It provides the deflection function for the scattering of negatively charged particles on one ring:
\begin{equation}
\alpha(\hat{b})_{-}=
\label{DeflectionFunction_Ring_v2}
\end{equation}
\begin{eqnarray}
\left \{
\begin{array}{ll}
(\sqrt{1-\hat{b}^2}-\sqrt{\Phi-\hat{b}^2})-(\sqrt{1-\hat{b}_{a}^2}-\sqrt{\Phi-\hat{b}_{a}^2}),& \\ for \; \; 0 < \hat{b} <(1-\hat{a});\\
\sqrt{1-\hat{b}^2}-\sqrt{\Phi-\hat{b}^2}+\sqrt{\Phi-\hat{b}_{a}^2},\nonumber\\for \; \;(1-\hat{a}) < \hat{b} <(1-\hat{a})\sqrt{\Phi};\nonumber\\
\sqrt{1-\hat{b}^2}-\sqrt{\Phi-\hat{b}^2},\; \;for\; \; (1-\hat{a})\sqrt{\Phi} < \hat{b}< 1;\\
0, \; \;for \; \;1 < \hat{b}.
\end{array} \right.
\end{eqnarray}
Figures 2A and 2C show deflection functions (14) and
(15), respectively, for $|\phi_o|/2 < \hat{a}$ . If the potential is sufficiently large, $|\phi_o|/2 > \hat{a}$ , the arc of the reflection of the
positively charged particles from the outer wall of a
bent plane extends to the left and can be larger than the
width of the ring (and the distance between rings if the
system of rings is considered). This effect causes the
reflection of relativistic particles in the crystal. For negatively charged particles, when $|\phi_o|/2 > \hat{a}$, the corresponding reflection from the inner wall of the potential
is shifted to the right and, correspondingly, the impact
parameter region for the refraction of negatively
charged particles is narrowed. However, at least a narrow refraction region always exists. These two variants
are shown in Figs. 2B and 2D. The substitution of $\hat{b}=\sqrt{\Phi}$, $\hat{b}=\sqrt{\Phi}(1-\hat{a})$ and $\hat{b}=(1-\hat{a})$, $\hat{b}=1$ into
Eqs. (14) and (15), respectively, provides the maximum
(minimum) deflection angles for the positively and negatively charged particles, respectively:
\begin{eqnarray}
\alpha_{max +} =-\alpha_{min -}=\sqrt{|\phi_o|},& &\nonumber\\
\alpha_{min +} =-\alpha_{max -} =\frac{|\phi_o|}{2\sqrt{2\hat{a}}} -\sqrt{|\phi_o|}.
\label{MaxMin}
\end{eqnarray}
The deflection functions for the system of bent planes
forming the crystal are similarly obtained from Eqs. (8)
and (11). In this case, the summation in Eq. (8) should
be performed from $0$ to $k-1$, where $k$ is the ordinal
number of the radial period containing the turning
point. The sequences of the critical points mentioned
before Eqs. (14) and (15) and reflection functions given
by Eqs. (14) and (15) refer to the $k$-th period. The formulas for the reflection functions are omitted in this short
paper, but the corresponding plots are presented in
Fig. 3.
Two length parameters, $\hat{a}$ and $\hat{d}$ ($\hat{a}<\hat{d}$), exist in the periodic system of bent planes. Hence, three different variants of the curves can exist: (i) $\hat{d} > \hat{a} > \phi_o/2$, (ii) $\hat{d}> \phi_o/2 > \hat{a}$, and (iii) $\phi_o/2 >\hat{d}> \hat{a}$. Figure 3 shows the final result for the reflection function in the first and third variants.
\begin{figure}
\centering
\includegraphics{DeflectionFunctionS.eps}
\caption{Deflection function for the ring crystal and positively
charged particles at (A) $\hat{a} > \phi_o/2$ and (B) $\hat{d} < \phi_o/2$ and for
negatively charged particles at (C) $\hat{a} > |\phi_o|/2$ and (D) $\hat{d} < |\phi_o|/2$.}
\label{fig:DeflectionFunctionS}
\end{figure}
As mentioned above, under the condition
\begin{eqnarray}
\phi_o >\frac{2d}{R},
\label{ConditionOfReflection}
\end{eqnarray}
the refraction regions of positively charged particles
disappear in the pattern of the deflection function (see
Fig. 3B). Since this effect is of an applied interest for
controlling the relativistic particle beams, let us calculate the average reflection angle under these conditions.
For a rough estimate, the extreme ring where the deflection function has the simplest form of Eq. (14) is used.
The average deflection angle in the impact parameter
range $\sqrt{\Phi} < \hat{b}< 1$ is determined by the integral
\begin{eqnarray}
\bar{\alpha}_{+} = \frac{1}{1-\sqrt{\Phi}} \int^{1}_{\sqrt{\Phi}}
\sqrt{1-\hat{b}^2} d\hat{b} \cong \frac{2\sqrt{\phi_o}}{3}.
\label{AverageAnglePReflectLast}
\end{eqnarray}
Thus, the average reflection angle is $\chi_{+}=2\bar{\alpha}_{+}=4\sqrt{\phi_o} / 3$, which is equal to $1.33\cdot \theta_L$. A more accurate estimate of the reflection angle can be obtained by averaging over any inner period of the reflection function. Let
us use the second impact parameter period from the
edge, $(1-\hat{d})\sqrt{\Phi} < \hat{b}< \sqrt{\Phi}$
(see Fig. 3B). In this case,
it is unnecessary to calculate the total sum in Eq. (8); it
is sufficient to calculate the reflection function at two
extreme rings. This problem is solved by calculating
two integrals
\begin{eqnarray}
I_{1} = \int^{(1-\hat{a})\sqrt{\Phi}}_{(1-\hat{d})\sqrt{\Phi}}
\Bigl((\sqrt{1-\hat{b}^2}-\sqrt{\Phi-\hat{b}^2})-\nonumber\\
(\sqrt{1-\hat{b}^2_{a}}-\sqrt{\Phi-\hat{b}^2_{a}})+\nonumber\\ \sqrt{1-\hat{b}_{1}^2}\Bigl) d\hat{b},
\label{AverageAnglePI1}
\end{eqnarray}
and
\begin{eqnarray}
I_{2} = \int^{\sqrt{\Phi}}_{(1-\hat{a})\sqrt{\Phi}}
\Bigl(\sqrt{1-\hat{b}^2}-\sqrt{\Phi-\hat{b}^2}\Bigl) d\hat{b},
\label{AverageAnglePI2}
\end{eqnarray}
with subsequent expansion in small parameters $\hat{a},\hat{d}$ and $\phi_o$. As a result, the average deflection angle
$\bar{\alpha}_{+}=(I_1+I_2)/(\hat{d}\sqrt{\Phi})$ is obtained in the form
\begin{eqnarray}
\bar{\alpha}_{+} = \frac{1}{3\hat{d}}\Bigl(\phi_o^{3/2}+ (2\hat{d}+\phi_o)^{3/2}+(2\hat{d}-2\hat{a})^{3/2}-\nonumber\\-2\sqrt{2}\hat{d}^{3/2}-(2\hat{d}-2\hat{a}+\phi_o)^{3/2}-(2\hat{a}-2\hat{d}+\phi_o)^{3/2}\Bigl).
\label{AverageAnglePReflectF}
\end{eqnarray}
The experiment with 1-GeV protons in a $<\!111\!>$ Si crystal with a bending radius of $R=0.33$ m provides an average reflection angle of $236 \pm 6.0$ $\mu$rad [2]. In view of the data $\phi_o=\theta_{L}^{2}=0.289\cdot10^{-7}$, $a=0.78$~\AA, $d=3.136$~\AA, Eq. (21) provides $\chi_{+}=2\cdot\bar{\alpha}_{+}=318.8$ $\mu$rad. Note that the rough formula (18) provides the value $\chi_{+}=2\cdot\bar{\alpha}_{+}=226.6$ $\mu$rad, which is closer to the measured value. The
experiment with 70-GeV protons in a $<\!111\!>$ Si crystal
with a bending radius of $R = 1.7$ m provides an average
reflection angle of $39.5\pm2.0$ $\mu$rad.
In view of the data
$\phi_o=\theta_{L}^{2}=0.58\cdot10^{-9}$, $a=0.78$~\AA, $d=3.136$~\AA,
Eq. (21) provides $\chi_{+}=37.3$ $\mu$rad, which is close to the
experimental value. Formula (18) provides a smaller
angle of 32.0 $\mu$rad.
For the latest CERN experiment on the reflection of 400-GeV protons [3] in a Si crystal oriented in the $<\!110\!>$ direction ($R = 18.5$ m, $\phi_o=\theta_{L}^{2}=0.1132\cdot10^{-9}$, $a=0.48$~\AA, $d=1.92$~\AA) and the $<\!111\!>$ direction ($R = 11.5$ m, $\phi_o=\theta_{L}^{2}=0.1008\cdot10^{-9}$, $a=0.78$~\AA, $d=3.136$~\AA), Eq. (21) provides $\chi_{+}=2\cdot\bar{\alpha}_{+}=19.0$ and $16.0$ $\mu$rad, respectively. Rough estimate (18) provides the values $14.1$ and $13.3$ $\mu$rad, which are very close to the experimental values $13.9\pm0.2$ and $13.0$ $\mu$rad for the $<\!110\!>$ and $<\!111\!>$ orientations, respectively.
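These comparisons are easy to reproduce; a sketch (ours) evaluating the rough estimate (18) and formula (21) for the quoted experimental parameters:
\begin{verbatim}
import numpy as np

def chi_rough(phi_o):
    # Eq. (18): chi_+ = 2 * alpha_bar_+ = (4/3) sqrt(phi_o)
    return 4.0 * np.sqrt(phi_o) / 3.0

def chi_refined(phi_o, a, d, R):
    # Eq. (21), with a_hat = a/R and d_hat = d/R
    ah, dh = a / R, d / R
    s = (phi_o**1.5 + (2*dh + phi_o)**1.5 + (2*dh - 2*ah)**1.5
         - 2.0 * np.sqrt(2.0) * dh**1.5 - (2*dh - 2*ah + phi_o)**1.5
         - (2*ah - 2*dh + phi_o)**1.5)
    return 2.0 * s / (3.0 * dh)

A = 1e-10   # 1 angstrom in meters
cases = {   # name: (phi_o, a [m], d [m], R [m])
    "1 GeV,   Si <111>": (0.289e-7,  0.78 * A, 3.136 * A, 0.33),
    "70 GeV,  Si <111>": (0.58e-9,   0.78 * A, 3.136 * A, 1.7),
    "400 GeV, Si <110>": (0.1132e-9, 0.48 * A, 1.92 * A,  18.5),
    "400 GeV, Si <111>": (0.1008e-9, 0.78 * A, 3.136 * A, 11.5),
}
for name, (phi_o, a, d, R) in cases.items():
    print(name,
          "Eq.(18): %5.1f murad," % (1e6 * chi_rough(phi_o)),
          "Eq.(21): %5.1f murad" % (1e6 * chi_refined(phi_o, a, d, R)))
\end{verbatim}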
Comparison of the theoretical estimates and experimental data shows that the simple estimate (18) yields a smaller reflection angle over the entire range of scanned energies than the supposedly more accurate formula (21). The experimental data reported in [1, 11] are reproduced sufficiently well by formula (21), whereas the results reported in [2, 3] are closer to estimate (18). This circumstance requires a more detailed analysis of the experimental conditions and the accuracy of the measurements.
I am grateful to Yu.M. Ivanov for the opportunity to read a preprint of [11].
\section{Introduction}
Understanding the volatility of financial assets is a key problem in econometrics and finance. Over the last two decades, the literature that deals with the modelling of financial asset volatility has expanded significantly. Volatility, a latent variable as such, has been shown to be highly persistent, and all kinds of models have been proposed to extract that persistence and incorporate it into the investment decision-making process, and is of especial interest in the derivatives world, as one of the key inputs for pricing. From the early Autoregressive Conditional Heteroskedastic (ARCH) model of \cite{En82} and the generalized version (GARCH), offered by \cite{Boll86}, to the stochastic volatility approaches \citep{Sh05}, the literature has expanded exponentially, bringing all kinds of adaptations to particular cases (fractional integration, volatility-in-mean equations, etc.), or expansions into the multivariate space and applications in asset management, derivatives pricing and risk management. Most of them share the common feature of taking closing prices over the frequency considered to do model fitting and forecasting.
More recently, some structural variations have been proposed, some of which allow for more flexible uses of the available data. \citet{EngleRussell1998} use non-equally spaced observations to extract volatility features, while \citet{AndBolDiebLab01} and \citet{BarnShep02} use data at higher frequencies to provide better estimates of the volatility and its persistence at the lower ones. These original papers, and the expansions and advances that followed them, use larger amounts of the available information to achieve the common goal of better fitting and forecasting, but still tend to focus their efforts on closing prices at the highest frequency considered, ignoring the paths between those points, even when this information is available.
When it comes to data availability, there have been major changes in the last two decades: from having access (if that) only to daily or weekly closing prices, to having tick-by-tick/bid-ask data available for many series. The increased availability of information must also induce new classes of models and algorithms that combine the best possible use of that information with an efficient way to process it, so that they remain practical and useful for practitioners. This manuscript pursues these purposes by adding three commonly available series to the statistical process. We continue to use closing prices over the chosen frequency, but add to them the open, high and low data over the same period. These data are available for most series through the common data providers (Datastream, Reuters, Bloomberg), and they provide several advantages. First, we use different sources of information (order statistics together with start/end points) about the movement of the underlying, which helps to better understand and make inference on its volatility levels and dynamics. The size of the intra-period swings is very informative, and its information content can be very different from that available from close-to-close data only. Second, adding that information does not have to be a computational burden: we can still provide a quick model that meets the requirements of speed of fitting, while using a larger set of information. Third, it opens the door to alternative models that could take advantage of the information embedded in those added data features; for example, modifications in the elicitation of the leverage effect in the likelihood, which could potentially be modeled as a relationship between not only return and volatility, but downside range and volatility. Although we do not explore these improvements in this paper, it is worth mentioning that the flexibility provided by the use of these data is there.
The relevance of Close, High, Low and Open (CHLO) data, that is, the relevance of the path followed by the series rather than just the start and end points, is well known. Financial newspapers include it for the frequency reported (daily, weekly, monthly or yearly), data providers add it to their data series, and practitioners in technical analysis have long used it to model volatility through estimators like ranges or average true ranges \citep{Wilder1978}. Chartists have also been using CHLO data as a key source of information from a graphical/visual standpoint to identify price patterns, trends or reversals. By assuming that prices follow a log-normal diffusion, \citet{Par80} showed that high/low prices provide a highly efficient estimator of the variance compared to estimators based solely on open/close prices. By incorporating information from CHLO prices, \citet{GaKa80} extended the estimator of Parkinson and gained a significant amount of efficiency compared to only including open/close prices. \citet{BallTour84} derived a maximum likelihood analogue of the estimator of Parkinson. In the context of time series modeling of asset return volatility, \citet{GalHsuTauch99} and \citet{AliBraDieb02} take a very relevant step forward by using the range of observed prices (rather than the maximum and minimum directly) to estimate stochastic volatility models. \citet{BraJon05} extend the work in \citet{AliBraDieb02} by including the range and the closing price in the estimation of a stochastic volatility model, but again ignore the actual levels of the maximum and minimum prices.
Full CHLO prices have been used by a number of authors, including \cite{CRogSatch91}, \cite{CRogSatchYoon94} and \cite{CRog98}. In particular, \citet{MaAt03} derives a maximum likelihood estimator assuming constant volatility, obtaining better performance than previous methods on simulated data. Their method, however, is not integrated into a (G)ARCH or stochastic volatility framework, something done by \citet{Lild02} from a maximum likelihood perspective. We introduce a Bayesian stochastic volatility model that uses full CHLO prices and develop a particle filter approach for inference.
Closing prices, especially at lower frequencies (daily or lower), have become less and less reliable for ascertaining realized volatilities, given the patterns recently seen in volumes. For example, the S$\&$P shows unusually large volumes of trades in the last 15 minutes of the sessions. These have been linked to several factors, like high-frequency funds unwinding positions accumulated during the day, or exchange-traded funds and hedgers operating in the last minutes to adjust their positions. All of those flows push closing prices away from the extremes observed during the day, whereas the actual behavior of the underlying for volatility (risk) purposes is more extreme. Stop-loss/profit-taking levels, as well as position sizings and their dynamics, are usually defined based on both technical levels and volatility measurements. If those volatilities are based on extremes and paths as well as close-to-close levels, they can lead to very different values than those based only on close-to-close levels, and this applies to any frequency of investment. Intra-day ranges offer a cleaner, more liquidity-adjusted picture of volatility: volatility can be very large while closing prices end up not far from where they started, even though big swings have happened throughout the day. This is especially the case during periods of low liquidity, when the average holding period of any position diminishes significantly and people react more violently to moves. Stop-loss orders can exacerbate those intra-day swings, while the squaring of daily positions by intra-day funds can produce the opposite effect. This is reflective of higher realized volatility and lower liquidity, but it will not be captured by models that only use close-to-close returns.
Our work combines the use of CHLO data with a Bayesian approach, all in a stochastic volatility framework. We use a particle filter algorithm to perform the filtered estimation, which is computationally quick and can be applied to any frequency of data; this is of special relevance to practitioners handling large amounts of data at the higher frequencies or processing large numbers of series (or when a quick decision is needed, even if operating with lower frequency data). Section \ref{se:svmodels} provides a quick introduction to the theory behind stochastic volatility models and to the notation used throughout the paper. Sections \ref{se:extremesv} and \ref{se:inference} present the analytical framework for the joint density of the CHLO prices, the elicitation of prior densities and a description of the particle filtering algorithm used. Sections \ref{se:missing} and \ref{se:illusa} show how to deal with missing data and apply our model to weekly CHLO data of the S$\&$P. We also provide a comparison with stochastic volatility models that use only closing price data, and show how different the results can be once the information contained in the observed extremes is added. Section \ref{se:ccl} concludes with a summary and a description of potential applications and extensions.
In summary, our net contribution is threefold. First, we provide a coherent model that links the traditional stochastic volatility model with CHLO data without the need for assumptions beyond those already used in that traditional model. Second, we provide a quick and simple algorithm that allows for fast estimation of this model, which should be very appealing to practitioners. Third, since only the observation equation is changed, these modifications can be embedded in any model that uses other types of evolution equations.
\section{Stochastic volatility models}\label{se:svmodels}
Because of its simplicity and tractability, geometric Brownian motion is by far the most popular model to explain the evolution of the price of financial assets, and has a history dating back to Louis Bachelier \citep{CoKaBrCrLeLe00}. A stochastic process $\{ S_t : t \in \mathbb{R}^{+} \}$ is said to follow a Geometric Brownian motion (GBM) if it is the solution of the stochastic differential equation
\begin{eqnarray}\label{eq:constvar}
\dd S_t & = & \mu S_t \dd t + S_t \sigma \dd B_t
\end{eqnarray}
where $B_t$ is a standard Wiener process and $\mu$ and $\sigma$ are, respectively, the instantaneous drift and instantaneous volatility of process. GBM implies that the increments of $y_t = \log S_t$ over intervals of the same length are independent, stationary and identically distributed, i.e., $y_{t+\Delta} - y_t = \log S_{t + \Delta} - \log S_t \sim \mathsf{N}(\Delta\mu, \Delta\sigma^2)$, or equivalently,
$$
S_{t+\Delta} = S_t \exp\left\{ \Delta\mu + \sigma(B(t + \Delta) - B(t)) \right\}
$$
By construction, GBM models assume that the volatility of returns is constant. However, empirical evidence going back at least to \cite{Ma63}, \cite{Fa65} and \cite{Of73} demonstrates that the price volatility of financial assets tends to change over time and therefore the simple model in \eqref{eq:constvar} is generally too restrictive. The GBM model can be generalized by assuming that both the price and volatility processes follow general diffusions,
\begin{align*}
\dd S_t &= \mu(S_t,\sigma_t) \dd t + \nu(S_t, \sigma_t) \dd W_t \\
\dd \sigma_t &= \alpha(S_t, \sigma_t) \dd t + \beta(S_t, \sigma_t) \dd D_t
\end{align*}
One particularly simple (and popular) version of this approach assumes that, just as before, the price follows a GBM with time varying drift, and that the log-volatility follows an Ornstein-Uhlenbeck (OU) processes \citep{UhlOrn30},
\begin{align}
\dd Y_t & = \mu \dd t + \sigma_t \dd B_t \label{eq:svct_e1}\\
\dd \log(\sigma_t) & = \kappa( \psi - \log(\sigma_t)) \dd t + \tau \dd D_t \label{eq:svct_e2}
\end{align}
Practical implementation of these models typically relies on a discretization of the continuous-time model in \eqref{eq:svct_e1} and \eqref{eq:svct_e2}. For the remainder of the paper, we assume that the drift and volatility are scaled so that $\Delta = 1$ corresponds to one trading period (e.g., a day or a week), and we focus attention on the state-space model
\begin{align}
y_t & = \mu + y_{t-1} + \epsilon^{1}_t &\epsilon^{1}_t &\sim \mathsf{N}(0, \sigma_t^2) \label{eq:svdt_eq1} \\
\log(\sigma_{t}) &= \alpha + \phi [\log(\sigma_{t-1}) - \alpha] + \epsilon^{2}_t & \epsilon^{2}_t &\sim \mathsf{N} ( 0 , \tau^2 ), \label{eq:svdt_eq2}
\end{align}
where $\alpha = \kappa\psi$ and $\phi = 1-\kappa$. It is common to assume that $0 \le \phi <1$ and $\log(\sigma_0) \sim \mathsf{N}( \alpha , \tau^2/(1-\phi^2) )$, so that volatilities are positively correlated and the volatility process is stationary (which ensures that the process for the prices is a martingale), with $\alpha$ determining the median of the long-term volatility, $\nu = \exp\{ \alpha \}$. Therefore, we expect the volatility of returns to take values greater than $\nu$ half of the time, and vice versa.
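For intuition about the dynamics implied by \eqref{eq:svdt_eq1} and \eqref{eq:svdt_eq2}, the discretized model is straightforward to simulate. A minimal sketch (ours; the parameter values are arbitrary):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def simulate_sv(T, mu, alpha, phi, tau, y0=0.0):
    # simulate log-prices y_t and volatilities sigma_t from the
    # discrete-time model; h_t = log(sigma_t)
    h = np.empty(T)
    y = np.empty(T)
    h_prev = rng.normal(alpha, tau / np.sqrt(1.0 - phi**2))  # stationary start
    y_prev = y0
    for t in range(T):
        h[t] = alpha + phi * (h_prev - alpha) + tau * rng.normal()
        y[t] = mu + y_prev + np.exp(h[t]) * rng.normal()
        h_prev, y_prev = h[t], y[t]
    return y, np.exp(h)

y, sigma = simulate_sv(T=500, mu=5e-4, alpha=np.log(0.02), phi=0.95, tau=0.2)
\end{verbatim}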
Unlike ARCH and GARCH models \citep{En82,Boll86}, where a single stochastic process controls both the evolution of the volatility and the observed returns, stochastic volatility models use two coupled processes to explain the variability of the returns. By incorporating dependence between $\epsilon^{1}_t$ and $\epsilon^{2}_t$, the model can accommodate leverage effects, while additional flexibility can be obtained by considering more general processes for the volatility (for example, higher order autoregressive process, jump process, and linear or nonlinear regression terms).
Although theoretical work on stochastic volatility models goes back at least to the early 80s, practical application was limited by computational issues. Bayesian fitting of stochastic volatility models has been discussed by different authors, including \cite{JaPoRo94} and \cite{KiShCh98}.
Popular approaches include Gibbs sampling schemes that directly sample from the full conditional distribution of each parameter in the model, algorithms based on offset mixture representations that allow for joint sampling of the sequence of volatility parameters, and particle filter algorithms.
In the sequel, we concentrate on sequential Monte Carlo algorithms for the implementation of stochastic volatility models.
\section{Incorporating extreme values in stochastic volatility models}\label{se:extremesv}
\subsection{Joint density for the closing price and the observed extremes of geometric Brownian motion}
The goal of this section is to extend the stochastic volatility model described in the previous Section to incorporate information on the full CHLO prices. To do so, note that the Euler approximation in \eqref{eq:svdt_eq1} implies that, conditionally on the volatility $\sigma_t$, the distribution of the increments of the asset price follows a Geometric Brownian motion with constant volatility $\sigma_t$ during the $t$-th trading period. Therefore, the discretization allows us to interpret the process generating the asset prices as a a sequence of conditionally independent processes defined over disjoint and adjacent time periods; within each of these periods the price process behaves like a GBM with constant volatility, but the volatility is allowed to change from period to period. This interpretation of the discretized process is extremely useful, as it allows us to derive the joint distribution for the closing, maximum and minimum price of the asset within a trading period using standard stochastic calculus tools.
\begin{theorem}\label{thm:1}
Let $Y_t$ be a Brownian motion with drift and consider the evolution of the process over a time interval of unit length where $Y_{t-1}$ and $Y_{t}$ are the values of the process at the beginning and end of the period, and let $M_t = \sup_{t-1 \le s \le t} \{Y_s\}$ and $m_t = \inf_{t-1 \le s \le t} \{Y_s\}$ be, respectively, the supremum and the infimum values of the process over the period. If we denote by $\mu$ and $\sigma_t$ the drift and volatility of the process between $t-1$ and $t$ (which are assumed to be fixed within this period), the joint distribution of $M_t$, $m_t$ and $Y_t$ conditional on $Y_{t-1} = y_{t-1}$ is given by
\begin{eqnarray*}
\Pr(m_t \ge a_t, M_t \le b_t, Y_t \le c_t | Y_{t-1} = y_{t-1}) & = & \int_{-\infty}^{c_t} q(y_{t},a_t,b_t | y_{t-1}) \dd y_t
\end{eqnarray*}
for $a_t \le \min\{ c_t, y_{t-1} \}$, $b_t \ge \max \{ c_t, y_{t-1} \}$ and $a_t \le b_t$, where
\begin{equation}\label{eq:jointcdf}
\begin{aligned}
q(y_{t},a_t,b_t | y_{t-1}) &= \frac{1}{\sqrt{2\pi}\sigma_t} \exp\left\{ -\frac{[ \mu^2-2\mu (y_{t} - y_{t-1}) ]}{2\sigma_t^2} \right\} \sum_{n=-\infty}^\infty \left( \exp\left\{-d_1(n)\right\} - \exp\left\{-d_2(n) \right\} \right)
\end{aligned}
\end{equation}
and
\begin{align*}
d_1(n) & = \frac{[y_{t}-y_{t-1}-2n(b_t-a_t)]^2}{2\sigma_t^2} & d_2(n) & = \frac{[y_{t}+y_{t-1}-2a_t-2n(b_t-a_t)]^2}{2\sigma_t^2}
\end{align*}
\end{theorem}
The proof of this theorem, which is a simple extension of results in \cite{Fe71}, \cite{DyMc85} and \cite{Kl05}, can be seen in Appendix \ref{ap:th1}. An equivalent but more involved expression was obtained by \cite{Lild02}, who used it to construct GARCH models that incorporate information on CHLO prices. Note that if we take both $a_{t} \to -\infty$ and $b_{t} \to \infty$, the cumulative distribution in \eqref{eq:jointcdf} reduces to the integral of a Gaussian density with mean $y_{t-1} + \mu$ and variance $\sigma_t^2$, which agrees with \eqref{eq:constvar}.
\begin{corollary}
The joint density for $m_t, M_t, Y_t | Y_{t-1} = y_{t-1}$ is given by
\begin{equation}\label{eq:jointpdf}
\begin{aligned}
p(a_t, b_t, y_t | y_{t-1}) &= - \frac{\partial^2}{\partial a_{t} \partial b_{t}} q(y_{t},a_t,b_t | y_{t-1}) \\
&= \frac{1}{\sqrt{2\pi}\sigma_{t}^{3}} \exp\left\{ -\frac{\mu^2-2\mu (y_{t} - y_{t-1})}{2\sigma_t^2} \right\} \times \\
& \;\;\; \sum_{n=-\infty}^{\infty} \left[ 4n^2 \left(2d_1(n)-1 \right) \exp\left \{ - d_1(n) \right\} - 4n(n-1) \left( 2d_2(n)-1) \right) \exp\left \{ - d_2(n) \right\} \right]
\end{aligned}
\end{equation}
for $a_t \le \min\{ y_t, y_{t-1} \}$, $b_t \ge \max \{ y_t, y_{t-1} \}$ and $a_t \le b_t$, and zero otherwise.
\end{corollary}
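For concreteness, a direct transcription of \eqref{eq:jointpdf} into a log-density evaluator, with the infinite sum truncated at $|n| \le n_{\mathrm{terms}}$, might look as follows (our sketch; the support check and the underflow guard are ours):
\begin{verbatim}
import numpy as np

def log_joint_pdf(a, b, y, y0, mu, sigma, n_terms=20):
    # joint density of the infimum a, supremum b and closing log-price y
    # over one period, given the opening log-price y0
    if not (a <= min(y, y0) and b >= max(y, y0) and a < b):
        return -np.inf
    s2, r = sigma**2, b - a
    total = 0.0
    for n in range(-n_terms, n_terms + 1):
        d1 = (y - y0 - 2.0 * n * r)**2 / (2.0 * s2)
        d2 = (y + y0 - 2.0 * a - 2.0 * n * r)**2 / (2.0 * s2)
        total += (4.0 * n**2 * (2.0 * d1 - 1.0) * np.exp(-d1)
                  - 4.0 * n * (n - 1) * (2.0 * d2 - 1.0) * np.exp(-d2))
    if total <= 0.0:    # numerical underflow far in the tails
        return -np.inf
    return (-0.5 * np.log(2.0 * np.pi) - 3.0 * np.log(sigma)
            - (mu**2 - 2.0 * mu * (y - y0)) / (2.0 * s2) + np.log(total))

# example: one period with a 1.2% volatility and zero drift
print(log_joint_pdf(a=-0.015, b=0.02, y=0.01, y0=0.0, mu=0.0, sigma=0.012))
\end{verbatim}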
Equation \eqref{eq:jointpdf} provides the likelihood function for the closing, maximum and minimum prices given the volatility and drift, under the first order Euler approximation to the system of stochastic differential equations in \eqref{eq:svct_e1} and \eqref{eq:svct_e2}. Therefore, the basic underlying assumptions about the behavior of asset prices are the same as in standard volatility models; however, by employing \eqref{eq:jointpdf} instead of \eqref{eq:svdt_eq1} we are able to coherently incorporate information about the observed price extremes in the inference of the price process. After a simple transformation $r_t=b_t-a_t$ and $w_t=a_t$ and marginalization over $w_t$, we recover the likelihood function described in \cite{BraJon05},
\begin{equation}\label{eq:rangeclosepdf}
\begin{aligned}
p(r_t, y_t | y_{t-1}) &= \frac{1}{\sqrt{2\pi}\sigma_t} \exp\left\{ -\frac{(y_t - y_{t-1} - \mu)^2}{2\sigma_t^2} \right\} \\
& \;\;\;\; \sum_{n=-\infty}^{\infty} \left[ \frac{4n^2}{\sqrt{2\pi}\sigma_t^2} \left\{ \frac{(2nr_t - |y_{t} - y_{t-1}|)^2}{\sigma_t^2} - 1 \right\} \exp\left\{ -\frac{(2nr_t - |y_t - y_{t-1}|)^2}{2\sigma_t^2} \right\} \right. \\
& \;\;\;\;\;\;\; + \frac{2n(n-1)}{\sqrt{2\pi}\sigma_t^2} (2nr_t - |y_{t} - y_{t-1}|) \exp\left\{ -\frac{(2nr_t - |y_t - y_{t-1}|)^2}{2\sigma_t^2} \right\} \\
& \;\;\;\;\;\;\; \left. + \frac{2n(n-1)}{\sqrt{2\pi}\sigma_t^2} (2(n-1)r_t + |y_{t} - y_{t-1}|) \exp\left\{ -\frac{(2(n-1)r_t + |y_t - y_{t-1}|)^2}{2\sigma_t^2} \right\} \right]
\end{aligned}
\end{equation}
for $r_t > |y_{t} - y_{t-1}|$, while a further marginalization over the closing price $y_t$ yields the (exact) likelihood underlying the range-based model in \cite{AliBraDieb02}, which is independent of the opening price $y_{t-1}$,
\begin{equation}\label{eq:rangeonlypdf}
\begin{aligned}
p(r_t) &= 8 \sum_{n=1}^{\infty} (-1)^{n-1} \frac{n^2}{\sqrt{2 \pi}\sigma_t} \exp\left\{ -\frac{n^2 r_t^2}{2\sigma_t^2} \right\}
\end{aligned}
\end{equation}
Using either \eqref{eq:rangeclosepdf} or \eqref{eq:rangeonlypdf} as likelihoods entails a loss of information with respect to the full joint likelihood in \eqref{eq:jointpdf}. In the case of \eqref{eq:rangeonlypdf}, the effect is clear as the range is an ancillary statistic for the drift of the diffusion, and therefore provides no information about it. This leads \cite{AliBraDieb02} to assume that the drift is zero, which has little impact in the estimation of the volatility of the process, but might have important consequences for other applications of the model, such as option pricing. In the case of \eqref{eq:rangeclosepdf}, although there is information about the drift contained in the opening and closing prices, the model ignores the additional information about the drift contained in the actual levels of the extremes.
In order to emphasize the importance of the information provided by the minimum and maximum returns, we present in Figure \ref{fi:likelihood} plots of the likelihood function for the first observation of the S\&P500 dataset discussed in Section \ref{se:illus} (the week ending on April 21, 1997). When only the closing price is available, the likelihood provides information about the drift of the process, but not about the volatility (note that in this case the likelihood is unbounded in a neighborhood of $\sigma_t=0$, and therefore the maximum likelihood estimator does not exist). Therefore, information about the volatility in this type of model is obtained solely through the evolution of prices, and is thus strongly influenced by the underlying smoothing process. In other words, the volatility parameters are only weakly identifiable, with identifiability being provided by the autoregressive prior in \eqref{eq:svdt_eq2}. In practice, this means that formally comparing alternative models for the volatility is extremely difficult. However, when the maximum and minimum are included in the analysis, the likelihood for a single time period does provide information about the volatility in that period, which can greatly enhance our ability to infer and test volatility models.
\begin{figure}
\begin{center}
\includegraphics[height=3.2in,angle=0]{likelihood_st.pdf}
\includegraphics[height=3.2in,angle=0]{likelihood_mm.pdf} \\
\caption{Likelihood functions for a single observation. The left panel corresponds to the Gaussian likelihood obtained solely from the closing price (equation \eqref{eq:svdt_eq1}), while the right panel corresponds to the joint likelihood (equation \eqref{eq:jointpdf}).}\label{fi:likelihood}
\end{center}
\end{figure}
Although dealing with an infinite sum can seem troublesome, our experience suggests that a small number of terms (fewer than 20) suffices to provide an accurate approximation. In addition, we note that since the general term of the sum is strictly decreasing for all the likelihoods discussed above, it is easy to implement an adaptive scheme that stops adding terms once the change in the value is smaller than a given tolerance.
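As an illustration, the following Python sketch evaluates the range-only likelihood \eqref{eq:rangeonlypdf} with such an adaptive stopping rule (the function name, tolerance and term cap are our own choices, not the code used for the analyses in this paper):
\begin{verbatim}
import math

def range_only_density(r, sigma, tol=1e-12, max_terms=100):
    # Adaptive truncation of the series in the range-only
    # likelihood: stop once a term falls below the tolerance.
    total = 0.0
    for n in range(1, max_terms + 1):
        term = (8 * (-1) ** (n - 1) * n ** 2 * r
                / (math.sqrt(2 * math.pi) * sigma)
                * math.exp(-n ** 2 * r ** 2 / (2 * sigma ** 2)))
        total += term
        if abs(term) < tol:
            break
    return total
\end{verbatim}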
Note that equation \eqref{eq:jointcdf} assumes that the log closing price at period $t-1$, $y_{t-1}$, is the same as the log opening price at period $t$ (call this $x_t$). However, in some cases these two prices can differ; for example, wars or unexpected news can break during holidays, when markets are closed, provoking jumps that are not governed by the diffusion process. As with other stochastic volatility models, we can easily include this type of ``weekend effect'' by realizing that equations \eqref{eq:jointcdf} and \eqref{eq:jointpdf} use $y_{t-1}$ simply as the initial value of the GBM; therefore, they remain valid if we substitute $y_{t-1}$ by $x_t$.
\subsection{Variance evolution and prior specification}
The previous discussion focused on the characteristics of the likelihood function for the discretized process {\it conditional} on its volatility. The full specification of the model also requires that we define the evolution of the volatility in time. For illustrative purposes, this paper focuses on the simple autoregressive process for the log volatility described in Section \ref{se:svmodels},
\begin{align}
\log(\sigma_{t}) | \log(\sigma_{t-1}) &\sim \mathsf{N} \left( \alpha + \phi [\log(\sigma_{t-1}) - \alpha] , \tau^2 \right) & \log(\sigma_0) &\sim \mathsf{N} \left( \alpha, \frac{\tau^2}{1-\phi^2} \right)
\end{align}
However, we would like to stress that more complex stochastic processes can easily be incorporated into this model without significantly increasing the computational complexity. For example, stationary $AR(p)$ processes for the log volatility (which can potentially provide information on quasi-periodicities in the volatility process) can be introduced using the prior specification discussed by \cite{HuWe99}. Similarly, including volatility jumps is straightforward using Markov switching models \citep{CaLo07}, as is estimating the parameters of stochastic volatility models with several factors varying at different time scales \citep{FouHanMol09}.
The model is completed by introducing priors for the unknown structural parameters in the models. Following standard practice, we assume that
\begin{align}\label{eq:priors}
\mu &\sim \mathsf{N}( d_{\mu}, D_{\mu} ) & \alpha & \sim \mathsf{N}( d_{\alpha}, D_{\alpha}) & \phi & \sim \mathsf{Beta}( q_{\phi}, r_{\phi} ) & \tau^2 & \sim \mathsf{IG}( u_{\tau}, v_{\tau}).
\end{align}
This choice of priors ensures that all parameters have the right support; in particular, the beta prior for $\phi$ ensures that $0 \le \phi \le 1$ (remember our discussion in Section \ref{se:svmodels}). The eight hyperparameters $d_{\mu}$, $D_{\mu}$, $d_{\alpha}$, $D_{\alpha}$, $q_{\phi}$, $r_{\phi}$, $u_{\tau}$ and $v_{\tau}$ have to be chosen according to the available prior information about the problem at hand. For example, it is natural to choose $d_{\mu}$ to be close to the market risk-free rate, while choosing $d_{\alpha}$ close to the logarithm of the long-term average volatility for the asset. In this paper, we avoid noninformative priors because the sequential Monte Carlo algorithms we employ for model fitting (see Section \ref{se:inference} below) require proper priors in order to be implemented.
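For concreteness, drawing an initial particle set from \eqref{eq:priors} can be sketched as follows (a minimal NumPy sketch; we assume that $D_{\mu}$ and $D_{\alpha}$ denote prior variances and that $\mathsf{IG}(u,v)$ is the distribution of the reciprocal of a gamma variable with shape $u$ and rate $v$):
\begin{verbatim}
import numpy as np

def sample_prior(N, d_mu=0.0, D_mu=1e-4, d_alpha=-3.75,
                 D_alpha=0.025, q_phi=9.0, r_phi=1.0,
                 u_tau=6.0, v_tau=0.06, rng=None):
    # Draw N particles (mu, alpha, phi, tau^2) from the priors.
    rng = rng or np.random.default_rng()
    mu    = rng.normal(d_mu, np.sqrt(D_mu), N)
    alpha = rng.normal(d_alpha, np.sqrt(D_alpha), N)
    phi   = rng.beta(q_phi, r_phi, N)
    tau2  = 1.0 / rng.gamma(u_tau, 1.0 / v_tau, N)  # inverse gamma
    return mu, alpha, phi, tau2
\end{verbatim}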
\section{Inference using particle filters}\label{se:inference}
The use of simulation algorithms to explore the posterior distribution of complex Bayesian hierarchical models has become popular in the last 20 years. In particular, Markov chain Monte Carlo (MCMC) algorithms, which generate a sequence of dependent samples from the posterior distribution of interest, have become ubiquitous. For inference in non-linear state space models, sequential Monte Carlo algorithms, and in particular particle filters, have become a standard tool. Particle filters use a finite discrete approximation to represent the distribution of the state parameters at time $t$ given the observations up to $t$, which in our case reduces to $p(\sigma_{t} | \{ y_l, b_l, a_l \}_{l=1}^{t})$, and sequentially update it to obtain $p(\sigma_{t+1} | \{ y_l, b_l, a_l \}_{l=1}^{t+1})$. As with other Monte Carlo approaches to inference, the resulting samples can be used to obtain point and interval estimates, as well as to test hypotheses of interest. However, unlike MCMC algorithms, there is no need to check for convergence of the algorithm, study its mixing properties, or devise proposal distributions. \cite{DoFrGo01} provides an excellent introduction to sequential Monte Carlo methods.
Most sequential Monte Carlo algorithms are unable to handle structural parameters that do not evolve in time, and assume them fixed and known in advance. However, in our stochastic volatility model structural parameters such as $\mu$, $\alpha$, $\phi$ and $\tau^2$ are unknown and need to be estimated from the data. Therefore, in the sequel we concentrate on a version of the auxiliary particle filter \citep{PiSh99} developed by \cite{LiWe01} for exactly this purpose. Their algorithm introduces an artificial Gaussian perturbation in the structural parameters and applies an auxiliary particle filter to the modified problem. In order to correct for the information loss generated by the artificial perturbation, the authors introduce a shrunk kernel approximation constructed in such a way as to preserve the mean and covariance of the distribution of the structural parameters. This kernel density approximation usually works well in practice, producing accurate reconstructions of the posterior distribution of both structural and state parameters while avoiding the loss of information that plagues the self-organizing state space models in \cite{Ki98}.
Since the artificial evolution is assumed to follow a Gaussian distribution, it is typically necessary to transform the structural parameters so that the support of their distribution is the whole real line. Therefore, we describe our algorithm in terms of the transformed structural parameter $\eta = (\eta_1,\eta_2,\eta_3,\eta_4) = (\mu, \alpha, \logit(\phi), \log(\tau^2)) \in \mathbb{R}^4$. After choosing a discount factor $0.5 < \epsilon<1$ (controlling both the size of the perturbation and the level of shrinkage in the density estimator) and generating a sample of $N$ particles from the prior distributions in \eqref{eq:priors}, the algorithm proceeds by repeating the following steps for $t=1,\ldots, T$:
\begin{enumerate}
\item For each particle $j=1,\ldots, N$, identify prior point estimates $(z_{t+1}^{(j)},m_{t+1}^{(j)})$ for the joint vector of state and structural parameters $(\sigma_{t+1}^{(j)}, \eta_{t+1}^{(j)})$ such that
\begin{align*}
z_{t+1}^{(j)} &= \exp \{ \eta_{2,t}^{(j)} + \expit(\eta_{3,t}^{(j)})[\log(\sigma_{t}^{(j)})- \eta_{2,t}^{(j)}] \} \\% \exp \{ \mathsf{E}( \log(\sigma^{(j)}_{t+1}) | \sigma^{(j)}_{t}, \eta_{t}) \} =
m_{t+1}^{(j)} &= a \eta_{t}^{(j)} + (1-a) \bar{\eta}_t
\end{align*}
where $a = (3\epsilon-1)/(2\epsilon)$ and $\bar{\eta}_t = \frac{1}{N}\sum_{j=1}^{N} \eta_{t}^{(j)}$.
\item Sample auxiliary indicators $\xi_t^{(1)}, \ldots, \xi_t^{(N)}$, each taking values in the set $\{1,\ldots, N\}$, so that
$$
\Pr(\xi_t^{(j)} = k) \propto p(y_{t+1},b_{t+1},a_{t+1} |y_{t}, \mu = m_{1,t+1}^{(k)}, \sigma_{t+1} = z_{t+1}^{(k)})
$$
where $p(y_{t+1},b_{t+1},a_{t+1} |y_{t}, \mu, \sigma_{t+1})$ is given in \eqref{eq:jointpdf}.
\item Generate a set of new structural parameters $\eta^{(j)}_{t+1}$ by sampling
$$
\eta^{(j)}_{t+1} \sim \mathsf{N}( m_{t+1}^{(\xi_{t}^{(j)})}, (1-a^2) V_t)
$$
where $V_t = \sum_{j=1}^{N} (\eta_t^{(j)} - \bar{\eta}_t)'(\eta_t^{(j)} - \bar{\eta}_t)/N$.
\item Sample a value of the current state vector
$$
\log(\sigma_{t+1}^{(j)}) \sim \mathsf{N}( \eta_{2,t+1}^{(j)} + \expit(\eta_{3,t+1}^{(j)})[\log(\sigma_{t}^{(\xi_{t}^{(j)})})- \eta_{2,t+1}^{(j)}] , \exp(\eta_{4,t+1}^{(j)}) )
$$
\item Resample the particles according to probabilities
$$
\omega_{t+1}^{(j)} \propto \frac{ p(y_{t+1},b_{t+1},a_{t+1} |y_{t}, \mu = \eta_{1,t+1}^{(j)}, \sigma_{t+1} = \sigma_{t+1}^{(j)}) }{ p(y_{t+1},b_{t+1},a_{t+1} |y_{t}, \mu = m_{1,t+1}^{(\xi_{t}^{(j)})}, \sigma_{t+1} = z_{t+1}^{(\xi_{t}^{(j)})}) }
$$
\end{enumerate}
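The five steps above can be condensed into the following NumPy sketch of a single filtering sweep (our transcription of the algorithm, not the {\tt MATLAB} implementation used later; {\tt joint\_likelihood} stands for a vectorized evaluation of \eqref{eq:jointpdf} and is assumed given):
\begin{verbatim}
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def liu_west_step(sigma, eta, y_prev, y, b, a, eps, rng,
                  joint_likelihood):
    # sigma: (N,) volatilities; eta: (N,4) transformed parameters.
    N = len(sigma)
    shrink = (3.0 * eps - 1.0) / (2.0 * eps)
    eta_bar = eta.mean(axis=0)
    V = np.cov(eta.T, bias=True)
    # Step 1: prior point estimates (z, m).
    z = np.exp(eta[:, 1] + expit(eta[:, 2])
               * (np.log(sigma) - eta[:, 1]))
    m = shrink * eta + (1.0 - shrink) * eta_bar
    # Step 2: auxiliary indicators.
    g = joint_likelihood(y, b, a, y_prev, m[:, 0], z)
    xi = rng.choice(N, size=N, p=g / g.sum())
    # Step 3: propagate the structural parameters.
    eta_new = m[xi] + rng.multivariate_normal(
        np.zeros(4), (1.0 - shrink ** 2) * V, N)
    # Step 4: propagate the state (log-volatility).
    mean = (eta_new[:, 1] + expit(eta_new[:, 2])
            * (np.log(sigma[xi]) - eta_new[:, 1]))
    sigma_new = np.exp(rng.normal(mean,
                                  np.sqrt(np.exp(eta_new[:, 3]))))
    # Step 5: resample with the correction weights.
    w = joint_likelihood(y, b, a, y_prev, eta_new[:, 0], sigma_new)
    w = w / g[xi]
    idx = rng.choice(N, size=N, p=w / w.sum())
    return sigma_new[idx], eta_new[idx]
\end{verbatim}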
Although particle filters are not iterative algorithms and therefore mixing and convergence are not issues, sequential Monte Carlo algorithms might suffer from particle impoverishment. Particle impoverishment happens when the particle approximation at time $t$ differs significantly from the approximation at time $t+1$, leading to a small number of particles receiving most of the posterior weight; in the worst case, a single particle receives all the posterior weight. A similar issue arises in importance sampling algorithms, where efficiency decreases as the distribution of the importance weights becomes less uniform. In the sequential Monte Carlo literature it is common to use the effective sample size (ESS) to monitor particle impoverishment,
$$
ESS_t = \frac{N}{1 + \frac{\mathsf{V}(\omega_t)}{\left[ \mathsf{E}(\omega_t) \right]^2}}
$$
Values of the ESS close to $N$ point to well-behaved samplers with little particle impoverishment, while values much smaller than $N$ usually indicate uneven weights and particle representations that might be missing relevant regions of the parameter space.
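The ESS can be computed directly from unnormalized weights using the formula above (a short sketch; it equals $N$ when all weights are equal):
\begin{verbatim}
import numpy as np

def effective_sample_size(w):
    # ESS = N / (1 + Var(w) / E(w)^2) for weights w.
    w = np.asarray(w, dtype=float)
    return len(w) / (1.0 + w.var() / w.mean() ** 2)
\end{verbatim}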
\section{Missing data}\label{se:missing}
In some instances (for example, non-exchange traded assets), data on the observed extremes might be unavailable or unreliable for some trading periods. When MCMC algorithms are used for inference, their structure can be easily exploited to deal with this type of situation by adding an additional sampling step in which the missing or unreliable data is imputed conditionally on the current value of the parameters. However, this type of iterative procedure is not available in particle filter algorithms such as the one described in Section \ref{se:inference}. This means that implementation under missing data requires that we compute the corresponding marginal densities from the joint distribution in \eqref{eq:jointcdf}, which are then used in steps (2) and (5) of the particle filter in place of \eqref{eq:jointpdf}.
As we discussed in Section \ref{se:extremesv}, the marginal density of the log closing price is simply a normal distribution with mean $y_{t-1} + \mu$ and variance $\sigma_t^2$. Therefore, if both the maximum and minimum are missing, the likelihood for the period simply becomes the likelihood for a standard stochastic volatility model in \eqref{eq:svdt_eq1}. When the minimum is not available, the marginal density for the maximum and the closing price is simply given by
\begin{eqnarray*}
q(y_{t},b_{t}|y_{t-1}) & = & \frac{2(2b_{t}-(y_{t}+y_{t-1}))}{\sqrt{2\pi\sigma_{t}^{6}t^3}}\exp\left\{-\frac{(2b_{t}-(y_{t}+y_{t-1}))^2}{2\sigma_{t}^{2}t}+\frac{\mu(y_{t}-y_{t-1})}{\sigma_{t}^{2}}-\frac{\mu^{2}t}{2\sigma_{t}^{2}}\right\},
\end{eqnarray*}
while if the maximum is missing, the marginal density for the minimum and the closing is given by,
\begin{eqnarray*}
q(y_{t},a_{t}|y_{t-1}) & = & \frac{2(y_{t}+y_{t-1}-2a_{t})}{\sqrt{2\pi\sigma_{t}^{6}t^3}}\exp\left\{-\frac{(y_{t}+y_{t-1}-2a_{t})^2}{2\sigma_{t}^{2}t}+\frac{\mu(y_{t}-y_{t-1})}{\sigma_{t}^{2}}-\frac{\mu^{2}t}{2\sigma_{t}^{2}}\right\}
\end{eqnarray*}
The density of the maximum and minimum can be computed as well by integrating equation \eqref{eq:jointpdf} from $a_t$ to $b_t$ with respect to the variable $y_{t}$. Proofs of these results can be found in \citet{DanJean07}, or can be obtained directly by computing the corresponding limits on expression \eqref{eq:jointpdf}.
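As an illustration, the marginal density for the maximum and the closing price can be evaluated as follows (a direct transcription of the formula above; the argument {\tt t} is the period length, equal to one for the unit-length periods used in this paper):
\begin{verbatim}
import math

def q_max_close(y, b, y_prev, mu, sigma, t=1.0):
    # Marginal density of (closing, maximum) given the opening
    # price, used when the minimum is missing.
    s = 2.0 * b - (y + y_prev)
    return (2.0 * s
            / math.sqrt(2.0 * math.pi * sigma ** 6 * t ** 3)
            * math.exp(-s ** 2 / (2.0 * sigma ** 2 * t)
                       + mu * (y - y_prev) / sigma ** 2
                       - mu ** 2 * t / (2.0 * sigma ** 2)))
\end{verbatim}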
\section{Illustrations}\label{se:illusa}
\subsection{Simulation study}\label{se:illus}
In this section we use a simulation study to compare the performance of four stochastic volatility models: STSV, which uses the Gaussian likelihood in \eqref{eq:constvar} and therefore employs solely the information contained in the opening and closing prices; RASV, which uses the likelihood \eqref{eq:rangeonlypdf} and represents a Bayesian version of the range-only model described by \cite{AliBraDieb02}; RCSV, which uses the likelihood in \eqref{eq:rangeclosepdf} and therefore corresponds to the model based on the range and the opening/closing prices described in \cite{BraJon05}; and EXSV, our proposed model using the full likelihood \eqref{eq:jointpdf} and employing all the information contained in the opening/closing/high/low prices.
Our simulation study uses 100 random samples, each comprising 156 periods of returns generated under the stochastic volatility model described in equations \eqref{eq:svdt_eq1} and \eqref{eq:svdt_eq2}. First, we generate the sequence of volatilities using the Ornstein-Uhlenbeck model in \eqref{eq:svdt_eq2}, with parameters $\alpha = -3.75$, $\phi = 0.9$ and $\tau = 0.11$ (note that these values correspond to the means of the prior distributions used for the analysis of the weekly S\&P500 data in Section \ref{se:sp500} below). Then, for each period, we generate a sample path for the geometric Brownian motion in \eqref{eq:svdt_eq1} over a grid with 1000 nodes using the corresponding value for the volatility, assuming that $\mu=0.000961$ (for weekly returns, this corresponds approximately to an average 5\% annual return). The maximum, minimum, opening and closing prices over the period were computed from this sample path. We assume that the value of the assets at the beginning of each simulation is $S_0 = \$100$, and we compute both the root mean squared deviation (RMSD) of the estimated volatilities with respect to their true values and the corresponding median absolute deviation (MAD).
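The data-generating step for a single period can be sketched as follows (an illustrative Euler discretization of the log-price; function and variable names are our own):
\begin{verbatim}
import numpy as np

def simulate_chlo(y_open, mu, sigma, K=1000, rng=None):
    # One period of the discretized log-price on a grid with
    # K nodes; returns (open, high, low, close).
    rng = rng or np.random.default_rng()
    steps = (mu / K
             + sigma * np.sqrt(1.0 / K) * rng.standard_normal(K))
    path = y_open + np.concatenate(([0.0], np.cumsum(steps)))
    return y_open, path.max(), path.min(), path[-1]
\end{verbatim}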
The particle filter algorithm described in Section \ref{se:inference} was used to fit all four models, and was implemented in {\tt MATLAB} using 30,000 particles. Unless noted otherwise, the results we report below correspond to prior hyperparameters $d_{\mu} = 0$, $D_{\mu} = 0.0001$, $q_{\phi}=9$, $r_{\phi}=1$, $d_{\alpha} = -3.75$, $D_{\alpha} = 0.025$, $u_{\tau} = 6$ and $v_{\tau} = 0.06$, so that the prior distributions are centered around the true parameter values for all models.
Table \ref{ta:simstudy} compares the performance of the different stochastic volatility models in this simulation scenario. The first number in each cell corresponds to the median ratio of the pertinent performance measure across 100 simulations, while the numbers in parenthesis correspond to the 5\% and 95\% quantiles, respectively. Ratios close to one indicate that the two models in question have similar performance, while ratios much larger than one indicate that the first model has a much larger RMSD (or MAD) than the second one, and therefore performs worse.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
Models & RMSD & MAD \\ \hline\hline
STSV/RASV & 1.43 (1.21, 1.64) & 1.46 (1.05, 1.82) \\
RASV/RCSV & 1.02 (0.90, 1.13) & 0.99 (0.85, 1.16) \\
RCSV/EXSV & 1.06 (0.92, 1.22) & 1.11 (0.96, 1.26) \\
RASV/EXSV & 1.07 (0.97, 1.18) & 1.10 (0.93, 1.27) \\ \hline
\end{tabular}
\caption{Results from our simulation example. We show the median along with the 5\% and 95\% quantiles (in parenthesis) of the ratio between deviation measures for three pairs of models, computed over a total of 100 simulated data sets.}\label{ta:simstudy}
\end{table}
In agreement with the results reported by \cite{AliBraDieb02}, the first row of Table \ref{ta:simstudy} shows that using the range alone as a volatility proxy produces consistently more accurate volatility estimates than using opening and closing prices alone. Similarly, the second row suggests that, although including the opening prices tends to improve the volatility estimates over those obtained from the range-based model most of the time, it can sometimes decrease accuracy (especially when the MAD is used to measure the accuracy of the reconstruction). A similar situation, although much less severe, happens with our model based on the full likelihood; our model tends to improve over RASV and RCSV most of the time (with an average efficiency gain in the range of 5-10\%), but can underperform in some data sets.
Although the impact of the full information on the estimation of the volatility is moderate, including the full information contained in CHLO data can greatly improve the estimation of the drift of the model. For example, the ratio of the absolute error in the estimate of the drift $\mu$ based on the 156 time points between RASV and EXSV has a median of 1.12, with the 5\% and 95\% quantiles being 0.93 and 1.46. This indicates an average gain in efficiency of about 12\%. We also note that EXSV tends to be more robust to prior misspecification. Sensitivity analysis performed using different hyperparameters (results not shown) showed that estimates tend to be less affected by prior choice under our EXSV model.
Further insight into the behavior of these four models is provided by Figure \ref{fi:rsimstdy}, which shows the true and reconstructed volatility paths for one simulation in our study. Note that paths generated by STSV tend to underestimate the volatility of volatility. Also, the paths from RASV, RCSV and EXSV tend to be quite similar to each other, especially those from RCSV and EXSV. These results suggest that much of the information about the volatility contained in the high and low prices is provided by the range, but also that the actual levels of the minimum and maximum can provide additional helpful information in most cases.
\begin{figure}
\begin{center}
\includegraphics[height=3.0in,angle=0]{simexample_STSV.pdf}
\includegraphics[height=3.0in,angle=0]{simexample_RASV.pdf}\\
\includegraphics[height=3.0in,angle=0]{simexample_RCSV.pdf}
\includegraphics[height=3.0in,angle=0]{simexample_EXSV.pdf}
\caption{True and reconstructed volatility paths for one simulation in our study for STSV (top left panel), RASV (top right), RCSV (bottom left) and EXSV (bottom right).}\label{fi:rsimstdy}
\end{center}
\end{figure}
\subsection{Estimating the volatility in the S\&P500 index}\label{se:sp500}
In this section we consider the series of the weekly S\&P500 prices covering the ten-year period between April 21, 1997 and April 9, 2007, for a total of 520 observations. Figure \ref{fi:rawreturns} shows the evolution of the log returns at closing, as well as the observed ranges in log returns (computed as the maximum observed log return minus the minimum observed log return over the week). Note that both plots provide complementary but distinct information about the volatility in prices. The series of closing returns does not exhibit any long term trend, but different levels of volatility can be clearly seen from both plots, with the period 1997-2002 presenting a higher average volatility than the period 2003-2007.
\begin{figure}
\begin{center}
\includegraphics[height=3.2in,angle=0]{Weeklysp500_raw_closing.pdf}
\includegraphics[height=3.2in,angle=0]{Weeklysp500_raw_range.pdf} \\
\caption{Observed weekly closing returns (left panel) and return ranges (right panel) in the S\&P500 data.}\label{fi:rawreturns}
\end{center}
\end{figure}
The data was analyzed using three of the models considered in the previous section: STSV, RASV and EXSV. As in the previous section, prior hyperparameters were chosen so that $d_{\mu} = 0$, $D_{\mu} = 0.0001$ (therefore, we expect the average weekly returns to be between -0.03 and 0.03 with high probability), $q_{\phi}=9$ and $r_{\phi}=1$ (so that we expect the autoregressive coefficient to be around $0.9$), $d_{\alpha} = -3.75$, $D_{\alpha} = 0.025$, $u_{\tau} = 6$ and $v_{\tau} = 0.06$ (so that the median of the annualized long-term volatility is a priori around $20\%$). The models were fitted using the particle filter algorithm described in Section \ref{se:inference}. We used a total of 100,000 particles for each model, and monitored particle impoverishment by computing the effective sample sizes at each point in time. Although not a serious issue in any of the three models, particle impoverishment is more pronounced for EXSV. This is probably due to the additional information provided by the extremes, which makes the likelihood tighter and decreases the viability of individual particles.
\begin{figure}
\begin{center}
\includegraphics[height=3.2in,angle=0]{Weeklysp500_mu.pdf}
\includegraphics[height=3.2in,angle=0]{Weeklysp500_alpha.pdf} \\
\includegraphics[height=3.2in,angle=0]{Weeklysp500_beta.pdf}
\includegraphics[height=3.2in,angle=0]{Weeklysp500_tau.pdf} \\
\caption{Filtered means and 5\% and 95\% posterior quantiles for the structural parameters under EXSV.}\label{fi:structural}
\end{center}
\end{figure}
Point and interval estimates for the structural parameters under EXSV are shown in Figure \ref{fi:structural}. These are filtered estimates, which means that they incorporate information available only until the time they were computed. We note that there is substantial learning about the structural parameters. After $T=520$ observations, the posterior mean for the median of the stationary distribution of volatility, $\nu = \exp\{ \alpha \}$, is $0.1546$, with a symmetric 90\% credible interval $(0.1510, 0.1670)$, while the autoregressive coefficient for the volatility has a posterior mean of $0.8935$ with credible interval $(0.8832, 0.9134)$. The results from STSV (not shown) are similar, but tend to produce a larger value of the autocorrelation coefficient (posterior mean 0.9690, 90\% credible interval (0.9245, 0.9873)). One surprising feature of these estimates is the pronounced drop in the autocorrelation coefficient $\phi$ on the week of January 24, 2000, which is accompanied by an increase in the volatility of the volatility, $\tau$. This can be explained by the large negative return observed for this week (around -11\% in the week). This observation suggests that a model that includes volatility jumps would be more appropriate for this data; however, the development of such a model is beyond the scope of this paper and will be discussed elsewhere.
\begin{figure}
\begin{center}
\includegraphics[height=4.9in,angle=0]{Weeklysp500_filtered.pdf}
\caption{Estimated volatilities of S\&P500 data using both stochastic volatility models.}\label{fi:filtered}
\end{center}
\end{figure}
Figure \ref{fi:filtered} shows point estimates for the volatility of returns under all three models. Again, these are filtered estimates. Although the overall level of the volatility series seems to be similar in all three instances, and the estimates obtained from RASV and EXSV are similar, there are striking differences in the behavior of STSV with respect to RASV and EXSV. For example, note that the peaks during the high volatility period of 1997-2002 are much more pronounced under RASV and EXSV, while the average volatility between 2003 and 2007 seems to be lower under both RASV and EXSV than under STSV. Therefore, including information on extreme values seems to help to correct not only for underestimation, but also for overestimation of the volatility. To complement the information in Figure \ref{fi:filtered}, we present in Figure \ref{fi:postdenest} the posterior distribution for the volatility of returns on December 6, 2000 and April 2, 2007 under all three models. All distributions are right skewed, but the distributions under RASV and EXSV are shifted to the left, which suggests that STSV overestimates the true volatility in both cases. In addition, for December 6, 2000 the distributions for RASV and EXSV are almost identical; however, for April 2, 2007, they are noticeably different. More importantly, the posterior under both EXSV and RASV has a lower variability than that under STSV, reflecting the additional information contained in the extreme values.
\begin{figure}
\begin{center}
\includegraphics[height=3.2in,angle=0]{Weeklysp500_postdist165.pdf}
\includegraphics[height=3.2in,angle=0]{Weeklysp500_postdist519.pdf}
\caption{Posterior distribution for the volatility of returns for December 6, 2000 (left panel) and April 2, 2007 (right panel)}\label{fi:postdenest}
\end{center}
\end{figure}
In order to better understand the effect of the extreme values on the estimation of volatility, we compare our volatility estimates with the value of the VIX index for the dates under consideration, which provides an independent source of information about the volatility of the S\&P500 index. The VIX is constructed as a weighted average of the implied volatilities obtained from options whose underlying assets are included in the S\&P500, and is therefore not directly linked to the S\&P500 extremes. Of course, the VIX is the implied forward-looking volatility priced in the market, with 1-2 month average tenors and including the price of risk, while our estimates refer to current filtered estimates; but we make this comparison under the assumption that the VIX reflects current market expectations of where the volatility of the underlying is, and that any bias affects all models under consideration equally. Figure \ref{fi:vix} presents scatter plots of the observed VIX prices at closing, which show strong association with the volatilities estimated under all models. Correlation between the closing VIX price and the filtered volatilities is 0.801 for the STSV model, 0.853 for RASV and 0.861 for EXSV. Very similar results are obtained if the average of the opening and closing VIX prices is used instead (correlations are 0.816 for STSV, 0.854 for RASV and 0.863 for EXSV). The stronger correlation with the EXSV volatilities provides additional evidence that the information contained in the realized extremes can indeed improve the performance of the model, although the difference between RASV and EXSV is small and might not be relevant.
\begin{figure}
\begin{center}
\includegraphics[height=3.0in,angle=0]{Weeklysp500_vixpredsv.pdf}
\includegraphics[height=3.0in,angle=0]{Weeklysp500_vixpredrange.pdf}
\includegraphics[height=3.0in,angle=0]{Weeklysp500_vixpredminmax.pdf} \\
\caption{Closing VIX prices versus filtered volatilities for all three models under consideration.}\label{fi:vix}
\end{center}
\end{figure}
\section{Discussion and future work}\label{se:ccl}
This paper adds to the growing body of literature suggesting that the information contained in the observed extremes of asset prices can greatly contribute to increasing the accuracy and efficiency of volatility estimates. In addition, our results also suggest that the information contained in the levels of the extreme returns (which is lost when using the observed ranges for inference) can contribute to more efficient estimation of the volatility and, almost as importantly for certain applications, to the estimation of the drift of the process. This paper also provides a unifying framework in which to understand several CHLO models available in the literature, allowing for a direct comparison of their different assumptions.
Some extensions of the simple stochastic volatility model discussed in this paper are immediate and have already been hinted at. For example, it is well known that leverage effects can be incorporated by introducing a correlation between the innovations in the price and volatility processes. In addition, non-equally spaced observations can be easily accommodated by slightly rewriting the likelihood equations in terms of non-unit length intervals. The strong identifiability of model parameters provided by using the full information in the model allows us to reliably compare different evolution dynamics (e.g., Ornstein-Uhlenbeck, Markov-switching, jump and square-root processes). Indeed, model comparison in this setting is straightforward thanks to our reliance on particle filters for computation. Finally, we plan to investigate how the additional information provided by the range can help when reconstructing option prices. In this regard, note that, conditionally on the parameters of the underlying stochastic process, the pricing formulas in \cite{He93} can be used almost directly in our problem. Therefore, our model can be extended to generate a posterior distribution for the price of any option of interest, which can be compared to the prices observed in the market.
Another avenue that deserves future exploration is the use of extreme prices for the estimation of the covariance across multiple asset prices. In that regard, we can start by writing a model for the multivariate log-asset prices $\bfy_t$ as a multivariate GBM with a time-varying variance-covariance matrix, and deriving the joint density for closing, high and low prices conditional on the opening prices, in a manner similar to what we did in this paper. This joint likelihood contains all relevant information in the extremes and avoids the calculation of cross ranges \citep{RoZh08}, which scale badly to higher dimensions.
\section{Introduction}
Cryptographic schemes are mainly classified into public key cryptography and symmetric key cryptography.
In symmetric key encryption schemes, both the sender and the receiver of a message use a common key; therefore, they have to share this key in advance.
Public key cryptographic schemes used for this purpose are called key exchange protocols.
Famous key exchange protocols include an RSA-based one \cite{RSA1} and the Diffie--Hellman key exchange \cite{Diffie}.
For public key cryptosystems widely used at present, such as the RSA cryptosystem \cite{RSA2} and elliptic curve cryptosystems \cite{elliptic,elliptic2}, it is known that Shor's quantum algorithm \cite{Shor} can break these schemes in polynomial time.
This means that those cryptosystems will become vulnerable once a large-scale quantum computer is developed in the future.
As a countermeasure, post-quantum cryptography (PQC) has been studied intensively, which is the class of (public key) cryptographic schemes secure even against quantum algorithms.
NIST started its standardization process for PQC in 2017, and many candidates have been submitted.
Among them, one of the main candidates for PQC is multivariate cryptography, which is based on the hardness of solving a system of multivariate non-linear equations.
In 2019, Akiyama et al.~\cite{Akiyama} proposed a key exchange protocol which is based on the hardness of solving a system of multivariate non-linear equations but yet has a design strategy different from ordinary multivariate cryptography.
Namely, ordinary multivariate cryptography intends to conceal the internal structure by using compositions of maps, while Akiyama et al.'s scheme uses both composition and addition of maps to conceal the central map $\mvec{\psi}$.
Although their construction improved a previous protocol \cite{Yosh} in reducing the ciphertext size, the original version of their protocol has a drawback that $\mvec{\psi}$ has to be an injective polynomial map, which decreases the number of candidates for $\mvec{\psi}$ and hence weakens the security.
They also proposed a countermeasure that enables the use of a non-injective $\mvec{\psi}$ for strengthening the security, but this increases the failure probability of establishing a common key.
This trade-off should be resolved in order to make the scheme practically useful.
\subsection{Our Contributions}
In this paper, we give an improvement of the key exchange protocol by Akiyama et al.~\cite{Akiyama} that reduces the failure probability while keeping the security level.
Roughly speaking, in the improved version of Akiyama et al.'s protocol, one of the two parties solves a certain system of polynomial equations defined over a prime field $\mathbb{F}_q$ in order to determine the common key.
The main reason for the large failure probability is that the system of equations frequently has two or more solutions in $\mathbb{F}_q^n$, and therefore the correct common key is not uniquely determined.
Our main idea is to restrict the range of the correct common key (i.e., the correct solution of the system of equations) to $\mathbb{Z}_p^n \subseteq \mathbb{F}_q^n$ for some smaller $p < q$, instead of the whole of $\mathbb{F}_q^n$.
(This idea is inspired by the construction of \cite{Akiyama2}.)
Even if the system of equations has multiple solutions in $\mathbb{F}_q^n$, the solution in $\mathbb{Z}_p^n$ will be unique with high probability, which enables the party to successfully determine the correct common key.
Our proposed protocol with sizes of parameters $q$ and $n$ similar to those of \cite{Akiyama} indeed reduces the failure probability significantly.
Moreover, we propose a parameter set for our proposed protocol that might achieve both failure probability $2^{-120}$ and $128$-bit security level.
We theoretically estimate an upper bound for the failure probability in order to confirm the former property, and discuss the security level against some typical kinds of attacks (Gr\"{o}bner basis computation, linear algebraic attack, etc.) in order to confirm the latter property.
\subsection{Organization of This Paper}
Section \ref{sec:preliminaries} summarizes some notations and basic properties such as properties of Gr\"{o}bner basis.
In Section \ref{sec:previous_scheme}, we recall the construction of the previous protocol in \cite{Akiyama} and summarize advantages and disadvantages of the previous protocol.
In Section \ref{sec:proposed_protocol}, we describe our proposed protocol and give a theoretical estimate of an upper bound for failure probability of the protocol.
Section \ref{sec:experimental_results} summarizes the results of our computer experiments about our proposed protocol.
Section \ref{sec:comparison} summarizes the results of comparison between our proposed protocol and the previous protocol in \cite{Akiyama}.
Finally, in Section \ref{sec:security_evaluation}, we evaluate the security of our proposed protocol.
\section{Preliminaries}
\label{sec:preliminaries}
In this section, we recall the algebraic definitions needed to understand the algorithms, and introduce the PME problem, on which the security of the existing method is based.
\subsection{Notation}
Let $\mathbb{F}_q$ denote the finite field of $q$ elements and $\mvec{x} := (x_1, \dots, x_n)$ for $n$ variables $x_1,\dots,x_n$.
Then $f(\mvec{x})$ denotes a polynomial in the $n$ variables.
We refer to $\mathbb{F}_q[\mvec{x}]$ as the polynomial ring in $n$ variables over the coefficient field $\mathbb{F}_q$, and write $\mvec{\psi}(\mvec{x}) := (\psi_1(\mvec{x}),\dots,\psi_m(\mvec{x}))$ for polynomials $\psi_1(\mvec{x}),\dots,\psi_m(\mvec{x})$ in $n$ variables.
Then we define the degree of $\mvec{\psi}$ by $\deg \mvec{\psi} := \max\{\deg \psi_1,\dots,\deg \psi_m\}$.
In particular, polynomial maps of degree one are called affine maps.
For polynomial maps $\mvec{\psi}=(\psi_1,\dots,\psi_m)$ of $n$ variables and $\mvec{\phi}=(\phi_1,\dots,\phi_n)$, we define the composed map $\mvec{\psi}\circ \mvec{\phi}$ by $\mvec{\psi}\circ \mvec{\phi} = (\psi_1(\phi_1,\dots,\phi_n),\dots,\psi_m(\phi_1,\dots,\phi_n))$.
In this paper, we use the following classes of polynomials:
\[
\begin{split}
\Lambda_{n,d} &:= \{f \in \mathbb{F}_q[x_1, \dots, x_n] \mid \deg f = d\} \enspace,\\
\Lambda_{n,d}^m &:= \{\mvec{f} \in \mathbb{F}_q[x_1, \dots, x_n]^m \mid \deg \mvec{f} = d\} \enspace,\\
(\Lambda_{n,d}^n)^* &:= \{\mvec{\psi} \in \Lambda_{n,d}^n \mid \mvec{\psi} \mbox{ is injective}\} \enspace.
\end{split}
\]
Moreover, $x \xleftarrow{r} S$ denotes that $x$ is chosen uniformly at random from the set $S$.
\subsection{PME Problem}
\label{subsec:PME_problem}
The PME problem was introduced by Akiyama et al.~\cite{Akiyama}, and the security of their previous method is based on it.
The definition is as follows.
\begin{definition}
[PME problem]
For $(f(\mvec{x}), c_1(\mvec{x}), \dots, c_n(\mvec{x})) \in \mathbb{F}_q[\mvec{x}]^{n+1}$ and $(u_1, \dots, u_n) \in \mathbb{F}_q^n$,
PME (Polynomial Map Equation) problem is a problem of finding a solution to the system of multivariate polynomial equations
\begin{eqnarray*}
\begin{cases}
f(x_1, \cdots, x_n) = 0 \\
c_1(x_1, \cdots, x_n) = u_1\\
\qquad\vdots\\
c_n(x_1, \cdots, x_n) =u_n \enspace.\\
\end{cases}
\end{eqnarray*}
\end{definition}
\subsection{Gr\"{o}bner Basis}
Here we summarize some basics for Gr\"{o}bner basis.
See e.g., \cite{text} for the details.
First, we define the monomial set $\mathcal{M}_n$ of the polynomial ring $K[\mvec{x}]$ over a coefficient field $K$:
\begin{eqnarray*}
\mathcal{M}_n:=\{x_1^{\alpha_1}\cdots x_n^{\alpha_n}\mid(\alpha_1, \cdots, \alpha_n) \in (\mathbb{Z}_{\geq 0})^n\} \enspace.
\end{eqnarray*}
\begin{definition}
[monomial order]
A strict partial order $\succ$ on $\mathcal{M}_n$ is called a monomial order when it satisfies the following three conditions.
\begin{enumerate}
\item
$\succ$ is a total order.
\item
Any subset $S \neq \emptyset$ of $\mathcal{M}_n$ has a minimum element with respect to $\succ$.
\item
If $s,t \in \mathcal{M}_n$ and $s \succ t$, then for any element $u$ of $\mathcal{M}_n$, we have $su \succ tu$.
\end{enumerate}
\end{definition}
\begin{example}
[lexicographical order]
We define the lexicographical order $\succ_{lex}$ on $\mathcal{M}_n$ by
\[
x_1^{\alpha_1} \cdots x_n^{\alpha_n} \succ_{lex} x_1^{\beta_1} \cdots x_n^{\beta_n} \Leftrightarrow \exists\, i \in \{1,\dots,n\} \mbox{ s.t.\ } \alpha_j = \beta_j \mbox{ ($1 \leq j \leq i-1$) and } \alpha_i > \beta_i \enspace.
\]
Then it is known that $\succ_{lex}$ is a monomial order.
\end{example}
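Since comparing monomials under $\succ_{lex}$ amounts to lexicographic comparison of their exponent vectors, Python's built-in tuple comparison realizes it directly (a small illustrative check):
\begin{verbatim}
# Monomials are encoded by exponent vectors (a_1,...,a_n);
# tuple comparison in Python is exactly lexicographic.
def lex_greater(alpha, beta):
    return tuple(alpha) > tuple(beta)

assert lex_greater((2, 0, 0), (1, 5, 7))  # x1^2 > x1*x2^5*x3^7
assert lex_greater((1, 3, 0), (1, 2, 9))  # x1*x2^3 > x1*x2^2*x3^9
\end{verbatim}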
\begin{definition}
[leading term]
Let $f \in K[\mvec{x}]$.
The leading term of $f$ is the maximum monomial (with coefficient) in $f$ with respect to a given monomial order $\succ$, and it is expressed as $LT_{\succ}(f)$.
\end{definition}
Then the following holds about division of multivariate polynomials.
\begin{proposition}
Let $\mathcal{G}=\{g_1,\dots,g_s\}$ be a finite subset of $K[\mvec{x}]\setminus \{0\}$.
For each $f \in K[\mvec{x}]$, there exist $h_1,\dots,h_s,r \in K[\mvec{x}]$ satisfying the following:
\begin{itemize}
\item
$f = h_1 g_1 + \cdots + h_s g_s + r$.
\item
Each monomial appearing in $r$ is not divisible by $LT_{\succ}(g_i)$ for any $1 \leq i \leq s$.
\end{itemize}
\end{proposition}
In the above situation, we write the term $r$ as $\overline{f}^{\succ, \mathcal{G}}$ and call it the remainder of $f$ with respect to $\succ$ on division by $\mathcal{G}$.
Now Gr\"{o}bner basis is defined as follows.
\begin{definition}
[Gr\"{o}bner basis]
A finite subset $\mathcal{G}=\{g_1,\dots,g_s\}$ of $K[\mvec{x}] \setminus \{0\}$ is a Gr\"{o}bner basis of an ideal $\mathcal{I} \subseteq K[\mvec{x}]$ with respect to $\succ$ when it satisfies the following condition: For any $f \in \mathcal{I} \setminus \{0\}$, $LT_{\succ}(g_i)$ divides $LT_{\succ}(f)$ for some $1 \leq i \leq s$.
\end{definition}
We note that this definition is equivalent to saying that $\overline{f}^{\succ, \mathcal{G}} = 0$ for any $f \in \mathcal{I}$.
Hereinafter, we consider solving a system of multivariate non-linear polynomial equations using Gr\"{o}bner bases.
First, we formulate the problem as follows.
\begin{problem}
Given $s$ polynomials $f_1,\dots,f_s \in K[\mvec{x}]$ in $n$ variables $\mvec{x}$, find $(a_1,\dots,a_n) \in K^n$ such that $f_1(a_1,\dots,a_n)= \cdots =f_s(a_1,\dots,a_n)=0$.
\end{problem}
We reformulate this problem in a way that we can apply Gr\"{o}bner basis to the problem.
\begin{problem}
\label{prob:zero_set}
Let $\mathcal{I} \subseteq K[\mvec{x}]$ be the ideal generated by $f_1,\dots,f_s \in K[\mvec{x}]$.
Then find the zero set $V(\mathcal{I}) := \{(a_1,\cdots,a_n)\in K^n \colon f(a_1,\dots,a_n)=0 \mbox{ for any } f \in \mathcal{I}\}$.
\end{problem}
For Gr\"{o}bner basis about lexicographical order, the following property is useful.
\begin{theorem}
For lexicographical order $\succ_{lex}$ with $x_1 \succ x_2 \succ \cdots \succ x_n$, let $\mathcal{G}$ be a Gr\"{o}bner basis of an ideal $\mathcal{I} \subseteq K[\mvec{x}]$ about $\succ_{lex}$.
Then, for any $1 \leq \ell \leq n$, $\mathcal{G} \cap K[x_{\ell}, \dots,x_n]$ is a Gr\"{o}bner basis of the ideal $\mathcal{I} \cap K[x_{\ell}, \dots,x_n]$ of $K[x_{\ell}, \dots,x_n]$.
\end{theorem}
This theorem enables us to reduce Problem \ref{prob:zero_set} to solving problems in fewer variables.
Moreover, when the ideal $\mathcal{I}$ is a zero-dimensional ideal (in the sense explained below), we can reduce the problem to an even easier one.
\begin{definition}
[zero-dimensional ideal]
An ideal $\mathcal{I} \subseteq K[\mvec{x}]$ is a zero-dimensional ideal when the quotient space $K[\mvec{x}] / \mathcal{I}$ is a finite-dimensional linear space over $K$.
\end{definition}
A Gr\"{o}bner basis $\mathcal{G}= \{g_1,\cdots,g_s\}$ of a zero-dimensional ideal $\mathcal{I}$ about lexicographical order satisfies (with a certain ordering for the elements of $\mathcal{G}$) that for each $1 \leq i \leq n$, $g_i$ is a polynomial in $x_1,\dots,x_i$.
This fact makes it possible to solve the original system of equations by solving univariate non-linear equations finitely many times.
Buchberger's algorithm, $F_4$ algorithm \cite{F4}, and $F_5$ algorithm \cite{F5} are frequently used to calculate Gr\"{o}bner basis.
When the ideal in the problem is a zero-dimensional ideal, we can estimate the computational complexity of $F_5$, which is the best algorithm among them, by using the notion of degree of regularity explained below.
\begin{definition}
[degree of regularity]
We define the degree of regularity of a zero-dimensional ideal $\mathcal{I}=\langle f_1,\dots,f_s\rangle$ by
\begin{eqnarray*}
d_{reg} := \min\left\{ d\geq 0\mid \dim \{ f \in \mathcal{I} \colon f \mbox{ is homogeneous of degree } d \} = \binom{n+d-1}{d} \right\} \enspace.
\end{eqnarray*}
\end{definition}
\begin{definition}
[$d$-regular]
For an overdetermined system of polynomial equations $f_1 = \cdots = f_s = 0$ ($s \geq n$) whose polynomials generate a zero-dimensional ideal, this equation system is $d$-regular when the following holds for any $1 \leq i \leq s$ and $g \in K[\mvec{x}]$:
\begin{center}
If $\deg(g) < d-\deg(f_i)$ and $gf_i \in \langle f_1,\dots,f_{i-1}\rangle$, then $g \in \langle f_1,\dots,f_{i-1}\rangle$.
\end{center}
\end{definition}
\begin{definition}
[semi-regular]
A system of polynomial equations is semi-regular when it is $d_{reg}$-regular.
\end{definition}
The following result is shown in \cite{Grobner}.
\begin{theorem}
\label{thm:complexity_of_F5}
For a semi-regular system of polynomial equations, the complexity of $F_5$ algorithm is estimated as
\begin{eqnarray*}
O\left({\binom{n+d_{reg}}{n}}^{\omega}\right)
\end{eqnarray*}
where $\omega<2.39$ denotes the linear algebra constant.
\end{theorem}
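For concreteness, the bit-complexity implied by this bound can be computed as follows (an illustrative sketch; the instance in the comment is hypothetical, and $\omega = 2.39$ is the constant stated above):
\begin{verbatim}
from math import comb, log2

def f5_bit_complexity(n, d_reg, omega=2.39):
    # log2 of binomial(n + d_reg, n)^omega.
    return omega * log2(comb(n + d_reg, n))

# e.g. for a hypothetical instance with n = 60 and d_reg = 10:
# print(f5_bit_complexity(60, 10))
\end{verbatim}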
\section{The Previous Protocol}
\label{sec:previous_scheme}
In this section, we summarize the previous protocol proposed by Akiyama et al.~\cite{Akiyama}.
\subsection{The Original Protocol}
Here we describe the original version of the previous protocol given in Section 4 of \cite{Akiyama}.
In the protocol, Alice and Bob are going to agree on a common key using a public channel.
We use the following parameters:
\begin{align*}
q \colon& \mbox{prime number which is the number of elements of the coefficient field}\\
n \colon& \mbox{number of variables}\\
m \colon& \mbox{degree of polynomials generated by Bob}\\
d \colon& \mbox{degree of polynomial generated by Alice}
\end{align*}
The protocol is as follows (see also Figure \ref{fig:previous_protocol}).
\begin{enumerate}
\item \label{previous_algorithm__Alice}
Alice sends a multivariate equation $f(\mvec{x})=0$ to Bob and keeps its solution $\mvec{\sigma} \in \mathbb{F}_q^n$ secret.
The detail is as follows:
\begin{enumerate}
\item
Generate a polynomial $f(\mvec{x}) \in \mathbb{F}_q[\mvec{x}]$ of degree $d$ uniformly at random.
\item
Generate a solution $\mvec{\sigma} \in \mathbb{F}_q^n$ of $f(\mvec{x})=0$ as follows.
First, generate $\sigma_1, \dots, \sigma_{n-1} \in \mathbb{F}_q$ uniformly at random.
Then, solve the univariate equation $f(\sigma_1, \dots, \sigma_{n-1}, x_n)=0$ in $x_n$ and keep a solution $\sigma_n$; if it has no solution, then modify the constant term of $f$ and restart generating $\sigma_1, \dots, \sigma_{n-1}$.
\item
Keep the solution $\mvec{\sigma} = (\sigma_1, \dots, \sigma_{n-1}, \sigma_n) \in \mathbb{F}_q^n$.
\item
Send $f(\mvec{x})$ to Bob.
\end{enumerate}
\item
Bob sends multivariate polynomials $\mvec{g}(\mvec{x})$ and $\mvec{c}(\mvec{x})$ to Alice.
The detail is as follows:
\begin{enumerate}
\item
Generate a bijective affine map $\mvec{g}(\mvec{x}) = (g_1(\mvec{x}), \dots, g_n(\mvec{x})) \in \mathbb{F}_q[\mvec{x}]^n$ uniformly at random.
\item
Generate an injective polynomial map $\mvec{\psi}(\mvec{x}) = (\psi_1(\mvec{x}), \dots, \psi_n(\mvec{x})) \in \mathbb{F}_q[\mvec{x}]^n$ of degree $\deg \psi_j = m$ ($1 \leq j \leq n$) randomly.
\item
Compute $\mvec{\psi}(\mvec{g}(\mvec{x}))$.
\item
Generate a polynomial map $\mvec{r}(\mvec{x}) = (r_1(\mvec{x}), \dots, r_n(\mvec{x})) \in \mathbb{F}_q[\mvec{x}]^n$ of degree $\deg r_j = m-d$ ($1 \leq j \leq n$) uniformly at random.
\item
Compute a polynomial map $\mvec{c}(\mvec{x}) = \mvec{\psi}(\mvec{g}(\mvec{x})) + f(\mvec{x})\mvec{r}(\mvec{x})$.
\item
Send $\mvec{g}(\mvec{x})$ and $\mvec{c}(\mvec{x})$ to Alice.
\end{enumerate}
\item
Alice computes a common key $\mvec{s} \in \mathbb{F}_q^n$, and sends $\mvec{u} \in \mathbb{F}_q^n$ to Bob.
The detail is as follows:
\begin{enumerate}
\item
Compute $\mvec{g}(\mvec{\sigma}) = \mvec{s}$, and keep $\mvec{s}$ as the common key.
\item
Compute $\mvec{c}(\mvec{\sigma}) = \mvec{u}$, and send $\mvec{u}$ to Bob.
\end{enumerate}
\item \label{previous_algorithm__Bob}
Bob computes a common key $\mvec{s}$ as follows.
Since $f(\mvec{\sigma})= 0$ implies $\mvec{c}(\mvec{\sigma}) = \mvec{\psi}(\mvec{g}(\mvec{\sigma})) = \mvec{u}$, Bob can compute the common key $\mvec{s}$ by applying $\mvec{\psi}^{-1}$ to $\mvec{u}$:
\begin{eqnarray*}
\mvec{\psi}^{-1}(\mvec{u}) = \mvec{g}(\mvec{\sigma}) =\mvec{s} \enspace.
\end{eqnarray*}
\end{enumerate}
\begin{figure}[t!]
\centering
\begin{tabular}{|lll|}
\hline
Alice & & Bob \\ \hline
$\mvec{\sigma} \stackrel{r}{\leftarrow} \mathbb{F}_q^n$& & \\
$f \stackrel{r}{\leftarrow} \Lambda_{n,d}$& &\\
$f(\mvec{\sigma}) = 0$ & $\xrightarrow{f}$& $\mvec{g} \stackrel{r}{\leftarrow} \Lambda_{n,1}^n$ \\
& & $\mvec{\psi} \stackrel{r}{\leftarrow} (\Lambda_{n,m}^n)^*$\\
& & $\mvec{r} \stackrel{r}{\leftarrow} \Lambda_{n,m-d}^n$\\
&$\xleftarrow{\left(\mvec{g}, \mvec{c}\right)}$& $\mvec{c}:=\mvec{\psi}\circ\mvec{g}+f\mvec{r}$\\
$\mvec{s}:=\mvec{g}(\mvec{\sigma})$ & & \\
$\mvec{u}:=\mvec{c}(\mvec{\sigma})$ &$\xrightarrow{\mvec{u}}$& $\mvec{s}=\mvec{\psi}^{-1}(\mvec{u})$\\\hline
\end{tabular}
\caption{The previous protocol}
\label{fig:previous_protocol}
\end{figure}
\subsection{An Improved Version in the Original Paper}
\label{subsec:previous_improvement}
In this section, we explain the improvement of the protocol above given in the original paper, which increases the number of possible choices for the multivariate polynomial map $\mvec{\psi}$.
In the original protocol, the polynomial map $\mvec{\psi}$ was restricted to be injective in order for Bob to obtain the common key uniquely.
In contrast, here we use a general polynomial map $\mvec{\psi}$, and from the candidate set $\mvec{\psi}^{-1}(\mvec{u})$ of common keys, we exclude ones which do not satisfy the necessary condition $f = 0$.
Precisely, we change Step \ref{previous_algorithm__Bob} of the algorithm in the following manner:
\begin{itemize}
\item[(a)]
Compute the set $\mvec{\psi}^{-1}(\mvec{u})$.
\item[(b)]
If $\#\mvec{\psi}^{-1}(\mvec{u}) = 1$, then keep the $\mvec{s} \in \mvec{\psi}^{-1}(\mvec{u})$ as the common key and halt.
\item[(c)]
If $\#\mvec{\psi}^{-1}(\mvec{u}) \neq 1$, then compute all elements of $S := \{ \mvec{s} \in \mvec{\psi}^{-1}(\mvec{u})\mid f( \mvec{g}^{-1}(\mvec{s}) ) = 0\}$.
In other words, for each element $\mvec{s}$ of $\mvec{\psi}^{-1}(\mvec{u})$, check whether it satisfies that $f( \mvec{g}^{-1}(\mvec{s}) ) = 0$, and if not, exclude the $\mvec{s}$ from the set $S$.
\item[(d)]
If finally $\#S = 1$, then keep the element of $S$ as the common key; otherwise, restart from Step \ref{previous_algorithm__Alice}.
\end{itemize}
\subsection{Advantage of the Previous Protocol}
The protocol constructs a ciphertext in a way different from the usual multivariate cryptography (which uses a central map and composes it with two affine maps), and consequently, there is a possibility of reducing the parameter size and the ciphertext size by avoiding known attacks on multivariate cryptosystems.
Also, the protocol was an improvement of Yosh's protocol \cite{Yosh} and succeeded in decreasing the degree of polynomials from exponential order to polynomial order, which improves the efficiency.
\subsection{Disadvantage of the Previous Protocol}
\label{sec:previous_protocol__disadvantage}
The improvement in Section \ref{subsec:previous_improvement} aimed at enhancing the security by enlarging the possibility of the map $\mvec{\psi}$.
However, even though an additional check using the condition $f(\mvec{\sigma})=0$ is introduced, there remains a risk that the protocol fails due to the non-injectivity of $\mvec{\psi}$.
In fact, for the experiments of the protocol performed in the original paper \cite{Akiyama}, the highest average success rate with their proposed parameters was only $89.9\%$.
In contrast, practically desirable values of failure rates are of the order of $2^{-64}$ or even smaller.
Therefore, the success rate of the previous protocol has to be much improved.
\section{Our Proposed Protocol}
\label{sec:proposed_protocol}
\subsection{Protocol Description}
Similarly to \cite{Akiyama}, Alice and Bob are going to agree on a common key using a public channel.
In addition to the originally used parameters, we introduce parameters $p$ related to the range of the common key and $\ell$ determining the number of equations in $f$.
From now on, we regard $\mathbb{Z}_p := \{0,1,\dots,p-1\} \subseteq \mathbb{F}_q$.
\begin{align*}
q \colon& \mbox{prime number which is the number of elements of the coefficient field}\\
p \colon& \mbox{integer related to the range of the common key}\\
n \colon& \mbox{number of variables}\\
m \colon& \mbox{degree of polynomials generated by Bob}\\
d \colon& \mbox{degree of polynomial generated by Alice}\\
\ell \colon& \mbox{number of polynomials generated by Alice}
\end{align*}
Our proposed protocol is as follows (see also Figure \ref{fig:proposed_protocol}):
\begin{enumerate}
\item \label{item:proposed_protocol_Alice}
Alice sends a system of multivariate polynomial equations $\mvec{f}(\mvec{x})=0$ to Bob, and keeps its solution $\mvec{s}$ belonging to $\mathbb{Z}_p^n \subseteq \mathbb{F}_q^n$ as the common key.
The detail is as follows:
\begin{enumerate}
\item
Generate a uniformly random $\mvec{s} \in \mathbb{Z}_p^n$, which will be the common key.
\item
Generate a system of degree-$d$ polynomials $\tilde{f}(\mvec{x}) = (\tilde{f}_1(\mvec{x}),\dots,\tilde{f}_{\ell}(\mvec{x})) \in \mathbb{F}_q[\mvec{x}]^{\ell}$ uniformly at random.
\item
Compute $\mvec{f}(\mvec{x}) := (f_1(\mvec{x}),\dots,f_{\ell}(\mvec{x})) = \tilde{f}(\mvec{x})-\tilde{f}(\mvec{s})$.
\item
Send $\mvec{f}(\mvec{x})$ to Bob.
\end{enumerate}
\item
Bob sends a polynomial map $\mvec{c}(\mvec{x})$ to Alice.
The detail is as follows:
\begin{enumerate}
\item
Randomly generate a polynomial map $\mvec{\psi}(\mvec{x}) = (\psi_1(\mvec{x}), \dots, \psi_n(\mvec{x})) \in \mathbb{F}_q[\mvec{x}]^n$ of degree $\deg \psi_j = m$ ($1 \leq j \leq n$).
\item
Generate a polynomial map $\mvec{r}(\mvec{x}) = (r_1(\mvec{x}), \dots, r_n(\mvec{x})) \in \mathbb{F}_q[\mvec{x}]^n$ of degree $\deg r_j = m-d$ ($1 \leq j \leq n$) uniformly at random.
\item
Choose $(t_1,\dots,t_n) \in \{1,\dots,\ell\}^n$ uniformly at random.
\item
Compute $c_i(\mvec{x}) = \psi_i(\mvec{x}) + f_{t_i}(\mvec{x})r_i(\mvec{x})$ for each $1 \leq i \leq n$.
\item
Send $\mvec{c}(\mvec{x})$ to Alice.
\end{enumerate}
\item
Alice computes $\mvec{c}(\mvec{s}) = \mvec{u}$ and sends $\mvec{u}$ to Bob.
\item
Bob computes the common key $\mvec{s}$.
The detail is as follows:
\begin{enumerate}
\item
Compute the set $\mvec{\psi}^{-1}(\mvec{u}) \cap \mathbb{Z}_p^n$.
\item
If $\#\mvec{\psi}^{-1}(\mvec{u}) \cap \mathbb{Z}_p^n = 1$, then keep the $\mvec{s} \in \mvec{\psi}^{-1}(\mvec{u}) \cap \mathbb{Z}_p^n$ as the common key and halt.
\item
If $\#\mvec{\psi}^{-1}(\mvec{u}) \cap \mathbb{Z}_p^n \neq 1$, then compute all elements of $S = \{ \mvec{s} \in \mvec{\psi}^{-1}(\mvec{u}) \cap \mathbb{Z}_p^n \mid \mvec{f}(\mvec{s}) = 0 \}$.
\item
If $\#S = 1$, then keep the element of $S$ as the common key; otherwise, restart from Step \ref{item:proposed_protocol_Alice}.
\end{enumerate}
\end{enumerate}
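To make the message flow concrete, the following self-contained Python sketch runs the whole protocol with toy parameters ($q=31$, $p=7$, $n=3$, $\ell=2$, $m=2$, $d=1$; far below the sizes proposed later). Polynomials are represented as evaluation closures, $\mvec{\psi}$ is taken in the triangular form of the next subsection, and univariate roots are found by brute force over $\mathbb{Z}_p$ instead of a root formula; this is an illustration of the steps above, not a secure implementation.
\begin{verbatim}
import random

q, p, n, ell = 31, 7, 3, 2     # toy parameters only
rng = random.Random(1)

def rand_affine(k):
    # Random affine form in x_1,...,x_k over F_q.
    c = [rng.randrange(q) for _ in range(k + 1)]
    return lambda x: (sum(ci * xi for ci, xi in zip(c[:k], x))
                      + c[k]) % q

# Alice: common key s in Z_p^n and linear maps f with f(s) = 0.
s = tuple(rng.randrange(p) for _ in range(n))
f_tilde = [rand_affine(n) for _ in range(ell)]        # degree d = 1
f = [(lambda g: lambda x: (g(x) - g(s)) % q)(g) for g in f_tilde]

# Bob: triangular quadratic psi (psi_i quadratic in x_i), r, t.
psi = [(rng.randrange(1, q), rand_affine(i), rand_affine(i))
       for i in range(n)]
r = [rand_affine(n) for _ in range(n)]                # degree m - d
t = [rng.randrange(ell) for _ in range(n)]

def psi_val(i, x):
    a, b, c = psi[i]
    return (a * x[i] * x[i] + b(x) * x[i] + c(x)) % q

def c_val(i, x):
    return (psi_val(i, x) + f[t[i]](x) * r[i](x)) % q

# Alice evaluates c at s; since f(s) = 0, u_i = psi_i(s).
u = [c_val(i, s) for i in range(n)]

# Bob: invert psi over Z_p^n, pruning branches that leave Z_p.
def invert(prefix):
    i = len(prefix)
    if i == n:
        return [prefix]
    return [sol for xi in range(p)
            if psi_val(i, prefix + (xi,)) == u[i]
            for sol in invert(prefix + (xi,))]

candidates = [v for v in invert(()) if all(fj(v) == 0 for fj in f)]
assert s in candidates
\end{verbatim}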
\begin{figure}[t!]
\centering
\begin{tabular}{|lll|}
\hline
Alice & & Bob \\ \hline
$\mvec{s} \stackrel{r}{\leftarrow} \mathbb{Z}_p^n$& & \\
$\mvec{f} \stackrel{r}{\leftarrow} \Lambda_{n,d}^{\ell}$& &\\
$\mvec{f}(\mvec{s}) = 0$ & $\xrightarrow{\mvec{f}}$& $\mvec{\psi} \stackrel{r}{\leftarrow} \Lambda_{n,m}^n$\\
& & $\mvec{r} \stackrel{r}{\leftarrow} \Lambda_{n,m-d}^n$\\
& & $t_i \stackrel{r}{\leftarrow} \{1,\dots,\ell\}$\\
&$\xleftarrow{\mvec{c}}$& $c_i:=\psi_i+f_{t_i}r_i$\\
$\mvec{u}:=\mvec{c}(\mvec{s})$ & &\\
&$\xrightarrow{\mvec{u}}$& $\mvec{s}\in \mvec{\psi}^{-1}(\mvec{u}) \cap \mathbb{Z}_p^n$\\
& & s.t.\ $\mvec{f}(\mvec{s})=0$\\\hline
\end{tabular}
\caption{Our proposed protocol}
\label{fig:proposed_protocol}
\end{figure}
\subsection{Construction of Easy-to-Invert Polynomials}
In our proposed protocol, the total efficiency depends highly on the efficiency of generating the polynomial map $\mvec{\psi}$ and computing the inverse $\mvec{\psi}^{-1}$.
Therefore, it is important to use polynomial maps $\mvec{\psi}$ that can be efficiently generated and whose preimage can be efficiently computed.
Here we adopt polynomial systems where the number of variables in each polynomial is gradually incremented, as in a Gr\"{o}bner basis of a zero-dimensional ideal with respect to the lexicographical order (see the example below).
For such a polynomial system, its preimage can be recursively computed by solving univariate polynomial equations and then substituting the solutions to the remaining polynomials.
In our proposed protocol, we only need solutions belonging to $\mathbb{Z}_p^n$; therefore we only have to keep the solutions of the univariate equations belonging to $\mathbb{Z}_p$, and we can prune a branch whenever no solution in $\mathbb{Z}_p$ exists.
This reduces the computational cost drastically, in contrast to the original protocol where we needed to solve univariate equations $d^n$ times.
\begin{example}
Here we give a toy example of our polynomial systems in the case of degree two with three variables over coefficient field $\mathbb{F}_5$:
\begin{eqnarray*}
\left\{
\begin{array}{l}
\psi_1(\mvec{x}) = 3{x_1}^2 + x_1 + 4 \\
\psi_2(\mvec{x}) = {x_2}^2 + 2{x_1}{x_2} + 4 x_1 + x_2 + 3 \\
\psi_3(\mvec{x}) = 4{x_1}^2 + 2{x_3}^2 + {x_1}{x_3} + 3{x_2}{x_3} + 1
\end{array}
\right.
\end{eqnarray*}
\end{example}
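For illustration, the following Python sketch computes $\mvec{\psi}^{-1}(\mvec{u}) \cap \mathbb{Z}_p^n$ for the toy system above by the levelwise root finding and pruning described in this subsection. It is a minimal sketch, not our implementation: the bound $p = 3$ and the reference point $\mvec{s}$ are hypothetical choices made only to make it runnable, and since $q = 5$ is tiny, brute force over $\mathbb{Z}_p$ stands in for a quadratic root formula.
\begin{verbatim}
q, p = 5, 3  # coefficient field F_q; only solutions in Z_p = {0,...,p-1} are kept

# psi_i(x_1,...,x_i) from the toy example, evaluated mod q
def psi1(x): return (3*x[0]**2 + x[0] + 4) % q
def psi2(x): return (x[1]**2 + 2*x[0]*x[1] + 4*x[0] + x[1] + 3) % q
def psi3(x): return (4*x[0]**2 + 2*x[2]**2 + x[0]*x[2] + 3*x[1]*x[2] + 1) % q
system = [psi1, psi2, psi3]

def preimage(u):
    """All s in Z_p^n with psi(s) = u, found by levelwise root finding."""
    partial = [()]  # nodes at the k-th level of the search tree
    for k, psi in enumerate(system):
        pad = (0,) * (len(system) - k - 1)  # unused trailing variables
        # keep the roots in Z_p of psi(partial solution, x_{k+1}) = u_k;
        # branches without such a root are pruned automatically
        partial = [node + (x,) for node in partial for x in range(p)
                   if psi(node + (x,) + pad) == u[k]]
    return partial

s = (1, 2, 0)               # hypothetical correct key in Z_3^3
u = [f(s) for f in system]  # Alice's evaluation vector
print(preimage(u))          # contains s, plus candidates to be filtered by f = 0
\end{verbatim}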
\subsection{Theoretical Estimate of Failure Probability}
\label{subsec:error_bound_analysis}
Here, for parameters $(q,p,n,m,d,\ell)$, we estimate the failure probability of our proposed protocol.
Here, \lq\lq failure\rq\rq{} means the case where two or more candidates remain after Bob's computation to determine the common key (i.e., the protocol is restarted at the final step).
In order to estimate the failure probability, we analyze the expected number of candidates for the common key computed by Bob.
Recall that Bob's computation at this step consists of the following two steps:
\begin{enumerate}
\item \label{error_analysis_step1}
Computing the preimage of $\mvec{\psi}$ in $\mathbb{Z}_p^n$.
\item \label{error_analysis_step2}
From the candidates obtained at Step \ref{error_analysis_step1}, excluding ones that do not satisfy the condition $\mvec{f} = 0$.
\end{enumerate}
We divide the argument into the two steps above.
From now, we focus on the case $m = 2$ and $d = 1$ which are our proposed parameters.
\paragraph{For Step \ref{error_analysis_step1}.}
Recall that now $\psi_1$ is a quadratic polynomial in $x_1$, $\psi_2$ is a quadratic polynomial in $x_1,x_2$, and so on, and $\psi_n$ is a quadratic polynomial in $x_1,\dots,x_n$.
Consequently, we can solve the system of equations by recursively solving univariate quadratic equations.
We represent this process by using a rooted tree structure from $0$-th level (root) to $n$-th level, where a node at $k$-th level corresponds to a partial solution $(s_1,\dots,s_k)$ obtained from the first $k$ polynomials $\psi_1,\dots,\psi_k$.
Hence the nodes at $n$-th level represent the candidates for the common key computed by Bob.
Now for each node $(s_1,\dots,s_k)$ at $k$-th level ($0 \leq k \leq n - 1$), there are the following three possibilities:
\begin{itemize}
\item
the node has no child nodes (that is, the quadratic equation $\psi_{k+1}(s_1,\dots,s_k,x_{k+1}) = u_{k+1}$ has no solution $x_{k+1} \in \mathbb{Z}_p$);
\item
the node has only one child node (that is, the quadratic equation $\psi_{k+1}(s_1,\dots,s_k,x_{k+1}) = u_{k+1}$ has a unique solution $x_{k+1} \in \mathbb{Z}_p$);
\item
the node has two child nodes (that is, the quadratic equation $\psi_{k+1}(s_1,\dots,s_k,x_{k+1}) = u_{k+1}$ has two different solutions $x_{k+1} \in \mathbb{Z}_p$).
\end{itemize}
Here we note that there is always at least one path from the root to a node at $n$-th level, which corresponds to the \lq\lq correct\rq\rq{} solution chosen by Alice.
We call it the correct path.
From now, we evaluate (by using some heuristic assumptions) an upper bound for the expected number of \lq\lq incorrect\rq\rq{} solutions, that is, paths from the root to a node at $n$-th level different from the correct path.
First we consider the case that the correct path has another branch at $k$-th level ($0 \leq k \leq n - 1$).
We note that the equation $\psi_{k+1}(s_1,\dots,s_k,x_{k+1}) = u_{k+1}$ has at least one solution in $\mathbb{F}_q$ (namely, the correct solution), and therefore it has two solutions in $\mathbb{F}_q$ (counted with multiplicity).
By heuristically assuming that the other solution is uniformly random over $\mathbb{F}_q$, the probability of branching, i.e., the probability that the other solution is in $\mathbb{Z}_p$ and is different from the correct solution, is at most $p/q$.
Secondly, we consider a situation where there exists a node, say $v$, at $k$-th level ($1 \leq k \leq n - 1$) that is not on the correct path, and evaluate the expected number of child nodes of $v$.
To simplify the argument, here we heuristically assume that the behavior of child nodes of $v$ is independent of the behaviors at the previous levels.
Evaluating the probability that the current univariate quadratic equation has a solution in $\mathbb{F}_q$ does not seem easy (note that if it has a solution in $\mathbb{F}_q$, then it has two solutions in $\mathbb{F}_q$, possibly with multiplicity); to derive an upper bound, here we simply bound this probability from above by $1$.
We also heuristically assume that now the two solutions distribute independently and uniformly at random over $\mathbb{F}_q$.
Under the assumption, the probability that the node $v$ has a first child node (that is, at least one of the two solutions belongs to $\mathbb{Z}_p$) is given by $1 - (1 - p/q)^2 = 2p / q - (p/q)^2 \leq 2p / q$; and the probability that the node $v$ has the second child node (that is, both of the two solutions belong to $\mathbb{Z}_p$ and these are different) is given by $(p/q) \cdot (p-1)/q \leq (p/q)^2$.
Hence, the expected number of child nodes of $v$ is upper bounded by $\alpha := 2p / q + (p/q)^2$.
By heuristically assuming that the behavior of each level is independent of each other, the expected number of nodes at $n$-th level appearing after branching from the correct path at $k$-th level (now the node $v$ above is at $(k+1)$-th level) is upper bounded by $\alpha^{n-(k+1)}$.
Therefore, the expected number of incorrect solutions is upper bounded by
\[
\sum_{k=0}^{n-1} \frac{ p }{ q } \cdot \alpha^{n-(k+1)}
= \frac{ p }{ q } \cdot \frac{ 1 - \alpha^n }{ 1 - \alpha}
= \frac{ p }{ q } \cdot \frac{ 1 - (2p / q + (p/q)^2)^n }{ 1 - (2p / q + (p/q)^2) } \enspace.
\]
\paragraph{For Step \ref{error_analysis_step2}.}
To simplify the argument, we heuristically assume that the values of $f_i(\mvec{s}')$ ($1 \leq i \leq \ell$) for an incorrect solution $\mvec{s}'$ are uniformly random over $\mathbb{F}_q$ and independent of each other.
Under the assumption, the probability that an incorrect solution $\mvec{s}'$ satisfies that $\mvec{f}(\mvec{s}') = 0$ is $1/q^{\ell}$.
\paragraph{}
Summarizing, by writing the number of incorrect candidates for the common key computed by Bob as $X$, we have
\[
\mathbb{E}[X]
\leq \frac{ p }{ q } \cdot \frac{ 1 - (2p / q + (p/q)^2)^n }{ 1 - (2p / q + (p/q)^2) } \cdot \frac{ 1 }{ q^{\ell} } \enspace.
\]
Our proposed protocol fails if and only if $X \geq 1$, and by Markov's Inequality, its probability is bounded by
\begin{equation}
\label{eq:theoretical_upper_bound}
\Pr[X \geq 1]
\leq \mathbb{E}[X]
\leq \frac{ p }{ q } \cdot \frac{ 1 - ( 2p/q + (p/q)^2 )^n }{ 1 - ( 2p/q + (p/q)^2 ) } \cdot \frac{ 1 }{ q^{\ell} } \enspace.
\end{equation}
By substituting our choice of parameters
\[
(q,p,n,m,d,\ell) = (46116646144580573897,19,32,2,1,1)
\]
into the formula above, we obtain an estimated upper bound $8.93 \times 10^{-39} \approx 1.52 \times 2^{-127}$ for the failure probability.
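The bound in Eq.\eqref{eq:theoretical_upper_bound} is straightforward to evaluate; as a sanity check (a minimal sketch using exact rational arithmetic, not part of the protocol), the following Python snippet reproduces the numbers above.
\begin{verbatim}
from fractions import Fraction

def failure_bound(q, p, n, ell):
    """Right-hand side of Eq. (theoretical_upper_bound)."""
    r = Fraction(p, q)
    alpha = 2*r + r**2
    return r * (1 - alpha**n) / (1 - alpha) / q**ell

b = failure_bound(46116646144580573897, 19, 32, 1)
print(float(b))           # ~8.93e-39
print(float(b * 2**127))  # ~1.52, i.e. b ~ 1.52 * 2^{-127}
\end{verbatim}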
\section{Experimental Results}
\label{sec:experimental_results}
In this section, we explain our experimental results on our proposed protocol.
We used a PC with 8 GB memory and 2 GHz Intel Core i5, and used Magma for implementation.
\subsection{Confirmation of the Theoretical Upper Bound}
In order to confirm that our theoretical upper bound in Eq.\eqref{eq:theoretical_upper_bound} under several heuristic assumptions is not too optimistic, we executed our proposed protocol many times and observed whether a failure occurs or not.
Here we used another parameter set $(q,p,n,m,d,\ell)=(130337,19,32,2,1,1)$, as the original parameter set yields an estimated failure probability too small to be confirmed experimentally within a feasible number of trials.
For our parameter here, the theoretical upper bound in Eq.\eqref{eq:theoretical_upper_bound} becomes $1.12 \times 10^{-9}$.
We performed $100000$ trials with this parameter set, and no failure occurred during the experiment.
At least this experimental result does not contradict the theoretical estimate.
\subsection{Experiments with Parameters Similar to the Previous Protocol}
\label{sec:experimental_results__similar_parameter}
For the sake of comparison, as a parameter set similar to $(q,n,m,d)=(4,25,2,1)$ used in the previous protocol \cite{Akiyama}, we performed experiments using parameter $(q,p,n,m,d,\ell)=(7,2,32,2,1,1)$.
We executed our protocol $300$ times, and the protocol succeeded $287$ times, therefore the success ratio was about $95.6\%$.
This improves the success ratio $89.9\%$ of the previous protocol mentioned in Section \ref{sec:previous_protocol__disadvantage}.
We note that the theoretical upper bound in Eq.\eqref{eq:theoretical_upper_bound} of the failure probability with this parameter becomes $0.118$, which indeed bounds the experimental failure ratio $0.044$.
We also performed experiments using another parameter $(q,p,n,m,d,\ell) = (7,4,32,2,1,1)$; in this case, the protocol succeeded only $20$ times among $300$ trials.
The significantly lower success ratio is likely caused by the fact that the ratio $p / q$ of the range of the correct solution within the whole coefficient field becomes too large.
In fact, the theoretical upper bound in Eq.\eqref{eq:theoretical_upper_bound} of the failure probability now becomes $3.88 \times 10^5$, which is a vacuous value (larger than $1$).
\subsection{Computational Time for Our Protocol}
Table \ref{tab:execution_time} shows the computational times of our protocol with parameter
\[
(q,n,m,d,\ell) = (46116646144580573897,32,2,1,1)
\]
and various choices of $p = 19$, $19^2$, $19^3$.
The execution time increased as $p$ became larger, but the change in execution times among these choices of $p$ is not significantly large.
The reason would be that now the ratio $p/q$ is too small (e.g., $p/q = 1.49 \times 10^{-16}$ when $p = 19^3$) to affect the number of nodes in the tree (that is, the total number of equations to be solved).
\begin{table}[t!]
\centering
\caption{Computational times for our proposed protocol with $p = 19$, $19^2$, and $19^3$}
\label{tab:execution_time}
\begin{tabular}{|c|c|c|} \hline
$p$& total time for $1000$ executions (s) & average time (s) \\ \hline
$19$ & $71.460$ & $7.15 \times 10^{-2}$\\
$19^2$ & $72.800$ & $7.28 \times 10^{-2}$\\
$19^3$ & $239.370$ & $2.39 \times 10^{-1}$\\ \hline
\end{tabular}
\end{table}
\subsection{Relation between Failure Ratios and Parameter $\ell$}
One of the main differences of our proposed protocol from the previous protocol is that we may now use $\ell \geq 1$ conditions $\mvec{f} = 0$, rather than only a single condition, to exclude incorrect common keys.
Table \ref{tab:probability_with_various_l} shows the numbers of success (among $1000$ trials) in our experiments with two parameter sets $(q,p,n,m,d) = (5,2,32,2,1)$ and $(3,2,32,2,1)$ and various choices of $\ell = 1,\dots,5$.
The result shows that the failure ratio decreases when $\ell$ increases, which is consistent with the theoretical estimate given in Section \ref{subsec:error_bound_analysis}.
\begin{table}[t!]
\centering
\caption{Numbers of success with various choices of $\ell$ ($1000$ trials for each parameter)}
\label{tab:probability_with_various_l}
\begin{tabular}{|c|c|c|} \hline
$(q,p,n,m,d)$ & $(5,2,32,2,1)$ & $(3,2,32,2,1)$\\ \hline
$\ell = 1$ & $806$ & $26$\\
$\ell = 2$ & $950$ & $186$\\
$\ell = 3$ & $993$ & $552$\\
$\ell = 4$ & $996$ & $802$\\
$\ell = 5$ & $999$ & $945$\\ \hline
\end{tabular}
\end{table}
\subsection{Degree of Regularity}
\label{sec:experimental_result__degree_of_regularity}
Here we explain our experiments about the degree of regularity used in Section \ref{sec:security_evaluation__Grobner_basis}.
If we consider solving the system of equations $\mvec{f}(\mvec{x}) = 0$ and $\mvec{c}(\mvec{x}) = \mvec{u}$ in $\mathbb{F}_q^n$ instead of $\mathbb{Z}_p^n$, the corresponding ideal is generated by $\mvec{f}(\mvec{x})$ and $c_1(\mvec{x}) - u_1, \dots, c_n(\mvec{x}) - u_n$.
In order to add the constraint that the solution has to be found in $\mathbb{Z}_p^n$, we introduce the following polynomial system $\mvec{h}$:
\begin{eqnarray*}
h_i(\mvec{x}) = \prod_{\gamma = 0}^{p-1} (x_i - \gamma) \mbox{ ($1 \leq i \leq n$)} \enspace.
\end{eqnarray*}
Now the set of solutions in $\mathbb{Z}_p^n$ corresponds to the ideal $\mathcal{I}$ generated by $\mvec{f}(\mvec{x})$, $c_1(\mvec{x}) - u_1, \dots, c_n(\mvec{x}) - u_n$, and $\mvec{h}(\mvec{x})$.
We fix parameters $(q,m,d,\ell) = (46116646144580573897,2,1,1)$, and calculated the degree of regularity for the cases $2 \leq n \leq 10$ and $p \in \{2,3,4,19\}$.
The result was that the degree of regularity of the ideal $\mathcal{I}$ is always $d_{reg} = n + 1$.
Based on this result, we expect that the degree of regularity of any ideal of this type is $d_{reg} = n + 1$, regardless of whether $p$ is larger than $n$ or smaller than $n$; this observation is applied in Section \ref{sec:security_evaluation__Grobner_basis} to the case $p = 19$ and $n = 32$.
\section{Comparison with Previous Protocol}
\label{sec:comparison}
\subsection{Failure Probabilities of Protocols}
As shown in Section \ref{subsec:error_bound_analysis}, the failure probability of our proposed protocol with the proposed parameter set is of the order of $10^{-39}$, which is significantly lower than the experimentally derived failure ratio of $10.1\%$ for the previous protocol.
We emphasize that this improvement is not just due to different choices of parameters; the argument in Section \ref{sec:experimental_results__similar_parameter} showed that our proposed protocol also improves the previous protocol even with the choice of similar parameters.
We also note that, while the previous paper \cite{Akiyama} did not give any theoretical analysis for failure probabilities with various parameters, we give a theoretical estimate in Eq.\eqref{eq:theoretical_upper_bound} for failure probabilities of our proposed protocol, which suggests that the failure probability will be reduced when $q$ becomes larger and $p/q$ becomes smaller.
\subsection{Necessary Numbers of Communication Rounds}
\label{sec:comparison__rounds}
Here we suppose that the \lq\lq practical failure probability\rq\rq{} of key exchange protocols is $2^{-120} \approx 7.52 \times 10^{-37}$, taken from the failure probability with a parameter set LightSaber-KEM of SABER \cite{SABER} which is one of the Round 3 Finalists of NIST PQC standardization.
The failure probability of one execution of the previous protocol is $0.101$, therefore $37$ trials are needed to achieve the overall failure probability $2^{-120}$.
One protocol execution uses two communication rounds between Alice and Bob, therefore the total number of communication rounds is $2 \times 37 = 74$.
In contrast, our proposed protocol with the proposed parameter set already achieves failure probability $1.52 \times 2^{-127} < 2^{-120}$, therefore the required number of communication rounds is just two.
\subsection{Communication Complexity}
We compare the amount of communication bits between Alice and Bob, for our proposed protocol and the previous protocol.
Here we express a polynomial as a vector of its coefficients; a polynomial of degree $d$ with $n$ variables is represented by a vector of dimension $\sum_{k=0}^{d} \binom{k+n-1}{k}$.
Also, we suppose that an element of $\mathbb{F}_q$ is represented by $\log_{2}q$ bits.
In the previous protocol, the communicated objects are the single polynomial $f$ (corresponding to $\mvec{f}$ with $\ell = 1$), $\mvec{g}$, $\mvec{c}$, and $\mvec{u}$.
Here $f$ is a single degree-$d$ polynomial with $n$ variables over $\mathbb{F}_q$; $\mvec{g}$ consists of $n$ degree-$1$ polynomials with $n$ variables (each having $n + 1$ coefficients) over $\mathbb{F}_q$; $\mvec{c}$ consists of $n$ degree-$m$ polynomials with $n$ variables over $\mathbb{F}_q$; and $\mvec{u}$ is an $n$-dimensional vector over $\mathbb{F}_q$.
Therefore, the numbers of communicated elements of $\mathbb{F}_q$ are as in the left column of Table \ref{tab:communication_complexity}.
\begin{table}[t!]
\centering
\caption{Comparison of communication complexity}
\label{tab:communication_complexity}
\begin{tabular}{|c|c|c|} \hline
& previous protocol ($\times \log_2 q$ bits) & our protocol ($\times \log_2 q$ bits) \\ \hline
$\mvec{f}$ & $\sum_{k=0}^{d} \binom{k+n-1}{k}$ & $\ell \cdot \sum_{k=0}^{d} \binom{k+n-1}{k}$ \\
$\mvec{g}$ & $n^2+n$ & --- \\
$\mvec{c}$ & $n \cdot \sum_{k=0}^{m} \binom{k+n-1}{k}$ & $n \cdot \sum_{k=0}^{m} \binom{k+n-1}{k}$ \\
$\mvec{u}$ & $n$ & $n$\\
total & $\sum_{k=0}^{d} \binom{k+n-1}{k} + n \cdot \sum_{k=0}^{m} \binom{k+n-1}{k} + n^2 + 2n$ & $\ell \cdot \sum_{k=0}^{d} \binom{k+n-1}{k} + n \cdot \sum_{k=0}^{m} \binom{k+n-1}{k} + n$ \\ \hline
\end{tabular}
\end{table}
In our proposed protocol, the changes from the previous protocol are the following two points; the number of polynomials in $\mvec{f}$ becomes $\ell$ instead of one; and $\mvec{g}$ is not communicated.
Therefore, the numbers of communicated elements of $\mathbb{F}_q$ are as in the right column of Table \ref{tab:communication_complexity}.
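As a cross-check of the totals in Table \ref{tab:communication_complexity}, the following sketch evaluates the communication complexity of both protocols for the parameter sets discussed below; it only reproduces the counting and is not part of either protocol.
\begin{verbatim}
from math import comb, log2

def monomials(n, d):
    # number of monomials of total degree <= d in n variables
    return sum(comb(k + n - 1, k) for k in range(d + 1))

def bits_previous(q, n, m, d):
    elems = monomials(n, d) + n*monomials(n, m) + n**2 + 2*n
    return elems * log2(q)

def bits_ours(q, n, m, d, ell):
    elems = ell*monomials(n, d) + n*monomials(n, m) + n
    return elems * log2(q)

print(bits_previous(9, 50, 2, 1))                    # ~2.19e5 bits per execution
print(bits_ours(46116646144580573897, 32, 2, 1, 1))  # ~1.18e6 bits per execution
\end{verbatim}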
For a parameter set $(q,n,m,d) = (9,50,2,1)$ used in the previous paper \cite{Akiyama}, the communication complexity of one execution of the previous protocol is $2.19 \times 10^5$ bits.
As the previous protocol needs $37$ trials to achieve the practical failure probability (see Section \ref{sec:comparison__rounds}), the total communication complexity becomes $8.10 \times 10^6$ bits.
In contrast, our proposed protocol with the proposed parameter
\[
(q,p,n,m,d,\ell) = (46116646144580573897,19,32,2,1,1)
\]
achieves the practical failure probability by only one execution, and the communication complexity is $1.18 \times 10^6$ bits.
Hence our proposed protocol improves the communication complexity compared to the previous protocol.
\section{Security Evaluation}
\label{sec:security_evaluation}
In this section, we analyze the security of our proposed protocol.
\subsection{Constrained PME Problem}
\label{subsec:constrained_PME_problem}
Here we define a computational problem named constrained PME problem, which is a modification of PME problem described in Section \ref{subsec:PME_problem} and is a base of the security of our proposed protocol.
\begin{problem}
[constrained PME problem]
For $(f_1(\mvec{x}), \dots, f_{\ell}(\mvec{x}), c_1(\mvec{x}), \dots, c_n(\mvec{x})) \in \mathbb{F}_q[\mvec{x}]^{n + \ell}$ and $(u_1, \dots, u_n) \in \mathbb{F}_q^n$, the constrained PME problem is the problem of finding a solution belonging to $\mathbb{Z}_p^n$ of the following system of multivariate polynomial equations:
\begin{eqnarray*}
\begin{cases}
f_1(x_1, \dots, x_n) = 0 \\
\qquad\vdots\\
f_{\ell}(x_1, \dots, x_n) = 0 \\
c_1(x_1, \dots, x_n) = u_1\\
\qquad\vdots\\
c_n(x_1, \dots, x_n) =u_n \enspace.
\end{cases}
\end{eqnarray*}
\end{problem}
\subsection{Security against Key Recovery Attacks}
As in the previous paper \cite{Akiyama}, in this paper we consider the security against the Key Recovery Attack by Honest Passive Observer (KRA-HPO).
In this scenario, an attacker is assumed not to interfere with the communication between Alice and Bob.
We then prove that our proposed protocol is secure in this sense, assuming the hardness of the constrained PME problem defined above.
To formulate the security, let $\Sigma$ denote our proposed key exchange protocol, as described in Figure \ref{fig:our_protocol_for_security_definition}.
Let $\mathcal{A}$ denote an adversary's attack algorithm.
Then we define the security experiment for KRA-HPO adversary as in Figure \ref{fig:KRA-HPO_experiment}, where $\mathsf{Gen}$ denotes an algorithm to generate the parameters of our proposed protocol.
Now we define the security as follows:
\begin{figure}[t!]
\centering
\begin{tabular}{|l|}
\hline
$\Sigma(q,p,n,m,d,\ell)$ \\ \hline
$\mvec{s}$ $\stackrel{r}{\leftarrow}$ $\mathbb{Z}_p^n;$\\
$\mvec{f}$ $\stackrel{r}{\leftarrow}$ $\Lambda_{n,d}^{\ell}$ s.t.\ $\mvec{f}(\mvec{s}) = 0;$\\
$\mvec{r}$ $\stackrel{r}{\leftarrow}$ $\Lambda_{n, m-d}^{n};$\\
$\mvec{\psi}$ $\stackrel{r}{\leftarrow}$ $\Lambda_{n,m}^n;$\\
$t_i$ $\stackrel{r}{\leftarrow}$ $\{1,\dots,\ell\};$\\
$c_i$ $\leftarrow$ $\psi_i + f_{t_i}\cdot r_i;$\\
$\mvec{u}$ $\leftarrow$ $\mvec{c}(\mvec{s});$\\
Output $(\mvec{s},\mvec{f},\mvec{c},\mvec{u})$\\
\hline
\end{tabular}
\caption{Our proposed key exchange protocol $\Sigma$}
\label{fig:our_protocol_for_security_definition}
\end{figure}
\begin{figure}[t!]
\centering
\begin{tabular}{|l|}
\hline
$\mathsf{Exp}_{\Sigma, \mathcal{A}}^{\mathrm{KRA-HPO}}(\kappa)$ \\ \hline
$(q,p,n,m,d,\ell)\stackrel{r}{\leftarrow} \mathsf{Gen}(1^{\kappa});$\\
$(\mvec{s},\mvec{f},\mvec{c},\mvec{u})\stackrel{r}{\leftarrow}\Sigma(q,p,n,m,d,\ell);$\\
$\mvec{s}{}' \leftarrow\mathcal{A}(\mvec{f},\mvec{c},\mvec{u});$\\
Output $( \mvec{s}, \mvec{s}{}' )$\\
\hline
\end{tabular}
\caption{Security experiment for KRA-HPO adversary $\mathcal{A}$ against protocol $\Sigma$}
\label{fig:KRA-HPO_experiment}
\end{figure}
\begin{definition}
We say that the key exchange protocol $\Sigma$ is KRA-HPO secure when for any probabilistic polynomial-time (PPT) adversary $\mathcal{A}$, its advantage defined below is negligible in the security parameter $\kappa$:
\begin{eqnarray*}
\mathsf{Adv}_{\Sigma, \mathcal{A}}^{\mathrm{KRA-HPO}}(\kappa) := \Pr[\mvec{s} = \mvec{s}{}' \mid (\mvec{s}, \mvec{s}{}' ) \leftarrow \mathsf{Exp}_{\Sigma, \mathcal{A}}^{\mathrm{KRA-HPO}}(\kappa)] \enspace.
\end{eqnarray*}
\end{definition}
We also put the following assumption on the hardness of constrained PME problem:
\begin{assumption}
[hardness of constrained PME problem]
Suppose that $\mvec{f}$, $\mvec{c}$, and $\mvec{u}$ are chosen as in our proposed protocol.
We assume that for any PPT algorithm $\mathcal{B} = \mathcal{B}(1^{\kappa},\mvec{f},\mvec{c},\mvec{u})$, the probability that its output $\mvec{s}{}'$ satisfies that $\mvec{f}(\mvec{s}{}') = 0$ and $\mvec{c}(\mvec{s}{}') = \mvec{u}$ is negligible in $\kappa$.
\end{assumption}
Then we have the following theorem:
\begin{theorem}
Under the assumption on the hardness of constrained PME problem explained above, our proposed protocol $\Sigma$ is KRA-HPO secure.
In more detail, if a PPT adversary $\mathcal{A}$ breaks the security of $\Sigma$ with advantage $\mathsf{Adv}_{\Sigma, \mathcal{A}}^{\mathrm{KRA-HPO}}(\kappa)$, then there exists a PPT algorithm $\mathcal{B}$ solving the constrained PME problem with probability at least $\mathsf{Adv}_{\Sigma, \mathcal{A}}^{\mathrm{KRA-HPO}}(\kappa)$.
\end{theorem}
\begin{proof}
Given an adversary $\mathcal{A}$ as in the statement, we define an algorithm $\mathcal{B}(1^{\kappa},\mvec{f},\mvec{c},\mvec{u})$ as follows: it runs $\mathcal{A}(\mvec{f},\mvec{c},\mvec{u})$ to obtain $\mvec{s}{}'$, and outputs this $\mvec{s}{}'$.
This $\mathcal{B}$ is PPT, as $\mathcal{A}$ is.
Now the condition $\mvec{s}{}' = \mvec{s}$ holds in the experiment $\mathsf{Exp}_{\Sigma, \mathcal{A}}^{\mathrm{KRA-HPO}}(\kappa)$ with probability $\mathsf{Adv}_{\Sigma, \mathcal{A}}^{\mathrm{KRA-HPO}}(\kappa)$; if it holds, then the output of $\mathcal{B}$ also satisfies $\mvec{s}{}' = \mvec{s}$, and since $\mvec{f}(\mvec{s}) = 0$ and $\mvec{c}(\mvec{s}) = \mvec{u}$ by definition, we have $\mvec{f}(\mvec{s}{}') = 0$ and $\mvec{c}(\mvec{s}{}') = \mvec{u}$, i.e., $\mathcal{B}$ solves the constrained PME problem.
Hence the claim holds.
\end{proof}
\subsection{On Solving the Problem Using Gr\"{o}bner Basis}
\label{sec:security_evaluation__Grobner_basis}
In order to support our security assumption described above, here we consider solving the constrained PME problem by computing a Gr\"{o}bner basis of the ideal $\mathcal{I}$ generated by $\mvec{f}(\mvec{x})$, $c_1(\mvec{x}) - u_1,\dots,c_n(\mvec{x}) - u_n$, and $\mvec{h}(\mvec{x})$, where the polynomials $\mvec{h}$ are as defined in Section \ref{sec:experimental_result__degree_of_regularity}.
The experimental result described in Section \ref{sec:experimental_result__degree_of_regularity} suggests that the ideal $\mathcal{I}$ is semi-regular; if this is true, then the degree of regularity of $\mathcal{I}$ is $d_{reg} = 33$.
Now when we compute a Gr\"{o}bner basis of $\mathcal{I}$ using the $F_5$ algorithm, the computational complexity is estimated (as in Theorem \ref{thm:complexity_of_F5}) to be of the order of $\binom{n + d_{reg}}{n}^{\omega}$.
By substituting $n = 32$, $d_{reg}= 33$, and by using an estimate $\omega = 2.3$ as in \cite{omega}, the value becomes
\[
\binom{n + d_{reg}}{n}^{\omega}
\approx 4.8 \times 10^{42} \enspace.
\]
This value is significantly larger than $2^{128} \approx 3.4 \times 10^{38}$.
Hence it is expected that our proposed protocol would be secure in the sense of $128$-bit security against this kind of attacks.
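This estimate is elementary to reproduce; a short sketch (under the semi-regularity assumption above):
\begin{verbatim}
from math import comb

n, d_reg, omega = 32, 33, 2.3
print(comb(n + d_reg, n) ** omega)  # ~4.8e42 field operations
print(2.0 ** 128)                   # ~3.4e38, the 128-bit security threshold
\end{verbatim}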
\subsection{On Attacks Using Linear Algebra}
Here we consider a class of attacks that try to recover the polynomial map $\mvec{\psi}$; if this were possible, then the adversary could compute the set $\mvec{\psi}^{-1}(\mvec{u}) \cap \mathbb{Z}_p^n$, which contains the common key $\mvec{s}$.
For the choice of parameter $\ell = 1$ mainly used in this paper, we note that by the construction of the protocol, the following relation holds:
\begin{eqnarray*}
\psi_i(\underline{x}) + f(\underline{x})r_i(\underline{x}) = c_i(\underline{x}) \mbox{ for } i=1, \dots, n \enspace.
\end{eqnarray*}
Here, for each $i$, the coefficients of $f$ and $c_i$ are known and the coefficients of $\psi_i$ and $r_i$ are not known.
This situation can be regarded as a system of linear equations.
Now the number of equations and the number of unknown coefficients in $\psi_i$ are equal, therefore the dimension of the solution space is equal to the number, say $N$, of unknown coefficients in $r_i$.
As $\deg r_i = m-d$, we have
\[
N = \sum_{k=0}^{m-d} \binom{k+n-1}{k} \enspace.
\]
With our proposed parameters
\[
(q,p,n,m,d,\ell) = (46116646144580573897,19,32,2,1,1) \enspace,
\]
we have $N = n + 1 = 33$ and $q^N = 8.08 \times 10^{648}$.
Therefore the number of candidates for the solution is significantly larger than $2^{128}$ and hence our proposed protocol is secure in the sense of $128$-bit security against this kind of attacks.
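As a quick cross-check of these counts:
\begin{verbatim}
from math import comb, log10

q, n, m, d = 46116646144580573897, 32, 2, 1
N = sum(comb(k + n - 1, k) for k in range(m - d + 1))  # coefficients of r_i
print(N)             # 33 = n + 1
print(N * log10(q))  # ~648.9, so q^N ~ 8.08e648 candidate solutions
\end{verbatim}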
\subsection{On the Exhaustive Search with the Help of $\mvec{f}$}
Consider the exhaustive search for the common key $\mvec{s}$ over the range $\mathbb{Z}_p^n$. When we use the parameters $\ell = 1$ and $d = 1$ as above, the number of candidates for $\mvec{s}$ is reduced by a factor of $p$.
Indeed, when the coefficient of $x_i$ in $f$ is non-zero, for each value of $s_1,\dots,s_{i-1},s_{i+1},\dots,s_n$, the condition $f(\mvec{s}) = 0$ yields at most one possible value of $s_i \in \mathbb{Z}_p$.
Due to this observation, when we want to keep $128$-bit security, we have to compare $2^{128}$ with $p^{n-1}$ instead of $p^n$.
With our proposed parameters as above, we have $p^{n-1} = 19^{31} \approx 4.38 \times 10^{39} > 2^{128} \approx 3.40 \times 10^{38}$.
Therefore our proposed protocol is secure in the sense of $128$-bit security against this kind of attacks.
When the parameter $\ell$ is increased, there is an advantage that the upper bound of the failure probability decreases as shown in Eq.\eqref{eq:theoretical_upper_bound}.
However, there is also a disadvantage that the larger number of conditions in $\mvec{f}$ restricts the range of the common key further, which may make the exhaustive search easier.
\subsection*{Acknowledgements.}
The authors thank Tsuyoshi Takagi, Momonari Kudo, Hiroki Furue, and Yacheng Wang for their precious advice for this research.
This research was supported by the Ministry of Internal Affairs and Communications SCOPE Grant Number 182103105.
\section{Introduction} \label{section_introduction}
It is well known that Saturn, Jupiter, Uranus, and Neptune have ring systems. For Mars, no rings have been detected yet, but \citet{soter1971dust} suggested that Mars should possess a ring system consisting of dust grains originating from Phobos and Deimos. A number of studies contributed to the dynamical modeling of the suspected Martian dust rings, which are reviewed in several papers \citep{hamilton1996asymmetric, krivov1997martian, krivov2006search, zakharov2014dust}. Attempts to detect the rings are described by \citet{showalter2006deep}, \citet{zakharov2014dust}, and \citet{showalter2017dust}. The Japan Aerospace Exploration Agency (JAXA) will launch the Martian Moons Exploration mission in 2024 with the Circum-Martian Dust Monitor \citep{kobayashi2018situ}, an instrument that will conduct in-situ dust measurements.
Recent studies of particle dynamics in the Martian dust rings were presented by \citet{makuch2005long} and \citet{krivov2006search}. In addition to solar radiation pressure and the Martian oblateness $J_2$, \citet{makuch2005long} included in their model the Poynting-Robertson (P-R) drag, which is important for grains with long lifetimes. They integrated the orbit-averaged equations of motion for dust grains to obtain the spatial structure of the Deimos torus. Furthermore, the Martian eccentricity ($e_\mathrm{Mars}$) combined with the three aforementioned perturbation forces was considered by \citet{krivov2006search}, who also integrated the orbit-averaged equations of motion. From their numerical simulations the configurations of the Phobos and Deimos dust rings were obtained and the ring optical depths were estimated. The effect of $e_\mathrm{Mars}$ on the dust dynamics was also discussed by \citet{showalter2006deep}.
In this paper we study the full dynamics of dust particles ejected from Phobos and Deimos in terms of direct numerical integrations of the equations of motion. Particle lifetimes and spatial configurations of number density are obtained for 12 different grain sizes ranging from $0.5\mu\mathrm{m}$ to $100\mu\mathrm{m}$. The steady state size distributions, dominant grain sizes, cumulative grain densities and geometric optical depths for the rings are derived by averaging over the initial mass distribution of ejecta. The major asymmetries of the rings are evaluated in a sun-fixed reference frame, allowing for quantitative predictions for future in-situ measurements by forthcoming missions.
The paper is organized as follows. In Section \ref{section_production}, the mass production rates of dust from both source moons are estimated. The equations of motion and the numerical integrations are described in Section \ref{section_simulations}. The resulting lifetimes of various grain sizes are presented in Section \ref{section_lifetime}, and the effect of the planetary odd zonal harmonic coefficient $J_3$ is analyzed in Section \ref{section_J3}. In Section \ref{section_configuration}, we present the resulting spatial configuration of the Phobos and Deimos rings. A discussion of the model results and uncertainties, along with our conclusions, is given in Section \ref{section_comparison_uncertainties}.
\section{Dust production rates} \label{section_production}
The mass production rate of dust ejected from the surface of Phobos and Deimos by impacts of the hypervelocity interplanetary particles can be written as \citep{krivov2003impact}
\begin{equation} \label{eq:M+}
M^+= \alpha_\mathrm{G}\, F_\mathrm{imp}^{\infty}\, Y\,S \,,
\end{equation}
where $F_\mathrm{imp}^{\infty}$ is the mass flux of the interplanetary projectiles, $\alpha_\mathrm{G}$ accounts for the enhancement of the flux due to gravitational focusing by Mars, $Y$ is the yield (the ratio of the total mass of the ejected particles to the mass of the projectiles), and $S$ is the cross section of the source moon. The value of $\alpha_\mathrm{G}$ is calculated from Eq.~(14) in \citet{2006P&SS...54.1024S}. For Phobos we obtain $\alpha_\mathrm{G} \! \approx \! 1.003$ and a focused impact velocity $v_\mathrm{imp} \! = \! 15.30 \, \mathrm{km\,s^{-1}}$; for Deimos we have $\alpha_\mathrm{G} \! \approx \! 1.010$ and $v_\mathrm{imp} \! = \! 15.12 \, \mathrm{km\,s^{-1}}$. The gravitational focusing at Phobos and Deimos is weak because of the small mass of Mars. The yield $Y$ is calculated with the empirical formula by \citet{2001Icar..154..391K}. For $F_\mathrm{imp}^{\infty}$ at Mars we adopt the same value as \citet{krivov2006search}, i.e.~$1 \times 10^{-15} \, \mathrm{kg\,m^{-2}\,s^{-1}}$, obtained from interplanetary dust models \citep{1985Icar...62..244G, divine1993five}, which is consistent in order of magnitude with the number from the Interplanetary Meteoroid Engineering Model (version 1.1) \citep{2011Mints, 2005AdSpR..35.1282D}. For a pure silicate surface of the moons, we obtain mass production rates $M^+$ of about $1.1 \times 10^{-1} \, \mathrm{g/s}$ for ejecta from Phobos and about $3.4 \times 10^{-2} \, \mathrm{g/s}$ for ejecta from Deimos. We note that there are large uncertainties in the estimate of the mass production rate (see Section \ref{section_comparison_uncertainties}).
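As an illustration of Eq.~(\ref{eq:M+}), the following sketch evaluates $M^+$ for Phobos; the single characteristic yield $Y$ used here is a hypothetical stand-in for the size- and velocity-dependent formula of \citet{2001Icar..154..391K}, chosen only to reproduce the order of magnitude quoted above.
\begin{verbatim}
from math import pi

F_imp   = 1e-15   # interplanetary mass flux at Mars [kg m^-2 s^-1]
alpha_G = 1.003   # gravitational focusing factor for Phobos
Y       = 3e2     # assumed characteristic ejecta yield (hypothetical)
R_ph    = 11e3    # mean radius of Phobos [m]
S       = pi * R_ph**2            # cross section [m^2]

M_plus = alpha_G * F_imp * Y * S  # Eq. (1), in [kg/s]
print(M_plus * 1e3)               # ~1.1e-1 g/s, as quoted in the text
\end{verbatim}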
\section{Numerical simulations} \label{section_simulations}
A well-tested numerical code \citep[see][]{liu2016dynamics, liu2018dust, liu2018comparison} is used to integrate the evolution of dust grains. Grains of sizes ranging from submicron to 100 microns are simulated: $0.5 \,\mathrm{\mu m}$, $1 \, \mathrm{\mu m}$, $2 \, \mathrm{\mu m}$, $5 \, \mathrm{\mu m}$, $10 \, \mathrm{\mu m}$, $15 \, \mathrm{\mu m}$, $20 \, \mathrm{\mu m}$, $25 \, \mathrm{\mu m}$, $30 \, \mathrm{\mu m}$, $40 \, \mathrm{\mu m}$, $60 \, \mathrm{\mu m}$, and $100 \, \mathrm{\mu m}$, including the effects of Martian gravity, gravitational perturbations from the Sun, Phobos, and Deimos, solar radiation pressure, as well as P-R drag.
The shapes of Jupiter and Saturn are both nearly hemispherically and axially symmetric, which is not the case for Mars. Thus, in our model the Martian gravity field up to 5th degree and 5th order is considered. We use silicate as the material for dust and adopt a bulk density of 2.37 $\mathrm {g \, cm^{-3}}$ \citep{makuch2005long, krivov2006search}.
The equations of motion for dust particles in the Martian system read
\begin{equation} \label{equ_dynamic_model}
\begin{split}
\ddot{\vec r} = &GM_\mathrm M\nabla \Bigg[ \frac{1}{r}+\frac{1}{r}\sum_{l=1}^{l_\mathrm{max}}\sum_{m=0}^{l}\left(\frac{R_\mathrm M}{r}\right)^l \times \\
& P_{lm}(\sin\phi) (C_{lm}\cos m\lambda + S_{lm}\sin m\lambda) \Bigg] \\
& + GM_{\mathrm P}\left(\frac{\vec r_{\mathrm {dP}}}{r_{\mathrm {dP}}^3}-\frac{\vec r_{\mathrm P}}{r_{\mathrm P}^3}\right) + GM_{\mathrm D}\left(\frac{\vec r_{\mathrm {dD}}}{r_{\mathrm {dD}}^3}-\frac{\vec r_{\mathrm D}}{r_{\mathrm D}^3}\right) \\
& + GM_\mathrm{S}\left(\frac{\vec r_\mathrm{dS}}{r_\mathrm{dS}^3}-\frac{\vec r_\mathrm S}{r_\mathrm S^3}\right) \\
& + \frac{3Q_\mathrm SQ_\mathrm {pr}\mathrm{AU}^2}{4r_\mathrm {Sd}^2\rho_\mathrm gr_\mathrm gc}\left\{\left[1-\frac{\dot r_\mathrm {Sd}}{c}\right]\hat{\vec r}_\mathrm {Sd} - \frac{\dot{\vec r}_\mathrm {Sd}}{c}\right\} \,.
\end{split}
\end{equation}
Here $G$ is the gravitational constant, $M_\mathrm M$ the mass of Mars, and $R_\mathrm M$ the Martian reference radius. We denote with $P_{lm}$ the associated Legendre functions of degree $l$ and order $m$, where $l_\mathrm{max}=5$ in this work, and $C_{lm}$ and $S_{lm}$ are the spherical harmonics of the Martian gravity field. Further, $\phi$ and $\lambda$ are the latitude and longitude in the Martian body-fixed frame, respectively, $M_{\mathrm P}$ ($M_{\mathrm D}$) is the mass of Phobos (Deimos), $\vec r_{\mathrm {dP}}$ ($\vec r_{\mathrm {dD}}$) is the vector from the dust particle to Phobos (Deimos), and $\vec r_{\mathrm P}$ ($\vec r_{\mathrm D}$) is the position vector of Phobos (Deimos). Similarly, $M_{\mathrm S}$ is the mass of the Sun, $\vec r_{\mathrm {dS}}$ is the vector from the dust particle to the Sun, and $\vec r_{\mathrm S}$ is the radius vector of the Sun. The symbol AU denotes the astronomical unit, $Q_\mathrm S$ is the solar radiation energy flux at one AU, $Q_\mathrm{pr}$ is the solar radiation pressure efficiency, $\vec r_{\mathrm {Sd}} = -\vec r_{\mathrm {dS}}$, $\rho_\mathrm g$ is the bulk density of the grain, $r_\mathrm g$ is the grain radius, and $c$ is the speed of light.
The values of the gravity spherical harmonics are taken from the Mars gravity model MRO120D \citep{konopliv2016improved}. The formulas for solar radiation pressure and the P-R drag are taken from \citet{burns1979radiation}. The size-dependent values of the solar radiation pressure efficiency $Q_\mathrm{pr}$ are computed from Mie theory \citep{mishchenko1999bidirectional,mishchenko2002scattering} for spherical grains (see Fig.~3 in \citet{liu2018dust}), using the optical constants for silicate grains from \citet{mukai1989cometary}. Due to the small size of Phobos ($R_\mathrm{P} \! \approx \!$ 11 km) and Deimos ($R_\mathrm{D} \! \approx \!$ 6 km), the dust ejection velocity is higher than the escape velocity but much lower than the orbital velocity of the moons \citep{horanyi1990toward, horanyi1991dynamics}. Therefore, it is a very good approximation to start grains directly from the orbits of Phobos and Deimos. It is known that the dynamical behavior of dust particles strongly depends on the solar longitude (Martian season) at the launch time \citep{hamilton1996asymmetric, krivov1997martian, makuch2005long, krivov2006search}. Thus, in our simulations 100 particles per grain size are launched with uniformly distributed initial mean anomalies of the Martian orbit around the Sun \citep{krivov1997martian, showalter2006deep}.
The motions of dust grains are simulated until they hit Phobos, Deimos, or Mars, until they escape from the Martian system, or for a maximum of 100,000 years. In order to save computational cost, we also stop the simulation for a given size when the fraction of grains of that size remaining in orbit is less than 5\% (the integrations for the 100 particles per grain size run in parallel on a computer cluster provided by the Finnish CSC -- IT Center for Science).

When we check for collisions of the dust particles with a given target (Phobos, Deimos, and Mars), at each time step of the integration we calculate the distance between the particle and the center of the target. If this distance is smaller than the target's radius, an impact occurs. When the grain is close to the target, in order to avoid overlooking an impact due to the discrete time steps of the numerical integrators, a cubic Hermite interpolation is adopted to approximate the minimum distance between the particle and the target's center (see \citet{chambers1999hybrid} and \citet{liu2016dynamics}).

In our simulations, we store the slowly changing orbital elements, including semi-major axis, eccentricity, inclination, argument of pericenter, longitude of ascending node, and true anomaly. Generally, we store 10 sets of orbital elements per orbital period. The orbital segment between two consecutively stored sets of osculating elements is approximately treated as Keplerian. For denser output, each of these segments is further divided into intervals that are equidistant in time. Since dust particles are produced and absorbed continually (by hitting sinks or escaping from the system), each discrete point corresponds to one particle in the steady-state ring configuration (see details in Sections 3.4 and 4 in \citet{liu2016dynamics} as well as Section 4 in \citet{liu2018dust}).
\section{Particle Lifetimes} \label{section_lifetime}
The lifetimes for dust from Phobos and Deimos derived from our simulations are shown in Fig.~\ref{fig:Lifetime}. Overall, we confirm the picture drawn from previous work (see Table 1 in \citet{krivov2006search} and references therein): grains larger than about $5\mu\mathrm{m}$ to $10\mu\mathrm{m}$ have lifetimes on the order of 10,000 years if they come from Deimos and a few tens of years if they are ejected from Phobos. For grain sizes smaller than 5 $\mathrm{\mu m}$ the lifetimes drop to values of months.
To explain this jump in lifetime, a conserved ``Hamiltonian'' for the orbit-averaged system is used, which reads \citep{1996Icar..123..503H}
\begin{equation}
\begin{split}
\label{eq:Hamiltonian}
\mathcal H (e, \phi_\odot) = & \sqrt{1-e^2} + Ce\cos\phi_\odot + \frac{1}{2}Ae^2\left[1+5\cos(2\phi_\odot)\right] \\
& + \frac{W}{3(1-e^2)^{3/2}} \ .
\end{split}
\end{equation}
Here $e$ is the eccentricity, $\phi_\odot$ is the solar angle, denoting the angle between the Sun and the grain's pericenter as seen from the planet, and $C$, $A$, and $W$ are parameters labeling the relative strengths of solar radiation pressure, the solar tidal force, and the Martian $J_2$, respectively (see \citet{1996Icar..123..503H} for details). For this analysis, the perturbations from solar radiation pressure, the solar tidal force, and $J_2$ are included, while other perturbations are ignored. Besides, the inclinations and eccentricities of Mars and the source moons are neglected, and the grains are assumed to have zero inclination.
Due to the low grain ejection velocity, which is negligible compared to the orbital velocity of the moons, the particles start their evolution with $e_0 \! \approx \! 0$.
Since the Hamiltonian $\mathcal H$ is conserved, the orbital evolution of a dust particle follows in the phase plane a level curve of the Hamiltonian, with the value fixed at the starting point, $\mathcal H (e=0, \phi_\odot)$. Typical phase portraits, obtained as contour plots from Eq.~\ref{eq:Hamiltonian}, are shown in Figs.~\ref{fig:phase_phobos_impact} and \ref{fig:phase_deimos_bifu_impact}.
\begin{figure}
\centering
\noindent\includegraphics[width=0.45\textwidth]{lifetime.pdf}
\caption{Lifetimes for particles of various sizes originating from the Martian moons Phobos and Deimos.}
\label{fig:Lifetime}
\end{figure}
\begin{figure}
\centering
\noindent\includegraphics[width=0.4\textwidth]{phase_phobos_impact.pdf}
\caption{Phase portrait for 10.7 $\mathrm{\mu m}$ particles from Phobos. The blue dashed circle denotes $e_\mathrm{impact}$ = 0.638 for Phobos grains.}
\label{fig:phase_phobos_impact}
\end{figure}
\begin{figure}
\centering
\noindent\includegraphics[width=0.3\textwidth,angle=-90]{phase_deimos_impact.pdf}
\caption{Phase portrait for 6.0 $\mathrm{\mu m}$ particles from Deimos. The blue dashed circle denotes $e_\mathrm{impact}$ = 0.855 for Deimos grains.}
\label{fig:phase_deimos_bifu_impact}
\end{figure}
The value of eccentricity for which the grains' pericenter distance is equal to the Martian radius reads
\begin{equation}\label{eq:e_impact}
e_\mathrm{impact} = 1-\frac{R_\mathrm{M}}{a_\mathrm{moon}} \,,
\end{equation}
where $a_\mathrm{moon}$ is the semi-major axis of the source moon. The value of $e_\mathrm{impact}$ is of 0.638 for Phobos grains and of 0.855 for Deimos grains \citep{hamilton1996asymmetric}. With $r_\mathrm{g}^\mathrm{impact}$ we denote the grain size for which the maximum eccentricity $e_\mathrm{max}$, attained in the course of the orbital evolution, reaches $e_\mathrm{impact}$ \citep{hamilton1996asymmetric}. Particles smaller than $r_\mathrm{g}^\mathrm{impact}$ will develop a lager $e_\mathrm{max}$, and thus will hit Mars shortly after ejection, while grains larger than $r_\mathrm{g}^\mathrm{impact}$ will stay in the circum-Martian space for a much longer time. Based on Eq.~\ref{eq:Hamiltonian}, $r_\mathrm{g}^\mathrm{impact}$ = 10.7 $\mathrm{\mu m}$ for grains from Phobos (Fig.~\ref{fig:phase_phobos_impact}). Our analytical value of $r_\mathrm{g}^\mathrm{impact}$ is different from the one given by \citet{hamilton1996asymmetric} because we use size-dependent values of $Q_\mathrm{pr}$ for silicate grains (see Section \ref{section_simulations}). From the full numerical simulations, the value of $r_\mathrm{g}^\mathrm{impact}$ is close to but a bit smaller than 10.7 $\mathrm{\mu m}$, explaining the jump in the lifetime between 5 and 10 $\mathrm{\mu m}$ for Phobos grains (Fig.~\ref{fig:Lifetime}). From Fig.~\ref{fig:phase_phobos_impact} it is also evident that in principle we expect in the Phobos ring mostly particles with a solar angle in the range $90^\circ < \phi_\odot < 270^\circ $. For grains lifted from Deimos the phase portrait is shown in Fig.~\ref{fig:phase_deimos_bifu_impact}. For Deimos grains $r_\mathrm{g}^\mathrm{impact}$ = 6.0 $\mathrm{\mu m}$.
Particles larger than $r_\mathrm{g}^\mathrm{impact}$ are safe from rapid collision with Mars. Their lifetime is limited mainly by collisions with their source moon and by the slow reduction of their semi-major axis, which, ultimately, increases again their chance to hit Mars.
\section{The effect of $J_3$} \label{section_J3}
The effect of $J_2$ ($\approx \! 1.96 \times 10^{-3}$) on the dynamics of Martian dust was well studied in previous papers \citep[e.g.~][]{hamilton1996asymmetric, krivov1995dynamics}. In this section, the effect of a higher degree term, i.e.~the $J_3$ gravitational coefficient is analyzed. The $J_3$ term reflects an asymmetry in the mass distributions between the northern and southern hemispheres of the planet, which causes variations of inclination and eccentricity \citep{roy1965astrodynamics,paskowitz2006design}
\begin{equation} \label{equ_dot_i_J3}
\left\langle \frac{\mathrm{d}i}{\mathrm{d}t} \right\rangle_{J_3} = \frac{3nJ_3R_\mathrm{M}^3}{2a^3(1-e^2)^3} e \cos i \left(1-\frac{5}{4}\sin^2i \right) \cos\omega
\end{equation}
\begin{equation} \label{equ_dot_e_J3}
\left\langle \frac{\mathrm{d}e}{\mathrm{d}t} \right\rangle_{J_3} = -\frac{3nJ_3R_\mathrm{M}^3}{2a^3(1-e^2)^3} \sin i \left(1-\frac{5}{4}\sin^2i \right) \cos\omega \ .
\end{equation}
Here $i$ is the inclination, $n$ the mean motion, $a$ the semi-major axis, and $\omega$ the argument of pericenter.
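To give a sense of the magnitude of this perturbation, the following sketch evaluates Eq.~\ref{equ_dot_i_J3} for a grain near the orbit of Phobos; the snapshot elements $(e, i, \omega)$ are hypothetical illustrative values, not output of our simulations.
\begin{verbatim}
from math import sqrt, sin, cos, radians, pi

GM_mars  = 4.2828e4          # [km^3 s^-2]
R_M, J3  = 3396.0, 3.15e-5   # Martian radius [km] and zonal coefficient
a        = 9376.0            # semi-major axis near Phobos' orbit [km]
e, i, w  = 0.3, radians(1.0), 0.0   # hypothetical snapshot elements

n = sqrt(GM_mars / a**3)     # mean motion [rad/s]
didt = (3*n*J3*R_M**3) / (2 * a**3 * (1 - e**2)**3) \
       * e * cos(i) * (1 - 1.25*sin(i)**2) * cos(w)
print(didt * 3.156e7 * 180/pi)  # [deg/yr]; a few tenths of a degree per year
\end{verbatim}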
For Jupiter and Saturn, because of their nearly hemispherically and axially symmetric shapes, the values of $J_3$ are almost zero ($J_3\approx -4.2 \times 10^{-8}$ for Jupiter \citep{iess2018measurement}, and $J_3 \! \approx \! 5.9 \times 10^{-8}$ for Saturn \citep{iess2019measurement}), and thus $J_3$ has a negligible effect on the dynamics of particles in the Jovian and Saturnian rings. In contrast, Mars has a much larger value of $J_3$ ($\approx \! 3.15 \times 10^{-5}$), exceeding in magnitude the value of $J_3$ for the Earth ($\approx -2.5 \times 10^{-6}$ \citep{pavlis2012development}), so that we might expect a noticeable effect on the dynamics of circum-Martian dust. From Eq.~\ref{equ_dot_i_J3}, $J_3$ can be important for the evolution of the inclination of eccentric orbits.
In Fig.~\ref{fig:dot_i_J3_phobos} we show the evolution of a 20 $\mu \mathrm{m}$ particle from Phobos with and without the action of $J_3$. The maximal effect of $J_3$ on the inclination amounts to 10\%, roughly. The variations in inclination due to $J_3$, in turn, alter the effects of solar radiation and Martian $J_2$ on the evolution of semi-major axis and eccentricity. For a specific grain, the mildly altered dynamics can have a drastic effect on the lifetime (Fig.~\ref{fig:dot_i_J3_phobos}), while the overall, averaged effect on the lifetimes seems to remain small (see comparison to previous work in Section \ref{section_lifetime}).
Because of the factor $\sin i$ in Eq.~\ref{equ_dot_e_J3}, which is small due to the generally low orbital inclination, the direct effect of $J_3$ on eccentricity remains negligible compared to that of solar radiation pressure.
The $J_3$ effect decreases rapidly with increasing semi-major axis (Eqs.~\ref{equ_dot_i_J3} and \ref{equ_dot_e_J3}). Thus, it is more important for grains from Phobos than for those from Deimos. The $J_5$ gravitational coefficient has a similar, albeit much weaker effect on inclination. For particles from Phobos the strength of the perturbation induced by $J_5$ is roughly $J_5/J_3 \times \left(R_\mathrm{M}/a_\mathrm{Phobos}\right)^2 \! \approx \! 2\% $ of the perturbation induced by $J_3$.
\begin{figure}
\centering
\noindent\includegraphics[width=0.45\textwidth]{i_J3_phobos.pdf}
\caption{Evolution of inclination for a 20 $\mu \mathrm{m}$ particle from Phobos. The black line denotes the inclination evolution with all perturbation forces (see Section \ref{section_simulations}). The red line corresponds to the case with all perturbation forces except $J_3$. Without $J_3$ the particle re-impacts on the source moon at an earlier time.}
\label{fig:dot_i_J3_phobos}
\end{figure}
\section{Particle size distribution and configuration of the rings} \label{section_configuration}
The differential mass distribution of the ejected particles is assumed to follow a power law
\begin{equation} \label{equ_mass_distri}
p(m) \propto m^{-(1+\alpha)} \,.
\end{equation}
For the slope $\alpha$ we adopt a value of $0.9$, derived from the in-situ measurements in the impact-generated lunar dust cloud \citep{Horanyi:2015faa} performed by the Lunar Dust Experiment onboard NASA's Lunar Atmosphere and Dust Environment Explorer mission. We normalize the mass distribution with the mass production rates given by Eq.~(\ref{eq:M+}). Averaging the results from the long-term simulations over the initial mass distribution (\ref{equ_mass_distri}), we obtain an estimate for the steady-state differential size distribution of grains in the Phobos and Deimos rings (Fig.~\ref{fig:size_distri}). Physically, this distribution arises from combining the steep size dependence of the initial grain production with the size-dependent lifetime (Fig.~\ref{fig:Lifetime}).
Since the lifetimes of larger grains from Phobos are not substantially longer than those of the small grains, we find that the Phobos ring is dominated by grains $\leq$ 2 $\mathrm{\mu m}$. A small secondary peak around $10\mu\mathrm{m}$ is best visible in logarithmic scale (Fig.~\ref{fig:size_distri}). For the small grains that dominate the Phobos ring, solar radiation pressure is the most important perturbation force, which pushes the solar angle nearly instantly to $90^\circ$ \citep{hamilton1996asymmetric, 1996Icar..123..503H}. The subsequent rotation of the solar angle is induced by $J_2$ and by the orbital motion of Mars (see the discussion around equation 5 of \citet{hamilton1996asymmetric}). Because of the small semi-major axis of Phobos, and thus of the grains lifted from its surface, the effect of $J_2$ dominates and the rotation is in the anti-clockwise direction, assuming in principle values in the range $90^\circ < \phi_\odot < 270^\circ$ (see Fig.~\ref{fig:phase_phobos_impact}). But the rotation of the solar angle takes place on timescales that are longer than the lifetime of the grains. As a result, all grains develop their eccentricities with a solar angle that remains close to $90^\circ$, and the Phobos ring appears shifted towards the negative $y$-axis (Fig.~\ref{fig:normal_rotating_density}).
\begin{figure}
\centering
\noindent\includegraphics[width=0.45\textwidth]{size_distri.pdf}
\caption{\textbf{(a)} Steady-state differential size distribution (logarithmic scale) for the Phobos and Deimos rings obtained from numerical simulations (Section \ref{section_simulations}) combined with initial mass distribution of ejecta Eq.~(\ref{equ_mass_distri}). A power law with slope $q$ = 2.7 is also shown for reference. \textbf{(b)} Same as \textbf{(a)}, but in linear scale for $r_\mathrm{g}<25\mu\mathrm{m}$.}
\label{fig:size_distri}
\end{figure}
\begin{figure}
\centering
\noindent\includegraphics[width=0.4\textwidth]{normal_rotating_density.pdf}
\caption{\textbf{(a)} Grain number density in the Phobos ring projected onto the Martian equatorial plane, vertically averaged over $[-0.3, 0.3] \, R_\mathrm{M}$ in a frame that keeps a fixed orientation with respect to the Sun. The plot is obtained by averaging simulation results of particles of 12 grain sizes ranging from $0.5 \,\mathrm{\mu m}$ to $100 \,\mathrm{\mu m}$ (Section \ref{section_simulations}) over an initial mass distribution of ejecta with the differential slope $\alpha \! = \! 0.9$ (Eq.~\ref{equ_mass_distri}). See Fig.~\ref{fig:size_distri} for the steady-state differential size distribution for these ring particles. The positive $x_\mathrm{rot}$-axis points to the direction of the Sun. The blue line denotes the Martian radius, and the red dashed lines denote the orbits of Phobos (inner) and Deimos (outer). \textbf{(b)} Same as \textbf{(a)}, but for the Deimos ring, vertically averaged over $[-2.0, 2.0] \, R_\mathrm{M}$.}
\label{fig:normal_rotating_density}
\end{figure}
\begin{figure}
\centering
\noindent\includegraphics[width=0.4\textwidth]{edge_yz_inertial.pdf}
\caption{\textbf{(a)} Geometrical optical depth of the Phobos ring, when viewed from the opposite direction of the Martian vernal equinox point. The plot is obtained by averaging simulation results of particles of 12 grain sizes ranging from $0.5 \,\mathrm{\mu m}$ to $100 \,\mathrm{\mu m}$ (Section \ref{section_simulations}) over an initial mass distribution of ejecta with the differential slope $\alpha \! = \! 0.9$ (Eq.~\ref{equ_mass_distri}). The $z$-axis is along the Martian spin axis. The $x$-axis (not shown in this plot) points to the Martian vernal equinox point, and the $y$-axis is perpendicular to the $x$-axis in the Martian equatorial plane. The blue line denotes the Martian radius, and the red dashed lines denote the orbital distances of Phobos (inner) and Deimos (outer). \textbf{(b)} Same as \textbf{(a)}, but for the Deimos ring.}
\label{fig:edge_yz_inertial}
\end{figure}
The Deimos ring, in contrast, is dominated by larger grains (Fig.~\ref{fig:size_distri}) in the size range of about 5-20 $\mathrm{\mu m}$, owing to the very large lifetimes of these particles (Fig.~\ref{fig:Lifetime}). This lifetime is longer than the period of the cycle in the evolution of the solar angle and the eccentricity (Fig.~\ref{fig:phase_deimos_bifu_impact}). For Deimos grains the rotation of the solar angle induced by the orbital motion of Mars dominates over the effect of $J_2$, so that the solar angle rotates clockwise \citep{hamilton1996asymmetric}. The maximum eccentricity is attained at $\phi_\odot \! = \! 0^\circ$. Averaging over many grains, lifted from Deimos uniformly over one Martian year, the Deimos ring appears shifted away from the Sun (Fig.~\ref{fig:normal_rotating_density}).
We find a peak number density (Fig.~\ref{fig:normal_rotating_density}) for grains $\geq$ 0.5 $\mu \mathrm{m}$ for the Phobos ring of about $4.7\times 10^{-5} \, \mathrm{m}^{-3}$ if vertically averaged over $[-0.3, 0.3] \, R_\mathrm{M}$ (corresponding roughly to the thickness of the ring, Fig.~\ref{fig:edge_yz_inertial}), and for the Deimos ring of about $9.2\times 10^{-6} \, \mathrm{m}^{-3}$ if vertically averaged over $[-2.0, 2.0] \, R_\mathrm{M}$. This allows us to estimate the number of dust impacts detected by an in-situ instrument on a spacecraft that traverses the rings. For a vertical crossing of the rings with a 1 $\mathrm{m}^2$ detector (as the sensitive area of the detector onboard JAXA's Martian Moons Exploration mission) one expects to record about 96 particles ($\geq$ 0.5 $\mu \mathrm{m}$) from the Phobos ring and about 125 particles from the Deimos ring.
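The impact numbers quoted here follow from multiplying the vertically averaged peak number densities by the corresponding vertical extent; the arithmetic of this column-density estimate is sketched below.
\begin{verbatim}
R_M = 3.396e6   # Martian radius [m]

# peak number density [m^-3] and vertical averaging window [m] per ring
rings = {"Phobos": (4.7e-5, 0.6*R_M),   # averaged over [-0.3, 0.3] R_M
         "Deimos": (9.2e-6, 4.0*R_M)}   # averaged over [-2.0, 2.0] R_M

for name, (n_peak, height) in rings.items():
    # column density times a 1 m^2 detector area = expected impact count
    print(name, n_peak * height)        # ~96 and ~125 grains
\end{verbatim}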
The non-detection of the Martian rings in observations with the Hubble Space Telescope in 2001 \citep{showalter2006deep} placed upper limits on their edge-on brightness. For a geometric albedo of $0.07$, this translates into upper limits for the edge-on optical depth of $\sim2\times 10^{-6}$ for the Phobos ring and $\sim10^{-6}$ for the Deimos ring \citep{krivov2006search}. The edge-on geometric optical depths from our simulations are shown in Fig.~\ref{fig:edge_yz_inertial}. Here, the viewing direction is from the Martian vernal equinox point in the plane of the sky, i.e.\ the ascending node of the Martian orbital plane in the Martian equatorial plane. For this configuration we use a slope of $\alpha=0.9$ for the initial mass distribution (\ref{equ_mass_distri}) and we obtain an average edge-on geometric optical depth for the Phobos ring of about $3.5\times 10^{-8}$, and about $3.1\times 10^{-7}$ for the Deimos ring. If, alternatively, we use a value of $\alpha \! = \! 0.8$ \citep{krivov2003impact}, which is consistent with measurements of the dust cloud around the Galilean satellites by the Galileo Dust Detection System \citep{2000P&SS...48.1457K}, we get slightly larger values of about $3.7\times 10^{-8}$ for the Phobos ring and about $3.7\times 10^{-7}$ for the Deimos ring. In either case ($\alpha \! = \! 0.9$ or $\alpha \! = \! 0.8$) the average edge-on optical depth is lower than the upper limits inferred from the Hubble observations.
\section{Summary and Discussion} \label{section_comparison_uncertainties}
In this work we present a comprehensive numerical model for the evolution of dust particles ejected from the Martian moons Phobos and Deimos, with the goal of constructing a steady state model for the configuration of the putative Martian dust rings. The new model ingredients are: (i) We perform direct numerical integrations of the equations of motion for a large number of particles, instead of integrating the orbit-averaged evolution equations for the orbital elements. (ii) We use an array of 12 grain sizes, from submicron to 100 microns, to resolve the size-dependent lifetimes better than previous studies. We present results as averages over an initial ejecta mass distribution. (iii) We include the Martian gravity field up to 5th degree and 5th order and account for the gravitational perturbations of Phobos and Deimos. (iv) We check for impacts of grains on the source moons and on Mars directly at (and between) the time steps of the integration, which allows for a more accurate evaluation of grain lifetimes than the probabilistic approach used in previous studies.
Our results are: \\
(1) We evaluate the lifetimes of grains with radii between $0.5\mu\mathrm{m}$ and $100\mu\mathrm{m}$, confirming results obtained in the literature. For grains from both source moons a jump in lifetime occurs between $5$ and $10$ microns. Smaller grains have lifetimes of months up to one (Earth) year. Larger grains from Phobos have lifetimes of tens of years while grains from Deimos remain in orbit for 10,000 years or more.\\
(2) The gravity perturbation induced by the Martian north--south asymmetry has a small but noticeable effect in our simulations on the orbital evolution of the grains from Phobos, in particular on the evolution of the inclination. For Deimos the effect is much smaller.\\
(3) Taking into account the initial mass distribution of ejecta, we derive the steady-state size distribution of dust particles in both ring components. The Phobos ring is dominated by grains smaller than a few microns. The Deimos ring is dominated by grains around $10$ microns in size. This is consistent with (but refines) previous results.\\
(4) Averaging over the initial mass distribution of ejecta and a large number of grains produced on the surfaces of the moons uniformly over one Martian year, we obtain a model for the steady state configuration of the rings. For the Deimos ring we confirm results from previous studies, in that the ring extends in the anti-sun direction, owing to an interplay of solar radiation pressure, the effect of the Martian $J_2$, and the orbital motion of Mars. The ring has a thickness of about 4 Martian radii. For the Phobos ring we obtain the new result that the steady state ring should have a solar angle of $90^\circ$. This configuration arises from the dominance of small grains in this ring component and their correspondingly short lifetimes. The grains do not have time to perform a full cycle of the evolution of the eccentricity, and the solar angle stays close to its initial value of $90^\circ$. \\
(5) For a vertical traversal with a spacecraft, we estimate that a dust detector of $1\mathrm{m}^2$ area should record about 100 grains larger than $0.5\mu\mathrm{m}$ from either ring.\\
(6) We derive the edge-on geometric optical depth from our model, giving an estimate of $\tau\sim3.5\times 10^{-8}$ for the Phobos ring and $\tau\sim3.1\times 10^{-7}$ for the Deimos ring, both of which are below the upper limits for the edge-on photometric optical depth inferred from observations, $\sim2\times 10^{-6}$ for the Phobos ring and $\sim10^{-6}$ for the Deimos ring.
Our model results are subject to fairly large uncertainties, as is the case for the models presented previously in the literature. The most uncertain parameter is the mass production rate of dust particles. On the one hand, the interplanetary projectile flux is still poorly constrained. On the other hand, not much is known about the surface properties of the source moons, which induces uncertainties in the ejecta yield, the bulk density of ejected grains, and in their dynamical response to solar radiation pressure and Poynting-Robertson drag. The values of the interplanetary flux and the ejecta yield affect the results linearly. The shape of the initial mass distribution, the bulk density of the particles, and their material have a size dependent effect, and their variation will affect the steady state size distribution, the number densities, and the optical depth of the rings in a non-linear manner. The precise error limits of the model results induced by the uncertainties in the parameters are difficult to assess, but they might easily amount to an uncertainty of an order of magnitude, or even more. For the initial distribution of ejecta masses we use a differential power-law with a slope of $\alpha=0.9$, as inferred from in-situ measurements in the lunar dust cloud. We also checked models with a slope of $\alpha=0.8$ (a value used in previous modelling of dust rings) and found that our main conclusions on the grain size distribution in the rings, the number densities, as well as their spatial configuration and the optical depths are robust. An additional complication might arise from the intermittency of the interplanetary flux \citep{Horanyi:2015faa}. Given the short lifetimes of the grains that dominate the Phobos ring, this might result in a significant variability of the ring over months. Finally, we note that \citet{krivov2006search} pointed out the potential importance of grain-grain collisions as a sink, which we have not included in our modelling. Collisions might play a role especially for the Deimos ring, owing to the long particle lifetimes, leading to a depletion of the number density and optical depth.
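As an aside, ejecta masses distributed according to such a truncated power law are easy to generate by inverse-transform sampling, which is convenient when averaging over the initial mass distribution. A minimal sketch, assuming the slope refers to the cumulative distribution $N(>m)\propto m^{-\alpha}$ (equivalently a differential distribution $\propto m^{-1-\alpha}$); the mass cutoffs and the bulk density used for the mass-to-radius conversion are purely illustrative choices, not values from our model:
\begin{verbatim}
import numpy as np

def sample_masses(alpha, m_min, m_max, size, rng):
    """Draw masses from a power law with cumulative distribution
    N(>m) ~ m**(-alpha), truncated to [m_min, m_max]."""
    u = rng.random(size)
    return (m_min**(-alpha)
            - u * (m_min**(-alpha) - m_max**(-alpha)))**(-1.0 / alpha)

rng = np.random.default_rng(0)
m = sample_masses(0.9, 1e-15, 1e-6, 100000, rng)  # masses in kg (illustrative)
rho = 2000.0                                      # assumed bulk density, kg m^-3
s = (3.0 * m / (4.0 * np.pi * rho))**(1.0 / 3.0)  # grain radii in m
\end{verbatim}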
Although the Martian rings have escaped detection so far, there is little or no doubt that the dust tori of Phobos and Deimos exist. The mechanism of quasi-continuous dust production in impacts of interplanetary meteoroids has been confirmed by measurements \citep{1999Natur.399..558K,Horanyi:2015faa}, as has the formation of dust rings by this mechanism \citep{1999Sci...284.1146B,2004jpsm.book..241B,Hedman:2009kt}. The best chance to detect the rings might be in-situ measurements with a dust detector onboard a spacecraft, or high-phase-angle imaging from an orbiter when the spacecraft is in the shadow of Mars.
\section*{Acknowledgements}
This work was supported by the European Space Agency under the project Jovian Meteoroid Environment Model at the University of Oulu and by the Academy of Finland. We acknowledge CSC -- IT Center for Science for the allocation of computational resources.
\section*{Data availability}
The data underlying this article will be shared on reasonable request to the corresponding author.
\bibliographystyle{mnras}
\section{Introduction and main results}
We consider a lattice system of unbounded and continuous spins on the $d$-dimensional lattice $\mathds{Z}^d$. The formal Hamiltonian $H: \mathds{R}^{\mathds{Z}^d} \to \mathds{R}$ of the system is given by
\begin{equation}
\label{e_d_Hamiltonian}
H(x) = \sum_{i \in \mathds{Z}^d } \psi_i(x_i) + \frac{1}{2} \sum_{i,j \in \mathds{Z}^d} M_{ij} x_i x_j.
\end{equation}
We assume that the single-site potentials $\psi_i : \mathds{R} \to \mathds{R}$ are smooth and perturbed convex. This means that there is a splitting $\psi_i= \psi_i^c + \psi_i^b$ such that for all $i \in \mathds{Z}^d$ and $z \in \mathds{R}$
\begin{equation}\label{e_cond_psi}
(\psi_i^c)'' (z) \geq 0 \qquad \mbox{and} \qquad |\psi_i^b (z)| + | ( \psi_i^b)' (z) | \lesssim 1.
\end{equation}
Here, we used the convention (see Definition~\ref{def:dep} below for more details)
\begin{equation*}
a \lesssim b \qquad :\Leftrightarrow \mbox{there is a uniform constant $C>0$ such that $a \leq C b$}.
\end{equation*}
Moreover, we assume that
\begin{itemize}
\item the interaction is symmetric i.e.~
\begin{equation}
\label{e_ass_sym}
M_{ij}=M_{ji} \qquad \mbox{ for all $i, j \in \mathds{Z}^d$,}
\end{equation}
\item and the matrix $M= (M_{ij})$ is strictly diagonally dominant (see the numerical sketch after this list), i.e.~for some $\delta > 0$ it holds for any $i \in \mathds{Z}^d$
\begin{equation}\label{e_strictly_diag_dominant}
\sum_{j \in \mathds{Z}^d, j \neq i} |M_{ij}| + \delta \le M_{ii}.
\end{equation}
\end{itemize}
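The following minimal sketch illustrates the assumptions~\eqref{e_ass_sym} and~\eqref{e_strictly_diag_dominant} numerically on a finite box; the decay exponent and the value of $\delta$ are illustrative choices only:
\begin{verbatim}
import numpy as np

L, delta = 20, 0.1                        # finite box in d = 1, margin delta
idx = np.arange(L)
dist = np.abs(idx[:, None] - idx[None, :]).astype(float)
M = -1.0 / (dist**4 + 1.0)                # symmetric, algebraically decaying
np.fill_diagonal(M, 0.0)
off = np.abs(M).sum(axis=1)               # sum_{j != i} |M_ij|
np.fill_diagonal(M, off + delta)          # enforce strict diagonal dominance
assert np.allclose(M, M.T)                # symmetry, cf. (e_ass_sym)
assert np.all(off + delta <= np.diag(M) + 1e-12)   # (e_strictly_diag_dominant)
\end{verbatim}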
\begin{notation}
Let $S\subset \mathds{Z}^d$ be an arbitrary subset of $\mathds{Z}^d$. For convenience, we write $x^S$ as a shorthand for $(x_i)_{i \in S}$.
\end{notation}
\begin{definition}[Tempered spin-values]
Given a finite subset $\Lambda\subset \mathds{Z}^d$, we call the spin values $x^{\mathds{Z}^d \backslash \Lambda}$ tempered, if for all $i \in \Lambda$
\begin{equation*}
\sum_{j \in \mathds{Z}^d \backslash {\Lambda}} |M_{ij}| \ |x_j| < \infty.
\end{equation*}
\end{definition}
\begin{definition}[Finite-volume Gibbs measure]
Let $\Lambda$ be a finite subset of the lattice $\mathds{Z}^d$ and let $x^{\mathds{Z}^d \backslash \Lambda}$ be a tempered state. We call the measure $\mu_{\Lambda}( dx^{\Lambda})$ the finite-volume Gibbs measure associated to the Hamiltonian $H$ with boundary values $x^{\mathds{Z}^d \backslash \Lambda}$, if it is the probability measure on the space $\mathds{R}^{\Lambda}$ given by the density
\begin{equation}
\label{e_d_Gibbs_measure}
\mu_{\Lambda}(dx^\Lambda) = \frac{1}{Z_{\mu_\Lambda}} \mathrm{e}^{-H(x^\Lambda,x^{\mathds{Z}^d \backslash \Lambda} )} \mathrm{d} x^\Lambda .
\end{equation}
Here, $Z_{\mu_\Lambda}$ denotes the normalization constant that turns $\mu_{\Lambda}$ into a probability measure. If there is no ambiguity, we may also write $Z$ to denote the normalization constant of a probability measure. We also use the short notation
\begin{align*}
H(x^\Lambda,x^{\mathds{Z}^d \backslash \Lambda} ) = H(x) \quad \mbox{with} \quad x= (x^\Lambda,x^{\mathds{Z}^d \backslash \Lambda}).
\end{align*}
Note that $\mu_\Lambda$ depends on the spin values $x^{\mathds{Z}^d \backslash \Lambda}$ outside of the set $\Lambda$.
\end{definition}
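To make the role of the boundary values explicit, the following toy sketch (a one-dimensional nearest-neighbour example with an illustrative single-site potential, not part of the analysis) evaluates the unnormalized density of $\mu_\Lambda$, including the two bonds that couple the box to the fixed boundary spins:
\begin{verbatim}
import numpy as np

def H_box(x_in, x_left, x_right, psi, J):
    """Hamiltonian on a 1d box with fixed boundary spins x_left, x_right:
    sum_i psi(x_i) plus nearest-neighbour couplings J * x_i * x_{i+1},
    including the two bonds connecting the box to the boundary."""
    x = np.concatenate(([x_left], x_in, [x_right]))
    return psi(x_in).sum() + J * np.sum(x[:-1] * x[1:])

psi = lambda z: z**4 / 12.0                # illustrative convex potential
x_in = np.array([0.3, -0.1, 0.2])
weight = np.exp(-H_box(x_in, 1.0, -1.0, psi, J=-0.2))  # unnormalized density
\end{verbatim}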
The main object of study in this article is the question whether the finite-volume Gibbs measure $\mu_{\Lambda}$ satisfies a logarithmic Sobolev inequality~(LSI).
\begin{definition}[LSI] \label{d_LSI}
Let $X$ be a Euclidean space. A Borel probability measure $\mu$ on $X$ satisfies the LSI with constant $\varrho>0$, if for all smooth functions $f \geq 0$
\begin{equation}\label{e_definition_of_LSI}
\int f \log f \ d \mu - \int f d\mu \log \left( \int f d\mu \right) \leq \frac{1}{2 \varrho} \int \frac{|\nabla f|^2}{f} d\mu.
\end{equation}
Here, $\nabla$ denotes the gradient determined by the Euclidean structure of~$X$.
\end{definition}
The LSI yields by linearization the Poincar\'e inequality (PI) (see for example~\cite{L}).
\begin{definition}[PI]\label{d_SG}
Let $X$ be a Euclidean space. A Borel probability measure $\mu$ on $X$ satisfies the PI with constant $\varrho>0$, if for all smooth functions~$f$
\[
\var_{\mu} (f): = \int \left(f - \int f d \mu \right)^2 d \mu \leq \frac{1}{\varrho} \int |\nabla f|^2 d\mu.
\]
Here, $\nabla$ denotes the gradient determined by the Euclidean structure of~$X$.
\end{definition}
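For orientation, consider the Gaussian measure $\mu(dx) = Z^{-1} \exp(-\frac{1}{2} x \cdot A x)\, dx$ on $\mathds{R}^n$ with a symmetric, positive definite matrix $A$. By the criterion of Bakry-\'Emery (cf.~Theorem~\ref{local:thm:BakryEmery} in Appendix~\ref{s_BE_HS}), $\mu$ satisfies the LSI, and hence the PI, with constant $\varrho = \lambda_{\min}(A)$. Testing the PI with a linear function $f(x) = v \cdot x$, where $v$ is a normalized eigenvector of $A$ corresponding to $\lambda_{\min}(A)$, shows that this constant is optimal for the PI.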
The LSI was originally introduced by Gross \cite{Gross}. It can be used as a powerful tool for studying spin systems. The LSI implies exponential convergence to equilibrium of the naturally associated conservative diffusion process, with a rate of convergence given by the LSI constant $\varrho$ (cf.~\cite[Chapter~3.2]{Roy07}). At least in the case of finite-range interaction, an LSI constant of the local Gibbs states that is independent of the system size directly yields the uniqueness of the infinite-volume Gibbs state (cf.~\cite{Roy07,Yos_2,Zitt}). \medskip
In the literature, there are several results known that connect the decay of spin-spin correlations to the validity of a LSI uniform in the system size \cite{StrZeg,StrZeg2,Zeg96,Yos99,Yos01,B-H1}. This means that a static property of the equilibrium state of the system is connected to a dynamic property, namely the relaxation to equilibrium. We refer the reader to Section~2.2 of the article of Otto \& Reznikoff~\cite{OR07}, which gives a nice overview and discussion of the results in the literature. Otto \& Reznikoff used the two-scale criterion for the LSI (cf.~\cite[Theorem~1]{OR07} or \cite[Theorem 3]{GORV}) to deduce the following statement:\medskip
\begin{theorem}[\mbox{\cite[Theorem 3]{OR07}}]\label{p_OR_original}
Consider the formal Hamiltonian $H:\mathds{R}^{\mathds{Z}^d} \to \mathds{R} $ given by~\eqref{e_d_Hamiltonian}. Assume that the single-site potentials $\psi_i=\psi$ are given by a function of the form
\begin{align}
\label{e_single_site_potential_otto}
\psi(z) = \frac{1}{12} z^4 + \psi^b (z) \qquad \mbox{with} \quad \left| \frac{d^2}{dz^2} \psi^b (z) \right| \leq C.
\end{align}
Assume that the interaction is symmetric i.e. $M_{ij}= M_{ji}$ and has zero diagonal i.e. $M_{ii}=0$. Consider a subset~$\Lambda_{\mathrm{tot}} \subset \mathds{Z}^d$. We assume the uniform control:
\begin{align}
\label{e_decay_inter_Otto}
|M_{ij}| \lesssim \exp \left( - \frac{|i-j|}{C} \right)
\end{align}
for $i,j \in \Lambda_{\mathrm{tot}}$ and
\begin{align}
\label{e_decay_corr_Otto}
| \cov_{\mu_{\Lambda}} (x_i,x_j)| \lesssim \exp \left( - \frac{|i-j|}{C} \right)
\end{align}
uniformly in $\Lambda \subset \Lambda_{\mathrm{tot}}$ and $i,j \in \Lambda$. Here, $\mu_{\Lambda}$ denotes the finite-volume Gibbs measure given by~\eqref{e_d_Gibbs_measure}. \newline
Then the finite-volume Gibbs measure $\mu_{\Lambda_{\mathrm{tot}}}$ satisfies the LSI with constant $\varrho>0$ depending only on the constant $C>0$ in~\eqref{e_single_site_potential_otto}, \eqref{e_decay_inter_Otto}, and~\eqref{e_decay_corr_Otto}.
\end{theorem}
The most important feature of Theorem~\ref{p_OR_original} is that the LSI constant $\varrho$ is independent of the system size $|\Lambda_{\mathrm{tot}}|$ and of the spin values~$x^{\mathds{Z}^d\backslash \Lambda_{\mathrm{tot}}}$ outside of~$\Lambda_{\mathrm{tot}}$. The advantage of Theorem~\ref{p_OR_original} over existing results connecting a decay of correlations to a uniform LSI is that it can deal with infinite-range interaction (cf.~\cite{StrZeg,StrZeg2,Zeg96,Yos99,Yos01,B-H1}). However, Theorem~\ref{p_OR_original} calls for some technical improvements. The main result of this article is the following generalized version of Theorem~\ref{p_OR_original}:
\begin{theorem}[Generalization of \mbox{\cite[Theorem 3]{OR07}}]\label{p_mr_OR}
Assume that the formal Hamiltonian $H:\mathds{R}^{\mathds{Z}^d} \to \mathds{R} $ given by~\eqref{e_d_Hamiltonian} satisfies the Assumptions~\eqref{e_cond_psi}~-~\eqref{e_strictly_diag_dominant}. Let $\Lambda_{\mathrm{tot}} \subset \mathds{Z}^d$ be an arbitrary, finite subset of the lattice $\mathds{Z}^d$.\newline
Assume the following decay of interactions and correlations: For some $\alpha>0$ it holds
\begin{equation}\label{e_cond_inter_alg_decay_OR}
|M_{ij}| \lesssim \frac{1}{|i-j|^{d+ \alpha}}
\end{equation}
uniformly in $i,j \in \Lambda_{\mathrm{tot}}$ and
\begin{equation}\label{e_cond_alg_decay_OR}
| \cov_{\mu_{\Lambda}} (x_i,x_j)| \lesssim \frac{1}{|i-j|^{d + \alpha}}
\end{equation}
uniformly in $\Lambda \subset \Lambda_{\mathrm{tot}}$ and $i,j \in \Lambda$. Here, $\mu_{\Lambda}$ denotes the finite-volume Gibbs measure given by~\eqref{e_d_Gibbs_measure}.\newline
Then the finite-volume Gibbs measure $\mu_{\Lambda_{\mathrm{tot}}}$ satisfies the LSI with a constant $\varrho>0$ depending only on the constant in~\eqref{e_cond_psi},~\eqref{e_strictly_diag_dominant}, \eqref{e_cond_alg_decay_OR} and \eqref{e_cond_inter_alg_decay_OR}.
\end{theorem}
Theorem~\ref{p_mr_OR} improves Theorem~\ref{p_OR_original} in two ways:\newline
Note that Theorem~\ref{p_OR_original} needs an exponential decay of interaction and spin-spin correlations. However, analyzing the proof of~\cite[Theorem~3]{OR07} one sees that the exponential decay is only needed to guarantee that certain sums are summable. Therefore this assumption can be weakened to algebraically decaying interaction and spin-spin correlations. Of course, the order of the algebraic decay then has to depend on the dimension of the underlying lattice in order to guarantee summability. \smallskip
The second improvement is more subtle. Theorem~\ref{p_OR_original} needs a special structure on the single-site potentials $\psi_i$. Namely, the single-site potentials $\psi_i$ have to be perturbed quartic in the sense of~\eqref{e_single_site_potential_otto}. Analyzing the proof of~\cite[Theorem~3]{OR07} shows that the argument does not rely on a quartic potential~$\psi_i^c$. For the argument of Otto \& Reznikoff it would be sufficient to have a perturbation of a strictly-superquadratic potential~i.e.
\begin{equation}
\label{e_super_quadratic}
\liminf_{|x|\to \infty} \frac{d^2}{dx^2} \psi_i^c (x) = \infty.
\end{equation}
The condition~\eqref{e_super_quadratic} on the single-site potential $\psi_i$ is widespread and accepted in the literature on the uniform LSI (cf.~for example~\cite{Yos01,Yos_2,ProSco}). \smallskip
However, a result by Zegarlinski~\cite[Theorem~4.1.]{Zeg96} indicates that the condition~\eqref{e_super_quadratic} is not necessary for deducing a uniform LSI. Zegarlinski deduced in~\cite[Theorem~4.1.]{Zeg96} the uniform LSI for the finite-volume Gibbs measures $\mu_\Lambda$ given by~\eqref{e_d_Gibbs_measure} on a one-dimensional lattice $\Lambda_{\mathrm{tot}} \subset \mathds{Z}$ with finite-range interaction. For Zegarlinski's argument it is sufficient that the single-site potentials $\psi_i$ satisfy the conditions~\eqref{e_cond_psi} and~\eqref{e_strictly_diag_dominant}, which is strictly weaker than the condition~\eqref{e_super_quadratic} (for a proof of this statement we refer the reader to \cite[Proof of Lemma~1]{OR07}). In Theorem~\ref{p_mr_OR} we show that the conditions~\eqref{e_cond_psi} and~\eqref{e_strictly_diag_dominant} are in fact also sufficient for the Otto-Reznikoff approach.
\begin{remark}\label{r_linear_term}
Note that the structural assumptions~\eqref{e_cond_psi}~-~\eqref{e_strictly_diag_dominant} on the Hamiltonian $H$ are invariant under adding a linear term like
\begin{equation*}
\sum_{i \in \mathds{Z}^d} x_i b_i
\end{equation*}
for arbitrary $b_i \in \mathds{R}$. Indeed, the term $b_i x_i$ can be absorbed into the convex part $\psi_i^c$ without changing $(\psi_i^c)''$, so that~\eqref{e_cond_psi} is preserved. Therefore the LSI constant of Theorem~\ref{p_mr_OR} is invariant under adding a linear term to the Hamiltonian. Such a linear term can be interpreted as a field acting on the system. If the coefficients $b_i$ are chosen randomly, one calls the linear term a random field.
\end{remark}
Let us discuss the ingredients needed to weaken the structural assumptions on the single-site potential from the condition~\eqref{e_single_site_potential_otto} to the conditions~\eqref{e_cond_psi} and~\eqref{e_strictly_diag_dominant}. Analyzing the proof of Otto \& Reznikoff, it all boils down to understanding the structure of the Hamiltonian of the marginals of the finite-volume Gibbs measure conditioned on the spin values of some set $S \subset \Lambda_{\mathrm{tot}}$ (cf.~\cite[Lemma 2, Lemma 3 and Lemma 4]{OR07} or see Section~\ref{s_LSI}). Because our structural assumptions~\eqref{e_cond_psi} and~\eqref{e_strictly_diag_dominant} on the single-site potentials are weaker, our proof needs new ingredients and more detailed arguments compared to~\cite{OR07}. \medskip
The first new ingredient in the proof of Theorem~\ref{p_mr_OR} is the covariance estimate of Proposition~\ref{p_algebraic_decay_correlations}. With this estimate it is possible to deduce algebraic decay of correlations, provided the interactions $M_{ij}$ also decay algebraically and the nonconvex perturbation~$\psi_i^b$ is small enough.\medskip
The second new ingredient in the proof of Theorem~\ref{p_mr_OR} is a uniform estimate of $\var_{\mu_{\Lambda}} (x_i)$ (see Lemma~\ref{p_est_var_ss}), which we reduce to a moment estimate due to Robin Nittka (cf.~\cite[Lemma 4.2]{MN} and Lemma~\ref{lem:moments}). The full proof of Theorem~\ref{p_mr_OR} is given in Section~\ref{s_LSI}.\medskip
However, Theorem~\ref{p_mr_OR} still calls for further improvements. Note that in the condition~\eqref{e_cond_alg_decay_OR} of Theorem~\ref{p_mr_OR} one needs to check the decay of correlations for all finite-volume Gibbs measures $\mu_\Lambda$ with $\Lambda \subset \Lambda_{\mathrm{tot}}$. Even if this is a very common assumption (see for example \cite[Condition (DS3)]{Yos01}), it may be a bit tedious to verify. Instead of the strong condition~\eqref{e_cond_alg_decay_OR}, one would like to have a weak condition like the one used for discrete spins in~\cite{MaOl}. The main difference between the weak and the strong condition for the decay of correlations is that for the weak condition it suffices to show that for a sufficiently large box $\Lambda$ the correlations decay nicely. The main advantage of the weak condition is that one does not have to control the decay of correlations for all growing subsets $\Lambda \to \mathds{Z}^d$. Therefore, the weak condition is easier to verify in practice. Unfortunately, we cannot get rid of the strong decay of correlations condition~\eqref{e_cond_alg_decay_OR} in the Otto-Reznikoff approach. However, we show how verifying the strong decay of correlations condition~\eqref{e_cond_alg_decay_OR} can be simplified by two comparison principles.\smallskip
The first comparison principle (see Lemma~\ref{p_comparison_covariances} below) shows that in the case of ferromagnetic interaction (i.e.~$M_{ij} \leq 0$ for all $i \neq j$) the correlations of a smaller system are controlled by the correlations of the larger system.
\begin{lemma}\label{p_comparison_covariances}
Assume that the formal Hamiltonian $H:\mathds{R}^{\mathds{Z}^d} \to \mathds{R} $ given by~\eqref{e_d_Hamiltonian} satisfies the Assumptions~\eqref{e_cond_psi}~-~\eqref{e_strictly_diag_dominant}. Additionally, assume that the interactions are ferromagnetic i.e.~$M_{ij} \leq 0$ for $i \neq j$. \newline
For arbitrary subsets $\Lambda \subset \Lambda_{\mathrm{tot}} \subset \mathds{Z}^d$, we consider the finite-volume Gibbs measures $\mu_{\Lambda}$ and $\mu_{\Lambda_{\mathrm{tot}}}$ with the same tempered state $x^{\mathds{Z}^d \backslash \Lambda_{\mathrm{tot}}}$. Then it holds for any $i,j \in \Lambda$
\begin{align} \label{e_comp_small_big_corr}
\cov_{\mu_{\Lambda}} (x_i,x_j) \leq \cov_{\mu_{\Lambda_{\mathrm{tot}}}}(x_i,x_j) .
\end{align}
\end{lemma}
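In the Gaussian case, Lemma~\ref{p_comparison_covariances} can be verified directly: for $H(x) = \frac{1}{2} x \cdot A x$ with $A_{ii}>0$ and $A_{ij} \leq 0$ for $i \neq j$, the covariances of $\mu_{\Lambda_{\mathrm{tot}}}$ are the entries of $A^{-1}$, while conditioning on the spins outside of $\Lambda$ yields the covariances $((A_{\Lambda \Lambda})^{-1})_{ij}$. A minimal numerical sketch with an illustrative matrix:
\begin{verbatim}
import numpy as np

n = 6
A = 2.0 * np.eye(n)                        # strictly diagonally dominant
for i in range(n - 1):                     # ferromagnetic couplings (<= 0)
    A[i, i + 1] = A[i + 1, i] = -0.5
cov_full = np.linalg.inv(A)                # covariances of mu_{Lambda_tot}
sub = np.array([0, 1, 2])                  # the smaller box Lambda
cov_cond = np.linalg.inv(A[np.ix_(sub, sub)])   # covariances of mu_Lambda
assert np.all(cov_cond <= cov_full[np.ix_(sub, sub)] + 1e-12)
\end{verbatim}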
The proof of Lemma~\ref{p_comparison_covariances} is given in Section~\ref{s_covariance_estimate}. The second comparison principle is rather standard. It states that correlations of a non-ferromagnetic system are controlled by the correlations of the associated ferromagnetic system:
\begin{lemma}\label{p:attractive_interact_dominates}
Assume that the formal Hamiltonian $H:\mathds{R}^{\mathds{Z}^d} \to \mathds{R} $ given by~\eqref{e_d_Hamiltonian} satisfies the Assumptions~\eqref{e_cond_psi}~-~\eqref{e_strictly_diag_dominant}.
Let $\mu_{\Lambda}$ denote the finite-volume Gibbs measure given by~\eqref{e_d_Gibbs_measure}. Additionally, consider the corresponding finite-volume Gibbs measure~$\mu_{\Lambda,|M|}$ with attractive interaction, i.e.~the associated formal Hamiltonian is given by
\begin{equation*}
H(x) = \sum_{i \in \mathds{Z}^d } \psi_i(x_i) + \frac{1}{2} \sum_{i \in \mathds{Z}^d} M_{ii} x_i^2 - \frac{1}{2} \sum_{i,j \in \mathds{Z}^d, \, i \neq j} |M_{ij}| x_i x_j.
\end{equation*}
Then it holds that for any $i,j \in \Lambda$
\begin{equation}
\label{eq:covariance_domination}
| \cov_{\mu_{\Lambda}} (x_i,x_j) | \leq \cov_{\mu_{\Lambda,|M|}} (x_i,x_j) .
\end{equation}
\end{lemma}
We omit the proof of the last lemma; it can be found, for example, in a recent work by Robin Nittka and the author (see~\cite[Lemma 2.1.]{MN}), where the argument of~\cite{HorMor} for discrete spins is adapted.
\begin{remark}
Usually, one considers finite-volume Gibbs measures for some inverse temperature $\beta >0$ i.e.
\begin{equation*}
\mu_\Lambda (d x^\Lambda ) = \frac{1}{Z_{\mu_\Lambda}} \mathrm{e}^{- \beta H(x^\Lambda, x^{\mathds{Z}^d \backslash \Lambda})} \mathrm{d} x^\Lambda \qquad \mbox{for } x^\Lambda \in \mathds{R}^{\Lambda}.
\end{equation*}
This case is also contained in the main results of the article, because the Hamiltonian $\beta H$ still satisfies the structural Assumptions~\eqref{e_cond_psi}~-~\eqref{e_strictly_diag_dominant}. Of course, the LSI constant of Theorem~\ref{p_mr_OR} would depend on the inverse temperature~$\beta$.
\end{remark}
\begin{remark}
Because we assume that the matrix $M= (M_{ij})$ is strictly diagonally dominant (cf.~\eqref{e_strictly_diag_dominant}), the full single-site potential
\begin{equation*}
\psi_i(x_i) + \frac{1}{2} M_{ii} x_i^2 = \frac{1}{2} M_{ii} x_i^2 + \psi_i^c(x_i) + \psi_i^b (x_i)
\end{equation*}
is perturbed strictly-convex. We want to note that this is the same structural assumption as used in the article~\cite{MO}.
\end{remark}
Let us turn to an application of Theorem~\ref{p_mr_OR}. We will show how the decay of correlations condition~\eqref{e_cond_alg_decay_OR} combined with the uniform LSI of Theorem~\ref{p_mr_OR} yields the uniqueness of the infinite-volume Gibbs measure. The statement that a uniform LSI yields the uniqueness of the Gibbs state is already known in the case of finite-range interaction (cf.~for example~\cite{Yos_2} and the conditions (DS1), (DS2), and (DS3) in~\cite{Yos01}). The related arguments of~\cite{Roy07},~\cite{Zitt}, and~\cite{Yos01} are based on semigroup properties of an associated diffusion process. Though the semigroup approach may well also work in the case of infinite-range interaction, we follow a more straightforward approach to deduce the uniqueness of the Gibbs measure. Before we formulate the precise statement (see Theorem~\ref{p_unique_Gibbs} below), we specify the notion of an infinite-volume Gibbs measure.
\begin{definition}[Infinite-Volume Gibbs measure]
Let $\mu$ be a probability measure on the state space $\mathds{R}^{\mathds{Z}^d}$ equipped with the standard product Borel sigma-algebra. For any finite subset $\Lambda \subset \mathds{Z}^d$ we decompose the measure $\mu$ into the conditional measure $\mu(dx^\Lambda| x^{\mathds{Z}^d \backslash \Lambda})$ and the marginal $\bar \mu (d x^{\mathds{Z}^d \backslash \Lambda})$. This means that for any test function $f$ it holds
\begin{equation*}
\int f(x) \mu (dx) = \int \int f(x) \mu(dx^\Lambda| x^{\mathds{Z}^d \backslash \Lambda}) \bar \mu (d x^{\mathds{Z}^d \backslash \Lambda}).
\end{equation*}
We say that the measure $\mu$ is the infinite-volume Gibbs measure associated to the Hamiltonian $H$, if the conditional measures $\mu(dx^\Lambda| x^{\mathds{Z}^d \backslash \Lambda})$ are given by the finite-volume Gibbs measures $\mu_{\Lambda}(dx^\Lambda)$ defined by~\eqref{e_d_Gibbs_measure} i.e.
\begin{equation*}
\mu(dx^\Lambda| x^{\mathds{Z}^d \backslash \Lambda}) = \mu_\Lambda (dx^\Lambda).
\end{equation*}
The identities in the last display are also called the Dobrushin-Lanford-Ruelle (DLR) equations.
\end{definition}
The precise statement connecting the decay of correlations with the uniqueness of the infinite-volume Gibbs measure is:
\begin{theorem}[Uniqueness of the infinite-volume Gibbs measure]\label{p_unique_Gibbs}
Under the same assumptions as in Theorem~\ref{p_mr_OR}, there is at most one Gibbs measure $\mu$ associated to the Hamiltonian $H$ satisfying the uniform bound
\begin{equation}~\label{e_sup_moment}
\sup_{i \in \mathds{Z}^d} \int (x_i)^2 \mu (dx) < \infty.
\end{equation}
\end{theorem}
The moment condition~\eqref{e_sup_moment} in Theorem~\ref{p_unique_Gibbs} is standard in the study of infinite-volume Gibbs measures (see for example~\cite{BHK82} and~\cite[Chapter~4]{Roy07}). It is relatively easy to show that the condition~\eqref{e_sup_moment} is invariant under adding a bounded random field to the Hamiltonian~$H$ (cf.~Remark~\ref{r_linear_term}). \smallskip
Theorem~\ref{p_unique_Gibbs} is one of the \emph{well-known} statements for which it is hard to find a proof. Therefore we state the proof in full detail in the Appendix~\ref{s_decay_and_uniqueness}. The argument does not need that the finite-volume Gibbs measures $\mu_{\Lambda}$ satisfy a uniform LSI. It suffices that the finite-volume Gibbs measures $\mu_{\Lambda}$ satisfy a uniform PI, which is a weaker condition than the LSI (see~Definition~\ref{d_SG}). \medskip
We also want to note that the main results of this article, namely Theorem~\ref{p_mr_OR} and Theorem~\ref{p_unique_Gibbs}, were applied in~\cite{MN} to deduce a uniform LSI and the uniqueness of the infinite-volume Gibbs measure of a one-dimensional lattice system with long-range interaction, generalizing Zegarlinski's result~\cite[Theorem~4.1.]{Zeg96} to interactions of infinite range.
\begin{remark}
In this article, we do not show the existence of an infinite-volume Gibbs measure. However, the author of this article believes that under the assumption~\eqref{e_sup_moment} the existence should follow by a compactness argument similar to the one used in~\cite{BHK82}.
\end{remark}
In order to avoid confusion, let us make the notation $a \lesssim b$ from above precise.
\begin{definition}\label{def:dep}
We will use the notation $a \lesssim b$ for quantities $a$ and $b$
to indicate that there is a constant $C \ge 0$
which depends only on a lower bound for $\delta$ and upper bounds for $|\psi_i^b|$, $|(\psi_i^b)'|$, and $\sup_i \sum_{j \in \mathds{Z}^d} |M_{ij}|$ such that $a \le C b$. In the same manner, if we assert the existence of certain constants, they may freely depend on the above mentioned quantities, whereas all other dependencies will be pointed out.
\end{definition}
We close the introduction by giving an outline of the article.\smallskip
\begin{itemize}
\item In Section~\ref{s_covariance_estimate}, we prove Lemma~\ref{p_comparison_covariances}. It contains the comparison principle relating covariances of smaller systems to those of larger systems.
\item In Section~\ref{s_LSI}, we consider the generalization of Theorem~\ref{p_OR_original} and give the proof of Theorem~\ref{p_mr_OR}.
\item In the Appendix~\ref{s_decay_and_uniqueness}, we consider the uniqueness of the infinite-volume Gibbs measure and give the proof of Theorem~\ref{p_unique_Gibbs}.
\item In the Appendix~\ref{s_BE_HS} we state some well-known facts about the LSI and the PI.
\end{itemize}
\section{Comparing covariances of a smaller system to covariances of a bigger system: Proof of Lemma~\ref{p_comparison_covariances}}\label{s_covariance_estimate}
The proof of Lemma~\ref{p_comparison_covariances} uses an idea of Sylvester of expanding the exponential function~\cite{Sylvester}. Sylvester used this idea to give a simple unified derivation of a number of correlation inequalities for ferromagnets.
\begin{proof}[Proof of Lemma~\ref{p_comparison_covariances}]
We fix the spin values $m_i$, $i \in \Lambda_{\mathrm{tot}} \backslash \Lambda$. Recall that in our notation $\mu_\Lambda$ coincides with the conditional measure
\begin{align*}
\mu_{\Lambda}(d x^{\Lambda}) = \mu_{\Lambda_{\mathrm{tot}}}(dx^\Lambda | m^{\Lambda_{\mathrm{tot}} \backslash \Lambda }).
\end{align*}
We introduce the auxiliary Hamiltonian $H_\alpha$, $\alpha >0$, by the formula
\begin{align*}
H_\alpha (x) = H(x) + \alpha \sum_{i \in \Lambda_{\mathrm{tot}}\backslash \Lambda }(x_i - m_i)^2.
\end{align*}
We denote by $\mu_{\alpha}$ the associated Gibbs measure active on the sites $\Lambda_{\mathrm{tot}}$. The measure $\mu_{\alpha}$ is given by the density
\begin{align*}
\mu_{\alpha} (dx) = \frac{1}{Z} \ \exp \left( - H_{\alpha} (x) \right) \ d x \qquad \mbox{for } x \in \mathbb{R}^{\Lambda_{\mathrm{tot}}}.
\end{align*}
Note that the measure $\mu_{\alpha}$ interpolates between the measure $\mu_{\Lambda}$ and $\mu_{\Lambda_{\mathrm{tot}}}$ in the sense that $\mu_0 = \mu_{\Lambda_{\mathrm{tot}}}$ and for any integrable function $f: \mathbb{R}^{\Lambda} \to \mathbb{R}$
\begin{align*}
\lim_{\alpha \to \infty} \int f(x^\Lambda) \mu_\alpha (dx^{\Lambda_{\mathrm{tot}}}) = \int f(x^\Lambda) \mu_{\Lambda_{\mathrm{tot}}}(dx^\Lambda | m^{\Lambda_{\mathrm{tot}} \backslash \Lambda}).
\end{align*}
So we formally have $\mu_{\infty} = \mu_{\Lambda}$. Therefore it also holds for $i, j \in \Lambda$
\begin{align*}
\lim_{\alpha \to \infty} \cov_{\mu_\alpha} (x_i, x_j) = \cov_{\mu_{\Lambda}} (x_i, x_j)
\end{align*}
This yields by the fundamental theorem of calculus that
\begin{align*}
\cov_{\mu_\infty} (x_i, x_j) - \cov_{\mu_0} (x_i, x_j) = \int_0^\infty \frac{d}{d \alpha} \cov_{\mu_\alpha} (x_i, x_j) \, d\alpha.
\end{align*}
We will now show that $$\frac{d}{d \alpha} \cov_{\mu_\alpha} (x_i, x_j) \leq 0,$$ which yields the statement of Lemma~\ref{p_comparison_covariances}.\newline
Indeed, direct calculation shows that
\begin{align*}
& \frac{d}{d \alpha} \cov_{\mu_\alpha} (x_i, x_j)\\
& \quad = \frac{d}{d \alpha} \left( \int x_i x_j \mu_{\alpha} - \int x_i \mu_{\alpha} \int x_j \mu_{\alpha} \right) \\
& \quad = - \cov_{\mu_\alpha} \left( x_i x_j - \int x_i \mu_{\alpha} \int x_j \mu_{\alpha} , \sum_{l \in \Lambda_{\mathrm{tot}} \backslash \Lambda }(x_l - m_l)^2 \right) \\
& \quad = - \cov_{\mu_\alpha} \left( x_i x_j , \sum_{l \in \Lambda_{\mathrm{tot}}\backslash \Lambda }(x_l - m_l)^2 \right).
\end{align*}
We will show now that
\begin{align}
\cov_{\mu_\alpha} \left( x_i x_j , \sum_{l \in \Lambda_{\mathrm{tot}} \backslash \Lambda }(x_l - m_l)^2 \right) \geq 0. \label{e_cond_covariances_crucial_estimate}
\end{align}
For this purpose, we follow the method by Sylvester~\cite{Sylvester} of expanding the interaction term. Recall that this method is also used to show for example that
\begin{align*}
\cov_{\mu_\alpha} \left( x_i , x_j \right) \geq 0,
\end{align*}
provided the interactions are ferromagnetic. By doubling the variables we get
\begin{align*}
& \cov_{\mu_\alpha} \left( x_i x_j , \sum_{l \in \Lambda_{\mathrm{tot}}\backslash \Lambda }(x_l - m_l)^2 \right) \\
& = \int ( x_i x_j - \tilde x_i \tilde x_j ) \sum_{l \in \Lambda_{\mathrm{tot}} \backslash \Lambda }(x_l - m_l)^2 \mu_\alpha (dx) \ \mu_\alpha (d \tilde x) \\
&= \frac{1}{Z^2}\int ( x_i x_j - \tilde x_i \tilde x_j ) \sum_{l \in \Lambda_{\mathrm{tot}} \backslash \Lambda }(x_l - m_l)^2 \exp (- H_\alpha (x) - H_\alpha (\tilde x)) dx d \tilde x
\end{align*}
Because the partition function $Z$ is positive, the sign of the covariance is determined by the integral on the right hand side of the last identity.
We change variables according to $x_i= (p_i + q_i)$ and $\tilde x_i = (p_i - q_i)$ and get
\begin{align}
& \int ( x_i x_j - \tilde x_i \tilde x_j ) \sum_{l \in \Lambda_{\mathrm{tot}}\backslash \Lambda }(x_l - m_l)^2 \exp (- H_\alpha (x) - H_\alpha (\tilde x)) dx d \tilde x \notag \\
& \quad = C \int ( (p_i + q_i) (p_j + q_j) - (p_i - q_i) (p_j - q_j) ) \notag \\
& \qquad \times \sum_{l \in \Lambda_{\mathrm{tot}}\backslash \Lambda }((p_l + q_l) - m_l)^2 \exp (- H_\alpha (p-q) - H_\alpha (p+q)) dp dq \notag \\
& \quad = C \int ( 2 p_i q_j + 2 q_i p_j) \notag \\
& \qquad \times \sum_{l \in \Lambda_{\mathrm{tot}}\backslash \Lambda }((p_l + q_l) - m_l)^2 \exp (- H_\alpha (p-q) - H_\alpha (p+q)) dp dq \label{e_integral_to_expand}
\end{align}
where $C>0$ is the constant from the transformation. Straightforward calculation reveals
\begin{align}
& \tilde H_\alpha (p,q) = H_\alpha (p-q) + H_\alpha (p+q) \notag \\
& = \sum_l \left( \psi_l (p_l-q_l) + \psi_l (p_l + q_l) \right) + \alpha \sum_{l \in \Lambda_{\mathrm{tot}}\backslash \Lambda} \left( (p_l - q_l - m_l)^2 + (p_l + q_l - m_l)^2 \right) \notag \\
& \qquad \qquad + p \cdot M p + q \cdot M q . \label{e_calc_tilde_H}
\end{align}
For convenience, we only consider the first summand on the right hand side of~\eqref{e_integral_to_expand}. The second summand can be estimated in the same way. \newline
Due to symmetry of $\tilde H_\alpha (p,q)$ in the $q_l$ variables it holds
\begin{align*}
\int q_j \sum_{l \in \Lambda_{\mathrm{tot}}\backslash \Lambda }((p_l + q_l) - m_l)^2 \exp (- \tilde H_\alpha (p,q)) dp dq = 0
\end{align*}
Therefore we get by first doubling the variable $p$ and then changing variables according to $p = r + \tilde q$ and $\tilde p = r - \tilde q$ that
\begin{align}
& \int 2 p_i q_j \sum_{l \in \Lambda_{\mathrm{tot}}\backslash \Lambda }((p_l + q_l) - m_l)^2 \exp (- \tilde H_\alpha (p,q)) dp dq \notag \\
& \quad = \frac{1}{Z} \int 2 (p_i - \tilde p_i ) q_j \sum_{l \in \Lambda_{\mathrm{tot}}\backslash \Lambda }((p_l + q_l) - m_l)^2 \notag \\
& \qquad \qquad \times \exp (- \tilde H_\alpha (p,q)- \tilde H_\alpha (\tilde p,q) ) d \tilde p dp dq \notag \\
& \quad = \frac{1}{Z} \int 4 \tilde q_i q_j \sum_{l \in \Lambda_{\mathrm{tot}}\backslash \Lambda }((r_l+\tilde q_l + q_l) - m_l)^2 \notag \\
& \qquad \qquad \times \exp (- \tilde{\tilde{H}}_\alpha (r,\tilde q,q) ) d \tilde q dq dr , \label{e_integral_to_expand_finally}
\end{align}
where the Hamiltonian $\tilde{\tilde{H}}_\alpha (r,\tilde q,q)$ is given by
\begin{align*}
& \tilde{\tilde{H}}_\alpha (r,\tilde q,q) \\
& = \tilde H_\alpha (r + \tilde q , q) + \tilde H_\alpha (r - \tilde q , q) \\
& = H_\alpha (r + \tilde q-q) + H_\alpha (r + \tilde q+q) + H_\alpha (r - \tilde q-q) + H_\alpha (r - \tilde q+q)
\end{align*}
As seen in~\eqref{e_calc_tilde_H} above, the Hamiltonian $\tilde{\tilde{H}}_\alpha (r,\tilde q,q)$ contains no mixed terms in the variables $r$, $\tilde q$ and $q$. More precisely, $\tilde{\tilde{H}}_\alpha (r,\tilde q,q)$ contains exactly three interaction terms, namely
\begin{align*}
2 r \cdot M r, \qquad 2 \tilde q \cdot M \tilde q , \qquad \mbox{and} \qquad 2 q \cdot M q.
\end{align*}
So we can rewrite $\tilde{\tilde{H}}_\alpha (r,\tilde q,q)$ as
\begin{align*}
\tilde{\tilde{H}}_\alpha (r,\tilde q,q) & = F(r,\tilde q, q) + 2 r \cdot M r + 2 \tilde q \cdot M \tilde q + 2 q \cdot M q,
\end{align*}
where the function $F$ is of the form
\begin{align*}
F(r, \tilde q, q) = \sum_l \tilde{\psi}_l (r_l, \tilde q_l , q_l)
\end{align*}
for some single-site potentials $\tilde{\psi}_l$ that are symmetric in the variables $\tilde q_l$ and $q_l$. Expanding the term
\begin{align*}
\exp( - 2 \tilde q \cdot M \tilde q - 2 q \cdot M q)
\end{align*}
on the right hand side of~\eqref{e_integral_to_expand_finally} yields a sum of terms of the form
\begin{align*}
& -M_{mn} \int q_{l_1}^{n_1} \cdots q_{l_k}^{n_k} \tilde q_{\tilde l_1}^{\tilde n_1} \cdots \tilde q_{\tilde l_k}^{\tilde n_k} ((r_l+\tilde q_l + q_l) - m_l)^2 \\
& \qquad \times \exp \left( - \sum_l \tilde{\psi}_l (r_l, \tilde q_l , q_l) - 2 r \cdot M r\right) d\tilde q dq dr.
\end{align*}
Because the functions $\tilde{\psi}_l$ are symmetric in the variables $\tilde q_l$ and $q_l$, any term with an odd exponent vanishes. Hence, the exponents $n_1 ,\ldots, n_k$ and $\tilde n_1 ,\ldots, \tilde n_k$ are all even. Because $-M_{mn} \geq 0$, due to the fact that the interaction is ferromagnetic, we get
\begin{align*}
& -M_{mn} \int q_{l_1}^{n_1} \cdots q_{l_k}^{n_k} \tilde q_{\tilde l_1}^{\tilde n_1} \cdots \tilde q_{\tilde l_k}^{\tilde n_k} ((r_l+\tilde q_l + q_l) - m_l)^2 \\
& \qquad \times \exp \left( - \sum_l \tilde{\psi}_l (r_l, \tilde q_l , q_l) - 2 r \cdot M r\right) d\tilde q dq dr \geq 0.
\end{align*}
All in all, the last inequality yields the desired estimate~\eqref{e_cond_covariances_crucial_estimate} and therefore completes the proof.
\end{proof}
\section{The Logarithmic Sobolev inequality: proof of Theorem~\ref{p_mr_OR}}\label{s_LSI}
This section is devoted to the proof of Theorem~\ref{p_mr_OR}. We adapt the strategy of Otto \& Reznikoff~\cite[Theorem 3]{OR07} to our situation.
Recall that, compared to Theorem~\ref{p_OR_original}, we work with weaker assumptions:
\begin{itemize}
\item The single-site potentials $\psi_i$ are only perturbed convex and not perturbed super-quadratic (cf.~\eqref{e_cond_psi} vs.~\eqref{e_single_site_potential_otto}). Also note that in Theorem~\ref{p_OR_original} it is assumed that $M_{ii}=0$, whereas in Theorem~\ref{p_mr_OR} it is assumed that $M_{ii} \geq \delta >0$ (cf.~\eqref{e_strictly_diag_dominant}). In order to compare both statements, it makes sense to think of the single-site potentials in Theorem~\ref{p_mr_OR} as
\begin{align*}
\psi_i (x_i) + \frac{1}{2} M_{ii} x_i^2.
\end{align*}
\item The interactions $M_{ij}$ decay only algebraically and not exponentially (cf.~\eqref{e_decay_inter_Otto} vs.~\eqref{e_cond_inter_alg_decay_OR}).
\item The correlations are decaying only algebraically and not exponentially (cf. \eqref{e_decay_corr_Otto}~vs.~\eqref{e_cond_alg_decay_OR}).
\end{itemize}
The algebraic decay of interactions and correlations is easy to incorporate in the original argument of~\cite{OR07}, whereas using quadratic and not super-quadratic potentials represents the main technical challenge of the proof. \medskip
The crucial ingredients in the proof of~\cite[Theorem 3]{OR07} are two auxiliary lemmas, namely \cite[Lemma 3 and Lemma 4]{OR07}. A careful analysis of the proof of~\cite{OR07} shows that only this part of the argument is sensitive to weakening the assumptions. Once the analog statements under weaker assumptions (see Lemma~\ref{p_crucial_lemma _1_OR} and Lemma~\ref{p_crucial_lemma _2_OR} below) are verified, the rest of the argument of~\cite[Theorem 3]{OR07} works the same and is skipped in this article. The remaining part of the argument is based on a recursive application of a general principle, namely the two-scale criterion for the LSI (cf.~\cite[Theorem 1]{OR07}), and is therefore not sensitive to changing the assumptions. Hence for the proof of Theorem~\ref{p_mr_OR} it suffices to show that the auxiliary lemmas \cite[Lemma 3 and Lemma 4]{OR07} remain valid under the weakened assumptions. \medskip
Let us turn to the first auxiliary lemma (cf.~\cite[Lemma~3]{OR07} or Lemma~\ref{p_crucial_lemma _1_OR} below). It states that the single-site conditional measures satisfy a LSI uniformly in the system size and the conditioned spin values. The argument of \cite[Lemma~3]{OR07} by Otto \& Reznikoff is heavily based on the assumption that the single-site potential $\psi$ is super-quadratic. At this point we provide a new, different, and more elaborate argument showing that the statement of~\cite[Lemma~3]{OR07} remains valid if the single-site potential~$\psi$ is only perturbed quadratic. One could say that the proof of Lemma~\ref{p_crucial_lemma _1_OR} represents the main new ingredient compared to the argument of~\cite{OR07}.
\begin{lemma}[Generalization of~\mbox{\cite[Lemma~3]{OR07}}]\label{p_crucial_lemma _1_OR}
We assume the same conditions as in Theorem~\ref{p_mr_OR}. We consider for an arbitrary subset $S \subset \Lambda_{\mathrm{tot}}$ and site $i \in S$ the single-site conditional measure
\begin{align*}
\bar \mu (dx_i | x^S) := \frac{1}{Z} \exp (- \bar H (x^S)) \, d x_i
\end{align*}
with Hamiltonian
\begin{align}\label{d_coarse_grained_hamiltonian}
\bar H ( x^S) = - \log \int \exp(- H(x)) dx^{\Lambda_{\mathrm{tot}} \backslash S}.
\end{align}
Then the single-site conditional measure $\bar \mu (dx_i | x^S)$ satisfies a LSI with constant $\varrho>0$ (cf.~Definition~\ref{d_LSI}) that is uniform in $\Lambda_{\mathrm{tot}}$, $S$ and the conditioned spins~$x^S$.
\end{lemma}
We state the proof of Lemma~\ref{p_crucial_lemma _1_OR} in Section~\ref{s_crucial_lemma_1_OR}. \medskip
Let us turn to the second auxiliary Lemma (cf.~\cite[Lemma~4]{OR07} or Lemma~\ref{p_crucial_lemma _2_OR} from below). For some fixed but large enough integer $K$ let us consider the $K$-sublattice $\Lambda_K$ given by
\begin{align}\label{e_def_sublattice}
\Lambda_K := K \mathbb{Z}^d \cap \Lambda_{\mathrm{tot}}.
\end{align}
Let $S$ be an arbitrary subset satisfying $\Lambda_K \subset S \subset \Lambda_{\mathrm{tot}}$. The second auxiliary lemma states that the measure on $\Lambda_K$, which is conditioned on the spins in $S \backslash \Lambda_{K}$ and averaged over the spins in $\Lambda_{\mathrm{tot}} \backslash S$, satisfies a LSI with constant $\varrho>0$ uniformly in $S$ and the conditioned spins:
\begin{lemma}[Generalization of~\mbox{\cite[Lemma~4]{OR07}}]\label{p_crucial_lemma _2_OR}
We assume the same conditions as in Theorem~\ref{p_mr_OR}. Let $S$ be an arbitrary set with $\Lambda_K \subset S \subset \Lambda_{\mathrm{tot}}$. Consider the conditional measure
\begin{align*}
\bar \mu (dx^{\Lambda_K} | x^{ S \backslash \Lambda_K}) := \frac{1}{Z} \exp (- \bar H ( x^S)) \, d x^{\Lambda_K}
\end{align*}
with Hamiltonian
\begin{align*}
\bar H ( x^S) = - \log \int \exp(- H(x)) dx^{\Lambda_{\mathrm{tot}} \backslash S}.
\end{align*}
Then there is some integer $K$ such that the conditional measure $ \bar \mu (dx^{\Lambda_K} | x^{S \backslash \Lambda_K})$ satisfies a LSI with constant $\varrho>0$ (cf.~Definition~\ref{d_LSI}) that is uniform in $\Lambda_{\mathrm{tot}}$, $S$ and the conditioned spins~$x^{S \backslash \Lambda_K}$.
\end{lemma}
\subsection{Proof of Lemma~\ref{p_crucial_lemma _1_OR} and Lemma~\ref{p_crucial_lemma _2_OR}}\label{s_crucial_lemma_1_OR}
Let us first turn to the proof of Lemma~\ref{p_crucial_lemma _1_OR}. For the argument we need two new ingredients. The first one is the covariance estimate of Proposition~\ref{p_algebraic_decay_correlations} below. The second one is that the variances of our kind of Gibbs measure are uniformly bounded (see Lemma~\ref{p_est_var_ss} below). \medskip
Let us now state the covariance estimate of Proposition~\ref{p_algebraic_decay_correlations}.
\begin{proposition}\label{p_algebraic_decay_correlations}
Let $\Lambda \subset \mathds{Z}^d$ be an arbitrary finite subset of the $d$-dimensional lattice $\mathds{Z}^d$. We consider a probability measure $d\mu:= Z^{-1} \exp (-H(x)) \ dx$ on $\mathds{R}^\Lambda$. We assume that
\begin{itemize}
\item the conditional measures $\mu(dx_i | \bar x_i )$, $i \in \Lambda $, satisfy a uniform PI with constant $\varrho_i>0$.
\item the numbers $\kappa_{ij}$, $i \neq j, i,j \in \Lambda$, satisfy
\begin{equation*}
|\nabla_i \nabla_j H(x)|\leq \kappa_{ij} < \infty
\end{equation*}
uniformly in $x \in \mathds{R}^\Lambda$. Here, $|\cdot|$ denotes the operator norm of a bilinear form.
\item the numbers $\kappa_{ij}$ decay algebraically in the sense of
\begin{align}
\label{e_algeb_decay_of_kappa}
\kappa_{ij} \lesssim \frac{1}{|i-j|^{d+\alpha} +1}
\end{align}
for some $\alpha>0$.
\item the symmetric matrix $A=(A_{ij})_{i,j \in \Lambda}$ defined by
\begin{equation*}
A_{ij} =
\begin{cases}
\varrho_i, & \mbox{if }\; i=j , \\
-\kappa_{ij}, & \mbox{if } \; i \neq j,
\end{cases}
\end{equation*}
is strictly diagonally dominant i.e.~for some $\delta > 0$ it holds for any $i \in \Lambda$
\begin{equation}\label{e_strictly_diag_dominant_A}
\sum_{j \in \Lambda, j \neq i} |A_{ij}| + \delta \le A_{ii}.
\end{equation}
\end{itemize}
Then for all functions $f=f(x_i)$ and $g=g(x_j)$, $i, j \in \Lambda$,
\begin{equation}
\label{e_covariance_decay_algebraic}
| \cov_{\mu}(f,g) | \lesssim (A^{-1})_{ij} \left( \int |\nabla_i f|^2 \ d \mu \right)^{\frac{1}{2}} \left( \int |\nabla_j g |^2 \ d \mu \right)^{\frac{1}{2}}
\end{equation}
and for any $i, j \in \Lambda$
\begin{align}
\label{e_decay_M_inverse}
|(A^{-1})_{ij}| \lesssim \frac{1}{|i-j|^{d + \tilde \alpha}+1},
\end{align}
for some $\tilde \alpha >0$.
\end{proposition}
For the proof of Proposition~\ref{p_algebraic_decay_correlations} we refer the reader to the article~\cite{Cov_est}.
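The decay~\eqref{e_decay_M_inverse} of the entries of $A^{-1}$ can also be observed numerically. A minimal sketch with illustrative one-dimensional parameters (so that $d + \alpha = 3$):
\begin{verbatim}
import numpy as np

n = 200
idx = np.arange(n)
dist = np.abs(idx[:, None] - idx[None, :]).astype(float)
A = -1.0 / (dist**3 + 1.0)                # kappa_ij ~ |i-j|^{-(d+alpha)}
np.fill_diagonal(A, 0.0)
np.fill_diagonal(A, np.abs(A).sum(axis=1) + 0.5)  # strict dominance
Ainv = np.linalg.inv(A)
for r in (1, 10, 100):
    print(r, abs(Ainv[0, r]))             # decays algebraically in r
\end{verbatim}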
\begin{proof}[Proof of Lemma~\ref{p_crucial_lemma _1_OR}]
The strategy is to show that the Hamiltonian~$\bar H_i ( x_i )$ of the single-site conditional measure~$ \bar \mu (dx_i | x^S)$ is perturbed strictly convex in the sense that there exists a splitting
\begin{align}
\label{e_decom_single_site_hamitlonian}
\bar H_i ( x_i) = \tilde \psi_i^c (x_i) + \tilde \psi_i^b (x_i)
\end{align}
into the sum of two functions $\tilde \psi_i^c (x_i)$ and $\tilde \psi_i^b (x_i)$ satisfying
\begin{align}
\label{e_ssp_perturbed_strictly_convex}
(\tilde \psi_i^c)'' (x_i) \geq c >0 \quad \mbox{and} \quad |\tilde \psi_i^b (x_i)| \leq C < \infty
\end{align}
uniformly in $x_i \in \mathbb{R}$, $i\in S$, $\Lambda_{\mathrm{tot}}$ and $S$.\newline
Once~\eqref{e_decom_single_site_hamitlonian} and~\eqref{e_ssp_perturbed_strictly_convex} are validated, the statement of Lemma~\ref{p_crucial_lemma _1_OR} follows simply from a combination of the criterion of Bakry-\'Emery for LSI and the Holley-Stroock perturbation principle (cf.~Appendix~\ref{s_BE_HS} and the proof of~\cite[Lemma 1]{OR07} for details). \medskip
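Quantitatively, the criterion of Bakry-\'Emery yields the LSI with constant $c$ for the measure with Hamiltonian $\tilde \psi_i^c$, and the perturbation principle of Holley-Stroock degrades this constant by at most the factor $\exp(-\operatorname{osc} \tilde \psi_i^b) \geq \mathrm{e}^{-2C}$, so that one may take $\varrho = c \, \mathrm{e}^{-2C}$.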
The aim is to decompose $\bar H_i$ such that~\eqref{e_decom_single_site_hamitlonian} and~\eqref{e_ssp_perturbed_strictly_convex} is satisfied. For that purpose, let us define the auxiliary Hamiltonian $H_{\mathrm{aux}} (x)$, $x \in \mathbb{R}^{\Lambda_{\mathrm{tot}}}$, as
\begin{align}\label{e_def_H_aux}
H_{\mathrm{aux}} (x) = H(x) - \sum_{j: |j-i| \leq R} \psi_j^b (x_j).
\end{align}
Note that $H_{\mathrm{aux}}$ is strictly convex, if restricted to spins $x_j$ with $|i-j| \leq R$.\newline
For convenience, let us introduce the notation $S^c := \Lambda_{\mathrm{tot}} \backslash S$. The Hamiltonian $\bar H_i$ is then written as
\begin{align*}
\bar H_i (x_{i}) & \overset{\eqref{d_coarse_grained_hamiltonian}}{=} - \log \int \exp(- H(x)) dx^{S^c} \\
& = \underbrace{- \log \int \exp(- H_{\mathrm{aux}}(x)) dx^{S^c}}_{=: \tilde \psi_i^c (x_i)}\\
& \qquad \underbrace{- \log \frac{\int \exp(- H(x)) dx^{S^c}}{\int \exp(- H_{\mathrm{aux}}(x)) dx^{S^c}}}_{=: \tilde \psi_i^b (x_i) }.
\end{align*}
Now, let us check that the functions $\tilde \psi_i^c (x_i)$ and $\tilde \psi_i^b (x_i)$ defined by the last identity satisfy the structural condition~\eqref{e_ssp_perturbed_strictly_convex}. \medskip
Let us consider first the function $\tilde \psi_i^b (x_i)$. We introduce the auxiliary measure $\mu_{\mathrm{aux}}$ by
\begin{align*}
\mu_{\mathrm{aux}} (dx^{S^c}) = \frac{1}{Z} \exp \left( - H_{\mathrm{aux}} (x) \right) dx^{S^c}.
\end{align*}
Then it follows from the definition~\eqref{e_def_H_aux} of $H_{\mathrm{aux}}$ that
\begin{align*}
\left| \tilde \psi_i^b (x_i) \right| & \leq \left| \log \int \exp \Big(- \sum_{j:|j-i| \leq R} \psi_j^b (x_j)\Big) \ \mu_{\mathrm{aux}} (dx^{S^c}) \right| \\
& \leq \sum_{j:|j-i|\leq R} \| \psi_j^b \|_{\infty} \leq (2R+1)^d C.
\end{align*}
It is now left to show that~$\tilde \psi_i^c (x_i)$ is uniformly strictly convex. Direct calculation yields
\begin{align}
\label{e_second_deriv_tilde_psi_c}
\frac{d^2}{dx_i^2} \tilde \psi_i^c (x_i) &= \int \frac{d^2}{dx_i^2} H_{\mathrm{aux}} (x) \mu_{\mathrm{aux}} - \var_{\mu_{\mathrm{aux}}} \left( \frac{d}{dx_i} H_{\mathrm{aux}} (x) \right) .
\end{align}
We decompose the measure $\mu_{\mathrm{aux}}$ into
\begin{align*}
& \mu_{\mathrm{aux}} ( dx^{S^c}) \\
& \quad = \mu_{\mathrm{aux}} \left( (dx_j)_{j \in S^c, |j-i| \leq R} \ | \ (x_j)_{j \in S^c, |j-i|> R} \right ) \bar \mu_{\mathrm{aux}} ( (dx_j)_{j \in S^c, |j-i|> R}).
\end{align*}
Here, $\mu_{\mathrm{aux}} \left( (dx_j)_{j \in S^c, |j-i| \leq R} \ | \ (x_j)_{j \in S^c, |j-i | > R} \right )$ denotes the conditional measure given by
\begin{align*}
& \mu_{\mathrm{aux}} \left( (dx_j)_{j \in S^c, |j-i| \leq R} \ | \ (x_j)_{j \in S^c, |j-i|> R} \right ) \\
& \qquad = \frac{1}{Z} \exp\left( - H_{\mathrm{aux}} (x) \right) \ \otimes_{\substack{j \in S^c, \\ |j-i| \leq R}} dx_j ,
\end{align*}
whereas $\bar \mu_{\mathrm{aux}} ( (dx_j)_{j \in S^c, |j-i|> R})$ denotes the marginal measure given by
\begin{align*}
& \bar \mu_{\mathrm{aux}} ( (dx_j)_{j \in S^c, |j-i|> R }) \\ & \qquad = \frac{1}{Z} \left( \int \exp\left( - H_{\mathrm{aux}} (x) \right) \ \otimes_{\substack{l \in S^c, \\ |l-i| \leq R}} dx_l \right) \ \ \otimes_{\substack{j \in S^c, |j-i|> R }} dx_j .
\end{align*}
For convenience, we write $\mu_{\mathrm{aux},c}$ instead of the conditional measure $\mu_{\mathrm{aux}} \left( (dx_j)_{j \in S^c, |j-i| \leq R} \ | \ (x_j)_{j \in S^c, |j-i|> R} \right )$.
Applying the decomposition to~\eqref{e_second_deriv_tilde_psi_c} yields
\begin{align}
\frac{d^2}{dx_i^2} \tilde \psi_i^c (x_i) &= \int \left( \int \frac{d^2}{dx_i^2} H_{\mathrm{aux}} (x) \mu_{\mathrm{aux},c} - \var_{\mu_{\mathrm{aux},c}} \left( \frac{d}{dx_i} H_{\mathrm{aux}} (x) \right) \right) \ \bar \mu_{\mathrm{aux}} \notag \\
& \qquad - \var_{\bar \mu_{\mathrm{aux}}} \left( \int \frac{d}{dx_i} H_{\mathrm{aux}} (x) \mu_{\mathrm{aux},c}\right). \label{e_decomp_desintegration}
\end{align}
The first term on the right hand side of the last identity is controlled easily. Note that the Hamiltonian $H_{\mathrm{aux}}$ is strictly-convex, if restricted to spins $x_j$ with $|j-i|\leq R$. So it follows from a standard argument based on the Brascamp--Lieb inequality that (for details see for example~\cite[Chapter 3]{Dizdar})
\begin{align*}
\int \frac{d^2}{dx_i^2} H_{\mathrm{aux}} (x) \mu_{\mathrm{aux},c} - \var_{\mu_{\mathrm{aux},c}} \left( \frac{d}{dx_i} H_{\mathrm{aux}} (x) \right) \geq c >0
\end{align*}
uniformly in $R$ and therefore also
\begin{align*}
\int \left( \int \frac{d^2}{dx_i^2} H_{\mathrm{aux}} (x) \mu_{\mathrm{aux},c} - \var_{\mu_{\mathrm{aux},c}} \left( \frac{d}{dx_i} H_{\mathrm{aux}} (x) \right) \right) \ \bar \mu_{\mathrm{aux}} \geq c >0
\end{align*}
uniformly in $R$. \medskip
Let us now turn to the second term in~\eqref{e_decomp_desintegration}. A straightforward calculation yields
\begin{align*}
\frac{d}{dx_i} H_{\mathrm{aux}} (x) & = (\psi_i^c)' (x_i) + M_{ii} x_i + s_i + \sum_{j \in \Lambda_{\mathrm{tot}}, \, j \neq i} M_{ij} x_j,
\end{align*}
where $s_i$ denotes the constant contribution of the fixed spins outside of~$\Lambda_{\mathrm{tot}}$.
Because the measures $\bar \mu_{\mathrm{aux}}$ and $\mu_{\mathrm{aux},c}$ live on a subset of $S^c$, $i \in S$, and the variance is invariant under adding constants, we have
\begin{align}
& \var_{\bar \mu_{\mathrm{aux}}} \left( \int \frac{d}{dx_i} H_{\mathrm{aux}} (x) \mu_{\mathrm{aux},c}\right) = \var_{\bar \mu_{\mathrm{aux}}} \left( \int \sum_{j \in S^c} M_{ij} x_j \ \mu_{\mathrm{aux},c}\right) \notag \\
& \quad \leq 2 \var_{\bar \mu_{\mathrm{aux}}} \left( \sum_{\substack{j \in S^c, \\ |j-i| > R}} M_{ij} x_j \right) \notag \\
& \qquad + 2 \var_{\bar \mu_{\mathrm{aux}}} \left( \int \sum_{\substack{j \in S^c , \\ |j-i| \leq R}} M_{ij} x_j \ \mu_{\mathrm{aux},c}\right). \label{e_decomp_var_crucail_lemma_1_OR}
\end{align}
The first summand on the right hand side of the last inequality is estimated in a straightforward manner i.e.
\begin{align*}
& \var_{\bar \mu_{\mathrm{aux}}} \left( \sum_{\substack{j \in S^c, \\ |j-i| > R}} M_{ij} x_j \right) \\
& = \var_{ \mu_{\mathrm{aux}}} \left( \sum_{\substack{j \in S^c, \\ |j-i| > R}} M_{ij} x_j \right) \\
& = \sum_{\substack{j \in S^c, \\ |j-i| > R}} M_{ij} \sum_{\substack{l \in S^c, \\ |l-i| > R}} M_{il} \cov_{\mu_{\mathrm{aux}}} (x_j,x_l) \\
& \leq \sum_{\substack{j \in S^c, \\ |j-i| > R}} \sum_{\substack{l \in S^c, \\ |l-i| > R}} |M_{ij}| |M_{il}| \left( \var_{\mu_{\mathrm{aux}}} (x_j) \right)^{\frac{1}{2}} \left( \var_{\mu_{\mathrm{aux}}} (x_l) \right)^{\frac{1}{2}}\\
& \overset{\eqref{e_est_ss_var}}{\leq} C \sum_{\substack{j \in S^c, \\ |j-i| > R}} \sum_{\substack{l \in S^c, \\ |l-i| > R}} |M_{ij}| |M_{il}| \\
& \overset{\eqref{e_cond_inter_alg_decay_OR}}{\leq} C \sum_{\substack{j \in S^c, \\ |j-i| > R}} \sum_{\substack{l \in S^c, \\ |l-i| > R}} \frac{1}{|i-j|^{d + \alpha} +1 } \ \frac{1}{|i-l|^{d + \alpha} +1 } \\
& \leq C \frac{1}{R^{\frac{\alpha}{2}}}.
\end{align*}
Here we have used the second new ingredient, namely the uniform variance estimate~\eqref{e_est_ss_var} stated in Lemma~\ref{p_est_var_ss} below. Note that Lemma~\ref{p_est_var_ss} also applies to the measure~$\mu_{\mathrm{aux}}$ because~$\mu_{\mathrm{aux}}$ satisfies the same structural assumptions as the measure~$\mu_{\Lambda}$. \newline
Let us consider now the second summand on the right hand side of~\eqref{e_decomp_var_crucail_lemma_1_OR}. By doubling the variables we get
\begin{align*}
& \var_{\bar \mu_{\mathrm{aux}}} \left( \int \sum_{\substack{j \in S^c , \\ |j-i| \leq R}} M_{ij} x_j \ \mu_{\mathrm{aux},c}\right) \\
& \quad = \frac{1}{2} \int \Big( \int \sum_{\substack{j \in S^c , \\ |j-i| \leq R}} M_{ij} x_j \ \mu_{\mathrm{aux},c} (dx| y) \\
& \quad \qquad - \int \sum_{\substack{j \in S^c , \\ |j-i| \leq R}} M_{ij} x_j \ \mu_{\mathrm{aux},c} (dx| \bar y )\Big)^2 \bar \mu_{\mathrm{aux}} (dy) \bar \mu_{\mathrm{aux}} (d \bar y).
\end{align*}
By interpolation we have
\begin{align*}
& \int \sum_{\substack{j \in S^c , \\ |j-i| \leq R}} M_{ij} x_j \ \mu_{\mathrm{aux},c} (dx| y)- \int \sum_{\substack{j \in S^c , \\ |j-i| \leq R}} M_{ij} x_j \ \mu_{\mathrm{aux},c} (dx| \bar y ) \\
& \quad = \int_0^1 \frac{d}{dt} \int \sum_{\substack{j \in S^c , \\ |j-i| \leq R}} M_{ij} x_j \ \mu_{\mathrm{aux},c} (dx| ty + (1-t) \bar y) \ dt \\
& \quad = \int_0^1 \cov_{\mu_{\mathrm{aux},c} (dx| ty + (1-t) \bar y)} \left( \sum_{\substack{j \in S^c , \\ |j-i| \leq R}} M_{ij} x_j, \sum_{\substack{k,l \in S^c , \\ |k-i| \leq R \\ |l-i| \geq R }} x_k M_{kl} (\bar y_l - y_l) \right) \ dt \\
& \quad = \int_0^1 \sum_{\substack{j \in S^c , \\ |j-i| \leq R}} \sum_{\substack{k,l \in S^c , \\ |k-i| \leq R \\ |l-i| \geq R }} M_{ij} M_{kl} (\bar y_l - y_l) \cov_{\mu_{\mathrm{aux},c} (dx| ty + (1-t) \bar y)} \left( x_j, x_k \right) \ dt.
\end{align*}
Without loss of generality we may assume that the interaction is ferromagnetic, i.e.~$M_{kl} \leq 0$ for all $k \neq l$ (else use $M_{kl}\leq|M_{kl}|$ and Lemma~\ref{p:attractive_interact_dominates}). Note that the measure $\mu_{\mathrm{aux},c}$ has strictly convex single-site potentials. Therefore its single-site conditional measures satisfy a LSI with constant $\frac{1}{2}M_{ii}$ by the Bakry-\'Emery criterion (see Theorem~\ref{local:thm:BakryEmery}). Because the interaction is strictly diagonally dominant in the sense of~\eqref{e_strictly_diag_dominant}, an application of Proposition~\ref{p_algebraic_decay_correlations} yields that the covariance can be estimated as
\begin{align*}
\cov_{\mu_{\mathrm{aux},c} (dx| ty + (1-t) \bar y)} \left( x_j, x_k \right) \leq (M^{-1})_{jk},
\end{align*}
where the matrix $M$ is given by the elements
\begin{align*}
M_{ln} \quad \mbox{for} \quad l , n \in S^c , \ |l-i| \leq R, \ |n-i| \leq R \quad \mbox{or} \quad l=n=i.
\end{align*}
We want to note that by a simple standard result (see for example~\cite[Lemma 5]{OR07} or~\cite[Lemma 4.3]{MN}) it holds that $(M^{-1})_{kl} \geq 0$ for all $k,l$. Using this information, we get by an application of Jensen's inequality that
\begin{align*}
& \var_{\bar \mu_{\mathrm{aux}}} \left( \int \sum_{\substack{j \in S^c , \\ |j-i| \leq R}} M_{ij} x_j \ \mu_{\mathrm{aux},c}\right) \\
& \quad \leq C \int_0^1 \sum_{\substack{j \in S^c , \\ |j-i| \leq R}} \sum_{\substack{k,l \in S^c , \\ |k-i| \leq R \\ |l-i| \geq R }} M_{ij} M_{kl} (M^{-1})_{jk} \int (\bar y_l - y_l)^2 \ \bar \mu_{\mathrm{aux}} (dy) \bar \mu_{\mathrm{aux}} (d \bar y)\ dt \ \\
& \quad \leq C \sum_{\substack{j \in S^c , \\ |j-i| \leq R}} \sum_{\substack{k,l \in S^c , \\ |k-i| \leq R \\ |l-i| \geq R }} M_{ij} M_{kl} (M^{-1})_{jk} \var_{\bar \mu_{\mathrm{aux}}}( y_l)\\
& \quad \leq C \sum_{\substack{j \in S^c , \\ |j-i| \leq R}} \sum_{\substack{k,l \in S^c , \\ |k-i| \leq R \\ |l-i| \geq R }} M_{ij} M_{kl} (M^{-1})_{jk} \var_{ \mu_{\mathrm{aux}}}( y_l)\\
& \quad \overset{\eqref{e_est_ss_var}}{\leq} C \sum_{\substack{j \in S^c , \\ |j-i| \leq R}} \sum_{\substack{k,l \in S^c , \\ |k-i| \leq R \\ |l-i| \geq R }} M_{ij} M_{kl} (M^{-1})_{jk} \\
& \quad \overset{~\eqref{e_cond_inter_alg_decay_OR}, \eqref{e_decay_M_inverse}}{\leq} C \sum_{\substack{j \in S^c , \\ |j-i| \leq R}} \sum_{\substack{k,l \in S^c , \\ |k-i| \leq R \\ |l-i| \geq R }} \frac{1}{|i-j|^{d+\alpha} +1} \ \frac{1}{|k-l|^{d+\alpha} +1} \frac{1}{|j-k|^{d+\tilde \alpha} +1} \\
& \quad \leq \frac{C}{R^{\frac{\tilde \alpha}{2}}}.
\end{align*}
Note that here we also used the second ingredient, namely the covariance estimates~\eqref{e_covariance_decay_algebraic} and~\eqref{e_decay_M_inverse}.
Hence, both terms on the right hand side of~\eqref{e_decomp_var_crucail_lemma_1_OR} are arbitrarily small if we choose $R$ large enough. Overall this leads to the desired statement (cf.~\eqref{e_decomp_desintegration} ff.)
\begin{align*}
\frac{d^2}{dx_i^2} \tilde \psi_i^c (x_i) \geq c >0,
\end{align*}
which completes the argument.
\end{proof}
In the proof of Lemma~\ref{p_crucial_lemma _1_OR}, we needed the following auxiliary statement.
\begin{lemma}\label{p_est_var_ss}
Under the same assumptions as in Lemma~\ref{p_crucial_lemma _1_OR}, it holds that for all $i \in \Lambda$
\begin{align}\label{e_est_ss_var}
\var_{\mu_\Lambda} (x_i) \leq C,
\end{align}
where the bound is uniform in $\Lambda$ and only depends on the constants appearing in~\eqref{e_cond_psi} and in~\eqref{e_strictly_diag_dominant}.
\end{lemma}
The proof of Lemma~\ref{p_est_var_ss} is a simple and straightforward application of an exponential moment bound due to Robin Nittka.
\begin{lemma}[\mbox{\cite[Lemma~4.3]{MN}}]\label{lem:moments}
We assume that the formal Hamiltonian $H:\mathds{R}^{\mathds{Z}^d} \to \mathds{R} $ given by~\eqref{e_d_Hamiltonian} satisfies the Assumptions~\eqref{e_cond_psi}~-~\eqref{e_strictly_diag_dominant}.\newline
Additionally, we assume that for all $ i \in \mathds{Z}^d$ the convex part $\psi_i^c$ of the single-site potentials $\psi_i$ has a global minimum in $x_i=0$. \newline
Let $\delta >0$ be given by~\eqref{e_strictly_diag_dominant}. Then for every $0 \le a \le \frac{\delta}{2}$ and any subset $\Lambda \subset \mathds{Z}^d$ it holds that
\begin{equation} \label{e:exponential_moment}
\mathds{E}_{\mu_{\Lambda}} \bigl[\mathrm{e}^{a p_i^2}\bigr] \lesssim 1.
\end{equation}
In particular, for any $k \in \mathds{N}_0$ this yields
\begin{equation} \label{e:arbitrary_moment}
\mathds{E}_{\mu_{\Lambda}}[p_i^{2k}] \lesssim k!.
\end{equation}
\end{lemma}
The statement of Lemma~\ref{lem:moments} is a slight improvement of~\cite[Section~3]{BHK82}, because the assumptions are slightly weaker compared to~\cite{BHK82}. More precisely, $\psi_i''$ may change sign outside every compact set and there is no condition on the signs of the interaction. Although \cite[Lemma~4.3]{MN} is formulated in~\cite{MN} for systems on a one-dimensional lattice, a simple analysis of the proof shows that the statement is also true on lattices of any dimension.
\begin{proof}[Proof of Lemma~\ref{p_est_var_ss}]
By doubling the variables we get
\begin{equation*}
\var_{\mu_{\Lambda}}(x_i) = \frac{1}{2} \int \int (x_i -y_i)^2 \mu_{\Lambda}(dx) \mu_{\Lambda} (dy).
\end{equation*}
By the change of coordinates $x_k= q_k + p_k$ and $y_k= q_k -p_k$ for all $k \in \Lambda$, and using the definition~\eqref{e_d_Gibbs_measure} of the finite-volume Gibbs measure $\mu_{\Lambda}$, the last identity yields
\begin{align*}
\var_{\mu_{\Lambda}}(x_i) &= C \int \int p_i^2 \ \underbrace{\frac{e^{-H(q^{\Lambda}+p^{\Lambda}, x^{\mathds{Z}^d\backslash \Lambda}) - H(q^{\Lambda}-p^{\Lambda}, x^{\mathds{Z}^d\backslash \Lambda})}}{\int e^{-H(q^{\Lambda}+p^{\Lambda}, x^{\mathds{Z}^d\backslash \Lambda}) -H(q^{\Lambda}-p^{\Lambda}, x^{\mathds{Z}^d\backslash \Lambda})} \mathrm{d} p^{\Lambda} \mathrm{d} q^{\Lambda} } \mathrm{d} p^{\Lambda} \mathrm{d} q^{\Lambda} }_{=: d \tilde \mu_{\Lambda} (q^{\Lambda},p^{\Lambda})} .
\end{align*}
By conditioning on the values $q^{\Lambda}$ it directly follows from the definition~\eqref{e_d_Hamiltonian} of $H$ that
\begin{equation} \label{e:repre_covariance}
\var_{\mu_{\Lambda}}(x_i) = C
\mathds{E}_{\tilde \mu_{\Lambda}} \left[ \mathds{E}_{\mu_{\Lambda,q}} \left[ p_i^2 \right] \right].
\end{equation}
Here, the conditional measure $\mu_{\Lambda,q}$ is given by the density
\begin{equation}
\label{eq:def_mu_q}
\mathrm{d}\mu_{\Lambda,q}(p^\Lambda) \coloneqq \frac{1}{Z_{\mu_{\Lambda,q}}} \mathrm{e}^{-\sum_{k \in \Lambda} \psi_{k,q}(p_k) - \sum_{k,l \in \Lambda} M_{kl} p_k p_l} \mathrm{d} p^\Lambda
\end{equation}
with single-site potentials $\psi_{k,q} \coloneqq \psi_{k,q}^c + \psi_{k,q}^b$ defined by
\begin{align*}
\psi_{k,q}^c(p_k) & \coloneqq \psi_k^c(q_k + p_k) + \psi_k^c(q_k - p_k) \qquad \mbox{and} \\
\psi_{k,q}^b(p_k) & \coloneqq \psi_k^b(q_k + p_k) + \psi_k^b(q_k - p_k).
\end{align*}
Because of symmetry in the variable $p_k$, the convex part of the single-site potential $\psi_{k,q}^c(p_k)$ has a global minimum at $p_k=0$ for any $k$. Therefore, an application of Lemma~\ref{lem:moments} yields the desired statement. \end{proof}
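As a side remark, the doubling identity used at the beginning of the preceding proof is easily checked numerically. The following sketch compares the direct variance with the doubled-variable expression; the standard normal samples are an arbitrary illustrative choice:
\begin{verbatim}
# Sanity check of the doubling identity
#   var(x) = (1/2) E[(x - y)^2]
# for independent copies x, y, here with standard normal samples.
import random

random.seed(0)
n = 200000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
ys = [random.gauss(0.0, 1.0) for _ in range(n)]

mean = sum(xs) / n
var_direct = sum((x - mean)**2 for x in xs) / n
var_doubling = 0.5 * sum((x - y)**2 for x, y in zip(xs, ys)) / n
print(var_direct, var_doubling)   # both close to 1
\end{verbatim}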
Let us turn to the verification of Lemma~\ref{p_crucial_lemma _2_OR}. We also need an auxiliary statement, namely Lemma~\ref{p_aux_lemma_2_OR} below.
It is a generalization of~\cite[Lemma 2]{OR07} and states that the interactions of the Hamiltonian $\bar H ((x_i)_{i \in S})$ given by~\eqref{d_coarse_grained_hamiltonian} decay sufficiently fast.
\begin{lemma} [Generalization of~\mbox{\cite[Lemma 2]{OR07}}]\label{p_aux_lemma_2_OR}
In the same situation as in Lemma~\ref{p_crucial_lemma _1_OR}, the interactions of $\bar H (x^S)$ decay algebraically, i.e.\ there are constants $0 < \bar \varepsilon, C < \infty$ such that
\begin{align*}
\left|\frac{d}{dx_i} \frac{d}{dx_j} \bar H \right| \leq C \frac{1}{|i-j|^{d+\bar \varepsilon}+1}
\end{align*}
uniformly in $i,j \in S$.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{p_aux_lemma_2_OR}]
Direct calculation as in~\cite[Lemma 2]{OR07} shows that
\begin{align*}
\frac{d}{dx_i} \frac{d}{dx_j} \bar H = -M_{ij} - \sum_{k \in \Lambda_{\mathrm{tot}} \backslash S} \sum_{l \in \Lambda_{\mathrm{tot}} \backslash S} M_{ik} \ M_{jl} \ \cov_{\Lambda_{\mathrm{tot}} \backslash S} (x_k, x_l).
\end{align*}
The last identity immediately yields the estimate (cf.~\cite[(52)]{OR07})
\begin{align*}
\left|\frac{d}{dx_i} \frac{d}{dx_j} \bar H \right| \leq |M_{ij}| + \sum_{k \in \Lambda_{\mathrm{tot}} \backslash S} \sum_{l \in \Lambda_{\mathrm{tot}} \backslash S} |M_{ik}| \ |M_{jl}| \ | \cov_{\Lambda_{\mathrm{tot}} \backslash S} (x_k, x_l)|.
\end{align*}
Using the decay of interactions~\eqref{e_decay_inter_Otto} and the decay of correlations~\eqref{e_decay_corr_Otto} we get
\begin{align*}
& \left|\frac{d}{dx_i} \frac{d}{dx_j} \bar H \right| \\
& \quad \leq \frac{C}{|i-j|^{d+ \alpha}} + C \sum_{k \in \Lambda_{\mathrm{tot}} \backslash S} \sum_{l \in \Lambda_{\mathrm{tot}} \backslash S} \frac{1}{|i-k|^{d+ \alpha}} \frac{1}{|j-l|^{d+ \alpha}} \frac{1}{|k-l|^{d+ \alpha}}.
\end{align*}
Now we use the same kind of argument as in the proof of Proposition~\ref{p_algebraic_decay_correlations} to estimate the term $T_k$. This means that for any multi-indices $i,k,l,j \in \Lambda_{\mathrm{tot}}$ it holds that either
\begin{align*}
|i-k|\geq \frac{1}{3} |i-j|, \quad |j-l|\geq \frac{1}{3} |i-j|, \quad \mbox{or} \quad |k-l|\geq \frac{1}{3} |i-j|.
\end{align*}
Therefore, we have
\begin{align*}
& \sum_{k \in \Lambda_{\mathrm{tot}} \backslash S} \sum_{l \in \Lambda_{\mathrm{tot}} \backslash S} \frac{1}{|i-k|^{d+ \alpha}} \frac{1}{|j-l|^{d+ \alpha}} \frac{1}{|k-l|^{d+ \alpha}} \\
& \leq \sum_{ \substack{k \in \Lambda_{\mathrm{tot}} \backslash S, \\ l \in \Lambda_{\mathrm{tot}} \backslash S, \\ |i-k|\geq \frac{1}{3} |i-j|}} \frac{1}{|i-k|^{d + \alpha}} \frac{1}{|j-l|^{d + \alpha}} \frac{1}{|k-l|^{d + \alpha}} \\
& \qquad + \sum_{ \substack{k \in \Lambda_{\mathrm{tot}} \backslash S, \\ l \in \Lambda_{\mathrm{tot}} \backslash S, \\ |j-l|\geq \frac{1}{3} |i-j|}} \ldots + \sum_{ \substack{k \in \Lambda_{\mathrm{tot}} \backslash S, \\ l \in \Lambda_{\mathrm{tot}} \backslash S, \\ |k-l|\geq \frac{1}{3} |i-j|}} \ldots \\
& \leq C \ \frac{1}{|i-j|^{d + \alpha}},
\end{align*}
which yields the desired statement of Lemma~\ref{p_aux_lemma_2_OR}.
\end{proof}
As in the proof of~\cite[Lemma~4]{OR07}, we verify Lemma~\ref{p_crucial_lemma _2_OR} by an application of the Otto-Reznikoff criterion for LSI, i.e.\ the following theorem.
\begin{theorem}[Otto-Reznikoff criterion for LSI,~{\mbox{\cite[Theorem 1]{OR07}}}] \label{p_otto_reznikoff}
Let $d\mu:= Z^{-1} \exp (-H(x)) \ dx$ be a probability measure on a direct product of Euclidean spaces $X= X_1 \times \cdots \times X_N$. We assume that
\begin{itemize}
\item the conditional measures $\mu(dx_i | \bar x_i )$, $1\leq i \leq N$, satisfy a uniform LSI($\varrho_i $).
\item the numbers $\kappa_{ij}$, $1 \leq i \neq j \leq N$, satisfy
\begin{equation*}
|\nabla_i \nabla_j H(x)|\leq \kappa_{ij} < \infty
\end{equation*}
uniformly in $x \in X$. Here, $|\cdot|$ denotes the operator norm of a bilinear form.
\item the symmetric matrix $A=(A_{ij})_{N \times N}$ defined by
\begin{equation*}
A_{ij} =
\begin{cases}
\varrho_i, & \mbox{if } \; i=j , \\
-\kappa_{ij}, & \mbox{if } \; i < j,
\end{cases}
\end{equation*}
satisfies in the sense of quadratic forms
\begin{equation}\label{e_cond_OR}
A \geq \varrho \Id \qquad \mbox{for a constant } \varrho>0.
\end{equation}
\end{itemize}
Then $\mu$ satisfies LSI($\varrho$).
\end{theorem}
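As a side remark, condition~\eqref{e_cond_OR} is easy to test numerically for concrete matrices. The following sketch (all parameter values are illustrative assumptions, not taken from the text) builds a symmetric matrix $A$ with uniform diagonal $\varrho$ and algebraically decaying off-diagonal entries, and computes its smallest eigenvalue:
\begin{verbatim}
# Sketch: check A >= lam_min * Id for a toy matrix with uniform
# LSI constant rho on the diagonal and algebraically decaying
# couplings kappa_ij; the parameter values are illustrative.
import numpy as np

N, rho, C, eps = 50, 1.0, 0.1, 0.5
A = rho * np.eye(N)
for i in range(N):
    for j in range(N):
        if i != j:
            A[i, j] = -C / (abs(i - j)**(1.0 + eps) + 1.0)

lam_min = np.linalg.eigvalsh(A).min()
print(lam_min)   # the criterion requires lam_min > 0
\end{verbatim}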
\begin{proof}[Proof of Lemma~\ref{p_crucial_lemma _2_OR}]
We want to apply Theorem~\ref{p_otto_reznikoff}. By an application of Lemma~\ref{p_crucial_lemma _1_OR}, we know that the single-site conditional measures $\bar \mu (dx_i |x^{\Lambda_K} x^{ S \backslash \Lambda_K})$, $i \in \Lambda_K$, satisfy a LSI with uniform constant $\varrho>0$. \newline
For the mixed derivatives of the Hamiltonian, we have according to Lemma~\ref{p_aux_lemma_2_OR}
\begin{align*}
\left|\frac{d}{dx_i} \frac{d}{dx_j} \bar H \right| \leq C \frac{1}{|i-j|^{d+\bar \varepsilon}+1}.
\end{align*}
Hence, in order to apply Theorem~\ref{p_otto_reznikoff} we have to consider the symmetric matrix $A = (A_{ij})_{i,j \in \Lambda_K}$ with
\begin{align*}
& A_{ii}= \varrho, \\
& A_{ij} = - C \frac{1}{|i-j|^{d+\bar \varepsilon} +1}, \qquad \mbox{for } i\neq j.
\end{align*}
We will argue that $A$ is strictly positive definite if we choose the integer $K$ large enough.
We have
\begin{align*}
\sum_{i,j \in \Lambda_K} x_i A_{ij} x_j = \sum_{i \in \Lambda_K} \varrho x_i^2 + \sum_{i,j \in \Lambda_K, \ i \neq j} x_i A_{ij} x_j.
\end{align*}
Let us estimate the second term of the right hand side. We have
\begin{align*}
| \sum_{i,j \in \Lambda_K, \ i \neq j} x_i A_{ij} x_j | & \leq \frac{1}{2} \sum_{i \in \Lambda_K} \sum_{j \in \Lambda_K, \ i \neq j} |A_{ij}| x_i^2 + \frac{1}{2} \sum_{j \in \Lambda_K} \sum_{i \in \Lambda_K, \ i \neq j} |A_{ij}| x_j^2\\
& \leq C \sum_{i \in \Lambda_K} \sum_{j \in \Lambda_K, \ i \neq j} \frac{1}{|i-j|^{d + \bar \varepsilon}} x_i^2 \\
& \leq \frac{C}{K^{\frac{\bar \varepsilon}{2}}} \sum_{i \in \Lambda_K} x_i^2 \sum_{j \in \Lambda_K, \ i \neq j} \frac{1}{|i-j|^{d + \frac{\bar \varepsilon}{2}}}
\end{align*}
where the last inequality holds if we choose $K$ large enough. So we get overall that
\begin{align*}
\sum_{i,j \in \Lambda_K} x_i A_{ij} x_j \geq \frac{\varrho}{2} \sum_{i \in \Lambda_K} x_i^2 >0 ,
\end{align*}
which yields the desired statement of Lemma~\ref{p_crucial_lemma _2_OR} by an application of Theorem~\ref{p_otto_reznikoff}.
\end{proof}
\section{Introduction}
\label{sec:intro}
Nucleon polarizabilities are of fundamental importance for understanding the dynamics of the internal structure of nucleons. The static electric and magnetic dipole polarizabilities, $\alpha_{E1}$ and $\beta_{M1}$, characterize the response of the nucleon to external electromagnetic stimulus by relating the strength of the induced electric and magnetic dipole moments of the nucleon to the applied field. Decades-long endeavors have been devoted to studying the static polarizabilities of nucleons experimentally and theoretically~\cite{Schumacher05,Griesshammer12,Hagelstein15}. Nuclear Compton scattering is a powerful tool to access the nucleon polarizabilities, where incident real photons apply an electromagnetic field to the nucleon and induce multipole radiation by displacing the charges and currents inside the nucleon. In the past decade, effective field theories (EFTs) have proven to be successful theoretical frameworks to describe such processes as well as to predict and extract static nucleon polarizabilities from the low-energy Compton scattering data~\cite{Griesshammer12}. Values of $\alpha_{E1}$ and $\beta_{M1}$ of the proton have been successfully extracted from Compton scattering experiments using liquid hydrogen targets. With $\alpha_{E1}^p+\beta_{M1}^p$ constrained by the Baldin sum rule (BSR), the latest EFT fit to the global database of proton Compton scattering gives~\cite{McGovern12,Griesshammer15}
\begin{equation}
\begin{split}
\alpha_{E1}^p &= 10.65\pm0.35_{\rm stat}\pm0.2_{\rm BSR}\pm0.3_{\rm theo},\\
\beta_{M1}^p &= 3.15\mp0.35_{\rm stat}\pm0.2_{\rm BSR}\mp0.3_{\rm theo},
\end{split}
\end{equation}
where the polarizabilities are given here and throughout this paper in units of $10^{-4}\,\rm fm^3$. The neutron polarizabilities, in contrast, are less well determined due to the lack of free neutron targets and the small Thomson cross section of the neutron, a consequence of the neutron being uncharged~\cite{Schumacher05,Myers12,Myers:2014ace}.
Light nuclear targets, such as liquid deuterium~\cite{Hornidge99,Lundin02,Myers15,Myers:2014ace}, liquid $^4$He~\cite{Sikora17,Fuhrberg95,Proff99}, and $^6$Li~\cite{Myers12,Myers14}, can be utilized as effective neutron targets to extract neutron polarizabilities. After accounting for the binding effects, these isoscalar targets allow for the extraction of the isoscalar-averaged polarizabilities of the proton and neutron. By subtracting the better-known proton results, the neutron polarizabilities can be obtained. Indeed, the group at the MAX~IV Laboratory in Lund reported the most recent EFT extraction of the neutron polarizabilities from the world data of elastic deuteron Compton scattering as~\cite{Myers:2014ace}
\begin{equation}
\begin{split}
\alpha_{E1}^n &= 11.55\pm1.25_{\rm stat}\pm0.2_{\rm BSR}\pm0.8_{\rm theo}, \\
\beta_{M1}^n &= 3.65\mp1.25_{\rm stat}\pm0.2_{\rm BSR}\mp0.8_{\rm theo},
\end{split}
\end{equation}
with the BSR constraint applied.
Although no model-independent calculation currently exists for nuclei with mass number $A>3$, light nuclei of higher masses are still advantageous in real Compton scattering experiments. Their cross sections are much larger, both because of the higher atomic number $Z$ compared to the deuteron and because meson-exchange currents play a larger role. In particular, $^4$He is a favorable candidate among isoscalar targets because, unlike $^6$Li, its description is well within the reach of modern high-accuracy theoretical approaches. Our data will show that the cross section of Compton scattering from $^4$He is approximately a factor of 6 to 8 larger than that from the deuteron. This substantial enhancement in the cross section enables high-statistics measurements. Moreover, it confirms that Compton scattering cross sections do not simply scale with $Z$ but are also sensitive to the amount of nuclear binding. The Compton scattering cross section, in turn, is related to the number of charged meson-exchange pairs in the target nucleus; see Ref.~\cite{Margaryan18} for details. Additionally, because the first inelastic channel is the $^4$He($\gamma,p$)$^3$H reaction with a $Q$ value of 19.8\,MeV and there are no bound excited states below that energy, elastic Compton scattering from $^4$He can be more easily distinguished from inelastic scattering than in the case of the deuteron, whose binding energy is only 2.2\,MeV. Now that the theory for $^3$He has been explored extensively~\cite{Margaryan18,Shukla:2018rzp,Shukla:2008zc}, one can expect a full theoretical calculation of Compton scattering from $^4$He as a next step. The first precise measurement of the $^4$He Compton scattering cross section was successfully performed at the High Intensity $\gamma$-ray Source (HI$\gamma$S) facility at an incident photon energy of 61\,MeV with high statistical accuracy and well-controlled systematic uncertainties~\cite{Sikora17}. A measurement at a higher photon energy is then motivated in order to obtain greater sensitivity to the nucleon polarizabilities and to stimulate the development of EFT calculations of $^4$He Compton scattering to fully interpret the data.
In this paper, we report a new high-precision measurement of the cross section of elastic Compton scattering from $^4$He at a weighted mean incident photon beam energy of 81.3\,MeV. The results are compared to the previous $^4$He Compton scattering data and are discussed in the context of their significance to the extraction of the nucleon polarizabilities.
\section{Experimental Setup}
\label{sec:experiment}
The experiment was performed at HI$\gamma$S at the Triangle Universities Nuclear Laboratory (TUNL)~\cite{Weller09}. The HI$\gamma$S facility utilizes a storage-ring based free-electron laser (FEL) to produce intense, quasi-monoenergetic, and nearly 100\% circularly and linearly polarized $\gamma$-ray beams via Compton backscattering~\cite{Yan:2019bru,Wu:2015hta,Wu:2006zzc}. The $\gamma$-ray beam pulses have a width of about 300\,ps FWHM and are separated by 179\,ns. These features of the HI$\gamma$S photon beam lead to detector energy spectra which are much cleaner and simpler to interpret compared to Compton scattering experiments that use tagged bremsstrahlung beams.
The $\gamma$-ray beam was collimated by a lead collimator with a circular opening of 25.4\,mm diameter located 52.8\,m downstream from the electron-photon collision point. The $\gamma$-ray beam energy was determined from the set-point energy of the storage ring and the measured wavelength of the FEL beam. In this experiment, the calculated energy spectrum of the $\gamma$-ray beam incident on the liquid $^4$He target was peaked at around 85\,MeV with an estimated rms uncertainty of about 1\%~\cite{WYing}. An accurate determination of the energy distribution of the $\gamma$-ray beam at the low-energy end was not possible due to the insertion of a set of apertured copper absorbers inside the FEL cavity, which preferentially attenuated low-energy $\gamma$ rays at larger angles. Instead, a study of the beam energy profile was performed to obtain the weighted mean energy of the incident $\gamma$-ray beam. The weighted mean energy is the average energy weighted by the number of photons at each energy in the beam. Details of the determination of the weighted mean beam energy are discussed in Sec.~\ref{sec:analysis}. The $\gamma$-ray beam flux was continuously monitored by a system composed of five thin plastic scintillators with an aluminum radiator inserted after the second scintillator~\cite{Pywell09}. This system was located about 70\,cm downstream from the end of the collimator and about 12\,m upstream of the target. The charged particles produced in the radiator were detected to determine the photon beam intensity. The efficiency of the beam flux monitor was measured at low photon fluxes using a large NaI(Tl) detector located downstream of the target and was corrected for multiple hits at high photon rates~\cite{Pywell09}. For the present experiment, the on-target intensity of the circularly polarized photon beams was $\approx10^7\,\gamma/s$.
The beam was scattered from a cryogenic liquid $^4$He target~\cite{Kendellen16}. The near-cylindrical target cell was 20\,cm in length and approximately 4\,cm in diameter. The walls and end windows of the cell were made from 0.125-mm-thick Kapton foil. The cylindrical axis of the target cell was aligned along the beam axis. The target cell was located inside an aluminum vacuum can and two aluminum radiation shields. Two 0.125-mm-thick Kapton windows were installed on the vacuum can, and gaps were cut in the aluminum shields to allow the photon beam to enter and exit the cryogenic target. The liquid $^4$He target was maintained at 3.4\,K with a target thickness of $(4.17\pm0.04) \times 10^{23}$\,nuclei/cm$^2$. During production runs, the cell was periodically emptied for background measurements.
The Compton scattered photons were detected by an array of eight NaI(Tl) detectors placed at scattering angles of $\theta = 55\degree$, 90$\degree$, and 125$\degree$ in the laboratory frame. Each detector consisted of a cylindrical NaI(Tl) core of diameter 25.4\,cm surrounded by eight annular segments of 7.5-cm-thick NaI(Tl) crystals. The lengths of the core NaI crystals ranged from 25.4 to 30.5\,cm. The annular segments were used as an anticoincidence shield to veto the cosmic-ray background. A 15-cm-thick lead collimator was installed at the front face of each detector to define the acceptance cone. The conical aperture of the lead shield was filled with boron-doped paraffin wax to reduce neutron background.
The layout of the detectors and the cryogenic target is shown in Fig.~\ref{fig:Setup}. For this experiment, five detectors (one at $\theta = 55\degree$, two at $\theta = 90\degree$, the other two at $\theta = 125\degree$) were placed on the tables with the axes of the detectors and the target aligned in the same horizontal plane. The other three detectors were placed beneath the beam axis pointing towards the target, located at $\theta = 55\degree$, $90\degree$, and $125\degree$, respectively. The geometry of the experimental apparatus was surveyed to a precision of 0.5\,mm and the results were incorporated into a \textsc{geant4}\xspace~\cite{Agostinelli02} simulation to determine the effective solid angle of each detector. The effective solid angle accounts for the geometric effects due to the extended target and the finite acceptance of the detectors, as well as the attenuation of scattered photons in the target cell and the cryostat. With the front faces of the detector collimator apertures placed about 58\,cm from the target center, the simulated effective solid angles ranged from 63.4 to 66.9\,msr.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=1\columnwidth]{target_hinda.pdf}
\caption{(Color online) Schematic of the experimental apparatus showing the layout of the cryogenic target and the array of NaI(Tl) detectors. The photon beam is incident from the lower left side of the figure. The target cell is contained inside the aluminum vacuum can. \label{fig:Setup}}
\end{center}
\end{figure}
\section{Data Analysis}
\label{sec:analysis}
Figure~\ref{fig:Daq} shows a simplified flow chart of the data acquisition system for this experiment. One copy of the core signal was recorded by the digitizer, while a second copy was used to generate the trigger for the data acquisition system. After passing a hardware threshold of about 10\,MeV, which was set using a constant fraction discriminator (CFD), a logical OR of all core signals was formed to trigger the digitizer. For each event trigger, in addition to recording the waveform of the signal from the core detector that generated the trigger, the waveform of the combined signal from the eight NaI shield segments associated with this core NaI detector and the time of flight (TOF) of the detected $\gamma$-ray event, provided by a time-to-amplitude converter (TAC), were digitized. The TOF was defined as the time difference between an event trigger and the next reference signal of the electron beam pulse from the accelerator, which arrived every 179\,ns. The energy deposition in the core and shield detectors was extracted from the integral of the pulse shape, while the TOF was obtained from the peak-sensed amplitude of the waveform from the TAC.
\begin{figure}[!htp]
\includegraphics[width=1\columnwidth]{daq.pdf}
\caption{(Color online) Simplified diagram of the data acquisition system. \label{fig:Daq}}
\end{figure}
\begin{figure}[!htp]
\includegraphics[angle=0,width=1\columnwidth]{shield2d.pdf}
\caption{2D spectrum showing energy deposition in the shield detector versus energy deposition in the core detector. An apparent gap around the dashed line is observed between the cosmic-ray events (above the dashed line) and the Compton-scattering events (below the dashed line). The shield energy cut is placed at the dashed line.
\label{fig:shield2d}}
\end{figure}
\begin{figure}[!htp]
\includegraphics[angle=0,width=1\columnwidth]{tof.pdf}
\caption{(Color online) Time-of-flight spectrum showing prompt and random regions with shield-energy cut applied. The bin width is 0.4\,ns.
\label{fig:ToF}}
\end{figure}
Cosmic-ray events were the major source of background in this experiment and could be rejected by employing two methods. First, the energy spectra of the anticoincidence shield detectors were analyzed to suppress such background. Due to the lead collimator in front of each detector, events scattered from the target deposited energy in the shields primarily through electromagnetic shower loss from the core crystal. In contrast, high-energy muons produced by cosmic rays traversed the detector and were minimum ionizing. This significant difference in the shield-energy spectra of the Compton scattered photons and cosmic muons enabled a cut on shield energy (Fig.~\ref{fig:shield2d}) to veto the cosmic-ray background without affecting the Compton scattering events. Secondly, the time structure of the $\gamma$-ray beam produced a clear prompt timing peak (Fig.~\ref{fig:ToF}) for the beam-produced events, allowing for a timing cut to select beam-related scattering events. The shield-energy cut and timing cut together removed over 99\% of the cosmic-ray events within the region of interest (ROI) in the energy spectrum. The background from time-uncorrelated (random) events appeared as a uniform distribution in the TOF spectrum in Fig.~\ref{fig:ToF}. After applying both cuts, the remaining background from residual random events was removed by sampling the energy spectrum from the random region and subtracting it from the energy spectrum in the prompt region after normalizing to the relative widths of the timing windows. The above analysis was performed on both full- and empty-target data. Typical energy spectra from the analysis at the three scattering angles are shown in Fig.~\ref{fig:FullEmpty}. For each detector, the empty-target energy spectrum was subtracted from the full-target energy spectrum after scaling to the number of incident photons to obtain the final energy spectrum.
\begin{figure}[!htp]
\includegraphics[width=1\columnwidth]{full_empty.pdf}
\caption{(Color online) Representative energy spectra for full (closed dot) and empty (open dot) targets at $\theta$ = 55$\degree$, 90$\degree$, and 125$\degree$. The bin width is 0.8\,MeV. The empty-target spectra have been normalized to the number of incident photons for the full-target spectra.
\label{fig:FullEmpty}}
\end{figure}
\begin{figure}[!htp]
\includegraphics[width=1\columnwidth]{lineshape.pdf}
\caption{(Color online) Representative final energy spectra at $\theta$ = 55$\degree$, 90$\degree$, and 125$\degree$ with empty-target events removed. The bin width is 0.8\,MeV. The fit to the data (dashed curve) consists of the electromagnetic background (dot-dashed curve) and the {\textsc{geant4}\xspace} simulated detector response function (solid curve). At $\theta$ = 55$\degree$ and 90$\degree$, the backgrounds are the sum of an exponential low-energy contribution accounting for the atomic scattering and a constant background resulting from the electron-beam-induced bremsstrahlung photons. At $\theta$ = 125$\degree$, the background is free from the exponential low-energy component and therefore includes the constant contribution only. The vertical dashed lines indicate the ROI used to obtain the yield at each angle.
\label{fig:FitSpectra}}
\end{figure}
\begin{figure}[!htb]
\begin{center}
\includegraphics[angle=0,width=1\columnwidth]{beam_profile.pdf}
\caption{The reconstructed effective beam energy profile on the target for a $\gamma$-ray beam produced by an electron beam energy of 975\,MeV in the storage ring and a laser beam of about 192 nm in the FEL optical cavity. This energy distribution is peaked at 85.1\,MeV with a FWHM of 5.5\,MeV (6.5\%). The weighted mean value is 81.3\,MeV.
\label{fig:Beam}}
\end{center}
\end{figure}
\begin{figure}[!htp]
\includegraphics[angle=0,width=1\columnwidth]{cross_section.pdf}
\caption{(Color online) Compton scattering cross sections of $^4$He reported in the current work (circles) compared to the results from Lund (squares, $E_\gamma=$ 87\,MeV)~\cite{Fuhrberg95} and the previous measurement at HI$\gamma$S (triangles, $E_\gamma=$ 61\,MeV)~\cite{Sikora17}. The error bars shown are the statistical and systematic uncertainties added in quadrature.
\label{fig:CrossSections}}
\end{figure}
Typical final energy spectra at the three scattering angles are shown in Fig.~\ref{fig:FitSpectra}. The final energy spectrum of each detector was fitted with the detector response function obtained from the aforementioned {\textsc{geant4}\xspace} simulation of the full experimental apparatus. Photons were generated from the target cell and the energy deposited in the detectors was recorded. The energy of the simulated scattered photon $E_\gamma^{\prime}$ was
\begin{equation}
E_\gamma^{\prime} = \frac{E_\gamma}{1 + \frac{E_\gamma}{M}(1-\cos\theta)},
\end{equation}
where $E_\gamma$ is the incident photon energy sampled from the effective beam energy profile, $\theta$ is the laboratory scattering angle, and $M$ is the mass of the $^4$He nucleus. The simulated detector response function was fitted to the final energy spectrum with a Gaussian smearing convolution accounting for the intrinsic detector resolution. Due to the lack of a direct method to determine the beam energy profile in the entire distribution range, the effective beam energy profile was determined using the following strategy. A series of samples of beam energy profiles were calculated with a set of beam radii on the target, corresponding to various energy spreads. Each sample was incorporated into the aforementioned simulation to obtain the corresponding detector response function. As the detector response function was fitted to data, the resulting fitting parameter, particularly the width of the Gaussian smearing function representing the intrinsic detector resolution, was evaluated. Only those samples resulting in physically reasonable values of the fitting parameter were taken into account to estimate the weighted mean beam energy, which gave a range of 80.2\,MeV to 84.0\,MeV among the samples. The effective beam energy profile used in the cross-section extraction is shown in Fig.~\ref{fig:Beam} with a FWHM of 5.5\,MeV (6.5\%) and a weighted mean energy value of 81.3\,MeV. A flat background resulting from the scattering of bremsstrahlung photons produced from the 1\,GeV electrons in the storage ring was fitted simultaneously to the final energy spectrum. For the forward-angle detectors, the background from atomic Compton scattering was prominent in the low-energy region and was fitted with an exponential function in addition to the flat background. Typical line-shape fitting for the final energy spectra at forward and backward angles is shown in Fig.~\ref{fig:FitSpectra}. The fitted backgrounds were subtracted from the final energy spectrum, and the number of events was counted within the ROI in the elastic peak as indicated by the vertical dashed lines in Fig.~\ref{fig:FitSpectra}. The yield was extracted by scaling the summed number with an efficiency factor defined as the fraction of the fitted response function within the ROI. The aforementioned shield-energy and timing cuts were chosen such that the maximum yield was obtained while the uncertainty of the yield was minimized. As the lowest-energy breakup reaction $^4$He($\gamma,p$)$^3$H is 19.8\,MeV away from the elastic channel, the inelastic contribution is not expected to contaminate the ROI. The yield was corrected for the absorption of the incident photon beam in the target and normalized to the number of incident photons, target thickness, and effective solid angles to calculate the differential cross sections.
The systematic uncertainties in this experiment were grouped into two categories and are listed in Table~\ref{tab:Systematic}. The first was the point-to-point systematic uncertainty that varied from datum to datum. This type of uncertainty reflected the effects of placing cuts in the energy and timing spectra of each detector in yield extraction. Values of the point-to-point uncertainties were determined by slightly varying the boundaries of the ROI, the windows for the line-shape fitting, the timing cut, and the shield-energy cut. The contribution from the last item was negligible compared to the others. The second type was an overall normalization uncertainty that applied equally to all data. This overall normalization uncertainty included the contributions from the number of incident photons and the target thickness. The uncertainty from using different effective beam profiles from the selected samples was found to be negligible. Also, the uncertainty from the effective solid angles was evaluated and found to be negligible due to the high precision in the geometry survey of the experimental apparatus and the small statistical uncertainty of the simulation of the detector system. The contributions of the systematic uncertainties were summed in quadrature to obtain the total systematic uncertainty.
\begin{table}[]
\begin{center}
\caption{Systematic uncertainties.}
\begin{tabular*}{1\columnwidth}{@{\extracolsep{\fill}} l l r }
\hline\hline\noalign{\smallskip}
Type & Source & Value \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Point-to-point & Timing cut & 0.3\%--1.7\% \\
& Line-shape fit window & 0.4\%--1.9\% \\
& ROI & 0.3\%--1.3\% \\
\noalign{\smallskip}\noalign{\smallskip}
Normalization & Number of incident photons & 2.0\% \\
& Target thickness & 1.0\% \\
\noalign{\smallskip}\noalign{\smallskip}
Total & & 2.5\%--3.5\% \\
\noalign{\smallskip}\hline\hline
\end{tabular*}
\label{tab:Systematic}
\end{center}
\end{table}
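For illustration, the quadrature combination described above can be applied to the entries of Table~\ref{tab:Systematic}. The sketch below is schematic: the per-source ranges span different detectors, so combining the endpoints of the ranges does not exactly reproduce the quoted totals of 2.5\%--3.5\%:
\begin{verbatim}
# Sketch: sum systematic contributions in quadrature.  Combining
# the endpoints of the per-source ranges is only indicative,
# since each range spans different detectors.
import math

def total(*contribs_percent):
    return math.sqrt(sum(c * c for c in contribs_percent))

print(round(total(0.3, 0.4, 0.3, 2.0, 1.0), 1))   # lower ends
print(round(total(1.7, 1.9, 1.3, 2.0, 1.0), 1))   # upper ends
\end{verbatim}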
\begin{table}[!htp]
\begin{center}
\caption{The Compton scattering cross section of $^4$He at a weighted mean beam energy of 81.3\,MeV measured by the eight NaI(Tl) detectors used in our experiment. The last three columns list the statistical, point-to-point systematic, and total systematic uncertainties.}
\begin{tabular*}{1\columnwidth}{@{\extracolsep{\fill}} l c c c c c}
\hline\hline\noalign{\smallskip}
$\theta_{\text{Lab}}$ &$\phi$ & $d\sigma/d\Omega$ & Stat & Point-to-point &Total Syst \\
\noalign{\smallskip}
& & (nb/sr) & (nb/sr) &Syst (nb/sr) &(nb/sr)\\
\noalign{\smallskip}\hline\noalign{\smallskip}
$55\degree$ & $0\degree$ & 75.1 &$\pm2.6$ &$\pm1.1$ &$\pm2.0$ \\
$55\degree$ & $270\degree$ & 81.6 &$\pm2.8$ &$\pm2.2$ &$\pm2.9$ \\
\noalign{\smallskip}\noalign{\smallskip}
$90\degree$ & $0\degree$ & 58.8 &$\pm2.2$ &$\pm1.0$ &$\pm1.6$ \\
$90\degree$ & $180\degree$ & 66.7 &$\pm2.7$ &$\pm1.0$ &$\pm1.8$ \\
$90\degree$ & $270\degree$ & 61.8 &$\pm2.2$ &$\pm0.7$ &$\pm1.6$ \\
\noalign{\smallskip}\noalign{\smallskip}
$125\degree$ & $0\degree$ & 90.6 &$\pm2.6$ &$\pm1.2$ &$\pm2.4$ \\
$125\degree$ & $180\degree$ & 97.2 &$\pm2.6$ &$\pm1.2$ &$\pm2.5$ \\
$125\degree$ & $270\degree$ & 102.1 &$\pm2.4$ &$\pm1.1$ &$\pm2.5$ \\
\noalign{\smallskip}\hline\hline
\end{tabular*}
\label{tab:CrossSections}
\end{center}
\end{table}
\begin{table}[!htp]
\begin{center}
\caption{The averaged Compton scattering cross section of $^4$He measured at weighted mean beam energy 81.3\,MeV at the three angles of our experimental setup. These data are plotted in Fig.~\ref{fig:CrossSections}. The third column lists the statistical uncertainties. The last two columns list the point-to-point and total systematic uncertainties.}
\begin{tabular*}{1\columnwidth}{@{\extracolsep{\fill}} l c c c c}
\hline\hline\noalign{\smallskip}
$\theta_{\text{Lab}}$ & $d\sigma/d\Omega$ & Stat & Point-to-point & Total Syst \\
\noalign{\smallskip}
& (nb/sr) & (nb/sr) &Syst (nb/sr) &(nb/sr)\\
\noalign{\smallskip}\hline\noalign{\smallskip}
$55\degree$ & 78.0 &$\pm1.9$ &$\pm1.7$ &$\pm2.4$\\
$90\degree$ & 61.9 &$\pm1.3$ &$\pm0.9$ &$\pm1.6$\\
$125\degree$ & 97.0 &$\pm1.5$ &$\pm1.2$ &$\pm2.5$\\
\noalign{\smallskip}\hline\hline
\end{tabular*}
\label{tab:Results}
\end{center}
\end{table}
\section{Results and Discussion}
\label{sec:results}
The differential cross section extracted for each detector is listed in Table~\ref{tab:CrossSections}. For different detectors at the same angle $\theta$, the cross sections are overall in good agreement with each other, although the spread among different azimuthal angles is more pronounced at $125\degree$. Part of this spread can be accounted for once systematic uncertainties are taken into account; systematic effects due to detection variations among detectors, which are difficult to quantify in our analysis, likely contribute to the remainder. The final cross-section value at each scattering angle is taken as the average weighted by the statistical uncertainties. The reduced $\chi^{2}$ is calculated at each scattering angle with respect to the weighted average value. The calculated reduced $\chi^{2}$ values are $2.07$, $2.17$, and $4.36$ at $\theta = 55\degree$, $90\degree$, and $125\degree$, respectively. The final results are plotted in Fig.~\ref{fig:CrossSections} and listed in Table~\ref{tab:Results}. The elastic Compton scattering data from Lund~\cite{Fuhrberg95} at an incident photon energy of 87\,MeV and the previous HI$\gamma$S measurement~\cite{Sikora17} at 61\,MeV are also shown in Fig.~\ref{fig:CrossSections}. The results of this work are in good agreement with the Lund results. In particular, the present data follow the same fore-aft asymmetry in the angular distribution as the Lund data, with a strong backward peaking that is evident in both data sets. This asymmetry is distinct from the 61-MeV HI$\gamma$S data and indicates a strong sensitivity to sub-nuclear effects, including the nucleon polarizabilities. An accurate theoretical calculation of the reaction, currently lacking, is needed to explain this behavior as well as to extract the values of the isoscalar polarizabilities from these data. Such a calculation has already been done for Compton scattering from $^3$He using an EFT framework~\cite{Margaryan18,Shukla:2018rzp,Shukla:2008zc}; therefore, the prospects for such a treatment of $^4$He are very promising. The high statistical accuracy of the present work and the previous HI$\gamma$S measurement at 61\,MeV provide a strong motivation for further theoretical work on $^4$He in order to extract the neutron polarizabilities with a precision that is difficult to achieve from deuterium experiments.
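For illustration, the weighted averages can be reproduced directly from Table~\ref{tab:CrossSections}. The following sketch treats the $\theta=90\degree$ entries and recovers the averaged value in Table~\ref{tab:Results}; the reduced $\chi^{2}$ quoted in the text may involve additional treatment, so a small difference there is expected:
\begin{verbatim}
# Sketch: statistics-weighted average and reduced chi^2 for the
# theta = 90 deg entries of Table II.
values = [58.8, 66.7, 61.8]   # nb/sr
stats  = [2.2, 2.7, 2.2]      # statistical uncertainties, nb/sr

weights = [1.0 / s**2 for s in stats]
wsum = sum(weights)
mean = sum(w * v for w, v in zip(weights, values)) / wsum
err = wsum**-0.5
chi2_red = sum(w * (v - mean)**2
               for w, v in zip(weights, values)) / (len(values) - 1)
print(round(mean, 1), round(err, 1), round(chi2_red, 2))
# -> 61.9 1.3 (cf. Table III); reduced chi^2 here ~2.6
\end{verbatim}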
\section{Summary}
\label{sec:summary}
Elastic Compton scattering from $^4$He provides a complementary approach to deuteron experiments that allows for the extraction of the nucleon polarizabilities. To this end, a new high-precision measurement of the cross section of Compton scattering from $^4$He at 81.3\,MeV has been performed at HI$\gamma$S. While the results exhibit a behavior similar to that seen in previously reported data, this experimental work has achieved an unprecedented level of accuracy. This work, together with the HI$\gamma$S measurement at 61\,MeV, is expected to strongly spur the development of a rigorous theoretical treatment to interpret the $^4$He Compton scattering data in order to extract the polarizabilities of the proton and neutron with high accuracy.
\acknowledgments{
This work is funded in part by the US Department of Energy under Contracts No.~DE-FG02-03ER41231, DE-FG02-97ER41033, DE-FG02-97ER41041, DE-FG02-97ER41046, DE-FG02-97ER41042, DE-SC0005367, DE-SC0015393, DE-SC0016581, and DE-SC0016656, National Science Foundation Grants No.~NSF-PHY-0619183, NSF-PHY-1309130, and NSF-PHY-1714833, and funds from the Dean of the Columbian College of Arts and Sciences at The George Washington University, and its Vice-President for Research. We acknowledge the financial support of the Natural Sciences and Engineering Research Council of Canada and the support of Eugen-Merzbacher Fellowship. We acknowledge the support of the HI$\gamma$S accelerator scientists, engineers, and operators for assisting with the experimental setup, tuning up the accelerator system for high-energy operation, and for the high-quality production of the $\gamma$-ray beam.
}
\section{Introduction}
The $q$-state Potts model \cite{Potts_52,Wu_82,Wu_84} is certainly one of
the simplest and most studied models in Statistical Mechanics.
However, despite many efforts over more than 50 years, its {\em exact}
solution (even in two dimensions) is still unknown. The ferromagnetic
regime is the best understood case: there are exact (albeit not always
rigorous) results for the location of the critical temperature,
the order of the transition, etc.
The antiferromagnetic regime is less understood, partly because
universality is not expected to hold in general (in contrast with the
ferromagnetic regime); in particular, critical behavior may depend
on the lattice structure of the model.
One interesting feature of this antiferromagnetic
regime is that a zero-temperature phase transition may occur for certain values
of $q$ and certain lattices: e.g., the models with $q=2,4$ on the triangular
lattice, and $q=3$ on the square and kagom\'e lattices
\cite[and references therein]{Salas_Sokal_97}.
The standard $q$-state Potts model can be defined on any finite undirected
graph $G = (V,E)$ with vertex set $V$ and edge set $E$. On each vertex
of the graph $i\in V$, we place a spin $\sigma(i)\in \{1,2,\ldots,q\}$,
where $q\ge 2$ in an integer. The spins interact via a Hamiltonian
\begin{equation}
H(\{\sigma\}) \;=\; -J \sum\limits_{e=ij\in E} \delta_{\sigma(i),\sigma(j)} \,,
\end{equation}
where the sum is over all edges $e \in E$, $J\in{\mathbb R}$ is the coupling constant,
and $\delta_{a,b}$ is the Kronecker delta.
The {\em Boltzmann weight}\/ of a configuration is then $e^{-\beta H}$,
where $\beta \ge 0$ is the inverse temperature.
The {\em partition function}\/ is the sum, taken over all configurations,
of their Boltzmann weights:
\begin{equation}
Z_G^{\rm Potts}(q, \beta J) \;=\;
\sum_{ \sigma \colon\, V \to \{ 1,2,\ldots,q \} } \;
e^{- \beta H(\{\sigma\}) } \,.
\label{def.ZPotts}
\end{equation}
A coupling $J$ is called {\em ferromagnetic}\/ if $J \ge 0$,
as it is then favored for adjacent spins to take the same value; and
{\em antiferromagnetic}\/ if $-\infty \le J \le 0$,
as it is then favored for adjacent spins to take different values.
The zero-temperature ($\beta \to +\infty$) limit of the antiferromagnetic
($J < 0$) Potts model has an interpretation as a coloring problem: the limit
$\lim_{\beta\to+\infty} Z_G^{\rm Potts}(q,-\beta |J|)=P_G(q)$ is the
{\em chromatic polynomial}, which gives the number of proper
$q$-colorings of $G$. A {\em proper $q$-coloring}\/ of $G$ is a map
$\sigma \colon\, V \to \{ 1,2,\ldots,q \}$ such that
$\sigma(i) \neq \sigma(j)$ for all pairs of adjacent vertices $ij\in E$.
For many Statistical Mechanics systems for which an exact solution is
not known, Markov Chain Monte Carlo simulations \cite{Bremaud}
have become a very valuable tool to extract physical information.
A necessary condition for a Markov Chain Monte Carlo algorithm to work is
that it should be ergodic (or irreducible): i.e., the chain can eventually
get from each state to every other state. This condition is usually easy to
check at positive temperature; but in many cases, it becomes a
highly non-trivial question at zero temperature in the antiferromagnetic regime.
One popular Monte Carlo algorithm for the {\em antiferromagnetic}\/
$q$-state Potts model is the Wang--Swendsen--Koteck\'y (WSK) {\em non-local}
cluster dynamics \cite{WSK_89,WSK_90}.
At zero temperature (where we expect interesting critical phenomena), it
leaves invariant the uniform measure over proper $q$-colorings; but
its ergodicity is a non-trivial question (and not completely
understood).\footnote{
WSK dynamics can indeed be defined for positive temperature.
In this case, it is easy to show its ergodicity on the set of {\em all}
$q$-colorings of the graph $G$ (i.e., proper and non-proper).
}
It is interesting to note that at zero temperature, the basic moves of
the WSK dynamics correspond to the so-called {\em Kempe changes},
introduced by Kempe in his unsuccessful attempt at proving the four-color theorem.
This connection makes this problem interesting from a purely
mathematical point of view.
In this paper we will address the problem of the ergodicity of the
WSK algorithm for the 4--state Potts antiferromagnet on the triangular
lattice. Although the Potts model can be defined on any graph $G$, in
Statistical Mechanics one is mainly interested in ``large'' regular graphs
embedded on the torus (to minimize finite-size effects). Therefore, we will
focus on certain regular triangulations of the torus, that we will denoted as
$T(3L,3M)$ (loosely speaking the triangulation $T(3L,3M)$ is a subset of a
triangular lattice with linear size $(3L)\times (3M)$ and fully periodic
boundary conditions. For a more detailed definition, see next section).
The ergodicity of the WSK algorithm for the $q$-state antiferromagnetic
on the triangular lattice embedded on a torus is only an open question
for $q=4,5,6$. For $q=2$ (the Ising model) it is trivially non-ergodic, as each
WSK move is equivalent to a global spin flip; while for $q=3$ is trivially
ergodic, as there is a single allowed three-coloring modulo global color
permutations. On the contrary, for $q\ge 7$ the algorithm is ergodic
(See next section for more details). Among the unknown cases, $q=4$ is the
most interesting one, because the system is expected to be critical at zero
temperature.
Proper 4-colorings of triangulations of the torus are rather special, as
they can be regarded as simplicial maps from an orientable surface to a
sphere $S^2$ (using the tetrahedral representation of the spin).
Therefore, one can borrow concepts from algebraic topology; in particular,
the degree of a four-coloring. This approach (pioneered by Fisk
\cite{Fisk_73a,Fisk_77a,Fisk_77b}) can only deal with $q=4$, and cannot
be extended to the other two cases $q=5,6$.
Our first goal is to obtain a quantity that is invariant under
a Kempe change (or zero-temperature WSK move), at least for a class
of triangulations that includes all triangulations of the type $T(3L,3M)$.
We succeeded in proving
that for any three-colorable triangulation of a closed orientable surface,
the degree of a four-coloring modulo $12$ is a Kempe invariant. Because
the degree of any four-coloring of a closed orientable surface is a
multiple of six, and any three-coloring has degree zero, we
conclude that WSK with $q=4$ colors is not ergodic on any three-colorable
triangulation of a closed orientable surface which admits a four-coloring
with degree congruent to 6 modulo 12.
The next goal is to prove that for any triangulation $T(3L,3M)$ of the
torus, such a four-coloring with degree congruent to 6 modulo 12 exists.
We first proved this statement for any symmetric triangulation
$T(3L,3L)$ with $L\ge 2$. Then, we extended this result to any
triangulation of the form $T(3L,3M)$ with $L\ge 3$ and $M\ge L$,
and those of the form $T(6,6(2M+1))$ with $M\ge 0$. Therefore,
we conclude that WSK with $q=4$ colors is generically non-ergodic on
the triangulations $T(3L,3M)$ of the torus.
The paper is organized as follows: In Section~\ref{sec.setup} we
introduce our basic definitions, and review what is
known in the literature about the problem of the ergodicity of the
Kempe dynamics. In Section~\ref{sec.4colorings}, we introduce the
algebraic topology approach borrowed from Fisk. This section includes
two main results: the proof that the degree modulo 12 is a Kempe
invariant for a wide enough class of triangulations, and a complete
proof of Fisk theorem \cite{Fisk_77b} for the class of triangulations
$T(r,s,t)$ of the torus. In the next section, we apply the new invariant
to prove that WSK is non-ergodic on any triangulation $T(3L,3L)$ with
$L\ge 2$. In Section~\ref{sec.asym} we extend the later result to
non-symetric triangulations of the torus $T(3L,3M)$ with $L\ge 3$ and
$M\ge L$ (and also to $T(6,6(2M+1))$ with $M\ge 0$). Finally,
in Section~\ref{sec.summary} we present our conclusions and discuss
prospects of future work.
\section{Basic setup} \label{sec.setup}
Let $G = (V,E)$ be a finite undirected graph with vertex set $V$ and edge set
$E$. Then for each graph $G$ there exists
a polynomial $P_G$ with integer coefficients such that, for each $q \in {\mathbb Z}_+$,
the number of proper $q$-colorings of $G$ is precisely $P_G(q)$.
This polynomial $P_G$ is called the {\em chromatic polynomial}\/ of $G$.
The set of all proper $q$-colorings of $G$ will be denoted as
$\mathcal{C}_q = \mathcal{C}_q(G)$ (thus, $|\mathcal{C}_q(G)|=P_G(q)$).
It is far from obvious that $Z_G^{\rm Potts}(q, \beta J)$
[cf. \reff{def.ZPotts}], which is defined separately for each positive
integer $q$, is in fact the restriction to $q \in {\mathbb Z}_+$ of
a {\em polynomial}\/ in $q$. But this is in fact the case, and indeed we have:
\begin{theorem}[Fortuin--Kasteleyn \protect\cite{Kasteleyn_69,Fortuin_72}
representation of the Potts model] \label{thm.FK}
\hfill\break
\vspace*{-4mm}
\par\noindent
For every integer $q \ge 1$, we have
\begin{equation}
Z_G^{\rm Potts}(q, v) \;=\;
\sum_{ A \subseteq E } q^{k(A)} \, v^{|A|} \;,
\label{eq.FK.identity}
\end{equation}
where $v=e^{\beta J}-1$, and $k(A)$ denotes the number of connected components
in the spanning subgraph $(V,A)$.
\end{theorem}
The foregoing considerations motivate defining the {\em Tutte polynomial}\/
of the graph $G$:
\begin{equation}
Z_G(q, v) \;=\;
\sum_{A \subseteq E} q^{k(A)} \, v^{|A|} \;,
\label{def.ZG}
\end{equation}
where $q$ and $v$ are commuting indeterminates. This polynomial is
equivalent to the standard Tutte polynomial $T_G(x,y)$ after a simple
change of variables. If we set $v=-1$, we obtain the
{\em chromatic polynomial} $P_G(q) = Z_G(q,-1)$. In particular, $q$ and
$v$ can be taken as complex variables. See \cite{Sokal_bcc2005} for a
recent survey.
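As an aside, Theorem~\ref{thm.FK} is easy to verify by brute force on small graphs. The following Python sketch (the triangle graph and the $(q,v)$ pairs are illustrative choices) compares the spin representation, written with the edge Boltzmann weight $1+v\delta_{\sigma(i),\sigma(j)}$, against the subset expansion \reff{eq.FK.identity}:
\begin{verbatim}
# Brute-force check of the Fortuin--Kasteleyn identity on a
# triangle: the spin sum with edge weights 1 + v*delta equals
# the subset expansion sum_A q^{k(A)} v^{|A|}.
from itertools import product, combinations

V = [0, 1, 2]
E = [(0, 1), (1, 2), (0, 2)]

def z_spins(q, v):
    total = 0.0
    for sigma in product(range(q), repeat=len(V)):
        w = 1.0
        for i, j in E:
            w *= 1.0 + v * (sigma[i] == sigma[j])
        total += w
    return total

def n_components(n, edges):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
    return len({find(x) for x in range(n)})

def z_subsets(q, v):
    return sum(q**n_components(len(V), A) * v**len(A)
               for r in range(len(E) + 1)
               for A in combinations(E, r))

for q, v in [(2, 1.0), (3, -1.0), (4, 0.5)]:
    print(q, v, z_spins(q, v), z_subsets(q, v))   # pairs agree
\end{verbatim}
Note that $v=-1$ reproduces the chromatic polynomial: for the triangle one obtains $q(q-1)(q-2)$, i.e.\ $6$ at $q=3$.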
As explained in the Introduction, we will focus on regular triangulations
embedded on the torus.
The class of regular triangulations of the torus with degree six is
characterized by the following theorem:
\begin{theorem}[Altshuler \protect\cite{Altschulter}]
Let\/ $T$ be a triangulation of the torus such that all vertices have degree
six. Then\/ $T$ is one of the triangulations $T(r,s,t)$, which are obtained
from the $(r+1)\times (s+1)$ grid by adding diagonals in the squares of
the grid as shown in Figure~\ref{figure_T_6_2_2}, and then identifying
opposite sides to get a triangulation of the torus.
In $T(r,s,t)$ the top and bottom rows have $r$ edges, the left and right
sides $s$ edges. The left and right sides are identified as usual;
but the top and the bottom row are identified after (cyclically) shifting
the top row by $t$ edges to the right.
\end{theorem}
\begin{figure}[htb]
\centering
\psset{xunit=50pt}
\psset{yunit=50pt}
\psset{labelsep=10pt}
\pspicture(-0.5,-0.5)(6.5,2.5)
\multirput{0}(0,0)(0,1){3}{\psline[linewidth=2pt,linecolor=blue](0,0)(6,0)}
\multirput{0}(0,0)(1,0){7}{\psline[linewidth=2pt,linecolor=blue](0,0)(0,2)}
\multirput{0}(0,0)(1,0){5}{\psline[linewidth=2pt,linecolor=blue](0,0)(2,2)}
\psline[linewidth=2pt,linecolor=blue](0,1)(1,2)
\psline[linewidth=2pt,linecolor=blue](5,0)(6,1)
\multirput{0}(0,0)(0,1){3}{%
\multirput{0}(0,0)(1,0){7}{%
\pscircle*[linecolor=white]{10pt}
\pscircle[linewidth=1pt,linecolor=black]{10pt}
}
}
\rput{0}(0,0){${11}$}
\rput{0}(1,0){${22}$}
\rput{0}(2,0){${01}$}
\rput{0}(3,0){${12}$}
\rput{0}(4,0){${21}$}
\rput{0}(5,0){${02}$}
\rput{0}(6,0){${11}$}
\rput{0}(0,2){${01}$}
\rput{0}(1,2){${12}$}
\rput{0}(2,2){${21}$}
\rput{0}(3,2){${02}$}
\rput{0}(4,2){${11}$}
\rput{0}(5,2){${22}$}
\rput{0}(6,2){${01}$}
\rput{0}(0,1){${23}$}
\rput{0}(1,1){${04}$}
\rput{0}(2,1){${13}$}
\rput{0}(3,1){${24}$}
\rput{0}(4,1){${03}$}
\rput{0}(5,1){${14}$}
\rput{0}(6,1){${23}$}
\uput[90](0.4,1.5){${T_1}$}
\uput[90](1.4,1.5){${T_3}$}
\uput[90](2.4,1.5){${T_5}$}
\uput[90](3.4,1.5){${T_7}$}
\uput[90](4.4,1.5){${T_9}$}
\uput[90](5.4,1.5){${T_{11}}$}
\uput[90](0.4,0.5){${T_{13}}$}
\uput[90](1.4,0.5){${T_{15}}$}
\uput[90](2.4,0.5){${T_{17}}$}
\uput[90](3.4,0.5){${T_{19}}$}
\uput[90](4.4,0.5){${T_{21}}$}
\uput[90](5.4,0.5){${T_{23}}$}
\uput[270](0.6,0.5){${T_{14}}$}
\uput[270](1.6,0.5){${T_{16}}$}
\uput[270](2.6,0.5){${T_{18}}$}
\uput[270](3.6,0.5){${T_{20}}$}
\uput[270](4.6,0.5){${T_{22}}$}
\uput[270](5.6,0.5){${T_{24}}$}
\uput[270](0.6,1.5){${T_{2}}$}
\uput[270](1.6,1.5){${T_{4}}$}
\uput[270](2.6,1.5){${T_{6}}$}
\uput[270](3.6,1.5){${T_{8}}$}
\uput[270](4.6,1.5){${T_{10}}$}
\uput[270](5.6,1.5){${T_{12}}$}
\endpspicture
\caption{\label{figure_T_6_2_2}
The triangulation $T(6,2,2)=\Delta^2 \times \partial\Delta^3$ of the torus.
Each vertex $x$ of $T(6,2,2)$ is labelled by two integers $ij$,
where $i$ (resp.\ $j$) corresponds to the associated vertex in
$\Delta^2$ (resp.\ $\partial\Delta^3$).
The vertices of $\Delta^2$ are labelled $\{0,1,2\}$, while the vertices of
$\partial\Delta^3$ are labelled $\{1,2,3,4\}$. The triangulation
$T(6,2,2)$ has $12$ vertices, and those in the figure with the same label
should be identified. We have also labelled the $24$ triangular faces $T_i$
in $T(6,2,2)$.
}
\end{figure}
In Figure~\ref{figure_T_6_2_2} we have displayed the triangulation $T(6,2,2)$
of the torus. We will represent these triangulations as
embedded in a rectangular
grid with three kinds of edges: horizontal, vertical, and diagonal.
The three-colorability of the triangulations $T(r,s,t)$ is given by the
following result (whose proof is left to the reader):
\begin{proposition}
The triangulation $T(r,s,t)$ is three-colorable if and only if
$r\equiv 0 \pmod{3}$ and $s-t\equiv 0 \pmod{3}$.
\end{proposition}
In Monte Carlo simulations, it is usual to consider toroidal boundary conditions
with no shifting, so $t=0$. Then, the three-colorability condition reduces
to the standard result $r,s\equiv 0\pmod{3}$. In general, we will consider
the following triangulations of the torus $T(3L,3M,0)=T(3L,3M)$ with
$L,M\geq 1$.
The unique three-coloring $c_0$ of $T(3L,3M)$ can be described as:
\begin{equation}
c_0(x,y) \;=\; \mod(x+y-2,3) + 1 \,,
\quad 1\leq x \leq 3L\,, \quad 1\leq y \leq 3M \,,
\label{def_coloring_c0}
\end{equation}
where we have explicitly used the above-described embedding of the
triangulation $T(3L,3M)$ in a square grid.
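As a check (under the assumption that our integer coordinates match the embedding just described), the following sketch verifies that $c_0$ is a proper coloring of $T(3L,3M)$: along horizontal, vertical, and diagonal edges the color always changes.
\begin{verbatim}
# Check that c_0(x,y) = ((x + y - 2) mod 3) + 1 is a proper
# three-colouring of T(3L,3M); coordinates are 1-based as in
# the text, with simple periodic wrapping (t = 0).
L, M = 2, 3
R, S = 3 * L, 3 * M

def c0(x, y):
    return (x + y - 2) % 3 + 1

deltas = ((1, 0), (0, 1), (1, 1))   # one representative per edge type
ok = all(c0(x, y) != c0((x - 1 + dx) % R + 1, (y - 1 + dy) % S + 1)
         for x in range(1, R + 1) for y in range(1, S + 1)
         for dx, dy in deltas)
print(ok)                           # True
\end{verbatim}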
Finally, in most Monte Carlo simulations one usually considers tori of
aspect ratio one: i.e., $T(3L,3L)$. This is the class of triangulations we
are most interested in from the point of view of Statistical Mechanics.
\subsection{Kempe changes}
Given a graph $G=(V,E)$ and $q\in{\mathbb N}$, we can define the following dynamics on
$\mathcal{C}_q$: Choose uniformly at random two distinct colors
$a,b\in\{1,2,\ldots,q\}$, and let $G_{ab}$ be the induced subgraph of $G$
consisting of vertices $x\in V$ for which $\sigma(x)=a$ or $b$. Then,
independently for each connected component of $G_{ab}$, with probability
$ \smfrac{1}{2} $ either interchange the colors $a$ and $b$ on it, or leave
the component unchanged. This dynamics is the zero-temperature
limit of the Wang--Swendsen--Koteck\'y (WSK) {\em non-local} cluster
dynamics \cite{WSK_89,WSK_90} for the antiferromagnetic $q$-state Potts model.
This zero-temperature Markov chain leaves invariant the uniform
measure over proper $q$-colorings; but its ergodicity cannot be taken
for granted.
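As an illustration, here is a minimal {\sc python} sketch of one step of
this zero-temperature dynamics, for a graph given as an adjacency
dictionary (the function and variable names are ours):
\begin{verbatim}
import random
from collections import deque

def wsk_step(adj, coloring, q):
    # One zero-temperature WSK step.  adj maps each vertex to its list of
    # neighbors; coloring maps each vertex to a color in {1,...,q} and is
    # modified in place.
    a, b = random.sample(range(1, q + 1), 2)
    in_ab = {v for v, c in coloring.items() if c in (a, b)}
    seen = set()
    for start in in_ab:
        if start in seen:
            continue
        seen.add(start)
        comp, queue = [start], deque([start])
        while queue:                       # BFS over one component of G_ab
            v = queue.popleft()
            for w in adj[v]:
                if w in in_ab and w not in seen:
                    seen.add(w)
                    comp.append(w)
                    queue.append(w)
        if random.random() < 0.5:          # swap a <-> b with prob. 1/2
            for v in comp:
                coloring[v] = a + b - coloring[v]
\end{verbatim}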
The basic moves of the WSK dynamics correspond to {\em Kempe changes\/}
(or K-{\em changes}).
In each K-change, we interchange the colors $a,b$ on
a given connected component (or K-{\em component\/}) of the induced subgraph
$G_{ab}$.
Two $q$-colorings $c_1,c_2\in\mathcal{C}_q(G)$ related by a series of
K-changes are {\em Kempe equivalent\/} (or K$_q$-{\em equivalent\/}).
This (equivalence) relation is denoted as $c_1 \stackrel{q}{\sim} c_2$.
The equivalence classes $\mathcal{C}_q(G)/\stackrel{q}{\sim}$ are called the
{\em Kempe classes\/} (or {\em K$_q$-classes\/}). The number of K$_q$-classes
of $G$ is denoted by $\Kc(G,q)$. Then, if $\Kc(G,q)>1$, the
zero-temperature WSK dynamics is not ergodic on $G$ for $q$ colors.
In this paper, we will consider two $q$-colorings related by a {\em global}
color permutation to be the same one. In other words, a $q$-coloring is
actually an equivalence class of standard $q$-colorings modulo global
color permutations. Thus, the number of (equivalence classes of) proper
$q$-colorings is given by $P_G(q)/q!$. This convention will simplify
the notation in the sequel.
\subsection{The number of Kempe classes}
In this section we will briefly review what is known in the literature
about the number of Kempe equivalence classes for several families of graphs.
The first result implies that WSK dynamics is ergodic on any bipartite
graph.\footnote{All the cited authors have discovered this theorem
independently.}
\begin{proposition}[Burton and Henley \protect\cite{Henley_97a},
Ferreira and Sokal \protect\cite{Sokal_99a}, %
Mohar \protect\cite{Mohar_05}]
\label{prop.bipartite}
\hfill\break
\vspace*{-4mm}
\par\noindent
Let\/ $G$ be a bipartite graph and $q\geq 2$ an integer. Then, $\Kc(G,q)=1$.
\end{proposition}
It is worth noting that Lubin and Sokal \cite{Sokal_93} showed that
the WSK dynamics with 3 colors is not ergodic on any square--lattice
grid of size $3M\times 3N$ (with $M,N$ relatively prime) wrapped on a torus.
These graphs are indeed not bipartite.
The second type of results deals with graphs of bounded maximum degree
$\Delta$, and shows that $\Kc(G,q)=1$ whenever $q$ is large enough.
\begin{proposition}[Jerrum \protect\cite{Jerrum_private} and %
Mohar \protect\cite{Mohar_05}]
\label{prop.deltamax}
Let $\Delta$ be the maximum degree of a graph $G$ and let $q\geq \Delta+1$ be
an integer. Then $\Kc(G,q)=1$. If $G$ is connected and contains a vertex
of degree $<\Delta$, then also $\Kc(G,\Delta)=1$.
\end{proposition}
This result implies that for any 6-regular triangulation $T=T(r,s,t)$,
$\Kc(T,q)=1$ for any $q\geq \Delta+1=7$. However, the cases $q=4,5,6$
are not covered by the above proposition. The case $q=3$ is not covered
either; but this one is trivial if the triangulation is {\em three-colorable}:
the three-coloring is unique and therefore, $\Kc(T,3)=1$.
Finally, if we consider planar graphs the situation is better
understood. Fisk \cite{Fisk_77a} and Moore and Newman \protect\cite{Moore_00}
showed that $\Kc(T,4)=1$ for planar 3-colorable triangulations.
Moore and Newman's goal was to establish a height representation
of the corresponding zero-temperature antiferromagnetic Potts model.
One of the authors extended this result as follows:
\begin{theorem}[Mohar \protect\cite{Mohar_05}, Theorem~4.4]
Let $G$ be a three-colorable planar graph. Then $\Kc(G,4)=1$.
\end{theorem}
\begin{corollary}[Mohar \cite{Mohar_05}, Corollary~4.5]
Let $G$ be a planar graph and $q > \chi(G)$. Then $\Kc(G,q)=1$.
\end{corollary}
Indeed, none of our graphs $T(3L,3M)$ is planar. Thus, the above results do
not apply to our case.
The main theorem for triangulations appears in \cite{Fisk_77b}.
It involves the notion of the degree of a four-coloring, whose definition
is deferred to the next section.
\begin{theorem}[Fisk \protect\cite{Fisk_77b}] \label{theo_Fisk}
Suppose that\/ $T$ is a triangulation of the sphere, projective plane, or torus.
If\/ $T$ has a three-coloring, then all four-colorings with degree divisible
by $12$ are Kempe equivalent.
\end{theorem}
In Section \ref{sect.Fisk_Trst},
we provide a complete self-contained proof of Fisk's result when restricted
to the 6-regular triangulations of the torus treated in this paper.
\section{Four-colorings of triangulations of the torus} \label{sec.4colorings}
In this section we will consider four-colorings of triangulations of the
torus. Most of the known results concerning this section were obtained
by Fisk \cite{Fisk_73a,Fisk_77a,Fisk_77b}. We will follow his
notation hereafter.
\subsection{An alternative approach to four-colorings}
Fisk \cite{Fisk_73a,Fisk_77a} considered a definition of a four-coloring
that allows to borrow concepts and results from algebraic topology.
A (proper) four-coloring $f$ of a triangulation $T$ is a non-degenerate
simplicial map
\begin{equation}
f \;\colon\; T \; \longrightarrow \; \partial \Delta^3 \,,
\ee
where $\partial \Delta^3$ is the surface of a tetrahedron (thus, it can also
be considered as a triangulation of the sphere $S^2$).\footnote{
A map $f \colon T \to \partial \Delta^3$ is non-degenerate if the image
of every triangle of $T$ under $f$ is a triangle of $\partial \Delta^3$.
}
From algebraic topology \cite{Fisk_77a}, if $T$ is the triangulation of an
orientable closed surface (e.g., a sphere or a torus), there is an
integer-valued function $\deg(f)$ determined up to a sign by $f$.
In any practical computation, we should choose orientations
for the triangulation $T$ and the tetrahedron $\partial \Delta^3$. Then,
given any triangle $t$ of $\partial \Delta^3$ (i.e., a particular
three-coloring of a triangular face), we can compute the number $p$
(resp.\ $n$) of triangles of $T$ mapping to $t$ which have their
orientation preserved (resp.\ reversed) by $f$.
Then, the degree of the four-coloring $f$ is defined as
\begin{equation}
\deg(f) \;=\; p-n \,,
\label{def_deg}
\ee
and it is independent of the choice of the triangle $t$. For instance,
the three-coloring of any triangulation has zero degree, as
there are no vertices colored $4$, so for $t=124$ we have
$p=n=0$. As we are interested in equivalence classes
of four-colorings modulo global color permutations, in practical computations
it only makes sense to consider the absolute value of the degree: i.e.,
$|\deg(f)|$.
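One way to evaluate \reff{def_deg} in practice is directly from the grid
embedding of $T(3L,3M)$: each grid cell carries a lower and an upper
triangle, which we list counterclockwise, and we take $t$ to be the
triangle $123$ of $\partial\Delta^3$. The following {\sc python} sketch
(reusing \texttt{vertices} and \texttt{c0} from the sketch in the previous
section; since the degree is defined up to a sign, the sign here is the
one fixed by these conventions) computes the degree of a four-coloring
given as a dictionary:
\begin{verbatim}
def triangles(L, M):
    # The 2*(3L)*(3M) triangular faces of T(3L,3M); each grid cell (x,y)
    # carries a lower and an upper triangle, both listed counterclockwise.
    r, s = 3 * L, 3 * M
    for x in range(r):
        for y in range(s):
            a, b = (x, y), ((x + 1) % r, y)
            c, d = ((x + 1) % r, (y + 1) % s), (x, (y + 1) % s)
            yield (a, b, c)
            yield (a, c, d)

def degree(f, L, M):
    # deg(f) for a four-coloring f (a dict), computed from the target
    # triangle t = 123: orientation preserved (p) minus reversed (n).
    p = n = 0
    for u, v, w in triangles(L, M):
        cols = (f[u], f[v], f[w])
        if sorted(cols) == [1, 2, 3]:
            i = cols.index(1)
            if cols[(i + 1) % 3] == 2:     # cyclic order 1,2,3: preserved
                p += 1
            else:                          # cyclic order 1,3,2: reversed
                n += 1
    return p - n

# Sanity check: the three-coloring has zero degree.
f0 = {v: c0(v) for v in vertices(2, 2)}
assert degree(f0, 2, 2) == 0
\end{verbatim}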
Tutte \cite{Tutte_69} proved a formula for the degree of a four-coloring
modulo 2 (the parity of a four-coloring) in terms of the degrees
of all vertices colored with a specific color.
We write $\rho(x)$ for the degree of a vertex $x\in V$.
A vertex is {\em even\/} (resp.\ {\em odd\/})
if its degree is even (resp.\ odd).
\begin{lemma}[Tutte \protect\cite{Tutte_69}]
\label{lemma.Tutte}
Given a triangulation $T$ of a closed orientable surface, the degree of a
four-coloring\/ $f$ of\/ $T$ satisfies
\begin{equation}
\deg(f) \;\equiv\; \sum\limits_{f(x) = a} \rho(x) \pmod{2}
\label{eq.lemma.Tutte}
\ee
for $a=1,2,3,4$.
\end{lemma}
\par\medskip\noindent{\sc Proof.\ }
By definition, the degree of a four-coloring is, modulo 2, equal to the
number $N$ of triangles of $T$ mapping to a given triangle of
$\partial \Delta^3$: $\deg(f) = p - n \equiv p + n = N \pmod{2}$.
If we take a color $a$, which is a vertex of $\partial \Delta^3$, then
there are three triangular faces of $\partial \Delta^3$ sharing this vertex $a$,
say $t_1$, $t_2$, and $t_3$. For each of these triangles $t_i$, there are
$N_i$ triangles of $T$ mapping to $t_i$. Then,
\begin{eqnarray}
\deg(f) &\equiv& 3 \deg(f) \pmod{2} \nonumber \\
&\equiv& N_1 + N_2 + N_3 \pmod{2}
\end{eqnarray}
which is equal to the number of triangles of $T$ with a vertex colored $a$.
This number can indeed be written as the r.h.s. of \reff{eq.lemma.Tutte}. \hbox{\hskip 6pt\vrule width6pt height7pt depth1pt \hskip1pt}\bigskip
\bigskip
Lemma \ref{lemma.Tutte} implies that any Eulerian triangulation, in particular,
any triangulation $T(r,s,t)$, can only have four-colorings with even degree,
as every vertex $x\in V$ has even degree [i.e., $\rho(x)=6$ for any vertex $x$
of $T(r,s,t)$].
A natural question is how many possible values the degree of a
four-coloring $f$ can take. An answer for a restricted class of triangulations
is given by the following proposition:
\begin{proposition}[Fisk \protect\cite{Fisk_73a}, %
Problem I.6.6 in \protect\cite{Fisk_77a}]
\label{prop_Fisk}
Let\/ $T$ be a triangulation of a closed orientable surface, and let\/
$f$ be a four-coloring of\/ $T$. If\/ $T$ admits a three-coloring,
then $\deg(f) \equiv 0 \pmod{6}$.
\end{proposition}
\par\medskip\noindent{\sc Proof.\ }
The idea is to mimic the proof of Theorem~4 in \cite{Fisk_77a}.
If $T$ has a three-coloring $h$, and $f$ is a 4-coloring of $T$, then we
can combine these two maps to form
\begin{equation}
h\times f \;\colon \; T \; \longrightarrow \;
\Delta^2 \times \partial\Delta^3 \,,
\ee
where $\Delta^2 \times \partial\Delta^3 =T(6,2,2)$
(see Figure~\ref{figure_T_6_2_2}). We have the following diagram
$$
\psset{xunit=1cm}
\psset{yunit=1cm}
\pspicture(-0.5,-0.5)(4.5,2.5)
\rput(0,2){\rnode{G}{$T$}}
\rput(0,0){\rnode{D3}{$\partial \Delta^3$}}
\rput(4,0){\rnode{Prod}{$\Delta^2 \times \partial \Delta^3$}}
\rput(4,2){\rnode{D2}{$\Delta^2$}}
\ncline[nodesep=5pt,linewidth=1pt]{->}{G}{D2}
\ncline[nodesep=5pt,linewidth=1pt]{->}{G}{D3}
\ncline[nodesep=5pt,linewidth=1pt]{->}{Prod}{D2}
\ncline[nodesep=5pt,linewidth=1pt]{->}{Prod}{D3}
\ncline[nodesep=5pt,linewidth=1pt]{->}{G}{Prod}
\uput[180](0,1){$f$}
\uput[0](2,1.1){$h\times f$}
\uput[270](2,0){$g$}
\uput[90](2,2){$h$}
\endpspicture
$$
where $g$ is the projection of $\Delta^2 \times \partial \Delta^3$ onto its
second factor $\partial\Delta^3$. By commutativity,
$\deg(f)=\deg(h\times f)\deg(g)$.
As the degree of $g$ is $6$, then $\deg(f)= 6\deg(h\times f)\equiv 0 \pmod{6}$.
\hbox{\hskip 6pt\vrule width6pt height7pt depth1pt \hskip1pt}\bigskip
\medskip
In this geometric approach to four-colorings, it is useful to introduce
the concept of a Kempe region \cite{Fisk_77a}.
Suppose that $D$ is a region of the triangulation $T$ (i.e., a union of
triangles of $T$), and that the four-coloring $f$ uses only two colors on the
boundary $\partial D$ of $D$. We define a new coloring $g$ of $T$ that is
equal to $f$ on $T\setminus D$, and equal to $\pi(f)$ on $D$, where
$\pi$ is the permutation which interchanges the two colors {\em not} on
$\partial D$. Fisk calls $D$ a Kempe region of $f$, and $\partial D$ a
Kempe cycle. The coloring is {\em not} changed on $\partial D$ itself.
Indeed, inside a Kempe region $D$ we find one or more Kempe components
of the two colors not on $\partial D$. So, the new coloring is K-equivalent
to $f$. Conversely, every K-change can be described as a change on the
region consisting of all triangles containing an edge affected by the
K-change.
Finally it is worth noting that Lemma~\ref{lemma.Tutte} implies that
the parity of a four-coloring [i.e., $\deg(f) \pmod{2}$]
is a Kempe invariant:
\begin{corollary} \label{cor.Tutte}
Given a triangulation $T$ of a closed orientable surface, then
the parity of a four-coloring of\/ $T$ is a Kempe invariant.
\end{corollary}
\par\medskip\noindent{\sc Proof.\ }
If we consider a K-change on a region $D$, we take $a$ to be one of the
colors on the boundary $\partial D$ (or one of the colors not on the Kempe
component $T_{bc}$). Then, the parity given by \reff{eq.lemma.Tutte}
is not affected by the K-change, and therefore, it is an invariant. \hbox{\hskip 6pt\vrule width6pt height7pt depth1pt \hskip1pt}\bigskip
\medskip
Unfortunately, the parity is not useful for our purposes, as we are interested
in $6$-regular triangulations of the torus $T(r,s,t)$. Thus, all
four-colorings have even parity.
In addition, in the class of three-colorable triangulations of any orientable
surface, Proposition~\ref{prop_Fisk} ensures that all four-colorings have
$\deg(f) \equiv 0 \pmod{6}$.
\subsection{A new Kempe invariant for a class of triangulations}
In this section we shall consider a special class of triangulations in
which every vertex is of even degree. Such a triangulation is said
to be {\em even} (or Eulerian).
Observe that every 3-colorable triangulation is even.
Tutte's lemma \ref{lemma.Tutte} implies that if we have a four-coloring $f$
of a triangulation $T$ and we perform a Kempe change to obtain a new
four-coloring $g$, then
\begin{equation}
\deg(g) \;\equiv\; \deg(f) \pmod{2} \,.
\ee
For even triangulations this result has no useful consequences, as all
four-colorings have even degree. However, for the restricted class of
three-colorable triangulations of orientable surfaces we can do better.
\begin{theorem}
\label{main.theo}
Let\/ $T$ be a three-colorable triangulation of a closed orientable surface.
If $f$ and $g$ are two four-colorings of\/ $T$ related by a Kempe change on a
region $R$, then
\begin{equation}
\deg(g) \;\equiv \;\deg(f) \pmod{12} \,.
\label{main.eq}
\ee
\end{theorem}
\par\medskip\noindent{\sc Proof.\ }
We begin by noting that if $T$ is three-colorable, then it is an even
triangulation.
Proposition~\ref{prop_Fisk} ensures that $\deg(f),\deg(g)\equiv 0 \pmod{6}$.
As in the proof of Proposition~\ref{prop_Fisk}, we can combine the
three-color map $h$ with both four-colorings to define the following maps
\begin{subeqnarray}
F &=& h\times f \\
G &=& h\times g
\end{subeqnarray}
from $T$ onto $\Delta^2 \times \partial \Delta^3 = T(6,2,2)$, where $h$ is
the 3-coloring of $T$. Let us consider the following commutative diagram:
$$
\psset{xunit=1cm}
\psset{yunit=1cm}
\pspicture(-0.5,-0.5)(8.5,2.5)
\rput(0,0){\rnode{D31}{$\partial \Delta^3$}}
\rput(4,0){\rnode{T}{$T$}}
\rput(8,0){\rnode{D32}{$\partial \Delta^3$}}
\rput(0,2){\rnode{T6621}{$\Delta^2 \times \partial \Delta^3$}}
\rput(4,2){\rnode{D2}{$\Delta^2$}}
\rput(8,2){\rnode{T6622}{$\Delta^2 \times \partial \Delta^3$}}
\ncline[nodesep=5pt,linewidth=1pt]{->}{T}{D2}
\ncline[nodesep=5pt,linewidth=1pt]{->}{T}{D31}
\ncline[nodesep=5pt,linewidth=1pt]{->}{T}{D32}
\ncline[nodesep=5pt,linewidth=1pt]{->}{T6621}{D2}
\ncline[nodesep=5pt,linewidth=1pt]{->}{T6622}{D2}
\ncline[nodesep=5pt,linewidth=1pt]{->}{T6621}{D31}
\ncline[nodesep=5pt,linewidth=1pt]{->}{T6622}{D32}
\ncline[nodesep=5pt,linewidth=1pt]{->}{T}{T6621}
\ncline[nodesep=5pt,linewidth=1pt]{->}{T}{T6622}
\uput[180](0,1){$p_2$}
\uput[0] (8,1){$p_2$}
\uput[90] (2,2){$p_1$}
\uput[90] (6,2){$p_1$}
\uput[270](6,0){$f$}
\uput[270](2,0){$g$}
\uput[0](2,1.1){$G$}
\uput[0](6,0.9){$F$}
\endpspicture
$$
Since $\deg(f) = \deg(F)\deg(p_2) = 6\deg(F)$ and $\deg(g) = 6\deg(G)$,
our claim is equivalent to $\deg G \equiv \deg F \pmod{2}$.
For simplicity, let us suppose that there is a Kempe region $R$ such that
its boundary $\partial R$ is colored $3$ and $4$. Then, the Kempe change
on $R$ consists in swapping colors $1$ and $2$ on $R$.
Let us see in detail what happens after this K-change. Consider
Figure~\ref{figure_T_6_2_2} for notation. Triangles in
Figure~\ref{figure_T_6_2_2} are labeled $T_1,\dots,T_{24}$. We say that
a triangle $t$ in $T$ is of {\em type $i$\/} with respect to the coloring
$f$ if it is mapped to $T_i$ by the mapping $F$. Similarly, we consider
types of triangles under $g$.
A triangle of type $T_1$ with
positive (resp.\ negative) orientation is mapped onto a
triangle of type $T_{24}$ with negative (resp.\ positive) orientation after
we swap colors $1$ and $2$. We represent this correspondence
as $\pm T_1 \leftrightarrow \mp T_{24}$.
In fact, this K-change induces a bijection from the set of triangular faces of
$T(6,2,2)$ onto itself of the form
\begin{subeqnarray}
\pm T_1 &\leftrightarrow& \mp T_{24} \\
\pm T_{1+k} &\leftrightarrow& \mp T_{12+k} \,, \qquad 1\le k\le 11.
\end{subeqnarray}
This correspondence can be written shortly as
\begin{equation}
\pm T_k \;\leftrightarrow\; \mp T_{\gamma(k)}
\ee
where $\gamma$ is an appropriate permutation.
After the K-change, the number of triangles of a given type outside $R$
is not changed, so we have to count only the changes inside $R$.
Let us introduce some useful notation: the total number of triangles of a
given type $k\in\{1,\ldots,24\}$ inside a region $A$ of the triangulation
$T$ is denoted by $N_k^{(A)}$. Let
$P_k^{(A)}$ (resp.\ $M_k^{(A)}$) denote the number of triangles of
type $k$ inside region $A$ with positive (resp.\ negative) orientation. Hence,
\begin{equation}
N_k^{(A)} \;=\; P_k^{(A)} + M_k^{(A)} \,, \quad k=1,2,\ldots,24 \,,
\quad A\subseteq T \,. \nonumber
\ee
If we split the triangulation $T$ into two
regions $R$ and $T\setminus R$, we get
\begin{equation}
\deg F \;=\; P^{(T\setminus R)}_k - M^{(T\setminus R)}_k +
P^{(R)}_k - M^{(R)}_k
\,, \quad k=1,2,\ldots,24 \,. \nonumber
\ee
After the K-change we obtain a new four-coloring $g$. The composite
coloring $G$ is identical to $F$ outside $R$. The differences can only occur
inside $R$. The degree of $G$ is given by:
\begin{equation}
\deg G \;=\; P^{(T\setminus R)}_k - M^{(T\setminus R)}_k -
P^{(R)}_{\gamma(k)} + M^{(R)}_{\gamma(k)}
\,, \quad k=1,2,\ldots,24 \,. \nonumber
\ee
Let $\Delta\deg = \deg F - \deg G$. Then
\begin{displaymath}
\Delta\deg \;=\; P^{(R)}_k + P^{(R)}_{\gamma(k)} -
(M^{(R)}_k + M^{(R)}_{\gamma(k)}) \,,
\quad k=1,2,\ldots,24 \,. \nonumber
\end{displaymath}
But this is equivalent to
\begin{eqnarray*}
\Delta\deg &\equiv& P^{(R)}_k + P^{(R)}_{\gamma(k)} +
M^{(R)}_k + M^{(R)}_{\gamma(k)} \pmod{2} \nonumber\\
&\equiv& N^{(R)}_k + N^{(R)}_{\gamma(k)} \pmod{2}
\,, \quad k=1,2,\ldots,24. \nonumber
\end{eqnarray*}
In particular, we have that for $k=1,5,9$:
\begin{eqnarray*}
\Delta\deg &\equiv& N^{(R)}_1 + N^{(R)}_{24} \pmod{2} \nonumber \\
\Delta\deg &\equiv& N^{(R)}_5 + N^{(R)}_{16} \pmod{2} \nonumber \\
\Delta\deg &\equiv& N^{(R)}_9 + N^{(R)}_{20} \pmod{2} \nonumber
\end{eqnarray*}
Summing these three equations we arrive at the formula
\begin{eqnarray}
\Delta\deg &\equiv& N^{(R)}_1 + N^{(R)}_{24} +
N^{(R)}_5 + N^{(R)}_{16} +
N^{(R)}_9 + N^{(R)}_{20} \pmod{2} \nonumber \\
&\equiv& \text{\# of triangles inside $R$ with no vertex colored $4$}
\pmod{2} \nonumber \\
&\equiv& \text{\# of triangles inside $R$ colored $123$} \pmod{2}
\label{eq.1}
\end{eqnarray}
Note that if we repeat this procedure with $k=3,7,11$ we obtain a
similar equation and conclude that $\Delta\deg$ has the same parity
as the number of triangles inside $R$ colored $124$.
On the other hand, we cannot obtain a similar formula for the triangles
colored $134$ or $234$.
Let us go back to Eq.~\reff{eq.1}. All vertices colored $1$ inside $R$
belong to the interior of $R$ (i.e., none of them lies on its boundary,
as $\partial R$ is colored $3,4$).
In addition, because the triangulation $T$ is even, each interior vertex
colored $1$ belongs to an even number of triangular faces, all of which
belong to $R$.
Let us consider one of these interior vertices colored $1$, say $x$.
If none of its neighbors is colored $4$, $x$ contributes $\rho(x)$
to $\Delta\deg$ in Eq.~\reff{eq.1}, which is an even number.
For any neighboring vertex of $x$ colored $4$, this
contribution is reduced by two.
Thus, for each interior vertex colored $1$, there is an even
number of triangles belonging to $R$ and colored $123$. This implies that
$\Delta\deg = \deg F - \deg G \equiv 0 \pmod{2}$, and therefore
\begin{equation}
\deg f - \deg g \;=\; 6 (\deg F - \deg G) \;\equiv\; 0 \pmod{12} \,, \nonumber
\ee
as claimed. \hbox{\hskip 6pt\vrule width6pt height7pt depth1pt \hskip1pt}\bigskip
\medskip
Theorem~\ref{main.theo} implies that a four-coloring $f$ with degree
$\deg f \equiv 6 \pmod{12}$ cannot be K-equivalent to the three-coloring
$h$, whose degree is zero. This proves the following corollary:
\begin{corollary}
\label{main.corollary}
Let\/ $T$ be a three-colorable triangulation of the torus.
Then $\Kc(T,4)>1$ if and only if there exists a four-coloring $f$ with
$\deg(f)\equiv 6 \pmod{12}$.
\end{corollary}
\par\medskip\noindent{\sc Proof.\ }
Fisk's Theorem~\ref{theo_Fisk} together with Theorem~\ref{main.theo}
imply the existence of a Kempe equivalence class characterized by
$\deg(g)\equiv 0 \pmod{12}$. This class includes the three-coloring.
Thus, $\Kc(T,4)>1$ if and only if there is a four-coloring $f$ with
$\deg(f)\equiv 6 \pmod{12}$. \hbox{\hskip 6pt\vrule width6pt height7pt depth1pt \hskip1pt}\bigskip
\medskip
By Theorem \ref{main.theo}, the ``if'' part of Corollary \ref{main.corollary}
holds on arbitrary closed orientable surfaces.
The question of the ergodicity of the WSK dynamics on triangulations
$T(3L,3M)$ reduces to the existence of four-colorings of degree
$\equiv 6 \pmod{12}$. If there are no such four-colorings, WSK dynamics is
ergodic, while if such four-colorings exist, then WSK dynamics is
non-ergodic, and the corresponding Markov chain will not converge to the
uniform measure over $\mathcal{C}_4(T)$.
\subsection{A complete proof of Fisk's theorem for $\bm{T(r,s,t)}$}
\label{sect.Fisk_Trst}
The proof of Theorem \ref{theo_Fisk} in \cite{Fisk_77b}
seems to be missing some minor details, as reported in \cite{Mohar_84}.
However, as far as the authors can see, Fisk's proof is complete
and correct apart from these minor issues. Nevertheless,
in this section we provide a self-contained proof of Fisk's result
when restricted to the 6-regular triangulations of the torus treated
in this paper.
Another advantage of our proof is that it gives a closer insight into Kempe
equivalence between 4-colorings of triangulations $T(r,s,t)$.
\begin{theorem}
\label{thm_Fisk_for_Trst}
If the triangulation $T(r,s,t)$ admits a\/ $3$-coloring, then every\/
$4$-coloring of $T$, whose degree is divisible by 12, is K-equivalent to the
$3$-coloring.
\end{theorem}
For the proof we shall consider the ``non-singular structure'' of 4-colorings
and show that we can eliminate the ``non-singular'' part completely
by applying K-changes and thus arrive at the 3-coloring. This will be done
by a series of lemmas. But first we need some definitions.
Let $f$ be a 4-coloring of a triangulation $T$. Let $xy\in E(T)$ and let
$xyz$ and $xyw$ be the two triangles of $T$ containing the edge $xy$. We say that
the edge $xy$ is {\em singular\/} (for the coloring $f$) if $f(z)=f(w)$,
and is {\em non-singular\/} if $f(z)\ne f(w)$.
Let $N(f)$ be the set of all non-singular edges, and
for any distinct colors $i,j$, let $N_{ij}=N_{ij}(f)$ be the set of
non-singular edges $xy\in N(f)$ for which $\{f(x),f(y)\}=\{i,j\}$.
For a vertex $x$, let $N_{ij}^x$ be the set of edges in $N_{ij}$ that are
incident with~$x$.
{}From now on we assume that $T=T(r,s,t)$ is a fixed triangulation of the
torus and that $f$ is a 4-coloring of $T$. We also let $i,j\in \{1,\dots,4\}$
be distinct colors used by the 4-coloring~$f$.
\begin{lemma}
\label{lem:L1}
If\/ $x$ is a vertex of color $f(x)=i$, and $N^x_{ij}\neq \emptyset$,
then $|N^x_{ij}|=2$. Therefore, each $N_{ij}$ is a union of disjoint
cycles in\/ $T$. If two such cycles, $C\subseteq N_{ij}$ and
$C'\subseteq N_{il}$ $(j\ne l)$, cross each other at the
vertex $x$, then there is a third cycle $C''\subseteq N_{ik}$ $(k\ne j,l)$
passing through $x$ and crossing both $C$ and $C'$ at $x$.
\end{lemma}
\par\medskip\noindent{\sc Proof.\ }
Let us consider the
possible 4-colorings around $x$. Up to symmetries (permutations
of the colors and the dihedral symmetries of the 6-cycle), there are precisely
four possibilities that are shown in Figure \ref{fig:f1}. The non-singular edges
are drawn by bold solid or broken lines, and a brief inspection shows that the
claims of the lemma hold.
\hbox{\hskip 6pt\vrule width6pt height7pt depth1pt \hskip1pt}\bigskip
\begin{figure}[htb]
\centering
\psset{xunit=1.2cm}
\psset{yunit=1.2cm}
\psset{labelsep=0.3cm}
\pspicture(-1.5,-1.5)(12,1.5)
\rput{0}(0,0)
\psline[linecolor=black,linewidth=1pt](0,0)(1,0)
\psline[linecolor=black,linewidth=1pt](0,0)(0.500,0.866)
\psline[linecolor=black,linewidth=1pt](0,0)(-0.500,0.866)
\psline[linecolor=black,linewidth=1pt](0,0)(-0.500,-0.866)
\psline[linecolor=black,linewidth=1pt](0,0)(0.500,-0.866)
\psline[linecolor=black,linewidth=1pt](0,0)(-1,0)
\psline[linecolor=black,linewidth=1pt](1,0)(0.500,0.866)(-0.500,0.866)%
(-1,0)(-0.500,-0.866)(0.500,-0.866)(1,0)
\rput{0}(0.000,0.000){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[270](0,0){4}
}
\rput{0}(1.000,0.000){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[0.0](0,0){1}
}
\rput{0}(0.500,0.866){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[60.0](0,0){2}
}
\rput{0}(-0.500,0.866){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[120.0](0,0){1}
}
\rput{0}(-1.000,0.000){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[180.0](0,0){2}
}
\rput{0}(-0.500,-0.866){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[240.0](0,0){1}
}
\rput{0}(0.500,-0.866){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[300.0](0,0){2}
}
}
\rput{0}(3.5,0)
\psline[linecolor=green,linewidth=2pt](-1,0)(1,0)
\psline[linecolor=blue,linewidth=2pt,linestyle=dashed,dash=2pt 2pt]%
(0.500,0.866)(-0.500,-0.866)
\psline[linecolor=red,linewidth=2pt,linestyle=dashed,dash=3pt 3pt]%
(-0.500,0.866)(0.500,-0.866)
\psline[linecolor=black,linewidth=1pt](1,0)(0.500,0.866)(-0.500,0.866)%
(-1,0)(-0.500,-0.866)(0.500,-0.866)(1,0)
\rput{0}(0.000,0.000){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[270](0,0){4}
}
\rput{0}(1.000,0.000){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[0.0](0,0){3}
}
\rput{0}(0.500,0.866){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[60.0](0,0){2}
}
\rput{0}(-0.500,0.866){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[120.0](0,0){1}
}
\rput{0}(-1.000,0.000){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[180.0](0,0){3}
}
\rput{0}(-0.500,-0.866){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[240.0](0,0){2}
}
\rput{0}(0.500,-0.866){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[300.0](0,0){1}
}
}
\rput{0}(7,0)
\psline[linecolor=green,linewidth=2pt](-1,0)(0,0)(0.500,0.866)
\psline[linecolor=blue,linewidth=2pt,linestyle=dashed,dash=3pt 3pt]%
(-0.500,-0.866)(0,0)(1,0)
\psline[linecolor=black,linewidth=1pt](-0.500,0.866)(0.500,-0.866)
\psline[linecolor=black,linewidth=1pt](1,0)(0.500,0.866)(-0.500,0.866)%
(-1,0)(-0.500,-0.866)(0.500,-0.866)(1,0)
\rput{0}(0.000,0.000){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[270](0,0){4}
}
\rput{0}(1.000,0.000){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[0.0](0,0){3}
}
\rput{0}(0.500,0.866){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[60.0](0,0){2}
}
\rput{0}(-0.500,0.866){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[120.0](0,0){1}
}
\rput{0}(-1.000,0.000){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[180.0](0,0){2}
}
\rput{0}(-0.500,-0.866){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[240.0](0,0){3}
}
\rput{0}(0.500,-0.866){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[300.0](0,0){1}
}
}
\rput{0}(10.5,0)
\psline[linecolor=green,linewidth=2pt](-1,0)(0,0)(0.500,0.866)
\psline[linecolor=black,linewidth=1pt](-0.500,0.866)(0.500,-0.866)
\psline[linecolor=black,linewidth=1pt](-0.500,-0.866)(0,0)(1,0)
\psline[linecolor=black,linewidth=1pt](1,0)(0.500,0.866)(-0.500,0.866)%
(-1,0)(-0.500,-0.866)(0.500,-0.866)(1,0)
\rput{0}(0.000,0.000){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[270](0,0){4}
}
\rput{0}(1.000,0.000){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[0.0](0,0){3}
}
\rput{0}(0.500,0.866){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[60.0](0,0){2}
}
\rput{0}(-0.500,0.866){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[120.0](0,0){1}
}
\rput{0}(-1.000,0.000){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[180.0](0,0){2}
}
\rput{0}(-0.500,-0.866){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[240.0](0,0){3}
}
\rput{0}(0.500,-0.866){%
\pscircle*[linecolor=white](0,0){4pt}
\pscircle[linecolor=black,linewidth=1pt](0,0){4pt}
\uput[300.0](0,0){2}
}
}
\endpspicture
\caption{Non-singular edges around a vertex.}
\label{fig:f1}
\end{figure}
A 4-coloring $f$ of $T$ is said to be {\em non-singularly minimal\/}
({\em NS-minimal\/} for short) if for any two distinct colors $i,j$,
the non-singular set $N_{ij}$ is either empty or forms a single non-contractible
cycle. The next lemma and its proof explain why such colorings are called
``minimal''.
\begin{lemma}
\label{lem:L2}
Let $f$ be a $4$-coloring of\/ $T$. Then there exists an NS-minimal\/
$4$-coloring
$f'$ of $T$ that is K-equivalent to $f$ and satisfies $N(f')\subseteq N(f)$.
\end{lemma}
\par\medskip\noindent{\sc Proof.\ }
Let $f'$ be a 4-coloring of $T$ that is K-equivalent to $f$, such that
$N(f')\subseteq N(f)$, and that has the minimum number of non-singular edges
subject to these requirements. Since $f$ itself satisfies these conditions,
such an $f'$ exists.
Let us now consider an arbitrary pair of colors, say 1 and 2.
If $C\subseteq N_{12}(f')$ is a contractible cycle, let $R$ be the disk region
bounded by $C$. By exchanging colors 3 and 4 on $R$ (which keeps us in the
same K-class), the only change in non-singular edges is that $C$ becomes singular.
(However, note that particular sets $N_{ij}$ may be changed.)
This contradicts the minimality of $N(f')$.
Therefore, every non-singular cycle in $N_{12}(f')$ is non-contractible.
Suppose that $N_{12}(f')$ contains distinct cycles $C,C'$.
As proved above, $C$ and $C'$ are non-contractible.
By Lemma \ref{lem:L1}, $C$ and $C'$ are disjoint, so
they are homotopic and therefore together bound a cylinder region $R$.
As above, by exchanging colors 3 and 4 on $R$, we get a contradiction to the
minimality assumption. This completes the proof.
\hbox{\hskip 6pt\vrule width6pt height7pt depth1pt \hskip1pt}\bigskip
\begin{figure}[htb]
\centering
\psset{xunit=50pt}
\psset{yunit=50pt}
\psset{labelsep=10pt}
\pspicture(-0.5,-0.5)(6.5,2.5)
\multirput{0}(0,0)(0,1){3}{\psline[linewidth=2pt,linecolor=blue](0,0)(6,0)}
\multirput{0}(0,0)(1,0){7}{\psline[linewidth=2pt,linecolor=blue](0,0)(0,2)}
\multirput{0}(0,0)(1,0){5}{\psline[linewidth=2pt,linecolor=blue](0,0)(2,2)}
\psline[linewidth=2pt,linecolor=blue](0,1)(1,2)
\psline[linewidth=2pt,linecolor=blue](5,0)(6,1)
\psline[linewidth=2pt,linecolor=red,linestyle=dashed, dash=3pt 3pt]%
(0.5,0)(0.6667,0.3333)(1.3333,0.6667)(1.6667,0.3333)(2.3333,0.6667)%
(2.6667,1.3333)(3.3333,1.6667)(3.6667,1.3333)(4.3333,1.6667)(4.5,2)
\multirput{0}(0,0)(0,1){3}{%
\multirput{0}(0,0)(1,0){7}{%
\pscircle*[linecolor=white]{5pt}
\pscircle[linewidth=1pt,linecolor=black]{5pt}
}
}
\uput[270](0,0){$\bm{1_1}$}
\uput[270](1,0){$\bm{2_2}$}
\uput[270](2,0){$\bm{1_3}$}
\uput[270](3,0){$\bm{2_1}$}
\uput[270](4,0){$\bm{1_2}$}
\uput[270](5,0){$\bm{2_3}$}
\uput[270](6,0){$\bm{1_1}$}
\uput[90](0,2){$\bm{1_3}$}
\uput[90](1,2){$\bm{2_1}$}
\uput[90](2,2){$\bm{1_2}$}
\uput[90](3,2){$\bm{2_3}$}
\uput[90](4,2){$\bm{1_1}$}
\uput[90](5,2){$\bm{2_2}$}
\uput[90](6,2){$\bm{1_3}$}
\uput[135](0,1){$\bm{4_2}$}
\uput[135](1,1){$\bm{3_3}$}
\uput[135](2,1){$\bm{4_1}$}
\uput[315](3,1){$\bm{3_2}$}
\uput[315](4,1){$\bm{4_3}$}
\uput[315](5,1){$\bm{3_1}$}
\uput[315](6,1){$\bm{4_2}$}
\endpspicture
\caption{\label{fig:T622}
The triangulation $T_0 = \Delta^2\times\partial\Delta^3
\approx T(6,2,2)$. The dashed line shows the sequence of triangles
$(g\times f)(\gamma)$ (see text).
}
\end{figure}
As defined earlier, let $T_0 = \Delta^2\times\partial\Delta^3 \approx T(6,2,2)$
be the 6-regular triangulation of the torus shown in Figure~\ref{fig:T622}.
Note that $T_0$ admits a 3-coloring and a non-singular 4-coloring. Its vertices
can be labeled by pairs of colors, written as $i_j$, where $i\in\{1,2,3,4\}$
is the color of the non-singular 4-coloring, and $j\in\{1,2,3\}$ is its color
under the 3-coloring; see Figure \ref{fig:T622}.
If the triangulation $T$ has a 3-coloring $g$ and a 4-coloring $f$, then we
define a simplicial map $g\times f: T\to T_0$ by setting
$(g\times f)(x) = f(x)_{g(x)}\in V(T_0)$ for every vertex $x$ of $T$.
If $\gamma$ is a closed curve on the torus $T$ that does not pass through
the vertices of $T$, then $\gamma$ can be described (up to homotopy)
by specifying
the sequence of triangles of $T$ traversed by it. This closed sequence of
triangles, $A_1,A_2,\dots,A_N,A_1$, is uniquely determined if we cancel out
possible immediate backtracking, i.e., subsequences of the form $A,B,A$.
The mapping $g\times f$ then determines a closed sequence
$B_1,B_2,\dots,B_N,B_1$ of triangles in $T_0$,
where $B_i = (g\times f)(A_i)$ for $i=1,\dots,N$.
This sequence will be denoted by $(g\times f)(\gamma)$
(See Figure~\ref{fig:T622}).
The main property of this correspondence is that $B_i=B_{i+1}$ if and only if
the edge common to $A_i$ and $A_{i+1}$ is singular with respect to the
4-coloring $f$ of $T$, i.e. $\gamma$ crosses a singular edge of $f$ when
passing from $A_i$ to $A_{i+1}$.
\begin{lemma}
\label{lem:L3}
Let\/ $T=T(r,s,t)$ be a $3$-colorable triangulation of the torus, and let
$f$ be an NS-minimal\/ $4$-coloring of\/ $T$. If $f$ is not
the $3$-coloring of\/ $T$, then all non-singular cycles $N_{ij}$
$(1\le i<j\le 4)$ exist. Two such cycles $N_{ij}$ and $N_{kl}$
$(\{i,j\}\ne\{k,l\})$ are homotopic if and only if\/
$\{i,j\}\cap\{k,l\}=\emptyset$.
\end{lemma}
\par\medskip\noindent{\sc Proof.\ }
We shall use the notation introduced above. Since $f$ is not the 3-coloring
(which is unique, up to global permutations of colors),
we may assume that $N_{12}\ne\emptyset$. Let $\gamma$ be a simple closed curve
in the torus that crosses $N_{12}$ precisely once and is given by the sequence
of triangles $A_1,\dots,A_N,A_1$. Let us consider the corresponding sequence
$\gamma' = (g\times f)(\gamma)=B_1,B_2,\dots,B_N,B_1$ of triangles in $T_0$.
Let $K_{ij}$ be the non-singular cycle in $T_0$ passing through all vertices
$i_l$ and $j_l$, $l=1,2,3$. Since $\gamma$ crosses $N_{12}$ precisely once,
$\gamma'$ crosses $K_{12}$ exactly once. We may assume that
it crosses $K_{12}$ through the edge $e=1_12_2$ as shown in
Figure~\ref{fig:T622}.
For a cycle $K_{ij}$ we define the {\em algebraic crossing number\/} with
$\gamma'$ by first counting the number of consecutive triangles
$B_l,B_{l+1}$ in
$\gamma'$ such that $B_l$ is ``on the left'' of $K_{ij}$, while $B_{l+1}$ is
``on the right'' of it, and then subtracting the number of such pairs,
where $B_l$ is ``on the right'' and $B_{l+1}$ is ``on the left''. (For the
two ``horizontal'' cycles $K_{12}$ and $K_{34}$ we replace ``left''
by ``bottom'' and ``right'' by ``top''. All of these directions of
course refer to Figure~\ref{fig:T622}.)
We denote this number by $\algcr(\gamma',K_{ij})$.
For an arbitrary edge-set $F\subseteq E(K_{ij})$, we define $\algcr(\gamma',F)$
in the same way, except that we only consider consecutive
triangles $B_l,B_{l+1}$ sharing the edges in $F$.
Let $k = \algcr(\gamma',\{1_14_2,4_21_3\})$.
This number can be viewed as the ``winding number'' around the cylinder
obtained from $T_0$ by cutting along the cycle $K_{12}$,
cf.~Figure~\ref{fig:T622}.
Using the fact that $\gamma'$ is contained in this cylinder except for its
crossing of the edge $1_12_2$, it is easy to see that
$\algcr(\gamma',K_{13}) = 3k+1$, $\algcr(\gamma',K_{24}) = 3k+1$,
$\algcr(\gamma',K_{14}) = 3k+2$, and $\algcr(\gamma',K_{23}) = 3k+2$.
Moreover, $\algcr(\gamma',K_{12}) = \algcr(\gamma',K_{34}) = 1$.
In particular, none of these numbers is zero (modulo 3).
Let us recall that $B_i\ne B_{i+1}$ if and only if the edge common to $A_i$ and
$A_{i+1}$ is non-singular with respect to $f$. Therefore, $\gamma'$ crosses
an edge of $K_{ij}$ precisely when $\gamma$ crosses an edge in $N_{ij}(f)$.
Therefore $\algcr(\gamma',K_{ij}) = \algcr(\gamma,N_{ij}) \ne 0$.
This shows that none of the sets $N_{ij}$ is empty.
If $\{i,j\}\cap\{k,l\}=\emptyset$, the two cycles $N_{ij}$ and $N_{kl}$ are
disjoint. Since they are non-contractible and the surface is the torus, they
are homotopic to each other. On the other hand, since
$\algcr(\gamma,N_{13}) = \algcr(\gamma,N_{14}) - 1$, cycles $N_{13}$ and
$N_{14}$ cannot be homotopic. Similarly, by starting the above proof with other
cycles instead of $N_{12}$, we conclude that cycles $N_{ij}$ and $N_{kl}$
cannot be homotopic if $\{i,j\}\cap\{k,l\}\ne\emptyset$.
\hbox{\hskip 6pt\vrule width6pt height7pt depth1pt \hskip1pt}\bigskip
\medskip
Note that in the proof of Lemma \ref{lem:L3}, we did not use any assumption
on the degree of the 4-coloring $f$. On the other hand, in our last lemma,
when arguing about the degree of a 4-coloring, we will not need the existence
of the 3-coloring.
\begin{lemma}
\label{lem:L4}
Let $f$ be an NS-minimal\/ $4$-coloring of\/ $T$ such that all non-singular
cycles $N_{ij}(f)$ exist and such that two such cycles $N_{ij}$ and $N_{kl}$
$(\{i,j\}\ne\{k,l\})$ are homotopic if and only
if\/ $\{i,j\}\cap\{k,l\}=\emptyset$.
Then the degree of $f$ is congruent to $2$ modulo $4$.
In particular, it is not divisible by\/ $12$.
\end{lemma}
\par\medskip\noindent{\sc Proof.\ }
Let us consider cycles $N_{12}$ and $N_{13}$. Since they are not homotopic,
they cross at least once, and this happens at vertices of color 1.
By Lemma \ref{lem:L1}, both these cycles are crossed by $N_{14}$ at each
such crossing point. Let us fix an orientation on the torus $T$ and let
$x\in V(T)$ be a vertex of color 1 at which $N_{12},N_{13},N_{14}$ cross
each other. If the local clockwise order around $x$ is
$N_{12},N_{13},N_{14},N_{12},N_{13},N_{14}$, then we say that $x$ is
a {\em positive crossing point\/} (of color 1); if the local clockwise order
is $N_{12},N_{14},N_{13},N_{12},N_{14},N_{13}$, then $x$ is
a {\em negative crossing point}.
We claim that the difference between the number of positive and the number
of negative crossing points of color 1 is equal (in absolute value)
to the algebraic crossing number $\algcr(N_{12},N_{13})$.
This is a consequence of the fact that color 4 changes
sides (from the left to the right side of $N_{13}$, or vice versa)
every time the curve $N_{13}$ passes through a crossing point of
color 1 or through a crossing point of color 3 (thus crossing
the cycle $N_{34}$, which is homotopic to $N_{12}$).
We leave the details to the reader.
Since the numbers of positive and negative crossing points of color
1 are also the same for other pairs of non-singular cycles that involve
color 1, we conclude that
\begin{equation}
|\algcr(N_{12},N_{13})| \;=\; |\algcr(N_{12},N_{14})| \;=\;
|\algcr(N_{13},N_{14})|\,.
\label{eq:algcr1}
\end{equation}
Let us fix two simple closed curves $\gamma,\nu$ on the torus $T$,
where $\nu$ is the curve corresponding to the cycle $N_{12}(f)$ and $\gamma$
crosses $\nu$ precisely once. Then every closed curve $\alpha$ on $T$
is homotopic to the curve which winds $a$ times around $\nu$, and then
winds $b$ times around $\gamma$, where $a$ and $b$ are
integers. We say that $\alpha$ has {\em homotopy type\/} $(a,b)$.
The homotopy type of $N_{12}$ is clearly (1,0). Let $(a,b)$ and $(c,d)$ be the
homotopy types of $N_{13}$ and $N_{14}$, respectively. The algebraic
crossing number between closed curves is a (free) homotopy invariant
and can be expressed as the determinant of the $2\times 2$ matrix
whose rows are the homotopy types of the curves (see, e.g.~\cite{Zieschang}).
In particular,
\begin{eqnarray}
\algcr(N_{12},N_{13}) &=& \pm\det\begin{pmatrix}1&0\\a&b\end{pmatrix}
\;=\; \pm b \,, \label{eq:algcr2}\\
\algcr(N_{12},N_{14}) &=& \pm\det\begin{pmatrix}1&0\\c&d\end{pmatrix}
\;=\; \pm d\,, \label{eq:algcr3}\\
\algcr(N_{13},N_{14}) &=& \pm\det\begin{pmatrix}a&b\\c&d\end{pmatrix}
\;=\; \pm(ad-bc) \,. \label{eq:algcr4}
\end{eqnarray}
By (\ref{eq:algcr1}), all three algebraic crossing numbers in
(\ref{eq:algcr2})--(\ref{eq:algcr4}) are equal up to the sign, so
$|b| = |d| = |ad-bc|$. It follows that either $|a-c|=1$ or $|a+c|=1$.
Here we have used the fact that $b\ne0$, and this is true since $N_{13}$
is not homotopic to $N_{12}$. A particular consequence of the above conclusion
is that either $a$ or $c$ is even.
Suppose first that $a$ is even. Since $N_{13}$ is a simple curve,
its homotopy type $(a,b)$ satisfies $\gcd(a,b)=1$ (cf.~\cite{Zieschang}).
Therefore $b$, and hence also $d$, is odd.
The other case, when $c$ is even, leads to the same conclusion.
Since the total number of crossing points of color 1 is congruent modulo 2
to the algebraic crossing number $\pm b$, it follows that this number is odd.
Of course, we can repeat the same proof for crossing points of color 2
to conclude that their number is odd as well.
We are ready for the second part of the proof, where we will relate the number
of crossing points and the degree of the coloring $f$.
Let us traverse the cycle $N_{12}$ and consider the (cyclic) sequence of
all crossing points of colors 1 and 2 as they appear on $N_{12}$.
We shall see that one can determine the degree of $f$ just from
this sequence.
Let us recall that $\deg(f)$ is equal to the difference between the number
of triangles colored $123$, whose orientation on the surface is $123$,
minus the number of such triangles whose orientation is $132$.
If $t$ is such a triangle and its edge colored $12$ is not in $N_{12}$,
then there is another triangle colored 123 sharing that edge with $t$ and
having opposite orientation. The contribution of all such triangles towards
the degree of $f$ thus cancels out.
On the other hand, each edge of $N_{12}$ is contained in precisely one triangle
colored $123$. Consider two consecutive edges $xy$ and $yz$ on $N_{12}$. If $y$
is not a crossing point with other non-singular curves, then one of the two
triangles colored $123$ and incident with these edges is oriented positively,
the other one negatively, and so their contributions will cancel out.
On the other hand, if $y$ is a crossing point, then they have the
same orientation. If two consecutive crossing points on $N_{12}$ are of
the same color, then the pair at one of these two crossing points is
positively oriented, while the pair at the other crossing point is
negatively oriented, and hence they cancel out. This has the same
effect as removing two consecutive 1's or two consecutive 2's from the
cyclic sequence of crossing points on $N_{12}$. Therefore, we may
assume that the sequence of crossing points is alternating,
$1212\dots 12$. The number of 1's
is an odd integer, say $2k+1$, as shown in the first part of the proof.
This implies that all triangles at crossing points have positive (or all have
negative) orientation. Therefore, $\deg(f) = \pm 2(2k+1) \equiv 2 \pmod{4}$,
as we set out to prove.
\hbox{\hskip 6pt\vrule width6pt height7pt depth1pt \hskip1pt}\bigskip
\medskip
\proofof{Theorem~\ref{thm_Fisk_for_Trst}}
Let $f$ be a 4-coloring of $T=T(r,s,t)$. By Lemma \ref{lem:L2}
there is an NS-minimal coloring $f'$ that is K-equivalent to $f$ and
has $N(f')\subseteq N(f)$. If $f'$ is not the 3-coloring, then
by Lemma \ref{lem:L3}, all six non-singular curves $N_{ij}(f')$ exist
and their homotopy is as stated in the lemma. But then Lemma \ref{lem:L4}
implies that $\deg(f')\equiv 2 \pmod{4}$. Since the K-equivalence
preserves the value of the degree modulo 12 (cf.~Theorem~\ref{main.theo}),
this yields a contradiction
to the assumption that the degree of $f$ is divisible by 12.
\hbox{\hskip 6pt\vrule width6pt height7pt depth1pt \hskip1pt}\bigskip
\section{Consequences for the triangulations $\bm{T(3L,3L)}$} \label{sec.sym}
A simple corollary of Proposition~\ref{prop_Fisk} and Theorem~\ref{theo_Fisk}
shows that all 4-colorings of $T(3,3)$ are K-equivalent:
\begin{corollary} \label{theo_L=3}
$\Kc(T(3,3),4)=1$.
\end{corollary}
\par\medskip\noindent{\sc Proof.\ }
The smallest (in modulus) non-zero degree for a four-coloring of an even
three-colorable triangulation is $6$ by Proposition~\ref{prop_Fisk}.
But in order to have a four-coloring $f$ with such degree, we would need
at least $6\times 4=24$ triangular faces.
However, the triangulation $T(3,3)$ only has $3^2 \times 2=18$ such faces.
Then, $\deg(f)=0$ for all four-colorings of $T(3,3)$, and
Theorem~\ref{theo_Fisk} implies that $\Kc(T(3,3),4)=1$. \hbox{\hskip 6pt\vrule width6pt height7pt depth1pt \hskip1pt}\bigskip
\medskip
A four-coloring $f$ is said to be {\em non-singular\/} if all edges
are non-singular with respect to $f$. Fisk \cite{Fisk_77b} showed that
the triangulation
$T(r,s,t)$ has a non-singular four-coloring $c_\text{ns}$ if and only if
$r,s,t$ are all even. In this non-singular coloring, each horizontal
row uses exactly two colors. This also holds for all vertical and diagonal
``straight-ahead cycles''. For the triangulation $T(3L,3M)$, the
non-singular coloring (when it exists) is given by
\begin{equation}
c_\text{ns}(x,y) \;=\; \begin{cases}
1 & \text{if $x,y\equiv 1 \bmod{2}$} \\
2 & \text{if $x \equiv 1$ and
$y \equiv 0 \bmod{2}$} \\
3 & \text{if $x \equiv 0$ and
$y \equiv 1 \bmod{2}$ }\\
4 & \text{if $x,y\equiv 0 \bmod{2}$}
\end{cases} \,, \quad 1\leq x \leq 3L\,, \quad 1\leq y \leq 3M
\label{def_coloring_ns}
\ee
\begin{proposition}
\label{prop.non-singular}
The triangulation $T(3L,3M)$ has a non-singular four-coloring $c_\text{ns}$
if and only $L=2\ell$ and $M=2m$ are both even. If so, then
$|\deg c_\text{ns}|=18\ell m$. In particular, $\Kc(T(6\ell,6m),4)\ge2$
if $\ell$ and $m$ are both odd.
\end{proposition}
\par\medskip\noindent{\sc Proof.\ }
Under the non-singular coloring, all triangles are mapped to
$\partial\Delta^3$ with the same orientation. Thus,
$|\deg c_\text{ns}| = \frac{1}{4}(\#\text{triangles of }T(3L,3M)) = 18\ell m$.
If $\ell$ and $m$ are both odd, the degree is $\equiv 6 \bmod{12}$,
and now Corollary~\ref{main.corollary} applies.
\hbox{\hskip 6pt\vrule width6pt height7pt depth1pt \hskip1pt}\bigskip
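These claims are easily checked numerically with the sketches from the
previous sections (the coloring below is a 0-indexed translation of
\reff{def_coloring_ns}; illustrative only):
\begin{verbatim}
def c_ns(v):
    # 0-indexed translation of the non-singular coloring: 1 at (even,even),
    # 2 at (even,odd), 3 at (odd,even), 4 at (odd,odd).
    x, y = v
    return 1 + 2 * (x % 2) + (y % 2)

assert is_proper(c_ns, L=2, M=2)              # proper on T(6,6)
f_ns = {v: c_ns(v) for v in vertices(2, 2)}   # ell = m = 1
assert abs(degree(f_ns, 2, 2)) == 18          # = 18 * ell * m
\end{verbatim}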
The next non-trivial result shows that $\Kc(T(6,6),4)=2$; hence
WSK dynamics is non-ergodic on this triangulation.
\begin{theorem}[with Alan Sokal]\label{theo_L=6}
$\Kc(T(6,6),4)=2$.
\end{theorem}
\par\medskip\noindent{\sc Proof.\ }
Proposition \ref{prop.non-singular} shows that the non-singular
four-coloring of $T(6,6)$ has $\deg(c_\text{ns})\equiv 6 \pmod{12}$
and that there are at least two Kempe equivalence classes for this
triangulation. One class $\mathcal{C}_4^{(0)}$
corresponds to all colorings whose degree is a multiple of $12$. The other
classes contain colorings with degree $\equiv 6 \pmod{12}$.
The fact that the number of Kempe classes is exactly two can be derived
as follows. Let us first observe that the maximum degree of a four-coloring
of the triangulation $T(3L,3L)$ is $\lfloor 9L^2/2 \rfloor$: the degree can
be computed from each of the four triangles of $\partial\Delta^3$, so four
times the degree cannot exceed the total number $18L^2$ of triangular faces;
therefore,
for $T(6,6)$ this maximum degree is $18$. Thus, we should focus on all
four-colorings $f$ with $|\deg(f)|=6,18$, and show that they form a unique
Kempe equivalence class.
There is a single four-coloring $f$ with $|\deg(f)|=18$:
the non-singular coloring $c_\text{ns}$ depicted in
Figure~\ref{figure_tri_L=6}(a).
Each row (horizontal, vertical or diagonal) contains exactly two colors,
and for any choice of colors $a,b$, the induced subgraph $T_{ab}$ contains
three parallel connected components, each of them being a cycle of length six.
Then, the only non-trivial K-changes correspond to swapping colors on one
of these cycles (as swapping colors simultaneously on two such cycles
is equivalent to swapping colors on the third cycle and permuting
colors $a,b$ globally).
If we choose colors $1,2$ and swap colors on the bottom row, we get the
four-coloring $f_b$ with degree $|\deg(f_b)|=6$ depicted in
Figure~\ref{figure_tri_L=6}(b).
To obtain a new coloring we should choose the other pair of colors $3,4$,
as for any other choice $(a,b)\neq (1,2)$ or $(3,4)$, the induced subgraph
$T_{ab}$ is connected, so we would not obtain a distinct coloring.
Again, we only need to consider one of the three horizontal cycles of the
induced subgraph $T_{34}$. Now we have two different choices: the second or
the fourth rows from the bottom. The resulting colorings $f_c,f_d$
are depicted respectively in Figures~\ref{figure_tri_L=6}(c) and (d).
Both have $|\deg(f_i)|=6$, and all the induced subgraphs $T_{a,b}$ with
$(a,b)\neq (1,2)$ or $(3,4)$ are again connected. Thus, all these
colorings form a closed class $\mathcal{C}_4^{(1)}$ under K-changes; but
we still need to prove that there are no additional colorings $f$ with
$|\deg f|=6$.
To count the number of four-colorings $f$ with $|\deg(f)|=6$ belonging to the
class $\mathcal{C}_4^{(1)}$, we can fix the colors of the three vertices
of a triangular face $t$. Then, all we can do is (for each of the three
directions -- horizontal, vertical, and diagonal) to swap colors on any
non-empty subset of the four cycles in the chosen direction not intersecting
$t$. Since there are 15 non-empty subsets, we have $15\times 3=45$ colorings
$f$ with $|\deg(f)|=6$, and therefore, $|\mathcal{C}_4^{(1)}|=46$.
Finally, we used a computer program (written in {\sc perl}) that enumerates
all possible four-colorings on $T(6,6)$ and classifies them according to
$|\deg(f)|$. It finds $305192$ proper four-colorings with zero degree,
$45$ colorings with $|\deg(f)|=6$, and a single coloring with $|\deg(f)|=18$.
Therefore, $\mathcal{C}^{(1)}_4$ contains all colorings with $|\deg(f)|=6,18$,
$\mathcal{C}_4(T(6,6)) = \mathcal{C}_4^{(0)} \cup \mathcal{C}^{(1)}_4$, and
$\Kc(T(6,6),4)=2$. Indeed, the number of all these colorings is equal to
$P_{T(6,6)}(4)/4! = 305238$. \hbox{\hskip 6pt\vrule width6pt height7pt depth1pt \hskip1pt}\bigskip
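A {\sc python} analogue of that enumeration is sketched below (a slow but
straightforward backtracking scheme reusing the helpers introduced in the
previous sections; it is not the {\sc perl} program itself). Fixing the
colors of one triangular face to $1,2,3$ selects exactly one representative
per global color permutation, in agreement with our convention of counting
colorings modulo such permutations:
\begin{verbatim}
from collections import defaultdict

def classify_T66():
    # Enumerate the proper four-colorings of T(6,6), one representative
    # per global color permutation, and classify them by |deg(f)|.
    L = M = 2
    verts = vertices(L, M)
    adj = {v: neighbors(v, L, M) for v in verts}
    fixed = {(0, 0): 1, (1, 0): 2, (1, 1): 3}   # colors of one triangle
    counts, f = defaultdict(int), {}

    def backtrack(i):
        if i == len(verts):
            counts[abs(degree(f, L, M))] += 1
            return
        v = verts[i]
        for c in ([fixed[v]] if v in fixed else range(1, 5)):
            if all(f.get(w) != c for w in adj[v]):
                f[v] = c
                backtrack(i + 1)
                del f[v]

    backtrack(0)
    return dict(counts)   # expected: {0: 305192, 6: 45, 18: 1}
\end{verbatim}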
\begin{figure}[htb]
\centering
\begin{tabular}{cc}
\psset{xunit=22pt}
\psset{yunit=22pt}
\pspicture(-0.5,-0.5)(5.5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,0)(5.5,0)
\psline[linewidth=2pt,linecolor=blue](-0.5,1)(5.5,1)
\psline[linewidth=2pt,linecolor=blue](-0.5,2)(5.5,2)
\psline[linewidth=2pt,linecolor=blue](-0.5,3)(5.5,3)
\psline[linewidth=2pt,linecolor=blue](-0.5,4)(5.5,4)
\psline[linewidth=2pt,linecolor=blue](-0.5,5)(5.5,5)
\psline[linewidth=2pt,linecolor=blue](0,-0.5)(0,5.5)
\psline[linewidth=2pt,linecolor=blue](1,-0.5)(1,5.5)
\psline[linewidth=2pt,linecolor=blue](2,-0.5)(2,5.5)
\psline[linewidth=2pt,linecolor=blue](3,-0.5)(3,5.5)
\psline[linewidth=2pt,linecolor=blue](4,-0.5)(4,5.5)
\psline[linewidth=2pt,linecolor=blue](5,-0.5)(5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,-0.5)(5.5,5.5)
\psline[linewidth=2pt,linecolor=blue](0.5,-0.5)(5.5,4.5)
\psline[linewidth=2pt,linecolor=blue](1.5,-0.5)(5.5,3.5)
\psline[linewidth=2pt,linecolor=blue](2.5,-0.5)(5.5,2.5)
\psline[linewidth=2pt,linecolor=blue](3.5,-0.5)(5.5,1.5)
\psline[linewidth=2pt,linecolor=blue](4.5,-0.5)(5.5,0.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,0.5)(4.5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,1.5)(3.5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,2.5)(2.5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,3.5)(1.5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,4.5)(0.5,5.5)
\multirput{0}(0,0)(0,1){6}{%
\multirput{0}(0,0)(1,0){6}{%
\pscircle*[linecolor=white]{8pt}
\pscircle[linewidth=1pt,linecolor=black] {8pt}
}
}
\rput{0}(0,0){{\bf 1}}
\rput{0}(0,1){{\bf 3}}
\rput{0}(0,2){{\bf 1}}
\rput{0}(0,3){{\bf 3}}
\rput{0}(0,4){{\bf 1}}
\rput{0}(0,5){{\bf 3}}
\rput{0}(1,0){{\bf 2}}
\rput{0}(1,1){{\bf 4}}
\rput{0}(1,2){{\bf 2}}
\rput{0}(1,3){{\bf 4}}
\rput{0}(1,4){{\bf 2}}
\rput{0}(1,5){{\bf 4}}
\rput{0}(2,0){{\bf 1}}
\rput{0}(2,1){{\bf 3}}
\rput{0}(2,2){{\bf 1}}
\rput{0}(2,3){{\bf 3}}
\rput{0}(2,4){{\bf 1}}
\rput{0}(2,5){{\bf 3}}
\rput{0}(3,0){{\bf 2}}
\rput{0}(3,1){{\bf 4}}
\rput{0}(3,2){{\bf 2}}
\rput{0}(3,3){{\bf 4}}
\rput{0}(3,4){{\bf 2}}
\rput{0}(3,5){{\bf 4}}
\rput{0}(4,0){{\bf 1}}
\rput{0}(4,1){{\bf 3}}
\rput{0}(4,2){{\bf 1}}
\rput{0}(4,3){{\bf 3}}
\rput{0}(4,4){{\bf 1}}
\rput{0}(4,5){{\bf 3}}
\rput{0}(5,0){{\bf 2}}
\rput{0}(5,1){{\bf 4}}
\rput{0}(5,2){{\bf 2}}
\rput{0}(5,3){{\bf 4}}
\rput{0}(5,4){{\bf 2}}
\rput{0}(5,5){{\bf 4}}
\endpspicture
\qquad
&
\qquad
\psset{xunit=22pt}
\psset{yunit=22pt}
\pspicture(-0.5,-0.5)(5.5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,0)(5.5,0)
\psline[linewidth=2pt,linecolor=blue](-0.5,1)(5.5,1)
\psline[linewidth=2pt,linecolor=blue](-0.5,2)(5.5,2)
\psline[linewidth=2pt,linecolor=blue](-0.5,3)(5.5,3)
\psline[linewidth=2pt,linecolor=blue](-0.5,4)(5.5,4)
\psline[linewidth=2pt,linecolor=blue](-0.5,5)(5.5,5)
\psline[linewidth=2pt,linecolor=blue](0,-0.5)(0,5.5)
\psline[linewidth=2pt,linecolor=blue](1,-0.5)(1,5.5)
\psline[linewidth=2pt,linecolor=blue](2,-0.5)(2,5.5)
\psline[linewidth=2pt,linecolor=blue](3,-0.5)(3,5.5)
\psline[linewidth=2pt,linecolor=blue](4,-0.5)(4,5.5)
\psline[linewidth=2pt,linecolor=blue](5,-0.5)(5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,-0.5)(5.5,5.5)
\psline[linewidth=2pt,linecolor=blue](0.5,-0.5)(5.5,4.5)
\psline[linewidth=2pt,linecolor=blue](1.5,-0.5)(5.5,3.5)
\psline[linewidth=2pt,linecolor=blue](2.5,-0.5)(5.5,2.5)
\psline[linewidth=2pt,linecolor=blue](3.5,-0.5)(5.5,1.5)
\psline[linewidth=2pt,linecolor=blue](4.5,-0.5)(5.5,0.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,0.5)(4.5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,1.5)(3.5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,2.5)(2.5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,3.5)(1.5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,4.5)(0.5,5.5)
\multirput{0}(0,0)(0,1){6}{%
\multirput{0}(0,0)(1,0){6}{%
\pscircle*[linecolor=white]{8pt}
\pscircle[linewidth=1pt,linecolor=black] {8pt}
}
}
\rput{0}(0,0){{\bf 2}}
\rput{0}(0,1){{\bf 3}}
\rput{0}(0,2){{\bf 1}}
\rput{0}(0,3){{\bf 3}}
\rput{0}(0,4){{\bf 1}}
\rput{0}(0,5){{\bf 3}}
\rput{0}(1,0){{\bf 1}}
\rput{0}(1,1){{\bf 4}}
\rput{0}(1,2){{\bf 2}}
\rput{0}(1,3){{\bf 4}}
\rput{0}(1,4){{\bf 2}}
\rput{0}(1,5){{\bf 4}}
\rput{0}(2,0){{\bf 2}}
\rput{0}(2,1){{\bf 3}}
\rput{0}(2,2){{\bf 1}}
\rput{0}(2,3){{\bf 3}}
\rput{0}(2,4){{\bf 1}}
\rput{0}(2,5){{\bf 3}}
\rput{0}(3,0){{\bf 1}}
\rput{0}(3,1){{\bf 4}}
\rput{0}(3,2){{\bf 2}}
\rput{0}(3,3){{\bf 4}}
\rput{0}(3,4){{\bf 2}}
\rput{0}(3,5){{\bf 4}}
\rput{0}(4,0){{\bf 2}}
\rput{0}(4,1){{\bf 3}}
\rput{0}(4,2){{\bf 1}}
\rput{0}(4,3){{\bf 3}}
\rput{0}(4,4){{\bf 1}}
\rput{0}(4,5){{\bf 3}}
\rput{0}(5,0){{\bf 1}}
\rput{0}(5,1){{\bf 4}}
\rput{0}(5,2){{\bf 2}}
\rput{0}(5,3){{\bf 4}}
\rput{0}(5,4){{\bf 2}}
\rput{0}(5,5){{\bf 4}}
\endpspicture
\\[2mm]
(a) &\phantom{(a)} (b) \\[5mm]
\psset{xunit=22pt}
\psset{yunit=22pt}
\pspicture(-0.5,-0.5)(5.5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,0)(5.5,0)
\psline[linewidth=2pt,linecolor=blue](-0.5,1)(5.5,1)
\psline[linewidth=2pt,linecolor=blue](-0.5,2)(5.5,2)
\psline[linewidth=2pt,linecolor=blue](-0.5,3)(5.5,3)
\psline[linewidth=2pt,linecolor=blue](-0.5,4)(5.5,4)
\psline[linewidth=2pt,linecolor=blue](-0.5,5)(5.5,5)
\psline[linewidth=2pt,linecolor=blue](0,-0.5)(0,5.5)
\psline[linewidth=2pt,linecolor=blue](1,-0.5)(1,5.5)
\psline[linewidth=2pt,linecolor=blue](2,-0.5)(2,5.5)
\psline[linewidth=2pt,linecolor=blue](3,-0.5)(3,5.5)
\psline[linewidth=2pt,linecolor=blue](4,-0.5)(4,5.5)
\psline[linewidth=2pt,linecolor=blue](5,-0.5)(5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,-0.5)(5.5,5.5)
\psline[linewidth=2pt,linecolor=blue](0.5,-0.5)(5.5,4.5)
\psline[linewidth=2pt,linecolor=blue](1.5,-0.5)(5.5,3.5)
\psline[linewidth=2pt,linecolor=blue](2.5,-0.5)(5.5,2.5)
\psline[linewidth=2pt,linecolor=blue](3.5,-0.5)(5.5,1.5)
\psline[linewidth=2pt,linecolor=blue](4.5,-0.5)(5.5,0.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,0.5)(4.5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,1.5)(3.5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,2.5)(2.5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,3.5)(1.5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,4.5)(0.5,5.5)
\multirput{0}(0,0)(0,1){6}{%
\multirput{0}(0,0)(1,0){6}{%
\pscircle*[linecolor=white]{8pt}
\pscircle[linewidth=1pt,linecolor=black] {8pt}
}
}
\rput{0}(0,0){{\bf 2}}
\rput{0}(0,1){{\bf 4}}
\rput{0}(0,2){{\bf 1}}
\rput{0}(0,3){{\bf 3}}
\rput{0}(0,4){{\bf 1}}
\rput{0}(0,5){{\bf 3}}
\rput{0}(1,0){{\bf 1}}
\rput{0}(1,1){{\bf 3}}
\rput{0}(1,2){{\bf 2}}
\rput{0}(1,3){{\bf 4}}
\rput{0}(1,4){{\bf 2}}
\rput{0}(1,5){{\bf 4}}
\rput{0}(2,0){{\bf 2}}
\rput{0}(2,1){{\bf 4}}
\rput{0}(2,2){{\bf 1}}
\rput{0}(2,3){{\bf 3}}
\rput{0}(2,4){{\bf 1}}
\rput{0}(2,5){{\bf 3}}
\rput{0}(3,0){{\bf 1}}
\rput{0}(3,1){{\bf 3}}
\rput{0}(3,2){{\bf 2}}
\rput{0}(3,3){{\bf 4}}
\rput{0}(3,4){{\bf 2}}
\rput{0}(3,5){{\bf 4}}
\rput{0}(4,0){{\bf 2}}
\rput{0}(4,1){{\bf 4}}
\rput{0}(4,2){{\bf 1}}
\rput{0}(4,3){{\bf 3}}
\rput{0}(4,4){{\bf 1}}
\rput{0}(4,5){{\bf 3}}
\rput{0}(5,0){{\bf 1}}
\rput{0}(5,1){{\bf 3}}
\rput{0}(5,2){{\bf 2}}
\rput{0}(5,3){{\bf 4}}
\rput{0}(5,4){{\bf 2}}
\rput{0}(5,5){{\bf 4}}
\endpspicture
\qquad
&
\qquad
\psset{xunit=22pt}
\psset{yunit=22pt}
\pspicture(-0.5,-0.5)(5.5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,0)(5.5,0)
\psline[linewidth=2pt,linecolor=blue](-0.5,1)(5.5,1)
\psline[linewidth=2pt,linecolor=blue](-0.5,2)(5.5,2)
\psline[linewidth=2pt,linecolor=blue](-0.5,3)(5.5,3)
\psline[linewidth=2pt,linecolor=blue](-0.5,4)(5.5,4)
\psline[linewidth=2pt,linecolor=blue](-0.5,5)(5.5,5)
\psline[linewidth=2pt,linecolor=blue](0,-0.5)(0,5.5)
\psline[linewidth=2pt,linecolor=blue](1,-0.5)(1,5.5)
\psline[linewidth=2pt,linecolor=blue](2,-0.5)(2,5.5)
\psline[linewidth=2pt,linecolor=blue](3,-0.5)(3,5.5)
\psline[linewidth=2pt,linecolor=blue](4,-0.5)(4,5.5)
\psline[linewidth=2pt,linecolor=blue](5,-0.5)(5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,-0.5)(5.5,5.5)
\psline[linewidth=2pt,linecolor=blue](0.5,-0.5)(5.5,4.5)
\psline[linewidth=2pt,linecolor=blue](1.5,-0.5)(5.5,3.5)
\psline[linewidth=2pt,linecolor=blue](2.5,-0.5)(5.5,2.5)
\psline[linewidth=2pt,linecolor=blue](3.5,-0.5)(5.5,1.5)
\psline[linewidth=2pt,linecolor=blue](4.5,-0.5)(5.5,0.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,0.5)(4.5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,1.5)(3.5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,2.5)(2.5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,3.5)(1.5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,4.5)(0.5,5.5)
\multirput{0}(0,0)(0,1){6}{%
\multirput{0}(0,0)(1,0){6}{%
\pscircle*[linecolor=white]{8pt}
\pscircle[linewidth=1pt,linecolor=black] {8pt}
}
}
\rput{0}(0,0){{\bf 2}}
\rput{0}(0,1){{\bf 3}}
\rput{0}(0,2){{\bf 1}}
\rput{0}(0,3){{\bf 4}}
\rput{0}(0,4){{\bf 1}}
\rput{0}(0,5){{\bf 3}}
\rput{0}(1,0){{\bf 1}}
\rput{0}(1,1){{\bf 4}}
\rput{0}(1,2){{\bf 2}}
\rput{0}(1,3){{\bf 3}}
\rput{0}(1,4){{\bf 2}}
\rput{0}(1,5){{\bf 4}}
\rput{0}(2,0){{\bf 2}}
\rput{0}(2,1){{\bf 3}}
\rput{0}(2,2){{\bf 1}}
\rput{0}(2,3){{\bf 4}}
\rput{0}(2,4){{\bf 1}}
\rput{0}(2,5){{\bf 3}}
\rput{0}(3,0){{\bf 1}}
\rput{0}(3,1){{\bf 4}}
\rput{0}(3,2){{\bf 2}}
\rput{0}(3,3){{\bf 3}}
\rput{0}(3,4){{\bf 2}}
\rput{0}(3,5){{\bf 4}}
\rput{0}(4,0){{\bf 2}}
\rput{0}(4,1){{\bf 3}}
\rput{0}(4,2){{\bf 1}}
\rput{0}(4,3){{\bf 4}}
\rput{0}(4,4){{\bf 1}}
\rput{0}(4,5){{\bf 3}}
\rput{0}(5,0){{\bf 1}}
\rput{0}(5,1){{\bf 4}}
\rput{0}(5,2){{\bf 2}}
\rput{0}(5,3){{\bf 3}}
\rput{0}(5,4){{\bf 2}}
\rput{0}(5,5){{\bf 4}}
\endpspicture
\\[2mm]
(c) & \phantom{(a)} (d) \\[5mm]
\end{tabular}
\caption{\label{figure_tri_L=6}
Four-colorings of the triangulation $T(6,6)$.
(a) Coloring $c_\text{ns}$ \protect\reff{def_coloring_ns}.
(b) Coloring $f_b$ obtained from $c_\text{ns}$ by swapping colors $1,2$
on the bottom row.
(c) Coloring $f_c$ obtained from $f_b$ by swapping colors $3,4$ on the
second row from the bottom.
(d) Coloring $f_d$ obtained from $f_b$ by swapping colors $3,4$ on the
fourth row from the bottom.
The coloring $c_\text{ns}$ in (a) has $|\deg(c_\text{ns})|=18$;
the colorings $f_i$ in (b)--(d) have $|\deg(f_i)|=6$.
}
\end{figure}
\noindent
{\bf Remark.} The class $\mathcal{C}^{(0)}_4$ is vastly larger
than $\mathcal{C}^{(1)}_4$: more precisely,
$|\mathcal{C}^{(1)}_4|/|\mathcal{C}^{(0)}_4|\approx 1.5\times 10^{-4}$.
\medskip
Let us now state a simple lemma, which is the key ingredient in the proofs
of the theorems that follow.
\begin{lemma} \label{lemma.tech}
{\rm (a)} If there is a four-coloring $f$ of the triangulation $T(r,s)$
with $\deg(f)\equiv 2 \pmod{4}$, then there exists a four-coloring $g$ of
$T(3r,3s)$ with $\deg(g)\equiv 6 \pmod{12}$.
{\rm (b)} If there is a four-coloring $f$ of $T(3r,s)$ or
$T(r,3s)$ with $\deg(f)\equiv 2 \pmod{4}$, then there exists a four-coloring
$g$ of $T(3r,3s)$ with $\deg(g)\equiv 6 \pmod{12}$.
{\rm (c)} If there is a four-coloring $f$ of the triangulation $T(3r,3s)$
with $\deg(f)\equiv 6\pmod{12}$, then for any odd integers $p,q$,
there exists a four-coloring $g$ of the triangulation $T(3rp,3sq)$ with
$\deg(g)\equiv 6 \pmod{12}$.
\end{lemma}
\par\medskip\noindent{\sc Proof.\ }
(a) If $f$ is a four-coloring of $T(r,s)$, then we can
obtain a four-coloring $g$ of $T(3r,3s)$ by extending $f$
periodically three times in each direction. If $\deg(f)=2 + 4k$,
with $k\in{\mathbb Z}$, then
\begin{equation}
\deg(g) \;=\; 9 \deg(f) \;=\; 18 + 36 k \;\equiv\; 6 \pmod{12} \,.\nonumber
\end{equation}
(b) The same arguments as in (a) apply here; the only difference is that
the coloring of $T(3r,3s)$ is obtained from the coloring in $T(3r,s)$
(resp.\ $T(r,3s)$) by periodically extending the latter three times in
the vertical (resp.\ horizontal) direction. If $\deg(f)=2 + 4k$, then
the degree of the periodically extended coloring $g$ is
\begin{equation}
\deg(g) \;=\; 3 \deg(f) \;=\; 6 + 12 k \;\equiv\; 6 \pmod{12} \,.\nonumber
\end{equation}
(c) If $f$ is a four-coloring of $T(3r,3s)$ with $\deg(f)\equiv 6 \pmod{12}$,
then we can obtain a four-coloring $g$ of $T(3rp,3sq)$ by extending
$f$ periodically $p$ times in the horizontal direction and $q$ times in the
vertical direction. If $\deg(f)=6+12k$ with $k\in{\mathbb Z}$, the degree of $g$ is
\begin{equation}
\deg(g) \;=\; p q \deg(f) \;=\; 6pq + 12pqk \;\equiv\; 6 \pmod{12} \,\nonumber
\end{equation}
if both $p$ and $q$ are odd integers. \hbox{\hskip 6pt\vrule width6pt height7pt depth1pt \hskip1pt}\bigskip
\subsection{Main results for $\bm{T(3L,3L)}$}
Our main results for triangulations of the type $T(3L,3L)$ can be
summarized as follows:
\begin{theorem} \label{theo.main}
For any triangulation $T(3L,3L)$ with $L\geq 2$ there exists a four-coloring
$f$ with $\deg(f)\equiv 6 \pmod{12}$. Hence, $\Kc(T(3L,3L),4)> 1$.
In other words, the WSK dynamics for four-colorings on $T(3L,3L)$
is non-ergodic.
\end{theorem}
\par\medskip\noindent{\sc Proof.\ }
The rest of this section is devoted to the proof of Theorem~\ref{theo.main}.
We will show that $T(3L,3L)$ admits a four-coloring $f$ with
$\deg(f)\equiv 6 \pmod{12}$. Then, Corollary~\ref{main.corollary} implies that
$\Kc(T(3L,3L),4)> 1$ for any $L\geq 2$. The construction of $f$ will depend
on the value of $L$ modulo 4, and we will split the proof into four cases,
$L=4k-2, 4k-1, 4k$, or $L=4k+1$, with $k\in{\mathbb N}$.
The basic strategy for all
these proofs is to explicitly construct the four-coloring with the
desired degree. With this aim, it is useful to fix orientations
of both triangulations $T(3L,3L)$ and $\partial\Delta^3$ in order to
compute the degree of a given four-coloring (without ambiguity).
We orient $T(3L,3L)$ and $\partial\Delta^3$ in such a way that
the boundaries of all triangular faces are always followed clockwise.
The contribution of a triangular face $t$ of $T(3L,3L)$
to the degree is $+1$ (resp.\ $-1$) if the coloring reads $123$ (resp.\ $132$)
as we move clockwise around the boundary of $t$. In our figures,
those faces with orientation preserved (resp.\ reversed) by $f$ are
depicted in light (resp.\ dark) gray.
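This sign convention is mechanical to evaluate. The following Python
sketch is an illustration only: it assumes the square-grid embedding of
$T(M,M)$ on the torus with north-east diagonals and $0$-based coordinates
(the text uses $1$-based ones), and its output is fixed only up to the
global sign set by the orientation conventions above. As a by-product, it
permits a numerical check of the periodic-extension argument used in the
proof of Lemma~\ref{lemma.tech}.
\begin{verbatim}
# Illustrative sketch (assumptions: T(M,M) on the torus, each unit cell
# split by its north-east diagonal, 0-based coordinates).  As in the
# text, only faces colored 123 are counted, with sign +1 (resp. -1)
# when the clockwise reading is 123 (resp. 132).

def faces(M):
    """Clockwise vertex triples of the 2*M*M triangular faces of T(M,M)."""
    for x in range(M):
        for y in range(M):
            x1, y1 = (x + 1) % M, (y + 1) % M
            yield ((x, y), (x1, y1), (x1, y))   # lower triangle
            yield ((x, y), (x, y1), (x1, y1))   # upper triangle

PLUS  = {(1, 2, 3), (2, 3, 1), (3, 1, 2)}      # clockwise readings of 123
MINUS = {(1, 3, 2), (3, 2, 1), (2, 1, 3)}      # clockwise readings of 132

def degree(f):
    """Degree of a four-coloring f, given as f[x][y] in {1,2,3,4}."""
    cols = lambda t: tuple(f[x][y] for (x, y) in t)
    return sum((cols(t) in PLUS) - (cols(t) in MINUS)
               for t in faces(len(f)))

def tile(f, p, q):
    """Extend f periodically: p times horizontally, q times vertically."""
    M = len(f)
    return [[f[x % M][y % M] for y in range(q * M)] for x in range(p * M)]

# Lemma (a) numerically: degree(tile(f, 3, 3)) == 9 * degree(f).
\end{verbatim}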
The easiest case is when $L=4k-2$. In this case, $T(3L,3L)$ admits the
non-singular 4-coloring, whose degree is congruent to 6 modulo 12 by
Proposition~\ref{prop.non-singular}.
The other cases require a more elaborate
construction. The common strategy is to devise an algorithm that produces
the desired four-coloring; the main ingredient is the use of the
counter-diagonals of the triangulations: these counter-diagonals are
orthogonal to the inclined edges of the triangulation when embedded
in a square grid. They will be denoted as D$j$ with $1\leq j \leq 3L$.
In Figure~\ref{figure_tri_notation} we show the triangulation $T(6,6)$, and
its six counter-diagonals D$j$. As we have embedded the triangulation into
a square grid, we will use Cartesian coordinates $(x,y)$, $1\leq x,y\leq 3L$,
for labelling the vertices.
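In this labelling the counter-diagonals admit a simple arithmetic
description. The short Python sketch below is again an illustration only;
the indexing is read off from Figure~\ref{figure_tri_notation} and should
be understood as an assumption rather than as part of the proofs.
\begin{verbatim}
# Illustrative sketch (assumption: with the 1-based coordinates of the
# notation figure, the counter-diagonal Dj of T(M,M) collects the M
# vertices whose coordinate sum is congruent to j modulo M).

def counter_diagonal(j, M):
    return [(x, y) for x in range(1, M + 1)
                   for y in range(1, M + 1) if (x + y - j) % M == 0]

# Example: counter_diagonal(1, 6) yields
#   [(1, 6), (2, 5), (3, 4), (4, 3), (5, 2), (6, 1)],
# i.e. the six vertices of D1 in T(6,6).
\end{verbatim}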
\begin{figure}[htb]
\centering
\psset{xunit=22pt}
\psset{yunit=22pt}
\pspicture(-1,-1)(6,6)
\psline[linewidth=2pt,linecolor=blue](-0.5,0)(5.5,0)
\psline[linewidth=2pt,linecolor=blue](-0.5,1)(5.5,1)
\psline[linewidth=2pt,linecolor=blue](-0.5,2)(5.5,2)
\psline[linewidth=2pt,linecolor=blue](-0.5,3)(5.5,3)
\psline[linewidth=2pt,linecolor=blue](-0.5,4)(5.5,4)
\psline[linewidth=2pt,linecolor=blue](-0.5,5)(5.5,5)
\psline[linewidth=2pt,linecolor=blue](0,-0.5)(0,5.5)
\psline[linewidth=2pt,linecolor=blue](1,-0.5)(1,5.5)
\psline[linewidth=2pt,linecolor=blue](2,-0.5)(2,5.5)
\psline[linewidth=2pt,linecolor=blue](3,-0.5)(3,5.5)
\psline[linewidth=2pt,linecolor=blue](4,-0.5)(4,5.5)
\psline[linewidth=2pt,linecolor=blue](5,-0.5)(5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,-0.5)(5.5,5.5)
\psline[linewidth=2pt,linecolor=blue](0.5,-0.5)(5.5,4.5)
\psline[linewidth=2pt,linecolor=blue](1.5,-0.5)(5.5,3.5)
\psline[linewidth=2pt,linecolor=blue](2.5,-0.5)(5.5,2.5)
\psline[linewidth=2pt,linecolor=blue](3.5,-0.5)(5.5,1.5)
\psline[linewidth=2pt,linecolor=blue](4.5,-0.5)(5.5,0.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,0.5)(4.5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,1.5)(3.5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,2.5)(2.5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,3.5)(1.5,5.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,4.5)(0.5,5.5)
\multirput{0}(0,0)(0,1){6}{%
\multirput{0}(0,0)(1,0){6}{%
\pscircle*[linecolor=white]{6pt}
\pscircle[linewidth=1pt,linecolor=black] {6pt}
}
}
\multirput{0}(5.5,-0.5)(0,1){6}{%
\psline[linewidth=2pt,linecolor=black]{->}(0.2,-0.2)(-0.1,0.1)
}
\uput[0](5.7,-0.7){D1}
\uput[0](5.7, 0.3){D2}
\uput[0](5.7, 1.3){D3}
\uput[0](5.7, 2.3){D4}
\uput[0](5.7, 3.3){D5}
\uput[0](5.7, 4.3){D6}
\multirput{0}(5.5,-0.5)(-1,0){6}{%
\psline[linewidth=2pt,linecolor=black]{->}(0.2,-0.2)(-0.1,0.1)
}
\uput[270](4.7,-0.7){D6}
\uput[270](3.7,-0.7){D5}
\uput[270](2.7,-0.7){D4}
\uput[270](1.7,-0.7){D3}
\uput[270](0.7,-0.7){D2}
\uput[90](2.5,6){$x$}
\uput[90](0,5.5){\small 1}
\uput[90](1,5.5){\small 2}
\uput[90](2,5.5){\small 3}
\uput[90](3,5.5){\small 4}
\uput[90](4,5.5){\small 5}
\uput[90](5,5.5){\small 6}
\uput[180](-0.5,0){\small 1}
\uput[180](-0.5,1){\small 2}
\uput[180](-0.5,2){\small 3}
\uput[180](-0.5,3){\small 4}
\uput[180](-0.5,4){\small 5}
\uput[180](-0.5,5){\small 6}
\uput[180](-1,2.5){$y$}
\endpspicture
\caption{\label{figure_tri_notation}
Notation used in the proof of Theorem~\protect\ref{theo.main}.
Given a triangulation $T(M,M)$ (here we depict the case $M=6$),
we label each vertex using Cartesian coordinates $(x,y)$ [$1\leq x,y\leq M$].
The arrows (pointing north-west) show the counter-diagonals D$j$
with $j=1,\ldots,M$.
}
\end{figure}
We will describe an algorithm that provides the
desired coloring $f$. It is useful to monitor the degree of the coloring as
we construct it. In particular, at a given step of the algorithm, the
four-coloring $f$ will be defined on some region $R$ of $T=T(3L,3L)$ (i.e.,
the union of all properly colored triangular faces of $T$). By the degree
of $f$ at this stage we mean the contribution to the degree of $f$ coming
from the triangles belonging to $R$: $\deg(f|_R)$.
As before, we count only those triangular faces of $T$ colored $123$.
Notice that at the end of the algorithm, when $R=T$, this partial degree
will coincide with the standard one, $\deg(f)=\deg(f|_T)$.
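The partial degree can be monitored mechanically in the same way. The
sketch below reuses \texttt{faces}, \texttt{PLUS} and \texttt{MINUS}
from the earlier illustration; uncolored vertices are marked
\texttt{None}, so that a face contributes only once all three of its
vertices have been colored.
\begin{verbatim}
# Illustrative sketch of the partial degree deg(f|_R): a triangular
# face contributes only when fully colored, i.e. when it belongs to
# the properly colored region R.

def partial_degree(f):
    deg = 0
    for t in faces(len(f)):
        cols = tuple(f[x][y] for (x, y) in t)
        if None in cols:
            continue            # face not yet in the colored region R
        deg += (cols in PLUS) - (cols in MINUS)
    return deg

# When the algorithm terminates, no None entries remain, R = T, and
# partial_degree(f) == degree(f), as stated in the text.
\end{verbatim}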
\medskip
\proofofcase{2}{$L=4k-1$}
Let us consider the triangulation $T=T(12k-3,12k-3)$ with $k\in{\mathbb N}$ (the
case $k=1$ will illustrate our ideas in Figures
\ref{prop.12k-3.fig1}--\ref{prop.12k-3.fig3-4}).
Our goal is to obtain a four-coloring
$f$ of $T$ with degree $\deg(f)\equiv 6 \pmod{12}$. The algorithm to
obtain such a coloring consists of four steps:
\begin{figure}[htb]
\centering
\psset{xunit=21pt}
\psset{yunit=21pt}
\psset{labelsep=5pt}
\pspicture(-1,-1)(9,9)
\psline*[linecolor=lightgray](1,5)(1,6)(3,6)(3,7)(1,5)
\psline*[linecolor=lightgray](6,0)(7,0)(7,2)(8,2)(6,0)
\psline*[linecolor=lightgray](0,6)(0,7)(1,7)(0,6)
\psline*[linecolor=lightgray](0,8)(-0.5,8)(-0.5,7.5)(0,8)
\psline*[linecolor=lightgray](8,8)(8.5,8)(8.5,7.5)(8,7)(8,8)
\psline*[linecolor=lightgray](3,5)(4,5)(4,6)(3,5)
\psline*[linecolor=lightgray](4,4)(6,4)(5,3)(5,5)(4,4)
\psline*[linecolor=lightgray](6,2)(6,3)(7,3)(6,2)
\psline*[linecolor=lightgray](7,8)(8,8)(8,8.5)(7.5,8.5)(7,8)
\psline*[linecolor=lightgray](7,0)(8,0)(7.5,-0.5)(7,-0.5)(7,0)
\psline*[linecolor=darkgray](8,7)(8.5,7)(8.5,7.5)(8,7)
\psline*[linecolor=darkgray](0,7)(0,8)(-0.5,7.5)(-0.5,7)(0,7)
\psline*[linecolor=darkgray](0,6)(1,6)(1,7)(0,6)
\psline*[linecolor=darkgray](3,5)(3,6)(4,6)(3,5)
\psline*[linecolor=darkgray](4,4)(4,5)(5,5)(4,4)
\psline*[linecolor=darkgray](5,3)(6,3)(6,4)(5,3)
\psline*[linecolor=darkgray](6,2)(7,2)(7,3)(6,2)
\psline*[linecolor=darkgray](7,8)(7,8.5)(7.5,8.5)(7,8)
\psline*[linecolor=darkgray](8,0)(8,-0.5)(7.5,-0.5)(8,0)
\psline[linewidth=2pt,linecolor=blue](-0.5,0)(8.5,0)
\psline[linewidth=2pt,linecolor=blue](-0.5,1)(8.5,1)
\psline[linewidth=2pt,linecolor=blue](-0.5,2)(8.5,2)
\psline[linewidth=2pt,linecolor=blue](-0.5,3)(8.5,3)
\psline[linewidth=2pt,linecolor=blue](-0.5,4)(8.5,4)
\psline[linewidth=2pt,linecolor=blue](-0.5,5)(8.5,5)
\psline[linewidth=2pt,linecolor=blue](-0.5,6)(8.5,6)
\psline[linewidth=2pt,linecolor=blue](-0.5,7)(8.5,7)
\psline[linewidth=2pt,linecolor=blue](-0.5,8)(8.5,8)
\psline[linewidth=2pt,linecolor=blue](0,-0.5)(0,8.5)
\psline[linewidth=2pt,linecolor=blue](1,-0.5)(1,8.5)
\psline[linewidth=2pt,linecolor=blue](2,-0.5)(2,8.5)
\psline[linewidth=2pt,linecolor=blue](3,-0.5)(3,8.5)
\psline[linewidth=2pt,linecolor=blue](4,-0.5)(4,8.5)
\psline[linewidth=2pt,linecolor=blue](5,-0.5)(5,8.5)
\psline[linewidth=2pt,linecolor=blue](6,-0.5)(6,8.5)
\psline[linewidth=2pt,linecolor=blue](7,-0.5)(7,8.5)
\psline[linewidth=2pt,linecolor=blue](8,-0.5)(8,8.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,-0.5)(8.5,8.5)
\psline[linewidth=2pt,linecolor=blue](0.5,-0.5)(8.5,7.5)
\psline[linewidth=2pt,linecolor=blue](1.5,-0.5)(8.5,6.5)
\psline[linewidth=2pt,linecolor=blue](2.5,-0.5)(8.5,5.5)
\psline[linewidth=2pt,linecolor=blue](3.5,-0.5)(8.5,4.5)
\psline[linewidth=2pt,linecolor=blue](4.5,-0.5)(8.5,3.5)
\psline[linewidth=2pt,linecolor=blue](5.5,-0.5)(8.5,2.5)
\psline[linewidth=2pt,linecolor=blue](6.5,-0.5)(8.5,1.5)
\psline[linewidth=2pt,linecolor=blue](7.5,-0.5)(8.5,0.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,0.5)(7.5,8.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,1.5)(6.5,8.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,2.5)(5.5,8.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,3.5)(4.5,8.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,4.5)(3.5,8.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,5.5)(2.5,8.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,6.5)(1.5,8.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,7.5)(0.5,8.5)
\multirput{0}(0,0)(0,1){9}{%
\multirput{0}(0,0)(1,0){9}{%
\pscircle*[linecolor=white]{8pt}
\pscircle[linewidth=1pt,linecolor=black] {8pt}
}
}
\rput{0}(0,0){{\bf 4}}
\rput{0}(0,1){{\bf 1}}
\rput{0}(0,2){{\bf }}
\rput{0}(0,3){{\bf }}
\rput{0}(0,4){{\bf }}
\rput{0}(0,5){{\bf }}
\rput{0}(0,6){{\bf 2}}
\rput{0}(0,7){{\bf 3}}
\rput{0}(0,8){{\bf 1}}
\rput{0}(1,0){{\bf 2}}
\rput{0}(1,1){{\bf }}
\rput{0}(1,2){{\bf }}
\rput{0}(1,3){{\bf }}
\rput{0}(1,4){{\bf }}
\rput{0}(1,5){{\bf 2}}
\rput{0}(1,6){{\bf 3}}
\rput{0}(1,7){{\bf 1}}
\rput{0}(1,8){{\bf 4}}
\rput{0}(2,0){{\bf }}
\rput{0}(2,1){{\bf }}
\rput{0}(2,2){{\bf }}
\rput{0}(2,3){{\bf }}
\rput{0}(2,4){{\bf 2}}
\rput{0}(2,5){{\bf 4}}
\rput{0}(2,6){{\bf 1}}
\rput{0}(2,7){{\bf 4}}
\rput{0}(2,8){{\bf 2}}
\rput{0}(3,0){{\bf }}
\rput{0}(3,1){{\bf }}
\rput{0}(3,2){{\bf }}
\rput{0}(3,3){{\bf 2}}
\rput{0}(3,4){{\bf 4}}
\rput{0}(3,5){{\bf 1}}
\rput{0}(3,6){{\bf 3}}
\rput{0}(3,7){{\bf 2}}
\rput{0}(3,8){{\bf }}
\rput{0}(4,0){{\bf }}
\rput{0}(4,1){{\bf }}
\rput{0}(4,2){{\bf 1}}
\rput{0}(4,3){{\bf 4}}
\rput{0}(4,4){{\bf 1}}
\rput{0}(4,5){{\bf 3}}
\rput{0}(4,6){{\bf 2}}
\rput{0}(4,7){{\bf }}
\rput{0}(4,8){{\bf }}
\rput{0}(5,0){{\bf }}
\rput{0}(5,1){{\bf 1}}
\rput{0}(5,2){{\bf 4}}
\rput{0}(5,3){{\bf 2}}
\rput{0}(5,4){{\bf 3}}
\rput{0}(5,5){{\bf 2}}
\rput{0}(5,6){{\bf }}
\rput{0}(5,7){{\bf }}
\rput{0}(5,8){{\bf }}
\rput{0}(6,0){{\bf 1}}
\rput{0}(6,1){{\bf 4}}
\rput{0}(6,2){{\bf 2}}
\rput{0}(6,3){{\bf 3}}
\rput{0}(6,4){{\bf 1}}
\rput{0}(6,5){{\bf }}
\rput{0}(6,6){{\bf }}
\rput{0}(6,7){{\bf }}
\rput{0}(6,8){{\bf }}
\rput{0}(7,0){{\bf 3}}
\rput{0}(7,1){{\bf 2}}
\rput{0}(7,2){{\bf 3}}
\rput{0}(7,3){{\bf 1}}
\rput{0}(7,4){{\bf }}
\rput{0}(7,5){{\bf }}
\rput{0}(7,6){{\bf }}
\rput{0}(7,7){{\bf }}
\rput{0}(7,8){{\bf 1}}
\rput{0}(8,0){{\bf 2}}
\rput{0}(8,1){{\bf 4}}
\rput{0}(8,2){{\bf 1}}
\rput{0}(8,3){{\bf }}
\rput{0}(8,4){{\bf }}
\rput{0}(8,5){{\bf }}
\rput{0}(8,6){{\bf }}
\rput{0}(8,7){{\bf 2}}
\rput{0}(8,8){{\bf 3}}
\multirput{0}(8.5,-0.5)(0,1){9}{%
\psline[linewidth=2pt,linecolor=black]{->}(0.2,-0.2)(-0.1,0.1)
}
\uput[0](8.7,-0.7){D1}
\uput[0](8.7, 0.3){D2}
\uput[0](8.7, 1.3){D3}
\uput[0](8.7, 2.3){D4}
\uput[0](8.7, 3.3){D5}
\uput[0](8.7, 4.3){D6}
\uput[0](8.7, 5.3){D7}
\uput[0](8.7, 6.3){D8}
\uput[0](8.7, 7.3){D9}
\endpspicture
\caption{ \label{prop.12k-3.fig1}
The 4-coloring of $T(9,9)$ after Step~1 in the proof of the case $L=4k-1$.}
\end{figure}
\noindent
{\bf Step 1.}
We start by coloring the counter-diagonal D1: we color $1$ the vertices with
$x$-coordinates $1\leq x \leq 6k-1$; the other $6k-2$ vertices are colored $2$.
On D2, we color $3$ those $6k-1$ vertices with $x$-coordinates
$3k+1\leq x \leq 9k-1$. The other vertices on D2 are colored $4$. The
vertices on D$(12k-3)$ are colored $3$ or $4$ in such a
way that the resulting coloring is proper (for each vertex, there is a
unique choice).
On D3 and D$(12k-4)$, we color all vertices $1$ or $2$ (there is a
unique choice for each vertex). The resulting coloring is depicted in
Figure~\ref{prop.12k-3.fig1}. The partial degree of $f$ is $\deg f|_R = 4$.
\medskip
\noindent
{\bf Step 2.}
For $k>1$, we find that there are $12k-8$ counter-diagonals to be colored
and we need to sequentially color all of them but four. This can be
achieved by performing the following procedure: suppose that we
have already colored counter-diagonals D$j$ and D$(12k-j-1)$ ($j\geq 3$)
using colors $1$ and $2$. Then, we color D$(j+1)$ and D$(12k-j-2)$ using
colors $3$ and $4$, and D$(j+2)$ and D$(12k-j-3)$ using colors $1$ and $2$.
As in Step~1, for each vertex there is a unique choice.
This procedure is repeated $3(k-1)$ times, so we add $12(k-1)$
counter-diagonals, and there are only four counter-diagonals not yet
colored. Indeed, the last colored counter-diagonals, D$(6k-3)$ and
D$(6k+2)$, use colors $1$ and $2$, just as at the end of Step~1.
Each of these $3(k-1)$ steps adds $4$ to the degree of the coloring.
Thus, the partial degree of $f$ is $\deg f|_R = 4 + 12(k-1)$.
\medskip
\noindent
{\bf Step 3.}
There remain only four counter-diagonals to be colored: D$(6k-2)$, D$(6k-1)$,
D$(6k)$, and D$(6k+1)$.
On D$(6k-2)$, the vertices $(3k-1,3k-1)$ and $(9k-2,9k-3)$ only admit
a single color (which is $3$ for one of them, and $4$ for the other one).
The rest of the vertices on D$(6k-2)$ are colored $1$ and $2$ (again, there
is a unique choice for each vertex).
We now color $3$ or $4$ all the vertices on D$(6k+1)$ (the choice is again
unique for each vertex). The resulting coloring is depicted in
Figure~\ref{prop.12k-3.fig3-4}(a).
The contribution of the new triangles to the partial degree is zero; the
partial degree of $f$ is given by $\deg f|_R = 4 + 12(k-1)$.
\medskip
\noindent
{\bf Step 4.}
On D$(6k-1)$, there are two pairs of nearby vertices which only admit a
single color (which is $3$ for one pair, and $4$ for the other one).
These vertices are located at $(3k-1,3k)$, $(3k,3k-1)$, $(9k-1,9k-3)$, and
$(9k-2,9k-2)$. The other vertices on D$(6k-1)$ can be colored $3$ or $4$
(with only one choice for each of them). The increment of the degree after
coloring these vertices is $-2$, thus $\deg f|_R = 2 + 12(k-1)$.
Finally, all vertices on D$(6k)$ are colored $1$ and $2$; and again
the choice is unique for each vertex. The final coloring is depicted in
Figure~\ref{prop.12k-3.fig3-4}(b). The increment in the degree is $4$,
and therefore, the degree of the four-coloring $f$ is
\begin{equation}
\deg f \;=\; 6 + 12(k-1) \;\equiv\; 6 \pmod{12}
\end{equation}
This coloring $f$ of $T(12k-3,12k-3)$ satisfies the two needed properties:
it is a proper coloring and its degree is congruent to six modulo $12$.
\proofofcase{3}{$L=4k$}
Let us consider the triangulation $T=T(12k,12k)$ with $k\in{\mathbb N}$ (we will
illustrate the main steps with the case $k=1$). Our algorithm consists of
five steps:
\medskip
\noindent
{\bf Step 1.}
On the counter-diagonal D1 we color $1$ the $6k$ consecutive vertices with
$x$-coordinates $1\leq x \leq 6k$. The other $6k$ vertices on D1 are colored
$2$.
\clearpage
\begin{figure}[htb]
\centering
\begin{tabular}{c}
\psset{xunit=21pt}
\psset{yunit=21pt}
\psset{labelsep=5pt}
\pspicture(-1,-1)(9,9)
\psline*[linecolor=lightgray](1,5)(1,6)(3,6)(3,7)(1,5)
\psline*[linecolor=lightgray](6,0)(7,0)(7,2)(8,2)(6,0)
\psline*[linecolor=lightgray](0,6)(0,7)(1,7)(0,6)
\psline*[linecolor=lightgray](0,8)(-0.5,8)(-0.5,7.5)(0,8)
\psline*[linecolor=lightgray](8,8)(8.5,8)(8.5,7.5)(8,7)(8,8)
\psline*[linecolor=lightgray](3,5)(4,5)(4,6)(3,5)
\psline*[linecolor=lightgray](4,4)(6,4)(5,3)(5,5)(4,4)
\psline*[linecolor=lightgray](6,2)(6,3)(7,3)(6,2)
\psline*[linecolor=lightgray](7,8)(8,8)(8,8.5)(7.5,8.5)(7,8)
\psline*[linecolor=lightgray](7,0)(8,0)(7.5,-0.5)(7,-0.5)(7,0)
\psline*[linecolor=lightgray](3,6)(4,6)(4,7)(3,6)
\psline*[linecolor=lightgray](4,5)(5,5)(5,6)(4,5)
\psline*[linecolor=lightgray](6,3)(6,4)(7,4)(6,3)
\psline*[linecolor=lightgray](7,2)(7,3)(8,3)(7,2)
\psline*[linecolor=darkgray](8,7)(8.5,7)(8.5,7.5)(8,7)
\psline*[linecolor=darkgray](0,7)(0,8)(-0.5,7.5)(-0.5,7)(0,7)
\psline*[linecolor=darkgray](0,6)(1,6)(1,7)(0,6)
\psline*[linecolor=darkgray](3,5)(3,6)(4,6)(3,5)
\psline*[linecolor=darkgray](4,4)(4,5)(5,5)(4,4)
\psline*[linecolor=darkgray](5,3)(6,3)(6,4)(5,3)
\psline*[linecolor=darkgray](6,2)(7,2)(7,3)(6,2)
\psline*[linecolor=darkgray](7,8)(7,8.5)(7.5,8.5)(7,8)
\psline*[linecolor=darkgray](8,0)(8,-0.5)(7.5,-0.5)(8,0)
\psline*[linecolor=darkgray](3,6)(3,7)(4,7)(3,6)
\psline*[linecolor=darkgray](4,5)(4,6)(5,6)(4,5)
\psline*[linecolor=darkgray](6,3)(7,3)(7,4)(6,3)
\psline*[linecolor=darkgray](7,2)(8,2)(8,3)(7,2)
\psline[linewidth=2pt,linecolor=blue](-0.5,0)(8.5,0)
\psline[linewidth=2pt,linecolor=blue](-0.5,1)(8.5,1)
\psline[linewidth=2pt,linecolor=blue](-0.5,2)(8.5,2)
\psline[linewidth=2pt,linecolor=blue](-0.5,3)(8.5,3)
\psline[linewidth=2pt,linecolor=blue](-0.5,4)(8.5,4)
\psline[linewidth=2pt,linecolor=blue](-0.5,5)(8.5,5)
\psline[linewidth=2pt,linecolor=blue](-0.5,6)(8.5,6)
\psline[linewidth=2pt,linecolor=blue](-0.5,7)(8.5,7)
\psline[linewidth=2pt,linecolor=blue](-0.5,8)(8.5,8)
\psline[linewidth=2pt,linecolor=blue](0,-0.5)(0,8.5)
\psline[linewidth=2pt,linecolor=blue](1,-0.5)(1,8.5)
\psline[linewidth=2pt,linecolor=blue](2,-0.5)(2,8.5)
\psline[linewidth=2pt,linecolor=blue](3,-0.5)(3,8.5)
\psline[linewidth=2pt,linecolor=blue](4,-0.5)(4,8.5)
\psline[linewidth=2pt,linecolor=blue](5,-0.5)(5,8.5)
\psline[linewidth=2pt,linecolor=blue](6,-0.5)(6,8.5)
\psline[linewidth=2pt,linecolor=blue](7,-0.5)(7,8.5)
\psline[linewidth=2pt,linecolor=blue](8,-0.5)(8,8.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,-0.5)(8.5,8.5)
\psline[linewidth=2pt,linecolor=blue](0.5,-0.5)(8.5,7.5)
\psline[linewidth=2pt,linecolor=blue](1.5,-0.5)(8.5,6.5)
\psline[linewidth=2pt,linecolor=blue](2.5,-0.5)(8.5,5.5)
\psline[linewidth=2pt,linecolor=blue](3.5,-0.5)(8.5,4.5)
\psline[linewidth=2pt,linecolor=blue](4.5,-0.5)(8.5,3.5)
\psline[linewidth=2pt,linecolor=blue](5.5,-0.5)(8.5,2.5)
\psline[linewidth=2pt,linecolor=blue](6.5,-0.5)(8.5,1.5)
\psline[linewidth=2pt,linecolor=blue](7.5,-0.5)(8.5,0.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,0.5)(7.5,8.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,1.5)(6.5,8.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,2.5)(5.5,8.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,3.5)(4.5,8.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,4.5)(3.5,8.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,5.5)(2.5,8.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,6.5)(1.5,8.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,7.5)(0.5,8.5)
\multirput{0}(0,0)(0,1){9}{%
\multirput{0}(0,0)(1,0){9}{%
\pscircle*[linecolor=white]{8pt}
\pscircle[linewidth=1pt,linecolor=black] {8pt}
}
}
\rput{0}(0,0){{\bf 4}}
\rput{0}(0,1){{\bf 1}}
\rput{0}(0,2){{\bf 2}}
\rput{0}(0,3){{\bf }}
\rput{0}(0,4){{\bf }}
\rput{0}(0,5){{\bf 4}}
\rput{0}(0,6){{\bf 2}}
\rput{0}(0,7){{\bf 3}}
\rput{0}(0,8){{\bf 1}}
\rput{0}(1,0){{\bf 2}}
\rput{0}(1,1){{\bf 3}}
\rput{0}(1,2){{\bf }}
\rput{0}(1,3){{\bf }}
\rput{0}(1,4){{\bf 3}}
\rput{0}(1,5){{\bf 2}}
\rput{0}(1,6){{\bf 3}}
\rput{0}(1,7){{\bf 1}}
\rput{0}(1,8){{\bf 4}}
\rput{0}(2,0){{\bf 1}}
\rput{0}(2,1){{\bf }}
\rput{0}(2,2){{\bf }}
\rput{0}(2,3){{\bf 3}}
\rput{0}(2,4){{\bf 2}}
\rput{0}(2,5){{\bf 4}}
\rput{0}(2,6){{\bf 1}}
\rput{0}(2,7){{\bf 4}}
\rput{0}(2,8){{\bf 2}}
\rput{0}(3,0){{\bf }}
\rput{0}(3,1){{\bf }}
\rput{0}(3,2){{\bf 3}}
\rput{0}(3,3){{\bf 2}}
\rput{0}(3,4){{\bf 4}}
\rput{0}(3,5){{\bf 1}}
\rput{0}(3,6){{\bf 3}}
\rput{0}(3,7){{\bf 2}}
\rput{0}(3,8){{\bf 1}}
\rput{0}(4,0){{\bf }}
\rput{0}(4,1){{\bf 3}}
\rput{0}(4,2){{\bf 1}}
\rput{0}(4,3){{\bf 4}}
\rput{0}(4,4){{\bf 1}}
\rput{0}(4,5){{\bf 3}}
\rput{0}(4,6){{\bf 2}}
\rput{0}(4,7){{\bf 1}}
\rput{0}(4,8){{\bf }}
\rput{0}(5,0){{\bf 3}}
\rput{0}(5,1){{\bf 1}}
\rput{0}(5,2){{\bf 4}}
\rput{0}(5,3){{\bf 2}}
\rput{0}(5,4){{\bf 3}}
\rput{0}(5,5){{\bf 2}}
\rput{0}(5,6){{\bf 1}}
\rput{0}(5,7){{\bf }}
\rput{0}(5,8){{\bf }}
\rput{0}(6,0){{\bf 1}}
\rput{0}(6,1){{\bf 4}}
\rput{0}(6,2){{\bf 2}}
\rput{0}(6,3){{\bf 3}}
\rput{0}(6,4){{\bf 1}}
\rput{0}(6,5){{\bf 4}}
\rput{0}(6,6){{\bf }}
\rput{0}(6,7){{\bf }}
\rput{0}(6,8){{\bf 4}}
\rput{0}(7,0){{\bf 3}}
\rput{0}(7,1){{\bf 2}}
\rput{0}(7,2){{\bf 3}}
\rput{0}(7,3){{\bf 1}}
\rput{0}(7,4){{\bf 2}}
\rput{0}(7,5){{\bf }}
\rput{0}(7,6){{\bf }}
\rput{0}(7,7){{\bf 4}}
\rput{0}(7,8){{\bf 1}}
\rput{0}(8,0){{\bf 2}}
\rput{0}(8,1){{\bf 4}}
\rput{0}(8,2){{\bf 1}}
\rput{0}(8,3){{\bf 2}}
\rput{0}(8,4){{\bf }}
\rput{0}(8,5){{\bf }}
\rput{0}(8,6){{\bf 4}}
\rput{0}(8,7){{\bf 2}}
\rput{0}(8,8){{\bf 3}}
\multirput{0}(8.5,-0.5)(0,1){9}{%
\psline[linewidth=2pt,linecolor=black]{->}(0.2,-0.2)(-0.1,0.1)
}
\uput[0](8.7,-0.7){D1}
\uput[0](8.7, 0.3){D2}
\uput[0](8.7, 1.3){D3}
\uput[0](8.7, 2.3){D4}
\uput[0](8.7, 3.3){D5}
\uput[0](8.7, 4.3){D6}
\uput[0](8.7, 5.3){D7}
\uput[0](8.7, 6.3){D8}
\uput[0](8.7, 7.3){D9}
\endpspicture
\\[1mm]
(a)
\\
\psset{xunit=21pt}
\psset{yunit=21pt}
\psset{labelsep=5pt}
\pspicture(-1,-1)(9,9)
\psline*[linecolor=lightgray](1,5)(1,6)(3,6)(3,7)(1,5)
\psline*[linecolor=lightgray](6,0)(7,0)(7,2)(8,2)(6,0)
\psline*[linecolor=lightgray](0,6)(0,7)(1,7)(0,6)
\psline*[linecolor=lightgray](0,8)(-0.5,8)(-0.5,7.5)(0,8)
\psline*[linecolor=lightgray](8,8)(8.5,8)(8.5,7.5)(8,7)(8,8)
\psline*[linecolor=lightgray](3,5)(4,5)(4,6)(3,5)
\psline*[linecolor=lightgray](4,4)(6,4)(5,3)(5,5)(4,4)
\psline*[linecolor=lightgray](6,2)(6,3)(7,3)(6,2)
\psline*[linecolor=lightgray](7,0)(8,0)(7.5,-0.5)(7,-0.5)(7,0)
\psline*[linecolor=lightgray](3,6)(4,6)(4,7)(3,6)
\psline*[linecolor=lightgray](4,5)(5,5)(5,6)(4,5)
\psline*[linecolor=lightgray](6,3)(6,4)(7,4)(6,3)
\psline*[linecolor=lightgray](7,2)(7,3)(8,3)(7,2)
\psline*[linecolor=lightgray](4,6)(5,6)(5,7)(4,6)
\psline*[linecolor=lightgray](7,3)(7,4)(8,4)(7,3)
\psline*[linecolor=lightgray](0,4)(1,4)(1,5)(0,4)
\psline*[linecolor=lightgray](1,3)(2,3)(2,4)(1,3)
\psline*[linecolor=lightgray](2,2)(3,3)(3,1)(4,2)(2,2)
\psline*[linecolor=lightgray](4,0)(4,1)(5,1)(4,0)
\psline*[linecolor=lightgray](5,0)(6,0)(5.5,-0.5)(5,-0.5)(5,0)
\psline*[linecolor=lightgray](5,8)(5,8.5)(5.5,8.5)(5,8)
\psline*[linecolor=lightgray](5,8)(5,7)(4,7)(5,8)
\psline*[linecolor=lightgray](6,7)(6,6)(5,6)(6,7)
\psline*[linecolor=lightgray](4,7)(5,7)(5,8.5)(5.5,8.5)(4,7)
\psline*[linecolor=lightgray](7,4)(7,5)(8,5)(7,4)
\psline*[linecolor=lightgray](8,3)(8,4)(8.5,4)(8.5,3.5)(8,3)
\psline*[linecolor=lightgray](0,4)(-0.5,4)(-0.5,3.5)(0,4)
\psline*[linecolor=lightgray](7,8)(7,8.5)(7.5,8.5)(7,8)
\psline*[linecolor=darkgray](8,7)(8.5,7)(8.5,7.5)(8,7)
\psline*[linecolor=darkgray](0,7)(0,8)(-0.5,7.5)(-0.5,7)(0,7)
\psline*[linecolor=darkgray](0,6)(1,6)(1,7)(0,6)
\psline*[linecolor=darkgray](3,5)(3,6)(4,6)(3,5)
\psline*[linecolor=darkgray](4,4)(4,5)(5,5)(4,4)
\psline*[linecolor=darkgray](5,3)(6,3)(6,4)(5,3)
\psline*[linecolor=darkgray](6,2)(7,2)(7,3)(6,2)
\psline*[linecolor=darkgray](8,0)(8,-0.5)(7.5,-0.5)(8,0)
\psline*[linecolor=darkgray](3,6)(3,7)(4,7)(3,6)
\psline*[linecolor=darkgray](4,5)(4,6)(5,6)(4,5)
\psline*[linecolor=darkgray](6,3)(7,3)(7,4)(6,3)
\psline*[linecolor=darkgray](7,2)(8,2)(8,3)(7,2)
\psline*[linecolor=darkgray](4,6)(4,7)(5,7)(4,6)
\psline*[linecolor=darkgray](5,5)(5,6)(6,6)(5,5)
\psline*[linecolor=darkgray](6,4)(7,4)(7,5)(6,4)
\psline*[linecolor=darkgray](7,3)(8,3)(8,4)(7,3)
\psline*[linecolor=darkgray](1,3)(1,4)(2,4)(1,3)
\psline*[linecolor=darkgray](2,2)(2,3)(3,3)(2,2)
\psline*[linecolor=darkgray](3,1)(4,1)(4,2)(3,1)
\psline*[linecolor=darkgray](4,0)(5,0)(5,1)(4,0)
\psline*[linecolor=darkgray](5,6)(5,7)(6,7)(5,6)
\psline*[linecolor=darkgray](7,4)(8,4)(8,5)(7,4)
\psline*[linecolor=darkgray](7,8)(8,8)(8,8.5)(7.5,8.5)(7,8)
\psline[linewidth=2pt,linecolor=blue](-0.5,0)(8.5,0)
\psline[linewidth=2pt,linecolor=blue](-0.5,1)(8.5,1)
\psline[linewidth=2pt,linecolor=blue](-0.5,2)(8.5,2)
\psline[linewidth=2pt,linecolor=blue](-0.5,3)(8.5,3)
\psline[linewidth=2pt,linecolor=blue](-0.5,4)(8.5,4)
\psline[linewidth=2pt,linecolor=blue](-0.5,5)(8.5,5)
\psline[linewidth=2pt,linecolor=blue](-0.5,6)(8.5,6)
\psline[linewidth=2pt,linecolor=blue](-0.5,7)(8.5,7)
\psline[linewidth=2pt,linecolor=blue](-0.5,8)(8.5,8)
\psline[linewidth=2pt,linecolor=blue](0,-0.5)(0,8.5)
\psline[linewidth=2pt,linecolor=blue](1,-0.5)(1,8.5)
\psline[linewidth=2pt,linecolor=blue](2,-0.5)(2,8.5)
\psline[linewidth=2pt,linecolor=blue](3,-0.5)(3,8.5)
\psline[linewidth=2pt,linecolor=blue](4,-0.5)(4,8.5)
\psline[linewidth=2pt,linecolor=blue](5,-0.5)(5,8.5)
\psline[linewidth=2pt,linecolor=blue](6,-0.5)(6,8.5)
\psline[linewidth=2pt,linecolor=blue](7,-0.5)(7,8.5)
\psline[linewidth=2pt,linecolor=blue](8,-0.5)(8,8.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,-0.5)(8.5,8.5)
\psline[linewidth=2pt,linecolor=blue](0.5,-0.5)(8.5,7.5)
\psline[linewidth=2pt,linecolor=blue](1.5,-0.5)(8.5,6.5)
\psline[linewidth=2pt,linecolor=blue](2.5,-0.5)(8.5,5.5)
\psline[linewidth=2pt,linecolor=blue](3.5,-0.5)(8.5,4.5)
\psline[linewidth=2pt,linecolor=blue](4.5,-0.5)(8.5,3.5)
\psline[linewidth=2pt,linecolor=blue](5.5,-0.5)(8.5,2.5)
\psline[linewidth=2pt,linecolor=blue](6.5,-0.5)(8.5,1.5)
\psline[linewidth=2pt,linecolor=blue](7.5,-0.5)(8.5,0.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,0.5)(7.5,8.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,1.5)(6.5,8.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,2.5)(5.5,8.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,3.5)(4.5,8.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,4.5)(3.5,8.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,5.5)(2.5,8.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,6.5)(1.5,8.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,7.5)(0.5,8.5)
\multirput{0}(0,0)(0,1){9}{%
\multirput{0}(0,0)(1,0){9}{%
\pscircle*[linecolor=white]{8pt}
\pscircle[linewidth=1pt,linecolor=black] {8pt}
}
}
\rput{0}(0,0){{\bf 4}}
\rput{0}(0,1){{\bf 1}}
\rput{0}(0,2){{\bf 2}}
\rput{0}(0,3){{\bf 4}}
\rput{0}(0,4){{\bf 1}}
\rput{0}(0,5){{\bf 4}}
\rput{0}(0,6){{\bf 2}}
\rput{0}(0,7){{\bf 3}}
\rput{0}(0,8){{\bf 1}}
\rput{0}(1,0){{\bf 2}}
\rput{0}(1,1){{\bf 3}}
\rput{0}(1,2){{\bf 4}}
\rput{0}(1,3){{\bf 1}}
\rput{0}(1,4){{\bf 3}}
\rput{0}(1,5){{\bf 2}}
\rput{0}(1,6){{\bf 3}}
\rput{0}(1,7){{\bf 1}}
\rput{0}(1,8){{\bf 4}}
\rput{0}(2,0){{\bf 1}}
\rput{0}(2,1){{\bf 4}}
\rput{0}(2,2){{\bf 1}}
\rput{0}(2,3){{\bf 3}}
\rput{0}(2,4){{\bf 2}}
\rput{0}(2,5){{\bf 4}}
\rput{0}(2,6){{\bf 1}}
\rput{0}(2,7){{\bf 4}}
\rput{0}(2,8){{\bf 2}}
\rput{0}(3,0){{\bf 4}}
\rput{0}(3,1){{\bf 2}}
\rput{0}(3,2){{\bf 3}}
\rput{0}(3,3){{\bf 2}}
\rput{0}(3,4){{\bf 4}}
\rput{0}(3,5){{\bf 1}}
\rput{0}(3,6){{\bf 3}}
\rput{0}(3,7){{\bf 2}}
\rput{0}(3,8){{\bf 1}}
\rput{0}(4,0){{\bf 2}}
\rput{0}(4,1){{\bf 3}}
\rput{0}(4,2){{\bf 1}}
\rput{0}(4,3){{\bf 4}}
\rput{0}(4,4){{\bf 1}}
\rput{0}(4,5){{\bf 3}}
\rput{0}(4,6){{\bf 2}}
\rput{0}(4,7){{\bf 1}}
\rput{0}(4,8){{\bf 4}}
\rput{0}(5,0){{\bf 3}}
\rput{0}(5,1){{\bf 1}}
\rput{0}(5,2){{\bf 4}}
\rput{0}(5,3){{\bf 2}}
\rput{0}(5,4){{\bf 3}}
\rput{0}(5,5){{\bf 2}}
\rput{0}(5,6){{\bf 1}}
\rput{0}(5,7){{\bf 3}}
\rput{0}(5,8){{\bf 2}}
\rput{0}(6,0){{\bf 1}}
\rput{0}(6,1){{\bf 4}}
\rput{0}(6,2){{\bf 2}}
\rput{0}(6,3){{\bf 3}}
\rput{0}(6,4){{\bf 1}}
\rput{0}(6,5){{\bf 4}}
\rput{0}(6,6){{\bf 3}}
\rput{0}(6,7){{\bf 2}}
\rput{0}(6,8){{\bf 4}}
\rput{0}(7,0){{\bf 3}}
\rput{0}(7,1){{\bf 2}}
\rput{0}(7,2){{\bf 3}}
\rput{0}(7,3){{\bf 1}}
\rput{0}(7,4){{\bf 2}}
\rput{0}(7,5){{\bf 3}}
\rput{0}(7,6){{\bf 1}}
\rput{0}(7,7){{\bf 4}}
\rput{0}(7,8){{\bf 1}}
\rput{0}(8,0){{\bf 2}}
\rput{0}(8,1){{\bf 4}}
\rput{0}(8,2){{\bf 1}}
\rput{0}(8,3){{\bf 2}}
\rput{0}(8,4){{\bf 3}}
\rput{0}(8,5){{\bf 1}}
\rput{0}(8,6){{\bf 4}}
\rput{0}(8,7){{\bf 2}}
\rput{0}(8,8){{\bf 3}}
\multirput{0}(8.5,-0.5)(0,1){9}{%
\psline[linewidth=2pt,linecolor=black]{->}(0.2,-0.2)(-0.1,0.1)
}
\uput[0](8.7,-0.7){D1}
\uput[0](8.7, 0.3){D2}
\uput[0](8.7, 1.3){D3}
\uput[0](8.7, 2.3){D4}
\uput[0](8.7, 3.3){D5}
\uput[0](8.7, 4.3){D6}
\uput[0](8.7, 5.3){D7}
\uput[0](8.7, 6.3){D8}
\uput[0](8.7, 7.3){D9}
\endpspicture
\\[1mm]
(b)
\end{tabular}
\caption{\label{prop.12k-3.fig3-4}
Four-colorings of the triangulation $T(9,9)$ after Steps~3 (a) and~4 (b)
in the proof of the case $L=4k-1$.
}
\end{figure}
\begin{figure}[htb]
\centering
\psset{xunit=21pt}
\psset{yunit=21pt}
\psset{labelsep=5pt}
\pspicture(-1,-1)(13,12)
\psline*[linecolor=lightgray](0,9)(0,10)(1,10)(0,9)
\psline*[linecolor=lightgray](1,8)(1,9)(2,9)(1,8)
\psline*[linecolor=lightgray](2,7)(2,8)(4,8)(4,9)(2,7)
\psline*[linecolor=lightgray](4,7)(5,7)(5,8)(4,7)
\psline*[linecolor=lightgray](5,6)(7,6)(6,5)(6,7)(5,6)
\psline*[linecolor=lightgray](7,5)(8,5)(7,4)(7,5)
\psline*[linecolor=lightgray](8,4)(9,4)(8,3)(8,4)
\psline*[linecolor=lightgray](9,3)(10,3)(9,2)(9,3)
\psline*[linecolor=lightgray](8,1)(9,1)(9,2)(8,1)
\psline*[linecolor=lightgray](9,0)(10,0)(10,1)(9,0)
\psline*[linecolor=lightgray](11,0)(11,-0.5)(10.5,-0.5)(11,0)
\psline*[linecolor=lightgray](10,11)(10.5,11.5)(11,11.5)(11,11)(10,11)
\psline*[linecolor=lightgray](11,11)(11.5,11)(11.5,10.5)(11,10)(11,11)
\psline*[linecolor=lightgray](0,11)(-0.5,11)(-0.5,10.5)(0,11)
\psline*[linecolor=lightgray](1,6)(2,6)(2,7)(1,6)
\psline*[linecolor=lightgray](2,5)(3,5)(3,6)(2,5)
\psline*[linecolor=lightgray](3,4)(5,4)(4,3)(4,5)(3,4)
\psline*[linecolor=lightgray](5,2)(5,3)(6,3)(5,2)
\psline*[linecolor=lightgray](6,1)(6,2)(7,2)(6,1)
\psline*[linecolor=lightgray](7,0)(7,1)(8,1)(7,0)
\psline*[linecolor=lightgray](4,9)(4,10)(5,10)(4,9)
\psline*[linecolor=lightgray](10,3)(11,3)(11,4)(10,3)
\psline*[linecolor=darkgray](0,9)(1,9)(1,10)(0,9)
\psline*[linecolor=darkgray](1,8)(2,8)(2,9)(1,8)
\psline*[linecolor=darkgray](4,7)(4,8)(5,8)(4,7)
\psline*[linecolor=darkgray](5,6)(5,7)(6,7)(5,6)
\psline*[linecolor=darkgray](6,5)(7,5)(7,6)(6,5)
\psline*[linecolor=darkgray](7,4)(8,4)(8,5)(7,4)
\psline*[linecolor=darkgray](8,3)(9,3)(9,4)(8,3)
\psline*[linecolor=darkgray](9,0)(9,1)(10,1)(9,0)
\psline*[linecolor=darkgray](10,0)(10,-0.5)(10.5,-0.5)(11,0)(10,0)
\psline*[linecolor=darkgray](10,11)(10,11.5)(10.5,11.5)(10,11)
\psline*[linecolor=darkgray](0,10)(-0.5,10)(-0.5,10.5)(0,11)(0,10)
\psline*[linecolor=darkgray](11,10)(11.5,10)(11.5,10.5)(11,10)
\psline*[linecolor=darkgray](2,5)(2,6)(3,6)(2,5)
\psline*[linecolor=darkgray](3,4)(3,5)(4,5)(3,4)
\psline*[linecolor=darkgray](4,3)(5,3)(5,4)(4,3)
\psline*[linecolor=darkgray](5,2)(6,2)(6,3)(5,2)
\psline*[linecolor=darkgray](6,1)(7,1)(7,2)(6,1)
\psline[linewidth=2pt,linecolor=blue](-0.5,0)(11.5,0)
\psline[linewidth=2pt,linecolor=blue](-0.5,1)(11.5,1)
\psline[linewidth=2pt,linecolor=blue](-0.5,2)(11.5,2)
\psline[linewidth=2pt,linecolor=blue](-0.5,3)(11.5,3)
\psline[linewidth=2pt,linecolor=blue](-0.5,4)(11.5,4)
\psline[linewidth=2pt,linecolor=blue](-0.5,5)(11.5,5)
\psline[linewidth=2pt,linecolor=blue](-0.5,6)(11.5,6)
\psline[linewidth=2pt,linecolor=blue](-0.5,7)(11.5,7)
\psline[linewidth=2pt,linecolor=blue](-0.5,8)(11.5,8)
\psline[linewidth=2pt,linecolor=blue](-0.5,9)(11.5,9)
\psline[linewidth=2pt,linecolor=blue](-0.5,10)(11.5,10)
\psline[linewidth=2pt,linecolor=blue](-0.5,11)(11.5,11)
\psline[linewidth=2pt,linecolor=blue](0,-0.5)(0,11.5)
\psline[linewidth=2pt,linecolor=blue](1,-0.5)(1,11.5)
\psline[linewidth=2pt,linecolor=blue](2,-0.5)(2,11.5)
\psline[linewidth=2pt,linecolor=blue](3,-0.5)(3,11.5)
\psline[linewidth=2pt,linecolor=blue](4,-0.5)(4,11.5)
\psline[linewidth=2pt,linecolor=blue](5,-0.5)(5,11.5)
\psline[linewidth=2pt,linecolor=blue](6,-0.5)(6,11.5)
\psline[linewidth=2pt,linecolor=blue](7,-0.5)(7,11.5)
\psline[linewidth=2pt,linecolor=blue](8,-0.5)(8,11.5)
\psline[linewidth=2pt,linecolor=blue](9,-0.5)(9,11.5)
\psline[linewidth=2pt,linecolor=blue](10,-0.5)(10,11.5)
\psline[linewidth=2pt,linecolor=blue](11,-0.5)(11,11.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,-0.5)(11.5,11.5)
\psline[linewidth=2pt,linecolor=blue](0.5,-0.5)(11.5,10.5)
\psline[linewidth=2pt,linecolor=blue](1.5,-0.5)(11.5,9.5)
\psline[linewidth=2pt,linecolor=blue](2.5,-0.5)(11.5,8.5)
\psline[linewidth=2pt,linecolor=blue](3.5,-0.5)(11.5,7.5)
\psline[linewidth=2pt,linecolor=blue](4.5,-0.5)(11.5,6.5)
\psline[linewidth=2pt,linecolor=blue](5.5,-0.5)(11.5,5.5)
\psline[linewidth=2pt,linecolor=blue](6.5,-0.5)(11.5,4.5)
\psline[linewidth=2pt,linecolor=blue](7.5,-0.5)(11.5,3.5)
\psline[linewidth=2pt,linecolor=blue](8.5,-0.5)(11.5,2.5)
\psline[linewidth=2pt,linecolor=blue](9.5,-0.5)(11.5,1.5)
\psline[linewidth=2pt,linecolor=blue](10.5,-0.5)(11.5,0.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,0.5)(10.5,11.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,1.5)(9.5,11.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,2.5)(8.5,11.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,3.5)(7.5,11.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,4.5)(6.5,11.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,5.5)(5.5,11.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,6.5)(4.5,11.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,7.5)(3.5,11.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,8.5)(2.5,11.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,9.5)(1.5,11.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,10.5)(0.5,11.5)
\multirput{0}(0,0)(0,1){12}{%
\multirput{0}(0,0)(1,0){12}{%
\pscircle*[linecolor=white]{8pt}
\pscircle[linewidth=1pt,linecolor=black]{8pt}
}
}
\rput{0}(0,0){{\bf 4}}
\rput{0}(0,1){{\bf 1}}
\rput{0}(0,2){{\bf 3}}
\rput{0}(0,3){{\bf 4}}
\rput{0}(0,4){{\bf }}
\rput{0}(0,5){{\bf }}
\rput{0}(0,6){{\bf }}
\rput{0}(0,7){{\bf 1}}
\rput{0}(0,8){{\bf 4}}
\rput{0}(0,9){{\bf 2}}
\rput{0}(0,10){{\bf 3}}
\rput{0}(0,11){{\bf 1}}
\rput{0}(1,0){{\bf 2}}
\rput{0}(1,1){{\bf 3}}
\rput{0}(1,2){{\bf 4}}
\rput{0}(1,3){{\bf }}
\rput{0}(1,4){{\bf }}
\rput{0}(1,5){{\bf }}
\rput{0}(1,6){{\bf 1}}
\rput{0}(1,7){{\bf 4}}
\rput{0}(1,8){{\bf 2}}
\rput{0}(1,9){{\bf 3}}
\rput{0}(1,10){{\bf 1}}
\rput{0}(1,11){{\bf 4}}
\rput{0}(2,0){{\bf 3}}
\rput{0}(2,1){{\bf 4}}
\rput{0}(2,2){{\bf }}
\rput{0}(2,3){{\bf }}
\rput{0}(2,4){{\bf }}
\rput{0}(2,5){{\bf 1}}
\rput{0}(2,6){{\bf 3}}
\rput{0}(2,7){{\bf 2}}
\rput{0}(2,8){{\bf 3}}
\rput{0}(2,9){{\bf 1}}
\rput{0}(2,10){{\bf 4}}
\rput{0}(2,11){{\bf 2}}
\rput{0}(3,0){{\bf 4}}
\rput{0}(3,1){{\bf }}
\rput{0}(3,2){{\bf }}
\rput{0}(3,3){{\bf }}
\rput{0}(3,4){{\bf 1}}
\rput{0}(3,5){{\bf 3}}
\rput{0}(3,6){{\bf 2}}
\rput{0}(3,7){{\bf 4}}
\rput{0}(3,8){{\bf 1}}
\rput{0}(3,9){{\bf 4}}
\rput{0}(3,10){{\bf 2}}
\rput{0}(3,11){{\bf 3}}
\rput{0}(4,0){{\bf }}
\rput{0}(4,1){{\bf }}
\rput{0}(4,2){{\bf }}
\rput{0}(4,3){{\bf 2}}
\rput{0}(4,4){{\bf 3}}
\rput{0}(4,5){{\bf 2}}
\rput{0}(4,6){{\bf 4}}
\rput{0}(4,7){{\bf 1}}
\rput{0}(4,8){{\bf 3}}
\rput{0}(4,9){{\bf 2}}
\rput{0}(4,10){{\bf 3}}
\rput{0}(4,11){{\bf 4}}
\rput{0}(5,0){{\bf }}
\rput{0}(5,1){{\bf }}
\rput{0}(5,2){{\bf 2}}
\rput{0}(5,3){{\bf 3}}
\rput{0}(5,4){{\bf 1}}
\rput{0}(5,5){{\bf 4}}
\rput{0}(5,6){{\bf 1}}
\rput{0}(5,7){{\bf 3}}
\rput{0}(5,8){{\bf 2}}
\rput{0}(5,9){{\bf 4}}
\rput{0}(5,10){{\bf 1}}
\rput{0}(5,11){{\bf }}
\rput{0}(6,0){{\bf }}
\rput{0}(6,1){{\bf 2}}
\rput{0}(6,2){{\bf 3}}
\rput{0}(6,3){{\bf 1}}
\rput{0}(6,4){{\bf 4}}
\rput{0}(6,5){{\bf 2}}
\rput{0}(6,6){{\bf 3}}
\rput{0}(6,7){{\bf 2}}
\rput{0}(6,8){{\bf 4}}
\rput{0}(6,9){{\bf 3}}
\rput{0}(6,10){{\bf }}
\rput{0}(6,11){{\bf }}
\rput{0}(7,0){{\bf 2}}
\rput{0}(7,1){{\bf 3}}
\rput{0}(7,2){{\bf 1}}
\rput{0}(7,3){{\bf 4}}
\rput{0}(7,4){{\bf 2}}
\rput{0}(7,5){{\bf 3}}
\rput{0}(7,6){{\bf 1}}
\rput{0}(7,7){{\bf 4}}
\rput{0}(7,8){{\bf 3}}
\rput{0}(7,9){{\bf }}
\rput{0}(7,10){{\bf }}
\rput{0}(7,11){{\bf }}
\rput{0}(8,0){{\bf 4}}
\rput{0}(8,1){{\bf 1}}
\rput{0}(8,2){{\bf 4}}
\rput{0}(8,3){{\bf 2}}
\rput{0}(8,4){{\bf 3}}
\rput{0}(8,5){{\bf 1}}
\rput{0}(8,6){{\bf 4}}
\rput{0}(8,7){{\bf 3}}
\rput{0}(8,8){{\bf }}
\rput{0}(8,9){{\bf }}
\rput{0}(8,10){{\bf }}
\rput{0}(8,11){{\bf 2}}
\rput{0}(9,0){{\bf 1}}
\rput{0}(9,1){{\bf 3}}
\rput{0}(9,2){{\bf 2}}
\rput{0}(9,3){{\bf 3}}
\rput{0}(9,4){{\bf 1}}
\rput{0}(9,5){{\bf 4}}
\rput{0}(9,6){{\bf 3}}
\rput{0}(9,7){{\bf }}
\rput{0}(9,8){{\bf }}
\rput{0}(9,9){{\bf }}
\rput{0}(9,10){{\bf 2}}
\rput{0}(9,11){{\bf 4}}
\rput{0}(10,0){{\bf 3}}
\rput{0}(10,1){{\bf 2}}
\rput{0}(10,2){{\bf 4}}
\rput{0}(10,3){{\bf 1}}
\rput{0}(10,4){{\bf 4}}
\rput{0}(10,5){{\bf 3}}
\rput{0}(10,6){{\bf }}
\rput{0}(10,7){{\bf }}
\rput{0}(10,8){{\bf }}
\rput{0}(10,9){{\bf 1}}
\rput{0}(10,10){{\bf 4}}
\rput{0}(10,11){{\bf 1}}
\rput{0}(11,0){{\bf 2}}
\rput{0}(11,1){{\bf 4}}
\rput{0}(11,2){{\bf 1}}
\rput{0}(11,3){{\bf 3}}
\rput{0}(11,4){{\bf 2}}
\rput{0}(11,5){{\bf }}
\rput{0}(11,6){{\bf }}
\rput{0}(11,7){{\bf }}
\rput{0}(11,8){{\bf 1}}
\rput{0}(11,9){{\bf 4}}
\rput{0}(11,10){{\bf 2}}
\rput{0}(11,11){{\bf 3}}
\multirput{0}(11.5,-0.5)(0,1){12}{%
\psline[linewidth=2pt,linecolor=black]{->}(0.2,-0.2)(-0.1,0.1)
}
\uput[0](11.7,-0.7){D1}
\uput[0](11.7, 0.3){D2}
\uput[0](11.7, 1.3){D3}
\uput[0](11.7, 2.3){D4}
\uput[0](11.7, 3.3){D5}
\uput[0](11.7, 4.3){D6}
\uput[0](11.7, 5.3){D7}
\uput[0](11.7, 6.3){D8}
\uput[0](11.7, 7.3){D9}
\uput[0](11.7, 8.3){D10}
\uput[0](11.7, 9.3){D11}
\uput[0](11.7,10.3){D12}
\endpspicture
\caption{ \label{prop.12k.fig2}
The 4-coloring of $T(12,12)$ after Step~3 in the case $L=4k$.}
\end{figure}
On D2, we color $3$ the $6k$ consecutive vertices with $x$-coordinates
$3k+2\leq x \leq 9k+1$. The other vertices on D2 are colored $4$.
The vertices on D$(12k)$ are colored $3$ or $4$ in such a way that the
resulting coloring is proper (for each vertex, the choice is unique).
We color all vertices on D3 and D$(12k-1)$ using colors $1$ and $2$. We then
color D4 and D$(12k-2)$ using colors $3$ and $4$. Again the condition
that $f$ is proper implies that for each vertex the choice is unique.
The partial degree of $f$ is $\deg f|_R = 4$.
\medskip
\noindent
{\bf Step 2.}
For $k>1$, we find that there are $12k-7$ counter-diagonals to be colored,
and we need to sequentially color all of them but five. This can be
achieved by performing the following procedure: suppose that we
have already colored counter-diagonals D$j$ and D$(12k-j+2)$ ($j\geq 4$)
using colors $3$ and $4$. Then, we color D$(j+1)$ and D$(12k-j+1)$ using
colors $1$ and $2$, and D$(j+2)$ and D$(12k-j)$ using colors $3$ and $4$.
Again, for each vertex we have only one choice. This step is repeated
$3(k-1)$ times: we add $12(k-1)$ counter-diagonals, and there are only
five counter-diagonals not yet colored. Indeed, the last colored
counter-diagonals use colors $3$ and $4$, just as at the end of Step~1.
Each of these $3(k-1)$ steps adds $4$ to the degree of the coloring.
Thus, the partial degree of the coloring is $\deg f|_R = 4 + 12(k-1)$.
\medskip
\noindent
{\bf Step 3.}
The last colored counter-diagonals are D$(6k-2)$ and D$(6k+4)$.
On D$(6k-1)$, the vertices at $(6k,12k-1)$ and $(12k,6k-1)$ only admit one
color: one of them should have color $1$ and the other one $2$.
The rest of the vertices on D$(6k-1)$ are colored $3$ or $4$
(again, there is a unique choice for each vertex).
We color $1$ or $2$ all vertices on D$(6k+3)$; again there is a unique
choice for each vertex.
As shown in Figure~\ref{prop.12k.fig2}, the contribution of these new
triangles to the degree is $4$; thus, the partial degree of $f$ is
$\deg f|_R = 8 + 12(k-1)$.
\begin{figure}[htb]
\centering
\psset{xunit=21pt}
\psset{yunit=21pt}
\psset{labelsep=5pt}
\pspicture(-1,-1)(13,12)
\psline*[linecolor=lightgray](1,8)(1,9)(2,9)(1,8)
\psline*[linecolor=lightgray](2,7)(2,8)(4,8)(4,9)(2,7)
\psline*[linecolor=lightgray](4,7)(5,7)(5,8)(4,7)
\psline*[linecolor=lightgray](5,6)(7,6)(6,5)(6,7)(5,6)
\psline*[linecolor=lightgray](7,5)(8,5)(7,4)(7,5)
\psline*[linecolor=lightgray](8,4)(9,4)(8,3)(8,4)
\psline*[linecolor=lightgray](9,3)(10,3)(9,2)(9,3)
\psline*[linecolor=lightgray](8,1)(9,1)(9,2)(8,1)
\psline*[linecolor=lightgray](9,0)(10,0)(10,1)(9,0)
\psline*[linecolor=lightgray](11,0)(11,-0.5)(10.5,-0.5)(11,0)
\psline*[linecolor=lightgray](10,11)(10.5,11.5)(11,11.5)(11,11)(10,11)
\psline*[linecolor=lightgray](11,11)(11.5,11)(11.5,10.5)(11,10)(11,11)
\psline*[linecolor=lightgray](0,11)(-0.5,11)(-0.5,10.5)(0,11)
\psline*[linecolor=lightgray](1,6)(2,6)(2,7)(1,6)
\psline*[linecolor=lightgray](2,5)(3,5)(3,6)(2,5)
\psline*[linecolor=lightgray](3,4)(5,4)(4,3)(4,5)(3,4)
\psline*[linecolor=lightgray](5,2)(5,3)(6,3)(5,2)
\psline*[linecolor=lightgray](6,1)(6,2)(7,2)(6,1)
\psline*[linecolor=lightgray](7,0)(7,1)(8,1)(7,0)
\psline*[linecolor=lightgray](4,9)(4,10)(5,10)(4,9)
\psline*[linecolor=lightgray](10,3)(11,3)(11,4)(10,3)
\psline*[linecolor=lightgray](5,10)(5,11)(6,11)(5,10)
\psline*[linecolor=lightgray](1,5)(2,5)(2,6)(1,5)
\psline*[linecolor=lightgray](0,4)(1,4)(1,5)(0,4)
\psline*[linecolor=lightgray](6,10)(6,11)(7,11)(6,10)
\psline*[linecolor=darkgray](0,9)(1,9)(1,10)(0,9)
\psline*[linecolor=darkgray](1,8)(2,8)(2,9)(1,8)
\psline*[linecolor=darkgray](4,7)(4,8)(5,8)(4,7)
\psline*[linecolor=darkgray](5,6)(5,7)(6,7)(5,6)
\psline*[linecolor=darkgray](6,5)(7,5)(7,6)(6,5)
\psline*[linecolor=darkgray](7,4)(8,4)(8,5)(7,4)
\psline*[linecolor=darkgray](8,3)(9,3)(9,4)(8,3)
\psline*[linecolor=darkgray](9,0)(9,1)(10,1)(9,0)
\psline*[linecolor=darkgray](10,0)(10,-0.5)(10.5,-0.5)(11,0)(10,0)
\psline*[linecolor=darkgray](10,11)(10,11.5)(10.5,11.5)(10,11)
\psline*[linecolor=darkgray](0,10)(-0.5,10)(-0.5,10.5)(0,11)(0,10)
\psline*[linecolor=darkgray](11,10)(11.5,10)(11.5,10.5)(11,10)
\psline*[linecolor=darkgray](2,5)(2,6)(3,6)(2,5)
\psline*[linecolor=darkgray](3,4)(3,5)(4,5)(3,4)
\psline*[linecolor=darkgray](4,3)(5,3)(5,4)(4,3)
\psline*[linecolor=darkgray](5,2)(6,2)(6,3)(5,2)
\psline*[linecolor=darkgray](6,1)(7,1)(7,2)(6,1)
\psline*[linecolor=darkgray](5,10)(6,10)(6,11)(5,10)
\psline*[linecolor=darkgray](4,10)(5,10)(5,11)(4,10)
\psline*[linecolor=darkgray](1,5)(1,6)(2,6)(1,5)
\psline*[linecolor=darkgray](11,4)(11.5,4)(11.5,3.5)(11,3)
\psline*[linecolor=darkgray](0,4)(-0.5,4)(-0.5,3.5)(0,4)
\psline*[linecolor=darkgray](1,4)(1,5)(2,5)(1,4)
\psline*[linecolor=darkgray](6,11)(6.5,11.5)(7,11.5)(7,11)(6,11)
\psline*[linecolor=darkgray](7,0)(7,-0.5)(6.5,-0.5)(7,0)
\psline[linewidth=2pt,linecolor=blue](-0.5,0)(11.5,0)
\psline[linewidth=2pt,linecolor=blue](-0.5,1)(11.5,1)
\psline[linewidth=2pt,linecolor=blue](-0.5,2)(11.5,2)
\psline[linewidth=2pt,linecolor=blue](-0.5,3)(11.5,3)
\psline[linewidth=2pt,linecolor=blue](-0.5,4)(11.5,4)
\psline[linewidth=2pt,linecolor=blue](-0.5,5)(11.5,5)
\psline[linewidth=2pt,linecolor=blue](-0.5,6)(11.5,6)
\psline[linewidth=2pt,linecolor=blue](-0.5,7)(11.5,7)
\psline[linewidth=2pt,linecolor=blue](-0.5,8)(11.5,8)
\psline[linewidth=2pt,linecolor=blue](-0.5,9)(11.5,9)
\psline[linewidth=2pt,linecolor=blue](-0.5,10)(11.5,10)
\psline[linewidth=2pt,linecolor=blue](-0.5,11)(11.5,11)
\psline[linewidth=2pt,linecolor=blue](0,-0.5)(0,11.5)
\psline[linewidth=2pt,linecolor=blue](1,-0.5)(1,11.5)
\psline[linewidth=2pt,linecolor=blue](2,-0.5)(2,11.5)
\psline[linewidth=2pt,linecolor=blue](3,-0.5)(3,11.5)
\psline[linewidth=2pt,linecolor=blue](4,-0.5)(4,11.5)
\psline[linewidth=2pt,linecolor=blue](5,-0.5)(5,11.5)
\psline[linewidth=2pt,linecolor=blue](6,-0.5)(6,11.5)
\psline[linewidth=2pt,linecolor=blue](7,-0.5)(7,11.5)
\psline[linewidth=2pt,linecolor=blue](8,-0.5)(8,11.5)
\psline[linewidth=2pt,linecolor=blue](9,-0.5)(9,11.5)
\psline[linewidth=2pt,linecolor=blue](10,-0.5)(10,11.5)
\psline[linewidth=2pt,linecolor=blue](11,-0.5)(11,11.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,-0.5)(11.5,11.5)
\psline[linewidth=2pt,linecolor=blue](0.5,-0.5)(11.5,10.5)
\psline[linewidth=2pt,linecolor=blue](1.5,-0.5)(11.5,9.5)
\psline[linewidth=2pt,linecolor=blue](2.5,-0.5)(11.5,8.5)
\psline[linewidth=2pt,linecolor=blue](3.5,-0.5)(11.5,7.5)
\psline[linewidth=2pt,linecolor=blue](4.5,-0.5)(11.5,6.5)
\psline[linewidth=2pt,linecolor=blue](5.5,-0.5)(11.5,5.5)
\psline[linewidth=2pt,linecolor=blue](6.5,-0.5)(11.5,4.5)
\psline[linewidth=2pt,linecolor=blue](7.5,-0.5)(11.5,3.5)
\psline[linewidth=2pt,linecolor=blue](8.5,-0.5)(11.5,2.5)
\psline[linewidth=2pt,linecolor=blue](9.5,-0.5)(11.5,1.5)
\psline[linewidth=2pt,linecolor=blue](10.5,-0.5)(11.5,0.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,0.5)(10.5,11.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,1.5)(9.5,11.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,2.5)(8.5,11.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,3.5)(7.5,11.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,4.5)(6.5,11.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,5.5)(5.5,11.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,6.5)(4.5,11.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,7.5)(3.5,11.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,8.5)(2.5,11.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,9.5)(1.5,11.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,10.5)(0.5,11.5)
\multirput{0}(0,0)(0,1){12}{%
\multirput{0}(0,0)(1,0){12}{%
\pscircle*[linecolor=white]{8pt}
\pscircle[linewidth=1pt,linecolor=black] {8pt}
}
}
\rput{0}(0,0){{\bf 4}}
\rput{0}(0,1){{\bf 1}}
\rput{0}(0,2){{\bf 3}}
\rput{0}(0,3){{\bf 4}}
\rput{0}(0,4){{\bf 1}}
\rput{0}(0,5){{\bf 4}}
\rput{0}(0,6){{\bf 3}}
\rput{0}(0,7){{\bf 1}}
\rput{0}(0,8){{\bf 4}}
\rput{0}(0,9){{\bf 2}}
\rput{0}(0,10){{\bf 3}}
\rput{0}(0,11){{\bf 1}}
\rput{0}(1,0){{\bf 2}}
\rput{0}(1,1){{\bf 3}}
\rput{0}(1,2){{\bf 4}}
\rput{0}(1,3){{\bf 1}}
\rput{0}(1,4){{\bf 3}}
\rput{0}(1,5){{\bf 2}}
\rput{0}(1,6){{\bf 1}}
\rput{0}(1,7){{\bf 4}}
\rput{0}(1,8){{\bf 2}}
\rput{0}(1,9){{\bf 3}}
\rput{0}(1,10){{\bf 1}}
\rput{0}(1,11){{\bf 4}}
\rput{0}(2,0){{\bf 3}}
\rput{0}(2,1){{\bf 4}}
\rput{0}(2,2){{\bf 1}}
\rput{0}(2,3){{\bf 3}}
\rput{0}(2,4){{\bf 4}}
\rput{0}(2,5){{\bf 1}}
\rput{0}(2,6){{\bf 3}}
\rput{0}(2,7){{\bf 2}}
\rput{0}(2,8){{\bf 3}}
\rput{0}(2,9){{\bf 1}}
\rput{0}(2,10){{\bf 4}}
\rput{0}(2,11){{\bf 2}}
\rput{0}(3,0){{\bf 4}}
\rput{0}(3,1){{\bf 1}}
\rput{0}(3,2){{\bf 3}}
\rput{0}(3,3){{\bf 4}}
\rput{0}(3,4){{\bf 1}}
\rput{0}(3,5){{\bf 3}}
\rput{0}(3,6){{\bf 2}}
\rput{0}(3,7){{\bf 4}}
\rput{0}(3,8){{\bf 1}}
\rput{0}(3,9){{\bf 4}}
\rput{0}(3,10){{\bf 2}}
\rput{0}(3,11){{\bf 3}}
\rput{0}(4,0){{\bf 1}}
\rput{0}(4,1){{\bf 3}}
\rput{0}(4,2){{\bf 4}}
\rput{0}(4,3){{\bf 2}}
\rput{0}(4,4){{\bf 3}}
\rput{0}(4,5){{\bf 2}}
\rput{0}(4,6){{\bf 4}}
\rput{0}(4,7){{\bf 1}}
\rput{0}(4,8){{\bf 3}}
\rput{0}(4,9){{\bf 2}}
\rput{0}(4,10){{\bf 3}}
\rput{0}(4,11){{\bf 4}}
\rput{0}(5,0){{\bf 3}}
\rput{0}(5,1){{\bf 4}}
\rput{0}(5,2){{\bf 2}}
\rput{0}(5,3){{\bf 3}}
\rput{0}(5,4){{\bf 1}}
\rput{0}(5,5){{\bf 4}}
\rput{0}(5,6){{\bf 1}}
\rput{0}(5,7){{\bf 3}}
\rput{0}(5,8){{\bf 2}}
\rput{0}(5,9){{\bf 4}}
\rput{0}(5,10){{\bf 1}}
\rput{0}(5,11){{\bf 2}}
\rput{0}(6,0){{\bf 4}}
\rput{0}(6,1){{\bf 2}}
\rput{0}(6,2){{\bf 3}}
\rput{0}(6,3){{\bf 1}}
\rput{0}(6,4){{\bf 4}}
\rput{0}(6,5){{\bf 2}}
\rput{0}(6,6){{\bf 3}}
\rput{0}(6,7){{\bf 2}}
\rput{0}(6,8){{\bf 4}}
\rput{0}(6,9){{\bf 3}}
\rput{0}(6,10){{\bf 2}}
\rput{0}(6,11){{\bf 3}}
\rput{0}(7,0){{\bf 2}}
\rput{0}(7,1){{\bf 3}}
\rput{0}(7,2){{\bf 1}}
\rput{0}(7,3){{\bf 4}}
\rput{0}(7,4){{\bf 2}}
\rput{0}(7,5){{\bf 3}}
\rput{0}(7,6){{\bf 1}}
\rput{0}(7,7){{\bf 4}}
\rput{0}(7,8){{\bf 3}}
\rput{0}(7,9){{\bf 2}}
\rput{0}(7,10){{\bf 4}}
\rput{0}(7,11){{\bf 1}}
\rput{0}(8,0){{\bf 4}}
\rput{0}(8,1){{\bf 1}}
\rput{0}(8,2){{\bf 4}}
\rput{0}(8,3){{\bf 2}}
\rput{0}(8,4){{\bf 3}}
\rput{0}(8,5){{\bf 1}}
\rput{0}(8,6){{\bf 4}}
\rput{0}(8,7){{\bf 3}}
\rput{0}(8,8){{\bf 2}}
\rput{0}(8,9){{\bf 4}}
\rput{0}(8,10){{\bf 3}}
\rput{0}(8,11){{\bf 2}}
\rput{0}(9,0){{\bf 1}}
\rput{0}(9,1){{\bf 3}}
\rput{0}(9,2){{\bf 2}}
\rput{0}(9,3){{\bf 3}}
\rput{0}(9,4){{\bf 1}}
\rput{0}(9,5){{\bf 4}}
\rput{0}(9,6){{\bf 3}}
\rput{0}(9,7){{\bf 2}}
\rput{0}(9,8){{\bf 4}}
\rput{0}(9,9){{\bf 3}}
\rput{0}(9,10){{\bf 2}}
\rput{0}(9,11){{\bf 4}}
\rput{0}(10,0){{\bf 3}}
\rput{0}(10,1){{\bf 2}}
\rput{0}(10,2){{\bf 4}}
\rput{0}(10,3){{\bf 1}}
\rput{0}(10,4){{\bf 4}}
\rput{0}(10,5){{\bf 3}}
\rput{0}(10,6){{\bf 2}}
\rput{0}(10,7){{\bf 4}}
\rput{0}(10,8){{\bf 3}}
\rput{0}(10,9){{\bf 1}}
\rput{0}(10,10){{\bf 4}}
\rput{0}(10,11){{\bf 1}}
\rput{0}(11,0){{\bf 2}}
\rput{0}(11,1){{\bf 4}}
\rput{0}(11,2){{\bf 1}}
\rput{0}(11,3){{\bf 3}}
\rput{0}(11,4){{\bf 2}}
\rput{0}(11,5){{\bf 1}}
\rput{0}(11,6){{\bf 4}}
\rput{0}(11,7){{\bf 3}}
\rput{0}(11,8){{\bf 1}}
\rput{0}(11,9){{\bf 4}}
\rput{0}(11,10){{\bf 2}}
\rput{0}(11,11){{\bf 3}}
\multirput{0}(11.5,-0.5)(0,1){12}{%
\psline[linewidth=2pt,linecolor=black]{->}(0.2,-0.2)(-0.1,0.1)
}
\uput[0](11.7,-0.7){D1}
\uput[0](11.7, 0.3){D2}
\uput[0](11.7, 1.3){D3}
\uput[0](11.7, 2.3){D4}
\uput[0](11.7, 3.3){D5}
\uput[0](11.7, 4.3){D6}
\uput[0](11.7, 5.3){D7}
\uput[0](11.7, 6.3){D8}
\uput[0](11.7, 7.3){D9}
\uput[0](11.7, 8.3){D10}
\uput[0](11.7, 9.3){D11}
\uput[0](11.7,10.3){D12}
\endpspicture
\caption{ \label{prop.12k.fig4}
The 4-coloring of $T(12,12)$ after Step~5 in the case $L=4k$.
}
\end{figure}
\noindent
{\bf Step 4.}
On D$(6k)$ the vertices at $(1,6k-1)$, $(12k,6k)$, $(6k+1,12k-1)$, and
$(6k,12k)$ only admit a unique color choice: either $1$ or $2$. The first
two vertices should be colored alike, while the last two vertices take the
other color. We color the other vertices on D$(6k)$
with 1 and 2 in such a way that those
vertices with $x$-coordinate satisfying $1\leq x < 6k$ take the same
color as the vertex at $(1,6k-1)$; the rest are colored the same as
the vertex at $(6k,12k)$.
All vertices on D$(6k+1)$ are colored $3$ or $4$. For all of them, except for
those at $(1,6k)$ and $(6k+1,12k)$, there is unique possibility to do so.
We color $4$ the vertex at $(1,6k)$, and color $3$ the vertex at $(6k+1,12k)$.
The increment of the partial degree is $-2$, thus $\deg f|_R = 6 + 12(k-1)$.
\medskip
\noindent
{\bf Step 5.}
Finally, on D$(6k+2)$, there are two vertices which only admit a single color
chosen among $1$ and $2$. For odd $k$ these vertices are $(2,6k)$ and
$(6k+2,12k)$; while for even $k$, these vertices are $(1,6k+1)$ and $(6k+1,1)$.
The other vertices on D$(6k+2)$ can be colored $3$ and $4$ (uniquely).
The resulting coloring is depicted in Figure~\ref{prop.12k.fig4}.
In this step, the increment in the degree is zero. Therefore, the degree
of the obtained four-coloring is
\begin{equation}
\deg f \;=\; 6 + 12(k-1) \;\equiv\; 6 \pmod{12} \nonumber
\end{equation}
This coloring $f$ of $T(12k,12k)$ is proper and its degree is congruent
to six modulo $12$, as claimed.
\medskip
\proofofcase{4}{$L=4k+1$}
Let us consider the triangulation $T=T(12k+3,12k+3)$ with $k\in{\mathbb N}$ (we will
illustrate the main steps with the case $k=1$).
\medskip
\noindent
{\bf Step 1.}
On D1 we color $1$ the $6k+2$ consecutive vertices with
$x$-coordinate $1\leq x \leq 6k+2$. The other $6k+1$ vertices on D1
are colored $2$.
On D2 we color $3$ the $6k+1$ consecutive vertices with
$x$-coordinate $3k+3\leq x \leq 9k+3$. The other vertices on D2 are
colored $4$. We color $3$ or $4$ all vertices on D$(12k+3)$; the choice
is unique for each vertex.
We color $1$ or $2$ all vertices on D3, D5, D$(12k+2)$, and D$(12k)$,
and we color $3$ or $4$ all vertices on D4 and D$(12k+1)$. In all cases,
the choice is unique for each vertex.
The resulting (partial) coloring is depicted in Figure~\ref{prop.12k+3.fig1}.
The partial degree of this coloring is $\deg f|_R = 8$.
\medskip
\noindent
{\bf Step 2.}
For $k>1$, we find that there are $12k-6$ counter-diagonals to be colored
and in this step we will sequentially color all of them but six. This can be
achieved by performing the following procedure: suppose that we
have already colored D$j$ and D$(12k-j+5)$ ($j\geq 5$)
using colors $1$ and $2$. Then, we color D$(j+1)$ and D$(12k-j+4)$ using
colors $3$ and $4$, and D$(j+2)$ and D$(12k-j+3)$ using colors $1$ and $2$.
Again, for each vertex the choice is unique.
This step is repeated $3(k-1)$ times; thus, we add $12(k-1)$ counter-diagonals,
and only six counter-diagonals remain uncolored. Note that the last
colored counter-diagonals use colors $1$ and $2$, as was the case at the
end of Step~1.
Each of these $3(k-1)$ steps adds $4$ to the degree of the coloring.
Thus, the partial degree is $\deg f|_R = 8 + 12(k-1)$.
\begin{figure}[htb]
\centering
\psset{xunit=21pt}
\psset{yunit=21pt}
\psset{labelsep=5pt}
\pspicture(-1,-1)(16,15)
\psline*[linecolor=lightgray](0,2)(0,3)(-0.5,2.5)(-0.5,2)(0,2)
\psline*[linecolor=lightgray](0,1)(2,1)(1,0)(1,2)(0,1)
\psline*[linecolor=lightgray](2,0)(3,0)(2.5,-0.5)(2,-0.5)(2,0)
\psline*[linecolor=lightgray](0,14)(-0.5,14)(-0.5,13.5)(0,14)
\psline*[linecolor=lightgray](0,13)(1,13)(0,12)(0,13)
\psline*[linecolor=lightgray](1,12)(2,12)(1,11)(1,12)
\psline*[linecolor=lightgray](2,11)(3,11)(2,10)(2,11)
\psline*[linecolor=lightgray](3,10)(4,10)(3,9)(3,10)
\psline*[linecolor=lightgray](3,9)(3,8)(2,8)(3,9)
\psline*[linecolor=lightgray](4,8)(4,7)(3,7)(4,8)
\psline*[linecolor=lightgray](5,7)(5,6)(4,6)(5,7)
\psline*[linecolor=lightgray](5,5)(7,5)(6,4)(6,6)(5,5)
\psline*[linecolor=lightgray](7,3)(7,4)(8,4)(7,3)
\psline*[linecolor=lightgray](8,2)(8,3)(9,3)(8,2)
\psline*[linecolor=lightgray](9,1)(9,2)(10,2)(9,1)
\psline*[linecolor=lightgray](10,2)(11,2)(11,4)(12,4)(10,2)
\psline*[linecolor=lightgray](2,14)(2,14.5)(2.5,14.5)(2,14)
\psline*[linecolor=lightgray](3,14)(3,13)(4,14)(3,14)
\psline*[linecolor=lightgray](4,13)(4,12)(5,13)(4,13)
\psline*[linecolor=lightgray](5,12)(5,11)(6,12)(5,12)
\psline*[linecolor=lightgray](4,10)(5,10)(5,11)(4,10)
\psline*[linecolor=lightgray](5,9)(6,9)(6,10)(5,9)
\psline*[linecolor=lightgray](6,8)(7,8)(7,9)(6,8)
\psline*[linecolor=lightgray](7,7)(9,7)(8,6)(8,8)(7,7)
\psline*[linecolor=lightgray](9,5)(9,6)(10,6)(9,5)
\psline*[linecolor=lightgray](10,4)(10,5)(11,5)(10,4)
\psline*[linecolor=lightgray](12,4)(13,4)(13,5)(12,4)
\psline*[linecolor=lightgray](13,3)(14,3)(14,4)(13,3)
\psline*[linecolor=lightgray](14,2)(14.5,2)(14.5,2.5)(14,2)
\psline*[linecolor=lightgray](13,14)(13.5,14.5)(14,14.5)(14,14)(13,14)
\psline*[linecolor=lightgray](14,13)(14,14)(14.5,14)(14.5,13.5)(14,13)
\psline*[linecolor=lightgray](11,1)(12,1)(12,2)(11,1)
\psline*[linecolor=lightgray](12,0)(13,0)(13,1)(12,0)
\psline*[linecolor=lightgray](14,0)(14,-0.5)(13.5,-0.5)(14,0)
\psline*[linecolor=darkgray](0,3)(-0.5,3)(-0.5,2.5)(0,3)
\psline*[linecolor=darkgray](0,2)(1,2)(0,1)(0,2)
\psline*[linecolor=darkgray](1,0)(2,0)(2,1)(1,0)
\psline*[linecolor=darkgray](3,0)(3,-0.5)(2.5,-0.5)(3,0)
\psline*[linecolor=darkgray](0,13)(-0.5,13)(-0.5,13.5)(0,14)(0,13)
\psline*[linecolor=darkgray](0,12)(1,12)(1,13)(0,12)
\psline*[linecolor=darkgray](1,11)(2,11)(2,12)(1,11)
\psline*[linecolor=darkgray](2,10)(3,10)(3,11)(2,10)
\psline*[linecolor=darkgray](3,7)(3,8)(4,8)(3,7)
\psline*[linecolor=darkgray](4,6)(4,7)(5,7)(4,6)
\psline*[linecolor=darkgray](5,5)(5,6)(6,6)(5,5)
\psline*[linecolor=darkgray](6,4)(7,4)(7,5)(6,4)
\psline*[linecolor=darkgray](7,3)(8,3)(8,4)(7,3)
\psline*[linecolor=darkgray](8,2)(9,2)(9,3)(8,2)
\psline*[linecolor=darkgray](2,14)(2.5,14.5)(3,14.5)(3,14)(2,14)
\psline*[linecolor=darkgray](3,13)(4,13)(4,14)(3,13)
\psline*[linecolor=darkgray](4,12)(5,12)(5,13)(4,12)
\psline*[linecolor=darkgray](5,9)(5,10)(6,10)(5,9)
\psline*[linecolor=darkgray](6,8)(6,9)(7,9)(6,8)
\psline*[linecolor=darkgray](7,7)(7,8)(8,8)(7,7)
\psline*[linecolor=darkgray](8,6)(9,6)(9,7)(8,6)
\psline*[linecolor=darkgray](9,5)(10,5)(10,6)(9,5)
\psline*[linecolor=darkgray](10,4)(11,4)(11,5)(10,4)
\psline*[linecolor=darkgray](13,3)(13,4)(14,4)(13,3)
\psline*[linecolor=darkgray](14,2)(14,3)(14.5,3)(14.5,2.5)(14,2)
\psline*[linecolor=darkgray](13,14)(13,14.5)(13.5,14.5)(13,14)
\psline*[linecolor=darkgray](14,13)(14.5,13)(14.5,13.5)(14,13)
\psline*[linecolor=darkgray](11,1)(11,2)(12,2)(11,1)
\psline*[linecolor=darkgray](12,0)(12,1)(13,1)(12,0)
\psline*[linecolor=darkgray](13,0)(14,0)(13.5,-0.5)(13,-0.5)(13,0)
\psline[linewidth=2pt,linecolor=blue](-0.5,0)(14.5,0)
\psline[linewidth=2pt,linecolor=blue](-0.5,1)(14.5,1)
\psline[linewidth=2pt,linecolor=blue](-0.5,2)(14.5,2)
\psline[linewidth=2pt,linecolor=blue](-0.5,3)(14.5,3)
\psline[linewidth=2pt,linecolor=blue](-0.5,4)(14.5,4)
\psline[linewidth=2pt,linecolor=blue](-0.5,5)(14.5,5)
\psline[linewidth=2pt,linecolor=blue](-0.5,6)(14.5,6)
\psline[linewidth=2pt,linecolor=blue](-0.5,7)(14.5,7)
\psline[linewidth=2pt,linecolor=blue](-0.5,8)(14.5,8)
\psline[linewidth=2pt,linecolor=blue](-0.5,9)(14.5,9)
\psline[linewidth=2pt,linecolor=blue](-0.5,10)(14.5,10)
\psline[linewidth=2pt,linecolor=blue](-0.5,11)(14.5,11)
\psline[linewidth=2pt,linecolor=blue](-0.5,12)(14.5,12)
\psline[linewidth=2pt,linecolor=blue](-0.5,13)(14.5,13)
\psline[linewidth=2pt,linecolor=blue](-0.5,14)(14.5,14)
\psline[linewidth=2pt,linecolor=blue](0,-0.5)(0,14.5)
\psline[linewidth=2pt,linecolor=blue](1,-0.5)(1,14.5)
\psline[linewidth=2pt,linecolor=blue](2,-0.5)(2,14.5)
\psline[linewidth=2pt,linecolor=blue](3,-0.5)(3,14.5)
\psline[linewidth=2pt,linecolor=blue](4,-0.5)(4,14.5)
\psline[linewidth=2pt,linecolor=blue](5,-0.5)(5,14.5)
\psline[linewidth=2pt,linecolor=blue](6,-0.5)(6,14.5)
\psline[linewidth=2pt,linecolor=blue](7,-0.5)(7,14.5)
\psline[linewidth=2pt,linecolor=blue](8,-0.5)(8,14.5)
\psline[linewidth=2pt,linecolor=blue](9,-0.5)(9,14.5)
\psline[linewidth=2pt,linecolor=blue](10,-0.5)(10,14.5)
\psline[linewidth=2pt,linecolor=blue](11,-0.5)(11,14.5)
\psline[linewidth=2pt,linecolor=blue](12,-0.5)(12,14.5)
\psline[linewidth=2pt,linecolor=blue](13,-0.5)(13,14.5)
\psline[linewidth=2pt,linecolor=blue](14,-0.5)(14,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,-0.5)(14.5,14.5)
\psline[linewidth=2pt,linecolor=blue](0.5,-0.5)(14.5,13.5)
\psline[linewidth=2pt,linecolor=blue](1.5,-0.5)(14.5,12.5)
\psline[linewidth=2pt,linecolor=blue](2.5,-0.5)(14.5,11.5)
\psline[linewidth=2pt,linecolor=blue](3.5,-0.5)(14.5,10.5)
\psline[linewidth=2pt,linecolor=blue](4.5,-0.5)(14.5,9.5)
\psline[linewidth=2pt,linecolor=blue](5.5,-0.5)(14.5,8.5)
\psline[linewidth=2pt,linecolor=blue](6.5,-0.5)(14.5,7.5)
\psline[linewidth=2pt,linecolor=blue](7.5,-0.5)(14.5,6.5)
\psline[linewidth=2pt,linecolor=blue](8.5,-0.5)(14.5,5.5)
\psline[linewidth=2pt,linecolor=blue](9.5,-0.5)(14.5,4.5)
\psline[linewidth=2pt,linecolor=blue](10.5,-0.5)(14.5,3.5)
\psline[linewidth=2pt,linecolor=blue](11.5,-0.5)(14.5,2.5)
\psline[linewidth=2pt,linecolor=blue](12.5,-0.5)(14.5,1.5)
\psline[linewidth=2pt,linecolor=blue](13.5,-0.5)(14.5,0.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,0.5)(13.5,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,1.5)(12.5,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,2.5)(11.5,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,3.5)(10.5,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,4.5)(9.5,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,5.5)(8.5,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,6.5)(7.5,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,7.5)(6.5,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,8.5)(5.5,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,9.5)(4.5,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,10.5)(3.5,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,11.5)(2.5,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,12.5)(1.5,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,13.5)(0.5,14.5)
\multirput{0}(0,0)(0,1){15}{%
\multirput{0}(0,0)(1,0){15}{%
\pscircle*[linecolor=white]{8pt}
\pscircle[linewidth=1pt,linecolor=black] {8pt}
}
}
\rput{0}(0,0) {{\bf 4}}
\rput{0}(0,1) {{\bf 1}}
\rput{0}(0,2) {{\bf 3}}
\rput{0}(0,3) {{\bf 2}}
\rput{0}(0,4) {{\bf }}
\rput{0}(0,5) {{\bf }}
\rput{0}(0,6) {{\bf }}
\rput{0}(0,7) {{\bf }}
\rput{0}(0,8) {{\bf }}
\rput{0}(0,9) {{\bf }}
\rput{0}(0,10) {{\bf 1}}
\rput{0}(0,11) {{\bf 4}}
\rput{0}(0,12) {{\bf 2}}
\rput{0}(0,13) {{\bf 3}}
\rput{0}(0,14) {{\bf 1}}
\rput{0}(1,0) {{\bf 2}}
\rput{0}(1,1) {{\bf 3}}
\rput{0}(1,2) {{\bf 2}}
\rput{0}(1,3) {{\bf }}
\rput{0}(1,4) {{\bf }}
\rput{0}(1,5) {{\bf }}
\rput{0}(1,6) {{\bf }}
\rput{0}(1,7) {{\bf }}
\rput{0}(1,8) {{\bf }}
\rput{0}(1,9) {{\bf 1}}
\rput{0}(1,10) {{\bf 4}}
\rput{0}(1,11) {{\bf 2}}
\rput{0}(1,12) {{\bf 3}}
\rput{0}(1,13) {{\bf 1}}
\rput{0}(1,14) {{\bf 4}}
\rput{0}(2,0) {{\bf 3}}
\rput{0}(2,1) {{\bf 1}}
\rput{0}(2,2) {{\bf }}
\rput{0}(2,3) {{\bf }}
\rput{0}(2,4) {{\bf }}
\rput{0}(2,5) {{\bf }}
\rput{0}(2,6) {{\bf }}
\rput{0}(2,7) {{\bf }}
\rput{0}(2,8) {{\bf 1}}
\rput{0}(2,9) {{\bf 4}}
\rput{0}(2,10) {{\bf 2}}
\rput{0}(2,11) {{\bf 3}}
\rput{0}(2,12) {{\bf 1}}
\rput{0}(2,13) {{\bf 4}}
\rput{0}(2,14) {{\bf 2}}
\rput{0}(3,0) {{\bf 1}}
\rput{0}(3,1) {{\bf }}
\rput{0}(3,2) {{\bf }}
\rput{0}(3,3) {{\bf }}
\rput{0}(3,4) {{\bf }}
\rput{0}(3,5) {{\bf }}
\rput{0}(3,6) {{\bf }}
\rput{0}(3,7) {{\bf 1}}
\rput{0}(3,8) {{\bf 3}}
\rput{0}(3,9) {{\bf 2}}
\rput{0}(3,10) {{\bf 3}}
\rput{0}(3,11) {{\bf 1}}
\rput{0}(3,12) {{\bf 4}}
\rput{0}(3,13) {{\bf 2}}
\rput{0}(3,14) {{\bf 3}}
\rput{0}(4,0) {{\bf }}
\rput{0}(4,1) {{\bf }}
\rput{0}(4,2) {{\bf }}
\rput{0}(4,3) {{\bf }}
\rput{0}(4,4) {{\bf }}
\rput{0}(4,5) {{\bf }}
\rput{0}(4,6) {{\bf 1}}
\rput{0}(4,7) {{\bf 3}}
\rput{0}(4,8) {{\bf 2}}
\rput{0}(4,9) {{\bf 4}}
\rput{0}(4,10) {{\bf 1}}
\rput{0}(4,11) {{\bf 4}}
\rput{0}(4,12) {{\bf 2}}
\rput{0}(4,13) {{\bf 3}}
\rput{0}(4,14) {{\bf 1}}
\rput{0}(5,0) {{\bf }}
\rput{0}(5,1) {{\bf }}
\rput{0}(5,2) {{\bf }}
\rput{0}(5,3) {{\bf }}
\rput{0}(5,4) {{\bf }}
\rput{0}(5,5) {{\bf 1}}
\rput{0}(5,6) {{\bf 3}}
\rput{0}(5,7) {{\bf 2}}
\rput{0}(5,8) {{\bf 4}}
\rput{0}(5,9) {{\bf 1}}
\rput{0}(5,10) {{\bf 3}}
\rput{0}(5,11) {{\bf 2}}
\rput{0}(5,12) {{\bf 4}}
\rput{0}(5,13) {{\bf 1}}
\rput{0}(5,14) {{\bf }}
\rput{0}(6,0) {{\bf }}
\rput{0}(6,1) {{\bf }}
\rput{0}(6,2) {{\bf }}
\rput{0}(6,3) {{\bf }}
\rput{0}(6,4) {{\bf 2}}
\rput{0}(6,5) {{\bf 3}}
\rput{0}(6,6) {{\bf 2}}
\rput{0}(6,7) {{\bf 4}}
\rput{0}(6,8) {{\bf 1}}
\rput{0}(6,9) {{\bf 3}}
\rput{0}(6,10) {{\bf 2}}
\rput{0}(6,11) {{\bf 4}}
\rput{0}(6,12) {{\bf 1}}
\rput{0}(6,13) {{\bf }}
\rput{0}(6,14) {{\bf }}
\rput{0}(7,0) {{\bf }}
\rput{0}(7,1) {{\bf }}
\rput{0}(7,2) {{\bf }}
\rput{0}(7,3) {{\bf 2}}
\rput{0}(7,4) {{\bf 3}}
\rput{0}(7,5) {{\bf 1}}
\rput{0}(7,6) {{\bf 4}}
\rput{0}(7,7) {{\bf 1}}
\rput{0}(7,8) {{\bf 3}}
\rput{0}(7,9) {{\bf 2}}
\rput{0}(7,10) {{\bf 4}}
\rput{0}(7,11) {{\bf 1}}
\rput{0}(7,12) {{\bf }}
\rput{0}(7,13) {{\bf }}
\rput{0}(7,14) {{\bf }}
\rput{0}(8,0) {{\bf }}
\rput{0}(8,1) {{\bf }}
\rput{0}(8,2) {{\bf 2}}
\rput{0}(8,3) {{\bf 3}}
\rput{0}(8,4) {{\bf 1}}
\rput{0}(8,5) {{\bf 4}}
\rput{0}(8,6) {{\bf 2}}
\rput{0}(8,7) {{\bf 3}}
\rput{0}(8,8) {{\bf 2}}
\rput{0}(8,9) {{\bf 4}}
\rput{0}(8,10) {{\bf 1}}
\rput{0}(8,11) {{\bf }}
\rput{0}(8,12) {{\bf }}
\rput{0}(8,13) {{\bf }}
\rput{0}(8,14) {{\bf }}
\rput{0}(9,0) {{\bf }}
\rput{0}(9,1) {{\bf 2}}
\rput{0}(9,2) {{\bf 3}}
\rput{0}(9,3) {{\bf 1}}
\rput{0}(9,4) {{\bf 4}}
\rput{0}(9,5) {{\bf 2}}
\rput{0}(9,6) {{\bf 3}}
\rput{0}(9,7) {{\bf 1}}
\rput{0}(9,8) {{\bf 4}}
\rput{0}(9,9) {{\bf 1}}
\rput{0}(9,10) {{\bf }}
\rput{0}(9,11) {{\bf }}
\rput{0}(9,12) {{\bf }}
\rput{0}(9,13) {{\bf }}
\rput{0}(9,14) {{\bf }}
\rput{0}(10,0) {{\bf 2}}
\rput{0}(10,1) {{\bf 4}}
\rput{0}(10,2) {{\bf 1}}
\rput{0}(10,3) {{\bf 4}}
\rput{0}(10,4) {{\bf 2}}
\rput{0}(10,5) {{\bf 3}}
\rput{0}(10,6) {{\bf 1}}
\rput{0}(10,7) {{\bf 4}}
\rput{0}(10,8) {{\bf 2}}
\rput{0}(10,9) {{\bf }}
\rput{0}(10,10){{\bf }}
\rput{0}(10,11){{\bf }}
\rput{0}(10,12){{\bf }}
\rput{0}(10,13){{\bf }}
\rput{0}(10,14){{\bf }}
\rput{0}(11,0) {{\bf 4}}
\rput{0}(11,1) {{\bf 1}}
\rput{0}(11,2) {{\bf 3}}
\rput{0}(11,3) {{\bf 2}}
\rput{0}(11,4) {{\bf 3}}
\rput{0}(11,5) {{\bf 1}}
\rput{0}(11,6) {{\bf 4}}
\rput{0}(11,7) {{\bf 2}}
\rput{0}(11,8) {{\bf }}
\rput{0}(11,9) {{\bf }}
\rput{0}(11,10){{\bf }}
\rput{0}(11,11){{\bf }}
\rput{0}(11,12){{\bf }}
\rput{0}(11,13){{\bf }}
\rput{0}(11,14){{\bf 2}}
\rput{0}(12,0) {{\bf 1}}
\rput{0}(12,1) {{\bf 3}}
\rput{0}(12,2) {{\bf 2}}
\rput{0}(12,3) {{\bf 4}}
\rput{0}(12,4) {{\bf 1}}
\rput{0}(12,5) {{\bf 4}}
\rput{0}(12,6) {{\bf 2}}
\rput{0}(12,7) {{\bf }}
\rput{0}(12,8) {{\bf }}
\rput{0}(12,9) {{\bf }}
\rput{0}(12,10){{\bf }}
\rput{0}(12,11){{\bf }}
\rput{0}(12,12){{\bf }}
\rput{0}(12,13){{\bf 2}}
\rput{0}(12,14){{\bf 4}}
\rput{0}(13,0) {{\bf 3}}
\rput{0}(13,1) {{\bf 2}}
\rput{0}(13,2) {{\bf 4}}
\rput{0}(13,3) {{\bf 1}}
\rput{0}(13,4) {{\bf 3}}
\rput{0}(13,5) {{\bf 2}}
\rput{0}(13,6) {{\bf }}
\rput{0}(13,7) {{\bf }}
\rput{0}(13,8) {{\bf }}
\rput{0}(13,9) {{\bf }}
\rput{0}(13,10){{\bf }}
\rput{0}(13,11){{\bf }}
\rput{0}(13,12){{\bf 1}}
\rput{0}(13,13){{\bf 4}}
\rput{0}(13,14){{\bf 1}}
\rput{0}(14,0) {{\bf 2}}
\rput{0}(14,1) {{\bf 4}}
\rput{0}(14,2) {{\bf 1}}
\rput{0}(14,3) {{\bf 3}}
\rput{0}(14,4) {{\bf 2}}
\rput{0}(14,5) {{\bf }}
\rput{0}(14,6) {{\bf }}
\rput{0}(14,7) {{\bf }}
\rput{0}(14,8) {{\bf }}
\rput{0}(14,9) {{\bf }}
\rput{0}(14,10){{\bf }}
\rput{0}(14,11){{\bf 1}}
\rput{0}(14,12){{\bf 4}}
\rput{0}(14,13){{\bf 2}}
\rput{0}(14,14){{\bf 3}}
\multirput{0}(14.5,-0.5)(0,1){15}{%
\psline[linewidth=2pt,linecolor=black]{->}(0.2,-0.2)(-0.1,0.1)
}
\uput[0](14.7,-0.7){D1}
\uput[0](14.7, 0.3){D2}
\uput[0](14.7, 1.3){D3}
\uput[0](14.7, 2.3){D4}
\uput[0](14.7, 3.3){D5}
\uput[0](14.7, 4.3){D6}
\uput[0](14.7, 5.3){D7}
\uput[0](14.7, 6.3){D8}
\uput[0](14.7, 7.3){D9}
\uput[0](14.7, 8.3){D10}
\uput[0](14.7, 9.3){D11}
\uput[0](14.7,10.3){D12}
\uput[0](14.7,11.3){D13}
\uput[0](14.7,12.3){D14}
\uput[0](14.7,13.3){D15}
\endpspicture
\caption{ \label{prop.12k+3.fig1}
The 4-coloring of $T(15,15)$ after Step~1 in the case $L=4k+1$.
}
\end{figure}
\noindent
{\bf Step 3.}
The last colored counter-diagonals are D$(6k-1)$ and D$(6k+6)$.
On D$(6k)$ the vertices at $(3k,3k)$ and $(9k+2,9k+1)$ only admit a single
color: either $3$ or $4$. We color the rest of the vertices of D$(6k)$
with colors $1$ and $2$ (again, uniquely).
On D$(6k+5)$ we perform the same procedure; here the vertices with only one
color choice are located at $(3k+3,3k)$ and $(9k+4,9k+4)$.
The contribution to the degree
of the newly colored triangles is zero: the partial degree is still
$\deg f|_R = 8 + 12(k-1)$.
\begin{figure}[htb]
\centering
\psset{xunit=21pt}
\psset{yunit=21pt}
\psset{labelsep=5pt}
\pspicture(-1,-1)(16,15)
\psline*[linecolor=lightgray](0,2)(0,3)(-0.5,2.5)(-0.5,2)(0,2)
\psline*[linecolor=lightgray](0,1)(2,1)(1,0)(1,2)(0,1)
\psline*[linecolor=lightgray](2,0)(3,0)(2.5,-0.5)(2,-0.5)(2,0)
\psline*[linecolor=lightgray](0,14)(-0.5,14)(-0.5,13.5)(0,14)
\psline*[linecolor=lightgray](0,13)(1,13)(0,12)(0,13)
\psline*[linecolor=lightgray](1,12)(2,12)(1,11)(1,12)
\psline*[linecolor=lightgray](2,11)(3,11)(2,10)(2,11)
\psline*[linecolor=lightgray](3,10)(4,10)(3,9)(3,10)
\psline*[linecolor=lightgray](3,9)(3,8)(2,8)(3,9)
\psline*[linecolor=lightgray](4,8)(4,7)(3,7)(4,8)
\psline*[linecolor=lightgray](5,7)(5,6)(4,6)(5,7)
\psline*[linecolor=lightgray](5,5)(7,5)(6,4)(6,6)(5,5)
\psline*[linecolor=lightgray](7,3)(7,4)(8,4)(7,3)
\psline*[linecolor=lightgray](8,2)(8,3)(9,3)(8,2)
\psline*[linecolor=lightgray](9,1)(9,2)(10,2)(9,1)
\psline*[linecolor=lightgray](10,2)(11,2)(11,4)(12,4)(10,2)
\psline*[linecolor=lightgray](2,14)(2,14.5)(2.5,14.5)(2,14)
\psline*[linecolor=lightgray](3,14)(3,13)(4,14)(3,14)
\psline*[linecolor=lightgray](4,13)(4,12)(5,13)(4,13)
\psline*[linecolor=lightgray](5,12)(5,11)(6,12)(5,12)
\psline*[linecolor=lightgray](4,10)(5,10)(5,11)(4,10)
\psline*[linecolor=lightgray](5,9)(6,9)(6,10)(5,9)
\psline*[linecolor=lightgray](6,8)(7,8)(7,9)(6,8)
\psline*[linecolor=lightgray](7,7)(9,7)(8,6)(8,8)(7,7)
\psline*[linecolor=lightgray](9,5)(9,6)(10,6)(9,5)
\psline*[linecolor=lightgray](10,4)(10,5)(11,5)(10,4)
\psline*[linecolor=lightgray](12,4)(13,4)(13,5)(12,4)
\psline*[linecolor=lightgray](13,3)(14,3)(14,4)(13,3)
\psline*[linecolor=lightgray](14,2)(14.5,2)(14.5,2.5)(14,2)
\psline*[linecolor=lightgray](13,14)(13.5,14.5)(14,14.5)(14,14)(13,14)
\psline*[linecolor=lightgray](14,13)(14,14)(14.5,14)(14.5,13.5)(14,13)
\psline*[linecolor=lightgray](0,2)(1,2)(1,3)(0,2)
\psline*[linecolor=lightgray](2,0)(2,1)(3,1)(2,0)
\psline*[linecolor=lightgray](3,0)(3,1)(4,1)(3,0)
\psline*[linecolor=lightgray](8,0)(8,1)(9,1)(8,0)
\psline*[linecolor=lightgray](8,1)(8,2)(9,2)(8,1)
\psline*[linecolor=lightgray](7,1)(7,2)(8,2)(7,1)
\psline*[linecolor=lightgray](7,2)(7,3)(8,3)(7,2)
\psline*[linecolor=lightgray](6,2)(6,3)(7,3)(6,2)
\psline*[linecolor=lightgray](6,3)(6,4)(7,4)(6,3)
\psline*[linecolor=lightgray](2,7)(3,7)(3,8)(2,7)
\psline*[linecolor=lightgray](3,6)(4,6)(4,7)(3,6)
\psline*[linecolor=lightgray](3,5)(4,5)(4,6)(3,5)
\psline*[linecolor=lightgray](4,5)(5,5)(5,6)(4,5)
\psline*[linecolor=lightgray](4,14)(4,14.5)(4.5,14.5)(4,14)
\psline*[linecolor=lightgray](4,0)(4,-0.5)(4.5,-0.5)(5,0)(4,0)
\psline*[linecolor=lightgray](3,14)(3,14.5)(3.5,14.5)(3,14)
\psline*[linecolor=lightgray](3,0)(3,-0.5)(3.5,-0.5)(4,0)(3,0)
\psline*[linecolor=lightgray](4,14)(5,14)(4,13)(4,14)
\psline*[linecolor=lightgray](5,14)(6,14)(5,13)(5,14)
\psline*[linecolor=lightgray](5,13)(6,13)(5,12)(5,13)
\psline*[linecolor=lightgray](6,13)(7,13)(6,12)(6,13)
\psline*[linecolor=lightgray](7,12)(8,12)(7,11)(7,12)
\psline*[linecolor=lightgray](8,11)(9,11)(8,10)(8,11)
\psline*[linecolor=lightgray](9,14)(9,14.5)(9.5,14.5)(9,14)
\psline*[linecolor=lightgray](10,13)(10,14)(11,14)(10,13)
\psline*[linecolor=lightgray](9,0)(10,0)(9.5,-0.5)(9,-0.5)(9,0)
\psline*[linecolor=lightgray](1,3)(2,3)(2,4)(1,3)
\psline*[linecolor=lightgray](2,5)(3,5)(3,6)(2,5)
\psline*[linecolor=lightgray](5,0)(6,0)(5.5,-0.5)(5,-0.5)(5,0)
\psline*[linecolor=lightgray](4,0)(4,1)(5,1)(4,0)
\psline*[linecolor=lightgray](3,1)(3,2)(4,2)(3,1)
\psline*[linecolor=lightgray](3,2)(3,3)(4,3)(3,2)
\psline*[linecolor=lightgray](4,2)(4,3)(5,3)(4,2)
\psline*[linecolor=lightgray](5,14)(5,14.5)(5.5,14.5)(5,14)
\psline*[linecolor=lightgray](6,13)(6,14)(7,14)(6,13)
\psline*[linecolor=lightgray](7,12)(7,13)(8,13)(7,12)
\psline*[linecolor=lightgray](8,11)(8,12)(9,12)(8,11)
\psline*[linecolor=lightgray](9,10)(9,11)(10,11)(9,10)
\psline*[linecolor=lightgray](9,11)(9,12)(10,12)(9,11)
\psline*[linecolor=lightgray](10,12)(10,13)(11,13)(10,12)
\psline*[linecolor=lightgray](11,1)(12,1)(12,2)(11,1)
\psline*[linecolor=lightgray](12,0)(13,0)(13,1)(12,0)
\psline*[linecolor=lightgray](14,0)(14,-0.5)(13.5,-0.5)(14,0)
\psline*[linecolor=darkgray](0,3)(-0.5,3)(-0.5,2.5)(0,3)
\psline*[linecolor=darkgray](0,2)(1,2)(0,1)(0,2)
\psline*[linecolor=darkgray](1,0)(2,0)(2,1)(1,0)
\psline*[linecolor=darkgray](3,0)(3,-0.5)(2.5,-0.5)(3,0)
\psline*[linecolor=darkgray](0,13)(-0.5,13)(-0.5,13.5)(0,14)(0,13)
\psline*[linecolor=darkgray](0,12)(1,12)(1,13)(0,12)
\psline*[linecolor=darkgray](1,11)(2,11)(2,12)(1,11)
\psline*[linecolor=darkgray](2,10)(3,10)(3,11)(2,10)
\psline*[linecolor=darkgray](3,7)(3,8)(4,8)(3,7)
\psline*[linecolor=darkgray](4,6)(4,7)(5,7)(4,6)
\psline*[linecolor=darkgray](5,5)(5,6)(6,6)(5,5)
\psline*[linecolor=darkgray](6,4)(7,4)(7,5)(6,4)
\psline*[linecolor=darkgray](7,3)(8,3)(8,4)(7,3)
\psline*[linecolor=darkgray](8,2)(9,2)(9,3)(8,2)
\psline*[linecolor=darkgray](2,14)(2.5,14.5)(3,14.5)(3,14)(2,14)
\psline*[linecolor=darkgray](3,13)(4,13)(4,14)(3,13)
\psline*[linecolor=darkgray](4,12)(5,12)(5,13)(4,12)
\psline*[linecolor=darkgray](5,9)(5,10)(6,10)(5,9)
\psline*[linecolor=darkgray](6,8)(6,9)(7,9)(6,8)
\psline*[linecolor=darkgray](7,7)(7,8)(8,8)(7,7)
\psline*[linecolor=darkgray](8,6)(9,6)(9,7)(8,6)
\psline*[linecolor=darkgray](9,5)(10,5)(10,6)(9,5)
\psline*[linecolor=darkgray](10,4)(11,4)(11,5)(10,4)
\psline*[linecolor=darkgray](13,3)(13,4)(14,4)(13,3)
\psline*[linecolor=darkgray](14,2)(14,3)(14.5,3)(14.5,2.5)(14,2)
\psline*[linecolor=darkgray](13,14)(13,14.5)(13.5,14.5)(13,14)
\psline*[linecolor=darkgray](14,13)(14.5,13)(14.5,13.5)(14,13)
\psline*[linecolor=darkgray](0,2)(0,3)(1,3)(0,2)
\psline*[linecolor=darkgray](1,2)(1,3)(2,3)(1,2)
\psline*[linecolor=darkgray](2,0)(3,0)(3,1)(2,0)
\psline*[linecolor=darkgray](2,1)(3,1)(3,2)(2,1)
\psline*[linecolor=darkgray](3,0)(4,0)(4,1)(3,0)
\psline*[linecolor=darkgray](8,0)(9,0)(9,1)(8,0)
\psline*[linecolor=darkgray](8,1)(9,1)(9,2)(8,1)
\psline*[linecolor=darkgray](7,1)(8,1)(8,2)(7,1)
\psline*[linecolor=darkgray](7,2)(8,2)(8,3)(7,2)
\psline*[linecolor=darkgray](6,2)(7,2)(7,3)(6,2)
\psline*[linecolor=darkgray](6,3)(7,3)(7,4)(6,3)
\psline*[linecolor=darkgray](5,3)(6,3)(6,4)(5,3)
\psline*[linecolor=darkgray](2,7)(2,8)(3,8)(2,7)
\psline*[linecolor=darkgray](3,6)(3,7)(4,7)(3,6)
\psline*[linecolor=darkgray](3,5)(3,6)(4,6)(3,5)
\psline*[linecolor=darkgray](4,5)(4,6)(5,6)(4,5)
\psline*[linecolor=darkgray](4,4)(4,5)(5,5)(4,4)
\psline*[linecolor=darkgray](4,0)(4,-0.5)(3.5,-0.5)(4,0)
\psline*[linecolor=darkgray](5,0)(5,-0.5)(4.5,-0.5)(5,0)
\psline*[linecolor=darkgray](3,14)(3.5,14.5)(4,14.5)(4,14)(3,14)
\psline*[linecolor=darkgray](4,14)(4.5,14.5)(5,14.5)(5,14)(4,14)
\psline*[linecolor=darkgray](4,13)(5,13)(5,14)(4,13)
\psline*[linecolor=darkgray](5,13)(6,13)(6,14)(5,13)
\psline*[linecolor=darkgray](5,12)(6,12)(6,13)(5,12)
\psline*[linecolor=darkgray](6,12)(7,12)(7,13)(6,12)
\psline*[linecolor=darkgray](7,11)(8,11)(8,12)(7,11)
\psline*[linecolor=darkgray](8,10)(9,10)(9,11)(8,10)
\psline*[linecolor=darkgray](10,13)(11,13)(11,14)(10,13)
\psline*[linecolor=darkgray](9,14)(10,14)(10,14.5)(9.5,14.5)(9,14)
\psline*[linecolor=darkgray](10,0)(10,-0.5)(9.5,-0.5)(10,0)
\psline*[linecolor=darkgray](2,4)(2,5)(3,5)(2,4)
\psline*[linecolor=darkgray](3,3)(4,3)(4,4)(3,3)
\psline*[linecolor=darkgray](3,2)(4,2)(4,3)(3,2)
\psline*[linecolor=darkgray](3,1)(4,1)(4,2)(3,1)
\psline*[linecolor=darkgray](4,0)(5,0)(5,1)(4,0)
\psline*[linecolor=darkgray](6,0)(6,-0.5)(5.5,-0.5)(6,0)
\psline*[linecolor=darkgray](5,14)(5.5,14.5)(6,14.5)(6,14)(5,14)
\psline*[linecolor=darkgray](6,13)(7,13)(7,14)(6,13)
\psline*[linecolor=darkgray](7,12)(8,12)(8,13)(7,12)
\psline*[linecolor=darkgray](8,11)(9,11)(9,12)(8,11)
\psline*[linecolor=darkgray](9,11)(10,11)(10,12)(9,11)
\psline*[linecolor=darkgray](9,12)(10,12)(10,13)(9,12)
\psline*[linecolor=darkgray](11,1)(11,2)(12,2)(11,1)
\psline*[linecolor=darkgray](12,0)(12,1)(13,1)(12,0)
\psline*[linecolor=darkgray](13,0)(14,0)(13.5,-0.5)(13,-0.5)(13,0)
\psline[linewidth=2pt,linecolor=blue](-0.5,0)(14.5,0)
\psline[linewidth=2pt,linecolor=blue](-0.5,1)(14.5,1)
\psline[linewidth=2pt,linecolor=blue](-0.5,2)(14.5,2)
\psline[linewidth=2pt,linecolor=blue](-0.5,3)(14.5,3)
\psline[linewidth=2pt,linecolor=blue](-0.5,4)(14.5,4)
\psline[linewidth=2pt,linecolor=blue](-0.5,5)(14.5,5)
\psline[linewidth=2pt,linecolor=blue](-0.5,6)(14.5,6)
\psline[linewidth=2pt,linecolor=blue](-0.5,7)(14.5,7)
\psline[linewidth=2pt,linecolor=blue](-0.5,8)(14.5,8)
\psline[linewidth=2pt,linecolor=blue](-0.5,9)(14.5,9)
\psline[linewidth=2pt,linecolor=blue](-0.5,10)(14.5,10)
\psline[linewidth=2pt,linecolor=blue](-0.5,11)(14.5,11)
\psline[linewidth=2pt,linecolor=blue](-0.5,12)(14.5,12)
\psline[linewidth=2pt,linecolor=blue](-0.5,13)(14.5,13)
\psline[linewidth=2pt,linecolor=blue](-0.5,14)(14.5,14)
\psline[linewidth=2pt,linecolor=blue](0,-0.5)(0,14.5)
\psline[linewidth=2pt,linecolor=blue](1,-0.5)(1,14.5)
\psline[linewidth=2pt,linecolor=blue](2,-0.5)(2,14.5)
\psline[linewidth=2pt,linecolor=blue](3,-0.5)(3,14.5)
\psline[linewidth=2pt,linecolor=blue](4,-0.5)(4,14.5)
\psline[linewidth=2pt,linecolor=blue](5,-0.5)(5,14.5)
\psline[linewidth=2pt,linecolor=blue](6,-0.5)(6,14.5)
\psline[linewidth=2pt,linecolor=blue](7,-0.5)(7,14.5)
\psline[linewidth=2pt,linecolor=blue](8,-0.5)(8,14.5)
\psline[linewidth=2pt,linecolor=blue](9,-0.5)(9,14.5)
\psline[linewidth=2pt,linecolor=blue](10,-0.5)(10,14.5)
\psline[linewidth=2pt,linecolor=blue](11,-0.5)(11,14.5)
\psline[linewidth=2pt,linecolor=blue](12,-0.5)(12,14.5)
\psline[linewidth=2pt,linecolor=blue](13,-0.5)(13,14.5)
\psline[linewidth=2pt,linecolor=blue](14,-0.5)(14,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,-0.5)(14.5,14.5)
\psline[linewidth=2pt,linecolor=blue](0.5,-0.5)(14.5,13.5)
\psline[linewidth=2pt,linecolor=blue](1.5,-0.5)(14.5,12.5)
\psline[linewidth=2pt,linecolor=blue](2.5,-0.5)(14.5,11.5)
\psline[linewidth=2pt,linecolor=blue](3.5,-0.5)(14.5,10.5)
\psline[linewidth=2pt,linecolor=blue](4.5,-0.5)(14.5,9.5)
\psline[linewidth=2pt,linecolor=blue](5.5,-0.5)(14.5,8.5)
\psline[linewidth=2pt,linecolor=blue](6.5,-0.5)(14.5,7.5)
\psline[linewidth=2pt,linecolor=blue](7.5,-0.5)(14.5,6.5)
\psline[linewidth=2pt,linecolor=blue](8.5,-0.5)(14.5,5.5)
\psline[linewidth=2pt,linecolor=blue](9.5,-0.5)(14.5,4.5)
\psline[linewidth=2pt,linecolor=blue](10.5,-0.5)(14.5,3.5)
\psline[linewidth=2pt,linecolor=blue](11.5,-0.5)(14.5,2.5)
\psline[linewidth=2pt,linecolor=blue](12.5,-0.5)(14.5,1.5)
\psline[linewidth=2pt,linecolor=blue](13.5,-0.5)(14.5,0.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,0.5)(13.5,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,1.5)(12.5,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,2.5)(11.5,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,3.5)(10.5,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,4.5)(9.5,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,5.5)(8.5,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,6.5)(7.5,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,7.5)(6.5,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,8.5)(5.5,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,9.5)(4.5,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,10.5)(3.5,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,11.5)(2.5,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,12.5)(1.5,14.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,13.5)(0.5,14.5)
\multirput{0}(0,0)(0,1){15}{%
\multirput{0}(0,0)(1,0){15}{%
\pscircle*[linecolor=white]{8pt}
\pscircle[linewidth=1pt,linecolor=black] {8pt}
}
}
\rput{0}(0,0) {{\bf 4}}
\rput{0}(0,1) {{\bf 1}}
\rput{0}(0,2) {{\bf 3}}
\rput{0}(0,3) {{\bf 2}}
\rput{0}(0,4) {{\bf 1}}
\rput{0}(0,5) {{\bf 4}}
\rput{0}(0,6) {{\bf 2}}
\rput{0}(0,7) {{\bf 3}}
\rput{0}(0,8) {{\bf 4}}
\rput{0}(0,9) {{\bf 2}}
\rput{0}(0,10) {{\bf 1}}
\rput{0}(0,11) {{\bf 4}}
\rput{0}(0,12) {{\bf 2}}
\rput{0}(0,13) {{\bf 3}}
\rput{0}(0,14) {{\bf 1}}
\rput{0}(1,0) {{\bf 2}}
\rput{0}(1,1) {{\bf 3}}
\rput{0}(1,2) {{\bf 2}}
\rput{0}(1,3) {{\bf 1}}
\rput{0}(1,4) {{\bf 4}}
\rput{0}(1,5) {{\bf 2}}
\rput{0}(1,6) {{\bf 3}}
\rput{0}(1,7) {{\bf 4}}
\rput{0}(1,8) {{\bf 2}}
\rput{0}(1,9) {{\bf 1}}
\rput{0}(1,10) {{\bf 4}}
\rput{0}(1,11) {{\bf 2}}
\rput{0}(1,12) {{\bf 3}}
\rput{0}(1,13) {{\bf 1}}
\rput{0}(1,14) {{\bf 4}}
\rput{0}(2,0) {{\bf 3}}
\rput{0}(2,1) {{\bf 1}}
\rput{0}(2,2) {{\bf 4}}
\rput{0}(2,3) {{\bf 3}}
\rput{0}(2,4) {{\bf 2}}
\rput{0}(2,5) {{\bf 1}}
\rput{0}(2,6) {{\bf 4}}
\rput{0}(2,7) {{\bf 2}}
\rput{0}(2,8) {{\bf 1}}
\rput{0}(2,9) {{\bf 4}}
\rput{0}(2,10) {{\bf 2}}
\rput{0}(2,11) {{\bf 3}}
\rput{0}(2,12) {{\bf 1}}
\rput{0}(2,13) {{\bf 4}}
\rput{0}(2,14) {{\bf 2}}
\rput{0}(3,0) {{\bf 1}}
\rput{0}(3,1) {{\bf 2}}
\rput{0}(3,2) {{\bf 3}}
\rput{0}(3,3) {{\bf 1}}
\rput{0}(3,4) {{\bf 4}}
\rput{0}(3,5) {{\bf 3}}
\rput{0}(3,6) {{\bf 2}}
\rput{0}(3,7) {{\bf 1}}
\rput{0}(3,8) {{\bf 3}}
\rput{0}(3,9) {{\bf 2}}
\rput{0}(3,10) {{\bf 3}}
\rput{0}(3,11) {{\bf 1}}
\rput{0}(3,12) {{\bf 4}}
\rput{0}(3,13) {{\bf 2}}
\rput{0}(3,14) {{\bf 3}}
\rput{0}(4,0) {{\bf 2}}
\rput{0}(4,1) {{\bf 3}}
\rput{0}(4,2) {{\bf 1}}
\rput{0}(4,3) {{\bf 2}}
\rput{0}(4,4) {{\bf 3}}
\rput{0}(4,5) {{\bf 2}}
\rput{0}(4,6) {{\bf 1}}
\rput{0}(4,7) {{\bf 3}}
\rput{0}(4,8) {{\bf 2}}
\rput{0}(4,9) {{\bf 4}}
\rput{0}(4,10) {{\bf 1}}
\rput{0}(4,11) {{\bf 4}}
\rput{0}(4,12) {{\bf 2}}
\rput{0}(4,13) {{\bf 3}}
\rput{0}(4,14) {{\bf 1}}
\rput{0}(5,0) {{\bf 3}}
\rput{0}(5,1) {{\bf 1}}
\rput{0}(5,2) {{\bf 4}}
\rput{0}(5,3) {{\bf 3}}
\rput{0}(5,4) {{\bf 4}}
\rput{0}(5,5) {{\bf 1}}
\rput{0}(5,6) {{\bf 3}}
\rput{0}(5,7) {{\bf 2}}
\rput{0}(5,8) {{\bf 4}}
\rput{0}(5,9) {{\bf 1}}
\rput{0}(5,10) {{\bf 3}}
\rput{0}(5,11) {{\bf 2}}
\rput{0}(5,12) {{\bf 3}}
\rput{0}(5,13) {{\bf 1}}
\rput{0}(5,14) {{\bf 2}}
\rput{0}(6,0) {{\bf 1}}
\rput{0}(6,1) {{\bf 4}}
\rput{0}(6,2) {{\bf 3}}
\rput{0}(6,3) {{\bf 1}}
\rput{0}(6,4) {{\bf 2}}
\rput{0}(6,5) {{\bf 3}}
\rput{0}(6,6) {{\bf 2}}
\rput{0}(6,7) {{\bf 4}}
\rput{0}(6,8) {{\bf 1}}
\rput{0}(6,9) {{\bf 3}}
\rput{0}(6,10) {{\bf 2}}
\rput{0}(6,11) {{\bf 4}}
\rput{0}(6,12) {{\bf 1}}
\rput{0}(6,13) {{\bf 2}}
\rput{0}(6,14) {{\bf 3}}
\rput{0}(7,0) {{\bf 4}}
\rput{0}(7,1) {{\bf 3}}
\rput{0}(7,2) {{\bf 1}}
\rput{0}(7,3) {{\bf 2}}
\rput{0}(7,4) {{\bf 3}}
\rput{0}(7,5) {{\bf 1}}
\rput{0}(7,6) {{\bf 4}}
\rput{0}(7,7) {{\bf 1}}
\rput{0}(7,8) {{\bf 3}}
\rput{0}(7,9) {{\bf 2}}
\rput{0}(7,10) {{\bf 4}}
\rput{0}(7,11) {{\bf 1}}
\rput{0}(7,12) {{\bf 2}}
\rput{0}(7,13) {{\bf 3}}
\rput{0}(7,14) {{\bf 1}}
\rput{0}(8,0) {{\bf 3}}
\rput{0}(8,1) {{\bf 1}}
\rput{0}(8,2) {{\bf 2}}
\rput{0}(8,3) {{\bf 3}}
\rput{0}(8,4) {{\bf 1}}
\rput{0}(8,5) {{\bf 4}}
\rput{0}(8,6) {{\bf 2}}
\rput{0}(8,7) {{\bf 3}}
\rput{0}(8,8) {{\bf 2}}
\rput{0}(8,9) {{\bf 4}}
\rput{0}(8,10) {{\bf 1}}
\rput{0}(8,11) {{\bf 2}}
\rput{0}(8,12) {{\bf 3}}
\rput{0}(8,13) {{\bf 1}}
\rput{0}(8,14) {{\bf 4}}
\rput{0}(9,0) {{\bf 1}}
\rput{0}(9,1) {{\bf 2}}
\rput{0}(9,2) {{\bf 3}}
\rput{0}(9,3) {{\bf 1}}
\rput{0}(9,4) {{\bf 4}}
\rput{0}(9,5) {{\bf 2}}
\rput{0}(9,6) {{\bf 3}}
\rput{0}(9,7) {{\bf 1}}
\rput{0}(9,8) {{\bf 4}}
\rput{0}(9,9) {{\bf 1}}
\rput{0}(9,10) {{\bf 2}}
\rput{0}(9,11) {{\bf 3}}
\rput{0}(9,12) {{\bf 1}}
\rput{0}(9,13) {{\bf 4}}
\rput{0}(9,14) {{\bf 3}}
\rput{0}(10,0) {{\bf 2}}
\rput{0}(10,1) {{\bf 4}}
\rput{0}(10,2) {{\bf 1}}
\rput{0}(10,3) {{\bf 4}}
\rput{0}(10,4) {{\bf 2}}
\rput{0}(10,5) {{\bf 3}}
\rput{0}(10,6) {{\bf 1}}
\rput{0}(10,7) {{\bf 4}}
\rput{0}(10,8) {{\bf 2}}
\rput{0}(10,9) {{\bf 3}}
\rput{0}(10,10){{\bf 4}}
\rput{0}(10,11){{\bf 1}}
\rput{0}(10,12){{\bf 2}}
\rput{0}(10,13){{\bf 3}}
\rput{0}(10,14){{\bf 1}}
\rput{0}(11,0) {{\bf 4}}
\rput{0}(11,1) {{\bf 1}}
\rput{0}(11,2) {{\bf 3}}
\rput{0}(11,3) {{\bf 2}}
\rput{0}(11,4) {{\bf 3}}
\rput{0}(11,5) {{\bf 1}}
\rput{0}(11,6) {{\bf 4}}
\rput{0}(11,7) {{\bf 2}}
\rput{0}(11,8) {{\bf 1}}
\rput{0}(11,9) {{\bf 4}}
\rput{0}(11,10){{\bf 1}}
\rput{0}(11,11){{\bf 2}}
\rput{0}(11,12){{\bf 4}}
\rput{0}(11,13){{\bf 1}}
\rput{0}(11,14){{\bf 2}}
\rput{0}(12,0) {{\bf 1}}
\rput{0}(12,1) {{\bf 3}}
\rput{0}(12,2) {{\bf 2}}
\rput{0}(12,3) {{\bf 4}}
\rput{0}(12,4) {{\bf 1}}
\rput{0}(12,5) {{\bf 4}}
\rput{0}(12,6) {{\bf 2}}
\rput{0}(12,7) {{\bf 1}}
\rput{0}(12,8) {{\bf 4}}
\rput{0}(12,9) {{\bf 2}}
\rput{0}(12,10){{\bf 3}}
\rput{0}(12,11){{\bf 4}}
\rput{0}(12,12){{\bf 3}}
\rput{0}(12,13){{\bf 2}}
\rput{0}(12,14){{\bf 4}}
\rput{0}(13,0) {{\bf 3}}
\rput{0}(13,1) {{\bf 2}}
\rput{0}(13,2) {{\bf 4}}
\rput{0}(13,3) {{\bf 1}}
\rput{0}(13,4) {{\bf 3}}
\rput{0}(13,5) {{\bf 2}}
\rput{0}(13,6) {{\bf 1}}
\rput{0}(13,7) {{\bf 4}}
\rput{0}(13,8) {{\bf 2}}
\rput{0}(13,9) {{\bf 3}}
\rput{0}(13,10){{\bf 4}}
\rput{0}(13,11){{\bf 2}}
\rput{0}(13,12){{\bf 1}}
\rput{0}(13,13){{\bf 4}}
\rput{0}(13,14){{\bf 1}}
\rput{0}(14,0) {{\bf 2}}
\rput{0}(14,1) {{\bf 4}}
\rput{0}(14,2) {{\bf 1}}
\rput{0}(14,3) {{\bf 3}}
\rput{0}(14,4) {{\bf 2}}
\rput{0}(14,5) {{\bf 1}}
\rput{0}(14,6) {{\bf 4}}
\rput{0}(14,7) {{\bf 2}}
\rput{0}(14,8) {{\bf 3}}
\rput{0}(14,9) {{\bf 4}}
\rput{0}(14,10){{\bf 2}}
\rput{0}(14,11){{\bf 1}}
\rput{0}(14,12){{\bf 4}}
\rput{0}(14,13){{\bf 2}}
\rput{0}(14,14){{\bf 3}}
\multirput{0}(14.5,-0.5)(0,1){15}{%
\psline[linewidth=2pt,linecolor=black]{->}(0.2,-0.2)(-0.1,0.1)
}
\uput[0](14.7,-0.7){D1}
\uput[0](14.7, 0.3){D2}
\uput[0](14.7, 1.3){D3}
\uput[0](14.7, 2.3){D4}
\uput[0](14.7, 3.3){D5}
\uput[0](14.7, 4.3){D6}
\uput[0](14.7, 5.3){D7}
\uput[0](14.7, 6.3){D8}
\uput[0](14.7, 7.3){D9}
\uput[0](14.7, 8.3){D10}
\uput[0](14.7, 9.3){D11}
\uput[0](14.7,10.3){D12}
\uput[0](14.7,11.3){D13}
\uput[0](14.7,12.3){D14}
\uput[0](14.7,13.3){D15}
\endpspicture
\caption{ \label{prop.12k+3.fig3}
The 4-coloring of $T(15,15)$ after Step~4 in the case $L=4k+1$.
}
\end{figure}
On D$(6k+1)$ there are two pairs of nearby vertices that admit only
one color among $3$ and $4$. One pair is $(3k+1,3k)$ and
$(3k,3k+1)$; the other one is $(9k+3,9k+1)$ and $(9k+2,9k+2)$.
We color the other vertices on D$(6k+1)$ with colors $3$ and $4$
using the following rule: those with $x$-coordinate satisfying
$3k+1<x<9k+2$ are colored $3$ (resp.\ $4$) if $k$ is odd (resp.\ even).
In the end, $6k+2$ vertices on D$(6k+1)$ share one color and the remaining
$6k+1$ share the other.
On D$(6k+4)$ we also find two pairs of vertices that admit only one color
among $3$ and $4$: one pair is $(3k+3,3k+1)$ and $(3k+2,3k+2)$; the other
one is $(9k+4,9k+3)$ and $(9k+3,9k+4)$. The other vertices on D$(6k+4)$ are
then colored $3$ and $4$ with the help of the following rules: 1) those
with $x$-coordinate satisfying $3k+3<x<9k+3$ are colored $3$ (resp.\ $4$)
if $k$ is odd (resp.\ even); 2) the number of vertices colored $3$ is
the same as on D$(6k+1)$. This second rule is used to determine the color
of the vertex at $(3k+1,3k+3)$.
The contribution to the partial degree of these
new triangles is $-4$; thus, the partial degree of $f$ is
$\deg f|_R = 4 + 12(k-1)$.
\medskip
\noindent
{\bf Step 4.}
On D$(6k+2)$ there are two vertices, located at $(3k,3k+2)$ and $(9k+2,9k+3)$,
whose colors are fixed to either $1$ or $2$. We color the two vertices
$(3k+1,3k+1)$ and $(9k+3,9k+2)$ with the same color as $(9k+2,9k+3)$.
At the end, there are $6k+4$ vertices having one color, and $6k+1$ having
the other one.
On D$(6k+3)$ there are two vertices whose colors are fixed to either $3$ or
$4$. There are also four additional vertices whose colors are fixed to either
$1$ or $2$. These six vertices are located at
$(3k+2,3k+1)$, $(3k+1,3k+2)$, $(3k,3k+3)$, $(9k+4,9k+2)$, $(9k+3,9k+3)$,
and $(9k+2,9k+4)$. The other vertices on D$(6k+3)$ are colored $3$ or $4$
(the choice for each vertex is unique).
In Figure~\ref{prop.12k+3.fig3} the final coloring $f$ is depicted.
The increment in the partial degree is $2$. Therefore,
\begin{equation}
\deg f \;=\; 6 + 12(k-1) \;\equiv\; 6 \pmod{12}. \nonumber
\end{equation}
The coloring $f$ of $T(12k+3,12k+3)$ is proper and its degree is congruent
to 6 modulo $12$, as claimed. This completes the proof. \hbox{\hskip 6pt\vrule width6pt height7pt depth1pt \hskip1pt}\bigskip
\section{Further results for $\bm{T(3L,3M)}$} \label{sec.asym}
In the previous section we have proven that $T(3L,3L)$ has
at least one coloring with degree $\equiv 6 \pmod{12}$ for any $L\geq 2$,
and hence $\Kc(T(3L,3L),4)>1$. This result can be used for some other
triangulations with aspect ratio different from $1$:
\begin{theorem} \label{theo.cases}
The number of Kempe equivalence classes $\Kc(T,4)$ is at least two
for any triangulation $T(3Lp,3Lq)$ for $L\geq 2$ and any odd integers
$p,q$.
\end{theorem}
\par\medskip\noindent{\sc Proof.\ }
Theorem~\ref{theo.main} shows that there is a coloring $f$ of
$T(3L,3L)$ for $L\geq 2$ with $\deg(f)\equiv 6 \pmod{12}$. Then,
Lemma~\ref{lemma.tech}(c) proves the claimed result. \hbox{\hskip 6pt\vrule width6pt height7pt depth1pt \hskip1pt}\bigskip
\medskip
In order to obtain more general results, it is convenient to prove the
following simple proposition.
\begin{proposition} \label{prop.T_Lx3}
The degree of any four-coloring of any triangulation $T(L,3)$ or $T(3,L)$
with $L\geq 1$ is zero.
\end{proposition}
\par\medskip\noindent{\sc Proof.\ }
Suppose we compute the degree of a given 4-coloring $c$ of the triangulation
$T(3,L)$ by counting those triangular faces colored $123$. We can focus
on the vertices colored $3$. Let us suppose the vertex $x$ is colored $3$.
Because the 4-coloring $c$ is proper, none of the neighbors of $x$
can be colored $3$.
Moreover, because the triangulation has width $3$, the two neighbors of $x$
along the horizontal axis are also adjacent to each other, so they take
different colors, say $1$ and $2$. This situation is depicted in
Figure~\ref{prop.T_Lx3.fig}.
There are only $9$ distinct ways to complete the coloring of this subgraph,
and each of them contributes zero to the degree.
Therefore, the contribution of all vertices colored $3$ to the degree
is zero, and the claimed result is proven. \hbox{\hskip 6pt\vrule width6pt height7pt depth1pt \hskip1pt}\bigskip
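\medskip
\noindent
As an independent sanity check (the proof does not rely on it),
Proposition~\ref{prop.T_Lx3} can also be verified by brute force for small
$L$. The following Python sketch is purely illustrative; it assumes the
conventions suggested by our figures, namely torus adjacency
$(x,y)\sim(x+1,y)$, $(x,y+1)$, $(x+1,y+1)$, and the degree computed as a
signed count of faces colored $\{1,2,3\}$ ($+1$ when the colors read
$1,2,3$ counterclockwise, $-1$ otherwise). Since a different orientation
convention changes the total by at most a global sign, checking that it
vanishes is convention-independent.
\begin{verbatim}
# Brute-force check that deg = 0 for every proper 4-coloring of T(3,3).
from itertools import product

L = 3
verts = [(x, y) for x in range(3) for y in range(L)]

def neighbors(x, y):
    # assumed torus adjacency: right, up, and up-right diagonal
    return [((x + 1) % 3, y), (x, (y + 1) % L), ((x + 1) % 3, (y + 1) % L)]

def proper(c):
    return all(c[v] != c[n] for v in verts for n in neighbors(*v))

def deg(c):
    # signed count of faces colored {1,2,3}; both triangle types are
    # listed with their vertices in counterclockwise order
    s = 0
    for x, y in verts:
        r, u = (x + 1) % 3, (y + 1) % L
        for tri in ([(x, y), (r, y), (r, u)], [(x, y), (r, u), (x, u)]):
            cols = [c[v] for v in tri]
            if sorted(cols) == [1, 2, 3]:
                s += 1 if cols in ([1, 2, 3], [2, 3, 1], [3, 1, 2]) else -1
    return s

assert all(deg(dict(zip(verts, cols))) == 0
           for cols in product([1, 2, 3, 4], repeat=len(verts))
           if proper(dict(zip(verts, cols))))
\end{verbatim}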
\begin{figure}[htb]
\centering
\psset{xunit=40pt}
\psset{yunit=40pt}
\pspicture(-0.5,-0.5)(2.5,2.5)
\psline[linewidth=1pt](0,0)(0,1)(1,1)(0,0)(1,0)(1,1)(1,2)(0,1)
\psline[linewidth=1pt](1,0)(2,1)(1,1)
\psline[linewidth=1pt](1,1)(2,2)(2,1)(2,2)(1,2)
\psline[linewidth=1pt](0,1)(-0.5,1)
\psline[linewidth=1pt](2,1)(2.5,1)
\multirput{0}(0,0)(1,0){2}{%
\rput{0}(0,0){%
\pscircle*[linewidth=1pt,linecolor=white]{9pt}%
\pscircle[linewidth=1pt]{9pt}%
}}
\multirput{0}(0,1)(1,0){3}{%
\rput{0}(0,0){%
\pscircle*[linewidth=1pt,linecolor=white]{9pt}%
\pscircle[linewidth=1pt]{9pt}%
}}
\multirput{0}(1,2)(1,0){2}{%
\rput{0}(0,0){%
\pscircle*[linewidth=1pt,linecolor=white]{9pt}%
\pscircle[linewidth=1pt]{9pt}%
}}
\rput[c]{0}(0,0){$\bm{c_1}$}
\rput[c]{0}(1,0){$\bm{c_2}$}
\rput[c]{0}(0,1){$\bm{1}$}
\rput[c]{0}(1,1){$\bm{3}$}
\rput[c]{0}(1,2){$\bm{c_5}$}
\rput[c]{0}(2,1){$\bm{2}$}
\rput[c]{0}(2,2){$\bm{c_6}$}
\endpspicture
\caption{ \label{prop.T_Lx3.fig}
Subset of the triangulation $T(3,L)$ used in the proof of
Proposition~\ref{prop.T_Lx3}.
}
\end{figure}
The following lemma shows how to build a four-coloring of the triangulation
$T(L,M+3)$ by ``gluing'' four-colorings of the triangulations $T(L,M)$ and
$T(L,3)$ that have the same coloring on the top row. One key point is that
the degree is an invariant under this operation.
\begin{lemma} \label{lemma_plusLx3}
Let us suppose that $c$ is a four-coloring of a triangulation $T(L,M)$
with degree $d$, and that the coloring on the top row is $c_{\rm top}$.
Let us further suppose there exists a four-coloring $c'$ of the triangulation
$T(L,3)$ with the same coloring on the top row $c'_{\rm top} = c_{\rm top}$.
Then, there exists a four-coloring of the triangulation $T(L,M+3)$
with degree $d$.
\end{lemma}
\par\medskip\noindent{\sc Proof.\ }
Because both $T(L,M)$ and $T(L,3)$ are triangulations of a torus with the same
width $L$, and the corresponding colorings $c$ and $c'$
both have the same top-row coloring $c_{\rm top}$, we can
obtain a four-coloring $c''$ of the triangulation $T(L,M+3)$ by
``gluing'' together these two colorings.
This is indeed a proper coloring of $T(L,M+3)$: every edge of
$T(L,M+3)$ corresponds to an edge of one of the two original triangulations
whose endpoints carry the same colors, so properness is inherited from $c$
and $c'$. Likewise, every triangular face belongs to exactly one of the two
pieces, so the degree can be computed as
$\deg(c'')=\deg(c)+\deg(c')=\deg(c)=d$,
since $\deg(c')=0$ by Proposition~\ref{prop.T_Lx3}.
\hbox{\hskip 6pt\vrule width6pt height7pt depth1pt \hskip1pt}\bigskip
\medskip
This lemma allows us to devise an inductive proof that
there is a four-coloring with degree $\equiv 6\pmod{12}$ for any triangulation
$T(3L,3M)$ with $M\geq L$. The base case $L=M$ is already covered by
Theorem~\ref{theo.main}. If we can find a proper four-coloring of the
triangulation $T(3L,3)$ whose top-row coloring equals the top-row
coloring obtained in the proof of Theorem~\ref{theo.main},
then the above lemma can be used to prove the inductive step. The main
issue is therefore to prove the existence of such a coloring for
$T(3L,3)$.
\begin{theorem} \label{theo.asym}
For any triangulation $T(3L,3M)$ with any $L\geq 3$ and $M\geq L$,
there exists a four-coloring $f$ with $\deg(f)\equiv 6\pmod{12}$.
Consequently, the WSK dynamics for four-colorings of $T(3L,3M)$ is
non-ergodic.
\end{theorem}
\par\medskip\noindent{\sc Proof.\ }
The proof is by induction on $M$. The base case $M=L\geq 3$ is proven by
Theorem~\ref{theo.main}. Now suppose that such colorings exist for
all triangulations $T(3L,3M')$ with $L\le M' < M$; we wish to prove that
such a coloring also exists for $M$.
The main idea is to prove the existence of a proper four-coloring of
the triangulation $T(3L,3)$ such that its top row coloring coincides with the
one obtained in the proof of the corresponding case in Theorem~\ref{theo.main}.
To simplify the notation we will denote by $c_i$ the sequence of colors
in the row $i$ of $T(3L,3)$ and by $c_0$ the coloring of the top row of
$T(3L,3L)$ obtained in the proof of Theorem~\ref{theo.main}. Of course, our
goal is to have $c_0=c_3$.
To describe a sequence of colors, we will use the following notation:
$[a_1 a_2 \cdots a_s]^t$ will be the sequence of length $st$ in
which $a_1 a_2 \cdots a_s$ is repeated $t$ times. For example,
$12[34]^3\,2 = 123434342$.
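This notation is also easy to mechanize; the following minimal Python
helper (the name \texttt{expand} is ours and purely illustrative)
reproduces the example above.
\begin{verbatim}
def expand(*parts):
    # Each part is a plain string, or a (block, t) pair meaning [block]^t.
    return "".join(p[0] * p[1] if isinstance(p, tuple) else p
                   for p in parts)

assert expand("12", ("34", 3), "2") == "123434342"
\end{verbatim}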
Our basic strategy is, as in Theorem~\ref{theo.main}, to explicitly construct
four-colorings of $T(3L,3)$ with $L\geq 3$. The construction of such
a coloring will depend on the value of $L$ modulo 4, and we will split
the proof in four cases, $L=4k-2, 4k-1, 4k$, or $L=4k+1$, with $k\in{\mathbb N}$.
The case $L=4k-2$ was the easiest one in the proof of Theorem~\ref{theo.main};
here, however, it is the most elaborate one. Thus, we start the proof
with the easiest cases, and defer the most complex one to the end.
\proofofcase{1}{$L=4k-1$}
Let $t = \lfloor \tfrac{3k-2}{2} \rfloor$.
The top-row coloring obtained from the proof of Case~2 in
Theorem~\ref{theo.main} can be written as
$$
c_0 = c_3 = [1423]^t 1231 [3241]^t 3
$$
when $k$ is even. Then we define $c_1$ and $c_2$ as:
\begin{eqnarray*}
c_2 & = & 3[1423]^t 142 [1324]^t 2 \\
c_1 & = & 23[1423]^t 14 [2413]^t 4.
\end{eqnarray*}
If $k$ is odd, then we have:
\begin{eqnarray*}
c_0 = c_3 & = & [1423]^t 14214241 [3241]^t 3 \\
c_2 & = & 3[1423]^t 1423124 [1324]^t 2 \;=\;
3[1423]^{t+1} 124 [1324]^t 2\\
c_1 & = & 23[1423]^t 14231 [3241]^t 34 \;=\;
23[1423]^{t+1} 1 [3241]^t 34.
\end{eqnarray*}
It is easy to verify that this gives a proper 4-coloring of $T(3L,3)$.
By Proposition~\ref{prop.T_Lx3}, it has zero degree. This completes the proof
of this case.
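The verification can be mechanized as well. The following Python sketch is
illustrative only: it assumes the torus adjacency
$(x,y)\sim(x+1,y)$, $(x,y+1)$, $(x+1,y+1)$ and lists the rows bottom-to-top
as $c_0,c_1,c_2,c_3=c_0$; the other parities and cases can be fed to the
same helper.
\begin{verbatim}
def proper_rows(rows):
    # Properness of stacked rows on the torus; the bottom row is repeated
    # as the top row, so all cyclic constraints are covered.
    n = len(rows[0])
    assert all(len(r) == n for r in rows)
    for y in range(len(rows) - 1):
        for x in range(n):
            if rows[y][x] == rows[y][(x + 1) % n]:        # horizontal edge
                return False
            if rows[y][x] == rows[y + 1][x]:              # vertical edge
                return False
            if rows[y][x] == rows[y + 1][(x + 1) % n]:    # diagonal edge
                return False
    return True

# Case 1 with k = 2 (even), so L = 7 and t = 2:
t = 2
c0 = "1423"*t + "1231" + "3241"*t + "3"
c2 = "3" + "1423"*t + "142" + "1324"*t + "2"
c1 = "23" + "1423"*t + "14" + "2413"*t + "4"
assert proper_rows([c0, c1, c2, c0])
\end{verbatim}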
\proofofcase{2}{$L=4k$}
As for the previous case, let $t = \lfloor \tfrac{3k-2}{2} \rfloor$.
The top-row coloring $c_3=c_0$ is obtained from the proof of Case~3
in Theorem~\ref{theo.main}. When $k$ is even, the sought 4-coloring is
defined as follows:
\begin{eqnarray*}
c_0 = c_3 &=& [1423]^t 1431341 [3241]^t 3 \\
c_2 &=& 3[1423]^t 124132 [4132]^t 4 \;=\; 3[1423]^t 12 [4132]^{t+1} 4\\
c_1 &=& 4[2314]^t 312413 [2413]^t 2 \;=\; 4[2314]^t 31 [2413]^{t+1} 2
\,.
\end{eqnarray*}
If $k$ is odd, then we have:
\begin{eqnarray*}
c_0 = c_3& =& [1423]^t 14234231241 [3241]^t 3 \;=\;
[1423]^{t+1} 4231241 [3241]^t 3 \\
c_2& =& 3 [1423]^t 1423423132 [4132]^t 4 \;=\;
3 [1423]^{t+1} 423132 [4132]^t 4 \\
c_1& =& 4 [2314]^t 2342312413 [2413]^t 2 \;=\;
4 [2314]^t 234231 [2413]^{t+1} 2\,.
\end{eqnarray*}
Again, it is easy to verify that this gives a proper 4-coloring of $T(3L,3)$,
and by Proposition~\ref{prop.T_Lx3}, it has zero degree.
This completes the proof of this case.
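Reusing the helper \texttt{proper\_rows} from the previous sketch, the
even-$k$ rows of this case pass the same check; for instance:
\begin{verbatim}
t = 2   # k = 2 (even), so L = 8
c0 = "1423"*t + "1431341" + "3241"*t + "3"
c2 = "3" + "1423"*t + "124132" + "4132"*t + "4"
c1 = "4" + "2314"*t + "312413" + "2413"*t + "2"
assert proper_rows([c0, c1, c2, c0])
\end{verbatim}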
\clearpage
\proofofcase{3}{$L=4k+1$}
Let $t = \lfloor \tfrac{3k-2}{2} \rfloor$.
The top-row coloring $c_3=c_0$ is obtained from the proof of Case~4
in Theorem~\ref{theo.main}. When $k$ is even, the sought 4-coloring is
defined as follows:
\begin{eqnarray*}
c_0 = c_3 & = & [1423]^t 1421423421 [3241]^t 3 \\
c_2 & = & 3[1423]^t 14214213 [2413]^t 42 \\
c_1 & = & 2[3142]^t 314214213 [2413]^t 4 \;=\;
2[3142]^{t+1} 14213 [2413]^t 4\,.
\end{eqnarray*}
If $k$ is odd, then we have:
\begin{eqnarray*}
c_0 = c_3 & = & [1423]^{t+1} 1231431241 [3241]^t 3 \\
c_2 & = & [1423]^{t+1} 312312413 [2413]^t 42 \;=\;
[1423]^{t+1} 31231 [2413]^{t+1} 42 \\
c_1 & = & [2314]^{t+1} 2312312413 [2413]^t 2 \;=\;
[2314]^{t+1} 231231 [2413]^{t+1} 2 \,.
\end{eqnarray*}
Again, it is easy to verify that this gives a proper 4-coloring of $T(3L,3)$,
and by Proposition~\ref{prop.T_Lx3}, it has zero degree.
This completes the proof of this case.
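The simplification used for $c_1$, which absorbs the trailing block $2413$
of the middle segment into the bracket, is a plain string identity; an
illustrative check:
\begin{verbatim}
# 2312312413 [2413]^t  =  231231 [2413]^{t+1}  for every t >= 0
for t in range(6):
    assert "2312312413" + "2413"*t == "231231" + "2413"*(t + 1)
\end{verbatim}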
\proofofcase{4}{$L=4k-2$}
We cannot use the results of the proof of Theorem~\ref{theo.main},
as the resulting four-coloring for $T(3L,3L)$ is characterized by
the fact that any row (horizontal, vertical or inclined) is bi-colored.
Thus, we cannot obtain a four-coloring of $T(12k-6,3)$ with a bi-colored
horizontal row.
We first need to obtain a proper four-coloring $f$ of $T(12k-6,12k-6)$ with
$\deg(f)\equiv 6\pmod{12}$, and such that there is a proper four-coloring
of $T(12k-6,3)$ compatible with the coloring of one of the horizontal rows
of $f$. We obtain such a coloring $f$ by a constructive proof
similar to those explained in the proof of Theorem~\ref{theo.main}.
The notation we use is the same as in Theorem~\ref{theo.main}.
Let us consider the triangulation $T=T(12k-6,12k-6)$ with integer $k\ge 2$
(the case $k=2$ will illustrate our ideas). Our goal is to obtain
a four-coloring $f$ of $T$ with degree $\deg(f)\equiv 6 \pmod{12}$.
The algorithm to obtain such a coloring consists of four steps:
\medskip
\noindent
{\bf Step 1.}
We start by coloring counter-diagonal D1: we color $1$ the vertices with
$x$-coordinates $1\leq x \leq 6k-3$; the other $6k-3$ vertices are colored $2$.
On D2, we color $3$ those $6k-3$ vertices with $x$-coordinates
$3k\leq x \leq 9k-4$. The other vertices on D2 are colored $4$. The
vertices on D$(12k-6)$ are colored $3$ or $4$ in such a
way that the resulting coloring is proper (for each vertex, there is a
unique choice).
On D3 and D$(12k-7)$, we color all vertices $1$ or $2$; on D4 and
D$(12k-8)$, we color all vertices $3$ or $4$; and finally, on D5 and
D$(12k-9)$, we color all vertices $1$ or $2$. In every case, there is a
unique color choice for each vertex. The resulting coloring is depicted in
Figure~\ref{prop.12k-6.fig1}. The partial degree of $f$ is $\deg f|_R = 8$.
\medskip
\noindent
{\bf Step 2.}
For $k>2$, we find that there are $12k-15$ counter-diagonals to be colored
and we need to sequentially color all of them but nine. (Notice that this is
why this algorithm does not work for $k=1$.) This can be
achieved by performing the following procedure: suppose that we
have already colored counter-diagonals D$j$ and D$(12k-j-4)$ ($j\geq 5$)
using colors $1$ and $2$. Then, we color D$(j+1)$ and D$(12k-j-5)$ using
colors $3$ and $4$, and D$(j+2)$ and D$(12k-j-6)$ using colors $1$ and $2$.
As in Step~1, for each vertex there is a unique choice.
\begin{figure}[htb]
\centering
\psset{xunit=21pt}
\psset{yunit=21pt}
\psset{labelsep=5pt}
\pspicture(-0.5,-0.9)(18.6,17.5)
\psline*[linewidth=2pt,linecolor=lightgray](0,1)(1,1)(1,2)(0,1)
\psline*[linewidth=2pt,linecolor=darkgray](0,1)(0,2)(1,2)(0,1)
\psline*[linewidth=2pt,linecolor=darkgray](0,15)(1,15)(1,16)(0,15)
\psline*[linewidth=2pt,linecolor=lightgray](0,15)(0,16)(1,16)(0,15)
\psline*[linewidth=2pt,linecolor=darkgray](1,0)(2,0)(2,1)(1,0)
\psline*[linewidth=2pt,linecolor=lightgray](1,0)(1,1)(2,1)(1,0)
\psline*[linewidth=2pt,linecolor=darkgray](1,14)(2,14)(2,15)(1,14)
\psline*[linewidth=2pt,linecolor=lightgray](1,14)(1,15)(2,15)(1,14)
\psline*[linewidth=2pt,linecolor=lightgray](2,11)(3,11)(3,12)(2,11)
\psline*[linewidth=2pt,linecolor=darkgray](2,13)(3,13)(3,14)(2,13)
\psline*[linewidth=2pt,linecolor=lightgray](2,13)(2,14)(3,14)(2,13)
\psline*[linewidth=2pt,linecolor=darkgray](2,17)(3,17)(3,17.5)(2.5,17.5)(2,17)
\psline*[linewidth=2pt,linecolor=darkgray](3,0)(3,-0.5)(2.5,-0.5)(3,0)
\psline*[linewidth=2pt,linecolor=lightgray](2,17)(2,17.5)(2.5,17.5)(2,17)
\psline*[linewidth=2pt,linecolor=lightgray](2,0)(2,-0.5)(2.5,-0.5)(3,0)(2,0)
\psline*[linewidth=2pt,linecolor=lightgray](3,10)(4,10)(4,11)(3,10)
\psline*[linewidth=2pt,linecolor=darkgray](3,10)(3,11)(4,11)(3,10)
\psline*[linewidth=2pt,linecolor=lightgray](3,12)(3,13)(4,13)(3,12)
\psline*[linewidth=2pt,linecolor=darkgray](3,16)(4,16)(4,17)(3,16)
\psline*[linewidth=2pt,linecolor=lightgray](3,16)(3,17)(4,17)(3,16)
\psline*[linewidth=2pt,linecolor=lightgray](4,9)(5,9)(5,10)(4,9)
\psline*[linewidth=2pt,linecolor=darkgray](4,9)(4,10)(5,10)(4,9)
\psline*[linewidth=2pt,linecolor=lightgray](4,13)(5,13)(5,14)(4,13)
\psline*[linewidth=2pt,linecolor=darkgray](4,15)(5,15)(5,16)(4,15)
\psline*[linewidth=2pt,linecolor=lightgray](4,15)(4,16)(5,16)(4,15)
\psline*[linewidth=2pt,linecolor=lightgray](5,8)(6,8)(6,9)(5,8)
\psline*[linewidth=2pt,linecolor=darkgray](5,8)(5,9)(6,9)(5,8)
\psline*[linewidth=2pt,linecolor=lightgray](5,12)(6,12)(6,13)(5,12)
\psline*[linewidth=2pt,linecolor=darkgray](5,12)(5,13)(6,13)(5,12)
\psline*[linewidth=2pt,linecolor=lightgray](5,14)(5,15)(6,15)(5,14)
\psline*[linewidth=2pt,linecolor=lightgray](6,7)(7,7)(7,8)(6,7)
\psline*[linewidth=2pt,linecolor=darkgray](6,7)(6,8)(7,8)(6,7)
\psline*[linewidth=2pt,linecolor=lightgray](6,11)(7,11)(7,12)(6,11)
\psline*[linewidth=2pt,linecolor=darkgray](6,11)(6,12)(7,12)(6,11)
\psline*[linewidth=2pt,linecolor=darkgray](7,6)(8,6)(8,7)(7,6)
\psline*[linewidth=2pt,linecolor=lightgray](7,6)(7,7)(8,7)(7,6)
\psline*[linewidth=2pt,linecolor=lightgray](7,10)(8,10)(8,11)(7,10)
\psline*[linewidth=2pt,linecolor=darkgray](7,10)(7,11)(8,11)(7,10)
\psline*[linewidth=2pt,linecolor=darkgray](8,5)(9,5)(9,6)(8,5)
\psline*[linewidth=2pt,linecolor=lightgray](8,5)(8,6)(9,6)(8,5)
\psline*[linewidth=2pt,linecolor=lightgray](8,9)(9,9)(9,10)(8,9)
\psline*[linewidth=2pt,linecolor=darkgray](8,9)(8,10)(9,10)(8,9)
\psline*[linewidth=2pt,linecolor=darkgray](9,4)(10,4)(10,5)(9,4)
\psline*[linewidth=2pt,linecolor=lightgray](9,4)(9,5)(10,5)(9,4)
\psline*[linewidth=2pt,linecolor=darkgray](9,8)(10,8)(10,9)(9,8)
\psline*[linewidth=2pt,linecolor=lightgray](9,8)(9,9)(10,9)(9,8)
\psline*[linewidth=2pt,linecolor=darkgray](10,3)(11,3)(11,4)(10,3)
\psline*[linewidth=2pt,linecolor=lightgray](10,3)(10,4)(11,4)(10,3)
\psline*[linewidth=2pt,linecolor=darkgray](10,7)(11,7)(11,8)(10,7)
\psline*[linewidth=2pt,linecolor=lightgray](10,7)(10,8)(11,8)(10,7)
\psline*[linewidth=2pt,linecolor=lightgray](11,2)(11,3)(12,3)(11,2)
\psline*[linewidth=2pt,linecolor=darkgray](11,6)(12,6)(12,7)(11,6)
\psline*[linewidth=2pt,linecolor=lightgray](11,6)(11,7)(12,7)(11,6)
\psline*[linewidth=2pt,linecolor=lightgray](12,3)(13,3)(13,4)(12,3)
\psline*[linewidth=2pt,linecolor=darkgray](12,5)(13,5)(13,6)(12,5)
\psline*[linewidth=2pt,linecolor=lightgray](12,5)(12,6)(13,6)(12,5)
\psline*[linewidth=2pt,linecolor=lightgray](13,2)(14,2)(14,3)(13,2)
\psline*[linewidth=2pt,linecolor=darkgray](13,2)(13,3)(14,3)(13,2)
\psline*[linewidth=2pt,linecolor=lightgray](13,4)(13,5)(14,5)(13,4)
\psline*[linewidth=2pt,linecolor=lightgray](14,1)(15,1)(15,2)(14,1)
\psline*[linewidth=2pt,linecolor=darkgray](14,1)(14,2)(15,2)(14,1)
\psline*[linewidth=2pt,linecolor=lightgray](14,5)(15,5)(15,6)(14,5)
\psline*[linewidth=2pt,linecolor=lightgray](15,0)(16,0)(16,1)(15,0)
\psline*[linewidth=2pt,linecolor=darkgray](15,0)(15,1)(16,1)(15,0)
\psline*[linewidth=2pt,linecolor=lightgray](15,4)(16,4)(16,5)(15,4)
\psline*[linewidth=2pt,linecolor=darkgray](15,4)(15,5)(16,5)(15,4)
\psline*[linewidth=2pt,linecolor=lightgray](16,3)(17,3)(17,4)(16,3)
\psline*[linewidth=2pt,linecolor=darkgray](16,3)(16,4)(17,4)(16,3)
\psline*[linewidth=2pt,linecolor=lightgray](16,17)(17,17)(17,17.5)(16.5,17.5)(16,17)
\psline*[linewidth=2pt,linecolor=lightgray](17,0)(17,-0.5)(16.5,-0.5)(17,0)
\psline*[linewidth=2pt,linecolor=darkgray](16,17)(16,17.5)(16.5,17.5)(16,17)
\psline*[linewidth=2pt,linecolor=darkgray](16,0)(16,-0.5)(16.5,-0.5)(17,0)(16,0)
\psline*[linewidth=2pt,linecolor=lightgray](17,2)(17.5,2)(17.5,2.5)(17,2)
\psline*[linewidth=2pt,linecolor=lightgray](0,2)(-0.5,2)(-0.5,2.5)(0,3)(0,2)
\psline*[linewidth=2pt,linecolor=darkgray](17,2)(17,3)(17.5,3)(17.5,2.5)(17,2)
\psline*[linewidth=2pt,linecolor=darkgray](0,3)(-0.5,3)(-0.5,2.5)(0,3)
\psline*[linewidth=2pt,linecolor=darkgray](17,16)(17.5,16)(17.5,16.5)(17,16)
\psline*[linewidth=2pt,linecolor=darkgray](0,16)(-0.5,16)(-0.5,16.5)(0,17)(0,16)
\psline*[linewidth=2pt,linecolor=lightgray](17,16)(17,17)(17.5,17)(17.5,16.5)(17,16)
\psline*[linewidth=2pt,linecolor=lightgray](0,17)(-0.5,17)(-0.5,16.5)(0,17)
\psline[linewidth=2pt,linecolor=blue](-0.5,0)(17.5,0)
\psline[linewidth=2pt,linecolor=blue](-0.5,1)(17.5,1)
\psline[linewidth=2pt,linecolor=blue](-0.5,2)(17.5,2)
\psline[linewidth=2pt,linecolor=blue](-0.5,3)(17.5,3)
\psline[linewidth=2pt,linecolor=blue](-0.5,4)(17.5,4)
\psline[linewidth=2pt,linecolor=blue](-0.5,5)(17.5,5)
\psline[linewidth=2pt,linecolor=blue](-0.5,6)(17.5,6)
\psline[linewidth=2pt,linecolor=blue](-0.5,7)(17.5,7)
\psline[linewidth=2pt,linecolor=blue](-0.5,8)(17.5,8)
\psline[linewidth=2pt,linecolor=blue](-0.5,9)(17.5,9)
\psline[linewidth=2pt,linecolor=blue](-0.5,10)(17.5,10)
\psline[linewidth=2pt,linecolor=blue](-0.5,11)(17.5,11)
\psline[linewidth=2pt,linecolor=blue](-0.5,12)(17.5,12)
\psline[linewidth=2pt,linecolor=blue](-0.5,13)(17.5,13)
\psline[linewidth=2pt,linecolor=blue](-0.5,14)(17.5,14)
\psline[linewidth=2pt,linecolor=blue](-0.5,15)(17.5,15)
\psline[linewidth=2pt,linecolor=blue](-0.5,16)(17.5,16)
\psline[linewidth=2pt,linecolor=blue](-0.5,17)(17.5,17)
\psline[linewidth=2pt,linecolor=blue](0,-0.5)(0,17.5)
\psline[linewidth=2pt,linecolor=blue](1,-0.5)(1,17.5)
\psline[linewidth=2pt,linecolor=blue](2,-0.5)(2,17.5)
\psline[linewidth=2pt,linecolor=blue](3,-0.5)(3,17.5)
\psline[linewidth=2pt,linecolor=blue](4,-0.5)(4,17.5)
\psline[linewidth=2pt,linecolor=blue](5,-0.5)(5,17.5)
\psline[linewidth=2pt,linecolor=blue](6,-0.5)(6,17.5)
\psline[linewidth=2pt,linecolor=blue](7,-0.5)(7,17.5)
\psline[linewidth=2pt,linecolor=blue](8,-0.5)(8,17.5)
\psline[linewidth=2pt,linecolor=blue](9,-0.5)(9,17.5)
\psline[linewidth=2pt,linecolor=blue](10,-0.5)(10,17.5)
\psline[linewidth=2pt,linecolor=blue](11,-0.5)(11,17.5)
\psline[linewidth=2pt,linecolor=blue](12,-0.5)(12,17.5)
\psline[linewidth=2pt,linecolor=blue](13,-0.5)(13,17.5)
\psline[linewidth=2pt,linecolor=blue](14,-0.5)(14,17.5)
\psline[linewidth=2pt,linecolor=blue](15,-0.5)(15,17.5)
\psline[linewidth=2pt,linecolor=blue](16,-0.5)(16,17.5)
\psline[linewidth=2pt,linecolor=blue](17,-0.5)(17,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,-0.5)(17.5,17.5)
\psline[linewidth=2pt,linecolor=blue](0.5,-0.5)(17.5,16.5)
\psline[linewidth=2pt,linecolor=blue](1.5,-0.5)(17.5,15.5)
\psline[linewidth=2pt,linecolor=blue](2.5,-0.5)(17.5,14.5)
\psline[linewidth=2pt,linecolor=blue](3.5,-0.5)(17.5,13.5)
\psline[linewidth=2pt,linecolor=blue](4.5,-0.5)(17.5,12.5)
\psline[linewidth=2pt,linecolor=blue](5.5,-0.5)(17.5,11.5)
\psline[linewidth=2pt,linecolor=blue](6.5,-0.5)(17.5,10.5)
\psline[linewidth=2pt,linecolor=blue](7.5,-0.5)(17.5,9.5)
\psline[linewidth=2pt,linecolor=blue](8.5,-0.5)(17.5,8.5)
\psline[linewidth=2pt,linecolor=blue](9.5,-0.5)(17.5,7.5)
\psline[linewidth=2pt,linecolor=blue](10.5,-0.5)(17.5,6.5)
\psline[linewidth=2pt,linecolor=blue](11.5,-0.5)(17.5,5.5)
\psline[linewidth=2pt,linecolor=blue](12.5,-0.5)(17.5,4.5)
\psline[linewidth=2pt,linecolor=blue](13.5,-0.5)(17.5,3.5)
\psline[linewidth=2pt,linecolor=blue](14.5,-0.5)(17.5,2.5)
\psline[linewidth=2pt,linecolor=blue](15.5,-0.5)(17.5,1.5)
\psline[linewidth=2pt,linecolor=blue](16.5,-0.5)(17.5,0.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,0.5)(16.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,1.5)(15.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,2.5)(14.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,3.5)(13.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,4.5)(12.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,5.5)(11.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,6.5)(10.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,7.5)(9.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,8.5)(8.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,9.5)(7.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,10.5)(6.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,11.5)(5.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,12.5)(4.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,13.5)(3.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,14.5)(2.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,15.5)(1.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,16.5)(0.5,17.5)
\multirput{0}(0,0)(0,1){18}{%
\multirput{0}(0,0)(1,0){18}{%
\pscircle*[linecolor=white]{8pt}
\pscircle[linewidth=1pt,linecolor=black] {8pt}
}
}
\rput{0}(0,0){{\bf 4}}
\rput{0}(0,1){{\bf 1}}
\rput{0}(0,2){{\bf 3}}
\rput{0}(0,3){{\bf 2}}
\rput{0}(0,4){{\bf }}
\rput{0}(0,5){{\bf }}
\rput{0}(0,6){{\bf }}
\rput{0}(0,7){{\bf }}
\rput{0}(0,8){{\bf }}
\rput{0}(0,9){{\bf }}
\rput{0}(0,10){{\bf }}
\rput{0}(0,11){{\bf }}
\rput{0}(0,12){{\bf }}
\rput{0}(0,13){{\bf 1}}
\rput{0}(0,14){{\bf 4}}
\rput{0}(0,15){{\bf 2}}
\rput{0}(0,16){{\bf 3}}
\rput{0}(0,17){{\bf 1}}
\rput{0}(1,0){{\bf 2}}
\rput{0}(1,1){{\bf 3}}
\rput{0}(1,2){{\bf 2}}
\rput{0}(1,3){{\bf }}
\rput{0}(1,4){{\bf }}
\rput{0}(1,5){{\bf }}
\rput{0}(1,6){{\bf }}
\rput{0}(1,7){{\bf }}
\rput{0}(1,8){{\bf }}
\rput{0}(1,9){{\bf }}
\rput{0}(1,10){{\bf }}
\rput{0}(1,11){{\bf }}
\rput{0}(1,12){{\bf 1}}
\rput{0}(1,13){{\bf 4}}
\rput{0}(1,14){{\bf 2}}
\rput{0}(1,15){{\bf 3}}
\rput{0}(1,16){{\bf 1}}
\rput{0}(1,17){{\bf 4}}
\rput{0}(2,0){{\bf 3}}
\rput{0}(2,1){{\bf 1}}
\rput{0}(2,2){{\bf }}
\rput{0}(2,3){{\bf }}
\rput{0}(2,4){{\bf }}
\rput{0}(2,5){{\bf }}
\rput{0}(2,6){{\bf }}
\rput{0}(2,7){{\bf }}
\rput{0}(2,8){{\bf }}
\rput{0}(2,9){{\bf }}
\rput{0}(2,10){{\bf }}
\rput{0}(2,11){{\bf 1}}
\rput{0}(2,12){{\bf 4}}
\rput{0}(2,13){{\bf 2}}
\rput{0}(2,14){{\bf 3}}
\rput{0}(2,15){{\bf 1}}
\rput{0}(2,16){{\bf 4}}
\rput{0}(2,17){{\bf 2}}
\rput{0}(3,0){{\bf 1}}
\rput{0}(3,1){{\bf }}
\rput{0}(3,2){{\bf }}
\rput{0}(3,3){{\bf }}
\rput{0}(3,4){{\bf }}
\rput{0}(3,5){{\bf }}
\rput{0}(3,6){{\bf }}
\rput{0}(3,7){{\bf }}
\rput{0}(3,8){{\bf }}
\rput{0}(3,9){{\bf }}
\rput{0}(3,10){{\bf 1}}
\rput{0}(3,11){{\bf 3}}
\rput{0}(3,12){{\bf 2}}
\rput{0}(3,13){{\bf 3}}
\rput{0}(3,14){{\bf 1}}
\rput{0}(3,15){{\bf 4}}
\rput{0}(3,16){{\bf 2}}
\rput{0}(3,17){{\bf 3}}
\rput{0}(4,0){{\bf }}
\rput{0}(4,1){{\bf }}
\rput{0}(4,2){{\bf }}
\rput{0}(4,3){{\bf }}
\rput{0}(4,4){{\bf }}
\rput{0}(4,5){{\bf }}
\rput{0}(4,6){{\bf }}
\rput{0}(4,7){{\bf }}
\rput{0}(4,8){{\bf }}
\rput{0}(4,9){{\bf 1}}
\rput{0}(4,10){{\bf 3}}
\rput{0}(4,11){{\bf 2}}
\rput{0}(4,12){{\bf 4}}
\rput{0}(4,13){{\bf 1}}
\rput{0}(4,14){{\bf 4}}
\rput{0}(4,15){{\bf 2}}
\rput{0}(4,16){{\bf 3}}
\rput{0}(4,17){{\bf 1}}
\rput{0}(5,0){{\bf }}
\rput{0}(5,1){{\bf }}
\rput{0}(5,2){{\bf }}
\rput{0}(5,3){{\bf }}
\rput{0}(5,4){{\bf }}
\rput{0}(5,5){{\bf }}
\rput{0}(5,6){{\bf }}
\rput{0}(5,7){{\bf }}
\rput{0}(5,8){{\bf 1}}
\rput{0}(5,9){{\bf 3}}
\rput{0}(5,10){{\bf 2}}
\rput{0}(5,11){{\bf 4}}
\rput{0}(5,12){{\bf 1}}
\rput{0}(5,13){{\bf 3}}
\rput{0}(5,14){{\bf 2}}
\rput{0}(5,15){{\bf 3}}
\rput{0}(5,16){{\bf 1}}
\rput{0}(5,17){{\bf }}
\rput{0}(6,0){{\bf }}
\rput{0}(6,1){{\bf }}
\rput{0}(6,2){{\bf }}
\rput{0}(6,3){{\bf }}
\rput{0}(6,4){{\bf }}
\rput{0}(6,5){{\bf }}
\rput{0}(6,6){{\bf }}
\rput{0}(6,7){{\bf 1}}
\rput{0}(6,8){{\bf 3}}
\rput{0}(6,9){{\bf 2}}
\rput{0}(6,10){{\bf 4}}
\rput{0}(6,11){{\bf 1}}
\rput{0}(6,12){{\bf 3}}
\rput{0}(6,13){{\bf 2}}
\rput{0}(6,14){{\bf 4}}
\rput{0}(6,15){{\bf 1}}
\rput{0}(6,16){{\bf }}
\rput{0}(6,17){{\bf }}
\rput{0}(7,0){{\bf }}
\rput{0}(7,1){{\bf }}
\rput{0}(7,2){{\bf }}
\rput{0}(7,3){{\bf }}
\rput{0}(7,4){{\bf }}
\rput{0}(7,5){{\bf }}
\rput{0}(7,6){{\bf 2}}
\rput{0}(7,7){{\bf 3}}
\rput{0}(7,8){{\bf 2}}
\rput{0}(7,9){{\bf 4}}
\rput{0}(7,10){{\bf 1}}
\rput{0}(7,11){{\bf 3}}
\rput{0}(7,12){{\bf 2}}
\rput{0}(7,13){{\bf 4}}
\rput{0}(7,14){{\bf 1}}
\rput{0}(7,15){{\bf }}
\rput{0}(7,16){{\bf }}
\rput{0}(7,17){{\bf }}
\rput{0}(8,0){{\bf }}
\rput{0}(8,1){{\bf }}
\rput{0}(8,2){{\bf }}
\rput{0}(8,3){{\bf }}
\rput{0}(8,4){{\bf }}
\rput{0}(8,5){{\bf 2}}
\rput{0}(8,6){{\bf 3}}
\rput{0}(8,7){{\bf 1}}
\rput{0}(8,8){{\bf 4}}
\rput{0}(8,9){{\bf 1}}
\rput{0}(8,10){{\bf 3}}
\rput{0}(8,11){{\bf 2}}
\rput{0}(8,12){{\bf 4}}
\rput{0}(8,13){{\bf 1}}
\rput{0}(8,14){{\bf }}
\rput{0}(8,15){{\bf }}
\rput{0}(8,16){{\bf }}
\rput{0}(8,17){{\bf }}
\rput{0}(9,0){{\bf }}
\rput{0}(9,1){{\bf }}
\rput{0}(9,2){{\bf }}
\rput{0}(9,3){{\bf }}
\rput{0}(9,4){{\bf 2}}
\rput{0}(9,5){{\bf 3}}
\rput{0}(9,6){{\bf 1}}
\rput{0}(9,7){{\bf 4}}
\rput{0}(9,8){{\bf 2}}
\rput{0}(9,9){{\bf 3}}
\rput{0}(9,10){{\bf 2}}
\rput{0}(9,11){{\bf 4}}
\rput{0}(9,12){{\bf 1}}
\rput{0}(9,13){{\bf }}
\rput{0}(9,14){{\bf }}
\rput{0}(9,15){{\bf }}
\rput{0}(9,16){{\bf }}
\rput{0}(9,17){{\bf }}
\rput{0}(10,0){{\bf }}
\rput{0}(10,1){{\bf }}
\rput{0}(10,2){{\bf }}
\rput{0}(10,3){{\bf 2}}
\rput{0}(10,4){{\bf 3}}
\rput{0}(10,5){{\bf 1}}
\rput{0}(10,6){{\bf 4}}
\rput{0}(10,7){{\bf 2}}
\rput{0}(10,8){{\bf 3}}
\rput{0}(10,9){{\bf 1}}
\rput{0}(10,10){{\bf 4}}
\rput{0}(10,11){{\bf 1}}
\rput{0}(10,12){{\bf }}
\rput{0}(10,13){{\bf }}
\rput{0}(10,14){{\bf }}
\rput{0}(10,15){{\bf }}
\rput{0}(10,16){{\bf }}
\rput{0}(10,17){{\bf }}
\rput{0}(11,0){{\bf }}
\rput{0}(11,1){{\bf }}
\rput{0}(11,2){{\bf 2}}
\rput{0}(11,3){{\bf 3}}
\rput{0}(11,4){{\bf 1}}
\rput{0}(11,5){{\bf 4}}
\rput{0}(11,6){{\bf 2}}
\rput{0}(11,7){{\bf 3}}
\rput{0}(11,8){{\bf 1}}
\rput{0}(11,9){{\bf 4}}
\rput{0}(11,10){{\bf 2}}
\rput{0}(11,11){{\bf }}
\rput{0}(11,12){{\bf }}
\rput{0}(11,13){{\bf }}
\rput{0}(11,14){{\bf }}
\rput{0}(11,15){{\bf }}
\rput{0}(11,16){{\bf }}
\rput{0}(11,17){{\bf }}
\rput{0}(12,0){{\bf }}
\rput{0}(12,1){{\bf 2}}
\rput{0}(12,2){{\bf 4}}
\rput{0}(12,3){{\bf 1}}
\rput{0}(12,4){{\bf 4}}
\rput{0}(12,5){{\bf 2}}
\rput{0}(12,6){{\bf 3}}
\rput{0}(12,7){{\bf 1}}
\rput{0}(12,8){{\bf 4}}
\rput{0}(12,9){{\bf 2}}
\rput{0}(12,10){{\bf }}
\rput{0}(12,11){{\bf }}
\rput{0}(12,12){{\bf }}
\rput{0}(12,13){{\bf }}
\rput{0}(12,14){{\bf }}
\rput{0}(12,15){{\bf }}
\rput{0}(12,16){{\bf }}
\rput{0}(12,17){{\bf }}
\rput{0}(13,0){{\bf 2}}
\rput{0}(13,1){{\bf 4}}
\rput{0}(13,2){{\bf 1}}
\rput{0}(13,3){{\bf 3}}
\rput{0}(13,4){{\bf 2}}
\rput{0}(13,5){{\bf 3}}
\rput{0}(13,6){{\bf 1}}
\rput{0}(13,7){{\bf 4}}
\rput{0}(13,8){{\bf 2}}
\rput{0}(13,9){{\bf }}
\rput{0}(13,10){{\bf }}
\rput{0}(13,11){{\bf }}
\rput{0}(13,12){{\bf }}
\rput{0}(13,13){{\bf }}
\rput{0}(13,14){{\bf }}
\rput{0}(13,15){{\bf }}
\rput{0}(13,16){{\bf }}
\rput{0}(13,17){{\bf }}
\rput{0}(14,0){{\bf 4}}
\rput{0}(14,1){{\bf 1}}
\rput{0}(14,2){{\bf 3}}
\rput{0}(14,3){{\bf 2}}
\rput{0}(14,4){{\bf 4}}
\rput{0}(14,5){{\bf 1}}
\rput{0}(14,6){{\bf 4}}
\rput{0}(14,7){{\bf 2}}
\rput{0}(14,8){{\bf }}
\rput{0}(14,9){{\bf }}
\rput{0}(14,10){{\bf }}
\rput{0}(14,11){{\bf }}
\rput{0}(14,12){{\bf }}
\rput{0}(14,13){{\bf }}
\rput{0}(14,14){{\bf }}
\rput{0}(14,15){{\bf }}
\rput{0}(14,16){{\bf }}
\rput{0}(14,17){{\bf 2}}
\rput{0}(15,0){{\bf 1}}
\rput{0}(15,1){{\bf 3}}
\rput{0}(15,2){{\bf 2}}
\rput{0}(15,3){{\bf 4}}
\rput{0}(15,4){{\bf 1}}
\rput{0}(15,5){{\bf 3}}
\rput{0}(15,6){{\bf 2}}
\rput{0}(15,7){{\bf }}
\rput{0}(15,8){{\bf }}
\rput{0}(15,9){{\bf }}
\rput{0}(15,10){{\bf }}
\rput{0}(15,11){{\bf }}
\rput{0}(15,12){{\bf }}
\rput{0}(15,13){{\bf }}
\rput{0}(15,14){{\bf }}
\rput{0}(15,15){{\bf }}
\rput{0}(15,16){{\bf 2}}
\rput{0}(15,17){{\bf 4}}
\rput{0}(16,0){{\bf 3}}
\rput{0}(16,1){{\bf 2}}
\rput{0}(16,2){{\bf 4}}
\rput{0}(16,3){{\bf 1}}
\rput{0}(16,4){{\bf 3}}
\rput{0}(16,5){{\bf 2}}
\rput{0}(16,6){{\bf }}
\rput{0}(16,7){{\bf }}
\rput{0}(16,8){{\bf }}
\rput{0}(16,9){{\bf }}
\rput{0}(16,10){{\bf }}
\rput{0}(16,11){{\bf }}
\rput{0}(16,12){{\bf }}
\rput{0}(16,13){{\bf }}
\rput{0}(16,14){{\bf }}
\rput{0}(16,15){{\bf 1}}
\rput{0}(16,16){{\bf 4}}
\rput{0}(16,17){{\bf 1}}
\rput{0}(17,0){{\bf 2}}
\rput{0}(17,1){{\bf 4}}
\rput{0}(17,2){{\bf 1}}
\rput{0}(17,3){{\bf 3}}
\rput{0}(17,4){{\bf 2}}
\rput{0}(17,5){{\bf }}
\rput{0}(17,6){{\bf }}
\rput{0}(17,7){{\bf }}
\rput{0}(17,8){{\bf }}
\rput{0}(17,9){{\bf }}
\rput{0}(17,10){{\bf }}
\rput{0}(17,11){{\bf }}
\rput{0}(17,12){{\bf }}
\rput{0}(17,13){{\bf }}
\rput{0}(17,14){{\bf 1}}
\rput{0}(17,15){{\bf 4}}
\rput{0}(17,16){{\bf 2}}
\rput{0}(17,17){{\bf 3}}
\multirput{0}(17.5,-0.5)(0,1){18}{\psline[linewidth=2pt,linecolor=black]{->}(0.2,-0.2)(-0.1,0.1)}
\uput[0](17.7,-0.7){D1}
\uput[0](17.7,0.3){D2}
\uput[0](17.7,1.3){D3}
\uput[0](17.7,2.3){D4}
\uput[0](17.7,3.3){D5}
\uput[0](17.7,4.3){D6}
\uput[0](17.7,5.3){D7}
\uput[0](17.7,6.3){D8}
\uput[0](17.7,7.3){D9}
\uput[0](17.7,8.3){D10}
\uput[0](17.7,9.3){D11}
\uput[0](17.7,10.3){D12}
\uput[0](17.7,11.3){D13}
\uput[0](17.7,12.3){D14}
\uput[0](17.7,13.3){D15}
\uput[0](17.7,14.3){D16}
\uput[0](17.7,15.3){D17}
\uput[0](17.7,16.3){D18}
\endpspicture
\caption{ \label{prop.12k-6.fig1}
The 4-coloring of $T(18,18)$ after Step~1 in the case $L=4k-2$.
}
\end{figure}
This procedure is repeated $3(k-2)$ times, so we add $12(k-2)$
counter-diagonals, and only nine counter-diagonals remain to be
colored. Indeed, the last colored counter-diagonals D$(6k-7)$ and
D$(6k+3)$ have colors $1$ and $2$, the same as at the end of Step~1.
Each of these $3(k-2)$ steps adds $4$ to the degree of the coloring.
Thus, the partial degree of $f$ is $\deg f|_R = 8 + 12(k-2)$.
\medskip
\noindent
{\bf Step 3.}
On D$(6k-6)$, the vertices $(3k-3,3k-3)$ and $(9k-6,9k-6)$ only admit
a single color (which is $3$ for one of them, and $4$ for the other one).
The rest of the vertices on D$(6k-6)$ are colored $1$ and $2$ (again, there
is a unique choice for each vertex).
On D$(6k+2)$, there are two vertices $(3k+1,3k+1)$ and $(9k-2,9k-2)$
admitting a single color (again $3$ or $4$). The other vertices on D$(6k+2)$
are colored $1$ or $2$ (again, the choice for each vertex is unique).
On D$(6k-5)$ there are four vertices that admit a single color from the set $\{3,4\}$:
vertices $(3k-2,3k-3)$ and $(3k-3,3k-2)$ should be colored $c_1$, while
$(9k-5,9k-6)$ and $(9k-6,9k-5)$ should be colored $c_2\neq c_1$.
The other vertices satisfying $3k-1\leq x \leq 9k-4$ are colored $c_2$,
and the rest of the vertices are colored $c_1$.
Finally, on D$(6k+1)$, we also find another four vertices admitting a single
color chosen from the set $\{3,4\}$:
vertices $(3k+1,3k)$ and $(3k,3k+1)$ should be colored $c_1$, while
$(9k-2,9k-3)$ and $(9k-3,9k-2)$ should be colored $c_2\neq c_1$.
The other vertices satisfying $3k+2\leq x \leq 9k-4$ are colored $c_2$,
and the rest of the vertices are colored $c_1$.
The contribution to the partial degree of the
new triangles is $-4$; the partial degree of $f$ is given by
$\deg f|_R = 4 + 12(k-2)$.
\begin{figure}[htb]
\centering
\psset{xunit=21pt}
\psset{yunit=21pt}
\psset{labelsep=5pt}
\pspicture(-0.5,-0.9)(18.6,17.5)
\psline*[linewidth=2pt,linecolor=lightgray](0,1)(1,1)(1,2)(0,1)
\psline*[linewidth=2pt,linecolor=darkgray](0,1)(0,2)(1,2)(0,1)
\psline*[linewidth=2pt,linecolor=lightgray](0,2)(1,2)(1,3)(0,2)
\psline*[linewidth=2pt,linecolor=darkgray](0,2)(0,3)(1,3)(0,2)
\psline*[linewidth=2pt,linecolor=lightgray](0,3)(1,3)(1,4)(0,3)
\psline*[linewidth=2pt,linecolor=darkgray](0,3)(0,4)(1,4)(0,3)
\psline*[linewidth=2pt,linecolor=lightgray](0,4)(1,4)(1,5)(0,4)
\psline*[linewidth=2pt,linecolor=darkgray](0,4)(0,5)(1,5)(0,4)
\psline*[linewidth=2pt,linecolor=lightgray](0,5)(1,5)(1,6)(0,5)
\psline*[linewidth=2pt,linecolor=darkgray](0,5)(0,6)(1,6)(0,5)
\psline*[linewidth=2pt,linecolor=lightgray](0,6)(1,6)(1,7)(0,6)
\psline*[linewidth=2pt,linecolor=darkgray](0,6)(0,7)(1,7)(0,6)
\psline*[linewidth=2pt,linecolor=lightgray](0,10)(1,10)(1,11)(0,10)
\psline*[linewidth=2pt,linecolor=darkgray](0,10)(0,11)(1,11)(0,10)
\psline*[linewidth=2pt,linecolor=lightgray](0,11)(1,11)(1,12)(0,11)
\psline*[linewidth=2pt,linecolor=darkgray](0,11)(0,12)(1,12)(0,11)
\psline*[linewidth=2pt,linecolor=darkgray](0,15)(1,15)(1,16)(0,15)
\psline*[linewidth=2pt,linecolor=lightgray](0,15)(0,16)(1,16)(0,15)
\psline*[linewidth=2pt,linecolor=darkgray](1,0)(2,0)(2,1)(1,0)
\psline*[linewidth=2pt,linecolor=lightgray](1,0)(1,1)(2,1)(1,0)
\psline*[linewidth=2pt,linecolor=darkgray](1,2)(1,3)(2,3)(1,2)
\psline*[linewidth=2pt,linecolor=lightgray](1,3)(2,3)(2,4)(1,3)
\psline*[linewidth=2pt,linecolor=darkgray](1,3)(1,4)(2,4)(1,3)
\psline*[linewidth=2pt,linecolor=lightgray](1,4)(2,4)(2,5)(1,4)
\psline*[linewidth=2pt,linecolor=darkgray](1,4)(1,5)(2,5)(1,4)
\psline*[linewidth=2pt,linecolor=lightgray](1,5)(2,5)(2,6)(1,5)
\psline*[linewidth=2pt,linecolor=darkgray](1,5)(1,6)(2,6)(1,5)
\psline*[linewidth=2pt,linecolor=lightgray](1,9)(2,9)(2,10)(1,9)
\psline*[linewidth=2pt,linecolor=darkgray](1,9)(1,10)(2,10)(1,9)
\psline*[linewidth=2pt,linecolor=lightgray](1,10)(2,10)(2,11)(1,10)
\psline*[linewidth=2pt,linecolor=darkgray](1,10)(1,11)(2,11)(1,10)
\psline*[linewidth=2pt,linecolor=darkgray](1,14)(2,14)(2,15)(1,14)
\psline*[linewidth=2pt,linecolor=lightgray](1,14)(1,15)(2,15)(1,14)
\psline*[linewidth=2pt,linecolor=darkgray](2,0)(3,0)(3,1)(2,0)
\psline*[linewidth=2pt,linecolor=lightgray](2,0)(2,1)(3,1)(2,0)
\psline*[linewidth=2pt,linecolor=darkgray](2,1)(3,1)(3,2)(2,1)
\psline*[linewidth=2pt,linecolor=lightgray](2,3)(3,3)(3,4)(2,3)
\psline*[linewidth=2pt,linecolor=darkgray](2,3)(2,4)(3,4)(2,3)
\psline*[linewidth=2pt,linecolor=lightgray](2,4)(3,4)(3,5)(2,4)
\psline*[linewidth=2pt,linecolor=darkgray](2,4)(2,5)(3,5)(2,4)
\psline*[linewidth=2pt,linecolor=lightgray](2,8)(3,8)(3,9)(2,8)
\psline*[linewidth=2pt,linecolor=darkgray](2,8)(2,9)(3,9)(2,8)
\psline*[linewidth=2pt,linecolor=lightgray](2,9)(3,9)(3,10)(2,9)
\psline*[linewidth=2pt,linecolor=darkgray](2,9)(2,10)(3,10)(2,9)
\psline*[linewidth=2pt,linecolor=lightgray](2,10)(3,10)(3,11)(2,10)
\psline*[linewidth=2pt,linecolor=darkgray](2,10)(2,11)(3,11)(2,10)
\psline*[linewidth=2pt,linecolor=lightgray](2,11)(3,11)(3,12)(2,11)
\psline*[linewidth=2pt,linecolor=darkgray](2,13)(3,13)(3,14)(2,13)
\psline*[linewidth=2pt,linecolor=lightgray](2,13)(2,14)(3,14)(2,13)
\psline*[linewidth=2pt,linecolor=darkgray](2,17)(3,17)(3,17.5)(2.5,17.5)(2,17)
\psline*[linewidth=2pt,linecolor=darkgray](3,0)(3,-0.5)(2.5,-0.5)(3,0)
\psline*[linewidth=2pt,linecolor=lightgray](2,17)(2,17.5)(2.5,17.5)(2,17)
\psline*[linewidth=2pt,linecolor=lightgray](2,0)(2,-0.5)(2.5,-0.5)(3,0)(2,0)
\psline*[linewidth=2pt,linecolor=lightgray](3,1)(3,2)(4,2)(3,1)
\psline*[linewidth=2pt,linecolor=darkgray](3,3)(3,4)(4,4)(3,3)
\psline*[linewidth=2pt,linecolor=lightgray](3,7)(4,7)(4,8)(3,7)
\psline*[linewidth=2pt,linecolor=darkgray](3,7)(3,8)(4,8)(3,7)
\psline*[linewidth=2pt,linecolor=lightgray](3,8)(4,8)(4,9)(3,8)
\psline*[linewidth=2pt,linecolor=darkgray](3,8)(3,9)(4,9)(3,8)
\psline*[linewidth=2pt,linecolor=lightgray](3,9)(4,9)(4,10)(3,9)
\psline*[linewidth=2pt,linecolor=darkgray](3,9)(3,10)(4,10)(3,9)
\psline*[linewidth=2pt,linecolor=lightgray](3,10)(4,10)(4,11)(3,10)
\psline*[linewidth=2pt,linecolor=darkgray](3,10)(3,11)(4,11)(3,10)
\psline*[linewidth=2pt,linecolor=lightgray](3,12)(3,13)(4,13)(3,12)
\psline*[linewidth=2pt,linecolor=darkgray](3,16)(4,16)(4,17)(3,16)
\psline*[linewidth=2pt,linecolor=lightgray](3,16)(3,17)(4,17)(3,16)
\psline*[linewidth=2pt,linecolor=darkgray](3,17)(4,17)(4,17.5)(3.5,17.5)(3,17)
\psline*[linewidth=2pt,linecolor=darkgray](4,0)(4,-0.5)(3.5,-0.5)(4,0)
\psline*[linewidth=2pt,linecolor=lightgray](3,17)(3,17.5)(3.5,17.5)(3,17)
\psline*[linewidth=2pt,linecolor=lightgray](3,0)(3,-0.5)(3.5,-0.5)(4,0)(3,0)
\psline*[linewidth=2pt,linecolor=darkgray](4,2)(5,2)(5,3)(4,2)
\psline*[linewidth=2pt,linecolor=darkgray](4,4)(5,4)(5,5)(4,4)
\psline*[linewidth=2pt,linecolor=lightgray](4,6)(5,6)(5,7)(4,6)
\psline*[linewidth=2pt,linecolor=darkgray](4,6)(4,7)(5,7)(4,6)
\psline*[linewidth=2pt,linecolor=lightgray](4,7)(5,7)(5,8)(4,7)
\psline*[linewidth=2pt,linecolor=darkgray](4,7)(4,8)(5,8)(4,7)
\psline*[linewidth=2pt,linecolor=lightgray](4,8)(5,8)(5,9)(4,8)
\psline*[linewidth=2pt,linecolor=darkgray](4,8)(4,9)(5,9)(4,8)
\psline*[linewidth=2pt,linecolor=lightgray](4,9)(5,9)(5,10)(4,9)
\psline*[linewidth=2pt,linecolor=darkgray](4,9)(4,10)(5,10)(4,9)
\psline*[linewidth=2pt,linecolor=lightgray](4,13)(5,13)(5,14)(4,13)
\psline*[linewidth=2pt,linecolor=darkgray](4,15)(5,15)(5,16)(4,15)
\psline*[linewidth=2pt,linecolor=lightgray](4,15)(4,16)(5,16)(4,15)
\psline*[linewidth=2pt,linecolor=darkgray](4,16)(5,16)(5,17)(4,16)
\psline*[linewidth=2pt,linecolor=lightgray](4,16)(4,17)(5,17)(4,16)
\psline*[linewidth=2pt,linecolor=lightgray](5,2)(5,3)(6,3)(5,2)
\psline*[linewidth=2pt,linecolor=darkgray](5,3)(6,3)(6,4)(5,3)
\psline*[linewidth=2pt,linecolor=lightgray](5,3)(5,4)(6,4)(5,3)
\psline*[linewidth=2pt,linecolor=darkgray](5,4)(6,4)(6,5)(5,4)
\psline*[linewidth=2pt,linecolor=lightgray](5,4)(5,5)(6,5)(5,4)
\psline*[linewidth=2pt,linecolor=darkgray](5,6)(5,7)(6,7)(5,6)
\psline*[linewidth=2pt,linecolor=lightgray](5,7)(6,7)(6,8)(5,7)
\psline*[linewidth=2pt,linecolor=darkgray](5,7)(5,8)(6,8)(5,7)
\psline*[linewidth=2pt,linecolor=lightgray](5,8)(6,8)(6,9)(5,8)
\psline*[linewidth=2pt,linecolor=darkgray](5,8)(5,9)(6,9)(5,8)
\psline*[linewidth=2pt,linecolor=lightgray](5,12)(6,12)(6,13)(5,12)
\psline*[linewidth=2pt,linecolor=darkgray](5,12)(5,13)(6,13)(5,12)
\psline*[linewidth=2pt,linecolor=lightgray](5,14)(5,15)(6,15)(5,14)
\psline*[linewidth=2pt,linecolor=darkgray](5,15)(6,15)(6,16)(5,15)
\psline*[linewidth=2pt,linecolor=lightgray](5,15)(5,16)(6,16)(5,15)
\psline*[linewidth=2pt,linecolor=lightgray](6,4)(6,5)(7,5)(6,4)
\psline*[linewidth=2pt,linecolor=darkgray](6,5)(7,5)(7,6)(6,5)
\psline*[linewidth=2pt,linecolor=lightgray](6,7)(7,7)(7,8)(6,7)
\psline*[linewidth=2pt,linecolor=darkgray](6,7)(6,8)(7,8)(6,7)
\psline*[linewidth=2pt,linecolor=lightgray](6,11)(7,11)(7,12)(6,11)
\psline*[linewidth=2pt,linecolor=darkgray](6,11)(6,12)(7,12)(6,11)
\psline*[linewidth=2pt,linecolor=darkgray](7,5)(8,5)(8,6)(7,5)
\psline*[linewidth=2pt,linecolor=lightgray](7,5)(7,6)(8,6)(7,5)
\psline*[linewidth=2pt,linecolor=darkgray](7,6)(8,6)(8,7)(7,6)
\psline*[linewidth=2pt,linecolor=lightgray](7,6)(7,7)(8,7)(7,6)
\psline*[linewidth=2pt,linecolor=lightgray](7,10)(8,10)(8,11)(7,10)
\psline*[linewidth=2pt,linecolor=darkgray](7,10)(7,11)(8,11)(7,10)
\psline*[linewidth=2pt,linecolor=darkgray](8,4)(9,4)(9,5)(8,4)
\psline*[linewidth=2pt,linecolor=lightgray](8,4)(8,5)(9,5)(8,4)
\psline*[linewidth=2pt,linecolor=darkgray](8,5)(9,5)(9,6)(8,5)
\psline*[linewidth=2pt,linecolor=lightgray](8,5)(8,6)(9,6)(8,5)
\psline*[linewidth=2pt,linecolor=lightgray](8,9)(9,9)(9,10)(8,9)
\psline*[linewidth=2pt,linecolor=darkgray](8,9)(8,10)(9,10)(8,9)
\psline*[linewidth=2pt,linecolor=darkgray](9,3)(10,3)(10,4)(9,3)
\psline*[linewidth=2pt,linecolor=lightgray](9,3)(9,4)(10,4)(9,3)
\psline*[linewidth=2pt,linecolor=darkgray](9,4)(10,4)(10,5)(9,4)
\psline*[linewidth=2pt,linecolor=lightgray](9,4)(9,5)(10,5)(9,4)
\psline*[linewidth=2pt,linecolor=darkgray](9,8)(10,8)(10,9)(9,8)
\psline*[linewidth=2pt,linecolor=lightgray](9,8)(9,9)(10,9)(9,8)
\psline*[linewidth=2pt,linecolor=darkgray](10,2)(11,2)(11,3)(10,2)
\psline*[linewidth=2pt,linecolor=lightgray](10,2)(10,3)(11,3)(10,2)
\psline*[linewidth=2pt,linecolor=darkgray](10,3)(11,3)(11,4)(10,3)
\psline*[linewidth=2pt,linecolor=lightgray](10,3)(10,4)(11,4)(10,3)
\psline*[linewidth=2pt,linecolor=darkgray](10,7)(11,7)(11,8)(10,7)
\psline*[linewidth=2pt,linecolor=lightgray](10,7)(10,8)(11,8)(10,7)
\psline*[linewidth=2pt,linecolor=lightgray](11,2)(11,3)(12,3)(11,2)
\psline*[linewidth=2pt,linecolor=darkgray](11,6)(12,6)(12,7)(11,6)
\psline*[linewidth=2pt,linecolor=lightgray](11,6)(11,7)(12,7)(11,6)
\psline*[linewidth=2pt,linecolor=lightgray](12,3)(13,3)(13,4)(12,3)
\psline*[linewidth=2pt,linecolor=darkgray](12,5)(13,5)(13,6)(12,5)
\psline*[linewidth=2pt,linecolor=lightgray](12,5)(12,6)(13,6)(12,5)
\psline*[linewidth=2pt,linecolor=lightgray](12,9)(13,9)(13,10)(12,9)
\psline*[linewidth=2pt,linecolor=darkgray](12,9)(12,10)(13,10)(12,9)
\psline*[linewidth=2pt,linecolor=lightgray](12,10)(13,10)(13,11)(12,10)
\psline*[linewidth=2pt,linecolor=lightgray](13,2)(14,2)(14,3)(13,2)
\psline*[linewidth=2pt,linecolor=darkgray](13,2)(13,3)(14,3)(13,2)
\psline*[linewidth=2pt,linecolor=lightgray](13,4)(13,5)(14,5)(13,4)
\psline*[linewidth=2pt,linecolor=lightgray](13,8)(14,8)(14,9)(13,8)
\psline*[linewidth=2pt,linecolor=darkgray](13,8)(13,9)(14,9)(13,8)
\psline*[linewidth=2pt,linecolor=lightgray](13,9)(14,9)(14,10)(13,9)
\psline*[linewidth=2pt,linecolor=darkgray](13,9)(13,10)(14,10)(13,9)
\psline*[linewidth=2pt,linecolor=lightgray](13,10)(14,10)(14,11)(13,10)
\psline*[linewidth=2pt,linecolor=darkgray](13,10)(13,11)(14,11)(13,10)
\psline*[linewidth=2pt,linecolor=lightgray](14,1)(15,1)(15,2)(14,1)
\psline*[linewidth=2pt,linecolor=darkgray](14,1)(14,2)(15,2)(14,1)
\psline*[linewidth=2pt,linecolor=lightgray](14,5)(15,5)(15,6)(14,5)
\psline*[linewidth=2pt,linecolor=lightgray](14,7)(15,7)(15,8)(14,7)
\psline*[linewidth=2pt,linecolor=darkgray](14,7)(14,8)(15,8)(14,7)
\psline*[linewidth=2pt,linecolor=lightgray](14,8)(15,8)(15,9)(14,8)
\psline*[linewidth=2pt,linecolor=darkgray](14,8)(14,9)(15,9)(14,8)
\psline*[linewidth=2pt,linecolor=lightgray](14,9)(15,9)(15,10)(14,9)
\psline*[linewidth=2pt,linecolor=darkgray](14,9)(14,10)(15,10)(14,9)
\psline*[linewidth=2pt,linecolor=lightgray](15,0)(16,0)(16,1)(15,0)
\psline*[linewidth=2pt,linecolor=darkgray](15,0)(15,1)(16,1)(15,0)
\psline*[linewidth=2pt,linecolor=lightgray](15,4)(16,4)(16,5)(15,4)
\psline*[linewidth=2pt,linecolor=darkgray](15,4)(15,5)(16,5)(15,4)
\psline*[linewidth=2pt,linecolor=lightgray](15,5)(16,5)(16,6)(15,5)
\psline*[linewidth=2pt,linecolor=darkgray](15,5)(15,6)(16,6)(15,5)
\psline*[linewidth=2pt,linecolor=lightgray](15,6)(16,6)(16,7)(15,6)
\psline*[linewidth=2pt,linecolor=darkgray](15,6)(15,7)(16,7)(15,6)
\psline*[linewidth=2pt,linecolor=lightgray](15,7)(16,7)(16,8)(15,7)
\psline*[linewidth=2pt,linecolor=darkgray](15,7)(15,8)(16,8)(15,7)
\psline*[linewidth=2pt,linecolor=lightgray](15,8)(16,8)(16,9)(15,8)
\psline*[linewidth=2pt,linecolor=darkgray](15,8)(15,9)(16,9)(15,8)
\psline*[linewidth=2pt,linecolor=lightgray](15,9)(16,9)(16,10)(15,9)
\psline*[linewidth=2pt,linecolor=darkgray](15,9)(15,10)(16,10)(15,9)
\psline*[linewidth=2pt,linecolor=lightgray](15,10)(16,10)(16,11)(15,10)
\psline*[linewidth=2pt,linecolor=lightgray](15,12)(16,12)(16,13)(15,12)
\psline*[linewidth=2pt,linecolor=darkgray](15,12)(15,13)(16,13)(15,12)
\psline*[linewidth=2pt,linecolor=lightgray](15,13)(16,13)(16,14)(15,13)
\psline*[linewidth=2pt,linecolor=lightgray](16,3)(17,3)(17,4)(16,3)
\psline*[linewidth=2pt,linecolor=darkgray](16,3)(16,4)(17,4)(16,3)
\psline*[linewidth=2pt,linecolor=lightgray](16,4)(17,4)(17,5)(16,4)
\psline*[linewidth=2pt,linecolor=darkgray](16,4)(16,5)(17,5)(16,4)
\psline*[linewidth=2pt,linecolor=lightgray](16,5)(17,5)(17,6)(16,5)
\psline*[linewidth=2pt,linecolor=darkgray](16,5)(16,6)(17,6)(16,5)
\psline*[linewidth=2pt,linecolor=lightgray](16,6)(17,6)(17,7)(16,6)
\psline*[linewidth=2pt,linecolor=darkgray](16,6)(16,7)(17,7)(16,6)
\psline*[linewidth=2pt,linecolor=lightgray](16,7)(17,7)(17,8)(16,7)
\psline*[linewidth=2pt,linecolor=darkgray](16,7)(16,8)(17,8)(16,7)
\psline*[linewidth=2pt,linecolor=lightgray](16,8)(17,8)(17,9)(16,8)
\psline*[linewidth=2pt,linecolor=darkgray](16,8)(16,9)(17,9)(16,8)
\psline*[linewidth=2pt,linecolor=darkgray](16,10)(16,11)(17,11)(16,10)
\psline*[linewidth=2pt,linecolor=lightgray](16,11)(17,11)(17,12)(16,11)
\psline*[linewidth=2pt,linecolor=darkgray](16,11)(16,12)(17,12)(16,11)
\psline*[linewidth=2pt,linecolor=lightgray](16,12)(17,12)(17,13)(16,12)
\psline*[linewidth=2pt,linecolor=darkgray](16,12)(16,13)(17,13)(16,12)
\psline*[linewidth=2pt,linecolor=lightgray](16,13)(17,13)(17,14)(16,13)
\psline*[linewidth=2pt,linecolor=darkgray](16,13)(16,14)(17,14)(16,13)
\psline*[linewidth=2pt,linecolor=lightgray](16,17)(17,17)(17,17.5)(16.5,17.5)(16,17)
\psline*[linewidth=2pt,linecolor=lightgray](17,0)(17,-0.5)(16.5,-0.5)(17,0)
\psline*[linewidth=2pt,linecolor=darkgray](16,17)(16,17.5)(16.5,17.5)(16,17)
\psline*[linewidth=2pt,linecolor=darkgray](16,0)(16,-0.5)(16.5,-0.5)(17,0)(16,0)
\psline*[linewidth=2pt,linecolor=lightgray](17,2)(17.5,2)(17.5,2.5)(17,2)
\psline*[linewidth=2pt,linecolor=lightgray](0,2)(-0.5,2)(-0.5,2.5)(0,3)(0,2)
\psline*[linewidth=2pt,linecolor=darkgray](17,2)(17,3)(17.5,3)(17.5,2.5)(17,2)
\psline*[linewidth=2pt,linecolor=darkgray](0,3)(-0.5,3)(-0.5,2.5)(0,3)
\psline*[linewidth=2pt,linecolor=lightgray](17,3)(17.5,3)(17.5,3.5)(17,3)
\psline*[linewidth=2pt,linecolor=lightgray](0,3)(-0.5,3)(-0.5,3.5)(0,4)(0,3)
\psline*[linewidth=2pt,linecolor=darkgray](17,3)(17,4)(17.5,4)(17.5,3.5)(17,3)
\psline*[linewidth=2pt,linecolor=darkgray](0,4)(-0.5,4)(-0.5,3.5)(0,4)
\psline*[linewidth=2pt,linecolor=lightgray](17,4)(17.5,4)(17.5,4.5)(17,4)
\psline*[linewidth=2pt,linecolor=lightgray](0,4)(-0.5,4)(-0.5,4.5)(0,5)(0,4)
\psline*[linewidth=2pt,linecolor=darkgray](17,4)(17,5)(17.5,5)(17.5,4.5)(17,4)
\psline*[linewidth=2pt,linecolor=darkgray](0,5)(-0.5,5)(-0.5,4.5)(0,5)
\psline*[linewidth=2pt,linecolor=lightgray](17,5)(17.5,5)(17.5,5.5)(17,5)
\psline*[linewidth=2pt,linecolor=lightgray](0,5)(-0.5,5)(-0.5,5.5)(0,6)(0,5)
\psline*[linewidth=2pt,linecolor=darkgray](17,5)(17,6)(17.5,6)(17.5,5.5)(17,5)
\psline*[linewidth=2pt,linecolor=darkgray](0,6)(-0.5,6)(-0.5,5.5)(0,6)
\psline*[linewidth=2pt,linecolor=lightgray](17,6)(17.5,6)(17.5,6.5)(17,6)
\psline*[linewidth=2pt,linecolor=lightgray](0,6)(-0.5,6)(-0.5,6.5)(0,7)(0,6)
\psline*[linewidth=2pt,linecolor=darkgray](17,6)(17,7)(17.5,7)(17.5,6.5)(17,6)
\psline*[linewidth=2pt,linecolor=darkgray](0,7)(-0.5,7)(-0.5,6.5)(0,7)
\psline*[linewidth=2pt,linecolor=lightgray](17,7)(17.5,7)(17.5,7.5)(17,7)
\psline*[linewidth=2pt,linecolor=lightgray](0,7)(-0.5,7)(-0.5,7.5)(0,8)(0,7)
\psline*[linewidth=2pt,linecolor=darkgray](17,7)(17,8)(17.5,8)(17.5,7.5)(17,7)
\psline*[linewidth=2pt,linecolor=darkgray](0,8)(-0.5,8)(-0.5,7.5)(0,8)
\psline*[linewidth=2pt,linecolor=lightgray](17,11)(17.5,11)(17.5,11.5)(17,11)
\psline*[linewidth=2pt,linecolor=lightgray](0,11)(-0.5,11)(-0.5,11.5)(0,12)(0,11)
\psline*[linewidth=2pt,linecolor=darkgray](17,11)(17,12)(17.5,12)(17.5,11.5)(17,11)
\psline*[linewidth=2pt,linecolor=darkgray](0,12)(-0.5,12)(-0.5,11.5)(0,12)
\psline*[linewidth=2pt,linecolor=lightgray](17,12)(17.5,12)(17.5,12.5)(17,12)
\psline*[linewidth=2pt,linecolor=lightgray](0,12)(-0.5,12)(-0.5,12.5)(0,13)(0,12)
\psline*[linewidth=2pt,linecolor=darkgray](17,12)(17,13)(17.5,13)(17.5,12.5)(17,12)
\psline*[linewidth=2pt,linecolor=darkgray](0,13)(-0.5,13)(-0.5,12.5)(0,13)
\psline*[linewidth=2pt,linecolor=darkgray](17,16)(17.5,16)(17.5,16.5)(17,16)
\psline*[linewidth=2pt,linecolor=darkgray](0,16)(-0.5,16)(-0.5,16.5)(0,17)(0,16)
\psline*[linewidth=2pt,linecolor=lightgray](17,16)(17,17)(17.5,17)(17.5,16.5)(17,16)
\psline*[linewidth=2pt,linecolor=lightgray](0,17)(-0.5,17)(-0.5,16.5)(0,17)
\psline[linewidth=2pt,linecolor=blue](-0.5,0)(17.5,0)
\psline[linewidth=2pt,linecolor=blue](-0.5,1)(17.5,1)
\psline[linewidth=2pt,linecolor=blue](-0.5,2)(17.5,2)
\psline[linewidth=2pt,linecolor=blue](-0.5,3)(17.5,3)
\psline[linewidth=2pt,linecolor=blue](-0.5,4)(17.5,4)
\psline[linewidth=2pt,linecolor=blue](-0.5,5)(17.5,5)
\psline[linewidth=2pt,linecolor=blue](-0.5,6)(17.5,6)
\psline[linewidth=2pt,linecolor=blue](-0.5,7)(17.5,7)
\psline[linewidth=2pt,linecolor=blue](-0.5,8)(17.5,8)
\psline[linewidth=2pt,linecolor=blue](-0.5,9)(17.5,9)
\psline[linewidth=2pt,linecolor=blue](-0.5,10)(17.5,10)
\psline[linewidth=2pt,linecolor=blue](-0.5,11)(17.5,11)
\psline[linewidth=2pt,linecolor=blue](-0.5,12)(17.5,12)
\psline[linewidth=2pt,linecolor=blue](-0.5,13)(17.5,13)
\psline[linewidth=2pt,linecolor=blue](-0.5,14)(17.5,14)
\psline[linewidth=2pt,linecolor=blue](-0.5,15)(17.5,15)
\psline[linewidth=2pt,linecolor=blue](-0.5,16)(17.5,16)
\psline[linewidth=2pt,linecolor=blue](-0.5,17)(17.5,17)
\psline[linewidth=2pt,linecolor=blue](0,-0.5)(0,17.5)
\psline[linewidth=2pt,linecolor=blue](1,-0.5)(1,17.5)
\psline[linewidth=2pt,linecolor=blue](2,-0.5)(2,17.5)
\psline[linewidth=2pt,linecolor=blue](3,-0.5)(3,17.5)
\psline[linewidth=2pt,linecolor=blue](4,-0.5)(4,17.5)
\psline[linewidth=2pt,linecolor=blue](5,-0.5)(5,17.5)
\psline[linewidth=2pt,linecolor=blue](6,-0.5)(6,17.5)
\psline[linewidth=2pt,linecolor=blue](7,-0.5)(7,17.5)
\psline[linewidth=2pt,linecolor=blue](8,-0.5)(8,17.5)
\psline[linewidth=2pt,linecolor=blue](9,-0.5)(9,17.5)
\psline[linewidth=2pt,linecolor=blue](10,-0.5)(10,17.5)
\psline[linewidth=2pt,linecolor=blue](11,-0.5)(11,17.5)
\psline[linewidth=2pt,linecolor=blue](12,-0.5)(12,17.5)
\psline[linewidth=2pt,linecolor=blue](13,-0.5)(13,17.5)
\psline[linewidth=2pt,linecolor=blue](14,-0.5)(14,17.5)
\psline[linewidth=2pt,linecolor=blue](15,-0.5)(15,17.5)
\psline[linewidth=2pt,linecolor=blue](16,-0.5)(16,17.5)
\psline[linewidth=2pt,linecolor=blue](17,-0.5)(17,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,-0.5)(17.5,17.5)
\psline[linewidth=2pt,linecolor=blue](0.5,-0.5)(17.5,16.5)
\psline[linewidth=2pt,linecolor=blue](1.5,-0.5)(17.5,15.5)
\psline[linewidth=2pt,linecolor=blue](2.5,-0.5)(17.5,14.5)
\psline[linewidth=2pt,linecolor=blue](3.5,-0.5)(17.5,13.5)
\psline[linewidth=2pt,linecolor=blue](4.5,-0.5)(17.5,12.5)
\psline[linewidth=2pt,linecolor=blue](5.5,-0.5)(17.5,11.5)
\psline[linewidth=2pt,linecolor=blue](6.5,-0.5)(17.5,10.5)
\psline[linewidth=2pt,linecolor=blue](7.5,-0.5)(17.5,9.5)
\psline[linewidth=2pt,linecolor=blue](8.5,-0.5)(17.5,8.5)
\psline[linewidth=2pt,linecolor=blue](9.5,-0.5)(17.5,7.5)
\psline[linewidth=2pt,linecolor=blue](10.5,-0.5)(17.5,6.5)
\psline[linewidth=2pt,linecolor=blue](11.5,-0.5)(17.5,5.5)
\psline[linewidth=2pt,linecolor=blue](12.5,-0.5)(17.5,4.5)
\psline[linewidth=2pt,linecolor=blue](13.5,-0.5)(17.5,3.5)
\psline[linewidth=2pt,linecolor=blue](14.5,-0.5)(17.5,2.5)
\psline[linewidth=2pt,linecolor=blue](15.5,-0.5)(17.5,1.5)
\psline[linewidth=2pt,linecolor=blue](16.5,-0.5)(17.5,0.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,0.5)(16.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,1.5)(15.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,2.5)(14.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,3.5)(13.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,4.5)(12.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,5.5)(11.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,6.5)(10.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,7.5)(9.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,8.5)(8.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,9.5)(7.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,10.5)(6.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,11.5)(5.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,12.5)(4.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,13.5)(3.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,14.5)(2.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,15.5)(1.5,17.5)
\psline[linewidth=2pt,linecolor=blue](-0.5,16.5)(0.5,17.5)
\multirput{0}(0,0)(0,1){18}{%
\multirput{0}(0,0)(1,0){18}{%
\pscircle*[linecolor=white]{8pt}
\pscircle[linewidth=1pt,linecolor=black] {8pt}
}
}
\rput{0}(0,0){{\bf 4}}
\rput{0}(0,1){{\bf 1}}
\rput{0}(0,2){{\bf 3}}
\rput{0}(0,3){{\bf 2}}
\rput{0}(0,4){{\bf 1}}
\rput{0}(0,5){{\bf 3}}
\rput{0}(0,6){{\bf 2}}
\rput{0}(0,7){{\bf 1}}
\rput{0}(0,8){{\bf 3}}
\rput{0}(0,9){{\bf 4}}
\rput{0}(0,10){{\bf 1}}
\rput{0}(0,11){{\bf 3}}
\rput{0}(0,12){{\bf 2}}
\rput{0}(0,13){{\bf 1}}
\rput{0}(0,14){{\bf 4}}
\rput{0}(0,15){{\bf 2}}
\rput{0}(0,16){{\bf 3}}
\rput{0}(0,17){{\bf 1}}
\rput{0}(1,0){{\bf 2}}
\rput{0}(1,1){{\bf 3}}
\rput{0}(1,2){{\bf 2}}
\rput{0}(1,3){{\bf 1}}
\rput{0}(1,4){{\bf 3}}
\rput{0}(1,5){{\bf 2}}
\rput{0}(1,6){{\bf 1}}
\rput{0}(1,7){{\bf 3}}
\rput{0}(1,8){{\bf 4}}
\rput{0}(1,9){{\bf 1}}
\rput{0}(1,10){{\bf 3}}
\rput{0}(1,11){{\bf 2}}
\rput{0}(1,12){{\bf 1}}
\rput{0}(1,13){{\bf 4}}
\rput{0}(1,14){{\bf 2}}
\rput{0}(1,15){{\bf 3}}
\rput{0}(1,16){{\bf 1}}
\rput{0}(1,17){{\bf 4}}
\rput{0}(2,0){{\bf 3}}
\rput{0}(2,1){{\bf 1}}
\rput{0}(2,2){{\bf 4}}
\rput{0}(2,3){{\bf 3}}
\rput{0}(2,4){{\bf 2}}
\rput{0}(2,5){{\bf 1}}
\rput{0}(2,6){{\bf 3}}
\rput{0}(2,7){{\bf 4}}
\rput{0}(2,8){{\bf 1}}
\rput{0}(2,9){{\bf 3}}
\rput{0}(2,10){{\bf 2}}
\rput{0}(2,11){{\bf 1}}
\rput{0}(2,12){{\bf 4}}
\rput{0}(2,13){{\bf 2}}
\rput{0}(2,14){{\bf 3}}
\rput{0}(2,15){{\bf 1}}
\rput{0}(2,16){{\bf 4}}
\rput{0}(2,17){{\bf 2}}
\rput{0}(3,0){{\bf 1}}
\rput{0}(3,1){{\bf 2}}
\rput{0}(3,2){{\bf 3}}
\rput{0}(3,3){{\bf 2}}
\rput{0}(3,4){{\bf 1}}
\rput{0}(3,5){{\bf 3}}
\rput{0}(3,6){{\bf 4}}
\rput{0}(3,7){{\bf 1}}
\rput{0}(3,8){{\bf 3}}
\rput{0}(3,9){{\bf 2}}
\rput{0}(3,10){{\bf 1}}
\rput{0}(3,11){{\bf 3}}
\rput{0}(3,12){{\bf 2}}
\rput{0}(3,13){{\bf 3}}
\rput{0}(3,14){{\bf 1}}
\rput{0}(3,15){{\bf 4}}
\rput{0}(3,16){{\bf 2}}
\rput{0}(3,17){{\bf 3}}
\rput{0}(4,0){{\bf 2}}
\rput{0}(4,1){{\bf 4}}
\rput{0}(4,2){{\bf 1}}
\rput{0}(4,3){{\bf 4}}
\rput{0}(4,4){{\bf 3}}
\rput{0}(4,5){{\bf 4}}
\rput{0}(4,6){{\bf 1}}
\rput{0}(4,7){{\bf 3}}
\rput{0}(4,8){{\bf 2}}
\rput{0}(4,9){{\bf 1}}
\rput{0}(4,10){{\bf 3}}
\rput{0}(4,11){{\bf 2}}
\rput{0}(4,12){{\bf 4}}
\rput{0}(4,13){{\bf 1}}
\rput{0}(4,14){{\bf 4}}
\rput{0}(4,15){{\bf 2}}
\rput{0}(4,16){{\bf 3}}
\rput{0}(4,17){{\bf 1}}
\rput{0}(5,0){{\bf 4}}
\rput{0}(5,1){{\bf 1}}
\rput{0}(5,2){{\bf 2}}
\rput{0}(5,3){{\bf 3}}
\rput{0}(5,4){{\bf 1}}
\rput{0}(5,5){{\bf 2}}
\rput{0}(5,6){{\bf 3}}
\rput{0}(5,7){{\bf 2}}
\rput{0}(5,8){{\bf 1}}
\rput{0}(5,9){{\bf 3}}
\rput{0}(5,10){{\bf 2}}
\rput{0}(5,11){{\bf 4}}
\rput{0}(5,12){{\bf 1}}
\rput{0}(5,13){{\bf 3}}
\rput{0}(5,14){{\bf 2}}
\rput{0}(5,15){{\bf 3}}
\rput{0}(5,16){{\bf 1}}
\rput{0}(5,17){{\bf 2}}
\rput{0}(6,0){{\bf 1}}
\rput{0}(6,1){{\bf 2}}
\rput{0}(6,2){{\bf 4}}
\rput{0}(6,3){{\bf 1}}
\rput{0}(6,4){{\bf 2}}
\rput{0}(6,5){{\bf 3}}
\rput{0}(6,6){{\bf 4}}
\rput{0}(6,7){{\bf 1}}
\rput{0}(6,8){{\bf 3}}
\rput{0}(6,9){{\bf 2}}
\rput{0}(6,10){{\bf 4}}
\rput{0}(6,11){{\bf 1}}
\rput{0}(6,12){{\bf 3}}
\rput{0}(6,13){{\bf 2}}
\rput{0}(6,14){{\bf 4}}
\rput{0}(6,15){{\bf 1}}
\rput{0}(6,16){{\bf 2}}
\rput{0}(6,17){{\bf 4}}
\rput{0}(7,0){{\bf 2}}
\rput{0}(7,1){{\bf 4}}
\rput{0}(7,2){{\bf 3}}
\rput{0}(7,3){{\bf 2}}
\rput{0}(7,4){{\bf 4}}
\rput{0}(7,5){{\bf 1}}
\rput{0}(7,6){{\bf 2}}
\rput{0}(7,7){{\bf 3}}
\rput{0}(7,8){{\bf 2}}
\rput{0}(7,9){{\bf 4}}
\rput{0}(7,10){{\bf 1}}
\rput{0}(7,11){{\bf 3}}
\rput{0}(7,12){{\bf 2}}
\rput{0}(7,13){{\bf 4}}
\rput{0}(7,14){{\bf 1}}
\rput{0}(7,15){{\bf 2}}
\rput{0}(7,16){{\bf 4}}
\rput{0}(7,17){{\bf 1}}
\rput{0}(8,0){{\bf 4}}
\rput{0}(8,1){{\bf 3}}
\rput{0}(8,2){{\bf 2}}
\rput{0}(8,3){{\bf 4}}
\rput{0}(8,4){{\bf 1}}
\rput{0}(8,5){{\bf 2}}
\rput{0}(8,6){{\bf 3}}
\rput{0}(8,7){{\bf 1}}
\rput{0}(8,8){{\bf 4}}
\rput{0}(8,9){{\bf 1}}
\rput{0}(8,10){{\bf 3}}
\rput{0}(8,11){{\bf 2}}
\rput{0}(8,12){{\bf 4}}
\rput{0}(8,13){{\bf 1}}
\rput{0}(8,14){{\bf 2}}
\rput{0}(8,15){{\bf 4}}
\rput{0}(8,16){{\bf 1}}
\rput{0}(8,17){{\bf 2}}
\rput{0}(9,0){{\bf 3}}
\rput{0}(9,1){{\bf 2}}
\rput{0}(9,2){{\bf 4}}
\rput{0}(9,3){{\bf 1}}
\rput{0}(9,4){{\bf 2}}
\rput{0}(9,5){{\bf 3}}
\rput{0}(9,6){{\bf 1}}
\rput{0}(9,7){{\bf 4}}
\rput{0}(9,8){{\bf 2}}
\rput{0}(9,9){{\bf 3}}
\rput{0}(9,10){{\bf 2}}
\rput{0}(9,11){{\bf 4}}
\rput{0}(9,12){{\bf 1}}
\rput{0}(9,13){{\bf 2}}
\rput{0}(9,14){{\bf 4}}
\rput{0}(9,15){{\bf 1}}
\rput{0}(9,16){{\bf 2}}
\rput{0}(9,17){{\bf 4}}
\rput{0}(10,0){{\bf 2}}
\rput{0}(10,1){{\bf 4}}
\rput{0}(10,2){{\bf 1}}
\rput{0}(10,3){{\bf 2}}
\rput{0}(10,4){{\bf 3}}
\rput{0}(10,5){{\bf 1}}
\rput{0}(10,6){{\bf 4}}
\rput{0}(10,7){{\bf 2}}
\rput{0}(10,8){{\bf 3}}
\rput{0}(10,9){{\bf 1}}
\rput{0}(10,10){{\bf 4}}
\rput{0}(10,11){{\bf 1}}
\rput{0}(10,12){{\bf 2}}
\rput{0}(10,13){{\bf 4}}
\rput{0}(10,14){{\bf 1}}
\rput{0}(10,15){{\bf 2}}
\rput{0}(10,16){{\bf 4}}
\rput{0}(10,17){{\bf 3}}
\rput{0}(11,0){{\bf 4}}
\rput{0}(11,1){{\bf 1}}
\rput{0}(11,2){{\bf 2}}
\rput{0}(11,3){{\bf 3}}
\rput{0}(11,4){{\bf 1}}
\rput{0}(11,5){{\bf 4}}
\rput{0}(11,6){{\bf 2}}
\rput{0}(11,7){{\bf 3}}
\rput{0}(11,8){{\bf 1}}
\rput{0}(11,9){{\bf 4}}
\rput{0}(11,10){{\bf 2}}
\rput{0}(11,11){{\bf 3}}
\rput{0}(11,12){{\bf 4}}
\rput{0}(11,13){{\bf 1}}
\rput{0}(11,14){{\bf 2}}
\rput{0}(11,15){{\bf 4}}
\rput{0}(11,16){{\bf 3}}
\rput{0}(11,17){{\bf 2}}
\rput{0}(12,0){{\bf 1}}
\rput{0}(12,1){{\bf 2}}
\rput{0}(12,2){{\bf 4}}
\rput{0}(12,3){{\bf 1}}
\rput{0}(12,4){{\bf 4}}
\rput{0}(12,5){{\bf 2}}
\rput{0}(12,6){{\bf 3}}
\rput{0}(12,7){{\bf 1}}
\rput{0}(12,8){{\bf 4}}
\rput{0}(12,9){{\bf 2}}
\rput{0}(12,10){{\bf 1}}
\rput{0}(12,11){{\bf 4}}
\rput{0}(12,12){{\bf 1}}
\rput{0}(12,13){{\bf 2}}
\rput{0}(12,14){{\bf 4}}
\rput{0}(12,15){{\bf 3}}
\rput{0}(12,16){{\bf 2}}
\rput{0}(12,17){{\bf 4}}
\rput{0}(13,0){{\bf 2}}
\rput{0}(13,1){{\bf 4}}
\rput{0}(13,2){{\bf 1}}
\rput{0}(13,3){{\bf 3}}
\rput{0}(13,4){{\bf 2}}
\rput{0}(13,5){{\bf 3}}
\rput{0}(13,6){{\bf 1}}
\rput{0}(13,7){{\bf 4}}
\rput{0}(13,8){{\bf 2}}
\rput{0}(13,9){{\bf 1}}
\rput{0}(13,10){{\bf 3}}
\rput{0}(13,11){{\bf 2}}
\rput{0}(13,12){{\bf 3}}
\rput{0}(13,13){{\bf 4}}
\rput{0}(13,14){{\bf 3}}
\rput{0}(13,15){{\bf 2}}
\rput{0}(13,16){{\bf 4}}
\rput{0}(13,17){{\bf 1}}
\rput{0}(14,0){{\bf 4}}
\rput{0}(14,1){{\bf 1}}
\rput{0}(14,2){{\bf 3}}
\rput{0}(14,3){{\bf 2}}
\rput{0}(14,4){{\bf 4}}
\rput{0}(14,5){{\bf 1}}
\rput{0}(14,6){{\bf 4}}
\rput{0}(14,7){{\bf 2}}
\rput{0}(14,8){{\bf 1}}
\rput{0}(14,9){{\bf 3}}
\rput{0}(14,10){{\bf 2}}
\rput{0}(14,11){{\bf 1}}
\rput{0}(14,12){{\bf 4}}
\rput{0}(14,13){{\bf 2}}
\rput{0}(14,14){{\bf 1}}
\rput{0}(14,15){{\bf 4}}
\rput{0}(14,16){{\bf 1}}
\rput{0}(14,17){{\bf 2}}
\rput{0}(15,0){{\bf 1}}
\rput{0}(15,1){{\bf 3}}
\rput{0}(15,2){{\bf 2}}
\rput{0}(15,3){{\bf 4}}
\rput{0}(15,4){{\bf 1}}
\rput{0}(15,5){{\bf 3}}
\rput{0}(15,6){{\bf 2}}
\rput{0}(15,7){{\bf 1}}
\rput{0}(15,8){{\bf 3}}
\rput{0}(15,9){{\bf 2}}
\rput{0}(15,10){{\bf 1}}
\rput{0}(15,11){{\bf 4}}
\rput{0}(15,12){{\bf 2}}
\rput{0}(15,13){{\bf 1}}
\rput{0}(15,14){{\bf 4}}
\rput{0}(15,15){{\bf 3}}
\rput{0}(15,16){{\bf 2}}
\rput{0}(15,17){{\bf 4}}
\rput{0}(16,0){{\bf 3}}
\rput{0}(16,1){{\bf 2}}
\rput{0}(16,2){{\bf 4}}
\rput{0}(16,3){{\bf 1}}
\rput{0}(16,4){{\bf 3}}
\rput{0}(16,5){{\bf 2}}
\rput{0}(16,6){{\bf 1}}
\rput{0}(16,7){{\bf 3}}
\rput{0}(16,8){{\bf 2}}
\rput{0}(16,9){{\bf 1}}
\rput{0}(16,10){{\bf 3}}
\rput{0}(16,11){{\bf 2}}
\rput{0}(16,12){{\bf 1}}
\rput{0}(16,13){{\bf 3}}
\rput{0}(16,14){{\bf 2}}
\rput{0}(16,15){{\bf 1}}
\rput{0}(16,16){{\bf 4}}
\rput{0}(16,17){{\bf 1}}
\rput{0}(17,0){{\bf 2}}
\rput{0}(17,1){{\bf 4}}
\rput{0}(17,2){{\bf 1}}
\rput{0}(17,3){{\bf 3}}
\rput{0}(17,4){{\bf 2}}
\rput{0}(17,5){{\bf 1}}
\rput{0}(17,6){{\bf 3}}
\rput{0}(17,7){{\bf 2}}
\rput{0}(17,8){{\bf 1}}
\rput{0}(17,9){{\bf 3}}
\rput{0}(17,10){{\bf 4}}
\rput{0}(17,11){{\bf 1}}
\rput{0}(17,12){{\bf 3}}
\rput{0}(17,13){{\bf 2}}
\rput{0}(17,14){{\bf 1}}
\rput{0}(17,15){{\bf 4}}
\rput{0}(17,16){{\bf 2}}
\rput{0}(17,17){{\bf 3}}
\multirput{0}(17.5,-0.5)(0,1){18}{\psline[linewidth=2pt,linecolor=black]{->}(0.2,-0.2)(-0.1,0.1)}
\uput[0](17.7,-0.7){D1}
\uput[0](17.7,0.3){D2}
\uput[0](17.7,1.3){D3}
\uput[0](17.7,2.3){D4}
\uput[0](17.7,3.3){D5}
\uput[0](17.7,4.3){D6}
\uput[0](17.7,5.3){D7}
\uput[0](17.7,6.3){D8}
\uput[0](17.7,7.3){D9}
\uput[0](17.7,8.3){D10}
\uput[0](17.7,9.3){D11}
\uput[0](17.7,10.3){D12}
\uput[0](17.7,11.3){D13}
\uput[0](17.7,12.3){D14}
\uput[0](17.7,13.3){D15}
\uput[0](17.7,14.3){D16}
\uput[0](17.7,15.3){D17}
\uput[0](17.7,16.3){D18}
\endpspicture
\caption{ \label{prop.12k-6.fig3}
The 4-coloring of $T(18,18)$ after Step~4 in the case $L=4k-2$.
}
\end{figure}
\medskip
\noindent
{\bf Step 4.}
There are only five counter-diagonals to be colored.
All vertices on D$(6k-4)$ are colored $1$ or $2$ using the following simple
rule: the vertex $(x,y)$ is colored $1$ (resp.\ $2$) if the vertex
$(x,y-1)$ is colored $4$ (resp.\ $3$). In particular, those vertices with
$3k-1 \leq x \leq 9k-5$ are colored alike.
On D$(6k)$ we find two vertices admitting a single color from the set $\{1,2\}$:
$(3k-1,3k-1)$ and $(9k-2,9k-4)$, taking colors $c_1$ and $c_2$, respectively.
The vertices satisfying $3k\leq x \leq 9k-4$ are colored $c_1$, and the others
are colored $c_2$.
On D$(6k-3)$ we find two vertices $(3k-1,3k-2)$ and $(9k-4,9k-5)$
that admit a single color from the set $\{3,4\}$. The other vertices are
colored $1$ and $2$ (there is a unique choice for each vertex).
On D$(6k-2)$ there are four vertices admitting a single color from the set
$\{3,4\}$: the vertices $(3k,3k-2)$ and $(3k-1,3k-1)$ are colored $c_1$,
while $(9k-3,9k-5)$ and $(9k-4,9k-4)$ are colored $c_2\neq c_1$. Those
vertices satisfying $3k+1\leq x \leq 9k-2$ are colored $c_2$, and the rest
are colored $c_1$.
The last counter-diagonal D$(6k-1)$ contains seven vertices that admit a
single color: $(3k+1,3k-2)$, $(3k,3k-1)$, $(3k-1,3k)$, $(9k-1,9k-6)$,
$(9k-2,9k-5)$, $(9k-3,9k-4)$, and $(9k-4,9k-3)$. The other vertices are
colored $3$ and $4$ (there is a unique choice for each vertex).
The resulting coloring is depicted in
Figure~\ref{prop.12k-6.fig3}. The contribution to the partial degree of the
new triangles is $2$; the partial degree of $f$ is given by
$\deg f|_R = 6 + 12(k-2) \equiv 6 \pmod{12}$.
\medskip
The above argument proves the base case of our induction. Now we have to
find a four-coloring of the triangulation $T(12k-6,3)$ with $k\geq 2$
such that it has the same top-row coloring $c_{3}$ as $f$
(see Figure~\ref{prop.12k-6.fig3}). We proceed as for the previous cases:
let $t = \lfloor \tfrac{3k-6}{2} \rfloor$; the 4-coloring we need
is defined as follows for $k$ even:
\begin{eqnarray*}
c_0 = c_3 & = & [1423]^{t+1} 1241 2432 4124 1 [3241]^t 3 \\
c_2 & = & 3[1423]^{t+1} 1241 243 2413 [2413]^t 42 \;=\;
3[1423]^{t+1} 1241 243 [2413]^{t+1} 42 \\
c_1 & = & [2314]^{t+1} 2312 4124 3 2413 [2413]^t 4 \;=\;
[2314]^{t+1} 2312 4124 3 [2413]^{t+1} 4\,.
\end{eqnarray*}
If $k$ is odd, then we have:
\begin{eqnarray*}
c_0 =c_3 & = & [1423]^{t+1} 1421 3213 4132 13 [2413]^{t+1} \\
c_2 & = & [3142]^{t+1} 3142 1321 3413 [2413]^{t+1} 42 \;=\;
[3142]^{t+2} 1321 3413 [2413]^{t+1} 42 \\
c_1 & = & [2314]^{t+1} 2314 2132 1341 3 [2413]^{t+1} 4 \;=\;
[2314]^{t+2} 2132 1341 3 [2413]^{t+1} 4 \,.
\end{eqnarray*}
Again, it is easy to verify that this gives a proper 4-coloring of $T(3L,3)$,
and by Proposition~\ref{prop.T_Lx3}, it has zero degree.
This completes the proof of the theorem. \hbox{\hskip 6pt\vrule width6pt height7pt depth1pt \hskip1pt}\bigskip
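\medskip
\noindent
{\bf Remark.} The ``easy to verify'' properness checks above are mechanical. As an illustration only, and under our assumed adjacency convention for $T(N,3)$ (vertex $(x,y)$ joined to $(x\pm 1,y)$, $(x,y\pm 1)$ and $(x\pm 1,y\pm 1)$, coordinates taken modulo the lattice sizes; this convention is our assumption and may differ in detail from the one used above), such a check can be coded in a few lines:
\begin{verbatim}
def is_proper(rows, N):
    # rows = [c_0, c_1, c_2]; the color of vertex (x, y) is rows[y][x],
    # and c_3 = c_0 closes the torus.  Checking the edges to (x+1, y),
    # (x, y+1) and (x+1, y+1) from every vertex covers each edge once.
    col = lambda x, y: rows[y % 3][x % N]
    return all(col(x, y) != col(x + dx, y + dy)
               for x in range(N) for y in range(3)
               for dx, dy in ((1, 0), (0, 1), (1, 1)))
\end{verbatim}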
\medskip
Theorems~\ref{theo.main} and~\ref{theo.asym} imply that WSK is non-ergodic
on any triangulation $T(3L,3M)$ with $3\leq L\leq M$.
Proposition~\ref{prop.T_Lx3} together with Fisk's theorem implies that
WSK is ergodic on any triangulation $T(3,3L)$. The triangulations
$T(6,3L)$ are special in the sense that the ergodicity of WSK depends on
the value of $L$. In particular, WSK is not ergodic for any $T(6,6p)$ with
odd $p$, because of Theorem~\ref{theo.main} [or Theorem~\ref{theo_L=6}]
and Lemma~\ref{lemma.tech}.
By direct computer enumeration of the $299\,146\,792$ proper four-colorings of
$T(6,9)$, we have checked that all of them have zero degree.
We have also checked with a computer that we can transform any of these
colorings into the three-coloring by a {\em finite} number of Kempe changes.
Therefore we have obtained a computer--assisted proof of the following
proposition:
\begin{proposition} \label{prop_tri_L=6x9}
$\Kc(T(6,9),4) = 1$.
\end{proposition}
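\medskip
\noindent
{\bf Remark.} A single Kempe change is simple to implement on any graph. The following sketch is a generic illustration (not the enumeration code actually used for $T(6,9)$): it swaps the two colors on the Kempe component of a chosen vertex, and WSK dynamics consists of repeatedly applying such moves for randomly chosen vertices and color pairs.
\begin{verbatim}
def kempe_change(graph, coloring, v, c1, c2):
    # Swap colors c1 and c2 on the connected component containing v of
    # the subgraph induced by the vertices colored c1 or c2; `graph`
    # maps a vertex to the list of its neighbors.  Properness of the
    # coloring is preserved by construction.
    assert coloring[v] in (c1, c2)
    swap = {c1: c2, c2: c1}
    new = dict(coloring)
    stack, seen = [v], {v}
    while stack:
        u = stack.pop()
        new[u] = swap[coloring[u]]
        for w in graph[u]:
            if w not in seen and coloring[w] in (c1, c2):
                seen.add(w)
                stack.append(w)
    return new

# Toy example: a properly 3-colored 4-cycle.
graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(kempe_change(graph, {0: 1, 1: 2, 2: 1, 3: 3}, 0, 1, 2))
\end{verbatim}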
\medskip
\noindent
{\bf Remark.} Fisk's Theorem~\ref{theo_Fisk} can be used to prove the
ergodicity of the WSK on $T(6,9)$ directly from the fact that all
colorings have zero degree.
\section{Summary and open problems} \label{sec.summary}
We have considered the question of the ergodicity of the
Wang--Swendsen--Koteck\'y dynamics for the zero-temperature
4--state Potts antiferromagnet on triangulations $T(3L,3M)$ of the torus.
This dynamics is equivalent (for the zero-temperature case only) to that
of the Kempe chains studied in Combinatorics. We have obtained two
main results:
1) For the wider family of even triangulations of the torus (which
contains the triangulations $T(3L,3M)$ as a proper subset), we find that the
degree of a 4--coloring modulo 12 is invariant under Kempe changes.
2) For any triangulation $T(3L,3M)$ of the torus with $3\le L \le M$,
there are at least two Kempe equivalence classes for 4 colors. In other
words, the Wang--Swendsen--Koteck\'y dynamics with 4 colors on these
triangulations is non-ergodic. For $L=2$, we can only show that this
dynamics is non-ergodic for $M=2p$ with odd $p$.
In addition to their intrinsic mathematical interest, these results have
great practical importance in Statistical Mechanics. The
triangular-lattice 4--state Potts antiferromagnet is believed to have
a zero temperature critical point \cite[and references therein]{transfer3}.
But we {\em cannot}\/ study the critical properties of this model
using WSK dynamics because of the non-ergodicity of the algorithm. (This also
holds for the single-site Metropolis dynamics, as it corresponds
to a particular subset of moves of the WSK dynamics.)
Indeed, one can simulate the 4--state Potts antiferromagnet at zero temperature
using the WSK algorithm on planar graphs (e.g., a triangular grid with
free boundary conditions); but surface effects cannot be eliminated, and one
has to go to much larger lattice sizes to attain high--precision results.
It is therefore important to devise a new Monte Carlo algorithm for
this model which is ergodic at zero temperature.
There are other open problems related to the ergodicity of the Kempe
dynamics. The case of four colors on triangulations of the torus is rather
special, as we can make use of concepts borrowed from Algebraic Topology.
However, these techniques cannot be applied to the cases of $q=5,6$ colors,
and the ergodicity of the corresponding WSK dynamics is still an open problem.
Finally, let us mention that {\em at zero temperature}, the 4--state Potts
model on the triangular lattice is essentially equivalent to the
3--state Potts model on the kagom\'e lattice. We have found that the
WSK dynamics for this model also fails to be ergodic on most
kagom\'e lattices when embedded on a torus. The details will be published
elsewhere.
\section*{Acknowledgments}
We are indebted to Alan Sokal for his participation in the early stages
of this work, and for his encouragement and useful suggestions later on.
We also wish to thank Eduardo J.S. Villase\~nor for useful discussions.
J.S.\ is grateful for the kind hospitality of
the Physics Department of New York University and the Mathematics
Department of University College London, where part of this work was done; and
also thanks the Isaac Newton Institute for Mathematical Sciences,
University of Cambridge, for hospitality during the programme on
Combinatorics and Statistical Mechanics (January--June 2008).
The authors' research was supported in part by
the ARRS (Slovenia) Research Program P1--0297, by an NSERC Discovery Grant,
and by the Canada Research Chair program (B.M.),
by U.S.\ National Science Foundation grants
PHY--0116590 and PHY--0424082,
and by Spanish MEC grants MTM2005--08618 and MTM2008--03020 (J.S.).
\section{Introduction}
\subsection{Innermost stable circular orbits in General Relativity}
Let us consider a classical non-spinning test particle moving on a stable circular orbit around a central massive body. In Newtonian theory, this orbit can have an arbitrary radius. This follows from the fact that the effective potential of the particle has a minimum for any value of the particle's angular momentum, and as the angular momentum tends to zero, the radius of the stable circular orbit goes to zero as well \cite{Zeld-Novikov1971}, see Fig. \ref{fig-U-Newt}. All circular orbits are stable down to zero radius, so in Newtonian theory there is no minimum radius of a stable circular orbit \cite{Kaplan}.
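Explicitly, the minimum of the Newtonian effective potential $U = -M/r + L^2/2r^2$ shown in Fig. \ref{fig-U-Newt} follows from
\begin{equation}
\frac{dU}{dr} = \frac{M}{r^2} - \frac{L^2}{r^3} = 0 \quad \Rightarrow \quad r_{min} = \frac{L^2}{M}\, , \qquad U(r_{min}) = -\frac{M^2}{2L^2}\, ,
\end{equation}
and $\left. d^2U/dr^2 \right|_{r_{min}} = M^4/L^6 > 0$, so the extremum is indeed a minimum: a stable circular orbit exists for every $L>0$, and its radius $r_{min}=L^2/M$ tends to zero together with $L$.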
\begin{figure}
\centerline{\hbox{\includegraphics[width=0.45\textwidth]{U-Newt.eps}}} \caption{Newtonian effective potential $U = - M/r + L^2/2r^2$ for a test particle moving in the gravitational field of a central body with mass $M$, for different values of $L$ ($L$ is the angular momentum per unit mass, $G=1$). Solid circles show the positions of the minima corresponding to stable circular orbits.} \label{fig-U-Newt}
\end{figure}
In General Relativity (GR) the situation is different. The effective potential has a more complicated form, which depends on the particle's angular momentum; see Fig. \ref{fig-U-Schw} for the potential in the Schwarzschild metric. For large values of the angular momentum the effective potential has two extrema: a maximum, which corresponds to an unstable circular orbit, and a minimum, which corresponds to a stable circular orbit. As the angular momentum decreases, the radii of the unstable and stable circular orbits approach each other. When the angular momentum reaches a boundary value, the two extrema of the effective potential merge into one inflection point. This point corresponds to the minimal possible radius of a stable circular orbit. Such an orbit is called the innermost stable circular orbit (ISCO). A further decrease of the angular momentum leads to a potential without extrema; for these values of the angular momentum no circular orbit, stable or unstable, is possible.
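For the Schwarzschild potential this behaviour can be made explicit. The extrema of $U_{Schw}$ (defined in the caption of Fig. \ref{fig-U-Schw}) satisfy
\begin{equation}
M r^2 - L^2 r + 3 M L^2 = 0\, , \qquad r_{\pm} = \frac{L^2}{2M} \left( 1 \pm \sqrt{1 - \frac{12 M^2}{L^2}} \right),
\end{equation}
so the maximum ($r_-$) and the minimum ($r_+$) exist only for $L > 2\sqrt{3}\,M$. At the boundary value $L = 2\sqrt{3}\,M$ the two roots merge at $r = L^2/2M = 6M$, the inflection point just described.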
\begin{figure}
\centerline{\hbox{\includegraphics[width=0.45\textwidth]{U-Schw1.eps}}} \caption{Effective potential (per unit particle rest mass) for motion in the Schwarzschild metric, $U_{Schw}= \sqrt{\left( 1 -\frac{2M}{r} \right) \left( 1 +\frac{L^2}{r^2} \right)}$, for different values of $L$ ($L$ is the angular momentum per unit particle rest mass, $G=1$). Maxima of the potential are shown by circles, and minima are shown by solid circles.} \label{fig-U-Schw}
\end{figure}
The radius and the other ISCO parameters (total angular momentum, energy, orbital angular frequency) are different in different metrics. For the Schwarzschild background the ISCO radius equals $6M$\footnote{In this paper we use the system of units where $G=c=1$, the Schwarzschild radius $R_S=2M$, and other physical quantities which will be introduced further have the following dimensionalities: $[L]=[M]$, $[J]=[M]$, $[E]=1$, $[a]=[M]$, $[s]=[M]$.}; it was found by Kaplan \cite{Kaplan}, see also \cite{LL2}. In the Kerr space-time circular motion is possible only in the equatorial plane of the BH, and the ISCO radius depends on the direction of motion of the particle relative to the direction of the BH rotation, i.e.\ on whether they co-rotate or counter-rotate. The co-rotating and counter-rotating cases correspond to parallel and antiparallel orientation of the orbital angular momentum of the particle and the BH angular momentum. In the case of co-rotation the ISCO radius becomes smaller than $6M$; in the case of counter-rotation it becomes bigger, see Fig. \ref{fig-schw-kerr}. For the extreme Kerr background the difference between these two variants is quite considerable: we have $9M$ for the antiparallel and $M$ for the parallel orientation. The parameters of the ISCO in the Kerr space-time for a non-spinning particle were obtained in the works of Ruffini \& Wheeler \cite{Ruffini1971} and Bardeen, Press \& Teukolsky \cite{Bardeen1972}. This problem is described at length, for example, in the textbook by Hobson \textit{et al.} \cite{Hobson}.
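For reference, the result of \cite{Bardeen1972} for a non-spinning particle can be written in closed form as
\begin{equation}
\begin{split}
& Z_1 = 1 + \left( 1 - \frac{a^2}{M^2} \right)^{1/3} \left[ \left( 1 + \frac{a}{M} \right)^{1/3} + \left( 1 - \frac{a}{M} \right)^{1/3} \right],\\
& Z_2 = \sqrt{\frac{3a^2}{M^2} + Z_1^2}\, , \qquad
r_{ISCO} = M \left[ 3 + Z_2 \mp \sqrt{(3 - Z_1)(3 + Z_1 + 2 Z_2)} \right],
\end{split}
\end{equation}
with the upper sign for co-rotation and the lower sign for counter-rotation. For $a=0$ this gives $6M$; for $a=M$ it gives $M$ and $9M$, in agreement with the values quoted above.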
\begin{figure}
\centerline{\hbox{\includegraphics[width=0.45\textwidth]{SchwKerr.eps}}} \caption{Innermost stable circular orbits in the Schwarzschild and Kerr metrics. In the Kerr metric the ISCO radius depends on the direction of the orbital motion of the test particle.} \label{fig-schw-kerr}
\end{figure}
\subsection{Spinning particles in General Relativity}
In GR the rotation of the central gravitating body influences the motion of a particle orbiting it. For this reason the orbits of test particles in the Kerr metric differ from those in the Schwarzschild metric. When the test particle itself has spin, the spin also influences the particle's orbit. In particular, the motion of a spinning particle differs from that of a non-spinning one even in the Schwarzschild background.
The problem of the motion of a classical spinning test body in GR was considered, using different techniques, in the papers of Mathisson \cite{Mathisson1937}, Papapetrou \cite{Papapetrou1951a} and Dixon \cite{Dixon1970a, Dixon1970b, Dixon1978}. The equations of motion of a spinning test particle in a given gravitational field were derived in different forms; they are now referred to as the Mathisson-Papapetrou-Dixon equations. From these equations it follows that the motion of the centre of mass and the rotation of the particle are coupled, so that when the particle has spin its orbits differ from the geodesics of a spinless massive particle.
\subsection{The ISCO of spinning particles}
The influence of spin on orbits in the Schwarzschild metric was investigated in the papers of Corinaldesi and Papapetrou \cite{Papapetrou1951b} and of Micoulaut \cite{Micoulaut1967}. The motion of a spinning test particle in the Kerr metric was considered by Rasband \cite{Rasband1973} and by Tod, de Felice \& Calvani \cite{Tod1976}; in particular, the ISCO radius was calculated numerically. See also the papers of Abramowicz \& Calvani \cite{Abramowicz1979} and Calvani \cite{Calvani1980}. Subsequently a number of works on this subject were published \cite{Hojman1977,Suzuki1997, Suzuki1998, SaijoMaeda1998, TanakaMino1996, Apostolatos1996, Semerak1999, Semerak2007, Plyatsko2012a, Plyatsko2012b, Plyatsko2013, Bini2004a, Bini2004b, Bini2011a, Bini2011b, BiniDamour2014, Damour2008, Faye2006, Favata2011, Steinhoff2011, Steinhoff2012, Hackmann2014,Kunst2015,Putten1999}.
The detailed derivation of the equations of motion of a spinning particle in the Kerr space-time is presented in the work of Saijo \textit{et al.} \cite{SaijoMaeda1998}. A method for calculating the ISCO parameters of a spinning particle moving in the Kerr metric is presented in detail in \cite{Favata2011}. Linear-in-spin corrections to the ISCO parameters in the Schwarzschild metric have been found by Favata \cite{Favata2011}.
In the paper of Jefremov, Tsupko \& Bisnovatyi-Kogan \cite{Jefremov2015} we analytically obtained the small-spin corrections to the ISCO parameters in the Kerr metric for an arbitrary value of the Kerr parameter $a$. The cases of a Schwarzschild, a slowly rotating and an extreme Kerr black hole were considered in detail. For a slowly rotating black hole the ISCO parameters were obtained up to terms quadratic in $a$ and in the particle's spin $s$. For the extreme $a=M$ and almost extreme $a=(1-\delta)M$ Kerr BH we succeeded in finding the exact analytical solution for the ISCO parameters for arbitrary spin, with the only restrictions being those connected with the applicability of the Mathisson-Papapetrou-Dixon equations. It was shown that the limiting values of the ISCO radius and frequency for $a=M$ do not depend on the particle's spin, while the values of the energy and total angular momentum do depend on it.
In this work we review some results of our recent research on innermost stable circular orbits \cite{Jefremov2015} and present some new calculations. The ISCO radius, total angular momentum, energy and orbital angular frequency are considered. We calculate the ISCO parameters numerically for different values of the Kerr parameter $a$ and investigate their dependence on both the black hole and test particle spins. We then describe in detail how to calculate analytically the small-spin corrections to the ISCO parameters for arbitrary values of $a$, presenting our formulae in different forms.
\section{The motion of a spinning test body in the equatorial plane of a Kerr black hole} \label{section-MPD}
In the present treatment of the problem of spinning body motion in GR we use the so-called ``pole-dipole'' approximation \cite{Papapetrou1951a}, in the framework of which the motion is described by the Mathisson-Papapetrou-Dixon (MPD) equations \cite{Papapetrou1951a,Mathisson1937,Dixon1970a,Dixon1970b, Dixon1978, SaijoMaeda1998}:
\begin{equation}
\begin{split}
&\frac{Dp^\mu}{D\tau}=-\frac{1}{2}R^{\mu}{}_{\nu \rho \sigma}v^{\nu}S^{\rho \sigma} ,\\
&\frac{DS^{\mu \nu}}{D\tau}=p^\mu v^\nu - p^\nu v^\mu.
\label{MPD}
\end{split}
\end{equation}
Here $D/D \tau$ is a covariant derivative along the particle trajectory, $\tau$ is an affine parameter of the orbit \cite{SaijoMaeda1998}, $R^{\mu}{}_{\nu \rho \sigma}$ is the Riemann tensor, $p^\mu$ and $v^\mu$ are the 4-momentum and 4-velocity of the test body, and $S^{\rho \sigma}$ is its spin tensor. The equations were derived under the assumptions that the characteristic radius of the spinning particle is much smaller than the curvature scale of the background spacetime \cite{SaijoMaeda1998} (see also \cite{Rasband1973}, \cite{Apostolatos1996}) and that the mass of the spinning body is much less than that of the BH.
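Note that for a non-spinning particle, $S^{\rho \sigma}=0$, the second equation of (\ref{MPD}) gives $p^\mu \parallel v^\mu$ and the first reduces to the geodesic equation $Dp^\mu/D\tau = 0$; it is the spin--curvature coupling on the right-hand side that drives the orbit of a spinning body away from a geodesic.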
It is known, however, that these equations are incomplete, because they do not specify which point of the test body is used to measure the spin and the trajectory. Therefore we need an extra condition (a `spin supplementary condition') to fix this point and to close the system of equations \cite{Papapetrou1951b}. We use the condition of Tulczyjew \cite{Tulczyjew1959}, given by
\begin{equation}
p_\mu S^{\mu \nu}=0.
\label{SSC}
\end{equation}
The system of equations (\ref{MPD}) with (\ref{SSC}) in a general space-time admits two conserved quantities: the particle's mass $m^2= -p^{\mu}p_{\mu}$ and the magnitude of its specific spin $s^2= S^{\mu \nu}S_{\mu \nu}/(2m^2)$, see \cite{SaijoMaeda1998}.
We will consider the motion of a spinning particle in the equatorial plane of the Kerr metric ($\theta=\pi/2$),
which is given in Boyer-Lindquist coordinates by \cite{LL2, Hobson}
\begin{equation}
\begin{split}
ds^2 & = - \left( 1 -\frac{2M r}{\Sigma} \right) dt^2 - \\
&- \frac{4M a r \sin^2 \theta}{\Sigma} dt \ d\varphi + \frac{\Sigma}{\Delta}dr^2 + \Sigma \ d\theta^2 + \\
&+ \left( r^2 +a^2 +\frac{2M ra^2 \sin^2\theta}{\Sigma} \right)\sin^2\theta \ d\varphi^2,
\end{split}
\label{}
\end{equation}
where $a$ is the specific angular momentum of the black hole, $\Sigma \equiv r^2 + a^2 \cos^2\theta$, $\Delta \equiv r^2 - 2 Mr + a^2$. In this case there are two additional conserved quantities: the total energy of the particle and the projection of its total angular momentum onto the $z$-axis.
In the case of motion of a spinning particle in the equatorial plane, its spin angular momentum is always perpendicular to this plane \cite{SaijoMaeda1998}. Therefore we can describe the test particle spin by a single constant $s$, the specific spin angular momentum of the particle. The value $|s|$ gives the magnitude of the spin, and $s$ itself is its projection onto the $z$-axis. It is convenient to think of the spin in terms of the particle's spin angular momentum $\mathbf{S_1}=sm\mathbf{\hat{z}}$, which is parallel to the BH spin angular momentum $\mathbf{S_2}=aM\mathbf{\hat{z}}$ when $s>0$, and antiparallel when $s<0$. Here $\mathbf{\hat{z}}$ is a unit vector in the direction of the $z$-axis and $m$ is the mass of the particle \cite{SaijoMaeda1998}, \cite{Favata2011}.
Saijo \textit{et al.} \cite{SaijoMaeda1998} have derived the equations of motion of a spinning test particle in the equatorial plane of a Kerr BH. The equations of motion for the variables $r$, $t$, $\varphi$ in this case have the form \cite{SaijoMaeda1998}
\begin{equation}
\begin{split}
& (\Sigma_s \Lambda_s \dot r)^2 =R_s,\\
& \Sigma_s \Lambda_s \dot t =a\left( 1 +\frac{3Ms^2}{r \Sigma_s}\right)\left[ J -(a+s)E \right] +\\
& +\frac{r^2 +a^2}{\Delta}P_s,\\
& \Sigma_s \Lambda_s \dot \varphi =\left( 1 +\frac{3Ms^2}{r \Sigma_s}\right)\left[ J -(a+s)E \right] +\frac{a}{\Delta}P_s,\\
\end{split}
\label{spin-eqs}
\end{equation}
where
\begin{equation}
\begin{split}
& \Sigma_s= r^2 \left(1 -\frac{Ms^2}{r^3}\right),\\
& \Lambda_s= 1 - \frac{3 M s^2 r [-(a + s) E +J]^2}{\Sigma_s^3},\\
&R_s = P_s^2 - \Delta \left\{ \frac{\Sigma_s^2}{r^2} + [-(a + s) E +J]^2 \right\}, \\
&P_s = \left[r^2 + a^2 + \frac{a s (r + M)}{r} \right] E - \left(a + \frac{M s}{r} \right) J.
\end{split}
\end{equation}
Here $\dot{x} \equiv dx/d \tau$ and the affine parameter $\tau$ is normalised as $p^{\nu} v_{\nu} =-m$ \cite{SaijoMaeda1998}; $E$ is the conserved energy per unit particle rest mass, and $J=J_z$ is the conserved total angular momentum per unit particle rest mass which is collinear to the spin of a BH.
We can write the equation for radial motion in the form \cite{Favata2011}, \cite{Rasband1973}
\begin{equation}
(\Sigma_s \Lambda_s \dot r)^2 =\alpha_s E^2 -2\beta_s E +\gamma_s,
\label{rad-motion}
\end{equation}
where
\begin{equation}
\begin{split}
&\alpha_s = \left[r^2 + a^2 + \frac{a s (r + M)}{r} \right]^2 - \Delta (a + s)^2,\\
&\beta_s = \left[ \left(a + \frac{M s}{r} \right) \left(r^2 + a^2 + \frac{a s (r + M)}{r} \right) - \right. \\
&\left. - \Delta (a + s) \right]J,\\
&\gamma_s = \left(a + \frac{M s}{r} \right)^2 J^2 - \Delta \left[r^2 \left(1 - \frac{M s^2}{r^3}\right)^2 + J^2 \right].
\end{split} \label{abc-spin}
\end{equation}
We can consider the whole right-hand side of (\ref{rad-motion}) as an effective potential. For convenience, we further divide it by $r^4$ and define the effective potential as
\begin{equation}
V_s(r;J,E)= \frac{1}{r^4} (\alpha_s E^2 -2\beta_s E +\gamma_s).
\label{V-spin}
\end{equation}
\section{Numerical calculation of ISCO parameters}
The equations defining circular orbits are given by the system
\begin{equation}
\left\{
\begin{aligned}
V_s &= 0 \, ,\\
\frac{dV_s}{dr} &=0 \, .\\
\end{aligned}
\right.
\label{eff12}
\end{equation}
In order to find the last stable orbit (ISCO) we additionally demand that the second derivative of the effective potential vanishes:
\begin{equation}
\frac{d^2 V_s}{dr^2} = 0.
\label{eff3}
\end{equation}
For convenience we change variables, working not with $r$ and $J$ but with $u=1/r$ and $x=J-aE$, so that the function $V_s(u; x,E)$ is used. In the new variables (see \cite{Jefremov2015} for details), the system of equations determining the ISCO takes the form
\begin{equation}
\left\{
\begin{aligned}
V_s &= 0 \, ,\\
\frac{dV_s}{du} &=0 \, ,\\
\frac{d^2 V_s}{du^2} &= 0 \, .\\
\end{aligned}
\right.
\label{n}
\end{equation}
The explicit form of these equations is \cite{Jefremov2015}:
\begin{equation}
\begin{aligned}
&(1 + 2 a s u^2 - s^2 u^2 + 2 M s^2 u^3) E^2 + \\
&+(-2 a u^2 x + 2 s u^2 x - 6 M s u^3 x - 2 a M s^2 u^5 x) E -\\ &-1 + 2 M u - a^2 u^2 + 2 M s^2 u^3 -\\
&- 4 M^2 s^2 u^4 + 2 a^2 M s^2 u^5 - M^2 s^4 u^6 +\\
&+ 2 M^3 s^4 u^7 - a^2 M^2 s^4 u^8 - u^2 x^2 + 2 M u^3 x^2 +\\
&+ 2 a M s u^5 x^2 + M^2 s^2 u^6 x^2= 0 \, ;\\
&(4 a s u - 2 s^2 u + 6 M s^2 u^2) E^2 + \\
&+ (-4 a u x + 4 s u x - 18 M s u^2 x - 10 a M s^2 u^4 x) E +\\
&+2 M - 2 a^2 u + 6 M s^2 u^2 - 16 M^2 s^2 u^3 +\\
& + 10 a^2 M s^2 u^4 - 6 M^2 s^4 u^5 + 14 M^3 s^4 u^6 -\\
&- 8 a^2 M^2 s^4 u^7 - 2 u x^2 + 6 M u^2 x^2 +\\
&+ 10 a M s u^4 x^2 + 6 M^2 s^2 u^5 x^2=0 \, ;\\
&(4 a s - 2 s^2 + 12 M s^2 u) E^2 + \\
&+ (-4 a x + 4 s x - 36 M s u x - 40 a M s^2 u^3 x)E -\\
&- 2 (a^2 - 6 M s^2 u + 24 M^2 s^2 u^2 - 20 a^2 M s^2 u^3 +\\
&+ 15 M^2 s^4 u^4 - 42 M^3 s^4 u^5 +28 a^2 M^2 s^4 u^6 + x^2- \\
& - 6 M u x^2 - 20 a M s u^3 x^2 - 15 M^2 s^2 u^4 x^2) =0 \, .
\end{aligned}
\label{system}
\end{equation}
These three equations form a closed system for the three ISCO parameters $E$, $x$ and $u$, which depend only on the Kerr parameter $a$ and the particle's spin $s$. This system can be used for the numerical calculation of $E$, $x$, $u$ at the ISCO for given $a$ and $s$; the values of $r$ and $J$ then follow from $r=1/u$ and $J=x+aE$.
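As an illustration, the following minimal Python sketch (our own, using SciPy's \texttt{fsolve}; it is not the code used for the figures) solves the system in the form $V_s=dV_s/du=d^2V_s/du^2=0$, reconstructing $V_s$ from the coefficients (\ref{abc-spin}) in units $M=1$, with the $u$-derivatives taken by finite differences:
\begin{verbatim}
# Sketch: solve V_s = dV_s/du = d^2V_s/du^2 = 0 for (E, x, u)
# at given Kerr parameter a and particle spin s; units M = 1.
import numpy as np
from scipy.optimize import fsolve

def V(u, x, E, a, s):
    r = 1.0/u
    J = x + a*E                      # total angular momentum
    Delta = r**2 - 2.0*r + a**2
    A = (r**2 + a**2 + a*s*(r + 1.0)/r)**2 - Delta*(a + s)**2
    B = ((a + s/r)*(r**2 + a**2 + a*s*(r + 1.0)/r)
         - Delta*(a + s))*J
    C = ((a + s/r)**2)*J**2 \
        - Delta*(r**2*(1.0 - s**2/r**3)**2 + J**2)
    return u**4*(A*E**2 - 2.0*B*E + C)

def isco_system(p, a, s, h=1e-3):    # finite-difference derivatives
    E, x, u = p
    V0, Vp, Vm = V(u, x, E, a, s), V(u+h, x, E, a, s), V(u-h, x, E, a, s)
    return [V0, (Vp - Vm)/(2.0*h), (Vp - 2.0*V0 + Vm)/h**2]

a, s = 0.0, 0.0                      # Schwarzschild, non-spinning check
E, x, u = fsolve(isco_system, [0.94, 3.46, 0.17], args=(a, s))
print(E, 1.0/u, x + a*E)             # -> 0.9428..., 6.0..., 3.4641...
\end{verbatim}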
Another important characteristic of the particle's circular motion is its angular velocity. The orbital angular frequency of the particle at the ISCO, as seen by an observer at infinity, is defined as
\begin{equation}
\Omega \equiv \frac{d \varphi / d\tau}{ dt / d\tau}.
\label{Omega-definition}
\end{equation}
The values of $d \varphi / d\tau$ and $dt / d\tau$ are found from the second and third equations in (\ref{spin-eqs}) by substituting the values of $r$, $E$ and $J$ at the given orbit. To find the ISCO frequency $\Omega_{\mathrm{\, ISCO}}$ we use the ISCO values of $r$, $E$ and $J$, see \cite{Favata2011}.
At given $a$ and $s$ the system leads to solutions for both the co-rotating and the counter-rotating case. Corotation means parallel orientation of the particle's angular momentum $\mathbf{J}$ and the BH spin $\mathbf{a}$, i.e. $J>0$; counter-rotation means antiparallel orientation, $J<0$. The $z$-axis is chosen parallel to the BH spin $\mathbf{a}$, so $a$ is positive or zero. The spin $s$ is the projection of the particle's spin onto the $z$-axis and can be positive (spins of particle and BH parallel) or negative (antiparallel).
Specifying $a$ and $s$, we find $r_{\mathrm{\, ISCO}}$, $E_{\mathrm{\, ISCO}}$ and $J_{\mathrm{\, ISCO}}$ numerically by solving the system (\ref{system}); all parameters are given in units of $M$. Results for the radius are presented in Figures \ref{fig-r-co} and \ref{fig-r-counter}. For $a=0$ and a non-spinning particle ($s=0$) the radius equals $6M$. Increasing $a$ decreases $r_{\mathrm{\, ISCO}}$ in the co-rotating case and increases it in the counter-rotating case.
\begin{figure}
\centerline{\hbox{\includegraphics[width=0.45\textwidth]{r-co.eps}}} \caption{Radius of ISCO for the case of corotation: angular momentum of black hole and total angular momentum of particle are parallel, $J>0$. All values are in units of $M$. See also Figure 3 in paper of Suzuki and Maeda \cite{Suzuki1998}.} \label{fig-r-co}
\end{figure}
\begin{figure}
\centerline{\hbox{\includegraphics[width=0.45\textwidth]{r-counter.eps}}} \caption{Radius of ISCO for the case of counterrotation: angular momentum of black hole and total angular momentum of particle are antiparallel, $J<0$. All values are in units of $M$.} \label{fig-r-counter}
\end{figure}
Calculations of the ISCO energy are shown in Figures \ref{fig-E-co} and \ref{fig-E-counter}. For $a=0$ and $s=0$ the energy equals $2\sqrt{2}/3$.
Calculations of the ISCO total angular momentum are presented in Figures \ref{fig-J-co} and \ref{fig-J-counter}. For $a=0$ and $s=0$ the magnitude of the angular momentum equals $2\sqrt{3}M$.
Calculations of the ISCO angular frequency are shown in Figures \ref{fig-Omega-co} and \ref{fig-Omega-counter}. For $a=0$ and $s=0$ the magnitude of the angular frequency equals $1/(6\sqrt{6}M)$.
\begin{figure}
\centerline{\hbox{\includegraphics[width=0.45\textwidth]{E-co.eps}}} \caption{Energy of ISCO for the case of corotation.} \label{fig-E-co}
\end{figure}
\begin{figure}
\centerline{\hbox{\includegraphics[width=0.45\textwidth]{E-counter.eps}}} \caption{Energy of ISCO for the case of counterrotation.} \label{fig-E-counter}
\end{figure}
\begin{figure}
\centerline{\hbox{\includegraphics[width=0.45\textwidth]{J-co.eps}}} \caption{Total angular momentum of ISCO for the case of corotation.} \label{fig-J-co}
\end{figure}
\begin{figure}
\centerline{\hbox{\includegraphics[width=0.45\textwidth]{J-counter-modulus.eps}}} \caption{Total angular momentum (absolute value) of ISCO for the case of counterrotation.} \label{fig-J-counter}
\end{figure}
\begin{figure}
\centerline{\hbox{\includegraphics[width=0.45\textwidth]{Omega-co.eps}}} \caption{Orbital angular frequency of ISCO for the case of corotation.} \label{fig-Omega-co}
\end{figure}
\begin{figure}
\centerline{\hbox{\includegraphics[width=0.45\textwidth]{Omega-counter-modulus.eps}}} \caption{Orbital angular frequency (absolute value) of ISCO for the case of counterrotation.} \label{fig-Omega-counter}
\end{figure}
\section{Analytical calculation of ISCO parameters, small-spin corrections}
In \cite{Jefremov2015} we derived the linear small-spin corrections to the ISCO parameters for arbitrary $a$. There the parameters are written using the variable $u_0=1/r_0$, the inverse ISCO radius of a non-spinning particle; the scheme of the calculation is presented after formula (61) of \cite{Jefremov2015}. Here we rewrite all formulae in terms of $r_0$.
The scheme for calculating the ISCO parameters with linear small-spin corrections is as follows.
(i) Radius of the ISCO. We solve the equation for the ISCO radius $r_0$ of a non-spinning particle,
\begin{equation}
r_0^2 - 6Mr_0 - 3a^2 \mp 8a \sqrt{Mr_0} =0,
\label{eq-r0}
\end{equation}
and find $r_0$. In this equation and in all formulae below the upper sign corresponds to the antiparallel orientation of the particle's angular momentum $\mathbf{J}$ and the BH spin $\mathbf{a}$ (counter-rotation, $J<0$), and the lower sign to the parallel one (corotation, $J>0$). The solution of (\ref{eq-r0}) can be found analytically, see \cite{Bardeen1972}. To avoid large formulae, we write all other unknowns not as explicit functions of $a$ but as explicit functions of $a$ and $r_0$, keeping in mind that $r_0$ can be found from Eq.~(\ref{eq-r0}) for arbitrary $a$. Note that the representations of the unknowns via $r_0$ given below can be rewritten in different forms using (\ref{eq-r0}); see \cite{Jefremov2015} for different representations.
The linear correction is
\begin{equation}
r_1 = \frac{4}{r_0} (a \pm \sqrt{Mr_0}) .
\end{equation}
Finally, the ISCO radius for given $a$ is
\begin{equation}
r_{\mathrm{\, ISCO}} = r_0 + s r_1 \, .
\end{equation}
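A short numerical sketch of step (i) (our own Python illustration, units $M=1$; the bracketing intervals are our choice and cover $0 \le a \le M$):
\begin{verbatim}
# Sketch: r_ISCO = r0 + s*r1; sign = -1 for corotation (lower sign,
# J > 0), sign = +1 for counter-rotation (upper sign, J < 0); M = 1.
import numpy as np
from scipy.optimize import brentq

def r_isco_linear(a, s, sign=-1):
    f = lambda r: r**2 - 6.0*r - 3.0*a**2 - sign*8.0*a*np.sqrt(r)
    r0 = brentq(f, 0.25, 6.5) if sign < 0 else brentq(f, 5.9, 12.1)
    r1 = (4.0/r0)*(a + sign*np.sqrt(r0))
    return r0 + s*r1

print(r_isco_linear(0.0, 0.1))   # 6 - 2*sqrt(2/3)*0.1 = 5.8367...
print(r_isco_linear(1.0, 0.1))   # extreme Kerr, corotation: r1 = 0
\end{verbatim}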
(ii) The energy at the ISCO is:
\begin{equation}
E_{\mathrm{\, ISCO}} = E_0 + s E_1 ,
\end{equation}
\begin{equation}
E_0 = \sqrt{1-\frac{2M}{3r_0}} ,
\end{equation}
\begin{equation}
E_1 = \pm \frac{1}{\sqrt{3}} \frac{M}{r_0^2} .
\end{equation}
(iii) The total angular momentum at the ISCO is:
\begin{equation}
J_{\mathrm{\, ISCO}} = J_0 + s J_1 ,
\end{equation}
\begin{equation}
J_0 = \mp \frac{r_0}{\sqrt{3}} + a \sqrt{1-\frac{2M}{3r_0}} ,
\end{equation}
\begin{equation}
J_1 = \frac{2\sqrt{M} r_0^{3/2} \pm a(3r_0 +M) }{\sqrt{3} r_0^2} .
\end{equation}
(iv) The orbital angular frequency at the ISCO is:
\begin{equation}
\Omega_{\mathrm{\, ISCO}} = \Omega_0 + s \Omega_1 ,
\end{equation}
\begin{equation}
\Omega_0 = \frac{\sqrt{M}}{a\sqrt{M} \mp r_0^{3/2} } ,
\end{equation}
\begin{equation}
\Omega_1 = \frac{9 \sqrt{M} ( \sqrt{r_0 M} \pm a ) }{2 \sqrt{r_0} \left(r_0^{3/2} \mp a \sqrt{M} \right)^2 } .
\end{equation}
In the work of Favata \cite{Favata2011} the shift in the ISCO due to the spin of the test particle was calculated numerically; see the right panel of Fig.~2 and the sixth column of Table~I in \cite{Favata2011}. Our analytical results (after an appropriate change of variables) agree with those calculations: for a given $a$ (first column of Table~I; note that the BH spin can be negative in that Table, corresponding to counterrotation in our notation) the value of $\Omega_0$ reproduces the numbers in the second column, and $\Omega_1/\Omega_0$ reproduces the numbers in the sixth column.
Now let us consider particular cases.
Results for the Schwarzschild case are \cite{Jefremov2015}:
\begin{equation}
\begin{aligned}
J_{\mathrm{\, ISCO}}&=2 \sqrt{3} M + \frac{\sqrt{2}}{3}s_J ,\\
E_{\mathrm{\, ISCO}}&= \frac{2 \sqrt{2}}{3} -\frac{1}{36 \sqrt{3}}\frac{s_J}{M} ,\\
r_{\mathrm{\, ISCO}}&= 6M -2\sqrt{\frac{2}{3}}s_J ,\\
\Omega_{\mathrm{\, ISCO}}&= \frac{1}{6 \sqrt{6}M} +\frac{s_J}{48 M^2} .\\
\end{aligned}
\label{Schw}
\end{equation}
Here, instead of $s$, which is the projection onto the $z$-axis and does not unambiguously correspond to any physical direction in the Schwarzschild case, we use $s_J$, the projection of the particle's spin upon the direction of $\mathbf{J}$; it is positive when the particle's spin is parallel to $\mathbf{J}$ and negative when it is antiparallel. The value of $J$ is taken to be positive in this case. Small-spin corrections for the Schwarzschild metric were derived by Favata \cite{Favata2011}.
For a Kerr BH with slow rotation ($a \ll M$) we have obtained the corrections up to quadratic terms \cite{Jefremov2015}:
\begin{equation}
\begin{aligned}
J_{\mathrm{\, ISCO}}&= \mp 2\sqrt{3}M -\frac{2 \sqrt{2}}{3}a +\frac{\sqrt{2}}{3}s \pm\frac{11}{36 \sqrt{3}}\frac{a}{M}s \pm\\&\pm \frac{4 \sqrt{3} M}{27}\left( \frac{a}{M} \right)^2 \pm \frac{1}{4 M \sqrt{3}}s^2,\\
E_{\mathrm{\, ISCO}}&= \frac{2 \sqrt{2}}{3} \pm\frac{1}{18 \sqrt{3}}\frac{a}{M} \pm \frac{1}{36 \sqrt{3}}\frac{s}{M} -\frac{\sqrt{2}}{81}\frac{a}{M}\frac{s}{M} -\\&-\frac{5}{162 \sqrt{2}}\left( \frac{a}{M} \right)^2 -\frac{5}{432 \sqrt{2}M^2}s^2 ,\\
r_{\mathrm{\, ISCO}}&= 6 M \pm 4 \sqrt{\frac{2}{3}}a \pm 2 \sqrt{\frac{2}{3}}s + \frac{2}{9} \frac{a}{M}s -\\&-\frac{7M}{18}\left( \frac{a}{M} \right)^2 -\frac{29}{72M}s^2 ,\\
\Omega_{\mathrm{\, ISCO}} &= \mp \frac{1}{6 \sqrt{6}M} +\frac{11}{216 M}\frac{a}{M} +\frac{1}{48 M^2}s \, \mp\\
&\mp \left( \frac{1}{18 \sqrt{6}M}\frac{as}{M^2} + \frac{59}{648 \sqrt{6}M} \frac{a^2}{M^2} + \right. \\
&\left. + \frac{97}{3456 \sqrt{6}M}\frac{s^2}{M^2} \right).
\end{aligned}
\label{Kerr-slow1}
\end{equation}
For an extreme Kerr BH ($a=M$) in the case of counterrotation we have obtained \cite{Jefremov2015}:
\begin{equation}
\begin{aligned}
J_{\mathrm{\, ISCO}}&= -\frac{22 \sqrt{3}}{9}M +\frac{82\sqrt{3}}{243}s,\\
E_{\mathrm{\, ISCO}}&=\frac{5 \sqrt{3}}{9} +\frac{\sqrt{3}}{243}\frac{s}{M},\\
r_{\mathrm{\, ISCO}}&= 9M +\frac{16}{9}s,\\
\Omega_{\mathrm{\, ISCO}} &= -\frac{1}{26M} +\frac{3s}{338 M^2}.\\
\end{aligned}
\label{Kerr-counter}
\end{equation}
For the case of corotation we have considered a nearly extreme Kerr BH with $a=(1-\delta)M$, $\delta \ll 1$, and have obtained:
\begin{equation}
\begin{aligned}
J_{\mathrm{\, ISCO}}&=\left( \frac{2}{\sqrt{3}} +\frac{2 \times 2^{2/3} \delta^{1/3}}{\sqrt{3}} \right)M +\\
&+\left( -\frac{2}{\sqrt{3}} +\frac{4 \times 2^{2/3}\delta^{1/3}}{\sqrt{3}} \right)s,\\
E_{\mathrm{\, ISCO}}&=\left( \frac{1}{\sqrt{3}} +\frac{2^{2/3} \delta^{1/3}}{\sqrt{3}} \right) +\\
&+\left( -\frac{1}{\sqrt{3}} +\frac{2 \times 2^{2/3}\delta^{1/3}}{\sqrt{3}} \right) \frac{s}{M},\\
r_{\mathrm{\, ISCO}}&= \left( 1 +2^{2/3}\delta^{1/3} \right)M -2 \times 2^{2/3} \delta^{1/3}s,\\
\Omega_{\mathrm{\, ISCO}} &= \frac{1}{2M} - \frac{3 \times 2^{2/3} \delta^{1/3}}{8M} + \frac{9 \times 2^{2/3} \delta^{1/3}}{16M^2} s .\\
\end{aligned}
\label{Kerr-co}
\end{equation}
We see that in the case $a=M$ ($\delta=0$) the linear-in-spin corrections are absent from the formulae for the ISCO radius and frequency. This was also demonstrated in \cite{Abramowicz1979}. In \cite{TanakaMino1996} it was noticed, on the basis of numerical calculations, that in the extreme Kerr background in the parallel case the magnitude of the test body's spin does not influence the radius of the last stable orbit, which always remains equal to $M$. We have succeeded in proving this analytically. We have obtained the exact (in spin) values of the ISCO parameters for a nearly extreme Kerr BH in the case of corotation \cite{Jefremov2015}:
\begin{equation}
\begin{aligned}
&J_{\mathrm{\, ISCO}}= 2 M E_{\mathrm{\, ISCO}} \, ,\\
&E_{\mathrm{\, ISCO}}= \frac{M^2-s^2}{M^2\sqrt{3 +6s/M}} +\\
&+ \frac{(M^2 -s^2)^{1/3}(2M +s)^{2/3} Z(M,s)^{2/3}}{\sqrt{3}M^{5/2}(M +2s)^{3/2}} \, \delta^{1/3},\\
&r_{\mathrm{\, ISCO}}=M +\frac{M(M^2 -s^2)^{1/3}(2M +s)^{2/3}}{Z(M,s)^{1/3}} \, \delta^{1/3}, \\
&\Omega_{\mathrm{\, ISCO}} = \frac{1}{2M} -\\
&-\frac{3(M -s)^{1/3}(M +2s)}{4(2M +s)^{1/3}(M +s)^{2/3} Z(M,s)^{1/3}}\, \delta^{1/3} ,\\
&Z(M,s) \equiv M^4 +7M^3s +9M^2 s^2 +11M s^3 -s^4 .\\
\end{aligned}
\label{Kerr-delta}
\end{equation}
From this solution we see that for $\delta=0$ ($a=M$) the radius and the angular frequency are independent of the particle's spin $s$ while the values of energy and total angular momentum depend on it.
It is easily seen from the exact solution (\ref{Kerr-delta}) that for the extreme Kerr BH the energy and angular momentum diverge as $s \rightarrow -M/2$. This shows that the test-body approximation breaks down at such large values of $s$. Of course, the limits of applicability of the test-body approximation depend on $a$, but we emphasize that all results beyond the approximation $s \ll M$ should be treated with great care, see \cite{SaijoMaeda1998}, \cite{Jefremov2015}.
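A small numerical illustration of this divergence (a Python sketch of our own, units $M=1$), evaluating the exact energy from (\ref{Kerr-delta}):
\begin{verbatim}
# Sketch: exact-in-spin ISCO energy of the extreme Kerr BH
# (delta = 0), showing the divergence as s -> -1/2; units M = 1.
import numpy as np

def E_isco_extreme(s, delta=0.0):
    Z = 1.0 + 7.0*s + 9.0*s**2 + 11.0*s**3 - s**4
    lead = (1.0 - s**2)/np.sqrt(3.0 + 6.0*s)
    corr = ((1.0 - s**2)**(1/3)*(2.0 + s)**(2/3)*Z**(2/3)
            / (np.sqrt(3.0)*(1.0 + 2.0*s)**1.5))*delta**(1/3)
    return lead + corr

for s in (0.0, -0.3, -0.45, -0.49):
    print(s, E_isco_extreme(s))   # grows without bound as s -> -0.5
\end{verbatim}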
For a spinless particle the conserved quantity is the orbital angular momentum $L_z$, whereas for a spinning particle the conserved quantity is the total angular momentum $J_z$, which includes spin terms \cite{SaijoMaeda1998}. In this case the `orbital angular momentum' at infinity, $L_z$, can also be introduced as $L_z=J_z - s$, see \cite{SaijoMaeda1998}.
In \cite{Jefremov2015} we present formulae for the small-spin linear corrections to $E$, $J$ and $\Omega$ at a circular orbit of given radius $r$. It can be seen that for $r \rightarrow \infty$ the total angular momentum equals $J=J_0 + s$, where $J_0$ is the total angular momentum of a non-spinning particle. In the spinless case it consists of the orbital part only, so $J_0=L$. This justifies introducing the orbital momentum at infinity simply as the difference between $J$ and $s$.
For circular orbits at finite radius (in particular, the ISCO) the orbital angular momentum cannot be defined in such a simple way \cite{SaijoMaeda1998}. But the test-body approximation $s \ll M$ allows us to use the terms `corotation' and `counterrotation' for the orbital motion.
If we tentatively use the orbital angular momentum in the form $L_z=J_z - s$ for the ISCO, we get for the Schwarzschild case:
\begin{equation}
L_{\mathrm{\, ISCO}} = 2 \sqrt{3} M - \left(1- \frac{\sqrt{2}}{3} \right) s_J .
\label{L-orb}
\end{equation}
We see from (\ref{L-orb}) and (\ref{Schw}) that increasing a positive $s_J$ increases $r_{\mathrm{\, ISCO}}$ and decreases the orbital angular momentum $L_{\mathrm{\, ISCO}}$. At the same time the total angular momentum becomes larger, but only due to the increase of its spin part.
\section{Binding energy in the innermost stable circular orbit}
Let us define the efficiency $\varepsilon$ as the fraction of the rest-mass
energy that can be released in the transition from rest at infinity to
the innermost stable circular orbit \cite{Hobson}; in our units it is given by
\begin{equation}
\varepsilon = 1 - E_{\mathrm{\, ISCO}} = E_{\mathrm{\, bind}}.
\end{equation}
Note that in our notation $E$ is the energy per unit particle rest mass; in other words, the efficiency is the binding energy $E_{\mathrm{\, bind}}$ at the ISCO per unit mass. The efficiency shows how much energy can be released by radiation during the accretion process. For the Schwarzschild black hole and a non-spinning test particle the efficiency equals 0.057, and it reaches its maximum of 0.42 for the extreme Kerr black hole \cite{Hobson}, \cite{MTW}.
In the case of spinning test particles we can easily calculate the efficiency using the expressions for $E_{\mathrm{\, ISCO}}$ presented in (\ref{Schw}) and (\ref{Kerr-delta}), see Fig.~\ref{fig-binding-energy}.
\begin{figure}
\centerline{\hbox{\includegraphics[width=0.45\textwidth]{bind-energy.eps}}} \caption{The efficiency (binding energy) of a spinning test particle at the ISCO. For the extreme black hole the binding energy can be smaller or larger than 0.42, depending on the spin orientation.} \label{fig-binding-energy}
\end{figure}
For the Schwarzschild black hole (see (\ref{Schw})) the efficiency is:
\begin{equation}
\varepsilon = 1 - \frac{2 \sqrt{2}}{3} + \frac{1}{36 \sqrt{3}}\frac{s_J}{M} .
\end{equation}
For the extreme Kerr black hole in the case of orbital corotation (see (\ref{Kerr-co})), the efficiency is:
\begin{equation}
\varepsilon = 1 - \frac{1}{\sqrt{3}} + \frac{1}{\sqrt{3}} \frac{s}{M} .
\end{equation}
This means that when the particle's spin is parallel to the particle's total angular momentum and to the black hole spin, the efficiency can exceed 42\%. Note that the ISCO radius and angular frequency do not depend on the spin in the case of an extreme Kerr BH.
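As a simple numerical cross-check of these expressions (a Python sketch with an illustrative spin $s=0.1M$ of our choosing; units $M=1$):
\begin{verbatim}
# Sketch: ISCO efficiency for an illustrative spin s = 0.1; M = 1.
import numpy as np

s = 0.1
eps_schw = 1.0 - 2.0*np.sqrt(2.0)/3.0 + s/(36.0*np.sqrt(3.0))
eps_kerr = 1.0 - 1.0/np.sqrt(3.0) + s/np.sqrt(3.0)
print(eps_schw)   # 0.0588... (0.0572 for s = 0)
print(eps_kerr)   # 0.4804... (0.4226 for s = 0)
\end{verbatim}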
\section*{Acknowledgments}
The work of GSBK and OYuT was partially supported by the Russian Foundation for Basic Research Grant No. 14-02-00728 and the Russian Federation President Grant for Support of Leading Scientific Schools, Grant No. NSh-261.2014.2. GSBK acknowledges partial support by the Russian Foundation for Basic Research Grant No. OFI-M 14-29-06045.
\section{Introduction}
Being the unique standard model (SM) fermion with a mass of the
electroweak symmetry breaking scale, the top quark may be closely
related to the TeV scale new physics. In particular, many of the new
physics candidates predict a $t\,\bar t$ ($t\,t$) resonance, i.e., a
heavy particle that decays to $t\,\bar t$ ($t\,t$). The $t\,\bar{t}$
resonance occurs, for example, in Technicolor~\cite{Hill:2002ap},
Topcolor~\cite{topcolor}, Little Higgs~\cite{lh}, and Randall-Sundrum
(RS) models~\cite{Randall:1999ee}, while the $t\,t$ resonance exists
in the grand unified theory in the warped
extra-dimension~\cite{WarpedGUT}. Therefore, it is crucial to study
$t\,\bar t$ ($t\,t$) invariant mass distributions
and look for possible resonances at the ongoing Large Hadron Collider
(LHC), which may provide us with the opportunity of revealing new physics beyond the SM.
The top quark decays almost exclusively to a $b$ quark and a $W$
boson. Depending on how the $W$ boson decays, events with a pair of tops
can be divided into the all-hadronic, the semileptonic and
the dilepton channels. The all-hadronic channel, in which both $W$'s
decay hadronically, has the largest branching ratio of 36/81, but
suffers from the largest background since all observed objects are
jets. The semileptonic channel, in which one $W$ decays hadronically
and the other one decays leptonically, has a significant
branching ratio of 24/81 and also smaller background. Although
there is one neutrino in the event, only its longitudinal momentum is
unknown, which can be easily extracted using the $W$ mass
constraint. Therefore, this has been thought to be the best channel
for discovering a $t\,\bar t$ resonance, and most existing studies
have been concentrating on this
channel~\cite{Semileptonic,Baur:2007ck,Baur:2008uv}. The dilepton
channel, in which both $W$'s decay leptonically,
has been thought to be very challenging and unpromising. The reason is twofold: first, not counting $\tau$'s, the branching ratio for this channel is only 4/81; second, because there are two neutrinos in the final state, the event reconstruction is much more difficult than in the semileptonic channel.
Nevertheless, the dilepton channel also has its own merits,
making it more than complementary to the other two channels. An
obvious advantage is that it has much smaller SM backgrounds. More
importantly, the two leptons in the decay products carry information
that is unavailable in the other channels. First, it is well-known
that the charged lepton is the most powerful analyzer of the top
spin~\cite{Jezabek:1988ja,Willenbrock:2002ta}, because its angular
distribution is $100\%$ correlated with the top polarization in the top
rest frame. The down-type quark from hadronic decay of the $W$ boson has
an equal power, but it is indistinguishable from the up-type quark in
a collider detector. If the $b$ jet from the top decay is not tagged, the
ambiguity is even worse. Only the dilepton channel is free from this
ambiguity.
Secondly, the charges of the two leptons are both measurable, which
makes the same-sign dilepton channel ideal for studying $t\,t$ or
$\bar t\,\bar t$ production, since it has very
small SM backgrounds. Note that although we are discussing resonances,
the analysis applies equally to any events with two same-sign top
quarks, as long as there are no missing particles other than the two
neutrinos. For example, it can be used to study the excess of $t\,t$
or $\bar t\,\bar t$ production in flavor violating
processes~\cite{Mohapatra:2007af,MFV,Gao:2008vv}. On the contrary, the
charge information in the other two channels is
unavailable\footnote{It is possible to identify the charges of the
$b$-jets but only at a few percent level.}, and hence a more
significant event rate is needed to see an excess over the SM $t\,\bar
t$ background.
Motivated by the above observations, we perform a model
independent study on $t\,\bar t$ ($t\,t$) resonances in the dilepton
channel. The crucial step of this analysis
is the event reconstruction, which we describe in the next section. We
will focus on the most challenging case when the resonance is heavy
($\ge 2 ~\mbox{TeV}$) and discuss a few related difficulties and their
solutions. As an illustration, the method is applied to a KK gluon in
the RS model with a mass of 3 TeV. In Section \ref{sec:discovery}, we
estimate the discovery limits of representative resonances with
different spins. It is shown that despite the smaller
branching ratio, the discovery limits from this channel compete with
those from the semileptonic channel. In Section \ref{sec:spin}, we present
the method for spin measurements and estimate the minimal number of
events needed to distinguish the spin of the resonance. Section
\ref{sec:discussion} contains a few discussions and the conclusion.
\section{Event Reconstruction}
\label{sec:reconstruction}
\subsection{The Method}
In this section, we discuss the method for reconstructing the $t\,\bar t$
system in the dilepton channel at the LHC. The process we consider is
$pp\rightarrow \Pi\rightarrow t\bar t\rightarrow b\bar b W^{+}
W^{-}\rightarrow b\bar b \ell^+\ell^- \nu_{\ell}\bar{\nu}_{\ell}$,
with $\Pi$ a $t\,\bar{t}$ resonance and $\ell=e\,,\mu$. There can be
other particles associated with the $\Pi$ production, such as initial
state radiation, but in our analysis it is crucial that the missing
momentum is only from the two neutrinos. The method described in
this section can also be applied to $t\,t$ resonances.
Assuming tops and $W$'s are on-shell and their masses are known, the
4-momenta of the neutrinos can be solved from the mass shell and the
measured missing transverse momentum constraints \cite{ATLAS_ttbar}:
\begin{eqnarray}
&&p_\nu^2\,=\,p_{\bar\nu}^2\,=\,0\,,\nonumber\\
&&(p_\nu+p_{l^+})^2\,=\,(p_{\bar\nu}+p_{l^-})^2\,=\,m_W^2\,,\nonumber\\
&&(p_\nu+p_{l^+}+p_b)^2\,=\,(p_{\bar\nu}+p_{l^-}+p_{\bar b})^2\,=\,m_t^2\,,\nonumber\\
&&p_\nu^x+p_{\bar\nu}^x\,=\,\slashchar{p}^x,\; \; \quad
p_\nu^y+p_{\bar\nu}^y\,=\,\slashchar{p}^y \,,
\label{eq:system}
\end{eqnarray}
where $p_{i}$ is the four-momentum of the particle $i$. We have $8$
unknowns from the two neutrinos' four-momenta and $8$
equations. Therefore, Eqs.~(\ref{eq:system}) can be {\it solved} for
discrete solutions.
This system can be reduced to two quadratic equations plus 6 linear
equations~\cite{Sonnenschein:2006ud,Cheng:2007xv}. In general, the
system has 4 complex solutions, which introduces an
ambiguity when more than one solution is real and physical. After
solving for $p_\nu$ and $p_{\bar\nu}$, it is
straightforward to obtain $p_t$ and $p_{\bar t}$ and calculate the
$t\,\bar t$ invariant mass $M_{\Pi}^{2}=(p_{t}+p_{\bar t})^{2}$.
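For illustration, a minimal numerical sketch of this step (our own Python code, not the implementation used in the analysis) is given below. A real-valued root finder recovers only the real solutions, whereas the quadratic reduction mentioned above also yields the complex ones; lepton masses are neglected and the mass values are illustrative:
\begin{verbatim}
# Sketch: numerically solve Eqs. (eq:system) for the two neutrino
# 3-momenta (energies fixed by masslessness).
import numpy as np
from scipy.optimize import fsolve

MW, MT = 80.4, 173.0        # GeV; illustrative on-shell masses

def mdot(a, b):             # Minkowski product, signature (+,-,-,-)
    return a[0]*b[0] - np.dot(a[1:], b[1:])

def four(p3):               # massless four-vector from a 3-momentum
    return np.concatenate(([np.linalg.norm(p3)], p3))

def eqs(q, lp, lm, b, bb, mex, mey):
    nu, nub = four(q[:3]), four(q[3:])
    return [nu[1] + nub[1] - mex,                    # missing px
            nu[2] + nub[2] - mey,                    # missing py
            mdot(nu + lp, nu + lp) - MW**2,
            mdot(nub + lm, nub + lm) - MW**2,
            mdot(nu + lp + b, nu + lp + b) - MT**2,
            mdot(nub + lm + bb, nub + lm + bb) - MT**2]

def solve_event(lp, lm, b, bb, mex, mey, ntry=50, seed=1):
    rng, sols = np.random.default_rng(seed), []
    for _ in range(ntry):                 # scan GeV-scale seeds
        q0 = rng.normal(scale=200.0, size=6)
        q, _, ok, _ = fsolve(eqs, q0, args=(lp, lm, b, bb, mex, mey),
                             full_output=True)
        if ok == 1 and not any(np.allclose(q, w, atol=1.0) for w in sols):
            sols.append(q)
    return sols                           # up to four real solutions
\end{verbatim}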
The system in Eqs.~(\ref{eq:system}) has been applied to measure the top
mass \cite{ATLAS_ttbar,CMS_ttbar} and to study the spin correlations in
$t\,\bar t$ decays~\cite{CMS_ttbar,Beneke:2000hk}. These studies focus
on low center of mass energies below 1~TeV and
involve only the SM $t\,\bar t$ production. We will concentrate on the
heavy-resonance case when $t$, $\bar t$ and their decay products are
highly boosted. There are a few complications in disentangling new
physics contributions from the SM, as discussed below.
The first complication comes from the fact that for a highly boosted
top, its decay products are collimated and therefore difficult to identify
as isolated objects. In other words, all decay products of the top,
in either the hadronic or the leptonic decay channel,
form a fat ``top jet''. This interesting fact has triggered recent
studies for developing new methods to distinguish top jets from
ordinary QCD jets~\cite{Boostedtop1,Boostedtop2}. For the dilepton
channel, in order to keep as many signal events as possible, we
include
both isolated leptons and non-isolated muons. Non-isolated muons can
be measured in the muon chamber, while non-isolated electrons are
difficult to distinguish from the rest of the jet and
therefore not included in our analysis. This is very different from
the low center-of-mass energy case where two isolated leptons can
often be identified.
Once non-isolated muons are included, we have to consider the SM
non-$t\,\bar t$ backgrounds such as $b\,t$ and $b\,b$ productions with
one or two muons coming from $b$ or $c$ hadron decays. Since muons
from hadronic decays are relatively softer, we will use a high
$p_T>100~\mbox{GeV}$ cut for the non-isolated muons to reduce the
background. This is similar to using the jet energy fraction carried
by the muon as a cut~\cite{Boostedtop1}. Similarly, it is unnecessary
to require one or two $b$-jet taggings, which may have a small
efficiency at high energies~\cite{btag}. Instead, we consider all
signal and background events with two high-$p_{T}$ jets. Besides
high-$p_{T}$ cuts, the mass-shell constraints in
Eqs.~(\ref{eq:system}) are also efficient for reducing the
background/signal ratio.
The second complication is caused by wrong but physical
solutions. Part of the wrong solutions comes from wrong
combinatorics: either one or more irrelevant jets or leptons from
sources other than $t\,\bar{t}$ are included in the reconstruction
equations, or the relevant jets and leptons are identified but
combined in a wrong way. Even when we have identified the correct
objects and combinatorics, there can be wrong solutions due to the
non-linear nature of the equation system. As mentioned before, there
could be up to three wrong solutions in addition to the correct
one. The wrong solutions will change the $t\,\bar t$ invariant mass
distribution. This is not a severe problem for a light ($<1$ TeV)
resonance because both signals and
backgrounds can be large. The wrong solutions will smear but not destroy
the signal peak. For heavy resonances in the multi-TeV range, the
signal cross section is necessarily small due
to the rapid decrease of the parton distribution functions (PDFs). This
would not be a problem if we obtained only the correct solution, since
the decrease would affect both the signals and the
backgrounds. However, when a wrong solution is present, it will shift
the $t\,\bar t$ invariant mass to a different value from the correct
one, either lower or higher. Due to the large cross section of the SM
$t\,\bar t$ production in the low invariant mass region, even if a
small fraction of masses are shifted to the higher region, the signal
will be swamped.
Wrong solutions exist because the momenta of the neutrinos are
unknown except for the sums of their transverse momenta. Clearly, for a
$t\,\bar t$ invariant mass shifted to be higher than the correct value,
the solved neutrino momenta are statistically larger than their true
values. Therefore, we can reduce the fraction of wrong
solutions by cutting off solutions with unnaturally large neutrino
momenta. This is achieved by two different cuts. First, we can cut off
``soft'' events before reconstruction. That is, we apply a cut on
the cluster transverse mass $m_{T_{cl}}$ defined from the
measured momenta~\cite{Baur:2007ck}:
\begin{equation}
m_{T_{cl}}^2\,=\,\left(\sqrt{p_T^2(l^+\,l^-\,b\,\bar
b)+m^2(l^+\,l^-\,b\,\bar b)}+\slashchar{p}_T\right)^2-
\left(\vec{p}_T(l^+\,l^-\,b\,\bar b)+\vec{\slashchar{p}}_T\right)^2,
\end{equation}
where $\vec{p}_T(l^+l^-b\bar b)$ and $m^2(l^+l^-b\bar b)$ are the transverse
momentum and the invariant mass of the $l^+l^-b\bar b$ system, and
$\slashchar{p}_T=|\vec{\slashchar{p}}_T|$.
Second, after reconstruction, we define a cut on the ratio of the
transverse momenta carried by the neutrinos to those of the $b$ jets,
\begin{equation}
r_{\nu b}\,=\,\frac{p_T^\nu+p_T^{\bar\nu}}{p_T^b+p_T^{\bar b}}<2\,. \label{eq:rcut}
\end{equation}
As we will see in Section \ref{sec:kkgluon}, the $r_{\nu b}$ cut is
useful for increasing signal/background ratio. The value in
Eq.~(\ref{eq:rcut}) is approximately optimized for the examples we
consider and taken to be fixed in the rest of the article.
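A compact sketch of these two cuts (our own Python illustration; four-vectors are arrays $[E, p_x, p_y, p_z]$ in GeV):
\begin{verbatim}
# Sketch of the two cuts.  p_vis: summed four-vector of the
# l+ l- b bbar system; met: (px, py) of the missing momentum.
import numpy as np

def m_tcl(p_vis, met):
    pt2 = p_vis[1]**2 + p_vis[2]**2
    m2  = p_vis[0]**2 - np.dot(p_vis[1:], p_vis[1:])
    ET  = np.sqrt(pt2 + m2) + np.hypot(met[0], met[1])
    return np.sqrt(ET**2 - (p_vis[1] + met[0])**2
                         - (p_vis[2] + met[1])**2)

def passes_rnub(pt_nu, pt_nub, pt_b, pt_bb):
    return (pt_nu + pt_nub)/(pt_b + pt_bb) < 2.0
\end{verbatim}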
On the other hand, we choose to explicitly vary the $m_{T_{cl}}$ cut
to optimize the discovery significance because it is what the
significance is most sensitive to. In practice, one could as well
optimize all other cuts and obtain better results.
The third issue is with regard to the experimental resolutions. The
smearing of the measured momenta modifies the coefficients in
Eqs.~(\ref{eq:system}). When the modification is small, the correct
solutions of the neutrino momenta are shifted, but we still obtain
real solutions.\footnote{Note that the finite widths of the top quark
and the $W$ boson have similar effect, although their $1-2$ GeV
widths are negligible compared with the detector resolutions.}
However, when the modification is large, it is possible to render the
solutions to be complex. Again, this effect is more significant when
the top is more energetic: the absolute smearings are larger (although
the fractional resolution is better), which makes it harder to obtain
real solutions. For comparison, $38\%$ of the
signal events from a 1 TeV resonance have real solutions. The
percentage decreases to $26\%$ for a 3 TeV resonance. This is based on a
semi-realistic analysis detailed in the next subsection.
The best treatment of this problem is perhaps to find the real
solutions by varying the visible
momenta, and then weight the solutions according to the experimental
errors. In this article, we adopt a much simpler solution, namely, we
keep those solutions with a small imaginary part. More precisely, we
first solve Eqs.~(\ref{eq:system}) for $p_\nu$ and $p_{\bar\nu}$. Then we
keep all four complex solutions and add them to the corresponding
lepton and $b$-jet momenta to obtain $p_t$ and $p_{\bar t}$. We demand
\begin{equation}
|{\rm Im}(E_t)|<0.4\,|{\rm Re}(E_t)|\,,\ \ |{\rm Im}(E_{\bar
t})|<0.4\,|{\rm Re}(E_{\bar t})|\,,\label{eq:realcut}
\end{equation}
where $E_t$ and $E_{\bar t}$ are respectively the energies of $t$ and
$\bar t$. Similar to the $r_{\nu b}$ cut, the values we choose in
Eq.~(\ref{eq:realcut}) are approximately optimized and taken to be
fixed through the rest of the article. For events passing the above
cuts, we make the 4-momenta of
$t$ and $\bar t$ real by taking the norm of each component, but keep
the sign of the original real part. Note that complex
solutions always appear in pairs, giving the same real solution after
taking the norm; we count each such pair only once.
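Schematically, the acceptance test of Eq.~(\ref{eq:realcut}) and the realification step may be written as (a Python sketch, our own illustration):
\begin{verbatim}
# Sketch: keep solutions with small imaginary parts, then build
# real four-vectors by the norm with the sign of the real part.
import numpy as np

def accept(Et, Etbar, frac=0.4):     # Eq. (eq:realcut)
    return (abs(Et.imag) < frac*abs(Et.real) and
            abs(Etbar.imag) < frac*abs(Etbar.real))

def realify(p):                      # p: complex numpy four-vector
    return np.sign(p.real)*np.abs(p)
\end{verbatim}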
\subsection{Event Generation}
\label{sec:eventgeneration}
The hard process of $pp\rightarrow \Pi\rightarrow t\bar t\rightarrow
b\bar b \ell^+\ell^- \nu_{\ell}\bar{\nu}_{\ell}$ is simulated with TopBSM
\cite{Frederix:2007gi} in MadGraph/MadEvent \cite{Alwall:2007st},
where $\Pi$ denotes the $t\,\bar t$ resonance. In this article, we will
consider a spin-0 color-singlet scalar, a spin-0 color-singlet
pseudo-scalar, a spin-1 color octet and a spin-2 color-singlet. The
major SM background processes, including $t\,\bar t$, $b\,\bar b$,
$c\,\bar c$, $bb\ell\nu$ and $jj\ell\ell$, are also simulated with
MadGraph/MadEvent using CTEQ6L1 PDFs~\cite{Pumplin:2002vw}. We choose
the renormalization and factorization scales as the square root of the
quadratic sum of the maximum mass among final state particles, and
$p_{T}$'s of jets and massless visible particles, as described in
MadGraph/MadEvent. Showering and hadronization are added to the events
by Pythia 6.4~\cite{Sjostrand:2006za}. Finally, the events are
processed with the detector simulation package, PGS4~\cite{pgs4}. We
have not included theoretical uncertainties in the cross-section
calculations, which mainly come from PDF uncertainties at high
invariant mass \cite{Frederix:2007gi}. In Ref.~\cite{Frederix:2007gi}
(Fig.~3), it is estimated using the CTEQ6 PDF set that the SM $t\bar
t$ cross-section has a theoretical uncertainty of around 20\% to 30\% at 2 TeV,
increasing to about 80\% at 4 TeV, which may significantly affect some
of the results in our analysis. Nevertheless, we note that the PDFs
can be improved with the Tevatron data \cite{Diaconu:2009jj} at large
$x$, and our focus here is event reconstruction. Therefore,
we ignore systematic errors in the following discussions.
The cuts used to reduce the background/signal ratio are summarized
below, some of which have been discussed in the previous section:
\begin{enumerate}
\item Before reconstruction
\begin{itemize}
\item At least two leptons satisfying: $p_T>20~\mbox{GeV}$ for isolated leptons or
$p_T>100~\mbox{GeV}$ for non-isolated muons. The two highest $p_T$ leptons are
taken to be the leptons in Eqs.~(\ref{eq:system});
\item $m_{\ell\ell}>100~\mbox{GeV}$ where $m_{\ell\ell}$ is the invariant
mass of the two highest $p_{T}$ leptons;
\item At least two jets satisfying: $p_T>50~\mbox{GeV}$ for $b$-tagged,
$p_T>150~\mbox{GeV}$ for not-b-tagged. The two highest $p_T$ jets are
taken to be the $b$ jets in Eqs.~(\ref{eq:system});
\item $\slashchar{p}_T>50~\mbox{GeV}$;
\item Varying $m_{T_{cl}}$ cut.
\end{itemize}
\item After reconstruction
\begin{itemize}
\item $|{\rm Im}(E_t)|<0.4\,|{\rm Re}(E_t)|\,,\ \ |{\rm Im}(E_{\bar
t})|<0.4\, |{\rm Re}(E_{\bar t})|$\,;
\item $r_{\nu b}<2$\,.
\end{itemize}
\end{enumerate}
The complex solutions are made real using the method discussed in the
previous section. There can be 0-4 solutions after the above cuts. We
discard events with no solution. For a solvable event with $n\ge 1$
solutions, we weight the solutions by $1/n$.
\subsection{KK gluon as an example}
\label{sec:kkgluon}
We illustrate the efficiency of the reconstruction procedure by
considering the KK gluon in the basic RS model with fermions
propagating in the bulk. The KK gluon is denoted by $\Pi^{1}_{o}$, which has
the following couplings to the SM quarks,
\begin{equation}
g_{L,R}^q=0.2\,g_s\,,\ \ g_L^{t}=g_L^{b}=g_s\,,\ \ g_R^{t}=4\,g_s\,,\ \ g_R^{b}=-0.2\,
g_s\,,
\label{eq:KKgluon}
\end{equation}
where $g_s$ is the strong coupling constant and $q$ represents quarks in the
light two generations. With this set of couplings, the KK
gluon has a width $\Gamma_{\Pi^{1}_{o}}=0.153\,M_{\Pi^{1}_{o}}$, and the branching ratio
$Br(\Pi^{1}_{o}\rightarrow t\,\bar{t})=92.6\%$. For a KK gluon of mass 3 TeV, the
total leading-order cross section in the dilepton channel is
approximately 10~fb. The parton level $m_{t\bar t}$ distribution is
shown in Fig.~\ref{fig:parton}, together
with the SM $t\,\bar t$ background, also in the dilepton channel. The
interference between the KK gluon and the SM is small and ignored in
Fig.~\ref{fig:parton}.
Within the mass window $(M_{G}-\Gamma_{G}, M_{G}+\Gamma_{G}) \approx
(2500, 3500)$~GeV, the total number of
events is around 770 for the signal and 610 for the background, for
100~$\mbox{fb}^{-1}$.
\begin{figure}[htb]
\centerline{
\includegraphics[width=0.6 \textwidth]{parton.eps}
}
\caption{The number of events in the dilepton channel of the
$t\,\bar{t}$ production through a KK gluon at the LHC. The mass and
width of the KK gluon are chosen to be 3~TeV and 459~GeV,
respectively. The solid (blue) curve is signal+background and the
dashed (black) curve is the SM $t\bar t$ dilepton background. }
\label{fig:parton}
\end{figure}
Although the SM $t\,\bar t$ production in the dilepton channel
comprises the largest background, we have to consider backgrounds from
other sources since we are utilizing not-b-tagged jets and
non-isolated leptons, which can come from heavy flavor
hadron decays. The major additional backgrounds are:
\begin{enumerate}
\item $t\,\bar t$ processes in other decay channels, which include the semileptonic
channel, the all-hadronic channel and channels involving $\tau$'s;
\item Heavy flavor di-jets, including $b\bar b$ and $c\bar c$ with
$b\bar b$ dominating.
\item Other processes that contain one or more isolated leptons
including $jj\ell\ell$ and $bb\ell\nu$ production.
\end{enumerate}
The above backgrounds are included in our particle level analysis. In
Table \ref{table:breakdown}, we show the number of events of the
signal and backgrounds before and after the reconstruction
procedure. The cuts discussed in the previous subsection are
applied, with a moderate $m_{T_{cl}}>1500$~GeV cut. Note that these numbers are
without any mass window cut, while the kinematic cuts in the previous
subsection have been applied. Also note that the number of signal events
is much smaller after detector simulation and applying the kinematic cuts: this
is because most of the leptons are non-isolated
when the $t\bar t$ resonance mass is as high as 3 TeV, and we have only
included non-isolated muons in the analysis. The probability for both
leptons from $W$ decays to be muons is only 1/4. This fact, together with the kinematic
cuts, drastically reduces the number of signal events. This reduction
also occurs for the SM $t\,\bar t$ dilepton events with a high center of mass energy.
\begin{table}[htb]
\begin{center}
\begin{tabular}{c|c|c|c|c|c}
\hline\hline
&3 TeV KK gluon &$t\bar t$ dilep&$t\bar t$ others&$b\bar b$, $c\bar
c$& $jj\ell\ell, bb\ell\nu$\\
\hline
Before Recon.&167&317&96&68&63\\
\hline
After Recon.&82&159&37&33&13\\
\hline
$r_{\nu b}<2$&73&146&31&7&11 \\
\hline
\hline
\end{tabular}{\caption{\label{table:breakdown}Number of signal and background
events for 100 ${\mbox{fb}}^{-1}$ before and after reconstruction.}}
\end{center}
\end{table}
From Table \ref{table:breakdown}, we can see the effects of the event
reconstruction. Before applying the $r_{\nu b}$ cut, the
reconstruction efficiencies for the signal events and the SM $t\,\bar t$
dilepton events are approximately equal and around
50\%. The efficiencies for the other backgrounds are substantially
smaller. Moreover, the cut on the variable $r_{\nu b}$, which is only
available {\it after} the event reconstruction, also favors the signal and the
SM $t\,\bar t$ dilepton events. Therefore, we obtain a larger $S/B$ at the
cost of slightly decreasing significance $S/\sqrt{B}$. In the
following, we will define the significance after the event
reconstruction.
Of course, the effects of event reconstruction are beyond simple
event counting. More importantly, we obtain the 4-momenta of the top
quarks, which are necessary for determining the spin of the $t\,\bar t$
resonance. We will discuss the spin measurement in
Section~\ref{sec:spin}. We also obtain the mass peak on top of the
background after reconstruction, as
can be seen from Fig.~\ref{fig:mtcl}, where we show the $m_{t\bar t}$
distributions of both background and signal+background for a few different
$m_{T_{cl}}$ cuts. For the left plot with $m_{T_{cl}}>1500$~GeV, there is
a clear excess of events, although the mass peak is not obvious.
By comparing with Fig.~\ref{fig:parton}, we see that $S/B$ is smaller
than at the parton level in the mass window (2500, 3500) GeV $\approx
(M_{G}-\Gamma_{G}, M_{G}+\Gamma_{G})$. This indicates that
wrong solutions from the lower $m_{t\bar t}$ background events have
contaminated the higher $m_{t\bar t}$ distribution.
As we
increase the $m_{T_{cl}}$ cut, the numbers of both signal events and
background events decrease, but $S/B$
is increasing, showing that the contamination is reduced. The
contamination reduction is also confirmed
by tracing back the reconstructed $m_{t\bar t}$ to its Monte Carlo
origin. For the $m_{T_{cl}}$ cut of 1500 GeV, the reconstructed background
$m_{t\bar
t}$ in the mass window of (2500, 3500) GeV is decomposed as: 44\%
from the SM $t\,\bar{t}$
events with original $m_{t\bar t}$ smaller than 2500 GeV; 25\% from
the SM $t\,\bar{t}$
events with original $m_{t\bar t}$ larger than 2500 GeV; the other
21\% come from other SM backgrounds. The decomposition becomes (in the
same order as above) \{23\%,
43\%, 34\%\} for the $m_{T_{cl}}$ cut of 2000 GeV, and \{13\%,
60\%, 27\%\} for the $m_{T_{cl}}$ cut of 2500 GeV. Nevertheless
we cannot choose too high a $m_{T_{cl}}$ cut since it can reduce
$S/\sqrt{B}$. For the KK gluon example, the significance
is maximized when the $m_{T_{cl}}$ cut is around 2000~GeV. More precisely,
in the mass window (2500, 3500)~GeV for $M_{G}$, we have $S/B=0.69, 1.3, 1.8$
and $S/\sqrt{B}=4.9, 6.1, 4.5$ for $m_{T_{cl}}\ge 1500, 2000, 2500$~GeV,
respectively.
\begin{figure}[htb]
\centerline{
\includegraphics[width=0.33 \textwidth]{mtcl1500.eps}
\includegraphics[width=0.33 \textwidth]{mtcl2000.eps}
\includegraphics[width=0.33 \textwidth]{mtcl2500.eps} }
\caption{The event distributions of $t\,\bar t$ invariant masses
after reconstruction. The solid (blue) histogram is signal+background
and the dashed (black) histogram is background only. From left to
right, we have the cuts on $m_{T_{cl}}$ to be 1500, 2000,
2500~GeV.}
\label{fig:mtcl}
\end{figure}
\section{Discovery Limits}
\label{sec:discovery}
Having discussed our strategy of selecting cuts to optimize the
discovery limit, we now consider the needed signal cross section for
different $t\,\bar{t}$ and $t\,t$ resonances at $5\,\sigma$ confidence
level (CL) at the LHC.
\subsection{Statistics}
\label{sec:statistics}
We are interested in heavy resonances with masses from $2$ to
$4$~TeV. In order to cover most of the signal events, we examine
reconstructed masses in the range of $1.5-5.1$~TeV. We ignore
uncertainties from the overall normalization of the signal and
background cross sections, which can be determined from the low
$m_{t\bar t}$ mass events, where the statistics is much better. To
utilize the shape differences, we divide the mass range equally into
$N_{\rm bin}=18$ bins, which amounts to a $200$~GeV bin width. In
each bin, we denote the number of background events by $b_{i}$ and
the number of signal events by $s_{i}$. When the number of events
is small, the bin contents follow Poisson distributions. Following~\cite{PDG}, we first
calculate the Pearson's $\chi^{2}$ statistic
\beq
\chi^{2}\,=\,\sum_{i}^{N_{\rm
bin}}\frac{(n_{i}\,-\,v_{i})^{2}}{v_{i}}\,=\,\sum_{i}^{N_{\rm
bin}}\,\frac{s_{i}^{2}}{b_{i}}\,,
\eeq
where $n_{i}=b_{i}+s_{i}$ is the measured value and $v_{i}=b_{i}$ is
the expected value. Assuming that the goodness-of-fit statistic
follows a $\chi^{2}$ probability density function, we then calculate
the $p$-value for the ``background only'' hypothesis
\beq
p\,=\,\int^{\infty}_{\chi^{2}}\,\frac{1}{2^{N_{\rm
bin}/2}\,\Gamma(N_{\rm bin}/2)}\,z^{N_{\rm
bin}/2-1}\,e^{-z/2}\,dz\,,
\eeq
where $N_{\rm bin}$ counts the number of degrees of freedom. For a
$5\,\sigma$ discovery, we need to have $p=2.85\times 10^{-7}$ and
therefore $\chi^{2}\approx 65$ for $N_{\rm bin}=18$.
For a particular resonance, we define a reference model with a known
cross section. We then vary the $m_{T_{cl}}$ cut from $1.5$~TeV to
$3.5$~TeV in $100$~GeV steps to generate different sets of $b_{i}$ and
$s_{i}$. We find the optimized $m_{T_{cl}}$ cut that maximizes the
$\chi^{2}$. After optimizing the $m_{T_{cl}}$ cut, we multiply the
number of signal events by a factor of $Z$ to achieve $\chi^{2}=65$ or
$5\,\sigma$ discovery. This is equivalent to requiring the production
cross section of the signal to be $Z$ times the reference cross
section.
\subsection{Discovery limits}
For $t\,\bar{t}$ resonances, we choose a representative set of
$t\,\bar{t}$ resonances with different spins and quantum numbers under
the $SU(3)$ color gauge group. We label the spin-0 color-singlet scalar,
spin-0 color-singlet pseudo-scalar, spin-1 color octet and spin-2
color-singlet particles as $\Pi^{0}$, $\Pi^{0}_{p}$, $\Pi^{1}_{o}$ and
$\Pi^{2}$ respectively. For the spin-0 particles, $\Pi^{0}$ and
$\Pi^{0}_{p}$, we assume that they only couple to top quarks with
couplings equal to the top Yukawa coupling in the standard model, and
hence they are mainly produced through the one-loop gluon fusion
process at the LHC. Their decay widths are around $3/(16\,\pi)$ times
their masses and are calculated automatically in MadGraph. For the
spin-one particle, $\Pi^{1}_{o}$, we still use the KK gluon described
in Section~{\ref{sec:kkgluon}} as the reference particle, and use the
same couplings defined in Eq.~(\ref{eq:KKgluon}). The decay width of
the KK gluon is fixed to $0.153$ times its mass. For the spin-two
particle, $\Pi^{2}$, we choose the first KK graviton in the RS model
as the reference particle, and choose the model parameter,
$\kappa/M_{pl}$ = 0.1, where $M_{pl}$ is the Planck scale and $\kappa$
is defined in~\cite{Randall:1999ee}. Its decay width is calculated in
MadGraph.
For $t\,t$ resonances, we study the spin-one particle, which is
suggested to exist at the TeV scale in grand unified models in a
warped extra-dimension~\cite{WarpedGUT}. Under the SM gauge
symmetries, the $X$ gauge boson is the up part of the gauge bosons
with the quantum numbers $(\bar{3}, 2, 5/6)$. It has the electric
charge $4/3$ and couples to up-type quarks with a form
$g_{i}\,\epsilon_{abc}\,\bar{u}_{i\,L}^{{\cal
C}\,a}\,\overline{X}^{\,b}_{\mu}\,\gamma^{\mu}\,u^{c}_{i\,L}\,+\,h.c.$
($i$ is the family index; $a,b,c$ are color indices; ${\cal C}$
denotes charge conjugate). In general, the gauge couplings $g_{i}$
depend on the fermion localizations in the fifth warped
extra-dimension. However, we do not specify any details of model
building, including how to suppress proton decay, and only focus
on the discovery feasibility at the LHC. For simplicity, we model the
$t\,t$ resonance the same way as the KK gluon, but flip the sign of one
lepton at the parton level. We also choose the reference $t\,t$
production cross section through $X$ as the cross section of
$t\,\bar{t}$ production through the KK gluon described in
Eq.~(\ref{eq:KKgluon}). We fix the decay width of $X$ to be $10\%$
of its mass. In our analysis, we use the same set of background events
as in the $t\,\bar{t}$ case. There are two main sources of the SM
backgrounds for same-sign dileptons: leptons from $b$-jets, whose
charges can have either sign, and lepton charge misidentification.
There are also intrinsic SM backgrounds from processes like
$u\,\bar{d}\rightarrow W^{+}W^{+}d\,\bar{u}$; however, these are
purely electroweak processes that hardly pass the reconstruction
cuts, and we neglect them in our analysis.
In Fig.~\ref{fig:discoverylimit}, we show values of the multiplying
factor $Z$ for the $t\,\bar{t}\,(t\,t)$ production cross sections
to have $5\,\sigma$ discovery at the LHC for $100~{\rm
fb}^{-1}$ integrated luminosity. Note that we do not change the
widths of the resonances according to the models described above when
multiplying the factor $Z$. Those models
serve as reference points only. In obtaining the discovery limits, we
have ignored all interferences between the resonances and the SM
$t\,\bar t$ productions. As shown in
Ref.~\cite{Frederix:2007gi}, the interference between a KK gluon or a KK
graviton and the SM $t\,\bar t$ productions is negligible. For a spin-0
resonance (scalar or pseudo-scalar), a peak-dip structure in the
$m_{t\bar t}$ distribution is generally visible at the parton level if the resonance is
produced through gluon fusion similar to the SM Higgs. We do not
anticipate that the interference will change our results
significantly.
\begin{figure}[htb]
\centerline{
\includegraphics[width=0.7 \textwidth]{limit.eps}}
\caption{The multiplying factor $Z$ (shown in the figure is its square
root) for the production cross sections
to have $5\,\sigma$ discovery at the LHC with a $100~{\rm fb}^{-1}$
integrated luminosity, as a function of $t\,\bar{t}\,(t\,t)$
invariant masses. $\Pi^{0}$, $\Pi^{0}_{p}$, $\Pi^{1}_{o}$ and
$\Pi^{2}$ are spin-0 color-singlet scalar, spin-0 color-singlet
pseudo-scalar, spin-1 color octet and spin-2 color-singlet $t\,\bar
t$ resonances, respectively. }
\label{fig:discoverylimit}
\end{figure}
The discovery limits for the KK gluon are $3.2$~TeV and $3.7$~TeV for
$100~{\rm fb}^{-1}$ and $300~{\rm fb}^{-1}$ integrated
luminosity. For comparison, the discovery limit for the KK gluon in
the semileptonic
channel is $3.8$~TeV for $100~{\rm fb}^{-1}$ and $4.3$~TeV for
$300~{\rm fb}^{-1}$, as given in~\cite{Baur:2008uv}. There they combine
the invariant mass and top $p_{T}$ distributions. If only the
invariant mass distribution were used, the discovery limit would be
reduced by a few hundred GeV. Therefore, the discovery limit in the
dilepton channel is competitive with the semileptonic channel. Comparing
the black (solid) line and the orange (thick dashed) line,
we have a better discovery limit for the $t\,t$ resonance than the
$t\,\bar{t}$ resonance when they have the same production
cross section. This is because the SM background for $t\,t$ is much
smaller than the background for $t\,\bar{t}$. The $X$
gauge boson can be discovered with a mass up to $4.0$~TeV and
$4.4$~TeV, respectively, for $100~{\rm fb}^{-1}$ and $300~{\rm fb}^{-1}$.
If a $t\,\bar t$ resonance is discovered, it is important to measure
the mass. The peak position of the $m_{t\bar t}$ distribution in
general does not coincide with the true resonance mass, and also
shifts according to the $m_{T_{cl}}$ cut applied, as can be seen in
Fig.~\ref{fig:mtcl}. We can eliminate this
systematic error, as well as minimize the statistical error by using the
usual ``template'' method. That is, we can generate the $m_{t\bar t}$
distributions for different input masses, and then
compare them with the measured distribution to obtain the true mass. A
detailed study of mass measurement is beyond the scope of this
article.
\section{Spin Measurements}
\label{sec:spin}
The momenta of all particles are known after event reconstruction,
which allows us to determine the spins of the $t\,\bar t$
resonances. We first consider the angular distributions of the top quark in the
$t\,\bar t$ resonance rest frame. To minimize the effect of initial state
radiation, we use the Collins-Soper angle~\cite{Collins:1977iv}
defined as the angle between the top momentum and the axis bisecting
the angle between the two incoming protons, all in the $t\,\bar t$ rest
frame. In the case that the initial state radiation vanishes, this
angle becomes the angle between the top momentum and the beam
direction. Using the lab frame momenta, the Collins-Soper angle is
conveniently given by
\begin{equation}
\cos\theta=\frac{2}{m_{t\bar t} \sqrt{m_{t\bar t}^2+p_T^2}}(p^+_t\,p^-_{\bar t}-p^-_t\,p^+_{\bar t}),
\end{equation}
where $m_{t\bar t}$ and $p_T$ are the invariant mass and the transverse momentum of the $t\,\bar t$ system and $p^{\pm}_{t}$, $p^{\pm}_{\bar t}$ are defined by
\begin{equation}
p^{\pm}_t=\frac{1}{\sqrt{2}}(p_t^0\pm p_t^z)\,,\quad \ p^{\pm}_{\bar
t}=\frac{1}{\sqrt{2}}(p_{\bar t}^0\pm p_{\bar t}^z)\,.
\end{equation}
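For reference, a short Python sketch (our own illustration) computing this quantity from the reconstructed lab-frame four-vectors:
\begin{verbatim}
# Sketch: Collins-Soper angle from lab-frame four-vectors
# [E, px, py, pz] of the reconstructed t and tbar.
import numpy as np

def cos_theta_cs(pt, ptbar):
    p   = pt + ptbar
    m2  = p[0]**2 - np.dot(p[1:], p[1:])
    pT2 = p[1]**2 + p[2]**2
    tp, tm = (pt[0] + pt[3])/np.sqrt(2.0), (pt[0] - pt[3])/np.sqrt(2.0)
    bp, bm = (ptbar[0] + ptbar[3])/np.sqrt(2.0), \
             (ptbar[0] - ptbar[3])/np.sqrt(2.0)
    return 2.0*(tp*bm - tm*bp)/np.sqrt(m2*(m2 + pT2))
\end{verbatim}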
One can also consider angular correlations among the decay products
of top quarks. As mentioned in the introduction, the best analyzer for
the top polarization is the charged lepton. Therefore, we examine the opening
angle $\phi$ between the $\ell^+$ direction in the $t$ rest frame and
the $\ell^{-}$ direction in the $\bar t$ rest
frame. The parton level distribution for the opening
angle has the form
\begin{equation}
\frac1\sigma\frac{d\sigma}{d\cos\phi}=\frac12(1-D\cos\phi)\,,
\end{equation}
where $D$ is a constant depending on the $t\,\bar t$ polarizations,
and hence model details. At particle level, the distribution is
affected by the experimental resolutions and wrong solutions from
event reconstruction.
\begin{figure}[htb]
\centerline{
\includegraphics[width= \textwidth]{cos.eps}
}
\caption{Distributions of the Collins-Soper angle $\theta$ (left) and the
opening angle $\phi$ (right) at particle level for different
resonances with a mass of 2 TeV and the SM backgrounds. A mass
window cut $(1600\,\mbox{GeV}, 2400\,\mbox{GeV})$ is applied on all solutions.}
\label{fig:spin}
\end{figure}
In Fig.~\ref{fig:spin}, we show the particle level distributions
of $\cos\theta$ and $\cos\phi$ for 4 different $t\,\bar t$ resonances:
a scalar, a pseudo-scalar, a vector boson that couples to left- and
right-handed quarks equally, and a KK graviton in the RS model. The
cuts described in Sec.~\ref{sec:eventgeneration} are applied with
$m_{T_{cl}}>1500~\mbox{GeV}$. A mass window cut of $(1600\,\mbox{GeV}, 2400\,\mbox{GeV})$
is also applied on the solutions to increase $S/B$. From the left
panel of Fig.~\ref{fig:spin}, we see significant suppressions in the
forward and backward regions of $\cos\theta$, due to the kinematic
cuts. Apart from this feature, both the scalar and the pseudo-scalar have flat
distributions in $\cos\theta$ and are hard to distinguish from
each other. The $\cos\theta$ distributions for the vector boson and
the graviton show the biggest difference with respect to each
other. As shown in the right panel of Fig.~\ref{fig:spin}, the slope
of the pseudo-scalar distribution in $\cos \phi$ has an opposite sign
to all others, which can be used to identify a pseudo-scalar
resonance. Therefore one has to use both distributions to distinguish
the four particles.
Given the distributions, we can estimate how many events are needed to
determine the spin of a $t\,\bar t$ resonance. We first reformulate the
question in a more specific way: given experimentally observed distributions in $\cos
\theta$ and $\cos \phi$ generated by a
particle of spin $s_a$, we ask how many events are needed to decide,
at $95\%$ CL, that they are not from a particle of spin
$s_b$. This is done by comparing the observed distributions with
Monte Carlo distributions of different spins. If
the observed distributions are inconsistent with all but one spin, we
claim that we have identified the spin. Of course, without real data,
the ``observed''
distribution in this article is also from Monte Carlo. We quantify the
deviation of
two distributions from different spins $s_{a}$ and $s_{b}$ as
\beqa
\chi^{2}_{s_{a}:s_{b}}\,=\,\sum_{i}^{N_{\rm bin}}\frac{(n_{s_{a},\,i}-n_{s_{b},\,i})^{2}}{n_{s_{b},\,i}}\,,
\eeqa
where $N_{\rm bin}$ is the total number of bins and is equal to 20 by
choosing a $0.1$ bin size for both $\cos \theta$ and $\cos \phi$;
$n_{s_{a},\,i}$ and $n_{s_{b},\,i}$ are the number of events in the
$i$'th bin, which satisfy $\sum n_{s_{a},\,i}=\sum
n_{s_{b},\,i}$. When $\chi^{2}=33$, we claim that we can distinguish
the spin $s_{a}$ particle from the spin $s_{b}$ particle at $95\%$
CL, corresponding to the $p$-value of $2.5\times 10^{-2}$, for $19$
degrees of freedom (here we keep the total number of events fixed, and
hence we have one degree of freedom less). The numbers of signal events
(after reconstruction) needed to distinguish each pair of spins are
listed in Table~\ref{table:spin}. The same cuts as for obtaining
Fig.~\ref{fig:spin} are applied.
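The scaling of the statistic with the signal yield can be made explicit: writing $n_{s,i}=Np_{s,i}$ with normalized bin fractions $p_{s,i}$, one finds $\chi^2_{s_a:s_b}=N\sum_i(p_{s_a,i}-p_{s_b,i})^2/p_{s_b,i}$, which grows linearly in $N$. A hedged sketch of the resulting event-count estimate (illustrative only; it ignores the background-induced uncertainties discussed below):
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def min_events_to_distinguish(shape_a, shape_b, n_bins=20):
    """Smallest N with chi^2(N) above the 95% CL threshold
    (about 33 for 19 degrees of freedom)."""
    threshold = chi2.ppf(1.0 - 0.025, df=n_bins - 1)
    pa = shape_a / shape_a.sum()
    pb = shape_b / shape_b.sum()
    per_event = np.sum((pa - pb) ** 2 / pb)  # chi^2(N) = N * per_event
    return int(np.ceil(threshold / per_event))
\end{verbatim}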
\begin{table}[htb]
\begin{center}
\begin{tabular}{c|c|c|c|c}
\hline\hline
$s_a$$\backslash$ $s_b$&Vector &Scalar&Pseudo-scalar&Graviton
\\
\hline
Vector&-&661 (501)&262 (140)&316 (122)\\
\hline
Scalar&705 (577)&-&199 (94)&771 (455)\\
\hline
Pseudo-scalar&275 (182)&200 (116)&-&240 (128) \\
\hline
Graviton&356 (243)&878 (694)&239 (123)&-\\
\hline
\hline
\end{tabular}
{\caption{\label{table:spin}Number of {\it signal} events after reconstruction
needed to distinguish a particle of spin
$s_a$ from spin $s_b$ at 95\% CL. The number of background events is fixed to
136, corresponding to $100\, {\rm fb}^{-1}$ data. All resonance
masses are $2$~TeV. For reference, the needed numbers of signal events
without background are given in the parentheses.} }
\end{center}
\end{table}
In Table \ref{table:spin}, we have shown two sets of numbers. The
numbers of events outside the parentheses are the minimum numbers of
signal events needed to distinguish the spin for $100\, {\rm fb}^{-1}$
data. We use the same cuts as for Fig.~\ref{fig:spin}. The number of
background events is 136 with the $t\bar t$ dilepton events dominating
(109). For reference, we also list in the parentheses the
numbers of needed events assuming no background. The background
distributions are canceled when comparing the observed distributions and
the Monte Carlo distributions. However, they do introduce
uncertainties that can significantly increase the number of needed
signal events.
The numbers listed in Table \ref{table:spin} are large but
achievable in some models. For example, a KK gluon of 2 TeV in the
basic RS model yields 230 events for $100\, {\rm fb}^{-1}$ in the mass
window $(1600\,\mbox{GeV}, 2400\,\mbox{GeV})$, which is not enough to distinguish
it from other spins at 95\% CL. With $300\, {\rm fb}^{-1}$ data, we
can distinguish it from a pseudo-scalar or a KK-graviton using the
dilepton channel alone, but will need to combine other channels to
distinguish it from a scalar.
\section{Discussions and Conclusions}
\label{sec:discussion}
An important assumption leading to the fully solvable system is that
the only missing transverse momentum comes from the two neutrinos from
the top decays. There are also other sources of missing momenta such as
neutrinos from heavy flavor hadron decays. But they are usually soft
and their effects have already been included in the simulation. More
challengingly, the assumption is invalid when there are other missing
particles in the event, for example, in supersymmetric theories with
R-parity. Consider the process $pp\rightarrow \tilde t\,\tilde{t}^*
\rightarrow t\,\bar t\,\tilde\chi_1^0\,\tilde\chi_1^0$ in the minimal
supersymmetric standard model, which has the same visible
final state particles as a $t\,\bar t$ resonance. We have to be able to
distinguish the two cases before claiming a $t\,\bar t$
resonance. Distributions in various kinematic observables are
certainly different for the two cases. Nevertheless, we find that the
most efficient way to separate them is still by using the event
reconstruction.
As an example, we have generated 10,000 events in the
above MSSM decay chain and let both $t$ and $\bar t$ decay leptonically,
for $m_{\tilde t}=1500~\mbox{GeV}$ and $m_{\tilde\chi_1^0}=97~\mbox{GeV}$. There are 705
events which pass the kinematic cuts described in Section
\ref{sec:reconstruction} with a $m_{T_{cl}}$ cut of 1500 GeV. Out of those
705 events, only 30 pass the reconstruction procedure, that is,
satisfy Eq.~(\ref{eq:realcut}). This is in stark contrast
to a $t\,\bar t$ resonance of 3 TeV, where half of
the events after cuts survive the reconstruction procedure. The
difference between the two cases is not difficult to
understand: for a $t\,\bar t$ resonance, ignoring initial state
radiation, we have $t$ and $\bar t$ back-to-back in the
transverse plane and their decay products nearly collinear. On the other hand, the
directions of the two neutralinos in the MSSM case are unrelated
and therefore the direction of the missing $p_T$ is separated from both
of the $b\,\ell$ systems. It is then very unlikely to satisfy the mass shell
constraints simultaneously for both $t$ and $\bar t$.
In conclusion, by reconstructing $t\,\bar t$ and $t\,t$ events in the
dilepton channel, we studied the $t\,\bar t$ and $t\,t$ resonances at
the LHC in a model-independent way. The kinematic system is fully
solvable from the $W$ boson and top quark mass-shell constraints, as
well as the constraints from the measured missing transverse
momentum. After solving this system for the momenta of the two
neutrinos, we obtained the $t$, $\bar t$ momenta and therefore the
$t\,\bar t$ invariant mass distribution. The same procedure can also
be applied to the $t\,t$ system. We showed that this method can be
utilized to discover and measure the spins of $t\,\bar t$ and $t\,t$
resonances at the LHC.
The event reconstruction is the most challenging when the $t\,\bar t$
resonance is heavy. This is not only because of the suppression of
parton distribution functions at high energies. More importantly, in
this case the top
quarks are highly boosted and the lepton and the $b$-jet from the same top
decay are highly collimated. Therefore, the lepton is often not
isolated from the $b$-jet. To solve this problem, we included
non-isolated muons, which can be identified in a detector. The
$b$-tagging efficiency may also degrade at high energies, which drove
us to consider events without $b$-tagged jets. In summary, we included
all events with two high $p_T$ (isolated or non-isolated) leptons
and two high $p_T$ ($b$-tagged or not-tagged) jets passing the
kinematic cuts described in
Section~\ref{sec:eventgeneration}. Accordingly, we have to consider
all SM backgrounds containing the same final state particles. We
simulated and analyzed all major backgrounds and found that they can
be efficiently reduced by imposing the kinematic cuts and the
mass-shell constraints.
The reconstruction procedure was applied to four $t\,\bar t$ resonances
with different spins. We found that despite a smaller branching ratio,
the dilepton channel is competitive with the semileptonic channel in
discovering the $t\,\bar t$ resonance. This is due to the better
experimental resolution of the lepton momentum measurement and smaller
SM backgrounds. For example, the first KK gluon in the basic RS model
with fermions propagating in the bulk can be discovered at $5\,\sigma$
level or better, up to a mass of $3.7$~TeV for $300~\rm{fb}^{-1}$ integrated
luminosity. We also considered the possibility of finding a $t\,t$
resonance, for which the dilepton channel is the best because it is
the only channel in which the charges of both tops can be
identified. Due to the smallness of the SM same-sign dilepton
backgrounds, the $t\,t$ resonance has a better discovery limit than
the $t\,\bar t$ resonance. Assuming the same production cross section
as the KK gluon, the $t\,t$ resonance can be discovered up to a mass
of $4.4$~TeV.
The dilepton channel is also advantageous for identifying the spin of
the resonance. We considered the top quark angular distribution in the
$t\,\bar{t}$ rest frame, and the opening angle distribution of the
two leptons in their respective top quark rest frames. Combining those
two distributions, we found that for $100\, {\rm fb}^{-1}$ a few
hundred signal events (after reconstruction) are needed to distinguish
the spins of different resonances.
\bigskip
{\bf Acknowledgments:}
Many thanks to Hsin-Chia Cheng and Markus Luty for interesting
discussions. We also thank Kaustubh Agashe and Ulrich Baur for useful
correspondence. Z.H. is supported in part by the United States
Department of Energy grant no. DE-FG03-91ER40674. Fermilab is operated
by Fermi Research Alliance, LLC under contract no. DE-AC02-07CH11359
with the United States Department of Energy.
\section{Introduction}\label{Introduction}
The quantification of shapes has become an important research direction in both the applied and theoretical sciences. It has brought advances to many fields including network analysis \citep{lee2011discriminative}, geometric morphometrics \citep{boyer2011algorithms,gao2019gaussian}, biophysics and structural biology \citep{wang2021statistical}, and radiomics (i.e., the field that aims to correlate clinical imaging features and genomic assays) \citep{crawford2020predicting}. If shapes are considered random, then their corresponding quantitative summaries are also random --- implying such summaries of random shapes are statistics. The statistical inference of shapes based on these quantitative summaries has been of particular interest (e.g., the derivation of confidence sets by \cite{fasy2014confidence} and a framework for hypothesis testing by \cite{robinson2017hypothesis}).
\subsection{Overview of Topological Data Analysis}
Topological data analysis (TDA) presents a collection of statistical methods that quantitatively summarize the shapes represented in data using computational topology \citep{edelsbrunner2010computational}. One common statistical invariant in TDA is the persistence diagram (PD) \citep{edelsbrunner2000topological}. When equipped with the $p$-th Wasserstein distance for $1\le p<\infty$, the collection of PDs, denoted as $\mathscr{D}$, is a Polish space \citep{mileyko2011probability}; hence, probability measures can be defined on it, and the randomness of shapes can potentially be represented using probability measures on $\mathscr{D}$. However, a single PD does not preserve all relevant information of a shape \citep{crawford2020predicting}. Using the Euler calculus, \cite{ghrist2018persistent} (Corollary 6 therein) showed that the persistent homology transform (PHT) \citep{turner2014persistent}, motivated by integral geometry and differential topology, concisely summarizes information within shapes. The PHT takes values in $C(\mathbb{S}^{d-1};\mathscr{D}^d)=\{\mbox{all continuous maps }F:\mathbb{S}^{d-1} \rightarrow \mathscr{D}^d\}$, where $\mathbb{S}^{d-1}$ denotes the sphere $\{x\in\mathbb{R}^d:\Vert x\Vert=1\}$ and $\mathscr{D}^d$ is the $d$-fold Cartesian product of $\mathscr{D}$ (see Lemma 2.1 and Definition 2.1 of \cite{turner2014persistent}). Since $\mathscr{D}$ is not a vector space and the distances on $\mathscr{D}$ (e.g., the $p$-th Wasserstein and bottleneck distances \citep{cohen2007stability}) are very abstract, many fundamental concepts in classical statistics are not easy to implement with summaries resulting from the PHT. For example, the definition of moments corresponding to probability measures on $\mathscr{D}$ (e.g., means and variances) is highly nontrivial (see \cite{mileyko2011probability} and \cite{turner2014frechet}). The difficulty in defining these properties prevents the application of PHT-based statistical inference methods in $C(\mathbb{S}^{d-1};\mathscr{D}^d)$.
The smooth Euler characteristic transform (SECT) proposed by \cite{crawford2020predicting} provides an alternative summary statistic for shapes. The SECT not only preserves the information of shapes of interest (see Corollary 6 of \cite{ghrist2018persistent}), but it also represents shapes using univariate continuous functions instead of PDs. Specifically, values of the SECT are maps from the sphere $\mathbb{S}^{d-1}$ to a separable Banach space --- the collection of real-valued continuous functions on a compact interval, say $C([0,T]) = \mathcal{B}$ for some $T>0$ (values of $T$ will be given in Eq.~\eqref{eq: def of sublevel sets}). That is, for any shape $K$, its SECT, denoted as $SECT(K) = \{SECT(K)(\nu)\}_{\nu\in\mathbb{S}^{d-1}}$, is in $\mathcal{B}^{\mathbb{S}^{d-1}}=\{\mbox{all maps }f: \mathbb{S}^{d-1}\rightarrow\mathcal{B}\}$; specifically, $SECT(K)(\nu)\in\mathcal{B}$ for each $\nu\in\mathbb{S}^{d-1}$. Therefore, the randomness of shapes $K$ is represented via the SECT by a collection of $\mathcal{B}$-valued random variables. The probability theory on separable Banach spaces has been better developed than on $\mathscr{D}$. Specifically, a $\mathcal{B}$-valued random variable is a stochastic process with its paths in $\mathcal{B}$ (we will further show in Section \ref{The Definition of Smooth Euler Characteristic Transform} that $\mathcal{B}$ herein can be replaced with a reproducing kernel Hilbert space (RKHS)), and the theory of stochastic processes has been developed for nearly a century. Many mathematical tools are available for building the foundations of the randomness and statistical inference of shapes.
The work from \cite{crawford2020predicting} applied the SECT to magnetic resonance images taken from tumors of a cohort of glioblastoma multiforme (GBM) patients. Using summary statistics derived from the SECT as predictors of the Gaussian process regression, the authors showed that the SECT has the power to predict clinical outcomes better than existing tumor shape quantification approaches and common molecular assays. The relative performance of the SECT in the GBM study represents a promising future for the utility of SECT in medical imaging and more general statistical applications investigating shapes. Similarly, \cite{wang2021statistical} applied derivatives of the Euler characteristic transform as predictors of statistical models for subimage analysis which is related to the task of variable selection and seeks to identify physical features that are most important for differentiating between two classes of shapes. In both \cite{crawford2020predicting} and \cite{wang2021statistical}, the shapes were implemented as the predictors of regressions, and the randomness of these predictors was ignored.
\subsection{Overview of Contributions}
In this paper, we model the distributions of shapes via the SECT using RKHS-valued random fields and provide the corresponding foundations using tools in algebraic and computational topology, Sobolev spaces, and functional analysis. In contrast to work like \cite{crawford2020predicting} and \cite{wang2021statistical}, we model realizations from the SECT as the responses of an RKHS-valued random field rather than as input variables or predictors. Modeling the distributions of shapes helps answer the following statistical inference question: \textit{Is the difference between two groups of shapes significant or just random?} Based on the foundations of the randomness of shapes, we propose an approach for testing hypotheses on random shapes and answer this statistical inference question.
Using homotopy theory and PDs, we first propose a general collection of shapes on which the SECT is well-defined. Then, we show that, for each shape in this collection, its SECT is in $C(\mathbb{S}^{d-1};\mathcal{H})=\{\mbox{all continuous maps }F: \mathbb{S}^{d-1}\rightarrow\mathcal{H}\}$, where $\mathcal{H} = H_0^1([0,T])$ is not only a Sobolev space (see Chapter 8.3 of \cite{brezis2011functional}) but also an RKHS$^\dagger$\footnote{$\dagger$: Strictly speaking, the functions in Sobolev space $\mathcal{H}$ are defined on the open interval $(0,T)$ instead of the closed interval $[0,T]$ (see Chapter 8.2 of \cite{brezis2011functional}). Hence, the rigorous notation of $\mathcal{H}$ should be $H_0^1((0,T))$. However, Theorem 8.8 of \cite{brezis2011functional} indicates that each function in $H_0^1((0,T))$ can be uniquely represented by a continuous function defined on $[0,T]$, which implies that functions in $H_0^1((0,T))$ can be viewed as defined on the closed interval $[0,T]$. Therefore, to implement the boundary values on $\partial (0,T)=\{0,T\}$, we use the notation $H_0^1([0,T])$ throughout this paper to indicate that all functions in $\mathcal{H}$ are viewed as defined on $[0,T]$. The same reasoning is applied for the space $W_0^{1,p}([0,T])$ implemented later in this paper (see Theorem 8.8 and the Remark 8 after Proposition 8.3 in \cite{brezis2011functional}).}. We further construct a probability space to model the distributions of shapes. This probability space makes the SECT an $\mathcal{H}$-valued random field indexed by $\mathbb{S}^{d-1}$ with continuous paths, which is equivalent to the SECT being a $C(\mathbb{S}^{d-1};\mathcal{H})$-valued random variable. Additionally, the restriction of the SECT to any point on $\mathbb{S}^{d-1}$ is a real-valued stochastic process with paths in $\mathcal{H}$. Based on the proposed probability space, we provide an assumption on the moments of the SECT and define the form of its mean and covariance. Using a Sobolev embedding result and functional analysis arguments, we show some properties of the mean and covariance of the SECT. These properties allow us to develop the Karhunen–Loève expansion of the SECT, which is the key tool for our proposed hypothesis testing framework.
Traditionally, the statistical inference of shapes in TDA is conducted in the persistence diagram space $\mathscr{D}$, which is not suitable for exponential family-based distributions and requires any corresponding statistical inference to be highly nonparametric. For example, \cite{fasy2014confidence} used subsampling, and \cite{robinson2017hypothesis} applied permutation tests. The PHT-based statistical inference in $C(\mathbb{S}^{d-1};\mathscr{D}^d)$ is even more difficult. With the Karhunen–Loève expansion of the SECT and the central limit theorem, some hypotheses on the distributions of shapes can be tested using normal distribution-based methods (e.g., the adaptive Neyman test \citep{fan1996test} and the $\chi^2$-test). We will provide the details of these methods in Section \ref{section: hypothesis testing}.
One can use a morphology example from \cite{turner2014persistent} to illustrate a potential application of SECT-based hypothesis testing. The study of variation of complex traits and phenotypes is fundamental to understanding hypotheses in evolutionary biology. Suppose we observe heel bones from two groups of primates (i.e., heel bones $\{K_i^{(j)}\}_{i=1}^n$ are from the $j$-th group for $j\in\{1,2\}$). For each group $j$, suppose the underlying distribution of statistics $\{SECT(K_i^{(j)})\}_{i=1}^n$ is $\mathbf{P}^{(j)}$, and $\mathbf{P}^{(j)}$ is a probability measure on space $C(\mathbb{S}^{d-1};\mathcal{H})$. Then, assessing the evolutionary biological question of ``whether the two groups of primates are of different species'' reduces to testing the hypotheses $H_0$: ``$\mathbf{P}^{(1)}$ and $\mathbf{P}^{(2)}$ have the same mean'' vs. $H_1$: ``$\mathbf{P}^{(1)}$ and $\mathbf{P}^{(2)}$ do not have the same mean'' (the rigorous definition of ``the mean of $\mathbf{P}^{(j)}$'' and the rigorous version of the hypotheses will be provided in Section \ref{section: distributions of H-valued GP} and Eq.~\eqref{eq: the main hypotheses}, respectively, of this paper).
\subsection{Relevant Notation and Paper Organization}
Throughout this paper, a ``shape'' refers to a triangulable subset of $\mathbb{R}^d$ defined as follows.
\begin{definition}\label{def: finite triangularization}
(i) Let $K$ be a subset of $\mathbb{R}^d$. If there exists a finite simplicial complex $S$ such that the corresponding polygon $\vert S\vert=\bigcup_{s\in S}s$ is homeomorphic to $K$, we say $K$ is triangulable. (ii) Let $\mathscr{S}_d$ denote the collection of all triangulable subsets of $\mathbb{R}^d$.
\end{definition}
\noindent The definitions of simplicial complexes and triangulable spaces can be found in \cite{munkres2018elements}. The triangulability assumption is standard in the TDA literature (e.g., \cite{cohen2010lipschitz} and \cite{turner2014persistent}). Throughout this paper, we apply the following:
\begin{enumerate}
\item All the linear spaces are defined with respect to field $\mathbb{R}$. For any normed space $\mathcal{V}$, let $\Vert\cdot\Vert_{\mathcal{V}}$ denote its norm. Let $\Vert x\Vert$ denote the Euclidean norm if $x$ is a finite-dimensional vector.
\item Let $X$ be a compact metric space equipped with metric $d_X$ and let $\mathcal{V}$ denote a normed space. $C(X;\mathcal{V})$ is the collection of all continuous maps from $X$ to $\mathcal{V}$. Furthermore, $C(X;\mathcal{V})$ is a normed space equipped with the norm
\begin{align*}
\Vert f\Vert_{C(X;\mathcal{V})} = \sup_{x\in X}\Vert f(x)\Vert_{\mathcal{V}}.
\end{align*}
The Hölder space $C^{0,\frac{1}{2}}(X;\mathcal{V})$ is defined as
\begin{align*}
\left\{f\in C(X;\mathcal{V}) \Bigg\vert \sup_{x,y\in X,\, x\ne y}\left(\frac{\Vert f(x)-f(y)\Vert_{\mathcal{V}}}{\sqrt{d_X(x,y)}}\right)<\infty\right\}.
\end{align*}
Here, $C^{0,\frac{1}{2}}(X;\mathcal{V})$ is a normed space equipped with the norm
\begin{align*}
\Vert f\Vert_{C^{0,\frac{1}{2}}(X;\mathcal{V})} = \Vert f\Vert_{C(X;\mathcal{V})}+\sup_{x,y\in X,\, x\ne y}\left(\frac{\Vert f(x)-f(y)\Vert_{\mathcal{V}}}{\sqrt{d_X(x,y)}}\right).
\end{align*}
Obviously, $C^{0,\frac{1}{2}}(X;\mathcal{V})\subset C(X;\mathcal{V})$. For simplicity, we denote $C(X) = C(X;\mathbb{R})$ and $C^{0,\frac{1}{2}}(X) = C^{0,\frac{1}{2}}(X;\mathbb{R})$ (see Chapter 5.1 of \cite{evans2010partial}). For a given $T>0$ (see Eq.~\eqref{eq: def of sublevel sets}), we denote the separable Banach space $C([0,T])$ as $\mathcal{B}$.
\item All derivatives implemented in this paper are \textit{weak derivatives} (defined in Chapter 5.2.1 of \cite{evans2010partial}). The inner product of $\mathcal{H} = H_0^1([0,T])$, where $\mathcal{H}$ is a separable Hilbert space (see Chapter 8.3 of \cite{brezis2011functional}), is defined as
\begin{align*}
\langle f, g \rangle_{\mathcal{H}} = \int_0^T f'(t) g'(t) dt.
\end{align*}
Unless otherwise stated, the inner product $\langle\cdot,\cdot\rangle$ denotes $\langle\cdot,\cdot\rangle_{\mathcal{H}}$ for simplicity.
\item Suppose $(X, d_X)$ is a metric space. $\mathscr{B}(X)$ and $\mathscr{B}(d_X)$ denote the Borel algebra generated by the metric topology corresponding to $d_X$.
\end{enumerate}
The following inequalities are also useful for deriving many results presented in this paper
\begin{align}\label{eq: Sobolev embedding from Morrey}
\Vert f\Vert_{\mathcal{B}}\le \Vert f\Vert_{C^{0,\frac{1}{2}}([0,T])}\le \Tilde{C}_T \Vert f\Vert_{\mathcal{H}}, \ \ \mbox{ for all }f\in\mathcal{H},
\end{align}
where $\Tilde{C}_T $ is a constant depending only on $T$. The first inequality in Eq.~\eqref{eq: Sobolev embedding from Morrey} results from the definition of $\Vert\cdot\Vert_{\mathcal{B}}$ and $\Vert\cdot\Vert_{C^{0,\frac{1}{2}}([0,T])}$, while the second inequality is from \cite{evans2010partial} (Theorem 5 of Chapter 5.6). Eq.~\eqref{eq: Sobolev embedding from Morrey} further implies the following embeddings
\begin{align}\label{eq: H, Holder, B embeddings}
H_0^1([0,T]) = \mathcal{H} \subset C^{0,\frac{1}{2}}([0,T]) \subset \mathcal{B} = C([0,T]).
\end{align}
The algebraic topology concepts referred to in this paper (e.g., Betti numbers, homology groups, and homotopy equivalence) can be found in \cite{hatcher2002algebraic} and \cite{munkres2018elements}.
We organize this paper as follows. In Section \ref{SECTs and Their Gaussian Distributions}, we provide a collection of shapes and show that the SECT is well defined for elements in this collection. Additionally, we provide several properties of the SECT using Sobolev and Hölder spaces. These properties will be necessary, in later sections, for developing the probability theory of both the SECT and SECT-based hypothesis testing. In Section \ref{section: distributions of Gaussian bridge}, we construct a probability space for modeling the distributions of shapes. Based on this probability space, we model the SECT as a $C(\mathbb{S}^{d-1};\mathcal{H})$-valued random variable. Then, we define the mean and covariance of the SECT. In Section \ref{section: hypothesis testing}, we propose the Karhunen–Loève expansion of the SECT based on the mean and covariance defined in Section \ref{section: distributions of Gaussian bridge}, and this expansion leads to a normal distribution-based statistic for testing hypotheses on the distributions of shapes. In Section \ref{section: Simulation experiments}, we provide simulation studies showing the performance of the proposed hypothesis testing approach. In Section \ref{Conclusions and Discussions}, we conclude this paper and propose several relevant topics for future research. The Appendix in the Supplementary Material \citep{meng2022supplementary} provides the necessary mathematical tools for this paper. To avoid distraction from the main flow of the paper, unless otherwise stated, the proof of each theorem is in the Appendix.
\section{The Smooth Euler Characteristic Transform of Shapes}\label{SECTs and Their Gaussian Distributions}
In this section, we give the background on the smooth Euler characteristic transform (SECT) and propose corresponding mathematical foundations under the assumption that shapes are deterministic.
\subsection{The Definition of Smooth Euler Characteristic Transform}\label{The Definition of Smooth Euler Characteristic Transform}
We assume $K\subset \overline{B(0,R)} = \{x\in\mathbb{R}^d:\Vert x\Vert\le R\}$ which denotes a closed ball centered at the origin with some radius $R>0$. For each direction $\nu\in\mathbb{S}^{d-1}$, we define a collection of sublevel sets of $K$ by
\begin{align}\label{eq: def of sublevel sets}
K_t^\nu \overset{\operatorname{def}}{=} \left\{x\in K\vert x\cdot\nu \le t-R \right\},\ \ \mbox{ for all } t\in[0,T],\ \ \mbox{ where }T = 2R.
\end{align}
We then have the following Euler characteristic curve (ECC) in direction $\nu$
\begin{align}\label{Eq: first def of Euler characteristic curve}
\chi_t^{\nu}(K)& \overset{\operatorname{def}}{=} \mbox{ the Euler characteristic of }K_{t}^\nu = \chi (K^\nu_t) = \sum_{k=0}^{d-1} (-1)^{k}\cdot\beta_k(K_t^\nu),
\end{align}
for $t\in[0,T]$, where $\beta_k(K_t^\nu)$ is the $k$-th Betti number of $K_t^\nu$. The Euler characteristic transform (ECT) defined as $ECT(K): \mathbb{S}^{d-1} \rightarrow \mathbb{Z}^{[0,T]}, \nu \mapsto \{\chi_{t}^\nu(K)\}_{t\in[0,T]}$ was first proposed by \cite{turner2014persistent} as an alternative to the PHT. Based on the ECT, \cite{crawford2020predicting} further proposed the SECT as follows
\begin{align}\label{Eq: definition of SECT}
\begin{aligned}
& SECT(K): \ \ \mathbb{S}^{d-1}\rightarrow\mathbb{R}^{[0,T]},\ \ \ \nu \mapsto SECT(K)(\nu) = \left\{SECT(K)(\nu;t) \right\}_{t\in[0,T]}, \\
& \mbox{ where}\ \ SECT(K)(\nu;t) \overset{\operatorname{def}}{=} \int_0^t \chi_{\tau}^\nu(K) d\tau-\frac{t}{T}\int_0^T \chi_{\tau}^\nu(K)d\tau.
\end{aligned}
\end{align}
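For an ECC sampled on a uniform grid (we show below that it is a step function with finitely many jumps, so the integrals are elementary), Eq.~\eqref{Eq: definition of SECT} amounts to a cumulative integral minus its linear interpolant. A minimal numerical sketch (the function name is ours):
\begin{verbatim}
import numpy as np

def sect_from_ecc(ecc, T):
    """SECT(nu; t) on a uniform t-grid from a sampled ECC
    chi_t^nu(K) in one fixed direction nu."""
    t = np.linspace(0.0, T, len(ecc))
    # cumulative trapezoidal integral F(t) = int_0^t chi dtau
    F = np.concatenate([[0.0],
        np.cumsum(0.5 * (ecc[1:] + ecc[:-1]) * np.diff(t))])
    return F - (t / T) * F[-1]  # SECT vanishes at t = 0 and t = T
\end{verbatim}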
In order for the integrals in Eq.~\eqref{Eq: definition of SECT} to be well-defined, we need to introduce additional conditions. We first propose the following concept motivated by \cite{cohen2007stability}.
\begin{definition}\label{def: HCP and tameness}
Suppose $K\in\mathscr{S}_d$ and $K\subset\overline{B(0,R)}$. (i) If $K^\nu_{t^*}$ is not homotopy equivalent to $K^\nu_{t^*-\delta}$ for any $\delta>0$, we call $t^*\in[0,T]$ a homotopy critical point (HCP) of $K$ in direction $\nu$. (ii) If $K$ has finitely many HCPs in every direction $\nu\in\mathbb{S}^{d-1}$, we call $K$ tame.
\end{definition}
\noindent Because the Euler characteristic is homotopy invariant, for each $\nu\in \mathbb{S}^{d-1}$, the ECC $\chi^\nu_t(K)$ is a step function of $t$ with finitely many discontinuities, and hence measurable, if $K$ is tame. These discontinuities are the HCPs of $K$ in direction $\nu$. We may refine the definition of the ECC $\{\chi_{t}^\nu(K)\}_{t\in[0,T]}$ in Eq.~\eqref{Eq: first def of Euler characteristic curve} as follows: for a tame $K\in\mathscr{S}_d$, define $\chi_t^\nu(K) = \chi(K_t^\nu)$ if $t$ is not an HCP in direction $\nu$, and $\chi_t^\nu(K) = \lim_{t'\rightarrow t+}\chi_{t'}^\nu(K)$ if $t$ is an HCP in direction $\nu$. Under these conditions, $\{\chi_{t}^\nu(K)\}_{t\in[0,T]}$ is a right-continuous function with left limits (RCLL) and has finitely many discontinuities. We can define the $k$-th Betti number curves $\{\beta_k(K_t^{\nu})\}_{t\in[0,T]}$ to be RCLL functions of $t\in[0,T]$ in the same way. To investigate $SECT(K)$ across shapes $K$, especially the distribution of $SECT(K)$ across $K$, we need the following condition for tame shapes $K\subset\overline{B(0,R)}$ to restrict our attention to a specific subset of $\mathscr{S}_d$
\begin{align}\label{Eq: topological invariants boundedness condition}
\sup_{k\in\{0,\cdots,d-1\}}\left[\sup_{\nu\in\mathbb{S}^{d-1}}\left(\#\Big\{\xi\in \operatorname{Dgm}_k(K;\phi_{\nu}) \, \Big\vert \, \operatorname{pers}(\xi)>0\Big\}\right)\right] \le\frac{M}{d},
\end{align}
where $\operatorname{Dgm}_k(K;\phi_{\nu})$ is a PD, $\operatorname{pers}(\xi)$ is the persistence of the homology feature $\xi$, $\#\{\cdot\}$ denotes the cardinality of a multiset, and $M>0$ is allowed to be any sufficiently large fixed number. To avoid distraction from the main flow of the paper, we save the details of condition \eqref{Eq: topological invariants boundedness condition} and the definitions of $\operatorname{Dgm}_k(K;\phi_{\nu})$ and $\operatorname{pers}(\xi)$ for the Appendix \ref{The Relationship between PHT and SECT}. A heuristic interpretation of condition \eqref{Eq: topological invariants boundedness condition} is that there exists a uniform upper bound for the numbers of nontrivial homology features of shape $K$ across all levels $t$ and directions $\nu$ (see Theorem \ref{thm: boundedness topological invariants theorem} below). This condition is usually satisfied in medical imaging studies (e.g., a tumor has finitely many connected components and cavities across all levels and directions). Condition \eqref{Eq: topological invariants boundedness condition} is needed in the proofs of several theorems in this paper, particularly Theorem \ref{lemma: The continuity lemma}. Throughout this paper, we focus on shapes in the following collection
\begin{align*}
\mathscr{S}_{R,d}^M \overset{\operatorname{def}}{=} \left\{K \in\mathscr{S}_d \big\vert K\subset\overline{B(0,R)},\ \ K \mbox{ is tame and satisfies condition (\ref{Eq: topological invariants boundedness condition}) with fixed }M>0 \right\}.
\end{align*}
Until Appendix \ref{The Relationship between PHT and SECT}, we only need the following boundedness result derived from condition (\ref{Eq: topological invariants boundedness condition}).
\begin{theorem}\label{thm: boundedness topological invariants theorem}
For any $K\in\mathscr{S}_{R,d}^M$, we have the following bounds:\\ (i) $\sup_{\nu\in\mathbb{S}^{d-1}}\left[\sup_{0\le t\le T}\left(\sup_{k\in\{0,\cdots,d-1\}}\beta_k(K_t^{\nu})\right)\right] \le M/d$, and \\ (ii) $\sup_{\nu\in\mathbb{S}^{d-1}}\left(\sup_{0\le t\le T}\left\vert\chi_{t}^\nu(K)\right\vert\right) \le M$.
\end{theorem}
The tameness of $K$ and boundedness of $\chi^\nu_t(K)$ in Theorem \ref{thm: boundedness topological invariants theorem} guarantee that, for each fixed direction $\nu\in\mathbb{S}^{d-1}$, an ECC $\chi^\nu_{(\cdot)}(K)$ is a measurable and bounded function. Therefore, the integrals in Eq.~\eqref{Eq: definition of SECT} are well-defined Lebesgue integrals. Since $\{\chi_{t}^\nu(K)\}_{t\in[0,T]}\in L^1([0,T])$, $SECT(K)(\nu)$ is absolutely continuous on $[0,T]$. Furthermore, we have the following regularity result of the Sobolev type on $SECT(K)(\nu)$ defined in Eq.~\eqref{Eq: definition of SECT}.
\begin{theorem}\label{thm: Sobolev function paths}
For any $K\in\mathscr{S}_{R,d}^M$ and $\nu\in\mathbb{S}^{d-1}$, we have the following:\\
(i) Function $\{\int_0^t \chi_\tau^\nu(K) d\tau\}_{t\in[0,T]}$ has its first-order weak derivative $\{ \chi_t^\nu(K) \}_{t\in[0,T]}$;\\ (ii) $SECT(K)(\nu)\in W^{1,p}_0([0,1]) \subset \mathcal{B}$ for all $p\in[1,\infty)$.
\end{theorem}
\noindent Here, $W^{1,p}_0([0,T])$ is a Sobolev space defined by the following (see Theorem 8.12 of \cite{brezis2011functional} for this specific definition)
\begin{align*}
W^{1,p}_0([0,T]) = \Big\{f\in L^p([0,T]) \, \Big\vert \, \mbox{weak derivative }f' \mbox{ exists, }f'\in L^p([0,T]), \mbox{ and }f(0)=f(T)=0\Big\}.
\end{align*}
The special case when $p=2$ in Theorem \ref{thm: Sobolev function paths} is of importance, as $\mathcal{H}=H_0^1([0,T])=W^{1,2}_0([0,T])$ is the RKHS reproduced by the following kernel
\begin{align}\label{Eq: kernel of the Brownian bridge}
\kappa(s,t) = \min\{s,t\}-\frac{st}{T}, \ \mbox{ for all }s,t\in[0,T],
\end{align}
which is the covariance function of the Brownian bridge on $[0,T]$ (see Example 4.9 of \cite{lifshits2012lectures}). The relationship between $\mathcal{H}$ and $\kappa(s,t)$ from the RKHS perspective implies that $\langle \kappa(t,\cdot), f \rangle = f(t)$ for all $f\in\mathcal{H}$ and $t\in[0,T]$. This result plays an important role later in this paper. The case when $p=2$ in Theorem \ref{thm: Sobolev function paths} indicates that $SECT(\mathscr{S}_{R,d}^M) \subset \mathcal{H}^{\mathbb{S}^{d-1}} = \{\mbox{all maps }F:\mathbb{S}^{d-1}\rightarrow\mathcal{H}\}$, which is enhanced by Theorem \ref{lemma: The continuity lemma} below.
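The reproducing property $\langle\kappa(t,\cdot),f\rangle=f(t)$ noted above is easy to verify: since $\partial_s\kappa(t,s)=\mathbf{1}_{\{s<t\}}-t/T$, we get $\langle\kappa(t,\cdot),f\rangle=f(t)-f(0)-\frac{t}{T}\left(f(T)-f(0)\right)=f(t)$ whenever $f(0)=f(T)=0$. A quick numerical sanity check with an arbitrary test function in $\mathcal{H}$ (all choices below are ours and purely illustrative):
\begin{verbatim}
import numpy as np

T, n, t0 = 3.0, 20001, 1.1
s = np.linspace(0.0, T, n)
f = np.sin(np.pi * s / T) * s * (T - s)   # f(0) = f(T) = 0
fp = np.gradient(f, s)                    # numerical derivative
kp = (s < t0) - t0 / T                    # d/ds kappa(t0, s)
g = kp * fp
inner = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(s))  # <kappa(t0,.), f>_H
print(inner, np.sin(np.pi * t0 / T) * t0 * (T - t0)) # both approx f(t0)
\end{verbatim}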
\begin{theorem}\label{lemma: The continuity lemma}
For each $K\in\mathscr{S}_{R,d}^M$, we have the following:\\
(i) There exists a constant $C^*_{M,R,d}$ depending only on $M$, $R$, and $d$ such that the following two inequalities hold for any two directions $\nu_1,\nu_2\in\mathbb{S}^{d-1}$,
\begin{align}\label{Eq: continuity inequality}
& \left( \int_0^T \Big\vert\chi_\tau^{\nu_1}(K)-\chi_\tau^{\nu_2}(K)\Big\vert^2 d\tau \right)^{1/2} \le C^*_{M,R,d} \cdot \sqrt{\Vert \nu_1-\nu_2\Vert}, \\
\notag & \Big\Vert SECT(K)(\nu_1) - SECT(K)(\nu_2)\Big\Vert_{\mathcal{H}} \le C^*_{M,R,d} \cdot \sqrt{ \Vert \nu_1 - \nu_2\Vert + \Vert
\nu_1-\nu_2 \Vert^2 }.
\end{align}
(ii) $SECT(K) \in C^{0,\frac{1}{2}}(\mathbb{S}^{d-1};\mathcal{H})$, where $\mathbb{S}^{d-1}$ is equipped with the geodesic distance $d_{\mathbb{S}^{d-1}}$.\\
(iii) The constant $\Tilde{C}_T$ in Eq.~\eqref{eq: Sobolev embedding from Morrey} provides the following inequality
\begin{align}\label{eq: bivariate Holder continuity}
\begin{aligned}
& \Big\vert SECT(K)(\nu_1; t_1)-SECT(K)(\nu_2; t_2)\Big\vert\\
& \le \Tilde{C}_T \left\{\Vert SECT(K)\Vert_{C(\mathbb{S}^{d-1};\mathcal{H})}\cdot \sqrt{\vert t_1-t_2\vert} + C^*_{M,R,d} \cdot \sqrt{ \Vert \nu_1 - \nu_2\Vert + \Vert
\nu_1-\nu_2 \Vert^2 } \right\},
\end{aligned}
\end{align}
for all $\nu_1, \nu_2\in\mathbb{S}^{d-1}$ and $t_1, t_2\in[0,T]$, which implies that $(\nu,t)\mapsto SECT(K)(\nu;t)$, as a function on $\mathbb{S}^{d-1}\times[0,T]$, belongs to $C^{0,\frac{1}{2}}(\mathbb{S}^{d-1}\times[0,T];\mathbb{R})$.
\end{theorem}
\noindent Theorem \ref{lemma: The continuity lemma} (i) is a counterpart of Lemma 2.1 in \cite{turner2014persistent} and is derived using ``bottleneck stability'' (see \cite{cohen2007stability}). Additionally, Theorem \ref{lemma: The continuity lemma} (i) is implemented in the proof of Theorem \ref{lemma: The continuity lemma} (ii) and will be used in Section \ref{The Primitive Euler Characteristic Transform of Shapes}. Furthermore, Theorem \ref{lemma: The continuity lemma} (ii) guarantees that $SECT(\mathscr{S}_{R,d}^M) \subset C^{0,\frac{1}{2}}(\mathbb{S}^{d-1}; \mathcal{H}) \subset C(\mathbb{S}^{d-1}; \mathcal{H}) \subset \mathcal{H}^{\mathbb{S}^{d-1}}$. As a result, Eq.~\eqref{Eq: definition of SECT} defines the following map
\begin{align}\label{Eq: final def of SECT}
SECT: \mathscr{S}_{R,d}^M \rightarrow C(\mathbb{S}^{d-1}; \mathcal{H}),\ \ \ K \mapsto \left\{SECT(K)(\nu)\right\}_{\nu\in\mathbb{S}^{d-1}} \overset{\operatorname{def}}{=} SECT(K).
\end{align}
Theorem \ref{lemma: The continuity lemma} (iii) will be implemented in the mathematical foundation of the hypothesis testing in Section \ref{section: hypothesis testing} (specifically, see Theorems \ref{thm: lemma for KL expansions} and \ref{thm: KL expansions of SECT}).
Corollary 6 of \cite{ghrist2018persistent} implies the following result, which shows that the map in Eq.~\eqref{Eq: final def of SECT} preserves all the information of shapes $K\in \mathscr{S}_{R,d}^M$.
\begin{theorem}\label{thm: invertibility}
The map $SECT$ defined in Eq.~\eqref{Eq: final def of SECT} is invertible for all dimensions $d$.
\end{theorem}
\noindent Together with Eq.~\eqref{Eq: final def of SECT}, Theorem \ref{thm: invertibility} enables us to view each shape $K\in\mathscr{S}_{R,d}^M$ as $SECT(K)$ which is an element of $C(\mathbb{S}^{d-1}; \mathcal{H})$. This viewpoint will help us characterize the randomness of shapes $K \in \mathscr{S}_{R,d}^M$ using probability measures on the separable Banach space $C(\mathbb{S}^{d-1}; \mathcal{H})$.
If the shapes of interest are represented as triangular meshes, their ECC defined in Eq.~\eqref{Eq: first def of Euler characteristic curve} can easily be computed (e.g., Section 2 of \cite{wang2021statistical}). The SECT is then derived directly from the computed ECC (see Eq.~\eqref{Eq: definition of SECT}). In Appendix \ref{section: Computation of SECT Using the Čech Complexes}, we briefly introduce an approach for approximating the ECC of shapes in $\mathbb{R}^d$ in scenarios where the triangular mesh representation of shapes is not available, which in turn provides a corresponding approximation to SECT. This approach applies Čech complexes and the nerve theorem (see Chapter III of \cite{edelsbrunner2010computational}), which are implemented in all proof-of-concept examples and simulations throughout this paper. In addition, an approach was studied in \cite{niyogi2008finding} for finding the Betti numbers of submanifolds of Euclidean spaces from random point clouds. For the case where a point cloud is drawn near a submanifold, this approach can be applied to estimate the SECT of the submanifold. The application of \cite{niyogi2008finding} to SECT is left for future research.
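As a complement, for $d=2$ one can also work directly with pixelated shapes: a binary image is a union of closed unit squares, and its Euler characteristic is the alternating count of vertices, edges, and squares of the induced cubical complex. The sketch below is an illustration only (it is not the Čech-based scheme used in our experiments); $X$ and $Y$ are assumed to hold the pixel-center coordinates:
\begin{verbatim}
import numpy as np

def euler_characteristic(mask):
    """chi of a union of closed unit squares (the True pixels)."""
    p = np.pad(mask, 1)
    v = p[:-1, :-1] | p[:-1, 1:] | p[1:, :-1] | p[1:, 1:]  # vertices
    eh = p[:-1, 1:-1] | p[1:, 1:-1]                        # horizontal edges
    ev = p[1:-1, :-1] | p[1:-1, 1:]                        # vertical edges
    return int(v.sum()) - int(eh.sum()) - int(ev.sum()) + int(mask.sum())

def ecc(mask, X, Y, nu, R, levels):
    """Approximate ECC t -> chi(K_t^nu) of a pixelated shape."""
    h = X * nu[0] + Y * nu[1]          # heights x . nu of pixel centers
    return np.array([euler_characteristic(mask & (h <= t - R))
                     for t in levels])
\end{verbatim}
For instance, a single filled pixel gives $4-4+1=1$, as it should.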
\subsection{Proof-of-Concept Simulation Examples I: Deterministic Shapes}\label{Proof-of-Concept Simulation Examples I: Deterministic Shapes}
In this subsection, we compute the SECT of two simulated shapes $K^{(1)}$ and $K^{(2)}$ of dimension $d=2$. These shapes are defined as the following and presented in Figures \ref{fig: SECT visualizations, deterministic}(a) and (g), respectively:
\begin{align}\label{eq: example shapes K1 and K2}
K^{(j)} = \left\{x\in\mathbb{R}^2 \,\Bigg\vert \, \inf_{y\in S^{(j)}}\Vert x-y\Vert\le \frac{1}{5}\right\},\ \ \mbox{ where }j\in\{1,2\},
\end{align}
\begin{align*}
& S^{(1)} = \left\{\left(\frac{2}{5}+\cos t, \sin t\right) \,\Bigg\vert \, \frac{\pi}{5}\le t\le\frac{9\pi}{5}\right\}\bigcup\left\{\left(-\frac{2}{5}+\cos t, \sin t\right) \,\Bigg\vert \, \frac{6\pi}{5}\le t\le\frac{14\pi}{5}\right\},\\
& S^{(2)} = \left\{\left(\frac{2}{5}+\cos t, \sin t\right) \,\Bigg\vert \, 0\le t\le 2\pi\right\}\bigcup\left\{\left(-\frac{2}{5}+\cos t, \sin t\right) \,\Bigg\vert \, \frac{6\pi}{5}\le t\le\frac{14\pi}{5}\right\}.
\end{align*}
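For reproducibility, these shapes can be rasterized as binary images by thresholding the distance to densely sampled points of $S^{(j)}$; the resulting masks can then be fed to, e.g., the pixel-based ECC sketch of Section \ref{The Definition of Smooth Euler Characteristic Transform}. A minimal sketch (the grid size and sampling density are arbitrary choices of ours):
\begin{verbatim}
import numpy as np

n = 400
x = np.linspace(-1.7, 1.7, n)
X, Y = np.meshgrid(x, x)

def shape_mask(arcs, r=0.2, n_samp=800):
    """1/5-neighborhood of a union of arcs t -> (cx + cos t, sin t)."""
    d2 = np.full(X.shape, np.inf)
    for cx, t0, t1 in arcs:
        for ti in np.linspace(t0, t1, n_samp):
            d2 = np.minimum(d2, (X - cx - np.cos(ti))**2
                                + (Y - np.sin(ti))**2)
    return d2 <= r**2

K1 = shape_mask([(0.4, np.pi/5, 9*np.pi/5), (-0.4, 6*np.pi/5, 14*np.pi/5)])
K2 = shape_mask([(0.4, 0.0, 2*np.pi),       (-0.4, 6*np.pi/5, 14*np.pi/5)])
\end{verbatim}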
We compute $SECT(K^{(j)})(\nu;t)$ across direction $\nu\in\mathbb{S}^{1}$ and sublevel set $t\in[0,3]$ with $T=3$. For the following visualization, we identify each direction $\nu\in\mathbb{S}^1$ through the parametrization $\nu=(\cos\vartheta, \sin\vartheta)$ with $\vartheta\in[0,2\pi)$.
\begin{itemize}
\item The surfaces of the bivariate maps $(\vartheta, t)\mapsto SECT(K^{(j)})(\nu;t)$, for $j\in\{1,2\}$, are presented in Figures \ref{fig: SECT visualizations, deterministic}(b), (c), (h), and (i).
\item The curves of the univariate maps $t\mapsto SECT(K^{(j)})\left((1,0)^\intercal;t\right)$, for $j\in\{1,2\}$, are presented by the black solid lines in Figures \ref{fig: SECT visualizations, deterministic}(d) and (j); while the curves of the univariate maps $t\mapsto SECT(K^{(j)})\left((0,1)^\intercal;t\right)$, for $j\in\{1,2\}$, are presented by the black solid lines in Figures \ref{fig: SECT visualizations, deterministic}(e) and (k).
\item Lastly, the curves of the univariate maps $\vartheta\mapsto SECT(K^{(j)})\left((\cos\vartheta, \sin\vartheta)^\intercal;\frac{3}{2}\right)$, for $j\in\{1,2\}$, are presented by the black solid lines in Figures \ref{fig: SECT visualizations, deterministic}(f) and (l).
\end{itemize}
These figures illustrate the continuity of $(\nu,t)\mapsto SECT(K^{(j)})(\nu;t)$ stated in Theorem \ref{lemma: The continuity lemma} (iii). Specifically, the curves and surfaces in these figures do not look as rough as the paths of Brownian motion, although they are not differentiable everywhere. With probability one, paths of a Brownian motion are not locally $C^{0,\frac{1}{2}}$-continuous (see Remark 22.4 of \cite{klenke2013probability}). Hence, based on Figure \ref{fig: SECT visualizations, deterministic}, the regularity of $(\nu,t)\mapsto SECT(K^{(j)})(\nu;t)$ is likely to be better than that of Brownian motion paths, but worse than that of continuously differentiable functions. Therefore, Figure \ref{fig: SECT visualizations, deterministic} supports the $C^{0,\frac{1}{2}}$-continuity stated in Theorem \ref{lemma: The continuity lemma} (iii).
The invertibility in Theorem \ref{thm: invertibility} indicates that all information of $K^{(1)}$ and $K^{(2)}$ is stored in the surfaces presented by Figures \ref{fig: SECT visualizations, deterministic}(b), (c), (h), and (i). The red dashed curves in Figures \ref{fig: SECT visualizations, deterministic}(j), (k), and (l) are the counterparts of $K^{(1)}$ (see the curves in Figures \ref{fig: SECT visualizations, deterministic}(d), (e), and (f)). The discrepancy between the solid black and dashed red curves illustrates the SECT's ability to distinguish shapes. Simulation studies in Section \ref{section: Simulation experiments} will further confirm the ability of SECT in distinguishing shapes.
\begin{figure}
\centering
\includegraphics[scale=0.62]{SECT_visualizations.pdf}
\caption{Visualizations of $SECT(K^{(j)})$ for $j\in\{1,2\}$, where $K^{(1)}$ and $K^{(2)}$ are defined in Eq.~\eqref{eq: example shapes K1 and K2}. Panels (b) and (c) present the same surface from different angles. Similarly, panels (h) and (i) present the same surface. The similarity between the curves in panel (j) partially indicates that the SECT in only one direction does not preserve all the geometric information of a shape.}
\label{fig: SECT visualizations, deterministic}
\end{figure}
\subsection{The Primitive Euler Characteristic Transform of Shapes}\label{The Primitive Euler Characteristic Transform of Shapes}
In this work, we also introduce another topological invariant --- the primitive Euler characteristic transform (PECT) --- which is related to SECT and will play an important role in defining a distance function on $\mathscr{S}_{R,d}^M$. The PECT is defined as follows
\begin{align}\label{Eq: def of PECT}
\begin{aligned}
& PECT: \ \mathscr{S}_{R,d}^M \rightarrow C(\mathbb{S}^{d-1};\mathcal{H}_{BM}),\ \ \ K \mapsto PECT(K) \overset{\operatorname{def}}{=} \{PECT(K)(\nu)\}_{\nu\in\mathbb{S}^{d-1}}, \\
& \mbox{where }\ PECT(K)(\nu) \overset{\operatorname{def}}{=} \left\{\int_0^t \chi_\tau^\nu(K) d\tau\right\}_{t\in[0,T]},
\end{aligned}
\end{align}
and $\mathcal{H}_{BM} \overset{\operatorname{def}}{=} \{f\in L^2([0,T]) \,\vert\, \mbox{weak derivative }f' \mbox{ exists, }f'\in L^2([0,T]), \mbox{ and }f(0)=0 \}$ is a separable Hilbert space equipped with the inner product $\langle f, g \rangle_{\mathcal{H}_{BM}} = \int_0^T f'(t)g'(t) dt$. In addition, $\mathcal{H}_{BM}$ is the RKHS generated by the covariance function $(s,t)\mapsto\min\{s,t\}$ of the Brownian motion on $[0,T]$. The inequality in Eq.~\eqref{Eq: continuity inequality}, together with the weak derivative of $PECT(K)(\nu)$ being $\{\chi_t^\nu(K)\}_{t\in[0,T]}$ (see Theorem \ref{thm: Sobolev function paths}), implies that
\begin{align*}
\left\Vert PECT(K)(\nu_1)-PECT(K)(\nu_2)\right\Vert_{\mathcal{H}_{BM}}\le C^*_{M,R,d} \cdot \sqrt{\Vert \nu_1-\nu_2\Vert}.
\end{align*}
Therefore, $PECT(K)\in C(\mathbb{S}^{d-1}; \mathcal{H}_{BM})$, as claimed in Eq.~\eqref{Eq: def of PECT}. The PECT relates to the SECT via the following equation.
\begin{align}\label{eq: relationship between PECT and SECT}
SECT(K)(\nu;t)=PECT(K)(\nu;t)-\frac{t}{T}PECT(K)(\nu;T),
\end{align}
for all $\nu\in\mathbb{S}^{d-1}$ and $t\in[0,T]$. Additionally, Theorem \ref{thm: invertibility}, together with Eq.~\eqref{eq: relationship between PECT and SECT}, implies that the $PECT$ in Eq.~\eqref{Eq: def of PECT} is invertible.
\section{Random Shapes and Probabilistic Distributions over the SECT}\label{section: distributions of Gaussian bridge}
In Section \ref{SECTs and Their Gaussian Distributions}, shapes $K$ were viewed as deterministic. From this section onward, we investigate the randomness of shapes. Suppose now that $\mathscr{S}_{R,d}^M$ is equipped with a $\sigma$-algebra $\mathscr{F}$ and that a distribution of shapes $K$ across $\mathscr{S}_{R,d}^M$ can be represented by a probability measure $\mathbb{P}=\mathbb{P}(dK)$ on $\mathscr{F}$. Then, $(\mathscr{S}_{R,d}^M, \mathscr{F}, \mathbb{P})$ is a probability space, and it is the underlying probability space of interest.
\subsection{Formulation of the SECT as a Random Variable}\label{Distributions of SECT in Each Direction}
For each direction $\nu\in\mathbb{S}^{d-1}$ and level $t\in[0,T]$, the integer-valued map $\chi^\nu_t: K \mapsto \chi^\nu_t(K)$ is defined on $\mathscr{S}_{R,d}^M$. We need the following assumption on the measurability of $\chi_t^\nu$.
\begin{assumption}\label{assumption: the measurability of ECC}
For each fixed $\nu\in\mathbb{S}^{d-1}$ and $t\in[0,T]$, the map
\begin{align*}
\chi^\nu_t:\, \left(\mathscr{S}^M_{R,d},\, \mathscr{F} \right) \rightarrow \left(\mathbb{R}, \, \mathscr{B}(\mathbb{R}) \right)
\end{align*}
is a real-valued random variable --- meaning that $(\chi^{\nu}_t)^{-1}(B) = \{K\in\mathscr{S}_{R,d}^M \, \vert \,
\chi^{\nu}_t(K)\in B\}\in\mathscr{F}$ for all $B\in\mathscr{B}(\mathbb{R})$.
\end{assumption}
\noindent A $\sigma$-algebra $\mathscr{F}$ satisfying Assumption \ref{assumption: the measurability of ECC} does exist. Specifically, we construct a metric on $\mathscr{S}_{R,d}^M$ and show that the Borel algebra induced by this metric satisfies Assumption \ref{assumption: the measurability of ECC}. Motivated by the metric of $C(\mathbb{S}^{d-1};\mathcal{H}_{BM})$, we define the semi-distance
\begin{align}\label{Eq: distance between shapes}
\begin{aligned}
& \rho:\ \ \mathscr{S}_{R,d}^M\times \mathscr{S}_{R,d}^M \rightarrow [0,\infty),\ \ \ \ \ (K_1, K_2)\mapsto \rho(K_1, K_2), \ \ \mbox{where} \\
& \rho(K_1, K_2) \overset{\operatorname{def}}{=} \sup_{\nu\in\mathbb{S}^{d-1}} \left\{ \left( \int_0^T \Big\vert\chi_\tau^{\nu}(K_1)-\chi_\tau^{\nu}(K_2)\Big\vert^2 d\tau \right)^{1/2} \right\}.
\end{aligned}
\end{align}
The following theorem shows that the $\rho$ defined in Eq.~\eqref{Eq: distance between shapes} is indeed a metric on $\mathscr{S}_{R,d}^M$. Furthermore, it provides an example of a $\sigma$-algebra satisfying Assumption \ref{assumption: the measurability of ECC}.
\begin{theorem}\label{Thm: metric theorem for shapes}
(i) The map $\rho$ defined in Eq.~\eqref{Eq: distance between shapes} is a metric on $\mathscr{S}_{R,d}^M$. \\
(ii) Assumption \ref{assumption: the measurability of ECC} is satisfied if $\mathscr{F}=\mathscr{B}(\rho)$.
\end{theorem}
\noindent Additionally, this $\rho$ metric is a counterpart of the $dist_{\mathcal{M}_d}$ metric in \cite{turner2014persistent} (Eq.~(2.3) therein) and the $dist^{SECT}_{\mathcal{M}_{d-1}}$ metric in \cite{crawford2020predicting} (Eq.~(7) therein).
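With ECCs evaluated on finitely many directions and a uniform $t$-grid, $\rho$ is easy to approximate. A minimal sketch (the function name is ours; since ECCs are step functions, the trapezoidal rule is only an approximation that improves with the grid resolution):
\begin{verbatim}
import numpy as np

def rho(ecc1, ecc2, T):
    """Discretized rho: the max over directions of the L2 distance
    between ECCs; inputs have shape (num_directions, num_levels)."""
    dt = T / (ecc1.shape[1] - 1)
    d = ecc1 - ecc2
    l2 = np.sqrt(np.sum(0.5 * (d[:, 1:]**2 + d[:, :-1]**2) * dt, axis=1))
    return float(l2.max())
\end{verbatim}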
Under Assumption \ref{assumption: the measurability of ECC}, the ECC $\{\chi^\nu_t\}_{t\in[0,T]}$, for each fixed $\nu\in\mathbb{S}^{d-1}$, is a continuous-time stochastic process with RCLL paths defined on probability space $(\mathscr{S}_{R,d}^M, \mathscr{F}, \mathbb{P})$. Since each path $\{\chi^\nu_t(K)\}_{t\in[0,T]}$ is a step function with finitely many discontinuities, the integrals $\int_0^t \chi_\tau^\nu(K) d\tau$ for $t\in[0,T]$ are Riemann integrals of the form
\begin{align}\label{Eq: Riemann sum}
\int_0^t \chi_\tau^\nu(K) d\tau = \lim_{n\rightarrow\infty} \left\{\frac{t}{n} \sum_{l=1}^n \chi^\nu_{\frac{tl}{n}}(K)\right\}, \ \mbox{ for all }K\in\mathscr{S}_{R,d}^M.
\end{align}
Because each $\chi^\nu_{\frac{tl}{n}}$ in Eq.~\eqref{Eq: Riemann sum} is a random variable under Assumption \ref{assumption: the measurability of ECC}, the limit in Eq.~\eqref{Eq: Riemann sum} for each fixed $t\in[0,T]$ is a random variable as well. Therefore, for each fixed $\nu\in\mathbb{S}^{d-1}$, $\{\int_0^t \chi_\tau^\nu d\tau\}_{t\in[0,T]}$ with $\int_0^t \chi_\tau^\nu d\tau: K \mapsto \int_0^t \chi_\tau^\nu(K) d\tau$ is a real-valued stochastic process with continuous paths. Then, under Assumption \ref{assumption: the measurability of ECC}, Eqs.~\eqref{Eq: definition of SECT} and \eqref{Eq: def of PECT} define the following real-valued stochastic processes on the probability space $(\mathscr{S}_{R,d}^M, \mathscr{F}, \mathbb{P})$
\begin{align}\label{Eq: def SECTs as stochastic processes}
\begin{aligned}
& SECT(\cdot)(\nu) \overset{\operatorname{def}}{=} \left\{\int_0^t\chi_\tau^\nu d\tau - \frac{t}{T} \int_0^T \chi_\tau^\nu d\tau \overset{\operatorname{def}}{=} SECT(\cdot)(\nu;t) \right\}_{t\in[0,T]}, \\
& PECT(\cdot)(\nu) \overset{\operatorname{def}}{=} \left\{\int_0^t \chi_\tau^\nu d\tau \overset{\operatorname{def}}{=} PECT(\cdot)(\nu;t)\right\}_{t\in[0,T]}.
\end {aligned}
\end{align}
Eq.~\eqref{eq: relationship between PECT and SECT} indicates that $SECT(\cdot)(\nu)$ is the ``bridge version''$^\ddagger$\footnote{$\ddagger$: Motivated by Brownian bridges, for any stochastic process $\{X_t\}_{t\in[0,T]}$ with continuous paths and $X_0=0$, we refer to $\{U_t = X_t-\frac{t}{T}X_T\}_{t\in[0,T]}$ as its bridge version.} of $PECT(\cdot)(\nu)$. Theorem \ref{thm: Sobolev function paths}, together with \cite{berlinet2011reproducing} (see Corollary 13 in Chapter 4 therein), directly implies the following theorem; hence, the corresponding proof is omitted.
\begin{theorem}\label{thm: SECT distribution theorem in each direction}
For each direction $\nu\in\mathbb{S}^{d-1}$, under Assumption \ref{assumption: the measurability of ECC}, $SECT(\cdot)(\nu)$ is a real-valued stochastic process with paths in $\mathcal{H}$. Equivalently, $SECT(\cdot)(\nu)$ is a random variable taking values in $(\mathcal{H}, \mathscr{B}(\mathcal{H}))$. Additionally, $SECT(\cdot)(\nu;0)=SECT(\cdot)(\nu;T)=0$.
\end{theorem}
\subsection{Mean and Covariance of the SECT}\label{section: distributions of H-valued GP}
Theorem \ref{thm: SECT distribution theorem in each direction} indicates that the SECT is an $\mathcal{H}$-valued random field on manifold $\mathbb{S}^{d-1}$. Theorem \ref{lemma: The continuity lemma} implies that the sample paths of this random field are in $C^{0,\frac{1}{2}}(\mathbb{S}^{d-1};\mathcal{H})$. To define the mean and covariance for the SECT, we need the following assumption on the second moments corresponding to the distribution $\mathbb{P}$.
\begin{assumption}\label{assumption: existence of second moments}
We have the following finite second moments for all $\nu\in\mathbb{S}^{d-1}$
\begin{align*}
\mathbb{E}\Vert SECT(\cdot)(\nu)\Vert^2_{\mathcal{H}}=\int_{\mathscr{S}_{R,d}^M}\Vert SECT(K)(\nu)\Vert^2_{\mathcal{H}}\mathbb{P}(dK)<\infty.
\end{align*}
\end{assumption}
\noindent Together, Eq.~\eqref{eq: Sobolev embedding from Morrey} and Assumption \ref{assumption: existence of second moments} imply that
\begin{align}\label{eq: finite second moments for all v and t}
\mathbb{E}\vert SECT(\cdot)(\nu;t)\vert^2 \le \Tilde{C}^2_T \cdot \mathbb{E}\Vert SECT(\cdot)(\nu)\Vert^2_{\mathcal{H}}<\infty,\ \ \mbox{ for all }\nu\mbox{ and }t.
\end{align}
Therefore, under Assumption \ref{assumption: existence of second moments}, we may define the mean function $\{m_\nu\}_{\nu\in\mathbb{S}^{d-1}}$ as follows
\begin{align}\label{Eq: mean function of our Gaussian bridge}
\begin{aligned}
m_\nu(t) & = \mathbb{E}\left\{ SECT(\cdot)(\nu;t) \right\}\\
& =\int_{\mathscr{S}_{R,d}^M}\left\{ \int_0^t\chi_\tau^\nu(K) d\tau - \frac{t}{T} \int_0^T \chi_\tau^\nu(K) d\tau\right\} \mathbb{P}(dK),\ \ t\in[0,T].
\end{aligned}
\end{align}
The next theorem is crucial for defining the covariance of the SECT as well as for conducting the hypothesis testing framework that we will detail in Section \ref{section: hypothesis testing}.
\begin{theorem}\label{thm: mean is in H}
(i) For each fixed direction $\nu\in\mathbb{S}^{d-1}$, the real-valued function $m_\nu \overset{\operatorname{def}}{=} \{m_\nu(t)\}_{t\in[0,T]}$ of $t$ belongs to $\mathcal{H}$; hence, $SECT(K)(\nu)-m_\nu \in\mathcal{H}$ for all $K\in\mathscr{S}_{R,d}^M$. \\ (ii) The map $(K, t)\mapsto SECT(K)(\nu;t)$ belongs to $L^2(\mathscr{S}_{R,d}^M\times[0,T],\, \mathbb{P}(dK)\times dt)$, where $\mathbb{P}(dK)\times dt$ denotes the product measure generated by $\mathbb{P}(dK)$ and the Lebesgue measure $dt$. \\
(iii) The map $\nu\mapsto m_\nu$ belongs to $C^{0,\frac{1}{2}}(\mathbb{S}^{d-1};\mathcal{H})$; hence, this map belongs to $C(\mathbb{S}^{d-1};\mathcal{H})$.
\end{theorem}
\noindent As an analog to the covariance of Gaussian measures on separable Banach spaces (see \cite{hairer2009introduction}, Eq.~(3.1) therein), the covariance of the SECT is defined as a linear map $C(\nu_1, \nu_2):\mathcal{H}\rightarrow\mathcal{H}$ via the following equation
\begin{align}\label{eq: covariance of random field}
\begin{aligned}
& \Big\langle h_1, C(\nu_1, \nu_2) h_2 \Big\rangle \\
& \overset{\operatorname{def}}{=} \mathbb{E} \left\{\Big\langle h_1, SECT(\cdot)(\nu_1)-m_{\nu_1} \Big\rangle\cdot\Big\langle h_2, SECT(\cdot)(\nu_2)-m_{\nu_2} \Big\rangle\right\},
\end{aligned}
\end{align}
for all $h_1, h_2\in\mathcal{H}$. Eq.~\eqref{eq: covariance of random field} is well-defined because of the first statement in Theorem \ref{thm: mean is in H} and the following inequalities induced by Assumption \ref{assumption: existence of second moments}
\begin{align*}
& \mathbb{E} \left\vert\Big\langle h_1, SECT(\cdot)(\nu_1)-m_{\nu_1} \Big\rangle\cdot\Big\langle h_2, SECT(\cdot)(\nu_2)-m_{\nu_2} \Big\rangle\right\vert \\
& \le \Vert h_1\Vert_{\mathcal{H}}\Vert h_2\Vert_{\mathcal{H}} \cdot \Big(\mathbb{E}\Vert SECT(\cdot)(\nu_1)-m_{\nu_1}\Vert^2_{\mathcal{H}}\Big)^{1/2} \Big(\mathbb{E}\Vert SECT(\cdot)(\nu_2)-m_{\nu_2}\Vert^2_{\mathcal{H}}\Big)^{1/2} <\infty,
\end{align*}
which further indicate that $C(\nu_1, \nu_2)$ is a bounded linear operator.
Because of the finite second moments in Eq.~\eqref{eq: finite second moments for all v and t}, the covariance function
\begin{align*}
\Xi_\nu(s,t) = cov\left(SECT(\cdot)(\nu;s), SECT(\cdot)(\nu;t)\right),\ \ \mbox{ for }s,t\in[0,T],
\end{align*}
is well-defined. The real-valued covariance $\Xi_\nu(s,t)$ is determined by the operator-valued covariance $C(\nu,\nu)$ via the following
\begin{align}\label{eq: relationship between operator-valued cov and real-valued cov}
\begin{aligned}
\Xi_\nu(s,t) &=\mathbb{E} \left\{\Big\langle \kappa(s,\cdot), SECT(\cdot)(\nu)-m_{\nu} \Big\rangle\cdot\Big\langle \kappa(t,\cdot), SECT(\cdot)(\nu)-m_{\nu} \Big\rangle\right\} \\
& = \Big\langle \kappa(s,\cdot), C(\nu, \nu) \kappa(t,\cdot) \Big\rangle,\ \ \mbox{ for all }\nu\in\mathbb{S}^{d-1},
\end{aligned}
\end{align}
where the first identity follows from the fact that $\kappa$ as defined in Eq.~\eqref{Eq: kernel of the Brownian bridge} is the reproducing kernel of RKHS $\mathcal{H}$. The following theorem on the covariance function $\Xi_\nu(s,t)$ will be implemented in Section \ref{section: hypothesis testing}.
\begin{theorem}\label{thm: lemma for KL expansions}
For each fixed direction $\nu\in\mathbb{S}^{d-1}$, we have the following: (i) $SECT(\cdot)(\nu)$ is mean-square continuous (i.e., $\lim_{\epsilon\rightarrow0}\mathbb{E}\vert SECT(\cdot)(\nu;t+\epsilon)-SECT(\cdot)(\nu;t)\vert^2=0$); \\
(ii) $(s,t)\mapsto\Xi_\nu(s,t)$ is continuous on $[0,T]^2$.
\end{theorem}
\noindent The first part of Theorem \ref{thm: lemma for KL expansions} directly follows from Eq.~\eqref{eq: bivariate Holder continuity}, while the second part follows from Lemma 4.2 of \cite{alexanderian2015brief}; hence, the proof of Theorem \ref{thm: lemma for KL expansions} is omitted.
In applications, we cannot reasonably sample infinitely many directions $\nu\in\mathbb{S}^{d-1}$ and levels $t\in[0,T]$. For each given shape $K$, we can only compute $SECT(K)(\nu;t)$ for finitely many directions $\{\nu_1, \cdots, \nu_\Gamma\}\subset\mathbb{S}^{d-1}$ and levels $\{t_1, \cdots, t_\Delta\}\subset[0,T]$ with $t_1<t_2<\cdots<t_\Delta$. Hence, for a collection of shapes $\{K_i\}_{i=1}^n \subset \mathscr{S}_{R,d}^M$ sampled from $\mathbb{P}$, the data we have in applications are represented by the following 3-dimensional array
\begin{align}\label{Eq: data matrix}
\Big\{\ SECT(K_i)(\nu_p; t_q)\ \Big\vert\ i=1,\cdots,n,\ p=1,\cdots, \Gamma, \mbox{ and }q=1, \cdots, \Delta\ \Big\}.
\end{align}
To preserve as much information about a shape $K$ as possible, we need to make the numbers of directions and levels (i.e., $\Gamma$ and $\Delta$ in Eq.~\eqref{Eq: data matrix}) sufficiently large. \cite{curry2018many} (Theorem 7.14 therein) provided an explicit formula for an upper bound on the minimal number of directions. The levels $\{t_j\}_{j=1}^\Delta$ should be sufficiently dense that there is at least one $t_j$ between any two consecutive HCPs in each direction.
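To make the computation of the array in Eq.~\eqref{Eq: data matrix} concrete, the following Python sketch discretizes $SECT(K)(\nu;t)=\int_0^t\chi_\tau^\nu(K)\,d\tau-\frac{t}{T}\int_0^T\chi_\tau^\nu(K)\,d\tau$ by Riemann sums; it assumes the Euler characteristic curve values $\chi_{t_q}^{\nu_p}(K)$ have already been computed and stored in a $\Gamma\times\Delta$ array (the function name and interface are ours, for illustration only).
\begin{verbatim}
import numpy as np

def sect_from_ecc(ecc, T):
    """ecc[p, q] holds the ECC value chi at level t_q = (T / Delta) * q in
    direction nu_p; returns the Gamma x Delta array of SECT(K)(nu_p; t_q)."""
    Gamma, Delta = ecc.shape
    dt = T / Delta
    t = dt * np.arange(1, Delta + 1)               # levels t_1, ..., t_Delta
    integral = np.cumsum(ecc, axis=1) * dt         # int_0^{t_q} chi dtau
    return integral - (t / T) * integral[:, -1:]   # subtract (t/T) int_0^T chi
\end{verbatim}
Stacking the $n$ resulting $\Gamma\times\Delta$ arrays then yields the 3-dimensional array in Eq.~\eqref{Eq: data matrix}.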
\subsection{Proof-of-Concept Simulation Examples II: Random Shapes}
In this subsection, we compute the SECT for a collection of shapes $\{K_i\}_{i=1}^n$ of dimension $d=2$. These shapes are randomly generated as follows
\begin{align}\label{eq: randon shapes under null}
K_i = & \left\{x\in\mathbb{R}^2 \, \Bigg\vert\, \inf_{y\in S_i}\Vert x-y\Vert\le \frac{1}{5}\right\},\ \ \mbox{ where } \\
\notag S_i = & \left\{\left(\frac{2}{5}+a_{1,i}\times\cos t, b_{1,i}\times\sin t\right) \, \Bigg\vert\, \frac{\pi}{5}\le t\le\frac{9\pi}{5}\right\}\\
\notag & \bigcup\left\{\left(-\frac{2}{5}+a_{2,i}\times\cos t, b_{2,i}\times\sin t\right) \, \Bigg\vert\, \frac{6\pi}{5}\le t\le\frac{14\pi}{5}\right\},
\end{align}
and $\{a_{1,i}, a_{2,i}, b_{1,i}, b_{2,i}\}_{i=1}^n \overset{i.i.d.}{\sim} N(1, 0.05^2)$. One element of the shape collection $\{K_i\}_{i=1}^n$ is presented in Figure \ref{fig: SECT visualizations, random}(a). The underlying distribution on $\mathscr{S}_{R,d}^M$ generating $\{K_i\}_{i=1}^n$ is denoted by $\mathbb{P}$, and the expectation associated with $\mathbb{P}$ is denoted by $\mathbb{E}$. We estimate the expected value $\mathbb{E}\{SECT(\cdot)(\nu;t)\}$ by the sample average $\frac{1}{n}\sum_{i=1}^n SECT(K_i)(\nu;t)$ with $n=100$. We identify each direction $\nu\in\mathbb{S}^1$ through the parametrization $\nu=(\cos\vartheta, \sin\vartheta)$ with some $\vartheta\in[0,2\pi)$ as we did in Section \ref{Proof-of-Concept Simulation Examples I: Deterministic Shapes}.
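These shapes are straightforward to generate in practice; the following minimal Python sketch samples one $K_i$, representing it by a point cloud (an illustrative simplification on our part):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def sample_shape(n_pts=500, sd=0.05):
    """Sample a point cloud approximating one random shape K_i of
    Eq. (random shapes): the 1/5-neighborhood of two perturbed arcs."""
    a1, a2, b1, b2 = rng.normal(1.0, sd, size=4)
    t1 = np.linspace(np.pi / 5, 9 * np.pi / 5, n_pts)
    t2 = np.linspace(6 * np.pi / 5, 14 * np.pi / 5, n_pts)
    arc1 = np.column_stack((2 / 5 + a1 * np.cos(t1), b1 * np.sin(t1)))
    arc2 = np.column_stack((-2 / 5 + a2 * np.cos(t2), b2 * np.sin(t2)))
    S = np.vstack((arc1, arc2))
    u = rng.normal(size=S.shape)                   # random offset directions
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    return S + u * rng.uniform(0, 1 / 5, size=(len(S), 1))  # offsets <= 1/5
\end{verbatim}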
\begin{itemize}
\item The surface of the map $(\vartheta,t)\mapsto \mathbb{E}\{SECT(\cdot)(\nu;t)\}$ is presented in Figures \ref{fig: SECT visualizations, random}(b) and (c).
\item The black solid curves in Figure \ref{fig: SECT visualizations, random}(d) present the 100 paths $t\mapsto SECT(K_i)\left((1,0)^\intercal;t\right)$, the black solid curves in Figure \ref{fig: SECT visualizations, random}(e) present paths $t\mapsto SECT(K_i)\left((0,1)^\intercal;t\right)$, and the black solid curves in Figure \ref{fig: SECT visualizations, random}(f) present paths $\vartheta\mapsto SECT(K_i)\left((\cos\vartheta, \sin\vartheta)^\intercal;\frac{3}{2}\right)$, for $i\in\{1,\cdots,100\}$.
\item The red solid curves in Figures \ref{fig: SECT visualizations, random}(d), (e), and (f) present mean curves
$t\mapsto \mathbb{E}\{SECT(\cdot)\left((1,0)^\intercal;t\right)\}$, $t\mapsto \mathbb{E}\{SECT(\cdot)\left((0,1)^\intercal;t\right)\}$, and $\vartheta\mapsto \mathbb{E}\{SECT(\cdot)\left((\cos\vartheta, \sin\vartheta)^\intercal;\frac{3}{2}\right)\}$, respectively.
\end{itemize}
The smoothness of the red solid curves in Figures \ref{fig: SECT visualizations, random}(d) and (e) supports the regularity of $\{m_\nu(t)\}_{t\in[0,T]}$ in Theorem \ref{thm: mean is in H}. The finite variance of $SECT(\cdot)(\nu;t)$ for $\nu=(1,0)^\intercal, (0,1)^\intercal$ and $t=3/2$, visually evident in Figures \ref{fig: SECT visualizations, random}(d), (e), and (f), supports Eq.~\eqref{eq: finite second moments for all v and t}.
In addition, the blue dashed curves in Figures \ref{fig: SECT visualizations, random}(d), (e), and (f) present curves $t\mapsto SECT(K^{(1)})\left((1,0)^\intercal;t\right)$, $t\mapsto SECT(K^{(1)})\left((0,1)^\intercal;t\right)$, and $\vartheta\mapsto SECT(K^{(1)})\left((\cos\vartheta, \sin\vartheta)^\intercal;\frac{3}{2}\right)$, respectively, where shape $K^{(1)}$ is defined in Eq.~\eqref{eq: example shapes K1 and K2}. Since $\mathbb{E}\{a_{1,i}\}=\mathbb{E}\{a_{2,i}\}=\mathbb{E}\{b_{1,i}\}=\mathbb{E}\{b_{2,i}\}=1$, the shape $K^{(1)}$ defined in Eq.~\eqref{eq: example shapes K1 and K2} can be viewed as the ``mean shape" of the random collection $\{K_i\}_{i=1}^n$. The similarity between the red solid curves and blue dashed curves in Figures \ref{fig: SECT visualizations, random}(d), (e), and (f) supports the ``mean shape" role of $K^{(1)}$. The rigorous definition of a ``mean shape" and its relationship to the mean function $\mathbb{E}\{SECT(\cdot)(\nu;t)\}$ are left for future research. A potential way of defining mean shapes is through the following Fréchet mean form \citep{frechet1948elements}
\begin{align}\label{Frechet mean shape}
K_\oplus \overset{\operatorname{def}}{=} \arg\min_{K\in\mathscr{S}_{R,d}^M} \mathbb{E}\left[\left\{\rho(\cdot, K)\right\}^2\right] = \arg\min_{K\in\mathscr{S}_{R,d}^M} \left[ \int_{\mathscr{S}_{R,d}^M} \left\{\rho(K', K)\right\}^2 \mathbb{P}(dK') \right],
\end{align}
where $\rho$ can be either the metric on $\mathscr{S}_{R,d}^M$ defined in Eq.~\eqref{Eq: distance between shapes} or any other metrics generating $\sigma$-algebras satisfying Assumption \ref{assumption: the measurability of ECC}. The existence and uniqueness of the minimizer $K_\oplus$ in Eq.~\eqref{Frechet mean shape}, the relationship between $SECT(K_\oplus)$ and $\mathbb{E}\{SECT(\cdot)\}$, and the extension of Eq.~\eqref{Frechet mean shape} to Fréchet regression \citep{petersen2019frechet} for random shapes are left for future research. The study of the existence of $K_\oplus$ will be a counterpart of Section 4 of \cite{mileyko2011probability}.
In the scenarios where the SECT of shapes from distribution $\mathbb{P}$ are computed only in finitely many directions and at finitely many levels (see the end of Section \ref{section: distributions of H-valued GP}), the mean surface $(\vartheta, t) \mapsto \mathbb{E}\{SECT(\cdot)(\nu;t)\}$ in Figures \ref{fig: SECT visualizations, random}(b) and (c) can also be potentially estimated using manifold learning methods (e.g., \cite{yue2016parameterization}, \cite{dunson2021inferring}, and \cite{meng2021principal}).
\begin{figure}[h]
\centering
\includegraphics[scale=0.7]{random_examples.pdf}
\caption{Visualizations of $\mathbb{E}\{SECT(\cdot)(\nu;t)\}$, $\{SECT(K_i)\}_{i=1}^n$ with $n=100$, and $SECT(K^{(1)})(\nu;t)$. Panels (b) and (c) present the same surface, but from different angles.}
\label{fig: SECT visualizations, random}
\end{figure}
\section{Testing Hypotheses on Shapes}\label{section: hypothesis testing}
In this section, we apply the probabilistic formulation proposed in Section \ref{section: distributions of Gaussian bridge} to testing statistical hypotheses on shapes. Suppose $\mathbb{P}^{(1)}$ and $\mathbb{P}^{(2)}$ are two independent distributions on $\mathscr{S}_{R,d}^M$ satisfying Assumption \ref{assumption: existence of second moments}, and $m_\nu^{(j)}(t) = \int_{\mathscr{S}_{R,d}^M} SECT(K)(\nu;t)\mathbb{P}^{(j)}(dK)$, for $j\in\{1,2\}$, are the mean functions corresponding to the two distributions. We are interested in testing the following hypotheses
\begin{align}\label{eq: the main hypotheses}
\begin{aligned}
& H_0: m_\nu^{(1)}(t)=m_\nu^{(2)}(t)\mbox{ for all }(\nu,t)\in\mathbb{S}^{d-1}\times[0,T]\\
& vs. \ \ H_1: m_\nu^{(1)}(t)\ne m_\nu^{(2)}(t)\mbox{ for some }(\nu,t).
\end{aligned}
\end{align}
For example, if we are interested in the distributions of the heel bones of two groups of primates, testing the hypotheses above helps us distinguish the two groups of primates from a morphological perspective.
The null hypothesis $H_0$ in Eq.~\eqref{eq: the main hypotheses} is equivalent to $\sup_{\nu\in\mathbb{S}^{d-1}}\{\Vert m_{\nu}^{(1)}-m_{\nu}^{(2)} \Vert_{\mathcal{B}}\}=0$. Theorem \ref{thm: mean is in H} (iii), together with Eq.~\eqref{eq: Sobolev embedding from Morrey}, indicates that the following maximizer exists
\begin{align}\label{eq: def of distinguishing direction}
\nu^* \overset{\operatorname{def}}{=} \arg\max_{\nu\in\mathbb{S}^{d-1}} \left\{\Vert m_{\nu}^{(1)}-m_{\nu}^{(2)} \Vert_{\mathcal{B}} \right\}.
\end{align}
Therefore, testing $H_0$ is equivalent to assessing whether $\Vert m_{\nu^*}^{(1)}-m_{\nu^*}^{(2)} \Vert_{\mathcal{B}}=0$. The direction $\nu^*$ defined in Eq.~\eqref{eq: def of distinguishing direction} is called a \textit{distinguishing direction}, and we focus the SECT analysis on this direction. Hence, we investigate the distributions of the real-valued stochastic process $SECT(\cdot)(\nu^*)$ corresponding to $\mathbb{P}^{(1)}$ and $\mathbb{P}^{(2)}$, respectively.
\subsection{Karhunen–Loève Expansion}
Let $C^{(j)}(\nu_1,\nu_2)$ be the operator-valued covariance of $SECT(\cdot)$ corresponding to distribution $\mathbb{P}^{(j)}$ (see Eq.~\eqref{eq: covariance of random field}) and $\Xi_{\nu^*}^{(j)}(s,t)$ the real-valued covariance of process $SECT(\cdot)(\nu^*)$ corresponding to $\mathbb{P}^{(j)}$, for $j\in\{1,2\}$. Hereafter, we assume the following.
\begin{assumption}\label{assumption: equal covariance aasumption}
$C^{(1)}(\nu_1, \nu_2)=C^{(2)}(\nu_1, \nu_2)$, for all $\nu_1, \nu_2\in\mathbb{S}^{d-1}$.
\end{assumption}
\noindent Eq.~\eqref{eq: relationship between operator-valued cov and real-valued cov} and Assumption \ref{assumption: equal covariance aasumption} imply $\Xi_{\nu^*}^{(1)}=\Xi_{\nu^*}^{(2)} \overset{\operatorname{def}}{=} \Xi_{\nu^*}$. The second part of Theorem \ref{thm: lemma for KL expansions} implies $\Xi_{\nu^*}(\cdot, \cdot)\in L^2([0,T]^2)$. Hence, we may define the integral operator
\begin{align}\label{eq: def of the L2 compact operator}
L^2([0,T])\rightarrow L^2([0,T]),\ \ \ \ f\mapsto \int_0^T f(s)\Xi_{\nu^*}(s,\cdot)ds.
\end{align}
Theorems VI.22 (e) and VI.23 of \cite{reed2012methods} indicate that the integral operator defined in Eq.~\eqref{eq: def of the L2 compact operator} is compact. It is straightforward to verify that this integral operator is also self-adjoint. Furthermore, the Hilbert-Schmidt theorem (Theorem VI.16 of \cite{reed2012methods}) implies that this operator has countably many eigenfunctions $\{\phi_l\}_{l=1}^\infty$ and eigenvalues $\{\lambda_l\}_{l=1}^\infty$ with $\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_l\ge\cdots\ge0$, and $\{\phi_l\}_{l=1}^\infty$ form an orthonormal basis of $L^2([0,T])$. Then, Theorems \ref{thm: mean is in H} and \ref{thm: lemma for KL expansions} imply the following expansions of the SECT in the distinguishing direction $\nu^*$.
\begin{theorem}\label{thm: KL expansions of SECT}
(Karhunen–Loève expansion) For each $j\in\{1,2\}$, suppose the random shapes $\{K_i^{(j)}\}_{i=1}^n\overset{i.i.d.}{\sim}\mathbb{P}^{(j)}$. We have the following expansions for each sample $K_i^{(j)}$
\begin{align}\label{eq: KL expansions of SECT}
\begin{aligned}
& SECT(K_i^{(j)})(\nu^*;t)= m^{(j)}_{\nu^*}(t) +\sum_{l=1}^\infty \sqrt{\lambda_l} \cdot Z_{l,i}^{(j)} \cdot \phi_l(t) \ \mbox{ for all }t\in[0,T], \\
& \mbox{ where } Z_{l,i}^{(j)} = \frac{1}{\sqrt{\lambda_l}}\int_0^T \left\{SECT(K_i^{(j)})(\nu^*;t)-m_{\nu^*}^{(j)}(t) \right\} \phi_l(t)dt,
\end{aligned}
\end{align}
for all $l=1,2,\ldots$, and the infinite series converges in the $L^2(\mathscr{S}_{R,d}^M,\mathbb{P}^{(j)})$ sense. For each fixed $j\in\{1,2\}$ and $i\in\{1,\cdots,n\}$, random variables $\{Z_{l,i}^{(j)}\}_{l=1}^\infty$ are defined on the probability space $(\mathscr{S}_{R,d}^M, \mathscr{F}, \mathbb{P}^{(j)})$, mutually uncorrelated, and have mean 0 and variance 1. Furthermore, since $\mathbb{P}^{(1)}$ and $\mathbb{P}^{(2)}$ are independent, the two collections $\{(Z_{l,1}^{(1)}, \cdots, Z_{l,n}^{(1)})\}_{l=1}^\infty$ and $\{(Z_{l,1}^{(2)},\cdots, Z_{l,n}^{(2)})\}_{l=1}^\infty$ are independent.
\end{theorem}
\noindent We omit the proof of Theorem \ref{thm: KL expansions of SECT} as it follows directly from Corollary 5.5 of \cite{alexanderian2015brief}. The uncorrelatedness of $\{Z_{l,i}^{(j)}\}_{l=1}^\infty$ across $l$ in Theorem \ref{thm: KL expansions of SECT} plays an important role in our hypothesis testing framework.
\subsection{The Theoretical Foundation for Hypothesis Testing}\label{The Theoretical foundation for Hypothesis Testing}
Suppose we have two collections of random samples $\{K_i^{(j)}\}_{i=1}^n\overset{i.i.d.}{\sim}\mathbb{P}^{(j)}$, for $j\in\{1,2\}$. In this subsection, we provide the theoretical foundation for using the $K_i^{(j)}$ to test the hypotheses in Eq.~\eqref{eq: the main hypotheses}.
The Karhunen–Loève expansions in Eq.~\eqref{eq: KL expansions of SECT} provide the following
\begin{align*}
X_i(t) & \overset{\operatorname{def}}{=} SECT(K_i^{(1)})(\nu^*;t) - SECT(K_i^{(2)})(\nu^*;t)\\
& = \left\{ m_{\nu^*}^{(1)}(t) - m_{\nu^*}^{(2)}(t) \right\} + \sum_{l=1}^\infty \sqrt{2\lambda_l} \cdot \left( \frac{Z_{l,i}^{(1)}-Z_{l,i}^{(2)}}{\sqrt{2}} \right) \cdot \phi_l(t).
\end{align*}
We further define the random variables $\{(\xi_{l,1}, \cdots, \xi_{l,n})\}_{l=1}^\infty$ as follows
\begin{align}\label{eq: def of the xi statistic}
\begin{aligned}
& \xi_{l,i} \overset{\operatorname{def}}{=} \frac{1}{\sqrt{2\lambda_l}}\cdot \int_0^T X_i(t) \phi_l(t) dt \overset{(1)}{=} \theta_l + \left( \frac{Z_{l,i}^{(1)}-Z_{l,i}^{(2)}}{\sqrt{2}} \right), \\ & \mbox{ where }\ \theta_l = \frac{1}{\sqrt{2 \lambda_l}} \int_0^T \left\{ m_{\nu^*}^{(1)}(t) - m_{\nu^*}^{(2)}(t) \right\} \phi_l(t) dt
\end{aligned}
\end{align}
and $\overset{(1)}{=}$ follows from the fact that $\{\phi_l\}_{l=1}^\infty$ is an orthonormal basis of $L^2([0,T])$. Then, each $\xi_{l,i}$ has mean $\theta_l$ and variance 1. The following theorem restates the null $H_0$ in Eq.~\eqref{eq: the main hypotheses} in terms of the means $\{\theta_l\}_{l=1}^\infty$.
\begin{theorem}
The null $H_0$ in Eq.~\eqref{eq: the main hypotheses} is equivalent to $\theta_l=0$ for all $l=1, 2, 3, \cdots$.
\end{theorem}
\begin{proof}
We have shown that the null $H_0$ is equivalent to $m_{\nu^*}^{(1)}(t)=m_{\nu^*}^{(2)}(t)$ for all $t\in[0,T]$, where $\nu^*$ is defined in Eq.~\eqref{eq: def of distinguishing direction}. The null $H_0$ directly implies that $\theta_l=0$ for all $l$. On the other hand, if $\theta_l=0$ for all $l$, the fact that $\{\phi_l\}_l$ is an orthonormal basis of $L^2([0,T])$ indicates that $m_{\nu^*}^{(1)}=m_{\nu^*}^{(2)}$ almost everywhere with respect to the Lebesgue measure $dt$. Theorem \ref{thm: mean is in H} (i) and the embedding $\mathcal{H}\subset\mathcal{B}$ in Eq.~\eqref{eq: H, Holder, B embeddings} imply that $m_{\nu^*}^{(1)}$ and $m_{\nu^*}^{(2)}$ are continuous functions. Then, $m_{\nu^*}^{(1)}(t)=m_{\nu^*}^{(2)}(t)$ for all $t\in[0,T]$, which is equivalent to the null $H_0$ in Eq.~\eqref{eq: the main hypotheses}.
\end{proof}
When the eigenvalues $\lambda_l$ in the denominators of Eq.~\eqref{eq: def of the xi statistic} are close to zero for large $l$, the estimates of the $\theta_l$ corresponding to these small eigenvalues can be unstable. Specifically, even if $ m_{\nu^*}^{(1)}(t) \approx m_{\nu^*}^{(2)}(t)$ for all $t$, an extremely small $\lambda_l$ can move the corresponding estimate of $\theta_l$ far away from zero. Motivated by the cumulative variance $\int_0^T \mathbb{V}\{X_i(t)\}dt=2(\sum_{l=1}^\infty \lambda_l)$ and the standard approach in principal component analysis, we focus on $\{\theta_l\}_{l=1}^L$ with
\begin{align}\label{eq: def of L}
L \overset{\operatorname{def}}{=} \min \left\{ l\in\mathbb{N}\, \bigg\vert\, \frac{\sum_{l'=1}^l \lambda_{l'}}{\sum_{l^{''}=1}^\infty \lambda_{l^{''}}} >0.95\right\}.
\end{align}
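In code, Eq.~\eqref{eq: def of L} amounts to a cumulative-sum search; a minimal Python sketch, assuming the (truncated) eigenvalue sequence is stored in a NumPy array:
\begin{verbatim}
import numpy as np

def choose_L(lams, frac=0.95):
    # smallest l whose cumulative eigenvalue fraction exceeds frac
    ratios = np.cumsum(lams) / np.sum(lams)
    return int(np.searchsorted(ratios, frac, side='right')) + 1
\end{verbatim}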
Hence, to test the null and alternative hypotheses in Eq.~\eqref{eq: the main hypotheses}, we test the following
\begin{align}\label{eq: approximate hypotheses}
\begin{aligned}
& \widehat{H}_0: \theta_1=\theta_2=\cdots=\theta_L=0,\ \ vs. \\
& \widehat{H}_1: \mbox{ there exists } l'\in\{1,\cdots,L\} \mbox{ such that }\theta_{l'}\ne0.
\end{aligned}
\end{align}
To test the hypotheses in Eq.~\eqref{eq: approximate hypotheses}, we need the following independence assumption.
\begin{assumption}\label{assumption: independence one}
$\{(Z_{l,i}^{(1)}-Z_{l,i}^{(2)})/\sqrt{2}\}_{i=1}^n$ is independent of $\{(Z_{l',i}^{(1)}-Z_{l',i}^{(2)})/\sqrt{2}\}_{i=1}^n$ for any $l,l'\in\{1,\cdots,L\}$ with $l\ne l'$.
\end{assumption}
\noindent Assumption \ref{assumption: independence one} is satisfied under some conditions. The following theorem gives an example.
\begin{theorem}\label{thm: independence implies Gaussianity}
Suppose that, for any partition of $[0,T]$, say $0=t_0<t_1<\cdots<t_l=T$, the stochastic processes $\{\chi^{\nu^*}_t:t_0\le t \le t_1\}, \cdots, \{\chi^{\nu^*}_t:t_{l-1}\le t \le t_l\}$ are independent according to $\mathbb{P}^{(j)}$, for $j\in\{1,2\}$. Then Assumption \ref{assumption: independence one} is satisfied.
\end{theorem}
\begin{proof}
The assumed independence across the pieces of every partition implies that the process $PECT(\cdot)(\nu^*)=\{\int_0^t \chi_\tau^{\nu^*} d\tau\}_{t\in[0,T]}$ has independent increments. Together with the continuity of its paths and $\int_0^0 \chi_\tau^{\nu^*} d\tau=0$, this implies that $PECT(\cdot)(\nu^*)$ is a Gaussian process according to $\mathbb{P}^{(j)}$ (see Theorem 14.4 of \cite{kallenberg2021foundations}). Eq.~\eqref{eq: relationship between PECT and SECT} implies that $SECT(\cdot)(\nu^*)$ is a Gaussian process according to $\mathbb{P}^{(j)}$ as well. Then, for each fixed $i\in\{1,\cdots,n\}$ and $j\in\{1,2\}$, the $\{Z_{l,i}^{(j)}\}_{l=1}^\infty$ in Eq.~\eqref{eq: KL expansions of SECT} are i.i.d. standard normal random variables (see the discussion after Corollary 5.5 in \cite{alexanderian2015brief}). Therefore, Assumption \ref{assumption: independence one} is satisfied.
\end{proof}
Here, we provide simulations supporting Assumption \ref{assumption: independence one}. Suppose that $\mathbb{P}^{(1)}$ and $\mathbb{P}^{(2)}$ are equal to the distribution generating the random shapes defined in Eq.~\eqref{eq: randon shapes under null}. We generate $\{K_i^{(j)}\}_{i=1}^n\overset{i.i.d.}{\sim} \mathbb{P}^{(j)}$ with $n=100$ and $300$, for $j\in\{1,2\}$. Using the numerical method proposed in Section \ref{The Numerical foundation for Hypothesis Testing}, we approximately compute $\{\xi_{l,i}\}_{i=1}^n$, for $l\in\{1,2\}$ (the $L$ defined in Eq.~\eqref{eq: def of L} is 2 or 3 throughout all our simulations). The mean functions $m_{\nu^*}^{(j)}$, for $j\in\{1,2\}$, and the histograms of $\{\xi_{l,i}\}_{i=1}^n$ are presented in Figure \ref{fig: size_indep_type1error}. The overlapping curves of mean functions (blue solid and orange dashed curves in Figure \ref{fig: size_indep_type1error}) indicate that the null $H_0$ in Eq.~\eqref{eq: the main hypotheses} is true; hence, $\xi_{l,i}=(Z_{l,i}^{(1)}-Z_{l,i}^{(2)})/\sqrt{2}$ for $l\in\{1,2\}$. More importantly, the approximate normality of $\{\xi_{l,i}\}_{i=1}^n$ presented by the histograms in Figure \ref{fig: size_indep_type1error}, together with the uncorrelatedness in Theorem \ref{thm: KL expansions of SECT}, indicates that $\{\xi_{1,i}\}_{i=1}^n=\{(Z_{1,i}^{(1)}-Z_{1,i}^{(2)})/\sqrt{2}\}_{i=1}^n$ and $\{\xi_{2,i}\}_{i=1}^n=\{(Z_{2,i}^{(1)}-Z_{2,i}^{(2)})/\sqrt{2}\}_{i=1}^n$ are independent rather than merely uncorrelated. Furthermore, the simulation with $n=300$ presents better normality and weaker correlation than the one with $n=100$.
\begin{figure}[h]
\centering
\includegraphics[scale=0.76]{size_indep_type1error.pdf}
\caption{The first row presents the simulation results with $n=100$ and the second row presents the simulation results with $n=300$. In the left panel of each row, the black and red solid curves present the sample paths of $\{SECT(K_i^{(1)})(\nu^*)\}$ and $\{SECT(K_i^{(2)})(\nu^*)\}$, respectively, in the distinguishing direction $\nu^*$ (the red curves are thinner than the black ones and drawn on top of the black curves); the blue solid and red dashed curves present mean functions $m_{\nu^*}^{(1)}$ and $m_{\nu^*}^{(2)}$, respectively. The histograms of $\{\xi_{l,i}\}_{i=1}^n$, for $l\in\{1,2\}$, in the middle and right panels are presented with the $N(0,1)$-density curve (red) overlaid on top of them. When $n=100$, the sample correlation between $\{\xi_{1,i}\}_{i=1}^n$ and $\{\xi_{2,i}\}_{i=1}^n$ across $i\in\{1,\cdots,n\}$ is 0.0907; when $n=300$, this correlation is 0.00903.}
\label{fig: size_indep_type1error}
\end{figure}
When $L$ is very large, we may implement the following adaptive Neyman test statistic proposed in \cite{fan1996test} to test the hypotheses in Eq.~\eqref{eq: approximate hypotheses}
\begin{align}\label{eq: adaptive Neyman statistic}
\begin{aligned}
T_{L,i}^{(AN)} = &\sqrt{2\log\log L}\cdot\max_{1\le l\le L}\left\{\frac{1}{\sqrt{2l}}\sum_{l'=1}^l(\xi_{l',i}^2-1)\right\}\\
& -\left\{2\log\log L+\frac{1}{2}\log\log\log L-\frac{1}{2}\log(4\pi)\right\}.
\end{aligned}
\end{align}
Theorem 1 of \cite{darling1956limit} implies that $\lim_{L\rightarrow\infty}\mathbb{P}(T^{(AN)}_{L,i}<t)=\exp\{-\exp(-t)\}$ under the null hypothesis and Assumption \ref{assumption: independence one} (also see the discussions right after Theorem 2.1 in \cite{fan1996test}). Then $\widehat{H}_0$ in Eq.~\eqref{eq: approximate hypotheses} is rejected if $\{T_{L,i}^{(AN)}\}_{i=1}^n$ deviates from the distribution with the cumulative distribution function $\exp\{-\exp(-t)\}$. However, the number $L$ defined in Eq.~\eqref{eq: def of L} is usually small. For example, if $\mathbb{P}^{(1)}$ and $\mathbb{P}^{(2)}$ are equal to the distribution generating random shapes defined in Eq.~\eqref{eq: randon shapes under null}, then $L$ equals $2$ or $3$ across thousands of simulations. Hence, we implement a simpler testing approach for small-$L$ scenarios, on which we focus hereafter.
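For reference, a minimal sketch of the statistic in Eq.~\eqref{eq: adaptive Neyman statistic} (our own illustrative implementation, not code from \cite{fan1996test}):
\begin{verbatim}
import numpy as np

def adaptive_neyman(xi):
    """xi = (xi_{1,i}, ..., xi_{L,i}) for one fixed i; requires L >= 3 so
    that log(log(log(L))) is defined. Returns T^{(AN)}_{L,i}."""
    L = len(xi)
    partial = np.cumsum(xi ** 2 - 1) / np.sqrt(2 * np.arange(1, L + 1))
    llL = np.log(np.log(L))
    return (np.sqrt(2 * llL) * partial.max()
            - (2 * llL + 0.5 * np.log(llL) - 0.5 * np.log(4 * np.pi)))
\end{verbatim}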
For each $l\in\{1,\cdots,L\}$, the central limit theorem indicates that $\frac{1}{\sqrt{n}}\sum_{i=1}^n (\xi_{l,i}-\theta_l)$ asymptotically follows a $N(0,1)$ distribution as $n\rightarrow\infty$; in particular, $\frac{1}{\sqrt{n}}\sum_{i=1}^n \xi_{l,i}$ is asymptotically $N(0,1)$ under the null $\widehat{H}_0$. Under Assumption \ref{assumption: independence one}, $\{\frac{1}{\sqrt{n}}\sum_{i=1}^n \xi_{l,i}\}_{l=1}^L$ are independent across $l$. Even if Assumption \ref{assumption: independence one} is violated, the uncorrelatedness property from Theorem \ref{thm: KL expansions of SECT} and the asymptotic normality of $\frac{1}{\sqrt{n}}\sum_{i=1}^n \xi_{l,i}$ provide the asymptotic independence of $\{\frac{1}{\sqrt{n}}\sum_{i=1}^n \xi_{l,i}\}_{l=1}^L$ across $l$. In either case, $\sum_{l=1}^L (\frac{1}{\sqrt{n}}\sum_{i=1}^n \xi_{l,i})^2$ is asymptotically $\chi_L^2$ distributed under the null $\widehat{H}_0$ in Eq.~\eqref{eq: approximate hypotheses} as $n\rightarrow\infty$. Therefore, at the asymptotic confidence level $1-\alpha$ for $\alpha\in(0,1)$, we reject $\widehat{H}_0$ if
\begin{align}\label{eq: rejection region}
\sum_{l=1}^L \left(\frac{1}{\sqrt{n}}\sum_{i=1}^n \xi_{l,i}\right)^2 > \chi^2_{L, 1-\alpha} = \mbox{ the $1-\alpha$ lower quantile of the $\chi^2_L$ distribution}.
\end{align}
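In code, this rejection rule takes only a few lines; a sketch assuming the scores $\xi_{l,i}$ are stored in an array of shape $(L, n)$:
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def chi2_reject(xi, alpha=0.05):
    # xi[l - 1, i - 1] = xi_{l,i}; implements Eq. (rejection region)
    L, n = xi.shape
    stat = np.sum(xi.sum(axis=1) ** 2) / n
    return stat > chi2.ppf(1 - alpha, df=L)
\end{verbatim}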
In Section \ref{section: Simulation experiments}, we will provide simulation studies for this testing procedure.
\subsection{The Numerical Foundation for Hypothesis Testing}\label{The Numerical foundation for Hypothesis Testing}
In Section \ref{The Theoretical foundation for Hypothesis Testing}, we proposed an approach to testing the hypotheses in Eq.~\eqref{eq: the main hypotheses} based on $\{\xi_{l,i}\}_{l}$ defined in Eq.~\eqref{eq: def of the xi statistic}. In applications, neither the mean function $m_{\nu}^{(j)}(t)$ nor the covariance function $\Xi_{\nu}(t',t)$ is known. Hence, the corresponding Karhunen–Loève expansions in Eq.~\eqref{eq: KL expansions of SECT} are not available, and the proposed hypothesis testing approach is not directly applicable. In this subsection, motivated by Chapter 4.3.2 of \cite{williams2006gaussian}, we propose a method for estimating the $\{\xi_{l,i}\}_{l}$ in Eq.~\eqref{eq: def of the xi statistic}, which enables us to conduct statistical inference.
For random shapes $\{K_i^{(j)}\}_{i=1}^n\overset{i.i.d.}{\sim}\mathbb{P}^{(j)}$ from the two distributions, $j\in\{1,2\}$, we compute the SECT of each shape in finitely many directions and at finitely many levels, as discussed in Section \ref{section: distributions of H-valued GP}, and get $\{SECT(K_i^{(j)})(\nu_p;t_q)\,|\, p=1,\cdots, \Gamma \mbox{ and }q=1,\cdots,\Delta\}_{i=1}^n$ for $j\in\{1,2\}$, where $t_q = \frac{T}{\Delta}q$. We emphasize that the SECT of all shapes $K_i^{(j)}$ in the two collections are computed in the same collection of directions $\{\nu_p\}_{p=1}^\Gamma$ and at the same collection of levels $\{t_q\}_{q=1}^\Delta$. We estimate the mean $m_{\nu_p}^{(j)}(t_q)$ at level $t_q$ by $\widehat{m}_{\nu_p}^{(j)}(t_q) = $ the sample mean of $\{SECT(K_i^{(j)})(\nu_p;t_q)\}_{i=1}^n$ across $i\in\{1,\cdots,n\}$. Then, we estimate the distinguishing direction $\nu^*$ by
\begin{align}\label{eq: estimated distinguishing direction}
\widehat{\nu}^* \overset{\operatorname{def}}{=} \arg\max_{\nu_p} \left[ \max_{t_q} \left\{\left\vert \widehat{m}_{\nu_p}^{(1)}(t_q) - \widehat{m}_{\nu_p}^{(2)}(t_q) \right\vert \right\} \right].
\end{align}
Based on Assumption \ref{assumption: equal covariance aasumption}, we estimate the covariance matrix $\left(\Xi_{\nu^*}(t_{q'},t_{q})\right)_{q,q'=1,\cdots,\Delta}$ by the sample covariance matrix $\pmb{C} = (\widehat{\Xi}_{\nu^*}(t_{q'},t_{q}))_{q,q'=1,\cdots,\Delta}$ derived from the following centered sample vectors across $i\in\{1,\cdots,n\}$ and $j\in\{1,2\}$
\begin{align}\label{eq: centered sample vectors}
\left\{ \left( SECT(K_i^{(j)})(\widehat{\nu}^*;t_1) - \widehat{m}_{\widehat{\nu}^*}^{(j)}(t_1), \cdots, SECT(K_i^{(j)})(\widehat{\nu}^*;t_\Delta)- \widehat{m}_{\widehat{\nu}^*}^{(j)}(t_\Delta) \right)^\intercal\Big\vert j=1,2 \right\}_{i=1}^n.
\end{align}
Since the eigenfunctions $\{\phi_l\}_{l=1}^\infty$ and eigenvalues $\{\lambda_l\}_{l=1}^\infty$ satisfy $\lambda_l\phi_l=\int_0^T \phi_l(s)\Xi_{\nu^*}(s,\cdot)ds$, we have the following approximation
\begin{align*}
\lambda_l\phi_l(t_q)=\int_0^T \phi_l(s)\Xi_{\nu^*}(s,t_q)ds\approx\frac{T}{\Delta}\sum_{q'=1}^\Delta \phi_l(t_{q'})\Xi_{\nu^*}(t_{q'}, t_q) \approx \frac{T}{\Delta}\sum_{q'=1}^\Delta \phi_l(t_{q'})\widehat{\Xi}_{\nu^*}(t_{q'}, t_q),
\end{align*}
which is represented in the following matrix form
\begin{align}\label{eq: matrix form, approximate integrals by riemann sums}
&\lambda_l\begin{pmatrix}
\phi_l(t_1)\\
\vdots \\
\phi_l(t_\Delta)
\end{pmatrix}\approx\frac{T}{\Delta}
\begin{pmatrix}
\widehat{\Xi}_{\nu^*}(t_{1}, t_1) & \ldots & \widehat{\Xi}_{\nu^*}(t_\Delta, t_1) \\
\vdots & \ddots & \vdots \\
\widehat{\Xi}_{\nu^*}(t_1, t_\Delta) & \ldots & \widehat{\Xi}_{\nu^*}(t_\Delta, t_\Delta)
\end{pmatrix}
\begin{pmatrix}
\phi_l(t_1)\\
\vdots \\
\phi_l(t_\Delta)
\end{pmatrix}.
\end{align}
We denote the eigenvectors and eigenvalues of $\pmb{C}$ as $\{\pmb{v}_l=(v_{l,1}, \cdots, v_{l,\Delta})^\intercal\}_{l=1}^\Delta$ and $\{\Lambda_l\}_{l=1}^\Delta$, respectively. The following equation motivates the estimate $\phi_l(t_q)\approx \widehat{\phi}_l(t_q) \overset{\operatorname{def}}{=} \sqrt{\frac{\Delta}{T}} \cdot v_{l,q}$, for all $l\in\{1,\cdots,\Delta\}$
\begin{align*}
\sum_{q=1}^\Delta v_{l,q}^2 = \Vert \pmb{v}_l \Vert^2 = 1 = \int_0^T\vert \phi_l(t)\vert^2 dt \approx \frac{T}{\Delta}\sum_{q=1}^\Delta \left(\phi_l(t_q)\right)^2 = \sum_{q=1}^\Delta \left(\sqrt{\frac{T}{\Delta}}\cdot\phi_l(t_q)\right)^2.
\end{align*}
The following equation motivates the estimate $\lambda_l\approx\widehat{\lambda}_l \overset{\operatorname{def}}{=} \frac{T}{\Delta}\Lambda_l$, for all $l\in\{1,\cdots,\Delta\}$
\begin{align*}
\lambda_l \left(\widehat{\phi}_l(t_1), \cdots, \widehat{\phi}_l(t_\Delta)\right)^\intercal &\approx \frac{T}{\Delta}\pmb{C}\left(\widehat{\phi}_l(t_1), \cdots, \widehat{\phi}_l(t_\Delta)\right)^\intercal \\
& = \sqrt{\frac{T}{\Delta}} \pmb{C} \pmb{v}_l = \sqrt{\frac{T}{\Delta}} \Lambda_l \pmb{v}_l = \left(\frac{T}{\Delta}\Lambda_l\right) \left(\widehat{\phi}_l(t_1), \cdots, \widehat{\phi}_l(t_\Delta)\right)^\intercal.
\end{align*}
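In code, these two estimates amount to rescaling the eigendecomposition of $\pmb{C}$; a sketch assuming \texttt{C}, \texttt{T}, and \texttt{Delta} hold the sample covariance matrix, the upper level $T$, and the number of levels $\Delta$:
\begin{verbatim}
import numpy as np

Lam, V = np.linalg.eigh(C)          # eigenvalues in ascending order
Lam, V = Lam[::-1], V[:, ::-1]      # reorder so Lambda_1 >= Lambda_2 >= ...
lam_hat = (T / Delta) * Lam         # hat-lambda_l = (T / Delta) Lambda_l
phi_hat = np.sqrt(Delta / T) * V    # hat-phi_l(t_q) = sqrt(Delta / T) v_{l,q}
\end{verbatim}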
Additionally, we estimate the $L$ defined in Eq.~\eqref{eq: def of L} by the following
\begin{align}\label{eq: estimated L}
L\approx \widehat{L} \overset{\operatorname{def}}{=} \min \left\{ l=1,\cdots, \Delta \,\Bigg\vert\, \frac{\sum_{l'=1}^l \vert \widehat{\lambda}_{l'}\vert}{\sum_{l^{''}=1}^\Delta \vert \widehat{\lambda}_{l^{''}}\vert} >0.95\right\};
\end{align}
we take the absolute values of the estimated eigenvalues in computing $\widehat{L}$ because the estimated eigenvalues may be numerically negative in applications.
We estimate the random variables $\xi_{l,i}$ defined in Eq.~\eqref{eq: def of the xi statistic} by the following
\begin{align}\label{eq: def of xi_hat}
\xi_{l,i} \approx \widehat{\xi}_{l,i} = \frac{1}{\sqrt{2\widehat{\lambda}_l}} \cdot \frac{T}{\Delta} \sum_{q=1}^\Delta \left\{ SECT(K_i^{(1)})(\widehat{\nu}^*;t_q) - SECT(K_i^{(2)})(\widehat{\nu}^*;t_q) \right\} \widehat{\phi}_l(t_q),
\end{align}
for $l=1,\ldots,\widehat{L}$ and $i=1,\ldots,n$. Then, when $\widehat{L}$ is large (e.g., several hundred; see Section 3 of \cite{fan1996test}), we implement the adaptive Neyman test by replacing the $\xi_{l,i}$ and $L$ in Eq.~\eqref{eq: adaptive Neyman statistic} with $\widehat{\xi}_{l,i}$ and $\widehat{L}$, respectively. When $\widehat{L}$ is moderate, which is true in all our simulations, we can implement the $\chi^2$-test in Eq.~\eqref{eq: rejection region} as follows
\begin{align}\label{eq: numerical chisq rejection region}
\sum_{l=1}^{ \widehat{L}}\left(\frac{1}{\sqrt{n}}\sum_{i=1}^n \widehat{\xi}_{l,i}\right)^2 > \chi^2_{\widehat{L}, 1-\alpha} = \mbox{ the $1-\alpha$ lower quantile of the $\chi^2_{\widehat{L}}$ distribution}.
\end{align}
A complete outline of this $\chi^2$-hypothesis testing procedure is given in Algorithm \ref{algorithm: testing hypotheses on mean functions}.
\begin{algorithm}[h]
\caption{: ($\chi^2$-based) Testing the hypotheses in Eq.~\eqref{eq: the main hypotheses}}\label{algorithm: testing hypotheses on mean functions}
\begin{algorithmic}[1]
\INPUT
\noindent (i) SECT of two collections of shapes $\{SECT(K_i^{(j)})(\nu_p;t_q):p=1,\cdots,\Gamma \mbox{ and } q=1,\cdots,\Delta\}_{i=1}^n$ for $j\in\{1,2\}$;
(ii) desired confidence level $1-\alpha$ with $\alpha\in(0,1)$.
\OUTPUT \texttt{Accept} or \texttt{Reject} the null hypothesis $H_0$ in Eq.~\eqref{eq: the main hypotheses}.
\STATE For $j\in\{1,2\}$, compute $\widehat{m}_{\nu_p}^{(j)}(t_q) \overset{\operatorname{def}}{=}$ sample mean of $\{SECT(K_i^{(j)})(\nu_p;t_q)\}_{i=1}^n$ across $i\in\{1,\cdots,n\}$.
\STATE Compute the estimated distinguishing direction $\widehat{\nu}^*$ using Eq.~\eqref{eq: estimated distinguishing direction}.
\STATE Compute $\pmb{C}=(\widehat{\Xi}_{\nu^*}(t_{q'},t_{q}))_{q,q'=1,\cdots,\Delta} \overset{\operatorname{def}}{=} $ the sample covariance matrix derived from the centered sample vectors in Eq.~\eqref{eq: centered sample vectors} across $i\in\{1,\cdots,n\}$ and $j\in\{1,2\}$.
\STATE Compute the eigenvectors $\{\pmb{v}_l\}_{l=1}^\Delta$ and eigenvalues $\{\Lambda_l\}_{l=1}^\Delta$ of the sample covariance matrix $\pmb{C}$.
\STATE Compute $\widehat{\phi}_l(t_q) \overset{\operatorname{def}}{=} \sqrt{\frac{\Delta}{T}} v_{l,q}$ and $\widehat{\lambda}_l \overset{\operatorname{def}}{=} \frac{T}{\Delta}\Lambda_l$ for all $l=1,\cdots,\Delta$.
\STATE Compute $\widehat{L}$ using Eq.~\eqref{eq: estimated L}.
\STATE Compute $\{\widehat{\xi}_{l,i}:l=1,\cdots,\widehat{L}\}_{i=1}^n$ using Eq.~\eqref{eq: def of xi_hat}, test the null $H_0$ using Eq.~\eqref{eq: numerical chisq rejection region}, and report the output.
\end{algorithmic}
\end{algorithm}
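To complement the pseudocode, the following Python sketch implements the steps of Algorithm \ref{algorithm: testing hypotheses on mean functions} end-to-end (the function name and array layout are our illustrative choices, and numerically negative eigenvalues are handled only crudely via absolute values):
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def chi2_shape_test(sect1, sect2, T, alpha=0.05):
    """sect_j[i, p, q] = SECT(K_i^{(j)})(nu_p; t_q) on common grids of Gamma
    directions and Delta levels; returns (reject, statistic, L_hat)."""
    n, Gamma, Delta = sect1.shape
    m1, m2 = sect1.mean(axis=0), sect2.mean(axis=0)            # Step 1
    p_star = int(np.argmax(np.max(np.abs(m1 - m2), axis=1)))   # Step 2
    x1, x2 = sect1[:, p_star, :], sect2[:, p_star, :]
    centered = np.vstack((x1 - x1.mean(0), x2 - x2.mean(0)))   # Step 3
    C = centered.T @ centered / (2 * n - 1)
    Lam, V = np.linalg.eigh(C)                                 # Step 4
    Lam, V = Lam[::-1], V[:, ::-1]
    lam_hat = (T / Delta) * Lam                                # Step 5
    phi_hat = np.sqrt(Delta / T) * V
    ratios = np.cumsum(np.abs(lam_hat)) / np.sum(np.abs(lam_hat))
    L_hat = int(np.searchsorted(ratios, 0.95, side='right')) + 1   # Step 6
    diff = x1 - x2                                             # Step 7
    xi_hat = ((T / Delta) * diff @ phi_hat[:, :L_hat]
              / np.sqrt(2 * np.abs(lam_hat[:L_hat])))
    stat = np.sum(xi_hat.sum(axis=0) ** 2) / n
    return stat > chi2.ppf(1 - alpha, df=L_hat), stat, L_hat
\end{verbatim}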
In addition to the $\chi^2$-test detailed in Algorithm \ref{algorithm: testing hypotheses on mean functions}, we also propose a permutation test as an alternative approach for assessing the statistical hypotheses in Eq.~\eqref{eq: the main hypotheses}. The main idea behind the permutation test is that, under the null hypothesis, shuffling (or permuting) the group labels of shapes should not heavily change the test statistic of interest. To perform the permutation test, we first apply Algorithm \ref{algorithm: testing hypotheses on mean functions} to our original data and then repeatedly re-apply Algorithm \ref{algorithm: testing hypotheses on mean functions} to shapes with shuffled labels.$^\S$\footnote{$\S$: When we apply Algorithm \ref{algorithm: testing hypotheses on mean functions} to the original SECT, the result of Eq.~\eqref{eq: estimated L} is denoted as $\widehat{L}_0$. When we apply Algorithm \ref{algorithm: testing hypotheses on mean functions} to the shuffled SECT, the $\widehat{L}$ resulting from Eq.~\eqref{eq: estimated L} may differ from $\widehat{L}_0$. To make the comparison between $\mathfrak{S}_0$ and $\mathfrak{S}_{(k^*)}$ fair (see the last step of Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions}), we set $\widehat{L}$ to be $\widehat{L}_0$.} We then compare the test statistic derived from the original data with those computed on the shuffled data. The details of this permutation-based approach are provided in Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions}. Simulation studies in Section \ref{section: Simulation experiments} show that Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions} can eliminate the moderate type I error inflation of Algorithm \ref{algorithm: testing hypotheses on mean functions}; however, the power under the alternative for Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions} is moderately weaker than that of Algorithm \ref{algorithm: testing hypotheses on mean functions}.
\begin{algorithm}[h]
\caption{: (Permutation-based) Testing the hypotheses in Eq.~\eqref{eq: the main hypotheses}}\label{algorithm: permutation-based testing hypotheses on mean functions}
\begin{algorithmic}[1]
\INPUT
\noindent (i) SECT of two collections of shapes $\{SECT(K_i^{(j)})(\nu_p;t_q):p=1,\cdots,\Gamma \mbox{ and } q=1,\cdots,\Delta\}_{i=1}^n$ for $j\in\{1,2\}$;
(ii) desired confidence level $1-\alpha$ with $\alpha\in(0,1)$; (iii) the number of permutations $\Pi$.
\OUTPUT \texttt{Accept} or \texttt{Reject} the null hypothesis $H_0$ in Eq.~\eqref{eq: the main hypotheses}.
\STATE Apply Algorithm \ref{algorithm: testing hypotheses on mean functions} to the original input SECT data, compute $\widehat{L}_0$ using Eq.~\eqref{eq: estimated L} (see footnote $\S$), and compute the $\chi^2$-test statistic denoted as $\mathfrak{S}_0$ using Eq.~\eqref{eq: numerical chisq rejection region}.
\FORALL{$k=1,\cdots,\Pi$, }
\STATE Randomly permute the group labels $j\in\{1,2\}$ of the input SECT data.
\STATE Apply Algorithm \ref{algorithm: testing hypotheses on mean functions} to the permuted SECT data while setting $\widehat{L}$ to be the $\widehat{L}_0$, instead of using Eq.~\eqref{eq: estimated L}, and compute a $\chi^2$-test statistic $\mathfrak{S}_k$ using Eq.~\eqref{eq: numerical chisq rejection region}.
\ENDFOR
\STATE Compute $k^* \overset{\operatorname{def}}{=} \lfloor(1-\alpha)\cdot\Pi\rfloor$, the largest integer not exceeding $(1-\alpha)\cdot\Pi$, and sort the permuted statistics as $\mathfrak{S}_{(1)}\le\cdots\le\mathfrak{S}_{(\Pi)}$.
\STATE \texttt{Reject} the null hypothesis $H_0$ if $\mathfrak{S}_0>\mathfrak{S}_{(k^*)}$ and report the output.
\end{algorithmic}
\end{algorithm}
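Similarly, a sketch of Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions}, reusing the hypothetical \texttt{chi2\_shape\_test} above (for brevity, this sketch recomputes $\widehat{L}$ on each permutation rather than fixing it to $\widehat{L}_0$ as prescribed in footnote $\S$):
\begin{verbatim}
import numpy as np

def permutation_test(sect1, sect2, T, alpha=0.05, n_perm=1000, seed=0):
    rng = np.random.default_rng(seed)
    _, stat0, _ = chi2_shape_test(sect1, sect2, T, alpha)  # original labels
    pooled = np.vstack((sect1, sect2))
    n = sect1.shape[0]
    stats = np.empty(n_perm)
    for k in range(n_perm):
        idx = rng.permutation(len(pooled))                 # shuffle labels
        _, stats[k], _ = chi2_shape_test(pooled[idx[:n]],
                                         pooled[idx[n:]], T, alpha)
    k_star = int((1 - alpha) * n_perm)                     # floor((1-alpha) Pi)
    return stat0 > np.sort(stats)[k_star - 1]              # compare with S_(k*)
\end{verbatim}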
\section{Experiments Using Simulations}\label{section: Simulation experiments}
In this section, we present proof-of-concept simulation studies showing the performance of the hypothesis testing framework outlined in Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions}. We focus on a family of distributions $\{\mathbb{P}^{(\varepsilon)}\}_{0\le\varepsilon\le0.1}$ on $\mathscr{S}_{R,d}^M$. For each $\varepsilon$, a collection of shapes $\{K_i^{(\varepsilon)}\}_{i=1}^n$ is generated i.i.d. from the distribution $\mathbb{P}^{(\varepsilon)}$ via the following
\begin{align}\label{eq: explicit P varepsilon}
K_i^{(\varepsilon)} = & \left\{x\in\mathbb{R}^2 \, \Bigg\vert\, \inf_{y\in S_i^{(\varepsilon)}}\Vert x-y\Vert\le \frac{1}{5}\right\},\ \ \mbox{ where} \\
\notag S_i^{(\varepsilon)} = & \left\{\left(\frac{2}{5}+a_{1,i}\cdot\cos t, b_{1,i}\cdot\sin t\right) \, \Bigg\vert\, \frac{1-\varepsilon}{5}\pi\le t\le\frac{9+\varepsilon}{5}\pi\right\}\\
\notag &\bigcup\left\{\left(-\frac{2}{5}+a_{2,i}\cdot\cos t, b_{2,i}\cdot\sin t\right) \, \Bigg\vert\, \frac{6\pi}{5}\le t\le\frac{14\pi}{5}\right\},
\end{align}
where $\{a_{1,i}, a_{2,i}, b_{1,i}, b_{2,i}\}_{i=1}^n \overset{i.i.d.}{\sim} N(1, 0.05^2)$, and the index $\varepsilon$ denotes the dissimilarity between the distributions $\mathbb{P}^{(\varepsilon)}$ and $\mathbb{P}^{(0)}$. For each $\varepsilon\in[0,0.1]$, we test the following hypotheses using the scheme described in Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions}
\begin{align}\label{eq: null in the simulation section}
\begin{aligned}
& H_0: m_\nu^{(0)}(t)=m_\nu^{(\varepsilon)}(t)\mbox{ for all }(\nu,t)\in\mathbb{S}^{d-1}\times[0,T],\\
& vs. \ \ H_1: m_\nu^{(0)}(t)\ne m_\nu^{(\varepsilon)}(t)\mbox{ for some }(\nu,t),
\end{aligned}
\end{align}
where the mean $m_\nu^{(\varepsilon)}(t) \overset{\operatorname{def}}{=} \int_{\mathscr{S}_{R,d}^M} SECT(K)(\nu;t) \mathbb{P}^{(\varepsilon)}(dK)$, and the null hypothesis $H_0$ is true when $\varepsilon=0$.
We set $T=3$, directions $\nu_p=(\cos\frac{p-1}{4}\pi, \sin\frac{p-1}{4}\pi)^\intercal$ for $p\in\{1,2,3,4\}$, levels $t_q=\frac{T}{50}q=0.06q$ for $q\in\{1,\cdots,50\}$, the confidence level $95\%$ (i.e., $\alpha=0.05$ in Eq.~\eqref{eq: numerical chisq rejection region}), and the number of permutations $\Pi=1000$. For each $\varepsilon\in\{0,\, 0.0125,\, 0.025,\, 0.0375,\, 0.05,\, 0.075,\, 0.1\}$, we independently generate two collections of shapes: $\{K_i^{(0)}\}_{i=1}^n\overset{i.i.d.}{\sim} \mathbb{P}^{(0)}$ and $\{K_i^{(\varepsilon)}\}_{i=1}^n\overset{i.i.d.}{\sim} \mathbb{P}^{(\varepsilon)}$ through Eq.~\eqref{eq: explicit P varepsilon} with $n=100$, and we compute the SECT of each generated shape in directions $\{\nu_p\}_{p=1}^4$ and at levels $\{t_q\}_{q=1}^{50}$. Then, we apply Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions} to these computed SECT values and obtain the corresponding \texttt{Accept}/\texttt{Reject} output. We repeat this procedure 100 times, and the rejection rates across all replicates for each $\varepsilon$ are presented in Figure \ref{fig: simulation visualizations} and Table \ref{table: epsilon vs. rejection rates}.
\begin{figure}
\centering
\includegraphics[scale=0.7]{Simulation_visualizations.pdf}
\caption{The ``\texttt{rejection rates} vs.~\texttt{epsilon}" plot in the first panel presents the relationship between the indices $\varepsilon$ and the rejection rates computed via Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions} throughout all simulations. This relationship is presented in Table \ref{table: epsilon vs. rejection rates} as well. The red squares and orange dots present the rejection rates corresponding to values $\varepsilon\in\{0,\, 0.0125,\, 0.025,\, 0.0375,\, 0.05,\, 0.075,\, 0.1\}$. The black dashed and blue dotted straight lines present 0.05 and 1, respectively. In the last two panels of the first row, the gray and green solid curves present sample path collections $\{SECT(K_i^{(0)})(\widehat{\nu}^*)\}_{i=1}^{100}$ and $\{SECT(K_i^{(\varepsilon)})(\widehat{\nu}^*)\}_{i=1}^{100}$, respectively; the blue solid and red dashed curves present $\widehat{m}_{\widehat{\nu}^*}^{(0)}$ and $\widehat{m}_{\widehat{\nu}^*}^{(\varepsilon)}$, respectively, for $\varepsilon\in\{0,\, 0.075\}$. The blue shape in the second row is generated from $\mathbb{P}^{(0)}$, and the two pink shapes are generated from $\mathbb{P}^{(0.075)}$.}
\label{fig: simulation visualizations}
\end{figure}
\setlength{\extrarowheight}{2pt}
\begin{table}[h]
\centering
\caption{Indices $\varepsilon$ and the rejection rates of Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions}.}
\label{table: epsilon vs. rejection rates}
\vspace*{0.5em}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
Indices $\varepsilon$ & 0.000 & 0.0125 & 0.0250 & 0.0375 & 0.0500 & 0.0750 & 0.100 \\ [2pt]\hline\hline
Rejection rates of Algorithm \ref{algorithm: testing hypotheses on mean functions} & 0.15 & 0.16 & 0.32 & 0.67 & 0.92 & 0.98 & 1.00 \\ [2pt]\hline\hline
Rejection rates of Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions} & 0.03 & 0.14 & 0.22 & 0.47 & 0.74 & 1.00 & 1.00 \\ [2pt]\hline
\end{tabular}
\end{table}
The presented simulation studies show that Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions} are extremely powerful in detecting the difference between $\mathbb{P}^{(\varepsilon)}$ and $\mathbb{P}^{(0)}$ in terms of the discrepancy between the corresponding mean functions. The estimated mean functions $\widehat{m}_{\widehat{\nu}^*}^{(0)}$ and $\widehat{m}_{\widehat{\nu}^*}^{(\varepsilon)}$ in direction $\widehat{\nu}^*$ are presented by the blue solid and red dashed curves, respectively, in Figure \ref{fig: simulation visualizations}, where the discrepancy between the two estimated mean functions is visible. As $\varepsilon$ increases, $\mathbb{P}^{(\varepsilon)}$ deviates from $\mathbb{P}^{(0)}$, and the power of Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions} --- rejection rates under the alternative hypothesis --- in detecting the deviation increases. When $\varepsilon\ge0.075$, the power of Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions} exceeds 0.95. For all values of $\varepsilon$, it is hard to detect the deviation of $\mathbb{P}^{(\varepsilon)}$ from $\mathbb{P}^{(0)}$ by eye. For example, by just visualizing the shapes in Figure \ref{fig: simulation visualizations}, it is difficult to distinguish between the shape collections generated by $\mathbb{P}^{(0)}$ (blue) and $\mathbb{P}^{(0.075)}$ (pink), while our hypothesis testing framework detected the difference between the two shape collections in more than $95\%$ of the simulations.
Despite the strong ability of Algorithm \ref{algorithm: testing hypotheses on mean functions} to detect the true discrepancy between mean functions, its type I error rate --- rejection rate under the null model --- is moderately inflated. Specifically, the type I error rate of Algorithm \ref{algorithm: testing hypotheses on mean functions} is 0.15, while the expected error rate is 0.05 (see Table \ref{table: epsilon vs. rejection rates}). In contrast, Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions} does not suffer from the type I error inflation under the null, while its power under the alternative is moderately weaker than that of Algorithm \ref{algorithm: testing hypotheses on mean functions}.
\section{Conclusions and Discussions}\label{Conclusions and Discussions}
In this paper, we provided the mathematical foundations for the randomness of shapes and the distributions of the smooth Euler characteristic transform (SECT). The probability space $(\mathscr{S}_{R,d}^M, \mathscr{B}(\rho), \mathbb{P})$ was constructed for modeling the randomness of shapes, and the SECT was modeled as a $C(\mathbb{S}^{d-1};\mathcal{H})$-valued random variable defined on this probability space. We showed several properties of the SECT ensuring its Karhunen–Loève expansion, which led to the normal distribution-based statistics $\sum_{l=1}^L (\frac{1}{\sqrt{n}}\sum_{i=1}^n \xi_{l,i})^2$ and $\{T_{L,i}^{(AN)}\}_{i=1}^n$ for testing hypotheses on random shapes. Simulation studies were provided to support our mathematical derivations and show the performance of the proposed hypothesis testing framework. Our approach was shown to be extremely powerful in detecting the difference between the mean functions corresponding to two distributions of shapes. We list several potential future research areas that we believe are related to our work.
\paragraph*{Definition of Mean Shapes and Fréchet/Wasserstein Regression} The existence and uniqueness of the mean shapes $K_\oplus$ as defined in Eq.~\eqref{Frechet mean shape} are still unknown. If mean shapes $K_\oplus$ do exist, the relationship between $SECT(K_\oplus)$ and $\mathbb{E}\{SECT(\cdot)\}$ is of particular interest. The Fréchet mean in Eq.~\eqref{Frechet mean shape} may be extended to the conditional Fréchet mean and implemented in Fréchet regression --- predicting shapes $K$ using multiple scalar predictors \citep{petersen2019frechet}. For example, predicting molecular shapes and structures from scalar-valued indicators or sequences has become of high interest in systems biology \citep{jumper2021highly, yang2020improved}. Conversely, predicting clinical outcomes from tumor shapes is also of interest \citep{moon2020predicting, somasundaram2021persistent, vipond2021multiparameter}. \cite{crawford2020predicting} conducted this prediction using Gaussian process regression. Another possible way of making these types of predictions is via Wasserstein regression \citep{chen2021wasserstein}. The Wasserstein regression framework also allows both predictors and responses to be random shapes. For example, both the treatment of interest and the shapes of tumors pre-treatment can play the role of predictors, while tumor shapes after treatment can be used as the responses of a Wasserstein regression model. The combination of Wasserstein regression and the random shape framework proposed herein is left for future research.
\paragraph*{Investigating the Gaussianity of the SECT} Both the proof of Theorem \ref{thm: independence implies Gaussianity} and the simulation results presented in Figure \ref{fig: size_indep_type1error} indicate that modeling the real-valued stochastic process $SECT(\cdot)(\nu)$, for each $\nu\in\mathbb{S}^{d-1}$, as a Gaussian process is suitable for some underlying distributions $\mathbb{P}$ of shapes $K$. If $SECT(\cdot)(\nu)$ are Gaussian processes, we may further model the distributions of $SECT(\cdot)(\nu)$ using parametric covariance functions. Finding the proper assumptions making each $SECT(\cdot)(\nu)$ a Gaussian process is of both theoretical and applied modeling interest.
\paragraph*{Generative Models for Complex Shapes} The foundations for the randomness of shapes and the distributions of the SECT allow for the generative modeling of complex shapes. Suppose we are interested in a collection of shapes $\{K_i\}_{i=1}^n\subset\mathscr{S}_{R,d}^M$ (e.g., heel bones or tumors), and the underlying distribution of $\{SECT(K_i)\}_{i=1}^n$ is $\mathbf{P}$, where $\mathbf{P}$ is a probability measure on $C(\mathbb{S}^{d-1};\mathcal{H})$. Suppose $\mathbf{P}$ is known or accurately estimated. We can then simulate $\{SECT_{i'}\}_{i'=1}^{n'} \overset{i.i.d.}{\sim} \mathbf{P}$. Since the shape-to-SECT map $K\mapsto SECT(K)$ is invertible (see Theorem \ref{thm: invertibility}), for each $i'\in\{1,\cdots,n'\}$, there exists a unique shape $K_{i'}$ whose SECT is $SECT_{i'}$. Then the simulated $\{K_{i'}\}_{i'=1}^{n'}$ and the shapes of interest share the same distribution $\mathbf{P}$. The estimation of the underlying distribution $\mathbf{P}$ from $\{K_i\}_{i=1}^n$ is left for future research. The shape reconstruction step (i.e., constructing an (approximate) inverse map $SECT_i\mapsto K_i$) is outside the scope of the current paper; however, when shapes $K$ are 3-dimensional ``meshes," a shape reconstruction approach can be applied (see Section 2.4 of \cite{wang2021statistical}).
\paragraph*{Lifted Euler Characteristic
Transform} \cite{kirveslahti2021representing} generalized the Euler characteristic transform to field type data using a lifting argument and proposed the lifted Euler characteristic transform (LECT). The randomness and distributions of the LECT and corresponding hypothesis testing framework extensions are left for future research.
\section*{Software Availability}
Source code for implementing the simulation studies in Section \ref{section: Simulation experiments} is publicly available online at
\begin{center}
\small{\url{https://github.com/KMengBrown/Randomness-Inference-via-SECT.git}}.
\end{center}
\section*{Acknowledgments}
We want to thank Dr. Matthew T.~Harrison in the Division of Applied Mathematics at Brown University for useful comments and suggestions on previous versions of the manuscript.
\section*{Competing Interests}
The authors declare no competing interests.
\clearpage
\newpage
\begin{center}
\Large{\textbf{Appendix}}
\end{center}
\begin{appendix}
\section{Overview of Persistence Diagrams}\label{The Relationship between PHT and SECT}
\noindent This appendix gives an overview of persistence diagrams (PDs) in the literature. The overview is provided for the following purposes:
\begin{itemize}
\item we provide the details for the definition of $\mathscr{S}_{R,d}^M$, particularly the condition in Eq.~\eqref{Eq: topological invariants boundedness condition};
\item the PD framework is the necessary tool for proving several theorems in this paper (see Appendix \ref{section: appendix, proofs}).
\end{itemize}
Most of the material in this overview comes from or is modified from \cite{mileyko2011probability} and \cite{turner2013means}.
Let $\mathbb{K}$ be a compact topological space and $\varphi$ be a real-valued continuous function defined on $\mathbb{K}$. Because of the compactness of $\mathbb{K}$ and continuity of $\varphi$, we assume $\varphi(\mathbb{K})\subset[0,T]$ without loss of generality. For each $t\in [0,T]$, denote
\begin{align*}
\mathbb{K}^\varphi_t \overset{\operatorname{def} }{=} \{x\in\mathbb{K} \,\vert \, \varphi(x)\le t\}.
\end{align*}
Then $\mathbb{K}^\varphi_{t_1} \subset\mathbb{K}^\varphi_{t_2}$ for all $0\le t_1\le t_2 \le T$, and $i_{t_1 \rightarrow t_2}$ denotes the corresponding inclusion map. Definition \ref{def: HCP and tameness} is an analogue of the following concepts:
\begin{enumerate}
\item \textit{$t^*$ is a homotopy critical point (HCP) of $\mathbb{K}$ with respect to $\varphi$ if, for every $\delta>0$, $\mathbb{K}^\varphi_{t^*}$ is not homotopy equivalent to $\mathbb{K}^\varphi_{t^*-\delta}$;}
\item \textit{$\mathbb{K}$ is called tame with respect to $\varphi$ if $\mathbb{K}$ has finitely many HCPs with respect to $\varphi$.}
\end{enumerate}
If we take $\mathbb{K}=K\in\mathscr{S}_{R,d}^M$ and
\begin{align}\label{Eq: Morse function 1}
\varphi(x)=x\cdot \nu+R \overset{\operatorname{def}}{=} \phi_\nu(x),\ \ \ x\in K,\ \ \nu\in\mathbb{S}^{d-1},
\end{align}
we have the scenario discussed in Section \ref{The Definition of Smooth Euler Characteristic Transform}. The definition of $\mathscr{S}_{R,d}^M$ provides the tameness of $K$ with respect to $\phi_\nu$ and $\phi_\nu(K)\subset[0,T]$, for any fixed $\nu\in\mathbb{S}^{d-1}$.
The inclusion maps $i_{t_1 \rightarrow t_2}: \mathbb{K}_{t_1}^\varphi \rightarrow \mathbb{K}_{t_2}^\varphi$ induce the group homomorphisms
\begin{align*}
i^{\#}_{t_1 \rightarrow t_2}: H_k(\mathbb{K}_{t_1}^\varphi) \rightarrow H_k(\mathbb{K}_{t_2}^\varphi), \ \ \mbox{ for all }k\in\mathbb{Z},
\end{align*}
where $H_k(\cdot)=H_k(\cdot;\mathbb{Z}_2)$ denotes the $k$-th homology group with respect to field $\mathbb{Z}_2$, and $\mathbb{Z}_2$ is omitted for simplicity. Because of the tameness of $\mathbb{K}$ with respect to $\varphi$, for any $t_1 \le t_2$, we have that the image
\begin{align*}
im \left(i^{\#}_{(t_1-\delta) \rightarrow t_2} \right)=im \left(i^{\#}_{(t_1-\delta) \rightarrow t_1} \circ i^{\#}_{t_1 \rightarrow t_2} \right)
\end{align*}
does not depend on $\delta>0$ when $\delta$ is sufficiently small, and then this constant image is denoted as $im(i^{\#}_{(t_1-) \rightarrow t_2})$. For any $t$, the $k$-th birth group at $t$ is defined as the quotient group
\begin{align*}
B_k^{t} \overset{\operatorname{def}}{=} H_k(\mathbb{K}_t^\varphi)/im(i^{\#}_{(t-)\rightarrow t}),
\end{align*}
and $\pi_{B_k^t}: H_k(\mathbb{K}_t^\varphi) \rightarrow B_k^{t}$ denotes the corresponding quotient map. For any $\alpha\in H_k(\mathbb{K}_t^\varphi)$, we say $\alpha$ is born at $t$ if $\pi_{B_k^t}(\alpha)\ne 0$ in $B_k^t$. The tameness implies that $B_k^t$ is a nontrivial group only for finitely many $t$. For any $t_1<t_2$, we denote the quotient group
\begin{align*}
E_k^{t_1, t_2} \overset{\operatorname{def}}{=} H_k(\mathbb{K}_{t_2}^\varphi)/im(i^{\#}_{(t_1-)\rightarrow t_2})
\end{align*}
and the corresponding quotient map $\pi_{E_k^{t_1, t_2}}: H_k(\mathbb{K}_{t_2}^\varphi) \rightarrow E_k^{t_1, t_2}$. Furthermore, we define the following map
\begin{align*}
g_k^{t_1, t_2}:\ \ B_k^{t_1} \rightarrow E_k^{t_1, t_2},\ \ \ \ \pi_{B_k^{t_1}}(\alpha) \mapsto \pi_{E_k^{t_1, t_2}}\left(i^{\#}_{t_1 \rightarrow t_2}(\alpha)\right),
\end{align*}
for all $\alpha \in H_k(\mathbb{K}_{t_1}^\varphi)$. Then we define the death group
\begin{align*}
D_k^{t_1, t_2} \overset{\operatorname{def}}{=} ker(g_k^{t_1, t_2}).
\end{align*}
We say a homology class $\alpha\in H_k(\mathbb{K}_{t_1}^\varphi)$ is born at $t_1$ and dies at $t_2$ if (i) $\pi_{B_k^{t_1}}(\alpha)\ne 0$, (ii) $\pi_{B_k^{t_1}}(\alpha)\in D_{k}^{t_1, t_2}$, and (iii) $\pi_{B_k^{t_1}}(\alpha)\notin D_{k}^{t_1, t_2-\delta}$ for any $\delta\in(0, t_2-t_1)$. If $\alpha$ does not die, we artificially say that it dies at $T$, as $\mathbb{K}_T^\varphi=\mathbb{K}$. Then we denote $\operatorname{birth}(\alpha)=t_1$ and $\operatorname{death}(\alpha)=t_2$, and the persistence of $\alpha$ is defined as
\begin{align*}
\operatorname{pers}(\alpha) \overset{\operatorname{def}}{=} \operatorname{death}(\alpha) - \operatorname{birth}(\alpha).
\end{align*}
With the notions of $\operatorname{death}(\alpha)$ and $\operatorname{birth}(\alpha)$, the $k$-th PD of $\mathbb{K}$ with respect to $\varphi$ is defined as the following multiset of 2-dimensional points (see Definition 2 of \cite{mileyko2011probability}).
\begin{align}\label{Eq: def of PD}
\operatorname{Dgm}_k(\mathbb{K};\varphi) \overset{\operatorname{def}}{=} \bigg\{\big( \operatorname{birth}(\alpha), \operatorname{death}(\alpha)\big) \,\bigg\vert\, \alpha\in H_k(\mathbb{K}_t^\varphi) \mbox{ for some }t\in[0,T] \mbox{ with }\operatorname{pers}(\alpha)>0 \bigg\}\bigcup \mathfrak{D},
\end{align}
where $\big( \operatorname{birth}(\alpha_1), \operatorname{death}(\alpha_1) \big)$ and $\big( \operatorname{birth}(\alpha_2), \operatorname{death}(\alpha_2) \big)$ for $\alpha_1\ne\alpha_2$ are counted as two points even if $\alpha_1$ and $\alpha_2$ are born and die at the same times, respectively; that is, the multiplicity of the point $\big( \operatorname{birth}(\alpha_1), \operatorname{death}(\alpha_1) \big) = \big( \operatorname{birth}(\alpha_2), \operatorname{death}(\alpha_2) \big)$ is at least $2$. Here, $\mathfrak{D}$ denotes the diagonal $\{(t,t)\,|\, t\in\mathbb{R}\}$, where each point on this diagonal has countably infinite multiplicity (the cardinality of $\mathbb{Z}$). Since $\operatorname{birth}(\alpha)$ is no later than $\operatorname{death}(\alpha)$, the PD $\operatorname{Dgm}_k(\mathbb{K};\varphi)$ is contained in the triangular region $\{(s,t)\in\mathbb{R}^2: 0\le s\le t\le T\}$.
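The birth--death bookkeeping above is exactly what standard persistent homology software computes. As a purely illustrative sketch (assuming the Python library \texttt{gudhi}, which is not otherwise used in this paper), the following code builds a toy filtered complex and reads off its $0$-th PD with $\mathbb{Z}_2$ coefficients:
\begin{verbatim}
# A toy filtration; the filtration values play the role of varphi.
import gudhi

st = gudhi.SimplexTree()
st.insert([0], filtration=0.0)     # first component born at t = 0.0
st.insert([1], filtration=0.5)     # second component born at t = 0.5
st.insert([0, 1], filtration=1.0)  # the two components merge at t = 1.0

# Z_2 coefficients, as in the text; entries are (dim, (birth, death)).
diag = st.persistence(homology_coeff_field=2, min_persistence=0.0)
print(diag)  # two classes: (0, (0.0, inf)) and (0, (0.5, 1.0))
\end{verbatim}
The class that never dies is reported with death value \texttt{inf}; under the convention above, it would instead be assigned death time $T$.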
\paragraph*{Condition (\ref{Eq: topological invariants boundedness condition}) in the definition of $\mathscr{S}_{R,d}^M$} \textit{The function $\phi_\nu$ defined in Eq.~\eqref{Eq: Morse function 1}, the corresponding PDs defined by Eq.~\eqref{Eq: def of PD}, and the definition of $\operatorname{pers}(\cdot)$ provide the details of condition (\ref{Eq: topological invariants boundedness condition}). The notation $\#\{\cdot\}$ denotes the number of elements of the corresponding multiset, counted with multiplicity.}
Generally, a persistence diagram is a countable multiset of points in the triangular region $\{(s,t)\in\mathbb{R}^2: 0\le s, t\le T \mbox{ and }s\le t\}$ along with $\mathfrak{D}$ (see Definition 2 of \cite{mileyko2011probability}). The collection of all persistence diagrams is denoted as $\mathscr{D}$. Obviously, all the $\operatorname{Dgm}_k(\mathbb{K};\varphi)$ defined in Eq.~\eqref{Eq: def of PD} are in $\mathscr{D}$. The following definition and stability result for the \textit{bottleneck distance} are from \cite{cohen2007stability}, and they will play important roles in the proof of Theorem \ref{lemma: The continuity lemma}.
\begin{definition}\label{def: bottleneck distance}
Let $\mathbb{K}$ be a compact topological space, and let $\varphi_1$ and $\varphi_2$ be two continuous real-valued functions on $\mathbb{K}$ such that $\mathbb{K}$ is tame with respect to both $\varphi_1$ and $\varphi_2$. The bottleneck distance between the PDs $\operatorname{Dgm}_k(\mathbb{K};\varphi_1)$ and $\operatorname{Dgm}_k(\mathbb{K};\varphi_2)$ is defined as
\begin{align*}
W_\infty \Big(\operatorname{Dgm}_k(\mathbb{K};\varphi_1), \operatorname{Dgm}_k(\mathbb{K};\varphi_2) \Big) \overset{\operatorname{def}}{=} \inf_{\gamma} \Big(\sup \left\{\Vert \xi - \gamma(\xi) \Vert_{l^\infty} \, \Big\vert \, \xi\in \operatorname{Dgm}_k(\mathbb{K};\varphi_1)\right\} \Big),
\end{align*}
where $\gamma$ ranges over bijections from $\operatorname{Dgm}_k(\mathbb{K};\varphi_1)$ to $\operatorname{Dgm}_k(\mathbb{K};\varphi_2)$, and
\begin{align}\label{eq: def of l infinity norm}
\Vert \xi\Vert_{l^\infty} \overset{\operatorname{def}}{=} \max\{\vert \xi_1\vert , \vert \xi_2\vert\},\ \ \mbox{ for all }\xi=(\xi_1, \xi_2)^\intercal\in\mathbb{R}^2.
\end{align}
\end{definition}
\begin{theorem}\label{thm: bottleneck stability}
Let $\mathbb{K}$ be a compact and triangulable topological space, and let $\varphi_1$ and $\varphi_2$ be two continuous real-valued functions on $\mathbb{K}$ such that $\mathbb{K}$ is tame with respect to both $\varphi_1$ and $\varphi_2$. Then, the bottleneck distance satisfies
\begin{align*}
W_\infty \Big(\operatorname{Dgm}_k(\mathbb{K};\varphi_1), \operatorname{Dgm}_k(\mathbb{K};\varphi_2) \Big) \le \sup_{x\in\mathbb{K}} \left\vert \varphi_1(x) - \varphi_2(x) \right\vert.
\end{align*}
\end{theorem}
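Both sides of the inequality in Theorem \ref{thm: bottleneck stability} are directly computable in examples. A minimal sketch (again assuming \texttt{gudhi}, whose \texttt{bottleneck\_distance} accepts diagrams as lists of birth--death pairs and handles the diagonal $\mathfrak{D}$ implicitly):
\begin{verbatim}
import gudhi

dgm1 = [(0.0, 2.0), (1.0, 1.5)]
dgm2 = [(0.1, 1.9)]

# The long bars are matched at l-infinity cost 0.1; the short bar
# (1.0, 1.5) of dgm1 goes to the diagonal at cost (1.5 - 1.0)/2.
print(gudhi.bottleneck_distance(dgm1, dgm2))  # 0.25
\end{verbatim}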
\section{Computation of SECT}\label{section: Computation of SECT Using the Čech Complexes}
\noindent Let $K \subset\mathbb{R}^d$ be a shape of interest. Suppose a finite set of points $\{x_i\}_{i=1}^I\subset K$ and a radius $r>0$ are properly chosen such that
\begin{align}\label{eq: ball unions approx shapes}
\begin{aligned}
& K_t^\nu = \left\{x\in K \, \vert \, x\cdot\nu \le t-R \right\} \approx \bigcup_{i\in \mathfrak{I}_t^\nu} \overline{B(x_i,r)},\ \ \mbox{ for all }t\in[0,T]\mbox{ and }\nu\in\mathbb{S}^{d-1},\\
& \mbox{ where }\, \mathfrak{I}_t^\nu \overset{\operatorname{def}}{=} \left\{i\in\mathbb{N} \, \Big\vert \, 1\le i\le I \, \mbox{ and }\, x_i\cdot\nu\le t-R\right\},
\end{aligned}
\end{align}
and $\overline{B(x_i,r)} := \{x\in\mathbb{R}^d:\Vert x-x_i\Vert\le r\}$ denotes a closed ball centered at $x_i$ with radius $r$. For example, when $d=2$, centers $x_i$ may be chosen as a subset of the grid points
\begin{align*}
\left\{y_{j,j'} \overset{\operatorname{def}}{=} \left (-R+j\cdot\delta, \, -R+j'\cdot\delta \right)^\intercal \right\}_{j,j'=1}^J
\end{align*}
of the square $[-R,R]^2$ containing the shape $K$, where $\delta=\frac{2R}{J}$ and the radius is $r=\delta$. Specifically,
\begin{align}\label{eq: approximation using grid points}
K_t^\nu \approx \bigcup_{y_{j,j'}\in K_t^\nu} \overline{B\left(y_{j,j'}, \, \delta\right)}\ \ \mbox{ for all }t\in[0,T]\mbox{ and }\nu\in\mathbb{S}^{d-1},
\end{align}
which is a special case of Eq.~\eqref{eq: ball unions approx shapes}. The shape approximation in Eq.~\eqref{eq: approximation using grid points} is illustrated by Figures \ref{fig: Computing_SECT}(a) and (b).
\paragraph*{Čech complexes} The Čech complex determined by the point set $\{x_i\}_{i\in \mathfrak{I}_t^\nu}$ and radius $r$ in Eq.~\eqref{eq: ball unions approx shapes} is defined as the following simplicial complex
\begin{align*}
\check{C}_r\left( \{x_i\}_{i\in \mathfrak{I}_t^\nu} \right) \overset{\operatorname{def}}{=} \left\{ \operatorname{conv}\left(\{x_i\}_{i\in s}\right) \, \Bigg\vert \, s\in 2^{\mathfrak{I}_t^\nu} \mbox{ and } \bigcap_{i\in s}\overline{B(x_i,r)}\ne\emptyset\right\},
\end{align*}
where $\operatorname{conv}\left(\{x_i\}_{i\in s}\right)$ denotes the convex hull generated by the points $\{x_i\}_{i\in s}$. The nerve theorem (see Chapter III of \cite{edelsbrunner2010computational}) indicates that the Čech complex $\check{C}_r\left( \{x_i\}_{i\in \mathfrak{I}_t^\nu} \right)$ and the union $\bigcup_{i\in \mathfrak{I}_t^\nu} \overline{B(x_i,r)}$ have the same homotopy type. Hence, they share the same Betti numbers, i.e.,
\begin{align*}
\beta_k\Big(\check{C}_r\left( \{x_i\}_{i\in \mathfrak{I}_t^\nu} \right)\Big) = \beta_k\left(\bigcup_{i\in \mathfrak{I}_t^\nu} \overline{B(x_i,r)}\right),\ \ \mbox{ for all }k\in\mathbb{Z}.
\end{align*}
Using the shape approximation in Eq.~\eqref{eq: ball unions approx shapes}, we have the following approximation for ECC
\begin{align}\label{eq: ECC approximation using Cech complexes}
\chi_t^{\nu}(K) \approx \sum_{k=0}^{d-1} (-1)^{k} \cdot \beta_k\Big(\check{C}_r\left( \{x_i\}_{i\in \mathfrak{I}_t^\nu} \right)\Big),\ \ \ t\in[0,T].
\end{align}
The method of computing the Betti numbers of simplicial complexes in Eq.~\eqref{eq: ECC approximation using Cech complexes} is standard and can be found in the literature (e.g., Chapter IV of \cite{edelsbrunner2010computational} and Section 3.1 of \cite{niyogi2008finding}). Then, the SECT of $K$ is estimated using Eq.~\eqref{Eq: definition of SECT}. The smoothing effect of the integrals in Eq.~\eqref{Eq: definition of SECT} reduces the estimation error.
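For readers who wish to reproduce this step, the sketch below computes the right-hand side of Eq.~\eqref{eq: ECC approximation using Cech complexes} for small point sets. Two simplifications are worth flagging: the Euler characteristic is obtained as the alternating sum of simplex counts, which for any simplicial complex equals the alternating sum of its Betti numbers; and the Čech complex is replaced by the Vietoris--Rips complex at scale $2r$, a common computational proxy that agrees with the Čech complex on pairwise ball intersections but may differ on higher-order ones.
\begin{verbatim}
import itertools
import numpy as np

def euler_characteristic(points, r, max_dim):
    # chi of the Vietoris-Rips complex of `points` (a NumPy array) at
    # scale 2r: the alternating sum of simplex counts.
    n = len(points)
    gaps = points[:, None, :] - points[None, :, :]
    adj = np.sum(gaps**2, axis=-1) <= (2.0 * r)**2  # pairwise overlap
    chi = 0
    for k in range(max_dim + 1):
        for s in itertools.combinations(range(n), k + 1):
            if all(adj[i, j] for i, j in itertools.combinations(s, 2)):
                chi += (-1)**k
    return chi

def ecc(points, r, nu, R, ts, max_dim=2):
    # Approximate Euler characteristic curve t -> chi_t^nu.
    heights = points @ nu  # x_i . nu
    return [euler_characteristic(points[heights <= t - R], r, max_dim)
            for t in ts]
\end{verbatim}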
\paragraph*{Computing the SECT for our proof-of-concept and simulation examples} For the shape $K^{(1)}$ defined in Eq. \eqref{eq: example shapes K1 and K2} of Section \ref{Proof-of-Concept Simulation Examples I: Deterministic Shapes}, we estimate the SECT of $K^{(1)}$ using the aforementioned Čech complex approach with the following setup: $R=\frac{3}{2}$, $r=\frac{1}{5}$, and the point set $\{x_i\}_i$ is equal to the following collection
\begin{align}\label{eq: the centers of our shape approximation}
\left\{\left(\frac{2}{5}+\cos t_j, \sin t_j\right) \,\Bigg\vert\, t_j=\frac{\pi}{5}+\frac{j}{J}\cdot\frac{8\pi}{5}\right\}_{j=1}^J\bigcup\left\{\left(-\frac{2}{5}+\cos t_j, \sin t_j\right) \,\Bigg\vert\, t_j=\frac{6\pi}{5}+\frac{j}{J}\cdot\frac{8\pi}{5}\right\}_{j=1}^J,
\end{align}
where $J$ is a sufficiently large integer. We set $J=100$ in our proof-of-concept example. Figure \ref{fig: Computing_SECT}(c) illustrates the shape approximation using this setup. The SECT for the other shapes in our proof-of-concept/simulation examples is estimated in a similar way.
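For concreteness, the centers in Eq.~\eqref{eq: the centers of our shape approximation} can be generated by directly transcribing the display above:
\begin{verbatim}
import numpy as np

J = 100
j = np.arange(1, J + 1)
t1 = np.pi/5 + (j/J) * (8*np.pi/5)    # parameters of the right arc
t2 = 6*np.pi/5 + (j/J) * (8*np.pi/5)  # parameters of the left arc
centers = np.concatenate([
    np.column_stack((2/5 + np.cos(t1), np.sin(t1))),
    np.column_stack((-2/5 + np.cos(t2), np.sin(t2))),
])  # shape (200, 2); combined with r = 1/5 in the Cech construction
\end{verbatim}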
\begin{figure}[h]
\centering
\includegraphics[scale=0.63]{Computing_SECT.pdf}
\caption{Illustrations of the shape approximation in Eq.~\eqref{eq: ball unions approx shapes}. The shape $K$ of interest herein is equal to the $K^{(1)}$ defined in Eq.~\eqref{eq: example shapes K1 and K2} (see Figure \ref{fig: SECT visualizations, deterministic}(a) and the blue shape in the panel (c) herein). We set $R=\frac{3}{2}$, $t=1$, and $\nu=\left(\sqrt{2}/2,\, \sqrt{2}/2\right)^\intercal$. Panels (a) and (b) specifically illustrate the approximation in Eq.~\eqref{eq: approximation using grid points} using grid points, and the pink shapes in the two panels present the union $\bigcup_{y_{j,j'}\in K_t^\nu} \overline{B (y_{j,j'},\, \delta )}$ in Eq.~\eqref{eq: approximation using grid points}; in panel (a), $J=30$ (i.e., $\delta=0.1$); in panel (b), $J=100$ (i.e., $\delta=0.03$). Panel (c) illustrates the approximation in Eq.~\eqref{eq: ball unions approx shapes} with centers $\{x_i\}_i$ being the points in Eq.~\eqref{eq: the centers of our shape approximation} and $r=\frac{1}{5}$. Each pink circle in panel (c) presents a ball $\overline{B(x_i, r)}$.}
\label{fig: Computing_SECT}
\end{figure}
\section{Proofs}\label{section: appendix, proofs}
\paragraph*{Proof of Theorem \ref{thm: boundedness topological invariants theorem}} The following inclusion is straightforward
\begin{align*}
\left\{\operatorname{Dgm}_k(K;\phi_{\nu})\cap (-\infty,t)\times(t,\infty)\right\} \subset \left\{\xi\in \operatorname{Dgm}_k(K;\phi_{\nu}) \, \vert \, \operatorname{pers}(\xi)>0\right\},
\end{align*}
where the definitions of $\operatorname{Dgm}_k(K;\phi_{\nu})$ and $\operatorname{pers}(\xi)$ are provided in Appendix \ref{The Relationship between PHT and SECT}. Together with the $k$-triangle lemma (see \cite{edelsbrunner2000topological} and \cite{cohen2007stability}), this inclusion implies
\begin{align*}
\beta_k(K_t^\nu) &=\# \{\operatorname{Dgm}_k(K;\phi_{\nu})\cap (-\infty,t)\times(t,\infty)\} \\
& \le \# \{\xi\in \operatorname{Dgm}_k(K;\phi_{\nu}) \, \vert \, \operatorname{pers}(\xi)>0\}\le\frac{M}{d},
\end{align*}
for all $k\in\{0,\cdots,d-1\}$, all $\nu\in\mathbb{S}^{d-1}$, and all $t$ that are not HCPs in direction $\nu$, where $\#\{\cdot\}$ denotes the number of elements of the multisets, counted with multiplicity. Then, result (i) comes from the RCLL property of the functions $\{\beta_k(K^\nu_{t})\}_{t\in[0,T]}$. Eq.~\eqref{Eq: first def of Euler characteristic curve} implies
\begin{align*}
\sup_{\nu\in\mathbb{S}^{d-1}}\left(\sup_{0\le t\le T}\left\vert\chi_{t}^\nu(K)\right\vert\right) & = \sup_{\nu\in\mathbb{S}^{d-1}}\left(\sup_{0\le t\le T}\left\vert\sum_{k=0}^{d-1} (-1)^{k}\cdot\beta_k(K_t^{\nu})\right\vert\right) \\
&\le \sup_{\nu\in\mathbb{S}^{d-1}}\left[\sup_{0\le t\le T} \left( d\cdot\sup_{k\in\{0,\cdots,d-1\}}\beta_k(K_t^{\nu})\right) \right] \le M,
\end{align*}
which follows from result (i) and gives result (ii). \hfill$\square$\smallskip
\paragraph*{Proof of Theorem \ref{thm: Sobolev function paths}} Because of
\begin{align*}
\left\{\int_0^t \chi_\tau^\nu(K) d\tau\right\}_{t\in[0,T]} &\in\{\mbox{all absolutely continuous functions on }[0,T]\}\\
&=\{x\in L^1([0,T]): \mbox{the weak derivative $x'$ exists and }x'\in L^1([0,T])\}\\
& \overset{\operatorname{def}}{=} W^{1,1}([0,T])
\end{align*}
(see the Remark 8 after Proposition 8.3 in \cite{brezis2011functional} for details), the weak derivative of $\{\int_0^t \chi_\tau^\nu(K) d\tau\}_{t\in[0,T]}$ exists. This weak derivative is easily computed using the tameness of $K$, integration by parts, and the definition of weak derivatives. The proof of result (i) is completed. For the simplicity of notations, we denote
\begin{align*}
F(t) \overset{\operatorname{def}}{=} \int_0^t\chi_\tau^\nu(K) d\tau - \frac{t}{T} \int_0^T \chi_\tau^\nu(K) d\tau, \ \ \mbox{ for }t\in[0,T].
\end{align*}
Theorem \ref{thm: boundedness topological invariants theorem} implies
\begin{align*}
\vert F(t)\vert \le \int_0^T \vert\chi_\tau^\nu(K) \vert d\tau + \frac{t}{T} \int_0^T \vert\chi_\tau^\nu(K)\vert d\tau \le 2TM,\ \ \mbox{ for }t\in[0,T].
\end{align*}
Hence, $F\in L^p([0,T])$ for $p\in[1,\infty)$. Result (i) implies that the weak derivative of $F$ exists and is $F'(t)=\chi_t^\nu(K)-\frac{1}{T}\int_0^T \chi_\tau^\nu(K) d\tau$. We have the boundedness
\begin{align*}
\vert F'(t)\vert \le \vert \chi_{t}^\nu(K) \vert + \frac{1}{T} \int_{0}^T \vert \chi_\tau^\nu(K)\vert d\tau \le 2M, \ \ \mbox{ for }t\in[0,T],
\end{align*}
which implies $F'\in L^p([0,T])$ for $p\in[1,\infty)$. Furthermore, $F(0)=F(T)=0$, together with the discussion above, implies $F\in W^{1,p}_0([0,T])$ for all $p\in[1,\infty)$ (see Theorem 8.12 of \cite{brezis2011functional}). Theorem 8.8 and Remark 8 after Proposition 8.3 in \cite{brezis2011functional} imply $W^{1,p}_0([0,T]) \subset \mathcal{B}$ for $p\in[1,\infty)$. Result (ii) follows. \hfill$\square$\smallskip
\medskip
The following lemmas are prepared for the proof of Theorem \ref{lemma: The continuity lemma}.
\begin{lemma}\label{lemma: stability lemma 1}
Suppose $K\in\mathscr{S}_{R,d}^M$ and $\nu_1, \nu_2\in\mathbb{S}^{d-1}$. Then, we have the following estimate for all $t$ that are neither HCPs in direction $\nu_1$ nor HCPs in direction $\nu_2$.
\begin{align}\label{Eq: counting estimate}
\begin{aligned}
& \Upsilon_k(t;\nu_1, \nu_2) \overset{\operatorname{def}}{=} \left\vert \beta_k(K_t^{\nu_1}) - \beta_k(K_t^{\nu_2}) \right\vert \\
& \le \#\left\{x\in \operatorname{Dgm}_k(K;\phi_{\nu_1}) \, \Big\vert \, x\ne\gamma^*(x) \mbox{ and } \underline{(x,\gamma^*(x))}\bigcap\partial\big((-\infty,t)\times(t,\infty)\big)\ne\emptyset\right\},
\end{aligned}
\end{align}
where $\underline{(x,\gamma^*(x))}$ denotes the straight line segment connecting points $x$ and $\gamma^*(x)$ in $\mathbb{R}^2$, $\gamma^*$ is any optimal bijection such that
\begin{align}\label{eq: optimal bijection condition}
W_\infty \Big(\operatorname{Dgm}_k(K;\phi_{\nu_1}), \operatorname{Dgm}_k(K;\phi_{\nu_2}) \Big) = \sup \Big\{\Vert \xi - \gamma^*(\xi) \Vert_{l^\infty} \, \Big\vert \, \xi\in \operatorname{Dgm}_k(K;\phi_{\nu_1})\Big\}
\end{align}
(see Definition \ref{def: bottleneck distance}, and $\Vert\cdot\Vert_{l^\infty}$ is defined in Eq.~\eqref{eq: def of l infinity norm}), and ``$\#$" counts the corresponding multiplicity.
\end{lemma}
\begin{remark}
Because $(\mathscr{D}, W_\infty)$ is a geodesic space, the optimal bijection $\gamma^*$ does exist (see Proposition 1 of \cite{turner2013means} and its proof therein).
\end{remark}
\begin{proof}
Since $t$ is not an HCP, neither $\operatorname{Dgm}_k(K;\phi_{\nu_1})$ nor $\operatorname{Dgm}_k(K;\phi_{\nu_2})$ has a point on the boundary $\partial\big((-\infty,t)\times(t,\infty)\big)$. If $\beta_k(K_t^{\nu_1}) = \beta_k(K_t^{\nu_2})$, Eq.~\eqref{Eq: counting estimate} is true. Otherwise, without loss of generality, we assume $\beta_k(K_t^{\nu_1}) > \beta_k(K_t^{\nu_2})$. Notice
\begin{align*}
\beta_k(K_t^{\nu_i})=\#\left\{\operatorname{Dgm}_k(K;\phi_{\nu_i})\bigcap (-\infty,t)\times(t,\infty) \right\},\ \ \mbox{ for }i\in\{1,2\}.
\end{align*}
Let $\gamma^*$ be any optimal bijection. Then there must be at least $\beta_k(K_t^{\nu_1}) - \beta_k(K_t^{\nu_2})$ straight line segments $\underline{(x,\gamma^*(x))}$ crossing $\partial\big((-\infty,t)\times(t,\infty)\big)$; otherwise, $\gamma^*$ could not be a bijection. Hence,
\begin{align*}
&\beta_k(K_t^{\nu_1}) - \beta_k(K_t^{\nu_2}) \\
& \le \#\left\{x\in \operatorname{Dgm}_k(K;\phi_{\nu_1})\,\Big\vert\, x\ne\gamma^*(x) \mbox{ and } \underline{(x,\gamma^*(x))}\bigcap\partial\big((-\infty,t)\times(t,\infty)\big)\ne\emptyset\right\},
\end{align*}
and Eq.~\eqref{Eq: counting estimate} follows.
\end{proof}
\begin{lemma}\label{lemma: stability lemma 2}
Suppose $K\in\mathscr{S}_{R,d}^M$. Except for finitely many $t$, we have
\begin{align*}
\Upsilon_k(t;\nu_1, \nu_2) \le \frac{2M}{d} \cdot \mathbf{1}_{\mathcal{T}_k}(t),\ \ \mbox{where}
\end{align*}
\begin{align*}
\mathcal{T}_k \overset{\operatorname{def}}{=} \left\{t\in[0,T] \mbox{ neither an HCP in direction }\nu_1\mbox{ nor in direction }\nu_2\,\Big\vert\, \mbox{there exists } x\in \operatorname{Dgm}_k(K;\phi_{\nu_1}) \mbox{ such that } x\ne\gamma^*(x) \right.
\\ \left. \mbox{ and } \underline{(x,\gamma^*(x))}\bigcap\partial\big((-\infty,t)\times(t,\infty)\big)\ne\emptyset\right\},
\end{align*}
and $\gamma^*: \operatorname{Dgm}_k(K;\phi_{\nu_1})\rightarrow \operatorname{Dgm}_k(K;\phi_{\nu_2})$ is any optimal bijection satisfying Eq.~\eqref{eq: optimal bijection condition}.
\end{lemma}
\begin{proof}
Theorem \ref{thm: boundedness topological invariants theorem} implies
\begin{align*}
\Upsilon_k(t;\nu_1, \nu_2) = \left\vert \beta_k(K_t^{\nu_1}) - \beta_k(K_t^{\nu_2}) \right\vert \le 2M/d.
\end{align*}
Furthermore, the inequality in Eq.~\eqref{Eq: counting estimate} indicates that $\Upsilon_k(t;\nu_1, \nu_2)=0$ if $t\notin\mathcal{T}_k$, except for finitely many HCPs in directions $\nu_1$ and $\nu_2$. Then the desired estimate follows.
\end{proof}
\paragraph*{Proof of Theorem \ref{lemma: The continuity lemma}} The definition of Euler characteristic (see Eq.~\eqref{Eq: first def of Euler characteristic curve}) and Lemma \ref{lemma: stability lemma 2} imply the following: for $p\in[1,\infty)$, we have
\begin{align}\label{Eq: estimate in proof}
\begin{aligned}
& \int_0^T \left\vert\Big\{\chi_\tau^{\nu_1}(K)-\chi_\tau^{\nu_2}(K)\Big\} \right\vert^p d\tau \\
& = \int_0^T \left\vert \sum_{k=0}^{d-1} (-1)^k\cdot\Big(\beta_k(K_\tau^{\nu_1}) - \beta_k(K_\tau^{\nu_2}) \Big) \right\vert^p d\tau \\
& \le \int_0^T \left(\sum_{k=0}^{d-1}\Upsilon_k(\tau;\nu_1,\nu_2)\right)^p d\tau \\
& \le d^{(p-1)} \cdot \sum_{k=0}^{d-1} \int_0^T \Big(\Upsilon_k(\tau;\nu_1,\nu_2)\Big)^p d\tau \\
& \le \frac{(2M)^p}{d} \cdot \sum_{k=0}^{d-1} \int_{\mathcal{T}_k} d\tau \\
& \le \frac{(2M)^p}{d} \cdot \sum_{k=0}^{d-1} \left(\sum_{\xi\in \operatorname{Dgm}_k(K;\phi_{\nu_1})} 2\cdot\Vert \xi-\gamma^*(\xi)\Vert_{l^\infty}\right),
\end{aligned}
\end{align}
where the last inequality follows from the definition of $\mathcal{T}_k$. Since $\Vert \xi-\gamma^*(\xi)\Vert_{l^\infty}$ can be positive only if $\operatorname{pers}(\xi)>0$ or $\operatorname{pers}(\gamma^*(\xi))>0$, at most $N$ of the terms $\Vert \xi-\gamma^*(\xi)\Vert_{l^\infty}$ are positive, where condition (\ref{Eq: topological invariants boundedness condition}) implies
\begin{align*}
N \overset{\operatorname{def}}{=} \sum_{i=1}^2 \#\{\xi\in \operatorname{Dgm}_k(K;\phi_{\nu_i}) \,\vert \, \operatorname{pers}(\xi)>0\} \le 2M/d.
\end{align*}
Therefore, the inequality in Eq.~\eqref{Eq: estimate in proof} implies
\begin{align*}
\int_0^T \left\vert\Big\{\chi_\tau^{\nu_1}(K)-\chi_\tau^{\nu_2}(K)\Big\} \right\vert^p d\tau & \le \frac{2 \cdot (2M)^{(p+1)}}{d} \cdot \sup\Big\{ \Vert \xi-\gamma^*(\xi)\Vert_{l^\infty} \,\Big\vert\, \xi\in \operatorname{Dgm}_k(K;\phi_{\nu_1})\Big\} \\
& = \frac{2 \cdot (2M)^{(p+1)}}{d} \cdot W_\infty \Big(\operatorname{Dgm}_k(K;\phi_{\nu_1}), \operatorname{Dgm}_k(K;\phi_{\nu_2}) \Big).
\end{align*}
Then, Theorem \ref{thm: bottleneck stability} implies
\begin{align*}
\int_0^T \left\vert\Big\{\chi_\tau^{\nu_1}(K)-\chi_\tau^{\nu_2}(K)\Big\} \right\vert^p d\tau \le \frac{2\cdot(2M)^{(p+1)}}{d} \cdot \sup_{x\in K} \vert x\cdot(\nu_1-\nu_2)\vert.
\end{align*}
Additionally, $\vert x\cdot(\nu_1-\nu_2)\vert\le \Vert x\Vert\cdot\Vert \nu_1-\nu_2\Vert$ and $K\subset\overline{B(0,R)}$ provide
\begin{align}\label{Eq: Continuity inequality lemma}
\int_0^T \left\vert\Big\{\chi_\tau^{\nu_1}(K)-\chi_\tau^{\nu_2}(K)\Big\} \right\vert^p d\tau \le \frac{2 \cdot R \cdot (2M)^{(p+1)}}{d} \cdot \Vert \nu_1-\nu_2\Vert.
\end{align}
Define the constant $C^*_{M,R,d}$ as follows
\begin{align*}
C^*_{M,R,d} \overset{\operatorname{def}}{=} \sqrt{ \frac{16M^3R}{d} + \frac{32M^3R}{d} + \frac{64 M^4 R}{d^2} } .
\end{align*}
Setting $p=2$, Eq.~\eqref{Eq: Continuity inequality lemma} implies the following
\begin{align*}
\left( \int_0^T \left\vert\Big\{\chi_\tau^{\nu_1}(K)-\chi_\tau^{\nu_2}(K)\Big\} \right\vert^2 d\tau \right)^{1/2} \le \sqrt{ \frac{16M^3R}{d} \cdot \Vert \nu_1-\nu_2\Vert } \le C^*_{M,R,d} \cdot \sqrt{\Vert \nu_1-\nu_2\Vert},
\end{align*}
which is the inequality in Eq.~\eqref{Eq: continuity inequality}.
The definition of $SECT(K)$, together with Eq.~\eqref{Eq: Continuity inequality lemma}, implies
\begin{align*}
& \Big\Vert SECT(K)(\nu_1) - SECT(K)(\nu_2) \Big\Vert^2_{\mathcal{H}} \\
& = \int_0^T \left\vert \frac{d}{dt}SECT(K)(\nu_1;t) - \frac{d}{dt}SECT(K)(\nu_2;t) \right\vert^2 dt \\
& = \int_0^T \left\vert \Big(\chi_t^{\nu_1}(K)-\chi_t^{\nu_2}(K)\Big) - \frac{1}{T} \int_0^T \Big(\chi_\tau^{\nu_1}(K)-\chi_\tau^{\nu_2}(K)\Big) d\tau \right\vert^2 dt\\
& \le \int_0^T \left( \Big\vert\chi_t^{\nu_1}(K)-\chi_t^{\nu_2}(K)\Big\vert + \frac{1}{T} \int_0^T \Big\vert\chi_\tau^{\nu_1}(K)-\chi_\tau^{\nu_2}(K)\Big\vert d\tau \right)^2 dt \\
& \le 2\int_0^T \Big\vert\chi_t^{\nu_1}(K)-\chi_t^{\nu_2}(K)\Big\vert^2 dt + \frac{2}{T}\left(\int_0^T \Big\vert\chi_\tau^{\nu_1}(K)-\chi_\tau^{\nu_2}(K)\Big\vert d\tau \right)^2 \\
& \le \frac{32M^3R}{d} \cdot \Vert \nu_1 - \nu_2\Vert + \frac{64 M^4 R}{d^2} \cdot \Vert
\nu_1-\nu_2 \Vert^2,
\end{align*}
where the last inequality above comes from Eq.~\eqref{Eq: Continuity inequality lemma}. Then, we have
\begin{align*}
& \Big\Vert SECT(K)(\nu_1) - SECT(K)(\nu_2) \Big\Vert_{\mathcal{H}} \\
& \le \sqrt{ \frac{32M^3R}{d} \cdot \Vert \nu_1 - \nu_2\Vert + \frac{64 M^4 R}{d^2} \cdot \Vert
\nu_1-\nu_2 \Vert^2 } \\
& \le C^*_{M,R,d} \cdot \sqrt{ \Vert \nu_1 - \nu_2\Vert + \Vert
\nu_1-\nu_2 \Vert^2 }.
\end{align*}
The proof of the second inequality in result (i) is completed.
The law of cosines and Taylor's expansion indicate
\begin{align*}
\frac{\Vert \nu_1-\nu_2 \Vert}{d_{\mathbb{S}^{d-1}}(\nu_1, \nu_2)} & = \sqrt{ 2 \cdot \frac{1-\cos\left(d_{\mathbb{S}^{d-1}}(\nu_1, \nu_2)\right)}{\left\{d_{\mathbb{S}^{d-1}}(\nu_1, \nu_2)\right\}^2} }\\
& = \sqrt{ 2 \cdot \left[ \sum_{n=1}^\infty \frac{(-1)^{n+1}}{(2n)!} \cdot \Big\{d_{\mathbb{S}^{d-1}}(\nu_1, \nu_2)\Big\}^{2n-2} \right] } = O(1).
\end{align*}
Then, result (ii) comes from the following
\begin{align}\label{eq: 1/2 Holder argument}
\begin{aligned}
& \frac{\Vert SECT(K)(\nu_1)-SECT(K)(\nu_2) \Vert_{\mathcal{H}}}{\sqrt{d_{\mathbb{S}^{d-1}}(\nu_1, \nu_2)}}\\
& \le C^*_{M,R,d} \cdot \sqrt{ \frac{\Vert \nu_1 - \nu_2\Vert}{d_{\mathbb{S}^{d-1}}(\nu_1, \nu_2)} + \frac{\Vert
\nu_1-\nu_2 \Vert^2}{d_{\mathbb{S}^{d-1}}(\nu_1, \nu_2)} }=O(1).
\end{aligned}
\end{align}
We consider the following inequality for all $\nu_1,\nu_2\in\mathbb{S}^{d-1}$ and $t_1, t_2\in[0,T]$
\begin{align}\label{eq: pm terms for Holder}
\begin{aligned}
&\vert SECT(K)(\nu_1; t_1)-SECT(K)(\nu_2; t_2)\vert\\
&\le \vert SECT(K)(\nu_1; t_1)-SECT(K)(\nu_1; t_2)\vert\\
& + \vert SECT(K)(\nu_1; t_2)-SECT(K)(\nu_2; t_2)\vert\\
& \overset{\operatorname{def}}{=} I+II.
\end{aligned}
\end{align}
From the definition of $\Vert\cdot\Vert_{C^{0,\frac{1}{2}}([0,T])}$ and Eq.~\eqref{eq: Sobolev embedding from Morrey}, we have
\begin{align*}
& \sup_{t_1, t_2\in[0,T], \, t_1\ne t_2}\frac{\vert SECT(K)(\nu_1; t_1)-SECT(K)(\nu_1; t_2)\vert}{\vert t_1-t_2 \vert^{1/2}} \\
& \le \Vert SECT(K)(\nu_1)\Vert_{C^{0,\frac{1}{2}}([0,T])}\\
& \le \Tilde{C}_T \Vert SECT(K)(\nu_1)\Vert_\mathcal{H}\\
& \le \Tilde{C}_T \Vert SECT(K)\Vert_{C(\mathbb{S}^{d-1};\mathcal{H})},
\end{align*}
which implies $I\le \Tilde{C}_T \Vert SECT(K)\Vert_{C(\mathbb{S}^{d-1};\mathcal{H})} \cdot \vert t_1-t_2 \vert^{1/2}$ for all $t_1, t_2\in[0,T]$. Applying Eq.~\eqref{eq: Sobolev embedding from Morrey} again, we have
\begin{align*}
II & \le \Vert SECT(K)(\nu_1)-SECT(K)(\nu_2)\Vert_{\mathcal{B}} \\
& \le \Tilde{C}_T \Vert SECT(K)(\nu_1)-SECT(K)(\nu_2)\Vert_{\mathcal{H}}.
\end{align*}
Then, the inequality in Eq.~\eqref{eq: bivariate Holder continuity} follows from Eq.~\eqref{eq: pm terms for Holder} and result (ii). With an argument similar to Eq.~\eqref{eq: 1/2 Holder argument}, the function $(\nu,t)\mapsto SECT(K)(\nu;t)$ belongs to $C^{0,\frac{1}{2}}(\mathbb{S}^{d-1}\times[0,T];\mathbb{R})$. The proof of Theorem \ref{lemma: The continuity lemma} is completed. \hfill$\square$\smallskip
The proof of Theorem \ref{thm: invertibility} needs the following concepts, which can be found in \cite{andradas2012constructible}.
\begin{definition}\label{Definition: constructible sets}
(Constructible sets) (i) A locally closed set is a subset of a topological space that is the intersection of an open and a closed subset. (ii) A constructible set is a finite union of the aforementioned locally closed sets.
\end{definition}
\paragraph*{Proof of Theorem \ref{thm: invertibility}} Since $K\in\mathscr{S}_{R,d}^M$ is a triangulable subset of $\mathbb{R}^d$, the shape $K$ is homeomorphic to a polyhedron $\vert S\vert = \bigcup_{s\in S} s \subset\mathbb{R}^d$, where $S$ is a finite simplicial complex and each $s$ is a simplex. We may assume $K=\vert S\vert$ without loss of generality. Since each simplex $s$ is closed and $s=s\bigcap\mathbb{R}^d$, with $\mathbb{R}^d$ open, each simplex $s$ is a locally closed set; hence, $K$ is constructible (see Definition \ref{Definition: constructible sets}). Then, Corollary 6 of \cite{ghrist2018persistent} implies the desired result. \hfill$\square$\smallskip
\paragraph*{Proof of Theorem \ref{Thm: metric theorem for shapes}} The triangle inequalities and symmetry of $\rho$ follow from those of the metric of $C(\mathbb{S}^{d-1};\mathcal{H})$. The equation $\rho(K_1, K_2)=0$ implies $\Vert PECT(K_1)(\nu)-PECT(K_2)(\nu)\Vert_{\mathcal{H}_{BM}}=0$ for all $\nu\in\mathbb{S}^{d-1}$. \cite{evans2010partial} (Theorem 5 of Chapter 5.6) implies $\Vert PECT(K_1)(\nu)-PECT(K_2)(\nu)\Vert_{\mathcal{B}}=0$ for all $\nu\in\mathbb{S}^{d-1}$. Then, we have $\int_0^t\chi_\tau^\nu(K_1) d\tau=\int_0^t\chi_\tau^\nu(K_2) d\tau$ for all $t\in[0,T]$ and $\nu\in\mathbb{S}^{d-1}$; hence, $SECT(K_1)=SECT(K_2)$. Then the invertibility in Theorem \ref{thm: invertibility} implies $K_1=K_2$. Therefore, $\rho$ is a distance, and the proof of result (i) is completed.
The proof of result (ii) is motivated by the following chain of maps for any fixed $\nu\in\mathbb{S}^{d-1}$ and $t\in[0,T]$.
\begin{align*}
& \mathscr{S}_{R,d}^M \ \ \ \xrightarrow{PECT}\ \ \ C(\mathbb{S}^{d-1};\mathcal{H}_{BM})\ \ \ \xrightarrow{\text{projection}}\ \ \ \mathcal{H}_{BM}\mbox{, which is embedded into }\mathcal{B} \ \ \ \xrightarrow{\text{projection}} \ \ \mathbb{R}, \\
& K \mapsto \left\{PECT(K)(\nu')\right\}_{\nu'\in\mathbb{S}^{d-1}} \mapsto \left\{PECT(K)(\nu;t') \right\}_{t'\in[0,T]} \ \mapsto PECT(K)(\nu;t)=\int_0^{t} \chi_{\tau}^\nu(K) d\tau,
\end{align*}
where all spaces above are metric spaces and equipped with their Borel algebras. We notice the following facts:
\begin{itemize}
\item mapping $PECT: \mathscr{S}_{R,d}^M \rightarrow C(\mathbb{S}^{d-1}; \mathcal{H}_{BM})$ is isometric;
\item projection $C(\mathbb{S}^{d-1};\mathcal{H}_{BM})\rightarrow\mathcal{H}_{BM}, \, \{F(\nu')\}_{\nu'\in\mathbb{S}^{d-1}}\mapsto F(\nu)$ is continuous for each fixed direction $\nu$;
\item applying \cite{evans2010partial} (Theorem 5 of Chapter 5.6) again, the embedding $\mathcal{H}_{BM}\rightarrow\mathcal{B}, \, F(\nu) \mapsto F(\nu)$ is continuous;
\item projection $\mathcal{B}\rightarrow \mathbb{R}, \{x(t')\}_{t'\in[0,T]}\mapsto x(t)$ is continuous.
\end{itemize}
Therefore, $\mathscr{S}_{R,d}^M\rightarrow\mathbb{R}, K\mapsto PECT(K)(\nu;t)$ is continuous, hence measurable. Because $\chi_{(\cdot)}^\nu(K)$, for each $K\in\mathscr{S}_{R,d}^M$, is an RCLL step function with finitely many discontinuities,
\begin{align*}
\chi^\nu_{t}(K)=\lim_{n\rightarrow\infty}\left[\frac{1}{\delta_n}\left
\{PECT(K)(\nu;t+\delta_n)-PECT(K)(\nu;t)\right\}\right],
\end{align*}
for all $K\in\mathscr{S}_{R,d}^M$, where $\lim_{n\rightarrow\infty}\delta_n=0$ and $\delta_n>0$. The measurability of $PECT(\cdot)(\nu;t+\delta_n)$ and $PECT(\cdot)(\nu;t)$ implies that $\chi_t^\nu(\cdot): \mathscr{S}_{R,d}^M\rightarrow\mathbb{R}, K\mapsto \chi_t^\nu(K)$ is measurable, for any fixed $\nu$ and $t$. The proof of result (ii) is completed. \hfill$\square$\smallskip
\medskip
\paragraph*{Proof of Theorem \ref{thm: mean is in H}} For each fixed direction $\nu\in\mathbb{S}^{d-1}$, Theorem \ref{thm: SECT distribution theorem in each direction} indicates that the mapping $SECT(\cdot)(\nu): K\mapsto SECT(K)(\nu)$ is an $\mathcal{H}$-valued measurable function defined on the probability space $(\mathscr{S}_{R,d}^M, \mathscr{F}, \mathbb{P})$. We first show the Bochner $\mathbb{P}$-integrability of $SECT(\cdot)(\nu)$ (see Section 5 in Chapter V of \cite{yosida1965functional} for the definition of Bochner $\mathbb{P}$-integrability), and the Bochner integral of $SECT(\cdot)(\nu)$ will be fundamental to our proof of Theorem \ref{thm: mean is in H}. Lemma 1.3 of \cite{da2014stochastic} indicates that $SECT(\cdot)(\nu)$ is strongly $\mathscr{F}$-measurable (see Section 4 in Chapter V of \cite{yosida1965functional} for the definition of strong $\mathscr{F}$-measurability). Then, Assumption \ref{assumption: existence of second moments} indicates that the Bochner integral
\begin{align*}
m^*_\nu \overset{\operatorname{def}}{=} \int_{\mathscr{S}_{R,d}^M} SECT(K)(\nu)\mathbb{P}(dK)
\end{align*}
is well-defined (i.e., $SECT(\cdot)(\nu)$ is Bochner $\mathbb{P}$-integrable) and $m^*_\nu \in\mathcal{H}$ (see \cite{yosida1965functional}, Section 5 of Chapter V, particularly Theorem 1 therein). Corollary 2 in Section 5 of Chapter V of \cite{yosida1965functional}, together with the fact that $\mathcal{H}$ is the RKHS generated by the kernel $\kappa(\cdot,\cdot)$ defined in Eq.~\eqref{Eq: kernel of the Brownian bridge}, implies
\begin{align*}
m^*_\nu(t) &= \langle \kappa(t,\cdot), m^*_\nu \rangle \\
& = \int_{\mathscr{S}_{R,d}^M} \Big\langle \kappa(t,\cdot), SECT(K)(\nu) \Big\rangle \mathbb{P}(dK) \\
& =\int_{\mathscr{S}_{R,d}^M} SECT(K)(\nu;t)\mathbb{P}(dK) \\
& = \mathbb{E}\left\{ SECT(\cdot)(\nu;t)\right\} =m_\nu(t),\ \ \mbox{ for all }t\in[0,T].
\end{align*}
Therefore, $m_\nu=m^*_\nu\in\mathcal{H}$. The proof of result (i) is completed.
To prove result (ii), we first show the product measurability of the following map for each fixed direction $\nu\in\mathbb{S}^{d-1}$
\begin{align}\label{eq: product measurability}
\begin{aligned}
& \Big(\mathscr{S}_{R,d}^M \times [0,T], \mathscr{F}\times \mathscr{B}([0,T])\Big)\ \ \rightarrow \ \ (\mathbb{R}, \mathscr{B}(\mathbb{R})),\\
& (K,t) \ \ \mapsto \ \ SECT(K)(\nu;t),
\end{aligned}
\end{align}
where $\mathscr{F}\times \mathscr{B}([0,T])$ denotes the product $\sigma$-algebra generated by $\mathscr{F}$ and $\mathscr{B}([0,T])$. Define the filtration $\{\mathscr{F}_t\}_{t\in[0,T]}$ by $\mathscr{F}_t=\sigma(\{SECT(\cdot)(\nu;t')\,\vert\, t'\in[0,t]\})$ for $t\in[0,T]$. Because the paths of $SECT(\cdot)(\nu)$ are in $\mathcal{H}$, these paths are continuous (see Eq.~\eqref{eq: H, Holder, B embeddings}). Proposition 1.13 of \cite{karatzas2012brownian} implies that the stochastic process $SECT(\cdot)(\nu)$ is progressively measurable with respect to the filtration $\{\mathscr{F}_t\}_{t\in[0,T]}$. Then, the mapping in Eq.~\eqref{eq: product measurability} is measurable with respect to the product $\sigma$-algebra $\mathscr{F}\times \mathscr{B}([0,T])$ (see \cite{karatzas2012brownian}, particularly Definitions 1.6 and 1.11, also the paragraph right after Definition 1.11 therein). Eq.~\eqref{eq: finite second moments for all v and t} implies
\begin{align*}
\int_0^T \int_{\mathscr{S}_{R,d}^M} \vert SECT(K)(\nu;t)\vert^2 \mathbb{P}(dK) dt \le T \cdot \Tilde{C}^2_T \cdot \mathbb{E}\Vert SECT(\cdot)(\nu)\Vert^2_{\mathcal{H}}<\infty,
\end{align*}
where the double integral is well-defined because of the product measurability of the mapping in Eq.~\eqref{eq: product measurability} and Fubini's theorem. Then, the proof of result (ii) is completed.
For any $\nu_1,\nu_2\in\mathbb{S}^{d-1}$, the proof of result (i) implies the following Bochner integral representation
\begin{align*}
\Vert m_{\nu_1} - m_{\nu_2} \Vert_{\mathcal{H}} & = \left\Vert \int_{\mathscr{S}_{R,d}^M} \Big\{ SECT(K)(\nu_1) - SECT(K)(\nu_2) \Big\} \mathbb{P}(dK) \right\Vert_{\mathcal{H}} \\
& \overset{(1)}{\le} \int_{\mathscr{S}_{R,d}^M} \Big\Vert SECT(K)(\nu_1) - SECT(K)(\nu_2) \Big\Vert_{\mathcal{H}} \mathbb{P}(dK) \\
& \overset{(2)}{\le} C^*_{M,R,d} \cdot \sqrt{ \Vert \nu_1 - \nu_2\Vert + \Vert
\nu_1-\nu_2 \Vert^2 },
\end{align*}
where inequality (1) follows from Corollary 1 in Section 5 of Chapter V of \cite{yosida1965functional}, and inequality (2) follows from Theorem \ref{lemma: The continuity lemma} (i). With the argument in Eq.~\eqref{eq: 1/2 Holder argument}, the proof of result (iii) is completed.
\hfill$\square$\smallskip
\end{appendix}
\clearpage
\newpage
\subsubsection{Training Neural Network Vehicle Dynamics}
Our fully-connected neural network approximates the nonlinear vehicle dynamics. Note that only the dynamic state variables are considered as input to the neural network. We denote by $\left|{\bf x}_{d}\right|$ and $\left|{\bf u}\right|$ the sizes of the controller's state and action, respectively. The history of states and actions, which has size $(\left|{\bf x}_{d}\right| + \left|{\bf u}\right|) \cdot H$, is forward propagated through the neural network. The output of the network then predicts the residual difference between the current state and the next state, which has size $\left|{\bf x}_{d}\right|$. The network was implemented as an MLP with four hidden layers. We applied a hyperbolic tangent (tanh) activation function, and the network was trained with the mean squared error loss function and the Adam optimizer.
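A minimal PyTorch sketch of this architecture is given below. The hidden width ($256$) and learning rate are placeholder choices of ours, since the text above only fixes the input/output sizes, the depth, the activation, the loss, and the optimizer.
\begin{verbatim}
import torch
import torch.nn as nn

class DynamicsMLP(nn.Module):
    # Predicts the residual x_{t+1} - x_t of the dynamic states from a
    # flattened H-step history of states and actions.
    def __init__(self, state_dim, action_dim, horizon, hidden=256):
        super().__init__()
        layers, in_dim = [], (state_dim + action_dim) * horizon
        for _ in range(4):  # four hidden layers with tanh activations
            layers += [nn.Linear(in_dim, hidden), nn.Tanh()]
            in_dim = hidden
        layers.append(nn.Linear(hidden, state_dim))  # residual output
        self.net = nn.Sequential(*layers)

    def forward(self, history):
        return self.net(history)

model = DynamicsMLP(state_dim=3, action_dim=2, horizon=5)  # H=5 assumed
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
# One training step, given batches of histories and residual targets:
# loss = loss_fn(model(histories), next_states - current_states)
# optimizer.zero_grad(); loss.backward(); optimizer.step()
\end{verbatim}
Here the state dimension of $3$ corresponds to $(v_x, v_y, r)$, matching the variables reported in Table~\ref{table:errors}.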
We collected a human-controlled driving dataset, with a data rate of $10$ Hz, to train our network by expanding the methods used in our previous work \cite{bae_curriculum_2021}. We found that the dataset should comprise three types of distinct maneuvers in order for the neural network to accurately represent the vehicle dynamics for various friction conditions:
\begin{enumerate}
\item Zig-zag driving at low speeds ($20-25$ km/h) on the race track, in both clockwise and counter-clockwise directions. Driving in each direction was treated as a separate maneuver.
\item High speed driving on the race track in both directions, trying to maintain $40$ km/h as much as possible.
\item Sliding maneuvers at the friction limits on flat ground, in combinations of acceleration and deceleration with various steering angles.
\end{enumerate}
The above maneuvers (five in total, counting each driving direction separately) were performed with seven different friction coefficients: $[0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]$. These $35$ maneuvers were logged for two minutes each, yielding a total of $70$ minutes of driving data. We divided the data into $70 \, \%$ for training and $30 \, \%$ for testing after shuffling the data to break temporal correlations. The trained model was saved when it achieved the lowest test error. Then, it was evaluated with the validation dataset. We collected the validation data on the race track in both directions. The default friction coefficient was set to $0.8$ and the friction coefficients of the six curves were set to $[0.95, 0.85, 0.75, 0.65, 0.55, 0.45]$, respectively, none of which were included in the training data.
The test and validation errors of our neural network are shown in Table~\ref{table:errors}. Root Mean Square Error (RMSE) is denoted as $\mathbf{E}_{RMS}$ and the max error is denoted as $\mathbf{E}_{max}$. The results show that our neural network can make accurate one-step predictions under a variety of driving conditions.
\begin{table}[ht]
\renewcommand\arraystretch{1.2}
\captionsetup{justification=centering}
\caption{Test and validation errors of the neural network vehicle dynamics.}
\label{table:errors}
\begin{center}
\begin{tabularx}{1.0\columnwidth}{c|cc|cc|cc|}
& \multicolumn{2}{c|}{$v_x$ [m/s]} & \multicolumn{2}{c|}{$v_y$ [m/s]} & \multicolumn{2}{c|}{$r$ [rad/s]} \\ \cline{2-7}
& $\mathbf{E}_{RMS}$ & $\mathbf{E}_{max}$ & $\mathbf{E}_{RMS}$ & $\mathbf{E}_{max}$ & $\mathbf{E}_{RMS}$ & $\mathbf{E}_{max}$ \\ \cline{1-7}
\textbf{Test} & 0.0273 & 0.3940 & 0.0178 & 0.3180 & 0.0147 & 0.3455 \\
\textbf{Val.} & 0.0220 & 0.3753 & 0.0114 & 0.1127 & 0.0095 & 0.1413 \\
\end{tabularx}
\end{center}
\vspace*{-0.15in}
\end{table}
\subsubsection{Experimental Setup}
We designed the state-dependent cost function $c({\bf x})$ in MPC to have the following form:
\begin{equation} \label{eq:cost}
\begin{aligned}
c({\bf x}) & = \alpha_1\,\text{Track}({\bf x}) + \alpha_2\,\text{Speed}({\bf x}) + \alpha_3\,\text{Slip}({\bf x}), \\
\text{Track}({\bf x}) & = (0.9)^t \cdot 10000 \cdot \textbf{M}(p_x,p_y) , \\
\text{Speed}({\bf x}) & = (v_x - v_{ref})^2 , \\
\text{Slip}({\bf x}) & = \sigma^2 + 10000 \, I\left(\left| \sigma \right| > 0.2\right) ,
\end{aligned}
\end{equation}
where $I$ is an indicator function. $\textbf{M}(p_x,p_y)$ in the track cost is the two-dimensional cost map value at the global position $(p_x, p_y)$. Thanks to the sampling-based derivation of SMPPI, we can provide an impulse-like penalty in the cost function. The speed cost is a simple quadratic cost to achieve the reference vehicle speed $v_{ref}$. The slip cost imposes both soft and hard costs to discourage slip angle in the trajectory $\big( \sigma = -\arctan ( v_y / \vert v_x \vert ) \big)$. Trajectories expected to have a slip angle greater than $0.2$ rad (approximately $11.46 \, ^{\circ}$) are heavily penalized, since such slip has the potential to make the vehicle unstable.
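A NumPy sketch of Eq.~\eqref{eq:cost} follows; the state ordering and the weights $\alpha_i$ are placeholders, as their exact values are not specified above.
\begin{verbatim}
import numpy as np

def state_cost(x, t, cost_map, v_ref, alpha=(1.0, 1.0, 1.0)):
    # x = (p_x, p_y, yaw, v_x, v_y, r) is an assumed state ordering;
    # cost_map(p_x, p_y) returns the cost map value M(p_x, p_y).
    p_x, p_y, v_x, v_y = x[0], x[1], x[3], x[4]
    track = 0.9**t * 10000.0 * cost_map(p_x, p_y)
    speed = (v_x - v_ref)**2
    sigma = -np.arctan2(v_y, abs(v_x))              # slip angle
    slip = sigma**2 + 10000.0 * (abs(sigma) > 0.2)  # soft + hard cost
    return alpha[0]*track + alpha[1]*speed + alpha[2]*slip
\end{verbatim}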
For SMPPI, the number of parallel samples was $10000$ and the number of time steps was $35$. For TOAST, the control parameters were set as follows:
\begin{equation}
\begin{aligned}
{\bf Q} &= \text{Diag}
\begin{pmatrix}
5, & 5, & 0.01, & 0.05, & 0.05, & 0.025
\end{pmatrix}, \\
{\bf R} &= \text{Diag}
\begin{pmatrix}
0.4, & 0.1
\end{pmatrix} .
\end{aligned}
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=0.99\linewidth]{Figures/gain_plot.pdf}
\caption{Visualization of the time-varying feedback gains with different controllers while they are driving the same section. (a) ``Low Gain", (b) ``High Gain", and (c) TOAST. For clarity, only the gains corresponding to the steering angle are visualized. We applied rotation on the hand-tuned gains of $p_x$ and $p_y$ from global to vehicle coordinate frame. TOAST adaptively changes the gains according to the time-varying dynamic characteristics of the system.}
\label{fig:gain_plot}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{Figures/trajectory.pdf}
\caption{Vehicle trajectories of the compared controllers. The friction coefficient of \textit{Corner \#2} (in gray) is 0.60 and \textit{Corner \#3} (in brown) is 0.65.}
\label{fig:trajectory_compare}
\end{figure}
\subsubsection{Experimental Results\label{subsubsection:result}}
We evaluated the control performance of the four controllers on the race track. The goal of the controllers was to complete whole laps in a clockwise direction.
The race track was adjusted to be more challenging to drive than it was when the training dataset was collected. The friction coefficients $\mu$ were modified to have values of $[0.55, 0.60, 0.65, 0.70, 0.75, 0.80]$ at the corners, respectively, and to have a default of $1.0$. During ten laps around the course, we measured average lap times on the six corners at a reference speed of $40$ km/h. When the vehicle left the track, it was placed at the starting point and started a new lap. All of the methods used the same pre-trained model and no further training was given throughout the experiments. The results are shown in Table~\ref{table:time_took}.
\begin{table}[ht]
\renewcommand\arraystretch{1.2}
\captionsetup{justification=centering}
\caption{Average lap times on the six corners of different control methods. The minimum speed (in [km/h]) and maximum slip angle (in [$^{\circ}$]) at each corner are also analyzed with our method. The success rates (SR) are also displayed.}
\label{table:time_took}
\begin{center}
\begin{tabular}{c|cccccc|c}
Lap time [s] & \#1 & \#2 & \#3 & \#4 & \#5 & \#6 & SR \\\hline
No Feedback & 7.8 & 9.4 & 12.2 & 8.7 & 7.0 & 11.9 & 1/10 \\
Low Gain & 7.9 & 8.9 & 12.2 & 8.9 & 6.9 & 12.2 & 3/10 \\
High Gain & 8.3 & 8.7 & 12.2 & 10.0 & 7.6 & 13.6 & 10/10 \\
\rowcolor{gray!50} TOAST & 7.7 & 7.8 & 11.8 & 9.5 & 8.3 & 9.4 & 10/10 \\ \hline \hline
\rowcolor{gray!50} Min. Speed & 33.2 & 24.4 & 22.6 & 33.5 & 26.7 & 26.7 & \\
\rowcolor{gray!50} Max. Slip & 3.4 & 7.2 & 6.8 & 2.8 & 7.4 & 4.2 & \\
\end{tabular}
\end{center}
\vspace*{-0.1in}
\end{table}
``No Feedback" and ``Low Gain" barely got through \textit{Corner \#2} and \textit{Corner \#3}, which are the most challenging sections of the track. They completed the full course with only one and three out of ten laps, respectively. The learned dynamics predicted that the vehicle would slide greatly if it did not slow down due to the low friction surfaces. Therefore, SMPPI generated trajectories that applied brakes first, then steered the vehicle and slowly accelerated it to pass the corners. However, the vehicle failed to follow the planned trajectory and understeering occurred due to model uncertainty. In contrast, ``High Gain" and ``TOAST" completed entire laps in $100\%$ of the trials. However, the results show that TOAST has a faster lap time than ``High Gain" in most cases. This is because if the feedback controller keeps a fixed high gain all the time, it frequently interferes with the control sequence of the MPC and thus loses the optimality of the solution. Our controller computes the optimal feedback gain for each optimal control sequence based on the contextual information encoded in the neural network dynamics. As a result, a low gain can be maintained in situations where feedback compensation is not greatly required. We show the time-varying feedback gains of the compared controllers in Fig.~\ref{fig:gain_plot}. For TOAST, the overall average speed during ten laps was $33.4$ km/h and the maximum slip angle was $7.4 \, ^{\circ}$. We visualize the vehicle trajectories on \textit{Corner \#2} and \textit{Corner \#3} in Fig.~\ref{fig:trajectory_compare}.
\begin{wrapfigure}{R}{0.5\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{Figures/du.pdf}
\caption{Derivative action for each control variable. They are quantified by the L2 norm.}
\label{fig:du}
\end{wrapfigure}
We also analyzed the degree of chattering on control values with ``High Gain" and TOAST. It is a well-known fact that rapid changes in the action commands are a burden to the actuators. In the straight sections of the track with high friction coefficients, the vehicle is stable and the requirement for a feedback controller becomes negligible. Therefore, we analyzed the derivative actions during $10$ laps excluding the corners. The results are shown in Fig.~\ref{fig:du}. For ``High Gain", the average derivative actions of the steering angle ($\delta$) and the desired speed ($v_{des}$) were $1.31$ and $0.81$, respectively. On the other hand, those of TOAST were $0.96$ and $0.46$. The results demonstrate that the adaptive nature of our proposed method can alleviate chattering.
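For reference, one way to compute such a per-variable statistic from logged controls (our assumed reading of the quantification in Fig.~\ref{fig:du}):
\begin{verbatim}
import numpy as np

def mean_derivative_action(u):
    # u: array of shape (T, 2), logged [steering, desired speed];
    # returns the mean magnitude of per-step control changes.
    return np.abs(np.diff(u, axis=0)).mean(axis=0)
\end{verbatim}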
\section{INTRODUCTION}
\input{_I.Introduction/intro}
\section{METHODOLOGY}
\input{_II.Methodology/_intro}
\subsection{Smooth Model Predictive Path Integral Controller}
\input{_II.Methodology/a_mppi}
\subsection{Designing a Simultaneous Optimal Tracking Controller}
\input{_II.Methodology/b_lqr}
\subsection{Trajectory Optimization and Simultaneous Tracking}
\input{_II.Methodology/c_toast}
\subsection{Extension to an Aggressive Autonomous Driving Task \label{subsection:extension}}
\input{_II.Methodology/d_extension}
\section{EXPERIMENTS}
\input{_III.Experiments/_intro}
\subsection{Pendulum}
\input{_III.Experiments/a_pendulum}
\subsection{Cartpole}
\input{_III.Experiments/b_cartpole}
\subsection{Aggressive Autonomous Driving}
\input{_III.Experiments/c_carmaker}
\section{CONCLUSION}
\input{_IV.Conclusion/conclusion}
\bibliographystyle{IEEEtran}
\typeout{}
\section{Introduction} \label{sec:intro}
\subsection{A brief history of Mendelian
randomization} \label{sec:intro:historyofmr}
Mendelian randomization (MR) is a causal inference approach that uses the
random allocation of genes from parents to offspring as a foundation
for causal inference \parencite{Sanderson2022}. The ideas behind MR can be traced
back to the intertwined beginning of modern statistics and genetics
about a century ago. In one of the earliest examples, \textcite{Wright1920} used
selective inbreeding of guinea pigs to investigate the causes of
colour variation and, in particular, the relative contribution of
heredity and environment. In a later defence of this work,
\textcite[p.\ 251]{Wright1923} argued that his analysis of path
coefficients, a precursor to modern causal graphical models, ``rests
on the validity of the premises, i.e., on the evidence for Mendelian
heredity'', and the ``universality'' of Mendelian laws justifies
ascribing a causal interpretation to his findings.
At around the same time, \textcite{fisher26_arran_field_exper} started
to contemplate the randomization principle in experimental design and
used it to justify his analysis of variance (ANOVA) procedure, which was
motivated by genetic problems. In fact, the term ``variance'' first
appeared in Fisher's groundbreaking paper that bridged Darwin's
theory of evolution and Mendel's theory of genetic inheritance
\parencite{fisher19_correlation}. \textcite{Fisher1935} described
randomization as the ``reasoned basis'' (p.\ 12) for inference and
``the physical basis of the validity of the
test'' (p.\ 17). Later, it was revealed that his
factorial method of experimentation derives ``its structure and its
name from the simultaneous inheritance of Mendelian factors''
\parencite[p. 330]{Fisher1951}. Indeed,
Fisher viewed randomness in meiosis as uniquely shielding geneticists from the
difficulties of establishing reliably controlled comparisons,
remarking that ``the different genotypes possible from the same mating
have been beautifully randomized by the meiotic process''
\parencite[p. 332]{Fisher1951}.
While this source of randomization was originally used for eliciting
genetic causes of phenotypic variation, it was later identified as a
possible avenue for understanding causation among modifiable
phenotypes
themselves \parencite{DaveySmith2006}. \textcite{lower79_n_acety_phenot_risk_urinar_bladd_cancer}
used N-acetylation, a phenotype of known genetic regulation and a
component of detoxification pathways for arylamine, to strengthen the
inference that arylamine exposure causes bladder
cancer. \textcite{Katan2004a} proposed to address
reverse causation in the hypothesized effect of low serum cholesterol
on cancer risk via polymorphisms in the apolipoprotein E
(\textit{APOE}) gene. He argued that, if low cholesterol was indeed a
risk factor for cancer, we would expect to see higher rates of cancer
in individuals with the low cholesterol allele. Another pioneering
application of this reasoning can be found in a proposed study of the
effectiveness of bone marrow transplantation relative to chemotherapy \parencite{Gray1991}, for example, in the treatment of acute myeloid leukaemia \parencite{Wheatley2004}. Patients with a compatible
donor sibling were more likely to receive transplantation than
patients without. Since compatibility is a consequence of random
genetic assortment, comparing survival outcomes between the two groups
can be viewed as akin to an intention-to-treat analysis in a
randomized controlled trial. This paper appears to be the first to
use the term ``Mendelian randomization''.
It would be a dozen more years before an argument for the broader applicability of MR was put forward by \textcite{DaveySmith2003}. At the time, a number of criticisms had been levelled against the state of observational epidemiology and its methods of inquiry \parencite{Feinstein1988, Taubes1995, DaveySmith2001}. Several high profile results failed to be corroborated by subsequent randomized controlled trials, such as the role of beta-carotene consumption in lowering risk of cardiovascular
disease, with unobserved confounding identified as the likely culprit \parencite[p.\ 329-330]{DaveySmith2001}. This string of failures motivated the development of a more rigorous observational design with an explicit source of unconfounded randomization in the exposures of interest \parencite{DaveySmith2020}.
Originally, \textcite{DaveySmith2003} recognized that MR is best justified in a within-family design with parent-offspring trios. MR is commonly described as being analogous to a randomized controlled trial with non-compliance. This analogy is based on exact randomization in the transmission of alleles from parents to offspring, which can be viewed as a form of treatment assignment. From its inception, it was recognized that data limitations would largely restrict MR to be performed in samples of unrelated individuals, which \textcite{DaveySmith2003} termed ``approximate MR''. Such approximate MR has been the norm, seen in the majority of applied and methodological studies to date. However, MR in unrelated individuals lacks the explicit source of randomization offered by the within-family design, thereby suffering potential biases from dynastic effects, population structure and assortative mating \parencite{Davies2019, Brumpton2020, Howe2022}.
In addition to random assignment of exposure-modifying genetic
variants, we must also assume that the effects of these genetic
variants on the outcome are fully mediated by the exposure, known as
the exclusion restriction. When this assumption holds, MR can be
framed as a special case of instrumental variable analysis
\parencite{Thomas2004, Didelez2007a}. Within this framework, there has
been considerable recent methodological work to replace the exclusion
restriction with more plausible assumptions, typically by placing
structure on the sparsity \parencite{kang16_instr_variab_estim_with_some} or distribution of pleiotropic
effects across individual genetic variants \parencite{Bowden2015,
Zhao2020,kolesar15_ident_infer_with_many_inval_instr}.
\subsection{Towards an almost exact inference for MR} \label{sec:intro:ourtest}
As parent-offspring trio data becomes more widely available, it is
increasingly feasible to perform MR within families, as originally intended. There has been
some recent methodological and applied development for within-family designs
\parencite{Davies2019, Brumpton2020}. Thus far this has consisted of
extensions of traditional MR techniques in which structural models for
the gene-exposure and gene-outcome relationships are proposed and
samples are assumed to be drawn according to these models from some
large population. In particular, \textcite{Brumpton2020} propose a
linear regression model with parental genotype fixed effects. Their
inference is based on this model and so the role of meiotic
randomization is only implicit.
However, one of the unique advantages of MR as an observational design
is that it has an explicit inferential basis, randomness in meiosis and fertilization,
which has been thoroughly studied and modelled in genetics since
\textcite{Haldane1919}. Haldane developed a simple model for
recombination during meiosis that has demonstrated good performance on multiple
pedigrees across many species. The connection between this meiosis
model and causal inference in parent-offspring trio studies was
recently described in the context of identifying causal genetic
variants \parencite{Bates2020a} and was implicit in earlier genetic
linkage analysis \parencite{morton1955sequential} and the transmission
disequilibrium test \parencite{Spielman1993}. \textcite{Lauritzen2003}
attempted to represent meiosis models using graphs; however, they were
concerned with computatational advantages of graphical models and did
not consider their potential for causal inference.
The idea of exact hypothesis testing dates back to Fisher's original
proposal for randomized experiments and is well illustrated in his
famous `lady tasting tea' example
\parencite{Fisher1935}. \textcite{pitman37_signif_tests_which_may_be}
appears to be the first to have fully embraced the idea of randomization
testing. This mode of reasoning is
usually referred to as randomization inference or design-based
inference to contrast with model-based inference. With the aid of the
potential outcome framework \parencite{Neyman1923,Rubin1974}, we can
construct an exact randomization test for the sharp null hypothesis by
conditioning on all the potential outcomes
\parencite{Rubin1980,Rosenbaum1983}. Randomization tests are widely
used in a variety of settings, including genetics
\parencite{Spielman1993,Bates2020a}, clinical trials
\parencite{Rosenberger2019}, program evaluation
\parencite{Heckman2019} and instrumental variable analysis
\parencite{Rosenbaum2004, Kang2018}.
\subsection{Our contributions}
\label{sec:our-contribution}
In this article, we propose a statistical framework that enables us to
use meiosis models as the ``reasoned basis'' for inference in MR by
unifying several ideas mentioned above. The randomization test we
propose is \emph{almost exact} in the sense that the test has exactly
the nominal size if the meiosis and fertilization model is correct.
Our first contribution is a theoretical
description of MR (and the assumptions therein) via the language of
causal directed acyclic graphs (DAGs) \parencite{Pearl2009}. These
graphical tools allow us to visualize and dissect the assumptions imposed
on the biological processes involved in heredity. In particular, we
show how various biological and social processes, including population
stratification, gamete formation, fertilization, genetic linkage,
assortative mating, dynastic effects, and pleiotropy, can be
represented using a DAG and how they can introduce bias in MR
analyses. Furthermore, by using single world intervention graphs
(SWIGs) \parencite{Richardson2013b}, we identify sufficient confounder
adjustment sets to eliminate these sources of bias. Our results
provide important theoretical insights into a trade-off between
reducing pleiotropy-induced bias and increasing statistical power.
For statistical inference, we propose a randomization test by
connecting two existing literatures. The first literature concerns
randomization inference for instrumental variable analyses, which
usually assumes that the instrumental variables are randomized
according to a simple design (such as random sampling of a binary
instrument without replacement) \parencite{Rosenbaum1983,
Kang2018}. However, in MR, offspring genotypes are very
high-dimensional and are randomized based on the parental
haplotypes. The second literature attempts to identify the approximate
location of (``map'') causal genetic variants by modelling the meiotic
process \parencite{morton1955sequential,Spielman1993,Bates2020a}. %
We show how the hidden Markov model for meiosis and fertilization implied by
\textcite{Haldane1919} greatly simplifies the sufficient adjustment
sets and computation of the randomization test. In essence, our
proposal extends existing randomization inference techniques for
instrumental variables to allow testing based on biological randomness
in reproduction (i.e.\ Mendelian randomization).
In addition to the considerable conceptual advantages, our almost
exact MR approach has several practical advantages too. First, unlike
model-based approaches for within-family MR \parencite{Brumpton2020},
our approach does not rely on a correctly specified phenotype
model. Nonetheless, the randomization test can take advantage of an
accurate phenotype model to dramatically improve its
power. Furthermore, the hidden Markov model based on Haldane's original formulation implies a propensity
score for each instrument given a sufficient adjustment set
\parencite{Rosenbaum1983}. This can be used as a ``clever covariate''
\parencite{Rose2008} to build powerful test statistics with attractive
robustness properties. Second,
since the randomization test is exact, it is robust to arbitrarily
weak instruments. For an ``irrelevant'' instrument which induces no
variation in the exposure, the test will simply have no
power. Finally, by taking advantage of the DAG representation and
using a sufficient confounder adjustment set, our method is also
provably robust to biases arising from population structure (including
multi-ethnic samples), assortative mating, dynastic effects and
pleiotropy by linkage.
We demonstrate these advantages with a simulation study and a real data example in the Avon Longitudinal Study of Parents and Children (ALSPAC). The simulation study first confirms that our almost exact test produces uniformly distributed p-values under the null and then explores the power of the test in a number of scenarios. The applied example consists of a negative control and a positive control. The negative control is the effect of child's body mass index (BMI) at age 7 on mother's BMI pre-pregnancy. Although a causal effect is temporally impossible, backdoor paths could exist to produce a false rejection of the null. We provide evidence that our almost exact test closes these paths. The positive control is the effect of child's BMI on itself plus some zero-mean noise. We also compare our results with the results from a ``typical'' MR analysis unconditional on any parental or offspring haplotypes.
\section{Background} \label{sec:background}
\subsection{Causal inference preliminaries} \label{sec:causal-inference}
This section lays out some standard notation and assumptions in causal inference. Readers looking for an introduction to causal inference concepts, including causal graphical models, single world intervention graphs, randomization inference, and instrumental variables, can consult \Cref{sec:intro-to-causal-inference}. We express our causal assumptions and model via causal graphs, then demonstrate that randomization inference for instrumental variables is a natural vehicle for inference in within-family MR. As such, a good grasp of these concepts is required to understand the remainder of the article.
Suppose we have a collection of $N$ individuals indexed by $i = 1, 2, \ldots, N$ and, among these individuals, we are interested in the effect of an exposure $D_i$ on an outcome $Y_i$. For example, the exposure could be the level of alcohol consumption over some period of time and the outcome could be the resulting incidence of cardiovascular disease. Individual $i$'s \emph{potential outcomes} (also called \emph{counterfactuals}) corresponding to exposure level $D_i = d$ are given by $Y_i(d)$. The collection of potential outcomes for the sample is given by $\mathcal{F} = \{ (Y_i(0), Y_i(1)) \colon i = 1, 2, \ldots, N \}$.
We make the \emph{no interference} assumption which posits that the potential outcomes of each individual are unaffected by the exposures of other individuals \parencite{Rubin1980, Imbens2015}, such that $Y_i(d) \perp\mkern-9.5mu\perp D_j$ for all $i \neq j$ and $d$ in the support of $D_i$. We also assume \emph{no hidden versions of the same treatment}. A violation of this assumption could occur if $D_i \in \{0, 1\}$ were a binary measure indicating abstinence ($D_i = 0$) or some alcohol consumption ($D_i = 1$). The effect of alcohol consumption on cardiovascular disease is likely to exhibit a dose-response relationship, meaning that the potential outcome $Y_i(1)$ is not well-defined since it could take multiple distinct values depending on the unobserved amount of consumption. The previous two assumptions are sometimes jointly referred to as the \emph{stable unit treatment value assumption} \parencite{Rubin1980}. We also make the \emph{consistency} assumption \parencite{Hernan2020} which states that the observed outcome corresponds to the potential outcome at the realized exposure level $Y_i = Y_i(D_i)$.
\subsection{Genetic preliminaries} \label{sec:genetic-preliminaries}
Before we proceed, it is instructive to provide a basic overview of the relevant concepts in genetics, with a focus on modelling the processes involved in genetic inheritance, namely \emph{meiosis} and \emph{fertilization}. For a thorough exposition on statistical models for meiosis and pedigree data, see \textcite{Thompson2000}.
Human somatic cells consist of 23 pairs of chromosomes, with one in each pair inherited from the mother and the other from the father. Each chromosome is a doubled strand of helical DNA comprised of complementary nucleotide base pairs. A base pair which exhibits population-level variation in its nucleotides is called a \emph{single nucleotide polymorphism} (SNP). DNA sequences are typically characterized by the detectable variant forms induced by different combinations of SNPs. These variant forms are called \emph{alleles}. In this article, we will only consider variants with two alleles. A set of alleles on one chromosome inherited together from the same parent is called a \emph{haplotype} \parencite{Bates2020a} and the two haplotypes forming a homologous pair of chromosomes is called a \emph{genotype}.
Meiosis is a type of cell division that results in reproductive cells containing one copy of each chromosome. During this process, homologous chromosomes line up and exchange segments of DNA between themselves in a biochemical process called \emph{crossover}. The recombined chromosomes are then further divided and separated into gametes. Since recombinations are infrequent (roughly one to four per chromosome in most eukaryotes), SNPs located nearby on the same parental chromosome are more likely to be transmitted together, which results in \emph{genetic linkage}. Fertilization is the process by which germ cells in the father (sperm cells) and mother (egg cells) join together to form a zygote, which will normally develop into an embryo.
In genetic trio studies we observe the haplotypes of the mother, father and their child at $p$ SNPs on a single chromosome, where $\mathcal{J} = \{ 1, 2, \ldots , p \}$ is the set of SNP indices. We will denote the haplotypes as follows:
\[ \begin{array}{lll}%
\text{Individual's haplotypes:} & \bm{Z}^m = (Z_1^m, \ldots, Z_p^m) \in \{ 0, 1 \}^p, & \bm{Z}^f = (Z_1^f, \ldots, Z_p^f) \in \{ 0, 1 \}^p; \\[0.3em]
\text{Mother's haplotypes:} & \bm{M}^m = (M_1^m, \ldots, M_p^m) \in \{ 0, 1 \}^p, & \bm{M}^f = (M_1^f, \ldots, M_p^f) \in \{ 0, 1 \}^p; \\[0.3em]
\text{Father's haplotypes:} & \bm{F}^m = (F_1^m, \ldots, F_p^m) \in \{ 0, 1 \}^p, & \bm{F}^f = (F_1^f, \ldots, F_p^f) \in \{ 0, 1 \}^p,
\end{array}\]%
where the superscript $m$ (or $f$) indicates a maternally (or
paternally) inherited haplotype. Furthermore, denote $\bm{M}_j^{mf} =
(M_j^m, M_j^f)$ as the mother's haplotypes at site $j$ and similarly
for $\bm{F}_j^{mf}$ and $\bm{Z}_j^{mf}$.
The offspring's genotype
at site $j \in \mathcal{J}$ is given by $Z_j = Z_j^m + Z_j^f$ and let
$\bm{Z} = \bm{Z}^m + \bm{Z}^f \in \{ 0, 1, 2 \}^p$ denote the vector
of offspring genotypes.
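To fix ideas, the following minimal sketch (in Python with \texttt{numpy}; all values are hypothetical) encodes this notation for a single trio at $p = 5$ sites and forms the offspring genotype:
\begin{verbatim}
import numpy as np

# Hypothetical haplotypes at p = 5 biallelic sites, coded 0/1.
M_m = np.array([1, 0, 1, 1, 0])  # mother's maternally inherited haplotype
M_f = np.array([0, 0, 1, 0, 1])  # mother's paternally inherited haplotype
F_m = np.array([1, 1, 0, 0, 0])  # father's maternally inherited haplotype
F_f = np.array([0, 1, 0, 1, 1])  # father's paternally inherited haplotype

# Haplotypes transmitted to the offspring (one from each parent).
Z_m = np.array([1, 0, 1, 0, 1])  # offspring's maternal haplotype
Z_f = np.array([0, 1, 0, 1, 1])  # offspring's paternal haplotype

# Offspring genotype: elementwise sum, taking values in {0, 1, 2}.
Z = Z_m + Z_f
\end{verbatim}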
\begin{figure}[t]
\begin{center}
\caption{Illustration of the meiotic process for five sites on a chromosome.}
\includegraphics[scale=1.25]{recombination.pdf}
\label{fig:recombination}
\end{center}
\end{figure}
\Cref{fig:recombination} illustrates how an offspring's maternally-inherited haplotype $\bm{Z}^m$ at five sites on a chromosome is related to the mother's haplotypes
$\bm{M}^m$ and $\bm{M}^f$. At site $j \in \mathcal{J}$ in a gamete produced by meiosis, the allele is inherited from either the mother's $m$ haplotype or $f$ haplotype (ignoring mutations). This can be formalized as an ancestry indicator, $U_j^m \in \{m,f\}$. The classical meiosis model of \textcite{Haldane1919} assumes that $\bm U^m = (U_1^m,\dotsc,U_p^m)$ follows a homogeneous Poisson process. Haldane's model is described in detail in \Cref{sec:randomization-distribution} and can simplify our method considerably (\Cref{subsec:hmm-simplification}). Nonetheless, our ``almost exact'' MR framework is modular and does not rely on a specific meiosis model. In fact, it is theoretically straightforward to incorporate more sophisticated meiosis models that allow for ``interference'' between the crossovers \parencite{Otto2019}. As the meiosis model becomes more accurate, our test becomes closer to exact randomization inference.
The description in the last paragraph does not take genetic mutation into account. Many meiosis models assume that there is a small probability of independent mutations. This is formalized in the next assumption.
\begin{assumption}[Haldane's model] \label{assum:de-novo-distribution}
Given that $U_j^m = u_j^m \in \{m, f\}$ and fertilization occurs
(this is represented as $S=1$ in \Cref{sec:exacttest}), each
$Z_j^m$ is equal to $M_j^{(u_j^m)}$ unless an independent mutation
occurs. More specifically,
\[
Z_j^m = \begin{cases}
M_j^{(u_j^m)} & \text{with probability $1 - \epsilon$} \\
1-M_j^{(u_j^m)} & \text{with probability $\epsilon$.}
\end{cases}
\]
The same model holds for the paternally-inherited haplotypes.
\end{assumption}
The rate of \emph{de novo} mutation $\epsilon$ is about $10^{-8}$ in
humans \parencite{Acuna-Hidalgo2016}. Unless it is necessary to
compute the exact randomization distribution under a recombination
model, for practical purposes it often suffices to treat $\epsilon =
0$ (i.e., no mutations).
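To make \Cref{assum:de-novo-distribution} concrete, the following minimal sketch simulates one maternal gamete under Haldane's model (in Python with \texttt{numpy}; the helper name and genetic distances are illustrative assumptions, and \Cref{sec:randomization-distribution} gives the formal description):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate_maternal_gamete(M_m, M_f, dist_morgans, eps=0.0):
    """Draw one gamete from the mother's haplotypes under Haldane's
    model: crossovers follow a Poisson process, so the ancestry
    indicator U switches between adjacent sites with probability
    theta = (1 - exp(-2d)) / 2 (Haldane's map function)."""
    p = len(M_m)
    theta = 0.5 * (1.0 - np.exp(-2.0 * np.asarray(dist_morgans)))
    U = np.empty(p, dtype=int)       # 0 codes 'm', 1 codes 'f'
    U[0] = rng.integers(2)           # first site: fair coin
    for j in range(1, p):
        switch = rng.random() < theta[j - 1]
        U[j] = U[j - 1] ^ int(switch)
    Z_m = np.where(U == 0, M_m, M_f)
    mutate = rng.random(p) < eps     # independent de novo mutations
    return np.where(mutate, 1 - Z_m, Z_m), U

# Example: five sites separated by 0.05 Morgans each.
Z_m, U = simulate_maternal_gamete(np.array([1, 0, 1, 1, 0]),
                                  np.array([0, 0, 1, 0, 1]),
                                  dist_morgans=[0.05] * 4)
\end{verbatim}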
%
%
%
This meiosis model assumes the absence of \emph{transmission ratio distortion}. Transmission ratio distortion occurs when one of the two parental alleles is passed on to the (surviving) offspring at more or less than the expected Mendelian rate of 50\%. Transmission ratio distortion falls into two categories: segregation distortion, when processes during meiosis or fertilization select certain alleles more frequently than others, and viability selection, when the viability of zygotes themselves depend on the offspring genotype \parencite{Davies2019}. While we sidestep this discussion for now, we return to it in \Cref{sec:discussion}.
\section{Almost exact Mendelian randomization} \label{sec:exacttest}
\subsection{A causal model for Mendelian inheritance}
\label{sec:setup}
Returning to the alcohol and cardiovascular disease example in \Cref{sec:causal-inference}, observational studies suggest that moderate alcohol consumption confers reduced risk relative to abstinence or heavy consumption \parencite{Millwood2019}. This could indicate systematic differences among people with different drinking patterns (confounding) rather than a causal effect. There is a genetic variant in the \emph{ALDH2} gene which regulates acetaldehyde metabolism. In some populations, an allele of \emph{ALDH2} produces a protein that is inactive in metabolising acetaldehyde, causing discomfort while drinking and thereby reducing consumption. We might like to use the random allocation of variant copies of \emph{ALDH2} during meiosis and fertilization to make causal inference about the downstream effect of alcohol consumption on cardiovascular disease; however, we need to carefully clarify the conditions under which this inference would be valid. To this end, we construct a very general causal model in this section to describe the process of Mendelian inheritance and genotype-phenotype relationships. This causal model allows us to identify sources of bias in within-family MR and construct sufficient adjustment sets to control for them.
Under this causal model, the central idea behind almost exact MR is to base statistical inference precisely on randomness in genetic inheritance via a model for meiosis and fertilization. Technically speaking, we would like to apply the randomization test described in \Cref{sec:rand-infer} to MR.
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.95]{images/fammr_dag.pdf}
\caption{The single world intervention graph for a working
example of a chromosome with $p = 3$ variants. Transparent
nodes are observed and grey nodes are unobserved. Square nodes
are the confounders being conditioned on in
\Cref{prop:example-adjustment-set}. $A$ is ancestry; $\bm M^f =
(M_1^f,M_2^f,M_3^f)$ is the mother's haplotype inherited from
her father; $\bm M^m,\bm F^m$, and $\bm F^f$ are defined
similarly; $C^m$ and $C^f$ are generic phenotypes of the mother
and father; $S$ is an indicator of mating; $\bm Z^m =
(Z_1^m,Z_2^m,Z_3^m)$ is the offspring's maternal haplotype and
$\bm U^m$ is a meiosis indicator; $\bm Z^f$ and $\bm U^f$ are
defined similarly; $\bm Z = (Z_1,Z_2,Z_3)$ is the offspring's
genotype; $D$ is the exposure of interest; $Y(d)$ is the
potential outcome of $Y$ under the intervention that sets $D$
to $d$; $C$ is an environmental confounder between $D$ and
$Y$.}
\label{fig:fammr_dag}
\end{center}
\end{figure}
\Cref{fig:fammr_dag} shows a working example of our causal model
on a chromosome with just $p=3$ variants. The directed acyclic graph
is structured in roughly chronological order from left to right,
where $A$ describes the population structure, $S$ is an indicator for
mating, and $C$ is any environmental confounder between the exposure
$D$ and outcome $Y$. %
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
At first glance, \Cref{fig:fammr_dag} appears to be quite complicated
but, by the modularity of graphical models, it can be
decomposed into a collection of much simpler subgraphs that describe different
biological processes (\Cref{fig:subgraphs}). By
definition, a joint distribution \emph{factorizes} according to the
DAG in \Cref{fig:fammr_dag} if its density function can be decomposed
as follows, where $f$ is a generic symbol for a density function:
\begin{align*}
& f(\text{all variables}) & \\
=& f(A) f(\bm U^m) f(\bm U^f) f(C) &\text{(Exogenous variables)} \\
& f(\bm M^m, \bm M^f, \bm F^m, \bm F^f \mid A) & \text{(Population stratification, \Cref{sec:parental-genotypes})} \\
& f(C^m \mid A, \bm M^m, \bm M^f) f(C^f \mid A, \bm F^m, \bm F^f) & \text{(Parental phenotypes, \Cref{sec:parental-phenotypes})} \\
& f(\bm Z^m \mid \bm M^m, \bm M^f, \bm U^m) f(\bm Z^f \mid \bm F^m, \bm F^f, \bm U^f) & \text{(Meiosis, \Cref{sec:offspring-genotypes})} \\
& f(S \mid C^m, C^f) & \text{(Assortative mating, \Cref{sec:offspring-genotypes})} \\
& f(\bm Z \mid \bm Z^m, \bm Z^f, S) & \text{(Fertilization, \Cref{sec:offspring-genotypes})} \\
& f(D \mid A, \bm Z, C^m, C^f, C) f(Y(d) \mid A, \bm Z, C^m, C^f, C) & \text{(Offspring phenotypes, dynastic effects,} \\
& & \text{ confounding, \Cref{sec:offspring-phenotypes})}
\end{align*}
Next, we describe each term on the right hand side above and its
corresponding subgraph and biological process. To simplify the
discussion, we assume all DAGs in this article are faithful, so
conditional independence between random variables is equivalent to
d-separation in the DAG.
\begin{figure}[htp]
\centering
\begin{subfigure}[b]{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{images/population_stratification.pdf}
\caption{Population structure (\Cref{sec:parental-genotypes}).}
\label{fig:parental-genotypes}
\end{subfigure}
\par\bigskip
\begin{subfigure}[b]{0.6\textwidth}
\centering
\includegraphics[width=\textwidth]{images/meiosis_dag.pdf}
\caption{Meiotic recombination of the mother's haplotypes (\Cref{sec:genetic-preliminaries}).}
\label{fig:meiosis_dag}
\end{subfigure}
\par\bigskip
\begin{subfigure}[b]{0.275\textwidth}
\centering
\includegraphics[width=\textwidth]{images/assortative_mating.pdf}
\caption{Assortative mating (\Cref{sec:offspring-genotypes}).}
\label{fig:assortative-mating}
\end{subfigure}
\par\bigskip
\begin{subfigure}[b]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{images/offspring_phenotypes.pdf}
\caption{Offspring phenotypes (\Cref{sec:offspring-phenotypes}).}
\label{fig:offspring-phenotypes}
\end{subfigure}
\hspace{15mm}
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{images/dynastic_effects.pdf}
\caption{Dynastic effects (\Cref{sec:parental-phenotypes}).}
\label{fig:dynasticeffects}
\end{subfigure}%
\caption{The constituent subgraphs of our within-family Mendelian
randomization model. White nodes represent observed variables;
grey nodes represent unobserved variables; and striped nodes
represent variables for which some elements may be unobserved.}
\label{fig:subgraphs}
\end{figure}
\subsubsection{Parental genotypes} \label{sec:parental-genotypes}
We assume that parental genotypes originate from some arbitrary, latent
population structure. Population stratification is a phenomenon characterized by systematic differences in the distribution of alleles among subgroups of a population. These disparities
typically emerge from social and genetic mechanisms including
non-random mating, migration patterns and `founder effects'
\parencite{Cardon2003} and can often be detected by principal
component analysis \parencite{patterson06_popul_struc_eigen}. This
can introduce spurious associations between genetic variants and
traits \parencite{Lander1994}.
We represent population structure via the node $A$ in the subgraph in
\Cref{fig:parental-genotypes}. The arrows from $A$ to $\bm{M}^m,
\bm{M}^f$ and $\bm{F}^m, \bm{F}^f$ indicate that the distribution of
parental haplotypes depends on the latent population structure. This
is formalized in the assumption below. \begin{assumption}
\label{assum:parental-genotypes}
The parental haplotypes $\bm M^m$, $\bm M^f$, $\bm F^m$, and $\bm F^f$
depend on the latent population structure $A$, so
\[
A \centernot{\independent} (\bm M^m, \bm M^f, \bm F^m, \bm F^f).
\]
\end{assumption}
The node $A$ may also capture any linkage disequilibrium in the
parental haplotypes. That is, because the parental haplotypes are
determined by the same process as the grandparental haplotypes and
so on, recombination introduces dependence among nearby genetic
variants. The distribution of $A$ and the distribution of the parental
haplotypes given $A$ are not important below, because an appropriate
subset of the parental haplotypes will be conditioned on and the paths
from $A$ to $\bm M^m$, $\bm M^f$, $\bm F^m$, and $\bm F^f$ will be blocked.
\subsubsection{Parental phenotypes} \label{sec:parental-phenotypes}
We impose no assumptions on the nature and the distribution of the parental
phenotypes $C^m$ and $C^f$. They can depend arbitrarily on the
parental haplotypes $\bm{M}^m, \bm{M}^f, \bm{F}^m, \bm{F}^f$ and the
population structure $A$, once again because our proposal for almost
exact MR conditions on the parental haplotypes.
\begin{assumption} \label{assum:parental-phenotypes}
The parental phenotypes $C^m$ and $C^f$ are descendants of the
latent population structure $A$ and the corresponding parental
haplotypes (i.e. $(\bm M^m,\bm M^f)$ for $C^m$ and $(\bm F^m,\bm
F^f)$ for $C^f$). In other words, we allow the following dependence:
\[
C^m \centernot{\independent} (A, \bm{M}^m, \bm{M}^f),~
C^f \centernot{\independent} (A, \bm{F}^m, \bm{F}^f).
\]%
\end{assumption}
\subsubsection{Offspring genotypes} \label{sec:offspring-genotypes}
There are two biological processes involved in the genesis of the
offspring's genotype: meiosis (gamete formation) and
fertilization. The meiotic process is briefly reviewed in
\Cref{sec:genetic-preliminaries}, and the key
\Cref{assum:de-novo-distribution} can be represented by the causal
diagram in \Cref{fig:meiosis_dag} (for the mother). A crucial
assumption underlying our almost
exact inference is the exogeneity of the meiosis indicators $\bm
U^m$ and $\bm U^f$. This is reflected in \Cref{fig:fammr_dag,fig:meiosis_dag} as $\bm
U^m$ and $\bm U^f$ have no parents and their only children are the
offspring's haplotypes. Formally, we assume:
\begin{assumption} \label{assum:meiosis-indicator-independence}
The meiosis indicators are independent of parental haplotypes and
phenotypes and any other confounders:
\[
(\bm{U}^m, \bm{U}^f) \perp\mkern-9.5mu\perp (A, C^m, C^f, C, \bm{M}^m, \bm{M}^f, \bm{F}^m, \bm{F}^f).
\]%
\end{assumption}
Many different models have been proposed for the distribution of the
ancestry indicator $\bm U^m$; see \textcite{Otto2019} for a recent
review. Due to the dependence in $\bm U^m$, nearby alleles on the same
chromosome tend to be inherited together. This phenomenon is known as
\emph{genetic linkage}. In \Cref{sec:genetic-preliminaries}, we describe
the classical model of \textcite{Haldane1919} which assumes $\bm
U^m$ follows a Poisson process. This model has been used by
\textcite{Bates2020a} to locate causal variants. We will see in
\Cref{subsec:hmm-simplification} that such Markovian structure
greatly simplifies randomization inference.
Another mechanism that needs to be modeled is fertilization. In
Mendelian inheritance, it is assumed that the potential gametes
(sperms and eggs) come together at random. However, mating may not be
a random event. \emph{Assortative mating} refers to the phenomenon
where individuals are more likely to mate if they have complementary
phenotypes. For example, there is evidence in UK Biobank that a SNP on the \emph{ADH1B} gene related to alcohol consumption is more likely to be shared among spouses relative to non-spouses \parencite{Howe2019}. This suggests assortative mating on
drinking behaviour and may introduce bias to MR studies on alcohol
consumption \parencite{Hartwig2018}. The subgraph describing assortative
mating is shown in \Cref{fig:assortative-mating}, where the mating
indicator $S \in \{0, 1\}$ is a common child of the parental phenotypes
$C^m$ and $C^f$ ($S = 1$ means mating).
In any MR study, we necessarily condition on $S = 1$, otherwise the
offspring would not exist. This is formalized in
\Cref{fig:assortative-mating} by the arrows from $S$ to $\bm{Z}$. In
particular, we may define the offspring's genotype $\bm Z$ as
\begin{align} \label{eqn:offspring-genotype-undefined}
\bm{Z} = \begin{cases}
\bm{Z}^m + \bm{Z}^f, & \text{if $S = 1$}, \\
\text{Undefined}, & \text{if $S = 0$.}
\end{cases}
\end{align}
Notice that the above definition recognizes the fact that the gametes
$\bm{Z}^m$ and $\bm{Z}^f$ are produced regardless of whether
fertilization actually takes place.
Our causal model implies that $\bm{Z}^m \perp\mkern-9.5mu\perp \bm{Z}^f \mid (\bm{M}^m, \bm{M}^f, \bm{F}^m, \bm{F}^f, S = 1)$; however, this is not necessarily a benign assumption. Indeed, there is empirical evidence that gametes may pair up non-randomly \parencite{Nadeau2017}, which could be represented by arrows from $\bm{Z}^m$ and $\bm{Z}^f$ to $S$. This is an example of transmission ratio distortion, which we discuss later in \Cref{sec:discussion}. For now, we simply note that we must assume the absence of this phenomenon.
\subsubsection{Offspring phenotypes} \label{sec:offspring-phenotypes}
Finally, we describe assumptions on the offspring phenotypes. We
are interested in estimating the causal effect of an offspring
phenotype $D \in \mathcal{D} \subseteq
\mathbb{R}$ on another offspring phenotype $Y \in
\mathcal{Y} \subseteq \mathbb{R}$. We refer to $D$ as the exposure
variable and $Y$ as the outcome variable. These phenotypes are
determined by the offspring genotypes and environmental factors
(including parental traits). For a particular realization of the genotypes $\bm z$, we denote the counterfactual exposure as $D(\bm{z})$. Furthermore, under an additional intervention that sets
$D$ to $d$, we denote the counterfactual outcome as $Y(\bm{z},
d)$. These potential outcomes are related to the observed data tuple
$(\bm{Z}, D, Y)$ by
\[\begin{array}{ll}%
D = D(\bm{Z}),~ %
Y = Y(\bm{Z}, D) = Y(\bm{Z},D(\bm{Z})), %
\end{array}\]%
which is a simple extension of the consistency assumption
\eqref{eq:consistency} before.
We are interested in making inference about the counterfactuals $Y(d)
= Y(\bm{Z},d)$ when the exposure is set to $d \in \mathcal{D}$. As the
exposure $D$ typically varies according to population structure,
parental phenotypes and other environmental factors, it is not randomized.
\begin{assumption}
There may be unmeasured confounders between the exposure and outcome,
so that
\[
Y(d) \centernot{\independent} D ~ \text{for some or all $d \in
\mathcal{D}$}.
\]
\end{assumption}
For example, if $D$ is alcohol consumption and $Y$ is cardiovascular
disease, there may exist offspring confounders such as diet or smoking
habits which are common causes of both $D$ and $Y$. The exact nature
of the confounders is not very important as MR uses unconfounded
variation (in $\bm U^m$ and $\bm U^f$) to make causal inference.
It will be helpful to categorize the genetic variants based on whether
they have direct causal effects on $D$ and/or $Y$.
\begin{assumption} \label{assum:snp-subsets}
The set $\mathcal{J} = \{1, \ldots, p\}$ of genetic variants can be
partitioned as $\mathcal{J} = \mathcal{J}_y \cup \mathcal{J}_d \cup
\mathcal{J}_0$, where
\begin{itemize}
\item $\mathcal{J}_y$ includes all \emph{pleiotropic} variants with a direct causal
effect on $Y$ (some of which may have a causal effect on $D$ as well).
\item $\mathcal{J}_d$ includes all causal variants for $D$ with no direct effect on $Y$.
\item $\mathcal{J}_0 = \mathcal{J} \setminus (\mathcal{J}_y \cup
\mathcal{J}_d)$ includes all other variants.
\end{itemize}
\end{assumption}
In our working example in \Cref{fig:fammr_dag}, $\mathcal{J}_y =
\{3\}$, $\mathcal{J}_d = \{2\}$, and $\mathcal{J}_0 = \{1\}$. If the
exposure $D$ indeed has a causal effect on the outcome $Y$, the loci
of the causal variants of $Y$ are given by $\mathcal{J}_y \cup
\mathcal{J}_d$.
For subscripts $s \in \{ 0, d, y \}$, we let $\bm{Z}_s = \{
Z_j \colon j \in \mathcal{J}_s \}$ denote the corresponding genotypes,
which has support $\mathcal{Z}_s = \{ 0, 1, 2
\}^{|\mathcal{J}_s|}$. By \Cref{assum:snp-subsets}, our potential
outcomes can be written as (with an abuse of notation)
\[
D(\bm z) = D(\bm z_d),~Y(\bm z, d) = Y(\bm z_y, d),~Y(\bm z) =
Y(\bm z_y, D(\bm z_d)) = Y(\bm z_y, \bm z_d),
\]
where $\bm z = (\bm z_d, \bm z_y, \bm z_0) \in \mathcal{Z}_d \times \mathcal{Z}_y \times \mathcal{Z}_0 = \mathcal{Z}$ and $d \in \mathcal{D}$.
\Cref{fig:offspring-phenotypes} provides the graphical representation
of \Cref{assum:snp-subsets}. Each element of $\bm{Z}_0$ has no
effect on $D$ or $Y(d)$, each element of $\bm{Z}_d$ has an effect on
$D$ and each element of $\bm{Z}_y$ has an effect on $Y(d)$ (some are
also causes of $D$). The vector $\bm{Z}_y$ contains the so-called
pleiotropic variants that are causally involved in the
expression of multiple phenotypes \parencite{Hemani2018a}. The view that pleiotropy is widespread, if not
universal, is implied in Fisher's infinitesimal model
\parencite{fisher19_correlation} and supported by recent human
genetic studies \parencite{Boyle2017}.
\emph{Dynastic effects}, sometimes called \emph{genetic nurture} \parencite{Kong2018}, is a phenomenon characterized by parental phenotypes exerting a direct influence on the offspring's
phenotypes. This is depicted in
\Cref{fig:dynasticeffects}, where paths emanate from the
parental haplotypes $\bm M^m, \bm M^f$ and $\bm F^m, \bm
F^f$ to the parental phenotypes $C^m$ and $C^f$ and on to the
offspring phenotypes $D$ and $Y$.
\subsection{Conditions for identification}
\label{sec:identification}
With the causal model outlined in \Cref{sec:setup} in mind, we now
describe some sufficient conditions under which some $Z_j \in \bm Z$
is a valid instrumental variable for estimating the causal effect of
$D$ on $Y$. Recall that an instrumental
variable induces unconfounded variation in the exposure without otherwise
affecting the outcome. Due to population stratification
(\Cref{fig:parental-genotypes}), assortative mating
(\Cref{fig:assortative-mating}), and dynastic effects
(\Cref{fig:dynasticeffects}), the offspring genotypes $\bm Z$ as a
whole are usually not properly randomized without conditioning on the
parental haplotypes. That is,
\[
\bm Z \not \perp\mkern-9.5mu\perp D(\bm z), Y(\bm z, d) ~ \text{for some or
all $\bm z \in \mathcal{Z}$ and $d \in \mathcal{D}$.}
\]
To restore validity of genetic instruments, the key idea is to
condition on the parental haplotypes \parencite{Spielman1993, Bates2020a}. This allows us to
use precisely the exogenous randomness in the ancestry indicators $\bm
U^m$ and $\bm U^f$ that occurs during meiosis and fertilization. This idea is formalized in the next proposition.
\begin{proposition}
\label{prop:offspring-genotype-independence}
Under the causal graphical model described in \Cref{sec:setup},
the offspring's haplotype $Z_j^m$ (or genotype $Z_j$) at some site
$j \in \mathcal{J}$ is independent of all ancestral and offspring
confounders given the maternal (or parental) haplotypes at site $j$:
\begin{equation}
\label{eq:zj-independence}
\begin{split}
&Z_j^m \perp\mkern-9.5mu\perp (A, C^m, C^f, C) \mid (M_j^m, M_j^f, S=1), \\
&Z_j \perp\mkern-9.5mu\perp (A, C^m, C^f, C) \mid (M_j^m, M_j^f, F_j^m,
F_j^f, S=1). \\
%
\end{split}
\end{equation}
\end{proposition}
However, the conditional independence \eqref{eq:zj-independence}
alone does not guarantee the validity of $Z_j$ as an instrumental
variable. The main issue is that $Z_j$ might be in linkage disequilibrium with other
causal variants of $Y$, as recognized by \textcite{Bates2020a} in the
context of mapping causal variants. Our goal is
to find a set of variables $\bm V$ such that $Z_j$ is conditionally
independent of the potential outcome $Y(d)$. This is formalized in
the definition below.
\begin{definition} \label{def:identification-conditions}
We say a genotype $Z_j$ is a \emph{valid instrumental variable} given $\bm V$ (for estimating the
causal effect of $D$ on $Y$) if the following conditions are
satisfied:
\begin{enumerate}
\item Relevance: $Z_j \not \perp\mkern-9.5mu\perp D \mid \bm V$;
\item Exogeneity: $Z_j \perp\mkern-9.5mu\perp Y(d) \mid \bm V ~ \text{for all
$d \in \mathcal{D}$}$;
\item Exclusion restriction: $Y(z_j, d) = Y(d) ~ \text{for all
$z_j \in \{0,1,2\}$ and $d \in \mathcal{D}$}$.
\end{enumerate}
Similarly, we say a haplotype $Z_j^m$ is a valid instrument given
$\bm V$ if the same conditions above hold with $Z_j$ replaced by
$Z_j^m$ and $z_j \in \{0,1,2\}$ replaced by $z_j^m \in \{0,1\}$.
\end{definition}
In our setup (\Cref{assum:snp-subsets}), the exclusion restriction is
satisfied if and only if $j \not \in \mathcal{J}_y$.
Returning to the example in \Cref{fig:fammr_dag}, we see that $Z_3$
does not satisfy the exclusion restriction because $Z_3$ has a direct
effect on $Y$. The causal variant $Z_2$ for $D$ would be a valid
instrument if we condition on the corresponding haplotypes and $Z_3$,
but $Z_2$ is not observed in this example. This leaves us with one
remaining candidate instrument: $Z_1$ (and potentially its
haplotypes $Z_1^m$ and $Z_1^f$). The relevance
assumption is satisfied as long as $\bm V$ does not block both of the
following paths
\[\begin{array}{ll}%
Z_1 \leftarrow Z_1^m \leftarrow \bm{U}^m \rightarrow Z_2^m \rightarrow Z_2 \rightarrow D; \\[0.3em]
Z_1 \leftarrow Z_1^f \leftarrow \bm{U}^f \rightarrow Z_2^f \rightarrow Z_2 \rightarrow D. \\[0.3em]
\end{array}\]%
The exclusion restriction is satisfied because $Z_1$ is not a causal
variant for $Y$. Finally, exogeneity is satisfied if $\bm V$ blocks the
paths
\[\begin{array}{ll}%
Z_1 \leftarrow Z_1^m \leftarrow \bm{U}^m \rightarrow Z_3^m \rightarrow Z_3 \rightarrow Y(d); \\[0.3em]
Z_1 \leftarrow Z_1^f \leftarrow \bm{U}^f \rightarrow Z_3^f \rightarrow Z_3 \rightarrow Y(d). \\[0.3em]
\end{array}
\]%
Thus, we have the following result:
\begin{proposition} \label{prop:example-adjustment-set}
For the example in \Cref{fig:fammr_dag}, the following conditional
independence relationships are true for all $d \in
\mathcal{D}$:
\begin{align}
\label{eqn:adjustment-set-1} Z_1^m \perp\mkern-9.5mu\perp Y(d) &\mid (\bm M_1^{mf}, \bm V_{\{3\}}^m = (\bm M_3^{mf}, Z_3^m), S = 1), \\
\label{eqn:adjustment-set-2} Z_1 \perp\mkern-9.5mu\perp Y(d) & \mid (\bm M_1^{mf}, \bm F_1^{mf}, \bm V_{\{3\}} = ( \bm M_3^{mf}, \bm F_3^{mf}, Z_3), S=1).
\end{align}
The adjustment variables above are minimal in the sense
that no subsets of them satisfy the same conditional independence.
\end{proposition}
\begin{table}[tb]
\caption{Some paths between $Z_1$ and $Y(d)$ in \Cref{fig:fammr_dag}.}
\label{tab:dseparation}
\begin{center}
\begin{tabular}{ccl}
\toprule
Name of bias & Path & Blocking variable \\
\midrule
Dynastic effect & $ Z_1^m \leftarrow M_1^m, M_1^f \rightarrow C^m \rightarrow Y(d)$ & $(M_1^m, M_1^f)$ \\[0.5em]
Population stratification & $ Z_1^m \leftarrow M_1^m, M_1^f \leftarrow A \rightarrow Y(d)$ & $(M_1^m, M_1^f)$ \\[0.5em]
Pleiotropy & $Z_1^m \leftarrow \bm{U}^m \rightarrow Z_3^m \rightarrow Z_3 \rightarrow Y(d)$ & $Z_3^m$ or $Z_3$ \\[0.5em]
Assortative mating & $Z_1^m \leftarrow M_1^m, M_1^f \leftarrow C^m \rightarrow \boxed{S} \leftarrow$ & $(M_1^m, M_1^f)$ or $Z_3^f$ \\
& $C^f \leftarrow F_3^m, F_3^f \rightarrow Z_3^f \rightarrow Z_3 \rightarrow Y(d)$ & or $Z_3$ or $(F_3^m, F_3^f)$ \\[0.5em]
Nearly determined ancestry & $Z_1^m \leftarrow \bm{U}^m \rightarrow \boxed{Z_3^m} \leftarrow $ & $(M_3^m, M_3^f)$ \\
& $M_3^m, M_3^f \leftarrow A \rightarrow Y(d)$ & \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\begin{proof}
The conditional independence follows almost immediately from
our discussion above. To show $\bm V = (\bm M_1^{mf}, \bm V_{\{3\}}^m)$ is
minimal for \eqref{eqn:adjustment-set-1} and better understand the
potential biases in MR studies, \Cref{tab:dseparation} lists several
paths between $Z_1^m$ and $Y(d)$ that are named after the key
biological mechanism involved. The table only includes the maternal
paths, but the same blocking also holds for the paternal paths.
\end{proof}
To our knowledge, the potential bias in
\Cref{tab:dseparation} due to nearly determined ancestry has not yet
been identified in the literature. This is a form of collider bias introduced because the ancestry indicator
$U_j^m$ can often be almost perfectly determined if we are given the
mother's haplotypes and the offspring's maternal haplotype. For
example, if the mother is heterozygous $M_3^m = 1,
M_3^f = 0$ and the offspring's maternal haplotype is $Z_3^m = 1$,
then we know that $U_3^m = m$ is true with very high probability. Due to
genetic linkage, there is also a high probability that $U_1^m = m$.
We conclude this section with a sufficient condition for the validity
of $Z_j^m$ and $Z_j$ in our general setting. To simplify the
exposition, let $\bm V_{\mathcal{B}}^m = (\bm{M}_{\mathcal{B}}^{mf},
\bm{Z}_{\mathcal{B}}^m)$ be a set of maternal
adjustment variables, where $\mathcal{B} \subseteq \mathcal{J}
\setminus \{j\}$ is a subset of loci. Furthermore, let $\bm
V_{\mathcal{B}} = (\bm{M}_{\mathcal{B}}^{mf}, \bm
{F}_{\mathcal{B}}^{mf}, \bm{Z}_{\mathcal{B}})$.
\begin{theorem} \label{thm:sufficient-adjustment-set}
Suppose $\bm{Z} = (Z_1,\dotsc, Z_p)$ is a full chromosome. Consider
the general causal model for Mendelian randomization in
\Cref{sec:setup} and let $j \in
\mathcal{J}$ be the index of a candidate instrument. Then $Z_j^m$ is a
valid instrument conditional on $(\bm M_j^{mf}, \bm V_{\mathcal{B}}^m)$ if the following conditions
are satisfied:
\begin{enumerate}
\item $Z_j^m \not \perp\mkern-9.5mu\perp \bm Z_d^m \mid (\bm M_j^{mf}, \bm V_{\mathcal{B}}^m, S = 1)$;
\item $Z_j^m \perp\mkern-9.5mu\perp \bm Z_y^m \mid (\bm M_j^{mf}, \bm V_{\mathcal{B}}^m, S = 1)$;
\end{enumerate}
\end{theorem}
\begin{proof}
The relevance of $Z_j^m$ follows from the first condition, because $Z_j^m$
is dependent on some causal variants (or is itself a causal
variant) of $D$. The exclusion restriction ($j \not \in
\mathcal{J}_y$) follows directly from the second
condition. For exogeneity, paths from $Z_j^m$ to $Y(d)$ either go
through the confounders $A$, $C^f$, $C^m$, or $C$, which are blocked
by $\bm M_j^{mf}$ by \Cref{prop:offspring-genotype-independence},
or through some causal variants of the outcome as in $Z_j^m \leftarrow
\bm U^m \rightarrow \bm Z_y^m \rightarrow \bm Z_y \rightarrow Y(d)$,
which are blocked by the second condition.
\end{proof}
Since \Cref{prop:offspring-genotype-independence} ensures that, after conditioning on $\bm{M}_j^{mf}$, $Z_j^m$ is independent of all ancestral and offspring confounders $(A, C^m, C^f, C)$, the only remaining threats to the validity of $Z_j^m$ as an instrument are irrelevance and pleiotropy. The set $\mathcal{B}$ is chosen to ensure that $Z_j^m$ is independent of all pleiotropic variants conditional on $\bm{V}_{\mathcal{B}}^m$ (condition 2 of \Cref{thm:sufficient-adjustment-set}) but not independent of the set of causal variants (condition 1 of \Cref{thm:sufficient-adjustment-set}). We will work with a general $\mathcal{B}$ until \Cref{subsec:hmm-simplification}, where we describe the structure of this set. It is straightforward to extend \Cref{thm:sufficient-adjustment-set} to establish the validity of the genotype $Z_j$ at locus $j$ as an instrumental variable; details are omitted.
\subsection{Hypothesis testing}
\label{subsec:hypothesis-test}
This section describes our randomization-based approach to statistical inference in Mendelian randomization studies. We begin by describing an idealized exact setting where the randomization distribution is known. We then discuss the realistic setting where the randomization distribution must be approximated by a meiosis model.
We first describe the simplest case where we use a single genetic
variant from the offspring's maternally-inherited haplotype as an
instrument. In particular, define the \emph{propensity
score} for some instrument $Z_{ij}^m$ at locus $j$ of individual $i$
as
\begin{equation} \label{eqn:exact-propensity-score}
\pi_{ij}^m = \mathbb{P}(Z_{ij}^m = 1 \mid \bm{M}_{ij}^{mf}, \bm{V}_{i\mathcal{B}}^m)
\end{equation}
where $\mathcal{B} \subseteq \mathcal{J} \setminus \{j\}$. In words, $\pi_{ij}^m$ describes the
randomization distribution of the haplotype $Z_{ij}^m$ conditional on
a set of parental and offspring haplotypes or genotypes chosen to satisfy
the conditions in \Cref{thm:sufficient-adjustment-set}.
Let us consider a model for the potential outcomes of the form
\begin{equation}
Y_i(d) = Y_i(0) + \beta d ~ \text{for all $d \in \mathcal{D}$ and $i = 1, \ldots, N$}.
\label{eqn:potential-outcomes}
\end{equation}
Let $\mathcal{F} = \{Y_i(0) \colon i = 1, \ldots, N\}$ denote the
collection of potential outcomes for all individuals $i$ under no
exposure $d = 0$. Our goal is to test null hypotheses of the form
\begin{equation} \label{eqn:hypothesis-test}
H_0 \colon \beta = \beta_0, ~ H_1 \colon \beta \neq \beta_0
\end{equation}
where $\beta_0$ is some hypothetical value of the causal effect. If the null hypothesis is true, then the model \eqref{eqn:potential-outcomes} implies that the potential outcome under no exposure $d = 0$ can be identified from the observed data since
\[
Y_i(0) = Y_i(D_i) - \beta_0 D_i = Y_i - \beta_0 D_i.
\]
For ease of notation, let $Q_i(\beta_0) = Y_i - \beta_0 D_i$ be the adjusted outcome.
\Cref{thm:identification} and the model \eqref{eqn:potential-outcomes}
imply that we are testing the following conditional independence:
\begin{equation} \label{eqn:hypothesis-adjusted-outcome}
H_0 \colon Z_{ij}^m \perp\mkern-9.5mu\perp Q_i(\beta_0) \mid (\bm{M}_{ij}^{mf}, \bm{V}_{i\mathcal{B}}^m), \quad H_1 \colon Z_{ij}^m \centernot{\independent} Q_i(\beta_0) \mid (\bm{M}_{ij}^{mf}, \bm{V}_{i\mathcal{B}}^m).
\end{equation}
Suppose we have selected a test statistic $T(\bm{Z}_j^m \mid
\mathcal{F})$ where possible dependence on $(\bm{M}_j^{mf},
\bm{V}_{\mathcal{B}}^m)$ is implicit. For example, this could be the
coefficient from a regression of the adjusted outcome on the
instrument. The randomization-based p-value for $H_0$ can then be
written as
\begin{equation}\begin{array}{ll} \label{eqn:randomization-pvalue}
P(\bm{Z}_j^m \mid \mathcal{F}) &= \tilde{\mathbb{P}}(T(\tilde{\bm{Z}}_j^m \mid \mathcal{F}) \geq T(\bm{Z}_j^m \mid \mathcal{F})) \\[1em]
&= \displaystyle \sum_{\tilde{\bm{z}}_j^m \in \{0,1\}^N} I\{T(\tilde{\bm{z}}_j^m \mid \mathcal{F}) \geq T(\bm{Z}_j^m \mid \mathcal{F})\} \prod_{i=1}^N (\pi_{ij}^m)^{\tilde{z}_{ij}^m} (1-\pi_{ij}^m)^{1-\tilde{z}_{ij}^m},
\end{array}\end{equation}
where $I\{\cdot\}$ is the indicator function, $\tilde{\bm{Z}}_j^m$ denotes a random draw from the distribution \eqref{eqn:exact-propensity-score} and $\tilde{\mathbb{P}}$ denotes probability with respect to the distribution \eqref{eqn:exact-propensity-score}. Given the propensity score and the
null hypothesis, this p-value can be computed exactly by enumerating
over all possible values of $\tilde{\bm{Z}}^m$ or approximated by
drawing $\tilde{\bm{Z}}^m$ a finite number of times from
$\bm{\pi}_j^m$; see Algorithm \ref{algo:pvalue} for the pseudo-code. It is
straightforward to replace the haplotype $Z_{ij}^m$ with the genotype
$Z_{ij}$; the randomization distribution of $Z_{ij} \in \{0,1,2\}$ is
a simple function of $\pi_{ij}^m$ and $\pi_{ij}^f$ since meioses in
the mother and father are independent.
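For example, this function of the two propensity scores could be computed as in the following minimal sketch (the helper name is ours):
\begin{verbatim}
def genotype_distribution(pi_m, pi_f):
    """Randomization distribution of Z = Z^m + Z^f in {0, 1, 2},
    exploiting the independence of the two meioses."""
    p0 = (1 - pi_m) * (1 - pi_f)
    p1 = pi_m * (1 - pi_f) + (1 - pi_m) * pi_f
    p2 = pi_m * pi_f
    return p0, p1, p2
\end{verbatim}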
\Cref{eqn:randomization-pvalue} highlights that knowledge of the
propensity score $\bm{\pi}_j^m$ is essential for performing
randomization inference. However, $\bm{\pi}_j^m$ describes a biochemical
process occurring in the human body which is not precisely known to,
or controlled by, the analyst. Therefore, the best we can do is
perform \emph{almost exact} inference by replacing $\bm{\pi}_j^m$ with
a reasonable model-based approximation. The model we use in this paper
is Haldane's hidden Markov model described in
\Cref{sec:randomization-distribution}. As discussed in
\Cref{sec:genetic-preliminaries} our method is modular in the sense
that more sophisticated meiosis models can easily be substituted as
the randomization distribution; see \textcite{Broman2000,Otto2019} for
discussion and comparison of alternative models.
\begin{algorithm}[H] \label{algo:pvalue}
\caption{Almost exact test}
\SetAlgoLined
Compute the test statistic on the observed data $t = T(\bm{Z}_j^m \mid \mathcal{F})$\;
\For{$k = 1, \ldots, K$}{
Sample a counterfactual instrument $\tilde{\bm{Z}}_j^m$ from the
randomization distribution (e.g. \Cref{thm:propensity-score} in
\Cref{sec:randomization-distribution} based on Haldane's model)\;
Compute the test statistic using the counterfactual instrument $\tilde{t}_k = T(\tilde{\bm{Z}}_j^m \mid \mathcal{F})$\;
}
Compute an approximation to the p-value in \Cref{eqn:randomization-pvalue} via the proportion of $\tilde{t}_1, \ldots, \tilde{t}_K$ which are at least as large as $t$:
\[ \hat{P}(\bm{Z}_j^m \mid \mathcal{F}) = \frac{|\{k \colon t \leq \tilde{t}_k \}|}{K}. \]
\end{algorithm}
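For concreteness, a direct Monte Carlo implementation of the loop above might look as follows (a minimal sketch; the helper and argument names are ours, and we assume the propensity scores $\pi_{ij}^m$ have already been computed from the meiosis model):
\begin{verbatim}
import numpy as np

def almost_exact_pvalue(Z_m, Y, D, beta0, pi_m, T, K=10000, seed=0):
    """Monte Carlo approximation of the randomization p-value.
    Z_m : observed 0/1 instrument, one entry per individual;
    pi_m: propensity scores P(Z_ij^m = 1 | adjustment set);
    T   : test statistic, a function of (instrument, adjusted outcome)."""
    rng = np.random.default_rng(seed)
    Q = Y - beta0 * D                     # adjusted outcome under H0
    t = T(Z_m, Q)                         # observed test statistic
    t_tilde = np.array([T(rng.binomial(1, pi_m), Q) for _ in range(K)])
    return np.mean(t_tilde >= t)          # proportion at least as large
\end{verbatim}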
\subsection{Choosing a test statistic} \label{subsec:test-statistic}
Our randomization test retains exact nominal size under the null
hypothesis regardless of the test statistic; however, we can often improve
power by selecting a test statistic that better blocks the confounding
paths between $Z_j$ and $Q(\beta_0)$. We show in
\Cref{sec:simulation-test-statistics} that test statistics that do not control for confounders can have almost
no power for certain null hypotheses. We propose a powerful test
statistic that can be constructed by including a so-called ``clever covariate'' \parencite{scharfstein1999adjusting, Rose2008} in the test statistic
\[
X_{ij}^m = \frac{Z_{ij}^m}{\pi_{ij}^m} - \frac{1-Z_{ij}^m}{1-\pi_{ij}^m}
\]
such that
\[
T(\bm{Z}_j^m \mid \mathcal{F}) = \sum_{i=1}^N Q_i(\beta_0) X_{ij}^m.
\]
This covariate exploits the ``central role of the propensity score'' \parencite{Rosenbaum1983}, namely that
\[
Y(d) \perp\mkern-9.5mu\perp Z_j^m \mid \pi_j^m,
\]
where $\pi_j^m$ is defined as in equation \eqref{eqn:exact-propensity-score}, provided $0 < \pi_j^m < 1$. Conditioning on $\pi_j^m$ blocks all confounding paths between $Z_j^m$ and $Y(d)$. Moreover, $\pi_j^m$ reduces the potentially high-dimensional and highly correlated adjustment set $(\bm{M}_j^{mf}, \bm{V}_{\mathcal{B}}^m)$ to a single variable.
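A minimal sketch of this statistic, written to plug into the hypothetical \texttt{almost\_exact\_pvalue} helper above (again with illustrative names, and assuming the positivity condition holds elementwise):
\begin{verbatim}
import numpy as np

def clever_covariate_statistic(pi_m):
    """Build T(z, Q) = sum_i Q_i * X_i, where
    X_i = z_i/pi_i - (1 - z_i)/(1 - pi_i) is the clever covariate.
    Requires 0 < pi_m < 1 elementwise (positivity)."""
    def T(z, Q):
        X = z / pi_m - (1 - z) / (1 - pi_m)
        return np.sum(Q * X)
    return T

# Usage with the earlier helper:
#   p = almost_exact_pvalue(Z_m, Y, D, beta0, pi_m,
#                           T=clever_covariate_statistic(pi_m))
\end{verbatim}
Taking the absolute value of the sum gives a two-sided version of this statistic.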
Alternatively, we could improve power by constructing data-driven test
statistics via flexible machine learning techniques such as neural
networks, gradient boosting or random forests, although this may be
computationally costly \parencite{Watson2019}.
\subsection{Simplification via Markovian
structure} \label{subsec:hmm-simplification}
Conditional independencies implied by Haldane's model for meiosis also
allow us to greatly simplify the sufficient confounder adjustment
set. \Cref{thm:sufficient-adjustment-set} highlights a trade-off in
choosing the adjustment variables $\bm V_{\mathcal{B}}$: by choosing a
larger subset $\mathcal{B}$, the second condition is more likely but
the first condition is less likely to be satisfied. The reason is
that, when conditioning on more genetic variants, we are more likely
to block the pleiotropic paths to $Y$ but we are also more likely to
block the path between the instrument and the causal variant.
The conditions in \Cref{thm:sufficient-adjustment-set} are trivially
satisfied with $\mathcal{B} = \emptyset$ if $\mathcal{J}_y =
\emptyset$ and $\mathcal{J}_d \neq \emptyset$, i.e., all causal
variants of $Y$ on this chromosome only affect $Y$ through
$D$. However, this is a rather unlikely situation. More often, we
need to condition on other variants to block the pleiotropic paths, as
illustrated in the working example in \Cref{fig:fammr_dag}. To this
end, we can utilize the Markovian structure on the meiosis indicators
$\bm U^m$ and $\bm U^f$ implied by Haldane's model. Roughly speaking, such structure allows us to
conclude $Z_j \perp\mkern-9.5mu\perp Z_l \mid \bm M_j^{mf}, \bm F_j^{mf}, \bm
V_{\{k\}}$ for all $j < k < l$ if there are no mutations and $M_k^f \neq M_k^m$.
Let $b_1$ and $b_2$ ($b_1 < j < b_2$) be two heterozygous loci in
the mother's genome, i.e., $M_{b_1}^f \neq M_{b_1}^m$ and
$M_{b_2}^f \neq M_{b_2}^m$. Let $\mathcal{A} =
\{b_1+1,\dotsc,b_2-1\}$ be the loci between $b_1$ and $b_2$, which of
course contains the locus $j$ of interest.
\begin{theorem} \label{thm:identification}
Consider the setting in \Cref{thm:sufficient-adjustment-set} and
suppose
\begin{enumerate}
\item The meiosis indicator process is a Markov chain so that $U^m_j
\perp\mkern-9.5mu\perp U^m_l \mid U^m_k$ for all $j < k < l$;
\item There are no mutations: $\epsilon = 0$.
\end{enumerate}
Then
$Z_j^m$ is a valid instrumental variable conditional on $(\bm
M_j^{mf}, \bm V^m_{\{b_1,b_2\}})$ if
\begin{enumerate}[resume]
\item $\mathcal{A} \cap \mathcal{J}_d \neq \emptyset$;
\item $\mathcal{A} \cap \mathcal{J}_y = \emptyset$.
\end{enumerate}
\end{theorem}
\begin{proof}
Because there are no mutations and $M_{b_1}$ and $M_{b_2}$ are
heterozygous, we can uniquely determine $U^m_{b_1}$ and $U^m_{b_2}$
from $\bm V_{\{b_1,b_2\}}^m$. By the assumed Markovian structure,
this means that
\[
Z_j^m \perp\mkern-9.5mu\perp Z_l^m \mid \bm M_j^{mf}, \bm
V^m_{\{b_1,b_2\}}~\text{for all}~l < b_1~\text{or}~l > b_2.
\]
Thus, the last two conditions in \Cref{thm:identification} imply the
first two conditions in \Cref{thm:sufficient-adjustment-set}.
\end{proof}
One can easily mirror the above result for using the paternal haplotype $Z_j^f$ as an instrumental variable. Furthermore, let $b_1'$ and $b_2'$ ($b_1' < j < b_2'$) be two heterozygous loci in the father's genome. Then it is easy to see that $Z_j = Z_j^m + Z_j^f$ is a valid instrument conditional on $(\bm
M_j^{mf}, \bm F_j^{mf}, \bm V^m_{\{b_1,b_2\}},\bm
V^f_{\{b_1',b_2'\}})$ if the last two conditions hold for the
interval $\mathcal{A} = \{\min(b_1,
b_1')+1,\dotsc,\max(b_2,b_2')-1\}$.
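To make the simplification concrete: under \Cref{thm:identification} the ancestry indicators at the flanking heterozygous loci are known exactly, so $\pi_j^m$ reduces to a two-state Markov ``bridge'' probability. The following minimal sketch (our own illustrative code, assuming Haldane's map function and $\epsilon = 0$; cf.\ \Cref{thm:propensity-score} in \Cref{sec:randomization-distribution}) computes it:
\begin{verbatim}
import numpy as np

def theta(d):
    """Recombination fraction over genetic distance d (in Morgans),
    via Haldane's map function."""
    return 0.5 * (1.0 - np.exp(-2.0 * d))

def prob_U_is_m(u_b1, u_b2, d1, d2):
    """P(U_j = 'm') for a locus j lying between two loci with known
    ancestry indicators u_b1, u_b2 (each 'm' or 'f'), at genetic
    distances d1 (from b1 to j) and d2 (from j to b2)."""
    w = {}
    for u in ("m", "f"):
        step1 = theta(d1) if u != u_b1 else 1 - theta(d1)
        step2 = theta(d2) if u != u_b2 else 1 - theta(d2)
        w[u] = step1 * step2      # Markov bridge: forward * backward
    return w["m"] / (w["m"] + w["f"])

def propensity(M_m_j, M_f_j, u_b1, u_b2, d1, d2):
    """pi_j^m = P(Z_j^m = 1 | flanking ancestry), with no mutations."""
    pm = prob_U_is_m(u_b1, u_b2, d1, d2)
    return pm * M_m_j + (1 - pm) * M_f_j
\end{verbatim}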
Under the setting in \Cref{thm:identification}, we can partition
the offspring genome into mutually independent subsets by conditioning
on heterozygous parental genotypes. This partition is useful for
constructing independent p-values when we have multiple
instruments. Suppose we have a collection of genomic positions
$\mathcal{B} = \{b_1, \ldots, b_k\}$ that will be conditioned on and
let $\mathcal{A}_l = \{b_{l-1}+1, \ldots, b_l-1 \}$ be the loci in
between (with the convention $b_0 = 0$ and $b_{k+1} = p+1$). This induces the
following partition of the chromosome:
\[\mathcal{J} = \mathcal{A}_1 \cup \{b_1\} \cup \mathcal{A}_2 \cup
\{b_2\} \cup \ldots \cup \mathcal{A}_k \cup \{b_k\} \cup
\mathcal{A}_{k+1}.\]
\begin{proposition} \label{prop:independent-instruments}
Suppose $M_j^m \neq M_j^f$ for all $j \in \mathcal{B}$. Then, under
the first two assumptions in \Cref{thm:identification}, we have
\[
Z_j^m \perp\mkern-9.5mu\perp Z_{j^{\prime}}^m \mid (\bm{M}_j^{mf}, \bm{M}_{j^{\prime}}^{mf}, \bm{V}^m_{\mathcal{B}}).
\]
for any $j \in \mathcal{A}_{l}$ and $j^{\prime} \in \mathcal{A}_{l^{\prime}}$ such that $l \neq l^{\prime}$.
\end{proposition}
\begin{proof}
The proof follows from an almost identical argument to
\Cref{thm:sufficient-adjustment-set}. The assumption that $\epsilon
= 0$ means that $U_j^m$ is uniquely determined for all $j \in
\mathcal{B}$ from $\bm{M}^{mf}_j$ and $Z^m_j$. Therefore the assumed
Markovian structure implies that conditioning on
$\bm{V}_{\mathcal{B}}^m$, along with the parental haplotypes
$\bm{M}_j^{mf}$ and $\bm{M}_{j^{\prime}}^{mf}$, then induces the
conditional independence.
\end{proof}
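This partition is simple bookkeeping to implement; a minimal sketch (with an illustrative helper name):
\begin{verbatim}
def partition_loci(p, B):
    """Split loci 1..p into blocks A_1, A_2, ..., A_{k+1} separated
    by the sorted conditioning loci B = [b_1, ..., b_k], using the
    convention b_0 = 0 and b_{k+1} = p + 1."""
    blocks, prev = [], 0
    for b in sorted(B):
        blocks.append(list(range(prev + 1, b)))   # loci strictly between
        prev = b
    blocks.append(list(range(prev + 1, p + 1)))
    return blocks

# Example: partition_loci(10, [3, 7]) -> [[1, 2], [4, 5, 6], [8, 9, 10]]
\end{verbatim}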
\subsection{Multiple instruments} \label{subsec:multiple-ivs}
\Cref{prop:independent-instruments} allows us to formalize the
intuition that genetic instruments across the genome can provide
independent evidence about the causal effect of the exposure, if the
right loci are conditioned on.
\begin{corollary} \label{corr:independent-tests}
$Z_j^m$ and $Z_{j^{\prime}}^m$ are independent valid instruments conditional on $(\bm{M}_j^{mf}, \bm{M}_{j^{\prime}}^{mf}, \bm{V}^m_{\mathcal{B}})$ if
\begin{enumerate}
\item The first two assumptions of \Cref{thm:identification} hold;
\item $\mathcal{A}_l \cap \mathcal{J}_d \neq \emptyset$ and $\mathcal{A}_{l^{\prime}} \cap \mathcal{J}_d \neq \emptyset$;
\item $\mathcal{A}_l \cap \mathcal{J}_y = \emptyset$ and $\mathcal{A}_{l^{\prime}} \cap \mathcal{J}_y = \emptyset$.
\end{enumerate}
\end{corollary}
\Cref{corr:independent-tests} says that any two instruments are valid
and independent if they lie within separate partitions and each
partition contains a causal variant of the exposure and does not
contain any pleiotropic variants (i.e. with a direct effect on $Y$ not through $D$). As a result of this corollary, we
can combine the p-values using standard procedures to test the
intersection or global null hypothesis
\parencite{bretz16_multip_compar_using_r}.
One such procedure is called Fisher's method \parencite{Fisher1925, Wang2019}. If $\{ p_1, p_2, \ldots, p_k \}$ are a collection of independent p-values then, when all of the corresponding null hypotheses are true (or a single shared null hypothesis is true),
\[
-2 \sum_{j = 1}^k \log(p_j) \sim \chi_{2k}^2,
\]
where $\chi_{2k}^2$ denotes the chi-squared distribution with $2k$ degrees of freedom. We use Fisher's method to aggregate our independent p-values in the applied example in \Cref{sec:applied-example}.
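For reference, a minimal sketch of this combination step (using \texttt{scipy}, whose \texttt{stats.combine\_pvalues} implements the same method):
\begin{verbatim}
import numpy as np
from scipy import stats

def fisher_combine(pvalues):
    """Combine independent p-values via Fisher's method."""
    pvalues = np.asarray(pvalues)
    statistic = -2.0 * np.sum(np.log(pvalues))
    return stats.chi2.sf(statistic, df=2 * len(pvalues))
\end{verbatim}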
As some instruments may
violate the exclusion restriction, a more robust approach is to test
the partial conjunction of the null hypotheses
\parencite{Wang2019}. In practice, it may not be
possible to separate closely linked instruments into partitions
separated by a heterozygous variant, in which case the hypothesis
\eqref{eqn:hypothesis-adjusted-outcome} can be tested using $(Z_j^m, Z_{j^{\prime}}^m)$
jointly. \Cref{cor:multiple-instruments} in \Cref{sec:technical-proofs} derives the joint randomization distribution of a collection of instruments.
\section{Simulation} \label{sec:simulation}
\subsection{Setup} \label{sec:simulation-setup}
In this section we explore the properties of our almost exact test via
simulation. The setup of the simulation is described in detail in
\Cref{sec:simulation-description}. To summarize, we consider a null
effect of an exposure on an outcome (i.e. $\beta = 0$), both of which
have variance one, using 5 genetic instruments on different
chromosomes. %
The instruments are non-causal markers for nearby causal
variants and there are also pleiotropic variants in linkage
disequilibrium with the instruments. From the above setup we simulate
a sample of 15,000 parent-offspring trios.
To make our setup
more tangible, \Cref{tab:counterfactual-simulation-data} shows the first 6
lines of observed and counterfactual data (in red) from the simulation for one of the instruments
and corresponding parental haplotypes. We can see that individual $4$
will provide almost no information for a test of the null hypothesis;
both of her parents are homozygous so there is no randomization in her
genotype outside of de novo mutations. Conversely, both of individual
$1$'s parents are heterozygous so she could receive both major
alleles, both minor alleles or one of each.
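This intuition is easy to operationalize: a trio contributes randomization-based variation at an instrument locus only if at least one parent is heterozygous there. A minimal sketch (with hypothetical 0/1 allele vectors, one entry per individual):
\begin{verbatim}
import numpy as np

def informative_trios(M_m, M_f, F_m, F_f):
    """Flag individuals whose instrument is actually randomized:
    at least one parent must be heterozygous at the locus
    (otherwise only de novo mutations vary the genotype)."""
    return (M_m != M_f) | (F_m != F_f)
\end{verbatim}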
Suppose we wish to test the null hypothesis $H_0: \beta = -0.3$. Column $\tilde{Z}_i$ in \Cref{tab:counterfactual-simulation-data} shows a counterfactual draw of each individual's instrument conditional on the adjustment set given in \Cref{eqn:sim-adjustment-set} in \Cref{sec:simulation-description}, along with the adjusted outcome $Q_i(-0.3)$. Note that $\tilde{Z}_i$ is independent of $Q_i(-0.3)$ by construction, so the null hypothesis is necessarily satisfied for this counterfactual. As expected, individual $4$ has the same genotype in this counterfactual; individual $1$, however, now inherits both minor alleles. \Cref{fig:simulation-null-distribution} plots the distribution of 10,000 counterfactual test statistics drawn under the null hypothesis. The test statistic is the F-statistic from a regression of the adjusted outcome on the instruments. The bars highlighted in red are larger than the observed test statistic, such that the almost exact p-value is around 0.13.
\begin{table}[!ht]
\caption{First 6 lines of observed and counterfactual (red) data from the simulation}
\begin{center}
\begin{tabular}{cccccccccc}
\toprule
$i$ & $Z_i$ & \textcolor{red}{$\tilde{Z}_i$} & $M_i^m$ & $M_i^f$ & $F_i^m$ & $F_i^f$ & $D_i$ & $Y_i$ & $Q_i(-0.3)$ \\
\midrule
1 & 1 & \textcolor{red}{2} & 1 & 0 & 1 & 0 & 1.11 & 0.73 & 1.06 \\
2 & 0 & \textcolor{red}{1} & 1 & 0 & 0 & 0 & 0.83 & -0.52 & 0.77 \\
3 & 1 & \textcolor{red}{1} & 1 & 0 & 0 & 0 & 0.94 & 0.31 & 0.59 \\
4 & 0 & \textcolor{red}{0} & 0 & 0 & 0 & 0 & 1.43 & 3.30 & 3.73 \\
5 & 0 & \textcolor{red}{0} & 0 & 0 & 0 & 0 & 0.15 & 1.34 & 1.38 \\
6 & 0 & \textcolor{red}{0} & 0 & 0 & 0 & 0 & -0.14 & 1.60 & 1.56 \\
\bottomrule
\end{tabular}
\label{tab:counterfactual-simulation-data}
\end{center}
\end{table}
\begin{figure}[!ht]
\begin{center}
\caption{Histogram of 10,000 test statistics under the exact null hypothesis $H_0: \beta = -0.3$}
\includegraphics[scale=0.65]{null_dist.pdf}
\label{fig:simulation-null-distribution}
\end{center}
\end{figure}
\subsection{Power} \label{sec:simulation-test-statistics}
In this section we simulate the power of our almost
exact test using a correct adjustment set (see
\Cref{eqn:sim-adjustment-set} in
\Cref{sec:simulation-description}). As the haplotypes are simulated
according to Haldane's meiosis model, the randomization test should be
exact. This is verified by the near-uniform distributions of the
p-values under the correct $\beta = 0$ in the left panels of
\Cref{fig:p-value-histogram}.
The histograms on the right side of \Cref{fig:p-value-histogram}
depict the distribution of p-values under an alternative hypothesis
$H_1: \beta = 0.5$. The power to reject this hypothesis
varies significantly across the choices of test
statistic. The simple $F$-statistic based on a linear regression of
the adjusted outcome on the instruments (test statistic 1) has almost
no power, while the test statistic obtained from the same model but with
the propensity score included as a clever covariate (test statistic 2)
has a reasonable power of about 0.52.
\Cref{fig:power-curves} expands upon the previous figure by plotting a
power curve for test statistic 1 and 2. We can see that test statistic
1 has power almost equal to $0$ between $\beta_0 = 0$ and $\beta_0 =
1$. This occurs because the simple two-stage least squares estimator unconditional on the adjustment set is upward biased, with an Anderson-Rubin 95\% confidence interval of 0.64--0.89. We have minimal power to reject null hypotheses in that region unless we condition on the confounders in the test statistic, because the resampled instruments retain their correlation with the confounders.
Test statistic 2, which conditions on the confounders via a clever covariate, has a power curve that is centred
on the true null $\beta_0 = 0$ and has significantly improved power in
the region between $\beta_0 = 0$ and $\beta_0 = 1$. However, it is
interesting to note that using test statistic 2 is not always more
powerful than test statistic 1.
\begin{figure}
\centering
\caption{Histograms of 1,000 p-values for several null hypotheses
and test statistics. Test statistic 1 is the F-statistic from a
linear regression of the adjusted outcome on the
instruments. Test statistic 2 is similar but
includes the propensity scores for each instrument as
covariates. Test statistic 3 includes only the parental genotypes for
each instrument as covariates.} \label{fig:p-value-histogram}
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{images/pvalue_correct_n1a1.pdf}
\caption{$H_0: \beta = 0$ and test statistic 1}
\label{fig:null-1-test-statistic-1}
\end{subfigure}
\hspace{5mm}
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{images/pvalue_correct_n2a1.pdf}
\caption{$H_0: \beta = 0.5$ and test statistic 1}
\label{fig:null-2-test-statistic-1}
\end{subfigure}%
\par\bigskip
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{images/pvalue_correct_n1a2.pdf}
\caption{$H_0: \beta = 0$ and test statistic 2}
\label{fig:null-1-test-statistic-2}
\end{subfigure}
\hspace{5mm}
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{images/pvalue_correct_n2a2.pdf}
\caption{$H_0: \beta = 0.5$ and test statistic 2}
\label{fig:null-2-test-statistic-2}
\end{subfigure}%
\par\bigskip
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{images/pvalue_correct_n1a3.pdf}
\caption{$H_0: \beta = 0$ and test statistic 3}
\label{fig:null-1-test-statistic-3}
\end{subfigure}
\hspace{5mm}
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{images/pvalue_correct_n2a3.pdf}
\caption{$H_0: \beta = 0.5$ and test statistic 3}
\label{fig:null-2-test-statistic-3}
\end{subfigure}%
\end{figure}
\begin{figure}[!ht]
\begin{center}
\caption{Power curves for two choices of test statistic. Test statistic 1 is the F-statistic from a naive regression of the adjusted outcome on the instruments. Test statistic 2 is similar but includes the propensity scores for each instrument as covariates. Each point on the figure is the rejection frequency over 1,000 replications.}
\includegraphics[scale=0.7]{images/power_curves.pdf}
\label{fig:power-curves}
\end{center}
\end{figure}
%
%
\section{Applied example} \label{sec:applied-example}
\subsection{Preliminaries} \label{sec:applied-example-preliminaries}
In this section we illustrate our approach using a negative control and a positive control. The negative control is the effect of a child's BMI at age 7 on their mother's pre-pregnancy BMI. Dynastic effects could induce a spurious correlation between a child's BMI-associated variants and their mother's pre-pregnancy BMI, opening the backdoor path seen in \Cref{fig:dynasticeffects}; closing this path is crucial for reliable causal inference. The positive control is the effect of a child's BMI at age 7 on itself plus mean-zero noise, where we vary the proportion of the outcome attributable to noise in order to assess the power of our test.
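One simple way to construct such a positive-control outcome is to mix the
standardised exposure with independent noise in fixed variance shares. The
paper does not specify its exact construction, so the sketch below is an
assumption for illustration only:
\begin{verbatim}
import numpy as np

def positive_control_outcome(bmi, noise_prop, seed=0):
    """Outcome with a fraction noise_prop of its variance due to noise
    (a hypothetical construction, not the paper's exact recipe)."""
    rng = np.random.default_rng(seed)
    bmi_std = (bmi - bmi.mean()) / bmi.std()
    noise = rng.standard_normal(len(bmi))
    return np.sqrt(1 - noise_prop) * bmi_std + np.sqrt(noise_prop) * noise
\end{verbatim}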
Our data consist of 6,222 mother-child duos from the Avon Longitudinal Study of Parents and Children (ALSPAC). ALSPAC is a longitudinal cohort initially comprising pregnant women resident in Avon, UK, with expected dates of delivery from 1 April 1991 to 31 December 1992. The initial sample consisted of 14,676 fetuses, resulting in 14,062 live births and 13,988 children who were alive at 1 year of age. In subsequent years, mothers, children and occasionally partners attended several waves of questionnaires and clinic visits, including genotyping. For a more thorough cohort description, see \cite{Boyd2013} and \cite{Fraser2013}. Please note that the study website contains details of all the data that is available through a fully searchable data dictionary and variable search tool (\url{https://www.bristol.ac.uk/alspac/researchers/our-data/}).
Our instruments are selected from the genome-wide association study (GWAS) of \cite{Vogelezang2020}, which identifies 25 genetic variants for childhood BMI, including 2 novel loci located close to \emph{NEDD4L} and \emph{SLC45A3}. Of the genome-wide significant variants in the discovery sample, we select 11 with a p-value of less than $0.001$ in the replication sample. ALSPAC is included in the discovery sample, so independent replication is important for avoiding spurious associations with the exposure. Two of our instruments, rs571312 and rs76227980, are located close together near \emph{MC4R} and must be tested jointly. We exclude rs62107261 because it is not contained in the 1000 Genomes genetic map file. Around each instrument, we condition on all variants that are more than 500 kilobases away.
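As a sketch of this local conditioning rule (the function and variable
names are our own), the conditioning set around an instrument keeps every
variant outside the 500-kilobase window:
\begin{verbatim}
def conditioning_set(positions, instrument_pos, window=500_000):
    """Indices of variants more than `window` base pairs from the
    instrument; these are conditioned on, while the remaining variants
    vary under the meiosis model."""
    return [j for j, pos in enumerate(positions)
            if abs(pos - instrument_pos) > window]
\end{verbatim}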
\subsection{Data processing} \label{sec:applied-example-data-processing}
We use ALSPAC genotype data generated using the Illumina HumanHap550 chip (for children) and Illumina human660W chip (for mothers) and imputed to the 1000 Genomes reference panel. We remove SNPs with missingness of more than 5\% and minor allele frequency of less than 1\%. Haplotypes are phased using the \texttt{SHAPEIT2} software with the \texttt{duoHMM} flag, which ensures that phased haplotypes are consistent with known pedigrees in the sample. We obtain recombination probabilities from the 1000 Genomes genetic map file on Genome Reference Consortium Human Build 37.
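Genetic map files report cumulative positions in centimorgans; under
Haldane's model, a gap of $d$ Morgans between adjacent loci corresponds to
a recombination probability $r = (1 - e^{-2d})/2$. A small sketch of this
conversion (the function name is ours):
\begin{verbatim}
import numpy as np

def recombination_probs(cm_positions):
    """Adjacent-locus recombination probabilities from cumulative
    genetic-map positions in centimorgans (Haldane's map function)."""
    d_morgans = np.diff(np.asarray(cm_positions, dtype=float)) / 100.0
    return 0.5 * (1.0 - np.exp(-2.0 * d_morgans))

# e.g. loci at 0.00, 0.05 and 1.20 cM
print(recombination_probs([0.00, 0.05, 1.20]))  # approx [0.0005, 0.0114]
\end{verbatim}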
\subsection{Results} \label{sec:applied-example-results}
\Cref{tab:negative-control-results} shows the negative control results and \Cref{tab:positive-control-results} shows the positive control results across all instruments. The last row of each table shows the p-value from Fisher's method aggregated across all independent p-values. The aggregated p-value for the negative control is 0.21, indicating little evidence against the null. The aggregated p-values for the positive control range from 0.03 (when 10\% of the simulated outcome is noise) to 0.16 (when 50\% of the simulated outcome is noise). This is only weak evidence against the null, resulting from insufficiently strong instruments.
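Fisher's method combines $k$ independent p-values via
$T = -2\sum_{i=1}^{k}\log p_i$, which follows a $\chi^2_{2k}$ distribution
under the global null. A minimal sketch; applied to the rounded
negative-control p-values in \Cref{tab:negative-control-results}, it
approximately reproduces the reported 0.21:
\begin{verbatim}
import numpy as np
from scipy import stats

def fishers_method(pvalues):
    """Combined p-value from k independent p-values (Fisher's method)."""
    p = np.asarray(pvalues, dtype=float)
    T = -2.0 * np.sum(np.log(p))
    return stats.chi2.sf(T, df=2 * len(p))

print(fishers_method([0.45, 0.55, 0.39, 0.06, 0.35,
                      0.07, 0.59, 0.48, 0.62, 0.19]))  # approx 0.22
\end{verbatim}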
\begin{table}[!ht]
\caption{Results from the ALSPAC negative control example.}
\begin{center}
\begin{tabular}{lllc}
\toprule
Instrument (rsID) & Chromosome & Proximal gene & P-value \\
\midrule
rs11676272 & 2 & \emph{ADCY3} & 0.45 \\
rs7138803 & 12 & \emph{BCDIN3D} & 0.55 \\
rs939584 & 2 & \emph{TMEM18} & 0.39 \\
rs17817449 & 16 & \emph{FTO} & 0.06 \\
rs12042908 & 1 & \emph{TNNI3K} & 0.35 \\
rs543874 & 1 & \emph{SEC16B} & 0.07 \\
rs56133711 & 11 & \emph{BDNF} & 0.59 \\
rs571312, rs76227980 & 18 & \emph{MC4R} & 0.48 \\
rs12641981 & 4 & \emph{GNPDA2} & 0.62 \\
rs1094647 & 1 & \emph{SLC45A3} & 0.19 \\
\midrule
\multicolumn{3}{l}{Fisher's method} & 0.21 \\
\bottomrule
\end{tabular}
\label{tab:negative-control-results}
\end{center}
\end{table}
\begin{table}[!ht]
\caption{Results from the ALSPAC positive control example}
\begin{center}
\begin{tabular}{lllccc}
\toprule
Instrument (rsID) & Chromosome & Proximal gene & \multicolumn{3}{c}{P-value for noise of} \\
& & & 10\% & 20\% & 50\% \\
\midrule
rs11676272 & 2 & \emph{ADCY3} & 0.01 & 0.01 & 0.01 \\
rs7138803 & 12 & \emph{BCDIN3D} & 0.01 & 0.01 & 0.01 \\
rs939584 & 2 & \emph{TMEM18} & 0.98 & 0.95 & 0.88 \\
rs17817449 & 16 & \emph{FTO} & 0.33 & 0.35 & 0.44 \\
rs12042908 & 1 & \emph{TNNI3K} & 0.77 & 0.79 & 0.85 \\
rs543874 & 1 & \emph{SEC16B} & 0.48 & 0.64 & 0.92 \\
rs56133711 & 11 & \emph{BDNF} & 0.12 & 0.14 & 0.25 \\
rs571312, rs76227980 & 18 & \emph{MC4R} & 0.31 & 0.39 & 0.63 \\
rs12641981 & 4 & \emph{GNPDA2} & 0.49 & 0.56 & 0.76 \\
rs1094647 & 1 & \emph{SLC45A3} & 0.23 & 0.25 & 0.35 \\
\midrule
\multicolumn{3}{l}{Fisher's method} & 0.03 & 0.05 & 0.16 \\
\bottomrule
\end{tabular}
\label{tab:positive-control-results}
\end{center}
\end{table}
We can also compare the results in Tables \ref{tab:negative-control-results} and \ref{tab:positive-control-results} with a typical two-stage least squares (2SLS) regression using the same offspring haplotypes as instruments, unconditional on parental or other offspring haplotypes. For the negative control, the p-value from Fisher's method is 0.02, indicating some evidence against the null. This is expected, given that the backdoor paths remain unblocked. For the positive control, the p-values from Fisher's method range from less than $10^{-20}$ (when 10\% of the simulated outcome is noise) to $4.5 \times 10^{-11}$ (when 50\% of the simulated outcome is noise). This indicates that the unconditional analysis has substantially more power to detect non-zero effects than our ``almost exact'' test. We discuss potential reasons for, and implications of, this low power in \Cref{sec:discussion}.
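For reference, the unconditional comparator can be computed with the
textbook 2SLS estimator; the numpy sketch below is our own illustration
rather than the exact pipeline used here:
\begin{verbatim}
import numpy as np

def two_stage_least_squares(y, d, Z):
    """2SLS estimate of the effect of exposure d on outcome y using
    instrument matrix Z (single endogenous regressor)."""
    n = Z.shape[0]
    Zc = np.column_stack([np.ones(n), Z])
    # first stage: project the exposure onto the instruments
    gamma, *_ = np.linalg.lstsq(Zc, d, rcond=None)
    d_hat = Zc @ gamma
    # second stage: regress the outcome on the fitted exposure
    X = np.column_stack([np.ones(n), d_hat])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]
\end{verbatim}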
\section{Discussion} \label{sec:discussion}
Our test represents an almost exact approach to within-family MR; however, \Cref{sec:applied-example} demonstrates that its power may be limited relative to typical Mendelian randomization analyses in unrelated individuals. Because our test draws exactly the amount of randomness available in a single generation of meioses, this comparison suggests that Mendelian randomization in unrelated individuals derives power from elsewhere, most likely from many meioses across multiple generations. For example, an offspring whose parents are both homozygous for the non-effect allele offers no power in our test, since their genotype cannot vary across meioses. However, if we assume that genotypes are randomly distributed at the population level (as MR in unrelated individuals must), that same offspring can act as a comparator for individuals with the effect allele. \cite{Brumpton2020} corroborate this loss of power for their within-family method, but do not elaborate on the broader implications for how Mendelian randomization is typically justified. It would be valuable for the MR literature to discuss the extent to which Mendelian inheritance across multiple generations drives the power behind existing results.
We must also return to the problem of transmission ratio distortion (TRD) discussed in \Cref{sec:genetic-preliminaries}. TRD violates the assumption of our meiosis model that alleles are (unconditionally) passed from parents to offspring at the Mendelian rate of 50\%. We could represent TRD in our causal model in \Cref{fig:fammr_dag} via an arrow from the gametes $(\bm{Z}^m, \bm{Z}^f)$ to the mating indicator $S$, indicating that the gametes themselves influence survival of their corresponding zygote to term. If our putative instrument $Z_1^m$ is in linkage with any variant exhibiting TRD, it is invalidated as an instrument. Suppose, for example, that $Z_3^m$ exhibits TRD; this opens collider paths via the parental phenotypes $C^m$ and $C^f$, such as $Y(d) \leftarrow C^m \rightarrow \boxed{S} \leftarrow Z_3^m \leftarrow \bm{U}^m \rightarrow Z_1^m$. The intuition is that parental phenotypes related to the likelihood of mating become associated with offspring variants related to the likelihood of offspring survival. Within our causal model, this pathway can be closed by conditioning on $Z_3^m$, provided the unconditioned variants obey the meiosis model. If any unconditioned variants exhibit TRD, the bias will remain, and our meiosis model will incorrectly describe the inheritance patterns of any linked variants, resulting in an erroneous randomization distribution. Expanding resources of parent-offspring data may allow us to test the prevalence of TRD, which will help to assess whether maintaining Mendel's First Law in our meiosis and fertilization model is reasonable.
\section{Acknowledgements} \label{sec:acknowledgements}
The authors thank Kate Tilling, Rachael A Hughes, Jack Bowden, Gibran Hemani, Neil M Davies, Ben Brumpton, and Nianqiao Ju for their helpful feedback. In addition, the authors are extremely grateful to all the families who took part in the ALSPAC cohort, the midwives for their help in recruiting them, and the whole ALSPAC team, which includes interviewers, computer and laboratory technicians, clerical workers, research scientists, volunteers, managers, receptionists and nurses. Ethical approval for our applied example was obtained from the ALSPAC Ethics and Law Committee and the Local Research Ethics Committees. The UK Medical Research Council and Wellcome (grant number 217065/Z/19/Z) and the University of Bristol provide core support for ALSPAC. GWAS data was generated by Sample Logistics and Genotyping Facilities at Wellcome Sanger Institute and LabCorp (Laboratory Corporation of America) using support from 23andMe. This publication is the work of the authors who will serve as guarantors for the contents of this paper. This research was supported in part by the Wellcome Trust (grant number 220067/Z/20/Z) and EPSRC (grant number EP/V049968/1). For the purpose of Open Access, the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
\printbibliography